Julien Thierry <julien.thierry.kdev@gmail.com> <julien.thierry@arm.com>
Kamil Konieczny <k.konieczny@samsung.com> <k.konieczny@partner.samsung.com>
Kay Sievers <kay.sievers@vrfy.org>
+Kees Cook <keescook@chromium.org> <kees.cook@canonical.com>
+Kees Cook <keescook@chromium.org> <keescook@google.com>
+Kees Cook <keescook@chromium.org> <kees@outflux.net>
+Kees Cook <keescook@chromium.org> <kees@ubuntu.com>
Kenneth W Chen <kenneth.w.chen@intel.com>
Konstantin Khlebnikov <koct9i@gmail.com> <khlebnikov@yandex-team.ru>
Konstantin Khlebnikov <koct9i@gmail.com> <k.khlebnikov@samsung.com>
pgmajfault
Number of major page faults incurred
- workingset_refault
- Number of refaults of previously evicted pages
+ workingset_refault_anon
+ Number of refaults of previously evicted anonymous pages.
- workingset_activate
- Number of refaulted pages that were immediately activated
+ workingset_refault_file
+ Number of refaults of previously evicted file pages.
- workingset_restore
- Number of restored pages which have been detected as an active
- workingset before they got reclaimed.
+ workingset_activate_anon
+ Number of refaulted anonymous pages that were immediately
+ activated.
+
+ workingset_activate_file
+ Number of refaulted file pages that were immediately activated.
+
+ workingset_restore_anon
+ Number of restored anonymous pages which have been detected as
+ an active workingset before they got reclaimed.
+
+ workingset_restore_file
+ Number of restored file pages which have been detected as an
+ active workingset before they got reclaimed.
workingset_nodereclaim
Number of times a shadow node has been reclaimed
the value passed in <key_size>.
<key_type>
- Either 'logon' or 'user' kernel key type.
+ Either 'logon', 'user' or 'encrypted' kernel key type.
<key_description>
The kernel keyring key description crypt target should look for
thread because it benefits CFQ to have writes submitted using the
same context.
+no_read_workqueue
+ Bypass dm-crypt internal workqueue and process read requests synchronously.
+
+no_write_workqueue
+ Bypass dm-crypt internal workqueue and process write requests synchronously.
+ This option is automatically enabled for host-managed zoned block devices
+ (e.g. host-managed SMR hard-disks).
+
integrity:<bytes>:<type>
The device requires additional <bytes> metadata per-sector stored
in per-bio integrity structure. This metadata must be provided
already committed. It is thus possible for slow producers to temporarily hold
off submitted records that were reserved later.
-Reservation/commit/consumer protocol is verified by litmus tests in
-Documentation/litmus_tests/bpf-rb/_.
-
One interesting implementation bit that significantly simplifies (and thus
also speeds up) the implementation of both producers and consumers is how the
data area is mapped twice contiguously back-to-back in the virtual memory. This
being available after commit only if the consumer has already caught up right
up to the record being committed. If not, the consumer still has to catch up
and thus will see the new data anyway, without needing an extra poll
notification.
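To make the reserve/commit scheme above concrete, a minimal BPF-side producer
could look like the following sketch (the map size, tracepoint, and event
layout are illustrative, not taken from this document)::

    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct event {
            int pid;
    };

    struct {
            __uint(type, BPF_MAP_TYPE_RINGBUF);
            __uint(max_entries, 256 * 1024); /* data area size */
    } rb SEC(".maps");

    SEC("tracepoint/sched/sched_process_exit")
    int handle_exit(void *ctx)
    {
            /* reserve: returns NULL when the ring buffer is full */
            struct event *e = bpf_ringbuf_reserve(&rb, sizeof(*e), 0);

            if (!e)
                    return 0;
            e->pid = bpf_get_current_pid_tgid() >> 32;
            /* commit: the record becomes visible to the consumer */
            bpf_ringbuf_submit(e, 0);
            return 0;
    }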
-Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbuf.c_) show that
+Benchmarks (see tools/testing/selftests/bpf/benchs/bench_ringbufs.c) show that
this allows achieving very high throughput without having to resort to
tricks like "notify only every Nth sample", which are necessary with perf
buffer. For extreme cases, when BPF program wants more manual control of
compatible:
items:
- const: raspberrypi,bcm2835-firmware
- - const: simple-bus
+ - const: simple-mfd
mboxes:
$ref: '/schemas/types.yaml#/definitions/phandle'
examples:
- |
firmware {
- compatible = "raspberrypi,bcm2835-firmware", "simple-bus";
+ compatible = "raspberrypi,bcm2835-firmware", "simple-mfd";
mboxes = <&mailbox>;
firmware_clocks: clocks {
main_crypto: crypto@4e00000 {
compatible = "ti,j721-sa2ul";
- reg = <0x0 0x4e00000 0x0 0x1200>;
+ reg = <0x4e00000 0x1200>;
power-domains = <&k3_pds 264 TI_SCI_PD_EXCLUSIVE>;
dmas = <&main_udmap 0xc000>, <&main_udmap 0x4000>,
<&main_udmap 0x4001>;
display@fd4a0000 {
compatible = "xlnx,zynqmp-dpsub-1.7";
- reg = <0x0 0xfd4a0000 0x0 0x1000>,
- <0x0 0xfd4aa000 0x0 0x1000>,
- <0x0 0xfd4ab000 0x0 0x1000>,
- <0x0 0xfd4ac000 0x0 0x1000>;
+ reg = <0xfd4a0000 0x1000>,
+ <0xfd4aa000 0x1000>,
+ <0xfd4ab000 0x1000>,
+ <0xfd4ac000 0x1000>;
reg-names = "dp", "blend", "av_buf", "aud";
interrupts = <0 119 4>;
interrupt-parent = <&gic>;
dma: dma-controller@fd4c0000 {
compatible = "xlnx,zynqmp-dpdma";
- reg = <0x0 0xfd4c0000 0x0 0x1000>;
+ reg = <0xfd4c0000 0x1000>;
interrupts = <GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>;
interrupt-parent = <&gic>;
clocks = <&dpdma_clk>;
const: 0
patternProperties:
- "^multi-led[0-9a-f]$":
+ "^multi-led@[0-9a-b]$":
type: object
allOf:
- $ref: leds-class-multicolor.yaml#
+++ /dev/null
-* Sony 1/2.5-Inch 8.51Mp CMOS Digital Image Sensor
-
-The Sony imx274 is a 1/2.5-inch CMOS active pixel digital image sensor with
-an active array size of 3864H x 2202V. It is programmable through I2C
-interface. The I2C address is fixed to 0x1a as per sensor data sheet.
-Image data is sent through MIPI CSI-2, which is configured as 4 lanes
-at 1440 Mbps.
-
-
-Required Properties:
-- compatible: value should be "sony,imx274" for imx274 sensor
-- reg: I2C bus address of the device
-
-Optional Properties:
-- reset-gpios: Sensor reset GPIO
-- clocks: Reference to the input clock.
-- clock-names: Should be "inck".
-- VANA-supply: Sensor 2.8v analog supply.
-- VDIG-supply: Sensor 1.8v digital core supply.
-- VDDL-supply: Sensor digital IO 1.2v supply.
-
-The imx274 device node should contain one 'port' child node with
-an 'endpoint' subnode. For further reading on port node refer to
-Documentation/devicetree/bindings/media/video-interfaces.txt.
-
-Example:
- sensor@1a {
- compatible = "sony,imx274";
- reg = <0x1a>;
- #address-cells = <1>;
- #size-cells = <0>;
- reset-gpios = <&gpio_sensor 0 0>;
- port {
- sensor_out: endpoint {
- remote-endpoint = <&csiss_in>;
- };
- };
- };
--- /dev/null
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/media/i2c/sony,imx274.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Sony 1/2.5-Inch 8.51MP CMOS Digital Image Sensor
+
+maintainers:
+ - Leon Luo <leonl@leopardimaging.com>
+
+description: |
+ The Sony IMX274 is a 1/2.5-inch CMOS active pixel digital image sensor with an
+ active array size of 3864H x 2202V. It is programmable through the I2C interface.
+ Image data is sent through MIPI CSI-2, which is configured as 4 lanes at 1440
+ Mbps.
+
+properties:
+ compatible:
+ const: sony,imx274
+
+ reg:
+ const: 0x1a
+
+ reset-gpios:
+ maxItems: 1
+
+ clocks:
+ maxItems: 1
+
+ clock-names:
+ const: inck
+
+ vana-supply:
+ description: Sensor 2.8 V analog supply.
+ maxItems: 1
+
+ vdig-supply:
+ description: Sensor 1.8 V digital core supply.
+ maxItems: 1
+
+ vddl-supply:
+ description: Sensor digital IO 1.2 V supply.
+ maxItems: 1
+
+ port:
+ type: object
+ description: Output video port. See ../video-interfaces.txt.
+
+required:
+ - compatible
+ - reg
+ - port
+
+additionalProperties: false
+
+examples:
+ - |
+ i2c0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ imx274: camera-sensor@1a {
+ compatible = "sony,imx274";
+ reg = <0x1a>;
+ reset-gpios = <&gpio_sensor 0 0>;
+
+ port {
+ sensor_out: endpoint {
+ remote-endpoint = <&csiss_in>;
+ };
+ };
+ };
+ };
+
+...
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
- | powerpc: | ok |
+ | powerpc: | TODO |
| riscv: | ok |
| s390: | ok |
| sh: | TODO |
ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- make CC=clang
``CROSS_COMPILE`` is not used to prefix the Clang compiler binary; instead,
-``CROSS_COMPILE`` is used to set a command line flag: ``--target <triple>``. For
+``CROSS_COMPILE`` is used to set a command line flag: ``--target=<triple>``. For
example: ::
- clang --target aarch64-linux-gnu foo.c
+ clang --target=aarch64-linux-gnu foo.c
LLVM Utilities
--------------
``ETHTOOL_MSG_TSINFO_GET`` get timestamping info
``ETHTOOL_MSG_CABLE_TEST_ACT`` action start cable test
``ETHTOOL_MSG_CABLE_TEST_TDR_ACT`` action start raw TDR cable test
+ ``ETHTOOL_MSG_TUNNEL_INFO_GET`` get tunnel offload info
===================================== ================================
Kernel to userspace:
``ETHTOOL_MSG_TSINFO_GET_REPLY`` timestamping info
``ETHTOOL_MSG_CABLE_TEST_NTF`` Cable test results
``ETHTOOL_MSG_CABLE_TEST_TDR_NTF`` Cable test TDR results
+ ``ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY`` tunnel offload info
===================================== =================================
``GET`` requests are sent by userspace applications to retrieve device
``ETHTOOL_SFECPARAM`` n/a
n/a ``ETHTOOL_MSG_CABLE_TEST_ACT``
n/a ``ETHTOOL_MSG_CABLE_TEST_TDR_ACT``
+ n/a ``ETHTOOL_MSG_TUNNEL_INFO_GET``
=================================== =====================================
:stub-columns: 0
:widths: 3 1 4
- * .. _`V4L2-FLAG-MEMORY-NON-CONSISTENT`:
-
- - ``V4L2_FLAG_MEMORY_NON_CONSISTENT``
- - 0x00000001
- - A buffer is allocated either in consistent (it will be automatically
- coherent between the CPU and the bus) or non-consistent memory. The
- latter can provide performance gains, for instance the CPU cache
- sync/flush operations can be avoided if the buffer is accessed by the
- corresponding device only and the CPU does not read/write to/from that
- buffer. However, this requires extra care from the driver -- it must
- guarantee memory consistency by issuing a cache flush/sync when
- consistency is needed. If this flag is set V4L2 will attempt to
- allocate the buffer in non-consistent memory. The flag takes effect
- only if the buffer is used for :ref:`memory mapping <mmap>` I/O and the
- queue reports the :ref:`V4L2_BUF_CAP_SUPPORTS_MMAP_CACHE_HINTS
- <V4L2-BUF-CAP-SUPPORTS-MMAP-CACHE-HINTS>` capability.
-
.. c:type:: v4l2_memory
enum v4l2_memory
If you want to just query the capabilities without making any
other changes, then set ``count`` to 0, ``memory`` to
``V4L2_MEMORY_MMAP`` and ``format.type`` to the buffer type.
- * - __u32
- - ``flags``
- - Specifies additional buffer management attributes.
- See :ref:`memory-flags`.
* - __u32
- - ``reserved``\ [6]
+ - ``reserved``\ [7]
- A place holder for future extensions. Drivers and applications
must set the array to zero.
``V4L2_MEMORY_MMAP`` and ``type`` set to the buffer type. This will
free any previously allocated buffers, so this is typically something
that will be done at the start of the application.
- * - union {
- - (anonymous)
- * - __u32
- - ``flags``
- - Specifies additional buffer management attributes.
- See :ref:`memory-flags`.
* - __u32
- ``reserved``\ [1]
- - Kept for backwards compatibility. Use ``flags`` instead.
- * - }
- -
+ - A place holder for future extensions. Drivers and applications
+ must set the array to zero.
.. tabularcolumns:: |p{6.1cm}|p{2.2cm}|p{8.7cm}|
- This capability is set by the driver to indicate that the queue supports
cache and memory management hints. However, it's only valid when the
queue is used for :ref:`memory mapping <mmap>` streaming I/O. See
- :ref:`V4L2_FLAG_MEMORY_NON_CONSISTENT <V4L2-FLAG-MEMORY-NON-CONSISTENT>`,
:ref:`V4L2_BUF_FLAG_NO_CACHE_INVALIDATE <V4L2-BUF-FLAG-NO-CACHE-INVALIDATE>` and
:ref:`V4L2_BUF_FLAG_NO_CACHE_CLEAN <V4L2-BUF-FLAG-NO-CACHE-CLEAN>`.
is supported, then the other should be as well, and vice versa. For arm64
see Documentation/virt/kvm/devices/vcpu.rst "KVM_ARM_VCPU_PVTIME_CTRL".
For x86 see Documentation/virt/kvm/msr.rst "MSR_KVM_STEAL_TIME".
+
+8.25 KVM_CAP_S390_DIAG318
+-------------------------
+
+:Architectures: s390
+
+This capability enables a guest to set information about its control program
+(i.e. guest kernel type and version). The information is helpful during
+system/firmware service events, providing additional data about the guest
+environments running on the machine.
+
+The information is associated with the DIAGNOSE 0x318 instruction, which sets
+an 8-byte value consisting of a one-byte Control Program Name Code (CPNC) and
+a 7-byte Control Program Version Code (CPVC). The CPNC determines what
+environment the control program is running in (e.g. Linux, z/VM...), and the
+CPVC is used for information specific to the OS (e.g. Linux version, Linux
+distribution...).
+
+If this capability is available, then the CPNC and CPVC can be synchronized
+between KVM and userspace via the sync regs mechanism (KVM_SYNC_DIAG318).
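+
+Given the CPNC-then-CPVC layout above, a helper might split the 8-byte value
+like this (a sketch assuming the CPNC occupies the most significant byte; the
+helper names are illustrative, not part of the API)::
+
+    static inline u8 diag318_cpnc(u64 info)
+    {
+            return info >> 56;                      /* one-byte name code */
+    }
+
+    static inline u64 diag318_cpvc(u64 info)
+    {
+            return info & 0x00ffffffffffffffUL;     /* 7-byte version code */
+    }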
F: fs/configfs/
F: include/linux/configfs.h
-CONNECTOR
-M: Evgeniy Polyakov <zbr@ioremap.net>
-L: netdev@vger.kernel.org
-S: Maintained
-F: drivers/connector/
-
CONSOLE SUBSYSTEM
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
S: Supported
F: drivers/edac/aspeed_edac.c
EDAC-BLUEFIELD
-M: Shravan Kumar Ramani <sramani@nvidia.com>
+M: Shravan Kumar Ramani <shravankr@nvidia.com>
S: Supported
F: drivers/edac/bluefield_edac.c
F: drivers/pci/hotplug/rpaphp*
IBM Power SRIOV Virtual NIC Device Driver
-M: Thomas Falcon <tlfalcon@linux.ibm.com>
-M: John Allen <jallen@linux.ibm.com>
+M: Dany Madden <drt@linux.ibm.com>
+M: Lijun Pan <ljp@linux.ibm.com>
+M: Sukadev Bhattiprolu <sukadev@linux.ibm.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/ibm/ibmvnic.*
F: arch/powerpc/platforms/powernv/vas*
IBM Power Virtual Ethernet Device Driver
-M: Thomas Falcon <tlfalcon@linux.ibm.com>
+M: Cristobal Forno <cforno12@linux.ibm.com>
L: netdev@vger.kernel.org
S: Supported
F: drivers/net/ethernet/ibm/ibmveth.*
ISCSI EXTENSIONS FOR RDMA (ISER) INITIATOR
M: Sagi Grimberg <sagi@grimberg.me>
-M: Max Gurtovoy <maxg@nvidia.com>
+M: Max Gurtovoy <mgurtovoy@nvidia.com>
L: linux-rdma@vger.kernel.org
S: Supported
W: http://www.openfabrics.org
MEDIATEK SWITCH DRIVER
M: Sean Wang <sean.wang@mediatek.com>
+M: Landen Chao <Landen.Chao@mediatek.com>
L: netdev@vger.kernel.org
S: Maintained
F: drivers/net/dsa/mt7530.*
T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git
T: git git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git
F: Documentation/devicetree/bindings/net/
+F: drivers/connector/
F: drivers/net/
F: include/linux/etherdevice.h
F: include/linux/fcdevice.h
L: linux-media@vger.kernel.org
S: Maintained
T: git git://linuxtv.org/media_tree.git
-F: Documentation/devicetree/bindings/media/i2c/imx274.txt
+F: Documentation/devicetree/bindings/media/i2c/sony,imx274.yaml
F: drivers/media/i2c/imx274.c
SONY IMX290 SENSOR DRIVER
VERSION = 5
PATCHLEVEL = 9
SUBLEVEL = 0
-EXTRAVERSION = -rc5
+EXTRAVERSION = -rc7
NAME = Kleptomaniac Octopus
# *DOCUMENTATION*
switch0: ksz8563@0 {
compatible = "microchip,ksz8563";
reg = <0>;
- phy-mode = "mii";
reset-gpios = <&pioA PIN_PD4 GPIO_ACTIVE_LOW>;
spi-max-frequency = <500000>;
reg = <2>;
label = "cpu";
ethernet = <&macb0>;
+ phy-mode = "mii";
fixed-link {
speed = <100>;
full-duplex;
soc {
firmware: firmware {
- compatible = "raspberrypi,bcm2835-firmware", "simple-bus";
+ compatible = "raspberrypi,bcm2835-firmware", "simple-mfd";
#address-cells = <1>;
#size-cells = <1>;
return (kvm_vcpu_get_esr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
}
-static __always_inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
+static __always_inline bool kvm_vcpu_abt_iss1tw(const struct kvm_vcpu *vcpu)
{
return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_S1PTW);
}
+/* Always check for S1PTW *before* using this. */
static __always_inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
{
- return !!(kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR) ||
- kvm_vcpu_dabt_iss1tw(vcpu); /* AF/DBM update */
+ return kvm_vcpu_get_esr(vcpu) & ESR_ELx_WNR;
}
static inline bool kvm_vcpu_dabt_is_cm(const struct kvm_vcpu *vcpu)
return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW;
}
+static inline bool kvm_vcpu_trap_is_exec_fault(const struct kvm_vcpu *vcpu)
+{
+ return kvm_vcpu_trap_is_iabt(vcpu) && !kvm_vcpu_abt_iss1tw(vcpu);
+}
+
static __always_inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu)
{
return kvm_vcpu_get_esr(vcpu) & ESR_ELx_FSC;
static inline bool kvm_is_write_fault(struct kvm_vcpu *vcpu)
{
+ if (kvm_vcpu_abt_iss1tw(vcpu))
+ return true;
+
if (kvm_vcpu_trap_is_iabt(vcpu))
return false;
.desc = "ARM erratum 1418040",
.capability = ARM64_WORKAROUND_1418040,
ERRATA_MIDR_RANGE_LIST(erratum_1418040_list),
- .type = (ARM64_CPUCAP_SCOPE_LOCAL_CPU |
- ARM64_CPUCAP_PERMITTED_FOR_LATE_CPU),
+ /*
+ * We need to allow affected CPUs to come in late, but
+ * also need the non-affected CPUs to be able to come
+ * in at any point in time. Wonderful.
+ */
+ .type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
},
#endif
#ifdef CONFIG_ARM64_WORKAROUND_SPECULATIVE_AT
struct pv_time_stolen_time_region *reg;
reg = per_cpu_ptr(&stolen_time_region, cpu);
- if (!reg->kaddr) {
- pr_warn_once("stolen time enabled but not configured for cpu %d\n",
- cpu);
+
+ /*
+ * paravirt_steal_clock() may be called before the CPU
+ * online notification callback runs. Until the callback
+ * has run we just return zero.
+ */
+ if (!reg->kaddr)
return 0;
- }
return le64_to_cpu(READ_ONCE(reg->kaddr->stolen_time));
}
-static int stolen_time_dying_cpu(unsigned int cpu)
+static int stolen_time_cpu_down_prepare(unsigned int cpu)
{
struct pv_time_stolen_time_region *reg;
return 0;
}
-static int init_stolen_time_cpu(unsigned int cpu)
+static int stolen_time_cpu_online(unsigned int cpu)
{
struct pv_time_stolen_time_region *reg;
struct arm_smccc_res res;
return 0;
}
-static int pv_time_init_stolen_time(void)
+static int __init pv_time_init_stolen_time(void)
{
int ret;
- ret = cpuhp_setup_state(CPUHP_AP_ARM_KVMPV_STARTING,
- "hypervisor/arm/pvtime:starting",
- init_stolen_time_cpu, stolen_time_dying_cpu);
+ ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
+ "hypervisor/arm/pvtime:online",
+ stolen_time_cpu_online,
+ stolen_time_cpu_down_prepare);
if (ret < 0)
return ret;
return 0;
}
-static bool has_pv_steal_clock(void)
+static bool __init has_pv_steal_clock(void)
{
struct arm_smccc_res res;
kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT &&
kvm_vcpu_dabt_isvalid(vcpu) &&
!kvm_vcpu_abt_issea(vcpu) &&
- !kvm_vcpu_dabt_iss1tw(vcpu);
+ !kvm_vcpu_abt_iss1tw(vcpu);
if (valid) {
int ret = __vgic_v2_perform_cpuif_access(vcpu);
struct kvm_s2_mmu *mmu = vcpu->arch.hw_mmu;
write_fault = kvm_is_write_fault(vcpu);
- exec_fault = kvm_vcpu_trap_is_iabt(vcpu);
+ exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
VM_BUG_ON(write_fault && exec_fault);
if (fault_status == FSC_PERM && !write_fault && !exec_fault) {
goto out;
}
- if (kvm_vcpu_dabt_iss1tw(vcpu)) {
+ if (kvm_vcpu_abt_iss1tw(vcpu)) {
kvm_inject_dabt(vcpu, kvm_vcpu_get_hfar(vcpu));
ret = 1;
goto out_unlock;
}
}
-static inline int bpf2a64_offset(int bpf_to, int bpf_from,
+static inline int bpf2a64_offset(int bpf_insn, int off,
const struct jit_ctx *ctx)
{
- int to = ctx->offset[bpf_to];
- /* -1 to account for the Branch instruction */
- int from = ctx->offset[bpf_from] - 1;
-
- return to - from;
+ /* BPF JMP offset is relative to the next instruction */
+ bpf_insn++;
+ /*
+ * Whereas arm64 branch instructions encode the offset
+ * from the branch itself, so we must subtract 1 from the
+ * instruction offset.
+ */
+ return ctx->offset[bpf_insn + off] - (ctx->offset[bpf_insn] - 1);
}
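/*
 * A worked example with hypothetical offsets: take ctx->offset[] =
 * {1, 3, 4, 6} and a branch at BPF insn 0 with off = +1, i.e. targeting
 * BPF insn 2 (BPF offsets are relative to the next instruction). After
 * bpf_insn++ the target is ctx->offset[1 + 1] = 4, the branch itself is
 * the last arm64 insn of BPF insn 0 at ctx->offset[1] - 1 = 2, so the
 * emitted jump offset is 4 - 2 = 2 arm64 instructions.
 */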
static void jit_fill_hole(void *area, unsigned int size)
/* JUMP off */
case BPF_JMP | BPF_JA:
- jmp_offset = bpf2a64_offset(i + off, i, ctx);
+ jmp_offset = bpf2a64_offset(i, off, ctx);
check_imm26(jmp_offset);
emit(A64_B(jmp_offset), ctx);
break;
case BPF_JMP32 | BPF_JSLE | BPF_X:
emit(A64_CMP(is64, dst, src), ctx);
emit_cond_jmp:
- jmp_offset = bpf2a64_offset(i + off, i, ctx);
+ jmp_offset = bpf2a64_offset(i, off, ctx);
check_imm19(jmp_offset);
switch (BPF_OP(code)) {
case BPF_JEQ:
const struct bpf_prog *prog = ctx->prog;
int i;
+ /*
+ * - offset[0] - offset of the end of prologue,
+ * start of the 1st instruction.
+ * - offset[1] - offset of the end of 1st instruction,
+ * start of the 2nd instruction
+ * [....]
+ * - offset[3] - offset of the end of 3rd instruction,
+ * start of 4th instruction
+ */
for (i = 0; i < prog->len; i++) {
const struct bpf_insn *insn = &prog->insnsi[i];
int ret;
+ if (ctx->image == NULL)
+ ctx->offset[i] = ctx->idx;
ret = build_insn(insn, ctx, extra_pass);
if (ret > 0) {
i++;
ctx->offset[i] = ctx->idx;
continue;
}
- if (ctx->image == NULL)
- ctx->offset[i] = ctx->idx;
if (ret)
return ret;
}
+ /*
+ * offset is allocated with prog->len + 1 so fill in
+ * the last element with the offset after the last
+ * instruction (end of program)
+ */
+ if (ctx->image == NULL)
+ ctx->offset[i] = ctx->idx;
return 0;
}
memset(&ctx, 0, sizeof(ctx));
ctx.prog = prog;
- ctx.offset = kcalloc(prog->len, sizeof(int), GFP_KERNEL);
+ ctx.offset = kcalloc(prog->len + 1, sizeof(int), GFP_KERNEL);
if (ctx.offset == NULL) {
prog = orig_prog;
goto out_off;
prog->jited_len = prog_size;
if (!prog->is_func || extra_pass) {
- bpf_prog_fill_jited_linfo(prog, ctx.offset);
+ bpf_prog_fill_jited_linfo(prog, ctx.offset + 1);
out_off:
kfree(ctx.offset);
kfree(jit_data);
buf[2] |= ACPI_PDC_EST_CAPABILITY_SMP;
}
-#define acpi_unlazy_tlb(x)
-
#ifdef CONFIG_ACPI_NUMA
extern cpumask_t early_cpu_possible_map;
#define for_each_possible_early_cpu(cpu) \
if (map_start < map_end)
memmap_init_zone((unsigned long)(map_end - map_start),
args->nid, args->zone, page_to_pfn(map_start),
- MEMMAP_EARLY, NULL);
+ MEMINIT_EARLY, NULL);
return 0;
}
unsigned long start_pfn)
{
if (!vmem_map) {
- memmap_init_zone(size, nid, zone, start_pfn, MEMMAP_EARLY,
- NULL);
+ memmap_init_zone(size, nid, zone, start_pfn,
+ MEMINIT_EARLY, NULL);
} else {
struct page *start;
struct memmap_init_callback_data args;
select I8253
select I8259
select ISA
+ select MIPS_L1_CACHE_SHIFT_6
select SWAP_IO_SPACE if CPU_BIG_ENDIAN
select SYS_HAS_CPU_R4X00
select SYS_HAS_CPU_R5000
{
struct cpuinfo_mips *c = &current_cpu_data;
- if ((c->cputype == CPU_74K) || (c->cputype == CPU_1074K)) {
+ if (c->cputype == CPU_74K) {
pr_info("Using bcma bus\n");
#ifdef CONFIG_BCM47XX_BCMA
bcm47xx_bus_type = BCM47XX_BUS_TYPE_BCMA;
case CPU_34K:
case CPU_1004K:
case CPU_74K:
+ case CPU_1074K:
case CPU_M14KC:
case CPU_M14KEC:
case CPU_INTERAPTIV:
endif
endif
+# Some -march= flags enable MMI instructions, and GCC complains about that
+# support being enabled alongside -msoft-float. Thus explicitly disable MMI.
+cflags-y += $(call cc-option,-mno-loongson-mmi)
+
#
# Loongson Machines' Support
#
if (res)
goto fault;
- set_fpr64(current->thread.fpu.fpr,
- insn.loongson3_lswc2_format.rt, value);
- set_fpr64(current->thread.fpu.fpr,
- insn.loongson3_lswc2_format.rq, value_next);
+ set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rt], 0, value);
+ set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rq], 0, value_next);
compute_return_epc(regs);
own_fpu(1);
}
goto sigbus;
lose_fpu(1);
- value_next = get_fpr64(current->thread.fpu.fpr,
- insn.loongson3_lswc2_format.rq);
+ value_next = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rq], 0);
StoreDW(addr + 8, value_next, res);
if (res)
goto fault;
- value = get_fpr64(current->thread.fpu.fpr,
- insn.loongson3_lswc2_format.rt);
+ value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lswc2_format.rt], 0);
StoreDW(addr, value, res);
if (res)
if (res)
goto fault;
- set_fpr64(current->thread.fpu.fpr,
- insn.loongson3_lsdc2_format.rt, value);
+ set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0, value);
compute_return_epc(regs);
own_fpu(1);
if (res)
goto fault;
- set_fpr64(current->thread.fpu.fpr,
- insn.loongson3_lsdc2_format.rt, value);
+ set_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0, value);
compute_return_epc(regs);
own_fpu(1);
break;
goto sigbus;
lose_fpu(1);
- value = get_fpr64(current->thread.fpu.fpr,
- insn.loongson3_lsdc2_format.rt);
+ value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0);
StoreW(addr, value, res);
if (res)
goto sigbus;
lose_fpu(1);
- value = get_fpr64(current->thread.fpu.fpr,
- insn.loongson3_lsdc2_format.rt);
+ value = get_fpr64(&current->thread.fpu.fpr[insn.loongson3_lsdc2_format.rt], 0);
StoreDW(addr, value, res);
if (res)
},
};
-static u32 a20r_ack_hwint(void)
+/*
+ * Trigger chipset to update CPU's CAUSE IP field
+ */
+static u32 a20r_update_cause_ip(void)
{
u32 status = read_c0_status();
int irq;
clear_c0_status(IE_IRQ0);
- status = a20r_ack_hwint();
+ status = a20r_update_cause_ip();
cause = read_c0_cause();
irq = ffs(((cause & status) >> 8) & 0xf8);
if (likely(irq > 0))
do_IRQ(SNI_A20R_IRQ_BASE + irq - 1);
+
+ a20r_update_cause_ip();
set_c0_status(IE_IRQ0);
}
#
select ARCH_32BIT_OFF_T if PPC32
select ARCH_HAS_DEBUG_VIRTUAL
- select ARCH_HAS_DEBUG_VM_PGTABLE
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAS_ELF_RANDOMIZE
select ARCH_HAS_FORTIFY_SOURCE
CONFIG_FB_NVIDIA_I2C=y
CONFIG_FB_RADEON=y
# CONFIG_LCD_CLASS_DEVICE is not set
-CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_LOGO=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_FB_SM501=m
CONFIG_FB_IBM_GXT4500=y
CONFIG_LCD_PLATFORM=m
-CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE_ROTATION=y
CONFIG_LOGO=y
extern void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
phys_addr_t first_memblock_size);
-extern void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base,
- phys_addr_t first_memblock_size);
static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
phys_addr_t first_memblock_size)
{
- if (early_radix_enabled())
- return radix__setup_initial_memory_limit(first_memblock_base,
- first_memblock_size);
+ /*
+ * Hash has more strict restrictions. At this point we don't
+ * know which translations we will pick. Hence go with hash
+ * restrictions.
+ */
return hash__setup_initial_memory_limit(first_memblock_base,
first_memblock_size);
}
if (!tbl)
return 0;
- mask = 1ULL < (fls_long(tbl->it_offset + tbl->it_size) - 1);
+ mask = 1ULL << (fls_long(tbl->it_offset + tbl->it_size) +
+ tbl->it_page_shift - 1);
mask += mask - 1;
return mask;
# actual build commands
quiet_cmd_vdso32ld = VDSO32L $@
- cmd_vdso32ld = $(VDSOCC) $(c_flags) $(CC32FLAGS) -o $@ $(call cc-ldoption, -Wl$(comma)--orphan-handling=warn) -Wl,-T$(filter %.lds,$^) $(filter %.o,$^)
+ cmd_vdso32ld = $(VDSOCC) $(c_flags) $(CC32FLAGS) -o $@ -Wl,-T$(filter %.lds,$^) $(filter %.o,$^)
quiet_cmd_vdso32as = VDSO32A $@
cmd_vdso32as = $(VDSOCC) $(a_flags) $(CC32FLAGS) -c -o $@ $<
*(.note.GNU-stack)
*(.data .data.* .gnu.linkonce.d.* .sdata*)
*(.bss .sbss .dynbss .dynsbss)
- *(.glink .iplt .plt .rela*)
}
}
# actual build commands
quiet_cmd_vdso64ld = VDSO64L $@
- cmd_vdso64ld = $(CC) $(c_flags) -o $@ -Wl,-T$(filter %.lds,$^) $(filter %.o,$^) $(call cc-ldoption, -Wl$(comma)--orphan-handling=warn)
+ cmd_vdso64ld = $(CC) $(c_flags) -o $@ -Wl,-T$(filter %.lds,$^) $(filter %.o,$^)
# install commands for the unstripped file
quiet_cmd_vdso_install = INSTALL $@
. = ALIGN(16);
.text : {
*(.text .stub .text.* .gnu.linkonce.t.* __ftr_alt_*)
- *(.sfpr)
+ *(.sfpr .glink)
} :text
PROVIDE(__etext = .);
PROVIDE(_etext = .);
*(.branch_lt)
*(.data .data.* .gnu.linkonce.d.* .sdata*)
*(.bss .sbss .dynbss .dynsbss)
- *(.glink .iplt .plt .rela*)
}
}
}
}
-void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base,
- phys_addr_t first_memblock_size)
-{
- /*
- * We don't currently support the first MEMBLOCK not mapping 0
- * physical on those processors
- */
- BUG_ON(first_memblock_base != 0);
-
- /*
- * Radix mode is not limited by RMA / VRMA addressing.
- */
- ppc64_rma_size = ULONG_MAX;
-}
-
#ifdef CONFIG_MEMORY_HOTPLUG
static void free_pte_table(pte_t *pte_start, pmd_t *pmd)
{
if (!(mfmsr() & MSR_HV))
early_check_vec5();
- if (early_radix_enabled())
+ if (early_radix_enabled()) {
radix__early_init_devtree();
- else
+ /*
+ * We have finalized the translation we are going to use by now.
+ * Radix mode is not limited by RMA / VRMA addressing.
+ * Hence don't limit memblock allocations.
+ */
+ ppc64_rma_size = ULONG_MAX;
+ memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
+ } else
hash__early_init_devtree();
}
#endif /* CONFIG_PPC_BOOK3S_64 */
kfree(stats);
return rc ? rc : seq_buf_used(&s);
}
-DEVICE_ATTR_RO(perf_stats);
+DEVICE_ATTR_ADMIN_RO(perf_stats);
static ssize_t flags_show(struct device *dev,
struct device_attribute *attr, char *buf)
select ARCH_WANT_FRAME_POINTERS
select ARCH_WANT_HUGE_PMD_SHARE if 64BIT
select CLONE_BACKWARDS
+ select CLINT_TIMER if !MMU
select COMMON_CLK
select EDAC_SUPPORT
select GENERIC_ARCH_TOPOLOGY if SMP
#clock-cells = <1>;
};
- clint0: interrupt-controller@2000000 {
+ clint0: clint@2000000 {
+ #interrupt-cells = <1>;
compatible = "riscv,clint0";
reg = <0x2000000 0xC000>;
- interrupts-extended = <&cpu0_intc 3>, <&cpu1_intc 3>;
+ interrupts-extended = <&cpu0_intc 3 &cpu0_intc 7
+ &cpu1_intc 3 &cpu1_intc 7>;
clocks = <&sysctl K210_CLK_ACLK>;
};
--- /dev/null
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 Google, Inc
+ */
+
+#ifndef _ASM_RISCV_CLINT_H
+#define _ASM_RISCV_CLINT_H
+
+#include <linux/types.h>
+#include <asm/mmio.h>
+
+#ifdef CONFIG_RISCV_M_MODE
+/*
+ * This lives in the CLINT driver, but is accessed directly by timex.h to avoid
+ * any overhead when accessing the MMIO timer.
+ *
+ * The ISA defines mtime as a 64-bit memory-mapped register that increments at
+ * a constant frequency, but it doesn't define some other constraints we depend
+ * on (most notably ordering constraints, but also some simpler stuff like the
+ * memory layout). Thus, this is called "clint_time_val" instead of something
+ * like "riscv_mtime", to signify that these non-ISA assumptions must hold.
+ */
+extern u64 __iomem *clint_time_val;
+#endif
+
+#endif
* Let auipc+jalr be the basic *mcount unit*, so we make it 8 bytes here.
*/
#define MCOUNT_INSN_SIZE 8
+
+#ifndef __ASSEMBLY__
+struct dyn_ftrace;
+int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec);
+#define ftrace_init_nop ftrace_init_nop
+#endif
+
#endif
#endif /* _ASM_RISCV_FTRACE_H */
typedef unsigned long cycles_t;
+#ifdef CONFIG_RISCV_M_MODE
+
+#include <asm/clint.h>
+
+#ifdef CONFIG_64BIT
+static inline cycles_t get_cycles(void)
+{
+ return readq_relaxed(clint_time_val);
+}
+#else /* !CONFIG_64BIT */
+static inline u32 get_cycles(void)
+{
+ return readl_relaxed(((u32 *)clint_time_val));
+}
+#define get_cycles get_cycles
+
+static inline u32 get_cycles_hi(void)
+{
+ return readl_relaxed(((u32 *)clint_time_val) + 1);
+}
+#define get_cycles_hi get_cycles_hi
+#endif /* CONFIG_64BIT */
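+/*
+ * A sketch, not part of this header, of how a 32-bit caller is expected
+ * to combine the two halves (the usual pattern for reading a 64-bit
+ * MMIO counter: re-read the high word until it is stable, so a carry
+ * between the two reads cannot yield a torn value):
+ *
+ *	static u64 get_cycles64(void)
+ *	{
+ *		u32 hi, lo;
+ *
+ *		do {
+ *			hi = get_cycles_hi();
+ *			lo = get_cycles();
+ *		} while (hi != get_cycles_hi());
+ *
+ *		return ((u64)hi << 32) | lo;
+ *	}
+ */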
+
+#else /* CONFIG_RISCV_M_MODE */
+
static inline cycles_t get_cycles(void)
{
return csr_read(CSR_TIME);
}
#endif /* CONFIG_64BIT */
+#endif /* !CONFIG_RISCV_M_MODE */
+
#define ARCH_HAS_READ_CURRENT_TIMER
static inline int read_current_timer(unsigned long *timer_val)
{
return __ftrace_modify_call(rec->ip, addr, false);
}
+
+/*
+ * This is called early on, and isn't wrapped by
+ * ftrace_arch_code_modify_{prepare,post_process}() and therefore doesn't
+ * hold the text_mutex, which triggers a lockdep failure. SMP isn't running
+ * so we could
+ * just directly poke the text, but it's simpler to just take the lock
+ * ourselves.
+ */
+int ftrace_init_nop(struct module *mod, struct dyn_ftrace *rec)
+{
+ int out;
+
+ ftrace_arch_code_modify_prepare();
+ out = ftrace_make_nop(mod, rec, MCOUNT_ADDR);
+ ftrace_arch_code_modify_post_process();
+
+ return out;
+}
+
int ftrace_update_ftrace_func(ftrace_func_t func)
{
int ret = __ftrace_modify_call((unsigned long)&ftrace_call,
ptep = &fixmap_pte[pte_index(addr)];
- if (pgprot_val(prot)) {
+ if (pgprot_val(prot))
set_pte(ptep, pfn_pte(phys >> PAGE_SHIFT, prot));
- } else {
+ else
pte_clear(&init_mm, addr, ptep);
- local_flush_tlb_page(addr);
- }
+ local_flush_tlb_page(addr);
}
static pte_t *__init get_pte_virt(phys_addr_t pa)
#define pgd_offset(mm, address) pgd_offset_raw(READ_ONCE((mm)->pgd), address)
-static inline p4d_t *p4d_offset(pgd_t *pgd, unsigned long address)
+static inline p4d_t *p4d_offset_lockless(pgd_t *pgdp, pgd_t pgd, unsigned long address)
{
- if ((pgd_val(*pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1)
- return (p4d_t *) pgd_deref(*pgd) + p4d_index(address);
- return (p4d_t *) pgd;
+ if ((pgd_val(pgd) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R1)
+ return (p4d_t *) pgd_deref(pgd) + p4d_index(address);
+ return (p4d_t *) pgdp;
}
+#define p4d_offset_lockless p4d_offset_lockless
-static inline pud_t *pud_offset(p4d_t *p4d, unsigned long address)
+static inline p4d_t *p4d_offset(pgd_t *pgdp, unsigned long address)
{
- if ((p4d_val(*p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2)
- return (pud_t *) p4d_deref(*p4d) + pud_index(address);
- return (pud_t *) p4d;
+ return p4d_offset_lockless(pgdp, *pgdp, address);
+}
+
+static inline pud_t *pud_offset_lockless(p4d_t *p4dp, p4d_t p4d, unsigned long address)
+{
+ if ((p4d_val(p4d) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R2)
+ return (pud_t *) p4d_deref(p4d) + pud_index(address);
+ return (pud_t *) p4dp;
+}
+#define pud_offset_lockless pud_offset_lockless
+
+static inline pud_t *pud_offset(p4d_t *p4dp, unsigned long address)
+{
+ return pud_offset_lockless(p4dp, *p4dp, address);
}
#define pud_offset pud_offset
-static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
+static inline pmd_t *pmd_offset_lockless(pud_t *pudp, pud_t pud, unsigned long address)
+{
+ if ((pud_val(pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3)
+ return (pmd_t *) pud_deref(pud) + pmd_index(address);
+ return (pmd_t *) pudp;
+}
+#define pmd_offset_lockless pmd_offset_lockless
+
+static inline pmd_t *pmd_offset(pud_t *pudp, unsigned long address)
{
- if ((pud_val(*pud) & _REGION_ENTRY_TYPE_MASK) >= _REGION_ENTRY_TYPE_R3)
- return (pmd_t *) pud_deref(*pud) + pmd_index(address);
- return (pmd_t *) pud;
+ return pmd_offset_lockless(pudp, *pudp, address);
}
#define pmd_offset pmd_offset
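/*
 * The *_offset_lockless() variants take both the entry pointer and an
 * entry value read once by the caller, so a lockless walker can make the
 * dynamic-folding decision on a stable snapshot instead of dereferencing
 * the entry twice. A usage sketch in the style of a gup_fast-like walker
 * (names illustrative):
 *
 *	pgd_t pgd = READ_ONCE(*pgdp);
 *	p4d_t *p4dp = p4d_offset_lockless(pgdp, pgd, addr);
 *	p4d_t p4d = READ_ONCE(*p4dp);
 *	pud_t *pudp = pud_offset_lockless(p4dp, p4d, addr);
 */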
void do_dat_exception(struct pt_regs *regs);
void do_secure_storage_access(struct pt_regs *regs);
void do_non_secure_storage_access(struct pt_regs *regs);
+void do_secure_storage_violation(struct pt_regs *regs);
void addressing_exception(struct pt_regs *regs);
void data_exception(struct pt_regs *regs);
local_irq_restore(flags);
/* Account time spent with enabled wait psw loaded as idle time. */
- /* XXX seqcount has tracepoints that require RCU */
- write_seqcount_begin(&idle->seqcount);
+ raw_write_seqcount_begin(&idle->seqcount);
idle_time = idle->clock_idle_exit - idle->clock_idle_enter;
idle->clock_idle_enter = idle->clock_idle_exit = 0ULL;
idle->idle_time += idle_time;
idle->idle_count++;
account_idle_time(cputime_to_nsecs(idle_time));
- write_seqcount_end(&idle->seqcount);
+ raw_write_seqcount_end(&idle->seqcount);
}
NOKPROBE_SYMBOL(enabled_wait);
PGM_CHECK_DEFAULT /* 3c */
PGM_CHECK(do_secure_storage_access) /* 3d */
PGM_CHECK(do_non_secure_storage_access) /* 3e */
-PGM_CHECK_DEFAULT /* 3f */
+PGM_CHECK(do_secure_storage_violation) /* 3f */
PGM_CHECK(monitor_event_exception) /* 40 */
PGM_CHECK_DEFAULT /* 41 */
PGM_CHECK_DEFAULT /* 42 */
/*
* Make sure that the area behind memory_end is protected
*/
-static void reserve_memory_end(void)
+static void __init reserve_memory_end(void)
{
if (memory_end_set)
memblock_reserve(memory_end, ULONG_MAX);
/*
* Make sure that oldmem, where the dump is stored, is protected
*/
-static void reserve_oldmem(void)
+static void __init reserve_oldmem(void)
{
#ifdef CONFIG_CRASH_DUMP
if (OLDMEM_BASE)
/*
* Make sure that oldmem, where the dump is stored, is protected
*/
-static void remove_oldmem(void)
+static void __init remove_oldmem(void)
{
#ifdef CONFIG_CRASH_DUMP
if (OLDMEM_BASE)
}
NOKPROBE_SYMBOL(do_non_secure_storage_access);
+void do_secure_storage_violation(struct pt_regs *regs)
+{
+ /*
+ * Either KVM messed up the secure guest mapping or the same
+ * page is mapped into multiple secure guests.
+ *
+ * This exception is only triggered when a guest 2 is running
+ * and can therefore never occur in kernel context.
+ */
+ printk_ratelimited(KERN_WARNING
+ "Secure storage violation in task: %s, pid %d\n",
+ current->comm, current->pid);
+ send_sig(SIGSEGV, current, 0);
+}
+
#else
void do_secure_storage_access(struct pt_regs *regs)
{
{
default_trap_handler(regs);
}
+
+void do_secure_storage_violation(struct pt_regs *regs)
+{
+ default_trap_handler(regs);
+}
#endif
int zpci_disable_device(struct zpci_dev *zdev)
{
zpci_dma_exit_device(zdev);
+ /*
+ * The zPCI function may already be disabled by the platform; this is
+ * detected in clp_disable_fh(), which then becomes a no-op.
+ */
return clp_disable_fh(zdev);
}
EXPORT_SYMBOL_GPL(zpci_disable_device);
zpci_remove_device(zdev);
}
+ zdev->fh = ccdf->fh;
+ zpci_disable_device(zdev);
zdev->state = ZPCI_FN_STATE_STANDBY;
if (!clp_get_state(ccdf->fid, &state) &&
state == ZPCI_FN_STATE_RESERVED) {
#ifdef CONFIG_SMP
-#include <linux/spinlock.h>
#include <linux/atomic.h>
#include <asm/current.h>
#include <asm/percpu.h>
nop
cmp/eq #-1, r0
bt syscall_exit
- mov.l r0, @(OFF_R0,r15) ! Save return value
! Reload R0-R4 from kernel stack, where the
! parent may have modified them using
! ptrace(POKEUSR). (Note that R0-R2 are
asmlinkage long do_syscall_trace_enter(struct pt_regs *regs)
{
- long ret = 0;
-
if (test_thread_flag(TIF_SYSCALL_TRACE) &&
- tracehook_report_syscall_entry(regs))
- /*
- * Tracing decided this syscall should not happen.
- * We'll return a bogus call number to get an ENOSYS
- * error, but leave the original number in regs->regs[0].
- */
- ret = -1L;
+ tracehook_report_syscall_entry(regs)) {
+ regs->regs[0] = -ENOSYS;
+ return -1;
+ }
if (secure_computing() == -1)
return -1;
audit_syscall_entry(regs->regs[3], regs->regs[4], regs->regs[5],
regs->regs[6], regs->regs[7]);
- return ret ?: regs->regs[0];
+ return 0;
}
asmlinkage void do_syscall_trace_leave(struct pt_regs *regs)
KBUILD_CFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
KBUILD_CFLAGS += -fno-asynchronous-unwind-tables
KBUILD_CFLAGS += -D__DISABLE_EXPORTS
+# Disable relocation relaxation in case the link is not PIE.
+KBUILD_CFLAGS += $(call as-option,-Wa$(comma)-mrelax-relocations=no)
KBUILD_AFLAGS := $(KBUILD_CFLAGS) -D__ASSEMBLY__
GCOV_PROFILE := n
CONFIG_BLK_DEV_INITRD=y
# CONFIG_COMPAT_BRK is not set
CONFIG_PROFILING=y
+# CONFIG_64BIT is not set
CONFIG_SMP=y
CONFIG_X86_GENERIC=y
CONFIG_HPET_TIMER=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y
CONFIG_FB_EFI=y
-CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y
CONFIG_FB_EFI=y
-CONFIG_VGACON_SOFT_SCROLLBACK=y
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
old_regs = set_irq_regs(regs);
instrumentation_begin();
- run_on_irqstack_cond(__xen_pv_evtchn_do_upcall, NULL, regs);
+ run_on_irqstack_cond(__xen_pv_evtchn_do_upcall, regs);
instrumentation_end();
set_irq_regs(old_regs);
* rdx: Function argument (can be NULL if none)
*/
SYM_FUNC_START(asm_call_on_stack)
+SYM_INNER_LABEL(asm_call_sysvec_on_stack, SYM_L_GLOBAL)
+SYM_INNER_LABEL(asm_call_irq_on_stack, SYM_L_GLOBAL)
/*
* Save the frame pointer unconditionally. This allows the ORC
* unwinder to handle the stack switch.
extern int x86_acpi_numa_init(void);
#endif /* CONFIG_ACPI_NUMA */
-#define acpi_unlazy_tlb(x) leave_mm(x)
-
#ifdef CONFIG_ACPI_APEI
static inline pgprot_t arch_apei_get_mem_attribute(phys_addr_t addr)
{
#define FRAME_END "pop %" _ASM_BP "\n"
#ifdef CONFIG_X86_64
+
#define ENCODE_FRAME_POINTER \
"lea 1(%rsp), %rbp\n\t"
+
+static inline unsigned long encode_frame_pointer(struct pt_regs *regs)
+{
+ return (unsigned long)regs + 1;
+}
+
#else /* !CONFIG_X86_64 */
+
#define ENCODE_FRAME_POINTER \
"movl %esp, %ebp\n\t" \
"andl $0x7fffffff, %ebp\n\t"
+
+static inline unsigned long encode_frame_pointer(struct pt_regs *regs)
+{
+ return (unsigned long)regs & 0x7fffffff;
+}
+
#endif /* CONFIG_X86_64 */
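/*
 * The C helpers mirror the asm ENCODE_FRAME_POINTER macros above: on
 * 64-bit, setting the low bit in the saved frame pointer (regs + 1)
 * marks it as pointing at a pt_regs record; on 32-bit, clearing the top
 * bit plays the same role, since real kernel frame addresses have that
 * bit set. This description is inferred from the encoding itself; the
 * frame-pointer unwinder performs the matching decode.
 */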
#endif /* __ASSEMBLY__ */
#define ENCODE_FRAME_POINTER
+static inline unsigned long encode_frame_pointer(struct pt_regs *regs)
+{
+ return 0;
+}
+
#endif
#define FRAME_BEGIN
instrumentation_begin(); \
irq_enter_rcu(); \
kvm_set_cpu_l1tf_flush_l1d(); \
- run_on_irqstack_cond(__##func, regs, regs); \
+ run_sysvec_on_irqstack_cond(__##func, regs); \
irq_exit_rcu(); \
instrumentation_end(); \
irqentry_exit(regs, state); \
return __this_cpu_read(irq_count) != -1;
}
-void asm_call_on_stack(void *sp, void *func, void *arg);
+void asm_call_on_stack(void *sp, void (*func)(void), void *arg);
+void asm_call_sysvec_on_stack(void *sp, void (*func)(struct pt_regs *regs),
+ struct pt_regs *regs);
+void asm_call_irq_on_stack(void *sp, void (*func)(struct irq_desc *desc),
+ struct irq_desc *desc);
-static __always_inline void __run_on_irqstack(void *func, void *arg)
+static __always_inline void __run_on_irqstack(void (*func)(void))
{
void *tos = __this_cpu_read(hardirq_stack_ptr);
__this_cpu_add(irq_count, 1);
- asm_call_on_stack(tos - 8, func, arg);
+ asm_call_on_stack(tos - 8, func, NULL);
+ __this_cpu_sub(irq_count, 1);
+}
+
+static __always_inline void
+__run_sysvec_on_irqstack(void (*func)(struct pt_regs *regs),
+ struct pt_regs *regs)
+{
+ void *tos = __this_cpu_read(hardirq_stack_ptr);
+
+ __this_cpu_add(irq_count, 1);
+ asm_call_sysvec_on_stack(tos - 8, func, regs);
+ __this_cpu_sub(irq_count, 1);
+}
+
+static __always_inline void
+__run_irq_on_irqstack(void (*func)(struct irq_desc *desc),
+ struct irq_desc *desc)
+{
+ void *tos = __this_cpu_read(hardirq_stack_ptr);
+
+ __this_cpu_add(irq_count, 1);
+ asm_call_irq_on_stack(tos - 8, func, desc);
__this_cpu_sub(irq_count, 1);
}
#else /* CONFIG_X86_64 */
static inline bool irqstack_active(void) { return false; }
-static inline void __run_on_irqstack(void *func, void *arg) { }
+static inline void __run_on_irqstack(void (*func)(void)) { }
+static inline void __run_sysvec_on_irqstack(void (*func)(struct pt_regs *regs),
+ struct pt_regs *regs) { }
+static inline void __run_irq_on_irqstack(void (*func)(struct irq_desc *desc),
+ struct irq_desc *desc) { }
#endif /* !CONFIG_X86_64 */
static __always_inline bool irq_needs_irq_stack(struct pt_regs *regs)
return !user_mode(regs) && !irqstack_active();
}
-static __always_inline void run_on_irqstack_cond(void *func, void *arg,
+
+static __always_inline void run_on_irqstack_cond(void (*func)(void),
struct pt_regs *regs)
{
- void (*__func)(void *arg) = func;
+ lockdep_assert_irqs_disabled();
+
+ if (irq_needs_irq_stack(regs))
+ __run_on_irqstack(func);
+ else
+ func();
+}
+
+static __always_inline void
+run_sysvec_on_irqstack_cond(void (*func)(struct pt_regs *regs),
+ struct pt_regs *regs)
+{
+ lockdep_assert_irqs_disabled();
+ if (irq_needs_irq_stack(regs))
+ __run_sysvec_on_irqstack(func, regs);
+ else
+ func(regs);
+}
+
+static __always_inline void
+run_irq_on_irqstack_cond(void (*func)(struct irq_desc *desc), struct irq_desc *desc,
+ struct pt_regs *regs)
+{
lockdep_assert_irqs_disabled();
if (irq_needs_irq_stack(regs))
- __run_on_irqstack(__func, arg);
+ __run_irq_on_irqstack(func, desc);
else
- __func(arg);
+ func(desc);
}
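/*
 * Three typed helpers instead of a single void * variant: calling a
 * function through a mismatched pointer type is undefined behaviour
 * and, notably, trips Clang's Control-Flow-Integrity prototype checks,
 * which is presumably why each callback signature gets its own entry
 * point here.
 */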
#endif
legacy_pic->init(0);
legacy_pic->make_irq(0);
apic_write(APIC_LVT0, APIC_DM_EXTINT);
+ legacy_pic->unmask(0);
unlock_ExtINT_logic();
struct pt_regs *regs)
{
if (IS_ENABLED(CONFIG_X86_64))
- run_on_irqstack_cond(desc->handle_irq, desc, regs);
+ run_irq_on_irqstack_cond(desc->handle_irq, desc, regs);
else
__handle_irq(desc, regs);
}
void do_softirq_own_stack(void)
{
- run_on_irqstack_cond(__do_softirq, NULL, NULL);
+ run_on_irqstack_cond(__do_softirq, NULL);
}
}
if (pv_tlb_flush_supported()) {
+ pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
pv_ops.mmu.tlb_remove_table = tlb_remove_table;
pr_info("KVM setup pv remote TLB flush\n");
}
}
arch_initcall(activate_jump_labels);
-static void kvm_free_pv_cpu_mask(void)
-{
- unsigned int cpu;
-
- for_each_possible_cpu(cpu)
- free_cpumask_var(per_cpu(__pv_cpu_mask, cpu));
-}
-
static __init int kvm_alloc_cpumask(void)
{
int cpu;
if (alloc)
for_each_possible_cpu(cpu) {
- if (!zalloc_cpumask_var_node(
- per_cpu_ptr(&__pv_cpu_mask, cpu),
- GFP_KERNEL, cpu_to_node(cpu))) {
- goto zalloc_cpumask_fail;
- }
+ zalloc_cpumask_var_node(per_cpu_ptr(&__pv_cpu_mask, cpu),
+ GFP_KERNEL, cpu_to_node(cpu));
}
- apic->send_IPI_mask_allbutself = kvm_send_ipi_mask_allbutself;
- pv_ops.mmu.flush_tlb_others = kvm_flush_tlb_others;
return 0;
-
-zalloc_cpumask_fail:
- kvm_free_pv_cpu_mask();
- return -ENOMEM;
}
arch_initcall(kvm_alloc_cpumask);
#include <asm/spec-ctrl.h>
#include <asm/io_bitmap.h>
#include <asm/proto.h>
+#include <asm/frame.h>
#include "process.h"
fork_frame = container_of(childregs, struct fork_frame, regs);
frame = &fork_frame->frame;
- frame->bp = 0;
+ frame->bp = encode_frame_pointer(childregs);
frame->ret_addr = (unsigned long) ret_from_fork;
p->thread.sp = (unsigned long) fork_frame;
p->thread.io_bitmap = NULL;
return 1;
}
+static int invd_interception(struct vcpu_svm *svm)
+{
+ /* Treat an INVD instruction as a NOP and just skip it. */
+ return kvm_skip_emulated_instruction(&svm->vcpu);
+}
+
static int invlpg_interception(struct vcpu_svm *svm)
{
if (!static_cpu_has(X86_FEATURE_DECODEASSISTS))
[SVM_EXIT_RDPMC] = rdpmc_interception,
[SVM_EXIT_CPUID] = cpuid_interception,
[SVM_EXIT_IRET] = iret_interception,
- [SVM_EXIT_INVD] = emulate_on_interception,
+ [SVM_EXIT_INVD] = invd_interception,
[SVM_EXIT_PAUSE] = pause_interception,
[SVM_EXIT_HLT] = halt_interception,
[SVM_EXIT_INVLPG] = invlpg_interception,
module_param_named(preemption_timer, enable_preemption_timer, bool, S_IRUGO);
#endif
+extern bool __read_mostly allow_smaller_maxphyaddr;
+module_param(allow_smaller_maxphyaddr, bool, S_IRUGO);
+
#define KVM_VM_CR0_ALWAYS_OFF (X86_CR0_NW | X86_CR0_CD)
#define KVM_VM_CR0_ALWAYS_ON_UNRESTRICTED_GUEST X86_CR0_NE
#define KVM_VM_CR0_ALWAYS_ON \
* EPT will cause page fault only if we need to
* detect illegal GPAs.
*/
+ WARN_ON_ONCE(!allow_smaller_maxphyaddr);
kvm_fixup_and_inject_pf_error(vcpu, cr2, error_code);
return 1;
} else
* would also use advanced VM-exit information for EPT violations to
* reconstruct the page fault error code.
*/
- if (unlikely(kvm_mmu_is_illegal_gpa(vcpu, gpa)))
+ if (unlikely(allow_smaller_maxphyaddr && kvm_mmu_is_illegal_gpa(vcpu, gpa)))
return kvm_emulate_instruction(vcpu, 0);
return kvm_mmu_page_fault(vcpu, gpa, error_code, NULL, 0);
vmx_check_vmcs12_offsets();
/*
- * Intel processors don't have problems with
- * GUEST_MAXPHYADDR < HOST_MAXPHYADDR so enable
- * it for VMX by default
+ * Shadow paging doesn't have a (further) performance penalty
+ * from GUEST_MAXPHYADDR < HOST_MAXPHYADDR so enable it
+ * by default
*/
- allow_smaller_maxphyaddr = true;
+ if (!enable_ept)
+ allow_smaller_maxphyaddr = true;
return 0;
}
static inline bool vmx_need_pf_intercept(struct kvm_vcpu *vcpu)
{
- return !enable_ept || cpuid_maxphyaddr(vcpu) < boot_cpu_data.x86_phys_bits;
+ if (!enable_ept)
+ return true;
+
+ return allow_smaller_maxphyaddr && cpuid_maxphyaddr(vcpu) < boot_cpu_data.x86_phys_bits;
}
void dump_vmcs(void);
u64 __read_mostly host_efer;
EXPORT_SYMBOL_GPL(host_efer);
-bool __read_mostly allow_smaller_maxphyaddr;
+bool __read_mostly allow_smaller_maxphyaddr = 0;
EXPORT_SYMBOL_GPL(allow_smaller_maxphyaddr);
static u64 __read_mostly host_xss;
unsigned long old_cr4 = kvm_read_cr4(vcpu);
unsigned long pdptr_bits = X86_CR4_PGE | X86_CR4_PSE | X86_CR4_PAE |
X86_CR4_SMEP;
+ unsigned long mmu_role_bits = pdptr_bits | X86_CR4_SMAP | X86_CR4_PKE;
if (kvm_valid_cr4(vcpu, cr4))
return 1;
if (kvm_x86_ops.set_cr4(vcpu, cr4))
return 1;
- if (((cr4 ^ old_cr4) & pdptr_bits) ||
+ if (((cr4 ^ old_cr4) & mmu_role_bits) ||
(!(cr4 & X86_CR4_PCIDE) && (old_cr4 & X86_CR4_PCIDE)))
kvm_mmu_reset_context(vcpu);
case MSR_IA32_POWER_CTL:
msr_info->data = vcpu->arch.msr_ia32_power_ctl;
break;
- case MSR_IA32_TSC:
- msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + vcpu->arch.tsc_offset;
+ case MSR_IA32_TSC: {
+ /*
+ * Intel SDM states that MSR_IA32_TSC read adds the TSC offset
+ * even when not intercepted. AMD manual doesn't explicitly
+ * state this but appears to behave the same.
+ *
+ * On userspace reads and writes, however, we unconditionally
+ * operate on L1's TSC value to ensure backwards-compatible
+ * behavior for migration.
+ */
+ u64 tsc_offset = msr_info->host_initiated ? vcpu->arch.l1_tsc_offset :
+ vcpu->arch.tsc_offset;
+
+ msr_info->data = kvm_scale_tsc(vcpu, rdtsc()) + tsc_offset;
break;
+ }
case MSR_MTRRcap:
case 0x200 ... 0x2ff:
return kvm_mtrr_get_msr(vcpu, msr_info->index, &msr_info->data);
*/
if (size < 8) {
if (!IS_ALIGNED(dest, 4) || size != 4)
- clean_cache_range(dst, 1);
+ clean_cache_range(dst, size);
} else {
if (!IS_ALIGNED(dest, 8)) {
dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
}
EXPORT_SYMBOL_GPL(blk_queue_can_use_dma_map_merging);
+/**
+ * blk_queue_set_zoned - configure a disk queue zoned model.
+ * @disk: the gendisk of the queue to configure
+ * @model: the zoned model to set
+ *
+ * Set the zoned model of the request queue of @disk according to @model.
+ * When @model is BLK_ZONED_HM (host managed), this should be called only
+ * if zoned block device support is enabled (CONFIG_BLK_DEV_ZONED option).
+ * If @model specifies BLK_ZONED_HA (host aware), the effective model used
+ * depends on CONFIG_BLK_DEV_ZONED settings and on the existence of partitions
+ * on the disk.
+ */
+void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model)
+{
+ switch (model) {
+ case BLK_ZONED_HM:
+ /*
+ * Host managed devices are supported only if
+ * CONFIG_BLK_DEV_ZONED is enabled.
+ */
+ WARN_ON_ONCE(!IS_ENABLED(CONFIG_BLK_DEV_ZONED));
+ break;
+ case BLK_ZONED_HA:
+ /*
+ * Host aware devices can be treated either as regular block
+ * devices (similar to drive managed devices) or as zoned block
+ * devices to take advantage of the zone command set, similarly
+ * to host managed devices. We try the latter if there are no
+ * partitions and zoned block device support is enabled, else
+ * we do nothing special as far as the block layer is concerned.
+ */
+ if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED) ||
+ disk_has_partitions(disk))
+ model = BLK_ZONED_NONE;
+ break;
+ case BLK_ZONED_NONE:
+ default:
+ if (WARN_ON_ONCE(model != BLK_ZONED_NONE))
+ model = BLK_ZONED_NONE;
+ break;
+ }
+
+ disk->queue->limits.zoned = model;
+}
+EXPORT_SYMBOL_GPL(blk_queue_set_zoned);
+
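+/*
+ * A typical caller would be a disk driver that has just probed the
+ * device's zone model; a sketch with illustrative helper names:
+ *
+ *	if (dev_is_host_managed_smr(dev))
+ *		blk_queue_set_zoned(disk, BLK_ZONED_HM);
+ *	else if (dev_is_host_aware_smr(dev))
+ *		blk_queue_set_zoned(disk, BLK_ZONED_HA);
+ *	else
+ *		blk_queue_set_zoned(disk, BLK_ZONED_NONE);
+ */
+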
static int __init blk_settings_init(void)
{
blk_max_low_pfn = max_low_pfn - 1;
}
/* Power(C) State timer broadcast control */
-static void lapic_timer_state_broadcast(struct acpi_processor *pr,
- struct acpi_processor_cx *cx,
- int broadcast)
+static bool lapic_timer_needs_broadcast(struct acpi_processor *pr,
+ struct acpi_processor_cx *cx)
{
- int state = cx - pr->power.states;
-
- if (state >= pr->power.timer_broadcast_on_state) {
- if (broadcast)
- tick_broadcast_enter();
- else
- tick_broadcast_exit();
- }
+ return cx - pr->power.states >= pr->power.timer_broadcast_on_state;
}
#else
static void lapic_timer_check_state(int state, struct acpi_processor *pr,
struct acpi_processor_cx *cstate) { }
static void lapic_timer_propagate_broadcast(struct acpi_processor *pr) { }
-static void lapic_timer_state_broadcast(struct acpi_processor *pr,
- struct acpi_processor_cx *cx,
- int broadcast)
+
+static bool lapic_timer_needs_broadcast(struct acpi_processor *pr,
+ struct acpi_processor_cx *cx)
{
+ return false;
}
#endif
/**
* acpi_idle_enter_bm - enters C3 with proper BM handling
+ * @drv: cpuidle driver
* @pr: Target processor
* @cx: Target state context
- * @timer_bc: Whether or not to change timer mode to broadcast
+ * @index: index of target state
*/
-static void acpi_idle_enter_bm(struct acpi_processor *pr,
- struct acpi_processor_cx *cx, bool timer_bc)
+static int acpi_idle_enter_bm(struct cpuidle_driver *drv,
+ struct acpi_processor *pr,
+ struct acpi_processor_cx *cx,
+ int index)
{
- acpi_unlazy_tlb(smp_processor_id());
-
- /*
- * Must be done before busmaster disable as we might need to
- * access HPET !
- */
- if (timer_bc)
- lapic_timer_state_broadcast(pr, cx, 1);
+ static struct acpi_processor_cx safe_cx = {
+ .entry_method = ACPI_CSTATE_HALT,
+ };
/*
* disable bus master
* bm_check implies we need ARB_DIS
* bm_control implies whether we can do ARB_DIS
*
- * That leaves a case where bm_check is set and bm_control is
- * not set. In that case we cannot do much, we enter C3
- * without doing anything.
+ * That leaves a case where bm_check is set and bm_control is not set.
+ * In that case we cannot do much, we enter C3 without doing anything.
*/
- if (pr->flags.bm_control) {
+ bool dis_bm = pr->flags.bm_control;
+
+ /* If we can skip BM, demote to a safe state. */
+ if (!cx->bm_sts_skip && acpi_idle_bm_check()) {
+ dis_bm = false;
+ index = drv->safe_state_index;
+ if (index >= 0) {
+ cx = this_cpu_read(acpi_cstate[index]);
+ } else {
+ cx = &safe_cx;
+ index = -EBUSY;
+ }
+ }
+
+ if (dis_bm) {
raw_spin_lock(&c3_lock);
c3_cpu_count++;
/* Disable bus master arbitration when all CPUs are in C3 */
raw_spin_unlock(&c3_lock);
}
+ rcu_idle_enter();
+
acpi_idle_do_entry(cx);
+ rcu_idle_exit();
+
/* Re-enable bus master arbitration */
- if (pr->flags.bm_control) {
+ if (dis_bm) {
raw_spin_lock(&c3_lock);
acpi_write_bit_register(ACPI_BITREG_ARB_DISABLE, 0);
c3_cpu_count--;
raw_spin_unlock(&c3_lock);
}
- if (timer_bc)
- lapic_timer_state_broadcast(pr, cx, 0);
+ return index;
}
static int acpi_idle_enter(struct cpuidle_device *dev,
return -EINVAL;
if (cx->type != ACPI_STATE_C1) {
+ if (cx->type == ACPI_STATE_C3 && pr->flags.bm_check)
+ return acpi_idle_enter_bm(drv, pr, cx, index);
+
+ /* C2 to C1 demotion. */
if (acpi_idle_fallback_to_c1(pr) && num_online_cpus() > 1) {
index = ACPI_IDLE_STATE_START;
cx = per_cpu(acpi_cstate[index], dev->cpu);
- } else if (cx->type == ACPI_STATE_C3 && pr->flags.bm_check) {
- if (cx->bm_sts_skip || !acpi_idle_bm_check()) {
- acpi_idle_enter_bm(pr, cx, true);
- return index;
- } else if (drv->safe_state_index >= 0) {
- index = drv->safe_state_index;
- cx = per_cpu(acpi_cstate[index], dev->cpu);
- } else {
- acpi_safe_halt();
- return -EBUSY;
- }
}
}
- lapic_timer_state_broadcast(pr, cx, 1);
-
if (cx->type == ACPI_STATE_C3)
ACPI_FLUSH_CPU_CACHE();
acpi_idle_do_entry(cx);
- lapic_timer_state_broadcast(pr, cx, 0);
-
return index;
}
return 0;
if (pr->flags.bm_check) {
- acpi_idle_enter_bm(pr, cx, false);
+ u8 bm_sts_skip = cx->bm_sts_skip;
+
+ /* Don't check BM_STS, do an unconditional ARB_DIS for S2IDLE */
+ cx->bm_sts_skip = 1;
+ acpi_idle_enter_bm(drv, pr, cx, index);
+ cx->bm_sts_skip = bm_sts_skip;
+
return 0;
} else {
ACPI_FLUSH_CPU_CACHE();
{
int i, count = ACPI_IDLE_STATE_START;
struct acpi_processor_cx *cx;
+ struct cpuidle_state *state;
if (max_cstate == 0)
max_cstate = 1;
for (i = 1; i < ACPI_PROCESSOR_MAX_POWER && i <= max_cstate; i++) {
+ state = &acpi_idle_driver.states[count];
cx = &pr->power.states[i];
if (!cx->valid)
per_cpu(acpi_cstate[count], dev->cpu) = cx;
+ if (lapic_timer_needs_broadcast(pr, cx))
+ state->flags |= CPUIDLE_FLAG_TIMER_STOP;
+
+ if (cx->type == ACPI_STATE_C3) {
+ state->flags |= CPUIDLE_FLAG_TLB_FLUSHED;
+ if (pr->flags.bm_check)
+ state->flags |= CPUIDLE_FLAG_RCU_IDLE;
+ }
+
count++;
if (count == CPUIDLE_STATE_MAX)
break;
rc = dma_set_mask_and_coherent(&pci_dev->dev, DMA_BIT_MASK(32));
if (rc < 0)
- goto out;
+ goto err_disable;
rc = -ENOMEM;
eni_dev = kmalloc(sizeof(struct eni_dev), GFP_KERNEL);
return pfn_to_nid(pfn);
}
+static int do_register_memory_block_under_node(int nid,
+ struct memory_block *mem_blk)
+{
+ int ret;
+
+ /*
+ * If this memory block spans multiple nodes, we only indicate
+ * the last processed node.
+ */
+ mem_blk->nid = nid;
+
+ ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
+ &mem_blk->dev.kobj,
+ kobject_name(&mem_blk->dev.kobj));
+ if (ret)
+ return ret;
+
+ return sysfs_create_link_nowarn(&mem_blk->dev.kobj,
+ &node_devices[nid]->dev.kobj,
+ kobject_name(&node_devices[nid]->dev.kobj));
+}
+
/* register memory section under specified node if it spans that node */
-static int register_mem_sect_under_node(struct memory_block *mem_blk,
- void *arg)
+static int register_mem_block_under_node_early(struct memory_block *mem_blk,
+ void *arg)
{
unsigned long memory_block_pfns = memory_block_size_bytes() / PAGE_SIZE;
unsigned long start_pfn = section_nr_to_pfn(mem_blk->start_section_nr);
unsigned long end_pfn = start_pfn + memory_block_pfns - 1;
- int ret, nid = *(int *)arg;
+ int nid = *(int *)arg;
unsigned long pfn;
for (pfn = start_pfn; pfn <= end_pfn; pfn++) {
}
/*
- * We need to check if page belongs to nid only for the boot
- * case, during hotplug we know that all pages in the memory
- * block belong to the same node.
- */
- if (system_state == SYSTEM_BOOTING) {
- page_nid = get_nid_for_pfn(pfn);
- if (page_nid < 0)
- continue;
- if (page_nid != nid)
- continue;
- }
-
- /*
- * If this memory block spans multiple nodes, we only indicate
- * the last processed node.
+ * We need to check if the page belongs to the nid only in the boot
+ * case because node's ranges can be interleaved.
*/
- mem_blk->nid = nid;
-
- ret = sysfs_create_link_nowarn(&node_devices[nid]->dev.kobj,
- &mem_blk->dev.kobj,
- kobject_name(&mem_blk->dev.kobj));
- if (ret)
- return ret;
+ page_nid = get_nid_for_pfn(pfn);
+ if (page_nid < 0)
+ continue;
+ if (page_nid != nid)
+ continue;
- return sysfs_create_link_nowarn(&mem_blk->dev.kobj,
- &node_devices[nid]->dev.kobj,
- kobject_name(&node_devices[nid]->dev.kobj));
+ return do_register_memory_block_under_node(nid, mem_blk);
}
/* mem section does not span the specified node */
return 0;
}
/*
+ * During hotplug we know that all pages in the memory block belong to the same
+ * node.
+ */
+static int register_mem_block_under_node_hotplug(struct memory_block *mem_blk,
+ void *arg)
+{
+ int nid = *(int *)arg;
+
+ return do_register_memory_block_under_node(nid, mem_blk);
+}
+
+/*
* Unregister a memory block device under the node it spans. Memory blocks
* with multiple nodes cannot be offlined and therefore also never be removed.
*/
kobject_name(&node_devices[mem_blk->nid]->dev.kobj));
}
-int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn)
+int link_mem_sections(int nid, unsigned long start_pfn, unsigned long end_pfn,
+ enum meminit_context context)
{
+ walk_memory_blocks_func_t func;
+
+ if (context == MEMINIT_HOTPLUG)
+ func = register_mem_block_under_node_hotplug;
+ else
+ func = register_mem_block_under_node_early;
+
return walk_memory_blocks(PFN_PHYS(start_pfn),
PFN_PHYS(end_pfn - start_pfn), (void *)&nid,
- register_mem_sect_under_node);
+ func);
}
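
With the dispatch above, boot-time registration keeps the per-pfn node
check while hotplug skips it entirely. A minimal sketch of how a caller
selects the context (the call sites are outside this diff, so the wrapper
name here is invented):

	static int example_link_node_memory(int nid, unsigned long start_pfn,
					    unsigned long end_pfn, bool hotplug)
	{
		return link_mem_sections(nid, start_pfn, end_pfn,
					 hotplug ? MEMINIT_HOTPLUG :
						   MEMINIT_EARLY);
	}
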
#ifdef CONFIG_HUGETLBFS
#ifdef CONFIG_DEBUG_FS
extern void regmap_debugfs_initcall(void);
-extern void regmap_debugfs_init(struct regmap *map, const char *name);
+extern void regmap_debugfs_init(struct regmap *map);
extern void regmap_debugfs_exit(struct regmap *map);
static inline void regmap_debugfs_disable(struct regmap *map)
#else
static inline void regmap_debugfs_initcall(void) { }
-static inline void regmap_debugfs_init(struct regmap *map, const char *name) { }
+static inline void regmap_debugfs_init(struct regmap *map) { }
static inline void regmap_debugfs_exit(struct regmap *map) { }
static inline void regmap_debugfs_disable(struct regmap *map) { }
#endif
int regcache_lookup_reg(struct regmap *map, unsigned int reg);
int _regmap_raw_write(struct regmap *map, unsigned int reg,
- const void *val, size_t val_len);
+ const void *val, size_t val_len, bool noinc);
void regmap_async_complete_cb(struct regmap_async *async, int ret);
map->cache_bypass = true;
- ret = _regmap_raw_write(map, base, *data, count * val_bytes);
+ ret = _regmap_raw_write(map, base, *data, count * val_bytes, false);
if (ret)
dev_err(map->dev, "Unable to sync registers %#x-%#x. %d\n",
base, cur - map->reg_stride, ret);
struct regmap_debugfs_node {
struct regmap *map;
- const char *name;
struct list_head link;
};
.write = regmap_cache_bypass_write_file,
};
-void regmap_debugfs_init(struct regmap *map, const char *name)
+void regmap_debugfs_init(struct regmap *map)
{
struct rb_node *next;
struct regmap_range_node *range_node;
const char *devname = "dummy";
+ const char *name = map->name;
/*
* Userspace can initiate reads from the hardware over debugfs.
if (!node)
return;
node->map = map;
- node->name = name;
mutex_lock(®map_debugfs_early_lock);
list_add(&node->link, ®map_debugfs_early_list);
mutex_unlock(®map_debugfs_early_lock);
mutex_lock(®map_debugfs_early_lock);
list_for_each_entry_safe(node, tmp, ®map_debugfs_early_list, link) {
- regmap_debugfs_init(node->map, node->name);
+ regmap_debugfs_init(node->map);
list_del(&node->link);
kfree(node);
}
kfree(map->selector_work_buf);
}
+static int regmap_set_name(struct regmap *map, const struct regmap_config *config)
+{
+ if (config->name) {
+ const char *name = kstrdup_const(config->name, GFP_KERNEL);
+
+ if (!name)
+ return -ENOMEM;
+
+ kfree_const(map->name);
+ map->name = name;
+ }
+
+ return 0;
+}
+
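
regmap_set_name() leans on the kstrdup_const()/kfree_const() pairing so
that names passed in as string literals are never actually copied. A
minimal sketch of the idiom (the function name is invented for
illustration):

	static const char *example_dup_name(const char *src)
	{
		/*
		 * kstrdup_const() returns src itself when it points into
		 * .rodata and kmallocs a copy otherwise, so the result
		 * must be freed with kfree_const(), never plain kfree().
		 */
		return kstrdup_const(src, GFP_KERNEL);
	}
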
int regmap_attach_dev(struct device *dev, struct regmap *map,
const struct regmap_config *config)
{
struct regmap **m;
+ int ret;
map->dev = dev;
- regmap_debugfs_init(map, config->name);
+ ret = regmap_set_name(map, config);
+ if (ret)
+ return ret;
+
+ regmap_debugfs_init(map);
/* Add a devres resource for dev_get_regmap() */
m = devres_alloc(dev_get_regmap_release, sizeof(*m), GFP_KERNEL);
goto err;
}
- if (config->name) {
- map->name = kstrdup_const(config->name, GFP_KERNEL);
- if (!map->name) {
- ret = -ENOMEM;
- goto err_map;
- }
- }
+ ret = regmap_set_name(map, config);
+ if (ret)
+ goto err_map;
if (config->disable_locking) {
map->lock = map->unlock = regmap_lock_unlock_none;
if (ret != 0)
goto err_regcache;
} else {
- regmap_debugfs_init(map, config->name);
+ regmap_debugfs_init(map);
}
return map;
*/
int regmap_reinit_cache(struct regmap *map, const struct regmap_config *config)
{
+ int ret;
+
regcache_exit(map);
regmap_debugfs_exit(map);
map->readable_noinc_reg = config->readable_noinc_reg;
map->cache_type = config->cache_type;
- regmap_debugfs_init(map, config->name);
+ ret = regmap_set_name(map, config);
+ if (ret)
+ return ret;
+
+ regmap_debugfs_init(map);
map->cache_bypass = false;
map->cache_only = false;
}
static int _regmap_raw_write_impl(struct regmap *map, unsigned int reg,
- const void *val, size_t val_len)
+ const void *val, size_t val_len, bool noinc)
{
struct regmap_range_node *range;
unsigned long flags;
win_residue, val_len / map->format.val_bytes);
ret = _regmap_raw_write_impl(map, reg, val,
win_residue *
- map->format.val_bytes);
+ map->format.val_bytes, noinc);
if (ret != 0)
return ret;
win_residue = range->window_len - win_offset;
}
- ret = _regmap_select_page(map, ®, range, val_num);
+ ret = _regmap_select_page(map, ®, range, noinc ? 1 : val_num);
if (ret != 0)
return ret;
}
map->work_buf +
map->format.reg_bytes +
map->format.pad_bytes,
- map->format.val_bytes);
+ map->format.val_bytes,
+ false);
}
static inline void *_regmap_map_get_context(struct regmap *map)
EXPORT_SYMBOL_GPL(regmap_write_async);
int _regmap_raw_write(struct regmap *map, unsigned int reg,
- const void *val, size_t val_len)
+ const void *val, size_t val_len, bool noinc)
{
size_t val_bytes = map->format.val_bytes;
size_t val_count = val_len / val_bytes;
/* Write as many bytes as possible with chunk_size */
for (i = 0; i < chunk_count; i++) {
- ret = _regmap_raw_write_impl(map, reg, val, chunk_bytes);
+ ret = _regmap_raw_write_impl(map, reg, val, chunk_bytes, noinc);
if (ret)
return ret;
/* Write remaining bytes */
if (val_len)
- ret = _regmap_raw_write_impl(map, reg, val, val_len);
+ ret = _regmap_raw_write_impl(map, reg, val, val_len, noinc);
return ret;
}
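
The noinc flag threaded through these hunks exists for non-incrementing
(FIFO-style) accesses, where one long transfer targets a single register.
A short sketch of the public entry point this serves; the register offset
and names are invented for illustration:

	#define EXAMPLE_REG_FIFO	0x40	/* hypothetical FIFO data register */

	static int example_push_fifo(struct regmap *map, const u8 *buf, size_t len)
	{
		/*
		 * A noinc write streams len bytes into a single register,
		 * which is why _regmap_select_page() above now counts one
		 * register instead of len / val_bytes consecutive ones.
		 */
		return regmap_noinc_write(map, EXAMPLE_REG_FIFO, buf, len);
	}
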
map->lock(map->lock_arg);
- ret = _regmap_raw_write(map, reg, val, val_len);
+ ret = _regmap_raw_write(map, reg, val, val_len, false);
map->unlock(map->lock_arg);
write_len = map->max_raw_write;
else
write_len = val_len;
- ret = _regmap_raw_write(map, reg, val, write_len);
+ ret = _regmap_raw_write(map, reg, val, write_len, true);
if (ret)
goto out_unlock;
val = ((u8 *)val) + write_len;
map->async = true;
- ret = _regmap_raw_write(map, reg, val, val_len);
+ ret = _regmap_raw_write(map, reg, val, val_len, false);
map->async = false;
EXPORT_SYMBOL_GPL(regmap_raw_write_async);
static int _regmap_raw_read(struct regmap *map, unsigned int reg, void *val,
- unsigned int val_len)
+ unsigned int val_len, bool noinc)
{
struct regmap_range_node *range;
int ret;
range = _regmap_range_lookup(map, reg);
if (range) {
ret = _regmap_select_page(map, ®, range,
- val_len / map->format.val_bytes);
+ noinc ? 1 : val_len / map->format.val_bytes);
if (ret != 0)
return ret;
}
if (!map->format.parse_val)
return -EINVAL;
- ret = _regmap_raw_read(map, reg, work_val, map->format.val_bytes);
+ ret = _regmap_raw_read(map, reg, work_val, map->format.val_bytes, false);
if (ret == 0)
*val = map->format.parse_val(work_val);
/* Read bytes that fit into whole chunks */
for (i = 0; i < chunk_count; i++) {
- ret = _regmap_raw_read(map, reg, val, chunk_bytes);
+ ret = _regmap_raw_read(map, reg, val, chunk_bytes, false);
if (ret != 0)
goto out;
/* Read remaining bytes */
if (val_len) {
- ret = _regmap_raw_read(map, reg, val, val_len);
+ ret = _regmap_raw_read(map, reg, val, val_len, false);
if (ret != 0)
goto out;
}
read_len = map->max_raw_read;
else
read_len = val_len;
- ret = _regmap_raw_read(map, reg, val, read_len);
+ ret = _regmap_raw_read(map, reg, val, read_len, true);
if (ret)
goto out_unlock;
val = ((u8 *)val) + read_len;
	depends on ARCH_BCM2835 || COMPILE_TEST
depends on COMMON_CLK
default ARCH_BCM2835
+ select RESET_CONTROLLER
select RESET_SIMPLE
help
Enable common clock framework support for the Broadcom BCM2711
parent_name = postdiv_name;
}
- pllen = kzalloc(sizeof(*pllout), GFP_KERNEL);
+ pllen = kzalloc(sizeof(*pllen), GFP_KERNEL);
if (!pllen) {
ret = -ENOMEM;
goto err_unregister_postdiv;
pm_runtime_enable(&pdev->dev);
ret = pm_clk_create(&pdev->dev);
if (ret)
- return ret;
+ goto disable_pm_runtime;
ret = pm_clk_add(&pdev->dev, "iface");
if (ret < 0) {
dev_err(&pdev->dev, "failed to acquire iface clock\n");
- goto disable_pm_runtime;
+ goto destroy_pm_clk;
}
+ ret = -EINVAL;
clk_probe = of_device_get_match_data(&pdev->dev);
if (!clk_probe)
- return -EINVAL;
+ goto destroy_pm_clk;
ret = clk_probe(pdev);
if (ret)
PNAME(mux_hdmiphy_p) = { "hdmiphy_phy", "xin24m" };
PNAME(mux_aclk_cpu_src_p) = { "cpll_aclk_cpu", "gpll_aclk_cpu", "hdmiphy_aclk_cpu" };
-PNAME(mux_pll_src_4plls_p) = { "cpll", "gpll", "hdmiphy" "usb480m" };
+PNAME(mux_pll_src_4plls_p) = { "cpll", "gpll", "hdmiphy", "usb480m" };
PNAME(mux_pll_src_3plls_p) = { "cpll", "gpll", "hdmiphy" };
PNAME(mux_pll_src_2plls_p) = { "cpll", "gpll" };
PNAME(mux_sclk_hdmi_cec_p) = { "cpll", "gpll", "xin24m" };
GATE(CLK_PCIE, "pcie", "aclk133", GATE_IP_FSYS, 14, 0, 0),
GATE(CLK_SMMU_PCIE, "smmu_pcie", "aclk133", GATE_IP_FSYS, 18, 0, 0),
GATE(CLK_MODEMIF, "modemif", "aclk100", GATE_IP_PERIL, 28, 0, 0),
- GATE(CLK_CHIPID, "chipid", "aclk100", E4210_GATE_IP_PERIR, 0, 0, 0),
+ GATE(CLK_CHIPID, "chipid", "aclk100", E4210_GATE_IP_PERIR, 0, CLK_IGNORE_UNUSED, 0),
GATE(CLK_SYSREG, "sysreg", "aclk100", E4210_GATE_IP_PERIR, 0,
CLK_IGNORE_UNUSED, 0),
GATE(CLK_HDMI_CEC, "hdmi_cec", "aclk100", E4210_GATE_IP_PERIR, 11, 0,
0),
GATE(CLK_TSADC, "tsadc", "aclk133", E4X12_GATE_BUS_FSYS1, 16, 0, 0),
GATE(CLK_MIPI_HSI, "mipi_hsi", "aclk133", GATE_IP_FSYS, 10, 0, 0),
- GATE(CLK_CHIPID, "chipid", "aclk100", E4X12_GATE_IP_PERIR, 0, 0, 0),
+ GATE(CLK_CHIPID, "chipid", "aclk100", E4X12_GATE_IP_PERIR, 0, CLK_IGNORE_UNUSED, 0),
GATE(CLK_SYSREG, "sysreg", "aclk100", E4X12_GATE_IP_PERIR, 1,
CLK_IGNORE_UNUSED, 0),
GATE(CLK_HDMI_CEC, "hdmi_cec", "aclk100", E4X12_GATE_IP_PERIR, 11, 0,
* main G3D clock enablement status.
*/
clk_prepare_enable(__clk_lookup("mout_sw_aclk_g3d"));
+ /*
+ * Keep top BPLL mux enabled permanently to ensure that DRAM operates
+ * properly.
+ */
+ clk_prepare_enable(__clk_lookup("mout_bpll"));
samsung_clk_of_add_provider(np, ctx);
}
{ STRATIX10_EMAC_B_FREE_CLK, "emacb_free_clk", NULL, emacb_free_mux, ARRAY_SIZE(emacb_free_mux),
0, 0, 2, 0xB0, 1},
{ STRATIX10_EMAC_PTP_FREE_CLK, "emac_ptp_free_clk", NULL, emac_ptp_free_mux,
- ARRAY_SIZE(emac_ptp_free_mux), 0, 0, 4, 0xB0, 2},
+ ARRAY_SIZE(emac_ptp_free_mux), 0, 0, 2, 0xB0, 2},
{ STRATIX10_GPIO_DB_FREE_CLK, "gpio_db_free_clk", NULL, gpio_db_free_mux,
ARRAY_SIZE(gpio_db_free_mux), 0, 0, 0, 0xB0, 3},
{ STRATIX10_SDMMC_FREE_CLK, "sdmmc_free_clk", NULL, sdmmc_free_mux,
unsigned long flags = 0;
unsigned long input_rate;
- if (clk_pll_is_enabled(hw))
- return 0;
-
input_rate = clk_hw_get_rate(clk_hw_get_parent(hw));
if (_get_table_rate(hw, &sel, pll->params->fixed_rate, input_rate))
pll_writel(val, PLLE_SS_CTRL, pll);
udelay(1);
- /* Enable hw control of xusb brick pll */
+ /* Enable HW control of XUSB brick PLL */
val = pll_readl_misc(pll);
val &= ~PLLE_MISC_IDDQ_SW_CTRL;
pll_writel_misc(val, pll);
val |= XUSBIO_PLL_CFG0_SEQ_ENABLE;
pll_writel(val, XUSBIO_PLL_CFG0, pll);
- /* Enable hw control of SATA pll */
+ /* Enable HW control of SATA PLL */
val = pll_readl(SATA_PLL_CFG0, pll);
val &= ~SATA_PLL_CFG0_PADPLL_RESET_SWCTL;
val |= SATA_PLL_CFG0_PADPLL_USE_LOCKDET;
#include <linux/io.h>
#include <linux/slab.h>
+#include "clk.h"
+
#define CLK_SOURCE_EMC 0x19c
#define CLK_SOURCE_EMC_2X_CLK_SRC GENMASK(31, 29)
#define CLK_SOURCE_EMC_MC_EMC_SAME_FREQ BIT(16)
for_each_available_child_of_node(np, child) {
ret = integrator_impd1_clk_spawn(dev, np, child);
- if (ret)
+ if (ret) {
+ of_node_put(child);
break;
+ }
}
return ret;
return PTR_ERR(clk);
}
- ret = ENXIO;
+ ret = -ENXIO;
base = of_iomap(node, 0);
if (!base) {
pr_err("failed to map registers for clockevent\n");
#include <linux/interrupt.h>
#include <linux/of_irq.h>
#include <linux/smp.h>
+#include <linux/timex.h>
+
+#ifndef CONFIG_RISCV_M_MODE
+#include <asm/clint.h>
+#endif
#define CLINT_IPI_OFF 0
#define CLINT_TIMER_CMP_OFF 0x4000
static unsigned long clint_timer_freq;
static unsigned int clint_timer_irq;
+#ifdef CONFIG_RISCV_M_MODE
+u64 __iomem *clint_time_val;
+#endif
+
static void clint_send_ipi(const struct cpumask *target)
{
unsigned int cpu;
clint_timer_val = base + CLINT_TIMER_VAL_OFF;
clint_timer_freq = riscv_timebase;
+#ifdef CONFIG_RISCV_M_MODE
+ /*
+ * Yes, that's an odd naming scheme. time_val is public, but hopefully
+ * will die in favor of something cleaner.
+ */
+ clint_time_val = clint_timer_val;
+#endif
+
pr_info("%pOFP: timer running at %ld Hz\n", np, clint_timer_freq);
rc = clocksource_register_hz(&clint_clocksource, clint_timer_freq);
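
The exported clint_time_val pointer exists because M-mode kernels have no
"time" CSR, so the timebase must be read straight from CLINT MMIO. A rough
consumer-side sketch for 64-bit, following the asm/timex.h convention (an
assumption here, since that file is outside this diff):

	static inline u64 example_get_cycles64(void)
	{
		return readq_relaxed(clint_time_val);
	}
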
void __iomem *base = timer_of_base(to_timer_of(ce));
writel_relaxed(GX6605S_STATUS_CLR, base + TIMER_STATUS);
+ writel_relaxed(0, base + TIMER_INI);
ce->event_handler(ce);
return !(tidr >> 16);
}
+static void dmtimer_systimer_enable(struct dmtimer_systimer *t)
+{
+ u32 val;
+
+ if (dmtimer_systimer_revision1(t))
+ val = DMTIMER_TYPE1_ENABLE;
+ else
+ val = DMTIMER_TYPE2_ENABLE;
+
+ writel_relaxed(val, t->base + t->sysc);
+}
+
+static void dmtimer_systimer_disable(struct dmtimer_systimer *t)
+{
+ if (!dmtimer_systimer_revision1(t))
+ return;
+
+ writel_relaxed(DMTIMER_TYPE1_DISABLE, t->base + t->sysc);
+}
+
static int __init dmtimer_systimer_type1_reset(struct dmtimer_systimer *t)
{
void __iomem *syss = t->base + OMAP_TIMER_V1_SYS_STAT_OFFSET;
int ret;
u32 l;
+ dmtimer_systimer_enable(t);
writel_relaxed(BIT(1) | BIT(2), t->base + t->ifctrl);
ret = readl_poll_timeout_atomic(syss, l, l & BIT(0), 100,
DMTIMER_RESET_WAIT);
void __iomem *sysc = t->base + t->sysc;
u32 l;
+ dmtimer_systimer_enable(t);
l = readl_relaxed(sysc);
l |= BIT(0);
writel_relaxed(l, sysc);
return 0;
}
-static void dmtimer_systimer_enable(struct dmtimer_systimer *t)
-{
- u32 val;
-
- if (dmtimer_systimer_revision1(t))
- val = DMTIMER_TYPE1_ENABLE;
- else
- val = DMTIMER_TYPE2_ENABLE;
-
- writel_relaxed(val, t->base + t->sysc);
-}
-
-static void dmtimer_systimer_disable(struct dmtimer_systimer *t)
-{
- if (!dmtimer_systimer_revision1(t))
- return;
-
- writel_relaxed(DMTIMER_TYPE1_DISABLE, t->base + t->sysc);
-}
-
static int __init dmtimer_systimer_setup(struct device_node *np,
struct dmtimer_systimer *t)
{
t->wakeup = regbase + _OMAP_TIMER_WAKEUP_EN_OFFSET;
t->ifctrl = regbase + _OMAP_TIMER_IF_CTRL_OFFSET;
- dmtimer_systimer_enable(t);
dmtimer_systimer_reset(t);
+ dmtimer_systimer_enable(t);
pr_debug("dmtimer rev %08x sysc %08x\n", readl_relaxed(t->base),
readl_relaxed(t->base + t->sysc));
return -1;
	/* Do runtime PM to manage a hierarchical CPU topology. */
- pm_runtime_put_sync_suspend(pd_dev);
+ RCU_NONIDLE(pm_runtime_put_sync_suspend(pd_dev));
state = psci_get_domain_state();
if (!state)
ret = psci_cpu_suspend_enter(state) ? -1 : idx;
- pm_runtime_get_sync(pd_dev);
+ RCU_NONIDLE(pm_runtime_get_sync(pd_dev));
cpu_pm_exit();
for (i = 0; i < nr_xcede_records; i++) {
struct xcede_latency_record *record = &payload->records[i];
u64 latency_tb = be64_to_cpu(record->latency_ticks);
- u64 latency_us = tb_to_ns(latency_tb) / NSEC_PER_USEC;
+ u64 latency_us = DIV_ROUND_UP_ULL(tb_to_ns(latency_tb), NSEC_PER_USEC);
+
+ if (latency_us == 0)
+ pr_warn("cpuidle: xcede record %d has an unrealistic latency of 0us.\n", i);
if (latency_us < min_latency_us)
min_latency_us = latency_us;
* Perform the fix-up.
*/
if (min_latency_us < dedicated_states[1].exit_latency) {
- u64 cede0_latency = min_latency_us - 1;
+ /*
+ * We set a minimum of 1us wakeup latency for cede0 to
+ * distinguish it from snooze
+ */
+ u64 cede0_latency = 1;
- if (cede0_latency <= 0)
- cede0_latency = min_latency_us;
+ if (min_latency_us > cede0_latency)
+ cede0_latency = min_latency_us - 1;
dedicated_states[1].exit_latency = cede0_latency;
dedicated_states[1].target_residency = 10 * (cede0_latency);
struct cpuidle_device *dev, int index)
{
ktime_t time_start, time_end;
+ struct cpuidle_state *target_state = &drv->states[index];
time_start = ns_to_ktime(local_clock());
- /*
- * trace_suspend_resume() called by tick_freeze() for the last CPU
- * executing it contains RCU usage regarded as invalid in the idle
- * context, so tell RCU about that.
- */
tick_freeze();
/*
* The state used here cannot be a "coupled" one, because the "coupled"
* suspended is generally unsafe.
*/
stop_critical_timings();
- rcu_idle_enter();
- drv->states[index].enter_s2idle(dev, drv, index);
+ if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
+ rcu_idle_enter();
+ target_state->enter_s2idle(dev, drv, index);
if (WARN_ON_ONCE(!irqs_disabled()))
local_irq_disable();
- /*
- * timekeeping_resume() that will be called by tick_unfreeze() for the
- * first CPU executing it calls functions containing RCU read-side
- * critical sections, so tell RCU about that.
- */
- rcu_idle_exit();
+ if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
+ rcu_idle_exit();
tick_unfreeze();
start_critical_timings();
time_start = ns_to_ktime(local_clock());
stop_critical_timings();
- rcu_idle_enter();
+ if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
+ rcu_idle_enter();
entered_state = target_state->enter(dev, drv, index);
- rcu_idle_exit();
+ if (!(target_state->flags & CPUIDLE_FLAG_RCU_IDLE))
+ rcu_idle_exit();
start_critical_timings();
sched_clock_idle_wakeup_event();
return false;
}
+ if (!dax_dev) {
+ pr_debug("%s: error: dax unsupported by block device\n",
+ bdevname(bdev, buf));
+ return false;
+ }
+
err = bdev_dax_pgoff(bdev, start, PAGE_SIZE, &pgoff);
if (err) {
pr_info("%s: error: unaligned partition for dax\n",
return false;
}
- if (!dax_dev || !bdev_dax_supported(bdev, blocksize)) {
- pr_debug("%s: error: dax unsupported by block device\n",
- bdevname(bdev, buf));
- return false;
- }
-
id = dax_read_lock();
len = dax_direct_access(dax_dev, pgoff, 1, &kaddr, &pfn);
len2 = dax_direct_access(dax_dev, pgoff_end, 1, &end_kaddr, &end_pfn);
bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
int blocksize, sector_t start, sector_t len)
{
+ if (!dax_dev)
+ return false;
+
if (!dax_alive(dax_dev))
return false;
return dax_dev->ops->dax_supported(dax_dev, bdev, blocksize, start, len);
}
+EXPORT_SYMBOL_GPL(dax_supported);
size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
size_t bytes, struct iov_iter *i)
struct devfreq *p_devfreq = NULL;
unsigned long cur_freq, min_freq, max_freq;
unsigned int polling_ms;
+ unsigned int timer;
- seq_printf(s, "%-30s %-30s %-15s %10s %12s %12s %12s\n",
+ seq_printf(s, "%-30s %-30s %-15s %-10s %10s %12s %12s %12s\n",
"dev",
"parent_dev",
"governor",
+ "timer",
"polling_ms",
"cur_freq_Hz",
"min_freq_Hz",
"max_freq_Hz");
- seq_printf(s, "%30s %30s %15s %10s %12s %12s %12s\n",
+ seq_printf(s, "%30s %30s %15s %10s %10s %12s %12s %12s\n",
"------------------------------",
"------------------------------",
"---------------",
"----------",
+ "----------",
"------------",
"------------",
"------------");
cur_freq = devfreq->previous_freq;
get_freq_range(devfreq, &min_freq, &max_freq);
polling_ms = devfreq->profile->polling_ms;
+ timer = devfreq->profile->timer;
mutex_unlock(&devfreq->lock);
seq_printf(s,
- "%-30s %-30s %-15s %10d %12ld %12ld %12ld\n",
+ "%-30s %-30s %-15s %-10s %10d %12ld %12ld %12ld\n",
dev_name(&devfreq->dev),
p_devfreq ? dev_name(&p_devfreq->dev) : "null",
devfreq->governor_name,
+ polling_ms ? timer_name[timer] : "null",
polling_ms,
cur_freq,
min_freq,
rate = clk_round_rate(tegra->emc_clock, ULONG_MAX);
if (rate < 0) {
dev_err(&pdev->dev, "Failed to round clock rate: %ld\n", rate);
- return rate;
+ err = rate;
+ goto disable_clk;
}
tegra->max_freq = rate / KHZ;
dev_pm_opp_remove_all_dynamic(&pdev->dev);
reset_control_reset(tegra->reset);
+disable_clk:
clk_disable_unprepare(tegra->clock);
return err;
struct dma_buf *dmabuf;
dmabuf = dentry->d_fsdata;
+ if (unlikely(!dmabuf))
+ return;
BUG_ON(dmabuf->vmapping_counter);
* @nr_channels: number of channels under test
* @lock: access protection to the fields of this structure
* @did_init: module has been initialized completely
+ * @last_error: test has faced configuration issues
*/
static struct dmatest_info {
/* Test parameters */
/* Internal state */
struct list_head channels;
unsigned int nr_channels;
+ int last_error;
struct mutex lock;
bool did_init;
} test_info = {
return ret;
} else if (dmatest_run) {
if (!is_threaded_test_pending(info)) {
- pr_info("No channels configured, continue with any\n");
- if (!is_threaded_test_run(info))
- stop_threaded_test(info);
- add_threaded_test(info);
+ /*
+ * We have nothing to run. This can be due to:
+ */
+ ret = info->last_error;
+ if (ret) {
+ /* 1) Misconfiguration */
+ pr_err("Channel misconfigured, can't continue\n");
+ mutex_unlock(&info->lock);
+ return ret;
+ } else {
+ /* 2) We rely on defaults */
+ pr_info("No channels configured, continue with any\n");
+ if (!is_threaded_test_run(info))
+ stop_threaded_test(info);
+ add_threaded_test(info);
+ }
}
start_threaded_tests(info);
} else {
struct dmatest_info *info = &test_info;
struct dmatest_chan *dtc;
char chan_reset_val[20];
- int ret = 0;
+ int ret;
mutex_lock(&info->lock);
ret = param_set_copystring(val, kp);
goto add_chan_err;
}
+ info->last_error = ret;
mutex_unlock(&info->lock);
return ret;
add_chan_err:
param_set_copystring(chan_reset_val, kp);
+ info->last_error = ret;
mutex_unlock(&info->lock);
return ret;
if (!force_load && idx < 0)
return -ENODEV;
} else {
+ force_load = true;
idx = 0;
}
struct mem_ctl_info *mci;
unsigned long flags;
+ if (!force_load)
+ return;
+
mutex_lock(&ghes_reg_mutex);
system_scanned = false;
+ memset(&ghes_hw, 0, sizeof(struct ghes_hw_desc));
if (!refcount_dec_and_test(&ghes_refcount))
goto unlock;
{
int ret;
- if (!efi_enabled(EFI_RUNTIME_SERVICES))
+ if (!efivars_kobject() || !efivar_supports_writes())
return -ENODEV;
ret = register_reboot_notifier(&efibc_reboot_notifier);
return ret;
}
- if (adev->asic_type == CHIP_NAVI10) {
+ if (adev->asic_type == CHIP_NAVI10 || adev->asic_type == CHIP_SIENNA_CICHLID) {
		ret = psp_sysfs_init(adev);
if (ret) {
return ret;
MODULE_FIRMWARE("amdgpu/sienna_cichlid_sos.bin");
MODULE_FIRMWARE("amdgpu/sienna_cichlid_ta.bin");
MODULE_FIRMWARE("amdgpu/navy_flounder_sos.bin");
-MODULE_FIRMWARE("amdgpu/navy_flounder_asd.bin");
+MODULE_FIRMWARE("amdgpu/navy_flounder_ta.bin");
/* address block */
#define smnMP1_FIRMWARE_FLAGS 0x3010024
dqm->sched_running = false;
dqm_unlock(dqm);
+ pm_release_ib(&dqm->packets);
+
kfd_gtt_sa_free(dqm->dev, dqm->fence_mem);
pm_uninit(&dqm->packets, hanging);
if (q->properties.is_active) {
increment_queue_count(dqm, q->properties.type);
- retval = execute_queues_cpsch(dqm,
+ execute_queues_cpsch(dqm,
KFD_UNMAP_QUEUES_FILTER_DYNAMIC_QUEUES, 0);
}
{
}
-static bool does_crtc_have_active_cursor(struct drm_crtc_state *new_crtc_state)
-{
- struct drm_device *dev = new_crtc_state->crtc->dev;
- struct drm_plane *plane;
-
- drm_for_each_plane_mask(plane, dev, new_crtc_state->plane_mask) {
- if (plane->type == DRM_PLANE_TYPE_CURSOR)
- return true;
- }
-
- return false;
-}
-
static int count_crtc_active_planes(struct drm_crtc_state *new_crtc_state)
{
struct drm_atomic_state *state = new_crtc_state->state;
return ret;
}
- /* In some use cases, like reset, no stream is attached */
- if (!dm_crtc_state->stream)
- return 0;
-
/*
- * We want at least one hardware plane enabled to use
- * the stream with a cursor enabled.
+ * We require the primary plane to be enabled whenever the CRTC is, otherwise
+ * drm_mode_cursor_universal may end up trying to enable the cursor plane while all other
+ * planes are disabled, which is not supported by the hardware. And there is legacy
+ * userspace which stops using the HW cursor altogether in response to the resulting EINVAL.
*/
- if (state->enable && state->active &&
- does_crtc_have_active_cursor(state) &&
- dm_crtc_state->active_planes == 0)
+ if (state->enable &&
+ !(state->plane_mask & drm_plane_mask(crtc->primary)))
return -EINVAL;
+ /* In some use cases, like reset, no stream is attached */
+ if (!dm_crtc_state->stream)
+ return 0;
+
if (dc_validate_stream(dc, dm_crtc_state->stream) == DC_OK)
return 0;
},
},
.num_states = 5,
- .sr_exit_time_us = 8.6,
- .sr_enter_plus_exit_time_us = 10.9,
+ .sr_exit_time_us = 11.6,
+ .sr_enter_plus_exit_time_us = 13.9,
.urgent_latency_us = 4.0,
.urgent_latency_pixel_data_only_us = 4.0,
.urgent_latency_pixel_mixed_with_vm_data_us = 4.0,
#define MOD_HDCP_LOG_H_
#ifdef CONFIG_DRM_AMD_DC_HDCP
-#define HDCP_LOG_ERR(hdcp, ...) DRM_WARN(__VA_ARGS__)
+#define HDCP_LOG_ERR(hdcp, ...) DRM_DEBUG_KMS(__VA_ARGS__)
#define HDCP_LOG_VER(hdcp, ...) DRM_DEBUG_KMS(__VA_ARGS__)
#define HDCP_LOG_FSM(hdcp, ...) DRM_DEBUG_KMS(__VA_ARGS__)
#define HDCP_LOG_TOP(hdcp, ...) pr_debug("[HDCP_TOP]:"__VA_ARGS__)
enum mod_hdcp_status status = MOD_HDCP_STATUS_SUCCESS;
if (!psp->dtm_context.dtm_initialized) {
- DRM_ERROR("Failed to add display topology, DTM TA is not initialized.");
+ DRM_INFO("Failed to add display topology, DTM TA is not initialized.");
display->state = MOD_HDCP_DISPLAY_INACTIVE;
return MOD_HDCP_STATUS_FAILURE;
}
*/
if (smu->uploading_custom_pp_table &&
(adev->asic_type >= CHIP_NAVI10) &&
- (adev->asic_type <= CHIP_NAVI12))
+ (adev->asic_type <= CHIP_NAVY_FLOUNDER))
return 0;
/*
int smu_reset(struct smu_context *smu)
{
struct amdgpu_device *adev = smu->adev;
- int ret = 0;
+ int ret;
+
+ amdgpu_gfx_off_ctrl(smu->adev, false);
ret = smu_hw_fini(adev);
if (ret)
return ret;
ret = smu_late_init(adev);
+ if (ret)
+ return ret;
- return ret;
+ amdgpu_gfx_off_ctrl(smu->adev, true);
+
+ return 0;
}
static int smu_suspend(void *handle)
return __reset_engine(engine);
}
-static struct intel_engine_cs *__active_engine(struct i915_request *rq)
+static bool
+__active_engine(struct i915_request *rq, struct intel_engine_cs **active)
{
struct intel_engine_cs *engine, *locked;
+ bool ret = false;
/*
* Serialise with __i915_request_submit() so that it sees
* is-banned?, or we know the request is already inflight.
+ *
+ * Note that rq->engine is unstable, and so we double
+ * check that we have acquired the lock on the final engine.
*/
locked = READ_ONCE(rq->engine);
spin_lock_irq(&locked->active.lock);
while (unlikely(locked != (engine = READ_ONCE(rq->engine)))) {
spin_unlock(&locked->active.lock);
- spin_lock(&engine->active.lock);
locked = engine;
+ spin_lock(&locked->active.lock);
}
- engine = NULL;
- if (i915_request_is_active(rq) && rq->fence.error != -EIO)
- engine = rq->engine;
+ if (!i915_request_completed(rq)) {
+ if (i915_request_is_active(rq) && rq->fence.error != -EIO)
+ *active = locked;
+ ret = true;
+ }
spin_unlock_irq(&locked->active.lock);
- return engine;
+ return ret;
}
static struct intel_engine_cs *active_engine(struct intel_context *ce)
if (!ce->timeline)
return NULL;
- mutex_lock(&ce->timeline->mutex);
- list_for_each_entry_reverse(rq, &ce->timeline->requests, link) {
- if (i915_request_completed(rq))
- break;
+ rcu_read_lock();
+ list_for_each_entry_rcu(rq, &ce->timeline->requests, link) {
+ if (i915_request_is_active(rq) && i915_request_completed(rq))
+ continue;
/* Check with the backend if the request is inflight */
- engine = __active_engine(rq);
- if (engine)
+ if (__active_engine(rq, &engine))
break;
}
- mutex_unlock(&ce->timeline->mutex);
+ rcu_read_unlock();
return engine;
}
ctx->i915 = i915;
ctx->sched.priority = I915_USER_PRIORITY(I915_PRIORITY_NORMAL);
mutex_init(&ctx->mutex);
+ INIT_LIST_HEAD(&ctx->link);
spin_lock_init(&ctx->stale.lock);
INIT_LIST_HEAD(&ctx->stale.engines);
for (i = 0; i < ARRAY_SIZE(ctx->hang_timestamp); i++)
ctx->hang_timestamp[i] = jiffies - CONTEXT_FAST_HANG_JIFFIES;
- spin_lock(&i915->gem.contexts.lock);
- list_add_tail(&ctx->link, &i915->gem.contexts.list);
- spin_unlock(&i915->gem.contexts.lock);
-
return ctx;
err_free:
struct drm_i915_file_private *fpriv,
u32 *id)
{
+ struct drm_i915_private *i915 = ctx->i915;
struct i915_address_space *vm;
int ret;
/* And finally expose ourselves to userspace via the idr */
ret = xa_alloc(&fpriv->context_xa, id, ctx, xa_limit_32b, GFP_KERNEL);
if (ret)
- put_pid(fetch_and_zero(&ctx->pid));
+ goto err_pid;
+
+ spin_lock(&i915->gem.contexts.lock);
+ list_add_tail(&ctx->link, &i915->gem.contexts.list);
+ spin_unlock(&i915->gem.contexts.lock);
+
+ return 0;
+err_pid:
+ put_pid(fetch_and_zero(&ctx->pid));
return ret;
}
memset_p((void **)ports, NULL, count);
}
+static inline void
+copy_ports(struct i915_request **dst, struct i915_request **src, int count)
+{
+ /* A memcpy_p() would be very useful here! */
+ while (count--)
+ WRITE_ONCE(*dst++, *src++); /* avoid write tearing */
+}
+
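
copy_ports() publishes each pointer with WRITE_ONCE() precisely so that
the lockless READ_ONCE() reader added further down in this patch can never
observe a torn pointer. A stripped-down sketch of the reader side of that
pairing (illustrative, the function name is invented):

	static struct i915_request *
	example_peek_port(struct i915_request * const *port)
	{
		/*
		 * Safe even while pending[] is being copied over
		 * inflight[]: the loaded pointer is always either the
		 * old or the new request, never a torn mix of both.
		 */
		return READ_ONCE(*port);
	}
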
static void execlists_dequeue(struct intel_engine_cs *engine)
{
struct intel_engine_execlists * const execlists = &engine->execlists;
/* switch pending to inflight */
GEM_BUG_ON(!assert_pending_valid(execlists, "promote"));
- memcpy(execlists->inflight,
- execlists->pending,
- execlists_num_ports(execlists) *
- sizeof(*execlists->pending));
+ copy_ports(execlists->inflight,
+ execlists->pending,
+ execlists_num_ports(execlists));
smp_wmb(); /* complete the seqlock */
WRITE_ONCE(execlists->active, execlists->inflight);
static struct intel_vgpu *__intel_gvt_create_vgpu(struct intel_gvt *gvt,
struct intel_vgpu_creation_params *param)
{
+ struct drm_i915_private *dev_priv = gvt->gt->i915;
struct intel_vgpu *vgpu;
int ret;
if (ret)
goto out_clean_sched_policy;
- ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D);
+ if (IS_BROADWELL(dev_priv))
+ ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_B);
+ else
+ ret = intel_gvt_hypervisor_set_edid(vgpu, PORT_D);
if (ret)
goto out_clean_sched_policy;
* As we know that there are always preemption points between
* requests, we know that only the currently executing request
* may be still active even though we have cleared the flag.
- * However, we can't rely on our tracking of ELSP[0] to known
+ * However, we can't rely on our tracking of ELSP[0] to know
	 * which request is currently active and so may be stuck, as
	 * the tracking may be an event behind. Instead assume that
* if the context is still inflight, then it is still active
* even if the active flag has been cleared.
+ *
+	 * To further complicate matters, if there is a pending promotion, the HW
+ * may either perform a context switch to the second inflight execlists,
+ * or it may switch to the pending set of execlists. In the case of the
+ * latter, it may send the ACK and we process the event copying the
+ * pending[] over top of inflight[], _overwriting_ our *active. Since
+	 * this implies the HW is arbitrating and not stuck in *active, we do
+ * not worry about complete accuracy, but we do require no read/write
+ * tearing of the pointer [the read of the pointer must be valid, even
+ * as the array is being overwritten, for which we require the writes
+ * to avoid tearing.]
+ *
+ * Note that the read of *execlists->active may race with the promotion
+	 * of execlists->pending[] to execlists->inflight[], overwriting
+ * the value at *execlists->active. This is fine. The promotion implies
+ * that we received an ACK from the HW, and so the context is not
+ * stuck -- if we do not see ourselves in *active, the inflight status
+ * is valid. If instead we see ourselves being copied into *active,
+ * we are inflight and may signal the callback.
*/
if (!intel_context_inflight(signal->context))
return false;
rcu_read_lock();
- for (port = __engine_active(signal->engine); (rq = *port); port++) {
+ for (port = __engine_active(signal->engine);
+ (rq = READ_ONCE(*port)); /* may race with promotion of pending[] */
+ port++) {
if (rq->context == signal->context) {
inflight = i915_seqno_passed(rq->fence.seqno,
signal->fence.seqno);
do {
list_for_each_entry_safe(pos, next, &x->head, entry) {
- pos->func(pos,
- TASK_NORMAL, fence->error,
- &extra);
+ int wake_flags;
+
+ wake_flags = fence->error;
+ if (pos->func == autoremove_wake_function)
+ wake_flags = 0;
+
+ pos->func(pos, TASK_NORMAL, wake_flags, &extra);
}
if (list_empty(&extra))
struct drm_i915_private *mock_gem_device(void)
{
- struct drm_i915_private *i915;
- struct pci_dev *pdev;
#if IS_ENABLED(CONFIG_IOMMU_API) && defined(CONFIG_INTEL_IOMMU)
- struct dev_iommu iommu;
+ static struct dev_iommu fake_iommu = { .priv = (void *)-1 };
#endif
+ struct drm_i915_private *i915;
+ struct pci_dev *pdev;
int err;
pdev = kzalloc(sizeof(*pdev), GFP_KERNEL);
dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
#if IS_ENABLED(CONFIG_IOMMU_API) && defined(CONFIG_INTEL_IOMMU)
- /* HACK HACK HACK to disable iommu for the fake device; force identity mapping */
- memset(&iommu, 0, sizeof(iommu));
- iommu.priv = (void *)-1;
- pdev->dev.iommu = &iommu;
+ /* HACK to disable iommu for the fake device; force identity mapping */
+ pdev->dev.iommu = &fake_iommu;
#endif
pci_set_drvdata(pdev, i915);
drm_crtc_index(&mtk_crtc->base));
mtk_crtc->cmdq_client = NULL;
}
- ret = of_property_read_u32_index(priv->mutex_node,
- "mediatek,gce-events",
- drm_crtc_index(&mtk_crtc->base),
- &mtk_crtc->cmdq_event);
- if (ret)
- dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
- drm_crtc_index(&mtk_crtc->base));
+
+ if (mtk_crtc->cmdq_client) {
+ ret = of_property_read_u32_index(priv->mutex_node,
+ "mediatek,gce-events",
+ drm_crtc_index(&mtk_crtc->base),
+ &mtk_crtc->cmdq_event);
+ if (ret) {
+ dev_dbg(dev, "mtk_crtc %d failed to get mediatek,gce-events property\n",
+ drm_crtc_index(&mtk_crtc->base));
+ cmdq_mbox_destroy(mtk_crtc->cmdq_client);
+ mtk_crtc->cmdq_client = NULL;
+ }
+ }
#endif
return 0;
}
#if IS_REACHABLE(CONFIG_MTK_CMDQ)
if (of_address_to_resource(node, 0, &res) != 0) {
dev_err(dev, "Missing reg in %s node\n", node->full_name);
+ put_device(&larb_pdev->dev);
return -EINVAL;
}
comp->regs_pa = res.start;
#include "mtk_drm_crtc.h"
#include "mtk_drm_ddp.h"
-#include "mtk_drm_ddp.h"
#include "mtk_drm_ddp_comp.h"
#include "mtk_drm_drv.h"
#include "mtk_drm_gem.h"
ret = drmm_mode_config_init(drm);
if (ret)
- return ret;
+ goto put_mutex_dev;
drm->mode_config.min_width = 64;
drm->mode_config.min_height = 64;
ret = component_bind_all(drm->dev, drm);
if (ret)
- return ret;
+ goto put_mutex_dev;
/*
* We currently support two fixed data streams, each optional,
}
if (!dma_dev->dma_parms) {
ret = -ENOMEM;
- goto err_component_unbind;
+ goto put_dma_dev;
}
ret = dma_set_max_seg_size(dma_dev, (unsigned int)DMA_BIT_MASK(32));
err_unset_dma_parms:
if (private->dma_parms_allocated)
dma_dev->dma_parms = NULL;
+put_dma_dev:
+ put_device(private->dma_dev);
err_component_unbind:
component_unbind_all(drm->dev, drm);
-
+put_mutex_dev:
+ put_device(private->mutex_dev);
return ret;
}
pm_runtime_disable(dev);
err_node:
of_node_put(private->mutex_node);
- for (i = 0; i < DDP_COMPONENT_ID_MAX; i++)
+ for (i = 0; i < DDP_COMPONENT_ID_MAX; i++) {
of_node_put(private->comp_node[i]);
+ if (private->ddp_comp[i]) {
+ put_device(private->ddp_comp[i]->larb_dev);
+ private->ddp_comp[i] = NULL;
+ }
+ }
return ret;
}
horizontal_sync_active_byte = (vm->hsync_len * dsi_tmp_buf_bpp - 10);
if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE)
- horizontal_backporch_byte =
- (vm->hback_porch * dsi_tmp_buf_bpp - 10);
+ horizontal_backporch_byte = vm->hback_porch * dsi_tmp_buf_bpp;
else
- horizontal_backporch_byte = ((vm->hback_porch + vm->hsync_len) *
- dsi_tmp_buf_bpp - 10);
+ horizontal_backporch_byte = (vm->hback_porch + vm->hsync_len) *
+ dsi_tmp_buf_bpp;
data_phy_cycles = timing->lpx + timing->da_hs_prepare +
- timing->da_hs_zero + timing->da_hs_exit + 3;
+ timing->da_hs_zero + timing->da_hs_exit;
if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO_BURST) {
if ((vm->hfront_porch + vm->hback_porch) * dsi_tmp_buf_bpp >
dev_err(dev,
"Failed to get system configuration registers: %d\n",
ret);
- return ret;
+ goto put_device;
}
hdmi->sys_regmap = regmap;
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
hdmi->regs = devm_ioremap_resource(dev, mem);
- if (IS_ERR(hdmi->regs))
- return PTR_ERR(hdmi->regs);
+ if (IS_ERR(hdmi->regs)) {
+ ret = PTR_ERR(hdmi->regs);
+ goto put_device;
+ }
remote = of_graph_get_remote_node(np, 1, 0);
- if (!remote)
- return -EINVAL;
+ if (!remote) {
+ ret = -EINVAL;
+ goto put_device;
+ }
if (!of_device_is_compatible(remote, "hdmi-connector")) {
hdmi->next_bridge = of_drm_find_bridge(remote);
if (!hdmi->next_bridge) {
dev_err(dev, "Waiting for external bridge\n");
of_node_put(remote);
- return -EPROBE_DEFER;
+ ret = -EPROBE_DEFER;
+ goto put_device;
}
}
dev_err(dev, "Failed to find ddc-i2c-bus node in %pOF\n",
remote);
of_node_put(remote);
- return -EINVAL;
+ ret = -EINVAL;
+ goto put_device;
}
of_node_put(remote);
of_node_put(i2c_np);
if (!hdmi->ddc_adpt) {
dev_err(dev, "Failed to get ddc i2c adapter by node\n");
- return -EINVAL;
+ ret = -EINVAL;
+ goto put_device;
}
return 0;
+put_device:
+ put_device(hdmi->cec_dev);
+ return ret;
}
/*
/* get matching reference and feedback divider */
*ref_div = min(max(den/post_div, 1u), ref_div_max);
- *fb_div = max(nom * *ref_div * post_div / den, 1u);
+ *fb_div = DIV_ROUND_CLOSEST(nom * *ref_div * post_div, den);
/* limit fb divider to its maximum */
if (*fb_div > fb_div_max) {
/* VI channel CSC units offsets */
#define CCSC00_OFFSET 0xAA050
-#define CCSC01_OFFSET 0xFA000
+#define CCSC01_OFFSET 0xFA050
#define CCSC10_OFFSET 0xA0000
#define CCSC11_OFFSET 0xF0000
.reg_bits = 32,
.val_bits = 32,
.reg_stride = 4,
- .max_register = 0xbfffc, /* guessed */
+ .max_register = 0xffffc, /* guessed */
};
static int sun8i_mixer_of_get_id(struct device_node *node)
card->num_links = 1;
card->name = "vc4-hdmi";
card->dev = dev;
+ card->owner = THIS_MODULE;
/*
* Be careful, snd_soc_register_card() calls dev_set_drvdata() and
void *page_addr;
struct hv_message *msg;
struct vmbus_channel_message_header *hdr;
- u32 message_type;
+ u32 message_type, i;
/*
* CHANNELMSG_UNLOAD_RESPONSE is always delivered to the CPU which was
* functional and vmbus_unload_response() will complete
* vmbus_connection.unload_event. If not, the last thing we can do is
* read message pages for all CPUs directly.
+ *
+ * Wait no more than 10 seconds so that the panic path can't get
+ * hung forever in case the response message isn't seen.
*/
- while (1) {
+ for (i = 0; i < 1000; i++) {
if (completion_done(&vmbus_connection.unload_event))
break;
if (atomic_read(&vmbus_connection.nr_chan_close_on_suspend) > 0)
wait_for_completion(&vmbus_connection.ready_for_suspend_event);
- WARN_ON(atomic_read(&vmbus_connection.nr_chan_fixup_on_resume) != 0);
+ if (atomic_read(&vmbus_connection.nr_chan_fixup_on_resume) != 0) {
+ pr_err("Can not suspend due to a previous failed resuming\n");
+ return -EBUSY;
+ }
mutex_lock(&vmbus_connection.channel_mutex);
vmbus_request_offers();
- wait_for_completion(&vmbus_connection.ready_for_resume_event);
+ if (wait_for_completion_timeout(
+ &vmbus_connection.ready_for_resume_event, 10 * HZ) == 0)
+ pr_err("Some vmbus device is missing after suspending?\n");
/* Reset the event for the next suspend. */
reinit_completion(&vmbus_connection.ready_for_suspend_event);
* These share bit definitions, so use the same values for the enable &
* status bits.
*/
+#define ASPEED_I2CD_INTR_RECV_MASK 0xf000ffff
#define ASPEED_I2CD_INTR_SDA_DL_TIMEOUT BIT(14)
#define ASPEED_I2CD_INTR_BUS_RECOVER_DONE BIT(13)
#define ASPEED_I2CD_INTR_SLAVE_MATCH BIT(7)
writel(irq_received & ~ASPEED_I2CD_INTR_RX_DONE,
bus->base + ASPEED_I2C_INTR_STS_REG);
readl(bus->base + ASPEED_I2C_INTR_STS_REG);
+ irq_received &= ASPEED_I2CD_INTR_RECV_MASK;
irq_remaining = irq_received;
#if IS_ENABLED(CONFIG_I2C_SLAVE)
static inline void i801_acpi_remove(struct i801_priv *priv) { }
#endif
+static unsigned char i801_setup_hstcfg(struct i801_priv *priv)
+{
+ unsigned char hstcfg = priv->original_hstcfg;
+
+ hstcfg &= ~SMBHSTCFG_I2C_EN; /* SMBus timing */
+ hstcfg |= SMBHSTCFG_HST_EN;
+ pci_write_config_byte(priv->pci_dev, SMBHSTCFG, hstcfg);
+ return hstcfg;
+}
+
static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
unsigned char temp;
return err;
}
- pci_read_config_byte(priv->pci_dev, SMBHSTCFG, &temp);
- priv->original_hstcfg = temp;
- temp &= ~SMBHSTCFG_I2C_EN; /* SMBus timing */
- if (!(temp & SMBHSTCFG_HST_EN)) {
+ pci_read_config_byte(priv->pci_dev, SMBHSTCFG, &priv->original_hstcfg);
+ temp = i801_setup_hstcfg(priv);
+ if (!(priv->original_hstcfg & SMBHSTCFG_HST_EN))
dev_info(&dev->dev, "Enabling SMBus device\n");
- temp |= SMBHSTCFG_HST_EN;
- }
- pci_write_config_byte(priv->pci_dev, SMBHSTCFG, temp);
if (temp & SMBHSTCFG_SMB_SMI_EN) {
dev_dbg(&dev->dev, "SMBus using interrupt SMI#\n");
#ifdef CONFIG_PM_SLEEP
static int i801_suspend(struct device *dev)
{
- struct pci_dev *pci_dev = to_pci_dev(dev);
- struct i801_priv *priv = pci_get_drvdata(pci_dev);
+ struct i801_priv *priv = dev_get_drvdata(dev);
- pci_write_config_byte(pci_dev, SMBHSTCFG, priv->original_hstcfg);
+ pci_write_config_byte(priv->pci_dev, SMBHSTCFG, priv->original_hstcfg);
return 0;
}
{
struct i801_priv *priv = dev_get_drvdata(dev);
+ i801_setup_hstcfg(priv);
i801_enable_host_notify(&priv->adapter);
return 0;
unsigned int cnt_mul;
int ret = -EINVAL;
- if (target_speed > I2C_MAX_FAST_MODE_PLUS_FREQ)
- target_speed = I2C_MAX_FAST_MODE_PLUS_FREQ;
+ if (target_speed > I2C_MAX_HIGH_SPEED_MODE_FREQ)
+ target_speed = I2C_MAX_HIGH_SPEED_MODE_FREQ;
max_step_cnt = mtk_i2c_max_step_cnt(target_speed);
base_step_cnt = max_step_cnt;
for (clk_div = 1; clk_div <= max_clk_div; clk_div++) {
clk_src = parent_clk / clk_div;
- if (target_speed > I2C_MAX_FAST_MODE_FREQ) {
+ if (target_speed > I2C_MAX_FAST_MODE_PLUS_FREQ) {
/* Set master code speed register */
ret = mtk_i2c_calculate_speed(i2c, clk_src,
I2C_MAX_FAST_MODE_FREQ,
#include <linux/of_device.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
+#include <linux/dma/mxs-dma.h>
#define DRIVER_NAME "mxs-i2c"
dma_map_sg(i2c->dev, &i2c->sg_io[0], 1, DMA_TO_DEVICE);
desc = dmaengine_prep_slave_sg(i2c->dmach, &i2c->sg_io[0], 1,
DMA_MEM_TO_DEV,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ DMA_PREP_INTERRUPT |
+ MXS_DMA_CTRL_WAIT4END);
if (!desc) {
dev_err(i2c->dev,
"Failed to get DMA data write descriptor.\n");
dma_map_sg(i2c->dev, &i2c->sg_io[1], 1, DMA_FROM_DEVICE);
desc = dmaengine_prep_slave_sg(i2c->dmach, &i2c->sg_io[1], 1,
DMA_DEV_TO_MEM,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ DMA_PREP_INTERRUPT |
+ MXS_DMA_CTRL_WAIT4END);
if (!desc) {
dev_err(i2c->dev,
"Failed to get DMA data write descriptor.\n");
dma_map_sg(i2c->dev, i2c->sg_io, 2, DMA_TO_DEVICE);
desc = dmaengine_prep_slave_sg(i2c->dmach, i2c->sg_io, 2,
DMA_MEM_TO_DEV,
- DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ DMA_PREP_INTERRUPT |
+ MXS_DMA_CTRL_WAIT4END);
if (!desc) {
dev_err(i2c->dev,
"Failed to get DMA data write descriptor.\n");
/* create pre-declared device nodes */
of_i2c_register_devices(adap);
- i2c_acpi_register_devices(adap);
i2c_acpi_install_space_handler(adap);
+ i2c_acpi_register_devices(adap);
if (adap->nr < __i2c_first_dynamic_bus_num)
i2c_scan_static_board_info(adap);
remove_client_context(device, cid);
}
+ ib_cq_pool_destroy(device);
+
/* Pairs with refcount_set in enable_device */
ib_device_put(device);
wait_for_completion(&device->unreg_completion);
goto out;
}
+ ib_cq_pool_init(device);
+
down_read(&clients_rwsem);
xa_for_each_marked (&clients, index, client, CLIENT_REGISTERED) {
ret = add_client_context(device, client);
goto dev_cleanup;
}
- ib_cq_pool_init(device);
ret = enable_device_and_get(device);
dev_set_uevent_suppress(&device->dev, false);
/* Mark for userspace that device is ready */
goto out;
disable_device(ib_dev);
- ib_cq_pool_destroy(ib_dev);
/* Expedite removing unregistered pointers from the hash table */
free_netdevs(ib_dev);
#include "trackpoint.h"
static const char * const trackpoint_variants[] = {
- [TP_VARIANT_IBM] = "IBM",
- [TP_VARIANT_ALPS] = "ALPS",
- [TP_VARIANT_ELAN] = "Elan",
- [TP_VARIANT_NXP] = "NXP",
+ [TP_VARIANT_IBM] = "IBM",
+ [TP_VARIANT_ALPS] = "ALPS",
+ [TP_VARIANT_ELAN] = "Elan",
+ [TP_VARIANT_NXP] = "NXP",
+ [TP_VARIANT_JYT_SYNAPTICS] = "JYT_Synaptics",
+ [TP_VARIANT_SYNAPTICS] = "Synaptics",
};
/*
* 0x01 was the original IBM trackpoint, others implement very limited
* subset of trackpoint features.
*/
-#define TP_VARIANT_IBM 0x01
-#define TP_VARIANT_ALPS 0x02
-#define TP_VARIANT_ELAN 0x03
-#define TP_VARIANT_NXP 0x04
+#define TP_VARIANT_IBM 0x01
+#define TP_VARIANT_ALPS 0x02
+#define TP_VARIANT_ELAN 0x03
+#define TP_VARIANT_NXP 0x04
+#define TP_VARIANT_JYT_SYNAPTICS 0x05
+#define TP_VARIANT_SYNAPTICS 0x06
/*
* Commands
DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5738"),
},
},
+ {
+ /* Entroware Proteus */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Entroware"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Proteus"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "EL07R4"),
+ },
+ },
{ }
};
DMI_MATCH(DMI_PRODUCT_NAME, "33474HU"),
},
},
+ {
+ /* Entroware Proteus */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Entroware"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Proteus"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "EL07R4"),
+ },
+ },
{ }
};
{
struct amd_ir_data *ir_data = (struct amd_ir_data *)data;
struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
+ u64 valid;
if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
!entry || entry->lo.fields_vapic.guest_mode)
return 0;
+ valid = entry->lo.fields_vapic.valid;
+
entry->lo.val = 0;
entry->hi.val = 0;
+ entry->lo.fields_vapic.valid = valid;
entry->lo.fields_vapic.guest_mode = 1;
entry->lo.fields_vapic.ga_log_intr = 1;
entry->hi.fields.ga_root_ptr = ir_data->ga_root_ptr;
struct amd_ir_data *ir_data = (struct amd_ir_data *)data;
struct irte_ga *entry = (struct irte_ga *) ir_data->entry;
struct irq_cfg *cfg = ir_data->cfg;
- u64 valid = entry->lo.fields_remap.valid;
+ u64 valid;
if (!AMD_IOMMU_GUEST_IR_VAPIC(amd_iommu_guest_ir) ||
!entry || !entry->lo.fields_vapic.guest_mode)
return 0;
+ valid = entry->lo.fields_remap.valid;
+
entry->lo.val = 0;
entry->hi.val = 0;
int device_supports_dax(struct dm_target *ti, struct dm_dev *dev,
sector_t start, sector_t len, void *data)
{
- int blocksize = *(int *) data;
+ int blocksize = *(int *) data, id;
+ bool rc;
- return generic_fsdax_supported(dev->dax_dev, dev->bdev, blocksize,
- start, len);
+ id = dax_read_lock();
+ rc = dax_supported(dev->dax_dev, dev->bdev, blocksize, start, len);
+ dax_read_unlock(id);
+
+ return rc;
}
/* Check devices support synchronous DAX */
{
struct mapped_device *md = dax_get_private(dax_dev);
struct dm_table *map;
+ bool ret = false;
int srcu_idx;
- bool ret;
map = dm_get_live_table(md, &srcu_idx);
if (!map)
- return false;
+ goto out;
ret = dm_table_supports_dax(map, device_supports_dax, &blocksize);
+out:
dm_put_live_table(md, srcu_idx);
return ret;
return ret;
}
-static void dm_queue_split(struct mapped_device *md, struct dm_target *ti, struct bio **bio)
-{
- unsigned len, sector_count;
-
- sector_count = bio_sectors(*bio);
- len = min_t(sector_t, max_io_len((*bio)->bi_iter.bi_sector, ti), sector_count);
-
- if (sector_count > len) {
- struct bio *split = bio_split(*bio, len, GFP_NOIO, &md->queue->bio_split);
-
- bio_chain(split, *bio);
- trace_block_split(md->queue, split, (*bio)->bi_iter.bi_sector);
- submit_bio_noacct(*bio);
- *bio = split;
- }
-}
-
static blk_qc_t dm_process_bio(struct mapped_device *md,
struct dm_table *map, struct bio *bio)
{
}
/*
- * If in ->queue_bio we need to use blk_queue_split(), otherwise
+ * If in ->submit_bio we need to use blk_queue_split(), otherwise
* queue_limits for abnormal requests (e.g. discard, writesame, etc)
* won't be imposed.
+ * If called from dm_wq_work() for deferred bio processing, bio
+ * was already handled by following code with previous ->submit_bio.
*/
if (current->bio_list) {
if (is_abnormal_io(bio))
blk_queue_split(&bio);
- else
- dm_queue_split(md, ti, &bio);
+ /* regular IO is split by __split_and_process_bio */
}
if (dm_get_md_type(md) == DM_TYPE_NVME_BIO_BASED)
return __process_bio(md, map, bio, ti);
- else
- return __split_and_process_bio(md, map, bio);
+ return __split_and_process_bio(md, map, bio);
}
static blk_qc_t dm_submit_bio(struct bio *bio)
/* Cancel the pending timeout work */
if (!cancel_delayed_work(&data->work)) {
mutex_unlock(&adap->lock);
- flush_scheduled_work();
+ cancel_delayed_work_sync(&data->work);
mutex_lock(&adap->lock);
}
/*
}
EXPORT_SYMBOL(vb2_verify_memory_type);
-static void set_queue_consistency(struct vb2_queue *q, bool consistent_mem)
-{
- q->dma_attrs &= ~DMA_ATTR_NON_CONSISTENT;
-
- if (!vb2_queue_allows_cache_hints(q))
- return;
- if (!consistent_mem)
- q->dma_attrs |= DMA_ATTR_NON_CONSISTENT;
-}
-
-static bool verify_consistency_attr(struct vb2_queue *q, bool consistent_mem)
-{
- bool queue_is_consistent = !(q->dma_attrs & DMA_ATTR_NON_CONSISTENT);
-
- if (consistent_mem != queue_is_consistent) {
- dprintk(q, 1, "memory consistency model mismatch\n");
- return false;
- }
- return true;
-}
-
int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
- unsigned int flags, unsigned int *count)
+ unsigned int *count)
{
unsigned int num_buffers, allocated_buffers, num_planes = 0;
unsigned plane_sizes[VB2_MAX_PLANES] = { };
- bool consistent_mem = true;
unsigned int i;
int ret;
- if (flags & V4L2_FLAG_MEMORY_NON_CONSISTENT)
- consistent_mem = false;
-
if (q->streaming) {
dprintk(q, 1, "streaming active\n");
return -EBUSY;
}
if (*count == 0 || q->num_buffers != 0 ||
- (q->memory != VB2_MEMORY_UNKNOWN && q->memory != memory) ||
- !verify_consistency_attr(q, consistent_mem)) {
+ (q->memory != VB2_MEMORY_UNKNOWN && q->memory != memory)) {
/*
* We already have buffers allocated, so first check if they
* are not in use and can be freed.
num_buffers = min_t(unsigned int, num_buffers, VB2_MAX_FRAME);
memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
q->memory = memory;
- set_queue_consistency(q, consistent_mem);
/*
* Ask the driver how many buffers and planes per buffer it requires.
EXPORT_SYMBOL_GPL(vb2_core_reqbufs);
int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
- unsigned int flags, unsigned int *count,
+ unsigned int *count,
unsigned int requested_planes,
const unsigned int requested_sizes[])
{
unsigned int num_planes = 0, num_buffers, allocated_buffers;
unsigned plane_sizes[VB2_MAX_PLANES] = { };
- bool consistent_mem = true;
int ret;
- if (flags & V4L2_FLAG_MEMORY_NON_CONSISTENT)
- consistent_mem = false;
-
if (q->num_buffers == VB2_MAX_FRAME) {
dprintk(q, 1, "maximum number of buffers already allocated\n");
return -ENOBUFS;
}
memset(q->alloc_devs, 0, sizeof(q->alloc_devs));
q->memory = memory;
- set_queue_consistency(q, consistent_mem);
q->waiting_for_buffers = !q->is_output;
} else {
if (q->memory != memory) {
dprintk(q, 1, "memory model mismatch\n");
return -EINVAL;
}
- if (!verify_consistency_attr(q, consistent_mem))
- return -EINVAL;
}
num_buffers = min(*count, VB2_MAX_FRAME - q->num_buffers);
fileio->memory = VB2_MEMORY_MMAP;
fileio->type = q->type;
q->fileio = fileio;
- ret = vb2_core_reqbufs(q, fileio->memory, 0, &fileio->count);
+ ret = vb2_core_reqbufs(q, fileio->memory, &fileio->count);
if (ret)
goto err_kfree;
err_reqbufs:
fileio->count = 0;
- vb2_core_reqbufs(q, fileio->memory, 0, &fileio->count);
+ vb2_core_reqbufs(q, fileio->memory, &fileio->count);
err_kfree:
q->fileio = NULL;
vb2_core_streamoff(q, q->type);
q->fileio = NULL;
fileio->count = 0;
- vb2_core_reqbufs(q, fileio->memory, 0, &fileio->count);
+ vb2_core_reqbufs(q, fileio->memory, &fileio->count);
kfree(fileio);
dprintk(q, 3, "file io emulator closed\n");
}
struct dma_buf_attachment *db_attach;
};
-static inline bool vb2_dc_buffer_consistent(unsigned long attr)
-{
- return !(attr & DMA_ATTR_NON_CONSISTENT);
-}
-
/*********************************************/
/* scatterlist table functions */
/*********************************************/
vb2_dc_dmabuf_ops_begin_cpu_access(struct dma_buf *dbuf,
enum dma_data_direction direction)
{
- struct vb2_dc_buf *buf = dbuf->priv;
- struct sg_table *sgt = buf->dma_sgt;
-
- if (vb2_dc_buffer_consistent(buf->attrs))
- return 0;
-
- dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
return 0;
}
vb2_dc_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
enum dma_data_direction direction)
{
- struct vb2_dc_buf *buf = dbuf->priv;
- struct sg_table *sgt = buf->dma_sgt;
-
- if (vb2_dc_buffer_consistent(buf->attrs))
- return 0;
-
- dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
return 0;
}
/*
* NOTE: dma-sg allocates memory using the page allocator directly, so
* there is no memory consistency guarantee, hence dma-sg ignores DMA
- * attributes passed from the upper layer. That means that
- * V4L2_FLAG_MEMORY_NON_CONSISTENT has no effect on dma-sg buffers.
+ * attributes passed from the upper layer.
*/
buf->pages = kvmalloc_array(buf->num_pages, sizeof(struct page *),
GFP_KERNEL | __GFP_ZERO);
#endif
}
-static void clear_consistency_attr(struct vb2_queue *q,
- int memory,
- unsigned int *flags)
-{
- if (!q->allow_cache_hints || memory != V4L2_MEMORY_MMAP)
- *flags &= ~V4L2_FLAG_MEMORY_NON_CONSISTENT;
-}
-
int vb2_reqbufs(struct vb2_queue *q, struct v4l2_requestbuffers *req)
{
int ret = vb2_verify_memory_type(q, req->memory, req->type);
fill_buf_caps(q, &req->capabilities);
- clear_consistency_attr(q, req->memory, &req->flags);
- return ret ? ret : vb2_core_reqbufs(q, req->memory,
- req->flags, &req->count);
+ return ret ? ret : vb2_core_reqbufs(q, req->memory, &req->count);
}
EXPORT_SYMBOL_GPL(vb2_reqbufs);
unsigned i;
fill_buf_caps(q, &create->capabilities);
- clear_consistency_attr(q, create->memory, &create->flags);
create->index = q->num_buffers;
if (create->count == 0)
return ret != -EBUSY ? ret : 0;
if (requested_sizes[i] == 0)
return -EINVAL;
return ret ? ret : vb2_core_create_bufs(q, create->memory,
- create->flags,
&create->count,
requested_planes,
requested_sizes);
int res = vb2_verify_memory_type(vdev->queue, p->memory, p->type);
fill_buf_caps(vdev->queue, &p->capabilities);
- clear_consistency_attr(vdev->queue, p->memory, &p->flags);
if (res)
return res;
if (vb2_queue_is_busy(vdev, file))
return -EBUSY;
- res = vb2_core_reqbufs(vdev->queue, p->memory, p->flags, &p->count);
+ res = vb2_core_reqbufs(vdev->queue, p->memory, &p->count);
/* If count == 0, then the owner has released all buffers and he
is no longer owner of the queue. Otherwise we have a new owner. */
if (res == 0)
p->index = vdev->queue->num_buffers;
fill_buf_caps(vdev->queue, &p->capabilities);
- clear_consistency_attr(vdev->queue, p->memory, &p->flags);
/*
* If count == 0, then just check if memory and type are valid.
* Any -EBUSY result from vb2_verify_memory_type can be mapped to 0.
ctx->buf_siz = req->size;
ctx->buf_cnt = req->count;
- ret = vb2_core_reqbufs(&ctx->vb_q, VB2_MEMORY_MMAP, 0, &req->count);
+ ret = vb2_core_reqbufs(&ctx->vb_q, VB2_MEMORY_MMAP, &req->count);
if (ret) {
ctx->state = DVB_VB2_STATE_NONE;
dprintk(1, "[%s] count=%d size=%d errno=%d\n", ctx->name,
* @memory: buffer memory type
* @format: frame format, for which buffers are requested
* @capabilities: capabilities of this buffer type.
- * @flags: additional buffer management attributes (ignored unless the
- * queue has V4L2_BUF_CAP_SUPPORTS_MMAP_CACHE_HINTS capability and
- * configured for MMAP streaming I/O).
* @reserved: future extensions
*/
struct v4l2_create_buffers32 {
__u32 memory; /* enum v4l2_memory */
struct v4l2_format32 format;
__u32 capabilities;
- __u32 flags;
- __u32 reserved[6];
+ __u32 reserved[7];
};
static int __bufsize_v4l2_format(struct v4l2_format32 __user *p32, u32 *size)
{
if (!access_ok(p32, sizeof(*p32)) ||
copy_in_user(p64, p32,
- offsetof(struct v4l2_create_buffers32, format)) ||
- assign_in_user(&p64->flags, &p32->flags))
+ offsetof(struct v4l2_create_buffers32, format)))
return -EFAULT;
return __get_v4l2_format32(&p64->format, &p32->format,
aux_buf, aux_space);
copy_in_user(p32, p64,
offsetof(struct v4l2_create_buffers32, format)) ||
assign_in_user(&p32->capabilities, &p64->capabilities) ||
- assign_in_user(&p32->flags, &p64->flags) ||
copy_in_user(p32->reserved, p64->reserved, sizeof(p64->reserved)))
return -EFAULT;
return __put_v4l2_format32(&p64->format, &p32->format);
if (ret)
return ret;
+
+ CLEAR_AFTER_FIELD(p, capabilities);
+
return ops->vidioc_reqbufs(file, fh, p);
}
if (ret)
return ret;
- CLEAR_AFTER_FIELD(create, flags);
+ CLEAR_AFTER_FIELD(create, capabilities);
v4l_sanitize_format(&create->format);
DMA_BIDIRECTIONAL);
}
#else
-static inline mmc_spi_dma_alloc(struct mmc_spi_host *host) { return 0; }
+static inline int mmc_spi_dma_alloc(struct mmc_spi_host *host) { return 0; }
static inline void mmc_spi_dma_free(struct mmc_spi_host *host) {}
#endif
}
/**
- * spi_nor_sr1_bit6_quad_enable() - Set/Unset the Quad Enable BIT(6) in the
- * Status Register 1.
+ * spi_nor_sr1_bit6_quad_enable() - Set the Quad Enable BIT(6) in the Status
+ * Register 1.
* @nor: pointer to a 'struct spi_nor'
- * @enable: true to enable Quad mode, false to disable Quad mode.
*
* Bit 6 of the Status Register 1 is the QE bit for Macronix-like QSPI memories.
*
* Return: 0 on success, -errno otherwise.
*/
-int spi_nor_sr1_bit6_quad_enable(struct spi_nor *nor, bool enable)
+int spi_nor_sr1_bit6_quad_enable(struct spi_nor *nor)
{
int ret;
if (ret)
return ret;
- if ((enable && (nor->bouncebuf[0] & SR1_QUAD_EN_BIT6)) ||
- (!enable && !(nor->bouncebuf[0] & SR1_QUAD_EN_BIT6)))
+ if (nor->bouncebuf[0] & SR1_QUAD_EN_BIT6)
return 0;
- if (enable)
- nor->bouncebuf[0] |= SR1_QUAD_EN_BIT6;
- else
- nor->bouncebuf[0] &= ~SR1_QUAD_EN_BIT6;
+ nor->bouncebuf[0] |= SR1_QUAD_EN_BIT6;
return spi_nor_write_sr1_and_check(nor, nor->bouncebuf[0]);
}
/**
- * spi_nor_sr2_bit1_quad_enable() - set/unset the Quad Enable BIT(1) in the
- * Status Register 2.
+ * spi_nor_sr2_bit1_quad_enable() - set the Quad Enable BIT(1) in the Status
+ * Register 2.
* @nor: pointer to a 'struct spi_nor'.
- * @enable: true to enable Quad mode, false to disable Quad mode.
*
* Bit 1 of the Status Register 2 is the QE bit for Spansion-like QSPI memories.
*
* Return: 0 on success, -errno otherwise.
*/
-int spi_nor_sr2_bit1_quad_enable(struct spi_nor *nor, bool enable)
+int spi_nor_sr2_bit1_quad_enable(struct spi_nor *nor)
{
int ret;
if (nor->flags & SNOR_F_NO_READ_CR)
- return spi_nor_write_16bit_cr_and_check(nor,
- enable ? SR2_QUAD_EN_BIT1 : 0);
+ return spi_nor_write_16bit_cr_and_check(nor, SR2_QUAD_EN_BIT1);
ret = spi_nor_read_cr(nor, nor->bouncebuf);
if (ret)
return ret;
- if ((enable && (nor->bouncebuf[0] & SR2_QUAD_EN_BIT1)) ||
- (!enable && !(nor->bouncebuf[0] & SR2_QUAD_EN_BIT1)))
+ if (nor->bouncebuf[0] & SR2_QUAD_EN_BIT1)
return 0;
- if (enable)
- nor->bouncebuf[0] |= SR2_QUAD_EN_BIT1;
- else
- nor->bouncebuf[0] &= ~SR2_QUAD_EN_BIT1;
+ nor->bouncebuf[0] |= SR2_QUAD_EN_BIT1;
return spi_nor_write_16bit_cr_and_check(nor, nor->bouncebuf[0]);
}
/**
- * spi_nor_sr2_bit7_quad_enable() - set/unset QE bit in Status Register 2.
+ * spi_nor_sr2_bit7_quad_enable() - set QE bit in Status Register 2.
* @nor: pointer to a 'struct spi_nor'
- * @enable: true to enable Quad mode, false to disable Quad mode.
*
* Set the Quad Enable (QE) bit in the Status Register 2.
*
* Return: 0 on success, -errno otherwise.
*/
-int spi_nor_sr2_bit7_quad_enable(struct spi_nor *nor, bool enable)
+int spi_nor_sr2_bit7_quad_enable(struct spi_nor *nor)
{
u8 *sr2 = nor->bouncebuf;
int ret;
ret = spi_nor_read_sr2(nor, sr2);
if (ret)
return ret;
- if ((enable && (*sr2 & SR2_QUAD_EN_BIT7)) ||
- (!enable && !(*sr2 & SR2_QUAD_EN_BIT7)))
+ if (*sr2 & SR2_QUAD_EN_BIT7)
return 0;
/* Update the Quad Enable bit. */
- if (enable)
- *sr2 |= SR2_QUAD_EN_BIT7;
- else
- *sr2 &= ~SR2_QUAD_EN_BIT7;
+ *sr2 |= SR2_QUAD_EN_BIT7;
ret = spi_nor_write_sr2(nor, sr2);
if (ret)
}
/**
- * spi_nor_quad_enable() - enable/disable Quad I/O if needed.
+ * spi_nor_quad_enable() - enable Quad I/O if needed.
* @nor: pointer to a 'struct spi_nor'
- * @enable: true to enable Quad mode. false to disable Quad mode.
*
* Return: 0 on success, -errno otherwise.
*/
-static int spi_nor_quad_enable(struct spi_nor *nor, bool enable)
+static int spi_nor_quad_enable(struct spi_nor *nor)
{
if (!nor->params->quad_enable)
return 0;
spi_nor_get_protocol_width(nor->write_proto) == 4))
return 0;
- return nor->params->quad_enable(nor, enable);
+ return nor->params->quad_enable(nor);
}
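
For context, a minimal illustrative sketch (not part of this diff; the fixup
function name is hypothetical) of how a flash-specific fixup would now select
one of the parameterless Quad Enable hooks:

static void example_nor_default_init(struct spi_nor *nor)
{
	/* Assumed part: the QE bit lives in bit 6 of Status Register 1. */
	nor->params->quad_enable = spi_nor_sr1_bit6_quad_enable;
}
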
/**
{
int err;
- err = spi_nor_quad_enable(nor, true);
+ err = spi_nor_quad_enable(nor);
if (err) {
dev_dbg(nor->dev, "quad mode not supported\n");
return err;
if (nor->addr_width == 4 && !(nor->flags & SNOR_F_4B_OPCODES) &&
nor->flags & SNOR_F_BROKEN_RESET)
nor->params->set_4byte_addr_mode(nor, false);
-
- spi_nor_quad_enable(nor, false);
}
EXPORT_SYMBOL_GPL(spi_nor_restore);
* higher index in the array, the higher priority.
* @erase_map: the erase map parsed from the SFDP Sector Map Parameter
* Table.
- * @quad_enable: enables/disables SPI NOR Quad mode.
+ * @quad_enable: enables SPI NOR quad mode.
* @set_4byte_addr_mode: puts the SPI NOR in 4 byte addressing mode.
* @convert_addr: converts an absolute address into something the flash
* will understand. Particularly useful when pagesize is
struct spi_nor_erase_map erase_map;
- int (*quad_enable)(struct spi_nor *nor, bool enable);
+ int (*quad_enable)(struct spi_nor *nor);
int (*set_4byte_addr_mode)(struct spi_nor *nor, bool enable);
u32 (*convert_addr)(struct spi_nor *nor, u32 addr);
int (*setup)(struct spi_nor *nor, const struct spi_nor_hwcaps *hwcaps);
int spi_nor_wait_till_ready(struct spi_nor *nor);
int spi_nor_lock_and_prep(struct spi_nor *nor);
void spi_nor_unlock_and_unprep(struct spi_nor *nor);
-int spi_nor_sr1_bit6_quad_enable(struct spi_nor *nor, bool enable);
-int spi_nor_sr2_bit1_quad_enable(struct spi_nor *nor, bool enable);
-int spi_nor_sr2_bit7_quad_enable(struct spi_nor *nor, bool enable);
+int spi_nor_sr1_bit6_quad_enable(struct spi_nor *nor);
+int spi_nor_sr2_bit1_quad_enable(struct spi_nor *nor);
+int spi_nor_sr2_bit7_quad_enable(struct spi_nor *nor);
int spi_nor_xread_sr(struct spi_nor *nor, u8 *sr);
ssize_t spi_nor_read_data(struct spi_nor *nor, loff_t from, size_t len,
ksz_port_cfg(dev, port, P_PRIO_CTRL, PORT_802_1P_ENABLE, true);
if (cpu_port) {
+ if (!p->interface && dev->compat_interface) {
+ dev_warn(dev->dev,
+ "Using legacy switch \"phy-mode\" property, because it is missing on port %d node. "
+ "Please update your device tree.\n",
+ port);
+ p->interface = dev->compat_interface;
+ }
+
/* Configure MII interface for proper network communication. */
ksz_read8(dev, REG_PORT_5_CTRL_6, &data8);
data8 &= ~PORT_INTERFACE_TYPE;
data8 &= ~PORT_GMII_1GPS_MODE;
- switch (dev->interface) {
+ switch (p->interface) {
case PHY_INTERFACE_MODE_MII:
p->phydev.speed = SPEED_100;
break;
default:
data8 &= ~PORT_RGMII_ID_IN_ENABLE;
data8 &= ~PORT_RGMII_ID_OUT_ENABLE;
- if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
- dev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
+ if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+ p->interface == PHY_INTERFACE_MODE_RGMII_RXID)
data8 |= PORT_RGMII_ID_IN_ENABLE;
- if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
- dev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
+ if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+ p->interface == PHY_INTERFACE_MODE_RGMII_TXID)
data8 |= PORT_RGMII_ID_OUT_ENABLE;
data8 |= PORT_GMII_1GPS_MODE;
data8 |= PORT_INTERFACE_RGMII;
}
/* set the real number of ports */
- dev->ds->num_ports = dev->port_cnt;
+ dev->ds->num_ports = dev->port_cnt + 1;
return 0;
}
/* configure MAC to 1G & RGMII mode */
ksz_pread8(dev, port, REG_PORT_XMII_CTRL_1, &data8);
- switch (dev->interface) {
+ switch (p->interface) {
case PHY_INTERFACE_MODE_MII:
ksz9477_set_xmii(dev, 0, &data8);
ksz9477_set_gbit(dev, false, &data8);
ksz9477_set_gbit(dev, true, &data8);
data8 &= ~PORT_RGMII_ID_IG_ENABLE;
data8 &= ~PORT_RGMII_ID_EG_ENABLE;
- if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
- dev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
+ if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+ p->interface == PHY_INTERFACE_MODE_RGMII_RXID)
data8 |= PORT_RGMII_ID_IG_ENABLE;
- if (dev->interface == PHY_INTERFACE_MODE_RGMII_ID ||
- dev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
+ if (p->interface == PHY_INTERFACE_MODE_RGMII_ID ||
+ p->interface == PHY_INTERFACE_MODE_RGMII_TXID)
data8 |= PORT_RGMII_ID_EG_ENABLE;
p->phydev.speed = SPEED_1000;
break;
dev->cpu_port = i;
dev->host_mask = (1 << dev->cpu_port);
dev->port_mask |= dev->host_mask;
+ p = &dev->ports[i];
/* Read from XMII register to determine host port
* interface. If set specifically in device tree,
* note the difference to help debugging.
*/
interface = ksz9477_get_interface(dev, i);
- if (!dev->interface)
- dev->interface = interface;
- if (interface && interface != dev->interface)
+ if (!p->interface) {
+ if (dev->compat_interface) {
+ dev_warn(dev->dev,
+ "Using legacy switch \"phy-mode\" property, because it is missing on port %d node. "
+ "Please update your device tree.\n",
+ i);
+ p->interface = dev->compat_interface;
+ } else {
+ p->interface = interface;
+ }
+ }
+ if (interface && interface != p->interface)
dev_info(dev->dev,
"use %s instead of %s\n",
- phy_modes(dev->interface),
+ phy_modes(p->interface),
phy_modes(interface));
/* enable cpu port */
ksz9477_port_setup(dev, i, true);
- p = &dev->ports[dev->cpu_port];
p->vid_member = dev->port_mask;
p->on = 1;
}
const struct ksz_dev_ops *ops)
{
phy_interface_t interface;
+ struct device_node *port;
+ unsigned int port_num;
int ret;
if (dev->pdata)
/* Host port interface will be self detected, or specifically set in
* device tree.
*/
+ for (port_num = 0; port_num < dev->port_cnt; ++port_num)
+ dev->ports[port_num].interface = PHY_INTERFACE_MODE_NA;
if (dev->dev->of_node) {
ret = of_get_phy_mode(dev->dev->of_node, &interface);
if (ret == 0)
- dev->interface = interface;
+ dev->compat_interface = interface;
+ for_each_available_child_of_node(dev->dev->of_node, port) {
+ if (of_property_read_u32(port, "reg", &port_num))
+ continue;
+ if (port_num >= dev->port_cnt)
+ return -EINVAL;
+ of_get_phy_mode(port, &dev->ports[port_num].interface);
+ }
dev->synclko_125 = of_property_read_bool(dev->dev->of_node,
"microchip,synclko-125");
}
u32 freeze:1; /* MIB counter freeze is enabled */
struct ksz_port_mib mib;
+ phy_interface_t interface;
};
struct ksz_device {
int mib_cnt;
int mib_port_cnt;
int last_port; /* ports after that not used */
- phy_interface_t interface;
+ phy_interface_t compat_interface;
u32 regs_size;
bool phy_errata_9477;
bool synclko_125;
if (err)
return err;
- ocelot_init(ocelot);
+ err = ocelot_init(ocelot);
+ if (err)
+ return err;
+
if (ocelot->ptp) {
err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info);
if (err) {
{
struct ocelot *ocelot = ds->priv;
struct felix *felix = ocelot_to_felix(ocelot);
+ int port;
if (felix->info->mdio_bus_free)
felix->info->mdio_bus_free(ocelot);
+ for (port = 0; port < ocelot->num_phys_ports; port++)
+ ocelot_deinit_port(ocelot, port);
ocelot_deinit_timestamp(ocelot);
/* stop workqueue thread */
ocelot_deinit(ocelot);
[VCAP_IS2_HK_DIP_EQ_SIP] = {118, 1},
/* IP4_TCP_UDP (TYPE=100) */
[VCAP_IS2_HK_TCP] = {119, 1},
- [VCAP_IS2_HK_L4_SPORT] = {120, 16},
- [VCAP_IS2_HK_L4_DPORT] = {136, 16},
+ [VCAP_IS2_HK_L4_DPORT] = {120, 16},
+ [VCAP_IS2_HK_L4_SPORT] = {136, 16},
[VCAP_IS2_HK_L4_RNG] = {152, 8},
[VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {160, 1},
[VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {161, 1},
- [VCAP_IS2_HK_L4_URG] = {162, 1},
- [VCAP_IS2_HK_L4_ACK] = {163, 1},
- [VCAP_IS2_HK_L4_PSH] = {164, 1},
- [VCAP_IS2_HK_L4_RST] = {165, 1},
- [VCAP_IS2_HK_L4_SYN] = {166, 1},
- [VCAP_IS2_HK_L4_FIN] = {167, 1},
+ [VCAP_IS2_HK_L4_FIN] = {162, 1},
+ [VCAP_IS2_HK_L4_SYN] = {163, 1},
+ [VCAP_IS2_HK_L4_RST] = {164, 1},
+ [VCAP_IS2_HK_L4_PSH] = {165, 1},
+ [VCAP_IS2_HK_L4_ACK] = {166, 1},
+ [VCAP_IS2_HK_L4_URG] = {167, 1},
[VCAP_IS2_HK_L4_1588_DOM] = {168, 8},
[VCAP_IS2_HK_L4_1588_VER] = {176, 4},
/* IP4_OTHER (TYPE=101) */
[VCAP_IS2_HK_DIP_EQ_SIP] = {122, 1},
/* IP4_TCP_UDP (TYPE=100) */
[VCAP_IS2_HK_TCP] = {123, 1},
- [VCAP_IS2_HK_L4_SPORT] = {124, 16},
- [VCAP_IS2_HK_L4_DPORT] = {140, 16},
+ [VCAP_IS2_HK_L4_DPORT] = {124, 16},
+ [VCAP_IS2_HK_L4_SPORT] = {140, 16},
[VCAP_IS2_HK_L4_RNG] = {156, 8},
[VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {164, 1},
[VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {165, 1},
- [VCAP_IS2_HK_L4_URG] = {166, 1},
- [VCAP_IS2_HK_L4_ACK] = {167, 1},
- [VCAP_IS2_HK_L4_PSH] = {168, 1},
- [VCAP_IS2_HK_L4_RST] = {169, 1},
- [VCAP_IS2_HK_L4_SYN] = {170, 1},
- [VCAP_IS2_HK_L4_FIN] = {171, 1},
+ [VCAP_IS2_HK_L4_FIN] = {166, 1},
+ [VCAP_IS2_HK_L4_SYN] = {167, 1},
+ [VCAP_IS2_HK_L4_RST] = {168, 1},
+ [VCAP_IS2_HK_L4_PSH] = {169, 1},
+ [VCAP_IS2_HK_L4_ACK] = {170, 1},
+ [VCAP_IS2_HK_L4_URG] = {171, 1},
/* IP4_OTHER (TYPE=101) */
[VCAP_IS2_HK_IP4_L3_PROTO] = {123, 8},
[VCAP_IS2_HK_L3_PAYLOAD] = {131, 56},
.vcap_is2_keys = vsc9953_vcap_is2_keys,
.vcap_is2_actions = vsc9953_vcap_is2_actions,
.vcap = vsc9953_vcap_props,
- .shared_queue_sz = 128 * 1024,
+ .shared_queue_sz = 2048 * 1024,
.num_mact_rows = 2048,
.num_ports = 10,
.mdio_bus_alloc = vsc9953_mdio_bus_alloc,
return ret;
if (vid == vlanmc.vid) {
- /* clear VLAN member configurations */
- vlanmc.vid = 0;
- vlanmc.priority = 0;
- vlanmc.member = 0;
- vlanmc.untag = 0;
- vlanmc.fid = 0;
-
+ /* Remove this port from the VLAN */
+ vlanmc.member &= ~BIT(port);
+ vlanmc.untag &= ~BIT(port);
+ /*
+ * If no ports are members of this VLAN
+ * anymore then clear the whole member
+ * config so it can be reused.
+ */
+ if (!vlanmc.member && !vlanmc.untag) {
+ vlanmc.vid = 0;
+ vlanmc.priority = 0;
+ vlanmc.fid = 0;
+ }
ret = smi->ops->set_vlan_mc(smi, i, &vlanmc);
if (ret) {
dev_err(smi->dev,
return -EOPNOTSUPP;
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_QSTATS_EXT, -1, -1);
+ req.fid = cpu_to_le16(0xffff);
req.flags = FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK;
mutex_lock(&bp->hwrm_cmd_lock);
rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
tx_masks = stats->hw_masks;
tx_count = sizeof(struct tx_port_stats_ext) / 8;
- flags = FUNC_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK;
+ flags = PORT_QSTATS_EXT_REQ_FLAGS_COUNTER_MASK;
rc = bnxt_hwrm_port_qstats_ext(bp, flags);
if (rc) {
mask = (1ULL << 40) - 1;
u32 bar_offset = BNXT_GRCPF_REG_CHIMP_COMM;
u16 dst = BNXT_HWRM_CHNL_CHIMP;
- if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state))
+ if (BNXT_NO_FW_ACCESS(bp))
return -EBUSY;
if (msg_len > BNXT_HWRM_MAX_REQ_LEN) {
struct hwrm_ring_free_output *resp = bp->hwrm_cmd_resp_addr;
u16 error_code;
- if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state))
+ if (BNXT_NO_FW_ACCESS(bp))
return 0;
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_RING_FREE, cmpl_ring_id, -1);
if (set_tpa)
tpa_flags = bp->flags & BNXT_FLAG_TPA;
- else if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state))
+ else if (BNXT_NO_FW_ACCESS(bp))
return 0;
for (i = 0; i < bp->nr_vnics; i++) {
rc = bnxt_hwrm_vnic_set_tpa(bp, i, tpa_flags);
struct hwrm_temp_monitor_query_output *resp;
struct bnxt *bp = dev_get_drvdata(dev);
u32 len = 0;
+ int rc;
resp = bp->hwrm_cmd_resp_addr;
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
mutex_lock(&bp->hwrm_cmd_lock);
- if (!_hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT))
+ rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ if (!rc)
len = sprintf(buf, "%u\n", resp->temp * 1000); /* display millidegree */
mutex_unlock(&bp->hwrm_cmd_lock);
-
- if (len)
- return len;
-
- return sprintf(buf, "unknown\n");
+ return rc ?: len;
}
static SENSOR_DEVICE_ATTR(temp1_input, 0444, bnxt_show_temp, NULL, 0);
static void bnxt_hwmon_open(struct bnxt *bp)
{
+ struct hwrm_temp_monitor_query_input req = {0};
struct pci_dev *pdev = bp->pdev;
+ int rc;
+
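+ /* Probe the temperature query once; if the firmware forbids or does
+ * not implement it, skip hwmon registration entirely.
+ */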
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_TEMP_MONITOR_QUERY, -1, -1);
+ rc = hwrm_send_message_silent(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ if (rc == -EACCES || rc == -EOPNOTSUPP) {
+ bnxt_hwmon_close(bp);
+ return;
+ }
if (bp->hwmon_dev)
return;
if (BNXT_PF(bp))
bnxt_sriov_disable(bp);
+ clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
+ bnxt_cancel_sp_work(bp);
+ bp->sp_event = 0;
+
bnxt_dl_fw_reporters_destroy(bp, true);
if (BNXT_PF(bp))
devlink_port_type_clear(&bp->dl_port);
unregister_netdev(dev);
bnxt_dl_unregister(bp);
bnxt_shutdown_tc(bp);
- clear_bit(BNXT_STATE_IN_FW_RESET, &bp->state);
- bnxt_cancel_sp_work(bp);
- bp->sp_event = 0;
bnxt_clear_int_mode(bp);
bnxt_hwrm_func_drv_unrgtr(bp);
static void bnxt_vpd_read_info(struct bnxt *bp)
{
struct pci_dev *pdev = bp->pdev;
- int i, len, pos, ro_size;
+ int i, len, pos, ro_size, size;
ssize_t vpd_size;
u8 *vpd_data;
if (len + pos > vpd_size)
goto read_sn;
- strlcpy(bp->board_partno, &vpd_data[pos], min(len, BNXT_VPD_FLD_LEN));
+ size = min(len, BNXT_VPD_FLD_LEN - 1);
+ memcpy(bp->board_partno, &vpd_data[pos], size);
read_sn:
pos = pci_vpd_find_info_keyword(vpd_data, i, ro_size,
if (len + pos > vpd_size)
goto exit;
- strlcpy(bp->board_serialno, &vpd_data[pos], min(len, BNXT_VPD_FLD_LEN));
+ size = min(len, BNXT_VPD_FLD_LEN - 1);
+ memcpy(bp->board_serialno, &vpd_data[pos], size);
exit:
kfree(vpd_data);
}
#define BNXT_STATE_FW_FATAL_COND 6
#define BNXT_STATE_DRV_REGISTERED 7
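+/* Firmware is unreachable when a fatal error has been recorded or the
+ * PCI channel is offline; callers skip HWRM requests in that case.
+ */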
+#define BNXT_NO_FW_ACCESS(bp) \
+ (test_bit(BNXT_STATE_FW_FATAL_COND, &(bp)->state) || \
+ pci_channel_offline((bp)->pdev))
+
struct bnxt_irq *irq_tbl;
int total_irqs;
u8 mac_addr[ETH_ALEN];
struct bnxt *bp = netdev_priv(dev);
int reg_len;
+ if (!BNXT_PF(bp))
+ return -EOPNOTSUPP;
+
reg_len = BNXT_PXP_REG_LEN;
if (bp->fw_cap & BNXT_FW_CAP_PCIE_STATS_SUPPORTED)
if (!BNXT_PHY_CFG_ABLE(bp))
return -EOPNOTSUPP;
+ mutex_lock(&bp->link_lock);
if (epause->autoneg) {
- if (!(link_info->autoneg & BNXT_AUTONEG_SPEED))
- return -EINVAL;
+ if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
+ rc = -EINVAL;
+ goto pause_exit;
+ }
link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
if (bp->hwrm_spec_code >= 0x10201)
if (epause->tx_pause)
link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX;
- if (netif_running(dev)) {
- mutex_lock(&bp->link_lock);
+ if (netif_running(dev))
rc = bnxt_hwrm_set_pause(bp);
- mutex_unlock(&bp->link_lock);
- }
+
+pause_exit:
+ mutex_unlock(&bp->link_lock);
return rc;
}
struct bnxt *bp = netdev_priv(dev);
struct ethtool_eee *eee = &bp->eee;
struct bnxt_link_info *link_info = &bp->link_info;
- u32 advertising =
- _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
+ u32 advertising;
int rc = 0;
if (!BNXT_PHY_CFG_ABLE(bp))
if (!(bp->flags & BNXT_FLAG_EEE_CAP))
return -EOPNOTSUPP;
+ mutex_lock(&bp->link_lock);
+ advertising = _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
if (!edata->eee_enabled)
goto eee_ok;
if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
netdev_warn(dev, "EEE requires autoneg\n");
- return -EINVAL;
+ rc = -EINVAL;
+ goto eee_exit;
}
if (edata->tx_lpi_enabled) {
if (bp->lpi_tmr_hi && (edata->tx_lpi_timer > bp->lpi_tmr_hi ||
edata->tx_lpi_timer < bp->lpi_tmr_lo)) {
netdev_warn(dev, "Valid LPI timer range is %d and %d microsecs\n",
bp->lpi_tmr_lo, bp->lpi_tmr_hi);
- return -EINVAL;
+ rc = -EINVAL;
+ goto eee_exit;
} else if (!bp->lpi_tmr_hi) {
edata->tx_lpi_timer = eee->tx_lpi_timer;
}
} else if (edata->advertised & ~advertising) {
netdev_warn(dev, "EEE advertised %x must be a subset of autoneg advertised speeds %x\n",
edata->advertised, advertising);
- return -EINVAL;
+ rc = -EINVAL;
+ goto eee_exit;
}
eee->advertised = edata->advertised;
if (netif_running(dev))
rc = bnxt_hwrm_set_link_setting(bp, false, true);
+eee_exit:
+ mutex_unlock(&bp->link_lock);
return rc;
}
ctrl |= GEM_BIT(GBE);
}
- /* We do not support MLO_PAUSE_RX yet */
- if (tx_pause)
+ if (rx_pause)
ctrl |= MACB_BIT(PAE);
macb_set_tx_clk(bp->tx_clk, speed, ndev);
static int configure_filter_tcb(struct adapter *adap, unsigned int tid,
struct filter_entry *f)
{
- if (f->fs.hitcnts)
+ if (f->fs.hitcnts) {
set_tcb_field(adap, f, tid, TCB_TIMESTAMP_W,
- TCB_TIMESTAMP_V(TCB_TIMESTAMP_M) |
+ TCB_TIMESTAMP_V(TCB_TIMESTAMP_M),
+ TCB_TIMESTAMP_V(0ULL),
+ 1);
+ set_tcb_field(adap, f, tid, TCB_RTT_TS_RECENT_AGE_W,
TCB_RTT_TS_RECENT_AGE_V(TCB_RTT_TS_RECENT_AGE_M),
- TCB_TIMESTAMP_V(0ULL) |
TCB_RTT_TS_RECENT_AGE_V(0ULL),
1);
+ }
if (f->fs.newdmac)
set_tcb_tflag(adap, f, tid, TF_CCTRL_ECE_S, 1,
{
struct mps_entries_ref *mps_entry, *tmp;
- if (!list_empty(&adap->mps_ref))
+ if (list_empty(&adap->mps_ref))
return;
spin_lock(&adap->mps_ref_lock);
#define DSL CONFIG_DE2104X_DSL
#endif
-#define DE_RX_RING_SIZE 64
+#define DE_RX_RING_SIZE 128
#define DE_TX_RING_SIZE 64
#define DE_RING_BYTES \
((sizeof(struct de_desc) * DE_RX_RING_SIZE) + \
};
struct dpmac_rsp_get_counter {
- u64 pad;
- u64 counter;
+ __le64 pad;
+ __le64 counter;
};
#endif /* _FSL_DPMAC_CMD_H */
err_reg_netdev:
enetc_teardown_serdes(priv);
- enetc_mdio_remove(pf);
enetc_free_msix(priv);
err_alloc_msix:
enetc_free_si_resources(priv);
si->ndev = NULL;
free_netdev(ndev);
err_alloc_netdev:
+ enetc_mdio_remove(pf);
enetc_of_put_phy(pf);
err_map_pf_space:
enetc_pci_remove(pdev);
* bit6-11 for ppe0-5
* bit12-17 for roce0-5
* bit18-19 for com/dfx
- * @enable: false - request reset , true - drop reset
+ * @dereset: false - request reset, true - drop reset
*/
static void
hns_dsaf_srst_chns(struct dsaf_device *dsaf_dev, u32 msk, bool dereset)
* bit6-11 for ppe0-5
* bit12-17 for roce0-5
* bit18-19 for com/dfx
- * @enable: false - request reset , true - drop reset
+ * @dereset: false - request reset, true - drop reset
*/
static void
hns_dsaf_srst_chns_acpi(struct dsaf_device *dsaf_dev, u32 msk, bool dereset)
/**
* __lb_run_test - run loopback test
- * @nic_dev: net device
- * @loopback_type: loopback type
+ * @ndev: net device
+ * @loop_mode: loopback mode
*/
static int __lb_run_test(struct net_device *ndev,
enum hnae_loop loop_mode)
/**
* hns_nic_self_test - self test
- * @dev: net device
+ * @ndev: net device
* @eth_test: test cmd
* @data: test result
*/
/**
* hns_nic_get_drvinfo - get net driver info
- * @dev: net device
+ * @net_dev: net device
* @drvinfo: driver info
*/
static void hns_nic_get_drvinfo(struct net_device *net_dev,
/**
* hns_get_ringparam - get ring parameter
- * @dev: net device
+ * @net_dev: net device
* @param: ethtool parameter
*/
static void hns_get_ringparam(struct net_device *net_dev,
/**
* hns_get_pauseparam - get pause parameter
- * @dev: net device
+ * @net_dev: net device
* @param: pause parameter
*/
static void hns_get_pauseparam(struct net_device *net_dev,
/**
* hns_set_pauseparam - set pause parameter
- * @dev: net device
+ * @net_dev: net device
* @param: pause parameter
*
* Return 0 on success, negative on failure
/**
* hns_get_coalesce - get coalesce info.
- * @dev: net device
+ * @net_dev: net device
* @ec: coalesce info.
*
* Return 0 on success, negative on failure.
/**
* hns_set_coalesce - set coalesce info.
- * @dev: net device
+ * @net_dev: net device
* @ec: coalesce info.
*
* Return 0 on success, negative on failure.
/**
* hns_get_channels - get channel info.
- * @dev: net device
+ * @net_dev: net device
* @ch: channel info.
*/
static void
/**
* get_ethtool_stats - get detailed statistics.
- * @dev: net device
+ * @netdev: net device
* @stats: statistics info.
* @data: statistics data.
*/
/**
* get_strings: Return a set of strings that describe the requested objects
- * @dev: net device
- * @stats: string set ID.
+ * @netdev: net device
+ * @stringset: string set ID.
* @data: objects data.
*/
static void hns_get_strings(struct net_device *netdev, u32 stringset, u8 *data)
/**
* nic_get_sset_count - get string set count which is returned by nic_get_strings.
- * @dev: net device
+ * @netdev: net device
* @stringset: string set index, 0: self test string; 1: statistics string.
*
* Return string set count.
/**
* hns_phy_led_set - set phy LED status.
- * @dev: net device
+ * @netdev: net device
* @value: LED state.
*
* Return 0 on success, negative on failure.
/**
* nic_set_phys_id - set phy identify LED.
- * @dev: net device
+ * @netdev: net device
* @state: LED state.
*
* Return 0 on success, negative on failure.
/**
* hns_get_regs - get net device register
- * @dev: net device
+ * @net_dev: net device
* @cmd: ethtool cmd
- * @date: register data
+ * @data: register data
*/
static void hns_get_regs(struct net_device *net_dev, struct ethtool_regs *cmd,
void *data)
/**
* nic_get_regs_len - get total register len.
- * @dev: net device
+ * @net_dev: net device
*
* Return total register len.
*/
/**
* hns_nic_nway_reset - nway reset
- * @dev: net device
+ * @netdev: net device
*
* Return 0 on success, negative on failure
*/
}
netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
err = do_lp_test(nic_dev, eth_test->flags, LP_DEFAULT_TIME,
&test_index);
data[test_index] = 1;
}
+ netif_tx_wake_all_queues(netdev);
+
err = hinic_port_link_state(nic_dev, &link_state);
if (!err && link_state == HINIC_LINK_STATE_UP)
netif_carrier_on(netdev);
}
static int hinic_set_phys_id(struct net_device *netdev,
#define MGMT_MSG_TIMEOUT 5000
+#define SET_FUNC_PORT_MBOX_TIMEOUT 30000
+
#define SET_FUNC_PORT_MGMT_TIMEOUT 25000
+#define UPDATE_FW_MGMT_TIMEOUT 20000
+
#define mgmt_to_pfhwdev(pf_mgmt) \
container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt)
return -EINVAL;
}
- if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
- timeout = SET_FUNC_PORT_MGMT_TIMEOUT;
+ if (HINIC_IS_VF(hwif)) {
+ if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
+ timeout = SET_FUNC_PORT_MBOX_TIMEOUT;
- if (HINIC_IS_VF(hwif))
return hinic_mbox_to_pf(pf_to_mgmt->hwdev, mod, cmd, buf_in,
- in_size, buf_out, out_size, 0);
- else
+ in_size, buf_out, out_size, timeout);
+ } else {
+ if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
+ timeout = SET_FUNC_PORT_MGMT_TIMEOUT;
+ else if (cmd == HINIC_PORT_CMD_UPDATE_FW)
+ timeout = UPDATE_FW_MGMT_TIMEOUT;
+
return msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
buf_out, out_size, MGMT_DIRECT_SEND,
MSG_NOT_RESP, timeout);
+ }
}
static void recv_mgmt_msg_work_handler(struct work_struct *work)
return err;
}
+static void enable_txqs_napi(struct hinic_dev *nic_dev)
+{
+ int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
+ int i;
+
+ for (i = 0; i < num_txqs; i++)
+ napi_enable(&nic_dev->txqs[i].napi);
+}
+
+static void disable_txqs_napi(struct hinic_dev *nic_dev)
+{
+ int num_txqs = hinic_hwdev_num_qps(nic_dev->hwdev);
+ int i;
+
+ for (i = 0; i < num_txqs; i++)
+ napi_disable(&nic_dev->txqs[i].napi);
+}
+
/**
* free_txqs - Free the Logical Tx Queues of specific NIC device
* @nic_dev: the specific NIC device
goto err_create_txqs;
}
+ enable_txqs_napi(nic_dev);
+
err = create_rxqs(nic_dev);
if (err) {
netif_err(nic_dev, drv, netdev,
}
err_create_rxqs:
+ disable_txqs_napi(nic_dev);
free_txqs(nic_dev);
err_create_txqs:
struct hinic_dev *nic_dev = netdev_priv(netdev);
unsigned int flags;
+ /* Disable txq napi first to avoid rewaking txq in free_tx_poll */
+ disable_txqs_napi(nic_dev);
+
down(&nic_dev->mgmt_lock);
flags = nic_dev->flags;
if (err) {
netif_err(nic_dev, drv, rxq->netdev,
"Failed to set RX interrupt coalescing attribute\n");
- rx_del_napi(rxq);
- return err;
+ goto err_req_irq;
}
err = request_irq(rq->irq, rx_irq, 0, rxq->irq_name, rxq);
- if (err) {
- rx_del_napi(rxq);
- return err;
- }
+ if (err)
+ goto err_req_irq;
cpumask_set_cpu(qp->q_id % num_online_cpus(), &rq->affinity_mask);
- return irq_set_affinity_hint(rq->irq, &rq->affinity_mask);
+ err = irq_set_affinity_hint(rq->irq, &rq->affinity_mask);
+ if (err)
+ goto err_irq_affinity;
+
+ return 0;
+
+err_irq_affinity:
+ free_irq(rq->irq, rxq);
+err_req_irq:
+ rx_del_napi(rxq);
+ return err;
}
static void rx_free_irq(struct hinic_rxq *rxq)
netdev_txq = netdev_get_tx_queue(txq->netdev, qp->q_id);
__netif_tx_lock(netdev_txq, smp_processor_id());
-
- netif_wake_subqueue(nic_dev->netdev, qp->q_id);
+ if (!netif_testing(nic_dev->netdev))
+ netif_wake_subqueue(nic_dev->netdev, qp->q_id);
__netif_tx_unlock(netdev_txq);
return budget;
}
-static void tx_napi_add(struct hinic_txq *txq, int weight)
-{
- netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, weight);
- napi_enable(&txq->napi);
-}
-
-static void tx_napi_del(struct hinic_txq *txq)
-{
- napi_disable(&txq->napi);
- netif_napi_del(&txq->napi);
-}
-
static irqreturn_t tx_irq(int irq, void *data)
{
struct hinic_txq *txq = data;
qp = container_of(sq, struct hinic_qp, sq);
- tx_napi_add(txq, nic_dev->tx_weight);
+ netif_napi_add(txq->netdev, &txq->napi, free_tx_poll, nic_dev->tx_weight);
hinic_hwdev_msix_set(nic_dev->hwdev, sq->msix_entry,
TX_IRQ_NO_PENDING, TX_IRQ_NO_COALESC,
if (err) {
netif_err(nic_dev, drv, txq->netdev,
"Failed to set TX interrupt coalescing attribute\n");
- tx_napi_del(txq);
+ netif_napi_del(&txq->napi);
return err;
}
err = request_irq(sq->irq, tx_irq, 0, txq->irq_name, txq);
if (err) {
dev_err(&pdev->dev, "Failed to request Tx irq\n");
- tx_napi_del(txq);
+ netif_napi_del(&txq->napi);
return err;
}
struct hinic_sq *sq = txq->sq;
free_irq(sq->irq, txq);
- tx_napi_del(txq);
+ netif_napi_del(&txq->napi);
}
/**
} else {
rc = reset_tx_pools(adapter);
- if (rc)
+ if (rc) {
netdev_dbg(adapter->netdev, "reset tx pools failed (%d)\n",
rc);
goto out;
+ }
rc = reset_rx_pools(adapter);
- if (rc)
+ if (rc) {
netdev_dbg(adapter->netdev, "reset rx pools failed (%d)\n",
rc);
goto out;
+ }
}
ibmvnic_disable_irqs(adapter);
}
static int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
{
struct i40e_mac_filter *f;
- int num_vlans = 0, bkt;
+ u16 num_vlans = 0, bkt;
hash_for_each(vsi->mac_filter_hash, bkt, f, hlist) {
if (f->vlan >= 0 && f->vlan <= I40E_MAX_VLANID)
*
* Called to get number of VLANs and VLAN list present in mac_filter_hash.
**/
-static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, int *num_vlans,
- s16 **vlan_list)
+static void i40e_get_vlan_list_sync(struct i40e_vsi *vsi, u16 *num_vlans,
+ s16 **vlan_list)
{
struct i40e_mac_filter *f;
int i = 0;
**/
static i40e_status
i40e_set_vsi_promisc(struct i40e_vf *vf, u16 seid, bool multi_enable,
- bool unicast_enable, s16 *vl, int num_vlans)
+ bool unicast_enable, s16 *vl, u16 num_vlans)
{
+ i40e_status aq_ret, aq_tmp = 0;
struct i40e_pf *pf = vf->pf;
struct i40e_hw *hw = &pf->hw;
- i40e_status aq_ret;
int i;
/* No VLAN to set promisc on, set on VSI */
vf->vf_id,
i40e_stat_str(&pf->hw, aq_ret),
i40e_aq_str(&pf->hw, aq_err));
+
+ if (!aq_tmp)
+ aq_tmp = aq_ret;
}
aq_ret = i40e_aq_set_vsi_uc_promisc_on_vlan(hw, seid,
vf->vf_id,
i40e_stat_str(&pf->hw, aq_ret),
i40e_aq_str(&pf->hw, aq_err));
+
+ if (!aq_tmp)
+ aq_tmp = aq_ret;
}
}
+
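+ /* If any of the promiscuous AQ calls failed, report the first error. */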
+ if (aq_tmp)
+ aq_ret = aq_tmp;
+
return aq_ret;
}
i40e_status aq_ret = I40E_SUCCESS;
struct i40e_pf *pf = vf->pf;
struct i40e_vsi *vsi;
- int num_vlans;
+ u16 num_vlans;
s16 *vl;
vsi = i40e_find_vsi_from_id(pf, vsi_id);
#define IGC_RX_HDR_LEN IGC_RXBUFFER_256
/* Transmit and receive latency (for PTP timestamps) */
-/* FIXME: These values were estimated using the ones that i225 has as
- * basis, they seem to provide good numbers with ptp4l/phc2sys, but we
- * need to confirm them.
- */
-#define IGC_I225_TX_LATENCY_10 9542
-#define IGC_I225_TX_LATENCY_100 1024
-#define IGC_I225_TX_LATENCY_1000 178
-#define IGC_I225_TX_LATENCY_2500 64
-#define IGC_I225_RX_LATENCY_10 20662
-#define IGC_I225_RX_LATENCY_100 2213
-#define IGC_I225_RX_LATENCY_1000 448
-#define IGC_I225_RX_LATENCY_2500 160
+#define IGC_I225_TX_LATENCY_10 240
+#define IGC_I225_TX_LATENCY_100 58
+#define IGC_I225_TX_LATENCY_1000 80
+#define IGC_I225_TX_LATENCY_2500 1325
+#define IGC_I225_RX_LATENCY_10 6450
+#define IGC_I225_RX_LATENCY_100 185
+#define IGC_I225_RX_LATENCY_1000 300
+#define IGC_I225_RX_LATENCY_2500 1485
/* RX and TX descriptor control thresholds.
* PTHRESH - MAC will consider prefetch if it has fewer than this number of
struct sk_buff *skb = adapter->ptp_tx_skb;
struct skb_shared_hwtstamps shhwtstamps;
struct igc_hw *hw = &adapter->hw;
+ int adjust = 0;
u64 regval;
if (WARN_ON_ONCE(!skb))
regval |= (u64)rd32(IGC_TXSTMPH) << 32;
igc_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval);
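+ /* Adjust the hardware timestamp by the link-speed dependent TX latency. */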
+ switch (adapter->link_speed) {
+ case SPEED_10:
+ adjust = IGC_I225_TX_LATENCY_10;
+ break;
+ case SPEED_100:
+ adjust = IGC_I225_TX_LATENCY_100;
+ break;
+ case SPEED_1000:
+ adjust = IGC_I225_TX_LATENCY_1000;
+ break;
+ case SPEED_2500:
+ adjust = IGC_I225_TX_LATENCY_2500;
+ break;
+ }
+
+ shhwtstamps.hwtstamp =
+ ktime_add_ns(shhwtstamps.hwtstamp, adjust);
+
/* Clear the lock early before calling skb_tstamp_tx so that
* applications are not woken up before the lock bit is clear. We use
* a copy of the skb pointer to ensure other threads can't change it
}
if (rx < budget) {
- napi_complete(&ch->napi);
- ltq_dma_enable_irq(&ch->dma);
+ if (napi_complete_done(&ch->napi, rx))
+ ltq_dma_enable_irq(&ch->dma);
}
return rx;
net_dev->stats.tx_bytes += bytes;
netdev_completed_queue(ch->priv->net_dev, pkts, bytes);
+ if (netif_queue_stopped(net_dev))
+ netif_wake_queue(net_dev);
+
if (pkts < budget) {
- napi_complete(&ch->napi);
- ltq_dma_enable_irq(&ch->dma);
+ if (napi_complete_done(&ch->napi, pkts))
+ ltq_dma_enable_irq(&ch->dma);
}
return pkts;
{
struct xrx200_chan *ch = ptr;
- ltq_dma_disable_irq(&ch->dma);
- ltq_dma_ack_irq(&ch->dma);
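+ /* Mask the channel interrupt only if NAPI was actually scheduled here;
+ * the poll routine re-enables it via napi_complete_done().
+ */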
+ if (napi_schedule_prep(&ch->napi)) {
+ __napi_schedule(&ch->napi);
+ ltq_dma_disable_irq(&ch->dma);
+ }
- napi_schedule(&ch->napi);
+ ltq_dma_ack_irq(&ch->dma);
return IRQ_HANDLED;
}
/* setup NAPI */
netif_napi_add(net_dev, &priv->chan_rx.napi, xrx200_poll_rx, 32);
- netif_napi_add(net_dev, &priv->chan_tx.napi, xrx200_tx_housekeeping, 32);
+ netif_tx_napi_add(net_dev, &priv->chan_tx.napi, xrx200_tx_housekeeping, 32);
platform_set_drvdata(pdev, priv);
struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
int i;
- page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
- sync_len, napi);
for (i = 0; i < sinfo->nr_frags; i++)
page_pool_put_full_page(rxq->page_pool,
skb_frag_page(&sinfo->frags[i]), napi);
+ page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
+ sync_len, napi);
}
static int
mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
&size, page, &ps);
} else {
- if (unlikely(!xdp_buf.data_hard_start))
+ if (unlikely(!xdp_buf.data_hard_start)) {
+ rx_desc->buf_phys_addr = 0;
+ page_pool_put_full_page(rxq->page_pool, page,
+ true);
continue;
+ }
mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
&size, page);
struct dim dim; /* Dynamic Interrupt Moderation */
/* XDP */
- struct bpf_prog *xdp_prog;
+ struct bpf_prog __rcu *xdp_prog;
struct mlx5e_xdpsq *xdpsq;
DECLARE_BITMAP(flags, 8);
struct page_pool *page_pool;
void mlx5e_update_carrier(struct mlx5e_priv *priv);
int mlx5e_close(struct net_device *netdev);
int mlx5e_open(struct net_device *netdev);
-void mlx5e_update_ndo_stats(struct mlx5e_priv *priv);
void mlx5e_queue_update_stats(struct mlx5e_priv *priv);
int mlx5e_bits_invert(unsigned long a, int size);
monitor_counters_work);
mutex_lock(&priv->state_lock);
- mlx5e_update_ndo_stats(priv);
+ mlx5e_stats_update_ndo_stats(priv);
mutex_unlock(&priv->state_lock);
mlx5e_monitor_counter_arm(priv);
}
int err;
int i;
- if (!MLX5_CAP_GEN(dev, pcam_reg))
- return -EOPNOTSUPP;
-
- if (!MLX5_CAP_PCAM_REG(dev, pplm))
- return -EOPNOTSUPP;
+ if (!MLX5_CAP_GEN(dev, pcam_reg) || !MLX5_CAP_PCAM_REG(dev, pplm))
+ return false;
MLX5_SET(pplm_reg, in, local_port, 1);
err = mlx5_core_access_reg(dev, in, sz, out, sz, MLX5_REG_PPLM, 0, 0);
err_rule:
mlx5e_mod_hdr_detach(ct_priv->esw->dev,
&esw->offloads.mod_hdr, zone_rule->mh);
+ mapping_remove(ct_priv->labels_mapping, attr->ct_attr.ct_labels_id);
err_mod_hdr:
kfree(spec);
return err;
return 0;
}
+void mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr)
+{
+ struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv);
+
+ if (!ct_priv || !ct_attr->ct_labels_id)
+ return;
+
+ mapping_remove(ct_priv->labels_mapping, ct_attr->ct_labels_id);
+}
+
int
-mlx5_tc_ct_parse_match(struct mlx5e_priv *priv,
- struct mlx5_flow_spec *spec,
- struct flow_cls_offload *f,
- struct mlx5_ct_attr *ct_attr,
- struct netlink_ext_ack *extack)
+mlx5_tc_ct_match_add(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ struct mlx5_ct_attr *ct_attr,
+ struct netlink_ext_ack *extack)
{
struct mlx5_tc_ct_priv *ct_priv = mlx5_tc_ct_get_ct_priv(priv);
struct flow_rule *rule = flow_cls_offload_flow_rule(f);
void
mlx5_tc_ct_clean(struct mlx5_rep_uplink_priv *uplink_priv);
+void
+mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr);
+
int
-mlx5_tc_ct_parse_match(struct mlx5e_priv *priv,
- struct mlx5_flow_spec *spec,
- struct flow_cls_offload *f,
- struct mlx5_ct_attr *ct_attr,
- struct netlink_ext_ack *extack);
+mlx5_tc_ct_match_add(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ struct mlx5_ct_attr *ct_attr,
+ struct netlink_ext_ack *extack);
int
mlx5_tc_ct_add_no_trk_match(struct mlx5e_priv *priv,
struct mlx5_flow_spec *spec);
{
}
+static inline void
+mlx5_tc_ct_match_del(struct mlx5e_priv *priv, struct mlx5_ct_attr *ct_attr) {}
+
static inline int
-mlx5_tc_ct_parse_match(struct mlx5e_priv *priv,
- struct mlx5_flow_spec *spec,
- struct flow_cls_offload *f,
- struct mlx5_ct_attr *ct_attr,
- struct netlink_ext_ack *extack)
+mlx5_tc_ct_match_add(struct mlx5e_priv *priv,
+ struct mlx5_flow_spec *spec,
+ struct flow_cls_offload *f,
+ struct mlx5_ct_attr *ct_attr,
+ struct netlink_ext_ack *extack)
{
struct flow_rule *rule = flow_cls_offload_flow_rule(f);
};
/* General */
+static inline bool mlx5e_skb_is_multicast(struct sk_buff *skb)
+{
+ return skb->pkt_type == PACKET_MULTICAST || skb->pkt_type == PACKET_BROADCAST;
+}
+
void mlx5e_trigger_irq(struct mlx5e_icosq *sq);
void mlx5e_completion_event(struct mlx5_core_cq *mcq, struct mlx5_eqe *eqe);
void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, enum mlx5_event event);
bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct mlx5e_dma_info *di,
u32 *len, struct xdp_buff *xdp)
{
- struct bpf_prog *prog = READ_ONCE(rq->xdp_prog);
+ struct bpf_prog *prog = rcu_dereference(rq->xdp_prog);
u32 act;
int err;
{
struct xdp_buff *xdp = wi->umr.dma_info[page_idx].xsk;
u32 cqe_bcnt32 = cqe_bcnt;
- bool consumed;
/* Check packet size. Note LRO doesn't use linear SKB */
if (unlikely(cqe_bcnt > rq->hw_mtu)) {
xsk_buff_dma_sync_for_cpu(xdp);
prefetch(xdp->data);
- rcu_read_lock();
- consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp);
- rcu_read_unlock();
-
/* Possible flows:
* - XDP_REDIRECT to XSKMAP:
* The page is owned by the userspace from now.
* allocated first from the Reuse Ring, so it has enough space.
*/
- if (likely(consumed)) {
+ if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt32, xdp))) {
if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)))
__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
return NULL; /* page/packet was consumed by XDP */
u32 cqe_bcnt)
{
struct xdp_buff *xdp = wi->di->xsk;
- bool consumed;
/* wi->offset is not used in this function, because xdp->data and the
* DMA address point directly to the necessary place. Furthermore, the
return NULL;
}
- rcu_read_lock();
- consumed = mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp);
- rcu_read_unlock();
-
- if (likely(consumed))
+ if (likely(mlx5e_xdp_handle(rq, NULL, &cqe_bcnt, xdp)))
return NULL; /* page/packet was consumed by XDP */
/* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
void mlx5e_close_xsk(struct mlx5e_channel *c)
{
clear_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
- napi_synchronize(&c->napi);
- synchronize_rcu(); /* Sync with the XSK wakeup. */
+ synchronize_rcu(); /* Sync with the XSK wakeup and with NAPI. */
mlx5e_close_rq(&c->xskrq);
mlx5e_close_cq(&c->xskrq.cq);
/* Re-sync */
/* Runs in work context */
-static struct mlx5_wqe_ctrl_seg *
+static int
resync_post_get_progress_params(struct mlx5e_icosq *sq,
struct mlx5e_ktls_offload_context_rx *priv_rx)
{
PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE);
if (unlikely(dma_mapping_error(pdev, buf->dma_addr))) {
err = -ENOMEM;
- goto err_out;
+ goto err_free;
}
buf->priv_rx = priv_rx;
BUILD_BUG_ON(MLX5E_KTLS_GET_PROGRESS_WQEBBS != 1);
+
+ spin_lock(&sq->channel->async_icosq_lock);
+
if (unlikely(!mlx5e_wqc_has_room_for(&sq->wq, sq->cc, sq->pc, 1))) {
+ spin_unlock(&sq->channel->async_icosq_lock);
err = -ENOSPC;
- goto err_out;
+ goto err_dma_unmap;
}
pi = mlx5e_icosq_get_next_pi(sq, 1);
};
icosq_fill_wi(sq, pi, &wi);
sq->pc++;
+ mlx5e_notify_hw(&sq->wq, sq->pc, sq->uar_map, cseg);
+ spin_unlock(&sq->channel->async_icosq_lock);
- return cseg;
+ return 0;
+err_dma_unmap:
+ dma_unmap_single(pdev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE);
+err_free:
+ kfree(buf);
err_out:
priv_rx->stats->tls_resync_req_skip++;
- return ERR_PTR(err);
+ return err;
}
/* Function is called with elevated refcount.
{
struct mlx5e_ktls_offload_context_rx *priv_rx;
struct mlx5e_ktls_rx_resync_ctx *resync;
- struct mlx5_wqe_ctrl_seg *cseg;
struct mlx5e_channel *c;
struct mlx5e_icosq *sq;
- struct mlx5_wq_cyc *wq;
resync = container_of(work, struct mlx5e_ktls_rx_resync_ctx, work);
priv_rx = container_of(resync, struct mlx5e_ktls_offload_context_rx, resync);
c = resync->priv->channels.c[priv_rx->rxq];
sq = &c->async_icosq;
- wq = &sq->wq;
-
- spin_lock(&c->async_icosq_lock);
- cseg = resync_post_get_progress_params(sq, priv_rx);
- if (IS_ERR(cseg)) {
+ if (resync_post_get_progress_params(sq, priv_rx))
refcount_dec(&resync->refcnt);
- goto unlock;
- }
- mlx5e_notify_hw(wq, sq->pc, sq->uar_map, cseg);
-unlock:
- spin_unlock(&c->async_icosq_lock);
}
static void resync_init(struct mlx5e_ktls_rx_resync_ctx *resync,
struct mlx5e_ktls_offload_context_rx *priv_rx;
struct mlx5e_ktls_rx_resync_ctx *resync;
u8 tracker_state, auth_state, *ctx;
+ struct device *dev;
u32 hw_seq;
priv_rx = buf->priv_rx;
resync = &priv_rx->resync;
-
+ dev = resync->priv->mdev->device;
if (unlikely(test_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags)))
goto out;
- dma_sync_single_for_cpu(resync->priv->mdev->device, buf->dma_addr,
- PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE);
+ dma_sync_single_for_cpu(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE,
+ DMA_FROM_DEVICE);
ctx = buf->progress.ctx;
tracker_state = MLX5_GET(tls_progress_params, ctx, record_tracker_state);
priv_rx->stats->tls_resync_req_end++;
out:
refcount_dec(&resync->refcnt);
+ dma_unmap_single(dev, buf->dma_addr, PROGRESS_PARAMS_PADDED_SIZE, DMA_FROM_DEVICE);
kfree(buf);
}
priv_rx = mlx5e_get_ktls_rx_priv_ctx(tls_ctx);
set_bit(MLX5E_PRIV_RX_FLAG_DELETING, priv_rx->flags);
mlx5e_set_ktls_rx_priv_ctx(tls_ctx, NULL);
- napi_synchronize(&priv->channels.c[priv_rx->rxq]->napi);
+ synchronize_rcu(); /* Sync with NAPI */
if (!cancel_work_sync(&priv_rx->rule.work))
/* completion is needed, as the priv_rx in the add flow
* is maintained on the wqe info (wi), not on the socket.
#include <net/sock.h>
#include "en.h"
-#include "accel/tls.h"
#include "fpga/sdk.h"
#include "en_accel/tls.h"
#define NUM_TLS_SW_COUNTERS ARRAY_SIZE(mlx5e_tls_sw_stats_desc)
+static bool is_tls_atomic_stats(struct mlx5e_priv *priv)
+{
+ return priv->tls && !mlx5_accel_is_ktls_device(priv->mdev);
+}
+
int mlx5e_tls_get_count(struct mlx5e_priv *priv)
{
- if (!priv->tls)
+ if (!is_tls_atomic_stats(priv))
return 0;
return NUM_TLS_SW_COUNTERS;
{
unsigned int i, idx = 0;
- if (!priv->tls)
+ if (!is_tls_atomic_stats(priv))
return 0;
for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
{
int i, idx = 0;
- if (!priv->tls)
+ if (!is_tls_atomic_stats(priv))
return 0;
for (i = 0; i < NUM_TLS_SW_COUNTERS; i++)
mutex_unlock(&priv->state_lock);
}
-void mlx5e_update_ndo_stats(struct mlx5e_priv *priv)
-{
- int i;
-
- for (i = mlx5e_nic_stats_grps_num(priv) - 1; i >= 0; i--)
- if (mlx5e_nic_stats_grps[i]->update_stats_mask &
- MLX5E_NDO_UPDATE_STATS)
- mlx5e_nic_stats_grps[i]->update_stats(priv);
-}
-
static void mlx5e_update_stats_work(struct work_struct *work)
{
struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv,
if (params->xdp_prog)
bpf_prog_inc(params->xdp_prog);
- rq->xdp_prog = params->xdp_prog;
+ RCU_INIT_POINTER(rq->xdp_prog, params->xdp_prog);
rq_xdp_ix = rq->ix;
if (xsk)
if (err < 0)
goto err_rq_wq_destroy;
- rq->buff.map_dir = rq->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
+ rq->buff.map_dir = params->xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE;
rq->buff.headroom = mlx5e_get_rq_headroom(mdev, params, xsk);
pool_size = 1 << params->log_rq_mtu_frames;
}
err_rq_wq_destroy:
- if (rq->xdp_prog)
- bpf_prog_put(rq->xdp_prog);
+ if (params->xdp_prog)
+ bpf_prog_put(params->xdp_prog);
xdp_rxq_info_unreg(&rq->xdp_rxq);
page_pool_destroy(rq->page_pool);
mlx5_wq_destroy(&rq->wq_ctrl);
static void mlx5e_free_rq(struct mlx5e_rq *rq)
{
+ struct mlx5e_channel *c = rq->channel;
+ struct bpf_prog *old_prog = NULL;
int i;
- if (rq->xdp_prog)
- bpf_prog_put(rq->xdp_prog);
+ /* drop_rq has neither channel nor xdp_prog. */
+ if (c)
+ old_prog = rcu_dereference_protected(rq->xdp_prog,
+ lockdep_is_held(&c->priv->state_lock));
+ if (old_prog)
+ bpf_prog_put(old_prog);
switch (rq->wq_type) {
case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
void mlx5e_deactivate_rq(struct mlx5e_rq *rq)
{
clear_bit(MLX5E_RQ_STATE_ENABLED, &rq->state);
- napi_synchronize(&rq->channel->napi); /* prevent mlx5e_post_rx_wqes */
+ synchronize_rcu(); /* Sync with NAPI to prevent mlx5e_post_rx_wqes. */
}
void mlx5e_close_rq(struct mlx5e_rq *rq)
static void mlx5e_deactivate_txqsq(struct mlx5e_txqsq *sq)
{
- struct mlx5e_channel *c = sq->channel;
struct mlx5_wq_cyc *wq = &sq->wq;
clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
- /* prevent netif_tx_wake_queue */
- napi_synchronize(&c->napi);
+ synchronize_rcu(); /* Sync with NAPI to prevent netif_tx_wake_queue. */
mlx5e_tx_disable_queue(sq->txq);
void mlx5e_deactivate_icosq(struct mlx5e_icosq *icosq)
{
- struct mlx5e_channel *c = icosq->channel;
-
clear_bit(MLX5E_SQ_STATE_ENABLED, &icosq->state);
- napi_synchronize(&c->napi);
+ synchronize_rcu(); /* Sync with NAPI. */
}
void mlx5e_close_icosq(struct mlx5e_icosq *sq)
struct mlx5e_channel *c = sq->channel;
clear_bit(MLX5E_SQ_STATE_ENABLED, &sq->state);
- napi_synchronize(&c->napi);
+ synchronize_rcu(); /* Sync with NAPI. */
mlx5e_destroy_sq(c->mdev, sq->sqn);
mlx5e_free_xdpsq_descs(sq);
s->rx_packets += rq_stats->packets + xskrq_stats->packets;
s->rx_bytes += rq_stats->bytes + xskrq_stats->bytes;
+ s->multicast += rq_stats->mcast_packets + xskrq_stats->mcast_packets;
for (j = 0; j < priv->max_opened_tc; j++) {
struct mlx5e_sq_stats *sq_stats = &channel_stats->sq[j];
mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
{
struct mlx5e_priv *priv = netdev_priv(dev);
- struct mlx5e_vport_stats *vstats = &priv->stats.vport;
struct mlx5e_pport_stats *pstats = &priv->stats.pport;
/* In switchdev mode, monitor counters doesn't monitor
stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors +
stats->rx_frame_errors;
stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors;
-
- /* vport multicast also counts packets that are dropped due to steering
- * or rx out of buffer
- */
- stats->multicast =
- VPORT_COUNTER_GET(vstats, received_eth_multicast.packets);
}
static void mlx5e_set_rx_mode(struct net_device *dev)
return 0;
}
+static void mlx5e_rq_replace_xdp_prog(struct mlx5e_rq *rq, struct bpf_prog *prog)
+{
+ struct bpf_prog *old_prog;
+
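+ /* NAPI polls run under rcu_read_lock(), so an RCU pointer swap is
+ * sufficient; no RQ disable/re-enable cycle is needed around it.
+ */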
+ old_prog = rcu_replace_pointer(rq->xdp_prog, prog,
+ lockdep_is_held(&rq->channel->priv->state_lock));
+ if (old_prog)
+ bpf_prog_put(old_prog);
+}
+
static int mlx5e_xdp_set(struct net_device *netdev, struct bpf_prog *prog)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
*/
for (i = 0; i < priv->channels.num; i++) {
struct mlx5e_channel *c = priv->channels.c[i];
- bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
-
- clear_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
- if (xsk_open)
- clear_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
- napi_synchronize(&c->napi);
- /* prevent mlx5e_poll_rx_cq from accessing rq->xdp_prog */
-
- old_prog = xchg(&c->rq.xdp_prog, prog);
- if (old_prog)
- bpf_prog_put(old_prog);
-
- if (xsk_open) {
- old_prog = xchg(&c->xskrq.xdp_prog, prog);
- if (old_prog)
- bpf_prog_put(old_prog);
- }
- set_bit(MLX5E_RQ_STATE_ENABLED, &c->rq.state);
- if (xsk_open)
- set_bit(MLX5E_RQ_STATE_ENABLED, &c->xskrq.state);
- /* napi_schedule in case we have missed anything */
- napi_schedule(&c->napi);
+ mlx5e_rq_replace_xdp_prog(&c->rq, prog);
+ if (test_bit(MLX5E_CHANNEL_STATE_XSK, c->state))
+ mlx5e_rq_replace_xdp_prog(&c->xskrq, prog);
}
unlock:
.enable = mlx5e_nic_enable,
.disable = mlx5e_nic_disable,
.update_rx = mlx5e_update_nic_rx,
- .update_stats = mlx5e_update_ndo_stats,
+ .update_stats = mlx5e_stats_update_ndo_stats,
.update_carrier = mlx5e_update_carrier,
.rx_handlers = &mlx5e_rx_handlers_nic,
.max_tc = MLX5E_MAX_NUM_TC,
.cleanup_tx = mlx5e_cleanup_rep_tx,
.enable = mlx5e_rep_enable,
.update_rx = mlx5e_update_rep_rx,
- .update_stats = mlx5e_update_ndo_stats,
+ .update_stats = mlx5e_stats_update_ndo_stats,
.rx_handlers = &mlx5e_rx_handlers_rep,
.max_tc = 1,
.rq_groups = MLX5E_NUM_RQ_GROUPS(REGULAR),
.enable = mlx5e_uplink_rep_enable,
.disable = mlx5e_uplink_rep_disable,
.update_rx = mlx5e_update_rep_rx,
- .update_stats = mlx5e_update_ndo_stats,
+ .update_stats = mlx5e_stats_update_ndo_stats,
.update_carrier = mlx5e_update_carrier,
.rx_handlers = &mlx5e_rx_handlers_rep,
.max_tc = MLX5E_MAX_NUM_TC,
#include "en/xsk/rx.h"
#include "en/health.h"
#include "en/params.h"
+#include "en/txrx.h"
static struct sk_buff *
mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
mlx5e_enable_ecn(rq, skb);
skb->protocol = eth_type_trans(skb, netdev);
+
+ if (unlikely(mlx5e_skb_is_multicast(skb)))
+ stats->mcast_packets++;
}
static inline void mlx5e_complete_rx_cqe(struct mlx5e_rq *rq,
struct xdp_buff xdp;
struct sk_buff *skb;
void *va, *data;
- bool consumed;
u32 frag_size;
va = page_address(di->page) + wi->offset;
prefetchw(va); /* xdp_frame data area */
prefetch(data);
- rcu_read_lock();
mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
- consumed = mlx5e_xdp_handle(rq, di, &cqe_bcnt, &xdp);
- rcu_read_unlock();
- if (consumed)
+ if (mlx5e_xdp_handle(rq, di, &cqe_bcnt, &xdp))
return NULL; /* page/packet was consumed by XDP */
rx_headroom = xdp.data - xdp.data_hard_start;
struct sk_buff *skb;
void *va, *data;
u32 frag_size;
- bool consumed;
/* Check packet size. Note LRO doesn't use linear SKB */
if (unlikely(cqe_bcnt > rq->hw_mtu)) {
prefetchw(va); /* xdp_frame data area */
prefetch(data);
- rcu_read_lock();
mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt32, &xdp);
- consumed = mlx5e_xdp_handle(rq, di, &cqe_bcnt32, &xdp);
- rcu_read_unlock();
- if (consumed) {
+ if (mlx5e_xdp_handle(rq, di, &cqe_bcnt32, &xdp)) {
if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
__set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
return NULL; /* page/packet was consumed by XDP */
return total;
}
+void mlx5e_stats_update_ndo_stats(struct mlx5e_priv *priv)
+{
+ mlx5e_stats_grp_t *stats_grps = priv->profile->stats_grps;
+ const unsigned int num_stats_grps = stats_grps_num(priv);
+ int i;
+
+ for (i = num_stats_grps - 1; i >= 0; i--)
+ if (stats_grps[i]->update_stats &&
+ stats_grps[i]->update_stats_mask & MLX5E_NDO_UPDATE_STATS)
+ stats_grps[i]->update_stats(priv);
+}
+
void mlx5e_stats_update(struct mlx5e_priv *priv)
{
mlx5e_stats_grp_t *stats_grps = priv->profile->stats_grps;
void mlx5e_stats_update(struct mlx5e_priv *priv);
void mlx5e_stats_fill(struct mlx5e_priv *priv, u64 *data, int idx);
void mlx5e_stats_fill_strings(struct mlx5e_priv *priv, u8 *data);
+void mlx5e_stats_update_ndo_stats(struct mlx5e_priv *priv);
/* Concrete NIC Stats */
u64 tx_nop;
u64 rx_lro_packets;
u64 rx_lro_bytes;
+ u64 rx_mcast_packets;
u64 rx_ecn_mark;
u64 rx_removed_vlan_packets;
u64 rx_csum_unnecessary;
u64 csum_none;
u64 lro_packets;
u64 lro_bytes;
+ u64 mcast_packets;
u64 ecn_mark;
u64 removed_vlan_packets;
u64 xdp_drop;
mlx5e_put_flow_tunnel_id(flow);
- if (flow_flag_test(flow, NOT_READY)) {
+ if (flow_flag_test(flow, NOT_READY))
remove_unready_flow(flow);
- kvfree(attr->parse_attr);
- return;
- }
if (mlx5e_is_offloaded_flow(flow)) {
if (flow_flag_test(flow, SLOW))
}
kvfree(attr->parse_attr);
+ mlx5_tc_ct_match_del(priv, &flow->esw_attr->ct_attr);
+
if (attr->action & MLX5_FLOW_CONTEXT_ACTION_MOD_HDR)
mlx5e_detach_mod_hdr(priv, flow);
OFFLOAD(UDP_DPORT, 16, U16_MAX, udp.dest, 0, udp_dport),
};
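+
+/* Pedit masks arrive in network (big-endian) byte order; convert them to
+ * little-endian before bit-scanning so first-bit offsets and lengths come
+ * out the same for 16- and 32-bit fields.
+ */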
+static unsigned long mask_to_le(unsigned long mask, int size)
+{
+ __be32 mask_be32;
+ __be16 mask_be16;
+
+ if (size == 32) {
+ mask_be32 = (__force __be32)(mask);
+ mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
+ } else if (size == 16) {
+ mask_be32 = (__force __be32)(mask);
+ mask_be16 = *(__be16 *)&mask_be32;
+ mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
+ }
+
+ return mask;
+}
static int offload_pedit_fields(struct mlx5e_priv *priv,
int namespace,
struct pedit_headers_action *hdrs,
u32 *s_masks_p, *a_masks_p, s_mask, a_mask;
struct mlx5e_tc_mod_hdr_acts *mod_acts;
struct mlx5_fields *f;
- unsigned long mask;
- __be32 mask_be32;
- __be16 mask_be16;
+ unsigned long mask, field_mask;
int err;
u8 cmd;
if (skip)
continue;
- if (f->field_bsize == 32) {
- mask_be32 = (__force __be32)(mask);
- mask = (__force unsigned long)cpu_to_le32(be32_to_cpu(mask_be32));
- } else if (f->field_bsize == 16) {
- mask_be32 = (__force __be32)(mask);
- mask_be16 = *(__be16 *)&mask_be32;
- mask = (__force unsigned long)cpu_to_le16(be16_to_cpu(mask_be16));
- }
+ mask = mask_to_le(mask, f->field_bsize);
first = find_first_bit(&mask, f->field_bsize);
next_z = find_next_zero_bit(&mask, f->field_bsize, first);
if (cmd == MLX5_ACTION_TYPE_SET) {
int start;
+ field_mask = mask_to_le(f->field_mask, f->field_bsize);
+
/* if field is bit sized it can start not from first bit */
- start = find_first_bit((unsigned long *)&f->field_mask,
- f->field_bsize);
+ start = find_first_bit(&field_mask, f->field_bsize);
MLX5_SET(set_action_in, action, offset, first - start);
/* length is num of bits to be written, zero means length of 32 */
goto err_free;
/* actions validation depends on parsing the ct matches first */
- err = mlx5_tc_ct_parse_match(priv, &parse_attr->spec, f,
- &flow->esw_attr->ct_attr, extack);
+ err = mlx5_tc_ct_match_add(priv, &parse_attr->spec, f,
+ &flow->esw_attr->ct_attr, extack);
if (err)
goto err_free;
struct mlx5e_xdpsq *xsksq = &c->xsksq;
struct mlx5e_rq *xskrq = &c->xskrq;
struct mlx5e_rq *rq = &c->rq;
- bool xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
bool aff_change = false;
bool busy_xsk = false;
bool busy = false;
int work_done = 0;
+ bool xsk_open;
int i;
+ rcu_read_lock();
+
+ xsk_open = test_bit(MLX5E_CHANNEL_STATE_XSK, c->state);
+
ch_stats->poll++;
for (i = 0; i < c->num_tc; i++)
busy |= busy_xsk;
if (busy) {
- if (likely(mlx5e_channel_no_affinity_change(c)))
- return budget;
+ if (likely(mlx5e_channel_no_affinity_change(c))) {
+ work_done = budget;
+ goto out;
+ }
ch_stats->aff_change++;
aff_change = true;
if (budget && work_done == budget)
}
if (unlikely(!napi_complete_done(napi, work_done)))
- return work_done;
+ goto out;
ch_stats->arm++;
ch_stats->force_irq++;
}
+out:
+ rcu_read_unlock();
+
return work_done;
}
}
esw->fdb_table.offloads.send_to_vport_grp = g;
- /* create peer esw miss group */
- memset(flow_group_in, 0, inlen);
+ if (MLX5_CAP_ESW(esw->dev, merged_eswitch)) {
+ /* create peer esw miss group */
+ memset(flow_group_in, 0, inlen);
- esw_set_flow_group_source_port(esw, flow_group_in);
+ esw_set_flow_group_source_port(esw, flow_group_in);
- if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
- match_criteria = MLX5_ADDR_OF(create_flow_group_in,
- flow_group_in,
- match_criteria);
+ if (!mlx5_eswitch_vport_match_metadata_enabled(esw)) {
+ match_criteria = MLX5_ADDR_OF(create_flow_group_in,
+ flow_group_in,
+ match_criteria);
- MLX5_SET_TO_ONES(fte_match_param, match_criteria,
- misc_parameters.source_eswitch_owner_vhca_id);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ misc_parameters.source_eswitch_owner_vhca_id);
- MLX5_SET(create_flow_group_in, flow_group_in,
- source_eswitch_owner_vhca_id_valid, 1);
- }
+ MLX5_SET(create_flow_group_in, flow_group_in,
+ source_eswitch_owner_vhca_id_valid, 1);
+ }
- MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
- MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
- ix + esw->total_vports - 1);
- ix += esw->total_vports;
+ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, ix);
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index,
+ ix + esw->total_vports - 1);
+ ix += esw->total_vports;
- g = mlx5_create_flow_group(fdb, flow_group_in);
- if (IS_ERR(g)) {
- err = PTR_ERR(g);
- esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err);
- goto peer_miss_err;
+ g = mlx5_create_flow_group(fdb, flow_group_in);
+ if (IS_ERR(g)) {
+ err = PTR_ERR(g);
+ esw_warn(dev, "Failed to create peer miss flow group err(%d)\n", err);
+ goto peer_miss_err;
+ }
+ esw->fdb_table.offloads.peer_miss_grp = g;
}
- esw->fdb_table.offloads.peer_miss_grp = g;
/* create miss group */
memset(flow_group_in, 0, inlen);
miss_rule_err:
mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
miss_err:
- mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
+ if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
+ mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
peer_miss_err:
mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
send_vport_err:
mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_multi);
mlx5_del_flow_rules(esw->fdb_table.offloads.miss_rule_uni);
mlx5_destroy_flow_group(esw->fdb_table.offloads.send_to_vport_grp);
- mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
+ if (MLX5_CAP_ESW(esw->dev, merged_eswitch))
+ mlx5_destroy_flow_group(esw->fdb_table.offloads.peer_miss_grp);
mlx5_destroy_flow_group(esw->fdb_table.offloads.miss_grp);
mlx5_esw_chains_destroy(esw);
fte->action = *flow_act;
fte->flow_context = spec->flow_context;
- tree_init_node(&fte->node, NULL, del_sw_fte);
+ tree_init_node(&fte->node, del_hw_fte, del_sw_fte);
return fte;
}
up_write_ref_node(&g->node, false);
rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
up_write_ref_node(&fte->node, false);
- tree_put_node(&fte->node, false);
return rule;
}
rule = ERR_PTR(-ENOENT);
up_write_ref_node(&g->node, false);
rule = add_rule_fg(g, spec, flow_act, dest, dest_num, fte);
up_write_ref_node(&fte->node, false);
- tree_put_node(&fte->node, false);
tree_put_node(&g->node, false);
return rule;
up_write_ref_node(&fte->node, false);
} else {
del_hw_fte(&fte->node);
- up_write(&fte->node.lock);
+ /* Avoid double call to del_hw_fte */
+ fte->node.del_hw_func = NULL;
+ up_write_ref_node(&fte->node, false);
tree_put_node(&fte->node, false);
}
kfree(handle);
if (ocelot->ptp && shinfo->tx_flags & SKBTX_HW_TSTAMP &&
ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) {
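+		/* Only four two-step timestamp IDs can be in flight at
+		 * once, hence the modulo-4 wrap; the lock makes the ID
+		 * allocation and the queueing of the skb atomic.
+		 */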
+ spin_lock(&ocelot_port->ts_id_lock);
+
shinfo->tx_flags |= SKBTX_IN_PROGRESS;
/* Store timestamp ID in cb[0] of sk_buff */
- skb->cb[0] = ocelot_port->ts_id % 4;
+ skb->cb[0] = ocelot_port->ts_id;
+ ocelot_port->ts_id = (ocelot_port->ts_id + 1) % 4;
skb_queue_tail(&ocelot_port->tx_skbs, skb);
+
+ spin_unlock(&ocelot_port->ts_id_lock);
return 0;
}
return -ENODATA;
struct ocelot_port *ocelot_port = ocelot->ports[port];
skb_queue_head_init(&ocelot_port->tx_skbs);
+ spin_lock_init(&ocelot_port->ts_id_lock);
/* Basic L2 initialization */
void ocelot_deinit(struct ocelot *ocelot)
{
- struct ocelot_port *port;
- int i;
-
cancel_delayed_work(&ocelot->stats_work);
destroy_workqueue(ocelot->stats_queue);
mutex_destroy(&ocelot->stats_lock);
-
- for (i = 0; i < ocelot->num_phys_ports; i++) {
- port = ocelot->ports[i];
- skb_queue_purge(&port->tx_skbs);
- }
}
EXPORT_SYMBOL(ocelot_deinit);
+void ocelot_deinit_port(struct ocelot *ocelot, int port)
+{
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+
+ skb_queue_purge(&ocelot_port->tx_skbs);
+}
+EXPORT_SYMBOL(ocelot_deinit_port);
+
MODULE_LICENSE("Dual MIT/GPL");
u8 grp = 0; /* Send everything on CPU group 0 */
unsigned int i, count, last;
int port = priv->chip_port;
+ bool do_tstamp;
val = ocelot_read(ocelot, QS_INJ_STATUS);
if (!(val & QS_INJ_STATUS_FIFO_RDY(BIT(grp))) ||
info.vid = skb_vlan_tag_get(skb);
/* Check if timestamping is needed */
+ do_tstamp = (ocelot_port_add_txtstamp_skb(ocelot_port, skb) == 0);
+
if (ocelot->ptp && shinfo->tx_flags & SKBTX_HW_TSTAMP) {
info.rew_op = ocelot_port->ptp_cmd;
if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP)
- info.rew_op |= (ocelot_port->ts_id % 4) << 3;
+ info.rew_op |= skb->cb[0] << 3;
}
ocelot_gen_ifh(ifh, &info);
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
- if (!ocelot_port_add_txtstamp_skb(ocelot_port, skb)) {
- ocelot_port->ts_id++;
- return NETDEV_TX_OK;
- }
+ if (!do_tstamp)
+ dev_kfree_skb_any(skb);
- dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
[VCAP_IS2_HK_DIP_EQ_SIP] = {123, 1},
/* IP4_TCP_UDP (TYPE=100) */
[VCAP_IS2_HK_TCP] = {124, 1},
- [VCAP_IS2_HK_L4_SPORT] = {125, 16},
- [VCAP_IS2_HK_L4_DPORT] = {141, 16},
+ [VCAP_IS2_HK_L4_DPORT] = {125, 16},
+ [VCAP_IS2_HK_L4_SPORT] = {141, 16},
[VCAP_IS2_HK_L4_RNG] = {157, 8},
[VCAP_IS2_HK_L4_SPORT_EQ_DPORT] = {165, 1},
[VCAP_IS2_HK_L4_SEQUENCE_EQ0] = {166, 1},
- [VCAP_IS2_HK_L4_URG] = {167, 1},
- [VCAP_IS2_HK_L4_ACK] = {168, 1},
- [VCAP_IS2_HK_L4_PSH] = {169, 1},
- [VCAP_IS2_HK_L4_RST] = {170, 1},
- [VCAP_IS2_HK_L4_SYN] = {171, 1},
- [VCAP_IS2_HK_L4_FIN] = {172, 1},
+ [VCAP_IS2_HK_L4_FIN] = {167, 1},
+ [VCAP_IS2_HK_L4_SYN] = {168, 1},
+ [VCAP_IS2_HK_L4_RST] = {169, 1},
+ [VCAP_IS2_HK_L4_PSH] = {170, 1},
+ [VCAP_IS2_HK_L4_ACK] = {171, 1},
+ [VCAP_IS2_HK_L4_URG] = {172, 1},
[VCAP_IS2_HK_L4_1588_DOM] = {173, 8},
[VCAP_IS2_HK_L4_1588_VER] = {181, 4},
/* IP4_OTHER (TYPE=101) */
.enable = ocelot_ptp_enable,
};
+static void mscc_ocelot_release_ports(struct ocelot *ocelot)
+{
+ int port;
+
+ for (port = 0; port < ocelot->num_phys_ports; port++) {
+ struct ocelot_port_private *priv;
+ struct ocelot_port *ocelot_port;
+
+ ocelot_port = ocelot->ports[port];
+ if (!ocelot_port)
+ continue;
+
+ ocelot_deinit_port(ocelot, port);
+
+ priv = container_of(ocelot_port, struct ocelot_port_private,
+ port);
+
+ unregister_netdev(priv->dev);
+ free_netdev(priv->dev);
+ }
+}
+
+static int mscc_ocelot_init_ports(struct platform_device *pdev,
+ struct device_node *ports)
+{
+ struct ocelot *ocelot = platform_get_drvdata(pdev);
+ struct device_node *portnp;
+ int err;
+
+ ocelot->ports = devm_kcalloc(ocelot->dev, ocelot->num_phys_ports,
+ sizeof(struct ocelot_port *), GFP_KERNEL);
+ if (!ocelot->ports)
+ return -ENOMEM;
+
+ /* No NPI port */
+ ocelot_configure_cpu(ocelot, -1, OCELOT_TAG_PREFIX_NONE,
+ OCELOT_TAG_PREFIX_NONE);
+
+ for_each_available_child_of_node(ports, portnp) {
+ struct ocelot_port_private *priv;
+ struct ocelot_port *ocelot_port;
+ struct device_node *phy_node;
+ phy_interface_t phy_mode;
+ struct phy_device *phy;
+ struct regmap *target;
+ struct resource *res;
+ struct phy *serdes;
+ char res_name[8];
+ u32 port;
+
+ if (of_property_read_u32(portnp, "reg", &port))
+ continue;
+
+ snprintf(res_name, sizeof(res_name), "port%d", port);
+
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ res_name);
+ target = ocelot_regmap_init(ocelot, res);
+ if (IS_ERR(target))
+ continue;
+
+ phy_node = of_parse_phandle(portnp, "phy-handle", 0);
+ if (!phy_node)
+ continue;
+
+ phy = of_phy_find_device(phy_node);
+ of_node_put(phy_node);
+ if (!phy)
+ continue;
+
+ err = ocelot_probe_port(ocelot, port, target, phy);
+ if (err) {
+ of_node_put(portnp);
+ return err;
+ }
+
+ ocelot_port = ocelot->ports[port];
+ priv = container_of(ocelot_port, struct ocelot_port_private,
+ port);
+
+ of_get_phy_mode(portnp, &phy_mode);
+
+ ocelot_port->phy_mode = phy_mode;
+
+ switch (ocelot_port->phy_mode) {
+ case PHY_INTERFACE_MODE_NA:
+ continue;
+ case PHY_INTERFACE_MODE_SGMII:
+ break;
+ case PHY_INTERFACE_MODE_QSGMII:
+			/* Ensure clock signals and speed are set on all
+			 * QSGMII links
+			 */
+ ocelot_port_writel(ocelot_port,
+ DEV_CLOCK_CFG_LINK_SPEED
+ (OCELOT_SPEED_1000),
+ DEV_CLOCK_CFG);
+ break;
+ default:
+ dev_err(ocelot->dev,
+ "invalid phy mode for port%d, (Q)SGMII only\n",
+ port);
+ of_node_put(portnp);
+ return -EINVAL;
+ }
+
+ serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
+ if (IS_ERR(serdes)) {
+ err = PTR_ERR(serdes);
+ if (err == -EPROBE_DEFER)
+ dev_dbg(ocelot->dev, "deferring probe\n");
+ else
+ dev_err(ocelot->dev,
+ "missing SerDes phys for port%d\n",
+ port);
+
+ of_node_put(portnp);
+ return err;
+ }
+
+ priv->serdes = serdes;
+ }
+
+ return 0;
+}
+
static int mscc_ocelot_probe(struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
- struct device_node *ports, *portnp;
int err, irq_xtr, irq_ptp_rdy;
+ struct device_node *ports;
struct ocelot *ocelot;
struct regmap *hsio;
unsigned int i;
ports = of_get_child_by_name(np, "ethernet-ports");
if (!ports) {
- dev_err(&pdev->dev, "no ethernet-ports child node found\n");
+ dev_err(ocelot->dev, "no ethernet-ports child node found\n");
return -ENODEV;
}
ocelot->num_phys_ports = of_get_child_count(ports);
- ocelot->ports = devm_kcalloc(&pdev->dev, ocelot->num_phys_ports,
- sizeof(struct ocelot_port *), GFP_KERNEL);
-
ocelot->vcap_is2_keys = vsc7514_vcap_is2_keys;
ocelot->vcap_is2_actions = vsc7514_vcap_is2_actions;
ocelot->vcap = vsc7514_vcap_props;
- ocelot_init(ocelot);
+ err = ocelot_init(ocelot);
+ if (err)
+ goto out_put_ports;
+
+ err = mscc_ocelot_init_ports(pdev, ports);
+ if (err)
+ goto out_put_ports;
+
if (ocelot->ptp) {
err = ocelot_init_timestamp(ocelot, &ocelot_ptp_clock_info);
if (err) {
}
}
- /* No NPI port */
- ocelot_configure_cpu(ocelot, -1, OCELOT_TAG_PREFIX_NONE,
- OCELOT_TAG_PREFIX_NONE);
-
- for_each_available_child_of_node(ports, portnp) {
- struct ocelot_port_private *priv;
- struct ocelot_port *ocelot_port;
- struct device_node *phy_node;
- phy_interface_t phy_mode;
- struct phy_device *phy;
- struct regmap *target;
- struct resource *res;
- struct phy *serdes;
- char res_name[8];
- u32 port;
-
- if (of_property_read_u32(portnp, "reg", &port))
- continue;
-
- snprintf(res_name, sizeof(res_name), "port%d", port);
-
- res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
- res_name);
- target = ocelot_regmap_init(ocelot, res);
- if (IS_ERR(target))
- continue;
-
- phy_node = of_parse_phandle(portnp, "phy-handle", 0);
- if (!phy_node)
- continue;
-
- phy = of_phy_find_device(phy_node);
- of_node_put(phy_node);
- if (!phy)
- continue;
-
- err = ocelot_probe_port(ocelot, port, target, phy);
- if (err) {
- of_node_put(portnp);
- goto out_put_ports;
- }
-
- ocelot_port = ocelot->ports[port];
- priv = container_of(ocelot_port, struct ocelot_port_private,
- port);
-
- of_get_phy_mode(portnp, &phy_mode);
-
- ocelot_port->phy_mode = phy_mode;
-
- switch (ocelot_port->phy_mode) {
- case PHY_INTERFACE_MODE_NA:
- continue;
- case PHY_INTERFACE_MODE_SGMII:
- break;
- case PHY_INTERFACE_MODE_QSGMII:
- /* Ensure clock signals and speed is set on all
- * QSGMII links
- */
- ocelot_port_writel(ocelot_port,
- DEV_CLOCK_CFG_LINK_SPEED
- (OCELOT_SPEED_1000),
- DEV_CLOCK_CFG);
- break;
- default:
- dev_err(ocelot->dev,
- "invalid phy mode for port%d, (Q)SGMII only\n",
- port);
- of_node_put(portnp);
- err = -EINVAL;
- goto out_put_ports;
- }
-
- serdes = devm_of_phy_get(ocelot->dev, portnp, NULL);
- if (IS_ERR(serdes)) {
- err = PTR_ERR(serdes);
- if (err == -EPROBE_DEFER)
- dev_dbg(ocelot->dev, "deferring probe\n");
- else
- dev_err(ocelot->dev,
- "missing SerDes phys for port%d\n",
- port);
-
- of_node_put(portnp);
- goto out_put_ports;
- }
-
- priv->serdes = serdes;
- }
-
register_netdevice_notifier(&ocelot_netdevice_nb);
register_switchdev_notifier(&ocelot_switchdev_nb);
register_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
struct ocelot *ocelot = platform_get_drvdata(pdev);
ocelot_deinit_timestamp(ocelot);
+ mscc_ocelot_release_ports(ocelot);
ocelot_deinit(ocelot);
unregister_switchdev_blocking_notifier(&ocelot_switchdev_blocking_nb);
unregister_switchdev_notifier(&ocelot_switchdev_nb);
struct nfp_eth_table_port *eth_port;
struct nfp_port *port;
- param->active_fec = ETHTOOL_FEC_NONE_BIT;
- param->fec = ETHTOOL_FEC_NONE_BIT;
+ param->active_fec = ETHTOOL_FEC_NONE;
+ param->fec = ETHTOOL_FEC_NONE;
port = nfp_port_from_netdev(netdev);
eth_port = nfp_port_get_eth_port(port);
cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
BIT(QED_MF_LLH_PROTO_CLSS) |
BIT(QED_MF_LL2_NON_UNICAST) |
- BIT(QED_MF_INTER_PF_SWITCH);
+ BIT(QED_MF_INTER_PF_SWITCH) |
+ BIT(QED_MF_DISABLE_ARFS);
break;
case NVM_CFG1_GLOB_MF_MODE_DEFAULT:
cdev->mf_bits = BIT(QED_MF_LLH_MAC_CLSS) |
DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
cdev->mf_bits);
+
+	/* In CMT the PF is unknown when the GFS block processes the
+	 * packet. Therefore the searcher cannot be used, as it has a
+	 * per-PF database, and thus ARFS must be disabled.
+	 */
+ if (QED_IS_CMT(cdev))
+ cdev->mf_bits |= BIT(QED_MF_DISABLE_ARFS);
}
DP_INFO(p_hwfn, "Multi function mode is 0x%lx\n",
struct qed_ptt *p_ptt,
struct qed_arfs_config_params *p_cfg_params)
{
+ if (test_bit(QED_MF_DISABLE_ARFS, &p_hwfn->cdev->mf_bits))
+ return;
+
if (p_cfg_params->mode != QED_FILTER_CONFIG_MODE_DISABLE) {
qed_gft_config(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
p_cfg_params->tcp,
dev_info->fw_eng = FW_ENGINEERING_VERSION;
dev_info->b_inter_pf_switch = test_bit(QED_MF_INTER_PF_SWITCH,
&cdev->mf_bits);
+ if (!test_bit(QED_MF_DISABLE_ARFS, &cdev->mf_bits))
+ dev_info->b_arfs_capable = true;
dev_info->tx_switching = true;
if (hw_info->b_wol_support == QED_WOL_SUPPORT_PME)
p_ramrod->personality = PERSONALITY_ETH;
break;
case QED_PCI_ETH_ROCE:
+ case QED_PCI_ETH_IWARP:
p_ramrod->personality = PERSONALITY_RDMA_AND_ETH;
break;
default:
{
int i;
+ if (!edev->dev_info.common.b_arfs_capable)
+ return -EINVAL;
+
edev->arfs = vzalloc(sizeof(*edev->arfs));
if (!edev->arfs)
return -ENOMEM;
NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
NETIF_F_TSO | NETIF_F_TSO6 | NETIF_F_HW_TC;
- if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1)
+ if (edev->dev_info.common.b_arfs_capable)
hw_features |= NETIF_F_NTUPLE;
if (edev->dev_info.common.vxlan_enable ||
qede_vlan_mark_nonconfigured(edev);
edev->ops->fastpath_stop(edev->cdev);
- if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {
+ if (edev->dev_info.common.b_arfs_capable) {
qede_poll_for_freeing_arfs_filters(edev);
qede_free_arfs(edev);
}
if (rc)
goto err2;
- if (!IS_VF(edev) && edev->dev_info.common.num_hwfns == 1) {
- rc = qede_alloc_arfs(edev);
- if (rc)
- DP_NOTICE(edev, "aRFS memory allocation failed\n");
+ if (qede_alloc_arfs(edev)) {
+ edev->ndev->features &= ~NETIF_F_NTUPLE;
+ edev->dev_info.common.b_arfs_capable = false;
}
qede_napi_add_enable(edev);
if (fcw.offset > pci_resource_len(efx->pci_dev, fcw.bar) - ESE_GZ_FCW_LEN) {
netif_err(efx, probe, efx->net_dev,
"Func control window overruns BAR\n");
+ rc = -EIO;
goto fail;
}
#include <linux/phy.h>
#include <linux/phy/phy.h>
#include <linux/delay.h>
+#include <linux/pinctrl/consumer.h>
#include <linux/pm_runtime.h>
#include <linux/gpio/consumer.h>
#include <linux/of.h>
return 0;
}
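+/* Close all open slave ports and switch to the sleep pinctrl state;
+ * cpsw_resume() restores the default pin state and reopens them.
+ */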
+static int __maybe_unused cpsw_suspend(struct device *dev)
+{
+ struct cpsw_common *cpsw = dev_get_drvdata(dev);
+ int i;
+
+ rtnl_lock();
+
+ for (i = 0; i < cpsw->data.slaves; i++) {
+ struct net_device *ndev = cpsw->slaves[i].ndev;
+
+ if (!(ndev && netif_running(ndev)))
+ continue;
+
+ cpsw_ndo_stop(ndev);
+ }
+
+ rtnl_unlock();
+
+ /* Select sleep pin state */
+ pinctrl_pm_select_sleep_state(dev);
+
+ return 0;
+}
+
+static int __maybe_unused cpsw_resume(struct device *dev)
+{
+ struct cpsw_common *cpsw = dev_get_drvdata(dev);
+ int i;
+
+ /* Select default pin state */
+ pinctrl_pm_select_default_state(dev);
+
+ /* shut up ASSERT_RTNL() warning in netif_set_real_num_tx/rx_queues */
+ rtnl_lock();
+
+ for (i = 0; i < cpsw->data.slaves; i++) {
+ struct net_device *ndev = cpsw->slaves[i].ndev;
+
+ if (!(ndev && netif_running(ndev)))
+ continue;
+
+ cpsw_ndo_open(ndev);
+ }
+
+ rtnl_unlock();
+
+ return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(cpsw_pm_ops, cpsw_suspend, cpsw_resume);
+
static struct platform_driver cpsw_driver = {
.driver = {
.name = "cpsw-switch",
+ .pm = &cpsw_pm_ops,
.of_match_table = cpsw_of_mtable,
},
.probe = cpsw_probe,
struct net_device *dev,
struct geneve_sock *gs4,
struct flowi4 *fl4,
- const struct ip_tunnel_info *info)
+ const struct ip_tunnel_info *info,
+ __be16 dport, __be16 sport)
{
bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
struct geneve_dev *geneve = netdev_priv(dev);
fl4->flowi4_proto = IPPROTO_UDP;
fl4->daddr = info->key.u.ipv4.dst;
fl4->saddr = info->key.u.ipv4.src;
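+	/* Carry the UDP ports in the flow key so the route lookup sees
+	 * the tunnel's full 4-tuple (e.g. for port-based IPsec policies).
+	 */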
+ fl4->fl4_dport = dport;
+ fl4->fl4_sport = sport;
tos = info->key.tos;
if ((tos == 1) && !geneve->cfg.collect_md) {
struct net_device *dev,
struct geneve_sock *gs6,
struct flowi6 *fl6,
- const struct ip_tunnel_info *info)
+ const struct ip_tunnel_info *info,
+ __be16 dport, __be16 sport)
{
bool use_cache = ip_tunnel_dst_cache_usable(skb, info);
struct geneve_dev *geneve = netdev_priv(dev);
fl6->flowi6_proto = IPPROTO_UDP;
fl6->daddr = info->key.u.ipv6.dst;
fl6->saddr = info->key.u.ipv6.src;
+ fl6->fl6_dport = dport;
+ fl6->fl6_sport = sport;
+
prio = info->key.tos;
if ((prio == 1) && !geneve->cfg.collect_md) {
prio = ip_tunnel_get_dsfield(ip_hdr(skb), skb);
__be16 sport;
int err;
- rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
+ sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+ geneve->cfg.info.key.tp_dst, sport);
if (IS_ERR(rt))
return PTR_ERR(rt);
return -EMSGSIZE;
}
- sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
if (geneve->cfg.collect_md) {
tos = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
ttl = key->ttl;
__be16 sport;
int err;
- dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
+ sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
+ dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
+ geneve->cfg.info.key.tp_dst, sport);
if (IS_ERR(dst))
return PTR_ERR(dst);
return -EMSGSIZE;
}
- sport = udp_flow_src_port(geneve->net, skb, 1, USHRT_MAX, true);
if (geneve->cfg.collect_md) {
prio = ip_tunnel_ecn_encap(key->tos, ip_hdr(skb), skb);
ttl = key->ttl;
{
struct ip_tunnel_info *info = skb_tunnel_info(skb);
struct geneve_dev *geneve = netdev_priv(dev);
+ __be16 sport;
if (ip_tunnel_info_af(info) == AF_INET) {
struct rtable *rt;
struct flowi4 fl4;
+
struct geneve_sock *gs4 = rcu_dereference(geneve->sock4);
+ sport = udp_flow_src_port(geneve->net, skb,
+ 1, USHRT_MAX, true);
- rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info);
+ rt = geneve_get_v4_rt(skb, dev, gs4, &fl4, info,
+ geneve->cfg.info.key.tp_dst, sport);
if (IS_ERR(rt))
return PTR_ERR(rt);
} else if (ip_tunnel_info_af(info) == AF_INET6) {
struct dst_entry *dst;
struct flowi6 fl6;
+
struct geneve_sock *gs6 = rcu_dereference(geneve->sock6);
+ sport = udp_flow_src_port(geneve->net, skb,
+ 1, USHRT_MAX, true);
- dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info);
+ dst = geneve_get_v6_dst(skb, dev, gs6, &fl6, info,
+ geneve->cfg.info.key.tp_dst, sport);
if (IS_ERR(dst))
return PTR_ERR(dst);
return -EINVAL;
}
- info->key.tp_src = udp_flow_src_port(geneve->net, skb,
- 1, USHRT_MAX, true);
+ info->key.tp_src = sport;
info->key.tp_dst = geneve->cfg.info.key.tp_dst;
return 0;
}
#define NETVSC_XDP_HDRM 256
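+/* Byte size of a transfer-page packet header carrying rng_cnt ranges */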
+#define NETVSC_XFER_HEADER_SIZE(rng_cnt) \
+ (offsetof(struct vmtransfer_page_packet_header, ranges) + \
+ (rng_cnt) * sizeof(struct vmtransfer_page_range))
+
struct multi_send_data {
struct sk_buff *skb; /* skb containing the pkt */
struct hv_netvsc_packet *pkt; /* netvsc pkt pending */
/* Serial number of the VF to team with */
u32 vf_serial;
+ /* Is the current data path through the VF NIC? */
+ bool data_path_is_vf;
+
/* Used to temporarily save the config info across hibernation */
struct netvsc_device_info *saved_netvsc_dev_info;
};
net_device->recv_section_size = resp->sections[0].sub_alloc_size;
net_device->recv_section_cnt = resp->sections[0].num_sub_allocs;
+ /* Ensure buffer will not overflow */
+	if (net_device->recv_section_size < NETVSC_MTU_MIN ||
+	    (u64)net_device->recv_section_size *
+	    (u64)net_device->recv_section_cnt > (u64)buf_size) {
+ netdev_err(ndev, "invalid recv_section_size %u\n",
+ net_device->recv_section_size);
+ ret = -EINVAL;
+ goto cleanup;
+ }
+
/* Setup receive completion ring.
* Add 1 to the recv_section_cnt because at least one entry in a
* ring buffer has to be empty.
/* Parse the response */
net_device->send_section_size = init_packet->msg.
v1_msg.send_send_buf_complete.section_size;
+ if (net_device->send_section_size < NETVSC_MTU_MIN) {
+ netdev_err(ndev, "invalid send_section_size %u\n",
+ net_device->send_section_size);
+ ret = -EINVAL;
+ goto cleanup;
+ }
/* Section count is simply the size divided by the section size. */
net_device->send_section_cnt = buf_size / net_device->send_section_size;
int budget)
{
const struct nvsp_message *nvsp_packet = hv_pkt_data(desc);
+ u32 msglen = hv_pkt_datalen(desc);
+
+ /* Ensure packet is big enough to read header fields */
+ if (msglen < sizeof(struct nvsp_message_header)) {
+ netdev_err(ndev, "nvsp_message length too small: %u\n", msglen);
+ return;
+ }
switch (nvsp_packet->hdr.msg_type) {
case NVSP_MSG_TYPE_INIT_COMPLETE:
+ if (msglen < sizeof(struct nvsp_message_header) +
+ sizeof(struct nvsp_message_init_complete)) {
+ netdev_err(ndev, "nvsp_msg length too small: %u\n",
+ msglen);
+ return;
+ }
+ fallthrough;
+
case NVSP_MSG1_TYPE_SEND_RECV_BUF_COMPLETE:
+ if (msglen < sizeof(struct nvsp_message_header) +
+ sizeof(struct nvsp_1_message_send_receive_buffer_complete)) {
+ netdev_err(ndev, "nvsp_msg1 length too small: %u\n",
+ msglen);
+ return;
+ }
+ fallthrough;
+
case NVSP_MSG1_TYPE_SEND_SEND_BUF_COMPLETE:
+ if (msglen < sizeof(struct nvsp_message_header) +
+ sizeof(struct nvsp_1_message_send_send_buffer_complete)) {
+ netdev_err(ndev, "nvsp_msg1 length too small: %u\n",
+ msglen);
+ return;
+ }
+ fallthrough;
+
case NVSP_MSG5_TYPE_SUBCHANNEL:
+ if (msglen < sizeof(struct nvsp_message_header) +
+ sizeof(struct nvsp_5_subchannel_complete)) {
+ netdev_err(ndev, "nvsp_msg5 length too small: %u\n",
+ msglen);
+ return;
+ }
/* Copy the response back */
memcpy(&net_device->channel_init_pkt, nvsp_packet,
sizeof(struct nvsp_message));
static int netvsc_receive(struct net_device *ndev,
struct netvsc_device *net_device,
struct netvsc_channel *nvchan,
- const struct vmpacket_descriptor *desc,
- const struct nvsp_message *nvsp)
+ const struct vmpacket_descriptor *desc)
{
struct net_device_context *net_device_ctx = netdev_priv(ndev);
struct vmbus_channel *channel = nvchan->channel;
const struct vmtransfer_page_packet_header *vmxferpage_packet
= container_of(desc, const struct vmtransfer_page_packet_header, d);
+ const struct nvsp_message *nvsp = hv_pkt_data(desc);
+ u32 msglen = hv_pkt_datalen(desc);
u16 q_idx = channel->offermsg.offer.sub_channel_index;
char *recv_buf = net_device->recv_buf;
u32 status = NVSP_STAT_SUCCESS;
int i;
int count = 0;
+ /* Ensure packet is big enough to read header fields */
+ if (msglen < sizeof(struct nvsp_message_header)) {
+ netif_err(net_device_ctx, rx_err, ndev,
+ "invalid nvsp header, length too small: %u\n",
+ msglen);
+ return 0;
+ }
+
/* Make sure this is a valid nvsp packet */
if (unlikely(nvsp->hdr.msg_type != NVSP_MSG1_TYPE_SEND_RNDIS_PKT)) {
netif_err(net_device_ctx, rx_err, ndev,
return 0;
}
+ /* Validate xfer page pkt header */
+ if ((desc->offset8 << 3) < sizeof(struct vmtransfer_page_packet_header)) {
+ netif_err(net_device_ctx, rx_err, ndev,
+ "Invalid xfer page pkt, offset too small: %u\n",
+ desc->offset8 << 3);
+ return 0;
+ }
+
if (unlikely(vmxferpage_packet->xfer_pageset_id != NETVSC_RECEIVE_BUFFER_ID)) {
netif_err(net_device_ctx, rx_err, ndev,
"Invalid xfer page set id - expecting %x got %x\n",
count = vmxferpage_packet->range_cnt;
+ /* Check count for a valid value */
+ if (NETVSC_XFER_HEADER_SIZE(count) > desc->offset8 << 3) {
+ netif_err(net_device_ctx, rx_err, ndev,
+ "Range count is not valid: %d\n",
+ count);
+ return 0;
+ }
+
/* Each range represents 1 RNDIS pkt that contains 1 ethernet frame */
for (i = 0; i < count; i++) {
u32 offset = vmxferpage_packet->ranges[i].byte_offset;
void *data;
int ret;
- if (unlikely(offset + buflen > net_device->recv_buf_size)) {
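+		/* Compare without computing "offset + buflen", which could
+		 * overflow and defeat the bounds check.
+		 */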
+ if (unlikely(offset > net_device->recv_buf_size ||
+ buflen > net_device->recv_buf_size - offset)) {
nvchan->rsc.cnt = 0;
status = NVSP_STAT_FAIL;
netif_err(net_device_ctx, rx_err, ndev,
u32 count, offset, *tab;
int i;
+ /* Ensure packet is big enough to read send_table fields */
+ if (msglen < sizeof(struct nvsp_message_header) +
+ sizeof(struct nvsp_5_send_indirect_table)) {
+ netdev_err(ndev, "nvsp_v5_msg length too small: %u\n", msglen);
+ return;
+ }
+
count = nvmsg->msg.v5_msg.send_table.count;
offset = nvmsg->msg.v5_msg.send_table.offset;
}
static void netvsc_send_vf(struct net_device *ndev,
- const struct nvsp_message *nvmsg)
+ const struct nvsp_message *nvmsg,
+ u32 msglen)
{
struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ /* Ensure packet is big enough to read its fields */
+ if (msglen < sizeof(struct nvsp_message_header) +
+ sizeof(struct nvsp_4_send_vf_association)) {
+ netdev_err(ndev, "nvsp_v4_msg length too small: %u\n", msglen);
+ return;
+ }
+
net_device_ctx->vf_alloc = nvmsg->msg.v4_msg.vf_assoc.allocated;
net_device_ctx->vf_serial = nvmsg->msg.v4_msg.vf_assoc.serial;
netdev_info(ndev, "VF slot %u %s\n",
static void netvsc_receive_inband(struct net_device *ndev,
struct netvsc_device *nvscdev,
- const struct nvsp_message *nvmsg,
- u32 msglen)
+ const struct vmpacket_descriptor *desc)
{
+ const struct nvsp_message *nvmsg = hv_pkt_data(desc);
+ u32 msglen = hv_pkt_datalen(desc);
+
+ /* Ensure packet is big enough to read header fields */
+ if (msglen < sizeof(struct nvsp_message_header)) {
+ netdev_err(ndev, "inband nvsp_message length too small: %u\n", msglen);
+ return;
+ }
+
switch (nvmsg->hdr.msg_type) {
case NVSP_MSG5_TYPE_SEND_INDIRECTION_TABLE:
netvsc_send_table(ndev, nvscdev, nvmsg, msglen);
break;
case NVSP_MSG4_TYPE_SEND_VF_ASSOCIATION:
- netvsc_send_vf(ndev, nvmsg);
+ netvsc_send_vf(ndev, nvmsg, msglen);
break;
}
}
{
struct vmbus_channel *channel = nvchan->channel;
const struct nvsp_message *nvmsg = hv_pkt_data(desc);
- u32 msglen = hv_pkt_datalen(desc);
trace_nvsp_recv(ndev, channel, nvmsg);
switch (desc->type) {
case VM_PKT_COMP:
- netvsc_send_completion(ndev, net_device, channel,
- desc, budget);
+ netvsc_send_completion(ndev, net_device, channel, desc, budget);
break;
case VM_PKT_DATA_USING_XFER_PAGES:
- return netvsc_receive(ndev, net_device, nvchan,
- desc, nvmsg);
+ return netvsc_receive(ndev, net_device, nvchan, desc);
break;
case VM_PKT_DATA_INBAND:
- netvsc_receive_inband(ndev, net_device, nvmsg, msglen);
+ netvsc_receive_inband(ndev, net_device, desc);
break;
default:
struct netvsc_reconfig *event;
unsigned long flags;
+ /* Ensure the packet is big enough to access its fields */
+ if (resp->msg_len - RNDIS_HEADER_SIZE < sizeof(struct rndis_indicate_status)) {
+ netdev_err(net, "invalid rndis_indicate_status packet, len: %u\n",
+ resp->msg_len);
+ return;
+ }
+
/* Update the physical link speed when changing to another vSwitch */
if (indicate->status == RNDIS_STATUS_LINK_SPEED_CHANGE) {
u32 speed;
return NOTIFY_OK;
}
-/* VF up/down change detected, schedule to change data path */
+/* Change the data path when VF UP/DOWN/CHANGE are detected.
+ *
+ * Typically a UP or DOWN event is followed by a CHANGE event, so
+ * net_device_ctx->data_path_is_vf is used to cache the current data path
+ * to avoid the duplicate call of netvsc_switch_datapath() and the duplicate
+ * message.
+ *
+ * During hibernation, if a VF NIC driver (e.g. mlx5) preserves the network
+ * interface, there is only the CHANGE event and no UP or DOWN event.
+ */
static int netvsc_vf_changed(struct net_device *vf_netdev)
{
struct net_device_context *net_device_ctx;
if (!netvsc_dev)
return NOTIFY_DONE;
+ if (net_device_ctx->data_path_is_vf == vf_is_up)
+ return NOTIFY_OK;
+ net_device_ctx->data_path_is_vf = vf_is_up;
+
netvsc_switch_datapath(ndev, vf_is_up);
netdev_info(ndev, "Data path switched %s VF: %s\n",
vf_is_up ? "to" : "from", vf_netdev->name);
static int netvsc_suspend(struct hv_device *dev)
{
struct net_device_context *ndev_ctx;
- struct net_device *vf_netdev, *net;
struct netvsc_device *nvdev;
+ struct net_device *net;
int ret;
net = hv_get_drvdata(dev);
goto out;
}
- vf_netdev = rtnl_dereference(ndev_ctx->vf_netdev);
- if (vf_netdev)
- netvsc_unregister_vf(vf_netdev);
-
/* Save the current config info */
ndev_ctx->saved_netvsc_dev_info = netvsc_devinfo_get(nvdev);
rtnl_lock();
net_device_ctx = netdev_priv(net);
+
+ /* Reset the data path to the netvsc NIC before re-opening the vmbus
+ * channel. Later netvsc_netdev_event() will switch the data path to
+ * the VF upon the UP or CHANGE event.
+ */
+ net_device_ctx->data_path_is_vf = false;
device_info = net_device_ctx->saved_netvsc_dev_info;
ret = netvsc_attach(net, device_info);
return netvsc_unregister_vf(event_dev);
case NETDEV_UP:
case NETDEV_DOWN:
+ case NETDEV_CHANGE:
return netvsc_vf_changed(event_dev);
default:
return NOTIFY_DONE;
return;
}
+ /* Ensure the packet is big enough to read req_id. Req_id is the 1st
+ * field in any request/response message, so the payload should have at
+ * least sizeof(u32) bytes
+ */
+ if (resp->msg_len - RNDIS_HEADER_SIZE < sizeof(u32)) {
+ netdev_err(ndev, "rndis msg_len too small: %u\n",
+ resp->msg_len);
+ return;
+ }
+
spin_lock_irqsave(&dev->request_lock, flags);
list_for_each_entry(request, &dev->req_list, list_ent) {
/*
* Get the Per-Packet-Info with the specified type
* return NULL if not found.
*/
-static inline void *rndis_get_ppi(struct rndis_packet *rpkt,
- u32 type, u8 internal)
+static inline void *rndis_get_ppi(struct net_device *ndev,
+ struct rndis_packet *rpkt,
+ u32 rpkt_len, u32 type, u8 internal)
{
struct rndis_per_packet_info *ppi;
int len;
if (rpkt->per_pkt_info_offset == 0)
return NULL;
+ /* Validate info_offset and info_len */
+ if (rpkt->per_pkt_info_offset < sizeof(struct rndis_packet) ||
+ rpkt->per_pkt_info_offset > rpkt_len) {
+ netdev_err(ndev, "Invalid per_pkt_info_offset: %u\n",
+ rpkt->per_pkt_info_offset);
+ return NULL;
+ }
+
+ if (rpkt->per_pkt_info_len > rpkt_len - rpkt->per_pkt_info_offset) {
+ netdev_err(ndev, "Invalid per_pkt_info_len: %u\n",
+ rpkt->per_pkt_info_len);
+ return NULL;
+ }
+
ppi = (struct rndis_per_packet_info *)((ulong)rpkt +
rpkt->per_pkt_info_offset);
len = rpkt->per_pkt_info_len;
while (len > 0) {
+		/* Validate ppi_offset and ppi_size; bail out on a malformed
+		 * entry, since continuing without advancing ppi would loop
+		 * forever.
+		 */
+		if (ppi->size > len) {
+			netdev_err(ndev, "Invalid ppi size: %u\n", ppi->size);
+			return NULL;
+		}
+
+		if (ppi->ppi_offset >= ppi->size) {
+			netdev_err(ndev, "Invalid ppi_offset: %u\n", ppi->ppi_offset);
+			return NULL;
+		}
+
if (ppi->type == type && ppi->internal == internal)
return (void *)((ulong)ppi + ppi->ppi_offset);
len -= ppi->size;
const struct ndis_pkt_8021q_info *vlan;
const struct rndis_pktinfo_id *pktinfo_id;
const u32 *hash_info;
- u32 data_offset;
+ u32 data_offset, rpkt_len;
void *data;
bool rsc_more = false;
int ret;
+ /* Ensure data_buflen is big enough to read header fields */
+ if (data_buflen < RNDIS_HEADER_SIZE + sizeof(struct rndis_packet)) {
+ netdev_err(ndev, "invalid rndis pkt, data_buflen too small: %u\n",
+ data_buflen);
+ return NVSP_STAT_FAIL;
+ }
+
+ /* Validate rndis_pkt offset */
+ if (rndis_pkt->data_offset >= data_buflen - RNDIS_HEADER_SIZE) {
+ netdev_err(ndev, "invalid rndis packet offset: %u\n",
+ rndis_pkt->data_offset);
+ return NVSP_STAT_FAIL;
+ }
+
/* Remove the rndis header and pass it back up the stack */
data_offset = RNDIS_HEADER_SIZE + rndis_pkt->data_offset;
+ rpkt_len = data_buflen - RNDIS_HEADER_SIZE;
data_buflen -= data_offset;
/*
return NVSP_STAT_FAIL;
}
- vlan = rndis_get_ppi(rndis_pkt, IEEE_8021Q_INFO, 0);
+ vlan = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, IEEE_8021Q_INFO, 0);
- csum_info = rndis_get_ppi(rndis_pkt, TCPIP_CHKSUM_PKTINFO, 0);
+ csum_info = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, TCPIP_CHKSUM_PKTINFO, 0);
- hash_info = rndis_get_ppi(rndis_pkt, NBL_HASH_VALUE, 0);
+ hash_info = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, NBL_HASH_VALUE, 0);
- pktinfo_id = rndis_get_ppi(rndis_pkt, RNDIS_PKTINFO_ID, 1);
+ pktinfo_id = rndis_get_ppi(ndev, rndis_pkt, rpkt_len, RNDIS_PKTINFO_ID, 1);
data = (void *)msg + data_offset;
if (netif_msg_rx_status(net_device_ctx))
dump_rndis_message(ndev, rndis_msg);
+ /* Validate incoming rndis_message packet */
+ if (buflen < RNDIS_HEADER_SIZE || rndis_msg->msg_len < RNDIS_HEADER_SIZE ||
+ buflen < rndis_msg->msg_len) {
+ netdev_err(ndev, "Invalid rndis_msg (buflen: %u, msg_len: %u)\n",
+ buflen, rndis_msg->msg_len);
+ return NVSP_STAT_FAIL;
+ }
+
switch (rndis_msg->ndis_msg_type) {
case RNDIS_MSG_PACKET:
return rndis_filter_receive_data(ndev, net_dev, nvchan,
int ret;
u8 lqi, len_u8, *data;
- adf7242_read_reg(lp, 0, &len_u8);
+ ret = adf7242_read_reg(lp, 0, &len_u8);
+ if (ret)
+ return ret;
len = len_u8;
);
if (!priv->irq_workqueue) {
dev_crit(&priv->spi->dev, "alloc of irq_workqueue failed!\n");
+ destroy_workqueue(priv->mlme_workqueue);
return -ENOMEM;
}
val = ioread32(endpoint->ipa->reg_virt + offset);
/* Zero all filter-related fields, preserving the rest */
- u32_replace_bits(val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
+ u32p_replace_bits(&val, 0, IPA_REG_ENDP_FILTER_HASH_MSK_ALL);
iowrite32(val, endpoint->ipa->reg_virt + offset);
}
val = ioread32(ipa->reg_virt + offset);
/* Zero all route-related fields, preserving the rest */
- u32_replace_bits(val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
+ u32p_replace_bits(&val, 0, IPA_REG_ENDP_ROUTER_HASH_MSK_ALL);
iowrite32(val, ipa->reg_virt + offset);
}
{
struct net_device *dev = phydev->attached_dev;
- if (!phy_is_started(phydev)) {
+ if (!phy_is_started(phydev) && phydev->state != PHY_DOWN) {
WARN(1, "called from state %s\n",
phy_state_to_str(phydev->state));
return;
if (ret < 0)
return ret;
- ret = phy_disable_interrupts(phydev);
- if (ret)
- return ret;
-
if (phydev->drv->config_init)
ret = phydev->drv->config_init(phydev);
if (err)
goto error;
+ err = phy_disable_interrupts(phydev);
+ if (err)
+ return err;
+
phy_resume(phydev);
phy_led_triggers_register(phydev);
phy_led_triggers_unregister(phydev);
- module_put(phydev->mdio.dev.driver->owner);
+ if (phydev->mdio.dev.driver)
+ module_put(phydev->mdio.dev.driver->owner);
/* If the device had no specific driver before (i.e. - it
* was using the generic driver), we unbind the device
dev_dbg(&info->control->dev,
"rndis response error, code %d\n", retval);
}
- msleep(20);
+ msleep(40);
}
dev_dbg(&info->control->dev, "rndis response timeout\n");
return -ETIMEDOUT;
skb_put(skb, sizeof(struct cisco_packet));
skb->priority = TC_PRIO_CONTROL;
skb->dev = dev;
+ skb->protocol = htons(ETH_P_HDLC);
skb_reset_network_header(skb);
dev_queue_xmit(skb);
if (pvc->state.fecn) /* TX Congestion counter */
dev->stats.tx_compressed++;
skb->dev = pvc->frad;
+ skb->protocol = htons(ETH_P_HDLC);
+ skb_reset_network_header(skb);
dev_queue_xmit(skb);
return NETDEV_TX_OK;
}
skb_put(skb, i);
skb->priority = TC_PRIO_CONTROL;
skb->dev = dev;
+ skb->protocol = htons(ETH_P_HDLC);
skb_reset_network_header(skb);
dev_queue_xmit(skb);
{
dev->type = ARPHRD_DLCI;
dev->flags = IFF_POINTOPOINT;
- dev->hard_header_len = 10;
+ dev->hard_header_len = 0;
dev->addr_len = 2;
netif_keep_dst(dev);
}
dev->mtu = HDLC_MAX_MTU;
dev->min_mtu = 68;
dev->max_mtu = HDLC_MAX_MTU;
+ dev->needed_headroom = 10;
dev->priv_flags |= IFF_NO_QUEUE;
dev->ml_priv = pvc;
skb->priority = TC_PRIO_CONTROL;
skb->dev = dev;
+ skb->protocol = htons(ETH_P_HDLC);
skb_reset_network_header(skb);
skb_queue_tail(&tx_queue, skb);
}
}
for (opt = data; len; len -= opt[1], opt += opt[1]) {
- if (len < 2 || len < opt[1]) {
- dev->stats.rx_errors++;
- kfree(out);
- return; /* bad packet, drop silently */
- }
+ if (len < 2 || opt[1] < 2 || len < opt[1])
+ goto err_out;
if (pid == PID_LCP)
switch (opt[0]) {
continue; /* MRU always OK and > 1500 bytes? */
case LCP_OPTION_ACCM: /* async control character map */
+ if (opt[1] < sizeof(valid_accm))
+ goto err_out;
if (!memcmp(opt, valid_accm,
sizeof(valid_accm)))
continue;
}
break;
case LCP_OPTION_MAGIC:
+ if (len < 6)
+ goto err_out;
if (opt[1] != 6 || (!opt[2] && !opt[3] &&
!opt[4] && !opt[5]))
break; /* reject invalid magic number */
ppp_cp_event(dev, pid, RCR_GOOD, CP_CONF_ACK, id, req_len, data);
kfree(out);
+ return;
+
+err_out:
+ dev->stats.rx_errors++;
+ kfree(out);
}
static int ppp_rx(struct sk_buff *skb)
struct net_device *dev;
int size = skb->len;
- skb->protocol = htons(ETH_P_X25);
-
ptr = skb_push(skb, 2);
*ptr++ = size % 256;
skb->dev = dev = lapbeth->ethdev;
+ skb->protocol = htons(ETH_P_DEC);
+
skb_reset_network_header(skb);
dev_hard_header(skb, dev, ETH_P_DEC, bcast_addr, NULL, 0);
void wg_noise_handshake_clear(struct noise_handshake *handshake)
{
+ down_write(&handshake->lock);
wg_index_hashtable_remove(
handshake->entry.peer->device->index_hashtable,
&handshake->entry);
- down_write(&handshake->lock);
handshake_zero(handshake);
up_write(&handshake->lock);
- wg_index_hashtable_remove(
- handshake->entry.peer->device->index_hashtable,
- &handshake->entry);
}
static struct noise_keypair *keypair_create(struct wg_peer *peer)
struct index_hashtable_entry *old,
struct index_hashtable_entry *new)
{
- if (unlikely(hlist_unhashed(&old->index_hash)))
- return false;
+ bool ret;
+
spin_lock_bh(&table->lock);
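+	/* Check that the old entry is still hashed while holding the
+	 * lock, so the check and the replacement below are atomic with
+	 * respect to a concurrent removal.
+	 */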
+ ret = !hlist_unhashed(&old->index_hash);
+ if (unlikely(!ret))
+ goto out;
+
new->index = old->index;
hlist_replace_rcu(&old->index_hash, &new->index_hash);
* simply gets dropped, which isn't terrible.
*/
INIT_HLIST_NODE(&old->index_hash);
+out:
spin_unlock_bh(&table->lock);
- return true;
+ return ret;
}
void wg_index_hashtable_remove(struct index_hashtable *table,
/* To check if there's window offered */
static bool data_ok(struct brcmf_sdio *bus)
{
- /* Reserve TXCTL_CREDITS credits for txctl */
- return (bus->tx_max - bus->tx_seq) > TXCTL_CREDITS &&
- ((bus->tx_max - bus->tx_seq) & 0x80) == 0;
+ u8 tx_rsv = 0;
+
+ /* Reserve TXCTL_CREDITS credits for txctl when it is ready to send */
+ if (bus->ctrl_frame_stat)
+ tx_rsv = TXCTL_CREDITS;
+
+ return (bus->tx_max - bus->tx_seq - tx_rsv) != 0 &&
+ ((bus->tx_max - bus->tx_seq - tx_rsv) & 0x80) == 0;
}
/* To check if there's window offered */
struct mwifiex_aes_param {
u8 pn[WPA_PN_SIZE];
__le16 key_len;
- u8 key[WLAN_KEY_LEN_CCMP];
+ u8 key[WLAN_KEY_LEN_CCMP_256];
} __packed;
struct mwifiex_wapi_param {
key_v2 = &resp->params.key_material_v2;
len = le16_to_cpu(key_v2->key_param_set.key_params.aes.key_len);
- if (len > WLAN_KEY_LEN_CCMP)
+ if (len > sizeof(key_v2->key_param_set.key_params.aes.key))
return -EINVAL;
if (le16_to_cpu(key_v2->action) == HostCmd_ACT_GEN_SET) {
return 0;
memset(priv->aes_key_v2.key_param_set.key_params.aes.key, 0,
- WLAN_KEY_LEN_CCMP);
+ sizeof(key_v2->key_param_set.key_params.aes.key));
priv->aes_key_v2.key_param_set.key_params.aes.key_len =
cpu_to_le16(len);
memcpy(priv->aes_key_v2.key_param_set.key_params.aes.key,
sizeof(dev->mt76.hw->wiphy->fw_version),
"%.10s-%.15s", hdr->fw_ver, hdr->build_date);
- if (!strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) {
+ if (!is_mt7615(&dev->mt76) &&
+ !strncmp(hdr->fw_ver, "2.0", sizeof(hdr->fw_ver))) {
dev->fw_ver = MT7615_FIRMWARE_V2;
dev->mcu_ops = &sta_update_ops;
} else {
spin_lock_bh(&dev->token_lock);
idr_for_each_entry(&dev->token, txwi, id) {
mt7915_txp_skb_unmap(&dev->mt76, txwi);
- if (txwi->skb)
- dev_kfree_skb_any(txwi->skb);
+ if (txwi->skb) {
+ struct ieee80211_hw *hw;
+
+ hw = mt76_tx_status_get_hw(&dev->mt76, txwi->skb);
+ ieee80211_free_txskb(hw, txwi->skb);
+ }
mt76_put_txwi(&dev->mt76, txwi);
}
spin_unlock_bh(&dev->token_lock);
if (sta || !(info->flags & IEEE80211_TX_CTL_NO_ACK))
mt7915_tx_status(sta, hw, info, NULL);
- dev_kfree_skb(skb);
+ ieee80211_free_txskb(hw, skb);
}
void mt7915_txp_skb_unmap(struct mt76_dev *dev,
KEY_TKIP = 2,
KEY_AES = 3,
KEY_GEM = 4,
- KEY_IGTK = 5,
};
struct wl1271_cmd_set_keys {
case WL1271_CIPHER_SUITE_GEM:
key_type = KEY_GEM;
break;
- case WLAN_CIPHER_SUITE_AES_CMAC:
- key_type = KEY_IGTK;
- break;
default:
wl1271_error("Unknown key algo 0x%x", key_conf->cipher);
WLAN_CIPHER_SUITE_TKIP,
WLAN_CIPHER_SUITE_CCMP,
WL1271_CIPHER_SUITE_GEM,
- WLAN_CIPHER_SUITE_AES_CMAC,
};
/* The tx descriptor buffer */
depends on INET
depends on BLK_DEV_NVME
select NVME_FABRICS
+ select CRYPTO
select CRYPTO_CRC32C
help
This provides support for the NVMe over Fabrics protocol using
if (!cel)
return -ENOMEM;
- ret = nvme_get_log(ctrl, NVME_NSID_ALL, NVME_LOG_CMD_EFFECTS, 0, csi,
+ ret = nvme_get_log(ctrl, 0x00, NVME_LOG_CMD_EFFECTS, 0, csi,
&cel->log, sizeof(cel->log), 0);
if (ret) {
kfree(cel);
if (ret < 0)
return ret;
- if (!ctrl->identified)
- nvme_hwmon_init(ctrl);
+ if (!ctrl->identified) {
+ ret = nvme_hwmon_init(ctrl);
+ if (ret < 0)
+ return ret;
+ }
ctrl->identified = true;
return -EWOULDBLOCK;
}
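+	/* Pin the controller and its transport module while the chardev
+	 * is open; both references are dropped in nvme_dev_release().
+	 */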
+	nvme_get_ctrl(ctrl);
+	if (!try_module_get(ctrl->ops->module)) {
+		nvme_put_ctrl(ctrl);
+		return -EINVAL;
+	}
+
file->private_data = ctrl;
return 0;
}
+static int nvme_dev_release(struct inode *inode, struct file *file)
+{
+ struct nvme_ctrl *ctrl =
+ container_of(inode->i_cdev, struct nvme_ctrl, cdev);
+
+ module_put(ctrl->ops->module);
+ nvme_put_ctrl(ctrl);
+ return 0;
+}
+
static int nvme_dev_user_cmd(struct nvme_ctrl *ctrl, void __user *argp)
{
struct nvme_ns *ns;
static const struct file_operations nvme_dev_fops = {
.owner = THIS_MODULE,
.open = nvme_dev_open,
+ .release = nvme_dev_release,
.unlocked_ioctl = nvme_dev_ioctl,
.compat_ioctl = compat_ptr_ioctl,
};
spin_lock_irqsave(&nvme_fc_lock, flags);
list_for_each_entry(lport, &nvme_fc_lport_list, port_list) {
if (lport->localport.node_name != laddr.nn ||
- lport->localport.port_name != laddr.pn)
+ lport->localport.port_name != laddr.pn ||
+ lport->localport.port_state != FC_OBJSTATE_ONLINE)
continue;
list_for_each_entry(rport, &lport->endp_list, endp_list) {
if (rport->remoteport.node_name != raddr.nn ||
- rport->remoteport.port_name != raddr.pn)
+ rport->remoteport.port_name != raddr.pn ||
+ rport->remoteport.port_state != FC_OBJSTATE_ONLINE)
continue;
/* if fail to get reference fall through. Will error */
static int nvme_hwmon_get_smart_log(struct nvme_hwmon_data *data)
{
- int ret;
-
- ret = nvme_get_log(data->ctrl, NVME_NSID_ALL, NVME_LOG_SMART, 0,
+ return nvme_get_log(data->ctrl, NVME_NSID_ALL, NVME_LOG_SMART, 0,
NVME_CSI_NVM, &data->log, sizeof(data->log), 0);
-
- return ret <= 0 ? ret : -EIO;
}
static int nvme_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
.info = nvme_hwmon_info,
};
-void nvme_hwmon_init(struct nvme_ctrl *ctrl)
+int nvme_hwmon_init(struct nvme_ctrl *ctrl)
{
struct device *dev = ctrl->dev;
struct nvme_hwmon_data *data;
data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL);
if (!data)
- return;
+ return 0;
data->ctrl = ctrl;
mutex_init(&data->read_lock);
dev_warn(ctrl->device,
"Failed to read smart log (error %d)\n", err);
devm_kfree(dev, data);
- return;
+ return err;
}
hwmon = devm_hwmon_device_register_with_info(dev, "nvme", data,
dev_warn(dev, "Failed to instantiate hwmon device\n");
devm_kfree(dev, data);
}
+
+ return 0;
}
}
#ifdef CONFIG_NVME_HWMON
-void nvme_hwmon_init(struct nvme_ctrl *ctrl);
+int nvme_hwmon_init(struct nvme_ctrl *ctrl);
#else
-static inline void nvme_hwmon_init(struct nvme_ctrl *ctrl) { }
+static inline int nvme_hwmon_init(struct nvme_ctrl *ctrl)
+{
+ return 0;
+}
#endif
u32 nvme_command_effects(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
struct nvme_completion *cqe = &nvmeq->cqes[idx];
struct request *req;
- if (unlikely(cqe->command_id >= nvmeq->q_depth)) {
- dev_warn(nvmeq->dev->ctrl.device,
- "invalid id %d completed on queue %d\n",
- cqe->command_id, le16_to_cpu(cqe->sq_id));
- return;
- }
-
/*
* AEN requests are special as they don't time out and can
* survive any kind of queue freeze and often don't respond to
}
req = blk_mq_tag_to_rq(nvme_queue_tagset(nvmeq), cqe->command_id);
+ if (unlikely(!req)) {
+ dev_warn(nvmeq->dev->ctrl.device,
+ "invalid id %d completed on queue %d\n",
+ cqe->command_id, le16_to_cpu(cqe->sq_id));
+ return;
+ }
+
trace_nvme_sq(req, cqe->sq_head, nvmeq->sq_tail);
if (!nvme_try_complete_req(req, cqe->status, cqe->result))
nvme_pci_complete_rq(req);
{ PCI_VDEVICE(INTEL, 0xf1a5), /* Intel 600P/P3100 */
.driver_data = NVME_QUIRK_NO_DEEPEST_PS |
NVME_QUIRK_MEDIUM_PRIO_SQ |
- NVME_QUIRK_NO_TEMP_THRESH_CHANGE },
+ NVME_QUIRK_NO_TEMP_THRESH_CHANGE |
+ NVME_QUIRK_DISABLE_WRITE_ZEROES, },
{ PCI_VDEVICE(INTEL, 0xf1a6), /* Intel 760p/Pro 7600p */
.driver_data = NVME_QUIRK_IGNORE_DEV_SUBNQN, },
{ PCI_VDEVICE(INTEL, 0x5845), /* Qemu emulated controller */
subsys->ver = NVME_VS(1, 2, 1);
}
+ __module_get(subsys->passthru_ctrl->ops->module);
mutex_unlock(&subsys->lock);
return 0;
{
if (subsys->passthru_ctrl) {
xa_erase(&passthru_subsystems, subsys->passthru_ctrl->cntlid);
+ module_put(subsys->passthru_ctrl->ops->module);
nvme_put_ctrl(subsys->passthru_ctrl);
}
subsys->passthru_ctrl = NULL;
X86_MATCH_INTEL_FAM6_MODEL(ROCKETLAKE, &rapl_defaults_core),
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, &rapl_defaults_core),
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, &rapl_defaults_spr_server),
+ X86_MATCH_INTEL_FAM6_MODEL(LAKEFIELD, &rapl_defaults_core),
X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT, &rapl_defaults_byt),
X86_MATCH_INTEL_FAM6_MODEL(ATOM_AIRMONT, &rapl_defaults_cht),
#define AXP20X_DCDC2_V_OUT_MASK GENMASK(5, 0)
#define AXP20X_DCDC3_V_OUT_MASK GENMASK(7, 0)
-#define AXP20X_LDO24_V_OUT_MASK GENMASK(7, 4)
+#define AXP20X_LDO2_V_OUT_MASK GENMASK(7, 4)
#define AXP20X_LDO3_V_OUT_MASK GENMASK(6, 0)
+#define AXP20X_LDO4_V_OUT_MASK GENMASK(3, 0)
#define AXP20X_LDO5_V_OUT_MASK GENMASK(7, 4)
#define AXP20X_PWR_OUT_EXTEN_MASK BIT_MASK(0)
AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_DCDC3_MASK),
AXP_DESC_FIXED(AXP20X, LDO1, "ldo1", "acin", 1300),
AXP_DESC(AXP20X, LDO2, "ldo2", "ldo24in", 1800, 3300, 100,
- AXP20X_LDO24_V_OUT, AXP20X_LDO24_V_OUT_MASK,
+ AXP20X_LDO24_V_OUT, AXP20X_LDO2_V_OUT_MASK,
AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_LDO2_MASK),
AXP_DESC(AXP20X, LDO3, "ldo3", "ldo3in", 700, 3500, 25,
AXP20X_LDO3_V_OUT, AXP20X_LDO3_V_OUT_MASK,
AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_LDO3_MASK),
AXP_DESC_RANGES(AXP20X, LDO4, "ldo4", "ldo24in",
axp20x_ldo4_ranges, AXP20X_LDO4_V_OUT_NUM_VOLTAGES,
- AXP20X_LDO24_V_OUT, AXP20X_LDO24_V_OUT_MASK,
+ AXP20X_LDO24_V_OUT, AXP20X_LDO4_V_OUT_MASK,
AXP20X_PWR_OUT_CTRL, AXP20X_PWR_OUT_LDO4_MASK),
AXP_DESC_IO(AXP20X, LDO5, "ldo5", "ldo5in", 1800, 3300, 100,
AXP20X_LDO5_V_OUT, AXP20X_LDO5_V_OUT_MASK,
MODULE_LICENSE("GPL");
static struct dasd_discipline dasd_fba_discipline;
+static void *dasd_fba_zero_page;
struct dasd_fba_private {
struct dasd_fba_characteristics rdc_data;
ccw->cmd_code = DASD_FBA_CCW_WRITE;
ccw->flags |= CCW_FLAG_SLI;
ccw->count = count;
- ccw->cda = (__u32) (addr_t) page_to_phys(ZERO_PAGE(0));
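+	/* ccw->cda holds a 31-bit address, so point it at a driver-owned
+	 * zero page allocated with GFP_DMA instead of ZERO_PAGE(0), which
+	 * may lie above the 2 GB line.
+	 */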
+ ccw->cda = (__u32) (addr_t) dasd_fba_zero_page;
}
/*
int ret;
ASCEBC(dasd_fba_discipline.ebcname, 4);
+
+ dasd_fba_zero_page = (void *)get_zeroed_page(GFP_KERNEL | GFP_DMA);
+ if (!dasd_fba_zero_page)
+ return -ENOMEM;
+
ret = ccw_driver_register(&dasd_fba_driver);
if (!ret)
wait_for_device_probe();
dasd_fba_cleanup(void)
{
ccw_driver_unregister(&dasd_fba_driver);
+ free_page((unsigned long)dasd_fba_zero_page);
}
module_init(dasd_fba_init);
if (!reqcnt)
return -ENOMEM;
zcrypt_perdev_reqcnt(reqcnt, AP_DEVICES);
- if (copy_to_user((int __user *) arg, reqcnt, sizeof(reqcnt)))
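+		/* reqcnt is a pointer here, so sizeof(reqcnt) would only
+		 * copy the pointer size; copy the full array instead.
+		 */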
+ if (copy_to_user((int __user *) arg, reqcnt,
+ sizeof(u32) * AP_DEVICES))
rc = -EFAULT;
kfree(reqcnt);
return rc;
*nr_apqns = 0;
/* fetch status of all crypto cards */
- device_status = kmalloc_array(MAX_ZDEV_ENTRIES_EXT,
- sizeof(struct zcrypt_device_status_ext),
- GFP_KERNEL);
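+	/* The status array is large (MAX_ZDEV_ENTRIES_EXT entries);
+	 * kvmalloc_array() falls back to vmalloc when contiguous pages
+	 * are unavailable.
+	 */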
+ device_status = kvmalloc_array(MAX_ZDEV_ENTRIES_EXT,
+ sizeof(struct zcrypt_device_status_ext),
+ GFP_KERNEL);
if (!device_status)
return -ENOMEM;
zcrypt_device_status_mask_ext(device_status);
verify = 0;
}
- kfree(device_status);
+ kvfree(device_status);
return rc;
}
EXPORT_SYMBOL(cca_findcard2);
if (card->state == CARD_STATE_SOFTSETUP) {
qeth_clear_ipacmd_list(card);
- qeth_drain_output_queues(card);
card->state = CARD_STATE_DOWN;
}
qeth_qdio_clear_card(card, 0);
+ qeth_drain_output_queues(card);
qeth_clear_working_pool_list(card);
flush_workqueue(card->event_wq);
qeth_flush_local_addrs(card);
if (card->state == CARD_STATE_SOFTSETUP) {
qeth_l3_clear_ip_htable(card, 1);
qeth_clear_ipacmd_list(card);
- qeth_drain_output_queues(card);
card->state = CARD_STATE_DOWN;
}
qeth_qdio_clear_card(card, 0);
+ qeth_drain_output_queues(card);
qeth_clear_working_pool_list(card);
flush_workqueue(card->event_wq);
qeth_flush_local_addrs(card);
pr_warn("driver on host %s cannot handle device %016llx, error:%d\n",
dev_name(sas_ha->dev),
SAS_ADDR(dev->sas_addr), res);
+ return res;
}
set_bit(SAS_DEV_FOUND, &dev->state);
kref_get(&dev->kref);
- return res;
+ return 0;
}
static void lpfc_disc_flush_list(struct lpfc_vport *vport);
static void lpfc_unregister_fcfi_cmpl(struct lpfc_hba *, LPFC_MBOXQ_t *);
static int lpfc_fcf_inuse(struct lpfc_hba *);
+static void lpfc_mbx_cmpl_read_sparam(struct lpfc_hba *, LPFC_MBOXQ_t *);
void
lpfc_terminate_rport_io(struct fc_rport *rport)
return;
}
-
void
lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
{
struct lpfc_vport *vport = pmb->vport;
+ LPFC_MBOXQ_t *sparam_mb;
+ struct lpfc_dmabuf *sparam_mp;
+ int rc;
if (pmb->u.mb.mbxStatus)
goto out;
}
/* Start discovery by sending a FLOGI. port_state is identically
- * LPFC_FLOGI while waiting for FLOGI cmpl. Check if sending
- * the FLOGI is being deferred till after MBX_READ_SPARAM completes.
+ * LPFC_FLOGI while waiting for FLOGI cmpl.
*/
if (vport->port_state != LPFC_FLOGI) {
- if (!(phba->hba_flag & HBA_DEFER_FLOGI))
+ /* Issue MBX_READ_SPARAM to update CSPs before FLOGI if
+ * bb-credit recovery is in place.
+ */
+ if (phba->bbcredit_support && phba->cfg_enable_bbcr &&
+ !(phba->link_flag & LS_LOOPBACK_MODE)) {
+ sparam_mb = mempool_alloc(phba->mbox_mem_pool,
+ GFP_KERNEL);
+ if (!sparam_mb)
+ goto sparam_out;
+
+ rc = lpfc_read_sparam(phba, sparam_mb, 0);
+ if (rc) {
+ mempool_free(sparam_mb, phba->mbox_mem_pool);
+ goto sparam_out;
+ }
+ sparam_mb->vport = vport;
+ sparam_mb->mbox_cmpl = lpfc_mbx_cmpl_read_sparam;
+ rc = lpfc_sli_issue_mbox(phba, sparam_mb, MBX_NOWAIT);
+ if (rc == MBX_NOT_FINISHED) {
+ sparam_mp = (struct lpfc_dmabuf *)
+ sparam_mb->ctx_buf;
+ lpfc_mbuf_free(phba, sparam_mp->virt,
+ sparam_mp->phys);
+ kfree(sparam_mp);
+ sparam_mb->ctx_buf = NULL;
+ mempool_free(sparam_mb, phba->mbox_mem_pool);
+ goto sparam_out;
+ }
+
+ phba->hba_flag |= HBA_DEFER_FLOGI;
+ } else {
lpfc_initial_flogi(vport);
+ }
} else {
if (vport->fc_flag & FC_PT2PT)
lpfc_disc_start(vport);
"0306 CONFIG_LINK mbxStatus error x%x "
"HBA state x%x\n",
pmb->u.mb.mbxStatus, vport->port_state);
+sparam_out:
mempool_free(pmb, phba->mbox_mem_pool);
lpfc_linkdown(phba);
lpfc_linkup(phba);
sparam_mbox = NULL;
- if (!(phba->hba_flag & HBA_FCOE_MODE)) {
- cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
- if (!cfglink_mbox)
- goto out;
- vport->port_state = LPFC_LOCAL_CFG_LINK;
- lpfc_config_link(phba, cfglink_mbox);
- cfglink_mbox->vport = vport;
- cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link;
- rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT);
- if (rc == MBX_NOT_FINISHED) {
- mempool_free(cfglink_mbox, phba->mbox_mem_pool);
- goto out;
- }
- }
-
sparam_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
if (!sparam_mbox)
goto out;
goto out;
}
- if (phba->hba_flag & HBA_FCOE_MODE) {
+ if (!(phba->hba_flag & HBA_FCOE_MODE)) {
+ cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+ if (!cfglink_mbox)
+ goto out;
+ vport->port_state = LPFC_LOCAL_CFG_LINK;
+ lpfc_config_link(phba, cfglink_mbox);
+ cfglink_mbox->vport = vport;
+ cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link;
+ rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT);
+ if (rc == MBX_NOT_FINISHED) {
+ mempool_free(cfglink_mbox, phba->mbox_mem_pool);
+ goto out;
+ }
+ } else {
vport->port_state = LPFC_VPORT_UNKNOWN;
/*
* Add the driver's default FCF record at FCF index 0 now. This
}
/* Reset FCF roundrobin bmask for new discovery */
lpfc_sli4_clear_fcf_rr_bmask(phba);
- } else {
- if (phba->bbcredit_support && phba->cfg_enable_bbcr &&
- !(phba->link_flag & LS_LOOPBACK_MODE))
- phba->hba_flag |= HBA_DEFER_FLOGI;
}
/* Prepare for LINK up registrations */
if (sdkp->device->type == TYPE_ZBC) {
/* Host-managed */
- q->limits.zoned = BLK_ZONED_HM;
+ blk_queue_set_zoned(sdkp->disk, BLK_ZONED_HM);
} else {
sdkp->zoned = (buffer[8] >> 4) & 3;
- if (sdkp->zoned == 1 && !disk_has_partitions(sdkp->disk)) {
+ if (sdkp->zoned == 1) {
/* Host-aware */
- q->limits.zoned = BLK_ZONED_HA;
+ blk_queue_set_zoned(sdkp->disk, BLK_ZONED_HA);
} else {
- /*
- * Treat drive-managed devices and host-aware devices
- * with partitions as regular block devices.
- */
- q->limits.zoned = BLK_ZONED_NONE;
- if (sdkp->zoned == 2 && sdkp->first_scan)
- sd_printk(KERN_NOTICE, sdkp,
- "Drive-managed SMR disk\n");
+ /* Regular disk or drive managed disk */
+ blk_queue_set_zoned(sdkp->disk, BLK_ZONED_NONE);
}
}
- if (blk_queue_is_zoned(q) && sdkp->first_scan)
+
+ if (!sdkp->first_scan)
+ goto out;
+
+ if (blk_queue_is_zoned(q)) {
sd_printk(KERN_NOTICE, sdkp, "Host-%s zoned block device\n",
q->limits.zoned == BLK_ZONED_HM ? "managed" : "aware");
+ } else {
+ if (sdkp->zoned == 1)
+ sd_printk(KERN_NOTICE, sdkp,
+ "Host-aware SMR disk used as regular disk\n");
+ else if (sdkp->zoned == 2)
+ sd_printk(KERN_NOTICE, sdkp,
+ "Drive-managed SMR disk\n");
+ }
out:
kfree(buffer);
sdkp->first_scan = 1;
sdkp->max_medium_access_timeouts = SD_MAX_MEDIUM_TIMEOUTS;
- error = sd_zbc_init_disk(sdkp);
- if (error)
- goto out_free_index;
-
sd_revalidate_disk(gd);
gd->flags = GENHD_FL_EXT_DEVT;
#ifdef CONFIG_BLK_DEV_ZONED
-int sd_zbc_init_disk(struct scsi_disk *sdkp);
void sd_zbc_release_disk(struct scsi_disk *sdkp);
int sd_zbc_read_zones(struct scsi_disk *sdkp, unsigned char *buffer);
int sd_zbc_revalidate_zones(struct scsi_disk *sdkp);
#else /* CONFIG_BLK_DEV_ZONED */
-static inline int sd_zbc_init_disk(struct scsi_disk *sdkp)
-{
- return 0;
-}
-
static inline void sd_zbc_release_disk(struct scsi_disk *sdkp) {}
static inline int sd_zbc_read_zones(struct scsi_disk *sdkp,
static inline unsigned int sd_zbc_complete(struct scsi_cmnd *cmd,
unsigned int good_bytes, struct scsi_sense_hdr *sshdr)
{
- return 0;
+ return good_bytes;
}
static inline blk_status_t sd_zbc_prepare_zone_append(struct scsi_cmnd *cmd,
sdkp->zone_blocks);
}
+static int sd_zbc_init_disk(struct scsi_disk *sdkp)
+{
+ sdkp->zones_wp_offset = NULL;
+ spin_lock_init(&sdkp->zones_wp_offset_lock);
+ sdkp->rev_wp_offset = NULL;
+ mutex_init(&sdkp->rev_mutex);
+ INIT_WORK(&sdkp->zone_wp_offset_work, sd_zbc_update_wp_offset_workfn);
+ sdkp->zone_wp_update_buf = kzalloc(SD_BUF_SIZE, GFP_KERNEL);
+ if (!sdkp->zone_wp_update_buf)
+ return -ENOMEM;
+
+ return 0;
+}
+
+void sd_zbc_release_disk(struct scsi_disk *sdkp)
+{
+ kvfree(sdkp->zones_wp_offset);
+ sdkp->zones_wp_offset = NULL;
+ kfree(sdkp->zone_wp_update_buf);
+ sdkp->zone_wp_update_buf = NULL;
+}
+
static void sd_zbc_revalidate_zones_cb(struct gendisk *disk)
{
struct scsi_disk *sdkp = scsi_disk(disk);
u32 max_append;
int ret = 0;
- if (!sd_is_zoned(sdkp))
+ /*
+ * For all zoned disks, initialize zone append emulation data if not
+ * already done. This is necessary also for host-aware disks used as
+ * regular disks due to the presence of partitions as these partitions
+ * may be deleted and the disk zoned model changed back from
+ * BLK_ZONED_NONE to BLK_ZONED_HA.
+ */
+ if (sd_is_zoned(sdkp) && !sdkp->zone_wp_update_buf) {
+ ret = sd_zbc_init_disk(sdkp);
+ if (ret)
+ return ret;
+ }
+
+ /*
+ * There is nothing to do for regular disks, including host-aware disks
+ * that have partitions.
+ */
+ if (!blk_queue_is_zoned(q))
return 0;
/*
return ret;
}
-
-int sd_zbc_init_disk(struct scsi_disk *sdkp)
-{
- if (!sd_is_zoned(sdkp))
- return 0;
-
- sdkp->zones_wp_offset = NULL;
- spin_lock_init(&sdkp->zones_wp_offset_lock);
- sdkp->rev_wp_offset = NULL;
- mutex_init(&sdkp->rev_mutex);
- INIT_WORK(&sdkp->zone_wp_offset_work, sd_zbc_update_wp_offset_workfn);
- sdkp->zone_wp_update_buf = kzalloc(SD_BUF_SIZE, GFP_KERNEL);
- if (!sdkp->zone_wp_update_buf)
- return -ENOMEM;
-
- return 0;
-}
-
-void sd_zbc_release_disk(struct scsi_disk *sdkp)
-{
- kvfree(sdkp->zones_wp_offset);
- sdkp->zones_wp_offset = NULL;
- kfree(sdkp->zone_wp_update_buf);
- sdkp->zone_wp_update_buf = NULL;
-}
},
{
.compatible = "brcm,spi-bcm-qspi",
- .data = &bcm_qspi_rev_data,
+ .data = &bcm_qspi_no_rev_data,
},
{
.compatible = "brcm,spi-bcm7216-qspi",
#define DRV_NAME "spi-bcm2835"
/* define polling limits */
-unsigned int polling_limit_us = 30;
+static unsigned int polling_limit_us = 30;
module_param(polling_limit_us, uint, 0664);
MODULE_PARM_DESC(polling_limit_us,
"time in us to run a transfer in polling mode\n");
.fifo_size = 16,
},
[LS2080A] = {
- .trans_mode = DSPI_DMA_MODE,
+ .trans_mode = DSPI_XSPI_MODE,
.max_clock_factor = 8,
.fifo_size = 4,
},
[LS2085A] = {
- .trans_mode = DSPI_DMA_MODE,
+ .trans_mode = DSPI_XSPI_MODE,
.max_clock_factor = 8,
.fifo_size = 4,
},
[LX2160A] = {
- .trans_mode = DSPI_DMA_MODE,
+ .trans_mode = DSPI_XSPI_MODE,
.max_clock_factor = 8,
.fifo_size = 4,
},
void __iomem *base;
bool big_endian;
- ctlr = spi_alloc_master(&pdev->dev, sizeof(struct fsl_dspi));
+ dspi = devm_kzalloc(&pdev->dev, sizeof(*dspi), GFP_KERNEL);
+ if (!dspi)
+ return -ENOMEM;
+
+ ctlr = spi_alloc_master(&pdev->dev, 0);
if (!ctlr)
return -ENOMEM;
- dspi = spi_controller_get_devdata(ctlr);
dspi->pdev = pdev;
dspi->ctlr = ctlr;
if (dspi->devtype_data->trans_mode != DSPI_DMA_MODE)
ctlr->ptp_sts_supported = true;
- platform_set_drvdata(pdev, ctlr);
+ platform_set_drvdata(pdev, dspi);
ret = spi_register_controller(ctlr);
if (ret != 0) {
static int dspi_remove(struct platform_device *pdev)
{
- struct spi_controller *ctlr = platform_get_drvdata(pdev);
- struct fsl_dspi *dspi = spi_controller_get_devdata(ctlr);
+ struct fsl_dspi *dspi = platform_get_drvdata(pdev);
/* Disconnect from the SPI framework */
spi_unregister_controller(dspi->ctlr);
static irqreturn_t fsl_espi_irq(s32 irq, void *context_data)
{
struct fsl_espi *espi = context_data;
- u32 events;
+ u32 events, mask;
spin_lock(&espi->lock);
/* Get interrupt events(tx/rx) */
events = fsl_espi_read_reg(espi, ESPI_SPIE);
- if (!events) {
+ mask = fsl_espi_read_reg(espi, ESPI_SPIM);
+ if (!(events & mask)) {
spin_unlock(&espi->lock);
return IRQ_NONE;
}
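For context, a hedged sketch of the shared-interrupt convention this fix relies on: a handler may only claim events it has actually unmasked; anything else is reported as IRQ_NONE so other handlers sharing the line get to run. The device and register names below are hypothetical.

static irqreturn_t demo_irq(int irq, void *dev_id)
{
	struct demo_dev *dev = dev_id;			/* hypothetical device */
	u32 events = readl(dev->regs + DEMO_EVENTS);	/* latched events */
	u32 mask = readl(dev->regs + DEMO_MASK);	/* enabled interrupts */

	if (!(events & mask))		/* latched but masked: not ours */
		return IRQ_NONE;

	/* ... acknowledge and handle the unmasked events ... */
	return IRQ_HANDLED;
}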
*/
#include <linux/crc32.h>
+#include <linux/delay.h>
#include <linux/property.h>
#include <linux/slab.h>
#include "tb.h"
struct tb_drom_entry_header *entry = (void *) (sw->drom + pos);
if (pos + 1 == drom_size || pos + entry->len > drom_size
|| !entry->len) {
- tb_sw_warn(sw, "drom buffer overrun, aborting\n");
- return -EIO;
+ tb_sw_warn(sw, "DROM buffer overrun\n");
+ return -EILSEQ;
}
switch (entry->type) {
u16 size;
u32 crc;
struct tb_drom_header *header;
- int res;
+ int res, retries = 1;
+
if (sw->drom)
return 0;
tb_sw_warn(sw, "drom device_rom_revision %#x unknown\n",
header->device_rom_revision);
- return tb_drom_parse_entries(sw);
+ res = tb_drom_parse_entries(sw);
+ /* If the DROM parsing fails, wait a moment and retry once */
+ if (res == -EILSEQ && retries--) {
+ tb_sw_warn(sw, "parsing DROM failed, retrying\n");
+ msleep(100);
+ res = tb_drom_read_n(sw, 0, sw->drom, size);
+ if (!res)
+ goto parse;
+ }
+
+ return res;
err:
kfree(sw->drom);
sw->drom = NULL;
PCI_ANY_ID, PCI_ANY_ID,
0, 0, pbn_wch384_4 },
+ /*
+ * Realtek RealManage
+ */
+ { PCI_VENDOR_ID_REALTEK, 0x816a,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_b0_1_115200 },
+
+ { PCI_VENDOR_ID_REALTEK, 0x816b,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_b0_1_115200 },
+
/* Fintek PCI serial cards */
{ PCI_DEVICE(0x1c29, 0x1104), .driver_data = pbn_fintek_4 },
{ PCI_DEVICE(0x1c29, 0x1108), .driver_data = pbn_fintek_8 },
return uart_console(port) && (port->cons->flags & CON_ENABLED);
}
-static void __uart_port_spin_lock_init(struct uart_port *port)
+static void uart_port_spin_lock_init(struct uart_port *port)
{
spin_lock_init(&port->lock);
lockdep_set_class(&port->lock, &port_lock_key);
}
-/*
- * Ensure that the serial console lock is initialised early.
- * If this port is a console, then the spinlock is already initialised.
- */
-static inline void uart_port_spin_lock_init(struct uart_port *port)
-{
- if (uart_console(port))
- return;
-
- __uart_port_spin_lock_init(port);
-}
-
#if defined(CONFIG_SERIAL_CORE_CONSOLE) || defined(CONFIG_CONSOLE_POLL)
/**
* uart_console_write - write a console message to a serial port
struct ktermios termios;
static struct ktermios dummy;
- uart_port_spin_lock_init(port);
+ /*
+ * Ensure that the serial-console lock is initialised early.
+ *
+ * Note that the console-enabled check is needed because of kgdboc,
+ * which can end up calling uart_set_options() for an already enabled
+ * console via tty_find_polling_driver() and uart_poll_init().
+ */
+ if (!uart_console_enabled(port) && !port->console_reinit)
+ uart_port_spin_lock_init(port);
memset(&termios, 0, sizeof(struct ktermios));
uart_change_pm(state, UART_PM_STATE_ON);
/*
- * If this driver supports console, and it hasn't been
- * successfully registered yet, initialise spin lock for it.
- */
- if (port->cons && !(port->cons->flags & CON_ENABLED))
- __uart_port_spin_lock_init(port);
-
- /*
* Ensure that the modem control lines are de-activated.
* keep the DTR setting that is set in uart_set_options()
* We probably don't need a spinlock around this, but
if (oldconsole && !newconsole) {
ret = unregister_console(uport->cons);
} else if (!oldconsole && newconsole) {
- if (uart_console(uport))
+ if (uart_console(uport)) {
+ uport->console_reinit = 1;
register_console(uport->cons);
- else
+ } else {
ret = -ENOENT;
+ }
}
} else {
ret = -ENXIO;
goto out;
}
- uart_port_spin_lock_init(uport);
+ /*
+ * If this port is in use as a console then the spinlock is already
+ * initialised.
+ */
+ if (!uart_console_enabled(uport))
+ uart_port_spin_lock_init(uport);
if (uport->cons && uport->dev)
of_console_check(uport->dev->of_node, uport->cons->name, uport->line);
if (rv < 0)
return rv;
+ if (!usblp->present) {
+ count = -ENODEV;
+ goto done;
+ }
+
if ((avail = usblp->rstatus) < 0) {
printk(KERN_ERR "usblp%d: error %d reading from printer\n",
usblp->minor, (int)avail);
/* Generic RTL8153 based ethernet adapters */
{ USB_DEVICE(0x0bda, 0x8153), .driver_info = USB_QUIRK_NO_LPM },
+ /* SONiX USB DEVICE Touchpad */
+ { USB_DEVICE(0x0c45, 0x7056), .driver_info =
+ USB_QUIRK_IGNORE_REMOTE_WAKEUP },
+
/* Action Semiconductor flash disk */
{ USB_DEVICE(0x10d6, 0x2200), .driver_info =
USB_QUIRK_STRING_FETCH_255 },
#include <linux/interrupt.h>
#include <linux/usb.h>
#include <linux/usb/hcd.h>
+#include <linux/usb/otg.h>
#include <linux/moduleparam.h>
#include <linux/dma-mapping.h>
#include <linux/debugfs.h>
*/
/*-------------------------------------------------------------------------*/
-#include <linux/usb/otg.h>
#define PORT_WAKE_BITS (PORT_WKOC_E|PORT_WKDISC_E|PORT_WKCONN_E)
if (devinfo->resetting) {
cmnd->result = DID_ERROR << 16;
cmnd->scsi_done(cmnd);
- spin_unlock_irqrestore(&devinfo->lock, flags);
- return 0;
+ goto zombie;
}
/* Find a free uas-tag */
cmdinfo->state &= ~(SUBMIT_DATA_IN_URB | SUBMIT_DATA_OUT_URB);
err = uas_submit_urbs(cmnd, devinfo);
+ /*
+	 * In case of fatal errors the SCSI layer is peculiar: a command
+	 * that has finished counts as a success for the purpose of
+	 * queueing, no matter how fatal the error.
+ */
+ if (err == -ENODEV) {
+ cmnd->result = DID_ERROR << 16;
+ cmnd->scsi_done(cmnd);
+ goto zombie;
+ }
if (err) {
/* If we did nothing, give up now */
if (cmdinfo->state & SUBMIT_STATUS_URB) {
}
devinfo->cmnd[idx] = cmnd;
+zombie:
spin_unlock_irqrestore(&devinfo->lock, flags);
return 0;
}
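A hedged sketch of the SCSI queueing convention the comment above describes: once a command has been completed through ->scsi_done(), .queuecommand must return 0, because the midlayer treats a finished command as successfully queued. device_is_gone() is a hypothetical helper.

static int demo_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
{
	if (device_is_gone(shost)) {
		cmnd->result = DID_ERROR << 16;
		cmnd->scsi_done(cmnd);
		return 0;	/* completed counts as successfully queued */
	}
	/* ... normal submission path ... */
	return 0;
}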
static int pmc_usb_command(struct pmc_usb_port *port, u8 *msg, u32 len)
{
u8 response[4];
+ int ret;
/*
* Error bit will always be 0 with the USBC command.
- * Status can be checked from the response message.
+	 * Status can be checked from the response message if
+	 * intel_scu_ipc_dev_command() succeeds.
*/
- intel_scu_ipc_dev_command(port->pmc->ipc, PMC_USBC_CMD, 0, msg, len,
- response, sizeof(response));
+ ret = intel_scu_ipc_dev_command(port->pmc->ipc, PMC_USBC_CMD, 0, msg,
+ len, response, sizeof(response));
+
+ if (ret)
+ return ret;
+
if (response[2] & PMC_USB_RESP_STATUS_FAILURE) {
if (response[2] & PMC_USB_RESP_STATUS_FATAL)
return -EIO;
con->partner_altmode[i] == altmode);
}
-static u8 ucsi_altmode_next_mode(struct typec_altmode **alt, u16 svid)
+static int ucsi_altmode_next_mode(struct typec_altmode **alt, u16 svid)
{
u8 mode = 1;
int i;
- for (i = 0; alt[i]; i++)
+ for (i = 0; alt[i]; i++) {
+ if (i > MODE_DISCOVERY_MAX)
+ return -ERANGE;
+
if (alt[i]->svid == svid)
mode++;
+ }
return mode;
}
goto err;
}
- desc->mode = ucsi_altmode_next_mode(con->port_altmode,
- desc->svid);
+ ret = ucsi_altmode_next_mode(con->port_altmode, desc->svid);
+ if (ret < 0)
+ return ret;
+
+ desc->mode = ret;
switch (desc->svid) {
case USB_TYPEC_DP_SID:
goto err;
}
- desc->mode = ucsi_altmode_next_mode(con->partner_altmode,
- desc->svid);
+ ret = ucsi_altmode_next_mode(con->partner_altmode, desc->svid);
+ if (ret < 0)
+ return ret;
+
+ desc->mode = ret;
alt = typec_partner_register_altmode(con->partner, desc);
if (IS_ERR(alt)) {
if (ret)
goto out_clear_bit;
- if (!wait_for_completion_timeout(&ua->complete, msecs_to_jiffies(5000)))
+ if (!wait_for_completion_timeout(&ua->complete, 60 * HZ))
ret = -ETIMEDOUT;
out_clear_bit:
* vhost_iotlb_itree_first - return the first overlapped range
* @iotlb: the IOTLB
* @start: start of IOVA range
- * @end: end of IOVA range
+ * @last: last byte in IOVA range
*/
struct vhost_iotlb_map *
vhost_iotlb_itree_first(struct vhost_iotlb *iotlb, u64 start, u64 last)
* vhost_iotlb_itree_next - return the next overlapped range
* @map: the starting map node
* @start: start of IOVA range
- * @end: end of IOVA range
+ * @last: last byte in IOVA range
*/
struct vhost_iotlb_map *
vhost_iotlb_itree_next(struct vhost_iotlb_map *map, u64 start, u64 last)
struct vdpa_callback cb;
struct vhost_virtqueue *vq;
struct vhost_vring_state s;
- u64 __user *featurep = argp;
- u64 features;
u32 idx;
long r;
vq->last_avail_idx = vq_state.avail_index;
break;
- case VHOST_GET_BACKEND_FEATURES:
- features = VHOST_VDPA_BACKEND_FEATURES;
- if (copy_to_user(featurep, &features, sizeof(features)))
- return -EFAULT;
- return 0;
- case VHOST_SET_BACKEND_FEATURES:
- if (copy_from_user(&features, featurep, sizeof(features)))
- return -EFAULT;
- if (features & ~VHOST_VDPA_BACKEND_FEATURES)
- return -EOPNOTSUPP;
- vhost_set_backend_features(&v->vdev, features);
- return 0;
}
r = vhost_vring_ioctl(&v->vdev, cmd, argp);
struct vhost_vdpa *v = filep->private_data;
struct vhost_dev *d = &v->vdev;
void __user *argp = (void __user *)arg;
+ u64 __user *featurep = argp;
+ u64 features;
long r;
+ if (cmd == VHOST_SET_BACKEND_FEATURES) {
+ r = copy_from_user(&features, featurep, sizeof(features));
+ if (r)
+ return r;
+ if (features & ~VHOST_VDPA_BACKEND_FEATURES)
+ return -EOPNOTSUPP;
+ vhost_set_backend_features(&v->vdev, features);
+ return 0;
+ }
+
mutex_lock(&d->mutex);
switch (cmd) {
case VHOST_VDPA_SET_CONFIG_CALL:
r = vhost_vdpa_set_config_call(v, argp);
break;
+ case VHOST_GET_BACKEND_FEATURES:
+ features = VHOST_VDPA_BACKEND_FEATURES;
+ r = copy_to_user(featurep, &features, sizeof(features));
+ break;
default:
r = vhost_dev_ioctl(&v->vdev, cmd, argp);
if (r == -ENOIOCTLCMD)
Say Y.
-config VGACON_SOFT_SCROLLBACK
- bool "Enable Scrollback Buffer in System RAM"
- depends on VGA_CONSOLE
- default n
- help
- The scrollback buffer of the standard VGA console is located in
- the VGA RAM. The size of this RAM is fixed and is quite small.
- If you require a larger scrollback buffer, this can be placed in
- System RAM which is dynamically allocated during initialization.
- Placing the scrollback buffer in System RAM will slightly slow
- down the console.
-
- If you want this feature, say 'Y' here and enter the amount of
- RAM to allocate for this buffer. If unsure, say 'N'.
-
-config VGACON_SOFT_SCROLLBACK_SIZE
- int "Scrollback Buffer Size (in KB)"
- depends on VGACON_SOFT_SCROLLBACK
- range 1 1024
- default "64"
- help
- Enter the amount of System RAM to allocate for scrollback
- buffers of VGA consoles. Each 64KB will give you approximately
- 16 80x25 screenfuls of scrollback buffer.
-
-config VGACON_SOFT_SCROLLBACK_PERSISTENT_ENABLE_BY_DEFAULT
- bool "Persistent Scrollback History for each console by default"
- depends on VGACON_SOFT_SCROLLBACK
- default n
- help
- Say Y here if the scrollback history should persist by default when
- switching between consoles. Otherwise, the scrollback history will be
- flushed each time the console is switched. This feature can also be
- enabled using the boot command line parameter
- 'vgacon.scrollback_persistent=1'.
-
- This feature might break your tool of choice to flush the scrollback
- buffer, e.g. clear(1) will work fine but Debian's clear_console(1)
- will be broken, which might cause security issues.
- You can use the escape sequence \e[3J instead if this feature is
- activated.
-
- Note that a buffer of VGACON_SOFT_SCROLLBACK_SIZE is taken for each
- created tty device.
- So if you use a RAM-constrained system, say N here.
-
config MDA_CONSOLE
depends on !M68K && !PARISC && ISA
tristate "MDA text console (dual-headed)"
write_vga(12, (c->vc_visible_origin - vga_vram_base) / 2);
}
-#ifdef CONFIG_VGACON_SOFT_SCROLLBACK
-/* software scrollback */
-struct vgacon_scrollback_info {
- void *data;
- int tail;
- int size;
- int rows;
- int cnt;
- int cur;
- int save;
- int restore;
-};
-
-static struct vgacon_scrollback_info *vgacon_scrollback_cur;
-static struct vgacon_scrollback_info vgacon_scrollbacks[MAX_NR_CONSOLES];
-static bool scrollback_persistent = \
- IS_ENABLED(CONFIG_VGACON_SOFT_SCROLLBACK_PERSISTENT_ENABLE_BY_DEFAULT);
-module_param_named(scrollback_persistent, scrollback_persistent, bool, 0000);
-MODULE_PARM_DESC(scrollback_persistent, "Enable persistent scrollback for all vga consoles");
-
-static void vgacon_scrollback_reset(int vc_num, size_t reset_size)
-{
- struct vgacon_scrollback_info *scrollback = &vgacon_scrollbacks[vc_num];
-
- if (scrollback->data && reset_size > 0)
- memset(scrollback->data, 0, reset_size);
-
- scrollback->cnt = 0;
- scrollback->tail = 0;
- scrollback->cur = 0;
-}
-
-static void vgacon_scrollback_init(int vc_num)
-{
- int pitch = vga_video_num_columns * 2;
- size_t size = CONFIG_VGACON_SOFT_SCROLLBACK_SIZE * 1024;
- int rows = size / pitch;
- void *data;
-
- data = kmalloc_array(CONFIG_VGACON_SOFT_SCROLLBACK_SIZE, 1024,
- GFP_NOWAIT);
-
- vgacon_scrollbacks[vc_num].data = data;
- vgacon_scrollback_cur = &vgacon_scrollbacks[vc_num];
-
- vgacon_scrollback_cur->rows = rows - 1;
- vgacon_scrollback_cur->size = rows * pitch;
-
- vgacon_scrollback_reset(vc_num, size);
-}
-
-static void vgacon_scrollback_switch(int vc_num)
-{
- if (!scrollback_persistent)
- vc_num = 0;
-
- if (!vgacon_scrollbacks[vc_num].data) {
- vgacon_scrollback_init(vc_num);
- } else {
- if (scrollback_persistent) {
- vgacon_scrollback_cur = &vgacon_scrollbacks[vc_num];
- } else {
- size_t size = CONFIG_VGACON_SOFT_SCROLLBACK_SIZE * 1024;
-
- vgacon_scrollback_reset(vc_num, size);
- }
- }
-}
-
-static void vgacon_scrollback_startup(void)
-{
- vgacon_scrollback_cur = &vgacon_scrollbacks[0];
- vgacon_scrollback_init(0);
-}
-
-static void vgacon_scrollback_update(struct vc_data *c, int t, int count)
-{
- void *p;
-
- if (!vgacon_scrollback_cur->data || !vgacon_scrollback_cur->size ||
- c->vc_num != fg_console)
- return;
-
- p = (void *) (c->vc_origin + t * c->vc_size_row);
-
- while (count--) {
- if ((vgacon_scrollback_cur->tail + c->vc_size_row) >
- vgacon_scrollback_cur->size)
- vgacon_scrollback_cur->tail = 0;
-
- scr_memcpyw(vgacon_scrollback_cur->data +
- vgacon_scrollback_cur->tail,
- p, c->vc_size_row);
-
- vgacon_scrollback_cur->cnt++;
- p += c->vc_size_row;
- vgacon_scrollback_cur->tail += c->vc_size_row;
-
- if (vgacon_scrollback_cur->tail >= vgacon_scrollback_cur->size)
- vgacon_scrollback_cur->tail = 0;
-
- if (vgacon_scrollback_cur->cnt > vgacon_scrollback_cur->rows)
- vgacon_scrollback_cur->cnt = vgacon_scrollback_cur->rows;
-
- vgacon_scrollback_cur->cur = vgacon_scrollback_cur->cnt;
- }
-}
-
-static void vgacon_restore_screen(struct vc_data *c)
-{
- c->vc_origin = c->vc_visible_origin;
- vgacon_scrollback_cur->save = 0;
-
- if (!vga_is_gfx && !vgacon_scrollback_cur->restore) {
- scr_memcpyw((u16 *) c->vc_origin, (u16 *) c->vc_screenbuf,
- c->vc_screenbuf_size > vga_vram_size ?
- vga_vram_size : c->vc_screenbuf_size);
- vgacon_scrollback_cur->restore = 1;
- vgacon_scrollback_cur->cur = vgacon_scrollback_cur->cnt;
- }
-}
-
-static void vgacon_scrolldelta(struct vc_data *c, int lines)
-{
- int start, end, count, soff;
-
- if (!lines) {
- vgacon_restore_screen(c);
- return;
- }
-
- if (!vgacon_scrollback_cur->data)
- return;
-
- if (!vgacon_scrollback_cur->save) {
- vgacon_cursor(c, CM_ERASE);
- vgacon_save_screen(c);
- c->vc_origin = (unsigned long)c->vc_screenbuf;
- vgacon_scrollback_cur->save = 1;
- }
-
- vgacon_scrollback_cur->restore = 0;
- start = vgacon_scrollback_cur->cur + lines;
- end = start + abs(lines);
-
- if (start < 0)
- start = 0;
-
- if (start > vgacon_scrollback_cur->cnt)
- start = vgacon_scrollback_cur->cnt;
-
- if (end < 0)
- end = 0;
-
- if (end > vgacon_scrollback_cur->cnt)
- end = vgacon_scrollback_cur->cnt;
-
- vgacon_scrollback_cur->cur = start;
- count = end - start;
- soff = vgacon_scrollback_cur->tail -
- ((vgacon_scrollback_cur->cnt - end) * c->vc_size_row);
- soff -= count * c->vc_size_row;
-
- if (soff < 0)
- soff += vgacon_scrollback_cur->size;
-
- count = vgacon_scrollback_cur->cnt - start;
-
- if (count > c->vc_rows)
- count = c->vc_rows;
-
- if (count) {
- int copysize;
-
- int diff = c->vc_rows - count;
- void *d = (void *) c->vc_visible_origin;
- void *s = (void *) c->vc_screenbuf;
-
- count *= c->vc_size_row;
- /* how much memory to end of buffer left? */
- copysize = min(count, vgacon_scrollback_cur->size - soff);
- scr_memcpyw(d, vgacon_scrollback_cur->data + soff, copysize);
- d += copysize;
- count -= copysize;
-
- if (count) {
- scr_memcpyw(d, vgacon_scrollback_cur->data, count);
- d += count;
- }
-
- if (diff)
- scr_memcpyw(d, s, diff * c->vc_size_row);
- } else
- vgacon_cursor(c, CM_MOVE);
-}
-
-static void vgacon_flush_scrollback(struct vc_data *c)
-{
- size_t size = CONFIG_VGACON_SOFT_SCROLLBACK_SIZE * 1024;
-
- vgacon_scrollback_reset(c->vc_num, size);
-}
-#else
-#define vgacon_scrollback_startup(...) do { } while (0)
-#define vgacon_scrollback_init(...) do { } while (0)
-#define vgacon_scrollback_update(...) do { } while (0)
-#define vgacon_scrollback_switch(...) do { } while (0)
-
static void vgacon_restore_screen(struct vc_data *c)
{
if (c->vc_origin != c->vc_visible_origin)
vga_set_mem_top(c);
}
-static void vgacon_flush_scrollback(struct vc_data *c)
-{
-}
-#endif /* CONFIG_VGACON_SOFT_SCROLLBACK */
-
static const char *vgacon_startup(void)
{
const char *display_desc = NULL;
vgacon_xres = screen_info.orig_video_cols * VGA_FONTWIDTH;
vgacon_yres = vga_scan_lines;
- if (!vga_init_done) {
- vgacon_scrollback_startup();
- vga_init_done = true;
- }
+ vga_init_done = true;
return display_desc;
}
vgacon_doresize(c, c->vc_cols, c->vc_rows);
}
- vgacon_scrollback_switch(c->vc_num);
return 0; /* Redrawing not needed */
}
oldo = c->vc_origin;
delta = lines * c->vc_size_row;
if (dir == SM_UP) {
- vgacon_scrollback_update(c, t, lines);
if (c->vc_scr_end + delta >= vga_vram_end) {
scr_memcpyw((u16 *) vga_vram_base,
(u16 *) (oldo + delta),
.con_save_screen = vgacon_save_screen,
.con_build_attr = vgacon_build_attr,
.con_invert_region = vgacon_invert_region,
- .con_flush_scrollback = vgacon_flush_scrollback,
};
EXPORT_SYMBOL(vga_con);
}
static void bit_cursor(struct vc_data *vc, struct fb_info *info, int mode,
- int softback_lines, int fg, int bg)
+ int fg, int bg)
{
struct fb_cursor cursor;
struct fbcon_ops *ops = info->fbcon_par;
cursor.set = 0;
- if (softback_lines) {
- if (y + softback_lines >= vc->vc_rows) {
- mode = CM_ERASE;
- ops->cursor_flash = 0;
- return;
- } else
- y += softback_lines;
- }
-
c = scr_readw((u16 *) vc->vc_pos);
attribute = get_attribute(info, c);
src = vc->vc_font.data + ((c & charmask) * (w * vc->vc_font.height));
/* logo_shown is an index to vc_cons when >= 0; otherwise follows FBCON_LOGO
enums. */
static int logo_shown = FBCON_LOGO_CANSHOW;
-/* Software scrollback */
-static int fbcon_softback_size = 32768;
-static unsigned long softback_buf, softback_curr;
-static unsigned long softback_in;
-static unsigned long softback_top, softback_end;
-static int softback_lines;
/* console mappings */
static int first_fb_vc;
static int last_fb_vc = MAX_NR_CONSOLES - 1;
static const struct consw fb_con;
-#define CM_SOFTBACK (8)
-
#define advance_row(p, delta) (unsigned short *)((unsigned long)(p) + (delta) * vc->vc_size_row)
static int fbcon_set_origin(struct vc_data *);
return color;
}
-static void fbcon_update_softback(struct vc_data *vc)
-{
- int l = fbcon_softback_size / vc->vc_size_row;
-
- if (l > 5)
- softback_end = softback_buf + l * vc->vc_size_row;
- else
- /* Smaller scrollback makes no sense, and 0 would screw
- the operation totally */
- softback_top = 0;
-}
-
static void fb_flashcursor(struct work_struct *work)
{
struct fb_info *info = container_of(work, struct fb_info, queue);
c = scr_readw((u16 *) vc->vc_pos);
mode = (!ops->cursor_flash || ops->cursor_state.enable) ?
CM_ERASE : CM_DRAW;
- ops->cursor(vc, info, mode, softback_lines, get_color(vc, info, c, 1),
+ ops->cursor(vc, info, mode, get_color(vc, info, c, 1),
get_color(vc, info, c, 0));
console_unlock();
}
}
if (!strncmp(options, "scrollback:", 11)) {
- options += 11;
- if (*options) {
- fbcon_softback_size = simple_strtoul(options, &options, 0);
- if (*options == 'k' || *options == 'K') {
- fbcon_softback_size *= 1024;
- }
- }
+ pr_warn("Ignoring scrollback size option\n");
continue;
}
set_blitting_type(vc, info);
- if (info->fix.type != FB_TYPE_TEXT) {
- if (fbcon_softback_size) {
- if (!softback_buf) {
- softback_buf =
- (unsigned long)
- kvmalloc(fbcon_softback_size,
- GFP_KERNEL);
- if (!softback_buf) {
- fbcon_softback_size = 0;
- softback_top = 0;
- }
- }
- } else {
- if (softback_buf) {
- kvfree((void *) softback_buf);
- softback_buf = 0;
- softback_top = 0;
- }
- }
- if (softback_buf)
- softback_in = softback_top = softback_curr =
- softback_buf;
- softback_lines = 0;
- }
-
/* Setup default font */
if (!p->fontdata && !vc->vc_font.data) {
if (!fontname[0] || !(font = find_font(fontname)))
if (logo)
fbcon_prepare_logo(vc, info, cols, rows, new_cols, new_rows);
- if (vc == svc && softback_buf)
- fbcon_update_softback(vc);
-
if (ops->rotate_font && ops->rotate_font(info, vc)) {
ops->rotate = FB_ROTATE_UR;
set_blitting_type(vc, info);
{
struct fb_info *info = registered_fb[con2fb_map[vc->vc_num]];
struct fbcon_ops *ops = info->fbcon_par;
- int y;
int c = scr_readw((u16 *) vc->vc_pos);
ops->cur_blink_jiffies = msecs_to_jiffies(vc->vc_cur_blink_ms);
fbcon_add_cursor_timer(info);
ops->cursor_flash = (mode == CM_ERASE) ? 0 : 1;
- if (mode & CM_SOFTBACK) {
- mode &= ~CM_SOFTBACK;
- y = softback_lines;
- } else {
- if (softback_lines)
- fbcon_set_origin(vc);
- y = 0;
- }
- ops->cursor(vc, info, mode, y, get_color(vc, info, c, 1),
+ ops->cursor(vc, info, mode, get_color(vc, info, c, 1),
get_color(vc, info, c, 0));
}
if (con_is_visible(vc)) {
update_screen(vc);
- if (softback_buf)
- fbcon_update_softback(vc);
}
}
scrollback_current = 0;
}
-static void fbcon_redraw_softback(struct vc_data *vc, struct fbcon_display *p,
- long delta)
-{
- int count = vc->vc_rows;
- unsigned short *d, *s;
- unsigned long n;
- int line = 0;
-
- d = (u16 *) softback_curr;
- if (d == (u16 *) softback_in)
- d = (u16 *) vc->vc_origin;
- n = softback_curr + delta * vc->vc_size_row;
- softback_lines -= delta;
- if (delta < 0) {
- if (softback_curr < softback_top && n < softback_buf) {
- n += softback_end - softback_buf;
- if (n < softback_top) {
- softback_lines -=
- (softback_top - n) / vc->vc_size_row;
- n = softback_top;
- }
- } else if (softback_curr >= softback_top
- && n < softback_top) {
- softback_lines -=
- (softback_top - n) / vc->vc_size_row;
- n = softback_top;
- }
- } else {
- if (softback_curr > softback_in && n >= softback_end) {
- n += softback_buf - softback_end;
- if (n > softback_in) {
- n = softback_in;
- softback_lines = 0;
- }
- } else if (softback_curr <= softback_in && n > softback_in) {
- n = softback_in;
- softback_lines = 0;
- }
- }
- if (n == softback_curr)
- return;
- softback_curr = n;
- s = (u16 *) softback_curr;
- if (s == (u16 *) softback_in)
- s = (u16 *) vc->vc_origin;
- while (count--) {
- unsigned short *start;
- unsigned short *le;
- unsigned short c;
- int x = 0;
- unsigned short attr = 1;
-
- start = s;
- le = advance_row(s, 1);
- do {
- c = scr_readw(s);
- if (attr != (c & 0xff00)) {
- attr = c & 0xff00;
- if (s > start) {
- fbcon_putcs(vc, start, s - start,
- line, x);
- x += s - start;
- start = s;
- }
- }
- if (c == scr_readw(d)) {
- if (s > start) {
- fbcon_putcs(vc, start, s - start,
- line, x);
- x += s - start + 1;
- start = s + 1;
- } else {
- x++;
- start++;
- }
- }
- s++;
- d++;
- } while (s < le);
- if (s > start)
- fbcon_putcs(vc, start, s - start, line, x);
- line++;
- if (d == (u16 *) softback_end)
- d = (u16 *) softback_buf;
- if (d == (u16 *) softback_in)
- d = (u16 *) vc->vc_origin;
- if (s == (u16 *) softback_end)
- s = (u16 *) softback_buf;
- if (s == (u16 *) softback_in)
- s = (u16 *) vc->vc_origin;
- }
-}
-
static void fbcon_redraw_move(struct vc_data *vc, struct fbcon_display *p,
int line, int count, int dy)
{
}
}
-static inline void fbcon_softback_note(struct vc_data *vc, int t,
- int count)
-{
- unsigned short *p;
-
- if (vc->vc_num != fg_console)
- return;
- p = (unsigned short *) (vc->vc_origin + t * vc->vc_size_row);
-
- while (count) {
- scr_memcpyw((u16 *) softback_in, p, vc->vc_size_row);
- count--;
- p = advance_row(p, 1);
- softback_in += vc->vc_size_row;
- if (softback_in == softback_end)
- softback_in = softback_buf;
- if (softback_in == softback_top) {
- softback_top += vc->vc_size_row;
- if (softback_top == softback_end)
- softback_top = softback_buf;
- }
- }
- softback_curr = softback_in;
-}
-
static bool fbcon_scroll(struct vc_data *vc, unsigned int t, unsigned int b,
enum con_scroll dir, unsigned int count)
{
case SM_UP:
if (count > vc->vc_rows) /* Maximum realistic size */
count = vc->vc_rows;
- if (softback_top)
- fbcon_softback_note(vc, t, count);
if (logo_shown >= 0)
goto redraw_up;
switch (p->scrollmode) {
struct fb_var_screeninfo var = info->var;
int x_diff, y_diff, virt_w, virt_h, virt_fw, virt_fh;
- if (ops->p && ops->p->userfont && FNTSIZE(vc->vc_font.data)) {
+ if (p->userfont && FNTSIZE(vc->vc_font.data)) {
int size;
int pitch = PITCH(vc->vc_font.width);
info = registered_fb[con2fb_map[vc->vc_num]];
ops = info->fbcon_par;
- if (softback_top) {
- if (softback_lines)
- fbcon_set_origin(vc);
- softback_top = softback_curr = softback_in = softback_buf;
- softback_lines = 0;
- fbcon_update_softback(vc);
- }
-
if (logo_shown >= 0) {
struct vc_data *conp2 = vc_cons[logo_shown].d;
int cnt;
char *old_data = NULL;
- if (con_is_visible(vc) && softback_lines)
- fbcon_set_origin(vc);
-
resize = (w != vc->vc_font.width) || (h != vc->vc_font.height);
if (p->userfont)
old_data = vc->vc_font.data;
cols /= w;
rows /= h;
vc_resize(vc, cols, rows);
- if (con_is_visible(vc) && softback_buf)
- fbcon_update_softback(vc);
} else if (con_is_visible(vc)
&& vc->vc_mode == KD_TEXT) {
fbcon_clear_margins(vc, 0);
static u16 *fbcon_screen_pos(struct vc_data *vc, int offset)
{
- unsigned long p;
- int line;
-
- if (vc->vc_num != fg_console || !softback_lines)
- return (u16 *) (vc->vc_origin + offset);
- line = offset / vc->vc_size_row;
- if (line >= softback_lines)
- return (u16 *) (vc->vc_origin + offset -
- softback_lines * vc->vc_size_row);
- p = softback_curr + offset;
- if (p >= softback_end)
- p += softback_buf - softback_end;
- return (u16 *) p;
+ return (u16 *) (vc->vc_origin + offset);
}
static unsigned long fbcon_getxy(struct vc_data *vc, unsigned long pos,
x = offset % vc->vc_cols;
y = offset / vc->vc_cols;
- if (vc->vc_num == fg_console)
- y += softback_lines;
ret = pos + (vc->vc_cols - x) * 2;
- } else if (vc->vc_num == fg_console && softback_lines) {
- unsigned long offset = pos - softback_curr;
-
- if (pos < softback_curr)
- offset += softback_end - softback_buf;
- offset /= 2;
- x = offset % vc->vc_cols;
- y = offset / vc->vc_cols;
- ret = pos + (vc->vc_cols - x) * 2;
- if (ret == softback_end)
- ret = softback_buf;
- if (ret == softback_in)
- ret = vc->vc_origin;
} else {
/* Should not happen */
x = y = 0;
a = ((a) & 0x88ff) | (((a) & 0x7000) >> 4) |
(((a) & 0x0700) << 4);
scr_writew(a, p++);
- if (p == (u16 *) softback_end)
- p = (u16 *) softback_buf;
- if (p == (u16 *) softback_in)
- p = (u16 *) vc->vc_origin;
- }
-}
-
-static void fbcon_scrolldelta(struct vc_data *vc, int lines)
-{
- struct fb_info *info = registered_fb[con2fb_map[fg_console]];
- struct fbcon_ops *ops = info->fbcon_par;
- struct fbcon_display *disp = &fb_display[fg_console];
- int offset, limit, scrollback_old;
-
- if (softback_top) {
- if (vc->vc_num != fg_console)
- return;
- if (vc->vc_mode != KD_TEXT || !lines)
- return;
- if (logo_shown >= 0) {
- struct vc_data *conp2 = vc_cons[logo_shown].d;
-
- if (conp2->vc_top == logo_lines
- && conp2->vc_bottom == conp2->vc_rows)
- conp2->vc_top = 0;
- if (logo_shown == vc->vc_num) {
- unsigned long p, q;
- int i;
-
- p = softback_in;
- q = vc->vc_origin +
- logo_lines * vc->vc_size_row;
- for (i = 0; i < logo_lines; i++) {
- if (p == softback_top)
- break;
- if (p == softback_buf)
- p = softback_end;
- p -= vc->vc_size_row;
- q -= vc->vc_size_row;
- scr_memcpyw((u16 *) q, (u16 *) p,
- vc->vc_size_row);
- }
- softback_in = softback_curr = p;
- update_region(vc, vc->vc_origin,
- logo_lines * vc->vc_cols);
- }
- logo_shown = FBCON_LOGO_CANSHOW;
- }
- fbcon_cursor(vc, CM_ERASE | CM_SOFTBACK);
- fbcon_redraw_softback(vc, disp, lines);
- fbcon_cursor(vc, CM_DRAW | CM_SOFTBACK);
- return;
}
-
- if (!scrollback_phys_max)
- return;
-
- scrollback_old = scrollback_current;
- scrollback_current -= lines;
- if (scrollback_current < 0)
- scrollback_current = 0;
- else if (scrollback_current > scrollback_max)
- scrollback_current = scrollback_max;
- if (scrollback_current == scrollback_old)
- return;
-
- if (fbcon_is_inactive(vc, info))
- return;
-
- fbcon_cursor(vc, CM_ERASE);
-
- offset = disp->yscroll - scrollback_current;
- limit = disp->vrows;
- switch (disp->scrollmode) {
- case SCROLL_WRAP_MOVE:
- info->var.vmode |= FB_VMODE_YWRAP;
- break;
- case SCROLL_PAN_MOVE:
- case SCROLL_PAN_REDRAW:
- limit -= vc->vc_rows;
- info->var.vmode &= ~FB_VMODE_YWRAP;
- break;
- }
- if (offset < 0)
- offset += limit;
- else if (offset >= limit)
- offset -= limit;
-
- ops->var.xoffset = 0;
- ops->var.yoffset = offset * vc->vc_font.height;
- ops->update_start(info);
-
- if (!scrollback_current)
- fbcon_cursor(vc, CM_DRAW);
}
static int fbcon_set_origin(struct vc_data *vc)
{
- if (softback_lines)
- fbcon_scrolldelta(vc, softback_lines);
return 0;
}
fbcon_set_palette(vc, color_table);
update_screen(vc);
- if (softback_buf)
- fbcon_update_softback(vc);
}
}
.con_font_default = fbcon_set_def_font,
.con_font_copy = fbcon_copy_font,
.con_set_palette = fbcon_set_palette,
- .con_scrolldelta = fbcon_scrolldelta,
.con_set_origin = fbcon_set_origin,
.con_invert_region = fbcon_invert_region,
.con_screen_pos = fbcon_screen_pos,
}
#endif
- kvfree((void *)softback_buf);
- softback_buf = 0UL;
-
for_each_registered_fb(i) {
int pending = 0;
void (*clear_margins)(struct vc_data *vc, struct fb_info *info,
int color, int bottom_only);
void (*cursor)(struct vc_data *vc, struct fb_info *info, int mode,
- int softback_lines, int fg, int bg);
+ int fg, int bg);
int (*update_start)(struct fb_info *info);
int (*rotate_font)(struct fb_info *info, struct vc_data *vc);
struct fb_var_screeninfo var; /* copy of the current fb_var_screeninfo */
}
static void ccw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
- int softback_lines, int fg, int bg)
+ int fg, int bg)
{
struct fb_cursor cursor;
struct fbcon_ops *ops = info->fbcon_par;
cursor.set = 0;
- if (softback_lines) {
- if (y + softback_lines >= vc->vc_rows) {
- mode = CM_ERASE;
- ops->cursor_flash = 0;
- return;
- } else
- y += softback_lines;
- }
-
c = scr_readw((u16 *) vc->vc_pos);
attribute = get_attribute(info, c);
src = ops->fontbuffer + ((c & charmask) * (w * vc->vc_font.width));
}
static void cw_cursor(struct vc_data *vc, struct fb_info *info, int mode,
- int softback_lines, int fg, int bg)
+ int fg, int bg)
{
struct fb_cursor cursor;
struct fbcon_ops *ops = info->fbcon_par;
cursor.set = 0;
- if (softback_lines) {
- if (y + softback_lines >= vc->vc_rows) {
- mode = CM_ERASE;
- ops->cursor_flash = 0;
- return;
- } else
- y += softback_lines;
- }
-
c = scr_readw((u16 *) vc->vc_pos);
attribute = get_attribute(info, c);
src = ops->fontbuffer + ((c & charmask) * (w * vc->vc_font.width));
}
static void ud_cursor(struct vc_data *vc, struct fb_info *info, int mode,
- int softback_lines, int fg, int bg)
+ int fg, int bg)
{
struct fb_cursor cursor;
struct fbcon_ops *ops = info->fbcon_par;
cursor.set = 0;
- if (softback_lines) {
- if (y + softback_lines >= vc->vc_rows) {
- mode = CM_ERASE;
- ops->cursor_flash = 0;
- return;
- } else
- y += softback_lines;
- }
-
c = scr_readw((u16 *) vc->vc_pos);
attribute = get_attribute(info, c);
src = ops->fontbuffer + ((c & charmask) * (w * vc->vc_font.height));
}
static void tile_cursor(struct vc_data *vc, struct fb_info *info, int mode,
- int softback_lines, int fg, int bg)
+ int fg, int bg)
{
struct fb_tilecursor cursor;
int use_sw = vc->vc_cursor_type & CUR_SW;
mutex_lock(&sbi->pipe_mutex);
while (bytes) {
- wr = kernel_write(file, data, bytes, &file->f_pos);
+ wr = __kernel_write(file, data, bytes, NULL);
if (wr <= 0)
break;
data += wr;
csum_tree_block(eb, result);
if (memcmp_extent_buffer(eb, result, 0, csum_size)) {
- u32 val;
- u32 found = 0;
-
- memcpy(&found, result, csum_size);
+ u8 val[BTRFS_CSUM_SIZE] = { 0 };
read_extent_buffer(eb, &val, 0, csum_size);
btrfs_warn_rl(fs_info,
- "%s checksum verify failed on %llu wanted %x found %x level %d",
+ "%s checksum verify failed on %llu wanted " CSUM_FMT " found " CSUM_FMT " level %d",
fs_info->sb->s_id, eb->start,
- val, found, btrfs_header_level(eb));
+ CSUM_FMT_VALUE(csum_size, val),
+ CSUM_FMT_VALUE(csum_size, result),
+ btrfs_header_level(eb));
ret = -EUCLEAN;
goto err;
}
key.offset = sk->min_offset;
while (1) {
- ret = fault_in_pages_writeable(ubuf, *buf_size - sk_offset);
+ ret = fault_in_pages_writeable(ubuf + sk_offset,
+ *buf_size - sk_offset);
if (ret)
break;
disk_kobj->name);
}
- kobject_del(&one_device->devid_kobj);
- kobject_put(&one_device->devid_kobj);
+ if (one_device->devid_kobj.state_initialized) {
+ kobject_del(&one_device->devid_kobj);
+ kobject_put(&one_device->devid_kobj);
- wait_for_completion(&one_device->kobj_unregister);
+ wait_for_completion(&one_device->kobj_unregister);
+ }
return 0;
}
sysfs_remove_link(fs_devices->devices_kobj,
disk_kobj->name);
}
- kobject_del(&one_device->devid_kobj);
- kobject_put(&one_device->devid_kobj);
+ if (one_device->devid_kobj.state_initialized) {
+ kobject_del(&one_device->devid_kobj);
+ kobject_put(&one_device->devid_kobj);
- wait_for_completion(&one_device->kobj_unregister);
+ wait_for_completion(&one_device->kobj_unregister);
+ }
}
return 0;
__initcall(start_dirtytime_writeback);
int dirtytime_interval_handler(struct ctl_table *table, int write,
- void __user *buffer, size_t *lenp, loff_t *ppos)
+ void *buffer, size_t *lenp, loff_t *ppos)
{
int ret;
ssize_t ret = 0;
struct file *file = iocb->ki_filp;
struct fuse_file *ff = file->private_data;
- bool async_dio = ff->fc->async_dio;
loff_t pos = 0;
struct inode *inode;
loff_t i_size;
- size_t count = iov_iter_count(iter);
+ size_t count = iov_iter_count(iter), shortened = 0;
loff_t offset = iocb->ki_pos;
struct fuse_io_priv *io;
inode = file->f_mapping->host;
i_size = i_size_read(inode);
- if ((iov_iter_rw(iter) == READ) && (offset > i_size))
+ if ((iov_iter_rw(iter) == READ) && (offset >= i_size))
return 0;
- /* optimization for short read */
- if (async_dio && iov_iter_rw(iter) != WRITE && offset + count > i_size) {
- if (offset >= i_size)
- return 0;
- iov_iter_truncate(iter, fuse_round_up(ff->fc, i_size - offset));
- count = iov_iter_count(iter);
- }
-
io = kmalloc(sizeof(struct fuse_io_priv), GFP_KERNEL);
if (!io)
return -ENOMEM;
* By default, we want to optimize all I/Os with async request
* submission to the client filesystem if supported.
*/
- io->async = async_dio;
+ io->async = ff->fc->async_dio;
io->iocb = iocb;
io->blocking = is_sync_kiocb(iocb);
+ /* optimization for short read */
+ if (io->async && !io->write && offset + count > i_size) {
+ iov_iter_truncate(iter, fuse_round_up(ff->fc, i_size - offset));
+ shortened = count - iov_iter_count(iter);
+ count -= shortened;
+ }
+
/*
* We cannot asynchronously extend the size of a file.
* In such case the aio will behave exactly like sync io.
*/
- if ((offset + count > i_size) && iov_iter_rw(iter) == WRITE)
+ if ((offset + count > i_size) && io->write)
io->blocking = true;
if (io->async && io->blocking) {
} else {
ret = __fuse_direct_read(io, iter, &pos);
}
+ iov_iter_reexpand(iter, iov_iter_count(iter) + shortened);
if (io->async) {
bool blocking = io->blocking;
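The truncate/reexpand pairing above in one place, as a sketch (ignoring the fuse_round_up() rounding the real code applies): the iterator is trimmed for a read past EOF, the trimmed amount is remembered, and the iterator is restored afterwards so the caller sees it untouched.

size_t count = iov_iter_count(iter), shortened = 0;

if (offset + count > i_size) {			/* short read past EOF */
	iov_iter_truncate(iter, i_size - offset);
	shortened = count - iov_iter_count(iter);
}
/* ... issue the (possibly shortened) I/O ... */
iov_iter_reexpand(iter, iov_iter_count(iter) + shortened);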
struct io_ring_ctx *ctx = req->ctx;
int ret, notify;
+ if (tsk->flags & PF_EXITING)
+ return -ESRCH;
+
/*
* SQPOLL kernel thread doesn't need notification, just a wakeup. For
* all other cases, use TWA_SIGNAL unconditionally to ensure we're
static void io_req_task_cancel(struct callback_head *cb)
{
struct io_kiocb *req = container_of(cb, struct io_kiocb, task_work);
+ struct io_ring_ctx *ctx = req->ctx;
__io_req_task_cancel(req, -ECANCELED);
+ percpu_ref_put(&ctx->refs);
}
static void __io_req_task_submit(struct io_kiocb *req)
static inline bool io_run_task_work(void)
{
+ /*
+ * Not safe to run on exiting task, and the task_work handling will
+ * not add work to such a task.
+ */
+ if (unlikely(current->flags & PF_EXITING))
+ return false;
if (current->task_works) {
__set_current_state(TASK_RUNNING);
task_work_run();
goto end_req;
}
- ret = io_import_iovec(rw, req, &iovec, &iter, false);
- if (ret < 0)
- goto end_req;
- ret = io_setup_async_rw(req, iovec, inline_vecs, &iter, false);
- if (!ret)
+ if (!req->io) {
+ ret = io_import_iovec(rw, req, &iovec, &iter, false);
+ if (ret < 0)
+ goto end_req;
+ ret = io_setup_async_rw(req, iovec, inline_vecs, &iter, false);
+ if (!ret)
+ return true;
+ kfree(iovec);
+ } else {
return true;
- kfree(iovec);
+ }
end_req:
req_set_fail_links(req);
io_req_complete(req, ret);
struct iov_iter __iter, *iter = &__iter;
ssize_t io_size, ret, ret2;
size_t iov_count;
+ bool no_async;
if (req->io)
iter = &req->io->rw.iter;
kiocb->ki_flags &= ~IOCB_NOWAIT;
/* If the file doesn't support async, just async punt */
- if (force_nonblock && !io_file_supports_async(req->file, READ))
+ no_async = force_nonblock && !io_file_supports_async(req->file, READ);
+ if (no_async)
goto copy_iov;
ret = rw_verify_area(READ, req->file, io_kiocb_ppos(kiocb), iov_count);
goto done;
/* some cases will consume bytes even on error returns */
iov_iter_revert(iter, iov_count - iov_iter_count(iter));
- ret = io_setup_async_rw(req, iovec, inline_vecs, iter, false);
- if (ret)
- goto out_free;
- return -EAGAIN;
+ ret = 0;
+ goto copy_iov;
} else if (ret < 0) {
/* make sure -ERESTARTSYS -> -EINTR is done */
goto done;
ret = ret2;
goto out_free;
}
+ if (no_async)
+ return -EAGAIN;
/* it's copied and will be cleaned with ->io */
iovec = NULL;
/* now use our persistent iterator, if we aren't already */
const char __user *fname;
int ret;
- if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
- return -EINVAL;
if (unlikely(sqe->ioprio || sqe->buf_index))
return -EINVAL;
if (unlikely(req->flags & REQ_F_FIXED_FILE))
{
u64 flags, mode;
+ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+ return -EINVAL;
if (req->flags & REQ_F_NEED_CLEANUP)
return 0;
mode = READ_ONCE(sqe->len);
size_t len;
int ret;
+ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL|IORING_SETUP_SQPOLL)))
+ return -EINVAL;
if (req->flags & REQ_F_NEED_CLEANUP)
return 0;
how = u64_to_user_ptr(READ_ONCE(sqe->addr2));
#if defined(CONFIG_EPOLL)
if (sqe->ioprio || sqe->buf_index)
return -EINVAL;
- if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL)))
return -EINVAL;
req->epoll.epfd = READ_ONCE(sqe->fd);
static int io_statx_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
{
- if (unlikely(req->ctx->flags & IORING_SETUP_IOPOLL))
+ if (unlikely(req->ctx->flags & (IORING_SETUP_IOPOLL | IORING_SETUP_SQPOLL)))
return -EINVAL;
if (sqe->ioprio || sqe->buf_index)
return -EINVAL;
static int io_files_update_prep(struct io_kiocb *req,
const struct io_uring_sqe *sqe)
{
+ if (unlikely(req->ctx->flags & IORING_SETUP_SQPOLL))
+ return -EINVAL;
if (unlikely(req->flags & (REQ_F_FIXED_FILE | REQ_F_BUFFER_SELECT)))
return -EINVAL;
if (sqe->ioprio || sqe->rw_flags)
if (unlikely(ret))
return ret;
+ io_prep_async_work(req);
+
switch (req->opcode) {
case IORING_OP_NOP:
break;
io_put_file(req, req->splice.file_in,
(req->splice.flags & SPLICE_F_FD_IN_FIXED));
break;
+ case IORING_OP_OPENAT:
+ case IORING_OP_OPENAT2:
+ if (req->open.filename)
+ putname(req->open.filename);
+ break;
}
req->flags &= ~REQ_F_NEED_CLEANUP;
}
struct io_ring_ctx *ctx, unsigned int max_ios)
{
blk_start_plug(&state->plug);
-#ifdef CONFIG_BLOCK
- state->plug.nowait = true;
-#endif
state->comp.nr = 0;
INIT_LIST_HEAD(&state->comp.list);
state->comp.ctx = ctx;
/* cancel this request, or head link requests */
io_attempt_cancel(ctx, cancel_req);
io_put_req(cancel_req);
+ /* cancellations _may_ trigger task work */
+ io_run_task_work();
schedule();
finish_wait(&ctx->inflight_wait, &wait);
}
xdr_set_scratch_buffer(&stream, page_address(scratch), PAGE_SIZE);
do {
+ if (entry->label)
+ entry->label->len = NFS4_MAXLABELLEN;
+
status = xdr_decode(desc, entry, &stream);
if (status != 0) {
if (status == -EAGAIN)
}
static void
-ff_layout_mark_ds_unreachable(struct pnfs_layout_segment *lseg, int idx)
+ff_layout_mark_ds_unreachable(struct pnfs_layout_segment *lseg, u32 idx)
{
struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
}
static void
-ff_layout_mark_ds_reachable(struct pnfs_layout_segment *lseg, int idx)
+ff_layout_mark_ds_reachable(struct pnfs_layout_segment *lseg, u32 idx)
{
struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
static struct nfs4_pnfs_ds *
ff_layout_choose_ds_for_read(struct pnfs_layout_segment *lseg,
- int start_idx, int *best_idx,
+ u32 start_idx, u32 *best_idx,
bool check_device)
{
struct nfs4_ff_layout_segment *fls = FF_LAYOUT_LSEG(lseg);
struct nfs4_ff_layout_mirror *mirror;
struct nfs4_pnfs_ds *ds;
bool fail_return = false;
- int idx;
+ u32 idx;
/* mirrors are initially sorted by efficiency */
for (idx = start_idx; idx < fls->mirror_array_cnt; idx++) {
static struct nfs4_pnfs_ds *
ff_layout_choose_any_ds_for_read(struct pnfs_layout_segment *lseg,
- int start_idx, int *best_idx)
+ u32 start_idx, u32 *best_idx)
{
return ff_layout_choose_ds_for_read(lseg, start_idx, best_idx, false);
}
static struct nfs4_pnfs_ds *
ff_layout_choose_valid_ds_for_read(struct pnfs_layout_segment *lseg,
- int start_idx, int *best_idx)
+ u32 start_idx, u32 *best_idx)
{
return ff_layout_choose_ds_for_read(lseg, start_idx, best_idx, true);
}
static struct nfs4_pnfs_ds *
ff_layout_choose_best_ds_for_read(struct pnfs_layout_segment *lseg,
- int start_idx, int *best_idx)
+ u32 start_idx, u32 *best_idx)
{
struct nfs4_pnfs_ds *ds;
}
static struct nfs4_pnfs_ds *
-ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio, int *best_idx)
+ff_layout_get_ds_for_read(struct nfs_pageio_descriptor *pgio,
+ u32 *best_idx)
{
struct pnfs_layout_segment *lseg = pgio->pg_lseg;
struct nfs4_pnfs_ds *ds;
struct nfs_pgio_mirror *pgm;
struct nfs4_ff_layout_mirror *mirror;
struct nfs4_pnfs_ds *ds;
- int ds_idx;
+ u32 ds_idx, i;
retry:
ff_layout_pg_check_layout(pgio, req);
goto retry;
}
- mirror = FF_LAYOUT_COMP(pgio->pg_lseg, ds_idx);
+ for (i = 0; i < pgio->pg_mirror_count; i++) {
+ mirror = FF_LAYOUT_COMP(pgio->pg_lseg, i);
+ pgm = &pgio->pg_mirrors[i];
+ pgm->pg_bsize = mirror->mirror_ds->ds_versions[0].rsize;
+ }
pgio->pg_mirror_idx = ds_idx;
- /* read always uses only one mirror - idx 0 for pgio layer */
- pgm = &pgio->pg_mirrors[0];
- pgm->pg_bsize = mirror->mirror_ds->ds_versions[0].rsize;
-
if (NFS_SERVER(pgio->pg_inode)->flags &
(NFS_MOUNT_SOFT|NFS_MOUNT_SOFTERR))
pgio->pg_maxretrans = io_maxretrans;
struct nfs4_ff_layout_mirror *mirror;
struct nfs_pgio_mirror *pgm;
struct nfs4_pnfs_ds *ds;
- int i;
+ u32 i;
retry:
ff_layout_pg_check_layout(pgio, req);
static void ff_layout_resend_pnfs_read(struct nfs_pgio_header *hdr)
{
u32 idx = hdr->pgio_mirror_idx + 1;
- int new_idx = 0;
+ u32 new_idx = 0;
if (ff_layout_choose_any_ds_for_read(hdr->lseg, idx + 1, &new_idx))
ff_layout_send_layouterror(hdr->lseg);
struct nfs4_state *state,
struct nfs_client *clp,
struct pnfs_layout_segment *lseg,
- int idx)
+ u32 idx)
{
struct pnfs_layout_hdr *lo = lseg->pls_layout;
struct inode *inode = lo->plh_inode;
/* Retry all errors through either pNFS or MDS except for -EJUKEBOX */
static int ff_layout_async_handle_error_v3(struct rpc_task *task,
struct pnfs_layout_segment *lseg,
- int idx)
+ u32 idx)
{
struct nfs4_deviceid_node *devid = FF_LAYOUT_DEVID_NODE(lseg, idx);
struct nfs4_state *state,
struct nfs_client *clp,
struct pnfs_layout_segment *lseg,
- int idx)
+ u32 idx)
{
int vers = clp->cl_nfs_mod->rpc_vers->number;
}
static void ff_layout_io_track_ds_error(struct pnfs_layout_segment *lseg,
- int idx, u64 offset, u64 length,
+ u32 idx, u64 offset, u64 length,
u32 *op_status, int opnum, int error)
{
struct nfs4_ff_layout_mirror *mirror;
loff_t offset = hdr->args.offset;
int vers;
struct nfs_fh *fh;
- int idx = hdr->pgio_mirror_idx;
+ u32 idx = hdr->pgio_mirror_idx;
mirror = FF_LAYOUT_COMP(lseg, idx);
ds = nfs4_ff_layout_prepare_ds(lseg, mirror, true);
truncate_pagecache_range(dst_inode, pos_dst,
pos_dst + res->write_res.count);
-
+ spin_lock(&dst_inode->i_lock);
+ NFS_I(dst_inode)->cache_validity |= (NFS_INO_REVAL_PAGECACHE |
+ NFS_INO_REVAL_FORCED | NFS_INO_INVALID_SIZE |
+ NFS_INO_INVALID_ATTR | NFS_INO_INVALID_DATA);
+ spin_unlock(&dst_inode->i_lock);
+ spin_lock(&src_inode->i_lock);
+ NFS_I(src_inode)->cache_validity |= (NFS_INO_REVAL_PAGECACHE |
+ NFS_INO_REVAL_FORCED | NFS_INO_INVALID_ATIME);
+ spin_unlock(&src_inode->i_lock);
status = res->write_res.count;
out:
if (args->sync)
inc_syscw(current);
return ret;
}
+/*
+ * This "EXPORT_SYMBOL_GPL()" is more of a "EXPORT_SYMBOL_DONTUSE()",
+ * but autofs is one of the few internal kernel users that actually
+ * wants this _and_ can be built as a module. So we need to export
+ * this symbol for autofs, even though it really isn't appropriate
+ * for any other kernel modules.
+ */
+EXPORT_SYMBOL_GPL(__kernel_write);
ssize_t kernel_write(struct file *file, const void *buf, size_t count,
loff_t *pos)
static int vboxsf_parse_monolithic(struct fs_context *fc, void *data)
{
- char *options = data;
+ unsigned char *options = data;
if (options && options[0] == VBSF_MOUNT_SIGNATURE_BYTE_0 &&
options[1] == VBSF_MOUNT_SIGNATURE_BYTE_1 &&
typedef unsigned int blk_qc_t;
#define BLK_QC_T_NONE -1U
-#define BLK_QC_T_EAGAIN -2U
#define BLK_QC_T_SHIFT 16
#define BLK_QC_T_INTERNAL (1U << 31)
static inline bool blk_qc_t_valid(blk_qc_t cookie)
{
- return cookie != BLK_QC_T_NONE && cookie != BLK_QC_T_EAGAIN;
+ return cookie != BLK_QC_T_NONE;
}
static inline unsigned int blk_qc_t_to_queue_num(blk_qc_t cookie)
typedef int (*report_zones_cb)(struct blk_zone *zone, unsigned int idx,
void *data);
+void blk_queue_set_zoned(struct gendisk *disk, enum blk_zoned_model model);
+
#ifdef CONFIG_BLK_DEV_ZONED
#define BLK_ALL_ZONES ((unsigned int)-1)
/* Must be the last timer callback */
CPUHP_AP_DUMMY_TIMER_STARTING,
CPUHP_AP_ARM_XEN_STARTING,
- CPUHP_AP_ARM_KVMPV_STARTING,
CPUHP_AP_ARM_CORESIGHT_STARTING,
CPUHP_AP_ARM_CORESIGHT_CTI_STARTING,
CPUHP_AP_ARM64_ISNDEP_STARTING,
#define CPUIDLE_FLAG_UNUSABLE BIT(3) /* avoid using this state */
#define CPUIDLE_FLAG_OFF BIT(4) /* disable this state by default */
#define CPUIDLE_FLAG_TLB_FLUSHED BIT(5) /* idle-state flushes TLBs */
+#define CPUIDLE_FLAG_RCU_IDLE BIT(6) /* idle-state takes care of RCU */
struct cpuidle_device_kobj;
struct cpuidle_state_kobj;
{
__set_dax_synchronous(dax_dev);
}
+bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
+ int blocksize, sector_t start, sector_t len);
/*
* Check if given mapping is supported by the file / underlying device.
*/
static inline void set_dax_synchronous(struct dax_device *dax_dev)
{
}
+static inline bool dax_supported(struct dax_device *dax_dev,
+ struct block_device *bdev, int blocksize, sector_t start,
+ sector_t len)
+{
+ return false;
+}
static inline bool daxdev_mapping_supported(struct vm_area_struct *vma,
struct dax_device *dax_dev)
{
}
#endif
+#if IS_ENABLED(CONFIG_DAX)
int dax_read_lock(void);
void dax_read_unlock(int id);
+#else
+static inline int dax_read_lock(void)
+{
+ return 0;
+}
+
+static inline void dax_read_unlock(int id)
+{
+}
+#endif /* CONFIG_DAX */
bool dax_alive(struct dax_device *dax_dev);
void *dax_get_private(struct dax_device *dax_dev);
long dax_direct_access(struct dax_device *dax_dev, pgoff_t pgoff, long nr_pages,
void **kaddr, pfn_t *pfn);
-bool dax_supported(struct dax_device *dax_dev, struct block_device *bdev,
- int blocksize, sector_t start, sector_t len);
size_t dax_copy_from_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
size_t bytes, struct iov_iter *i);
size_t dax_copy_to_iter(struct dax_device *dax_dev, pgoff_t pgoff, void *addr,
#define fsparam_u32oct(NAME, OPT) \
__fsparam(fs_param_is_u32, NAME, OPT, 0, (void *)8)
#define fsparam_u32hex(NAME, OPT) \
- __fsparam(fs_param_is_u32_hex, NAME, OPT, 0, (void *16))
+ __fsparam(fs_param_is_u32_hex, NAME, OPT, 0, (void *)16)
#define fsparam_s32(NAME, OPT) __fsparam(fs_param_is_s32, NAME, OPT, 0, NULL)
#define fsparam_u64(NAME, OPT) __fsparam(fs_param_is_u64, NAME, OPT, 0, NULL)
#define fsparam_enum(NAME, OPT, array) __fsparam(fs_param_is_enum, NAME, OPT, 0, array)
struct bdi_writeback;
struct pt_regs;
+extern int sysctl_page_lock_unfairness;
+
void init_mm_internals(void);
#ifndef CONFIG_NEED_MULTIPLE_NODES /* Don't use mapnrs, do it properly */
void free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
unsigned long end, unsigned long floor, unsigned long ceiling);
int copy_page_range(struct mm_struct *dst, struct mm_struct *src,
- struct vm_area_struct *vma);
+ struct vm_area_struct *vma, struct vm_area_struct *new);
int follow_pte_pmd(struct mm_struct *mm, unsigned long address,
struct mmu_notifier_range *range,
pte_t **ptepp, pmd_t **pmdpp, spinlock_t **ptlp);
extern void set_dma_reserve(unsigned long new_dma_reserve);
extern void memmap_init_zone(unsigned long, int, unsigned long, unsigned long,
- enum memmap_context, struct vmem_altmap *);
+ enum meminit_context, struct vmem_altmap *);
extern void setup_per_zone_wmarks(void);
extern int __meminit init_per_zone_wmark_min(void);
extern void mem_init(void);
*/
atomic_t mm_count;
+ /**
+ * @has_pinned: Whether this mm has pinned any pages. This can
+ * be either replaced in the future by @pinned_vm when it
+ * becomes stable, or grow into a counter on its own. We're
+	 * aggressive on this bit now - even if the pinned pages were
+ * unpinned later on, we'll still keep this bit set for the
+ * lifecycle of this mm just for simplicity.
+ */
+ atomic_t has_pinned;
+
#ifdef CONFIG_MMU
atomic_long_t pgtables_bytes; /* PTE page table pages */
#endif
unsigned int alloc_flags);
bool zone_watermark_ok_safe(struct zone *z, unsigned int order,
unsigned long mark, int highest_zoneidx);
-enum memmap_context {
- MEMMAP_EARLY,
- MEMMAP_HOTPLUG,
+/*
+ * Memory initialization context, used to differentiate memory added by
+ * the platform statically or via the memory hotplug interface.
+ */
+enum meminit_context {
+ MEMINIT_EARLY,
+ MEMINIT_HOTPLUG,
};
+
extern void init_currently_empty_zone(struct zone *zone, unsigned long start_pfn,
unsigned long size);
#define NETIF_F_GSO_MASK (__NETIF_F_BIT(NETIF_F_GSO_LAST + 1) - \
__NETIF_F_BIT(NETIF_F_GSO_SHIFT))
-/* List of IP checksum features. Note that NETIF_F_ HW_CSUM should not be
+/* List of IP checksum features. Note that NETIF_F_HW_CSUM should not be
* set in features when NETIF_F_IP_CSUM or NETIF_F_IPV6_CSUM are set--
* this would be contradictory
*/
* the watchdog (see dev_watchdog())
* @watchdog_timer: List of timers
*
+ * @proto_down_reason: reason a netdev interface is held down
* @pcpu_refcnt: Number of references to this device
* @todo_list: Delayed register/unregister
* @link_watch_list: XXX: need comments on this one
* @udp_tunnel_nic_info: static structure describing the UDP tunnel
* offload capabilities of the device
* @udp_tunnel_nic: UDP tunnel offload state
+ * @xdp_state: stores info on attached XDP BPF programs
*
* FIXME: cleanup struct net_device such that network protocol info
* moves out.
__u64 mds_offset; /* Filelayout dense stripe */
struct nfs_page_array page_array;
struct nfs_client *ds_clp; /* pNFS data server */
- int ds_commit_idx; /* ds index if ds_clp is set */
- int pgio_mirror_idx;/* mirror index in pgio layer */
+ u32 ds_commit_idx; /* ds index if ds_clp is set */
+ u32 pgio_mirror_idx;/* mirror index in pgio layer */
};
struct nfs_mds_commit_info {
typedef void (*node_registration_func_t)(struct node *);
#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_NUMA)
-extern int link_mem_sections(int nid, unsigned long start_pfn,
- unsigned long end_pfn);
+int link_mem_sections(int nid, unsigned long start_pfn,
+ unsigned long end_pfn,
+ enum meminit_context context);
#else
static inline int link_mem_sections(int nid, unsigned long start_pfn,
- unsigned long end_pfn)
+ unsigned long end_pfn,
+ enum meminit_context context)
{
return 0;
}
if (error)
return error;
/* link memory sections under this node */
- error = link_mem_sections(nid, start_pfn, end_pfn);
+ error = link_mem_sections(nid, start_pfn, end_pfn,
+ MEMINIT_EARLY);
}
return error;
* anything we did within this RCU-sched read-size critical section.
*/
if (likely(rcu_sync_is_idle(&sem->rss)))
- __this_cpu_inc(*sem->read_count);
+ this_cpu_inc(*sem->read_count);
else
__percpu_down_read(sem, false); /* Unconditional memory barrier */
/*
* Same as in percpu_down_read().
*/
if (likely(rcu_sync_is_idle(&sem->rss)))
- __this_cpu_inc(*sem->read_count);
+ this_cpu_inc(*sem->read_count);
else
ret = __percpu_down_read(sem, true); /* Unconditional memory barrier */
preempt_enable();
* Same as in percpu_down_read().
*/
if (likely(rcu_sync_is_idle(&sem->rss))) {
- __this_cpu_dec(*sem->read_count);
+ this_cpu_dec(*sem->read_count);
} else {
/*
* slowpath; reader will only ever wake a single blocked
* aggregate zero, as that is the only time it matters) they
* will also see our critical section.
*/
- __this_cpu_dec(*sem->read_count);
+ this_cpu_dec(*sem->read_count);
rcuwait_wake_up(&sem->writer);
}
preempt_enable();
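For context, a sketch of the distinction being corrected here (reasoning hedged from the percpu documentation): __this_cpu_*() assumes the caller already excludes interrupts for that variable, while this_cpu_*() is a single per-CPU operation that is interrupt-safe on every architecture, which matters when read_count can also be touched from interrupt-capable paths.

DEFINE_PER_CPU(unsigned int, demo_count);	/* illustrative counter */

static void demo_fast_path(void)
{
	/* safe even if an interrupt handler also increments demo_count */
	this_cpu_inc(demo_count);
}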
#define mm_pmd_folded(mm) __is_defined(__PAGETABLE_PMD_FOLDED)
#endif
+#ifndef p4d_offset_lockless
+#define p4d_offset_lockless(pgdp, pgd, address) p4d_offset(&(pgd), address)
+#endif
+#ifndef pud_offset_lockless
+#define pud_offset_lockless(p4dp, p4d, address) pud_offset(&(p4d), address)
+#endif
+#ifndef pmd_offset_lockless
+#define pmd_offset_lockless(pudp, pud, address) pmd_offset(&(pud), address)
+#endif
+
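A hedged sketch of how a lockless walk (GUP-fast style) would use these helpers: each entry is read once with READ_ONCE() and the value, not the pointer, is passed down, so levels folded at compile time keep using that snapshot instead of re-dereferencing memory that may be freed concurrently.

pgd_t pgd = READ_ONCE(*pgdp);
p4d_t *p4dp = p4d_offset_lockless(pgdp, pgd, addr);
p4d_t p4d = READ_ONCE(*p4dp);
pud_t *pudp = pud_offset_lockless(p4dp, p4d, addr);
pud_t pud = READ_ONCE(*pudp);
pmd_t *pmdp = pmd_offset_lockless(pudp, pud, addr);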
/*
* p?d_leaf() - true if this entry is a final mapping to a physical address.
* This differs from p?d_huge() by the fact that they are always available (if
#define QED_MFW_VERSION_3_OFFSET 24
u32 flash_size;
+ bool b_arfs_capable;
bool b_inter_pf_switch;
bool tx_switching;
bool rdma_supported;
unsigned char hub6; /* this should be in the 8250 driver */
unsigned char suspended;
+ unsigned char console_reinit;
const char *name; /* port name */
struct attribute_group *attr_group; /* port specific attributes */
const struct attribute_group **tty_groups; /* all attributes (serial core use only) */
* is untouched. Otherwise it is extended. Returns zero on
* success. The skb is freed on error if @free_on_error is true.
*/
-static inline int __skb_put_padto(struct sk_buff *skb, unsigned int len,
- bool free_on_error)
+static inline int __must_check __skb_put_padto(struct sk_buff *skb,
+ unsigned int len,
+ bool free_on_error)
{
unsigned int size = skb->len;
* is untouched. Otherwise it is extended. Returns zero on
* success. The skb is freed on error.
*/
-static inline int skb_put_padto(struct sk_buff *skb, unsigned int len)
+static inline int __must_check skb_put_padto(struct sk_buff *skb, unsigned int len)
{
return __skb_put_padto(skb, len, true);
}
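With __must_check in place, callers have to consume the result. A typical transmit-path call site looks like this sketch:

if (skb_put_padto(skb, ETH_ZLEN))
	return NETDEV_TX_OK;	/* skb was already freed on error */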
#ifdef CONFIG_STACKLEAK_RUNTIME_DISABLE
int stack_erasing_sysctl(struct ctl_table *table, int write,
- void __user *buffer, size_t *lenp, loff_t *ppos);
+ void *buffer, size_t *lenp, loff_t *ppos);
#endif
#else /* !CONFIG_GCC_PLUGIN_STACKLEAK */
#define WQ_FLAG_WOKEN 0x02
#define WQ_FLAG_BOOKMARK 0x04
#define WQ_FLAG_CUSTOM 0x08
+#define WQ_FLAG_DONE 0x10
/*
* A single wait-queue entry structure:
* vb2_core_reqbufs() - Initiate streaming.
* @q: pointer to &struct vb2_queue with videobuf2 queue.
* @memory: memory type, as defined by &enum vb2_memory.
- * @flags: auxiliary queue/buffer management flags. Currently, the only
- * used flag is %V4L2_FLAG_MEMORY_NON_CONSISTENT.
* @count: requested buffer count.
*
* Videobuf2 core helper to implement VIDIOC_REQBUF() operation. It is called
* Return: returns zero on success; an error code otherwise.
*/
int vb2_core_reqbufs(struct vb2_queue *q, enum vb2_memory memory,
- unsigned int flags, unsigned int *count);
+ unsigned int *count);
/**
* vb2_core_create_bufs() - Allocate buffers and any required auxiliary structs
* @q: pointer to &struct vb2_queue with videobuf2 queue.
* @memory: memory type, as defined by &enum vb2_memory.
- * @flags: auxiliary queue/buffer management flags.
* @count: requested buffer count.
* @requested_planes: number of planes requested.
* @requested_sizes: array with the size of the planes.
* Return: returns zero on success; an error code otherwise.
*/
int vb2_core_create_bufs(struct vb2_queue *q, enum vb2_memory memory,
- unsigned int flags, unsigned int *count,
+ unsigned int *count,
unsigned int requested_planes,
const unsigned int requested_sizes[]);
fl4->saddr = saddr;
fl4->fl4_dport = dport;
fl4->fl4_sport = sport;
+ fl4->flowi4_multipath_hash = 0;
}
/* Reset some input parameters after previous lookup */
* @hdrlen: length of family specific header
* @tb: destination array with maxtype+1 elements
* @maxtype: maximum attribute type to be expected
- * @validate: validation strictness
* @extack: extended ACK report struct
*
* See nla_parse()
* @len: length of attribute stream
* @maxtype: maximum attribute type to be expected
* @policy: validation policy
- * @validate: validation strictness
* @extack: extended ACK report struct
*
* Validates all attributes in the specified attribute stream against the
struct list_head tables;
struct list_head commit_list;
struct list_head module_list;
+ struct list_head notify_list;
struct mutex commit_mutex;
unsigned int base_seq;
u8 gencursor;
data_ready_signalled:1;
atomic_t pd_mode;
+
+ /* Fields after this point will be skipped on copies, like on accept
+ * and peeloff operations
+ */
+
/* Receive to here while partial delivery is in effect. */
struct sk_buff_head pd_lobby;
- /* These must be the last fields, as they will skipped on copies,
- * like on accept and peeloff operations
- */
struct list_head auto_asconf_list;
int do_auto_asconf;
};
#define VXLAN_GBP_POLICY_APPLIED (BIT(3) << 16)
#define VXLAN_GBP_ID_MASK (0xFFFF)
+#define VXLAN_GBP_MASK (VXLAN_GBP_DONT_LEARN | VXLAN_GBP_POLICY_APPLIED | \
+ VXLAN_GBP_ID_MASK)
+
/*
* VXLAN Generic Protocol Extension (VXLAN_F_GPE):
* +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
u8 ptp_cmd;
struct sk_buff_head tx_skbs;
u8 ts_id;
+ spinlock_t ts_id_lock;
phy_interface_t phy_mode;
int ocelot_init(struct ocelot *ocelot);
void ocelot_deinit(struct ocelot *ocelot);
void ocelot_init_port(struct ocelot *ocelot, int port);
+void ocelot_deinit_port(struct ocelot *ocelot, int port);
/* DSA callbacks */
void ocelot_port_enable(struct ocelot *ocelot, int port,
((i) < (rtd)->num_cpus + (rtd)->num_codecs) && \
((dai) = (rtd)->dais[i]); \
(i)++)
+#define for_each_rtd_dais_rollback(rtd, i, dai) \
+ for (; (--(i) >= 0) && ((dai) = (rtd)->dais[i]);)
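A hypothetical usage sketch (assuming the existing snd_soc_dai_startup()/snd_soc_dai_shutdown() helpers and locals as in the surrounding PCM code): the rollback form walks backwards over the DAIs already visited, the natural shape for unwinding a partially failed startup.

	for_each_rtd_dais(rtd, i, dai) {
		ret = snd_soc_dai_startup(dai, substream);
		if (ret < 0)
			break;
	}
	if (ret < 0)
		for_each_rtd_dais_rollback(rtd, i, dai)
			snd_soc_dai_shutdown(dai, substream);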
void snd_soc_close_delayed_work(struct snd_soc_pcm_runtime *rtd);
struct snd_soc_dai *snd_soc_find_dai(
const struct snd_soc_dai_link_component *dlc);
+struct snd_soc_dai *snd_soc_find_dai_with_mutex(
+ const struct snd_soc_dai_link_component *dlc);
#include <sound/soc-dai.h>
ETHTOOL_MSG_TSINFO_GET_REPLY,
ETHTOOL_MSG_CABLE_TEST_NTF,
ETHTOOL_MSG_CABLE_TEST_TDR_NTF,
+ ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY,
/* add new constants above here */
__ETHTOOL_MSG_KERNEL_CNT,
V4L2_MEMORY_DMABUF = 4,
};
-#define V4L2_FLAG_MEMORY_NON_CONSISTENT (1 << 0)
-
/* see also http://vektor.theorem.ca/graphics/ycbcr/ */
enum v4l2_colorspace {
/*
__u32 type; /* enum v4l2_buf_type */
__u32 memory; /* enum v4l2_memory */
__u32 capabilities;
- union {
- __u32 flags;
- __u32 reserved[1];
- };
+ __u32 reserved[1];
};
/* capabilities for struct v4l2_requestbuffers and v4l2_create_buffers */
* @memory: enum v4l2_memory; buffer memory type
* @format: frame format, for which buffers are requested
* @capabilities: capabilities of this buffer type.
- * @flags: additional buffer management attributes (ignored unless the
- * queue has V4L2_BUF_CAP_SUPPORTS_MMAP_CACHE_HINTS capability
- * and configured for MMAP streaming I/O).
* @reserved: future extensions
*/
struct v4l2_create_buffers {
__u32 memory;
struct v4l2_format format;
__u32 capabilities;
- __u32 flags;
- __u32 reserved[6];
+ __u32 reserved[7];
};
/*
struct bpf_map *map;
struct bpf_htab *htab;
	void *percpu_value_buf; // non-NULL means percpu hash
- unsigned long flags;
u32 bucket_id;
u32 skip_elems;
};
struct htab_elem *prev_elem)
{
const struct bpf_htab *htab = info->htab;
- unsigned long flags = info->flags;
u32 skip_elems = info->skip_elems;
u32 bucket_id = info->bucket_id;
struct hlist_nulls_head *head;
/* not found, unlock and go to the next bucket */
b = &htab->buckets[bucket_id++];
- htab_unlock_bucket(htab, b, flags);
+ rcu_read_unlock();
skip_elems = 0;
}
for (i = bucket_id; i < htab->n_buckets; i++) {
b = &htab->buckets[i];
- flags = htab_lock_bucket(htab, b);
+ rcu_read_lock();
count = 0;
head = &b->head;
hlist_nulls_for_each_entry_rcu(elem, n, head, hash_node) {
if (count >= skip_elems) {
- info->flags = flags;
info->bucket_id = i;
info->skip_elems = count;
return elem;
count++;
}
- htab_unlock_bucket(htab, b, flags);
+ rcu_read_unlock();
skip_elems = 0;
}
static void bpf_hash_map_seq_stop(struct seq_file *seq, void *v)
{
- struct bpf_iter_seq_hash_map_info *info = seq->private;
-
if (!v)
(void)__bpf_hash_map_seq_show(seq, NULL);
else
- htab_unlock_bucket(info->htab,
- &info->htab->buckets[info->bucket_id],
- info->flags);
+ rcu_read_unlock();
}
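Dropping htab_lock_bucket() means the iterator no longer holds a bucket spinlock across seq_file callbacks; plain rcu_read_lock() suffices for the walk itself, since elements are only traversed via hlist_nulls_for_each_entry_rcu(). The presumed benefit is that map operations performed while iterating no longer contend on, or deadlock against, a held bucket lock.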
static int bpf_iter_init_hash_map(void *priv_data,
else
prev_key = key;
+ rcu_read_lock();
if (map->ops->map_get_next_key(map, prev_key, key)) {
map_iter(m)->done = true;
- return NULL;
+ key = NULL;
}
+ rcu_read_unlock();
return key;
}
return ret;
}
+ /* Either of the above might have changed the syscall number */
+ syscall = syscall_get_nr(current, regs);
+
if (unlikely(ti_work & _TIF_SYSCALL_TRACEPOINT))
trace_sys_enter(regs, syscall);
syscall_enter_audit(regs, syscall);
- /* The above might have changed the syscall number */
- return ret ? : syscall_get_nr(current, regs);
+ return ret ? : syscall;
}
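Both hooks above (ptrace and seccomp) may legitimately rewrite the syscall number, so it is re-read once here; the tracepoint, the audit call and the return value below then all observe the rewritten number rather than the stale value read on entry.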
static __always_inline long
mm->map_count++;
if (!(tmp->vm_flags & VM_WIPEONFORK))
- retval = copy_page_range(mm, oldmm, mpnt);
+ retval = copy_page_range(mm, oldmm, mpnt, tmp);
if (tmp->vm_ops && tmp->vm_ops->open)
tmp->vm_ops->open(tmp);
mm_pgtables_bytes_init(mm);
mm->map_count = 0;
mm->locked_vm = 0;
+ atomic_set(&mm->has_pinned, 0);
atomic64_set(&mm->pinned_vm, 0);
memset(&mm->rss_stat, 0, sizeof(mm->rss_stat));
spin_lock_init(&mm->page_table_lock);
lockdep_assert_held(&kprobe_mutex);
+ if (WARN_ON_ONCE(kprobe_gone(p)))
+ return;
+
p->flags |= KPROBE_FLAG_GONE;
if (kprobe_aggrprobe(p)) {
/*
mutex_lock(&kprobe_mutex);
for (i = 0; i < KPROBE_TABLE_SIZE; i++) {
head = &kprobe_table[i];
- hlist_for_each_entry(p, head, hlist)
+ hlist_for_each_entry(p, head, hlist) {
+ if (kprobe_gone(p))
+ continue;
+
if (within_module_init((unsigned long)p->addr, mod) ||
(checkcore &&
within_module_core((unsigned long)p->addr, mod))) {
*/
kill_kprobe(p);
}
+ }
}
if (val == MODULE_STATE_GOING)
remove_module_kprobe_blacklist(mod);
static int mark_lock(struct task_struct *curr, struct held_lock *this,
enum lock_usage_bit new_bit)
{
- unsigned int new_mask = 1 << new_bit, ret = 1;
+ unsigned int old_mask, new_mask, ret = 1;
if (new_bit >= LOCK_USAGE_STATES) {
DEBUG_LOCKS_WARN_ON(1);
return 0;
}
+ if (new_bit == LOCK_USED && this->read)
+ new_bit = LOCK_USED_READ;
+
+ new_mask = 1 << new_bit;
+
/*
* If already set then do not dirty the cacheline,
* nor do any checks:
/*
* Make sure we didn't race:
*/
- if (unlikely(hlock_class(this)->usage_mask & new_mask)) {
- graph_unlock();
- return 1;
- }
+ if (unlikely(hlock_class(this)->usage_mask & new_mask))
+ goto unlock;
+ old_mask = hlock_class(this)->usage_mask;
hlock_class(this)->usage_mask |= new_mask;
+ /*
+ * Save one usage_traces[] entry and map both LOCK_USED and
+ * LOCK_USED_READ onto the same entry.
+ */
+ if (new_bit == LOCK_USED || new_bit == LOCK_USED_READ) {
+ if (old_mask & (LOCKF_USED | LOCKF_USED_READ))
+ goto unlock;
+ new_bit = LOCK_USED;
+ }
+
if (!(hlock_class(this)->usage_traces[new_bit] = save_trace()))
return 0;
return 0;
}
+unlock:
graph_unlock();
/*
{
#ifdef CONFIG_PROVE_LOCKING
struct lock_class *class = look_up_lock_class(lock, subclass);
+ unsigned long mask = LOCKF_USED;
/* if it doesn't have a class (yet), it certainly hasn't been used yet */
if (!class)
return;
- if (!(class->usage_mask & LOCK_USED))
+ /*
+ * READ locks only conflict with USED, such that if we only ever use
+ * READ locks, there is no deadlock possible -- RCU.
+ */
+ if (!hlock->read)
+ mask |= LOCKF_USED_READ;
+
+ if (!(class->usage_mask & mask))
return;
hlock->class_idx = class - lock_classes;
#include "lockdep_states.h"
#undef LOCKDEP_STATE
LOCK_USED,
+ LOCK_USED_READ,
LOCK_USAGE_STATES
};
#include "lockdep_states.h"
#undef LOCKDEP_STATE
__LOCKF(USED)
+ __LOCKF(USED_READ)
};
#define LOCKDEP_STATE(__STATE) LOCKF_ENABLED_##__STATE |
static bool __percpu_down_read_trylock(struct percpu_rw_semaphore *sem)
{
- __this_cpu_inc(*sem->read_count);
+ this_cpu_inc(*sem->read_count);
/*
* Due to having preemption disabled the decrement happens on
if (likely(!atomic_read_acquire(&sem->block)))
return true;
- __this_cpu_dec(*sem->read_count);
+ this_cpu_dec(*sem->read_count);
/* Prod writer to re-evaluate readers_active_check() */
rcuwait_wake_up(&sem->writer);
}
#else /* #ifdef CONFIG_TASKS_RCU */
-static void show_rcu_tasks_classic_gp_kthread(void) { }
+static inline void show_rcu_tasks_classic_gp_kthread(void) { }
void exit_tasks_rcu_start(void) { }
void exit_tasks_rcu_finish(void) { exit_tasks_rcu_finish_trace(current); }
#endif /* #else #ifdef CONFIG_TASKS_RCU */
lockdep_assert_irqs_disabled();
rcu_eqs_enter(false);
}
+EXPORT_SYMBOL_GPL(rcu_idle_enter);
#ifdef CONFIG_NO_HZ_FULL
/**
rcu_eqs_exit(false);
local_irq_restore(flags);
}
+EXPORT_SYMBOL_GPL(rcu_idle_exit);
#ifdef CONFIG_NO_HZ_FULL
/**
static DEFINE_STATIC_KEY_FALSE(stack_erasing_bypass);
int stack_erasing_sysctl(struct ctl_table *table, int write,
- void __user *buffer, size_t *lenp, loff_t *ppos)
+ void *buffer, size_t *lenp, loff_t *ppos)
{
int ret = 0;
int state = !static_branch_unlikely(&stack_erasing_bypass);
.proc_handler = percpu_pagelist_fraction_sysctl_handler,
.extra1 = SYSCTL_ZERO,
},
+ {
+ .procname = "page_lock_unfairness",
+ .data = &sysctl_page_lock_unfairness,
+ .maxlen = sizeof(sysctl_page_lock_unfairness),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ },
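This surfaces as /proc/sys/vm/page_lock_unfairness (kept non-negative by SYSCTL_ZERO). Its consumer is wait_on_page_bit_common() in the mm/filemap.c hunks further below, where the value bounds how many times the page lock may be stolen from under a waiter before the waiter insists on a fair handoff.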
#ifdef CONFIG_MMU
{
.procname = "max_map_count",
__visible void trace_hardirqs_off_caller(unsigned long caller_addr)
{
+ lockdep_hardirqs_off(CALLER_ADDR0);
+
if (!this_cpu_read(tracing_irq_cpu)) {
this_cpu_write(tracing_irq_cpu, 1);
tracer_hardirqs_off(CALLER_ADDR0, caller_addr);
if (!in_nmi())
trace_irq_disable_rcuidle(CALLER_ADDR0, caller_addr);
}
-
- lockdep_hardirqs_off(CALLER_ADDR0);
}
EXPORT_SYMBOL(trace_hardirqs_off_caller);
NOKPROBE_SYMBOL(trace_hardirqs_off_caller);
endchoice
source "lib/Kconfig.kgdb"
-
source "lib/Kconfig.ubsan"
+source "lib/Kconfig.kcsan"
endmenu
source "samples/Kconfig"
-source "lib/Kconfig.kcsan"
-
config ARCH_HAS_DEVMEM_IS_ALLOWED
bool
/* identifiers for device / performance-differentiated memory regions */
#include <linux/idr.h>
#include <linux/types.h>
+#include <linux/memregion.h>
static DEFINE_IDA(memregion_ids);
}
EXPORT_SYMBOL(strscpy_pad);
+/**
+ * stpcpy - copy a string from src to dest returning a pointer to the new end
+ * of dest, including src's %NUL-terminator. May overrun dest.
+ * @dest: pointer to the buffer being copied into. Must be large enough
+ *        to receive the copy.
+ * @src: pointer to the beginning of string being copied from. Must not overlap
+ * dest.
+ *
+ * stpcpy differs from strcpy in a key way: the return value is a pointer
+ * to the new %NUL-terminating character in @dest. (For strcpy, the return
+ * value is a pointer to the start of @dest). This interface is considered
+ * unsafe as it doesn't perform bounds checking of the inputs. As such it's
+ * not recommended for use. Instead, its definition is provided in case
+ * the compiler lowers other libcalls to stpcpy.
+ */
+char *stpcpy(char *__restrict__ dest, const char *__restrict__ src);
+char *stpcpy(char *__restrict__ dest, const char *__restrict__ src)
+{
+ while ((*dest++ = *src++) != '\0')
+ /* nothing */;
+ return --dest;
+}
+EXPORT_SYMBOL(stpcpy);
+
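An illustrative assumption about the lowering mentioned above (the example call is hypothetical): a compiler may turn a plain-%s sprintf() whose return value is used into an stpcpy() libcall, so the symbol must link even though nothing in the tree calls stpcpy() directly.

	/* e.g. Clang may compile this as: return stpcpy(buf, src) - buf; */
	return sprintf(buf, "%s", src);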
#ifndef __HAVE_ARCH_STRCAT
/**
* strcat - Append one %NUL-terminated string to another
} else {
if (WARN(err != -ENOENT, "removed non-existent element, error %d not %d",
err, -ENOENT))
- continue;
+ continue;
}
}
page_writeback_init();
}
+/*
+ * The page wait code treats the "wait->flags" somewhat unusually, because
+ * we have multiple different kinds of waits, not just the usual "exclusive"
+ * one.
+ *
+ * We have:
+ *
+ * (a) no special bits set:
+ *
+ * We're just waiting for the bit to be released, and when a waker
+ * calls the wakeup function, we set WQ_FLAG_WOKEN and wake it up,
+ * and remove it from the wait queue.
+ *
+ * Simple and straightforward.
+ *
+ * (b) WQ_FLAG_EXCLUSIVE:
+ *
+ * The waiter is waiting to get the lock, and only one waiter should
+ * be woken up to avoid any thundering herd behavior. We'll set the
+ * WQ_FLAG_WOKEN bit, wake it up, and remove it from the wait queue.
+ *
+ * This is the traditional exclusive wait.
+ *
+ * (c) WQ_FLAG_EXCLUSIVE | WQ_FLAG_CUSTOM:
+ *
+ * The waiter is waiting to get the bit, and additionally wants the
+ * lock to be transferred to it for fair lock behavior. If the lock
+ * cannot be taken, we stop walking the wait queue without waking
+ * the waiter.
+ *
+ * This is the "fair lock handoff" case, and in addition to setting
+ * WQ_FLAG_WOKEN, we set WQ_FLAG_DONE to let the waiter easily see
+ * that it now has the lock.
+ */
static int wake_page_function(wait_queue_entry_t *wait, unsigned mode, int sync, void *arg)
{
- int ret;
+ unsigned int flags;
struct wait_page_key *key = arg;
struct wait_page_queue *wait_page
= container_of(wait, struct wait_page_queue, wait);
return 0;
/*
- * If it's an exclusive wait, we get the bit for it, and
- * stop walking if we can't.
- *
- * If it's a non-exclusive wait, then the fact that this
- * wake function was called means that the bit already
- * was cleared, and we don't care if somebody then
- * re-took it.
+ * If it's a lock handoff wait, we get the bit for it, and
+ * stop walking (and do not wake it up) if we can't.
*/
- ret = 0;
- if (wait->flags & WQ_FLAG_EXCLUSIVE) {
- if (test_and_set_bit(key->bit_nr, &key->page->flags))
+ flags = wait->flags;
+ if (flags & WQ_FLAG_EXCLUSIVE) {
+ if (test_bit(key->bit_nr, &key->page->flags))
return -1;
- ret = 1;
+ if (flags & WQ_FLAG_CUSTOM) {
+ if (test_and_set_bit(key->bit_nr, &key->page->flags))
+ return -1;
+ flags |= WQ_FLAG_DONE;
+ }
}
- wait->flags |= WQ_FLAG_WOKEN;
+ /*
+ * We are holding the wait-queue lock, but the waiter that
+ * is waiting for this will be checking the flags without
+ * any locking.
+ *
+ * So update the flags atomically, and wake up the waiter
+ * afterwards to avoid any races. This store-release pairs
+ * with the load-acquire in wait_on_page_bit_common().
+ */
+ smp_store_release(&wait->flags, flags | WQ_FLAG_WOKEN);
wake_up_state(wait->private, mode);
/*
* Ok, we have successfully done what we're waiting for,
* and we can unconditionally remove the wait entry.
*
- * Note that this has to be the absolute last thing we do,
- * since after list_del_init(&wait->entry) the wait entry
+ * Note that this pairs with the "finish_wait()" in the
+ * waiter, and has to be the absolute last thing we do.
+ * After this list_del_init(&wait->entry) the wait entry
* might be de-allocated and the process might even have
* exited.
*/
list_del_init_careful(&wait->entry);
- return ret;
+ return (flags & WQ_FLAG_EXCLUSIVE) != 0;
}
static void wake_up_page_bit(struct page *page, int bit_nr)
};
/*
- * Attempt to check (or get) the page bit, and mark the
- * waiter woken if successful.
+ * Attempt to check (or get) the page bit, and mark us done
+ * if successful.
*/
static inline bool trylock_page_bit_common(struct page *page, int bit_nr,
struct wait_queue_entry *wait)
} else if (test_bit(bit_nr, &page->flags))
return false;
- wait->flags |= WQ_FLAG_WOKEN;
+ wait->flags |= WQ_FLAG_WOKEN | WQ_FLAG_DONE;
return true;
}
+/* How many times do we accept lock stealing from under a waiter? */
+int sysctl_page_lock_unfairness = 5;
+
static inline int wait_on_page_bit_common(wait_queue_head_t *q,
struct page *page, int bit_nr, int state, enum behavior behavior)
{
+ int unfairness = sysctl_page_lock_unfairness;
struct wait_page_queue wait_page;
wait_queue_entry_t *wait = &wait_page.wait;
bool thrashing = false;
}
init_wait(wait);
- wait->flags = behavior == EXCLUSIVE ? WQ_FLAG_EXCLUSIVE : 0;
wait->func = wake_page_function;
wait_page.page = page;
wait_page.bit_nr = bit_nr;
+repeat:
+ wait->flags = 0;
+ if (behavior == EXCLUSIVE) {
+ wait->flags = WQ_FLAG_EXCLUSIVE;
+ if (--unfairness < 0)
+ wait->flags |= WQ_FLAG_CUSTOM;
+ }
+
/*
* Do one last check whether we can get the
* page bit synchronously.
/*
* From now on, all the logic will be based on
- * the WQ_FLAG_WOKEN flag, and the and the page
- * bit testing (and setting) will be - or has
- * already been - done by the wake function.
+	 * the WQ_FLAG_WOKEN and WQ_FLAG_DONE flags, to
+ * see whether the page bit testing has already
+ * been done by the wake function.
*
* We can drop our reference to the page.
*/
if (behavior == DROP)
put_page(page);
+ /*
+ * Note that until the "finish_wait()", or until
+ * we see the WQ_FLAG_WOKEN flag, we need to
+ * be very careful with the 'wait->flags', because
+ * we may race with a waker that sets them.
+ */
for (;;) {
+ unsigned int flags;
+
set_current_state(state);
- if (signal_pending_state(state, current))
+ /* Loop until we've been woken or interrupted */
+ flags = smp_load_acquire(&wait->flags);
+ if (!(flags & WQ_FLAG_WOKEN)) {
+ if (signal_pending_state(state, current))
+ break;
+
+ io_schedule();
+ continue;
+ }
+
+ /* If we were non-exclusive, we're done */
+ if (behavior != EXCLUSIVE)
break;
- if (wait->flags & WQ_FLAG_WOKEN)
+ /* If the waker got the lock for us, we're done */
+ if (flags & WQ_FLAG_DONE)
break;
- io_schedule();
+ /*
+ * Otherwise, if we're getting the lock, we need to
+ * try to get it ourselves.
+ *
+ * And if that fails, we'll have to retry this all.
+ */
+ if (unlikely(test_and_set_bit(bit_nr, &page->flags)))
+ goto repeat;
+
+ wait->flags |= WQ_FLAG_DONE;
+ break;
}
+ /*
+ * If a signal happened, this 'finish_wait()' may remove the last
+ * waiter from the wait-queues, but the PageWaiters bit will remain
+ * set. That's ok. The next wakeup will take care of it, and trying
+ * to do it here would be difficult and prone to races.
+ */
finish_wait(q, wait);
if (thrashing) {
}
/*
- * A signal could leave PageWaiters set. Clearing it here if
- * !waitqueue_active would be possible (by open-coding finish_wait),
- * but still fail to catch it in the case of wait hash collision. We
- * already can fail to clear wait hash collision cases, so don't
- * bother with signals either.
+ * NOTE! The wait->flags weren't stable until we've done the
+ * 'finish_wait()', and we could have exited the loop above due
+ * to a signal, and had a wakeup event happen after the signal
+ * test but before the 'finish_wait()'.
+ *
+ * So only after the finish_wait() can we reliably determine
+ * if we got woken up or not, so we can now figure out the final
+ * return value based on that state without races.
+ *
+ * Also note that WQ_FLAG_WOKEN is sufficient for a non-exclusive
+ * waiter, but an exclusive one requires WQ_FLAG_DONE.
*/
+ if (behavior == EXCLUSIVE)
+ return wait->flags & WQ_FLAG_DONE ? 0 : -EINTR;
return wait->flags & WQ_FLAG_WOKEN ? 0 : -EINTR;
}
BUG_ON(*locked != 1);
}
+ if (flags & FOLL_PIN)
+ atomic_set(&mm->has_pinned, 1);
+
/*
* FOLL_PIN and FOLL_GET are mutually exclusive. Traditional behavior
* is to set FOLL_GET if the caller wants pages[] filled in (but has
return 1;
}
-static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
+static int gup_pmd_range(pud_t *pudp, pud_t pud, unsigned long addr, unsigned long end,
unsigned int flags, struct page **pages, int *nr)
{
unsigned long next;
pmd_t *pmdp;
- pmdp = pmd_offset(&pud, addr);
+ pmdp = pmd_offset_lockless(pudp, pud, addr);
do {
pmd_t pmd = READ_ONCE(*pmdp);
return 1;
}
-static int gup_pud_range(p4d_t p4d, unsigned long addr, unsigned long end,
+static int gup_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr, unsigned long end,
unsigned int flags, struct page **pages, int *nr)
{
unsigned long next;
pud_t *pudp;
- pudp = pud_offset(&p4d, addr);
+ pudp = pud_offset_lockless(p4dp, p4d, addr);
do {
pud_t pud = READ_ONCE(*pudp);
if (!gup_huge_pd(__hugepd(pud_val(pud)), addr,
PUD_SHIFT, next, flags, pages, nr))
return 0;
- } else if (!gup_pmd_range(pud, addr, next, flags, pages, nr))
+ } else if (!gup_pmd_range(pudp, pud, addr, next, flags, pages, nr))
return 0;
} while (pudp++, addr = next, addr != end);
return 1;
}
-static int gup_p4d_range(pgd_t pgd, unsigned long addr, unsigned long end,
+static int gup_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr, unsigned long end,
unsigned int flags, struct page **pages, int *nr)
{
unsigned long next;
p4d_t *p4dp;
- p4dp = p4d_offset(&pgd, addr);
+ p4dp = p4d_offset_lockless(pgdp, pgd, addr);
do {
p4d_t p4d = READ_ONCE(*p4dp);
if (!gup_huge_pd(__hugepd(p4d_val(p4d)), addr,
P4D_SHIFT, next, flags, pages, nr))
return 0;
- } else if (!gup_pud_range(p4d, addr, next, flags, pages, nr))
+ } else if (!gup_pud_range(p4dp, p4d, addr, next, flags, pages, nr))
return 0;
} while (p4dp++, addr = next, addr != end);
if (!gup_huge_pd(__hugepd(pgd_val(pgd)), addr,
PGDIR_SHIFT, next, flags, pages, nr))
return;
- } else if (!gup_p4d_range(pgd, addr, next, flags, pages, nr))
+ } else if (!gup_p4d_range(pgdp, pgd, addr, next, flags, pages, nr))
return;
} while (pgdp++, addr = next, addr != end);
}
FOLL_FAST_ONLY)))
return -EINVAL;
+ if (gup_flags & FOLL_PIN)
+	atomic_set(&current->mm->has_pinned, 1);
+
if (!(gup_flags & FOLL_FAST_ONLY))
		might_lock_read(&current->mm->mmap_lock);
src_page = pmd_page(pmd);
VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
+
+ /*
+ * If this page is a potentially pinned page, split and retry the fault
+ * with smaller page size. Normally this should not happen because the
+	 * userspace should use MADV_DONTFORK upon pinned regions. This is a
+	 * best-effort attempt to ensure that the pinned pages won't be
+	 * replaced by another random page during the coming copy-on-write.
+ */
+ if (unlikely(is_cow_mapping(vma->vm_flags) &&
+ atomic_read(&src_mm->has_pinned) &&
+ page_maybe_dma_pinned(src_page))) {
+ pte_free(dst_mm, pgtable);
+ spin_unlock(src_ptl);
+ spin_unlock(dst_ptl);
+ __split_huge_pmd(vma, src_pmd, addr, false, NULL);
+ return -EAGAIN;
+ }
+
get_page(src_page);
page_dup_rmap(src_page, true);
add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
/* No huge zero pud yet */
}
+ /* Please refer to comments in copy_huge_pmd() */
+ if (unlikely(is_cow_mapping(vma->vm_flags) &&
+ atomic_read(&src_mm->has_pinned) &&
+ page_maybe_dma_pinned(pud_page(pud)))) {
+ spin_unlock(src_ptl);
+ spin_unlock(dst_ptl);
+ __split_huge_pud(vma, src_pud, addr);
+ return -EAGAIN;
+ }
+
pudp_set_wrprotect(src_mm, addr, src_pud);
pud = pud_mkold(pud_wrprotect(pud));
set_pud_at(dst_mm, addr, dst_pud, pud);
put_page(page);
add_mm_counter(mm, mm_counter_file(page), -HPAGE_PMD_NR);
return;
- } else if (is_huge_zero_pmd(*pmd)) {
+ } else if (pmd_trans_huge(*pmd) && is_huge_zero_pmd(*pmd)) {
/*
* FIXME: Do we want to invalidate secondary mmu by calling
* mmu_notifier_invalidate_range() see comments below inside
pte = pte_offset_map(&_pmd, addr);
BUG_ON(!pte_none(*pte));
set_pte_at(mm, addr, pte, entry);
- atomic_inc(&page[i]._mapcount);
- pte_unmap(pte);
- }
-
- /*
- * Set PG_double_map before dropping compound_mapcount to avoid
- * false-negative page_mapped().
- */
- if (compound_mapcount(page) > 1 && !TestSetPageDoubleMap(page)) {
- for (i = 0; i < HPAGE_PMD_NR; i++)
+ if (!pmd_migration)
atomic_inc(&page[i]._mapcount);
+ pte_unmap(pte);
}
- lock_page_memcg(page);
- if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
- /* Last compound_mapcount is gone. */
- __dec_lruvec_page_state(page, NR_ANON_THPS);
- if (TestClearPageDoubleMap(page)) {
- /* No need in mapcount reference anymore */
+ if (!pmd_migration) {
+ /*
+ * Set PG_double_map before dropping compound_mapcount to avoid
+ * false-negative page_mapped().
+ */
+ if (compound_mapcount(page) > 1 &&
+ !TestSetPageDoubleMap(page)) {
for (i = 0; i < HPAGE_PMD_NR; i++)
- atomic_dec(&page[i]._mapcount);
+ atomic_inc(&page[i]._mapcount);
+ }
+
+ lock_page_memcg(page);
+ if (atomic_add_negative(-1, compound_mapcount_ptr(page))) {
+ /* Last compound_mapcount is gone. */
+ __dec_lruvec_page_state(page, NR_ANON_THPS);
+ if (TestClearPageDoubleMap(page)) {
+ /* No need in mapcount reference anymore */
+ for (i = 0; i < HPAGE_PMD_NR; i++)
+ atomic_dec(&page[i]._mapcount);
+ }
}
+ unlock_page_memcg(page);
}
- unlock_page_memcg(page);
smp_wmb(); /* make pte visible before pmd */
pmd_populate(mm, pmd, pgtable);
return page; /* let do_swap_page report the error */
new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, address);
+ if (new_page && mem_cgroup_charge(new_page, vma->vm_mm, GFP_KERNEL)) {
+ put_page(new_page);
+ new_page = NULL;
+ }
if (new_page) {
copy_user_highpage(new_page, page, address, vma);
return 0;
}
+regular_page:
if (pmd_trans_unstable(pmd))
return 0;
-regular_page:
#endif
tlb_change_page_size(tlb, PAGE_SIZE);
orig_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
memcg_page_state(memcg, WORKINGSET_ACTIVATE_ANON));
seq_buf_printf(&s, "workingset_activate_file %lu\n",
memcg_page_state(memcg, WORKINGSET_ACTIVATE_FILE));
- seq_buf_printf(&s, "workingset_restore %lu\n",
+ seq_buf_printf(&s, "workingset_restore_anon %lu\n",
memcg_page_state(memcg, WORKINGSET_RESTORE_ANON));
- seq_buf_printf(&s, "workingset_restore %lu\n",
+ seq_buf_printf(&s, "workingset_restore_file %lu\n",
memcg_page_state(memcg, WORKINGSET_RESTORE_FILE));
seq_buf_printf(&s, "workingset_nodereclaim %lu\n",
memcg_page_state(memcg, WORKINGSET_NODERECLAIM));
* covered by this vma.
*/
-static inline unsigned long
-copy_one_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+static unsigned long
+copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
unsigned long addr, int *rss)
{
unsigned long vm_flags = vma->vm_flags;
pte_t pte = *src_pte;
struct page *page;
+ swp_entry_t entry = pte_to_swp_entry(pte);
+
+ if (likely(!non_swap_entry(entry))) {
+ if (swap_duplicate(entry) < 0)
+ return entry.val;
+
+ /* make sure dst_mm is on swapoff's mmlist. */
+ if (unlikely(list_empty(&dst_mm->mmlist))) {
+ spin_lock(&mmlist_lock);
+ if (list_empty(&dst_mm->mmlist))
+ list_add(&dst_mm->mmlist,
+ &src_mm->mmlist);
+ spin_unlock(&mmlist_lock);
+ }
+ rss[MM_SWAPENTS]++;
+ } else if (is_migration_entry(entry)) {
+ page = migration_entry_to_page(entry);
- /* pte contains position in swap or file, so copy. */
- if (unlikely(!pte_present(pte))) {
- swp_entry_t entry = pte_to_swp_entry(pte);
-
- if (likely(!non_swap_entry(entry))) {
- if (swap_duplicate(entry) < 0)
- return entry.val;
-
- /* make sure dst_mm is on swapoff's mmlist. */
- if (unlikely(list_empty(&dst_mm->mmlist))) {
- spin_lock(&mmlist_lock);
- if (list_empty(&dst_mm->mmlist))
- list_add(&dst_mm->mmlist,
- &src_mm->mmlist);
- spin_unlock(&mmlist_lock);
- }
- rss[MM_SWAPENTS]++;
- } else if (is_migration_entry(entry)) {
- page = migration_entry_to_page(entry);
-
- rss[mm_counter(page)]++;
-
- if (is_write_migration_entry(entry) &&
- is_cow_mapping(vm_flags)) {
- /*
- * COW mappings require pages in both
- * parent and child to be set to read.
- */
- make_migration_entry_read(&entry);
- pte = swp_entry_to_pte(entry);
- if (pte_swp_soft_dirty(*src_pte))
- pte = pte_swp_mksoft_dirty(pte);
- if (pte_swp_uffd_wp(*src_pte))
- pte = pte_swp_mkuffd_wp(pte);
- set_pte_at(src_mm, addr, src_pte, pte);
- }
- } else if (is_device_private_entry(entry)) {
- page = device_private_entry_to_page(entry);
+ rss[mm_counter(page)]++;
+ if (is_write_migration_entry(entry) &&
+ is_cow_mapping(vm_flags)) {
/*
- * Update rss count even for unaddressable pages, as
- * they should treated just like normal pages in this
- * respect.
- *
- * We will likely want to have some new rss counters
- * for unaddressable pages, at some point. But for now
- * keep things as they are.
+ * COW mappings require pages in both
+ * parent and child to be set to read.
*/
- get_page(page);
- rss[mm_counter(page)]++;
- page_dup_rmap(page, false);
+ make_migration_entry_read(&entry);
+ pte = swp_entry_to_pte(entry);
+ if (pte_swp_soft_dirty(*src_pte))
+ pte = pte_swp_mksoft_dirty(pte);
+ if (pte_swp_uffd_wp(*src_pte))
+ pte = pte_swp_mkuffd_wp(pte);
+ set_pte_at(src_mm, addr, src_pte, pte);
+ }
+ } else if (is_device_private_entry(entry)) {
+ page = device_private_entry_to_page(entry);
- /*
- * We do not preserve soft-dirty information, because so
- * far, checkpoint/restore is the only feature that
- * requires that. And checkpoint/restore does not work
- * when a device driver is involved (you cannot easily
- * save and restore device driver state).
- */
- if (is_write_device_private_entry(entry) &&
- is_cow_mapping(vm_flags)) {
- make_device_private_entry_read(&entry);
- pte = swp_entry_to_pte(entry);
- if (pte_swp_uffd_wp(*src_pte))
- pte = pte_swp_mkuffd_wp(pte);
- set_pte_at(src_mm, addr, src_pte, pte);
- }
+ /*
+ * Update rss count even for unaddressable pages, as
+		 * they should be treated just like normal pages in this
+ * respect.
+ *
+ * We will likely want to have some new rss counters
+ * for unaddressable pages, at some point. But for now
+ * keep things as they are.
+ */
+ get_page(page);
+ rss[mm_counter(page)]++;
+ page_dup_rmap(page, false);
+
+ /*
+ * We do not preserve soft-dirty information, because so
+ * far, checkpoint/restore is the only feature that
+ * requires that. And checkpoint/restore does not work
+ * when a device driver is involved (you cannot easily
+ * save and restore device driver state).
+ */
+ if (is_write_device_private_entry(entry) &&
+ is_cow_mapping(vm_flags)) {
+ make_device_private_entry_read(&entry);
+ pte = swp_entry_to_pte(entry);
+ if (pte_swp_uffd_wp(*src_pte))
+ pte = pte_swp_mkuffd_wp(pte);
+ set_pte_at(src_mm, addr, src_pte, pte);
}
- goto out_set_pte;
+ }
+ set_pte_at(dst_mm, addr, dst_pte, pte);
+ return 0;
+}
+
+/*
+ * Copy a present and normal page if necessary.
+ *
+ * NOTE! The usual case is that this doesn't need to do
+ * anything, and can just return a positive value. That
+ * will let the caller know that it can just increase
+ * the page refcount and re-use the pte the traditional
+ * way.
+ *
+ * But _if_ we need to copy it because it needs to be
+ * pinned in the parent (and the child should get its own
+ * copy rather than just a reference to the same page),
+ * we'll do that here and return zero to let the caller
+ * know we're done.
+ *
+ * And if we need a pre-allocated page but don't yet have
+ * one, return a negative error to let the preallocation
+ * code know so that it can do so outside the page table
+ * lock.
+ */
+static inline int
+copy_present_page(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ pte_t *dst_pte, pte_t *src_pte,
+ struct vm_area_struct *vma, struct vm_area_struct *new,
+ unsigned long addr, int *rss, struct page **prealloc,
+ pte_t pte, struct page *page)
+{
+ struct page *new_page;
+
+ if (!is_cow_mapping(vma->vm_flags))
+ return 1;
+
+ /*
+ * The trick starts.
+ *
+ * What we want to do is to check whether this page may
+ * have been pinned by the parent process. If so,
+ * instead of wrprotect the pte on both sides, we copy
+ * the page immediately so that we'll always guarantee
+ * the pinned page won't be randomly replaced in the
+ * future.
+ *
+ * To achieve this, we do the following:
+ *
+ * 1. Write-protect the pte if it's writable. This is
+ * to protect concurrent write fast-gup with
+ * FOLL_PIN, so that we'll fail the fast-gup with
+ * the write bit removed.
+ *
+ * 2. Check page_maybe_dma_pinned() to see whether this
+ * page may have been pinned.
+ *
+ * The order of these steps is important to serialize
+ * against the fast-gup code (gup_pte_range()) on the
+ * pte check and try_grab_compound_head(), so that
+ * we'll make sure either we'll capture that fast-gup
+ * so we'll copy the pinned page here, or we'll fail
+ * that fast-gup.
+ *
+ * NOTE! Even if we don't end up copying the page,
+ * we won't undo this wrprotect(), because the normal
+ * reference copy will need it anyway.
+ */
+ if (pte_write(pte))
+ ptep_set_wrprotect(src_mm, addr, src_pte);
+
+ /*
+ * These are the "normally we can just copy by reference"
+ * checks.
+ */
+ if (likely(!atomic_read(&src_mm->has_pinned)))
+ return 1;
+ if (likely(!page_maybe_dma_pinned(page)))
+ return 1;
+
+ /*
+ * Uhhuh. It looks like the page might be a pinned page,
+ * and we actually need to copy it. Now we can set the
+ * source pte back to being writable.
+ */
+ if (pte_write(pte))
+ set_pte_at(src_mm, addr, src_pte, pte);
+
+ new_page = *prealloc;
+ if (!new_page)
+ return -EAGAIN;
+
+ /*
+ * We have a prealloc page, all good! Take it
+ * over and copy the page & arm it.
+ */
+ *prealloc = NULL;
+ copy_user_highpage(new_page, page, addr, vma);
+ __SetPageUptodate(new_page);
+ page_add_new_anon_rmap(new_page, new, addr, false);
+ lru_cache_add_inactive_or_unevictable(new_page, new);
+ rss[mm_counter(new_page)]++;
+
+ /* All done, just insert the new page copy in the child */
+ pte = mk_pte(new_page, new->vm_page_prot);
+ pte = maybe_mkwrite(pte_mkdirty(pte), new);
+ set_pte_at(dst_mm, addr, dst_pte, pte);
+ return 0;
+}
+
+/*
+ * Copy one pte. Returns 0 if succeeded, or -EAGAIN if one preallocated page
+ * is required to copy this pte.
+ */
+static inline int
+copy_present_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
+ pte_t *dst_pte, pte_t *src_pte, struct vm_area_struct *vma,
+ struct vm_area_struct *new,
+ unsigned long addr, int *rss, struct page **prealloc)
+{
+ unsigned long vm_flags = vma->vm_flags;
+ pte_t pte = *src_pte;
+ struct page *page;
+
+ page = vm_normal_page(vma, addr, pte);
+ if (page) {
+ int retval;
+
+ retval = copy_present_page(dst_mm, src_mm,
+ dst_pte, src_pte,
+ vma, new,
+ addr, rss, prealloc,
+ pte, page);
+ if (retval <= 0)
+ return retval;
+
+ get_page(page);
+ page_dup_rmap(page, false);
+ rss[mm_counter(page)]++;
}
/*
if (!(vm_flags & VM_UFFD_WP))
pte = pte_clear_uffd_wp(pte);
- page = vm_normal_page(vma, addr, pte);
- if (page) {
- get_page(page);
- page_dup_rmap(page, false);
- rss[mm_counter(page)]++;
- }
-
-out_set_pte:
set_pte_at(dst_mm, addr, dst_pte, pte);
return 0;
}
+static inline struct page *
+page_copy_prealloc(struct mm_struct *src_mm, struct vm_area_struct *vma,
+ unsigned long addr)
+{
+ struct page *new_page;
+
+ new_page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, addr);
+ if (!new_page)
+ return NULL;
+
+ if (mem_cgroup_charge(new_page, src_mm, GFP_KERNEL)) {
+ put_page(new_page);
+ return NULL;
+ }
+ cgroup_throttle_swaprate(new_page, GFP_KERNEL);
+
+ return new_page;
+}
+
static int copy_pte_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pmd_t *dst_pmd, pmd_t *src_pmd, struct vm_area_struct *vma,
+ struct vm_area_struct *new,
unsigned long addr, unsigned long end)
{
pte_t *orig_src_pte, *orig_dst_pte;
pte_t *src_pte, *dst_pte;
spinlock_t *src_ptl, *dst_ptl;
- int progress = 0;
+ int progress, ret = 0;
int rss[NR_MM_COUNTERS];
swp_entry_t entry = (swp_entry_t){0};
+ struct page *prealloc = NULL;
again:
+ progress = 0;
init_rss_vec(rss);
dst_pte = pte_alloc_map_lock(dst_mm, dst_pmd, addr, &dst_ptl);
- if (!dst_pte)
- return -ENOMEM;
+ if (!dst_pte) {
+ ret = -ENOMEM;
+ goto out;
+ }
src_pte = pte_offset_map(src_pmd, addr);
src_ptl = pte_lockptr(src_mm, src_pmd);
spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
progress++;
continue;
}
- entry.val = copy_one_pte(dst_mm, src_mm, dst_pte, src_pte,
+ if (unlikely(!pte_present(*src_pte))) {
+ entry.val = copy_nonpresent_pte(dst_mm, src_mm,
+ dst_pte, src_pte,
vma, addr, rss);
- if (entry.val)
+ if (entry.val)
+ break;
+ progress += 8;
+ continue;
+ }
+ /* copy_present_pte() will clear `*prealloc' if consumed */
+ ret = copy_present_pte(dst_mm, src_mm, dst_pte, src_pte,
+ vma, new, addr, rss, &prealloc);
+ /*
+ * If we need a pre-allocated page for this pte, drop the
+ * locks, allocate, and try again.
+ */
+ if (unlikely(ret == -EAGAIN))
break;
+ if (unlikely(prealloc)) {
+ /*
+			 * The pre-allocated page cannot be reused for the
+			 * next pte, so as to strictly follow the mempolicy
+			 * (e.g., alloc_page_vma() allocates the page
+			 * according to the address). This can only happen
+			 * if a pinned pte changed.
+ */
+ put_page(prealloc);
+ prealloc = NULL;
+ }
progress += 8;
} while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
cond_resched();
if (entry.val) {
- if (add_swap_count_continuation(entry, GFP_KERNEL) < 0)
+ if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ entry.val = 0;
+ } else if (ret) {
+ WARN_ON_ONCE(ret != -EAGAIN);
+ prealloc = page_copy_prealloc(src_mm, vma, addr);
+ if (!prealloc)
return -ENOMEM;
- progress = 0;
+ /* We've captured and resolved the error. Reset, try again. */
+ ret = 0;
}
if (addr != end)
goto again;
- return 0;
+out:
+ if (unlikely(prealloc))
+ put_page(prealloc);
+ return ret;
}
static inline int copy_pmd_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pud_t *dst_pud, pud_t *src_pud, struct vm_area_struct *vma,
+ struct vm_area_struct *new,
unsigned long addr, unsigned long end)
{
pmd_t *src_pmd, *dst_pmd;
if (pmd_none_or_clear_bad(src_pmd))
continue;
if (copy_pte_range(dst_mm, src_mm, dst_pmd, src_pmd,
- vma, addr, next))
+ vma, new, addr, next))
return -ENOMEM;
} while (dst_pmd++, src_pmd++, addr = next, addr != end);
return 0;
static inline int copy_pud_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
p4d_t *dst_p4d, p4d_t *src_p4d, struct vm_area_struct *vma,
+ struct vm_area_struct *new,
unsigned long addr, unsigned long end)
{
pud_t *src_pud, *dst_pud;
if (pud_none_or_clear_bad(src_pud))
continue;
if (copy_pmd_range(dst_mm, src_mm, dst_pud, src_pud,
- vma, addr, next))
+ vma, new, addr, next))
return -ENOMEM;
} while (dst_pud++, src_pud++, addr = next, addr != end);
return 0;
static inline int copy_p4d_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
pgd_t *dst_pgd, pgd_t *src_pgd, struct vm_area_struct *vma,
+ struct vm_area_struct *new,
unsigned long addr, unsigned long end)
{
p4d_t *src_p4d, *dst_p4d;
if (p4d_none_or_clear_bad(src_p4d))
continue;
if (copy_pud_range(dst_mm, src_mm, dst_p4d, src_p4d,
- vma, addr, next))
+ vma, new, addr, next))
return -ENOMEM;
} while (dst_p4d++, src_p4d++, addr = next, addr != end);
return 0;
}
int copy_page_range(struct mm_struct *dst_mm, struct mm_struct *src_mm,
- struct vm_area_struct *vma)
+ struct vm_area_struct *vma, struct vm_area_struct *new)
{
pgd_t *src_pgd, *dst_pgd;
unsigned long next;
if (pgd_none_or_clear_bad(src_pgd))
continue;
if (unlikely(copy_p4d_range(dst_mm, src_mm, dst_pgd, src_pgd,
- vma, addr, next))) {
+ vma, new, addr, next))) {
ret = -ENOMEM;
break;
}
* page count reference, and the page is locked,
* it's dark out, and we're wearing sunglasses. Hit it.
*/
- wp_page_reuse(vmf);
unlock_page(page);
+ wp_page_reuse(vmf);
return VM_FAULT_WRITE;
} else if (unlikely((vma->vm_flags & (VM_WRITE|VM_SHARED)) ==
(VM_WRITE|VM_SHARED))) {
* are reserved so nobody should be touching them so we should be safe
*/
memmap_init_zone(nr_pages, nid, zone_idx(zone), start_pfn,
- MEMMAP_HOTPLUG, altmap);
+ MEMINIT_HOTPLUG, altmap);
set_zone_contiguous(zone);
}
}
/* link memory sections under this node.*/
- ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1));
+ ret = link_mem_sections(nid, PFN_DOWN(start), PFN_UP(start + size - 1),
+ MEMINIT_HOTPLUG);
BUG_ON(ret);
/* create new memmap entry */
/* check again */
ret = walk_system_ram_range(start_pfn, end_pfn - start_pfn,
NULL, check_pages_isolated_cb);
+ /*
+ * per-cpu pages are drained in start_isolate_page_range, but if
+ * there are still pages that are not free, make sure that we
+ * drain again, because when we isolated range we might
+ * have raced with another thread that was adding pages to pcp
+ * list.
+ *
+	 * Forward progress should still be guaranteed because
+ * pages on the pcp list can only belong to MOVABLE_ZONE
+ * because has_unmovable_pages explicitly checks for
+ * PageBuddy on freed pages on other zones.
+ */
+ if (ret)
+ drain_all_pages(zone);
} while (ret);
/* Ok, all of our target is isolated.
copy_page_owner(page, newpage);
- mem_cgroup_migrate(page, newpage);
+ if (!PageHuge(page))
+ mem_cgroup_migrate(page, newpage);
}
EXPORT_SYMBOL(migrate_page_states);
* Capture required information that might get lost
* during migration.
*/
- is_thp = PageTransHuge(page);
+ is_thp = PageTransHuge(page) && !PageHuge(page);
nr_subpages = thp_nr_pages(page);
cond_resched();
* we encounter them after the rest of the list
* is processed.
*/
- if (PageTransHuge(page) && !PageHuge(page)) {
+ if (is_thp) {
lock_page(page);
rc = split_huge_page_to_list(page, from);
unlock_page(page);
nr_thp_split++;
goto retry;
}
- }
- if (is_thp) {
+
nr_thp_failed++;
nr_failed += nr_subpages;
goto out;
*/
void clear_page_mlock(struct page *page)
{
+ int nr_pages;
+
if (!TestClearPageMlocked(page))
return;
- mod_zone_page_state(page_zone(page), NR_MLOCK, -thp_nr_pages(page));
- count_vm_event(UNEVICTABLE_PGCLEARED);
+ nr_pages = thp_nr_pages(page);
+ mod_zone_page_state(page_zone(page), NR_MLOCK, -nr_pages);
+ count_vm_events(UNEVICTABLE_PGCLEARED, nr_pages);
/*
* The previous TestClearPageMlocked() corresponds to the smp_mb()
* in __pagevec_lru_add_fn().
* We lost the race. the page already moved to evictable list.
*/
if (PageUnevictable(page))
- count_vm_event(UNEVICTABLE_PGSTRANDED);
+ count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
}
}
VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page);
if (!TestSetPageMlocked(page)) {
- mod_zone_page_state(page_zone(page), NR_MLOCK,
- thp_nr_pages(page));
- count_vm_event(UNEVICTABLE_PGMLOCKED);
+ int nr_pages = thp_nr_pages(page);
+
+ mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
+ count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
if (!isolate_lru_page(page))
putback_lru_page(page);
}
/* Did try_to_unlock() succeed or punt? */
if (!PageMlocked(page))
- count_vm_event(UNEVICTABLE_PGMUNLOCKED);
+ count_vm_events(UNEVICTABLE_PGMUNLOCKED, thp_nr_pages(page));
putback_lru_page(page);
}
*/
static void __munlock_isolation_failed(struct page *page)
{
+ int nr_pages = thp_nr_pages(page);
+
if (PageUnevictable(page))
- __count_vm_event(UNEVICTABLE_PGSTRANDED);
+ __count_vm_events(UNEVICTABLE_PGSTRANDED, nr_pages);
else
- __count_vm_event(UNEVICTABLE_PGMUNLOCKED);
+ __count_vm_events(UNEVICTABLE_PGMUNLOCKED, nr_pages);
}
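The common thread of these mlock/munlock hunks: the UNEVICTABLE_* vm events now advance by thp_nr_pages() instead of by one, so a THP contributes HPAGE_PMD_NR base-page events and the event counters stay in the same units as the NR_MLOCK zone state updated next to them.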
/**
* done. Non-atomic initialization, single-pass.
*/
void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
- unsigned long start_pfn, enum memmap_context context,
+ unsigned long start_pfn, enum meminit_context context,
struct vmem_altmap *altmap)
{
unsigned long pfn, end_pfn = start_pfn + size;
* There can be holes in boot-time mem_map[]s handed to this
* function. They do not exist on hotplugged memory.
*/
- if (context == MEMMAP_EARLY) {
+ if (context == MEMINIT_EARLY) {
if (overlap_memmap_init(zone, &pfn))
continue;
if (defer_init(nid, pfn, end_pfn))
page = pfn_to_page(pfn);
__init_single_page(page, pfn, zone, nid);
- if (context == MEMMAP_HOTPLUG)
+ if (context == MEMINIT_HOTPLUG)
__SetPageReserved(page);
/*
* check here not to call set_pageblock_migratetype() against
* pfn out of zone.
*
- * Please note that MEMMAP_HOTPLUG path doesn't clear memmap
+ * Please note that MEMINIT_HOTPLUG path doesn't clear memmap
* because this is done early in section_activate()
*/
if (!(pfn & (pageblock_nr_pages - 1))) {
if (end_pfn > start_pfn) {
size = end_pfn - start_pfn;
memmap_init_zone(size, nid, zone, start_pfn,
- MEMMAP_EARLY, NULL);
+ MEMINIT_EARLY, NULL);
}
}
}
* pageblocks we may have modified and return -EBUSY to caller. This
* prevents two threads from simultaneously working on overlapping ranges.
*
+ * Please note that there is no strong synchronization with the page allocator
+ * either. Pages might be freed while their page blocks are marked ISOLATED.
+ * In some cases pages might still end up on pcp lists and that would allow
+ * for their allocation even when they are in fact isolated already. Depending
+ * on how strong a guarantee the caller needs, drain_all_pages() might be
+ * needed (e.g. __offline_pages needs to call it after the check for an
+ * isolated range before the next retry).
+ *
* Return: the number of isolated pageblocks on success and -EBUSY if any part
* of range cannot be isolated.
*/
/* allocate chunk */
alloc_size = sizeof(struct pcpu_chunk) +
- BITS_TO_LONGS(region_size >> PAGE_SHIFT);
+ BITS_TO_LONGS(region_size >> PAGE_SHIFT) * sizeof(unsigned long);
chunk = memblock_alloc(alloc_size, SMP_CACHE_BYTES);
if (!chunk)
panic("%s: Failed to allocate %zu bytes\n", __func__,
if (!(sb->s_flags & SB_KERNMOUNT)) {
spin_lock(&sbinfo->stat_lock);
- if (!sbinfo->free_inodes) {
- spin_unlock(&sbinfo->stat_lock);
- return -ENOSPC;
+ if (sbinfo->max_inodes) {
+ if (!sbinfo->free_inodes) {
+ spin_unlock(&sbinfo->stat_lock);
+ return -ENOSPC;
+ }
+ sbinfo->free_inodes--;
}
- sbinfo->free_inodes--;
if (inop) {
ino = sbinfo->next_ino++;
if (unlikely(is_zero_ino(ino)))
kmem_cache_free(cachep->freelist_cache, freelist);
}
+/*
+ * Update the size of the caches before calling slabs_destroy as it may
+ * recursively call kfree.
+ */
static void slabs_destroy(struct kmem_cache *cachep, struct list_head *list)
{
struct page *page, *n;
spin_lock(&n->list_lock);
free_block(cachep, ac->entry, ac->avail, node, &list);
spin_unlock(&n->list_lock);
- slabs_destroy(cachep, &list);
ac->avail = 0;
+ slabs_destroy(cachep, &list);
}
static void drain_cpu_caches(struct kmem_cache *cachep)
}
#endif
spin_unlock(&n->list_lock);
- slabs_destroy(cachep, &list);
ac->avail -= batchcount;
memmove(ac->entry, &(ac->entry[batchcount]), sizeof(void *)*ac->avail);
+ slabs_destroy(cachep, &list);
}
/*
unevictable = (vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED;
if (unlikely(unevictable) && !TestSetPageMlocked(page)) {
+ int nr_pages = thp_nr_pages(page);
/*
* We use the irq-unsafe __mod_zone_page_stat because this
* counter is not modified from interrupt context, and the pte
* lock is held(spinlock), which implies preemption disabled.
*/
- __mod_zone_page_state(page_zone(page), NR_MLOCK,
- thp_nr_pages(page));
- count_vm_event(UNEVICTABLE_PGMLOCKED);
+ __mod_zone_page_state(page_zone(page), NR_MLOCK, nr_pages);
+ count_vm_events(UNEVICTABLE_PGMLOCKED, nr_pages);
}
lru_cache_add(page);
}
goto nextsi;
}
if (size == SWAPFILE_CLUSTER) {
- if (!(si->flags & SWP_FS))
+ if (si->flags & SWP_BLKDEV)
n_ret = swap_alloc_cluster(si, swp_entries);
} else
n_ret = scan_swap_map_slots(si, SWAP_HAS_CACHE,
for (i = 0; i < pvec->nr; i++) {
struct page *page = pvec->pages[i];
struct pglist_data *pagepgdat = page_pgdat(page);
+ int nr_pages;
+
+ if (PageTransTail(page))
+ continue;
+
+ nr_pages = thp_nr_pages(page);
+ pgscanned += nr_pages;
- pgscanned++;
if (pagepgdat != pgdat) {
if (pgdat)
spin_unlock_irq(&pgdat->lru_lock);
ClearPageUnevictable(page);
del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
add_page_to_lru_list(page, lruvec, lru);
- pgrescued++;
+ pgrescued += nr_pages;
}
}
#include <linux/lockdep.h>
#include <linux/netdevice.h>
#include <linux/netlink.h>
+#include <linux/preempt.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/seq_file.h>
*/
static inline u32 batadv_choose_backbone_gw(const void *data, u32 size)
{
- const struct batadv_bla_claim *claim = (struct batadv_bla_claim *)data;
+ const struct batadv_bla_backbone_gw *gw;
u32 hash = 0;
- hash = jhash(&claim->addr, sizeof(claim->addr), hash);
- hash = jhash(&claim->vid, sizeof(claim->vid), hash);
+ gw = (struct batadv_bla_backbone_gw *)data;
+ hash = jhash(&gw->orig, sizeof(gw->orig), hash);
+ hash = jhash(&gw->vid, sizeof(gw->vid), hash);
return hash % size;
}
}
/**
- * batadv_bla_check_bcast_duplist() - Check if a frame is in the broadcast dup.
+ * batadv_bla_check_duplist() - Check if a frame is in the broadcast dup.
* @bat_priv: the bat priv with all the soft interface information
- * @skb: contains the bcast_packet to be checked
+ * @skb: contains the multicast packet to be checked
+ * @payload_ptr: pointer to position inside the head buffer of the skb
+ * marking the start of the data to be CRC'ed
+ * @orig: originator mac address, NULL if unknown
*
- * check if it is on our broadcast list. Another gateway might
- * have sent the same packet because it is connected to the same backbone,
- * so we have to remove this duplicate.
+ * Check if it is on our broadcast list. Another gateway might have sent the
+ * same packet because it is connected to the same backbone, so we have to
+ * remove this duplicate.
*
* This is performed by checking the CRC, which will tell us
* with a good chance that it is the same packet. If it is furthermore
*
* Return: true if a packet is in the duplicate list, false otherwise.
*/
-bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv,
- struct sk_buff *skb)
+static bool batadv_bla_check_duplist(struct batadv_priv *bat_priv,
+ struct sk_buff *skb, u8 *payload_ptr,
+ const u8 *orig)
{
- int i, curr;
- __be32 crc;
- struct batadv_bcast_packet *bcast_packet;
struct batadv_bcast_duplist_entry *entry;
bool ret = false;
-
- bcast_packet = (struct batadv_bcast_packet *)skb->data;
+ int i, curr;
+ __be32 crc;
/* calculate the crc ... */
- crc = batadv_skb_crc32(skb, (u8 *)(bcast_packet + 1));
+ crc = batadv_skb_crc32(skb, payload_ptr);
spin_lock_bh(&bat_priv->bla.bcast_duplist_lock);
if (entry->crc != crc)
continue;
- if (batadv_compare_eth(entry->orig, bcast_packet->orig))
- continue;
+ /* are the originators both known and not anonymous? */
+ if (orig && !is_zero_ether_addr(orig) &&
+ !is_zero_ether_addr(entry->orig)) {
+ /* If known, check if the new frame came from
+ * the same originator:
+ * We are safe to take identical frames from the
+ * same orig, if known, as multiplications in
+ * the mesh are detected via the (orig, seqno) pair.
+ * So we can be a bit more liberal here and allow
+ * identical frames from the same orig which the source
+ * host might have sent multiple times on purpose.
+ */
+ if (batadv_compare_eth(entry->orig, orig))
+ continue;
+ }
/* this entry seems to match: same crc, not too old,
* and from another gw. therefore return true to forbid it.
entry = &bat_priv->bla.bcast_duplist[curr];
entry->crc = crc;
entry->entrytime = jiffies;
- ether_addr_copy(entry->orig, bcast_packet->orig);
+
+ /* known originator */
+ if (orig)
+ ether_addr_copy(entry->orig, orig);
+ /* anonymous originator */
+ else
+ eth_zero_addr(entry->orig);
+
bat_priv->bla.bcast_duplist_curr = curr;
out:
}
/**
+ * batadv_bla_check_ucast_duplist() - Check if a frame is in the broadcast dup.
+ * @bat_priv: the bat priv with all the soft interface information
+ * @skb: contains the multicast packet to be checked, decapsulated from a
+ * unicast_packet
+ *
+ * Check if it is on our broadcast list. Another gateway might have sent the
+ * same packet because it is connected to the same backbone, so we have to
+ * remove this duplicate.
+ *
+ * Return: true if a packet is in the duplicate list, false otherwise.
+ */
+static bool batadv_bla_check_ucast_duplist(struct batadv_priv *bat_priv,
+ struct sk_buff *skb)
+{
+ return batadv_bla_check_duplist(bat_priv, skb, (u8 *)skb->data, NULL);
+}
+
+/**
+ * batadv_bla_check_bcast_duplist() - Check if a frame is in the broadcast dup.
+ * @bat_priv: the bat priv with all the soft interface information
+ * @skb: contains the bcast_packet to be checked
+ *
+ * Check if it is on our broadcast list. Another gateway might have sent the
+ * same packet because it is connected to the same backbone, so we have to
+ * remove this duplicate.
+ *
+ * Return: true if a packet is in the duplicate list, false otherwise.
+ */
+bool batadv_bla_check_bcast_duplist(struct batadv_priv *bat_priv,
+ struct sk_buff *skb)
+{
+ struct batadv_bcast_packet *bcast_packet;
+ u8 *payload_ptr;
+
+ bcast_packet = (struct batadv_bcast_packet *)skb->data;
+ payload_ptr = (u8 *)(bcast_packet + 1);
+
+ return batadv_bla_check_duplist(bat_priv, skb, payload_ptr,
+ bcast_packet->orig);
+}
+
+/**
* batadv_bla_is_backbone_gw_orig() - Check if the originator is a gateway for
* the VLAN identified by vid.
* @bat_priv: the bat priv with all the soft interface information
* @bat_priv: the bat priv with all the soft interface information
* @skb: the frame to be checked
* @vid: the VLAN ID of the frame
- * @is_bcast: the packet came in a broadcast packet type.
+ * @packet_type: the batman packet type this frame came in
*
* batadv_bla_rx avoidance checks if:
* * we have to race for a claim
* further process the skb.
*/
bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
- unsigned short vid, bool is_bcast)
+ unsigned short vid, int packet_type)
{
struct batadv_bla_backbone_gw *backbone_gw;
struct ethhdr *ethhdr;
goto handled;
if (unlikely(atomic_read(&bat_priv->bla.num_requests)))
- /* don't allow broadcasts while requests are in flight */
- if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast)
- goto handled;
+ /* don't allow multicast packets while requests are in flight */
+ if (is_multicast_ether_addr(ethhdr->h_dest))
+ /* Both broadcast flooding or multicast-via-unicasts
+ * delivery might send to multiple backbone gateways
+ * sharing the same LAN and therefore need to coordinate
+ * which backbone gateway forwards into the LAN,
+ * by claiming the payload source address.
+ *
+ * Broadcast flooding and multicast-via-unicasts
+ * delivery use the following two batman packet types.
+ * Note: explicitly exclude BATADV_UNICAST_4ADDR,
+ * as the DHCP gateway feature will send explicitly
+ * to only one BLA gateway, so the claiming process
+ * should be avoided there.
+ */
+ if (packet_type == BATADV_BCAST ||
+ packet_type == BATADV_UNICAST)
+ goto handled;
+
+ /* potential duplicates from foreign BLA backbone gateways via
+ * multicast-in-unicast packets
+ */
+ if (is_multicast_ether_addr(ethhdr->h_dest) &&
+ packet_type == BATADV_UNICAST &&
+ batadv_bla_check_ucast_duplist(bat_priv, skb))
+ goto handled;
ether_addr_copy(search_claim.addr, ethhdr->h_source);
search_claim.vid = vid;
goto allow;
}
- /* if it is a broadcast ... */
- if (is_multicast_ether_addr(ethhdr->h_dest) && is_bcast) {
+ /* if it is a multicast ... */
+ if (is_multicast_ether_addr(ethhdr->h_dest) &&
+ (packet_type == BATADV_BCAST || packet_type == BATADV_UNICAST)) {
/* ... drop it. the responsible gateway is in charge.
*
- * We need to check is_bcast because with the gateway
+ * We need to check packet type because with the gateway
* feature, broadcasts (like DHCP requests) may be sent
- * using a unicast packet type.
+ * using a unicast 4 address packet type. See comment above.
*/
goto handled;
} else {
#ifdef CONFIG_BATMAN_ADV_BLA
bool batadv_bla_rx(struct batadv_priv *bat_priv, struct sk_buff *skb,
- unsigned short vid, bool is_bcast);
+ unsigned short vid, int packet_type);
bool batadv_bla_tx(struct batadv_priv *bat_priv, struct sk_buff *skb,
unsigned short vid);
bool batadv_bla_is_backbone_gw(struct sk_buff *skb,
static inline bool batadv_bla_rx(struct batadv_priv *bat_priv,
struct sk_buff *skb, unsigned short vid,
- bool is_bcast)
+ int packet_type)
{
return false;
}
#include <uapi/linux/batadv_packet.h>
#include <uapi/linux/batman_adv.h>
+#include "bridge_loop_avoidance.h"
#include "hard-interface.h"
#include "hash.h"
#include "log.h"
}
/**
+ * batadv_mcast_forw_send_orig() - send a multicast packet to an originator
+ * @bat_priv: the bat priv with all the soft interface information
+ * @skb: the multicast packet to send
+ * @vid: the vlan identifier
+ * @orig_node: the originator to send the packet to
+ *
+ * Return: NET_XMIT_DROP in case of error or NET_XMIT_SUCCESS otherwise.
+ */
+int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv,
+ struct sk_buff *skb,
+ unsigned short vid,
+ struct batadv_orig_node *orig_node)
+{
+ /* Avoid sending multicast-in-unicast packets to other BLA
+ * gateways - they already got the frame from the LAN side
+ * we share with them.
+ * TODO: Refactor to take BLA into account earlier, to avoid
+ * reducing the mcast_fanout count.
+ */
+ if (batadv_bla_is_backbone_gw_orig(bat_priv, orig_node->orig, vid)) {
+ dev_kfree_skb(skb);
+ return NET_XMIT_SUCCESS;
+ }
+
+ return batadv_send_skb_unicast(bat_priv, skb, BATADV_UNICAST, 0,
+ orig_node, vid);
+}
+
+/**
* batadv_mcast_forw_tt() - forwards a packet to multicast listeners
* @bat_priv: the bat priv with all the soft interface information
* @skb: the multicast packet to transmit
break;
}
- batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
- orig_entry->orig_node, vid);
+ batadv_mcast_forw_send_orig(bat_priv, newskb, vid,
+ orig_entry->orig_node);
}
rcu_read_unlock();
break;
}
- batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
- orig_node, vid);
+ batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
}
rcu_read_unlock();
return ret;
break;
}
- batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
- orig_node, vid);
+ batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
}
rcu_read_unlock();
return ret;
break;
}
- batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
- orig_node, vid);
+ batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
}
rcu_read_unlock();
return ret;
break;
}
- batadv_send_skb_unicast(bat_priv, newskb, BATADV_UNICAST, 0,
- orig_node, vid);
+ batadv_mcast_forw_send_orig(bat_priv, newskb, vid, orig_node);
}
rcu_read_unlock();
return ret;
batadv_mcast_forw_mode(struct batadv_priv *bat_priv, struct sk_buff *skb,
struct batadv_orig_node **mcast_single_orig);
+int batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv,
+ struct sk_buff *skb,
+ unsigned short vid,
+ struct batadv_orig_node *orig_node);
+
int batadv_mcast_forw_send(struct batadv_priv *bat_priv, struct sk_buff *skb,
unsigned short vid);
}
static inline int
+batadv_mcast_forw_send_orig(struct batadv_priv *bat_priv,
+ struct sk_buff *skb,
+ unsigned short vid,
+ struct batadv_orig_node *orig_node)
+{
+ kfree_skb(skb);
+ return NET_XMIT_DROP;
+}
+
+static inline int
batadv_mcast_forw_send(struct batadv_priv *bat_priv, struct sk_buff *skb,
unsigned short vid)
{
vid = batadv_get_vid(skb, hdr_len);
ethhdr = (struct ethhdr *)(skb->data + hdr_len);
+ /* do not reroute multicast frames in a unicast header */
+ if (is_multicast_ether_addr(ethhdr->h_dest))
+ return true;
+
/* check if the destination client was served by this node and it is now
* roaming. In this case, it means that the node has got a ROAM_ADV
* message and that it knows the new destination in the mesh to re-route
goto dropped;
ret = batadv_send_skb_via_gw(bat_priv, skb, vid);
} else if (mcast_single_orig) {
- ret = batadv_send_skb_unicast(bat_priv, skb,
- BATADV_UNICAST, 0,
- mcast_single_orig, vid);
+ ret = batadv_mcast_forw_send_orig(bat_priv, skb, vid,
+ mcast_single_orig);
} else if (forw_mode == BATADV_FORW_SOME) {
ret = batadv_mcast_forw_send(bat_priv, skb, vid);
} else {
struct vlan_ethhdr *vhdr;
struct ethhdr *ethhdr;
unsigned short vid;
- bool is_bcast;
+ int packet_type;
batadv_bcast_packet = (struct batadv_bcast_packet *)skb->data;
- is_bcast = (batadv_bcast_packet->packet_type == BATADV_BCAST);
+ packet_type = batadv_bcast_packet->packet_type;
skb_pull_rcsum(skb, hdr_size);
skb_reset_mac_header(skb);
/* Let the bridge loop avoidance check the packet. If it will
* not handle it, we can safely push it up.
*/
- if (batadv_bla_rx(bat_priv, skb, vid, is_bcast))
+ if (batadv_bla_rx(bat_priv, skb, vid, packet_type))
goto out;
if (orig_node)
}
}
-static int __br_vlan_get_pvid(const struct net_device *dev,
- struct net_bridge_port *p, u16 *p_pvid)
+int br_vlan_get_pvid(const struct net_device *dev, u16 *p_pvid)
{
struct net_bridge_vlan_group *vg;
+ struct net_bridge_port *p;
+ ASSERT_RTNL();
+ p = br_port_get_check_rtnl(dev);
if (p)
vg = nbp_vlan_group(p);
else if (netif_is_bridge_master(dev))
*p_pvid = br_get_pvid(vg);
return 0;
}
-
-int br_vlan_get_pvid(const struct net_device *dev, u16 *p_pvid)
-{
- ASSERT_RTNL();
-
- return __br_vlan_get_pvid(dev, br_port_get_check_rtnl(dev), p_pvid);
-}
EXPORT_SYMBOL_GPL(br_vlan_get_pvid);
int br_vlan_get_pvid_rcu(const struct net_device *dev, u16 *p_pvid)
{
- return __br_vlan_get_pvid(dev, br_port_get_check_rcu(dev), p_pvid);
+ struct net_bridge_vlan_group *vg;
+ struct net_bridge_port *p;
+
+ p = br_port_get_check_rcu(dev);
+ if (p)
+ vg = nbp_vlan_group_rcu(p);
+ else if (netif_is_bridge_master(dev))
+ vg = br_vlan_group_rcu(netdev_priv(dev));
+ else
+ return -EINVAL;
+
+ *p_pvid = br_get_pvid(vg);
+ return 0;
}
EXPORT_SYMBOL_GPL(br_vlan_get_pvid_rcu);
if (!first.id_len)
first = *ppid;
else if (memcmp(&first, ppid, sizeof(*ppid)))
- return -ENODATA;
+ return -EOPNOTSUPP;
}
return err;
/* Operations to mark dst as DEAD and clean up the net device referenced
* by dst:
- * 1. put the dst under loopback interface and discard all tx/rx packets
+ * 1. put the dst under blackhole interface and discard all tx/rx packets
* on this route.
* 2. release the net_device
* This function should be called when removing routes from the fib tree
#include <net/ip_tunnels.h>
#include <linux/indirect_call_wrapper.h>
-#ifdef CONFIG_IPV6_MULTIPLE_TABLES
+#if defined(CONFIG_IPV6) && defined(CONFIG_IPV6_MULTIPLE_TABLES)
#ifdef CONFIG_IP_MULTIPLE_TABLES
#define INDIRECT_CALL_MT(f, f2, f1, ...) \
INDIRECT_CALL_INET(f, f2, f1, __VA_ARGS__)
fl4.saddr = params->ipv4_src;
fl4.fl4_sport = params->sport;
fl4.fl4_dport = params->dport;
+ fl4.flowi4_multipath_hash = 0;
if (flags & BPF_FIB_LOOKUP_DIRECT) {
u32 tbid = l3mdev_fib_table_rcu(dev) ? : RT_TABLE_MAIN;
bool indirect = BPF_MODE(orig->code) == BPF_IND;
struct bpf_insn *insn = insn_buf;
- /* We're guaranteed here that CTX is in R6. */
- *insn++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_CTX);
if (!indirect) {
*insn++ = BPF_MOV64_IMM(BPF_REG_2, orig->imm);
} else {
if (orig->imm)
*insn++ = BPF_ALU64_IMM(BPF_ADD, BPF_REG_2, orig->imm);
}
+ /* We're guaranteed here that CTX is in R6. */
+ *insn++ = BPF_MOV64_REG(BPF_REG_1, BPF_REG_CTX);
switch (BPF_SIZE(orig->code)) {
case BPF_B:
* trigger an explicit type generation here.
*/
BTF_TYPE_EMIT(struct tcp6_sock);
- if (sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP &&
+ if (sk && sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP &&
sk->sk_family == AF_INET6)
return (unsigned long)sk;
BPF_CALL_1(bpf_skc_to_tcp_sock, struct sock *, sk)
{
- if (sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP)
+ if (sk && sk_fullsock(sk) && sk->sk_protocol == IPPROTO_TCP)
return (unsigned long)sk;
return (unsigned long)NULL;
BPF_CALL_1(bpf_skc_to_tcp_timewait_sock, struct sock *, sk)
{
#ifdef CONFIG_INET
- if (sk->sk_prot == &tcp_prot && sk->sk_state == TCP_TIME_WAIT)
+ if (sk && sk->sk_prot == &tcp_prot && sk->sk_state == TCP_TIME_WAIT)
return (unsigned long)sk;
#endif
#if IS_BUILTIN(CONFIG_IPV6)
- if (sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_TIME_WAIT)
+ if (sk && sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_TIME_WAIT)
return (unsigned long)sk;
#endif
BPF_CALL_1(bpf_skc_to_tcp_request_sock, struct sock *, sk)
{
#ifdef CONFIG_INET
- if (sk->sk_prot == &tcp_prot && sk->sk_state == TCP_NEW_SYN_RECV)
+ if (sk && sk->sk_prot == &tcp_prot && sk->sk_state == TCP_NEW_SYN_RECV)
return (unsigned long)sk;
#endif
#if IS_BUILTIN(CONFIG_IPV6)
- if (sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_NEW_SYN_RECV)
+ if (sk && sk->sk_prot == &tcpv6_prot && sk->sk_state == TCP_NEW_SYN_RECV)
return (unsigned long)sk;
#endif
* trigger an explicit type generation here.
*/
BTF_TYPE_EMIT(struct udp6_sock);
- if (sk_fullsock(sk) && sk->sk_protocol == IPPROTO_UDP &&
+ if (sk && sk_fullsock(sk) && sk->sk_protocol == IPPROTO_UDP &&
sk->sk_type == SOCK_DGRAM && sk->sk_family == AF_INET6)
return (unsigned long)sk;
if (refcount_read(&net->count) == 0)
return NETNSA_NSID_NOT_ASSIGNED;
- spin_lock(&net->nsid_lock);
+ spin_lock_bh(&net->nsid_lock);
id = __peernet2id(net, peer);
if (id >= 0) {
- spin_unlock(&net->nsid_lock);
+ spin_unlock_bh(&net->nsid_lock);
return id;
}
* just been idr_remove()'d from there in cleanup_net().
*/
if (!maybe_get_net(peer)) {
- spin_unlock(&net->nsid_lock);
+ spin_unlock_bh(&net->nsid_lock);
return NETNSA_NSID_NOT_ASSIGNED;
}
id = alloc_netid(net, peer, -1);
- spin_unlock(&net->nsid_lock);
+ spin_unlock_bh(&net->nsid_lock);
put_net(peer);
if (id < 0)
for_each_net(tmp) {
int id;
- spin_lock(&tmp->nsid_lock);
+ spin_lock_bh(&tmp->nsid_lock);
id = __peernet2id(tmp, net);
if (id >= 0)
idr_remove(&tmp->netns_ids, id);
- spin_unlock(&tmp->nsid_lock);
+ spin_unlock_bh(&tmp->nsid_lock);
if (id >= 0)
rtnl_net_notifyid(tmp, RTM_DELNSID, id, 0, NULL,
GFP_KERNEL);
if (tmp == last)
break;
}
- spin_lock(&net->nsid_lock);
+ spin_lock_bh(&net->nsid_lock);
idr_destroy(&net->netns_ids);
- spin_unlock(&net->nsid_lock);
+ spin_unlock_bh(&net->nsid_lock);
}
static LLIST_HEAD(cleanup_list);
return PTR_ERR(peer);
}
- spin_lock(&net->nsid_lock);
+ spin_lock_bh(&net->nsid_lock);
if (__peernet2id(net, peer) >= 0) {
- spin_unlock(&net->nsid_lock);
+ spin_unlock_bh(&net->nsid_lock);
err = -EEXIST;
NL_SET_BAD_ATTR(extack, nla);
NL_SET_ERR_MSG(extack,
}
err = alloc_netid(net, peer, nsid);
- spin_unlock(&net->nsid_lock);
+ spin_unlock_bh(&net->nsid_lock);
if (err >= 0) {
rtnl_net_notifyid(net, RTM_NEWNSID, err, NETLINK_CB(skb).portid,
nlh, GFP_KERNEL);
{
const struct dcbnl_rtnl_ops *ops = netdev->dcbnl_ops;
struct nlattr *ieee[DCB_ATTR_IEEE_MAX + 1];
+ int prio;
int err;
if (!ops)
struct dcbnl_buffer *buffer =
nla_data(ieee[DCB_ATTR_DCB_BUFFER]);
+ for (prio = 0; prio < ARRAY_SIZE(buffer->prio2buffer); prio++) {
+ if (buffer->prio2buffer[prio] >= DCBX_MAX_BUFFERS) {
+ err = -EINVAL;
+ goto err;
+ }
+ }
+
err = ops->dcbnl_setbuffer(netdev, buffer);
if (err)
goto err;
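
Each prio2buffer entry is a user-controlled buffer index, so the loop above
validates it against DCBX_MAX_BUFFERS before the driver ever sees it;
otherwise a hostile value could index past the device's buffer tables. The
shape of the check, as a standalone sketch (validate_map is a hypothetical
helper, not kernel API):

#include <errno.h>
#include <stddef.h>

/* Reject any out-of-range index before handing the table to a driver. */
static int validate_map(const unsigned char *map, size_t n, unsigned char max)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (map[i] >= max)
			return -EINVAL;
	return 0;
}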
dsa_slave_notify(slave_dev, DSA_PORT_REGISTER);
- ret = register_netdev(slave_dev);
+ rtnl_lock();
+
+ ret = register_netdevice(slave_dev);
if (ret) {
netdev_err(master, "error %d registering interface %s\n",
ret, slave_dev->name);
+ rtnl_unlock();
goto out_phy;
}
+ ret = netdev_upper_dev_link(master, slave_dev, NULL);
+
+ rtnl_unlock();
+
+ if (ret)
+ goto out_unregister;
+
return 0;
+out_unregister:
+ unregister_netdev(slave_dev);
out_phy:
rtnl_lock();
phylink_disconnect_phy(p->dp->pl);
void dsa_slave_destroy(struct net_device *slave_dev)
{
+ struct net_device *master = dsa_slave_to_master(slave_dev);
struct dsa_port *dp = dsa_slave_to_port(slave_dev);
struct dsa_slave_priv *p = netdev_priv(slave_dev);
netif_carrier_off(slave_dev);
rtnl_lock();
+ netdev_upper_dev_unlink(master, slave_dev);
+ unregister_netdevice(slave_dev);
phylink_disconnect_phy(dp->pl);
rtnl_unlock();
dsa_slave_notify(slave_dev, DSA_PORT_UNREGISTER);
- unregister_netdev(slave_dev);
phylink_destroy(dp->pl);
gro_cells_destroy(&p->gcells);
free_percpu(p->stats64);
packing(injection, &qos_class, 19, 17, OCELOT_TAG_LEN, PACK, 0);
if (ocelot->ptp && (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)) {
+ struct sk_buff *clone = DSA_SKB_CB(skb)->clone;
+
rew_op = ocelot_port->ptp_cmd;
- if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP) {
- rew_op |= (ocelot_port->ts_id % 4) << 3;
- ocelot_port->ts_id++;
- }
+ /* Retrieve timestamp ID populated inside skb->cb[0] of the
+ * clone by ocelot_port_add_txtstamp_skb
+ */
+ if (ocelot_port->ptp_cmd == IFH_REW_OP_TWO_STEP_PTP)
+ rew_op |= clone->cb[0] << 3;
packing(injection, &rew_op, 125, 117, OCELOT_TAG_LEN, PACK, 0);
}
reply_len = ret + ethnl_reply_header_size();
rskb = ethnl_reply_init(reply_len, req_info.dev,
- ETHTOOL_MSG_TUNNEL_INFO_GET,
+ ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY,
ETHTOOL_A_TUNNEL_INFO_HEADER,
info, &reply_payload);
if (!rskb) {
goto cont;
ehdr = ethnl_dump_put(skb, cb,
- ETHTOOL_MSG_TUNNEL_INFO_GET);
+ ETHTOOL_MSG_TUNNEL_INFO_GET_REPLY);
if (!ehdr) {
ret = -EMSGSIZE;
goto out;
proto = nla_get_u8(data[IFLA_HSR_PROTOCOL]);
if (proto >= HSR_PROTOCOL_MAX) {
- NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol\n");
+ NL_SET_ERR_MSG_MOD(extack, "Unsupported protocol");
return -EINVAL;
}
proto_version = HSR_V0;
} else {
if (proto == HSR_PROTOCOL_PRP) {
- NL_SET_ERR_MSG_MOD(extack, "PRP version unsupported\n");
+ NL_SET_ERR_MSG_MOD(extack, "PRP version unsupported");
return -EINVAL;
}
proto_version = nla_get_u8(data[IFLA_HSR_VERSION]);
if (proto_version > HSR_V1) {
NL_SET_ERR_MSG_MOD(extack,
- "Only HSR version 0/1 supported\n");
+ "Only HSR version 0/1 supported");
return -EINVAL;
}
}
fl4.flowi4_tun_key.tun_id = 0;
fl4.flowi4_flags = 0;
fl4.flowi4_uid = sock_net_uid(net, NULL);
+ fl4.flowi4_multipath_hash = 0;
no_addr = idev->ifa_list == NULL;
}
EXPORT_SYMBOL_GPL(inet_diag_msg_attrs_fill);
-static void inet_diag_parse_attrs(const struct nlmsghdr *nlh, int hdrlen,
- struct nlattr **req_nlas)
+static int inet_diag_parse_attrs(const struct nlmsghdr *nlh, int hdrlen,
+ struct nlattr **req_nlas)
{
struct nlattr *nla;
int remaining;
nlmsg_for_each_attr(nla, nlh, hdrlen, remaining) {
int type = nla_type(nla);
+ if (type == INET_DIAG_REQ_PROTOCOL && nla_len(nla) != sizeof(u32))
+ return -EINVAL;
+
if (type < __INET_DIAG_REQ_MAX)
req_nlas[type] = nla;
}
+ return 0;
}
static int inet_diag_get_protocol(const struct inet_diag_req_v2 *req,
int err, protocol;
memset(&dump_data, 0, sizeof(dump_data));
- inet_diag_parse_attrs(nlh, hdrlen, dump_data.req_nlas);
+ err = inet_diag_parse_attrs(nlh, hdrlen, dump_data.req_nlas);
+ if (err)
+ return err;
+
protocol = inet_diag_get_protocol(req, &dump_data);
handler = inet_diag_lock_handler(protocol);
if (!cb_data)
return -ENOMEM;
- inet_diag_parse_attrs(nlh, hdrlen, cb_data->req_nlas);
-
+ err = inet_diag_parse_attrs(nlh, hdrlen, cb_data->req_nlas);
+ if (err) {
+ kfree(cb_data);
+ return err;
+ }
nla = cb_data->inet_diag_nla_bc;
if (nla) {
err = inet_diag_bc_audit(nla, skb);
#include <net/icmp.h>
#include <net/checksum.h>
#include <net/inetpeer.h>
+#include <net/inet_ecn.h>
#include <net/lwtunnel.h>
#include <linux/bpf-cgroup.h>
#include <linux/igmp.h>
if (IS_ERR(rt))
return;
- inet_sk(sk)->tos = arg->tos;
+ inet_sk(sk)->tos = arg->tos & ~INET_ECN_MASK;
sk->sk_protocol = ip_hdr(skb)->protocol;
sk->sk_bound_dev_if = arg->bound_dev_if;
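
The two ECN bits occupy the low two bits of the TOS byte (INET_ECN_MASK is 3),
so masking them out keeps the reply socket from echoing the sender's ECN
marking as though it were part of the DSCP. A standalone toy, for illustration
only:

#include <stdio.h>

#define INET_ECN_MASK 3	/* low two TOS bits carry ECN, not DSCP */

int main(void)
{
	unsigned char tos = 0x07;	/* DSCP 1, ECN CE */

	/* 0x07 & ~3 == 0x04: the DSCP survives, the ECN bits do not */
	printf("stored tos: 0x%02x\n", tos & ~INET_ECN_MASK);
	return 0;
}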
attr = tb[LWTUNNEL_IP_OPT_VXLAN_GBP];
md->gbp = nla_get_u32(attr);
+ md->gbp &= VXLAN_GBP_MASK;
info->key.tun_flags |= TUNNEL_VXLAN_OPT;
}
neigh_event_send(n, NULL);
} else {
if (fib_lookup(net, fl4, &res, 0) == 0) {
- struct fib_nh_common *nhc = FIB_RES_NHC(res);
+ struct fib_nh_common *nhc;
+ fib_select_path(net, &res, fl4, skb);
+ nhc = FIB_RES_NHC(res);
update_or_create_fnhe(nhc, fl4->daddr, new_gw,
0, false,
jiffies + ip_rt_gc_timeout);
static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
{
struct dst_entry *dst = &rt->dst;
+ struct net *net = dev_net(dst->dev);
u32 old_mtu = ipv4_mtu(dst);
struct fib_result res;
bool lock = false;
return;
rcu_read_lock();
- if (fib_lookup(dev_net(dst->dev), fl4, &res, 0) == 0) {
- struct fib_nh_common *nhc = FIB_RES_NHC(res);
+ if (fib_lookup(net, fl4, &res, 0) == 0) {
+ struct fib_nh_common *nhc;
+ fib_select_path(net, &res, fl4, NULL);
+ nhc = FIB_RES_NHC(res);
update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
jiffies + ip_rt_mtu_expires);
}
fl4.daddr = daddr;
fl4.saddr = saddr;
fl4.flowi4_uid = sock_net_uid(net, NULL);
+ fl4.flowi4_multipath_hash = 0;
if (fib4_rules_early_flow_dissect(net, skb, &fl4, &_flkeys)) {
flkeys = &_flkeys;
fib_select_path(net, res, fl4, skb);
dev_out = FIB_RES_DEV(*res);
- fl4->flowi4_oif = dev_out->ifindex;
-
make_route:
rth = __mkroute_output(res, fl4, orig_oif, dev_out, flags);
config IPV6_SEG6_HMAC
bool "IPv6: Segment Routing HMAC support"
depends on IPV6
+ select CRYPTO
select CRYPTO_HMAC
select CRYPTO_SHA1
select CRYPTO_SHA256
/* Need to own table->tb6_lock */
int fib6_del(struct fib6_info *rt, struct nl_info *info)
{
- struct fib6_node *fn = rcu_dereference_protected(rt->fib6_node,
- lockdep_is_held(&rt->fib6_table->tb6_lock));
- struct fib6_table *table = rt->fib6_table;
struct net *net = info->nl_net;
struct fib6_info __rcu **rtp;
struct fib6_info __rcu **rtp_next;
+ struct fib6_table *table;
+ struct fib6_node *fn;
+
+ if (rt == net->ipv6.fib6_null_entry)
+ return -ENOENT;
- if (!fn || rt == net->ipv6.fib6_null_entry)
+ table = rt->fib6_table;
+ fn = rcu_dereference_protected(rt->fib6_node,
+ lockdep_is_held(&table->tb6_lock));
+ if (!fn)
return -ENOENT;
WARN_ON(!(fn->fn_flags & RTN_RTINFO));
.fc_nlinfo.nl_net = net,
};
- cfg.fc_table = l3mdev_fib_table(dev) ? : RT6_TABLE_INFO,
+ cfg.fc_table = l3mdev_fib_table(dev) ? : RT6_TABLE_INFO;
cfg.fc_dst = *prefix;
cfg.fc_gateway = *gwaddr;
if (rate->idx < 0 || !rate->count)
return -1;
- if (rate->flags & IEEE80211_TX_RC_80_MHZ_WIDTH)
+ if (rate->flags & IEEE80211_TX_RC_160_MHZ_WIDTH)
+ stat->bw = RATE_INFO_BW_160;
+ else if (rate->flags & IEEE80211_TX_RC_80_MHZ_WIDTH)
stat->bw = RATE_INFO_BW_80;
else if (rate->flags & IEEE80211_TX_RC_40_MHZ_WIDTH)
stat->bw = RATE_INFO_BW_40;
* This will not be very accurate, but much better than simply
* assuming un-aggregated tx in all cases.
*/
- if (duration > 400) /* <= VHT20 MCS2 1S */
+ if (duration > 400 * 1024) /* <= VHT20 MCS2 1S */
agg_shift = 1;
- else if (duration > 250) /* <= VHT20 MCS3 1S or MCS1 2S */
+ else if (duration > 250 * 1024) /* <= VHT20 MCS3 1S or MCS1 2S */
agg_shift = 2;
- else if (duration > 150) /* <= VHT20 MCS5 1S or MCS3 2S */
+ else if (duration > 150 * 1024) /* <= VHT20 MCS5 1S or MCS2 2S */
agg_shift = 3;
- else
+ else if (duration > 70 * 1024) /* <= VHT20 MCS5 2S */
agg_shift = 4;
+ else if (stat.encoding != RX_ENC_HE ||
+ duration > 20 * 1024) /* <= HE40 MCS6 2S */
+ agg_shift = 5;
+ else
+ agg_shift = 6;
duration *= len;
duration /= AVG_PKT_SIZE;
duration /= 1024;
+ duration += (overhead >> agg_shift);
- return duration + (overhead >> agg_shift);
+ return max_t(u32, duration, 4);
}
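
The corrected thresholds compare against the same fixed-point unit as the
accumulated duration (the code divides by 1024 only afterwards), which is why
each constant gained a "* 1024" factor. A minimal standalone sketch of the
selection logic under that assumption (agg_shift_for is illustrative, not the
mac80211 function):

#include <stdbool.h>
#include <stdint.h>

static int agg_shift_for(uint32_t duration, bool is_he)
{
	if (duration > 400 * 1024)	/* <= VHT20 MCS2 1S */
		return 1;
	if (duration > 250 * 1024)	/* <= VHT20 MCS3 1S or MCS1 2S */
		return 2;
	if (duration > 150 * 1024)	/* <= VHT20 MCS5 1S or MCS2 2S */
		return 3;
	if (duration > 70 * 1024)	/* <= VHT20 MCS5 2S */
		return 4;
	if (!is_he || duration > 20 * 1024)	/* <= HE40 MCS6 2S */
		return 5;
	return 6;
}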
if (!conf)
struct ieee80211_supported_band *sband;
struct cfg80211_chan_def chandef;
bool is_6ghz = cbss->channel->band == NL80211_BAND_6GHZ;
+ bool is_5ghz = cbss->channel->band == NL80211_BAND_5GHZ;
struct ieee80211_bss *bss = (void *)cbss->priv;
int ret;
u32 i;
ifmgd->flags |= IEEE80211_STA_DISABLE_HE;
}
- if (!sband->vht_cap.vht_supported && !is_6ghz) {
+ if (!sband->vht_cap.vht_supported && is_5ghz) {
ifmgd->flags |= IEEE80211_STA_DISABLE_VHT;
ifmgd->flags |= IEEE80211_STA_DISABLE_HE;
}
else if (status->bw == RATE_INFO_BW_5)
channel_flags |= IEEE80211_CHAN_QUARTER;
- if (status->band == NL80211_BAND_5GHZ)
+ if (status->band == NL80211_BAND_5GHZ ||
+ status->band == NL80211_BAND_6GHZ)
channel_flags |= IEEE80211_CHAN_OFDM | IEEE80211_CHAN_5GHZ;
else if (status->encoding != RX_ENC_LEGACY)
channel_flags |= IEEE80211_CHAN_DYN | IEEE80211_CHAN_2GHZ;
he_chandef.center_freq1 =
ieee80211_channel_to_frequency(he_6ghz_oper->ccfs0,
NL80211_BAND_6GHZ);
- he_chandef.center_freq2 =
- ieee80211_channel_to_frequency(he_6ghz_oper->ccfs1,
- NL80211_BAND_6GHZ);
+ if (support_80_80 || support_160)
+ he_chandef.center_freq2 =
+ ieee80211_channel_to_frequency(he_6ghz_oper->ccfs1,
+ NL80211_BAND_6GHZ);
}
if (!cfg80211_chandef_valid(&he_chandef)) {
/* take some capabilities as-is */
cap_info = le32_to_cpu(vht_cap_ie->vht_cap_info);
vht_cap->cap = cap_info;
- vht_cap->cap &= IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_3895 |
- IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_7991 |
- IEEE80211_VHT_CAP_MAX_MPDU_LENGTH_11454 |
- IEEE80211_VHT_CAP_RXLDPC |
+ vht_cap->cap &= IEEE80211_VHT_CAP_RXLDPC |
IEEE80211_VHT_CAP_VHT_TXOP_PS |
IEEE80211_VHT_CAP_HTC_VHT |
IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK |
IEEE80211_VHT_CAP_RX_ANTENNA_PATTERN |
IEEE80211_VHT_CAP_TX_ANTENNA_PATTERN;
+ vht_cap->cap |= min_t(u32, cap_info & IEEE80211_VHT_CAP_MAX_MPDU_MASK,
+ own_cap.cap & IEEE80211_VHT_CAP_MAX_MPDU_MASK);
+
/* and some based on our own capabilities */
switch (own_cap.cap & IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_MASK) {
case IEEE80211_VHT_CAP_SUPP_CHAN_WIDTH_160MHZ:
if (res)
goto err_tx;
- ieee802154_xmit_complete(&local->hw, skb, false);
-
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
+ ieee802154_xmit_complete(&local->hw, skb, false);
+
return;
err_tx:
/* async is priority, otherwise sync is fallback */
if (local->ops->xmit_async) {
+ unsigned int len = skb->len;
+
ret = drv_xmit_async(local, skb);
if (ret) {
ieee802154_wake_queue(&local->hw);
}
dev->stats.tx_packets++;
- dev->stats.tx_bytes += skb->len;
+ dev->stats.tx_bytes += len;
} else {
local->tx_skb = skb;
queue_work(local->workqueue, &local->tx_work);
return a->port == b->port;
}
+static bool address_zero(const struct mptcp_addr_info *addr)
+{
+ struct mptcp_addr_info zero;
+
+ memset(&zero, 0, sizeof(zero));
+ zero.family = addr->family;
+
+ return addresses_equal(addr, &zero, false);
+}
+
static void local_address(const struct sock_common *skc,
struct mptcp_addr_info *addr)
{
static void mptcp_pm_create_subflow_or_signal_addr(struct mptcp_sock *msk)
{
+ struct mptcp_addr_info remote = { 0 };
struct sock *sk = (struct sock *)msk;
struct mptcp_pm_addr_entry *local;
- struct mptcp_addr_info remote;
struct pm_nl_pernet *pernet;
pernet = net_generic(sock_net((struct sock *)msk), pm_nl_pernet_id);
* addr
*/
local_address((struct sock_common *)msk, &msk_local);
- local_address((struct sock_common *)msk, &skc_local);
+ local_address((struct sock_common *)skc, &skc_local);
if (addresses_equal(&msk_local, &skc_local, false))
return 0;
+ if (address_zero(&skc_local))
+ return 0;
+
pernet = net_generic(sock_net((struct sock *)msk), pm_nl_pernet_id);
rcu_read_lock();
return ret;
/* address not found, add to local list */
- entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+ entry = kmalloc(sizeof(*entry), GFP_ATOMIC);
if (!entry)
return -ENOMEM;
struct mptcp_sock *msk = mptcp_sk(sk);
struct mptcp_subflow_context *subflow;
struct sockaddr_storage addr;
+ int remote_id = remote->id;
int local_id = loc->id;
struct socket *sf;
struct sock *ssk;
goto failed;
mptcp_crypto_key_sha(subflow->remote_key, &remote_token, NULL);
- pr_debug("msk=%p remote_token=%u local_id=%d", msk, remote_token,
- local_id);
+ pr_debug("msk=%p remote_token=%u local_id=%d remote_id=%d", msk,
+ remote_token, local_id, remote_id);
subflow->remote_token = remote_token;
subflow->local_id = local_id;
+ subflow->remote_id = remote_id;
subflow->request_join = 1;
subflow->request_bkup = 1;
mptcp_info2sockaddr(remote, &addr);
new_ctx->fully_established = 1;
new_ctx->backup = subflow_req->backup;
new_ctx->local_id = subflow_req->local_id;
+ new_ctx->remote_id = subflow_req->remote_id;
new_ctx->token = subflow_req->token;
new_ctx->thmac = subflow_req->thmac;
}
}
struct ctnetlink_filter {
- u_int32_t cta_flags;
u8 family;
u_int32_t orig_flags;
struct nf_conntrack_zone *zone,
u_int32_t flags);
-/* applied on filters */
-#define CTA_FILTER_F_CTA_MARK (1 << 0)
-#define CTA_FILTER_F_CTA_MARK_MASK (1 << 1)
-
static struct ctnetlink_filter *
ctnetlink_alloc_filter(const struct nlattr * const cda[], u8 family)
{
#ifdef CONFIG_NF_CONNTRACK_MARK
if (cda[CTA_MARK]) {
filter->mark.val = ntohl(nla_get_be32(cda[CTA_MARK]));
- filter->cta_flags |= CTA_FILTER_FLAG(CTA_MARK);
-
- if (cda[CTA_MARK_MASK]) {
+ if (cda[CTA_MARK_MASK])
filter->mark.mask = ntohl(nla_get_be32(cda[CTA_MARK_MASK]));
- filter->cta_flags |= CTA_FILTER_FLAG(CTA_MARK_MASK);
- } else {
+ else
filter->mark.mask = 0xffffffff;
- }
} else if (cda[CTA_MARK_MASK]) {
err = -EINVAL;
goto err_filter;
}
#ifdef CONFIG_NF_CONNTRACK_MARK
- if ((filter->cta_flags & CTA_FILTER_FLAG(CTA_MARK_MASK)) &&
- (ct->mark & filter->mark.mask) != filter->mark.val)
- goto ignore_entry;
- else if ((filter->cta_flags & CTA_FILTER_FLAG(CTA_MARK)) &&
- ct->mark != filter->mark.val)
+ if ((ct->mark & filter->mark.mask) != filter->mark.val)
goto ignore_entry;
#endif
if (err < 0)
return err;
-
+ if (l3num != NFPROTO_IPV4 && l3num != NFPROTO_IPV6)
+ return -EOPNOTSUPP;
tuple->src.l3num = l3num;
if (flags & CTA_FILTER_FLAG(CTA_IP_DST) ||
int err;
err = nf_ct_netns_do_get(net, NFPROTO_IPV4);
+#if IS_ENABLED(CONFIG_IPV6)
if (err < 0)
goto err1;
err = nf_ct_netns_do_get(net, NFPROTO_IPV6);
err2:
nf_ct_netns_put(net, NFPROTO_IPV4);
err1:
+#endif
return err;
}
return -1;
}
+struct nftnl_skb_parms {
+ bool report;
+};
+#define NFT_CB(skb) (*(struct nftnl_skb_parms*)&((skb)->cb))
+
+static void nft_notify_enqueue(struct sk_buff *skb, bool report,
+ struct list_head *notify_list)
+{
+ NFT_CB(skb).report = report;
+ list_add_tail(&skb->list, notify_list);
+}
+
static void nf_tables_table_notify(const struct nft_ctx *ctx, int event)
{
struct sk_buff *skb;
goto err;
}
- nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES,
- ctx->report, GFP_KERNEL);
+ nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
return;
err:
nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS);
goto err;
}
- nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES,
- ctx->report, GFP_KERNEL);
+ nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
return;
err:
nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS);
goto err;
}
- nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES,
- ctx->report, GFP_KERNEL);
+ nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
return;
err:
nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS);
goto err;
}
- nfnetlink_send(skb, ctx->net, portid, NFNLGRP_NFTABLES, ctx->report,
- gfp_flags);
+ nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
return;
err:
nfnetlink_set_err(ctx->net, portid, NFNLGRP_NFTABLES, -ENOBUFS);
goto err;
}
- nfnetlink_send(skb, net, portid, NFNLGRP_NFTABLES, ctx->report,
- GFP_KERNEL);
+ nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
return;
err:
nfnetlink_set_err(net, portid, NFNLGRP_NFTABLES, -ENOBUFS);
goto err;
}
- nfnetlink_send(skb, net, portid, NFNLGRP_NFTABLES, report, gfp);
+ nft_notify_enqueue(skb, report, &net->nft.notify_list);
return;
err:
nfnetlink_set_err(net, portid, NFNLGRP_NFTABLES, -ENOBUFS);
goto err;
}
- nfnetlink_send(skb, ctx->net, ctx->portid, NFNLGRP_NFTABLES,
- ctx->report, GFP_KERNEL);
+ nft_notify_enqueue(skb, ctx->report, &ctx->net->nft.notify_list);
return;
err:
nfnetlink_set_err(ctx->net, ctx->portid, NFNLGRP_NFTABLES, -ENOBUFS);
mutex_unlock(&net->nft.commit_mutex);
}
+static void nft_commit_notify(struct net *net, u32 portid)
+{
+ struct sk_buff *batch_skb = NULL, *nskb, *skb;
+ unsigned char *data;
+ int len;
+
+ list_for_each_entry_safe(skb, nskb, &net->nft.notify_list, list) {
+ if (!batch_skb) {
+new_batch:
+ batch_skb = skb;
+ len = NLMSG_GOODSIZE - skb->len;
+ list_del(&skb->list);
+ continue;
+ }
+ len -= skb->len;
+ if (len > 0 && NFT_CB(skb).report == NFT_CB(batch_skb).report) {
+ data = skb_put(batch_skb, skb->len);
+ memcpy(data, skb->data, skb->len);
+ list_del(&skb->list);
+ kfree_skb(skb);
+ continue;
+ }
+ nfnetlink_send(batch_skb, net, portid, NFNLGRP_NFTABLES,
+ NFT_CB(batch_skb).report, GFP_KERNEL);
+ goto new_batch;
+ }
+
+ if (batch_skb) {
+ nfnetlink_send(batch_skb, net, portid, NFNLGRP_NFTABLES,
+ NFT_CB(batch_skb).report, GFP_KERNEL);
+ }
+
+ WARN_ON_ONCE(!list_empty(&net->nft.notify_list));
+}
+
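
nft_commit_notify() above coalesces queued notification skbs into payloads of
at most NLMSG_GOODSIZE, starting a fresh batch whenever the next message would
exceed the budget or carries a different report flag. The grouping rule,
reduced to a self-contained sketch with hypothetical types:

#include <stddef.h>

struct msg {
	size_t len;
	int report;
};

/* Number of consecutive messages, starting at i, that fit into one
 * batch of at most budget bytes with a matching report flag. */
static size_t batch_span(const struct msg *m, size_t n, size_t i, size_t budget)
{
	size_t used = m[i].len;
	size_t j = i + 1;

	while (j < n && m[j].report == m[i].report && used + m[j].len <= budget)
		used += m[j++].len;
	return j - i;
}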
static int nf_tables_commit(struct net *net, struct sk_buff *skb)
{
struct nft_trans *trans, *next;
}
}
+ nft_commit_notify(net, NETLINK_CB(skb).portid);
nf_tables_gen_notify(net, skb, NFT_MSG_NEWGEN);
nf_tables_commit_release(net);
INIT_LIST_HEAD(&net->nft.tables);
INIT_LIST_HEAD(&net->nft.commit_list);
INIT_LIST_HEAD(&net->nft.module_list);
+ INIT_LIST_HEAD(&net->nft.notify_list);
mutex_init(&net->nft.commit_mutex);
net->nft.base_seq = 1;
net->nft.validate_state = NFT_VALIDATE_SKIP;
mutex_unlock(&net->nft.commit_mutex);
WARN_ON_ONCE(!list_empty(&net->nft.tables));
WARN_ON_ONCE(!list_empty(&net->nft.module_list));
+ WARN_ON_ONCE(!list_empty(&net->nft.notify_list));
}
static struct pernet_operations nf_tables_net_ops = {
switch (key) {
case NFT_META_SKUID:
- *dest = from_kuid_munged(&init_user_ns,
+ *dest = from_kuid_munged(sock_net(sk)->user_ns,
sock->file->f_cred->fsuid);
break;
case NFT_META_SKGID:
- *dest = from_kgid_munged(&init_user_ns,
+ *dest = from_kgid_munged(sock_net(sk)->user_ns,
sock->file->f_cred->fsgid);
break;
default:
{
struct qrtr_hdr_v1 *hdr;
size_t len = skb->len;
- int rc = -ENODEV;
- int confirm_rx;
+ int rc, confirm_rx;
confirm_rx = qrtr_tx_wait(node, to->sq_node, to->sq_port, type);
if (confirm_rx < 0) {
hdr->size = cpu_to_le32(len);
hdr->confirm_rx = !!confirm_rx;
- skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));
-
- mutex_lock(&node->ep_lock);
- if (node->ep)
- rc = node->ep->xmit(node->ep, skb);
- else
- kfree_skb(skb);
- mutex_unlock(&node->ep_lock);
+ rc = skb_put_padto(skb, ALIGN(len, 4) + sizeof(*hdr));
+ if (!rc) {
+ mutex_lock(&node->ep_lock);
+ rc = -ENODEV;
+ if (node->ep)
+ rc = node->ep->xmit(node->ep, skb);
+ else
+ kfree_skb(skb);
+ mutex_unlock(&node->ep_lock);
+ }
/* Need to ensure that a subsequent message carries the otherwise lost
* confirm_rx flag if we dropped this one */
if (rc && confirm_rx)
kfree_rcu(p, rcu);
}
+static int load_metalist(struct nlattr **tb, bool rtnl_held)
+{
+ int i;
+
+ for (i = 1; i < max_metacnt; i++) {
+ if (tb[i]) {
+ void *val = nla_data(tb[i]);
+ int len = nla_len(tb[i]);
+ int rc;
+
+ rc = load_metaops_and_vet(i, val, len, rtnl_held);
+ if (rc != 0)
+ return rc;
+ }
+ }
+
+ return 0;
+}
+
static int populate_metalist(struct tcf_ife_info *ife, struct nlattr **tb,
bool exists, bool rtnl_held)
{
val = nla_data(tb[i]);
len = nla_len(tb[i]);
- rc = load_metaops_and_vet(i, val, len, rtnl_held);
- if (rc != 0)
- return rc;
-
rc = add_metainfo(ife, i, val, len, exists);
if (rc)
return rc;
if (!p)
return -ENOMEM;
+ if (tb[TCA_IFE_METALST]) {
+ err = nla_parse_nested_deprecated(tb2, IFE_META_MAX,
+ tb[TCA_IFE_METALST], NULL,
+ NULL);
+ if (err) {
+ kfree(p);
+ return err;
+ }
+ err = load_metalist(tb2, rtnl_held);
+ if (err) {
+ kfree(p);
+ return err;
+ }
+ }
+
index = parm->index;
err = tcf_idr_check_alloc(tn, &index, a, bind);
if (err < 0) {
}
if (tb[TCA_IFE_METALST]) {
- err = nla_parse_nested_deprecated(tb2, IFE_META_MAX,
- tb[TCA_IFE_METALST], NULL,
- NULL);
- if (err)
- goto metadata_parse_err;
err = populate_metalist(ife, tb2, exists, rtnl_held);
if (err)
goto metadata_parse_err;
-
} else {
/* if no passed metadata allow list or passed allow-all
* then here we process by adding as many supported metadatum
struct vxlan_metadata *md = dst;
md->gbp = nla_get_u32(tb[TCA_TUNNEL_KEY_ENC_OPT_VXLAN_GBP]);
+ md->gbp &= VXLAN_GBP_MASK;
}
return sizeof(struct vxlan_metadata);
return -EINVAL;
}
- if (tb[TCA_FLOWER_KEY_ENC_OPT_VXLAN_GBP])
+ if (tb[TCA_FLOWER_KEY_ENC_OPT_VXLAN_GBP]) {
md->gbp = nla_get_u32(tb[TCA_FLOWER_KEY_ENC_OPT_VXLAN_GBP]);
+ md->gbp &= VXLAN_GBP_MASK;
+ }
return sizeof(*md);
}
}
if (tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX]) {
nla = tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX];
+ memset(&md->u, 0x00, sizeof(md->u));
md->u.index = nla_get_be32(nla);
}
} else if (md->version == 2) {
static void qdisc_deactivate(struct Qdisc *qdisc)
{
- bool nolock = qdisc->flags & TCQ_F_NOLOCK;
-
if (qdisc->flags & TCQ_F_BUILTIN)
return;
- if (test_bit(__QDISC_STATE_DEACTIVATED, &qdisc->state))
- return;
-
- if (nolock)
- spin_lock_bh(&qdisc->seqlock);
- spin_lock_bh(qdisc_lock(qdisc));
set_bit(__QDISC_STATE_DEACTIVATED, &qdisc->state);
-
- qdisc_reset(qdisc);
-
- spin_unlock_bh(qdisc_lock(qdisc));
- if (nolock)
- spin_unlock_bh(&qdisc->seqlock);
}
static void dev_deactivate_queue(struct net_device *dev,
}
}
+static void dev_reset_queue(struct net_device *dev,
+ struct netdev_queue *dev_queue,
+ void *_unused)
+{
+ struct Qdisc *qdisc;
+ bool nolock;
+
+ qdisc = dev_queue->qdisc_sleeping;
+ if (!qdisc)
+ return;
+
+ nolock = qdisc->flags & TCQ_F_NOLOCK;
+
+ if (nolock)
+ spin_lock_bh(&qdisc->seqlock);
+ spin_lock_bh(qdisc_lock(qdisc));
+
+ qdisc_reset(qdisc);
+
+ spin_unlock_bh(qdisc_lock(qdisc));
+ if (nolock)
+ spin_unlock_bh(&qdisc->seqlock);
+}
+
static bool some_qdisc_is_busy(struct net_device *dev)
{
unsigned int i;
dev_watchdog_down(dev);
}
- /* Wait for outstanding qdisc-less dev_queue_xmit calls.
+ /* Wait for outstanding qdisc-less dev_queue_xmit calls or
+ * outstanding qdisc enqueuing calls.
* This is avoided if all devices are in dismantle phase :
* Caller will call synchronize_net() for us
*/
synchronize_net();
+ list_for_each_entry(dev, head, close_list) {
+ netdev_for_each_tx_queue(dev, dev_reset_queue, NULL);
+
+ if (dev_ingress_queue(dev))
+ dev_reset_queue(dev, dev_ingress_queue(dev), NULL);
+ }
+
/* Wait for outstanding qdisc_run calls. */
list_for_each_entry(dev, head, close_list) {
while (some_qdisc_is_busy(dev)) {
[TCA_TAPRIO_ATTR_TXTIME_DELAY] = { .type = NLA_U32 },
};
-static int fill_sched_entry(struct nlattr **tb, struct sched_entry *entry,
+static int fill_sched_entry(struct taprio_sched *q, struct nlattr **tb,
+ struct sched_entry *entry,
struct netlink_ext_ack *extack)
{
+ int min_duration = length_to_duration(q, ETH_ZLEN);
u32 interval = 0;
if (tb[TCA_TAPRIO_SCHED_ENTRY_CMD])
interval = nla_get_u32(
tb[TCA_TAPRIO_SCHED_ENTRY_INTERVAL]);
- if (interval == 0) {
+ /* The interval should allow at least the minimum ethernet
+ * frame to go out.
+ */
+ if (interval < min_duration) {
NL_SET_ERR_MSG(extack, "Invalid interval for schedule entry");
return -EINVAL;
}
return 0;
}
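
length_to_duration() turns ETH_ZLEN (a 60-byte minimum frame) into transmit
time at the port's current rate, so the check rejects gate intervals too short
to ever release a frame. As rough arithmetic (frame_ns is an illustrative
helper, not the taprio API):

#include <stdint.h>
#include <stdio.h>

/* Transmit time of a frame, in nanoseconds, at a given link speed. */
static uint64_t frame_ns(uint32_t bytes, uint32_t mbps)
{
	return (uint64_t)bytes * 8 * 1000 / mbps;
}

int main(void)
{
	/* A 60-byte minimum frame at 1 Gb/s needs 480 ns, so any gate
	 * interval below that can never let a frame through. */
	printf("%llu ns\n", (unsigned long long)frame_ns(60, 1000));
	return 0;
}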
-static int parse_sched_entry(struct nlattr *n, struct sched_entry *entry,
- int index, struct netlink_ext_ack *extack)
+static int parse_sched_entry(struct taprio_sched *q, struct nlattr *n,
+ struct sched_entry *entry, int index,
+ struct netlink_ext_ack *extack)
{
struct nlattr *tb[TCA_TAPRIO_SCHED_ENTRY_MAX + 1] = { };
int err;
entry->index = index;
- return fill_sched_entry(tb, entry, extack);
+ return fill_sched_entry(q, tb, entry, extack);
}
-static int parse_sched_list(struct nlattr *list,
+static int parse_sched_list(struct taprio_sched *q, struct nlattr *list,
struct sched_gate_list *sched,
struct netlink_ext_ack *extack)
{
return -ENOMEM;
}
- err = parse_sched_entry(n, entry, i, extack);
+ err = parse_sched_entry(q, n, entry, i, extack);
if (err < 0) {
kfree(entry);
return err;
return i;
}
-static int parse_taprio_schedule(struct nlattr **tb,
+static int parse_taprio_schedule(struct taprio_sched *q, struct nlattr **tb,
struct sched_gate_list *new,
struct netlink_ext_ack *extack)
{
new->cycle_time = nla_get_s64(tb[TCA_TAPRIO_ATTR_SCHED_CYCLE_TIME]);
if (tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST])
- err = parse_sched_list(
- tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST], new, extack);
+ err = parse_sched_list(q, tb[TCA_TAPRIO_ATTR_SCHED_ENTRY_LIST],
+ new, extack);
if (err < 0)
return err;
goto free_sched;
}
- err = parse_taprio_schedule(tb, new_admin, extack);
+ err = parse_taprio_schedule(q, tb, new_admin, extack);
if (err < 0)
goto free_sched;
static inline void sctp_copy_descendant(struct sock *sk_to,
const struct sock *sk_from)
{
- int ancestor_size = sizeof(struct inet_sock) +
- sizeof(struct sctp_sock) -
- offsetof(struct sctp_sock, pd_lobby);
-
- if (sk_from->sk_family == PF_INET6)
- ancestor_size += sizeof(struct ipv6_pinfo);
+ size_t ancestor_size = sizeof(struct inet_sock);
+ ancestor_size += sk_from->sk_prot->obj_size;
+ ancestor_size -= offsetof(struct sctp_sock, pd_lobby);
__inet_sk_copy_descendant(sk_to, sk_from, ancestor_size);
}
static void svc_flush_bvec(const struct bio_vec *bvec, size_t size, size_t seek)
{
struct bvec_iter bi = {
- .bi_size = size,
+ .bi_size = size + seek,
};
struct bio_vec bv;
return NULL;
}
-static void tipc_group_add_to_tree(struct tipc_group *grp,
- struct tipc_member *m)
+static int tipc_group_add_to_tree(struct tipc_group *grp,
+ struct tipc_member *m)
{
u64 nkey, key = (u64)m->node << 32 | m->port;
struct rb_node **n, *parent = NULL;
else if (key > nkey)
n = &(*n)->rb_right;
else
- return;
+ return -EEXIST;
}
rb_link_node(&m->tree_node, parent, n);
rb_insert_color(&m->tree_node, &grp->members);
+ return 0;
}
static struct tipc_member *tipc_group_create_member(struct tipc_group *grp,
u32 instance, int state)
{
struct tipc_member *m;
+ int ret;
m = kzalloc(sizeof(*m), GFP_ATOMIC);
if (!m)
m->port = port;
m->instance = instance;
m->bc_acked = grp->bc_snd_nxt - 1;
+ ret = tipc_group_add_to_tree(grp, m);
+ if (ret < 0) {
+ kfree(m);
+ return NULL;
+ }
grp->member_cnt++;
- tipc_group_add_to_tree(grp, m);
tipc_nlist_add(&grp->dests, m->node);
m->state = state;
return m;
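
Propagating -EEXIST from the rbtree insert lets the caller free the duplicate
member instead of leaking it. The same duplicate-detection shape in a plain
binary search tree, as a standalone sketch:

#include <errno.h>
#include <stddef.h>

struct node {
	unsigned long long key;
	struct node *left, *right;
};

/* Fail with -EEXIST instead of silently dropping a duplicate key,
 * so the caller can release the node it just allocated. */
static int tree_insert(struct node **root, struct node *n)
{
	while (*root) {
		if (n->key < (*root)->key)
			root = &(*root)->left;
		else if (n->key > (*root)->key)
			root = &(*root)->right;
		else
			return -EEXIST;
	}
	n->left = NULL;
	n->right = NULL;
	*root = n;
	return 0;
}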
* tipc_link_bc_create - create new link to be used for broadcast
* @net: pointer to associated network namespace
* @mtu: mtu to be used initially if no peers
- * @window: send window to be used
+ * @min_win: minimum send window to be used by the link
+ * @max_win: maximum send window to be used by the link
* @inputq: queue to put messages ready for delivery
* @namedq: queue to put binding table update messages ready for delivery
* @link: return value, pointer to put the created link
if (fragid == FIRST_FRAGMENT) {
if (unlikely(head))
goto err;
- if (unlikely(skb_unclone(frag, GFP_ATOMIC)))
+ frag = skb_unshare(frag, GFP_ATOMIC);
+ if (unlikely(!frag))
goto err;
head = *headbuf = frag;
*buf = NULL;
trace_tipc_sk_shutdown(sk, NULL, TIPC_DUMP_ALL, " ");
__tipc_shutdown(sock, TIPC_CONN_SHUTDOWN);
- if (tipc_sk_type_connectionless(sk))
- sk->sk_shutdown = SHUTDOWN_MASK;
- else
- sk->sk_shutdown = SEND_SHUTDOWN;
+ sk->sk_shutdown = SHUTDOWN_MASK;
if (sk->sk_state == TIPC_DISCONNECTING) {
/* Discard any unreceived messages */
config LIB80211_CRYPT_CCMP
tristate
+ select CRYPTO
select CRYPTO_AES
select CRYPTO_CCM
/* see 802.11ax D6.1 27.3.23.2 */
if (chan == 2)
return MHZ_TO_KHZ(5935);
- if (chan <= 253)
+ if (chan <= 233)
return MHZ_TO_KHZ(5950 + chan * 5);
break;
case NL80211_BAND_60GHZ:
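
The tightened bound matches the 6 GHz channelization: channel 233 maps to
5950 + 233 * 5 = 7115 MHz, the top of the band, while the old limit of 253
would have produced frequencies up to 7215 MHz that do not exist. As plain
arithmetic:

#include <stdio.h>

#define MHZ_TO_KHZ(freq) ((freq) * 1000)

int main(void)
{
	int chan = 233;

	/* 5950 + 233 * 5 = 7115 MHz, the last valid 6 GHz channel */
	printf("%d kHz\n", MHZ_TO_KHZ(5950 + chan * 5));
	return 0;
}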
static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
{
+ u32 npgs_rem, chunk_size = mr->chunk_size, headroom = mr->headroom;
bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
- u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
u64 npgs, addr = mr->addr, size = mr->len;
- unsigned int chunks, chunks_per_page;
+ unsigned int chunks, chunks_rem;
int err;
if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
if ((addr + size) < addr)
return -EINVAL;
- npgs = size >> PAGE_SHIFT;
+ npgs = div_u64_rem(size, PAGE_SIZE, &npgs_rem);
+ if (npgs_rem)
+ npgs++;
if (npgs > U32_MAX)
return -EINVAL;
- chunks = (unsigned int)div_u64(size, chunk_size);
+ chunks = (unsigned int)div_u64_rem(size, chunk_size, &chunks_rem);
if (chunks == 0)
return -EINVAL;
- if (!unaligned_chunks) {
- chunks_per_page = PAGE_SIZE / chunk_size;
- if (chunks < chunks_per_page || chunks % chunks_per_page)
- return -EINVAL;
- }
+ if (!unaligned_chunks && chunks_rem)
+ return -EINVAL;
if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
return -EINVAL;
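
div_u64_rem() lets the page count round up instead of truncating, so a UMEM
whose length is not page-aligned still accounts for its final partial page;
aligned mode likewise now only demands that the size divide evenly into
chunks. The rounding, reduced to plain arithmetic (pages_needed is an
illustrative helper):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u

static uint64_t pages_needed(uint64_t size)
{
	uint64_t npgs = size / PAGE_SIZE;

	if (size % PAGE_SIZE)	/* a partial trailing page still needs backing */
		npgs++;
	return npgs;
}

int main(void)
{
	/* 10000 bytes: the old "size >> PAGE_SHIFT" yielded 2 pages,
	 * ignoring the last 1808 bytes; rounding up gives 3. */
	printf("%llu\n", (unsigned long long)pages_needed(10000));
	return 0;
}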
dtc-objs += dtc-lexer.lex.o dtc-parser.tab.o
# Source files need to get at the userspace version of libfdt_env.h to compile
-HOST_EXTRACFLAGS := -I $(srctree)/$(src)/libfdt
+HOST_EXTRACFLAGS += -I $(srctree)/$(src)/libfdt
ifeq ($(shell pkg-config --exists yaml-0.1 2>/dev/null && echo yes),)
ifneq ($(CHECK_DT_BINDING)$(CHECK_DTBS),)
static bool is_ignored_symbol(const char *name, char type)
{
+ /* Symbol names that exactly match the following are ignored. */
static const char * const ignored_symbols[] = {
/*
* Symbols which vary between passes. Passes 1 and 2 must have
NULL
};
+ /* Symbol names that begin with the following are ignored. */
static const char * const ignored_prefixes[] = {
"$", /* local symbols for ARM, MIPS, etc. */
".LASANPC", /* s390 kasan local symbols */
NULL
};
+ /* Symbol names that end with the following are ignored. */
static const char * const ignored_suffixes[] = {
"_from_arm", /* arm */
"_from_thumb", /* arm */
NULL
};
+ /* Symbol names that contain the following are ignored. */
+ static const char * const ignored_matches[] = {
+ ".long_branch.", /* ppc stub */
+ ".plt_branch.", /* ppc stub */
+ NULL
+ };
+
const char * const *p;
- /* Exclude symbols which vary between passes. */
for (p = ignored_symbols; *p; p++)
if (!strcmp(name, *p))
return true;
return true;
}
+ for (p = ignored_matches; *p; p++) {
+ if (strstr(name, *p))
+ return true;
+ }
+
if (type == 'U' || type == 'u')
return true;
/* exclude debugging symbols */
fprintf(stderr, "Error in writing or end of file.\n");
}
-/* menu.c */
-void _menu_init(void);
-void menu_warn(struct menu *menu, const char *fmt, ...);
-struct menu *menu_add_menu(void);
-void menu_end_menu(void);
-void menu_add_entry(struct symbol *sym);
-void menu_add_dep(struct expr *dep);
-void menu_add_visibility(struct expr *dep);
-struct property *menu_add_prompt(enum prop_type type, char *prompt, struct expr *dep);
-void menu_add_expr(enum prop_type type, struct expr *expr, struct expr *dep);
-void menu_add_symbol(enum prop_type type, struct symbol *sym, struct expr *dep);
-void menu_add_option_modules(void);
-void menu_add_option_defconfig_list(void);
-void menu_add_option_allnoconfig_y(void);
-void menu_finalize(struct menu *parent);
-void menu_set_type(int type);
-
/* util.c */
struct file *file_lookup(const char *name);
void *xmalloc(size_t size);
void str_printf(struct gstr *gs, const char *fmt, ...);
const char *str_get(struct gstr *gs);
+/* menu.c */
+void _menu_init(void);
+void menu_warn(struct menu *menu, const char *fmt, ...);
+struct menu *menu_add_menu(void);
+void menu_end_menu(void);
+void menu_add_entry(struct symbol *sym);
+void menu_add_dep(struct expr *dep);
+void menu_add_visibility(struct expr *dep);
+struct property *menu_add_prompt(enum prop_type type, char *prompt, struct expr *dep);
+void menu_add_expr(enum prop_type type, struct expr *expr, struct expr *dep);
+void menu_add_symbol(enum prop_type type, struct symbol *sym, struct expr *dep);
+void menu_add_option_modules(void);
+void menu_add_option_defconfig_list(void);
+void menu_add_option_allnoconfig_y(void);
+void menu_finalize(struct menu *parent);
+void menu_set_type(int type);
+
+extern struct menu rootmenu;
+
+bool menu_is_empty(struct menu *menu);
+bool menu_is_visible(struct menu *menu);
+bool menu_has_prompt(struct menu *menu);
+const char *menu_get_prompt(struct menu *menu);
+struct menu *menu_get_root_menu(struct menu *menu);
+struct menu *menu_get_parent_menu(struct menu *menu);
+bool menu_has_help(struct menu *menu);
+const char *menu_get_help(struct menu *menu);
+struct gstr get_relations_str(struct symbol **sym_arr, struct list_head *head);
+void menu_get_ext_help(struct menu *menu, struct gstr *help);
+
/* symbol.c */
void sym_clear_all_valid(void);
struct symbol *sym_choice_default(struct symbol *sym);
void conf_set_changed_callback(void (*fn)(void));
void conf_set_message_callback(void (*fn)(const char *s));
-/* menu.c */
-extern struct menu rootmenu;
-
-bool menu_is_empty(struct menu *menu);
-bool menu_is_visible(struct menu *menu);
-bool menu_has_prompt(struct menu *menu);
-const char * menu_get_prompt(struct menu *menu);
-struct menu * menu_get_root_menu(struct menu *menu);
-struct menu * menu_get_parent_menu(struct menu *menu);
-bool menu_has_help(struct menu *menu);
-const char * menu_get_help(struct menu *menu);
-struct gstr get_relations_str(struct symbol **sym_arr, struct list_head *head);
-void menu_get_ext_help(struct menu *menu, struct gstr *help);
-
/* symbol.c */
extern struct symbol * symbol_hash[SYMBOL_HASHSIZE];
if (showDebug())
stream << debug_info(sym);
+ struct gstr help_gstr = str_new();
+
+ menu_get_ext_help(_menu, &help_gstr);
+ stream << print_filter(str_get(&help_gstr));
+ str_free(&help_gstr);
} else if (_menu->prompt) {
stream << "<big><b>";
stream << print_filter(_menu->prompt->text);
expr_print_help, &stream, E_NONE);
stream << "<br><br>";
}
+
+ stream << "defined at " << _menu->file->name << ":"
+ << _menu->lineno << "<br><br>";
}
}
- if (showDebug())
- stream << "defined at " << _menu->file->name << ":"
- << _menu->lineno << "<br><br>";
setText(info);
}
}
free(result);
- delete data;
+ delete[] data;
}
void ConfigInfoView::contextMenuEvent(QContextMenuEvent *event)
{
struct dev_exception_item *ex;
- list_for_each_entry_rcu(ex, exceptions, list) {
+ list_for_each_entry_rcu(ex, exceptions, list,
+ lockdep_is_held(&devcgroup_mutex)) {
if ((type & DEVCG_DEV_BLOCK) && !(ex->type & DEVCG_DEV_BLOCK))
continue;
if ((type & DEVCG_DEV_CHAR) && !(ex->type & DEVCG_DEV_CHAR))
struct hpi_message hm;
struct hpi_response hr;
struct hpi_adapter adapter;
- struct hpi_pci pci;
+ struct hpi_pci pci = { 0 };
memset(&adapter, 0, sizeof(adapter));
return 0;
err:
- for (idx = 0; idx < HPI_MAX_ADAPTER_MEM_SPACES; idx++) {
+ while (--idx >= 0) {
if (pci.ap_mem_base[idx]) {
iounmap(pci.ap_mem_base[idx]);
pci.ap_mem_base[idx] = NULL;
SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1462, 0x1293, "MSI-GP65", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD),
- SND_PCI_QUIRK(0x1462, 0x9c37, "MSI X570-A PRO", ALC1220_FIXUP_CLEVO_P950),
SND_PCI_QUIRK(0x1462, 0xda57, "MSI Z270-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
SND_PCI_QUIRK_VENDOR(0x1462, "MSI", ALC882_FIXUP_GPIO3),
SND_PCI_QUIRK(0x147b, 0x107a, "Abit AW9D-MAX", ALC882_FIXUP_ABIT_AW9D_MAX),
/* 3k pull low control for Headset jack. */
/* NOTE: call this before clearing the pin, otherwise codec stalls */
- alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+ /* If the 3k pulldown control is disabled for alc257, the Mic detection
+ * will not work correctly when booting with a headset plugged in. So skip
+ * setting it for the alc257 codec.
+ */
+ if (codec->core.vendor_id != 0x10ec0257)
+ alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
if (!spec->no_shutup_pins)
snd_hda_codec_write(codec, hp_pin, 0,
snd_hda_codec_set_pin_target(codec, 0x19, PIN_VREFHIZ);
}
+
+static void alc294_gx502_toggle_output(struct hda_codec *codec,
+ struct hda_jack_callback *cb)
+{
+ /* The Windows driver sets the codec up in a very different way where
+ * it appears to leave 0x10 = 0x8a20 set. For Linux we need to toggle it
+ */
+ if (snd_hda_jack_detect_state(codec, 0x21) == HDA_JACK_PRESENT)
+ alc_write_coef_idx(codec, 0x10, 0x8a20);
+ else
+ alc_write_coef_idx(codec, 0x10, 0x0a20);
+}
+
+static void alc294_fixup_gx502_hp(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+{
+ /* Pin 0x21: headphones/headset mic */
+ if (!is_jack_detectable(codec, 0x21))
+ return;
+
+ switch (action) {
+ case HDA_FIXUP_ACT_PRE_PROBE:
+ snd_hda_jack_detect_enable_callback(codec, 0x21,
+ alc294_gx502_toggle_output);
+ break;
+ case HDA_FIXUP_ACT_INIT:
+ /* Make sure to start in a correct state, e.g. when
+ * headphones were plugged in before the system was
+ * powered up.
+ */
+ alc294_gx502_toggle_output(codec, NULL);
+ break;
+ }
+}
+
static void alc285_fixup_hp_gpio_amp_init(struct hda_codec *codec,
const struct hda_fixup *fix, int action)
{
#include "hp_x360_helper.c"
enum {
+ ALC269_FIXUP_GPIO2,
ALC269_FIXUP_SONY_VAIO,
ALC275_FIXUP_SONY_VAIO_GPIO2,
ALC269_FIXUP_DELL_M101Z,
ALC285_FIXUP_THINKPAD_HEADSET_JACK,
ALC294_FIXUP_ASUS_HPE,
ALC294_FIXUP_ASUS_COEF_1B,
+ ALC294_FIXUP_ASUS_GX502_HP,
+ ALC294_FIXUP_ASUS_GX502_PINS,
+ ALC294_FIXUP_ASUS_GX502_VERBS,
ALC285_FIXUP_HP_GPIO_LED,
ALC285_FIXUP_HP_MUTE_LED,
ALC236_FIXUP_HP_MUTE_LED,
ALC269_FIXUP_LEMOTE_A1802,
ALC269_FIXUP_LEMOTE_A190X,
ALC256_FIXUP_INTEL_NUC8_RUGGED,
+ ALC255_FIXUP_XIAOMI_HEADSET_MIC,
};
static const struct hda_fixup alc269_fixups[] = {
+ [ALC269_FIXUP_GPIO2] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_gpio2,
+ },
[ALC269_FIXUP_SONY_VAIO] = {
.type = HDA_FIXUP_PINCTLS,
.v.pins = (const struct hda_pintbl[]) {
[ALC233_FIXUP_LENOVO_MULTI_CODECS] = {
.type = HDA_FIXUP_FUNC,
.v.func = alc233_alc662_fixup_lenovo_dual_codecs,
+ .chained = true,
+ .chain_id = ALC269_FIXUP_GPIO2
},
[ALC233_FIXUP_ACER_HEADSET_MIC] = {
.type = HDA_FIXUP_VERBS,
.chained = true,
.chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC
},
+ [ALC294_FIXUP_ASUS_GX502_PINS] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+ { 0x19, 0x03a11050 }, /* front HP mic */
+ { 0x1a, 0x01a11830 }, /* rear external mic */
+ { 0x21, 0x03211020 }, /* front HP out */
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC294_FIXUP_ASUS_GX502_VERBS
+ },
+ [ALC294_FIXUP_ASUS_GX502_VERBS] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ /* set 0x15 to HP-OUT ctrl */
+ { 0x15, AC_VERB_SET_PIN_WIDGET_CONTROL, 0xc0 },
+ /* unmute the 0x15 amp */
+ { 0x15, AC_VERB_SET_AMP_GAIN_MUTE, 0xb000 },
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC294_FIXUP_ASUS_GX502_HP
+ },
+ [ALC294_FIXUP_ASUS_GX502_HP] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc294_fixup_gx502_hp,
+ },
[ALC294_FIXUP_ASUS_COEF_1B] = {
.type = HDA_FIXUP_VERBS,
.v.verbs = (const struct hda_verb[]) {
.chained = true,
.chain_id = ALC269_FIXUP_HEADSET_MODE
},
+ [ALC255_FIXUP_XIAOMI_HEADSET_MIC] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+ { 0x20, AC_VERB_SET_COEF_INDEX, 0x45 },
+ { 0x20, AC_VERB_SET_PROC_COEF, 0x5089 },
+ { }
+ },
+ .chained = true,
+ .chain_id = ALC289_FIXUP_ASUS_GA401
+ },
};
static const struct snd_pci_quirk alc269_fixup_tbl[] = {
SND_PCI_QUIRK(0x1043, 0x1ccd, "ASUS X555UB", ALC256_FIXUP_ASUS_MIC),
SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
SND_PCI_QUIRK(0x1043, 0x1f11, "ASUS Zephyrus G14", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
SND_PCI_QUIRK(0x1b35, 0x1236, "CZC TMI", ALC269_FIXUP_CZC_TMI),
SND_PCI_QUIRK(0x1b35, 0x1237, "CZC L101", ALC269_FIXUP_CZC_L101),
SND_PCI_QUIRK(0x1b7d, 0xa831, "Ordissimo EVE2 ", ALC269VB_FIXUP_ORDISSIMO_EVE2), /* Also known as Malata PC-B1303 */
+ SND_PCI_QUIRK(0x1d72, 0x1602, "RedmiBook", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
SND_PCI_QUIRK(0x10ec, 0x118c, "Medion EE4254 MD62100", ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE),
SND_PCI_QUIRK(0x1c06, 0x2013, "Lemote A1802", ALC269_FIXUP_LEMOTE_A1802),
{.id = ALC298_FIXUP_HUAWEI_MBX_STEREO, .name = "huawei-mbx-stereo"},
{.id = ALC256_FIXUP_MEDION_HEADSET_NO_PRESENCE, .name = "alc256-medion-headset"},
{.id = ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET, .name = "alc298-samsung-headphone"},
+ {.id = ALC255_FIXUP_XIAOMI_HEADSET_MIC, .name = "alc255-xiaomi-headset"},
{}
};
#define ALC225_STANDARD_PINS \
/* Regmap Initialization */
regmap = devm_regmap_init_sdw(slave, &max98373_sdw_regmap);
- if (!regmap)
- return -EINVAL;
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
return max98373_init(slave, regmap);
}
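
This fix and the rt1308/rt700/rt711/rt715 ones below share a single root
cause: devm_regmap_init_sdw() reports failure through an ERR_PTR-encoded
pointer, never NULL, so a NULL check waves the error straight through. A
user-space toy of the convention, modeled on include/linux/err.h:

#include <stdio.h>

#define MAX_ERRNO	4095L
#define ERR_PTR(err)	((void *)(long)(err))
#define IS_ERR(ptr)	((unsigned long)(ptr) >= (unsigned long)-MAX_ERRNO)
#define PTR_ERR(ptr)	((long)(ptr))

int main(void)
{
	void *regmap = ERR_PTR(-12);	/* -ENOMEM */

	/* The old "!regmap" test misses the error entirely. */
	printf("!regmap -> %d\n", !regmap);
	printf("IS_ERR  -> %d, PTR_ERR -> %ld\n",
	       IS_ERR(regmap), PTR_ERR(regmap));
	return 0;
}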
struct pcm3168a_priv *pcm3168a = snd_soc_component_get_drvdata(dai->component);
int ret;
+ /*
+ * Some sound cards set 0 Hz to mean a reset, but a 0 Hz
+ * sysclk is impossible to set. Ignore it here.
+ */
+ if (freq == 0)
+ return 0;
+
if (freq > PCM3168A_MAX_SYSCLK)
return -EINVAL;
/* Regmap Initialization */
regmap = devm_regmap_init_sdw(slave, &rt1308_sdw_regmap);
- if (!regmap)
- return -EINVAL;
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
rt1308_sdw_init(&slave->dev, regmap, slave);
/* Regmap Initialization */
sdw_regmap = devm_regmap_init_sdw(slave, &rt700_sdw_regmap);
- if (!sdw_regmap)
- return -EINVAL;
+ if (IS_ERR(sdw_regmap))
+ return PTR_ERR(sdw_regmap);
regmap = devm_regmap_init(&slave->dev, NULL,
&slave->dev, &rt700_regmap);
/* Regmap Initialization */
sdw_regmap = devm_regmap_init_sdw(slave, &rt711_sdw_regmap);
- if (!sdw_regmap)
- return -EINVAL;
+ if (IS_ERR(sdw_regmap))
+ return PTR_ERR(sdw_regmap);
regmap = devm_regmap_init(&slave->dev, NULL,
&slave->dev, &rt711_regmap);
/* Regmap Initialization */
sdw_regmap = devm_regmap_init_sdw(slave, &rt715_sdw_regmap);
- if (!sdw_regmap)
- return -EINVAL;
+ if (IS_ERR(sdw_regmap))
+ return PTR_ERR(sdw_regmap);
regmap = devm_regmap_init(&slave->dev, NULL, &slave->dev,
&rt715_regmap);
if (ret)
goto out;
+ if (adcx140->supply_areg == NULL)
+ sleep_cfg_val |= ADCX140_AREG_INTERNAL;
+
+ ret = regmap_write(adcx140->regmap, ADCX140_SLEEP_CFG, sleep_cfg_val);
+ if (ret) {
+ dev_err(adcx140->dev, "setting sleep config failed %d\n", ret);
+ goto out;
+ }
+
+ /* 8.4.3: Wait >= 1ms after entering active mode. */
+ usleep_range(1000, 100000);
+
pdm_count = device_property_count_u32(adcx140->dev,
"ti,pdm-edge-select");
if (pdm_count <= ADCX140_NUM_PDM_EDGES && pdm_count > 0) {
if (ret)
goto out;
- if (adcx140->supply_areg == NULL)
- sleep_cfg_val |= ADCX140_AREG_INTERNAL;
-
- ret = regmap_write(adcx140->regmap, ADCX140_SLEEP_CFG, sleep_cfg_val);
- if (ret) {
- dev_err(adcx140->dev, "setting sleep config failed %d\n", ret);
- goto out;
- }
-
- /* 8.4.3: Wait >= 1ms after entering active mode. */
- usleep_range(1000, 100000);
-
ret = regmap_update_bits(adcx140->regmap, ADCX140_BIAS_CFG,
ADCX140_MIC_BIAS_VAL_MSK |
ADCX140_MIC_BIAS_VREF_MSK, bias_cfg);
if (!adcx140)
return -ENOMEM;
+ adcx140->dev = &i2c->dev;
+
adcx140->gpio_reset = devm_gpiod_get_optional(adcx140->dev,
"reset", GPIOD_OUT_LOW);
if (IS_ERR(adcx140->gpio_reset))
ret);
return ret;
}
- adcx140->dev = &i2c->dev;
+
i2c_set_clientdata(i2c, adcx140);
return devm_snd_soc_register_component(&i2c->dev,
return -EINVAL;
}
+ pm_runtime_get_sync(component->dev);
+
switch (micbias) {
case 1:
micdet = &wm8994->micdet[0];
snd_soc_dapm_sync(dapm);
+ pm_runtime_put(component->dev);
+
return 0;
}
EXPORT_SYMBOL_GPL(wm8994_mic_detect);
return -EINVAL;
}
+ pm_runtime_get_sync(component->dev);
+
if (jack) {
snd_soc_dapm_force_enable_pin(dapm, "CLK_SYS");
snd_soc_dapm_sync(dapm);
snd_soc_dapm_sync(dapm);
}
+ pm_runtime_put(component->dev);
+
return 0;
}
EXPORT_SYMBOL_GPL(wm8958_mic_detect);
wm8994->hubs.dcs_readback_mode = 2;
break;
}
+ wm8994->hubs.micd_scthr = true;
break;
case WM8958:
wm8994->hubs.dcs_readback_mode = 1;
wm8994->hubs.hp_startup_mode = 1;
+ wm8994->hubs.micd_scthr = true;
switch (control->revision) {
case 0:
snd_soc_component_update_bits(component, WM8993_ADDITIONAL_CONTROL,
WM8993_LINEOUT2_FB, WM8993_LINEOUT2_FB);
+ if (!hubs->micd_scthr)
+ return 0;
+
snd_soc_component_update_bits(component, WM8993_MICBIAS,
WM8993_JD_SCTHR_MASK | WM8993_JD_THR_MASK |
WM8993_MICB1_LVL | WM8993_MICB2_LVL,
int hp_startup_mode;
int series_startup;
int no_series_update;
+ bool micd_scthr;
bool no_cache_dac_hp_direct;
struct list_head dcs_cache;
if (ret_val < 0)
goto out_power_up;
+ /*
+ * Make sure the period is a multiple of 1 ms to match the
+ * firmware design. Apply the same rule to the buffer size so
+ * that ALSA can always find a valid period size regardless of
+ * the buffer size given by user space.
+ */
+ snd_pcm_hw_constraint_step(substream->runtime, 0,
+ SNDRV_PCM_HW_PARAM_PERIOD_SIZE, 48);
+ snd_pcm_hw_constraint_step(substream->runtime, 0,
+ SNDRV_PCM_HW_PARAM_BUFFER_SIZE, 48);
+
/* Make sure, that the period size is always even */
snd_pcm_hw_constraint_step(substream->runtime, 0,
SNDRV_PCM_HW_PARAM_PERIODS, 2);
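
A step of 48 frames is 1 ms at the 48 kHz rate this firmware runs at, which
is how the constraint keeps period and buffer sizes on whole-millisecond
boundaries. The conversion, spelled out (frames_to_us is illustrative, and a
48 kHz stream is assumed):

/* frames -> microseconds at a given sample rate:
 * 48 frames at 48000 Hz -> 1000 us, 480 frames -> 10 ms. */
static unsigned int frames_to_us(unsigned int frames, unsigned int rate_hz)
{
	return frames * 1000000u / rate_hz;
}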
BYT_RT5640_SSP0_AIF1 |
BYT_RT5640_MCLK_EN),
},
+ { /* MPMAN Converter 9, similar hw to the I.T.Works TW891 2-in-1 */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "MPMAN"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Converter9"),
+ },
+ .driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
+ BYT_RT5640_MONO_SPEAKER |
+ BYT_RT5640_SSP0_AIF1 |
+ BYT_RT5640_MCLK_EN),
+ },
{
/* MPMAN MPWIN895CL */
.matches = {
struct snd_soc_dai *dai;
for_each_card_rtds(card, rtd) {
- if (!strstr(rtd->dai_link->codecs->name, "ehdaudio"))
+ if (!strstr(rtd->dai_link->codecs->name, "ehdaudio0D0"))
continue;
dai = asoc_rtd_to_codec(rtd, 0);
hda_pvt = snd_soc_component_get_drvdata(dai->component);
int j;
int ret = 0;
+ /* set the speaker pin for playback streams only */
+ if (substream->stream == SNDRV_PCM_STREAM_CAPTURE)
+ return 0;
+
for_each_rtd_codec_dais(rtd, j, codec_dai) {
struct snd_soc_component *component = codec_dai->component;
struct snd_soc_dapm_context *dapm =
case SNDRV_PCM_TRIGGER_STOP:
case SNDRV_PCM_TRIGGER_SUSPEND:
case SNDRV_PCM_TRIGGER_PAUSE_PUSH:
- /* Make sure no streams are active before disable pin */
- if (snd_soc_dai_active(codec_dai) != 1)
- break;
ret = snd_soc_dapm_disable_pin(dapm, pin_name);
if (!ret)
snd_soc_dapm_sync(dapm);
return ret;
}
-#define CSR_DEFAULT_VALUE 0x8480040E
-#define ISC_DEFAULT_VALUE 0x0
-#define ISD_DEFAULT_VALUE 0x0
-#define IMC_DEFAULT_VALUE 0x7FFF0003
-#define IMD_DEFAULT_VALUE 0x7FFF0003
-#define IPCC_DEFAULT_VALUE 0x0
-#define IPCD_DEFAULT_VALUE 0x0
-#define CLKCTL_DEFAULT_VALUE 0x7FF
-#define CSR2_DEFAULT_VALUE 0x0
-#define LTR_CTRL_DEFAULT_VALUE 0x0
-#define HMD_CTRL_DEFAULT_VALUE 0x0
-
-static void hsw_set_shim_defaults(struct sst_dsp *sst)
-{
- sst_dsp_shim_write_unlocked(sst, SST_CSR, CSR_DEFAULT_VALUE);
- sst_dsp_shim_write_unlocked(sst, SST_ISRX, ISC_DEFAULT_VALUE);
- sst_dsp_shim_write_unlocked(sst, SST_ISRD, ISD_DEFAULT_VALUE);
- sst_dsp_shim_write_unlocked(sst, SST_IMRX, IMC_DEFAULT_VALUE);
- sst_dsp_shim_write_unlocked(sst, SST_IMRD, IMD_DEFAULT_VALUE);
- sst_dsp_shim_write_unlocked(sst, SST_IPCX, IPCC_DEFAULT_VALUE);
- sst_dsp_shim_write_unlocked(sst, SST_IPCD, IPCD_DEFAULT_VALUE);
- sst_dsp_shim_write_unlocked(sst, SST_CLKCTL, CLKCTL_DEFAULT_VALUE);
- sst_dsp_shim_write_unlocked(sst, SST_CSR2, CSR2_DEFAULT_VALUE);
- sst_dsp_shim_write_unlocked(sst, SST_LTRC, LTR_CTRL_DEFAULT_VALUE);
- sst_dsp_shim_write_unlocked(sst, SST_HMDC, HMD_CTRL_DEFAULT_VALUE);
-}
-
-/* all clock-gating minus DCLCGE and DTCGE */
-#define SST_VDRTCL2_CG_OTHER 0xB7D
-
static void hsw_set_dsp_D3(struct sst_dsp *sst)
{
+ u32 val;
u32 reg;
- /* disable clock core gating */
+ /* Disable core clock gating (VDRTCTL2.DCLCGE = 0) */
reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
- reg &= ~(SST_VDRTCL2_DCLCGE);
+ reg &= ~(SST_VDRTCL2_DCLCGE | SST_VDRTCL2_DTCGE);
writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
- /* stall, reset and set 24MHz XOSC */
- sst_dsp_shim_update_bits_unlocked(sst, SST_CSR,
- SST_CSR_24MHZ_LPCS | SST_CSR_STALL | SST_CSR_RST,
- SST_CSR_24MHZ_LPCS | SST_CSR_STALL | SST_CSR_RST);
-
- /* DRAM power gating all */
- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL0);
- reg |= SST_VDRTCL0_ISRAMPGE_MASK |
- SST_VDRTCL0_DSRAMPGE_MASK;
- reg &= ~(SST_VDRTCL0_D3SRAMPGD);
- reg |= SST_VDRTCL0_D3PGD;
- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL0);
- udelay(50);
+ /* enable power gating and switch off DRAM & IRAM blocks */
+ val = readl(sst->addr.pci_cfg + SST_VDRTCTL0);
+ val |= SST_VDRTCL0_DSRAMPGE_MASK |
+ SST_VDRTCL0_ISRAMPGE_MASK;
+ val &= ~(SST_VDRTCL0_D3PGD | SST_VDRTCL0_D3SRAMPGD);
+ writel(val, sst->addr.pci_cfg + SST_VDRTCTL0);
- /* PLL shutdown enable */
- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
- reg |= SST_VDRTCL2_APLLSE_MASK;
- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
+ /* switch off audio PLL */
+ val = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
+ val |= SST_VDRTCL2_APLLSE_MASK;
+ writel(val, sst->addr.pci_cfg + SST_VDRTCTL2);
- /* disable MCLK */
+	/* disable MCLK (CLKCTL.SMOS = 0) */
sst_dsp_shim_update_bits_unlocked(sst, SST_CLKCTL,
- SST_CLKCTL_MASK, 0);
-
- /* switch clock gating */
- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
- reg |= SST_VDRTCL2_CG_OTHER;
- reg &= ~(SST_VDRTCL2_DTCGE);
- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
- /* enable DTCGE separatelly */
- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
- reg |= SST_VDRTCL2_DTCGE;
- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
+ SST_CLKCTL_MASK, 0);
- /* set shim defaults */
- hsw_set_shim_defaults(sst);
-
- /* set D3 */
- reg = readl(sst->addr.pci_cfg + SST_PMCS);
- reg |= SST_PMCS_PS_MASK;
- writel(reg, sst->addr.pci_cfg + SST_PMCS);
+ /* Set D3 state, delay 50 us */
+ val = readl(sst->addr.pci_cfg + SST_PMCS);
+ val |= SST_PMCS_PS_MASK;
+ writel(val, sst->addr.pci_cfg + SST_PMCS);
udelay(50);
- /* enable clock core gating */
+ /* Enable core clock gating (VDRTCTL2.DCLCGE = 1), delay 50 us */
reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
- reg |= SST_VDRTCL2_DCLCGE;
+ reg |= SST_VDRTCL2_DCLCGE | SST_VDRTCL2_DTCGE;
writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
+
udelay(50);
+
}
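Both the D3 sequence above and the D0 sequence below are built from the same read-modify-write step on PCI config registers; a generic standalone sketch of that step (the register value is illustrative):

	#include <stdint.h>

	/* Clear 'clear' and set 'set' in a 32-bit register image, mirroring
	 * the readl()/writel() pairs used in the D3/D0 sequences.
	 */
	static void rmw32(volatile uint32_t *reg, uint32_t clear, uint32_t set)
	{
		uint32_t v = *reg;

		v &= ~clear;
		v |= set;
		*reg = v;
	}

	int main(void)
	{
		uint32_t fake_reg = 0x8480040e;       /* example register image */

		rmw32(&fake_reg, 1u << 0, 1u << 4);   /* clear bit 0, set bit 4 */
		return fake_reg == 0x8480041e ? 0 : 1;
	}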
static void hsw_reset(struct sst_dsp *sst)
SST_CSR_RST | SST_CSR_STALL, SST_CSR_STALL);
}
-/* recommended CSR state for power-up */
-#define SST_CSR_D0_MASK (0x18A09C0C | SST_CSR_DCS_MASK)
-
static int hsw_set_dsp_D0(struct sst_dsp *sst)
{
- u32 reg;
+ int tries = 10;
+ u32 reg, fw_dump_bit;
- /* disable clock core gating */
+ /* Disable core clock gating (VDRTCTL2.DCLCGE = 0) */
reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
- reg &= ~(SST_VDRTCL2_DCLCGE);
+ reg &= ~(SST_VDRTCL2_DCLCGE | SST_VDRTCL2_DTCGE);
writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
- /* switch clock gating */
- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
- reg |= SST_VDRTCL2_CG_OTHER;
- reg &= ~(SST_VDRTCL2_DTCGE);
- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
+ /* Disable D3PG (VDRTCTL0.D3PGD = 1) */
+ reg = readl(sst->addr.pci_cfg + SST_VDRTCTL0);
+ reg |= SST_VDRTCL0_D3PGD;
+ writel(reg, sst->addr.pci_cfg + SST_VDRTCTL0);
- /* set D0 */
+ /* Set D0 state */
reg = readl(sst->addr.pci_cfg + SST_PMCS);
- reg &= ~(SST_PMCS_PS_MASK);
+ reg &= ~SST_PMCS_PS_MASK;
writel(reg, sst->addr.pci_cfg + SST_PMCS);
- /* DRAM power gating none */
- reg = readl(sst->addr.pci_cfg + SST_VDRTCTL0);
- reg &= ~(SST_VDRTCL0_ISRAMPGE_MASK |
- SST_VDRTCL0_DSRAMPGE_MASK);
- reg |= SST_VDRTCL0_D3SRAMPGD;
- reg |= SST_VDRTCL0_D3PGD;
- writel(reg, sst->addr.pci_cfg + SST_VDRTCTL0);
- mdelay(10);
+ /* check that ADSP shim is enabled */
+ while (tries--) {
+ reg = readl(sst->addr.pci_cfg + SST_PMCS) & SST_PMCS_PS_MASK;
+ if (reg == 0)
+ goto finish;
+
+ msleep(1);
+ }
+
+ return -ENODEV;
- /* set shim defaults */
- hsw_set_shim_defaults(sst);
+finish:
+ /* select SSP1 19.2MHz base clock, SSP clock 0, turn off Low Power Clock */
+ sst_dsp_shim_update_bits_unlocked(sst, SST_CSR,
+ SST_CSR_S1IOCS | SST_CSR_SBCS1 | SST_CSR_LPCS, 0x0);
+
+	/* stall DSP core, set clk to 192/96 MHz */
+ sst_dsp_shim_update_bits_unlocked(sst,
+ SST_CSR, SST_CSR_STALL | SST_CSR_DCS_MASK,
+ SST_CSR_STALL | SST_CSR_DCS(4));
- /* restore MCLK */
+ /* Set 24MHz MCLK, prevent local clock gating, enable SSP0 clock */
sst_dsp_shim_update_bits_unlocked(sst, SST_CLKCTL,
- SST_CLKCTL_MASK, SST_CLKCTL_MASK);
+ SST_CLKCTL_MASK | SST_CLKCTL_DCPLCG | SST_CLKCTL_SCOE0,
+ SST_CLKCTL_MASK | SST_CLKCTL_DCPLCG | SST_CLKCTL_SCOE0);
- /* PLL shutdown disable */
+ /* Stall and reset core, set CSR */
+ hsw_reset(sst);
+
+ /* Enable core clock gating (VDRTCTL2.DCLCGE = 1), delay 50 us */
reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
- reg &= ~(SST_VDRTCL2_APLLSE_MASK);
+ reg |= SST_VDRTCL2_DCLCGE | SST_VDRTCL2_DTCGE;
writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
- sst_dsp_shim_update_bits_unlocked(sst, SST_CSR,
- SST_CSR_D0_MASK, SST_CSR_SBCS0 | SST_CSR_SBCS1 |
- SST_CSR_STALL | SST_CSR_DCS(4));
udelay(50);
- /* enable clock core gating */
+ /* switch on audio PLL */
reg = readl(sst->addr.pci_cfg + SST_VDRTCTL2);
- reg |= SST_VDRTCL2_DCLCGE;
+ reg &= ~SST_VDRTCL2_APLLSE_MASK;
writel(reg, sst->addr.pci_cfg + SST_VDRTCTL2);
- /* clear reset */
- sst_dsp_shim_update_bits_unlocked(sst, SST_CSR, SST_CSR_RST, 0);
+	/* Set the default power gating control: enable power gating for all
+	 * blocks, i.e. they cannot be accessed; enable each block before
+	 * accessing it. */
+ reg = readl(sst->addr.pci_cfg + SST_VDRTCTL0);
+ reg |= SST_VDRTCL0_DSRAMPGE_MASK | SST_VDRTCL0_ISRAMPGE_MASK;
+	/* for D0, always enable the block (DSRAM[0]) used for FW dump */
+ fw_dump_bit = 1 << SST_VDRTCL0_DSRAMPGE_SHIFT;
+ writel(reg & ~fw_dump_bit, sst->addr.pci_cfg + SST_VDRTCTL0);
+
/* disable DMA finish function for SSP0 & SSP1 */
sst_dsp_shim_update_bits_unlocked(sst, SST_CSR2, SST_CSR2_SDFD_SSP1,
sst_dsp_shim_update_bits(sst, SST_IMRD, (SST_IMRD_DONE | SST_IMRD_BUSY |
SST_IMRD_SSP0 | SST_IMRD_DMAC), 0x0);
+ /* clear IPC registers */
+ sst_dsp_shim_write(sst, SST_IPCX, 0x0);
+ sst_dsp_shim_write(sst, SST_IPCD, 0x0);
+ sst_dsp_shim_write(sst, 0x80, 0x6);
+ sst_dsp_shim_write(sst, 0xe0, 0x300a);
+
return 0;
}
{
dev_dbg(sst->dev, "HSW_PM dsp runtime suspend\n");
+ /* put DSP into reset and stall */
+ sst_dsp_shim_update_bits(sst, SST_CSR,
+ SST_CSR_24MHZ_LPCS | SST_CSR_RST | SST_CSR_STALL,
+ SST_CSR_RST | SST_CSR_STALL | SST_CSR_24MHZ_LPCS);
+
hsw_set_dsp_D3(sst);
dev_dbg(sst->dev, "HSW_PM dsp runtime suspend exit\n");
}
#define CTRL0_TODDR_SEL_RESAMPLE BIT(30)
#define CTRL0_TODDR_EXT_SIGNED BIT(29)
#define CTRL0_TODDR_PP_MODE BIT(28)
+#define CTRL0_TODDR_SYNC_CH BIT(27)
#define CTRL0_TODDR_TYPE_MASK GENMASK(15, 13)
#define CTRL0_TODDR_TYPE(x) ((x) << 13)
#define CTRL0_TODDR_MSB_POS_MASK GENMASK(12, 8)
.dai_drv = &axg_toddr_dai_drv
};
+static int g12a_toddr_dai_startup(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+{
+ struct axg_fifo *fifo = snd_soc_dai_get_drvdata(dai);
+ int ret;
+
+ ret = axg_toddr_dai_startup(substream, dai);
+ if (ret)
+ return ret;
+
+ /*
+	 * Make sure the first channel ends up at the beginning of the
+	 * output. As weird as it looks, without this the first channel may
+	 * be misplaced in memory, with a random shift of 2 channels.
+ */
+ regmap_update_bits(fifo->map, FIFO_CTRL0, CTRL0_TODDR_SYNC_CH,
+ CTRL0_TODDR_SYNC_CH);
+
+ return 0;
+}
+
static const struct snd_soc_dai_ops g12a_toddr_ops = {
.prepare = g12a_toddr_dai_prepare,
.hw_params = axg_toddr_dai_hw_params,
- .startup = axg_toddr_dai_startup,
+ .startup = g12a_toddr_dai_startup,
.shutdown = axg_toddr_dai_shutdown,
};
card = &data->card;
card->dev = dev;
+ card->owner = THIS_MODULE;
card->dapm_widgets = apq8016_sbc_dapm_widgets;
card->num_dapm_widgets = ARRAY_SIZE(apq8016_sbc_dapm_widgets);
return -ENOMEM;
card->dev = dev;
+ card->owner = THIS_MODULE;
dev_set_drvdata(dev, card);
ret = qcom_snd_parse_of(card);
if (ret)
for_each_child_of_node(dev->of_node, np) {
dlc = devm_kzalloc(dev, 2 * sizeof(*dlc), GFP_KERNEL);
- if (!dlc)
- return -ENOMEM;
+ if (!dlc) {
+ ret = -ENOMEM;
+ goto err;
+ }
link->cpus = &dlc[0];
link->platforms = &dlc[1];
card->dapm_widgets = sdm845_snd_widgets;
card->num_dapm_widgets = ARRAY_SIZE(sdm845_snd_widgets);
card->dev = dev;
+ card->owner = THIS_MODULE;
dev_set_drvdata(dev, card);
ret = qcom_snd_parse_of(card);
if (ret)
return -ENOMEM;
card->dev = &pdev->dev;
+ card->owner = THIS_MODULE;
ret = snd_soc_of_parse_card_name(card, "qcom,model");
if (ret) {
}
EXPORT_SYMBOL_GPL(snd_soc_find_dai);
+struct snd_soc_dai *snd_soc_find_dai_with_mutex(
+ const struct snd_soc_dai_link_component *dlc)
+{
+ struct snd_soc_dai *dai;
+
+ mutex_lock(&client_mutex);
+ dai = snd_soc_find_dai(dlc);
+ mutex_unlock(&client_mutex);
+
+ return dai;
+}
+EXPORT_SYMBOL_GPL(snd_soc_find_dai_with_mutex);
+
static int soc_dai_link_sanity_check(struct snd_soc_card *card,
struct snd_soc_dai_link *link)
{
supported_codec = false;
for_each_link_cpus(dai_link, i, cpu) {
- dai = snd_soc_find_dai(cpu);
+ dai = snd_soc_find_dai_with_mutex(cpu);
if (dai && snd_soc_dai_stream_valid(dai, direction)) {
supported_cpu = true;
break;
}
}
for_each_link_codecs(dai_link, i, codec) {
- dai = snd_soc_find_dai(codec);
+ dai = snd_soc_find_dai_with_mutex(codec);
if (dai && snd_soc_dai_stream_valid(dai, direction)) {
supported_codec = true;
break;
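snd_soc_find_dai_with_mutex() exists because the unlocked lookup expects client_mutex to already be held; callers outside the core go through the wrapper instead. A standalone pthreads sketch of the same lock-wrapping pattern (all names hypothetical):

	#include <pthread.h>
	#include <stddef.h>
	#include <string.h>

	static pthread_mutex_t registry_mutex = PTHREAD_MUTEX_INITIALIZER;
	static const char *registry[] = { "codec0", "codec1" };

	/* Caller must hold registry_mutex, like snd_soc_find_dai(). */
	static const char *find_locked(const char *name)
	{
		for (size_t i = 0; i < 2; i++)
			if (!strcmp(registry[i], name))
				return registry[i];
		return NULL;
	}

	/* Takes the lock itself, like snd_soc_find_dai_with_mutex(). */
	static const char *find_with_mutex(const char *name)
	{
		const char *res;

		pthread_mutex_lock(&registry_mutex);
		res = find_locked(name);
		pthread_mutex_unlock(&registry_mutex);
		return res;
	}

	int main(void)
	{
		return find_with_mutex("codec1") ? 0 : 1;
	}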
return 0;
config_err:
- for_each_rtd_dais(rtd, i, dai)
+ for_each_rtd_dais_rollback(rtd, i, dai)
snd_soc_dai_shutdown(dai, substream);
snd_soc_link_shutdown(substream);
/* Will be used if the codec ever has its own digital_mute function */
static int ams_delta_startup(struct snd_pcm_substream *substream)
{
- return ams_delta_digital_mute(NULL, 0, substream->stream);
+ return ams_delta_mute(NULL, 0, substream->stream);
}
static void ams_delta_shutdown(struct snd_pcm_substream *substream)
{
- ams_delta_digital_mute(NULL, 1, substream->stream);
+ ams_delta_mute(NULL, 1, substream->stream);
}
};
static const struct usbmix_name_map lenovo_p620_rear_map[] = {
- { 19, NULL, 2 }, /* FU, Volume */
{ 19, NULL, 12 }, /* FU, Input Gain Pad */
{}
};
&& (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
msleep(20);
- /* Zoom R16/24, Logitech H650e, Jabra 550a, Kingston HyperX needs a tiny
- * delay here, otherwise requests like get/set frequency return as
- * failed despite actually succeeding.
+	/* Zoom R16/24, Logitech H650e/H570e, Jabra 550a and Kingston HyperX
+	 * need a tiny delay here, otherwise requests like get/set
+	 * frequency return as failed despite actually succeeding.
*/
if ((chip->usb_id == USB_ID(0x1686, 0x00dd) ||
chip->usb_id == USB_ID(0x046d, 0x0a46) ||
+ chip->usb_id == USB_ID(0x046d, 0x0a56) ||
chip->usb_id == USB_ID(0x0b0e, 0x0349) ||
chip->usb_id == USB_ID(0x0951, 0x16ad)) &&
(requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
FEATURE_DISPLAY = libbfd disassembler-four-args
check_feat := 1
-NON_CHECK_FEAT_TARGETS := clean bpftool_clean runqslower_clean
+NON_CHECK_FEAT_TARGETS := clean bpftool_clean runqslower_clean resolve_btfids_clean
ifdef MAKECMDGOALS
ifeq ($(filter-out $(NON_CHECK_FEAT_TARGETS),$(MAKECMDGOALS)),)
check_feat := 0
$(OUTPUT)bpf_exp.yacc.o: $(OUTPUT)bpf_exp.yacc.c
$(OUTPUT)bpf_exp.lex.o: $(OUTPUT)bpf_exp.lex.c
-clean: bpftool_clean runqslower_clean
+clean: bpftool_clean runqslower_clean resolve_btfids_clean
$(call QUIET_CLEAN, bpf-progs)
$(Q)$(RM) -r -- $(OUTPUT)*.o $(OUTPUT)bpf_jit_disasm $(OUTPUT)bpf_dbg \
$(OUTPUT)bpf_asm $(OUTPUT)bpf_exp.yacc.* $(OUTPUT)bpf_exp.lex.*
clean: libsubcmd-clean libbpf-clean fixdep-clean
$(call msg,CLEAN,$(BINARY))
$(Q)$(RM) -f $(BINARY); \
+ $(RM) -rf $(if $(OUTPUT),$(OUTPUT),.)/feature; \
find $(if $(OUTPUT),$(OUTPUT),.) -name \*.o -or -name \*.o.cmd -or -name \*.o.d | xargs $(RM)
tags:
* this socket to prevent accepting spoofed ones.
*/
#define IP_PMTUDISC_INTERFACE 4
-/* weaker version of IP_PMTUDISC_INTERFACE, which allos packets to get
+/* weaker version of IP_PMTUDISC_INTERFACE, which allows packets to get
 * fragmented if they exceed the interface mtu
*/
#define IP_PMTUDISC_OMIT 5
#define KVM_VM_PPC_HV 1
#define KVM_VM_PPC_PR 2
-/* on MIPS, 0 forces trap & emulate, 1 forces VZ ASE */
-#define KVM_VM_MIPS_TE 0
+/* on MIPS, 0 indicates auto, 1 forces VZ ASE, 2 forces trap & emulate */
+#define KVM_VM_MIPS_AUTO 0
#define KVM_VM_MIPS_VZ 1
+#define KVM_VM_MIPS_TE 2
#define KVM_S390_SIE_PAGE_OFFSET 1
#define KVM_CAP_LAST_CPU 184
#define KVM_CAP_SMALLER_MAXPHYADDR 185
#define KVM_CAP_S390_DIAG318 186
+#define KVM_CAP_STEAL_TIME 187
#ifdef KVM_CAP_IRQ_ROUTING
s->nr_files);
}
-static int gettid(void)
+static int lk_gettid(void)
{
return syscall(__NR_gettid);
}
struct io_sq_ring *ring = &s->sq_ring;
int ret, prepped;
- printf("submitter=%d\n", gettid());
+ printf("submitter=%d\n", lk_gettid());
srand48_r(pthread_self(), &s->rand);
FEATURE_TESTS = libelf libelf-mmap zlib bpf reallocarray
FEATURE_DISPLAY = libelf zlib bpf
-INCLUDES = -I. -I$(srctree)/tools/include -I$(srctree)/tools/arch/$(ARCH)/include/uapi -I$(srctree)/tools/include/uapi
+INCLUDES = -I. -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi
FEATURE_CHECK_CFLAGS-bpf = $(INCLUDES)
check_feat := 1
awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \
sort -u | wc -l)
VERSIONED_SYM_COUNT = $(shell readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
+ awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}' | \
grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | sort -u | wc -l)
CMD_TARGETS = $(LIB_TARGET) $(PC_FILE)
awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \
sort -u > $(OUTPUT)libbpf_global_syms.tmp; \
readelf --dyn-syms --wide $(OUTPUT)libbpf.so | \
+ awk '/GLOBAL/ && /DEFAULT/ && !/UND/ {print $$NF}'| \
grep -Eo '[^ ]+@LIBBPF_' | cut -d@ -f1 | \
sort -u > $(OUTPUT)libbpf_versioned_syms.tmp; \
diff -u $(OUTPUT)libbpf_global_syms.tmp \
int i, j, nrels, new_sz;
const struct btf_var_secinfo *vi = NULL;
const struct btf_type *sec, *var, *def;
+ struct bpf_map *map = NULL, *targ_map;
const struct btf_member *member;
- struct bpf_map *map, *targ_map;
const char *name, *mname;
Elf_Data *symbols;
unsigned int moff;
if (!is_static_jump(insn))
continue;
- if (insn->ignore || insn->offset == FAKE_JUMP_OFFSET)
+ if (insn->offset == FAKE_JUMP_OFFSET)
continue;
reloc = find_reloc_by_dest_range(file->elf, insn->sec,
/* Block until we're ready to go */
static void ready(int ready_out, int wakefd)
{
- char dummy;
struct pollfd pollfd = { .fd = wakefd, .events = POLLIN };
/* Tell them we're ready. */
- if (write(ready_out, &dummy, 1) != 1)
+ if (write(ready_out, "R", 1) != 1)
err(EXIT_FAILURE, "CLIENT: ready write");
/* Wait for "GO" signal */
unsigned int i, j;
ready(ctx->ready_out, ctx->wakefd);
+ memset(data, 'S', sizeof(data));
/* Now pump to every receiver. */
for (i = 0; i < nr_loops; i++) {
{
"EventName": "ex_ret_brn_ind_misp",
"EventCode": "0xca",
- "BriefDescription": "Retired Indirect Branch Instructions Mispredicted.",
+ "BriefDescription": "Retired Indirect Branch Instructions Mispredicted."
},
{
"EventName": "ex_ret_mmx_fp_instr.sse_instr",
{
"EventName": "ex_ret_fus_brnch_inst",
"EventCode": "0x1d0",
- "BriefDescription": "Retired Fused Instructions. The number of fuse-branch instructions retired per cycle. The number of events logged per cycle can vary from 0-8.",
+ "BriefDescription": "Retired Fused Instructions. The number of fuse-branch instructions retired per cycle. The number of events logged per cycle can vary from 0-8."
}
]
perf record --call-graph fp kill (test-record-graph-fp)
perf record --group -e cycles,instructions kill (test-record-group)
perf record -e '{cycles,instructions}' kill (test-record-group1)
+ perf record -e '{cycles/period=1/,instructions/period=2/}:S' kill (test-record-group2)
perf record -D kill (test-record-no-delay)
perf record -i kill (test-record-no-inherit)
perf record -n kill (test-record-no-samples)
--- /dev/null
+[config]
+command = record
+args = --no-bpf-event -e '{cycles/period=1234000/,instructions/period=6789000/}:S' kill >/dev/null 2>&1
+ret = 1
+
+[event-1:base-record]
+fd=1
+group_fd=-1
+config=0|1
+sample_period=1234000
+sample_type=87
+read_format=12
+inherit=0
+freq=0
+
+[event-2:base-record]
+fd=2
+group_fd=1
+config=0|1
+sample_period=6789000
+sample_type=87
+read_format=12
+disabled=0
+inherit=0
+mmap=0
+comm=0
+freq=0
+enable_on_exec=0
+task=0
#if defined (__x86_64__)
extern void __test_function(volatile long *ptr);
asm (
+ ".pushsection .text;"
".globl __test_function\n"
+ ".type __test_function, @function;"
"__test_function:\n"
"incq (%rdi)\n"
- "ret\n");
+ "ret\n"
+ ".popsection\n");
#else
static void __test_function(volatile long *ptr)
{
return -ENOMEM;
cpus = perf_cpu_map__new("0");
- if (!cpus)
+ if (!cpus) {
+ evlist__delete(evlist);
return -ENOMEM;
+ }
perf_evlist__set_maps(&evlist->core, cpus, NULL);
false, false,
&metric_events);
if (err)
- return err;
+ goto out;
- if (perf_evlist__alloc_stats(evlist, false))
- return -1;
+ err = perf_evlist__alloc_stats(evlist, false);
+ if (err)
+ goto out;
/* Load the runtime stats with given numbers for events. */
runtime_stat__init(&st);
if (name2 && ratio2)
*ratio2 = compute_single(&metric_events, evlist, &st, name2);
+out:
	/* ... cleanup. */
metricgroup__rblist_exit(&metric_events);
runtime_stat__exit(&st);
perf_evlist__free_stats(evlist);
perf_cpu_map__put(cpus);
evlist__delete(evlist);
- return 0;
+ return err;
}
static int compute_metric(const char *name, struct value *vals, double *ratio)
int res = 0;
bool use_uncore_table;
struct pmu_events_map *map = __test_pmu_get_events_map();
+ struct perf_pmu_alias *a, *tmp;
if (!map)
return -1;
pmu_name, alias->name);
}
+ list_for_each_entry_safe(a, tmp, &aliases, list) {
+ list_del(&a->list);
+ perf_pmu_free_alias(a);
+ }
free(pmu);
return res;
}
ret = 0;
} while (0);
+ perf_pmu__del_formats(&formats);
test_format_dir_put(format);
return ret;
}
perf_evlist__set_maps(&evlist->core, cpus, threads);
+	/* evlist now holds references to the maps, so drop ours here */
+ perf_cpu_map__put(cpus);
+ perf_thread_map__put(threads);
+
return 0;
out_delete_threads:
goto out_put;
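Both hunks above restore the usual libperf reference convention: creating a cpu or thread map hands the caller one reference, and perf_evlist__set_maps() takes references of its own, so the caller must put its copies. A minimal standalone sketch of that convention:

	#include <stdlib.h>

	struct obj { int refcnt; };

	static struct obj *obj_get(struct obj *o) { o->refcnt++; return o; }

	static void obj_put(struct obj *o)
	{
		if (--o->refcnt == 0)
			free(o);
	}

	int main(void)
	{
		struct obj *o = calloc(1, sizeof(*o));
		struct obj *container_ref;

		if (!o)
			return 1;
		o->refcnt = 1;               /* creation hands us one reference */
		container_ref = obj_get(o);  /* the container takes its own */
		obj_put(o);                  /* drop ours, as the fixes above do */
		obj_put(container_ref);      /* container teardown frees it */
		return 0;
	}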
perf_evlist__set_maps(&evlist->core, cpus, threads);
-out:
- return err;
+
+ perf_thread_map__put(threads);
out_put:
perf_cpu_map__put(cpus);
- goto out;
+out:
+ return err;
}
int evlist__open(struct evlist *evlist)
* We default some events to have a default interval. But keep
* it a weak assumption overridable by the user.
*/
- if (!attr->sample_period || (opts->user_freq != UINT_MAX ||
- opts->user_interval != ULLONG_MAX)) {
+ if (!attr->sample_period) {
if (opts->freq) {
- evsel__set_sample_bit(evsel, PERIOD);
attr->freq = 1;
attr->sample_freq = opts->freq;
} else {
attr->sample_period = opts->default_interval;
}
}
+ /*
+ * If attr->freq was set (here or earlier), ask for period
+ * to be sampled.
+ */
+ if (attr->freq)
+ evsel__set_sample_bit(evsel, PERIOD);
if (opts->no_samples)
attr->sample_freq = 0;
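The change above separates two decisions: pick frequency or period mode only when no sample_period was given, and request PERF_SAMPLE_PERIOD whenever frequency mode ends up in effect, so each record carries the period the kernel actually chose. A standalone sketch of the resulting attr setup (the values are illustrative):

	#include <linux/perf_event.h>
	#include <string.h>

	static void setup_sampling(struct perf_event_attr *attr, int use_freq)
	{
		memset(attr, 0, sizeof(*attr));
		attr->size = sizeof(*attr);
		if (use_freq) {
			attr->freq = 1;
			attr->sample_freq = 4000;     /* samples per second */
		} else {
			attr->sample_period = 100000; /* events per sample */
		}
		/* with freq sampling the effective period varies, so ask
		 * for it in each sample record */
		if (attr->freq)
			attr->sample_type |= PERF_SAMPLE_PERIOD;
	}

	int main(void)
	{
		struct perf_event_attr a;

		setup_sampling(&a, 1);
		return !(a.sample_type & PERF_SAMPLE_PERIOD);
	}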
list_for_each_entry_safe(expr, tmp, &me->head, nd) {
free(expr->metric_refs);
+ free(expr->metric_events);
free(expr);
}
if (!metric_refs) {
ret = -ENOMEM;
free(metric_events);
+ free(expr);
break;
}
continue;
strlist__add(me->metrics, s);
}
+
+ if (!raw)
+ free(s);
}
free(omg);
}
m->has_constraint = metric_no_group || metricgroup__has_constraint(pe);
INIT_LIST_HEAD(&m->metric_refs);
m->metric_refs_cnt = 0;
- *mp = m;
parent = expr_ids__alloc(ids);
if (!parent) {
free(m);
return -ENOMEM;
}
+ *mp = m;
} else {
/*
* We got here for the referenced metric, via the
* all the metric's IDs and add it to the parent context.
*/
if (expr__find_other(pe->metric_expr, NULL, &m->pctx, runtime) < 0) {
- expr__ctx_clear(&m->pctx);
- free(m);
+ if (m->metric_refs_cnt == 0) {
+ expr__ctx_clear(&m->pctx);
+ free(m);
+ *mp = NULL;
+ }
return -EINVAL;
}
ret = add_metric(&list, pe, metric_no_group, &m, NULL, &ids);
if (ret)
- return ret;
+ goto out;
/*
* Process any possible referenced metrics
ret = resolve_metric(metric_no_group,
&list, map, &ids);
if (ret)
- return ret;
+ goto out;
}
/* End of pmu events. */
- if (!has_match)
- return -EINVAL;
+ if (!has_match) {
+ ret = -EINVAL;
+ goto out;
+ }
list_for_each_entry(m, &list, nd) {
if (events->len > 0)
}
}
+out:
+ /*
+	 * splice the entries onto metric_list so that they can be
+	 * released even on failure
+ */
list_splice(&list, metric_list);
expr_ids__exit(&ids);
- return 0;
+ return ret;
}
static int metricgroup__add_metric_list(const char *list, bool metric_no_group,
ret = metricgroup__add_metric_list(str, metric_no_group,
&extra_events, &metric_list, map);
if (ret)
- return ret;
+ goto out;
pr_debug("adding %s\n", extra_events.buf);
bzero(&parse_error, sizeof(parse_error));
ret = __parse_events(perf_evlist, extra_events.buf, &parse_error, fake_pmu);
parse_events_print_error(&parse_error, extra_events.buf);
goto out;
}
- strbuf_release(&extra_events);
ret = metricgroup__setup_events(&metric_list, metric_no_merge,
perf_evlist, metric_events);
out:
metricgroup__free_metrics(&metric_list);
+ strbuf_release(&extra_events);
return ret;
}
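The leak fixes above all converge on the same C error-handling idiom: every failure path jumps to a single label that releases everything acquired so far. A generic standalone sketch:

	#include <stdlib.h>

	int do_work(void)
	{
		char *a = NULL, *b = NULL;
		int ret = -1;

		a = malloc(64);
		if (!a)
			goto out;
		b = malloc(64);
		if (!b)
			goto out;
		ret = 0;          /* the real work would happen here */
	out:
		free(b);          /* free(NULL) is a no-op, so this is safe */
		free(a);
		return ret;
	}

	int main(void)
	{
		return do_work();
	}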
return -ENOMEM;
evsel->tool_event = tool_event;
if (tool_event == PERF_TOOL_DURATION_TIME)
- evsel->unit = strdup("ns");
+ evsel->unit = "ns";
return 0;
}
}
/* Delete an alias entry. */
-static void perf_pmu_free_alias(struct perf_pmu_alias *newalias)
+void perf_pmu_free_alias(struct perf_pmu_alias *newalias)
{
zfree(&newalias->name);
zfree(&newalias->desc);
set_bit(b, bits);
}
+void perf_pmu__del_formats(struct list_head *formats)
+{
+ struct perf_pmu_format *fmt, *tmp;
+
+ list_for_each_entry_safe(fmt, tmp, formats, list) {
+ list_del(&fmt->list);
+ free(fmt->name);
+ free(fmt);
+ }
+}
+
static int sub_non_neg(int a, int b)
{
if (b > a)
int config, unsigned long *bits);
void perf_pmu__set_format(unsigned long *bits, long from, long to);
int perf_pmu__format_parse(char *dir, struct list_head *head);
+void perf_pmu__del_formats(struct list_head *formats);
struct perf_pmu *perf_pmu__scan(struct perf_pmu *pmu);
struct pmu_events_map *perf_pmu__find_map(struct perf_pmu *pmu);
bool pmu_uncore_alias_match(const char *pmu_name, const char *name);
+void perf_pmu_free_alias(struct perf_pmu_alias *alias);
int perf_pmu__convert_scale(const char *scale, char **end, double *sval);
#include "debug.h"
#include "evlist.h"
#include "evsel.h"
+#include "evsel_config.h"
#include "parse-events.h"
#include <errno.h>
#include <limits.h>
return leader;
}
+static u64 evsel__config_term_mask(struct evsel *evsel)
+{
+ struct evsel_config_term *term;
+ struct list_head *config_terms = &evsel->config_terms;
+ u64 term_types = 0;
+
+ list_for_each_entry(term, config_terms, list) {
+ term_types |= 1 << term->type;
+ }
+ return term_types;
+}
+
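evsel__config_term_mask() folds every explicitly set config term into one bitmask so the leader-sampling code below can test whole groups of terms at once. A standalone sketch of that test (the enum values here are illustrative, not the real evsel_config_term types):

	#include <stdio.h>
	#include <stdint.h>

	enum { TERM_PERIOD, TERM_FREQ, TERM_OVERWRITE };   /* hypothetical */

	int main(void)
	{
		uint64_t term_types = 1ULL << TERM_FREQ;   /* event set freq= */
		uint64_t freq_mask = (1ULL << TERM_FREQ) | (1ULL << TERM_PERIOD);

		if ((term_types & freq_mask) == 0)
			printf("no explicit period/freq: disable member sampling\n");
		else
			printf("explicit term present: keep member settings\n");
		return 0;
	}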
static void evsel__config_leader_sampling(struct evsel *evsel, struct evlist *evlist)
{
struct perf_event_attr *attr = &evsel->core.attr;
struct evsel *leader = evsel->leader;
struct evsel *read_sampler;
+ u64 term_types, freq_mask;
if (!leader->sample_read)
return;
if (evsel == read_sampler)
return;
+ term_types = evsel__config_term_mask(evsel);
/*
- * Disable sampling for all group members other than the leader in
- * case the leader 'leads' the sampling, except when the leader is an
- * AUX area event, in which case the 2nd event in the group is the one
- * that 'leads' the sampling.
+ * Disable sampling for all group members except those with explicit
+ * config terms or the leader. In the case of an AUX area event, the 2nd
+ * event in the group is the one that 'leads' the sampling.
*/
- attr->freq = 0;
- attr->sample_freq = 0;
- attr->sample_period = 0;
- attr->write_backward = 0;
+ freq_mask = (1 << EVSEL__CONFIG_TERM_FREQ) | (1 << EVSEL__CONFIG_TERM_PERIOD);
+ if ((term_types & freq_mask) == 0) {
+ attr->freq = 0;
+ attr->sample_freq = 0;
+ attr->sample_period = 0;
+ }
+ if ((term_types & (1 << EVSEL__CONFIG_TERM_OVERWRITE)) == 0)
+ attr->write_backward = 0;
/*
* We don't get a sample for slave events, we make them when delivering
color = get_ratio_color(GRC_CACHE_MISSES, ratio);
- out->print_metric(config, out->ctx, color, "%7.2f%%", "of all L1-dcache hits", ratio);
+ out->print_metric(config, out->ctx, color, "%7.2f%%", "of all L1-dcache accesses", ratio);
}
static void print_l1_icache_misses(struct perf_stat_config *config,
ratio = avg / total * 100.0;
color = get_ratio_color(GRC_CACHE_MISSES, ratio);
- out->print_metric(config, out->ctx, color, "%7.2f%%", "of all L1-icache hits", ratio);
+ out->print_metric(config, out->ctx, color, "%7.2f%%", "of all L1-icache accesses", ratio);
}
static void print_dtlb_cache_misses(struct perf_stat_config *config,
ratio = avg / total * 100.0;
color = get_ratio_color(GRC_CACHE_MISSES, ratio);
- out->print_metric(config, out->ctx, color, "%7.2f%%", "of all dTLB cache hits", ratio);
+ out->print_metric(config, out->ctx, color, "%7.2f%%", "of all dTLB cache accesses", ratio);
}
static void print_itlb_cache_misses(struct perf_stat_config *config,
ratio = avg / total * 100.0;
color = get_ratio_color(GRC_CACHE_MISSES, ratio);
- out->print_metric(config, out->ctx, color, "%7.2f%%", "of all iTLB cache hits", ratio);
+ out->print_metric(config, out->ctx, color, "%7.2f%%", "of all iTLB cache accesses", ratio);
}
static void print_ll_cache_misses(struct perf_stat_config *config,
ratio = avg / total * 100.0;
color = get_ratio_color(GRC_CACHE_MISSES, ratio);
- out->print_metric(config, out->ctx, color, "%7.2f%%", "of all LL-cache hits", ratio);
+ out->print_metric(config, out->ctx, color, "%7.2f%%", "of all LL-cache accesses", ratio);
}
/*
double test_generic_metric(struct metric_expr *mexp, int cpu, struct runtime_stat *st)
{
struct expr_parse_ctx pctx;
- double ratio;
+ double ratio = 0.0;
if (prepare_metric(mexp->metric_events, mexp->metric_refs, &pctx, cpu, st) < 0)
- return 0.;
+ goto out;
if (expr__parse(&ratio, &pctx, mexp->metric_expr, 1))
- return 0.;
+ ratio = 0.0;
+out:
+ expr__ctx_clear(&pctx);
return ratio;
}
if (runtime_stat_n(st, STAT_L1_DCACHE, ctx, cpu) != 0)
print_l1_dcache_misses(config, cpu, evsel, avg, out, st);
else
- print_metric(config, ctxp, NULL, NULL, "of all L1-dcache hits", 0);
+ print_metric(config, ctxp, NULL, NULL, "of all L1-dcache accesses", 0);
} else if (
evsel->core.attr.type == PERF_TYPE_HW_CACHE &&
evsel->core.attr.config == ( PERF_COUNT_HW_CACHE_L1I |
if (runtime_stat_n(st, STAT_L1_ICACHE, ctx, cpu) != 0)
print_l1_icache_misses(config, cpu, evsel, avg, out, st);
else
- print_metric(config, ctxp, NULL, NULL, "of all L1-icache hits", 0);
+ print_metric(config, ctxp, NULL, NULL, "of all L1-icache accesses", 0);
} else if (
evsel->core.attr.type == PERF_TYPE_HW_CACHE &&
evsel->core.attr.config == ( PERF_COUNT_HW_CACHE_DTLB |
if (runtime_stat_n(st, STAT_DTLB_CACHE, ctx, cpu) != 0)
print_dtlb_cache_misses(config, cpu, evsel, avg, out, st);
else
- print_metric(config, ctxp, NULL, NULL, "of all dTLB cache hits", 0);
+ print_metric(config, ctxp, NULL, NULL, "of all dTLB cache accesses", 0);
} else if (
evsel->core.attr.type == PERF_TYPE_HW_CACHE &&
evsel->core.attr.config == ( PERF_COUNT_HW_CACHE_ITLB |
if (runtime_stat_n(st, STAT_ITLB_CACHE, ctx, cpu) != 0)
print_itlb_cache_misses(config, cpu, evsel, avg, out, st);
else
- print_metric(config, ctxp, NULL, NULL, "of all iTLB cache hits", 0);
+ print_metric(config, ctxp, NULL, NULL, "of all iTLB cache accesses", 0);
} else if (
evsel->core.attr.type == PERF_TYPE_HW_CACHE &&
evsel->core.attr.config == ( PERF_COUNT_HW_CACHE_LL |
if (runtime_stat_n(st, STAT_LL_CACHE, ctx, cpu) != 0)
print_ll_cache_misses(config, cpu, evsel, avg, out, st);
else
- print_metric(config, ctxp, NULL, NULL, "of all LL-cache hits", 0);
+ print_metric(config, ctxp, NULL, NULL, "of all LL-cache accesses", 0);
} else if (evsel__match(evsel, HARDWARE, HW_CACHE_MISSES)) {
total = runtime_stat_avg(st, STAT_CACHEREFS, ctx, cpu);
__u32 seq_num = ctx->meta->seq_num;
struct bpf_map *map = ctx->map;
struct key_t *key = ctx->key;
+ struct key_t tmp_key;
__u64 *val = ctx->value;
+ __u64 tmp_val = 0;
+ int ret;
if (in_test_mode) {
/* test mode is used by selftests to
if (key == (void *)0 || val == (void *)0)
return 0;
+	/* Update the value and then delete the <key, value> pair.
+	 * This should not affect the existing 'val', which is still
+	 * accessible under RCU.
+ */
+ __builtin_memcpy(&tmp_key, key, sizeof(struct key_t));
+ ret = bpf_map_update_elem(&hashmap1, &tmp_key, &tmp_val, 0);
+ if (ret)
+ return 0;
+ ret = bpf_map_delete_elem(&hashmap1, &tmp_key);
+ if (ret)
+ return 0;
+
key_sum_a += key->a;
key_sum_b += key->b;
key_sum_c += key->c;
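The same update-then-delete step can be driven from user space with libbpf's map helpers; a sketch assuming a map fd obtained elsewhere and the test's key layout (field types assumed to be __u32):

	#include <bpf/bpf.h>
	#include <linux/types.h>

	struct key_t { __u32 a, b, c; };   /* assumed to match the BPF side */

	/* Update the value for 'key', then delete the pair, as the iterator
	 * program above does from BPF context.
	 */
	static int poke_map(int map_fd, const struct key_t *key)
	{
		__u64 zero = 0;
		int err;

		err = bpf_map_update_elem(map_fd, key, &zero, BPF_ANY);
		if (err)
			return err;
		return bpf_map_delete_elem(map_fd, key);
	}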
int i;
/* Instruction lengths starting at ss_start */
int ss_size[4] = {
- 3, /* xor */
+ 2, /* xor */
2, /* cpuid */
5, /* mov */
2, /* rdmsr */
echo "PASS: neigh get"
}
+kci_test_bridge_parent_id()
+{
+ local ret=0
+ sysfsnet=/sys/bus/netdevsim/devices/netdevsim
+ probed=false
+
+ if [ ! -w /sys/bus/netdevsim/new_device ] ; then
+ modprobe -q netdevsim
+ check_err $?
+ if [ $ret -ne 0 ]; then
+ echo "SKIP: bridge_parent_id can't load netdevsim"
+ return $ksft_skip
+ fi
+ probed=true
+ fi
+
+ echo "10 1" > /sys/bus/netdevsim/new_device
+ while [ ! -d ${sysfsnet}10 ] ; do :; done
+ echo "20 1" > /sys/bus/netdevsim/new_device
+ while [ ! -d ${sysfsnet}20 ] ; do :; done
+ udevadm settle
+ dev10=`ls ${sysfsnet}10/net/`
+ dev20=`ls ${sysfsnet}20/net/`
+
+ ip link add name test-bond0 type bond mode 802.3ad
+ ip link set dev $dev10 master test-bond0
+ ip link set dev $dev20 master test-bond0
+ ip link add name test-br0 type bridge
+ ip link set dev test-bond0 master test-br0
+ check_err $?
+
+ # clean up any leftovers
+ ip link del dev test-br0
+ ip link del dev test-bond0
+ echo 20 > /sys/bus/netdevsim/del_device
+ echo 10 > /sys/bus/netdevsim/del_device
+ $probed && rmmod netdevsim
+
+ if [ $ret -ne 0 ]; then
+ echo "FAIL: bridge_parent_id"
+ return 1
+ fi
+ echo "PASS: bridge_parent_id"
+}
+
kci_test_rtnl()
{
local ret=0
check_err $?
kci_test_neigh_get
check_err $?
+ kci_test_bridge_parent_id
+ check_err $?
kci_del_dummy
return $ret
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
+#include <unistd.h>
#include <asm/cputable.h>
{
char *p;
- /* SAO was introduced in 2.06 and removed in 3.1 */
+ /*
+ * SAO was introduced in 2.06 and removed in 3.1. It's disabled in
+ * guests/LPARs by default, so also skip if we are running in a guest.
+ */
SKIP_IF(!have_hwcap(PPC_FEATURE_ARCH_2_06) ||
- have_hwcap2(PPC_FEATURE2_ARCH_3_1));
+ have_hwcap2(PPC_FEATURE2_ARCH_3_1) ||
+ access("/proc/device-tree/rtas/ibm,hypertas-functions", F_OK) == 0);
/*
* Ensure we can ask for PROT_SAO.
}
if (shift)
- printf("%u kB hugepages\n", 1 << shift);
+ printf("%u kB hugepages\n", 1 << (shift - 10));
else
printf("Default size hugepages\n");
printf("Mapping %lu Mbytes\n", (unsigned long)length >> 20);