John Stultz <johnstul@us.ibm.com>
<jon.toppins+linux@gmail.com> <jtoppins@cumulusnetworks.com>
<jon.toppins+linux@gmail.com> <jtoppins@redhat.com>
+Jonas Gorski <jonas.gorski@gmail.com> <jogo@openwrt.org>
Jordan Crouse <jordan@cosmicpenguin.net> <jcrouse@codeaurora.org>
<josh@joshtriplett.org> <josh@freedesktop.org>
<josh@joshtriplett.org> <josh@kernel.org>
What: /sys/bus/platform/drivers/ufshcd/*/rpm_lvl
What: /sys/bus/platform/devices/*.ufs/rpm_lvl
Date: September 2014
-Contact: Subhash Jadavani <subhashj@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This entry can be used to set or show the UFS device
runtime power management level. The current driver
implementation supports 7 levels with the following target states:
What: /sys/bus/platform/drivers/ufshcd/*/rpm_target_dev_state
What: /sys/bus/platform/devices/*.ufs/rpm_target_dev_state
Date: February 2018
-Contact: Subhash Jadavani <subhashj@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This entry shows the target power mode of a UFS device
for the chosen runtime power management level.
What: /sys/bus/platform/drivers/ufshcd/*/rpm_target_link_state
What: /sys/bus/platform/devices/*.ufs/rpm_target_link_state
Date: February 2018
-Contact: Subhash Jadavani <subhashj@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This entry shows the target state of a UFS UIC link
for the chosen runtime power management level.
What: /sys/bus/platform/drivers/ufshcd/*/spm_lvl
What: /sys/bus/platform/devices/*.ufs/spm_lvl
Date: September 2014
-Contact: Subhash Jadavani <subhashj@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This entry can be used to set or show the UFS device
system power management level. The current driver
implementation supports 7 levels with the following target states:
What: /sys/bus/platform/drivers/ufshcd/*/spm_target_dev_state
What: /sys/bus/platform/devices/*.ufs/spm_target_dev_state
Date: February 2018
-Contact: Subhash Jadavani <subhashj@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This entry shows the target power mode of a UFS device
for the chosen system power management level.
What: /sys/bus/platform/drivers/ufshcd/*/spm_target_link_state
What: /sys/bus/platform/devices/*.ufs/spm_target_link_state
Date: February 2018
-Contact: Subhash Jadavani <subhashj@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This entry shows the target state of a UFS UIC link
for the chosen system power management level.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/monitor_enable
What: /sys/bus/platform/devices/*.ufs/monitor/monitor_enable
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows whether the performance monitor is enabled
and it can be used to start/stop the monitor. When the monitor
is stopped, the collected performance data is also cleared.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/monitor_chunk_size
What: /sys/bus/platform/devices/*.ufs/monitor/monitor_chunk_size
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file tells the monitor to focus on requests transferring
data of a specific chunk size (in bytes). 0 means any chunk size.
It can only be changed while the monitor is disabled.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_total_sectors
What: /sys/bus/platform/devices/*.ufs/monitor/read_total_sectors
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows how many sectors (512 bytes each) have been
sent from the device to the host since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_total_busy
What: /sys/bus/platform/devices/*.ufs/monitor/read_total_busy
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows how long (in microseconds) has been spent
sending data from the device to the host since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_nr_requests
What: /sys/bus/platform/devices/*.ufs/monitor/read_nr_requests
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows how many read requests have been sent since
the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_max
What: /sys/bus/platform/devices/*.ufs/monitor/read_req_latency_max
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows the maximum latency (in microseconds) of
read requests since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_min
What: /sys/bus/platform/devices/*.ufs/monitor/read_req_latency_min
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows the minimum latency (in microseconds) of
read requests since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_avg
What: /sys/bus/platform/devices/*.ufs/monitor/read_req_latency_avg
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows the average latency (in microseconds) of
read requests since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/read_req_latency_sum
What: /sys/bus/platform/devices/*.ufs/monitor/read_req_latency_sum
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows the total latency (in microseconds) of
read requests sent since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_total_sectors
What: /sys/bus/platform/devices/*.ufs/monitor/write_total_sectors
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows how many sectors (512 bytes each) have been sent
from the host to the device since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_total_busy
What: /sys/bus/platform/devices/*.ufs/monitor/write_total_busy
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows how long (in microseconds) has been spent
sending data from the host to the device since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_nr_requests
What: /sys/bus/platform/devices/*.ufs/monitor/write_nr_requests
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows how many write requests have been sent since
the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_max
What: /sys/bus/platform/devices/*.ufs/monitor/write_req_latency_max
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows the maximum latency (in microseconds) of write
requests since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_min
What: /sys/bus/platform/devices/*.ufs/monitor/write_req_latency_min
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows the minimum latency (in microseconds) of write
requests since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_avg
What: /sys/bus/platform/devices/*.ufs/monitor/write_req_latency_avg
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows the average latency (in microseconds) of write
requests since the monitor was started.
What: /sys/bus/platform/drivers/ufshcd/*/monitor/write_req_latency_sum
What: /sys/bus/platform/devices/*.ufs/monitor/write_req_latency_sum
Date: January 2021
-Contact: Can Guo <cang@codeaurora.org>
+Contact: Can Guo <quic_cang@quicinc.com>
Description: This file shows the total latency (in microseconds) of write
requests since the monitor was started.
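The monitor attributes above form a simple polled interface. As a hedged illustration (not part of the ABI file), a minimal userspace sketch could enable the monitor, wait, and read back a few counters; the device path below is a placeholder for the real "*.ufs" platform device, and the 5 second sampling window is arbitrary.
/* Illustrative only: poll the UFS monitor attributes documented above. */
#include <stdio.h>
#include <unistd.h>

#define MON_DIR "/sys/bus/platform/devices/PLACEHOLDER.ufs/monitor/"

static long read_attr(const char *name)
{
	char path[256];
	long val = -1;
	FILE *f;

	snprintf(path, sizeof(path), MON_DIR "%s", name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%ld", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(void)
{
	FILE *f = fopen(MON_DIR "monitor_enable", "w");

	if (!f)
		return 1;
	fputs("1", f);			/* start the monitor */
	fclose(f);

	sleep(5);			/* let some I/O accumulate */

	printf("read:  %ld requests, avg latency %ld us\n",
	       read_attr("read_nr_requests"),
	       read_attr("read_req_latency_avg"));
	printf("write: %ld requests, avg latency %ld us\n",
	       read_attr("write_nr_requests"),
	       read_attr("write_req_latency_avg"));
	return 0;
}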
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_presv_us_en
What: /sys/bus/platform/devices/*.ufs/device_descriptor/wb_presv_us_en
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows if preserve user-space was configured.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_shared_alloc_units
What: /sys/bus/platform/devices/*.ufs/device_descriptor/wb_shared_alloc_units
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows the shared allocated units of WB buffer.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/device_descriptor/wb_type
What: /sys/bus/platform/devices/*.ufs/device_descriptor/wb_type
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows the configured WB type.
0x1 for shared buffer mode. 0x0 for dedicated buffer mode.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_buff_cap_adj
What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_buff_cap_adj
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows the total user-space decrease in shared
buffer mode.
The value of this parameter is 3 for TLC NAND when SLC mode
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_alloc_units
What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_max_alloc_units
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows the maximum total WriteBooster Buffer size
supported by the entire device.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_max_wb_luns
What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_max_wb_luns
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows the maximum number of LUNs that can support
WriteBooster.
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_red_type
What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_sup_red_type
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: The supportability of user space reduction mode
and preserve user space mode.
00h: WriteBooster Buffer can be configured only in
What: /sys/bus/platform/drivers/ufshcd/*/geometry_descriptor/wb_sup_wb_type
What: /sys/bus/platform/devices/*.ufs/geometry_descriptor/wb_sup_wb_type
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: The supportability of WriteBooster Buffer type.
=== ==========================================================
What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_enable
What: /sys/bus/platform/devices/*.ufs/flags/wb_enable
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows the status of WriteBooster.
== ============================
What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_en
What: /sys/bus/platform/devices/*.ufs/flags/wb_flush_en
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows if flush is enabled.
== =================================
What: /sys/bus/platform/drivers/ufshcd/*/flags/wb_flush_during_h8
What: /sys/bus/platform/devices/*.ufs/flags/wb_flush_during_h8
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: Flush WriteBooster Buffer during hibernate state.
== =================================================
What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_avail_buf
What: /sys/bus/platform/devices/*.ufs/attributes/wb_avail_buf
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows the amount of unused WriteBooster buffer
available.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_cur_buf
What: /sys/bus/platform/devices/*.ufs/attributes/wb_cur_buf
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows the amount of unused current buffer.
The file is read only.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_flush_status
What: /sys/bus/platform/devices/*.ufs/attributes/wb_flush_status
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows the flush operation status.
What: /sys/bus/platform/drivers/ufshcd/*/attributes/wb_life_time_est
What: /sys/bus/platform/devices/*.ufs/attributes/wb_life_time_est
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows an indication of the WriteBooster Buffer
lifetime based on the number of program/erase cycles performed
What: /sys/class/scsi_device/*/device/unit_descriptor/wb_buf_alloc_units
Date: June 2020
-Contact: Asutosh Das <asutoshd@codeaurora.org>
+Contact: Asutosh Das <quic_asutoshd@quicinc.com>
Description: This entry shows the configured size of WriteBooster buffer.
0400h corresponds to 4GB.
privileged ISA, with the following known exceptions (more exceptions may be
added, but only if it can be demonstrated that the user ABI is not broken):
- * The :fence.i: instruction cannot be directly executed by userspace
+ * The ``fence.i`` instruction cannot be directly executed by userspace
programs (it may still be executed in userspace via a
kernel-controlled mechanism such as the vDSO).
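As a hedged illustration of that last point (not part of this patch), userspace code that generates instructions, such as a JIT, would not issue fence.i itself but would ask the toolchain/kernel to synchronize the instruction cache, for example via the GCC/Clang builtin, which on riscv-linux is backed by the kernel-controlled mechanism described above:
/* Illustrative sketch: publish freshly written code without executing fence.i directly. */
#include <stddef.h>
#include <string.h>

void publish_code(void *dst, const void *src, size_t len)
{
	memcpy(dst, src, len);		/* write the new instructions */
	__builtin___clear_cache((char *)dst, (char *)dst + len);	/* icache sync via kernel/vDSO */
}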
F: drivers/spi/spi-bcm63xx-hsspi.c
F: drivers/spi/spi-bcmbca-hsspi.c
+BROADCOM BCM6348/BCM6358 SPI CONTROLLER DRIVER
+M: Jonas Gorski <jonas.gorski@gmail.com>
+L: linux-spi@vger.kernel.org
+S: Odd Fixes
+F: Documentation/devicetree/bindings/spi/spi-bcm63xx.txt
+F: drivers/spi/spi-bcm63xx.c
+
BROADCOM ETHERNET PHY DRIVERS
M: Florian Fainelli <florian.fainelli@broadcom.com>
R: Broadcom internal kernel review list <bcm-kernel-feedback-list@broadcom.com>
F: drivers/input/touchscreen/resistive-adc-touch.c
GENERIC STRING LIBRARY
+M: Kees Cook <keescook@chromium.org>
R: Andy Shevchenko <andy@kernel.org>
-S: Maintained
+L: linux-hardening@vger.kernel.org
+S: Supported
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git for-next/hardening
F: include/linux/string.h
F: include/linux/string_choices.h
F: include/linux/string_helpers.h
F: drivers/soc/microchip/
MICROCHIP SPI DRIVER
-M: Tudor Ambarus <tudor.ambarus@linaro.org>
+M: Ryan Wanner <ryan.wanner@microchip.com>
S: Supported
F: drivers/spi/spi-atmel.*
VERSION = 6
PATCHLEVEL = 5
SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
NAME = Hurr durr I'ma ninja sloth
# *DOCUMENTATION*
return 0;
}
-static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
-{
- BUG();
- return 0;
-}
-
static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
{
BUG();
(_PAGE_PTE | H_PAGE_THP_HUGE));
}
-static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
-{
- return (((pmd_raw(pmd_a) ^ pmd_raw(pmd_b)) & ~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0);
-}
-
static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
{
return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE));
return region_id;
}
+static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
+{
+ return (((pmd_raw(pmd_a) ^ pmd_raw(pmd_b)) & ~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0);
+}
+
#define hash__pmd_bad(pmd) (pmd_val(pmd) & H_PMD_BAD_BITS)
#define hash__pud_bad(pud) (pud_val(pud) & H_PUD_BAD_BITS)
static inline int hash__p4d_bad(p4d_t p4d)
* Copyright (C) 2007 Ben. Herrenschmidt (benh@kernel.crashing.org), IBM Corp.
*/
+#include <linux/linkage.h>
#include <linux/threads.h>
#include <asm/reg.h>
#include <asm/page.h>
#define SPECIAL_EXC_LOAD(reg, name) \
ld reg, (SPECIAL_EXC_##name * 8 + SPECIAL_EXC_FRAME_OFFS)(r1)
-special_reg_save:
+SYM_CODE_START_LOCAL(special_reg_save)
/*
* We only need (or have stack space) to save this stuff if
* we interrupted the kernel.
SPECIAL_EXC_STORE(r10,CSRR1)
blr
+SYM_CODE_END(special_reg_save)
-ret_from_level_except:
+SYM_CODE_START_LOCAL(ret_from_level_except)
ld r3,_MSR(r1)
andi. r3,r3,MSR_PR
beq 1f
mtxer r11
blr
+SYM_CODE_END(ret_from_level_except)
.macro ret_from_level srr0 srr1 paca_ex scratch
bl ret_from_level_except
mfspr r13,\scratch
.endm
-ret_from_crit_except:
+SYM_CODE_START_LOCAL(ret_from_crit_except)
ret_from_level SPRN_CSRR0 SPRN_CSRR1 PACA_EXCRIT SPRN_SPRG_CRIT_SCRATCH
rfci
+SYM_CODE_END(ret_from_crit_except)
-ret_from_mc_except:
+SYM_CODE_START_LOCAL(ret_from_mc_except)
ret_from_level SPRN_MCSRR0 SPRN_MCSRR1 PACA_EXMC SPRN_SPRG_MC_SCRATCH
rfmci
+SYM_CODE_END(ret_from_mc_except)
/* Exception prolog code for all exceptions */
#define EXCEPTION_PROLOG(n, intnum, type, addition) \
* r14 and r15 containing the fault address and error code, with the
* original values stashed away in the PACA
*/
-storage_fault_common:
+SYM_CODE_START_LOCAL(storage_fault_common)
addi r3,r1,STACK_INT_FRAME_REGS
bl do_page_fault
b interrupt_return
+SYM_CODE_END(storage_fault_common)
/*
* Alignment exception doesn't fit entirely in the 0x100 bytes so it
* continues here.
*/
-alignment_more:
+SYM_CODE_START_LOCAL(alignment_more)
addi r3,r1,STACK_INT_FRAME_REGS
bl alignment_exception
REST_NVGPRS(r1)
b interrupt_return
+SYM_CODE_END(alignment_more)
/*
* Trampolines used when spotting a bad kernel stack pointer in
BAD_STACK_TRAMPOLINE(0xf00)
BAD_STACK_TRAMPOLINE(0xf20)
- .globl bad_stack_book3e
-bad_stack_book3e:
+_GLOBAL(bad_stack_book3e)
/* XXX: Needs to make SPRN_SPRG_GEN depend on exception type */
mfspr r10,SPRN_SRR0; /* read SRR0 before touching stack */
ld r1,PACAEMERGSP(r13)
* ever takes any parameters, the SCOM code must also be updated to
* provide them.
*/
- .globl a2_tlbinit_code_start
-a2_tlbinit_code_start:
+_GLOBAL(a2_tlbinit_code_start)
ori r11,r3,MAS0_WQ_ALLWAYS
oris r11,r11,MAS0_ESEL(3)@h /* Use way 3: workaround A2 erratum 376 */
mflr r28
b 3b
- .globl init_core_book3e
-init_core_book3e:
+_GLOBAL(init_core_book3e)
/* Establish the interrupt vector base */
tovirt(r2,r2)
LOAD_REG_ADDR(r3, interrupt_base_book3e)
sync
blr
-init_thread_book3e:
+SYM_CODE_START_LOCAL(init_thread_book3e)
lis r3,(SPRN_EPCR_ICM | SPRN_EPCR_GICM)@h
mtspr SPRN_EPCR,r3
mtspr SPRN_TSR,r3
blr
+SYM_CODE_END(init_thread_book3e)
_GLOBAL(__setup_base_ivors)
SET_IVOR(0, 0x020) /* Critical Input */
static int ssb_prctl_get(struct task_struct *task)
{
+ /*
+ * The STF_BARRIER feature is on by default, so if it's off that means
+ * firmware has explicitly said the CPU is not vulnerable via either
+ * the hypercall or device tree.
+ */
+ if (!security_ftr_enabled(SEC_FTR_STF_BARRIER))
+ return PR_SPEC_NOT_AFFECTED;
+
+ /*
+ * If the system's CPU has no known barrier (see setup_stf_barrier())
+ * then assume that the CPU is not vulnerable.
+ */
if (stf_enabled_flush_types == STF_BARRIER_NONE)
- /*
- * We don't have an explicit signal from firmware that we're
- * vulnerable or not, we only have certain CPU revisions that
- * are known to be vulnerable.
- *
- * We assume that if we're on another CPU, where the barrier is
- * NONE, then we are not vulnerable.
- */
return PR_SPEC_NOT_AFFECTED;
- else
- /*
- * If we do have a barrier type then we are vulnerable. The
- * barrier is not a global or per-process mitigation, so the
- * only value we can report here is PR_SPEC_ENABLE, which
- * appears as "vulnerable" in /proc.
- */
- return PR_SPEC_ENABLE;
-
- return -EINVAL;
+
+ /*
+ * Otherwise the CPU is vulnerable. The barrier is not a global or
+ * per-process mitigation, so the only value that can be reported here
+ * is PR_SPEC_ENABLE, which appears as "vulnerable" in /proc.
+ */
+ return PR_SPEC_ENABLE;
}
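For reference, a hedged userspace sketch (not part of the patch) showing how the value computed by ssb_prctl_get() above surfaces through the speculation-control prctl interface; the output strings are illustrative:
#include <stdio.h>
#include <sys/prctl.h>
#include <linux/prctl.h>

int main(void)
{
	long r = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0);

	if (r < 0)
		perror("prctl");
	else if (r == PR_SPEC_NOT_AFFECTED)
		puts("ssb: not affected");
	else if (r & PR_SPEC_ENABLE)
		puts("ssb: vulnerable (no per-task mitigation control)");
	else
		printf("ssb: flags 0x%lx\n", (unsigned long)r);
	return 0;
}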
int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
static long native_hpte_remove(unsigned long hpte_group)
{
+ unsigned long hpte_v, flags;
struct hash_pte *hptep;
int i;
int slot_offset;
- unsigned long hpte_v;
+
+ local_irq_save(flags);
DBG_LOW(" remove(group=%lx)\n", hpte_group);
slot_offset &= 0x7;
}
- if (i == HPTES_PER_GROUP)
- return -1;
+ if (i == HPTES_PER_GROUP) {
+ i = -1;
+ goto out;
+ }
/* Invalidate the hpte. NOTE: this also unlocks it */
release_hpte_lock();
hptep->v = 0;
-
+out:
+ local_irq_restore(flags);
return i;
}
}
/*
- * Linux requires the following extensions, so we may as well
- * always set them.
- */
- set_bit(RISCV_ISA_EXT_ZICSR, isainfo->isa);
- set_bit(RISCV_ISA_EXT_ZIFENCEI, isainfo->isa);
-
- /*
* These ones were as they were part of the base ISA when the
* port & dt-bindings were upstreamed, and so can be set
* unconditionally where `i` is in riscv,isa on DT systems.
*/
if (acpi_disabled) {
+ set_bit(RISCV_ISA_EXT_ZICSR, isainfo->isa);
+ set_bit(RISCV_ISA_EXT_ZIFENCEI, isainfo->isa);
set_bit(RISCV_ISA_EXT_ZICNTR, isainfo->isa);
set_bit(RISCV_ISA_EXT_ZIHPM, isainfo->isa);
}
*/
crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
search_start,
- min(search_end, (unsigned long) SZ_4G));
+ min(search_end, (unsigned long)(SZ_4G - 1)));
if (crash_base == 0) {
/* Try again without restricting region to 32bit addressible memory */
crash_base = memblock_phys_alloc_range(crash_size, PMD_SIZE,
unsigned long __xchg_u32(volatile u32 *m, u32 new);
void __xchg_called_with_bad_pointer(void);
-static inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
+static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
{
switch (size) {
case 4:
return (load32 & mask) >> bit_shift;
}
-static inline unsigned long
+static __always_inline unsigned long
__arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
{
switch (size) {
os_check_bugs();
}
-void apply_ibt_endbr(s32 *start, s32 *end)
+void apply_seal_endbr(s32 *start, s32 *end)
{
}
.popsection
/*
- * The unwinder expects the last frame on the stack to always be at the same
- * offset from the end of the page, which allows it to validate the stack.
- * Calling schedule_tail() directly would break that convention because its an
- * asmlinkage function so its argument has to be pushed on the stack. This
- * wrapper creates a proper "end of stack" frame header before the call.
- */
-.pushsection .text, "ax"
-SYM_FUNC_START(schedule_tail_wrapper)
- FRAME_BEGIN
-
- pushl %eax
- call schedule_tail
- popl %eax
-
- FRAME_END
- RET
-SYM_FUNC_END(schedule_tail_wrapper)
-.popsection
-
-/*
* A newly forked process directly context switches into this address.
*
* eax: prev task we switched from
* edi: kernel thread arg
*/
.pushsection .text, "ax"
-SYM_CODE_START(ret_from_fork)
- call schedule_tail_wrapper
+SYM_CODE_START(ret_from_fork_asm)
+ movl %esp, %edx /* regs */
- testl %ebx, %ebx
- jnz 1f /* kernel threads are uncommon */
+ /* return address for the stack unwinder */
+ pushl $.Lsyscall_32_done
-2:
- /* When we fork, we trace the syscall return in the child, too. */
- movl %esp, %eax
- call syscall_exit_to_user_mode
- jmp .Lsyscall_32_done
+ FRAME_BEGIN
+ /* prev already in EAX */
+ movl %ebx, %ecx /* fn */
+ pushl %edi /* fn_arg */
+ call ret_from_fork
+ addl $4, %esp
+ FRAME_END
- /* kernel thread */
-1: movl %edi, %eax
- CALL_NOSPEC ebx
- /*
- * A kernel thread is allowed to return here after successfully
- * calling kernel_execve(). Exit to userspace to complete the execve()
- * syscall.
- */
- movl $0, PT_EAX(%esp)
- jmp 2b
-SYM_CODE_END(ret_from_fork)
+ RET
+SYM_CODE_END(ret_from_fork_asm)
.popsection
SYM_ENTRY(__begin_SYSENTER_singlestep_region, SYM_L_GLOBAL, SYM_A_NONE)
* r12: kernel thread arg
*/
.pushsection .text, "ax"
- __FUNC_ALIGN
-SYM_CODE_START_NOALIGN(ret_from_fork)
- UNWIND_HINT_END_OF_STACK
+SYM_CODE_START(ret_from_fork_asm)
+ UNWIND_HINT_REGS
ANNOTATE_NOENDBR // copy_thread
CALL_DEPTH_ACCOUNT
- movq %rax, %rdi
- call schedule_tail /* rdi: 'prev' task parameter */
- testq %rbx, %rbx /* from kernel_thread? */
- jnz 1f /* kernel threads are uncommon */
+ movq %rax, %rdi /* prev */
+ movq %rsp, %rsi /* regs */
+ movq %rbx, %rdx /* fn */
+ movq %r12, %rcx /* fn_arg */
+ call ret_from_fork
-2:
- UNWIND_HINT_REGS
- movq %rsp, %rdi
- call syscall_exit_to_user_mode /* returns with IRQs disabled */
jmp swapgs_restore_regs_and_return_to_usermode
-
-1:
- /* kernel thread */
- UNWIND_HINT_END_OF_STACK
- movq %r12, %rdi
- CALL_NOSPEC rbx
- /*
- * A kernel thread is allowed to return here after successfully
- * calling kernel_execve(). Exit to userspace to complete the execve()
- * syscall.
- */
- movq $0, RAX(%rsp)
- jmp 2b
-SYM_CODE_END(ret_from_fork)
+SYM_CODE_END(ret_from_fork_asm)
.popsection
.macro DEBUG_ENTRY_ASSERT_IRQS_OFF
struct perf_event *leader = event->group_leader;
struct perf_event *sibling = NULL;
+ /*
+ * When this memload event is also the first event (no group
+ * exists yet), then there is no aux event before it.
+ */
+ if (leader == event)
+ return -ENODATA;
+
if (!is_mem_loads_aux_event(leader)) {
for_each_sibling_event(sibling, leader) {
if (is_mem_loads_aux_event(sibling))
extern void apply_alternatives(struct alt_instr *start, struct alt_instr *end);
extern void apply_retpolines(s32 *start, s32 *end);
extern void apply_returns(s32 *start, s32 *end);
-extern void apply_ibt_endbr(s32 *start, s32 *end);
+extern void apply_seal_endbr(s32 *start, s32 *end);
extern void apply_fineibt(s32 *start_retpoline, s32 *end_retpoine,
s32 *start_cfi, s32 *end_cfi);
/*
* Create a dummy function pointer reference to prevent objtool from marking
* the function as needing to be "sealed" (i.e. ENDBR converted to NOP by
- * apply_ibt_endbr()).
+ * apply_seal_endbr()).
*/
#define IBT_NOSEAL(fname) \
".pushsection .discard.ibt_endbr_noseal\n\t" \
* JMP_NOSPEC and CALL_NOSPEC macros can be used instead of a simple
* indirect jmp/call which may be susceptible to the Spectre variant 2
* attack.
+ *
+ * NOTE: these do not take kCFI into account and are thus not comparable to C
+ * indirect calls, take care when using. The target of these should be an ENDBR
+ * instruction irrespective of kCFI.
*/
.macro JMP_NOSPEC reg:req
#ifdef CONFIG_RETPOLINE
__visible struct task_struct *__switch_to(struct task_struct *prev,
struct task_struct *next);
-asmlinkage void ret_from_fork(void);
+asmlinkage void ret_from_fork_asm(void);
+__visible void ret_from_fork(struct task_struct *prev, struct pt_regs *regs,
+ int (*fn)(void *), void *fn_arg);
/*
* This is the structure pointed to by thread.sp for an inactive task. The
#ifdef CONFIG_X86_KERNEL_IBT
+static void poison_cfi(void *addr);
+
static void __init_or_module poison_endbr(void *addr, bool warn)
{
u32 endbr, poison = gen_endbr_poison();
/*
* Generated by: objtool --ibt
+ *
+ * Seal the functions for indirect calls by clobbering the ENDBR instructions
+ * and the kCFI hash value.
*/
-void __init_or_module noinline apply_ibt_endbr(s32 *start, s32 *end)
+void __init_or_module noinline apply_seal_endbr(s32 *start, s32 *end)
{
s32 *s;
poison_endbr(addr, true);
if (IS_ENABLED(CONFIG_FINEIBT))
- poison_endbr(addr - 16, false);
+ poison_cfi(addr - 16);
}
}
#else
-void __init_or_module apply_ibt_endbr(s32 *start, s32 *end) { }
+void __init_or_module apply_seal_endbr(s32 *start, s32 *end) { }
#endif /* CONFIG_X86_KERNEL_IBT */
return 0;
}
+static void cfi_rewrite_endbr(s32 *start, s32 *end)
+{
+ s32 *s;
+
+ for (s = start; s < end; s++) {
+ void *addr = (void *)s + *s;
+
+ poison_endbr(addr+16, false);
+ }
+}
+
/* .retpoline_sites */
static int cfi_rand_callers(s32 *start, s32 *end)
{
return;
case CFI_FINEIBT:
+ /* place the FineIBT preamble at func()-16 */
ret = cfi_rewrite_preamble(start_cfi, end_cfi);
if (ret)
goto err;
+ /* rewrite the callers to target func()-16 */
ret = cfi_rewrite_callers(start_retpoline, end_retpoline);
if (ret)
goto err;
+ /* now that nobody targets func()+0, remove ENDBR there */
+ cfi_rewrite_endbr(start_cfi, end_cfi);
+
if (builtin)
pr_info("Using FineIBT CFI\n");
return;
pr_err("Something went horribly wrong trying to rewrite the CFI implementation.\n");
}
+static inline void poison_hash(void *addr)
+{
+ *(u32 *)addr = 0;
+}
+
+static void poison_cfi(void *addr)
+{
+ switch (cfi_mode) {
+ case CFI_FINEIBT:
+ /*
+ * __cfi_\func:
+ * osp nopl (%rax)
+ * subl $0, %r10d
+ * jz 1f
+ * ud2
+ * 1: nop
+ */
+ poison_endbr(addr, false);
+ poison_hash(addr + fineibt_preamble_hash);
+ break;
+
+ case CFI_KCFI:
+ /*
+ * __cfi_\func:
+ * movl $0, %eax
+ * .skip 11, 0x90
+ */
+ poison_hash(addr + 1);
+ break;
+
+ default:
+ break;
+ }
+}
+
#else
static void __apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
{
}
+#ifdef CONFIG_X86_KERNEL_IBT
+static void poison_cfi(void *addr) { }
+#endif
+
#endif
void apply_fineibt(s32 *start_retpoline, s32 *end_retpoline,
*/
callthunks_patch_builtin_calls();
- apply_ibt_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end);
+ /*
+ * Seal all functions that do not have their address taken.
+ */
+ apply_seal_endbr(__ibt_endbr_seal, __ibt_endbr_seal_end);
#ifdef CONFIG_SMP
/* Patch to UP if other cpus not imminent. */
}
if (ibt_endbr) {
void *iseg = (void *)ibt_endbr->sh_addr;
- apply_ibt_endbr(iseg, iseg + ibt_endbr->sh_size);
+ apply_seal_endbr(iseg, iseg + ibt_endbr->sh_size);
}
if (locks) {
void *lseg = (void *)locks->sh_addr;
#include <linux/static_call.h>
#include <trace/events/power.h>
#include <linux/hw_breakpoint.h>
+#include <linux/entry-common.h>
#include <asm/cpu.h>
#include <asm/apic.h>
#include <linux/uaccess.h>
return do_set_thread_area_64(p, ARCH_SET_FS, tls);
}
+__visible void ret_from_fork(struct task_struct *prev, struct pt_regs *regs,
+ int (*fn)(void *), void *fn_arg)
+{
+ schedule_tail(prev);
+
+ /* Is this a kernel thread? */
+ if (unlikely(fn)) {
+ fn(fn_arg);
+ /*
+ * A kernel thread is allowed to return here after successfully
+ * calling kernel_execve(). Exit to userspace to complete the
+ * execve() syscall.
+ */
+ regs->ax = 0;
+ }
+
+ syscall_exit_to_user_mode(regs);
+}
+
int copy_thread(struct task_struct *p, const struct kernel_clone_args *args)
{
unsigned long clone_flags = args->flags;
frame = &fork_frame->frame;
frame->bp = encode_frame_pointer(childregs);
- frame->ret_addr = (unsigned long) ret_from_fork;
+ frame->ret_addr = (unsigned long) ret_from_fork_asm;
p->thread.sp = (unsigned long) fork_frame;
p->thread.io_bitmap = NULL;
p->thread.iopl_warn = 0;
/*
* arch/xtensa/kernel/align.S
*
- * Handle unalignment exceptions in kernel space.
+ * Handle unalignment and load/store exceptions.
*
* This file is subject to the terms and conditions of the GNU General
* Public License. See the file "COPYING" in the main directory of
#define LOAD_EXCEPTION_HANDLER
#endif
-#if XCHAL_UNALIGNED_STORE_EXCEPTION || defined LOAD_EXCEPTION_HANDLER
+#if XCHAL_UNALIGNED_STORE_EXCEPTION || defined CONFIG_XTENSA_LOAD_STORE
+#define STORE_EXCEPTION_HANDLER
+#endif
+
+#if defined LOAD_EXCEPTION_HANDLER || defined STORE_EXCEPTION_HANDLER
#define ANY_EXCEPTION_HANDLER
#endif
-#if XCHAL_HAVE_WINDOWED
+#if XCHAL_HAVE_WINDOWED && defined CONFIG_MMU
#define UNALIGNED_USER_EXCEPTION
#endif
-/* First-level exception handler for unaligned exceptions.
- *
- * Note: This handler works only for kernel exceptions. Unaligned user
- * access should get a seg fault.
- */
-
/* Big and little endian 16-bit values are located in
* different halves of a register. HWORD_START helps to
* abstract the notion of extracting a 16-bit value from a
#ifdef ANY_EXCEPTION_HANDLER
ENTRY(fast_unaligned)
-#if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION
-
call0 .Lsave_and_load_instruction
/* Analyze the instruction (load or store?). */
/* 'store indicator bit' not set, jump */
_bbci.l a4, OP1_SI_BIT + INSN_OP1, .Lload
-#endif
-#if XCHAL_UNALIGNED_STORE_EXCEPTION
+#ifdef STORE_EXCEPTION_HANDLER
/* Store: Jump to table entry to get the value in the source register.*/
addx8 a5, a6, a5
jx a5 # jump into table
#endif
-#if XCHAL_UNALIGNED_LOAD_EXCEPTION
+#ifdef LOAD_EXCEPTION_HANDLER
/* Load: Load memory address. */
mov a14, a3 ; _j .Lexit; .align 8
mov a15, a3 ; _j .Lexit; .align 8
#endif
-#if XCHAL_UNALIGNED_STORE_EXCEPTION
+#ifdef STORE_EXCEPTION_HANDLER
.Lstore_table:
l32i a3, a2, PT_AREG0; _j .Lstore_w; .align 8
mov a3, a1; _j .Lstore_w; .align 8 # fishy??
mov a3, a15 ; _j .Lstore_w; .align 8
#endif
-#ifdef ANY_EXCEPTION_HANDLER
/* We cannot handle this exception. */
.extern _kernel_exception
2: movi a0, _user_exception
jx a0
-#endif
-#if XCHAL_UNALIGNED_STORE_EXCEPTION
+
+#ifdef STORE_EXCEPTION_HANDLER
# a7: instruction pointer, a4: instruction, a3: value
.Lstore_w:
s32i a6, a4, 4
#endif
#endif
-#ifdef ANY_EXCEPTION_HANDLER
+
.Lexit:
#if XCHAL_HAVE_LOOPS
rsr a4, lend # check if we reached LEND
__src_b a4, a4, a5 # a4 has the instruction
ret
-#endif
+
ENDPROC(fast_unaligned)
ENTRY(fast_unaligned_fixup)
#endif
{ EXCCAUSE_INTEGER_DIVIDE_BY_ZERO, 0, do_div0 },
/* EXCCAUSE_PRIVILEGED unhandled */
-#if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION
+#if XCHAL_UNALIGNED_LOAD_EXCEPTION || XCHAL_UNALIGNED_STORE_EXCEPTION || \
+ IS_ENABLED(CONFIG_XTENSA_LOAD_STORE)
#ifdef CONFIG_XTENSA_UNALIGNED_USER
{ EXCCAUSE_UNALIGNED, USER, fast_unaligned },
#endif
init += sizeof(TRANSPORT_TUNTAP_NAME) - 1;
if (*init == ',') {
- rem = split_if_spec(init + 1, &mac_str, &dev_name);
+ rem = split_if_spec(init + 1, &mac_str, &dev_name, NULL);
if (rem != NULL) {
pr_err("%s: extra garbage on specification : '%s'\n",
dev->name, rem);
rtnl_unlock();
pr_err("%s: error registering net device!\n", dev->name);
platform_device_unregister(&lp->pdev);
+ /* dev is freed by the iss_net_pdev_release callback */
return;
}
rtnl_unlock();
unsigned int slot_hashtable_size;
memset(profile, 0, sizeof(*profile));
- init_rwsem(&profile->lock);
+
+ /*
+ * profile->lock of an underlying device can nest inside profile->lock
+ * of a device-mapper device, so use a dynamic lock class to avoid
+ * false-positive lockdep reports.
+ */
+ lockdep_register_key(&profile->lockdep_key);
+ __init_rwsem(&profile->lock, "&profile->lock", &profile->lockdep_key);
if (num_slots == 0)
return 0;
profile->slots = kvcalloc(num_slots, sizeof(profile->slots[0]),
GFP_KERNEL);
if (!profile->slots)
- return -ENOMEM;
+ goto err_destroy;
profile->num_slots = num_slots;
{
if (!profile)
return;
+ lockdep_unregister_key(&profile->lockdep_key);
kvfree(profile->slot_hashtable);
kvfree_sensitive(profile->slots,
sizeof(profile->slots[0]) * profile->num_slots);
case REQ_FSEQ_DATA:
list_move_tail(&rq->flush.list, &fq->flush_data_in_flight);
spin_lock(&q->requeue_lock);
- list_add_tail(&rq->queuelist, &q->flush_list);
+ list_add(&rq->queuelist, &q->requeue_list);
spin_unlock(&q->requeue_lock);
blk_mq_kick_requeue_list(q);
break;
}
EXPORT_SYMBOL(blk_rq_init);
+/* Set start and alloc time when the allocated request is actually used */
+static inline void blk_mq_rq_time_init(struct request *rq, u64 alloc_time_ns)
+{
+ if (blk_mq_need_time_stamp(rq))
+ rq->start_time_ns = ktime_get_ns();
+ else
+ rq->start_time_ns = 0;
+
+#ifdef CONFIG_BLK_RQ_ALLOC_TIME
+ if (blk_queue_rq_alloc_time(rq->q))
+ rq->alloc_time_ns = alloc_time_ns ?: rq->start_time_ns;
+ else
+ rq->alloc_time_ns = 0;
+#endif
+}
+
static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
- struct blk_mq_tags *tags, unsigned int tag, u64 alloc_time_ns)
+ struct blk_mq_tags *tags, unsigned int tag)
{
struct blk_mq_ctx *ctx = data->ctx;
struct blk_mq_hw_ctx *hctx = data->hctx;
}
rq->timeout = 0;
- if (blk_mq_need_time_stamp(rq))
- rq->start_time_ns = ktime_get_ns();
- else
- rq->start_time_ns = 0;
rq->part = NULL;
-#ifdef CONFIG_BLK_RQ_ALLOC_TIME
- rq->alloc_time_ns = alloc_time_ns;
-#endif
rq->io_start_time_ns = 0;
rq->stats_sectors = 0;
rq->nr_phys_segments = 0;
}
static inline struct request *
-__blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data,
- u64 alloc_time_ns)
+__blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data)
{
unsigned int tag, tag_offset;
struct blk_mq_tags *tags;
tag = tag_offset + i;
prefetch(tags->static_rqs[tag]);
tag_mask &= ~(1UL << i);
- rq = blk_mq_rq_ctx_init(data, tags, tag, alloc_time_ns);
+ rq = blk_mq_rq_ctx_init(data, tags, tag);
rq_list_add(data->cached_rq, rq);
nr++;
}
* Try batched alloc if we want more than 1 tag.
*/
if (data->nr_tags > 1) {
- rq = __blk_mq_alloc_requests_batch(data, alloc_time_ns);
- if (rq)
+ rq = __blk_mq_alloc_requests_batch(data);
+ if (rq) {
+ blk_mq_rq_time_init(rq, alloc_time_ns);
return rq;
+ }
data->nr_tags = 1;
}
goto retry;
}
- return blk_mq_rq_ctx_init(data, blk_mq_tags_from_data(data), tag,
- alloc_time_ns);
+ rq = blk_mq_rq_ctx_init(data, blk_mq_tags_from_data(data), tag);
+ blk_mq_rq_time_init(rq, alloc_time_ns);
+ return rq;
}
static struct request *blk_mq_rq_cache_fill(struct request_queue *q,
return NULL;
plug->cached_rq = rq_list_next(rq);
+ blk_mq_rq_time_init(rq, 0);
}
rq->cmd_flags = opf;
tag = blk_mq_get_tag(&data);
if (tag == BLK_MQ_NO_TAG)
goto out_queue_exit;
- rq = blk_mq_rq_ctx_init(&data, blk_mq_tags_from_data(&data), tag,
- alloc_time_ns);
+ rq = blk_mq_rq_ctx_init(&data, blk_mq_tags_from_data(&data), tag);
+ blk_mq_rq_time_init(rq, alloc_time_ns);
rq->__data_len = 0;
rq->__sector = (sector_t) -1;
rq->bio = rq->biotail = NULL;
plug->cached_rq = rq_list_next(rq);
rq_qos_throttle(q, *bio);
+ blk_mq_rq_time_init(rq, 0);
rq->cmd_flags = (*bio)->bi_opf;
INIT_LIST_HEAD(&rq->queuelist);
return rq;
unsigned long *conv_zones_bitmap;
unsigned long *seq_zones_wlock;
unsigned int nr_zones;
- sector_t zone_sectors;
sector_t sector;
};
struct gendisk *disk = args->disk;
struct request_queue *q = disk->queue;
sector_t capacity = get_capacity(disk);
+ sector_t zone_sectors = q->limits.chunk_sectors;
+
+ /* Check for bad zones and holes in the zone report */
+ if (zone->start != args->sector) {
+ pr_warn("%s: Zone gap at sectors %llu..%llu\n",
+ disk->disk_name, args->sector, zone->start);
+ return -ENODEV;
+ }
+
+ if (zone->start >= capacity || !zone->len) {
+ pr_warn("%s: Invalid zone start %llu, length %llu\n",
+ disk->disk_name, zone->start, zone->len);
+ return -ENODEV;
+ }
/*
* All zones must have the same size, with the possible exception of a
* smaller last zone.
*/
- if (zone->start == 0) {
- if (zone->len == 0 || !is_power_of_2(zone->len)) {
- pr_warn("%s: Invalid zoned device with non power of two zone size (%llu)\n",
- disk->disk_name, zone->len);
- return -ENODEV;
- }
-
- args->zone_sectors = zone->len;
- args->nr_zones = (capacity + zone->len - 1) >> ilog2(zone->len);
- } else if (zone->start + args->zone_sectors < capacity) {
- if (zone->len != args->zone_sectors) {
+ if (zone->start + zone->len < capacity) {
+ if (zone->len != zone_sectors) {
pr_warn("%s: Invalid zoned device with non constant zone size\n",
disk->disk_name);
return -ENODEV;
}
- } else {
- if (zone->len > args->zone_sectors) {
- pr_warn("%s: Invalid zoned device with larger last zone size\n",
- disk->disk_name);
- return -ENODEV;
- }
- }
-
- /* Check for holes in the zone report */
- if (zone->start != args->sector) {
- pr_warn("%s: Zone gap at sectors %llu..%llu\n",
- disk->disk_name, args->sector, zone->start);
+ } else if (zone->len > zone_sectors) {
+ pr_warn("%s: Invalid zoned device with larger last zone size\n",
+ disk->disk_name);
return -ENODEV;
}
* @disk: Target disk
* @update_driver_data: Callback to update driver data on the frozen disk
*
- * Helper function for low-level device drivers to (re) allocate and initialize
- * a disk request queue zone bitmaps. This functions should normally be called
- * within the disk ->revalidate method for blk-mq based drivers. For BIO based
- * drivers only q->nr_zones needs to be updated so that the sysfs exposed value
- * is correct.
+ * Helper function for low-level device drivers to check and (re)allocate and
+ * initialize a disk request queue's zone bitmaps. This function should normally
+ * be called within the disk ->revalidate method for blk-mq based drivers.
+ * Before calling this function, the device driver must already have set the
+ * device zone size (chunk_sectors limit) and the max zone append limit.
+ * For BIO based drivers, this function cannot be used. BIO based device drivers
+ * only need to set disk->nr_zones so that the sysfs exposed value is correct.
* If the @update_driver_data callback function is not NULL, the callback is
* executed with the device request queue frozen after all zones have been
* checked.
void (*update_driver_data)(struct gendisk *disk))
{
struct request_queue *q = disk->queue;
- struct blk_revalidate_zone_args args = {
- .disk = disk,
- };
+ sector_t zone_sectors = q->limits.chunk_sectors;
+ sector_t capacity = get_capacity(disk);
+ struct blk_revalidate_zone_args args = { };
unsigned int noio_flag;
int ret;
if (WARN_ON_ONCE(!queue_is_mq(q)))
return -EIO;
- if (!get_capacity(disk))
- return -EIO;
+ if (!capacity)
+ return -ENODEV;
+
+ /*
+ * Checks that the device driver indicated a valid zone size and that
+ * the max zone append limit is set.
+ */
+ if (!zone_sectors || !is_power_of_2(zone_sectors)) {
+ pr_warn("%s: Invalid non power of two zone size (%llu)\n",
+ disk->disk_name, zone_sectors);
+ return -ENODEV;
+ }
+
+ if (!q->limits.max_zone_append_sectors) {
+ pr_warn("%s: Invalid 0 maximum zone append limit\n",
+ disk->disk_name);
+ return -ENODEV;
+ }
/*
* Ensure that all memory allocations in this context are done as if
* GFP_NOIO was specified.
*/
+ args.disk = disk;
+ args.nr_zones = (capacity + zone_sectors - 1) >> ilog2(zone_sectors);
noio_flag = memalloc_noio_save();
ret = disk->fops->report_zones(disk, 0, UINT_MAX,
blk_revalidate_zone_cb, &args);
* If zones were reported, make sure that the entire disk capacity
* has been checked.
*/
- if (ret > 0 && args.sector != get_capacity(disk)) {
+ if (ret > 0 && args.sector != capacity) {
pr_warn("%s: Missing zones from sector %llu\n",
disk->disk_name, args.sector);
ret = -ENODEV;
*/
blk_mq_freeze_queue(q);
if (ret > 0) {
- blk_queue_chunk_sectors(q, args.zone_sectors);
disk->nr_zones = args.nr_zones;
swap(disk->seq_zones_wlock, args.seq_zones_wlock);
swap(disk->conv_zones_bitmap, args.conv_zones_bitmap);
* zoned writes, start searching from the start of a zone.
*/
if (blk_rq_is_seq_zoned_write(rq))
- pos -= round_down(pos, rq->q->limits.chunk_sectors);
+ pos = round_down(pos, rq->q->limits.chunk_sectors);
while (node) {
rq = rb_entry_rq(node);
}
blk = be32_to_cpu(rdb->rdb_PartitionList);
put_dev_sector(sect);
- for (part = 1; blk>0 && part<=16; part++, put_dev_sector(sect)) {
+ for (part = 1; (s32) blk>0 && part<=16; part++, put_dev_sector(sect)) {
/* Read in terms partition table understands */
if (check_mul_overflow(blk, (sector_t) blksize, &blk)) {
pr_err("Dev %s: overflow calculating partition block %llu! Skipping partitions %u and beyond\n",
bool punit_disabled;
bool clear_runtime_mem;
bool d3hot_after_power_off;
+ bool interrupt_clear_with_0;
};
struct ivpu_hw_info;
vdev->wa.punit_disabled = ivpu_is_fpga(vdev);
vdev->wa.clear_runtime_mem = false;
vdev->wa.d3hot_after_power_off = true;
+
+ if (ivpu_device_id(vdev) == PCI_DEVICE_ID_MTL && ivpu_revision(vdev) < 4)
+ vdev->wa.interrupt_clear_with_0 = true;
}
static void ivpu_hw_timeouts_init(struct ivpu_device *vdev)
REGB_WR32(MTL_BUTTRESS_GLOBAL_INT_MASK, 0x1);
REGB_WR32(MTL_BUTTRESS_LOCAL_INT_MASK, BUTTRESS_IRQ_DISABLE_MASK);
REGV_WR64(MTL_VPU_HOST_SS_ICB_ENABLE_0, 0x0ull);
- REGB_WR32(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, 0x0);
+ REGV_WR32(MTL_VPU_HOST_SS_FW_SOC_IRQ_EN, 0x0);
}
static void ivpu_hw_mtl_irq_wdt_nce_handler(struct ivpu_device *vdev)
schedule_recovery = true;
}
- /*
- * Clear local interrupt status by writing 0 to all bits.
- * This must be done after interrupts are cleared at the source.
- * Writing 1 triggers an interrupt, so we can't perform read update write.
- */
- REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, 0x0);
+ /* This must be done after interrupts are cleared at the source. */
+ if (IVPU_WA(interrupt_clear_with_0))
+ /*
+ * Writing 1 triggers an interrupt, so we can't perform read update write.
+ * Clear local interrupt status by writing 0 to all bits.
+ */
+ REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, 0x0);
+ else
+ REGB_WR32(MTL_BUTTRESS_INTERRUPT_STAT, status);
/* Re-enable global interrupt */
REGB_WR32(MTL_BUTTRESS_GLOBAL_INT_MASK, 0x0);
if (!d->config_buf)
goto err_alloc;
- for (i = 0; i < chip->num_config_regs; i++) {
+ for (i = 0; i < chip->num_config_bases; i++) {
d->config_buf[i] = kcalloc(chip->num_config_regs,
sizeof(**d->config_buf),
GFP_KERNEL);
disk_set_zoned(nullb->disk, BLK_ZONED_HM);
blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
blk_queue_required_elevator_features(q, ELEVATOR_F_ZBD_SEQ_WRITE);
-
- if (queue_is_mq(q)) {
- int ret = blk_revalidate_disk_zones(nullb->disk, NULL);
-
- if (ret)
- return ret;
- } else {
- blk_queue_chunk_sectors(q, dev->zone_size_sects);
- nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0);
- }
-
+ blk_queue_chunk_sectors(q, dev->zone_size_sects);
+ nullb->disk->nr_zones = bdev_nr_zones(nullb->disk->part0);
blk_queue_max_zone_append_sectors(q, dev->zone_size_sects);
disk_set_max_open_zones(nullb->disk, dev->zone_max_open);
disk_set_max_active_zones(nullb->disk, dev->zone_max_active);
+ if (queue_is_mq(q))
+ return blk_revalidate_disk_zones(nullb->disk, NULL);
+
return 0;
}
{
u32 v, wg;
u8 model;
- int ret;
virtio_cread(vdev, struct virtio_blk_config,
zoned.model, &model);
vblk->zone_sectors);
return -ENODEV;
}
+ blk_queue_chunk_sectors(q, vblk->zone_sectors);
dev_dbg(&vdev->dev, "zone sectors = %u\n", vblk->zone_sectors);
if (virtio_has_feature(vdev, VIRTIO_BLK_F_DISCARD)) {
blk_queue_max_discard_sectors(q, 0);
}
- ret = blk_revalidate_disk_zones(vblk->disk, NULL);
- if (!ret) {
- virtio_cread(vdev, struct virtio_blk_config,
- zoned.max_append_sectors, &v);
- if (!v) {
- dev_warn(&vdev->dev, "zero max_append_sectors reported\n");
- return -ENODEV;
- }
- if ((v << SECTOR_SHIFT) < wg) {
- dev_err(&vdev->dev,
- "write granularity %u exceeds max_append_sectors %u limit\n",
- wg, v);
- return -ENODEV;
- }
-
- blk_queue_max_zone_append_sectors(q, v);
- dev_dbg(&vdev->dev, "max append sectors = %u\n", v);
+ virtio_cread(vdev, struct virtio_blk_config,
+ zoned.max_append_sectors, &v);
+ if (!v) {
+ dev_warn(&vdev->dev, "zero max_append_sectors reported\n");
+ return -ENODEV;
+ }
+ if ((v << SECTOR_SHIFT) < wg) {
+ dev_err(&vdev->dev,
+ "write granularity %u exceeds max_append_sectors %u limit\n",
+ wg, v);
+ return -ENODEV;
}
+ blk_queue_max_zone_append_sectors(q, v);
+ dev_dbg(&vdev->dev, "max append sectors = %u\n", v);
- return ret;
+ return blk_revalidate_disk_zones(vblk->disk, NULL);
}
#else
* 6.x.y.z series: 6.0.18.6 +
* 3.x.y.z series: 3.57.y.5 +
*/
+#ifdef CONFIG_X86
static bool tpm_amd_is_rng_defective(struct tpm_chip *chip)
{
u32 val1, val2;
return true;
}
+#else
+static inline bool tpm_amd_is_rng_defective(struct tpm_chip *chip)
+{
+ return false;
+}
+#endif /* CONFIG_X86 */
static int tpm_hwrng_read(struct hwrng *rng, void *data, size_t max, bool wait)
{
u32 rsp_size;
int ret;
- INIT_LIST_HEAD(&acpi_resource_list);
- ret = acpi_dev_get_resources(device, &acpi_resource_list,
- crb_check_resource, iores_array);
- if (ret < 0)
- return ret;
- acpi_dev_free_resource_list(&acpi_resource_list);
-
- /* Pluton doesn't appear to define ACPI memory regions */
+ /*
+ * Pluton sometimes does not define ACPI memory regions.
+ * Mapping is then done in crb_map_pluton
+ */
if (priv->sm != ACPI_TPM2_COMMAND_BUFFER_WITH_PLUTON) {
+ INIT_LIST_HEAD(&acpi_resource_list);
+ ret = acpi_dev_get_resources(device, &acpi_resource_list,
+ crb_check_resource, iores_array);
+ if (ret < 0)
+ return ret;
+ acpi_dev_free_resource_list(&acpi_resource_list);
+
if (resource_type(iores_array) != IORESOURCE_MEM) {
dev_err(dev, FW_BUG "TPM2 ACPI table does not define a memory resource\n");
return -EINVAL;
static const struct dmi_system_id tpm_tis_dmi_table[] = {
{
.callback = tpm_tis_disable_irq,
+ .ident = "Framework Laptop (12th Gen Intel Core)",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Laptop (12th Gen Intel Core)"),
+ },
+ },
+ {
+ .callback = tpm_tis_disable_irq,
+ .ident = "Framework Laptop (13th Gen Intel Core)",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Laptop (13th Gen Intel Core)"),
+ },
+ },
+ {
+ .callback = tpm_tis_disable_irq,
.ident = "ThinkPad T490s",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
},
{
.callback = tpm_tis_disable_irq,
+ .ident = "ThinkPad L590",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad L590"),
+ },
+ },
+ {
+ .callback = tpm_tis_disable_irq,
.ident = "UPX-TGL",
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "AAEON"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "UPX-TGL"),
},
},
{}
#include <linux/wait.h>
#include <linux/acpi.h>
#include <linux/freezer.h>
+#include <linux/dmi.h>
#include "tpm.h"
#include "tpm_tis_core.h"
+#define TPM_TIS_MAX_UNHANDLED_IRQS 1000
+
static void tpm_tis_clkrun_enable(struct tpm_chip *chip, bool value);
static bool wait_for_tpm_stat_cond(struct tpm_chip *chip, u8 mask,
return rc;
}
-static void disable_interrupts(struct tpm_chip *chip)
+static void __tpm_tis_disable_interrupts(struct tpm_chip *chip)
+{
+ struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ u32 int_mask = 0;
+
+ tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &int_mask);
+ int_mask &= ~TPM_GLOBAL_INT_ENABLE;
+ tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), int_mask);
+
+ chip->flags &= ~TPM_CHIP_FLAG_IRQ;
+}
+
+static void tpm_tis_disable_interrupts(struct tpm_chip *chip)
{
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
- u32 intmask;
- int rc;
if (priv->irq == 0)
return;
- rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
- if (rc < 0)
- intmask = 0;
-
- intmask &= ~TPM_GLOBAL_INT_ENABLE;
- rc = tpm_tis_write32(priv, TPM_INT_ENABLE(priv->locality), intmask);
+ __tpm_tis_disable_interrupts(chip);
devm_free_irq(chip->dev.parent, priv->irq, chip);
priv->irq = 0;
- chip->flags &= ~TPM_CHIP_FLAG_IRQ;
}
/*
if (!test_bit(TPM_TIS_IRQ_TESTED, &priv->flags))
tpm_msleep(1);
if (!test_bit(TPM_TIS_IRQ_TESTED, &priv->flags))
- disable_interrupts(chip);
+ tpm_tis_disable_interrupts(chip);
set_bit(TPM_TIS_IRQ_TESTED, &priv->flags);
return rc;
}
return status == TPM_STS_COMMAND_READY;
}
+static irqreturn_t tpm_tis_revert_interrupts(struct tpm_chip *chip)
+{
+ struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ const char *product;
+ const char *vendor;
+
+ dev_warn(&chip->dev, FW_BUG
+ "TPM interrupt storm detected, polling instead\n");
+
+ vendor = dmi_get_system_info(DMI_SYS_VENDOR);
+ product = dmi_get_system_info(DMI_PRODUCT_VERSION);
+
+ if (vendor && product) {
+ dev_info(&chip->dev,
+ "Consider adding the following entry to tpm_tis_dmi_table:\n");
+ dev_info(&chip->dev, "\tDMI_SYS_VENDOR: %s\n", vendor);
+ dev_info(&chip->dev, "\tDMI_PRODUCT_VERSION: %s\n", product);
+ }
+
+ if (tpm_tis_request_locality(chip, 0) != 0)
+ return IRQ_NONE;
+
+ __tpm_tis_disable_interrupts(chip);
+ tpm_tis_relinquish_locality(chip, 0);
+
+ schedule_work(&priv->free_irq_work);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t tpm_tis_update_unhandled_irqs(struct tpm_chip *chip)
+{
+ struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
+ irqreturn_t irqret = IRQ_HANDLED;
+
+ if (!(chip->flags & TPM_CHIP_FLAG_IRQ))
+ return IRQ_HANDLED;
+
+ if (time_after(jiffies, priv->last_unhandled_irq + HZ/10))
+ priv->unhandled_irqs = 1;
+ else
+ priv->unhandled_irqs++;
+
+ priv->last_unhandled_irq = jiffies;
+
+ if (priv->unhandled_irqs > TPM_TIS_MAX_UNHANDLED_IRQS)
+ irqret = tpm_tis_revert_interrupts(chip);
+
+ return irqret;
+}
+
static irqreturn_t tis_int_handler(int dummy, void *dev_id)
{
struct tpm_chip *chip = dev_id;
rc = tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &interrupt);
if (rc < 0)
- return IRQ_NONE;
+ goto err;
if (interrupt == 0)
- return IRQ_NONE;
+ goto err;
set_bit(TPM_TIS_IRQ_TESTED, &priv->flags);
if (interrupt & TPM_INTF_DATA_AVAIL_INT)
rc = tpm_tis_write32(priv, TPM_INT_STATUS(priv->locality), interrupt);
tpm_tis_relinquish_locality(chip, 0);
if (rc < 0)
- return IRQ_NONE;
+ goto err;
tpm_tis_read32(priv, TPM_INT_STATUS(priv->locality), &interrupt);
return IRQ_HANDLED;
+
+err:
+ return tpm_tis_update_unhandled_irqs(chip);
}
static void tpm_tis_gen_interrupt(struct tpm_chip *chip)
chip->flags &= ~TPM_CHIP_FLAG_IRQ;
}
+static void tpm_tis_free_irq_func(struct work_struct *work)
+{
+ struct tpm_tis_data *priv = container_of(work, typeof(*priv), free_irq_work);
+ struct tpm_chip *chip = priv->chip;
+
+ devm_free_irq(chip->dev.parent, priv->irq, chip);
+ priv->irq = 0;
+}
+
/* Register the IRQ and issue a command that will cause an interrupt. If an
* irq is seen then leave the chip setup for IRQ operation, otherwise reverse
* everything and leave in polling mode. Returns 0 on success.
int rc;
u32 int_status;
+ INIT_WORK(&priv->free_irq_work, tpm_tis_free_irq_func);
rc = devm_request_threaded_irq(chip->dev.parent, irq, NULL,
tis_int_handler, IRQF_ONESHOT | flags,
interrupt = 0;
tpm_tis_write32(priv, reg, ~TPM_GLOBAL_INT_ENABLE & interrupt);
+ flush_work(&priv->free_irq_work);
tpm_tis_clkrun_enable(chip, false);
chip->timeout_b = msecs_to_jiffies(TIS_TIMEOUT_B_MAX);
chip->timeout_c = msecs_to_jiffies(TIS_TIMEOUT_C_MAX);
chip->timeout_d = msecs_to_jiffies(TIS_TIMEOUT_D_MAX);
+ priv->chip = chip;
priv->timeout_min = TPM_TIMEOUT_USECS_MIN;
priv->timeout_max = TPM_TIMEOUT_USECS_MAX;
priv->phy_ops = phy_ops;
rc = tpm_tis_request_locality(chip, 0);
if (rc < 0)
goto out_err;
- disable_interrupts(chip);
+ tpm_tis_disable_interrupts(chip);
tpm_tis_relinquish_locality(chip, 0);
}
}
};
struct tpm_tis_data {
+ struct tpm_chip *chip;
u16 manufacturer_id;
struct mutex locality_count_mutex;
unsigned int locality_count;
int locality;
int irq;
+ struct work_struct free_irq_work;
+ unsigned long last_unhandled_irq;
+ unsigned int unhandled_irqs;
unsigned int int_mask;
unsigned long flags;
void __iomem *ilb_base_addr;
int ret;
for (i = 0; i < TPM_RETRY; i++) {
- /* write register */
- msg.len = sizeof(reg);
- msg.buf = ®
- msg.flags = 0;
- ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg);
- if (ret < 0)
- return ret;
-
- /* read data */
- msg.buf = result;
- msg.len = len;
- msg.flags = I2C_M_RD;
- ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg);
- if (ret < 0)
- return ret;
+ u16 read = 0;
+
+ while (read < len) {
+ /* write register */
+ msg.len = sizeof(reg);
+ msg.buf = ®
+ msg.flags = 0;
+ ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg);
+ if (ret < 0)
+ return ret;
+
+ /* read data */
+ msg.buf = result + read;
+ msg.len = len - read;
+ msg.flags = I2C_M_RD;
+ if (msg.len > I2C_SMBUS_BLOCK_MAX)
+ msg.len = I2C_SMBUS_BLOCK_MAX;
+ ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg);
+ if (ret < 0)
+ return ret;
+ read += msg.len;
+ }
ret = tpm_tis_i2c_sanity_check_read(reg, len, result);
if (ret == 0)
struct i2c_msg msg = { .addr = phy->i2c_client->addr };
u8 reg = tpm_tis_i2c_address_to_register(addr);
int ret;
+ u16 wrote = 0;
if (len > TPM_BUFSIZE - 1)
return -EIO;
- /* write register and data in one go */
phy->io_buf[0] = reg;
- memcpy(phy->io_buf + sizeof(reg), value, len);
-
- msg.len = sizeof(reg) + len;
msg.buf = phy->io_buf;
- ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg);
- if (ret < 0)
- return ret;
+ while (wrote < len) {
+ /* write register and data in one go */
+ msg.len = sizeof(reg) + len - wrote;
+ if (msg.len > I2C_SMBUS_BLOCK_MAX)
+ msg.len = I2C_SMBUS_BLOCK_MAX;
+
+ memcpy(phy->io_buf + sizeof(reg), value + wrote,
+ msg.len - sizeof(reg));
+
+ ret = tpm_tis_i2c_retry_transfer_until_ack(data, &msg);
+ if (ret < 0)
+ return ret;
+ wrote += msg.len - sizeof(reg);
+ }
return 0;
}
}
exit:
+ if (ret < 0) {
+ /* Deactivate chip select */
+ memset(&spi_xfer, 0, sizeof(spi_xfer));
+ spi_message_init(&m);
+ spi_message_add_tail(&spi_xfer, &m);
+ spi_sync_locked(phy->spi_device, &m);
+ }
+
spi_bus_unlock(phy->spi_device->master);
return ret;
}
.fops = &vtpmx_fops,
};
-static int vtpmx_init(void)
-{
- return misc_register(&vtpmx_miscdev);
-}
-
-static void vtpmx_cleanup(void)
-{
- misc_deregister(&vtpmx_miscdev);
-}
-
static int __init vtpm_module_init(void)
{
int rc;
- rc = vtpmx_init();
- if (rc) {
- pr_err("couldn't create vtpmx device\n");
- return rc;
- }
-
workqueue = create_workqueue("tpm-vtpm");
if (!workqueue) {
pr_err("couldn't create workqueue\n");
- rc = -ENOMEM;
- goto err_vtpmx_cleanup;
+ return -ENOMEM;
}
- return 0;
-
-err_vtpmx_cleanup:
- vtpmx_cleanup();
+ rc = misc_register(&vtpmx_miscdev);
+ if (rc) {
+ pr_err("couldn't create vtpmx device\n");
+ destroy_workqueue(workqueue);
+ }
return rc;
}
static void __exit vtpm_module_exit(void)
{
destroy_workqueue(workqueue);
- vtpmx_cleanup();
+ misc_deregister(&vtpmx_miscdev);
}
module_init(vtpm_module_init);
return smp_call_function_single(cpu, __us2e_freq_target, &index, 1);
}
-static int __init us2e_freq_cpu_init(struct cpufreq_policy *policy)
+static int us2e_freq_cpu_init(struct cpufreq_policy *policy)
{
unsigned int cpu = policy->cpu;
unsigned long clock_tick = sparc64_get_clock_tick(cpu) / 1000;
return smp_call_function_single(cpu, update_safari_cfg, &new_bits, 1);
}
-static int __init us3_freq_cpu_init(struct cpufreq_policy *policy)
+static int us3_freq_cpu_init(struct cpufreq_policy *policy)
{
unsigned int cpu = policy->cpu;
unsigned long clock_tick = sparc64_get_clock_tick(cpu) / 1000;
{
struct dma_fence_array *result;
struct dma_fence *tmp, **array;
+ ktime_t timestamp;
unsigned int i;
size_t count;
count = 0;
+ timestamp = ns_to_ktime(0);
for (i = 0; i < num_fences; ++i) {
- dma_fence_unwrap_for_each(tmp, &iter[i], fences[i])
- if (!dma_fence_is_signaled(tmp))
+ dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) {
+ if (!dma_fence_is_signaled(tmp)) {
++count;
+ } else if (test_bit(DMA_FENCE_FLAG_TIMESTAMP_BIT,
+ &tmp->flags)) {
+ if (ktime_after(tmp->timestamp, timestamp))
+ timestamp = tmp->timestamp;
+ } else {
+ /*
+ * Use the current time if the fence is
+ * currently signaling.
+ */
+ timestamp = ktime_get();
+ }
+ }
}
+ /*
+ * If we couldn't find a pending fence just return a private signaled
+ * fence with the timestamp of the last signaled one.
+ */
if (count == 0)
- return dma_fence_get_stub();
+ return dma_fence_allocate_private_stub(timestamp);
array = kmalloc_array(count, sizeof(*array), GFP_KERNEL);
if (!array)
} while (tmp);
if (count == 0) {
- tmp = dma_fence_get_stub();
+ tmp = dma_fence_allocate_private_stub(ktime_get());
goto return_tmp;
}
/**
* dma_fence_allocate_private_stub - return a private, signaled fence
+ * @timestamp: timestamp when the fence was signaled
*
* Return a newly allocated and signaled stub fence.
*/
-struct dma_fence *dma_fence_allocate_private_stub(void)
+struct dma_fence *dma_fence_allocate_private_stub(ktime_t timestamp)
{
struct dma_fence *fence;
fence = kzalloc(sizeof(*fence), GFP_KERNEL);
if (fence == NULL)
- return ERR_PTR(-ENOMEM);
+ return NULL;
dma_fence_init(fence,
&dma_fence_stub_ops,
set_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT,
&fence->flags);
- dma_fence_signal(fence);
+ dma_fence_signal_timestamp(fence, timestamp);
return fence;
}
void amdgpu_device_pci_config_reset(struct amdgpu_device *adev);
int amdgpu_device_pci_reset(struct amdgpu_device *adev);
bool amdgpu_device_need_post(struct amdgpu_device *adev);
+bool amdgpu_device_pcie_dynamic_switching_supported(void);
bool amdgpu_device_should_use_aspm(struct amdgpu_device *adev);
bool amdgpu_device_aspm_support_quirk(void);
if (!attachment->is_mapped)
continue;
+ if (attachment->bo_va->base.bo->tbo.pin_count)
+ continue;
+
kfd_mem_dmaunmap_attachment(mem, attachment);
ret = update_gpuvm_pte(mem, attachment, &sync_obj);
if (ret) {
return true;
}
+/*
+ * Intel hosts such as Raptor Lake and Sapphire Rapids don't support dynamic
+ * speed switching. Until we have confirmation from Intel that a specific host
+ * supports it, it's safer that we keep it disabled for all.
+ *
+ * https://edc.intel.com/content/www/us/en/design/products/platforms/details/raptor-lake-s/13th-generation-core-processors-datasheet-volume-1-of-2/005/pci-express-support/
+ * https://gitlab.freedesktop.org/drm/amd/-/issues/2663
+ */
+bool amdgpu_device_pcie_dynamic_switching_supported(void)
+{
+#if IS_ENABLED(CONFIG_X86)
+ struct cpuinfo_x86 *c = &cpu_data(0);
+
+ if (c->x86_vendor == X86_VENDOR_INTEL)
+ return false;
+#endif
+ return true;
+}
+
/**
* amdgpu_device_should_use_aspm - check if the device should program ASPM
*
uint32_t *size,
uint32_t pptable_id);
+int smu_v13_0_update_pcie_parameters(struct smu_context *smu,
+ uint32_t pcie_gen_cap,
+ uint32_t pcie_width_cap);
+
#endif
#endif
}
mutex_lock(&adev->pm.mutex);
r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
- mutex_unlock(&adev->pm.mutex);
if (r)
goto fail;
}
r = num_msgs;
fail:
+ mutex_unlock(&adev->pm.mutex);
kfree(req);
return r;
}
}
mutex_lock(&adev->pm.mutex);
r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
- mutex_unlock(&adev->pm.mutex);
if (r)
goto fail;
}
r = num_msgs;
fail:
+ mutex_unlock(&adev->pm.mutex);
kfree(req);
return r;
}
return ret;
}
-static void sienna_cichlid_get_override_pcie_settings(struct smu_context *smu,
- uint32_t *gen_speed_override,
- uint32_t *lane_width_override)
-{
- struct amdgpu_device *adev = smu->adev;
-
- *gen_speed_override = 0xff;
- *lane_width_override = 0xff;
-
- switch (adev->pdev->device) {
- case 0x73A0:
- case 0x73A1:
- case 0x73A2:
- case 0x73A3:
- case 0x73AB:
- case 0x73AE:
- /* Bit 7:0: PCIE lane width, 1 to 7 corresponds is x1 to x32 */
- *lane_width_override = 6;
- break;
- case 0x73E0:
- case 0x73E1:
- case 0x73E3:
- *lane_width_override = 4;
- break;
- case 0x7420:
- case 0x7421:
- case 0x7422:
- case 0x7423:
- case 0x7424:
- *lane_width_override = 3;
- break;
- default:
- break;
- }
-}
-
-#define MAX(a, b) ((a) > (b) ? (a) : (b))
-
static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu,
uint32_t pcie_gen_cap,
uint32_t pcie_width_cap)
{
struct smu_11_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context;
struct smu_11_0_pcie_table *pcie_table = &dpm_context->dpm_tables.pcie_table;
- uint32_t gen_speed_override, lane_width_override;
- uint8_t *table_member1, *table_member2;
- uint32_t min_gen_speed, max_gen_speed;
- uint32_t min_lane_width, max_lane_width;
- uint32_t smu_pcie_arg;
+ u32 smu_pcie_arg;
int ret, i;
- GET_PPTABLE_MEMBER(PcieGenSpeed, &table_member1);
- GET_PPTABLE_MEMBER(PcieLaneCount, &table_member2);
-
- sienna_cichlid_get_override_pcie_settings(smu,
- &gen_speed_override,
- &lane_width_override);
+ /* PCIE gen speed and lane width override */
+ if (!amdgpu_device_pcie_dynamic_switching_supported()) {
+ if (pcie_table->pcie_gen[NUM_LINK_LEVELS - 1] < pcie_gen_cap)
+ pcie_gen_cap = pcie_table->pcie_gen[NUM_LINK_LEVELS - 1];
- /* PCIE gen speed override */
- if (gen_speed_override != 0xff) {
- min_gen_speed = MIN(pcie_gen_cap, gen_speed_override);
- max_gen_speed = MIN(pcie_gen_cap, gen_speed_override);
- } else {
- min_gen_speed = MAX(0, table_member1[0]);
- max_gen_speed = MIN(pcie_gen_cap, table_member1[1]);
- min_gen_speed = min_gen_speed > max_gen_speed ?
- max_gen_speed : min_gen_speed;
- }
- pcie_table->pcie_gen[0] = min_gen_speed;
- pcie_table->pcie_gen[1] = max_gen_speed;
+ if (pcie_table->pcie_lane[NUM_LINK_LEVELS - 1] < pcie_width_cap)
+ pcie_width_cap = pcie_table->pcie_lane[NUM_LINK_LEVELS - 1];
- /* PCIE lane width override */
- if (lane_width_override != 0xff) {
- min_lane_width = MIN(pcie_width_cap, lane_width_override);
- max_lane_width = MIN(pcie_width_cap, lane_width_override);
+ /* Force all levels to use the same settings */
+ for (i = 0; i < NUM_LINK_LEVELS; i++) {
+ pcie_table->pcie_gen[i] = pcie_gen_cap;
+ pcie_table->pcie_lane[i] = pcie_width_cap;
+ }
} else {
- min_lane_width = MAX(1, table_member2[0]);
- max_lane_width = MIN(pcie_width_cap, table_member2[1]);
- min_lane_width = min_lane_width > max_lane_width ?
- max_lane_width : min_lane_width;
+ for (i = 0; i < NUM_LINK_LEVELS; i++) {
+ if (pcie_table->pcie_gen[i] > pcie_gen_cap)
+ pcie_table->pcie_gen[i] = pcie_gen_cap;
+ if (pcie_table->pcie_lane[i] > pcie_width_cap)
+ pcie_table->pcie_lane[i] = pcie_width_cap;
+ }
}
- pcie_table->pcie_lane[0] = min_lane_width;
- pcie_table->pcie_lane[1] = max_lane_width;
for (i = 0; i < NUM_LINK_LEVELS; i++) {
smu_pcie_arg = (i << 16 |
}
mutex_lock(&adev->pm.mutex);
r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
- mutex_unlock(&adev->pm.mutex);
if (r)
goto fail;
}
r = num_msgs;
fail:
+ mutex_unlock(&adev->pm.mutex);
kfree(req);
return r;
}
}
mutex_lock(&adev->pm.mutex);
r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
- mutex_unlock(&adev->pm.mutex);
if (r)
goto fail;
}
r = num_msgs;
fail:
+ mutex_unlock(&adev->pm.mutex);
kfree(req);
return r;
}
return ret;
}
+
+int smu_v13_0_update_pcie_parameters(struct smu_context *smu,
+ uint32_t pcie_gen_cap,
+ uint32_t pcie_width_cap)
+{
+ struct smu_13_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context;
+ struct smu_13_0_pcie_table *pcie_table =
+ &dpm_context->dpm_tables.pcie_table;
+ int num_of_levels = pcie_table->num_of_link_levels;
+ uint32_t smu_pcie_arg;
+ int ret, i;
+
+ if (!amdgpu_device_pcie_dynamic_switching_supported()) {
+ if (pcie_table->pcie_gen[num_of_levels - 1] < pcie_gen_cap)
+ pcie_gen_cap = pcie_table->pcie_gen[num_of_levels - 1];
+
+ if (pcie_table->pcie_lane[num_of_levels - 1] < pcie_width_cap)
+ pcie_width_cap = pcie_table->pcie_lane[num_of_levels - 1];
+
+ /* Force all levels to use the same settings */
+ for (i = 0; i < num_of_levels; i++) {
+ pcie_table->pcie_gen[i] = pcie_gen_cap;
+ pcie_table->pcie_lane[i] = pcie_width_cap;
+ }
+ } else {
+ for (i = 0; i < num_of_levels; i++) {
+ if (pcie_table->pcie_gen[i] > pcie_gen_cap)
+ pcie_table->pcie_gen[i] = pcie_gen_cap;
+ if (pcie_table->pcie_lane[i] > pcie_width_cap)
+ pcie_table->pcie_lane[i] = pcie_width_cap;
+ }
+ }
+
+ for (i = 0; i < num_of_levels; i++) {
+ smu_pcie_arg = i << 16;
+ smu_pcie_arg |= pcie_table->pcie_gen[i] << 8;
+ smu_pcie_arg |= pcie_table->pcie_lane[i];
+
+ ret = smu_cmn_send_smc_msg_with_param(smu,
+ SMU_MSG_OverridePcieParameters,
+ smu_pcie_arg,
+ NULL);
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
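For reference, both versions of this helper encode each link level's settings into a single message argument: the level index in bits 23:16, the gen value in bits 15:8 and the lane value in bits 7:0, exactly as the loop above builds smu_pcie_arg. A worked example with hypothetical values (the gen/lane encodings themselves are firmware-defined), level 1 with gen field 4 and lane field 6:

	/* Hypothetical values, shown only to illustrate the packing. */
	uint32_t smu_pcie_arg = (1u << 16) | (4u << 8) | 6u;	/* == 0x00010406 */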
return ret;
}
-static int smu_v13_0_0_update_pcie_parameters(struct smu_context *smu,
- uint32_t pcie_gen_cap,
- uint32_t pcie_width_cap)
-{
- struct smu_13_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context;
- struct smu_13_0_pcie_table *pcie_table =
- &dpm_context->dpm_tables.pcie_table;
- uint32_t smu_pcie_arg;
- int ret, i;
-
- for (i = 0; i < pcie_table->num_of_link_levels; i++) {
- if (pcie_table->pcie_gen[i] > pcie_gen_cap)
- pcie_table->pcie_gen[i] = pcie_gen_cap;
- if (pcie_table->pcie_lane[i] > pcie_width_cap)
- pcie_table->pcie_lane[i] = pcie_width_cap;
-
- smu_pcie_arg = i << 16;
- smu_pcie_arg |= pcie_table->pcie_gen[i] << 8;
- smu_pcie_arg |= pcie_table->pcie_lane[i];
-
- ret = smu_cmn_send_smc_msg_with_param(smu,
- SMU_MSG_OverridePcieParameters,
- smu_pcie_arg,
- NULL);
- if (ret)
- return ret;
- }
-
- return 0;
-}
-
static const struct smu_temperature_range smu13_thermal_policy[] = {
{-273150, 99000, 99000, -273150, 99000, 99000, -273150, 99000, 99000},
{ 120000, 120000, 120000, 120000, 120000, 120000, 120000, 120000, 120000},
}
mutex_lock(&adev->pm.mutex);
r = smu_cmn_update_table(smu, SMU_TABLE_I2C_COMMANDS, 0, req, true);
- mutex_unlock(&adev->pm.mutex);
if (r)
goto fail;
}
r = num_msgs;
fail:
+ mutex_unlock(&adev->pm.mutex);
kfree(req);
return r;
}
.feature_is_enabled = smu_cmn_feature_is_enabled,
.print_clk_levels = smu_v13_0_0_print_clk_levels,
.force_clk_levels = smu_v13_0_0_force_clk_levels,
- .update_pcie_parameters = smu_v13_0_0_update_pcie_parameters,
+ .update_pcie_parameters = smu_v13_0_update_pcie_parameters,
.get_thermal_temperature_range = smu_v13_0_0_get_thermal_temperature_range,
.register_irq_handler = smu_v13_0_register_irq_handler,
.enable_thermal_alert = smu_v13_0_enable_thermal_alert,
}
mutex_lock(&adev->pm.mutex);
r = smu_v13_0_6_request_i2c_xfer(smu, req);
- mutex_unlock(&adev->pm.mutex);
if (r)
goto fail;
}
r = num_msgs;
fail:
+ mutex_unlock(&adev->pm.mutex);
kfree(req);
return r;
}
return ret;
}
-static int smu_v13_0_7_update_pcie_parameters(struct smu_context *smu,
- uint32_t pcie_gen_cap,
- uint32_t pcie_width_cap)
-{
- struct smu_13_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context;
- struct smu_13_0_pcie_table *pcie_table =
- &dpm_context->dpm_tables.pcie_table;
- uint32_t smu_pcie_arg;
- int ret, i;
-
- for (i = 0; i < pcie_table->num_of_link_levels; i++) {
- if (pcie_table->pcie_gen[i] > pcie_gen_cap)
- pcie_table->pcie_gen[i] = pcie_gen_cap;
- if (pcie_table->pcie_lane[i] > pcie_width_cap)
- pcie_table->pcie_lane[i] = pcie_width_cap;
-
- smu_pcie_arg = i << 16;
- smu_pcie_arg |= pcie_table->pcie_gen[i] << 8;
- smu_pcie_arg |= pcie_table->pcie_lane[i];
-
- ret = smu_cmn_send_smc_msg_with_param(smu,
- SMU_MSG_OverridePcieParameters,
- smu_pcie_arg,
- NULL);
- if (ret)
- return ret;
- }
-
- return 0;
-}
-
static const struct smu_temperature_range smu13_thermal_policy[] =
{
{-273150, 99000, 99000, -273150, 99000, 99000, -273150, 99000, 99000},
.feature_is_enabled = smu_cmn_feature_is_enabled,
.print_clk_levels = smu_v13_0_7_print_clk_levels,
.force_clk_levels = smu_v13_0_7_force_clk_levels,
- .update_pcie_parameters = smu_v13_0_7_update_pcie_parameters,
+ .update_pcie_parameters = smu_v13_0_update_pcie_parameters,
.get_thermal_temperature_range = smu_v13_0_7_get_thermal_temperature_range,
.register_irq_handler = smu_v13_0_register_irq_handler,
.enable_thermal_alert = smu_v13_0_enable_thermal_alert,
goto err_drm_client_init;
}
- ret = armada_fbdev_client_hotplug(&fbh->client);
- if (ret)
- drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
drm_client_register(&fbh->client);
return;
/* Control for TMDS Bit Period/TMDS Clock-Period Ratio */
if (dw_hdmi_support_scdc(hdmi, display)) {
if (mtmdsclock > HDMI14_MAX_TMDSCLK)
- drm_scdc_set_high_tmds_clock_ratio(&hdmi->connector, 1);
+ drm_scdc_set_high_tmds_clock_ratio(hdmi->curr_conn, 1);
else
- drm_scdc_set_high_tmds_clock_ratio(&hdmi->connector, 0);
+ drm_scdc_set_high_tmds_clock_ratio(hdmi->curr_conn, 0);
}
}
EXPORT_SYMBOL_GPL(dw_hdmi_set_high_tmds_clock_ratio);
min_t(u8, bytes, SCDC_MIN_SOURCE_VERSION));
/* Enable Scrambling in the Sink */
- drm_scdc_set_scrambling(&hdmi->connector, 1);
+ drm_scdc_set_scrambling(hdmi->curr_conn, 1);
/*
* To activate the scrambler feature, you must ensure
hdmi_writeb(hdmi, 0, HDMI_FC_SCRAMBLER_CTRL);
hdmi_writeb(hdmi, (u8)~HDMI_MC_SWRSTZ_TMDSSWRST_REQ,
HDMI_MC_SWRSTZ);
- drm_scdc_set_scrambling(&hdmi->connector, 0);
+ drm_scdc_set_scrambling(hdmi->curr_conn, 0);
}
}
hdmi->bridge.ops = DRM_BRIDGE_OP_DETECT | DRM_BRIDGE_OP_EDID
| DRM_BRIDGE_OP_HPD;
hdmi->bridge.interlace_allowed = true;
+ hdmi->bridge.ddc = hdmi->ddc;
#ifdef CONFIG_OF
hdmi->bridge.of_node = pdev->dev.of_node;
#endif
* @pwm_refclk_freq: Cache for the reference clock input to the PWM.
*/
struct ti_sn65dsi86 {
- struct auxiliary_device bridge_aux;
- struct auxiliary_device gpio_aux;
- struct auxiliary_device aux_aux;
- struct auxiliary_device pwm_aux;
+ struct auxiliary_device *bridge_aux;
+ struct auxiliary_device *gpio_aux;
+ struct auxiliary_device *aux_aux;
+ struct auxiliary_device *pwm_aux;
struct device *dev;
struct regmap *regmap;
auxiliary_device_delete(data);
}
-/*
- * AUX bus docs say that a non-NULL release is mandatory, but it makes no
- * sense for the model used here where all of the aux devices are allocated
- * in the single shared structure. We'll use this noop as a workaround.
- */
-static void ti_sn65dsi86_noop(struct device *dev) {}
+static void ti_sn65dsi86_aux_device_release(struct device *dev)
+{
+ struct auxiliary_device *aux = container_of(dev, struct auxiliary_device, dev);
+
+ kfree(aux);
+}
static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata,
- struct auxiliary_device *aux,
+ struct auxiliary_device **aux_out,
const char *name)
{
struct device *dev = pdata->dev;
+ struct auxiliary_device *aux;
int ret;
+ aux = kzalloc(sizeof(*aux), GFP_KERNEL);
+ if (!aux)
+ return -ENOMEM;
+
aux->name = name;
aux->dev.parent = dev;
- aux->dev.release = ti_sn65dsi86_noop;
+ aux->dev.release = ti_sn65dsi86_aux_device_release;
device_set_of_node_from_dev(&aux->dev, dev);
ret = auxiliary_device_init(aux);
- if (ret)
+ if (ret) {
+ kfree(aux);
return ret;
+ }
ret = devm_add_action_or_reset(dev, ti_sn65dsi86_uninit_aux, aux);
if (ret)
return ret;
if (ret)
return ret;
ret = devm_add_action_or_reset(dev, ti_sn65dsi86_delete_aux, aux);
+ if (!ret)
+ *aux_out = aux;
return ret;
}
* drm_client_register() it is no longer permissible to call drm_client_release()
* directly (outside the unregister callback), instead cleanup will happen
* automatically on driver unload.
+ *
+ * Registering a client generates a hotplug event that allows the client
+ * to set up its display from pre-existing outputs. The client must have
+ * initialized its state to be able to handle the hotplug event successfully.
*/
void drm_client_register(struct drm_client_dev *client)
{
struct drm_device *dev = client->dev;
+ int ret;
mutex_lock(&dev->clientlist_mutex);
list_add(&client->list, &dev->clientlist);
+
+ if (client->funcs && client->funcs->hotplug) {
+ /*
+ * Perform an initial hotplug event to pick up the
+ * display configuration for the client. This step
+ * has to be performed *after* registering the client
+ * in the list of clients, or a concurrent hotplug
+ * event might be lost, leaving the display off.
+ *
+ * Hold the clientlist_mutex as for a regular hotplug
+ * event.
+ */
+ ret = client->funcs->hotplug(client);
+ if (ret)
+ drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
+ }
mutex_unlock(&dev->clientlist_mutex);
}
EXPORT_SYMBOL(drm_client_register);
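This is the change the fbdev hunks above rely on: a client no longer issues its own initial hotplug call after setup, because drm_client_register() now performs one under clientlist_mutex. A hedged sketch of the resulting client setup path, using the existing drm_client API but hypothetical example_* names:

static int example_client_hotplug(struct drm_client_dev *client)
{
	/* Probe outputs and bring up the initial display configuration. */
	return 0;
}

static const struct drm_client_funcs example_client_funcs = {
	.owner   = THIS_MODULE,
	.hotplug = example_client_hotplug,
};

	/* In the driver's setup code (client is a struct drm_client_dev embedded
	 * in the driver's state), after that state is fully initialized:
	 */
	ret = drm_client_init(dev, &client, "example-client", &example_client_funcs);
	if (ret)
		return ret;

	drm_client_register(&client);	/* the initial hotplug now happens here */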
* drm_fbdev_dma_setup() - Setup fbdev emulation for GEM DMA helpers
* @dev: DRM device
* @preferred_bpp: Preferred bits per pixel for the device.
- * @dev->mode_config.preferred_depth is used if this is zero.
+ * 32 is used if this is zero.
*
* This function sets up fbdev emulation for GEM DMA drivers that support
* dumb buffers with a virtual address and that can be mmap'ed.
goto err_drm_client_init;
}
- ret = drm_fbdev_dma_client_hotplug(&fb_helper->client);
- if (ret)
- drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
drm_client_register(&fb_helper->client);
return;
goto err_drm_client_init;
}
- ret = drm_fbdev_generic_client_hotplug(&fb_helper->client);
- if (ret)
- drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
drm_client_register(&fb_helper->client);
return;
*/
static int drm_syncobj_assign_null_handle(struct drm_syncobj *syncobj)
{
- struct dma_fence *fence = dma_fence_allocate_private_stub();
+ struct dma_fence *fence = dma_fence_allocate_private_stub(ktime_get());
- if (IS_ERR(fence))
- return PTR_ERR(fence);
+ if (!fence)
+ return -ENOMEM;
drm_syncobj_replace_fence(syncobj, fence);
dma_fence_put(fence);
if (ret)
goto err_drm_client_init;
- ret = exynos_drm_fbdev_client_hotplug(&fb_helper->client);
- if (ret)
- drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
drm_client_register(&fb_helper->client);
return;
goto err_drm_fb_helper_unprepare;
}
- ret = psb_fbdev_client_hotplug(&fb_helper->client);
- if (ret)
- drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
drm_client_register(&fb_helper->client);
return;
saved_state->uapi = slave_crtc_state->uapi;
saved_state->scaler_state = slave_crtc_state->scaler_state;
saved_state->shared_dpll = slave_crtc_state->shared_dpll;
- saved_state->dpll_hw_state = slave_crtc_state->dpll_hw_state;
saved_state->crc_enabled = slave_crtc_state->crc_enabled;
intel_crtc_free_hw_state(slave_crtc_state);
if (unlikely(flags & PTE_READ_ONLY))
pte &= ~GEN8_PAGE_RW;
- if (flags & PTE_LM)
- pte |= GEN12_PPGTT_PTE_LM;
-
/*
* For pre-gen12 platforms pat_index is the same as enum
* i915_cache_level, so the switch-case here is still valid.
if (IS_ERR(obj))
return ERR_CAST(obj);
- i915_gem_object_set_cache_coherency(obj, I915_CACHING_CACHED);
+ i915_gem_object_set_cache_coherency(obj, I915_CACHE_LLC);
vma = i915_vma_instance(obj, vm, NULL);
if (IS_ERR(vma)) {
oa_report_id_clear(stream, report32);
oa_timestamp_clear(stream, report32);
} else {
+ u8 *oa_buf_end = stream->oa_buffer.vaddr +
+ OA_BUFFER_SIZE;
+ u32 part = oa_buf_end - (u8 *)report32;
+
/* Zero out the entire report */
- memset(report32, 0, report_size);
+ if (report_size <= part) {
+ memset(report32, 0, report_size);
+ } else {
+ memset(report32, 0, part);
+ memset(oa_buf_base, 0, report_size - part);
+ }
}
}
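The i915 OA hunk above deals with a report that straddles the end of the circular OA buffer: clear the bytes up to the end of the buffer first, then the remainder from the buffer base. The same split-memset pattern in isolation, as a self-contained sketch with hypothetical names:

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Zero rec_size bytes starting at rec, wrapping at buf + buf_size. */
static void clear_wrapping_record(uint8_t *buf, size_t buf_size,
				  uint8_t *rec, size_t rec_size)
{
	size_t part = (size_t)(buf + buf_size - rec);	/* bytes left before the end */

	if (rec_size <= part) {
		memset(rec, 0, rec_size);		/* record fits without wrapping */
	} else {
		memset(rec, 0, part);			/* tail of the buffer */
		memset(buf, 0, rec_size - part);	/* remainder from the start */
	}
}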
goto err_drm_fb_helper_unprepare;
}
- ret = msm_fbdev_client_hotplug(&helper->client);
- if (ret)
- drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
drm_client_register(&helper->client);
return;
struct nouveau_drm *drm = nouveau_drm(msto->encoder.dev);
struct nv50_mstc *mstc = msto->mstc;
struct nv50_mstm *mstm = mstc->mstm;
- struct drm_dp_mst_atomic_payload *payload;
+ struct drm_dp_mst_topology_state *old_mst_state;
+ struct drm_dp_mst_atomic_payload *payload, *old_payload;
NV_ATOMIC(drm, "%s: msto prepare\n", msto->encoder.name);
+ old_mst_state = drm_atomic_get_old_mst_topology_state(state, mgr);
+
payload = drm_atomic_get_mst_payload_state(mst_state, mstc->port);
+ old_payload = drm_atomic_get_mst_payload_state(old_mst_state, mstc->port);
// TODO: Figure out if we want to do a better job of handling VCPI allocation failures here?
if (msto->disabled) {
- drm_dp_remove_payload(mgr, mst_state, payload, payload);
+ drm_dp_remove_payload(mgr, mst_state, old_payload, payload);
nvif_outp_dp_mst_vcpi(&mstm->outp->outp, msto->head->base.index, 0, 0, 0, 0);
} else {
if (cli)
nouveau_svmm_part(chan->vmm->svmm, chan->inst);
+ nvif_object_dtor(&chan->blit);
nvif_object_dtor(&chan->nvsw);
nvif_object_dtor(&chan->gart);
nvif_object_dtor(&chan->vram);
u32 user_put;
struct nvif_object user;
+ struct nvif_object blit;
struct nvif_event kill;
atomic_t killed;
ret = nvif_object_ctor(&drm->channel->user, "drmNvsw",
NVDRM_NVSW, nouveau_abi16_swclass(drm),
NULL, 0, &drm->channel->nvsw);
+
+ if (ret == 0 && device->info.chipset >= 0x11) {
+ ret = nvif_object_ctor(&drm->channel->user, "drmBlit",
+ 0x005f, 0x009f,
+ NULL, 0, &drm->channel->blit);
+ }
+
if (ret == 0) {
struct nvif_push *push = drm->channel->chan.push;
- ret = PUSH_WAIT(push, 2);
- if (ret == 0)
+ ret = PUSH_WAIT(push, 8);
+ if (ret == 0) {
+ if (device->info.chipset >= 0x11) {
+ PUSH_NVSQ(push, NV05F, 0x0000, drm->channel->blit.handle);
+ PUSH_NVSQ(push, NV09F, 0x0120, 0,
+ 0x0124, 1,
+ 0x0128, 2);
+ }
PUSH_NVSQ(push, NV_SW, 0x0000, drm->channel->nvsw.handle);
+ }
}
if (ret) {
- NV_ERROR(drm, "failed to allocate sw class, %d\n", ret);
+ NV_ERROR(drm, "failed to allocate sw or blit class, %d\n", ret);
nouveau_accel_gr_fini(drm);
return;
}
.clock = nv50_sor_clock,
.war_2 = g94_sor_war_2,
.war_3 = g94_sor_war_3,
+ .hdmi = &g84_sor_hdmi,
.dp = &g94_sor_dp,
};
pack_hdmi_infoframe(&avi, data, size);
nvkm_mask(device, 0x61c520 + soff, 0x00000001, 0x00000000);
- if (size)
+ if (!size)
return;
nvkm_wr32(device, 0x61c528 + soff, avi.header);
u64 falcons;
int ret, i;
- if (list_empty(&acr->hsfw)) {
+ if (list_empty(&acr->hsfw) || !acr->func || !acr->func->wpr_layout) {
nvkm_debug(subdev, "No HSFW(s)\n");
nvkm_acr_cleanup(acr);
return 0;
INIT_WORK(&fbdev->work, pan_worker);
- ret = omap_fbdev_client_hotplug(&helper->client);
- if (ret)
- drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
drm_client_register(&helper->client);
return;
.height = 54,
},
.bus_format = MEDIA_BUS_FMT_RGB888_1X24,
+ .connector_type = DRM_MODE_CONNECTOR_DPI,
.bus_flags = DRM_BUS_FLAG_DE_HIGH | DRM_BUS_FLAG_PIXDATA_DRIVE_POSEDGE,
};
.vsync_start = 480 + 49,
.vsync_end = 480 + 49 + 2,
.vtotal = 480 + 49 + 2 + 22,
+ .flags = DRM_MODE_FLAG_NVSYNC | DRM_MODE_FLAG_NHSYNC,
};
static const struct panel_desc powertip_ph800480t013_idf02 = {
goto err_drm_client_init;
}
- ret = radeon_fbdev_client_hotplug(&fb_helper->client);
- if (ret)
- drm_dbg_kms(rdev->ddev, "client hotplug ret=%d\n", ret);
-
drm_client_register(&fb_helper->client);
return;
{
struct drm_sched_job *job = container_of(cb, struct drm_sched_job,
finish_cb);
- int r;
+ unsigned long index;
dma_fence_put(f);
/* Wait for all dependencies to avoid data corruptions */
- while (!xa_empty(&job->dependencies)) {
- f = xa_erase(&job->dependencies, job->last_dependency++);
- r = dma_fence_add_callback(f, &job->finish_cb,
- drm_sched_entity_kill_jobs_cb);
- if (!r)
+ xa_for_each(&job->dependencies, index, f) {
+ struct drm_sched_fence *s_fence = to_drm_sched_fence(f);
+
+ if (s_fence && f == &s_fence->scheduled) {
+ /* The dependencies array had a reference on the scheduled
+ * fence, and the finished fence refcount might have
+ * dropped to zero. Use dma_fence_get_rcu() so we get
+ * a NULL fence in that case.
+ */
+ f = dma_fence_get_rcu(&s_fence->finished);
+
+ /* Now that we have a reference on the finished fence,
+ * we can release the reference the dependencies array
+ * had on the scheduled fence.
+ */
+ dma_fence_put(&s_fence->scheduled);
+ }
+
+ xa_erase(&job->dependencies, index);
+ if (f && !dma_fence_add_callback(f, &job->finish_cb,
+ drm_sched_entity_kill_jobs_cb))
return;
dma_fence_put(f);
drm_sched_job_dependency(struct drm_sched_job *job,
struct drm_sched_entity *entity)
{
- if (!xa_empty(&job->dependencies))
- return xa_erase(&job->dependencies, job->last_dependency++);
+ struct dma_fence *f;
+
+ /* We keep the fence around, so we can iterate over all dependencies
+ * in drm_sched_entity_kill_jobs_cb() to ensure all deps are signaled
+ * before killing the job.
+ */
+ f = xa_load(&job->dependencies, job->last_dependency);
+ if (f) {
+ job->last_dependency++;
+ return dma_fence_get(f);
+ }
if (job->sched->ops->prepare_job)
return job->sched->ops->prepare_job(job, entity);
kmem_cache_destroy(sched_fence_slab);
}
-void drm_sched_fence_scheduled(struct drm_sched_fence *fence)
+static void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
+ struct dma_fence *fence)
{
+ /*
+ * smp_store_release() to ensure another thread racing us
+ * in drm_sched_fence_set_deadline_finished() sees the
+ * fence's parent set before test_bit()
+ */
+ smp_store_release(&s_fence->parent, dma_fence_get(fence));
+ if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT,
+ &s_fence->finished.flags))
+ dma_fence_set_deadline(fence, s_fence->deadline);
+}
+
+void drm_sched_fence_scheduled(struct drm_sched_fence *fence,
+ struct dma_fence *parent)
+{
+ /* Set the parent before signaling the scheduled fence, such that
+ * any waiter expecting the parent to be filled after the job has
+ * been scheduled (which is the case for drivers delegating waits
+ * to some firmware) doesn't have to busy wait for the parent to show
+ * up.
+ */
+ if (!IS_ERR_OR_NULL(parent))
+ drm_sched_fence_set_parent(fence, parent);
+
dma_fence_signal(&fence->scheduled);
}
}
EXPORT_SYMBOL(to_drm_sched_fence);
-void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
- struct dma_fence *fence)
-{
- /*
- * smp_store_release() to ensure another thread racing us
- * in drm_sched_fence_set_deadline_finished() sees the
- * fence's parent set before test_bit()
- */
- smp_store_release(&s_fence->parent, dma_fence_get(fence));
- if (test_bit(DRM_SCHED_FENCE_FLAG_HAS_DEADLINE_BIT,
- &s_fence->finished.flags))
- dma_fence_set_deadline(fence, s_fence->deadline);
-}
-
struct drm_sched_fence *drm_sched_fence_alloc(struct drm_sched_entity *entity,
void *owner)
{
trace_drm_run_job(sched_job, entity);
fence = sched->ops->run_job(sched_job);
complete_all(&entity->entity_idle);
- drm_sched_fence_scheduled(s_fence);
+ drm_sched_fence_scheduled(s_fence, fence);
if (!IS_ERR_OR_NULL(fence)) {
- drm_sched_fence_set_parent(s_fence, fence);
/* Drop for original kref_init of the fence */
dma_fence_put(fence);
if (ret)
goto err_drm_client_init;
- ret = tegra_fbdev_client_hotplug(&helper->client);
- if (ret)
- drm_dbg_kms(dev, "client hotplug ret=%d\n", ret);
-
drm_client_register(&helper->client);
return;
goto out;
}
-bounce:
- ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop);
- if (ret == -EMULTIHOP) {
+ do {
+ ret = ttm_bo_handle_move_mem(bo, evict_mem, true, ctx, &hop);
+ if (ret != -EMULTIHOP)
+ break;
+
ret = ttm_bo_bounce_temp_buffer(bo, &evict_mem, ctx, &hop);
- if (ret) {
- if (ret != -ERESTARTSYS && ret != -EINTR)
- pr_err("Buffer eviction failed\n");
- ttm_resource_free(bo, &evict_mem);
- goto out;
- }
- /* try and move to final place now. */
- goto bounce;
+ } while (!ret);
+
+ if (ret) {
+ ttm_resource_free(bo, &evict_mem);
+ if (ret != -ERESTARTSYS && ret != -EINTR)
+ pr_err("Buffer eviction failed\n");
}
out:
return ret;
{
bool ret = false;
+ if (bo->pin_count) {
+ *locked = false;
+ *busy = false;
+ return false;
+ }
+
if (bo->base.resv == ctx->resv) {
dma_resv_assert_held(bo->base.resv);
if (ctx->allow_res_evict)
ret = ttm_bo_handle_move_mem(bo, evict_mem, true, &ctx, &hop);
if (unlikely(ret != 0)) {
WARN(ret == -EMULTIHOP, "Unexpected multihop in swapout - likely driver bug.\n");
+ ttm_resource_free(bo, &evict_mem);
goto out;
}
}
struct ttm_resource *res)
{
if (pos->last != res) {
+ if (pos->first == res)
+ pos->first = list_next_entry(res, lru);
list_move(&res->lru, &pos->last->lru);
pos->last = res;
}
{
struct ttm_lru_bulk_move_pos *pos = ttm_lru_bulk_move_pos(bulk, res);
- if (unlikely(pos->first == res && pos->last == res)) {
+ if (unlikely(WARN_ON(!pos->first || !pos->last) ||
+ (pos->first == res && pos->last == res))) {
pos->first = NULL;
pos->last = NULL;
} else if (pos->first == res) {
}
ret = ida_alloc_range(&iommu_global_pasid_ida, min, max, GFP_KERNEL);
- if (ret < min)
+ if (ret < 0)
goto out;
+
mm->pasid = ret;
ret = 0;
out:
ret = __iommu_group_set_domain_internal(
group, dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
if (WARN_ON(ret))
- goto out_free;
+ goto out_free_old;
} else {
ret = __iommu_group_set_domain(group, dom);
- if (ret) {
- iommu_domain_free(dom);
- group->default_domain = old_dom;
- return ret;
- }
+ if (ret)
+ goto err_restore_def_domain;
}
/*
for_each_group_device(group, gdev) {
ret = iommu_create_device_direct_mappings(dom, gdev->dev);
if (ret)
- goto err_restore;
+ goto err_restore_domain;
}
}
-err_restore:
- if (old_dom) {
+out_free_old:
+ if (old_dom)
+ iommu_domain_free(old_dom);
+ return ret;
+
+err_restore_domain:
+ if (old_dom)
__iommu_group_set_domain_internal(
group, old_dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
+err_restore_def_domain:
+ if (old_dom) {
iommu_domain_free(dom);
- old_dom = NULL;
+ group->default_domain = old_dom;
}
-out_free:
- if (old_dom)
- iommu_domain_free(old_dom);
return ret;
}
ret = nvme_global_check_duplicate_ids(ctrl->subsys, &info->ids);
if (ret) {
- dev_err(ctrl->device,
- "globally duplicate IDs for nsid %d\n", info->nsid);
+ /*
+ * We've found two different namespaces on two different
+ * subsystems that report the same ID. This is pretty nasty
+ * for anything that actually requires unique device
+ * identification. In the kernel we need this for multipathing,
+ * and in user space the /dev/disk/by-id/ links rely on it.
+ *
+ * If the device also claims to be multi-path capable back off
+ * here now and refuse to probe the second device as this is a
+ * recipe for data corruption. If not this is probably a
+ * cheap consumer device if on the PCIe bus, so let the user
+ * proceed and use the shiny toy, but warn that with changing
+ * probing order (which due to our async probing could just be a
+ * device taking longer to start up) the other device could show
+ * up at any time.
+ */
nvme_print_device_info(ctrl);
- return ret;
+ if ((ns->ctrl->ops->flags & NVME_F_FABRICS) || /* !PCIe */
+ ((ns->ctrl->subsys->cmic & NVME_CTRL_CMIC_MULTI_CTRL) &&
+ info->is_shared)) {
+ dev_err(ctrl->device,
+ "ignoring nsid %d because of duplicate IDs\n",
+ info->nsid);
+ return ret;
+ }
+
+ dev_err(ctrl->device,
+ "clearing duplicate IDs for nsid %d\n", info->nsid);
+ dev_err(ctrl->device,
+ "use of /dev/disk/by-id/ may cause data corruption\n");
+ memset(&info->ids.nguid, 0, sizeof(info->ids.nguid));
+ memset(&info->ids.uuid, 0, sizeof(info->ids.uuid));
+ memset(&info->ids.eui64, 0, sizeof(info->ids.eui64));
+ ctrl->quirks |= NVME_QUIRK_BOGUS_NID;
}
mutex_lock(&ctrl->subsys->lock);
/* create debugfs directory and attribute */
parent = debugfs_create_dir(dev_name, NULL);
- if (!parent) {
+ if (IS_ERR(parent)) {
pr_warn("%s: failed to create debugfs directory\n", dev_name);
return;
}
* the controller. Abort any ios on the association and let the
* create_association error path resolve things.
*/
- if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {
- __nvme_fc_abort_outstanding_ios(ctrl, true);
+ enum nvme_ctrl_state state;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ctrl->lock, flags);
+ state = ctrl->ctrl.state;
+ if (state == NVME_CTRL_CONNECTING) {
set_bit(ASSOC_FAILED, &ctrl->flags);
+ spin_unlock_irqrestore(&ctrl->lock, flags);
+ __nvme_fc_abort_outstanding_ios(ctrl, true);
+ dev_warn(ctrl->ctrl.device,
+ "NVME-FC{%d}: transport error during (re)connect\n",
+ ctrl->cnum);
return;
}
+ spin_unlock_irqrestore(&ctrl->lock, flags);
/* Otherwise, only proceed if in LIVE state - e.g. on first error */
- if (ctrl->ctrl.state != NVME_CTRL_LIVE)
+ if (state != NVME_CTRL_LIVE)
return;
dev_warn(ctrl->ctrl.device,
*/
ret = nvme_enable_ctrl(&ctrl->ctrl);
- if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
+ if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags))
+ ret = -EIO;
+ if (ret)
goto out_disconnect_admin_queue;
ctrl->ctrl.max_segments = ctrl->lport->ops->max_sgl_segments;
nvme_unquiesce_admin_queue(&ctrl->ctrl);
ret = nvme_init_ctrl_finish(&ctrl->ctrl, false);
- if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
+ if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags))
+ ret = -EIO;
+ if (ret)
goto out_disconnect_admin_queue;
/* sanity checks */
else
ret = nvme_fc_recreate_io_queues(ctrl);
}
- if (ret || test_bit(ASSOC_FAILED, &ctrl->flags))
- goto out_term_aen_ops;
+ spin_lock_irqsave(&ctrl->lock, flags);
+ if (!ret && test_bit(ASSOC_FAILED, &ctrl->flags))
+ ret = -EIO;
+ if (ret) {
+ spin_unlock_irqrestore(&ctrl->lock, flags);
+ goto out_term_aen_ops;
+ }
changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
+ spin_unlock_irqrestore(&ctrl->lock, flags);
ctrl->ctrl.nr_reconnects = 0;
out_term_aen_ops:
nvme_fc_term_aen_ops(ctrl);
out_disconnect_admin_queue:
+ dev_warn(ctrl->ctrl.device,
+ "NVME-FC{%d}: create_assoc failed, assoc_id %llx ret %d\n",
+ ctrl->cnum, ctrl->association_id, ret);
/* send a Disconnect(association) LS to fc-nvme target */
nvme_fc_xmt_disconnect_assoc(ctrl);
spin_lock_irqsave(&ctrl->lock, flags);
struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
dma_unmap_page(dev->dev, iod->meta_dma,
- rq_integrity_vec(req)->bv_len, rq_data_dir(req));
+ rq_integrity_vec(req)->bv_len, rq_dma_dir(req));
}
if (blk_rq_nr_phys_segments(req))
*/
if (nvme_should_reset(dev, csts)) {
nvme_warn_reset(dev, csts);
- nvme_dev_disable(dev, false);
- nvme_reset_ctrl(&dev->ctrl);
- return BLK_EH_DONE;
+ goto disable;
}
/*
"I/O %d QID %d timeout, reset controller\n",
req->tag, nvmeq->qid);
nvme_req(req)->flags |= NVME_REQ_CANCELLED;
- nvme_dev_disable(dev, false);
- nvme_reset_ctrl(&dev->ctrl);
-
- return BLK_EH_DONE;
+ goto disable;
}
if (atomic_dec_return(&dev->ctrl.abort_limit) < 0) {
* as the device then is in a faulty state.
*/
return BLK_EH_RESET_TIMER;
+
+disable:
+ if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING))
+ return BLK_EH_DONE;
+
+ nvme_dev_disable(dev, false);
+ if (nvme_try_sched_reset(&dev->ctrl))
+ nvme_unquiesce_io_queues(&dev->ctrl);
+ return BLK_EH_DONE;
}
static void nvme_free_queue(struct nvme_queue *nvmeq)
case pci_channel_io_frozen:
dev_warn(dev->ctrl.device,
"frozen state error detected, reset controller\n");
+ if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING)) {
+ nvme_dev_disable(dev, true);
+ return PCI_ERS_RESULT_DISCONNECT;
+ }
nvme_dev_disable(dev, false);
return PCI_ERS_RESULT_NEED_RESET;
case pci_channel_io_perm_failure:
dev_info(dev->ctrl.device, "restart after slot reset\n");
pci_restore_state(pdev);
- nvme_reset_ctrl(&dev->ctrl);
+ if (!nvme_try_sched_reset(&dev->ctrl))
+ nvme_unquiesce_io_queues(&dev->ctrl);
return PCI_ERS_RESULT_RECOVERED;
}
.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
{ PCI_DEVICE(0x144d, 0xa809), /* Samsung MZALQ256HBJD 256G */
.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
+ { PCI_DEVICE(0x144d, 0xa802), /* Samsung SM953 */
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
{ PCI_DEVICE(0x1cc4, 0x6303), /* UMIS RPJTJ512MGE1QDY 512G */
.driver_data = NVME_QUIRK_DISABLE_WRITE_ZEROES, },
{ PCI_DEVICE(0x1cc4, 0x6302), /* UMIS RPJTJ256MGE1QDY 256G */
* we have no UUID set
*/
if (uuid_is_null(&ids->uuid)) {
- dev_warn_ratelimited(dev,
+ dev_warn_once(dev,
"No UUID available providing old NGUID\n");
return sysfs_emit(buf, "%pU\n", ids->nguid);
}
int nvme_revalidate_zones(struct nvme_ns *ns)
{
struct request_queue *q = ns->queue;
- int ret;
- ret = blk_revalidate_disk_zones(ns->disk, NULL);
- if (!ret)
- blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append);
- return ret;
+ blk_queue_chunk_sectors(q, ns->zsze);
+ blk_queue_max_zone_append_sectors(q, ns->ctrl->max_zone_append);
+
+ return blk_revalidate_disk_zones(ns->disk, NULL);
}
static int nvme_set_max_append(struct nvme_ctrl *ctrl)
goto out_cleanup_tagset;
ctrl->ctrl.max_hw_sectors =
- (NVME_LOOP_MAX_SEGMENTS - 1) << (PAGE_SHIFT - 9);
+ (NVME_LOOP_MAX_SEGMENTS - 1) << PAGE_SECTORS_SHIFT;
nvme_unquiesce_admin_queue(&ctrl->ctrl);
* which depends on the host's memory fragementation. To solve this,
* ensure mdts is limited to the pages equal to the number of segments.
*/
- max_hw_sectors = min_not_zero(pctrl->max_segments << (PAGE_SHIFT - 9),
+ max_hw_sectors = min_not_zero(pctrl->max_segments << PAGE_SECTORS_SHIFT,
pctrl->max_hw_sectors);
/*
* nvmet_passthru_map_sg is limited to using a single bio so limit
* the mdts based on BIO_MAX_VECS as well
*/
- max_hw_sectors = min_not_zero(BIO_MAX_VECS << (PAGE_SHIFT - 9),
+ max_hw_sectors = min_not_zero(BIO_MAX_VECS << PAGE_SECTORS_SHIFT,
max_hw_sectors);
page_shift = NVME_CAP_MPSMIN(ctrl->cap) + 12;
uint64_t max_period = riscv_pmu_ctr_get_width_mask(event);
u64 init_val;
- if (WARN_ON_ONCE(!(event->hw.state & PERF_HES_STOPPED)))
- return;
-
if (flags & PERF_EF_RELOAD)
WARN_ON_ONCE(!(event->hw.state & PERF_HES_UPTODATE));
raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
}
-static int amd_gpio_set_debounce(struct gpio_chip *gc, unsigned offset,
- unsigned debounce)
+static int amd_gpio_set_debounce(struct amd_gpio *gpio_dev, unsigned int offset,
+ unsigned int debounce)
{
u32 time;
u32 pin_reg;
int ret = 0;
- unsigned long flags;
- struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
-
- raw_spin_lock_irqsave(&gpio_dev->lock, flags);
/* Use special handling for Pin0 debounce */
- pin_reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG);
- if (pin_reg & INTERNAL_GPIO0_DEBOUNCE)
- debounce = 0;
+ if (offset == 0) {
+ pin_reg = readl(gpio_dev->base + WAKE_INT_MASTER_REG);
+ if (pin_reg & INTERNAL_GPIO0_DEBOUNCE)
+ debounce = 0;
+ }
pin_reg = readl(gpio_dev->base + offset * 4);
pin_reg &= ~(DB_CNTRl_MASK << DB_CNTRL_OFF);
}
writel(pin_reg, gpio_dev->base + offset * 4);
- raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
return ret;
}
-static int amd_gpio_set_config(struct gpio_chip *gc, unsigned offset,
- unsigned long config)
-{
- u32 debounce;
-
- if (pinconf_to_config_param(config) != PIN_CONFIG_INPUT_DEBOUNCE)
- return -ENOTSUPP;
-
- debounce = pinconf_to_config_argument(config);
- return amd_gpio_set_debounce(gc, offset, debounce);
-}
-
#ifdef CONFIG_DEBUG_FS
static void amd_gpio_dbg_show(struct seq_file *s, struct gpio_chip *gc)
{
char *pin_sts;
char *interrupt_sts;
char *wake_sts;
- char *pull_up_sel;
char *orientation;
char debounce_value[40];
char *debounce_enable;
seq_printf(s, " %s|", wake_sts);
if (pin_reg & BIT(PULL_UP_ENABLE_OFF)) {
- if (pin_reg & BIT(PULL_UP_SEL_OFF))
- pull_up_sel = "8k";
- else
- pull_up_sel = "4k";
- seq_printf(s, "%s ↑|",
- pull_up_sel);
+ seq_puts(s, " ↑ |");
} else if (pin_reg & BIT(PULL_DOWN_ENABLE_OFF)) {
- seq_puts(s, " ↓|");
+ seq_puts(s, " ↓ |");
} else {
seq_puts(s, " |");
}
break;
case PIN_CONFIG_BIAS_PULL_UP:
- arg = (pin_reg >> PULL_UP_SEL_OFF) & (BIT(0) | BIT(1));
+ arg = (pin_reg >> PULL_UP_ENABLE_OFF) & BIT(0);
break;
case PIN_CONFIG_DRIVE_STRENGTH:
}
static int amd_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
- unsigned long *configs, unsigned num_configs)
+ unsigned long *configs, unsigned int num_configs)
{
int i;
u32 arg;
switch (param) {
case PIN_CONFIG_INPUT_DEBOUNCE:
- pin_reg &= ~DB_TMR_OUT_MASK;
- pin_reg |= arg & DB_TMR_OUT_MASK;
- break;
+ ret = amd_gpio_set_debounce(gpio_dev, pin, arg);
+ goto out_unlock;
case PIN_CONFIG_BIAS_PULL_DOWN:
pin_reg &= ~BIT(PULL_DOWN_ENABLE_OFF);
break;
case PIN_CONFIG_BIAS_PULL_UP:
- pin_reg &= ~BIT(PULL_UP_SEL_OFF);
- pin_reg |= (arg & BIT(0)) << PULL_UP_SEL_OFF;
pin_reg &= ~BIT(PULL_UP_ENABLE_OFF);
- pin_reg |= ((arg>>1) & BIT(0)) << PULL_UP_ENABLE_OFF;
+ pin_reg |= (arg & BIT(0)) << PULL_UP_ENABLE_OFF;
break;
case PIN_CONFIG_DRIVE_STRENGTH:
writel(pin_reg, gpio_dev->base + pin*4);
}
+out_unlock:
raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
return ret;
return 0;
}
+static int amd_gpio_set_config(struct gpio_chip *gc, unsigned int pin,
+ unsigned long config)
+{
+ struct amd_gpio *gpio_dev = gpiochip_get_data(gc);
+
+ return amd_pinconf_set(gpio_dev->pctrl, pin, &config, 1);
+}
+
static const struct pinconf_ops amd_pinconf_ops = {
.pin_config_get = amd_pinconf_get,
.pin_config_set = amd_pinconf_set,
#define WAKE_CNTRL_OFF_S4 15
#define PIN_STS_OFF 16
#define DRV_STRENGTH_SEL_OFF 17
-#define PULL_UP_SEL_OFF 19
#define PULL_UP_ENABLE_OFF 20
#define PULL_DOWN_ENABLE_OFF 21
#define OUTPUT_VALUE_OFF 22
static int rzg2l_dt_subnode_to_map(struct pinctrl_dev *pctldev,
struct device_node *np,
+ struct device_node *parent,
struct pinctrl_map **map,
unsigned int *num_maps,
unsigned int *index)
struct property *prop;
int ret, gsel, fsel;
const char **pin_fn;
+ const char *name;
const char *pin;
pinmux = of_find_property(np, "pinmux", NULL);
psel_val[i] = MUX_FUNC(value);
}
+ if (parent) {
+ name = devm_kasprintf(pctrl->dev, GFP_KERNEL, "%pOFn.%pOFn",
+ parent, np);
+ if (!name) {
+ ret = -ENOMEM;
+ goto done;
+ }
+ } else {
+ name = np->name;
+ }
+
/* Register a single pin group listing all the pins we read from DT */
- gsel = pinctrl_generic_add_group(pctldev, np->name, pins, num_pinmux, NULL);
+ gsel = pinctrl_generic_add_group(pctldev, name, pins, num_pinmux, NULL);
if (gsel < 0) {
ret = gsel;
goto done;
* Register a single group function where the 'data' is an array PSEL
* register values read from DT.
*/
- pin_fn[0] = np->name;
- fsel = pinmux_generic_add_function(pctldev, np->name, pin_fn, 1,
- psel_val);
+ pin_fn[0] = name;
+ fsel = pinmux_generic_add_function(pctldev, name, pin_fn, 1, psel_val);
if (fsel < 0) {
ret = fsel;
goto remove_group;
}
maps[idx].type = PIN_MAP_TYPE_MUX_GROUP;
- maps[idx].data.mux.group = np->name;
- maps[idx].data.mux.function = np->name;
+ maps[idx].data.mux.group = name;
+ maps[idx].data.mux.function = name;
idx++;
dev_dbg(pctrl->dev, "Parsed %pOF with %d pins\n", np, num_pinmux);
index = 0;
for_each_child_of_node(np, child) {
- ret = rzg2l_dt_subnode_to_map(pctldev, child, map,
+ ret = rzg2l_dt_subnode_to_map(pctldev, child, np, map,
num_maps, &index);
if (ret < 0) {
of_node_put(child);
}
if (*num_maps == 0) {
- ret = rzg2l_dt_subnode_to_map(pctldev, np, map,
+ ret = rzg2l_dt_subnode_to_map(pctldev, np, NULL, map,
num_maps, &index);
if (ret < 0)
goto done;
static int rzv2m_dt_subnode_to_map(struct pinctrl_dev *pctldev,
struct device_node *np,
+ struct device_node *parent,
struct pinctrl_map **map,
unsigned int *num_maps,
unsigned int *index)
struct property *prop;
int ret, gsel, fsel;
const char **pin_fn;
+ const char *name;
const char *pin;
pinmux = of_find_property(np, "pinmux", NULL);
psel_val[i] = MUX_FUNC(value);
}
+ if (parent) {
+ name = devm_kasprintf(pctrl->dev, GFP_KERNEL, "%pOFn.%pOFn",
+ parent, np);
+ if (!name) {
+ ret = -ENOMEM;
+ goto done;
+ }
+ } else {
+ name = np->name;
+ }
+
/* Register a single pin group listing all the pins we read from DT */
- gsel = pinctrl_generic_add_group(pctldev, np->name, pins, num_pinmux, NULL);
+ gsel = pinctrl_generic_add_group(pctldev, name, pins, num_pinmux, NULL);
if (gsel < 0) {
ret = gsel;
goto done;
* Register a single group function where the 'data' is an array PSEL
* register values read from DT.
*/
- pin_fn[0] = np->name;
- fsel = pinmux_generic_add_function(pctldev, np->name, pin_fn, 1,
- psel_val);
+ pin_fn[0] = name;
+ fsel = pinmux_generic_add_function(pctldev, name, pin_fn, 1, psel_val);
if (fsel < 0) {
ret = fsel;
goto remove_group;
}
maps[idx].type = PIN_MAP_TYPE_MUX_GROUP;
- maps[idx].data.mux.group = np->name;
- maps[idx].data.mux.function = np->name;
+ maps[idx].data.mux.group = name;
+ maps[idx].data.mux.function = name;
idx++;
dev_dbg(pctrl->dev, "Parsed %pOF with %d pins\n", np, num_pinmux);
index = 0;
for_each_child_of_node(np, child) {
- ret = rzv2m_dt_subnode_to_map(pctldev, child, map,
+ ret = rzv2m_dt_subnode_to_map(pctldev, child, np, map,
num_maps, &index);
if (ret < 0) {
of_node_put(child);
}
if (*num_maps == 0) {
- ret = rzv2m_dt_subnode_to_map(pctldev, np, map,
+ ret = rzv2m_dt_subnode_to_map(pctldev, np, NULL, map,
num_maps, &index);
if (ret < 0)
goto done;
const struct notification_limit *uv_l = &constr->under_voltage_limits;
const struct notification_limit *ov_l = &constr->over_voltage_limits;
+ if (!config->init_data) /* No config in DT, pointers will be invalid */
+ return 0;
+
/* make sure that only one severity is used to clarify if unchanged, enabled or disabled */
if ((!!uv_l->prot + !!uv_l->err + !!uv_l->warn) > 1) {
dev_err(config->dev, "%s: at most one voltage monitoring severity allowed!\n",
struct aac_aifcmd {
__le32 command; /* Tell host what type of notify this is */
__le32 seqnum; /* To allow ordering of reports (if necessary) */
- u8 data[1]; /* Undefined length (from kernel viewpoint) */
+ u8 data[]; /* Undefined length (from kernel viewpoint) */
};
/**
fnic_max_trace_entries = (trace_max_pages * PAGE_SIZE)/
FNIC_ENTRY_SIZE_BYTES;
- fnic_trace_buf_p = (unsigned long)vzalloc(trace_max_pages * PAGE_SIZE);
+ fnic_trace_buf_p = (unsigned long)vcalloc(trace_max_pages, PAGE_SIZE);
if (!fnic_trace_buf_p) {
printk(KERN_ERR PFX "Failed to allocate memory "
"for fnic_trace_buf_p\n");
if (rc)
return;
/* Reset HBA FCF states after successful unregister FCF */
+ spin_lock_irq(&phba->hbalock);
phba->fcf.fcf_flag = 0;
+ spin_unlock_irq(&phba->hbalock);
phba->fcf.current_rec.flag = 0;
/*
/* n2n */
struct fc_els_flogi plogi_els_payld;
-#define LOGIN_TEMPLATE_SIZE (sizeof(struct fc_els_flogi) - 4)
void *swl;
ql_dbg(ql_dbg_init, vha, 0x0163,
"-> fwdt%u template allocate template %#x words...\n",
j, risc_size);
- fwdt->template = vmalloc(risc_size * sizeof(*dcode));
+ fwdt->template = vmalloc_array(risc_size, sizeof(*dcode));
if (!fwdt->template) {
ql_log(ql_log_warn, vha, 0x0164,
"-> fwdt%u failed allocate template.\n", j);
ql_dbg(ql_dbg_init, vha, 0x0173,
"-> fwdt%u template allocate template %#x words...\n",
j, risc_size);
- fwdt->template = vmalloc(risc_size * sizeof(*dcode));
+ fwdt->template = vmalloc_array(risc_size, sizeof(*dcode));
if (!fwdt->template) {
ql_log(ql_log_warn, vha, 0x0174,
"-> fwdt%u failed allocate template.\n", j);
memset(ptr, 0, sizeof(struct els_plogi_payload));
memset(resp_ptr, 0, sizeof(struct els_plogi_payload));
memcpy(elsio->u.els_plogi.els_plogi_pyld->data,
- &ha->plogi_els_payld.fl_csp, LOGIN_TEMPLATE_SIZE);
+ (void *)&ha->plogi_els_payld + offsetof(struct fc_els_flogi, fl_csp),
+ sizeof(ha->plogi_els_payld) - offsetof(struct fc_els_flogi, fl_csp));
elsio->u.els_plogi.els_cmd = els_opcode;
elsio->u.els_plogi.els_plogi_pyld->opcode = els_opcode;
pkt = __qla2x00_alloc_iocbs(sp->qpair, sp);
if (!pkt) {
- rval = EAGAIN;
+ rval = -EAGAIN;
ql_log(ql_log_warn, vha, 0x700c,
"qla2x00_alloc_iocbs failed.\n");
goto done;
static int submit_queues = DEF_SUBMIT_QUEUES; /* > 1 for multi-queue (mq) */
static int poll_queues; /* iouring iopoll interface.*/
-static DEFINE_RWLOCK(atomic_rw);
-static DEFINE_RWLOCK(atomic_rw2);
-
-static rwlock_t *ramdisk_lck_a[2];
-
static char sdebug_proc_name[] = MY_NAME;
static const char *my_name = MY_NAME;
int k, ret, hosts_to_add;
int idx = -1;
- ramdisk_lck_a[0] = &atomic_rw;
- ramdisk_lck_a[1] = &atomic_rw2;
-
if (sdebug_ndelay >= 1000 * 1000 * 1000) {
pr_warn("ndelay must be less than 1 second, ignored\n");
sdebug_ndelay = 0;
struct request_queue *q = disk->queue;
u32 zone_blocks = sdkp->early_zone_info.zone_blocks;
unsigned int nr_zones = sdkp->early_zone_info.nr_zones;
- u32 max_append;
int ret = 0;
unsigned int flags;
goto unlock;
}
+ blk_queue_chunk_sectors(q,
+ logical_to_sectors(sdkp->device, zone_blocks));
+ blk_queue_max_zone_append_sectors(q,
+ q->limits.max_segments << PAGE_SECTORS_SHIFT);
+
ret = blk_revalidate_disk_zones(disk, sd_zbc_revalidate_zones_cb);
memalloc_noio_restore(flags);
goto unlock;
}
- max_append = min_t(u32, logical_to_sectors(sdkp->device, zone_blocks),
- q->limits.max_segments << PAGE_SECTORS_SHIFT);
- max_append = min_t(u32, max_append, queue_max_hw_sectors(q));
-
- blk_queue_max_zone_append_sectors(q, max_append);
-
sd_zbc_print_zones(sdkp);
unlock:
#define SRB_STATUS_INVALID_REQUEST 0x06
#define SRB_STATUS_DATA_OVERRUN 0x12
#define SRB_STATUS_INVALID_LUN 0x20
+#define SRB_STATUS_INTERNAL_ERROR 0x30
#define SRB_STATUS(status) \
(status & ~(SRB_STATUS_AUTOSENSE_VALID | SRB_STATUS_QUEUE_FROZEN))
case SRB_STATUS_ERROR:
case SRB_STATUS_ABORTED:
case SRB_STATUS_INVALID_REQUEST:
+ case SRB_STATUS_INTERNAL_ERROR:
if (vm_srb->srb_status & SRB_STATUS_AUTOSENSE_VALID) {
/* Check for capacity change */
if ((asc == 0x2a) && (ascq == 0x9)) {
SPI_MSG_DATA_SIZE,
};
-#define BCM63XX_SPI_MAX_PREPEND 15
+#define BCM63XX_SPI_MAX_PREPEND 7
#define BCM63XX_SPI_MAX_CS 8
#define BCM63XX_SPI_BUS_NUM 0
if ((sdd->cur_mode & SPI_LOOP) && sdd->port_conf->has_loopback)
val |= S3C64XX_SPI_MODE_SELF_LOOPBACK;
+ else
+ val &= ~S3C64XX_SPI_MODE_SELF_LOOPBACK;
writel(val, regs + S3C64XX_SPI_MODE_CFG);
return ret;
}
+static void ufshcd_set_timestamp_attr(struct ufs_hba *hba)
+{
+ int err;
+ struct ufs_query_req *request = NULL;
+ struct ufs_query_res *response = NULL;
+ struct ufs_dev_info *dev_info = &hba->dev_info;
+ struct utp_upiu_query_v4_0 *upiu_data;
+
+ if (dev_info->wspecversion < 0x400)
+ return;
+
+ ufshcd_hold(hba);
+
+ mutex_lock(&hba->dev_cmd.lock);
+
+ ufshcd_init_query(hba, &request, &response,
+ UPIU_QUERY_OPCODE_WRITE_ATTR,
+ QUERY_ATTR_IDN_TIMESTAMP, 0, 0);
+
+ request->query_func = UPIU_QUERY_FUNC_STANDARD_WRITE_REQUEST;
+
+ upiu_data = (struct utp_upiu_query_v4_0 *)&request->upiu_req;
+
+ put_unaligned_be64(ktime_get_real_ns(), &upiu_data->osf3);
+
+ err = ufshcd_exec_dev_cmd(hba, DEV_CMD_TYPE_QUERY, QUERY_REQ_TIMEOUT);
+
+ if (err)
+ dev_err(hba->dev, "%s: failed to set timestamp %d\n",
+ __func__, err);
+
+ mutex_unlock(&hba->dev_cmd.lock);
+ ufshcd_release(hba);
+}
+
/**
* ufshcd_add_lus - probe and add UFS logical units
* @hba: per-adapter instance
ufshcd_set_ufs_dev_active(hba);
ufshcd_force_reset_auto_bkops(hba);
+ ufshcd_set_timestamp_attr(hba);
+
/* Gear up to HS gear if supported */
if (hba->max_pwr_info.is_valid) {
/*
ret = ufshcd_set_dev_pwr_mode(hba, UFS_ACTIVE_PWR_MODE);
if (ret)
goto set_old_link_state;
+ ufshcd_set_timestamp_attr(hba);
}
if (ufshcd_keep_autobkops_enabled_except_suspend(hba))
config SCSI_UFS_MEDIATEK
tristate "Mediatek specific hooks to UFS controller platform driver"
depends on SCSI_UFSHCD_PLATFORM && ARCH_MEDIATEK
+ depends on RESET_CONTROLLER
select PHY_MTK_UFS
select RESET_TI_SYSCON
help
{
struct btrfs_fs_info *fs_info = bg->fs_info;
- trace_btrfs_add_unused_block_group(bg);
spin_lock(&fs_info->unused_bgs_lock);
if (list_empty(&bg->bg_list)) {
btrfs_get_block_group(bg);
+ trace_btrfs_add_unused_block_group(bg);
list_add_tail(&bg->bg_list, &fs_info->unused_bgs);
- } else {
+ } else if (!test_bit(BLOCK_GROUP_FLAG_NEW, &bg->runtime_flags)) {
/* Pull out the block group from the reclaim_bgs list. */
+ trace_btrfs_add_unused_block_group(bg);
list_move_tail(&bg->bg_list, &fs_info->unused_bgs);
}
spin_unlock(&fs_info->unused_bgs_lock);
/* Shouldn't have super stripes in sequential zones */
if (zoned && nr) {
+ kfree(logical);
btrfs_err(fs_info,
"zoned: block group %llu must not contain super block",
cache->start);
next:
btrfs_delayed_refs_rsv_release(fs_info, 1);
list_del_init(&block_group->bg_list);
+ clear_bit(BLOCK_GROUP_FLAG_NEW, &block_group->runtime_flags);
}
btrfs_trans_release_chunk_metadata(trans);
}
if (!cache)
return ERR_PTR(-ENOMEM);
+ /*
+ * Mark it as new before adding it to the rbtree of block groups or any
+ * list, so that no other task finds it and calls btrfs_mark_bg_unused()
+ * before the new flag is set.
+ */
+ set_bit(BLOCK_GROUP_FLAG_NEW, &cache->runtime_flags);
+
cache->length = size;
set_free_space_tree_thresholds(cache);
cache->flags = type;
BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE,
/* Indicate that the block group is placed on a sequential zone */
BLOCK_GROUP_FLAG_SEQUENTIAL_ZONE,
+ /*
+ * Indicate that the block group is in the list of new block groups of a
+ * transaction.
+ */
+ BLOCK_GROUP_FLAG_NEW,
};
enum btrfs_caching_type {
void btrfs_add_delayed_iput(struct btrfs_inode *inode)
{
struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ unsigned long flags;
if (atomic_add_unless(&inode->vfs_inode.i_count, -1, 1))
return;
atomic_inc(&fs_info->nr_delayed_iputs);
- spin_lock(&fs_info->delayed_iput_lock);
+ /*
+ * Need to be irq safe here because we can be called from either an irq
+ * context (see bio.c and btrfs_put_ordered_extent()) or a non-irq
+ * context.
+ */
+ spin_lock_irqsave(&fs_info->delayed_iput_lock, flags);
ASSERT(list_empty(&inode->delayed_iput));
list_add_tail(&inode->delayed_iput, &fs_info->delayed_iputs);
- spin_unlock(&fs_info->delayed_iput_lock);
+ spin_unlock_irqrestore(&fs_info->delayed_iput_lock, flags);
if (!test_bit(BTRFS_FS_CLEANER_RUNNING, &fs_info->flags))
wake_up_process(fs_info->cleaner_kthread);
}
struct btrfs_inode *inode)
{
list_del_init(&inode->delayed_iput);
- spin_unlock(&fs_info->delayed_iput_lock);
+ spin_unlock_irq(&fs_info->delayed_iput_lock);
iput(&inode->vfs_inode);
if (atomic_dec_and_test(&fs_info->nr_delayed_iputs))
wake_up(&fs_info->delayed_iputs_wait);
- spin_lock(&fs_info->delayed_iput_lock);
+ spin_lock_irq(&fs_info->delayed_iput_lock);
}
static void btrfs_run_delayed_iput(struct btrfs_fs_info *fs_info,
struct btrfs_inode *inode)
{
if (!list_empty(&inode->delayed_iput)) {
- spin_lock(&fs_info->delayed_iput_lock);
+ spin_lock_irq(&fs_info->delayed_iput_lock);
if (!list_empty(&inode->delayed_iput))
run_delayed_iput_locked(fs_info, inode);
- spin_unlock(&fs_info->delayed_iput_lock);
+ spin_unlock_irq(&fs_info->delayed_iput_lock);
}
}
void btrfs_run_delayed_iputs(struct btrfs_fs_info *fs_info)
{
-
- spin_lock(&fs_info->delayed_iput_lock);
+ /*
+ * btrfs_put_ordered_extent() can run in irq context (see bio.c), which
+ * calls btrfs_add_delayed_iput() and that needs to lock
+ * fs_info->delayed_iput_lock. So we need to disable irqs here to
+ * prevent a deadlock.
+ */
+ spin_lock_irq(&fs_info->delayed_iput_lock);
while (!list_empty(&fs_info->delayed_iputs)) {
struct btrfs_inode *inode;
inode = list_first_entry(&fs_info->delayed_iputs,
struct btrfs_inode, delayed_iput);
run_delayed_iput_locked(fs_info, inode);
- cond_resched_lock(&fs_info->delayed_iput_lock);
+ if (need_resched()) {
+ spin_unlock_irq(&fs_info->delayed_iput_lock);
+ cond_resched();
+ spin_lock_irq(&fs_info->delayed_iput_lock);
+ }
}
- spin_unlock(&fs_info->delayed_iput_lock);
+ spin_unlock_irq(&fs_info->delayed_iput_lock);
}
/*
found_key.type = BTRFS_INODE_ITEM_KEY;
found_key.offset = 0;
inode = btrfs_iget(fs_info->sb, last_objectid, root);
- ret = PTR_ERR_OR_ZERO(inode);
- if (ret && ret != -ENOENT)
- goto out;
+ if (IS_ERR(inode)) {
+ ret = PTR_ERR(inode);
+ inode = NULL;
+ if (ret != -ENOENT)
+ goto out;
+ }
- if (ret == -ENOENT && root == fs_info->tree_root) {
+ if (!inode && root == fs_info->tree_root) {
struct btrfs_root *dead_root;
int is_dead_root = 0;
* deleted but wasn't. The inode number may have been reused,
* but either way, we can delete the orphan item.
*/
- if (ret == -ENOENT || inode->i_nlink) {
- if (!ret) {
+ if (!inode || inode->i_nlink) {
+ if (inode) {
ret = btrfs_drop_verity_items(BTRFS_I(inode));
iput(inode);
+ inode = NULL;
if (ret)
goto out;
}
trans = btrfs_start_transaction(root, 1);
if (IS_ERR(trans)) {
ret = PTR_ERR(trans);
- iput(inode);
goto out;
}
btrfs_debug(fs_info, "auto deleting %Lu",
ret = btrfs_del_orphan_item(trans, root,
found_key.objectid);
btrfs_end_transaction(trans);
- if (ret) {
- iput(inode);
+ if (ret)
goto out;
- }
continue;
}
ret = -ENOMEM;
goto out;
}
- ret = set_page_extent_mapped(page);
- if (ret < 0)
- goto out_unlock;
if (!PageUptodate(page)) {
ret = btrfs_read_folio(NULL, page_folio(page));
goto out_unlock;
}
}
+
+ /*
+ * We unlock the page after the io is completed and then re-lock it
+ * above. release_folio() could have come in between that and cleared
+ * PagePrivate(), but left the page in the mapping. Set the page mapped
+ * here to make sure the subpage metadata is properly set up.
+ */
+ ret = set_page_extent_mapped(page);
+ if (ret < 0)
+ goto out_unlock;
+
wait_on_page_writeback(page);
lock_extent(io_tree, block_start, block_end, &cached_state);
ret = btrfs_extract_ordered_extent(bbio, dio_data->ordered);
if (ret) {
- bbio->bio.bi_status = errno_to_blk_status(ret);
- btrfs_dio_end_io(bbio);
+ btrfs_finish_ordered_extent(dio_data->ordered, NULL,
+ file_offset, dip->bytes,
+ !ret);
+ bio->bi_status = errno_to_blk_status(ret);
+ iomap_dio_bio_end_io(bio);
return;
}
}
ulist_free(entry->old_roots);
kfree(entry);
}
+ *root = RB_ROOT;
}
static void index_rbio_pages(struct btrfs_raid_bio *rbio);
static int alloc_rbio_pages(struct btrfs_raid_bio *rbio);
-static int finish_parity_scrub(struct btrfs_raid_bio *rbio, int need_check);
+static int finish_parity_scrub(struct btrfs_raid_bio *rbio);
static void scrub_rbio_work_locked(struct work_struct *work);
static void free_raid_bio_pointers(struct btrfs_raid_bio *rbio)
return 0;
}
-static int finish_parity_scrub(struct btrfs_raid_bio *rbio, int need_check)
+static int finish_parity_scrub(struct btrfs_raid_bio *rbio)
{
struct btrfs_io_context *bioc = rbio->bioc;
const u32 sectorsize = bioc->fs_info->sectorsize;
*/
clear_bit(RBIO_CACHE_READY_BIT, &rbio->flags);
- if (!need_check)
- goto writeback;
-
p_sector.page = alloc_page(GFP_NOFS);
if (!p_sector.page)
return -ENOMEM;
q_sector.page = NULL;
}
-writeback:
/*
* time to start writing. Make bios for everything from the
* higher layers (the bio_list in our rbio) and our p/q. Ignore
static void scrub_rbio(struct btrfs_raid_bio *rbio)
{
- bool need_check = false;
int sector_nr;
int ret;
* We have every sector properly prepared. Can finish the scrub
* and writeback the good content.
*/
- ret = finish_parity_scrub(rbio, need_check);
+ ret = finish_parity_scrub(rbio);
wait_event(rbio->io_wait, atomic_read(&rbio->stripes_pending) == 0);
for (sector_nr = 0; sector_nr < rbio->stripe_nsectors; sector_nr++) {
int found_errors;
return has_single_bit_set(flags);
}
-static inline int balance_need_close(struct btrfs_fs_info *fs_info)
-{
- /* cancel requested || normal exit path */
- return atomic_read(&fs_info->balance_cancel_req) ||
- (atomic_read(&fs_info->balance_pause_req) == 0 &&
- atomic_read(&fs_info->balance_cancel_req) == 0);
-}
-
/*
* Validate target profile against allowed profiles and return true if it's OK.
* Otherwise print the error message and return false.
u64 num_devices;
unsigned seq;
bool reducing_redundancy;
+ bool paused = false;
int i;
if (btrfs_fs_closing(fs_info) ||
if (ret == -ECANCELED && atomic_read(&fs_info->balance_pause_req)) {
btrfs_info(fs_info, "balance: paused");
btrfs_exclop_balance(fs_info, BTRFS_EXCLOP_BALANCE_PAUSED);
+ paused = true;
}
/*
* Balance can be canceled by:
btrfs_update_ioctl_balance_args(fs_info, bargs);
}
- if ((ret && ret != -ECANCELED && ret != -ENOSPC) ||
- balance_need_close(fs_info)) {
+ /* We didn't pause, we can clean everything up. */
+ if (!paused) {
reset_balance_state(fs_info);
btrfs_exclop_finish(fs_info);
}
(op == BTRFS_MAP_READ || !dev_replace_is_ongoing ||
!dev_replace->tgtdev)) {
set_io_stripe(smap, map, stripe_index, stripe_offset, stripe_nr);
- *mirror_num_ret = mirror_num;
+ if (mirror_num_ret)
+ *mirror_num_ret = mirror_num;
*bioc_ret = NULL;
ret = 0;
goto out;
*maptype = 0;
return inpage;
}
- kunmap_atomic(inpage);
+ kunmap_local(inpage);
might_sleep();
src = erofs_vm_map_ram(rq->in, ctx->inpages);
if (!src)
src = erofs_get_pcpubuf(ctx->inpages);
if (!src) {
DBG_BUGON(1);
- kunmap_atomic(inpage);
+ kunmap_local(inpage);
return ERR_PTR(-EFAULT);
}
min_t(unsigned int, total, PAGE_SIZE - *inputmargin);
if (!inpage)
- inpage = kmap_atomic(*in);
+ inpage = kmap_local_page(*in);
memcpy(tmp, inpage + *inputmargin, page_copycnt);
- kunmap_atomic(inpage);
+ kunmap_local(inpage);
inpage = NULL;
tmp += page_copycnt;
total -= page_copycnt;
int ret, maptype;
DBG_BUGON(*rq->in == NULL);
- headpage = kmap_atomic(*rq->in);
+ headpage = kmap_local_page(*rq->in);
/* LZ4 decompression inplace is only safe if zero_padding is enabled */
if (erofs_sb_has_zero_padding(EROFS_SB(rq->sb))) {
min_t(unsigned int, rq->inputsize,
rq->sb->s_blocksize - rq->pageofs_in));
if (ret) {
- kunmap_atomic(headpage);
+ kunmap_local(headpage);
return ret;
}
may_inplace = !((rq->pageofs_in + rq->inputsize) &
}
if (maptype == 0) {
- kunmap_atomic(headpage);
+ kunmap_local(headpage);
} else if (maptype == 1) {
vm_unmap_ram(src, ctx->inpages);
} else if (maptype == 2) {
/* one optimized fast path only for non bigpcluster cases yet */
if (ctx.inpages == 1 && ctx.outpages == 1 && !rq->inplace_io) {
DBG_BUGON(!*rq->out);
- dst = kmap_atomic(*rq->out);
+ dst = kmap_local_page(*rq->out);
dst_maptype = 0;
goto dstmap_out;
}
dstmap_out:
ret = z_erofs_lz4_decompress_mem(&ctx, dst + rq->pageofs_out);
if (!dst_maptype)
- kunmap_atomic(dst);
+ kunmap_local(dst);
else if (dst_maptype == 2)
vm_unmap_ram(dst, ctx.outpages);
return ret;
const unsigned int lefthalf = rq->outputsize - righthalf;
const unsigned int interlaced_offset =
rq->alg == Z_EROFS_COMPRESSION_SHIFTED ? 0 : rq->pageofs_out;
- unsigned char *src, *dst;
+ u8 *src;
if (outpages > 2 && rq->alg == Z_EROFS_COMPRESSION_SHIFTED) {
DBG_BUGON(1);
}
src = kmap_local_page(rq->in[inpages - 1]) + rq->pageofs_in;
- if (rq->out[0]) {
- dst = kmap_local_page(rq->out[0]);
- memcpy(dst + rq->pageofs_out, src + interlaced_offset,
- righthalf);
- kunmap_local(dst);
- }
+ if (rq->out[0])
+ memcpy_to_page(rq->out[0], rq->pageofs_out,
+ src + interlaced_offset, righthalf);
if (outpages > inpages) {
DBG_BUGON(!rq->out[outpages - 1]);
if (rq->out[outpages - 1] != rq->in[inpages - 1]) {
- dst = kmap_local_page(rq->out[outpages - 1]);
- memcpy(dst, interlaced_offset ? src :
- (src + righthalf), lefthalf);
- kunmap_local(dst);
+ memcpy_to_page(rq->out[outpages - 1], 0, src +
+ (interlaced_offset ? 0 : righthalf),
+ lefthalf);
} else if (!interlaced_offset) {
memmove(src, src + righthalf, lefthalf);
+ flush_dcache_page(rq->in[inpages - 1]);
}
}
kunmap_local(src);
inode->i_flags &= ~S_DAX;
if (test_opt(&sbi->opt, DAX_ALWAYS) && S_ISREG(inode->i_mode) &&
- vi->datalayout == EROFS_INODE_FLAT_PLAIN)
+ (vi->datalayout == EROFS_INODE_FLAT_PLAIN ||
+ vi->datalayout == EROFS_INODE_CHUNK_BASED))
inode->i_flags |= S_DAX;
if (!nblks)
*/
tight &= (fe->mode > Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE);
- cur = end - min_t(unsigned int, offset + end - map->m_la, end);
+ cur = end - min_t(erofs_off_t, offset + end - map->m_la, end);
if (!(map->m_flags & EROFS_MAP_MAPPED)) {
zero_user_segment(page, cur, end);
goto next_part;
}
cur = map->m_la + map->m_llen - 1;
- while (cur >= end) {
+ while ((cur >= end) && (cur < i_size_read(inode))) {
pgoff_t index = cur >> PAGE_SHIFT;
struct page *page;
spin_unlock(&fi->lock);
}
kfree(forget);
- if (ret == -ENOMEM)
+ if (ret == -ENOMEM || ret == -EINTR)
goto out;
if (ret || fuse_invalid_attr(&outarg.attr) ||
fuse_stale_inode(inode, outarg.generation, &outarg.attr))
goto out_put_forget;
err = -EIO;
- if (!outarg->nodeid)
- goto out_put_forget;
if (fuse_invalid_attr(&outarg->attr))
goto out_put_forget;
process_init_limits(fc, arg);
if (arg->minor >= 6) {
- u64 flags = arg->flags | (u64) arg->flags2 << 32;
+ u64 flags = arg->flags;
+
+ if (flags & FUSE_INIT_EXT)
+ flags |= (u64) arg->flags2 << 32;
ra_pages = arg->max_readahead / PAGE_SIZE;
if (flags & FUSE_ASYNC_READ)
FUSE_ABORT_ERROR | FUSE_MAX_PAGES | FUSE_CACHE_SYMLINKS |
FUSE_NO_OPENDIR_SUPPORT | FUSE_EXPLICIT_INVAL_DATA |
FUSE_HANDLE_KILLPRIV_V2 | FUSE_SETXATTR_EXT | FUSE_INIT_EXT |
- FUSE_SECURITY_CTX | FUSE_CREATE_SUPP_GROUP;
+ FUSE_SECURITY_CTX | FUSE_CREATE_SUPP_GROUP |
+ FUSE_HAS_EXPIRE_ONLY;
#ifdef CONFIG_FUSE_DAX
if (fm->fc->dax)
flags |= FUSE_MAP_ALIGNMENT;
#include <linux/compat.h>
#include <linux/fileattr.h>
-static ssize_t fuse_send_ioctl(struct fuse_mount *fm, struct fuse_args *args)
+static ssize_t fuse_send_ioctl(struct fuse_mount *fm, struct fuse_args *args,
+ struct fuse_ioctl_out *outarg)
{
- ssize_t ret = fuse_simple_request(fm, args);
+ ssize_t ret;
+
+ args->out_args[0].size = sizeof(*outarg);
+ args->out_args[0].value = outarg;
+
+ ret = fuse_simple_request(fm, args);
/* Translate ENOSYS, which shouldn't be returned from fs */
if (ret == -ENOSYS)
ret = -ENOTTY;
+ if (ret >= 0 && outarg->result == -ENOSYS)
+ outarg->result = -ENOTTY;
+
return ret;
}
}
ap.args.out_numargs = 2;
- ap.args.out_args[0].size = sizeof(outarg);
- ap.args.out_args[0].value = &outarg;
ap.args.out_args[1].size = out_size;
ap.args.out_pages = true;
ap.args.out_argvar = true;
- transferred = fuse_send_ioctl(fm, &ap.args);
+ transferred = fuse_send_ioctl(fm, &ap.args, &outarg);
err = transferred;
if (transferred < 0)
goto out;
args.in_args[1].size = inarg.in_size;
args.in_args[1].value = ptr;
args.out_numargs = 2;
- args.out_args[0].size = sizeof(outarg);
- args.out_args[0].value = &outarg;
args.out_args[1].size = inarg.out_size;
args.out_args[1].value = ptr;
- err = fuse_send_ioctl(fm, &args);
+ err = fuse_send_ioctl(fm, &args, &outarg);
if (!err) {
if (outarg.result < 0)
err = outarg.result;
while ((ret = iomap_iter(&iter, ops)) > 0)
iter.processed = iomap_write_iter(&iter, i);
- if (unlikely(ret < 0))
+ if (unlikely(iter.pos == iocb->ki_pos))
return ret;
ret = iter.pos - iocb->ki_pos;
- iocb->ki_pos += ret;
+ iocb->ki_pos = iter.pos;
return ret;
}
EXPORT_SYMBOL_GPL(iomap_file_buffered_write);
/* Check for STATUS_IO_TIMEOUT */
bool (*is_status_io_timeout)(char *buf);
/* Check for STATUS_NETWORK_NAME_DELETED */
- void (*is_network_name_deleted)(char *buf, struct TCP_Server_Info *srv);
+ bool (*is_network_name_deleted)(char *buf, struct TCP_Server_Info *srv);
};
struct smb_version_values {
param_offset = offsetof(struct smb_com_transaction2_spi_req,
InformationLevel) - 4;
offset = param_offset + params;
- parm_data = ((char *) &pSMB->hdr.Protocol) + offset;
+ parm_data = ((char *)pSMB) + sizeof(pSMB->hdr.smb_buf_length) + offset;
pSMB->ParameterOffset = cpu_to_le16(param_offset);
/* convert to on the wire format for POSIX ACL */
#define TLINK_IDLE_EXPIRE (600 * HZ)
/* Drop the connection to not overload the server */
-#define NUM_STATUS_IO_TIMEOUT 5
+#define MAX_STATUS_IO_TIMEOUT 5
static int ip_connect(struct TCP_Server_Info *server);
static int generic_ip_connect(struct TCP_Server_Info *server);
struct mid_q_entry *mids[MAX_COMPOUND];
char *bufs[MAX_COMPOUND];
unsigned int noreclaim_flag, num_io_timeout = 0;
+ bool pending_reconnect = false;
noreclaim_flag = memalloc_noreclaim_save();
cifs_dbg(FYI, "Demultiplex PID: %d\n", task_pid_nr(current));
cifs_dbg(FYI, "RFC1002 header 0x%x\n", pdu_length);
if (!is_smb_response(server, buf[0]))
continue;
+
+ pending_reconnect = false;
next_pdu:
server->pdu_size = pdu_length;
if (server->ops->is_status_io_timeout &&
server->ops->is_status_io_timeout(buf)) {
num_io_timeout++;
- if (num_io_timeout > NUM_STATUS_IO_TIMEOUT) {
- cifs_reconnect(server, false);
+ if (num_io_timeout > MAX_STATUS_IO_TIMEOUT) {
+ cifs_server_dbg(VFS,
+ "Number of request timeouts exceeded %d. Reconnecting",
+ MAX_STATUS_IO_TIMEOUT);
+
+ pending_reconnect = true;
num_io_timeout = 0;
- continue;
}
}
if (mids[i] != NULL) {
mids[i]->resp_buf_size = server->pdu_size;
- if (bufs[i] && server->ops->is_network_name_deleted)
- server->ops->is_network_name_deleted(bufs[i],
- server);
+ if (bufs[i] != NULL) {
+ if (server->ops->is_network_name_deleted &&
+ server->ops->is_network_name_deleted(bufs[i],
+ server)) {
+ cifs_server_dbg(FYI,
+ "Share deleted. Reconnect needed");
+ }
+ }
if (!mids[i]->multiRsp || mids[i]->multiEnd)
mids[i]->callback(mids[i]);
buf = server->smallbuf;
goto next_pdu;
}
+
+ /* do this reconnect at the very end after processing all MIDs */
+ if (pending_reconnect)
+ cifs_reconnect(server, true);
+
} /* end while !EXITING */
/* buffer usually freed in free_mid - need to free it here on exit */
return rc;
}
+/*
+ * Track individual DFS referral servers used by new DFS mount.
+ *
+ * On success, their lifetime will be shared by final tcon (dfs_ses_list).
+ * Otherwise, they will be put by dfs_put_root_smb_sessions() in cifs_mount().
+ */
static int add_root_smb_session(struct cifs_mount_ctx *mnt_ctx)
{
struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
INIT_LIST_HEAD(&root_ses->list);
spin_lock(&cifs_tcp_ses_lock);
- ses->ses_count++;
+ cifs_smb_ses_inc_refcount(ses);
spin_unlock(&cifs_tcp_ses_lock);
root_ses->ses = ses;
list_add_tail(&root_ses->list, &mnt_ctx->dfs_ses_list);
}
+ /* Select new DFS referral server so that new referrals go through it */
ctx->dfs_root_ses = ses;
return 0;
}
int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx, bool *isdfs)
{
struct smb3_fs_context *ctx = mnt_ctx->fs_ctx;
- struct cifs_ses *ses;
bool nodfs = ctx->nodfs;
int rc;
}
*isdfs = true;
- /*
- * Prevent DFS root session of being put in the first call to
- * cifs_mount_put_conns(). If another DFS root server was not found
- * while chasing the referrals (@ctx->dfs_root_ses == @ses), then we
- * can safely put extra refcount of @ses.
- */
- ses = mnt_ctx->ses;
- mnt_ctx->ses = NULL;
- mnt_ctx->server = NULL;
- rc = __dfs_mount_share(mnt_ctx);
- if (ses == ctx->dfs_root_ses)
- cifs_put_smb_ses(ses);
-
- return rc;
+ add_root_smb_session(mnt_ctx);
+ return __dfs_mount_share(mnt_ctx);
}
/* Update dfs referral path of superblock */
cfile = file->private_data;
file->private_data = NULL;
dclose = kmalloc(sizeof(struct cifs_deferred_close), GFP_KERNEL);
- if ((cinode->oplock == CIFS_CACHE_RHW_FLG) &&
- cinode->lease_granted &&
+ if ((cifs_sb->ctx->closetimeo && cinode->oplock == CIFS_CACHE_RHW_FLG)
+ && cinode->lease_granted &&
!test_bit(CIFS_INO_CLOSE_ON_LOCK, &cinode->flags) &&
dclose) {
if (test_and_clear_bit(CIFS_INO_MODIFIED_ATTR, &cinode->flags)) {
return false;
}
-static void
+static bool
smb2_is_network_name_deleted(char *buf, struct TCP_Server_Info *server)
{
struct smb2_hdr *shdr = (struct smb2_hdr *)buf;
struct cifs_tcon *tcon;
if (shdr->Status != STATUS_NETWORK_NAME_DELETED)
- return;
+ return false;
/* If server is a channel, select the primary channel */
pserver = CIFS_SERVER_IS_CHAN(server) ? server->primary_server : server;
spin_unlock(&cifs_tcp_ses_lock);
pr_warn_once("Server share %s deleted.\n",
tcon->tree_name);
- return;
+ return true;
}
}
}
spin_unlock(&cifs_tcp_ses_lock);
+
+ return false;
}
static int
spin_unlock(&ses->ses_lock);
continue;
}
- ++ses->ses_count;
+ cifs_smb_ses_inc_refcount(ses);
spin_unlock(&ses->ses_lock);
return ses;
}
uint8_t valuelen; /* actual length of value (no NULL) */
uint8_t flags; /* flags bits (see xfs_attr_leaf.h) */
uint8_t nameval[]; /* name & value bytes concatenated */
- } list[1]; /* variable sized array */
+ } list[]; /* variable sized array */
};
typedef struct xfs_attr_leaf_map { /* RLE map of free bytes */
typedef struct xfs_attr_leaf_name_local {
__be16 valuelen; /* number of bytes in value */
__u8 namelen; /* length of name bytes */
- __u8 nameval[1]; /* name/value bytes */
+ /*
+ * In Linux 6.5 this flex array was converted from nameval[1] to
+ * nameval[]. Be very careful here about extra padding at the end;
+ * see xfs_attr_leaf_entsize_local() for details.
+ */
+ __u8 nameval[]; /* name/value bytes */
} xfs_attr_leaf_name_local_t;
typedef struct xfs_attr_leaf_name_remote {
__be32 valueblk; /* block number of value bytes */
__be32 valuelen; /* number of bytes in value */
__u8 namelen; /* length of name bytes */
- __u8 name[1]; /* name bytes */
+ /*
+ * In Linux 6.5 this flex array was converted from name[1] to name[].
+ * Be very careful here about extra padding at the end; see
+ * xfs_attr_leaf_entsize_remote() for details.
+ */
+ __u8 name[]; /* name bytes */
} xfs_attr_leaf_name_remote_t;
typedef struct xfs_attr_leafblock {
xfs_attr_leaf_hdr_t hdr; /* constant-structure header block */
- xfs_attr_leaf_entry_t entries[1]; /* sorted on key, not name */
+ xfs_attr_leaf_entry_t entries[]; /* sorted on key, not name */
/*
* The rest of the block contains the following structures after the
* leaf entries, growing from the bottom up. The variables are never
struct xfs_attr3_leafblock {
struct xfs_attr3_leaf_hdr hdr;
- struct xfs_attr_leaf_entry entries[1];
+ struct xfs_attr_leaf_entry entries[];
/*
* The rest of the block contains the following structures after the
*/
static inline int xfs_attr_leaf_entsize_remote(int nlen)
{
- return round_up(sizeof(struct xfs_attr_leaf_name_remote) - 1 +
- nlen, XFS_ATTR_LEAF_NAME_ALIGN);
+ /*
+ * Prior to Linux 6.5, struct xfs_attr_leaf_name_remote ended with
+ * name[1], which was used as a flexarray. The layout of this struct
+ * is 9 bytes of fixed-length fields followed by a __u8 flex array at
+ * offset 9.
+ *
+ * On most architectures, struct xfs_attr_leaf_name_remote had two
+ * bytes of implicit padding at the end of the struct to make the
+ * struct length 12. After converting name[1] to name[], there are
+ * three implicit padding bytes and the struct size remains 12.
+ * However, there are compiler configurations that do not add implicit
+ * padding at all (m68k) and have been broken for years.
+ *
+ * This entsize computation historically added (the xattr name length)
+ * to (the padded struct length - 1) and rounded that sum up to the
+ * nearest multiple of 4 (NAME_ALIGN). IOWs, round_up(11 + nlen, 4).
+ * This is encoded in the ondisk format, so we cannot change this.
+ *
+ * Compute the entsize from the offsetof() of the flexarray, manually
+ * adding bytes for the implicit padding.
+ */
+ const size_t remotesize =
+ offsetof(struct xfs_attr_leaf_name_remote, name) + 2;
+
+ return round_up(remotesize + nlen, XFS_ATTR_LEAF_NAME_ALIGN);
}
static inline int xfs_attr_leaf_entsize_local(int nlen, int vlen)
{
- return round_up(sizeof(struct xfs_attr_leaf_name_local) - 1 +
- nlen + vlen, XFS_ATTR_LEAF_NAME_ALIGN);
+ /*
+ * Prior to Linux 6.5, struct xfs_attr_leaf_name_local ended with
+ * nameval[1], which was used as a flexarray. The layout of this
+ * struct is 3 bytes of fixed-length fields followed by a __u8 flex
+ * array at offset 3.
+ *
+ * struct xfs_attr_leaf_name_local had zero bytes of implicit padding
+ * at the end of the struct to make the struct length 4. On most
+ * architectures, after converting nameval[1] to nameval[], there is
+ * one implicit padding byte and the struct size remains 4. However,
+ * there are compiler configurations that do not add implicit padding
+ * at all (m68k) and would break.
+ *
+ * This entsize computation historically added (the xattr name and
+ * value length) to (the padded struct length - 1) and rounded that sum
+ * up to the nearest multiple of 4 (NAME_ALIGN). IOWs, the formula is
+ * round_up(3 + nlen + vlen, 4). This is encoded in the ondisk format,
+ * so we cannot change this.
+ *
+ * Compute the entsize from the offsetof() of the flexarray, manually
+ * adding bytes for the implicit padding.
+ */
+ const size_t localsize =
+ offsetof(struct xfs_attr_leaf_name_local, nameval);
+
+ return round_up(localsize + nlen + vlen, XFS_ATTR_LEAF_NAME_ALIGN);
}
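/*
 * Worked example (illustrative only, not part of the on-disk definitions
 * above): for a local attr with an 8-byte name and a 5-byte value the
 * entry size is round_up(3 + 8 + 5, 4) = round_up(16, 4) = 16 bytes; for
 * a remote attr with an 8-byte name it is round_up(11 + 8, 4) =
 * round_up(19, 4) = 20 bytes. Both match the pre-6.5 sizeof()-based
 * formulas, so the on-disk layout is unchanged.
 */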
static inline int xfs_attr_leaf_entsize_local_max(int bsize)
struct xfs_attrlist {
__s32 al_count; /* number of entries in attrlist */
__s32 al_more; /* T/F: more attrs (do call again) */
- __s32 al_offset[1]; /* byte offsets of attrs [var-sized] */
+ __s32 al_offset[]; /* byte offsets of attrs [var-sized] */
};
struct xfs_attrlist_ent { /* data from attr_list() */
__u32 a_valuelen; /* number bytes in value of attr */
- char a_name[1]; /* attr name (NULL terminated) */
+ char a_name[]; /* attr name (NULL terminated) */
};
typedef struct xfs_fsop_attrlist_handlereq {
/* dir/attr trees */
XFS_CHECK_STRUCT_SIZE(struct xfs_attr3_leaf_hdr, 80);
- XFS_CHECK_STRUCT_SIZE(struct xfs_attr3_leafblock, 88);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_attr3_leafblock, 80);
XFS_CHECK_STRUCT_SIZE(struct xfs_attr3_rmt_hdr, 56);
XFS_CHECK_STRUCT_SIZE(struct xfs_da3_blkinfo, 56);
XFS_CHECK_STRUCT_SIZE(struct xfs_da3_intnode, 64);
XFS_CHECK_OFFSET(xfs_attr_leaf_name_remote_t, valuelen, 4);
XFS_CHECK_OFFSET(xfs_attr_leaf_name_remote_t, namelen, 8);
XFS_CHECK_OFFSET(xfs_attr_leaf_name_remote_t, name, 9);
- XFS_CHECK_STRUCT_SIZE(xfs_attr_leafblock_t, 40);
+ XFS_CHECK_STRUCT_SIZE(xfs_attr_leafblock_t, 32);
+ XFS_CHECK_STRUCT_SIZE(struct xfs_attr_shortform, 4);
XFS_CHECK_OFFSET(struct xfs_attr_shortform, hdr.totsize, 0);
XFS_CHECK_OFFSET(struct xfs_attr_shortform, hdr.count, 2);
XFS_CHECK_OFFSET(struct xfs_attr_shortform, list[0].namelen, 4);
*(.text.unlikely .text.unlikely.*) \
*(.text.unknown .text.unknown.*) \
NOINSTR_TEXT \
- *(.text..refcount) \
*(.ref.text) \
*(.text.asan.* .text.tsan.*) \
MEM_KEEP(init.text*) \
bool drm_sched_entity_is_ready(struct drm_sched_entity *entity);
int drm_sched_entity_error(struct drm_sched_entity *entity);
-void drm_sched_fence_set_parent(struct drm_sched_fence *s_fence,
- struct dma_fence *fence);
struct drm_sched_fence *drm_sched_fence_alloc(
struct drm_sched_entity *s_entity, void *owner);
void drm_sched_fence_init(struct drm_sched_fence *fence,
struct drm_sched_entity *entity);
void drm_sched_fence_free(struct drm_sched_fence *fence);
-void drm_sched_fence_scheduled(struct drm_sched_fence *fence);
+void drm_sched_fence_scheduled(struct drm_sched_fence *fence,
+ struct dma_fence *parent);
void drm_sched_fence_finished(struct drm_sched_fence *fence, int result);
unsigned long drm_sched_suspend_timeout(struct drm_gpu_scheduler *sched);
* keyslots while ensuring that they can't be changed concurrently.
*/
struct rw_semaphore lock;
+ struct lock_class_key lockdep_key;
/* List of idle slots, with least recently used slot at front */
wait_queue_head_t idle_slots_wait_queue;
/*
* The rb_node is only used inside the io scheduler, requests
- * are pruned when moved to the dispatch queue. So let the
- * completion_data share space with the rb_node.
+ * are pruned when moved to the dispatch queue. special_vec must
+ * only be used if RQF_SPECIAL_PAYLOAD is set, and those cannot be
+ * inserted into an IO scheduler.
*/
union {
struct rb_node rb_node; /* sort/lookup */
struct bio_vec special_vec;
- void *completion_data;
};
/*
void dma_fence_set_deadline(struct dma_fence *fence, ktime_t deadline);
struct dma_fence *dma_fence_get_stub(void);
-struct dma_fence *dma_fence_allocate_private_stub(void);
+struct dma_fence *dma_fence_allocate_private_stub(ktime_t timestamp);
u64 dma_fence_context_alloc(unsigned num);
extern const struct dma_fence_ops dma_fence_array_ops;
};
enum {
- NVME_ID_NS_NVM_STS_MASK = 0x3f,
+ NVME_ID_NS_NVM_STS_MASK = 0x7f,
NVME_ID_NS_NVM_GUARD_SHIFT = 7,
NVME_ID_NS_NVM_GUARD_MASK = 0x3,
};
void psi_memstall_leave(unsigned long *flags);
int psi_show(struct seq_file *s, struct psi_group *group, enum psi_res res);
-struct psi_trigger *psi_trigger_create(struct psi_group *group,
- char *buf, enum psi_res res, struct file *file);
+struct psi_trigger *psi_trigger_create(struct psi_group *group, char *buf,
+ enum psi_res res, struct file *file,
+ struct kernfs_open_file *of);
void psi_trigger_destroy(struct psi_trigger *t);
__poll_t psi_trigger_poll(void **trigger_ptr, struct file *file,
/* Wait queue for polling */
wait_queue_head_t event_wait;
+ /* Kernfs file for cgroup triggers */
+ struct kernfs_open_file *of;
+
/* Pending event flag */
int event;
* - add extension header
* - add FUSE_EXT_GROUPS
* - add FUSE_CREATE_SUPP_GROUP
+ * - add FUSE_HAS_EXPIRE_ONLY
*/
#ifndef _LINUX_FUSE_H
* FUSE_HAS_INODE_DAX: use per inode DAX
* FUSE_CREATE_SUPP_GROUP: add supplementary group info to create, mkdir,
* symlink and mknod (single group that matches parent)
+ * FUSE_HAS_EXPIRE_ONLY: kernel supports expiry-only entry invalidation
*/
#define FUSE_ASYNC_READ (1 << 0)
#define FUSE_POSIX_LOCKS (1 << 1)
#define FUSE_SECURITY_CTX (1ULL << 32)
#define FUSE_HAS_INODE_DAX (1ULL << 33)
#define FUSE_CREATE_SUPP_GROUP (1ULL << 34)
+#define FUSE_HAS_EXPIRE_ONLY (1ULL << 35)
/**
* CUSE INIT request/reply flags
};
/**
+ * struct utp_upiu_query_v4_0 - upiu request buffer structure for
+ * query request >= UFS 4.0 spec.
+ * @opcode: command to perform B-0
+ * @idn: a value that indicates the particular type of data B-1
+ * @index: Index to further identify data B-2
+ * @selector: Index to further identify data B-3
+ * @osf3: spec field B-4
+ * @osf4: spec field B-5
+ * @osf5: spec field B 6,7
+ * @osf6: spec field DW 8,9
+ * @osf7: spec field DW 10,11
+ */
+struct utp_upiu_query_v4_0 {
+ __u8 opcode;
+ __u8 idn;
+ __u8 index;
+ __u8 selector;
+ __u8 osf3;
+ __u8 osf4;
+ __be16 osf5;
+ __be32 osf6;
+ __be32 osf7;
+ __be32 reserved;
+};
+
+/**
* struct utp_upiu_cmd - Command UPIU structure
* @data_transfer_len: Data Transfer Length DW-3
* @cdb: Command Descriptor Block CDB DW-4 to DW-7
QUERY_ATTR_IDN_WB_BUFF_LIFE_TIME_EST = 0x1E,
QUERY_ATTR_IDN_CURR_WB_BUFF_SIZE = 0x1F,
QUERY_ATTR_IDN_EXT_IID_EN = 0x2A,
+ QUERY_ATTR_IDN_TIMESTAMP = 0x30
};
/* Descriptor idn for Query requests */
static inline int io_cqring_wait_schedule(struct io_ring_ctx *ctx,
struct io_wait_queue *iowq)
{
+ int token, ret;
+
if (unlikely(READ_ONCE(ctx->check_cq)))
return 1;
if (unlikely(!llist_empty(&ctx->work_llist)))
return -EINTR;
if (unlikely(io_should_wake(iowq)))
return 0;
+
+ /*
+ * Use io_schedule_prepare/finish, so cpufreq can take into account
+ * that the task is waiting for IO - turns out to be important for low
+ * QD IO.
+ */
+ token = io_schedule_prepare();
+ ret = 0;
if (iowq->timeout == KTIME_MAX)
schedule();
else if (!schedule_hrtimeout(&iowq->timeout, HRTIMER_MODE_ABS))
- return -ETIME;
- return 0;
+ ret = -ETIME;
+ io_schedule_finish(token);
+ return ret;
}
/*
}
psi = cgroup_psi(cgrp);
- new = psi_trigger_create(psi, buf, res, of->file);
+ new = psi_trigger_create(psi, buf, res, of->file, of);
if (IS_ERR(new)) {
cgroup_put(cgrp);
return PTR_ERR(new);
* LLVM appends various suffixes for local functions and variables that
* must be promoted to global scope as part of LTO. This can break
* hooking of static functions with kprobes. '.' is not a valid
- * character in an identifier in C. Suffixes observed:
+ * character in an identifier in C. Suffixes only in LLVM LTO observed:
* - foo.llvm.[0-9a-f]+
- * - foo.[0-9a-f]+
*/
- res = strchr(s, '.');
+ res = strstr(s, ".llvm.");
if (res) {
*res = '\0';
return true;
unsigned maj, min, offset;
char *p, dummy;
+ error = 0;
if (sscanf(name, "%u:%u%c", &maj, &min, &dummy) == 2 ||
sscanf(name, "%u:%u:%u:%c", &maj, &min, &offset,
&dummy) == 3) {
/* Definitions related to the frequency QoS below. */
+static inline bool freq_qos_value_invalid(s32 value)
+{
+ return value < 0 && value != PM_QOS_DEFAULT_VALUE;
+}
+
/**
* freq_constraints_init - Initialize frequency QoS constraints.
* @qos: Frequency QoS constraints to initialize.
{
int ret;
- if (IS_ERR_OR_NULL(qos) || !req || value < 0)
+ if (IS_ERR_OR_NULL(qos) || !req || freq_qos_value_invalid(value))
return -EINVAL;
if (WARN(freq_qos_request_active(req),
*/
int freq_qos_update_request(struct freq_qos_request *req, s32 new_value)
{
- if (!req || new_value < 0)
+ if (!req || freq_qos_value_invalid(new_value))
return -EINVAL;
if (WARN(!freq_qos_request_active(req),
recent_used_cpu != target &&
cpus_share_cache(recent_used_cpu, target) &&
(available_idle_cpu(recent_used_cpu) || sched_idle_cpu(recent_used_cpu)) &&
- cpumask_test_cpu(p->recent_used_cpu, p->cpus_ptr) &&
+ cpumask_test_cpu(recent_used_cpu, p->cpus_ptr) &&
asym_fits_cpu(task_util, util_min, util_max, recent_used_cpu)) {
return recent_used_cpu;
}
continue;
/* Generate an event */
- if (cmpxchg(&t->event, 0, 1) == 0)
- wake_up_interruptible(&t->event_wait);
+ if (cmpxchg(&t->event, 0, 1) == 0) {
+ if (t->of)
+ kernfs_notify(t->of->kn);
+ else
+ wake_up_interruptible(&t->event_wait);
+ }
t->last_event_time = now;
/* Reset threshold breach flag once event got generated */
t->pending_event = false;
return 0;
}
-struct psi_trigger *psi_trigger_create(struct psi_group *group,
- char *buf, enum psi_res res, struct file *file)
+struct psi_trigger *psi_trigger_create(struct psi_group *group, char *buf,
+ enum psi_res res, struct file *file,
+ struct kernfs_open_file *of)
{
struct psi_trigger *t;
enum psi_states state;
t->event = 0;
t->last_event_time = 0;
- init_waitqueue_head(&t->event_wait);
+ t->of = of;
+ if (!of)
+ init_waitqueue_head(&t->event_wait);
t->pending_event = false;
t->aggregator = privileged ? PSI_POLL : PSI_AVGS;
* being accessed later. Can happen if cgroup is deleted from under a
* polling process.
*/
- wake_up_pollfree(&t->event_wait);
+ if (t->of)
+ kernfs_notify(t->of->kn);
+ else
+ wake_up_interruptible(&t->event_wait);
if (t->aggregator == PSI_AVGS) {
mutex_lock(&group->avgs_lock);
if (!t)
return DEFAULT_POLLMASK | EPOLLERR | EPOLLPRI;
- poll_wait(file, &t->event_wait, wait);
+ if (t->of)
+ kernfs_generic_poll(t->of, wait);
+ else
+ poll_wait(file, &t->event_wait, wait);
if (cmpxchg(&t->event, 1, 0) == 1)
ret |= EPOLLPRI;
return -EBUSY;
}
- new = psi_trigger_create(&psi_system, buf, res, file);
+ new = psi_trigger_create(&psi_system, buf, res, file, NULL);
if (IS_ERR(new)) {
mutex_unlock(&seq->lock);
return PTR_ERR(new);
else
return -EINVAL;
break;
- case PR_GET_AUXV:
- if (arg4 || arg5)
- return -EINVAL;
- error = prctl_get_auxv((void __user *)arg2, arg3);
- break;
default:
return -EINVAL;
}
case PR_SET_VMA:
error = prctl_set_vma(arg2, arg3, arg4, arg5);
break;
+ case PR_GET_AUXV:
+ if (arg4 || arg5)
+ return -EINVAL;
+ error = prctl_get_auxv((void __user *)arg2, arg3);
+ break;
#ifdef CONFIG_KSM
case PR_SET_MEMORY_MERGE:
if (arg3 || arg4 || arg5)
return;
}
+ /*
+ * This user handler is shared with other kprobes and is not expected to be
+ * called recursively. So if any other kprobe handler is running, this will
+ * exit just as a kprobe handler would. See the section 'Share the callbacks with kprobes'
+ * in Documentation/trace/fprobe.rst for more information.
+ */
if (unlikely(kprobe_running())) {
fp->nmissed++;
goto recursion_unlock;
#define MEM_FAIL(condition, fmt, ...) \
DO_ONCE_LITE_IF(condition, pr_err, "ERROR: " fmt, ##__VA_ARGS__)
+#define FAULT_STRING "(fault)"
+
#define HIST_STACKTRACE_DEPTH 16
#define HIST_STACKTRACE_SIZE (HIST_STACKTRACE_DEPTH * sizeof(unsigned long))
#define HIST_STACKTRACE_SKIP 5
int len = *(u32 *)data >> 16;
if (!len)
- trace_seq_puts(s, "(fault)");
+ trace_seq_puts(s, FAULT_STRING);
else
trace_seq_printf(s, "\"%s\"",
(const char *)get_loc_data(data, ent));
#ifndef __TRACE_PROBE_KERNEL_H_
#define __TRACE_PROBE_KERNEL_H_
-#define FAULT_STRING "(fault)"
-
/*
* This depends on trace_probe.h, but can not include it due to
* the way trace_probe_tmpl.h is used by trace_kprobe.c and trace_eprobe.c.
fetch_store_strlen_user(unsigned long addr)
{
const void __user *uaddr = (__force const void __user *)addr;
- int ret;
- ret = strnlen_user_nofault(uaddr, MAX_STRING_SIZE);
- /*
- * strnlen_user_nofault returns zero on fault, insert the
- * FAULT_STRING when that occurs.
- */
- if (ret <= 0)
- return strlen(FAULT_STRING) + 1;
- return ret;
+ return strnlen_user_nofault(uaddr, MAX_STRING_SIZE);
}
/* Return the length of string -- including null terminal byte */
len++;
} while (c && ret == 0 && len < MAX_STRING_SIZE);
- /* For faults, return enough to hold the FAULT_STRING */
- return (ret < 0) ? strlen(FAULT_STRING) + 1 : len;
+ return (ret < 0) ? ret : len;
}
-static nokprobe_inline void set_data_loc(int ret, void *dest, void *__dest, void *base, int len)
+static nokprobe_inline void set_data_loc(int ret, void *dest, void *__dest, void *base)
{
- if (ret >= 0) {
- *(u32 *)dest = make_data_loc(ret, __dest - base);
- } else {
- strscpy(__dest, FAULT_STRING, len);
- ret = strlen(__dest) + 1;
- }
+ if (ret < 0)
+ ret = 0;
+ *(u32 *)dest = make_data_loc(ret, __dest - base);
}
/*
__dest = get_loc_data(dest, base);
ret = strncpy_from_user_nofault(__dest, uaddr, maxlen);
- set_data_loc(ret, dest, __dest, base, maxlen);
+ set_data_loc(ret, dest, __dest, base);
return ret;
}
* probing.
*/
ret = strncpy_from_kernel_nofault(__dest, (void *)addr, maxlen);
- set_data_loc(ret, dest, __dest, base, maxlen);
+ set_data_loc(ret, dest, __dest, base);
return ret;
}
code++;
goto array;
case FETCH_OP_ST_USTRING:
- ret += fetch_store_strlen_user(val + code->offset);
+ ret = fetch_store_strlen_user(val + code->offset);
code++;
goto array;
case FETCH_OP_ST_SYMSTR:
- ret += fetch_store_symstrlen(val + code->offset);
+ ret = fetch_store_symstrlen(val + code->offset);
code++;
goto array;
default:
array:
/* the last stage: Loop on array */
if (code->op == FETCH_OP_LP_ARRAY) {
+ if (ret < 0)
+ ret = 0;
total += ret;
if (++i < code->param) {
code = s3;
if (unlikely(arg->dynamic))
*dl = make_data_loc(maxlen, dyndata - base);
ret = process_fetch_insn(arg->code, rec, dl, base);
- if (unlikely(ret < 0 && arg->dynamic)) {
- *dl = make_data_loc(0, dyndata - base);
- } else {
+ if (arg->dynamic && likely(ret > 0)) {
dyndata += ret;
maxlen -= ret;
}
*/
ret++;
*(u32 *)dest = make_data_loc(ret, (void *)dst - base);
- }
+ } else
+ *(u32 *)dest = make_data_loc(0, (void *)dst - base);
return ret;
}
return ret;
}
-static int copy_iovec_from_user(struct iovec *iov,
+static __noclone int copy_iovec_from_user(struct iovec *iov,
const struct iovec __user *uiov, unsigned long nr_segs)
{
int ret = -EFAULT;
mas->offset = slot;
pivots[slot] = mas->last;
if (mas->last != ULONG_MAX)
- slot++;
+ pivots[++slot] = ULONG_MAX;
+
mas->depth = 1;
mas_set_height(mas);
ma_set_meta(node, maple_leaf_64, 0, slot);
725};
static const unsigned long level2_32[] = { 1747, 2000, 1750, 1755,
1760, 1765};
+ unsigned long last_index;
if (MAPLE_32BIT) {
nr_entries = 500;
level2 = level2_32;
+ last_index = 0x138e;
} else {
nr_entries = 200;
level2 = level2_64;
+ last_index = 0x7d6;
}
for (i = 0; i <= nr_entries; i++)
val = mas_next(&mas, ULONG_MAX);
MT_BUG_ON(mt, val != NULL);
- MT_BUG_ON(mt, mas.index != 0x7d6);
+ MT_BUG_ON(mt, mas.index != last_index);
MT_BUG_ON(mt, mas.last != ULONG_MAX);
val = mas_prev(&mas, 0);
{
unsigned long nstart, end, tmp;
struct vm_area_struct *vma, *prev;
- int error;
VMA_ITERATOR(vmi, current->mm, start);
VM_BUG_ON(offset_in_page(start));
nstart = start;
tmp = vma->vm_start;
for_each_vma_range(vmi, vma, end) {
+ int error;
vm_flags_t newflags;
if (vma->vm_start != tmp)
tmp = end;
error = mlock_fixup(&vmi, vma, &prev, nstart, tmp, newflags);
if (error)
- break;
+ return error;
+ tmp = vma_iter_end(&vmi);
nstart = tmp;
}
- if (vma_iter_end(&vmi) < end)
+ if (tmp < end)
return -ENOMEM;
- return error;
+ return 0;
}
/*
int head_len;
int rem_len;
+ BUG_ON(ctrl_len < 0 || ctrl_len > CEPH_MSG_MAX_CONTROL_LEN);
+
if (secure) {
head_len = CEPH_PREAMBLE_SECURE_LEN;
if (ctrl_len > CEPH_PREAMBLE_INLINE_LEN) {
static int __tail_onwire_len(int front_len, int middle_len, int data_len,
bool secure)
{
+ BUG_ON(front_len < 0 || front_len > CEPH_MSG_MAX_FRONT_LEN ||
+ middle_len < 0 || middle_len > CEPH_MSG_MAX_MIDDLE_LEN ||
+ data_len < 0 || data_len > CEPH_MSG_MAX_DATA_LEN);
+
if (!front_len && !middle_len && !data_len)
return 0;
desc->fd_aligns[i] = ceph_decode_16(&p);
}
- /*
- * This would fire for FRAME_TAG_WAIT (it has one empty
- * segment), but we should never get it as client.
- */
- if (!desc->fd_lens[desc->fd_seg_cnt - 1]) {
- pr_err("last segment empty\n");
+ if (desc->fd_lens[0] < 0 ||
+ desc->fd_lens[0] > CEPH_MSG_MAX_CONTROL_LEN) {
+ pr_err("bad control segment length %d\n", desc->fd_lens[0]);
return -EINVAL;
}
-
- if (desc->fd_lens[0] > CEPH_MSG_MAX_CONTROL_LEN) {
- pr_err("control segment too big %d\n", desc->fd_lens[0]);
+ if (desc->fd_lens[1] < 0 ||
+ desc->fd_lens[1] > CEPH_MSG_MAX_FRONT_LEN) {
+ pr_err("bad front segment length %d\n", desc->fd_lens[1]);
return -EINVAL;
}
- if (desc->fd_lens[1] > CEPH_MSG_MAX_FRONT_LEN) {
- pr_err("front segment too big %d\n", desc->fd_lens[1]);
+ if (desc->fd_lens[2] < 0 ||
+ desc->fd_lens[2] > CEPH_MSG_MAX_MIDDLE_LEN) {
+ pr_err("bad middle segment length %d\n", desc->fd_lens[2]);
return -EINVAL;
}
- if (desc->fd_lens[2] > CEPH_MSG_MAX_MIDDLE_LEN) {
- pr_err("middle segment too big %d\n", desc->fd_lens[2]);
+ if (desc->fd_lens[3] < 0 ||
+ desc->fd_lens[3] > CEPH_MSG_MAX_DATA_LEN) {
+ pr_err("bad data segment length %d\n", desc->fd_lens[3]);
return -EINVAL;
}
- if (desc->fd_lens[3] > CEPH_MSG_MAX_DATA_LEN) {
- pr_err("data segment too big %d\n", desc->fd_lens[3]);
+
+ /*
+ * This would fire for FRAME_TAG_WAIT (it has one empty
+ * segment), but we should never get it as client.
+ */
+ if (!desc->fd_lens[desc->fd_seg_cnt - 1]) {
+ pr_err("last segment empty, segment count %d\n",
+ desc->fd_seg_cnt);
return -EINVAL;
}
* ASCII[_] = 5f
* ASCII[a-z] = 61,7a
*
- * As above, replacing '.' with '\0' does not affect the main sorting,
- * but it helps us with subsorting.
+ * As above, replacing the first '.' in ".llvm." with '\0' does not
+ * affect the main sorting, but it helps us with subsorting.
*/
- p = strchr(s, '.');
+ p = strstr(s, ".llvm.");
if (p)
*p = '\0';
}
set_bit(KEY_FLAG_USER_CONSTRUCT, &key->flags);
if (dest_keyring) {
- ret = __key_link_lock(dest_keyring, &ctx->index_key);
+ ret = __key_link_lock(dest_keyring, &key->index_key);
if (ret < 0)
goto link_lock_failed;
- ret = __key_link_begin(dest_keyring, &ctx->index_key, &edit);
- if (ret < 0)
- goto link_prealloc_failed;
}
- /* attach the key to the destination keyring under lock, but we do need
+ /*
+ * Attach the key to the destination keyring under lock, but we do need
* to do another check just in case someone beat us to it whilst we
- * waited for locks */
+ * waited for locks.
+ *
+ * The caller might specify a comparison function which looks for keys
+ * that do not exactly match but are still equivalent from the caller's
+ * perspective. The __key_link_begin() operation must be done only after
+ * an actual key is determined.
+ */
mutex_lock(&key_construction_mutex);
rcu_read_lock();
if (!IS_ERR(key_ref))
goto key_already_present;
- if (dest_keyring)
+ if (dest_keyring) {
+ ret = __key_link_begin(dest_keyring, &key->index_key, &edit);
+ if (ret < 0)
+ goto link_alloc_failed;
__key_link(dest_keyring, key, &edit);
+ }
mutex_unlock(&key_construction_mutex);
if (dest_keyring)
- __key_link_end(dest_keyring, &ctx->index_key, edit);
+ __key_link_end(dest_keyring, &key->index_key, edit);
mutex_unlock(&user->cons_lock);
*_key = key;
kleave(" = 0 [%d]", key_serial(key));
mutex_unlock(&key_construction_mutex);
key = key_ref_to_ptr(key_ref);
if (dest_keyring) {
+ ret = __key_link_begin(dest_keyring, &key->index_key, &edit);
+ if (ret < 0)
+ goto link_alloc_failed_unlocked;
ret = __key_link_check_live_key(dest_keyring, key);
if (ret == 0)
__key_link(dest_keyring, key, &edit);
- __key_link_end(dest_keyring, &ctx->index_key, edit);
+ __key_link_end(dest_keyring, &key->index_key, edit);
if (ret < 0)
goto link_check_failed;
}
kleave(" = %d [linkcheck]", ret);
return ret;
-link_prealloc_failed:
- __key_link_end(dest_keyring, &ctx->index_key, edit);
+link_alloc_failed:
+ mutex_unlock(&key_construction_mutex);
+link_alloc_failed_unlocked:
+ __key_link_end(dest_keyring, &key->index_key, edit);
link_lock_failed:
mutex_unlock(&user->cons_lock);
key_put(key);
}
/**
- * tpm_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer.
+ * tpm2_buf_append_auth() - append TPMS_AUTH_COMMAND to the buffer.
*
* @buf: an allocated tpm_buf instance
* @session_handle: session handle
#define APPLE_CPU_PART_M1_FIRESTORM_MAX 0x029
#define APPLE_CPU_PART_M2_BLIZZARD 0x032
#define APPLE_CPU_PART_M2_AVALANCHE 0x033
+#define APPLE_CPU_PART_M2_BLIZZARD_PRO 0x034
+#define APPLE_CPU_PART_M2_AVALANCHE_PRO 0x035
+#define APPLE_CPU_PART_M2_BLIZZARD_MAX 0x038
+#define APPLE_CPU_PART_M2_AVALANCHE_MAX 0x039
#define AMPERE_CPU_PART_AMPERE1 0xAC3
#define MIDR_APPLE_M1_FIRESTORM_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM_MAX)
#define MIDR_APPLE_M2_BLIZZARD MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD)
#define MIDR_APPLE_M2_AVALANCHE MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE)
+#define MIDR_APPLE_M2_BLIZZARD_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_PRO)
+#define MIDR_APPLE_M2_AVALANCHE_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_PRO)
+#define MIDR_APPLE_M2_BLIZZARD_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_BLIZZARD_MAX)
+#define MIDR_APPLE_M2_AVALANCHE_MAX MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M2_AVALANCHE_MAX)
#define MIDR_AMPERE1 MIDR_CPU_MODEL(ARM_CPU_IMP_AMPERE, AMPERE_CPU_PART_AMPERE1)
/* Fujitsu Erratum 010001 affects A64FX 1.0 and 1.1, (v0r0 and v1r0) */
$(BUILD) -ltraceevent
$(OUTPUT)test-libtracefs.bin:
- $(BUILD) -ltracefs
+ $(BUILD) $(shell $(PKG_CONFIG) --cflags libtraceevent 2>/dev/null) -ltracefs
$(OUTPUT)test-libcrypto.bin:
$(BUILD) -lcrypto
#define __NR_set_mempolicy_home_node 450
__SYSCALL(__NR_set_mempolicy_home_node, sys_set_mempolicy_home_node)
+#define __NR_cachestat 451
+__SYSCALL(__NR_cachestat, sys_cachestat)
+
#undef __NR_syscalls
-#define __NR_syscalls 451
+#define __NR_syscalls 452
/*
* 32 bit systems traditionally used different
#define I915_PMU_ENGINE_SEMA(class, instance) \
__I915_PMU_ENGINE(class, instance, I915_SAMPLE_SEMA)
-#define __I915_PMU_OTHER(x) (__I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1 + (x))
+/*
+ * Top 4 bits of every non-engine counter are GT id.
+ */
+#define __I915_PMU_GT_SHIFT (60)
+
+#define ___I915_PMU_OTHER(gt, x) \
+ (((__u64)__I915_PMU_ENGINE(0xff, 0xff, 0xf) + 1 + (x)) | \
+ ((__u64)(gt) << __I915_PMU_GT_SHIFT))
+
+#define __I915_PMU_OTHER(x) ___I915_PMU_OTHER(0, x)
#define I915_PMU_ACTUAL_FREQUENCY __I915_PMU_OTHER(0)
#define I915_PMU_REQUESTED_FREQUENCY __I915_PMU_OTHER(1)
#define I915_PMU_LAST /* Deprecated - do not use */ I915_PMU_RC6_RESIDENCY
+#define __I915_PMU_ACTUAL_FREQUENCY(gt) ___I915_PMU_OTHER(gt, 0)
+#define __I915_PMU_REQUESTED_FREQUENCY(gt) ___I915_PMU_OTHER(gt, 1)
+#define __I915_PMU_INTERRUPTS(gt) ___I915_PMU_OTHER(gt, 2)
+#define __I915_PMU_RC6_RESIDENCY(gt) ___I915_PMU_OTHER(gt, 3)
+#define __I915_PMU_SOFTWARE_GT_AWAKE_TIME(gt) ___I915_PMU_OTHER(gt, 4)
+
/* Each region is a minimum of 16k, and there are at most 255 of them.
*/
#define I915_NR_TEX_REGIONS 255 /* table size 2k - maximum due to use
* If the IOCTL is successful, the returned parameter will be set to one of the
* following values:
* * 0 if HuC firmware load is not complete,
- * * 1 if HuC firmware is authenticated and running.
+ * * 1 if HuC firmware is loaded and fully authenticated,
+ * * 2 if HuC firmware is loaded and authenticated for clear media only
*/
#define I915_PARAM_HUC_STATUS 42
*/
#define I915_PARAM_OA_TIMESTAMP_FREQUENCY 57
+/*
+ * Query the status of PXP support in i915.
+ *
+ * The query can fail in the following scenarios with the listed error codes:
+ * -ENODEV = PXP support is not available on the GPU device or in the
+ * kernel due to missing component drivers or kernel configs.
+ *
+ * If the IOCTL is successful, the returned parameter will be set to one of
+ * the following values:
+ * 1 = PXP feature is supported and is ready for use.
+ * 2 = PXP feature is supported but should be ready soon (pending
+ * initialization of non-i915 system dependencies).
+ *
+ * NOTE: When param is supported (positive return values), user space should
+ * still refer to the GEM PXP context-creation UAPI header specs to be
+ * aware of possible failure due to system state machine at the time.
+ */
+#define I915_PARAM_PXP_STATUS 58
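/*
 * Minimal userspace sketch (illustrative only; the DRM fd handling and
 * error policy here are assumptions, not defined by this header):
 *
 *	int pxp_status = 0;
 *	drm_i915_getparam_t gp = {
 *		.param = I915_PARAM_PXP_STATUS,
 *		.value = &pxp_status,
 *	};
 *	int err = ioctl(drm_fd, DRM_IOCTL_I915_GETPARAM, &gp);
 *
 * On success pxp_status is 1 (ready) or 2 (still initializing); -ENODEV
 * means PXP is not available on this device or kernel configuration.
 */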
+
/* Must be kept compact -- no holes and well documented */
/**
*
* -ENODEV: feature not available
* -EPERM: trying to mark a recoverable or not bannable context as protected
+ * -ENXIO: A dependency such as a component driver or firmware is not yet
+ * loaded so user space may need to attempt again. Depending on the
+ * device, this error may be reported if protected context creation is
+ * attempted very early after kernel start because the internal timeout
+ * waiting for such dependencies is not guaranteed to be larger than
+ * required (numbers differ depending on system and kernel config):
+ * - ADL/RPL: dependencies may take up to 3 seconds from kernel start
+ * while context creation internal timeout is 250 milliseconds
+ * - MTL: dependencies may take up to 8 seconds from kernel start
+ * while context creation internal timeout is 250 milliseconds
+ * NOTE: such dependencies happen once, so a subsequent call to create a
+ * protected context after a prior successful call will not experience
+ * such timeouts and will not return -ENXIO (unless the driver is reloaded,
+ * or, depending on the device, resumes from a suspended state).
+ * -EIO: The firmware did not succeed in creating the protected context.
*/
#define I915_CONTEXT_PARAM_PROTECTED_CONTENT 0xd
/* Must be kept compact -- no holes and well documented */
*
* For I915_GEM_CREATE_EXT_PROTECTED_CONTENT usage see
* struct drm_i915_gem_create_ext_protected_content.
+ *
+ * For I915_GEM_CREATE_EXT_SET_PAT usage see
+ * struct drm_i915_gem_create_ext_set_pat.
*/
#define I915_GEM_CREATE_EXT_MEMORY_REGIONS 0
#define I915_GEM_CREATE_EXT_PROTECTED_CONTENT 1
+#define I915_GEM_CREATE_EXT_SET_PAT 2
__u64 extensions;
};
__u32 flags;
};
+/**
+ * struct drm_i915_gem_create_ext_set_pat - The
+ * I915_GEM_CREATE_EXT_SET_PAT extension.
+ *
+ * If this extension is provided, the specified caching policy (PAT index) is
+ * applied to the buffer object.
+ *
+ * Below is an example on how to create an object with specific caching policy:
+ *
+ * .. code-block:: C
+ *
+ * struct drm_i915_gem_create_ext_set_pat set_pat_ext = {
+ * .base = { .name = I915_GEM_CREATE_EXT_SET_PAT },
+ * .pat_index = 0,
+ * };
+ * struct drm_i915_gem_create_ext create_ext = {
+ * .size = PAGE_SIZE,
+ * .extensions = (uintptr_t)&set_pat_ext,
+ * };
+ *
+ * int err = ioctl(fd, DRM_IOCTL_I915_GEM_CREATE_EXT, &create_ext);
+ * if (err) ...
+ */
+struct drm_i915_gem_create_ext_set_pat {
+ /** @base: Extension link. See struct i915_user_extension. */
+ struct i915_user_extension base;
+ /**
+ * @pat_index: PAT index to be set
+ * PAT index is a bit field in Page Table Entry to control caching
+ * behaviors for GPU accesses. The definition of PAT index is
+ * platform dependent and can be found in hardware specifications,
+ */
+ __u32 pat_index;
+ /** @rsvd: reserved for future use */
+ __u32 rsvd;
+};
+
/* ID of the protected content session managed by i915 when PXP is active */
#define I915_PROTECTED_CONTENT_DEFAULT_SESSION 0xf
#define AT_RECURSIVE 0x8000 /* Apply to the entire subtree */
+/* Flags for name_to_handle_at(2). We reuse AT_ flag space to save bits... */
+#define AT_HANDLE_FID AT_REMOVEDIR /* file handle is needed to
+ compare object identity and may not
+ be usable with open_by_handle_at(2) */
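/*
 * Minimal usage sketch for AT_HANDLE_FID (illustrative only; the buffer
 * sizing and error handling are assumptions, not part of this header):
 *
 *	char buf[sizeof(struct file_handle) + MAX_HANDLE_SZ];
 *	struct file_handle *fh = (struct file_handle *)buf;
 *	int mount_id;
 *
 *	fh->handle_bytes = MAX_HANDLE_SZ;
 *	int err = name_to_handle_at(AT_FDCWD, "some/path", fh, &mount_id,
 *				    AT_HANDLE_FID);
 *
 * On success the handle can be compared against other handles to test
 * object identity, but it is not guaranteed to be usable with
 * open_by_handle_at(2).
 */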
+
#endif /* _UAPI_LINUX_FCNTL_H */
#define KVM_CAP_DIRTY_LOG_RING_WITH_BITMAP 225
#define KVM_CAP_PMU_EVENT_MASKED_EVENTS 226
#define KVM_CAP_COUNTER_OFFSET 227
+#define KVM_CAP_ARM_EAGER_SPLIT_CHUNK_SIZE 228
+#define KVM_CAP_ARM_SUPPORTED_BLOCK_SIZES 229
#ifdef KVM_CAP_IRQ_ROUTING
#define KVM_DEV_TYPE_XIVE KVM_DEV_TYPE_XIVE
KVM_DEV_TYPE_ARM_PV_TIME,
#define KVM_DEV_TYPE_ARM_PV_TIME KVM_DEV_TYPE_ARM_PV_TIME
+ KVM_DEV_TYPE_RISCV_AIA,
+#define KVM_DEV_TYPE_RISCV_AIA KVM_DEV_TYPE_RISCV_AIA
KVM_DEV_TYPE_MAX,
};
#define KVM_GET_DEBUGREGS _IOR(KVMIO, 0xa1, struct kvm_debugregs)
#define KVM_SET_DEBUGREGS _IOW(KVMIO, 0xa2, struct kvm_debugregs)
/*
- * vcpu version available with KVM_ENABLE_CAP
+ * vcpu version available with KVM_CAP_ENABLE_CAP
* vm version available with KVM_CAP_ENABLE_CAP_VM
*/
#define KVM_ENABLE_CAP _IOW(KVMIO, 0xa3, struct kvm_enable_cap)
#include <asm/mman.h>
#include <asm-generic/hugetlb_encode.h>
+#include <linux/types.h>
#define MREMAP_MAYMOVE 1
#define MREMAP_FIXED 2
#define MAP_HUGE_2GB HUGETLB_FLAG_ENCODE_2GB
#define MAP_HUGE_16GB HUGETLB_FLAG_ENCODE_16GB
+struct cachestat_range {
+ __u64 off;
+ __u64 len;
+};
+
+struct cachestat {
+ __u64 nr_cache;
+ __u64 nr_dirty;
+ __u64 nr_writeback;
+ __u64 nr_evicted;
+ __u64 nr_recently_evicted;
+};
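/*
 * Minimal userspace sketch for the cachestat structures above
 * (illustrative only; the file descriptor and the raw syscall(2)
 * invocation are assumptions, not part of this header):
 *
 *	struct cachestat_range range = { .off = 0, .len = 0 };
 *	struct cachestat cs = {};
 *	long err = syscall(__NR_cachestat, fd, &range, &cs, 0);
 *
 * A zero length covers the rest of the file; on success cs.nr_cache,
 * cs.nr_dirty, cs.nr_writeback, cs.nr_evicted and cs.nr_recently_evicted
 * describe the page cache state of the requested range.
 */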
+
#endif /* _UAPI_LINUX_MMAN_H */
#define MOVE_MOUNT_T_AUTOMOUNTS 0x00000020 /* Follow automounts on to path */
#define MOVE_MOUNT_T_EMPTY_PATH 0x00000040 /* Empty to path permitted */
#define MOVE_MOUNT_SET_GROUP 0x00000100 /* Set sharing group instead */
-#define MOVE_MOUNT__MASK 0x00000177
+#define MOVE_MOUNT_BENEATH 0x00000200 /* Mount beneath top mount */
+#define MOVE_MOUNT__MASK 0x00000377
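/*
 * Minimal usage sketch for MOVE_MOUNT_BENEATH (illustrative only; the
 * paths and the raw syscall(2) invocation are assumptions, not part of
 * this header):
 *
 *	long err = syscall(SYS_move_mount, AT_FDCWD, "/mnt/new",
 *			   AT_FDCWD, "/mnt/target", MOVE_MOUNT_BENEATH);
 *
 * Instead of stacking the moved mount on top of the mount at the target
 * path, the flag asks the kernel to tuck it beneath the current top
 * mount there.
 */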
/*
* fsopen() flags.
#define PR_SET_MEMORY_MERGE 67
#define PR_GET_MEMORY_MERGE 68
+
+#define PR_RISCV_V_SET_CONTROL 69
+#define PR_RISCV_V_GET_CONTROL 70
+# define PR_RISCV_V_VSTATE_CTRL_DEFAULT 0
+# define PR_RISCV_V_VSTATE_CTRL_OFF 1
+# define PR_RISCV_V_VSTATE_CTRL_ON 2
+# define PR_RISCV_V_VSTATE_CTRL_INHERIT (1 << 4)
+# define PR_RISCV_V_VSTATE_CTRL_CUR_MASK 0x3
+# define PR_RISCV_V_VSTATE_CTRL_NEXT_MASK 0xc
+# define PR_RISCV_V_VSTATE_CTRL_MASK 0x1f
+
#endif /* _LINUX_PRCTL_H */
#define VHOST_SET_LOG_BASE _IOW(VHOST_VIRTIO, 0x04, __u64)
/* Specify an eventfd file descriptor to signal on log write. */
#define VHOST_SET_LOG_FD _IOW(VHOST_VIRTIO, 0x07, int)
+/* By default, a device gets one vhost_worker that its virtqueues share. This
+ * command allows the owner of the device to create an additional vhost_worker
+ * for the device. It can later be bound to 1 or more of its virtqueues using
+ * the VHOST_ATTACH_VRING_WORKER command.
+ *
+ * This must be called after VHOST_SET_OWNER and the caller must be the owner
+ * of the device. The new thread will inherit the caller's cgroups and namespaces,
+ * and will share the caller's memory space. The new thread will also be
+ * counted against the caller's RLIMIT_NPROC value.
+ *
+ * The worker's ID used in other commands will be returned in
+ * vhost_worker_state.
+ */
+#define VHOST_NEW_WORKER _IOR(VHOST_VIRTIO, 0x8, struct vhost_worker_state)
+/* Free a worker created with VHOST_NEW_WORKER if it's not attached to any
+ * virtqueue. If userspace is not able to call this for workers it has created,
+ * the kernel will free all the device's workers when the device is closed.
+ */
+#define VHOST_FREE_WORKER _IOW(VHOST_VIRTIO, 0x9, struct vhost_worker_state)
/* Ring setup. */
/* Set number of descriptors in ring. This parameter can not
#define VHOST_VRING_BIG_ENDIAN 1
#define VHOST_SET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x13, struct vhost_vring_state)
#define VHOST_GET_VRING_ENDIAN _IOW(VHOST_VIRTIO, 0x14, struct vhost_vring_state)
+/* Attach a vhost_worker created with VHOST_NEW_WORKER to one of the device's
+ * virtqueues.
+ *
+ * This will replace the virtqueue's existing worker. If the replaced worker
+ * is no longer attached to any virtqueues, it can be freed with
+ * VHOST_FREE_WORKER.
+ */
+#define VHOST_ATTACH_VRING_WORKER _IOW(VHOST_VIRTIO, 0x15, \
+ struct vhost_vring_worker)
+/* Return the vring worker's ID */
+#define VHOST_GET_VRING_WORKER _IOWR(VHOST_VIRTIO, 0x16, \
+ struct vhost_vring_worker)
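/*
 * Minimal usage sketch for the worker ioctls above (illustrative only;
 * the vhost device fd, the virtqueue index and the error handling are
 * assumptions; struct vhost_worker_state and struct vhost_vring_worker,
 * with their worker_id/index members, come from vhost_types.h):
 *
 *	struct vhost_worker_state state = {};
 *	struct vhost_vring_worker vring_worker = { .index = 1 };
 *
 *	ioctl(vhost_fd, VHOST_SET_OWNER);
 *	ioctl(vhost_fd, VHOST_NEW_WORKER, &state);
 *	vring_worker.worker_id = state.worker_id;
 *	ioctl(vhost_fd, VHOST_ATTACH_VRING_WORKER, &vring_worker);
 *
 * This gives virtqueue 1 its own worker thread instead of the default
 * per-device worker shared by all virtqueues.
 */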
/* The following ioctls use eventfd file descriptors to signal and poll
* for events. */
#define SNDRV_PCM_INFO_DOUBLE 0x00000004 /* Double buffering needed for PCM start/stop */
#define SNDRV_PCM_INFO_BATCH 0x00000010 /* double buffering */
#define SNDRV_PCM_INFO_SYNC_APPLPTR 0x00000020 /* need the explicit sync of appl_ptr update */
+#define SNDRV_PCM_INFO_PERFECT_DRAIN 0x00000040 /* silencing at the end of stream is not required */
#define SNDRV_PCM_INFO_INTERLEAVED 0x00000100 /* channels are interleaved */
#define SNDRV_PCM_INFO_NONINTERLEAVED 0x00000200 /* channels are not interleaved */
#define SNDRV_PCM_INFO_COMPLEX 0x00000400 /* complex frame organization (mmap only) */
#define SNDRV_PCM_HW_PARAMS_NORESAMPLE (1<<0) /* avoid rate resampling */
#define SNDRV_PCM_HW_PARAMS_EXPORT_BUFFER (1<<1) /* export buffer */
#define SNDRV_PCM_HW_PARAMS_NO_PERIOD_WAKEUP (1<<2) /* disable period wakeups */
+#define SNDRV_PCM_HW_PARAMS_NO_DRAIN_SILENCE (1<<3) /* do not fill the buffer with
+ * silence samples during drain
+ */
struct snd_interval {
unsigned int min, max;
* Raw MIDI section - /dev/snd/midi??
*/
-#define SNDRV_RAWMIDI_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 2)
+#define SNDRV_RAWMIDI_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 4)
enum {
SNDRV_RAWMIDI_STREAM_OUTPUT = 0,
#define SNDRV_RAWMIDI_INFO_OUTPUT 0x00000001
#define SNDRV_RAWMIDI_INFO_INPUT 0x00000002
#define SNDRV_RAWMIDI_INFO_DUPLEX 0x00000004
+#define SNDRV_RAWMIDI_INFO_UMP 0x00000008
struct snd_rawmidi_info {
unsigned int device; /* RO/WR (control): device number */
};
#endif
+/* UMP EP info flags */
+#define SNDRV_UMP_EP_INFO_STATIC_BLOCKS 0x01
+
+/* UMP EP Protocol / JRTS capability bits */
+#define SNDRV_UMP_EP_INFO_PROTO_MIDI_MASK 0x0300
+#define SNDRV_UMP_EP_INFO_PROTO_MIDI1 0x0100 /* MIDI 1.0 */
+#define SNDRV_UMP_EP_INFO_PROTO_MIDI2 0x0200 /* MIDI 2.0 */
+#define SNDRV_UMP_EP_INFO_PROTO_JRTS_MASK 0x0003
+#define SNDRV_UMP_EP_INFO_PROTO_JRTS_TX 0x0001 /* JRTS Transmit */
+#define SNDRV_UMP_EP_INFO_PROTO_JRTS_RX 0x0002 /* JRTS Receive */
+
+/* UMP Endpoint information */
+struct snd_ump_endpoint_info {
+ int card; /* card number */
+ int device; /* device number */
+ unsigned int flags; /* additional info */
+ unsigned int protocol_caps; /* protocol capabilities */
+ unsigned int protocol; /* current protocol */
+ unsigned int num_blocks; /* # of function blocks */
+ unsigned short version; /* UMP major/minor version */
+ unsigned short family_id; /* MIDI device family ID */
+ unsigned short model_id; /* MIDI family model ID */
+ unsigned int manufacturer_id; /* MIDI manufacturer ID */
+ unsigned char sw_revision[4]; /* software revision */
+ unsigned short padding;
+ unsigned char name[128]; /* endpoint name string */
+ unsigned char product_id[128]; /* unique product id string */
+ unsigned char reserved[32];
+} __packed;
+
+/* UMP direction */
+#define SNDRV_UMP_DIR_INPUT 0x01
+#define SNDRV_UMP_DIR_OUTPUT 0x02
+#define SNDRV_UMP_DIR_BIDIRECTION 0x03
+
+/* UMP block info flags */
+#define SNDRV_UMP_BLOCK_IS_MIDI1 (1U << 0) /* MIDI 1.0 port w/o restrict */
+#define SNDRV_UMP_BLOCK_IS_LOWSPEED (1U << 1) /* 31.25Kbps B/W MIDI1 port */
+
+/* UMP block user-interface hint */
+#define SNDRV_UMP_BLOCK_UI_HINT_UNKNOWN 0x00
+#define SNDRV_UMP_BLOCK_UI_HINT_RECEIVER 0x01
+#define SNDRV_UMP_BLOCK_UI_HINT_SENDER 0x02
+#define SNDRV_UMP_BLOCK_UI_HINT_BOTH 0x03
+
+/* UMP groups and blocks */
+#define SNDRV_UMP_MAX_GROUPS 16
+#define SNDRV_UMP_MAX_BLOCKS 32
+
+/* UMP Block information */
+struct snd_ump_block_info {
+ int card; /* card number */
+ int device; /* device number */
+ unsigned char block_id; /* block ID (R/W) */
+ unsigned char direction; /* UMP direction */
+ unsigned char active; /* Activeness */
+ unsigned char first_group; /* first group ID */
+ unsigned char num_groups; /* number of groups */
+ unsigned char midi_ci_version; /* MIDI-CI support version */
+ unsigned char sysex8_streams; /* max number of sysex8 streams */
+ unsigned char ui_hint; /* user interface hint */
+ unsigned int flags; /* various info flags */
+ unsigned char name[128]; /* block name string */
+ unsigned char reserved[32];
+} __packed;
+
#define SNDRV_RAWMIDI_IOCTL_PVERSION _IOR('W', 0x00, int)
#define SNDRV_RAWMIDI_IOCTL_INFO _IOR('W', 0x01, struct snd_rawmidi_info)
#define SNDRV_RAWMIDI_IOCTL_USER_PVERSION _IOW('W', 0x02, int)
#define SNDRV_RAWMIDI_IOCTL_STATUS _IOWR('W', 0x20, struct snd_rawmidi_status)
#define SNDRV_RAWMIDI_IOCTL_DROP _IOW('W', 0x30, int)
#define SNDRV_RAWMIDI_IOCTL_DRAIN _IOW('W', 0x31, int)
+/* Additional ioctls for UMP rawmidi devices */
+#define SNDRV_UMP_IOCTL_ENDPOINT_INFO _IOR('W', 0x40, struct snd_ump_endpoint_info)
+#define SNDRV_UMP_IOCTL_BLOCK_INFO _IOR('W', 0x41, struct snd_ump_block_info)
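/*
 * Minimal usage sketch for the UMP ioctls above (illustrative only; the
 * rawmidi device path is an assumption and depends on the card/device
 * numbering of the system):
 *
 *	struct snd_ump_endpoint_info ep = {};
 *	int fd = open("/dev/snd/midiC0D0", O_RDONLY);
 *	int err = ioctl(fd, SNDRV_UMP_IOCTL_ENDPOINT_INFO, &ep);
 *
 * On success ep.name, ep.num_blocks and ep.protocol_caps describe the
 * UMP endpoint; SNDRV_UMP_IOCTL_BLOCK_INFO can then be issued once per
 * function block, with block_id filled in before each call.
 */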
/*
* Timer section - /dev/snd/timer
* *
****************************************************************************/
-#define SNDRV_CTL_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 8)
+#define SNDRV_CTL_VERSION SNDRV_PROTOCOL_VERSION(2, 0, 9)
struct snd_ctl_card_info {
int card; /* card number */
#define SNDRV_CTL_IOCTL_RAWMIDI_NEXT_DEVICE _IOWR('U', 0x40, int)
#define SNDRV_CTL_IOCTL_RAWMIDI_INFO _IOWR('U', 0x41, struct snd_rawmidi_info)
#define SNDRV_CTL_IOCTL_RAWMIDI_PREFER_SUBDEVICE _IOW('U', 0x42, int)
+#define SNDRV_CTL_IOCTL_UMP_NEXT_DEVICE _IOWR('U', 0x43, int)
+#define SNDRV_CTL_IOCTL_UMP_ENDPOINT_INFO _IOWR('U', 0x44, struct snd_ump_endpoint_info)
+#define SNDRV_CTL_IOCTL_UMP_BLOCK_INFO _IOWR('U', 0x45, struct snd_ump_block_info)
#define SNDRV_CTL_IOCTL_POWER _IOWR('U', 0xd0, int)
#define SNDRV_CTL_IOCTL_POWER_STATE _IOR('U', 0xd1, int)
while (ci < cmds->cnt && ei < excludes->cnt) {
cmp = strcmp(cmds->names[ci]->name, excludes->names[ei]->name);
if (cmp < 0) {
- zfree(&cmds->names[cj]);
- cmds->names[cj++] = cmds->names[ci++];
+ if (ci == cj) {
+ ci++;
+ cj++;
+ } else {
+ zfree(&cmds->names[cj]);
+ cmds->names[cj++] = cmds->names[ci++];
+ }
} else if (cmp == 0) {
ci++;
ei++;
ei++;
}
}
-
- while (ci < cmds->cnt) {
- zfree(&cmds->names[cj]);
- cmds->names[cj++] = cmds->names[ci++];
+ if (ci != cj) {
+ while (ci < cmds->cnt) {
+ zfree(&cmds->names[cj]);
+ cmds->names[cj++] = cmds->names[ci++];
+ }
}
for (ci = cj; ci < cmds->cnt; ci++)
zfree(&cmds->names[ci]);
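The hunk above hardens what appears to be libsubcmd's exclude_cmds(): when the scan index and the write index still coincide, nothing has been dropped yet, so the entry is kept in place instead of being freed and reassigned onto itself. A standalone toy (not from the patch, with the zfree()/ownership handling omitted) showing the same two-index compaction over sorted name lists:

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char *names[] = { "annotate", "bench", "record", "stat" };
	const char *excludes[] = { "bench" };
	size_t cnt = 4, ecnt = 1, ci = 0, cj = 0, ei = 0;

	while (ci < cnt && ei < ecnt) {
		int cmp = strcmp(names[ci], excludes[ei]);

		if (cmp < 0) {
			if (ci == cj) {		/* no hole yet: keep in place */
				ci++;
				cj++;
			} else {		/* move up into the hole */
				names[cj++] = names[ci++];
			}
		} else if (cmp == 0) {		/* excluded: skip it */
			ci++;
			ei++;
		} else {
			ei++;
		}
	}
	while (ci < cnt)			/* compact the remaining tail */
		names[cj++] = names[ci++];
	cnt = cj;

	for (ci = 0; ci < cnt; ci++)
		printf("%s\n", names[ci]);	/* annotate, record, stat */
	return 0;
}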
perror("malloc");
return NULL;
}
- memset(elf, 0, offsetof(struct elf, sections));
+ memset(elf, 0, sizeof(*elf));
INIT_LIST_HEAD(&elf->sections);
ifdef CSINCLUDES
LIBOPENCSD_CFLAGS := -I$(CSINCLUDES)
endif
-OPENCSDLIBS := -lopencsd_c_api
+OPENCSDLIBS := -lopencsd_c_api -lopencsd
ifeq ($(findstring -static,${LDFLAGS}),-static)
- OPENCSDLIBS += -lopencsd -lstdc++
+ OPENCSDLIBS += -lstdc++
endif
ifdef CSLIBS
LIBOPENCSD_LDFLAGS := -L$(CSLIBS)
448 n64 process_mrelease sys_process_mrelease
449 n64 futex_waitv sys_futex_waitv
450 common set_mempolicy_home_node sys_set_mempolicy_home_node
+451 n64 cachestat sys_cachestat
448 common process_mrelease sys_process_mrelease
449 common futex_waitv sys_futex_waitv
450 nospu set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common cachestat sys_cachestat
448 common process_mrelease sys_process_mrelease sys_process_mrelease
449 common futex_waitv sys_futex_waitv sys_futex_waitv
450 common set_mempolicy_home_node sys_set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common cachestat sys_cachestat sys_cachestat
448 common process_mrelease sys_process_mrelease
449 common futex_waitv sys_futex_waitv
450 common set_mempolicy_home_node sys_set_mempolicy_home_node
+451 common cachestat sys_cachestat
#
# Due to a historical design error, certain syscalls are numbered differently
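The syscall tables above wire up cachestat as number 451 on these architectures. A self-contained sketch (illustration only, not from the patch) of invoking it through syscall(2); the structure layouts are copied from the uapi <linux/mman.h> definitions and should be treated as assumptions here, and __NR_cachestat from installed headers is preferable when available:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/syscall.h>

struct cachestat_range {
	uint64_t off;
	uint64_t len;		/* 0 means "from off to the end of the file" */
};

struct cachestat {
	uint64_t nr_cache;
	uint64_t nr_dirty;
	uint64_t nr_writeback;
	uint64_t nr_evicted;
	uint64_t nr_recently_evicted;
};

int main(int argc, char **argv)
{
	struct cachestat_range range = { 0, 0 };
	struct cachestat cs = { 0 };
	int fd = open(argc > 1 ? argv[1] : "/etc/hostname", O_RDONLY);

	if (fd < 0)
		return 1;
	if (syscall(451, fd, &range, &cs, 0) != 0) {	/* 451 == cachestat above */
		perror("cachestat");
		return 1;
	}
	printf("cached %llu, dirty %llu, writeback %llu pages\n",
	       (unsigned long long)cs.nr_cache,
	       (unsigned long long)cs.nr_dirty,
	       (unsigned long long)cs.nr_writeback);
	close(fd);
	return 0;
}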
},
{
"MetricName": "nps1_die_to_dram",
- "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die) (may need --metric-no-group)",
+ "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die)",
"MetricExpr": "dram_channel_data_controller_0 + dram_channel_data_controller_1 + dram_channel_data_controller_2 + dram_channel_data_controller_3 + dram_channel_data_controller_4 + dram_channel_data_controller_5 + dram_channel_data_controller_6 + dram_channel_data_controller_7",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"MetricGroup": "data_fabric",
"PerPkg": "1",
"ScaleUnit": "6.1e-5MiB"
},
{
"MetricName": "nps1_die_to_dram",
- "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die) (may need --metric-no-group)",
+ "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die)",
"MetricExpr": "dram_channel_data_controller_0 + dram_channel_data_controller_1 + dram_channel_data_controller_2 + dram_channel_data_controller_3 + dram_channel_data_controller_4 + dram_channel_data_controller_5 + dram_channel_data_controller_6 + dram_channel_data_controller_7",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"MetricGroup": "data_fabric",
"PerPkg": "1",
"ScaleUnit": "6.1e-5MiB"
},
{
"MetricName": "nps1_die_to_dram",
- "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die) (may need --metric-no-group)",
+ "BriefDescription": "Approximate: Combined DRAM B/bytes of all channels on a NPS1 node (die)",
"MetricExpr": "dram_channel_data_controller_0 + dram_channel_data_controller_1 + dram_channel_data_controller_2 + dram_channel_data_controller_3 + dram_channel_data_controller_4 + dram_channel_data_controller_5 + dram_channel_data_controller_6 + dram_channel_data_controller_7",
"MetricGroup": "data_fabric",
"PerPkg": "1",
+ "MetricConstraint": "NO_GROUP_EVENTS",
"ScaleUnit": "6.1e-5MiB"
}
]
--- /dev/null
+#!/bin/bash
+# test perf probe of function from different CU
+# SPDX-License-Identifier: GPL-2.0
+
+set -e
+
+temp_dir=$(mktemp -d /tmp/perf-uprobe-different-cu-sh.XXXXXXXXXX)
+
+cleanup()
+{
+ trap - EXIT TERM INT
+ if [[ "${temp_dir}" =~ ^/tmp/perf-uprobe-different-cu-sh.*$ ]]; then
+ echo "--- Cleaning up ---"
+ perf probe -x ${temp_dir}/testfile -d foo
+ rm -f "${temp_dir}/"*
+ rmdir "${temp_dir}"
+ fi
+}
+
+trap_cleanup()
+{
+ cleanup
+ exit 1
+}
+
+trap trap_cleanup EXIT TERM INT
+
+cat > ${temp_dir}/testfile-foo.h << EOF
+struct t
+{
+ int *p;
+ int c;
+};
+
+extern int foo (int i, struct t *t);
+EOF
+
+cat > ${temp_dir}/testfile-foo.c << EOF
+#include "testfile-foo.h"
+
+int
+foo (int i, struct t *t)
+{
+ int j, res = 0;
+ for (j = 0; j < i && j < t->c; j++)
+ res += t->p[j];
+
+ return res;
+}
+EOF
+
+cat > ${temp_dir}/testfile-main.c << EOF
+#include "testfile-foo.h"
+
+static struct t g;
+
+int
+main (int argc, char **argv)
+{
+ int i;
+ int j[argc];
+ g.c = argc;
+ g.p = j;
+ for (i = 0; i < argc; i++)
+ j[i] = (int) argv[i][0];
+ return foo (3, &g);
+}
+EOF
+
+gcc -g -Og -flto -c ${temp_dir}/testfile-foo.c -o ${temp_dir}/testfile-foo.o
+gcc -g -Og -c ${temp_dir}/testfile-main.c -o ${temp_dir}/testfile-main.o
+gcc -g -Og -o ${temp_dir}/testfile ${temp_dir}/testfile-foo.o ${temp_dir}/testfile-main.o
+
+perf probe -x ${temp_dir}/testfile --funcs foo
+perf probe -x ${temp_dir}/testfile foo
+
+cleanup
signal(SIGCHLD, sig_handler);
- evlist = evlist__new_default();
+ evlist = evlist__new_dummy();
if (evlist == NULL) {
- pr_debug("evlist__new_default\n");
+ pr_debug("evlist__new_dummy\n");
return -1;
}
#define SCM_RIGHTS 0x01 /* rw: access rights (array of int) */
#define SCM_CREDENTIALS 0x02 /* rw: struct ucred */
#define SCM_SECURITY 0x03 /* rw: security label */
+#define SCM_PIDFD 0x04 /* ro: pidfd (int) */
struct ucred {
__u32 pid;
*/
#define MSG_ZEROCOPY 0x4000000 /* Use user data in kernel path */
+#define MSG_SPLICE_PAGES 0x8000000 /* Splice the pages from the iterator in sendmsg() */
#define MSG_FASTOPEN 0x20000000 /* Send data in TCP SYN */
#define MSG_CMSG_CLOEXEC 0x40000000 /* Set close_on_exec for file
descriptor received through
#define MSG_CMSG_COMPAT 0 /* We never have 32 bit fixups */
#endif
+/* Flags to be cleared on entry by sendmsg and sendmmsg syscalls */
+#define MSG_INTERNAL_SENDMSG_FLAGS \
+ (MSG_SPLICE_PAGES | MSG_SENDPAGE_NOPOLICY | MSG_SENDPAGE_DECRYPTED)
/* Setsockoptions(2) level. Thanks to BSD these must match IPPROTO_xxx */
#define SOL_IP 0
linux_mount=${linux_header_dir}/mount.h
printf "static const char *move_mount_flags[] = {\n"
-regex='^[[:space:]]*#[[:space:]]*define[[:space:]]+MOVE_MOUNT_([^_]+_[[:alnum:]_]+)[[:space:]]+(0x[[:xdigit:]]+)[[:space:]]*.*'
+regex='^[[:space:]]*#[[:space:]]*define[[:space:]]+MOVE_MOUNT_([^_]+[[:alnum:]_]+)[[:space:]]+(0x[[:xdigit:]]+)[[:space:]]*.*'
grep -E $regex ${linux_mount} | \
sed -r "s/$regex/\2 \1/g" | \
xargs printf "\t[ilog2(%s) + 1] = \"%s\",\n"
#ifndef MSG_WAITFORONE
#define MSG_WAITFORONE 0x10000
#endif
+#ifndef MSG_BATCH
+#define MSG_BATCH 0x40000
+#endif
+#ifndef MSG_ZEROCOPY
+#define MSG_ZEROCOPY 0x4000000
+#endif
#ifndef MSG_SPLICE_PAGES
#define MSG_SPLICE_PAGES 0x8000000
#endif
P_MSG_FLAG(NOSIGNAL);
P_MSG_FLAG(MORE);
P_MSG_FLAG(WAITFORONE);
+ P_MSG_FLAG(BATCH);
+ P_MSG_FLAG(ZEROCOPY);
P_MSG_FLAG(SPLICE_PAGES);
P_MSG_FLAG(FASTOPEN);
P_MSG_FLAG(CMSG_CLOEXEC);
{
Dwarf_Die cu_die;
Dwarf_Files *files;
+ Dwarf_Attribute attr_mem;
- if (idx < 0 || !dwarf_diecu(dw_die, &cu_die, NULL, NULL) ||
+ if (idx < 0 || !dwarf_attr_integrate(dw_die, DW_AT_decl_file, &attr_mem) ||
+ !dwarf_cu_die(attr_mem.cu, &cu_die, NULL, NULL, NULL, NULL, NULL, NULL) ||
dwarf_getsrcfiles(&cu_die, &files, NULL) != 0)
return NULL;
if (term->type_term == PARSE_EVENTS__TERM_TYPE_LEGACY_CACHE) {
const struct perf_pmu *pmu = perf_pmus__find_by_type(attr->type);
+ if (!pmu) {
+ char *err_str;
+
+ if (asprintf(&err_str, "Failed to find PMU for type %d", attr->type) >= 0)
+ parse_events_error__handle(err, term->err_term,
+ err_str, /*help=*/NULL);
+ return -EINVAL;
+ }
if (perf_pmu__supports_legacy_cache(pmu)) {
attr->type = PERF_TYPE_HW_CACHE;
return parse_events__decode_legacy_cache(term->config, pmu->type,
e = i - 1;
} else {
if (i >= 4)
- e = i - 4;
- else if (i == 3)
- e = i - 2;
+ e = i - 3;
+ else if (i >= 1)
+ e = i - 1;
else
e = 0;
}
done
# Avoid any output on non arm64 on emit_tests
-emit_tests: all
+emit_tests:
@for DIR in $(ARM64_SUBTARGETS); do \
BUILD_TARGET=$(OUTPUT)/$$DIR; \
make OUTPUT=$$BUILD_TARGET -C $$DIR $@; \
MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB,
-1, 0);
if (addr == MAP_FAILED) {
- if (errno == ENOMEM)
- SKIP(return, "No huge pages available.");
+ if (errno == ENOMEM || errno == EINVAL)
+ SKIP(return, "No huge pages available or CONFIG_HUGETLB_PAGE disabled.");
else
TH_LOG("mmap error: %s", strerror(errno));
}
munmap:
munmap(dst, pagesize);
free(src);
-#endif /* __NR_userfaultfd */
}
+#endif /* __NR_userfaultfd */
int main(void)
{
done
# Avoid any output on non riscv on emit_tests
-emit_tests: all
+emit_tests:
@for DIR in $(RISCV_SUBTARGETS); do \
BUILD_TARGET=$(OUTPUT)/$$DIR; \
$(MAKE) OUTPUT=$$BUILD_TARGET -C $$DIR $@; \
printf("%lld.%i(est)", eppm/1000, abs((int)(eppm%1000)));
/* Avg the two actual freq samples adjtimex gave us */
- ppm = (tx1.freq + tx2.freq) * 1000 / 2;
- ppm = (long long)tx1.freq * 1000;
+ ppm = (long long)(tx1.freq + tx2.freq) * 1000 / 2;
ppm = shift_right(ppm, 16);
printf(" %lld.%i(act)", ppm/1000, abs((int)(ppm%1000)));