Trond Myklebust [Fri, 15 Feb 2019 19:59:52 +0000 (14:59 -0500)]
NFS: Fix an I/O request leakage in nfs_do_recoalesce
commit 4d91969ed4dbcefd0e78f77494f0cb8fada9048a upstream.
Whether we need to exit early, or just reprocess the list, we
must not lose track of the request which failed to get recoalesced.
Fixes: 03d5eb65b538 ("NFS: Fix a memory leak in nfs_do_recoalesce")
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: stable@vger.kernel.org # v4.0+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Trond Myklebust [Wed, 13 Feb 2019 14:21:38 +0000 (09:21 -0500)]
NFS: Fix I/O request leakages
commit f57dcf4c72113c745d83f1c65f7291299f65c14f upstream.
When we fail to add the request to the I/O queue, we currently leave it
to the caller to free the failed request. However, since some of the
requests that fail are actually created by nfs_pageio_add_request()
itself, and are not passed back to the caller, this leads to a leakage
issue, which can again cause page locks to leak.
This commit addresses the leakage by freeing the created requests on
error, using desc->pg_completion_ops->error_cleanup().
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Fixes: a7d42ddb30997 ("nfs: add mirroring support to pgio layer")
Cc: stable@vger.kernel.org # v4.0: c18b96a1b862: nfs: clean up rest of reqs
Cc: stable@vger.kernel.org # v4.0: d600ad1f2bdb: NFS41: pop some layoutget
Cc: stable@vger.kernel.org # v4.0+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pavel Machek [Thu, 27 Dec 2018 19:52:21 +0000 (20:52 +0100)]
cpcap-charger: generate events for userspace
commit fd10606f93a149a9f3d37574e5385b083b4a7b32 upstream.
The driver doesn't generate uevents on charger connect/disconnect.
This leads to UPower not detecting when AC is on or off... and that is
bad.
Reported by Arthur D. on github
(https://github.com/maemo-leste/bugtracker/issues/206), thanks to
Merlijn Wajer for suggesting a fix.
Cc: stable@kernel.org
Signed-off-by: Pavel Machek <pavel@ucw.cz>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gustavo A. R. Silva [Tue, 22 Jan 2019 16:56:36 +0000 (10:56 -0600)]
mfd: sm501: Fix potential NULL pointer dereference
commit ae7b8eda27b33b1f688dfdebe4d46f690a8f9162 upstream.
There is a potential NULL pointer dereference in case devm_kzalloc()
fails and returns NULL.
Fix this by adding a NULL check on *lookup*.
This bug was detected with the help of Coccinelle.
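As an illustration (not the driver's exact code), the fix boils down to
checking the devm_kzalloc() result before using it; 'lookup_size' below is
a placeholder for the real allocation size:

    lookup = devm_kzalloc(&pdev->dev, lookup_size, GFP_KERNEL);
    if (!lookup)
            return -ENOMEM;   /* bail out instead of dereferencing NULL later */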
Fixes: b2e63555592f ("i2c: gpio: Convert to use descriptors")
Cc: stable@vger.kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Wed, 6 Mar 2019 13:29:34 +0000 (08:29 -0500)]
dm integrity: limit the rate of error messages
commit 225557446856448039a9e495da37b72c20071ef2 upstream.
When using dm-integrity underneath md-raid, some tests with raid
auto-correction trigger large amounts of integrity failures - and all
these failures print an error message. These messages can bring the
system to a halt if the system is using serial console.
Fix this by limiting the rate of error messages - it improves the speed
of raid recovery and avoids the hang.
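A hedged sketch of the rate-limiting idea, using the device-mapper
DMERR_LIMIT() helper; the message text and the 'logical_sector' variable
are illustrative:

    DMERR_LIMIT("Checksum failed at sector 0x%llx",
                (unsigned long long)logical_sector);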
Fixes: 7eada909bfd7a ("dm: add integrity target")
Cc: stable@vger.kernel.org # v4.12+
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Sun, 6 Jan 2019 10:06:25 +0000 (21:06 +1100)]
dm: fix to_sector() for 32bit
commit 0bdb50c531f7377a9da80d3ce2d61f389c84cb30 upstream.
A dm-raid array with devices larger than 4GB won't assemble on
a 32 bit host since _check_data_dev_sectors() was added in 4.16.
This is because to_sector() treats its argument as an "unsigned long"
which is 32bits (4GB) on a 32bit host. Using "unsigned long long"
is more correct.
Kernels as early as 4.2 can have other problems due to to_sector()
being used on the size of a device.
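For illustration, the corrected helper is essentially the following; the
argument type is widened so sector counts for devices larger than 4GB do
not truncate on a 32-bit host:

    static inline sector_t to_sector(unsigned long long n)
    {
            return (n >> SECTOR_SHIFT);
    }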
Fixes: 0cf4503174c1 ("dm raid: add support for the MD RAID0 personality")
cc: stable@vger.kernel.org (v4.2+)
Reported-and-tested-by: Guillaume Perréal <gperreal@free.fr>
Signed-off-by: NeilBrown <neil@brown.name>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yang Yingliang [Mon, 28 Jan 2019 03:08:54 +0000 (11:08 +0800)]
ipmi_si: fix use-after-free of resource->name
commit 401e7e88d4ef80188ffa07095ac00456f901b8c4 upstream.
When we execute the following commands, we get an oops:
rmmod ipmi_si
cat /proc/ioports
[ 1623.482380] Unable to handle kernel paging request at virtual address ffff00000901d478
[ 1623.482382] Mem abort info:
[ 1623.482383] ESR = 0x96000007
[ 1623.482385] Exception class = DABT (current EL), IL = 32 bits
[ 1623.482386] SET = 0, FnV = 0
[ 1623.482387] EA = 0, S1PTW = 0
[ 1623.482388] Data abort info:
[ 1623.482389] ISV = 0, ISS = 0x00000007
[ 1623.482390] CM = 0, WnR = 0
[ 1623.482393] swapper pgtable: 4k pages, 48-bit VAs, pgdp = 00000000d7d94a66
[ 1623.482395] [ffff00000901d478] pgd=000000dffbfff003, pud=000000dffbffe003, pmd=0000003f5d06e003, pte=0000000000000000
[ 1623.482399] Internal error: Oops: 96000007 [#1] SMP
[ 1623.487407] Modules linked in: ipmi_si(E) nls_utf8 isofs rpcrdma ib_iser ib_srpt target_core_mod ib_srp scsi_transport_srp ib_ipoib rdma_ucm ib_umad rdma_cm ib_cm dm_mirror dm_region_hash dm_log iw_cm dm_mod aes_ce_blk crypto_simd cryptd aes_ce_cipher ses ghash_ce sha2_ce enclosure sha256_arm64 sg sha1_ce hisi_sas_v2_hw hibmc_drm sbsa_gwdt hisi_sas_main ip_tables mlx5_ib ib_uverbs marvell ib_core mlx5_core ixgbe mdio hns_dsaf ipmi_devintf hns_enet_drv ipmi_msghandler hns_mdio [last unloaded: ipmi_si]
[ 1623.532410] CPU: 30 PID: 11438 Comm: cat Kdump: loaded Tainted: G E 5.0.0-rc3+ #168
[ 1623.541498] Hardware name: Huawei TaiShan 2280 /BC11SPCD, BIOS 1.37 11/21/2017
[ 1623.548822] pstate: a0000005 (NzCv daif -PAN -UAO)
[ 1623.553684] pc : string+0x28/0x98
[ 1623.557040] lr : vsnprintf+0x368/0x5e8
[ 1623.560837] sp : ffff000013213a80
[ 1623.564191] x29: ffff000013213a80 x28: ffff00001138abb5
[ 1623.569577] x27: ffff000013213c18 x26: ffff805f67d06049
[ 1623.574963] x25: 0000000000000000 x24: ffff00001138abb5
[ 1623.580349] x23: 0000000000000fb7 x22: ffff0000117ed000
[ 1623.585734] x21: ffff000011188fd8 x20: ffff805f67d07000
[ 1623.591119] x19: ffff805f67d06061 x18: ffffffffffffffff
[ 1623.596505] x17: 0000000000000200 x16: 0000000000000000
[ 1623.601890] x15: ffff0000117ed748 x14: ffff805f67d07000
[ 1623.607276] x13: ffff805f67d0605e x12: 0000000000000000
[ 1623.612661] x11: 0000000000000000 x10: 0000000000000000
[ 1623.618046] x9 : 0000000000000000 x8 : 000000000000000f
[ 1623.623432] x7 : ffff805f67d06061 x6 : fffffffffffffffe
[ 1623.628817] x5 : 0000000000000012 x4 : ffff00000901d478
[ 1623.634203] x3 : ffff0a00ffffff04 x2 : ffff805f67d07000
[ 1623.639588] x1 : ffff805f67d07000 x0 : ffffffffffffffff
[ 1623.644974] Process cat (pid: 11438, stack limit = 0x000000008d4cbc10)
[ 1623.651592] Call trace:
[ 1623.654068] string+0x28/0x98
[ 1623.657071] vsnprintf+0x368/0x5e8
[ 1623.660517] seq_vprintf+0x70/0x98
[ 1623.668009] seq_printf+0x7c/0xa0
[ 1623.675530] r_show+0xc8/0xf8
[ 1623.682558] seq_read+0x330/0x440
[ 1623.689877] proc_reg_read+0x78/0xd0
[ 1623.697346] __vfs_read+0x60/0x1a0
[ 1623.704564] vfs_read+0x94/0x150
[ 1623.711339] ksys_read+0x6c/0xd8
[ 1623.717939] __arm64_sys_read+0x24/0x30
[ 1623.725077] el0_svc_common+0x120/0x148
[ 1623.732035] el0_svc_handler+0x30/0x40
[ 1623.738757] el0_svc+0x8/0xc
[ 1623.744520] Code: d1000406 aa0103e2 54000149 b4000080 (39400085)
[ 1623.753441] ---[ end trace f91b6a4937de9835 ]---
[ 1623.760871] Kernel panic - not syncing: Fatal exception
[ 1623.768935] SMP: stopping secondary CPUs
[ 1623.775718] Kernel Offset: disabled
[ 1623.781998] CPU features: 0x002,21006008
[ 1623.788777] Memory Limit: none
[ 1623.798329] Starting crashdump kernel...
[ 1623.805202] Bye!
If io_setup() is called successfully in try_smi_init() but try_smi_init()
goes to out_err before calling ipmi_register_smi(), ipmi_unregister_smi()
will not be called when removing the module. The resource allocated in
io_setup() then cannot be freed, but the name (DEVICE_NAME) of that
resource is freed while removing the module. This causes a use-after-free
when reading /proc/ioports.
Fix this by calling io_cleanup() when try_smi_init() goes to out_err,
and don't call io_cleanup() until io_setup() has returned successfully,
to avoid warning prints.
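A hedged sketch of the reworked error path in try_smi_init(); the
io_cleanup callback comes from the commit text, the surrounding details
are assumptions:

    out_err:
            if (rv && new_smi->io.io_cleanup) {
                    new_smi->io.io_cleanup(&new_smi->io);
                    new_smi->io.io_cleanup = NULL;  /* don't clean up again on shutdown */
            }
            return rv;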
Fixes: 93c303d2045b ("ipmi_si: Clean up shutdown a bit")
Cc: stable@vger.kernel.org
Reported-by: NuoHan Qiao <qiaonuohan@huawei.com>
Suggested-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dave Martin [Thu, 21 Feb 2019 11:42:32 +0000 (11:42 +0000)]
arm64: KVM: Fix architecturally invalid reset value for FPEXC32_EL2
commit c88b093693ccbe41991ef2e9b1d251945e6e54ed upstream.
Due to what looks like a typo dating back to the original addition
of FPEXC32_EL2 handling, KVM currently initialises this register to
an architecturally invalid value.
As a result, the VECITR field (RES1) in bits [10:8] is initialised
with 0, and the two reserved (RES0) bits [6:5] are initialised with
1. (In the Common VFP Subarchitecture as specified by ARMv7-A,
these two bits were IMP DEF. ARMv8-A removes them.)
This patch changes the reset value from 0x70 to 0x700, which
reflects the architectural constraints and is presumably what was
originally intended.
Cc: <stable@vger.kernel.org> # 4.12.x-
Cc: Christoffer Dall <christoffer.dall@arm.com>
Fixes: 62a89c44954f ("arm64: KVM: 32bit handling of coprocessor traps")
Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Will Deacon [Fri, 1 Mar 2019 13:28:01 +0000 (13:28 +0000)]
arm64: debug: Ensure debug handlers check triggering exception level
commit 6bd288569b50bc89fa5513031086746968f585cb upstream.
Debug exception handlers may be called for exceptions generated both by
user and kernel code. In many cases, this is checked explicitly, but
in other cases things either happen to work by happy accident or they
go slightly wrong. For example, executing 'brk #4' from userspace will
enter the kprobes code and be ignored, but the instruction will be
retried forever in userspace instead of delivering a SIGTRAP.
Fix this issue in the most stable-friendly fashion by simply adding
explicit checks of the triggering exception level to all of our debug
exception handlers.
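Illustrative shape of the added checks (the handler name below is
hypothetical): a hook that only services kernel-mode debug exceptions now
rejects hits coming from EL0, so the core code can deliver a SIGTRAP
instead of retrying the instruction forever:

    static int kernel_brk_handler(struct pt_regs *regs, unsigned int esr)
    {
            if (user_mode(regs))            /* triggered at EL0, not for us */
                    return DBG_HOOK_ERROR;

            /* ... handle the kernel-side breakpoint ... */
            return DBG_HOOK_HANDLED;
    }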
Cc: <stable@vger.kernel.org>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Julien Thierry [Thu, 31 Jan 2019 14:58:39 +0000 (14:58 +0000)]
arm64: Fix HCR.TGE status for NMI contexts
commit 5870970b9a828d8693aa6d15742573289d7dbcd0 upstream.
When using VHE, the host needs to clear HCR_EL2.TGE bit in order
to interact with guest TLBs, switching from EL2&0 translation regime
to EL1&0.
However, some non-maskable asynchronous events, such as SDEI, could
happen while TGE is cleared. Because of this, address translation
operations relying on the EL2&0 translation regime could fail (TLB
invalidation, userspace access, ...).
Fix this by properly setting HCR_EL2.TGE when entering NMI context and
clear it if necessary when returning to the interrupted context.
Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Suggested-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: James Morse <james.morse@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: linux-arch@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gustavo A. R. Silva [Thu, 3 Jan 2019 20:14:08 +0000 (14:14 -0600)]
ARM: s3c24xx: Fix boolean expressions in osiris_dvs_notify
commit e2477233145f2156434afb799583bccd878f3e9f upstream.
Fix boolean expressions by using logical AND operator '&&' instead of
bitwise operator '&'.
This issue was detected with the help of Coccinelle.
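The change is essentially the following, shown as a before/after sketch
(variable names as used in osiris_dvs_notify()):

    /* before: bitwise AND on booleans */
    if (old_dvs & !new_dvs || cur_dvs & !new_dvs)
    /* after: logical AND, parenthesised to keep -Wparentheses quiet */
    if ((old_dvs && !new_dvs) || (cur_dvs && !new_dvs))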
Fixes: 4fa084af28ca ("ARM: OSIRIS: DVS (Dynamic Voltage Scaling) supoort.")
Cc: stable@vger.kernel.org
Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
[krzk: Fix -Wparentheses warning]
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christophe Leroy [Tue, 29 Jan 2019 16:37:55 +0000 (16:37 +0000)]
powerpc/traps: Fix the message printed when stack overflows
commit 9bf3d3c4e4fd82c7174f4856df372ab2a71005b9 upstream.
Today's message is useless:
[ 42.253267] Kernel stack overflow in process (ptrval), r1=c65500b0
This patch fixes it:
[ 66.905235] Kernel stack overflow in process sh[356], r1=c65560b0
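Roughly, the message is now printed along these lines, using task_pid_nr()
so the raw pid is shown instead of a hashed pointer:

    pr_emerg("Kernel stack overflow in process %s[%d], r1=%lx\n",
             current->comm, task_pid_nr(current), regs->gpr[1]);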
Fixes: ad67b74d2469 ("printk: hash addresses printed with %p")
Cc: stable@vger.kernel.org # v4.15+
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
[mpe: Use task_pid_nr()]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christophe Leroy [Tue, 22 Jan 2019 14:11:24 +0000 (14:11 +0000)]
powerpc/traps: fix recoverability of machine check handling on book3s/32
commit 0bbea75c476b77fa7d7811d6be911cc7583e640f upstream.
Looks like book3s/32 doesn't set RI on machine check, so
checking RI before calling die() will always be fatal,
although this is not an issue in most cases.
Fixes: b96672dd840f ("powerpc: Machine check interrupt is a non-maskable interrupt")
Fixes: daf00ae71dad ("powerpc/traps: restore recoverability of machine_check interrupts")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Cc: stable@vger.kernel.org
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Aneesh Kumar K.V [Fri, 22 Feb 2019 17:25:31 +0000 (22:55 +0530)]
powerpc/hugetlb: Don't do runtime allocation of 16G pages in LPAR configuration
commit 35f2806b481f5b9207f25e1886cba5d1c4d12cc7 upstream.
We added runtime allocation of 16G pages in commit 4ae279c2c96a
("powerpc/mm/hugetlb: Allow runtime allocation of 16G."). That was done
to enable 16G allocation on PowerNV and KVM config. In case of KVM
config, we mostly would have the entire guest RAM backed by 16G
hugetlb pages for this to work. PAPR does support partial backing of
guest RAM with hugepages via the ibm,expected#pages node of the memory
node in the device tree. This means the rest of the guest RAM won't be
backed by 16G contiguous pages in the host and hence a hash page table
insertion can fail in such a case.
An example error message will look like
hash-mmu: mm: Hashing failure ! EA=0x7efc00000000 access=0x8000000000000006 current=readback
hash-mmu: trap=0x300 vsid=0x67af789 ssize=1 base psize=14 psize 14 pte=0xc000000400000386
readback[12260]: unhandled signal 7 at 00007efc00000000 nip 00000000100012d0 lr 000000001000127c code 2
This patch addresses that by preventing runtime allocation of 16G
hugepages in LPAR config. To allocate 16G hugetlb pages one needs to
use the kernel command line hugepagesz=16G hugepages=<number of 16G pages>.
With radix translation mode we don't run into this issue.
This change will prevent runtime allocation of 16G hugetlb pages on
kvm with hash translation mode. However, with the current upstream it
was observed that 16G hugetlbfs backed guest doesn't boot at all.
We observe boot failure with the below message:
[131354.647546] KVM: map_vrma at 0 failed, ret=-4
That means this patch is not resulting in an observable regression.
Once we fix the boot issue with 16G hugetlb backed memory, we need to
use ibm,expected#pages memory node attribute to indicate 16G page
reservation to the guest. This will also enable partial backing of
guest RAM with 16G pages.
Fixes: 4ae279c2c96a ("powerpc/mm/hugetlb: Allow runtime allocation of 16G.")
Cc: stable@vger.kernel.org # v4.14+
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Ellerman [Thu, 14 Feb 2019 00:08:29 +0000 (11:08 +1100)]
powerpc/ptrace: Simplify vr_get/set() to avoid GCC warning
commit ca6d5149d2ad0a8d2f9c28cbe379802260a0a5e0 upstream.
GCC 8 warns about the logic in vr_get/set(), which with -Werror breaks
the build:
In function ‘user_regset_copyin’,
inlined from ‘vr_set’ at arch/powerpc/kernel/ptrace.c:628:9:
include/linux/regset.h:295:4: error: ‘memcpy’ offset [-527, -529] is
out of the bounds [0, 16] of object ‘vrsave’ with type ‘union
<anonymous>’ [-Werror=array-bounds]
arch/powerpc/kernel/ptrace.c: In function ‘vr_set’:
arch/powerpc/kernel/ptrace.c:623:5: note: ‘vrsave’ declared here
} vrsave;
This has been identified as a regression in GCC, see GCC bug 88273.
However we can avoid the warning and also simplify the logic and make
it more robust.
Currently we pass -1 as end_pos to user_regset_copyout(). This says
"copy up to the end of the regset".
The definition of the regset is:
[REGSET_VMX] = {
.core_note_type = NT_PPC_VMX, .n = 34,
.size = sizeof(vector128), .align = sizeof(vector128),
.active = vr_active, .get = vr_get, .set = vr_set
},
The end is calculated as (n * size), ie. 34 * sizeof(vector128).
In vr_get/set() we pass start_pos as 33 * sizeof(vector128), meaning
we can copy up to sizeof(vector128) into/out-of vrsave.
The on-stack vrsave is defined as:
union {
elf_vrreg_t reg;
u32 word;
} vrsave;
And elf_vrreg_t is:
typedef __vector128 elf_vrreg_t;
So there is no bug, but we rely on all those sizes lining up,
otherwise we would have a kernel stack exposure/overwrite on our
hands.
Rather than relying on that we can pass an explicit end_pos based on
the sizeof(vrsave). The result should be exactly the same but it's
more obviously not over-reading/writing the stack and it avoids the
compiler warning.
Reported-by: Meelis Roos <mroos@linux.ee>
Reported-by: Mathieu Malaterre <malat@debian.org>
Cc: stable@vger.kernel.org
Tested-by: Mathieu Malaterre <malat@debian.org>
Tested-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mark Cave-Ayland [Fri, 8 Feb 2019 14:33:19 +0000 (14:33 +0000)]
powerpc: Fix 32-bit KVM-PR lockup and host crash with MacOS guest
commit fe1ef6bcdb4fca33434256a802a3ed6aacf0bd2f upstream.
Commit 8792468da5e1 "powerpc: Add the ability to save FPU without
giving it up" unexpectedly removed the MSR_FE0 and MSR_FE1 bits from
the bitmask used to update the MSR of the previous thread in
__giveup_fpu() causing a KVM-PR MacOS guest to lockup and panic the
host kernel.
Leaving FE0/1 enabled means unrelated processes might receive FPEs
when they're not expecting them and crash. In particular if this
happens to init the host will then panic.
eg (transcribed):
qemu-system-ppc[837]: unhandled signal 8 at 12cc9ce4 nip 12cc9ce4 lr 12cc9ca4 code 0
systemd[1]: unhandled signal 8 at 202f02e0 nip 202f02e0 lr 001003d4 code 0
Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
Reinstate these bits to the MSR bitmask to enable MacOS guests to run
under 32-bit KVM-PR once again without issue.
Fixes: 8792468da5e1 ("powerpc: Add the ability to save FPU without giving it up")
Cc: stable@vger.kernel.org # v4.6+
Signed-off-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Mackerras [Tue, 12 Feb 2019 00:58:29 +0000 (11:58 +1100)]
powerpc/powernv: Don't reprogram SLW image on every KVM guest entry/exit
commit 19f8a5b5be2898573a5e1dc1db93e8d40117606a upstream.
Commit 24be85a23d1f ("powerpc/powernv: Clear PECE1 in LPCR via stop-api
only on Hotplug", 2017-07-21) added two calls to opal_slw_set_reg()
inside pnv_cpu_offline(), with the aim of changing the LPCR value in
the SLW image to disable wakeups from the decrementer while a CPU is
offline. However, pnv_cpu_offline() gets called each time a secondary
CPU thread is woken up to participate in running a KVM guest, that is,
not just when a CPU is offlined.
Since opal_slw_set_reg() is a very slow operation (with observed
execution times around 20 milliseconds), this means that an offline
secondary CPU can often be busy doing the opal_slw_set_reg() call
when the primary CPU wants to grab all the secondary threads so that
it can run a KVM guest. This leads to messages like "KVM: couldn't
grab CPU n" being printed and guest execution failing.
There is no need to reprogram the SLW image on every KVM guest entry
and exit. So that we do it only when a CPU is really transitioning
between online and offline, this moves the calls to
pnv_program_cpu_hotplug_lpcr() into pnv_smp_cpu_kill_self().
Fixes: 24be85a23d1f ("powerpc/powernv: Clear PECE1 in LPCR via stop-api only on Hotplug")
Cc: stable@vger.kernel.org # v4.14+
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christophe Leroy [Fri, 25 Jan 2019 12:03:55 +0000 (12:03 +0000)]
powerpc/83xx: Also save/restore SPRG4-7 during suspend
commit 36da5ff0bea2dc67298150ead8d8471575c54c7d upstream.
The 83xx has 8 SPRG registers and uses at least SPRG4
for DTLB handling LRU.
Fixes: 2319f1239592 ("powerpc/mm: e300c2/c3/c4 TLB errata workaround")
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jordan Niethe [Wed, 27 Feb 2019 03:02:29 +0000 (14:02 +1100)]
powerpc/powernv: Make opal log only readable by root
commit 7b62f9bd2246b7d3d086e571397c14ba52645ef1 upstream.
Currently the opal log is globally readable. It is kernel policy to
limit the visibility of physical addresses / kernel pointers to root.
Given this and the fact the opal log may contain this information it
would be better to limit the readability to root.
Fixes: bfc36894a48b ("powerpc/powernv: Add OPAL message log interface")
Cc: stable@vger.kernel.org # v3.15+
Signed-off-by: Jordan Niethe <jniethe5@gmail.com>
Reviewed-by: Stewart Smith <stewart@linux.ibm.com>
Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christophe Leroy [Thu, 21 Feb 2019 19:08:37 +0000 (19:08 +0000)]
powerpc/wii: properly disable use of BATs when requested.
commit 6d183ca8baec983dc4208ca45ece3c36763df912 upstream.
'nobats' kernel parameter or some options like CONFIG_DEBUG_PAGEALLOC
deny the use of BATS for mapping memory.
This patch makes sure that the specific wii RAM mapping function
takes it into account as well.
Fixes: de32400dd26e ("wii: use both mem1 and mem2 as ram")
Cc: stable@vger.kernel.org
Reviewed-by: Jonathan Neuschafer <j.neuschaefer@gmx.net>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christophe Leroy [Wed, 27 Feb 2019 11:45:30 +0000 (11:45 +0000)]
powerpc/32: Clear on-stack exception marker upon exception return
commit 9580b71b5a7863c24a9bd18bcd2ad759b86b1eff upstream.
Clear the on-stack STACK_FRAME_REGS_MARKER on exception exit in order
to avoid confusing stacktrace like the one below.
Call Trace:
[c0e9dca0] [c01c42a0] print_address_description+0x64/0x2bc (unreliable)
[c0e9dcd0] [c01c4684] kasan_report+0xfc/0x180
[c0e9dd10] [c0895130] memchr+0x24/0x74
[c0e9dd30] [c00a9e38] msg_print_text+0x124/0x574
[c0e9dde0] [c00ab710] console_unlock+0x114/0x4f8
[c0e9de40] [c00adc60] vprintk_emit+0x188/0x1c4
--- interrupt: c0e9df00 at 0x400f330
    LR = init_stack+0x1f00/0x2000
[c0e9de80] [c00ae3c4] printk+0xa8/0xcc (unreliable)
[c0e9df20] [c0c27e44] early_irq_init+0x38/0x108
[c0e9df50] [c0c15434] start_kernel+0x310/0x488
[c0e9dff0] [00003484] 0x3484
With this patch the trace becomes:
Call Trace:
[c0e9dca0] [c01c42c0] print_address_description+0x64/0x2bc (unreliable)
[c0e9dcd0] [c01c46a4] kasan_report+0xfc/0x180
[c0e9dd10] [c0895150] memchr+0x24/0x74
[c0e9dd30] [c00a9e58] msg_print_text+0x124/0x574
[c0e9dde0] [c00ab730] console_unlock+0x114/0x4f8
[c0e9de40] [c00adc80] vprintk_emit+0x188/0x1c4
[c0e9de80] [c00ae3e4] printk+0xa8/0xcc
[c0e9df20] [c0c27e44] early_irq_init+0x38/0x108
[c0e9df50] [c0c15434] start_kernel+0x310/0x488
[c0e9dff0] [00003484] 0x3484
Cc: stable@vger.kernel.org
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
J. Bruce Fields [Tue, 5 Mar 2019 21:17:58 +0000 (16:17 -0500)]
security/selinux: fix SECURITY_LSM_NATIVE_LABELS on reused superblock
commit 3815a245b50124f0865415dcb606a034e97494d4 upstream.
In the case when we're reusing a superblock, selinux_sb_clone_mnt_opts()
fails to set set_kern_flags, with the result that
nfs_clone_sb_security() incorrectly clears NFS_CAP_SECURITY_LABEL.
The result is that if you mount the same NFS filesystem twice, NFS
security labels are turned off, even if they would work fine if you
mounted the filesystem only once.
("fixes" may be not exactly the right tag, it may be more like
"fixed-other-cases-but-missed-this-one".)
Cc: Scott Mayhew <smayhew@redhat.com>
Cc: stable@vger.kernel.org
Fixes: 0b4d3452b8b4 "security/selinux: allow security_sb_clone_mnt_opts..."
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Acked-by: Stephen Smalley <sds@tycho.nsa.gov>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Xin Long [Fri, 8 Mar 2019 16:07:34 +0000 (00:07 +0800)]
selinux: add the missing walk_size + len check in selinux_sctp_bind_connect
commit 292c997a1970f8d1e1dfa354ed770a22f7b5a434 upstream.
As is done in __sctp_connect(), when checking addrs in a while loop, after
getting the addr len according to sa_family, it's necessary to do the check
walk_size + af->sockaddr_len > addrs_size to make sure it won't access
an out-of-bounds addr.
The same thing is needed in selinux_sctp_bind_connect(), otherwise an
out-of-bounds issue can be triggered:
[14548.772313] BUG: KASAN: slab-out-of-bounds in selinux_sctp_bind_connect+0x1aa/0x1f0
[14548.927083] Call Trace:
[14548.938072] dump_stack+0x9a/0xe9
[14548.953015] print_address_description+0x65/0x22e
[14548.996524] kasan_report.cold.6+0x92/0x1a6
[14549.015335] selinux_sctp_bind_connect+0x1aa/0x1f0
[14549.036947] security_sctp_bind_connect+0x58/0x90
[14549.058142] __sctp_setsockopt_connectx+0x5a/0x150 [sctp]
[14549.081650] sctp_setsockopt.part.24+0x1322/0x3ce0 [sctp]
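A hedged sketch of the added check inside the address walk of
selinux_sctp_bind_connect(); 'len' stands for the per-family sockaddr
length already computed in the loop:

    while (walk_size < addrs_size) {
            addr = addr_buf;
            /* len = sizeof(struct sockaddr_in) or sockaddr_in6, per sa_family */
            if (walk_size + len > addrs_size)
                    return -EINVAL;         /* would read past the buffer */
            /* ... existing per-address bind/connect permission check ... */
            addr_buf  += len;
            walk_size += len;
    }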
Cc: stable@vger.kernel.org
Fixes: d452930fd3b9 ("selinux: Add SCTP support")
Reported-by: Chunyu Hu <chuhu@redhat.com>
Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: Paul Moore <paul@paul-moore.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
zhangyi (F) [Thu, 21 Feb 2019 16:24:09 +0000 (11:24 -0500)]
jbd2: fix compile warning when using JBUFFER_TRACE
commit 01215d3edb0f384ddeaa5e4a22c1ae5ff634149f upstream.
The jh pointer may be used uninitialized in the two cases below, and the
compiler complains about it when the JBUFFER_TRACE macro is enabled; fix them.
In file included from fs/jbd2/transaction.c:19:0:
fs/jbd2/transaction.c: In function ‘jbd2_journal_get_undo_access’:
./include/linux/jbd2.h:1637:38: warning: ‘jh’ is used uninitialized in this function [-Wuninitialized]
#define JBUFFER_TRACE(jh, info) do { printk("%s: %d\n", __func__, jh->b_jcount);} while (0)
^
fs/jbd2/transaction.c:1219:23: note: ‘jh’ was declared here
struct journal_head *jh;
^
In file included from fs/jbd2/transaction.c:19:0:
fs/jbd2/transaction.c: In function ‘jbd2_journal_dirty_metadata’:
./include/linux/jbd2.h:1637:38: warning: ‘jh’ may be used uninitialized in this function [-Wmaybe-uninitialized]
#define JBUFFER_TRACE(jh, info) do { printk("%s: %d\n", __func__, jh->b_jcount);} while (0)
^
fs/jbd2/transaction.c:1332:23: note: ‘jh’ was declared here
struct journal_head *jh;
^
Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
zhangyi (F) [Mon, 11 Feb 2019 04:23:04 +0000 (23:23 -0500)]
jbd2: clear dirty flag when revoking a buffer from an older transaction
commit 904cdbd41d749a476863a0ca41f6f396774f26e4 upstream.
We have captured a data corruption problem on ext4 while truncating
an extent index block. Imagine that we are revoking a buffer which
has been journaled by the committing transaction; the buffer's jbddirty
flag will not be cleared in jbd2_journal_forget(), so the commit code
will set the buffer dirty flag again after refiling the buffer.
fsx                                  kjournald2
                                     jbd2_journal_commit_transaction
jbd2_journal_revoke                    commit phase 1~5...
  jbd2_journal_forget
    belongs to older transaction       commit phase 6
    jbddirty not clear                   __jbd2_journal_refile_buffer
                                           __jbd2_journal_unfile_buffer
                                             test_clear_buffer_jbddirty
                                         mark_buffer_dirty
Finally, if the freed extent index block was allocated again as data
block by some other files, it may corrupt the file data after writing
cached pages later, such as during unmount time. (In general,
clean_bdev_aliases() related helpers should be invoked after
re-allocation to prevent the above corruption, but unfortunately we
missed it when zeroout the head of extra extent blocks in
ext4_ext_handle_unwritten_extents()).
This patch marks the buffer as freed and sets j_next_transaction to the new
transaction when it already belongs to the committing transaction in
jbd2_journal_forget(), so that the commit code knows it should clear dirty
bits when it is done with the buffer.
This problem can be reproduced by xfstests generic/455 easily with
seeds (3246 3247 3248 3249).
Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jay Dolan [Wed, 13 Feb 2019 05:43:12 +0000 (21:43 -0800)]
serial: 8250_pci: Have ACCES cards that use the four port Pericom PI7C9X7954 chip use the pci_pericom_setup()
commit 78d3820b9bd39028727c6aab7297b63c093db343 upstream.
The four port Pericom chips have the fourth port at the wrong address.
Make use of a quirk to fix it.
Fixes: c8d192428f52 ("serial: 8250: added acces i/o products quad and octal serial cards")
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Jay Dolan <jay.dolan@accesio.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jay Dolan [Wed, 13 Feb 2019 05:43:11 +0000 (21:43 -0800)]
serial: 8250_pci: Fix number of ports for ACCES serial cards
commit b896b03bc7fce43a07012cc6bf5e2ab2fddf3364 upstream.
Have the correct number of ports created for ACCES serial cards. Two port
cards show up as four ports, and four port cards show up as eight.
Fixes: c8d192428f52 ("serial: 8250: added acces i/o products quad and octal serial cards")
Signed-off-by: Jay Dolan <jay.dolan@accesio.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lubomir Rintel [Sun, 24 Feb 2019 12:00:53 +0000 (13:00 +0100)]
serial: 8250_of: assume reg-shift of 2 for mrvl,mmp-uart
commit f4817843e39ce78aace0195a57d4e8500a65a898 upstream.
There are two other drivers that bind to mrvl,mmp-uart and both of them
assume register shift of 2 bits. There are device trees that lack the
property and rely on that assumption.
If this driver wins the race to bind to those devices, it should behave
the same as the older deprecated driver.
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Anssi Hannula [Fri, 15 Feb 2019 16:45:08 +0000 (18:45 +0200)]
serial: uartps: Fix stuck ISR if RX disabled with non-empty FIFO
commit 7abab1605139bc41442864c18f9573440f7ca105 upstream.
If RX is disabled while there are still unprocessed bytes in RX FIFO,
cdns_uart_handle_rx() called from interrupt handler will get stuck in
the receive loop as read bytes will not get removed from the RX FIFO
and CDNS_UART_SR_RXEMPTY bit will never get set.
Avoid the stuck handler by checking first if RX is disabled. port->lock
protects against race with RX-disabling functions.
This HW behavior was mentioned by Nathan Rossi in 43e98facc4a3 ("tty:
xuartps: Fix RX hang, and TX corruption in termios call") which fixed a
similar issue in cdns_uart_set_termios().
The behavior can also be easily verified by e.g. setting
CDNS_UART_CR_RX_DIS at the beginning of cdns_uart_handle_rx() - the
following loop will then get stuck.
Resetting the FIFO using RXRST would not set RXEMPTY either so simply
issuing a reset after RX-disable would not work.
I observe this frequently on a ZynqMP board during heavy RX load at 1M
baudrate when the reader process exits and thus RX gets disabled.
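Sketch of the guard at the top of cdns_uart_handle_rx(); the register and
bit names are the ones quoted above, the accessor is illustrative:

    if (readl(port->membase + CDNS_UART_CR) & CDNS_UART_CR_RX_DIS)
            return;   /* RX disabled: the FIFO will never report empty */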
Fixes: 61ec9016988f ("tty/serial: add support for Xilinx PS UART")
Signed-off-by: Anssi Hannula <anssi.hannula@bitwise.fi>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Willem de Bruijn [Wed, 6 Mar 2019 19:35:15 +0000 (14:35 -0500)]
bpf: only test gso type on gso packets
[ Upstream commit 4c3024debf62de4c6ac6d3cb4c0063be21d4f652 ]
BPF can adjust gso only for tcp bytestreams. Fail on other gso types.
But only on gso packets. It does not touch this field if !gso_size.
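Roughly, the check becomes the following (a sketch, not the exact diff):

    if (skb_is_gso(skb) && !skb_is_gso_tcp(skb))
            return -ENOTSUPP;   /* only reject non-TCP types for real GSO skbs */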
Fixes: b90efd225874 ("bpf: only adjust gso_size on bytestream protocols")
Signed-off-by: Willem de Bruijn <willemb@google.com>
Acked-by: Yonghong Song <yhs@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Tvrtko Ursulin [Tue, 5 Mar 2019 11:04:08 +0000 (11:04 +0000)]
drm/i915: Relax mmap VMA check
[ Upstream commit ca22f32a6296cbfa29de56328c8505560a18cfa8 ]
Legacy behaviour was to allow non-page-aligned mmap requests, as does the
linux mmap(2) implementation by virtue of automatically rounding up for
the caller.
To avoid breaking legacy userspace relax the newly introduced fix.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: 5c4604e757ba ("drm/i915: Prevent a race during I915_GEM_MMAP ioctl with WC set")
Reported-by: Guenter Roeck <linux@roeck-us.net>
Cc: Adam Zabrocki <adamza@microsoft.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: <stable@vger.kernel.org> # v4.0+
Cc: Akash Goel <akash.goel@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: intel-gfx@lists.freedesktop.org
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190305110409.28633-1-tvrtko.ursulin@linux.intel.com
(cherry picked from commit a90e1948efb648f567444f87f3c19b2a0787affd)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Marc Kleine-Budde [Wed, 28 Nov 2018 14:31:37 +0000 (15:31 +0100)]
can: flexcan: FLEXCAN_IFLAG_MB: add () around macro argument
[ Upstream commit 22233f7bf2c99ef52ec19d30876a12d2f725972e ]
This patch fixes the following checkpatch warning:
| Macro argument 'x' may be better as '(x)' to avoid precedence issues
Fixes: cbffaf7aa09e ("can: flexcan: Always use last mailbox for TX")
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Mark Walton [Thu, 28 Feb 2019 15:46:36 +0000 (15:46 +0000)]
gpio: pca953x: Fix dereference of irq data in shutdown
commit c378b3aa015931a46c91d6ccc2fe04d97801d060 upstream.
If a PCA953x gpio was used as an interrupt and then released,
the shutdown function was trying to extract the pca953x_chip
pointer directly from the irq_data, but in reality was getting
the gpio_chip structure.
The net effect was that the subsequent writes to the data
structure corrupted data in the gpio_chip structure, which wasn't
immediately obvious until attempting to use the GPIO again in the
future, at which point the kernel panics.
This fix correctly extracts the pca953x_chip structure via the
gpio_chip structure, as is correctly done in the other irq
functions.
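Sketch of the corrected lookup in the irq_shutdown callback, going through
the gpio_chip as the other irq callbacks do:

    static void pca953x_irq_shutdown(struct irq_data *d)
    {
            struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
            struct pca953x_chip *chip = gpiochip_get_data(gc);

            /* ... clear the stored trigger type for this irq ... */
    }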
Fixes: 0a70fe00efea ("gpio: pca953x: Clear irq trigger type on irq shutdown")
Cc: stable@vger.kernel.org
Signed-off-by: Mark Walton <mark.walton@serialtek.com>
Reviewed-by: Bartosz Golaszewski <bgolaszewski@baylibre.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Loic Poulain [Wed, 30 Jan 2019 16:48:07 +0000 (11:48 -0500)]
media: i2c: ov5640: Fix post-reset delay
commit 1d4c41f3d887bcd66e82cb2fda124533dad8808a upstream.
According to the ov5640 specification (2.7 power up sequence), the host
can access the sensor's registers 20 ms after reset. Trying to access them
before that leads to undefined behavior and results in sporadic
initialization errors.
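Illustrative delay after releasing reset; the 20 ms floor comes from the
specification reference above, while the helper calls and exact range are
assumptions:

    gpiod_set_value_cansleep(sensor->reset_gpio, 0);   /* de-assert reset */
    usleep_range(20000, 25000);                        /* >= 20 ms before register access */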
Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
Signed-off-by: Sakari Ailus <sakari.ailus@linux.intel.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Cc: Adam Ford <aford173@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sowjanya Komatineni [Tue, 12 Feb 2019 19:06:44 +0000 (11:06 -0800)]
i2c: tegra: fix maximum transfer size
commit f4e3f4ae1d9c9330de355f432b69952e8cef650c upstream.
Tegra186 and prior support a maximum of 4K bytes per packet transfer,
including 12 bytes of packet header.
This patch fixes the max write length limit to account for the packet
header size in transfers.
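Sketch of the adjusted adapter quirk, leaving room for the 12-byte header
(values illustrate the idea, not the exact driver tables):

    static const struct i2c_adapter_quirks tegra_i2c_quirks = {
            .max_read_len  = SZ_4K,
            .max_write_len = SZ_4K - 12,   /* payload limit minus packet header */
    };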
Cc: stable@vger.kernel.org # 4.4+
Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
Signed-off-by: Sowjanya Komatineni <skomatineni@nvidia.com>
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
QiaoChong [Sat, 9 Feb 2019 20:59:07 +0000 (20:59 +0000)]
parport_pc: fix find_superio io compare code, should use equal test.
commit 21698fd57984cd28207d841dbdaa026d6061bceb upstream.
In the original code before 181bf1e815a2 the loop continued until
it found the first matching superios[i].io and p->base.
But after 181bf1e815a2 the logic changed and the loop now returns the
pointer to the first mismatched array element, which is then used in
get_superio_dma() and get_superio_irq() and thus returns the wrong
value.
Fix the condition so that it now returns the correct pointer.
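Sketch of the corrected loop (array and field names as in the commit text):

    for (i = 0; i < NR_SUPERIOS; i++)
            if (superios[i].io == p->base)   /* equal test, not mismatch */
                    return &superios[i];
    return NULL;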
Fixes: 181bf1e815a2 ("parport_pc: clean up the modified while loops using for")
Cc: Alan Cox <alan@linux.intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: QiaoChong <qiaochong@loongson.cn>
[rewrite the commit message]
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexander Shishkin [Thu, 24 Jan 2019 13:11:53 +0000 (15:11 +0200)]
intel_th: Don't reference unassigned outputs
commit 9ed3f22223c33347ed963e7c7019cf2956dd4e37 upstream.
When an output port driver is removed, also remove references to it from
any masters. Failing to do this causes a NULL ptr dereference when
configuring another output port:
> BUG: unable to handle kernel NULL pointer dereference at 000000000000000d
> RIP: 0010:master_attr_store+0x9d/0x160 [intel_th_gth]
> Call Trace:
> dev_attr_store+0x1b/0x30
> sysfs_kf_write+0x3c/0x50
> kernfs_fop_write+0x125/0x1a0
> __vfs_write+0x3a/0x190
> ? __vfs_write+0x5/0x190
> ? _cond_resched+0x1a/0x50
> ? rcu_all_qs+0x5/0xb0
> ? __vfs_write+0x5/0x190
> vfs_write+0xb8/0x1b0
> ksys_write+0x55/0xc0
> __x64_sys_write+0x1a/0x20
> do_syscall_64+0x5a/0x140
> entry_SYSCALL_64_after_hwframe+0x44/0xa9
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Fixes: b27a6a3f97b9 ("intel_th: Add Global Trace Hub driver")
CC: stable@vger.kernel.org # v4.4+
Reported-by: Ammy Yi <ammy.yi@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Heikki Krogerus [Wed, 23 Jan 2019 14:44:16 +0000 (17:44 +0300)]
device property: Fix the length used in PROPERTY_ENTRY_STRING()
commit 2b6e492467c78183bb629bb0a100ea3509b615a5 upstream.
With string type property entries we need to use
sizeof(const char *) instead of the number of characters as
the length of the entry.
If the string was shorter than sizeof(const char *),
attempts to read it would have failed with -EOVERFLOW. The
problem has been hidden because all built-in string
properties have had a string longer than 8 characters until
now.
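Simplified sketch of the corrected macro; the remaining initializers of
struct property_entry are elided here:

    #define PROPERTY_ENTRY_STRING(_name_, _val_)                    \
    (struct property_entry) {                                       \
            .name   = _name_,                                       \
            .length = sizeof(const char *),  /* pointer size, not string length */ \
            /* ... string type and value initializers ... */        \
    }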
Fixes: a85f42047533 ("device property: helper macros for property entry creation")
Cc: 4.5+ <stable@vger.kernel.org> # 4.5+
Signed-off-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Zev Weiss [Tue, 12 Mar 2019 06:28:02 +0000 (23:28 -0700)]
kernel/sysctl.c: add missing range check in do_proc_dointvec_minmax_conv
commit 8cf7630b29701d364f8df4a50e4f1f5e752b2778 upstream.
This bug has apparently existed since the introduction of this function
in the pre-git era (4500e91754d3 in Thomas Gleixner's history.git,
"[NET]: Add proc_dointvec_userhz_jiffies, use it for proper handling of
neighbour sysctls.").
As a minimal fix we can simply duplicate the corresponding check in
do_proc_dointvec_conv().
Link: http://lkml.kernel.org/r/20190207123426.9202-3-zev@bewilderbeest.net
Signed-off-by: Zev Weiss <zev@bewilderbeest.net>
Cc: Brendan Higgins <brendanhiggins@google.com>
Cc: Iurii Zaikin <yzaikin@google.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Luis Chamberlain <mcgrof@kernel.org>
Cc: <stable@vger.kernel.org> [2.6.2+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Stancek [Tue, 5 Mar 2019 23:50:08 +0000 (15:50 -0800)]
mm/memory.c: do_fault: avoid usage of stale vm_area_struct
commit fc8efd2ddfed3f343c11b693e87140ff358d7ff5 upstream.
LTP testcase mtest06 [1] can trigger a crash on s390x running 5.0.0-rc8.
This is a stress test, where one thread mmaps/writes/munmaps a memory area
and another thread is trying to read from it:
CPU: 0 PID: 2611 Comm: mmap1 Not tainted 5.0.0-rc8+ #51
Hardware name: IBM 2964 N63 400 (z/VM 6.4.0)
Krnl PSW : 0404e00180000000 00000000001ac8d8 (__lock_acquire+0x7/0x7a8)
Call Trace:
([<0000000000000000>] (null))
 [<00000000001adae4>] lock_acquire+0xec/0x258
 [<000000000080d1ac>] _raw_spin_lock_bh+0x5c/0x98
 [<000000000012a780>] page_table_free+0x48/0x1a8
 [<00000000002f6e54>] do_fault+0xdc/0x670
 [<00000000002fadae>] __handle_mm_fault+0x416/0x5f0
 [<00000000002fb138>] handle_mm_fault+0x1b0/0x320
 [<00000000001248cc>] do_dat_exception+0x19c/0x2c8
 [<000000000080e5ee>] pgm_check_handler+0x19e/0x200
page_table_free() is called with NULL mm parameter, but because "0" is a
valid address on s390 (see S390_lowcore), it keeps going until it
eventually crashes in lockdep's lock_acquire. This crash is
reproducible at least since 4.14.
Problem is that "vmf->vma" used in do_fault() can become stale. Because
mmap_sem may be released, other threads can come in, call munmap() and
cause "vma" be returned to kmem cache, and get zeroed/re-initialized and
re-used:
handle_mm_fault                   |
  __handle_mm_fault               |
    do_fault                      |
      vma = vmf->vma              |
      do_read_fault               |
        __do_fault                |
          vma->vm_ops->fault(vmf);|
          mmap_sem is released    |
                                  |
                                  | do_munmap()
                                  |   remove_vma_list()
                                  |     remove_vma()
                                  |       vm_area_free()
                                  |         # vma is released
                                  | ...
                                  | # same vma is allocated
                                  | # from kmem cache
                                  | do_mmap()
                                  |   vm_area_alloc()
                                  |     memset(vma, 0, ...)
                                  |
      pte_free(vma->vm_mm, ...);  |
        page_table_free           |
          spin_lock_bh(&mm->context.lock);
      <crash>                     |
Cache mm_struct to avoid using potentially stale "vma".
[1] https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/mem/mtest06/mmap1.c
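Hedged sketch of the fix in do_fault(): cache the mm_struct up front and
use it after the point where mmap_sem may have been dropped; details
beyond what the commit text states are assumptions:

    static vm_fault_t do_fault(struct vm_fault *vmf)
    {
            struct vm_area_struct *vma = vmf->vma;
            struct mm_struct *vm_mm = vma->vm_mm;   /* vma may become stale below */
            vm_fault_t ret;

            /* ... do_read_fault()/do_cow_fault()/do_shared_fault() sets ret ... */

            if (vmf->prealloc_pte) {
                    pte_free(vm_mm, vmf->prealloc_pte);  /* not vma->vm_mm */
                    vmf->prealloc_pte = NULL;
            }
            return ret;
    }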
Link: http://lkml.kernel.org/r/5b3fdf19e2a5be460a384b936f5b56e13733f1b8.1551595137.git.jstancek@redhat.com
Signed-off-by: Jan Stancek <jstancek@redhat.com>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Reviewed-by: Matthew Wilcox <willy@infradead.org>
Acked-by: Rafael Aquini <aquini@redhat.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Souptick Joarder <jrdr.linux@gmail.com>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Roman Penyaev [Tue, 5 Mar 2019 23:43:20 +0000 (15:43 -0800)]
mm/vmalloc: fix size check for remap_vmalloc_range_partial()
commit 401592d2e095947344e10ec0623adbcd58934dd4 upstream.
When VM_NO_GUARD is not set, area->size includes the adjacent guard page,
thus for correct size checking get_vm_area_size() should be used, and
not area->size.
This fixes possible kernel oops when userspace tries to mmap an area on
1 page bigger than was allocated by vmalloc_user() call: the size check
inside remap_vmalloc_range_partial() accounts non-existing guard page
also, so check successfully passes but vmalloc_to_page() returns NULL
(guard page does not physically exist).
The following code pattern example should trigger an oops:
static int oops_mmap(struct file *file, struct vm_area_struct *vma)
{
        void *mem;

        mem = vmalloc_user(4096);
        BUG_ON(!mem);
        /* Do not care about mem leak */

        return remap_vmalloc_range(vma, mem, 0);
}
And userspace simply mmaps size + PAGE_SIZE:
mmap(NULL, 8192, PROT_WRITE|PROT_READ, MAP_PRIVATE, fd, 0);
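Sketch of the corrected bound in remap_vmalloc_range_partial(), using
get_vm_area_size() so the guard page is not counted:

    area = find_vm_area(kaddr);
    if (!area)
            return -EINVAL;
    if (kaddr + size > area->addr + get_vm_area_size(area))
            return -EINVAL;   /* request would spill into the guard page */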
Possible candidates for oops which do not have any explicit size
checks:
*** drivers/media/usb/stkwebcam/stk-webcam.c:
v4l_stk_mmap[789] ret = remap_vmalloc_range(vma, sbuf->buffer, 0);
Or the following one:
*** drivers/video/fbdev/core/fbmem.c
static int
fb_mmap(struct file *file, struct vm_area_struct * vma)
...
res = fb->fb_mmap(info, vma);
Where fb_mmap callback calls remap_vmalloc_range() directly without any
explicit checks:
*** drivers/video/fbdev/vfb.c
static int vfb_mmap(struct fb_info *info,
                    struct vm_area_struct *vma)
{
        return remap_vmalloc_range(vma, (void *)info->fix.smem_start, vma->vm_pgoff);
}
Link: http://lkml.kernel.org/r/20190103145954.16942-2-rpenyaev@suse.de
Signed-off-by: Roman Penyaev <rpenyaev@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Joe Perches <joe@perches.com>
Cc: "Luis R. Rodriguez" <mcgrof@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
zhongjiang [Tue, 5 Mar 2019 23:41:16 +0000 (15:41 -0800)]
mm: hwpoison: fix thp split handing in soft_offline_in_use_page()
commit 46612b751c4941c5c0472ddf04027e877ae5990f upstream.
When soft_offline_in_use_page() runs on a thp tail page after pmd is
split, we trigger the following VM_BUG_ON_PAGE():
Memory failure: 0x3755ff: non anonymous thp
__get_any_page: 0x3755ff: unknown zero refcount page type 2fffff80000000
Soft offlining pfn 0x34d805 at process virtual address 0x20fff000
page:ffffea000d360140 count:0 mapcount:0 mapping:0000000000000000 index:0x1
flags: 0x2fffff80000000()
raw: 002fffff80000000 ffffea000d360108 ffffea000d360188 0000000000000000
raw: 0000000000000001 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
------------[ cut here ]------------
kernel BUG at ./include/linux/mm.h:519!
soft_offline_in_use_page() passed refcount and page lock from tail page
to head page, which is not needed because we can pass any subpage to
split_huge_page().
Naoya had fixed a similar issue in c3901e722b29 ("mm: hwpoison: fix thp
split handling in memory_failure()"). But he missed fixing soft
offline.
Link: http://lkml.kernel.org/r/1551452476-24000-1-git-send-email-zhongjiang@huawei.com
Fixes: 61f5d698cc97 ("mm: re-enable THP")
Signed-off-by: zhongjiang <zhongjiang@huawei.com>
Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Kirill A. Shutemov <kirill@shutemov.name>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: <stable@vger.kernel.org> [4.5+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Phuong Nguyen [Thu, 17 Jan 2019 08:44:17 +0000 (17:44 +0900)]
dmaengine: usb-dmac: Make DMAC system sleep callbacks explicit
commit d9140a0da4a230a03426d175145989667758aa6a upstream.
This commit fixes the issue that USB-DMAC hangs silently after system
resumes on R-Car Gen3 hence renesas_usbhs will not work correctly
when using USB-DMAC for bulk transfer e.g. ethernet or serial
gadgets.
The issue can be reproduced by these steps:
1. modprobe g_serial
2. Suspend and resume system.
3. connect a usb cable to host side
4. Transfer data from Host to Target
5. cat /dev/ttyGS0 (Target side)
6. echo "test" > /dev/ttyACM0 (Host side)
The 'cat' will not show anything. However, the system still works
normally.
Currently, USB-DMAC driver does not have system sleep callbacks hence
this driver relies on the PM core to force runtime suspend/resume to
suspend and reinitialize USB-DMAC during system resume. After
the commit 17218e0092f8 ("PM / genpd: Stop/start devices without
pm_runtime_force_suspend/resume()"), PM core will not force
runtime suspend/resume anymore so this issue happens.
To solve this, make system suspend/resume explicit by using
pm_runtime_force_{suspend,resume}() as the system sleep callbacks.
SET_NOIRQ_SYSTEM_SLEEP_PM_OPS() is used to make sure USB-DMAC is
suspended after and initialized before renesas_usbhs.
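Sketch of the dev_pm_ops wiring this describes; the runtime callback names
are assumed to be the driver's existing ones:

    static const struct dev_pm_ops usb_dmac_pm = {
            SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                          pm_runtime_force_resume)
            SET_RUNTIME_PM_OPS(usb_dmac_runtime_suspend,
                               usb_dmac_runtime_resume, NULL)
    };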
Signed-off-by: Phuong Nguyen <phuong.nguyen.xw@renesas.com>
Signed-off-by: Hiroyuki Yokoyama <hiroyuki.yokoyama.vx@renesas.com>
Cc: <stable@vger.kernel.org> # v4.16+
[shimoda: revise the commit log and add Cc tag]
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Vinod Koul <vkoul@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nikolaus Voss [Wed, 20 Feb 2019 15:11:38 +0000 (16:11 +0100)]
usb: typec: tps6598x: handle block writes separately with plain-I2C adapters
commit 8a863a608d47fa5d9dd15cf841817f73f804cf91 upstream.
Commit 1a2f474d328f handles block _reads_ separately with plain-I2C
adapters, but the problem described with regmap-i2c not handling
SMBus block transfers (i.e. reads and writes) correctly also exists
with writes.
As a workaround, this patch adds a block write function the same way
1a2f474d328f adds a block read function.
Fixes: 1a2f474d328f ("usb: typec: tps6598x: handle block reads separately with plain-I2C adapters")
Fixes: 0a4c005bd171 ("usb: typec: driver for TI TPS6598x USB Power Delivery controllers")
Signed-off-by: Nikolaus Voss <nikolaus.voss@loewensteinmedical.de>
Cc: stable <stable@vger.kernel.org>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Acked-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dmitry Osipenko [Sun, 24 Feb 2019 15:36:22 +0000 (18:36 +0300)]
usb: chipidea: tegra: Fix missed ci_hdrc_remove_device()
commit 563b9372f7ec57e44e8f9a8600c5107d7ffdd166 upstream.
The ChipIdea platform device needs to be unregistered on Tegra driver
module removal.
Fixes: dfebb5f43a78827a ("usb: chipidea: Add support for Tegra20/30/114/124")
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Peter Chen <peter.chen@nxp.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Cercueil [Mon, 28 Jan 2019 02:09:21 +0000 (23:09 -0300)]
clk: ingenic: Fix doc of ingenic_cgu_div_info
commit 7ca4c922aad2e3c46767a12f80d01c6b25337b59 upstream.
The 'div' field does not represent a number of bits used to divide
(understand: right-shift) the divider, but a number itself used to
divide the divider.
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Signed-off-by: Maarten ter Huurne <maarten@treewalker.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Cercueil [Mon, 28 Jan 2019 02:09:20 +0000 (23:09 -0300)]
clk: ingenic: Fix round_rate misbehaving with non-integer dividers
commit bc5d922c93491878c44c9216e9d227c7eeb81d7f upstream.
Take a parent rate of 180 MHz, and a requested rate of 4.285715 MHz.
This results in a theoretical divider of 41.999993 which is then rounded
up to 42. The .round_rate function would then return (180 MHz / 42) as
the clock, rounded down, so 4.285714 MHz.
Calling clk_set_rate on 4.285714 MHz would round the rate again, and
give a theoretical divider of 42.0000028, now rounded up to 43, and the
rate returned would be (180 MHz / 43) which is 4.186046 MHz, aka. not
what we requested.
Fix this by rounding up the divisions.
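The rounding change in a nutshell (names illustrative): both divisions
round up, so the rate reported by round_rate() maps back to the same
divider:

    div  = DIV_ROUND_UP(parent_rate, req_rate);   /* 180 MHz / 4285715 Hz -> 42 */
    rate = DIV_ROUND_UP(parent_rate, div);        /* reported rate: 4285715 Hz  */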
Signed-off-by: Paul Cercueil <paul@crapouillou.net>
Tested-by: Maarten ter Huurne <maarten@treewalker.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Krzysztof Kozlowski [Thu, 21 Feb 2019 11:45:52 +0000 (12:45 +0100)]
clk: samsung: exynos5: Fix kfree() of const memory on setting driver_override
commit 785c9f411eb2d9a6076d3511c631587d5e676bf3 upstream.
Platform driver driver_override field should not be initialized from
const memory because the core later kfree() it. If driver_override is
manually set later through sysfs, kfree() of old value leads to:
$ echo "new_value" > /sys/bus/platform/drivers/.../driver_override
kernel BUG at ../mm/slub.c:3960!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM
...
(kfree) from [<c058e8c0>] (platform_set_driver_override+0x84/0xac)
(platform_set_driver_override) from [<c058e908>] (driver_override_store+0x20/0x34)
(driver_override_store) from [<c031f778>] (kernfs_fop_write+0x100/0x1dc)
(kernfs_fop_write) from [<c0296de8>] (__vfs_write+0x2c/0x17c)
(__vfs_write) from [<c02970c4>] (vfs_write+0xa4/0x188)
(vfs_write) from [<c02972e8>] (ksys_write+0x4c/0xac)
(ksys_write) from [<c0101000>] (ret_fast_syscall+0x0/0x28)
The clk-exynos5-subcmu driver uses override only for the purpose of
creating meaningful names for children devices (matching names of power
domains, e.g. DISP, MFC). The driver_override was not developed for
this purpose so just switch to default names of devices to fix the
issue.
Fixes: b06a532bf1fa ("clk: samsung: Add Exynos5 sub-CMU clock driver")
Cc: <stable@vger.kernel.org>
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Krzysztof Kozlowski [Thu, 21 Feb 2019 11:45:51 +0000 (12:45 +0100)]
clk: samsung: exynos5: Fix possible NULL pointer exception on platform_device_alloc() failure
commit 5f0b6216ea381b43c0dff88702d6cc5673d63922 upstream.
During initialization of subdevices, if platform_device_alloc() fails,
the returned NULL pointer will later be dereferenced. Add proper error
paths to exynos5_clk_register_subcmu(). The return value of this
function is still ignored because at this stage of init there is nothing
we can do.
Fixes: b06a532bf1fa ("clk: samsung: Add Exynos5 sub-CMU clock driver")
Cc: <stable@vger.kernel.org>
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tony Lindgren [Mon, 11 Feb 2019 22:59:07 +0000 (14:59 -0800)]
clk: clk-twl6040: Fix imprecise external abort for pdmclk
commit 5ae51d67aec95f6f9386aa8dd5db424964895575 upstream.
I noticed that modprobe clk-twl6040 can fail after a cold boot with:
abe_cm:clk:0010:0: failed to enable
...
Unhandled fault: imprecise external abort (0x1406) at 0xbe896b20
WARNING: CPU: 1 PID: 29 at drivers/clk/clk.c:828 clk_core_disable_lock+0x18/0x24
...
(clk_core_disable_lock) from [<c0123534>] (_disable_clocks+0x18/0x90)
(_disable_clocks) from [<c0124040>] (_idle+0x17c/0x244)
(_idle) from [<c0125ad4>] (omap_hwmod_idle+0x24/0x44)
(omap_hwmod_idle) from [<c053a038>] (sysc_runtime_suspend+0x48/0x108)
(sysc_runtime_suspend) from [<c06084c4>] (__rpm_callback+0x144/0x1d8)
(__rpm_callback) from [<c0608578>] (rpm_callback+0x20/0x80)
(rpm_callback) from [<c0607034>] (rpm_suspend+0x120/0x694)
(rpm_suspend) from [<c0607a78>] (__pm_runtime_idle+0x60/0x84)
(__pm_runtime_idle) from [<c053aaf0>] (sysc_probe+0x874/0xf2c)
(sysc_probe) from [<c05fecd4>] (platform_drv_probe+0x48/0x98)
After searching around for a similar issue, I came across an earlier fix
that never got merged upstream in the Android tree for glass-omap-xrr02.
There is patch "MFD: twl6040-codec: Implement PDMCLK cold temp errata"
by Misael Lopez Cruz <misael.lopez@ti.com>.
Based on my observations, this fix is also needed when cold booting
devices, and not just for deeper idle modes. Since we now have a clock
driver for pdmclk, let's fix the issue in twl6040_pdmclk_prepare().
Cc: Misael Lopez Cruz <misael.lopez@ti.com>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kunihiko Hayashi [Fri, 8 Feb 2019 02:25:23 +0000 (11:25 +0900)]
clk: uniphier: Fix update register for CPU-gear
commit
521282237b9d78b9bff423ec818becd4c95841c2 upstream.
Need to set the update bit in UNIPHIER_CLK_CPUGEAR_UPD to update
the CPU-gear value.
Fixes:
d08f1f0d596c ("clk: uniphier: add CPU-gear change (cpufreq) support")
Cc: linux-stable@vger.kernel.org
Signed-off-by: Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
Acked-by: Masahiro Yamada <yamada.masahiro@socionext.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Tue, 29 Jan 2019 16:17:24 +0000 (17:17 +0100)]
ext2: Fix underflow in ext2_max_size()
commit
1c2d14212b15a60300a2d4f6364753e87394c521 upstream.
When an ext2 filesystem is created with a 64k block size, ext2_max_size()
will return a value less than 0. Also, we cannot write any file in this fs
since sb->maxbytes is less than 0. The core of the problem is that
the size of the block index tree for such a large block size is more than
i_blocks can carry. So fix the computation to account for this
possibility.
File size limits computed with the new function for the full range of
possible block sizes look like:
bits    file_size
10      17247252480
11      275415851008
12      2196873666560
13      2197948973056
14      2198486220800
15      2198754754560
16      2198888906752
CC: stable@vger.kernel.org
Reported-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vaibhav Jain [Tue, 29 Jan 2019 11:06:18 +0000 (16:36 +0530)]
cxl: Wrap iterations over afu slices inside 'afu_list_lock'
commit
edeb304f659792fb5bab90d7d6f3408b4c7301fb upstream.
Within the cxl module, iteration over the array 'adapter->afu' may be racy
at a few points, as it might be read during an EEH at the same time its
contents are being set to NULL while the driver is being unloaded or
unbound from the adapter. This might result in a NULL pointer to
'struct afu' being dereferenced during an EEH, thereby causing a kernel oops.
This patch fixes this by making sure that all access to the array
'adapter->afu' is wrapped within the context of the spin-lock
'adapter->afu_list_lock'.
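The pattern being described is roughly the following (an illustrative sketch;
the real patch applies it to several EEH callbacks):

    #include <linux/spinlock.h>

    int i;

    /* Take afu_list_lock around any walk of adapter->afu so the EEH path
     * cannot observe a slice while it is being torn down concurrently.
     */
    spin_lock(&adapter->afu_list_lock);
    for (i = 0; i < adapter->slices; i++) {
        struct cxl_afu *afu = adapter->afu[i];

        if (!afu)   /* slice already removed by unbind/unload */
            continue;

        /* ... EEH handling for this AFU ... */
    }
    spin_unlock(&adapter->afu_list_lock);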
Fixes:
9e8df8a21963 ("cxl: EEH support")
Cc: stable@vger.kernel.org # v4.3+
Acked-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Frederic Barrat <fbarrat@linux.ibm.com>
Acked-by: Christophe Lombard <clombard@linux.vnet.ibm.com>
Signed-off-by: Vaibhav Jain <vaibhav@linux.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael J. Ruhl [Tue, 26 Feb 2019 16:45:35 +0000 (08:45 -0800)]
IB/hfi1: Close race condition on user context disable and close
commit
bc5add09764c123f58942a37c8335247e683d234 upstream.
When disabling and removing a receive context, it is possible for an
asynchronous event (i.e IRQ) to occur. Because of this, there is a race
between cleaning up the context, and the context being used by the
asynchronous event.
cpu 0 (context cleanup)
    rc->ref_count-- (ref_count == 0)
    hfi1_rcd_free()
cpu 1 (IRQ (with rcd index))
    rcd_get_by_index()
    lock
    ref_count++ <-- reference count race (WARNING)
    return rcd
    unlock
cpu 0
    hfi1_free_ctxtdata() <-- incorrect free location
    lock
    remove rcd from array
    unlock
    free rcd
This race will cause the following WARNING trace:
WARNING: CPU: 0 PID: 175027 at include/linux/kref.h:52 hfi1_rcd_get_by_index+0x84/0xa0 [hfi1]
CPU: 0 PID: 175027 Comm: IMB-MPI1 Kdump: loaded Tainted: G OE ------------ 3.10.0-957.el7.x86_64 #1
Hardware name: Intel Corporation S2600KP/S2600KP, BIOS SE5C610.86B.11.01.0076.C4.111920150602 11/19/2015
Call Trace:
dump_stack+0x19/0x1b
__warn+0xd8/0x100
warn_slowpath_null+0x1d/0x20
hfi1_rcd_get_by_index+0x84/0xa0 [hfi1]
is_rcv_urgent_int+0x24/0x90 [hfi1]
general_interrupt+0x1b6/0x210 [hfi1]
__handle_irq_event_percpu+0x44/0x1c0
handle_irq_event_percpu+0x32/0x80
handle_irq_event+0x3c/0x60
handle_edge_irq+0x7f/0x150
handle_irq+0xe4/0x1a0
do_IRQ+0x4d/0xf0
common_interrupt+0x162/0x162
The race can also lead to a use after free which could be similar to:
general protection fault: 0000 1 SMP
CPU: 71 PID: 177147 Comm: IMB-MPI1 Kdump: loaded Tainted: G W OE ------------ 3.10.0-957.el7.x86_64 #1
Hardware name: Intel Corporation S2600KP/S2600KP, BIOS SE5C610.86B.11.01.0076.C4.111920150602 11/19/2015
task: ffff9962a8098000 ti: ffff99717a508000 task.ti: ffff99717a508000
__kmalloc+0x94/0x230
Call Trace:
? hfi1_user_sdma_process_request+0x9c8/0x1250 [hfi1]
hfi1_user_sdma_process_request+0x9c8/0x1250 [hfi1]
hfi1_aio_write+0xba/0x110 [hfi1]
do_sync_readv_writev+0x7b/0xd0
do_readv_writev+0xce/0x260
? handle_mm_fault+0x39d/0x9b0
? pick_next_task_fair+0x5f/0x1b0
? sched_clock_cpu+0x85/0xc0
? __schedule+0x13a/0x890
vfs_writev+0x35/0x60
SyS_writev+0x7f/0x110
system_call_fastpath+0x22/0x27
Use the appropriate kref API to verify access.
Reorder the context cleanup to ensure the context is removed from the
array before the cleanup occurs.
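"The appropriate kref API" here means taking a reference only while the count
is still non-zero, along these lines (a sketch with assumed field names, not
the full patch):

    #include <linux/kref.h>
    #include <linux/spinlock.h>

    struct hfi1_ctxtdata *rcd;
    unsigned long flags;

    /* Hand out an rcd only if its refcount has not already dropped to
     * zero; otherwise the cleanup path owns it and may free it.
     */
    spin_lock_irqsave(&dd->uctxt_lock, flags);
    rcd = dd->rcd[index];
    if (rcd && !kref_get_unless_zero(&rcd->kref))
        rcd = NULL;
    spin_unlock_irqrestore(&dd->uctxt_lock, flags);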
Cc: stable@vger.kernel.org # v4.14.0+
Fixes:
f683c80ca68e ("IB/hfi1: Resolve kernel panics by reference counting receive contexts")
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lucas Stach [Wed, 27 Feb 2019 16:52:19 +0000 (17:52 +0100)]
PCI: dwc: skip MSI init if MSIs have been explicitly disabled
commit
3afc8299f39a27b60e1519a28e18878ce878e7dd upstream.
Since
7c5925afbc58 (PCI: dwc: Move MSI IRQs allocation to IRQ domains
hierarchical API) the MSI init claims one of the controller IRQs as a
chained IRQ line for the MSI controller. On some designs, like the i.MX6,
this line is shared with a PCIe legacy IRQ. When the line is claimed for
the MSI domain, any device trying to use this legacy IRQ will fail to
request this IRQ line.
As MSI and legacy IRQs are already mutually exclusive on the DWC core,
as the core won't forward any legacy IRQs once any MSI has been enabled,
users wishing to use legacy IRQs already need to explicitly disable MSI
support (usually via the pci=nomsi kernel command line option). To avoid
any issues with MSI conflicting with legacy IRQs, just skip all of the
DWC MSI initialization, including the IRQ line claim, when MSI is disabled.
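Conceptually the change is just a guard around the MSI setup path, e.g.
(a sketch only; the actual patch covers both the domain setup and the chained
IRQ claim):

    #include <linux/pci.h>

    /* Skip all MSI setup, including claiming the shared IRQ line, when
     * MSIs are globally disabled (e.g. booted with pci=nomsi).
     */
    if (!pci_msi_enabled())
        return 0;

    /* ... existing MSI domain setup and chained IRQ claim ... */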
Fixes:
7c5925afbc58 ("PCI: dwc: Move MSI IRQs allocation to IRQ domains hierarchical API")
Tested-by: Tim Harvey <tharvey@gateworks.com>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Gustavo Pimentel <gustavo.pimentel@synopsys.com>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dongdong Liu [Mon, 11 Feb 2019 07:02:59 +0000 (15:02 +0800)]
PCI/DPC: Fix print AER status in DPC event handling
commit
9f08a5d896ce43380314c34ed3f264c8e6075b80 upstream.
Previously dpc_handler() called aer_get_device_error_info() without
initializing info->severity, so aer_get_device_error_info() relied on
uninitialized data.
Add dpc_get_aer_uncorrect_severity() to read the port's AER status, mask,
and severity registers and set info->severity.
Also, clear the port's AER fatal error status bits.
Fixes:
8aefa9b0d910 ("PCI/DPC: Print AER status in DPC event handling")
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
[bhelgaas: changelog]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bjorn Helgaas [Fri, 4 Jan 2019 23:59:07 +0000 (17:59 -0600)]
PCI/ASPM: Use LTR if already enabled by platform
commit
10ecc818ea7319b5d0d2b4e1aa6a77323e776f76 upstream.
RussianNeuroMancer reported that the Intel 7265 wifi on a Dell Venue 11 Pro
7140 tablet stopped working after wakeup from suspend and bisected the
problem to
9ab105deb60f ("PCI/ASPM: Disable ASPM L1.2 Substate if we don't
have LTR"). David Ward reported the same problem on a Dell Latitude 7350.
After
af8bb9f89838 ("PCI/ACPI: Request LTR control from platform before
using it"), we don't enable LTR unless the platform has granted LTR control
to us. In addition, we don't notice if the platform had already enabled
LTR itself.
After
9ab105deb60f ("PCI/ASPM: Disable ASPM L1.2 Substate if we don't have
LTR"), we avoid using LTR if we don't think the path to the device has LTR
enabled.
The combination means that if the platform itself enables LTR but declines
to give the OS control over LTR, we unnecessarily avoided using ASPM L1.2.
Link: https://bugzilla.kernel.org/show_bug.cgi?id=201469
Fixes:
9ab105deb60f ("PCI/ASPM: Disable ASPM L1.2 Substate if we don't have LTR")
Fixes:
af8bb9f89838 ("PCI/ACPI: Request LTR control from platform before using it")
Reported-by: RussianNeuroMancer <russianneuromancer@ya.ru>
Reported-by: David Ward <david.ward@ll.mit.edu>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: stable@vger.kernel.org # v4.18+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Mon, 11 Feb 2019 18:30:32 +0000 (13:30 -0500)]
ext4: fix crash during online resizing
commit
f96c3ac8dfc24b4e38fc4c2eba5fea2107b929d1 upstream.
When computing maximum size of filesystem possible with given number of
group descriptor blocks, we forget to include s_first_data_block into
the number of blocks. Thus for filesystems with non-zero
s_first_data_block it can happen that computed maximum filesystem size
is actually lower than current filesystem size which confuses the code
and eventually leads to a BUG_ON in ext4_alloc_group_tables() hitting on
flex_gd->count == 0. The problem can be reproduced like:
truncate -s 100g /tmp/image
mkfs.ext4 -b 1024 -E resize=262144 /tmp/image 32768
mount -t ext4 -o loop /tmp/image /mnt
resize2fs /dev/loop0 262145
resize2fs /dev/loop0 300000
Fix the problem by properly including s_first_data_block into the
computed number of filesystem blocks.
Fixes:
1c6bd7173d66 "ext4: convert file system to meta_bg if needed..."
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
yangerkun [Mon, 11 Feb 2019 05:35:06 +0000 (00:35 -0500)]
ext4: add mask of ext4 flags to swap
commit
abdc644e8cbac2e9b19763680e5a7cf9bab2bee7 upstream.
The reason is that while swapping two inodes, we swap the flags too.
Some flags such as EXT4_JOURNAL_DATA_FL can really confuse things
since we're not resetting the address operations structure. The
simplest way to keep things sane is to restrict the flags that can be
swapped.
Signed-off-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
yangerkun [Mon, 11 Feb 2019 05:14:02 +0000 (00:14 -0500)]
ext4: update quota information while swapping boot loader inode
commit
aa507b5faf38784defe49f5e64605ac3c4425e26 upstream.
While swapping two inodes, we swap i_data without updating the
quota information. Also, swap_inode_boot_loader can sometimes do a
"revert", so update the quota after all operations have finished.
Signed-off-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
yangerkun [Mon, 11 Feb 2019 05:05:24 +0000 (00:05 -0500)]
ext4: cleanup pagecache before swap i_data
commit
a46c68a318b08f819047843abf349aeee5d10ac2 upstream.
While doing the swap, we should make sure there are no new dirty pages,
since we swap i_data between the two inodes:
1. Lock i_mmap_sem for write to avoid new pagecache from mmap
read/write;
2. Change filemap_flush to filemap_write_and_wait and move them into the
region protected by the inode lock to avoid new pagecache from buffered
read/write.
Signed-off-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
yangerkun [Mon, 11 Feb 2019 05:02:05 +0000 (00:02 -0500)]
ext4: fix check of inode in swap_inode_boot_loader
commit
67a11611e1a5211f6569044fbf8150875764d1d0 upstream.
Before really doing the swap between the inode and the boot inode, some
conditions need to be checked to avoid invalid or not permitted
operations, for example whether this inode has inline data. These
condition checks should be protected by the inode lock to avoid changes
while swapping. Some other conditions will not change during the swap,
but there is no problem with checking them under the inode lock as well.
Signed-off-by: yangerkun <yangerkun@huawei.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Thu, 7 Mar 2019 10:22:41 +0000 (11:22 +0100)]
cpufreq: pxa2xx: remove incorrect __init annotation
commit
9505b98ccddc454008ca7efff90044e3e857c827 upstream.
pxa_cpufreq_init_voltages() is marked __init but usually inlined into
the non-__init pxa_cpufreq_init() function. When building with clang,
it can stay as a standalone function in a discarded section, and produce
this warning:
WARNING: vmlinux.o(.text+0x616a00): Section mismatch in reference from the function pxa_cpufreq_init() to the function .init.text:pxa_cpufreq_init_voltages()
The function pxa_cpufreq_init() references
the function __init pxa_cpufreq_init_voltages().
This is often because pxa_cpufreq_init lacks a __init
annotation or the annotation of pxa_cpufreq_init_voltages is wrong.
Fixes:
50e77fcd790e ("ARM: pxa: remove __init from cpufreq_driver->init()")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Reviewed-by: Nathan Chancellor <natechancellor@gmail.com>
Acked-by: Robert Jarzmik <robert.jarzmik@free.fr>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yangtao Li [Mon, 4 Feb 2019 07:48:54 +0000 (02:48 -0500)]
cpufreq: tegra124: add missing of_node_put()
commit
446fae2bb5395f3028d8e3aae1508737e5a72ea1 upstream.
of_cpu_device_node_get() will increase the refcount of device_node,
it is necessary to call of_node_put() at the end to release the
refcount.
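The usual pattern, sketched (the actual patch adds the put on the existing
exit paths of the probe function):

    #include <linux/of.h>
    #include <linux/of_device.h>

    struct device_node *np;

    np = of_cpu_device_node_get(0);   /* takes a reference */
    if (!np)
        return -ENODEV;

    /* ... use np, e.g. of_device_is_compatible(np, ...) ... */

    of_node_put(np);   /* release the reference taken above */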
Fixes:
9eb15dbbfa1a2 ("cpufreq: Add cpufreq driver for Tegra124")
Cc: <stable@vger.kernel.org> # 4.4+
Signed-off-by: Yangtao Li <tiny.windzz@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Viresh Kumar [Wed, 20 Feb 2019 11:11:18 +0000 (16:41 +0530)]
cpufreq: kryo: Release OPP tables on module removal
commit
0334906c06967142c8805fbe88acf787f65d3d26 upstream.
Commit
5ad7346b4ae2 ("cpufreq: kryo: Add module remove and exit") made
it possible to build the kryo cpufreq driver as a module, but it failed
to release all the resources, i.e. OPP tables, when the module is
unloaded.
This patch fixes it by releasing the OPP tables, by calling
dev_pm_opp_put_supported_hw() for them, from the
qcom_cpufreq_kryo_remove() routine. The array of pointers to the OPP
tables is also allocated dynamically now in qcom_cpufreq_kryo_probe(),
as the pointers will be required while releasing the resources.
Compile tested only.
Cc: 4.18+ <stable@vger.kernel.org> # v4.18+
Fixes:
5ad7346b4ae2 ("cpufreq: kryo: Add module remove and exit")
Reviewed-by: Georgi Djakov <georgi.djakov@linaro.org>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Masami Hiramatsu [Tue, 12 Feb 2019 16:11:19 +0000 (01:11 +0900)]
x86/kprobes: Prohibit probing on optprobe template code
commit
0192e6535ebe9af68614198ced4fd6d37b778ebf upstream.
Prohibit probing on the optprobe template code, since it is not
real code but a template instruction sequence. If we modified
this template, the copied template would be broken.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andrea Righi <righi.andrea@gmail.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Fixes:
9326638cbee2 ("kprobes, x86: Use NOKPROBE_SYMBOL() instead of __kprobes annotation")
Link: http://lkml.kernel.org/r/154998787911.31052.15274376330136234452.stgit@devbox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Doug Berger [Wed, 20 Feb 2019 22:15:28 +0000 (14:15 -0800)]
irqchip/brcmstb-l2: Use _irqsave locking variants in non-interrupt code
commit
33517881ede742107f416533b8c3e4abc56763da upstream.
Using the irq_gc_lock/irq_gc_unlock functions in the suspend and
resume functions creates the opportunity for a deadlock during
suspend, resume, and shutdown. Using the irq_gc_lock_irqsave/
irq_gc_unlock_irqrestore variants prevents this possible deadlock.
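In other words (a sketch of the pattern, not the whole patch):

    #include <linux/irq.h>

    unsigned long flags;

    /* irq_gc_lock() in suspend/resume can deadlock if an interrupt fires
     * and takes the same lock; the _irqsave variant keeps local
     * interrupts off while the generic-chip lock is held.
     */
    irq_gc_lock_irqsave(gc, flags);
    /* ... save or restore the interrupt controller registers ... */
    irq_gc_unlock_irqrestore(gc, flags);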
Cc: stable@vger.kernel.org
Fixes:
7f646e92766e2 ("irqchip: brcmstb-l2: Add Broadcom Set Top Box Level-2 interrupt controller")
Signed-off-by: Doug Berger <opendmb@gmail.com>
Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
[maz: tidied up $SUBJECT]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Zenghui Yu [Sun, 10 Feb 2019 05:24:10 +0000 (05:24 +0000)]
irqchip/gic-v3-its: Avoid parsing _indirect_ twice for Device table
commit
8d565748b6035eeda18895c213396a4c9fac6a4c upstream.
In the current logic, its_parse_indirect_baser() is invoked twice
when allocating Device tables. Add a *break* to skip the unnecessary
(and possibly annoying) second invocation.
Fixes:
32bd44dc19de ("irqchip/gic-v3-its: Fix the incorrect parsing of VCPU table size")
Cc: stable@vger.kernel.org
Signed-off-by: Zenghui Yu <yuzenghui@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lubomir Rintel [Sun, 10 Feb 2019 19:47:49 +0000 (20:47 +0100)]
libertas_tf: don't set URB_ZERO_PACKET on IN USB transfer
commit
607076a904c435f2677fadaadd4af546279db68b upstream.
It doesn't make sense and the USB core warns on each submit of such
URB, easily flooding the message buffer with tracebacks.
Analogous issue was fixed in regular libertas driver in commit
6528d8804780
("libertas: don't set URB_ZERO_PACKET on IN USB transfer").
Cc: stable@vger.kernel.org
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Reviewed-by: Steve deRosier <derosier@cal-sierra.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephen Boyd [Tue, 15 Jan 2019 22:54:47 +0000 (14:54 -0800)]
soc: qcom: rpmh: Avoid accessing freed memory from batch API
commit
baef1c90aac7e5bf13f0360a3b334825a23d31a1 upstream.
Using the batch API from the interconnect driver sometimes leads to a
KASAN error due to an access to freed memory. This is easier to trigger
with threadirqs on the kernel commandline.
BUG: KASAN: use-after-free in rpmh_tx_done+0x114/0x12c
Read of size 1 at addr fffffff51414ad84 by task irq/110-apps_rs/57
CPU: 0 PID: 57 Comm: irq/110-apps_rs Tainted: G W 4.19.10 #72
Call trace:
dump_backtrace+0x0/0x2f8
show_stack+0x20/0x2c
__dump_stack+0x20/0x28
dump_stack+0xcc/0x10c
print_address_description+0x74/0x240
kasan_report+0x250/0x26c
__asan_report_load1_noabort+0x20/0x2c
rpmh_tx_done+0x114/0x12c
tcs_tx_done+0x450/0x768
irq_forced_thread_fn+0x58/0x9c
irq_thread+0x120/0x1dc
kthread+0x248/0x260
ret_from_fork+0x10/0x18
Allocated by task 385:
kasan_kmalloc+0xac/0x148
__kmalloc+0x170/0x1e4
rpmh_write_batch+0x174/0x540
qcom_icc_set+0x8dc/0x9ac
icc_set+0x288/0x2e8
a6xx_gmu_stop+0x320/0x3c0
a6xx_pm_suspend+0x108/0x124
adreno_suspend+0x50/0x60
pm_generic_runtime_suspend+0x60/0x78
__rpm_callback+0x214/0x32c
rpm_callback+0x54/0x184
rpm_suspend+0x3f8/0xa90
pm_runtime_work+0xb4/0x178
process_one_work+0x544/0xbc0
worker_thread+0x514/0x7d0
kthread+0x248/0x260
ret_from_fork+0x10/0x18
Freed by task 385:
__kasan_slab_free+0x12c/0x1e0
kasan_slab_free+0x10/0x1c
kfree+0x134/0x588
rpmh_write_batch+0x49c/0x540
qcom_icc_set+0x8dc/0x9ac
icc_set+0x288/0x2e8
a6xx_gmu_stop+0x320/0x3c0
a6xx_pm_suspend+0x108/0x124
adreno_suspend+0x50/0x60
cr50_spi spi5.0: SPI transfer timed out
pm_generic_runtime_suspend+0x60/0x78
__rpm_callback+0x214/0x32c
rpm_callback+0x54/0x184
rpm_suspend+0x3f8/0xa90
pm_runtime_work+0xb4/0x178
process_one_work+0x544/0xbc0
worker_thread+0x514/0x7d0
kthread+0x248/0x260
ret_from_fork+0x10/0x18
The buggy address belongs to the object at fffffff51414ac80
which belongs to the cache kmalloc-512 of size 512
The buggy address is located 260 bytes inside of
512-byte region [fffffff51414ac80, fffffff51414ae80)
The buggy address belongs to the page:
page:ffffffbfd4505200 count:1 mapcount:0 mapping:fffffff51e00c680 index:0x0 compound_mapcount: 0
flags: 0x4000000000008100(slab|head)
raw: 4000000000008100 ffffffbfd4529008 ffffffbfd44f9208 fffffff51e00c680
raw: 0000000000000000 0000000000200020 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
Memory state around the buggy address:
fffffff51414ac80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
fffffff51414ad00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>fffffff51414ad80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
^
fffffff51414ae00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
fffffff51414ae80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
The batch API sets the same completion for each rpmh message that's sent
and then loops through all the messages and waits for that single
completion declared on the stack to be completed before returning from
the function and freeing the message structures. Unfortunately, some
messages may still be in process and 'stuck' in the TCS. At some later
point, the tcs_tx_done() interrupt will run and try to process messages
that have already been freed at the end of rpmh_write_batch(). This will
in turn access the 'needs_free' member of the rpmh_request structure and
cause KASAN to complain. Furthermore, if there's a message that's
completed in rpmh_tx_done() and freed immediately after the complete()
call is made we'll be racing with potentially freed memory when
accessing the 'needs_free' member:
CPU0                              CPU1
----                              ----
                                  rpmh_tx_done()
                                   complete(&compl)
wait_for_completion(&compl)
kfree(rpm_msg)
                                   if (rpm_msg->needs_free)
                                   <KASAN warning splat>
Let's fix this by allocating a chunk of completions for each message and
waiting for all of them to be completed before returning from the batch
API. Alternatively, we could wait for the last message in the batch, but
that may be a more complicated change because it looks like
tcs_tx_done() just iterates through the indices of the queue and
completes each message instead of tracking the last inserted message and
completing that first.
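A sketch of the shape of the fix (illustrative structure and field names; the
driver's request structures differ):

    #include <linux/completion.h>
    #include <linux/slab.h>

    struct completion *compls;
    int i;

    /* One completion per message: every message must signal before the
     * memory holding the requests may be released.
     */
    compls = kcalloc(count, sizeof(*compls), GFP_KERNEL);
    if (!compls)
        return -ENOMEM;

    for (i = 0; i < count; i++) {
        init_completion(&compls[i]);
        rpm_msgs[i].completion = &compls[i];   /* hypothetical field */
    }

    /* ... send all messages ... */

    for (i = 0; i < count; i++)
        wait_for_completion(&compls[i]);

    kfree(compls);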
Fixes:
c8790cb6da58 ("drivers: qcom: rpmh: add support for batch RPMH request")
Cc: Lina Iyer <ilina@codeaurora.org>
Cc: "Raju P.L.S.S.S.N" <rplsssn@codeaurora.org>
Cc: Matthias Kaehlcke <mka@chromium.org>
Cc: Evan Green <evgreen@chromium.org>
Cc: stable@vger.kernel.org
Reviewed-by: Lina Iyer <ilina@codeaurora.org>
Reviewed-by: Evan Green <evgreen@chromium.org>
Signed-off-by: Stephen Boyd <swboyd@chromium.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Andy Gross <andy.gross@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Filipe Manana [Thu, 14 Feb 2019 15:17:20 +0000 (15:17 +0000)]
Btrfs: fix corruption reading shared and compressed extents after hole punching
commit
8e928218780e2f1cf2f5891c7575e8f0b284fcce upstream.
In the past we had data corruption when reading compressed extents that
are shared within the same file and they are consecutive, this got fixed
by commit
005efedf2c7d0 ("Btrfs: fix read corruption of compressed and
shared extents") and by commit
808f80b46790f ("Btrfs: update fix for read
corruption of compressed and shared extents"). However there was a case
that was missing in those fixes, which is when the shared and compressed
extents are referenced with a non-zero offset. The following shell script
creates a reproducer for this issue:
#!/bin/bash
mkfs.btrfs -f /dev/sdc &> /dev/null
mount -o compress /dev/sdc /mnt/sdc
# Create a file with 3 consecutive compressed extents, each has an
# uncompressed size of 128Kb and a compressed size of 4Kb.
for ((i = 1; i <= 3; i++)); do
    head -c 4096 /dev/zero
    for ((j = 1; j <= 31; j++)); do
        head -c 4096 /dev/zero | tr '\0' "\377"
    done
done > /mnt/sdc/foobar
sync
echo "Digest after file creation: $(md5sum /mnt/sdc/foobar)"
# Clone the first extent into offsets 128K and 256K.
xfs_io -c "reflink /mnt/sdc/foobar 0 128K 128K" /mnt/sdc/foobar
xfs_io -c "reflink /mnt/sdc/foobar 0 256K 128K" /mnt/sdc/foobar
sync
echo "Digest after cloning: $(md5sum /mnt/sdc/foobar)"
# Punch holes into the regions that are already full of zeroes.
xfs_io -c "fpunch 0 4K" /mnt/sdc/foobar
xfs_io -c "fpunch 128K 4K" /mnt/sdc/foobar
xfs_io -c "fpunch 256K 4K" /mnt/sdc/foobar
sync
echo "Digest after hole punching: $(md5sum /mnt/sdc/foobar)"
echo "Dropping page cache..."
sysctl -q vm.drop_caches=1
echo "Digest after hole punching: $(md5sum /mnt/sdc/foobar)"
umount /dev/sdc
When running the script we get the following output:
Digest after file creation: 5a0888d80d7ab1fd31c229f83a3bbcc8 /mnt/sdc/foobar
linked 131072/131072 bytes at offset 131072
128 KiB, 1 ops; 0.0033 sec (36.960 MiB/sec and 295.6830 ops/sec)
linked 131072/131072 bytes at offset 262144
128 KiB, 1 ops; 0.0015 sec (78.567 MiB/sec and 628.5355 ops/sec)
Digest after cloning: 5a0888d80d7ab1fd31c229f83a3bbcc8 /mnt/sdc/foobar
Digest after hole punching: 5a0888d80d7ab1fd31c229f83a3bbcc8 /mnt/sdc/foobar
Dropping page cache...
Digest after hole punching: fba694ae8664ed0c2e9ff8937e7f1484 /mnt/sdc/foobar
This happens because after reading all the pages of the extent in the
range from 128K to 256K for example, we read the hole at offset 256K
and then when reading the page at offset 260K we don't submit the
existing bio, which is responsible for filling only the pages in the
range 128K to 256K, therefore adding the pages from the range 260K
to 384K to the existing bio and submitting it after iterating over the
entire range. Once the bio completes, the uncompressed data fills only
the pages in the range 128K to 256K because there's no more data read
from disk, leaving the pages in the range 260K to 384K unfilled. It is
just a slightly different variant of what was solved by commit
005efedf2c7d0 ("Btrfs: fix read corruption of compressed and shared
extents").
Fix this by forcing a bio submit, during readpages(), whenever we find a
compressed extent map for a page that is different from the extent map
for the previous page or has a different starting offset (in case it's
the same compressed extent), instead of the extent map's original start
offset.
A test case for fstests follows soon.
Reported-by: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Fixes:
808f80b46790f ("Btrfs: update fix for read corruption of compressed and shared extents")
Fixes:
005efedf2c7d0 ("Btrfs: fix read corruption of compressed and shared extents")
Cc: stable@vger.kernel.org # 4.3+
Tested-by: Zygo Blaxell <ce3g8jdj@umail.furryterror.org>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Thumshirn [Mon, 18 Feb 2019 10:28:37 +0000 (11:28 +0100)]
btrfs: ensure that a DUP or RAID1 block group has exactly two stripes
commit
349ae63f40638a28c6fce52e8447c2d14b84cc0c upstream.
We recently had a customer issue with a corrupted filesystem. When
trying to mount this image btrfs panicked with a division by zero in
calc_stripe_length().
The corrupt chunk had a 'num_stripes' value of 1. calc_stripe_length()
takes this value and divides it by the number of copies the RAID profile
is expected to have to calculate the amount of data stripes. As a DUP
profile is expected to have 2 copies this division resulted in 1/2 = 0.
Later, the 'data_stripes' variable is used as a divisor in the
stripe length calculation, which results in a division by 0 and thus a
kernel panic.
When encountering a filesystem with a DUP block group and a
'num_stripes' value unequal to 2, refuse mounting as the image is
corrupted and will lead to unexpected behaviour.
Code inspection showed a RAID1 block group has the same issues.
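The added validation is conceptually (a sketch; the real check lives in the
chunk-reading code and reports through the btrfs error helpers):

    /* A DUP or RAID1 chunk must describe exactly two stripes, otherwise
     * calc_stripe_length() later divides by zero.
     */
    if ((type & (BTRFS_BLOCK_GROUP_DUP | BTRFS_BLOCK_GROUP_RAID1)) &&
        num_stripes != 2) {
        pr_err("invalid chunk: DUP/RAID1 with %u stripes\n", num_stripes);
        return -EIO;
    }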
Fixes:
e06cd3dd7cea ("Btrfs: add validadtion checks for chunk loading")
CC: stable@vger.kernel.org # 4.4+
Reviewed-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Filipe Manana [Thu, 13 Dec 2018 21:16:56 +0000 (21:16 +0000)]
Btrfs: setup a nofs context for memory allocation at __btrfs_set_acl
commit
a0873490660246db587849a9e172f2b7b21fa88a upstream.
We are holding a transaction handle when setting an acl, therefore we can
not allocate the xattr value buffer using GFP_KERNEL, as we could deadlock
if reclaim is triggered by the allocation; therefore set up a nofs context.
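"Set up a nofs context" refers to the memalloc_nofs_save()/
memalloc_nofs_restore() scoped-allocation API, roughly:

    #include <linux/sched/mm.h>
    #include <linux/slab.h>

    unsigned int nofs_flag;
    void *value;

    /* While the flag is set, GFP_KERNEL allocations in this task behave
     * as GFP_NOFS, so reclaim cannot recurse into the filesystem while
     * we hold the transaction handle.
     */
    nofs_flag = memalloc_nofs_save();
    value = kmalloc(4096, GFP_KERNEL);   /* effectively GFP_NOFS here */
    memalloc_nofs_restore(nofs_flag);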
Fixes:
39a27ec1004e8 ("btrfs: use GFP_KERNEL for xattr and acl allocations")
CC: stable@vger.kernel.org # 4.9+
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Filipe Manana [Thu, 13 Dec 2018 21:16:45 +0000 (21:16 +0000)]
Btrfs: setup a nofs context for memory allocation at btrfs_create_tree()
commit
b89f6d1fcb30a8cbdc18ce00c7d93792076af453 upstream.
We are holding a transaction handle when creating a tree, therefore we can
not allocate the root using GFP_KERNEL, as we could deadlock if reclaim is
triggered by the allocation, therefore setup a nofs context.
Fixes:
74e4d82757f74 ("btrfs: let callers of btrfs_alloc_root pass gfp flags")
CC: stable@vger.kernel.org # 4.9+
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Finn Thain [Wed, 16 Jan 2019 05:23:24 +0000 (16:23 +1100)]
m68k: Add -ffreestanding to CFLAGS
commit
28713169d879b67be2ef2f84dcf54905de238294 upstream.
This patch fixes a build failure when using GCC 8.1:
/usr/bin/ld: block/partitions/ldm.o: in function `ldm_parse_tocblock':
block/partitions/ldm.c:153: undefined reference to `strcmp'
This is caused by a new optimization which effectively replaces a
strncmp() call with a strcmp() call. This affects a number of strncmp()
call sites in the kernel.
The entire class of optimizations is avoided with -fno-builtin, which
gets enabled by -ffreestanding. This may avoid possible future build
failures in case new optimizations appear in future compilers.
I haven't done any performance measurements with this patch but I did
count the function calls in a defconfig build. For example, there are now
23 more sprintf() calls and 39 fewer strcpy() calls. The effect on the
other libc functions is smaller.
If this harms performance we can tackle that regression by optimizing
the call sites, ideally using semantic patches. That way, clang and ICC
builds might benefit too.
Cc: stable@vger.kernel.org
Reference: https://marc.info/?l=linux-m68k&m=
154514816222244&w=2
Signed-off-by: Finn Thain <fthain@telegraphics.com.au>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vivek Goyal [Wed, 30 Jan 2019 19:01:57 +0000 (14:01 -0500)]
ovl: Do not lose security.capability xattr over metadata file copy-up
commit
993a0b2aec52754f0897b1dab4c453be8217cae5 upstream.
If a file has been copied up metadata only, and later its data is copied up,
the upper file loses any security.capability xattr it has (the underlying
filesystem clears it upon file write).
From a user's point of view, this is just a file copy-up and that should
not result in losing security.capability xattr. Hence, before data copy
up, save security.capability xattr (if any) and restore it on upper after
data copy up is complete.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Reviewed-by: Amir Goldstein <amir73il@gmail.com>
Fixes:
0c2888749363 ("ovl: A new xattr OVL_XATTR_METACOPY for file on upper")
Cc: <stable@vger.kernel.org> # v4.19+
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vivek Goyal [Fri, 11 Jan 2019 18:37:00 +0000 (19:37 +0100)]
ovl: During copy up, first copy up data and then xattrs
commit
5f32879ea35523b9842bdbdc0065e13635caada2 upstream.
If a file with a capability set (and hence a security.capability xattr) is
written, the kernel clears the security.capability xattr. For overlay,
during file copy up, xattrs are copied up first and then the data is. This
means data copy up will clear the security.capability xattr the file on
the lower layer has, and this can result in surprises. If a lower file has
CAP_SETUID, then it should not be cleared over copy up (if nothing was
actually written to the file).
This also creates problems with the chown logic, which first copies up the
file and then tries to clear the setuid bit. But by that time the
security.capability xattr is already gone (due to data copy up), and the
caller gets -ENODATA.
This has been reported by Giuseppe here.
https://github.com/containers/libpod/issues/2015#issuecomment-447824842
Fix this by copying up data first and then metadata. This is a regression
introduced by my commit as part of the metadata-only copy up patches.
TODO: There will be some corner cases where a file is copied up metadata
only and data copy up happens later, which will clear the
security.capability xattr. Something needs to be done about that too.
Fixes:
bd64e57586d3 ("ovl: During copy up, first copy up metadata and then data")
Cc: <stable@vger.kernel.org> # v4.19+
Reported-by: Giuseppe Scrivano <gscrivan@redhat.com>
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jann Horn [Wed, 23 Jan 2019 14:19:17 +0000 (15:19 +0100)]
splice: don't merge into linked buffers
commit
a0ce2f0aa6ad97c3d4927bf2ca54bcebdf062d55 upstream.
Before this patch, it was possible for two pipes to affect each other after
data had been transferred between them with tee():
============
$ cat tee_test.c
#define _GNU_SOURCE
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int pipe_a[2];
    if (pipe(pipe_a)) err(1, "pipe");
    int pipe_b[2];
    if (pipe(pipe_b)) err(1, "pipe");
    if (write(pipe_a[1], "abcd", 4) != 4) err(1, "write");
    if (tee(pipe_a[0], pipe_b[1], 2, 0) != 2) err(1, "tee");
    if (write(pipe_b[1], "xx", 2) != 2) err(1, "write");
    char buf[5];
    if (read(pipe_a[0], buf, 4) != 4) err(1, "read");
    buf[4] = 0;
    printf("got back: '%s'\n", buf);
}
$ gcc -o tee_test tee_test.c
$ ./tee_test
got back: 'abxx'
$
============
As suggested by Al Viro, fix it by creating a separate type for
non-mergeable pipe buffers, then changing the types of buffers in
splice_pipe_to_pipe() and link_pipe().
Cc: <stable@vger.kernel.org>
Fixes:
7c77f0b3f920 ("splice: implement pipe to pipe splicing")
Fixes:
70524490ee2e ("[PATCH] splice: add support for sys_tee()")
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Varad Gautam [Thu, 24 Jan 2019 13:03:06 +0000 (14:03 +0100)]
fs/devpts: always delete dcache dentry-s in dput()
commit
73052b0daee0b750b39af18460dfec683e4f5887 upstream.
d_delete only unhashes an entry if it is reached with
dentry->d_lockref.count != 1. Prior to commit
8ead9dd54716 ("devpts:
more pty driver interface cleanups"), d_delete was called on a dentry
from devpts_pty_kill with two references held, which would trigger the
unhashing, and the subsequent dputs would release it.
Commit
8ead9dd54716 reworked devpts_pty_kill to stop acquiring the second
reference from d_find_alias, and the d_delete call left the dentries
still on the hashed list without actually ever being dropped from dcache
before explicit cleanup. This causes the number of negative dentries for
devpts to pile up, and an `ls /dev/pts` invocation can take seconds to
return.
Provide always_delete_dentry() from simple_dentry_operations
as .d_delete for devpts, to make the dentry be dropped from dcache.
Without this cleanup, the number of dentries in /dev/pts/ can be grown
arbitrarily as:
`python -c 'import pty; pty.spawn(["ls", "/dev/pts"])'`
A systemtap probe on dcache_readdir to count d_subdirs shows this count
to increase with each pty spawn invocation above:
probe kernel.function("dcache_readdir") {
    subdirs = &@cast($file->f_path->dentry, "dentry")->d_subdirs;
    p = subdirs;
    p = @cast(p, "list_head")->next;
    i = 0
    while (p != subdirs) {
        p = @cast(p, "list_head")->next;
        i = i+1;
    }
    printf("number of dentries: %d\n", i);
}
Fixes:
8ead9dd54716 ("devpts: more pty driver interface cleanups")
Signed-off-by: Varad Gautam <vrd@amazon.de>
Reported-by: Zheng Wang <wanz@amazon.de>
Reported-by: Brandon Schwartz <bsschwar@amazon.de>
Root-caused-by: Maximilian Heyne <mheyne@amazon.de>
Root-caused-by: Nicolas Pernas Maradei <npernas@amazon.de>
CC: David Woodhouse <dwmw@amazon.co.uk>
CC: Maximilian Heyne <mheyne@amazon.de>
CC: Stefan Nuernberger <snu@amazon.de>
CC: Amit Shah <aams@amazon.de>
CC: Linus Torvalds <torvalds@linux-foundation.org>
CC: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
CC: Al Viro <viro@ZenIV.linux.org.uk>
CC: Christian Brauner <christian.brauner@ubuntu.com>
CC: Eric W. Biederman <ebiederm@xmission.com>
CC: Matthew Wilcox <willy@infradead.org>
CC: Eric Biggers <ebiggers@google.com>
CC: <stable@vger.kernel.org> # 4.9+
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Himanshu Madhani [Fri, 15 Feb 2019 22:37:12 +0000 (14:37 -0800)]
scsi: qla2xxx: Fix LUN discovery if loop id is not assigned yet by firmware
commit
ec322937a7f152d68755dc8316523bf6f831b48f upstream.
This patch fixes LUN discovery when loop ID is not yet assigned by the
firmware during driver load/sg_reset operations. Driver will now search for
new loop id before retrying login.
Fixes:
48acad099074 ("scsi: qla2xxx: Fix N2N link re-connect")
Cc: stable@vger.kernel.org #4.19
Signed-off-by: Himanshu Madhani <hmadhani@marvell.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Fri, 25 Jan 2019 18:34:56 +0000 (10:34 -0800)]
scsi: target/iscsi: Avoid iscsit_release_commands_from_conn() deadlock
commit
32e36bfbcf31452a854263e7c7f32fbefc4b44d8 upstream.
When using SCSI passthrough in combination with the iSCSI target driver
then cmd->t_state_lock may be obtained from interrupt context. Hence, all
code that obtains cmd->t_state_lock from thread context must disable
interrupts first. This patch avoids that lockdep reports the following:
WARNING: inconsistent lock state
4.18.0-dbg+ #1 Not tainted
--------------------------------
inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
iscsi_ttx/1800 [HC1[1]:SC0[2]:HE0:SE0] takes:
000000006e7b0ceb (&(&cmd->t_state_lock)->rlock){?...}, at: target_complete_cmd+0x47/0x2c0 [target_core_mod]
{HARDIRQ-ON-W} state was registered at:
lock_acquire+0xd2/0x260
_raw_spin_lock+0x32/0x50
iscsit_close_connection+0x97e/0x1020 [iscsi_target_mod]
iscsit_take_action_for_connection_exit+0x108/0x200 [iscsi_target_mod]
iscsi_target_rx_thread+0x180/0x190 [iscsi_target_mod]
kthread+0x1cf/0x1f0
ret_from_fork+0x24/0x30
irq event stamp: 1281
hardirqs last enabled at (1279): [<ffffffff970ade79>] __local_bh_enable_ip+0xa9/0x160
hardirqs last disabled at (1281): [<ffffffff97a008a5>] interrupt_entry+0xb5/0xd0
softirqs last enabled at (1278): [<ffffffff977cd9a1>] lock_sock_nested+0x51/0xc0
softirqs last disabled at (1280): [<ffffffffc07a6e04>] ip6_finish_output2+0x124/0xe40 [ipv6]
other info that might help us debug this:
Possible unsafe locking scenario:
       CPU0
       ----
  lock(&(&cmd->t_state_lock)->rlock);
  <Interrupt>
    lock(&(&cmd->t_state_lock)->rlock);
Martin K. Petersen [Tue, 12 Feb 2019 21:21:05 +0000 (16:21 -0500)]
scsi: sd: Optimal I/O size should be a multiple of physical block size
commit
a83da8a4509d3ebfe03bb7fffce022e4d5d4764f upstream.
It was reported that some devices report an OPTIMAL TRANSFER LENGTH of
0xFFFF blocks. That looks bogus, especially for a device with a
4096-byte physical block size.
Ignore OPTIMAL TRANSFER LENGTH if it is not a multiple of the device's
reported physical block size.
To make the sanity checking conditionals more readable--and to
facilitate printing warnings--relocate the checking to a helper
function. No functional change aside from the printks.
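The check amounts to something like this (a sketch with illustrative variable
names, not the helper added by the patch):

    /* An OPTIMAL TRANSFER LENGTH that is not a multiple of the physical
     * block size is treated as bogus and ignored.
     */
    if (physical_block_size == 0 ||
        opt_xfer_bytes % physical_block_size != 0) {
        pr_warn("sd: ignoring bogus optimal transfer size %u bytes\n",
                opt_xfer_bytes);
        opt_xfer_bytes = 0;   /* fall back to the default */
    }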
Cc: <stable@vger.kernel.org>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=199759
Reported-by: Christoph Anton Mitterer <calestyo@scientia.net>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sagar Biradar [Fri, 8 Mar 2019 07:26:41 +0000 (23:26 -0800)]
scsi: aacraid: Fix performance issue on logical drives
commit
0015437cc046e5ec2b57b00ff8312b8d432eac7c upstream.
Fix performance issue where the queue depth for SmartIOC logical volumes is
set to 1, and allow the usual logical volume code to be executed
Fixes:
a052865fe287 (aacraid: Set correct Queue Depth for HBA1000 RAW disks)
Cc: stable@vger.kernel.org
Signed-off-by: Sagar Biradar <Sagar.Biradar@microchip.com>
Reviewed-by: Dave Carroll <david.carroll@microsemi.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Felipe Franciosi [Wed, 27 Feb 2019 16:10:34 +0000 (16:10 +0000)]
scsi: virtio_scsi: don't send sc payload with tmfs
commit
3722e6a52174d7c3a00e6f5efd006ca093f346c1 upstream.
The virtio scsi spec defines struct virtio_scsi_ctrl_tmf as a set of
device-readable records and a single device-writable response entry:
struct virtio_scsi_ctrl_tmf
{
    // Device-readable part
    le32 type;
    le32 subtype;
    u8 lun[8];
    le64 id;
    // Device-writable part
    u8 response;
}
The above should be organised as two descriptor entries (or potentially
more if using VIRTIO_F_ANY_LAYOUT), but without any extra data after "le64
id" or after "u8 response".
The Linux driver doesn't respect that, with virtscsi_abort() and
virtscsi_device_reset() setting cmd->sc before calling virtscsi_tmf(). It
results in the original scsi command payload (or writable buffers) added to
the tmf.
This fixes the problem by leaving cmd->sc zeroed out, which makes
virtscsi_kick_cmd() add the tmf to the control vq without any payload.
Cc: stable@vger.kernel.org
Signed-off-by: Felipe Franciosi <felipe@nutanix.com>
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Halil Pasic [Mon, 21 Jan 2019 12:19:43 +0000 (13:19 +0100)]
s390/virtio: handle find on invalid queue gracefully
commit
3438b2c039b4bf26881786a1f3450f016d66ad11 upstream.
A queue with a capacity of zero is clearly not a valid virtio queue.
Some emulators report zero queue size if queried with an invalid queue
index. Instead of crashing in this case let us just return -ENOENT. To
make that work properly, let us fix the notifier cleanup logic as well.
Cc: stable@vger.kernel.org
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Signed-off-by: Cornelia Huck <cohuck@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Martin Schwidefsky [Thu, 14 Feb 2019 14:40:56 +0000 (15:40 +0100)]
s390/setup: fix early warning messages
commit
8727638426b0aea59d7f904ad8ddf483f9234f88 upstream.
The setup_lowcore() function creates a new prefix page for the boot CPU.
The PSW mask for the system_call, external interrupt, i/o interrupt and
the program check handler have the DAT bit set in this new prefix page.
At the time setup_lowcore is called the system still runs without virtual
address translation, the paging_init() function creates the kernel page
table and loads the CR13 with the kernel ASCE.
Any code between setup_lowcore() and the end of paging_init() that has
a BUG or WARN statement will create a program check that can not be
handled correctly as there is no kernel page table yet.
To allow early WARN statements, initially set up the lowcore with DAT off
and set the DAT bit only after paging_init() has completed.
Cc: stable@vger.kernel.org
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Samuel Holland [Sun, 13 Jan 2019 02:17:18 +0000 (20:17 -0600)]
clocksource/drivers/arch_timer: Workaround for Allwinner A64 timer instability
commit
c950ca8c35eeb32224a63adc47e12f9e226da241 upstream.
The Allwinner A64 SoC is known[1] to have an unstable architectural
timer, which manifests itself most obviously in the time jumping forward
a multiple of 95 years[2][3]. This coincides with 2^56 cycles at a
timer frequency of 24 MHz, implying that the time went slightly backward
(and this was interpreted by the kernel as it jumping forward and
wrapping around past the epoch).
Investigation revealed instability in the low bits of CNTVCT at the
point a high bit rolls over. This leads to power-of-two cycle forward
and backward jumps. (Testing shows that forward jumps are about twice as
likely as backward jumps.) Since the counter value returns to normal
after an indeterminate read, each "jump" really consists of both a
forward and backward jump from the software perspective.
Unless the kernel is trapping CNTVCT reads, a userspace program is able
to read the register in a loop faster than it changes. A test program
running on all 4 CPU cores that reported jumps larger than 100 ms was
run for 13.6 hours and reported the following:
Count | Event
-------+---------------------------
9940 | jumped backward 699ms
268 | jumped backward 1398ms
1 | jumped backward 2097ms
16020 | jumped forward 175ms
6443 | jumped forward 699ms
2976 | jumped forward 1398ms
9 | jumped forward 356516ms
9 | jumped forward 357215ms
4 | jumped forward 714430ms
1 | jumped forward 3578440ms
This works out to a jump larger than 100 ms about every 5.5 seconds on
each CPU core.
The largest jump (almost an hour!) was the following sequence of reads:
0x0000007fffffffff → 0x00000093feffffff → 0x0000008000000000
Note that the middle bits don't necessarily all read as all zeroes or
all ones during the anomalous behavior; however the low 10 bits checked
by the function in this patch have never been observed with any other
value.
Also note that smaller jumps are much more common, with backward jumps
of 2048 (2^11) cycles observed over 400 times per second on each core.
(Of course, this is partially explained by lower bits rolling over more
frequently.) Any one of these could have caused the 95 year time skip.
Similar anomalies were observed while reading CNTPCT (after patching the
kernel to allow reads from userspace). However, the CNTPCT jumps are
much less frequent, and only small jumps were observed. The same program
as before (except now reading CNTPCT) observed after 72 hours:
Count | Event
-------+---------------------------
17 | jumped backward 699ms
52 | jumped forward 175ms
2831 | jumped forward 699ms
5 | jumped forward 1398ms
Further investigation showed that the instability in CNTPCT/CNTVCT also
affected the respective timer's TVAL register. The following values were
observed immediately after writing CNVT_TVAL to 0x10000000:
CNTVCT | CNTV_TVAL | CNTV_CVAL | CNTV_TVAL Error
--------------------+------------+--------------------+-----------------
0x000000d4a2d8bfff | 0x10003fff | 0x000000d4b2d8bfff | +0x00004000
0x000000d4a2d94000 | 0x0fffffff | 0x000000d4b2d97fff | -0x00004000
0x000000d4a2d97fff | 0x10003fff | 0x000000d4b2d97fff | +0x00004000
0x000000d4a2d9c000 | 0x0fffffff | 0x000000d4b2d9ffff | -0x00004000
The pattern of errors in CNTV_TVAL seemed to depend on exactly which
value was written to it. For example, after writing 0x10101010:
CNTVCT | CNTV_TVAL | CNTV_CVAL | CNTV_TVAL Error
--------------------+------------+--------------------+-----------------
0x000001ac3effffff | 0x1110100f | 0x000001ac4f10100f | +0x1000000
0x000001ac40000000 | 0x1010100f | 0x000001ac5110100f | -0x1000000
0x000001ac58ffffff | 0x1110100f | 0x000001ac6910100f | +0x1000000
0x000001ac66000000 | 0x1010100f | 0x000001ac7710100f | -0x1000000
0x000001ac6affffff | 0x1110100f | 0x000001ac7b10100f | +0x1000000
0x000001ac6e000000 | 0x1010100f | 0x000001ac7f10100f | -0x1000000
I was also twice able to reproduce the issue covered by Allwinner's
workaround[4], that writing to TVAL sometimes fails, and both CVAL and
TVAL are left with entirely bogus values. One was the following values:
CNTVCT | CNTV_TVAL | CNTV_CVAL
--------------------+------------+--------------------------------------
0x000000d4a2d6014c | 0x8fbd5721 | 0x000000d132935fff (615s in the past)
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
========================================================================
Because the CPU can read the CNTPCT/CNTVCT registers faster than they
change, performing two reads of the register and comparing the high bits
(like other workarounds) is not a workable solution. And because the
timer can jump both forward and backward, no pair of reads can
distinguish a good value from a bad one. The only way to guarantee a
good value from consecutive reads would be to read _three_ times, and
take the middle value only if the three values are 1) each unique and
2) increasing. This takes at minimum 3 counter cycles (125 ns), or more
if an anomaly is detected.
However, since there is a distinct pattern to the bad values, we can
optimize the common case (1022/1024 of the time) to a single read by
simply ignoring values that match the error pattern. This still takes no
more than 3 cycles in the worst case, and requires much less code. As an
additional safety check, we still limit the loop iteration to the number
of max-frequency (1.2 GHz) CPU cycles in three 24 MHz counter periods.
For the TVAL registers, the simple solution is to not use them. Instead,
read or write the CVAL and calculate the TVAL value in software.
Although the manufacturer is aware of at least part of the erratum[4],
there is no official name for it. For now, use the kernel-internal name
"UNKNOWN1".
[1]: https://github.com/armbian/build/commit/
a08cd6fe7ae9
[2]: https://forum.armbian.com/topic/3458-a64-datetime-clock-issue/
[3]: https://irclog.whitequark.org/linux-sunxi/2018-01-26
[4]: https://github.com/Allwinner-Homlet/H6-BSP4.9-linux/blob/master/drivers/clocksource/arm_arch_timer.c#L272
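Put together, the read-side workaround described above amounts to a bounded
re-read loop that rejects the two bad low-bit patterns (a sketch; the function
name is made up and the real code is shared between several registers):

    #include <linux/bits.h>
    #include <linux/bug.h>
    #include <linux/types.h>
    #include <asm/sysreg.h>

    /* Values whose low 10 bits are all zeros or all ones are the 2/1024
     * cases that may be mid-glitch, so re-read. 150 iterations bounds the
     * loop to three 24 MHz counter periods at a 1.2 GHz CPU clock.
     */
    static u64 notrace example_read_cntvct(void)
    {
        u64 val;
        int retries = 150;

        do {
            val = read_sysreg(cntvct_el0);
            retries--;
        } while (((val & GENMASK(9, 0)) == 0 ||
                  (val & GENMASK(9, 0)) == GENMASK(9, 0)) && retries);

        WARN_ON_ONCE(!retries);
        return val;
    }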
Acked-by: Maxime Ripard <maxime.ripard@bootlin.com>
Tested-by: Andre Przywara <andre.przywara@arm.com>
Signed-off-by: Samuel Holland <samuel@sholland.org>
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stuart Menefy [Sun, 10 Feb 2019 22:51:14 +0000 (22:51 +0000)]
clocksource/drivers/exynos_mct: Clear timer interrupt when shutdown
commit
d2f276c8d3c224d5b493c42b6cf006ae4e64fb1c upstream.
When shutting down the timer, ensure that after we have stopped the
timer any pending interrupts are cleared. This fixes a problem when
suspending, as interrupts are disabled before the timer is stopped,
so the timer interrupt may still be asserted, preventing the system
entering a low power state when the wfi is executed.
Signed-off-by: Stuart Menefy <stuart.menefy@mathembedded.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: <stable@vger.kernel.org> # v4.3+
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stuart Menefy [Sun, 10 Feb 2019 22:51:13 +0000 (22:51 +0000)]
clocksource/drivers/exynos_mct: Move one-shot check from tick clear to ISR
commit
a5719a40aef956ba704f2aa1c7b977224d60fa96 upstream.
When a timer tick occurs and the clock is in one-shot mode, the timer
needs to be stopped to prevent it triggering subsequent interrupts.
Currently this code is in exynos4_mct_tick_clear(), but as it is
only needed when an ISR occurs move it into exynos4_mct_tick_isr(),
leaving exynos4_mct_tick_clear() just doing what its name suggests it
should.
Signed-off-by: Stuart Menefy <stuart.menefy@mathembedded.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: stable@vger.kernel.org # v4.3+
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stuart Menefy [Tue, 12 Feb 2019 21:51:18 +0000 (21:51 +0000)]
regulator: s2mpa01: Fix step values for some LDOs
commit
28c4f730d2a44f2591cb104091da29a38dac49fe upstream.
The step values for some of the LDOs appear to be incorrect, resulting
in incorrect voltages (or at least, ones which are different from the
Samsung 3.4 vendor kernel).
Signed-off-by: Stuart Menefy <stuart.menefy@mathembedded.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mark Zhang [Thu, 10 Jan 2019 04:11:16 +0000 (12:11 +0800)]
regulator: max77620: Initialize values for DT properties
commit
0ab66b3c326ef8f77dae9f528118966365757c0c upstream.
If a regulator DT node doesn't exist, its of_parse_cb callback
function isn't called, so all values for its DT properties are
filled with zero. This leads to wrong register updates for
FPS and POK settings.
Signed-off-by: Jinyoung Park <jinyoungp@nvidia.com>
Signed-off-by: Mark Zhang <markz@nvidia.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Krzysztof Kozlowski [Sat, 9 Feb 2019 17:14:14 +0000 (18:14 +0100)]
regulator: s2mps11: Fix steps for buck7, buck8 and LDO35
commit
56b5d4ea778c1b0989c5cdb5406d4a488144c416 upstream.
LDO35 uses 25 mV step, not 50 mV. Bucks 7 and 8 use 12.5 mV step
instead of 6.25 mV. Wrong step caused over-voltage (LDO35) or
under-voltage (buck7 and 8) if regulators were used (e.g. on Exynos5420
Arndale Octa board).
Cc: <stable@vger.kernel.org>
Fixes:
cb74685ecb39 ("regulator: s2mps11: Add samsung s2mps11 regulator driver")
Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andy Shevchenko [Tue, 19 Feb 2019 20:21:28 +0000 (23:21 +0300)]
spi: pxa2xx: Setup maximum supported DMA transfer length
commit
ef070b4e4aa25bb5f8632ad196644026c11903bf upstream.
When commit
b6ced294fb61
("spi: pxa2xx: Switch to SPI core DMA mapping functionality")
switched to the DMA helpers provided by the SPI core, it missed setting
up the maximum supported DMA transfer length for the controller. As a
result, users may try to send more data than supported and get the
following warning:
ili9341 spi-PRP0001:01: DMA disabled for transfer length 153600 greater than 65536
Set up the maximum supported DMA transfer length so that users know
the limit.
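A hedged sketch of the kind of setup the fix adds, assuming the SPI
core's max_dma_len field and a driver-local MAX_DMA_LEN limit (names
taken to be the driver's existing 64 KiB constant):
        /* Advertise the DMA length limit to the SPI core so oversized
         * transfers are rejected up front instead of silently exceeding
         * what the DMA engine can handle.
         */
        controller->max_dma_len = MAX_DMA_LEN;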
Fixes:
b6ced294fb61 ("spi: pxa2xx: Switch to SPI core DMA mapping functionality")
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vignesh R [Tue, 29 Jan 2019 07:44:22 +0000 (13:14 +0530)]
spi: ti-qspi: Fix mmap read when more than one CS in use
commit
673c865efbdc5fec3cc525c46d71844d42c60072 upstream.
Commit
4dea6c9b0b64 ("spi: spi-ti-qspi: add mmap mode read support") got the
order of parameters wrong when calling regmap_update_bits() to select
the CS for mmap access: the mask and value arguments are interchanged.
The code works on a system with a single slave, but fails when more
than one CS is in use. Fix this by correcting the order of parameters
when calling regmap_update_bits().
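The regmap API takes the mask before the value, so the corrected call
looks roughly like this (register and bit names below are illustrative,
not the driver's actual defines):
        /* Wrong: mask and value swapped, selects the wrong CS */
        regmap_update_bits(map, QSPI_CS_SELECT_REG, cs_value, cs_mask);

        /* Right: regmap_update_bits(map, reg, mask, val) */
        regmap_update_bits(map, QSPI_CS_SELECT_REG, cs_mask, cs_value);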
Fixes:
4dea6c9b0b64 ("spi: spi-ti-qspi: add mmap mode read support")
Cc: stable@vger.kernel.org
Signed-off-by: Vignesh R <vigneshr@ti.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Anders Roxell [Wed, 23 Jan 2019 11:48:11 +0000 (12:48 +0100)]
netfilter: ipt_CLUSTERIP: fix warning unused variable cn
commit
206b8cc514d7ff2b79dd2d5ad939adc7c493f07a upstream.
When CONFIG_PROC_FS isn't set the variable cn isn't used.
net/ipv4/netfilter/ipt_CLUSTERIP.c: In function ‘clusterip_net_exit’:
net/ipv4/netfilter/ipt_CLUSTERIP.c:849:24: warning: unused variable ‘cn’ [-Wunused-variable]
struct clusterip_net *cn = clusterip_pernet(net);
^~
Rework so the variable 'cn' is declared inside "#ifdef CONFIG_PROC_FS".
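A sketch of the rework (the proc cleanup body is abbreviated and the
procdir field name is assumed):
static void clusterip_net_exit(struct net *net)
{
#ifdef CONFIG_PROC_FS
        /* 'cn' is only needed for the /proc cleanup, so declare and use
         * it inside the same #ifdef to avoid -Wunused-variable when
         * CONFIG_PROC_FS is not set.
         */
        struct clusterip_net *cn = clusterip_pernet(net);

        proc_remove(cn->procdir);
        cn->procdir = NULL;
#endif
}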
Fixes:
b12f7bad5ad3 ("netfilter: ipt_CLUSTERIP: remove wrong WARN_ON_ONCE in netns exit routine")
Signed-off-by: Anders Roxell <anders.roxell@linaro.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiong Wu [Thu, 28 Feb 2019 16:18:33 +0000 (00:18 +0800)]
mmc: fix a bug when max_discard is 0
commit
d4721339dcca7def04909a8e60da43c19a24d8bf upstream.
The original purpose of the code being fixed is to replace max_discard
with max_trim if max_trim is less than max_discard. When max_discard is
0 we should replace max_discard with max_trim as well, because
max_discard is 0 only when the calculation in mmc_do_calc_max_discard()
overflows, so if mmc_can_trim(card) is true, max_discard should be
replaced by an available max_trim.
However, two conditions in the original code interfere with the correct
behaviour:
1) if (max_discard && mmc_can_trim(card))
When max_discard is 0, this skips the check of whether max_discard
needs to be replaced with max_trim.
2) if (max_trim < max_discard)
This condition is false when max_discard is 0, so it also skips the
replacement of max_discard with max_trim, even though the 0-valued
max_discard should be replaced.
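Putting the two observations together, the corrected logic reads
roughly like this (a sketch of the relevant part of
mmc_calc_max_discard(), not the verbatim patch):
        if (mmc_can_trim(card)) {
                max_trim = mmc_do_calc_max_discard(card, MMC_TRIM_ARG);
                /* Also take max_trim when max_discard is 0 (the discard
                 * calculation overflowed), not only when it is strictly
                 * smaller.
                 */
                if (max_trim < max_discard || max_discard == 0)
                        max_discard = max_trim;
        }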
Signed-off-by: Jiong Wu <Lohengrin1024@gmail.com>
Fixes:
b305882fbc87 ("mmc: core: optimize mmc_calc_max_discard")
Cc: stable@vger.kernel.org # v4.17+
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
BOUGH CHEN [Thu, 27 Dec 2018 11:20:24 +0000 (11:20 +0000)]
mmc: sdhci-esdhc-imx: fix HS400 timing issue
commit
de0a0decf2edfc5b0c782915f4120cf990a9bd13 upstream.
Currently a tuning reset is done when the timing is MMC_TIMING_LEGACY/
MMC_TIMING_MMC_HS/MMC_TIMING_SD_HS. But for MMC_TIMING_MMC_HS we can
not do a tuning reset, otherwise the HS400 timing is not right.
The HS400 initialization sequence is: first finish tuning in HS200
mode, then switch to HS mode and 8-bit DDR mode, and finally switch to
HS400 mode. If we do a tuning reset in HS mode, HS400 mode loses the
tuning setting, which causes CRC errors.
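In terms of the timing switch, the idea is to keep the tuning reset for
legacy and SD high-speed modes but skip it for MMC_TIMING_MMC_HS, since
that is the intermediate step on the way to HS400. A sketch (the reset
helper name is assumed):
        switch (timing) {
        case MMC_TIMING_LEGACY:
        case MMC_TIMING_SD_HS:
                /* Safe to drop the tuning state: these modes never sit
                 * between an HS200 tune and the final HS400 switch.
                 */
                esdhc_reset_tuning(host);       /* helper name assumed */
                break;
        case MMC_TIMING_MMC_HS:
                /* Do NOT reset tuning here: HS400 init goes HS200 (tune)
                 * -> HS (8-bit DDR) -> HS400, and the tune result must
                 * survive this step.
                 */
                break;
        /* ... other timings unchanged ... */
        }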
Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
Cc: stable@vger.kernel.org # v4.12+
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Fixes:
d9370424c948 ("mmc: sdhci-esdhc-imx: reset tuning circuit when power on mmc card")
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andy Shevchenko [Mon, 11 Mar 2019 16:41:03 +0000 (18:41 +0200)]
ACPI / device_sysfs: Avoid OF modalias creation for removed device
commit
f16eb8a4b096514ac06fb25bf599dcc792899b3d upstream.
If an SSDT overlay is loaded via ConfigFS and then unloaded, the device
we would like to have an OF modalias for is already gone. In that case
acpi_get_name() returns no allocated buffer and the kernel crashes
afterwards:
ACPI: Host-directed Dynamic ACPI Table Unload
ads7950 spi-PRP0001:00: Dropping the link to regulator.0
BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
#PF error: [normal kernel read fault]
PGD 80000000070d6067 P4D 80000000070d6067 PUD 70d0067 PMD 0
Oops: 0000 [#1] SMP PTI
CPU: 0 PID: 40 Comm: kworker/u4:2 Not tainted 5.0.0+ #96
Hardware name: Intel Corporation Merrifield/BODEGA BAY, BIOS 542 2015.01.21:18.19.48
Workqueue: kacpi_hotplug acpi_device_del_work_fn
RIP: 0010:create_of_modalias.isra.1+0x4c/0x150
Code: 00 00 48 89 44 24 18 31 c0 48 8d 54 24 08 48 c7 44 24 10 00 00 00 00 48 c7 44 24 08 ff ff ff ff e8 7a b0 03 00 48 8b 4c 24 10 <0f> b6 01 84 c0 74 27 48 c7 c7 00 09 f4 a5 0f b6 f0 8d 50 20 f6 04
RSP: 0000:ffffa51040297c10 EFLAGS: 00010246
RAX: 0000000000001001 RBX: 0000000000000785 RCX: 0000000000000000
RDX: 0000000000001001 RSI: 0000000000000286 RDI: ffffa2163dc042e0
RBP: ffffa216062b1196 R08: 0000000000001001 R09: ffffa21639873000
R10: ffffffffa606761d R11: 0000000000000001 R12: ffffa21639873218
R13: ffffa2163deb5060 R14: ffffa216063d1010 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffffa2163e000000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 0000000007114000 CR4: 00000000001006f0
Call Trace:
__acpi_device_uevent_modalias+0xb0/0x100
spi_uevent+0xd/0x40
...
To fix the above, make create_of_modalias() check the status returned
by acpi_get_name() and bail out in case of failure.
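A hedged sketch of the bail-out, assuming create_of_modalias() obtains
the device path into a locally allocated acpi_buffer (the name_type
argument shown is illustrative):
        struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER, NULL };
        acpi_status status;

        status = acpi_get_name(acpi_dev->handle, ACPI_FULL_PATHNAME, &buf);
        if (ACPI_FAILURE(status))
                return -ENODEV; /* device (and its name) is already gone */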
Fixes:
8765c5ba1949 ("ACPI / scan: Rework modalias creation when "compatible" is present")
Link: https://bugzilla.kernel.org/show_bug.cgi?id=201381
Reported-by: Ferry Toth <fntoth@gmail.com>
Tested-by: Ferry Toth <fntoth@gmail.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Cc: 4.1+ <stable@vger.kernel.org> # 4.1+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Juergen Gross [Thu, 7 Mar 2019 09:11:19 +0000 (10:11 +0100)]
xen: fix dom0 boot on huge systems
commit
01bd2ac2f55a1916d81dace12fa8d7ae1c79b5ea upstream.
Commit
f7c90c2aa40048 ("x86/xen: don't write ptes directly in 32-bit
PV guests") introduced a regression for booting dom0 on huge systems
with lots of RAM (in the TB range).
The reason is that on those hosts the p2m list needs to be moved early
in the boot process, and this requires temporary page tables to be
created. Said commit modified xen_set_pte_init() to use a hypercall for
writing a PTE, but this requires the page table to be in the direct
mapped area, which is not the case for the temporary page tables used
in xen_relocate_p2m().
As the page tables are completely written before being linked into the
actual address space, a plain write to memory can be used in
xen_relocate_p2m() instead of set_pte().
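Conceptually, the relocation code can fill the temporary page table
with direct stores, something like the following (variable names are
illustrative):
        /* The temporary page table is not yet linked into the address
         * space, so a plain store is fine here; the hypercall used by
         * xen_set_pte_init() would require the table to be direct-mapped.
         */
        pt[pt_index] = pfn_pte(p2m_pfn, PAGE_KERNEL);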
Fixes:
f7c90c2aa40048 ("x86/xen: don't write ptes directly in 32-bit PV guests")
Cc: stable@vger.kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jann Horn [Wed, 20 Feb 2019 16:54:43 +0000 (17:54 +0100)]
tracing/perf: Use strndup_user() instead of buggy open-coded version
commit
83540fbc8812a580b6ad8f93f4c29e62e417687e upstream.
The first version of this method was missing the check for
`ret == PATH_MAX`; then such a check was added, but it didn't call kfree()
on error, so there was still a small memory leak in the error case.
Fix it by using strndup_user() instead of open-coding it.
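For reference, strndup_user() copies and bounds-checks in one step and
returns an ERR_PTR on failure, so the call site collapses to something
like this (the attr field name is assumed from the uprobe event layout,
and the real fix additionally maps the too-long case to -E2BIG):
        char *path;

        path = strndup_user(u64_to_user_ptr(p_event->attr.uprobe_path),
                            PATH_MAX);
        if (IS_ERR(path))
                return PTR_ERR(path);   /* no buffer to leak on error */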
Link: http://lkml.kernel.org/r/20190220165443.152385-1-jannh@google.com
Cc: Ingo Molnar <mingo@kernel.org>
Cc: stable@vger.kernel.org
Fixes:
0eadcc7a7bc0 ("perf/core: Fix perf_uprobe_init()")
Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>