Ian Kent [Thu, 30 Nov 2017 00:11:26 +0000 (16:11 -0800)]
autofs: revert "autofs: fix AT_NO_AUTOMOUNT not being honored"
commit 5d38f049cee1e1c4a7ac55aa79d37d01ddcc3860 upstream.
Commit 42f461482178 ("autofs: fix AT_NO_AUTOMOUNT not being honored")
allowed the fstatat(2) system call to properly honor the AT_NO_AUTOMOUNT
flag but introduced a semantic change.
In order to honor AT_NO_AUTOMOUNT a semantic change was made to the
negative dentry case for stat family system calls in follow_automount().
This changed the behaviour so that, in this case, an automount is no
longer triggered unconditionally and an error is returned instead.
This has caused more problems than I expected so reverting the change is
needed.
In a discussion with Neil Brown it was concluded that the automount(8)
daemon can implement this change without kernel modifications. So that
will be done instead and the autofs module documentation updated with a
description of the problem and what needs to be done by module users for
this specific case.
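For reference, the flag in question is exercised from user space roughly
as below; a minimal sketch, where /net/example is a placeholder for an
autofs-managed mount point:

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/stat.h>

  int main(void)
  {
      struct stat st;

      /* Stat a (possibly not yet mounted) automount point without
       * triggering the automount. */
      if (fstatat(AT_FDCWD, "/net/example", &st, AT_NO_AUTOMOUNT) == -1) {
          perror("fstatat");
          return 1;
      }
      printf("dev=%lu ino=%lu\n",
             (unsigned long)st.st_dev, (unsigned long)st.st_ino);
      return 0;
  }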
Link: http://lkml.kernel.org/r/151174730120.6162.3848002191530283984.stgit@pluto.themaw.net
Fixes: 42f4614821 ("autofs: fix AT_NO_AUTOMOUNT not being honored")
Signed-off-by: Ian Kent <raven@themaw.net>
Cc: Neil Brown <neilb@suse.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: David Howells <dhowells@redhat.com>
Cc: Colin Walters <walters@redhat.com>
Cc: Ondrej Holy <oholy@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ian Kent [Thu, 30 Nov 2017 00:11:23 +0000 (16:11 -0800)]
autofs: revert "autofs: take more care to not update last_used on path walk"
commit 43694d4bf843ddd34519e8e9de983deefeada699 upstream.
While commit 092a53452bb7 ("autofs: take more care to not update
last_used on path walk") helped (partially) resolve a problem where
automounts were not expiring due to aggressive accesses from user
space, it has a side effect for very large environments.
This change helps with the expire problem by making the expire more
aggressive but, for very large environments, that means more mount
requests from clients. When there are a lot of clients that can mean
fairly significant server load increases.
It turns out I put the last_used in this position to solve this very
problem and failed to update my own thinking of the autofs expire
policy. So the patch being reverted introduces a regression which
should be fixed.
Link: http://lkml.kernel.org/r/151174729420.6162.1832622523537052460.stgit@pluto.themaw.net
Fixes: 092a53452b ("autofs: take more care to not update last_used on path walk")
Signed-off-by: Ian Kent <raven@themaw.net>
Reviewed-by: NeilBrown <neilb@suse.com>
Cc: Al Viro <viro@ZenIV.linux.org.uk>
Cc: Colin Walters <walters@redhat.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Ondrej Holy <oholy@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
OGAWA Hirofumi [Thu, 30 Nov 2017 00:11:19 +0000 (16:11 -0800)]
fs/fat/inode.c: fix sb_rdonly() change
commit b6e8e12c0aeb5fbf1bf46c84d58cc93aedede385 upstream.
Commit bc98a42c1f7d ("VFS: Convert sb->s_flags & MS_RDONLY to
sb_rdonly(sb)") converted fat_remount():new_rdonly from a bool to an
int.
However, fat_remount() depends upon the compiler's conversion of a
non-zero integer into boolean `true'.
Fix it by switching `new_rdonly' back into a bool.
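To illustrate the general pitfall (a minimal userspace sketch; the flag
value and names below are hypothetical, not FAT's): an int keeps the raw
masked bit value, while a bool normalises it to 0 or 1, which matters as
soon as the value is compared against another 0/1 boolean such as the
result of sb_rdonly():

  #include <stdbool.h>
  #include <stdio.h>

  #define EXAMPLE_RDONLY 0x4    /* hypothetical flag bit, not the kernel's */

  int main(void)
  {
      unsigned int flags = EXAMPLE_RDONLY;

      int as_int = flags & EXAMPLE_RDONLY;    /* keeps the raw value: 4 */
      bool as_bool = flags & EXAMPLE_RDONLY;  /* normalised to true: 1 */
      bool currently_rdonly = true;           /* stand-in for sb_rdonly(sb) */

      /* The int form compares unequal even though both mean "read-only". */
      printf("int form thinks state changed: %d, bool form: %d\n",
             as_int != currently_rdonly, as_bool != currently_rdonly);
      return 0;
  }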
Link: http://lkml.kernel.org/r/87mv3d5x51.fsf@mail.parknet.co.jp
Fixes: bc98a42c1f7d0f8 ("VFS: Convert sb->s_flags & MS_RDONLY to sb_rdonly(sb)")
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Cc: Joe Perches <joe@perches.com>
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Shakeel Butt [Thu, 30 Nov 2017 00:11:15 +0000 (16:11 -0800)]
mm, memcg: fix mem_cgroup_swapout() for THPs
commit d08afa149acfd00871484ada6dabc3880524cd1c upstream.
Commit d6810d730022 ("memcg, THP, swap: make mem_cgroup_swapout()
support THP") changed mem_cgroup_swapout() to support transparent huge
page (THP).
However, the patch missed one location which should be changed to
correctly handle THPs. The resulting bug will cause memory cgroups
whose THPs were swapped out to become zombies on deletion.
Link: http://lkml.kernel.org/r/20171128161941.20931-1-shakeelb@google.com
Fixes: d6810d730022 ("memcg, THP, swap: make mem_cgroup_swapout() support THP")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Huang Ying <ying.huang@intel.com>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: Greg Thelen <gthelen@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Zi Yan [Thu, 30 Nov 2017 00:11:12 +0000 (16:11 -0800)]
mm: migrate: fix an incorrect call of prep_transhuge_page()
commit 40a899ed16486455f964e46d1af31fd4fded21c1 upstream.
In https://lkml.org/lkml/2017/11/20/411, Andrea reported that during
memory hotplug/hot remove prep_transhuge_page() is called incorrectly on
non-THP pages for migration, when THP is on but THP migration is not
enabled. This leads to a bad state of target pages for migration.
By inspecting the code, if called on a non-THP, prep_transhuge_page()
will
1) change the value of the mapping of (page + 2), since it is used for
THP deferred list;
2) change the lru value of (page + 1), since it is used for THP's dtor.
Both can lead to data corruption of these two pages.
Andrea said:
"Pragmatically and from the point of view of the memory_hotplug subsys,
the effect is a kernel crash when pages are being migrated during a
memory hot remove offline and migration target pages are found in a
bad state"
This patch fixes it by only calling prep_transhuge_page() when we are
certain that the target page is THP.
Link: http://lkml.kernel.org/r/20171121021855.50525-1-zi.yan@sent.com
Fixes: 8135d8926c08 ("mm: memory_hotplug: memory hotremove supports thp migration")
Signed-off-by: Zi Yan <zi.yan@cs.rutgers.edu>
Reported-by: Andrea Reale <ar@linux.vnet.ibm.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: "Jérôme Glisse" <jglisse@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
chenjie [Thu, 30 Nov 2017 00:10:54 +0000 (16:10 -0800)]
mm/madvise.c: fix madvise() infinite loop under special circumstances
commit 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91 upstream.
MADV_WILLNEED has always been a noop for DAX (formerly XIP) mappings.
Unfortunately madvise_willneed() doesn't communicate this information
properly to the generic madvise syscall implementation. The calling
convention is quite subtle there: madvise_vma() is supposed to either
return an error or update &prev, otherwise the main loop will never
advance to the next vma and will keep looping forever without a way to
get out of the kernel.
It seems this has been broken since its introduction. Nobody has noticed
because nobody seems to be using MADV_WILLNEED on these DAX mappings.
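A tiny userspace simulation of that calling convention (all names here
are illustrative, not the kernel's): unless the per-region handler
reports an error or advances prev, the driving loop never moves on.

  #include <stdio.h>

  struct region { long start, end; };

  /* Stand-in for madvise_vma(): must return an error or advance *prev. */
  static int handle_region(struct region *r, struct region **prev)
  {
      *prev = r;    /* dropping this line would stall the loop below */
      return 0;
  }

  int main(void)
  {
      struct region regions[3] = { {0, 10}, {10, 20}, {20, 30} };
      struct region *prev = NULL;
      int i = 0, iterations = 0;

      while (i < 3 && iterations++ < 100) {
          if (handle_region(&regions[i], &prev))
              break;
          if (prev == &regions[i])
              i++;    /* only advance once prev was updated */
      }
      printf("processed %d regions in %d iterations\n", i, iterations);
      return 0;
  }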
[mhocko@suse.com: rewrite changelog]
Link: http://lkml.kernel.org/r/20171127115318.911-1-guoxuenan@huawei.com
Fixes: fe77ba6f4f97 ("[PATCH] xip: madvice/fadvice: execute in place")
Signed-off-by: chenjie <chenjie6@huawei.com>
Signed-off-by: guoxuenan <guoxuenan@huawei.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: zhangyi (F) <yi.zhang@huawei.com>
Cc: Miao Xie <miaoxie@huawei.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Shaohua Li <shli@fb.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kees Cook [Thu, 30 Nov 2017 00:10:51 +0000 (16:10 -0800)]
exec: avoid RLIMIT_STACK races with prlimit()
commit 04e35f4495dd560db30c25efca4eecae8ec8c375 upstream.
While the defense-in-depth RLIMIT_STACK limit on setuid processes was
protected against races from other threads calling setrlimit(), I missed
protecting it against races from external processes calling prlimit().
This adds locking around the change and makes sure that rlim_max is set
too.
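For context, the external-process path is the prlimit(2) system call; a
minimal sketch of changing another process's RLIMIT_STACK (the target
pid and the 8 MiB limits are placeholders):

  #define _GNU_SOURCE
  #include <stdio.h>
  #include <sys/resource.h>
  #include <sys/types.h>

  int main(void)
  {
      pid_t target = 1234;                            /* placeholder pid */
      struct rlimit new_lim = { 8 << 20, 8 << 20 };   /* cur and max */
      struct rlimit old_lim;

      /* Writes the target's RLIMIT_STACK and reads back the old value;
       * this can race with the target's own exec-time stack setup. */
      if (prlimit(target, RLIMIT_STACK, &new_lim, &old_lim) == -1) {
          perror("prlimit");
          return 1;
      }
      printf("old cur=%llu max=%llu\n",
             (unsigned long long)old_lim.rlim_cur,
             (unsigned long long)old_lim.rlim_max);
      return 0;
  }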
Link: http://lkml.kernel.org/r/20171127193457.GA11348@beast
Fixes: 64701dee4178e ("exec: Use sane stack rlimit under secureexec")
Signed-off-by: Kees Cook <keescook@chromium.org>
Reported-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Reported-by: Brad Spengler <spender@grsecurity.net>
Acked-by: Serge Hallyn <serge@hallyn.com>
Cc: James Morris <james.l.morris@oracle.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Thu, 30 Nov 2017 00:10:47 +0000 (16:10 -0800)]
IB/core: disable memory registration of filesystem-dax vmas
commit 5f1d43de54164dcfb9bfa542fcc92c1e1a1b6c1d upstream.
Until there is a solution to the dma-to-dax vs truncate problem it is
not safe to allow RDMA to create long-standing memory registrations
against filesystem-dax vmas.
Link: http://lkml.kernel.org/r/151068941011.7446.7766030590347262502.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reported-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Jason Gunthorpe <jgg@mellanox.com>
Acked-by: Doug Ledford <dledford@redhat.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Joonyoung Shim <jy0922.shim@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Thu, 30 Nov 2017 00:10:43 +0000 (16:10 -0800)]
v4l2: disable filesystem-dax mapping support
commit b70131de648c2b997d22f4653934438013f407a1 upstream.
V4L2 memory registrations are incompatible with filesystem-dax, which
needs the ability to revoke DMA access to a mapping at will, or
otherwise allow the kernel to wait for completion of DMA. The
filesystem-dax implementation breaks the traditional solution of
truncating active file-backed mappings, since there is no page-cache
page we can orphan to sustain ongoing DMA.
If v4l2 wants to support long-lived DMA mappings it needs to arrange to
hold a file lease or use some other mechanism so that the kernel can
coordinate revoking DMA access when the filesystem needs to truncate
mappings.
Link: http://lkml.kernel.org/r/151068940499.7446.12846708245365671207.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reported-by: Jan Kara <jack@suse.cz>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Joonyoung Shim <jy0922.shim@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Thu, 30 Nov 2017 00:10:39 +0000 (16:10 -0800)]
mm: fail get_vaddr_frames() for filesystem-dax mappings
commit b7f0554a56f21fb3e636a627450a9add030889be upstream.
Until there is a solution to the dma-to-dax vs truncate problem it is
not safe to allow V4L2, Exynos, and other frame vector users to create
long-standing / irrevocable memory registrations against
filesystem-dax vmas.
[dan.j.williams@intel.com: add comment for vma_is_fsdax() check in get_vaddr_frames(), per Jan]
Link: http://lkml.kernel.org/r/151197874035.26211.4061781453123083667.stgit@dwillia2-desk3.amr.corp.intel.com
Link: http://lkml.kernel.org/r/151068939985.7446.15684639617389154187.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
Cc: Joonyoung Shim <jy0922.shim@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Thu, 30 Nov 2017 00:10:35 +0000 (16:10 -0800)]
mm: introduce get_user_pages_longterm
commit 2bb6d2837083de722bfdc369cb0d76ce188dd9b4 upstream.
Patch series "introduce get_user_pages_longterm()", v2.
Here is a new get_user_pages api for cases where a driver intends to
keep an elevated page count indefinitely. This is distinct from usages
like iov_iter_get_pages where the elevated page counts are transient.
The iov_iter_get_pages cases immediately turn around and submit the
pages to a device driver which will put_page when the i/o operation
completes (under kernel control).
In the longterm case userspace is responsible for dropping the page
reference at some undefined point in the future. This is untenable for
the filesystem-dax case, where the filesystem is in control of the
lifetime of the block / page and needs reasonable limits on how long it
can wait for pages in a mapping to become idle.
Fixing filesystems to actually wait for dax pages to be idle before
blocks from a truncate/hole-punch operation are repurposed is saved for
a later patch series.
Also, allowing longterm registration of dax mappings is a future patch
series that introduces a "map with lease" semantic where the kernel can
revoke a lease and force userspace to drop its page references.
I have also tagged these for -stable to purposely break cases that might
assume that longterm memory registrations for filesystem-dax mappings
were supported by the kernel. The behavior regression this policy
change implies is one of the reasons we maintain the "dax enabled.
Warning: EXPERIMENTAL, use at your own risk" notification when mounting
a filesystem in dax mode.
It is worth noting the device-dax interface does not suffer the same
constraints since it does not support file space management operations
like hole-punch.
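"Hole-punch" here refers to fallocate(2) with FALLOC_FL_PUNCH_HOLE,
which can deallocate file blocks underneath an established mapping; a
minimal sketch (the path is a placeholder for a file on a DAX-mounted
filesystem):

  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <stdio.h>

  int main(void)
  {
      int fd = open("/mnt/dax/file", O_RDWR);    /* placeholder path */
      if (fd < 0) {
          perror("open");
          return 1;
      }

      /* Deallocate 2 MiB at offset 0 while keeping the file size; on
       * filesystem-dax this pulls blocks out from under any long-lived
       * DMA target in that range. */
      if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                    0, 2 << 20) == -1)
          perror("fallocate");
      return 0;
  }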
This patch (of 4):
Until there is a solution to the dma-to-dax vs truncate problem it is
not safe to allow long-standing memory registrations against
filesystem-dax vmas. Device-dax vmas do not have this problem and are
explicitly allowed.
This is temporary until a "memory registration with layout-lease"
mechanism can be implemented for the affected sub-systems (RDMA and
V4L2).
[akpm@linux-foundation.org: use kcalloc()]
Link: http://lkml.kernel.org/r/151068939435.7446.13560129395419350737.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: 3565fce3a659 ("mm, x86: get_user_pages() for dax mappings")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Suggested-by: Christoph Hellwig <hch@lst.de>
Cc: Doug Ledford <dledford@redhat.com>
Cc: Hal Rosenstock <hal.rosenstock@gmail.com>
Cc: Inki Dae <inki.dae@samsung.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jason Gunthorpe <jgg@mellanox.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Cc: Joonyoung Shim <jy0922.shim@samsung.com>
Cc: Kyungmin Park <kyungmin.park@samsung.com>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Sean Hefty <sean.hefty@intel.com>
Cc: Seung-Woo Kim <sw0312.kim@samsung.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Thu, 30 Nov 2017 00:10:32 +0000 (16:10 -0800)]
device-dax: implement ->split() to catch invalid munmap attempts
commit 9702cffdbf2129516db679e4467db81e1cd287da upstream.
Similar to how device-dax enforces that the 'address', 'offset', and
'len' parameters to mmap() be aligned to the device's fundamental
alignment, the same constraints apply to munmap(). Implement ->split()
to fail munmap calls that violate the alignment constraint.
Otherwise, we later fail VM_BUG_ON checks in the unmap_page_range() path
with crash signatures of the form:
vma ffff8800b60c8a88 start 00007f88c0000000 end 00007f88c0e00000
next (null) prev (null) mm ffff8800b61150c0
prot 8000000000000027 anon_vma (null) vm_ops ffffffffa0091240
pgoff 0 file ffff8800b638ef80 private_data (null)
flags: 0x380000fb(read|write|shared|mayread|maywrite|mayexec|mayshare|softdirty|mixedmap|hugepage)
------------[ cut here ]------------
kernel BUG at mm/huge_memory.c:2014!
[..]
RIP: 0010:__split_huge_pud+0x12a/0x180
[..]
Call Trace:
unmap_page_range+0x245/0xa40
? __vma_adjust+0x301/0x990
unmap_vmas+0x4c/0xa0
unmap_region+0xae/0x120
? __vma_rb_erase+0x11a/0x230
do_munmap+0x276/0x410
vm_munmap+0x6a/0xa0
SyS_munmap+0x1d/0x30
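Seen from user space, the new behaviour looks roughly like this (a
sketch; /dev/dax0.0 and the 2 MiB fundamental alignment are assumptions
about the configured device):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/mman.h>

  #define DAX_ALIGN (2UL << 20)    /* assumed device alignment: 2 MiB */

  int main(void)
  {
      int fd = open("/dev/dax0.0", O_RDWR);
      if (fd < 0) {
          perror("open");
          return 1;
      }

      void *p = mmap(NULL, 2 * DAX_ALIGN, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
      if (p == MAP_FAILED) {
          perror("mmap");
          return 1;
      }

      /* Unaligned length: with ->split() in place this fails with
       * EINVAL instead of splitting the vma and crashing later. */
      if (munmap(p, DAX_ALIGN + 4096) == -1)
          perror("munmap (expected to fail)");
      return 0;
  }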
Link: http://lkml.kernel.org/r/151130418681.4029.7118245855057952010.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: dee410792419 ("/dev/dax, core: file operations and dax-mmap")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reported-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Thu, 30 Nov 2017 00:10:28 +0000 (16:10 -0800)]
mm, hugetlbfs: introduce ->split() to vm_operations_struct
commit 31383c6865a578834dd953d9dbc88e6b19fe3997 upstream.
Patch series "device-dax: fix unaligned munmap handling"
When device-dax is operating in huge-page mode we want it to behave like
hugetlbfs and fail attempts to split vmas into unaligned ranges. It
would be messy to teach the munmap path about device-dax alignment
constraints in the same (hstate) way that hugetlbfs communicates this
constraint. Instead, these patches introduce a new ->split() vm
operation.
This patch (of 2):
The device-dax interface has similar constraints to hugetlbfs in that it
requires the munmap path to unmap in huge-page-aligned units. Rather
than add more custom vma handling code in __split_vma(), introduce a new
vm operation to perform this vma-specific check.
Link: http://lkml.kernel.org/r/151130418135.4029.6783191281930729710.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: dee410792419 ("/dev/dax, core: file operations and dax-mmap")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Cc: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Thu, 30 Nov 2017 00:10:06 +0000 (16:10 -0800)]
mm: fix device-dax pud write-faults triggered by get_user_pages()
commit 1501899a898dfb5477c55534bdfd734c046da06d upstream.
Currently only get_user_pages_fast() can safely handle the writable gup
case, due to its use of pud_access_permitted() to check whether the pud
entry is writable. In the gup slow path pud_write() is used instead of
pud_access_permitted(), and to date it has been unimplemented and just
calls BUG_ON():
kernel BUG at ./include/linux/hugetlb.h:244!
[..]
RIP: 0010:follow_devmap_pud+0x482/0x490
[..]
Call Trace:
follow_page_mask+0x28c/0x6e0
__get_user_pages+0xe4/0x6c0
get_user_pages_unlocked+0x130/0x1b0
get_user_pages_fast+0x89/0xb0
iov_iter_get_pages_alloc+0x114/0x4a0
nfs_direct_read_schedule_iovec+0xd2/0x350
? nfs_start_io_direct+0x63/0x70
nfs_file_direct_read+0x1e0/0x250
nfs_file_read+0x90/0xc0
For now this just implements a simple check for the _PAGE_RW bit similar
to pmd_write. However, this implies that the gup-slow-path check is
missing the extra checks that the gup-fast-path performs with
pud_access_permitted. Later patches will align all checks to use the
'access_permitted' helper if the architecture provides it.
Note that the generic 'access_permitted' helper fallback is the simple
_PAGE_RW check on architectures that do not define the
'access_permitted' helper(s).
[dan.j.williams@intel.com: fix powerpc compile error]
Link: http://lkml.kernel.org/r/151129126165.37405.16031785266675461397.stgit@dwillia2-desk3.amr.corp.intel.com
Link: http://lkml.kernel.org/r/151043109938.2842.14834662818213616199.stgit@dwillia2-desk3.amr.corp.intel.com
Fixes: a00cc7d9dd93 ("mm, x86: add support for PUD-sized transparent hugepages")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Acked-by: Thomas Gleixner <tglx@linutronix.de> [x86]
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mike Kravetz [Thu, 30 Nov 2017 00:10:01 +0000 (16:10 -0800)]
mm/cma: fix alloc_contig_range ret code/potential leak
commit 63cd448908b5eb51d84c52f02b31b9b4ccd1cb5a upstream.
If the call to __alloc_contig_migrate_range() in alloc_contig_range() returns
-EBUSY, processing continues so that test_pages_isolated() is called
where there is a tracepoint to identify the busy pages. However, it is
possible for busy pages to become available between the calls to these
two routines. In this case, the range of pages may be allocated.
Unfortunately, the original return code (ret == -EBUSY) is still set and
returned to the caller. Therefore, the caller believes the pages were
not allocated and they are leaked.
Update the comment to indicate that allocation is still possible even if
__alloc_contig_migrate_range returns -EBUSY. Also, clear return code in
this case so that it is not accidentally used or returned to caller.
Link: http://lkml.kernel.org/r/20171122185214.25285-1-mike.kravetz@oracle.com
Fixes: 8ef5849fa8a2 ("mm/cma: always check which page caused allocation failure")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Nazarewicz <mina86@mina86.com>
Cc: Laura Abbott <labbott@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kirill A. Shutemov [Mon, 27 Nov 2017 03:21:25 +0000 (06:21 +0300)]
mm, thp: Do not make page table dirty unconditionally in touch_p[mu]d()
commit a8f97366452ed491d13cf1e44241bc0b5740b1f0 upstream.
Currently, we unconditionally make the page table entry dirty in
touch_pmd(). It may result in a false-positive can_follow_write_pmd().
We can avoid the situation by only making the page table entry dirty if
the caller asks for write access -- FOLL_WRITE.
The patch also changes touch_pud() in the same way.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Hugh Dickins <hughd@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wang Nan [Thu, 30 Nov 2017 00:09:58 +0000 (16:09 -0800)]
mm, oom_reaper: gather each vma to prevent leaking TLB entry
commit 687cb0884a714ff484d038e9190edc874edcf146 upstream.
tlb_gather_mmu(&tlb, mm, 0, -1) means gathering the whole virtual memory
space. In this case, tlb->fullmm is true. Some architectures, like
arm64, don't flush the TLB when tlb->fullmm is true; see commit
5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1").
This causes TLB entries to leak.
Will clarifies his patch:
"Basically, we tag each address space with an ASID (PCID on x86) which
is resident in the TLB. This means we can elide TLB invalidation when
pulling down a full mm because we won't ever assign that ASID to
another mm without doing TLB invalidation elsewhere (which actually
just nukes the whole TLB).
I think that means that we could potentially not fault on a kernel
uaccess, because we could hit in the TLB"
There could be a window between complete_signal() sending an IPI to
other cores and all threads sharing this mm actually being kicked off
the cores. In this window, the oom reaper may call
tlb_flush_mmu_tlbonly() to flush the TLB and then free pages. However,
due to the above problem, the TLB entries are not really flushed on
arm64. Other threads can still access these pages through stale TLB
entries. Moreover, a copy_to_user() can also write to these pages
without generating a page fault, causing use-after-free bugs.
This patch gathers each vma instead of gathering the full vm space. In
this case tlb->fullmm is not true. The behavior of the oom reaper
becomes similar to munmapping before do_exit, which should be safe for
all archs.
Link: http://lkml.kernel.org/r/20171107095453.179940-1-wangnan0@huawei.com
Fixes: aac453635549 ("mm, oom: introduce oom reaper")
Signed-off-by: Wang Nan <wangnan0@huawei.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Bob Liu <liubo95@huawei.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michal Hocko [Thu, 30 Nov 2017 00:09:54 +0000 (16:09 -0800)]
mm, memory_hotplug: do not back off draining pcp free pages from kworker context
commit 4b81cb2ff69c8a8e297a147d2eb4d9b5e8d7c435 upstream.
drain_all_pages backs off when called from a kworker context since
commit 0ccce3b92421 ("mm, page_alloc: drain per-cpu pages from workqueue
context") because the original IPI based pcp draining has been replaced
by a WQ based one and the check wanted to prevent from recursion and
inter workers dependencies. This has made some sense at the time
because the system WQ has been used and one worker holding the lock
could be blocked while waiting for new workers to emerge which can be a
problem under OOM conditions.
Since then, commit ce612879ddc7 ("mm: move pcp and lru-pcp draining into
single wq") has moved draining to a dedicated (mm_percpu_wq) WQ with a
rescuer, so we shouldn't depend on any other WQ activity to make
forward progress, and calling drain_all_pages from a worker context is
safe as long as this doesn't happen from mm_percpu_wq itself, which is
not the case because all workers are required to _not_ depend on any MM
locks.
Why is this a problem in the first place? ACPI driven memory hot-remove
(acpi_device_hotplug) is executed from the worker context. We end up
calling __offline_pages to free all the pages and that requires both
lru_add_drain_all_cpuslocked and drain_all_pages to do their job
otherwise we can have dangling pages on pcp lists and fail the offline
operation (__test_page_isolated_in_pageblock would see a page with 0 ref
count but without PageBuddy set).
Fix the issue by removing the worker check in drain_all_pages.
lru_add_drain_all_cpuslocked doesn't have this restriction so it works
as expected.
Link: http://lkml.kernel.org/r/20170828093341.26341-1-mhocko@kernel.org
Fixes: 0ccce3b924212 ("mm, page_alloc: drain per-cpu pages from workqueue context")
Signed-off-by: Michal Hocko <mhocko@suse.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stefan Brüns [Fri, 3 Nov 2017 02:01:53 +0000 (03:01 +0100)]
platform/x86: hp-wmi: Fix tablet mode detection for convertibles
commit 9968e12a291e639dd51d1218b694d440b22a917f upstream.
Commit f9cf3b2880cc ("platform/x86: hp-wmi: Refactor dock and tablet
state fetchers") consolidated the methods for docking and laptop mode
detection, but omitted to apply the correct mask for the laptop mode
(it always uses the constant for docking).
Fixes: f9cf3b2880cc ("platform/x86: hp-wmi: Refactor dock and tablet state fetchers")
Signed-off-by: Stefan Brüns <stefan.bruens@rwth-aachen.de>
Cc: Michel Dänzer <michel@daenzer.net>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Thu, 30 Nov 2017 08:41:00 +0000 (08:41 +0000)]
Linux 4.14.3
Sasha Neftin [Sun, 6 Aug 2017 13:49:18 +0000 (16:49 +0300)]
e1000e: fix buffer overrun while the I219 is processing DMA transactions
commit b10effb92e272051dd1ec0d7be56bf9ca85ab927 upstream.
Intel® 100/200 Series Chipset platforms reduced the round-trip
latency for LAN Controller DMA accesses, causing, in some
high-performance cases, a buffer overrun while the I219 LAN Connected
Device is processing the DMA transactions. I219LM and I219V devices
can fall into an unrecoverable Tx hang under very stressful UDP traffic
and repeated reconnection of the Ethernet cable. This Tx hang of the
LAN Controller is only recovered if the system is rebooted. Slightly
slow down DMA access by reducing the number of outstanding requests.
This workaround could have an impact on TCP traffic performance
on the platform. Disabling TSO eliminates the performance loss for TCP
traffic without a noticeable impact on CPU performance.
Please, refer to I218/I219 specification update:
https://www.intel.com/content/www/us/en/embedded/products/networking/
ethernet-connection-i218-family-documentation.html
Signed-off-by: Sasha Neftin <sasha.neftin@intel.com>
Reviewed-by: Dima Ruinskiy <dima.ruinskiy@intel.com>
Reviewed-by: Raanan Avargil <raanan.avargil@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Benjamin Poirier [Fri, 21 Jul 2017 18:36:27 +0000 (11:36 -0700)]
e1000e: Avoid receiver overrun interrupt bursts
commit 4aea7a5c5e940c1723add439f4088844cd26196d upstream.
When e1000e_poll() is not fast enough to keep up with incoming traffic, the
adapter (when operating in msix mode) raises the Other interrupt to signal
Receiver Overrun.
This is a double problem because 1) at the moment e1000_msix_other()
assumes that it is only called in case of Link Status Change and 2) if the
condition persists, the interrupt is repeatedly raised again in quick
succession.
Ideally we would configure the Other interrupt to not be raised in case of
receiver overrun but this doesn't seem possible on this adapter. Instead,
we handle the first part of the problem by reverting to the practice of
reading ICR in the other interrupt handler, like before commit
16ecba59bc33 ("e1000e: Do not read ICR in Other interrupt"). Thanks to
commit 0a8047ac68e5 ("e1000e: Fix msi-x interrupt automask") which
cleared IAME
from CTRL_EXT, reading ICR doesn't interfere with RxQ0, TxQ0 interrupts
anymore. We handle the second part of the problem by not re-enabling the
Other interrupt right away when there is overrun. Instead, we wait until
traffic subsides, napi polling mode is exited and interrupts are
re-enabled.
Reported-by: Lennart Sorensen <lsorense@csclub.uwaterloo.ca>
Fixes: 16ecba59bc33 ("e1000e: Do not read ICR in Other interrupt")
Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Benjamin Poirier [Fri, 21 Jul 2017 18:36:26 +0000 (11:36 -0700)]
e1000e: Separate signaling for link check/link up
commit 19110cfbb34d4af0cdfe14cd243f3b09dc95b013 upstream.
Lennart reported the following race condition:
\ e1000_watchdog_task
\ e1000e_has_link
\ hw->mac.ops.check_for_link() === e1000e_check_for_copper_link
/* link is up */
mac->get_link_status = false;
/* interrupt */
\ e1000_msix_other
hw->mac.get_link_status = true;
link_active = !hw->mac.get_link_status
/* link_active is false, wrongly */
This problem arises because the single flag get_link_status is used to
signal two different states: link status needs checking and link status is
down.
Avoid the problem by using the return value of .check_for_link to signal
the link status to e1000e_has_link().
Reported-by: Lennart Sorensen <lsorense@csclub.uwaterloo.ca>
Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Benjamin Poirier [Fri, 21 Jul 2017 18:36:25 +0000 (11:36 -0700)]
e1000e: Fix return value test
commit d3509f8bc7b0560044c15f0e3ecfde1d9af757a6 upstream.
All the helpers return -E1000_ERR_PHY.
Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Benjamin Poirier [Fri, 21 Jul 2017 18:36:23 +0000 (11:36 -0700)]
e1000e: Fix error path in link detection
commit c4c40e51f9c32c6dd8adf606624c930a1c4d9bbb upstream.
In case of error from e1e_rphy(), the loop will exit early and "success"
will be set to true erroneously.
Signed-off-by: Benjamin Poirier <bpoirier@suse.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Luca Coelho [Fri, 10 Nov 2017 12:03:36 +0000 (14:03 +0200)]
iwlwifi: mvm: support version 7 of the SCAN_REQ_UMAC FW command
commit dac4df1c5f2c34903f61b1bc4fc722e31b4199e7 upstream.
Newer firmware versions (such as iwlwifi-8000C-34.ucode) have
introduced an API change in the SCAN_REQ_UMAC command that is not
backwards compatible. The driver needs to detect and use the new API
format when the firmware reports it, otherwise the scan command will
not work properly, causing a command timeout.
Fix this by adding a TLV that tells the driver that the new API is in
use and use the correct structures for it.
This fixes https://bugzilla.kernel.org/show_bug.cgi?id=197591
Fixes: d7a5b3e9e42e ("iwlwifi: mvm: bump API to 34 for 8000 and up")
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Cc: Thomas Backlund <tmb@mageia.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Luca Coelho [Wed, 15 Nov 2017 16:28:04 +0000 (18:28 +0200)]
iwlwifi: fix PCI IDs and configuration mapping for 9000 series
commit dbc89253a7e15f8f031fb1eeb956de91204655e3 upstream.
A lot of PCI IDs were missing and there were some problems with the
configuration and firmware selection for devices on the 9000 series.
Fix the firmware selection by adding files for the B-steps; add
configuration for some integrated devices; and add a bunch of PCI IDs
(mostly for integrated devices) that were missing from the driver's
list.
Without this patch, a lot of devices will not be recognized or will
try to load the wrong firmware file.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Cc: Thomas Backlund <tmb@mageia.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ihab Zhaika [Tue, 24 Oct 2017 14:36:43 +0000 (17:36 +0300)]
iwlwifi: add new cards for 8260 series
commit d669fc2d42a43ee0abcf2396df6e9c5a124aa984 upstream.
Add three new PCI IDs for the 8260 series.
Signed-off-by: Ihab Zhaika <ihab.zhaika@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Cc: Thomas Backlund <tmb@mageia.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ihab Zhaika [Tue, 24 Oct 2017 14:38:12 +0000 (17:38 +0300)]
iwlwifi: add new cards for 8265 series
commit 7cddbef445631109bd530ce7cdacaa04ff0a62d1 upstream.
Add two new PCI IDs for the 8265 series.
Signed-off-by: Ihab Zhaika <ihab.zhaika@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Cc: Thomas Backlund <tmb@mageia.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ihab Zhaika [Tue, 24 Oct 2017 14:04:24 +0000 (17:04 +0300)]
iwlwifi: add new cards for a000 series
commit 57b36f7fcb39c5eae8c1f463699f747af69643ba upstream.
Add four new PCI IDs for the a000 series.
Signed-off-by: Ihab Zhaika <ihab.zhaika@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Cc: Thomas Backlund <tmb@mageia.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Luca Coelho [Thu, 12 Oct 2017 08:20:50 +0000 (11:20 +0300)]
iwlwifi: pcie: sort IDs for the 9000 series for easier comparisons
commit 1105a337375258515ed09b92a83fd7bfd6775958 upstream.
It's hard to find values that are missing in the list, so sorting the
values and comparing them makes it much easier. To simplify this
task, sort the devices in the list.
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Cc: Thomas Backlund <tmb@mageia.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oren Givon [Thu, 31 Aug 2017 10:15:09 +0000 (13:15 +0300)]
iwlwifi: add a new a000 device
commit d048b36b9654c4e0cf0d3576be2d1ed2a3084c6f upstream.
Add a new a000 device with PCI ID (0x2720, 0x0030).
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Cc: Thomas Backlund <tmb@mageia.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oren Givon [Mon, 28 Aug 2017 07:33:38 +0000 (10:33 +0300)]
iwlwifi: fix wrong struct for a000 device
commit f7f5873bbd45a67d3097dfb55237ade2ad520184 upstream.
The PCI ID (0x2720, 0x0070) was set with the config struct
iwla000_2ax_cfg_hr instead of iwla000_2ac_cfg_hr_cdb.
Fixes: 175b87c69253 ("iwlwifi: add the new a000_2ax series")
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
Cc: Thomas Backlund <tmb@mageia.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Neil Armstrong [Wed, 11 Oct 2017 15:23:12 +0000 (17:23 +0200)]
ARM64: dts: meson-gxl: Add alternate ARM Trusted Firmware reserved memory zone
commit 4ee8e51b9edfe7845a094690a365c844e5a35b4b upstream.
This year, Amlogic updated the ARM Trusted Firmware reserved memory
mapping for Meson GXL SoCs, and products sold since May 2017 use this
alternate reserved memory mapping.
But products had been sold using the previous mapping.
This issue has been explained in [1] and a dynamic solution is yet to be
found to avoid losing another 3 Mbytes of reservable memory.
In the meantime, this patch adds this alternate memory zone only for
the GXL and GXM SoCs, since GXBB-based new products stopped earlier.
[1] http://lists.infradead.org/pipermail/linux-amlogic/2017-October/004860.html
Fixes: bba8e3f42736 ("ARM64: dts: meson-gx: Add firmware reserved memory zones")
Reported-by: Jerome Brunet <jbrunet@baylibre.com>
Signed-off-by: Neil Armstrong <narmstrong@baylibre.com>
Signed-off-by: Kevin Hilman <khilman@baylibre.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stanimir Varbanov [Fri, 13 Oct 2017 14:13:17 +0000 (16:13 +0200)]
media: venus: reimplement decoder stop command
commit e69b987a97599456b95b5fef4aca8dcdb1505aea upstream.
This addresses the wrong behavior of the decoder stop command by
rewriting it. The new implementation enqueues an empty buffer
on the decoder input buffer queue to signal end-of-stream. The
client should stop queuing buffers on the V4L2 Output queue
and continue queuing/dequeuing buffers on the Capture queue. This
process will continue until the client receives a buffer with the
V4L2_BUF_FLAG_LAST flag raised, which means that this is the last
decoded buffer with data.
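From the client's side, the flow described above maps onto the standard
V4L2 ioctls roughly as below (a sketch; /dev/video0 is a placeholder and
the usual REQBUFS/QBUF/STREAMON setup on both queues is assumed to have
happened already):

  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  int main(void)
  {
      int fd = open("/dev/video0", O_RDWR);
      if (fd < 0) {
          perror("open");
          return 1;
      }

      /* Tell the decoder no more bitstream is coming (Output side). */
      struct v4l2_decoder_cmd cmd;
      memset(&cmd, 0, sizeof(cmd));
      cmd.cmd = V4L2_DEC_CMD_STOP;
      if (ioctl(fd, VIDIOC_DECODER_CMD, &cmd) == -1)
          perror("VIDIOC_DECODER_CMD");

      /* Keep dequeuing Capture buffers until one carries FLAG_LAST. */
      struct v4l2_plane planes[1];
      struct v4l2_buffer buf;
      memset(&buf, 0, sizeof(buf));
      memset(planes, 0, sizeof(planes));
      buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
      buf.memory = V4L2_MEMORY_MMAP;
      buf.m.planes = planes;
      buf.length = 1;
      while (ioctl(fd, VIDIOC_DQBUF, &buf) == 0) {
          if (buf.flags & V4L2_BUF_FLAG_LAST) {
              printf("last decoded buffer received\n");
              break;
          }
      }
      return 0;
  }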
Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Tested-by: Nicolas Dufresne <nicolas.dufresne@collabora.com>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stanimir Varbanov [Tue, 10 Oct 2017 07:52:36 +0000 (09:52 +0200)]
media: venus: venc: fix bytesused v4l2_plane field
commit 5232c37ce244db04fd50d160b92e40d2df46a2e9 upstream.
This fixes the wrongly filled bytesused field of the v4l2_plane
structure by including data_offset in the plane. Also, fill data_offset
and bytesused for capture-type buffers only.
Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stanimir Varbanov [Mon, 9 Oct 2017 12:24:57 +0000 (14:24 +0200)]
media: venus: fix wrong size on dma_free
commit cd1a77e3c9cc6dbb57f02aa50e1740fc144d2dad upstream.
This change will fix an issue with dma_free size found with
DMA API debug enabled.
Signed-off-by: Stanimir Varbanov <stanimir.varbanov@linaro.org>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ricardo Ribalda Delgado [Tue, 17 Oct 2017 15:48:50 +0000 (11:48 -0400)]
media: v4l2-ctrl: Fix flags field on Control events
commit 9cac9d2fb2fe0e0cadacdb94415b3fe49e3f724f upstream.
VIDIOC_DQEVENT and VIDIOC_QUERY_EXT_CTRL should give the same output for
the control flags field.
This patch creates a new function user_flags(), that calculates the user
exported flags value (which is different than the kernel internal flags
structure). This function is then used by all the code that exports the
internal flags to userspace.
Reported-by: Dimitrios Katsaros <patcherwork@gmail.com>
Signed-off-by: Ricardo Ribalda Delgado <ricardo.ribalda@gmail.com>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Thu, 21 Sep 2017 08:40:18 +0000 (05:40 -0300)]
cx231xx-cards: fix NULL-deref on missing association descriptor
commit 6c3b047fa2d2286d5e438bcb470c7b1a49f415f6 upstream.
Make sure to check that we actually have an Interface Association
Descriptor before dereferencing it during probe to avoid dereferencing a
NULL-pointer.
Fixes: e0d3bafd0258 ("V4L/DVB (10954): Add cx231xx USB driver")
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Tested-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sean Young [Sun, 1 Oct 2017 20:38:29 +0000 (16:38 -0400)]
media: rc: nec decoder should not send both repeat and keycode
commit 829bbf268894d0866bb9dd2b1e430cfa5c5f0779 upstream.
When receiving an nec repeat, rc_repeat() is called and then rc_keydown()
with the last decoded scancode. That last call is redundant.
Fixes: 265a2988d202 ("media: rc-core: consistent use of rc_repeat()")
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sean Young [Sun, 8 Oct 2017 18:18:52 +0000 (14:18 -0400)]
media: rc: check for integer overflow
commit 3e45067f94bbd61dec0619b1c32744eb0de480c8 upstream.
The ioctl LIRC_SET_REC_TIMEOUT would set a timeout of 704ns if called
with a timeout of 4294968us.
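The 704 ns figure is exactly what 32-bit microsecond-to-nanosecond
conversion produces here: 4294968 * 1000 = 4294968000, which exceeds
2^32 (4294967296) by 704. A quick check:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t timeout_us = 4294968;
      uint32_t timeout_ns = timeout_us * 1000;    /* wraps in 32 bits */

      printf("%u us -> %u ns after 32-bit overflow\n",
             timeout_us, timeout_ns);
      return 0;
  }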
Signed-off-by: Sean Young <sean@mess.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michele Baldessari [Mon, 6 Nov 2017 13:50:22 +0000 (08:50 -0500)]
media: Don't do DMA on stack for firmware upload in the AS102 driver
commit b3120d2cc447ee77b9d69bf4ad7b452c9adb4d39 upstream.
Firmware load on AS102 uses the stack, which is not allowed any longer.
We currently fail with:
kernel: transfer buffer not dma capable
kernel: ------------[ cut here ]------------
kernel: WARNING: CPU: 0 PID: 598 at drivers/usb/core/hcd.c:1595 usb_hcd_map_urb_for_dma+0x41d/0x620
kernel: Modules linked in: amd64_edac_mod(-) edac_mce_amd as102_fe dvb_as102(+) kvm_amd kvm snd_hda_codec_realtek dvb_core snd_hda_codec_generic snd_hda_codec_hdmi snd_hda_intel snd_hda_codec irqbypass crct10dif_pclmul crc32_pclmul snd_hda_core snd_hwdep snd_seq ghash_clmulni_intel sp5100_tco fam15h_power wmi k10temp i2c_piix4 snd_seq_device snd_pcm snd_timer parport_pc parport tpm_infineon snd tpm_tis soundcore tpm_tis_core tpm shpchp acpi_cpufreq xfs libcrc32c amdgpu amdkfd amd_iommu_v2 radeon hid_logitech_hidpp i2c_algo_bit drm_kms_helper crc32c_intel ttm drm r8169 mii hid_logitech_dj
kernel: CPU: 0 PID: 598 Comm: systemd-udevd Not tainted 4.13.10-200.fc26.x86_64 #1
kernel: Hardware name: ASUS All Series/AM1I-A, BIOS 0505 03/13/2014
kernel: task: ffff979933b24c80 task.stack: ffffaf83413a4000
kernel: RIP: 0010:usb_hcd_map_urb_for_dma+0x41d/0x620
systemd-fsck[659]: /dev/sda2: clean, 49/128016 files, 268609/512000 blocks
kernel: RSP: 0018:ffffaf83413a7728 EFLAGS: 00010282
systemd-udevd[604]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable.
kernel: RAX: 000000000000001f RBX: ffff979930bce780 RCX: 0000000000000000
kernel: RDX: 0000000000000000 RSI: ffff97993ec0e118 RDI: ffff97993ec0e118
kernel: RBP: ffffaf83413a7768 R08: 000000000000039a R09: 0000000000000000
kernel: R10: 0000000000000001 R11: 00000000ffffffff R12: 00000000fffffff5
kernel: R13: 0000000001400000 R14: 0000000000000001 R15: ffff979930806800
kernel: FS:  00007effaca5c8c0(0000) GS:ffff97993ec00000(0000) knlGS:0000000000000000
kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
kernel: CR2: 00007effa9fca962 CR3: 0000000233089000 CR4: 00000000000406f0
kernel: Call Trace:
kernel: usb_hcd_submit_urb+0x493/0xb40
kernel: ? page_cache_tree_insert+0x100/0x100
kernel: ? xfs_iunlock+0xd5/0x100 [xfs]
kernel: ? xfs_file_buffered_aio_read+0x57/0xc0 [xfs]
kernel: usb_submit_urb+0x22d/0x560
kernel: usb_start_wait_urb+0x6e/0x180
kernel: usb_bulk_msg+0xb8/0x160
kernel: as102_send_ep1+0x49/0xe0 [dvb_as102]
kernel: ? devres_add+0x3f/0x50
kernel: as102_firmware_upload.isra.0+0x1dc/0x210 [dvb_as102]
kernel: as102_fw_upload+0xb6/0x1f0 [dvb_as102]
kernel: as102_dvb_register+0x2af/0x2d0 [dvb_as102]
kernel: as102_usb_probe+0x1f3/0x260 [dvb_as102]
kernel: usb_probe_interface+0x124/0x300
kernel: driver_probe_device+0x2ff/0x450
kernel: __driver_attach+0xa4/0xe0
kernel: ? driver_probe_device+0x450/0x450
kernel: bus_for_each_dev+0x6e/0xb0
kernel: driver_attach+0x1e/0x20
kernel: bus_add_driver+0x1c7/0x270
kernel: driver_register+0x60/0xe0
kernel: usb_register_driver+0x81/0x150
kernel: ? 0xffffffffc0807000
kernel: as102_usb_driver_init+0x1e/0x1000 [dvb_as102]
kernel: do_one_initcall+0x50/0x190
kernel: ? __vunmap+0x81/0xb0
kernel: ? kfree+0x154/0x170
kernel: ? kmem_cache_alloc_trace+0x15f/0x1c0
kernel: ? do_init_module+0x27/0x1e9
kernel: do_init_module+0x5f/0x1e9
kernel: load_module+0x2602/0x2c30
kernel: SYSC_init_module+0x170/0x1a0
kernel: ? SYSC_init_module+0x170/0x1a0
kernel: SyS_init_module+0xe/0x10
kernel: do_syscall_64+0x67/0x140
kernel: entry_SYSCALL64_slow_path+0x25/0x25
kernel: RIP: 0033:0x7effab6cf3ea
kernel: RSP: 002b:00007fff5cfcbbc8 EFLAGS: 00000246 ORIG_RAX: 00000000000000af
kernel: RAX: ffffffffffffffda RBX: 00005569e0b83760 RCX: 00007effab6cf3ea
kernel: RDX: 00007effac2099c5 RSI: 0000000000009a13 RDI: 00005569e0b98c50
kernel: RBP: 00007effac2099c5 R08: 00005569e0b83ed0 R09: 0000000000001d80
kernel: R10: 00007effab98db00 R11: 0000000000000246 R12: 00005569e0b98c50
kernel: R13: 00005569e0b81c60 R14: 0000000000020000 R15: 00005569dfadfdf7
kernel: Code: 48 39 c8 73 30 80 3d 59 60 9d 00 00 41 bc f5 ff ff ff 0f 85 26 ff ff ff 48 c7 c7 b8 6b d0 92 c6 05 3f 60 9d 00 01 e8 24 3d ad ff <0f> ff 8b 53 64 e9 09 ff ff ff 65 48 8b 0c 25 00 d3 00 00 48 8b
kernel: ---[ end trace c4cae366180e70ec ]---
kernel: as10x_usb: error during firmware upload part1
Let's allocate the structure dynamically so we can get the firmware
loaded correctly:
[ 14.243057] as10x_usb: firmware: as102_data1_st.hex loaded with success
[ 14.500777] as10x_usb: firmware: as102_data2_st.hex loaded with success
Signed-off-by: Michele Baldessari <michele@acksyn.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Piggin [Thu, 9 Nov 2017 17:27:38 +0000 (04:27 +1100)]
powerpc/64s/hash: Allow MAP_FIXED allocations to cross 128TB boundary
commit 35602f82d0c765f991420e319c8d3a596c921eb8 upstream.
While mapping hints with a length that cross 128TB are disallowed,
MAP_FIXED allocations that cross 128TB are allowed. These are failing
on hash (on radix they succeed). Add an additional case for fixed
mappings to expand the addr_limit when crossing 128TB.
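The case being allowed is a fixed mapping whose start lies below the
128TB (2^47) boundary and whose end lies above it; roughly (a userspace
sketch, only meaningful on a 64-bit kernel with the extended address
space, addresses illustrative):

  #include <stdio.h>
  #include <sys/mman.h>

  int main(void)
  {
      unsigned long boundary = 1UL << 47;    /* 128 TB */
      void *addr = (void *)(boundary - (1UL << 20));

      /* A 2 MiB MAP_FIXED request starting 1 MiB below the boundary,
       * so the mapping straddles 128 TB. */
      void *p = mmap(addr, 2UL << 20, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
      if (p == MAP_FAILED)
          perror("mmap");
      else
          printf("mapped across 128TB at %p\n", p);
      return 0;
  }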
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Piggin [Thu, 9 Nov 2017 17:27:37 +0000 (04:27 +1100)]
powerpc/64s/hash: Fix fork() with 512TB process address space
commit effc1b25088502fbd30305c79773de2d1f7470a6 upstream.
Hash unconditionally resets the addr_limit to default (128TB) when the
mm context is initialised. If a process has > 128TB mappings when it
forks, the child will not get the 512TB addr_limit, so accesses to
valid > 128TB mappings will fail in the child.
Fix this by only resetting the addr_limit to default if it was 0. Non
zero indicates it was duplicated from the parent (0 means exec()).
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Piggin [Thu, 9 Nov 2017 17:27:36 +0000 (04:27 +1100)]
powerpc/64s/hash: Fix 128TB-512TB virtual address boundary case allocation
commit 6a72dc038b615229a1b285829d6c8378d15c2347 upstream.
When allocating VA space with a hint that crosses 128TB, the SLB
addr_limit variable is not expanded if addr is not > 128TB, but the
slice allocation looks at task_size, which is 512TB. This results in
slice_check_fit() incorrectly succeeding because the slice_count
truncates off bit 128 of the requested mask, so the comparison to the
available mask succeeds.
Fix this by using mm->context.addr_limit instead of mm->task_size for
testing allocation limits. This causes such allocations to fail.
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB")
Reported-by: Florian Weimer <fweimer@redhat.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Ellerman [Mon, 13 Nov 2017 12:17:28 +0000 (23:17 +1100)]
powerpc/64s/hash: Fix 512T hint detection to use >= 128T
commit 7ece370996b694ae263025e056ad785afc1be5ab upstream.
Currently userspace is able to request mmap() search between 128T-512T
by specifying a hint address that is greater than 128T. But that means
a hint of 128T exactly will return an address below 128T, which is
confusing and wrong.
So fix the logic to check the hint is greater than *or equal* to 128T.
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB")
Suggested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Suggested-by: Nicholas Piggin <npiggin@gmail.com>
[mpe: Split out of Nick's bigger patch]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Piggin [Thu, 9 Nov 2017 17:27:39 +0000 (04:27 +1100)]
powerpc/64s/radix: Fix 128TB-512TB virtual address boundary case allocation
commit 85e3f1adcb9d49300b0a943bb93f9604be375bfb upstream.
Radix VA space allocations test addresses against mm->task_size which
is 512TB, even in cases where the intention is to limit allocation to
below 128TB.
This results in mmap with a hint address below 128TB but address +
length above 128TB succeeding when it should fail (as hash does after
the previous patch).
Set the high address limit to be considered up front, and base
subsequent allocation checks on that consistently.
Fixes: f4ea6dcb08ea ("powerpc/mm: Enable mappings above 128TB")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Ellerman [Tue, 14 Nov 2017 04:48:47 +0000 (15:48 +1100)]
powerpc/64s: Fix masking of SRR1 bits on instruction fault
commit 475b581ff57bc01437cbc680e281869918447763 upstream.
On 64-bit Book3s, when we take an instruction fault the reason for the
fault may be reported in SRR1. For data faults the reason is reported
in DSISR (Data Storage Instruction Status Register).
The reasons reported in each do not necessarily correspond, so we mask
the SRR1 bits before copying them to the DSISR, which is then used by
the page fault code.
Prior to commit b4c001dc44f0 ("powerpc/mm: Use symbolic constants for
filtering SRR1 bits on ISIs") we used a hard-coded mask of 0x58200000,
which corresponds to:
DSISR_NOHPTE 0x40000000 /* no translation found */
DSISR_NOEXEC_OR_G 0x10000000 /* exec of no-exec or guarded */
DSISR_PROTFAULT 0x08000000 /* protection fault */
DSISR_KEYFAULT 0x00200000 /* Storage Key fault */
That commit added a #define for the mask, DSISR_SRR1_MATCH_64S, but
incorrectly used a different similarly named DSISR_BAD_FAULT_64S.
This had the effect of changing the mask to 0xa43a0000, which omits
everything but DSISR_KEYFAULT.
Luckily this had no visible effect, because in practice we hardly use
the DSISR bits. The lack of DSISR_NOHPTE means a TLB flush
optimisation was missed in the native HPTE code, and DSISR_NOEXEC_OR_G
and DSISR_PROTFAULT are both only used to trigger rare warnings.
So we got lucky, but let's fix it. The new value only has bits between
17 and 30 set, so we can continue to use andis.
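As a quick check of the mask arithmetic quoted above (constants copied
from the table in this commit message):

  #include <stdio.h>

  #define DSISR_NOHPTE       0x40000000
  #define DSISR_NOEXEC_OR_G  0x10000000
  #define DSISR_PROTFAULT    0x08000000
  #define DSISR_KEYFAULT     0x00200000

  int main(void)
  {
      unsigned int mask = DSISR_NOHPTE | DSISR_NOEXEC_OR_G |
                          DSISR_PROTFAULT | DSISR_KEYFAULT;

      /* Matches the hard-coded 0x58200000 used before the change. */
      printf("mask = 0x%08x\n", mask);
      return 0;
  }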
Fixes: b4c001dc44f0 ("powerpc/mm: Use symbolic constants for filtering SRR1 bits on ISIs")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Naveen N. Rao [Thu, 31 Aug 2017 16:25:57 +0000 (21:55 +0530)]
powerpc/signal: Properly handle return value from uprobe_deny_signal()
commit
46725b17f1c6c815a41429259b3f070c01e71bc1 upstream.
When a uprobe is installed on an instruction that we currently do not
emulate, we copy the instruction into an xol buffer and single-step
that instruction. If that instruction generates a fault, we abort the
single stepping before invoking the signal handler. Once the signal
handler is done, the uprobe trap is hit again since the instruction is
retried and the process repeats.
We use uprobe_deny_signal() to detect if the xol instruction triggered
a signal. If so, we clear TIF_SIGPENDING and set TIF_UPROBE so that the
signal is not handled until after the single stepping is aborted. In
this case, uprobe_deny_signal() returns true and get_signal() ends up
returning 0. However, do_signal() does not look at the return value; it
bases further action on ksig.sig, even though ksig is uninitialized and
never touched in this scenario. Fix this by initializing ksig.sig to 0.
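A minimal sketch of the pattern described (the helpers around ksig are placeholders, not the actual powerpc do_signal() body):
static void do_signal_sketch(struct pt_regs *regs)
{
	struct ksignal ksig = { .sig = 0 };	/* known state up front */

	get_signal(&ksig);	/* may return 0 when uprobe_deny_signal() fired */

	if (ksig.sig > 0)
		handle_signal_sketch(&ksig, regs);	/* a real signal to deliver */
	else
		restore_saved_sigmask();	/* nothing to do; ksig.sig is still 0 */
}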
Fixes:
129b69df9c90 ("powerpc: Use get_signal() signal_setup_done()")
Reported-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Ellerman [Sun, 15 Oct 2017 18:43:41 +0000 (00:13 +0530)]
powerpc/perf/imc: Use cpu_to_node() not topology_physical_package_id()
commit
f3f1dfd600ff82b18b7ea73d80eb27f476a6aa97 upstream.
init_imc_pmu() uses topology_physical_package_id() to detect the
node id of the processor it is on to get local memory, but that's
wrong, and can lead to crashes. Fix it to use cpu_to_node().
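A hedged sketch of the difference (the allocation helper is illustrative; the point is the node lookup):
static void *alloc_pmu_mem_sketch(int cpu, size_t size)
{
	/*
	 * cpu_to_node() returns the NUMA node the CPU belongs to.
	 * topology_physical_package_id() returns a socket id, which is
	 * not guaranteed to be a valid node id and can lead to crashes
	 * when used as one.
	 */
	int nid = cpu_to_node(cpu);

	return kzalloc_node(size, GFP_KERNEL, nid);
}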
Fixes:
885dcd709ba9 ("powerpc/perf: Add nest IMC PMU support")
Reported-By: Rob Lippert <rlippert@google.com>
Tested-By: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Madhavan Srinivasan <maddy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Balbir Singh [Mon, 16 Oct 2017 05:21:35 +0000 (16:21 +1100)]
powerpc/mm/radix: Fix crashes on Power9 DD1 with radix MMU and STRICT_RWX
commit
f79ad50ea3c73fb1ea5b09e95c864e5bb263adfb upstream.
When using the radix MMU on Power9 DD1, to work around a hardware
problem, radix__pte_update() is required to do a two stage update of
the PTE. First we write a zero value into the PTE, then we flush the
TLB, and then we write the new PTE value.
In the normal case that works OK, but it does not work if we're
updating the PTE that maps the code we're executing, because the
mapping is removed by the TLB flush and we can no longer execute from
it. Unfortunately the STRICT_RWX code needs to do exactly that.
The exact symptoms when we hit this case vary, sometimes we print an
oops and then get stuck after that, but I've also seen a machine just
get stuck continually page faulting with no oops printed. The variance
is presumably due to the exact layout of the text and the page size
used for the mappings. In all cases we are unable to boot to a shell.
There are possible solutions such as creating a second mapping of the
TLB flush code, executing from that, and then jumping back to the
original. However we don't want to add that level of complexity for a
DD1 work around.
So just detect that we're running on Power9 DD1 and refrain from
changing the permissions, effectively disabling STRICT_RWX on Power9
DD1.
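A minimal sketch of such a guard, assuming the usual CPU feature check is the detection mechanism:
static void mark_rodata_ro_sketch(void)
{
	/*
	 * On Power9 DD1 the radix PTE update sequence clears the PTE and
	 * flushes the TLB before writing the new value, so changing the
	 * permissions of the mapping we are executing from cannot work.
	 * Skip the permission change entirely.
	 */
	if (cpu_has_feature(CPU_FTR_POWER9_DD1)) {
		pr_warn("skipping kernel text protection on Power9 DD1\n");
		return;
	}
	/* ... otherwise apply the read-only/no-execute protections ... */
}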
Fixes:
7614ff3272a1 ("powerpc/mm/radix: Implement STRICT_RWX/mark_rodata_ro() for Radix")
Reported-by: Andrew Jeffery <andrew@aj.id.au>
[Changelog as suggested by Michael Ellerman <mpe@ellerman.id.au>]
Signed-off-by: Balbir Singh <bsingharora@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christophe Leroy [Tue, 21 Nov 2017 14:28:20 +0000 (15:28 +0100)]
powerpc: Fix boot on BOOK3S_32 with CONFIG_STRICT_KERNEL_RWX
commit
252eb55816a6f69ef9464cad303cdb3326cdc61d upstream.
On powerpc32, patch_instruction() is called by apply_feature_fixups()
which is called from early_init()
There is the following note in front of early_init():
* Note that the kernel may be running at an address which is different
* from the address that it was linked at, so we must use RELOC/PTRRELOC
* to access static data (including strings). -- paulus
Therefore, slab_is_available() cannot be called yet, and
text_poke_area must be addressed with PTRRELOC()
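A hedged sketch of the kind of access that is needed this early in boot (the static variable is illustrative; PTRRELOC is the powerpc macro referred to above):
static void *early_patch_area;	/* illustrative static pointer */
static void *get_patch_area_early_sketch(void)
{
	/*
	 * The kernel may be running at a different address than it was
	 * linked at, so the static variable must be reached through
	 * PTRRELOC(); calling slab_is_available() or touching the
	 * pointer directly is not safe yet.
	 */
	return *(void **)PTRRELOC(&early_patch_area);
}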
Fixes:
95902e6c8864 ("powerpc/mm: Implement STRICT_KERNEL_RWX on PPC32")
Reported-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
John David Anglin [Sat, 11 Nov 2017 22:11:16 +0000 (17:11 -0500)]
parisc: Fix validity check of pointer size argument in new CAS implementation
commit
05f016d2ca7a4fab99d5d5472168506ddf95e74f upstream.
As noted by Christoph Biedl, passing a pointer size of 4 in the new CAS
implementation causes a kernel crash. The attached patch corrects the
off-by-one error in the argument validity check.
In reviewing the code, I noticed that we only perform word operations
with the pointer size argument. The subi instruction intentionally uses
a word condition on 64-bit kernels. Nullification was used instead of a
cmpib instruction as the branch should never be taken. The shlw
pseudo-operation generates a depw,z instruction and it clears the target
before doing a shift left word deposit. Thus, we don't need to clip the
upper 32 bits of this argument on 64-bit kernels.
Tested with a gcc testsuite run with a 64-bit kernel. The gcc atomic
code in libgcc is the only direct user of the new CAS implementation
that I am aware of.
Signed-off-by: John David Anglin <dave.anglin@bell.net>
Signed-off-by: Helge Deller <deller@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brian King [Fri, 17 Nov 2017 17:05:43 +0000 (11:05 -0600)]
ixgbe: Fix skb list corruption on Power systems
commit
0a9a17e3bb4564caf4bfe2a6783ae1287667d188 upstream.
This patch fixes an issue seen on Power systems with ixgbe which results
in skb list corruption and an eventual kernel oops. The following is what
was observed:
        CPU 1                                CPU 2
============================        ============================
1: ixgbe_xmit_frame_ring            ixgbe_clean_tx_irq
2: first->skb = skb                 eop_desc = tx_buffer->next_to_watch
3: ixgbe_tx_map                     read_barrier_depends()
4: wmb                              check adapter written status bit
5: first->next_to_watch = tx_desc   napi_consume_skb(tx_buffer->skb ..);
6: writel(i, tx_ring->tail);
The read_barrier_depends is insufficient to ensure that tx_buffer->skb does not
get loaded prior to tx_buffer->next_to_watch, which then results in loading
a stale skb pointer. This patch replaces the read_barrier_depends with
smp_rmb to ensure loads are ordered with respect to the load of
tx_buffer->next_to_watch.
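A minimal sketch of the cleanup-path ordering after the change (structure and status-bit names follow the table above; this is not the full ixgbe_clean_tx_irq() body):
eop_desc = READ_ONCE(tx_buffer->next_to_watch);
if (!eop_desc)
	break;
/* make sure tx_buffer->skb cannot be loaded before next_to_watch */
smp_rmb();
/* only consume the skb once the adapter has written back the descriptor */
if (!(eop_desc->wb.status & cpu_to_le32(IXGBE_TXD_STAT_DD)))
	break;
napi_consume_skb(tx_buffer->skb, napi_budget);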
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brian King [Fri, 17 Nov 2017 17:05:48 +0000 (11:05 -0600)]
fm10k: Use smp_rmb rather than read_barrier_depends
commit
7b8edcc685b5e2c3c37aa13dc50a88e84a5bfef8 upstream.
The original issue being fixed in this patch was seen with the ixgbe
driver, but the same issue exists with fm10k as well, as the code is
very similar. read_barrier_depends is not sufficient to ensure
loads following it are not speculatively loaded out of order
by the CPU, which can result in stale data being loaded, causing
potential system crashes.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brian King [Fri, 17 Nov 2017 17:05:49 +0000 (11:05 -0600)]
i40evf: Use smp_rmb rather than read_barrier_depends
commit
f72271e2a0ae4277d53c4053f5eed8bb346ba38a upstream.
The original issue being fixed in this patch was seen with the ixgbe
driver, but the same issue exists with i40evf as well, as the code is
very similar. read_barrier_depends is not sufficient to ensure
loads following it are not speculatively loaded out of order
by the CPU, which can result in stale data being loaded, causing
potential system crashes.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brian King [Fri, 17 Nov 2017 17:05:45 +0000 (11:05 -0600)]
ixgbevf: Use smp_rmb rather than read_barrier_depends
commit
ae0c585d93dfaf923d2c7eb44b2c3ab92854ea9b upstream.
The original issue being fixed in this patch was seen with the ixgbe
driver, but the same issue exists with ixgbevf as well, as the code is
very similar. read_barrier_depends is not sufficient to ensure
loads following it are not speculatively loaded out of order
by the CPU, which can result in stale data being loaded, causing
potential system crashes.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brian King [Fri, 17 Nov 2017 17:05:46 +0000 (11:05 -0600)]
igbvf: Use smp_rmb rather than read_barrier_depends
commit
1e1f9ca546556e508d021545861f6b5fc75a95fe upstream.
The original issue being fixed in this patch was seen with the ixgbe
driver, but the same issue exists with igbvf as well, as the code is
very similar. read_barrier_depends is not sufficient to ensure
loads following it are not speculatively loaded out of order
by the CPU, which can result in stale data being loaded, causing
potential system crashes.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brian King [Fri, 17 Nov 2017 17:05:47 +0000 (11:05 -0600)]
igb: Use smp_rmb rather than read_barrier_depends
commit
c4cb99185b4cc96c0a1c70104dc21ae14d7e7f28 upstream.
The original issue being fixed in this patch was seen with the ixgbe
driver, but the same issue exists with igb as well, as the code is
very similar. read_barrier_depends is not sufficient to ensure
loads following it are not speculatively loaded out of order
by the CPU, which can result in stale data being loaded, causing
potential system crashes.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brian King [Fri, 17 Nov 2017 17:05:44 +0000 (11:05 -0600)]
i40e: Use smp_rmb rather than read_barrier_depends
commit
52c6912fde0133981ee50ba08808f257829c4c93 upstream.
The original issue being fixed in this patch was seen with the ixgbe
driver, but the same issue exists with i40e as well, as the code is
very similar. read_barrier_depends is not sufficient to ensure
loads following it are not speculatively loaded out of order
by the CPU, which can result in stale data being loaded, causing
potential system crashes.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Acked-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bin Meng [Mon, 11 Sep 2017 09:41:53 +0000 (02:41 -0700)]
spi-nor: intel-spi: Fix broken software sequencing codes
commit
9d63f17661e25fd28714dac94bdebc4ff5b75f09 upstream.
There are two bugs in current intel_spi_sw_cycle():
- The 'data byte count' field should be the number of bytes
transferred minus 1
- SSFSTS_CTL is the offset from ispi->sregs, not ispi->base
Signed-off-by: Bin Meng <bmeng.cn@gmail.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Cyrille Pitchen <cyrille.pitchen@wedev4u.fr>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Sun, 9 Jul 2017 11:08:58 +0000 (13:08 +0200)]
NFC: fix device-allocation error return
commit
c45e3e4c5b134b081e8af362109905427967eb19 upstream.
A recent change fixing NFC device allocation itself introduced an
error-handling bug by returning an error pointer in case device-id
allocation failed. This is clearly broken as the callers still expected
NULL to be returned on errors as detected by Dan's static checker.
Fix this up by returning NULL in the event that we've run out of memory
when allocating a new device id.
Note that the offending commit is marked for stable (3.8) so this fix
needs to be backported along with it.
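A minimal sketch of the corrected error path (the ida name follows the NFC core; the surrounding allocation is simplified):
struct nfc_dev *dev = kzalloc(sizeof(*dev), GFP_KERNEL);
if (!dev)
	return NULL;
dev->idx = ida_simple_get(&nfc_index_ida, 0, 0, GFP_KERNEL);
if (dev->idx < 0) {
	kfree(dev);
	return NULL;	/* callers test for NULL, not for ERR_PTR() values */
}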
Fixes:
20777bc57c34 ("NFC: fix broken device allocation")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Jurgens [Tue, 7 Nov 2017 16:33:26 +0000 (18:33 +0200)]
IB/core: Only maintain real QPs in the security lists
commit
877add28178a7fa3c68f29c450d050a8e6513f08 upstream.
When modify QP is called on a shared QP update the security context for
the real QP. When security is subsequently enforced the shared QP
handles will be checked as well.
Without this change shared QP handles get added to the port/pkey lists,
which is a bug, because not all shared QP handles will be checked for
access. Also the shared QP security context wouldn't get removed from
the port/pkey lists causing access to free memory and list corruption
when they are destroyed.
Fixes:
d291f1a65232 ("IB/core: Enforce PKey security on QPs")
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Parav Pandit [Tue, 31 Oct 2017 15:33:18 +0000 (10:33 -0500)]
IB/core: Avoid crash on pkey enforcement failed in received MADs
commit
89548bcafec7ecfeea58c553f0834b5d575a66eb upstream.
Below kernel crash is observed when Pkey security enforcement fails on
received MADs. This issue is reported in [1].
ib_free_recv_mad() accesses the rmpp_list, which must be initialized
before it is accessed.
When security enforcement fails on received MADs, MAD processing is
skipped because the security checks failed.
OpenSM[3770]: SM port is down
kernel: BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
kernel: IP: ib_free_recv_mad+0x44/0xa0 [ib_core]
kernel: PGD 0
kernel: P4D 0
kernel:
kernel: Oops: 0002 [#1] SMP
kernel: CPU: 0 PID: 2833 Comm: kworker/0:1H Tainted: P IO 4.13.4-1-pve #1
kernel: Hardware name: Dell XS23-TY3 /9CMP63, BIOS 1.71 09/17/2013
kernel: Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
kernel: task: ffffa069c6541600 task.stack: ffffb9a729054000
kernel: RIP: 0010:ib_free_recv_mad+0x44/0xa0 [ib_core]
kernel: RSP: 0018:ffffb9a729057d38 EFLAGS: 00010286
kernel: RAX: ffffa069cb138a48 RBX: ffffa069cb138a10 RCX: 0000000000000000
kernel: RDX: ffffb9a729057d38 RSI: 0000000000000000 RDI: ffffa069cb138a20
kernel: RBP: ffffb9a729057d60 R08: ffffa072d2d49800 R09: ffffa069cb138ae0
kernel: R10: ffffa069cb138ae0 R11: ffffa072b3994e00 R12: ffffb9a729057d38
kernel: R13: ffffa069d1c90000 R14: 0000000000000000 R15: ffffa069d1c90880
kernel: FS: 0000000000000000(0000) GS:ffffa069dba00000(0000) knlGS:0000000000000000
kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
kernel: CR2: 0000000000000008 CR3: 00000011f51f2000 CR4: 00000000000006f0
kernel: Call Trace:
kernel: ib_mad_recv_done+0x5cc/0xb50 [ib_core]
kernel: __ib_process_cq+0x5c/0xb0 [ib_core]
kernel: ib_cq_poll_work+0x20/0x60 [ib_core]
kernel: process_one_work+0x1e9/0x410
kernel: worker_thread+0x4b/0x410
kernel: kthread+0x109/0x140
kernel: ? process_one_work+0x410/0x410
kernel: ? kthread_create_on_node+0x70/0x70
kernel: ? SyS_exit_group+0x14/0x20
kernel: ret_from_fork+0x25/0x30
kernel: RIP: ib_free_recv_mad+0x44/0xa0 [ib_core] RSP: ffffb9a729057d38
kernel: CR2: 0000000000000008
[1] : https://www.spinics.net/lists/linux-rdma/msg56190.html
Fixes:
47a2b338fe63 ("IB/core: Enforce security on management datagrams")
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reported-by: Chris Blake <chrisrblake93@gmail.com>
Reviewed-by: Daniel Jurgens <danielj@mellanox.com>
Reviewed-by: Hal Rosenstock <hal@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Wed, 11 Oct 2017 17:27:26 +0000 (10:27 -0700)]
IB/srp: Avoid that a cable pull can trigger a kernel crash
commit
8a0d18c62121d3c554a83eb96e2752861d84d937 upstream.
This patch fixes the following kernel crash:
general protection fault: 0000 [#1] PREEMPT SMP
Workqueue: ib_mad2 timeout_sends [ib_core]
Call Trace:
ib_sa_path_rec_callback+0x1c4/0x1d0 [ib_core]
send_handler+0xb2/0xd0 [ib_core]
timeout_sends+0x14d/0x220 [ib_core]
process_one_work+0x200/0x630
worker_thread+0x4e/0x3b0
kthread+0x113/0x150
Fixes: commit
aef9ec39c47f ("IB: Add SCSI RDMA Protocol (SRP) initiator")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael J. Ruhl [Mon, 2 Oct 2017 18:04:19 +0000 (11:04 -0700)]
IB/hfi1: Fix incorrect available receive user context count
commit
d7d626179fb283aba73699071af0df6d00e32138 upstream.
The addition of the VNIC contexts to num_rcv_contexts changes the
meaning of the sysfs value nctxts from available user contexts, to
user contexts + reserved VNIC contexts.
User applications that use nctxts are now broken.
Update the calculation so that VNIC contexts are used only if there are
hardware contexts available, and do not silently affect nctxts.
Update code to use the calculated VNIC context number.
Update the sysfs value nctxts to be available user contexts only.
Fixes:
2280740f01ae ("IB/hfi1: Virtual Network Interface Controller (VNIC) HW support")
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Niranjana Vishwanathapura <Niranjana.Vishwanathapura@intel.com>
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Parav Pandit [Thu, 19 Oct 2017 05:40:30 +0000 (08:40 +0300)]
IB/cm: Fix memory corruption in handling CM request
commit
5a3dc32372439eb9a0d6027c54cbfff64803fce5 upstream.
In recent code, two path record entries are always cleared, while either
one or two path record entries may have been allocated.
This leads to zeroing out unallocated memory.
This fix initializes the alternative path record only when an alternative
path is set.
While we are at it: the path record allocation code doesn't check for an
OPA alternative path (it ignores the OPA alternative LID), although the
rest of the code does check for an OPA alternative path.
This can further lead to memory corruption when only one path record is
allocated but an alternative OPA path record is actually present in the
CM request.
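A hedged sketch of the guarded initialization (the helper and its arguments are assumptions used only to show the shape of the fix):
static void init_alt_path_sketch(struct sa_path_rec *paths, int npaths,
				 bool has_alt_path)
{
	/*
	 * paths[] may contain one or two entries; only touch the second
	 * entry when it was actually allocated and an alternative path
	 * is really present, otherwise we scribble over memory that was
	 * never ours.
	 */
	if (has_alt_path && npaths > 1)
		memset(&paths[1], 0, sizeof(paths[1]));
}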
Fixes:
9fdca4da4d8c ("IB/SA: Split struct sa_path_rec based on IB and ROCE specific fields")
Signed-off-by: Parav Pandit <parav@mellanox.com>
Reviewed-by: Moni Shoua <monis@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Wed, 11 Oct 2017 17:27:22 +0000 (10:27 -0700)]
IB/srpt: Do not accept invalid initiator port names
commit
c70ca38960399a63d5c048b7b700612ea321d17e upstream.
Make srpt_parse_i_port_id() return a negative value if hex2bin()
fails.
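A minimal sketch of the intended behaviour (hex2bin() already returns a negative value on bad input, so it only needs to be propagated; the function name is marked as a sketch):
static int srpt_parse_i_port_id_sketch(u8 i_port_id[16], const char *name)
{
	const char *p = name;

	if (strncasecmp(p, "0x", 2) == 0)
		p += 2;

	/* propagate the error instead of silently accepting bad input */
	return hex2bin(i_port_id, p, 16);
}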
Fixes: commit
a42d985bd5b2 ("ib_srpt: Initial SRP Target merge for v3.3-rc1")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chuck Lever [Mon, 16 Oct 2017 16:14:33 +0000 (12:14 -0400)]
svcrdma: Preserve CB send buffer across retransmits
commit
0bad47cada5defba13e98827d22d06f13258dfb3 upstream.
During each NFSv4 callback Call, an RDMA Send completion frees the
page that contains the RPC Call message. If the upper layer
determines that a retransmit is necessary, this is too soon.
One possible symptom: after a GARBAGE_ARGS response an NFSv4.1
callback request, the following BUG fires on the NFS server:
kernel: BUG: Bad page state in process kworker/0:2H pfn:7d3ce2
kernel: page: ffffea001f4f3880 count:-2 mapcount:0 mapping: (null) index:0x0
kernel: flags: 0x2fffff80000000()
kernel: raw: 002fffff80000000 0000000000000000 0000000000000000 fffffffeffffffff
kernel: raw: dead000000000100 dead000000000200 0000000000000000 0000000000000000
kernel: page dumped because: nonzero _refcount
kernel: Modules linked in: cts rpcsec_gss_krb5 ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm
ocfs2_nodemanager ocfs2_stackglue rpcrdma ib_ipoib rdma_ucm ib_ucm ib_uverbs ib_umad
rdma_cm ib_cm iw_cm x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel
kvm irqbypass crct10dif_pclmul crc32_pclmul ghash_clmulni_intel pcbc iTCO_wdt
iTCO_vendor_support aesni_intel crypto_simd glue_helper cryptd pcspkr lpc_ich i2c_i801
mei_me mfd_core mei raid0 sg wmi ioatdma ipmi_si ipmi_devintf ipmi_msghandler shpchp
acpi_power_meter acpi_pad nfsd nfs_acl lockd auth_rpcgss grace sunrpc ip_tables xfs
libcrc32c mlx4_en mlx4_ib mlx5_ib ib_core sd_mod sr_mod cdrom ast drm_kms_helper
syscopyarea sysfillrect sysimgblt fb_sys_fops ttm ahci crc32c_intel libahci drm
mlx5_core igb libata mlx4_core dca i2c_algo_bit i2c_core nvme
kernel: ptp nvme_core pps_core dm_mirror dm_region_hash dm_log dm_mod dax
kernel: CPU: 0 PID: 11495 Comm: kworker/0:2H Not tainted 4.14.0-rc3-00001-g577ce48 #811
kernel: Hardware name: Supermicro Super Server/X10SRL-F, BIOS 1.0c 09/09/2015
kernel: Workqueue: ib-comp-wq ib_cq_poll_work [ib_core]
kernel: Call Trace:
kernel: dump_stack+0x62/0x80
kernel: bad_page+0xfe/0x11a
kernel: free_pages_check_bad+0x76/0x78
kernel: free_pcppages_bulk+0x364/0x441
kernel: ? ttwu_do_activate.isra.61+0x71/0x78
kernel: free_hot_cold_page+0x1c5/0x202
kernel: __put_page+0x2c/0x36
kernel: svc_rdma_put_context+0xd9/0xe4 [rpcrdma]
kernel: svc_rdma_wc_send+0x50/0x98 [rpcrdma]
This issue exists all the way back to v4.5, but refactoring and code
re-organization prevents this simple patch from applying to kernels
older than v4.12. The fix is the same, however, if someone needs to
backport it.
Reported-by: Ben Coddington <bcodding@redhat.com>
BugLink: https://bugzilla.linux-nfs.org/show_bug.cgi?id=314
Fixes:
5d252f90a800 ('svcrdma: Add class for RDMA backwards ... ')
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Tue, 26 Sep 2017 18:21:24 +0000 (11:21 -0700)]
libnvdimm, namespace: make 'resource' attribute only readable by root
commit
c1fb3542074fd0c4d901d778bd52455111e4eb6f upstream.
For the same reason that /proc/iomem returns 0's for non-root readers
and acpi tables are root-only, make the 'resource' attribute for
namespace devices only readable by root. Otherwise we disclose physical
address information.
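A hedged sketch of a root-only sysfs attribute of this kind (mode 0400; the show body is a placeholder, not the libnvdimm implementation):
static ssize_t resource_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	/* placeholder: would print the physical start address here */
	return sprintf(buf, "%#llx\n", 0ULL);
}
static DEVICE_ATTR(resource, 0400, resource_show, NULL);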
Fixes:
bf9bccc14c05 ("libnvdimm: pmem label sets and namespace instantiation")
Reported-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Tue, 26 Sep 2017 18:17:52 +0000 (11:17 -0700)]
libnvdimm, region : make 'resource' attribute only readable by root
commit
b8ff981f88df03c72a4de2f6eaa9ce447a10ac03 upstream.
For the same reason that /proc/iomem returns 0's for non-root readers
and acpi tables are root-only, make the 'resource' attribute for region
devices only readable by root. Otherwise we disclose physical address
information.
Fixes:
802f4be6feee ("libnvdimm: Add 'resource' sysfs attribute to regions")
Cc: Dave Jiang <dave.jiang@intel.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Reported-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Tue, 26 Sep 2017 18:41:28 +0000 (11:41 -0700)]
libnvdimm, namespace: fix label initialization to use valid seq numbers
commit
b18d4b8a25af6fe83d7692191d6ff962ea611c4f upstream.
The set of valid sequence numbers is {1,2,3}. The specification
indicates that an implementation should consider 0 a sign of a critical
error:
UEFI 2.7: 13.19 NVDIMM Label Protocol
Software never writes the sequence number 00, so a correctly
check-summed Index Block with this sequence number probably indicates a
critical error. When software discovers this case it treats it as an
invalid Index Block indication.
While the expectation is that the invalid block is just thrown away, the
Robustness Principle says we should fix this to make both sequence
numbers valid.
Fixes:
f524bf271a5c ("libnvdimm: write pmem label set")
Reported-by: Juston Li <juston.li@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Tue, 26 Sep 2017 20:07:06 +0000 (13:07 -0700)]
libnvdimm, pfn: make 'resource' attribute only readable by root
commit
26417ae4fc6108f8db436f24108b08f68bdc520e upstream.
For the same reason that /proc/iomem returns 0's for non-root readers
and acpi tables are root-only, make the 'resource' attribute for pfn
devices only readable by root. Otherwise we disclose physical address
information.
Fixes:
f6ed58c70d14 ("libnvdimm, pfn: 'resource'-address and 'size'...")
Reported-by: Dave Hansen <dave.hansen@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Mon, 25 Sep 2017 18:01:31 +0000 (11:01 -0700)]
libnvdimm, dimm: clear 'locked' status on successful DIMM enable
commit
d34cb808402898e53b9a9bcbbedd01667a78723b upstream.
If we successfully enable a DIMM then it must not be locked and we can
clear the label-read failure condition. Otherwise, we need to reload the
entire bus provider driver to achieve the same effect, and that can
disrupt unrelated DIMMs and namespaces.
Fixes:
9d62ed965118 ("libnvdimm: handle locked label storage areas")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Sat, 11 Nov 2017 16:29:29 +0000 (17:29 +0100)]
clk: ti: dra7-atl-clock: fix child-node lookups
commit
33ec6dbc5a02677509d97fe36cd2105753f0f0ea upstream.
Fix child node-lookup during probe, which ended up searching the whole
device tree depth-first starting at the parent rather than just matching on
its children.
Note that the original premature free of the parent node has already
been fixed separately, but that fix was apparently never backported to
stable.
Fixes:
9ac33b0ce81f ("CLK: TI: Driver for DRA7 ATL (Audio Tracking Logic)")
Fixes:
660e15519399 ("clk: ti: dra7-atl-clock: Fix of_node reference counting")
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Trond Myklebust [Tue, 10 Oct 2017 21:31:42 +0000 (17:31 -0400)]
SUNRPC: Fix tracepoint storage issues with svc_recv and svc_rqst_status
commit
e9d4bf219c83d09579bc62512fea2ca10f025d93 upstream.
There is no guarantee that either the request or the svc_xprt exist
by the time we get round to printing the trace message.
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Tue, 14 Nov 2017 14:59:54 +0000 (09:59 -0500)]
dax: fix general protection fault in dax_alloc_inode
commit
9f586fff6574f6ecbf323f92d44ffaf0d96225fe upstream.
Don't crash in case of allocation failure in dax_alloc_inode.
syzkaller hit the following crash on
e4880bc5dfb1
kasan: CONFIG_KASAN_INLINE enabled
kasan: GPF could be caused by NULL-ptr deref or user memory access
[..]
RIP: 0010:dax_alloc_inode+0x3b/0x70 drivers/dax/super.c:348
Call Trace:
alloc_inode+0x65/0x180 fs/inode.c:208
new_inode_pseudo+0x69/0x190 fs/inode.c:890
new_inode+0x1c/0x40 fs/inode.c:919
mount_pseudo_xattr+0x288/0x560 fs/libfs.c:261
mount_pseudo include/linux/fs.h:2137 [inline]
dax_mount+0x2e/0x40 drivers/dax/super.c:388
mount_fs+0x66/0x2d0 fs/super.c:1223
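A minimal sketch of the missing check (the cache and structure names follow drivers/dax/super.c but should be treated as assumptions):
static struct inode *dax_alloc_inode_sketch(struct super_block *sb)
{
	struct dax_device *dax_dev;

	dax_dev = kmem_cache_alloc(dax_cache, GFP_KERNEL);
	if (!dax_dev)
		return NULL;	/* don't dereference a failed allocation */

	return &dax_dev->inode;
}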
Fixes:
7b6be8444e0f ("dax: refactor dax-fs into a generic provider...")
Reported-by: syzbot <syzkaller@googlegroups.com>
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeff Moyer [Wed, 15 Nov 2017 01:37:27 +0000 (20:37 -0500)]
dax: fix PMD faults on zero-length files
commit
957ac8c421ad8b5eef9b17fe98e146d8311a541e upstream.
PMD faults on a zero length file on a file system mounted with -o dax
will not generate SIGBUS as expected.
fd = open(...O_TRUNC);
addr = mmap(NULL, 2*1024*1024, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
*addr = 'a';
<expect SIGBUS>
The problem is this code in dax_iomap_pmd_fault:
max_pgoff = (i_size_read(inode) - 1) >> PAGE_SHIFT;
If the inode size is zero, we end up with a max_pgoff that is way larger
than 0. :) Fix it by using DIV_ROUND_UP, as is done elsewhere in the
kernel.
I tested this with some simple test code that ensured that SIGBUS was
received where expected.
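A minimal sketch of the corrected bound check, assuming the surrounding fault-handler variables:
/* number of whole or partial pages backing the file */
max_pgoff = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
/* an empty file now yields max_pgoff == 0, so any fault is out of range */
if (pgoff >= max_pgoff) {
	result = VM_FAULT_SIGBUS;
	goto out;
}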
Fixes:
642261ac995e ("dax: add struct iomap based DAX PMD support")
Signed-off-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paolo Bonzini [Mon, 6 Nov 2017 12:31:12 +0000 (13:31 +0100)]
kvm: vmx: Reinstate support for CPUs without virtual NMI
commit
8a1b43922d0d1279e7936ba85c4c2a870403c95f upstream.
This is more or less a revert of commit
2c82878b0cb3 ("KVM: VMX: require
virtual NMI support", 2017-03-27); it turns out that Core 2 Duo machines
only had virtual NMIs in some SKUs.
The revert is not trivial because in the meanwhile there have been several
fixes to nested NMI injection. Therefore, the entire vNMI state is moved
to struct loaded_vmcs.
Another change compared to before the patch is a simplification here:
if (unlikely(!cpu_has_virtual_nmis() && vmx->soft_vnmi_blocked &&
!(is_guest_mode(vcpu) && nested_cpu_has_virtual_nmis(
get_vmcs12(vcpu))))) {
The final condition here is always true (because nested_cpu_has_virtual_nmis
is always false) and is removed.
Fixes:
2c82878b0cb38fd516fd612c67852a6bbf282003
Fixes: https://bugzilla.redhat.com/show_bug.cgi?id=1490803
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paolo Bonzini [Thu, 26 Oct 2017 07:13:27 +0000 (09:13 +0200)]
KVM: SVM: obey guest PAT
commit
15038e14724799b8c205beb5f20f9e54896013c3 upstream.
For many years some users of assigned devices have reported worse
performance on AMD processors with NPT than on AMD without NPT,
Intel or bare metal.
The reason turned out to be that SVM is discarding the guest PAT
setting and uses the default (PA0=PA4=WB, PA1=PA5=WT, PA2=PA6=UC-,
PA3=UC). The guest might be using a different setting, and
especially might want write combining but isn't getting it
(instead getting slow UC or UC- accesses).
Thanks a lot to geoff@hostfission.com for noticing the relation
to the g_pat setting. The patch has been tested also by a bunch
of people on VFIO users forums.
Fixes:
709ddebf81cb40e3c36c6109a7892e8b93a09464
Fixes: https://bugzilla.kernel.org/show_bug.cgi?id=196409
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Tested-by: Nick Sarnie <commendsarnex@gmail.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ladi Prosek [Wed, 11 Oct 2017 14:54:42 +0000 (16:54 +0200)]
KVM: nVMX: set IDTR and GDTR limits when loading L1 host state
commit
21f2d551183847bc7fbe8d866151d00cdad18752 upstream.
Intel SDM 27.5.2 Loading Host Segment and Descriptor-Table Registers:
"The GDTR and IDTR limits are each set to FFFFH."
Signed-off-by: Ladi Prosek <lprosek@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Mackerras [Thu, 26 Oct 2017 06:00:22 +0000 (17:00 +1100)]
KVM: PPC: Book3S HV: Don't call real-mode XICS hypercall handlers if not enabled
commit
00bb6ae5006205e041ce9784c819460562351d47 upstream.
When running a guest on a POWER9 system with the in-kernel XICS
emulation disabled (for example by running QEMU with the parameter
"-machine pseries,kernel_irqchip=off"), the kernel does not pass
the XICS-related hypercalls such as H_CPPR up to userspace for
emulation there as it should.
The reason for this is that the real-mode handlers for these
hypercalls don't check whether a XICS device has been instantiated
before calling the xics-on-xive code. That code doesn't check
either, leading to potential NULL pointer dereferences because
vcpu->arch.xive_vcpu is NULL. Those dereferences won't cause an
exception in real mode but will lead to kernel memory corruption.
This fixes it by adding kvmppc_xics_enabled() checks before calling
the XICS functions.
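A hedged sketch of such a guard in one real-mode handler (the handler and callee names are illustrative of the pattern, not exact symbols):
static int rm_h_cppr_sketch(struct kvm_vcpu *vcpu, unsigned long cppr)
{
	if (!kvmppc_xics_enabled(vcpu))
		return H_TOO_HARD;	/* punt to virtual mode / userspace */

	return xics_rm_h_cppr_sketch(vcpu, cppr);	/* in-kernel XICS/XIVE path */
}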
Fixes:
5af50993850a ("KVM: PPC: Book3S HV: Native usage of the XIVE interrupt controller")
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vasily Averin [Fri, 20 Oct 2017 14:33:18 +0000 (17:33 +0300)]
lockd: double unregister of inetaddr notifiers
commit
dc3033e16c59a2c4e62b31341258a5786cbcee56 upstream.
lockd_up() can call lockd_unregister_notifiers() twice: once inside
lockd_start_svc() when it calls lockd_svc_exit_thread(), and again in
the error path of lockd_up().
This patch makes lockd_start_svc() unregister the notifiers in all error
cases and removes the extra unregister call from the error path of
lockd_up().
Fixes:
cb7d224f82e4 "lockd: unregister notifier blocks if the service ..."
Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Reviewed-by: Jeff Layton <jlayton@kernel.org>
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Sat, 11 Nov 2017 16:51:25 +0000 (17:51 +0100)]
irqchip/gic-v3: Fix ppi-partitions lookup
commit
00ee9a1ca5080202bc37b44e998c3b2c74d45817 upstream.
Fix child-node lookup during initialisation, which ended up searching
the whole device tree depth-first starting at the parent rather than
just matching on its children.
To make things worse, the parent gic node was prematurely freed, while
the ppi-partitions node was leaked.
Fixes:
e3825ba1af3a ("irqchip/gic-v3: Add support for partitioned PPIs")
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marc Zyngier [Thu, 9 Nov 2017 14:17:59 +0000 (14:17 +0000)]
genirq: Track whether the trigger type has been set
commit
4f8413a3a799c958f7a10a6310a451e6b8aef5ad upstream.
When requesting a shared interrupt, we assume that the firmware
support code (DT or ACPI) has called irqd_set_trigger_type
already, so that we can retrieve it and check that the requester
is being reasonable.
Unfortunately, we still have non-DT, non-ACPI systems around,
and these guys won't call irqd_set_trigger_type before requesting
the interrupt. The consequence is that we fail the request that
would have worked before.
We can either chase all these use cases (boring), or address it
in core code (easier). Let's have a per-irq_desc flag that
indicates whether irqd_set_trigger_type has been called, and
let's just check it when checking for a shared interrupt.
If it hasn't been set, just take whatever the interrupt
requester asks.
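A hedged sketch of the shared-IRQ check with such a flag (the helper names are written out for illustration; the exact kernel helpers may differ):
if (!irqd_trigger_type_was_set(&desc->irq_data)) {
	/* nobody (DT/ACPI) configured a type yet: accept the requester's */
	irqd_set_trigger_type(&desc->irq_data,
			      new->flags & IRQF_TRIGGER_MASK);
} else if ((irqd_get_trigger_type(&desc->irq_data) ^ new->flags) &
	   IRQF_TRIGGER_MASK) {
	/* shared IRQ with a genuinely mismatching trigger type: reject */
	ret = -EBUSY;
}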
Fixes:
382bd4de6182 ("genirq: Use irqd_get_trigger_type to compare the trigger type for shared IRQs")
Reported-and-tested-by: Petr Cvek <petrcvekcz@gmail.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nate Dailey [Tue, 17 Oct 2017 12:17:03 +0000 (08:17 -0400)]
raid1: prevent freeze_array/wait_all_barriers deadlock
commit
f6eca2d43ed694ab8124dd24c88277f7eca93b7d upstream.
If freeze_array is attempted in the middle of close_sync/
wait_all_barriers, deadlock can occur.
freeze_array will wait for nr_pending and nr_queued to line up.
wait_all_barriers increments nr_pending for each barrier bucket, one
at a time, but doesn't actually issue IO that could be counted in
nr_queued. So freeze_array is blocked until wait_all_barriers
completes and allow_all_barriers runs. At the same time, when
_wait_barrier sees array_frozen == 1, it stops and waits for
freeze_array to complete.
Prevent the deadlock by making close_sync call _wait_barrier and
_allow_barrier for one bucket at a time, instead of deferring the
_allow_barrier calls until after all _wait_barriers are complete.
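A minimal sketch of the per-bucket loop described above (the bucket-count macro and helper signatures are assumptions based on the raid1 barrier code):
/* in close_sync(): wait for and release each barrier bucket in turn */
for (idx = 0; idx < BARRIER_BUCKETS_NR; idx++) {
	_wait_barrier(conf, idx);
	_allow_barrier(conf, idx);
}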
Signed-off-by: Nate Dailey <nate.dailey@stratus.com>
Fixes:
fd76863e37fe ("RAID1: a new I/O barrier implementation to remove resync window")
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Thu, 19 Oct 2017 17:00:48 +0000 (10:00 -0700)]
block: Fix a race between blk_cleanup_queue() and timeout handling
commit
4e9b6f20828ac880dbc1fa2fdbafae779473d1af upstream.
Make sure that if the timeout timer fires after a queue has been
marked "dying" that the affected requests are finished.
Reported-by: chenxiang (M) <chenxiang66@hisilicon.com>
Fixes: commit
287922eb0b18 ("block: defer timeouts to a workqueue")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Tested-by: chenxiang (M) <chenxiang66@hisilicon.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrey Konovalov [Tue, 26 Sep 2017 15:11:33 +0000 (17:11 +0200)]
p54: don't unregister leds when they are not initialized
commit
fc09785de0a364427a5df63d703bae9a306ed116 upstream.
ieee80211_register_hw() in p54_register_common() may fail and leds won't
get initialized. Currently p54_unregister_common() doesn't check that and
always calls p54_unregister_leds(). The fix is to check priv->registered
flag before calling p54_unregister_leds().
Found by syzkaller.
INFO: trying to register non-static key.
the code is fine but needs lockdep annotation.
turning off the locking correctness validator.
CPU: 1 PID: 1404 Comm: kworker/1:1 Not tainted
4.14.0-rc1-42251-gebb2c2437d80-dirty #205
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Workqueue: usb_hub_wq hub_event
Call Trace:
__dump_stack lib/dump_stack.c:16
dump_stack+0x292/0x395 lib/dump_stack.c:52
register_lock_class+0x6c4/0x1a00 kernel/locking/lockdep.c:769
__lock_acquire+0x27e/0x4550 kernel/locking/lockdep.c:3385
lock_acquire+0x259/0x620 kernel/locking/lockdep.c:4002
flush_work+0xf0/0x8c0 kernel/workqueue.c:2886
__cancel_work_timer+0x51d/0x870 kernel/workqueue.c:2961
cancel_delayed_work_sync+0x1f/0x30 kernel/workqueue.c:3081
p54_unregister_leds+0x6c/0xc0 drivers/net/wireless/intersil/p54/led.c:160
p54_unregister_common+0x3d/0xb0 drivers/net/wireless/intersil/p54/main.c:856
p54u_disconnect+0x86/0x120 drivers/net/wireless/intersil/p54/p54usb.c:1073
usb_unbind_interface+0x21c/0xa90 drivers/usb/core/driver.c:423
__device_release_driver drivers/base/dd.c:861
device_release_driver_internal+0x4f4/0x5c0 drivers/base/dd.c:893
device_release_driver+0x1e/0x30 drivers/base/dd.c:918
bus_remove_device+0x2f4/0x4b0 drivers/base/bus.c:565
device_del+0x5c4/0xab0 drivers/base/core.c:1985
usb_disable_device+0x1e9/0x680 drivers/usb/core/message.c:1170
usb_disconnect+0x260/0x7a0 drivers/usb/core/hub.c:2124
hub_port_connect drivers/usb/core/hub.c:4754
hub_port_connect_change drivers/usb/core/hub.c:5009
port_event drivers/usb/core/hub.c:5115
hub_event+0x1318/0x3740 drivers/usb/core/hub.c:5195
process_one_work+0xc7f/0x1db0 kernel/workqueue.c:2119
process_scheduled_works kernel/workqueue.c:2179
worker_thread+0xb2b/0x1850 kernel/workqueue.c:2255
kthread+0x3a1/0x470 kernel/kthread.c:231
ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Acked-by: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Anup Patel [Tue, 3 Oct 2017 05:21:48 +0000 (10:51 +0530)]
mailbox: bcm-flexrm-mailbox: Fix FlexRM ring flush sequence
commit
a371c10ea4b38a5f120e86d906d404d50a0f4660 upstream.
As-per suggestion from FlexRM HW folks, we have to first set
FlexRM ring flush state and then clear it for FlexRM ring flush
to work properly.
Currently, the FlexRM driver has incomplete FlexRM ring flush
sequence which causes repeated insmod+rmmod of mailbox client
drivers to fail.
This patch fixes FlexRM ring flush sequence in flexrm_shutdown()
as described above.
Fixes:
dbc049eee730 ("mailbox: Add driver for Broadcom FlexRM ring manager")
Signed-off-by: Anup Patel <anup.patel@broadcom.com>
Reviewed-by: Scott Branden <scott.branden@broadcom.com>
Signed-off-by: Jassi Brar <jaswinder.singh@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Xiaolei Li [Mon, 30 Oct 2017 02:39:56 +0000 (10:39 +0800)]
mtd: nand: mtk: fix infinite ECC decode IRQ issue
commit
1d2fcdcf33339c7c8016243de0f7f31cf6845e8d upstream.
For MT2701 NAND Controller, there may generate infinite ECC decode IRQ
during long time burn test on some platforms. Once this issue occurred,
the ECC decode IRQ status cannot be cleared in the IRQ handler function,
and threads cannot be scheduled.
The ECC HW generates a decode IRQ for each sector, so there will be more
than one decode IRQ when reading one page of a large-page NAND.
Currently, the ECC IRQ handling flow first checks whether it is a decode
IRQ by reading the register ECC_DECIRQ_STA. This is a read-clear type
register, so if this is a decode IRQ, the ECC IRQ signal is cleared at
the same time.
Secondly, we check whether all sectors are decoded by reading the
register ECC_DECDONE. This is because the current IRQ may not be handled
in time, and the next sectors may already have been decoded before
ECC_DECIRQ_STA is read. In that case, the next sectors' decode IRQs will
not be generated.
Thirdly, if all sectors are decoded by comparing with ecc->sectors, then we
will complete ecc->done, set ecc->sectors as 0, and disable ECC IRQ by
programming the register ECC_IRQ_REG(op) as 0. Otherwise, wait for the
next ECC IRQ.
But there is a timing issue between steps one and two. When we read the
register ECC_DECIRQ_STA, all sectors except the last one have been
decoded, and the ECC IRQ signal is cleared. The last sector is then
decoded before ECC_DECDONE is read, so the ECC IRQ signal is raised
again by the ECC HW, which means we will receive one extra ECC IRQ
later. In step three, we find that all sectors were decoded, then
disable the ECC IRQ and return.
When deal with the extra ECC IRQ, the ECC IRQ status cannot be cleared
anymore. That is because the register ECC_DECIRQ_STA can only be cleared
when the register ECC_IRQ_REG(op) is enabled. But actually we have
disabled ECC IRQ in the previous ECC IRQ handle. So, there will
keep receiving ECC decode IRQ.
Now, we read the register ECC_DECIRQ_STA once again before completing the
ecc done event. This ensures that there will be no extra ECC decode IRQ.
Also, remove writel(0, ecc->regs + ECC_IRQ_REG(op)) from the irq
handler, because the ECC IRQ is disabled in mtk_ecc_disable(). And clear
ECC_DECIRQ_STA in mtk_ecc_disable() in case waiting for the decode IRQ
times out.
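A hedged sketch of the extra read-back in the interrupt handler (register and field names are the ones quoted above; the surrounding handler is elided):
if (dec & ecc->sectors) {
	/*
	 * Flush any decode IRQ raised for the last sector between the
	 * first ECC_DECIRQ_STA read and the ECC_DECDONE read, so no
	 * stray interrupt arrives after the engine is disabled.
	 */
	readw(ecc->regs + ECC_DECIRQ_STA);
	ecc->sectors = 0;
	complete(&ecc->done);
}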
Fixes:
1d6b1e464950 ("mtd: mediatek: driver for MTK Smart Device")
Signed-off-by: Xiaolei Li <xiaolei.li@mediatek.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brent Taylor [Tue, 31 Oct 2017 03:32:45 +0000 (22:32 -0500)]
mtd: nand: Fix writing mtdoops to nand flash.
commit
30863e38ebeb500a31cecee8096fb5002677dd9b upstream.
When mtdoops calls mtd_panic_write(), it eventually calls
panic_nand_write() in nand_base.c. In order to properly wait for the
nand chip to be ready in panic_nand_wait(), the chip must first be
selected.
When using the atmel nand flash controller, a panic would occur due to
a NULL pointer exception.
Fixes:
2af7c6539931 ("mtd: Add panic_write for NAND flashes")
Signed-off-by: Brent Taylor <motobud@gmail.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Roger Quadros [Fri, 20 Oct 2017 12:16:21 +0000 (15:16 +0300)]
mtd: nand: omap2: Fix subpage write
commit
739c64414f01748a36e7d82c8e0611dea94412bd upstream.
Since v4.12, NAND subpage writes were causing a NULL pointer
dereference on OMAP platforms (omap2-nand) using OMAP_ECC_BCH4_CODE_HW,
OMAP_ECC_BCH8_CODE_HW and OMAP_ECC_BCH16_CODE_HW.
This is because for those ECC modes, omap_calculate_ecc_bch()
generates ECC bytes for the entire (multi-sector) page and this can
overflow the ECC buffer provided by nand_write_subpage_hwecc()
as it expects ecc.calculate() to return ECC bytes for just one sector.
However, the root cause of the problem is present since v3.9
but was not seen then as NAND buffers were being allocated
as one big chunk prior to commit
3deb9979c731 ("mtd: nand: allocate
aligned buffers if NAND_OWN_BUFFERS is unset").
Fix the issue by providing a OMAP optimized write_subpage()
implementation.
Fixes:
62116e5171e0 ("mtd: nand: omap2: Support for hardware BCH error correction.")
Signed-off-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Boris Brezillon [Thu, 5 Oct 2017 16:57:24 +0000 (18:57 +0200)]
mtd: nand: atmel: Actually use the PM ops
commit
1533bfa6f6b6bcca1ea1f172ef4a1c5ce5e7b335 upstream.
commit
6e532afaca8e ("mtd: nand: atmel: Add PM ops") was defining PM
ops but nothing was using/referencing those PM ops.
Fixes:
6e532afaca8e ("mtd: nand: atmel: Add PM ops")
Cc: Romain Izard <romain.izard.pro@gmail.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Acked-by: Wenyou Yang <wenyou.yang@microchip.com>
Tested-by: Romain Izard <romain.izard.pro@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Boris Brezillon [Thu, 5 Oct 2017 16:53:19 +0000 (18:53 +0200)]
mtd: nand: Export nand_reset() symbol
commit
b9bb98424c51437973b854691aa1e9b2bfd348f5 upstream.
Commit
6e532afaca8e ("mtd: nand: atmel: Add PM ops") started to use the
nand_reset() function which was not yet exported by the NAND framework
(because it was only used internally before that). Export this symbol
to avoid build errors when the driver is enabled as a module.
Fixes:
6e532afaca8e ("mtd: nand: atmel: Add PM ops")
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Boris Brezillon [Sat, 11 Nov 2017 15:08:34 +0000 (16:08 +0100)]
mtd: Avoid probe failures when mtd->dbg.dfs_dir is invalid
commit
1530578abdac4edce9244c7a1962ded3ffdb58ce upstream.
Commit
e8e3edb95ce6 ("mtd: create per-device and module-scope debugfs
entries") tried to make MTD related debugfs stuff consistent across the
MTD framework by creating a root <debugfs>/mtd/ directory containing
one directory per MTD device.
The problem is that, by default, the MTD layer only registers the
master device if no partitions are defined for this master. This
behavior breaks all drivers that expect mtd->dbg.dfs_dir to be filled
correctly after calling mtd_device_register() in order to add their own
debugfs entries.
The only way we can force all MTD masters to be registered no matter if
they expose partitions or not is by enabling the
CONFIG_MTD_PARTITIONED_MASTER option.
In such situations, there's no other solution but to accept skipping
debugfs initialization when dbg.dfs_dir is invalid, and when this
happens, inform the user that he should consider enabling
CONFIG_MTD_PARTITIONED_MASTER.
Fixes:
e8e3edb95ce6 ("mtd: create per-device and module-scope debugfs entries")
Cc: Mario J. Rugiero <mrugiero@gmail.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Reported-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Bellinger [Sat, 28 Oct 2017 06:19:26 +0000 (22:19 -0800)]
target: Avoid early CMD_T_PRE_EXECUTE failures during ABORT_TASK
commit
1c21a48055a67ceb693e9c2587824a8de60a217c upstream.
This patch fixes a bug where early se_cmd exceptions that occur
before backend execution can result in use-after-free if/when
a subsequent ABORT_TASK occurs for the same tag.
Since an early se_cmd exception will have had se_cmd added to
se_session->sess_cmd_list via target_get_sess_cmd(), it will
not have CMD_T_COMPLETE set by the usual target_complete_cmd()
backend completion path.
This causes a subsequent ABORT_TASK + __target_check_io_state()
to signal ABORT_TASK should proceed. As core_tmr_abort_task()
executes, it will bring the outstanding se_cmd->cmd_kref count
down to zero releasing se_cmd, after se_cmd has already been
queued with error status into fabric driver response path code.
To address this bug, introduce a CMD_T_PRE_EXECUTE bit that is
set at target_get_sess_cmd() time, and cleared immediately before
backend driver dispatch in target_execute_cmd() once CMD_T_ACTIVE
is set.
Then, check CMD_T_PRE_EXECUTE within __target_check_io_state() to
determine when an early exception has occurred, and avoid aborting
this se_cmd since it will have already been queued into fabric
driver response path code.
Reported-by: Donald White <dew@datera.io>
Cc: Donald White <dew@datera.io>
Cc: Mike Christie <mchristi@redhat.com>
Cc: Hannes Reinecke <hare@suse.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Bellinger [Fri, 29 Sep 2017 23:43:11 +0000 (16:43 -0700)]
target: Fix quiese during transport_write_pending_qf endless loop
commit
9574a497df2bbc0a676b609ce0dd24d237cee3a6 upstream.
This patch fixes a potential endless loop during QUEUE_FULL,
where the cmd->se_tfo->write_pending() callback fails repeatedly
but __transport_wait_for_tasks() has already been invoked to
quiesce the outstanding se_cmd descriptor.
To address this bug, this patch adds a CMD_T_STOP|CMD_T_ABORTED
check within transport_write_pending_qf() and invokes the
existing se_cmd->t_transport_stop_comp to signal quiesce
completion back to __transport_wait_for_tasks().
Cc: Mike Christie <mchristi@redhat.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Bryant G. Ly <bryantly@linux.vnet.ibm.com>
Cc: Michael Cyr <mikecyr@linux.vnet.ibm.com>
Cc: Potnuri Bharat Teja <bharat@chelsio.com>
Cc: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Bellinger [Fri, 29 Sep 2017 23:03:24 +0000 (16:03 -0700)]
target: Fix caw_sem leak in transport_generic_request_failure
commit
fd2f928b0ddd2fe8876d4f1344df2ace2b715a4d upstream.
With the recent addition of transport_check_aborted_status() within
transport_generic_request_failure() to avoid sending a SCSI status
exception after CMD_T_ABORTED w/ TAS=1 has occurred, a
COMPARE_AND_WRITE early-failure regression was introduced.
Namely when COMPARE_AND_WRITE fails and se_device->caw_sem has
been taken by sbc_compare_and_write(), if the new check for
transport_check_aborted_status() returns true and exits,
cmd->transport_complete_callback() -> compare_and_write_post()
is skipped never releasing se_device->caw_sem.
This regression was originally introduced by:
commit
e3b88ee95b4e4bf3e9729a4695d695b9c7c296c8
Author: Bart Van Assche <bart.vanassche@sandisk.com>
Date: Tue Feb 14 16:25:45 2017 -0800
target: Fix handling of aborted failed commands
To address this bug, move the transport_check_aborted_status()
call after transport_complete_task_attr() and
cmd->transport_complete_callback().
Cc: Mike Christie <mchristi@redhat.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Bellinger [Fri, 22 Sep 2017 23:48:28 +0000 (16:48 -0700)]
target: Fix QUEUE_FULL + SCSI task attribute handling
commit
1c79df1f349fb6050016cea4ef1dfbc3853a5685 upstream.
This patch fixes a bug during QUEUE_FULL where transport_complete_qf()
calls transport_complete_task_attr() after it's already been invoked
by target_complete_ok_work() or transport_generic_request_failure()
during initial completion, preceding QUEUE_FULL.
This will result in se_device->simple_cmds, se_device->dev_cur_ordered_id
and/or se_device->dev_ordered_sync being updated multiple times for
a single se_cmd.
To address this bug, clear SCF_TASK_ATTR_SET after the first call
to transport_complete_task_attr(), and avoid updating SCSI task
attribute related counters for any subsequent calls.
Also, when a se_cmd is deferred due to ordered tags and executed
via target_restart_delayed_cmds(), set CMD_T_SENT before execution
matching what target_execute_cmd() does.
Cc: Michael Cyr <mikecyr@linux.vnet.ibm.com>
Cc: Bryant G. Ly <bryantly@linux.vnet.ibm.com>
Cc: Mike Christie <mchristi@redhat.com>
Cc: Hannes Reinecke <hare@suse.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
tangwenji [Thu, 17 Aug 2017 11:51:54 +0000 (19:51 +0800)]
target: fix buffer offset in core_scsi3_pri_read_full_status
commit
c58a252beb04cf0e02d6a746b2ed7ea89b6deb71 upstream.
When at least two initiators register PR (persistent reservations) on
the same LUN, the target returns exception data because of a buffer
offset error; as a result, running the command 'sg_persist -s' on the
initiator may make it segfault.
This fixes a regression originally introduced by:
commit
a85d667e58bddf73be84d1981b41eaac985ed216
Author: Bart Van Assche <bart.vanassche@sandisk.com>
Date: Tue May 23 16:48:27 2017 -0700
target: Use {get,put}_unaligned_be*() instead of open coding these functions
Signed-off-by: tangwenji <tang.wenji@zte.com.cn>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>