Gao Xiang [Wed, 27 Feb 2019 05:33:30 +0000 (13:33 +0800)]
staging: erofs: compressed_pages should not be accessed again after freed
commit
af692e117cb8cd9d3d844d413095775abc1217f9 upstream.
This patch resolves the following page use-after-free issue,
z_erofs_vle_unzip:
...
for (i = 0; i < nr_pages; ++i) {
...
z_erofs_onlinepage_endio(page); (1)
}
for (i = 0; i < clusterpages; ++i) {
page = compressed_pages[i];
if (page->mapping == mngda) (2)
continue;
/* recycle all individual staging pages */
(void)z_erofs_gather_if_stagingpage(page_pool, page); (3)
WRITE_ONCE(compressed_pages[i], NULL);
}
...
After (1) is executed, the page is freed and could then be reused. If
compressed_pages is scanned after that, it could fall into (2) or
(3) by mistake and end up in a mess.
This patch aims to solve the above issue with as few changes as
possible in order to make the fix easier to backport.
Fixes:
3883a79abd02 ("staging: erofs: introduce VLE decompression support")
Cc: <stable@vger.kernel.org> # 4.19+
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gao Xiang [Wed, 27 Feb 2019 05:33:31 +0000 (13:33 +0800)]
staging: erofs: fix illegal address access under memory pressure
commit
1e5ceeab6929585512c63d05911d6657064abf7b upstream.
Consider a read request with two decompressed file pages:
if a decompression work cannot be started on the previous page
due to memory pressure but the in-memory LTP map lookup is done,
builder->work is still NULL.
Moreover, if the current page also belongs to the same map,
it won't try to start the decompression work again and then
runs into trouble.
This patch aims to solve the above issue with as few changes as
possible in order to make the fix easier to backport.
kernel message is:
<4>[1051408.015930s]SLUB: Unable to allocate memory on node -1, gfp=0x2408040(GFP_NOFS|__GFP_ZERO)
<4>[1051408.015930s] cache: erofs_compress, object size: 144, buffer size: 144, default order: 0, min order: 0
<4>[1051408.015930s] node 0: slabs: 98, objs: 2744, free: 0
* Cannot allocate the decompression work
<3>[1051408.015960s]erofs: z_erofs_vle_normalaccess_readpages, readahead error at page 1008 of nid 5391488
* Note that the previous page failed to read
<0>[1051408.015960s]Internal error: Accessing user space memory outside uaccess.h routines: 96000005 [#1] PREEMPT SMP
...
<4>[1051408.015991s]Hardware name: kirin710 (DT)
...
<4>[1051408.016021s]PC is at z_erofs_vle_work_add_page+0xa0/0x17c
<4>[1051408.016021s]LR is at z_erofs_do_read_page+0x12c/0xcf0
...
<4>[1051408.018096s][<ffffff80c6fb0fd4>] z_erofs_vle_work_add_page+0xa0/0x17c
<4>[1051408.018096s][<ffffff80c6fb3814>] z_erofs_vle_normalaccess_readpages+0x1a0/0x37c
<4>[1051408.018096s][<ffffff80c6d670b8>] read_pages+0x70/0x190
<4>[1051408.018127s][<ffffff80c6d6736c>] __do_page_cache_readahead+0x194/0x1a8
<4>[1051408.018127s][<ffffff80c6d59318>] filemap_fault+0x398/0x684
<4>[1051408.018127s][<ffffff80c6d8a9e0>] __do_fault+0x8c/0x138
<4>[1051408.018127s][<ffffff80c6d8f90c>] handle_pte_fault+0x730/0xb7c
<4>[1051408.018127s][<ffffff80c6d8fe04>] __handle_mm_fault+0xac/0xf4
<4>[1051408.018157s][<ffffff80c6d8fec8>] handle_mm_fault+0x7c/0x118
<4>[1051408.018157s][<ffffff80c8c52998>] do_page_fault+0x354/0x474
<4>[1051408.018157s][<ffffff80c8c52af8>] do_translation_fault+0x40/0x48
<4>[1051408.018157s][<ffffff80c6c002f4>] do_mem_abort+0x80/0x100
<4>[1051408.018310s]---[ end trace 9f4009a3283bd78b ]---
Fixes:
3883a79abd02 ("staging: erofs: introduce VLE decompression support")
Cc: <stable@vger.kernel.org> # 4.19+
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mans Rullgard [Thu, 14 Feb 2019 19:45:33 +0000 (19:45 +0000)]
USB: serial: ftdi_sio: add ID for Hjelmslund Electronics USB485
commit
8d7fa3d4ea3f0ca69554215e87411494e6346fdc upstream.
This adds the USB ID of the Hjelmslund Electronics USB485 Iso stick.
Signed-off-by: Mans Rullgard <mans@mansr.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ivan Mironov [Wed, 6 Feb 2019 16:14:13 +0000 (21:14 +0500)]
USB: serial: cp210x: add ID for Ingenico 3070
commit
dd9d3d86b08d6a106830364879c42c78db85389c upstream.
Here is how this device appears in kernel log:
usb 3-1: new full-speed USB device number 18 using xhci_hcd
usb 3-1: New USB device found, idVendor=0b00, idProduct=3070
usb 3-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 3-1: Product: Ingenico 3070
usb 3-1: Manufacturer: Silicon Labs
usb 3-1: SerialNumber: 0001
Apparently this is a POS terminal with embedded USB-to-Serial converter.
Cc: stable@vger.kernel.org
Signed-off-by: Ivan Mironov <mironov.ivan@gmail.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniele Palmas [Wed, 20 Feb 2019 10:43:17 +0000 (11:43 +0100)]
USB: serial: option: add Telit ME910 ECM composition
commit
6431866b6707d27151be381252d6eef13025cfce upstream.
This patch adds Telit ME910 family ECM composition 0x1102.
Signed-off-by: Daniele Palmas <dnlplm@gmail.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gao Xiang [Wed, 27 Feb 2019 05:33:32 +0000 (13:33 +0800)]
staging: erofs: fix mis-acted TAIL merging behavior
commit
a112152f6f3a2a88caa6f414d540bd49e406af60 upstream.
EROFS has an optimized path called TAIL merging, which is designed
to merge multiple reads and the corresponding decompressions into
one if these requests read continuous pages almost at the same time.
In general, it behaves as follows:
________________________________________________________________
... | TAIL . HEAD | PAGE | PAGE | TAIL . HEAD | ...
_____|_combined page A_|________|________|_combined page B_|____
1 ] -> [ 2 ] -> [ 3
If the above three reads are requested in the order 1-2-3, it will
generate a large work chain rather than 3 individual work chains
to reduce scheduling overhead and boost sequential read.
However, if Read 2 is processed slightly earlier than Read 1,
it currently still generates 2 individual work chains (chains 1 and 2)
but does in-place decompression for combined page A. Moreover,
if chain 2 decompresses ahead of chain 1, there is a race that
leads to a corrupted decompressed page. This patch fixes it.
Fixes:
3883a79abd02 ("staging: erofs: introduce VLE decompression support")
Cc: <stable@vger.kernel.org> # 4.19+
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Viresh Kumar [Fri, 25 Jan 2019 07:23:07 +0000 (12:53 +0530)]
cpufreq: Use struct kobj_attribute instead of struct global_attr
commit
625c85a62cb7d3c79f6e16de3cfa972033658250 upstream.
The cpufreq_global_kobject is created using kobject_create_and_add()
helper, which assigns the kobj_type as dynamic_kobj_ktype and show/store
routines are set to kobj_attr_show() and kobj_attr_store().
These routines pass struct kobj_attribute as an argument to the
show/store callbacks. But all the cpufreq files created using the
cpufreq_global_kobject expect the argument to be of type struct
attribute. Things work fine currently as no one accesses the "attr"
argument. We may not see issues even if the argument is used, as struct
kobj_attribute has struct attribute as its first element and so they
will both get the same address.
But this is logically incorrect and we should rather use struct
kobj_attribute instead of struct global_attr in the cpufreq core and
drivers and the show/store callbacks should take struct kobj_attribute
as argument instead.
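As a hedged illustration of the prototype change (the attribute name below is invented for this sketch, not one of the real cpufreq files), a global sysfs attribute registered under a kobject_create_and_add() kobject would be declared with struct kobj_attribute so the callback matches what kobj_attr_show() passes in:

	#include <linux/kobject.h>
	#include <linux/sysfs.h>

	/* illustrative read-only attribute; "example" is not a real cpufreq file */
	static ssize_t example_show(struct kobject *kobj,
				    struct kobj_attribute *attr, char *buf)
	{
		return sprintf(buf, "%u\n", 42U);
	}

	static struct kobj_attribute example_attr = __ATTR_RO(example);

	/* registered against cpufreq_global_kobject, e.g.:
	 *	sysfs_create_file(cpufreq_global_kobject, &example_attr.attr);
	 */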
This bug is caught using CFI CLANG builds in android kernel which
catches mismatch in function prototypes for such callbacks.
Reported-by: Donghee Han <dh.han@samsung.com>
Reported-by: Sangkyu Kim <skwith.kim@samsung.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Tue, 5 Mar 2019 16:58:54 +0000 (17:58 +0100)]
Linux 4.19.27
Andy Lutomirski [Sat, 23 Feb 2019 01:17:04 +0000 (17:17 -0800)]
x86/uaccess: Don't leak the AC flag into __put_user() value evaluation
commit
2a418cf3f5f1caf911af288e978d61c9844b0695 upstream.
When calling __put_user(foo(), ptr), the __put_user() macro would call
foo() in between __uaccess_begin() and __uaccess_end(). If that code
were buggy, then those bugs would be run without SMAP protection.
Fortunately, there seem to be few instances of the problem in the
kernel. Nevertheless, __put_user() should be fixed to avoid doing this.
Therefore, evaluate __put_user()'s argument before setting AC.
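A rough, hedged sketch of that ordering (heavily simplified; __do_put_user() below is an invented stand-in for the real store sequence, not a kernel helper):

	#define put_user_sketch(x, ptr) ({					\
		__typeof__(*(ptr)) __val = (x);	/* argument runs with AC clear */ \
		int __err;							\
		__uaccess_begin();		/* stac: opens the SMAP window  */ \
		__err = __do_put_user(__val, ptr); /* invented helper for the store */ \
		__uaccess_end();		/* clac: closes the window      */ \
		__err;								\
	})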
This issue was noticed when an objtool hack by Peter Zijlstra complained
about genregs_get() and I compared the assembly output to the C source.
[ bp: Massage commit message and fixed up whitespace. ]
Fixes:
11f1a4b9755f ("x86: reorganize SMAP handling in user space accesses")
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/20190225125231.845656645@infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Burton [Fri, 1 Mar 2019 22:58:09 +0000 (22:58 +0000)]
MIPS: eBPF: Fix icache flush end address
commit
d1a2930d8a992fb6ac2529449f81a0056e1b98d1 upstream.
The MIPS eBPF JIT calls flush_icache_range() in order to ensure the
icache observes the code that we just wrote. Unfortunately it gets the
end address calculation wrong due to some bad pointer arithmetic.
The struct jit_ctx target field is of type pointer to u32, and as such
adding one to it will increment the address being pointed to by 4 bytes.
Therefore in order to find the address of the end of the code we simply
need to add the number of 4 byte instructions emitted, but we mistakenly
add the number of instructions multiplied by 4. This results in the call
to flush_icache_range() operating on a memory region 4x larger than
intended, which is always wasteful and can cause crashes if we overrun
into an unmapped page.
Fix this by correcting the pointer arithmetic to remove the bogus
multiplication, and use braces to remove the need for a set of brackets
whilst also making it obvious that the target field is a pointer.
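A hedged sketch of the corrected call (field and variable names follow the JIT loosely):

	/* ctx->target is a u32 *, so array indexing already scales by 4 */
	flush_icache_range((unsigned long)ctx->target,
			   (unsigned long)&ctx->target[ctx->idx]);
	/* previously the end was computed as ctx->target + ctx->idx * 4,
	 * i.e. a region four times larger than the emitted code */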
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes:
b6bd53f9c4e8 ("MIPS: Add missing file for eBPF JIT.")
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Song Liu <songliubraving@fb.com>
Cc: Yonghong Song <yhs@fb.com>
Cc: netdev@vger.kernel.org
Cc: bpf@vger.kernel.org
Cc: linux-mips@vger.kernel.org
Cc: stable@vger.kernel.org # v4.13+
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jonas Gorski [Thu, 21 Feb 2019 09:56:42 +0000 (10:56 +0100)]
MIPS: BCM63XX: provide DMA masks for ethernet devices
commit
18836b48ebae20850631ee2916d0cdbb86df813d upstream.
The switch to the generic DMA ops made DMA masks mandatory, breaking
devices that did not have them set. In the case of bcm63xx, it broke
ethernet, with the following warning when trying to bring up the device:
[ 2.633123] ------------[ cut here ]------------
[ 2.637949] WARNING: CPU: 0 PID: 325 at ./include/linux/dma-mapping.h:516 bcm_enetsw_open+0x160/0xbbc
[ 2.647423] Modules linked in: gpio_button_hotplug
[ 2.652361] CPU: 0 PID: 325 Comm: ip Not tainted 4.19.16 #0
[ 2.658080] Stack : 80520000 804cd3ec 00000000 00000000 804ccc00 87085bdc 87d3f9d4 804f9a17
[ 2.666707] 8049cf18 00000145 80a942a0 00000204 80ac0000 10008400 87085b90 eb3d5ab7
[ 2.675325] 00000000 00000000 80ac0000 000022b0 00000000 00000000 00000007 00000000
[ 2.683954] 0000007a 80500000 0013b381 00000000 80000000 00000000 804a1664 80289878
[ 2.692572] 00000009 00000204 80ac0000 00000200 00000002 00000000 00000000 80a90000
[ 2.701191] ...
[ 2.703701] Call Trace:
[ 2.706244] [<8001f3c8>] show_stack+0x58/0x100
[ 2.710840] [<800336e4>] __warn+0xe4/0x118
[ 2.715049] [<800337d4>] warn_slowpath_null+0x48/0x64
[ 2.720237] [<80289878>] bcm_enetsw_open+0x160/0xbbc
[ 2.725347] [<802d1d4c>] __dev_open+0xf8/0x16c
[ 2.729913] [<802d20cc>] __dev_change_flags+0x100/0x1c4
[ 2.735290] [<802d21b8>] dev_change_flags+0x28/0x70
[ 2.740326] [<803539e0>] devinet_ioctl+0x310/0x7b0
[ 2.745250] [<80355fd8>] inet_ioctl+0x1f8/0x224
[ 2.749939] [<802af290>] sock_ioctl+0x30c/0x488
[ 2.754632] [<80112b34>] do_vfs_ioctl+0x740/0x7dc
[ 2.759459] [<80112c20>] ksys_ioctl+0x50/0x94
[ 2.763955] [<800240b8>] syscall_common+0x34/0x58
[ 2.768782] ---[ end trace fb1a6b14d74e28b6 ]---
[ 2.773544] bcm63xx_enetsw bcm63xx_enetsw.0: cannot allocate rx ring 512
Fix this by adding appropriate DMA masks for the platform devices.
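A hedged sketch of what such a platform-device definition could look like (device and symbol names are illustrative):

	#include <linux/dma-mapping.h>
	#include <linux/platform_device.h>

	static u64 enet_dmamask = DMA_BIT_MASK(32);

	static struct platform_device bcm63xx_enet_device = {
		.name	= "bcm63xx_enet",
		.id	= 0,
		.dev	= {
			.dma_mask	   = &enet_dmamask,
			.coherent_dma_mask = DMA_BIT_MASK(32),
		},
	};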
Fixes:
f8c55dc6e828 ("MIPS: use generic dma noncoherent ops for simple noncoherent platforms")
Signed-off-by: Jonas Gorski <jonas.gorski@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@linux-mips.org
Cc: linux-kernel@vger.kernel.org
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael Clark [Mon, 11 Feb 2019 04:38:29 +0000 (17:38 +1300)]
MIPS: fix truncation in __cmpxchg_small for short values
commit
94ee12b507db8b5876e31c9d6c9d84f556a4b49f upstream.
__cmpxchg_small erroneously uses u8 for load comparison which can
be either char or short. This patch changes the local variable to
u32 which is sufficiently sized, as the loaded value is already
masked and shifted appropriately. Using an integer size avoids
any unnecessary canonicalization from use of non native widths.
This patch is part of a series that adapts the MIPS small word
atomics code for xchg and cmpxchg on short and char to RISC-V.
Cc: RISC-V Patches <patches@groups.riscv.org>
Cc: Linux RISC-V <linux-riscv@lists.infradead.org>
Cc: Linux MIPS <linux-mips@linux-mips.org>
Signed-off-by: Michael Clark <michaeljclark@mac.com>
[paul.burton@mips.com:
- Fix variable typo per Jonas Gorski.
- Consolidate load variable with other declarations.]
Signed-off-by: Paul Burton <paul.burton@mips.com>
Fixes:
3ba7f44d2b19 ("MIPS: cmpxchg: Implement 1 byte & 2 byte cmpxchg()")
Cc: stable@vger.kernel.org # v4.13+
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mike Kravetz [Fri, 1 Mar 2019 00:22:02 +0000 (16:22 -0800)]
hugetlbfs: fix races and page leaks during migration
commit
cb6acd01e2e43fd8bad11155752b7699c3d0fb76 upstream.
hugetlb pages should only be migrated if they are 'active'. The
routines set/clear_page_huge_active() modify the active state of hugetlb
pages.
When a new hugetlb page is allocated at fault time, set_page_huge_active
is called before the page is locked. Therefore, another thread could
race and migrate the page while it is being added to page table by the
fault code. This race is somewhat hard to trigger, but can be seen by
strategically adding udelay to simulate worst case scheduling behavior.
Depending on 'how' the code races, various BUG()s could be triggered.
To address this issue, simply delay the set_page_huge_active call until
after the page is successfully added to the page table.
Hugetlb pages can also be leaked at migration time if the pages are
associated with a file in an explicitly mounted hugetlbfs filesystem.
For example, consider a two node system with 4GB worth of huge pages
available. A program mmaps a 2G file in a hugetlbfs filesystem. It
then migrates the pages associated with the file from one node to
another. When the program exits, huge page counts are as follows:
node0
1024 free_hugepages
1024 nr_hugepages
node1
0 free_hugepages
1024 nr_hugepages
Filesystem Size Used Avail Use% Mounted on
nodev 4.0G 2.0G 2.0G 50% /var/opt/hugepool
That is as expected. 2G of huge pages are taken from the free_hugepages
counts, and 2G is the size of the file in the explicitly mounted
filesystem. If the file is then removed, the counts become:
node0
1024 free_hugepages
1024 nr_hugepages
node1
1024 free_hugepages
1024 nr_hugepages
Filesystem Size Used Avail Use% Mounted on
nodev 4.0G 2.0G 2.0G 50% /var/opt/hugepool
Note that the filesystem still shows 2G of pages used, while there
actually are no huge pages in use. The only way to 'fix' the filesystem
accounting is to unmount the filesystem.
If a hugetlb page is associated with an explicitly mounted filesystem,
this information is contained in the page_private field. At migration
time, this information is not preserved. To fix, simply transfer
page_private from old to new page at migration time if necessary.
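A hedged sketch of that transfer, roughly as it would appear in the hugetlbfs migration callback:

	if (page_private(page) && !page_private(newpage)) {
		set_page_private(newpage, page_private(page));
		set_page_private(page, 0);
	}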
There is a related race with removing a huge page from a file and
migration. When a huge page is removed from the pagecache, the
page_mapping() field is cleared, yet page_private remains set until the
page is actually freed by free_huge_page(). A page could be migrated
while in this state. However, since page_mapping() is not set the
hugetlbfs specific routine to transfer page_private is not called and we
leak the page count in the filesystem.
To fix that, check for this condition before migrating a huge page. If
the condition is detected, return EBUSY for the page.
Link: http://lkml.kernel.org/r/74510272-7319-7372-9ea6-ec914734c179@oracle.com
Link: http://lkml.kernel.org/r/20190212221400.3512-1-mike.kravetz@oracle.com
Fixes:
bcc54222309c ("mm: hugetlb: introduce page_huge_active")
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: <stable@vger.kernel.org>
[mike.kravetz@oracle.com: v2]
Link: http://lkml.kernel.org/r/7534d322-d782-8ac6-1c8d-a8dc380eb3ab@oracle.com
[mike.kravetz@oracle.com: update comment and changelog]
Link: http://lkml.kernel.org/r/420bcfd6-158b-38e4-98da-26d0cd85bd01@oracle.com
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicholas Kazlauskas [Mon, 7 Jan 2019 17:41:46 +0000 (12:41 -0500)]
drm: Block fb changes for async plane updates
commit
2216322919c8608a448d7ebc560a845238a5d6b6 upstream.
The prepare_fb call always happens on new_plane_state.
The drm_atomic_helper_cleanup_planes checks to see if
plane state pointer has changed when deciding to call cleanup_fb on
either the new_plane_state or the old_plane_state.
For a non-async atomic commit the state pointer is swapped, so this
helper calls prepare_fb on the new_plane_state and cleanup_fb on the
old_plane_state. This makes sense, since we want to prepare the
framebuffer we are going to use and clean up the framebuffer we are
no longer using.
For the async atomic update helpers this differs. The async atomic
update helpers perform in-place updates on the existing state. They call
drm_atomic_helper_cleanup_planes but the state pointer is not swapped.
This means that prepare_fb is called on the new_plane_state and
cleanup_fb is called on the new_plane_state (not the old).
In the case where old_plane_state->fb == new_plane_state->fb then
there should be no behavioral difference between an async update
and a non-async commit. But there are issues that arise when
old_plane_state->fb != new_plane_state->fb.
The first is that the new_plane_state->fb is immediately cleaned up
after it has been prepared, so we're using a fb that we shouldn't
be.
The second occurs during a sequence of async atomic updates and
non-async regular atomic commits. Suppose there are two framebuffers
being interleaved in a double-buffering scenario, fb1 and fb2:
- Async update, oldfb = NULL, newfb = fb1, prepare fb1, cleanup fb1
- Async update, oldfb = fb1, newfb = fb2, prepare fb2, cleanup fb2
- Non-async commit, oldfb = fb2, newfb = fb1, prepare fb1, cleanup fb2
We call cleanup_fb on fb2 twice in this example scenario, and any
further use will result in use-after-free.
The simple fix to this problem is to block framebuffer changes
in the drm_atomic_helper_async_check function for now.
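A hedged sketch of that check inside drm_atomic_helper_async_check():

	/* FIXME: fb changes should eventually be handled in the async path */
	if (old_plane_state->fb != new_plane_state->fb)
		return -EINVAL;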
v2: Move check by itself, add a FIXME (Daniel)
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Harry Wentland <harry.wentland@amd.com>
Cc: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Cc: <stable@vger.kernel.org> # v4.14+
Fixes:
fef9df8b5945 ("drm/atomic: initial support for asynchronous plane update")
Signed-off-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
Acked-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Acked-by: Harry Wentland <harry.wentland@amd.com>
Reviewed-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Harry Wentland <harry.wentland@amd.com>
Link: https://patchwork.freedesktop.org/patch/275364/
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jann Horn [Wed, 27 Feb 2019 20:29:52 +0000 (21:29 +0100)]
mm: enforce min addr even if capable() in expand_downwards()
commit
0a1d52994d440e21def1c2174932410b4f2a98a1 upstream.
security_mmap_addr() does a capability check with current_cred(), but
we can reach this code from contexts like a VFS write handler where
current_cred() must not be used.
This can be abused on systems without SMAP to make NULL pointer
dereferences exploitable again.
Fixes:
8869477a49c3 ("security: protect from stack expansion into low vm addresses")
Cc: stable@kernel.org
Signed-off-by: Jann Horn <jannh@google.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
BOUGH CHEN [Thu, 28 Feb 2019 10:15:42 +0000 (10:15 +0000)]
mmc: sdhci-esdhc-imx: correct the fix of ERR004536
commit
e30be063d6dbcc0f18b1eb25fa709fdef89201fb upstream.
Commit 18094430d6b5 ("mmc: sdhci-esdhc-imx: add ADMA Length Mismatch
errata fix") introduced the fix for ERR004536, but the fix is incorrect.
After double-confirming with the IC team, we need to clear bit 7 of
register 0x6c rather than set it.
Here is the definition of bit 7 of 0x6c:
0: enable the new IC fix for ERR004536
1: do not use the IC fix, keep the same as before
This issue was found on the i.MX845s-evk board with CMDQ enabled and
the system under heavy load.
root@imx8mmevk:~# dd if=/dev/mmcblk2 of=/dev/null bs=1M &
root@imx8mmevk:~# memtester 1000M > /dev/zero &
root@imx8mmevk:~# [ 139.897220] mmc2: cqhci: timeout for tag 16
[ 139.901417] mmc2: cqhci: ============ CQHCI REGISTER DUMP ===========
[ 139.907862] mmc2: cqhci: Caps: 0x0000310a | Version: 0x00000510
[ 139.914311] mmc2: cqhci: Config: 0x00001001 | Control: 0x00000000
[ 139.920753] mmc2: cqhci: Int stat: 0x00000000 | Int enab: 0x00000006
[ 139.927193] mmc2: cqhci: Int sig: 0x00000006 | Int Coal: 0x00000000
[ 139.933634] mmc2: cqhci: TDL base: 0x7809c000 | TDL up32: 0x00000000
[ 139.940073] mmc2: cqhci: Doorbell: 0x00030000 | TCN: 0x00000000
[ 139.946518] mmc2: cqhci: Dev queue: 0x00010000 | Dev Pend: 0x00010000
[ 139.952967] mmc2: cqhci: Task clr: 0x00000000 | SSC1: 0x00011000
[ 139.959411] mmc2: cqhci: SSC2: 0x00000001 | DCMD rsp: 0x00000000
[ 139.965857] mmc2: cqhci: RED mask: 0xfdf9a080 | TERRI: 0x00000000
[ 139.972308] mmc2: cqhci: Resp idx: 0x0000002e | Resp arg: 0x00000900
[ 139.978761] mmc2: sdhci: ============ SDHCI REGISTER DUMP ===========
[ 139.985214] mmc2: sdhci: Sys addr: 0xb2c19000 | Version: 0x00000002
[ 139.991669] mmc2: sdhci: Blk size: 0x00000200 | Blk cnt: 0x00000400
[ 139.998127] mmc2: sdhci: Argument: 0x40110400 | Trn mode: 0x00000033
[ 140.004618] mmc2: sdhci: Present: 0x01088a8f | Host ctl: 0x00000030
[ 140.011113] mmc2: sdhci: Power: 0x00000002 | Blk gap: 0x00000080
[ 140.017583] mmc2: sdhci: Wake-up: 0x00000008 | Clock: 0x0000000f
[ 140.024039] mmc2: sdhci: Timeout: 0x0000008f | Int stat: 0x00000000
[ 140.030497] mmc2: sdhci: Int enab: 0x107f4000 | Sig enab: 0x107f4000
[ 140.036972] mmc2: sdhci: AC12 err: 0x00000000 | Slot int: 0x00000502
[ 140.043426] mmc2: sdhci: Caps: 0x07eb0000 | Caps_1: 0x8000b407
[ 140.049867] mmc2: sdhci: Cmd: 0x00002c1a | Max curr: 0x00ffffff
[ 140.056314] mmc2: sdhci: Resp[0]: 0x00000900 | Resp[1]: 0xffffffff
[ 140.062755] mmc2: sdhci: Resp[2]: 0x328f5903 | Resp[3]: 0x00d00f00
[ 140.069195] mmc2: sdhci: Host ctl2: 0x00000008
[ 140.073640] mmc2: sdhci: ADMA Err: 0x00000007 | ADMA Ptr: 0x7809c108
[ 140.080079] mmc2: sdhci: ============================================
[ 140.086662] mmc2: running CQE recovery
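A hedged sketch of the bit-7 clear described above (the register name is invented for this sketch; only the 0x6c offset and bit 7 come from the changelog):

	#define ESDHC_DEBUG_SEL		0x6c	/* illustrative name for offset 0x6c */

	u32 reg = readl(host->ioaddr + ESDHC_DEBUG_SEL);
	reg &= ~BIT(7);		/* 0 = enable the ERR004536 IC fix */
	writel(reg, host->ioaddr + ESDHC_DEBUG_SEL);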
Fixes:
18094430d6b5 ("mmc: sdhci-esdhc-imx: add ADMA Length Mismatch errata fix")
Signed-off-by: Haibo Chen <haibo.chen@nxp.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alamy Liu [Mon, 25 Feb 2019 19:22:14 +0000 (11:22 -0800)]
mmc: cqhci: Fix a tiny potential memory leak on error condition
commit
d07e9fadf3a6b466ca3ae90fa4859089ff20530f upstream.
Free up the allocated memory in the case of an error return.
The value of mmc_host->cqe_enabled stays 'false'. Thus, cqhci_disable
(mmc_cqe_ops->cqe_disable) won't be called to free the memory. Also,
cqhci_disable() seems to be designed to disable and free all resources,
and is not suitable to handle this corner case.
Fixes:
a4080225f51d ("mmc: cqhci: support for command queue enabled host")
Signed-off-by: Alamy Liu <alamy.liu@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alamy Liu [Mon, 25 Feb 2019 19:22:13 +0000 (11:22 -0800)]
mmc: cqhci: fix space allocated for transfer descriptor
commit
27ec9dc17c48ea2e642ccb90b4ebf7fd47468911 upstream.
Not enough space is allocated when DCMD is disabled. CQE_DCMD does
not need to be enabled when CQE is enabled (software could halt CQE
to send commands). In the case that CQE_DCMD is not enabled, space
still needs to be allocated for data transfer. For instance:
CQE_DCMD is enabled: 31 slots for data transfer (one slot used by DCMD)
CQE_DCMD is disabled: 32 slots for data transfer
Fixes:
a4080225f51d ("mmc: cqhci: support for command queue enabled host")
Signed-off-by: Alamy Liu <alamy.liu@gmail.com>
Acked-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ritesh Harjani [Fri, 22 Feb 2019 13:51:34 +0000 (19:21 +0530)]
mmc: core: Fix NULL ptr crash from mmc_should_fail_request
commit
e5723f95d6b493dd437f1199cacb41459713b32f upstream.
In case of CQHCI, mrq->cmd may be NULL for data requests (non DCMD).
In such a case, mmc_should_fail_request() directly dereferences
mrq->cmd while it is NULL.
Fix this by checking for mrq->cmd pointer.
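A hedged sketch of the guard, roughly as it would look in mmc_should_fail_request():

	if (!data)
		return;

	if ((mrq->cmd && mrq->cmd->error) || data->error ||
	    !should_fail(&host->fail_mmc_request, data->blksz * data->blocks))
		return;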
Fixes:
72a5af554df8 ("mmc: core: Add support for handling CQE requests")
Signed-off-by: Ritesh Harjani <riteshh@codeaurora.org>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takeshi Saito [Thu, 21 Feb 2019 19:38:05 +0000 (20:38 +0100)]
mmc: tmio: fix access width of Block Count Register
commit
5603731a15ef9ca317c122cc8c959f1dee1798b4 upstream.
In R-Car Gen2 or later, the maximum number of transfer blocks was
increased from 0xFFFF to 0xFFFFFFFF. Therefore, the Block Count Register
should be written with iowrite32().
If another system (U-Boot, a hypervisor OS, etc.) uses bits [31:16], this
value will not be cleared, and SD/MMC card initialization fails.
So, check for the bigger register and use the appropriate write. Also,
mark the register as extended on Gen2.
Signed-off-by: Takeshi Saito <takeshi.saito.xv@renesas.com>
[wsa: use max_blk_count in if(), add Gen2, update commit message]
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Cc: stable@kernel.org
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
[Ulf: Fixed build error]
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sergei Shtylyov [Mon, 18 Feb 2019 17:45:40 +0000 (20:45 +0300)]
mmc: tmio_mmc_core: don't claim spurious interrupts
commit
5c27ff5db1491a947264d6d4e4cbe43ae6535bae upstream.
I have encountered an interrupt storm during the eMMC chip probing (and
the chip finally didn't get detected). It turned out that U-Boot left
the DMAC interrupts enabled while the Linux driver didn't use those.
The SDHI driver's interrupt handler somehow assumes that, even if an
SDIO interrupt didn't happen, it should return IRQ_HANDLED. I think
that if none of the enabled interrupts happened and got handled, we
should return IRQ_NONE -- that way the kernel IRQ code recognizes
a spurious interrupt and masks it off pretty quickly...
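A hedged, simplified sketch of that handler-return convention (not the actual driver code):

	static irqreturn_t tmio_irq_sketch(int irq, void *devid)
	{
		bool handled = false;

		/* ... check and ack the enabled card/SDIO status bits,
		 *     setting handled = true for anything we serviced ... */

		return handled ? IRQ_HANDLED : IRQ_NONE;
	}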
Fixes:
7729c7a232a9 ("mmc: tmio: Provide separate interrupt handlers")
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Reviewed-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Tested-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Reviewed-by: Simon Horman <horms+renesas@verge.net.au>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jonathan Neuschäfer [Sun, 10 Feb 2019 17:31:07 +0000 (18:31 +0100)]
mmc: spi: Fix card detection during probe
commit
c9bd505dbd9d3dc80c496f88eafe70affdcf1ba6 upstream.
When using the mmc_spi driver with a card-detect pin, I noticed that the
card was not detected immediately after probe, but only after it was
unplugged and plugged back in (and the CD IRQ fired).
The call tree looks something like this:
mmc_spi_probe
  mmc_add_host
    mmc_start_host
      _mmc_detect_change
        mmc_schedule_delayed_work(&host->detect, 0)
          mmc_rescan
            host->bus_ops->detect(host)
              mmc_detect
                _mmc_detect_card_removed
                  host->ops->get_cd(host)
                    mmc_gpio_get_cd -> -ENOSYS (ctx->cd_gpio not set)
  mmc_gpiod_request_cd
    ctx->cd_gpio = desc
To fix this issue, call mmc_detect_change after the card-detect GPIO/IRQ
is registered.
Signed-off-by: Jonathan Neuschäfer <j.neuschaefer@gmx.net>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Cc: stable@vger.kernel.org
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ben Gardon [Wed, 16 Jan 2019 17:41:15 +0000 (09:41 -0800)]
kvm: selftests: Fix region overlap check in kvm_util
[ Upstream commit
94a980c39c8e3f8abaff5d3b5bbcd4ccf1c02c4f ]
Fix a call to userspace_mem_region_find to conform to its spec of
taking an inclusive, inclusive range. It was previously being called
with an inclusive, exclusive range. Also remove a redundant region bounds
check in vm_userspace_mem_region_add. Region overlap checking is already
performed by the call to userspace_mem_region_find.
Tested: Compiled tools/testing/selftests/kvm with -static
Ran all resulting test binaries on an Intel Haswell test machine
All tests passed
Signed-off-by: Ben Gardon <bgardon@google.com>
Reviewed-by: Jim Mattson <jmattson@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Vitaly Kuznetsov [Mon, 7 Jan 2019 18:44:51 +0000 (19:44 +0100)]
KVM: nSVM: clear events pending from svm_complete_interrupts() when exiting to L1
[ Upstream commit
619ad846fc3452adaf71ca246c5aa711e2055398 ]
kvm-unit-tests' eventinj "NMI failing on IDT" test results in NMI being
delivered to the host (L1) when it's running nested. The problem seems to
be: svm_complete_interrupts() raises 'nmi_injected' flag but later we
decide to reflect EXIT_NPF to L1. The flag remains pending and we do NMI
injection upon entry so it got delivered to L1 instead of L2.
It seems that VMX code solves the same issue in prepare_vmcs12(), this was
introduced with code refactoring in commit
5f3d5799974b ("KVM: nVMX: Rework
event injection and recovery").
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Suravee Suthikulpanit [Tue, 22 Jan 2019 10:25:13 +0000 (10:25 +0000)]
svm: Fix AVIC incomplete IPI emulation
[ Upstream commit
bb218fbcfaaa3b115d4cd7a43c0ca164f3a96e57 ]
In case of incomplete IPI with invalid interrupt type, the current
SVM driver does not properly emulate the IPI, and fails to boot
FreeBSD guests with multiple vcpus when enabling AVIC.
Fix this by updating the APIC ICR high/low registers, which also
emulates sending the IPI.
Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Chaitanya Tata [Fri, 18 Jan 2019 21:47:47 +0000 (03:17 +0530)]
cfg80211: extend range deviation for DMG
[ Upstream commit
93183bdbe73bbdd03e9566c8dc37c9d06b0d0db6 ]
Recently, DMG frequency bands have been extended up to 71 GHz, so extend
the range check to 20 GHz (45-71 GHz), otherwise some channels will be
marked as disabled.
Signed-off-by: Chaitanya Tata <Chaitanya.Tata@bluwireless.co.uk>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Mathieu Malaterre [Thu, 24 Jan 2019 18:19:57 +0000 (19:19 +0100)]
mac80211: Add attribute aligned(2) to struct 'action'
[ Upstream commit
7c53eb5d87bc21464da4268c3c0c47457b6d9c9b ]
During refactor in commit
9e478066eae4 ("mac80211: fix MU-MIMO
follow-MAC mode") a new struct 'action' was declared with packed
attribute as:
	struct {
		struct ieee80211_hdr_3addr hdr;
		u8 category;
		u8 action_code;
	} __packed action;
But since struct 'ieee80211_hdr_3addr' is declared with an aligned
keyword as:
	struct ieee80211_hdr {
		__le16 frame_control;
		__le16 duration_id;
		u8 addr1[ETH_ALEN];
		u8 addr2[ETH_ALEN];
		u8 addr3[ETH_ALEN];
		__le16 seq_ctrl;
		u8 addr4[ETH_ALEN];
	} __packed __aligned(2);
Solve the ambiguity of placing aligned structure in a packed one by
adding the aligned(2) attribute to struct 'action'.
This removes the following warning (W=1):
net/mac80211/rx.c:234:2: warning: alignment 1 of 'struct <anonymous>' is less than 2 [-Wpacked-not-aligned]
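With the fix applied as described above, the declaration reads:

	struct {
		struct ieee80211_hdr_3addr hdr;
		u8 category;
		u8 action_code;
	} __packed __aligned(2) action;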
Cc: Johannes Berg <johannes.berg@intel.com>
Suggested-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Mathieu Malaterre <malat@debian.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Balaji Pothunoori [Mon, 21 Jan 2019 07:00:43 +0000 (12:30 +0530)]
mac80211: don't initiate TDLS connection if station is not associated to AP
[ Upstream commit
7ed5285396c257fd4070b1e29e7b2341aae2a1ce ]
The following call trace is observed while adding a TDLS peer entry in
the driver during TDLS setup.
Call Trace:
[<c1301476>] dump_stack+0x47/0x61
[<c10537d2>] __warn+0xe2/0x100
[<fa22415f>] ? sta_apply_parameters+0x49f/0x550 [mac80211]
[<c1053895>] warn_slowpath_null+0x25/0x30
[<fa22415f>] sta_apply_parameters+0x49f/0x550 [mac80211]
[<fa20ad42>] ? sta_info_alloc+0x1c2/0x450 [mac80211]
[<fa224623>] ieee80211_add_station+0xe3/0x160 [mac80211]
[<c1876fe3>] nl80211_new_station+0x273/0x420
[<c170f6d9>] genl_rcv_msg+0x219/0x3c0
[<c170f4c0>] ? genl_rcv+0x30/0x30
[<c170ee7e>] netlink_rcv_skb+0x8e/0xb0
[<c170f4ac>] genl_rcv+0x1c/0x30
[<c170e8aa>] netlink_unicast+0x13a/0x1d0
[<c170ec18>] netlink_sendmsg+0x2d8/0x390
[<c16c5acd>] sock_sendmsg+0x2d/0x40
[<c16c6369>] ___sys_sendmsg+0x1d9/0x1e0
Fix this by allowing the TDLS setup request only when we have completed
association.
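A hedged sketch of such a gate (field names follow mac80211's managed-mode data loosely):

	if (!sdata->u.mgd.associated)
		return -EINVAL;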
Signed-off-by: Balaji Pothunoori <bpothuno@codeaurora.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Thomas Falcon [Thu, 24 Jan 2019 17:17:01 +0000 (11:17 -0600)]
ibmveth: Do not process frames after calling napi_reschedule
[ Upstream commit
e95d22c69b2c130ccce257b84daf283fd82d611e ]
The IBM virtual ethernet driver's polling function continues
to process frames after rescheduling NAPI, resulting in a warning
if it exhausted its budget. Do not restart polling after calling
napi_reschedule. Instead let frames be processed in the following
instance.
Signed-off-by: Thomas Falcon <tlfalcon@linux.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Maciej Żenczykowski [Thu, 24 Jan 2019 11:07:02 +0000 (03:07 -0800)]
net: dev_is_mac_header_xmit() true for ARPHRD_RAWIP
[ Upstream commit
3b707c3008cad04604c1f50e39f456621821c414 ]
__bpf_redirect() and act_mirred check this boolean
to determine whether to prefix an Ethernet header.
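A hedged, abridged sketch of dev_is_mac_header_xmit() with the new case (the full list of ARPHRD_* types is longer than shown):

	static inline bool dev_is_mac_header_xmit(const struct net_device *dev)
	{
		switch (dev->type) {
		case ARPHRD_TUNNEL:
		case ARPHRD_TUNNEL6:
		case ARPHRD_NONE:
		case ARPHRD_RAWIP:	/* newly treated as header-less */
			return false;	/* (abridged: more cases exist) */
		default:
			return true;
		}
	}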
Signed-off-by: Maciej Żenczykowski <maze@google.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Zhang Run [Thu, 24 Jan 2019 05:48:49 +0000 (13:48 +0800)]
net: usb: asix: ax88772_bind return error when hw_reset fail
[ Upstream commit
6eea3527e68acc22483f4763c8682f223eb90029 ]
ax88772_bind() should return an error code immediately when the PHY
was not reset properly through ax88772a_hw_reset().
Otherwise, asix_get_phyid() will block when getting the PHY
identifier from the PHYSID1 MII registers through asix_mdio_read(),
because the PHY isn't ready. Furthermore, it will produce a lot of
error messages and cause a system crash, as follows:
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to write reg index 0x0000: -71
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to send software reset: ffffffb9
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to write reg index 0x0000: -71
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to enable software MII access
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to read reg index 0x0000: -71
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to write reg index 0x0000: -71
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to enable software MII access
asix 1-1:1.0 (unnamed net_device) (uninitialized): Failed to read reg index 0x0000: -71
...
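A hedged sketch of the early return in ax88772_bind() (the error-handling details are illustrative):

	ret = ax88772a_hw_reset(dev, 0);
	if (ret < 0) {
		netdev_dbg(dev->net, "hardware reset failed: %d\n", ret);
		return ret;
	}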
Signed-off-by: Zhang Run <zhang.run@zte.com.cn>
Reviewed-by: Yang Wei <yang.wei9@zte.com.cn>
Tested-by: Marcel Ziswiler <marcel.ziswiler@toradex.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Douglas Anderson [Wed, 16 Jan 2019 18:46:21 +0000 (10:46 -0800)]
drm/msm: Fix A6XX support for opp-level
[ Upstream commit
a3c5e2cd79753121f49a8662c1e0a60ddb5486ca ]
The bindings for Qualcomm opp levels changed after being Acked but
before landing. Thus the code in the GPU driver that was relying on
the old bindings is now broken.
Let's change the code to match the new bindings by adjusting the old
string 'qcom,level' to the new string 'opp-level'. See the patch
("dt-bindings: opp: Introduce opp-level bindings").
NOTE: we will do additional cleanup to totally remove the string from
the code and use the new dev_pm_opp_get_level() but we'll do it in a
future patch. This will facilitate getting the important code fix in
sooner without having to deal with cross-maintainer dependencies.
This patch needs to land before the patch ("arm64: dts: sdm845: Add
gpu and gmu device nodes") since if a tree contains the device tree
patch but not this one you'll get a crash at bootup.
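A hedged sketch of the property rename when reading the OPP nodes:

	/* was: of_property_read_u32(child, "qcom,level", &val); */
	ret = of_property_read_u32(child, "opp-level", &val);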
Fixes:
4b565ca5a2cb ("drm/msm: Add A6XX device support")
Signed-off-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Jordan Crouse <jcrouse@codeaurora.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Hannes Reinecke [Wed, 9 Jan 2019 08:45:15 +0000 (09:45 +0100)]
nvme-multipath: drop optimization for static ANA group IDs
[ Upstream commit
78a61cd42a64f3587862b372a79e1d6aaf131fd7 ]
Bit 6 in the ANACAP field is used to indicate that the ANA group ID
doesn't change while the namespace is attached to the controller.
There is an optimisation in the code to only allocate space
for the ANA group header, as the namespace list won't change and
hence would not need to be refreshed.
However, this optimisation was never carried over to the actual
workflow, which always assumes that the buffer is large enough
to hold the ANA header _and_ the namespace list.
So drop this optimisation and always allocate enough space.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Sagi Grimberg [Tue, 8 Jan 2019 08:53:22 +0000 (00:53 -0800)]
nvme-rdma: fix timeout handler
[ Upstream commit
4c174e6366746ae8d49f9cc409f728eebb7a9ac9 ]
Currently, we have several problems with the timeout
handler:
1. If we timeout on the controller establishment flow, we will hang
because we don't execute the error recovery (and we shouldn't because
the create_ctrl flow needs to fail and cleanup on its own)
2. We might also hang if we get a disconnect on a queue while the
controller is already deleting. This racy flow can cause the controller
disable/shutdown admin command to hang.
We cannot complete a timed out request from the timeout handler without
mutual exclusion from the teardown flow (e.g. nvme_rdma_error_recovery_work).
So we serialize it in the timeout handler and teardown io and admin
queues to guarantee that no one races with us from completing the
request.
Reported-by: Jaesoo Lee <jalee@purestorage.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Haiyang Zhang [Tue, 15 Jan 2019 00:51:44 +0000 (00:51 +0000)]
hv_netvsc: Fix hash key value reset after other ops
[ Upstream commit
17d91256898402daf4425cc541ac9cbf64574d9a ]
The ops for changing mtu, channels, or buffer sizes call netvsc_attach()
and rndis_set_subchannel(), which always reset the hash key to its
default value. That will override a hash key changed previously. This
patch fixes the problem by saving the hash key, then restoring it when
we re-add the netvsc device.
Fixes:
ff4a44199012 ("netvsc: allow get/set of RSS indirection table")
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
[sl: fix up subject line]
Signed-off-by: Sasha Levin <sashal@kernel.org>
Haiyang Zhang [Tue, 15 Jan 2019 00:51:43 +0000 (00:51 +0000)]
hv_netvsc: Refactor assignments of struct netvsc_device_info
[ Upstream commit
7c9f335a3ff20557a92584199f3d35c7e992bbe5 ]
These assignments occur in multiple places. The patch refactors them
into a function for simplicity. It also moves the struct to the heap
for future expansion.
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
[sl: fix up subject line]
Signed-off-by: Sasha Levin <sashal@kernel.org>
Haiyang Zhang [Tue, 15 Jan 2019 00:51:42 +0000 (00:51 +0000)]
hv_netvsc: Fix ethtool change hash key error
[ Upstream commit
b4a10c750424e01b5e37372fef0a574ebf7b56c3 ]
Hyper-V hosts require us to disable RSS before changing the RSS key,
otherwise the change request will fail. This patch fixes the
coding error.
Fixes:
ff4a44199012 ("netvsc: allow get/set of RSS indirection table")
Reported-by: Wei Hu <weh@microsoft.com>
Signed-off-by: Haiyang Zhang <haiyangz@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
[sl: fix up subject line]
Signed-off-by: Sasha Levin <sashal@kernel.org>
Atsushi Nemoto [Mon, 21 Jan 2019 08:26:41 +0000 (17:26 +0900)]
net: altera_tse: fix connect_local_phy error path
[ Upstream commit
17b42a20d7ca59377788c6a2409e77569570cc10 ]
The connect_local_phy should return NULL (not negative errno) on
error, since its caller expects it.
Signed-off-by: Atsushi Nemoto <atsushi.nemoto@sord.co.jp>
Acked-by: Thor Thayer <thor.thayer@linux.intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Varun Prakash [Sat, 12 Jan 2019 16:44:30 +0000 (22:14 +0530)]
scsi: csiostor: fix NULL pointer dereference in csio_vport_set_state()
[ Upstream commit
fe35a40e675473eb65f2f5462b82770f324b5689 ]
Assign fc_vport to ln->fc_vport before calling csio_fcoe_alloc_vnp() to
avoid a NULL pointer dereference in csio_vport_set_state().
ln->fc_vport is dereferenced in csio_vport_set_state().
Signed-off-by: Varun Prakash <varun@chelsio.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Ewan D. Milne [Thu, 17 Jan 2019 16:14:45 +0000 (11:14 -0500)]
scsi: lpfc: nvmet: avoid hang / use-after-free when destroying targetport
[ Upstream commit
c41f59884be5cca293ed61f3d64637dbba3a6381 ]
We cannot wait on a completion object in the lpfc_nvme_targetport structure
in the _destroy_targetport() code path because the NVMe/fc transport will
free that structure immediately after the .targetport_delete() callback.
This results in a use-after-free, and a hang if slub_debug=FZPU is enabled.
Fix this by putting the completion on the stack.
Signed-off-by: Ewan D. Milne <emilne@redhat.com>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Ewan D. Milne [Thu, 17 Jan 2019 16:14:44 +0000 (11:14 -0500)]
scsi: lpfc: nvme: avoid hang / use-after-free when destroying localport
[ Upstream commit
7961cba6f7d8215fa632df3d220e5154bb825249 ]
We cannot wait on a completion object in the lpfc_nvme_lport structure in
the _destroy_localport() code path because the NVMe/fc transport will free
that structure immediately after the .localport_delete() callback. This
results in a use-after-free, and a hang if slub_debug=FZPU is enabled.
Fix this by putting the completion on the stack.
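A hedged sketch of the on-stack completion (the lport field name is illustrative):

	DECLARE_COMPLETION_ONSTACK(lport_unreg_cmp);

	lport->unreg_wait = &lport_unreg_cmp;	/* illustrative field */
	ret = nvme_fc_unregister_localport(localport);
	if (!ret)
		wait_for_completion_timeout(&lport_unreg_cmp,
					    msecs_to_jiffies(5000));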
Signed-off-by: Ewan D. Milne <emilne@redhat.com>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Tejun Heo [Tue, 12 Dec 2017 16:38:30 +0000 (08:38 -0800)]
writeback: synchronize sync(2) against cgroup writeback membership switches
[ Upstream commit
7fc5854f8c6efae9e7624970ab49a1eac2faefb1 ]
sync_inodes_sb() can race against cgwb (cgroup writeback) membership
switches and fail to writeback some inodes. For example, if an inode
switches to another wb while sync_inodes_sb() is in progress, the new
wb might not be visible to bdi_split_work_to_wbs() at all or the inode
might jump from a wb which hasn't issued writebacks yet to one which
already has.
This patch adds backing_dev_info->wb_switch_rwsem to synchronize cgwb
switch path against sync_inodes_sb() so that sync_inodes_sb() is
guaranteed to see all the target wbs and inodes can't jump wbs to
escape syncing.
v2: Fixed misplaced rwsem init. Spotted by Jiufei.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Jiufei Xue <xuejiufei@gmail.com>
Link: http://lkml.kernel.org/r/dc694ae2-f07f-61e1-7097-7c8411cee12d@gmail.com
Acked-by: Jan Kara <jack@suse.cz>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Ernesto A. Fernández [Mon, 8 Oct 2018 23:58:23 +0000 (20:58 -0300)]
direct-io: allow direct writes to empty inodes
[ Upstream commit
8b9433eb4de3c26a9226c981c283f9f4896ae030 ]
On a DIO_SKIP_HOLES filesystem, the ->get_block() method is currently
not allowed to create blocks for an empty inode. This confusion comes
from trying to bit shift a negative number, so check the size of the
inode first.
The problem is most visible for hfsplus, because the fallback to
buffered I/O doesn't happen and the write fails with EIO. This is in
part the fault of the module, because it gives a wrong return value on
->get_block(); that will be fixed in a separate patch.
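A hedged sketch of the size check described above, in the direct-io block-mapping path:

	if (dio->flags & DIO_SKIP_HOLES) {
		i_size = i_size_read(dio->inode);
		if (i_size && fs_startblk <= (i_size - 1) >> i_blkbits)
			create = 0;
	}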
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Ernesto A. Fernández <ernesto.mnd.fernandez@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Liam Mark [Fri, 18 Jan 2019 18:37:44 +0000 (10:37 -0800)]
staging: android: ion: Support cpu access during dma_buf_detach
[ Upstream commit
31eb79db420a3f94c4c45a8c0a05cd30e333f981 ]
Often userspace doesn't know when the kernel will be calling dma_buf_detach
on the buffer.
If userspace starts its CPU access at the same time as the sg list is being
freed it could end up accessing the sg list after it has been freed.
Thread A                              Thread B
- DMA_BUF_IOCTL_SYNC IOCTL
- ion_dma_buf_begin_cpu_access
- list_for_each_entry
                                      - ion_dma_buf_detatch
                                      - free_duped_table
- dma_sync_sg_for_cpu
Fix this by getting the ion_buffer lock before freeing the sg table memory.
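A hedged sketch of the detach path with the buffer lock held around the teardown:

	mutex_lock(&buffer->lock);
	free_duped_table(a->table);
	list_del(&a->list);
	mutex_unlock(&buffer->lock);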
Fixes:
2a55e7b5e544 ("staging: android: ion: Call dma_map_sg for syncing and mapping")
Signed-off-by: Liam Mark <lmark@codeaurora.org>
Acked-by: Laura Abbott <labbott@redhat.com>
Acked-by: Andrew F. Davis <afd@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Priit Laes [Tue, 22 Jan 2019 07:32:32 +0000 (09:32 +0200)]
drm/sun4i: hdmi: Fix usage of TMDS clock
[ Upstream commit
5e1bc251cebc84b41b8eb5d2434e54d939a85430 ]
Although the TMDS clock is required for HDMI to function properly,
nobody called clk_prepare_enable(). This fixes the reference counting
issues and makes sure the clock is running when it needs to be running.
Due to the TMDS clock being the parent clock of the DDC clock, the TMDS
clock was turned on/off for each EDID probe, causing spurious failures
for certain HDMI/DVI screens.
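A hedged sketch of the missing enable in the HDMI bind path:

	ret = clk_prepare_enable(hdmi->tmds_clk);
	if (ret) {
		dev_err(dev, "Couldn't enable the TMDS clock\n");
		return ret;
	}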
Fixes:
9c5681011a0c ("drm/sun4i: Add HDMI support")
Signed-off-by: Priit Laes <priit.laes@paf.com>
[Maxime: Moved the TMDS clock enable earlier]
Signed-off-by: Maxime Ripard <maxime.ripard@bootlin.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190122073232.7240-1-plaes@plaes.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Tomonori Sakita [Mon, 21 Jan 2019 08:34:16 +0000 (17:34 +0900)]
serial: fsl_lpuart: fix maximum acceptable baud rate with over-sampling
[ Upstream commit
815d835b7ba46685c316b000013367dacb2b461b ]
Using the over-sampling ratio, lpuart can accept baud rates up to uartclk / 4.
Signed-off-by: Tomonori Sakita <tomonori.sakita@sord.co.jp>
Signed-off-by: Atsushi Nemoto <atsushi.nemoto@sord.co.jp>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Matthias Kaehlcke [Sat, 19 Jan 2019 00:23:05 +0000 (16:23 -0800)]
tty: serial: qcom_geni_serial: Allow mctrl when flow control is disabled
[ Upstream commit
e8a6ca808c5ed1e2b43ab25f1f2cbd43a7574f73 ]
The geni set/get_mctrl() functions currently do nothing unless
hardware flow control is enabled. Remove this arbitrary limitation.
Suggested-by: Johan Hovold <johan@kernel.org>
Fixes:
8a8a66a1a18a ("tty: serial: qcom_geni_serial: Add support for flow control")
Signed-off-by: Matthias Kaehlcke <mka@chromium.org>
Reviewed-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Kenneth Feng [Fri, 18 Jan 2019 10:08:19 +0000 (18:08 +0800)]
drm/amd/powerplay: OD setting fix on Vega10
[ Upstream commit
6d87dc97eb3341de3f7b1efa3156cb0e014f4a96 ]
gfxclk for the OD setting is limited to 1980 MHz for non-ACG
ASICs of Vega10.
Signed-off-by: Kenneth Feng <kenneth.feng@amd.com>
Reviewed-by: Evan Quan <evan.quan@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Xie Yongji [Thu, 29 Nov 2018 12:50:30 +0000 (20:50 +0800)]
locking/rwsem: Fix (possible) missed wakeup
[ Upstream commit
e158488be27b157802753a59b336142dc0eb0380 ]
Because wake_q_add() can imply an immediate wakeup (cmpxchg failure
case), we must not rely on the wakeup being delayed. However, commit:
e38513905eea ("locking/rwsem: Rework zeroing reader waiter->task")
relies on exactly that behaviour in that the wakeup must not happen
until after we clear waiter->task.
[ peterz: Added changelog. ]
Signed-off-by: Xie Yongji <xieyongji@baidu.com>
Signed-off-by: Zhang Yu <zhangyu31@baidu.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes:
e38513905eea ("locking/rwsem: Rework zeroing reader waiter->task")
Link: https://lkml.kernel.org/r/1543495830-2644-1-git-send-email-xieyongji@baidu.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Peter Zijlstra [Thu, 29 Nov 2018 13:44:49 +0000 (14:44 +0100)]
futex: Fix (possible) missed wakeup
[ Upstream commit
b061c38bef43406df8e73c5be06cbfacad5ee6ad ]
We must not rely on wake_q_add() to delay the wakeup; in particular
commit:
1d0dcb3ad9d3 ("futex: Implement lockless wakeups")
moved wake_q_add() before smp_store_release(&q->lock_ptr, NULL), which
could result in futex_wait() waking before observing ->lock_ptr ==
NULL and going back to sleep again.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes:
1d0dcb3ad9d3 ("futex: Implement lockless wakeups")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Peter Zijlstra [Mon, 17 Dec 2018 09:14:53 +0000 (10:14 +0100)]
sched/wake_q: Fix wakeup ordering for wake_q
[ Upstream commit
4c4e3731564c8945ac5ac90fc2a1e1f21cb79c92 ]
Notable cmpxchg() does not provide ordering when it fails, however
wake_q_add() requires ordering in this specific case too. Without this
it would be possible for the concurrent wakeup to not observe our
prior state.
Andrea Parri provided:
C wake_up_q-wake_q_add

{
	int next = 0;
	int y = 0;
}

P0(int *next, int *y)
{
	int r0;

	/* in wake_up_q() */
	WRITE_ONCE(*next, 1);   /* node->next = NULL */
	smp_mb();               /* implied by wake_up_process() */
	r0 = READ_ONCE(*y);
}

P1(int *next, int *y)
{
	int r1;

	/* in wake_q_add() */
	WRITE_ONCE(*y, 1);      /* wake_cond = true */
	smp_mb__before_atomic();
	r1 = cmpxchg_relaxed(next, 1, 2);
}

exists (0:r0=0 /\ 1:r1=0)
This "exists" clause cannot be satisfied according to the LKMM:
Test wake_up_q-wake_q_add Allowed
States 3
0:r0=0; 1:r1=1;
0:r0=1; 1:r1=0;
0:r0=1; 1:r1=1;
No
Witnesses
Positive: 0 Negative: 3
Condition exists (0:r0=0 /\ 1:r1=0)
Observation wake_up_q-wake_q_add Never 0 3
Reported-by: Yongji Xie <elohimes@gmail.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Waiman Long <longman@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Prateek Sood [Fri, 30 Nov 2018 15:10:56 +0000 (20:40 +0530)]
sched/wait: Fix rcuwait_wake_up() ordering
[ Upstream commit
6dc080eeb2ba01973bfff0d79844d7a59e12542e ]
For some peculiar reason rcuwait_wake_up() has the right barrier in
the comment, but not in the code.
This mistake has been observed to cause a deadlock in the following
situation:
P1					P2

percpu_up_read()			percpu_down_write()
  rcu_sync_is_idle() // false
					  rcu_sync_enter()
					  ...
  __percpu_up_read()

[S]	,-  __this_cpu_dec(*sem->read_count)
	|   smp_rmb();
[L]	|   task = rcu_dereference(w->task) // NULL
	|
	|				[S]	w->task = current
	|					smp_mb();
	|				[L]	readers_active_check() // fail
	`-> <store happens here>
Where the smp_rmb() (obviously) fails to constrain the store.
[ peterz: Added changelog. ]
Signed-off-by: Prateek Sood <prsood@codeaurora.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes:
8f95c90ceb54 ("sched/wait, RCU: Introduce rcuwait machinery")
Link: https://lkml.kernel.org/r/1543590656-7157-1-git-send-email-prsood@codeaurora.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Bob Copeland [Thu, 17 Jan 2019 21:32:42 +0000 (16:32 -0500)]
mac80211: fix miscounting of ttl-dropped frames
[ Upstream commit
a0dc02039a2ee54fb4ae400e0b755ed30e73e58c ]
In ieee80211_rx_h_mesh_fwding, we increment the 'dropped_frames_ttl'
counter when we decrement the ttl to zero. For unicast frames
destined for other hosts, we stop processing the frame at that point.
For multicast frames, we do not rebroadcast them in this case, but we
do pass the frame up the stack to process it on this STA. That
doesn't match the usual definition of "dropped," so don't count
those as such.
With this change, something like `ping6 -i0.2 ff02::1%mesh0` from a
peer in a ttl=1 network no longer increments the counter rapidly.
Signed-off-by: Bob Copeland <bobcopeland@fb.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Nathan Chancellor [Wed, 16 Jan 2019 13:20:11 +0000 (06:20 -0700)]
staging: rtl8723bs: Fix build error with Clang when inlining is disabled
[ Upstream commit
97715058b70da1262fd07798c8b2e3e894f759dd ]
When CONFIG_NO_AUTO_INLINE was present in linux-next (which added
'-fno-inline-functions' to KBUILD_CFLAGS), an allyesconfig build with
Clang failed at the modpost stage:
ERROR: "is_broadcast_mac_addr" [drivers/staging/rtl8723bs/r8723bs.ko] undefined!
ERROR: "is_zero_mac_addr" [drivers/staging/rtl8723bs/r8723bs.ko] undefined!
ERROR: "is_multicast_mac_addr" [drivers/staging/rtl8723bs/r8723bs.ko] undefined!
These functions were marked as extern inline, meaning that if inlining
doesn't happen, the function will be undefined, as it is above.
This happens to work with GCC because the '-fno-inline-functions' option
respects the __inline attribute so all instances of these functions are
inlined as expected and the definition doesn't actually matter. However,
with Clang and '-fno-inline-functions', a function has to be marked with
the __always_inline attribute to be considered for inlining, which none
of these functions are. Clang tries to find the symbol definition
elsewhere as it was told and fails, which trickles down to modpost.
To make sure that this code compiles regardless of compiler and make the
intention of the code clearer, use 'static' to ensure these functions
are always defined, regardless of inlining. Additionally, silence a
checkpatch warning by switching from '__inline' to 'inline'.
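A hedged sketch of the change for one of the helpers; the body of the
address check is illustrative rather than the exact driver code:
        /*
         * 'static inline' always emits a usable definition in the
         * translation unit, so the symbol exists even when the compiler
         * declines to inline; 'extern inline' left it undefined for Clang.
         */
        static inline bool is_broadcast_mac_addr(const u8 *addr)
        {
                return addr[0] == 0xff && addr[1] == 0xff && addr[2] == 0xff &&
                       addr[3] == 0xff && addr[4] == 0xff && addr[5] == 0xff;
        }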
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Aaron Hill [Mon, 24 Dec 2018 19:23:36 +0000 (14:23 -0500)]
drivers: thermal: int340x_thermal: Fix sysfs race condition
[ Upstream commit
129699bb8c7572106b5bbb2407c2daee4727ccad ]
Changes since V1:
* Use dev_info instead of printk
* Use dev_warn instead of BUG_ON
Previously, sysfs_create_group was called before all initialization had
fully run - specifically, before pci_set_drvdata was called. Since the
sysctl group is visible to userspace as soon as sysfs_create_group
returns, a small window of time existed during which a process could read
from an uninitialized/partially-initialized device.
This commit moves the creation of the sysctl group to after all
initialization has completed. This ensures that it's impossible for
userspace to read from a sysctl file before initialization has fully
completed.
To catch any future regressions, I've added a check to ensure
that proc_thermal_emum_mode is never PROC_THERMAL_NONE when a process
tries to read from a sysctl file. Previously, the aforementioned race
condition could result in the 'else' branch
running while PROC_THERMAL_NONE was set,
leading to a NULL pointer dereference.
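A rough sketch of the two changes, probe ordering and the read-side guard;
the attribute-group name, device handle and error code are placeholders,
not the driver's exact identifiers:
        /* probe(): publish sysfs attributes only after driver data and
         * proc_thermal_emum_mode are fully set up. */
        pci_set_drvdata(pdev, proc_priv);
        proc_thermal_emum_mode = PROC_THERMAL_PCI;
        ret = sysfs_create_group(&pdev->dev.kobj, &power_limit_attribute_group);

        /* show(): warn and bail out instead of dereferencing NULL state. */
        if (proc_thermal_emum_mode == PROC_THERMAL_NONE) {
                dev_warn(dev, "driver not initialized yet\n");
                return -ENODEV;
        }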
Signed-off-by: Aaron Hill <aa1ronham@gmail.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Vineet Gupta [Mon, 17 Dec 2018 22:11:19 +0000 (14:11 -0800)]
ARC: show_regs: lockdep: avoid page allocator...
[ Upstream commit
ab6c03676cb190156603cf4c5ecf97aa406c9c53 ]
and use a smaller, on-stack buffer instead
The motivation for this change was lockdep splat like below.
| potentially unexpected fatal signal 11.
| BUG: sleeping function called from invalid context at ../mm/page_alloc.c:4317
| in_atomic(): 1, irqs_disabled(): 0, pid: 57, name: segv
| no locks held by segv/57.
| Preemption disabled at:
| [<8182f17e>] get_signal+0x4a6/0x7c4
| CPU: 0 PID: 57 Comm: segv Not tainted 4.17.0+ #23
|
| Stack Trace:
| arc_unwind_core.constprop.1+0xd0/0xf4
| __might_sleep+0x1f6/0x234
| __get_free_pages+0x174/0xca0
| show_regs+0x22/0x330
| get_signal+0x4ac/0x7c4 # print_fatal_signals() -> preempt_disable()
| do_signal+0x30/0x224
| resume_user_mode_begin+0x90/0xd8
So the signal handling core calls show_regs() with preemption disabled, but
an ensuing GFP_KERNEL page allocator call is flagged by lockdep.
We could have switched to GFP_NOWAIT, but it turns out that is not enough
anyway, and eliding the page allocator call leads to less code and fewer
instruction traces to sift through when debugging pesky crashes.
FWIW, this patch doesn't cure the lockdep splat (which next patch does).
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Eugeniy Paltsev [Thu, 13 Dec 2018 15:42:57 +0000 (18:42 +0300)]
ARC: fix __ffs return value to avoid build warnings
[ Upstream commit
4e868f8419cb4cb558c5d428e7ab5629cef864c7 ]
| CC mm/nobootmem.o
|In file included from ./include/asm-generic/bug.h:18:0,
| from ./arch/arc/include/asm/bug.h:32,
| from ./include/linux/bug.h:5,
| from ./include/linux/mmdebug.h:5,
| from ./include/linux/gfp.h:5,
| from ./include/linux/slab.h:15,
| from mm/nobootmem.c:14:
|mm/nobootmem.c: In function '__free_pages_memory':
|./include/linux/kernel.h:845:29: warning: comparison of distinct pointer types lacks a cast
| (!!(sizeof((typeof(x) *)1 == (typeof(y) *)1)))
| ^
|./include/linux/kernel.h:859:4: note: in expansion of macro '__typecheck'
| (__typecheck(x, y) && __no_side_effects(x, y))
| ^~~~~~~~~~~
|./include/linux/kernel.h:869:24: note: in expansion of macro '__safe_cmp'
| __builtin_choose_expr(__safe_cmp(x, y), \
| ^~~~~~~~~~
|./include/linux/kernel.h:878:19: note: in expansion of macro '__careful_cmp'
| #define min(x, y) __careful_cmp(x, y, <)
| ^~~~~~~~~~~~~
|mm/nobootmem.c:104:11: note: in expansion of macro 'min'
| order = min(MAX_ORDER - 1UL, __ffs(start));
Change the __ffs() return value from 'int' to 'unsigned long', as is done
in other implementations (asm-generic, x86, etc.), to avoid build-time
warnings in places where the type is strictly checked.
Since __ffs() may only return values in the [0-31] interval, changing the
return type to unsigned is valid.
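A sketch of the signature change; the body shown is a generic fallback for
illustration only, not ARC's actual assembly implementation:
        /* was: static inline int __ffs(unsigned long word) */
        static inline unsigned long __ffs(unsigned long word)
        {
                return __builtin_ctzl(word);    /* index of the first set bit */
        }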
Signed-off-by: Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yang Yingliang [Fri, 26 Oct 2018 07:51:17 +0000 (15:51 +0800)]
irqchip/gic-v3-mbi: Fix uninitialized mbi_lock
[ Upstream commit
c530bb8a726a37811e9fb5d68cd6b5408173b545 ]
The mbi_lock mutex is left uninitialized, so let's use DEFINE_MUTEX
to initialize it statically.
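The pattern, sketched in one line (the previous declaration shown in the
comment is assumed from the changelog):
        static DEFINE_MUTEX(mbi_lock);  /* was: static struct mutex mbi_lock; */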
Fixes:
505287525c24d ("irqchip/gic-v3: Add support for Message Based Interrupts as an MSI controller")
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Geert Uytterhoeven [Mon, 14 Jan 2019 13:51:33 +0000 (14:51 +0100)]
selftests: gpio-mockup-chardev: Check asprintf() for error
[ Upstream commit
508cacd7da6659ae7b7bdd0a335f675422277758 ]
With gcc 7.3.0:
gpio-mockup-chardev.c: In function ‘get_debugfs’:
gpio-mockup-chardev.c:62:3: warning: ignoring return value of ‘asprintf’, declared with attribute warn_unused_result [-Wunused-result]
asprintf(path, "%s/gpio", mnt_fs_get_target(fs));
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Handle asprintf() failures to fix this.
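A sketch of the check, reusing the call quoted in the warning above; the
error handling shown is illustrative, not the selftest's exact code:
        /* asprintf() returns -1 and leaves *path undefined on failure */
        if (asprintf(path, "%s/gpio", mnt_fs_get_target(fs)) < 0)
                return -1;      /* propagate the failure to the caller */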
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Shuah Khan <shuah@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Fathi Boudra [Wed, 16 Jan 2019 17:43:19 +0000 (11:43 -0600)]
selftests: seccomp: use LDLIBS instead of LDFLAGS
[ Upstream commit
5bbc73a841d7f0bbe025a342146dde462a796a5a ]
seccomp_bpf fails to build due to undefined reference errors:
aarch64-linaro-linux-gcc --sysroot=/build/tmp-rpb-glibc/sysroots/hikey
-O2 -pipe -g -feliminate-unused-debug-types -Wl,-no-as-needed -Wall
-Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed -lpthread seccomp_bpf.c -o
/build/tmp-rpb-glibc/work/hikey-linaro-linux/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf
/tmp/ccrlR3MW.o: In function `tsync_sibling':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1920: undefined reference to `sem_post'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1920: undefined reference to `sem_post'
/tmp/ccrlR3MW.o: In function `TSYNC_setup':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1863: undefined reference to `sem_init'
/tmp/ccrlR3MW.o: In function `TSYNC_teardown':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1904: undefined reference to `sem_destroy'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1897: undefined reference to `pthread_kill'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1898: undefined reference to `pthread_cancel'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1899: undefined reference to `pthread_join'
/tmp/ccrlR3MW.o: In function `tsync_start_sibling':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/tmp/ccrlR3MW.o: In function `TSYNC_siblings_fail_prctl':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1978: undefined reference to `sem_wait'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1990: undefined reference to `pthread_join'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1992: undefined reference to `pthread_join'
/tmp/ccrlR3MW.o: In function `tsync_start_sibling':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/tmp/ccrlR3MW.o: In function `TSYNC_two_siblings_with_ancestor':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2016: undefined reference to `sem_wait'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2032: undefined reference to `pthread_join'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2034: undefined reference to `pthread_join'
/tmp/ccrlR3MW.o: In function `tsync_start_sibling':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/tmp/ccrlR3MW.o: In function `TSYNC_two_sibling_want_nnp':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2046: undefined reference to `sem_wait'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2058: undefined reference to `pthread_join'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2060: undefined reference to `pthread_join'
/tmp/ccrlR3MW.o: In function `tsync_start_sibling':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/tmp/ccrlR3MW.o: In function `TSYNC_two_siblings_with_no_filter':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2073: undefined reference to `sem_wait'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2098: undefined reference to `pthread_join'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2100: undefined reference to `pthread_join'
/tmp/ccrlR3MW.o: In function `tsync_start_sibling':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/tmp/ccrlR3MW.o: In function `TSYNC_two_siblings_with_one_divergence':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2125: undefined reference to `sem_wait'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2143: undefined reference to `pthread_join'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2145: undefined reference to `pthread_join'
/tmp/ccrlR3MW.o: In function `tsync_start_sibling':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
/tmp/ccrlR3MW.o: In function `TSYNC_two_siblings_not_under_filter':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2169: undefined reference to `sem_wait'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2202: undefined reference to `pthread_join'
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:2227: undefined reference to `pthread_join'
/tmp/ccrlR3MW.o: In function `tsync_start_sibling':
/usr/src/debug/kselftests/4.12-r0/linux-4.12-rc7/tools/testing/selftests/seccomp/seccomp_bpf.c:1941: undefined reference to `pthread_create'
It's GNU Make and linker specific.
The default Makefile rule looks like:
$(CC) $(CFLAGS) $(LDFLAGS) $@ $^ $(LDLIBS)
When the linking is done by gcc itself there is no issue, but when it needs
to be passed to the proper ld, only LDLIBS follows, and then ld cannot know
which libraries to link with.
More detail:
https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html
LDFLAGS
Extra flags to give to compilers when they are supposed to invoke the linker,
‘ld’, such as -L. Libraries (-lfoo) should be added to the LDLIBS variable
instead.
LDLIBS
Library flags or names given to compilers when they are supposed to invoke the
linker, ‘ld’. LOADLIBES is a deprecated (but still supported) alternative to
LDLIBS. Non-library linker flags, such as -L, should go in the LDFLAGS
variable.
https://lkml.org/lkml/2010/2/10/362
tools/perf: libraries must come after objects
Link order matters, use LDLIBS instead of LDFLAGS to properly link against
libpthread.
Signed-off-by: Fathi Boudra <fathi.boudra@linaro.org>
Acked-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Shuah Khan <shuah@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Alban Bedel [Mon, 7 Jan 2019 19:44:54 +0000 (20:44 +0100)]
phy: ath79-usb: Fix the main reset name to match the DT binding
[ Upstream commit
827cb0323928952c0db9515aba9d534fb1285b3f ]
I submitted this driver several times before it got accepted. The
first series wasn't accepted, but the DTS binding did make it in.
I then made a second series that added generic reset support to the
PHY core, which in turn required a change to the DT binding. This
second series seems to have been ignored, so I did a third one
without the change to the PHY core and the DT binding update, and this
last attempt finally made it.
But two months later the DT binding update from the second series was
integrated too. So now the driver matches neither the binding nor
the only DTS using it. This patch fixes the driver to match the new
binding.
Signed-off-by: Alban Bedel <albeu@free.fr>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Alban Bedel [Mon, 7 Jan 2019 19:44:53 +0000 (20:44 +0100)]
phy: ath79-usb: Fix the power on error path
[ Upstream commit
009808154c69c48d5b41fc8cf5ad5ab5704efd8f ]
In the power-on function, the error path doesn't return the suspend
override to its proper state. It should deassert this reset
line to enable the suspend override.
Signed-off-by: Alban Bedel <albeu@free.fr>
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Alison Schofield [Sat, 8 Dec 2018 02:06:45 +0000 (18:06 -0800)]
selftests/vm/gup_benchmark.c: match gup struct to kernel
[ Upstream commit
91cd63d320f84dcbf21d4327f31f7e1f85adebd0 ]
An expansion field was added to the kernel copy of this structure for
future use. See mm/gup_benchmark.c.
Add the same expansion field here, so that the IOCTL command decodes
correctly. Otherwise, it fails with EINVAL.
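For context, a sketch of the matching layout after the change; the field
names follow the changelog and the kernel's mm/gup_benchmark.c but should be
treated as illustrative here. Because the ioctl number (_IOWR) encodes
sizeof(struct gup_benchmark), a size mismatch makes the kernel reject the
command with EINVAL:
        struct gup_benchmark {
                __u64 get_delta_usec;
                __u64 put_delta_usec;
                __u64 addr;
                __u64 size;
                __u32 nr_pages_per_call;
                __u32 flags;
                __u64 expansion[10];    /* the added field, reserved for future use */
        };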
Signed-off-by: Alison Schofield <alison.schofield@intel.com>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Shuah Khan <shuah@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Silvio Cesare [Tue, 15 Jan 2019 03:27:27 +0000 (04:27 +0100)]
ASoC: imx-audmux: change snprintf to scnprintf for possible overflow
[ Upstream commit
c407cd008fd039320d147088b52d0fa34ed3ddcb ]
Change snprintf to scnprintf. There are generally two cases where using
snprintf causes problems.
1) Uses of size += snprintf(buf, SIZE - size, fmt, ...)
In this case, if snprintf would have written more characters than what the
buffer size (SIZE) is, then size will end up larger than SIZE. In later
uses of snprintf, SIZE - size will result in a negative number, leading
to problems. Note that size might already be too large by using
size = snprintf before the code reaches a case of size += snprintf.
2) If size is ultimately used as a length parameter for a copy back to user
space, then it will potentially allow for a buffer overflow and information
disclosure when size is greater than SIZE. When the size is used to index
the buffer directly, we can have memory corruption. This also means when
size = snprintf... is used, it may also cause problems since size may become
large. Copying to userspace is mitigated by the HARDENED_USERCOPY kernel
configuration.
The solution to these issues is to use scnprintf which returns the number of
characters actually written to the buffer, so the size variable will never
exceed SIZE.
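A small sketch of the pattern in question; the buffer and the value being
printed are placeholders:
        char buf[64];
        int value = 42;         /* placeholder */
        int size;

        size = scnprintf(buf, sizeof(buf), "header\n");
        /* scnprintf() returns what was actually written, so "size" can
         * never exceed sizeof(buf) and the remaining length can never go
         * negative, unlike the snprintf() pattern described above. */
        size += scnprintf(buf + size, sizeof(buf) - size, "value: %d\n", value);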
Signed-off-by: Silvio Cesare <silvio.cesare@gmail.com>
Cc: Timur Tabi <timur@kernel.org>
Cc: Nicolin Chen <nicoleotsuka@gmail.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Xiubo Li <Xiubo.Lee@gmail.com>
Cc: Fabio Estevam <fabio.estevam@nxp.com>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Acked-by: Nicolin Chen <nicoleotsuka@gmail.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Silvio Cesare [Sat, 12 Jan 2019 15:28:43 +0000 (16:28 +0100)]
ASoC: dapm: change snprintf to scnprintf for possible overflow
[ Upstream commit
e581e151e965bf1f2815dd94620b638fec4d0a7e ]
Change snprintf to scnprintf. There are generally two cases where using
snprintf causes problems.
1) Uses of size += snprintf(buf, SIZE - size, fmt, ...)
In this case, if snprintf would have written more characters than what the
buffer size (SIZE) is, then size will end up larger than SIZE. In later
uses of snprintf, SIZE - size will result in a negative number, leading
to problems. Note that size might already be too large by using
size = snprintf before the code reaches a case of size += snprintf.
2) If size is ultimately used as a length parameter for a copy back to user
space, then it will potentially allow for a buffer overflow and information
disclosure when size is greater than SIZE. When the size is used to index
the buffer directly, we can have memory corruption. This also means when
size = snprintf... is used, it may also cause problems since size may become
large. Copying to userspace is mitigated by the HARDENED_USERCOPY kernel
configuration.
The solution to these issues is to use scnprintf which returns the number of
characters actually written to the buffer, so the size variable will never
exceed SIZE.
Signed-off-by: Silvio Cesare <silvio.cesare@gmail.com>
Cc: Liam Girdwood <lgirdwood@gmail.com>
Cc: Mark Brown <broonie@kernel.org>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Shuming Fan [Tue, 15 Jan 2019 03:27:39 +0000 (11:27 +0800)]
ASoC: rt5682: Fix PLL source register definitions
[ Upstream commit
ee7ea2a9a318a89d21b156dc75e54d53904bdbe5 ]
Fix a typo which causes the headphone to produce no sound while using BCLK
as the PLL source.
Signed-off-by: Shuming Fan <shumingf@realtek.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Peng Hao [Sat, 29 Dec 2018 06:34:12 +0000 (14:34 +0800)]
x86/mm/mem_encrypt: Fix erroneous sizeof()
[ Upstream commit
bf7d28c53453ea904584960de55e33e03b9d93b1 ]
Using sizeof(pointer) for determining the size of a memset() only works
when the size of the pointer and the size of type to which it points are
the same. For pte_t this is only true for 64bit and 32bit-NONPAE. On 32bit
PAE systems this is wrong as the pointer size is 4 byte but the PTE entry
is 8 bytes. It's actually not a real world issue as this code depends on
64bit, but it's wrong nevertheless.
Use sizeof(*p) for correctness' sake.
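A minimal sketch of the distinction; the helper name is illustrative:
        static void clear_pte(pte_t *p)
        {
                /* sizeof(*p) is the size of the PTE entry (8 bytes on 32-bit
                 * PAE); sizeof(p) would only be the pointer size (4 bytes). */
                memset(p, 0, sizeof(*p));
        }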
Fixes:
aad983913d77 ("x86/mm/encrypt: Simplify sme_populate_pgd() and sme_populate_pgd_large()")
Signed-off-by: Peng Hao <peng.hao2@zte.com.cn>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>
Cc: dave.hansen@linux.intel.com
Cc: peterz@infradead.org
Cc: luto@kernel.org
Link: https://lkml.kernel.org/r/1546065252-97996-1-git-send-email-peng.hao2@zte.com.cn
Signed-off-by: Sasha Levin <sashal@kernel.org>
Srinivas Ramana [Thu, 20 Dec 2018 13:35:57 +0000 (19:05 +0530)]
genirq: Make sure the initial affinity is not empty
[ Upstream commit
bddda606ec76550dd63592e32a6e87e7d32583f7 ]
If all CPUs in the irq_default_affinity mask are offline when an interrupt
is initialized then irq_setup_affinity() can set an empty affinity mask for
a newly allocated interrupt.
Fix this by falling back to cpu_online_mask in case the resulting affinity
mask is zero.
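A sketch of the fallback described above; the surrounding
irq_setup_affinity() context and the local mask variable are assumed:
        cpumask_and(&mask, irq_default_affinity, cpu_online_mask);
        if (cpumask_empty(&mask))
                cpumask_copy(&mask, cpu_online_mask);   /* never leave it empty */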
Signed-off-by: Srinivas Ramana <sramana@codeaurora.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arm-msm@vger.kernel.org
Link: https://lkml.kernel.org/r/1545312957-8504-1-git-send-email-sramana@codeaurora.org
Signed-off-by: Sasha Levin <sashal@kernel.org>
Alexandre Belloni [Tue, 18 Dec 2018 21:34:21 +0000 (22:34 +0100)]
selftests: rtc: rtctest: add alarm test on minute boundary
[ Upstream commit
7b3027728f4d4f6763f4d7e771acfc9424cdd0e6 ]
Unfortunately, some RTCs don't have second resolution for alarms, so also
test for an alarm on a minute boundary.
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Shuah Khan <shuah@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Alexandre Belloni [Tue, 18 Dec 2018 21:34:20 +0000 (22:34 +0100)]
selftests: rtc: rtctest: fix alarm tests
[ Upstream commit
fdac94489c4d247088b3885875b39b3e1eb621ef ]
Return values from select() are not checked properly, so timeouts may not be
detected.
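A sketch of the kind of check that is needed, in kselftest-harness style;
the fd, fd_set and timeout names are illustrative:
        rc = select(self->fd + 1, &readfds, NULL, NULL, &tv);
        ASSERT_NE(-1, rc);      /* select() error */
        ASSERT_NE(0, rc);       /* rc == 0 means the alarm timed out */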
Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
Signed-off-by: Shuah Khan <shuah@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Dan Carpenter [Fri, 21 Dec 2018 20:42:52 +0000 (23:42 +0300)]
usb: gadget: Potential NULL dereference on allocation error
[ Upstream commit
df28169e1538e4a8bcd8b779b043e5aa6524545c ]
The source_sink_alloc_func() function is supposed to return error
pointers on error. The function is called from usb_get_function() which
doesn't check for NULL returns so it would result in an Oops.
Of course, in the current kernel, small allocations always succeed so
this doesn't affect runtime.
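A sketch of the convention being fixed; the allocation shown is illustrative:
        ss = kzalloc(sizeof(*ss), GFP_KERNEL);
        if (!ss)
                return ERR_PTR(-ENOMEM);   /* not NULL: callers use IS_ERR() */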
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Zeng Tao [Wed, 26 Dec 2018 11:22:00 +0000 (19:22 +0800)]
usb: dwc3: gadget: Fix the uninitialized link_state when udc starts
[ Upstream commit
88b1bb1f3b88e0bf20b05d543a53a5b99bd7ceb6 ]
Currently the link_state is uninitialized and its default value is 0 (U0)
before the first time we start the UDC; after we start the UDC and then
stop it, the link_state becomes undefined.
We may get the following warnings if we start the UDC again with
an undefined link_state:
WARNING: CPU: 0 PID: 327 at drivers/usb/dwc3/gadget.c:294 dwc3_send_gadget_ep_cmd+0x304/0x308
dwc3 100e0000.hidwc3_0: wakeup failed --> -22
[...]
Call Trace:
[<c010f270>] (unwind_backtrace) from [<c010b3d8>] (show_stack+0x10/0x14)
[<c010b3d8>] (show_stack) from [<c034a4dc>] (dump_stack+0x84/0x98)
[<c034a4dc>] (dump_stack) from [<c0118000>] (__warn+0xe8/0x100)
[<c0118000>] (__warn) from [<c0118050>] (warn_slowpath_fmt+0x38/0x48)
[<c0118050>] (warn_slowpath_fmt) from [<c0442ec0>] (dwc3_send_gadget_ep_cmd+0x304/0x308)
[<c0442ec0>] (dwc3_send_gadget_ep_cmd) from [<c0445e68>] (dwc3_ep0_start_trans+0x48/0xf4)
[<c0445e68>] (dwc3_ep0_start_trans) from [<c0446750>] (dwc3_ep0_out_start+0x64/0x80)
[<c0446750>] (dwc3_ep0_out_start) from [<c04451c0>] (__dwc3_gadget_start+0x1e0/0x278)
[<c04451c0>] (__dwc3_gadget_start) from [<c04452e0>] (dwc3_gadget_start+0x88/0x10c)
[<c04452e0>] (dwc3_gadget_start) from [<c045ee54>] (udc_bind_to_driver+0x88/0xbc)
[<c045ee54>] (udc_bind_to_driver) from [<c045f29c>] (usb_gadget_probe_driver+0xf8/0x140)
[<c045f29c>] (usb_gadget_probe_driver) from [<bf005424>] (gadget_dev_desc_UDC_store+0xac/0xc4 [libcomposite])
[<bf005424>] (gadget_dev_desc_UDC_store [libcomposite]) from [<c023d8e0>] (configfs_write_file+0xd4/0x160)
[<c023d8e0>] (configfs_write_file) from [<c01d51e8>] (__vfs_write+0x1c/0x114)
[<c01d51e8>] (__vfs_write) from [<c01d5ff4>] (vfs_write+0xa4/0x168)
[<c01d5ff4>] (vfs_write) from [<c01d6d40>] (SyS_write+0x3c/0x90)
[<c01d6d40>] (SyS_write) from [<c0107400>] (ret_fast_syscall+0x0/0x3c)
Signed-off-by: Zeng Tao <prime.zeng@hisilicon.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Bo He [Mon, 14 Jan 2019 07:48:32 +0000 (09:48 +0200)]
usb: dwc3: gadget: synchronize_irq dwc irq in suspend
[ Upstream commit
01c10880d24291a96a4ab0da773e3c5ce4d12da8 ]
We see the dwc3 endpoint stopped by an unwanted IRQ during a
suspend/resume test, which causes the dwc3 endpoint to fail to start
with the error "No Resource".
Here, add synchronize_irq() before suspend so that
pending IRQ handlers are allowed to complete.
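As a sketch, the quiescing step looks roughly like this, placed in the
gadget suspend path; the irq field name follows the dwc3 driver but is an
assumption here:
        /* wait for any in-flight interrupt handler to finish before
         * suspending, so it cannot stop an endpoint behind our back */
        synchronize_irq(dwc->irq_gadget);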
Signed-off-by: Bo He <bo.he@intel.com>
Signed-off-by: Yu Wang <yu.y.wang@intel.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Dan Carpenter [Mon, 17 Dec 2018 07:02:42 +0000 (10:02 +0300)]
thermal: int340x_thermal: Fix a NULL vs IS_ERR() check
[ Upstream commit
3fe931b31a4078395c1967f0495dcc9e5ec6b5e3 ]
The intel_soc_dts_iosf_init() function doesn't return NULL, it returns
error pointers.
Fixes:
4d0dd6c1576b ("Thermal/int340x/processor_thermal: Enable auxiliary DTS for Braswell")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Marek Vasut [Sat, 15 Dec 2018 00:55:19 +0000 (01:55 +0100)]
clk: vc5: Abort clock configuration without upstream clock
[ Upstream commit
2137a109a5e39c2bdccfffe65230ed3fadbaac0e ]
In case the upstream clock is not set, which can happen when the
VC5 has no valid upstream clock, the $src variable is used uninitialized
by regmap_update_bits(). Check for this condition and return -EINVAL
in that case.
Note that when the VC5 has no valid upstream clock, the VC5 cannot
operate correctly; that is a hardware property of the VC5. The
internal oscillator present in some VC5 models is also considered an
upstream clock.
Signed-off-by: Marek Vasut <marek.vasut+renesas@gmail.com>
Cc: Alexey Firago <alexey_firago@mentor.com>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Cc: Stephen Boyd <sboyd@kernel.org>
Cc: linux-renesas-soc@vger.kernel.org
[sboyd@kernel.org: Added comment about probe preventing this from
happening in the first place]
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Lubomir Rintel [Fri, 4 Jan 2019 22:05:49 +0000 (23:05 +0100)]
clk: sysfs: fix invalid JSON in clk_dump
[ Upstream commit
c6e909972ef87aa2a479269f46b84126f99ec6db ]
Add a missing comma so that the output is valid JSON format again.
Fixes:
9fba738a53dd ("clk: add duty cycle support")
Signed-off-by: Lubomir Rintel <lkundrak@v3.sk>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Dan Carpenter [Tue, 18 Dec 2018 08:22:41 +0000 (11:22 +0300)]
clk: tegra: dfll: Fix a potential Oop in remove()
[ Upstream commit
d39eca547f3ec67140a5d765a426eb157b978a59 ]
If tegra_dfll_unregister() fails then "soc" is an error pointer. We
should just return instead of dereferencing it.
Fixes:
1752c9ee23fb ("clk: tegra: dfll: Fix drvdata overwriting issue")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Stephen Boyd <sboyd@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Yizhuo [Thu, 3 Jan 2019 21:59:12 +0000 (13:59 -0800)]
ASoC: Variable "val" in function rt274_i2c_probe() could be uninitialized
[ Upstream commit
8c3590de0a378c2449fc1aec127cc693632458e4 ]
Inside rt274_i2c_probe(), if the regmap_read() call
returns -EINVAL, the local variable "val" is left uninitialized
but is still used in an if statement. This is potentially unsafe.
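A sketch of the safe pattern; the register, expected value and error
handling are placeholders, not the driver's exact code:
        unsigned int val;
        int ret;

        ret = regmap_read(rt274->regmap, reg, &val);
        if (ret < 0)
                return ret;     /* never look at "val" when the read failed */
        if (val != expected_id)
                return -ENODEV;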
Signed-off-by: Yizhuo <yzhai003@ucr.edu>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Dan Carpenter [Fri, 21 Dec 2018 09:06:58 +0000 (12:06 +0300)]
ALSA: compress: prevent potential divide by zero bugs
[ Upstream commit
678e2b44c8e3fec3afc7202f1996a4500a50be93 ]
The problem is seen in the q6asm_dai_compr_set_params() function:
ret = q6asm_map_memory_regions(dir, prtd->audio_client, prtd->phys,
(prtd->pcm_size / prtd->periods),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
prtd->periods);
In this code prtd->pcm_size is the buffer_size and prtd->periods comes
from params->buffer.fragments. If we allow the number of fragments to
be zero then it results in a divide by zero bug. One possible fix would
be to use prtd->pcm_count directly instead of using the division to
re-calculate it. But I decided that it doesn't really make sense to
allow zero fragments.
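A sketch of the parameter check implied above, placed where the compress
parameters are validated; the exact location and return value are assumed:
        /* a zero fragment count (or size) would later be used as a divisor */
        if (params->buffer.fragment_size == 0 || params->buffer.fragments == 0)
                return -EINVAL;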
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Rander Wang [Tue, 18 Dec 2018 08:24:54 +0000 (16:24 +0800)]
ASoC: Intel: Haswell/Broadwell: fix setting for .dynamic field
[ Upstream commit
906a9abc5de73c383af518f5a806f4be2993a0c7 ]
For some reason this field was set to zero while all other drivers use
.dynamic = 1 for front-ends. This change was tested on a Dell XPS13 and
has no impact on the existing legacy driver. The SOF driver also works
with this change, which enables it to override the fixed topology.
Signed-off-by: Rander Wang <rander.wang@linux.intel.com>
Acked-by: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Kristian H. Kristensen [Wed, 19 Dec 2018 16:57:41 +0000 (08:57 -0800)]
drm/msm: Unblock writer if reader closes file
[ Upstream commit
99c66bc051e7407fe0bf0607b142ec0be1a1d1dd ]
Prevents deadlock when fifo is full and reader closes file.
Signed-off-by: Kristian H. Kristensen <hoegsberg@chromium.org>
Signed-off-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
John Garry [Thu, 14 Feb 2019 16:37:57 +0000 (00:37 +0800)]
scsi: libsas: Fix rphy phy_identifier for PHYs with end devices attached
commit
ffeafdd2bf0b280d67ec1a47ea6287910d271f3f upstream.
The sysfs phy_identifier attribute for a sas_end_device comes from the rphy
phy_identifier value.
Currently this is not being set for rphys with an end device attached, so
we see incorrect symlinks from systemd disk/by-path:
root@localhost:~# ls -l /dev/disk/by-path/
total 0
lrwxrwxrwx 1 root root 9 Feb 13 12:26 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy0-lun-0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Feb 13 12:26 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy0-lun-0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Feb 13 12:26 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy0-lun-0-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 10 Feb 13 12:26 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy0-lun-0-part3 -> ../../sdc3
Indeed, each sas_end_device phy_identifier value is 0:
root@localhost:/# more sys/class/sas_device/end_device-0\:0\:2/phy_identifier
0
root@localhost:/# more sys/class/sas_device/end_device-0\:0\:10/phy_identifier
0
This patch fixes the discovery code to set the phy_identifier. With this,
we now get proper symlinks:
root@localhost:~# ls -l /dev/disk/by-path/
total 0
lrwxrwxrwx 1 root root 9 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy10-lun-0 -> ../../sdg
lrwxrwxrwx 1 root root 9 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy11-lun-0 -> ../../sdh
lrwxrwxrwx 1 root root 9 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy2-lun-0 -> ../../sda
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy2-lun-0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 9 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy3-lun-0 -> ../../sdb
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy3-lun-0-part1 -> ../../sdb1
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy3-lun-0-part2 -> ../../sdb2
lrwxrwxrwx 1 root root 9 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy4-lun-0 -> ../../sdc
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy4-lun-0-part1 -> ../../sdc1
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy4-lun-0-part2 -> ../../sdc2
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy4-lun-0-part3 -> ../../sdc3
lrwxrwxrwx 1 root root 9 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy5-lun-0 -> ../../sdd
lrwxrwxrwx 1 root root 9 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy7-lun-0 -> ../../sde
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy7-lun-0-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy7-lun-0-part2 -> ../../sde2
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy7-lun-0-part3 -> ../../sde3
lrwxrwxrwx 1 root root 9 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy8-lun-0 -> ../../sdf
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy8-lun-0-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy8-lun-0-part2 -> ../../sdf2
lrwxrwxrwx 1 root root 10 Feb 13 11:53 platform-HISI0162:01-sas-exp0x500e004aaaaaaa1f-phy8-lun-0-part3 -> ../../sdf3
Fixes:
2908d778ab3e ("[SCSI] aic94xx: new driver")
Reported-by: dann frazier <dann.frazier@canonical.com>
Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Jason Yan <yanaijie@huawei.com>
Tested-by: dann frazier <dann.frazier@canonical.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Toke Høiland-Jørgensen [Thu, 21 Feb 2019 17:29:36 +0000 (18:29 +0100)]
mac80211: Change default tx_sk_pacing_shift to 7
commit
5c14a4d05f68415af9e41a4e667d1748d41d1baf upstream.
When we did the original tests for the optimal value of sk_pacing_shift, we
came up with 6 ms of buffering as the default. Sadly, 6 is not a power of
two, so when picking the shift value I erred on the size of less buffering
and picked 4 ms instead of 8. This was probably wrong; those 2 ms of extra
buffering makes a larger difference than I thought.
So, change the default pacing shift to 7, which corresponds to 8 ms of
buffering. The point of diminishing returns really kicks in after 8 ms, and
so having this as a default should cut down on the need for extensive
per-device testing and overrides needed in the drivers.
Cc: stable@vger.kernel.org
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Long Li [Tue, 6 Nov 2018 04:00:00 +0000 (04:00 +0000)]
genirq/matrix: Improve target CPU selection for managed interrupts.
[ Upstream commit
e8da8794a7fd9eef1ec9a07f0d4897c68581c72b ]
On large systems with multiple devices of the same class (e.g. NVMe disks,
using managed interrupts), the kernel can affinitize these interrupts to a
small subset of CPUs instead of spreading them out evenly.
irq_matrix_alloc_managed() tries to select the CPU in the supplied cpumask
of possible target CPUs which has the lowest number of interrupt vectors
allocated.
This is done by searching for the CPU with the highest number of available
vectors. While this is correct for non-managed interrupts, it can select the
wrong CPU for managed interrupts. Under certain constellations this results in
affinitizing the managed interrupts of several devices to a single CPU in
a set.
The book keeping of available vectors works the following way:
1) Non-managed interrupts:
available is decremented when the interrupt is actually requested by
the device driver and a vector is assigned. It's incremented when the
interrupt and the vector are freed.
2) Managed interrupts:
Managed interrupts guarantee vector reservation when the MSI/MSI-X
functionality of a device is enabled, which is achieved by reserving
vectors in the bitmaps of the possible target CPUs. This reservation
decrements the available count on each possible target CPU.
When the interrupt is requested by the device driver then a vector is
allocated from the reserved region. The operation is reversed when the
interrupt is freed by the device driver. Neither of these operations
affect the available count.
The reservation persists up to the point where the MSI/MSI-X
functionality is disabled and only this operation increments the
available count again.
For non-managed interrupts the available count is the correct selection
criterion because the guaranteed reservations need to be taken into
account. Using the allocated counter could lead to a failing allocation in
the following situation (total vector space of 10 assumed):
CPU0 CPU1
available: 2 0
allocated: 5 3 <--- CPU1 is selected, but available space = 0
managed reserved: 3 7
while available yields the correct result.
For managed interrupts the available count is not the appropriate
selection criterion because as explained above the available count is not
affected by the actual vector allocation.
The following example illustrates that. Total vector space of 10
assumed. The starting point is:
CPU0 CPU1
available: 5 4
allocated: 2 3
managed reserved: 3 3
Allocating vectors for three non-managed interrupts will result in
affinitizing the first two to CPU0 and the third one to CPU1 because the
available count is adjusted with each allocation:
CPU0 CPU1
available: 5 4 <- Select CPU0 for 1st allocation
--> allocated: 3 3
available: 4 4 <- Select CPU0 for 2nd allocation
--> allocated: 4 3
available: 3 4 <- Select CPU1 for 3rd allocation
--> allocated: 4 4
But the allocation of three managed interrupts starting from the same
point will affinitize all of them to CPU0 because the available count is
not affected by the allocation (see above). So the end result is:
CPU0 CPU1
available: 5 4
allocated: 5 3
Introduce a "managed_allocated" field in struct cpumap to track the vector
allocation for managed interrupts separately. Use this information to
select the target CPU when a vector is allocated for a managed interrupt,
which results in more evenly distributed vector assignments. The above
example results in the following allocations:
CPU0 CPU1
managed_allocated: 0 0 <- Select CPU0 for 1st allocation
--> allocated: 3 3
managed_allocated: 1 0 <- Select CPU1 for 2nd allocation
--> allocated: 3 4
managed_allocated: 1 1 <- Select CPU0 for 3rd allocation
--> allocated: 4 4
The allocation of non-managed interrupts is not affected by this change and
is still evaluating the available count.
The overall distribution of interrupt vectors for both types of interrupts
might still not be perfectly even depending on the number of non-managed
and managed interrupts in a system, but due to the reservation guarantee
for managed interrupts this cannot be avoided.
Expose the new field in debugfs as well.
[ tglx: Clarified the background of the problem in the changelog and
described it independent of NVME ]
Signed-off-by: Long Li <longli@microsoft.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Michael Kelley <mikelley@microsoft.com>
Link: https://lkml.kernel.org/r/20181106040000.27316-1-longli@linuxonhyperv.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dou Liyang [Sat, 8 Sep 2018 17:58:38 +0000 (01:58 +0800)]
irq/matrix: Spread managed interrupts on allocation
[ Upstream commit
76f99ae5b54d48430d1f0c5512a84da0ff9761e0 ]
Linux spreads out non-managed interrupts across the possible target CPUs
to avoid vector space exhaustion.
Managed interrupts are treated differently, as for them the vectors are
reserved (with guarantee) when the interrupt descriptors are initialized.
When the interrupt is requested a real vector is assigned. The assignment
logic uses the first CPU in the affinity mask for assignment. If the
interrupt has more than one CPU in the affinity mask, which happens when a
multi queue device has less queues than CPUs, then doing the same search as
for non managed interrupts makes sense as it puts the interrupt on the
least interrupt plagued CPU. For single CPU affine vectors that's obviously
a NOOP.
Restructure the matrix allocation code so it does the 'best CPU' search, add
a sanity check for an empty affinity mask, and adapt the call site in the
x86 vector management code.
[ tglx: Added the empty mask check to the core and improved change log ]
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: hpa@zytor.com
Link: https://lkml.kernel.org/r/20180908175838.14450-2-dou_liyang@163.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dou Liyang [Sat, 8 Sep 2018 17:58:37 +0000 (01:58 +0800)]
irq/matrix: Split out the CPU selection code into a helper
[ Upstream commit
8ffe4e61c06a48324cfd97f1199bb9838acce2f2 ]
Linux finds the CPU which has the lowest vector allocation count to spread
out the non-managed interrupts across the possible target CPUs, but does
not do so for managed interrupts.
Split out the CPU selection code into a helper function for reuse. No
functional change.
Signed-off-by: Dou Liyang <douly.fnst@cn.fujitsu.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: hpa@zytor.com
Link: https://lkml.kernel.org/r/20180908175838.14450-1-dou_liyang@163.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Wed, 27 Feb 2019 09:09:03 +0000 (10:09 +0100)]
Linux 4.19.26
Russell King [Mon, 11 Feb 2019 15:04:24 +0000 (15:04 +0000)]
net: phylink: avoid resolving link state too early
commit
87454b6edc1b0143fdb3d9853285477e95af74a4 upstream.
During testing on Armada 388 platforms, it was found with a certain
module configuration that it was possible to trigger a kernel oops
during the module load process, caused by the phylink resolver being
triggered for a currently disabled interface.
This problem was introduced by changing the way the SFP registration
works, which now can result in the sfp link down notification being
called during phylink_create().
Fixes:
b5bfc21af5cb ("net: sfp: do not probe SFP module before we're attached")
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Sasha Levin <sashal@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nathan Chancellor [Thu, 1 Nov 2018 00:50:21 +0000 (17:50 -0700)]
pinctrl: max77620: Use define directive for max77620_pinconf_param values
commit
1f60652dd586d1b3eee7c4602892a97a62fa937a upstream.
Clang warns when one enumerated type is implicitly converted to another:
drivers/pinctrl/pinctrl-max77620.c:56:12: warning: implicit conversion
from enumeration type 'enum max77620_pinconf_param' to different
enumeration type 'enum pin_config_param' [-Wenum-conversion]
.param = MAX77620_ACTIVE_FPS_SOURCE,
^~~~~~~~~~~~~~~~~~~~~~~~~~
It is expected that pinctrl drivers can extend pin_config_param because
of the gap between PIN_CONFIG_END and PIN_CONFIG_MAX so this conversion
isn't an issue. Most drivers that take advantage of this define the
PIN_CONFIG variables as constants, rather than enumerated values. Do the
same thing here so that Clang no longer warns.
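A sketch of the define-based pattern; the specific values are illustrative,
the point being that the custom parameters are numbered past PIN_CONFIG_END:
        /* plain constants extend the generic pin_config_param space without
         * introducing a second enum type, so no enum-to-enum conversion */
        #define MAX77620_ACTIVE_FPS_SOURCE              (PIN_CONFIG_END + 1)
        #define MAX77620_ACTIVE_FPS_POWER_ON_SLOTS      (PIN_CONFIG_END + 2)
        #define MAX77620_ACTIVE_FPS_POWER_DOWN_SLOTS    (PIN_CONFIG_END + 3)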
Link: https://github.com/ClangBuiltLinux/linux/issues/139
Signed-off-by: Nathan Chancellor <natechancellor@gmail.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Mon, 8 Oct 2018 10:57:34 +0000 (12:57 +0200)]
udlfb: handle unplug properly
commit
68a958a915ca912b8ce71b9eea7445996f6e681e upstream.
The udlfb driver maintained an open count and cleaned up itself when the
count reached zero. But the console is also counted in the reference count
- so, if the user unplugged the device, the open count would not drop to
zero and the driver stayed loaded with console attached. If the user
re-plugged the adapter, it would create a device /dev/fb1, show green
screen and the access to the console would be lost.
The framebuffer subsystem has reference counting on its own - in order to
fix the unplug bug, we rely on the framebuffer reference counting. When the
user unplugs the adapter, we call unregister_framebuffer unconditionally.
unregister_framebuffer will unbind the console, wait until all users stop
using the framebuffer and then call the fb_destroy method. The fb_destroy
cleans up the USB driver.
This patch makes the following changes:
* Drop dlfb->kref and rely on implicit framebuffer reference counting
instead.
* dlfb_usb_disconnect calls unregister_framebuffer, the rest of driver
cleanup is done in the function dlfb_ops_destroy. dlfb_ops_destroy will
be called by the framebuffer subsystem when no processes have the
framebuffer open or mapped.
* We don't use a workqueue during initialization, but initialize directly
from dlfb_usb_probe. The workqueue could race with dlfb_usb_disconnect,
and this racing would produce various kinds of memory corruption.
* We use usb_get_dev and usb_put_dev to make sure that the USB subsystem
doesn't free the device under us.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
cc: Dave Airlie <airlied@redhat.com>
Cc: Bernie Thompson <bernie@plugable.com>,
Cc: Ladislav Michl <ladis@linux-mips.org>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Taehee Yoo [Mon, 5 Nov 2018 09:23:13 +0000 (18:23 +0900)]
netfilter: ipt_CLUSTERIP: fix sleep-in-atomic bug in clusterip_config_entry_put()
commit
2a61d8b883bbad26b06d2e6cc3777a697e78830d upstream.
proc_remove() can sleep, so it can't be called inside a spin_lock.
Hence proc_remove() is moved outside of the spin_lock, and a mutex is
added to synchronize creation and removal of the proc entry (config->pde).
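A rough sketch of the reordering; all lock, list and field names here are
illustrative, with the mutex being the new addition that serializes proc
entry creation and removal:
        mutex_lock(&cn->mutex);
        spin_lock_bh(&cn->lock);
        list_del_rcu(&c->list);
        spin_unlock_bh(&cn->lock);

        proc_remove(c->pde);    /* may sleep: now called without the spinlock */
        mutex_unlock(&cn->mutex);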
test commands:
SHELL#1
%while :; do iptables -A INPUT -p udp -i enp2s0 -d 192.168.1.100 \
--dport 9000 -j CLUSTERIP --new --hashmode sourceip \
--clustermac 01:00:5e:00:00:21 --total-nodes 3 --local-node 3; \
iptables -F; done
SHELL#2
%while :; do echo +1 > /proc/net/ipt_CLUSTERIP/192.168.1.100; \
echo -1 > /proc/net/ipt_CLUSTERIP/192.168.1.100; done
[ 2949.569864] BUG: sleeping function called from invalid context at kernel/sched/completion.c:99
[ 2949.579944] in_atomic(): 1, irqs_disabled(): 0, pid: 5472, name: iptables
[ 2949.587920] 1 lock held by iptables/5472:
[ 2949.592711] #0: 000000008f0ebcf2 (&(&cn->lock)->rlock){+...}, at: refcount_dec_and_lock+0x24/0x50
[ 2949.603307] CPU: 1 PID: 5472 Comm: iptables Tainted: G W 4.19.0-rc5+ #16
[ 2949.604212] Hardware name: To be filled by O.E.M. To be filled by O.E.M./Aptio CRB, BIOS 5.6.5 07/08/2015
[ 2949.604212] Call Trace:
[ 2949.604212] dump_stack+0xc9/0x16b
[ 2949.604212] ? show_regs_print_info+0x5/0x5
[ 2949.604212] ___might_sleep+0x2eb/0x420
[ 2949.604212] ? set_rq_offline.part.87+0x140/0x140
[ 2949.604212] ? _rcu_barrier_trace+0x400/0x400
[ 2949.604212] wait_for_completion+0x94/0x710
[ 2949.604212] ? wait_for_completion_interruptible+0x780/0x780
[ 2949.604212] ? __kernel_text_address+0xe/0x30
[ 2949.604212] ? __lockdep_init_map+0x10e/0x5c0
[ 2949.604212] ? __lockdep_init_map+0x10e/0x5c0
[ 2949.604212] ? __init_waitqueue_head+0x86/0x130
[ 2949.604212] ? init_wait_entry+0x1a0/0x1a0
[ 2949.604212] proc_entry_rundown+0x208/0x270
[ 2949.604212] ? proc_reg_get_unmapped_area+0x370/0x370
[ 2949.604212] ? __lock_acquire+0x4500/0x4500
[ 2949.604212] ? complete+0x18/0x70
[ 2949.604212] remove_proc_subtree+0x143/0x2a0
[ 2949.708655] ? remove_proc_entry+0x390/0x390
[ 2949.708655] clusterip_tg_destroy+0x27a/0x630 [ipt_CLUSTERIP]
[ ... ]
Fixes:
b3e456fce9f5 ("netfilter: ipt_CLUSTERIP: fix a race condition of proc file creation")
Signed-off-by: Taehee Yoo <ap420073@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fernando Fernandez Mancera [Mon, 21 Jan 2019 11:53:21 +0000 (12:53 +0100)]
netfilter: nfnetlink_osf: add missing fmatch check
commit
1a6a0951fc009f6d9fe8ebea2d2417d80d54097b upstream.
When we check the tcp options of a packet and it doesn't match the current
fingerprint, the tcp packet option pointer must be restored to its initial
value in order to do the proper tcp options check for the next fingerprint.
Here we can see an example.
Assuming the following fingerprint base with two lines:
S10:64:1:60:M*,S,T,N,W6: Linux:3.0::Linux 3.0
S20:64:1:60:M*,S,T,N,W7: Linux:4.19:arch:Linux 4.1
Where TCP options are the last field in the OS signature, all of them overlap
except for the last one, i.e. 'W6' versus 'W7'.
When a packet for Linux 4.19 comes in, the osf code finds no match because the
TCP options pointer was updated after checking the TCP options of the first
line.
Therefore, reset pointer back to where it should be.
Fixes:
11eeef41d5f6 ("netfilter: passive OS fingerprint xtables match")
Signed-off-by: Fernando Fernandez Mancera <ffmancera@riseup.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eli Cooper [Mon, 21 Jan 2019 10:45:27 +0000 (18:45 +0800)]
netfilter: ipv6: Don't preserve original oif for loopback address
commit
15df03c661cb362366ecfc3a21820cb934f3e4ca upstream.
Commit
508b09046c0f ("netfilter: ipv6: Preserve link scope traffic
original oif") made ip6_route_me_harder() keep the original oif for
link-local and multicast packets. However, it also affected packets
for the loopback address because it used rt6_need_strict().
REDIRECT rules in the OUTPUT chain rewrite the destination to loopback
address; thus its oif should not be preserved. This commit fixes the bug
where redirected local packets are dropped. Actually the packet was
not exactly dropped; instead it was sent out via the original oif rather
than lo. When a packet with daddr ::1 is sent to the router, it is
effectively dropped.
Fixes:
508b09046c0f ("netfilter: ipv6: Preserve link scope traffic original oif")
Signed-off-by: Eli Cooper <elicooper@gmx.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pablo Neira Ayuso [Wed, 13 Feb 2019 12:03:53 +0000 (13:03 +0100)]
netfilter: nft_compat: use-after-free when deleting targets
commit
753c111f655e38bbd52fc01321266633f022ebe2 upstream.
Fetch pointer to module before target object is released.
Fixes:
29e3880109e3 ("netfilter: nf_tables: fix use-after-free when deleting compat expressions")
Fixes:
0ca743a55991 ("netfilter: nf_tables: add compatibility layer for x_tables")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pablo Neira Ayuso [Fri, 15 Feb 2019 11:50:24 +0000 (12:50 +0100)]
netfilter: nf_tables: fix flush after rule deletion in the same batch
commit
23b7ca4f745f21c2b9cfcb67fdd33733b3ae7e66 upstream.
A flush after a rule deletion bogusly hits -ENOENT. Skip rules that have
already been deleted from nft_delrule_by_chain(), which is always called from
the flush path.
Fixes:
cf9dc09d0949 ("netfilter: nf_tables: fix missing rules flushing per table")
Reported-by: Phil Sutter <phil@nwl.cc>
Acked-by: Phil Sutter <phil@nwl.cc>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hangbin Liu [Fri, 22 Feb 2019 13:22:32 +0000 (21:22 +0800)]
Revert "bridge: do not add port to router list when receives query with source 0.0.0.0"
commit
278e2148c07559dd4ad8602f22366d61eb2ee7b7 upstream.
This reverts commit
5a2de63fd1a5 ("bridge: do not add port to router list
when receives query with source 0.0.0.0") and commit
0fe5119e267f ("net:
bridge: remove ipv6 zero address check in mcast queries")
The reason is that RFC 4541 is not a standard but only suggestive. Currently
we will elect 0.0.0.0 as Querier if there is no IP address configured on
the bridge. If we do not add the port which receives a query with source
0.0.0.0 to the router list, the IGMP reports will not be able to be forwarded
to the Querier, and IGMP data will also not be able to be forwarded to the
destination.
As Nikolay suggested, revert this change first and add a boolopt api
to disable none-zero election in future if needed.
Reported-by: Linus Lüssing <linus.luessing@c0d3.blue>
Reported-by: Sebastian Gottschall <s.gottschall@newmedia-net.de>
Fixes:
5a2de63fd1a5 ("bridge: do not add port to router list when receives query with source 0.0.0.0")
Fixes:
0fe5119e267f ("net: bridge: remove ipv6 zero address check in mcast queries")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Acked-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gao Xiang [Tue, 11 Dec 2018 07:17:50 +0000 (15:17 +0800)]
staging: erofs: unzip_vle_lz4.c,utils.c: rectify BUG_ONs
commit
b8e076a6ef253e763bfdb81e5c72bcc828b0fbeb upstream.
Remove all redundant BUG_ONs, and turn the remaining
useful usages into DBG_BUGONs.
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gao Xiang [Tue, 11 Dec 2018 07:17:49 +0000 (15:17 +0800)]
staging: erofs: unzip_{pagevec.h,vle.c}: rectify BUG_ONs
commit
70b17991d89554cdd16f3e4fb0179bcc03c808d9 upstream.
Remove all redundant BUG_ONs, and turn the remaining
useful usages into DBG_BUGONs.
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gao Xiang [Wed, 5 Dec 2018 13:23:13 +0000 (21:23 +0800)]
staging: erofs: {dir,inode,super}.c: rectify BUG_ONs
commit
8b987bca2d09649683cbe496419a011df8c08493 upstream.
Remove all redundant BUG_ONs, and turn the remaining
useful usages into DBG_BUGONs.
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gao Xiang [Thu, 22 Nov 2018 17:16:03 +0000 (01:16 +0800)]
staging: erofs: add a full barrier in erofs_workgroup_unfreeze
commit
948bbdb1818b7ad6e539dad4fbd2dd4650793ea9 upstream.
Just like other generic locks, insert a full barrier
in case of memory reordering.
Reviewed-by: Chao Yu <yuchao0@huawei.com>
Signed-off-by: Gao Xiang <gaoxiang25@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>