platform/kernel/linux-exynos.git
8 years ago  ideapad-laptop: Add Lenovo ideapad Y700-17ISK to no_hw_rfkill dmi list
Josh Boyer [Thu, 10 Dec 2015 02:12:52 +0000 (21:12 -0500)]
ideapad-laptop: Add Lenovo ideapad Y700-17ISK to no_hw_rfkill dmi list

[ Upstream commit edde316acb5f07c04abf09a92f59db5d2efd14e2 ]

One of the newest ideapad models also lacks a physical hw rfkill switch,
and trying to read the hw rfkill switch through the ideapad module
causes it to always be reported as blocked, breaking wifi.

Fix it by adding this model to the DMI list.
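
For illustration, this is the shape of the DMI quirk entry being added (the
match strings follow the commit subject; the table name and exact strings are
an approximation, not a copy of the patch):

  static const struct dmi_system_id no_hw_rfkill_list[] = {
      {
          .ident = "Lenovo ideapad Y700-17ISK",
          .matches = {
              DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
              DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo ideapad Y700-17ISK"),
          },
      },
      {}
  };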

BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1286293
Cc: stable@vger.kernel.org
Signed-off-by: Josh Boyer <jwboyer@fedoraproject.org>
Signed-off-by: Darren Hart <dvhart@linux.intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  IB/qib: Support creating qps with GFP_NOIO flag
Vinit Agnihotri [Mon, 11 Jan 2016 17:57:25 +0000 (12:57 -0500)]
IB/qib: Support creating qps with GFP_NOIO flag

[ Upstream commit fbbeb8632bf0b46ab44cfcedc4654cd7831b7161 ]

The current code is problematic when QP creation via ipoib is used to
support NFS, and NFS needs to do IO for paging purposes. In that case, the
GFP_KERNEL allocation in qib_qp.c causes a deadlock in tight memory
situations.

This fix adds support for creating queue pairs with the GFP_NOIO flag, for
connected mode only, so that queue pair creation fails cleanly in those situations.
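
A rough sketch of the approach (the create flag and surrounding context are
illustrative, not the exact diff):

  gfp_t gfp = (init_attr->create_flags & IB_QP_CREATE_USE_GFP_NOIO) ?
              GFP_NOIO : GFP_KERNEL;

  qp = kzalloc(sizeof(*qp), gfp);
  if (!qp)
      return ERR_PTR(-ENOMEM);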

Cc: <stable@vger.kernel.org> # 3.16+
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Vinit Agnihotri <vinit.abhay.agnihotri@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  IB/qib: fix mcast detach when qp not attached
Mike Marciniszyn [Thu, 7 Jan 2016 21:44:10 +0000 (16:44 -0500)]
IB/qib: fix mcast detach when qp not attached

[ Upstream commit 09dc9cd6528f5b52bcbd3292a6312e762c85260f ]

The code produces the following trace:

[1750924.419007] general protection fault: 0000 [#3] SMP
[1750924.420364] Modules linked in: nfnetlink autofs4 rpcsec_gss_krb5 nfsv4
dcdbas rfcomm bnep bluetooth nfsd auth_rpcgss nfs_acl dm_multipath nfs lockd
scsi_dh sunrpc fscache radeon ttm drm_kms_helper drm serio_raw parport_pc
ppdev i2c_algo_bit lpc_ich ipmi_si ib_mthca ib_qib dca lp parport ib_ipoib
mac_hid ib_cm i3000_edac ib_sa ib_uverbs edac_core ib_umad ib_mad ib_core
ib_addr tg3 ptp dm_mirror dm_region_hash dm_log psmouse pps_core
[1750924.420364] CPU: 1 PID: 8401 Comm: python Tainted: G D
3.13.0-39-generic #66-Ubuntu
[1750924.420364] Hardware name: Dell Computer Corporation PowerEdge
860/0XM089, BIOS A04 07/24/2007
[1750924.420364] task: ffff8800366a9800 ti: ffff88007af1c000 task.ti:
ffff88007af1c000
[1750924.420364] RIP: 0010:[<ffffffffa0131d51>] [<ffffffffa0131d51>]
qib_mcast_qp_free+0x11/0x50 [ib_qib]
[1750924.420364] RSP: 0018:ffff88007af1dd70  EFLAGS: 00010246
[1750924.420364] RAX: 0000000000000001 RBX: ffff88007b822688 RCX:
000000000000000f
[1750924.420364] RDX: ffff88007b822688 RSI: ffff8800366c15a0 RDI:
6764697200000000
[1750924.420364] RBP: ffff88007af1dd78 R08: 0000000000000001 R09:
0000000000000000
[1750924.420364] R10: 0000000000000011 R11: 0000000000000246 R12:
ffff88007baa1d98
[1750924.420364] R13: ffff88003ecab000 R14: ffff88007b822660 R15:
0000000000000000
[1750924.420364] FS:  00007ffff7fd8740(0000) GS:ffff88007fc80000(0000)
knlGS:0000000000000000
[1750924.420364] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[1750924.420364] CR2: 00007ffff597c750 CR3: 000000006860b000 CR4:
00000000000007e0
[1750924.420364] Stack:
[1750924.420364]  ffff88007b822688 ffff88007af1ddf0 ffffffffa0132429
000000007af1de20
[1750924.420364]  ffff88007baa1dc8 ffff88007baa0000 ffff88007af1de70
ffffffffa00cb313
[1750924.420364]  00007fffffffde88 0000000000000000 0000000000000008
ffff88003ecab000
[1750924.420364] Call Trace:
[1750924.420364]  [<ffffffffa0132429>] qib_multicast_detach+0x1e9/0x350
[ib_qib]
[1750924.568035]  [<ffffffffa00cb313>] ? ib_uverbs_modify_qp+0x323/0x3d0
[ib_uverbs]
[1750924.568035]  [<ffffffffa0092d61>] ib_detach_mcast+0x31/0x50 [ib_core]
[1750924.568035]  [<ffffffffa00cc213>] ib_uverbs_detach_mcast+0x93/0x170
[ib_uverbs]
[1750924.568035]  [<ffffffffa00c61f6>] ib_uverbs_write+0xc6/0x2c0 [ib_uverbs]
[1750924.568035]  [<ffffffff81312e68>] ? apparmor_file_permission+0x18/0x20
[1750924.568035]  [<ffffffff812d4cd3>] ? security_file_permission+0x23/0xa0
[1750924.568035]  [<ffffffff811bd214>] vfs_write+0xb4/0x1f0
[1750924.568035]  [<ffffffff811bdc49>] SyS_write+0x49/0xa0
[1750924.568035]  [<ffffffff8172f7ed>] system_call_fastpath+0x1a/0x1f
[1750924.568035] Code: 66 2e 0f 1f 84 00 00 00 00 00 31 c0 5d c3 66 2e 0f 1f
84 00 00 00 00 00 66 90 0f 1f 44 00 00 55 48 89 e5 53 48 89 fb 48 8b 7f 10
<f0> ff 8f 40 01 00 00 74 0e 48 89 df e8 8e f8 06 e1 5b 5d c3 0f
[1750924.568035] RIP  [<ffffffffa0131d51>] qib_mcast_qp_free+0x11/0x50
[ib_qib]
[1750924.568035]  RSP <ffff88007af1dd70>
[1750924.650439] ---[ end trace 73d5d4b3f8ad4851 ]

The fix is to note the qib_mcast_qp that was found.  If none is found, then
return -EINVAL to indicate the error.
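
Conceptually the detach path becomes (identifiers are illustrative, not the
exact patch):

  struct qib_mcast_qp *delp = NULL;

  list_for_each_entry(p, &mcast->qp_list, list) {
      if (p->qp == qp) {
          delp = p;
          break;
      }
  }

  if (!delp)
      return -EINVAL;   /* this qp was never attached to the mcast group */

  qib_mcast_qp_free(delp);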

Cc: <stable@vger.kernel.org>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reported-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  crypto: crc32c - Fix crc32c soft dependency
Jean Delvare [Mon, 18 Jan 2016 16:06:05 +0000 (17:06 +0100)]
crypto: crc32c - Fix crc32c soft dependency

[ Upstream commit fd7f6727102a1ccf6b4c1dfcc631f9b546526b26 ]

I don't think it makes sense for a module to have a soft dependency
on itself. This seems quite cyclic by nature and I can't see what
purpose it could serve.

OTOH libcrc32c calls crypto_alloc_shash("crc32c", 0, 0) so it pretty
much assumes that some incarnation of the "crc32c" hash algorithm has
been loaded. Therefore it makes sense to have the soft dependency
there (as crc-t10dif does.)
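
In other words, the soft dependency moves out of the crc32c implementation
and into libcrc32c, roughly:

  /* lib/libcrc32c.c */
  MODULE_SOFTDEP("pre: crc32c");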

Cc: stable@vger.kernel.org
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  xfs: log mount failures don't wait for buffers to be released
Dave Chinner [Mon, 18 Jan 2016 21:28:10 +0000 (08:28 +1100)]
xfs: log mount failures don't wait for buffers to be released

[ Upstream commit 85bec5460ad8e05e0a8d70fb0f6750eb719ad092 ]

Recently I've been seeing xfs/051 fail on 1k block size filesystems.
Trying to trace the events during the test led to the problem going
away, indicating that it was a race condition that led to this
ASSERT failure:

XFS: Assertion failed: atomic_read(&pag->pag_ref) == 0, file: fs/xfs/xfs_mount.c, line: 156
.....
[<ffffffff814e1257>] xfs_free_perag+0x87/0xb0
[<ffffffff814e21b9>] xfs_mountfs+0x4d9/0x900
[<ffffffff814e5dff>] xfs_fs_fill_super+0x3bf/0x4d0
[<ffffffff811d8800>] mount_bdev+0x180/0x1b0
[<ffffffff814e3ff5>] xfs_fs_mount+0x15/0x20
[<ffffffff811d90a8>] mount_fs+0x38/0x170
[<ffffffff811f4347>] vfs_kern_mount+0x67/0x120
[<ffffffff811f7018>] do_mount+0x218/0xd60
[<ffffffff811f7e5b>] SyS_mount+0x8b/0xd0

When I finally caught it with tracing enabled, I saw that AG 2 had
an elevated reference count and a buffer was responsible for it. I
tracked down the specific buffer, and found that it was missing the
final reference count release that would put it back on the LRU and
hence be found by xfs_wait_buftarg() calls in the log mount failure
handling.

The last four traces for the buffer before the assert were (trimmed
for relevance)

kworker/0:1-5259   xfs_buf_iodone:        hold 2  lock 0 flags ASYNC
kworker/0:1-5259   xfs_buf_ioerror:       hold 2  lock 0 error -5
mount-7163    xfs_buf_lock_done:     hold 2  lock 0 flags ASYNC
mount-7163    xfs_buf_unlock:        hold 2  lock 1 flags ASYNC

This is an async write that is completing, so there's nobody waiting
for it directly.  Hence we call xfs_buf_relse() once all the
processing is complete. That does:

static inline void xfs_buf_relse(xfs_buf_t *bp)
{
    xfs_buf_unlock(bp);
    xfs_buf_rele(bp);
}

Now, it's clear that mount is waiting on the buffer lock, and that
it has been released by xfs_buf_relse() and gained by mount. This is
expected, because at this point the mount process is in
xfs_buf_delwri_submit() waiting for all the IO it submitted to
complete.

The mount process, however, is waiting on the lock for the buffer
because it is in xfs_buf_delwri_submit(). This waits for IO
completion, but it doesn't wait for the buffer reference owned by
the IO to go away. The mount process collects all the completions,
fails the log recovery, and the higher level code then calls
xfs_wait_buftarg() to free all the remaining buffers in the
filesystem.

The issue is that on unlocking the buffer, the scheduler has decided
that the mount process has higher priority than the kworker
thread that is running the IO completion, and so immediately
switched contexts to the mount process from the semaphore unlock
code, hence preventing the kworker thread from finishing the IO
completion and releasing the IO reference to the buffer.

Hence by the time that xfs_wait_buftarg() is run, the buffer still
has an active reference and so isn't on the LRU list that the
function walks to free the remaining buffers. Hence we miss that
buffer and continue onwards to tear down the mount structures,
at which time we find a stray reference count on the perag
structure. On a non-debug kernel, this will be ignored and the
structure torn down and freed. Hence when the kworker thread is then
rescheduled and the buffer released and freed, it will access a
freed perag structure.

The problem here is that when the log mount fails, we still need to
quiesce the log to ensure that the IO workqueues have returned to
idle before we run xfs_wait_buftarg(). By synchronising the
workqueues, we ensure that all IO completions are fully processed,
not just to the point where buffers have been unlocked. This ensures
we don't end up in the situation above.

cc: <stable@vger.kernel.org> # 3.18
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  ARM: debug-ll: fix BCM63xx entry for multiplatform
Arnd Bergmann [Mon, 18 Jan 2016 09:45:00 +0000 (10:45 +0100)]
ARM: debug-ll: fix BCM63xx entry for multiplatform

[ Upstream commit 6c54809977de3c9e2ef9e9934a2c6625f7e161e7 ]

During my randconfig build testing, I found that a kernel with
DEBUG_AT91_UART and ARCH_BCM_63XX fails to build:

arch/arm/include/debug/at91.S:18:0: error: "CONFIG_DEBUG_UART_VIRT" redefined [-Werror]

It turns out that the DEBUG_UART_BCM63XX option is enabled whenever
the ARCH_BCM_63XX is, and that breaks multiplatform kernels because
we then end up using the UART address from BCM63XX rather than the
one we actually configured (if any).

This changes the BCM63XX options to only have one Kconfig option,
and only enable that if the user explicitly turns it on.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: b51312bebfa4 ("ARM: BCM63XX: add low-level UART debug support")
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  dmaengine: at_xdmac: fix resume for cyclic transfers
Songjun Wu [Mon, 18 Jan 2016 10:14:44 +0000 (11:14 +0100)]
dmaengine: at_xdmac: fix resume for cyclic transfers

[ Upstream commit 611dcadb01c89d1d3521450c05a4ded332e5a32d ]

When having cyclic transfers, the channel was paused when performing
suspend but was not correctly resumed.

Signed-off-by: Songjun Wu <songjun.wu@atmel.com>
Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Fixes: e1f7c9eee707 ("dmaengine: at_xdmac: creation of the atmel
eXtended DMA Controller driver")
Cc: <stable@vger.kernel.org> # 4.1 and later
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  crypto: algif_hash - Fix race condition in hash_check_key
Herbert Xu [Fri, 15 Jan 2016 14:01:08 +0000 (22:01 +0800)]
crypto: algif_hash - Fix race condition in hash_check_key

[ Upstream commit ad46d7e33219218605ea619e32553daf4f346b9f ]

We need to lock the child socket in hash_check_key as otherwise
two simultaneous calls can cause the parent socket to be freed.
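
A simplified sketch of the shape of the fix (not the exact diff):

  static int hash_check_key(struct socket *sock)
  {
      struct sock *sk = sock->sk;
      int err = 0;

      lock_sock(sk);    /* serialize callers on this child socket */
      /* ... check/copy the key state from the parent under the lock ... */
      release_sock(sk);
      return err;
  }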

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  crypto: af_alg - Forbid bind(2) when nokey child sockets are present
Herbert Xu [Wed, 13 Jan 2016 07:03:32 +0000 (15:03 +0800)]
crypto: af_alg - Forbid bind(2) when nokey child sockets are present

[ Upstream commit a6a48c565f6f112c6983e2a02b1602189ed6e26e ]

This patch forbids the calling of bind(2) when there are child
sockets created by accept(2) in existence, even if they are created
on the nokey path.

This is needed as those child sockets have references to the tfm
object which bind(2) will destroy.

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  crypto: algif_hash - Remove custom release parent function
Herbert Xu [Wed, 13 Jan 2016 07:00:36 +0000 (15:00 +0800)]
crypto: algif_hash - Remove custom release parent function

[ Upstream commit f1d84af1835846a5a2b827382c5848faf2bb0e75 ]

This patch removes the custom release parent function as the
generic af_alg_release_parent now works for nokey sockets too.

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  crypto: af_alg - Allow af_alg_release_parent to be called on nokey path
Herbert Xu [Wed, 13 Jan 2016 06:59:03 +0000 (14:59 +0800)]
crypto: af_alg - Allow af_alg_release_parent to be called on nokey path

[ Upstream commit 6a935170a980024dd29199e9dbb5c4da4767a1b9 ]

This patch allows af_alg_release_parent to be called even for
nokey sockets.

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  crypto: algif_hash - Require setkey before accept(2)
Herbert Xu [Fri, 8 Jan 2016 13:31:04 +0000 (21:31 +0800)]
crypto: algif_hash - Require setkey before accept(2)

[ Upstream commit 6de62f15b581f920ade22d758f4c338311c2f0d4 ]

Hash implementations that require a key may crash if you use
them without setting a key.  This patch adds the necessary checks
so that if you do attempt to use them without a key that we return
-ENOKEY instead of proceeding.

This patch also adds a compatibility path to support old applications
that do accept(2) before setkey.

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  crypto: hash - Add crypto_ahash_has_setkey
Herbert Xu [Fri, 8 Jan 2016 13:28:26 +0000 (21:28 +0800)]
crypto: hash - Add crypto_ahash_has_setkey

[ Upstream commit a5596d6332787fd383b3b5427b41f94254430827 ]

This patch adds a way for ahash users to determine whether a key
is required by a crypto_ahash transform.
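
A hedged usage sketch of the new helper from a caller such as algif_hash
(the surrounding context and the ctx field name are assumptions):

  if (crypto_ahash_has_setkey(tfm) && !ctx->has_key)
      return -ENOKEY;   /* a key is required but none was set */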

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  crypto: af_alg - Add nokey compatibility path
Herbert Xu [Mon, 4 Jan 2016 04:35:18 +0000 (13:35 +0900)]
crypto: af_alg - Add nokey compatibility path

[ Upstream commit 37766586c965d63758ad542325a96d5384f4a8c9 ]

This patch adds a compatibility path to support old applications
that do accept(2) before setkey.

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  crypto: af_alg - Fix socket double-free when accept fails
Herbert Xu [Wed, 30 Dec 2015 12:24:17 +0000 (20:24 +0800)]
crypto: af_alg - Fix socket double-free when accept fails

[ Upstream commit a383292c86663bbc31ac62cc0c04fc77504636a6 ]

When we fail an accept(2) call we will end up freeing the socket
twice, once due to the direct sk_free call and once again through
newsock.

This patch fixes this by removing the sk_free call.

Cc: stable@vger.kernel.org
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  crypto: af_alg - Disallow bind/setkey/... after accept(2)
Herbert Xu [Wed, 30 Dec 2015 03:47:53 +0000 (11:47 +0800)]
crypto: af_alg - Disallow bind/setkey/... after accept(2)

[ Upstream commit c840ac6af3f8713a71b4d2363419145760bd6044 ]

Each af_alg parent socket obtained by socket(2) corresponds to a
tfm object once bind(2) has succeeded.  An accept(2) call on that
parent socket creates a context which then uses the tfm object.

Therefore as long as any child sockets created by accept(2) exist
the parent socket must not be modified or freed.

This patch guarantees this by using locks and a reference count
on the parent socket.  Any attempt to modify the parent socket will
fail with EBUSY.

Cc: stable@vger.kernel.org
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  printk: do cond_resched() between lines while outputting to consoles
Tejun Heo [Sat, 16 Jan 2016 00:58:24 +0000 (16:58 -0800)]
printk: do cond_resched() between lines while outputting to consoles

[ Upstream commit 8d91f8b15361dfb438ab6eb3b319e2ded43458ff ]

@console_may_schedule tracks whether console_sem was acquired through
lock or trylock.  If the former, we're inside a sleepable context and
console_conditional_schedule() performs cond_resched().  This allows
console drivers which use console_lock for synchronization to yield
while performing time-consuming operations such as scrolling.

However, the actual console outputting is performed while holding
irq-safe logbuf_lock, so console_unlock() clears @console_may_schedule
before starting outputting lines.  Also, only a few drivers call
console_conditional_schedule() to begin with.  This means that when a
lot of lines need to be output by console_unlock(), for example on a
console registration, the task doing console_unlock() may not yield for
a long time on a non-preemptible kernel.

If this happens with a slow console device, for example a serial
console, the outputting task may occupy the cpu for a very long time.
Long enough to trigger softlockup and/or RCU stall warnings, which in
turn pile more messages, sometimes enough to trigger the next cycle of
warnings incapacitating the system.

Fix it by making console_unlock() insert cond_resched() between lines if
@console_may_schedule.
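
Schematically, console_unlock() now does something like this (simplified
sketch):

  int do_cond_resched = console_may_schedule;

  for (;;) {
      /* pick the next log record and write it to the consoles */

      if (do_cond_resched)
          cond_resched();
  }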

Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Calvin Owens <calvinowens@fb.com>
Acked-by: Jan Kara <jack@suse.com>
Cc: Dave Jones <davej@codemonkey.org.uk>
Cc: Kyle McMartin <kyle@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  kernel/panic.c: turn off locks debug before releasing console lock
Vitaly Kuznetsov [Fri, 20 Nov 2015 23:57:24 +0000 (15:57 -0800)]
kernel/panic.c: turn off locks debug before releasing console lock

[ Upstream commit 7625b3a0007decf2b135cb47ca67abc78a7b1bc1 ]

Commit 08d78658f393 ("panic: release stale console lock to always get the
logbuf printed out") introduced an unwanted bad unlock balance report when
panic() is called directly and not from OOPS (e.g.  from out_of_memory()).
The difference is that in case of OOPS we disable locks debug in
oops_enter() and on direct panic call nobody does that.

Fixes: 08d78658f393 ("panic: release stale console lock to always get the logbuf printed out")
Reported-by: kernel test robot <ying.huang@linux.intel.com>
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Baoquan He <bhe@redhat.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Xie XiuQi <xiexiuqi@huawei.com>
Cc: Seth Jennings <sjenning@redhat.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Petr Mladek <pmladek@suse.cz>
Cc: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  panic: release stale console lock to always get the logbuf printed out
Vitaly Kuznetsov [Sat, 7 Nov 2015 00:32:58 +0000 (16:32 -0800)]
panic: release stale console lock to always get the logbuf printed out

[ Upstream commit 08d78658f393fefaa2e6507ea052c6f8ef4002a2 ]

In some cases we may end up killing the CPU holding the console lock
while still having valuable data in logbuf. E.g. I'm observing the
following:

- A crash is happening on one CPU and console_unlock() is being called on
  some other.

- console_unlock() tries to print out the buffer before releasing the lock
  and on slow console it takes time.

- in the meanwhile crashing CPU does lots of printk()-s with valuable data
  (which go to the logbuf) and sends IPIs to all other CPUs.

- console_unlock() finishes printing previous chunk and enables interrupts
  before trying to print out the rest, the CPU catches the IPI and never
  releases console lock.

This is not the only possible case: in VT/fb subsystems we have many other
console_lock()/console_unlock() users.  Non-masked interrupts (or
receiving NMI in case of extreme slowness) will have the same result.
Getting the whole console buffer printed out on crash should be top
priority.

[akpm@linux-foundation.org: tweak comment text]
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com>
Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Baoquan He <bhe@redhat.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Xie XiuQi <xiexiuqi@huawei.com>
Cc: Seth Jennings <sjenning@redhat.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  memcg: only free spare array when readers are done
Martijn Coenen [Sat, 16 Jan 2016 00:57:49 +0000 (16:57 -0800)]
memcg: only free spare array when readers are done

[ Upstream commit 6611d8d76132f86faa501de9451a89bf23fb2371 ]

A spare array holding mem cgroup threshold events is kept around to make
sure we can always safely deregister an event and have an array to store
the new set of events in.

In the scenario where we're going from 1 to 0 registered events, the
pointer to the primary array containing 1 event is copied to the spare
slot, and then the spare slot is freed because no events are left.
However, it is freed before calling synchronize_rcu(), which means
readers may still be accessing threshold->primary after it is freed.

Fixed by only freeing after synchronize_rcu().
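
In effect the unregister path becomes (sketch):

  rcu_assign_pointer(thresholds->primary, new);   /* new may be NULL */

  /* wait for readers still walking the old arrays */
  synchronize_rcu();

  /* only now is it safe to drop the spare array */
  kfree(thresholds->spare);
  thresholds->spare = NULL;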

Signed-off-by: Martijn Coenen <maco@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  mm: soft-offline: check return value in second __get_any_page() call
Naoya Horiguchi [Sat, 16 Jan 2016 00:54:03 +0000 (16:54 -0800)]
mm: soft-offline: check return value in second __get_any_page() call

[ Upstream commit d96b339f453997f2f08c52da3f41423be48c978f ]

I saw the following BUG_ON triggered in a testcase where a process calls
madvise(MADV_SOFT_OFFLINE) on thps, along with a background process that
calls migratepages command repeatedly (doing ping-pong among different
NUMA nodes) for the first process:

   Soft offlining page 0x60000 at 0x700000600000
   __get_any_page: 0x60000 free buddy page
   page:ffffea0001800000 count:0 mapcount:-127 mapping:          (null) index:0x1
   flags: 0x1fffc0000000000()
   page dumped because: VM_BUG_ON_PAGE(atomic_read(&page->_count) == 0)
   ------------[ cut here ]------------
   kernel BUG at /src/linux-dev/include/linux/mm.h:342!
   invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
   Modules linked in: cfg80211 rfkill crc32c_intel serio_raw virtio_balloon i2c_piix4 virtio_blk virtio_net ata_generic pata_acpi
   CPU: 3 PID: 3035 Comm: test_alloc_gene Tainted: G           O    4.4.0-rc8-v4.4-rc8-160107-1501-00000-rc8+ #74
   Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
   task: ffff88007c63d5c0 ti: ffff88007c210000 task.ti: ffff88007c210000
   RIP: 0010:[<ffffffff8118998c>]  [<ffffffff8118998c>] put_page+0x5c/0x60
   RSP: 0018:ffff88007c213e00  EFLAGS: 00010246
   Call Trace:
     put_hwpoison_page+0x4e/0x80
     soft_offline_page+0x501/0x520
     SyS_madvise+0x6bc/0x6f0
     entry_SYSCALL_64_fastpath+0x12/0x6a
   Code: 8b fc ff ff 5b 5d c3 48 89 df e8 b0 fa ff ff 48 89 df 31 f6 e8 c6 7d ff ff 5b 5d c3 48 c7 c6 08 54 a2 81 48 89 df e8 a4 c5 01 00 <0f> 0b 66 90 66 66 66 66 90 55 48 89 e5 41 55 41 54 53 48 8b 47
   RIP  [<ffffffff8118998c>] put_page+0x5c/0x60
    RSP <ffff88007c213e00>

The root cause resides in get_any_page() which retries to get a refcount
of the page to be soft-offlined.  This function calls
put_hwpoison_page(), expecting that the target page is put back on the LRU
list.  But it can also be freed to the buddy allocator.  So the second check
needs to handle that case.

Fixes: af8fae7c0886 ("mm/memory-failure.c: clean up soft_offline_page()")
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Sasha Levin <sasha.levin@oracle.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Jerome Marchand <jmarchan@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: <stable@vger.kernel.org> [3.9+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  zram: try vmalloc() after kmalloc()
Kyeongdon Kim [Thu, 14 Jan 2016 23:22:29 +0000 (15:22 -0800)]
zram: try vmalloc() after kmalloc()

[ Upstream commit d913897abace843bba20249f3190167f7895e9c3 ]

When we're using LZ4 multi compression streams for zram swap, we found
page allocation failure messages while running system tests.  That happened
not just once, but a few times (2 - 5 times per test).  Also, some failure
cases kept occurring for order 3 allocation attempts.

In order to allocate the parallel compression private data, we call
kzalloc() with order 2/3 at runtime (lzo/lz4).  But if there is no order
2/3 sized memory available at that time, the page allocation fails.  This
patch makes it use vmalloc() as a fallback for kmalloc(), which prevents
the page allocation failure warning.

After using this, we never saw the warning message in the running tests, and
it could reduce process startup latency by about 60-120ms in each case.
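
The fallback itself is small; roughly (the exact gfp flags are an
approximation, treat this as a sketch):

  ret = kzalloc(LZ4_MEM_COMPRESS, GFP_NOIO | __GFP_NOWARN);
  if (!ret)
      ret = __vmalloc(LZ4_MEM_COMPRESS,
                      GFP_NOIO | __GFP_NOWARN | __GFP_ZERO | __GFP_HIGHMEM,
                      PAGE_KERNEL);
  return ret;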

For reference a call trace :

    Binder_1: page allocation failure: order:3, mode:0x10c0d0
    CPU: 0 PID: 424 Comm: Binder_1 Tainted: GW 3.10.49-perf-g991d02b-dirty #20
    Call trace:
      dump_backtrace+0x0/0x270
      show_stack+0x10/0x1c
      dump_stack+0x1c/0x28
      warn_alloc_failed+0xfc/0x11c
      __alloc_pages_nodemask+0x724/0x7f0
      __get_free_pages+0x14/0x5c
      kmalloc_order_trace+0x38/0xd8
      zcomp_lz4_create+0x2c/0x38
      zcomp_strm_alloc+0x34/0x78
      zcomp_strm_multi_find+0x124/0x1ec
      zcomp_strm_find+0xc/0x18
      zram_bvec_rw+0x2fc/0x780
      zram_make_request+0x25c/0x2d4
      generic_make_request+0x80/0xbc
      submit_bio+0xa4/0x15c
      __swap_writepage+0x218/0x230
      swap_writepage+0x3c/0x4c
      shrink_page_list+0x51c/0x8d0
      shrink_inactive_list+0x3f8/0x60c
      shrink_lruvec+0x33c/0x4cc
      shrink_zone+0x3c/0x100
      try_to_free_pages+0x2b8/0x54c
      __alloc_pages_nodemask+0x514/0x7f0
      __get_free_pages+0x14/0x5c
      proc_info_read+0x50/0xe4
      vfs_read+0xa0/0x12c
      SyS_read+0x44/0x74
    DMA: 3397*4kB (MC) 26*8kB (RC) 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB
         0*512kB 0*1024kB 0*2048kB 0*4096kB = 13796kB

[minchan@kernel.org: change vmalloc gfp and adding comment about gfp]
[sergey.senozhatsky@gmail.com: tweak comments and styles]
Signed-off-by: Kyeongdon Kim <kyeongdon.kim@lge.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Acked-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  zram/zcomp: use GFP_NOIO to allocate streams
Sergey Senozhatsky [Thu, 14 Jan 2016 23:22:26 +0000 (15:22 -0800)]
zram/zcomp: use GFP_NOIO to allocate streams

[ Upstream commit 3d5fe03a3ea013060ebba2a811aeb0f23f56aefa ]

We can end up allocating a new compression stream with GFP_KERNEL from
within the IO path, which may result in nested (recursive) IO
operations.  That can introduce problems if the IO path in question is a
reclaimer, holding some locks that will deadlock nested IOs.

Allocate streams and working memory using GFP_NOIO flag, forbidding
recursive IO and FS operations.
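
Concretely, the stream allocations switch from GFP_KERNEL to GFP_NOIO; the
pattern is simply (sketch):

  zstrm = kmalloc(sizeof(*zstrm), GFP_NOIO);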

An example:

  inconsistent {IN-RECLAIM_FS-W} -> {RECLAIM_FS-ON-W} usage.
  git/20158 [HC0[0]:SC0[0]:HE1:SE1] takes:
   (jbd2_handle){+.+.?.}, at:  start_this_handle+0x4ca/0x555
  {IN-RECLAIM_FS-W} state was registered at:
     __lock_acquire+0x8da/0x117b
     lock_acquire+0x10c/0x1a7
     start_this_handle+0x52d/0x555
     jbd2__journal_start+0xb4/0x237
     __ext4_journal_start_sb+0x108/0x17e
     ext4_dirty_inode+0x32/0x61
     __mark_inode_dirty+0x16b/0x60c
     iput+0x11e/0x274
     __dentry_kill+0x148/0x1b8
     shrink_dentry_list+0x274/0x44a
     prune_dcache_sb+0x4a/0x55
     super_cache_scan+0xfc/0x176
     shrink_slab.part.14.constprop.25+0x2a2/0x4d3
     shrink_zone+0x74/0x140
     kswapd+0x6b7/0x930
     kthread+0x107/0x10f
     ret_from_fork+0x3f/0x70
  irq event stamp: 138297
  hardirqs last  enabled at (138297):  debug_check_no_locks_freed+0x113/0x12f
  hardirqs last disabled at (138296):  debug_check_no_locks_freed+0x33/0x12f
  softirqs last  enabled at (137818):  __do_softirq+0x2d3/0x3e9
  softirqs last disabled at (137813):  irq_exit+0x41/0x95

               other info that might help us debug this:
   Possible unsafe locking scenario:
         CPU0
         ----
    lock(jbd2_handle);
    <Interrupt>
      lock(jbd2_handle);

                *** DEADLOCK ***
  5 locks held by git/20158:
   #0:  (sb_writers#7){.+.+.+}, at: [<ffffffff81155411>] mnt_want_write+0x24/0x4b
   #1:  (&type->i_mutex_dir_key#2/1){+.+.+.}, at: [<ffffffff81145087>] lock_rename+0xd9/0xe3
   #2:  (&sb->s_type->i_mutex_key#11){+.+.+.}, at: [<ffffffff8114f8e2>] lock_two_nondirectories+0x3f/0x6b
   #3:  (&sb->s_type->i_mutex_key#11/4){+.+.+.}, at: [<ffffffff8114f909>] lock_two_nondirectories+0x66/0x6b
   #4:  (jbd2_handle){+.+.?.}, at: [<ffffffff811e31db>] start_this_handle+0x4ca/0x555

               stack backtrace:
  CPU: 2 PID: 20158 Comm: git Not tainted 4.1.0-rc7-next-20150615-dbg-00016-g8bdf555-dirty #211
  Call Trace:
    dump_stack+0x4c/0x6e
    mark_lock+0x384/0x56d
    mark_held_locks+0x5f/0x76
    lockdep_trace_alloc+0xb2/0xb5
    kmem_cache_alloc_trace+0x32/0x1e2
    zcomp_strm_alloc+0x25/0x73 [zram]
    zcomp_strm_multi_find+0xe7/0x173 [zram]
    zcomp_strm_find+0xc/0xe [zram]
    zram_bvec_rw+0x2ca/0x7e0 [zram]
    zram_make_request+0x1fa/0x301 [zram]
    generic_make_request+0x9c/0xdb
    submit_bio+0xf7/0x120
    ext4_io_submit+0x2e/0x43
    ext4_bio_write_page+0x1b7/0x300
    mpage_submit_page+0x60/0x77
    mpage_map_and_submit_buffers+0x10f/0x21d
    ext4_writepages+0xc8c/0xe1b
    do_writepages+0x23/0x2c
    __filemap_fdatawrite_range+0x84/0x8b
    filemap_flush+0x1c/0x1e
    ext4_alloc_da_blocks+0xb8/0x117
    ext4_rename+0x132/0x6dc
    ? mark_held_locks+0x5f/0x76
    ext4_rename2+0x29/0x2b
    vfs_rename+0x540/0x636
    SyS_renameat2+0x359/0x44d
    SyS_rename+0x1e/0x20
    entry_SYSCALL_64_fastpath+0x12/0x6f

[minchan@kernel.org: add stable mark]
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Cc: Kyeongdon Kim <kyeongdon.kim@lge.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  perf kvm record/report: 'unprocessable sample' error while recording/reporting guest...
Ravi Bangoria [Mon, 7 Dec 2015 06:55:02 +0000 (12:25 +0530)]
perf kvm record/report: 'unprocessable sample' error while recording/reporting guest data

[ Upstream commit 3caeaa562733c4836e61086ec07666635006a787 ]

While recording guest samples in the host using perf kvm record, it will
print an 'unprocessable sample' error, though samples will be recorded
properly. While generating a report using perf kvm report, no samples will
be processed and the same error will be printed. We have seen this behaviour
with upstream perf (4.4-rc3) on x86 and ppc64 hardware.

The reason behind this failure is that when it tries to fetch the machine
from the rb_tree of machines, it fails. While tracing the bug, we figured
out that this code was incorrectly refactored in commit 54245fdc3576
("perf session: Remove wrappers to machines__find").

This patch changes the functionality such that if the machine can't be
fetched on the first try, one machine node is created and added to the
rb_tree. So the next time the same machine is fetched from the rb_tree,
the lookup won't fail. This was actually the behaviour before the
refactoring in the aforementioned commit.
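
In terms of the perf API this means using the find-or-create helper rather
than the plain lookup; a hedged sketch:

  struct machine *machine;

  /* creates and inserts a machine node if it is not in the rb_tree yet */
  machine = machines__findnew(&session->machines, pid);
  if (!machine)
      return -1;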

This patch is generated from acme perf/core branch.

Below I've mention an example that demonstrate the behaviour before and
after applying patch.

Before applying patch:
[Note: One needs to run guest before recording data in host]

  ravi@ravi-bangoria:~$ ./perf kvm record -a
  Warning:
  5903 unprocessable samples recorded.
  Do you have a KVM guest running and not using 'perf kvm'?
  [ perf record: Captured and wrote 1.409 MB perf.data.guest (285 samples) ]

  ravi@ravi-bangoria:~$ ./perf kvm report --stdio
  Warning:
  5903 unprocessable samples recorded.
  Do you have a KVM guest running and not using 'perf kvm'?
  # To display the perf.data header info, please use --header/--header-only options.
  #
  # Total Lost Samples: 0
  #
  # Samples: 285  of event 'cycles'
  # Event count (approx.): 88715406
  #
  # Overhead  Command  Shared Object  Symbol
  # ........  .......  .............  ......
  #

  # (For a higher level overview, try: perf report --sort comm,dso)
  #

After applying patch:

  ravi@ravi-bangoria:~$ ./perf kvm record -a
  [ perf record: Captured and wrote 1.188 MB perf.data.guest (17 samples) ]

  ravi@ravi-bangoria:~$ ./perf kvm report --stdio
  # To display the perf.data header info, please use --header/--header-only options.
  #
  # Total Lost Samples: 0
  #
  # Samples: 17  of event 'cycles'
  # Event count (approx.): 700746
  #
  # Overhead  Command  Shared Object     Symbol
  # ........  .......  ................  ......................
  #
      34.19%  :5758    [unknown]         [g] 0xffffffff818682ab
      22.79%  :5758    [unknown]         [g] 0xffffffff812dc7f8
      22.79%  :5758    [unknown]         [g] 0xffffffff818650d0
      14.83%  :5758    [unknown]         [g] 0xffffffff8161a1b6
       2.49%  :5758    [unknown]         [g] 0xffffffff818692bf
       0.48%  :5758    [unknown]         [g] 0xffffffff81869253
       0.05%  :5758    [unknown]         [g] 0xffffffff81869250

Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org # v3.19+
Fixes: 54245fdc3576 ("perf session: Remove wrappers to machines__find")
Link: http://lkml.kernel.org/r/1449471302-11283-1-git-send-email-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  ocfs2/dlm: ignore cleaning the migration mle that is inuse
xuejiufei [Thu, 14 Jan 2016 23:17:38 +0000 (15:17 -0800)]
ocfs2/dlm: ignore cleaning the migration mle that is inuse

[ Upstream commit bef5502de074b6f6fa647b94b73155d675694420 ]

We have found that the migration source will trigger a BUG because the
refcount of the mle is already zero before the put when the target goes down
during migration.  The situation is as follows:

dlm_migrate_lockres
  dlm_add_migration_mle
  dlm_mark_lockres_migrating
  dlm_get_mle_inuse
  <<<<<< Now the refcount of the mle is 2.
  dlm_send_one_lockres and wait for the target to become the
  new master.
  <<<<<< o2hb detect the target down and clean the migration
  mle. Now the refcount is 1.

dlm_migrate_lockres is woken, and puts the mle twice when it finds the target
has gone down, which triggers the BUG with the following message:

  "ERROR: bad mle: ".

Signed-off-by: Jiufei Xue <xuejiufei@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  scripts/bloat-o-meter: fix python3 syntax error
Sergey Senozhatsky [Thu, 14 Jan 2016 23:16:53 +0000 (15:16 -0800)]
scripts/bloat-o-meter: fix python3 syntax error

[ Upstream commit 72214a24a7677d4c7501eecc9517ed681b5f2db2 ]

In Python3+ print is a function so the old syntax is not correct
anymore:

  $ ./scripts/bloat-o-meter vmlinux.o vmlinux.o.old
    File "./scripts/bloat-o-meter", line 61
      print "add/remove: %s/%s grow/shrink: %s/%s up/down: %s/%s (%s)" % \
                                                                     ^
  SyntaxError: invalid syntax

Fix by calling print as a function.

Tested on python 2.7.11, 3.5.1

Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  dma-debug: switch check from _text to _stext
Laura Abbott [Thu, 14 Jan 2016 23:16:50 +0000 (15:16 -0800)]
dma-debug: switch check from _text to _stext

[ Upstream commit ea535e418c01837d07b6c94e817540f50bfdadb0 ]

In include/asm-generic/sections.h:

  /*
   * Usage guidelines:
   * _text, _data: architecture specific, don't use them in
   * arch-independent code
   * [_stext, _etext]: contains .text.* sections, may also contain
   * .rodata.*
   *                   and/or .init.* sections

_text is not guaranteed across architectures.  Architectures such as ARM
may reuse parts which are not actually text and erroneously trigger a bug.
Switch to using _stext which is guaranteed to contain text sections.
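
The check itself is essentially a one-symbol change (sketch of the overlap
test in lib/dma-debug.c):

  if (overlap(addr, len, _stext, _etext) ||
      overlap(addr, len, __start_rodata, __end_rodata))
      err_printk(dev, NULL,
                 "DMA-API: device driver maps memory from kernel text or rodata\n");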

Came out of https://lkml.kernel.org/g/<567B1176.4000106@redhat.com>

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
Reviewed-by: Kees Cook <keescook@chromium.org>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  m32r: fix m32104ut_defconfig build fail
Sudip Mukherjee [Thu, 14 Jan 2016 23:16:47 +0000 (15:16 -0800)]
m32r: fix m32104ut_defconfig build fail

[ Upstream commit 601f1db653217f205ffa5fb33514b4e1711e56d1 ]

The build of m32104ut_defconfig for the m32r arch had been failing for a long
time with the error:

  ERROR: "memory_start" [fs/udf/udf.ko] undefined!
  ERROR: "memory_end" [fs/udf/udf.ko] undefined!
  ERROR: "memory_end" [drivers/scsi/sg.ko] undefined!
  ERROR: "memory_start" [drivers/scsi/sg.ko] undefined!
  ERROR: "memory_end" [drivers/i2c/i2c-dev.ko] undefined!
  ERROR: "memory_start" [drivers/i2c/i2c-dev.ko] undefined!

As done in other architectures, export the symbols to fix the error.
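
The fix is the usual pair of exports next to the definitions (sketch,
arch/m32r/kernel/setup.c):

  unsigned long memory_start;
  EXPORT_SYMBOL(memory_start);

  unsigned long memory_end;
  EXPORT_SYMBOL(memory_end);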

Reported-by: Fengguang Wu <fengguang.wu@intel.com>
Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  cifs_dbg() outputs an uninitialized buffer in cifs_readdir()
Vasily Averin [Thu, 14 Jan 2016 10:41:14 +0000 (13:41 +0300)]
cifs_dbg() outputs an uninitialized buffer in cifs_readdir()

[ Upstream commit 01b9b0b28626db4a47d7f48744d70abca9914ef1 ]

In some cases tmp_buf may not be filled in cifs_filldir and can stay
uninitialized, so printing it with the "%s" modifier can leak the contents of
kernelspace memory.  If the old content of this buffer does not contain '\0',
an access beyond the end of the allocated object can crash the host.

Signed-off-by: Vasily Averin <vvs@virtuozzo.com>
Signed-off-by: Steve French <sfrench@localhost.localdomain>
CC: Stable <stable@vger.kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  cifs: fix race between call_async() and reconnect()
Rabin Vincent [Wed, 23 Dec 2015 06:32:41 +0000 (07:32 +0100)]
cifs: fix race between call_async() and reconnect()

[ Upstream commit 820962dc700598ffe8cd21b967e30e7520c34748 ]

cifs_call_async() queues the MID to the pending list and calls
smb_send_rqst().  If smb_send_rqst() performs a partial send, it sets
the tcpStatus to CifsNeedReconnect and returns an error code to
cifs_call_async().  In this case, cifs_call_async() removes the MID
from the list and returns to the caller.

However, cifs_call_async() releases the server mutex _before_ removing
the MID.  This means that a cifs_reconnect() can race with this function
and manage to remove the MID from the list and delete the entry before
cifs_call_async() calls cifs_delete_mid().  This leads to various
crashes due to the use after free in cifs_delete_mid().

Task1                                   Task2

cifs_call_async():
 - rc = -EAGAIN
 - mutex_unlock(srv_mutex)

                                        cifs_reconnect():
                                         - mutex_lock(srv_mutex)
                                         - mutex_unlock(srv_mutex)
                                         - list_delete(mid)
                                         - mid->callback()
                                            cifs_writev_callback():
                                             - mutex_lock(srv_mutex)
                                             - delete(mid)
                                             - mutex_unlock(srv_mutex)

 - cifs_delete_mid(mid) <---- use after free

Fix this by removing the MID in cifs_call_async() before releasing the
srv_mutex.  Also hold the srv_mutex in cifs_reconnect() until the MIDs
are moved out of the pending list.

Signed-off-by: Rabin Vincent <rabin.vincent@axis.com>
Acked-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
CC: Stable <stable@vger.kernel.org>
Signed-off-by: Steve French <sfrench@localhost.localdomain>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  cifs: Ratelimit kernel log messages
Jamie Bainbridge [Sat, 7 Nov 2015 12:13:49 +0000 (22:13 +1000)]
cifs: Ratelimit kernel log messages

[ Upstream commit ec7147a99e33a9e4abad6fc6e1b40d15df045d53 ]

Under some conditions, CIFS can repeatedly call the cifs_dbg() logging
wrapper. If done rapidly enough, the console framebuffer can softlockup
or "rcu_sched self-detected stall". Apply the built-in log ratelimiters
to prevent such hangs.
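
The pattern is to switch the wrappers over to the _ratelimited printk
variants, e.g. (sketch; vaf is the va_format the logging wrapper already
builds):

  pr_err_ratelimited("CIFS VFS: %pV", &vaf);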

Signed-off-by: Jamie Bainbridge <jamie.bainbridge@gmail.com>
Signed-off-by: Steve French <smfrench@gmail.com>
CC: Stable <stable@vger.kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  sparc64: fix incorrect sign extension in sys_sparc64_personality
Dmitry V. Levin [Sat, 26 Dec 2015 23:13:27 +0000 (02:13 +0300)]
sparc64: fix incorrect sign extension in sys_sparc64_personality

[ Upstream commit 525fd5a94e1be0776fa652df5c687697db508c91 ]

The value returned by sys_personality has type "long int".
It is saved to a variable of type "int", which is not a problem
yet because the type of task_struct->personality is "unsigned int".
The problem is the sign extension from "int" to "long int"
that happens on return from sys_sparc64_personality.

For example, a userspace call personality((unsigned) -EINVAL) will
result in any subsequent personality call, including an absolutely
harmless read-only personality(0xffffffff) call, failing with
errno set to EINVAL.
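
The minimal shape of the fix is to keep the value in a long all the way
through (sketch, not the exact diff):

  asmlinkage long sys_sparc64_personality(unsigned long personality)
  {
      long ret;   /* was an int; the int->long conversion sign-extended it */

      ret = sys_personality(personality);
      /* ... PER_LINUX32 fixup elided ... */
      return ret;
  }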

Signed-off-by: Dmitry V. Levin <ldv@altlinux.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  mmc: core: Enable tuning according to the actual timing
Carlo Caione [Wed, 13 Jan 2016 08:36:55 +0000 (09:36 +0100)]
mmc: core: Enable tuning according to the actual timing

[ Upstream commit e10c321977091f163eceedec0650e0ef4b3cf4bb ]

While in sdhci_execute_tuning() the choice whether or not to enable tuning
is based on the actual timing, in mmc_sdio_init_uhs_card() the check is
based on the capability of the card.

This difference is causing some issues with some SDIO cards in DDR50
mode where CMD19 is wrongly issued.

With this patch we modify the check in both
mmc_(sd|sdio)_init_uhs_card() functions to take the proper decision
only according to the actual timing specification.

Cc: stable@vger.kernel.org
Signed-off-by: Carlo Caione <carlo@endlessm.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  mmc: core: enable CMD19 tuning for DDR50 mode
Weijun Yang [Sun, 4 Oct 2015 12:04:11 +0000 (12:04 +0000)]
mmc: core: enable CMD19 tuning for DDR50 mode

[ Upstream commit 4324f6de6d2eb9b232410eb0d67bfafdde8ba711 ]

As SD Specifications Part1 Physical Layer Specification Version
3.01 says, CMD19 tuning is available for unlocked cards in transfer
state of 1.8V signaling mode. The small difference between v3.00
and 3.01 spec means that CMD19 tuning is also available for DDR50
mode.

Signed-off-by: Weijun Yang <york.yang@csr.com>
Signed-off-by: Barry Song <Baohua.Song@csr.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  mmc: mmci: fix an ages old detection error
Linus Walleij [Mon, 4 Jan 2016 01:21:55 +0000 (02:21 +0100)]
mmc: mmci: fix an ages old detection error

[ Upstream commit 0bcb7efdff63564e80fe84dd36a9fbdfbf6697a4 ]

commit 4956e10903fd ("ARM: 6244/1: mmci: add variant data and default
MCICLOCK support") added variant data for ARM, U300 and Ux500 variants.
The Nomadik NHK8815/8820 variant was erroneously labeled as a U300
variant, and when the proper Nomadik variant was later introduced in
commit 34fd421349ff ("ARM: 7378/1: mmci: add support for the Nomadik MMCI
variant") this was not fixes. Let's say this fixes the latter commit as
there was no proper Nomadik support until then.

Cc: stable@vger.kernel.org
Fixes: 34fd421349ff ("ARM: 7378/1: mmci: add support for the Nomadik...")
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  dmaengine: dw: fix cyclic transfer callbacks
Mans Rullgard [Mon, 11 Jan 2016 13:04:29 +0000 (13:04 +0000)]
dmaengine: dw: fix cyclic transfer callbacks

[ Upstream commit 2895b2cad6e7a95104cf396e5330054453382ae1 ]

Cyclic transfer callbacks rely on block completion interrupts which were
disabled in commit ff7b05f29fd4 ("dmaengine/dw_dmac: Don't handle block
interrupts").  This re-enables block interrupts so the cyclic callbacks
can work.  Other transfer types are not affected as they set the INT_EN
bit only on the last block.

Fixes: ff7b05f29fd4 ("dmaengine/dw_dmac: Don't handle block interrupts")
Signed-off-by: Mans Rullgard <mans@mansr.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  dmaengine: dw: fix cyclic transfer setup
Mans Rullgard [Mon, 11 Jan 2016 13:04:28 +0000 (13:04 +0000)]
dmaengine: dw: fix cyclic transfer setup

[ Upstream commit df3bb8a0e619d501cd13334c3e0586edcdcbc716 ]

Commit 61e183f83069 ("dmaengine/dw_dmac: Reconfigure interrupt and
chan_cfg register on resume") moved some channel initialisation to
a new function which must be called before starting a transfer.

This updates dw_dma_cyclic_start() to use dwc_dostart() like the other
modes, thus ensuring dwc_initialize() gets called and removing some code
duplication.

Fixes: 61e183f83069 ("dmaengine/dw_dmac: Reconfigure interrupt and chan_cfg register on resume")
Signed-off-by: Mans Rullgard <mans@mansr.com>
Reviewed-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  KVM: PPC: Fix ONE_REG AltiVec support
Greg Kurz [Wed, 13 Jan 2016 17:28:17 +0000 (18:28 +0100)]
KVM: PPC: Fix ONE_REG AltiVec support

[ Upstream commit b4d7f161feb3015d6306e1d35b565c888ff70c9d ]

The get and set operations got exchanged by mistake when moving the
code from book3s.c to powerpc.c.

Fixes: 3840edc8033ad5b86deee309c1c321ca54257452
Cc: stable@vger.kernel.org # 3.18+
Signed-off-by: Greg Kurz <gkurz@linux.vnet.ibm.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  drm/i915: Restore inhibiting the load of the default context
Chris Wilson [Fri, 27 Nov 2015 13:28:55 +0000 (13:28 +0000)]
drm/i915: Restore inhibiting the load of the default context

[ Upstream commit 06ef83a705a98da63797a5a570220b6ca36febd4 ]

Following a GPU reset, we may leave the context in a poorly defined
state, and reloading from that context will leave the GPU flummoxed. For
secondary contexts, this will lead to that context being banned - but
currently it is also causing the default context to become banned,
leading to turmoil in the shared state.

This is a regression from

commit 6702cf16e0ba8b0129f5aa1b6609d4e9c70bc13b [v4.1]
Author: Ben Widawsky <benjamin.widawsky@intel.com>
Date:   Mon Mar 16 16:00:58 2015 +0000

    drm/i915: Initialize all contexts

which quietly introduced the removal of the MI_RESTORE_INHIBIT on the
default context.

v2: Mark the global default context as uninitialized on GPU reset so
that the context-local workarounds are reloaded upon re-enabling.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1448630935-27377-1-git-send-email-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: stable@vger.kernel.org
[danvet: This seems to fix a gpu hang after the first resume,
resulting in any future suspend operation failing with -EIO because
the gpu seems to be in a funky state. Somehow this patch fixes that.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
(cherry picked from commit 42f1cae8c079bcceb3cff079fddc3ff8852c788f)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  parisc: Fix __ARCH_SI_PREAMBLE_SIZE
Helge Deller [Sun, 10 Jan 2016 08:30:42 +0000 (09:30 +0100)]
parisc: Fix __ARCH_SI_PREAMBLE_SIZE

[ Upstream commit e60fc5aa608eb38b47ba4ee058f306f739eb70a0 ]

On a 64bit kernel build the compiler aligns the _sifields union in the
struct siginfo_t on a 64bit address. The __ARCH_SI_PREAMBLE_SIZE define
compensates for this alignment and thus fixes the wait testcase of the
strace package.

The symptoms of a wrong __ARCH_SI_PREAMBLE_SIZE value is that
_sigchld.si_stime variable is missed to be copied and thus after a
copy_siginfo() will have uninitialized values.
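
On a 64-bit build the preamble therefore has to account for the extra
padding; something along these lines (hedged reconstruction from the
description above):

  #ifdef CONFIG_64BIT
  #define __ARCH_SI_PREAMBLE_SIZE   (4 * sizeof(int))
  #else
  #define __ARCH_SI_PREAMBLE_SIZE   (3 * sizeof(int))
  #endif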

Signed-off-by: Helge Deller <deller@gmx.de>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  virtio_balloon: fix race between migration and ballooning
Minchan Kim [Sun, 27 Dec 2015 23:35:13 +0000 (08:35 +0900)]
virtio_balloon: fix race between migration and ballooning

[ Upstream commit 21ea9fb69e7c4b1b1559c3e410943d3ff248ffcb ]

In balloon_page_dequeue, pages_lock should cover the loop
(ie, list_for_each_entry_safe). Otherwise, the cursor page could
be isolated by compaction, and then the list_del done by isolation could
poison page->lru.{prev,next}, so the loop could end up accessing a wrong
address, like this. This patch fixes the bug.
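
Schematically, the whole dequeue loop ends up under pages_lock (sketch):

  spin_lock_irqsave(&b_dev_info->pages_lock, flags);
  list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) {
      /* pick and dequeue a page; the cursor cannot be isolated by
       * compaction while pages_lock is held */
  }
  spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);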

general protection fault: 0000 [#1] SMP
Dumping ftrace buffer:
   (ftrace buffer empty)
Modules linked in:
CPU: 2 PID: 82 Comm: vballoon Not tainted 4.4.0-rc5-mm1-access_bit+ #1906
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
task: ffff8800a7ff0000 ti: ffff8800a7fec000 task.ti: ffff8800a7fec000
RIP: 0010:[<ffffffff8115e754>]  [<ffffffff8115e754>] balloon_page_dequeue+0x54/0x130
RSP: 0018:ffff8800a7fefdc0  EFLAGS: 00010246
RAX: ffff88013fff9a70 RBX: ffffea000056fe00 RCX: 0000000000002b7d
RDX: ffff88013fff9a70 RSI: ffffea000056fe00 RDI: ffff88013fff9a68
RBP: ffff8800a7fefde8 R08: ffffea000056fda0 R09: 0000000000000000
R10: ffff8800a7fefd90 R11: 0000000000000001 R12: dead0000000000e0
R13: ffffea000056fe20 R14: ffff880138809070 R15: ffff880138809060
FS:  0000000000000000(0000) GS:ffff88013fc40000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007f229c10e000 CR3: 00000000b8b53000 CR4: 00000000000006a0
Stack:
 0000000000000100 ffff880138809088 ffff880138809000 ffff880138809060
 0000000000000046 ffff8800a7fefe28 ffffffff812c86d3 ffff880138809020
 ffff880138809000 fffffffffff91900 0000000000000100 ffff880138809060
Call Trace:
 [<ffffffff812c86d3>] leak_balloon+0x93/0x1a0
 [<ffffffff812c8bc7>] balloon+0x217/0x2a0
 [<ffffffff8143739e>] ? __schedule+0x31e/0x8b0
 [<ffffffff81078160>] ? abort_exclusive_wait+0xb0/0xb0
 [<ffffffff812c89b0>] ? update_balloon_stats+0xf0/0xf0
 [<ffffffff8105b6e9>] kthread+0xc9/0xe0
 [<ffffffff8105b620>] ? kthread_park+0x60/0x60
 [<ffffffff8143b4af>] ret_from_fork+0x3f/0x70
 [<ffffffff8105b620>] ? kthread_park+0x60/0x60
Code: 8d 60 e0 0f 84 af 00 00 00 48 8b 43 20 a8 01 75 3b 48 89 d8 f0 0f ba 28 00 72 10 48 8b 03 f6 c4 08 75 2f 48 89 df e8 8c 83 f9 ff <49> 8b 44 24 20 4d 8d 6c 24 20 48 83 e8 20 4d 39 f5 74 7a 4c 89
RIP  [<ffffffff8115e754>] balloon_page_dequeue+0x54/0x130
 RSP <ffff8800a7fefdc0>
---[ end trace 43cf28060d708d5f ]---
Kernel panic - not syncing: Fatal exception
Dumping ftrace buffer:
   (ftrace buffer empty)
Kernel Offset: disabled

Cc: <stable@vger.kernel.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Acked-by: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  virtio_balloon: fix race by fill and leak
Minchan Kim [Sun, 27 Dec 2015 23:35:12 +0000 (08:35 +0900)]
virtio_balloon: fix race by fill and leak

[ Upstream commit f68b992bbb474641881932c61c92dcfa6f5b3689 ]

During my compaction-related stuff, I encountered a bug
with ballooning.

With repeated inflating and deflating cycles, guest memory
(ie, cat /proc/meminfo | grep MemTotal) decreases and
cannot be recovered.

The reason is that balloon_lock doesn't cover release_pages_balloon,
so struct virtio_balloon fields could be overwritten by a racing
fill_balloon (e.g. vb->*pfns could be critical).

This patch fixes it in my test.

Cc: <stable@vger.kernel.org>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago  virtio_balloon: change stub of release_pages_by_pfn
Denis V. Lunev [Wed, 19 Aug 2015 21:49:48 +0000 (00:49 +0300)]
virtio_balloon: change stub of release_pages_by_pfn

[ Upstream commit b4d34037329f46ed818d3b0a6e1e23b9c8721f79 ]

and rename it to release_pages_balloon. The function originally took
arrays of pfns and now takes a pointer to struct virtio_balloon.
This change is necessary to conditionally call adjust_managed_page_count
in the next patch.

Signed-off-by: Denis V. Lunev <den@openvz.org>
CC: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoInput: elantech - mark protocols v2 and v3 as semi-mt
Benjamin Tissoires [Tue, 12 Jan 2016 01:35:38 +0000 (17:35 -0800)]
Input: elantech - mark protocols v2 and v3 as semi-mt

[ Upstream commit 6544a1df11c48c8413071aac3316792e4678fbfb ]

When using protocol v2 or v3 hardware, elantech uses the function
elantech_report_semi_mt_data() to report data. These devices are rather
creepy because if num_finger is 3, (x2,y2) is (0,0). Yes, only one valid
touch is reported.

Anyway, userspace (libinput) is now confused by these (0,0) touches,
detects them as palms, and rejects them.

Commit 3c0213d17a09 ("Input: elantech - fix semi-mt protocol for v3 HW")
was sufficient for xf86-input-synaptics, and for libinput before it gained
palm rejection. Now we need to actually tell libinput that this device is
a semi-mt one and that it should not rely on the actual values of the two
touches.
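
As a rough sketch of the change (the exact call site in elantech.c and the
error handling are assumptions), advertising semi-mt looks like this:

  /* Register two MT slots and flag the device as semi-mt so userspace
   * treats the two points as a bounding box, not two real touches. */
  error = input_mt_init_slots(dev, 2, INPUT_MT_POINTER | INPUT_MT_SEMI_MT);
  if (error)
          return error;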

Cc: stable@vger.kernel.org
Signed-off-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoclocksource/drivers/vt8500: Increase the minimum delta
Roman Volkov [Fri, 1 Jan 2016 13:24:41 +0000 (16:24 +0300)]
clocksource/drivers/vt8500: Increase the minimum delta

[ Upstream commit f9eccf24615672896dc13251410c3f2f33a14f95 ]

The vt8500 clocksource driver declares itself as capable of handling a
minimum delay of 4 cycles by passing that value into
clockevents_config_and_register(). However, vt8500_timer_set_next_event()
requires the passed cycles value to be at least 16. The impact is that
userspace hangs in nanosleep() calls with small delay intervals.

This problem is reproducible in Linux 4.2 starting from:
c6eb3f70d448 ('hrtimer: Get rid of hrtimer softirq')

From Russell King, more detailed explanation:

"It's a speciality of the StrongARM/PXA hardware. It takes a certain
number of OSCR cycles for the value written to hit the compare registers.
So, if a very small delta is written (eg, the compare register is written
with a value of OSCR + 1), the OSCR will have incremented past this value
before it hits the underlying hardware. The result is, that you end up
waiting a very long time for the OSCR to wrap before the event fires.

So, we introduce a check in set_next_event() to detect this and return
-ETIME if the calculated delta is too small, which causes the generic
clockevents code to retry after adding the min_delta specified in
clockevents_config_and_register() to the current time value.

min_delta must be sufficient that we don't re-trip the -ETIME check - if
we do, we will return -ETIME, forward the next event time, try to set it,
return -ETIME again, and basically lock the system up. So, min_delta
must be larger than the check inside set_next_event(). A factor of two
was chosen to ensure that this situation would never occur.

The PXA code worked on PXA systems for years, and I'd suggest no one
changes this mechanism without access to a wide range of PXA systems,
otherwise they're risking breakage."
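
Roughly, the pattern described above looks like this (MIN_OSCR_DELTA,
VT8500_TIMER_HZ and the max_delta value are illustrative, not the exact
driver constants):

  #define MIN_OSCR_DELTA 16   /* smallest delta the hardware can latch */

  static int vt8500_timer_set_next_event(unsigned long cycles,
                                         struct clock_event_device *evt)
  {
          if (cycles < MIN_OSCR_DELTA)
                  return -ETIME;  /* let the clockevents core retry */

          /* program the match/compare register here */
          return 0;
  }

  /* register with min_delta = 2 * MIN_OSCR_DELTA so the retry path can
   * never re-trip the -ETIME check above */
  clockevents_config_and_register(&clockevent, VT8500_TIMER_HZ,
                                  MIN_OSCR_DELTA * 2, 0xf0000000);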

Cc: stable@vger.kernel.org
Cc: Russell King <linux@arm.linux.org.uk>
Acked-by: Alexey Charkov <alchark@gmail.com>
Signed-off-by: Roman Volkov <rvolkov@v1ros.org>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoxfs: handle dquot buffer readahead in log recovery correctly
Dave Chinner [Mon, 11 Jan 2016 20:04:01 +0000 (07:04 +1100)]
xfs: handle dquot buffer readahead in log recovery correctly

[ Upstream commit 7d6a13f023567d573ac362502bb702eda716e654 ]

When we do dquot readahead in log recovery, we do not use a verifier
as the underlying buffer may not have dquots in it. e.g. the
allocation operation hasn't yet been replayed. Hence we do not want
to fail recovery because we detect an operation to be replayed has
not been run yet. This problem was addressed for inodes in commit
d891400 ("xfs: inode buffers may not be valid during recovery
readahead") but the problem was not recognised to exist for dquots
and their buffers as the dquot readahead did not have a verifier.

The result of not using a verifier is that when the buffer is then
next read to replay a dquot modification, the dquot buffer verifier
will only be attached to the buffer if *readahead is not complete*.
Hence we can read the buffer, replay the dquot changes and then add
it to the delwri submission list without it having a verifier
attached to it. This then generates warnings in xfs_buf_ioapply(),
which catches and warns about this case.

Fix this and make it handle the same readahead verifier error cases
as for inode buffers by adding a new readahead verifier that has a
write operation as well as a read operation that marks the buffer as
not done if any corruption is detected.  Also make sure we don't run
readahead if the dquot buffer has been marked as cancelled by
recovery.

This will result in readahead either succeeding and the buffer
having a valid write verifier, or readahead failing and the buffer
state requiring the subsequent read to resubmit the IO with the new
verifier.  In either case, this will result in the buffer always
ending up with a valid write verifier on it.

Note: we also need to fix the inode buffer readahead error handling
to mark the buffer with EIO. Brian noticed the code I copied from
there wrong during review, so fix it at the same time. Add comments
linking the two functions that handle readahead verifier errors
together so we don't forget this behavioural link in future.

cc: <stable@vger.kernel.org> # 3.12 - current
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoxfs: inode recovery readahead can race with inode buffer creation
Dave Chinner [Mon, 11 Jan 2016 20:03:44 +0000 (07:03 +1100)]
xfs: inode recovery readahead can race with inode buffer creation

[ Upstream commit b79f4a1c68bb99152d0785ee4ea3ab4396cdacc6 ]

When we do inode readahead in log recovery, we can do the
readahead before we've replayed the icreate transaction that stamps
the buffer with inode cores. The inode readahead verifier catches
this and marks the buffer as !done to indicate that it doesn't yet
contain valid inodes.

In adding buffer error notification (i.e. setting b_error = -EIO at
the same time as we clear the done flag) to such a readahead
verifier failure, we can then get subsequent inode recovery failing
with this error:

XFS (dm-0): metadata I/O error: block 0xa00060 ("xlog_recover_do..(read#2)") error 5 numblks 32

This occurs when readahead completion races with icreate item replay
such as:

inode readahead
find buffer
lock buffer
submit RA io
....
icreate recovery
    xfs_trans_get_buffer
find buffer
lock buffer
<blocks on RA completion>
.....
<ra completion>
fails verifier
clear XBF_DONE
set bp->b_error = -EIO
release and unlock buffer
<icreate gains lock>
icreate initialises buffer
marks buffer as done
adds buffer to delayed write queue
releases buffer

At this point, we have an initialised inode buffer that is up to
date but has an -EIO state registered against it. When we finally
get to recovering an inode in that buffer:

inode item recovery
    xfs_trans_read_buffer
find buffer
lock buffer
sees XBF_DONE is set, returns buffer
    sees bp->b_error is set
fail log recovery!

Essentially, we need xfs_trans_get_buf_map() to clear the error status of
the buffer when doing a lookup. This function returns uninitialised
buffers, so the buffer returned can not be in an error state and
none of the code that uses this function expects b_error to be set
on return. Indeed, there is an ASSERT(!bp->b_error); in the
transaction case in xfs_trans_get_buf_map() that would have caught
this if log recovery used transactions....

This patch firstly changes the inode readahead failure to set -EIO
on the buffer, and secondly changes xfs_buf_get_map() to never
return a buffer with an error state set so this first change doesn't
cause unexpected log recovery failures.

cc: <stable@vger.kernel.org> # 3.12 - current
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agos390: fix normalization bug in exception table sorting
Ard Biesheuvel [Fri, 1 Jan 2016 12:39:22 +0000 (13:39 +0100)]
s390: fix normalization bug in exception table sorting

[ Upstream commit bcb7825a77f41c7dd91da6f7ac10b928156a322e ]

The normalization pass in the sorting routine of the relative exception
table serves two purposes:
- it ensures that the address fields of the exception table entries are
  fully ordered, so that no ambiguities arise between entries with
  identical instruction offsets (i.e., when two instructions that are
  exactly 8 bytes apart each have an exception table entry associated with
  them)
- it ensures that the offsets of both the instruction and the fixup fields
  of each entry are relative to their final location after sorting.

Commit eb608fb366de ("s390/exceptions: switch to relative exception table
entries") ported the relative exception table format from x86, but modified
the sorting routine to only normalize the instruction offset field and not
the fixup offset field. The result is that the fixup offset of each entry
will be relative to the original location of the entry before sorting,
likely leading to crashes when those entries are dereferenced.
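
A minimal sketch of the sort swap callback with the missing fixup
normalization added (field names follow the relative extable layout; treat
this as illustrative rather than the exact s390 code):

  static void swap_ex(void *a, void *b, int size)
  {
          struct exception_table_entry *x = a, *y = b, tmp;
          int delta = b - a;

          tmp = *x;
          /* both offsets must stay relative to the entry's new location */
          x->insn  = y->insn  + delta;
          x->fixup = y->fixup + delta;
          y->insn  = tmp.insn  - delta;
          y->fixup = tmp.fixup - delta;
  }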

Fixes: eb608fb366de ("s390/exceptions: switch to relative exception table entries")
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agodrm/nouveau/kms: take mode_config mutex in connector hotplug path
Ben Skeggs [Thu, 7 Jan 2016 22:56:51 +0000 (08:56 +1000)]
drm/nouveau/kms: take mode_config mutex in connector hotplug path

[ Upstream commit 0a882cadbc63fd2da3994af7115b4ada2fcbd638 ]

fdo#93634

Signed-off-by: Ben Skeggs <bskeggs@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agouml: flush stdout before forking
Vegard Nossum [Fri, 18 Dec 2015 20:28:53 +0000 (21:28 +0100)]
uml: flush stdout before forking

[ Upstream commit 0754fb298f2f2719f0393491d010d46cfb25d043 ]

I was seeing some really weird behaviour where piping UML's output
somewhere would cause output to get duplicated:

  $ ./vmlinux | head -n 40
  Checking that ptrace can change system call numbers...Core dump limits :
          soft - 0
          hard - NONE
  OK
  Checking syscall emulation patch for ptrace...Core dump limits :
          soft - 0
          hard - NONE
  OK
  Checking advanced syscall emulation patch for ptrace...Core dump limits :
          soft - 0
          hard - NONE
  OK
  Core dump limits :
          soft - 0
          hard - NONE

This is because these tests do a fork() which duplicates the non-empty
stdout buffer, then glibc flushes the duplicated buffer as each child
exits.

A simple workaround is to flush before forking.
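
The workaround amounts to the following pattern (a sketch;
start_ptraced_child() stands in for the various helpers that fork):

  #include <stdio.h>
  #include <sys/types.h>
  #include <unistd.h>

  static pid_t start_ptraced_child(void)
  {
          /* Flush buffered output before fork(), otherwise the child
           * inherits the buffer and glibc flushes it again on exit,
           * duplicating every line when stdout is a pipe. */
          fflush(stdout);
          fflush(stderr);
          return fork();
  }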

Cc: stable@vger.kernel.org
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agouml: fix hostfs mknod()
Vegard Nossum [Wed, 16 Dec 2015 20:59:56 +0000 (21:59 +0100)]
uml: fix hostfs mknod()

[ Upstream commit 9f2dfda2f2f1c6181c3732c16b85c59ab2d195e0 ]

An inverted return value check in hostfs_mknod() caused the function
to return success after handling it as an error (and cleaning up).

It resulted in the following segfault when trying to bind() a named
unix socket:

  Pid: 198, comm: a.out Not tainted 4.4.0-rc4
  RIP: 0033:[<0000000061077df6>]
  RSP: 00000000daae5d60  EFLAGS: 00010202
  RAX: 0000000000000000 RBX: 000000006092a460 RCX: 00000000dfc54208
  RDX: 0000000061073ef1 RSI: 0000000000000070 RDI: 00000000e027d600
  RBP: 00000000daae5de0 R08: 00000000da980ac0 R09: 0000000000000000
  R10: 0000000000000003 R11: 00007fb1ae08f72a R12: 0000000000000000
  R13: 000000006092a460 R14: 00000000daaa97c0 R15: 00000000daaa9a88
  Kernel panic - not syncing: Kernel mode fault at addr 0x40, ip 0x61077df6
  CPU: 0 PID: 198 Comm: a.out Not tainted 4.4.0-rc4 #1
  Stack:
   e027d620 dfc54208 0000006f da981398
   61bee000 0000c1ed daae5de0 0000006e
   e027d620 dfcd4208 00000005 6092a460
  Call Trace:
   [<60dedc67>] SyS_bind+0xf7/0x110
   [<600587be>] handle_syscall+0x7e/0x80
   [<60066ad7>] userspace+0x3e7/0x4e0
   [<6006321f>] ? save_registers+0x1f/0x40
   [<6006c88e>] ? arch_prctl+0x1be/0x1f0
   [<60054985>] fork_handler+0x85/0x90

Let's also get rid of the "cosmic ray protection" while we're at it.
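
The core of the fix is flipping the inverted check; roughly (assuming
do_mknod() returns 0 on success):

  err = do_mknod(name, mode, MAJOR(dev), MINOR(dev));
  if (err)                /* previously "if (!err)", i.e. inverted */
          goto out_free;  /* error path: clean up and return err */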

Fixes: e9193059b1b3 "hostfs: fix races in dentry_name() and inode_name()"
Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Jeff Dike <jdike@addtoit.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: stable@vger.kernel.org
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agodm snapshot: fix hung bios when copy error occurs
Mikulas Patocka [Sat, 9 Jan 2016 00:07:55 +0000 (19:07 -0500)]
dm snapshot: fix hung bios when copy error occurs

[ Upstream commit 385277bfb57faac44e92497104ba542cdd82d5fe ]

When there is an error copying a chunk, dm-snapshot can incorrectly hold
the associated bios indefinitely, resulting in hung IO.

The function copy_callback sets pe->error if there was error copying the
chunk, and then calls complete_exception.  complete_exception calls
pending_complete on error, otherwise it calls commit_exception with
commit_callback (and commit_callback calls complete_exception).

The persistent exception store (dm-snap-persistent.c) assumes that calls
to prepare_exception and commit_exception are paired.
persistent_prepare_exception increases ps->pending_count and
persistent_commit_exception decreases it.

If there is a copy error, persistent_prepare_exception is called but
persistent_commit_exception is not.  This results in the variable
ps->pending_count never returning to zero and that causes some pending
exceptions (and their associated bios) to be held forever.

Fix this by unconditionally calling commit_exception regardless of
whether the copy was successful.  A new "valid" parameter is added to
commit_exception -- when the copy fails this parameter is set to zero so
that the chunk that failed to copy (and all following chunks) is not
recorded in the snapshot store.  Also, remove commit_callback now that
it is merely a wrapper around pending_complete.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoscsi: add Synology to 1024 sector blacklist
Mike Christie [Thu, 7 Jan 2016 22:34:05 +0000 (16:34 -0600)]
scsi: add Synology to 1024 sector blacklist

[ Upstream commit 9055082fb100cc66e20c048251d05159f5f2cfba ]

This is another iSCSI target that cannot handle large IOs but does not
tell us a limit.

The Synology iSCSI targets report:

Block limits VPD page (SBC):
  Write same no zero (WSNZ): 0
  Maximum compare and write length: 0 blocks
  Optimal transfer length granularity: 0 blocks
  Maximum transfer length: 0 blocks
  Optimal transfer length: 0 blocks
  Maximum prefetch length: 0 blocks
  Maximum unmap LBA count: 0
  Maximum unmap block descriptor count: 0
  Optimal unmap granularity: 0
  Unmap granularity alignment valid: 0
  Unmap granularity alignment: 0
  Maximum write same length: 0x0 blocks

and the size of the command it can handle seems to depend on how much
memory it can allocate at the time. This results in IO errors when
handling large IOs. This patch just has us use the old 1024 default
sectors for this target by adding it to the scsi blacklist. We do not
have good contacts with this vendor, so I have not been able to get
this fixed on their side.

I posted this a long while back, but it was not merged. This
version just fixes up merge/patch failures in the original
version.

Reported-by: Ancoron Luciferis <ancoron.luciferis@googlemail.com>
Reported-by: Michael Meyers <steltek@tcnnet.com>
Signed-off-by: Mike Christie <mchristi@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Cc: <stable@vger.kernel.org> # 4.1+
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agolocks: fix unlock when fcntl_setlk races with a close
Jeff Layton [Thu, 7 Jan 2016 21:38:10 +0000 (16:38 -0500)]
locks: fix unlock when fcntl_setlk races with a close

[ Upstream commit 7f3697e24dc3820b10f445a4a7d914fc356012d1 ]

Dmitry reported that he was able to reproduce the WARN_ON_ONCE that
fires in locks_free_lock_context when the flc_posix list isn't empty.

The problem turns out to be that we're basically rebuilding the
file_lock from scratch in fcntl_setlk when we discover that the setlk
has raced with a close. If the l_whence field is SEEK_CUR or SEEK_END,
then we may end up with fl_start and fl_end values that differ from
when the lock was initially set, if the file position or length of the
file has changed in the interim.

Fix this by just reusing the same lock request structure, and simply
overriding the fl_type value with F_UNLCK as appropriate. That ensures that
we really are unlocking the lock that was initially set.

While we're there, make sure that we do pop a WARN_ON_ONCE if the
removal ever fails. Also return -EBADF in this event, since that's
what we would have returned if the close had happened earlier.
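
A rough sketch of the recovery path described above (helper names
approximate the fs/locks.c internals):

  /*
   * The setlk raced with a close: unlock using the original request
   * instead of a rebuilt one, so fl_start/fl_end cannot drift.
   */
  file_lock->fl_type = F_UNLCK;
  error2 = do_lock_file_wait(filp, cmd, file_lock);
  WARN_ON_ONCE(error2);   /* removing our own lock must not fail */
  error = -EBADF;         /* same result as if the close came first */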

Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: <stable@vger.kernel.org>
Fixes: c293621bbf67 (stale POSIX lock handling)
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: Jeff Layton <jeff.layton@primarydata.com>
Acked-by: "J. Bruce Fields" <bfields@fieldses.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoiwlwifi: pcie: properly configure the debug buffer size for 8000
Emmanuel Grumbach [Tue, 5 Jan 2016 13:25:43 +0000 (15:25 +0200)]
iwlwifi: pcie: properly configure the debug buffer size for 8000

[ Upstream commit 62d7476d958ce06d7a10b02bdb30006870286fe2 ]

The 8000 device family has a new debug engine that needs to be
configured differently than the 7000's.
The debug engine's DMA works in chunks of memory and the
size of the buffer really means the start of the last
chunk. Since one chunk is 256 bytes long, we should
configure the device to write to buffer_size - 256.
This fixes a situation where the device would write to
memory it is not allowed to access.

CC: <stable@vger.kernel.org> [4.1+]
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoiwlwifi: update and fix 7265 series PCI IDs
Sasha Levin [Mon, 1 Feb 2016 20:02:44 +0000 (15:02 -0500)]
iwlwifi: update and fix 7265 series PCI IDs

[ Upstream commit 006bda75d81fd27a583a3b310e9444fea2aa6ef2 ]

Update and fix some 7265 PCI IDs entries.

CC: <stable@vger.kernel.org> [3.13+]
Signed-off-by: Oren Givon <oren.givon@intel.com>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agobtrfs: handle invalid num_stripes in sys_array
David Sterba [Mon, 30 Nov 2015 16:27:06 +0000 (17:27 +0100)]
btrfs: handle invalid num_stripes in sys_array

[ Upstream commit f5cdedd73fa71b74dcc42f2a11a5735d89ce7c4f ]

We can handle the special case of num_stripes == 0 directly inside
btrfs_read_sys_array. The BUG_ON in btrfs_chunk_item_size is there to
catch other unhandled cases where we fail to validate external data.

A crafted or corrupted image crashes at mount time:

BTRFS: device fsid 9006933e-2a9a-44f0-917f-514252aeec2c devid 1 transid 7 /dev/loop0
BTRFS info (device loop0): disk space caching is enabled
BUG: failure at fs/btrfs/ctree.h:337/btrfs_chunk_item_size()!
Kernel panic - not syncing: BUG!
CPU: 0 PID: 313 Comm: mount Not tainted 4.2.5-00657-ge047887-dirty #25
Stack:
 637af890 60062489 602aeb2e 604192ba
 60387961 00000011 637af8a0 6038a835
 637af9c0 6038776b 634ef32b 00000000
Call Trace:
 [<6001c86d>] show_stack+0xfe/0x15b
 [<6038a835>] dump_stack+0x2a/0x2c
 [<6038776b>] panic+0x13e/0x2b3
 [<6020f099>] btrfs_read_sys_array+0x25d/0x2ff
 [<601cfbbe>] open_ctree+0x192d/0x27af
 [<6019c2c1>] btrfs_mount+0x8f5/0xb9a
 [<600bc9a7>] mount_fs+0x11/0xf3
 [<600d5167>] vfs_kern_mount+0x75/0x11a
 [<6019bcb0>] btrfs_mount+0x2e4/0xb9a
 [<600bc9a7>] mount_fs+0x11/0xf3
 [<600d5167>] vfs_kern_mount+0x75/0x11a
 [<600d710b>] do_mount+0xa35/0xbc9
 [<600d7557>] SyS_mount+0x95/0xc8
 [<6001e884>] handle_syscall+0x6b/0x8e
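
The added check amounts to something like this in btrfs_read_sys_array()
(a sketch; the exact error message is an assumption):

  num_stripes = btrfs_chunk_num_stripes(sb, chunk);
  if (!num_stripes) {
          printk(KERN_ERR
                 "BTRFS: invalid number of stripes %u in sys_array\n",
                 num_stripes);
          ret = -EIO;
          break;
  }
  len = btrfs_chunk_item_size(num_stripes);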

Reported-by: Jiri Slaby <jslaby@suse.com>
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
CC: stable@vger.kernel.org # 3.19+
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoPCI: host: Mark PCIe/PCI (MSI) IRQ cascade handlers as IRQF_NO_THREAD
Grygorii Strashko [Thu, 10 Dec 2015 19:18:20 +0000 (21:18 +0200)]
PCI: host: Mark PCIe/PCI (MSI) IRQ cascade handlers as IRQF_NO_THREAD

[ Upstream commit 8ff0ef996ca00028519c70e8d51d32bd37eb51dc ]

On -RT and if kernel is booting with "threadirqs" cmd line parameter,
PCIe/PCI (MSI) IRQ cascade handlers (like dra7xx_pcie_msi_irq_handler())
will be forced threaded and, as a result, will generate warnings like this:

  WARNING: CPU: 1 PID: 82 at kernel/irq/handle.c:150 handle_irq_event_percpu+0x14c/0x174()
  irq 460 handler irq_default_primary_handler+0x0/0x14 enabled interrupts
  Backtrace:
   (warn_slowpath_common) from (warn_slowpath_fmt+0x38/0x40)
   (warn_slowpath_fmt) from (handle_irq_event_percpu+0x14c/0x174)
   (handle_irq_event_percpu) from (handle_irq_event+0x84/0xb8)
   (handle_irq_event) from (handle_simple_irq+0x90/0x118)
   (handle_simple_irq) from (generic_handle_irq+0x30/0x44)
   (generic_handle_irq) from (dra7xx_pcie_msi_irq_handler+0x7c/0x8c)
   (dra7xx_pcie_msi_irq_handler) from (irq_forced_thread_fn+0x28/0x5c)
   (irq_forced_thread_fn) from (irq_thread+0x128/0x204)

This happens because all of them invoke generic_handle_irq() from the
requested handler.  generic_handle_irq() grabs raw_locks and thus needs to
run in raw-IRQ context.

This issue was originally reproduced on a TI dra7-evm, but, as was
identified during discussion [1], other hosts can also suffer from this
issue.  Fix them all at once by marking PCIe/PCI (MSI) IRQ cascade handlers
IRQF_NO_THREAD explicitly.

[1] http://lkml.kernel.org/r/1448027966-21610-1-git-send-email-grygorii.strashko@ti.com
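
In practice the change is a one-flag addition at each cascade handler's
request_irq() call site, along these lines (the dra7xx names are only an
example of one affected host driver):

  ret = devm_request_irq(&pdev->dev, irq, dra7xx_pcie_msi_irq_handler,
                         IRQF_SHARED | IRQF_NO_THREAD,
                         "dra7-pcie-msi", dra7xx);
  if (ret) {
          dev_err(&pdev->dev, "failed to request irq\n");
          return ret;
  }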

[bhelgaas: add stable tag, fix typos]
Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Lucas Stach <l.stach@pengutronix.de> (for imx6)
CC: stable@vger.kernel.org
CC: Kishon Vijay Abraham I <kishon@ti.com>
CC: Jingoo Han <jingoohan1@gmail.com>
CC: Kukjin Kim <kgene@kernel.org>
CC: Krzysztof Kozlowski <k.kozlowski@samsung.com>
CC: Richard Zhu <Richard.Zhu@freescale.com>
CC: Thierry Reding <thierry.reding@gmail.com>
CC: Stephen Warren <swarren@wwwdotorg.org>
CC: Alexandre Courbot <gnurou@gmail.com>
CC: Simon Horman <horms@verge.net.au>
CC: Pratyush Anand <pratyush.anand@gmail.com>
CC: Michal Simek <michal.simek@xilinx.com>
CC: "Sören Brinkmann" <soren.brinkmann@xilinx.com>
CC: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoPCI: Fix minimum allocation address overwrite
Christoph Biedl [Wed, 23 Dec 2015 15:51:57 +0000 (16:51 +0100)]
PCI: Fix minimum allocation address overwrite

[ Upstream commit 3460baa620685c20f5ee19afb6d99d26150c382c ]

Commit 36e097a8a297 ("PCI: Split out bridge window override of minimum
allocation address") claimed to do no functional changes but unfortunately
did: The "min" variable is altered.  At least the AVM A1 PCMCIA adapter was
no longer detected, breaking ISDN operation.

Use a local copy of "min" to restore the previous behaviour.

[bhelgaas: avoid gcc "?:" extension for portability and readability]
Fixes: 36e097a8a297 ("PCI: Split out bridge window override of minimum allocation address")
Signed-off-by: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: stable@vger.kernel.org # v3.14+
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agodrm/dp/mst: fix in RAD element access
Mykola Lysenko [Fri, 25 Dec 2015 08:14:48 +0000 (16:14 +0800)]
drm/dp/mst: fix in RAD element access

[ Upstream commit 7a11a334aa6af4c65c6a0d81b60c97fc18673532 ]

This is needed to receive the correct port
number from the RAD, so the MSTB can be found.

Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Mykola Lysenko <Mykola.Lysenko@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agodrm/dp/mst: fix in MSTB RAD initialization
Mykola Lysenko [Fri, 25 Dec 2015 08:14:47 +0000 (16:14 +0800)]
drm/dp/mst: fix in MSTB RAD initialization

[ Upstream commit 75af4c8c4c0f60d7ad135419805798f144e9baf9 ]

This fix is needed to support more than two
branch displays, so that the RAD address consists of
at least 2 elements.

Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Mykola Lysenko <Mykola.Lysenko@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agodrm/dp/mst: always send reply for UP request
Mykola Lysenko [Fri, 18 Dec 2015 22:14:43 +0000 (17:14 -0500)]
drm/dp/mst: always send reply for UP request

[ Upstream commit 1f16ee7fa13649f4e55aa48ad31c3eb0722a62d3 ]

We should always send a reply to an UP request in order
to make the downstream device clean up resources appropriately.

The issue was that a reply to an UP request was sent only once.

Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Mykola Lysenko <Mykola.Lysenko@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agodrm/dp/mst: process broadcast messages correctly
Mykola Lysenko [Fri, 18 Dec 2015 22:14:42 +0000 (17:14 -0500)]
drm/dp/mst: process broadcast messages correctly

[ Upstream commit bd9343208704fcc70a5b919f228a7d26ae472727 ]

When a broadcast message is received in an UP request,
the RAD cannot be used to identify the message originator.
The message should be parsed and the originator found by the
GUID from the parsed message.

Also reply with a broadcast when a broadcast message is
received (for now it is always a broadcast).

Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Mykola Lysenko <Mykola.Lysenko@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoudf: Check output buffer length when converting name to CS0
Andrew Gabbasov [Thu, 24 Dec 2015 16:25:33 +0000 (10:25 -0600)]
udf: Check output buffer length when converting name to CS0

[ Upstream commit bb00c898ad1ce40c4bb422a8207ae562e9aea7ae ]

If a name contains at least some characters with Unicode values
exceeding a single byte, the CS0 output should have 2 bytes per character.
And if other input characters have single-byte Unicode values, then
each such input byte is converted to 2 output bytes, and the length
of the output becomes larger than the length of the input. And if the
input name is long enough, the output length may exceed the allocated
buffer length.

All this means that conversion from UTF8 or NLS to CS0 requires
checking the output length in order to stop when it exceeds the given
output buffer size.

[JK: Make code return -ENAMETOOLONG instead of silently truncating the
name]
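
In sketch form (variable names are illustrative), the bounded conversion
loop looks like this:

  /* Each character needs 2 CS0 bytes once any character is above 0xff;
   * stop before overrunning the output buffer. */
  for (i = 0; i < in_len; i++) {
          if (out_len + 2 > out_max)
                  return -ENAMETOOLONG;
          out[out_len++] = in[i] >> 8;
          out[out_len++] = in[i] & 0xff;
  }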

CC: stable@vger.kernel.org
Signed-off-by: Andrew Gabbasov <andrew_gabbasov@mentor.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoudf: Prevent buffer overrun with multi-byte characters
Andrew Gabbasov [Thu, 24 Dec 2015 16:25:32 +0000 (10:25 -0600)]
udf: Prevent buffer overrun with multi-byte characters

[ Upstream commit ad402b265ecf6fa22d04043b41444cdfcdf4f52d ]

The udf_CS0toUTF8 function stops the conversion when the output buffer
length reaches UDF_NAME_LEN-2, which is the correct maximum name length,
but, when checking, it leaves space for a single byte only, while
multi-byte output characters can take more space, causing a buffer
overflow.

A similar error exists in the udf_CS0toNLS function, which restricts
the output length to UDF_NAME_LEN, while the actual maximum allowed
length is UDF_NAME_LEN-2.

In these cases the output can overrun not only the current buffer
length field, causing corruption of the name buffer itself, but also
the following allocation structures, causing a kernel crash.

Adjust the output length checks in both functions to prevent buffer
overruns in case of multi-byte UTF8 or NLS characters.

CC: stable@vger.kernel.org
Signed-off-by: Andrew Gabbasov <andrew_gabbasov@mentor.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoInput: i8042 - add Fujitsu Lifebook U745 to the nomux list
Aurélien Francillon [Sun, 3 Jan 2016 04:39:54 +0000 (20:39 -0800)]
Input: i8042 - add Fujitsu Lifebook U745 to the nomux list

[ Upstream commit dd0d0d4de582a6a61c032332c91f4f4cb2bab569 ]

Without i8042.nomux=1 the Elantech touch pad does not work at all on
a Fujitsu Lifebook U745. This patch does not seem necessary for all
U745 (maybe because of different BIOS versions?). However, it was
verified that the patch does not break those (see opensuse bug 883192:
https://bugzilla.opensuse.org/show_bug.cgi?id=883192).
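
The fix itself is one more entry in the i8042 nomux DMI table, roughly
(the DMI match strings are assumptions based on the model name):

  {
          /* Fujitsu Lifebook U745 */
          .matches = {
                  DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU"),
                  DMI_MATCH(DMI_PRODUCT_NAME, "LIFEBOOK U745"),
          },
  },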

Signed-off-by: Aurélien Francillon <aurelien@francillon.net>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agowlcore/wl12xx: spi: fix NULL pointer dereference (Oops)
Uri Mashiach [Thu, 24 Dec 2015 14:05:00 +0000 (16:05 +0200)]
wlcore/wl12xx: spi: fix NULL pointer dereference (Oops)

[ Upstream commit e47301b06d5a65678690f04c2248fd181db1e59a ]

Fix the below Oops when trying to modprobe wlcore_spi.
The oops occurs because the wl1271_power_{off,on}()
functions don't check the power() function pointer.

[   23.401447] Unable to handle kernel NULL pointer dereference at
virtual address 00000000
[   23.409954] pgd = c0004000
[   23.412922] [00000000] *pgd=00000000
[   23.416693] Internal error: Oops: 80000007 [#1] SMP ARM
[   23.422168] Modules linked in: wl12xx wlcore mac80211 cfg80211
musb_dsps musb_hdrc usbcore usb_common snd_soc_simple_card evdev joydev
omap_rng wlcore_spi snd_soc_tlv320aic23_i2c rng_core snd_soc_tlv320aic23
c_can_platform c_can can_dev snd_soc_davinci_mcasp snd_soc_edma
snd_soc_omap omap_wdt musb_am335x cpufreq_dt thermal_sys hwmon
[   23.453253] CPU: 0 PID: 36 Comm: kworker/0:2 Not tainted
4.2.0-00002-g951efee-dirty #233
[   23.461720] Hardware name: Generic AM33XX (Flattened Device Tree)
[   23.468123] Workqueue: events request_firmware_work_func
[   23.473690] task: de32efc0 ti: de4ee000 task.ti: de4ee000
[   23.479341] PC is at 0x0
[   23.482112] LR is at wl12xx_set_power_on+0x28/0x124 [wlcore]
[   23.488074] pc : [<00000000>]    lr : [<bf2581f0>]    psr: 60000013
[   23.488074] sp : de4efe50  ip : 00000002  fp : 00000000
[   23.500162] r10: de7cdd00  r9 : dc848800  r8 : bf27af00
[   23.505663] r7 : bf27a1a8  r6 : dcbd8a80  r5 : dce0e2e0  r4 :
dce0d2e0
[   23.512536] r3 : 00000000  r2 : 00000000  r1 : 00000001  r0 :
dc848810
[   23.519412] Flags: nZCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM
Segment kernel
[   23.527109] Control: 10c5387d  Table: 9cb78019  DAC: 00000015
[   23.533160] Process kworker/0:2 (pid: 36, stack limit = 0xde4ee218)
[   23.539760] Stack: (0xde4efe50 to 0xde4f0000)

[...]

[   23.665030] [<bf2581f0>] (wl12xx_set_power_on [wlcore]) from
[<bf25f7ac>] (wlcore_nvs_cb+0x118/0xa4c [wlcore])
[   23.675604] [<bf25f7ac>] (wlcore_nvs_cb [wlcore]) from [<c04387ec>]
(request_firmware_work_func+0x30/0x58)
[   23.685784] [<c04387ec>] (request_firmware_work_func) from
[<c0058e2c>] (process_one_work+0x1b4/0x4b4)
[   23.695591] [<c0058e2c>] (process_one_work) from [<c0059168>]
(worker_thread+0x3c/0x4a4)
[   23.704124] [<c0059168>] (worker_thread) from [<c005ee68>]
(kthread+0xd4/0xf0)
[   23.711747] [<c005ee68>] (kthread) from [<c000f598>]
(ret_from_fork+0x14/0x3c)
[   23.719357] Code: bad PC value
[   23.722760] ---[ end trace 981be8510db9b3a9 ]---

Prevent the oops by validating the power() pointer value before
calling the function.
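
A minimal sketch of the guard (assuming the power callback sits behind the
interface ops pointer and may be left NULL by the SPI glue):

  static int wl1271_power_off(struct wl1271 *wl)
  {
          int ret = 0;

          if (wl->if_ops->power)
                  ret = wl->if_ops->power(wl->dev, false);
          if (!ret)
                  clear_bit(WL1271_FLAG_GPIO_POWER, &wl->flags);

          return ret;
  }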

Signed-off-by: Uri Mashiach <uri.mashiach@compulab.co.il>
Cc: stable@vger.kernel.org
Acked-by: Igor Grinberg <grinberg@compulab.co.il>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agobcache: Change refill_dirty() to always scan entire disk if necessary
Kent Overstreet [Mon, 30 Nov 2015 02:47:01 +0000 (18:47 -0800)]
bcache: Change refill_dirty() to always scan entire disk if necessary

[ Upstream commit 627ccd20b4ad3ba836472468208e2ac4dfadbf03 ]

Previously, it would only scan the entire disk if it was starting from
the very start of the disk - i.e. if the previous scan got to the end.

This was broken by refill_full_stripes(), which updates last_scanned so
that refill_dirty was never triggering the searched_from_start path.

But if we change refill_dirty() to always scan the entire disk if
necessary, regardless of what last_scanned was, the code gets cleaner
and we fix that bug too.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agobcache: prevent crash on changing writeback_running
Stefan Bader [Mon, 30 Nov 2015 02:44:49 +0000 (18:44 -0800)]
bcache: prevent crash on changing writeback_running

[ Upstream commit 8d16ce540c94c9d366eb36fc91b7154d92d6397b ]

Add a safeguard for the shutdown case. At least while not attached, it
is also possible to trigger a kernel bug by writing into
writeback_running. This change adds the same check before trying to
wake up the thread for that case.
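
The safeguard boils down to checking the thread pointer before waking it,
roughly:

  static inline void bch_writeback_queue(struct cached_dev *dc)
  {
          /* the writeback thread may not exist yet, or failed to start */
          if (!IS_ERR_OR_NULL(dc->writeback_thread))
                  wake_up_process(dc->writeback_thread);
  }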

Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agobcache: allows use of register in udev to avoid "device_busy" error.
Gabriel de Perthuis [Mon, 30 Nov 2015 02:40:23 +0000 (18:40 -0800)]
bcache: allows use of register in udev to avoid "device_busy" error.

[ Upstream commit d7076f21629f8f329bca4a44dc408d94670f49e2 ]

Allow the use of register, not register_quiet, in udev to avoid a "device_busy" error.
The initial patch proposed at https://lkml.org/lkml/2013/8/26/549 by Gabriel de Perthuis
<g2p.code@gmail.com> did not unlock the mutex and hung the kernel.

See http://thread.gmane.org/gmane.linux.kernel.bcache.devel/2594 for the discussion.

Cc: Denis Bychkov <manover@gmail.com>
Cc: Kent Overstreet <kent.overstreet@gmail.com>
Cc: Eric Wheeler <bcache@linux.ewheeler.net>
Cc: Gabriel de Perthuis <g2p.code@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agobcache: unregister reboot notifier if bcache fails to unregister device
Zheng Liu [Mon, 30 Nov 2015 01:21:57 +0000 (17:21 -0800)]
bcache: unregister reboot notifier if bcache fails to unregister device

[ Upstream commit 2ecf0cdb2b437402110ab57546e02abfa68a716b ]

The bcache_init() function forgot to unregister the reboot notifier if
bcache fails to unregister a block device.  This commit fixes that.

Signed-off-by: Zheng Liu <wenqing.lz@taobao.com>
Tested-by: Joshua Schmid <jschmid@suse.com>
Tested-by: Eric Wheeler <bcache@linux.ewheeler.net>
Cc: Kent Overstreet <kmo@daterainc.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agobcache: fix a leak in bch_cached_dev_run()
Al Viro [Mon, 30 Nov 2015 01:20:59 +0000 (17:20 -0800)]
bcache: fix a leak in bch_cached_dev_run()

[ Upstream commit 4d4d8573a8451acc9f01cbea24b7e55f04a252fe ]

Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Tested-by: Joshua Schmid <jschmid@suse.com>
Tested-by: Eric Wheeler <bcache@linux.ewheeler.net>
Cc: Kent Overstreet <kmo@daterainc.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agobcache: clear BCACHE_DEV_UNLINK_DONE flag when attaching a backing device
Zheng Liu [Mon, 30 Nov 2015 01:19:32 +0000 (17:19 -0800)]
bcache: clear BCACHE_DEV_UNLINK_DONE flag when attaching a backing device

[ Upstream commit fecaee6f20ee122ad75402c53d8278f9bb142ddc ]

This bug can be reproduced by the following script:

  #!/bin/bash

  bcache_sysfs="/sys/fs/bcache"

  function clear_cache()
  {
   if [ ! -e $bcache_sysfs ]; then
   echo "no bcache sysfs"
   exit
   fi

   cset_uuid=$(ls -l $bcache_sysfs|head -n 2|tail -n 1|awk '{print $9}')
   sudo sh -c "echo $cset_uuid > /sys/block/sdb/sdb1/bcache/detach"
   sleep 5
   sudo sh -c "echo $cset_uuid > /sys/block/sdb/sdb1/bcache/attach"
  }

  for ((i=0;i<10;i++)); do
   clear_cache
  done

The warning messages look like below:
[  275.948611] ------------[ cut here ]------------
[  275.963840] WARNING: at fs/sysfs/dir.c:512 sysfs_add_one+0xb8/0xd0() (Tainted: P        W
---------------   )
[  275.979253] Hardware name: Tecal RH2285
[  275.994106] sysfs: cannot create duplicate filename '/devices/pci0000:00/0000:00:09.0/0000:08:00.0/host4/target4:2:1/4:2:1:0/block/sdb/sdb1/bcache/cache'
[  276.024105] Modules linked in: bcache tcp_diag inet_diag ipmi_devintf ipmi_si ipmi_msghandler
bonding 8021q garp stp llc ipv6 ext3 jbd loop sg iomemory_vsl(P) bnx2 microcode serio_raw i2c_i801
i2c_core iTCO_wdt iTCO_vendor_support i7core_edac edac_core shpchp ext4 jbd2 mbcache megaraid_sas
pata_acpi ata_generic ata_piix dm_mod [last unloaded: scsi_wait_scan]
[  276.072643] Pid: 2765, comm: sh Tainted: P        W  ---------------    2.6.32 #1
[  276.089315] Call Trace:
[  276.105801]  [<ffffffff81070fe7>] ? warn_slowpath_common+0x87/0xc0
[  276.122650]  [<ffffffff810710d6>] ? warn_slowpath_fmt+0x46/0x50
[  276.139361]  [<ffffffff81205c08>] ? sysfs_add_one+0xb8/0xd0
[  276.156012]  [<ffffffff8120609b>] ? sysfs_do_create_link+0x12b/0x170
[  276.172682]  [<ffffffff81206113>] ? sysfs_create_link+0x13/0x20
[  276.189282]  [<ffffffffa03bda21>] ? bcache_device_link+0xc1/0x110 [bcache]
[  276.205993]  [<ffffffffa03bfa08>] ? bch_cached_dev_attach+0x478/0x4f0 [bcache]
[  276.222794]  [<ffffffffa03c4a17>] ? bch_cached_dev_store+0x627/0x780 [bcache]
[  276.239680]  [<ffffffff8116783a>] ? alloc_pages_current+0xaa/0x110
[  276.256594]  [<ffffffff81203b15>] ? sysfs_write_file+0xe5/0x170
[  276.273364]  [<ffffffff811887b8>] ? vfs_write+0xb8/0x1a0
[  276.290133]  [<ffffffff811890b1>] ? sys_write+0x51/0x90
[  276.306368]  [<ffffffff8100c072>] ? system_call_fastpath+0x16/0x1b
[  276.322301] ---[ end trace 9f5d4fcdd0c3edfb ]---
[  276.338241] ------------[ cut here ]------------
[  276.354109] WARNING: at /home/wenqing.lz/bcache/bcache/super.c:720
bcache_device_link+0xdf/0x110 [bcache]() (Tainted: P        W  ---------------   )
[  276.386017] Hardware name: Tecal RH2285
[  276.401430] Couldn't create device <-> cache set symlinks
[  276.401759] Modules linked in: bcache tcp_diag inet_diag ipmi_devintf ipmi_si ipmi_msghandler
bonding 8021q garp stp llc ipv6 ext3 jbd loop sg iomemory_vsl(P) bnx2 microcode serio_raw i2c_i801
i2c_core iTCO_wdt iTCO_vendor_support i7core_edac edac_core shpchp ext4 jbd2 mbcache megaraid_sas
pata_acpi ata_generic ata_piix dm_mod [last unloaded: scsi_wait_scan]
[  276.465477] Pid: 2765, comm: sh Tainted: P        W  ---------------    2.6.32 #1
[  276.482169] Call Trace:
[  276.498610]  [<ffffffff81070fe7>] ? warn_slowpath_common+0x87/0xc0
[  276.515405]  [<ffffffff810710d6>] ? warn_slowpath_fmt+0x46/0x50
[  276.532059]  [<ffffffffa03bda3f>] ? bcache_device_link+0xdf/0x110 [bcache]
[  276.548808]  [<ffffffffa03bfa08>] ? bch_cached_dev_attach+0x478/0x4f0 [bcache]
[  276.565569]  [<ffffffffa03c4a17>] ? bch_cached_dev_store+0x627/0x780 [bcache]
[  276.582418]  [<ffffffff8116783a>] ? alloc_pages_current+0xaa/0x110
[  276.599341]  [<ffffffff81203b15>] ? sysfs_write_file+0xe5/0x170
[  276.616142]  [<ffffffff811887b8>] ? vfs_write+0xb8/0x1a0
[  276.632607]  [<ffffffff811890b1>] ? sys_write+0x51/0x90
[  276.648671]  [<ffffffff8100c072>] ? system_call_fastpath+0x16/0x1b
[  276.664756] ---[ end trace 9f5d4fcdd0c3edfc ]---

We forget to clear the BCACHE_DEV_UNLINK_DONE flag in the bcache_device_attach()
function when we attach a backing device for the first time.  After detaching this
backing device, this flag will be true and sysfs_remove_link() isn't called in
bcache_device_unlink().  Then when we attach this backing device again,
sysfs_create_link() will return an EEXIST error in bcache_device_link().

So the fix is trivial: we clear this flag in bcache_device_link().

Signed-off-by: Zheng Liu <wenqing.lz@taobao.com>
Tested-by: Joshua Schmid <jschmid@suse.com>
Tested-by: Eric Wheeler <bcache@linux.ewheeler.net>
Cc: Kent Overstreet <kmo@daterainc.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agobcache: Add a cond_resched() call to gc
Kent Overstreet [Mon, 30 Nov 2015 01:18:33 +0000 (17:18 -0800)]
bcache: Add a cond_resched() call to gc

[ Upstream commit c5f1e5adf956e3ba82d204c7c141a75da9fa449a ]

Signed-off-by: Takashi Iwai <tiwai@suse.de>
Tested-by: Eric Wheeler <bcache@linux.ewheeler.net>
Cc: Kent Overstreet <kmo@daterainc.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agobcache: fix a livelock when we cause a huge number of cache misses
Zheng Liu [Mon, 30 Nov 2015 01:17:05 +0000 (17:17 -0800)]
bcache: fix a livelock when we cause a huge number of cache misses

[ Upstream commit 2ef9ccbfcb90cf84bdba320a571b18b05c41101b ]

Subject : [PATCH v2] bcache: fix a livelock in btree lock
Date : Wed, 25 Feb 2015 20:32:09 +0800 (02/25/2015 04:32:09 AM)

This commit tries to fix a livelock in bcache.  This livelock might
happen when we cause a huge number of cache misses simultaneously.

When we get a cache miss, bcache will execute the following path.

->cached_dev_make_request()
  ->cached_dev_read()
    ->cached_lookup()
      ->bch->btree_map_keys()
        ->btree_root()  <------------------------
          ->bch_btree_map_keys_recurse()        |
            ->cache_lookup_fn()                 |
              ->cached_dev_cache_miss()         |
                ->bch_btree_insert_check_key() -|
                  [If btree->seq is not equal to seq + 1, we should return
                   EINTR and traverse btree again.]

In the bch_btree_insert_check_key() function we first need to check the
upgrade flag (op->lock == -1), and when this flag is true we need to
release the read btree->lock and try to take the write btree->lock.  While
taking and releasing this write lock, btree->seq is monotonically increased
in order to prevent other threads from modifying the btree during the cache
miss (see btree.h:74).  But if there are many cache misses caused by
concurrent requests, we can hit a livelock because btree->seq is always
being changed by others.  Thus no one can make progress.

This commit tries to take the write btree->lock if it encounters a race
while traversing the btree.  Although this sacrifices scalability, it
ensures that only one thread can modify the btree at a time.

Signed-off-by: Zheng Liu <wenqing.lz@taobao.com>
Tested-by: Joshua Schmid <jschmid@suse.com>
Tested-by: Eric Wheeler <bcache@linux.ewheeler.net>
Cc: Joshua Schmid <jschmid@suse.com>
Cc: Zhu Yanhai <zhu.yanhai@gmail.com>
Cc: Kent Overstreet <kmo@daterainc.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agortlwifi: rtl_pci: Fix kernel panic
Larry Finger [Mon, 21 Dec 2015 23:05:08 +0000 (17:05 -0600)]
rtlwifi: rtl_pci: Fix kernel panic

[ Upstream commit f99551a2d39dc26ea03dc6761be11ac913eb2d57 ]

In commit 38506ecefab9 (rtlwifi: rtl_pci: Start modification for new
drivers), a bug was introduced that causes a NULL pointer dereference.
As this bug only affects the infrequently used RTL8192EE and only under
low-memory conditions, it has taken a long time for the bug to show up.

The bug was reported on the linux-wireless mailing list and also at
https://bugs.launchpad.net/ubuntu/+source/ubuntu-release-upgrader/ as
bug #1527603 (kernel crashes due to rtl8192ee driver on ubuntu 15.10).

Fixes: 38506ecefab9 ("rtlwifi: rtl_pci: Start modification for new drivers")
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: Stable <stable@vger.kernel.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoNFS: Fix attribute cache revalidation
Trond Myklebust [Tue, 29 Dec 2015 23:55:19 +0000 (18:55 -0500)]
NFS: Fix attribute cache revalidation

[ Upstream commit ade14a7df796d4e86bd9d181193c883a57b13db0 ]

If a NFSv4 client uses the cache_consistency_bitmask in order to
request only information about the change attribute, timestamps and
size, then it has not revalidated all attributes, and hence the
attribute timeout timestamp should not be updated.

Reported-by: Donald Buczek <buczek@molgen.mpg.de>
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoNFS: Remove the "NFS_CAP_CHANGE_ATTR" capability
Trond Myklebust [Sun, 5 Jul 2015 15:12:07 +0000 (11:12 -0400)]
NFS: Remove the "NFS_CAP_CHANGE_ATTR" capability

[ Upstream commit cd812599796f500b042f5464b6665755eca21137 ]

Setting the change attribute has been mandatory for all NFS versions, since
commit 3a1556e8662c ("NFSv2/v3: Simulate the change attribute"). We should
therefore not have anything be conditional on it being set/unset.

Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agortlwifi: rtl8192cu: Add missing parameter setup
Larry Finger [Mon, 14 Dec 2015 22:34:38 +0000 (16:34 -0600)]
rtlwifi: rtl8192cu: Add missing parameter setup

[ Upstream commit b68d0ae7e58624c33f2eddab471fee55db27dbf9 ]

This driver fails to copy the module parameter for software encryption
to the locations used by the main code.

Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: Stable <stable@vger.kernel.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agortlwifi: rtl8192ce: Fix handling of module parameters
Larry Finger [Mon, 14 Dec 2015 22:34:37 +0000 (16:34 -0600)]
rtlwifi: rtl8192ce: Fix handling of module parameters

[ Upstream commit b24f19f16b9e43f54218c07609b783ea8625406a ]

The module parameter for software encryption was never transferred to
the location used by the driver.

Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: Stable <stable@vger.kernel.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agortlwifi: rtl8192se: Fix module parameter initialization
Larry Finger [Mon, 14 Dec 2015 22:34:36 +0000 (16:34 -0600)]
rtlwifi: rtl8192se: Fix module parameter initialization

[ Upstream commit 7503efbd82c15c4070adffff1344e5169d3634b4 ]

Two of the module parameter descriptions show incorrect default values.
In addition the value for software encryption is not transferred to
the locations used by the driver.

Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: Stable <stable@vger.kernel.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agortlwifi: rtl8192de: Fix incorrect module parameter descriptions
Larry Finger [Mon, 14 Dec 2015 22:34:35 +0000 (16:34 -0600)]
rtlwifi: rtl8192de: Fix incorrect module parameter descriptions

[ Upstream commit d4d60b4caaa5926e1b243070770968f05656107a ]

Two of the module parameters are listed with incorrect default values.

Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: Stable <stable@vger.kernel.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agortlwifi: rtl8188ee: Fix module parameter initialization
Larry Finger [Mon, 14 Dec 2015 22:34:34 +0000 (16:34 -0600)]
rtlwifi: rtl8188ee: Fix module parameter initialization

[ Upstream commit 06f34572c6110e2e2d5e653a957f1d74db9e3f2b ]

In this driver, parameters disable_watchdog and sw_crypto are never
copied into the locations used in the main code. While modifying the
parameter handling, the copying of parameter msi_support is moved to
be with the rest.

Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Cc: Stable <stable@vger.kernel.org>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoposix-clock: Fix return code on the poll method's error path
Richard Cochran [Tue, 22 Dec 2015 21:19:58 +0000 (22:19 +0100)]
posix-clock: Fix return code on the poll method's error path

[ Upstream commit 1b9f23727abb92c5e58f139e7d180befcaa06fe0 ]

The posix_clock_poll function is supposed to return a bit mask of
POLLxxx values.  However, in case the hardware has disappeared (due to
hot plugging for example) this code returns -ENODEV in a futile
attempt to throw an error at the file descriptor level.  The kernel's
file_operations interface does not accept such error codes from the
poll method.  Instead, this function ought to return POLLERR.

The value -ENODEV does, in fact, contain the POLLERR bit (and almost
all the other POLLxxx bits as well), but only by chance.  This patch
fixes the code to return a proper bit mask.
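
The corrected poll method then reads roughly as follows (a sketch of
kernel/time/posix-clock.c):

  static unsigned int posix_clock_poll(struct file *fp, poll_table *wait)
  {
          struct posix_clock *clk = get_posix_clock(fp);
          unsigned int result = 0;

          if (!clk)
                  return POLLERR;         /* was: return -ENODEV; */

          if (clk->ops.poll)
                  result = clk->ops.poll(clk, fp, wait);

          put_posix_clock(clk);

          return result;
  }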

Credit goes to Markus Elfring for pointing out the suspicious
signed/unsigned mismatch.

Reported-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Richard Cochran <richardcochran@gmail.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Julia Lawall <julia.lawall@lip6.fr>
Link: http://lkml.kernel.org/r/1450819198-17420-1-git-send-email-richardcochran@gmail.com
Cc: stable@vger.kernel.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoThermal: do thermal zone update after a cooling device registered
Chen Yu [Fri, 30 Oct 2015 08:32:10 +0000 (16:32 +0800)]
Thermal: do thermal zone update after a cooling device registered

[ Upstream commit 4511f7166a2deb5f7a578cf87fd2fe1ae83527e3 ]

When a new cooling device is registered, we need to update the
thermal zone to set the newly registered cooling device to a proper
state.

This fixes a problem where the fan devices are left running at full
speed after boot, even though the system is cool, if a fan device is
registered after the thermal zone device.

Here is the history of why current patch looks like this:
https://patchwork.kernel.org/patch/7273041/
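
Conceptually, the registration path now walks the zone list and
re-evaluates each zone once the new cooling device is bound; a sketch using
the thermal core's list and lock names:

  /* at the end of cooling device registration, roughly: */
  mutex_lock(&thermal_list_lock);
  list_for_each_entry(pos, &thermal_tz_list, node)
          thermal_zone_device_update(pos);
  mutex_unlock(&thermal_list_lock);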

CC: <stable@vger.kernel.org> #3.18+
Reference: https://bugzilla.kernel.org/show_bug.cgi?id=92431
Tested-by: Manuel Krause <manuelkrause@netscape.net>
Tested-by: szegad <szegadlo@poczta.onet.pl>
Tested-by: prash <prash.n.rao@gmail.com>
Tested-by: amish <ammdispose-arch@yahoo.com>
Reviewed-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoThermal: handle thermal zone device properly during system sleep
Zhang Rui [Fri, 30 Oct 2015 08:31:58 +0000 (16:31 +0800)]
Thermal: handle thermal zone device properly during system sleep

[ Upstream commit ff140fea847e1c2002a220571ab106c2456ed252 ]

Current thermal code does not handle system sleep well because
1. the cooling device's cooling state may be changed during suspend
2. the previous temperature reading becomes invalid after resume because
   it was taken before the system slept
3. updating the thermal zone device while suspending/resuming
   is wrong because some devices may have already been suspended
   or may not have been resumed yet.

Thus, the proper way to do this is to cancel all thermal zone
device update requirements during suspend/resume, and after all
the devices have been resumed, reset and update every registered
thermal zone device.
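
One way to realize this, sketched below, is a PM notifier in the thermal
core that ignores updates while suspended and resets/updates every zone
after resume (in_suspend and thermal_zone_device_reset() are introduced by
the patch itself; treat the details as assumptions):

  static atomic_t in_suspend;

  static int thermal_pm_notify(struct notifier_block *nb,
                               unsigned long mode, void *unused)
  {
          struct thermal_zone_device *tz;

          switch (mode) {
          case PM_SUSPEND_PREPARE:
          case PM_HIBERNATION_PREPARE:
          case PM_RESTORE_PREPARE:
                  atomic_set(&in_suspend, 1);   /* drop updates from now on */
                  break;
          case PM_POST_SUSPEND:
          case PM_POST_HIBERNATION:
          case PM_POST_RESTORE:
                  atomic_set(&in_suspend, 0);
                  list_for_each_entry(tz, &thermal_tz_list, node) {
                          thermal_zone_device_reset(tz);
                          thermal_zone_device_update(tz);
                  }
                  break;
          default:
                  break;
          }
          return 0;
  }

  static struct notifier_block thermal_pm_nb = {
          .notifier_call = thermal_pm_notify,
  };
  /* registered with register_pm_notifier(&thermal_pm_nb) at init time */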

This also fixes a regression introduced by:
Commit 19593a1fb1f6 ("ACPI / fan: convert to platform driver")
Because, with the above commit applied, all the fan devices are attached
to the acpi_general_pm_domain, and they are turned on by the pm_domain
automatically after resume, without the thermal core being aware of it.

CC: <stable@vger.kernel.org> #3.18+
Reference: https://bugzilla.kernel.org/show_bug.cgi?id=78201
Reference: https://bugzilla.kernel.org/show_bug.cgi?id=91411
Tested-by: Manuel Krause <manuelkrause@netscape.net>
Tested-by: szegad <szegadlo@poczta.onet.pl>
Tested-by: prash <prash.n.rao@gmail.com>
Tested-by: amish <ammdispose-arch@yahoo.com>
Tested-by: Matthias <morpheusxyz123@yahoo.de>
Reviewed-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoThermal: initialize thermal zone device correctly
Zhang Rui [Fri, 30 Oct 2015 08:31:47 +0000 (16:31 +0800)]
Thermal: initialize thermal zone device correctly

[ Upstream commit bb431ba26c5cd0a17c941ca6c3a195a3a6d5d461 ]

After a thermal zone device is registered, since we have not read any
temperature yet, tz->temperature should not be 0 (which actually
means 0C), and the thermal trend is not available.
In this case, we need special handling for the first
thermal_zone_device_update().

Both the thermal core framework and the step_wise governor are
enhanced to handle this.  And since the step_wise governor
is the only one that uses trends, it is the only thermal
governor that needs to be updated.

CC: <stable@vger.kernel.org> #3.18+
Tested-by: Manuel Krause <manuelkrause@netscape.net>
Tested-by: szegad <szegadlo@poczta.onet.pl>
Tested-by: prash <prash.n.rao@gmail.com>
Tested-by: amish <ammdispose-arch@yahoo.com>
Tested-by: Matthias <morpheusxyz123@yahoo.de>
Reviewed-by: Javi Merino <javi.merino@arm.com>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agonfs: Fix race in __update_open_stateid()
Andrew Elble [Wed, 2 Dec 2015 14:20:57 +0000 (09:20 -0500)]
nfs: Fix race in __update_open_stateid()

[ Upstream commit 361cad3c89070aeb37560860ea8bfc092d545adc ]

We've seen this in a packet capture - I've intermixed what I
think was going on. The fix here is to grab the so_lock sooner.

1964379 -> #1 open (for write) reply seqid=1
1964393 -> #2 open (for read) reply seqid=2

  __nfs4_close(), state->n_wronly--
  nfs4_state_set_mode_locked(), changes state->state = [R]
  state->flags is [RW]
  state->state is [R], state->n_wronly == 0, state->n_rdonly == 1

1964398 -> #3 open (for write) call -> because close is already running
1964399 -> downgrade (to read) call seqid=2 (close of #1)
1964402 -> #3 open (for write) reply seqid=3

 __update_open_stateid()
   nfs_set_open_stateid_locked(), changes state->flags
   state->flags is [RW]
   state->state is [R], state->n_wronly == 0, state->n_rdonly == 1
   new sequence number is exposed now via nfs4_stateid_copy()

   next step would be update_open_stateflags(), pending so_lock

1964403 -> downgrade reply seqid=2, fails with OLD_STATEID (close of #1)

   nfs4_close_prepare() gets so_lock and recalcs flags -> send close

1964405 -> downgrade (to read) call seqid=3 (close of #1 retry)

   __update_open_stateid() gets so_lock
 * update_open_stateflags() updates state->n_wronly.
   nfs4_state_set_mode_locked() updates state->state

   state->flags is [RW]
   state->state is [RW], state->n_wronly == 1, state->n_rdonly == 1

 * should have suppressed the preceding nfs4_close_prepare() from
   sending open_downgrade

1964406 -> write call
1964408 -> downgrade (to read) reply seqid=4 (close of #1 retry)

   nfs_clear_open_stateid_locked()
   state->flags is [R]
   state->state is [RW], state->n_wronly == 1, state->n_rdonly == 1

1964409 -> write reply (fails, openmode)

Signed-off-by: Andrew Elble <aweits@rit.edu>
Cc: stable@vger.kernel.org
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years ago[media] rc: sunxi-cir: Initialize the spinlock properly
Chen-Yu Tsai [Tue, 22 Dec 2015 04:27:35 +0000 (02:27 -0200)]
[media] rc: sunxi-cir: Initialize the spinlock properly

[ Upstream commit 768acf46e1320d6c41ed1b7c4952bab41c1cde79 ]

The driver allocates the spinlock but fails to initialize it correctly.
The kernel reports a BUG indicating bad spinlock magic when spinlock
debugging is enabled.

Call spin_lock_init() on it to initialize it correctly.
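
For illustration, the pattern in the probe path looks roughly like this
(a sketch; the field name is assumed to match the driver's, this is not
the literal diff):

	struct sunxi_ir *ir;

	ir = devm_kzalloc(&pdev->dev, sizeof(struct sunxi_ir), GFP_KERNEL);
	if (!ir)
		return -ENOMEM;

	/* devm_kzalloc() zeroes the memory but does not set up the
	 * spinlock's debug magic; initialize it before first use. */
	spin_lock_init(&ir->ir_lock);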

Fixes: b4e3e59fb59c ("[media] rc: add sunxi-ir driver")

Signed-off-by: Chen-Yu Tsai <wens@csie.org>
Acked-by: Hans de Goede <hdegoede@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoudf: limit the maximum number of indirect extents in a row
Vegard Nossum [Fri, 11 Dec 2015 14:54:16 +0000 (15:54 +0100)]
udf: limit the maximum number of indirect extents in a row

[ Upstream commit b0918d9f476a8434b055e362b83fa4fd1d462c3f ]

udf_next_aext() just follows extent pointers while extents are marked as
indirect. This can loop forever for a corrupted filesystem. Limit the
number of indirect extents we are willing to follow in a row.
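
The shape of the fix is simply a bounded walk; a sketch (the cap and
the helpers extent_is_indirect()/follow_indirect_extent() are
illustrative, not the patch's actual names):

#define UDF_MAX_INDIR_EXTS	16	/* sane filesystems never chain this many */

	int indirections = 0;

	while (extent_is_indirect(ext)) {
		if (++indirections > UDF_MAX_INDIR_EXTS)
			return -EIO;	/* corrupted chain, give up */
		ext = follow_indirect_extent(ext);
	}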

[JK: Updated changelog, limit, style]

Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
Cc: stable@vger.kernel.org
Cc: Jan Kara <jack@suse.com>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agommc: sdhci: Fix sdhci_runtime_pm_bus_on/off()
Adrian Hunter [Thu, 26 Nov 2015 12:00:50 +0000 (14:00 +0200)]
mmc: sdhci: Fix sdhci_runtime_pm_bus_on/off()

[ Upstream commit 5c671c410c8704800f4f1673b6f572137e7e6ddd ]

sdhci has a legacy facility to prevent runtime suspend if the
bus power is on.  This is needed in cases where the power to
the card is dependent on the bus power.  It is controlled by
a pair of functions: sdhci_runtime_pm_bus_on() and
sdhci_runtime_pm_bus_off().  These functions use a boolean
variable 'bus_on' to ensure changes are always paired.
There is an additional check of 'runtime_suspended', which is the
problem.  Its use is ill-conceived: the only requirement for the
logic is that 'on' and 'off' are paired, and the check actually
breaks that pairing, for example when the bus power is turned on
during runtime resume.  So remove the check.
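
With the check gone, the pair reduces to roughly the following (a
sketch of the intended behaviour, not the literal diff):

static void sdhci_runtime_pm_bus_on(struct sdhci_host *host)
{
	if (host->bus_on)
		return;
	host->bus_on = true;
	pm_runtime_get_noresume(host->mmc->parent);
}

static void sdhci_runtime_pm_bus_off(struct sdhci_host *host)
{
	if (!host->bus_on)
		return;
	host->bus_on = false;
	pm_runtime_put_noidle(host->mmc->parent);
}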

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v3.11+
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agommc: sdhci: Fix DMA descriptor with zero data length
Adrian Hunter [Thu, 26 Nov 2015 12:00:48 +0000 (14:00 +0200)]
mmc: sdhci: Fix DMA descriptor with zero data length

[ Upstream commit 347ea32dc118326c4f2636928239a29d192cc9b8 ]

SDHCI has built-in DMA called ADMA2.  ADMA2 uses a descriptor
table to define DMA scatter-gather.  Each descriptor can specify
a data length up to 65536 bytes, however the length field is
only 16 bits wide, so zero means 65536.  Consequently, putting zero
when the size is zero must not be allowed.  This patch fixes
one case where zero data length could be set inadvertently.

The problem happens because unaligned data gets split and the
code did not consider that the remaining aligned portion might
be zero length.  That case really only happens for SDIO because
SD and eMMC cards transfer blocks that are invariably sector-
aligned.  For SDIO, access to function registers is done by
data transfer (CMD53) when the register is bigger than 1 byte.
Generally registers are 4 bytes but 2-byte registers are possible.
So DMA of 4 bytes or less can happen.  When 32-bit DMA is used,
the data alignment must be 4, so 4-byte transfers won't cause a
problem, but a 2-byte transfer could.  However with the introduction
of 64-bit DMA, the data alignment for 64-bit DMA was made 8 bytes,
so all 4-byte transfers not on 8-byte boundaries get "split" into
a 4-byte chunk and a 0-byte chunk, thereby hitting the bug.

In fact, a closer look at the SDHCI specs indicates that only the
descriptor table requires 8-byte alignment for 64-bit DMA.  That
will be dealt with in a separate patch, but the potential for a
2-byte access remains, so this fix is needed anyway.
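
The core of the fix is to emit a descriptor for the aligned remainder
only when it is non-empty; a simplified sketch of the ADMA table build
(names loosely follow drivers/mmc/host/sdhci.c, surrounding details
elided):

		if (offset) {
			/* unaligned head goes through the bounce buffer */
			sdhci_adma_write_desc(host, desc, align_addr, offset,
					      ADMA2_TRAN_VALID);
			desc += host->desc_sz;
			addr += offset;
			len -= offset;
		}

		/* A 16-bit ADMA2 length of 0 means 65536 bytes, so never
		 * describe an empty remainder. */
		if (len) {
			sdhci_adma_write_desc(host, desc, addr, len,
					      ADMA2_TRAN_VALID);
			desc += host->desc_sz;
		}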

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v3.19+
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agommc: sdio: Fix invalid vdd in voltage switch power cycle
Adrian Hunter [Thu, 26 Nov 2015 12:00:47 +0000 (14:00 +0200)]
mmc: sdio: Fix invalid vdd in voltage switch power cycle

[ Upstream commit d9bfbb95ed598a09cf336adb0f190ee0ff802f0d ]

The 'ocr' parameter passed to mmc_set_signal_voltage()
defines the power-on voltage used when power cycling
after a failure to set the voltage.  However, in the
case of mmc_sdio_init_card(), the value passed has the
R4_18V_PRESENT flag set which is not valid for power-on
and results in an invalid vdd.  Fix by passing the card's
ocr value which does not have the flag.
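
Conceptually the change is just which ocr value is handed to the power
cycle; a sketch (mmc_set_signal_voltage() takes the power-on ocr as its
third argument in this kernel era, and the exact variable used in the
patch may differ):

	/* R4_18V_PRESENT is a capability flag, not a supply voltage, so
	 * power cycle with the card's plain ocr instead. */
	err = mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_180, card->ocr);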

Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Cc: stable@vger.kernel.org # v3.13+
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agodrm/radeon: clean up fujitsu quirks
Alex Deucher [Thu, 17 Dec 2015 17:52:17 +0000 (12:52 -0500)]
drm/radeon: clean up fujitsu quirks

[ Upstream commit 0eb1c3d4084eeb6fb3a703f88d6ce1521f8fcdd1 ]

Combine the two quirks.

bug:
https://bugzilla.kernel.org/show_bug.cgi?id=109481

Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agodrm/radeon: Fix off-by-one errors in radeon_vm_bo_set_addr
Felix Kuehling [Mon, 23 Nov 2015 22:39:11 +0000 (17:39 -0500)]
drm/radeon: Fix off-by-one errors in radeon_vm_bo_set_addr

[ Upstream commit 42ef344c0994cc453477afdc7a8eadc578ed0257 ]

eoffset is sometimes treated as the last address inside the address
range, and sometimes as the first address outside the range. This
was resulting in errors when a test filled up the entire address
space. Make it consistent: eoffset is now always the last address
within the range. Also fix the related errors in the VA limit check
and in radeon_vm_fence_pts.
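
Illustrating the convention (variable names here are generic, not
radeon's):

	/* eoffset is the last byte inside the mapping, never one past
	 * it, so a mapping ending exactly at the top of the address
	 * space no longer overflows the range check. */
	uint64_t eoffset  = soffset + size - 1;		/* inclusive upper bound */
	uint64_t last_pfn = eoffset >> PAGE_SHIFT;	/* inclusive page index */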

Signed-off-by: Felix.Kuehling <Felix.Kuehling@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agocoresight: checking for NULL string in coresight_name_match()
Mathieu Poirier [Thu, 17 Dec 2015 15:47:02 +0000 (08:47 -0700)]
coresight: checking for NULL string in coresight_name_match()

[ Upstream commit fadf3a44e974b030e7145218ad1ab25e3ef91738 ]

Connection child names associated with ports can sometimes be NULL,
which is the case when booting a system on QEMU or when the Coresight
power domain isn't switched on.

This patch adds a check to make sure a NULL string isn't fed to
strcmp(), which avoids crashing the system.
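
A sketch of the guard inside the connection walk (variable names
simplified):

	/* Some connections carry no child name, e.g. under QEMU or with
	 * the Coresight power domain off; skip them rather than handing
	 * NULL to strcmp(). */
	if (!conn->child_name)
		continue;

	if (!strcmp(dev_name, conn->child_name))
		return 1;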

Cc: <stable@vger.kernel.org> # v3.18+
Reported-by: Tyler Baker <tyler.baker@linaro.org>
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoarm64: kernel: enforce pmuserenr_el0 initialization and restore
Lorenzo Pieralisi [Fri, 18 Dec 2015 10:35:54 +0000 (10:35 +0000)]
arm64: kernel: enforce pmuserenr_el0 initialization and restore

[ Upstream commit d2d39a3b91628ef5abdf58e83905b173e63d5ecf ]

commit 60792ad349f3c6dc5735aafefe5dc9121c79e320 upstream.

The pmuserenr_el0 register value is architecturally UNKNOWN on reset.
Current kernel code resets that register value iff the core pmu device is
correctly probed in the kernel. On platforms with missing DT pmu nodes (or
with perf events disabled in the kernel), the pmu is not probed, so the
pmuserenr_el0 register is not reset in the kernel, which means it retains
its architecturally UNKNOWN reset value (the system may run with e.g.
pmuserenr_el0 == 0x1, meaning PMU counter access is available at EL0,
which must be disallowed).

This patch adds code that resets pmuserenr_el0 on cold boot and restores
it on core resume from shutdown, so that the pmuserenr_el0 setup is
always enforced in the kernel.
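
The reset itself is a single system register write on the cold boot and
resume paths; expressed in C it would look roughly like this (the
actual change lives in the assembly cpu setup/resume code):

	/* Disable EL0 access to the PMU counters until the PMU driver
	 * explicitly re-enables it. */
	asm volatile("msr pmuserenr_el0, xzr");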

Cc: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agoarm64: mdscr_el1: avoid exposing DCC to userspace
Will Deacon [Thu, 20 Aug 2015 10:47:13 +0000 (11:47 +0100)]
arm64: mdscr_el1: avoid exposing DCC to userspace

[ Upstream commit d8d23fa0f27f3b2942a7bbc7378c7735324ed519 ]

We don't want to expose the DCC to userspace, particularly as there is
a kernel console driver for it.

This patch resets mdscr_el1 to disable userspace access to the DCC
registers on the cold boot path.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agofutex: Drop refcount if requeue_pi() acquired the rtmutex
Thomas Gleixner [Sat, 19 Dec 2015 20:07:38 +0000 (20:07 +0000)]
futex: Drop refcount if requeue_pi() acquired the rtmutex

[ Upstream commit fb75a4282d0d9a3c7c44d940582c2d226cf3acfb ]

If the proxy lock in the requeue loop acquires the rtmutex for a
waiter, then it also acquires a refcount on the pi_state related to
the futex, but the waiter side does not drop that reference count.

Add the missing free_pi_state() call.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Darren Hart <darren@dvhart.com>
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Bhuvanesh_Surachari@mentor.com
Cc: Andy Lowe <Andy_Lowe@mentor.com>
Link: http://lkml.kernel.org/r/20151219200607.178132067@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
8 years agodrm/radeon: Fix "slow" audio over DP on DCE8+
Slava Grigorev [Thu, 17 Dec 2015 16:09:58 +0000 (11:09 -0500)]
drm/radeon: Fix "slow" audio over DP on DCE8+

[ Upstream commit ac4a9350abddc51ccb897abf0d9f3fd592b97e0b ]

DP audio is derived from the dfs clock.

Signed-off-by: Slava Grigorev <slava.grigorev@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>