Bart Van Assche [Thu, 27 Apr 2017 17:11:14 +0000 (10:11 -0700)]
dm mpath: split and rename activate_path() to prepare for its expanded use
commit
89bfce763e43fa4897e0d3af6b29ed909df64cfd upstream.
activate_path() is renamed to activate_path_work() which now calls
activate_or_offline_path(). activate_or_offline_path() will be used
by the next commit.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Sun, 30 Apr 2017 21:34:53 +0000 (17:34 -0400)]
dm bufio: check new buffer allocation watermark every 30 seconds
commit
390020ad2af9ca04844c4f3b1f299ad8746d84c8 upstream.
dm-bufio checks a watermark when it allocates a new buffer in
__bufio_new(). However, it doesn't check the watermark when the user
changes /sys/module/dm_bufio/parameters/max_cache_size_bytes.
This may result in a problem - if the watermark is high enough so that
all possible buffers are allocated and if the user lowers the value of
"max_cache_size_bytes", the watermark will never be checked against the
new value because no new buffer would be allocated.
To fix this, change __evict_old_buffers() so that it checks the
watermark. __evict_old_buffers() is called every 30 seconds, so if the
user reduces "max_cache_size_bytes", dm-bufio will react to this change
within 30 seconds and decrease memory consumption.
Depends-on:
1b0fb5a5b2 ("dm bufio: avoid a possible ABBA deadlock")
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Sun, 30 Apr 2017 21:33:26 +0000 (17:33 -0400)]
dm bufio: avoid a possible ABBA deadlock
commit
1b0fb5a5b2dc0dddcfa575060441a7176ba7ac37 upstream.
__get_memory_limit() tests if dm_bufio_cache_size changed and calls
__cache_size_refresh() if it did. It takes dm_bufio_clients_lock while
it already holds the client lock. However, lock ordering is violated
because in cleanup_old_buffers() dm_bufio_clients_lock is taken before
the client lock.
This results in a possible deadlock and lockdep engine warning.
Fix this deadlock by changing mutex_lock() to mutex_trylock(). If the
lock can't be taken, it will be re-checked next time when a new buffer
is allocated.
Also add "unlikely" to the if condition, so that the optimizer assumes
that the condition is false.
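Roughly, the fixed check looks like this (a sketch of the idea, not the exact upstream diff; dm_bufio_cache_size_latch is assumed here to be the cached copy of the global limit):
	if (unlikely(dm_bufio_cache_size != dm_bufio_cache_size_latch)) {
		if (mutex_trylock(&dm_bufio_clients_lock)) {
			/* re-check the limit; if the lock is contended,
			 * the next buffer allocation will retry */
			__cache_size_refresh();
			mutex_unlock(&dm_bufio_clients_lock);
		}
	}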
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Tue, 28 Mar 2017 16:53:39 +0000 (12:53 -0400)]
dm raid: select the Kconfig option CONFIG_MD_RAID0
commit
7b81ef8b14f80033e4a4168d199a0f5fd79b9426 upstream.
Since the commit
0cf4503174c1 ("dm raid: add support for the MD RAID0
personality"), the dm-raid subsystem can activate a RAID-0 array.
Therefore, add MD_RAID0 to the dependencies of DM_RAID, so that MD_RAID0
will be selected when DM_RAID is selected.
Fixes:
0cf4503174c1 ("dm raid: add support for the MD RAID0 personality")
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vinothkumar Raja [Fri, 7 Apr 2017 02:09:38 +0000 (22:09 -0400)]
dm btree: fix for dm_btree_find_lowest_key()
commit
7d1fedb6e96a960aa91e4ff70714c3fb09195a5a upstream.
dm_btree_find_lowest_key() is giving incorrect results. find_key()
traverses the btree correctly for finding the highest key, but there is
an error in the way it traverses the btree for retrieving the lowest
key. dm_btree_find_lowest_key() fetches the first key of the rightmost
block of the btree instead of fetching the first key from the leftmost
block.
Fix this by conditionally passing the correct parameter to value64()
based on the @find_highest flag.
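A sketch of the corrected traversal in find_key() (names shortened and hedged; the exact index handling in the upstream patch may differ slightly):
	i = le32_to_cpu(ro_node(s)->header.nr_entries);
	if (!i)
		return -ENODATA;
	if (find_highest)
		i--;		/* descend via the last entry for the highest key */
	else
		i = 0;		/* descend via the first entry for the lowest key */
	*result_key = le64_to_cpu(ro_node(s)->keys[i]);
	if (next_block || flags & INTERNAL_NODE)
		block = value64(ro_node(s), i);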
Signed-off-by: Erez Zadok <ezk@fsl.cs.sunysb.edu>
Signed-off-by: Vinothkumar Raja <vinraja@cs.stonybrook.edu>
Signed-off-by: Nidhi Panpalia <npanpalia@cs.stonybrook.edu>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paolo Abeni [Fri, 28 Apr 2017 09:20:01 +0000 (11:20 +0200)]
infiniband: call ipv6 route lookup via the stub interface
commit
eea40b8f624f25cbc02d55f2d93203f60cee9341 upstream.
The infiniband address handle can be triggered to resolve an ipv6
address in response to MAD packets, regardless of the ipv6
module being disabled via the kernel command line argument.
That will cause a call into the ipv6 routing code, which is not
initialized, and a consequent oops.
This commit addresses the above issue by replacing the direct lookup
call with an indirect one via the ipv6 stub, which is properly
initialized according to the ipv6 status (e.g. if ipv6 is
disabled, the routing lookup fails gracefully).
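The change is essentially of this shape (a hedged sketch; the flowi6 setup around it is omitted, and addr->net stands in for whatever struct net the address-resolution code uses):
	/* before: unconditional call into the ipv6 routing code */
	dst = ip6_route_output(addr->net, NULL, &fl6);

	/* after: go through the stub, which fails cleanly when ipv6 is disabled */
	ret = ipv6_stub->ipv6_dst_lookup(addr->net, NULL, &dst, &fl6);
	if (ret < 0)
		return ret;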
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sagi Grimberg [Sun, 23 Apr 2017 11:31:42 +0000 (14:31 +0300)]
mlx5: Fix mlx5_ib_map_mr_sg mr length
commit
0a49f2c31c3efbeb0de3e4b5598764887f629be2 upstream.
In case we got an initial sg_offset, we need to
account for it in the mr length.
Fixes:
ff2ba9936591 ("IB/core: Add passing an offset into the SG to ib_map_mr_sg")
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Tested-by: Israel Rukshin <israelr@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexander Sverdlin [Sat, 29 Apr 2017 10:19:33 +0000 (12:19 +0200)]
ASoC: cs4271: configure reset GPIO as output
commit
49b2e27ab9f66b0a22c21980ad8118a4038324ae upstream.
During reset "refactoring" the output configuration was lost.
This commit repairs sound on EDB93XX boards.
Fixes: 9a397f4 ("ASoC: cs4271: add regulator consumer support")
Signed-off-by: Alexander Sverdlin <alexander.sverdlin@gmail.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jerry Snitselaar [Sat, 11 Mar 2017 00:46:04 +0000 (17:46 -0700)]
tpm_crb: check for bad response size
commit
8569defde8057258835c51ce01a33de82e14b148 upstream.
Make sure size of response buffer is at least 6 bytes, or
we will underflow and pass large size_t to memcpy_fromio().
This was encountered while testing earlier version of
locality patchset.
Fixes:
30fc8d138e912 ("tpm: TPM 2.0 CRB Interface")
Signed-off-by: Jerry Snitselaar <jsnitsel@redhat.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nayna Jain [Fri, 10 Mar 2017 18:45:54 +0000 (13:45 -0500)]
tpm: add sleep only for retry in i2c_nuvoton_write_status()
commit
0afb7118ae021e80ecf70f5a3336e0935505518a upstream.
Currently, there is an unnecessary 1 msec delay added in
i2c_nuvoton_write_status() for the successful case. This
function is called multiple times during send() and recv(),
which implies adding multiple extra delays for every TPM
operation.
This patch calls usleep_range() only if retry is to be done.
Signed-off-by: Nayna Jain <nayna@linux.vnet.ibm.com>
Reviewed-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nayna Jain [Fri, 10 Mar 2017 18:45:53 +0000 (13:45 -0500)]
tpm: msleep() delays - replace with usleep_range() in i2c nuvoton driver
commit
a233a0289cf9a96ef9b42c730a7621ccbf9a6f98 upstream.
Commit
500462a9de65 "timers: Switch to a non-cascading wheel" replaced
the 'classic' timer wheel, which aimed for near 'exact' expiry of the
timers. Their analysis was that the vast majority of timeout timers
are used as safeguards, not as real timers, and are cancelled or
rearmed before expiration. The only exception noted to this were
networking timers with a small expiry time.
Not included in the analysis was the TPM polling timer, which resulted
in a longer normal delay and, every so often, a very long delay. The
non-cascading wheel delay is based on CONFIG_HZ. For a description of
the different rings and their delays, refer to the comments in
kernel/time/timer.c.
Below are the delays given for rings 0 - 2, which explains the longer
"normal" delays and the very, long delays as seen on systems with
CONFIG_HZ 250.
* HZ 1000 steps
* Level Offset Granularity Range
* 0 0 1 ms 0 ms - 63 ms
* 1 64 8 ms 64 ms - 511 ms
* 2 128 64 ms 512 ms - 4095 ms (512ms - ~4s)
* HZ 250
* Level Offset Granularity Range
* 0 0 4 ms 0 ms - 255 ms
* 1 64 32 ms 256 ms - 2047 ms (256ms - ~2s)
* 2 128 256 ms 2048 ms - 16383 ms (~2s - ~16s)
Below is a comparison of extending the TPM with 1000 measurements,
using msleep() vs. usleep_range() when configured for 1000 hz vs. 250
hz, before and after commit 500462a9de65.
linux-4.7 | msleep() usleep_range()
1000 hz: 0m44.628s | 1m34.497s 29.243s
250 hz: 1m28.510s | 4m49.269s 32.386s
linux-4.7 | min-max (msleep) min-max (usleep_range)
1000 hz: 0:017 - 2:760s | 0:015 - 3:967s 0:014 - 0:418s
250 hz: 0:028 - 1:954s | 0:040 - 4:096s 0:016 - 0:816s
This patch replaces the msleep() with usleep_range() calls in the
i2c nuvoton driver with a consistent max range value.
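In practice the conversion has this shape (sketch; TPM_I2C_RETRY_DELAY_SHORT and TPM_I2C_DELAY_RANGE are illustrative names, not necessarily the driver's constants):
	/* before: rounds up to at least one timer-wheel tick (4 ms at HZ=250,
	 * and much more once the request lands in a coarser ring) */
	msleep(TPM_I2C_RETRY_DELAY_SHORT);

	/* after: high-resolution sleep with an explicit upper bound */
	usleep_range(TPM_I2C_RETRY_DELAY_SHORT * 1000,
		     TPM_I2C_RETRY_DELAY_SHORT * 1000 + TPM_I2C_DELAY_RANGE);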
Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Signed-off-by: Nayna Jain <nayna@linux.vnet.ibm.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peter Huewe [Thu, 2 Mar 2017 13:03:15 +0000 (13:03 +0000)]
tpm_tis_spi: Add small delay after last transfer
commit
5cc0101d1f88500f8901d01b035af743215d4c3a upstream.
Testing the implementation with a Raspberry Pi 2 showed that under some
circumstances its SPI master erroneously releases the CS line before the
transfer is complete, i.e. before the end of the last clock. In this case
the TPM ignores the transfer and misses for example the GO command. The
driver is unable to detect this communication problem and will wait for a
command response that is never going to arrive, timing out eventually.
As a workaround, the small delay ensures that the CS line is held long
enough, even with a faulty SPI master. Other SPI masters are not affected,
except for a negligible performance penalty.
Fixes:
0edbfea537d1 ("tpm/tpm_tis_spi: Add support for spi phy")
Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com>
Signed-off-by: Peter Huewe <peter.huewe@infineon.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Benoit Houyere <benoit.houyere@st.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peter Huewe [Thu, 2 Mar 2017 13:03:14 +0000 (13:03 +0000)]
tpm_tis_spi: Remove limitation of transfers to MAX_SPI_FRAMESIZE bytes
commit
591e48c26ced7c455751eef27fb5963e902c2137 upstream.
Limiting transfers to MAX_SPI_FRAMESIZE was not expected by the upper
layers, as tpm_tis has no such limitation. Add a loop to hide that
limitation.
v2: Moved scope of spi_message to the top as requested by Jarkko
Fixes:
0edbfea537d1 ("tpm/tpm_tis_spi: Add support for spi phy")
Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com>
Signed-off-by: Peter Huewe <peter.huewe@infineon.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Benoit Houyere <benoit.houyere@st.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peter Huewe [Thu, 2 Mar 2017 13:03:13 +0000 (13:03 +0000)]
tpm_tis_spi: Check correct byte for wait state indicator
commit
e110cc69dc2ad679d6d478df636b99b14e6fbbc9 upstream.
Wait states are signaled in the last byte received from the TPM in
response to the header, not the first byte. Check rx_buf[3] instead of
rx_buf[0].
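The fix boils down to testing the last byte of the four-byte header response (sketch; the polling body is elided):
	/* before: tested the first byte of the response */
	if ((phy->rx_buf[0] & 0x01) == 0) {
		/* ... poll until the TPM clears the wait state ... */
	}

	/* after: the wait state is signaled in the last header byte */
	if ((phy->rx_buf[3] & 0x01) == 0) {
		/* ... poll until the TPM clears the wait state ... */
	}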
Fixes:
0edbfea537d1 ("tpm/tpm_tis_spi: Add support for spi phy")
Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com>
Signed-off-by: Peter Huewe <peter.huewe@infineon.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Benoit Houyere <benoit.houyere@st.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peter Huewe [Thu, 2 Mar 2017 13:03:12 +0000 (13:03 +0000)]
tpm_tis_spi: Abort transfer when too many wait states are signaled
commit
975094ddc369a32f27210248bdd9bbd153061b00 upstream.
Abort the transfer with ETIMEDOUT when the TPM signals more than
TPM_RETRY wait states. Continuing with the transfer in this state
will only lead to arbitrary failures in other parts of the code.
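A sketch of the added bail-out (loosely following the driver's existing polling loop; error handling simplified):
	for (i = 0; i < TPM_RETRY; i++) {
		spi_xfer.len = 1;
		spi_message_init(&m);
		spi_message_add_tail(&spi_xfer, &m);
		ret = spi_sync_locked(phy->spi_device, &m);
		if (ret < 0)
			goto exit;
		if (phy->rx_buf[0] & 0x01)
			break;		/* wait state cleared */
	}

	if (i == TPM_RETRY) {
		ret = -ETIMEDOUT;	/* give up instead of pressing on */
		goto exit;
	}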
Fixes:
0edbfea537d1 ("tpm/tpm_tis_spi: Add support for spi phy")
Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com>
Signed-off-by: Peter Huewe <peter.huewe@infineon.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Benoit Houyere <benoit.houyere@st.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peter Huewe [Thu, 2 Mar 2017 13:03:11 +0000 (13:03 +0000)]
tpm_tis_spi: Use single function to transfer data
commit
f848f2143ae42dc0918400039257a893835254d1 upstream.
The algorithm for sending data to the TPM is mostly identical to the
algorithm for receiving data from the TPM, so a single function is
sufficient to handle both cases.
This is a prerequisite for all the other fixes, so we don't have to fix
everything twice (send/receive).
v2: u16 instead of u8 for the length.
Fixes:
0edbfea537d1 ("tpm/tpm_tis_spi: Add support for spi phy")
Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com>
Signed-off-by: Peter Huewe <peter.huewe@infineon.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Tested-by: Benoit Houyere <benoit.houyere@st.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Amir Goldstein [Tue, 25 Apr 2017 11:29:35 +0000 (14:29 +0300)]
fanotify: don't expose EOPENSTALE to userspace
commit
4ff33aafd32e084f5ee7faa54ba06e95f8b1b8af upstream.
When delivering an event to userspace for a file on an NFS share,
if the file is deleted on server side before user reads the event,
user will not get the event.
If the event queue contained several events, the stale event is
quietly dropped and read() returns to user with events read so far
in the buffer.
If the event queue contains a single stale event or if the stale
event is a permission event, read() returns to user with the kernel
internal error code 518 (EOPENSTALE), which is not a POSIX error code.
Check the internal return value -EOPENSTALE in fanotify_read(), just
the same as it is checked in path_openat() and drop the event in the
cases that it is not already dropped.
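The check in fanotify_read() ends up along these lines (hedged sketch of the non-permission-event path; copy_event_to_user() is the existing helper that creates the fd for the event):
	ret = copy_event_to_user(group, kevent, buf);
	if (unlikely(ret == -EOPENSTALE)) {
		/* the file went stale on the server before the event was
		 * read; drop it quietly, just as path_openat() does with
		 * this internal error code */
		ret = 0;
	}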
This is a reproducer from Marko Rauhamaa:
Just take the example program listed under "man fanotify" ("fantest")
and follow these steps:
==============================================================
NFS Server NFS Client(1) NFS Client(2)
==============================================================
# echo foo >/nfsshare/bar.txt
# cat /nfsshare/bar.txt
foo
# ./fantest /nfsshare
Press enter key to terminate.
Listening for events.
# rm -f /nfsshare/bar.txt
# cat /nfsshare/bar.txt
read: Unknown error 518
cat: /nfsshare/bar.txt: Operation not permitted
==============================================================
where NFS Client (1) and (2) are two terminal sessions on a single NFS
Client machine.
Reported-by: Marko Rauhamaa <marko.rauhamaa@f-secure.com>
Tested-by: Marko Rauhamaa <marko.rauhamaa@f-secure.com>
Cc: <linux-api@vger.kernel.org>
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marc Dietrich [Fri, 9 Dec 2016 09:20:38 +0000 (10:20 +0100)]
ARM: tegra: paz00: Mark panel regulator as enabled on boot
commit
0c18927f51f4d390abdcf385bff5f995407ee732 upstream.
Current U-Boot enables the display already. Marking the regulator as
enabled on boot fixes sporadic panel initialization failures.
Signed-off-by: Marc Dietrich <marvin24@gmx.de>
Tested-by: Misha Komarovskiy <zombah@gmail.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeeja KP [Wed, 10 May 2017 06:21:58 +0000 (11:51 +0530)]
ALSA: hda: Fix cpu lockup when stopping the cmd dmas
commit
960013762df0a214b57f2fce655422fb52bdfd2c upstream.
Using jiffies in hdac_wait_for_cmd_dmas() to determine when to time out
when interrupts are off (snd_hdac_bus_stop_cmd_io()/spin_lock_irq())
causes a hard lockup, so unlock while waiting using jiffies.
---<-snip->---
<0>[ 1211.603046] NMI watchdog: Watchdog detected hard LOCKUP on cpu 3
<4>[ 1211.603047] Modules linked in: snd_hda_intel i915 vgem
<4>[ 1211.603053] irq event stamp: 13366
<4>[ 1211.603053] hardirqs last enabled at (13365):
...
<4>[ 1211.603059] Call Trace:
<4>[ 1211.603059] ? delay_tsc+0x3d/0xc0
<4>[ 1211.603059] __delay+0xa/0x10
<4>[ 1211.603060] __const_udelay+0x31/0x40
<4>[ 1211.603060] snd_hdac_bus_stop_cmd_io+0x96/0xe0 [snd_hda_core]
<4>[ 1211.603060] ? azx_dev_disconnect+0x20/0x20 [snd_hda_intel]
<4>[ 1211.603061] snd_hdac_bus_stop_chip+0xb1/0x100 [snd_hda_core]
<4>[ 1211.603061] azx_stop_chip+0x9/0x10 [snd_hda_codec]
<4>[ 1211.603061] azx_suspend+0x72/0x220 [snd_hda_intel]
<4>[ 1211.603061] pci_pm_suspend+0x71/0x140
<4>[ 1211.603062] dpm_run_callback+0x6f/0x330
<4>[ 1211.603062] ? pci_pm_freeze+0xe0/0xe0
<4>[ 1211.603062] __device_suspend+0xf9/0x370
<4>[ 1211.603062] ? dpm_watchdog_set+0x60/0x60
<4>[ 1211.603063] async_suspend+0x1a/0x90
<4>[ 1211.603063] async_run_entry_fn+0x34/0x160
<4>[ 1211.603063] process_one_work+0x1f4/0x6d0
<4>[ 1211.603063] ? process_one_work+0x16e/0x6d0
<4>[ 1211.603064] worker_thread+0x49/0x4a0
<4>[ 1211.603064] kthread+0x107/0x140
<4>[ 1211.603064] ? process_one_work+0x6d0/0x6d0
<4>[ 1211.603065] ? kthread_create_on_node+0x40/0x40
<4>[ 1211.603065] ret_from_fork+0x2e/0x40
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100419
Fixes:
38b19ed7f81ec ("ALSA: hda: fix to wait for RIRB & CORB DMA to set")
Reported-by: Marta Lofstedt <marta.lofstedt@intel.com>
Suggested-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Jeeja KP <jeeja.kp@intel.com>
Acked-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexander Steffen [Thu, 16 Feb 2017 15:33:36 +0000 (15:33 +0000)]
tpm_tis_core: Choose appropriate timeout for reading burstcount
commit
302a6ad7fc77146191126a1f3e2c5d724fd72416 upstream.
TIS v1.3 for TPM 1.2 and PTP for TPM 2.0 disagree about which timeout
value applies to reading a valid burstcount. It is TIMEOUT_D according to
TIS, but TIMEOUT_A according to PTP, so choose the appropriate value
depending on whether we deal with a TPM 1.2 or a TPM 2.0.
This is important since according to the PTP TIMEOUT_D is much smaller
than TIMEOUT_A. So the previous implementation could run into timeouts
with a TPM 2.0, even though the TPM was behaving perfectly fine.
During tpm2_probe TIMEOUT_D will be used even with a TPM 2.0, because
TPM_CHIP_FLAG_TPM2 is not yet set. This is fine, since the timeout values
will only be changed afterwards by tpm_get_timeouts. Until then
TIS_TIMEOUT_D_MAX applies, which is large enough.
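In get_burstcount() the change amounts to this (sketch):
	/* TIS (TPM 1.2) specifies TIMEOUT_D here, PTP (TPM 2.0) TIMEOUT_A */
	if (chip->flags & TPM_CHIP_FLAG_TPM2)
		stop = jiffies + chip->timeout_a;
	else
		stop = jiffies + chip->timeout_d;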
Fixes:
aec04cbdf723 ("tpm: TPM 2.0 FIFO Interface")
Signed-off-by: Alexander Steffen <Alexander.Steffen@infineon.com>
Signed-off-by: Peter Huewe <peter.huewe@infineon.com>
Reviewed-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vamsi Krishna Samavedam [Tue, 16 May 2017 12:38:08 +0000 (14:38 +0200)]
USB: core: replace %p with %pK
commit
2f964780c03b73de269b08d12aff96a9618d13f3 upstream.
Format specifier %p can leak kernel addresses while not honouring the
kptr_restrict system setting. When kptr_restrict is set to (1), kernel
pointers printed using the %pK format specifier will be replaced with
zeros. Debugging note: %pK prints only zeros as the address. If you need
actual address information, write 0 to kptr_restrict.
echo 0 > /proc/sys/kernel/kptr_restrict
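The change itself is mechanical; for one hypothetical call site it looks like this (sketch):
	/* before: raw %p can leak a kernel address into the log */
	dev_dbg(&udev->dev, "urb %p submitted\n", urb);

	/* after: %pK respects kptr_restrict and prints zeros when it is set */
	dev_dbg(&udev->dev, "urb %pK submitted\n", urb);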
[Found by poking around in a random vendor kernel tree, it would be nice
if someone would actually send these types of patches upstream - gkh]
Signed-off-by: Vamsi Krishna Samavedam <vskrishn@codeaurora.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Willy Tarreau [Tue, 16 May 2017 17:18:55 +0000 (19:18 +0200)]
char: lp: fix possible integer overflow in lp_setup()
commit
3e21f4af170bebf47c187c1ff8bf155583c9f3b1 upstream.
The lp_setup() code doesn't apply any bounds checking when passing
"lp=none", and only in this case, resulting in an overflow of the
parport_nr[] array. All versions in Git history are affected.
Reported-By: Roee Hay <roee.hay@hcl.com>
Cc: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Mon, 13 Mar 2017 12:49:45 +0000 (13:49 +0100)]
watchdog: pcwd_usb: fix NULL-deref at probe
commit
46c319b848268dab3f0e7c4a5b6e9146d3bca8a4 upstream.
Make sure to check the number of endpoints to avoid dereferencing a
NULL-pointer should a malicious device lack endpoints.
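The guard added in the probe routine is the usual one-liner (sketch):
	iface_desc = interface->cur_altsetting;

	/* a malicious or broken device may expose no endpoints at all */
	if (iface_desc->desc.bNumEndpoints < 1)
		return -ENODEV;

	endpoint = &iface_desc->endpoint[0].desc;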
Fixes:
1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Johan Hovold <johan@kernel.org>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Wim Van Sebroeck <wim@iguana.be>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alan Stern [Tue, 16 May 2017 15:47:29 +0000 (11:47 -0400)]
USB: ene_usb6250: fix DMA to the stack
commit
628c2893d44876ddd11602400c70606ade62e129 upstream.
The ene_usb6250 sub-driver in usb-storage does USB I/O to buffers on
the stack, which doesn't work with vmapped stacks. This patch fixes
the problem by allocating a separate 512-byte buffer at probe time and
using it for all of the offending I/O operations.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-and-tested-by: Andreas Hartmann <andihartmann@01019freenet.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Maksim Salau [Sat, 13 May 2017 20:49:26 +0000 (23:49 +0300)]
usb: misc: legousbtower: Fix memory leak
commit
0bd193d62b4270a2a7a09da43ad1034c7ca5b3d3 upstream.
get_version_reply is not freed if function returns with success.
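The fix is simply to release the reply buffer on the success path as well, e.g. by funnelling both paths through one exit label (sketch):
	retval = 0;
exit:
	kfree(get_version_reply);
	return retval;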
Fixes:
942a48730faf ("usb: misc: legousbtower: Fix buffers on stack")
Reported-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
Signed-off-by: Maksim Salau <maksim.salau@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Maksim Salau [Tue, 25 Apr 2017 19:49:21 +0000 (22:49 +0300)]
usb: misc: legousbtower: Fix buffers on stack
commit
942a48730faf149ccbf3e12ac718aee120bb3529 upstream.
Allocate buffers on HEAP instead of STACK for local structures
that are to be received using usb_control_msg().
Signed-off-by: Maksim Salau <maksim.salau@gmail.com>
Tested-by: Alfredo Rafael Vicente Boix <alviboi@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Sat, 20 May 2017 12:28:55 +0000 (14:28 +0200)]
Linux 4.9.29
Kees Cook [Mon, 6 Mar 2017 20:42:12 +0000 (12:42 -0800)]
pstore: Shut down worker when unregistering
commit
6330d5534786d5315d56d558aa6d20740f97d80a upstream.
When built as a module and running with update_ms >= 0, pstore will Oops
during module unload since the work timer is still running. This makes sure
the worker is stopped before unloading.
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ankit Kumar [Thu, 27 Apr 2017 11:33:13 +0000 (17:03 +0530)]
pstore: Fix flags to enable dumps on powerpc
commit
041939c1ec54208b42f5cd819209173d52a29d34 upstream.
After commit c950fd6f201a the kernel registers pstore write based on the flags set.
Pstore write for powerpc is broken as the flag (PSTORE_FLAGS_DMESG) is not set for
the powerpc architecture. On panic, the kernel doesn't write the message to
/fs/pstore/dmesg* (the entry doesn't get created at all).
This patch enables pstore write for powerpc architecture by setting
PSTORE_FLAGS_DMESG flag.
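The one-liner amounts to setting the flag when the powerpc nvram backend registers with pstore (sketch; field layout abbreviated):
	static struct pstore_info nvram_pstore_info = {
		.owner	= THIS_MODULE,
		.name	= "nvram",
		.flags	= PSTORE_FLAGS_DMESG,	/* without this, dmesg dumps are skipped */
		...
	};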
Fixes:
c950fd6f201a ("pstore: Split pstore fragile flags")
Signed-off-by: Ankit Kumar <ankit@linux.vnet.ibm.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Fri, 5 May 2017 02:54:42 +0000 (19:54 -0700)]
libnvdimm, pfn: fix 'npfns' vs section alignment
commit
d5483feda85a8f39ee2e940e279547c686aac30c upstream.
Fix failures to create namespaces due to the vmem_altmap not advertising
enough free space to store the memmap.
WARNING: CPU: 15 PID: 8022 at arch/x86/mm/init_64.c:656 arch_add_memory+0xde/0xf0
[..]
Call Trace:
dump_stack+0x63/0x83
__warn+0xcb/0xf0
warn_slowpath_null+0x1d/0x20
arch_add_memory+0xde/0xf0
devm_memremap_pages+0x244/0x440
pmem_attach_disk+0x37e/0x490 [nd_pmem]
nd_pmem_probe+0x7e/0xa0 [nd_pmem]
nvdimm_bus_probe+0x71/0x120 [libnvdimm]
driver_probe_device+0x2bb/0x460
bind_store+0x114/0x160
drv_attr_store+0x25/0x30
In commit
658922e57b84 "libnvdimm, pfn: fix memmap reservation sizing"
we arranged for the capacity to be allocated, but failed to also update
the 'npfns' parameter. This leads to cases where there is enough
capacity reserved to hold all the allocated sections, but
vmemmap_populate_hugepages() still encounters -ENOMEM from
altmap_alloc_block_buf().
This fix is a stop-gap until we can teach the core memory hotplug
implementation to permit sub-section hotplug.
Fixes:
658922e57b84 ("libnvdimm, pfn: fix memmap reservation sizing")
Reported-by: Anisha Allada <anisha.allada@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Toshi Kani [Tue, 25 Apr 2017 23:04:13 +0000 (17:04 -0600)]
libnvdimm, pmem: fix a NULL pointer BUG in nd_pmem_notify
commit
b2518c78ce76896f0f8f7940bf02104b227e1709 upstream.
The following BUG was observed when nd_pmem_notify() was called
for a BTT device. The use of a pmem_device pointer is not valid
with BTT.
BUG: unable to handle kernel NULL pointer dereference at 0000000000000030
IP: nd_pmem_notify+0x30/0xf0 [nd_pmem]
Call Trace:
nd_device_notify+0x40/0x50
child_notify+0x10/0x20
device_for_each_child+0x50/0x90
nd_region_notify+0x20/0x30
nd_device_notify+0x40/0x50
nvdimm_region_notify+0x27/0x30
acpi_nfit_scrub+0x341/0x590 [nfit]
process_one_work+0x197/0x450
worker_thread+0x4e/0x4a0
kthread+0x109/0x140
Fix nd_pmem_notify() by setting nd_region and badblocks pointers
properly for BTT.
Cc: Vishal Verma <vishal.l.verma@intel.com>
Fixes:
719994660c24 ("libnvdimm: async notification support")
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Mon, 24 Apr 2017 22:43:05 +0000 (15:43 -0700)]
libnvdimm, region: fix flush hint detection crash
commit
bc042fdfbb92b5b13421316b4548e2d6e98eed37 upstream.
In the case where a dimm does not have any associated flush hints the
ndrd->flush_wpq array may be uninitialized leading to crashes with the
following signature:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
IP: region_visible+0x10f/0x160 [libnvdimm]
Call Trace:
internal_create_group+0xbe/0x2f0
sysfs_create_groups+0x40/0x80
device_add+0x2d8/0x650
nd_async_device_register+0x12/0x40 [libnvdimm]
async_run_entry_fn+0x39/0x170
process_one_work+0x212/0x6c0
? process_one_work+0x197/0x6c0
worker_thread+0x4e/0x4a0
kthread+0x10c/0x140
? process_one_work+0x6c0/0x6c0
? kthread_create_on_node+0x60/0x60
ret_from_fork+0x31/0x40
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Fixes:
f284a4f23752 ("libnvdimm: introduce nvdimm_flush() and nvdimm_has_flush()")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Joeseph Chang [Tue, 28 Mar 2017 02:22:09 +0000 (20:22 -0600)]
ipmi: Fix kernel panic at ipmi_ssif_thread()
commit
6de65fcfdb51835789b245203d1bfc8d14cb1e06 upstream.
msg_written_handler() may set ssif_info->multi_data to NULL
when using ipmitool to write fru.
Before setting ssif_info->multi_data to NULL, add a new local
pointer "data_to_send" and store the correct i2c data pointer in it,
fixing the NULL pointer kernel panic and the incorrect ssif_info->multi_pos.
Signed-off-by: Joeseph Chang <joechang@codeaurora.org>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Wed, 29 Mar 2017 16:15:28 +0000 (18:15 +0200)]
Bluetooth: hci_intel: add missing tty-device sanity check
commit
dcb9cfaa5ea9aa0ec08aeb92582ccfe3e4c719a9 upstream.
Make sure to check the tty-device pointer before looking up the sibling
platform device to avoid dereferencing a NULL-pointer when the tty is
one end of a Unix98 pty.
Fixes:
74cdad37cd24 ("Bluetooth: hci_intel: Add runtime PM support")
Fixes:
1ab1f239bf17 ("Bluetooth: hci_intel: Add support for platform driver")
Cc: Loic Poulain <loic.poulain@intel.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Wed, 29 Mar 2017 16:15:27 +0000 (18:15 +0200)]
Bluetooth: hci_bcm: add missing tty-device sanity check
commit
95065a61e9bf25fb85295127fba893200c2bbbd8 upstream.
Make sure to check the tty-device pointer before looking up the sibling
platform device to avoid dereferencing a NULL-pointer when the tty is
one end of a Unix98 pty.
Fixes:
0395ffc1ee05 ("Bluetooth: hci_bcm: Add PM for BCM devices")
Cc: Frederic Danis <frederic.danis@linux.intel.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Szymon Janc [Tue, 25 Apr 2017 01:25:04 +0000 (18:25 -0700)]
Bluetooth: Fix user channel for 32bit userspace on 64bit kernel
commit
ab89f0bdd63a3721f7cd3f064f39fc4ac7ca14d4 upstream.
Running 32bit userspace on 64bit kernel results in MSG_CMSG_COMPAT being
defined as 0x80000000. This results in sendmsg failure if used from 32bit
userspace running on 64bit kernel. Fix this by accounting for MSG_CMSG_COMPAT
in flags check in hci_sock_sendmsg.
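The flags check in hci_sock_sendmsg() becomes (sketch):
	if (msg->msg_flags & MSG_OOB)
		return -EOPNOTSUPP;

	/* MSG_CMSG_COMPAT is set implicitly by 32-bit userspace on a
	 * 64-bit kernel; allow it instead of bouncing it with -EINVAL */
	if (msg->msg_flags & ~(MSG_DONTWAIT | MSG_NOSIGNAL | MSG_ERRQUEUE |
			       MSG_CMSG_COMPAT))
		return -EINVAL;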
Signed-off-by: Szymon Janc <szymon.janc@codecoup.pl>
Signed-off-by: Marko Kiiskila <marko@runtime.io>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wang YanQing [Wed, 22 Feb 2017 11:37:08 +0000 (19:37 +0800)]
tty: pty: Fix ldisc flush after userspace become aware of the data already
commit
77dae6134440420bac334581a3ccee94cee1c054 upstream.
While using emacs, cat or others' commands in konsole with recent
kernels, I have met many times that CTRL-C freeze konsole. After
konsole freeze I can't type anything, then I have to open a new one,
it is very annoying.
See bug report:
https://bugs.kde.org/show_bug.cgi?id=175283
The platform in that bug report is Solaris, but now the pty in linux
has the same problem or the same behavior as Solaris :)
It is very likely to trigger the problem with the steps below:
Note: In my test, BigFile is a text file whose size is bigger than 1G
1: open konsole
2: cat BigFile
3: CTRL-C
After some digging, I find out the reason is that commit
1d1d14da12e7
("pty: Fix buffer flush deadlock") changes the behavior of pty_flush_buffer.
Thread A                                Thread B
--------                                --------
1: n_tty_poll returns POLLIN
                                        2: CTRL-C triggers pty_flush_buffer
                                             tty_buffer_flush
                                               n_tty_flush_buffer
3: attempt to check count of chars:
   ioctl(fd, TIOCINQ, &available)
   available is equal to 0
4: read(fd, buffer, available)
   returns 0
5: konsole closes fd
Yes, I know we could use the same patch included in the BUG report as
a workaround for the linux platform too. But I think the data in the ldisc
belongs to the application on the other side; we shouldn't clear it when we
want to flush the write buffer of this side in pty_flush_buffer. So I think
it is better to disable the ldisc flush in pty_flush_buffer, because its new
behavior brings no benefit except that it messes up the behavior between
POLLIN and TIOCINQ or FIONREAD.
Also, I find no flush_buffer function in other tty drivers that has the
same behavior as the current pty_flush_buffer.
Fixes:
1d1d14da12e7 ("pty: Fix buffer flush deadlock")
Signed-off-by: Wang YanQing <udknight@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Mon, 10 Apr 2017 09:21:39 +0000 (11:21 +0200)]
serial: omap: suspend device on probe errors
commit
77e6fe7fd2b7cba0bf2f2dc8cde51d7b9a35bf74 upstream.
Make sure to actually suspend the device before returning after a failed
(or deferred) probe.
Note that autosuspend must be disabled before runtime pm is disabled in
order to balance the usage count due to a negative autosuspend delay as
well as to make the final put suspend the device synchronously.
Fixes:
388bc2622680 ("omap-serial: Fix the error handling in the omap_serial probe")
Cc: Shubhrajyoti D <shubhrajyoti@ti.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Mon, 10 Apr 2017 09:21:38 +0000 (11:21 +0200)]
serial: omap: fix runtime-pm handling on unbind
commit
099bd73dc17ed77aa8c98323e043613b6e8f54fc upstream.
An unbalanced and misplaced synchronous put was used to suspend the
device on driver unbind, something which with a likewise misplaced
pm_runtime_disable leads to external aborts when an open port is being
removed.
Unhandled fault: external abort on non-linefetch (0x1028) at 0xfa024010
...
[<c046e760>] (serial_omap_set_mctrl) from [<c046a064>] (uart_update_mctrl+0x50/0x60)
[<c046a064>] (uart_update_mctrl) from [<c046a400>] (uart_shutdown+0xbc/0x138)
[<c046a400>] (uart_shutdown) from [<c046bd2c>] (uart_hangup+0x94/0x190)
[<c046bd2c>] (uart_hangup) from [<c045b760>] (__tty_hangup+0x404/0x41c)
[<c045b760>] (__tty_hangup) from [<c045b794>] (tty_vhangup+0x1c/0x20)
[<c045b794>] (tty_vhangup) from [<c046ccc8>] (uart_remove_one_port+0xec/0x260)
[<c046ccc8>] (uart_remove_one_port) from [<c046ef4c>] (serial_omap_remove+0x40/0x60)
[<c046ef4c>] (serial_omap_remove) from [<c04845e8>] (platform_drv_remove+0x34/0x4c)
Fix this up by resuming the device before deregistering the port and by
suspending and disabling runtime pm only after the port has been
removed.
Also make sure to disable autosuspend before disabling runtime pm so
that the usage count is balanced and device actually suspended before
returning.
Note that due to a negative autosuspend delay being set in probe, the
unbalanced put would actually suspend the device on first driver unbind,
while rebinding and again unbinding would result in a negative
power.usage_count.
Fixes:
7e9c8e7dbf3b ("serial: omap: make sure to suspend device before remove")
Cc: Felipe Balbi <balbi@kernel.org>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marek Szyprowski [Mon, 3 Apr 2017 06:20:59 +0000 (08:20 +0200)]
serial: samsung: Use right device for DMA-mapping calls
commit
768d64f491a530062ddad50e016fb27125f8bd7c upstream.
Driver should provide its own struct device for all DMA-mapping calls instead
of extracting device pointer from DMA engine channel. Although this is harmless
from the driver operation perspective on ARM architecture, it is always good
to use the DMA mapping API in a proper way. This patch fixes the following DMA API
debug warning:
WARNING: CPU: 0 PID: 0 at lib/dma-debug.c:1241 check_sync+0x520/0x9f4
samsung-uart 12c20000.serial: DMA-API: device driver tries to sync DMA memory it has not allocated [device address=0x000000006df0f580] [size=64 bytes]
Modules linked in:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.11.0-rc1-00137-g07ca963 #51
Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
[<c011aaa4>] (unwind_backtrace) from [<c01127c0>] (show_stack+0x20/0x24)
[<c01127c0>] (show_stack) from [<c06ba5d8>] (dump_stack+0x84/0xa0)
[<c06ba5d8>] (dump_stack) from [<c0139528>] (__warn+0x14c/0x180)
[<c0139528>] (__warn) from [<c01395a4>] (warn_slowpath_fmt+0x48/0x50)
[<c01395a4>] (warn_slowpath_fmt) from [<c0729058>] (check_sync+0x520/0x9f4)
[<c0729058>] (check_sync) from [<c072967c>] (debug_dma_sync_single_for_device+0x88/0xc8)
[<c072967c>] (debug_dma_sync_single_for_device) from [<c0803c10>] (s3c24xx_serial_start_tx_dma+0x100/0x2f8)
[<c0803c10>] (s3c24xx_serial_start_tx_dma) from [<c0804338>] (s3c24xx_serial_tx_chars+0x198/0x33c)
Reported-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Fixes:
62c37eedb74c8 ("serial: samsung: add dma reqest/release functions")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Reviewed-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Biggers [Fri, 7 Apr 2017 17:58:37 +0000 (10:58 -0700)]
fscrypt: fix context consistency check when key(s) unavailable
commit
272f98f6846277378e1758a49a49d7bf39343c02 upstream.
To mitigate some types of offline attacks, filesystem encryption is
designed to enforce that all files in an encrypted directory tree use
the same encryption policy (i.e. the same encryption context excluding
the nonce). However, the fscrypt_has_permitted_context() function which
enforces this relies on comparing struct fscrypt_info's, which are only
available when we have the encryption keys. This can cause two
incorrect behaviors:
1. If we have the parent directory's key but not the child's key, or
vice versa, then fscrypt_has_permitted_context() returned false,
causing applications to see EPERM or ENOKEY. This is incorrect if
the encryption contexts are in fact consistent. Although we'd
normally have either both keys or neither key in that case since the
master_key_descriptors would be the same, this is not guaranteed
because keys can be added or removed from keyrings at any time.
2. If we have neither the parent's key nor the child's key, then
fscrypt_has_permitted_context() returned true, causing applications
to see no error (or else an error for some other reason). This is
incorrect if the encryption contexts are in fact inconsistent, since
in that case we should deny access.
To fix this, retrieve and compare the fscrypt_contexts if we are unable
to set up both fscrypt_infos.
While this slightly hurts performance when accessing an encrypted
directory tree without the key, this isn't a case we really need to be
optimizing for; access *with* the key is much more important.
Furthermore, the performance hit is barely noticeable given that we are
already retrieving the fscrypt_context and doing two keyring searches in
fscrypt_get_encryption_info(). If we ever actually wanted to optimize
this case we might start by caching the fscrypt_contexts.
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Fri, 17 Mar 2017 18:48:09 +0000 (12:48 -0600)]
device-dax: fix cdev leak
commit
ed01e50acdd3e4a640cf9ebd28a7e810c3ceca97 upstream.
If device_add() fails, cleanup the cdev. Otherwise, we leak a kobj_map()
with a stale device number.
As Jason points out, there is a small possibility that userspace has
opened and mapped the device in the time between cdev_add() and the
device_add() failure. We need a new kill_dax_dev() helper to invalidate
any established mappings.
Fixes:
ba09c01d2fa8 ("dax: convert to the cdev api")
Reported-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jason A. Donenfeld [Fri, 7 Apr 2017 00:33:30 +0000 (02:33 +0200)]
padata: free correct variable
commit
07a77929ba672d93642a56dc2255dd21e6e2290b upstream.
The author meant to free the variable that was just allocated, instead
of the one that failed to be allocated, but made a simple typo. This
patch rectifies that.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Björn Jacke [Fri, 5 May 2017 02:36:16 +0000 (04:36 +0200)]
CIFS: add missing SFM mapping for doublequote
commit
85435d7a15294f9f7ef23469e6aaf7c5dfcc54f0 upstream.
SFM is mapping doublequote to 0xF020.
Without this patch, creating files with a doublequote fails against Windows/Mac.
Signed-off-by: Bjoern Jacke <bjacke@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Disseldorp [Wed, 3 May 2017 22:41:13 +0000 (00:41 +0200)]
cifs: fix CIFS_IOC_GET_MNT_INFO oops
commit
d8a6e505d6bba2250852fbc1c1c86fe68aaf9af3 upstream.
An open directory may have a NULL private_data pointer prior to readdir.
Fixes:
0de1f4c6f6c0 ("Add way to query server fs info for smb3")
Signed-off-by: David Disseldorp <ddiss@suse.de>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rabin Vincent [Wed, 3 May 2017 15:54:01 +0000 (17:54 +0200)]
CIFS: fix oplock break deadlocks
commit
3998e6b87d4258a70df358296d6f1c7234012bfe upstream.
When the final cifsFileInfo_put() is called from cifsiod and an oplock
break work is queued, lockdep complains loudly:
=============================================
[ INFO: possible recursive locking detected ]
4.11.0+ #21 Not tainted
---------------------------------------------
kworker/0:2/78 is trying to acquire lock:
("cifsiod"){++++.+}, at: flush_work+0x215/0x350
but task is already holding lock:
("cifsiod"){++++.+}, at: process_one_work+0x255/0x8e0
other info that might help us debug this:
Possible unsafe locking scenario:
CPU0
----
lock("cifsiod");
lock("cifsiod");
*** DEADLOCK ***
May be due to missing lock nesting notation
2 locks held by kworker/0:2/78:
#0: ("cifsiod"){++++.+}, at: process_one_work+0x255/0x8e0
#1: ((&wdata->work)){+.+...}, at: process_one_work+0x255/0x8e0
stack backtrace:
CPU: 0 PID: 78 Comm: kworker/0:2 Not tainted 4.11.0+ #21
Workqueue: cifsiod cifs_writev_complete
Call Trace:
dump_stack+0x85/0xc2
__lock_acquire+0x17dd/0x2260
? match_held_lock+0x20/0x2b0
? trace_hardirqs_off_caller+0x86/0x130
? mark_lock+0xa6/0x920
lock_acquire+0xcc/0x260
? lock_acquire+0xcc/0x260
? flush_work+0x215/0x350
flush_work+0x236/0x350
? flush_work+0x215/0x350
? destroy_worker+0x170/0x170
__cancel_work_timer+0x17d/0x210
? ___preempt_schedule+0x16/0x18
cancel_work_sync+0x10/0x20
cifsFileInfo_put+0x338/0x7f0
cifs_writedata_release+0x2a/0x40
? cifs_writedata_release+0x2a/0x40
cifs_writev_complete+0x29d/0x850
? preempt_count_sub+0x18/0xd0
process_one_work+0x304/0x8e0
worker_thread+0x9b/0x6a0
kthread+0x1b2/0x200
? process_one_work+0x8e0/0x8e0
? kthread_create_on_node+0x40/0x40
ret_from_fork+0x31/0x40
This is a real warning. Since the oplock is queued on the same
workqueue this can deadlock if there is only one worker thread active
for the workqueue (which will be the case during memory pressure when
the rescuer thread is handling it).
Furthermore, there is at least one other kind of hang possible due to
the oplock break handling if there is only one worker. (This can be
reproduced without introducing memory pressure by passing 1 for
the max_active parameter of cifsiod.) cifs_oplock_break() can wait
indefinitely in filemap_fdatawait() while the cifs_writev_complete()
work is blocked:
sysrq: SysRq : Show Blocked State
task PC stack pid father
kworker/0:1 D 0 16 2 0x00000000
Workqueue: cifsiod cifs_oplock_break
Call Trace:
__schedule+0x562/0xf40
? mark_held_locks+0x4a/0xb0
schedule+0x57/0xe0
io_schedule+0x21/0x50
wait_on_page_bit+0x143/0x190
? add_to_page_cache_lru+0x150/0x150
__filemap_fdatawait_range+0x134/0x190
? do_writepages+0x51/0x70
filemap_fdatawait_range+0x14/0x30
filemap_fdatawait+0x3b/0x40
cifs_oplock_break+0x651/0x710
? preempt_count_sub+0x18/0xd0
process_one_work+0x304/0x8e0
worker_thread+0x9b/0x6a0
kthread+0x1b2/0x200
? process_one_work+0x8e0/0x8e0
? kthread_create_on_node+0x40/0x40
ret_from_fork+0x31/0x40
dd D 0 683 171 0x00000000
Call Trace:
__schedule+0x562/0xf40
? mark_held_locks+0x29/0xb0
schedule+0x57/0xe0
io_schedule+0x21/0x50
wait_on_page_bit+0x143/0x190
? add_to_page_cache_lru+0x150/0x150
__filemap_fdatawait_range+0x134/0x190
? do_writepages+0x51/0x70
filemap_fdatawait_range+0x14/0x30
filemap_fdatawait+0x3b/0x40
filemap_write_and_wait+0x4e/0x70
cifs_flush+0x6a/0xb0
filp_close+0x52/0xa0
__close_fd+0xdc/0x150
SyS_close+0x33/0x60
entry_SYSCALL_64_fastpath+0x1f/0xbe
Showing all locks held in the system:
2 locks held by kworker/0:1/16:
#0: ("cifsiod"){.+.+.+}, at: process_one_work+0x255/0x8e0
#1: ((&cfile->oplock_break)){+.+.+.}, at: process_one_work+0x255/0x8e0
Showing busy workqueues and worker pools:
workqueue cifsiod: flags=0xc
pwq 0: cpus=0 node=0 flags=0x0 nice=0 active=1/1
in-flight: 16:cifs_oplock_break
delayed: cifs_writev_complete, cifs_echo_request
pool 0: cpus=0 node=0 flags=0x0 nice=0 hung=0s workers=3 idle: 750 3
Fix these problems by creating a new workqueue (with a rescuer) for
the oplock break work.
Signed-off-by: Rabin Vincent <rabinv@axis.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Disseldorp [Wed, 3 May 2017 15:39:08 +0000 (17:39 +0200)]
cifs: fix CIFS_ENUMERATE_SNAPSHOTS oops
commit
6026685de33b0db5b2b6b0e9b41b3a1a3261033c upstream.
As with
618763958b22, an open directory may have a NULL private_data
pointer prior to readdir. CIFS_ENUMERATE_SNAPSHOTS must check for this
before dereference.
Fixes:
834170c85978 ("Enable previous version support")
Signed-off-by: David Disseldorp <ddiss@suse.de>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Disseldorp [Wed, 3 May 2017 15:39:09 +0000 (17:39 +0200)]
cifs: fix leak in FSCTL_ENUM_SNAPS response handling
commit
0e5c795592930d51fd30d53a2e7b73cba022a29b upstream.
The server may respond with success, and an output buffer less than
sizeof(struct smb_snapshot_array) in length. Do not leak the output
buffer in this case.
Fixes:
834170c85978 ("Enable previous version support")
Signed-off-by: David Disseldorp <ddiss@suse.de>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Björn Jacke [Wed, 3 May 2017 21:47:44 +0000 (23:47 +0200)]
CIFS: fix mapping of SFM_SPACE and SFM_PERIOD
commit
b704e70b7cf48f9b67c07d585168e102dfa30bb4 upstream.
- trailing space maps to 0xF028
- trailing period maps to 0xF029
This fix corrects the mapping of file names which have a trailing character
that would otherwise be illegal (period or space) but is allowed by POSIX.
Signed-off-by: Bjoern Jacke <bjacke@samba.org>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steve French [Thu, 4 May 2017 02:12:20 +0000 (21:12 -0500)]
SMB3: Work around mount failure when using SMB3 dialect to Macs
commit
7db0a6efdc3e990cdfd4b24820d010e9eb7890ad upstream.
Macs send the maximum buffer size in response on ioctl to validate
negotiate security information, which causes us to fail the mount
as the response buffer is larger than the expected response.
Changed ioctl response processing to allow for padding of validate
negotiate ioctl response and limit the maximum response size to
maximum buffer size.
Signed-off-by: Steve French <steve.french@primarydata.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steve French [Tue, 2 May 2017 18:35:20 +0000 (13:35 -0500)]
Set unicode flag on cifs echo request to avoid Mac error
commit
26c9cb668c7fbf9830516b75d8bee70b699ed449 upstream.
Mac requires the unicode flag to be set for cifs, even for the smb
echo request (which doesn't have strings).
Without this, Mac rejects the periodic echo requests (when mounting
with cifs) that we use to check if the server is down.
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sachin Prabhu [Wed, 26 Apr 2017 13:05:46 +0000 (14:05 +0100)]
Fix match_prepath()
commit
cd8c42968ee651b69e00f8661caff32b0086e82d upstream.
Incorrect return value for shares not using the prefix path means that
we will never match superblocks for these shares.
Fixes: commit
c1d8b24d1819 ("Compare prepaths when comparing superblocks")
Signed-off-by: Sachin Prabhu <sprabhu@redhat.com>
Reviewed-by: Pavel Shilovsky <pshilov@microsoft.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vlastimil Babka [Mon, 8 May 2017 22:59:46 +0000 (15:59 -0700)]
mm: prevent potential recursive reclaim due to clearing PF_MEMALLOC
commit
62be1511b1db8066220b18b7d4da2e6b9fdc69fb upstream.
Patch series "more robust PF_MEMALLOC handling"
This series aims to unify the setting and clearing of PF_MEMALLOC, which
prevents recursive reclaim. There are some places that clear the flag
unconditionally from current->flags, which may result in clearing a
pre-existing flag. This already resulted in a bug report that Patch 1
fixes (without the new helpers, to make backporting easier). Patch 2
introduces the new helpers, modelled after existing memalloc_noio_* and
memalloc_nofs_* helpers, and converts mm core to use them. Patches 3
and 4 convert non-mm code.
This patch (of 4):
__alloc_pages_direct_compact() sets PF_MEMALLOC to prevent deadlock
during page migration by lock_page() (see the comment in
__unmap_and_move()). Then it unconditionally clears the flag, which can
clear a pre-existing PF_MEMALLOC flag and result in recursive reclaim.
This was not a problem until commit
a8161d1ed609 ("mm, page_alloc:
restructure direct compaction handling in slowpath"), because direct
compaction was called only after direct reclaim, which was skipped when
PF_MEMALLOC flag was set.
Even now it's only a theoretical issue, as the new callsite of
__alloc_pages_direct_compact() is reached only for costly orders and
when gfp_pfmemalloc_allowed() is true, which means either
__GFP_NOMEMALLOC is in gfp_flags or in_interrupt() is true. There is no
such known context, but let's play it safe and make
__alloc_pages_direct_compact() robust for cases where PF_MEMALLOC is
already set.
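The fix saves and restores the pre-existing flag instead of clearing it unconditionally (sketch of __alloc_pages_direct_compact(); argument list abbreviated to the usual slowpath parameters):
	unsigned int noreclaim_flag = current->flags & PF_MEMALLOC;

	current->flags |= PF_MEMALLOC;
	*compact_result = try_to_compact_pages(gfp_mask, order, alloc_flags, ac,
					       prio);
	current->flags = (current->flags & ~PF_MEMALLOC) | noreclaim_flag;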
Fixes:
a8161d1ed609 ("mm, page_alloc: restructure direct compaction handling in slowpath")
Link: http://lkml.kernel.org/r/20170405074700.29871-2-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Reported-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Boris Brezillon <boris.brezillon@free-electrons.com>
Cc: Chris Leech <cleech@redhat.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Josef Bacik <jbacik@fb.com>
Cc: Lee Duncan <lduncan@suse.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Richard Weinberger <richard@nod.at>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrey Ryabinin [Wed, 3 May 2017 21:56:02 +0000 (14:56 -0700)]
fs/block_dev: always invalidate cleancache in invalidate_bdev()
commit
a5f6a6a9c72eac38a7fadd1a038532bc8516337c upstream.
invalidate_bdev() calls cleancache_invalidate_inode() iff ->nrpages != 0
which doesn't make any sense.
Make sure that invalidate_bdev() always calls cleancache_invalidate_inode()
regardless of mapping->nrpages value.
Fixes:
c515e1fd361c ("mm/fs: add hooks to support cleancache")
Link: http://lkml.kernel.org/r/20170424164135.22350-3-aryabinin@virtuozzo.com
Signed-off-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Alexey Kuznetsov <kuznet@virtuozzo.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Nikolay Borisov <n.borisov.lkml@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Luis Henriques [Fri, 28 Apr 2017 10:14:04 +0000 (11:14 +0100)]
ceph: fix memory leak in __ceph_setxattr()
commit
eeca958dce0a9231d1969f86196653eb50fcc9b3 upstream.
The ceph_inode_xattr needs to be released when removing an xattr. Easily
reproducible running the 'generic/020' test from xfstests or simply by
doing:
attr -s attr0 -V 0 /mnt/test && attr -r attr0 /mnt/test
While there, also fix the error path.
Here's the kmemleak splat:
unreferenced object 0xffff88001f86fbc0 (size 64):
comm "attr", pid 244, jiffies
4294904246 (age 98.464s)
hex dump (first 32 bytes):
40 fa 86 1f 00 88 ff ff 80 32 38 1f 00 88 ff ff @........28.....
00 01 00 00 00 00 ad de 00 02 00 00 00 00 ad de ................
backtrace:
[<ffffffff81560199>] kmemleak_alloc+0x49/0xa0
[<ffffffff810f3e5b>] kmem_cache_alloc+0x9b/0xf0
[<ffffffff812b157e>] __ceph_setxattr+0x17e/0x820
[<ffffffff812b1c57>] ceph_set_xattr_handler+0x37/0x40
[<ffffffff8111fb4b>] __vfs_removexattr+0x4b/0x60
[<ffffffff8111fd37>] vfs_removexattr+0x77/0xd0
[<ffffffff8111fdd1>] removexattr+0x41/0x60
[<ffffffff8111fe65>] path_removexattr+0x75/0xa0
[<ffffffff81120aeb>] SyS_lremovexattr+0xb/0x10
[<ffffffff81564b20>] entry_SYSCALL_64_fastpath+0x13/0x94
[<ffffffffffffffff>] 0xffffffffffffffff
Signed-off-by: Luis Henriques <lhenriques@suse.com>
Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michal Hocko [Mon, 8 May 2017 22:57:24 +0000 (15:57 -0700)]
fs/xattr.c: zero out memory copied to userspace in getxattr
commit
81be3dee96346fbe08c31be5ef74f03f6b63cf68 upstream.
getxattr uses vmalloc to allocate memory if kzalloc fails. This is
filled by vfs_getxattr and then copied to the userspace. vmalloc,
however, doesn't zero out the memory so if the specific implementation
of the xattr handler is sloppy, we can theoretically expose kernel
memory. There is no real sign this is really the case but let's make
sure this will not happen and use vzalloc instead.
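The change in getxattr() is a one-liner in the vmalloc fallback path (sketch):
	kvalue = kzalloc(size, GFP_KERNEL | __GFP_NOWARN);
	if (!kvalue) {
		/* vzalloc, not vmalloc: the buffer is copied to userspace
		 * even if the handler only partially fills it */
		kvalue = vzalloc(size);
		if (!kvalue)
			return -ENOMEM;
	}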
Fixes:
779302e67835 ("fs/xattr.c:getxattr(): improve handling of allocation failures")
Link: http://lkml.kernel.org/r/20170306103327.2766-1-mhocko@kernel.org
Acked-by: Kees Cook <keescook@chromium.org>
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Martin Brandenburg [Tue, 25 Apr 2017 19:38:04 +0000 (15:38 -0400)]
orangefs: do not check possibly stale size on truncate
commit
53950ef541675df48c219a8d665111a0e68dfc2f upstream.
Let the server figure this out because our size might be out of date or
not present.
The bug was that
xfs_io -f -t -c "pread -v 0 100" /mnt/foo
echo "Test" > /mnt/foo
xfs_io -f -t -c "pread -v 0 100" /mnt/foo
fails because the second truncate did not happen if nothing had
requested the size after the write in echo. Thus i_size was zero (not
present) and orangefs_setattr thought i_size was zero and there was
nothing to do.
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Martin Brandenburg [Tue, 25 Apr 2017 19:37:58 +0000 (15:37 -0400)]
orangefs: do not set getattr_time on orangefs_lookup
commit
17930b252cd6f31163c259eaa99dd8aa630fb9ba upstream.
Since orangefs_lookup calls orangefs_iget which calls
orangefs_inode_getattr, getattr_time will get set.
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Martin Brandenburg [Tue, 25 Apr 2017 19:37:57 +0000 (15:37 -0400)]
orangefs: clean up oversize xattr validation
commit
e675c5ec51fe2554719a7b6bcdbef0a770f2c19b upstream.
Also don't check flags as this has been validated by the VFS already.
Fix an off-by-one error in the max size checking.
Stop logging just because userspace wants to write attributes which do
not fit.
This and the previous commit fix xfstests generic/020.
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Martin Brandenburg [Tue, 25 Apr 2017 19:37:56 +0000 (15:37 -0400)]
orangefs: fix bounds check for listxattr
commit
a956af337b9ff25822d9ce1a59c6ed0c09fc14b9 upstream.
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Biggers [Sun, 30 Apr 2017 04:10:50 +0000 (00:10 -0400)]
ext4: evict inline data when writing to memory map
commit
7b4cc9787fe35b3ee2dfb1c35e22eafc32e00c33 upstream.
Currently the case of writing via mmap to a file with inline data is not
handled. This may be a rare case since it requires a writable memory
map of a very small file, but it is trivial to trigger on an
inline_data filesystem, and it causes the
'BUG_ON(ext4_test_inode_state(inode, EXT4_STATE_MAY_INLINE_DATA));' in
ext4_writepages() to be hit:
mkfs.ext4 -O inline_data /dev/vdb
mount /dev/vdb /mnt
xfs_io -f /mnt/file \
-c 'pwrite 0 1' \
-c 'mmap -w 0 1m' \
-c 'mwrite 0 1' \
-c 'fsync'
kernel BUG at fs/ext4/inode.c:2723!
invalid opcode: 0000 [#1] SMP
CPU: 1 PID: 2532 Comm: xfs_io Not tainted 4.11.0-rc1-xfstests-00301-g071d9acf3d1f #633
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-20170228_101828-anatol 04/01/2014
task: ffff88003d3a8040 task.stack: ffffc90000300000
RIP: 0010:ext4_writepages+0xc89/0xf8a
RSP: 0018:ffffc90000303ca0 EFLAGS: 00010283
RAX: 0000028410000000 RBX: ffff8800383fa3b0 RCX: ffffffff812afcdc
RDX: 00000a9d00000246 RSI: ffffffff81e660e0 RDI: 0000000000000246
RBP: ffffc90000303dc0 R08: 0000000000000002 R09: 869618e8f99b4fa5
R10: 00000000852287a2 R11: 00000000a03b49f4 R12: ffff88003808e698
R13: 0000000000000000 R14: 7fffffffffffffff R15: 7fffffffffffffff
FS: 00007fd3e53094c0(0000) GS: ffff88003e400000(0000) knlGS: 0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fd3e4c51000 CR3: 000000003d554000 CR4: 00000000003406e0
Call Trace:
? _raw_spin_unlock+0x27/0x2a
? kvm_clock_read+0x1e/0x20
do_writepages+0x23/0x2c
? do_writepages+0x23/0x2c
__filemap_fdatawrite_range+0x80/0x87
filemap_write_and_wait_range+0x67/0x8c
ext4_sync_file+0x20e/0x472
vfs_fsync_range+0x8e/0x9f
? syscall_trace_enter+0x25b/0x2d0
vfs_fsync+0x1c/0x1e
do_fsync+0x31/0x4a
SyS_fsync+0x10/0x14
do_syscall_64+0x69/0x131
entry_SYSCALL64_slow_path+0x25/0x25
We could try to be smart and keep the inline data in this case, or at
least support delayed allocation when allocating the block, but these
solutions would be more complicated and don't seem worthwhile given how
rare this case seems to be. So just fix the bug by calling
ext4_convert_inline_data() when we're asked to make a page writable, so
that any inline data gets evicted, with the block allocated immediately.
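A hedged sketch of the shape of that fix, assuming it sits in the write-fault path; the wrapper below is illustrative, while ext4_convert_inline_data() is the real helper named above.
#include <linux/fs.h>
/* Real helper; declared in fs/ext4/ext4.h in the ext4 tree. */
extern int ext4_convert_inline_data(struct inode *inode);
/* Sketch: run before an mmap'ed page of 'inode' is made writable. */
static int ext4_prepare_for_write_fault(struct inode *inode)
{
        int err;
        /*
         * Move any inline data out to a regular block first, so writeback
         * never sees an inode still flagged as MAY_INLINE_DATA.
         */
        err = ext4_convert_inline_data(inode);
        if (err)
                return err;
        return 0;
}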
Reported-by: Nick Alcock <nick.alcock@oracle.com>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Adrian Hunter [Fri, 24 Mar 2017 12:15:52 +0000 (14:15 +0200)]
perf auxtrace: Fix no_size logic in addr_filter__resolve_kernel_syms()
commit
c3a0bbc7ad7598dec5a204868bdf8a2b1b51df14 upstream.
Address filtering with kernel symbols incorrectly resulted in the error
"Cannot determine size of symbol" because the no_size logic was the wrong
way around.
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Tested-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1490357752-27942-1-git-send-email-adrian.hunter@intel.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mike Marciniszyn [Sun, 9 Apr 2017 17:16:35 +0000 (10:16 -0700)]
IB/hfi1: Prevent kernel QP post send hard lockups
commit
b6eac931b9bb2bce4db7032c35b41e5e34ec22a5 upstream.
The driver progress routines can call cond_resched() when
a timeslice is exhausted and irqs are enabled.
If the ULP had been holding a spin lock without disabling irqs and
the post send directly called the progress routine, the cond_resched()
could yield allowing another thread from the same ULP to deadlock
on that same lock.
Correct by replacing the current hfi1_do_send() calldown with a unique
one for post send and adding an argument to hfi1_do_send() to indicate
that the send engine is running in a thread. If the routine is not
running in a thread, avoid calling cond_resched().
Fixes: Commit
831464ce4b74 ("IB/hfi1: Don't call cond_resched in atomic mode when sending packets")
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jack Morgenstein [Tue, 21 Mar 2017 10:57:06 +0000 (12:57 +0200)]
IB/mlx4: Reduce SRIOV multicast cleanup warning message to debug level
commit
fb7a91746af18b2ebf596778b38a709cdbc488d3 upstream.
A warning message during SRIOV multicast cleanup should have actually been
a debug level message. The condition generating the warning does no harm
and can fill the message log.
In some cases, during testing, some tests were so intense as to swamp the
message log with these warning messages, causing a stall in the console
message log output task. This stall caused an NMI to be sent to all CPUs
(so that they all dumped their stacks into the message log).
Aside from the message flood causing an NMI, the tests all passed.
Once the message flood which caused the NMI is removed (by reducing the
warning message to debug level), the NMI no longer occurs.
Sample message log (console log) output illustrating the flood and
resultant NMI (snippets with comments and modified with ... instead
of hex digits, to satisfy checkpatch.pl):
<mlx4_ib> _mlx4_ib_mcg_port_cleanup: ... WARNING: group refcount 1!!!...
*** About 4000 almost identical lines in less than one second ***
<mlx4_ib> _mlx4_ib_mcg_port_cleanup: ... WARNING: group refcount 1!!!...
INFO: rcu_sched detected stalls on CPUs/tasks: { 17} (...)
*** { 17} above indicates that CPU 17 was the one that stalled ***
sending NMI to all CPUs:
...
NMI backtrace for cpu 17
CPU: 17 PID: 45909 Comm: kworker/17:2
Hardware name: HP ProLiant DL360p Gen8, BIOS P71 09/08/2013
Workqueue: events fb_flashcursor
task: ffff880478...... ti: ffff88064e...... task.ti: ffff88064e......
RIP: 0010:[ffffffff81......] [ffffffff81......] io_serial_in+0x15/0x20
RSP: 0018:ffff88064e257cb0 EFLAGS: 00000002
RAX: 0000000000...... RBX: ffffffff81...... RCX: 0000000000......
RDX: 0000000000...... RSI: 0000000000...... RDI: ffffffff81......
RBP: ffff88064e...... R08: ffffffff81...... R09: 0000000000......
R10: 0000000000...... R11: ffff88064e...... R12: 0000000000......
R13: 0000000000...... R14: ffffffff81...... R15: 0000000000......
FS: 0000000000......(0000) GS: ffff8804af......(0000) knlGS: 000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080......
CR2: 00007f2a2f...... CR3: 0000000001...... CR4: 0000000000......
DR0: 0000000000...... DR1: 0000000000...... DR2: 0000000000......
DR3: 0000000000...... DR6: 00000000ff...... DR7: 0000000000......
Stack:
ffff88064e...... ffffffff81...... ffffffff81...... 0000000000......
ffffffff81...... ffff88064e...... ffffffff81...... ffffffff81......
ffffffff81...... ffff88064e...... ffffffff81...... 0000000000......
Call Trace:
[<ffffffff813d099b>] wait_for_xmitr+0x3b/0xa0
[<ffffffff813d0b5c>] serial8250_console_putchar+0x1c/0x30
[<ffffffff813d0b40>] ? serial8250_console_write+0x140/0x140
[<ffffffff813cb5fa>] uart_console_write+0x3a/0x80
[<ffffffff813d0aae>] serial8250_console_write+0xae/0x140
[<ffffffff8107c4d1>] call_console_drivers.constprop.15+0x91/0xf0
[<ffffffff8107d6cf>] console_unlock+0x3bf/0x400
[<ffffffff813503cd>] fb_flashcursor+0x5d/0x140
[<ffffffff81355c30>] ? bit_clear+0x120/0x120
[<ffffffff8109d5fb>] process_one_work+0x17b/0x470
[<ffffffff8109e3cb>] worker_thread+0x11b/0x400
[<ffffffff8109e2b0>] ? rescuer_thread+0x400/0x400
[<ffffffff810a5aef>] kthread+0xcf/0xe0
[<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
[<ffffffff81645858>] ret_from_fork+0x58/0x90
[<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Code: 48 89 e5 d3 e6 48 63 f6 48 03 77 10 8b 06 5d c3 66 0f 1f 44 00 00 66 66 66 6
As indicated in the stack trace above, the console output task got swamped.
Fixes:
b9c5d6a64358 ("IB/mlx4: Add multicast group (MCG) paravirtualization for SR-IOV")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jack Morgenstein [Tue, 21 Mar 2017 10:57:05 +0000 (12:57 +0200)]
IB/mlx4: Fix ib device initialization error flow
commit
99e68909d5aba1861897fe7afc3306c3c81b6de0 upstream.
In mlx4_ib_add, procedure mlx4_ib_alloc_eqs is called to allocate EQs.
However, in the mlx4_ib_add error flow, procedure mlx4_ib_free_eqs is not
called to free the allocated EQs.
Fixes:
e605b743f33d ("IB/mlx4: Increase the number of vectors (EQs) available for ULPs")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Shamir Rabinovitch [Wed, 29 Mar 2017 10:21:59 +0000 (06:21 -0400)]
IB/IPoIB: ibX: failed to create mcg debug file
commit
771a52584096c45e4565e8aabb596eece9d73d61 upstream.
When udev renames the netdev devices, the ipoib debugfs entries do not
get renamed. As a result, if a subsequent probe of an ipoib device reuses
the name, creating a debugfs entry for the new device will fail.
Also, move ipoib_create_debug_files and ipoib_delete_debug_files into the
ipoib event handling in order to avoid any race condition between them.
Fixes:
1732b0ef3b3a ([IPoIB] add path record information in debugfs)
Signed-off-by: Vijay Kumar <vijay.ac.kumar@oracle.com>
Signed-off-by: Shamir Rabinovitch <shamir.rabinovitch@oracle.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael J. Ruhl [Sun, 9 Apr 2017 17:15:51 +0000 (10:15 -0700)]
IB/core: For multicast functions, verify that LIDs are multicast LIDs
commit
8561eae60ff9417a50fa1fb2b83ae950dc5c1e21 upstream.
The Infiniband spec defines "A multicast address is defined by a
MGID and a MLID" (section 10.5). Currently the MLID value is not
validated.
Add check to verify that the MLID value is in the correct address
range.
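As a hedged illustration, the IBA places multicast LIDs in the range 0xC000-0xFFFE (0xFFFF is the permissive LID), so a range check of the kind the commit describes could look like the sketch below; the function name is illustrative.
#include <linux/errno.h>
#include <linux/types.h>
/* Sketch: reject LIDs outside the IBA multicast range on attach/detach. */
static int validate_mcast_lid(u16 lid)
{
        if (lid < 0xc000 || lid == 0xffff)
                return -EINVAL;
        return 0;
}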
Fixes:
0c33aeedb2cf ("[IB] Add checks to multicast attach and detach")
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jack Morgenstein [Sun, 19 Mar 2017 08:55:57 +0000 (10:55 +0200)]
IB/core: Fix sysfs registration error flow
commit
b312be3d87e4c80872cbea869e569175c5eb0f9a upstream.
The kernel commit cited below restructured ib device management
so that the device kobject is initialized in ib_alloc_device.
As part of the restructuring, the kobject is now initialized in
procedure ib_alloc_device, and is later added to the device hierarchy
in the ib_register_device call stack, in procedure
ib_device_register_sysfs (which calls device_add).
However, in the ib_device_register_sysfs error flow, if an error
occurs following the call to device_add, the cleanup procedure
device_unregister is called. This call results in the device object
being deleted -- which results in various use-after-free crashes.
The correct cleanup call is device_del -- which undoes device_add
without deleting the device object.
The device object will then (correctly) be deleted in the
ib_register_device caller's error cleanup flow, when the caller invokes
ib_dealloc_device.
Fixes:
55aeed06544f6 ("IB/core: Make ib_alloc_device init the kobject")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ding Tianhong [Sat, 29 Apr 2017 02:38:48 +0000 (10:38 +0800)]
iov_iter: don't revert iov buffer if csum error
commit
a6a5993243550b09f620941dea741b7421fdf79c upstream.
The patch
327868212381 (make skb_copy_datagram_msg() et.al. preserve
->msg_iter on error) reverts the iov buffer if the copy to the iterator
fails, but no datagram has been copied when skb_checksum_complete()
reports an error, so there is no need to revert any data at this place.
v2: Sabrina noticed that returning -EFAULT on a checksum error is not
correct here; it would confuse the caller about the return value, so fix it.
Fixes:
327868212381 ("make skb_copy_datagram_msg() et.al. preserve->msg_iter on error")
Signed-off-by: Ding Tianhong <dingtianhong@huawei.com>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Williamson [Thu, 13 Apr 2017 20:10:15 +0000 (14:10 -0600)]
vfio/type1: Remove locked page accounting workqueue
commit
0cfef2b7410b64d7a430947e0b533314c4f97153 upstream.
If the mmap_sem is contented then the vfio type1 IOMMU backend will
defer locked page accounting updates to a workqueue task. This has a
few problems and depending on which side the user tries to play, they
might be over-penalized for unmaps that haven't yet been accounted or
race the workqueue to enter more mappings than they're allowed. The
original intent of this workqueue mechanism seems to be focused on
reducing latency through the ioctl, but we cannot do so at the cost
of correctness. Remove this workqueue mechanism and update the
callers to allow for failure. We can also now recheck the limit under
write lock to make sure we don't exceed it.
vfio_pin_pages_remote() also now necessarily includes an unwind path
which we can jump to directly if the consecutive page pinning finds
that we're exceeding the user's memory limits. This avoids the
current lazy approach which does accounting and mapping up to the
fault, only to return an error on the next iteration to unwind the
entire vfio_dma.
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dennis Yang [Tue, 18 Apr 2017 07:27:06 +0000 (15:27 +0800)]
dm thin: fix a memory leak when passing discard bio down
commit
948f581a53b704b984aa20df009f0a2b4cf7f907 upstream.
dm-thin does not free the discard_parent bio after all chained sub
bios have finished. The following kmemleak report could be observed after a
pool with the discard_passdown option processes discard bios on
linux v4.11-rc7. To fix this, we drop the discard_parent bio reference
when its endio (passdown_endio) is called.
unreferenced object 0xffff8803d6b29700 (size 256):
comm "kworker/u8:0", pid 30349, jiffies
4379504020 (age 143002.776s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
01 00 00 00 00 00 00 f0 00 00 00 00 00 00 00 00 ................
backtrace:
[<
ffffffff81a5efd9>] kmemleak_alloc+0x49/0xa0
[<
ffffffff8114ec34>] kmem_cache_alloc+0xb4/0x100
[<
ffffffff8110eec0>] mempool_alloc_slab+0x10/0x20
[<
ffffffff8110efa5>] mempool_alloc+0x55/0x150
[<
ffffffff81374939>] bio_alloc_bioset+0xb9/0x260
[<
ffffffffa018fd20>] process_prepared_discard_passdown_pt1+0x40/0x1c0 [dm_thin_pool]
[<
ffffffffa018b409>] break_up_discard_bio+0x1a9/0x200 [dm_thin_pool]
[<
ffffffffa018b484>] process_discard_cell_passdown+0x24/0x40 [dm_thin_pool]
[<
ffffffffa018b24d>] process_discard_bio+0xdd/0xf0 [dm_thin_pool]
[<
ffffffffa018ecf6>] do_worker+0xa76/0xd50 [dm_thin_pool]
[<
ffffffff81086239>] process_one_work+0x139/0x370
[<
ffffffff810867b1>] worker_thread+0x61/0x450
[<
ffffffff8108b316>] kthread+0xd6/0xf0
[<
ffffffff81a6cd1f>] ret_from_fork+0x3f/0x70
[<
ffffffffffffffff>] 0xffffffffffffffff
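A minimal sketch of where the dropped reference goes, assuming the passdown_endio() completion named above; the rest of the passdown bookkeeping is elided.
#include <linux/bio.h>
/* Sketch: completion for the discard_parent bio (bookkeeping elided). */
static void passdown_endio(struct bio *bio)
{
        /* ...queue the next passdown stage for the owning mapping here... */
        /* Release the reference held since discard_parent was allocated. */
        bio_put(bio);
}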
Signed-off-by: Dennis Yang <dennisyang@qnap.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Thu, 27 Apr 2017 17:11:19 +0000 (10:11 -0700)]
dm rq: check blk_mq_register_dev() return value in dm_mq_init_request_queue()
commit
23a601248958fa4142d49294352fe8d1fdf3e509 upstream.
Otherwise the request-based DM blk-mq request_queue will be put into
service without being properly exported via sysfs.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Somasundaram Krishnasamy [Fri, 7 Apr 2017 19:14:55 +0000 (12:14 -0700)]
dm era: save spacemap metadata root after the pre-commit
commit
117aceb030307dcd431fdcff87ce988d3016c34a upstream.
When committing era metadata to disk, it doesn't always save the latest
spacemap metadata root in the superblock. Due to this, the metadata sometimes
gets corrupted when reopening the device. The correct order of updates
should be: pre-commit (shadows the spacemap root), save the spacemap root
(newly shadowed block) to the in-core superblock, and then the final commit.
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gary R Hook [Fri, 21 Apr 2017 15:50:14 +0000 (10:50 -0500)]
crypto: ccp - Change ISR handler method for a v5 CCP
commit
6263b51eb3190d30351360fd168959af7e3a49a9 upstream.
The CCP has the ability to perform several operations simultaneously,
but has only one interrupt. When implemented as a PCI device and using
MSI-X/MSI interrupts, use a tasklet model to service interrupts. By
disabling and enabling interrupts from the CCP, coupled with the
queuing that tasklets provide, we can ensure that all events
(occurring on the device) are recognized and serviced.
This change fixes a problem wherein 2 or more busy queues can cause
notification bits to change state while a (CCP) interrupt is being
serviced, but after the queue state has been evaluated. This results
in the event being 'lost' and the queue hanging, waiting to be
serviced. Since the status bits are never fully de-asserted, the
CCP never generates another interrupt (all bits zero -> one or more
bits one), and no further CCP operations will be executed.
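A generic, hedged sketch of the interrupt-plus-tasklet pattern described above; every name below is illustrative rather than the ccp driver's actual symbols. The hard IRQ handler masks device interrupts and defers, and the tasklet services all pending queue events before unmasking, so a notification that arrives mid-service is not lost.
#include <linux/interrupt.h>
/* Illustrative device state, not the real struct ccp_device. */
struct ccp_sketch {
        struct tasklet_struct irq_tasklet;
};
static void ccp_sketch_mask_irqs(struct ccp_sketch *ccp)      { /* write IRQ mask reg */ }
static void ccp_sketch_unmask_irqs(struct ccp_sketch *ccp)    { /* write IRQ mask reg */ }
static void ccp_sketch_service_queues(struct ccp_sketch *ccp) { /* handle all pending events */ }
static irqreturn_t ccp_sketch_isr(int irq, void *data)
{
        struct ccp_sketch *ccp = data;
        /* Keep the device quiet until the deferred work has drained it. */
        ccp_sketch_mask_irqs(ccp);
        tasklet_schedule(&ccp->irq_tasklet);
        return IRQ_HANDLED;
}
static void ccp_sketch_tasklet(unsigned long data)
{
        struct ccp_sketch *ccp = (struct ccp_sketch *)data;
        ccp_sketch_service_queues(ccp);
        ccp_sketch_unmask_irqs(ccp);
}
The tasklet would be set up once at probe time with tasklet_init(&ccp->irq_tasklet, ccp_sketch_tasklet, (unsigned long)ccp).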
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gary R Hook [Fri, 21 Apr 2017 15:50:05 +0000 (10:50 -0500)]
crypto: ccp - Change ISR handler method for a v3 CCP
commit
7b537b24e76a1e8e6d7ea91483a45d5b1426809b upstream.
The CCP has the ability to perform several operations simultaneously,
but has only one interrupt. When implemented as a PCI device and using
MSI-X/MSI interrupts, use a tasklet model to service interrupts. By
disabling and enabling interrupts from the CCP, coupled with the
queuing that tasklets provide, we can ensure that all events
(occurring on the device) are recognized and serviced.
This change fixes a problem wherein 2 or more busy queues can cause
notification bits to change state while a (CCP) interrupt is being
serviced, but after the queue state has been evaluated. This results
in the event being 'lost' and the queue hanging, waiting to be
serviced. Since the status bits are never fully de-asserted, the
CCP never generates another interrupt (all bits zero -> one or more
bits one), and no further CCP operations will be executed.
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gary R Hook [Thu, 20 Apr 2017 20:24:22 +0000 (15:24 -0500)]
crypto: ccp - Disable interrupts early on unload
commit
116591fe3eef11c6f06b662c9176385f13891183 upstream.
Ensure that we disable interrupts first when shutting down
the driver.
Signed-off-by: Gary R Hook <ghook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gary R Hook [Thu, 20 Apr 2017 20:24:09 +0000 (15:24 -0500)]
crypto: ccp - Use only the relevant interrupt bits
commit
56467cb11cf8ae4db9003f54b3d3425b5f07a10a upstream.
Each CCP queue can produce interrupts for 4 conditions:
operation complete, queue empty, error, and queue stopped.
This driver only works with completion and error events.
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephan Mueller [Mon, 24 Apr 2017 09:15:23 +0000 (11:15 +0200)]
crypto: algif_aead - Require setkey before accept(2)
commit
2a2a251f110576b1d89efbd0662677d7e7db21a8 upstream.
Some cipher implementations will crash if you try to use them
without calling setkey first. This patch adds a check so that
the accept(2) call will fail with -ENOKEY if setkey hasn't been
done on the socket yet.
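A minimal, hedged sketch of such a gate; the flag and structure are illustrative stand-ins for the af_alg socket state.
#include <linux/errno.h>
#include <linux/types.h>
/* Illustrative per-socket state; real code tracks this in the af_alg context. */
struct aead_sock_sketch {
        bool key_set;   /* flipped to true by a successful setkey() */
};
/* Sketch: called from the accept(2) path before a request socket is created. */
static int aead_accept_allowed(const struct aead_sock_sketch *st)
{
        if (!st->key_set)
                return -ENOKEY;
        return 0;
}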
Fixes:
400c40cf78da ("crypto: algif - add AEAD support")
Signed-off-by: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mike Snitzer [Sat, 22 Apr 2017 21:22:09 +0000 (17:22 -0400)]
block: fix blk_integrity_register to use template's interval_exp if not 0
commit
2859323e35ab5fc42f351fbda23ab544eaa85945 upstream.
When registering an integrity profile: if the template's interval_exp is
not 0 use it, otherwise use the ilog2() of logical block size of the
provided gendisk.
This fixes a long-standing DM linear target bug where it cannot pass
integrity data to the underlying device if its logical block size
conflicts with the underlying device's logical block size.
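The selection logic reduces to a small sketch like the one below; the field names follow struct blk_integrity, but treat this as a paraphrase of the fix rather than the exact patch.
#include <linux/blkdev.h>
#include <linux/log2.h>
/* Sketch: prefer the template's interval; otherwise derive it from the disk. */
static void integrity_set_interval(struct blk_integrity *bi,
                                   const struct blk_integrity *template,
                                   struct gendisk *disk)
{
        bi->interval_exp = template->interval_exp ?
                           template->interval_exp :
                           ilog2(queue_logical_block_size(disk->queue));
}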
Reported-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marc Zyngier [Thu, 27 Apr 2017 18:06:48 +0000 (19:06 +0100)]
arm64: KVM: Fix decoding of Rt/Rt2 when trapping AArch32 CP accesses
commit
c667186f1c01ca8970c785888868b7ffd74e51ee upstream.
Our 32bit CP14/15 handling inherited some of the ARMv7 code for handling
the trapped system registers, completely missing the fact that the
fields for Rt and Rt2 are now 5 bits wide, and not 4...
Let's fix it, and provide an accessor for the most common Rt case.
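For reference, a hedged sketch of the wider field extraction, assuming the ESR_EL2 ISS layout for trapped AArch32 MCR/MRC and MCRR/MRRC accesses (Rt in bits [9:5], Rt2 in bits [14:10]); the macro names are illustrative.
#include <linux/types.h>
/* Illustrative: Rt and Rt2 are 5-bit ISS fields, so mask with 0x1f, not 0xf. */
#define ISS_CP_RT(esr)          (((esr) >> 5) & 0x1f)
#define ISS_CP_RT2(esr)         (((esr) >> 10) & 0x1f)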
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrew Jones [Tue, 18 Apr 2017 15:59:58 +0000 (17:59 +0200)]
KVM: arm/arm64: fix races in kvm_psci_vcpu_on
commit
6c7a5dce22b3f3cc44be098e2837fa6797edb8b8 upstream.
Fix potential races in kvm_psci_vcpu_on() by taking the kvm->lock
mutex. In general, it's a bad idea to allow more than one PSCI_CPU_ON
to process the same target VCPU at the same time. One such problem
that may arise is that one PSCI_CPU_ON could be resetting the target
vcpu, which fills the entire sys_regs array with a temporary value
including the MPIDR register, while another looks up the VCPU based
on the MPIDR value, resulting in no target VCPU found. Resolves both
races found with the kvm-unit-tests/arm/psci unit test.
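A simplified, hedged sketch of the serialization; the inner helper stands in for the real reset-and-wake logic, and only the kvm->lock usage reflects the fix.
#include <linux/kvm_host.h>
/* Stand-in for the real PSCI_CPU_ON work (reset target, set PC, wake it). */
static unsigned long do_psci_cpu_on_sketch(struct kvm_vcpu *vcpu)
{
        return 0;       /* PSCI_RET_SUCCESS */
}
/* Sketch: make concurrent PSCI_CPU_ON calls for the same VM mutually exclusive. */
static unsigned long handle_psci_cpu_on(struct kvm_vcpu *vcpu)
{
        unsigned long ret;
        mutex_lock(&vcpu->kvm->lock);
        ret = do_psci_cpu_on_sketch(vcpu);
        mutex_unlock(&vcpu->kvm->lock);
        return ret;
}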
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Christoffer Dall <cdall@linaro.org>
Reported-by: Levente Kurusa <lkurusa@redhat.com>
Suggested-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Andrew Jones <drjones@redhat.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Hildenbrand [Thu, 23 Mar 2017 10:46:03 +0000 (11:46 +0100)]
KVM: x86: fix user triggerable warning in kvm_apic_accept_events()
commit
28bf28887976d8881a3a59491896c718fade7355 upstream.
If we already entered/are about to enter SMM, don't allow switching to
INIT/SIPI_RECEIVED, otherwise the next call to kvm_apic_accept_events()
will report a warning.
Same applies if we are already in MP state INIT_RECEIVED and SMM is
requested to be turned on. Refuse to set the VCPU events in this case.
Fixes:
cd7764fe9f73 ("KVM: x86: latch INITs while in system management mode")
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vince Weaver [Tue, 2 May 2017 18:08:50 +0000 (14:08 -0400)]
perf/x86: Fix Broadwell-EP DRAM RAPL events
commit
33b88e708e7dfa58dc896da2a98f5719d2eb315c upstream.
It appears as though the Broadwell-EP DRAM units share the special
units quirk with Haswell-EP/KNL.
Without this patch, you get really high results (a single DRAM using 20W
of power).
The powercap driver in drivers/powercap/intel_rapl.c already has this
change.
Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Kan Liang <kan.liang@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@gmail.com>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Richard Weinberger [Fri, 31 Mar 2017 22:41:57 +0000 (00:41 +0200)]
um: Fix PTRACE_POKEUSER on x86_64
commit
9abc74a22d85ab29cef9896a2582a530da7e79bf upstream.
This has been broken forever, but sadly nobody noticed.
Recent versions of GDB set DR_CONTROL unconditionally and
UML dies due to a heap corruption. It turns out that
the PTRACE_POKEUSER code was copy&pasted from i386 and assumes
that addresses are 4 bytes long.
Fix that by using 8 as the address size in the calculation.
Reported-by: jie cao <cj3054@gmail.com>
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ben Hutchings [Tue, 9 May 2017 17:00:43 +0000 (18:00 +0100)]
x86, pmem: Fix cache flushing for iovec write < 8 bytes
commit
8376efd31d3d7c44bd05be337adde023cc531fa1 upstream.
Commit
11e63f6d920d added cache flushing for unaligned writes from an
iovec, covering the first and last cache line of a >= 8 byte write and
the first cache line of a < 8 byte write. But an unaligned write of
2-7 bytes can still cover two cache lines, so make sure we flush both
in that case.
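The arithmetic behind that statement is easy to sketch generically; the flush primitive below is an illustrative stand-in for the kernel's cache-writeback helper, and a 64-byte line size is assumed.
#include <stddef.h>
#include <stdint.h>
#define CACHELINE_SIZE 64UL
/* Illustrative stand-in for the real cache-writeback instruction/helper. */
static void flush_cacheline(uintptr_t line) { (void)line; }
/* Flush every cache line touched by an unaligned write of [dst, dst + len). */
static void flush_written_span(const void *dst, size_t len)
{
        uintptr_t first = (uintptr_t)dst & ~(CACHELINE_SIZE - 1);
        uintptr_t last  = ((uintptr_t)dst + len - 1) & ~(CACHELINE_SIZE - 1);
        flush_cacheline(first);
        if (last != first)      /* e.g. a 4-byte write starting at offset 62 */
                flush_cacheline(last);
}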
Fixes:
11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andy Lutomirski [Wed, 22 Mar 2017 21:32:29 +0000 (14:32 -0700)]
selftests/x86/ldt_gdt_32: Work around a glibc sigaction() bug
commit
65973dd3fd31151823f4b8c289eebbb3fb7e6bc0 upstream.
i386 glibc is buggy and calls the sigaction syscall incorrectly.
This is asymptomatic for normal programs, but it blows up on
programs that do evil things with segmentation. The ldt_gdt
self-test is an example of such an evil program.
This doesn't appear to be a regression -- I think I just got lucky
with the uninitialized memory that glibc threw at the kernel when I
wrote the test.
This hackish fix manually issues sigaction(2) syscalls to undo the
damage. Without the fix, ldt_gdt_32 segfaults; with the fix, it
passes for me.
See: https://sourceware.org/bugzilla/show_bug.cgi?id=21269
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Garnier <thgarnie@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/aaab0f9f93c9af25396f01232608c163a760a668.1490218061.git.luto@kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ashish Kalra [Wed, 19 Apr 2017 15:20:15 +0000 (20:50 +0530)]
x86/boot: Fix BSS corruption/overwrite bug in early x86 kernel startup
commit
d594aa0277e541bb997aef0bc0a55172d8138340 upstream.
The minimum size for a new stack (512 bytes) set up for arch/x86/boot components
when the bootloader does not set up/provide a stack for the early boot components
is not "enough".
The setup code executing as part of early kernel startup code, uses the stack
beyond 512 bytes and accidentally overwrites and corrupts part of the BSS
section. This is exposed mostly in the early video setup code, where
it was corrupting BSS variables like force_x, force_y, which in turn affected
kernel parameters such as screen_info (screen_info.orig_video_cols) and
later caused an exception/panic in console_init().
Most recent boot loaders setup the stack for early boot components, so this
stack overwriting into BSS section issue has not been exposed.
Signed-off-by: Ashish Kalra <ashish@bluestacks.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20170419152015.10011-1-ashishkalra@Ashishs-MacBook-Pro.local
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guenter Roeck [Mon, 20 Mar 2017 21:30:50 +0000 (14:30 -0700)]
usb: hub: Do not attempt to autosuspend disconnected devices
commit
f5cccf49428447dfbc9edb7a04bb8fc316269781 upstream.
While running a bind/unbind stress test with the dwc3 usb driver on rk3399,
the following crash was observed.
Unable to handle kernel NULL pointer dereference at virtual address 00000218
pgd = ffffffc00165f000
[00000218] *pgd=000000000174f003, *pud=000000000174f003,
*pmd=0000000001750003, *pte=00e8000001751713
Internal error: Oops: 96000005 [#1] PREEMPT SMP
Modules linked in: uinput uvcvideo videobuf2_vmalloc cmac
ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat rfcomm
xt_mark fuse bridge stp llc zram btusb btrtl btbcm btintel bluetooth
ip6table_filter mwifiex_pcie mwifiex cfg80211 cdc_ether usbnet r8152 mii joydev
snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq snd_seq_device ppp_async
ppp_generic slhc tun
CPU: 1 PID: 29814 Comm: kworker/1:1 Not tainted 4.4.52 #507
Hardware name: Google Kevin (DT)
Workqueue: pm pm_runtime_work
task: ffffffc0ac540000 ti: ffffffc0af4d4000 task.ti: ffffffc0af4d4000
PC is at autosuspend_check+0x74/0x174
LR is at autosuspend_check+0x70/0x174
...
Call trace:
[<ffffffc00080dcc0>] autosuspend_check+0x74/0x174
[<ffffffc000810500>] usb_runtime_idle+0x20/0x40
[<ffffffc000785ae0>] __rpm_callback+0x48/0x7c
[<ffffffc000786af0>] rpm_idle+0x1e8/0x498
[<ffffffc000787cdc>] pm_runtime_work+0x88/0xcc
[<ffffffc000249bb8>] process_one_work+0x390/0x6b8
[<ffffffc00024abcc>] worker_thread+0x480/0x610
[<ffffffc000251a80>] kthread+0x164/0x178
[<ffffffc0002045d0>] ret_from_fork+0x10/0x40
Source:
(gdb) l *0xffffffc00080dcc0
0xffffffc00080dcc0 is in autosuspend_check (drivers/usb/core/driver.c:1778).
1773 /* We don't need to check interfaces that are
1774 * disabled for runtime PM. Either they are unbound
1775 * or else their drivers don't support autosuspend
1776 * and so they are permanently active.
1777 */
1778 if (intf->dev.power.disable_depth)
1779 continue;
1780 if (atomic_read(&intf->dev.power.usage_count) > 0)
1781 return -EBUSY;
1782 w |= intf->needs_remote_wakeup;
Code analysis shows that intf is set to NULL in usb_disable_device() prior
to setting actconfig to NULL. At the same time, usb_runtime_idle() does not
lock the usb device, and neither does any of the functions in the
traceback. This means that there is no protection against a race condition
where usb_disable_device() is removing dev->actconfig->interface[] pointers
while those are being accessed from autosuspend_check().
To solve the problem, synchronize and validate device state between
autosuspend_check() and usb_disconnect().
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guenter Roeck [Mon, 20 Mar 2017 18:16:11 +0000 (11:16 -0700)]
usb: hub: Fix error loop seen after hub communication errors
commit
245b2eecee2aac6fdc77dcafaa73c33f9644c3c7 upstream.
While stress testing a usb controller using a bind/unbind loop, the
following error loop was observed.
usb 7-1.2: new low-speed USB device number 3 using xhci-hcd
usb 7-1.2: hub failed to enable device, error -108
usb 7-1-port2: cannot disable (err = -22)
usb 7-1-port2: couldn't allocate usb_device
usb 7-1-port2: cannot disable (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
** 57 printk messages dropped ** hub 7-1:1.0: activate --> -22
** 82 printk messages dropped ** hub 7-1:1.0: hub_ext_port_status failed (err = -22)
This continues forever. After adding tracebacks into the code,
the call sequence leading to this is found to be as follows.
[<ffffffc0007fc8e0>] hub_activate+0x368/0x7b8
[<ffffffc0007fceb4>] hub_resume+0x2c/0x3c
[<ffffffc00080b3b8>] usb_resume_interface.isra.6+0x128/0x158
[<ffffffc00080b5d0>] usb_suspend_both+0x1e8/0x288
[<ffffffc00080c9c4>] usb_runtime_suspend+0x3c/0x98
[<ffffffc0007820a0>] __rpm_callback+0x48/0x7c
[<ffffffc00078217c>] rpm_callback+0xa8/0xd4
[<ffffffc000786234>] rpm_suspend+0x84/0x758
[<ffffffc000786ca4>] rpm_idle+0x2c8/0x498
[<ffffffc000786ed4>] __pm_runtime_idle+0x60/0xac
[<ffffffc00080eba8>] usb_autopm_put_interface+0x6c/0x7c
[<ffffffc000803798>] hub_event+0x10ac/0x12ac
[<ffffffc000249bb8>] process_one_work+0x390/0x6b8
[<ffffffc00024abcc>] worker_thread+0x480/0x610
[<ffffffc000251a80>] kthread+0x164/0x178
[<ffffffc0002045d0>] ret_from_fork+0x10/0x40
kick_hub_wq() is called from hub_activate() even after failures to
communicate with the hub. This results in an endless sequence of
hub event -> hub activate -> wq trigger -> hub event -> ...
Provide two solutions for the problem.
- Only trigger the hub event queue if communication with the hub
is successful.
- After a suspend failure, only resume already suspended interfaces
if the communication with the device is still possible.
Each of the changes fixes the observed problem. Use both to improve
robustness.
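A schematic, hedged sketch of the first change; the real type and kick_hub_wq() helper live in drivers/usb/core/hub.c, and illustrative stand-ins are used here so the idea stays self-contained.
/* Illustrative stand-ins for the hub.c type and requeue helper. */
struct hub_sketch { int dummy; };
static void kick_hub_wq_sketch(struct hub_sketch *hub) { (void)hub; }
/* Sketch: at the end of hub_activate(), requeue hub work only on success. */
static void hub_activate_finish(struct hub_sketch *hub, int status)
{
        if (status == 0)
                kick_hub_wq_sketch(hub);
        /* On a communication error, do not feed the hub_event loop again. */
}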
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexey Brodkin [Thu, 13 Apr 2017 12:33:34 +0000 (15:33 +0300)]
usb: Make sure usb/phy/of gets built-in
commit
3d6159640da9c9175d1ca42f151fc1a14caded59 upstream.
The DWC3 driver uses of_usb_get_phy_mode(), which is
implemented in drivers/usb/phy/of.c, and in a bare minimal
configuration it might not be pulled into the kernel binary.
In case of ARC or ARM this could be easily reproduced with
"allnodefconfig" +CONFIG_USB=m +CONFIG_USB_DWC3=m.
On building all ends-up with:
---------------------->8------------------
Kernel: arch/arm/boot/Image is ready
Kernel: arch/arm/boot/zImage is ready
Building modules, stage 2.
MODPOST 5 modules
ERROR: "of_usb_get_phy_mode" [drivers/usb/dwc3/dwc3.ko] undefined!
make[1]: *** [__modpost] Error 1
make: *** [modules] Error 2
---------------------->8------------------
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Geert Uytterhoeven <geert+renesas@glider.be>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Felipe Balbi <balbi@kernel.org>
Cc: Felix Fietkau <nbd@nbd.name>
Cc: Jeremy Kerr <jk@ozlabs.org>
Cc: linux-snps-arc@lists.infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Romain Izard [Fri, 10 Mar 2017 13:11:41 +0000 (14:11 +0100)]
usb: gadget: legacy gadgets are optional
commit
6e253d0fbc665b36192b8ed3cecdbb65b413a1eb upstream.
With commit
bc49d1d17dcf ("usb: gadget: don't couple configfs to legacy
gadgets"),it is possible to build a modular kernel with both built-in
configfs support and modular legacy gadget drivers.
But when building a kernel without modules, it is also necessary to be
able to build with configfs but without any legacy gadget driver. This
was a possible configuration when the USB_CONFIGFS was a part of the
choice options, but not anymore.
Mark the choice for legacy gadget drivers as optional restores this.
Fixes:
bc49d1d17dcf ("usb: gadget: don't couple configfs to legacy gadgets")
Signed-off-by: Romain Izard <romain.izard.pro@gmail.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gustavo A. R. Silva [Tue, 4 Apr 2017 03:48:40 +0000 (22:48 -0500)]
usb: misc: add missing continue in switch
commit
2c930e3d0aed1505e86e0928d323df5027817740 upstream.
Add missing continue in switch.
Addresses-Coverity-ID: 1248733
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ian Abbott [Fri, 17 Feb 2017 11:09:09 +0000 (11:09 +0000)]
staging: comedi: jr3_pci: cope with jiffies wraparound
commit
8ec04a491825e08068e92bed0bba7821893b6433 upstream.
The timer expiry routine `jr3_pci_poll_dev()` checks for expiry by
checking whether the absolute value of `jiffies` (stored in local
variable `now`) is greater than the expected expiry time in jiffy units.
This will fail when `jiffies` wraps around. Also, it seems to make
sense to handle the expiry one jiffy earlier than the current test. Use
`time_after_eq()` to check for expiry.
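For illustration, a minimal wrap-safe expiry test of the kind the commit switches to; the function name is illustrative, while time_after_eq() is the real macro from <linux/jiffies.h>.
#include <linux/jiffies.h>
#include <linux/types.h>
/* Sketch: deadline stored in jiffies; safe across jiffies wraparound. */
static bool poll_deadline_reached(unsigned long next_time)
{
        /* "jiffies > next_time" breaks at wraparound; time_after_eq() does not. */
        return time_after_eq(jiffies, next_time);
}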
Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ian Abbott [Fri, 17 Feb 2017 11:09:08 +0000 (11:09 +0000)]
staging: comedi: jr3_pci: fix possible null pointer dereference
commit
45292be0b3db0b7f8286683b376e2d9f949d11f9 upstream.
For some reason, the driver does not consider allocation of the
subdevice private data to be a fatal error when attaching the COMEDI
device. It tests the subdevice private data pointer for validity at
certain points, but omits some crucial tests. In particular,
`jr3_pci_auto_attach()` calls `jr3_pci_alloc_spriv()` to allocate and
initialize the subdevice private data, but the same function
subsequently dereferences the pointer to access the `next_time_min` and
`next_time_max` members without checking it first. The other missing
test is in the timer expiry routine `jr3_pci_poll_dev()`, but it will
crash before it gets that far.
Fix the bug by returning `-ENOMEM` from `jr3_pci_auto_attach()` as soon
as one of the calls to `jr3_pci_alloc_spriv()` returns `NULL`. The
COMEDI core will subsequently call `jr3_pci_detach()` to clean up.
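A self-contained sketch of the attach-time pattern, with an illustrative stand-in for the subdevice private data; the comedi plumbing is elided.
#include <linux/errno.h>
#include <linux/slab.h>
/* Illustrative stand-in for the jr3_pci subdevice private data. */
struct spriv_sketch {
        unsigned long next_time_min;
        unsigned long next_time_max;
};
/* Sketch: fail the attach instead of continuing with a NULL pointer. */
static int alloc_subdev_state(struct spriv_sketch **out)
{
        struct spriv_sketch *p = kzalloc(sizeof(*p), GFP_KERNEL);
        if (!p)
                return -ENOMEM; /* the core then runs the detach path to clean up */
        *out = p;               /* now safe to use next_time_min/next_time_max */
        return 0;
}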
Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Wed, 26 Apr 2017 10:23:04 +0000 (12:23 +0200)]
staging: gdm724x: gdm_mux: fix use-after-free on module unload
commit
b58f45c8fc301fe83ee28cad3e64686c19e78f1c upstream.
Make sure to deregister the USB driver before releasing the tty driver
to avoid use-after-free in the USB disconnect callback where the tty
devices are deregistered.
Fixes:
61e121047645 ("staging: gdm7240: adding LTE USB driver")
Cc: Won Kang <wkang77@gmail.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Malcolm Priestley [Sat, 22 Apr 2017 10:14:57 +0000 (11:14 +0100)]
staging: vt6656: use off stack for out buffer USB transfers.
commit
12ecd24ef93277e4e5feaf27b0b18f2d3828bc5e upstream.
Since 4.9 mandated that USB buffers be heap allocated, this causes the
driver to fail.
Since there is a wide range of buffer sizes, use kmemdup to create the
allocated buffer.
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Malcolm Priestley [Sat, 22 Apr 2017 10:14:58 +0000 (11:14 +0100)]
staging: vt6656: use off stack for in buffer USB transfers.
commit
05c0cf88bec588a7cb34de569acd871ceef26760 upstream.
Since 4.9 mandated that USB buffers be heap allocated, this causes
the driver to fail.
Create a heap-allocated buffer for USB transfers.
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bjørn Mork [Fri, 21 Apr 2017 08:01:29 +0000 (10:01 +0200)]
USB: Revert "cdc-wdm: fix "out-of-sync" due to missing notifications"
commit
19445816996d1a89682c37685fe95959631d9f32 upstream.
This reverts commit
833415a3e781 ("cdc-wdm: fix "out-of-sync" due to
missing notifications")
There have been several reports of wdm_read returning unexpected EIO
errors with QMI devices using the qmi_wwan driver. The reporters
confirm that reverting prevents these errors. I have been unable to
reproduce the bug myself, and have no explanation to offer either. But
reverting is the safe choice here, given that the commit was an
attempt to work around a firmware problem. Living with a firmware
problem is still better than adding driver bugs.
Reported-by: Kasper Holtze <kasper@holtze.dk>
Reported-by: Aleksander Morgado <aleksander@aleksander.es>
Reported-by: Daniele Palmas <dnlplm@gmail.com>
Fixes:
833415a3e781 ("cdc-wdm: fix "out-of-sync" due to missing notifications")
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Acked-by: Oliver Neukum <oneukum@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ajay Kaher [Tue, 28 Mar 2017 12:09:32 +0000 (08:09 -0400)]
USB: Proper handling of Race Condition when two USB class drivers try to call init_usb_class simultaneously
commit
2f86a96be0ccb1302b7eee7855dbee5ce4dc5dfb upstream.
There is a race condition when two USB class drivers try to call
init_usb_class at the same time, which leads to a crash.
code path: probe->usb_register_dev->init_usb_class
To solve this, mutex locking has been added in init_usb_class() and
destroy_usb_class().
As pointed out by Alan, the "if (usb_class)" test was removed from
destroy_usb_class() because usb_class can never be NULL there.
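A generic, hedged sketch of the lazy-init-under-mutex pattern the fix applies; the structure and names below are simplified stand-ins for the usb_class object.
#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/slab.h>
static DEFINE_MUTEX(init_usb_class_mutex);
/* Illustrative stand-in for the refcounted usb_class singleton. */
static struct usb_class_sketch {
        int refcount;
} *usb_class_obj;
/* Sketch: two probes may race here; the mutex makes init/teardown atomic. */
static int init_usb_class_sketch(void)
{
        int ret = 0;
        mutex_lock(&init_usb_class_mutex);
        if (usb_class_obj) {
                usb_class_obj->refcount++;
                goto out;
        }
        usb_class_obj = kzalloc(sizeof(*usb_class_obj), GFP_KERNEL);
        if (!usb_class_obj) {
                ret = -ENOMEM;
                goto out;
        }
        usb_class_obj->refcount = 1;
out:
        mutex_unlock(&init_usb_class_mutex);
        return ret;
}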
Signed-off-by: Ajay Kaher <ajay.kaher@samsung.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marek Vasut [Tue, 18 Apr 2017 18:07:56 +0000 (20:07 +0200)]
USB: serial: ftdi_sio: add device ID for Microsemi/Arrow SF2PLUS Dev Kit
commit
31c5d1922b90ddc1da6a6ddecef7cd31f17aa32b upstream.
This development kit has an FT4232 on it with a custom USB VID/PID.
The FT4232 provides four UARTs, but only two are used. The UART 0
is used by the FlashPro5 programmer and UART 2 is connected to the
SmartFusion2 CortexM3 SoC UART port.
Note that the USB VID is registered to Actel according to Linux USB
VID database, but that was acquired by Microsemi.
Signed-off-by: Marek Vasut <marex@denx.de>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>