Steven Rostedt (Red Hat) [Wed, 24 Jul 2013 02:06:15 +0000 (22:06 -0400)]
ftrace: Add check for NULL regs if ops has SAVE_REGS set
commit
195a8afc7ac962f8da795549fe38e825f1372b0d upstream.
If a ftrace ops is registered with the SAVE_REGS flag set, and there's
already a ops registered to one of its functions but without the
SAVE_REGS flag, there's a small race window where the SAVE_REGS ops gets
added to the list of callbacks to call for that function before the
callback trampoline gets set to save the regs.
The problem is that the function is not currently saving regs, which opens
a small race window in which the ops that expects regs to be passed
to it won't get them. This can cause a crash if the callback were to
reference the regs, as SAVE_REGS guarantees that regs will be set.
To fix this, add a check in the loop case: if the ops has the SAVE_REGS
flag set but regs is not set, that ops is skipped.
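Roughly, the added check amounts to the following (a minimal userspace
sketch with made-up struct and flag names, not the actual ftrace code):
  #include <stddef.h>

  #define FL_SAVE_REGS 0x1          /* stand-in for the SAVE_REGS flag */

  struct pt_regs;                   /* opaque for this sketch */

  struct ops {
      unsigned long flags;
      void (*func)(unsigned long ip, struct pt_regs *regs);
      struct ops *next;
  };

  /* Walk the callbacks registered for a function.  The fix is the skip:
   * an ops relying on the SAVE_REGS guarantee is never called with a
   * NULL regs pointer. */
  static void call_ops_list(struct ops *list, unsigned long ip,
                            struct pt_regs *regs)
  {
      struct ops *op;

      for (op = list; op; op = op->next) {
          if ((op->flags & FL_SAVE_REGS) && regs == NULL)
              continue;
          op->func(ip, regs);
      }
  }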
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oleg Nesterov [Tue, 23 Jul 2013 15:26:10 +0000 (17:26 +0200)]
tracing: Change tracing_fops/snapshot_fops to rely on tracing_get_cpu()
commit
6484c71cbc170634fa131b6d022d86d61686b88b upstream.
tracing_open() and tracing_snapshot_open() are racy, the memory
inode->i_private points to can be already freed.
Convert these last users of "inode->i_private == trace_cpu" to
use "i_private = trace_array" and rely on tracing_get_cpu().
v2: incorporate the fix from Steven, tracing_release() must not
blindly dereference file->private_data unless we know that
the file was opened for reading.
Link: http://lkml.kernel.org/r/20130723152610.GA23737@redhat.com
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oleg Nesterov [Tue, 23 Jul 2013 15:26:06 +0000 (17:26 +0200)]
tracing: Change tracing_entries_fops to rely on tracing_get_cpu()
commit
0bc392ee46d0fd8e6b678457ef71f074f19a03c5 upstream.
tracing_open_generic_tc() is racy, the memory inode->i_private
points to can be already freed.
1. Change its last user, tracing_entries_fops, to use
tracing_*_generic_tr() instead.
2. Change debugfs_create_file("buffer_size_kb", data) callers
to pass "data = tr".
3. Change tracing_entries_read() and tracing_entries_write() to
use tracing_get_cpu().
4. Kill the no longer used tracing_open_generic_tc() and
tracing_release_generic_tc().
Link: http://lkml.kernel.org/r/20130723152606.GA23730@redhat.com
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oleg Nesterov [Tue, 23 Jul 2013 15:26:03 +0000 (17:26 +0200)]
tracing: Change tracing_stats_fops to rely on tracing_get_cpu()
commit
4d3435b8a4c3357695e09c5e7a3bf73a19fca5b0 upstream.
tracing_open_generic_tc() is racy, the memory inode->i_private
points to can be already freed.
1. Change one of its users, tracing_stats_fops, to use
tracing_*_generic_tr() instead.
2. Change trace_create_cpu_file("stats", data) to pass "data = tr".
3. Change tracing_stats_read() to use tracing_get_cpu().
Link: http://lkml.kernel.org/r/20130723152603.GA23727@redhat.com
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oleg Nesterov [Tue, 23 Jul 2013 15:26:00 +0000 (17:26 +0200)]
tracing: Change tracing_buffers_fops to rely on tracing_get_cpu()
commit
46ef2be0d1d5ccea0c41bb606143586daadd537c upstream.
tracing_buffers_open() is racy, the memory inode->i_private points
to can be already freed.
Change debugfs_create_file("trace_pipe_raw", data) caller to pass
"data = tr", tracing_buffers_open() can use tracing_get_cpu().
Change debugfs_create_file("snapshot_raw_fops", data) caller too,
this file uses tracing_buffers_open/release.
Link: http://lkml.kernel.org/r/20130723152600.GA23720@redhat.com
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oleg Nesterov [Tue, 23 Jul 2013 15:25:57 +0000 (17:25 +0200)]
tracing: Change tracing_pipe_fops() to rely on tracing_get_cpu()
commit
15544209cb0b5312e5220a9337a1fe61d1a1f2d9 upstream.
tracing_open_pipe() is racy, the memory inode->i_private points to
can be already freed.
Change debugfs_create_file("trace_pipe", data) callers to to pass
"data = tr", tracing_open_pipe() can use tracing_get_cpu().
Link: http://lkml.kernel.org/r/20130723152557.GA23717@redhat.com
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oleg Nesterov [Tue, 23 Jul 2013 15:25:54 +0000 (17:25 +0200)]
tracing: Introduce trace_create_cpu_file() and tracing_get_cpu()
commit
649e9c70da6bfbeb563193a35d3424a5aa7c0d38 upstream.
Every "file_operations" used by tracing_init_debugfs_percpu is buggy.
f_op->open/etc does:
1. struct trace_cpu *tc = inode->i_private;
struct trace_array *tr = tc->tr;
2. trace_array_get(tr) or fail;
3. do_something(tc);
But tc (and tr) can be already freed before trace_array_get() is called.
And it doesn't matter whether this file is per-cpu or it was created by
init_tracer_debugfs(), free_percpu() or kfree() are equally bad.
Note that even 1. is not safe, the freed memory can be unmapped. But even
if it was safe trace_array_get() can wrongly succeed if we also race with
the next new_instance_create() which can re-allocate the same tr, or tc
was overwritten and ->tr points to the valid tr. In this case 3. uses the
freed/reused memory.
Add the new trivial helper, trace_create_cpu_file() which simply calls
trace_create_file() and encodes "cpu" in "struct inode". Another helper,
tracing_get_cpu() will be used to read cpu_nr-or-RING_BUFFER_ALL_CPUS.
The patch abuses ->i_cdev to encode the number, it is never used unless
the file is S_ISCHR(). But we could use something else, say, i_bytes or
even ->d_fsdata. In any case this hack is hidden inside these 2 helpers,
it would be trivial to change them if needed.
This patch only changes tracing_init_debugfs_percpu() to use the new
trace_create_cpu_file(), the next patches will change file_operations.
Note: tracing_get_cpu(inode) is always safe but you can't trust the
result unless trace_array_get() was called, without trace_types_lock
which acts as a barrier it can wrongly return RING_BUFFER_ALL_CPUS.
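A minimal standalone sketch of the encode/decode idea (made-up inode
stand-in; the kernel helpers differ in detail):
  #include <stdio.h>

  #define ALL_CPUS (-1)             /* stand-in for RING_BUFFER_ALL_CPUS */

  struct fake_inode {
      void *i_cdev;   /* unused unless the file is a character device */
  };

  static void encode_cpu(struct fake_inode *inode, long cpu)
  {
      inode->i_cdev = (void *)(cpu + 1);  /* cpu+1 so cpu 0 != "unset" */
  }

  static long decode_cpu(const struct fake_inode *inode)
  {
      if (!inode->i_cdev)
          return ALL_CPUS;                /* nothing encoded: all CPUs */
      return (long)inode->i_cdev - 1;
  }

  int main(void)
  {
      struct fake_inode ino = { 0 };

      printf("default: %ld\n", decode_cpu(&ino));  /* -1, i.e. ALL_CPUS */
      encode_cpu(&ino, 3);
      printf("cpu:     %ld\n", decode_cpu(&ino));  /* 3 */
      return 0;
  }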
Link: http://lkml.kernel.org/r/20130723152554.GA23710@redhat.com
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Masami Hiramatsu [Tue, 9 Jul 2013 09:35:26 +0000 (18:35 +0900)]
tracing/kprobe: Wait for disabling all running kprobe handlers
commit
a232e270dcb55a70ad3241bc6fc160fd9b5c9e6c upstream.
Wait for disabling all running kprobe handlers when a kprobe
event is disabled, since the caller, trace_remove_event_call()
supposes that a removing event is disabled completely by
disabling the event.
With this change, ftrace can ensure that there is no running
event handlers after disabling it.
Link: http://lkml.kernel.org/r/20130709093526.20138.93100.stgit@mhiramat-M0-7522
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Namhyung Kim [Fri, 7 Jun 2013 06:07:48 +0000 (15:07 +0900)]
tracing: Do not call kmem_cache_free() on allocation failure
commit
aaf6ac0f0871cb7fc0f28f3a00edf329bc7adc29 upstream.
There's no point calling it when _alloc() failed.
Link: http://lkml.kernel.org/r/1370585268-29169-1-git-send-email-namhyung@kernel.org
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Wed, 15 May 2013 09:44:49 +0000 (11:44 +0200)]
iwlwifi: mvm: adjust firmware D3 configuration API
commit
dfcb4c3aacedee6838e436fb575b31e138505203 upstream.
The D3 firmware API changed to include a new field, adjust
the driver to it to avoid getting an NMI when configuring.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Thu, 13 Jun 2013 14:06:08 +0000 (16:06 +0200)]
iwlwifi: bump required firmware API version for 3160/7260
commit
a2d0909a687b4d250cc2b7481072e361678745ba upstream.
As the firmware API has changed significantly and we don't
have support code for the old APIs, bump the version to be
able to release the version 7 API firmware. Unfortunately
this means that the driver in 3.9 and 3.10 can't work, but
that's still better than crashing the device/driver there.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Emmanuel Grumbach [Mon, 24 Jun 2013 12:44:03 +0000 (15:44 +0300)]
iwlwifi: mvm: unregister leds when registration failed
commit
b7327d89ae694a89f9934d428bde520b77b3131c upstream.
This was missing and prevented any further attempts
to load the module.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Emmanuel Grumbach [Thu, 13 Jun 2013 07:07:47 +0000 (10:07 +0300)]
iwlwifi: mvm: take the seqno from packet if transmit failed
commit
ebea2f32e814445f94f9e087b646f1cf4d55fa5a upstream.
The fw is unreliable in all the cases in which the packet
wasn't sent.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Emmanuel Grumbach [Sun, 9 Jun 2013 10:00:12 +0000 (13:00 +0300)]
iwlwifi: mvm: don't set the MCAST queue in STA's queue list
commit
837fb69f10588caafc883c4473a864660e1403ce upstream.
The MCAST queue should be enabled after DTIM only.
According to fw API, the MCAST must not be attached to any
station, but should appear in the mcast_qid of the AP's
mac context only.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Emmanuel Grumbach [Sun, 9 Jun 2013 09:59:24 +0000 (12:59 +0300)]
iwlwifi: mvm: properly tell the fw that a STA is awake
commit
5af01772ee1d6e96849adf728ff837bd71b119c0 upstream.
The firmware API wasn't being used correctly, fix that.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Emmanuel Grumbach [Sun, 2 Jun 2013 16:54:01 +0000 (19:54 +0300)]
iwlwifi: mvm: fix MCAST in AP mode
commit
9116a3683902583a302ac5dcb283416d504d9bb4 upstream.
In multicast, there is no retries nor RTS since there is no
specific recipient that can ACK or send CTS. This means
that we must not use the rate scale table for multicast
frames.
This is true for any frame that doesn't have a valid
ieee80211_sta pointer.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Emmanuel Grumbach [Sun, 26 May 2013 17:47:53 +0000 (20:47 +0300)]
iwlwifi: mvm: correctly configure MCAST in AP mode
commit
86a91ec757338edbce51de5dabd7afb0366f485c upstream.
The AP mode needs to use the MCAST fifo for the MCAST
frames sent after the DTIM. This fifo needs to be
configured with the same parameters as the VOICE FIFO.
A separate SCD queue is mapped to this fifo - the cab_queue
(cab stands for Content After Beacon). This queue isn't
connected to any station, but rather to the MAC context.
This queue should be (and already is) set as the MCAST
queue - this is part of the MAC context command.
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Reviewed-by: Ilan Peer <ilan.peer@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Samuel Ortiz [Fri, 3 May 2013 16:29:30 +0000 (18:29 +0200)]
NFC: llcp: Fix non blocking sockets connections
commit
b4011239a08e7e6c2c6e970dfa9e8ecb73139261 upstream.
Without the new LLCP_CONNECTING state, non-blocking sockets will be
woken up with a POLLHUP right after calling connect() because their
state is stuck at LLCP_CLOSED.
That prevents userspace from implementing any proper non-blocking
socket based NFC p2p client.
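A minimal sketch of the poll-mask idea behind the new state (simplified;
the helper name is made up and this is not the actual net/nfc/llcp code):
  #include <poll.h>

  enum llcp_state { LLCP_CLOSED, LLCP_CONNECTING, LLCP_CONNECTED };

  /* While a non-blocking connect() is still in progress, the socket
   * must not report POLLHUP; that only makes sense once the connection
   * really is closed. */
  static unsigned int llcp_poll_mask(enum llcp_state state)
  {
      unsigned int mask = 0;

      if (state == LLCP_CLOSED)
          mask |= POLLHUP;
      else if (state == LLCP_CONNECTED)
          mask |= POLLIN | POLLOUT;
      /* LLCP_CONNECTING: report nothing and let poll() keep waiting */

      return mask;
  }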
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicolas Ferre [Thu, 18 Apr 2013 08:13:21 +0000 (10:13 +0200)]
ARM: at91: at91sam9x5 RTC is not compatible with at91rm9200 one
commit
23fb05c688a8dcb0cf6a4d8d819cffeca82e5c54 upstream.
Due to a bug with RTC IMR, we cannot consider at91sam9x5 RTC compatible
with the previous one. Modify DT compatibility string, even if the driver
is not yet modified to take it into account.
Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vineet Gupta [Tue, 20 Aug 2013 08:08:11 +0000 (13:38 +0530)]
ARC: gdbserver breakage in Big-Endian configuration #2
[Based on mainline commit 352c1d95e3220d0: "ARC: stop using pt_regs->orig_r8"]
Stop using orig_r8 as it could get clobbered by ST in trap_with_param,
and further it is semantically not needed either.
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vineet Gupta [Tue, 20 Aug 2013 08:08:10 +0000 (13:38 +0530)]
ARC: gdbserver breakage in Big-Endian configuration #1
[Based on mainline commit
502a0c775c7f0a: "ARC: pt_regs update #5"]
gdbserver needs @stop_pc, served by ptrace, but fetched from pt_regs
differently, based on in_brkpt_traps(), which in turn relies on
additional machine state in pt_regs->event bitfield.
unsigned long orig_r8:16, event:16;
For big endian config, this macro was returning false, despite being in
breakpoint Trap exception, causing wrong @stop_pc to be returned to gdb.
Issue #1: In BE, @event above is at offset 2 in the word, while a STW insn
at offset 0 was used to update it. Resort to using an ST insn,
which updates the half-word at the right location.
Issue #2: The union involving bitfields causes all the members to be
laid out at offset 0. So with fix #1 above, ASM was now
updating at offset 2, "C" code was still referencing at
offset 0. Fixed by wrapping bitfield in a struct.
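A small standalone demonstration of the endianness effect behind issue #1
(illustrative only, not the ARC code): a value stored with a full-word
write lands in a different half-word on big-endian than on little-endian.
  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  int main(void)
  {
      /* Two 16-bit fields packed in one 32-bit word, as in
       *   unsigned long orig_r8:16, event:16;
       * A full-word store of a 16-bit value (like the original STW at
       * offset 0) ends up at a different byte offset depending on
       * endianness. */
      uint32_t word = 0x1234;       /* full-word store of "event" */
      uint16_t halves[2];

      memcpy(halves, &word, sizeof(word));
      printf("halves[0]=%#x halves[1]=%#x\n", halves[0], halves[1]);
      /* little-endian: halves[0]=0x1234, halves[1]=0
       * big-endian:    halves[0]=0,      halves[1]=0x1234 */
      return 0;
  }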
Reported-by: Noam Camus <noamc@ezchip.com>
Tested-by: Anton Kolesov <akolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rafael J. Wysocki [Wed, 7 Aug 2013 20:55:00 +0000 (22:55 +0200)]
ACPI: Try harder to resolve _ADR collisions for bridges
commit
60f75b8e97daf4a39790a20d962cb861b9220af5 upstream.
In theory, under a given ACPI namespace node there should be only
one child device object with _ADR whose value matches a given bus
address exactly. In practice, however, there are systems in which
multiple child device objects under a given parent have _ADR matching
exactly the same address. In those cases we use _STA to determine
which of the multiple matching devices is enabled, since some systems
are known to indicate which ACPI device object to associate with the
given physical (usually PCI) device this way.
Unfortunately, as it turns out, there are systems in which many
device objects under the same parent have _ADR matching exactly the
same bus address and none of them has _STA, in which case they all
should be regarded as enabled according to the spec. Still, if
those device objects are supposed to represent bridges (e.g. this
is the case for device objects corresponding to PCIe ports), we can
try harder and skip the ones that have no child device objects in the
ACPI namespace. With luck, we can avoid using device objects that we
are not expected to use this way.
Although this only works for bridges whose children also have ACPI
namespace representation, it is sufficient to address graphics
adapter detection issues on some systems, so rework the code finding
a matching device ACPI handle for a given bus address to implement
this idea.
Introduce a new function, acpi_find_child(), taking three arguments:
the ACPI handle of the device's parent, a bus address suitable for
the device's bus type and a bool indicating if the device is a
bridge and make it work as outlined above. Reimplement the function
currently used for this purpose, acpi_get_child(), as a call to
acpi_find_child() with the last argument set to 'false' and make
the PCI subsystem use acpi_find_child() with the bridge information
passed as the last argument to it. [Lan Tianyu notices that it is
not sufficient to use pci_is_bridge() for that, because the device's
subordinate pointer hasn't been set yet at this point, so use
hdr_type instead.]
This change fixes a regression introduced inadvertently by commit
33f767d (ACPI: Rework acpi_get_child() to be more efficient) which
overlooked the fact that for acpi_walk_namespace() "post-order" means
"after all children have been visited" rather than "on the way back",
so for device objects without children and for namespace walks of
depth 1, as in the acpi_get_child() case, the "post-order" callbacks
ordering is actually the same as the ordering of "pre-order" ones.
Since that commit changed the namespace walk in acpi_get_child() to
terminate after finding the first matching object instead of going
through all of them and returning the last one, it effectively
changed the result returned by that function in some rare cases and
that led to problems (the switch from a "pre-order" to a "post-order"
callback was supposed to prevent that from happening, but it was
ineffective).
As it turns out, the systems where the change made by commit
33f767d actually matters are those where there are multiple ACPI
device objects representing the same PCIe port (which effectively
is a bridge). Moreover, only one of them, and the one we are
expected to use, has child device objects in the ACPI namespace,
so the regression can be addressed as described above.
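Roughly, the selection heuristic might be sketched like this (hypothetical
simplified types and helper name; the real acpi_find_child() walks the
ACPI namespace rather than a plain list):
  #include <stdbool.h>
  #include <stddef.h>

  struct child {
      unsigned long long adr;   /* value of _ADR */
      bool has_children;        /* has child objects in the namespace */
      struct child *next;
  };

  static struct child *find_child(struct child *list,
                                  unsigned long long addr, bool is_bridge)
  {
      struct child *c, *fallback = NULL;

      for (c = list; c; c = c->next) {
          if (c->adr != addr)
              continue;
          if (is_bridge && !c->has_children) {
              if (!fallback)
                  fallback = c;   /* keep only as a last resort */
              continue;
          }
          return c;
      }
      return fallback;
  }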
References: https://bugzilla.kernel.org/show_bug.cgi?id=60561
Reported-by: Peter Wu <lekensteyn@gmail.com>
Tested-by: Vladimir Lalov <mail@vlalov.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: Peter Wu <lekensteyn@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeff Wu [Wed, 29 May 2013 06:31:30 +0000 (06:31 +0000)]
ACPI: add _STA evaluation at do_acpi_find_child()
commit
c7d9ca90aa9497f0b6e301ec67c52dd4b57a7852 upstream.
Once do_acpi_find_child() has found the first matching handle, it
makes the acpi_get_child() loop stop and return that handle. On some
platforms, though, there are multiple devices with the same value of
"_ADR" in the same namespace scope, and if one of them is enabled,
the others will be disabled. For example:
Address : 0x1FFFF ; path : SB_PCI0.SATA.DEV0
Address : 0x1FFFF ; path : SB_PCI0.SATA.DEV1
Address : 0x1FFFF ; path : SB_PCI0.SATA.DEV2
If DEV0 and DEV1 are disabled and DEV2 is enabled, the handle of DEV2
should be returned, but actually the function always returns the
handle of DEV0.
To address that issue, make do_acpi_find_child() evaluate _STA to
check the device status. If a matching device object exists, but is
disabled, acpi_get_child() will continue to walk the namespace in the
hope of finding an enabled one. If one is found, its handle will be
returned, but otherwise the function will return the handle of the
disabled object found before (in case it is enabled going forward).
[rjw: Changelog]
Signed-off-by: Jeff Wu <zlinuxkernel@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Cc: Peter Wu <lekensteyn@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Mon, 29 Jul 2013 21:07:43 +0000 (23:07 +0200)]
mac80211: don't wait for TX status forever
commit
cb236d2d713cff83d024a82b836757d9e2b50715 upstream.
TX status notification can get lost, or the frames could
get stuck on the queue, so don't wait for the callback
from the driver forever and instead time out after half
a second.
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Luciano Coelho <luciano.coelho@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dominik Dingel [Fri, 26 Jul 2013 13:04:00 +0000 (15:04 +0200)]
KVM: s390: move kvm_guest_enter,exit closer to sie
commit
2b29a9fdcb92bfc6b6f4c412d71505869de61a56 upstream.
Any uaccess between guest_enter and guest_exit could trigger a page fault,
the page fault handler would handle it as a guest fault and translate a
user address as guest address.
Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Tue, 20 Aug 2013 22:40:47 +0000 (15:40 -0700)]
Linux 3.10.9
Greg Kroah-Hartman [Tue, 20 Aug 2013 22:32:57 +0000 (15:32 -0700)]
Revert "genetlink: fix family dump race"
This reverts commit
aab4f8d490ef8c184d854d5f630438c10406765c, commit
58ad436fcf49810aa006016107f494c9ac9013db upstream, as it causes
problems.
Cc: Johannes Berg <johannes.berg@intel.com>
Cc: Andrei Otcheretianski <andrei.otcheretianski@intel.com>
Cc: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Tue, 20 Aug 2013 15:43:19 +0000 (08:43 -0700)]
Linux 3.10.8
Li Zefan [Tue, 13 Aug 2013 02:05:59 +0000 (10:05 +0800)]
cpuset: fix the return value of cpuset_write_u64()
commit
a903f0865a190f8778c73df1a810ea6e25e5d7cf upstream.
Writing to this file always returns -ENODEV:
# echo 1 > cpuset.memory_pressure_enabled
-bash: echo: write error: No such device
Signed-off-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Mon, 12 Aug 2013 13:53:28 +0000 (09:53 -0400)]
jbd2: Fix use after free after error in jbd2_journal_dirty_metadata()
commit
91aa11fae1cf8c2fd67be0609692ea9741cdcc43 upstream.
When jbd2_journal_dirty_metadata() returns error,
__ext4_handle_dirty_metadata() stops the handle. However, callers of this
function are not aware of that fact and still happily use the now-freed
handle. This use-after-free can result in various issues, but very likely
we oops soon.
The motivation of adding __ext4_journal_stop() into
__ext4_handle_dirty_metadata() in commit
9ea7a0df seems to be only to
improve error reporting. So replace __ext4_journal_stop() with
ext4_journal_abort_handle() which was there before that commit and add
WARN_ON_ONCE() to dump stack to provide useful information.
Reported-by: Sage Weil <sage@inktank.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guenter Roeck [Sat, 17 Aug 2013 03:50:55 +0000 (20:50 -0700)]
s390: Fix broken build
commit
215b28a5308f3d332df2ee09ef11fda45d7e4a92 upstream.
Fix this build error:
In file included from fs/exec.c:61:0:
arch/s390/include/asm/tlb.h:35:23: error: expected identifier or '(' before 'unsigned'
arch/s390/include/asm/tlb.h:36:1: warning: no semicolon at end of struct or union [enabled by default]
arch/s390/include/asm/tlb.h: In function 'tlb_gather_mmu':
arch/s390/include/asm/tlb.h:57:5: error: 'struct mmu_gather' has no member named 'end'
Broken due to commit
2b047252d0 ("Fix TLB gather virtual address range
invalidation corner cases").
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
[ Oh well. We had build testing for ppc and um, but no s390 - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Geert Uytterhoeven [Thu, 25 Jul 2013 22:08:25 +0000 (00:08 +0200)]
m68k/atari: ARAnyM - Fix NatFeat module support
commit
e8184e10f89736a23ea6eea8e24cd524c5c513d2 upstream.
As pointed out by Andreas Schwab, pointers passed to ARAnyM NatFeat calls
should be physical addresses, not virtual addresses.
Fortunately on Atari, physical and virtual kernel addresses are the same,
as long as normal kernel memory is concerned, so this usually worked fine
without conversion.
But for modules, pointers to literal strings are located in vmalloc()ed
memory. Depending on the version of ARAnyM, this causes the nf_get_id()
call to just fail, or worse, crash ARAnyM itself with e.g.
Gotcha! Illegal memory access. Atari PC = $968c
This is a big issue for distro kernels, which want to have all drivers as
loadable modules in an initrd.
Add a wrapper for nf_get_id() that copies the literal to the stack to
work around this issue.
Reported-by: Thorsten Glaser <tg@debian.org>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andreas Schwab [Fri, 9 Aug 2013 13:14:08 +0000 (15:14 +0200)]
m68k: Truncate base in do_div()
commit
ea077b1b96e073eac5c3c5590529e964767fc5f7 upstream.
Explicitly truncate the second operand of do_div() to 32 bits to guard
against bogus code calling it with a 64-bit divisor.
[Thorsten]
After upgrading from 3.2 to 3.10, mounting a btrfs volume fails with:
btrfs: setting nodatacow, compression disabled
btrfs: enabling auto recovery
btrfs: disk space caching is enabled
*** ZERO DIVIDE *** FORMAT=2
Current process id is 722
BAD KERNEL TRAP: 00000000
Modules linked in: evdev mac_hid ext4 crc16 jbd2 mbcache btrfs xor lzo_compress zlib_deflate raid6_pq crc32c libcrc32c
PC: [<319535b2>] __btrfs_map_block+0x11c/0x119a [btrfs]
SR: 2000  SP: 30c1fab4  a2: 30f0faf0
d0: 00000000    d1: 00001000    d2: 00000000    d3: 00000000
d4: 00010000    d5: 00000000    a0: 3085c72c    a1: 3085c72c
Process mount (pid: 722, task=30f0faf0)
Frame format=2 instr addr=319535ae
Stack from 30c1faec:
        00000000 00000020 00000000 00001000 00000000 01401000 30253928 300ffc00
        00a843ac 3026f640 00000000 00010000 0009e250 00d106c0 00011220 00000000
        00001000 301c6830 0009e32a 000000ff 00000009 3085c72c 00000000 00000000
        30c1fd14 00000000 00000020 00000000 30c1fd14 0009e26c 00000020 00000003
        00000000 0009dd8a 300b0b6c 30253928 00a843ac 00001000 00000000 00000000
        0000a008 3194e76a 30253928 00a843ac 00001000 00000000 00000000 00000002
Call Trace: [<00001000>] kernel_pg_dir+0x0/0x1000
[...]
Code: 222e ff74 2a2e ff5c 2c2e ff60 4c45 1402 <2d40> ff64 2d41 ff68 2205 4c2e 1800 ff68 4c04 0800 2041 d1c0 2206 4c2e 1400 ff68
[Geert]
As diagnosed by Andreas, fs/btrfs/volumes.c:__btrfs_map_block()
calls
do_div(stripe_nr, stripe_len);
with stripe_len u64, while do_div() assumes the divisor is a 32-bit number.
Due to the lack of truncation in the m68k-specific implementation of
do_div(), the division is performed using the upper 32-bit word of
stripe_len, which is zero.
This was introduced by commit
53b381b3abeb86f12787a6c40fee9b2f71edc23b
("Btrfs: RAID5 and RAID6"), which changed the divisor from
map->stripe_len (struct map_lookup.stripe_len is int) to a 64-bit temporary.
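As a standalone illustration of the do_div() contract and the truncation
fix (div64_32 here is a made-up helper, not the m68k implementation):
  #include <stdio.h>
  #include <stdint.h>

  /* do_div()-style division: the divisor is supposed to be a 32-bit
   * value.  If a caller passes a u64 whose value fits in 32 bits, an
   * implementation that accidentally picks up the upper 32-bit word
   * divides by zero; explicitly truncating the divisor avoids that. */
  static uint32_t div64_32(uint64_t *n, uint64_t base)
  {
      uint32_t b = (uint32_t)base;        /* explicit truncation */
      uint32_t rem = (uint32_t)(*n % b);

      *n /= b;
      return rem;
  }

  int main(void)
  {
      uint64_t stripe_nr = 123456789ULL;
      uint64_t stripe_len = 65536;        /* u64 temporary, as in btrfs */
      uint32_t rem = div64_32(&stripe_nr, stripe_len);

      printf("quotient=%llu remainder=%u\n",
             (unsigned long long)stripe_nr, (unsigned)rem);
      return 0;
  }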
Reported-by: Thorsten Glaser <tg@debian.org>
Signed-off-by: Andreas Schwab <schwab@linux-m68k.org>
Tested-by: Thorsten Glaser <tg@debian.org>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Will Deacon [Wed, 7 Aug 2013 22:39:41 +0000 (23:39 +0100)]
ARM: 7809/1: perf: fix event validation for software group leaders
commit
c95eb3184ea1a3a2551df57190c81da695e2144b upstream.
It is possible to construct an event group with a software event as a
group leader and then subsequently add a hardware event to the group.
This results in the event group being validated by adding all members
of the group to a fake PMU and attempting to allocate each event on
their respective PMU.
Unfortunately, for software events without a corresponding arm_pmu, this
results in a kernel crash attempting to dereference the ->get_event_idx
function pointer.
This patch fixes the problem by checking explicitly for software events
and ignoring those in event validation (since they can always be
scheduled). We will probably want to revisit this for 3.12, since the
validation checks don't appear to work correctly when dealing with
multiple hardware PMUs anyway.
Reported-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Vince Weaver <vincent.weaver@maine.edu>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Linus Torvalds [Thu, 15 Aug 2013 18:42:25 +0000 (11:42 -0700)]
Fix TLB gather virtual address range invalidation corner cases
commit
2b047252d087be7f2ba088b4933cd904f92e6fce upstream.
Ben Tebulin reported:
"Since v3.7.2 on two independent machines a very specific Git
repository fails in 9/10 cases on git-fsck due to an SHA1/memory
failures. This only occurs on a very specific repository and can be
reproduced stably on two independent laptops. Git mailing list ran
out of ideas and for me this looks like some very exotic kernel issue"
and bisected the failure to the backport of commit
53a59fc67f97 ("mm:
limit mmu_gather batching to fix soft lockups on !CONFIG_PREEMPT").
That commit itself is not actually buggy, but what it does is to make it
much more likely to hit the partial TLB invalidation case, since it
introduces a new case in tlb_next_batch() that previously only ever
happened when running out of memory.
The real bug is that the TLB gather virtual memory range setup is subtly
buggered. It was introduced in commit
597e1c3580b7 ("mm/mmu_gather:
enable tlb flush range in generic mmu_gather"), and the range handling
was already fixed at least once in commit
e6c495a96ce0 ("mm: fix the TLB
range flushed when __tlb_remove_page() runs out of slots"), but that fix
was not complete.
The problem with the TLB gather virtual address range is that it isn't
set up by the initial tlb_gather_mmu() initialization (which didn't get
the TLB range information), but it is set up ad-hoc later by the
functions that actually flush the TLB. And so any such case that forgot
to update the TLB range entries would potentially miss TLB invalidates.
Rather than try to figure out exactly which particular ad-hoc range
setup was missing (I personally suspect it's the hugetlb case in
zap_huge_pmd(), which didn't have the same logic as zap_pte_range()
did), this patch just gets rid of the problem at the source: make the
TLB range information available to tlb_gather_mmu(), and initialize it
when initializing all the other tlb gather fields.
This makes the patch larger, but conceptually much simpler. And the end
result is much more understandable; even if you want to play games with
partial ranges when invalidating the TLB contents in chunks, now the
range information is always there, and anybody who doesn't want to
bother with it won't introduce subtle bugs.
Ben verified that this fixes his problem.
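Conceptually, the resulting bookkeeping looks something like the following
hedged sketch (made-up types, not the actual mmu_gather code): the range
is seeded at setup time and only ever grows.
  struct gather_range {
      unsigned long start, end;   /* virtual range to invalidate */
  };

  /* The range is seeded when the gather is set up... */
  static void gather_init(struct gather_range *g,
                          unsigned long start, unsigned long end)
  {
      g->start = start;
      g->end = end;
  }

  /* ...and can only grow as pages are unmapped, so a later flush can
   * never miss an address that was actually torn down. */
  static void gather_note(struct gather_range *g,
                          unsigned long addr, unsigned long size)
  {
      if (addr < g->start)
          g->start = addr;
      if (addr + size > g->end)
          g->end = addr + size;
  }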
Reported-bisected-and-tested-by: Ben Tebulin <tebulin@googlemail.com>
Build-testing-by: Stephen Rothwell <sfr@canb.auug.org.au>
Build-testing-by: Richard Weinberger <richard.weinberger@gmail.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Pugliese [Fri, 9 Aug 2013 14:52:13 +0000 (09:52 -0500)]
wusbcore: fix kernel panic when disconnecting a wireless USB->serial device
commit
ec58fad1feb76c323ef47efff1d1e8660ed4644c upstream.
This patch fixes a kernel panic that can occur when disconnecting a
wireless USB->serial device. When the serial device disconnects, the
device cleanup procedure ends up calling usb_hcd_disable_endpoint on the
serial device's endpoints. The wusbcore uses the ABORT_RPIPE command to
abort all transfers on the given endpoint but it does not properly give
back the URBs when the transfer results return from the HWA. This patch
prevents the transfer result processing code from bailing out when it sees
a WA_XFER_STATUS_ABORTED result code so that these urbs are flushed
properly by usb_hcd_disable_endpoint. It also updates wa_urb_dequeue to
handle the case where the endpoint has already been cleaned up when
usb_kill_urb is called which is where the panic originally occurred.
Signed-off-by: Thomas Pugliese <thomas.pugliese@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephen Boyd [Tue, 13 Aug 2013 21:12:40 +0000 (14:12 -0700)]
PM / QoS: Fix workqueue deadlock when using pm_qos_update_request_timeout()
commit
40fea92ffb5fa0ef26d10ae0fe5688bc8e61c791 upstream.
pm_qos_update_request_timeout() updates a qos and then schedules
a delayed work item to bring the qos back down to the default
after the timeout. When the work item runs, pm_qos_work_fn() will
call pm_qos_update_request() and deadlock because it tries to
cancel itself via cancel_delayed_work_sync(). Future callers of
that qos will also hang waiting to cancel the work that is
canceling itself. Let's extract the little bit of code that does
the real work of pm_qos_update_request() and call it from the
work function so that we don't deadlock.
Before ed1ac6e (PM: don't use [delayed_]work_pending()) this didn't
happen because the work function wouldn't try to cancel itself.
[backport to 3.10 - gregkh]
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Reviewed-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matt Burtch [Mon, 12 Aug 2013 17:11:39 +0000 (10:11 -0700)]
USB-Serial: Fix error handling of usb_wwan
commit
6c1ee66a0b2bdbd64c078fba684d640cf2fd38a9 upstream.
This fixes an issue where the bulk-in urb used for incoming data transfer
is not resubmitted if the packet received contains an error status. This
results in the driver locking up until the port is closed and re-opened.
Tested on a custom board with a Cinterion GSM module.
Signed-off-by: Matt Burtch <matt@grid-net.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alan Stern [Wed, 7 Aug 2013 14:58:05 +0000 (10:58 -0400)]
USB: EHCI: accept very late isochronous URBs
commit
24f531371de17010f2b1b57d90e42240032e7733 upstream.
Since commits
4005ad4390bf (EHCI: implement new semantics for
URB_ISO_ASAP) and
c75c5ab575af (ALSA: USB: adjust for changed 3.8 USB
API) became widely distributed, people have been experiencing problems
with audio transfers. The slightest underrun causes complete failure,
requiring the audio stream to be restarted.
It turns out that the current isochronous API doesn't handle underruns
in the best way. The ALSA developers would much rather have transfers
that are submitted too late be accepted and complete in the normal
fashion, rather than being refused outright.
This patch implements the requested approach. When an isochronous URB
submission is so late that all its scheduled slots have already
expired, a debugging message will be printed in the log and the URB
will be accepted as usual. Assuming it was submitted by a completion
handler (which is normally the case), it will complete shortly
thereafter with all the usb_iso_packet_descriptor status fields marked
-EXDEV.
This fixes (for ehci-hcd)
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1191603
It should be applied to all kernels that include commit
4005ad4390bf.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Tested-by: Maksim Boyko <maksboyko@yandex.ru>
CC: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Tue, 13 Aug 2013 11:27:35 +0000 (13:27 +0200)]
USB: keyspan: fix null-deref at disconnect and release
commit
ff8a43c10f1440f07a5faca0c1556921259f7f76 upstream.
Make sure to fail properly if the device is not accepted during attach
in order to avoid null-pointer derefs (of missing interface private
data) at disconnect or release.
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Tue, 13 Aug 2013 11:27:34 +0000 (13:27 +0200)]
USB: mos7720: fix broken control requests
commit
ef6c8c1d733e244f0499035be0dabe1f4ed98c6f upstream.
The parallel-port code of the drivers used a stack allocated
control-request buffer for asynchronous (and possibly deferred) control
requests. This not only violates the no-DMA-from-stack requirement but
could also lead to corrupt control requests being submitted.
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Sun, 11 Aug 2013 14:49:20 +0000 (16:49 +0200)]
USB: mos7840: fix big-endian probe
commit
d551ec9b690f3de65b0091a2e767f1382adc792d upstream.
Fix bug in device-type detection on big-endian machines originally
introduced by commit
0eafe4de ("USB: serial: mos7840: add support for
MCS7810 devices") which always matched on little-endian product ids.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johan Hovold [Sun, 11 Aug 2013 14:49:23 +0000 (16:49 +0200)]
USB: ti_usb_3410_5052: fix big-endian firmware handling
commit
e877dd2f2581628b7119df707d4cf03d940cff49 upstream.
Fix endianess bugs in firmware handling introduced by commits
cb7a7c6a
("ti_usb_3410_5052: add Multi-Tech modem support") and
05a3d905
("ti_usb_3410_5052: support alternate firmware") which made the driver
use the wrong firmware for certain devices on big-endian machines.
Signed-off-by: Johan Hovold <jhovold@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oliver Neukum [Wed, 14 Aug 2013 09:01:46 +0000 (11:01 +0200)]
usb: add two quirky touchscreens
commit
304ab4ab079a8ed03ce39f1d274964a532db036b upstream.
These devices tend to become unresponsive after S3
Signed-off-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Tue, 30 Jul 2013 20:34:28 +0000 (22:34 +0200)]
nl80211: fix another nl80211_fam.attrbuf race
commit
c319d50bfcf678c2857038276d9fab3c6646f3bf upstream.
This is similar to the race Linus had reported, but in this case
it's an older bug: nl80211_prepare_wdev_dump() uses the wiphy
index in cb->args[0] as it is and thus parses the message over
and over again instead of just once because 0 is the first valid
wiphy index. Similar code in nl80211_testmode_dump() correctly
offsets the wiphy_index by 1, do that here as well.
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Fri, 16 Aug 2013 06:17:05 +0000 (08:17 +0200)]
ALSA: hda - Add a fixup for Gateway LT27
commit
1801928e0f99d94c55e33c584c5eb2ff5e246ee6 upstream.
Gateway LT27 needs a fixup for the inverted digital mic.
Reported-by: "Nathanael D. Noblet" <nathanael@gnat.ca>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Fri, 9 Aug 2013 10:34:42 +0000 (12:34 +0200)]
ALSA: hda - Add pinfix for LG LW25 laptop
commit
db8a38e5063a4daf61252e65d47ab3495c705f4c upstream.
Correct the pins for a line-in and a headphone on LG LW25 laptop with
ALC880 codec. Other pins seem fine.
Reported-and-tested-by: Joonas Saarinen <jonskunator@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Thu, 8 Aug 2013 07:32:37 +0000 (09:32 +0200)]
ALSA: hda - Fix missing mute controls for CX5051
commit
f69910ddbd8c29391958cf82b598dd78fe5c8640 upstream.
We've added a fake mute control (setting the amp volume to zero) for
CX5051 at commit [3868137e: ALSA: hda - Add a fake mute feature], but
this feature was overlooked in the generic parser implementation. Now
the driver lacks mute controls on these codecs.
The fix is just to check both AC_AMPCAP_MUTE and AC_AMPCAP_MIN_MUTE
bits in each place checking the amp capabilities.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=59001
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Torsten Schenk [Sun, 11 Aug 2013 09:11:35 +0000 (11:11 +0200)]
ALSA: 6fire: make buffers DMA-able (midi)
commit
4c2aee0032b70083dafebd733ed9c774633b2fa3 upstream.
Patch makes midi output buffer DMA-able by allocating it separately.
Signed-off-by: Torsten Schenk <torsten.schenk@zoho.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Torsten Schenk [Sun, 11 Aug 2013 09:11:19 +0000 (11:11 +0200)]
ALSA: 6fire: make buffers DMA-able (pcm)
commit
5ece263f1d93fba8d992e67e3ab8a71acf674db9 upstream.
Patch makes pcm buffers DMA-able by allocating each one separately.
Signed-off-by: Torsten Schenk <torsten.schenk@zoho.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Maksim A. Boyko [Sat, 10 Aug 2013 08:20:02 +0000 (12:20 +0400)]
ALSA: usb-audio: Fix invalid volume resolution for Logitech HD Webcam C525
commit
140d37de62ffe8405282a1d6498f3b4099006384 upstream.
Add the volume control quirk for avoiding the kernel warning
for the Logitech HD Webcam C525
as in the similar commit
36691e1be6ec551eef4a5225f126a281f8c051c2
for the Logitech HD Webcam C310.
Reported-by: Maksim Boyko <maksim.a.boyko@gmail.com>
Tested-by: Maksim Boyko <maksim.a.boyko@gmail.com>
Signed-off-by: Maksim Boyko <maksim.a.boyko@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephen Warren [Wed, 14 Aug 2013 20:24:16 +0000 (14:24 -0600)]
ASoC: tegra: fix Tegra30 I2S capture parameter setup
commit
c90c0d7a96e634a73ef1580f1d20993606545647 upstream.
The Tegra30 I2S driver was writing the AHUB interface parameters to the
playback path register rather than the capture path register. This
caused the capture parameters not to be configured at all, so if
capturing using non-HW-default parameters (e.g. 16-bit stereo rather
than 8-bit mono) the audio would be corrupted.
With this fixed, audio capture from an analog microphone works correctly
on the Cardhu board.
Signed-off-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brian Austin [Tue, 6 Aug 2013 17:57:21 +0000 (12:57 -0500)]
ASoC: cs42l52: Reorder Min/Max and update to SX_TLV for Beep Volume
commit
e2c98a8bba958045bde861fe1d66be54315c7790 upstream.
Beep Volume Min/Max was backwards.
Change to SOC_SINGLE_SX_TLV for correct volume representation.
Signed-off-by: Brian Austin <brian.austin@cirrus.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lars-Peter Clausen [Thu, 1 Aug 2013 16:30:38 +0000 (18:30 +0200)]
ASoC: dapm: Fix empty list check in dapm_new_mux()
commit
fe581391147cb3d738d961d0f1233d91a9e1113c upstream.
list_first_entry() will always return a valid pointer, even if the list is
empty. So the check whether path is NULL will always be false. So we end up
calling dapm_create_or_share_mixmux_kcontrol() with a path struct that points
right into the middle of the widget struct, and by trying to modify the path
the widget's memory will become corrupted. Fix this by using list_empty() to
check if the widget doesn't have any paths.
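The pitfall is easy to reproduce in standalone C with minimal stand-ins
for the kernel list helpers (illustrative only, not the ASoC code):
  #include <stdio.h>
  #include <stddef.h>

  struct list_head { struct list_head *next, *prev; };

  #define container_of(ptr, type, member) \
      ((type *)((char *)(ptr) - offsetof(type, member)))
  #define list_first_entry(head, type, member) \
      container_of((head)->next, type, member)
  #define list_empty(head) ((head)->next == (head))

  struct path   { struct list_head node; int weight; };
  struct widget { int id; struct list_head sources; };

  int main(void)
  {
      struct widget w = { .id = 42 };
      struct path *p;

      w.sources.next = w.sources.prev = &w.sources;   /* empty list */

      /* "First entry" of an empty list is never NULL: it is computed
       * from the head itself and points into struct widget. */
      p = list_first_entry(&w.sources, struct path, node);
      printf("bogus path aliases the widget: %d\n",
             (void *)p == (void *)&w.sources);

      /* The correct check before touching any entry: */
      printf("list_empty: %d\n", list_empty(&w.sources));
      return 0;
  }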
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Tue, 30 Jul 2013 08:11:25 +0000 (10:11 +0200)]
cfg80211: fix P2P GO interface teardown
commit
74418edec915d0f446debebde08d170c7b8ba0ee upstream.
When a P2P GO interface goes down, cfg80211 doesn't properly
tear it down, leading to warnings later. Add the GO interface
type to the enumeration to tear it down like AP interfaces.
Otherwise, we leave it pending and mac80211's state can get
very confused, leading to warnings later.
Reported-by: Ilan Peer <ilan.peer@intel.com>
Tested-by: Ilan Peer <ilan.peer@intel.com>
Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Tue, 13 Aug 2013 07:04:05 +0000 (09:04 +0200)]
genetlink: fix family dump race
commit
58ad436fcf49810aa006016107f494c9ac9013db upstream.
When dumping generic netlink families, only the first dump call
is locked with genl_lock(), which protects the list of families,
and thus subsequent calls can access the data without locking,
racing against family addition/removal. This can cause a crash.
Fix it - the locking needs to be conditional because the first
time around it's already locked.
A similar bug was reported to me on an old kernel (3.4.47) but
the exact scenario that happened there is no longer possible,
on those kernels the first round wasn't locked either. Looking
at the current code I found the race described above, which had
also existed on the old kernel.
Reported-by: Andrei Otcheretianski <andrei.otcheretianski@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephane Grosjean [Fri, 9 Aug 2013 09:44:06 +0000 (11:44 +0200)]
can: pcan_usb: fix wrong memcpy() bytes length
commit
3c322a56b01695df15c70bfdc2d02e0ccd80654e upstream.
Fix a possibly wrong memcpy() length, since some CAN records received from
PCAN-USB can define a DLC field in the range [9..15].
In that case, the real DLC value MUST be used to move the record pointer
forward, but at most 8 bytes MUST be copied into the data field of the
struct can_frame object of the skb given to the network core.
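The idea of the fix, sketched in standalone C (names are illustrative;
this is not the driver code):
  #include <stdint.h>
  #include <string.h>

  #define CAN_MAX_DLEN 8

  /* A record may carry a DLC in the range 9..15: the record pointer must
   * advance by the full DLC, but at most 8 bytes may be copied into the
   * CAN frame data. */
  static const uint8_t *read_record(const uint8_t *rec, uint8_t dlc,
                                    uint8_t data[CAN_MAX_DLEN])
  {
      size_t copy = dlc > CAN_MAX_DLEN ? CAN_MAX_DLEN : dlc;

      memcpy(data, rec, copy);   /* copy at most 8 bytes */
      return rec + dlc;          /* but skip the full record length */
  }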
Signed-off-by: Stephane Grosjean <s.grosjean@peak-system.com>
Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Wed, 31 Jul 2013 18:52:03 +0000 (20:52 +0200)]
mac80211: continue using disabled channels while connected
commit
ddfe49b42d8ad4bfdf92d63d4a74f162660d878d upstream.
In case the AP has different regulatory information than we do,
it can happen that we connect to an AP based on e.g. the world
roaming regulatory data, and then updating our database with the
AP's country information disables the channel the AP is using.
If this happens on an HT AP, the bandwidth tracking code will
hit the WARN_ON() and disconnect. Since that's not very useful,
ignore the channel-disable flag in bandwidth tracking.
Reported-by: Chris Wright <chrisw@sous-sol.org>
Tested-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chris Wright [Wed, 31 Jul 2013 19:12:24 +0000 (12:12 -0700)]
mac80211: fix infinite loop in ieee80211_determine_chantype
commit
b56e4b857c5210e848bfb80e074e5756a36cd523 upstream.
Commit "3d9646d mac80211: fix channel selection bug" introduced a possible
infinite loop by moving the out target above the chandef_downgrade
while loop. When we downgrade to NL80211_CHAN_WIDTH_20_NOHT, we jump
back up to re-run the while loop...indefinitely. Replace goto with
break and carry on. This may not be sufficient to connect to the AP,
but will at least keep the cpu from livelocking. Thanks to Derek Atkins
as an extra pair of debugging eyes.
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Johannes Berg [Wed, 31 Jul 2013 09:23:06 +0000 (11:23 +0200)]
mac80211: ignore HT primary channel while connected
commit
5cdaed1e878d723d56d04ae0be1738124acf9f46 upstream.
While we're connected, the AP shouldn't change the primary channel
in the HT information. We checked this, and dropped the connection
if it did change it.
Unfortunately, this is causing problems on some APs, e.g. on the
Netgear WRT610NL: the beacons seem to always contain a bad channel
and if we made a connection using a probe response (correct data)
we drop the connection immediately and can basically not connect
properly at all.
Work around this by ignoring the HT primary channel information in
beacons if we're already connected.
Also print out more verbose messages in the other situations to
help diagnose similar bugs quicker in the future.
Acked-by: Andy Isaacson <adi@hexapodia.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stanislaw Gruszka [Thu, 1 Aug 2013 10:07:55 +0000 (12:07 +0200)]
iwl4965: reset firmware after rfkill off
commit
788f7a56fce1bcb2067b62b851a086fca48a0056 upstream.
Using the rfkill switch can make the firmware unstable, which causes various
Microcode errors and kernel warnings. Resetting the firmware just after
rfkill off (radio on) helps with that.
Resolve:
https://bugzilla.redhat.com/show_bug.cgi?id=977053
Reported-and-tested-by: Justin Pearce <whitefox@guardianfox.net>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stanislaw Gruszka [Thu, 1 Aug 2013 10:07:13 +0000 (12:07 +0200)]
iwl4965: set power mode early
commit
eca396d7a5bdcc1fd67b1b12f737c213ac78a6f4 upstream.
If device was put into a sleep and system was restarted or module
reloaded, we have to wake device up before sending other commands.
Otherwise it will fail to start with Microcode error.
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marc Zyngier [Fri, 21 Jun 2013 12:08:48 +0000 (13:08 +0100)]
ARM: KVM: clear exclusive monitor on all exception returns
commit
22cfbb6d730ca2fda236b507d9fba17bf002736c upstream.
Make sure we clear the exclusive monitor on all exception returns,
which otherwise could lead to lock corruptions.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jonghwan Choi <jhbird.choi@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marc Zyngier [Fri, 21 Jun 2013 12:08:47 +0000 (13:08 +0100)]
ARM: KVM: add missing dsb before invalidating Stage-2 TLBs
commit
479c5ae2f8a55509b691494cd13691d3dc31d102 upstream.
When performing a Stage-2 TLB invalidation, it is necessary to
make sure the write to the page tables is observable by all CPUs.
For this purpose, add a dsb instruction to __kvm_tlb_flush_vmid_ipa
before doing the TLB invalidation itself.
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jonghwan Choi <jhbird.choi@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Marc Zyngier [Fri, 21 Jun 2013 12:08:46 +0000 (13:08 +0100)]
ARM: KVM: perform save/restore of PAR
commit
6a077e4ab9cbfbf279fb955bae05b03781c97013 upstream.
Not saving PAR is an unfortunate oversight. If the guest performs
an AT* operation and gets scheduled out before reading the result
of the translation from PAR, it could become corrupted by another
guest or the host.
Saving this register is made slightly more complicated as KVM also
uses it on the permission fault handling path, leading to an ugly
"stash and restore" sequence. Fortunately, this is already a slow
path so we don't really care. Also, Linux doesn't do any AT*
operation, so Linux guests are not impacted by this bug.
[ Slightly tweaked to use an even register as first operand to ldrd
and strd operations in interrupts_head.S - Christoffer ]
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Jonghwan Choi <jhbird.choi@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jianpeng Ma [Wed, 3 Jul 2013 11:25:24 +0000 (13:25 +0200)]
elevator: Fix a race in elevator switching
commit
d50235b7bc3ee0a0427984d763ea7534149531b4 upstream.
There's a race between elevator switching and normal io operation.
Because the allocation of struct elevator_queue and struct elevator_data
is not done in one atomic operation, there is a window in which a NULL
->elevator_data can be used.
For example:
  Thread A:                          Thread B:
  blk_queue_bio                      elevator_switch
    spin_lock_irq(q->queue_lock)       elevator_alloc
    elv_merge                          elevator_init_fn
Because elevator_alloc() is called without holding queue_lock,
->elevator_data is still NULL when thread A calls elv_merge(), which
needs information from elevator_data, and so the crash happens.
Move elevator_alloc() into elevator_init_fn() so that the allocation
of the elevator queue and the elevator data is done as one atomic
operation.
The bug is easy to reproduce with the following method:
  1: dd if=/dev/sdb of=/dev/null
  2: while true; do echo noop > scheduler; echo deadline > scheduler; done
The test also uses this method.
Signed-off-by: Jianpeng Ma <majianpeng@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Jonghwan Choi <jhbird.choi@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peter Zijlstra [Fri, 26 Jul 2013 21:48:42 +0000 (23:48 +0200)]
sched: Ensure update_cfs_shares() is called for parents of continuously-running tasks
commit
bf0bd948d1682e3996adc093b43021ed391983e6 upstream.
We typically update a task_group's shares within the dequeue/enqueue
path. However, continuously running tasks sharing a CPU are not
subject to these updates as they are only put/picked. Unfortunately,
when we reverted f269ae046 (in 17bc14b7), we lost the augmenting
periodic update that was supposed to account for this; resulting in a
potential loss of fairness.
To fix this, re-introduce the explicit update in
update_cfs_rq_blocked_load() [called via entity_tick()].
Reported-by: Max Hailperin <max@gustavus.edu>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Paul Turner <pjt@google.com>
Link: http://lkml.kernel.org/n/tip-9545m3apw5d93ubyrotrj31y@git.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
yonghua zheng [Tue, 13 Aug 2013 23:01:03 +0000 (16:01 -0700)]
fs/proc/task_mmu.c: fix buffer overflow in add_page_map()
commit
8c8296223f3abb142be8fc31711b18a704c0e7d8 upstream.
Recently we met quite a lot of random kernel panic issues after enabling
CONFIG_PROC_PAGE_MONITOR. After debugging we found this has something
to do with the following bug in pagemap:
In struct pagemapread:
struct pagemapread {
int pos, len;
pagemap_entry_t *buffer;
bool v2;
};
pos is the number of PM_ENTRY_BYTES-sized entries in buffer, but len is
the size of buffer in bytes, so it is a mistake to compare pos and len in
add_page_map() to check whether the buffer is full; this can lead to a
buffer overflow and random kernel panics.
Correct len to be the total number of PM_ENTRY_BYTES-sized entries in buffer.
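In other words, after the fix both fields count entries, so the
full-buffer check compares like units; a simplified sketch (not the
exact fs/proc code):
  #include <stdbool.h>
  #include <stdint.h>

  typedef uint64_t pagemap_entry_t;

  struct pagemapread {
      int pos, len;               /* both in units of pagemap entries */
      pagemap_entry_t *buffer;
  };

  /* Returns false once the buffer is full, so the caller stops adding. */
  static bool add_to_pagemap(struct pagemapread *pm, pagemap_entry_t pme)
  {
      pm->buffer[pm->pos++] = pme;
      return pm->pos < pm->len;
  }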
[akpm@linux-foundation.org: document pagemapread.pos and .len units, fix PM_ENTRY_BYTES definition]
Signed-off-by: Yonghua Zheng <younghua.zheng@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Radu Caragea [Tue, 13 Aug 2013 23:00:59 +0000 (16:00 -0700)]
x86 get_unmapped_area(): use proper mmap base for bottom-up direction
commit
df54d6fa54275ce59660453e29d1228c2b45a826 upstream.
When the stack is set to unlimited, the bottom-up direction is used for
mappings, but mmap_base is not used, which effectively renders ASLR for
mappings (along with PIE) useless.
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Michel Lespinasse <walken@google.com>
Cc: Oleg Nesterov <oleg@redhat.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: Adrian Sendroiu <molecula2788@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michal Simek [Tue, 13 Aug 2013 23:00:53 +0000 (16:00 -0700)]
microblaze: fix clone syscall
commit
dfa9771a7c4784bafd0673bc7abcee3813088b77 upstream.
Fix inadvertent breakage in the clone syscall ABI for Microblaze that
was introduced in commit f3268edbe6fe ("microblaze: switch to generic
fork/vfork/clone").
The Microblaze syscall ABI for clone takes the parent tid address in the
4th argument; the third argument slot is used for the stack size. The
incorrectly-used CLONE_BACKWARDS type assigned parent tid to the 3rd
slot.
This commit restores the original ABI so that existing userspace libc
code will work correctly.
All kernel versions from v3.8-rc1 were affected.
Signed-off-by: Michal Simek <michal.simek@xilinx.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrey Vagin [Tue, 13 Aug 2013 23:00:47 +0000 (16:00 -0700)]
memcg: don't initialize kmem-cache destroying work for root caches
commit
3e6b11df245180949938734bc192eaf32f3a06b3 upstream.
struct memcg_cache_params has a union. Different parts of this union
are used for root and non-root caches; the part holding the
cache-destroying work item is used only for non-root caches.
I fixed the same problem in another place (v3.9-rc1-16204-gf101a94), but
didn't notice this one.
This patch fixes the kernel panic:
[ 46.848187] BUG: unable to handle kernel paging request at 000000fffffffeb8
[ 46.849026] IP: [<ffffffff811a484c>] kmem_cache_destroy_memcg_children+0x6c/0xc0
[ 46.849092] PGD 0
[ 46.849092] Oops: 0000 [#1] SMP
...
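A reduced sketch of the problem (illustrative names, not the real
struct memcg_cache_params): the destroy-work field lives in the union part
reserved for non-root caches, so initializing it unconditionally scribbles
over the root-only member that shares those bytes:

    #include <string.h>

    struct cache_params_example {
        int is_root_cache;
        union {
            void *memcg_caches[4];             /* root caches only */
            struct {
                void (*destroy_fn)(void *);    /* non-root caches only */
                int destroy_pending;
            };
        };
    };

    static void init_params_example(struct cache_params_example *p, int is_root)
    {
        memset(p, 0, sizeof(*p));
        p->is_root_cache = is_root;
        if (!is_root)                  /* the fix: guard the initialization */
            p->destroy_pending = 1;    /* would corrupt memcg_caches[] for roots */
    }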
Signed-off-by: Andrey Vagin <avagin@openvz.org>
Cc: Glauber Costa <glommer@openvz.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Konstantin Khlebnikov <khlebnikov@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephen Boyd [Wed, 7 Aug 2013 23:18:08 +0000 (16:18 -0700)]
perf/arm: Fix armpmu_map_hw_event()
commit
b88a2595b6d8aedbd275c07dfa784657b4f757eb upstream.
Fix constraint check in armpmu_map_hw_event().
Reported-and-tested-by: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vince Weaver [Fri, 2 Aug 2013 14:47:34 +0000 (10:47 -0400)]
perf/x86: Fix intel QPI uncore event definitions
commit
c9601247f8f3fdc18aed7ed7e490e8dfcd07f122 upstream.
John McCalpin reports that the "drs_data" and "ncb_data" QPI
uncore events are missing the "extra bit" and always return zero
values unless the bit is properly set.
More details from him:
According to the Xeon E5-2600 Product Family Uncore Performance
Monitoring Guide, Table 2-94, about 1/2 of the QPI Link Layer events
(including the ones that "perf" calls "drs_data" and "ncb_data") require
that the "extra bit" be set.
This was confusing for a while -- a note at the bottom of page 94 says
that the "extra bit" is bit 16 of the control register.
Unfortunately, Table 2-86 clearly says that bit 16 is reserved and must
be zero. Looking around a bit, I found that bit 21 appears to be the
correct "extra bit", and further investigation shows that "perf" actually
agrees with me:
[root@c560-003.stampede]# cat /sys/bus/event_source/devices/uncore_qpi_0/format/event
config:0-7,21
So the command
# perf -e "uncore_qpi_0/event=drs_data/"
Is the same as
# perf -e "uncore_qpi_0/event=0x02,umask=0x08/"
While it should be
# perf -e "uncore_qpi_0/event=0x102,umask=0x08/"
I confirmed that this last version gives results that agree with the
amount of data that I expected the STREAM benchmark to move across the QPI
link in the second (cross-chip) test of the original script.
Reported-by: John McCalpin <mccalpin@tacc.utexas.edu>
Signed-off-by: Vince Weaver <vincent.weaver@maine.edu>
Cc: zheng.z.yan@intel.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@ghostprotocols.net>
Cc: Paul Mackerras <paulus@samba.org>
Link: http://lkml.kernel.org/r/alpine.DEB.2.10.1308021037280.26119@vincent-weaver-1.um.maine.edu
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Thu, 15 Aug 2013 05:59:42 +0000 (22:59 -0700)]
Linux 3.10.7
Markos Chandras [Mon, 17 Jun 2013 08:09:00 +0000 (08:09 +0000)]
MIPS: Expose missing pci_io{map,unmap} declarations
commit
78857614104a26cdada4c53eea104752042bf5a1 upstream.
GENERIC_PCI_IOMAP does not depend on CONFIG_PCI, so move it to the
CONFIG_MIPS symbol so that it is always selected for MIPS.
This fixes the missing pci_iomap declaration for MIPS.
Moreover, the pci_iounmap function was not defined in the
io.h header file when the CONFIG_PCI symbol is not set,
but it should be, since MIPS does not use CONFIG_GENERIC_IOMAP.
This fixes the following problem on an allyesconfig:
drivers/net/ethernet/3com/3c59x.c:1031:2: error: implicit declaration of
function 'pci_iomap' [-Werror=implicit-function-declaration]
drivers/net/ethernet/3com/3c59x.c:1044:3: error: implicit declaration of
function 'pci_iounmap' [-Werror=implicit-function-declaration]
Signed-off-by: Markos Chandras <markos.chandras@imgtec.com>
Acked-by: Steven J. Hill <Steven.Hill@imgtec.com>
Patchwork: https://patchwork.linux-mips.org/patch/5478/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Tue, 23 Apr 2013 13:47:50 +0000 (15:47 +0200)]
mtd: omap2: allow building as a module
commit
930d800bded771b26d9944c47810829130ff7c8c upstream.
The omap2 nand device driver calls into the elm code, which can
be a loadable module, and in that case it cannot itself be built-in.
I can see no reason why the omap2 driver cannot also be a module,
so let's make the option "tristate" in Kconfig to fix this allmodconfig
build error:
ERROR: "elm_config" [drivers/mtd/nand/omap2.ko] undefined!
ERROR: "elm_decode_bch_error_page" [drivers/mtd/nand/omap2.ko] undefined!
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Tony Lindgren <tony@atomide.com>
Cc: David Woodhouse <dwmw2@infradead.org>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Afzal Mohammed <afzal@ti.com>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Thu, 14 Mar 2013 14:21:36 +0000 (15:21 +0100)]
SCSI: nsp32: use mdelay instead of large udelay constants
commit
b497ceb964a80ebada3b9b3cea4261409039e25a upstream.
ARM cannot handle udelay() for more than 2 milliseconds, so we
should use mdelay() instead for those delays.
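A hypothetical example of the pattern being changed (not the actual nsp32
code):

    #include <linux/delay.h>

    /* Busy-waits longer than ~2 ms can overflow udelay() on ARM, so express
     * the wait in milliseconds instead; the total delay stays the same. */
    static void example_reset_settle(void)
    {
        /* udelay(3000);   -- too long for udelay() on ARM */
        mdelay(3);         /* same 3 ms wait, safe on every architecture */
    }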
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: GOTO Masanori <gotom@debian.or.jp>
Cc: YOKOTA Hiroshi <yokota@netlab.is.tsukuba.ac.jp>
Cc: "James E.J. Bottomley" <JBottomley@parallels.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Sun, 4 Aug 2013 16:13:17 +0000 (12:13 -0400)]
drm/radeon: always program the MC on startup
commit
6fab3febf6d949b0a12b1e4e73db38e4a177a79e upstream.
For r6xx+ asics. This mirrors the behavior of pre-r6xx
asics. We need to program the MC even if something
else in startup() fails. Failure to do so results in
an unusable GPU.
Based on a fix from: Mark Kettenis <kettenis@openbsd.org>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
[ rebased for 3.10 and dropped the drivers/gpu/drm/radeon/cik.c
bit as it's 3.11 specific code / tmb ]
Signed-off-by: Thomas Backlund <tmb@mageia.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christian König [Mon, 5 Aug 2013 12:10:55 +0000 (14:10 +0200)]
drm/radeon: only save UVD bo when we have open handles
commit
4ad9c1c774c2af152283f510062094e768876f55 upstream.
Otherwise just reinitialize from scratch on resume,
and so make it more likely to succeed.
v2: rebased for 3.10-stable tree
Signed-off-by: Christian König <christian.koenig@amd.com>
Cc: stable@vger.kernel.org
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christian König [Thu, 1 Aug 2013 15:34:07 +0000 (17:34 +0200)]
drm/radeon: fix halting UVD
commit
2858c00d2823c83acce2a1175dbabb2cebee8678 upstream.
Removing the clock/power or resetting the VCPU can cause
hangs if that happens in the middle of a register write.
Stall the memory and register bus before putting the VCPU
into reset. Keep it in reset when unloading the module or
suspending.
v2: rebased on 3.10-stable tree
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jani Nikula [Thu, 25 Jul 2013 09:44:34 +0000 (12:44 +0300)]
drm/i915: initialize gt_lock early with other spin locks
commit
14c5cec5d0cd73e7e9d4fbea2bbfeea8f3ade871 upstream.
commit
181d1b9e31c668259d3798c521672afb8edd355c
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Sun Jul 21 13:16:24 2013 +0200
drm/i915: fix up gt init sequence fallout
moved dev_priv->gt_lock initialization after use. Do the initialization
much earlier with other spin lock initializations.
Reported-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Tested-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Zhouping Liu <zliu@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Al Viro [Mon, 5 Aug 2013 13:37:37 +0000 (17:37 +0400)]
reiserfs: fix deadlock in umount
commit
672fe15d091ce76d6fb98e489962e9add7c1ba4c upstream.
Since remove_proc_entry() started to wait for IO in progress (i.e.
since 2007 or so), the locking in fs/reiserfs/proc.c became wrong;
if procfs read happens between the moment when umount() locks the
victim superblock and removal of /proc/fs/reiserfs/<device>/*,
we'll get a deadlock - read will wait for s_umount (in sget(),
called by r_start()), while umount will wait in remove_proc_entry()
for that read to finish, holding s_umount all along.
Fortunately, the same change allows a much simpler race avoidance -
all we need to do is remove the procfs entries in the very beginning
of reiserfs ->kill_sb(); that'll guarantee that pointer to superblock
will remain valid for the duration of procfs IO, so we don't need
sget() to keep the sucker alive. As a matter of fact, we can
get rid of the home-grown iterator completely, and use single_open()
instead.
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oleg Nesterov [Fri, 26 Jul 2013 15:12:56 +0000 (17:12 +0200)]
debugfs: debugfs_remove_recursive() must not rely on list_empty(d_subdirs)
commit
776164c1faac4966ab14418bb0922e1820da1d19 upstream.
debugfs_remove_recursive() is wrong,
1. it wrongly assumes that !list_empty(d_subdirs) means that this
dir should be removed.
This is not that bad by itself, but:
2. if d_subdirs does not become empty after __debugfs_remove(),
it gives up and silently fails; it doesn't even try to remove
other entries.
However ->d_subdirs can be non-empty because it still has the
already deleted !debugfs_positive() entries.
3. simple_release_fs() is called even if __debugfs_remove() fails.
Suppose we have
    dir1/
        dir2/
            file2
        file1
and someone opens dir1/dir2/file2.
Now, debugfs_remove_recursive(dir1/dir2) succeeds, and dir1/dir2 goes
away.
But debugfs_remove_recursive(dir1) silently fails and doesn't remove
this directory. Because it tries to delete (the already deleted)
dir1/dir2/file2 again and then fails due to "Avoid infinite loop"
logic.
Test-case:
#!/bin/sh
cd /sys/kernel/debug/tracing
echo 'p:probe/sigprocmask sigprocmask' >> kprobe_events
sleep 1000 < events/probe/sigprocmask/id &
echo -n >| kprobe_events
[ -d events/probe ] && echo "ERR!! failed to rm probe"
And after that it is not possible to create another probe entry.
With this patch debugfs_remove_recursive() skips !debugfs_positive()
files, although this is not strictly needed. The most important change
is that it no longer tries to make ->d_subdirs empty; it simply scans
the whole list(s) recursively and removes as much as possible.
Link: http://lkml.kernel.org/r/20130726151256.GC19472@redhat.com
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Julius Werner [Wed, 31 Jul 2013 02:51:20 +0000 (19:51 -0700)]
usb: core: don't try to reset_device() a port that got just disconnected
commit
481f2d4f89f87a0baa26147f323380e31cfa7c44 upstream.
The USB hub driver's event handler contains a check to catch SuperSpeed
devices that transitioned into the SS.Inactive state and tries to fix
them with a reset. It decides whether to do a plain hub port reset or
call the usb_reset_device() function based on whether there was a device
attached to the port.
However, there are device/hub combinations (found with a JetFlash
Transcend mass storage stick (8564:1000) on the root hub of an Intel
LynxPoint PCH) which can transition to the SS.Inactive state on
disconnect (and stay there long enough for the host to notice). In this
case, above-mentioned reset check will call usb_reset_device() on the
stale device data structure. The kernel will send pointless LPM control
messages to the no longer connected device address and can even cause
several 5 second khubd stalls on some (buggy?) host controllers, before
finally accepting the device's fate amongst a flurry of error messages.
This patch makes the choice of reset dependent on the port status that
has just been read from the hub in addition to the existence of an
in-kernel data structure for the device, and only proceeds with the more
extensive reset if both are valid.
Signed-off-by: Julius Werner <jwerner@chromium.org>
Signed-off-by: Sarah Sharp <sarah.a.sharp@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sergey Senozhatsky [Sat, 22 Jun 2013 14:21:00 +0000 (17:21 +0300)]
zram: allow request end to coincide with disksize
commit
75c7caf5a052ffd8db3312fa7864ee2d142890c4 upstream.
Pass the valid_io_request() check when the request end coincides with
the disksize (end equals bound); only fail if we attempt to read beyond
the bound. Without this fix, mkfs.ext2 produces numerous errors:
[ 2164.632747] quiet_error: 1 callbacks suppressed
[ 2164.633260] Buffer I/O error on device zram0, logical block 153599
[ 2164.633265] lost page write due to I/O error on zram0
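The boundary rule in reduced form (hypothetical helper, not the driver's
valid_io_request() itself):

    #include <stdbool.h>
    #include <stdint.h>

    /* A request whose end lands exactly on the device size is still valid;
     * only I/O that would actually run past the end must be rejected. */
    static bool io_within_disksize(uint64_t start, uint64_t len, uint64_t disksize)
    {
        uint64_t end = start + len;

        return end <= disksize;    /* end == disksize (the bound) is allowed */
    }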
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Thomas Backlund <tmb@mageia.org>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeff Layton [Wed, 7 Aug 2013 14:29:08 +0000 (10:29 -0400)]
cifs: don't instantiate new dentries in readdir for inodes that need to be revalidated immediately
commit
757c4f6260febff982276818bb946df89c1105aa upstream.
David reported that commit c2b93e06 (cifs: only set ops for inodes in
I_NEW state) caused a regression with mfsymlinks. Prior to that patch,
if a mfsymlink dentry was instantiated at readdir time, the inode would
get a new set of ops when it was revalidated. After that patch, this
did not occur.
This patch addresses the problem by simply not instantiating dentries in
the readdir codepath when we know that they will need to be immediately
revalidated. The next attempt to use that dentry will cause a new lookup
to occur (which is basically what we want to happen anyway).
Reported-and-Tested-by: David McBride <dwm37@cam.ac.uk>
Cc: "Stefan (metze) Metzmacher" <metze@samba.org>
Cc: Sachin Prabhu <sprabhu@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chen Gang [Fri, 19 Jul 2013 01:01:36 +0000 (09:01 +0800)]
cifs: extend the buffer length enough for sprintf() use
commit
057d6332b24a4497c55a761c83c823eed9e3f23b upstream.
For cifs_set_cifscreds() in "fs/cifs/connect.c", the 'desc' buffer
length is CIFSCREDS_DESC_SIZE (56, which is less than 256), while the
length of 'ses->domainName' may be up to 255 characters plus the
trailing '\0'.
The related sprintf() can therefore overflow the buffer, so the buffer
needs to be extended enough to hold everything.
It is also necessary to ensure that 'ses->domainName' is less than 256
characters, and to define a macro for that limit instead of hard-coding
the number 256.
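A reduced illustration of the sizing problem (hypothetical names and format
string, not the cifs code itself):

    #include <stdio.h>

    #define EXAMPLE_MAX_DOMAINNAME_LEN 256    /* a named limit, not a bare 256 */

    /* The description buffer must cover the fixed prefix plus the longest
     * possible domain name; a bounded snprintf() keeps the write safe even
     * if the limit is ever miscalculated. */
    static void build_desc_example(const char *domain)
    {
        char desc[sizeof("cifs:d:") + EXAMPLE_MAX_DOMAINNAME_LEN];

        snprintf(desc, sizeof(desc), "cifs:d:%s", domain);
        puts(desc);
    }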
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Shirish Pargaonkar <shirishpargaonkar@gmail.com>
Reviewed-by: Scott Lovenberg <scott.lovenberg@gmail.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Mon, 12 Aug 2013 13:29:30 +0000 (09:29 -0400)]
ext4: flush the extent status cache during EXT4_IOC_SWAP_BOOT
commit
cde2d7a796f7e895e25b43471ed658079345636d upstream.
Previously we were swapping only some of the extent_status LRU
fields during the processing of the EXT4_IOC_SWAP_BOOT ioctl. The
much safer thing to do is to just completely flush the extent status
tree when doing the swap.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Cc: Zheng Liu <gnehzuil.liu@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Piotr Sarna [Fri, 9 Aug 2013 03:02:24 +0000 (23:02 -0400)]
ext4: fix mount/remount error messages for incompatible mount options
commit
6ae6514b33f941d3386da0dfbe2942766eab1577 upstream.
Commit 5688978 ("ext4: improve handling of conflicting mount options")
introduced incorrect messages shown while choosing wrong mount options.
First of all, both cases of incorrect mount options,
"data=journal,delalloc" and "data=journal,dioread_nolock" result in
the same error message.
Secondly, the problem above isn't solved for the remount case: the
mismatched parameter is simply ignored. Moreover, ext4_msg states
that remount with options "data=journal,delalloc" succeeded, which is
not true.
To fix it up, I added a simple check after parse_options() call to
ensure that data=journal and delalloc/dioread_nolock parameters are
not present at the same time.
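A sketch of the kind of post-parsing check described, written against
ext4's option-test helpers as I understand them; the actual patch may
differ in detail:

    /* After parse_options(): reject the two bad combinations with a message
     * that names the conflicting option explicitly. */
    if (test_opt(sb, DATA_FLAGS) == EXT4_MOUNT_JOURNAL_DATA) {
        if (test_opt(sb, DELALLOC)) {
            ext4_msg(sb, KERN_ERR,
                     "can't mount with both data=journal and delalloc");
            goto failed_mount;
        }
        if (test_opt(sb, DIOREAD_NOLOCK)) {
            ext4_msg(sb, KERN_ERR,
                     "can't mount with both data=journal and dioread_nolock");
            goto failed_mount;
        }
    }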
Signed-off-by: Piotr Sarna <p.sarna@partner.samsung.com>
Acked-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Fri, 9 Aug 2013 03:01:24 +0000 (23:01 -0400)]
ext4: allow the mount options nodelalloc and data=journal
commit
59d9fa5c2e9086db11aa287bb4030151d0095a17 upstream.
Commit 26092bf ("ext4: use a table-driven handler for mount options")
wrongly disallows specifying the mount options nodelalloc and
data=journal simultaneously. This is incorrect; it should have only
disallowed the combination of delalloc and data=journal
simultaneously.
Reported-by: Piotr Sarna <p.sarna@partner.samsung.com>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christian König [Mon, 5 Aug 2013 12:10:56 +0000 (14:10 +0200)]
drm/radeon: stop sending invalid UVD destroy msg
commit
641a00593f7d07eab778fbabf546fb68fff3d5ce upstream.
We also need to check the handle.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Mon, 29 Jul 2013 22:56:13 +0000 (18:56 -0400)]
drm/radeon: select audio dto based on encoder id for DCE3
commit
e1accbf0543eecfdb161131208c3dfefee22d61f upstream.
There are two audio dtos on radeon asics that you can
select between. Normally, dto0 is used for hdmi and
dto1 for DP, but it seems that the dto is somehow
tied to the encoders on DCE3 asics.
fixes:
https://bugs.freedesktop.org/show_bug.cgi?id=67435
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michel Dänzer [Wed, 12 Jun 2013 09:58:44 +0000 (11:58 +0200)]
drm: Don't pass negative delta to ktime_sub_ns()
commit
e91abf80a0998f326107874c88d549f94839f13c upstream.
It takes an unsigned value. This happens not to blow up on 64-bit
architectures, but it does on 32-bit, causing
drm_calc_vbltimestamp_from_scanoutpos() to calculate totally bogus
timestamps for vblank events. Which in turn causes e.g. gnome-shell to
hang after a DPMS off cycle with current xf86-video-ati Git.
[airlied: regression introduced in drm: use monotonic time in drm_calc_vbltimestamp_from_scanoutpos]
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=59339
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=59836
Tested-by: shui yangwei <yangweix.shui@intel.com>
Signed-off-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dave Airlie [Wed, 7 Aug 2013 00:01:56 +0000 (10:01 +1000)]
drm/ast: invalidate page tables when pinning a BO
commit
3ac65259328324de323dc006b52ff7c1a5b18d19 upstream.
same fix as cirrus and mgag200.
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Egbert Eich [Wed, 17 Jul 2013 15:40:56 +0000 (17:40 +0200)]
drm/mgag200: Invalidate page tables when pinning a BO
commit
ecaac1c866bcda4780a963b3d18cd310d971aea3 upstream.
When a BO gets pinned, its placement may get changed. If the memory is
mapped into user space and user space has already accessed the mapped
range, the page tables are set up but now point to the wrong memory.
Set bo.mdev->dev_mapping in mgag200_bo_create() to make sure that
ttm_bo_unmap_virtual() called from ttm_bo_handle_move_mem() will take
care of this.
v2: Don't call ttm_bo_unmap_virtual() in mgag200_bo_pin(), fix comment.
Signed-off-by: Egbert Eich <eich@suse.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michal Srb [Tue, 6 Aug 2013 13:26:50 +0000 (15:26 +0200)]
drm/cirrus: Invalidate page tables when pinning a BO
commit
109a51598869a39fdcec2d49672a9a39b6d89481 upstream.
This is a cirrus version of Egbert Eich's patch for mgag200.
Without bo.bdev->dev_mapping set, ttm_bo_unmap_virtual_locked()
called from ttm_bo_handle_move_mem() returns without doing anything. If
any application accessed the memory before it was moved, it will
access the wrong memory the next time. This causes crashes when the
resolution is changed to a lower one.
Signed-off-by: Michal Srb <msrb@suse.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Amit Shah [Mon, 29 Jul 2013 04:53:21 +0000 (14:23 +0930)]
virtio: console: return -ENODEV on all read operations after unplug
commit
96f97a83910cdb9d89d127c5ee523f8fc040a804 upstream.
If a port gets unplugged while a user is blocked on read(), -ENODEV is
returned. However, subsequent read()s returned 0, indicating there is no
host-side connection, but not that the device went away.
This also happened when a port was unplugged and the user didn't have
any blocking operation pending. If the user wasn't monitoring the SIGIO
signal, they had no chance to find out that the port went away.
Fix by returning -ENODEV on all read()s after the port gets unplugged.
write() already behaves this way.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Amit Shah [Mon, 29 Jul 2013 04:51:32 +0000 (14:21 +0930)]
virtio: console: fix raising SIGIO after port unplug
commit
92d3453815fbe74d539c86b60dab39ecdf01bb99 upstream.
SIGIO should be sent when a port gets unplugged. It should only be sent
to processes that have the port opened and have asked for SIGIO to be
delivered. We were clearing out guest_connected before calling
send_sigio_to_port(), resulting in the SIGIO not getting sent to those
processes.
Fix by setting guest_connected to false after invoking the sigio
function.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Amit Shah [Mon, 29 Jul 2013 04:50:29 +0000 (14:20 +0930)]
virtio: console: clean up port data immediately at time of unplug
commit
ea3768b4386a8d1790f4cc9a35de4f55b92d6442 upstream.
We used to keep the port's char device structs and the /sys entries
around till the last reference to the port was dropped. This is
actually unnecessary, and resulted in buggy behaviour:
1. Open port in guest
2. Hot-unplug port
3. Hot-plug a port with the same 'name' property as the unplugged one
This resulted in hot-plug being unsuccessful, as a port with the same
name already exists (even though it was unplugged).
This behaviour resulted in a warning message like this one:
-------------------8<---------------------------------------
WARNING: at fs/sysfs/dir.c:512 sysfs_add_one+0xc9/0x130() (Not tainted)
Hardware name: KVM
sysfs: cannot create duplicate filename
'/devices/pci0000:00/0000:00:04.0/virtio0/virtio-ports/vport0p1'
Call Trace:
[<ffffffff8106b607>] ? warn_slowpath_common+0x87/0xc0
[<ffffffff8106b6f6>] ? warn_slowpath_fmt+0x46/0x50
[<ffffffff811f2319>] ? sysfs_add_one+0xc9/0x130
[<ffffffff811f23e8>] ? create_dir+0x68/0xb0
[<ffffffff811f2469>] ? sysfs_create_dir+0x39/0x50
[<ffffffff81273129>] ? kobject_add_internal+0xb9/0x260
[<ffffffff812733d8>] ? kobject_add_varg+0x38/0x60
[<ffffffff812734b4>] ? kobject_add+0x44/0x70
[<ffffffff81349de4>] ? get_device_parent+0xf4/0x1d0
[<ffffffff8134b389>] ? device_add+0xc9/0x650
-------------------8<---------------------------------------
Instead of relying on guest applications to release all references to
the ports, we should go ahead and unregister the port from all the core
layers. Any open/read calls on the port will then just return errors,
and an unplug/plug operation on the host will succeed as expected.
This also caused buggy behaviour in case of the device removal (not just
a port): when the device was removed (which means all ports on that
device are removed automatically as well), the ports with active
users would clean up only when the last references were dropped -- and
it would be too late then to be referencing char device pointers,
resulting in oopses:
-------------------8<---------------------------------------
PID: 6162  TASK: ffff8801147ad500  CPU: 0  COMMAND: "cat"
 #0 [ffff88011b9d5a90] machine_kexec at ffffffff8103232b
 #1 [ffff88011b9d5af0] crash_kexec at ffffffff810b9322
 #2 [ffff88011b9d5bc0] oops_end at ffffffff814f4a50
 #3 [ffff88011b9d5bf0] die at ffffffff8100f26b
 #4 [ffff88011b9d5c20] do_general_protection at ffffffff814f45e2
 #5 [ffff88011b9d5c50] general_protection at ffffffff814f3db5
    [exception RIP: strlen+2]
    RIP: ffffffff81272ae2  RSP: ffff88011b9d5d00  RFLAGS: 00010246
    RAX: 0000000000000000  RBX: ffff880118901c18  RCX: 0000000000000000
    RDX: ffff88011799982c  RSI: 00000000000000d0  RDI: 3a303030302f3030
    RBP: ffff88011b9d5d38  R8:  0000000000000006  R9:  ffffffffa0134500
    R10: 0000000000001000  R11: 0000000000001000  R12: ffff880117a1cc10
    R13: 00000000000000d0  R14: 0000000000000017  R15: ffffffff81aff700
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #6 [ffff88011b9d5d00] kobject_get_path at ffffffff8126dc5d
 #7 [ffff88011b9d5d40] kobject_uevent_env at ffffffff8126e551
 #8 [ffff88011b9d5dd0] kobject_uevent at ffffffff8126e9eb
 #9 [ffff88011b9d5de0] device_del at ffffffff813440c7
-------------------8<---------------------------------------
So clean up when we have all the context, and all that's left to do when
the references to the port have dropped is to free up the port struct
itself.
Reported-by: chayang <chayang@redhat.com>
Reported-by: YOGANANTH SUBRAMANIAN <anantyog@in.ibm.com>
Reported-by: FuXiangChun <xfu@redhat.com>
Reported-by: Qunfang Zhang <qzhang@redhat.com>
Reported-by: Sibiao Luo <sluo@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Amit Shah [Mon, 29 Jul 2013 04:47:13 +0000 (14:17 +0930)]
virtio: console: fix race in port_fops_open() and port unplug
commit
671bdea2b9f210566610603ecbb6584c8a201c8c upstream.
Between open() being called and processed, the port can be unplugged.
Check if this happened, and bail out.
A simple test script to reproduce this is:
while true; do for i in $(seq 1 100); do echo $i > /dev/vport0p3; done; done;
This opens and closes the port a lot of times; unplugging the port while
this is happening triggers the bug.
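A sketch of the shape of such a check (hypothetical field and lock names;
the real driver differs in detail):

    /* In the open() path: the port looked up from the inode may already
     * have been unplugged by the time we get here, so re-check under the
     * port's lock and bail out before handing a file descriptor back. */
    spin_lock_irq(&port->lock_example);
    if (!port->still_plugged) {
        spin_unlock_irq(&port->lock_example);
        return -ENXIO;    /* port vanished between lookup and open */
    }
    port->open_count++;
    spin_unlock_irq(&port->lock_example);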
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>