Virupax Sadashivpetimath [Tue, 12 Jun 2012 13:10:58 +0000 (15:10 +0200)]
spi/pl022: disable port when unused
commit
fd316941cfee1fbd12746afea83720fb7823888a upstream.
Commit ffbbdd21329f3e15eeca6df2d4bc11c04d9d91c0 "spi: create a message
queueing infrastructure" accidentally deleted the logic to disable the port
when unused, leading to higher power consumption. Fix this up.
Cc: Vinit Shenoy <vinit.shenoy@st.com>
Signed-off-by: Virupax Sadashivpetimath <virupax.sadashivpetimath@stericsson.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeff Layton [Tue, 24 Jul 2012 00:34:17 +0000 (20:34 -0400)]
cifs: reinstate sec=ntlmv2 mount option
commit
7659624ffb550d69c87f9af9ae63e717daa874bd upstream.
sec=ntlmv2 as a mount option got dropped in the mount option overhaul.
Cc: Sachin Prabhu <sprabhu@redhat.com>
Reported-by: Günter Kukkukk <linux@kukkukk.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chris Mason [Wed, 25 Jul 2012 19:57:13 +0000 (15:57 -0400)]
Btrfs: call the ordered free operation without any locks held
commit
e9fbcb42201c862fd6ab45c48ead4f47bb2dea9d upstream.
Each ordered operation has a free callback, and this was called with the
worker spinlock held. Josef made the free callback also call iput,
which we can't do with the spinlock.
This drops the spinlock for the free operation and grabs it again before
moving through the rest of the list. We'll circle back around to this
and find a cleaner way that doesn't bounce the lock around so much.
Signed-off-by: Chris Mason <chris.mason@fusionio.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Lan Tianyu [Fri, 20 Jul 2012 05:29:16 +0000 (13:29 +0800)]
ACPI/AC: prevent OOPS on some boxes due to missing power_supply_register() return value check
commit
f197ac13f6eeb351b31250b9ab7d0da17434ea36 upstream.
In ac.c, power_supply_register()'s return value is not checked.
As a result, the driver's add() ops may return success
even though the device failed to initialize.
For example, some BIOS may describe two ACADs in the same DSDT.
The second ACAD device will fail to register,
but the ACPI driver's add() ops returns successfully.
The ACPI device will then receive ACPI notifications and cause an OOPS.
https://bugzilla.redhat.com/show_bug.cgi?id=772730
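A minimal sketch of the kind of check the fix adds in the driver's add() path
(structure and label names here are illustrative, not the exact ac.c code):

	result = power_supply_register(&device->dev, &ac->charger);
	if (result) {
		/* registration failed: propagate the error instead of
		 * claiming success and oopsing on a later notification */
		goto end;
	}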
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jean Delvare [Tue, 12 Jun 2012 08:43:28 +0000 (10:43 +0200)]
ACPI, APEI: Fixup common access width firmware bug
commit
f712c71f7b2b43b894d1e92e1b77385fcad8815f upstream.
Many firmwares have a common register definition bug where 8-bit
access width is specified for a 32-bit register. Ideally this should
be fixed in the BIOS, but earlier versions of the kernel did not
complain, so fix that up silently.
This closes kernel bug #43282:
https://bugzilla.kernel.org/show_bug.cgi?id=43282
Signed-off-by: Jean Delvare <jdelvare@suse.de>
Acked-by: Huang Ying <ying.huang@intel.com>
Acked-by: Gary Hade <garyhade@us.ibm.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tejun Heo [Tue, 17 Jul 2012 19:39:26 +0000 (12:39 -0700)]
workqueue: perform cpu down operations from low priority cpu_notifier()
commit
6575820221f7a4dd6eadecf7bf83cdd154335eda upstream.
Currently, all workqueue cpu hotplug operations run off
CPU_PRI_WORKQUEUE which is higher than normal notifiers. This is to
ensure that workqueue is up and running while bringing up a CPU before
other notifiers try to use workqueue on the CPU.
Per-cpu workqueues are supposed to remain working and bound to the CPU
for normal CPU_DOWN_PREPARE notifiers. This holds mostly true even
with workqueue offlining running with higher priority because
workqueue CPU_DOWN_PREPARE only creates a bound trustee thread which
runs the per-cpu workqueue without concurrency management and without
explicitly detaching the existing workers.
However, if the trustee needs to create new workers, it creates
unbound workers which may wander off to other CPUs while
CPU_DOWN_PREPARE notifiers are in progress. Furthermore, if the CPU
down is cancelled, the per-CPU workqueue may end up with workers which
aren't bound to the CPU.
While reliably reproducible with a convoluted artificial test-case
involving scheduling and flushing CPU burning work items from CPU down
notifiers, this isn't very likely to happen in the wild, and, even
when it happens, the effects are likely to be hidden by the following
successful CPU down.
Fix it by using different priorities for up and down notifiers - high
priority for up operations and low priority for down operations.
Workqueue cpu hotplug operations will soon go through further cleanup.
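A hedged sketch of the registration split being described; the priority
constants and callback names are assumptions for illustration, not quoted
from the patch:

	/* high priority: workqueue must be running before other up notifiers */
	cpu_notifier(workqueue_cpu_up_callback, CPU_PRI_WORKQUEUE_UP);
	/* low priority: CPU_DOWN_PREPARE notifiers still see a working,
	 * CPU-bound workqueue before it is torn down */
	cpu_notifier(workqueue_cpu_down_callback, CPU_PRI_WORKQUEUE_DOWN);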
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: "Rafael J. Wysocki" <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ben Hutchings [Wed, 20 Jun 2012 01:31:11 +0000 (02:31 +0100)]
staging: zsmalloc: Finish conversion to a separate module
commit
069f101fa463351f528773d73b74e9b606b3f66a upstream.
ZSMALLOC is tristate, but the code has no MODULE_LICENSE and since it
depends on GPL-only symbols it cannot be loaded as a module. This in
turn breaks zram which now depends on it. I assume it's meant to be
Dual BSD/GPL like the other z-stuff.
There is also no module_exit, which will make it impossible to unload.
Add the appropriate module_init and module_exit declarations suggested
by comments.
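A hedged sketch of the declarations being added (the exact init/exit function
names in zsmalloc may differ):

	module_init(zs_init);
	module_exit(zs_exit);

	MODULE_LICENSE("Dual BSD/GPL");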
Reported-by: Christian Ohm <chr.ohm@gmx.net>
References: http://bugs.debian.org/677273
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul Gortmaker [Tue, 5 Jun 2012 15:15:50 +0000 (11:15 -0400)]
stable: update references to older 2.6 versions for 3.x
commit
2584f5212d97b664be250ad5700a2d0fee31a10d upstream.
Also add information on where the respective trees are.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Acked-by: Rob Landley <rob@landley.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Tue, 10 Jul 2012 15:58:04 +0000 (17:58 +0200)]
udf: Improve table length check to avoid possible overflow
commit
57b9655d01ef057a523e810d29c37ac09b80eead upstream.
When a partition table length is corrupted to be close to 1 << 32, the
check for its length may overflow on 32-bit systems and we will think
the length is valid. Later on the kernel can crash trying to read beyond
end of buffer. Fix the check to avoid possible overflow.
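An illustrative (not verbatim) example of the overflow: on a 32-bit system a
check of the form

	if (table_len * sizeof(entry) + sizeof(header) > sb->s_blocksize)
		return -EIO;

wraps around when table_len is close to 1 << 32, so the bogus length passes.
Rearranging the comparison so the untrusted value is never multiplied or added
first, e.g.

	if (table_len > (sb->s_blocksize - sizeof(header)) / sizeof(entry))
		return -EIO;

avoids the wrap; the names here are illustrative, not the actual udf fields.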
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Joerg Roedel [Thu, 19 Jul 2012 11:42:54 +0000 (13:42 +0200)]
iommu/amd: Fix hotplug with iommu=pt
commit
2c9195e990297068d0f1f1bd8e2f1d09538009da upstream.
This did not work because devices are not put into the
pt_domain. Fix this.
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Joerg Roedel [Thu, 19 Jul 2012 08:56:10 +0000 (10:56 +0200)]
iommu/amd: Add missing spin_lock initialization
commit
2c13d47a1a7ee8808796016c617aef25fd1d1925 upstream.
Add missing spin_lock initialization in
amd_iommu_bind_pasid() function and make lockdep happy
again.
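The missing piece is the usual one-line initialization before the lock can be
taken; illustratively (the structure and field names are an assumption, not
quoted from the patch):

	spin_lock_init(&pasid_state->lock);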
Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Heiko Carstens [Fri, 27 Jul 2012 07:45:39 +0000 (09:45 +0200)]
s390/mm: fix fault handling for page table walk case
commit
008c2e8f247f0a8db1e8e26139da12f3a3abcda0 upstream.
Make sure the kernel does not incorrectly create a SIGBUS signal during
user space accesses:
For user space accesses in the switched addressing mode case the kernel
may walk page tables and access user address space via the kernel
mapping. If a page table entry is invalid the function __handle_fault()
gets called in order to emulate a page fault and trigger all the usual
actions like paging in a missing page etc. by calling handle_mm_fault().
If handle_mm_fault() returns with an error fixup handling is necessary.
For the switched addressing mode case all errors need to be mapped to
-EFAULT, so that the calling uaccess function can return -EFAULT to
user space.
Unfortunately, __handle_fault() incorrectly calls do_sigbus() if
VM_FAULT_SIGBUS is set. This however should only happen if a page fault
was triggered by a user space instruction. For kernel mode uaccesses
the correct action is to only return -EFAULT.
So user space may incorrectly see SIGBUS signals because of this bug.
For current machines this would only be possible for the switched
addressing mode case in conjunction with futex operations.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Martin Schwidefsky [Thu, 26 Jul 2012 06:53:06 +0000 (08:53 +0200)]
s390/mm: downgrade page table after fork of a 31 bit process
commit
0f6f281b731d20bfe75c13f85d33f3f05b440222 upstream.
The downgrade of the 4 level page table created by init_new_context is
currently done only in start_thread31. If a 31 bit process forks the
new mm uses a 4 level page table, including the task size of 2<<42
that goes along with it. This is incorrect as now a 31 bit process
can map memory beyond 2GB. Define arch_dup_mmap to do the downgrade
after fork.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Heiko Carstens [Fri, 13 Jul 2012 13:45:33 +0000 (15:45 +0200)]
s390/idle: fix sequence handling vs cpu hotplug
commit
0008204ffe85d23382d6fd0f971f3f0fbe70bae2 upstream.
The s390 idle accounting code uses a sequence counter which gets used
when the per cpu idle statistics get updated and read.
One assumption on read access is that only when the sequence counter is
even and did not change while reading all values the result is valid.
On cpu hotplug however the per cpu data structure gets initialized via
a cpu hotplug notifier on CPU_ONLINE.
CPU_ONLINE however is too late, since the onlined cpu is already running
and might access the per cpu data. Worst case is that the data structure
gets initialized while an idle thread is updating its idle statistics.
This will result in an uneven sequence counter after an update.
As a result user space tools like top, which access /proc/stat in order
to get idle stats, will busy loop waiting for the sequence counter to
become even again, which will never happen until the queried cpu will
update its idle statistics again. And even then the sequence counter
will only have an even value for a couple of cpu cycles.
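For context, a simplified illustration (not the exact s390 code) of the read
side such statistics readers rely on; it only completes once the sequence
counter is even and unchanged across the reads:

	do {
		seq = ACCESS_ONCE(idle->sequence);
		smp_rmb();			/* read counter before data */
		idle_time = idle->idle_time;
		smp_rmb();			/* read data before re-check */
	} while ((seq & 1) || ACCESS_ONCE(idle->sequence) != seq);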
Fix this by moving the initialization of the per cpu idle statistics
to cpu_init(). I prefer that solution in favor of changing the
notifier to CPU_UP_PREPARE, which would be a different solution to
the problem.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Amitkumar Karwar [Thu, 12 Jul 2012 01:12:57 +0000 (18:12 -0700)]
mwifiex: correction in mcs index check
commit
fe020120cb863ba918c6d603345342a880272c4d upstream.
mwifiex driver supports 2x2 chips as well. Hence valid mcs values
are 0 to 15. The check for mcs index is corrected in this patch.
For example: if 40MHz is enabled and mcs index is 11, "iw link"
command would show "tx bitrate: 108.0 MBit/s" without this patch.
Now it shows "tx bitrate: 108.0 MBit/s MCS 11 40Mhz" with the patch.
Signed-off-by: Amitkumar Karwar <akarwar@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Forest Bond [Fri, 13 Jul 2012 16:26:06 +0000 (12:26 -0400)]
rtlwifi: rtl8192de: Fix phy-based version calculation
commit
f1b00f4dab29b57bdf1bc03ef12020b280fd2a72 upstream.
Commit d83579e2a50ac68389e6b4c58b845c702cf37516 incorporated some
changes from the vendor driver that made it newly important that the
calculated hardware version correctly include the CHIP_92D bit, as all
of the IS_92D_* macros were changed to depend on it. However, this bit
was being unset for dual-mac, dual-phy devices. The vendor driver
behavior was modified to not do this, but unfortunately this change was
not picked up along with the others. This caused scanning in the 2.4GHz
band to be broken, and possibly other bugs as well.
This patch brings the version calculation logic in parity with the
vendor driver in this regard, and in doing so fixes the regression.
However, the version calculation code in general continues to be largely
incoherent and messy, and needs to be cleaned up.
Signed-off-by: Forest Bond <forest.bond@rapidrollout.com>
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Larry Finger [Wed, 11 Jul 2012 19:37:28 +0000 (14:37 -0500)]
rtlwifi: rtl8192cu: Change buffer allocation for synchronous reads
commit
3ce4d85b76010525adedcc2555fa164bf706a2f3 upstream.
In commit a7959c1, the USB part of rtlwifi was changed so that
_usb_read_sync() uses a preallocated buffer rather than one
acquired with kmalloc.
as though it were synchronous, there seem to be simultaneous users,
and the selection of the index to the data buffer is not multi-user
safe. This situation is addressed by adding a new spinlock. The routine
cannot sleep, thus a mutex is not allowed.
Signed-off-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Meenakshi Venkataraman [Wed, 16 May 2012 20:40:50 +0000 (22:40 +0200)]
iwlwifi: fix debug print in iwl_sta_calc_ht_flags
commit
a35e270881a5db1ec9ac8bc6d61ebc3e85c14f33 upstream.
We missed passing an argument to the
debug print. Fix it.
Signed-off-by: Meenakshi Venkataraman <meenakshi.venkataraman@intel.com>
Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eliad Peller [Sun, 13 May 2012 15:07:04 +0000 (18:07 +0300)]
mac80211: fail authentication when AP denied authentication
commit
dac211ec10d268b9d09000093a9fa2ac1773894f upstream.
ieee80211_rx_mgmt_auth() doesn't handle denied authentication
properly - it authenticates the station and waits for association
(for 5 seconds) instead of failing the authentication.
Fix it by destroying auth_data and bailing out instead.
Signed-off-by: Eliad Peller <eliad@wizery.com>
Acked-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Thu, 19 Jul 2012 06:13:36 +0000 (06:13 +0000)]
tun: fix a crash bug and a memory leak
commit
b09e786bd1dd66418b69348cb110f3a64764626a upstream.
This patch fixes a crash
tun_chr_close -> netdev_run_todo -> tun_free_netdev -> sk_release_kernel ->
sock_release -> iput(SOCK_INODE(sock))
introduced by commit 1ab5ecb90cb6a3df1476e052f76a6e8f6511cb3d.
The problem is that this socket is embedded in struct tun_struct and has
no inode; iput is called on an invalid inode, which modifies invalid memory
and optionally causes a crash.
sock_release also decrements sockets_in_use, this causes a bug that
"sockets: used" field in /proc/*/net/sockstat keeps on decreasing when
creating and closing tun devices.
This patch introduces a flag SOCK_EXTERNALLY_ALLOCATED that instructs
sock_release to not free the inode and not decrement sockets_in_use,
fixing both memory corruption and sockets_in_use underflow.
It should be backported to the 3.3 and 3.4 stable trees.
Signed-off-by: Mikulas Patocka <mikulas@artax.karlin.mff.cuni.cz>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rajiv Andrade [Tue, 24 Apr 2012 20:38:17 +0000 (17:38 -0300)]
TPM: chip disabled state erroneously being reported as error
commit
24ebe6670de3d1f0dca11c9eb372134c7ab05503 upstream.
tpm_do_selftest() attempts to read a PCR in order to
decide if one can rely on the TPM being used or not.
The function that's used by __tpm_pcr_read() does not
expect the TPM to be disabled or deactivated, and if so,
reports an error.
It's fine if the TPM returns this error when trying to
use it for the first time after a power cycle, but it's
definitely not if it already returned success for a
previous attempt to read one of its PCRs.
tpm_do_selftest() was modified so that the driver only
reports this return code as an error when it really is one.
Reported-and-tested-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Colin Cross [Thu, 19 Jul 2012 08:38:06 +0000 (10:38 +0200)]
PM / Sleep: call early resume handlers when suspend_noirq fails
commit
064b021fbe470ecc9ca10f9f87af48c0fc0865fb upstream.
Commit cf579dfb82550e34de7ccf3ef090d8b834ccd3a9 (PM / Sleep: Introduce
"late suspend" and "early resume" of devices) introduced a bug where
suspend_late handlers would be called, but if dpm_suspend_noirq returned
an error the early_resume handlers would never be called. All devices
would end up on the dpm_late_early_list, and would never be resumed
again.
Fix it by calling dpm_resume_early when dpm_suspend_noirq returns
an error.
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Srivatsa S. Bhat [Sat, 16 Jun 2012 13:30:45 +0000 (15:30 +0200)]
ftrace: Disable function tracing during suspend/resume and hibernation, again
commit
443772d408a25af62498793f6f805ce3c559309a upstream.
If function tracing is enabled for some of the low-level suspend/resume
functions, it leads to triple fault during resume from suspend, ultimately
ending up in a reboot instead of a resume (or a total refusal to come out
of suspended state, on some machines).
This issue was explained in more detail in commit f42ac38c59e0a03d (ftrace:
disable tracing for suspend to ram). However, the changes made by that commit
got reverted by commit cbe2f5a6e84eebb (tracing: allow tracing of
suspend/resume & hibernation code again). So, unfortunately since things are
not yet robust enough to allow tracing of low-level suspend/resume functions,
suspend/resume is still broken when ftrace is enabled.
So fix this by disabling function tracing during suspend/resume & hibernation.
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
J. Bruce Fields [Mon, 23 Jul 2012 19:17:17 +0000 (15:17 -0400)]
locks: fix checking of fcntl_setlease argument
commit
0ec4f431eb56d633da3a55da67d5c4b88886ccc7 upstream.
The only checks of the long argument passed to fcntl(fd,F_SETLEASE,.)
are done after converting the long to an int. Thus some illegal values
may be let through and cause problems in later code.
[ They actually *don't* cause problems in mainline, as of Dave Jones's
commit 8d657eb3b438 "Remove easily user-triggerable BUG from
generic_setlease", but we should fix this anyway. And this patch will
be necessary to fix real bugs on earlier kernels. ]
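A hedged, generic illustration of the class of bug described (not the actual
fs/locks.c code; do_setlease() is a made-up helper): if validation only looks
at the truncated int, any long whose low 32 bits happen to match an allowed
value passes, while later code still sees the full value.

	static int setlease_checked(struct file *filp, long arg)
	{
		int type = (int)arg;		/* truncation happens first */

		if (type != F_RDLCK && type != F_WRLCK && type != F_UNLCK)
			return -EINVAL;		/* the check sees only 32 bits */
		return do_setlease(filp, arg);	/* later code gets the raw long */
	}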
Signed-off-by: J. Bruce Fields <bfields@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tony Luck [Wed, 11 Jul 2012 17:20:47 +0000 (10:20 -0700)]
x86/mce: Fix siginfo_t->si_addr value for non-recoverable memory faults
commit
6751ed65dc6642af64f7b8a440a75563c8aab7ae upstream.
In commit dad1743e5993f1 ("x86/mce: Only restart instruction after machine
check recovery if it is safe") we fixed mce_notify_process() to force a
signal to the current process if it was not restartable (RIPV bit not
set in MCG_STATUS). But doing it here means that the process doesn't
get told the virtual address of the fault via siginfo_t->si_addr. This
would prevent application level recovery from the fault.
Make a new MF_MUST_KILL flag bit for memory_failure() et al. to use so
that we will provide the right information with the signal.
Signed-off-by: Tony Luck <tony.luck@intel.com>
Acked-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Henningsson [Fri, 20 Jul 2012 08:37:25 +0000 (10:37 +0200)]
ALSA: hda - add dock support for Thinkpad X230 Tablet
commit
108cc108a3bb42fe4705df1317ff98e1e29428a6 upstream.
Also add a model/fixup string "lenovo-dock", so that other Thinkpad
users will be able to test this fixup easily, to see if it enables
dock I/O for them as well.
BugLink: https://bugs.launchpad.net/bugs/1026953
Tested-by: John McCarron <john.mccarron@canonical.com>
Signed-off-by: David Henningsson <david.henningsson@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gerd Hoffmann [Tue, 19 Jun 2012 07:54:48 +0000 (09:54 +0200)]
Revert "usb/uas: make sure data urb is gone if we receive status before that"
commit
c621a81edecdee85da32c566c21836332c764fda upstream.
This reverts commit e4d8318a85779b25b880187b1b1c44e797bd7d4b.
This patch makes uas.c call usb_unlink_urb on data urbs. The data urbs
get freed in the completion callback. This is illegal according to the
usb_unlink_urb documentation.
This patch also makes the code expect the data completion callback
being called before the status completion callback. This isn't
guaranteed to be the case, even though the actual data transfer should
be finished by the time the status is received.
Background: the ehci irq handler, for example, only knows that there are
finished transfers; it then has to go check the QHs & TDs to see which
transfers actually finished. It has no way to figure out in which order
the transfers completed. The xhci driver can call the callbacks in
completion order thanks to the event queue. This does nicely explain
why the driver is solid on a (usb2) xhci port whereas it goes crazy on
ehci in my testing.
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bjørn Mork [Thu, 12 Jul 2012 10:37:32 +0000 (12:37 +0200)]
USB: option: add ZTE MF821D
commit
09110529780890804b22e997ae6b4fe3f0b3b158 upstream.
Sold by O2 (telefonica germany) under the name "LTE4G"
Tested-by: Thomas Schäfer <tschaefer@t-online.de>
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kevin Cernekee [Mon, 25 Jun 2012 04:11:22 +0000 (21:11 -0700)]
usb: gadget: Fix g_ether interface link status
commit
31bde1ceaa873bcaecd49e829bfabceacc4c512d upstream.
A "usb0" interface that has never been connected to a host has an unknown
operstate, and therefore the IFF_RUNNING flag is (incorrectly) asserted
when queried by ifconfig, ifplugd, etc. This is a result of calling
netif_carrier_off() too early in the probe function; it should be called
after register_netdev().
Similar problems have been fixed in many other drivers, e.g.:
e826eafa6 (bonding: Call netif_carrier_off after register_netdevice)
0d672e9f8 (drivers/net: Call netif_carrier_off at the end of the probe)
6a3c869a6 (cxgb4: fix reported state of interfaces without link)
The fix is to move netif_carrier_off() to the end of the function.
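A hedged sketch of the intended ordering at the end of the probe path (error
handling and the surrounding gadget setup are omitted):

	status = register_netdev(net);
	if (status < 0)
		return status;
	/* operstate tracking only becomes active after register_netdev(),
	 * so the link now correctly starts out as "no carrier" */
	netif_carrier_off(net);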
Signed-off-by: Kevin Cernekee <cernekee@gmail.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hans de Goede [Wed, 4 Jul 2012 07:18:01 +0000 (09:18 +0200)]
usbdevfs: Correct amount of data copied to user in processcompl_compat
commit
2102e06a5f2e414694921f23591f072a5ba7db9f upstream.
iso data buffers may have holes in them if some packets were short, so for
iso urbs we should always copy the entire buffer, just like the regular
processcompl does.
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dylan Reid [Fri, 20 Jul 2012 00:52:58 +0000 (17:52 -0700)]
ALSA: hda - Turn on PIN_OUT from hdmi playback prepare.
commit
9e76e6d031482194a5b24d8e9ab88063fbd6b4b5 upstream.
Turn on the pin widget's PIN_OUT bit from playback prepare. The pin is
enabled in open, but is disabled in hdmi_init_pin which is called during
system resume. This causes a system suspend/resume during playback to
mute HDMI/DP. Enabling the pin in prepare instead of open allows calling
snd_pcm_prepare after a system resume to restore audio.
Signed-off-by: Dylan Reid <dgreid@chromium.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Henningsson [Wed, 18 Jul 2012 05:38:46 +0000 (07:38 +0200)]
ALSA: hda - Add support for Realtek ALC282
commit
4e01ec636e64707d202a1ca21a47bbc6d53085b7 upstream.
This codec has a separate dmic path (separate dmic only ADC),
and thus it looks mostly like ALC275.
BugLink: https://bugs.launchpad.net/bugs/1025377
Tested-by: Ray Chen <ray.chen@canonical.com>
Signed-off-by: David Henningsson <david.henningsson@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mark Brown [Wed, 11 Jul 2012 18:03:48 +0000 (19:03 +0100)]
ASoC: wm8962: Redo early init of the part on resume
commit
e4dd76788c7e5b27165890d712c8c4f6f0abd645 upstream.
Ensure robust startup of the part by going through the reset procedure
prior to resyncing the full register cache, avoiding potential intermittent
faults in some designs.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mark Brown [Fri, 20 Jul 2012 16:29:34 +0000 (17:29 +0100)]
ASoC: dapm: Fix _PRE and _POST events for DAPM performance improvements
commit
0ff97ebf0804d2e519d578fcb4db03f104d2ca8c upstream.
Ever since the DAPM performance improvements we've been marking all widgets
as not dirty after each DAPM run. Since _PRE and _POST events aren't part
of the DAPM graph this has rendered them non-functional, they will never be
marked dirty again and thus will never be run again.
Fix this by skipping them when marking widgets as not dirty.
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Acked-by: Liam Girdwood <lrg@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nishanth Menon [Fri, 18 May 2012 17:26:19 +0000 (12:26 -0500)]
ARM: OMAP2+: OPP: Fix to ensure check of right oppdef after bad one
commit
b110547e586eb5825bc1d04aa9147bff83b57672 upstream.
Commit 9fa2df6b90786301b175e264f5fa9846aba81a65
(ARM: OMAP2+: OPP: allow OPP enumeration to continue if device is not present)
makes the logic:
	for (i = 0; i < opp_def_size; i++) {
		<snip>
		if (!oh || !oh->od) {
			<snip>
			continue;
		}
		<snip>
		opp_def++;
	}
In short, the moment we hit a "Bad OPP", we end up looping the list
comparing against the bad opp definition pointer for the rest of the
iteration count. Instead, increment opp_def in the for loop itself
and allow continue to be used in code without much thought so that
we check the next set of OPP definition pointers :)
Cc: Steve Sakoman <steve@sakoman.com>
Cc: Tony Lindgren <tony@atomide.com>
Signed-off-by: Nishanth Menon <nm@ti.com>
Signed-off-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Albert Pool [Mon, 14 May 2012 16:08:32 +0000 (18:08 +0200)]
rt2800usb: 2001:3c17 is an RT3370 device
commit
8fd9d059af12786341dec5a688e607bcdb372238 upstream.
D-Link DWA-123 rev A1
Signed-off-by: Albert Pool <albertpool@solcon.nl>
Acked-by: Gertjan van Wingerde <gwingerde@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Fri, 29 Jun 2012 15:34:26 +0000 (15:34 +0000)]
SCSI: Avoid dangling pointer in scsi_requeue_command()
commit
940f5d47e2f2e1fa00443921a0abf4822335b54d upstream.
When we call scsi_unprep_request() the command associated with the request
gets destroyed and therefore drops its reference on the device. If this was
the only reference, the device may get released and we end up with a NULL
pointer deref when we call blk_requeue_request.
Reported-by: Mike Christie <michaelc@cs.wisc.edu>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Mike Christie <michaelc@cs.wisc.edu>
Reviewed-by: Tejun Heo <tj@kernel.org>
[jejb: enhance comment and add commit log for stable]
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Fri, 29 Jun 2012 15:33:22 +0000 (15:33 +0000)]
SCSI: Fix device removal NULL pointer dereference
commit
67bd94130015c507011af37858989b199c52e1de upstream.
Use blk_queue_dead() to test whether the queue is dead instead
of !sdev. Since scsi_prep_fn() may be invoked concurrently with
__scsi_remove_device(), keep the queuedata (sdev) pointer in
__scsi_remove_device(). This patch fixes a kernel oops that
can be triggered by USB device removal. See also
http://www.spinics.net/lists/linux-scsi/msg56254.html.
Other changes included in this patch:
- Swap the blk_cleanup_queue() and kfree() calls in
scsi_host_dev_release() to make that code easier to grasp.
- Remove the queue dead check from scsi_run_queue() since the
queue state can change anyway at any point in that function
where the queue lock is not held.
- Remove the queue dead check from the start of scsi_request_fn()
since it is redundant with the scsi_device_online() check.
Reported-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Mike Christie <michaelc@cs.wisc.edu>
Reviewed-by: Tejun Heo <tj@kernel.org>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Fri, 22 Jun 2012 06:47:28 +0000 (23:47 -0700)]
SCSI: fix hot unplug vs async scan race
commit
3b661a92e869ebe2358de8f4b3230ad84f7fce51 upstream.
The following crash results from cases where the end_device has been
removed before scsi_sysfs_add_sdev has had a chance to run.
BUG: unable to handle kernel NULL pointer dereference at 0000000000000098
IP: [<ffffffff8115e100>] sysfs_create_dir+0x32/0xb6
...
Call Trace:
 [<ffffffff8125e4a8>] kobject_add_internal+0x120/0x1e3
 [<ffffffff81075149>] ? trace_hardirqs_on+0xd/0xf
 [<ffffffff8125e641>] kobject_add_varg+0x41/0x50
 [<ffffffff8125e70b>] kobject_add+0x64/0x66
 [<ffffffff8131122b>] device_add+0x12d/0x63a
 [<ffffffff814b65ea>] ? _raw_spin_unlock_irqrestore+0x47/0x56
 [<ffffffff8107de15>] ? module_refcount+0x89/0xa0
 [<ffffffff8132f348>] scsi_sysfs_add_sdev+0x4e/0x28a
 [<ffffffff8132dcbb>] do_scan_async+0x9c/0x145
...teach scsi_sysfs_add_devices() to check for deleted devices before
trying to add them, and teach scsi_remove_target() how to remove targets
that have not been added via device_add().
Reported-by: Dariusz Majchrzak <dariusz.majchrzak@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Fri, 22 Jun 2012 06:25:32 +0000 (23:25 -0700)]
SCSI: fix eh wakeup (scsi_schedule_eh vs scsi_restart_operations)
commit
57fc2e335fd3c2f898ee73570dc81426c28dc7b4 upstream.
Rapid ata hotplug on a libsas controller results in cases where libsas
is waiting indefinitely on eh to perform an ata probe.
A race exists between scsi_schedule_eh() and scsi_restart_operations()
in the case when scsi_restart_operations() issues i/o to other devices
in the sas domain. When this happens the host state transitions from
SHOST_RECOVERY (set by scsi_schedule_eh) back to SHOST_RUNNING and
->host_busy is non-zero so we put the eh thread to sleep even though
->host_eh_scheduled is active.
Before putting the error handler to sleep we need to check if the
host_state needs to return to SHOST_RECOVERY for another trip through
eh. Since i/o that is released by scsi_restart_operations has been
blocked for at least one eh cycle, this implementation allows those
i/o's to run before another eh cycle starts to discourage hung task
timeouts.
Reported-by: Tom Jackson <thomas.p.jackson@intel.com>
Tested-by: Tom Jackson <thomas.p.jackson@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Fri, 22 Jun 2012 06:36:20 +0000 (23:36 -0700)]
SCSI: libsas: fix sas_discover_devices return code handling
commit
b17caa174a7e1fd2e17b26e210d4ee91c4c28b37 upstream.
commit 198439e4 [SCSI] libsas: do not set res = 0 in sas_ex_discover_dev()
commit 19252de6 [SCSI] libsas: fix wide port hotplug issues
The above commits seem to have confused the return value of
sas_ex_discover_dev which is non-zero on failure and
sas_ex_join_wide_port which just indicates short circuiting discovery on
already established ports. The result is random discovery failures
depending on configuration.
Calls to sas_ex_join_wide_port are the source of the trouble as its
return value is errantly assigned to 'res'. Convert it to bool and stop
returning its result up the stack.
Tested-by: Dan Melnic <dan.melnic@amd.com>
Reported-by: Dan Melnic <dan.melnic@amd.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Jack Wang <jack_wang@usish.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Williams [Fri, 22 Jun 2012 06:36:15 +0000 (23:36 -0700)]
SCSI: libsas: continue revalidation
commit
26f2f199ff150d8876b2641c41e60d1c92d2fb81 upstream.
Continue running revalidation until no more broadcast devices are
discovered. Fixes cases where re-discovery completes too early in a
domain with multiple expanders with pending re-discovery events.
Servicing BCNs can get backed up behind error recovery.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Timur Tabi [Thu, 5 Jul 2012 15:08:28 +0000 (10:08 -0500)]
powerpc/85xx: use the BRx registers to enable indirect mode on the P1022DS
commit
6bd825f02966be8ba544047cab313d6032c23819 upstream.
In order to enable the DIU video controller on the P1022DS, the FPGA needs
to be switched to "indirect mode", where the localbus is disabled and
the FPGA is accessed via writes to localbus chip select signals CS0 and CS1.
To obtain the address of CS0 and CS1, the platform driver uses an "indirect
pixis mode" device tree node. This node assumes that the localbus 'ranges'
property is sorted in chip-select order. That is, reg value 0 maps to
CS0, reg value 1 maps to CS1, etc. This is how the 'ranges' property is
supposed to be arranged.
Unfortunately, the 'ranges' property is often mis-arranged, and not just on
the P1022DS. Linux normally does not care, since it does not program the
localbus. But the indirect-mode code on the P1022DS does care.
The "proper" fix is to have U-Boot fix the 'ranges' property, but this would
be too cumbersome. The names and 'reg' properties of all the localbus
devices would also need to be updated, and determining which localbus device
maps to which chip select is board-specific.
Instead, we determine the CS0/CS1 base addresses the same way that U-boot
does -- by reading the BRx registers directly and mapping them to physical
addresses. This code is simpler and more reliable, and it does not require
a U-boot or device tree change.
Since the indirect pixis device tree node is no longer needed, the node is
deleted from the DTS.
Signed-off-by: Timur Tabi <timur@freescale.com>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kleber Sacilotto de Souza [Thu, 12 Jul 2012 17:14:36 +0000 (17:14 +0000)]
powerpc/eeh: Check handle_eeh_events() return value
commit
10db8d212864cb6741df7d7fafda5ab6661f6f88 upstream.
Function eeh_event_handler() dereferences the pointer returned by
handle_eeh_events() without checking, causing a crash if NULL was
returned, which is expected in some situations.
This patch fixes this bug by checking for the value returned by
handle_eeh_events() before dereferencing it.
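A hedged sketch of the shape of the check (variable names are illustrative,
not the exact eeh_event.c code):

	edev = handle_eeh_events(event);
	if (!edev)
		return 0;	/* nothing to recover; avoid dereferencing NULL */
	/* ... existing code that dereferences edev continues here ... */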
Signed-off-by: Kleber Sacilotto de Souza <klebers@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tiejun Chen [Wed, 11 Jul 2012 04:22:46 +0000 (14:22 +1000)]
powerpc: Add "memory" attribute for mfmsr()
commit
b416c9a10baae6a177b4f9ee858b8d309542fbef upstream.
Add "memory" attribute in inline assembly language as a compiler
barrier to make sure 4.6.x GCC don't reorder mfmsr().
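A hedged sketch, simplified from the real mfmsr() definition; the "memory"
clobber is what stops the compiler from moving the read across surrounding
accesses:

	static inline unsigned long mfmsr(void)
	{
		unsigned long msr;

		/* the "memory" clobber acts as a compiler barrier */
		asm volatile("mfmsr %0" : "=r" (msr) : : "memory");
		return msr;
	}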
Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
roger blofeld [Thu, 21 Jun 2012 05:27:14 +0000 (05:27 +0000)]
powerpc/ftrace: Fix assembly trampoline register usage
commit
fd5a42980e1cf327b7240adf5e7b51ea41c23437 upstream.
Just like the module loader, ftrace needs to be updated to use r12
instead of r11 with newer gcc's.
Signed-off-by: Roger Blofeld <blofeldus@yahoo.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Aaron Lu [Tue, 3 Jul 2012 09:27:49 +0000 (17:27 +0800)]
mmc: sdhci: fix incorrect command used in tuning
commit
473b095a72a95ba719905b1f2e82cd18d099a427 upstream.
For SD hosts using retuning mode 1, when the retuning timer has expired,
retuning needs to be done in sdhci_request before processing the actual
request. But the retuning command is fixed: cmd19 for an SD card and cmd21
for an eMMC card, so we can't use the original request's command to do the
tuning.
And since the tuning command depends on the card type attached to the
host, we will need to know the card type to use the correct tuning
command.
Signed-off-by: Aaron Lu <aaron.lu@amd.com>
Reviewed-by: Philip Rakity <prakity@marvell.com>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Drake [Tue, 3 Jul 2012 22:13:39 +0000 (23:13 +0100)]
mmc: sdhci-pci: CaFe has broken card detection
commit
55fc05b7414274f17795cd0e8a3b1546f3649d5e upstream.
At http://dev.laptop.org/ticket/11980 we have determined that the
Marvell CaFe SDHCI controller reports bad card presence during
resume. It reports that no card is present even when it is.
This is a regression -- resume worked back around 2.6.37.
Around 400ms after resuming, a "card inserted" interrupt is
generated, at which point it starts reporting presence.
Work around this hardware oddity by setting the
SDHCI_QUIRK_BROKEN_CARD_DETECTION flag.
Thanks to Chris Ball for helping with diagnosis.
Signed-off-by: Daniel Drake <dsd@laptop.org>
Signed-off-by: Chris Ball <cjb@laptop.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Al Viro [Sat, 21 Jul 2012 07:55:18 +0000 (08:55 +0100)]
iscsi-target: Drop bogus struct file usage for iSCSI/SCTP
commit
bf6932f44a7b3fa7e2246a8b18a44670e5eab6c2 upstream.
From Al Viro:
BTW, speaking of struct file treatment related to sockets -
there's this piece of code in iscsi:
	/*
	 * The SCTP stack needs struct socket->file.
	 */
	if ((np->np_network_transport == ISCSI_SCTP_TCP) ||
	    (np->np_network_transport == ISCSI_SCTP_UDP)) {
		if (!new_sock->file) {
			new_sock->file = kzalloc(
					sizeof(struct file), GFP_KERNEL);
For one thing, as far as I can see it's not true - sctp does *not* depend on
socket->file being non-NULL; it does, in one place, check socket->file->f_flags
for O_NONBLOCK, but there it treats NULL socket->file as "flag not set".
Which is the case here anyway - the fake struct file created in
__iscsi_target_login_thread() (and in iscsi_target_setup_login_socket(), with
the same excuse) do *not* get that flag set.
Moreover, it's a bloody serious violation of a bunch of asserts in VFS;
all struct file instances should come from filp_cachep, via get_empty_filp()
(or alloc_file(), which is a wrapper for it). FWIW, I'm very tempted to
do this and be done with the entire mess:
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: Andy Grover <agrover@redhat.com>
Cc: Hannes Reinecke <hare@suse.de>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Roland Dreier [Mon, 16 Jul 2012 22:34:21 +0000 (15:34 -0700)]
target: Add generation of LOGICAL BLOCK ADDRESS OUT OF RANGE
commit
e2397c704429025bc6b331a970f699e52f34283e upstream.
Many SCSI commands are defined to return a CHECK CONDITION / ILLEGAL
REQUEST with ASC set to LOGICAL BLOCK ADDRESS OUT OF RANGE if the
initiator sends a command that accesses a too-big LBA. Add an enum
value and case entries so that target code can return this status.
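A hedged sketch of how backend code might use the new value; the
TCM_ADDRESS_OUT_OF_RANGE name is an assumption about the enum this patch adds,
and the surrounding check is illustrative:

	if (cmd->t_task_lba + sectors > dev_max_lba + 1) {
		/* maps to CHECK CONDITION / ILLEGAL REQUEST with
		 * ASC LOGICAL BLOCK ADDRESS OUT OF RANGE */
		return TCM_ADDRESS_OUT_OF_RANGE;
	}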
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Sun, 29 Jul 2012 15:04:57 +0000 (08:04 -0700)]
Linux 3.4.7
Jeff Layton [Wed, 11 Jul 2012 13:09:36 +0000 (09:09 -0400)]
cifs: when CONFIG_HIGHMEM is set, serialize the read/write kmaps
commit
3cf003c08be785af4bee9ac05891a15bcbff856a upstream.
[The async read code was broadened to include uncached reads in 3.5, so
the mainline patch did not apply directly. This patch is just a backport
to account for that change.]
Jian found that when he ran fsx on a 32 bit arch with a large wsize the
process and one of the bdi writeback kthreads would sometimes deadlock
with a stack trace like this:
crash> bt
PID: 2789   TASK: f02edaa0   CPU: 3   COMMAND: "fsx"
 #0 [eed63cbc] schedule at c083c5b3
 #1 [eed63d80] kmap_high at c0500ec8
 #2 [eed63db0] cifs_async_writev at f7fabcd7 [cifs]
 #3 [eed63df0] cifs_writepages at f7fb7f5c [cifs]
 #4 [eed63e50] do_writepages at c04f3e32
 #5 [eed63e54] __filemap_fdatawrite_range at c04e152a
 #6 [eed63ea4] filemap_fdatawrite at c04e1b3e
 #7 [eed63eb4] cifs_file_aio_write at f7fa111a [cifs]
 #8 [eed63ecc] do_sync_write at c052d202
 #9 [eed63f74] vfs_write at c052d4ee
#10 [eed63f94] sys_write at c052df4c
#11 [eed63fb0] ia32_sysenter_target at c0409a98
    EAX: 00000004  EBX: 00000003  ECX: abd73b73  EDX: 012a65c6
    DS:  007b      ESI: 012a65c6  ES:  007b      EDI: 00000000
    SS:  007b      ESP: bf8db178  EBP: bf8db1f8  GS:  0033
    CS:  0073      EIP: 40000424  ERR: 00000004  EFLAGS: 00000246
Each task would kmap part of its address array before getting stuck, but
not enough to actually issue the write.
This patch fixes this by serializing the marshal_iov operations for
async reads and writes. The idea here is to ensure that cifs
aggressively tries to populate a request before attempting to fulfill
another one. As soon as all of the pages are kmapped for a request, then
we can unlock and allow another one to proceed.
There's no need to do this serialization on non-CONFIG_HIGHMEM arches
however, so optimize all of this out when CONFIG_HIGHMEM isn't set.
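A hedged sketch of the shape of the serialization (the lock and macro names
are illustrative; the point is that the locking compiles away when
CONFIG_HIGHMEM is not set):

	#ifdef CONFIG_HIGHMEM
	static DEFINE_MUTEX(cifs_kmap_mutex);
	#define cifs_kmap_lock()	mutex_lock(&cifs_kmap_mutex)
	#define cifs_kmap_unlock()	mutex_unlock(&cifs_kmap_mutex)
	#else
	#define cifs_kmap_lock()	do { } while (0)
	#define cifs_kmap_unlock()	do { } while (0)
	#endif

	cifs_kmap_lock();
	/* ... kmap every page of this request before another may start ... */
	cifs_kmap_unlock();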
Reported-by: Jian Li <jiali@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tushar Behera [Thu, 12 Jul 2012 09:06:28 +0000 (18:06 +0900)]
ARM: SAMSUNG: Update default rate for xusbxti clock
commit
bdd3cc26ba651e33780ade33f1410320cf2d0cf4 upstream.
The rate of xusbxti clock is set in individual machine files.
The default value should be defined at the clock definition
and individual machine files should modify it if required.
Division by zero in kernel.
[<c0011849>] (unwind_backtrace+0x1/0x9c) from [<c022c663>] (Ldiv0+0x9/0x12)
[<c022c663>] (Ldiv0+0x9/0x12) from [<c001a3c3>] (s3c_setrate_clksrc+0x33/0x78)
[<c001a3c3>] (s3c_setrate_clksrc+0x33/0x78) from [<c0019e67>] (clk_set_rate+0x2f/0x78)
Signed-off-by: Tushar Behera <tushar.behera@linaro.org>
Signed-off-by: Kukjin Kim <kgene.kim@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Fri, 20 Jul 2012 13:25:07 +0000 (14:25 +0100)]
dm raid1: set discard_zeroes_data_unsupported
commit
7c8d3a42fe1c58a7e8fd3f6a013e7d7b474ff931 upstream.
We can't guarantee that REQ_DISCARD on dm-mirror zeroes the data even if
the underlying disks support zero on discard. So this patch sets
ti->discard_zeroes_data_unsupported.
For example, if the mirror is in the process of resynchronizing, it may
happen that kcopyd reads a piece of data, then discard is sent on the
same area and then kcopyd writes the piece of data to another leg.
Consequently, the data is not zeroed.
The flag was made available by commit 983c7db347db8ce2d8453fd1d89b7a4bb6920d56 (dm crypt: always disable discard_zeroes_data).
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Fri, 20 Jul 2012 13:25:03 +0000 (14:25 +0100)]
dm raid1: fix crash with mirror recovery and discard
commit
751f188dd5ab95b3f2b5f2f467c38aae5a2877eb upstream.
This patch fixes a crash when a discard request is sent during mirror
recovery.
Firstly, some background. Generally, the following sequence happens during
mirror synchronization:
- function do_recovery is called
- do_recovery calls dm_rh_recovery_prepare
- dm_rh_recovery_prepare uses a semaphore to limit the number of
simultaneously recovered regions (by default the semaphore value is 1,
so only one region at a time is recovered)
- dm_rh_recovery_prepare calls __rh_recovery_prepare,
__rh_recovery_prepare asks the log driver for the next region to
recover. Then, it sets the region state to DM_RH_RECOVERING. If there
are no pending I/Os on this region, the region is added to
quiesced_regions list. If there are pending I/Os, the region is not
added to any list. It is added to the quiesced_regions list later (by
dm_rh_dec function) when all I/Os finish.
- when the region is on quiesced_regions list, there are no I/Os in
flight on this region. The region is popped from the list in
dm_rh_recovery_start function. Then, a kcopyd job is started in the
recover function.
- when the kcopyd job finishes, recovery_complete is called. It calls
dm_rh_recovery_end. dm_rh_recovery_end adds the region to
recovered_regions or failed_recovered_regions list (depending on
whether the copy operation was successful or not).
The above mechanism assumes that if the region is in DM_RH_RECOVERING
state, no new I/Os are started on this region. When I/O is started,
dm_rh_inc_pending is called, which increases reg->pending count. When
I/O is finished, dm_rh_dec is called. It decreases reg->pending count.
If the count is zero and the region was in DM_RH_RECOVERING state,
dm_rh_dec adds it to the quiesced_regions list.
Consequently, if we call dm_rh_inc_pending/dm_rh_dec while the region is
in DM_RH_RECOVERING state, it could be added to quiesced_regions list
multiple times or it could be added to this list when kcopyd is copying
data (it is assumed that the region is not on any list while kcopyd does
its jobs). This results in memory corruption and crash.
There already exist bypasses for REQ_FLUSH requests: REQ_FLUSH requests
do not belong to any region, so they are always added to the sync list
in do_writes. dm_rh_inc_pending does not increase count for REQ_FLUSH
requests. In mirror_end_io, dm_rh_dec is never called for REQ_FLUSH
requests. These bypasses avoid the crash possibility described above.
These bypasses were improperly implemented for REQ_DISCARD when
the mirror target gained discard support in commit 5fc2ffeabb9ee0fc0e71ff16b49f34f0ed3d05b4 (dm raid1: support discard).
In do_writes, REQ_DISCARD requests are always added to the sync queue and
immediately dispatched (even if the region is in DM_RH_RECOVERING). However,
dm_rh_inc and dm_rh_dec are called for REQ_DISCARD requests. So this violates
the rule that no I/Os are started on DM_RH_RECOVERING regions, and causes the
list corruption described above.
This patch changes it so that REQ_DISCARD requests follow the same path
as REQ_FLUSH. This avoids the crash.
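A hedged, illustrative sketch of the idea (the helper below is made up; the
real patch adjusts several flag checks in the mirror/region-hash code):

	static inline int bio_bypasses_region_accounting(struct bio *bio)
	{
		/* REQ_FLUSH already bypassed region accounting; the fix
		 * makes REQ_DISCARD take the same path */
		return bio->bi_rw & (REQ_FLUSH | REQ_DISCARD);
	}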
Reference: https://bugzilla.redhat.com/837607
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mikulas Patocka [Fri, 20 Jul 2012 13:25:05 +0000 (14:25 +0100)]
dm thin: do not send discards to shared blocks
commit
650d2a06b4fe1cc1d218c20e256650f68bf0ca31 upstream.
When process_discard receives a partial discard that doesn't cover a
full block, it sends this discard down to that block. Unfortunately, the
block can be shared and the discard would corrupt the other snapshots
sharing this block.
This patch detects block sharing and ends the discard with success when
sending it to the shared block.
The above change means that if the device supports discard it can't be
guaranteed that a discard request zeroes data. Therefore, we set
ti->discard_zeroes_data_unsupported.
Thin target discard support with this bug arrived in commit 104655fd4dcebd50068ef30253a001da72e3a081 (dm thin: support discards).
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Boaz Harrosh [Fri, 8 Jun 2012 02:29:40 +0000 (05:29 +0300)]
pnfs-obj: don't leak objio_state if ore_write/read fails
commit
9909d45a8557455ca5f8ee7af0f253debc851f1a upstream.
[Bug since 3.2 Kernel]
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Boaz Harrosh [Fri, 8 Jun 2012 01:30:40 +0000 (04:30 +0300)]
ore: Remove support of partial IO request (NFS crash)
commit
62b62ad873f2accad9222a4d7ffbe1e93f6714c1 upstream.
Due to OOM situations the ore might fail to allocate all resources
needed for IO of the full request. If some progress was possible
it would proceed with a partial/short request, for the sake of
forward progress.
Since this crashes NFS-core, and exofs is just fine without it, just
remove this contraption and fail.
TODO:
Support real forward progress with some reserved allocations
of resources, such as mem pools and/or bio_sets
[Bug since 3.2 Kernel]
CC: Benny Halevy <bhalevy@tonian.com>
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Boaz Harrosh [Thu, 7 Jun 2012 22:19:07 +0000 (01:19 +0300)]
ore: Fix NFS crash by supporting any unaligned RAID IO
commit
9ff19309a9623f2963ac5a136782ea4d8b5d67fb upstream.
In RAID_5/6 we used to not permit an IO whose end
byte is not stripe_size aligned and spans more than one stripe.
I.e. the caller must check if after submission the actual
transferred byte count is shorter, and would need to resubmit
a new IO with the remainder.
Exofs supports this, and NFS was supposed to support this
as well with its short write mechanism. But late testing has
exposed a CRASH when this is used with non-RPC layout-drivers.
The change at NFS is deep and risky; in its place the fix
at ORE to lift the limitation is actually clean and simple.
So here it is below.
The principle here is that in the case of unaligned IO on
both ends, beginning and end, we will send two read requests:
one like the old code, before the calculation of the first stripe,
and also at a new site, before the calculation of the last stripe.
If any "boundary" is aligned or the complete IO is within a single
stripe, we do a single read like before.
The code is clean and simple by splitting the old _read_4_write
into 3 even parts:
1. _read_4_write_first_stripe
2. _read_4_write_last_stripe
3. _read_4_write_execute
And calling 1+3 at the same place as before, 2+3 before the last
stripe, and in the case of everything in a single stripe then 1+2+3
is performed additively.
Why did I not think of it before? Well, I had a stroke of
genius because I have stared at this code for 2 years, and did
not find this simple solution, til today. Not that I did not try.
This solution is much better for NFS than the previously proposed
solution because the short write was dealt with out-of-band after
IO_done, which would cause a seeky IO pattern, whereas here
we execute in order. In both solutions we do 2 separate reads, only
here we do it within a single IO request. (And actually combine two
writes into a single submission.)
NFS/exofs code need not change since the ORE API communicates the new
shorter length on return, what will happen is that this case would not
occur anymore.
hurray!!
[Stable this is an NFS bug since 3.2 Kernel should apply cleanly]
Signed-off-by: Boaz Harrosh <bharrosh@panasas.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Artem Bityutskiy [Sat, 14 Jul 2012 11:33:09 +0000 (14:33 +0300)]
UBIFS: fix a bug in empty space fix-up
commit
c6727932cfdb13501108b16c38463c09d5ec7a74 upstream.
UBIFS has a feature called "empty space fix-up" which is a quirk to work around
limitations of dumb flasher programs. Namely, those flashers that are unable
to skip NAND pages full of 0xFFs while flashing, which leaves the empty space
at the end of half-filled eraseblocks unusable for UBIFS. This feature is
relatively new (introduced in v3.0).
The fix-up routine (fixup_free_space()) is executed only once at the very first
mount if the superblock has the 'space_fixup' flag set (can be done with -F
option of mkfs.ubifs). It basically reads all the UBIFS data and metadata and
writes it back to the same LEB. The routine assumes the image is pristine and
does not have anything in the journal.
There was a bug in 'fixup_free_space()' where it fixed up the log incorrectly.
All but one LEB of the log of a pristine file-system are empty. And one
contains just a commit start node. And 'fixup_free_space()' just unmapped this
LEB, which resulted in wiping the commit start node. As a result, some users
were unable to mount the file-system next time with the following symptom:
UBIFS error (pid 1): replay_log_leb: first log node at LEB 3:0 is not CS node
UBIFS error (pid 1): replay_log_leb: log error detected while replaying the log at LEB 3:0
The root-cause of this bug was that 'fixup_free_space()' wrongly assumed
that the beginning of empty space in the log head (c->lhead_offs) was known
on mount. However, it is not the case - it was always 0. UBIFS does not store
it in the master node and finds it out by scanning the log on every mount.
The fix is simple - just pass commit start node size instead of 0 to
'fixup_leb()'.
Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@linux.intel.com>
Reported-by: Iwo Mergler <Iwo.Mergler@netcommwireless.com>
Tested-by: Iwo Mergler <Iwo.Mergler@netcommwireless.com>
Reported-by: James Nute <newten82@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Daney [Thu, 19 Jul 2012 07:11:14 +0000 (09:11 +0200)]
MIPS: Properly align the .data..init_task section.
commit
7b1c0d26a8e272787f0f9fcc5f3e8531df3b3409 upstream.
Improper alignment can lead to unbootable systems and/or random
crashes.
[ralf@linux-mips.org: This is a long standing bug since
6eb10bc9e2deab06630261cd05c4cb1e9a60e980 (kernel.org) rsp.
c422a10917f75fd19fa7fe070aaaa23e384dae6f (lmo) [MIPS: Clean up linker script
using new linker script macros.] so it dates back to 2.6.32.]
Signed-off-by: David Daney <david.daney@cavium.com>
Cc: linux-mips@linux-mips.org
Patchwork: https://patchwork.linux-mips.org/patch/3881/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiri Kosina [Fri, 20 Apr 2012 10:15:44 +0000 (12:15 +0200)]
HID: multitouch: Add support for Baanto touchscreen
commit
9ed326951806c424b42dcf2e1125e25a98fb13d1 upstream.
Reported-by: Tvrtko Ursulin <tvrtko.ursulin@onelan.co.uk>
Tested-by: Tvrtko Ursulin <tvrtko.ursulin@onelan.co.uk>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Frank Kunz [Thu, 5 Jul 2012 20:32:49 +0000 (22:32 +0200)]
HID: add Sennheiser BTD500USB device support
commit
0e050923a797c1fc46ccc1e5182fd3090f33a75d upstream.
The Sennheiser BTD500USB composite device requires the
HID_QUIRK_NOGET flag to be set to work properly. Without the
flag the device crashes during HID initialization.
Signed-off-by: Frank Kunz <xxxxxmichl@googlemail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Nicoletti [Wed, 4 Jul 2012 13:20:31 +0000 (10:20 -0300)]
HID: add battery quirk for Apple Wireless ANSI
commit
0c47935c5b5cd4916cf1c1ed4a2894807f7bcc3e upstream.
Add USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI to the quirk list, since it reports
the wrong feature type and the wrong percentage range.
Signed-off-by: Daniel Nicoletti <dantti12@gmail.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Aaditya Kumar [Tue, 17 Jul 2012 22:48:07 +0000 (15:48 -0700)]
mm: fix lost kswapd wakeup in kswapd_stop()
commit
1c7e7f6c0703d03af6bcd5ccc11fc15d23e5ecbe upstream.
Offlining memory may block forever, waiting for kswapd() to wake up
because kswapd() does not check the event kthread->should_stop before
sleeping.
The proper pattern, from Documentation/memory-barriers.txt, is:
--- waker ---
  event_indicated = 1;
  wake_up_process(event_daemon);
--- sleeper ---
  for (;;) {
          set_current_state(TASK_UNINTERRUPTIBLE);
          if (event_indicated)
                  break;
          schedule();
  }
set_current_state() may be wrapped by:
prepare_to_wait();
In the kswapd() case, event_indicated is kthread->should_stop.
=== offlining memory (waker) ===
  kswapd_stop()
     kthread_stop()
        kthread->should_stop = 1
        wake_up_process()
        wait_for_completion()
=== kswapd_try_to_sleep (sleeper) ===
  kswapd_try_to_sleep()
     prepare_to_wait()
          .
          .
     schedule()
          .
          .
     finish_wait()
The schedule() needs to be protected by a test of kthread->should_stop,
which is wrapped by kthread_should_stop().
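Concretely, the sleep site in kswapd_try_to_sleep() needs to test kthread_should_stop() after the task state has been set and before calling schedule(); a minimal sketch of the corrected shape (not the full function):
  prepare_to_wait(&pgdat->kswapd_wait, &wait, TASK_INTERRUPTIBLE);
  if (!kthread_should_stop())
          schedule();          /* skip the sleep if a stop is already pending */
  finish_wait(&pgdat->kswapd_wait, &wait);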
Reproducer:
Do heavy file I/O in background.
Do a memory offline/online in a tight loop
Signed-off-by: Aaditya Kumar <aaditya.kumar@ap.sony.com>
Acked-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Reviewed-by: Minchan Kim <minchan@kernel.org>
Acked-by: Mel Gorman <mel@csn.ul.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Al Viro [Wed, 18 Jul 2012 08:31:36 +0000 (09:31 +0100)]
ext4: fix duplicated mnt_drop_write call in EXT4_IOC_MOVE_EXT
commit
331ae4962b975246944ea039697a8f1cadce42bb upstream.
Caused, AFAICS, by mismerge in commit
ff9cb1c4eead ("Merge branch
'for_linus' into for_linus_merged")
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Cc: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mark Rustad [Sat, 14 Jul 2012 01:18:04 +0000 (18:18 -0700)]
tcm_fc: Fix crash seen with aborts and large reads
commit
3cc5d2a6b9a2fd1bf024aa5e52dd22961eecaf13 upstream.
This patch fixes a crash seen when large reads have their exchange
aborted by either timing out or being reset. The exchange
abort results in the seq pointer being set to NULL, because the
sequence is no longer valid, so it must not be dereferenced. This
patch changes the function ft_get_task_tag to return ~0 if it is
unable to get the tag for this reason. Because the get_task_tag
interface provides no means of returning an error, this seems
like the best way to fix this issue at the moment.
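A sketch of what the changed callback looks like under that approach; struct and helper names follow tcm_fc/libfc but are an approximation, not the literal patch:
  static u32 ft_get_task_tag(struct se_cmd *se_cmd)
  {
          struct ft_cmd *cmd = container_of(se_cmd, struct ft_cmd, se_cmd);
          /* the exchange abort cleared the seq pointer; don't dereference it */
          if (!cmd->seq)
                  return ~0;
          return fc_seq_exch(cmd->seq)->rxid;
  }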
Signed-off-by: Mark Rustad <mark.d.rustad@intel.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
John Stultz [Fri, 13 Jul 2012 05:21:50 +0000 (01:21 -0400)]
ntp: Fix STA_INS/DEL clearing bug
commit
6b1859dba01c7d512b72d77e3fd7da8354235189 upstream.
In commit
6b43ae8a619d17c4935c3320d2ef9e92bdeed05d, I
introduced a bug that kept the STA_INS or STA_DEL bit
from being cleared from time_status via adjtimex()
without forcing STA_PLL first.
Usually once STA_INS is set, it isn't cleared
until the leap second is applied, so it's unlikely this
affected anyone. However, during testing I noticed it
took some effort to cancel a leap second once STA_INS
was set.
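For reference, cancelling a pending leap second from user space is just an adjtimex() status update; this stand-alone example shows the operation that the fix makes work again without forcing STA_PLL first (needs CAP_SYS_TIME to actually change the bits):
  #include <stdio.h>
  #include <sys/timex.h>

  int main(void)
  {
          struct timex tx = { .modes = 0 };

          adjtimex(&tx);                          /* read the current time_status */
          tx.modes = ADJ_STATUS;
          tx.status &= ~(STA_INS | STA_DEL);      /* clear the pending leap bits */
          if (adjtimex(&tx) == -1) {
                  perror("adjtimex");
                  return 1;
          }
          printf("time_status is now 0x%x\n", tx.status);
          return 0;
  }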
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Richard Cochran <richardcochran@gmail.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Link: http://lkml.kernel.org/r/1342156917-25092-2-git-send-email-john.stultz@linaro.org
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Roland Dreier [Tue, 17 Jul 2012 00:10:17 +0000 (17:10 -0700)]
target: Fix range calculation in WRITE SAME emulation when num blocks == 0
commit
1765fe5edcb83f53fc67edeb559fcf4bc82c6460 upstream.
When NUMBER OF LOGICAL BLOCKS is 0, WRITE SAME is supposed to write
all the blocks from the specified LBA through the end of the device.
However, dev->transport->get_blocks(dev) (perhaps confusingly) returns
the last valid LBA rather than the number of blocks, so the correct
number of blocks to write starting with lba is
dev->transport->get_blocks(dev) - lba + 1
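In other words, the emulation must convert the "last valid LBA" into a block count before using it as a range; a small stand-alone illustration of that arithmetic (the names are placeholders, not the target core's API):
  #include <stdint.h>

  /* last_lba stands in for dev->transport->get_blocks(dev) */
  static uint64_t write_same_blocks(uint64_t lba, uint32_t num_blocks,
                                    uint64_t last_lba)
  {
          if (num_blocks != 0)
                  return num_blocks;
          /* 0 means "from lba through the end of the device", inclusive */
          return last_lba - lba + 1;
  }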
(nab: Backport roland's for-3.6 patch to for-3.5)
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Roland Dreier [Mon, 16 Jul 2012 22:17:10 +0000 (15:17 -0700)]
target: Clean up returning errors in PR handling code
commit
d35212f3ca3bf4fb49d15e37f530c9931e2d2183 upstream.
- instead of (PTR_ERR(file) < 0) just use IS_ERR(file)
- return -EINVAL instead of EINVAL
- all other error returns in target_scsi3_emulate_pr_out() use
"goto out" -- get rid of the one remaining straight "return."
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeff Layton [Wed, 11 Jul 2012 13:09:35 +0000 (09:09 -0400)]
cifs: on CONFIG_HIGHMEM machines, limit the rsize/wsize to the kmap space
commit
3ae629d98bd5ed77585a878566f04f310adbc591 upstream.
We currently rely on being able to kmap all of the pages in an async
read or write request. If you're on a machine that has CONFIG_HIGHMEM
set then that kmap space is limited, sometimes to as low as 512 slots.
With 512 slots, we can only support up to a 2M r/wsize, and that's
assuming that we can get our greedy little hands on all of them. There
are other users, however, so it's possible we'll end up stuck with a
size that large.
Since we can't currently handle an rsize or wsize larger than that, cap
those options at the number of kmap slots we have. We could consider
capping it even lower, but we currently default to a max of 1M. Might as
well allow those luddites on 32 bit arches enough rope to hang
themselves.
A more robust fix would be to teach the send and receive routines how
to contend with an array of pages so we don't need to marshal up a kvec
array at all. That's a fairly significant overhaul though, so we'll need
this limit in place until that's ready.
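A rough sketch of such a cap; LAST_PKMAP is the number of persistent kmap slots on CONFIG_HIGHMEM kernels, and the macro and variable names here are illustrative rather than the exact ones used in cifs:
  #ifdef CONFIG_HIGHMEM
  /* never negotiate more than we could kmap at once */
  #define CIFS_KMAP_SIZE_LIMIT    (LAST_PKMAP * PAGE_SIZE)
  #else
  #define CIFS_KMAP_SIZE_LIMIT    (1 << 24)       /* effectively unlimited */
  #endif

  wsize = min_t(unsigned int, wsize, CIFS_KMAP_SIZE_LIMIT);
  rsize = min_t(unsigned int, rsize, CIFS_KMAP_SIZE_LIMIT);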
Reported-by: Jian Li <jiali@redhat.com>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeff Layton [Fri, 6 Jul 2012 11:09:42 +0000 (07:09 -0400)]
cifs: always update the inode cache with the results from a FIND_*
commit
cd60042cc1392e79410dc8de9e9c1abb38a29e57 upstream.
When we get back a FIND_FIRST/NEXT result, we have some info about the
dentry that we use to instantiate a new inode. We were ignoring and
discarding that info when we had an existing dentry in the cache.
Fix this by updating the inode in place when we find an existing dentry
and the uniqueid is the same.
Reported-and-Tested-by: Andrew Bartlett <abartlet@samba.org>
Reported-by: Bill Robertson <bill_robertson@debortoli.com.au>
Reported-by: Dion Edwards <dion_edwards@debortoli.com.au>
Signed-off-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Steve French <smfrench@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Thu, 19 Jul 2012 05:59:18 +0000 (15:59 +1000)]
md/raid1: close some possible races on write errors during resync
commit
58e94ae18478c08229626daece2fc108a4a23261 upstream.
commit
4367af556133723d0f443e14ca8170d9447317cb
md/raid1: clear bad-block record when write succeeds.
Added the possibility of a 'reschedule_retry' call at the end of
end_sync_write, but didn't add matching code at the end of
sync_request_write. So if the writes complete very quickly, or
scheduling makes it seem that way, then we can miss rescheduling
the request and the resync could hang.
Also commit
73d5c38a9536142e062c35997b044e89166e063b
md: avoid races when stopping resync.
Fixed a race condition in this same code in end_sync_write but didn't
make the matching change in sync_request_write.
This patch updates sync_request_write to fix both of those.
Patch is suitable for 3.1 and later kernels.
Reported-by: Alexander Lyakas <alex.bolshoy@gmail.com>
Original-version-by: Alexander Lyakas <alex.bolshoy@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
NeilBrown [Thu, 19 Jul 2012 05:59:18 +0000 (15:59 +1000)]
md: avoid crash when stopping md array races with closing other open fds.
commit
a05b7ea03d72f36edb0cec05e8893803335c61a0 upstream.
md will refuse to stop an array if any other fd (or mounted fs) is
using it.
When any fs is unmounted or when the last open fd is closed, all
pending IO will be flushed (e.g. sync_blockdev call in __blkdev_put)
so there will be no pending IO to worry about when the array is
stopped.
However, in order to send the STOP_ARRAY ioctl to stop the array, one
must first get an open fd on the block device.
If some fd is being used to write to the block device and it is closed
after mdadm opens the block device, but before mdadm issues the
STOP_ARRAY ioctl, then there will be no last-close on the md device, so
__blkdev_put will not call sync_blockdev.
If this happens, then IO can still be in-flight while md tears down
the array and bad things can happen (use-after-free and subsequent
havoc).
So in the case where do_md_stop is being called from an open file
descriptor, call sync_blockdev after taking the mutex to ensure there
will be no new openers.
This is needed when setting a read-write device to read-only too.
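Roughly, the stop and set-read-only paths gain a flush once no new opener can race in; a sketch of the idea rather than the exact md code:
  mutex_lock(&mddev->open_mutex);         /* no new openers past this point */
  if (bdev)
          sync_blockdev(bdev);            /* flush IO left by a racing last close */
  /* ... continue stopping the array or switching it to read-only ... */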
Reported-by: majianpeng <majianpeng@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Thu, 19 Jul 2012 19:11:49 +0000 (12:11 -0700)]
Linux 3.4.6
Samuel Ortiz [Thu, 10 May 2012 17:45:51 +0000 (19:45 +0200)]
NFC: Export nfc.h to userland
commit
dbd4fcaf8d664fab4163b1f8682e41ad8bff3444 upstream.
The netlink commands and attributes, along with the socket structure
definitions need to be exported.
Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Tue, 17 Jul 2012 06:39:56 +0000 (02:39 -0400)]
timekeeping: Add missing update call in timekeeping_resume()
This is a backport of
3e997130bd2e8c6f5aaa49d6e3161d4d29b43ab0
The leap second rework unearthed another issue of inconsistent data.
On timekeeping_resume() the timekeeper data is updated, but nothing
calls timekeeping_update(), so now the update code in the timer
interrupt sees stale values.
This has been the case before those changes, but then the timer
interrupt was using stale data as well so this went unnoticed for quite
some time.
Add the missing update call, so all the data is consistent everywhere.
Reported-by: Andreas Schwab <schwab@linux-m68k.org>
Reported-and-tested-by: "Rafael J. Wysocki" <rjw@sisk.pl>
Reported-and-tested-by: Martin Steigerwald <Martin@lichtvoll.de>
Cc: John Stultz <johnstul@us.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>,
Cc: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
John Stultz [Tue, 17 Jul 2012 06:39:55 +0000 (02:39 -0400)]
hrtimer: Update hrtimer base offsets each hrtimer_interrupt
This is a backport of
5baefd6d84163443215f4a99f6a20f054ef11236
The update of the hrtimer base offsets on all cpus cannot be made
atomically from within the region where timekeeper.lock is held and interrupts
are disabled, as smp function calls are not allowed there.
clock_was_set(), which enforces the update on all cpus, is called
either from preemptible process context in case of do_settimeofday()
or from the softirq context when the offset modification happened in
the timer interrupt itself due to a leap second.
In both cases there is a race window for an hrtimer interrupt between
dropping timekeeper lock, enabling interrupts and clock_was_set()
issuing the updates. Any interrupt which arrives in that window will
see the new time but operate on stale offsets.
So we need to make sure that an hrtimer interrupt always sees a
consistent state of time and offsets.
ktime_get_update_offsets() allows us to get the current monotonic time
and update the per cpu hrtimer base offsets from hrtimer_interrupt()
to capture a consistent state of monotonic time and the offsets. The
function replaces the existing ktime_get() calls in hrtimer_interrupt().
The overhead of the new function vs. ktime_get() is minimal as it just
adds two store operations.
This ensures that any changes to realtime or boottime offsets are
noticed and stored into the per-cpu hrtimer base structures, prior to
any hrtimer expiration and guarantees that timers are not expired early.
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Link: http://lkml.kernel.org/r/1341960205-56738-8-git-send-email-johnstul@us.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Tue, 17 Jul 2012 06:39:54 +0000 (02:39 -0400)]
timekeeping: Provide hrtimer update function
This is a backport of
f6c06abfb3972ad4914cef57d8348fcb2932bc3b
To finally fix the infamous leap second issue and other race windows
caused by functions which change the offsets between the various time
bases (CLOCK_MONOTONIC, CLOCK_REALTIME and CLOCK_BOOTTIME) we need a
function which atomically gets the current monotonic time and updates
the offsets of CLOCK_REALTIME and CLOCK_BOOTTIME with minimalistic
overhead. The previous patch which provides ktime_t offsets allows us
to make this function almost as cheap as ktime_get() which is going to
be replaced in hrtimer_interrupt().
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Link: http://lkml.kernel.org/r/1341960205-56738-7-git-send-email-johnstul@us.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Tue, 17 Jul 2012 06:39:53 +0000 (02:39 -0400)]
hrtimers: Move lock held region in hrtimer_interrupt()
This is a backport of
196951e91262fccda81147d2bcf7fdab08668b40
We need to update the base offsets from this code and we need to do
that under base->lock. Move the lock held region around the
ktime_get() calls. The ktime_get() calls are going to be replaced with
a function which gets the time and the offsets atomically.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Link: http://lkml.kernel.org/r/1341960205-56738-6-git-send-email-johnstul@us.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Tue, 17 Jul 2012 06:39:52 +0000 (02:39 -0400)]
timekeeping: Maintain ktime_t based offsets for hrtimers
This is a backport of
5b9fe759a678e05be4937ddf03d50e950207c1c0
We need to update the hrtimer clock offsets from the hrtimer interrupt
context. To avoid conversions from timespec to ktime_t maintain a
ktime_t based representation of those offsets in the timekeeper. This
puts the conversion overhead into the code which updates the
underlying offsets and provides fast accessible values in the hrtimer
interrupt.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Link: http://lkml.kernel.org/r/1341960205-56738-4-git-send-email-johnstul@us.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
John Stultz [Tue, 17 Jul 2012 06:39:51 +0000 (02:39 -0400)]
timekeeping: Fix leapsecond triggered load spike issue
This is a backport of
4873fa070ae84a4115f0b3c9dfabc224f1bc7c51
The timekeeping code misses an update of the hrtimer subsystem after a
leap second happened. Due to that, timers based on CLOCK_REALTIME
expire either a second early or a second late, depending on whether a leap
second has been inserted or deleted, until an operation is initiated
which causes that update. Unless the update happens by some other
means this discrepancy between the timekeeping and the hrtimer data
stays forever and timers are expired either early or late.
The reported immediate workaround - $ date -s "`date`" - causes a
call to clock_was_set(), which updates the hrtimer data structures.
See: http://www.sheeri.com/content/mysql-and-leap-second-high-cpu-and-fix
Add the missing clock_was_set() call to update_wall_time() in case of
a leap second event. The actual update is deferred to softirq context
as the necessary smp function call cannot be invoked from hard
interrupt context.
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Reported-by: Jan Engelhardt <jengelh@inai.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Link: http://lkml.kernel.org/r/1341960205-56738-3-git-send-email-johnstul@us.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
John Stultz [Tue, 17 Jul 2012 06:39:50 +0000 (02:39 -0400)]
hrtimer: Provide clock_was_set_delayed()
This is a backport of
f55a6faa384304c89cfef162768e88374d3312cb
clock_was_set() cannot be called from hard interrupt context because
it calls on_each_cpu().
For fixing the widely reported leap seconds issue it is necessary to
call it from hard interrupt context, i.e. the timer tick code, which
does the timekeeping updates.
Provide a new function which denotes it in the hrtimer cpu base
structure of the cpu on which it is called and raise the hrtimer
softirq. We then execute the clock_was_set() notification from
softirq context in run_hrtimer_softirq(). The hrtimer softirq is
rarely used, so polling the flag there is not a performance issue.
[ tglx: Made it depend on CONFIG_HIGH_RES_TIMERS. We really should get
rid of all this ifdeffery ASAP ]
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Reported-by: Jan Engelhardt <jengelh@inai.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Acked-by: Prarit Bhargava <prarit@redhat.com>
Link: http://lkml.kernel.org/r/1341960205-56738-2-git-send-email-johnstul@us.ibm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: John Stultz <johnstul@us.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michal Kazior [Fri, 8 Jun 2012 08:55:44 +0000 (10:55 +0200)]
cfg80211: check iface combinations only when iface is running
commit
f8cdddb8d61d16a156229f0910f7ecfc7a82c003 upstream.
Don't validate interface combinations on a stopped
interface. Otherwise we might end up being able to
create a new interface with a certain type, but
won't be able to change an existing interface
into that type.
This also skips some other functions when
the interface is stopped and the interface type is being changed.
Signed-off-by: Michal Kazior <michal.kazior@tieto.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
[Fixes regression introduced by cherry pick of
463454b5dbd8]
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Pawel Moll [Fri, 8 Jun 2012 13:04:06 +0000 (14:04 +0100)]
clk: Check parent for NULL in clk_change_rate
commit
bf47b4fd8f9f81cd5ce40e1945c6334d088226d1 upstream.
clk_change_rate() is accessing parent's rate without checking
if the parent exists at all. In case of root clocks this will
cause NULL pointer dereference.
This patch follows what clk_calc_new_rates() does in such a
situation.
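The change amounts to guarding the parent-rate read; a minimal sketch of the guarded form, following the description above rather than quoting the driver:
  unsigned long best_parent_rate = 0;

  /* root clocks have no parent, so don't dereference it */
  if (clk->parent)
          best_parent_rate = clk->parent->rate;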
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Mike Turquette <mturquette@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ryan Bourgeois [Tue, 10 Jul 2012 16:43:33 +0000 (09:43 -0700)]
HID: add support for 2012 MacBook Pro Retina
commit
b2e6ad7dfe26aac5bf136962d0b11d180b820d44 upstream.
Add support for the 15'' MacBook Pro Retina. The keyboard is
the same as recent models.
The patch needs to be synchronized with the bcm5974 patch for
the trackpad - as usual.
Patch originally written by clipcarl (forums.opensuse.org).
[rydberg@euromail.se: Amended mouse ignore lines]
Signed-off-by: Ryan Bourgeois <bluedragonx@gmail.com>
Signed-off-by: Henrik Rydberg <rydberg@euromail.se>
Acked-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yuri Khan [Thu, 12 Jul 2012 05:12:31 +0000 (22:12 -0700)]
Input: xpad - add Andamiro Pump It Up pad
commit
e76b8ee25e034ab601b525abb95cea14aa167ed3 upstream.
I couldn't find the vendor ID in any of the online databases, but this
mat has a Pump It Up logo on the top side of the controller compartment,
and a disclaimer on the bottom stating that Andamiro will not be liable.
Signed-off-by: Yuri Khan <yurivkhan@gmail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ilia Katsnelson [Wed, 11 Jul 2012 07:54:20 +0000 (00:54 -0700)]
Input: xpad - add signature for Razer Onza Tournament Edition
commit
cc71a7e899cc6b2ff41e1be48756782ed004d802 upstream.
Signed-off-by: Ilia Katsnelson <k0009000@gmail.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yuri Khan [Wed, 11 Jul 2012 07:49:18 +0000 (00:49 -0700)]
Input: xpad - handle all variations of Mad Catz Beat Pad
commit
3ffb62cb9ac2430c2504c6ff9727d0f2476ef0bd upstream.
The device should be handled by the xpad driver instead of the generic HID driver.
Signed-off-by: Yuri Khan <yurivkhan@gmail.com>
Acked-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Henrik Rydberg [Tue, 10 Jul 2012 16:43:57 +0000 (09:43 -0700)]
Input: bcm5974 - Add support for 2012 MacBook Pro Retina
commit
3dde22a98e94eb18527f0ff0068fb2fb945e58d4 upstream.
Add support for the 15'' MacBook Pro Retina model (MacBookPro10,1).
Patch originally written by clipcarl (forums.opensuse.org).
Signed-off-by: Henrik Rydberg <rydberg@euromail.se>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Mon, 9 Jul 2012 10:51:45 +0000 (10:51 +0000)]
bonding: Manage /proc/net/bonding/ entries from the netdev events
commit
a64d49c3dd504b685f9742a2f3dcb11fb8e4345f upstream.
It was recently reported that moving a bonding device between network
namespaces causes warnings from /proc. It turns out after the move we
were trying to add and to remove the /proc/net/bonding entries from the
wrong network namespace.
Move the bonding /proc registration code into the NETDEV_REGISTER and
NETDEV_UNREGISTER events where the proc registration and unregistration
will always happen at the right time.
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric W. Biederman [Mon, 9 Jul 2012 10:52:43 +0000 (10:52 +0000)]
bonding: debugfs and network namespaces are incompatible
commit
96ca7ffe748bf91f851e6aa4479aa11c8b1122ba upstream.
The bonding debugfs support has been broken in the presence of network
namespaces since it has been added. The debugfs support does not handle
multiple bonding devices with the same name in different network
namespaces.
I haven't had any bug reports, and I'm not interested in getting any.
Disable the debugfs support when network namespaces are enabled.
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Deepak Sikri [Sun, 8 Jul 2012 21:14:45 +0000 (21:14 +0000)]
stmmac: Fix for nfs hang on multiple reboot
commit
8e83989106562326bfd6aaf92174fe138efd026b upstream.
It was observed that during multiple reboots NFS hangs. The status of
the receive descriptors shows that all the descriptors were under control of
the CPU, and none were assigned to DMA.
Also, the DMA status register confirmed that the Rx buffer was
unavailable.
This patch fixes the issue by adding memory barriers to
ensure that all instructions before enabling the Rx or Tx DMA have
completed, in particular the setting of the ownership bit in the DMA
descriptors.
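Conceptually the fix inserts a write barrier between the descriptor setup and the DMA kick, along these lines (the helper names are placeholders, not the driver's functions):
  init_rx_descriptors(priv);      /* CPU fills descriptors and sets the OWN bit */
  wmb();                          /* make those writes visible before starting DMA */
  enable_dma_reception(priv);     /* hardware may now fetch the descriptors */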
Signed-off-by: Deepak Sikri <deepak.sikri@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eliad Peller [Mon, 2 Jul 2012 11:42:03 +0000 (14:42 +0300)]
mac80211: destroy assoc_data correctly if assoc fails
commit
10a9109f2705fdc3caa94d768b2559587a9a050c upstream.
If association fails due to an internal error (e.g. no
supported rates IE), we call ieee80211_destroy_assoc_data()
with assoc=true, even though we actually reject the association.
This results in the BSSID not being zeroed out.
After passing assoc=false, we no longer have to call
sta_info_destroy_addr() explicitly. While at it, move
the "associated" message after the assoc_success check.
Signed-off-by: Eliad Peller <eliad@wizery.com>
Reviewed-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Federico Fuga [Mon, 16 Jul 2012 07:36:51 +0000 (10:36 +0300)]
rpmsg: fix dependency on initialization order
commit
9634252617441991b01dacaf4040866feecaf36f upstream.
When rpmsg drivers are built into the kernel, they must not initialize
before the rpmsg bus does, otherwise they'd trigger a BUG() in
drivers/base/driver.c line 169 (driver_register()).
To fix that, and to stop depending on arbitrary linkage ordering of
those built-in rpmsg drivers, we make the rpmsg bus initialize at
subsys_initcall.
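The change itself is essentially the initcall level used for the bus registration; the init/exit function names below follow drivers/rpmsg/virtio_rpmsg_bus.c and are assumed rather than quoted:
  /* was: module_init(rpmsg_init); -- runs too late for built-in rpmsg drivers */
  subsys_initcall(rpmsg_init);
  module_exit(rpmsg_fini);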
Signed-off-by: Federico Fuga <fuga@studiofuga.com>
[ohad: rewrite the commit log]
Signed-off-by: Ohad Ben-Cohen <ohad@wizery.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Emmanuel Grumbach [Wed, 4 Jul 2012 11:59:08 +0000 (13:59 +0200)]
iwlegacy: don't mess up the SCD when removing a key
commit
b48d96652626b315229b1b82c6270eead6a77a6d upstream.
When we remove a key, we put a key index which was supposed
to tell the fw that we are actually removing the key. But
instead the fw took that index as a valid index and messed
up the SRAM of the device.
This memory corruption on the device mangled the data of
the SCD. The impact on the user is that SCD queue 2 got
stuck after having removed keys.
Reported-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stanislaw Gruszka [Wed, 4 Jul 2012 11:20:20 +0000 (13:20 +0200)]
iwlegacy: always monitor for stuck queues
commit
c2ca7d92ed4bbd779516beb6eb226e19f7f7ab0f upstream.
This is iwlegacy version of:
commit
342bbf3fee2fa9a18147e74b2e3c4229a4564912
Author: Johannes Berg <johannes.berg@intel.com>
Date: Sun Mar 4 08:50:46 2012 -0800
iwlwifi: always monitor for stuck queues
If we only monitor while associated, the following
can happen:
- we're associated, and the queue stuck check
runs, setting the queue "touch" time to X
- we disassociate, stopping the monitoring,
which leaves the time set to X
- almost 2s later, we associate, and enqueue
a frame
- before the frame is transmitted, we monitor
for stuck queues, and find the time set to
X, although it is now later than X + 2000ms,
so we decide that the queue is stuck and
erroneously restart the device
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tushar Dave [Thu, 12 Jul 2012 08:56:56 +0000 (08:56 +0000)]
e1000e: Correct link check logic for 82571 serdes
commit
d0efa8f23a644f7cb7d1f8e78dd9a223efa412a3 upstream.
SYNCH bit and IV bit of RXCW register are sticky. Before examining these bits,
RXCW should be read twice to filter out one-time false events and have correct
values for these bits. Incorrect values of these bits in link check logic can
cause weird link stability issues if auto-negotiation fails.
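The resulting link-check logic therefore reads RXCW twice and only acts on the second value, roughly (er32() is the driver's register-read macro):
  /* SYNCH and IV are sticky: the first read may still report a stale,
   * one-time event, so read RXCW twice and trust the second value */
  rxcw = er32(RXCW);
  rxcw = er32(RXCW);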
Reported-by: Dean Nelson <dnelson@redhat.com>
Signed-off-by: Tushar Dave <tushar.n.dave@intel.com>
Reviewed-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Jeff Pieper <jeffrey.e.pieper@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stanislaw Gruszka [Wed, 4 Jul 2012 11:10:02 +0000 (13:10 +0200)]
rt2x00usb: fix indexes ordering on RX queue kick
commit
efd821182cec8c92babef6e00a95066d3252fda4 upstream.
On rt2x00_dmastart() we increase index specified by Q_INDEX and on
rt2x00_dmadone() we increase index specified by Q_INDEX_DONE. So entries
between Q_INDEX_DONE and Q_INDEX are those we currently process in the
hardware. Entries between Q_INDEX and Q_INDEX_DONE are those we can
submit to the hardware.
Accordingly, fix rt2x00usb_kick_queue(), as we need to submit RX
entries that are not being processed by the hardware. It previously worked only
for an empty queue and was otherwise broken.
Note that for TX queues the index ordering is OK. We need to kick entries
that have a filled skb but were not yet submitted to the hardware, i.e.
starting from Q_INDEX_DONE and with the ENTRY_DATA_PENDING bit set.
From a practical standpoint this fixes an RX queue stall, usually reproducible
in AP mode, as for example reported here:
https://bugzilla.redhat.com/show_bug.cgi?id=828824
Reported-and-tested-by: Franco Miceli <fmiceli@plan.ceibal.edu.uy>
Reported-and-tested-by: Tom Horsley <horsley1953@gmail.com>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Anders Kaseorg [Sun, 15 Jul 2012 21:14:25 +0000 (17:14 -0400)]
fifo: Do not restart open() if it already found a partner
commit
05d290d66be6ef77a0b962ebecf01911bd984a78 upstream.
If a parent and child process open the two ends of a fifo, and the
child immediately exits, the parent may receive a SIGCHLD before its
open() returns. In that case, we need to make sure that open() will
return successfully after the SIGCHLD handler returns, instead of
throwing EINTR or being restarted. Otherwise, the restarted open()
would incorrectly wait for a second partner on the other end.
The following test demonstrates the EINTR that was wrongly thrown from
the parent’s open(). Change .sa_flags = 0 to .sa_flags = SA_RESTART
to see a deadlock instead, in which the restarted open() waits for a
second reader that will never come. (On my systems, this happens
pretty reliably within about 5 to 500 iterations. Others report that
it manages to loop ~forever sometimes; YMMV.)
#include <sys/stat.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#define CHECK(x) do if ((x) == -1) {perror(#x); abort();} while(0)
void handler(int signum) {}
int main()
{
	struct sigaction act = {.sa_handler = handler, .sa_flags = 0};
	CHECK(sigaction(SIGCHLD, &act, NULL));
	CHECK(mknod("fifo", S_IFIFO | S_IRWXU, 0));
	for (;;) {
		int fd;
		pid_t pid;
		putc('.', stderr);
		CHECK(pid = fork());
		if (pid == 0) {
			CHECK(fd = open("fifo", O_RDONLY));
			_exit(0);
		}
		CHECK(fd = open("fifo", O_WRONLY));
		CHECK(close(fd));
		CHECK(waitpid(pid, NULL, 0));
	}
}
This is what I suspect was causing the Git test suite to fail in
t9010-svn-fe.sh:
http://bugs.debian.org/678852
Signed-off-by: Anders Kaseorg <andersk@mit.edu>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>