Alexander Usyskin [Tue, 12 Aug 2014 15:07:57 +0000 (18:07 +0300)]
mei: nfc: fix memory leak in error path
commit 8e8248b1369c97c7bb6f8bcaee1f05deeabab8ef upstream.
NFC will leak the buffer if the send fails.
Use a single exit point that does the freeing.
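As a rough illustration of that pattern (the function and helper names below are placeholders, not the driver's actual code), a single exit label frees the buffer on both the success and the error path:
static int nfc_send_sketch(struct device *dev, const u8 *data, size_t len)
{
        u8 *buf;
        int ret;

        buf = kmalloc(len, GFP_KERNEL);
        if (!buf)
                return -ENOMEM;
        memcpy(buf, data, len);

        ret = do_send(dev, buf, len);   /* placeholder for the real send */
        if (ret < 0)
                goto out;               /* send failure also reaches kfree() */
        ret = len;
out:
        kfree(buf);                     /* single exit point: no leak */
        return ret;
}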
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexander Usyskin [Tue, 12 Aug 2014 15:07:56 +0000 (18:07 +0300)]
mei: reset client state on queued connect request
commit 73ab4232388b7a08f17c8d08141ff2099fa0b161 upstream.
If a connect request is queued (e.g. the device is in pg), set the client
state to initializing, thus avoiding a premature exit from the wait if the
current state is disconnected.
This is a regression from:
commit e4d8270e604c3202131bac607969605ac397b893
Author: Alexander Usyskin <alexander.usyskin@intel.com>
mei: set connecting state just upon connection request is sent to the fw
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Liu Bo [Tue, 19 Aug 2014 15:33:13 +0000 (23:33 +0800)]
Btrfs: fix crash on endio of reading corrupted block
commit 38c1c2e44bacb37efd68b90b3f70386a8ee370ee upstream.
The crash is
------------[ cut here ]------------
kernel BUG at fs/btrfs/extent_io.c:2124!
[...]
Workqueue: btrfs-endio normal_work_helper [btrfs]
RIP: 0010:[<ffffffffa02d6055>]  [<ffffffffa02d6055>] end_bio_extent_readpage+0xb45/0xcd0 [btrfs]
This is in fact a regression.
It is because we forgot to advance @offset properly when reading a corrupted
block, so @offset stays the same, and this leads to checksum errors while
reading the remaining blocks queued up in the same bio, and then ends up
hitting the above BUG_ON.
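A simplified sketch of the idea (purely illustrative names; the real code lives in end_bio_extent_readpage()): the running offset has to advance for corrupted blocks too, or every later block in the same bio is verified against the wrong offset:
static void readpage_end_sketch(struct bvec_sketch *vecs, int nr, u64 offset)
{
        int i;

        for (i = 0; i < nr; i++) {
                u64 len = vecs[i].len;

                if (!checksum_ok(&vecs[i], offset))
                        mark_bad(&vecs[i]);     /* corrupted block path */

                offset += len;  /* must happen on the error path as well */
        }
}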
Reported-by: Chris Murphy <lists@colorremedies.com>
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Liu Bo [Thu, 24 Jul 2014 14:48:05 +0000 (22:48 +0800)]
Btrfs: fix compressed write corruption on enospc
commit ce62003f690dff38d3164a632ec69efa15c32cbf upstream.
When failing to allocate space for the whole compressed extent, we'll
fall back to uncompressed IO, but we've forgotten to redirty the pages
which belong to this compressed extent, and these 'clean' pages will
simply skip the 'submit' part and go to endio directly; in the end we get
data corruption as we write nothing.
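Roughly, the missing step is to put the pages of the failed compressed range back on the writeback radar before falling back; a sketch using the plain pagecache API (the helper name is assumed, the real fix lives in btrfs' extent code):
static void redirty_range_sketch(struct address_space *mapping,
                                 pgoff_t start, pgoff_t end)
{
        pgoff_t index;

        for (index = start; index <= end; index++) {
                struct page *page = find_get_page(mapping, index);

                if (!page)
                        continue;
                set_page_dirty(page);   /* so writeback submits it again */
                put_page(page);
        }
}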
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Tested-By: Martin Steigerwald <martin@lichtvoll.de>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Filipe Manana [Wed, 2 Jul 2014 19:07:54 +0000 (20:07 +0100)]
Btrfs: read lock extent buffer while walking backrefs
commit 6f7ff6d7832c6be13e8c95598884dbc40ad69fb7 upstream.
Before processing the extent buffer, acquire a read lock on it, so
that we're safe against concurrent updates on the extent buffer.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Filipe Manana [Sat, 9 Aug 2014 20:22:27 +0000 (21:22 +0100)]
Btrfs: fix csum tree corruption, duplicate and outdated checksums
commit 27b9a8122ff71a8cadfbffb9c4f0694300464f3b upstream.
Under rare circumstances we can end up leaving 2 versions of a checksum
for the same file extent range.
The reason for this is that after calling btrfs_next_leaf we process
slot 0 of the leaf it returns, instead of processing the slot set in
path->slots[0]. Most of the time (by far) path->slots[0] is 0, but after
btrfs_next_leaf() releases the path and before it searches for the next
leaf, another task might cause a split of the next leaf, which migrates
some of its keys to the leaf we were processing before calling
btrfs_next_leaf(). In this case btrfs_next_leaf() returns again the
same leaf but with path->slots[0] having a slot number corresponding
to the first new key it got, that is, a slot number that didn't exist
before calling btrfs_next_leaf(), as the leaf now has more keys than
it had before. So we must really process the returned leaf starting at
path->slots[0] always, as it isn't always 0, and the key at slot 0 can
have an offset much lower than our search offset/bytenr.
For example, consider the following scenario, where we have:
sums->bytenr: 40157184, sums->len: 16384, sums end: 40173568
four 4kb file data blocks with offsets 40157184, 40161280, 40165376, 40169472
Leaf N:
  slot = 0                                 slot = btrfs_header_nritems() - 1
|-------------------------------------------------------------------|
| [(CSUM CSUM 39239680), size 8] ... [(CSUM CSUM 40116224), size 4] |
|-------------------------------------------------------------------|
Leaf N + 1:
  slot = 0                                  slot = btrfs_header_nritems() - 1
|--------------------------------------------------------------------|
| [(CSUM CSUM 40161280), size 32] ... [(CSUM CSUM 40615936), size 8] |
|--------------------------------------------------------------------|
Because we are at the last slot of leaf N, we call btrfs_next_leaf() to
find the next highest key, which releases the current path and then searches
for that next key. However after releasing the path and before finding that
next key, the item at slot 0 of leaf N + 1 gets moved to leaf N, due to a call
to ctree.c:push_leaf_left() (via ctree.c:split_leaf()), and therefore
btrfs_next_leaf() will return us a path again with leaf N but with the slot
pointing to its new last key (CSUM CSUM 40161280). This new version of leaf N
is then:
  slot = 0                 slot = btrfs_header_nritems() - 2  slot = btrfs_header_nritems() - 1
|----------------------------------------------------------------------------------------------------|
| [(CSUM CSUM 39239680), size 8] ... [(CSUM CSUM 40116224), size 4]  [(CSUM CSUM 40161280), size 32]  |
|----------------------------------------------------------------------------------------------------|
And incorrectly using slot 0 makes us set next_offset to 39239680 and we jump
into the "insert:" label, which will set tmp to:
    tmp = min((sums->len - total_bytes) >> blocksize_bits,
              (next_offset - file_key.offset) >> blocksize_bits) =
          min((16384 - 0) >> 12, (39239680 - 40157184) >> 12) =
          min(4, (u64)-917504 = 18446744073708634112 >> 12) = 4
and
    ins_size = csum_size * tmp = 4 * 4 = 16 bytes.
In other words, we insert a new csum item in the tree with key
(CSUM_OBJECTID CSUM_KEY 40157184 = sums->bytenr) that contains the checksums
for all the data (4 blocks of 4096 bytes each = sums->len). Which is wrong,
because the item with key (CSUM CSUM 40161280) (the one that was moved from
leaf N + 1 to the end of leaf N) contains the old checksums of the last 12288
bytes of our data and won't get those old checksums removed.
So this leaves us 2 different checksums for 3 4kb blocks of data in the tree,
and breaks the logical rule:
Key_N+1.offset >= Key_N.offset + length_of_data_its_checksums_cover
An obvious bad effect of this is that a subsequent csum tree lookup to get
the checksum of any of the blocks with logical offset of 40161280, 40165376
or 40169472 (the last 3 4kb blocks of file data), will get the old checksums.
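A sketch of the corrected pattern (simplified; names as used in the text above): after btrfs_next_leaf() returns, keep using path->slots[0] instead of assuming slot 0:
ret = btrfs_next_leaf(root, path);
if (ret < 0)
        goto out;
if (ret > 0)                    /* no more leaves */
        goto insert;
leaf = path->nodes[0];
/* Do NOT reset the slot to 0: a concurrent split may have migrated keys
 * into this leaf, so slot 0 can hold a csum item whose offset is far
 * below sums->bytenr. Process from path->slots[0]. */
item = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_csum_item);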
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Mon, 28 Jul 2014 08:57:04 +0000 (10:57 +0200)]
Btrfs: Fix memory corruption by ulist_add_merge() on 32bit arch
commit 4eb1f66dce6c4dc28dd90a7ffbe6b2b1cb08aa4e upstream.
We've got bug reports that btrfs crashes when quota is enabled on
32bit kernel, typically with the Oops like below:
BUG: unable to handle kernel NULL pointer dereference at 00000004
IP: [<f9234590>] find_parent_nodes+0x360/0x1380 [btrfs]
*pde = 00000000
Oops: 0000 [#1] SMP
CPU: 0 PID: 151 Comm: kworker/u8:2 Tainted: G S W 3.15.2-1.gd43d97e-default #1
Workqueue: btrfs-qgroup-rescan normal_work_helper [btrfs]
task: f1478130 ti: f147c000 task.ti: f147c000
EIP: 0060:[<f9234590>] EFLAGS: 00010213 CPU: 0
EIP is at find_parent_nodes+0x360/0x1380 [btrfs]
EAX: f147dda8 EBX: f147ddb0 ECX: 00000011 EDX: 00000000
ESI: 00000000 EDI: f147dda4 EBP: f147ddf8 ESP: f147dd38
DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
CR0: 8005003b CR2: 00000004 CR3: 00bf3000 CR4: 00000690
Stack:
00000000 00000000 f147dda4 00000050 00000001 00000000 00000001 00000050
00000001 00000000 d3059000 00000001 00000022 000000a8 00000000 00000000
00000000 000000a1 00000000 00000000 00000001 00000000 00000000 11800000
Call Trace:
[<f923564d>] __btrfs_find_all_roots+0x9d/0xf0 [btrfs]
[<f9237bb1>] btrfs_qgroup_rescan_worker+0x401/0x760 [btrfs]
[<f9206148>] normal_work_helper+0xc8/0x270 [btrfs]
[<c025e38b>] process_one_work+0x11b/0x390
[<c025eea1>] worker_thread+0x101/0x340
[<c026432b>] kthread+0x9b/0xb0
[<c0712a71>] ret_from_kernel_thread+0x21/0x30
[<c0264290>] kthread_create_on_node+0x110/0x110
This indicates a NULL corruption in the prefs_delayed list. Further
investigation and bisection pointed out that the call of ulist_add_merge()
results in the corruption.
ulist_add_merge() takes u64 as aux and writes a 64-bit value into
old_aux. The callers of this function in backref.c, however, pass a
pointer of a pointer to old_aux. That is, the function overwrites a
64-bit value over a 32-bit pointer. This caused a NULL in the adjacent
variable, in this case, prefs_delayed.
Here is a quick attempt to band-aid over this: a new function,
ulist_add_merge_ptr(), is introduced to pass/store a pointer value
properly instead of a u64. There are still ugly void ** casts remaining
in the callers because void ** cannot be converted implicitly. But it's
safer than an explicit cast to u64, anyway.
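Roughly, such a pointer-safe wrapper could look like this (a sketch of the idea, not necessarily the exact helper that was merged): on 32-bit it bounces aux through a u64 temporary so ulist_add_merge() never writes 8 bytes over a 4-byte pointer:
static inline int ulist_add_merge_ptr(struct ulist *ulist, u64 val,
                                      void *aux, void **old_aux,
                                      gfp_t gfp_mask)
{
#if BITS_PER_LONG == 32
        u64 old64 = (uintptr_t)*old_aux;
        int ret = ulist_add_merge(ulist, val, (uintptr_t)aux, &old64, gfp_mask);

        *old_aux = (void *)(uintptr_t)old64;    /* copy back only 32 bits */
        return ret;
#else
        return ulist_add_merge(ulist, val, (u64)aux, (u64 *)old_aux, gfp_mask);
#endif
}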
Bugzilla: https://bugzilla.novell.com/show_bug.cgi?id=887046
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Stephen M. Cameron [Thu, 3 Jul 2014 15:18:03 +0000 (10:18 -0500)]
hpsa: fix bad -ENOMEM return value in hpsa_big_passthru_ioctl
commit 0758f4f732b08b6ef07f2e5f735655cf69fea477 upstream.
When copy_from_user fails, return -EFAULT, not -ENOMEM
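The pattern in question, sketched (hypothetical local names):
static int ioctl_copy_in_sketch(void *kbuf, const void __user *ubuf, size_t len)
{
        if (copy_from_user(kbuf, ubuf, len))
                return -EFAULT; /* a faulting user copy is not an allocation failure */
        return 0;
}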
Signed-off-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Reported-by: Robert Elliott <elliott@hp.com>
Reviewed-by: Joe Handzik <joseph.t.handzik@hp.com>
Reviewed-by: Scott Teel <scott.teel@hp.com>
Reviewed by: Mike Miller <michael.miller@canonical.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Vrabel [Thu, 7 Aug 2014 16:06:06 +0000 (17:06 +0100)]
x86/xen: resume timer irqs early
commit 8d5999df35314607c38fbd6bdd709e25c3a4eeab upstream.
If the timer irqs are resumed during device resume it is possible in
certain circumstances for the resume to hang early on, before device
interrupts are resumed. For an Ubuntu 14.04 PVHVM guest this would
occur in ~0.5% of resume attempts.
It is not entirely clear what is occurring at the point of the hang, but I
think a task necessary for the resume calls schedule_timeout(),
waiting for a timer interrupt (which never arrives). This failure may
require specific tasks to be running on the other VCPUs to trigger
(processes are not frozen during a suspend/resume if PREEMPT is
disabled).
Add IRQF_EARLY_RESUME to the timer interrupts so they are resumed in
syscore_resume().
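A sketch of what that looks like when the per-cpu timer VIRQ is bound (the handler name and surrounding bookkeeping are assumed):
irq = bind_virq_to_irqhandler(VIRQ_TIMER, cpu, xen_timer_interrupt,
                              IRQF_PERCPU | IRQF_TIMER | IRQF_EARLY_RESUME,
                              "timer", NULL);
/* IRQF_EARLY_RESUME brings the timer irq back in syscore_resume(),
 * before ordinary device interrupts are resumed */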
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Vrabel [Tue, 5 Aug 2014 10:49:19 +0000 (11:49 +0100)]
x86/xen: use vmap() to map grant table pages in PVH guests
commit 7d951f3ccb0308c95bf76d5eef9886dea35a7013 upstream.
Commit b7dd0e350e0b (x86/xen: safely map and unmap grant frames when
in atomic context) causes PVH guests to crash in
arch_gnttab_map_shared() when they attempt to map the pages for the
grant table.
This use of a PV-specific function during the PVH grant table setup is
non-obvious and not needed. The standard vmap() function does the
right thing.
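Sketched, the PVH path can simply do (the wrapper name is illustrative; vmap() itself is the standard kernel API):
static void *map_gnttab_frames_sketch(struct page **pages, unsigned int nr)
{
        /* the generic vmap() helper is enough here, no PV-specific
         * arch_gnttab_map_shared() mapping is needed */
        return vmap(pages, nr, VM_MAP, PAGE_KERNEL);
}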
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reported-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Tested-by: Mukesh Rathor <mukesh.rathor@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matt Fleming [Fri, 11 Jul 2014 07:45:25 +0000 (08:45 +0100)]
x86/efi: Enforce CONFIG_RELOCATABLE for EFI boot stub
commit 7b2a583afb4ab894f78bc0f8bd136e96b6499a7e upstream.
Without CONFIG_RELOCATABLE the early boot code will decompress the
kernel to LOAD_PHYSICAL_ADDR. While this may have been fine in the BIOS
days, that isn't going to fly with UEFI since parts of the firmware
code/data may be located at LOAD_PHYSICAL_ADDR.
Straying outside of the bounds of the regions we've explicitly requested
from the firmware will cause all sorts of trouble. Bruno reports that
his machine resets while trying to decompress the kernel image.
We already go to great pains to ensure the kernel is loaded into a
suitably aligned buffer, it's just that the address isn't necessarily
LOAD_PHYSICAL_ADDR, because we can't guarantee that address isn't in-use
by the firmware.
Explicitly enforce CONFIG_RELOCATABLE for the EFI boot stub, so that we
can load the kernel at any address with the correct alignment.
Reported-by: Bruno Prémont <bonbons@linux-vserver.org>
Tested-by: Bruno Prémont <bonbons@linux-vserver.org>
Cc: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Matt Fleming <matt.fleming@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Vrabel [Thu, 31 Jul 2014 15:22:25 +0000 (16:22 +0100)]
xen/events/fifo: ensure all bitops are properly aligned even on x86
commit dcecb8fd93a65787130a74e61fdf29932c8d85eb upstream.
When using the FIFO-based ABI on x86_64, if the last port is at the
end of an event array page then sync_test_bit() on this port's event
word will read beyond the end of the page and in certain circumstances
this may fault.
The fault requires the following page in the kernel's direct mapping
to be not present, which would mean:
a) the array page is the last page of RAM; or
b) the following page is ballooned out /and/ it has been used for a
foreign mapping by a kernel driver (such as netback or blkback)
/and/ the grant has been unmapped.
Use the infrastructure added for arm64 to ensure that all bitops
operating on event words are unsigned long aligned.
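The idea, sketched (helper name assumed): align the 32-bit event word down to an unsigned long boundary and shift the bit index by the bits skipped, so the bitop never reads past the end of the array page:
static inline unsigned long *event_word_align(void *word, unsigned int *bit)
{
        uintptr_t addr = (uintptr_t)word;
        uintptr_t mask = sizeof(unsigned long) - 1;

        *bit += 8 * (addr & mask);      /* account for the bytes aligned away */
        return (unsigned long *)(addr & ~mask);
}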
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Thu, 26 Jun 2014 13:44:52 +0000 (15:44 +0200)]
hpsa: fix non-x86 builds
commit 0b9e7b741f2bf8103b15bb14d5b4a6f5ee91c59a upstream.
commit 28e134464734 "[SCSI] hpsa: enable unit attention reporting"
turns on unit attention notifications, but got the change wrong for
all architectures other than x86, which now store an uninitialized
value into the device register.
Gcc helpfully warns about this:
../drivers/scsi/hpsa.c: In function 'hpsa_set_driver_support_bits':
../drivers/scsi/hpsa.c:6373:17: warning: 'driver_support' is used uninitialized in this function [-Wuninitialized]
driver_support |= ENABLE_UNIT_ATTN;
^
This moves the #ifdef so that only the prefetch enable is conditional
on x86, not the read of the initial register contents as well.
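The corrected shape of the function, sketched (the register and flag identifiers are taken from the warning and commit text and should be treated as assumptions):
static void hpsa_set_driver_support_bits(struct ctlr_info *h)
{
        u32 driver_support;

        /* always read the current register contents first ... */
        driver_support = readl(&h->cfgtable->driver_support);
#ifdef CONFIG_X86
        /* ... and keep only the prefetch enable x86-specific */
        driver_support |= ENABLE_SCSI_PREFETCH;
#endif
        driver_support |= ENABLE_UNIT_ATTN;
        writel(driver_support, &h->cfgtable->driver_support);
}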
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: 28e134464734 "[SCSI] hpsa: enable unit attention reporting"
Acked-by: Stephen M. Cameron <scameron@beardog.cce.hp.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andy Lutomirski [Fri, 25 Jul 2014 23:30:27 +0000 (16:30 -0700)]
x86_64/vsyscall: Fix warn_bad_vsyscall log output
commit 53b884ac3745353de220d92ef792515c3ae692f0 upstream.
This commit in Linux 3.6:
commit c767a54ba0657e52e6edaa97cbe0b0a8bf1c1655
Author: Joe Perches <joe@perches.com>
Date: Mon May 21 19:50:07 2012 -0700
x86/debug: Add KERN_<LEVEL> to bare printks, convert printks to pr_<level>
caused warn_bad_vsyscall to output garbage in the middle of the
line. Revert the bad part of it.
The printk in question isn't actually bare; the level is "%s".
The bug this fixes is purely cosmetic; backports are optional.
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
Link: http://lkml.kernel.org/r/03eac1f24110bbe496ecc12a4df467e0d88466d4.1406330947.git.luto@amacapital.net
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Brian W Hart [Thu, 31 Jul 2014 19:24:37 +0000 (14:24 -0500)]
powerpc/powernv: Update dev->dma_mask in pci_set_dma_mask() path
commit a32305bf90a2ae0e6a9a93370c7616565f75e15a upstream.
powerpc defines various machine-specific routines for handling
pci_set_dma_mask(). The routines for machine "PowerNV" may neglect
to set dev->dma_mask. This could confuse anyone (e.g. drivers) that
consult dev->dma_mask to find the current mask. Set the dma_mask in
the PowerNV leaf routine.
Signed-off-by: Brian W. Hart <hartb@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tyrel Datwyler [Tue, 29 Jul 2014 17:48:13 +0000 (13:48 -0400)]
powerpc/pci: Reorder pci bus/bridge unregistration during PHB removal
commit 7340056567e32b2c9d3554eb146e1977c93da116 upstream.
Commit bcdde7e made __sysfs_remove_dir() recursive and introduced a BUG_ON
during PHB removal while attempting to delete the power management attribute
group of the bus. This is a result of tearing the bridge and bus devices down
out of order in remove_phb_dynamic. Since the bus resides below the bridge
in the sysfs device tree, it should be torn down first.
This patch simply moves the device_unregister call for the PHB bridge device
after the device_unregister call for the PHB bus.
Fixes: bcdde7e221a8 ("sysfs: make __sysfs_remove_dir() recursive")
Signed-off-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christoph Schulz [Wed, 16 Jul 2014 08:00:57 +0000 (10:00 +0200)]
x86: don't exclude low BIOS area when allocating address space for non-PCI cards
commit cbace46a9710a480cae51e4611697df5de41713e upstream.
Commit 30919b0bf356 ("x86: avoid low BIOS area when allocating address
space") moved the test for resource allocations that fall within the first
1MB of address space from the PCI-specific path to a generic path, such
that all resource allocations will avoid this area. However, this breaks
ISA cards which need to allocate a memory region within the first 1MB. An
example is the i82365 PCMCIA controller and derivatives like the Ricoh
RF5C296/396 which map part of the PCMCIA socket memory address space into
the first 1MB of system memory address space. They do not work anymore as
no usable memory region exists due to this change:
Intel ISA PCIC probe: Ricoh RF5C296/396 ISA-to-PCMCIA at port 0x3e0 ofs 0x00, 2 sockets
host opts [0]: none
host opts [1]: none
ISA irqs (scanned) = 3,4,5,9,10 status change on irq 10
pcmcia_socket pcmcia_socket1: pccard: PCMCIA card inserted into slot 1
pcmcia_socket pcmcia_socket0: cs: IO port probe 0xc00-0xcff: excluding 0xcf8-0xcff
pcmcia_socket pcmcia_socket0: cs: IO port probe 0xa00-0xaff: clean.
pcmcia_socket pcmcia_socket0: cs: IO port probe 0x100-0x3ff: excluding 0x170-0x177 0x1f0-0x1f7 0x2f8-0x2ff 0x370-0x37f 0x3c0-0x3e7 0x3f0-0x3ff
pcmcia_socket pcmcia_socket0: cs: memory probe 0x0a0000-0x0affff: excluding 0xa0000-0xaffff
pcmcia_socket pcmcia_socket0: cs: memory probe 0x0b0000-0x0bffff: excluding 0xb0000-0xbffff
pcmcia_socket pcmcia_socket0: cs: memory probe 0x0c0000-0x0cffff: excluding 0xc0000-0xcbfff
pcmcia_socket pcmcia_socket0: cs: memory probe 0x0d0000-0x0dffff: clean.
pcmcia_socket pcmcia_socket0: cs: memory probe 0x0e0000-0x0effff: clean.
pcmcia_socket pcmcia_socket0: cs: memory probe 0x60000000-0x60ffffff: clean.
pcmcia_socket pcmcia_socket0: cs: memory probe 0xa0000000-0xa0ffffff: clean.
pcmcia_socket pcmcia_socket1: cs: IO port probe 0xc00-0xcff: excluding 0xcf8-0xcff
pcmcia_socket pcmcia_socket1: cs: IO port probe 0xa00-0xaff: clean.
pcmcia_socket pcmcia_socket1: cs: IO port probe 0x100-0x3ff: excluding 0x170-0x177 0x1f0-0x1f7 0x2f8-0x2ff 0x370-0x37f 0x3c0-0x3e7 0x3f0-0x3ff
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0a0000-0x0affff: excluding 0xa0000-0xaffff
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0b0000-0x0bffff: excluding 0xb0000-0xbffff
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0c0000-0x0cffff: excluding 0xc0000-0xcbfff
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0d0000-0x0dffff: clean.
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0e0000-0x0effff: clean.
pcmcia_socket pcmcia_socket1: cs: memory probe 0x60000000-0x60ffffff: clean.
pcmcia_socket pcmcia_socket1: cs: memory probe 0xa0000000-0xa0ffffff: clean.
pcmcia_socket pcmcia_socket1: cs: memory probe 0x0cc000-0x0effff: excluding 0xe0000-0xeffff
pcmcia_socket pcmcia_socket1: cs: unable to map card memory!
If filtering out the first 1MB is reverted, everything works as expected.
Tested-by: Robert Resch <fli4l@robert.reschpara.de>
Signed-off-by: Christoph Schulz <develop@kristov.de>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Simone Gotti [Wed, 18 Jun 2014 14:55:30 +0000 (16:55 +0200)]
ACPI / PCI: Fix sysfs acpi_index and label errors
commit dcfa9be83866e28fcb8b7e22b4eeb4ba63bd3174 upstream.
Fix errors in handling "device label" _DSM return values.
If _DSM returns a Unicode string, the ACPI type is ACPI_TYPE_BUFFER, not
ACPI_TYPE_STRING. Fix dsm_label_utf16s_to_utf8s() to convert UTF-16 from
acpi_object->buffer instead of acpi_object->string.
Prior to v3.14, we accepted Unicode labels (ACPI_TYPE_BUFFER return
values). But after 1d0fcef73283, we accepted only ASCII (ACPI_TYPE_STRING)
(and we incorrectly tried to convert those ASCII labels from UTF-16 to
UTF-8).
Rejecting Unicode labels made us return -EPERM when reading sysfs
"acpi_index" or "label" files, which in turn caused on-board network
interfaces on a Dell PowerEdge E420 to be renamed (by udev net_id internal)
from eno1/eno2 to enp2s0f0/enp2s0f1.
Fix this by accepting either ACPI_TYPE_STRING (and treating it as ASCII) or
ACPI_TYPE_BUFFER (and converting from UTF-16 to UTF-8).
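A sketch of that acceptance logic (obj is the _DSM return object; utf16s_to_utf8s() is the kernel's NLS helper; the wrapper itself is illustrative):
static int dsm_label_sketch(union acpi_object *obj, u8 *buf, int buflen)
{
        if (obj->type == ACPI_TYPE_STRING)
                /* ASCII label: copy it through as-is */
                return snprintf(buf, buflen, "%s", obj->string.pointer);

        if (obj->type == ACPI_TYPE_BUFFER)
                /* Unicode label: convert UTF-16 from the buffer */
                return utf16s_to_utf8s((const wchar_t *)obj->buffer.pointer,
                                       obj->buffer.length / 2,
                                       UTF16_LITTLE_ENDIAN, buf, buflen);

        return -EINVAL; /* anything else is not a valid label */
}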
[bhelgaas: changelog]
Fixes: 1d0fcef73283 ("ACPI / PCI: replace open-coded _DSM code with helper functions")
Signed-off-by: Simone Gotti <simone.gotti@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Jiang Liu <jiang.liu@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vidya Sagar [Wed, 16 Jul 2014 10:03:42 +0000 (15:33 +0530)]
PCI: Configure ASPM when enabling device
commit 1f6ae47ecff7f23da73417e068018b311f3b5583 upstream.
We can't do ASPM configuration at enumeration-time because enabling it
makes some defective hardware unresponsive, even if ASPM is disabled later
(see 41cd766b0659 ("PCI: Don't enable aspm before drivers have had a chance
to veto it")). Therefore, we have to do it after a driver claims the
device.
We previously configured ASPM in pci_set_power_state(), but that's not a
very good place because it's not really related to setting the PCI device
power state, and doing it there means:
- We incorrectly skipped ASPM config when setting a device that's
already in D0 to D0.
- We unnecessarily configured ASPM when setting a device to a low-power
state (the ASPM feature only applies when the device is in D0).
- We unnecessarily configured ASPM when called from a .resume() method
(ASPM configuration needs to be restored during resume, but
pci_restore_pcie_state() should already do this).
Move ASPM configuration from pci_set_power_state() to
do_pci_enable_device() so we do it when a driver enables a device.
[bhelgaas: changelog]
Link: https://bugzilla.kernel.org/show_bug.cgi?id=79621
Fixes: db288c9c5f9d ("PCI / PM: restore the original behavior of pci_set_power_state()")
Suggested-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Vidya Sagar <sagar.tv@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Thu, 21 Aug 2014 14:55:07 +0000 (10:55 -0400)]
drm/radeon: add additional SI pci ids
commit 37dbeab788a8f23fd946c0be083e5484d6f929a1 upstream.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Thu, 21 Aug 2014 14:48:11 +0000 (10:48 -0400)]
drm/radeon: add new bonaire pci ids
commit 5fc540edc8ea1297c76685f74bc82a2107fe6731 upstream.
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alex Deucher [Thu, 21 Aug 2014 14:41:42 +0000 (10:41 -0400)]
drm/radeon: add new KV pci id
commit 6dc14baf4ced769017c7a7045019c7a19f373865 upstream.
bug: https://bugs.freedesktop.org/show_bug.cgi?id=82912
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Sat, 23 Aug 2014 21:47:28 +0000 (17:47 -0400)]
ext4: fix BUG_ON in mb_free_blocks()
commit c99d1e6e83b06744c75d9f5e491ed495a7086b7b upstream.
If we suffer a block allocation failure (for example due to a memory
allocation failure), it's possible that we will call
ext4_discard_allocated_blocks() before we've actually allocated any
blocks. In that case, fe_len and fe_start in ac->ac_f_ex will still
be zero, and this will result in mb_free_blocks(inode, e4b, 0, 0)
triggering the BUG_ON on mb_free_blocks():
BUG_ON(last >= (sb->s_blocksize << 3));
Fix this by bailing out of ext4_discard_allocated_blocks() if fe_len
is zero.
Also fix a missing ext4_mb_unload_buddy() call in
ext4_discard_allocated_blocks().
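A sketch of that bail-out inside ext4_discard_allocated_blocks() (simplified; the buddy-load error handling is elided):
struct ext4_buddy e4b;

ext4_mb_load_buddy(ac->ac_sb, ac->ac_f_ex.fe_group, &e4b);
if (ac->ac_f_ex.fe_len == 0) {
        /* nothing was actually allocated, so don't call
         * mb_free_blocks() with a zero-length extent */
        ext4_mb_unload_buddy(&e4b);
        return;
}
mb_free_blocks(ac->ac_inode, &e4b, ac->ac_f_ex.fe_start, ac->ac_f_ex.fe_len);
ext4_mb_unload_buddy(&e4b);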
Google-Bug-Id: 16844242
Fixes: 86f0afd463215fc3e58020493482faa4ac3a4d69
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michael S. Tsirkin [Tue, 19 Aug 2014 11:14:50 +0000 (19:14 +0800)]
kvm: iommu: fix the third parameter of kvm_iommu_put_pages (CVE-2014-3601)
commit 350b8bdd689cd2ab2c67c8a86a0be86cfa0751a7 upstream.
The third parameter of kvm_iommu_put_pages is wrong;
it should be 'gfn - slot->base_gfn'.
By making gfn very large, malicious guest or userspace can cause kvm to
go to this error path, and subsequently to pass a huge value as size.
Alternatively if gfn is small, then pages would be pinned but never
unpinned, causing a host memory leak and local DoS.
Passing a reasonable but large value could be the most dangerous case,
because it would unpin a page that should have stayed pinned, and thus
allow the device to DMA into arbitrary memory. However, this cannot
happen because of the condition that can trigger the error:
- out of memory (where you can't allocate even a single page)
should not be possible for the attacker to trigger
- when exceeding the iommu's address space, guest pages after gfn
will also exceed the iommu's address space, and inside
kvm_iommu_put_pages() the iommu_iova_to_phys() will fail. The
page thus would not be unpinned at all.
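For reference, the corrected call described above:
/* unpin only the pages that were actually mapped in this slot */
kvm_iommu_put_pages(kvm, slot->base_gfn, gfn - slot->base_gfn);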
Reported-by: Jack Morgenstein <jackm@mellanox.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paolo Bonzini [Mon, 18 Aug 2014 14:39:48 +0000 (16:39 +0200)]
Revert "KVM: x86: Increase the number of fixed MTRR regs to 10"
commit 0d234daf7e0a3290a3a20c8087eefbd6335a5bd4 upstream.
This reverts commit 682367c494869008eb89ef733f196e99415ae862,
which causes 32-bit SMP Windows 7 guests to panic.
SeaBIOS has a limit on the number of MTRRs that it can handle,
and this patch exceeded the limit. Better revert it.
Thanks to Nadav Amit for debugging the cause.
Reported-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wanpeng Li [Tue, 5 Aug 2014 04:42:24 +0000 (12:42 +0800)]
KVM: nVMX: fix "acknowledge interrupt on exit" when APICv is in use
commit 56cc2406d68c0f09505c389e276f27a99f495cbd upstream.
After commit 77b0f5d (KVM: nVMX: Ack and write vector info to intr_info
if L1 asks us to), "Acknowledge interrupt on exit" behavior can be
emulated. To do so, KVM will ask the APIC for the interrupt vector
during a nested vmexit if VM_EXIT_ACK_INTR_ON_EXIT is set. With APICv,
kvm_get_apic_interrupt would return -1 and give the following WARNING:
Call Trace:
[<ffffffff81493563>] dump_stack+0x49/0x5e
[<ffffffff8103f0eb>] warn_slowpath_common+0x7c/0x96
[<ffffffffa059709a>] ? nested_vmx_vmexit+0xa4/0x233 [kvm_intel]
[<ffffffff8103f11a>] warn_slowpath_null+0x15/0x17
[<ffffffffa059709a>] nested_vmx_vmexit+0xa4/0x233 [kvm_intel]
[<ffffffffa0594295>] ? nested_vmx_exit_handled+0x6a/0x39e [kvm_intel]
[<ffffffffa0537931>] ? kvm_apic_has_interrupt+0x80/0xd5 [kvm]
[<ffffffffa05972ec>] vmx_check_nested_events+0xc3/0xd3 [kvm_intel]
[<ffffffffa051ebe9>] inject_pending_event+0xd0/0x16e [kvm]
[<ffffffffa051efa0>] vcpu_enter_guest+0x319/0x704 [kvm]
To fix this, we cannot rely on the processor's virtual interrupt delivery,
because "acknowledge interrupt on exit" must only update the virtual
ISR/PPR/IRR registers (and SVI, which is just a cache of the virtual ISR)
but it should not deliver the interrupt through the IDT. Thus, KVM has
to deliver the interrupt "by hand", similar to the treatment of EOI in
commit fc57ac2c9ca8 (KVM: lapic: sync highest ISR to hardware apic on
EOI, 2014-05-14).
The patch modifies kvm_cpu_get_interrupt to always acknowledge an
interrupt; there are only two callers, and the other is not affected
because it is never reached with kvm_apic_vid_enabled() == true. Then it
modifies apic_set_isr and apic_clear_irr to update SVI and RVI in addition
to the registers.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Suggested-by: "Zhang, Yang Z" <yang.z.zhang@intel.com>
Tested-by: Liu, RongrongX <rongrongx.liu@intel.com>
Tested-by: Felipe Reyes <freyes@suse.com>
Fixes: 77b0f5d67ff2781f36831cba79674c3e97bd7acf
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paolo Bonzini [Wed, 30 Jul 2014 16:07:24 +0000 (18:07 +0200)]
KVM: x86: always exit on EOIs for interrupts listed in the IOAPIC redir table
commit 0f6c0a740b7d3e1f3697395922d674000f83d060 upstream.
Currently, the EOI exit bitmap (used for APICv) does not include
interrupts that are masked. However, this can cause a bug that manifests
as an interrupt storm inside the guest. Alex Williamson reported the
bug and is the one who really debugged this; I only wrote the patch. :)
The scenario involves a multi-function PCI device with OHCI and EHCI
USB functions and an audio function, all assigned to the guest, where
both USB functions use legacy INTx interrupts.
As soon as the guest boots, interrupts for these devices turn into an
interrupt storm in the guest; the host does not see the interrupt storm.
Basically the EOI path does not work, and the guest continues to see the
interrupt over and over, even after it attempts to mask it at the APIC.
The bug is only visible with older kernels (RHEL6.5, based on 2.6.32
with not many changes in the area of APIC/IOAPIC handling).
Alex then tried forcing bit 59 (corresponding to the USB functions' IRQ)
on in the eoi_exit_bitmap and TMR, and things then work. What happens
is that VFIO asserts IRQ11, then KVM recomputes the EOI exit bitmap.
It does not have set bit 59 because the RTE was masked, so the IOAPIC
never sees the EOI and the interrupt continues to fire in the guest.
My guess was that the guest is masking the interrupt in the redirection
table in the interrupt routine, i.e. while the interrupt is set in a
LAPIC's ISR. The simplest fix is to ignore the masking state: we would
rather have an unnecessary exit than a missed IRQ ACK, and anyway
IOAPIC interrupts are not as performance-sensitive as, for example, MSIs.
Alex tested this patch and it fixed his bug.
[Thanks to Alex for his precise description of the problem
and initial debugging effort. A lot of the text above is
based on emails exchanged with him.]
Reported-by: Alex Williamson <alex.williamson@redhat.com>
Tested-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nadav Amit [Sun, 15 Jun 2014 13:12:59 +0000 (16:12 +0300)]
KVM: x86: Inter-privilege level ret emulation is not implemented
commit 9e8919ae793f4edfaa29694a70f71a515ae9942a upstream.
Return an unhandleable error on an inter-privilege level ret instruction. This is
because the current emulation does not check the privilege level correctly when
loading the CS, and does not pop RSP/SS as needed.
Signed-off-by: Nadav Amit <namit@cs.technion.ac.il>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steven Rostedt [Mon, 9 Jun 2014 18:06:07 +0000 (14:06 -0400)]
debugfs: Fix corrupted loop in debugfs_remove_recursive
commit 485d44022a152c0254dd63445fdb81c4194cbf0e upstream.
[ I'm currently running my tests on it now, and so far, after a few
hours it has yet to blow up. I'll run it for 24 hours, which it never
survived in the past. ]
The tracing code has a way to make directories within the debugfs file
system as well as deleting them using mkdir/rmdir in the instance
directory. This is very limited in functionality: there are no renames,
and the parent directory "instance" can not be modified.
The tracing code creates the instance directory from the debugfs code
and then replaces the dentry->d_inode->i_op with its own to allow
for mkdir/rmdir to work.
When these are called, the d_entry and inode locks need to be released
to call the instance creation and deletion code. That code has its own
accounting and locking to serialize everything to prevent multiple
users from causing harm. As the parent "instance" directory can not
be modified this simplifies things.
I created a stress test that creates several threads that randomly
creates and deletes directories thousands of times a second. The code
stood up to this test and I submitted it a while ago.
Recently I added a new test that adds readers to the mix. While the
instance directories were being added and deleted, readers would read
from these directories and even enable tracing within them. This test
was able to trigger a bug:
general protection fault: 0000 [#1] PREEMPT SMP
Modules linked in: ...
CPU: 3 PID: 17789 Comm: rmdir Tainted: G W 3.15.0-rc2-test+ #41
Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./To be filled by O.E.M., BIOS SDBLI944.86P 05/08/2007
task: ffff88003786ca60 ti: ffff880077018000 task.ti: ffff880077018000
RIP: 0010:[<ffffffff811ed5eb>]  [<ffffffff811ed5eb>] debugfs_remove_recursive+0x1bd/0x367
RSP: 0018:ffff880077019df8  EFLAGS: 00010246
RAX: 0000000000000002 RBX: ffff88006f0fe490 RCX: 0000000000000000
RDX: dead000000100058 RSI: 0000000000000246 RDI: ffff88003786d454
RBP: ffff88006f0fe640 R08: 0000000000000628 R09: 0000000000000000
R10: 0000000000000628 R11: ffff8800795110a0 R12: ffff88006f0fe640
R13: ffff88006f0fe640 R14: ffffffff81817d0b R15: ffffffff818188b7
FS:  00007ff13ae24700(0000) GS:ffff88007d580000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 0000003054ec7be0 CR3: 0000000076d51000 CR4: 00000000000007e0
Stack:
ffff88007a41ebe0 dead000000100058 00000000fffffffe ffff88006f0fe640
0000000000000000 ffff88006f0fe678 ffff88007a41ebe0 ffff88003793a000
00000000fffffffe ffffffff810bde82 ffff88006f0fe640 ffff88007a41eb28
Call Trace:
[<ffffffff810bde82>] ? instance_rmdir+0x15b/0x1de
[<ffffffff81132e2d>] ? vfs_rmdir+0x80/0xd3
[<ffffffff81132f51>] ? do_rmdir+0xd1/0x139
[<ffffffff8124ad9e>] ? trace_hardirqs_on_thunk+0x3a/0x3c
[<ffffffff814fea62>] ? system_call_fastpath+0x16/0x1b
Code: fe ff ff 48 8d 75 30 48 89 df e8 c9 fd ff ff 85 c0 75 13 48 c7 c6 b8 cc d2 81 48 c7 c7 b0 cc d2 81 e8 8c 7a f5 ff 48 8b 54 24 08 <48> 8b 82 a8 00 00 00 48 89 d3 48 2d a8 00 00 00 48 89 44 24 08
RIP [<ffffffff811ed5eb>] debugfs_remove_recursive+0x1bd/0x367
RSP <ffff880077019df8>
It took a while, but every time it triggered, it was always in the
same place:
list_for_each_entry_safe(child, next, &parent->d_subdirs, d_u.d_child) {
Where the child->d_u.d_child seemed to be corrupted. I added lots of
trace_printk()s to see what was wrong, and sure enough, it was always
the child's d_u.d_child field. I looked around to see what touches
it and noticed that in __dentry_kill() which calls dentry_free():
static void dentry_free(struct dentry *dentry)
{
/* if dentry was never visible to RCU, immediate free is OK */
if (!(dentry->d_flags & DCACHE_RCUACCESS))
__d_free(&dentry->d_u.d_rcu);
else
call_rcu(&dentry->d_u.d_rcu, __d_free);
}
I also noticed that __dentry_kill() unlinks the child->d_u.d_child
under the parent->d_lock spin_lock.
Looking back at the loop in debugfs_remove_recursive() it never takes the
parent->d_lock to do the list walk. Adding more tracing, I was able to
prove this was the issue:
ftrace-t-15385 1.... 246662024us : dentry_kill <ffffffff81138b91>: free ffff88006d573600
rmdir-15409 2.... 246662024us : debugfs_remove_recursive <ffffffff811ec7e5>: child=ffff88006d573600 next=dead000000100058
The dentry_kill freed ffff88006d573600 just as the remove recursive was walking it.
In order to fix this, the list walk needs to be modified a bit to take
the parent->d_lock. The safe version is no longer necessary, as every
time we remove a child, the parent->d_lock must be released and the
list walk must start over. Each time a child is removed, even though it
may still be on the list, it should be skipped by the first check
in the loop:
if (!debugfs_positive(child))
continue;
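A sketch of the reworked walk (the removal helper is a placeholder): take parent->d_lock around the list walk, skip negative dentries, and restart from the head after every removal, since the list may have changed while unlocked:
restart:
        spin_lock(&parent->d_lock);
        list_for_each_entry(child, &parent->d_subdirs, d_u.d_child) {
                if (!debugfs_positive(child))
                        continue;       /* already being torn down */

                spin_unlock(&parent->d_lock);
                debugfs_remove_one(child);      /* placeholder for the removal */
                goto restart;           /* list may have changed: start over */
        }
        spin_unlock(&parent->d_lock);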
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Thu, 26 Jun 2014 11:43:02 +0000 (13:43 +0200)]
crypto: ux500 - make interrupt mode plausible
commit e1f8859ee265fc89bd21b4dca79e8e983a044892 upstream.
The interrupt handler in the ux500 crypto driver has an obviously
incorrect way to access the data buffer, which for a while has
caused this build warning:
../ux500/cryp/cryp_core.c: In function 'cryp_interrupt_handler':
../ux500/cryp/cryp_core.c:234:5: warning: passing argument 1 of '__fswab32' makes integer from pointer without a cast [enabled by default]
writel_relaxed(ctx->indata,
^
In file included from ../include/linux/swab.h:4:0,
from ../include/uapi/linux/byteorder/big_endian.h:12,
from ../include/linux/byteorder/big_endian.h:4,
from ../arch/arm/include/uapi/asm/byteorder.h:19,
from ../include/asm-generic/bitops/le.h:5,
from ../arch/arm/include/asm/bitops.h:340,
from ../include/linux/bitops.h:33,
from ../include/linux/kernel.h:10,
from ../include/linux/clk.h:16,
from ../drivers/crypto/ux500/cryp/cryp_core.c:12:
../include/uapi/linux/swab.h:57:119: note: expected '__u32' but argument is of type 'const u8 *'
static inline __attribute_const__ __u32 __fswab32(__u32 val)
There are at least two, possibly three problems here:
a) when writing into the FIFO, we copy the pointer rather than the
actual data we want to give to the hardware
b) the data pointer is an array of 8-bit values, while the FIFO
is 32-bit wide, so both the read and write access fail to do
a proper type conversion
c) This seems incorrect for big-endian kernels, on which we need to
byte-swap any register access, but not normally FIFO accesses,
at least the DMA case doesn't do it either.
This converts the bogus loop to use the same readsl/writesl pair
that we use for the two other modes (DMA and polling). This is
more efficient and consistent, and probably correct for endianness.
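Roughly, the interrupt path then moves whole 32-bit words the same way the other modes do (register and context field names are assumptions):
/* stream full 32-bit words to/from the FIFO, like the DMA/polling paths */
writesl(&device_data->base->din, ctx->indata, nr_words);
readsl(&device_data->base->dout, ctx->outdata, nr_words);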
The bug has existed since the driver was first merged, and was
probably never detected because nobody tried to use interrupt mode.
It might make sense to backport this fix to stable kernels, depending
on how the crypto maintainers feel about that.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Cc: linux-crypto@vger.kernel.org
Cc: Fabio Baltieri <fabio.baltieri@linaro.org>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peter Hurley [Wed, 9 Jul 2014 13:21:14 +0000 (09:21 -0400)]
serial: core: Preserve termios c_cflag for console resume
commit ae84db9661cafc63d179e1d985a2c5b841ff0ac4 upstream.
When a tty is opened for the serial console, the termios c_cflag
settings are inherited from the console line settings.
However, if the tty is subsequently closed, the termios settings
are lost. This results in a garbled console if the console is later
suspended and resumed.
Preserve the termios c_cflag for the serial console when the tty
is shutdown; this reflects the most recent line settings.
Fixes: Bugzilla #69751, 'serial console does not wake from S3'
Reported-by: Valerio Vanni <valerio.vanni@inwind.it>
Acked-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Thu, 31 Jul 2014 02:17:17 +0000 (22:17 -0400)]
ext4: fix ext4_discard_allocated_blocks() if we can't allocate the pa struct
commit 86f0afd463215fc3e58020493482faa4ac3a4d69 upstream.
If there is a failure while allocating the preallocation structure, a
number of blocks can end up getting marked in the in-memory buddy
bitmap, and then not getting released. This can result in the
following corruption getting reported by the kernel:
EXT4-fs error (device sda3): ext4_mb_generate_buddy:758: group 1126, 12793 clusters in bitmap, 12729 in gd
In that case, we need to release the blocks using mb_free_blocks().
Tested: fs smoke test; also demonstrated that with injected errors,
the file system is no longer getting corrupted
Google-Bug-Id: 16657874
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wolfram Sang [Mon, 21 Jul 2014 09:42:03 +0000 (11:42 +0200)]
drivers/i2c/busses: use correct type for dma_map/unmap
commit 28772ac8711e4d7268c06e765887dd8cb6924f98 upstream.
dma_{un}map_* uses 'enum dma_data_direction' not 'enum dma_transfer_direction'.
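In other words (a minimal illustration, not the driver's actual code):
/* dma_map_single()/dma_unmap_single() take enum dma_data_direction
 * (DMA_TO_DEVICE / DMA_FROM_DEVICE); DMA_MEM_TO_DEV / DMA_DEV_TO_MEM
 * belong to enum dma_transfer_direction and are for the dmaengine API. */
dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
/* ... transfer ... */
dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);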
Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
Acked-by: Ludovic Desroches <ludovic.desroches@atmel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jason Gunthorpe [Sat, 9 Nov 2013 18:17:00 +0000 (11:17 -0700)]
tpm: Add missing tpm_do_selftest to ST33 I2C driver
commit f07a5e9a331045e976a3d317ba43d14859d9407c upstream.
Most device drivers do call 'tpm_do_selftest', which executes a
TPM_ContinueSelfTest. tpm_i2c_stm_st33 is just pointlessly different;
I think it is a bug.
These days we have the general assumption that the TPM is usable by
the kernel immediately after the driver is finished, so we can no
longer defer the mandatory self test to userspace.
Reported-by: Richard Marciel <rmaciel@linux.vnet.ibm.com>
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Peter Huewe <peterhuewe@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Axel Lin [Wed, 6 Aug 2014 00:02:44 +0000 (08:02 +0800)]
hwmon: (dme1737) Prevent overflow problem when writing large limits
commit d58e47d787c09fe5c61af3c6ce7d784762f29c3d upstream.
On platforms with sizeof(int) < sizeof(long), writing a temperature
limit larger than MAXINT will result in unpredictable limit values
written to the chip. Avoid auto-conversion from long to int to fix
the problem.
Voltage limits, fan minimum speed, pwm frequency, pwm ramp rate, and
other attributes have the same problem, fix them as well.
Zone temperature limits are signed, but were cached as u8, causing
unexpected values to be reported for negative temperatures. Cache as
s8 to fix the problem.
vrm is a u8, so the written value needs to be limited to [0, 255].
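The usual fix pattern, sketched (attribute and register-encoding names are placeholders): parse into a long and clamp while still in the long domain, so narrowing to the register width cannot wrap:
static ssize_t set_temp_limit_sketch(struct device *dev,
                                     struct device_attribute *attr,
                                     const char *buf, size_t count)
{
        long val;
        int err;

        err = kstrtol(buf, 10, &val);
        if (err)
                return err;

        /* clamp before the value is narrowed to the register width */
        val = clamp_val(val, -128000, 127000);
        /* ... convert val to the register encoding and write it out ... */
        return count;
}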
Signed-off-by: Axel Lin <axel.lin@ingics.com>
[Guenter Roeck: Fix zone temperature cache]
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Axel Lin [Tue, 5 Aug 2014 01:59:49 +0000 (09:59 +0800)]
hwmon: (ads1015) Fix out-of-bounds array access
commit e981429557cbe10c780fab1c1a237cb832757652 upstream.
Current code uses data_rate as an array index in ads1015_read_adc() and uses
pga as an array index in ads1015_reg_to_mv, so we must make sure both the
data_rate and pga settings are in the valid value range.
Return -EINVAL if the setting is out of range.
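A sketch of the validation (the table names stand in for the driver's rate and full-scale tables and are assumptions):
if (data_rate >= ARRAY_SIZE(data_rate_table) ||
    pga >= ARRAY_SIZE(fullscale_table))
        return -EINVAL; /* would index past the end of the lookup tables */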
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guenter Roeck [Wed, 30 Jul 2014 05:23:12 +0000 (22:23 -0700)]
hwmon: (lm85) Fix various errors on attribute writes
commit 3248c3b771ddd9d31695da17ba350eb6e1b80a53 upstream.
Temperature limit register writes did not account for negative numbers.
As a result, writing -127000 resulted in -126000 written into the
temperature limit register. This problem affected temp[1-3]_min,
temp[1-3]_max, temp[1-3]_auto_temp_crit, and temp[1-3]_auto_temp_min.
When writing pwm[1-3]_freq, a long variable was auto-converted into an int
without a range check. Writing values larger than MAXINT resulted in unexpected
register values.
When writing temp[1-3]_auto_temp_max, an unsigned long variable was
auto-converted into an int without range check. Writing values larger than
MAXINT resulted in unexpected register values.
vrm is a u8, so the written value needs to be limited to [0, 255].
Cc: Axel Lin <axel.lin@ingics.com>
Reviewed-by: Axel Lin <axel.lin@ingics.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Axel Lin [Wed, 30 Jul 2014 03:13:52 +0000 (11:13 +0800)]
hwmon: (ads1015) Fix off-by-one for valid channel index checking
commit 56de1377ad92f72ee4e5cb0faf7a9b6048fdf0bf upstream.
Current code uses channel as array index, so the valid channel value is
0 .. ADS1015_CHANNELS - 1.
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Axel Lin [Sat, 2 Aug 2014 05:36:38 +0000 (13:36 +0800)]
hwmon: (gpio-fan) Prevent overflow problem when writing large limits
commit 2565fb05d1e9fc0831f7b1c083bcfcb1cba1f020 upstream.
On platforms with sizeof(int) < sizeof(unsigned long), writing a rpm value
larger than MAXINT will result in unpredictable limit values written to the
chip. Avoid auto-conversion from unsigned long to int to fix the problem.
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guenter Roeck [Wed, 30 Jul 2014 03:48:59 +0000 (20:48 -0700)]
hwmon: (lm78) Fix overflow problems seen when writing large temperature limits
commit 1074d683a51f1aded3562add9ef313e75d557327 upstream.
On platforms with sizeof(int) < sizeof(long), writing a temperature
limit larger than MAXINT will result in unpredictable limit values
written to the chip. Avoid auto-conversion from long to int to fix
the problem.
Cc: Axel Lin <axel.lin@ingics.com>
Reviewed-by: Axel Lin <axel.lin@ingics.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Axel Lin [Thu, 31 Jul 2014 01:43:19 +0000 (09:43 +0800)]
hwmon: (amc6821) Fix possible race condition bug
commit cf44819c98db11163f58f08b822d626c7a8f5188 upstream.
Ensure the mutex lock protects the read-modify-write period to prevent a
possible race condition bug.
In addition, the update of data->valid should also be protected by the mutex lock.
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Axel Lin [Thu, 31 Jul 2014 14:27:04 +0000 (22:27 +0800)]
hwmon: (sis5595) Prevent overflow problem when writing large limits
commit cc336546ddca8c22de83720632431c16a5f9fe9a upstream.
On platforms with sizeof(int) < sizeof(long), writing a temperature
limit larger than MAXINT will result in unpredictable limit values
written to the chip. Avoid auto-conversion from long to int to fix
the problem.
Signed-off-by: Axel Lin <axel.lin@ingics.com>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Russell King [Sat, 12 Jul 2014 09:53:41 +0000 (10:53 +0100)]
drm: omapdrm: fix compiler errors
commit 2d31ca3ad7d5d44c8adc7f253c96ce33f3a2e931 upstream.
Regular randconfig nightly testing has detected problems with omapdrm.
omapdrm fails to build when the kernel is built to support 64-bit DMA
addresses and/or 64-bit physical addresses due to an assumption about
the width of these types.
Use %pad to print DMA addresses, rather than %x or %Zx (which is even
more wrong than %x). Avoid passing a uint32_t pointer into a function
which expects dma_addr_t pointer.
drivers/gpu/drm/omapdrm/omap_plane.c: In function 'omap_plane_pre_apply':
drivers/gpu/drm/omapdrm/omap_plane.c:145:2: error: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'dma_addr_t' [-Werror=format]
drivers/gpu/drm/omapdrm/omap_plane.c:145:2: error: format '%x' expects argument of type 'unsigned int', but argument 6 has type 'dma_addr_t' [-Werror=format]
make[5]: *** [drivers/gpu/drm/omapdrm/omap_plane.o] Error 1
drivers/gpu/drm/omapdrm/omap_gem.c: In function 'omap_gem_get_paddr':
drivers/gpu/drm/omapdrm/omap_gem.c:794:4: error: format '%x' expects argument of type 'unsigned int', but argument 3 has type 'dma_addr_t' [-Werror=format]
drivers/gpu/drm/omapdrm/omap_gem.c: In function 'omap_gem_describe':
drivers/gpu/drm/omapdrm/omap_gem.c:991:4: error: format '%Zx' expects argument of type 'size_t', but argument 7 has type 'dma_addr_t' [-Werror=format]
drivers/gpu/drm/omapdrm/omap_gem.c: In function 'omap_gem_init':
drivers/gpu/drm/omapdrm/omap_gem.c:1470:4: error: format '%x' expects argument of type 'unsigned int', but argument 7 has type 'dma_addr_t' [-Werror=format]
make[5]: *** [drivers/gpu/drm/omapdrm/omap_gem.o] Error 1
drivers/gpu/drm/omapdrm/omap_dmm_tiler.c: In function 'dmm_txn_append':
drivers/gpu/drm/omapdrm/omap_dmm_tiler.c:226:2: error: passing argument 3 of 'alloc_dma' from incompatible pointer type [-Werror]
make[5]: *** [drivers/gpu/drm/omapdrm/omap_dmm_tiler.o] Error 1
make[5]: Target `__build' not remade because of errors.
make[4]: *** [drivers/gpu/drm/omapdrm] Error 2
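For reference, the portable way to print a DMA address that the fix switches to (a minimal illustration):
dma_addr_t dma_handle;

/* %pad takes a *pointer* to the dma_addr_t, and works for 32- and
 * 64-bit dma_addr_t alike */
dev_info(dev, "buffer mapped at %pad\n", &dma_handle);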
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jeremy Vial [Thu, 31 Jul 2014 13:10:33 +0000 (15:10 +0200)]
ARM: OMAP3: Fix choice of omap3_restore_es function in OMAP34XX rev3.1.2 case.
commit 9b5f7428f8b16bd8980213f2b70baf1dd0b9e36c upstream.
According to the comment “restore_es3: applies to 34xx >= ES3.0" in
"arch/arm/mach-omap2/sleep34xx.S”, omap3_restore_es3 should be used
if the revision of an OMAP34xx is ES3.1.2.
Signed-off-by: Jeremy Vial <jvial@adeneo-embedded.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Baruch Siach [Wed, 9 Jul 2014 12:33:13 +0000 (13:33 +0100)]
ARM: 8097/1: unistd.h: relocate comments back to place
commit bc994c77ce82576209dcf08f71de9ae51b0b100f upstream.
Commit cb8db5d45 (UAPI: (Scripted) Disintegrate arch/arm/include/asm) moved
these syscall comments out of their context into the UAPI headers. Fix this.
Fixes: cb8db5d4578a ("UAPI: (Scripted) Disintegrate arch/arm/include/asm")
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Suman Anna [Fri, 11 Jul 2014 21:44:37 +0000 (16:44 -0500)]
ARM: dts: AM4372: Correct mailbox node data
commit 44e6ab1b619853f05bf7250e55a6d82864e340d7 upstream.
The mailbox DT node for AM4372 is enabled and is corrected to
remove some properties that have crept in by mistake.
Fixes: 9e3269b (ARM: dts: AM4372: Add L2, EDMA, mailbox, MMC and SHAM nodes)
Signed-off-by: Suman Anna <s-anna@ti.com>
Signed-off-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Bristot de Oliveira [Wed, 23 Jul 2014 02:27:41 +0000 (23:27 -0300)]
sched: Fix sched_setparam() policy == -1 logic
commit d8d28c8f00e84a72e8bee39a85835635417bee49 upstream.
The scheduler uses policy == -1 to preserve the current policy state and so
implement sched_setparam(). But, as (int) -1 is equal to 0xffffffff,
it matches the if (policy & SCHED_RESET_ON_FORK) in
_sched_setscheduler(). This match changes the policy value to an
invalid value, breaking the sched_setparam() syscall.
This patch checks policy == -1 before checking the SCHED_RESET_ON_FORK flag.
The following program shows the bug:
#include <stdio.h>
#include <sched.h>

int main(void)
{
        struct sched_param param = {
                .sched_priority = 5,
        };

        sched_setscheduler(0, SCHED_FIFO, &param);
        param.sched_priority = 1;
        sched_setparam(0, &param);
        param.sched_priority = 0;
        sched_getparam(0, &param);
        if (param.sched_priority != 1)
                printf("failed priority setting (found %d instead of 1)\n",
                       param.sched_priority);
        else
                printf("priority setting fine\n");
}
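A sketch of the ordering the fix establishes in the setscheduler path (variable names simplified):
if (policy < 0) {
        /* sched_setparam(): keep the current policy and do not treat
         * the -1 marker as if it contained SCHED_RESET_ON_FORK */
        reset_on_fork = p->sched_reset_on_fork;
        policy = oldpolicy = -1;
} else {
        reset_on_fork = !!(policy & SCHED_RESET_ON_FORK);
        policy &= ~SCHED_RESET_ON_FORK;
}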
Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Fixes: 7479f3c9cf67 "sched: Move SCHED_RESET_ON_FORK into attr::sched_flags"
Link: http://lkml.kernel.org/r/9ebe0566a08dbbb3999759d3f20d6004bb2dbcfa.1406079891.git.bristot@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alexander Usyskin [Thu, 17 Jul 2014 07:53:35 +0000 (10:53 +0300)]
mei: start disconnect request timer consistently
commit 22b987a325701223f9a37db700c6eb20b9924c6f upstream.
The link must be reset in case the fw doesn't
respond to a client disconnect request.
We started the timer only in the irq path
(mei_cl_irq_close) and not in mei_cl_disconnect.
Signed-off-by: Alexander Usyskin <alexander.usyskin@intel.com>
Signed-off-by: Tomas Winkler <tomas.winkler@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Fri, 15 Aug 2014 15:35:00 +0000 (17:35 +0200)]
ALSA: hda/realtek - Avoid setting wrong COEF on ALC269 & co
commit f3ee07d8b6e061bf34a7167c3f564e8da4360a99 upstream.
ALC269 & co have many vendor-specific setups with COEF verbs.
However, some verbs seem specific to some codec versions and they
result in the codec stalling. Typically, such a case can be avoided
by checking the return value from reading a COEF. If the return value
is -1, it implies that the COEF is invalid, thus it shouldn't be
written.
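The guard looks roughly like this (helper names as used in patch_realtek.c; the exact index and mask are placeholders):
int val = alc_read_coef_idx(codec, coef_idx);

if (val != -1)  /* -1 means this COEF does not exist on this variant */
        alc_write_coef_idx(codec, coef_idx, val | coef_mask);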
This patch adds the invalid COEF checks in appropriate places
accessing ALC269 and its variants. The patch actually fixes the
resume problem on Acer AO725 laptop.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=52181
Tested-by: Francesco Muzio <muziofg@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hui Wang [Tue, 19 Aug 2014 04:07:03 +0000 (12:07 +0800)]
ALSA: hda - restore the gpio led after resume
commit f475371aa65de84fa483a998ab7594531026b9d9 upstream.
On some HP laptops, the mute led is controlled by codec gpio.
When some machine resume from s3/s4, the codec gpio data will be
cleared to 0 by BIOS:
Before suspend:
IO[3]: enable=1, dir=1, wake=0, sticky=0, data=1, unsol=0
After resume:
IO[3]: enable=1, dir=1, wake=0, sticky=0, data=0, unsol=0
Skipping the AFG node's entry into D3 can't fix this problem.
A workaround is to restore the gpio data when the system resumes
from s3/s4. It is safe even on machines without this
problem.
BugLink: https://bugs.launchpad.net/bugs/1358116
Tested-by: Franz Hsieh <franz.hsieh@canonical.com>
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Henningsson [Tue, 22 Jul 2014 09:42:17 +0000 (11:42 +0200)]
ALSA: hda - Add mute LED pin quirk for HP 15 touchsmart
commit 423044744aa4c250058e976474856a7a41972182 upstream.
This makes the mute LED work on a HP 15 touchsmart machine.
BugLink: https://bugs.launchpad.net/bugs/1334950
Signed-off-by: David Henningsson <david.henningsson@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Clemens Ladisch [Sat, 9 Aug 2014 15:19:41 +0000 (17:19 +0200)]
ALSA: usb-audio: fix BOSS ME-25 MIDI regression
commit
53da5ebfef66ea6e478ad9c6add3781472b79475 upstream.
The BOSS ME-25 turns out not to have any useful descriptors in its MIDI
interface, so it needs a quirk entry after all.
Reported-and-tested-by: Kees van Veen <kees.vanveen@gmail.com>
Fixes:
8e5ced83dd1c ("ALSA: usb-audio: remove superfluous Roland quirks")
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Takashi Iwai [Sun, 10 Aug 2014 11:30:08 +0000 (13:30 +0200)]
ALSA: hda/ca0132 - Don't try loading firmware at resume when already failed
commit
e24aa0a4c5ac92a171d9dd74a8d3dbf652990d36 upstream.
The CA0132 driver tries to reload the firmware at resume. Usually this
works, since the firmware loader core caches the firmware contents by
itself. However, if the driver failed to load the firmware (e.g. missing
files), reloading it at resume goes through the actual file-loading code
path and triggers a kernel WARNING like:
WARNING: CPU: 10 PID:11371 at drivers/base/firmware_class.c:1105 _request_firmware+0x9ab/0x9d0()
To avoid this situation, this patch makes CA0132 skip the firmware
loading at resume when it already failed at probe time.
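A small sketch of the idea; the dsp_fw_failed flag and the reload helper are
placeholders rather than the patch's actual names:
/* Sketch only: skip the firmware reload at resume if probe already failed. */
static void ca0132_resume_dsp(struct hda_codec *codec)
{
	struct ca0132_spec *spec = codec->spec;

	if (spec->dsp_fw_failed)	/* set when request_firmware() failed at probe */
		return;			/* avoid the real file-load path and its WARN */
	reload_dsp_firmware(codec);	/* placeholder for the reload helper */
}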
Reported-and-tested-by: Janek Kozicki <cosurgi@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Clemens Ladisch [Mon, 4 Aug 2014 13:17:55 +0000 (15:17 +0200)]
ALSA: virtuoso: add Xonar Essence STX II support
commit
f42bb22243d2ae264d721b055f836059fe35321f upstream.
Just add the PCI ID for the STX II. It appears to work the same as the
STX, except for the addition of the not-yet-supported daughterboard.
Tested-by: Mario <fugazzi99@gmail.com>
Tested-by: corubba <corubba@gmx.de>
Signed-off-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Paul S McSpadden [Sun, 3 Aug 2014 22:47:36 +0000 (17:47 -0500)]
ALSA: usb-audio: Adjust Gamecom 780 volume level
commit
542baf94ec3c5526955b4c9fd899c7f30fae4ebe upstream.
The original patch fixed the problem, but the sound was far too low
for most users. This patch references a compare matrix to allow the
volume levels to act normally. I tested this patch myself,
and volume levels returned to normal. Please see this discussion for
more details: https://bugzilla.kernel.org/show_bug.cgi?id=65251
Signed-off-by: Paul S McSpadden <fisch602@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hui Wang [Wed, 30 Jul 2014 03:11:48 +0000 (11:11 +0800)]
ALSA: hda - fix an external mic jack problem on a HP machine
commit
7440850c20b69658f322119d20a94dc914127cc7 upstream.
On the machine, two pin complexes (0xb and 0xe) are both routed to
the same external right-side mic jack, which makes the jack unusable.
To fix this problem, set 0xe to "not connected".
BugLink: https://bugs.launchpad.net/bugs/1350148
Tested-by: Franz Hsieh <franz.hsieh@canonical.com>
Signed-off-by: Hui Wang <hui.wang@canonical.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pratyush Anand [Fri, 18 Jul 2014 07:07:10 +0000 (12:37 +0530)]
USB: Fix persist resume of some SS USB devices
commit
a40178b2fa6ad87670fb1e5fa4024db00c149629 upstream.
Problem summary: the problem has been observed generally with PM states
where VBUS goes off during suspend. Some SS USB devices take longer for
link training than many others. Such devices fail to reconnect with the
same old address that was associated with them before suspend.
When system resumes, at some point of time (dpm_run_callback->
usb_dev_resume->usb_resume->usb_resume_both->usb_resume_device->
usb_port_resume) SW reads hub status. If device is present,
then it finishes port resume and re-enumerates device with same
address. If device is not present then, SW thinks that device was
removed during suspend and therefore does logical disconnection
and removes all the resource allocated for this device.
Now, if I put a sufficient delay just before the root hub status read in
usb_resume_device, then SW always sees that the device is present. In the
normal course (without any delay) SW sees that no device is present and
then removes all resources associated with the device at this port. In
the latter case, after some time the device announces its presence, and
the host enumerates it, but with a new address.
The problem was reproduced when I connected a Verbatim USB 3.0 hard disk
to my STiH407 XHCI host running a 3.10 kernel.
I see that a similar problem has been reported here:
https://bugzilla.kernel.org/show_bug.cgi?id=53211
Reading the above, it seems the bug was not in 3.6.6 and was present in
3.8; it was again not present for some in 3.12.6, while it was present
for a few others. I tested with 3.13-FC19 running on an i686 desktop and
the problem was still there. However, I failed to reproduce it with
3.16-RC4 running on the same i686 machine. I would say that is just a
random observation. For a few devices the problem is always there, as I
am unable to find a proper fix for the issue.
So now the question is what the delay should be so that the host is
always able to recognize the suspended device after resume.
The xHCI spec, section 4.19.4, says that when link training is successful
the port sets the CSC bit to 1. So if SW reads the port status before
successful link training, it will not find the device present. USB
analyzer logs with such buggy devices show that in some cases the device
switches on its RX termination only after a long delay from the host
enabling VBUS. In a few other cases the device fails to negotiate link
training in the first attempt. It has been reported so far that a few
devices take as long as 2000 ms to train the link after the host enables
VBUS and RX termination. This patch implements a 2000 ms timeout for the
CSC bit to be set, i.e. for link training. If the link trains before the
timeout, the loop exits earlier.
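A sketch of such a bounded wait; hub_port_csc_set() and the poll interval are
placeholders for however the port status is actually read:
/* Sketch only: poll for the CSC bit (link trained) for at most 2000 ms. */
static int wait_for_ss_link_training(struct usb_hub *hub, int port)
{
	int waited_ms = 0;

	while (waited_ms < 2000) {
		if (hub_port_csc_set(hub, port))	/* placeholder status check */
			return 0;			/* link trained, exit early */
		msleep(20);				/* assumed poll interval */
		waited_ms += 20;
	}
	return -ETIMEDOUT;
}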
This patch implements the above delay, but only for SS devices and only
when persist is enabled.
So for good devices the overhead is almost none, while for bad devices
the penalty is the time link training takes.
But if a device was connected before suspend and was removed while the
system was asleep, then the penalty is the full timeout, i.e. 2000 ms.
Results:
With a Verbatim USB SS hard disk connected to the STiH407 USB host running
3.10, the kernel resumes in 461 msecs without this patch, but the hard
disk is assigned a new device address. The same system resumes in 790
msecs with this patch, and with the old device address.
Signed-off-by: Pratyush Anand <pratyush.anand@st.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bryan O'Donoghue [Wed, 2 Jul 2014 08:58:18 +0000 (01:58 -0700)]
USB: ehci-pci: USB host controller support for Intel Quark X1000
commit
6e693739e9b603b3ca9ce0d4f4178f0633458465 upstream.
The EHCI packet buffer in/out threshold is programmable on the Intel Quark
X1000 USB host controller, and the default value is 0x20 dwords. The
in/out threshold can be programmed up to 0x80 dwords (512 bytes) to
maximize performance, but only when isochronous/interrupt transactions are
not initiated by the USB host controller. This patch reconfigures the
packet buffer in/out threshold to be as large as possible; 0x7F dwords
(508 bytes) is used because the USB host controller does initiate
isochronous/interrupt transactions.
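A sketch of the register write being described; the register pointer and the
OUT/IN field layout are assumptions, not the actual Quark definition:
/* Sketch only: program 0x7F-dword IN and OUT packet buffer thresholds. */
static void quark_set_packet_buffer_threshold(void __iomem *threshold_reg)
{
	u32 val = (0x7f << 16) | 0x7f;	/* assumed layout: OUT high half, IN low half */

	writel(val, threshold_reg);
}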
Signed-off-by: Bryan O'Donoghue <bryan.odonoghue@intel.com>
Signed-off-by: Alvin (Weike) Chen <alvin.chen@intel.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Reviewed-by: Jingoo Han <jg1.han@samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Patrick Riphagen [Thu, 24 Jul 2014 07:09:50 +0000 (09:09 +0200)]
USB: serial: ftdi_sio: Add support for new Xsens devices
commit
4bdcde358b4bda74e356841d351945ca3f2245dd upstream.
This adds support for new Xsens devices, using Xsens' own Vendor ID.
Signed-off-by: Patrick Riphagen <patrick.riphagen@xsens.com>
Signed-off-by: Frans Klaver <frans.klaver@xsens.com>
Cc: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Patrick Riphagen [Thu, 24 Jul 2014 07:12:52 +0000 (09:12 +0200)]
USB: serial: ftdi_sio: Annotate the current Xsens PID assignments
commit
9273b8a270878906540349422ab24558b9d65716 upstream.
The converters are used in specific products. It can be useful to know
exactly which ones they are.
Signed-off-by: Patrick Riphagen <patrick.riphagen@xsens.com>
Signed-off-by: Frans Klaver <frans.klaver@xsens.com>
Cc: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Oliver Neukum [Fri, 1 Aug 2014 07:55:20 +0000 (09:55 +0200)]
USB: devio: fix issue with log flooding
commit
d310d05f1225d1f6f2bf505255fdf593bfbb3051 upstream.
usbfs allows user space to pass down an URB which sets URB_SHORT_NOT_OK
for output URBs. That causes usbcore to log messages without limit
for a nonsensical disallowed combination. The fix is to silently drop
the attribute in usbfs.
The problem is reported to exist since 3.14
https://www.virtualbox.org/ticket/13085
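A minimal sketch of the sanitization, with illustrative names:
/* Sketch only: URB_SHORT_NOT_OK is meaningless for OUT transfers, so drop
 * it silently for output URBs submitted through usbfs. */
static unsigned int sanitize_usbfs_transfer_flags(unsigned int flags, bool is_out)
{
	if (is_out)
		flags &= ~URB_SHORT_NOT_OK;
	return flags;
}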
Signed-off-by: Oliver Neukum <oneukum@suse.de>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alan Stern [Thu, 17 Jul 2014 20:34:29 +0000 (16:34 -0400)]
USB: OHCI: don't lose track of EDs when a controller dies
commit
977dcfdc60311e7aa571cabf6f39c36dde13339e upstream.
This patch fixes a bug in ohci-hcd. When an URB is unlinked, the
corresponding Endpoint Descriptor is added to the ed_rm_list and taken
off the hardware schedule. Once the ED is no longer visible to the
hardware, finish_unlinks() handles the URBs that were unlinked or have
completed. If any URBs remain attached to the ED, the ED is added
back to the hardware schedule -- but only if the controller is
running.
This fails when a controller dies. A non-empty ED does not get added
back to the hardware schedule and does not remain on the ed_rm_list;
ohci-hcd loses track of it. The remaining URBs cannot be unlinked,
which causes the USB stack to hang.
The patch changes finish_unlinks() so that non-empty EDs remain on
the ed_rm_list if the controller isn't running. This requires moving
some of the existing code around, to avoid modifying the ED's hardware
fields more than once.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alan Stern [Thu, 17 Jul 2014 20:32:26 +0000 (16:32 -0400)]
USB: OHCI: fix bugs in debug routines
commit
256dbcd80f1ccf8abf421c1d72ba79a4e29941dd upstream.
The debug routine fill_async_buffer() in ohci-hcd is buggy: It never
produces any output because it forgets to initialize the output buffer
size. Also, the debug routine ohci_dump() has an unused argument.
This patch adds the correct initialization and removes the unused
argument.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Sun, 17 Aug 2014 09:49:57 +0000 (11:49 +0200)]
isofs: Fix unbounded recursion when processing relocated directories
commit
410dd3cf4c9b36f27ed4542ee18b1af5e68645a4 upstream.
We did not check the relocated directory in any way when processing the
Rock Ridge 'CL' tag. Thus a corrupted isofs image can have a CL entry
pointing to another CL entry, leading to possibly unbounded recursion in
kernel code and thus stack overflow or deadlocks (if there is a loop
created from CL entries).
Fix the problem by not allowing a CL entry to point to a directory entry
that itself has a CL entry (such use makes no sense anyway) and by
checking that a CL entry does not point to itself.
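A sketch of the two sanity checks, using placeholder parameter names rather
than the isofs code itself:
/* Sketch only: reject suspicious relocation targets before following them. */
static bool cl_target_ok(unsigned int cl_block, unsigned int self_block,
			 bool target_has_cl)
{
	if (cl_block == self_block)	/* CL entry pointing at itself */
		return false;
	if (target_has_cl)		/* target directory entry has its own CL entry */
		return false;
	return true;
}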
Reported-by: Chris Evans <cevans@google.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiri Kosina [Thu, 21 Aug 2014 14:57:48 +0000 (09:57 -0500)]
HID: fix a couple of off-by-ones
commit
4ab25786c87eb20857bbb715c3ae34ec8fd6a214 upstream.
There are a few very theoretical off-by-one bugs in report descriptor size
checking when performing a pre-parsing fixup. Fix those.
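An illustrative sketch of the pattern involved (the byte position and values
are made up, not the actual quirked descriptors):
/* Sketch only: touching rdesc[30] requires at least 31 bytes, not 30. */
static void sample_report_fixup(unsigned char *rdesc, unsigned int *rsize)
{
	if (*rsize >= 31 && rdesc[30] == 0xfe)	/* a ">= 30" check here would be off by one */
		rdesc[30] = 0xff;
}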
Reported-by: Ben Hawkes <hawkes@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiri Kosina [Thu, 21 Aug 2014 14:57:17 +0000 (09:57 -0500)]
HID: logitech: perform bounds checking on device_id early enough
commit
ad3e14d7c5268c2e24477c6ef54bbdf88add5d36 upstream.
device_index is a char type and paired_dj_devices has 7 elements,
therefore proper bounds checking has to be applied to
device_index before it is used.
We are currently performing the bounds checking in
logi_dj_recv_add_djhid_device(), which is too late, as a malicious device
could send REPORT_TYPE_NOTIF_DEVICE_UNPAIRED early enough and trigger the
problem in one of the report forwarding functions called from
logi_dj_raw_event().
Fix this by performing the check at the earliest possible occasion, in
logi_dj_raw_event().
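A sketch of the early check; the constant names are placeholders for whatever
range the driver really allows:
/* Sketch only: validate device_index before any report forwarding. */
static bool dj_device_index_valid(unsigned char device_index)
{
	/* paired_dj_devices[] has only a handful of slots (7 per the changelog) */
	return device_index >= DJ_DEVICE_INDEX_MIN &&
	       device_index <= DJ_DEVICE_INDEX_MAX;
}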
Reported-by: Ben Hawkes <hawkes@google.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dave Chiluk [Tue, 24 Jun 2014 15:11:26 +0000 (10:11 -0500)]
stable_kernel_rules: Add pointer to netdev-FAQ for network patches
commit
b76fc285337b6b256e9ba20a40cfd043f70c27af upstream.
Stable_kernel_rules should point submitters of network stable patches to
netdev-FAQ.txt, as requests for stable network patches should go to netdev
first.
Signed-off-by: Dave Chiluk <chiluk@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Greg Kroah-Hartman [Thu, 14 Aug 2014 01:38:34 +0000 (09:38 +0800)]
Linux 3.14.17
Dave Chinner [Mon, 19 May 2014 22:18:09 +0000 (08:18 +1000)]
xfs: log vector rounding leaks log space
commit
110dc24ad2ae4e9b94b08632fe1eb2fcdff83045 upstream.
The addition of direct formatting of log items into the CIL
linear buffer added alignment restrictions that the start of each
vector needed to be 64 bit aligned. Hence padding was added in
xlog_finish_iovec() to round up the vector length to ensure the next
vector started with the correct alignment.
This adds a small number of bytes to the size of
the linear buffer that is otherwise unused. The issue is that we
then use the linear buffer size to determine the log space used by
the log item, and this includes the unused space. Hence when we
account for space used by the log item, it's more than is actually
written into the iclogs, and hence we slowly leak this space.
This results in log hangs when reserving space, with threads getting
stuck with these stack traces:
Call Trace:
[<ffffffff81d15989>] schedule+0x29/0x70
[<ffffffff8150d3a2>] xlog_grant_head_wait+0xa2/0x1a0
[<ffffffff8150d55d>] xlog_grant_head_check+0xbd/0x140
[<ffffffff8150ee33>] xfs_log_reserve+0x103/0x220
[<ffffffff814b7f05>] xfs_trans_reserve+0x2f5/0x310
.....
The 4 bytes is significant. Brian Foster did all the hard work in
tracking down a reproducible leak to inode chunk allocation (it went
away with the ikeep mount option). His rough numbers were that
creating 50,000 inodes leaked 11 log blocks. This turns out to be
roughly 800 inode chunks or 1600 inode cluster buffers. That
works out at roughly 4 bytes per cluster buffer logged, and at that
I started looking for a 4 byte leak in the buffer logging code.
What I found was that a struct xfs_buf_log_format structure for an
inode cluster buffer is 28 bytes in length. This gets rounded up to
32 bytes, but the vector length remains 28 bytes. Hence the CIL
ticket reservation is decremented by 32 bytes (via lv->lv_buf_len)
for that vector rather than 28 bytes which are written into the log.
The fix for this problem is to separately track the bytes used by
the log vectors in the item and use that instead of the buffer
length when accounting for the log space that will be used by the
formatted log item.
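A sketch of the split accounting described above; the field names follow the
description and may not match the code exactly:
/* Sketch only: account padded space against the buffer, unpadded bytes
 * against the log reservation. */
static void account_iovec(struct xfs_log_vec *lv, int len)
{
	lv->lv_buf_len += round_up(len, sizeof(uint64_t));	/* e.g. 28 -> 32 */
	lv->lv_bytes   += len;					/* 28: what hits the log */
	/* the ticket reservation should be charged lv_bytes, not lv_buf_len */
}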
Again, thanks to Brian Foster for doing all the hard work and long
hours to isolate this leak and make finding the bug relatively
simple.
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Cc: Bill <billstuff2001@sbcglobal.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andrey Utkin [Mon, 4 Aug 2014 20:47:41 +0000 (23:47 +0300)]
arch/sparc/math-emu/math_32.c: drop stray break operator
[ Upstream commit
093758e3daede29cb4ce6aedb111becf9d4bfc57 ]
This commit is guesswork, but it seems to make sense to drop this
break, as otherwise the following line is never executed and becomes
dead code. And that following line actually saves the result of the
local calculation through the pointer given as a function argument. So
the proposed change makes sense if this code as a whole makes sense (but
I am unable to analyze it as a whole).
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=81641
Reported-by: David Binderman <dcb314@hotmail.com>
Signed-off-by: Andrey Utkin <andrey.krieger.utkin@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sowmini Varadhan [Fri, 1 Aug 2014 13:50:40 +0000 (09:50 -0400)]
sparc64: ldc_connect() should not return EINVAL when handshake is in progress.
[ Upstream commit
4ec1b01029b4facb651b8ef70bc20a4be4cebc63 ]
The LDC handshake could have been asynchronously triggered
after ldc_bind() enables the ldc_rx() receive interrupt-handler
(and thus intercepts incoming control packets)
and before vio_port_up() calls ldc_connect(). If that is the case,
ldc_connect() should return 0 and let the state-machine
progress.
Signed-off-by: Sowmini Varadhan <sowmini.varadhan@oracle.com>
Acked-by: Karl Volz <karl.volz@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christopher Alexander Tobias Schulze [Sun, 3 Aug 2014 14:01:53 +0000 (16:01 +0200)]
sunsab: Fix detection of BREAK on sunsab serial console
[ Upstream commit
fe418231b195c205701c0cc550a03f6c9758fd9e ]
Fix detection of BREAK on sunsab serial console: BREAK detection was only
performed when there were also serial characters received simultaneously.
To handle all BREAKs correctly, the check for BREAK and the corresponding
call to uart_handle_break() must also be done if count == 0, therefore
duplicate this code fragment and pull it out of the loop over the received
characters.
Patch applies to 3.16-rc6.
Signed-off-by: Christopher Alexander Tobias Schulze <cat.schulze@alice-dsl.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christopher Alexander Tobias Schulze [Sun, 3 Aug 2014 13:44:52 +0000 (15:44 +0200)]
bbc-i2c: Fix BBC I2C envctrl on SunBlade 2000
[ Upstream commit
5cdceab3d5e02eb69ea0f5d8fa9181800baf6f77 ]
Fix regression in bbc i2c temperature and fan control on some Sun systems
that causes the driver to refuse to load due to the bbc_i2c_bussel resource not
being present on the (second) i2c bus where the temperature sensors and fan
control are located. (The check for the number of resources was removed when
the driver was ported to a pure OF driver in mid 2008.)
Signed-off-by: Christopher Alexander Tobias Schulze <cat.schulze@alice-dsl.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 5 Aug 2014 03:07:37 +0000 (20:07 -0700)]
sparc64: Guard against flushing openfirmware mappings.
[ Upstream commit
4ca9a23765da3260058db3431faf5b4efd8cf926 ]
Based almost entirely upon a patch by Christopher Alexander Tobias
Schulze.
In commit
db64fe02258f1507e13fe5212a989922323685ce ("mm: rewrite vmap
layer") lazy VMAP tlb flushing was added to the vmalloc layer. This
causes problems on sparc64.
Sparc64 has two VMAP mapped regions and they are not contiguous with
each other. First we have the malloc mapping area, then another
unrelated region, then the vmalloc region.
This "another unrelated region" is where the firmware is mapped.
If the lazy TLB flushing logic in the vmalloc code triggers after
we've had both a module unload and a vfree or similar, it will pass an
address range that goes from somewhere inside the malloc region to
somewhere inside the vmalloc region, and thus covering the
openfirmware area entirely.
The sparc64 kernel learns about openfirmware's dynamic mappings in
this region early in the boot, and then services TLB misses in this
area. But openfirmware has some locked TLB entries which are not
mentioned in those dynamic mappings and we should thus not disturb
them.
These huge lazy TLB flush ranges cause those openfirmware locked TLB
entries to be removed, resulting in all kinds of problems including
hard hangs and crashes during reboot/reset.
Besides causing problems like this, such huge TLB flush ranges are
also incredibly inefficient. A plea has been made with the author of
the VMAP lazy TLB flushing code, but for now we'll put a safety guard
into our flush_tlb_kernel_range() implementation.
Since the implementation has become non-trivial, stop defining it as a
macro and instead make it a function in a C source file.
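A sketch of such a guard; the firmware window bounds are placeholders, not
the actual values used by the patch:
/* Sketch only: never let one flush range sweep across the firmware window. */
void flush_tlb_kernel_range(unsigned long start, unsigned long end)
{
	if (start < OBP_RANGE_END && end > OBP_RANGE_START) {
		/* flush the two halves around the firmware mappings */
		__flush_tlb_kernel_range(start, OBP_RANGE_START);
		__flush_tlb_kernel_range(OBP_RANGE_END, end);
	} else {
		__flush_tlb_kernel_range(start, end);
	}
}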
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Mon, 4 Aug 2014 23:34:01 +0000 (16:34 -0700)]
sparc64: Do not insert non-valid PTEs into the TSB hash table.
[ Upstream commit
18f38132528c3e603c66ea464727b29e9bbcb91b ]
The assumption was that update_mmu_cache() (and the equivalent for PMDs) would
only be called when the PTE being installed will be accessible by the user.
This is not true for code paths originating from remove_migration_pte().
There are dire consequences for placing a non-valid PTE into the TSB. The TLB
miss framework assumes that when a TSB entry matches we can just load it into
the TLB and return from the TLB miss trap.
So if a non-valid PTE is in there, we will deadlock taking the TLB miss over
and over, never satisfying the miss.
Just exit early from update_mmu_cache() and friends in this situation.
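A sketch of that early exit; pte_valid_for_user() and tsb_insert_entry() are
placeholders for the real helpers:
/* Sketch only: never preload a non-valid PTE into the TSB. */
void update_mmu_cache(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
{
	if (!pte_valid_for_user(*ptep))		/* e.g. a migration entry */
		return;
	tsb_insert_entry(vma->vm_mm, addr, *ptep);	/* placeholder normal path */
}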
Based upon a report and patch from Christopher Alexander Tobias Schulze.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Sat, 17 May 2014 18:28:05 +0000 (11:28 -0700)]
sparc64: Add membar to Niagara2 memcpy code.
[ Upstream commit
5aa4ecfd0ddb1e6dcd1c886e6c49677550f581aa ]
This is to prevent previous stores from overlapping the block stores
done by the memcpy loop.
Based upon a glibc patch by Jose E. Marchesi.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Wed, 7 May 2014 21:07:32 +0000 (14:07 -0700)]
sparc64: Fix huge TSB mapping on pre-UltraSPARC-III cpus.
[ Upstream commit
b18eb2d779240631a098626cb6841ee2dd34fda0 ]
Access to the TSB hash tables during TLB misses requires that there be
an atomic 128-bit quad load available so that we fetch a matching TAG
and DATA field at the same time.
On cpus prior to UltraSPARC-III only virtual address based quad loads
are available. UltraSPARC-III and later provide physical address
based variants which are easier to use.
When we only have virtual address based quad loads available this
means that we have to lock the TSB into the TLB at a fixed virtual
address on each cpu when it runs that process. We can't just access
the PAGE_OFFSET based aliased mapping of these TSBs because we cannot
take a recursive TLB miss inside of the TLB miss handler without
risking running out of hardware trap levels (some trap combinations
can be deep, such as those generated by register window spill and fill
traps).
Without huge pages it works perfectly fine, but when the huge TSB
was added, another chunk of fixed virtual address space was not
allocated for this second TSB mapping.
So we were mapping both the 8K and 4MB TSBs to the same exact virtual
address, causing multiple TLB matches which gives undefined behavior.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Wed, 7 May 2014 04:27:37 +0000 (21:27 -0700)]
sparc64: Don't bark so loudly about 32-bit tasks generating 64-bit fault addresses.
[ Upstream commit
e5c460f46ae7ee94831cb55cb980f942aa9e5a85 ]
This was found using Dave Jones' trinity tool.
When a user process which is 32-bit performs a load or a store, the
cpu chops off the top 32-bits of the effective address before
translating it.
This is because we run 32-bit tasks with the PSTATE_AM (address
masking) bit set.
We can't run the kernel with that bit set, so when the kernel accesses
userspace no address masking occurs.
Since a 32-bit process will have no mappings in that region we will
properly fault, so we don't try to handle this using access_ok(),
which can safely just be a NOP on sparc64.
Real faults from 32-bit processes should never generate such addresses
so a bug check was added long ago, and it barks in the logs if this
happens.
But it also barks when a kernel user access causes this condition, and
that _can_ happen. For example, if a pointer passed into a system call
is "0xfffffffc" and the kernel access 4 bytes offset from that pointer.
Just handle such faults normally via the exception entries.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 29 Apr 2014 20:28:23 +0000 (13:28 -0700)]
sparc64: Give more detailed information in {pgd,pmd}_ERROR() and kill pte_ERROR().
[ Upstream commit
fe866433f843b080246ce729b5e6b27b5f5d9a58 ]
pte_ERROR() is not used anywhere, delete it.
For pgd_ERROR() and pmd_ERROR(), output something similar to x86, giving the address
of the pgd/pmd as well as its value.
Also provide the caller, since these macros are invoked from pgd_clear_bad() and
pmd_clear_bad(), which provide little context as to what high-level operation was
occurring when the BAD state was detected.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 29 Apr 2014 20:03:27 +0000 (13:03 -0700)]
sparc64: Add basic validations to {pud,pmd}_bad().
[ Upstream commit
26cf432551d749e7d581db33529507a711c6eaab ]
Instead of returning false we should at least check the most basic
things, otherwise page table corruptions will be very difficult to
debug.
PMD and PTE tables are of size PAGE_SIZE, so none of the sub-PAGE_SIZE
bits should be set.
We also complement this with a check that the physical address the
pud/pmd points to is valid memory.
PowerPC was used as a guide while implementing this.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Sun, 4 May 2014 05:52:50 +0000 (22:52 -0700)]
sparc64: Use 'ILOG2_4MB' instead of constant '22'.
[ Upstream commit
0eef331a3d0ee970dcbebd1bd5fcb57ca33ece01 ]
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 29 Apr 2014 19:58:03 +0000 (12:58 -0700)]
sparc64: Fix range check in kern_addr_valid().
[ Upstream commit
ee73887e92a69ae0a5cda21c68ea75a27804c944 ]
In commit
b2d438348024b75a1ee8b66b85d77f569a5dfed8 ("sparc64: Make
PAGE_OFFSET variable."), the MAX_PHYS_ADDRESS_BITS value was increased
(to 47).
This constant reference to '41UL' was missed.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 29 Apr 2014 06:52:11 +0000 (23:52 -0700)]
sparc64: Fix top-level fault handling bugs.
[ Upstream commit
70ffc6ebaead783ac8dafb1e87df0039bb043596 ]
Make get_user_insn() able to cope with huge PMDs.
Next, make do_fault_siginfo() more robust when get_user_insn() can't
actually fetch the instruction. In particular, use the MMU announced
fault address when that happens, instead of calling
compute_effective_address() and computing garbage.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 29 Apr 2014 06:50:08 +0000 (23:50 -0700)]
sparc64: Handle 32-bit tasks properly in compute_effective_address().
[ Upstream commit
d037d16372bbe4d580342bebbb8826821ad9edf0 ]
If we have a 32-bit task we must chop off the top 32-bits of the
64-bit value just as the cpu would.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Tue, 29 Apr 2014 02:11:27 +0000 (19:11 -0700)]
sparc64: Don't use _PAGE_PRESENT in pte_modify() mask.
[ Upstream commit
eaf85da82669b057f20c4e438dc2566b51a83af6 ]
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Mon, 28 Apr 2014 04:01:56 +0000 (21:01 -0700)]
sparc64: Fix hex values in comment above pte_modify().
[ Upstream commit
c2e4e676adb40ea764af79d3e08be954e14a0f4c ]
When _PAGE_SPECIAL and _PAGE_PMD_HUGE were added to the mask, the
comment was not updated.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Fri, 25 Apr 2014 17:21:12 +0000 (10:21 -0700)]
sparc64: Fix bugs in get_user_pages_fast() wrt. THP.
[ Upstream commit
04df419de34104d8818b8c5cffaa062fa36d20ea ]
The large PMD path needs to check _PAGE_VALID not _PAGE_PRESENT, to
decide if it needs to bail and return 0.
pmd_large() should therefore just check _PAGE_PMD_HUGE.
Calls to gup_huge_pmd() are guarded with a check of pmd_large(), so we
just need to add a valid bit check.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Thu, 24 Apr 2014 20:58:02 +0000 (13:58 -0700)]
sparc64: Fix huge PMD invalidation.
[ Upstream commit
51e5ef1bb7ab0e5fa7de4e802da5ab22fe35f0bf ]
On sparc64 "present" and "valid" are seperate PTE bits, this allows us to
naturally distinguish between the user explicitly asking for PROT_NONE
with mprotect() and other situations.
However we weren't handling this properly in the huge PMD paths.
First of all, the page table walker in the TSB miss path only checks
for _PAGE_PMD_HUGE. So the generic pmdp_invalidate() would clear
_PAGE_PRESENT but the TLB miss paths would still load it into the TLB
as a valid huge PMD.
Fix this by clearing the valid bit in pmdp_invalidate(), and also
checking the valid bit in USER_PGTABLE_CHECK_PMD_HUGE using "brgez"
since _PAGE_VALID is bit 63 in both the sun4u and sun4v pte layouts.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Mon, 21 Apr 2014 01:55:01 +0000 (21:55 -0400)]
sparc64: Fix executable bit testing in set_pmd_at() paths.
[ Upstream commit
5b1e94fa439a3227beefad58c28c17f68287a8e9 ]
This code was mistakenly using the exec bit from the PMD in all
cases, even when the PMD isn't a huge PMD.
If it's not a huge PMD, test the exec bit in the individual ptes down
in tlb_batch_pmd_scan().
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kirill Tkhai [Wed, 16 Apr 2014 20:45:24 +0000 (00:45 +0400)]
sparc64: Make itc_sync_lock raw
[ Upstream commit
49b6c01f4c1de3b5e5427ac5aba80f9f6d27837a ]
This is one more place where we must not be preemptible or
interruptible on RT.
Always actually disable interrupts during the synchronization cycle.
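A sketch of the pattern, in the spirit of the change: a raw spinlock taken
with interrupts disabled, so the cycle cannot be preempted even on RT:
static DEFINE_RAW_SPINLOCK(itc_sync_lock);	/* raw: stays a real spinlock on RT */

static void itc_sync_cycle(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&itc_sync_lock, flags);	/* interrupts really off */
	/* ... tick/ITC synchronization work ... */
	raw_spin_unlock_irqrestore(&itc_sync_lock, flags);
}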
Signed-off-by: Kirill Tkhai <tkhai@yandex.ru>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David S. Miller [Thu, 1 May 2014 02:37:48 +0000 (19:37 -0700)]
sparc64: Fix argument sign extension for compat_sys_futex().
[ Upstream commit
aa3449ee9c87d9b7660dd1493248abcc57769e31 ]
Only the second argument, 'op', is signed.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eric Dumazet [Tue, 5 Aug 2014 14:49:52 +0000 (16:49 +0200)]
sctp: fix possible seqlock deadlock in sctp_packet_transmit()
[ Upstream commit
757efd32d5ce31f67193cc0e6a56e4dffcc42fb1 ]
Dave reported following splat, caused by improper use of
IP_INC_STATS_BH() in process context.
BUG: using __this_cpu_add() in preemptible [00000000] code: trinity-c117/14551
caller is __this_cpu_preempt_check+0x13/0x20
CPU: 3 PID: 14551 Comm: trinity-c117 Not tainted 3.16.0+ #33
ffffffff9ec898f0 0000000047ea7e23 ffff88022d32f7f0 ffffffff9e7ee207
0000000000000003 ffff88022d32f818 ffffffff9e397eaa ffff88023ee70b40
ffff88022d32f970 ffff8801c026d580 ffff88022d32f828 ffffffff9e397ee3
Call Trace:
[<ffffffff9e7ee207>] dump_stack+0x4e/0x7a
[<ffffffff9e397eaa>] check_preemption_disabled+0xfa/0x100
[<ffffffff9e397ee3>] __this_cpu_preempt_check+0x13/0x20
[<ffffffffc0839872>] sctp_packet_transmit+0x692/0x710 [sctp]
[<ffffffffc082a7f2>] sctp_outq_flush+0x2a2/0xc30 [sctp]
[<ffffffff9e0d985c>] ? mark_held_locks+0x7c/0xb0
[<ffffffff9e7f8c6d>] ? _raw_spin_unlock_irqrestore+0x5d/0x80
[<ffffffffc082b99a>] sctp_outq_uncork+0x1a/0x20 [sctp]
[<ffffffffc081e112>] sctp_cmd_interpreter.isra.23+0x1142/0x13f0 [sctp]
[<ffffffffc081c86b>] sctp_do_sm+0xdb/0x330 [sctp]
[<ffffffff9e0b8f1b>] ? preempt_count_sub+0xab/0x100
[<ffffffffc083b350>] ? sctp_cname+0x70/0x70 [sctp]
[<ffffffffc08389ca>] sctp_primitive_ASSOCIATE+0x3a/0x50 [sctp]
[<ffffffffc083358f>] sctp_sendmsg+0x88f/0xe30 [sctp]
[<ffffffff9e0d673a>] ? lock_release_holdtime.part.28+0x9a/0x160
[<ffffffff9e0d62ce>] ? put_lock_stats.isra.27+0xe/0x30
[<ffffffff9e73b624>] inet_sendmsg+0x104/0x220
[<ffffffff9e73b525>] ? inet_sendmsg+0x5/0x220
[<ffffffff9e68ac4e>] sock_sendmsg+0x9e/0xe0
[<ffffffff9e1c0c09>] ? might_fault+0xb9/0xc0
[<ffffffff9e1c0bae>] ? might_fault+0x5e/0xc0
[<ffffffff9e68b234>] SYSC_sendto+0x124/0x1c0
[<ffffffff9e0136b0>] ? syscall_trace_enter+0x250/0x330
[<ffffffff9e68c3ce>] SyS_sendto+0xe/0x10
[<ffffffff9e7f9be4>] tracesys+0xdd/0xe2
This is a followup of commits
f1d8cba61c3c4b ("inet: fix possible
seqlock deadlocks") and
7f88c6b23afbd315 ("ipv6: fix possible seqlock
deadlock in ip6_finish_output2")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Reported-by: Dave Jones <davej@redhat.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sven Eckelmann [Mon, 26 May 2014 15:21:39 +0000 (17:21 +0200)]
batman-adv: Fix out-of-order fragmentation support
[ Upstream commit
d9124268d84a836f14a6ead54ff9d8eee4c43be5 ]
batadv_frag_insert_packet was unable to handle out-of-order packets because it
dropped them directly. This is caused by the way the fragmentation list is
checked for the correct place to insert a fragmentation entry.
The fragmentation code keeps the fragments in lists. The fragmentation entries
are kept in descending order of sequence number. The list is traversed and each
entry is compared with the new fragment. If the current entry has a smaller
sequence number than the new fragment then the new one has to be inserted
before the current entry. This ensures that the list is still in descending
order.
An out-of-order packet with a smaller sequence number than all entries in the
list still has to be added to the end of the list. The hlist used here keeps
no information about the last entry inside hlist_head, so the last entry has
to be determined differently. Currently the code assumes that the iterator
variable of hlist_for_each_entry can be used for this purpose after the loop
has finished. This is obviously wrong, because the iterator variable is always
NULL once the list has been completely traversed.
Instead, the information about the last entry has to be stored in a separate
variable.
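A sketch of the insertion logic with the extra last-entry tracking; the names
are illustrative, not the batman-adv code:
/* Sketch only: keep descending seqno order and remember the tail ourselves,
 * because the hlist iterator is NULL once the loop completes. */
static void frag_insert(struct hlist_head *head, struct frag_entry *new)
{
	struct frag_entry *entry;
	struct frag_entry *last = NULL;

	hlist_for_each_entry(entry, head, list) {
		last = entry;				/* track the last visited entry */
		if (entry->seqno < new->seqno) {
			hlist_add_before(&new->list, &entry->list);
			return;
		}
	}
	if (last)					/* smaller than everything: append */
		hlist_add_behind(&new->list, &last->list);
	else						/* list was empty */
		hlist_add_head(&new->list, head);
}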
This problem was introduced in
610bfc6bc99bc83680d190ebc69359a05fc7f605
("batman-adv: Receive fragmented packets and merge").
Signed-off-by: Sven Eckelmann <sven@narfation.org>
Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
Signed-off-by: Antonio Quartulli <antonio@meshcoding.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sasha Levin [Fri, 1 Aug 2014 03:00:35 +0000 (23:00 -0400)]
iovec: make sure the caller actually wants anything in memcpy_fromiovecend
[ Upstream commit
06ebb06d49486676272a3c030bfeef4bd969a8e6 ]
Check for cases when the caller requests 0 bytes instead of running off
and dereferencing potentially invalid iovecs.
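A minimal sketch of the guard; the actual fix places this check inside
memcpy_fromiovecend() itself, the wrapper below just illustrates it:
/* Sketch only: if the caller wants zero bytes, never touch the iovec. */
static int copy_from_iovec_checked(unsigned char *kdata, const struct iovec *iov,
				   int offset, int len)
{
	if (len == 0)
		return 0;
	return memcpy_fromiovecend(kdata, iov, offset, len);	/* existing helper */
}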
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vlad Yasevich [Thu, 31 Jul 2014 14:33:06 +0000 (10:33 -0400)]
net: Correctly set segment mac_len in skb_segment().
[ Upstream commit
fcdfe3a7fa4cb74391d42b6a26dc07c20dab1d82 ]
When performing segmentation, the mac_len value is copied right
out of the original skb. However, this value is not always set correctly
(like when the packet is VLAN-tagged) and we'll end up copying a bad
value.
One way to demonstrate this is to configure a VM which tags
packets internally and turn off VLAN acceleration on the forwarding
bridge port. The packets show up corrupt like this:
16:18:24.985548 52:54:00:ab:be:25 > 52:54:00:26:ce:a3, ethertype 802.1Q
(0x8100), length 1518: vlan 100, p 0, ethertype 0x05e0,
0x0000: 8cdb 1c7c 8cdb 0064 4006 b59d 0a00 6402 ...|...d@.....d.
0x0010: 0a00 6401 9e0d b441 0a5e 64ec 0330 14fa ..d....A.^d..0..
0x0020: 29e3 01c9 f871 0000 0101 080a 000a e833)....q.........3
0x0030: 000f 8c75 6e65 7470 6572 6600 6e65 7470 ...unetperf.netp
0x0040: 6572 6600 6e65 7470 6572 6600 6e65 7470 erf.netperf.netp
0x0050: 6572 6600 6e65 7470 6572 6600 6e65 7470 erf.netperf.netp
0x0060: 6572 6600 6e65 7470 6572 6600 6e65 7470 erf.netperf.netp
...
This also leads to awful throughput as GSO packets are dropped and
cause retransmissions.
The solution is to set the mac_len using the values already available
in the new skb. We've already adjusted all of the header offsets, so we
might as well correctly figure out the mac_len using skb_reset_mac_len().
After this change, packets are segmented correctly and performance
is restored.
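The call in question, roughly as described (nskb standing for the freshly
built segment; the wrapper is only for illustration):
/* Sketch only: derive mac_len from the new segment's own header offsets
 * instead of copying the possibly stale value from the original skb. */
static void fixup_segment_mac_len(struct sk_buff *nskb)
{
	skb_reset_mac_len(nskb);	/* mac_len = network_header - mac_header */
}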
CC: Eric Dumazet <edumazet@google.com>
Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vlad Yasevich [Thu, 31 Jul 2014 14:30:25 +0000 (10:30 -0400)]
macvlan: Initialize vlan_features to turn on offload support.
[ Upstream commit
081e83a78db9b0ae1f5eabc2dedecc865f509b98 ]
Macvlan devices do not initialize vlan_features. As a result,
any vlan devices configured on top of macvlans perform very poorly.
Initialize vlan_features based on the vlan features of the lower-level
device.
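A sketch of the initialization in the spirit of the description; the exact
feature masking in the real driver may differ:
/* Sketch only: let vlan devices stacked on the macvlan inherit the lower
 * device's vlan offload capabilities. */
static void macvlan_init_vlan_features(struct net_device *dev,
				       const struct net_device *lowerdev)
{
	dev->vlan_features = lowerdev->vlan_features;
}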
Signed-off-by: Vlad Yasevich <vyasevic@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Borkmann [Tue, 22 Jul 2014 13:22:45 +0000 (15:22 +0200)]
net: sctp: inherit auth_capable on INIT collisions
[ Upstream commit
1be9a950c646c9092fb3618197f7b6bfb50e82aa ]
Jason reported an oops caused by SCTP on his ARM machine with
SCTP authentication enabled:
Internal error: Oops: 17 [#1] ARM
CPU: 0 PID: 104 Comm: sctp-test Not tainted 3.13.0-68744-g3632f30c9b20-dirty #1
task: c6eefa40 ti: c6f52000 task.ti: c6f52000
PC is at sctp_auth_calculate_hmac+0xc4/0x10c
LR is at sg_init_table+0x20/0x38
pc : [<c024bb80>] lr : [<c00f32dc>] psr: 40000013
sp : c6f538e8 ip : 00000000 fp : c6f53924
r10: c6f50d80 r9 : 00000000 r8 : 00010000
r7 : 00000000 r6 : c7be4000 r5 : 00000000 r4 : c6f56254
r3 : c00c8170 r2 : 00000001 r1 : 00000008 r0 : c6f1e660
Flags: nZcv IRQs on FIQs on Mode SVC_32 ISA ARM Segment user
Control: 0005397f Table: 06f28000 DAC: 00000015
Process sctp-test (pid: 104, stack limit = 0xc6f521c0)
Stack: (0xc6f538e8 to 0xc6f54000)
[...]
Backtrace:
[<c024babc>] (sctp_auth_calculate_hmac+0x0/0x10c) from [<c0249af8>] (sctp_packet_transmit+0x33c/0x5c8)
[<c02497bc>] (sctp_packet_transmit+0x0/0x5c8) from [<c023e96c>] (sctp_outq_flush+0x7fc/0x844)
[<c023e170>] (sctp_outq_flush+0x0/0x844) from [<c023ef78>] (sctp_outq_uncork+0x24/0x28)
[<c023ef54>] (sctp_outq_uncork+0x0/0x28) from [<c0234364>] (sctp_side_effects+0x1134/0x1220)
[<c0233230>] (sctp_side_effects+0x0/0x1220) from [<c02330b0>] (sctp_do_sm+0xac/0xd4)
[<c0233004>] (sctp_do_sm+0x0/0xd4) from [<c023675c>] (sctp_assoc_bh_rcv+0x118/0x160)
[<c0236644>] (sctp_assoc_bh_rcv+0x0/0x160) from [<c023d5bc>] (sctp_inq_push+0x6c/0x74)
[<c023d550>] (sctp_inq_push+0x0/0x74) from [<c024a6b0>] (sctp_rcv+0x7d8/0x888)
While we already had various kinds of bugs in that area
ec0223ec48a9 ("net: sctp: fix sctp_sf_do_5_1D_ce to verify if
we/peer is AUTH capable") and
b14878ccb7fa ("net: sctp: cache
auth_enable per endpoint"), this one is a bit of a different
kind.
Giving a bit more background on why SCTP authentication is
needed can be found in RFC4895:
SCTP uses 32-bit verification tags to protect itself against
blind attackers. These values are not changed during the
lifetime of an SCTP association.
Looking at new SCTP extensions, there is the need to have a
method of proving that an SCTP chunk(s) was really sent by
the original peer that started the association and not by a
malicious attacker.
To cause this bug, we're triggering an INIT collision between
peers; normal SCTP handshake where both sides intent to
authenticate packets contains RANDOM; CHUNKS; HMAC-ALGO
parameters that are being negotiated among peers:
---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ---------->
<------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] ---------
-------------------- COOKIE-ECHO -------------------->
<-------------------- COOKIE-ACK ---------------------
RFC4895 says that each endpoint therefore knows its own random
number and the peer's random number *after* the association
has been established. The local and peer's random number along
with the shared key are then part of the secret used for
calculating the HMAC in the AUTH chunk.
Now, in our scenario, we have 2 threads with 1 non-blocking
SEQ_PACKET socket each, setting up common shared SCTP_AUTH_KEY
and SCTP_AUTH_ACTIVE_KEY properly, and each of them calling
sctp_bindx(3), listen(2) and connect(2) against each other,
thus the handshake looks similar to this, e.g.:
---------- INIT[RANDOM; CHUNKS; HMAC-ALGO] ---------->
<------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] ---------
<--------- INIT[RANDOM; CHUNKS; HMAC-ALGO] -----------
-------- INIT-ACK[RANDOM; CHUNKS; HMAC-ALGO] -------->
...
Since such collisions can also happen with verification tags,
the RFC4895 for AUTH rather vaguely says under section 6.1:
In case of INIT collision, the rules governing the handling
of this Random Number follow the same pattern as those for
the Verification Tag, as explained in Section 5.2.4 of
RFC 2960 [5]. Therefore, each endpoint knows its own Random
Number and the peer's Random Number after the association
has been established.
In RFC2960, section 5.2.4, we're eventually hitting Action B:
B) In this case, both sides may be attempting to start an
association at about the same time but the peer endpoint
started its INIT after responding to the local endpoint's
INIT. Thus it may have picked a new Verification Tag not
being aware of the previous Tag it had sent this endpoint.
The endpoint should stay in or enter the ESTABLISHED
state but it MUST update its peer's Verification Tag from
the State Cookie, stop any init or cookie timers that may
running and send a COOKIE ACK.
In other words, the handling of the Random parameter is the
same as behavior for the Verification Tag as described in
Action B of section 5.2.4.
Looking at the code, we exactly hit the sctp_sf_do_dupcook_b()
case which triggers an SCTP_CMD_UPDATE_ASSOC command to the
side effect interpreter, and in fact it properly copies over
peer_{random, hmacs, chunks} parameters from the newly created
association to update the existing one.
Also, the old asoc_shared_key is being released and based on
the new params, sctp_auth_asoc_init_active_key() updated.
However, the issue observed in this case is that the previous
asoc->peer.auth_capable was 0, and has *not* been updated, so
that instead of creating a new secret, we're doing an early
return from the function sctp_auth_asoc_init_active_key()
leaving asoc->asoc_shared_key as NULL. However, we now have to
authenticate chunks from the updated chunk list (e.g. COOKIE-ACK).
That in fact causes the server side when responding with ...
<------------------ AUTH; COOKIE-ACK -----------------
... to trigger a NULL pointer dereference, since in
sctp_packet_transmit(), it discovers that an AUTH chunk is
being queued for xmit, and thus it calls sctp_auth_calculate_hmac().
Since the asoc->active_key_id is still inherited from the
endpoint, and the same as encoded into the chunk, it uses
asoc->asoc_shared_key, which is still NULL, as an asoc_key
and dereferences it in ...
crypto_hash_setkey(desc.tfm, &asoc_key->data[0], asoc_key->len)
... causing an oops. All this happens because sctp_make_cookie_ack()
called with the *new* association has the peer.auth_capable=1
and therefore marks the chunk with auth=1 after checking
sctp_auth_send_cid(), but it is *actually* sent later on over
the then *updated* association's transport that didn't initialize
its shared key due to peer.auth_capable=0. Since control chunks
in that case are not sent by the temporary association which
are scheduled for deletion, they are issued for xmit via
SCTP_CMD_REPLY in the interpreter with the context of the
*updated* association. peer.auth_capable was 0 in the updated
association (which went from COOKIE_WAIT into ESTABLISHED state),
since all previous processing that performed sctp_process_init()
was being done on temporary associations, that we eventually
throw away each time.
The correct fix is to update to the new peer.auth_capable
value as well in the collision case via sctp_assoc_update(),
so that in case the collision migrated from 0 -> 1,
sctp_auth_asoc_init_active_key() can properly recalculate
the secret. This therefore fixes the observed server panic.
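A sketch of the update path as described; sctp_auth_asoc_init_active_key() is
the recalculation entry point mentioned above, the wrapper around it is
illustrative:
/* Sketch only: carry auth_capable over during the collision update, then
 * recompute the active key so asoc_shared_key is no longer NULL. */
static void assoc_update_auth(struct sctp_association *asoc,
			      const struct sctp_association *new)
{
	asoc->peer.auth_capable = new->peer.auth_capable;
	sctp_auth_asoc_init_active_key(asoc, GFP_ATOMIC);
}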
Fixes:
730fc3d05cd4 ("[SCTP]: Implete SCTP-AUTH parameter processing")
Reported-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Daniel Borkmann <dborkman@redhat.com>
Tested-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Cc: Vlad Yasevich <vyasevich@gmail.com>
Acked-by: Vlad Yasevich <vyasevich@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ivan Vecera [Tue, 29 Jul 2014 14:29:30 +0000 (16:29 +0200)]
bna: fix performance regression
[ Upstream commit
c36c9d50cc6af5c5bfcc195f21b73f55520c15f9 ]
The recent commit "e29aa33 bna: Enable Multi Buffer RX" is causing
a performance regression. It does not properly update the 'cmpl' pointer
at the end of the loop in the NAPI handler bnad_cq_process(). The result
is that only one packet per NAPI schedule is processed.
Signed-off-by: Ivan Vecera <ivecera@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christoph Paasch [Tue, 29 Jul 2014 11:40:57 +0000 (13:40 +0200)]
tcp: Fix integer-overflow in TCP vegas
[ Upstream commit
1f74e613ded11517db90b2bd57e9464d9e0fb161 ]
In vegas we do a multiplication of the cwnd and the rtt. This
may overflow, and thus the result is stored in a u64. However, we first
need to cast the cwnd so that 64-bit arithmetic is actually done.
Then, we need to use do_div so this also works on 32-bit arches.
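A sketch of the corrected arithmetic; the variable names follow the
description above rather than the exact tcp_vegas code:
/* Sketch only: cast before multiplying so the math is done in 64 bits,
 * then use do_div() so the division also builds on 32-bit arches. */
static u32 vegas_target_cwnd(u32 snd_cwnd, u32 base_rtt, u32 rtt)
{
	u64 target = (u64)snd_cwnd * base_rtt;	/* cast forces 64-bit multiply */

	do_div(target, rtt);			/* 64-by-32 divide, 32-bit safe */
	return (u32)target;
}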
Cc: Stephen Hemminger <stephen@networkplumber.org>
Cc: Neal Cardwell <ncardwell@google.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: David Laight <David.Laight@ACULAB.COM>
Cc: Doug Leith <doug.leith@nuim.ie>
Fixes:
8d3a564da34e (tcp: tcp_vegas cong avoid fix)
Signed-off-by: Christoph Paasch <christoph.paasch@uclouvain.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christoph Paasch [Tue, 29 Jul 2014 10:07:27 +0000 (12:07 +0200)]
tcp: Fix integer-overflows in TCP veno
[ Upstream commit
45a07695bc64b3ab5d6d2215f9677e5b8c05a7d0 ]
In veno we do a multiplication of the cwnd and the rtt. This
may overflow, and thus the result is stored in a u64. However, we first
need to cast the cwnd so that 64-bit arithmetic is actually done.
A first attempt at fixing
76f1017757aa0 ([TCP]: TCP Veno congestion
control) was made by
159131149c2 (tcp: Overflow bug in Vegas), but it
failed to add the required cast in tcp_veno_cong_avoid().
Fixes:
76f1017757aa0 ([TCP]: TCP Veno congestion control)
Signed-off-by: Christoph Paasch <christoph.paasch@uclouvain.be>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>