Anand Jain [Thu, 28 Sep 2017 06:51:09 +0000 (14:51 +0800)]
btrfs: undo writable superblock when sprouting fails
[ Upstream commit
0af2c4bf5a012a40a2f9230458087d7f068339d0 ]
When a new device is being added to a seed FS, the seed FS is marked
writable, but when we fail to bring in the new device, we miss undoing
the writable part. This patch fixes it.
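Roughly, the shape of the fix (a sketch, not the exact upstream hunk;
the error label and flag name are illustrative):
	if (seeding_dev) {
		sb->s_flags &= ~MS_RDONLY;	/* seed fs made writable for the sprout */
		...
	}
	...
error_sysfs:	/* failure while bringing in the new device */
	...
	if (seeding_dev)
		sb->s_flags |= MS_RDONLY;	/* undo the writable flag */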
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nikolay Borisov [Thu, 28 Sep 2017 07:53:17 +0000 (10:53 +0300)]
btrfs: Explicitly handle btrfs_update_root failure
[ Upstream commit
9417ebc8a676487c6ec8825f92fb28f7dbeb5f4b ]
btrfs_update_root can fail and it aborts the transaction; the correct
way to handle an aborted transaction is to explicitly end it with
btrfs_end_transaction. Even now the code is correct, since
btrfs_commit_transaction would handle an aborted transaction, but this
is more of an implementation detail. So let's be explicit in handling
failure in btrfs_update_root.
Furthermore, btrfs_commit_transaction can also fail, and by ignoring its
return value we could have left the in-memory copy of the root item in
an inconsistent state. So capture the error value, which allows us to
correctly revert the RO/RW flags in case of commit failure.
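A sketch of the intended error handling (illustrative only; the exact
arguments and labels may differ from the applied hunk):
	ret = btrfs_update_root(trans, fs_info->tree_root,
				&root->root_key, &root->root_item);
	if (ret < 0) {
		/* the transaction is already aborted, end it explicitly */
		btrfs_end_transaction(trans);
		goto out;
	}
	ret = btrfs_commit_transaction(trans);
	if (ret < 0)
		goto out_reset;	/* revert the in-memory RO/RW flags */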
Signed-off-by: Nikolay Borisov <nborisov@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Anand Jain [Sat, 14 Oct 2017 00:34:02 +0000 (08:34 +0800)]
btrfs: fix false EIO for missing device
[ Upstream commit
102ed2c5ff932439bbbe74c7bd63e6d5baa9f732 ]
When one of the devices is missing, bbio_error() takes care of setting
the error status. And if it is the only IO that is pending in that
stripe, it fails to check the status of the other IO at %bbio_error
before setting the error %bi_status for the %orig_bio. Fix this by
checking whether %bbio->error has exceeded %bbio->max_errors.
With the reproducer below, the fdatasync error is seen intermittently.
mount -o degraded /dev/sdc /btrfs
dd status=none if=/dev/zero of=$(mktemp /btrfs/XXX) bs=4096 count=1 conv=fdatasync
dd: fdatasync failed for ‘/btrfs/LSe’: Input/output error
The reason the problem is intermittent is that the following
conditions have to be met, which depends on timing:
In btrfs_map_bio()
- in RAID1, the missing device has to be at %dev_nr = 1
In bbio_error()
. before bbio_error() is called, the bio of the non-missing
device at %dev_nr = 0 must be completed so that the below
condition is true
if (atomic_dec_and_test(&bbio->stripes_pending)) {
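The check added in bbio_error(), roughly (a sketch):
	if (atomic_dec_and_test(&bbio->stripes_pending)) {
		/* only fail orig_bio if more stripes failed than the
		 * profile can tolerate */
		if (atomic_read(&bbio->error) > bbio->max_errors)
			bio->bi_status = BLK_STS_IOERR;
		else
			bio->bi_status = BLK_STS_OK;
		...
	}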
Signed-off-by: Anand Jain <anand.jain@oracle.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nick Desaulniers [Fri, 27 Oct 2017 16:33:41 +0000 (09:33 -0700)]
arm64: prevent regressions in compressed kernel image size when upgrading to binutils 2.27
[ Upstream commit
fd9dde6abcb9bfe6c6bee48834e157999f113971 ]
Upon upgrading to binutils 2.27, we found that our lz4 and gzip
compressed kernel images were significantly larger, resulting in 10ms
boot time regressions.
As noted by Rahul:
"aarch64 binaries uses RELA relocations, where each relocation entry
includes an addend value. This is similar to x86_64. On x86_64, the
addend values are also stored at the relocation offset for relative
relocations. This is an optimization: in the case where code does not
need to be relocated, the loader can simply skip processing relative
relocations. In binutils-2.25, both bfd and gold linkers did this for
x86_64, but only the gold linker did this for aarch64. The kernel build
here is using the bfd linker, which stored zeroes at the relocation
offsets for relative relocations. Since a set of zeroes compresses
better than a set of non-zero addend values, this behavior was resulting
in much better lz4 compression.
The bfd linker in binutils-2.27 is now storing the actual addend values
at the relocation offsets. The behavior is now consistent with what it
does for x86_64 and what gold linker does for both architectures. The
change happened in this upstream commit:
https://sourceware.org/git/?p=binutils-gdb.git;a=commit;h=1f56df9d0d5ad89806c24e71f296576d82344613
Since a bunch of zeroes got replaced by non-zero addend values, we see
the side effect of lz4 compressed image being a bit bigger.
To get the old behavior from the bfd linker, "--no-apply-dynamic-relocs"
flag can be used:
$ LDFLAGS="--no-apply-dynamic-relocs" make
With this flag, the compressed image size is back to what it was with
binutils-2.25.
If the kernel is using ASLR, there aren't additional runtime costs to
--no-apply-dynamic-relocs, as the relocations will need to be applied
again anyway after the kernel is relocated to a random address.
If the kernel is not using ASLR, then presumably the current default
behavior of the linker is better. Since the static linker performed the
dynamic relocs, and the kernel is not moved to a different address at
load time, it can skip applying the relocations all over again."
Some measurements:
$ ld -v
GNU ld (binutils-2.25-f3d35cf6) 2.25.51.20141117
$ ls -l vmlinux
-rwxr-x--- 1 ndesaulniers eng 300652760 Oct 26 11:57 vmlinux
$ ls -l Image.lz4-dtb
-rw-r----- 1 ndesaulniers eng 16932627 Oct 26 11:57 Image.lz4-dtb
$ ld -v
GNU ld (binutils-2.27-53dd00a1) 2.27.0.20170315
pre patch:
$ ls -l vmlinux
-rwxr-x--- 1 ndesaulniers eng 300376208 Oct 26 11:43 vmlinux
$ ls -l Image.lz4-dtb
-rw-r----- 1 ndesaulniers eng 18159474 Oct 26 11:43 Image.lz4-dtb
post patch:
$ ls -l vmlinux
-rwxr-x--- 1 ndesaulniers eng 300376208 Oct 26 12:06 vmlinux
$ ls -l Image.lz4-dtb
-rw-r----- 1 ndesaulniers eng 16932466 Oct 26 12:06 Image.lz4-dtb
By Siqi's measurement w/ gzip:
binutils 2.27 with this patch (with --no-apply-dynamic-relocs):
Image     41535488
Image.gz  13404067
binutils 2.27 without this patch (without --no-apply-dynamic-relocs):
Image     41535488
Image.gz  14125516
Any compression scheme should be able to get better results from the
longer runs of zeros, not just GZIP and LZ4.
10ms boot time savings isn't anything to get excited about, but users of
arm64+compression+bfd-2.27 should not have to pay a penalty for no
runtime improvement.
Reported-by: Gopinath Elanchezhian <gelanchezhian@google.com>
Reported-by: Sindhuri Pentyala <spentyala@google.com>
Reported-by: Wei Wang <wvw@google.com>
Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Suggested-by: Rahul Chaudhry <rahulchaudhry@google.com>
Suggested-by: Siqi Lin <siqilin@google.com>
Suggested-by: Stephen Hines <srhines@google.com>
Signed-off-by: Nick Desaulniers <ndesaulniers@google.com>
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
[will: added comment to Makefile]
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ronald Tschalär [Thu, 26 Oct 2017 05:15:19 +0000 (22:15 -0700)]
Bluetooth: hci_ldisc: Fix another race when closing the tty.
[ Upstream commit
0338b1b393ec7910898e8f7b25b3bf31a7282e16 ]
The following race condition still existed:
    P1                                  P2
    cancel_work_sync()
                                        hci_uart_tx_wakeup()
                                          hci_uart_write_work()
                                            hci_uart_dequeue()
    clear_bit(HCI_UART_PROTO_READY)
    hci_unregister_dev(hdev)
    hci_free_dev(hdev)
    hu->proto->close(hu)
    kfree(hu)
                                              access to hdev and hu
Cancelling the work after clearing the HCI_UART_PROTO_READY bit avoids
this as any hci_uart_tx_wakeup() issued after the flag is cleared will
detect that and not schedule further work.
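Roughly, the new ordering and the guard (a sketch; surrounding locking
is omitted):
	/* hci_uart_tty_close(): clear the flag before cancelling the work */
	if (test_and_clear_bit(HCI_UART_PROTO_READY, &hu->flags)) {
		...
	}
	cancel_work_sync(&hu->write_work);

	/* hci_uart_tx_wakeup(): do not schedule work once the flag is gone */
	if (!test_bit(HCI_UART_PROTO_READY, &hu->flags))
		return 0;
	schedule_work(&hu->write_work);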
Signed-off-by: Ronald Tschalär <ronald@innovation.ch>
Reviewed-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Patel Jay P [Mon, 23 Oct 2017 13:05:53 +0000 (06:05 -0700)]
IB/hfi1: Return actual operational VLs in port info query
[ Upstream commit
00f9203119dd2774564407c7a67b17d81916298b ]
__subn_get_opa_portinfo stores the value returned by hfi1_get_ib_cfg()
as the operational VLs. hfi1_get_ib_cfg() returns the vls_operational
field in hfi1_pportdata. The problem with this is that the value is
always equal to the vls_supported field in hfi1_pportdata.
The logic to calculate operational_vls is to use the value passed by
the FM (in the __subn_set_opa_portinfo routine). If no value is passed,
then the default value is stored in operational_vls.
The actual_vls_operational field is calculated on the basis of the
buffer control table. Hence, modify hfi1_get_ib_cfg() to return
actual_vls_operational when used with the HFI1_IB_CFG_OP_VLS parameter.
Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Patel Jay P <jay.p.patel@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
tang.junhui [Mon, 30 Oct 2017 21:46:34 +0000 (14:46 -0700)]
bcache: fix wrong cache_misses statistics
[ Upstream commit
c157313791a999646901b3e3c6888514ebc36d62 ]
Currently, cache-missed IOs are identified by s->cache_miss, but actually
there are many situations in which missed IOs are not assigned a value for
s->cache_miss in cached_dev_cache_miss(), for example a bypassed IO
(s->iop.bypass = 1), or when the cache_bio allocation fails. In these
situations it will go to out_put or out_submit, and s->cache_miss is NULL,
which leads bch_mark_cache_accounting() to treat this IO as a hit IO.
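A sketch of the approach: record the miss explicitly in struct search
instead of relying on s->cache_miss being set on every path (the field
name is shown only for illustration):
	struct search {
		...
		unsigned int cache_missed:1;	/* set in cached_dev_cache_miss() */
	};

	/* when accounting, use the explicit flag */
	bch_mark_cache_accounting(s->iop.c, s->d,
				  !s->cache_missed, s->iop.bypass);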
[ML: applied by 3-way merge]
Signed-off-by: tang.junhui <tang.junhui@zte.com.cn>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Liang Chen [Mon, 30 Oct 2017 21:46:35 +0000 (14:46 -0700)]
bcache: explicitly destroy mutex while exiting
[ Upstream commit
330a4db89d39a6b43f36da16824eaa7a7509d34d ]
mutex_destroy does nothing most of the time, but it's better to call
it to make the code future proof, and it also has some meaning
for things like mutex debugging.
As Coly pointed out in a previous review, bcache_exit() may not be
able to handle all the references properly if userspace registers
cache and backing devices right before bch_debug_init runs and
bch_debug_init fails later. So do not expose the userspace interface
until everything is ready, to avoid that issue.
Signed-off-by: Liang Chen <liangchen.linux@gmail.com>
Reviewed-by: Michael Lyle <mlyle@lyle.org>
Reviewed-by: Coly Li <colyli@suse.de>
Reviewed-by: Eric Wheeler <bcache@linux.ewheeler.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arun Kumar Neelakantam [Mon, 30 Oct 2017 05:41:24 +0000 (11:11 +0530)]
rpmsg: glink: Initialize the "intent_req_comp" completion variable
[ Upstream commit
2394facb17bcace4b3c19b50202177a5d8903b64 ]
The "intent_req_comp" variable is used without initialization which
results in NULL pointer dereference in qcom_glink_request_intent().
we need to initialize the completion variable before using it.
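The missing initialization, roughly (a sketch of the channel allocation
path, alongside the other init calls there):
	init_completion(&channel->open_req);
	init_completion(&channel->open_ack);
	init_completion(&channel->intent_req_comp);	/* was missing */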
Fixes:
27b9c5b66b23 ("rpmsg: glink: Request for intents when unavailable")
Signed-off-by: Arun Kumar Neelakantam <aneela@codeaurora.org>
Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Adam Sampson [Tue, 24 Oct 2017 20:14:46 +0000 (16:14 -0400)]
media: usbtv: fix brightness and contrast controls
[ Upstream commit
b3168c87c0492661badc3e908f977d79e7738a41 ]
Because the brightness and contrast controls share a register,
usbtv_s_ctrl needs to read the existing values for both controls before
inserting the new value. However, the code accidentally wrote to the
registers (from an uninitialised stack array), rather than reading them.
The user-visible effect of this was that adjusting the brightness would
also set the contrast to a random value, and vice versa -- so it wasn't
possible to correctly adjust the brightness of usbtv's video output.
Tested with an "EasyDAY" UTV007 device.
Fixes:
c53a846c48f2 ("usbtv: add video controls")
Signed-off-by: Adam Sampson <ats@offog.org>
Reviewed-by: Lubomir Rintel <lkundrak@v3.sk>
Signed-off-by: Hans Verkuil <hans.verkuil@cisco.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bob Peterson [Wed, 20 Sep 2017 13:30:04 +0000 (08:30 -0500)]
GFS2: Take inode off order_write list when setting jdata flag
[ Upstream commit
cc555b09d8c3817aeebda43a14ab67049a5653f7 ]
This patch fixes a deadlock caused when the jdata flag is set for
inodes that are already on the ordered write list. Since it is
on the ordered write list, log_flush calls gfs2_ordered_write which
calls filemap_fdatawrite. But since the inode had the jdata flag
set, that calls gfs2_jdata_writepages, which tries to start a new
transaction. A new transaction cannot be started because it tries
to acquire the log_flush rwsem which is already locked by the log
flush operation.
The bottom line is: We cannot switch an inode from ordered to jdata
until we eliminate any ordered data pages (via log flush) or any
log_flush operation afterward will create the circular dependency
above. So we need to flush the log before setting the diskflags to
switch the file mode, then we need to remove the inode from the
ordered writes list.
Before this patch, the log flush was done for jdata->ordered, but
that's wrong. If we're going from jdata to ordered, we don't need
to call gfs2_log_flush because the call to filemap_fdatawrite will
do it for us:
filemap_fdatawrite() -> __filemap_fdatawrite_range()
__filemap_fdatawrite_range() -> do_writepages()
do_writepages() -> gfs2_jdata_writepages()
gfs2_jdata_writepages() -> gfs2_log_flush()
This patch modifies function do_gfs2_set_flags so that if a file
has its jdata flag set, and it's already on the ordered write list,
the log will be flushed and it will be removed from the list
before setting the flag.
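In do_gfs2_set_flags(), the ordering becomes roughly (a sketch; the
exact gfs2_log_flush() arguments are omitted):
	if (new_flags & GFS2_DIF_JDATA)
		gfs2_log_flush(sdp, ip->i_gl, ...);
	error = filemap_fdatawrite(inode->i_mapping);
	if (error)
		goto out;
	error = filemap_fdatawait(inode->i_mapping);
	if (error)
		goto out;
	if (new_flags & GFS2_DIF_JDATA)
		gfs2_ordered_del_inode(ip);	/* take it off the ordered write list */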
Signed-off-by: Bob Peterson <rpeterso@redhat.com>
Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Acked-by: Abhijith Das <adas@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Douglas Gilbert [Sun, 29 Oct 2017 14:47:19 +0000 (10:47 -0400)]
scsi: scsi_debug: write_same: fix error report
[ Upstream commit
e33d7c56450b0a5c7290cbf9e1581fab5174f552 ]
The scsi_debug driver incorrectly suggests there is an error with the
SCSI WRITE SAME command when the number_of_logical_blocks is greater
than 1. It will also suggest there is an error when NDOB
(no data-out buffer) is set and the number_of_logical_blocks is
greater than 0. Both are valid, fix.
Signed-off-by: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Dan Carpenter [Sat, 30 Sep 2017 08:16:51 +0000 (11:16 +0300)]
misc: pci_endpoint_test: Avoid triggering a BUG()
[ Upstream commit
846df244ebefbc9f7b91e9ae7a5e5a2e69fb4772 ]
If you call ida_simple_remove(&pci_endpoint_test_ida, id) with a
negative "id" then it triggers an immediate BUG_ON(). Let's not allow
that.
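The guard, roughly (a sketch of the remove path):
	if (id < 0)
		return;		/* never hand a negative id to ida_simple_remove() */
	ida_simple_remove(&pci_endpoint_test_ida, id);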
Fixes:
2c156ac71c6b ("misc: Add host side PCI driver for PCI test function device")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kishon Vijay Abraham I [Wed, 11 Oct 2017 08:44:36 +0000 (14:14 +0530)]
misc: pci_endpoint_test: Fix failure path return values in probe
[ Upstream commit
80068c93688f6143100859c4856f895801c1a1d9 ]
Return value of pci_endpoint_test_probe is not set properly in a couple of
failure cases. Fix it here.
Signed-off-by: Kishon Vijay Abraham I <kishon@ti.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Lezcano [Thu, 19 Oct 2017 17:05:58 +0000 (19:05 +0200)]
thermal/drivers/step_wise: Fix temperature regulation misbehavior
[ Upstream commit
07209fcf33542c1ff1e29df2dbdf8f29cdaacb10 ]
There is a particular situation, when the cooling device is cpufreq and
the heat dissipation is not efficient enough, where the temperature
increases little by little until reaching the critical threshold,
leading to a SoC reset.
The behavior is reproducible on a hikey6220 with bad heat dissipation
(eg. stacked with other boards).
Running a simple C program doing while(1); on each CPU of the SoC makes
the temperature reach the passive regulation trip point and end up at
the maximum allowed temperature, followed by a reset.
This issue has been also reported by running the libhugetlbfs test suite.
What is observed is a ping pong between two cpu frequencies, 1.2GHz and 900MHz
while the temperature continues to grow.
It appears the step wise governor calls get_target_state() the first time with
the throttle set to true and the trend to 'raising'. The code selects logically
the next state, so the cpu frequency decreases from 1.2GHz to 900MHz, so far so
good. The temperature decreases immediately but still stays greater than the
trip point, then get_target_state() is called again, this time with the
throttle set to true *and* the trend to 'dropping'. From there the algorithm
assumes we have to step down the state and the cpu frequency jumps back to
1.2GHz. But the temperature is still higher than the trip point, so
get_target_state() is called with throttle=1 and trend='raising' again, we jump
to 900MHz, then get_target_state() is called with throttle=1 and
trend='dropping', we jump to 1.2GHz, etc ... but the temperature does
not stabilize and continues to increase.
[ 237.922654] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=1,throttle=1
[ 237.922678] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=1,throttle=1
[ 237.922690] thermal cooling_device0: cur_state=0
[ 237.922701] thermal cooling_device0: old_target=0, target=1
[ 238.026656] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=2,throttle=1
[ 238.026680] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=2,throttle=1
[ 238.026694] thermal cooling_device0: cur_state=1
[ 238.026707] thermal cooling_device0: old_target=1, target=0
[ 238.134647] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=1,throttle=1
[ 238.134667] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=1,throttle=1
[ 238.134679] thermal cooling_device0: cur_state=0
[ 238.134690] thermal cooling_device0: old_target=0, target=1
In this situation the temperature continues to increase while the trend is
oscillating between 'dropping' and 'raising'. We need to keep the current state
untouched if the throttle is set, so the temperature can decrease or a higher
state could be selected, thus preventing this oscillation.
Keeping the next_target untouched when 'throttle' is true at 'dropping' time
fixes the issue.
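In get_target_state(), the 'dropping' case becomes roughly (a sketch):
	case THERMAL_TREND_DROPPING:
		if (cur_state <= instance->lower) {
			if (!throttle)
				next_target = THERMAL_NO_TARGET;
		} else {
			/* only step down once we are no longer throttling */
			if (!throttle) {
				next_target = cur_state - 1;
				if (next_target > instance->upper)
					next_target = instance->upper;
			}
		}
		break;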
The following traces show the governor does not change the next state if
trend==2 (dropping) and throttle==1.
[ 2306.127987] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=1,throttle=1
[ 2306.128009] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=1,throttle=1
[ 2306.128021] thermal cooling_device0: cur_state=0
[ 2306.128031] thermal cooling_device0: old_target=0, target=1
[ 2306.231991] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=2,throttle=1
[ 2306.232016] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=2,throttle=1
[ 2306.232030] thermal cooling_device0: cur_state=1
[ 2306.232042] thermal cooling_device0: old_target=1, target=1
[ 2306.335982] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=0,throttle=1
[ 2306.336006] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=0,throttle=1
[ 2306.336021] thermal cooling_device0: cur_state=1
[ 2306.336034] thermal cooling_device0: old_target=1, target=1
[ 2306.439984] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=2,throttle=1
[ 2306.440008] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=2,throttle=0
[ 2306.440022] thermal cooling_device0: cur_state=1
[ 2306.440034] thermal cooling_device0: old_target=1, target=0
[ ... ]
After a while, if the temperature continues to increase, the next state becomes
2 which is 720MHz on the hikey. That results in the temperature stabilizing
around the trip point.
[ 2455.831982] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=1,throttle=1
[ 2455.832006] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=1,throttle=0
[ 2455.832019] thermal cooling_device0: cur_state=1
[ 2455.832032] thermal cooling_device0: old_target=1, target=1
[ 2455.935985] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=0,throttle=1
[ 2455.936013] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=0,throttle=0
[ 2455.936027] thermal cooling_device0: cur_state=1
[ 2455.936040] thermal cooling_device0: old_target=1, target=1
[ 2456.043984] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=0,throttle=1
[ 2456.044009] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=0,throttle=0
[ 2456.044023] thermal cooling_device0: cur_state=1
[ 2456.044036] thermal cooling_device0: old_target=1, target=1
[ 2456.148001] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=1,throttle=1
[ 2456.148028] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=1,throttle=1
[ 2456.148042] thermal cooling_device0: cur_state=1
[ 2456.148055] thermal cooling_device0: old_target=1, target=2
[ 2456.252009] thermal thermal_zone0: Trip0[type=1,temp=65000]:trend=2,throttle=1
[ 2456.252041] thermal thermal_zone0: Trip1[type=1,temp=75000]:trend=2,throttle=0
[ 2456.252058] thermal cooling_device0: cur_state=2
[ 2456.252075] thermal cooling_device0: old_target=2, target=1
IOW, this change is needed to keep the state for a cooling device if the
temperature trend is oscillating while the temperature increases slightly.
Without this change, the situation above leads to a catastrophic crash by a
hardware reset on hikey. This issue has been reported to happen on an OMAP
dra7xx also.
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Cc: Keerthy <j-keerthy@ti.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: Leo Yan <leo.yan@linaro.org>
Tested-by: Keerthy <j-keerthy@ti.com>
Reviewed-by: Keerthy <j-keerthy@ti.com>
Signed-off-by: Eduardo Valentin <edubezval@gmail.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kuninori Morimoto [Wed, 1 Nov 2017 07:16:58 +0000 (07:16 +0000)]
ASoC: rsnd: rsnd_ssi_run_mods() needs to care ssi_parent_mod
[ Upstream commit
21781e87881f9c420871b1d1f3f29d4cd7bffb10 ]
SSI parent mod might be NULL. ssi_parent_mod() needs to care
about it. Otherwise, it uses a negative shift.
This patch fixes it.
Signed-off-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gao Feng [Tue, 31 Oct 2017 10:25:37 +0000 (18:25 +0800)]
ppp: Destroy the mutex when cleanup
[ Upstream commit
f02b2320b27c16b644691267ee3b5c110846f49e ]
mutex_destroy only makes sense when DEBUG_MUTEX is enabled. For good
readability, it's better to invoke it in the exit function when the
init function invokes mutex_init.
Signed-off-by: Gao Feng <gfree.wind@vip.163.com>
Acked-by: Guillaume Nault <g.nault@alphalink.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michał Mirosław [Tue, 19 Sep 2017 02:48:10 +0000 (04:48 +0200)]
clk: tegra: Fix cclk_lp divisor register
[ Upstream commit
54eff2264d3e9fd7e3987de1d7eba1d3581c631e ]
According to comments in code and common sense, cclk_lp uses its
own divisor, not cclk_g's.
Fixes:
b08e8c0ecc42 ("clk: tegra: add clock support for Tegra30")
Signed-off-by: Michał Mirosław <mirq-linux@rere.qmqm.pl>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Nicolin Chen [Fri, 15 Sep 2017 19:10:13 +0000 (12:10 -0700)]
clk: tegra: Use readl_relaxed_poll_timeout_atomic() in tegra210_clock_init()
[ Upstream commit
22ef01a203d27fee8b7694020b7e722db7efd2a7 ]
Below is the call trace of tegra210_init_pllu() function:
start_kernel()
-> time_init()
--> of_clk_init()
---> tegra210_clock_init()
----> tegra210_pll_init()
-----> tegra210_init_pllu()
Because preemption is disabled in start_kernel before time_init is
called, tegra210_init_pllu actually runs in an atomic context, while
it includes a readl_relaxed_poll_timeout that might sleep.
So this patch just changes this readl_relaxed_poll_timeout() to its
atomic version.
Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
Acked-By: Peter De Schrijver <pdeschrijver@nvidia.com>
Signed-off-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ming Lei [Sat, 14 Oct 2017 09:22:25 +0000 (17:22 +0800)]
blk-mq-sched: dispatch from scheduler IFF progress is made in ->dispatch
[ Upstream commit
5e3d02bbafad38975099b5848f5ebadedcf7bb7e ]
When the hw queue is busy, we shouldn't take requests from the scheduler
queue any more, otherwise it is difficult to do IO merge.
This patch fixes the awful IO performance on some SCSI devices (lpfc,
qla2xxx, ...) when mq-deadline/kyber is used, by not taking requests
if the hw queue is busy.
Reviewed-by: Omar Sandoval <osandov@fb.com>
Reviewed-by: Bart Van Assche <bart.vanassche@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Leo Yan [Fri, 1 Sep 2017 00:47:14 +0000 (08:47 +0800)]
clk: hi6220: mark clock cs_atb_syspll as critical
[ Upstream commit
d2a3671ebe6479483a12f94fcca63c058d95ad64 ]
Clock cs_atb_syspll is the PLL used for the coresight trace bus; when
cs_atb_syspll is disabled, operating its child clock node cs_atb
results in a system hang. So mark clock cs_atb_syspll as critical to
keep it enabled.
Cc: Guodong Xu <guodong.xu@linaro.org>
Cc: Zhangfei Gao <zhangfei.gao@linaro.org>
Cc: Haojian Zhuang <haojian.zhuang@linaro.org>
Signed-off-by: Leo Yan <leo.yan@linaro.org>
Signed-off-by: Michael Turquette <mturquette@baylibre.com>
Link: lkml.kernel.org/r/1504226835-2115-2-git-send-email-leo.yan@linaro.org
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mauro Carvalho Chehab [Wed, 1 Nov 2017 12:09:59 +0000 (08:09 -0400)]
media: camss-vfe: always initialize reg at vfe_set_xbar_cfg()
[ Upstream commit
9917fbcfa20ab987d6381fd0365665e5c1402d75 ]
If output->wm_num is bigger than 2, the value of reg is
not initialized, as warned by smatch:
drivers/media/platform/qcom/camss-8x16/camss-vfe.c:633 vfe_set_xbar_cfg() error: uninitialized symbol 'reg'.
drivers/media/platform/qcom/camss-8x16/camss-vfe.c:637 vfe_set_xbar_cfg() error: uninitialized symbol 'reg'.
That shouldn't happen in practice, so add logic that will
break the loop if i > 1, fixing the warnings.
Signed-off-by: Mauro Carvalho Chehab <mchehab@s-opensource.com>
Acked-by: Todor Tomov <todor.tomov@linaro.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sébastien Szymanski [Tue, 1 Aug 2017 10:40:07 +0000 (12:40 +0200)]
clk: imx6: refine hdmi_isfr's parent to make HDMI work on i.MX6 SoCs w/o VPU
[ Upstream commit
c68ee58d9ee7b856ac722f18f4f26579c8fbd2b4 ]
On i.MX6 SoCs without VPU (in my case MCIMX6D4AVT10AC), the hdmi driver
fails to probe:
[ 2.540030] dwhdmi-imx 120000.hdmi: Unsupported HDMI controller (0000:00:00)
[ 2.548199] imx-drm display-subsystem: failed to bind 120000.hdmi (ops dw_hdmi_imx_ops): -19
[ 2.557403] imx-drm display-subsystem: master bind failed: -19
That's because hdmi_isfr's parent, video_27m, is not correctly ungated.
As explained in commit
5ccc248cc537 ("ARM: imx6q: clk: Add support for
mipi_core_cfg clock as a shared clock gate"), video_27m is gated by
CCM_CCGR3[CG8].
On i.MX6 SoCs with VPU, the hdmi is working thanks to the
CCM_CMEOR[mod_en_ov_vpu] bit, which makes the video_27m ungated
regardless of what is in CCM_CCGR3[CG8]. The issue can be reproduced
by setting CCM_CMEOR[mod_en_ov_vpu] to 0.
Make the HDMI work in every case by setting hdmi_isfr's parent to
mipi_core_cfg.
Signed-off-by: Sébastien Szymanski <sebastien.szymanski@armadeus.com>
Reviewed-by: Fabio Estevam <fabio.estevam@nxp.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Adriana Reus [Mon, 2 Oct 2017 10:32:10 +0000 (13:32 +0300)]
clk: imx: imx7d: Fix parent clock for OCRAM_CLK
[ Upstream commit
edc5a8e754aba9c6eaeddd18cb1e72462f99b16c ]
The parent of OCRAM_CLK should be axi_main_root_clk
and not axi_post_div.
before:
axi_src              1   1   332307692   0   0
axi_cg               1   1   332307692   0   0
axi_pre_div          1   1   332307692   0   0
axi_post_div         1   1   332307692   0   0
ocram_clk            0   0   332307692   0   0
main_axi_root_clk    1   1   332307692   0   0
after:
axi_src              1   1   332307692   0   0
axi_cg               1   1   332307692   0   0
axi_pre_div          1   1   332307692   0   0
axi_post_div         1   1   332307692   0   0
main_axi_root_clk    1   1   332307692   0   0
ocram_clk            0   0   332307692   0   0
Reference Doc: i.MX 7D Reference Manual - Chap 5, p 516
(https://www.nxp.com/docs/en/reference-manual/IMX7DRM.pdf)
Fixes:
8f6d8094b215 ("ARM: imx: add imx7d clk tree support")
Signed-off-by: Adriana Reus <adriana.reus@nxp.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chen Zhong [Thu, 5 Oct 2017 03:50:23 +0000 (11:50 +0800)]
clk: mediatek: add the option for determining PLL source clock
[ Upstream commit
c955bf3998efa3355790a4d8c82874582f1bc727 ]
The previous setup always sets up the PLL using the 26MHz crystal, but
this is not the case on every MediaTek platform. So the patch adds the
flexibility of assigning an extra member for determining the PLL source
clock.
Signed-off-by: Chen Zhong <chen.zhong@mediatek.com>
Signed-off-by: Sean Wang <sean.wang@mediatek.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Hans de Goede [Thu, 2 Nov 2017 09:30:11 +0000 (10:30 +0100)]
staging: rtl8188eu: Revert part of "staging: rtl8188eu: fix comments with lines over 80 characters"
[ Upstream commit
4004a9870bbefdb6644c3d2033f5315920a3b669 ]
Commit
74e1e498e84e ("staging: rtl8188eu: fix comments with lines over 80
characters") not only changed comments but also changed an if check:
-if (pmlmepriv->cur_network.join_res != true) {
+if (!(pmlmepriv->cur_network.join_res)) {
This is not equivalent as join_res is an int and can have values such
as -2 and -3.
Note for the next time, please only make one type of change in a single
clean-up commit.
Fixes:
74e1e498e84e ("staging: rtl8188eu: fix comments with lines over 80 ...")
Cc: Juliana Rodrigues <juliana.orod@gmail.com>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
qumingguang [Thu, 2 Nov 2017 12:45:22 +0000 (20:45 +0800)]
net: hns3: Fix a misuse to devm_free_irq
[ Upstream commit
ae064e6123f89f90af7e4ea190cc0c612643ca93 ]
We should use free_irq to free the NIC IRQ at unload time, because we
use request_irq to request it when the NIC is brought up. It will crash
if the net device is brought up after the port is reset. This patch
fixes the issue.
Signed-off-by: qumingguang <qumingguang@huawei.com>
Signed-off-by: Lipeng <lipeng321@huawei.com>
Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fuyun Liang [Fri, 3 Nov 2017 04:18:26 +0000 (12:18 +0800)]
net: hns3: fix for getting advertised_caps in hns3_get_link_ksettings
[ Upstream commit
2b39cabb2a283cea0c3d96d9370575371726164f ]
This patch fixes a bug in ethtool's get_link_ksettings():
the advertising bit for autoneg is always added to advertised_caps,
whether autoneg is enabled or disabled. This patch fixes it.
Fixes: 496d03e (net: hns3: Add Ethtool support to HNS3 driver)
Signed-off-by: Fuyun Liang <liangfuyun1@huawei.com>
Signed-off-by: Lipeng <lipeng321@huawei.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jan Kara [Fri, 3 Nov 2017 11:21:21 +0000 (12:21 +0100)]
mm: Handle 0 flags in _calc_vm_trans() macro
[ Upstream commit
592e254502041f953e84d091eae2c68cba04c10b ]
_calc_vm_trans() does not handle the situation when some of the passed
flags are 0 (which can happen if these VM flags do not make sense for
the architecture). Improve the _calc_vm_trans() macro to return 0 in
such situation. Since all passed flags are constant, this does not add
any runtime overhead.
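The guarded macro, roughly (a sketch of the idea):
#define _calc_vm_trans(x, bit1, bit2) \
	((!(bit1) || !(bit2)) ? 0 : \
	 ((bit1) <= (bit2) ? ((x) & (bit1)) * ((bit2) / (bit1)) \
			   : ((x) & (bit1)) / ((bit1) / (bit2))))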
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Robert Baronescu [Tue, 10 Oct 2017 10:22:00 +0000 (13:22 +0300)]
crypto: tcrypt - fix buffer lengths in test_aead_speed()
[ Upstream commit
7aacbfcb331ceff3ac43096d563a1f93ed46e35e ]
Fix the way the lengths of the buffers used for
encryption / decryption are computed.
For example, in the case of encryption, the input buffer does not
contain an authentication tag.
Signed-off-by: Robert Baronescu <robert.baronescu@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Suzuki K Poulose [Fri, 3 Nov 2017 11:45:18 +0000 (11:45 +0000)]
arm-ccn: perf: Prevent module unload while PMU is in use
[ Upstream commit
c7f5828bf77dcbd61d51f4736c1d5aa35663fbb4 ]
When the PMU driver is built as a module, perf expects
pmu->module to be valid, so that the driver is prevented from
being unloaded while it is in use. Fix the CCN PMU driver to
fill in this field.
Fixes:
a33b0daab73a0 ("bus: ARM CCN PMU driver")
Cc: Pawel Moll <pawel.moll@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eryu Guan [Thu, 2 Nov 2017 04:43:50 +0000 (21:43 -0700)]
xfs: truncate pagecache before writeback in xfs_setattr_size()
[ Upstream commit
350976ae21873b0d36584ea005076356431b8f79 ]
On truncate down, if new size is not block size aligned, we zero the
rest of block to avoid exposing stale data to user, and
iomap_truncate_page() skips zeroing if the range is already in
unwritten state or a hole. Then we writeback from on-disk i_size to
the new size if this range hasn't been written to disk yet, and
truncate page cache beyond new EOF and set in-core i_size.
The problem is that we could write data between di_size and newsize
before removing the page cache beyond newsize, as the extents may
still be in unwritten state right after a buffer write. As such, the
page of data that newsize lies in has not been zeroed by page cache
invalidation before it is written, and xfs_do_writepage() hasn't
triggered its "zero data beyond EOF" case because we haven't
updated in-core i_size yet. Then a subsequent mmap read could see
non-zeros past EOF.
I occasionally see this in fsx runs in fstests generic/112, a
simplified fsx operation sequence is like (assuming 4k block size
xfs):
fallocate 0x0 0x1000 0x0 keep_size
write 0x0 0x1000 0x0
truncate 0x0 0x800 0x1000
punch_hole 0x0 0x800 0x800
mapread 0x0 0x800 0x800
where fallocate allocates unwritten extent but doesn't update
i_size, buffer write populates the page cache and extent is still
unwritten, truncate skips zeroing page past new EOF and writes the
page to disk, punch_hole invalidates the page cache, at last mapread
reads the block back and sees non-zero beyond EOF.
Fix it by moving truncate_setsize() to before writeback so the page
cache invalidation zeros the partial page at the new EOF. This also
triggers "zero data beyond EOF" in xfs_do_writepage() at writeback
time, because newsize has been set and page straddles the newsize.
Also fixed the wrong 'end' param of filemap_write_and_wait_range()
call while we're at it, the 'end' is inclusive and should be
'newsize - 1'.
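The resulting ordering in xfs_setattr_size(), roughly (a sketch; the
conditions guarding the writeback are omitted):
	/* truncate the page cache (and zero the partial EOF page) first ... */
	truncate_setsize(inode, newsize);

	/* ... then write back from the old on-disk size; 'end' is
	 * inclusive, hence newsize - 1 */
	error = filemap_write_and_wait_range(VFS_I(ip)->i_mapping,
					     ip->i_d.di_size, newsize - 1);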
Suggested-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Eryu Guan <eguan@redhat.com>
Acked-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gary R Hook [Fri, 3 Nov 2017 16:50:34 +0000 (10:50 -0600)]
iommu/amd: Limit the IOVA page range to the specified addresses
[ Upstream commit
b92b4fb5c14257c0e7eae291ecc1f7b1962e1699 ]
The extent of pages specified when applying a reserved region should
include up to the last page of the range, but not the page following
the range.
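The off-by-one, roughly (a sketch of how the last PFN of the reserved
region should be computed):
	start = IOVA_PFN(region->start);
	end   = IOVA_PFN(region->start + region->length - 1); /* last page of the range */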
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Fixes:
8d54d6c8b8f3 ('iommu/amd: Implement apply_dm_region call-back')
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Liu Bo [Fri, 3 Nov 2017 17:24:44 +0000 (11:24 -0600)]
badblocks: fix wrong return value in badblocks_set if badblocks are disabled
[ Upstream commit
39b4954c0a1556f8f7f1fdcf59a227117fcd8a0b ]
MD's rdev_set_badblocks() expects that badblocks_set() returns 1 if
badblocks are disabled, otherwise, rdev_set_badblocks() will record
superblock changes and return success in that case and md will fail to
report an IO error which it should.
This bug has existed since badblocks were introduced in commit
9e0e252a048b ("badblocks: Add core badblock management code").
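The fix, roughly (a sketch of the early return in badblocks_set()):
	if (bb->shift < 0)
		/* badblocks are disabled - report the set as failed */
		return 1;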
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Acked-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Shaohua Li <shli@fb.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jiang Yi [Fri, 11 Aug 2017 03:29:44 +0000 (11:29 +0800)]
target/file: Do not return error for UNMAP if length is zero
[ Upstream commit
594e25e73440863981032d76c9b1e33409ceff6e ]
The function fd_execute_unmap() in target_core_file.c calls
ret = file->f_op->fallocate(file, mode, pos, len);
Some filesystems implement fallocate() to return error if
length is zero (e.g. btrfs) but according to SCSI Block
Commands spec UNMAP should return success for zero length.
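A sketch of the guard in fd_execute_unmap() (the name of the length
variable is illustrative):
	if (!len)
		return 0;	/* zero-length UNMAP succeeds per SBC */

	ret = file->f_op->fallocate(file, mode, pos, len);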
Signed-off-by: Jiang Yi <jiangyilism@gmail.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
tangwenji [Thu, 24 Aug 2017 11:59:37 +0000 (19:59 +0800)]
target:fix condition return in core_pr_dump_initiator_port()
[ Upstream commit
24528f089d0a444070aa4f715ace537e8d6bf168 ]
When pr_reg->isid_present_at_reg is false, this function should return.
This fixes a regression originally introduced by:
commit
d2843c173ee53cf4c12e7dfedc069a5bc76f0ac5
Author: Andy Grover <agrover@redhat.com>
Date: Thu May 16 10:40:55 2013 -0700
target: Alter core_pr_dump_initiator_port for ease of use
Signed-off-by: tangwenji <tang.wenji@zte.com.cn>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
tangwenji [Fri, 15 Sep 2017 08:03:13 +0000 (16:03 +0800)]
iscsi-target: fix memory leak in lio_target_tiqn_addtpg()
[ Upstream commit
12d5a43b2dffb6cd28062b4e19024f7982393288 ]
tpg must be freed when the call to core_tpg_register() fails.
Signed-off-by: tangwenji <tang.wenji@zte.com.cn>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Tue, 31 Oct 2017 18:03:17 +0000 (11:03 -0700)]
target/iscsi: Fix a race condition in iscsit_add_reject_from_cmd()
[ Upstream commit
cfe2b621bb18d86e93271febf8c6e37622da2d14 ]
Avoid that cmd->se_cmd.se_tfo is read after a command has already been
freed.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Mike Christie <mchristi@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Tue, 31 Oct 2017 18:03:18 +0000 (11:03 -0700)]
target/iscsi: Detect conn_cmd_list corruption early
[ Upstream commit
6eaf69e4ec075f5af236c0c89f75639a195db904 ]
Certain behavior of the initiator can cause the target driver to
send both a reject and a SCSI response. If that happens two
target_put_sess_cmd() calls will occur without the command having
been removed from conn_cmd_list. In other words, conn_cmd_list
will get corrupted once the freed memory is reused. Although the
Linux kernel can detect list corruption if list debugging is
enabled, in this case the context in which list corruption is
detected is not related to the context that caused list corruption.
Hence add WARN_ON() statements that report the context that is
causing list corruption.
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Mike Christie <mchristi@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kuppuswamy Sathyanarayanan [Sun, 29 Oct 2017 09:49:54 +0000 (02:49 -0700)]
platform/x86: intel_punit_ipc: Fix resource ioremap warning
[ Upstream commit
6cc8cbbc8868033f279b63e98b26b75eaa0006ab ]
For PUNIT device, ISPDRIVER_IPC and GTDDRIVER_IPC resources are not
mandatory. So when PMC IPC driver creates a PUNIT device, if these
resources are not available then it creates dummy resource entries for
these missing resources. But during PUNIT device probe, doing ioremap on
these dummy resources generates following warning messages.
intel_punit_ipc: can't request region for resource [mem 0x00000000]
intel_punit_ipc: can't request region for resource [mem 0x00000000]
intel_punit_ipc: can't request region for resource [mem 0x00000000]
intel_punit_ipc: can't request region for resource [mem 0x00000000]
This patch fixes this issue by adding extra check for resource size
before performing ioremap operation.
Signed-off-by: Kuppuswamy Sathyanarayanan <sathyanarayanan.kuppuswamy@linux.intel.com>
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Tyrel Datwyler [Fri, 29 Sep 2017 00:19:20 +0000 (20:19 -0400)]
powerpc/pseries/vio: Dispose of virq mapping on vdevice unregister
[ Upstream commit
b8f89fea599d91e674497aad572613eb63181f31 ]
When a vdevice is DLPAR removed from the system the vio subsystem
doesn't bother unmapping the virq from the irq_domain. As a result we
have a virq mapped to a hardware irq that is no longer valid for the
irq_domain. A side effect is that we are left with /proc/irq/<irq#>
affinity entries, and attempts to modify the smp_affinity of the irq
will fail.
In the following observed example the kernel log is spammed by
ics_rtas_set_affinity errors after the removal of a VSCSI adapter.
This is a result of irqbalance trying to adjust the affinity every 10
seconds.
rpadlpar_io: slot U8408.E8E.10A7ACV-V5-C25 removed
ics_rtas_set_affinity: ibm,set-xive irq=655385 returns -3
ics_rtas_set_affinity: ibm,set-xive irq=655385 returns -3
This patch fixes the issue by calling irq_dispose_mapping() on the
virq of the viodev on unregister.
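Roughly, the unregister path gains (a sketch; the exact placement in
the vio code may differ):
	void vio_unregister_device(struct vio_dev *viodev)
	{
		device_unregister(&viodev->dev);
		if (viodev->family == VDEVICE)
			irq_dispose_mapping(viodev->irq);
	}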
Fixes:
f2ab6219969f ("powerpc/pseries: Add PFO support to the VIO bus")
Signed-off-by: Tyrel Datwyler <tyreld@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christophe Leroy [Wed, 18 Oct 2017 09:16:47 +0000 (11:16 +0200)]
powerpc/ipic: Fix status get and status clear
[ Upstream commit
6b148a7ce72a7f87c81cbcde48af014abc0516a9 ]
IPIC Status is provided by register IPIC_SERSR and not by IPIC_SERMR
which is the mask register.
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
William A. Kennington III [Fri, 22 Sep 2017 23:58:00 +0000 (16:58 -0700)]
powerpc/opal: Fix EBUSY bug in acquiring tokens
[ Upstream commit
71e24d7731a2903b1ae2bba2b2971c654d9c2aa6 ]
The current code checks the completion map to look for the first token
that is complete. In some cases, a completion can come in but the
token can still be on lease to the caller processing the completion.
If this completed but unreleased token is the first token found in the
bitmap by another task trying to acquire a token, then the
__test_and_set_bit call will fail since the token will still be on
lease. The acquisition will then fail with an EBUSY.
This patch reorganizes the acquisition code to look at the
opal_async_token_map for an unleased token. If the token has no lease
it must have no outstanding completions so we should never see an
EBUSY, unless we have leased out too many tokens. Since
opal_async_get_token_interruptible is protected by a semaphore, we
will practically never see EBUSY anymore.
Fixes:
8d7248232208 ("powerpc/powernv: Infrastructure to support OPAL async completion")
Signed-off-by: William A. Kennington III <wak@google.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
KUWAZAWA Takuya [Sun, 15 Oct 2017 11:54:10 +0000 (20:54 +0900)]
netfilter: ipvs: Fix inappropriate output of procfs
[ Upstream commit
c5504f724c86ee925e7ffb80aa342cfd57959b13 ]
Information about ipvs in a different network namespace can be seen via procfs.
How to reproduce:
# ip netns add ns01
# ip netns add ns02
# ip netns exec ns01 ip a add dev lo 127.0.0.1/8
# ip netns exec ns02 ip a add dev lo 127.0.0.1/8
# ip netns exec ns01 ipvsadm -A -t 10.1.1.1:80
# ip netns exec ns02 ipvsadm -A -t 10.1.1.2:80
The ipvsadm displays information about its own network namespace only.
# ip netns exec ns01 ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.1:80 wlc
# ip netns exec ns02 ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.1.1.2:80 wlc
But I can see information about other network namespaces via procfs.
# ip netns exec ns01 cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  0A010101:0050 wlc
TCP  0A010102:0050 wlc
# ip netns exec ns02 cat /proc/net/ip_vs
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP  0A010102:0050 wlc
Signed-off-by: KUWAZAWA Takuya <albatross0@gmail.com>
Acked-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Gustavo A. R. Silva [Sun, 5 Nov 2017 04:52:54 +0000 (23:52 -0500)]
thunderbolt: tb: fix use after free in tb_activate_pcie_devices
[ Upstream commit
a2e373438f72391493a4425efc1b82030b6b4fd5 ]
Add a continue statement in order to avoid using a previously
freed pointer tunnel in list_add.
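The shape of the fix (a sketch; the helper names are illustrative):
	if (tb_pci_activate(tunnel)) {
		...
		tb_pci_free(tunnel);
		continue;	/* don't list_add() the tunnel we just freed */
	}
	list_add(&tunnel->list, &tb->tunnel_list);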
Addresses-Coverity-ID: 1415336
Fixes:
9d3cce0b6136 ("thunderbolt: Introduce thunderbolt bus and connection manager")
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Matthias Brugger [Mon, 30 Oct 2017 11:37:55 +0000 (12:37 +0100)]
iommu/mediatek: Fix driver name
[ Upstream commit
395df08d2e1de238a9c8c33fdcd0e2160efd63a9 ]
There exist two Mediatek iommu drivers for the two different
generations of the device. But both drivers have the same name
"mtk-iommu". This breaks the registration of the second driver:
Error: Driver 'mtk-iommu' is already registered, aborting...
Fix this by changing the name for the first generation to
"mtk-iommu-v1".
Fixes:
b17336c55d89 ("iommu/mediatek: add support for mtk iommu generation one HW")
Signed-off-by: Matthias Brugger <matthias.bgg@gmail.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mika Westerberg [Fri, 13 Oct 2017 18:35:43 +0000 (21:35 +0300)]
PCI: Do not allocate more buses than available in parent
[ Upstream commit
a20c7f36bd3d20d245616ae223bb9d05dfb6f050 ]
One can ask for more buses to be reserved for hotplug bridges by passing
pci=hpbussize=N on the kernel command line. If the parent bus does not
have enough bus space available, we incorrectly create a child bus with
the requested number of subordinate buses.
In the example below hpbussize is set to one more than we have available
buses in the root port:
pci 0000:07:00.0: [8086:1578] type 01 class 0x060400
pci 0000:07:00.0: scanning [bus 00-00] behind bridge, pass 0
pci 0000:07:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
pci 0000:07:00.0: scanning [bus 00-00] behind bridge, pass 1
pci_bus 0000:08: busn_res: can not insert [bus 08-ff] under [bus 07-3f] (conflicts with (null) [bus 07-3f])
pci_bus 0000:08: scanning bus
...
pci_bus 0000:0a: bus scan returning with max=40
pci_bus 0000:0a: busn_res: [bus 0a-ff] end is updated to 40
pci_bus 0000:0a: [bus 0a-40] partially hidden behind bridge 0000:07 [bus 07-3f]
pci_bus 0000:08: bus scan returning with max=40
pci_bus 0000:08: busn_res: [bus 08-ff] end is updated to 40
Instead of allowing this, limit the subordinate number to be less than or
equal the maximum subordinate number allocated for the parent bus (if it
has any).
Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com>
[bhelgaas: remove irrelevant dmesg messages]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Shriya [Fri, 13 Oct 2017 04:36:41 +0000 (10:06 +0530)]
powerpc/powernv/cpufreq: Fix the frequency read by /proc/cpuinfo
[ Upstream commit
cd77b5ce208c153260ed7882d8910f2395bfaabd ]
The call to /proc/cpuinfo in turn calls cpufreq_quick_get() which
returns the last frequency requested by the kernel, but may not
reflect the actual frequency the processor is running at. This patch
makes a call to cpufreq_get() instead which returns the current
frequency reported by the hardware.
Fixes:
fb5153d05a7d ("powerpc: powernv: Implement ppc_md.get_proc_freq()")
Signed-off-by: Shriya <shriyak@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Qiang [Thu, 28 Sep 2017 03:54:34 +0000 (11:54 +0800)]
PCI/PME: Handle invalid data when reading Root Status
[ Upstream commit
3ad3f8ce50914288731a3018b27ee44ab803e170 ]
PCIe PME and native hotplug share the same interrupt number, so hotplug
interrupts are also processed by PME. In some cases, e.g., a Link Down
interrupt, a device may be present but unreachable, so when we try to
read its Root Status register, the read fails and we get all ones data
(0xffffffff).
Previously, we interpreted that data as PCI_EXP_RTSTA_PME being set, i.e.,
"some device has asserted PME," so we scheduled pcie_pme_work_fn(). This
caused an infinite loop because pcie_pme_work_fn() tried to handle PME
requests until PCI_EXP_RTSTA_PME is cleared, but with the link down,
PCI_EXP_RTSTA_PME can't be cleared.
Check for the invalid 0xffffffff data everywhere we read the Root Status
register.
1469d17dd341 ("PCI: pciehp: Handle invalid data when reading from
non-existent devices") added similar checks in the hotplug driver.
Signed-off-by: Qiang Zheng <zhengqiang10@huawei.com>
[bhelgaas: changelog, also check in pcie_pme_work_fn(), use "~0" to follow
other similar checks]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Wei Yongjun [Mon, 6 Nov 2017 11:11:28 +0000 (11:11 +0000)]
mlxsw: spectrum: Fix error return code in mlxsw_sp_port_create()
[ Upstream commit
d86fd113ebbb37726ef7c7cc6fd6d5ce377455d6 ]
Fix to return a negative error code from the VID create error handling
case instead of 0, as done elsewhere in this function.
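The missing assignment, roughly (a sketch; the helper name is recalled
from that series and shown only for illustration):
	mlxsw_sp_port_vlan = mlxsw_sp_port_vlan_get(mlxsw_sp_port, 1);
	if (IS_ERR(mlxsw_sp_port_vlan)) {
		err = PTR_ERR(mlxsw_sp_port_vlan);	/* was left at 0 before */
		goto err_port_vlan_get;
	}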
Fixes:
c57529e1d5d8 ("mlxsw: spectrum: Replace vPorts with Port-VLAN")
Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Peter Ujfalusi [Wed, 8 Nov 2017 10:02:25 +0000 (12:02 +0200)]
dmaengine: ti-dma-crossbar: Correct am335x/am43xx mux value type
[ Upstream commit
288e7560e4d3e259aa28f8f58a8dfe63627a1bf6 ]
The 0x1f mask used is only valid for the am335x family of SoCs; a
different family using this type of crossbar might have a different
number of electable events. In the case of the am43xx family, for
example, a 0x3f mask should have been used.
Instead of trying to handle each family's mask, just use u8 type to store
the mux value since the event offsets are aligned to byte offset.
Fixes:
42dbdcc6bf965 ("dmaengine: ti-dma-crossbar: Add support for crossbar on AM33xx/AM43xx")
Signed-off-by: Peter Ujfalusi <peter.ujfalusi@ti.com>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pankaj Bharadiya [Tue, 7 Nov 2017 10:46:19 +0000 (16:16 +0530)]
ASoC: Intel: Skylake: Fix uuid_module memory leak in failure case
[ Upstream commit
f8e066521192c7debe59127d90abbe2773577e25 ]
In the loop that adds the uuid_module to the uuid_list list, allocated
memory is not properly freed in the error path. Free uuid_list whenever
any of the memory allocations in the loop fails, to avoid a memory leak.
Signed-off-by: Pankaj Bharadiya <pankaj.laxminarayan.bharadiya@intel.com>
Signed-off-by: Guneshwor Singh <guneshwor.o.singh@intel.com>
Acked-By: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Rajat Jain [Tue, 31 Oct 2017 21:44:24 +0000 (14:44 -0700)]
PM / s2idle: Clear the events_check_enabled flag
[ Upstream commit
95b982b45122c57da2ee0b46cce70775e1d987af ]
Problem: This flag currently does not get cleared in the suspend or
resume path in the following cases:
* when some driver's suspend routine returns an error
* in the successful s2idle case
* possibly other cases
Why is this a problem: the next suspend attempt could fail even though
the user did not enable the flag by writing to /sys/power/wakeup_count.
Here is one use case in which the issue can be seen (a similar use case
with a driver suspend failure can also be constructed):
1. Read /sys/power/wakeup_count
2. echo count > /sys/power/wakeup_count
3. echo freeze > /sys/power/state
4. Let the system suspend, then wake it up using a wake source that
calls pm_wakeup_event(), e.g. the power button.
5. Note that the combined wakeup count is incremented due to the
pm_wakeup_event() in the resume path.
6. After resuming, the events_check_enabled flag is still set.
At this point, if the user attempts to freeze again (without writing to
/sys/power/wakeup_count), the suspend fails even though there has been
no wake event since the previous resume.
Address that by clearing the flag just before a resume is completed,
so that it is always cleared for the corner cases mentioned above.
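A minimal sketch of clearing the flag at the end of resume (the placement and locking are assumptions; events_check_enabled and events_lock are the wakeup-framework symbols referred to above):
        unsigned long flags;

        spin_lock_irqsave(&events_lock, flags);
        events_check_enabled = false;   /* forget any stale wakeup_count arming */
        spin_unlock_irqrestore(&events_lock, flags);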
Signed-off-by: Rajat Jain <rajatja@google.com>
Acked-by: Pavel Machek <pavel@ucw.cz>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Pixel Ding [Wed, 8 Nov 2017 02:20:01 +0000 (10:20 +0800)]
drm/amdgpu: bypass lru touch for KIQ ring submission
[ Upstream commit
dce1e131dd4dc68099ff1b70aa03cd2d0acf8639 ]
KIQ ring submission is used for register access on an SRIOV VF, which
can happen both with irqs enabled and with irqs disabled. A lock
inversion could occur on adev->ring_lru_list_lock, while touching the
LRU is useless and just adds overhead in this use case.
Signed-off-by: Pixel Ding <Pixel.Ding@amd.com>
Reviewed-by: Monk Liu <Monk.Liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Arnd Bergmann [Tue, 7 Nov 2017 10:46:05 +0000 (11:46 +0100)]
scsi: aacraid: use timespec64 instead of timeval
[ Upstream commit
820f188659122602ab217dd80cfa32b3ac0c55c0 ]
aacraid passes the current time to the firmware in one of two ways,
either as year/month/day/... or as 32-bit unsigned seconds.
The first one is broken on 32-bit architectures as it cannot go past
year 2038. Using timespec64 here makes it behave properly on both 32-bit
and 64-bit architectures, and avoids relying on signed integer overflow
to pass times into the second interface.
The interface used in aac_send_hosttime() however is still problematic
in year 2106 when 32-bit seconds overflow. Hopefully we don't have to
worry about aacraid by that time.
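A rough sketch of the timespec64-based approach (variable names are illustrative; ktime_get_real_ts64() and time64_to_tm() are the generic helpers):
        struct timespec64 now;
        struct tm tm;
        u32 fw_seconds;

        ktime_get_real_ts64(&now);
        time64_to_tm(now.tv_sec, 0, &tm);       /* year/month/day path, 2038-safe */
        fw_seconds = lower_32_bits(now.tv_sec); /* 32-bit seconds interface, wraps in 2106 */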
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Dave Carroll <david.carroll@microsemi.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Philipp Zabel [Tue, 7 Nov 2017 12:12:17 +0000 (13:12 +0100)]
rtc: pcf8563: fix output clock rate
[ Upstream commit
a3350f9c57ffad569c40f7320b89da1f3061c5bb ]
The pcf8563_clkout_recalc_rate function erroneously ignores the
frequency index read from the CLKO register and always returns
32768 Hz.
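A minimal sketch of the intended behaviour (the mask and table names are assumptions modelled on the driver):
        buf &= PCF8563_REG_CLKO_F_MASK;         /* frequency index from CLKO */
        return clkout_rates[buf];               /* e.g. { 32768, 1024, 32, 1 } Hz */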
Fixes:
a39a6405d5f9 ("rtc: pcf8563: add CLKOUT to common clock framework")
Signed-off-by: Philipp Zabel <p.zabel@pengutronix.de>
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christophe JAILLET [Thu, 9 Nov 2017 17:09:28 +0000 (18:09 +0100)]
video: fbdev: au1200fb: Return an error code if a memory allocation fails
[ Upstream commit
8cae353e6b01ac3f18097f631cdbceb5ff28c7f3 ]
'ret' is known to be 0 at this point.
In case of memory allocation error in 'framebuffer_alloc()', return
-ENOMEM instead.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christophe JAILLET [Thu, 9 Nov 2017 17:09:28 +0000 (18:09 +0100)]
video: fbdev: au1200fb: Release some resources if a memory allocation fails
[ Upstream commit
451f130602619a17c8883dd0b71b11624faffd51 ]
We should go through the error handling code instead of returning -ENOMEM
directly.
Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Cc: Tejun Heo <tj@kernel.org>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Ladislav Michl [Thu, 9 Nov 2017 17:09:30 +0000 (18:09 +0100)]
video: udlfb: Fix read EDID timeout
[ Upstream commit
c98769475575c8a585f5b3952f4b5f90266f699b ]
While the usb_control_msg function expects a timeout in milliseconds, a
value of HZ is used. Replace it with USB_CTRL_GET_TIMEOUT and also fix
the error message, which looks like:
udlfb: Read EDID byte 78 failed err ffffff92
As the error is either a negative errno or the number of bytes
transferred, use the %d format specifier.
The returned EDID is in the second byte, so return an error when fewer
than two bytes are received.
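A rough sketch of the corrected control read (the request parameters req, reqtype, value and index are placeholders, not the driver's exact values; USB_CTRL_GET_TIMEOUT and the %d specifier are the points being illustrated):
        ret = usb_control_msg(usbdev, usb_rcvctrlpipe(usbdev, 0), req, reqtype,
                              value, index, rbuf, 2, USB_CTRL_GET_TIMEOUT);
        if (ret < 2) {                  /* EDID data is in the second byte */
                pr_err("Read EDID byte %d failed: %d\n", i, ret);
                return -EIO;            /* assumed error handling */
        }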
Fixes:
18dffdf8913a ("staging: udlfb: enhance EDID and mode handling support")
Signed-off-by: Ladislav Michl <ladis@linux-mips.org>
Cc: Bernie Thompson <bernie@plugable.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Geert Uytterhoeven [Thu, 9 Nov 2017 17:09:33 +0000 (18:09 +0100)]
fbdev: controlfb: Add missing modes to fix out of bounds access
[ Upstream commit
ac831a379d34109451b3c41a44a20ee10ecb615f ]
Dan's static analysis says:
drivers/video/fbdev/controlfb.c:560 control_setup()
error: buffer overflow 'control_mac_modes' 20 <= 21
Indeed, control_mac_modes[] has only 20 elements, while VMODE_MAX is 22,
which may lead to an out of bounds read when parsing vmode commandline
options.
The bug was introduced in v2.4.5.6, when 2 new modes were added to
macmodes.h, but control_mac_modes[] wasn't updated:
https://kernel.opensuse.org/cgit/kernel/diff/include/video/macmodes.h?h=v2.5.2&id=29f279c764808560eaceb88fef36cbc35c529aad
Augment control_mac_modes[] with the two new video modes to fix this.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Robert Stonehouse [Tue, 7 Nov 2017 17:30:30 +0000 (17:30 +0000)]
sfc: don't warn on successful change of MAC
[ Upstream commit
cbad52e92ad7f01f0be4ca58bde59462dc1afe3a ]
Fixes:
535a61777f44e ("sfc: suppress handled MCDI failures when changing the MAC address")
Signed-off-by: Bert Kenward <bkenward@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Sébastien Szymanski [Fri, 10 Nov 2017 09:01:43 +0000 (10:01 +0100)]
HID: cp2112: fix broken gpio_direction_input callback
[ Upstream commit
7da85fbf1c87d4f73621e0e7666a3387497075a9 ]
When everything goes smoothly, ret is set to 0, which makes the function
return an EIO error.
Fixes:
8e9faa15469e ("HID: cp2112: fix gpio-callback error handling")
Signed-off-by: Sébastien Szymanski <sebastien.szymanski@armadeus.com>
Reviewed-by: Benjamin Tissoires <benjamin.tissoires@redhat.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Guy Levi [Wed, 25 Oct 2017 19:39:35 +0000 (22:39 +0300)]
IB/mlx4: Fix RSS's QPC attributes assignments
[ Upstream commit
108809a0571cd1e1b317c5c083a371e163e1f8f9 ]
In the modify QP handler, the base_qpn_udp field in the RSS QPC is
later overwritten by an irrelevant value assignment. Hence, ingress
packets which get to the RSS QP will then be steered to a garbage QPN.
The patch fixes this by skipping the above assignment when an RSS QP is
modified; also, the RSS context's attribute assignments are relocated
to just before the context is posted, to avoid future issues like this.
Additionally, this patch takes the opportunity to make the code follow
the device's manual and assigns the RSS QP context only at the RESET to
INIT transition.
Fixes:
3078f5f1bd8b ("IB/mlx4: Add support for RSS QP")
Signed-off-by: Guy Levi <guyle@mellanox.com>
Reviewed-by: Yishai Hadas <yishaih@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chandan Rajendra [Mon, 11 Dec 2017 20:00:57 +0000 (15:00 -0500)]
ext4: fix crash when a directory's i_size is too small
commit
9d5afec6b8bd46d6ed821aa1579634437f58ef1f upstream.
On a ppc64 machine, when mounting a fuzzed ext2 image (generated by
fsfuzzer) the following call trace is seen,
VFS: brelse: Trying to free free buffer
WARNING: CPU: 1 PID: 6913 at /root/repos/linux/fs/buffer.c:1165 .__brelse.part.6+0x24/0x40
.__brelse.part.6+0x20/0x40 (unreliable)
.ext4_find_entry+0x384/0x4f0
.ext4_lookup+0x84/0x250
.lookup_slow+0xdc/0x230
.walk_component+0x268/0x400
.path_lookupat+0xec/0x2d0
.filename_lookup+0x9c/0x1d0
.vfs_statx+0x98/0x140
.SyS_newfstatat+0x48/0x80
system_call+0x58/0x6c
This happens because the directory that ext4_find_entry() looks up has
inode->i_size that is less than the block size of the filesystem. This
causes 'nblocks' to have a value of zero. ext4_bread_batch() ends up not
reading any of the directory file's blocks. This leaves the entries in
the bh_use[] array holding garbage data. buffer_uptodate() on bh_use[0]
can then return a zero value, upon which the brelse() function is
invoked.
This commit fixes the bug by returning -ENOENT when the directory file
has no associated blocks.
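A minimal sketch of the check described above (simplified; not the exact ext4_find_entry() code):
        nblocks = dir->i_size >> EXT4_BLOCK_SIZE_BITS(sb);
        if (nblocks == 0)
                return ERR_PTR(-ENOENT);        /* no data blocks; nothing can match */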
Reported-by: Abdul Haleem <abdhalee@linux.vnet.ibm.com>
Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Theodore Ts'o [Mon, 11 Dec 2017 04:44:11 +0000 (23:44 -0500)]
ext4: add missing error check in __ext4_new_inode()
commit
996fc4477a0ea28226b30d175f053fb6f9a4fa36 upstream.
It's possible for ext4_get_acl() to return an ERR_PTR. So we need to
add a check for this case in __ext4_new_inode(). Otherwise, on an
error, we can end up oopsing the kernel.
This was getting triggered by xfstests generic/388, which is a test
which exercises the shutdown code path.
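A minimal sketch of the missing check (simplified; the real call sits inside __ext4_new_inode()):
        struct posix_acl *acl = ext4_get_acl(dir, ACL_TYPE_DEFAULT);

        if (IS_ERR(acl))
                return ERR_CAST(acl);   /* propagate the error instead of oopsing */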
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Eryu Guan [Mon, 4 Dec 2017 03:52:51 +0000 (22:52 -0500)]
ext4: fix fdatasync(2) after fallocate(2) operation
commit
c894aa97577e47d3066b27b32499ecf899bfa8b0 upstream.
Currently, after an fallocate(2) with KEEP_SIZE followed by an
fdatasync(2) and then a crash, we'll see a wrong allocated block count
(stat -c %b); the blocks allocated beyond EOF are all lost. fstests
generic/468 exposes this bug.
Commit
67a7d5f561f4 ("ext4: fix fdatasync(2) after extent
manipulation operations") fixed all the other extent manipulation
operation paths such as hole punch, zero range, collapse range etc.,
but forgot the fallocate case.
So similarly, fix it by recording the correct journal tid in ext4
inode in fallocate(2) path, so that ext4_sync_file() will wait for
the right tid to be committed on fdatasync(2).
This addresses the test failure in xfstests test generic/468.
Signed-off-by: Eryu Guan <eguan@redhat.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Andi Kleen [Mon, 4 Dec 2017 01:38:01 +0000 (20:38 -0500)]
ext4: support fast symlinks from ext3 file systems
commit
fc82228a5e3860502dbf3bfa4a9570cb7093cf7f upstream.
407cd7fb83c0 (ext4: change fast symlink test to not rely on i_blocks)
broke ~10-year-old ext3 file systems created by 2.6.17. Any ELF
executable fails because the /lib/ld-linux.so.2 fast symlink
cannot be read anymore.
The patch assumed fast symlinks were created in a specific way,
but that's not true on these really old file systems.
The new behavior is apparently needed only with the large EA inode
feature.
Revert to the old behavior if the large EA inode feature is not set.
This makes my old VM boot again.
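A rough sketch of the described behaviour (simplified; the exact test in ext4_inode_is_fast_symlink() may differ):
        if (!ext4_has_feature_ea_inode(inode->i_sb)) {
                /* historical test: no data blocks beyond any xattr block */
                int ea_blocks = EXT4_I(inode)->i_file_acl ?
                        EXT4_CLUSTER_SIZE(inode->i_sb) >> 9 : 0;

                return S_ISLNK(inode->i_mode) && inode->i_blocks - ea_blocks == 0;
        }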
Fixes:
407cd7fb83c0 (ext4: change fast symlink test to not rely on i_blocks)
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Andreas Dilger <adilger@dilger.ca>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kees Cook [Tue, 12 Dec 2017 19:28:38 +0000 (11:28 -0800)]
Revert "exec: avoid RLIMIT_STACK races with prlimit()"
commit
779f4e1c6c7c661db40dfebd6dd6bda7b5f88aa3 upstream.
This reverts commit
04e35f4495dd560db30c25efca4eecae8ec8c375.
SELinux runs with secureexec for all non-"noatsecure" domain transitions,
which means lots of processes end up hitting the stack hard-limit change
that was introduced in order to fix a race with prlimit(). That race fix
will need to be redesigned.
Reported-by: Laura Abbott <labbott@redhat.com>
Reported-by: Tomáš Trnka <trnka@scm.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Adam Wallis [Mon, 27 Nov 2017 15:45:01 +0000 (10:45 -0500)]
dmaengine: dmatest: move callback wait queue to thread context
commit
6f6a23a213be51728502b88741ba6a10cda2441d upstream.
Commit
adfa543e7314 ("dmatest: don't use set_freezable_with_signal()")
introduced a bug (that is in fact documented by the patch commit text)
that leaves behind a dangling pointer. Since the done_wait structure is
allocated on the stack, future invocations to the DMATEST can produce
undesirable results (e.g., corrupted spinlocks).
Commit
a9df21e34b42 ("dmaengine: dmatest: warn user when dma test times
out") attempted to WARN the user that the stack was likely corrupted but
did not fix the actual issue.
This patch fixes the issue by pushing the wait queue and callback
structs into the thread structure. If a failure occurs due to a timeout,
dmaengine_terminate_all will force the callback to safely call
wake_up_all() without possibility of using a freed pointer.
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=197605
Fixes:
adfa543e7314 ("dmatest: don't use set_freezable_with_signal()")
Reviewed-by: Sinan Kaya <okaya@codeaurora.org>
Suggested-by: Shunyong Yang <shunyong.yang@hxt-semitech.com>
Signed-off-by: Adam Wallis <awallis@codeaurora.org>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thomas Gleixner [Fri, 15 Dec 2017 09:32:03 +0000 (10:32 +0100)]
posix-timer: Properly check sigevent->sigev_notify
commit
cef31d9af908243421258f1df35a4a644604efbe upstream.
timer_create() specifies via sigevent->sigev_notify the signal delivery for
the new timer. The valid modes are SIGEV_NONE, SIGEV_SIGNAL, SIGEV_THREAD
and (SIGEV_SIGNAL | SIGEV_THREAD_ID).
The sanity check in good_sigevent() is only checking the valid combination
for the SIGEV_THREAD_ID bit, i.e. SIGEV_SIGNAL, but if SIGEV_THREAD_ID is
not set it accepts any random value.
This has no real effects on the posix timer and signal delivery code, but
it affects show_timer() which handles the output of /proc/$PID/timers. That
function uses a string array to pretty print sigev_notify. The access to
that array has no bound checks, so random sigev_notify cause access beyond
the array bounds.
Add proper checks for the valid notify modes and remove the SIGEV_THREAD_ID
masking from various code paths, as SIGEV_NONE can never be set in
combination with SIGEV_THREAD_ID.
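A rough sketch of the validation described above (simplified from good_sigevent()-style logic):
        struct task_struct *rtn = current->group_leader;

        switch (event->sigev_notify) {
        case SIGEV_SIGNAL | SIGEV_THREAD_ID:
                rtn = find_task_by_vpid(event->sigev_notify_thread_id);
                if (!rtn || !same_thread_group(rtn, current))
                        return NULL;
                /* fall through to the signal number check */
        case SIGEV_SIGNAL:
        case SIGEV_THREAD:
                if (event->sigev_signo <= 0 || event->sigev_signo > SIGRTMAX)
                        return NULL;
                /* fall through */
        case SIGEV_NONE:
                return rtn;
        default:
                return NULL;            /* reject any other sigev_notify value */
        }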
Reported-by: Eric Biggers <ebiggers3@gmail.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reported-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: John Stultz <john.stultz@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Lechner [Mon, 4 Dec 2017 01:54:41 +0000 (19:54 -0600)]
eeprom: at24: change nvmem stride to 1
commit
7f6d2ecd3d7acaf205ea7b3e96f9ffc55b92298b upstream.
Trying to read the MAC address from an eeprom that has an offset that
is not a multiple of 4 causes an error currently.
Fix it by changing the nvmem stride to 1.
Signed-off-by: David Lechner <david@lechnology.com>
[Bartosz: tweaked the commit message]
Signed-off-by: Bartosz Golaszewski <brgl@bgdev.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kirill A. Shutemov [Mon, 4 Dec 2017 12:40:56 +0000 (15:40 +0300)]
x86/boot/compressed/64: Print error if 5-level paging is not supported
commit
6d7e0ba2d2be9e50cccba213baf07e0e183c1b24 upstream.
If the machine does not support the paging mode for which the kernel was
compiled, the boot process cannot continue.
It's not possible to let the kernel detect the mismatch as it does not even
reach the point where cpu features can be evaluated, due to a triple fault in
the KASLR setup.
Instead of instantaneous silent reboot, emit an error message which gives
the user the information why the boot fails.
Fixes:
77ef56e4f0fb ("x86: Enable 5-level paging support via CONFIG_X86_5LEVEL=y")
Reported-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Borislav Petkov <bp@suse.de>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: linux-mm@kvack.org
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lkml.kernel.org/r/20171204124059.63515-3-kirill.shutemov@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Kirill A. Shutemov [Mon, 4 Dec 2017 12:40:55 +0000 (15:40 +0300)]
x86/boot/compressed/64: Detect and handle 5-level paging at boot-time
commit
08529078d8d9adf689bf39cc38d53979a0869970 upstream.
Prerequisite for fixing the current problem of instantaneous reboots when a
5-level paging kernel is booted on 4-level paging hardware.
At the same time this change prepares the decompression code to boot-time
switching between 4- and 5-level paging.
[ tglx: Folded the GCC < 5 fix. ]
Fixes:
77ef56e4f0fb ("x86: Enable 5-level paging support via CONFIG_X86_5LEVEL=y")
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andi Kleen <ak@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: linux-mm@kvack.org
Cc: Cyrill Gorcunov <gorcunov@openvz.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lkml.kernel.org/r/20171204124059.63515-2-kirill.shutemov@linux.intel.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steve Wise [Mon, 27 Nov 2017 21:16:32 +0000 (13:16 -0800)]
iw_cxgb4: only insert drain cqes if wq is flushed
commit
c058ecf6e455fac7346d46197a02398ead90851f upstream.
Only insert our special drain CQEs to support ib_drain_sq/rq() after
the wq is flushed. Otherwise, existing but not yet polled CQEs can be
returned out of order to the user application. This can happen when the
QP has exited RTS but has not yet been flushed, which can happen during
a normal close (vs an abortive close).
In addition never count the drain CQEs when determining how many CQEs
need to be synthesized during the flush operation. This latter issue
should never happen if the QP is properly flushed before inserting the
drain CQE, but I wanted to avoid corrupting the CQ state. So we handle
it and log a warning once.
Fixes:
4fe7c2962e11 ("iw_cxgb4: refactor sq/rq drain logic")
Signed-off-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Trond Myklebust [Fri, 15 Dec 2017 02:24:08 +0000 (21:24 -0500)]
SUNRPC: Fix a race in the receive code path
commit
90d91b0cd371193d9dbfa9beacab8ab9a4cb75e0 upstream.
We must ensure that the call to rpc_sleep_on() in xprt_transmit() cannot
race with the call to xprt_complete_rqst().
Reported-by: Chuck Lever <chuck.lever@oracle.com>
Link: https://bugzilla.linux-nfs.org/show_bug.cgi?id=317
Fixes:
ce7c252a8c74 ("SUNRPC: Add a separate spinlock to protect..")
Reviewed-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
monty_pavel@sina.com [Fri, 24 Nov 2017 17:43:50 +0000 (01:43 +0800)]
dm: fix various targets to dm_register_target after module __init resources created
commit
7e6358d244e4706fe612a77b9c36519a33600ac0 upstream.
A NULL pointer is seen if two concurrent "vgchange -ay -K <vg name>"
processes race to load the dm-thin-pool module:
PID: 25992  TASK: ffff883cd7d23500  CPU: 4  COMMAND: "vgchange"
 #0 [ffff883cd743d600] machine_kexec at ffffffff81038fa9
 #1 [ffff883cd743d660] crash_kexec at ffffffff810c5992
 #2 [ffff883cd743d730] oops_end at ffffffff81515c90
 #3 [ffff883cd743d760] no_context at ffffffff81049f1b
 #4 [ffff883cd743d7b0] __bad_area_nosemaphore at ffffffff8104a1a5
 #5 [ffff883cd743d800] bad_area at ffffffff8104a2ce
 #6 [ffff883cd743d830] __do_page_fault at ffffffff8104aa6f
 #7 [ffff883cd743d950] do_page_fault at ffffffff81517bae
 #8 [ffff883cd743d980] page_fault at ffffffff81514f95
    [exception RIP: kmem_cache_alloc+108]
    RIP: ffffffff8116ef3c  RSP: ffff883cd743da38  RFLAGS: 00010046
    RAX: 0000000000000004  RBX: ffffffff81121b90  RCX: ffff881bf1e78cc0
    RDX: 0000000000000000  RSI: 00000000000000d0  RDI: 0000000000000000
    RBP: ffff883cd743da68   R8: ffff881bf1a4eb00   R9: 0000000080042000
    R10: 0000000000002000  R11: 0000000000000000  R12: 00000000000000d0
    R13: 0000000000000000  R14: 00000000000000d0  R15: 0000000000000246
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
 #9 [ffff883cd743da70] mempool_alloc_slab at ffffffff81121ba5
#10 [ffff883cd743da80] mempool_create_node at ffffffff81122083
#11 [ffff883cd743dad0] mempool_create at ffffffff811220f4
#12 [ffff883cd743dae0] pool_ctr at ffffffffa08de049 [dm_thin_pool]
#13 [ffff883cd743dbd0] dm_table_add_target at ffffffffa0005f2f [dm_mod]
#14 [ffff883cd743dc30] table_load at ffffffffa0008ba9 [dm_mod]
#15 [ffff883cd743dc90] ctl_ioctl at ffffffffa0009dc4 [dm_mod]
The race results in a NULL pointer because:
Process A (vgchange -ay -K):
a. send DM_LIST_VERSIONS_CMD ioctl;
b. pool_target not registered;
c. modprobe dm_thin_pool and wait until end.
Process B (vgchange -ay -K):
a. send DM_LIST_VERSIONS_CMD ioctl;
b. pool_target registered;
c. table_load->dm_table_add_target->pool_ctr;
d. _new_mapping_cache is NULL and panic.
Note:
1. process A and process B are two concurrent processes.
2. pool_target can be detected by process B but
_new_mapping_cache initialization has not ended.
To fix dm-thin-pool, and other targets (cache, multipath, and snapshot)
with the same problem, simply call dm_register_target() after all
resources created during module init (as labelled with __init) have
been set up.
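A rough sketch of the init ordering described above, using dm-thin-pool as the example (symbol names are approximate):
static int __init dm_thin_init(void)
{
        int r;

        _new_mapping_cache = KMEM_CACHE(dm_thin_new_mapping, 0);
        if (!_new_mapping_cache)
                return -ENOMEM;

        /* register last, so a concurrent table_load cannot reach pool_ctr
         * before the cache exists */
        r = dm_register_target(&pool_target);
        if (r)
                kmem_cache_destroy(_new_mapping_cache);
        return r;
}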
Signed-off-by: monty <monty_pavel@sina.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steven Rostedt [Sat, 2 Dec 2017 18:04:54 +0000 (13:04 -0500)]
sched/rt: Do not pull from current CPU if only one CPU to pull
commit
f73c52a5bcd1710994e53fbccc378c42b97a06b6 upstream.
Daniel Wagner reported a crash on the BeagleBone Black SoC.
This is a single CPU architecture, and does not have a functional
arch_send_call_function_single_ipi() implementation which can crash
the kernel if that is called.
As it only has one CPU, it shouldn't be called, but if the kernel is
compiled for SMP, the push/pull RT scheduling logic now calls it for
irq_work; if the one CPU is overloaded, it can use that function to
call itself and crash the kernel.
Ideally, we should disable the SCHED_FEAT(RT_PUSH_IPI) if the system
only has a single CPU. But SCHED_FEAT is a constant if sched debugging
is turned off. Another fix can also be used, and this should also help
with normal SMP machines. That is, do not initiate the pull code if
there's only one RT overloaded CPU, and that CPU happens to be the
current CPU that is scheduling in a lower priority task.
Even on a system with many CPUs, if there's many RT tasks waiting to
run on a single CPU, and that CPU schedules in another RT task of lower
priority, it will initiate the PULL logic in case there's a higher
priority RT task on another CPU that is waiting to run. But if there is
no other CPU with waiting RT tasks, it will initiate the RT pull logic
on itself (as it still has RT tasks waiting to run). This is a wasted
effort.
Not only does this help with SMP code where the current CPU is the only
one with RT overloaded tasks, it should also solve the issue that
Daniel encountered, because it will prevent the PULL logic from
executing, as there's only one CPU on the system, and the check added
here will cause it to exit the RT pull code.
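A minimal sketch of the added check (names follow the rt scheduling code but are approximations):
        /* If we are the only overloaded CPU, there is nothing to pull */
        if (rt_overload_count == 1 &&
            cpumask_test_cpu(this_rq->cpu, this_rq->rd->rto_mask))
                return;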
Reported-by: Daniel Wagner <wagi@monom.org>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
Fixes:
4bdced5c9 ("sched/rt: Simplify the IPI based RT balancing logic")
Link: http://lkml.kernel.org/r/20171202130454.4cbbfe8d@vmware.local.home
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Jason Yan [Mon, 11 Dec 2017 07:03:33 +0000 (15:03 +0800)]
scsi: libsas: fix length error in sas_smp_handler()
commit
621f6401fdeefe96dfe9eab4b167c7c39f552bb0 upstream.
The return value of smp_execute_task_sg() is the untransferred residual,
but bsg_job_done() requires the length of the payload received. This
makes SMP passthrough commands sent from userland to libsas via the sg
ioctl get a wrong response. Userland tools such as smp_utils fail
because of these wrong responses:
~#smp_discover /dev/bsg/expander-2\:13
response too short, len=0
~#smp_discover /dev/bsg/expander-2\:134
response too short, len=0
Fix this by passing the actual received length to bsg_job_done(). And if
smp_execute_task_sg() returns 0, this means received length is exactly
the buffer length.
[mkp: typo]
Fixes:
651a01364994 ("scsi: scsi_transport_sas: switch to bsg-lib for SMP passthrough")
Signed-off-by: Jason Yan <yanaijie@huawei.com>
Reported-by: chenqilin <chenqilin2@huawei.com>
Tested-by: chenqilin <chenqilin2@huawei.com>
CC: Christoph Hellwig <hch@lst.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Bart Van Assche [Wed, 6 Dec 2017 00:57:51 +0000 (16:57 -0800)]
scsi: core: Fix a scsi_show_rq() NULL pointer dereference
commit
14e3062fb18532175af4d1c4073597999f7a2248 upstream.
Avoid that scsi_show_rq() triggers a NULL pointer dereference if called
after sd_uninit_command(). Swap the NULL pointer assignment and the
mempool_free() call in sd_uninit_command() to make it less likely that
scsi_show_rq() triggers a use-after-free. Note: even with these changes
scsi_show_rq() can trigger a use-after-free but that's a lesser evil
than e.g. suppressing debug information for T10 PI Type 2 commands
completely. This patch fixes the following oops:
BUG: unable to handle kernel NULL pointer dereference at (null)
IP: scsi_format_opcode_name+0x1a/0x1c0
CPU: 1 PID: 1881 Comm: cat Not tainted 4.14.0-rc2.blk_mq_io_hang+ #516
Call Trace:
__scsi_format_command+0x27/0xc0
scsi_show_rq+0x5c/0xc0
__blk_mq_debugfs_rq_show+0x116/0x130
blk_mq_debugfs_rq_show+0xe/0x10
seq_read+0xfe/0x3b0
full_proxy_read+0x54/0x90
__vfs_read+0x37/0x160
vfs_read+0x96/0x130
SyS_read+0x55/0xc0
entry_SYSCALL_64_fastpath+0x1a/0xa5
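A minimal sketch of the reordering described above (simplified from sd_uninit_command(); sd_cdb_pool is the driver's mempool):
        unsigned char *cmnd = SCpnt->cmnd;

        SCpnt->cmnd = NULL;     /* clear before freeing so scsi_show_rq() sees NULL */
        SCpnt->cmd_len = 0;
        mempool_free(cmnd, sd_cdb_pool);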
[mkp: added Type 2]
Fixes:
0eebd005dd07 ("scsi: Implement blk_mq_ops.show_rq()")
Reported-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: James E.J. Bottomley <jejb@linux.vnet.ibm.com>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Ming Lei <ming.lei@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Hannes Reinecke <hare@suse.com>
Cc: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mark Rutland [Wed, 13 Dec 2017 11:45:42 +0000 (11:45 +0000)]
arm64: fix CONFIG_DEBUG_WX address reporting
commit
1d08a044cf12aee37dfd54837558e3295287b343 upstream.
In ptdump_check_wx(), we pass walk_pgd() a start address of 0 (rather
than VA_START) for the init_mm. This means that any reported W&X
addresses are offset by VA_START, which is clearly wrong and can make
them appear like userspace addresses.
Fix this by telling the ptdump code that we're walking init_mm starting
at VA_START. We don't need to update the addr_markers, since these are
still valid bounds regardless.
Fixes:
1404d6f13e47 ("arm64: dump: Add checking for writable and exectuable pages")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Laura Abbott <labbott@redhat.com>
Reported-by: Timur Tabi <timur@codeaurora.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steve Capper [Mon, 4 Dec 2017 14:13:05 +0000 (14:13 +0000)]
arm64: Initialise high_memory global variable earlier
commit
f24e5834a2c3f6c5f814a417f858226f0a010ade upstream.
The high_memory global variable is used by
cma_declare_contiguous(.) before it is defined.
We don't notice this as we compute __pa(high_memory - 1), and it looks
like we're processing a VA from the direct linear map.
This problem becomes apparent when we flip the kernel virtual address
space and the linear map is moved to the bottom of the kernel VA space.
This patch moves the initialisation of high_memory to before it is used.
Fixes:
f7426b983a6a ("mm: cma: adjust address limit to avoid hitting low/high memory boundary")
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Steve Capper [Fri, 1 Dec 2017 17:22:14 +0000 (17:22 +0000)]
arm64: mm: Fix pte_mkclean, pte_mkdirty semantics
commit
8781bcbc5e69d7da69e84c7044ca0284848d5d01 upstream.
On systems with hardware dirty bit management, the ltp madvise09 unit
test fails due to dirty bit information being lost and pages being
incorrectly freed.
This was bisected to:
arm64: Ignore hardware dirty bit updates in ptep_set_wrprotect()
Reverting this commit leads to a separate problem, that the unit test
retains pages that should have been dropped due to the function
madvise_free_pte_range(.) not cleaning pte's properly.
Currently pte_mkclean only clears the software dirty bit, thus the
following code sequence can appear:
pte = pte_mkclean(pte);
if (pte_dirty(pte))
// this condition can return true with HW DBM!
This patch also adjusts pte_mkclean to set PTE_RDONLY thus effectively
clearing both the SW and HW dirty information.
In order for this to function on systems without HW DBM, we need to
also adjust pte_mkdirty to remove the read only bit from writable pte's
to avoid infinite fault loops.
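A rough sketch of the adjusted helpers (simplified; the arm64 clear_pte_bit()/set_pte_bit() helpers and the PTE_DIRTY/PTE_RDONLY bits are assumed):
static inline pte_t pte_mkclean(pte_t pte)
{
        pte = clear_pte_bit(pte, __pgprot(PTE_DIRTY));
        return set_pte_bit(pte, __pgprot(PTE_RDONLY));  /* also clears HW dirty info */
}

static inline pte_t pte_mkdirty(pte_t pte)
{
        pte = set_pte_bit(pte, __pgprot(PTE_DIRTY));
        if (pte_write(pte))
                pte = clear_pte_bit(pte, __pgprot(PTE_RDONLY)); /* avoid fault loops */
        return pte;
}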
Fixes:
64c26841b349 ("arm64: Ignore hardware dirty bit updates in ptep_set_wrprotect()")
Reported-by: Bhupinder Thakur <bhupinder.thakur@linaro.org>
Tested-by: Bhupinder Thakur <bhupinder.thakur@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Steve Capper <steve.capper@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Scott Mayhew [Fri, 8 Dec 2017 21:00:12 +0000 (16:00 -0500)]
nfs: don't wait on commit in nfs_commit_inode() if there were no commit requests
commit
dc4fd9ab01ab379ae5af522b3efd4187a7c30a31 upstream.
If there were no commit requests, then nfs_commit_inode() should not
wait on the commit or mark the inode dirty, otherwise the following
BUG_ON can be triggered:
[ 1917.130762] kernel BUG at fs/inode.c:578!
[ 1917.130766] Oops: Exception in kernel mode, sig: 5 [#1]
[ 1917.130768] SMP NR_CPUS=2048 NUMA pSeries
[ 1917.130772] Modules linked in: iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi blocklayoutdriver rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver nfs lockd grace fscache sunrpc sg nx_crypto pseries_rng ip_tables xfs libcrc32c sd_mod crc_t10dif crct10dif_generic crct10dif_common ibmvscsi scsi_transport_srp ibmveth scsi_tgt dm_mirror dm_region_hash dm_log dm_mod
[ 1917.130805] CPU: 2 PID: 14923 Comm: umount.nfs4 Tainted: G ------------ T 3.10.0-768.el7.ppc64 #1
[ 1917.130810] task: c0000005ecd88040 ti: c00000004cea0000 task.ti: c00000004cea0000
[ 1917.130813] NIP: c000000000354178 LR: c000000000354160 CTR: c00000000012db80
[ 1917.130816] REGS: c00000004cea3720 TRAP: 0700 Tainted: G ------------ T (3.10.0-768.el7.ppc64)
[ 1917.130820] MSR: 8000000100029032 <SF,EE,ME,IR,DR,RI> CR: 22002822 XER: 20000000
[ 1917.130828] CFAR: c00000000011f594 SOFTE: 1
GPR00: c000000000354160 c00000004cea39a0 c0000000014c4700 c0000000018cc750
GPR04: 000000000000c750 80c0000000000000 0600000000000000 04eeb76bea749a03
GPR08: 0000000000000034 c0000000018cc758 0000000000000001 d000000005e619e8
GPR12: c00000000012db80 c000000007b31200 0000000000000000 0000000000000000
GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
GPR20: 0000000000000000 0000000000000000 0000000000000000 0000000000000000
GPR24: 0000000000000000 c000000000dfc3ec 0000000000000000 c0000005eefc02c0
GPR28: d0000000079dbd50 c0000005b94a02c0 c0000005b94a0250 c0000005b94a01c8
[ 1917.130867] NIP [c000000000354178] .evict+0x1c8/0x350
[ 1917.130871] LR [c000000000354160] .evict+0x1b0/0x350
[ 1917.130873] Call Trace:
[ 1917.130876] [c00000004cea39a0] [c000000000354160] .evict+0x1b0/0x350 (unreliable)
[ 1917.130880] [c00000004cea3a30] [c0000000003558cc] .evict_inodes+0x13c/0x270
[ 1917.130884] [c00000004cea3af0] [c000000000327d20] .kill_anon_super+0x70/0x1e0
[ 1917.130896] [c00000004cea3b80] [d000000005e43e30] .nfs_kill_super+0x20/0x60 [nfs]
[ 1917.130900] [c00000004cea3c00] [c000000000328a20] .deactivate_locked_super+0xa0/0x1b0
[ 1917.130903] [c00000004cea3c80] [c00000000035ba54] .cleanup_mnt+0xd4/0x180
[ 1917.130907] [c00000004cea3d10] [c000000000119034] .task_work_run+0x114/0x150
[ 1917.130912] [c00000004cea3db0] [c00000000001ba6c] .do_notify_resume+0xcc/0x100
[ 1917.130916] [c00000004cea3e30] [c00000000000a7b0] .ret_from_except_lite+0x5c/0x60
[ 1917.130919] Instruction dump:
[ 1917.130921] 7fc3f378 486734b5 60000000 387f00a0 38800003 4bdcb365 60000000 e95f00a0
[ 1917.130927] 694a0060 7d4a0074 794ad182 694a0001 <0b0a0000> 892d02a4 2f890000 40de0134
Signed-off-by: Scott Mayhew <smayhew@redhat.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Jurgens [Tue, 5 Dec 2017 20:30:02 +0000 (22:30 +0200)]
IB/core: Don't enforce PKey security on SMI MADs
commit
0fbe8f575b15585eec3326e43708fbbc024e8486 upstream.
Per the infiniband spec an SMI MAD can have any PKey. Checking the pkey
on SMI MADs is not necessary, and it seems that some older adapters
using the mthca driver don't follow the convention of using the default
PKey, resulting in false denials, or errors querying the PKey cache.
SMI MAD security is still enforced, only agents allowed to manage the
subnet are able to receive or send SMI MADs.
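A minimal sketch of the exemption described above (the surrounding enforcement function is approximate; IB_QPT_SMI is the relevant QP type):
        if (map->agent.qp->qp_type == IB_QPT_SMI)
                return 0;       /* SMI MADs may carry any PKey; skip the pkey check */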
Reported-by: Chris Blake <chrisrblake93@gmail.com>
Fixes:
47a2b338fe63 ("IB/core: Enforce security on management datagrams")
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Daniel Jurgens [Tue, 5 Dec 2017 20:30:01 +0000 (22:30 +0200)]
IB/core: Bound check alternate path port number
commit
4cae8ff136782d77b108cb3a5ba53e60597ba3a6 upstream.
The alternate port number is used as an array index in the IB
security implementation, invalid values can result in a kernel panic.
Fixes:
d291f1a65232 ("IB/core: Enforce PKey security on QPs")
Signed-off-by: Daniel Jurgens <danielj@mellanox.com>
Reviewed-by: Parav Pandit <parav@mellanox.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Mathias Nyman [Fri, 8 Dec 2017 16:10:05 +0000 (18:10 +0200)]
xhci: Don't add a virt_dev to the devs array before it's fully allocated
commit
5d9b70f7d52eb14bb37861c663bae44de9521c35 upstream.
Avoid null pointer dereference if some function is walking through the
devs array accessing members of a new virt_dev that is mid allocation.
Add the virt_dev to xhci->devs[i] _after_ the virt_device and all its
members are properly allocated.
issue found by KASAN: null-ptr-deref in xhci_find_slot_id_by_port
"Quick analysis suggests that xhci_alloc_virt_device() is not mutex
protected. If so, there is a time frame where xhci->devs[slot_id] is set
but not fully initialized. Specifically, xhci->devs[i]->udev can be NULL."
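A minimal sketch of the ordering described above (simplified from xhci_alloc_virt_device()):
        dev = kzalloc(sizeof(*dev), flags);
        if (!dev)
                return 0;

        /* ... allocate and initialise all members of *dev ... */

        xhci->devs[slot_id] = dev;      /* publish only once fully initialised */
        return 1;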
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Chunfeng Yun [Fri, 8 Dec 2017 16:10:06 +0000 (18:10 +0200)]
usb: xhci: fix TDS for MTK xHCI1.1
commit
72b663a99c074a8d073e7ecdae446cfb024ef551 upstream.
For MTK's xHCI 1.0 or later, TD size is the number of max
packet sized packets remaining in the TD, not including
this TRB (following spec).
For MTK's xHCI 0.96 and older, TD size is the number of max
packet sized packets remaining in the TD, including this TRB
(not following spec).
Signed-off-by: Chunfeng Yun <chunfeng.yun@mediatek.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Yan, Zheng [Thu, 30 Nov 2017 03:59:22 +0000 (11:59 +0800)]
ceph: drop negative child dentries before try pruning inode's alias
commit
040d786032bf59002d374b86d75b04d97624005c upstream.
A negative child dentry holds a reference on the inode's alias, which
makes d_prune_aliases() do nothing.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Christoph Fritz [Sat, 9 Dec 2017 22:47:55 +0000 (23:47 +0100)]
mmc: core: apply NO_CMD23 quirk to some specific cards
commit
91516a2a4734614d62ee3ed921f8f88acc67c000 upstream.
To get a usdhc-attached Apacer card and some ATP SD cards to work
reliably, CMD23 needs to be disabled. This has been tested on i.MX6
(sdhci-esdhc) and rk3288 (dw_mmc-rockchip).
Without this patch on i.MX6 (sdhci-esdhc):
$ dd if=/dev/urandom of=/mnt/test bs=1M count=10 conv=fsync
| <mmc0: starting CMD23 arg 00000400 flags 00000015>
| mmc0: starting CMD25 arg 00a71f00 flags 000000b5
| mmc0: blksz 512 blocks 1024 flags 00000100 tsac 3000 ms nsac 0
| mmc0: CMD12 arg 00000000 flags 0000049d
| sdhci [sdhci_irq()]: *** mmc0 got interrupt: 0x00000001
| mmc0: Timeout waiting for hardware interrupt.
Without this patch on rk3288 (dw_mmc-rockchip):
| mmc1: Card stuck in programming state! mmcblk1 card_busy_detect
| dwmmc_rockchip ff0c0000.dwmmc: Busy; trying anyway
| mmc_host mmc1: Bus speed (slot 0) = 400000Hz (slot req 400000Hz, actual 400000HZ div = 0)
| mmc1: card never left busy state
| mmc1: tried to reset card, got error -110
| blk_update_request: I/O error, dev mmcblk1, sector 139778
| Buffer I/O error on dev mmcblk1p1, logical block 131586, lost async page write
Signed-off-by: Christoph Fritz <chf.fritz@googlemail.com>
Signed-off-by: Ulf Hansson <ulf.hansson@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Shuah Khan [Thu, 7 Dec 2017 21:16:50 +0000 (14:16 -0700)]
usbip: fix stub_send_ret_submit() vulnerability to null transfer_buffer
commit
be6123df1ea8f01ee2f896a16c2b7be3e4557a5a upstream.
stub_send_ret_submit() handles urbs with a potentially null
transfer_buffer when it replays a packet with potentially malicious
data that could contain a null buffer. Add a check for the condition
where actual_length > 0 and transfer_buffer is null.
Reported-by: Secunia Research <vuln@secunia.com>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Shuah Khan [Thu, 7 Dec 2017 21:16:49 +0000 (14:16 -0700)]
usbip: prevent vhci_hcd driver from leaking a socket pointer address
commit
2f2d0088eb93db5c649d2a5e34a3800a8a935fc5 upstream.
When a client has a USB device attached over IP, the vhci_hcd driver is
locally leaking a socket pointer address via the
/sys/devices/platform/vhci_hcd/status file (world-readable) and in debug
output when "usbip --debug port" is run.
Fix it to not leak. The socket pointer address is not used at the moment
and it was made visible as a convenient way to find IP address from socket
pointer address by looking up /proc/net/{tcp,tcp6}.
As this opens a security hole, the fix replaces socket pointer address with
sockfd.
Reported-by: Secunia Research <vuln@secunia.com>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Shuah Khan [Thu, 7 Dec 2017 21:16:48 +0000 (14:16 -0700)]
usbip: fix stub_rx: harden CMD_SUBMIT path to handle malicious input
commit
c6688ef9f29762e65bce325ef4acd6c675806366 upstream.
Harden CMD_SUBMIT path to handle malicious input that could trigger
large memory allocations. Add checks to validate transfer_buffer_length
and number_of_packets to protect against bad input requesting
unbounded memory allocations. Validate early in get_pipe() and return
failure.
Reported-by: Secunia Research <vuln@secunia.com>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Shuah Khan [Thu, 7 Dec 2017 21:16:47 +0000 (14:16 -0700)]
usbip: fix stub_rx: get_pipe() to validate endpoint number
commit
635f545a7e8be7596b9b2b6a43cab6bbd5a88e43 upstream.
The get_pipe() routine doesn't validate the input endpoint number and
uses it to reference the ep_in and ep_out arrays. An invalid endpoint
number can trigger BUG(). Range check the epnum and return an error
instead of calling BUG().
Change the caller stub_recv_cmd_submit() to handle the get_pipe()
error return.
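A minimal sketch of the range check described above (error reporting is an assumption):
        if (epnum < 0 || epnum > 15) {
                dev_err(&sdev->udev->dev, "invalid epnum %d\n", epnum);
                return NULL;    /* caller treats a NULL pipe as an error */
        }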
Reported-by: Secunia Research <vuln@secunia.com>
Signed-off-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Amir Goldstein [Wed, 29 Nov 2017 05:35:21 +0000 (07:35 +0200)]
ovl: update ctx->pos on impure dir iteration
commit
b02a16e6413a2f782e542ef60bad9ff6bf212f8a upstream.
This fixes a regression with readdir of impure dir in overlayfs
that is shared to VM via 9p fs.
Reported-by: Miguel Bernal Marin <miguel.bernal.marin@linux.intel.com>
Fixes:
4edb83bb1041 ("ovl: constant d_ino for non-merge dirs")
Signed-off-by: Amir Goldstein <amir73il@gmail.com>
Tested-by: Miguel Bernal Marin <miguel.bernal.marin@linux.intel.com>
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Vivek Goyal [Mon, 27 Nov 2017 15:12:44 +0000 (10:12 -0500)]
ovl: Pass ovl_get_nlink() parameters in right order
commit
08d8f8a5b094b66b29936e8751b4a818b8db1207 upstream.
Right now we seem to be passing index as "lowerdentry" and origin.dentry
as "upperdentry". IIUC, we should pass these parameters in reversed order
and this looks like a bug.
Signed-off-by: Vivek Goyal <vgoyal@redhat.com>
Acked-by: Amir Goldstein <amir73il@gmail.com>
Fixes:
caf70cb2ba5d ("ovl: cleanup orphan index entries")
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Alan Stern [Tue, 12 Dec 2017 19:25:13 +0000 (14:25 -0500)]
USB: core: prevent malicious bNumInterfaces overflow
commit
48a4ff1c7bb5a32d2e396b03132d20d552c0eca7 upstream.
A malicious USB device with crafted descriptors can cause the kernel
to access unallocated memory by setting the bNumInterfaces value too
high in a configuration descriptor. Although the value is adjusted
during parsing, this adjustment is skipped in one of the error return
paths.
This patch prevents the problem by setting bNumInterfaces to 0
initially. The existing code already sets it to the proper value
after parsing is complete.
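A minimal sketch of the approach described above (simplified from the configuration parsing code):
        nintf = nintf_orig = config->desc.bNumInterfaces;
        config->desc.bNumInterfaces = 0;        /* set to the real count after parsing */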
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
David Kozub [Tue, 5 Dec 2017 21:40:04 +0000 (22:40 +0100)]
USB: uas and storage: Add US_FL_BROKEN_FUA for another JMicron JMS567 ID
commit
62354454625741f0569c2cbe45b2d192f8fd258e upstream.
There is another JMS567-based USB3 UAS enclosure (152d:0578) that fails
with the following error:
[sda] tag#0 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
[sda] tag#0 Sense Key : Illegal Request [current]
[sda] tag#0 Add. Sense: Invalid field in cdb
The issue occurs both with UAS (occasionally) and mass storage
(immediately after mounting a FS on a disk in the enclosure).
Enabling US_FL_BROKEN_FUA quirk solves this issue.
This patch adds an UNUSUAL_DEV with US_FL_BROKEN_FUA for the enclosure
for both UAS and mass storage.
Signed-off-by: David Kozub <zub@linux.fjfi.cvut.cz>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Changbin Du [Thu, 30 Nov 2017 03:39:43 +0000 (11:39 +0800)]
tracing: Allocate mask_str buffer dynamically
commit
90e406f96f630c07d631a021fd4af10aac913e77 upstream.
The default NR_CPUS can be very large, but the actual possible
nr_cpu_ids is usually very small. For my x86 distribution, NR_CPUS is
8192 and nr_cpu_ids is 4; about 2 pages are wasted.
Most machines don't have so many CPUs, so defining an array sized by
NR_CPUS just wastes memory. So let's allocate the buffer dynamically
when needed.
With this change, the mutex tracing_cpumask_update_lock, which was used
to protect mask_str, can also be removed.
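A minimal sketch of the dynamic allocation described above (simplified; %*pb and cpumask_pr_args() are the cpumask printf helpers):
        len = snprintf(NULL, 0, "%*pb\n",
                       cpumask_pr_args(tr->tracing_cpumask)) + 1;
        mask_str = kmalloc(len, GFP_KERNEL);
        if (!mask_str)
                return -ENOMEM;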
Link: http://lkml.kernel.org/r/1512013183-19107-1-git-send-email-changbin.du@intel.com
Fixes:
36dfe9252bd4c ("ftrace: make use of tracing_cpumask")
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Michal Hocko [Thu, 14 Dec 2017 23:33:15 +0000 (15:33 -0800)]
mm, oom_reaper: fix memory corruption
commit
4837fe37adff1d159904f0c013471b1ecbcb455e upstream.
David Rientjes has reported the following memory corruption while the
oom reaper tries to unmap the victims address space
BUG: Bad page map in process oom_reaper  pte:6353826300000000 pmd:00000000
addr:00007f50cab1d000 vm_flags:08100073 anon_vma:ffff9eea335603f0 mapping: (null) index:7f50cab1d
file: (null) fault: (null) mmap: (null) readpage: (null)
CPU: 2 PID: 1001 Comm: oom_reaper
Call Trace:
unmap_page_range+0x1068/0x1130
__oom_reap_task_mm+0xd5/0x16b
oom_reaper+0xff/0x14c
kthread+0xc1/0xe0
Tetsuo Handa has noticed that the synchronization inside exit_mmap is
insufficient. We only synchronize with the oom reaper if
tsk_is_oom_victim which is not true if the final __mmput is called from
a different context than the oom victim exit path. This can trivially
happen from context of any task which has grabbed mm reference (e.g. to
read /proc/<pid>/ file which requires mm etc.).
The race would look like this
oom_reaper                      oom_victim task
                                mmget_not_zero
                                do_exit
                                  mmput
__oom_reap_task_mm              mmput
                                  __mmput
                                    exit_mmap
                                      remove_vma
  unmap_page_range
Fix this issue by providing a new mm_is_oom_victim() helper which
operates on the mm struct rather than a task. Any context which
operates on a remote mm struct should use this helper in place of
tsk_is_oom_victim. The flag is set in mark_oom_victim and never cleared
so it is stable in the exit_mmap path.
Debugged by Tetsuo Handa.
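A minimal sketch of such a helper (the MMF_OOM_VICTIM flag name is an assumption based on the description above):
static inline bool mm_is_oom_victim(struct mm_struct *mm)
{
        return test_bit(MMF_OOM_VICTIM, &mm->flags);
}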
Link: http://lkml.kernel.org/r/20171210095130.17110-1-mhocko@kernel.org
Fixes:
212925802454 ("mm: oom: let oom_reap_task and exit_mmap run concurrently")
Signed-off-by: Michal Hocko <mhocko@suse.com>
Reported-by: David Rientjes <rientjes@google.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Andrea Argangeli <andrea@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Thiago Rafael Becker [Thu, 14 Dec 2017 23:33:12 +0000 (15:33 -0800)]
kernel: make groups_sort calling a responsibility group_info allocators
commit
bdcf0a423ea1c40bbb40e7ee483b50fc8aa3d758 upstream.
In testing, we found that nfsd threads may call set_groups in parallel
for the same entry cached in auth.unix.gid, racing in the call of
groups_sort, corrupting the groups for that entry and leading to
permission denials for the client.
This patch:
- Make groups_sort globally visible.
- Move the call to groups_sort to the modifiers of group_info
- Remove the call to groups_sort from set_groups
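A minimal sketch of the new calling convention (simplified; the group_info allocator now sorts before publishing):
        group_info = groups_alloc(ngroups);
        if (!group_info)
                return -ENOMEM;
        /* ... fill group_info->gid[] ... */
        groups_sort(group_info);                /* caller's responsibility now */
        retval = set_current_groups(group_info);
        put_group_info(group_info);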
Link: http://lkml.kernel.org/r/20171211151420.18655-1-thiago.becker@gmail.com
Signed-off-by: Thiago Rafael Becker <thiago.becker@gmail.com>
Reviewed-by: Matthew Wilcox <mawilcox@microsoft.com>
Reviewed-by: NeilBrown <neilb@suse.com>
Acked-by: "J. Bruce Fields" <bfields@fieldses.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>