platform/kernel/linux-rpi.git
6 years ago  block: move ->timeout request member
Jens Axboe [Tue, 29 May 2018 14:47:57 +0000 (08:47 -0600)]
block: move ->timeout request member

After the recent timeout handling changes, we have two holes in
the struct. Move the timeout near the deadline, killing both,
and moving related members closer together. On my config on
x86-64, this shrinks struct request from 312 to 304 bytes.

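A minimal user-space sketch of the padding effect described above (illustrative field names, not the real struct request layout):

  /* Illustrative only: grouping two 4-byte members next to each other
   * avoids the two 4-byte padding holes that appear when each sits
   * between 8-byte members, shrinking the struct by 8 bytes total. */
  #include <stdio.h>

  struct with_holes {
          void *a;                /* 8 bytes */
          unsigned int timeout;   /* 4 bytes + 4 bytes of padding */
          void *b;                /* 8 bytes */
          unsigned long deadline; /* 8 bytes */
          unsigned int gstate;    /* 4 bytes + 4 bytes of padding */
  };

  struct packed_together {
          void *a;
          void *b;
          unsigned long deadline;
          unsigned int timeout;   /* shares one 8-byte slot... */
          unsigned int gstate;    /* ...with this member */
  };

  int main(void)
  {
          printf("with holes: %zu\n", sizeof(struct with_holes));      /* 40 */
          printf("packed:     %zu\n", sizeof(struct packed_together)); /* 32 */
          return 0;
  }
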
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blk-mq: simplify blk_mq_rq_timed_out
Christoph Hellwig [Tue, 29 May 2018 13:52:39 +0000 (15:52 +0200)]
blk-mq: simplify blk_mq_rq_timed_out

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: document the blk_eh_timer_return values
Christoph Hellwig [Tue, 29 May 2018 13:52:38 +0000 (15:52 +0200)]
block: document the blk_eh_timer_return values

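For reference, a sketch of the enum being documented at this point in the series (comments paraphrased; see include/linux/blkdev.h for the authoritative wording):

  enum blk_eh_timer_return {
          BLK_EH_DONE,        /* driver completed (or will complete) the command */
          BLK_EH_RESET_TIMER, /* reset the timer and try again */
  };
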
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: remove BLK_EH_HANDLED
Christoph Hellwig [Tue, 29 May 2018 13:52:37 +0000 (15:52 +0200)]
block: remove BLK_EH_HANDLED

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  libiscsi: don't try to bypass SCSI EH
Christoph Hellwig [Tue, 29 May 2018 13:52:36 +0000 (15:52 +0200)]
libiscsi: don't try to bypass SCSI EH

libiscsi is the only SCSI code that returns BLK_EH_HANDLED, thus trying to
bypass the normal SCSI EH code.  We are going to remove this return value
at the block layer, and at least from a quick look it doesn't look too
harmful to try to send an abort for these cases, especially as the first
one should not actually be possible.  If this doesn't work out, iscsi
will probably need its own eh_strategy_handler instead to just do the
right thing.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  mmc: complete requests from ->timeout
Christoph Hellwig [Tue, 29 May 2018 13:52:35 +0000 (15:52 +0200)]
mmc: complete requests from ->timeout

By completing the request entirely in the driver we can remove the
BLK_EH_HANDLED return value and thus the split responsibility between the
driver and the block layer that has been causing trouble.

[While this keeps existing behavior it seems to mismatch the comment,
 maintainers please chime in!]

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  scsi_transport_fc: complete requests from ->timeout
Christoph Hellwig [Tue, 29 May 2018 13:52:34 +0000 (15:52 +0200)]
scsi_transport_fc: complete requests from ->timeout

By completing the request entirely in the driver we can remove the
BLK_EH_HANDLED return value and thus the split responsibility between the
driver and the block layer that has been causing trouble.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  null_blk: complete requests from ->timeout
Christoph Hellwig [Tue, 29 May 2018 13:52:33 +0000 (15:52 +0200)]
null_blk: complete requests from ->timeout

By completing the request entirely in the driver we can remove the
BLK_EH_HANDLED return value and thus the split responsibility between the
driver and the block layer that has been causing trouble.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  mtip32xx: complete requests from ->timeout
Christoph Hellwig [Tue, 29 May 2018 13:52:32 +0000 (15:52 +0200)]
mtip32xx: complete requests from ->timeout

By completing the request entirely in the driver we can remove the
BLK_EH_HANDLED return value and thus the split responsibility between the
driver and the block layer that has been causing trouble.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nbd: complete requests from ->timeout
Christoph Hellwig [Tue, 29 May 2018 13:52:31 +0000 (15:52 +0200)]
nbd: complete requests from ->timeout

By completing the request entirely in the driver we can remove the
BLK_EH_HANDLED return value and thus the split responsibility between the
driver and the block layer that has been causing trouble.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nvme: return BLK_EH_DONE from ->timeout
Christoph Hellwig [Tue, 29 May 2018 13:52:30 +0000 (15:52 +0200)]
nvme: return BLK_EH_DONE from ->timeout

NVMe always completes the request before returning from ->timeout, either
by polling for it, or by disabling the controller.  Return BLK_EH_DONE so
that the block layer doesn't even try to complete it again.

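A hedged sketch of the pattern (the foo_* names are hypothetical; the ->timeout signature matches the blk_mq_ops of this era):

  static enum blk_eh_timer_return foo_timeout(struct request *rq, bool reserved)
  {
          /* Illustrative only: the driver either completes the request
           * itself and returns BLK_EH_DONE, or asks for more time. */
          if (foo_try_abort(rq))              /* hypothetical helper */
                  return BLK_EH_RESET_TIMER;  /* abort in flight, re-arm */

          blk_mq_complete_request(rq);        /* we own the completion */
          return BLK_EH_DONE;                 /* block layer stays out */
  }
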
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: rename BLK_EH_NOT_HANDLED to BLK_EH_DONE
Christoph Hellwig [Tue, 29 May 2018 13:52:29 +0000 (15:52 +0200)]
block: rename BLK_EH_NOT_HANDLED to BLK_EH_DONE

The name BLK_EH_NOT_HANDLED implies that nothing happened, but very often
that is not what is happening - instead, the driver has already completed
the command.  Fix the symbolic name to reflect that a little better.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blk-mq: Remove generation sequence
Keith Busch [Tue, 29 May 2018 13:52:28 +0000 (15:52 +0200)]
blk-mq: Remove generation sequence

This patch simplifies the timeout handling by relying on the request
reference counting to ensure the iterator is operating on an inflight
and truly timed out request. Since the reference counting prevents the
tag from being reallocated, the block layer no longer needs to prevent
drivers from completing their requests while the timeout handler is
operating on them: a driver completing a request is allowed to proceed to
the next state without additional synchronization with the block layer.

This also removes any need for generation sequence numbers, since a
request cannot be reallocated as a new sequence while timeout handling
is operating on it.

To enable this, a refcount is added to struct request so that request
users can be sure they're operating on the same request without it
changing while they're processing it.  The request's tag won't be
released for reuse until both the timeout handler and the completion
are done with it.

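A rough sketch of the refcounting idea on the timeout side (names and structure are illustrative, not the exact blk-mq code):

  /* Only operate on a request whose reference we actually got. */
  if (!refcount_inc_not_zero(&rq->ref))
          return;                        /* request already gone, skip */
  if (time_after_eq(jiffies, rq->deadline))
          blk_mq_rq_timed_out(rq);       /* safe: tag can't be recycled */
  if (refcount_dec_and_test(&rq->ref))
          __blk_mq_free_request(rq);     /* last reference frees the tag */
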
Signed-off-by: Keith Busch <keith.busch@intel.com>
[hch: slight cleanups, added back submission side hctx lock, use cmpxchg
 for completions]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blk-mq: Fix timeout and state order
Keith Busch [Tue, 29 May 2018 13:52:27 +0000 (15:52 +0200)]
blk-mq: Fix timeout and state order

The block layer had been setting the state to in-flight prior to updating
the timer. This is the wrong order since the timeout handler could observe
the in-flight state with the older timeout, believing the request had
expired when in fact it is just getting started.

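A sketch of the corrected ordering (illustrative; the real code goes through blk_add_timer() and the blk-mq state helpers):

  /* Arm the timer first, then publish the in-flight state, so a
   * concurrent timeout check can never pair the in-flight state with
   * a stale deadline. */
  rq->deadline = jiffies + rq->timeout;       /* 1: update the timer */
  WRITE_ONCE(rq->state, MQ_RQ_IN_FLIGHT);     /* 2: then the state */
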
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  libata: remove ata_scsi_timed_out
Christoph Hellwig [Tue, 29 May 2018 13:52:26 +0000 (15:52 +0200)]
libata: remove ata_scsi_timed_out

As far as I can tell this function can't even be called any more, given
that ATA implements its own eh_strategy_handler with ata_scsi_error, which
never calls ->eh_timed_out.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  bcache: Replace bch_read_string_list() by __sysfs_match_string()
Andy Shevchenko [Mon, 28 May 2018 07:37:44 +0000 (15:37 +0800)]
bcache: Replace bch_read_string_list() by __sysfs_match_string()

Kernel library has a common function to match user input from sysfs
against an array of strings. Thus, replace bch_read_string_list() by
__sysfs_match_string().

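A sketch of the conversion pattern (the array contents here are illustrative):

  static const char * const modes[] = { "writethrough", "writeback" };

  /* Replaces a hand-rolled bch_read_string_list(): returns the index
   * of the matching entry, or a negative errno on no match. */
  int v = __sysfs_match_string(modes, ARRAY_SIZE(modes), buf);
  if (v < 0)
          return v;
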
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  bcache: Move a couple of functions to sysfs.c
Andy Shevchenko [Mon, 28 May 2018 07:37:43 +0000 (15:37 +0800)]
bcache: Move a couple of functions to sysfs.c

There are a couple of functions that are used exclusively in sysfs.c.
Move them there and make them static.

Besides the above, this will allow further cleanup.

No functional change intended.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  bcache: Move a couple of string arrays to sysfs.c
Andy Shevchenko [Mon, 28 May 2018 07:37:42 +0000 (15:37 +0800)]
bcache: Move a couple of string arrays to sysfs.c

There are a couple of string arrays that are used exclusively in sysfs.c.
Move them there and make them static.

Besides the above, this will allow further cleanup.

No functional change intended.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Coly Li <colyli@suse.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  bcache: stop bcache device when backing device is offline
Coly Li [Mon, 28 May 2018 07:37:41 +0000 (15:37 +0800)]
bcache: stop bcache device when backing device is offline

Currently bcache does not handle backing device failure: if a backing
device is offline and disconnected from the system, its bcache device can
still be accessed. If the bcache device is in writeback mode, I/O requests
can even succeed if they hit the cache device. That is to say, when and
how bcache handles an offline backing device is undefined.

This patch tries to handle backing device offline in a rather simple way
(sketched below):
- Add a cached_dev->status_update_thread kernel thread to update the
  backing device status every second.
- Add cached_dev->offline_seconds to record how many seconds the backing
  device has been observed to be offline. If the backing device is offline
  for BACKING_DEV_OFFLINE_TIMEOUT (30) seconds, set dc->io_disable to 1 and
  call bcache_device_stop() to stop the bcache device linked to the
  offline backing device.

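A simplified sketch of that polling thread (error handling and locking omitted; bdev_offline() is a hypothetical stand-in for the actual status check):

  #define BACKING_DEV_OFFLINE_TIMEOUT 30

  static int cached_dev_status_update(void *arg)
  {
          struct cached_dev *dc = arg;

          while (!kthread_should_stop()) {
                  if (bdev_offline(dc)) {        /* hypothetical check */
                          if (++dc->offline_seconds >=
                              BACKING_DEV_OFFLINE_TIMEOUT) {
                                  dc->io_disable = 1;
                                  bcache_device_stop(&dc->disk);
                                  break;
                          }
                  } else {
                          dc->offline_seconds = 0;
                  }
                  schedule_timeout_interruptible(HZ);  /* ~1 second */
          }
          return 0;
  }
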
Now if a backing device is offline for BACKING_DEV_OFFLINE_TIMEOUT seconds,
its bcache device will be removed; user space applications writing to it
will then get errors immediately and can handle the device failure in time.

This patch is quite simple and does not handle more complicated situations.
Once the bcache device is stopped, users need to recover the backing
device, then register and attach it manually.

Changelog:
v3: call wait_for_kthread_stop() before the kernel thread exits.
v2: remove "bcache: " prefix when calling pr_warn().
v1: initial version.

Signed-off-by: Coly Li <colyli@suse.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: Michael Lyle <mlyle@lyle.org>
Cc: Junhui Tang <tang.junhui@zte.com.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  null_blk: add blocking description and remove lightnvm
Liu Bo [Fri, 25 May 2018 14:40:04 +0000 (22:40 +0800)]
null_blk: add blocking description and remove lightnvm

- The description of 'blocking' is missing in null_blk.txt

- The 'lightnvm' parameter has been removed in null_blk.c

This updates both in null_blk.txt.

Signed-off-by: Liu Bo <bo.liu@linux.alibaba.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block drivers/block: Use octal not symbolic permissions
Joe Perches [Thu, 24 May 2018 19:38:59 +0000 (13:38 -0600)]
block drivers/block: Use octal not symbolic permissions

Convert the S_<FOO> symbolic permissions to their octal equivalents, as
using octal rather than symbolic permissions is preferred by many as
more readable.

see: https://lkml.org/lkml/2016/8/2/1945

Done with automated conversion via:
$ ./scripts/checkpatch.pl -f --types=SYMBOLIC_PERMS --fix-inplace <files...>

Miscellanea:

o Wrapped modified multi-line calls to a single line where appropriate
o Realign modified multi-line calls to open parenthesis

Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blk-mq: avoid starving tag allocation after allocating process migrates
Ming Lei [Thu, 24 May 2018 17:00:39 +0000 (11:00 -0600)]
blk-mq: avoid starving tag allocation after allocating process migrates

When the allocating process is scheduled back and the mapped hw queue
has changed, fake one extra wake up on the previous queue to compensate
for the missed wake up, so other allocations on the previous queue won't
be starved.

This patch fixes a request allocation hang issue, which can be
triggered easily in case of a very low nr_requests.

The race is as follows:

1) 2 hw queues, nr_requests is 2, and wake_batch is one

2) there are 3 waiters on hw queue 0

3) two in-flight requests in hw queue 0 are completed, and only two of
   the 3 waiters are woken up because of wake_batch, but both of those
   waiters can be scheduled to another CPU and end up switching to hw
   queue 1

4) then the 3rd waiter will wait forever, since no in-flight request
   is in hw queue 0 any more

5) this patch fixes it with the fake wakeup when a waiter is scheduled
   to another hw queue

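A hedged sketch of the compensation in the tag allocator (illustrative fragment; the real change lives in blk_mq_get_tag()):

  /* After sleeping for a tag, the process may have migrated and been
   * remapped to a different hctx; repay the wake up the old queue lost. */
  bt_prev = &data->hctx->tags->bitmap_tags;
  io_schedule();                           /* wait for a free tag */
  data->ctx = blk_mq_get_ctx(data->q);     /* re-resolve the mapping */
  data->hctx = blk_mq_map_queue(data->q, data->ctx->cpu);
  bt = &data->hctx->tags->bitmap_tags;
  if (bt != bt_prev)
          sbitmap_queue_wake_up(bt_prev);  /* fake the missed wake up */
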
Cc: <stable@vger.kernel.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Modified commit message to make it clearer, and make it apply on
top of the 4.18 branch.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  bdi: Move cgroup bdi_writeback to a dedicated low concurrency workqueue
Tejun Heo [Wed, 23 May 2018 17:56:32 +0000 (10:56 -0700)]
bdi: Move cgroup bdi_writeback to a dedicated low concurrency workqueue

cgwb_release() punts the actual release to cgwb_release_workfn() on
system_wq.  Depending on the number of cgroups or block devices, there
can be a lot of cgwb_release_workfn() in flight at the same time.

We're periodically seeing close to 256 kworkers getting stuck with the
following stack trace, and over time the entire system gets stuck.

  [<ffffffff810ee40c>] _synchronize_rcu_expedited.constprop.72+0x2fc/0x330
  [<ffffffff810ee634>] synchronize_rcu_expedited+0x24/0x30
  [<ffffffff811ccf23>] bdi_unregister+0x53/0x290
  [<ffffffff811cd1e9>] release_bdi+0x89/0xc0
  [<ffffffff811cd645>] wb_exit+0x85/0xa0
  [<ffffffff811cdc84>] cgwb_release_workfn+0x54/0xb0
  [<ffffffff810a68d0>] process_one_work+0x150/0x410
  [<ffffffff810a71fd>] worker_thread+0x6d/0x520
  [<ffffffff810ad3dc>] kthread+0x12c/0x160
  [<ffffffff81969019>] ret_from_fork+0x29/0x40
  [<ffffffffffffffff>] 0xffffffffffffffff

The events leading to the lockup are...

1. A lot of cgwb_release_workfn() is queued at the same time and all
   system_wq kworkers are assigned to execute them.

2. They all end up calling synchronize_rcu_expedited().  One of them
   wins and tries to perform the expedited synchronization.

3. However, that involves queueing rcu_exp_work to system_wq and
   waiting for it.  Because #1 is holding all available kworkers on
   system_wq, rcu_exp_work can't be executed.  cgwb_release_workfn()
   is waiting for synchronize_rcu_expedited() which in turn is waiting
   for cgwb_release_workfn() to free up some of the kworkers.

We shouldn't be scheduling hundreds of cgwb_release_workfn() at the
same time.  There's nothing to be gained from that.  This patch
updates the cgwb release path to use a dedicated percpu workqueue with
@max_active of 1.

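A sketch of the described fix (the workqueue flags are an assumption; the commit text only pins down @max_active of 1):

  static struct workqueue_struct *cgwb_release_wq;

  static int __init cgwb_init(void)
  {
          /* At most one release work item in flight per CPU, so the
           * system_wq kworker pool can no longer be exhausted. */
          cgwb_release_wq = alloc_workqueue("cgwb_release", 0, 1);
          if (!cgwb_release_wq)
                  return -ENOMEM;
          return 0;
  }

  /* ...and cgwb_release() queues onto it instead of system_wq: */
  queue_work(cgwb_release_wq, &wb->release_work);
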
While this resolves the problem at hand, it might be a good idea to
isolate rcu_exp_work to its own workqueue too as it can be used from
various paths and is prone to this sort of indirect A-A deadlocks.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nbd: set discard granularity properly
Josef Bacik [Wed, 23 May 2018 17:35:59 +0000 (13:35 -0400)]
nbd: set discard granularity properly

For some reason we had the discard granularity set to 512 always, even
when discards were disabled.  Fix this by having the default be 0, and
then if we turn discards on, set the discard granularity to the blocksize.

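A sketch of the described behavior (field names from nbd's config are illustrative):

  /* Default: discards disabled, so advertise no granularity. */
  queue->limits.discard_granularity = 0;

  /* When discards are enabled, granularity follows the block size. */
  if (config->flags & NBD_FLAG_SEND_TRIM) {
          queue->limits.discard_granularity = config->blksize;
          blk_queue_max_discard_sectors(queue, UINT_MAX);
  }
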
Signed-off-by: Josef Bacik <jbacik@fb.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blkdev_report_zones_ioctl(): Use vmalloc() to allocate large buffers
Bart Van Assche [Tue, 22 May 2018 15:27:22 +0000 (08:27 -0700)]
blkdev_report_zones_ioctl(): Use vmalloc() to allocate large buffers

Avoid that complaints similar to the following appear in the kernel log
if the number of zones is sufficiently large:

  fio: page allocation failure: order:9, mode:0x140c0c0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null)
  Call Trace:
  dump_stack+0x63/0x88
  warn_alloc+0xf5/0x190
  __alloc_pages_slowpath+0x8f0/0xb0d
  __alloc_pages_nodemask+0x242/0x260
  alloc_pages_current+0x6a/0xb0
  kmalloc_order+0x18/0x50
  kmalloc_order_trace+0x26/0xb0
  __kmalloc+0x20e/0x220
  blkdev_report_zones_ioctl+0xa5/0x1a0
  blkdev_ioctl+0x1ba/0x930
  block_ioctl+0x41/0x50
  do_vfs_ioctl+0xaa/0x610
  SyS_ioctl+0x79/0x90
  do_syscall_64+0x79/0x1b0
  entry_SYSCALL_64_after_hwframe+0x3d/0xa2

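A sketch of the fix (the exact allocation-size logic in the patch may differ): switch the buffer to kvmalloc() so large zone arrays fall back to vmalloc() instead of failing a high-order kmalloc().

  zones = kvmalloc(nr_zones * sizeof(struct blk_zone),
                   GFP_KERNEL | __GFP_ZERO);
  if (!zones)
          return -ENOMEM;
  /* ... report zones, copy the result to user space ... */
  kvfree(zones);
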
Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
Signed-off-by: Bart Van Assche <bart.vanassche@wdc.com>
Cc: Shaun Tancheff <shaun.tancheff@seagate.com>
Cc: Damien Le Moal <damien.lemoal@hgst.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Martin K. Petersen <martin.petersen@oracle.com>
Cc: Hannes Reinecke <hare@suse.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block/nbd: add WQ_UNBOUND to the knbd-recv workqueue
Dan Melnic [Mon, 18 Sep 2017 20:08:51 +0000 (13:08 -0700)]
block/nbd: add WQ_UNBOUND to the knbd-recv workqueue

Add WQ_UNBOUND to the knbd-recv workqueue so we're not bound
to a single CPU that is selected at device creation time.

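A sketch of the change (WQ_MEM_RECLAIM is an assumption carried over from typical nbd usage, not confirmed by the commit text):

  recv_workqueue = alloc_workqueue("knbd-recv",
                                   WQ_MEM_RECLAIM | WQ_UNBOUND, 0);
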
Signed-off-by: Dan Melnic <dmm@fb.com>
Reviewed-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blk-mq: remove wrong 'unlikely' check
huhai [Tue, 22 May 2018 09:39:34 +0000 (17:39 +0800)]
blk-mq: remove wrong 'unlikely' check

When dispatch_rq_from_ctx is called, in the vast majority of cases
the ctx->rq_list is not empty.

Signed-off-by: huhai <huhai@kylinos.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nvme-pci: fix race between poll and IRQ completions
Jens Axboe [Mon, 21 May 2018 14:41:52 +0000 (08:41 -0600)]
nvme-pci: fix race between poll and IRQ completions

If polling completions are racing with the IRQ triggered by a
completion, the IRQ handler will find no work and return IRQ_NONE.
This can trigger complaints about spurious interrupts:

[  560.169153] irq 630: nobody cared (try booting with the "irqpoll" option)
[  560.175988] CPU: 40 PID: 0 Comm: swapper/40 Not tainted 4.17.0-rc2+ #65
[  560.175990] Hardware name: Intel Corporation S2600STB/S2600STB, BIOS SE5C620.86B.00.01.0010.010920180151 01/09/2018
[  560.175991] Call Trace:
[  560.175994]  <IRQ>
[  560.176005]  dump_stack+0x5c/0x7b
[  560.176010]  __report_bad_irq+0x30/0xc0
[  560.176013]  note_interrupt+0x235/0x280
[  560.176020]  handle_irq_event_percpu+0x51/0x70
[  560.176023]  handle_irq_event+0x27/0x50
[  560.176026]  handle_edge_irq+0x6d/0x180
[  560.176031]  handle_irq+0xa5/0x110
[  560.176036]  do_IRQ+0x41/0xc0
[  560.176042]  common_interrupt+0xf/0xf
[  560.176043]  </IRQ>
[  560.176050] RIP: 0010:cpuidle_enter_state+0x9b/0x2b0
[  560.176052] RSP: 0018:ffffa0ed4659fe98 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffdd
[  560.176055] RAX: ffff9527beb20a80 RBX: 000000826caee491 RCX: 000000000000001f
[  560.176056] RDX: 000000826caee491 RSI: 00000000335206ee RDI: 0000000000000000
[  560.176057] RBP: 0000000000000001 R08: 00000000ffffffff R09: 0000000000000008
[  560.176059] R10: ffffa0ed4659fe78 R11: 0000000000000001 R12: ffff9527beb29358
[  560.176060] R13: ffffffffa235d4b8 R14: 0000000000000000 R15: 000000826caed593
[  560.176065]  ? cpuidle_enter_state+0x8b/0x2b0
[  560.176071]  do_idle+0x1f4/0x260
[  560.176075]  cpu_startup_entry+0x6f/0x80
[  560.176080]  start_secondary+0x184/0x1d0
[  560.176085]  secondary_startup_64+0xa5/0xb0
[  560.176088] handlers:
[  560.178387] [<00000000efb612be>] nvme_irq [nvme]
[  560.183019] Disabling IRQ #630

A previous commit removed ->cqe_seen, which was handling this case,
but we need to handle this a bit differently due to completions
now running outside the queue lock. Return IRQ_HANDLED from the
IRQ handler if the completion ring head has moved since we last
saw it.

Fixes: 5cb525c8315f ("nvme-pci: handle completions outside of the queue lock")
Reported-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Keith Busch <keith.busch@intel.com>
Tested-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  Merge branch 'nvme-4.18' of git://git.infradead.org/nvme into for-4.18/block
Jens Axboe [Mon, 21 May 2018 14:33:37 +0000 (08:33 -0600)]
Merge branch 'nvme-4.18' of git://git.infradead.org/nvme into for-4.18/block

Pull NVMe changes from Keith:

"This is just the first nvme pull request for 4.18. There are several
fabrics and target patches that I missed, so there will be more to
come."

* 'nvme-4.18' of git://git.infradead.org/nvme:
  nvme-pci: drop IRQ disabling on submission queue lock
  nvme-pci: split the nvme queue lock into submission and completion locks
  nvme-pci: handle completions outside of the queue lock
  nvme-pci: move ->cq_vector == -1 check outside of ->q_lock
  nvme-pci: remove cq check after submission
  nvme-pci: simplify nvme_cqe_valid
  nvme: mark the result argument to nvme_complete_async_event volatile
  nvme/pci: Sync controller reset for AER slot_reset
  nvme/pci: Hold controller reference during async probe
  nvme: only reconfigure discard if necessary
  nvme/pci: Use async_schedule for initial reset work
  nvme: lightnvm: add granby support
  NVMe: Add Quirk Delay before CHK RDY for Seagate Nytro Flash Storage
  nvme: change order of qid and cmdid in completion trace
  nvme: fc: provide a descriptive error

6 years ago  nvme-pci: drop IRQ disabling on submission queue lock
Jens Axboe [Thu, 17 May 2018 16:31:52 +0000 (18:31 +0200)]
nvme-pci: drop IRQ disabling on submission queue lock

Since we aren't sharing the lock for completions now, we don't
have to make it IRQ safe.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
6 years ago  nvme-pci: split the nvme queue lock into submission and completion locks
Jens Axboe [Thu, 17 May 2018 16:31:51 +0000 (18:31 +0200)]
nvme-pci: split the nvme queue lock into submission and completion locks

This is now feasible. We protect the submission queue ring with
->sq_lock, and the completion side with ->cq_lock.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
6 years ago  nvme-pci: handle completions outside of the queue lock
Jens Axboe [Thu, 17 May 2018 16:31:50 +0000 (18:31 +0200)]
nvme-pci: handle completions outside of the queue lock

Split the completion of events into a two part process:

1) Reap the events inside the queue lock
2) Complete the events outside the queue lock

Since we never wrap the queue, we can access it locklessly after we've
updated the completion queue head. This patch started off with batching
events on the stack, but with this trick we don't have to. Keith Busch
<keith.busch@intel.com> came up with that idea.

Note that this kills the ->cqe_seen as well. I haven't been able to
trigger any ill effects of this. If we do race with polling every so
often, it should be rare enough NOT to trigger any issues.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Keith Busch <keith.busch@intel.com>
[hch: refactored, restored poll early exit optimization]
Signed-off-by: Christoph Hellwig <hch@lst.de>
6 years ago  nvme-pci: move ->cq_vector == -1 check outside of ->q_lock
Jens Axboe [Thu, 17 May 2018 16:31:49 +0000 (18:31 +0200)]
nvme-pci: move ->cq_vector == -1 check outside of ->q_lock

We only clear it dynamically in nvme_suspend_queue(). When we do, ensure
we do a full flush so that any nvme_queue_rq() invocation will see it.

Ideally we'd kill this check completely, but we're using it to flush
requests on a dying queue.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
6 years ago  nvme-pci: remove cq check after submission
Jens Axboe [Thu, 17 May 2018 16:31:48 +0000 (18:31 +0200)]
nvme-pci: remove cq check after submission

We always check the completion queue after submitting, but in my testing
this isn't a win even on DRAM/xpoint devices. In some cases it's
actually worse. Kill it.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
6 years ago  nvme-pci: simplify nvme_cqe_valid
Christoph Hellwig [Fri, 18 May 2018 14:37:04 +0000 (08:37 -0600)]
nvme-pci: simplify nvme_cqe_valid

We always look at the current CQ head and phase, so don't pass these
as separate arguments, and rename the function to nvme_cqe_pending.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nvme: mark the result argument to nvme_complete_async_event volatile
Christoph Hellwig [Thu, 17 May 2018 16:31:46 +0000 (18:31 +0200)]
nvme: mark the result argument to nvme_complete_async_event volatile

We'll need that in the PCIe driver soon as we'll read it straight off the
CQ.

Signed-off-by: Christoph Hellwig <hch@lst.de>
6 years ago  blk-mq: clear hctx->dispatch_from when mappings change
huhai [Fri, 18 May 2018 14:32:30 +0000 (08:32 -0600)]
blk-mq: clear hctx->dispatch_from when mappings change

When the number of hardware queues is changed, the drivers will call
blk_mq_update_nr_hw_queues() to remap hardware queues. This changes
the ctx mappings, but the current code doesn't clear the
->dispatch_from hint. This can result in dispatch_from pointing to
a ctx that isn't mapped to the hctx anymore.

Fixes: b347689ffbca ("blk-mq-sched: improve dispatching from sw queue")
Signed-off-by: huhai <huhai@kylinos.cn>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Moved the placement of the clearing to where we clear other items
pertaining to the existing mapping, added Fixes line, and reworded
the commit message.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nbd: call nbd_bdev_reset instead of bd_set_size on disconnect
Josef Bacik [Wed, 16 May 2018 18:51:22 +0000 (14:51 -0400)]
nbd: call nbd_bdev_reset instead of bd_set_size on disconnect

We need to make sure we don't just set the size of the bdev to 0 while
it's being used by a file system.  We have the appropriate check in
nbd_bdev_reset, simply use that helper instead.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nbd: fix how we set bd_invalidated
Josef Bacik [Wed, 16 May 2018 18:51:21 +0000 (14:51 -0400)]
nbd: fix how we set bd_invalidated

bd_invalidated is kind of a pain wrt partitions, as it really only
triggers the partition rescan if it is set after bd_ops->open() runs, so
setting it when we reset the device isn't useful.  We would also
sporadically still have partitions left over in some disconnect cases, so
fix this by always setting bd_invalidated on open if there's no
configuration or if a disconnect action has happened; that way the
partition table gets invalidated and rescanned properly.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nbd: clear_sock on netlink disconnect
Josef Bacik [Wed, 16 May 2018 18:51:20 +0000 (14:51 -0400)]
nbd: clear_sock on netlink disconnect

This is what the ioctl based nbd disconnect does as well.  Without this
the device will just sit there and wait for the connection to go away
(or IO to occur) before the device gets torn down.  Instead clear
everything up on our end so the configuration goes away as quickly as
possible.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nbd: use bd_set_size when updating disk size
Josef Bacik [Wed, 16 May 2018 18:51:19 +0000 (14:51 -0400)]
nbd: use bd_set_size when updating disk size

When we stopped relying on the bdev everywhere I broke updating the
block device size on the fly, which ceph relies on.  We can't just do
set_capacity, we also have to do bd_set_size so things like parted will
notice the device size change.

Fixes: 29eaadc ("nbd: stop using the bdev everywhere")
cc: stable@vger.kernel.org
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nbd: update size when connected
Josef Bacik [Wed, 16 May 2018 18:51:18 +0000 (14:51 -0400)]
nbd: update size when connected

I messed up changing the size of an NBD device while it was connected by
not actually updating the device or doing the uevent.  Fix this by
updating everything if we're connected and we change the size.

cc: stable@vger.kernel.org
Fixes: 639812a ("nbd: don't set the device size until we're connected")
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nbd: fix nbd device deletion
Josef Bacik [Wed, 16 May 2018 18:51:17 +0000 (14:51 -0400)]
nbd: fix nbd device deletion

This fixes a use-after-free bug: we shouldn't be touching disk->queue
right after we do del_gendisk(disk).  Save the queue and do the cleanup
after the del_gendisk().

Fixes: c6a4759ea0c9 ("nbd: add device refcounting")
cc: stable@vger.kernel.org
Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: fix MAINTAINERS email for nbd
Josef Bacik [Wed, 16 May 2018 18:36:01 +0000 (14:36 -0400)]
block: fix MAINTAINERS email for nbd

I've been missing stuff because it's been going into my work email which
is a black hole.  Update to the email I actually use so I stop missing
patches and bug reports.

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blk-mq: remove redundant insert case in blk_mq_make_request()
huhai [Wed, 16 May 2018 14:21:21 +0000 (08:21 -0600)]
blk-mq: remove redundant insert case in blk_mq_make_request()

We can use blk_mq_sched_insert_request() even if we don't have
an IO scheduler attached, since that case ends up doing exactly
the same thing as what blk_mq_queue_io() does now.

Signed-off-by: huhai <huhai@kylinos.cn>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  Remove jsflash driver
Jens Axboe [Tue, 15 May 2018 19:54:11 +0000 (13:54 -0600)]
Remove jsflash driver

Nobody is using it anymore, and it's been abandoned. Since David
is fine with removing it, kill it.

Suggested-by: Christoph Hellwig <hch@lst.de>
Acked-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Add sysfs entry for fua support
Kent Overstreet [Wed, 9 May 2018 01:33:58 +0000 (21:33 -0400)]
block: Add sysfs entry for fua support

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Export bio check/set pages_dirty
Kent Overstreet [Wed, 9 May 2018 01:33:57 +0000 (21:33 -0400)]
block: Export bio check/set pages_dirty

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Add warning for bi_next not NULL in bio_endio()
Kent Overstreet [Wed, 9 May 2018 01:33:56 +0000 (21:33 -0400)]
block: Add warning for bi_next not NULL in bio_endio()

Recently found a bug where a driver left bi_next not NULL and then
called bio_endio(), and then the submitter of the bio used
bio_copy_data() which was treating src and dst as lists of bios.

Fixed that bug by splitting out bio_list_copy_data(), but in case other
things are depending on bi_next in weird ways, add a warning to help
avoid more bugs like that in the future.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Add missing flush_dcache_page() call
Kent Overstreet [Wed, 9 May 2018 01:33:55 +0000 (21:33 -0400)]
block: Add missing flush_dcache_page() call

Since a bio can point to userspace pages (e.g. direct IO), this is
generally necessary.

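A sketch of where such a call belongs (illustrative; bvec here stands for the bio segment just written):

  /* After writing into a page that may also be mapped into user space
   * (e.g. direct IO), keep the data cache coherent on architectures
   * with aliasing caches; this is a no-op on x86. */
  memcpy(page_address(bvec.bv_page) + bvec.bv_offset, src, bvec.bv_len);
  flush_dcache_page(bvec.bv_page);
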
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Split out bio_list_copy_data()
Kent Overstreet [Wed, 9 May 2018 01:33:54 +0000 (21:33 -0400)]
block: Split out bio_list_copy_data()

Found a bug (with ASAN) where we were passing a bio to bio_copy_data()
with bi_next not NULL when it should have been NULL - a driver had left
bi_next set to something after calling bio_endio().

Since the normal case is only copying single bios, split out
bio_list_copy_data() to avoid more bugs like this in the future.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Add bio_copy_data_iter(), zero_fill_bio_iter()
Kent Overstreet [Wed, 9 May 2018 01:33:53 +0000 (21:33 -0400)]
block: Add bio_copy_data_iter(), zero_fill_bio_iter()

Add versions that take bvec_iter args instead of using bio->bi_iter - to
be used by bcachefs.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Use bioset_init() for fs_bio_set
Kent Overstreet [Wed, 9 May 2018 01:33:52 +0000 (21:33 -0400)]
block: Use bioset_init() for fs_bio_set

Minor optimization - remove a pointer indirection when using fs_bio_set.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Add bioset_init()/bioset_exit()
Kent Overstreet [Wed, 9 May 2018 01:33:51 +0000 (21:33 -0400)]
block: Add bioset_init()/bioset_exit()

Similarly to mempool_init()/mempool_exit(), take a pointer indirection
out of allocation/freeing by allowing biosets to be embedded in other
structs.

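A sketch of the embedded-bioset pattern this enables (the struct and pool sizes are illustrative):

  struct my_dev {
          struct bio_set bs;      /* embedded: no pointer indirection */
  };

  int my_dev_init(struct my_dev *dev)
  {
          return bioset_init(&dev->bs, BIO_POOL_SIZE, 0,
                             BIOSET_NEED_BVECS);
  }

  void my_dev_exit(struct my_dev *dev)
  {
          bioset_exit(&dev->bs);  /* counterpart to bioset_init() */
  }
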
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Convert bio_set to mempool_init()
Kent Overstreet [Wed, 9 May 2018 01:33:50 +0000 (21:33 -0400)]
block: Convert bio_set to mempool_init()

Minor performance improvement by getting rid of pointer indirections
from allocation/freeing fastpaths.

Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  mempool: Add mempool_init()/mempool_exit()
Kent Overstreet [Mon, 4 May 2015 23:52:20 +0000 (16:52 -0700)]
mempool: Add mempool_init()/mempool_exit()

Allows mempools to be embedded in other structs, getting rid of a
pointer indirection from allocation fastpaths.

mempool_exit() is safe to call on an uninitialized but zeroed mempool.

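A sketch of the embedding pattern (names and sizes are illustrative):

  struct my_cache {
          mempool_t pool;         /* embedded, not a mempool_t pointer */
  };

  int my_cache_init(struct my_cache *c)
  {
          return mempool_init_kmalloc_pool(&c->pool, 16, 256);
  }

  void my_cache_exit(struct my_cache *c)
  {
          mempool_exit(&c->pool); /* fine even on a zeroed, uninit pool */
  }
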
Signed-off-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  sbitmap: fix race in wait batch accounting
Jens Axboe [Mon, 14 May 2018 18:17:31 +0000 (12:17 -0600)]
sbitmap: fix race in wait batch accounting

If we have multiple callers of sbq_wake_up(), we can end up in a
situation where the wait_cnt will continually go more and more
negative. Consider the case where our wake batch is 1, hence
wait_cnt will start out as 1.

wait_cnt == 1

CPU0                                  CPU1
atomic_dec_return(), cnt == 0
                                      atomic_dec_return(), cnt == -1
                                      cmpxchg(-1, 0) (succeeds)
                                      [wait_cnt now 0]
cmpxchg(0, 1) (fails)

This ends up with wait_cnt being 0, so we'll wake up immediately
the next time. Going through the same loop as above again,
we'll end up with wait_cnt of -1.

For the case where we have a larger wake batch, the only
difference is that the starting point will be higher. We'll
still end up with continually smaller batch wakeups, which
defeats the purpose of the rolling wakeups.

Always reset the wait_cnt to the batch value. Then it doesn't
matter who wins the race. But ensure that whoever does win
the race is the one that increments the ws index and wakes up
our batch count; the loser gets to call __sbq_wake_up() again to
account its wakeups towards the next active wait state index.

Fixes: 6c0ca7ae292a ("sbitmap: fix wakeup hang after sbq resize")
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: consistently use GFP_NOIO instead of __GFP_RECLAIM
Christoph Hellwig [Wed, 9 May 2018 07:54:08 +0000 (09:54 +0200)]
block: consistently use GFP_NOIO instead of __GFP_RECLAIM

Same numerical value (for now at least), but a much better documentation
of intent.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: use GFP_NOIO instead of __GFP_DIRECT_RECLAIM
Christoph Hellwig [Wed, 9 May 2018 07:54:07 +0000 (09:54 +0200)]
block: use GFP_NOIO instead of __GFP_DIRECT_RECLAIM

We just can't do I/O when doing block layer request allocations,
so use GFP_NOIO instead of the even more limited __GFP_DIRECT_RECLAIM.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: pass an explicit gfp_t to get_request
Christoph Hellwig [Wed, 9 May 2018 07:54:06 +0000 (09:54 +0200)]
block: pass an explicit gfp_t to get_request

blk_old_get_request already has it at hand, and in blk_queue_bio, which
is the fast path, it is constant.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: sanitize blk_get_request calling conventions
Christoph Hellwig [Wed, 9 May 2018 07:54:05 +0000 (09:54 +0200)]
block: sanitize blk_get_request calling conventions

Switch everyone to blk_get_request_flags, and then rename
blk_get_request_flags to blk_get_request.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: fix __get_request documentation
Christoph Hellwig [Wed, 9 May 2018 07:54:04 +0000 (09:54 +0200)]
block: fix __get_request documentation

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  scsi/osd: remove the gfp argument to osd_start_request
Christoph Hellwig [Wed, 9 May 2018 07:54:03 +0000 (09:54 +0200)]
scsi/osd: remove the gfp argument to osd_start_request

Always GFP_KERNEL, and keeping it would cause serious complications for
the next change.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  memstick: remove unused variables
Christoph Hellwig [Mon, 14 May 2018 08:24:34 +0000 (10:24 +0200)]
memstick: remove unused variables

Fixes: 7c2d748e8476 ("memstick: don't call blk_queue_bounce_limit")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  ps3disk: handle highmem pages
Christoph Hellwig [Wed, 9 May 2018 13:59:48 +0000 (15:59 +0200)]
ps3disk: handle highmem pages

The ps3disk driver already kmaps all pages when copying from/to the
internal bounce buffer, so it can accept highmem pages just fine.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  jsflash: handle highmem pages
Christoph Hellwig [Wed, 9 May 2018 13:59:47 +0000 (15:59 +0200)]
jsflash: handle highmem pages

Just kmap the bio single page payload before processing it.

(and yes, there is no highmem on sparc32 anyway, but kmap_atomic and
kunmap_atomic are nops there, so this gives the right example)

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  aoe: handle highmem pages
Christoph Hellwig [Wed, 9 May 2018 13:59:46 +0000 (15:59 +0200)]
aoe: handle highmem pages

Use kmap_atomic when copying out of a bio_vec.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  mtd_blkdevs: handle highmem pages
Christoph Hellwig [Wed, 9 May 2018 13:59:45 +0000 (15:59 +0200)]
mtd_blkdevs: handle highmem pages

Just kmap the single payload page before passing it on to the FTL.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  memstick: don't call blk_queue_bounce_limit
Christoph Hellwig [Wed, 9 May 2018 13:59:44 +0000 (15:59 +0200)]
memstick: don't call blk_queue_bounce_limit

All in-tree host drivers set up a proper dma mask and use the dma-mapping
helpers.  This means they will be able to deal with any address that we
are throwing at them.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  DAC960: don't use block layer bounce buffers
Christoph Hellwig [Wed, 9 May 2018 13:59:43 +0000 (15:59 +0200)]
DAC960: don't use block layer bounce buffers

DAC960 just sets the block bounce limit to the dma mask, which means
that the iommu or swiotlb already take care of the bounce buffering,
and the block bouncing can be removed.

Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  mtip32xx: don't use block layer bounce buffers
Christoph Hellwig [Wed, 9 May 2018 13:59:42 +0000 (15:59 +0200)]
mtip32xx: don't use block layer bounce buffers

mtip32xx just sets the block bounce limit to the dma mask, which means
that the iommu or swiotlb already take care of the bounce buffering,
and the block bouncing can be removed.

Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  nvme/pci: Sync controller reset for AER slot_reset
Keith Busch [Thu, 10 May 2018 14:34:20 +0000 (08:34 -0600)]
nvme/pci: Sync controller reset for AER slot_reset

AER handling expects a successful return from slot_reset means the
driver made the device functional again. The nvme driver had been using
an asynchronous reset to recover the device, so the device
may still be initializing after control is returned to the
AER handler. This creates problems for subsequent event handling,
causing the initializion to fail.

This patch fixes that by syncing the controller reset before returning
to the AER driver, and reporting the true state of the reset.

Link: https://bugzilla.kernel.org/show_bug.cgi?id=199657
Reported-by: Alex Gagniuc <mr.nuke.me@gmail.com>
Cc: Sinan Kaya <okaya@codeaurora.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: stable@vger.kernel.org
Tested-by: Alex Gagniuc <mr.nuke.me@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Keith Busch <keith.busch@intel.com>
6 years ago  sbitmap: warn if using smaller shallow depth than was setup
Omar Sandoval [Thu, 10 May 2018 00:29:24 +0000 (17:29 -0700)]
sbitmap: warn if using smaller shallow depth than was setup

Make sure the user passed the right value to
sbitmap_queue_min_shallow_depth().

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  kyber-iosched: update shallow depth when setting up hardware queue
Jens Axboe [Wed, 9 May 2018 19:55:14 +0000 (13:55 -0600)]
kyber-iosched: update shallow depth when setting up hardware queue

We don't expect the async depth to be smaller than the wake batch
count for sbitmap, but just in case, inform sbitmap of what shallow
depth kyber may use.

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  bfq-iosched: update shallow depth to smallest one used
Jens Axboe [Wed, 9 May 2018 21:26:55 +0000 (15:26 -0600)]
bfq-iosched: update shallow depth to smallest one used

If our shallow depth is smaller than the wake batching of sbitmap,
we can introduce hangs. Ensure that sbitmap knows how low we'll go.

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  sbitmap: fix missed wakeups caused by sbitmap_queue_get_shallow()
Omar Sandoval [Thu, 10 May 2018 00:16:31 +0000 (17:16 -0700)]
sbitmap: fix missed wakeups caused by sbitmap_queue_get_shallow()

The sbitmap queue wake batch is calculated such that once allocations
start blocking, all of the bits which are already allocated must be
enough to fulfill the batch counters of all of the waitqueues. However,
the shallow allocation depth can break this invariant, since we block
before our full depth is being utilized. Add
sbitmap_queue_min_shallow_depth(), which saves the minimum shallow depth
the sbq will use, and update sbq_calc_wake_batch() to take it into
account.

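A sketch of the new API's use (values and caller are illustrative):

  /* Tell the sbq the smallest shallow depth any caller will pass, so
   * sbq_calc_wake_batch() can keep the wake batch small enough that
   * blocked waiters are still guaranteed to be woken. */
  sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, async_depth);

  /* Later allocations may then safely use any depth >= that minimum: */
  tag = __sbitmap_queue_get_shallow(&tags->bitmap_tags, async_depth);
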
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  bfq-iosched: remove unused variable
Jens Axboe [Wed, 9 May 2018 21:25:22 +0000 (15:25 -0600)]
bfq-iosched: remove unused variable

bfqd->sb_shift was an attempt at caching the sbitmap queue shift, but
we don't need it, as it never changes. Kill it with fire.

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  bfq: calculate shallow depths at init time
Jens Axboe [Wed, 9 May 2018 19:27:21 +0000 (13:27 -0600)]
bfq: calculate shallow depths at init time

It doesn't change, so don't put it in the per-IO hot path.

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  bfq-iosched: don't worry about reserved tags in limit_depth
Jens Axboe [Wed, 9 May 2018 19:12:10 +0000 (13:12 -0600)]
bfq-iosched: don't worry about reserved tags in limit_depth

Reserved tags are used for error handling, we don't need to
care about them for regular IO. The core won't call us for these
anyway.

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blk-mq: don't call into depth limiting for reserved tags
Jens Axboe [Wed, 9 May 2018 19:28:50 +0000 (13:28 -0600)]
blk-mq: don't call into depth limiting for reserved tags

It's not useful, they are internal and/or error handling recovery
commands.

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block, bfq: postpone rq preparation to insert or merge
Paolo Valente [Fri, 4 May 2018 17:17:01 +0000 (19:17 +0200)]
block, bfq: postpone rq preparation to insert or merge

When invoked for an I/O request rq, the prepare_request hook of bfq
increments reference counters in the destination bfq_queue for rq. In
this respect, after this hook has been invoked, rq may still be
transformed into a request with no icq attached, i.e., for bfq, a
request not associated with any bfq_queue. No further hook is invoked
to signal this transformation to bfq (in general, to the destination
elevator for rq). This leads bfq into an inconsistent state, because
bfq has no chance to correctly lower these counters back. This
inconsistency may in turn cause incorrect scheduling and hangs. It
certainly causes memory leaks, by making it impossible for bfq to free
the involved bfq_queue.

On the bright side, no transformation can happen any more once rq has
been inserted into bfq, or merged with another, already inserted,
request. Exploiting this fact, this commit addresses the above issue
by delaying the preparation of an I/O request to when the request is
inserted or merged.

This change also gives a performance bonus: a lock-contention point
gets removed. To prepare a request, bfq needs to hold its scheduler
lock. After postponing request preparation to insertion or merging, no
lock needs to be grabbed any longer in the prepare_request hook, while
the lock already taken to perform insertion or merging is used to
prepare the request as well.

Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Bart Van Assche <bart.vanassche@wdc.com>
Signed-off-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  mtip32xx: Fix an error handling path in 'mtip_pci_probe()'
Christophe JAILLET [Thu, 10 May 2018 07:27:31 +0000 (09:27 +0200)]
mtip32xx: Fix an error handling path in 'mtip_pci_probe()'

Branch to the right label in the error handling path in order to keep it
logical.

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  brd: Mark as non-rotational
SeongJae Park [Thu, 3 May 2018 09:53:26 +0000 (18:53 +0900)]
brd: Mark as non-rotational

This commit sets QUEUE_FLAG_NONROT and clears QUEUE_FLAG_ADD_RANDOM
to mark the ramdisks as non-rotational devices.

Signed-off-by: SeongJae Park <sj38.park@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: consolidate struct request timestamp fields
Omar Sandoval [Wed, 9 May 2018 09:08:53 +0000 (02:08 -0700)]
block: consolidate struct request timestamp fields

Currently, struct request has four timestamp fields:

- A start time, set at get_request time, in jiffies, used for iostats
- An I/O start time, set at start_request time, in ktime nanoseconds,
  used for blk-stats (i.e., wbt, kyber, hybrid polling)
- Another start time and another I/O start time, used for cfq and bfq

These can all be consolidated into one start time and one I/O start
time, both in ktime nanoseconds, shaving off up to 16 bytes from struct
request depending on the kernel config.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: move blk_stat_add() to __blk_mq_end_request()
Omar Sandoval [Wed, 9 May 2018 09:08:52 +0000 (02:08 -0700)]
block: move blk_stat_add() to __blk_mq_end_request()

We want this next to blk_account_io_done() for the next change so that
we can call ktime_get() only once for both.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: use ktime_get_ns() instead of sched_clock() for cfq and bfq
Omar Sandoval [Wed, 9 May 2018 09:08:51 +0000 (02:08 -0700)]
block: use ktime_get_ns() instead of sched_clock() for cfq and bfq

cfq and bfq have some internal fields that use sched_clock() which can
trivially use ktime_get_ns() instead. Their timestamp fields in struct
request can also use ktime_get_ns(), which resolves the 8 year old
comment added by commit 28f4197e5d47 ("block: disable preemption before
using sched_clock()").

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: get rid of struct blk_issue_stat
Omar Sandoval [Wed, 9 May 2018 09:08:50 +0000 (02:08 -0700)]
block: get rid of struct blk_issue_stat

struct blk_issue_stat squashes three things into one u64:

- The time the driver started working on a request
- The original size of the request (for the io.low controller)
- Flags for writeback throttling

It turns out that on x86_64, we have a 4 byte hole in struct request
which we can fill with the non-timestamp fields from blk_issue_stat,
simplifying things quite a bit.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: replace bio->bi_issue_stat with bio-specific type
Omar Sandoval [Wed, 9 May 2018 09:08:49 +0000 (02:08 -0700)]
block: replace bio->bi_issue_stat with bio-specific type

struct blk_issue_stat is going away, and bio->bi_issue_stat doesn't even
use the blk-stats interface, so we can provide a separate implementation
specific for bios. The helpers work the same way as the blk-stats
helpers.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: pass struct request instead of struct blk_issue_stat to wbt
Omar Sandoval [Wed, 9 May 2018 09:08:48 +0000 (02:08 -0700)]
block: pass struct request instead of struct blk_issue_stat to wbt

issue_stat is going to go away, so first make writeback throttling take
the containing request, update the internal wbt helpers accordingly, and
change rwb->sync_cookie to be the request pointer instead of the
issue_stat pointer. No functional change.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: move some wbt helpers to blk-wbt.c
Omar Sandoval [Wed, 9 May 2018 09:08:47 +0000 (02:08 -0700)]
block: move some wbt helpers to blk-wbt.c

A few helpers are only used from blk-wbt.c, so move them there, and put
wbt_track() behind the CONFIG_BLK_WBT ifdef. This is in preparation
for changing how the wbt flags are tracked.

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blk-wbt: throttle discards like background writes
Jens Axboe [Mon, 7 May 2018 16:03:23 +0000 (10:03 -0600)]
blk-wbt: throttle discards like background writes

Throttle discards like we would any background write. Discards should
be background activity, so if they are impacting foreground IO, then
we will throttle them down.

Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blk-wbt: pass in enum wbt_flags to get_rq_wait()
Jens Axboe [Mon, 7 May 2018 15:57:08 +0000 (09:57 -0600)]
blk-wbt: pass in enum wbt_flags to get_rq_wait()

This is in preparation for having more write queues, in which
case we would need to pass in more information than just
a simple 'is_kswapd' boolean.

Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  blk-wbt: account any writing command as a write
Jens Axboe [Thu, 3 May 2018 15:14:57 +0000 (09:14 -0600)]
blk-wbt: account any writing command as a write

We currently special case WRITE and FLUSH, but we should really
just include any command with the write bit set. This ensures
that we account DISCARD.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: break discard submissions into the user defined size
Jens Axboe [Tue, 8 May 2018 21:09:41 +0000 (15:09 -0600)]
block: break discard submissions into the user defined size

Don't build discards bigger than what the user asked for, if the
user decided to limit the size by writing to 'discard_max_bytes'.

Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  loop: remember whether sysfs_create_group() was done
Tetsuo Handa [Fri, 4 May 2018 16:58:09 +0000 (10:58 -0600)]
loop: remember whether sysfs_create_group() was done

syzbot is hitting a WARN() triggered by memory allocation fault
injection [1] because the loop module calls sysfs_remove_group()
even when sysfs_create_group() has failed.
Fix this by remembering whether sysfs_create_group() succeeded.

[1] https://syzkaller.appspot.com/bug?id=3f86c0edf75c86d2633aeb9dd69eccc70bc7e90b

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reported-by: syzbot <syzbot+9f03168400f56df89dbc6f1751f4458fe739ff29@syzkaller.appspotmail.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Renamed sysfs_ready -> sysfs_inited.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Shorten interrupt disabled regions
Thomas Gleixner [Fri, 4 May 2018 14:32:47 +0000 (16:32 +0200)]
block: Shorten interrupt disabled regions

Commit 9c40cef2b799 ("sched: Move blk_schedule_flush_plug() out of
__schedule()") moved the blk_schedule_flush_plug() call out of the
interrupt/preempt disabled region in the scheduler. This allows to replace
local_irq_save/restore(flags) by local_irq_disable/enable() in
blk_flush_plug_list().

But it makes more sense to disable interrupts explicitly when the request
queue is locked and reenable them when the request queue is unlocked. This
shortens the interrupt disabled section, which is important when the plug
list contains requests for more than one queue. The comment claiming
that interrupts must be disabled around the loop is misleading, as the
called functions can reenable interrupts unconditionally anyway, and it
obfuscates the scope badly:

 local_irq_save(flags);
   spin_lock(q->queue_lock);
   ...
   queue_unplugged(q...);
     scsi_request_fn();
       spin_unlock_irq(q->queue_lock);

-------------------^^^ ????

       spin_lock_irq(q->queue_lock);
     spin_unlock(q->queue_lock);
 local_irq_restore(flags);

Aside from that, the detached interrupt disabling is a constant pain for
PREEMPT_RT, as it requires patching and special casing when RT is enabled,
while with the spin_*_irq() variants this happens automatically.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/20110622174919.025446432@linutronix.de
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: Remove redundant WARN_ON()
Anna-Maria Gleixner [Fri, 4 May 2018 14:32:46 +0000 (16:32 +0200)]
block: Remove redundant WARN_ON()

Commit 2fff8a924d4c ("block: Check locking assumptions at runtime") added a
lockdep_assert_held(q->queue_lock) which makes the WARN_ON() redundant
because lockdep will detect and warn about context violations.

The unconditional WARN_ON() does not provide real additional value, so it
can be removed.

Signed-off-by: Anna-Maria Gleixner <anna-maria@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  block: don't disable interrupts during kmap_atomic()
Sebastian Andrzej Siewior [Fri, 4 May 2018 14:32:45 +0000 (16:32 +0200)]
block: don't disable interrupts during kmap_atomic()

bounce_copy_vec() disables interrupts around kmap_atomic(). This is a
leftover from the old kmap_atomic() implementation which relied on fixed
mapping slots, so the caller had to make sure that the same slot could not
be reused from an interrupting context.

kmap_atomic() was changed to dynamic slots long ago, and commit 1ec9c5ddc17a
("include/linux/highmem.h: remove the second argument of k[un]map_atomic()")
removed the slot assignments, but the callers were not checked for the now
redundant interrupt disabling.

Remove the conditional interrupt disable.

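The resulting pattern, sketched (bounce_copy_vec copies a bounce page back into the original bvec):

  /* kmap_atomic() uses dynamic slots, so no local_irq_save() is needed
   * around it any more; kmap_atomic itself disables preemption. */
  vto = kmap_atomic(to->bv_page);
  memcpy(vto + to->bv_offset, vfrom, to->bv_len);
  kunmap_atomic(vto);
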
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
6 years ago  Fix typo in comment.
Florian La Roche [Sun, 6 May 2018 17:34:07 +0000 (19:34 +0200)]
Fix typo in comment.

CONFIG_PRREMPT -> CONFIG_PREEMPT

Signed-off-by: Florian La Roche <Florian.LaRoche@googlemail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
6 years ago  Merge tag 'devicetree-fixes-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Mon, 7 May 2018 15:33:29 +0000 (05:33 -1000)]
Merge tag 'devicetree-fixes-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux

Pull DeviceTree fixes from Rob Herring:

 - fix path to display timing binding

 - fix some typos in interrupt-names and clock-names

 - fix a resource leak on overlay removal

 - add missing documentation for R8A77965 DMA, serial, and net

 - cleanup sunxi pinctrl description

 - add Kieback & Peter GmbH vendor prefix

* tag 'devicetree-fixes-for-4.17' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
  dt-bindings: panel: lvds: Fix path to display timing bindings
  dt-bindings: mvebu-uart: DT fix s/interrupts-names/interrupt-names/
  dt-bindings: meson-uart: DT fix s/clocks-names/clock-names/
  of: overlay: Stop leaking resources on overlay removal
  dtc: checks: drop warning for missing PCI bridge bus-range
  dt-bindings: dmaengine: rcar-dmac: document R8A77965 support
  dt-bindings: serial: sh-sci: Add support for r8a77965 (H)SCIF
  dt-bindings: net: ravb: Add support for r8a77965 SoC
  dt-bindings: pinctrl: sunxi: Fix reference to driver
  doc: Add vendor prefix for Kieback & Peter GmbH