nvme: fix setting the queue depth in nvme_alloc_io_tag_set
author Christoph Hellwig <hch@lst.de>
Sun, 25 Dec 2022 10:32:31 +0000 (11:32 +0100)
committer Jens Axboe <axboe@kernel.dk>
Mon, 26 Dec 2022 19:10:51 +0000 (12:10 -0700)
commit 33b93727ce90c8db916fb071ed13e90106339754
tree ff95728d96647c217de8a0bc4f3e04fcc9eae808
parent 246cf66e300b76099b5dbd3fdd39e9a5dbc53f02
nvme: fix setting the queue depth in nvme_alloc_io_tag_set

While the CAP.MQES field in NVMe is a 0's based field with a natural
off-by-one, we also need to account for the queue wrap condition and
thus undo that off-by-one again in nvme_alloc_io_tag_set.  This was
never done properly by the fabrics drivers, but they get away with it
because there is no actual physical queue that can wrap around; it
became a real problem when converting over the PCIe driver.  Also add
back the BLK_MQ_MAX_DEPTH check that was lost in the same commit.
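
For illustration, a standalone userspace sketch of the depth
calculation described above (not the patch itself): BLK_MQ_MAX_DEPTH
and the sqsize/queue_depth names mirror the kernel ones, and the MQES
value below is made up.

	#include <stdio.h>

	#define BLK_MQ_MAX_DEPTH 10240	/* blk-mq's limit at the time of writing */

	static unsigned int min_u(unsigned int a, unsigned int b)
	{
		return a < b ? a : b;
	}

	int main(void)
	{
		unsigned int mqes = 1023;	/* CAP.MQES from the controller, 0's based (made up) */
		unsigned int sqsize = mqes;	/* the driver keeps ctrl->sqsize 0's based as well */
		unsigned int queue_depth;

		/*
		 * The hardware queue has mqes + 1 slots, but one slot must stay
		 * empty so a full queue can be told apart from an empty one
		 * (the queue wrap condition), so only sqsize commands can ever
		 * be outstanding.  Also cap the depth at BLK_MQ_MAX_DEPTH - 1
		 * to stay within blk-mq's limit.
		 */
		queue_depth = min_u(sqsize, BLK_MQ_MAX_DEPTH - 1);

		printf("usable tag set queue depth: %u\n", queue_depth);
		return 0;
	}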

Fixes: 0da7feaa5913 ("nvme-pci: use the tagset alloc/free helpers")
Reported-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Hugh Dickins <hughd@google.com>
Link: https://lore.kernel.org/r/20221225103234.226794-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
drivers/nvme/host/core.c