platform/kernel/linux-starfive.git
4 years agoRDMA/cma: Fix use after free race in roce multicast join
Jason Gunthorpe [Wed, 2 Sep 2020 08:11:22 +0000 (11:11 +0300)]
RDMA/cma: Fix use after free race in roce multicast join

The roce path triggers a work queue that continues to touch the id_priv
but doesn't hold any reference on it. Further, unlike in the IB case, the
work queue is not fenced during rdma_destroy_id().

This can trigger a use after free if a destroy is triggered in the
incredibly narrow window between the queue_work() and the moment the work
starts and obtains the handler_mutex.

The only purpose of this work queue is to run the ULP event callback from
the standard context, so switch the design to use the existing
cma_work_handler() scheme. This simplifies quite a lot of the flow:

- Use the cma_work_handler() callback to launch the work for roce. This
  requires generating the event synchronously inside the
  rdma_join_multicast(), which in turn means the dummy struct
  ib_sa_multicast can become a simple stack variable.

- cma_work_handler() used the id_priv kref, so we can entirely eliminate
  the kref inside struct cma_multicast. Since the cma_multicast never
  leaks into an unprotected work queue the kfree can be done at the same
  time as for IB.

- Eliminating the general multicast.ib requires using cma_set_mgid() in a
  few places to recompute the mgid.

Fixes: 3c86aa70bf67 ("RDMA/cm: Add RDMA CM support for IBoE devices")
Link: https://lore.kernel.org/r/20200902081122.745412-9-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/cma: Consolidate the destruction of a cma_multicast in one place
Jason Gunthorpe [Wed, 2 Sep 2020 08:11:21 +0000 (11:11 +0300)]
RDMA/cma: Consolidate the destruction of a cma_multicast in one place

Two places were open coding this sequence; also pull in
cma_leave_roce_mc_group(), which was called only once.

Link: https://lore.kernel.org/r/20200902081122.745412-8-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/cma: Remove dead code for kernel rdmacm multicast
Jason Gunthorpe [Wed, 2 Sep 2020 08:11:20 +0000 (11:11 +0300)]
RDMA/cma: Remove dead code for kernel rdmacm multicast

There is no kernel user of RDMA CM multicast so this code managing the
multicast subscription of the kernel-only internal QP is dead. Remove it.

This makes the bug fixes in the next patches much simpler.

Link: https://lore.kernel.org/r/20200902081122.745412-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/cma: Combine cma_ndev_work with cma_work
Jason Gunthorpe [Wed, 2 Sep 2020 08:11:19 +0000 (11:11 +0300)]
RDMA/cma: Combine cma_ndev_work with cma_work

These are the same thing, except that cma_ndev_work doesn't have a state
transition. Signal 'no state transition' by setting both old_state and
new_state to 0.

In all cases the handler function should not be called once
rdma_destroy_id() has progressed past setting the state.
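
A minimal sketch of the combined work item this describes (struct and field
names follow the description above and may not match the patch exactly):

  struct cma_work {
          struct work_struct work;
          struct rdma_id_private *id;
          enum rdma_cm_state old_state;   /* 0 (RDMA_CM_IDLE) == no transition */
          enum rdma_cm_state new_state;   /* 0 (RDMA_CM_IDLE) == no transition */
          struct rdma_cm_event event;
  };

The handler can then simply skip the state transition whenever both fields
are left at 0.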

Link: https://lore.kernel.org/r/20200902081122.745412-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/cma: Remove cma_comp()
Jason Gunthorpe [Wed, 2 Sep 2020 08:11:18 +0000 (11:11 +0300)]
RDMA/cma: Remove cma_comp()

The only place that still uses it is rdma_join_multicast(), which is only
doing a sanity check that the caller hasn't done something wrong and
doesn't need the spinlock.

At least in the case of rdma_join_multicast(), the information it needs
remains valid from the time the ID enters these states until it is
destroyed. Similarly there is no reason to check for these specific states
in the handler callback; instead use the usual check for a destroyed ID
under the handler_mutex.

Link: https://lore.kernel.org/r/20200902081122.745412-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/cma: Fix locking for the RDMA_CM_LISTEN state
Jason Gunthorpe [Wed, 2 Sep 2020 08:11:17 +0000 (11:11 +0300)]
RDMA/cma: Fix locking for the RDMA_CM_LISTEN state

There is a strange unlocked read of the ID state when checking for
reuseaddr. This is because an ID cannot be reusable once it becomes a
listening ID. Instead of using the state to exclude reuse, just clear it
as part of rdma_listen()'s flow to convert reusable into not reusable.

Once an ID goes to listen there is no way back out, and the only use of
reusable is on the bind_list check.

Finally, update the checks under handler_mutex to use READ_ONCE and audit
that once RDMA_CM_LISTEN is observed in a req callback it is stable under
the handler_mutex.

Link: https://lore.kernel.org/r/20200902081122.745412-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/cma: Make the locking for automatic state transition more clear
Jason Gunthorpe [Wed, 2 Sep 2020 08:11:16 +0000 (11:11 +0300)]
RDMA/cma: Make the locking for automatic state transition more clear

Re-organize things so the state variable is not read unlocked. The first
attempt to go directly from ADDR_BOUND immediately tells us if the ID is
already bound; if we can't do that, then the attempt inside
rdma_bind_addr() to go from IDLE to ADDR_BOUND confirms the ID needs
binding.

Link: https://lore.kernel.org/r/20200902081122.745412-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/cma: Fix locking for the RDMA_CM_CONNECT state
Jason Gunthorpe [Wed, 2 Sep 2020 08:11:15 +0000 (11:11 +0300)]
RDMA/cma: Fix locking for the RDMA_CM_CONNECT state

It is currently a bit confusing, but the design is that if the
handler_mutex is held and the state is RDMA_CM_CONNECT, then the state
cannot leave RDMA_CM_CONNECT without also serializing with the
handler_mutex.

Make this clearer by adding a direct assertion, fixing the usage in
rdma_connect and generally using READ_ONCE to read the state value.
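
A hedged sketch of the pattern (identifier names are taken from the
description, not necessarily the exact code):

  /* In the CM event handler path: assert the design rule directly */
  lockdep_assert_held(&id_priv->handler_mutex);

  if (READ_ONCE(id_priv->state) != RDMA_CM_CONNECT)
          return -EINVAL;
  /* handler_mutex is held, so the ID cannot leave RDMA_CM_CONNECT here */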

Link: https://lore.kernel.org/r/20200902081122.745412-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/ipoib: Convert to use DEFINE_SEQ_ATTRIBUTE macro
Liu Shixin [Wed, 16 Sep 2020 02:50:22 +0000 (10:50 +0800)]
RDMA/ipoib: Convert to use DEFINE_SEQ_ATTRIBUTE macro

Use DEFINE_SEQ_ATTRIBUTE macro to simplify the code.
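
For reference, the macro generates the _open() helper and file_operations
from a seq_operations named <name>_sops; a minimal usage sketch (names and
iterator callbacks hypothetical):

  static const struct seq_operations foo_sops = {
          .start = foo_seq_start,
          .next  = foo_seq_next,
          .stop  = foo_seq_stop,
          .show  = foo_seq_show,
  };
  /* generates foo_open() and foo_fops wired to foo_sops and seq_read/seq_lseek */
  DEFINE_SEQ_ATTRIBUTE(foo);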

Link: https://lore.kernel.org/r/20200916025022.3992627-1-liushixin2@huawei.com
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/i40iw: Avoid typecast from void to pci_dev
Parav Pandit [Mon, 14 Sep 2020 12:35:28 +0000 (15:35 +0300)]
RDMA/i40iw: Avoid typecast from void to pci_dev

hw_context stores a pci_device pointer. Use the correct type instead of
void * and avoid typecasts.
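
The gist of the change, sketched with illustrative names (not the driver's
exact structures):

  struct i40iw_hw_ctx {
          struct pci_dev *pcidev;         /* was: void *hw_context */
  };

  static inline struct pci_dev *ctx_to_pdev(struct i40iw_hw_ctx *ctx)
  {
          return ctx->pcidev;             /* no cast back from void * needed */
  }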

Link: https://lore.kernel.org/r/20200914123528.270382-1-parav@mellanox.com
Signed-off-by: Parav Pandit <parav@nvidia.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Add support for user mode XRC-SRQs
Yuval Basson [Wed, 22 Jul 2020 10:23:39 +0000 (13:23 +0300)]
RDMA/qedr: Add support for user mode XRC-SRQs

Implement the XRC specific verbs.  The additional QP type introduced new
logic to the rest of the verbs that now require distinguishing whether a
QP has an "RQ" or an "SQ" or both.

Link: https://lore.kernel.org/r/20200722102339.30104-1-ybason@marvell.com
Signed-off-by: Michal Kalderon <mkalderon@marvell.com>
Signed-off-by: Yuval Basson <ybason@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Fix function prototype parameters alignment
Michal Kalderon [Wed, 2 Sep 2020 16:57:41 +0000 (19:57 +0300)]
RDMA/qedr: Fix function prototype parameters alignment

The alignment of the parameters was off by one.

Link: https://lore.kernel.org/r/20200902165741.8355-9-michal.kalderon@marvell.com
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Fix inline size returned for iWARP
Michal Kalderon [Wed, 2 Sep 2020 16:57:40 +0000 (19:57 +0300)]
RDMA/qedr: Fix inline size returned for iWARP

Commit 59e8970b3798 ("RDMA/qedr: Return max inline data in QP query
result") changed query_qp max_inline size to return the max RoCE inline
size.  When iWARP was introduced, this should have been modified to return
the max inline size based on the protocol.  This size is cached in the
device attributes.

Fixes: 69ad0e7fe845 ("RDMA/qedr: Add support for iWARP in user space")
Link: https://lore.kernel.org/r/20200902165741.8355-8-michal.kalderon@marvell.com
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Fix iWARP active mtu display
Michal Kalderon [Wed, 2 Sep 2020 16:57:39 +0000 (19:57 +0300)]
RDMA/qedr: Fix iWARP active mtu display

Currently iWARP does not support MTU change. Notify the user when the MTU
changes that a reload of qedr is required for the change to take effect,
and display the correct active MTU.

Fixes: f5b1b1775be6 ("RDMA/qedr: Add iWARP support in existing verbs")
Link: https://lore.kernel.org/r/20200902165741.8355-7-michal.kalderon@marvell.com
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoqede: Notify qedr when mtu has changed
Michal Kalderon [Wed, 2 Sep 2020 16:57:38 +0000 (19:57 +0300)]
qede: Notify qedr when mtu has changed

MTU change via ethtool is currently not supported for iWARP.  Notify the
qedr driver so that appropriate logging can take place.

Link: https://lore.kernel.org/r/20200902165741.8355-6-michal.kalderon@marvell.com
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Fix return code if accept is called on a destroyed qp
Michal Kalderon [Wed, 2 Sep 2020 16:57:37 +0000 (19:57 +0300)]
RDMA/qedr: Fix return code if accept is called on a destroyed qp

In iWARP, accept could be called after a QP is already destroyed.  In this
case an error should be returned, not success.

Fixes: 82af6d19d8d9 ("RDMA/qedr: Fix synchronization methods and memory leaks in qedr")
Link: https://lore.kernel.org/r/20200902165741.8355-5-michal.kalderon@marvell.com
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Fix use of uninitialized field
Michal Kalderon [Wed, 2 Sep 2020 16:57:36 +0000 (19:57 +0300)]
RDMA/qedr: Fix use of uninitialized field

dev->attr.page_size_caps was used uninitialized when setting the device
attributes.

Fixes: ec72fce401c6 ("qedr: Add support for RoCE HW init")
Link: https://lore.kernel.org/r/20200902165741.8355-4-michal.kalderon@marvell.com
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Fix doorbell setting
Michal Kalderon [Wed, 2 Sep 2020 16:57:35 +0000 (19:57 +0300)]
RDMA/qedr: Fix doorbell setting

Change the doorbell setting so that the maximum of the last and current
values is written. This avoids doorbells being lost.
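
A hedged sketch of the idea (variable names are illustrative, not the
driver's exact code):

  /* Never let the doorbell value go backwards, or the HW may drop it */
  u32 val = max(new_cons, last_cons);

  last_cons = val;
  writel(val, db_addr);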

Fixes: a7efd7773e31 ("qedr: Add support for PD,PKEY and CQ verbs")
Link: https://lore.kernel.org/r/20200902165741.8355-3-michal.kalderon@marvell.com
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Fix qp structure memory leak
Michal Kalderon [Wed, 2 Sep 2020 16:57:34 +0000 (19:57 +0300)]
RDMA/qedr: Fix qp structure memory leak

The qedr_qp structure wasn't freed when the protocol was RoCE.  Below is
the kmemleak output from running a basic RoCE scenario.

unreferenced object 0xffff927ad7e22c00 (size 1024):
  comm "ib_send_bw", pid 7082, jiffies 4384133693 (age 274.698s)
  hex dump (first 32 bytes):
    00 b0 cd a2 79 92 ff ff 00 3f a1 a2 79 92 ff ff  ....y....?..y...
    00 ee 5c dd 80 92 ff ff 00 f6 5c dd 80 92 ff ff  ..\.......\.....
  backtrace:
    [<00000000b2ba0f35>] qedr_create_qp+0xb3/0x6c0 [qedr]
    [<00000000e85a43dd>] ib_uverbs_handler_UVERBS_METHOD_QP_CREATE+0x555/0xad0 [ib_uverbs]
    [<00000000fee4d029>] ib_uverbs_cmd_verbs+0xa5a/0xb80 [ib_uverbs]
    [<000000005d622660>] ib_uverbs_ioctl+0xa4/0x110 [ib_uverbs]
    [<00000000eb4cdc71>] ksys_ioctl+0x87/0xc0
    [<00000000abe6b23a>] __x64_sys_ioctl+0x16/0x20
    [<0000000046e7cef4>] do_syscall_64+0x4d/0x90
    [<00000000c6948f76>] entry_SYSCALL_64_after_hwframe+0x44/0xa9

Fixes: 1212767e23bb ("qedr: Add wrapping generic structure for qpidr and adjust idr routines.")
Link: https://lore.kernel.org/r/20200902165741.8355-2-michal.kalderon@marvell.com
Signed-off-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/core: Added missing WR and WC opcodes
Bob Pearson [Thu, 3 Sep 2020 22:40:34 +0000 (17:40 -0500)]
RDMA/core: Added missing WR and WC opcodes

Add work completion opcodes to a new ib_uverbs_wc_opcode enum in
ib_user_verbs.h. This plays the same role as ib_uverbs_wr_opcode in
documenting the opcodes in the user space API.

Assign the IB_WC_XXX opcodes in ib_verbs.h to the IB_UVERBS_WC_XXX values
where they are defined. This follows the same pattern as the IB_WR_XXX
opcodes, and fixes an incorrect value for LSO that had crept in but is not
currently being used.

Add the missing IB_WR_BIND_MW opcode in ib_verbs.h.
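
The pattern described above, sketched (only a few opcodes shown; the exact
value assignments follow the existing enum convention and are not reproduced
authoritatively here):

  /* uapi: document the WC opcodes visible to user space */
  enum ib_uverbs_wc_opcode {
          IB_UVERBS_WC_SEND       = 0,
          IB_UVERBS_WC_RDMA_WRITE = 1,
          IB_UVERBS_WC_RDMA_READ  = 2,
          /* ... */
  };

  /* kernel: define the in-kernel values in terms of the uapi ones */
  enum ib_wc_opcode {
          IB_WC_SEND       = IB_UVERBS_WC_SEND,
          IB_WC_RDMA_WRITE = IB_UVERBS_WC_RDMA_WRITE,
          IB_WC_RDMA_READ  = IB_UVERBS_WC_RDMA_READ,
          /* ... */
  };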

Link: https://lore.kernel.org/r/20200903224039.437391-2-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/ocrdma: Remove fbo from MR
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:58 +0000 (19:41 -0300)]
RDMA/ocrdma: Remove fbo from MR

This is always the same value as the IOVA masked by the page size, so just
use that clearer calculation directly.

It is unclear if the ocrdma hardware can actually support a true fbo; if so,
it could use a different algorithm to compute the best page size.
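
In other words, the equivalence the change relies on is simply:

  /* the first-byte-offset is just the IOVA's offset within the HW page */
  u64 fbo = iova & (page_size - 1);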

Link: https://lore.kernel.org/r/17-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Remove fbo and zbva from the MR
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:57 +0000 (19:41 -0300)]
RDMA/qedr: Remove fbo and zbva from the MR

zbva is always false, so fbo is never read.

A 'zero-based-virtual-address' is simply IOVA == 0, and the driver already
supports this.

Link: https://lore.kernel.org/r/16-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/mlx4: Use ib_umem_num_dma_blocks()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:56 +0000 (19:41 -0300)]
RDMA/mlx4: Use ib_umem_num_dma_blocks()

For the calls linked to mlx4_ib_umem_calc_optimal_mtt_size(), use
ib_umem_num_dma_blocks() inside the function; it is just some weird static
default.

All other places are just using it with PAGE_SIZE, switch to
ib_umem_num_dma_blocks().

As this is the last call site, remove ib_umem_num_count().

Link: https://lore.kernel.org/r/15-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/pvrdma: Use ib_umem_num_dma_blocks() instead of ib_umem_page_count()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:55 +0000 (19:41 -0300)]
RDMA/pvrdma: Use ib_umem_num_dma_blocks() instead of ib_umem_page_count()

This driver always uses PAGE_SIZE.

Link: https://lore.kernel.org/r/14-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/ocrdma: Use ib_umem_num_dma_blocks() instead of ib_umem_page_count()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:54 +0000 (19:41 -0300)]
RDMA/ocrdma: Use ib_umem_num_dma_blocks() instead of ib_umem_page_count()

This driver always uses a DMA array made up of PAGE_SIZE elements, so just
use ib_umem_num_dma_blocks().

Since rdma_for_each_dma_block() always iterates exactly
ib_umem_num_dma_blocks() there is no need for the early exit check in
build_user_pbes(), delete it.

Link: https://lore.kernel.org/r/13-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/hns: Use ib_umem_num_dma_blocks() instead of opencoding
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:53 +0000 (19:41 -0300)]
RDMA/hns: Use ib_umem_num_dma_blocks() instead of opencoding

mtr_umem_page_count() does the same thing, replace it with the core code.

Also, ib_umem_find_best_pgsz() should always be called to check that the
umem meets the page_size requirement. If there is a limited set of
page_sizes that work, then pgsz_bitmap should be set to that set. 0 is a
failure and the umem cannot be used.

Lightly tidy the control flow to implement this flow properly.

Link: https://lore.kernel.org/r/12-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/bnxt: Do not use ib_umem_page_count() or ib_umem_num_pages()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:52 +0000 (19:41 -0300)]
RDMA/bnxt: Do not use ib_umem_page_count() or ib_umem_num_pages()

ib_umem_page_count() returns the number of 4k entries required for a DMA
map, but bnxt_re already computes a variable page size. The correct API to
determine the size of the page table array is ib_umem_num_dma_blocks().

Fix the overallocation of the page array in fill_umem_pbl_tbl() when
working with larger page sizes by using the right function. Lightly
re-organize this function to make it clearer.

Replace the other calls to ib_umem_num_pages().
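
A sketch of the sizing change (variable names illustrative):

  /* before: always counts 4k entries, regardless of the chosen HW page size */
  pages = ib_umem_page_count(umem);

  /* after: count blocks of the page size actually programmed into the HW */
  pages = ib_umem_num_dma_blocks(umem, page_size);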

Fixes: d85582517e91 ("RDMA/bnxt_re: Use core helpers to get aligned DMA address")
Link: https://lore.kernel.org/r/11-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Selvin Xavier <selvin.xavier@broadcom.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Use ib_umem_num_dma_blocks() instead of ib_umem_page_count()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:51 +0000 (19:41 -0300)]
RDMA/qedr: Use ib_umem_num_dma_blocks() instead of ib_umem_page_count()

The length of the list populated by qedr_populate_pbls() should be
calculated using ib_umem_num_dma_blocks() with the same size/shift passed
to qedr_populate_pbls().

Link: https://lore.kernel.org/r/10-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Use rdma_umem_for_each_dma_block() instead of open-coding
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:50 +0000 (19:41 -0300)]
RDMA/qedr: Use rdma_umem_for_each_dma_block() instead of open-coding

This loop splits the DMA SGL into pg_shift-sized pages; use the core code
for this directly.

Link: https://lore.kernel.org/r/9-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/i40iw: Use ib_umem_num_dma_pages()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:49 +0000 (19:41 -0300)]
RDMA/i40iw: Use ib_umem_num_dma_pages()

If ib_umem_find_best_pgsz() returns > PAGE_SIZE then the equation here is
not correct. 'start' should be 'virt'. Change it to use the core code for
page_num and the canonical calculation of page_shift.

Fixes: eb52c0333f06 ("RDMA/i40iw: Use core helpers to get aligned DMA address within a supported page size")
Link: https://lore.kernel.org/r/8-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/efa: Use ib_umem_num_dma_pages()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:48 +0000 (19:41 -0300)]
RDMA/efa: Use ib_umem_num_dma_pages()

If ib_umem_find_best_pgsz() returns > PAGE_SIZE then the equation here is
not correct. 'start' should be 'virt'. Change it to use the core code for
page_num and the canonical calculation of page_shift.

Fixes: 40ddb3f02083 ("RDMA/efa: Use API to get contiguous memory blocks aligned to device supported page size")
Link: https://lore.kernel.org/r/7-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Tested-by: Gal Pressman <galpress@amazon.com>
Acked-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:47 +0000 (19:41 -0300)]
RDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks()

ib_umem_num_pages() should only be used by things working with the SGL in
CPU pages directly.

Drivers building DMA lists should use the new ib_umem_num_dma_blocks()
which returns the number of blocks rdma_umem_for_each_dma_block() will
return.

To make this general for DMA drivers requires a different implementation.
Computing DMA block count based on umem->address only works if the
requested page size is < PAGE_SIZE and/or the IOVA == umem->address.

Instead the number of DMA pages should be computed in the IOVA address
space, not umem->address. Thus the IOVA has to be stored inside the umem
so it can be used for these calculations.

For now set it to umem->address by default and fix it up if
ib_umem_find_best_pgsz() was called. This allows drivers to be converted
to ib_umem_num_dma_blocks() safely.
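
A sketch of the IOVA-based computation (close to, but not claimed to be
identical to, the core helper):

  static inline size_t num_dma_blocks(u64 iova, size_t length, unsigned long pgsz)
  {
          /* count pgsz-aligned blocks covering [iova, iova + length) */
          return (size_t)((ALIGN(iova + length, pgsz) -
                           ALIGN_DOWN(iova, pgsz)) / pgsz);
  }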

Link: https://lore.kernel.org/r/6-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/umem: Replace for_each_sg_dma_page with rdma_umem_for_each_dma_block
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:46 +0000 (19:41 -0300)]
RDMA/umem: Replace for_each_sg_dma_page with rdma_umem_for_each_dma_block

Generally drivers should be using this core helper to split up the umem
into DMA pages.

These drivers are all probably wrong in some way to pass PAGE_SIZE in as
the HW page size. Either the driver doesn't support other page sizes and
it should use 4096, or the driver does support other page sizes and should
use ib_umem_find_best_pgsz() to select the best HW page size from the HW
supported set.

The only case it could be correct is if the HW has a global setting for
PAGE_SIZE set at driver initialization time.

Link: https://lore.kernel.org/r/5-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/umem: Add rdma_umem_for_each_dma_block()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:45 +0000 (19:41 -0300)]
RDMA/umem: Add rdma_umem_for_each_dma_block()

This helper does the same as rdma_for_each_block(), except it works on a
umem. This simplifies most of the call sites.
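
Typical call-site shape after the conversion (the pbl array and index are
assumptions for illustration):

  struct ib_block_iter biter;
  int i = 0;

  rdma_umem_for_each_dma_block(umem, &biter, page_size)
          pbl[i++] = rdma_block_iter_dma_address(&biter);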

Link: https://lore.kernel.org/r/4-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Acked-by: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/umem: Use simpler logic for ib_umem_find_best_pgsz()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:44 +0000 (19:41 -0300)]
RDMA/umem: Use simpler logic for ib_umem_find_best_pgsz()

The calculation in rdma_find_pg_bit() is fairly complicated, and the
function is never called anywhere else. Inline a simpler version into
ib_umem_find_best_pgsz().

Link: https://lore.kernel.org/r/3-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/umem: Prevent small pages from being returned by ib_umem_find_best_pgsz()
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:43 +0000 (19:41 -0300)]
RDMA/umem: Prevent small pages from being returned by ib_umem_find_best_pgsz()

rdma_for_each_block() makes assumptions about how the SGL is constructed
that don't work if the block size is below the page size used to build
the SGL.

The rules for umem SGL construction require that the SGs all be PAGE_SIZE
aligned and that the actual byte offset of the VA range is not encoded
inside the SGL using offset and length. So rdma_for_each_block() has no
idea where the actual starting/ending point is to compute the first/last
block boundary if the starting address falls within an SGL.

Fixing the SGL construction turns out to be really hard, and will be the
subject of other patches. For now block smaller pages.

Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/2-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary
Jason Gunthorpe [Fri, 4 Sep 2020 22:41:42 +0000 (19:41 -0300)]
RDMA/umem: Fix ib_umem_find_best_pgsz() for mappings that cross a page boundary

It is possible for a single SGL to span an aligned boundary, eg if the SGL
is

  61440 -> 90112

Then the length is 28672, which currently limits the block size to
32k. With a 32k page size the two covering blocks will be:

  32768->65536 and 65536->98304

However, the correct answer is a 128K block size which will span the whole
28672 bytes in a single block.

Instead of limiting based on the length, figure out which high IOVA bits
don't change between the start and end addresses. That determines the
highest useful page size.
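
Worked through with the numbers above (a sketch of the bit logic, not the
exact implementation):

  /* start = 61440 (0xF000), last byte = 61440 + 28672 - 1 = 90111 (0x15FFF) */
  unsigned long changed = 0xF000UL ^ 0x15FFFUL;    /* 0x1AFFF, highest bit 16 */

  /* any page size up to 1 << 17 (128K) keeps the range in one aligned block */
  unsigned long best_pgsz = 1UL << fls64(changed); /* 1 << 17 == 131072 */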

Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/1-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA: Make counters destroy symmetrical
Leon Romanovsky [Mon, 7 Sep 2020 12:09:21 +0000 (15:09 +0300)]
RDMA: Make counters destroy symmetrical

Change counters destroy to return failure like any other verbs destroy;
however, this flow shouldn't return an error at all.

Link: https://lore.kernel.org/r/20200907120921.476363-10-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA: Restore ability to return error for destroy WQ
Leon Romanovsky [Mon, 7 Sep 2020 12:09:20 +0000 (15:09 +0300)]
RDMA: Restore ability to return error for destroy WQ

Make this interface symmetrical to other destroy paths.

Fixes: a49b1dc7ae44 ("RDMA: Convert destroy_wq to be void")
Link: https://lore.kernel.org/r/20200907120921.476363-9-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA: Change XRCD destroy return value
Leon Romanovsky [Mon, 7 Sep 2020 12:09:19 +0000 (15:09 +0300)]
RDMA: Change XRCD destroy return value

Update XRCD destroy flow to allow command failure.

Fixes: 28ad5f65c314 ("RDMA: Move XRCD to be under ib_core responsibility")
Link: https://lore.kernel.org/r/20200907120921.476363-8-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA: Allow fail of destroy CQ
Leon Romanovsky [Mon, 7 Sep 2020 12:09:18 +0000 (15:09 +0300)]
RDMA: Allow fail of destroy CQ

Like any other verbs object, a CQ shouldn't fail during destroy, but
mlx5_ib didn't follow this contract when it mixed IB verbs objects with
DEVX. Such a mix leads to a situation where the FW and the kernel are
fully interdependent on each other's reference counting.

Kernel verbs and drivers that don't have DEVX flows shouldn't fail.

Fixes: e39afe3d6dbd ("RDMA: Convert CQ allocations to be under core responsibility")
Link: https://lore.kernel.org/r/20200907120921.476363-7-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/core: Delete function indirection for alloc/free kernel CQ
Leon Romanovsky [Mon, 7 Sep 2020 12:09:17 +0000 (15:09 +0300)]
RDMA/core: Delete function indirection for alloc/free kernel CQ

The ib_alloc_cq*() and ib_free_cq*() calls are kernel-only verbs to manage
CQs and don't need extra indirection just to call the same functions with a
constant NULL udata parameter.

Link: https://lore.kernel.org/r/20200907120921.476363-6-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA: Restore ability to fail on SRQ destroy
Leon Romanovsky [Mon, 7 Sep 2020 12:09:16 +0000 (15:09 +0300)]
RDMA: Restore ability to fail on SRQ destroy

In a similar way to other IB objects, restore the ability to return an
error on SRQ destroy. Strictly speaking, this change is not necessary, and
is provided here to ensure a symmetrical interface like other destroy
functions.

Fixes: 68e326dea1db ("RDMA: Handle SRQ allocations by IB/core")
Link: https://lore.kernel.org/r/20200907120921.476363-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/mlx5: Issue FW command to destroy SRQ on reentry
Leon Romanovsky [Mon, 7 Sep 2020 12:09:15 +0000 (15:09 +0300)]
RDMA/mlx5: Issue FW command to destroy SRQ on reentry

The HW release can fail and leave the system in a limbo state, where the
SRQ is removed from the table but can't be destroyed later.  In every
reentry, the initial xa_erase_irq() check will fail.

Rewrite the erase logic to keep the index but not store the entry itself.
By doing so, we can safely reinsert the entry in the case of a destroy
failure.
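
A hedged sketch of that reentry-safe pattern (helper name and exact flow are
assumptions based on the description, not necessarily the driver code):

  /* swap the entry for a placeholder but keep the index occupied */
  tmp = xa_cmpxchg_irq(&table->array, srq->srqn, srq, XA_ZERO_ENTRY, 0);
  if (WARN_ON(tmp != srq))
          return -EINVAL;

  err = destroy_srq_fw_cmd(dev, srq);     /* illustrative name */
  if (err) {
          /* put the entry back so a later destroy retry can find it */
          xa_cmpxchg_irq(&table->array, srq->srqn, XA_ZERO_ENTRY, srq, 0);
          return err;
  }
  xa_erase_irq(&table->array, srq->srqn);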

Link: https://lore.kernel.org/r/20200907120921.476363-4-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA: Restore ability to fail on AH destroy
Leon Romanovsky [Mon, 7 Sep 2020 12:09:14 +0000 (15:09 +0300)]
RDMA: Restore ability to fail on AH destroy

Like any other IB verbs objects, AHs are refcounted by ib_core. The
release of those objects is controlled by ib_core with the promise that AH
destroy can't fail.

Being a SW-only object for now, this change makes dealloc_ah() behave like
any other destroy IB flow.

Fixes: d345691471b4 ("RDMA: Handle AH allocations by IB/core")
Link: https://lore.kernel.org/r/20200907120921.476363-3-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA: Restore ability to fail on PD deallocate
Leon Romanovsky [Mon, 7 Sep 2020 12:09:13 +0000 (15:09 +0300)]
RDMA: Restore ability to fail on PD deallocate

The IB verbs objects are counted by the kernel, and ib_core ensures that
deallocating a PD will succeed because it is only called once all other
objects that depend on the PD have been released. This is achieved by
managing various reference counters on such objects.

The mlx5 driver didn't follow this standard flow when it allowed DEVX
objects that are not managed by ib_core to be interleaved with the ones
under ib_core responsibility.

In such interleaved scenarios the deallocate command can fail and ib_core
will leave the uobject in its internal DB and attempt to clean it later to
free resources anyway.

This change partially restores the returned value from dealloc_pd() for
all drivers, while keeping in mind that non-DEVX devices and kernel verbs
paths shouldn't fail.

Fixes: 21a428a019c9 ("RDMA: Handle PD allocations by IB/core")
Link: https://lore.kernel.org/r/20200907120921.476363-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/rtrs-srv: Incorporate ib_register_client into rtrs server init
Md Haris Iqbal [Mon, 7 Sep 2020 10:31:06 +0000 (16:01 +0530)]
RDMA/rtrs-srv: Incorporate ib_register_client into rtrs server init

The rnbd_server module's communication manager (cm) initialization depends
on the registration of the "network namespace subsystem" of the RDMA CM
agent module. As such, when the kernel is configured to load both the
rnbd_server and the RDMA cma module during initialization, and the
rnbd_server module is initialized before the RDMA cma module, a NULL
pointer dereference occurs during the RDMA bind operation.

Call trace:

  Call Trace:
   ? xas_load+0xd/0x80
   xa_load+0x47/0x80
   cma_ps_find+0x44/0x70
   rdma_bind_addr+0x782/0x8b0
   ? get_random_bytes+0x35/0x40
   rtrs_srv_cm_init+0x50/0x80
   rtrs_srv_open+0x102/0x180
   ? rnbd_client_init+0x6e/0x6e
   rnbd_srv_init_module+0x34/0x84
   ? rnbd_client_init+0x6e/0x6e
   do_one_initcall+0x4a/0x200
   kernel_init_freeable+0x1f1/0x26e
   ? rest_init+0xb0/0xb0
   kernel_init+0xe/0x100
   ret_from_fork+0x22/0x30
  Modules linked in:
  CR2: 0000000000000015

All this happens because the cm init is in the call chain of the module
init, which is not a preferred practice.

So remove the call to rdma_create_id() from the module init call chain.
Instead register rtrs-srv as an ib client, which makes sure that the
rdma_create_id() is called only when an ib device is added.

Fixes: 9cb837480424 ("RDMA/rtrs: server: main functionality")
Link: https://lore.kernel.org/r/20200907103106.104530-1-haris.iqbal@cloud.ionos.com
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Md Haris Iqbal <haris.iqbal@cloud.ionos.com>
Reviewed-by: Jack Wang <jinpu.wang@cloud.ionos.com>
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/hns: Avoid unnecessary initialization
Lijun Ou [Tue, 8 Sep 2020 06:52:24 +0000 (14:52 +0800)]
RDMA/hns: Avoid unnecessary initialization

Some variables are initialized at declaration even though they are
assigned before use, so remove the unnecessary initial assignments.

Link: https://lore.kernel.org/r/1599547944-30671-1-git-send-email-oulijun@huawei.com
Signed-off-by: Lijun Ou <oulijun@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/core: Change how failing destroy is handled during uobj abort
Jason Gunthorpe [Wed, 2 Sep 2020 08:17:08 +0000 (11:17 +0300)]
RDMA/core: Change how failing destroy is handled during uobj abort

Currently it triggers a WARN_ON and then goes ahead and destroys the
uobject anyhow, leaking any driver memory.

The only place that leaks driver memory should be during FD close() in
uverbs_destroy_ufile_hw().

Drivers are only allowed to fail uobject destroy if they guarantee that
destroy will eventually succeed. uverbs_destroy_ufile_hw() provides the
loop that gives the driver that chance.

Link: https://lore.kernel.org/r/20200902081708.746631-1-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/rxe: Convert tasklets to use new tasklet_setup() API
Allen Pais [Thu, 3 Sep 2020 06:06:37 +0000 (11:36 +0530)]
RDMA/rxe: Convert tasklets to use new tasklet_setup() API

In preparation for unconditionally passing the struct tasklet_struct
pointer to all tasklet callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.
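
For reference, the general shape of such a conversion (struct and field
names are illustrative, not the exact rxe code):

  static void rxe_do_task(struct tasklet_struct *t)
  {
          /* recover the containing object from the tasklet pointer */
          struct rxe_task *task = from_tasklet(task, t, tasklet);

          task->func(task->arg);          /* illustrative fields */
  }

  /* old: tasklet_init(&task->tasklet, rxe_do_task, (unsigned long)task); */
  static void rxe_init_task(struct rxe_task *task)
  {
          tasklet_setup(&task->tasklet, rxe_do_task);
  }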

Link: https://lore.kernel.org/r/20200903060637.424458-6-allen.lkml@gmail.com
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <allen.lkml@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qib: Convert tasklets to use new tasklet_setup() API
Allen Pais [Thu, 3 Sep 2020 06:06:36 +0000 (11:36 +0530)]
RDMA/qib: Convert tasklets to use new tasklet_setup() API

In preparation for unconditionally passing the struct tasklet_struct
pointer to all tasklet callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.

Link: https://lore.kernel.org/r/20200903060637.424458-5-allen.lkml@gmail.com
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <allen.lkml@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/i40iw: Convert tasklets to use new tasklet_setup() API
Allen Pais [Thu, 3 Sep 2020 06:06:35 +0000 (11:36 +0530)]
RDMA/i40iw: Convert tasklets to use new tasklet_setup() API

In preparation for unconditionally passing the struct tasklet_struct
pointer to all tasklet callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.

Link: https://lore.kernel.org/r/20200903060637.424458-4-allen.lkml@gmail.com
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <allen.lkml@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/hfi1: Convert tasklets to use new tasklet_setup() API
Allen Pais [Thu, 3 Sep 2020 06:06:34 +0000 (11:36 +0530)]
RDMA/hfi1: Convert tasklets to use new tasklet_setup() API

In preparation for unconditionally passing the struct tasklet_struct
pointer to all tasklet callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.

Link: https://lore.kernel.org/r/20200903060637.424458-3-allen.lkml@gmail.com
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <allen.lkml@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/bnxt_re: Convert tasklets to use new tasklet_setup() API
Allen Pais [Thu, 3 Sep 2020 06:06:33 +0000 (11:36 +0530)]
RDMA/bnxt_re: Convert tasklets to use new tasklet_setup() API

In preparation for unconditionally passing the struct tasklet_struct
pointer to all tasklet callbacks, switch to using the new tasklet_setup()
and from_tasklet() to pass the tasklet pointer explicitly.

Link: https://lore.kernel.org/r/20200903060637.424458-2-allen.lkml@gmail.com
Signed-off-by: Romain Perier <romain.perier@gmail.com>
Signed-off-by: Allen Pais <allen.lkml@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/mlx5: Fix potential race between destroy and CQE poll
Leon Romanovsky [Sun, 30 Aug 2020 08:40:04 +0000 (11:40 +0300)]
RDMA/mlx5: Fix potential race between destroy and CQE poll

The SRQ can be destroyed right before mlx5_cmd_get_srq is called.
In such a case the latter will return NULL instead of the expected SRQ.
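
The fix amounts to tolerating a NULL lookup in the poll path (a sketch,
with the surrounding context omitted):

  srq = mlx5_cmd_get_srq(dev, srqn);
  if (!srq) {
          /* the SRQ was destroyed while this CQE was being polled */
          return;
  }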

Fixes: e126ba97dba9 ("mlx5: Add driver for Mellanox Connect-IB adapters")
Link: https://lore.kernel.org/r/20200830084010.102381-5-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/ucma: Fix resource leak on error path
Alex Dewar [Wed, 2 Sep 2020 16:24:51 +0000 (17:24 +0100)]
RDMA/ucma: Fix resource leak on error path

In ucma_process_join(), if the call to xa_alloc() fails, the function will
return without freeing mc. Fix this by jumping to the correct line.

In the process I renamed the jump labels to something more memorable for
extra clarity.
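
The shape of the fix (label and table names here are illustrative, not
necessarily the exact ucma code):

  ret = xa_alloc(&multicast_table, &mc->id, NULL, xa_limit_32b, GFP_KERNEL);
  if (ret)
          goto err_free_mc;       /* previously: returned without freeing mc */
  /* ... rest of the join flow ... */
  return 0;

  err_free_mc:
          kfree(mc);
          return ret;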

Link: https://lore.kernel.org/r/20200902162454.332828-1-alex.dewar90@gmail.com
Addresses-Coverity-ID: 1496814 ("Resource leak")
Fixes: 95fe51096b7a ("RDMA/ucma: Remove mc_list and rely on xarray")
Signed-off-by: Alex Dewar <alex.dewar90@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qedr: Fix reported max_pkeys
Kamal Heib [Thu, 27 Aug 2020 14:16:55 +0000 (17:16 +0300)]
RDMA/qedr: Fix reported max_pkeys

As the qedr driver supports both RoCE and iWARP, make sure to set
max_pkeys only when running in RoCE mode.

Link: https://lore.kernel.org/r/20200827141655.406185-1-kamalheib1@gmail.com
Signed-off-by: Kamal Heib <kamalheib1@gmail.com>
Acked-by: Michal Kalderon <michal.kalderon@marvell.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qib: Tidy up process_cc()
Alex Dewar [Tue, 25 Aug 2020 17:12:44 +0000 (18:12 +0100)]
RDMA/qib: Tidy up process_cc()

This function has a lot of gotos which could be replaced by simple
returns, making the function tidier and less bug prone.

Link: https://lore.kernel.org/r/20200825171242.448447-2-alex.dewar90@gmail.com
Signed-off-by: Alex Dewar <alex.dewar90@gmail.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/qib: Remove superfluous fallthrough statements
Alex Dewar [Tue, 25 Aug 2020 17:12:42 +0000 (18:12 +0100)]
RDMA/qib: Remove superfluous fallthrough statements

Commit 36a8f01cd24b ("IB/qib: Add congestion control agent
implementation") erroneously marked a couple of switch cases as /*
FALLTHROUGH */, which were later converted to fallthrough statements by
commit df561f6688fe ("treewide: Use fallthrough pseudo-keyword"). This
triggered a Coverity warning about unreachable code.

Remove the fallthrough statements.

Link: https://lore.kernel.org/r/20200825171242.448447-1-alex.dewar90@gmail.com
Addresses-Coverity: ("Unreachable code")
Fixes: 36a8f01cd24b ("IB/qib: Add congestion control agent implementation")
Signed-off-by: Alex Dewar <alex.dewar90@gmail.com>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoMerge tag 'v5.9-rc3' into rdma.git for-next
Jason Gunthorpe [Mon, 31 Aug 2020 15:28:12 +0000 (12:28 -0300)]
Merge tag 'v5.9-rc3' into rdma.git for-next

Required due to dependencies in following patches.

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/rxe: Address an issue with hardened user copy
Bob Pearson [Thu, 27 Aug 2020 16:35:36 +0000 (11:35 -0500)]
RDMA/rxe: Address an issue with hardened user copy

Change rxe pools to use kzalloc instead of kmem_cache to allocate memory
for rxe objects. The pools are not really necessary and they trigger
hardened user copy warnings as the ioctl framework copies the QP number
directly to userspace.

Also, the general project to move object allocation to the core code will
eventually clean these out anyhow.
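
The allocation change, roughly (the pool field names are assumptions):

  /* before: obj = kmem_cache_zalloc(pool_cache(pool), GFP_KERNEL); */

  /* after: plain kzalloc, which hardened usercopy is happy with */
  obj = kzalloc(pool->elem_size, GFP_KERNEL);
  if (!obj)
          return NULL;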

Link: https://lore.kernel.org/r/20200827163535.2632-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/rxe: Add SPDX hdrs to rxe source files
Bob Pearson [Thu, 27 Aug 2020 14:54:40 +0000 (09:54 -0500)]
RDMA/rxe: Add SPDX hdrs to rxe source files

Add SPDX headers to all rxe .c and .h files.

Link: https://lore.kernel.org/r/20200827145439.2273-1-rpearson@hpe.com
Signed-off-by: Bob Pearson <rpearson@hpe.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/umem: Fix signature of stub ib_umem_find_best_pgsz()
Jason Gunthorpe [Tue, 25 Aug 2020 18:17:08 +0000 (15:17 -0300)]
RDMA/umem: Fix signature of stub ib_umem_find_best_pgsz()

The original function returns unsigned long and 0 on failure.
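
A sketch of the corrected stub, following that description:

  static inline unsigned long ib_umem_find_best_pgsz(struct ib_umem *umem,
                                                     unsigned long pgsz_bitmap,
                                                     unsigned long virt)
  {
          return 0;       /* 0 == no usable page size, i.e. failure */
  }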

Fixes: 4a35339958f1 ("RDMA/umem: Add API to find best driver supported page size in an MR")
Link: https://lore.kernel.org/r/0-v1-982a13cc5c6d+501ae-fix_best_pgsz_stub_jgg@nvidia.com
Reviewed-by: Gal Pressman <galpress@amazon.com>
Acked-by: Shiraz Saleem <shiraz.saleem@intel.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/core: Trigger a WARN_ON if the driver causes uobjects to become leaked
Jason Gunthorpe [Tue, 25 Aug 2020 16:35:38 +0000 (13:35 -0300)]
RDMA/core: Trigger a WARN_ON if the driver causes uobjects to become leaked

Drivers that fail destroy can cause uverbs to leak uobjects. Drivers are
required to always eventually destroy their uobjects, so trigger a WARN_ON
to detect this driver bug.

Link: https://lore.kernel.org/r/0-v1-b1e0ed400ba9+f7-warn_destroy_ufile_hw_jgg@nvidia.com
Reviewed-by: Leon Romanovsky <leonro@nvidia.com>
Reviewed-by: Gal Pressman <galpress@amazon.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoRDMA/hns: Get udp sport num dynamically instead of using a fixed value
Weihang Li [Fri, 21 Aug 2020 09:31:29 +0000 (17:31 +0800)]
RDMA/hns: Get udp sport num dynamically instead of using a fixed value

The UDP source port number in RoCE v2 is used to create entropy for
network routers (ECMP), load balancers and 802.3ad link aggregation
switching that are not aware of RoCE IB headers. Now that the IB core
provides an interface to get a hashed value for it, the fixed value used
in the QPC and UD WQE in the hns driver can be replaced and the port
number set dynamically.

For the QPC of RC QPs, the value can be hashed from flow_label if the user
passes it in, or otherwise from the remote and local QPNs. For UD WQEs, it
is set according to fl or as a random value.
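
The core helpers referenced here can be used roughly like this (a sketch,
not the exact hns code):

  static u16 pick_udp_sport(u32 fl, u32 lqpn, u32 rqpn)
  {
          if (!fl)
                  fl = rdma_calc_flow_label(lqpn, rqpn);

          return rdma_flow_label_to_udp_sport(fl);
  }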

Link: https://lore.kernel.org/r/1598002289-8611-1-git-send-email-liweihang@huawei.com
Signed-off-by: Weihang Li <liweihang@huawei.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
4 years agoLinux 5.9-rc3
Linus Torvalds [Sun, 30 Aug 2020 23:01:54 +0000 (16:01 -0700)]
Linux 5.9-rc3

4 years agoMerge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Linus Torvalds [Sun, 30 Aug 2020 22:53:44 +0000 (15:53 -0700)]
Merge branch 'linus' of git://git./linux/kernel/git/herbert/crypto-2.6

Pull crypto fixes from Herbert Xu:

 - fix regression in af_alg that affects iwd

 - restore polling delay in qat

 - fix double free in ingenic on error path

 - fix potential build failure in sa2ul due to missing Kconfig dependency

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
  crypto: af_alg - Work around empty control messages without MSG_MORE
  crypto: sa2ul - add Kconfig selects to fix build error
  crypto: ingenic - Drop kfree for memory allocated with devm_kzalloc
  crypto: qat - add delay before polling mailbox

4 years agoMerge tag 'x86-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 30 Aug 2020 19:01:23 +0000 (12:01 -0700)]
Merge tag 'x86-urgent-2020-08-30' of git://git./linux/kernel/git/tip/tip

Pull x86 fixes from Thomas Gleixner:
 "Three interrupt related fixes for X86:

   - Move disabling of the local APIC after invoking fixup_irqs() to
     ensure that interrupts which are incoming are noted in the IRR and
     not ignored.

   - Unbreak affinity setting.

     The rework of the entry code reused the regular exception entry
     code for device interrupts. The vector number is pushed into the
     errorcode slot on the stack which is then lifted into an argument
     and set to -1 because that's regs->orig_ax which is used in quite
     some places to check whether the entry came from a syscall.

     But it was overlooked that orig_ax is used in the affinity cleanup
     code to validate whether the interrupt has arrived on the new
     target. It turned out that this vector check is pointless because
     interrupts are never moved from one vector to another on the same
     CPU. That check is a historical leftover from the time when x86
     supported multi-CPU affinities, but it is no longer needed with the
     now strict single-CPU affinity. Famous last words ...

   - Add a missing check for an empty cpumask into the matrix allocator.

     The affinity change added a warning to catch the case where an
     interrupt is moved on the same CPU to a different vector. This
     triggers because a condition with an empty cpumask returns an
     assignment from the allocator as the allocator uses for_each_cpu()
     without checking the cpumask for being empty. The historical
     inconsistent for_each_cpu() behaviour of ignoring the cpumask and
     unconditionally claiming that CPU0 is in the mask struck again.
     Sigh.

  plus a new entry into the MAINTAINER file for the HPE/UV platform"

* tag 'x86-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq/matrix: Deal with the sillyness of for_each_cpu() on UP
  x86/irq: Unbreak interrupt affinity setting
  x86/hotplug: Silence APIC only after all interrupts are migrated
  MAINTAINERS: Add entry for HPE Superdome Flex (UV) maintainers

4 years agoMerge tag 'irq-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 30 Aug 2020 18:56:54 +0000 (11:56 -0700)]
Merge tag 'irq-urgent-2020-08-30' of git://git./linux/kernel/git/tip/tip

Pull irq fixes from Thomas Gleixner:
 "A set of fixes for interrupt chip drivers:

   - Revert the platform driver conversion of interrupt chip drivers as
     it turned out to create more problems than it solves.

   - Fix a trivial typo in the new module helpers which made probing
     reliably fail.

   - Small fixes in the STM32 and MIPS Ingenic drivers

   - The TI firmware rework which had badly managed dependencies and had
     to wait post rc1"

* tag 'irq-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  irqchip/ingenic: Leave parent IRQ unmasked on suspend
  irqchip/stm32-exti: Avoid losing interrupts due to clearing pending bits by mistake
  irqchip: Revert modular support for drivers using IRQCHIP_PLATFORM_DRIVER helpers
  irqchip: Fix probing deferral when using IRQCHIP_PLATFORM_DRIVER helpers
  arm64: dts: k3-am65: Update the RM resource types
  arm64: dts: k3-am65: ti-sci-inta/intr: Update to latest bindings
  arm64: dts: k3-j721e: ti-sci-inta/intr: Update to latest bindings
  irqchip/ti-sci-inta: Add support for INTA directly connecting to GIC
  irqchip/ti-sci-inta: Do not store TISCI device id in platform device id field
  dt-bindings: irqchip: Convert ti, sci-inta bindings to yaml
  dt-bindings: irqchip: ti, sci-inta: Update docs to support different parent.
  irqchip/ti-sci-intr: Add support for INTR being a parent to INTR
  dt-bindings: irqchip: Convert ti, sci-intr bindings to yaml
  dt-bindings: irqchip: ti, sci-intr: Update bindings to drop the usage of gic as parent
  firmware: ti_sci: Add support for getting resource with subtype
  firmware: ti_sci: Drop unused structure ti_sci_rm_type_map
  firmware: ti_sci: Drop the device id to resource type translation

4 years agoMerge tag 'sched-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 30 Aug 2020 18:50:53 +0000 (11:50 -0700)]
Merge tag 'sched-urgent-2020-08-30' of git://git./linux/kernel/git/tip/tip

Pull scheduler fix from Thomas Gleixner:
 "A single fix for the scheduler:

   - Make is_idle_task() __always_inline to prevent the compiler from
     putting it out of line into the wrong section because it's used
     inside noinstr sections"

* tag 'sched-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  sched: Use __always_inline on is_idle_task()

4 years agoMerge tag 'locking-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 30 Aug 2020 18:43:50 +0000 (11:43 -0700)]
Merge tag 'locking-urgent-2020-08-30' of git://git./linux/kernel/git/tip/tip

Pull locking fixes from Thomas Gleixner:
 "A set of fixes for lockdep, tracing and RCU:

   - Prevent recursion by using raw_cpu_* operations

   - Fixup the interrupt state in the cpu idle code to be consistent

   - Push rcu_idle_enter/exit() invocations deeper into the idle path so
     that the lock operations are inside the RCU watching sections

   - Move trace_cpu_idle() into generic code so it's called before RCU
     goes idle.

   - Handle raw_local_irq* vs. local_irq* operations correctly

   - Move the tracepoints out from under the lockdep recursion handling
     which turned out to be fragile and inconsistent"

* tag 'locking-urgent-2020-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  lockdep,trace: Expose tracepoints
  lockdep: Only trace IRQ edges
  mips: Implement arch_irqs_disabled()
  arm64: Implement arch_irqs_disabled()
  nds32: Implement arch_irqs_disabled()
  locking/lockdep: Cleanup
  x86/entry: Remove unused THUNKs
  cpuidle: Move trace_cpu_idle() into generic code
  cpuidle: Make CPUIDLE_FLAG_TLB_FLUSHED generic
  sched,idle,rcu: Push rcu_idle deeper into the idle path
  cpuidle: Fixup IRQ state
  lockdep: Use raw_cpu_*() for per-cpu variables

4 years agoMerge tag '5.9-rc2-smb-fix' of git://git.samba.org/sfrench/cifs-2.6
Linus Torvalds [Sun, 30 Aug 2020 18:38:21 +0000 (11:38 -0700)]
Merge tag '5.9-rc2-smb-fix' of git://git.samba.org/sfrench/cifs-2.6

Pull cifs fix from Steve French:
 "DFS fix for referral problem when using SMB1"

* tag '5.9-rc2-smb-fix' of git://git.samba.org/sfrench/cifs-2.6:
  cifs: fix check of tcon dfs in smb1

4 years agoMerge tag 'powerpc-5.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc...
Linus Torvalds [Sun, 30 Aug 2020 17:56:12 +0000 (10:56 -0700)]
Merge tag 'powerpc-5.9-4' of git://git./linux/kernel/git/powerpc/linux

Pull powerpc fixes from Michael Ellerman:

 - Revert our removal of PROT_SAO, at least one user expressed an
   interest in using it on Power9. Instead don't allow it to be used in
   guests unless enabled explicitly at compile time.

 - A fix for a crash introduced by a recent change to FP handling.

 - Revert a change to our idle code that left Power10 with no idle
   support.

 - One minor fix for the new scv system call path to set PPR.

 - Fix a crash in our "generic" PMU if branch stack events were enabled.

 - A fix for the IMC PMU, to correctly identify host kernel samples.

 - The ADB_PMU powermac code was found to be incompatible with
   VMAP_STACK, so make them incompatible in Kconfig until the code can
   be fixed.

 - A build fix in drivers/video/fbdev/controlfb.c, and a documentation
   fix.

Thanks to Alexey Kardashevskiy, Athira Rajeev, Christophe Leroy,
Giuseppe Sacco, Madhavan Srinivasan, Milton Miller, Nicholas Piggin,
Pratik Rajesh Sampat, Randy Dunlap, Shawn Anastasio, Vaidyanathan
Srinivasan.

* tag 'powerpc-5.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
  powerpc/32s: Disable VMAP stack with CONFIG_ADB_PMU
  Revert "powerpc/powernv/idle: Replace CPU feature check with PVR check"
  powerpc/perf: Fix reading of MSR[HV/PR] bits in trace-imc
  powerpc/perf: Fix crashes with generic_compat_pmu & BHRB
  powerpc/64s: Fix crash in load_fp_state() due to fpexc_mode
  powerpc/64s: scv entry should set PPR
  Documentation/powerpc: fix malformed table in syscall64-abi
  video: fbdev: controlfb: Fix build for COMPILE_TEST=y && PPC_PMAC=n
  selftests/powerpc: Update PROT_SAO test to skip ISA 3.1
  powerpc/64s: Disallow PROT_SAO in LPARs by default
  Revert "powerpc/64s: Remove PROT_SAO support"

4 years agoMerge tag 'usb-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Linus Torvalds [Sun, 30 Aug 2020 17:51:03 +0000 (10:51 -0700)]
Merge tag 'usb-5.9-rc3' of git://git./linux/kernel/git/gregkh/usb

Pull USB fixes from Greg KH:
 "Let's try this again...  Here are some USB fixes for 5.9-rc3.

  This differs from the previous pull request for this release in that
  the usb gadget patch now does not break some systems, and actually
  does what it was intended to do. Many thanks to Marek Szyprowski for
  quickly noticing and testing the patch from Andy Shevchenko to resolve
  this issue.

  Additionally, some more new USB quirks have been added to get some new
  devices to work properly based on user reports.

  Other than that, the patches are all here, and they contain:

   - usb gadget driver fixes

   - xhci driver fixes

   - typec fixes

   - new quirks and ids

   - fixes for USB patches that went into 5.9-rc1.

  All of these have been tested in linux-next with no reported issues"

* tag 'usb-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (33 commits)
  usb: storage: Add unusual_uas entry for Sony PSZ drives
  USB: Ignore UAS for JMicron JMS567 ATA/ATAPI Bridge
  usb: host: ohci-exynos: Fix error handling in exynos_ohci_probe()
  USB: gadget: u_f: Unbreak offset calculation in VLAs
  USB: quirks: Ignore duplicate endpoint on Sound Devices MixPre-D
  usb: typec: tcpm: Fix source hard reset response for TDA 2.3.1.1 and TDA 2.3.1.2 failures
  USB: PHY: JZ4770: Fix static checker warning.
  USB: gadget: f_ncm: add bounds checks to ncm_unwrap_ntb()
  USB: gadget: u_f: add overflow checks to VLA macros
  xhci: Always restore EP_SOFT_CLEAR_TOGGLE even if ep reset failed
  xhci: Do warm-reset when both CAS and XDEV_RESUME are set
  usb: host: xhci: fix ep context print mismatch in debugfs
  usb: uas: Add quirk for PNY Pro Elite
  tools: usb: move to tools buildsystem
  USB: Fix device driver race
  USB: Also match device drivers using the ->match vfunc
  usb: host: xhci-tegra: fix tegra_xusb_get_phy()
  usb: host: xhci-tegra: otg usb2/usb3 port init
  usb: hcd: Fix use after free in usb_hcd_pci_remove()
  usb: typec: ucsi: Hold con->lock for the entire duration of ucsi_register_port()
  ...

4 years agoMerge tag 'edac_urgent_for_v5.9_rc3' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 30 Aug 2020 17:47:23 +0000 (10:47 -0700)]
Merge tag 'edac_urgent_for_v5.9_rc3' of git://git./linux/kernel/git/ras/ras

Pull EDAC fix from Borislav Petkov:
 "A fix to properly clear ghes_edac driver state on driver remove so
  that a subsequent load can probe the system properly (Shiju Jose)"

* tag 'edac_urgent_for_v5.9_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
  EDAC/ghes: Fix NULL pointer dereference in ghes_edac_register()

4 years agoMerge tag 'dma-mapping-5.9-2' of git://git.infradead.org/users/hch/dma-mapping
Linus Torvalds [Sun, 30 Aug 2020 17:37:18 +0000 (10:37 -0700)]
Merge tag 'dma-mapping-5.9-2' of git://git.infradead.org/users/hch/dma-mapping

Pull dma-mapping fix from Christoph Hellwig:
 "Fix a possibly uninitialized variable (Dan Carpenter)"

* tag 'dma-mapping-5.9-2' of git://git.infradead.org/users/hch/dma-mapping:
  dma-pool: Fix an uninitialized variable bug in atomic_pool_expand()

4 years agogenirq/matrix: Deal with the sillyness of for_each_cpu() on UP
Thomas Gleixner [Sun, 30 Aug 2020 17:07:53 +0000 (19:07 +0200)]
genirq/matrix: Deal with the sillyness of for_each_cpu() on UP

Most of the CPU mask operations behave the same way, but for_each_cpu() and
its variants ignore the cpumask argument and claim that CPU0 is always in
the mask. This is historical, inconsistent and annoying behaviour.

The matrix allocator uses for_each_cpu() and can be called on UP with an
empty cpumask. The calling code does not expect this to succeed, but
until commit e027fffff799 ("x86/irq: Unbreak interrupt affinity setting")
this went unnoticed. That commit added a WARN_ON() to catch cases which
move an interrupt from one vector to another on the same CPU. The warning
triggers on UP.

Add a check for the cpumask being empty to prevent this.
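
For illustration, the guard boils down to an early bail-out of this shape
(a sketch of the pattern only; alloc_from_matrix() is a stand-in name, not
the function actually touched by the patch):

  #include <linux/cpumask.h>
  #include <linux/errno.h>
  #include <linux/printk.h>

  static int alloc_from_matrix(const struct cpumask *msk)
  {
          unsigned int cpu;

          /*
           * On UP, for_each_cpu() below would claim CPU0 even when msk is
           * empty, so reject an empty mask explicitly instead of relying
           * on the loop to do it.
           */
          if (cpumask_empty(msk))
                  return -EINVAL;

          for_each_cpu(cpu, msk)
                  pr_debug("considering CPU %u\n", cpu);

          return 0;
  }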

Fixes: 2f75d9e1c905 ("genirq: Implement bitmap matrix allocator")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
4 years agoMerge tag 'fallthrough-fixes-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sat, 29 Aug 2020 21:21:58 +0000 (14:21 -0700)]
Merge tag 'fallthrough-fixes-5.9-rc3' of git://git./linux/kernel/git/gustavoars/linux

Pull fallthrough fixes from Gustavo A. R. Silva:
 "Fix some minor issues introduced by the recent treewide fallthrough
  conversions:

   - Fix indentation issue

   - Fix erroneous fallthrough annotation

   - Remove unnecessary fallthrough annotation

   - Fix code comment changed by fallthrough conversion"

* tag 'fallthrough-fixes-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gustavoars/linux:
  arm64/cpuinfo: Remove unnecessary fallthrough annotation
  media: dib0700: Fix indentation issue in dib8096_set_param_override()
  afs: Remove erroneous fallthrough annotation
  iio: dpot-dac: fix code comment in dpot_dac_read_raw()

4 years agofsldma: fix very broken 32-bit ppc ioread64 functionality
Linus Torvalds [Sat, 29 Aug 2020 20:50:56 +0000 (13:50 -0700)]
fsldma: fix very broken 32-bit ppc ioread64 functionality

Commit ef91bb196b0d ("kernel.h: Silence sparse warning in
lower_32_bits") caused new warnings to show in the fsldma driver, but
that commit was not to blame: it only exposed some very incorrect code
that tried to take the low 32 bits of an address.

That made no sense for multiple reasons, the most notable one being that
that code was intentionally limited to only 32-bit ppc builds, so "only
low 32 bits of an address" was completely nonsensical.  There were no
high bits to mask off to begin with.

But even more importantly from a correctness standpoint, turning the
address into an integer then caused the subsequent address arithmetic to
be completely wrong too, and the "+1" actually incremented the address
by one, rather than by four.

Which again was incorrect, since the code was reading two 32-bit values
and trying to make a 64-bit end result of it all.  Surprisingly, the
iowrite64() did not suffer from the same odd and incorrect model.

This code has never worked, but it's questionable whether anybody cared:
of the two users that actually read the 64-bit value (by way of some C
preprocessor hackery and eventually the 'get_cdar()' inline function),
one of them explicitly ignored the value, and the other one might just
happen to work despite the incorrect value being read.

This patch at least makes it not fail the build any more, and makes the
logic superficially sane.  Whether it makes any difference to the code
_working_ or not shall remain a mystery.
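
For reference, the sane shape of such a 64-bit MMIO read on 32-bit ppc is
roughly the sketch below: keep the __iomem pointer a pointer, step it in
u32 units so "+ 1" advances by four bytes, and widen only after both
halves have been read. Sketch only; which half sits at the lower offset
and whether the registers are little- or big-endian depend on the device,
so treat both as assumptions, and fsl_ioread64_sketch() is a stand-in
name, not the helper in the driver.

  #include <linux/types.h>
  #include <asm/io.h>             /* in_le32() on powerpc */

  static u64 fsl_ioread64_sketch(u64 __iomem *addr)
  {
          /* low half assumed at the lower offset; adjust for the device */
          u32 lo = in_le32((u32 __iomem *)addr);
          u32 hi = in_le32((u32 __iomem *)addr + 1);

          return ((u64)hi << 32) | lo;
  }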

Compile-tested-by: Guenter Roeck <linux@roeck-us.net>
Reviewed-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
4 years agoMerge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa...
Linus Torvalds [Sat, 29 Aug 2020 20:13:44 +0000 (13:13 -0700)]
Merge branch 'i2c/for-current' of git://git./linux/kernel/git/wsa/linux

Pull i2c fixes from Wolfram Sang:
 "A core fix for ACPI matching and two driver bugfixes"

* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
  i2c: iproc: Fix shifting 31 bits
  i2c: rcar: in slave mode, clear NACK earlier
  i2c: acpi: Remove dead code, i.e. i2c_acpi_match_device()
  i2c: core: Don't fail PRP0001 enumeration when no ID table exist

4 years agoMerge tag 's390-5.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
Linus Torvalds [Sat, 29 Aug 2020 19:51:58 +0000 (12:51 -0700)]
Merge tag 's390-5.9-4' of git://git./linux/kernel/git/s390/linux

Pull s390 fixes from Vasily Gorbik:

 - Disable preemption trace in percpu macros since the lockdep code
   itself uses percpu variables now and it causes recursions.

 - Fix kernel space 4-level paging broken by recent vmem rework.

* tag 's390-5.9-4' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux:
  s390/vmem: fix vmem_add_range for 4-level paging
  s390: don't trace preemption in percpu macros

4 years agoMerge tag 'for-linus-5.9-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sat, 29 Aug 2020 19:44:30 +0000 (12:44 -0700)]
Merge tag 'for-linus-5.9-rc3-tag' of git://git./linux/kernel/git/xen/tip

Pull xen fixes from Juergen Gross:
 "Two fixes for Xen: one needed for ongoing work to support virtio with
  Xen, and one for a corner case in IRQ handling with Xen"

* tag 'for-linus-5.9-rc3-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip:
  arm/xen: Add misuse warning to virt_to_gfn
  xen/xenbus: Fix granting of vmalloc'd memory
  XEN uses irqdesc::irq_data_common::handler_data to store a per interrupt XEN data pointer which contains XEN specific information.

4 years agoMerge tag 'hwmon-for-v5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/groec...
Linus Torvalds [Sat, 29 Aug 2020 19:37:00 +0000 (12:37 -0700)]
Merge tag 'hwmon-for-v5.9-rc3' of git://git./linux/kernel/git/groeck/linux-staging

Pull hwmon fixes from Guenter Roeck:

 - Fix temperature scale in gsc-hwmon driver

 - Fix divide by 0 error in nct7904 driver

 - Drop non-existing attribute from pmbus/isl68137 driver

 - Fix status check in applesmc driver

* tag 'hwmon-for-v5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
  hwmon: (gsc-hwmon) Scale temperature to millidegrees
  hwmon: (applesmc) check status earlier.
  hwmon: (nct7904) Correct divide by 0
  hwmon: (pmbus/isl68137) remove READ_TEMPERATURE_1 telemetry for RAA228228

4 years agoMerge tag 'block-5.9-2020-08-28' of git://git.kernel.dk/linux-block
Linus Torvalds [Fri, 28 Aug 2020 23:38:29 +0000 (16:38 -0700)]
Merge tag 'block-5.9-2020-08-28' of git://git.kernel.dk/linux-block

Pull block fixes from Jens Axboe:

 - nbd timeout fix (Hou)

 - device size fix for loop LOOP_CONFIGURE (Martijn)

 - MD pull from Song with raid5 stripe size fix (Yufen)

* tag 'block-5.9-2020-08-28' of git://git.kernel.dk/linux-block:
  md/raid5: make sure stripe_size as power of two
  loop: Set correct device size when using LOOP_CONFIGURE
  nbd: restore default timeout when setting it to zero

4 years agoMerge tag 'io_uring-5.9-2020-08-28' of git://git.kernel.dk/linux-block
Linus Torvalds [Fri, 28 Aug 2020 23:23:16 +0000 (16:23 -0700)]
Merge tag 'io_uring-5.9-2020-08-28' of git://git.kernel.dk/linux-block

Pull io_uring fixes from Jens Axboe:
 "A few fixes in here, all based on reports and test cases from folks
  using it. Most of it is stable material as well:

   - Hashed work cancelation fix (Pavel)

   - poll wakeup signalfd fix

   - memlock accounting fix

   - nonblocking poll retry fix

   - ensure we never return -ERESTARTSYS for reads

   - ensure offset == -1 is consistent with preadv2() as documented

   - IOPOLL -EAGAIN handling fixes

   - remove useless task_work bounce for block based -EAGAIN retry"

* tag 'io_uring-5.9-2020-08-28' of git://git.kernel.dk/linux-block:
  io_uring: don't bounce block based -EAGAIN retry off task_work
  io_uring: fix IOPOLL -EAGAIN retries
  io_uring: clear req->result on IOPOLL re-issue
  io_uring: make offset == -1 consistent with preadv2/pwritev2
  io_uring: ensure read requests go through -ERESTART* transformation
  io_uring: don't use poll handler if file can't be nonblocking read/written
  io_uring: fix imbalanced sqo_mm accounting
  io_uring: revert consumed iov_iter bytes on error
  io-wq: fix hang after cancelling pending hashed work
  io_uring: don't recurse on tsk->sighand->siglock with signalfd

4 years agoMerge tag 'devprop-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael...
Linus Torvalds [Fri, 28 Aug 2020 20:23:31 +0000 (13:23 -0700)]
Merge tag 'devprop-5.9-rc3' of git://git./linux/kernel/git/rafael/linux-pm

Pull device properties framework fix from Rafael Wysocki:
 "Prevent the promotion of the secondary firmware node of a device to
  the primary one from leaking a pointer (Heikki Krogerus)"

* tag 'devprop-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  device property: Fix the secondary firmware node handling in set_primary_fwnode()

4 years agoMerge tag 'acpi-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael...
Linus Torvalds [Fri, 28 Aug 2020 20:17:14 +0000 (13:17 -0700)]
Merge tag 'acpi-5.9-rc3' of git://git./linux/kernel/git/rafael/linux-pm

Pull ACPI fixes from Rafael Wysocki:
 "These fix two recent issues in the ACPI memory mappings management
  code and tighten up error handling in the ACPI driver for AMD SoCs
  (APD).

  Specifics:

   - Avoid redundant rounding to the page size in acpi_os_map_iomem() to
     address a recently introduced issue with the EFI memory map
     permission check on ARM64 (Ard Biesheuvel).

   - Fix acpi_release_memory() to wait until the memory mappings
     released by it have been really unmapped (Rafael Wysocki).

   - Make the ACPI driver for AMD SoCs (APD) check the return value of
     acpi_dev_get_property() to avoid failures in the cases when the
     device property under inspection is missing (Furquan Shaikh)"

* tag 'acpi-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  ACPI: OSL: Prevent acpi_release_memory() from returning too early
  ACPI: ioremap: avoid redundant rounding to OS page size
  ACPI: SoC: APD: Check return value of acpi_dev_get_property()

4 years agoMerge tag 'pm-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Linus Torvalds [Fri, 28 Aug 2020 20:12:09 +0000 (13:12 -0700)]
Merge tag 'pm-5.9-rc3' of git://git./linux/kernel/git/rafael/linux-pm

Pull power management fixes from Rafael Wysocki:
 "These fix the recently added Tegra194 cpufreq driver and the handling
  of devices using runtime PM during system-wide suspend, improve the
  intel_pstate driver documentation and clean up the cpufreq core.

  Specifics:

   - Make the recently added Tegra194 cpufreq driver use
     read_cpuid_mpir() instead of cpu_logical_map() to avoid exporting
     logical_cpu_map (Sumit Gupta).

   - Drop the automatic system wakeup event reporting for devices with
     pending runtime-resume requests during system-wide suspend to avoid
     spurious aborts of the suspend flow (Rafael Wysocki).

   - Fix build warning in the intel_pstate driver documentation and
     improve the wording in there (Randy Dunlap).

   - Clean up two pieces of code in the cpufreq core (Viresh Kumar)"

* tag 'pm-5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  cpufreq: Use WARN_ON_ONCE() for invalid relation
  cpufreq: No need to verify cpufreq_driver in show_scaling_cur_freq()
  PM: sleep: core: Fix the handling of pending runtime resume requests
  Documentation: fix pm/intel_pstate build warning and wording
  cpufreq: replace cpu_logical_map() with read_cpuid_mpir()

4 years agoMerge branch 'acpi-mm'
Rafael J. Wysocki [Fri, 28 Aug 2020 19:17:56 +0000 (21:17 +0200)]
Merge branch 'acpi-mm'

* acpi-mm:
  ACPI: OSL: Prevent acpi_release_memory() from returning too early
  ACPI: ioremap: avoid redundant rounding to OS page size

4 years agoMerge branch 'pm-cpufreq'
Rafael J. Wysocki [Fri, 28 Aug 2020 18:58:16 +0000 (20:58 +0200)]
Merge branch 'pm-cpufreq'

* pm-cpufreq:
  cpufreq: Use WARN_ON_ONCE() for invalid relation
  cpufreq: No need to verify cpufreq_driver in show_scaling_cur_freq()
  Documentation: fix pm/intel_pstate build warning and wording
  cpufreq: replace cpu_logical_map() with read_cpuid_mpir()

4 years agoMerge tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Linus Torvalds [Fri, 28 Aug 2020 18:37:33 +0000 (11:37 -0700)]
Merge tag 'arm64-fixes' of git://git./linux/kernel/git/arm64/linux

Pull arm64 fixes from Catalin Marinas:

 - Fix kernel build with the integrated LLVM assembler which doesn't see
   the -Wa,-march option.

 - Fix "make vdso_install" when COMPAT_VDSO is disabled.

 - Make KVM more robust if the AT S1E1R instruction triggers an
   exception (architecture corner cases).

* tag 'arm64-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux:
  KVM: arm64: Set HCR_EL2.PTW to prevent AT taking synchronous exception
  KVM: arm64: Survive synchronous exceptions caused by AT instructions
  KVM: arm64: Add kvm_extable for vaxorcism code
  arm64: vdso32: make vdso32 install conditional
  arm64: use a common .arch preamble for inline assembly

4 years agokernel.h: Silence sparse warning in lower_32_bits
Herbert Xu [Fri, 28 Aug 2020 07:11:25 +0000 (17:11 +1000)]
kernel.h: Silence sparse warning in lower_32_bits

I keep getting sparse warnings in crypto such as:

  CHECK   drivers/crypto/ccree/cc_hash.c
   drivers/crypto/ccree/cc_hash.c:49:9: warning: cast truncates bits from constant value (47b5481dbefa4fa4 becomes befa4fa4)
   drivers/crypto/ccree/cc_hash.c:49:26: warning: cast truncates bits from constant value (db0c2e0d64f98fa7 becomes 64f98fa7)
   [.. many more ..]

This patch removes the warning by adding a mask to keep sparse
happy.
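
Concretely, the change amounts to masking before the cast, roughly like
this (the exact macro body in kernel.h may differ slightly):

  /* before: sparse flags the truncating cast on 64-bit constants */
  #define lower_32_bits(n) ((u32)(n))

  /* after: mask first so the truncation is explicit and sparse stays quiet */
  #define lower_32_bits(n) ((u32)((n) & 0xffffffff))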

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
4 years agoMerge tag 'writeback_for_v5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Fri, 28 Aug 2020 17:57:14 +0000 (10:57 -0700)]
Merge tag 'writeback_for_v5.9-rc3' of git://git./linux/kernel/git/jack/linux-fs

Pull writeback fixes from Jan Kara:
 "Fixes for writeback code occasionally skipping writeback of some
  inodes or livelocking sync(2)"

* tag 'writeback_for_v5.9-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
  writeback: Drop I_DIRTY_TIME_EXPIRE
  writeback: Fix sync livelock due to b_dirty_time processing
  writeback: Avoid skipping inode writeback
  writeback: Protect inode->i_io_list with inode->i_lock

4 years agoMerge tag 'gfs2-v5.9-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Fri, 28 Aug 2020 17:41:00 +0000 (10:41 -0700)]
Merge tag 'gfs2-v5.9-rc2-fixes' of git://git./linux/kernel/git/gfs2/linux-gfs2

Pull gfs2 fix from Andreas Gruenbacher:
 "Fix a memory leak on filesystem withdraw.

  We didn't detect this bug because we have slab merging on by default
  (CONFIG_SLAB_MERGE_DEFAULT). Adding 'slub_nomerge' to the kernel
  command line exposed the problem"

* tag 'gfs2-v5.9-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/gfs2/linux-gfs2:
  gfs2: add some much needed cleanup for log flushes that fail

4 years agoMerge tag 'ceph-for-5.9-rc3' of git://github.com/ceph/ceph-client
Linus Torvalds [Fri, 28 Aug 2020 17:33:04 +0000 (10:33 -0700)]
Merge tag 'ceph-for-5.9-rc3' of git://github.com/ceph/ceph-client

Pull ceph fixes from Ilya Dryomov:
 "We have an inode number handling change, prompted by s390x which is a
  64-bit architecture with a 32-bit ino_t, a patch to disallow leases to
  avoid potential data integrity issues when CephFS is re-exported via
  NFS or CIFS and a fix for the bulk of W=1 compilation warnings"

* tag 'ceph-for-5.9-rc3' of git://github.com/ceph/ceph-client:
  ceph: don't allow setlease on cephfs
  ceph: fix inode number handling on arches with 32-bit ino_t
  libceph: add __maybe_unused to DEFINE_CEPH_FEATURE

4 years agocifs: fix check of tcon dfs in smb1
Paulo Alcantara [Thu, 27 Aug 2020 14:20:19 +0000 (11:20 -0300)]
cifs: fix check of tcon dfs in smb1

For SMB1, the DFS flag should be checked against tcon->Flags rather
than tcon->share_flags.  While at it, add an is_tcon_dfs() helper to
check for DFS capability in a more generic way.
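
A helper along these lines keeps the protocol difference in one place
(sketch only: the struct members are the ones named above, but the flag
macro names are illustrative assumptions, not copied from the patch):

  static inline bool is_tcon_dfs(struct cifs_tcon *tcon)
  {
          if (!tcon)
                  return false;
          /* SMB1 keeps the DFS bit in tcon->Flags, SMB2+ in tcon->share_flags */
          return (tcon->Flags & SMB_SHARE_IS_IN_DFS) ||
                 (tcon->share_flags & SHI1005_FLAGS_DFS);
  }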

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
Signed-off-by: Steve French <stfrench@microsoft.com>
Reviewed-by: Shyam Prasad N <nspmangalore@gmail.com>
4 years agoMerge tag 'mfd-fixes-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd
Linus Torvalds [Fri, 28 Aug 2020 17:15:33 +0000 (10:15 -0700)]
Merge tag 'mfd-fixes-5.9' of git://git./linux/kernel/git/lee/mfd

Pull MFD fixes from Lee Jones:

 - fix double free

 - handle devicetree disabled devices gracefully

* tag 'mfd-fixes-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/lee/mfd:
  mfd: mfd-core: Ensure disabled devices are ignored without error
  mfd: core: Fix double-free in mfd_remove_devices_fn()

4 years agoMerge tag 'drm-fixes-2020-08-28' of git://anongit.freedesktop.org/drm/drm
Linus Torvalds [Fri, 28 Aug 2020 16:46:48 +0000 (09:46 -0700)]
Merge tag 'drm-fixes-2020-08-28' of git://anongit.freedesktop.org/drm/drm

Pull drm fixes from Dave Airlie:
 "As expected a bit of an rc3 uptick, amdgpu and msm are the main ones,
  one msm patch was from the merge window, but had dependencies and we
  dropped it until the other tree had landed. Otherwise it's a couple of
  fixes for core, and etnaviv, and single i915, exynos, omap fixes.

  I'm still tracking the Sandybridge gpu relocations issue, if we don't
  see much movement I might just queue up the reverts. I'll talk to
  Daniel next week once he's back from holidays.

  core:
   - Take modeset bkl for legacy drivers

  dp_mst:
   - Allow null crtc in dp_mst

  i915:
   - Fix command parser desc matching with masks

  amdgpu:
   - Misc display fixes
   - Backlight fixes
   - MPO fix for DCN1
   - Fixes for Sienna Cichlid
   - Fixes for Navy Flounder
   - Vega SW CTF fixes
   - SMU fix for Raven
   - Fix a possible overflow in INFO ioctl
   - Gfx10 clockgating fix

  msm:
   - opp/bw scaling patch followup
   - frequency restoring fix
   - vblank in atomic commit fix
   - dpu modesetting fixes
   - fencing fix

  etnaviv:
   - scheduler interaction fix
   - gpu init regression fix

  exynos:
   - Just drop __iommu annotation to fix sparse warning

  omap:
   - locking state fix"

* tag 'drm-fixes-2020-08-28' of git://anongit.freedesktop.org/drm/drm: (41 commits)
  drm/amd/display: Fix memleak in amdgpu_dm_mode_config_init
  drm/amdgpu: disable runtime pm for navy_flounder
  drm/amd/display: Retry AUX write when fail occurs
  drm/amdgpu: Fix buffer overflow in INFO ioctl
  drm/amd/powerplay: Fix hardmins not being sent to SMU for RV
  drm/amdgpu: use MODE1 reset for navy_flounder by default
  drm/amd/pm: correct the thermal alert temperature limit settings
  drm/amdgpu: add asd fw check before loading asd
  drm/amd/display: Keep current gain when ABM disable immediately
  drm/amd/display: Fix passive dongle mistaken as active dongle in EDID emulation
  drm/amd/display: Revert HDCP disable sequence change
  drm/amd/display: Send DISPLAY_OFF after power down on boot
  drm/amdgpu/gfx10: refine mgcg setting
  drm/amd/pm: correct Vega20 swctf limit setting
  drm/amd/pm: correct Vega12 swctf limit setting
  drm/amd/pm: correct Vega10 swctf limit setting
  drm/amd/pm: set VCN pg per instances
  drm/amd/pm: enable run_btc callback for sienna_cichlid
  drivers: gpu: amd: Initialize amdgpu_dm_backlight_caps object to 0 in amdgpu_dm_update_backlight_caps
  drm/amd/display: Reject overlay plane configurations in multi-display scenarios
  ...

4 years agoKVM: arm64: Set HCR_EL2.PTW to prevent AT taking synchronous exception
James Morse [Fri, 21 Aug 2020 14:07:07 +0000 (15:07 +0100)]
KVM: arm64: Set HCR_EL2.PTW to prevent AT taking synchronous exception

AT instructions do a translation table walk and return the result, or
the fault in PAR_EL1. KVM uses these to find the IPA when the value is
not provided by the CPU in HPFAR_EL1.

If a translation table walk causes an external abort it is taken as an
exception, even if it was due to an AT instruction. (DDI0487F.a's D5.2.11
"Synchronous faults generated by address translation instructions")

While we previously made KVM resilient to exceptions taken due to AT
instructions, the device access causes mismatched attributes, and may
occur speculatively. Prevent this by forbidding a walk through memory
described as device at stage2. Now such AT instructions will report a
stage2 fault.
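
In HCR_EL2 terms this is the PTW bit (bit [2]): when it is set, a stage 1
translation table walk to memory that stage 2 maps as Device generates a
stage 2 permission fault instead of letting the walk access the device.
As a rough illustration only (where the bit actually gets OR-ed in is a
detail of the patch and not shown here):

  #include <linux/types.h>
  #include <asm/kvm_arm.h>        /* HCR_PTW: bit [2] of HCR_EL2 */

  /* illustration: ensure PTW is part of the HCR_EL2 value used for the guest */
  static inline u64 hcr_with_ptw(u64 hcr)
  {
          return hcr | HCR_PTW;
  }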

Such a fault will cause KVM to restart the guest. If the AT instructions
always walk the page tables, but guest execution uses the translation cached
in the TLB, the guest can't make forward progress until the TLB entry is
evicted. This isn't a problem, as since commit 5dcd0fdbb492 ("KVM: arm64:
Defer guest entry when an asynchronous exception is pending"), KVM will
return to the host to process IRQs allowing the rest of the system to keep
running.

Cc: stable@vger.kernel.org # <v5.3: 5dcd0fdbb492 ("KVM: arm64: Defer guest entry when an asynchronous exception is pending")
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
4 years agoKVM: arm64: Survive synchronous exceptions caused by AT instructions
James Morse [Fri, 21 Aug 2020 14:07:06 +0000 (15:07 +0100)]
KVM: arm64: Survive synchronous exceptions caused by AT instructions

KVM doesn't expect any synchronous exceptions when executing, any such
exception leads to a panic(). AT instructions access the guest page
tables, and can cause a synchronous external abort to be taken.

The arm-arm is unclear on what should happen if the guest has configured
the hardware update of the access-flag, and a memory type in TCR_EL1 that
does not support atomic operations. B2.2.6 "Possible implementation
restrictions on using atomic instructions" from DDI0487F.a lists
synchronous external abort as a possible behaviour of atomic instructions
that target memory that isn't writeback cacheable, but the page table
walker may behave differently.

Make KVM robust to synchronous exceptions caused by AT instructions.
Add a get_user() style helper for AT instructions that returns -EFAULT
if an exception was generated.

While KVM's version of the exception table mixes synchronous and
asynchronous exceptions, only one of these can occur at each location.

Re-enter the guest when the AT instructions take an exception on the
assumption the guest will take the same exception. This isn't guaranteed
to make forward progress, as the AT instructions may always walk the page
tables, but guest execution may use the translation cached in the TLB.

This isn't a problem, as since commit 5dcd0fdbb492 ("KVM: arm64: Defer guest
entry when an asynchronous exception is pending"), KVM will return to the
host to process IRQs allowing the rest of the system to keep running.

Cc: stable@vger.kernel.org # <v5.3: 5dcd0fdbb492 ("KVM: arm64: Defer guest entry when an asynchronous exception is pending")
Signed-off-by: James Morse <james.morse@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>