RDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks()
authorJason Gunthorpe <jgg@nvidia.com>
Fri, 4 Sep 2020 22:41:47 +0000 (19:41 -0300)
committerJason Gunthorpe <jgg@nvidia.com>
Fri, 11 Sep 2020 13:24:53 +0000 (10:24 -0300)
commita665aca89a411115e35ea937c2d3fb2ee4f5a701
tree79c8a3e6ba1b0df6d1cfe1e1067a40b72bc1f942
parent89603f7e7e5a6b719f1a163a05bd8a9231b58318
RDMA/umem: Split ib_umem_num_pages() into ib_umem_num_dma_blocks()

ib_umem_num_pages() should only be used by code that works directly with
the SGL in CPU pages.

Drivers building DMA lists should use the new ib_umem_num_dma_blocks(),
which returns the number of blocks rdma_umem_for_each_dma_block() will
iterate over.
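
For illustration, a typical pairing of the two (a hypothetical driver
snippet; the "pas" array and the page_size variable are illustrative,
with page_size assumed to come from ib_umem_find_best_pgsz()):

  struct ib_block_iter biter;
  size_t nblocks = ib_umem_num_dma_blocks(umem, page_size);
  u64 *pas = kcalloc(nblocks, sizeof(*pas), GFP_KERNEL);
  size_t i = 0;

  if (!pas)
          return -ENOMEM;

  /* Visits exactly nblocks aligned blocks of page_size bytes */
  rdma_umem_for_each_dma_block(umem, &biter, page_size)
          pas[i++] = rdma_block_iter_dma_address(&biter);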

Making this general for DMA drivers requires a different implementation:
computing the DMA block count from umem->address only works when the
requested page size does not exceed PAGE_SIZE, or when the IOVA happens
to equal umem->address.
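
For example (illustrative numbers): an 8KiB umem with
umem->address = 0x1000 but an IOVA of 0x1ff000 spans two 2MiB blocks in
IOVA space (starting at 0x0 and 0x200000), while a count derived from
umem->address would claim a single block and under-size the page list.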

Instead, the number of DMA blocks should be computed in the IOVA address
space, not from umem->address. Thus the IOVA has to be stored inside the
umem so it can be used for these calculations.
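
Rounding the IOVA range out to pgsz boundaries then gives the block
count; the helper ends up along these lines (a sketch matching the
intent of the patch):

  static inline size_t ib_umem_num_dma_blocks(struct ib_umem *umem,
                                              unsigned long pgsz)
  {
          /* Round the IOVA range, not umem->address, out to pgsz */
          return (size_t)((ALIGN(umem->iova + umem->length, pgsz) -
                           ALIGN_DOWN(umem->iova, pgsz))) / pgsz;
  }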

For now, default the IOVA to umem->address and fix it up when
ib_umem_find_best_pgsz() is called. This allows drivers to be converted
to ib_umem_num_dma_blocks() safely.

Link: https://lore.kernel.org/r/6-v2-270386b7e60b+28f4-umem_1_jgg@nvidia.com
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
drivers/infiniband/core/umem.c
drivers/infiniband/hw/cxgb4/mem.c
drivers/infiniband/hw/mlx5/mem.c
drivers/infiniband/hw/mthca/mthca_provider.c
drivers/infiniband/hw/vmw_pvrdma/pvrdma_mr.c
include/rdma/ib_umem.h