RDMA/core: Add weak ordering dma attr to dma mapping
author Michael Guralnik <michaelgur@mellanox.com>
Wed, 12 Feb 2020 07:35:59 +0000 (09:35 +0200)
committer Jason Gunthorpe <jgg@mellanox.com>
Thu, 13 Feb 2020 17:38:02 +0000 (13:38 -0400)
Memory regions registered with IB_ACCESS_RELAXED_ORDERING will be DMA
mapped with the DMA_ATTR_WEAK_ORDERING attribute.

This allows reads and writes to the mapping to be weakly ordered; such a
change can enhance performance on architectures that support weak
ordering.
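
For reference, DMA_ATTR_WEAK_ORDERING is consumed by the _attrs variants
of the DMA mapping API, and architectures that do not implement it simply
ignore the attribute. A minimal sketch of the general pattern (the device,
scatterlist and entry count here are placeholders, not taken from this
patch):

	/* Map a scatterlist while permitting weakly ordered accesses;
	 * on architectures without weak-ordering support the attribute
	 * is a no-op and the mapping behaves as a normal dma_map_sg().
	 */
	int nents = dma_map_sg_attrs(dev, sgl, nr_ents, DMA_BIDIRECTIONAL,
				     DMA_ATTR_WEAK_ORDERING);
	if (!nents)
		return -ENOMEM;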

Link: https://lore.kernel.org/r/20200212073559.684139-1-leon@kernel.org
Signed-off-by: Michael Guralnik <michaelgur@mellanox.com>
Signed-off-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Jason Gunthorpe <jgg@mellanox.com>
drivers/infiniband/core/umem.c

index 06b6125..82455a1 100644
@@ -197,6 +197,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
        unsigned long lock_limit;
        unsigned long new_pinned;
        unsigned long cur_base;
+       unsigned long dma_attr = 0;
        struct mm_struct *mm;
        unsigned long npages;
        int ret;
@@ -278,10 +279,12 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 
        sg_mark_end(sg);
 
-       umem->nmap = ib_dma_map_sg(device,
-                                  umem->sg_head.sgl,
-                                  umem->sg_nents,
-                                  DMA_BIDIRECTIONAL);
+       if (access & IB_ACCESS_RELAXED_ORDERING)
+               dma_attr |= DMA_ATTR_WEAK_ORDERING;
+
+       umem->nmap =
+               ib_dma_map_sg_attrs(device, umem->sg_head.sgl, umem->sg_nents,
+                                   DMA_BIDIRECTIONAL, dma_attr);
 
        if (!umem->nmap) {
                ret = -ENOMEM;
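
For context, a hedged sketch of how a driver's registration path might
request this behaviour when pinning user memory (the variable names below
are illustrative and not taken from the patch):

	/* Pin a user buffer, letting the core map it with
	 * DMA_ATTR_WEAK_ORDERING on supporting architectures.
	 */
	struct ib_umem *umem;

	umem = ib_umem_get(device, start, length,
			   IB_ACCESS_LOCAL_WRITE |
			   IB_ACCESS_RELAXED_ORDERING);
	if (IS_ERR(umem))
		return PTR_ERR(umem);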