From: Shiraz Saleem
Date: Thu, 28 Mar 2019 16:49:46 +0000 (-0500)
Subject: RDMA/rxe: Use correct sizing on buffers holding page DMA addresses
X-Git-Tag: v5.4-rc1~984^2~132
X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=93923d309bda99bc52f8cee6ea4774895b18ae5b;p=platform%2Fkernel%2Flinux-rpi.git

RDMA/rxe: Use correct sizing on buffers holding page DMA addresses

The buffer that holds the page DMA addresses is sized off umem->nmap.
This can potentially cause out-of-bounds accesses on the PBL array when
iterating the umem DMA-mapped SGL, because if umem pages are combined,
umem->nmap can be much lower than the number of system pages in umem.

Use ib_umem_num_pages() to size this buffer.

Cc: Moni Shoua
Signed-off-by: Shiraz Saleem
Signed-off-by: Jason Gunthorpe
---

diff --git a/drivers/infiniband/sw/rxe/rxe_mr.c b/drivers/infiniband/sw/rxe/rxe_mr.c
index ec89fbd..f501f72 100644
--- a/drivers/infiniband/sw/rxe/rxe_mr.c
+++ b/drivers/infiniband/sw/rxe/rxe_mr.c
@@ -179,7 +179,7 @@ int rxe_mem_init_user(struct rxe_pd *pd, u64 start,
 	}
 
 	mem->umem = umem;
-	num_buf = umem->nmap;
+	num_buf = ib_umem_num_pages(umem);
 
 	rxe_mem_init(access, mem);
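
For illustration only (not part of the patch, and ignored by git apply since
it follows the diff): a minimal stand-alone C sketch of why a count of mapped
SGL entries, as in umem->nmap, can undercount the system pages backing a
region once physically contiguous pages are coalesced into a single
scatterlist entry. The struct and variable names below are invented for the
example; only the arithmetic mirrors the kernel behavior described above.

	#include <stdio.h>

	#define PAGE_SIZE 4096UL

	/* One DMA-mapped scatterlist entry; dma_len may span many pages. */
	struct sg_entry {
		unsigned long dma_len;
	};

	int main(void)
	{
		/* Two coalesced entries backed by 8 and 4 system pages. */
		struct sg_entry sgl[] = {
			{ 8 * PAGE_SIZE },
			{ 4 * PAGE_SIZE },
		};
		unsigned int nmap = sizeof(sgl) / sizeof(sgl[0]); /* == 2 */
		unsigned long num_pages = 0;

		for (unsigned int i = 0; i < nmap; i++)
			num_pages += sgl[i].dma_len / PAGE_SIZE;

		/* A PBL sized for nmap (2) entries cannot hold 12 page addresses. */
		printf("mapped SGL entries (umem->nmap style):  %u\n", nmap);
		printf("system pages (ib_umem_num_pages style): %lu\n", num_pages);
		return 0;
	}

Sizing the page buffer list off the page count rather than the mapped-entry
count keeps the allocation correct regardless of how aggressively the DMA
layer coalesces entries.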