drm: Do not overrun array in drm_gem_get_pages()
author Matthew Wilcox (Oracle) <willy@infradead.org>
Thu, 5 Oct 2023 13:56:47 +0000 (14:56 +0100)
committer Maxime Ripard <mripard@kernel.org>
Thu, 12 Oct 2023 08:44:06 +0000 (10:44 +0200)
If the shared memory object is larger than the DRM object that it backs,
we can overrun the page array.  Limit the number of pages we install
from each folio to prevent this.

Signed-off-by: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Reported-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
Link: https://lore.kernel.org/lkml/13360591.uLZWGnKmhe@natalenko.name/
Fixes: 3291e09a4638 ("drm: convert drm_gem_put_pages() to use a folio_batch")
Cc: stable@vger.kernel.org # 6.5.x
Signed-off-by: Maxime Ripard <mripard@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20231005135648.2317298-1-willy@infradead.org
drivers/gpu/drm/drm_gem.c

index 6129b89..44a948b 100644
@@ -540,7 +540,7 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
        struct page **pages;
        struct folio *folio;
        struct folio_batch fbatch;
-       int i, j, npages;
+       long i, j, npages;
 
        if (WARN_ON(!obj->filp))
                return ERR_PTR(-EINVAL);
@@ -564,11 +564,13 @@ struct page **drm_gem_get_pages(struct drm_gem_object *obj)
 
        i = 0;
        while (i < npages) {
+               long nr;
                folio = shmem_read_folio_gfp(mapping, i,
                                mapping_gfp_mask(mapping));
                if (IS_ERR(folio))
                        goto fail;
-               for (j = 0; j < folio_nr_pages(folio); j++, i++)
+               nr = min(npages - i, folio_nr_pages(folio));
+               for (j = 0; j < nr; j++, i++)
                        pages[i] = folio_file_page(folio, i);
 
                /* Make sure shmem keeps __GFP_DMA32 allocated pages in the