dmapool: don't memset on free twice
author     Keith Busch <kbusch@kernel.org>
           Thu, 26 Jan 2023 21:51:23 +0000 (13:51 -0800)
committer  Andrew Morton <akpm@linux-foundation.org>
           Thu, 6 Apr 2023 02:42:40 +0000 (19:42 -0700)
If debug is enabled, dmapool will poison the range, so there is no need to
clear it to 0 immediately before writing over it.  Move the
want_init_on_free() zeroing into the !DMAPOOL_DEBUG pool_page_err() stub, so
dma_pool_free() no longer clears memory that the debug path is about to
poison anyway.

Link: https://lkml.kernel.org/r/20230126215125.4069751-11-kbusch@meta.com
Signed-off-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/dmapool.c

index 4dea2a0dbd336f9146e5cd8c5c582221ac53d512..21e6d362c72646f0e9a00b1044cecfb355efc396 100644 (file)
@@ -160,6 +160,8 @@ static void pool_check_block(struct dma_pool *pool, void *retval,
 static bool pool_page_err(struct dma_pool *pool, struct dma_page *page,
                          void *vaddr, dma_addr_t dma)
 {
+       if (want_init_on_free())
+               memset(vaddr, 0, pool->size);
        return false;
 }
 
@@ -441,8 +443,6 @@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
                return;
        }
 
-       if (want_init_on_free())
-               memset(vaddr, 0, pool->size);
        if (pool_page_err(pool, page, vaddr, dma)) {
                spin_unlock_irqrestore(&pool->lock, flags);
                return;