mm: move end_index check out of readahead loop
author		Matthew Wilcox (Oracle) <willy@infradead.org>
		Tue, 2 Jun 2020 04:46:47 +0000 (21:46 -0700)
committer	Linus Torvalds <torvalds@linux-foundation.org>
		Tue, 2 Jun 2020 17:59:06 +0000 (10:59 -0700)
By reducing nr_to_read, we can eliminate the end_index check from inside the readahead loop.
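
For illustration, a minimal sketch of the transformation (hypothetical
function name, userspace-style, not the kernel code itself): clamping
nr_to_read once up front lets the loop body drop its per-iteration
bounds check.

	static void clamp_sketch(unsigned long index, unsigned long nr_to_read,
				 unsigned long end_index)
	{
		unsigned long i;

		if (index > end_index)
			return;
		/* Clamp once so index + i can never exceed end_index. */
		if (nr_to_read > end_index - index)
			nr_to_read = end_index - index + 1;

		for (i = 0; i < nr_to_read; i++) {
			/* process page at index + i; no bounds check needed */
		}
	}

The diff below applies this same pattern inside
__do_page_cache_readahead().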

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
Cc: Chao Yu <yuchao0@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Cong Wang <xiyou.wangcong@gmail.com>
Cc: Darrick J. Wong <darrick.wong@oracle.com>
Cc: Dave Chinner <dchinner@redhat.com>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Gao Xiang <gaoxiang25@huawei.com>
Cc: Jaegeuk Kim <jaegeuk@kernel.org>
Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: Miklos Szeredi <mszeredi@redhat.com>
Link: http://lkml.kernel.org/r/20200414150233.24495-13-willy@infradead.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
mm/readahead.c

index d01531ef9f3c6a4a21fc300d27b889b11b52ad8c..998fdd23c0b17fe0f17495a9c2f88aa58397d8a9 100644
@@ -167,8 +167,6 @@ void __do_page_cache_readahead(struct address_space *mapping,
                unsigned long lookahead_size)
 {
        struct inode *inode = mapping->host;
-       struct page *page;
-       unsigned long end_index;        /* The last page we want to read */
        LIST_HEAD(page_pool);
        loff_t isize = i_size_read(inode);
        gfp_t gfp_mask = readahead_gfp_mask(mapping);
@@ -178,22 +176,26 @@ void __do_page_cache_readahead(struct address_space *mapping,
                ._index = index,
        };
        unsigned long i;
+       pgoff_t end_index;      /* The last page we want to read */
 
        if (isize == 0)
                return;
 
-       end_index = ((isize - 1) >> PAGE_SHIFT);
+       end_index = (isize - 1) >> PAGE_SHIFT;
+       if (index > end_index)
+               return;
+       /* Don't read past the page containing the last byte of the file */
+       if (nr_to_read > end_index - index)
+               nr_to_read = end_index - index + 1;
 
        /*
         * Preallocate as many pages as we will need.
         */
        for (i = 0; i < nr_to_read; i++) {
-               if (index + i > end_index)
-                       break;
+               struct page *page = xa_load(&mapping->i_pages, index + i);
 
                BUG_ON(index + i != rac._index + rac._nr_pages);
 
-               page = xa_load(&mapping->i_pages, index + i);
                if (page && !xa_is_value(page)) {
                        /*
                         * Page already present?  Kick off the current batch of