xfs: Correctly invert xfs_buftarg LRU isolation logic
authorVratislav Bendel <vbendel@redhat.com>
Wed, 7 Mar 2018 01:07:44 +0000 (17:07 -0800)
committerGreg Kroah-Hartman <gregkh@linuxfoundation.org>
Wed, 6 Nov 2019 11:18:27 +0000 (12:18 +0100)
commit 19957a181608d25c8f4136652d0ea00b3738972d upstream.

Due to an inverted check in xfs_buftarg_isolate(), buffers with
a zero b_lru_ref take another trip around the LRU, while buffers
with a non-zero b_lru_ref are isolated.

Additionally, those isolated buffers end up right back on the LRU
once they are released, because their b_lru_ref remains elevated.

Fix that circuitous route by leaving them on the LRU
as originally intended.
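
For illustration, a minimal userspace sketch (not kernel code):
add_unless() below is a hypothetical stand-in for the semantics of
atomic_add_unless(v, -1, 0), which decrements and returns non-zero
only if the old value was not zero.  With the stray '!' in front,
the zero-b_lru_ref case, the one that should be reclaimed, was the
one being rotated back onto the LRU.

#include <stdio.h>

/* Plain-int stand-in mimicking atomic_add_unless(&v, a, u). */
static int add_unless(int *v, int a, int u)
{
	if (*v == u)
		return 0;	/* value already at u, nothing done */
	*v += a;
	return 1;		/* value changed */
}

int main(void)
{
	int lru_ref;

	/* Still referenced: decrement and keep it on the LRU. */
	lru_ref = 2;
	printf("b_lru_ref=2 -> %s\n",
	       add_unless(&lru_ref, -1, 0) ? "LRU_ROTATE" : "reclaim");

	/* Reference already zero: this buffer should be reclaimed. */
	lru_ref = 0;
	printf("b_lru_ref=0 -> %s\n",
	       add_unless(&lru_ref, -1, 0) ? "LRU_ROTATE" : "reclaim");

	return 0;
}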

Signed-off-by: Vratislav Bendel <vbendel@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Alex Lyakas <alex@zadara.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
fs/xfs/xfs_buf.c

index 3f45d9867e10a75cae0563c8281f4d691514e3ff..651755353374d63dde7d148aaf04408268049408 100644 (file)
@@ -1674,7 +1674,7 @@ xfs_buftarg_isolate(
         * zero. If the value is already zero, we need to reclaim the
         * buffer, otherwise it gets another trip through the LRU.
         */
-       if (!atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
+       if (atomic_add_unless(&bp->b_lru_ref, -1, 0)) {
                spin_unlock(&bp->b_lock);
                return LRU_ROTATE;
        }