xfs: reduce the number of atomic operations when locking a buffer after lookup
author Dave Chinner <dchinner@redhat.com>
Thu, 14 Jul 2022 02:04:38 +0000 (12:04 +1000)
committer Dave Chinner <david@fromorbit.com>
Thu, 14 Jul 2022 02:04:38 +0000 (12:04 +1000)
Avoid an extra atomic operation in the non-trylock case by only
doing a trylock if the XBF_TRYLOCK flag is set. This follows the
pattern in the IO path with NOWAIT semantics, where the
"trylock-fail-lock" path showed 5-10% reduced throughput compared to
a single lock call when not under NOWAIT conditions. So make that
same change here, too.

See commit 942491c9e6d6 ("xfs: fix AIM7 regression") for details.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
fs/xfs/xfs_buf.c

index 81ca951..374c4e5 100644
@@ -534,11 +534,12 @@ xfs_buf_find_lock(
        struct xfs_buf          *bp,
        xfs_buf_flags_t         flags)
 {
-       if (!xfs_buf_trylock(bp)) {
-               if (flags & XBF_TRYLOCK) {
+       if (flags & XBF_TRYLOCK) {
+               if (!xfs_buf_trylock(bp)) {
                        XFS_STATS_INC(bp->b_mount, xb_busy_locked);
                        return -EAGAIN;
                }
+       } else {
                xfs_buf_lock(bp);
                XFS_STATS_INC(bp->b_mount, xb_get_locked_waited);
        }
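The same pattern can be sketched outside the kernel with POSIX mutexes. This is a minimal illustration, not xfs code: `DEMO_TRYLOCK` and `demo_lock()` are hypothetical names standing in for `XBF_TRYLOCK` and `xfs_buf_find_lock()`, and the point is that the blocking path now issues a single lock operation instead of a failed trylock followed by a lock.

```c
#include <pthread.h>

/* Hypothetical flag mirroring XBF_TRYLOCK; not a real xfs definition. */
#define DEMO_TRYLOCK (1 << 0)

/*
 * Lock `m` following the pattern in the patch: only attempt a trylock
 * when the caller asked for NOWAIT-style semantics. In the blocking
 * case, go straight to the sleeping lock and avoid the extra atomic
 * of a failed trylock attempt.
 */
static int demo_lock(pthread_mutex_t *m, unsigned int flags)
{
	if (flags & DEMO_TRYLOCK) {
		if (pthread_mutex_trylock(m) != 0)
			return -1;	/* -EAGAIN in the kernel code */
	} else {
		pthread_mutex_lock(m);
	}
	return 0;
}
```

With a default (non-recursive) mutex, a `DEMO_TRYLOCK` caller on a held lock fails immediately with -1, while a blocking caller on a free lock succeeds with exactly one lock operation.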