From d8d9bbb0ee6c79191b704d88c8ae712b89e0d2bb Mon Sep 17 00:00:00 2001
From: Dave Chinner <dchinner@redhat.com>
Date: Thu, 14 Jul 2022 12:04:38 +1000
Subject: [PATCH] xfs: reduce the number of atomics when locking a buffer
 after lookup

Avoid an extra atomic operation in the non-trylock case by only doing a
trylock if the XBF_TRYLOCK flag is set. This follows the pattern in the
IO path with NOWAIT semantics where the "trylock-fail-lock" path showed
5-10% reduced throughput compared to just using a single lock call when
not under NOWAIT conditions. So make that same change here, too.

See commit 942491c9e6d6 ("xfs: fix AIM7 regression") for details.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
---
 fs/xfs/xfs_buf.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 81ca951b..374c4e5 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -534,11 +534,12 @@ xfs_buf_find_lock(
 	struct xfs_buf	*bp,
 	xfs_buf_flags_t	flags)
 {
-	if (!xfs_buf_trylock(bp)) {
-		if (flags & XBF_TRYLOCK) {
+	if (flags & XBF_TRYLOCK) {
+		if (!xfs_buf_trylock(bp)) {
 			XFS_STATS_INC(bp->b_mount, xb_busy_locked);
 			return -EAGAIN;
 		}
+	} else {
 		xfs_buf_lock(bp);
 		XFS_STATS_INC(bp->b_mount, xb_get_locked_waited);
 	}
-- 
2.7.4
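
For readers outside the XFS codebase, here is a minimal standalone sketch
of the pattern this patch adopts, using a pthread mutex as a stand-in for
the buffer lock. The names find_lock_old, find_lock_new, and FLAG_TRYLOCK
are invented for illustration; they are not the kernel API.

#include <errno.h>
#include <pthread.h>

#define FLAG_TRYLOCK	(1 << 0)

/*
 * Before: the trylock is attempted unconditionally, so a blocking
 * caller that finds the lock contended pays for the failed trylock
 * (one atomic cmpxchg) before paying for the full lock call.
 */
static int find_lock_old(pthread_mutex_t *lock, unsigned int flags)
{
	if (pthread_mutex_trylock(lock) != 0) {
		if (flags & FLAG_TRYLOCK)
			return -EAGAIN;
		pthread_mutex_lock(lock);	/* second locking attempt */
	}
	return 0;
}

/*
 * After: branch on the caller's intent first, so the trylock atomic
 * is only issued when nonblocking behaviour was actually requested,
 * and a blocking caller makes exactly one locking call.
 */
static int find_lock_new(pthread_mutex_t *lock, unsigned int flags)
{
	if (flags & FLAG_TRYLOCK) {
		if (pthread_mutex_trylock(lock) != 0)
			return -EAGAIN;
	} else {
		pthread_mutex_lock(lock);
	}
	return 0;
}

The uncontended fast path is unchanged (one atomic either way); the saving
shows up when a blocking caller hits contention, which is the same
"trylock-fail-lock" situation the NOWAIT I/O path measured at 5-10% in
commit 942491c9e6d6.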