author    Dave Chinner <dchinner@redhat.com>  2022-07-14 12:04:38 +1000
committer Dave Chinner <david@fromorbit.com>  2022-07-14 12:04:38 +1000
commit    d8d9bbb0ee6c79191b704d88c8ae712b89e0d2bb
tree      e68e308afacc7c76a8c79e6ee5f9437fa0fe7307 /fs/xfs
parent    xfs: merge xfs_buf_find() and xfs_buf_get_map()
xfs: reduce the number of atomic operations when locking a buffer after lookup
Avoid an extra atomic operation in the non-trylock case by only doing a
trylock if the XBF_TRYLOCK flag is set. This follows the pattern in the
IO path with NOWAIT semantics, where the "trylock-fail-lock" path showed
5-10% reduced throughput compared to a single lock call when not under
NOWAIT conditions. So make that same change here, too.

See commit 942491c9e6d6 ("xfs: fix AIM7 regression") for details.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
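To make the trade-off concrete, here is a minimal userspace sketch of the
two locking shapes, using a pthread mutex in place of the XFS buffer lock
(acquire_old() and acquire_new() are illustrative names, not functions from
this patch). The old shape always issues the trylock's atomic operation
first, so a blocking caller that finds the lock contended pays for a second
atomic inside the lock call; the new shape branches on the caller's intent
and issues exactly one locking call per path:

#include <pthread.h>
#include <stdbool.h>

/* Old shape: every caller attempts the trylock first, so a blocking
 * caller that loses the race pays for two atomic operations. */
static bool acquire_old(pthread_mutex_t *m, bool trylock_only)
{
	if (pthread_mutex_trylock(m) != 0) {
		if (trylock_only)
			return false;		/* busy, caller must not block */
		pthread_mutex_lock(m);		/* second atomic operation */
	}
	return true;
}

/* New shape: branch on the caller's intent first, so each path
 * performs exactly one locking call. */
static bool acquire_new(pthread_mutex_t *m, bool trylock_only)
{
	if (trylock_only)
		return pthread_mutex_trylock(m) == 0;
	pthread_mutex_lock(m);
	return true;
}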
Diffstat (limited to 'fs/xfs')
-rw-r--r--  fs/xfs/xfs_buf.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 81ca951b451a..374c4e508b12 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -534,11 +534,12 @@ xfs_buf_find_lock(
 	struct xfs_buf		*bp,
 	xfs_buf_flags_t		flags)
 {
-	if (!xfs_buf_trylock(bp)) {
-		if (flags & XBF_TRYLOCK) {
+	if (flags & XBF_TRYLOCK) {
+		if (!xfs_buf_trylock(bp)) {
 			XFS_STATS_INC(bp->b_mount, xb_busy_locked);
 			return -EAGAIN;
 		}
+	} else {
 		xfs_buf_lock(bp);
 		XFS_STATS_INC(bp->b_mount, xb_get_locked_waited);
 	}
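For readability, this is how the locking logic of xfs_buf_find_lock() reads
with the patch applied, reconstructed from the hunk above (the static int
signature and the trailing return are assumptions inferred from the -EAGAIN
error return; the remainder of the function is outside this hunk and elided):

static int
xfs_buf_find_lock(
	struct xfs_buf		*bp,
	xfs_buf_flags_t		flags)
{
	if (flags & XBF_TRYLOCK) {
		/* Non-blocking caller: a single trylock attempt. */
		if (!xfs_buf_trylock(bp)) {
			XFS_STATS_INC(bp->b_mount, xb_busy_locked);
			return -EAGAIN;
		}
	} else {
		/* Blocking caller: take the lock directly, no trylock first. */
		xfs_buf_lock(bp);
		XFS_STATS_INC(bp->b_mount, xb_get_locked_waited);
	}

	/* ... remainder of the function elided (outside this hunk) ... */
	return 0;
}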