Linux-XFS Archive mirror
From: Dave Chinner <david@fromorbit.com>
To: linux-xfs@vger.kernel.org
Subject: [PATCH 3/4] xfs: allow lazy removal of inodes from the inodegc queues
Date: Tue, 19 Mar 2024 11:15:59 +1100	[thread overview]
Message-ID: <20240319001707.3430251-4-david@fromorbit.com> (raw)
In-Reply-To: <20240319001707.3430251-1-david@fromorbit.com>

From: Dave Chinner <dchinner@redhat.com>

To allow us to recycle inodes that are awaiting inactivation, we
need to enable lazy removal of inodes from the list. The list is a
lockless singly linked variant, so we can't just remove inodes from
the list at will.

Instead, we can remove them lazily whenever inodegc runs by enabling
the inodegc processing to determine whether inactivation needs to be
done at processing time rather than queuing time.

We've already modified the queuing code to only queue the inode if
it isn't already queued, so here all we need to do is modify the
queue processing to determine if inactivation needs to be done.

Hence we introduce the behaviour that we can cancel inactivation
processing simply by clearing the XFS_NEED_INACTIVE flag on the
inode. Processing will check this flag and skip inactivation
processing if it is not set. The flag is always set at queuing time,
regardless of whether the inode is already on the queues or not.
Hence if it is not set at processing time, it means that something
has cancelled the inactivation and we should just remove it from the
list and then leave it alone.

Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
---
 fs/xfs/xfs_icache.c | 36 +++++++++++++++++++++++++++++-------
 1 file changed, 29 insertions(+), 7 deletions(-)

diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
index 559b8f71dc91..7359753b892b 100644
--- a/fs/xfs/xfs_icache.c
+++ b/fs/xfs/xfs_icache.c
@@ -1882,13 +1882,21 @@ xfs_inodegc_worker(
 		int	error;
 
 		/*
-		 * Switch state to inactivating and remove the inode from the
-		 * gclist. This allows the use of llist_on_list() in the queuing
-		 * code to determine if the inode is already on an inodegc
-		 * queue.
+		 * Remove the inode from the gclist and determine if it needs to
+		 * be processed. The XFS_NEED_INACTIVE flag gets cleared if the
+		 * inode is reactivated after queuing, but the list removal is
+		 * lazy and left up to us.
+		 *
+		 * We always remove the inode from the list to allow the use of
+		 * llist_on_list() in the queuing code to determine if the inode
+		 * is already on an inodegc queue.
 		 */
 		spin_lock(&ip->i_flags_lock);
 		init_llist_node(&ip->i_gclist);
+		if (!(ip->i_flags & XFS_NEED_INACTIVE)) {
+			spin_unlock(&ip->i_flags_lock);
+			continue;
+		}
 		ip->i_flags |= XFS_INACTIVATING;
 		ip->i_flags &= ~XFS_NEED_INACTIVE;
 		spin_unlock(&ip->i_flags_lock);
@@ -2160,7 +2168,6 @@ xfs_inode_mark_reclaimable(
 	struct xfs_inode	*ip)
 {
 	struct xfs_mount	*mp = ip->i_mount;
-	bool			need_inactive;
 
 	XFS_STATS_INC(mp, vn_reclaim);
 
@@ -2169,8 +2176,23 @@ xfs_inode_mark_reclaimable(
 	 */
 	ASSERT_ALWAYS(!xfs_iflags_test(ip, XFS_ALL_IRECLAIM_FLAGS));
 
-	need_inactive = xfs_inode_needs_inactive(ip);
-	if (need_inactive) {
+	/*
+	 * If the inode is already queued for inactivation because it was
+	 * re-activated and is now being reclaimed again (e.g. fs has been
+	 * frozen for a while) we must ensure that the inode waits for inodegc
+	 * to be run and removes it from the inodegc queue before it moves to
+	 * the reclaimable state and gets freed.
+	 *
+	 * We don't care about races here. We can't race with a list addition
+	 * because only one thread can be evicting the inode from the VFS cache,
+	 * hence false negatives can't occur and we only need to worry about
+	 * list removal races.  If we get a false positive from a list removal
+	 * race, then the inode goes through the inactive list whether it needs
+	 * to or not. This will slow down reclaim of this inode slightly but
+	 * should have no other side effects.
+	 */
+	if (llist_on_list(&ip->i_gclist) ||
+	    xfs_inode_needs_inactive(ip)) {
 		xfs_inodegc_queue(ip);
 		return;
 	}
-- 
2.43.0



Thread overview: 16+ messages
2024-03-19  0:15 [PATCH v2 0/4] xfs: recycle inactive inodes immediately Dave Chinner
2024-03-19  0:15 ` [PATCH 1/4] xfs: make inode inactivation state changes atomic Dave Chinner
2024-03-19 18:01   ` Darrick J. Wong
2024-03-19  0:15 ` [PATCH 2/4] xfs: prepare inode for i_gclist detection Dave Chinner
2024-03-19  0:15 ` Dave Chinner [this message]
2024-03-19  0:16 ` [PATCH 4/4] xfs: reactivate XFS_NEED_INACTIVE inodes from xfs_iget Dave Chinner
2024-03-19 18:11   ` Darrick J. Wong
2024-03-20  8:39   ` Andre Noll
2024-03-20 14:53     ` Darrick J. Wong
2024-03-20 16:58       ` Andre Noll
2024-03-20 22:51         ` Dave Chinner
2024-03-21  9:59           ` Andre Noll
2024-03-22  1:09             ` Dave Chinner
2024-03-20 21:58     ` Dave Chinner
  -- strict thread matches above, loose matches on Subject: below --
2024-02-01  0:30 [RFC] [PATCH 0/4] xfs: reactivate inodes immediately in xfs_iget Dave Chinner
2024-02-01  0:30 ` [PATCH 3/4] xfs: allow lazy removal of inodes from the inodegc queues Dave Chinner
2024-02-01 19:31   ` Darrick J. Wong
