From: Qu Wenruo <wqu@suse.com>
To: linux-btrfs@vger.kernel.org, michel.palleau@gmail.com
Subject: [PATCH 2/2] btrfs: scrub: update last_physical after scrubbing one stripe
Date: Fri, 8 Mar 2024 13:56:00 +1030
Message-ID: <d9154e07333df0c719627364b9035f1fa9cf11de.1709867186.git.wqu@suse.com> (raw)
In-Reply-To: <cover.1709867186.git.wqu@suse.com>

Currently sctx->stat.last_physical is only updated in the following cases:

- When the last stripe of a non-RAID56 chunk is scrubbed
  This implies a pitfall: if the last stripe is at the chunk boundary
  and we have finished scrubbing the whole chunk, we won't update
  last_physical at all until the next chunk.

- When a P/Q stripe of a RAID56 chunk is scrubbed

This means sctx->stat.last_physical is not updated for a long time if
we're scrubbing a large data chunk (which can be up to 10GiB). And if
the scrub is cancelled halfway, we would restart from last_physical,
but since last_physical only points at the end of the last finished
chunk, we would re-scrub the same chunk again. This can waste a lot of
time, especially when the chunk is huge.

Fix the problem by properly updating @last_physical after each stripe
is scrubbed. And since we're here, for the sake of consistency, use a
spin lock to protect the update of @last_physical, just like all the
remaining call sites touching sctx->stat.
Reported-by: Michel Palleau <michel.palleau@gmail.com>
Link: https://lore.kernel.org/linux-btrfs/CAMFk-+igFTv2E8svg=cQ6o3e6CrR5QwgQ3Ok9EyRaEvvthpqCQ@mail.gmail.com/
Signed-off-by: Qu Wenruo <wqu@suse.com>
---
 fs/btrfs/scrub.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index 8a21214eca35..3bccd171be61 100644
--- a/fs/btrfs/scrub.c
+++ b/fs/btrfs/scrub.c
@@ -1872,6 +1872,8 @@ static int flush_scrub_stripes(struct scrub_ctx *sctx)
 		stripe = &sctx->stripes[i];

 		wait_scrub_stripe_io(stripe);
+		sctx->stat.last_physical = stripe->physical +
+					   stripe_length(stripe);
 		scrub_reset_stripe(stripe);
 	}
 out:
@@ -2337,6 +2339,8 @@ static noinline_for_stack int scrub_stripe(struct scrub_ctx *sctx,
 			stripe_logical += chunk_logical;
 			ret = scrub_raid56_parity_stripe(sctx, scrub_dev, bg,
 							 map, stripe_logical);
+			sctx->stat.last_physical = min(physical + BTRFS_STRIPE_LEN,
+						       physical_end);
 			if (ret)
 				goto out;
 			goto next;
-- 
2.44.0
Thread overview (8 messages):
2024-03-08  3:10 [PATCH 0/2] btrfs: scrub: update last_physical more frequently Qu Wenruo
2024-03-08  3:25 ` Qu Wenruo
2024-03-08  3:25 ` [PATCH 1/2] btrfs: extract the stripe length calculation into a helper Qu Wenruo
2024-03-08  3:10   ` Qu Wenruo
2024-03-08 11:37   ` Johannes Thumshirn
2024-03-08  3:26 ` [PATCH 2/2] btrfs: scrub: update last_physical after scrubbing one stripe Qu Wenruo [this message]
2024-03-08  3:10   ` Qu Wenruo
2024-04-07 21:36 ` [PATCH 0/2] btrfs: scrub: update last_physical more frequently Qu Wenruo