From: Yu Kuai <yukuai1@huaweicloud.com>
To: mpatocka@redhat.com, heinzm@redhat.com, xni@redhat.com,
agk@redhat.com, snitzer@kernel.org, dm-devel@lists.linux.dev,
song@kernel.org, yukuai3@huawei.com, jbrassow@f14.redhat.com,
neilb@suse.de, shli@fb.com, akpm@osdl.org
Cc: linux-kernel@vger.kernel.org, linux-raid@vger.kernel.org,
yukuai1@huaweicloud.com, yi.zhang@huawei.com,
yangerkun@huawei.com
Subject: [PATCH v4 00/14] dm-raid: fix v6.7 regressions
Date: Tue, 30 Jan 2024 10:18:29 +0800
Message-ID: <20240130021843.3608859-1-yukuai1@huaweicloud.com>
From: Yu Kuai <yukuai3@huawei.com>
Changes in v4:
- add patch 10 to fix a raid456 deadlock (for both md/raid and dm-raid);
- add patch 13 to wait for inflight IO completion while removing dm
device;
Changes in v3:
- fix a problem in patch 5;
- add patch 12;
Changes in v2:
- replace revert changes for dm-raid with real fixes;
- fix a dm-raid5 deadlock that has existed for a long time; the deadlock
  is triggered now that another problem is fixed in raid5, while before
  v6.7 users would read wrong data instead of deadlocking; patches 9-11;
First regression, related to stopping the sync thread:
The lifetime of sync_thread is designed as follows:
1) Something decides to start a sync_thread: set MD_RECOVERY_NEEDED and
   wake up the daemon thread;
2) The daemon thread detects that MD_RECOVERY_NEEDED is set, then sets
   MD_RECOVERY_RUNNING and registers the sync_thread;
3) The sync_thread executes md_do_sync() for the actual work; when it
   finishes or is interrupted, it sets MD_RECOVERY_DONE and wakes up the
   daemon thread;
4) The daemon thread detects that MD_RECOVERY_DONE is set, then clears
   MD_RECOVERY_RUNNING and unregisters the sync_thread.
In v6.7, commit f52f5c71f3d4 ("md: fix stopping sync thread") fixed
md/raid to follow this design; however, dm-raid was not considered at
that time, and the following tests hang:
shell/integrity-caching.sh
shell/lvconvert-raid-reshape.sh
This patch set fixes the broken tests with patches 1-4:
- patch 1 fixes step 4) being broken by a suspended array;
- patch 2 fixes step 4) being broken by a read-only array;
- patch 3 fixes step 3) being broken because md_do_sync() doesn't set
  MD_RECOVERY_DONE; note that this patch introduces a new problem where
  data can be corrupted, which is fixed in later patches;
- patch 4 fixes step 1) being broken because the sync_thread is
  registered and MD_RECOVERY_RUNNING is set directly; this is md/raid
  behaviour, not related to dm-raid.
With patches 1-4, the above tests no longer hang; however, they still
fail and complain that ext4 is corrupted.
Second regression, related to the frozen sync thread:
Note that for raid456, if reshape is interrupted, then calling
"pers->start_reshape" will corrupt data. dm-raid relied on md_do_sync()
not setting MD_RECOVERY_DONE so that a new sync_thread wouldn't be
registered, and patch 3 breaks exactly that.
- Patches 5-6 fix this problem by interrupting reshape and freezing the
  sync_thread in dm_suspend(), then unfreezing it and continuing reshape
  in dm_resume(); see the sketch after this list. It's verified that the
  dm-raid tests no longer complain that ext4 is corrupted.
- Patch 7 fixes the problem that raid_message() calls
  md_reap_sync_thread() directly without holding 'reconfig_mutex'.
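The freeze/unfreeze idea from patches 5-6, as a rough user-space sketch
(rs_suspend(), rs_resume() and struct fake_mddev are illustrative
stand-ins, not the actual dm-raid code):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative model of the relevant mddev state. */
struct fake_mddev {
        bool reshape_in_progress;
        bool recovery_frozen;   /* models MD_RECOVERY_FROZEN */
};

/* Patches 5-6, in spirit: freeze on suspend ... */
static void rs_suspend(struct fake_mddev *m)
{
        if (m->reshape_in_progress) {
                /* Interrupt reshape and keep it frozen so no new
                 * sync_thread is registered while suspended. */
                m->recovery_frozen = true;
                printf("suspend: reshape interrupted and frozen\n");
        }
}

/* ... and unfreeze on resume so reshape can continue. */
static void rs_resume(struct fake_mddev *m)
{
        if (m->recovery_frozen) {
                m->recovery_frozen = false;
                printf("resume: unfrozen, reshape continues\n");
        }
}

int main(void)
{
        struct fake_mddev m = { .reshape_in_progress = true };

        rs_suspend(&m);
        rs_resume(&m);
        return 0;
}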
Last regression, related to dm-raid456 IO concurrent with reshape:
For raid456, if reshape is still in progress, then IO across the reshape
position will wait for reshape to make progress. However, for dm-raid,
in the following cases reshape will never make progress, so IO will hang:
1) the array is read-only;
2) MD_RECOVERY_WAIT is set;
3) MD_RECOVERY_FROZEN is set;
After commit c467e97f079f ("md/raid6: use valid sector values to determine
if an I/O should wait on the reshape") fixed the problem that IO across
the reshape position didn't wait for reshape, the dm-raid test
shell/lvconvert-raid-reshape.sh started to hang in raid5_make_request().
For md/raid, the problem doesn't exist because:
1) If the array is read-only, it can be switched to read-write via
   ioctl/sysfs;
2) md/raid never sets MD_RECOVERY_WAIT;
3) If MD_RECOVERY_FROZEN is set, since mddev_suspend() no longer holds
   'reconfig_mutex', the flag can be cleared and reshape can continue
   via the sysfs API 'sync_action'.
However, I'm not sure yet how to avoid the problem in dm-raid.
- patches 9-11 fix this problem by detecting the above 3 cases in
  dm_suspend() and failing such IO directly; a sketch of the check
  follows below.
If users actually hit the IO error, it means they were reading wrong
data before c467e97f079f. It's safe to read/write the array once
reshape makes progress successfully.
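A user-space sketch of that case analysis (the flag fields model the
array states listed above; the functions are illustrative, not the
actual raid5 code):

#include <stdbool.h>
#include <stdio.h>

struct fake_mddev {
        bool read_only;         /* case 1: array is read-only */
        bool recovery_wait;     /* case 2: MD_RECOVERY_WAIT is set */
        bool recovery_frozen;   /* case 3: MD_RECOVERY_FROZEN is set */
};

/* Reshape can only make progress if none of the three cases holds. */
static bool reshape_can_progress(const struct fake_mddev *m)
{
        return !m->read_only && !m->recovery_wait && !m->recovery_frozen;
}

/* The choice made by patches 9-11: IO across the reshape position may
 * wait only if reshape can make progress; otherwise fail it instead of
 * hanging forever. */
static void submit_io_across_reshape(const struct fake_mddev *m)
{
        if (reshape_can_progress(m))
                printf("io: wait for reshape to make progress\n");
        else
                printf("io: fail with an error, reshape is stuck\n");
}

int main(void)
{
        struct fake_mddev stuck = { .recovery_frozen = true };

        submit_io_across_reshape(&stuck);
        return 0;
}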
There are also some other minor changes: patches 8 and 12.
Test result:
I applied this patchset on top of v6.8-rc1 and ran the lvm2 test suite
with the following command for 24 rounds (about 2 days):
# run every lvm2 shell test whose script mentions raid
for t in `ls test/shell`; do
	if grep -q raid "test/shell/$t"; then
		make check T=shell/$t
	fi
done
failed count  failed test
1 ### failed: [ndev-vanilla] shell/dmsecuretest.sh
1 ### failed: [ndev-vanilla] shell/dmsetup-integrity-keys.sh
1 ### failed: [ndev-vanilla] shell/dmsetup-keyring.sh
5 ### failed: [ndev-vanilla] shell/duplicate-pvs-md0.sh
1 ### failed: [ndev-vanilla] shell/duplicate-vgid.sh
2 ### failed: [ndev-vanilla] shell/duplicate-vgnames.sh
1 ### failed: [ndev-vanilla] shell/fsadm-crypt.sh
1 ### failed: [ndev-vanilla] shell/integrity.sh
6 ### failed: [ndev-vanilla] shell/lvchange-raid1-writemostly.sh
2 ### failed: [ndev-vanilla] shell/lvchange-rebuild-raid.sh
5 ### failed: [ndev-vanilla] shell/lvconvert-raid-reshape-stripes-load-reload.sh
4 ### failed: [ndev-vanilla] shell/lvconvert-raid-restripe-linear.sh
1 ### failed: [ndev-vanilla] shell/lvconvert-raid1-split-trackchanges.sh
20 ### failed: [ndev-vanilla] shell/lvconvert-repair-raid.sh
20 ### failed: [ndev-vanilla] shell/lvcreate-large-raid.sh
24 ### failed: [ndev-vanilla] shell/lvextend-raid.sh
I also randomly picked some tests and verified by hand that they fail
in v6.6 as well (not all of them; I haven't had time to do that yet):
shell/lvextend-raid.sh
shell/lvcreate-large-raid.sh
shell/lvconvert-repair-raid.sh
shell/lvchange-rebuild-raid.sh
shell/lvchange-raid1-writemostly.sh
Yu Kuai (14):
md: don't ignore suspended array in md_check_recovery()
md: don't ignore read-only array in md_check_recovery()
md: make sure md_do_sync() will set MD_RECOVERY_DONE
md: don't register sync_thread for reshape directly
md: export helpers to stop sync_thread
dm-raid: really frozen sync_thread during suspend
md/dm-raid: don't call md_reap_sync_thread() directly
dm-raid: add a new helper prepare_suspend() in md_personality
md: export helper md_is_rdwr()
md: don't suspend the array for interrupted reshape
md/raid456: fix a deadlock for dm-raid456 while io concurrent with
reshape
dm-raid: fix lockdep warning in "pers->hot_add_disk"
dm: wait for IO completion before removing dm device
dm-raid: remove mddev_suspend/resume()
drivers/md/dm-raid.c | 78 +++++++++++++++++++---------
drivers/md/dm.c | 3 ++
drivers/md/md.c | 120 +++++++++++++++++++++++++++++--------------
drivers/md/md.h | 16 ++++++
drivers/md/raid10.c | 16 +-----
drivers/md/raid5.c | 61 ++++++++++++----------
6 files changed, 190 insertions(+), 104 deletions(-)
--
2.39.2