From: Paul Menzel <pmenzel@molgen.mpg.de>
To: Yiming Xu <teddyxym@outlook.com>
Cc: linux-raid@vger.kernel.org, paul.e.luse@intel.com,
firnyee@gmail.com, song@kernel.org, hch@infradead.org
Subject: Re: [RFC V2 1/2] md/raid5: optimize RAID5 performance.
Date: Tue, 2 Apr 2024 19:19:24 +0200
Message-ID: <72abd26e-11f1-49ec-8c77-6d876a63c409@molgen.mpg.de>
In-Reply-To: <SJ0PR10MB574146BF65CC516F253B2DADD83E2@SJ0PR10MB5741.namprd10.prod.outlook.com>
Dear Shushu,
Thank you for your patch. Some comments and nits.
Please do *not* send it to <majordomo@vger.kernel.org>.
Please also do not add a dot/period at the end of the commit message
summary. (A more specific one would be nice too.)
On 02.04.24 19:05, Yiming Xu wrote:
> From: Shushu Yi <firnyee@gmail.com>
>
> <changelog>
Please remove.
> Optimized by using fine-grained locks, customized data structures, and
Imperative mood: Optimize
> scattered address space. Achieves significant improvements in both
> throughput and latency.
>
> This patch attempts to maximize thread-level parallelism and reduce
> CPU suspension time caused by lock contention. On a system with four
> PCIe 4.0 SSDs, we increased overall storage throughput by 89.4% and
> decreased the 99.99th-percentile I/O latency by 85.4%.
>
> Seeking feedback on the approach and any additional information regarding
> required performance testing before submitting a formal patch.
>
> Note: this work has been published as a paper, and the URL is
> (https://www.hotstorage.org/2022/camera-ready/hotstorage22-5/pdf/
> hotstorage22-5.pdf)
A more elaborate description is needed.
> Co-developed-by: Yiming Xu <teddyxym@outlook.com>
> Signed-off-by: Yiming Xu <teddyxym@outlook.com>
> Signed-off-by: Shushu Yi <firnyee@gmail.com>
> Tested-by: Paul Luse <paul.e.luse@intel.com>
> ---
> V1 -> V2: Cleaned up coding style and divided into 2 patches (HemiRAID
> and ScalaRAID corresponding to the paper mentioned above). This part is
> HemiRAID, which increased the number of stripe locks to 128.
>
> drivers/md/raid5.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
> index 9b5a7dc3f2a0..d26da031d203 100644
> --- a/drivers/md/raid5.h
> +++ b/drivers/md/raid5.h
> @@ -501,7 +501,7 @@ struct disk_info {
> * and creating that much locking depth can cause
> * problems.
> */
> -#define NR_STRIPE_HASH_LOCKS 8
> +#define NR_STRIPE_HASH_LOCKS 128
> #define STRIPE_HASH_LOCKS_MASK (NR_STRIPE_HASH_LOCKS - 1)
>
> struct r5worker {
Is it intentional that you only increased the value of the macro? The
comment above also suggests that bigger numbers might cause problems.
Kind regards,
Paul
Thread overview: 2+ messages
2024-04-02 17:05 [RFC V2 1/2] md/raid5: optimize RAID5 performance Yiming Xu
2024-04-02 17:19 ` Paul Menzel [this message]