From: Sagi Grimberg <sagi@grimberg.me>
To: Li Feng <fengli@smartx.com>, Keith Busch <kbusch@kernel.org>,
Jens Axboe <axboe@kernel.dk>, Christoph Hellwig <hch@lst.de>,
"open list:NVM EXPRESS DRIVER" <linux-nvme@lists.infradead.org>,
open list <linux-kernel@vger.kernel.org>
Cc: Anton.Gavriliuk@hpe.ua
Subject: Re: [PATCH] nvme/tcp: Add wq_unbound modparam for nvme_tcp_wq
Date: Wed, 13 Mar 2024 11:47:16 +0200 [thread overview]
Message-ID: <fc352d23-06e0-4f24-b220-fb87a229eec7@grimberg.me> (raw)
In-Reply-To: <20240313085531.617633-1-fengli@smartx.com>
On 13/03/2024 10:55, Li Feng wrote:
> The default nvme_tcp_wq will use all CPUs to process tasks. Sometimes it is
> necessary to set CPU affinity to improve performance.
>
> A new module parameter wq_unbound is added here. If set to true, users can
> configure cpu affinity through
> /sys/devices/virtual/workqueue/nvme_tcp_wq/cpumask.
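Assuming the parameter lands as proposed, usage would look something like this (the sysfs path is from the patch description; the CPU list is illustrative):

```shell
# Load nvme-tcp with the proposed wq_unbound parameter enabled
modprobe nvme_tcp wq_unbound=1

# With WQ_UNBOUND | WQ_SYSFS set, the workqueue attributes appear under
# /sys/devices/virtual/workqueue/nvme_tcp_wq/; restrict IO work to CPUs 0-3:
echo 0-3 > /sys/devices/virtual/workqueue/nvme_tcp_wq/cpumask
```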
>
> Signed-off-by: Li Feng <fengli@smartx.com>
> ---
> drivers/nvme/host/tcp.c | 21 ++++++++++++++++++---
> 1 file changed, 18 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index a6d596e05602..5eaa275f436f 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -36,6 +36,14 @@ static int so_priority;
> module_param(so_priority, int, 0644);
> MODULE_PARM_DESC(so_priority, "nvme tcp socket optimize priority");
>
> +/*
> + * Use the unbound workqueue for nvme_tcp_wq, then we can set the cpu affinity
> + * from sysfs.
> + */
> +static bool wq_unbound;
> +module_param(wq_unbound, bool, 0644);
> +MODULE_PARM_DESC(wq_unbound, "set unbound flag for nvme tcp work queue");
How about: "Use unbound workqueue for nvme-tcp IO context (default false)"
> +
> /*
> * TLS handshake timeout
> */
> @@ -1551,7 +1559,10 @@ static void nvme_tcp_set_queue_io_cpu(struct nvme_tcp_queue *queue)
> else if (nvme_tcp_poll_queue(queue))
> n = qid - ctrl->io_queues[HCTX_TYPE_DEFAULT] -
> ctrl->io_queues[HCTX_TYPE_READ] - 1;
> - queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
> + if (wq_unbound)
> + queue->io_cpu = WORK_CPU_UNBOUND;
> + else
> + queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
> }
>
> static void nvme_tcp_tls_done(void *data, int status, key_serial_t pskid)
> @@ -2790,6 +2801,8 @@ static struct nvmf_transport_ops nvme_tcp_transport = {
>
> static int __init nvme_tcp_init_module(void)
> {
> + unsigned int wq_flags = WQ_MEM_RECLAIM | WQ_HIGHPRI;
> +
> BUILD_BUG_ON(sizeof(struct nvme_tcp_hdr) != 8);
> BUILD_BUG_ON(sizeof(struct nvme_tcp_cmd_pdu) != 72);
> BUILD_BUG_ON(sizeof(struct nvme_tcp_data_pdu) != 24);
> @@ -2799,8 +2812,10 @@ static int __init nvme_tcp_init_module(void)
> BUILD_BUG_ON(sizeof(struct nvme_tcp_icresp_pdu) != 128);
> BUILD_BUG_ON(sizeof(struct nvme_tcp_term_pdu) != 24);
>
> - nvme_tcp_wq = alloc_workqueue("nvme_tcp_wq",
> - WQ_MEM_RECLAIM | WQ_HIGHPRI, 0);
> + if (wq_unbound)
> + wq_flags |= WQ_UNBOUND | WQ_SYSFS;
I think we should expose WQ_SYSFS unconditionally. Add it in a separate patch
that comes before this one.
> +
> + nvme_tcp_wq = alloc_workqueue("nvme_tcp_wq", wq_flags, 0);
> if (!nvme_tcp_wq)
> return -ENOMEM;
>
Thread overview: 2+ messages
2024-03-13 8:55 [PATCH] nvme/tcp: Add wq_unbound modparam for nvme_tcp_wq Li Feng
2024-03-13 9:47 ` Sagi Grimberg [this message]