Linux-NVME Archive mirror
From: Aurelien Aptel <aaptel@nvidia.com>
To: Sagi Grimberg <sagi@grimberg.me>,
	linux-nvme@lists.infradead.org, netdev@vger.kernel.org,
	hch@lst.de, kbusch@kernel.org, axboe@fb.com,
	chaitanyak@nvidia.com, davem@davemloft.net, kuba@kernel.org
Cc: Boris Pismenny <borisp@nvidia.com>,
	aurelien.aptel@gmail.com, smalin@nvidia.com, malin1024@gmail.com,
	ogerlitz@nvidia.com, yorayz@nvidia.com, galshalom@nvidia.com,
	mgurtovoy@nvidia.com, edumazet@google.com, pabeni@redhat.com,
	dsahern@kernel.org, ast@kernel.org, jacob.e.keller@intel.com
Subject: Re: [PATCH v24 01/20] net: Introduce direct data placement tcp offload
Date: Mon, 06 May 2024 15:28:11 +0300	[thread overview]
Message-ID: <253a5l3qg4k.fsf@nvidia.com> (raw)
In-Reply-To: <29655a73-5d4c-4773-a425-e16628b8ba7a@grimberg.me>

Sagi Grimberg <sagi@grimberg.me> writes:
> Understood. It is usually the case as io threads are not aligned to the
> rss steering rules (unless arfs is used).

The steering rule we add has two effects:
1) steer the flow to the offload object on the HW
2) provide CPU alignment with the io threads (Rx affinity similar to
   aRFS)

We understand point (2) might not be achieved in unbound queue
scenarios. That's fine.
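
To make the two effects concrete, here is a rough sketch of what the
rule conceptually does when we offload a socket. The helper names
(ddp_steer_to_offload_ctx, ddp_steer_to_cpu_queue) are made up for
illustration only; they are not the API proposed in this series:

static int ddp_install_steering_rule(struct net_device *netdev,
				     struct sock *sk, int io_cpu)
{
	int err;

	/* (1) match the connection's 5-tuple and direct it to the HW
	 * offload context that performs the direct data placement
	 */
	err = ddp_steer_to_offload_ctx(netdev, sk);		/* hypothetical */
	if (err)
		return err;

	/* (2) deliver RX completions on the channel whose IRQ is
	 * affined to io_cpu, so softirq processing lands on the same
	 * CPU as the nvme-tcp io thread (aRFS-like alignment)
	 */
	return ddp_steer_to_cpu_queue(netdev, sk, io_cpu);	/* hypothetical */
}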

>> But when it is bound, which is still the default and most common case,
>> we will benefit from the alignment. To avoid losing that benefit in the
>> default case, we would like to keep cfg->io_cpu.
>
> Well, this explanation is much more reasonable. An .affinity_hint
> argument seems like a proper argument to the interface, and nvme-tcp
> can set it to queue->io_cpu.

Ok, we will rename cfg->io_cpu to ->affinity_hint.
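
Roughly, the nvme-tcp side will then look like the sketch below. The
struct layout and the helper here are simplified/made up just to show
where the hint comes from; the exact ulp_ddp_config will be in the
next respin:

struct ulp_ddp_config {
	/* ... offload parameters ... */
	int	affinity_hint;	/* was io_cpu: preferred RX CPU, hint only */
};

static void nvme_tcp_fill_ddp_config(struct nvme_tcp_queue *queue,
				     struct ulp_ddp_config *cfg)
{
	/*
	 * With bound queues this aligns RX processing with the io
	 * thread; with unbound queues the hint may simply not help.
	 */
	cfg->affinity_hint = queue->io_cpu;
}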

>> Could you clarify what the advantages are of running unbound queues,
>> or of handling RX on a different cpu than the current io_cpu?
>
> See the discussion related to the patch from Li Feng:
> https://lore.kernel.org/lkml/20230413062339.2454616-1-fengli@smartx.com/

Thanks. As said previously, we are fine with reduced performance in
this edge case.

>>> nvme-tcp may handle the rx side directly from .data_ready() in the
>>> future; what will the offload do in that case?
>> It is not clear to us what benefit handling rx in .data_ready() would
>> achieve. From our experiments, ->sk_data_ready() is called either
>> from queue->io_cpu or sk->sk_incoming_cpu. Unless you enable aRFS,
>> sk_incoming_cpu will be constant for the whole connection. Can you
>> clarify what handling RX from data_ready() would provide?
>
> Saving the context switch from softirq to a kthread can reduce latency
> substantially for some workloads.

Ok, thanks for the explanation. With bound queues and steering, the
softirq CPU will be aligned with the offload CPU, so we think this is
also OK.
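
For reference, the current receive notification path looks roughly like
this (paraphrased from drivers/nvme/host/tcp.c, details trimmed). The
queue_work_on() bounce to io_work is the context switch you mention,
and with bound queues plus the steering rule that work already runs on
the CPU where the offloaded RX completes:

static void nvme_tcp_data_ready(struct sock *sk)
{
	struct nvme_tcp_queue *queue;

	read_lock_bh(&sk->sk_callback_lock);
	queue = sk->sk_user_data;
	if (likely(queue && queue->rd_enabled) &&
	    !test_bit(NVME_TCP_Q_POLLING, &queue->flags))
		/* punt RX from softirq to the io_work kthread */
		queue_work_on(queue->io_cpu, nvme_tcp_wq, &queue->io_work);
	read_unlock_bh(&sk->sk_callback_lock);
}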

Thanks


Thread overview: 44+ messages
2024-04-04 12:36 [PATCH v24 00/20] nvme-tcp receive offloads Aurelien Aptel
2024-04-04 12:36 ` [PATCH v24 01/20] net: Introduce direct data placement tcp offload Aurelien Aptel
2024-04-21 11:47   ` Sagi Grimberg
2024-04-26  7:21     ` Aurelien Aptel
2024-04-28  8:15       ` Sagi Grimberg
2024-04-29 11:35         ` Aurelien Aptel
2024-04-30 11:54           ` Sagi Grimberg
2024-05-02  7:04             ` Aurelien Aptel
2024-05-03  7:31               ` Sagi Grimberg
2024-05-06 12:28                 ` Aurelien Aptel [this message]
2024-04-04 12:36 ` [PATCH v24 02/20] netlink: add new family to manage ULP_DDP enablement and stats Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 03/20] iov_iter: skip copy if src == dst for direct data placement Aurelien Aptel
2024-04-15 14:28   ` Max Gurtovoy
2024-04-16 20:30   ` David Laight
2024-04-18  8:22     ` Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 04/20] net/tls,core: export get_netdev_for_sock Aurelien Aptel
2024-04-21 11:45   ` Sagi Grimberg
2024-04-04 12:37 ` [PATCH v24 05/20] nvme-tcp: Add DDP offload control path Aurelien Aptel
2024-04-07 22:08   ` Sagi Grimberg
2024-04-10  6:31     ` Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 06/20] nvme-tcp: Add DDP data-path Aurelien Aptel
2024-04-07 22:08   ` Sagi Grimberg
2024-04-10  6:31     ` Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 07/20] nvme-tcp: RX DDGST offload Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 08/20] nvme-tcp: Deal with netdevice DOWN events Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 09/20] Documentation: add ULP DDP offload documentation Aurelien Aptel
2024-04-09  8:49   ` Bagas Sanjaya
2024-04-04 12:37 ` [PATCH v24 10/20] net/mlx5e: Rename from tls to transport static params Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 11/20] net/mlx5e: Refactor ico sq polling to get budget Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 12/20] net/mlx5: Add NVMEoTCP caps, HW bits, 128B CQE and enumerations Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 13/20] net/mlx5e: NVMEoTCP, offload initialization Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 14/20] net/mlx5e: TCP flow steering for nvme-tcp acceleration Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 15/20] net/mlx5e: NVMEoTCP, use KLM UMRs for buffer registration Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 16/20] net/mlx5e: NVMEoTCP, queue init/teardown Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 17/20] net/mlx5e: NVMEoTCP, ddp setup and resync Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 18/20] net/mlx5e: NVMEoTCP, async ddp invalidation Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 19/20] net/mlx5e: NVMEoTCP, data-path for DDP+DDGST offload Aurelien Aptel
2024-04-04 12:37 ` [PATCH v24 20/20] net/mlx5e: NVMEoTCP, statistics Aurelien Aptel
2024-04-06  5:45 ` [PATCH v24 00/20] nvme-tcp receive offloads Jakub Kicinski
2024-04-07 22:21   ` Sagi Grimberg
2024-04-09 22:35     ` Chaitanya Kulkarni
2024-04-09 22:59       ` Jakub Kicinski
2024-04-18  8:29         ` Chaitanya Kulkarni
2024-04-18 15:28           ` Jakub Kicinski
