From: Parav Pandit <parav@nvidia.com>
To: Jakub Kicinski <kuba@kernel.org>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"davem@davemloft.net" <davem@davemloft.net>,
	"edumazet@google.com" <edumazet@google.com>,
	"pabeni@redhat.com" <pabeni@redhat.com>,
	"corbet@lwn.net" <corbet@lwn.net>,
	"kalesh-anakkur.purayil@broadcom.com"
	<kalesh-anakkur.purayil@broadcom.com>,
	Saeed Mahameed <saeedm@nvidia.com>,
	"leon@kernel.org" <leon@kernel.org>,
	"jiri@resnulli.us" <jiri@resnulli.us>,
	Shay Drori <shayd@nvidia.com>, Dan Jurgens <danielj@nvidia.com>,
	Dima Chumak <dchumak@nvidia.com>,
	"linux-doc@vger.kernel.org" <linux-doc@vger.kernel.org>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	Jiri Pirko <jiri@nvidia.com>
Subject: RE: [net-next v3 1/2] devlink: Support setting max_io_eqs
Date: Fri, 5 Apr 2024 16:34:59 +0000	[thread overview]
Message-ID: <PH0PR12MB5481F01E3B241D87AD20E947DC032@PH0PR12MB5481.namprd12.prod.outlook.com> (raw)
In-Reply-To: <20240405071337.3b9ced49@kernel.org>



> From: Jakub Kicinski <kuba@kernel.org>
> Sent: Friday, April 5, 2024 7:44 PM
> 
> On Fri, 5 Apr 2024 03:13:36 +0000 Parav Pandit wrote:
> > Netdev channels (txq, rxq pairs) are typically created by the driver up to
> > the number of CPUs, provided there are enough IO event queues to match the
> > CPU count.
> >
> > RDMA QPs far outnumber netdev queues because multiple processes use them;
> > they are a per-user-space-process resource.
> > Those applications choose their number of QPs based on the number of CPUs
> > and the number of event channels used to deliver notifications to user
> > space.
> 
> Some other drivers (e.g. Intel) support multiple queues per core in netdev.
> For mlx5 I think AF_XDP may be a good example (or used to be) where there
> may be more than one queue?
>
Yes, multiple netdev queues can be connected to one EQ.
For example, as you described, mlx5 XDP; mlx5 also creates multiple txqs per
channel (one per traffic class), all linked to that channel's single EQ.
But those txqs are still per channel, AFAIK.
 
> So I think the question still stands even for netdev.
> We should document whether the number of EQs constrains the number of
> Rx/Tx queues.
> 
I believe the number of txqs/rxqs can exceed the number of EQs, with multiple
queues connecting to the same EQ.
Netdev channels have a more accurate linkage to EQs than raw txqs/rxqs do.
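To make the fan-in concrete, below is a small illustrative C model (made-up
numbers and types, not actual mlx5 code) of one EQ per channel with per-TC
txqs, showing how txqs can outnumber EQs:

#include <stdio.h>

/* Hypothetical model: each channel owns one EQ, one rxq, and one txq
 * per traffic class; all of a channel's queues report completions to
 * that channel's single EQ, so txqs + rxqs can outnumber EQs.
 */
int main(void)
{
	int num_channels = 8;	/* typically up to the CPU count */
	int num_tcs = 4;	/* per-channel txqs, one per traffic class */

	int num_eqs = num_channels;		/* one EQ per channel */
	int num_rxqs = num_channels;		/* one rxq per channel */
	int num_txqs = num_channels * num_tcs;	/* per-TC txqs */

	printf("EQs: %d, rxqs: %d, txqs: %d\n", num_eqs, num_rxqs, num_txqs);
	printf("queues sharing each EQ: %d\n",
	       (num_rxqs + num_txqs) / num_eqs);
	return 0;
}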

> > Driver uses the IRQs dynamically, up to the PCI limit, based on the number
> > of supported IO event queues.
> 
> Right but one IRQ <> one EQ? Typically / always?
Typically yes: one IRQ <> one EQ.

> SFs "share" the IRQs with PF IIRC, do they share EQs?
>
SFs do not share EQs; each SF has its own dedicated EQs.
You remember correctly that they share IRQs with the PF.
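To illustrate the IRQ sharing described above, a tiny C sketch (illustrative
types only, not mlx5's): each function keeps dedicated EQs, while an SF's EQs
map onto IRQ vectors borrowed from the parent PF's pool.

#include <stdio.h>

/* Hypothetical types modeling the relationship: typically one EQ per
 * IRQ vector within a function, but an SF's dedicated EQs may reuse
 * the PF's IRQ vectors rather than allocating their own.
 */
struct irq_vec {
	int vector;
	int users;		/* how many EQs map onto this vector */
};

struct eq {
	const char *owner;	/* "PF" or "SF" */
	struct irq_vec *irq;	/* may be shared across functions */
};

int main(void)
{
	struct irq_vec pf_irq = { .vector = 42, .users = 0 };

	/* The PF and the SF each get their own EQ ... */
	struct eq pf_eq = { .owner = "PF", .irq = &pf_irq };
	struct eq sf_eq = { .owner = "SF", .irq = &pf_irq };

	pf_irq.users = 2;	/* ... but they share the IRQ vector. */

	printf("%s EQ -> irq %d; %s EQ -> irq %d (vector shared by %d EQs)\n",
	       pf_eq.owner, pf_eq.irq->vector,
	       sf_eq.owner, sf_eq.irq->vector, pf_irq.users);
	return 0;
}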
 
> > > The next patch says "maximum IO event queues which are typically
> > > used to derive the maximum and default number of net device channels"
> > > It may not be obvious to non-mlx5 experts, I think it needs to be
> > > better documented.
> > I will expand the documentation in .../networking/devlink/devlink-port.rst.
> >
> > I will add the below change to v4, which also addresses David's comments.
> > Is this OK with you?
> 
> Looks like a good start but I think a few more sentences describing the
> relation to other resources would be good.
>
I think the EQ is a limited object without wider relations in the rest of the
stack.
Its relation to IRQs is probably a good addition, though.
Along with the changes below, I will add a reference to IRQs in v4.
 
> > --- a/Documentation/networking/devlink/devlink-port.rst
> > +++ b/Documentation/networking/devlink/devlink-port.rst
> > @@ -304,6 +304,11 @@ When user sets maximum number of IO event queues
> > for a SF or a VF, such function driver is limited to consume only
> > enforced number of IO event queues.
> >
> > +IO event queues deliver events related to IO queues, including
> > +network device transmit and receive queues (txq and rxq) and RDMA
> > +Queue Pairs (QPs).
> > +For example, the number of netdevice channels and RDMA device
> > +completion vectors are derived from the function's IO event queues.
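
For completeness, with the matching iproute2 support, capping a function's IO
EQs from the hypervisor would look roughly like the below (PCI address and
port index are just examples):

  $ devlink port function set pci/0000:06:00.0/1 max_io_eqs 32

After this, the VF/SF driver can create at most 32 IO event queues, which in
turn bounds its netdev channels and RDMA completion vectors.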
