XDP-Newbies Archive mirror
From: "Toke Høiland-Jørgensen" <toke@redhat.com>
To: Alexandre Cassen <acassen@gmail.com>,
	team lnx <teamlnxi8@gmail.com>,
	xdp-newbies@vger.kernel.org
Subject: Re: XDP packet queueing and scheduling capabilities
Date: Wed, 14 Feb 2024 14:21:57 +0100
Message-ID: <871q9ffafe.fsf@toke.dk>
In-Reply-To: <e4086e04-ca73-439c-8a77-529c2f3562af@gmail.com>

Alexandre Cassen <acassen@gmail.com> writes:

>>> Watching your LPC 2022 presentation, at the end there was discussion
>>> around using the existing Qdisc kernel framework and finding a way to
>>> share the path between XDP and the netstack. Is that a target for
>>> adding PIFO, or more generally getting queueing support for XDP?
>> 
>> I don't personally consider it feasible to have forwarded XDP frames
>> share the qdisc path. The presence of an sk_buff is simply too
>> fundamentally baked into the qdisc layer. I'm hoping that the addition
>> of an eBPF-based qdisc will instead make it feasible to share queueing
>> algorithm code between the two layers (and even build forwarding paths
>> that can handle both by having the different BPF implementations
>> cooperate). And of course co-existence between XDP and stack forwarding
>> is important to avoid starvation, but that is already an issue for XDP
>> forwarding today.
>
> Agreed, an eBPF-backed Qdisc 'proxy' sounds like a great idea. What is
> the forecast latency impact?

Of writing the qdisc in eBPF instead of as a regular kernel module?
Negligible; the overhead shown in the last posting of those patches[0]
is not nil, but it seems there's a path to getting rid of it (teaching
BPF how to put skbs directly into list/rbtree data structures instead of
having to allocate a container for each one).
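
To make that container overhead concrete, the pattern looks roughly
like this (a sketch only: the rbtree kfuncs and the private() /
__contains() macros are the ones declared in the selftests'
bpf_experimental.h, while the skb kptr field follows the patch series
rather than any merged API):

/* Per-packet wrapper: BPF cannot yet link an skb into a bpf_rb_root
 * directly, so every queued packet costs one extra allocation. */
struct skb_node {
        u64 tstamp;                 /* sort key, e.g. a transmit deadline */
        struct sk_buff __kptr *skb; /* reference to the queued packet */
        struct bpf_rb_node node;    /* links the wrapper into the rbtree */
};

private(Q) struct bpf_spin_lock q_lock;
private(Q) struct bpf_rb_root q_root __contains(skb_node, node);

/* Ordering callback for bpf_rbtree_add(): earliest deadline first */
static bool skb_less(struct bpf_rb_node *a, const struct bpf_rb_node *b)
{
        struct skb_node *na = container_of(a, struct skb_node, node);
        struct skb_node *nb = container_of(b, struct skb_node, node);

        return na->tstamp < nb->tstamp;
}

Removing the overhead would mean letting bpf_rbtree_add() link the skb
itself (sk_buff already carries an rb_node for the stack's FQ qdisc),
so the wrapper and its per-packet allocation would disappear.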

The latency impact of mixing XDP and qdisc traffic? Dunno, depends on
the traffic and the algorithms managing it. I don't think there's
anything inherent in the BPF side of things that would impact latency
(it's all just code in the end), as long as we make sure that the APIs
and primitives can express all the things we need to effectively
implement good algorithms. Which is why I'm asking for examples of use
cases :)
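
To illustrate the sort of API this involves, the series in [0] exposes
enqueue/dequeue as struct_ops programs attached to a Qdisc_ops. A
minimal skeleton, reusing the rbtree sketch above, might look like the
following (program names, the bpf_sk_buff_ptr argument and the
bpf_qdisc_skb_drop() kfunc all follow the in-progress series and may
still change):

SEC("struct_ops/bpf_edt_enqueue")
int BPF_PROG(bpf_edt_enqueue, struct sk_buff *skb, struct Qdisc *sch,
             struct bpf_sk_buff_ptr *to_free)
{
        struct skb_node *n = bpf_obj_new(typeof(*n));

        if (!n) {
                bpf_qdisc_skb_drop(skb, to_free);
                return NET_XMIT_DROP;
        }
        n->tstamp = skb->tstamp;           /* sort by the EDT timestamp */
        skb = bpf_kptr_xchg(&n->skb, skb); /* move the skb ref into the node */
        if (skb)                           /* NULL for a fresh node, but the */
                bpf_qdisc_skb_drop(skb, to_free); /* ref must be handled */
        bpf_spin_lock(&q_lock);
        bpf_rbtree_add(&q_root, &n->node, skb_less);
        bpf_spin_unlock(&q_lock);
        return NET_XMIT_SUCCESS;
}

SEC("struct_ops/bpf_edt_dequeue")
struct sk_buff *BPF_PROG(bpf_edt_dequeue, struct Qdisc *sch)
{
        /* pop the earliest node with bpf_rbtree_first() +
         * bpf_rbtree_remove(), bpf_kptr_xchg() the skb back out,
         * bpf_obj_drop() the wrapper and return the skb */
        return NULL;
}

SEC(".struct_ops")
struct Qdisc_ops edt_ops = {
        .enqueue = (void *)bpf_edt_enqueue,
        .dequeue = (void *)bpf_edt_dequeue,
        .id      = "bpf_edt",
};

The enqueue/dequeue bodies are plain BPF code, which is what makes it
plausible to share the same algorithm between a qdisc and an XDP
queueing hook, as mentioned above.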

-Toke

[0] https://lore.kernel.org/r/cover.1705432850.git.amery.hung@bytedance.com


Thread overview: 10+ messages
2024-02-12  4:27 XDP packet queueing and scheduling capabilities team lnx
2024-02-13 13:03 ` Toke Høiland-Jørgensen
2024-02-13 14:33   ` Alexandre Cassen
2024-02-13 15:01     ` Dave Taht
2024-02-13 16:07     ` Toke Høiland-Jørgensen
2024-02-13 17:31       ` Alexandre Cassen
2024-02-14 13:21         ` Toke Høiland-Jørgensen [this message]
2024-02-14 13:27   ` Marcus Wichelmann
2024-02-14 16:21     ` Toke Høiland-Jørgensen
2024-02-15 19:10   ` team lnx
