XDP-Newbies Archive mirror
From: Federico Parola <federico.parola@polito.it>
To: xdp-newbies@vger.kernel.org
Subject: Re: AF_XDP busy poll and TX path
Date: Mon, 8 Nov 2021 10:53:10 +0100	[thread overview]
Message-ID: <cb827b68-ae61-5116-7f84-d930ba66f0eb@polito.it> (raw)
In-Reply-To: <eeb976ca-4af1-34fb-4723-bddd77f972cb@polito.it>

Hello everybody,
sorry to bring this up again, but I haven't received any replies and 
I'm still seeing this problem on kernel 5.14.
Does anybody have insight into whether this is expected behavior, a 
bug, or perhaps just a problem in my code?

Best regards,
Federico Parola

On 04/08/21 10:55, Federico Parola wrote:
> Dear all,
> I made some changes to the xdpsock kernel sample to support an l2fwd 
> scenario between two interfaces. I didn't use the xsk_fwd sample 
> because I didn't need the complex umem management used there.
> The updated code can be found here:
> https://pastebin.com/p54T5nfW
> (I'd be happy to clean it up and submit it upstream if you deem it 
> useful; just let me know.) The core forwarding loop is sketched below.
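> 
> For reference, the forwarding loop looks roughly like this sketch 
> (names such as struct xsk_socket_info and complete_tx are illustrative 
> helpers, not the exact code from the pastebin; both sockets are 
> assumed to share a single umem so frames can be handed from one ring 
> to the other without copying):
> 
> 	#include <sys/socket.h>
> 	#include <bpf/xsk.h>
> 
> 	#define BATCH_SIZE 64
> 
> 	struct xsk_socket_info {		/* illustrative per-port state */
> 		struct xsk_ring_cons rx;	/* RX ring */
> 		struct xsk_ring_prod tx;	/* TX ring */
> 		struct xsk_ring_prod fq;	/* fill ring */
> 		struct xsk_ring_cons cq;	/* completion ring */
> 		struct xsk_socket *xsk;
> 	};
> 
> 	/* Reclaim frames the NIC finished sending on `out` and hand them
> 	 * back to `in`'s fill ring so they can be reused for reception. */
> 	static void complete_tx(struct xsk_socket_info *out,
> 				struct xsk_socket_info *in)
> 	{
> 		__u32 idx_cq = 0, idx_fq = 0;
> 		unsigned int done, i;
> 
> 		done = xsk_ring_cons__peek(&out->cq, BATCH_SIZE, &idx_cq);
> 		if (!done)
> 			return;
> 		/* sketch only: spin until the fill ring has room */
> 		while (xsk_ring_prod__reserve(&in->fq, done, &idx_fq) != done)
> 			;
> 		for (i = 0; i < done; i++)
> 			*xsk_ring_prod__fill_addr(&in->fq, idx_fq + i) =
> 				*xsk_ring_cons__comp_addr(&out->cq, idx_cq + i);
> 		xsk_ring_prod__submit(&in->fq, done);
> 		xsk_ring_cons__release(&out->cq, done);
> 	}
> 
> 	/* Forward one batch from `in`'s RX ring to `out`'s TX ring. */
> 	static void l2fwd(struct xsk_socket_info *in,
> 			  struct xsk_socket_info *out)
> 	{
> 		__u32 idx_rx = 0, idx_tx = 0;
> 		unsigned int rcvd, i;
> 
> 		rcvd = xsk_ring_cons__peek(&in->rx, BATCH_SIZE, &idx_rx);
> 		if (!rcvd) {
> 			/* under busy polling the syscall itself drives RX NAPI */
> 			recvfrom(xsk_socket__fd(in->xsk), NULL, 0,
> 				 MSG_DONTWAIT, NULL, NULL);
> 			return;
> 		}
> 
> 		while (xsk_ring_prod__reserve(&out->tx, rcvd, &idx_tx) != rcvd)
> 			complete_tx(out, in);	/* make room on the TX ring */
> 
> 		for (i = 0; i < rcvd; i++) {
> 			const struct xdp_desc *rx_desc =
> 				xsk_ring_cons__rx_desc(&in->rx, idx_rx + i);
> 			struct xdp_desc *tx_desc =
> 				xsk_ring_prod__tx_desc(&out->tx, idx_tx + i);
> 
> 			/* a real l2fwd would also rewrite MAC addresses here */
> 			tx_desc->addr = rx_desc->addr;
> 			tx_desc->len = rx_desc->len;
> 		}
> 
> 		xsk_ring_prod__submit(&out->tx, rcvd);
> 		xsk_ring_cons__release(&in->rx, rcvd);
> 
> 		/* kick TX; with busy polling this also runs the TX NAPI loop */
> 		sendto(xsk_socket__fd(out->xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);
> 		complete_tx(out, in);
> 	}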
> 
> I get a strange behavior when using busy polling on two interfaces. Here 
> are the numbers I get when testing on an Intel XL710 dual-port NIC (i40e 
> driver). I ran all the tests on a single core (both user space and 
> interrupts) and tuned the input rate to achieve maximum throughput (this 
> was fundamental in the non-busy-poll tests). When running with busy 
> polling, both interfaces are configured with:
> 
> $ echo 2 | sudo tee /sys/class/net/<if>/napi_defer_hard_irqs
> $ echo 200000 | sudo tee /sys/class/net/<if>/gro_flush_timeout
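> 
> In addition to those sysfs knobs, each AF_XDP socket is set up for 
> busy polling the same way the upstream xdpsock sample does it; a 
> minimal sketch, assuming kernel >= 5.11 (where SO_PREFER_BUSY_POLL 
> and SO_BUSY_POLL_BUDGET were introduced):
> 
> 	#include <sys/socket.h>
> 	#include <bpf/xsk.h>
> 
> 	/* older libc headers may not define these yet */
> 	#ifndef SO_PREFER_BUSY_POLL
> 	#define SO_PREFER_BUSY_POLL	69
> 	#endif
> 	#ifndef SO_BUSY_POLL_BUDGET
> 	#define SO_BUSY_POLL_BUDGET	70
> 	#endif
> 
> 	static void apply_busy_poll(struct xsk_socket *xsk, int batch_size)
> 	{
> 		int opt;
> 
> 		/* prefer busy polling over interrupt-driven NAPI */
> 		opt = 1;
> 		setsockopt(xsk_socket__fd(xsk), SOL_SOCKET,
> 			   SO_PREFER_BUSY_POLL, &opt, sizeof(opt));
> 
> 		/* busy-poll for up to 20 us per syscall */
> 		opt = 20;
> 		setsockopt(xsk_socket__fd(xsk), SOL_SOCKET,
> 			   SO_BUSY_POLL, &opt, sizeof(opt));
> 
> 		/* let a single syscall process up to batch_size packets */
> 		opt = batch_size;
> 		setsockopt(xsk_socket__fd(xsk), SOL_SOCKET,
> 			   SO_BUSY_POLL_BUDGET, &opt, sizeof(opt));
> 	}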
> 
> TEST     | Mpps
> ---------+-------
> 1 if     | 10.77
> 1 if BP  | 13.01
> 2 ifs    | 9.49
> 2 ifs BP | 7.25
> (BP = busy poll with default batch size, 64)
> 
> As you can see, moving from one interface to two in non-busy-poll mode 
> reduces performance a bit, but I think this makes sense since we are 
> handling data structures for two ports instead of one (does that 
> reasoning hold?).
> What I don't understand is why performance on 2 ifs is lower when using 
> busy polling. I ran some tests and saw that with two interfaces there 
> is some softirq CPU usage and the TX port keeps generating interrupts, 
> while this doesn't happen when using a single interface.
> 
> My question is: should interrupts and softirq processing be disabled on 
> the TX path as well when using busy polling? Or is it just an RX feature?
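> 
> For context, the TX kick in my loop follows the xdpsock sample's 
> pattern; a sketch, assuming the sockets were created with 
> XDP_USE_NEED_WAKEUP and reusing the illustrative struct from above:
> 
> 	#include <stdbool.h>
> 
> 	/* Without busy polling the syscall is only needed when the kernel
> 	 * flags the TX ring; with busy polling every sendto() is what runs
> 	 * the TX NAPI loop from process context. */
> 	static void kick_tx(struct xsk_socket_info *xsk, bool busy_poll)
> 	{
> 		if (busy_poll || xsk_ring_prod__needs_wakeup(&xsk->tx))
> 			sendto(xsk_socket__fd(xsk->xsk), NULL, 0,
> 			       MSG_DONTWAIT, NULL, 0);
> 	}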
> 
> Best regards,
> Federico Parola

Thread overview: 2+ messages
2021-08-04  8:55 AF_XDP busy poll and TX path Federico Parola
2021-11-08  9:53 ` Federico Parola [this message]
