* [PATCH bpf-next] bpf/test_run: increase Page Pool's ptr_ring size in live frames mode
From: Alexander Lobakin @ 2024-02-14 15:38 UTC
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: Alexander Lobakin, Toke Høiland-Jørgensen,
	Martin KaFai Lau, Jakub Kicinski, Maciej Fijalkowski, bpf, netdev,
	linux-kernel

Currently, when running xdp-trafficgen, test_run creates page_pools with
a ptr_ring size of %NAPI_POLL_WEIGHT (64).
This might work fine when XDP Tx queues are polled with a budget
limitation. However, we often clean them with no limitation to ensure
maximum free space when sending.
For example, in ice and idpf (upcoming), we use "lazy" cleaning, i.e. we
clean the XDP Tx queue only when the free space there is less than 1/4
of the queue size. Let's take a ring size of 512 just as an example.
3/4 of the ring is 384, and oftentimes, when we enter the cleaning
function, we have this whole amount ready (or 256 or 192, it doesn't
matter).
Then we call xdp_return_frame_bulk(), and after the 64th frame,
page_pool_put_page_bulk() starts returning pages to the page allocator
because the ptr_ring is already full. put_page(), alloc_page() et al.
start consuming a ton of CPU time and dominate the perf top output.
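
For illustration, the "lazy" cleaning pattern described above looks
roughly like the sketch below. The ring structure and helpers
(my_xdp_ring, ring_free_space(), frames_to_clean(),
next_completed_frame()) are placeholders, not the actual ice/idpf code;
only the xdp_*_frame_bulk() calls are the real API.

  /* Simplified sketch of lazy XDP Tx cleaning; helper and struct
   * names are illustrative only.
   */
  static void xdp_tx_maybe_clean(struct my_xdp_ring *ring)
  {
          struct xdp_frame_bulk bq;

          /* Clean only when free space drops below 1/4 of the ring,
           * so up to 3/4 of it (384 frames for a 512-entry ring) gets
           * completed in one pass.
           */
          if (ring_free_space(ring) >= ring->size / 4)
                  return;

          xdp_frame_bulk_init(&bq);

          while (frames_to_clean(ring))
                  /* Batches frames and flushes them into the pool's
                   * ptr_ring; once the 64-entry ptr_ring is full, the
                   * remaining pages fall back to
                   * put_page()/alloc_page().
                   */
                  xdp_return_frame_bulk(next_completed_frame(ring), &bq);

          xdp_flush_frame_bulk(&bq);
  }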

Let's not limit the ptr_ring to 64 for no real reason and allow more
pages to be recycled. Just don't put anything into
page_pool_params::pool_size and let the Page Pool core pick its default
of 1024 entries (I don't believe there are real use cases to clean more
than that amount of descriptors). After the change, the MM layer
disappears from the perf top output and all pages get recycled to the
PP. On my test setup on idpf with the default ring size (512), this
gives +80% Tx performance with no visible memory consumption increase.
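
For reference, when ->pool_size is left at zero, the Page Pool core
falls back to a fixed default ring size; roughly, the relevant logic in
page_pool_init() (net/core/page_pool.c) looks like this (trimmed, not a
verbatim quote):

  unsigned int ring_qsize = 1024; /* Default */

  /* ... */

  if (pool->p.pool_size)
          ring_qsize = pool->p.pool_size;

  /* Sanity limit mem that can be pinned down */
  if (ring_qsize > 32768)
          return -E2BIG;

  /* ... */

  if (ptr_ring_init(&pool->ring, ring_qsize, GFP_KERNEL) < 0)
          return -ENOMEM;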

Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
---
 net/bpf/test_run.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
index dfd919374017..1ad4f1ddcb88 100644
--- a/net/bpf/test_run.c
+++ b/net/bpf/test_run.c
@@ -163,7 +163,6 @@ static int xdp_test_run_setup(struct xdp_test_data *xdp, struct xdp_buff *orig_c
 	struct page_pool_params pp_params = {
 		.order = 0,
 		.flags = 0,
-		.pool_size = xdp->batch_size,
 		.nid = NUMA_NO_NODE,
 		.init_callback = xdp_test_run_init_page,
 		.init_arg = xdp,
-- 
2.43.0



* Re: [PATCH bpf-next] bpf/test_run: increase Page Pool's ptr_ring size in live frames mode
From: Toke Høiland-Jørgensen @ 2024-02-14 16:16 UTC
  To: Alexander Lobakin, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko
  Cc: Alexander Lobakin, Martin KaFai Lau, Jakub Kicinski,
	Maciej Fijalkowski, bpf, netdev, linux-kernel

Alexander Lobakin <aleksander.lobakin@intel.com> writes:

> Currently, when running xdp-trafficgen, test_run creates page_pools with
> a ptr_ring size of %NAPI_POLL_WEIGHT (64).
> This might work fine when XDP Tx queues are polled with a budget
> limitation. However, we often clean them with no limitation to ensure
> maximum free space when sending.
> For example, in ice and idpf (upcoming), we use "lazy" cleaning, i.e. we
> clean the XDP Tx queue only when the free space there is less than 1/4
> of the queue size. Let's take a ring size of 512 just as an example.
> 3/4 of the ring is 384, and oftentimes, when we enter the cleaning
> function, we have this whole amount ready (or 256 or 192, it doesn't
> matter).
> Then we call xdp_return_frame_bulk(), and after the 64th frame,
> page_pool_put_page_bulk() starts returning pages to the page allocator
> because the ptr_ring is already full. put_page(), alloc_page() et al.
> start consuming a ton of CPU time and dominate the perf top output.
>
> Let's not limit the ptr_ring to 64 for no real reason and allow more
> pages to be recycled. Just don't put anything into
> page_pool_params::pool_size and let the Page Pool core pick its default
> of 1024 entries (I don't believe there are real use cases to clean more
> than that amount of descriptors). After the change, the MM layer
> disappears from the perf top output and all pages get recycled to the
> PP. On my test setup on idpf with the default ring size (512), this
> gives +80% Tx performance with no visible memory consumption increase.
>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>

Hmm, so my original idea with keeping this low was to avoid having a lot
of large rings lying around if it is used by multiple processes at once.
But we need to move away from the per-syscall allocation anyway, and
with Lorenzo's patches introducing a global system page pool we have an
avenue for that. So in the meantime, I have no objection to this...

Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>



* Re: [PATCH bpf-next] bpf/test_run: increase Page Pool's ptr_ring size in live frames mode
From: Toke Høiland-Jørgensen @ 2024-02-14 23:02 UTC
  To: Alexander Lobakin, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko
  Cc: Alexander Lobakin, Martin KaFai Lau, Jakub Kicinski,
	Maciej Fijalkowski, bpf, netdev, linux-kernel

Toke Høiland-Jørgensen <toke@redhat.com> writes:

> Alexander Lobakin <aleksander.lobakin@intel.com> writes:
>
>> Currently, when running xdp-trafficgen, test_run creates page_pools with
>> a ptr_ring size of %NAPI_POLL_WEIGHT (64).
>> This might work fine when XDP Tx queues are polled with a budget
>> limitation. However, we often clean them with no limitation to ensure
>> maximum free space when sending.
>> For example, in ice and idpf (upcoming), we use "lazy" cleaning, i.e. we
>> clean the XDP Tx queue only when the free space there is less than 1/4
>> of the queue size. Let's take a ring size of 512 just as an example.
>> 3/4 of the ring is 384, and oftentimes, when we enter the cleaning
>> function, we have this whole amount ready (or 256 or 192, it doesn't
>> matter).
>> Then we call xdp_return_frame_bulk(), and after the 64th frame,
>> page_pool_put_page_bulk() starts returning pages to the page allocator
>> because the ptr_ring is already full. put_page(), alloc_page() et al.
>> start consuming a ton of CPU time and dominate the perf top output.
>>
>> Let's not limit the ptr_ring to 64 for no real reason and allow more
>> pages to be recycled. Just don't put anything into
>> page_pool_params::pool_size and let the Page Pool core pick its default
>> of 1024 entries (I don't believe there are real use cases to clean more
>> than that amount of descriptors). After the change, the MM layer
>> disappears from the perf top output and all pages get recycled to the
>> PP. On my test setup on idpf with the default ring size (512), this
>> gives +80% Tx performance with no visible memory consumption increase.
>>
>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
>
> Hmm, so my original idea with keeping this low was to avoid having a lot
> of large rings lying around if it is used by multiple processes at once.
> But we need to move away from the per-syscall allocation anyway, and
> with Lorenzo's patches introducing a global system page pool we have an
> avenue for that. So in the meantime, I have no objection to this...
>
> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>

Actually, since Lorenzo's patches already landed in net-next, let's just
move to using those straight away. I'll send a patch for this tomorrow :)

-Toke



* Re: [PATCH bpf-next] bpf/test_run: increase Page Pool's ptr_ring size in live frames mode
From: Alexander Lobakin @ 2024-02-15 11:57 UTC
  To: Toke Høiland-Jørgensen
  Cc: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko,
	Martin KaFai Lau, Jakub Kicinski, Maciej Fijalkowski, bpf, netdev,
	linux-kernel

From: Toke Høiland-Jørgensen <toke@redhat.com>
Date: Thu, 15 Feb 2024 00:02:27 +0100

> Toke Høiland-Jørgensen <toke@redhat.com> writes:
> 
>> Alexander Lobakin <aleksander.lobakin@intel.com> writes:
>>
>>> Currently, when running xdp-trafficgen, test_run creates page_pools with
>>> a ptr_ring size of %NAPI_POLL_WEIGHT (64).
>>> This might work fine when XDP Tx queues are polled with a budget
>>> limitation. However, we often clean them with no limitation to ensure
>>> maximum free space when sending.
>>> For example, in ice and idpf (upcoming), we use "lazy" cleaning, i.e. we
>>> clean the XDP Tx queue only when the free space there is less than 1/4
>>> of the queue size. Let's take a ring size of 512 just as an example.
>>> 3/4 of the ring is 384, and oftentimes, when we enter the cleaning
>>> function, we have this whole amount ready (or 256 or 192, it doesn't
>>> matter).
>>> Then we call xdp_return_frame_bulk(), and after the 64th frame,
>>> page_pool_put_page_bulk() starts returning pages to the page allocator
>>> because the ptr_ring is already full. put_page(), alloc_page() et al.
>>> start consuming a ton of CPU time and dominate the perf top output.
>>>
>>> Let's not limit the ptr_ring to 64 for no real reason and allow more
>>> pages to be recycled. Just don't put anything into
>>> page_pool_params::pool_size and let the Page Pool core pick its default
>>> of 1024 entries (I don't believe there are real use cases to clean more
>>> than that amount of descriptors). After the change, the MM layer
>>> disappears from the perf top output and all pages get recycled to the
>>> PP. On my test setup on idpf with the default ring size (512), this
>>> gives +80% Tx performance with no visible memory consumption increase.
>>>
>>> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
>>
>> Hmm, so my original idea with keeping this low was to avoid having a lot
>> of large rings lying around if it is used by multiple processes at once.
>> But we need to move away from the per-syscall allocation anyway, and
>> with Lorenzo's patches introducing a global system page pool we have an
>> avenue for that. So in the meantime, I have no objection to this...
>>
>> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
> 
> Actually, since Lorenzo's patches already landed in net-next, let's just
> move to using those straight away. I'll send a patch for this tomorrow :)

Keep in mind that the system page_pools do direct recycling based on
cpuid, and for now, memory leaks are possible. Please see my patch[0]
for the details :D

> 
> -Toke
> 

[0]
https://lore.kernel.org/netdev/20240215113905.96817-1-aleksander.lobakin@intel.com

Olek

