All the mail mirrored from lore.kernel.org
* [PATCH 1/2] mm: swap: use folio_batch_reinit() in folio_batch_move_lru()
@ 2023-03-31  9:58 Qi Zheng
  2023-03-31  9:58 ` [PATCH 2/2] mm: mlock: use folios_put() in mlock_folio_batch() Qi Zheng
  2023-03-31 22:04 ` [PATCH 1/2] mm: swap: use folio_batch_reinit() in folio_batch_move_lru() Andrew Morton
  0 siblings, 2 replies; 5+ messages in thread
From: Qi Zheng @ 2023-03-31  9:58 UTC
  To: akpm, willy, lstoakes; +Cc: linux-mm, linux-kernel, Qi Zheng

In folio_batch_move_lru(), the folio_batch is not freshly
initialised, so it should call folio_batch_reinit() as
pagevec_lru_move_fn() did before.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 mm/swap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/swap.c b/mm/swap.c
index 57cb01b042f6..423199ee8478 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -222,7 +222,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
 	if (lruvec)
 		unlock_page_lruvec_irqrestore(lruvec, flags);
 	folios_put(fbatch->folios, folio_batch_count(fbatch));
-	folio_batch_init(fbatch);
+	folio_batch_reinit(fbatch);
 }
 
 static void folio_batch_add_and_move(struct folio_batch *fbatch,
-- 
2.20.1



* [PATCH 2/2] mm: mlock: use folios_put() in mlock_folio_batch()
  2023-03-31  9:58 [PATCH 1/2] mm: swap: use folio_batch_reinit() in folio_batch_move_lru() Qi Zheng
@ 2023-03-31  9:58 ` Qi Zheng
  2023-03-31 22:04 ` [PATCH 1/2] mm: swap: use folio_batch_reinit() in folio_batch_move_lru() Andrew Morton
  1 sibling, 0 replies; 5+ messages in thread
From: Qi Zheng @ 2023-03-31  9:58 UTC
  To: akpm, willy, lstoakes; +Cc: linux-mm, linux-kernel, Qi Zheng

Since we have updated mlock to use folios, it's better
to call folios_put() instead of calling release_pages()
directly.

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 mm/mlock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/mlock.c b/mm/mlock.c
index 617469fce96d..40b43f8740df 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -206,7 +206,7 @@ static void mlock_folio_batch(struct folio_batch *fbatch)
 
 	if (lruvec)
 		unlock_page_lruvec_irq(lruvec);
-	release_pages(fbatch->folios, fbatch->nr);
+	folios_put(fbatch->folios, folio_batch_count(fbatch));
 	folio_batch_reinit(fbatch);
 }
 
-- 
2.20.1



* Re: [PATCH 1/2] mm: swap: use folio_batch_reinit() in folio_batch_move_lru()
  2023-03-31  9:58 [PATCH 1/2] mm: swap: use folio_batch_reinit() in folio_batch_move_lru() Qi Zheng
  2023-03-31  9:58 ` [PATCH 2/2] mm: mlock: use folios_put() in mlock_folio_batch() Qi Zheng
@ 2023-03-31 22:04 ` Andrew Morton
  2023-04-02 13:36   ` Qi Zheng
  1 sibling, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2023-03-31 22:04 UTC
  To: Qi Zheng; +Cc: willy, lstoakes, linux-mm, linux-kernel

On Fri, 31 Mar 2023 17:58:57 +0800 Qi Zheng <zhengqi.arch@bytedance.com> wrote:

> In folio_batch_move_lru(), the folio_batch is not freshly
> initialised, so it should call folio_batch_reinit() as
> pagevec_lru_move_fn() did before.
> 
> ...
>
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -222,7 +222,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
>  	if (lruvec)
>  		unlock_page_lruvec_irqrestore(lruvec, flags);
>  	folios_put(fbatch->folios, folio_batch_count(fbatch));
> -	folio_batch_init(fbatch);
> +	folio_batch_reinit(fbatch);
>  }
>  
>  static void folio_batch_add_and_move(struct folio_batch *fbatch,

Well...  why?  This could leave the kernel falsely thinking that the
folio's pages have been drained from the per-cpu LRU addition
magazines.

Maybe that's desirable, maybe not, but I think this change needs much
much more explanation describing why it is beneficial.


folio_batch_reinit() seems to be a custom thing for the mlock code -
perhaps it just shouldn't exist, and its operation should instead be
open-coded in mlock_folio_batch().


The dynamics and rules around ->percpu_pvec_drained are a bit
mysterious.  A code comment which explains all of this would be
useful.



* Re: [PATCH 1/2] mm: swap: use folio_batch_reinit() in folio_batch_move_lru()
  2023-03-31 22:04 ` [PATCH 1/2] mm: swap: use folio_batch_reinit() in folio_batch_move_lru() Andrew Morton
@ 2023-04-02 13:36   ` Qi Zheng
  2023-04-04  7:07     ` Qi Zheng
  0 siblings, 1 reply; 5+ messages in thread
From: Qi Zheng @ 2023-04-02 13:36 UTC
  To: Andrew Morton; +Cc: willy, lstoakes, linux-mm, linux-kernel

Hi Andrew,

On 2023/4/1 06:04, Andrew Morton wrote:
> On Fri, 31 Mar 2023 17:58:57 +0800 Qi Zheng <zhengqi.arch@bytedance.com> wrote:
> 
>> In folio_batch_move_lru(), the folio_batch is not freshly
>> initialised, so it should call folio_batch_reinit() as
>> pagevec_lru_move_fn() did before.
>>
>> ...
>>
>> --- a/mm/swap.c
>> +++ b/mm/swap.c
>> @@ -222,7 +222,7 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn)
>>   	if (lruvec)
>>   		unlock_page_lruvec_irqrestore(lruvec, flags);
>>   	folios_put(fbatch->folios, folio_batch_count(fbatch));
>> -	folio_batch_init(fbatch);
>> +	folio_batch_reinit(fbatch);
>>   }
>>   
>>   static void folio_batch_add_and_move(struct folio_batch *fbatch,
> 
> Well...  why?  This could leave the kernel falsely thinking that the
> folio's pages have been drained from the per-cpu LRU addition
> magazines.
> 
> Maybe that's desirable, maybe not, but I think this change needs much
> much more explanation describing why it is beneficial.
> 
> 
> folio_batch_reinit() seems to be a custom thing for the mlock code -
> perhaps it just shouldn't exist, and its operation should instead be
> open-coded in mlock_folio_batch().

The folio_batch_reinit() corresponds to pagevec_reinit(),
which was originally used in both pagevec_lru_move_fn()
and mlock_pagevec(), so it is not a custom thing for the
mlock code.


Commit c2bc16817aa0 ("mm/swap: add folio_batch_move_lru()")
introduced folio_batch_move_lru() to replace
pagevec_lru_move_fn(), but called folio_batch_init()
(corresponding to pagevec_init()) instead of
folio_batch_reinit() (corresponding to pagevec_reinit()).
This change was not explained in the commit message and
seems like an oversight.

> 
> 
> The dynamics and rules around ->percpu_pvec_drained are a bit
> mysterious.  A code comment which explains all of this would be
> useful.

Commit d9ed0d08b6c6 ("mm: only drain per-cpu pagevecs once
per pagevec usage") originally introduced the ->drained
field (later renamed to ->percpu_pvec_drained by commit
7f0b5fb953e7), whose purpose is to drain per-cpu pagevecs
only once per pagevec usage.

Maybe it would be better to add the following code comment:

diff --git a/mm/swap.c b/mm/swap.c
index 423199ee8478..107c4a13e476 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1055,6 +1055,7 @@ EXPORT_SYMBOL(release_pages);
   */
  void __pagevec_release(struct pagevec *pvec)
  {
+       /* Only drain per-cpu pagevecs once per pagevec usage */
         if (!pvec->percpu_pvec_drained) {
                 lru_add_drain();
                 pvec->percpu_pvec_drained = true;

Please let me know if I missed something.

Thanks,
Qi

> 



* Re: [PATCH 1/2] mm: swap: use folio_batch_reinit() in folio_batch_move_lru()
  2023-04-02 13:36   ` Qi Zheng
@ 2023-04-04  7:07     ` Qi Zheng
  0 siblings, 0 replies; 5+ messages in thread
From: Qi Zheng @ 2023-04-04  7:07 UTC
  To: Andrew Morton, Mel Gorman; +Cc: willy, lstoakes, linux-mm, linux-kernel



On 2023/4/2 21:36, Qi Zheng wrote:
> Hi Andrew,
> 
> On 2023/4/1 06:04, Andrew Morton wrote:
>> On Fri, 31 Mar 2023 17:58:57 +0800 Qi Zheng 
>> <zhengqi.arch@bytedance.com> wrote:
>>
>>> In folio_batch_move_lru(), the folio_batch is not freshly
>>> initialised, so it should call folio_batch_reinit() as
>>> pagevec_lru_move_fn() did before.
>>>
>>> ...
>>>
>>> --- a/mm/swap.c
>>> +++ b/mm/swap.c
>>> @@ -222,7 +222,7 @@ static void folio_batch_move_lru(struct 
>>> folio_batch *fbatch, move_fn_t move_fn)
>>>       if (lruvec)
>>>           unlock_page_lruvec_irqrestore(lruvec, flags);
>>>       folios_put(fbatch->folios, folio_batch_count(fbatch));
>>> -    folio_batch_init(fbatch);
>>> +    folio_batch_reinit(fbatch);
>>>   }
>>>   static void folio_batch_add_and_move(struct folio_batch *fbatch,
>>
>> Well...  why?  This could leave the kernel falsely thinking that the
>> folio's pages have been drained from the per-cpu LRU addition
>> magazines.
>>
>> Maybe that's desirable, maybe not, but I think this change needs much
>> much more explanation describing why it is beneficial.
>>
>>
>> folio_batch_reinit() seems to be a custom thing for the mlock code -
>> perhaps it just shouldn't exist, and its operation should instead be
>> open-coded in mlock_folio_batch().
> 
> The folio_batch_reinit() corresponds to pagevec_reinit(),
> which was originally used in both pagevec_lru_move_fn()
> and mlock_pagevec(), so it is not a custom thing for the
> mlock code.
> 
> 
> Commit c2bc16817aa0 ("mm/swap: add folio_batch_move_lru()")
> introduced folio_batch_move_lru() to replace
> pagevec_lru_move_fn(), but called folio_batch_init()
> (corresponding to pagevec_init()) instead of
> folio_batch_reinit() (corresponding to pagevec_reinit()).
> This change was not explained in the commit message and
> seems like an oversight.
> 
>>
>>
>> The dynamics and rules around ->percpu_pvec_drained are a bit
>> mysterious.  A code comment which explains all of this would be
>> useful.
> 
> Commit d9ed0d08b6c6 ("mm: only drain per-cpu pagevecs once
> per pagevec usage") originally introduced the ->drained
> field (later renamed to ->percpu_pvec_drained by commit
> 7f0b5fb953e7), whose purpose is to drain per-cpu pagevecs
> only once per pagevec usage.
> 
> Maybe it would be better to add the following code comment:
> 
> diff --git a/mm/swap.c b/mm/swap.c
> index 423199ee8478..107c4a13e476 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -1055,6 +1055,7 @@ EXPORT_SYMBOL(release_pages);
>    */
>   void __pagevec_release(struct pagevec *pvec)
>   {
> +       /* Only drain per-cpu pagevecs once per pagevec usage */
>          if (!pvec->percpu_pvec_drained) {
>                  lru_add_drain();
>                  pvec->percpu_pvec_drained = true;
> 
> Please let me know if I missed something.

Maybe the commit message can be modified as follows:

```
The ->percpu_pvec_drained field was originally introduced
by commit d9ed0d08b6c6 ("mm: only drain per-cpu pagevecs
once per pagevec usage") to drain per-cpu pagevecs only
once per pagevec usage. But since commit c2bc16817aa0
("mm/swap: add folio_batch_move_lru()"),
->percpu_pvec_drained is reset to false by the
folio_batch_init() call in folio_batch_move_lru(), which
may cause per-cpu pagevecs to be drained multiple times
per pagevec usage. This is not what we expected; use
folio_batch_reinit() in folio_batch_move_lru() to fix it.
```

Also +CC Mel Gorman to confirm this. :)

Thanks,
Qi

> 
> Thanks,
> Qi
> 
>>
> 
> 

-- 
Thanks,
Qi

