From: Chengming Zhou <chengming.zhou@linux.dev>
To: Zhongkun He <hezhongkun.hzk@bytedance.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Yosry Ahmed <yosryahmed@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm <linux-mm@kvack.org>,
	wuyun.abel@bytedance.com, zhouchengming@bytedance.com,
	Nhat Pham <nphamcs@gmail.com>
Subject: Re: [External] Re: [bug report] mm/zswap :memory corruption after zswap_load().
Date: Thu, 21 Mar 2024 17:28:42 +0800	[thread overview]
Message-ID: <d280d59f-8760-4f2e-973e-7fecd9c3710c@linux.dev> (raw)
In-Reply-To: <CACSyD1ObnBrN0mnL-NgtOs5XYH7zh00Vd5z3cygrVvepQHhfiA@mail.gmail.com>

On 2024/3/21 14:36, Zhongkun He wrote:
> On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou <chengming.zhou@linux.dev> wrote:
>>
>> On 2024/3/21 13:09, Zhongkun He wrote:
>>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
>>> <chengming.zhou@linux.dev> wrote:
>>>>
>>>> On 2024/3/21 12:34, Zhongkun He wrote:
>>>>> Hey folks,
>>>>>
>>>>> Recently, I tested the zswap with memory reclaiming in the mainline
>>>>> (6.8) and found a memory corruption issue related to exclusive loads.
>>>>
>>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
>>>> This fix avoids concurrent swapin using the same swap entry.
>>>>
>>>
>>> Yes, this fix avoids concurrent swapin from different CPUs, but the
>>> reported issue occurs on the same CPU.
>>
>> I think you may have misunderstood the race description in that fix's
>> changelog: CPU0 and CPU1 just mean two concurrent threads, not two
>> real CPUs.
>>
>> Could you verify if the problem still exists with this fix?
> 
> Yes, I'm sure the problem still exists with this patch.
> Here is some debug info, collected on a patched (non-mainline) kernel:
> 
> bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s",
> ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include
> linux/mm_types.h

Ok, this problem seems to only happen on SWP_SYNCHRONOUS_IO swap
backends, which currently include zram, ramdisk, pmem and nvdimm.

Maybe it's not a good idea to use zswap on these swap backends?
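
For reference, the swapcache bypass is gated roughly like this in
do_swap_page() (a simplified excerpt from my reading of mm/memory.c,
trimmed down here, so treat it as a sketch rather than the exact code):

    /* mm/memory.c: do_swap_page(), simplified */
    if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
        __swap_count(entry) == 1) {
            /* skip the swapcache and read the folio directly */
            ...
    }

Only these synchronous backends can take this bypass path at all.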

The problem here is that the page fault handler tries to skip the
swapcache and swap in the folio directly (when the swap entry count
== 1), but then it can't install the folio into the pte because
something changed in the meantime, such as a concurrent fork
duplicating the entry.

Maybe we should write back that folio in this special case.
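
To illustrate the failing sequence (pseudocode only, based on the
description above; the helper names are illustrative, not the real
kernel functions or their signatures):

    /* Fault 1 on the swap entry, with __swap_count(entry) == 1 */
    folio = alloc_folio_sketch();     /* NOT added to the swapcache */
    swap_readpage(...);               /* zswap_load() succeeds and, being
                                       * exclusive, frees the zswap entry */
    if (!pte_same(ptep_get(pte), orig_pte))
            goto out_discard;         /* pte changed concurrently: the
                                       * folio and its data are dropped */

    /* Fault 2 retries the same swap entry: zswap no longer has an
     * entry and the data was never written back to zram, so stale
     * garbage is read in -> memory corruption.
     */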

> 
> offset nsecs tid pid cpu
> 2140659, 595771411052,15045,15045,6
> swap_readpage+1
> do_swap_page+2135
> handle_mm_fault+2426
> do_user_addr_fault+462
> do_page_fault+48
> async_page_fault+62
> 
> offset nsecs tid pid cpu
> 2140659, 595771424445,15045,15045,6
> swap_readpage+1
> do_swap_page+2135
> handle_mm_fault+2426
> do_user_addr_fault+462
> do_page_fault+48
> async_page_fault+62
> 
> -------------------------------
> There are two page faults with the same tid and offset in 13393 nsecs.
> 
>>
>>>
>>> Thanks.
>>>
>>>> Thanks.
>>>>
>>>>>
>>>>>
>>>>> root@**:/sys/fs/cgroup/zz# stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep
>>>>> stress: info: [31753] dispatching hogs: 0 cpu, 0 io, 5 vm, 0 hdd
>>>>> stress: FAIL: [31758] (522) memory corruption at: 0x7f347ed1a010
>>>>> stress: FAIL: [31753] (394) <-- worker 31758 returned error 1
>>>>> stress: WARN: [31753] (396) now reaping child worker processes
>>>>> stress: FAIL: [31753] (451) failed run completed in 14s
>>>>>
>>>>>
>>>>> 1. Test steps (the frequency of memory reclaim has been accelerated):
>>>>> -------------------------
>>>>> a. set up zswap, zram and cgroup v2
>>>>> b. echo 0 > /sys/kernel/mm/lru_gen/enabled
>>>>>    (increases the probability of the problem occurring)
>>>>> c. mkdir /sys/fs/cgroup/zz
>>>>>    echo $$ > /sys/fs/cgroup/zz/cgroup.procs
>>>>>    cd /sys/fs/cgroup/zz/
>>>>>    stress --vm 5 --vm-bytes 1g --vm-hang 3 --vm-keep
>>>>>
>>>>> d. in another shell:
>>>>>    while :; do for i in {1..5}; do echo 20g >
>>>>>    /sys/fs/cgroup/zz/memory.reclaim & done; sleep 1; done
>>>>>
>>>>> 2. Root cause:
>>>>> --------------------------
>>>>> With a small probability, the page fault will occur twice with the
>>>>> original pte, even though a new pte has been successfully set.
>>>>> Unfortunately, the zswap_entry was already released during the first
>>>>> page fault (exclusive load), so zswap_load() fails, and since there is
>>>>> no corresponding data in swap space either, memory corruption occurs.
>>>>>
>>>>> bpftrace -e'k:zswap_load {printf("%lld, %lld\n", ((struct page
>>>>> *)arg0)->private,nsecs)}'
>>>>> --include linux/mm_types.h  > a.txt
>>>>>
>>>>> Looking for the same index appearing twice:
>>>>>
>>>>> index            nsecs
>>>>> 1318876, 8976040736819
>>>>> 1318876, 8976040746078
>>>>>
>>>>> 4123110, 8976234682970
>>>>> 4123110, 8976234689736
>>>>>
>>>>> 2268896, 8976660124792
>>>>> 2268896, 8976660130607
>>>>>
>>>>> 4634105, 8976662117938
>>>>> 4634105, 8976662127596
>>>>>
>>>>> 3. Solution
>>>>>
>>>>> Should we free zswap entries in batches, so that the zswap_entry is
>>>>> still valid when the next page fault occurs with the original pte?
>>>>> It would be great if there were other, better solutions.
>>>>>
>>>>
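
On the "free zswap_entry in batches" idea quoted above: one way to
picture it (a purely hypothetical sketch, none of these names exist
in zswap today) is to defer entry frees onto a list and drain it in
batches later, so that a quickly retried fault could still find the
entry:

    /* Hypothetical deferred-free sketch, NOT actual zswap code */
    static LIST_HEAD(zswap_deferred_frees);
    static DEFINE_SPINLOCK(zswap_deferred_lock);

    static void zswap_entry_free_deferred(struct zswap_entry *entry)
    {
            spin_lock(&zswap_deferred_lock);
            list_add_tail(&entry->lru, &zswap_deferred_frees);
            spin_unlock(&zswap_deferred_lock);
            /* a periodic drain would do the actual frees in a batch */
    }

That said, it may be simpler to keep the entry valid (or write the
folio back) than to tune how long a deferred entry must survive.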


Thread overview: 29+ messages
2024-03-21  4:34 [bug report] mm/zswap :memory corruption after zswap_load() Zhongkun He
2024-03-21  4:42 ` Chengming Zhou
2024-03-21  5:09   ` [External] " Zhongkun He
2024-03-21  5:24     ` Chengming Zhou
2024-03-21  6:36       ` Zhongkun He
2024-03-21  9:28         ` Chengming Zhou [this message]
2024-03-21 15:25           ` Nhat Pham
2024-03-21 18:32             ` Yosry Ahmed
2024-03-22  3:27               ` Chengming Zhou
2024-03-22  3:16             ` Zhongkun He
2024-03-22  3:04           ` Zhongkun He
2024-03-22 19:34             ` Yosry Ahmed
2024-03-22 23:04               ` Barry Song
2024-03-22 23:08                 ` Yosry Ahmed
2024-03-22 23:18                   ` Barry Song
2024-03-22 23:22                     ` Yosry Ahmed
2024-03-22 23:32                       ` Barry Song
2024-03-22 23:34                         ` Yosry Ahmed
2024-03-22 23:38                           ` Barry Song
2024-03-22 23:41                             ` Yosry Ahmed
2024-03-23  0:34                               ` Barry Song
2024-03-23  0:42                                 ` Yosry Ahmed
2024-03-23 10:48                                 ` Chris Li
2024-03-23 11:27                                   ` Chris Li
2024-03-23 12:41                                   ` Zhongkun He
2024-03-23  1:34               ` Zhongkun He
2024-03-23  1:36                 ` Yosry Ahmed
2024-03-23 10:52                 ` Chris Li
2024-03-23 10:55                   ` Barry Song
