From: Barry Song <21cnbao@gmail.com>
Date: Sat, 23 Mar 2024 12:32:05 +1300
Subject: Re: [External] Re: [bug report] mm/zswap :memory corruption after zswap_load().
To: Yosry Ahmed
Cc: Zhongkun He, Chengming Zhou, Johannes Weiner, Andrew Morton, linux-mm,
 wuyun.abel@bytedance.com, zhouchengming@bytedance.com, Nhat Pham, Kairui Song,
 Minchan Kim, David Hildenbrand, Chris Li, Ying

On Sat, Mar 23, 2024 at 12:23 PM Yosry Ahmed wrote:
>
> On Fri, Mar 22, 2024 at 4:18 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Sat, Mar 23, 2024 at 12:09 PM Yosry Ahmed wrote:
> > >
> > > On Fri, Mar 22, 2024 at 4:04 PM Barry Song <21cnbao@gmail.com> wrote:
> > > >
> > > > On Sat, Mar 23, 2024 at 8:35 AM Yosry Ahmed wrote:
> > > > >
> > > > > On Thu, Mar 21, 2024 at 8:04 PM Zhongkun He
> > > > > wrote:
> > > > > >
> > > > > > On Thu, Mar 21, 2024 at 5:29 PM Chengming Zhou wrote:
> > > > > > >
> > > > > > > On 2024/3/21 14:36, Zhongkun He wrote:
> > > > > > > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou wrote:
> > > > > > > >>
> > > > > > > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > > > > > > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
> > > > > > > >>> wrote:
> > > > > > > >>>>
> > > > > > > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > > > > > > >>>>> Hey folks,
> > > > > > > >>>>>
> > > > > > > >>>>> Recently, I tested the zswap with memory reclaiming in the mainline
> > > > > > > >>>>> (6.8) and found a memory corruption issue related to exclusive loads.
> > > > > > > >>>>
> > > > > > > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > > > > > > >>>> This fix avoids concurrent swapin using the same swap entry.
> > > > > > > >>>>
> > > > > > > >>>
> > > > > > > >>> Yes, This fix avoids concurrent swapin from different cpu, but the
> > > > > > > >>> reported issue occurs
> > > > > > > >>> on the same cpu.
> > > > > > > >>
> > > > > > > >> I think you may misunderstand the race description in this fix changelog,
> > > > > > > >> the CPU0 and CPU1 just mean two concurrent threads, not real two CPUs.
> > > > > > > >>
> > > > > > > >> Could you verify if the problem still exists with this fix?
> > > > > > > >
> > > > > > > > Yes, I'm sure the problem still exists with this patch.
> > > > > > > > There is some debug info, not mainline.
> > > > > > > >
> > > > > > > > bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s",
> > > > > > > > ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include
> > > > > > > > linux/mm_types.h
> > > > > > >
> > > > > > > Ok, this problem seems only happen on SWP_SYNCHRONOUS_IO swap backends,
> > > > > > > which now include zram, ramdisk, pmem, nvdimm.
> > > > > >
> > > > > > Yes.
> > > > > >
> > > > > > >
> > > > > > > It maybe not good to use zswap on these swap backends?
> > > > > > >
> > > > > > > The problem here is the page fault handler tries to skip swapcache to
> > > > > > > swapin the folio (swap entry count == 1), but then it can't install folio
> > > > > > > to pte entry since some changes happened such as concurrent fork of entry.
> > > > > > >
> > > > > >
> > > > > > The first page fault returned VM_FAULT_RETRY because
> > > > > > folio_lock_or_retry() failed.
> > > > >
> > > > > How so? The folio is newly allocated and not visible to any other
> > > > > threads or CPUs. swap_read_folio() unlocks it and then returns and we
> > > > > immediately try to lock it again with folio_lock_or_retry(). How does
> > > > > this fail?
> > > > >
> > > > > Let's go over what happens after swap_read_folio():
> > > > > - The 'if (!folio)' code block will be skipped.
> > > > > - folio_lock_or_retry() should succeed as I mentioned earlier.
> > > > > - The 'if (swapcache)' code block will be skipped.
> > > > > - The pte_same() check should succeed on first look because other
> > > > > concurrent faulting threads should be held off by the newly introduced
> > > > > swapcache_prepare() logic. But looking deeper I think this one may
> > > > > fail due to a concurrent MADV_WILLNEED.
> > > > > - The 'if (unlikely(!folio_test_uptodate(folio)))' part will be
> > > > > skipped because swap_read_folio() marks the folio up-to-date.
> > > > > - After that point there is no possible failure until we install the
> > > > > pte, at which point concurrent faults will fail on !pte_same() and
> > > > > retry.
> > > > >
> > > > > So the only failure I think is possible is the pte_same() check. I see
> > > > > how a concurrent MADV_WILLNEED could cause that check to fail. A
> > > > > concurrent MADV_WILLNEED will block on swapcache_prepare(), but once
> > > > > the fault resolves it will go ahead and read the folio again into the
> > > > > swapcache. It seems like we will end up with two copies of the same
> > > >
> > > > but zswap has freed the object when the do_swap_page finishes swap_read_folio
> > > > due to exclusive load feature of zswap?
> > > >
> > > > so WILLNEED will get corrupted data and put it into swapcache.
> > > > some other concurrent new forked process might get the new data
> > > > from the swapcache WILLNEED puts when the new-forked process
> > > > goes into do_swap_page.
> > >
> > > Oh I was wondering how synchronization with WILLNEED happens without
> > > zswap. It seems like we could end up with two copies of the same folio
> > > and one of them will be leaked unless I am missing something.
> > > >
> > > > so very likely a new process is forked right after do_swap_page finishes
> > > > swap_read_folio and before swapcache_clear.
> > > >
> > > > > folio? Maybe this is harmless because the folio in the swapcache will
> > > > > never be used, but it is essentially leaked at that point, right?
> > > > >
> > > > > I feel like I am missing something. Adding other folks that were
> > > > > involved in the recent swapcache_prepare() synchronization thread.
> > > > >
> > > > > Anyway, I agree that at least in theory the data corruption could
> > > > > happen because of exclusive loads when skipping the swapcache, and we
> > > > > should fix that.
> > > > >
> > > > > Perhaps the right thing to do may be to write the folio again to zswap
> > > > > before unlocking it and before calling swapcache_clear(). The need for
> > > > > the write can be detected by checking if the folio is dirty, I think
> > > > > this will only be true if the folio was loaded from zswap.
> > > >
> > > > we only need to write when we know swap_read_folio() gets data
> > > > from zswap but not swapfile. is there a quick way to do this?
> > >
> > > The folio will be dirty when loaded from zswap, so we can check if the
> > > folio is dirty and write the page if fail after swap_read_folio().
> >
> > Is it actually a bug of swapin_walk_pmd_entry? it only checks the pte
> > before read_swap_cache_async. but when read_swap_cache_async
> > is blocked by swapcache_prepare, after it gets the swapcache_prepare
> > successfully, someone else should have already set the pte and freed
> > the swap slot even if this is not zswap?
>
> If someone freed the swap slot then swapcache_prepare() should fail,
> but the swap entry could have been recycled after we dropped the pte
> lock, right?
>
> Anyway, yeah, I think there might be a bug here irrelevant to zswap.
> >
> > static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
> >                 unsigned long end, struct mm_walk *walk)
> > {
> >         struct vm_area_struct *vma = walk->private;
> >         struct swap_iocb *splug = NULL;
> >         pte_t *ptep = NULL;
> >         spinlock_t *ptl;
> >         unsigned long addr;
> >
> >         for (addr = start; addr < end; addr += PAGE_SIZE) {
> >                 pte_t pte;
> >                 swp_entry_t entry;
> >                 struct folio *folio;
> >
> >                 if (!ptep++) {
> >                         ptep = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> >                         if (!ptep)
> >                                 break;
> >                 }
> >
> >                 pte = ptep_get(ptep);
> >                 if (!is_swap_pte(pte))
> >                         continue;
> >                 entry = pte_to_swp_entry(pte);
> >                 if (unlikely(non_swap_entry(entry)))
> >                         continue;
> >
> >                 pte_unmap_unlock(ptep, ptl);
> >                 ptep = NULL;
> >
> >                 folio = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
> >                                              vma, addr, &splug);
> >                 if (folio)
> >                         folio_put(folio);
> >         }
> >
> >         if (ptep)
> >                 pte_unmap_unlock(ptep, ptl);
> >         swap_read_unplug(splug);
> >         cond_resched();
> >
> >         return 0;
> > }
> >
> > I mean pte can become non-swap within read_swap_cache_async(),
> > so no matter if it is zswap, we have the bug.

checked again, probably still a zswap issue, as swapcache_prepare can detect
a real swap slot free :-)

                /*
                 * Swap entry may have been freed since our caller observed it.
                 */
                err = swapcache_prepare(entry);
                if (!err)
                        break;

zswap exclusive load isn't a real swap free.

But probably we have found the timing which causes the issue at least :-)
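
To write that timing down explicitly, here is my own simplified understanding
in the style of the 13ddaf26be32 changelog (hand-written, not copied from the
code, so please correct me if I got a step wrong):

/*
 * do_swap_page()                         later swapin of the same entry
 * swap count == 1, SWP_SYNCHRONOUS_IO,   (fault retry, child after fork,
 * so the swapcache is skipped            or MADV_WILLNEED)
 * -----------------------------------    --------------------------------
 * swapcache_prepare(entry)  (wins)
 * swap_read_folio()
 *   zswap_load(), exclusive load:
 *   the zswap copy is invalidated,
 *   the folio is now the only copy
 *   of the data and is marked dirty
 * pte_same() fails (e.g. the pte
 * changed under us), the folio is
 * discarded without the data being
 * written anywhere
 * swapcache_clear(si, entry)
 *                                        swapcache_prepare() succeeds,
 *                                        the swap slot still looks valid,
 *                                        but the backing device holds
 *                                        stale data -> corruption
 */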
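
And for the "write it back if dirty" idea above, the rough shape I have in
mind is something like the below. This is only an illustration, not a tested
patch: the helper name is made up, its placement at the end of the
swapcache-bypass path in do_swap_page() is an assumption, zswap_store() is
just a placeholder for "put the data somewhere durable", and failure handling
is ignored:

/*
 * Illustration only.  Assumed to be called right before the existing
 * swapcache_clear() in the swapcache-bypass path of do_swap_page().
 */
static void swap_bypass_fixup_exclusive_load(struct folio *folio,
                                             struct swap_info_struct *si,
                                             swp_entry_t entry)
{
        /*
         * Only a zswap exclusive load leaves the folio dirty here; a
         * plain read from the backing device does not, which is the
         * quick check mentioned above.
         */
        if (folio_test_dirty(folio)) {
                /*
                 * The folio is the only remaining copy of the data, so
                 * put it back before other swapins can reach the slot.
                 * Whether zswap_store(), a normal swap writeback, or
                 * re-adding the folio to the swapcache is the right tool
                 * (and what to do on failure) is the open question.
                 */
                zswap_store(folio);
        }
        swapcache_clear(si, entry);
}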