From: Chris Li <chrisl@kernel.org>
Date: Sat, 23 Mar 2024 03:48:36 -0700
Subject: Re: [External] Re: [bug report] mm/zswap: memory corruption after zswap_load().
To: Barry Song <21cnbao@gmail.com>
Cc: Yosry Ahmed, Zhongkun He, Chengming Zhou, Johannes Weiner, Andrew Morton, linux-mm, wuyun.abel@bytedance.com, zhouchengming@bytedance.com, Nhat Pham, Kairui Song, Minchan Kim, David Hildenbrand, Ying
MIME-Version: 1.0
References: <01b0b8e8-af1d-4fbe-951e-278e882283fd@linux.dev>
Content-Type: text/plain; charset="UTF-8"
On Fri, Mar 22, 2024 at 5:35 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Sat, Mar 23, 2024 at 12:42 PM Yosry Ahmed wrote:
> >
> > On Fri, Mar 22, 2024 at 4:38 PM Barry Song <21cnbao@gmail.com> wrote:
> > >
> > > On Sat, Mar 23, 2024 at 12:35 PM Yosry Ahmed wrote:
> > > >
> > > > On Fri, Mar 22, 2024 at 4:32 PM Barry
Song <21cnbao@gmail.com> wrote:
> > > > >
> > > > > On Sat, Mar 23, 2024 at 12:23 PM Yosry Ahmed wrote:
> > > > > >
> > > > > > On Fri, Mar 22, 2024 at 4:18 PM Barry Song <21cnbao@gmail.com> wrote:
> > > > > > >
> > > > > > > On Sat, Mar 23, 2024 at 12:09 PM Yosry Ahmed wrote:
> > > > > > > >
> > > > > > > > On Fri, Mar 22, 2024 at 4:04 PM Barry Song <21cnbao@gmail.com> wrote:
> > > > > > > > >
> > > > > > > > > On Sat, Mar 23, 2024 at 8:35 AM Yosry Ahmed wrote:
> > > > > > > > > >
> > > > > > > > > > On Thu, Mar 21, 2024 at 8:04 PM Zhongkun He wrote:
> > > > > > > > > > >
> > > > > > > > > > > On Thu, Mar 21, 2024 at 5:29 PM Chengming Zhou wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > On 2024/3/21 14:36, Zhongkun He wrote:
> > > > > > > > > > > > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou wrote:
> > > > > > > > > > > > >>
> > > > > > > > > > > > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > > > > > > > > > > > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
> > > > > > > > > > > > >>> wrote:
> > > > > > > > > > > > >>>>
> > > > > > > > > > > > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > > > > > > > > > > > >>>>> Hey folks,
> > > > > > > > > > > > >>>>>
> > > > > > > > > > > > >>>>> Recently, I tested zswap with memory reclaiming in the mainline
> > > > > > > > > > > > >>>>> (6.8) and found a memory corruption issue related to exclusive loads.
> > > > > > > > > > > > >>>>
> > > > > > > > > > > > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > > > > > > > > > > > >>>> This fix avoids concurrent swapin using the same swap entry.
> > > > > > > > > > > > >>>>
> > > > > > > > > > > > >>> Yes, this fix avoids concurrent swapin from different CPUs, but the
> > > > > > > > > > > > >>> reported issue occurs
> > > > > > > > > > > > >>> on the same cpu.
> > > > > > > > > > > > >>
> > > > > > > > > > > > >> I think you may misunderstand the race description in this fix changelog;
> > > > > > > > > > > > >> the CPU0 and CPU1 just mean two concurrent threads, not two real CPUs.
> > > > > > > > > > > > >>
> > > > > > > > > > > > >> Could you verify if the problem still exists with this fix?
> > > > > > > > > > > > >
> > > > > > > > > > > > > Yes, I'm sure the problem still exists with this patch.
> > > > > > > > > > > > > Here is some debug info, not mainline:
> > > > > > > > > > > > >
> > > > > > > > > > > > > bpftrace -e 'k:swap_readpage {printf("%lld, %lld, %ld, %ld, %ld\n%s",
> > > > > > > > > > > > > ((struct page *)arg0)->private, nsecs, tid, pid, cpu, kstack)}' --include
> > > > > > > > > > > > > linux/mm_types.h
> > > > > > > > > > > >
> > > > > > > > > > > > Ok, this problem seems to only happen on SWP_SYNCHRONOUS_IO swap backends,
> > > > > > > > > > > > which now include zram, ramdisk, pmem, nvdimm.
> > > > > > > > > > >
> > > > > > > > > > > Yes.
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > Maybe it is not good to use zswap on these swap backends?
> > > > > > > > > > > >
> > > > > > > > > > > > The problem here is that the page fault handler tries to skip the swapcache to
> > > > > > > > > > > > swapin the folio (swap entry count == 1), but then it can't install the folio
> > > > > > > > > > > > into the pte entry since some changes happened, such as a concurrent fork of the entry.
> > > > > > > > > > >
> > > > > > > > > > > The first page fault returned VM_FAULT_RETRY because
> > > > > > > > > > > folio_lock_or_retry() failed.
> > > > > > > > > >
> > > > > > > > > > How so? The folio is newly allocated and not visible to any other
> > > > > > > > > > threads or CPUs. swap_read_folio() unlocks it and then returns and we
> > > > > > > > > > immediately try to lock it again with folio_lock_or_retry(). How does
> > > > > > > > > > this fail?
> > > > > > > > > >
> > > > > > > > > > Let's go over what happens after swap_read_folio():
> > > > > > > > > > - The 'if (!folio)' code block will be skipped.
> > > > > > > > > > - folio_lock_or_retry() should succeed as I mentioned earlier.
> > > > > > > > > > - The 'if (swapcache)' code block will be skipped.
> > > > > > > > > > - The pte_same() check should succeed on first look because other
> > > > > > > > > > concurrent faulting threads should be held off by the newly introduced
> > > > > > > > > > swapcache_prepare() logic. But looking deeper I think this one may
> > > > > > > > > > fail due to a concurrent MADV_WILLNEED.
> > > > > > > > > > - The 'if (unlikely(!folio_test_uptodate(folio)))' part will be
> > > > > > > > > > skipped because swap_read_folio() marks the folio up-to-date.
> > > > > > > > > > - After that point there is no possible failure until we install the
> > > > > > > > > > pte, at which point concurrent faults will fail on !pte_same() and
> > > > > > > > > > retry.
> > > > > > > > > >
> > > > > > > > > > So the only failure I think is possible is the pte_same() check. I see
> > > > > > > > > > how a concurrent MADV_WILLNEED could cause that check to fail. A
> > > > > > > > > > concurrent MADV_WILLNEED will block on swapcache_prepare(), but once
> > > > > > > > > > the fault resolves it will go ahead and read the folio again into the
> > > > > > > > > > swapcache. It seems like we will end up with two copies of the same
> > > > > > > > >
> > > > > > > > > but zswap has freed the object when do_swap_page finishes swap_read_folio,
> > > > > > > > > due to the exclusive load feature of zswap?
> > > > > > > > >
> > > > > > > > > so WILLNEED will get corrupted data and put it into the swapcache.
> > > > > > > > > some other concurrently forked process might get the new data
> > > > > > > > > from the swapcache WILLNEED puts there when the newly forked process
> > > > > > > > > goes into do_swap_page.
> > > > > > > >
> > > > > > > > Oh I was wondering how synchronization with WILLNEED happens without
> > > > > > > > zswap. It seems like we could end up with two copies of the same folio
> > > > > > > > and one of them will be leaked unless I am missing something.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > so very likely a new process is forked right after do_swap_page finishes
> > > > > > > > > swap_read_folio and before swapcache_clear.
> > > > > > > > >
> > > > > > > > > > folio? Maybe this is harmless because the folio in the swapcache will
> > > > > > > > > > never be used, but it is essentially leaked at that point, right?
> > > > > > > > > >
> > > > > > > > > > I feel like I am missing something. Adding other folks that were
> > > > > > > > > > involved in the recent swapcache_prepare() synchronization thread.
> > > > > > > > > >
> > > > > > > > > > Anyway, I agree that at least in theory the data corruption could
> > > > > > > > > > happen because of exclusive loads when skipping the swapcache, and we
> > > > > > > > > > should fix that.
> > > > > > > > > >
> > > > > > > > > > Perhaps the right thing to do may be to write the folio again to zswap
> > > > > > > > > > before unlocking it and before calling swapcache_clear(). The need for
> > > > > > > > > > the write can be detected by checking if the folio is dirty; I think
> > > > > > > > > > this will only be true if the folio was loaded from zswap.
> > > > > > > > >
> > > > > > > > > we only need to write when we know swap_read_folio() gets data
> > > > > > > > > from zswap and not the swapfile. is there a quick way to do this?
> > > > > > > >
> > > > > > > > The folio will be dirty when loaded from zswap, so we can check if the
> > > > > > > > folio is dirty and write the page back if that fails after swap_read_folio().
> > > > > > >
> > > > > > > Is it actually a bug in swapin_walk_pmd_entry? it only checks the pte
> > > > > > > before read_swap_cache_async.
> > > > > > > but when read_swap_cache_async
> > > > > > > is blocked by swapcache_prepare, after it gets the swapcache_prepare
> > > > > > > successfully, someone else should have already set the pte and freed
> > > > > > > the swap slot even if this is not zswap?
> > > > > >
> > > > > > If someone freed the swap slot then swapcache_prepare() should fail,
> > > > > > but the swap entry could have been recycled after we dropped the pte
> > > > > > lock, right?
> > > > > >
> > > > > > Anyway, yeah, I think there might be a bug here irrelevant to zswap.
> > > > > >
> > > > > > >
> > > > > > > static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
> > > > > > >                 unsigned long end, struct mm_walk *walk)
> > > > > > > {
> > > > > > >         struct vm_area_struct *vma = walk->private;
> > > > > > >         struct swap_iocb *splug = NULL;
> > > > > > >         pte_t *ptep = NULL;
> > > > > > >         spinlock_t *ptl;
> > > > > > >         unsigned long addr;
> > > > > > >
> > > > > > >         for (addr = start; addr < end; addr += PAGE_SIZE) {
> > > > > > >                 pte_t pte;
> > > > > > >                 swp_entry_t entry;
> > > > > > >                 struct folio *folio;
> > > > > > >
> > > > > > >                 if (!ptep++) {
> > > > > > >                         ptep = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> > > > > > >                         if (!ptep)
> > > > > > >                                 break;
> > > > > > >                 }
> > > > > > >
> > > > > > >                 pte = ptep_get(ptep);
> > > > > > >                 if (!is_swap_pte(pte))
> > > > > > >                         continue;
> > > > > > >                 entry = pte_to_swp_entry(pte);
> > > > > > >                 if (unlikely(non_swap_entry(entry)))
> > > > > > >                         continue;
> > > > > > >
> > > > > > >                 pte_unmap_unlock(ptep, ptl);
> > > > > > >                 ptep = NULL;
> > > > > > >
> > > > > > >                 folio = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
> > > > > > >                                               vma, addr, &splug);
> > > > > > >                 if (folio)
> > > > > > >                         folio_put(folio);
> > > > > > >         }
> > > > > > >
> > > > > > >         if (ptep)
> > > > > > >                 pte_unmap_unlock(ptep, ptl);
> > > > > > >         swap_read_unplug(splug);
> > > > > > >         cond_resched();
> > > > > > >
> > > > > > >         return 0;
> > > > > > > }
> > > > > > >
> > > > > > > I mean the pte can become non-swap within read_swap_cache_async(),
> > > > > > > so no matter if it is zswap, we have the bug.
> > > > >
> > > > > checked again, probably still a zswap issue, as swapcache_prepare can detect
> > > > > a real swap slot free :-)
> > > > >
> > > > >                 /*
> > > > >                  * Swap entry may have been freed since our caller observed it.
> > > > >                  */
> > > > >                 err = swapcache_prepare(entry);
> > > > >                 if (!err)
> > > > >                         break;
> > > > >
> > > > > zswap exclusive load isn't a real swap free.
> > > > >
> > > > > But probably we have found the timing which causes the issue at least :-)
> > > >
> > > > The problem I was referring to is with the swapin fault path that
> > > > skips the swapcache vs. MADV_WILLNEED. The fault path could swapin the
> > > > page and skip the swapcache, and MADV_WILLNEED could swap it in again
> > > > into the swapcache. We would end up with two copies of the folio.
> > >
> > > right. i feel like we have to re-check that the pte has not changed within
> > > __read_swap_cache_async after swapcache_prepare succeeds
> > > after being blocked for a while, as the previous entry could have
> > > been freed and re-allocated by someone else - a completely
> > > different process. then we read other processes' data.
> >
> > This is only a problem when we skip the swapcache during swapin.
> > Otherwise the swapcache synchronizes this. I wonder how much
> > skipping the swapcache buys us on recent kernels? This optimization was
> > introduced a long time ago.
>
> Still performs quite well, according to Kairui's data:
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=13ddaf26be324a7f951891ecd9ccd04466d27458
>
> Before: 10934698 us
> After: 11157121 us
> Cached: 13155355 us (Dropping SWP_SYNCHRONOUS_IO flag)
>
> BTW, zram+zswap seems pointless from the very beginning. it seems a wrong
> configuration for users.
> if this case is really happening, could we
> simply fix it
> by:
>
> diff --git a/mm/memory.c b/mm/memory.c
> index b7cab8be8632..6742d1428373 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3999,7 +3999,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>                 swapcache = folio;
>
>                 if (!folio) {
> -                       if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
> +                       if (!is_zswap_enabled() && data_race(si->flags &
> SWP_SYNCHRONOUS_IO) &&

Because zswap_enabled can change at run time due to the delayed setup of zswap,
this has a time-of-check to time-of-use issue. Maybe moving the check into
zswap_store() is better. Something like this.

Zhongkun, can you verify that with this change the bug goes away?

Chris

zswap: disable SWP_SYNCHRONOUS_IO in zswap_store

diff --git a/mm/zswap.c b/mm/zswap.c
index f04a75a36236..f40778adefa3 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1472,6 +1472,7 @@ bool zswap_store(struct folio *folio)
        struct obj_cgroup *objcg = NULL;
        struct mem_cgroup *memcg = NULL;
        unsigned long max_pages, cur_pages;
+       struct swap_info_struct *si = NULL;

        VM_WARN_ON_ONCE(!folio_test_locked(folio));
        VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
@@ -1483,6 +1484,18 @@ bool zswap_store(struct folio *folio)
        if (!zswap_enabled)
                goto check_old;

+       /* Prevent swapoff from happening to us. */
+       si = get_swap_device(swp);
+       if (si) {
+               /*
+                * SWP_SYNCHRONOUS_IO bypasses the swap cache, which is
+                * not compatible with zswap exclusive loads.
+                */
+               if (data_race(si->flags & SWP_SYNCHRONOUS_IO))
+                       si->flags &= ~SWP_SYNCHRONOUS_IO;
+               put_swap_device(si);
+       }
+
        /* Check cgroup limits */
        objcg = get_obj_cgroup_from_folio(folio);
        if (objcg && !obj_cgroup_may_zswap(objcg)) {