From: Barry Song <21cnbao@gmail.com>
Date: Sat, 23 Mar 2024 13:34:49 +1300
Subject: Re: [External] Re: [bug report] mm/zswap :memory corruption after zswap_load().
To: Yosry Ahmed
Cc: Zhongkun He, Chengming Zhou, Johannes Weiner, Andrew Morton, linux-mm,
	wuyun.abel@bytedance.com, zhouchengming@bytedance.com, Nhat Pham,
	Kairui Song, Minchan Kim, David Hildenbrand, Chris Li, Ying

On Sat, Mar 23, 2024 at 12:42 PM Yosry Ahmed wrote:
>
> On Fri, Mar 22, 2024 at 4:38 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Sat, Mar 23, 2024 at 12:35 PM Yosry Ahmed wrote:
> > >
> > > On Fri, Mar 22, 2024 at 4:32 PM Barry Song <21cnbao@gmail.com> wrote:
> > > >
> > > > On Sat, Mar 23, 2024 at 12:23 PM Yosry Ahmed wrote:
> > > > >
> > > > > On Fri, Mar 22, 2024 at 4:18 PM Barry Song <21cnbao@gmail.com> wrote:
> > > > > >
> > > > > > On Sat, Mar 23, 2024 at 12:09 PM Yosry Ahmed wrote:
> > > > > > >
> > > > > > > On Fri, Mar 22, 2024 at 4:04 PM Barry Song <21cnbao@gmail.com> wrote:
> > > > > > > >
> > > > > > > > On Sat, Mar 23, 2024 at 8:35 AM Yosry Ahmed wrote:
> > > > > > > > >
> > > > > > > > > On Thu, Mar 21, 2024 at 8:04 PM Zhongkun He wrote:
> > > > > > > > > >
> > > > > > > > > > On Thu, Mar 21, 2024 at 5:29 PM Chengming Zhou wrote:
> > > > > > > > > > >
> > > > > > > > > > > On 2024/3/21 14:36, Zhongkun He wrote:
> > > > > > > > > > > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou wrote:
> > > > > > > > > > > >>
> > > > > > > > > > > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > > > > > > > > > > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou wrote:
> > > > > > > > > > > >>>>
> > > > > > > > > > > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > > > > > > > > > > >>>>> Hey folks,
> > > > > > > > > > > >>>>>
> > > > > > > > > > > >>>>> Recently, I tested zswap with memory reclaiming on the mainline
> > > > > > > > > > > >>>>> kernel (6.8) and found a memory corruption issue related to
> > > > > > > > > > > >>>>> exclusive loads.
> > > > > > > > > > > >>>>
> > > > > > > > > > > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when
> > > > > > > > > > > >>>> skipping swapcache")
> > > > > > > > > > > >>>> This fix avoids concurrent swapin using the same swap entry.
> > > > > > > > > > > >>>>
> > > > > > > > > > > >>>
> > > > > > > > > > > >>> Yes, this fix avoids concurrent swapin from different CPUs, but
> > > > > > > > > > > >>> the reported issue occurs on the same CPU.
> > > > > > > > > > > >>
> > > > > > > > > > > >> I think you may misunderstand the race description in this fix's
> > > > > > > > > > > >> changelog; CPU0 and CPU1 just mean two concurrent threads, not two
> > > > > > > > > > > >> real CPUs.
> > > > > > > > > > > >>
> > > > > > > > > > > >> Could you verify if the problem still exists with this fix?
> > > > > > > > > > > >
> > > > > > > > > > > > Yes, I'm sure the problem still exists with this patch.
> > > > > > > > > > > > Here is some debug info (not mainline):
> > > > > > > > > > > >
> > > > > > > > > > > > bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s",
> > > > > > > > > > > > ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include
> > > > > > > > > > > > linux/mm_types.h
> > > > > > > > > > >
> > > > > > > > > > > Ok, this problem seems to only happen on SWP_SYNCHRONOUS_IO swap
> > > > > > > > > > > backends, which now include zram, ramdisk, pmem, nvdimm.
> > > > > > > > > >
> > > > > > > > > > Yes.
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > It may not be good to use zswap on these swap backends?
> > > > > > > > > > >
> > > > > > > > > > > The problem here is that the page fault handler tries to skip the
> > > > > > > > > > > swapcache to swap in the folio (swap entry count == 1), but then it
> > > > > > > > > > > can't install the folio into the pte since something changed in the
> > > > > > > > > > > meantime, such as a concurrent fork of the entry.
> > > > > > > > > >
> > > > > > > > > > The first page fault returned VM_FAULT_RETRY because
> > > > > > > > > > folio_lock_or_retry() failed.
> > > > > > > > >
> > > > > > > > > How so? The folio is newly allocated and not visible to any other
> > > > > > > > > threads or CPUs. swap_read_folio() unlocks it and then returns and we
> > > > > > > > > immediately try to lock it again with folio_lock_or_retry(). How does
> > > > > > > > > this fail?
> > > > > > > > >
> > > > > > > > > Let's go over what happens after swap_read_folio():
> > > > > > > > > - The 'if (!folio)' code block will be skipped.
> > > > > > > > > - folio_lock_or_retry() should succeed as I mentioned earlier.
> > > > > > > > > - The 'if (swapcache)' code block will be skipped.
> > > > > > > > > - The pte_same() check should succeed on first look because other
> > > > > > > > > concurrent faulting threads should be held off by the newly introduced
> > > > > > > > > swapcache_prepare() logic. But looking deeper I think this one may
> > > > > > > > > fail due to a concurrent MADV_WILLNEED.
> > > > > > > > > - The 'if (unlikely(!folio_test_uptodate(folio)))' part will be
> > > > > > > > > skipped because swap_read_folio() marks the folio up-to-date.
> > > > > > > > > - After that point there is no possible failure until we install the
> > > > > > > > > pte, at which point concurrent faults will fail on !pte_same() and
> > > > > > > > > retry.
> > > > > > > > >
> > > > > > > > > So the only failure I think is possible is the pte_same() check. I see
> > > > > > > > > how a concurrent MADV_WILLNEED could cause that check to fail. A
> > > > > > > > > concurrent MADV_WILLNEED will block on swapcache_prepare(), but once
> > > > > > > > > the fault resolves it will go ahead and read the folio again into the
> > > > > > > > > swapcache. It seems like we will end up with two copies of the same
> > > > > > > >
> > > > > > > > but zswap has freed the object by the time do_swap_page finishes
> > > > > > > > swap_read_folio, due to zswap's exclusive load feature?
> > > > > > > >
> > > > > > > > so WILLNEED will get corrupted data and put it into the swapcache.
> > > > > > > > some other concurrently forked process might then get that data from
> > > > > > > > the swapcache entry WILLNEED populated when the newly forked process
> > > > > > > > goes into do_swap_page.
> > > > > > >
> > > > > > > Oh I was wondering how synchronization with WILLNEED happens without
> > > > > > > zswap.
> > > > > > > It seems like we could end up with two copies of the same folio
> > > > > > > and one of them will be leaked unless I am missing something.
> > > > > > >
> > > > > > > >
> > > > > > > > so very likely a new process is forked right after do_swap_page
> > > > > > > > finishes swap_read_folio and before swapcache_clear.
> > > > > > > >
> > > > > > > > > folio? Maybe this is harmless because the folio in the swapcache will
> > > > > > > > > never be used, but it is essentially leaked at that point, right?
> > > > > > > > >
> > > > > > > > > I feel like I am missing something. Adding other folks that were
> > > > > > > > > involved in the recent swapcache_prepare() synchronization thread.
> > > > > > > > >
> > > > > > > > > Anyway, I agree that at least in theory the data corruption could
> > > > > > > > > happen because of exclusive loads when skipping the swapcache, and we
> > > > > > > > > should fix that.
> > > > > > > > >
> > > > > > > > > Perhaps the right thing to do may be to write the folio again to zswap
> > > > > > > > > before unlocking it and before calling swapcache_clear(). The need for
> > > > > > > > > the write can be detected by checking if the folio is dirty, I think
> > > > > > > > > this will only be true if the folio was loaded from zswap.
> > > > > > > >
> > > > > > > > we only need to write when we know swap_read_folio() gets data
> > > > > > > > from zswap but not the swapfile. is there a quick way to do this?
> > > > > > >
> > > > > > > The folio will be dirty when loaded from zswap, so we can check if the
> > > > > > > folio is dirty after swap_read_folio() and write the page back if so.
> > > > > >
> > > > > > Is it actually a bug in swapin_walk_pmd_entry? it only checks the pte
> > > > > > before read_swap_cache_async. but when read_swap_cache_async is
> > > > > > blocked by swapcache_prepare, after it finally gets swapcache_prepare
> > > > > > successfully, someone else could have already set the pte and freed
> > > > > > the swap slot, even if this is not zswap?
> > > > >
> > > > > If someone freed the swap slot then swapcache_prepare() should fail,
> > > > > but the swap entry could have been recycled after we dropped the pte
> > > > > lock, right?
> > > > >
> > > > > Anyway, yeah, I think there might be a bug here irrelevant to zswap.
> > > > > >
> > > > > > static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
> > > > > >                 unsigned long end, struct mm_walk *walk)
> > > > > > {
> > > > > >         struct vm_area_struct *vma = walk->private;
> > > > > >         struct swap_iocb *splug = NULL;
> > > > > >         pte_t *ptep = NULL;
> > > > > >         spinlock_t *ptl;
> > > > > >         unsigned long addr;
> > > > > >
> > > > > >         for (addr = start; addr < end; addr += PAGE_SIZE) {
> > > > > >                 pte_t pte;
> > > > > >                 swp_entry_t entry;
> > > > > >                 struct folio *folio;
> > > > > >
> > > > > >                 if (!ptep++) {
> > > > > >                         ptep = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> > > > > >                         if (!ptep)
> > > > > >                                 break;
> > > > > >                 }
> > > > > >
> > > > > >                 pte = ptep_get(ptep);
> > > > > >                 if (!is_swap_pte(pte))
> > > > > >                         continue;
> > > > > >                 entry = pte_to_swp_entry(pte);
> > > > > >                 if (unlikely(non_swap_entry(entry)))
> > > > > >                         continue;
> > > > > >
> > > > > >                 pte_unmap_unlock(ptep, ptl);
> > > > > >                 ptep = NULL;
> > > > > >
> > > > > >                 folio = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
> > > > > >                                               vma, addr, &splug);
> > > > > >                 if (folio)
> > > > > >                         folio_put(folio);
> > > > > >         }
> > > > > >
> > > > > >         if (ptep)
> > > > > >                 pte_unmap_unlock(ptep, ptl);
> > > > > >         swap_read_unplug(splug);
> > > > > >         cond_resched();
> > > > > >
> > > > > >         return 0;
> > > > > > }
> > > > > >
> > > > > > I mean the pte can become non-swap within read_swap_cache_async(),
> > > > > > so no matter if it is zswap, we have the bug.
> > > >
> > > > checked again, probably still a zswap issue, as swapcache_prepare can
> > > > detect a real swap slot free :-)
> > > >
> > > >                 /*
> > > >                  * Swap entry may have been freed since our caller observed it.
> > > >                  */
> > > >                 err = swapcache_prepare(entry);
> > > >                 if (!err)
> > > >                         break;
> > > >
> > > > zswap exclusive load isn't a real swap free.
> > > >
> > > > But probably we have found the timing which causes the issue at least :-)
> > >
> > > The problem I was referring to is with the swapin fault path that
> > > skips the swapcache vs. MADV_WILLNEED. The fault path could swapin the
> > > page and skip the swapcache, and MADV_WILLNEED could swap it in again
> > > into the swapcache. We would end up with two copies of the folio.
> >
> > right. i feel like we have to re-check that the pte is not changed within
> > __read_swap_cache_async after swapcache_prepare succeeds after being
> > blocked for a while, as the previous entry could have been freed and
> > re-allocated by someone else - a completely different process. then we
> > would read other processes' data.
>
> This is only a problem when we skip the swapcache during swapin.
> Otherwise the swapcache synchronizes this. I wonder how much skipping
> the swapcache buys us on recent kernels? This optimization was
> introduced a long time ago.

It still performs quite well, according to Kairui's data:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=13ddaf26be324a7f951891ecd9ccd04466d27458

Before: 10934698 us
After:  11157121 us
Cached: 13155355 us (dropping the SWP_SYNCHRONOUS_IO flag)

BTW, zram + zswap seems pointless from the very beginning; it looks like a
wrong configuration for users.

If this case is really happening, could we simply fix it by:

diff --git a/mm/memory.c b/mm/memory.c
index b7cab8be8632..6742d1428373 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3999,7 +3999,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	swapcache = folio;
 
 	if (!folio) {
-		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
+		if (!is_zswap_enabled() && data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
 			/*
 			 * Prevent parallel swapin from proceeding with
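
For comparison, the "re-write if dirty" idea discussed above could look
roughly like the sketch below. This is only an illustration under stated
assumptions, not a tested patch: it assumes the folio comes back dirty from
swap_read_folio() only when zswap served the read via an exclusive load, and
swap_rewrite_folio() is a hypothetical helper named purely for illustration
(not an existing kernel function), standing in for whatever mechanism would
re-store the data before swapcache_clear() lets concurrent swapins of the
same entry proceed.

	/*
	 * Illustrative sketch only: in the SWP_SYNCHRONOUS_IO swapcache-bypass
	 * path of do_swap_page(), detect that swap_read_folio() was served by a
	 * zswap exclusive load (the folio is dirty in that case) and preserve
	 * the data before swapcache_clear() unblocks concurrent swapins.
	 */
	if (unlikely(folio_test_dirty(folio))) {
		/*
		 * The compressed copy was invalidated by the exclusive load.
		 * Hypothetical helper: push the data back (e.g. into zswap) so a
		 * concurrent MADV_WILLNEED swapin of the same entry does not read
		 * stale data from the backing device.
		 */
		swap_rewrite_folio(folio, entry);
	}
	swapcache_clear(si, entry);

The trade-off between the two directions is the one already visible in this
thread: re-writing on load keeps the SWP_SYNCHRONOUS_IO fast path but adds
work to every exclusive load, while disabling the swapcache bypass when zswap
is enabled (as in the diff above) is simpler but gives up the optimization
measured in Kairui's numbers.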