From: Zhongkun He <hezhongkun.hzk@bytedance.com>
Date: Sat, 23 Mar 2024 20:41:32 +0800
Subject: Re: [External] Re: [bug report] mm/zswap: memory corruption after zswap_load().
To: Chris Li
Cc: Barry Song <21cnbao@gmail.com>, Yosry Ahmed, Chengming Zhou, Johannes Weiner,
    Andrew Morton, linux-mm, wuyun.abel@bytedance.com, zhouchengming@bytedance.com,
    Nhat Pham, Kairui Song, Minchan Kim, David Hildenbrand, Ying
On Sat, Mar 23, 2024 at 6:49 PM Chris Li wrote:
>
> On Fri, Mar 22, 2024 at 5:35 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Sat, Mar 23, 2024 at 12:42 PM Yosry Ahmed wrote:
> > >
> > > On Fri, Mar 22, 2024 at 4:38 PM Barry Song <21cnbao@gmail.com> wrote:
> > > >
> > > > On Sat, Mar 23, 2024 at 12:35 PM Yosry Ahmed wrote:
> > > > >
> > > > > On Fri, Mar 22, 2024 at 4:32 PM Barry Song <21cnbao@gmail.com> wrote:
> > > > > >
> > > > > > On Sat, Mar 23, 2024 at 12:23 PM Yosry Ahmed wrote:
> > > > > > >
> > > > > > > On Fri, Mar 22, 2024 at 4:18 PM Barry Song <21cnbao@gmail.com> wrote:
> > > > > > > >
> > > > > > > > On Sat, Mar 23, 2024 at 12:09 PM Yosry Ahmed wrote:
> > > > > > > > >
> > > > > > > > > On Fri, Mar 22, 2024 at 4:04 PM Barry Song <21cnbao@gmail.com> wrote:
> > > > > > > > > >
> > > > > > > > > > On Sat, Mar 23, 2024 at 8:35 AM Yosry Ahmed wrote:
> > > > > > > > > > >
> > > > > > > > > > > On Thu, Mar 21, 2024 at 8:04 PM Zhongkun He
> > > > > > > > > > > wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > On Thu, Mar 21, 2024 at 5:29 PM Chengming Zhou wrote:
> > > > > > > > > > > > >
> > > > > > > > > > > > > On 2024/3/21 14:36, Zhongkun He wrote:
> > > > > > > > > > > > > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou wrote:
> > > > > > > > > > > > > >>
> > > > > > > > > > > > > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > > > > > > > > > > > > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
> > > > > > > > > > > > > >>> wrote:
> > > > > > > > > > > > > >>>>
> > > > > > > > > > > > > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > > > > > > > > > > > > >>>>> Hey folks,
> > > > > > > > > > > > > >>>>>
> > > > > > > > > > > > > >>>>> Recently, I tested zswap with memory reclaiming in the mainline
> > > > > > > > > > > > > >>>>> (6.8) and found a memory corruption issue related to exclusive loads.
> > > > > > > > > > > > > >>>>
> > > > > > > > > > > > > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > > > > > > > > > > > > >>>> This fix avoids concurrent swapin using the same swap entry.
> > > > > > > > > > > > > >>>>
> > > > > > > > > > > > > >>>
> > > > > > > > > > > > > >>> Yes, this fix avoids concurrent swapin from different CPUs, but the
> > > > > > > > > > > > > >>> reported issue occurs
> > > > > > > > > > > > > >>> on the same CPU.
> > > > > > > > > > > > > >>
> > > > > > > > > > > > > >> I think you may have misunderstood the race description in this fix changelog;
> > > > > > > > > > > > > >> the CPU0 and CPU1 just mean two concurrent threads, not two real CPUs.
> > > > > > > > > > > > > >>
> > > > > > > > > > > > > >> Could you verify if the problem still exists with this fix?
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > Yes, I'm sure the problem still exists with this patch.
> > > > > > > > > > > > > > There is some debug info, not mainline.
> > > > > > > > > > > > > >
> > > > > > > > > > > > > > bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s",
> > > > > > > > > > > > > > ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include
> > > > > > > > > > > > > > linux/mm_types.h
> > > > > > > > > > > > >
> > > > > > > > > > > > > Ok, this problem seems to only happen on SWP_SYNCHRONOUS_IO swap backends,
> > > > > > > > > > > > > which now include zram, ramdisk, pmem, nvdimm.
> > > > > > > > > > > >
> > > > > > > > > > > > Yes.
> > > > > > > > > > > >
> > > > > > > > > > > > > It may not be good to use zswap on these swap backends?
> > > > > > > > > > > > >
> > > > > > > > > > > > > The problem here is that the page fault handler tries to skip the swapcache to
> > > > > > > > > > > > > swap in the folio (swap entry count == 1), but then it can't install the folio
> > > > > > > > > > > > > into the pte since some changes happened, such as a concurrent fork of the entry.
> > > > > > > > > > > > >
> > > > > > > > > > > > The first page fault returned VM_FAULT_RETRY because
> > > > > > > > > > > > folio_lock_or_retry() failed.
> > > > > > > > > > >
> > > > > > > > > > > How so? The folio is newly allocated and not visible to any other
> > > > > > > > > > > threads or CPUs. swap_read_folio() unlocks it and then returns and we
> > > > > > > > > > > immediately try to lock it again with folio_lock_or_retry(). How does
> > > > > > > > > > > this fail?
> > > > > > > > > > >
> > > > > > > > > > > Let's go over what happens after swap_read_folio():
> > > > > > > > > > > - The 'if (!folio)' code block will be skipped.
> > > > > > > > > > > - folio_lock_or_retry() should succeed as I mentioned earlier.
> > > > > > > > > > > - The 'if (swapcache)' code block will be skipped.
> > > > > > > > > > > - The pte_same() check should succeed on first look because other
> > > > > > > > > > > concurrent faulting threads should be held off by the newly introduced
> > > > > > > > > > > swapcache_prepare() logic. But looking deeper I think this one may
> > > > > > > > > > > fail due to a concurrent MADV_WILLNEED.
> > > > > > > > > > > - The 'if (unlikely(!folio_test_uptodate(folio)))' part will be
> > > > > > > > > > > skipped because swap_read_folio() marks the folio up-to-date.
> > > > > > > > > > > - After that point there is no possible failure until we install the
> > > > > > > > > > > pte, at which point concurrent faults will fail on !pte_same() and
> > > > > > > > > > > retry.
> > > > > > > > > > >
> > > > > > > > > > > So the only failure I think is possible is the pte_same() check. I see
> > > > > > > > > > > how a concurrent MADV_WILLNEED could cause that check to fail. A
> > > > > > > > > > > concurrent MADV_WILLNEED will block on swapcache_prepare(), but once
> > > > > > > > > > > the fault resolves it will go ahead and read the folio again into the
> > > > > > > > > > > swapcache. It seems like we will end up with two copies of the same
> > > > > > > > > >
> > > > > > > > > > but zswap has freed the object when do_swap_page finishes swap_read_folio
> > > > > > > > > > due to the exclusive load feature of zswap?
> > > > > > > > > >
> > > > > > > > > > so WILLNEED will get corrupted data and put it into the swapcache.
> > > > > > > > > > some other concurrent, newly forked process might get the new data
> > > > > > > > > > from the swapcache WILLNEED puts it in, when the newly forked process
> > > > > > > > > > goes into do_swap_page.
> > > > > > > > >
> > > > > > > > > Oh, I was wondering how synchronization with WILLNEED happens without
> > > > > > > > > zswap. It seems like we could end up with two copies of the same folio
> > > > > > > > > and one of them will be leaked unless I am missing something.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > so very likely a new process is forked right after do_swap_page finishes
> > > > > > > > > > swap_read_folio and before swapcache_clear.
> > > > > > > > > >
> > > > > > > > > > > folio? Maybe this is harmless because the folio in the swapcache will
> > > > > > > > > > > never be used, but it is essentially leaked at that point, right?
> > > > > > > > > > >
> > > > > > > > > > > I feel like I am missing something. Adding other folks that were
> > > > > > > > > > > involved in the recent swapcache_prepare() synchronization thread.
> > > > > > > > > > >
> > > > > > > > > > > Anyway, I agree that at least in theory the data corruption could
> > > > > > > > > > > happen because of exclusive loads when skipping the swapcache, and we
> > > > > > > > > > > should fix that.
> > > > > > > > > > >
> > > > > > > > > > > Perhaps the right thing to do may be to write the folio again to zswap
> > > > > > > > > > > before unlocking it and before calling swapcache_clear(). The need for
> > > > > > > > > > > the write can be detected by checking if the folio is dirty, I think
> > > > > > > > > > > this will only be true if the folio was loaded from zswap.
> > > > > > > > > >
> > > > > > > > > > we only need to write when we know swap_read_folio() gets data
> > > > > > > > > > from zswap but not the swapfile. is there a quick way to do this?
> > > > > > > > >
> > > > > > > > > The folio will be dirty when loaded from zswap, so we can check if the
> > > > > > > > > folio is dirty and write the page back if so after swap_read_folio().
> > > > > > > >
> > > > > > > > Is it actually a bug in swapin_walk_pmd_entry? It only checks the pte
> > > > > > > > before read_swap_cache_async. But when read_swap_cache_async
> > > > > > > > is blocked by swapcache_prepare, after it gets the swapcache_prepare
> > > > > > > > successfully, someone else could have already set the pte and freed
> > > > > > > > the swap slot even if this is not zswap?
> > > > > > >
> > > > > > > If someone freed the swap slot then swapcache_prepare() should fail,
> > > > > > > but the swap entry could have been recycled after we dropped the pte
> > > > > > > lock, right?
> > > > > > >
> > > > > > > Anyway, yeah, I think there might be a bug here irrelevant to zswap.
> > > > > > >
> > > > > > >
> > > > > > > > static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
> > > > > > > >                 unsigned long end, struct mm_walk *walk)
> > > > > > > > {
> > > > > > > >         struct vm_area_struct *vma = walk->private;
> > > > > > > >         struct swap_iocb *splug = NULL;
> > > > > > > >         pte_t *ptep = NULL;
> > > > > > > >         spinlock_t *ptl;
> > > > > > > >         unsigned long addr;
> > > > > > > >
> > > > > > > >         for (addr = start; addr < end; addr += PAGE_SIZE) {
> > > > > > > >                 pte_t pte;
> > > > > > > >                 swp_entry_t entry;
> > > > > > > >                 struct folio *folio;
> > > > > > > >
> > > > > > > >                 if (!ptep++) {
> > > > > > > >                         ptep = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> > > > > > > >                         if (!ptep)
> > > > > > > >                                 break;
> > > > > > > >                 }
> > > > > > > >
> > > > > > > >                 pte = ptep_get(ptep);
> > > > > > > >                 if (!is_swap_pte(pte))
> > > > > > > >                         continue;
> > > > > > > >                 entry = pte_to_swp_entry(pte);
> > > > > > > >                 if (unlikely(non_swap_entry(entry)))
> > > > > > > >                         continue;
> > > > > > > >
> > > > > > > >                 pte_unmap_unlock(ptep, ptl);
> > > > > > > >                 ptep = NULL;
> > > > > > > >
> > > > > > > >                 folio = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
> > > > > > > >                                               vma, addr, &splug);
> > > > > > > >                 if (folio)
> > > > > > > >                         folio_put(folio);
> > > > > > > >         }
> > > > > > > >
> > > > > > > >         if (ptep)
> > > > > > > >                 pte_unmap_unlock(ptep, ptl);
> > > > > > > >         swap_read_unplug(splug);
> > > > > > > >         cond_resched();
> > > > > > > >
> > > > > > > >         return 0;
> > > > > > > > }
> > > > > > > >
> > > > > > > > I mean the pte can become non-swap within read_swap_cache_async(),
> > > > > > > > so no matter if it is zswap, we have the bug.
> > > > > >
> > > > > > checked again, probably still a zswap issue, as swapcache_prepare can detect
> > > > > > a real swap slot free :-)
> > > > > >
> > > > > >                 /*
> > > > > >                  * Swap entry may have been freed since our caller observed it.
> > > > > >                  */
> > > > > >                 err = swapcache_prepare(entry);
> > > > > >                 if (!err)
> > > > > >                         break;
> > > > > >
> > > > > >
> > > > > > zswap exclusive load isn't a real swap free.
> > > > > >
> > > > > > But probably we have found the timing which causes the issue, at least :-)
> > > > >
> > > > > The problem I was referring to is with the swapin fault path that
> > > > > skips the swapcache vs. MADV_WILLNEED. The fault path could swapin the
> > > > > page and skip the swapcache, and MADV_WILLNEED could swap it in again
> > > > > into the swapcache. We would end up with two copies of the folio.
> > > >
> > > > right. i feel like we have to re-check that the pte has not changed within
> > > > __read_swap_cache_async after swapcache_prepare succeeds
> > > > after being blocked for a while, as the previous entry could have
> > > > been freed and re-allocated by someone else - a completely
> > > > different process. then we would read other processes' data.
> > > >
> > >
> > > This is only a problem when we skip the swapcache during swapin.
> > > Otherwise the swapcache synchronizes this. I wonder how much
> > > skipping the swapcache buys us on recent kernels? This optimization was
> > > introduced a long time ago.
> >
> > Still performs quite well,
> > according to Kairui's data:
> > https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=13ddaf26be324a7f951891ecd9ccd04466d27458
> >
> > Before: 10934698 us
> > After: 11157121 us
> > Cached: 13155355 us (Dropping SWP_SYNCHRONOUS_IO flag)
> >
> > BTW, zram+zswap seems pointless from the very beginning; it seems a wrong
> > configuration for users. If this case is really happening, could we
> > simply fix it
> > by:
> >
> > diff --git a/mm/memory.c b/mm/memory.c
> > index b7cab8be8632..6742d1428373 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3999,7 +3999,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >                 swapcache = folio;
> >
> >         if (!folio) {
> > -               if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
> > +               if (!is_zswap_enabled() && data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>
> Because zswap_enabled can change at run time due to the delayed setup of zswap,
> this has a time-of-check to time-of-use issue.
> Maybe moving the check into zswap_store() is better.
> Something like this.
> Zhongkun, can you verify that with this change the bug goes away?
>
> Chris
>

Hi Chris, thanks for your analysis. There is a good fix from Johannes
and I have tested it.
https://lore.kernel.org/linux-mm/20240322234826.GA448621@cmpxchg.org/

>
> zswap: disable SWP_SYNCHRONOUS_IO in zswap_store
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index f04a75a36236..f40778adefa3 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1472,6 +1472,7 @@ bool zswap_store(struct folio *folio)
>         struct obj_cgroup *objcg = NULL;
>         struct mem_cgroup *memcg = NULL;
>         unsigned long max_pages, cur_pages;
> +       struct swap_info_struct *si = NULL;
>
>         VM_WARN_ON_ONCE(!folio_test_locked(folio));
>         VM_WARN_ON_ONCE(!folio_test_swapcache(folio));
> @@ -1483,6 +1484,18 @@ bool zswap_store(struct folio *folio)
>         if (!zswap_enabled)
>                 goto check_old;
>
> +       /* Prevent swapoff from happening to us. */
> +       si = get_swap_device(swp);
> +       if (si) {
> +               /*
> +                * SWP_SYNCHRONOUS_IO bypasses the swap cache, which is not
> +                * compatible with zswap exclusive loads.
> +                */
> +               if (data_race(si->flags & SWP_SYNCHRONOUS_IO))
> +                       si->flags &= ~SWP_SYNCHRONOUS_IO;
> +               put_swap_device(si);
> +       }
> +
>         /* Check cgroup limits */
>         objcg = get_obj_cgroup_from_folio(folio);
>         if (objcg && !obj_cgroup_may_zswap(objcg)) {
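
For anyone who wants to poke at this scenario, below is a minimal userspace
sketch of the kind of workload discussed in this thread: pattern-filled
anonymous memory is pushed out to swap, and then regular page faults,
MADV_WILLNEED and a fork race against each other while every copy of the data
is verified. It assumes zswap is enabled on top of a SWP_SYNCHRONOUS_IO
backend such as zram (configured separately), uses only generic
mmap()/madvise()/fork() calls, is not the reporter's actual test case, and
does not deterministically trigger the race - it only exercises the same
paths, so it needs to be run repeatedly.

/*
 * Stress sketch: write a pattern, push it to (z)swap with MADV_PAGEOUT,
 * then race regular faults, MADV_WILLNEED and fork while checking the data.
 */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21		/* reclaim these pages; Linux >= 5.4 */
#endif

#define NPAGES 4096UL

static unsigned char *buf;
static size_t len;

/* Race MADV_WILLNEED swap-ins against the regular fault path. */
static void *willneed_thread(void *arg)
{
	(void)arg;
	for (int i = 0; i < 100; i++)
		madvise(buf, len, MADV_WILLNEED);
	return NULL;
}

/* Fault every page back in and verify the pattern written before swap-out. */
static void check_pattern(const char *who)
{
	for (size_t i = 0; i < len; i++) {
		if (buf[i] != (unsigned char)(i & 0xff)) {
			fprintf(stderr, "%s: corruption at offset %zu\n", who, i);
			exit(1);
		}
	}
}

int main(void)
{
	pthread_t tid;
	pid_t pid;

	len = NPAGES * (size_t)sysconf(_SC_PAGESIZE);
	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Fill with a known pattern, then push the pages out to swap. */
	for (size_t i = 0; i < len; i++)
		buf[i] = (unsigned char)(i & 0xff);
	if (madvise(buf, len, MADV_PAGEOUT))
		perror("madvise(MADV_PAGEOUT)");

	pthread_create(&tid, NULL, willneed_thread, NULL);

	/* Fork while swap-ins may be in flight so swap counts change underneath. */
	pid = fork();
	if (pid == 0) {
		check_pattern("child");
		_exit(0);
	}

	check_pattern("parent");
	pthread_join(tid, NULL);

	if (pid > 0) {
		int status;
		waitpid(pid, &status, 0);
		if (!WIFEXITED(status) || WEXITSTATUS(status))
			return 1;
	}
	puts("no corruption observed in this run");
	return 0;
}

Build with something like "gcc -O2 -pthread zswap_willneed_stress.c" (the file
name is arbitrary). A failing pattern check in either the parent or the child
would point at the kind of corruption reported at the top of this thread.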