From: Yosry Ahmed <yosryahmed@google.com>
Date: Fri, 22 Mar 2024 16:22:35 -0700
Subject: Re: [External] Re: [bug report] mm/zswap :memory corruption after zswap_load().
To: Barry Song <21cnbao@gmail.com>
Cc: Zhongkun He, Chengming Zhou, Johannes Weiner, Andrew Morton,
 linux-mm <linux-mm@kvack.org>, wuyun.abel@bytedance.com, zhouchengming@bytedance.com,
 Nhat Pham, Kairui Song, Minchan Kim, David Hildenbrand, Chris Li, Ying
References: <01b0b8e8-af1d-4fbe-951e-278e882283fd@linux.dev>

On Fri, Mar 22, 2024 at 4:18 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Sat, Mar 23, 2024 at 12:09 PM Yosry Ahmed wrote:
> >
> > On Fri, Mar 22, 2024 at 4:04 PM Barry Song <21cnbao@gmail.com> wrote:
> > >
> > > On Sat, Mar 23, 2024 at 8:35 AM Yosry Ahmed wrote:
> > > >
> > > > On Thu, Mar 21, 2024 at 8:04 PM Zhongkun He wrote:
> > > > >
> > > > > On Thu, Mar 21, 2024 at 5:29 PM Chengming Zhou wrote:
> > > > > >
> > > > > > On 2024/3/21 14:36, Zhongkun He wrote:
> > > > > > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou wrote:
> > > > > > >>
> > > > > > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > > > > > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou wrote:
> > > > > > >>>>
> > > > > > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > > > > > >>>>> Hey folks,
> > > > > > >>>>>
> > > > > > >>>>> Recently, I tested zswap with memory reclaim on the mainline
> > > > > > >>>>> kernel (6.8) and found a memory corruption issue related to
> > > > > > >>>>> exclusive loads.
> > > > > > >>>>
> > > > > > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > > > > > >>>> This fix avoids concurrent swapin using the same swap entry.
> > > > > > >>>>
> > > > > > >>>
> > > > > > >>> Yes, this fix avoids concurrent swapin from different CPUs, but the
> > > > > > >>> reported issue occurs on the same CPU.
> > > > > > >>
> > > > > > >> I think you may have misunderstood the race description in that fix's
> > > > > > >> changelog: CPU0 and CPU1 just mean two concurrent threads, not two
> > > > > > >> physical CPUs.
> > > > > > >>
> > > > > > >> Could you verify whether the problem still exists with this fix?
> > > > > > >
> > > > > > > Yes, I'm sure the problem still exists with this patch.
> > > > > > > Here is some debug info (not mainline):
> > > > > > >
> > > > > > > bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s",
> > > > > > > ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include
> > > > > > > linux/mm_types.h
> > > > > >
> > > > > > Ok, this problem seems to happen only on SWP_SYNCHRONOUS_IO swap backends,
> > > > > > which currently include zram, ramdisk, pmem, and nvdimm.
> > > > >
> > > > > Yes.
> > > > >
> > > > > > Maybe it is not a good idea to use zswap on these swap backends?
> > > > > >
> > > > > > The problem here is that the page fault handler tries to skip the swapcache
> > > > > > to swap in the folio (swap entry count == 1), but then it can't install the
> > > > > > folio into the pte because something changed in the meantime, such as a
> > > > > > concurrent fork of the entry.
> > > > > >
> > > > >
> > > > > The first page fault returned VM_FAULT_RETRY because
> > > > > folio_lock_or_retry() failed.
> > > >
> > > > How so? The folio is newly allocated and not visible to any other
> > > > threads or CPUs. swap_read_folio() unlocks it and then returns, and we
> > > > immediately try to lock it again with folio_lock_or_retry(). How does
> > > > this fail?
> > > >
> > > > Let's go over what happens after swap_read_folio():
> > > > - The 'if (!folio)' code block will be skipped.
> > > > - folio_lock_or_retry() should succeed as I mentioned earlier.
> > > > - The 'if (swapcache)' code block will be skipped.
> > > > - The pte_same() check should succeed on first look because other
> > > > concurrent faulting threads should be held off by the newly introduced
> > > > swapcache_prepare() logic. But looking deeper I think this one may
> > > > fail due to a concurrent MADV_WILLNEED.
> > > > - The 'if (unlikely(!folio_test_uptodate(folio)))' part will be
> > > > skipped because swap_read_folio() marks the folio up-to-date.
> > > > - After that point there is no possible failure until we install the
> > > > pte, at which point concurrent faults will fail on !pte_same() and
> > > > retry.
> > > >
> > > > So the only failure I think is possible is the pte_same() check. I see
> > > > how a concurrent MADV_WILLNEED could cause that check to fail. A
> > > > concurrent MADV_WILLNEED will block on swapcache_prepare(), but once
> > > > the fault resolves it will go ahead and read the folio again into the
> > > > swapcache. It seems like we will end up with two copies of the same
> > >
> > > but zswap has already freed the object by the time do_swap_page() finishes
> > > swap_read_folio(), due to zswap's exclusive load feature?
> > >
> > > so WILLNEED will get corrupted data and put it into the swapcache.
> > > some other concurrently forked process might then get that data from the
> > > swapcache entry WILLNEED added, when the newly forked process goes into
> > > do_swap_page().
> >
> > Oh, I was wondering how synchronization with WILLNEED happens without
> > zswap. It seems like we could end up with two copies of the same folio,
> > and one of them will be leaked unless I am missing something.
> >
> > >
> > > so very likely a new process is forked right after do_swap_page() finishes
> > > swap_read_folio() and before swapcache_clear().
> > >
> > > > folio? Maybe this is harmless because the folio in the swapcache will
> > > > never be used, but it is essentially leaked at that point, right?
> > > >
> > > > I feel like I am missing something. Adding other folks that were
> > > > involved in the recent swapcache_prepare() synchronization thread.
> > > >
> > > > Anyway, I agree that at least in theory the data corruption could
> > > > happen because of exclusive loads when skipping the swapcache, and we
> > > > should fix that.
> > > >
> > > > Perhaps the right thing to do may be to write the folio back to zswap
> > > > before unlocking it and before calling swapcache_clear(). The need for
> > > > the write can be detected by checking whether the folio is dirty; I think
> > > > this will only be true if the folio was loaded from zswap.
> > >
> > > we only need to write when we know swap_read_folio() got the data
> > > from zswap and not from the swapfile. is there a quick way to do this?
> >
> > The folio will be dirty when loaded from zswap, so we can check whether
> > the folio is dirty and write the page back if we fail after
> > swap_read_folio().
>
> Is it actually a bug in swapin_walk_pmd_entry()? It only checks the pte
> before read_swap_cache_async(). But when read_swap_cache_async() is
> blocked by swapcache_prepare(), by the time it gets swapcache_prepare()
> successfully, someone else may have already set the pte and freed the
> swap slot, even if this is not zswap?

If someone freed the swap slot then swapcache_prepare() should fail, but
the swap entry could have been recycled after we dropped the pte lock,
right?

Anyway, yeah, I think there might be a bug here irrelevant to zswap.
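To make the dirty-folio idea above concrete, here is a rough sketch. The
helper name is made up and this is not mainline code; it only illustrates
the assumption discussed above, that on the skip-swapcache path a dirty
folio after swap_read_folio() means the data came from zswap and the
exclusive load already dropped the compressed copy:

static bool swapin_folio_is_only_copy(struct folio *folio)
{
	/*
	 * On the skip-swapcache (SWP_SYNCHRONOUS_IO) path, the folio stays
	 * clean when the data was read from the backing device, but zswap's
	 * exclusive load marks it dirty after invalidating the compressed
	 * copy.  In the dirty case this folio is the only remaining copy of
	 * the data, so the fault's failure path must preserve it (e.g.
	 * write it back) before swapcache_clear() and folio_put().
	 */
	return folio_test_dirty(folio);
}

Whether that write-back should go to zswap or to the swap device is
exactly the open question in the discussion above.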
>
> static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
>                 unsigned long end, struct mm_walk *walk)
> {
>         struct vm_area_struct *vma = walk->private;
>         struct swap_iocb *splug = NULL;
>         pte_t *ptep = NULL;
>         spinlock_t *ptl;
>         unsigned long addr;
>
>         for (addr = start; addr < end; addr += PAGE_SIZE) {
>                 pte_t pte;
>                 swp_entry_t entry;
>                 struct folio *folio;
>
>                 if (!ptep++) {
>                         ptep = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
>                         if (!ptep)
>                                 break;
>                 }
>
>                 pte = ptep_get(ptep);
>                 if (!is_swap_pte(pte))
>                         continue;
>                 entry = pte_to_swp_entry(pte);
>                 if (unlikely(non_swap_entry(entry)))
>                         continue;
>
>                 pte_unmap_unlock(ptep, ptl);
>                 ptep = NULL;
>
>                 folio = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
>                                               vma, addr, &splug);
>                 if (folio)
>                         folio_put(folio);
>         }
>
>         if (ptep)
>                 pte_unmap_unlock(ptep, ptl);
>         swap_read_unplug(splug);
>         cond_resched();
>
>         return 0;
> }
>
> I mean the pte can become non-swap within read_swap_cache_async(),
> so no matter whether it is zswap, we have the bug.
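For illustration only, the window being described could be spelled out as a
re-validation step. The helper below is hypothetical (the name is invented)
and uses only the calls already visible in the quoted function; a re-check
like this does not close the race, since the slot can be recycled before we
re-take the lock, it just shows which assumption can be invalidated:

static bool swap_pte_still_matches(struct vm_area_struct *vma, pmd_t *pmd,
				   unsigned long addr, swp_entry_t entry)
{
	spinlock_t *ptl;
	pte_t *ptep;
	pte_t pte;
	bool same = false;

	/* Re-take the pte lock that swapin_walk_pmd_entry() dropped. */
	ptep = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
	if (!ptep)
		return false;
	pte = ptep_get(ptep);
	/*
	 * While the lock was dropped (and read_swap_cache_async() possibly
	 * waited in swapcache_prepare()), the pte may have been populated
	 * and the swap slot freed or recycled, so the entry being
	 * prefetched may no longer belong to this mapping.
	 */
	if (is_swap_pte(pte) && pte_to_swp_entry(pte).val == entry.val)
		same = true;
	pte_unmap_unlock(ptep, ptl);
	return same;
}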