From: Barry Song <21cnbao@gmail.com>
Date: Sat, 23 Mar 2024 12:18:25 +1300
Subject: Re: [External] Re: [bug report] mm/zswap: memory corruption after zswap_load()
To: Yosry Ahmed
Cc: Zhongkun He, Chengming Zhou, Johannes Weiner, Andrew Morton, linux-mm, wuyun.abel@bytedance.com, zhouchengming@bytedance.com, Nhat Pham, Kairui Song, Minchan Kim, David Hildenbrand, Chris Li, Ying
On Sat, Mar 23, 2024 at 12:09 PM Yosry Ahmed wrote:
>
> On Fri, Mar 22, 2024 at 4:04 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Sat, Mar 23, 2024 at 8:35 AM Yosry Ahmed wrote:
> > >
> > > On Thu, Mar 21, 2024 at 8:04 PM Zhongkun He wrote:
> > > >
> > > > On Thu, Mar 21, 2024 at 5:29 PM Chengming Zhou wrote:
> > > > >
> > > > > On 2024/3/21 14:36, Zhongkun He wrote:
> > > > > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou wrote:
> > > > > >>
> > > > > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > > > > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou wrote:
> > > > > >>>>
> > > > > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > > > > >>>>> Hey folks,
> > > > > >>>>>
> > > > > >>>>> Recently, I tested zswap with memory reclaiming on mainline
> > > > > >>>>> (6.8) and found a memory corruption issue related to exclusive loads.
> > > > > >>>>
> > > > > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > > > > >>>> This fix avoids concurrent swapin using the same swap entry.
> > > > > >>>>
> > > > > >>>
> > > > > >>> Yes, this fix avoids concurrent swapin from different CPUs, but the
> > > > > >>> reported issue occurs on the same CPU.
> > > > > >>
> > > > > >> I think you may have misunderstood the race description in that fix's
> > > > > >> changelog: CPU0 and CPU1 just mean two concurrent threads, not two real CPUs.
> > > > > >>
> > > > > >> Could you verify whether the problem still exists with this fix?
> > > > > >
> > > > > > Yes, I'm sure the problem still exists with this patch.
> > > > > > Here is some debug info (not mainline):
> > > > > >
> > > > > > bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s",
> > > > > > ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include
> > > > > > linux/mm_types.h
> > > > >
> > > > > Ok, this problem seems to only happen on SWP_SYNCHRONOUS_IO swap backends,
> > > > > which now include zram, ramdisk, pmem, and nvdimm.
> > > >
> > > > Yes.
> > > >
> > > > > Maybe it is not a good idea to use zswap on these swap backends?
> > > > >
> > > > > The problem here is that the page fault handler tries to skip the swapcache
> > > > > to swap in the folio (swap entry count == 1), but then it can't install the
> > > > > folio into the pte since something changed, such as a concurrent fork of the entry.
> > > >
> > > > The first page fault returned VM_FAULT_RETRY because
> > > > folio_lock_or_retry() failed.
> > >
> > > How so? The folio is newly allocated and not visible to any other
> > > threads or CPUs. swap_read_folio() unlocks it and then returns, and we
> > > immediately try to lock it again with folio_lock_or_retry(). How does
> > > this fail?
> > >
> > > Let's go over what happens after swap_read_folio():
> > > - The 'if (!folio)' code block will be skipped.
> > > - folio_lock_or_retry() should succeed as I mentioned earlier.
> > > - The 'if (swapcache)' code block will be skipped.
> > > - The pte_same() check should succeed at first glance because other
> > > concurrent faulting threads should be held off by the newly introduced
> > > swapcache_prepare() logic. But looking deeper, I think this one may
> > > fail due to a concurrent MADV_WILLNEED.
> > > - The 'if (unlikely(!folio_test_uptodate(folio)))' part will be
> > > skipped because swap_read_folio() marks the folio up-to-date.
> > > - After that point there is no possible failure until we install the
> > > pte, at which point concurrent faults will fail on !pte_same() and
> > > retry.
> > >
> > > So the only failure I think is possible is the pte_same() check. I see
> > > how a concurrent MADV_WILLNEED could cause that check to fail. A
> > > concurrent MADV_WILLNEED will block on swapcache_prepare(), but once
> > > the fault resolves it will go ahead and read the folio again into the
> > > swapcache. It seems like we will end up with two copies of the same
> >
> > But zswap has already freed the object when do_swap_page finishes
> > swap_read_folio, due to the exclusive load feature of zswap?
> >
> > So WILLNEED will get corrupted data and put it into the swapcache.
> > Some other concurrently forked process might then get that data from
> > the swapcache entry WILLNEED put there, when the newly forked process
> > goes into do_swap_page.
>
> Oh, I was wondering how synchronization with WILLNEED happens without
> zswap. It seems like we could end up with two copies of the same folio,
> and one of them will be leaked, unless I am missing something.
>
> > So very likely a new process is forked right after do_swap_page finishes
> > swap_read_folio and before swapcache_clear.
> >
> > > folio? Maybe this is harmless because the folio in the swapcache will
> > > never be used, but it is essentially leaked at that point, right?
> > >
> > > I feel like I am missing something. Adding other folks that were
> > > involved in the recent swapcache_prepare() synchronization thread.
> > >
> > > Anyway, I agree that at least in theory the data corruption could
> > > happen because of exclusive loads when skipping the swapcache, and we
> > > should fix that.
> > >
> > > Perhaps the right thing to do may be to write the folio again to zswap
> > > before unlocking it and before calling swapcache_clear(). The need for
> > > the write can be detected by checking if the folio is dirty; I think
> > > this will only be true if the folio was loaded from zswap.
> >
> > We only need to write when we know swap_read_folio() got the data
> > from zswap and not from the swapfile. Is there a quick way to do this?
>
> The folio will be dirty when loaded from zswap, so we can check if the
> folio is dirty after swap_read_folio() and write the page back if it is.

Is this actually a bug in swapin_walk_pmd_entry()? It only checks the pte
before read_swap_cache_async(). But when read_swap_cache_async() is blocked
by swapcache_prepare(), by the time it gets swapcache_prepare() successfully,
someone else may have already set the pte and freed the swap slot, even if
this is not zswap?

static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
		unsigned long end, struct mm_walk *walk)
{
	struct vm_area_struct *vma = walk->private;
	struct swap_iocb *splug = NULL;
	pte_t *ptep = NULL;
	spinlock_t *ptl;
	unsigned long addr;

	for (addr = start; addr < end; addr += PAGE_SIZE) {
		pte_t pte;
		swp_entry_t entry;
		struct folio *folio;

		if (!ptep++) {
			ptep = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
			if (!ptep)
				break;
		}

		pte = ptep_get(ptep);
		if (!is_swap_pte(pte))
			continue;
		entry = pte_to_swp_entry(pte);
		if (unlikely(non_swap_entry(entry)))
			continue;

		pte_unmap_unlock(ptep, ptl);
		ptep = NULL;

		folio = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
					      vma, addr, &splug);
		if (folio)
			folio_put(folio);
	}

	if (ptep)
		pte_unmap_unlock(ptep, ptl);
	swap_read_unplug(splug);
	cond_resched();

	return 0;
}

I mean, the pte can become non-swap within read_swap_cache_async(), so we
have the bug no matter whether zswap is involved or not.