From: Yosry Ahmed <yosryahmed@google.com>
Date: Fri, 22 Mar 2024 16:41:40 -0700
Subject: Re: [External] Re: [bug report] mm/zswap: memory corruption after zswap_load()
To: Barry Song <21cnbao@gmail.com>
Cc: Zhongkun He, Chengming Zhou, Johannes Weiner, Andrew Morton, linux-mm,
 wuyun.abel@bytedance.com, zhouchengming@bytedance.com, Nhat Pham,
 Kairui Song, Minchan Kim, David Hildenbrand, Chris Li, Ying

On Fri, Mar 22, 2024 at 4:38 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Sat, Mar 23, 2024 at 12:35 PM Yosry Ahmed wrote:
> >
> > On Fri, Mar 22, 2024 at 4:32 PM Barry Song <21cnbao@gmail.com> wrote:
> > >
> > > On Sat, Mar 23, 2024 at 12:23 PM Yosry Ahmed wrote:
> > > >
> > > > On Fri, Mar 22, 2024 at 4:18 PM Barry Song <21cnbao@gmail.com> wrote:
> > > > >
> > > > > On Sat, Mar 23, 2024 at 12:09 PM Yosry Ahmed wrote:
> > > > > >
> > > > > > On Fri, Mar 22, 2024 at 4:04 PM Barry Song <21cnbao@gmail.com> wrote:
> > > > > > >
> > > > > > > On Sat, Mar 23, 2024 at 8:35 AM Yosry Ahmed wrote:
> > > > > > > >
> > > > > > > > On Thu, Mar 21, 2024 at 8:04 PM Zhongkun He
> > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > On Thu, Mar 21, 2024 at 5:29 PM Chengming Zhou wrote:
> > > > > > > > > >
> > > > > > > > > > On 2024/3/21 14:36, Zhongkun He wrote:
> > > > > > > > > > > On Thu, Mar 21, 2024 at 1:24 PM Chengming Zhou wrote:
> > > > > > > > > > >>
> > > > > > > > > > >> On 2024/3/21 13:09, Zhongkun He wrote:
> > > > > > > > > > >>> On Thu, Mar 21, 2024 at 12:42 PM Chengming Zhou
> > > > > > > > > > >>> wrote:
> > > > > > > > > > >>>>
> > > > > > > > > > >>>> On 2024/3/21 12:34, Zhongkun He wrote:
> > > > > > > > > > >>>>> Hey folks,
> > > > > > > > > > >>>>>
> > > > > > > > > > >>>>> Recently, I tested the zswap with memory reclaiming in the mainline
> > > > > > > > > > >>>>> (6.8) and found a memory corruption issue related to exclusive loads.
> > > > > > > > > > >>>>
> > > > > > > > > > >>>> Is this fix included? 13ddaf26be32 ("mm/swap: fix race when skipping swapcache")
> > > > > > > > > > >>>> This fix avoids concurrent swapin using the same swap entry.
> > > > > > > > > > >>>>
> > > > > > > > > > >>>
> > > > > > > > > > >>> Yes, This fix avoids concurrent swapin from different cpu, but the
> > > > > > > > > > >>> reported issue occurs
> > > > > > > > > > >>> on the same cpu.
> > > > > > > > > > >>
> > > > > > > > > > >> I think you may misunderstand the race description in this fix changelog,
> > > > > > > > > > >> the CPU0 and CPU1 just mean two concurrent threads, not real two CPUs.
> > > > > > > > > > >>
> > > > > > > > > > >> Could you verify if the problem still exists with this fix?
> > > > > > > > > > >
> > > > > > > > > > > Yes, I'm sure the problem still exists with this patch.
> > > > > > > > > > > There is some debug info, not mainline.
> > > > > > > > > > >
> > > > > > > > > > > bpftrace -e'k:swap_readpage {printf("%lld, %lld,%ld,%ld,%ld\n%s",
> > > > > > > > > > > ((struct page *)arg0)->private,nsecs,tid,pid,cpu,kstack)}' --include
> > > > > > > > > > > linux/mm_types.h
> > > > > > > > > >
> > > > > > > > > > Ok, this problem seems only happen on SWP_SYNCHRONOUS_IO swap backends,
> > > > > > > > > > which now include zram, ramdisk, pmem, nvdimm.
> > > > > > > > >
> > > > > > > > > Yes.
> > > > > > > > >
> > > > > > > > > > It maybe not good to use zswap on these swap backends?
> > > > > > > > > >
> > > > > > > > > > The problem here is the page fault handler tries to skip swapcache to
> > > > > > > > > > swapin the folio (swap entry count == 1), but then it can't install folio
> > > > > > > > > > to pte entry since some changes happened such as concurrent fork of entry.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > The first page fault returned VM_FAULT_RETRY because
> > > > > > > > > folio_lock_or_retry() failed.
> > > > > > > >
> > > > > > > > How so? The folio is newly allocated and not visible to any other
> > > > > > > > threads or CPUs. swap_read_folio() unlocks it and then returns and we
> > > > > > > > immediately try to lock it again with folio_lock_or_retry(). How does
> > > > > > > > this fail?
> > > > > > > >
> > > > > > > > Let's go over what happens after swap_read_folio():
> > > > > > > > - The 'if (!folio)' code block will be skipped.
> > > > > > > > - folio_lock_or_retry() should succeed as I mentioned earlier.
> > > > > > > > - The 'if (swapcache)' code block will be skipped.
> > > > > > > > - The pte_same() check should succeed on first look because other
> > > > > > > > concurrent faulting threads should be held off by the newly introduced
> > > > > > > > swapcache_prepare() logic. But looking deeper I think this one may
> > > > > > > > fail due to a concurrent MADV_WILLNEED.
> > > > > > > > - The 'if (unlikely(!folio_test_uptodate(folio)))' part will be
> > > > > > > > skipped because swap_read_folio() marks the folio up-to-date.
> > > > > > > > - After that point there is no possible failure until we install the
> > > > > > > > pte, at which point concurrent faults will fail on !pte_same() and
> > > > > > > > retry.
> > > > > > > >
> > > > > > > > So the only failure I think is possible is the pte_same() check. I see
> > > > > > > > how a concurrent MADV_WILLNEED could cause that check to fail. A
> > > > > > > > concurrent MADV_WILLNEED will block on swapcache_prepare(), but once
> > > > > > > > the fault resolves it will go ahead and read the folio again into the
> > > > > > > > swapcache. It seems like we will end up with two copies of the same
> > > > > > >
> > > > > > > but zswap has freed the object when the do_swap_page finishes swap_read_folio
> > > > > > > due to exclusive load feature of zswap?
> > > > > > >
> > > > > > > so WILLNEED will get corrupted data and put it into swapcache.
> > > > > > > some other concurrent new forked process might get the new data
> > > > > > > from the swapcache WILLNEED puts when the new-forked process
> > > > > > > goes into do_swap_page.
> > > > > >
> > > > > > Oh I was wondering how synchronization with WILLNEED happens without
> > > > > > zswap. It seems like we could end up with two copies of the same folio
> > > > > > and one of them will be leaked unless I am missing something.
> > > > > >
> > > > > > >
> > > > > > > so very likely a new process is forked right after do_swap_page finishes
> > > > > > > swap_read_folio and before swapcache_clear.
> > > > > > >
> > > > > > > > folio? Maybe this is harmless because the folio in the swapcache will
> > > > > > > > never be used, but it is essentially leaked at that point, right?
> > > > > > > >
> > > > > > > > I feel like I am missing something. Adding other folks that were
> > > > > > > > involved in the recent swapcache_prepare() synchronization thread.
> > > > > > > >
> > > > > > > > Anyway, I agree that at least in theory the data corruption could
> > > > > > > > happen because of exclusive loads when skipping the swapcache, and we
> > > > > > > > should fix that.
> > > > > > > >
> > > > > > > > Perhaps the right thing to do may be to write the folio again to zswap
> > > > > > > > before unlocking it and before calling swapcache_clear(). The need for
> > > > > > > > the write can be detected by checking if the folio is dirty, I think
> > > > > > > > this will only be true if the folio was loaded from zswap.
> > > > > > >
> > > > > > > we only need to write when we know swap_read_folio() gets data
> > > > > > > from zswap but not swapfile. is there a quick way to do this?
> > > > > >
> > > > > > The folio will be dirty when loaded from zswap, so we can check if the
> > > > > > folio is dirty and write the page if fail after swap_read_folio().
> > > > >
> > > > > Is it actually a bug of swapin_walk_pmd_entry? it only check pte
> > > > > before read_swap_cache_async. but when read_swap_cache_async
> > > > > is blocked by swapcache_prepare, after it gets the swapcache_prepare
> > > > > successfully, someone else should have already set the pte and freed
> > > > > the swap slot even if this is not zswap?
> > > >
> > > > If someone freed the swap slot then swapcache_prepare() should fail,
> > > > but the swap entry could have been recycled after we dropped the pte
> > > > lock, right?
> > > >
> > > > Anyway, yeah, I think there might be a bug here irrelevant to zswap.
> > > >
> > > > >
> > > > > static int swapin_walk_pmd_entry(pmd_t *pmd, unsigned long start,
> > > > >                 unsigned long end, struct mm_walk *walk)
> > > > > {
> > > > >         struct vm_area_struct *vma = walk->private;
> > > > >         struct swap_iocb *splug = NULL;
> > > > >         pte_t *ptep = NULL;
> > > > >         spinlock_t *ptl;
> > > > >         unsigned long addr;
> > > > >
> > > > >         for (addr = start; addr < end; addr += PAGE_SIZE) {
> > > > >                 pte_t pte;
> > > > >                 swp_entry_t entry;
> > > > >                 struct folio *folio;
> > > > >
> > > > >                 if (!ptep++) {
> > > > >                         ptep = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
> > > > >                         if (!ptep)
> > > > >                                 break;
> > > > >                 }
> > > > >
> > > > >                 pte = ptep_get(ptep);
> > > > >                 if (!is_swap_pte(pte))
> > > > >                         continue;
> > > > >                 entry = pte_to_swp_entry(pte);
> > > > >                 if (unlikely(non_swap_entry(entry)))
> > > > >                         continue;
> > > > >
> > > > >                 pte_unmap_unlock(ptep, ptl);
> > > > >                 ptep = NULL;
> > > > >
> > > > >                 folio = read_swap_cache_async(entry, GFP_HIGHUSER_MOVABLE,
> > > > >                                               vma, addr, &splug);
> > > > >                 if (folio)
> > > > >                         folio_put(folio);
> > > > >         }
> > > > >
> > > > >         if (ptep)
> > > > >                 pte_unmap_unlock(ptep, ptl);
> > > > >         swap_read_unplug(splug);
> > > > >         cond_resched();
> > > > >
> > > > >         return 0;
> > > > > }
> > > > >
> > > > > I mean pte can become non-swap within read_swap_cache_async(),
> > > > > so no matter if it is zswap, we have the bug.
> > >
> > > checked again, probably still a zswap issue, as swapcache_prepare can detect
> > > real swap slot free :-)
> > >
> > >                 /*
> > >                  * Swap entry may have been freed since our caller observed it.
> > >                  */
> > >                 err = swapcache_prepare(entry);
> > >                 if (!err)
> > >                         break;
> > >
> > > zswap exclusive load isn't a real swap free.
> > >
> > > But probably we have found the timing which causes the issue at least :-)
> >
> > The problem I was referring to is with the swapin fault path that
> > skips the swapcache vs. MADV_WILLNEED. The fault path could swapin the
> > page and skip the swapcache, and MADV_WILLNEED could swap it in again
> > into the swapcache. We would end up with two copies of the folio.
>
> right. i feel like we have to re-check pte is not changed within
> __read_swap_cache_async after swapcache_prepare succeeds
> after being blocked for a while, as the previous entry could have
> been freed and re-allocated by someone else - a completely
> different process. then we end up reading other processes' data.

This is only a problem when we skip the swapcache during swapin.
Otherwise the swapcache synchronizes this.

I wonder how much skipping the swapcache buys us on recent kernels?
This optimization was introduced a long time ago.
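To make the "write it back if dirty" idea above concrete, here is a rough,
untested sketch of what the SWP_SYNCHRONOUS_IO (swapcache-bypass) branch of
do_swap_page() could do with the si/entry/folio locals it already has.
Whether zswap_store() can simply be called from this context to put the
compressed copy back is an assumption on my part; an equivalent writeback
path may be needed instead:

        /* read the page, bypassing the swapcache (current behavior) */
        swap_read_folio(folio, true, NULL);

        /*
         * A dirty folio here means the data came from zswap and the
         * exclusive load already dropped zswap's copy. Put it back
         * before swapcache_clear() lets anyone else (e.g. a concurrent
         * MADV_WILLNEED) swap this entry in again and see stale data.
         */
        if (folio_test_dirty(folio)) {
                /*
                 * Assumption: zswap_store() (or an equivalent writeback)
                 * restores the compressed copy; if it fails, the folio
                 * simply stays dirty and gets written out on reclaim.
                 */
                zswap_store(folio);
        }

        swapcache_clear(si, entry);

The dirty check is what distinguishes a zswap load from a plain swapfile
read, per the discussion above, so the extra store would only happen in the
zswap case.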