LinuxPPC-Dev Archive mirror
* [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support
@ 2024-02-07 13:21 Tong Tiangen
  2024-02-07 13:22 ` [PATCH v11 1/5] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: Tong Tiangen @ 2024-02-07 13:21 UTC (permalink / raw
  To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
	James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
	Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
  Cc: wangkefeng.wang, linux-kernel, linux-mm, Tong Tiangen, Guohanjun,
	linuxppc-dev, linux-arm-kernel


With the increase of memory capacity and density, the probability of memory
errors also increases. The growing size and density of server RAM in data
centers and clouds has led to more uncorrectable memory errors.

Currently there are more and more scenarios that can tolerate memory
errors, such as CoW[1,2], KSM copy[3], coredump copy[4], khugepaged[5,6],
uaccess copy[7], etc.

This patchset introduces a new processing framework on ARM64, which enables
ARM64 to support error recovery in the above scenarios, and more scenarios
can be expanded based on this in the future.

On arm64, memory errors are handled in do_sea(), which distinguishes two cases:
 1. If the memory error was consumed in user mode, the solution is to kill
    the user process and isolate the error page.
 2. If the memory error was consumed in kernel mode, the solution is to
    panic.

For case 2, an undifferentiated panic may not be the optimal choice, as
some scenarios can be handled better. For uaccess, for example, if the
access fails due to a memory error, only the user process is affected;
killing that process and isolating the page with the hardware memory error
is a better choice than panicking.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 245f09226893 ("mm: hwpoison: coredump: support recovery from dump_user_range()")
[5] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[6] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
[7] commit 278b917f8cb9 ("x86/mce: Add _ASM_EXTABLE_CPY for copy user access")

------------------
Test result:

1. copy_page() and copy_mc_page() basic function tests pass, and the
   disassembled contents remain the same before and after the refactor.

2. copy_to/from_user() accessing a kernel NULL pointer raises a translation
   fault, dumps an error message and then die()s; test passes.

3. Tested the following scenarios: copy_from_user(), get_user(), COW.

   Before patching: triggering a hardware memory error causes a panic.
   After  patching: triggering a hardware memory error does not panic.

   Testing steps:
   step1: start a user process.
   step2: poison (einj) one of the user process's pages.
   step3: the user process accesses the poisoned page in kernel mode,
          triggering an SEA.
   step4: the kernel does not panic; only the user process is killed and
          the poisoned page is isolated. (Before patching, the kernel would
          panic in do_sea().)

------------------

Since V10:
 According to Mark's suggestions:
 1. Merge V10's patch2 and patch3 into V11's patch2.
 2. Patch2(V11): use a new fixup type for ld* in copy_to_user(), fixing
    fatal issues (NULL kernel pointer access) that were previously fixed
    up incorrectly.
 3. Patch2(V11): refactor the logic of do_sea().
 4. Patch4(V11): remove duplicate assembly logic and remove do_mte().

 Besides:
 1. Patch2(V11): remove the st* insns' fixup; st* generally does not
    trigger memory errors.
 2. Split part of the logic of patch2(V11) into patch5(V11); for details,
    see patch5(V11)'s commit msg.
 3. Remove patch6(v10) “arm64: introduce copy_mc_to_kernel() implementation”.
    During modification, some problems were found that cannot be solved in
    a short period. The patch will be sent after those problems are solved.
 4. Add test results to this cover letter.
 5. Modify the patchset title: do not use "machine check" and remove "-next".

Since V9:
 1. Rebase to latest kernel version 6.8-rc2.
 2. Add patch 6/6 to support copy_mc_to_kernel().

Since V8:
 1. Rebase to latest kernel version and fix typos in some of the patches.
 2. According to Catalin's suggestion, I attempted to modify the return
    value of copy_mc_[user]_highpage() to bytes not copied. During the
    modification process, I found that it would be more reasonable to
    return -EFAULT when a copy error occurs (see the newly added patch 4).

    For ARM64, the implementation of copy_mc_[user]_highpage() needs to
    consider MTE. In the scenario where the data copy succeeds but the MTE
    tag copy fails, returning bytes not copied is also not reasonable.
 3. Considering the recent addition of machine check safe support for
    multiple scenarios, modify the commit message of patch 5 (patch 4 in V8).

Since V7:
 There are already patches supporting recovery from poison consumption in
 the CoW scenario[1]. Therefore, supporting the CoW scenario on arm64 only
 needs modifications to the relevant code under arch/.
 [1]https://lore.kernel.org/lkml/20221031201029.102123-1-tony.luck@intel.com/

Since V6:
 Resend patches that are not merged into the mainline in V6.

Since V5:
 1. Add patch2/3 to add uaccess assembly helpers.
 2. Optimize the implementation logic of arm64_do_kernel_sea() in patch8.
 3. Remove kernel access fixup in patch9.
 All suggestions are from Mark.

Since V4:
 1. According to Michael's suggestion, add patch5.
 2. According to Mark's suggestion, do some restructuring of the arm64
 extable, then make a new adaptation of machine check safe support based
 on this.
 3. According to Mark's suggestion, support machine check safe in do_mte()
 in the CoW scenario.
 4. In V4, two patches were merged into -next, so V5 does not include
 those two patches.

Since V3:
 1. According to Robin's suggestion, directly modify user_ldst and
 user_ldp in asm-uaccess.h and modify mte.S.
 2. Add a new macro USER_MC in asm-uaccess.h, used in copy_from_user.S
 and copy_to_user.S.
 3. According to Robin's suggestion, use a macro in copy_page_mc.S to
 simplify the code.
 4. According to KeFeng's suggestion, modify the powerpc code in patch1.
 5. According to KeFeng's suggestion, modify mm/extable.c and do some code
 optimization.

Since V2:
 1. According to Mark's suggestion, all uaccess faults due to memory
    errors can be recovered.
 2. The pagecache-reading scenario is also supported as part of uaccess
    (copy_to_user()), and the code-duplication problem is solved.
    Thanks for Robin's suggestion.
 3. According to Mark's suggestion, update the commit message of patch 2/5.
 4. According to Borislav's suggestion, update the commit message of patch 1/5.

Since V1:
 1. Consistent with PPC/x86, use CONFIG_ARCH_HAS_COPY_MC instead of
    ARM64_UCE_KERNEL_RECOVERY.
 2. Add two new scenarios: CoW and pagecache reading.
 3. Fix two small bugs (the first two patches).

V1 in here:
https://lore.kernel.org/lkml/20220323033705.3966643-1-tongtiangen@huawei.com/

Tong Tiangen (5):
  uaccess: add generic fallback version of copy_mc_to_user()
  arm64: add support for ARCH_HAS_COPY_MC
  mm/hwpoison: return -EFAULT when copy fail in
    copy_mc_[user]_highpage()
  arm64: support copy_mc_[user]_highpage()
  arm64: send SIGBUS to user process for SEA exception

 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 31 ++++++++++++---
 arch/arm64/include/asm/asm-uaccess.h |  4 ++
 arch/arm64/include/asm/extable.h     |  1 +
 arch/arm64/include/asm/mte.h         |  9 +++++
 arch/arm64/include/asm/page.h        | 10 +++++
 arch/arm64/lib/Makefile              |  2 +
 arch/arm64/lib/copy_mc_page.S        | 37 ++++++++++++++++++
 arch/arm64/lib/copy_page.S           | 50 +++----------------------
 arch/arm64/lib/copy_page_template.S  | 56 ++++++++++++++++++++++++++++
 arch/arm64/lib/copy_to_user.S        | 10 ++---
 arch/arm64/lib/mte.S                 | 29 ++++++++++++++
 arch/arm64/mm/copypage.c             | 45 ++++++++++++++++++++++
 arch/arm64/mm/extable.c              | 19 ++++++++++
 arch/arm64/mm/fault.c                | 39 ++++++++++++++-----
 arch/powerpc/include/asm/uaccess.h   |  1 +
 arch/x86/include/asm/uaccess.h       |  1 +
 include/linux/highmem.h              | 16 ++++++--
 include/linux/uaccess.h              |  9 +++++
 mm/khugepaged.c                      |  4 +-
 20 files changed, 304 insertions(+), 70 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S
 create mode 100644 arch/arm64/lib/copy_page_template.S

-- 
2.25.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH v11 1/5] uaccess: add generic fallback version of copy_mc_to_user()
  2024-02-07 13:21 [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
@ 2024-02-07 13:22 ` Tong Tiangen
  2024-02-07 13:22 ` [PATCH v11 2/5] arm64: add support for ARCH_HAS_COPY_MC Tong Tiangen
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tong Tiangen @ 2024-02-07 13:22 UTC (permalink / raw
  To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
	James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
	Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
  Cc: wangkefeng.wang, linux-kernel, linux-mm, Tong Tiangen, Guohanjun,
	linuxppc-dev, linux-arm-kernel

x86 and powerpc have their own implementations of copy_mc_to_user(). Add a
generic fallback in include/linux/uaccess.h to prepare for other
architectures to enable CONFIG_ARCH_HAS_COPY_MC.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
Acked-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/uaccess.h | 1 +
 arch/x86/include/asm/uaccess.h     | 1 +
 include/linux/uaccess.h            | 9 +++++++++
 3 files changed, 11 insertions(+)

diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index f1f9890f50d3..4bfd1e6f0702 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -381,6 +381,7 @@ copy_mc_to_user(void __user *to, const void *from, unsigned long n)
 
 	return n;
 }
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 extern long __copy_from_user_flushcache(void *dst, const void __user *src,
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 5c367c1290c3..fd56282ee9a8 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -497,6 +497,7 @@ copy_mc_to_kernel(void *to, const void *from, unsigned len);
 
 unsigned long __must_check
 copy_mc_to_user(void __user *to, const void *from, unsigned len);
+#define copy_mc_to_user copy_mc_to_user
 #endif
 
 /*
diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 3064314f4832..550287c92990 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -205,6 +205,15 @@ copy_mc_to_kernel(void *dst, const void *src, size_t cnt)
 }
 #endif
 
+#ifndef copy_mc_to_user
+static inline unsigned long __must_check
+copy_mc_to_user(void *dst, const void *src, size_t cnt)
+{
+	check_object_size(src, cnt, true);
+	return raw_copy_to_user(dst, src, cnt);
+}
+#endif
+
 static __always_inline void pagefault_disabled_inc(void)
 {
 	current->pagefault_disabled++;
-- 
2.25.1



* [PATCH v11 2/5] arm64: add support for ARCH_HAS_COPY_MC
  2024-02-07 13:21 [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
  2024-02-07 13:22 ` [PATCH v11 1/5] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
@ 2024-02-07 13:22 ` Tong Tiangen
  2024-02-07 13:22 ` [PATCH v11 3/5] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage() Tong Tiangen
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tong Tiangen @ 2024-02-07 13:22 UTC (permalink / raw
  To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
	James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
	Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
  Cc: wangkefeng.wang, linux-kernel, linux-mm, Tong Tiangen, Guohanjun,
	linuxppc-dev, linux-arm-kernel

When the arm64 kernel processes hardware memory errors for synchronous
notifications (do_sea()), if the error is consumed within the kernel, the
current behaviour is to panic. However, this is not optimal.

Take copy_from/to_user() for example: if an ld* insn triggers a memory
error, even in kernel mode, only the associated user process is affected.
Killing the user process and isolating the corrupt page is a better choice.

Add a new fixup type, EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE, to identify insns
that can recover from memory errors triggered by access to kernel memory.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/Kconfig                   |  1 +
 arch/arm64/include/asm/asm-extable.h | 31 +++++++++++++++++++++++-----
 arch/arm64/include/asm/asm-uaccess.h |  4 ++++
 arch/arm64/include/asm/extable.h     |  1 +
 arch/arm64/lib/copy_to_user.S        | 10 ++++-----
 arch/arm64/mm/extable.c              | 19 +++++++++++++++++
 arch/arm64/mm/fault.c                | 27 +++++++++++++++++-------
 7 files changed, 75 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 96fb363d2f52..72b651c461d5 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -20,6 +20,7 @@ config ARM64
 	select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
 	select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE
 	select ARCH_HAS_CACHE_LINE_SIZE
+	select ARCH_HAS_COPY_MC if ACPI_APEI_GHES
 	select ARCH_HAS_CURRENT_STACK_POINTER
 	select ARCH_HAS_DEBUG_VIRTUAL
 	select ARCH_HAS_DEBUG_VM_PGTABLE
diff --git a/arch/arm64/include/asm/asm-extable.h b/arch/arm64/include/asm/asm-extable.h
index 980d1dd8e1a3..9c0664fe1eb1 100644
--- a/arch/arm64/include/asm/asm-extable.h
+++ b/arch/arm64/include/asm/asm-extable.h
@@ -5,11 +5,13 @@
 #include <linux/bits.h>
 #include <asm/gpr-num.h>
 
-#define EX_TYPE_NONE			0
-#define EX_TYPE_BPF			1
-#define EX_TYPE_UACCESS_ERR_ZERO	2
-#define EX_TYPE_KACCESS_ERR_ZERO	3
-#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD	4
+#define EX_TYPE_NONE				0
+#define EX_TYPE_BPF				1
+#define EX_TYPE_UACCESS_ERR_ZERO		2
+#define EX_TYPE_KACCESS_ERR_ZERO		3
+#define EX_TYPE_LOAD_UNALIGNED_ZEROPAD		4
+/* kernel access memory error safe */
+#define EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE	5
 
 /* Data fields for EX_TYPE_UACCESS_ERR_ZERO */
 #define EX_DATA_REG_ERR_SHIFT	0
@@ -51,6 +53,17 @@
 #define _ASM_EXTABLE_UACCESS(insn, fixup)				\
 	_ASM_EXTABLE_UACCESS_ERR_ZERO(insn, fixup, wzr, wzr)
 
+#define _ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, err, zero)	\
+	__ASM_EXTABLE_RAW(insn, fixup, 					\
+			  EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE,		\
+			  (						\
+			    EX_DATA_REG(ERR, err) |			\
+			    EX_DATA_REG(ZERO, zero)			\
+			  ))
+
+#define _ASM_EXTABLE_KACCESS_ME_SAFE(insn, fixup)			\
+	_ASM_EXTABLE_KACCESS_ERR_ZERO_ME_SAFE(insn, fixup, wzr, wzr)
+
 /*
  * Create an exception table entry for uaccess `insn`, which will branch to `fixup`
  * when an unhandled fault is taken.
@@ -69,6 +82,14 @@
 	.endif
 	.endm
 
+/*
+ * Create an exception table entry for kaccess me(memory error) safe `insn`, which
+ * will branch to `fixup` when an unhandled fault is taken.
+ */
+	.macro          _asm_extable_kaccess_me_safe, insn, fixup
+	_ASM_EXTABLE_KACCESS_ME_SAFE(\insn, \fixup)
+	.endm
+
 #else /* __ASSEMBLY__ */
 
 #include <linux/stringify.h>
diff --git a/arch/arm64/include/asm/asm-uaccess.h b/arch/arm64/include/asm/asm-uaccess.h
index 5b6efe8abeeb..7bbebfa5b710 100644
--- a/arch/arm64/include/asm/asm-uaccess.h
+++ b/arch/arm64/include/asm/asm-uaccess.h
@@ -57,6 +57,10 @@ alternative_else_nop_endif
 	.endm
 #endif
 
+#define KERNEL_ME_SAFE(l, x...)			\
+9999:	x;					\
+	_asm_extable_kaccess_me_safe	9999b, l
+
 #define USER(l, x...)				\
 9999:	x;					\
 	_asm_extable_uaccess	9999b, l
diff --git a/arch/arm64/include/asm/extable.h b/arch/arm64/include/asm/extable.h
index 72b0e71cc3de..bc49443bc502 100644
--- a/arch/arm64/include/asm/extable.h
+++ b/arch/arm64/include/asm/extable.h
@@ -46,4 +46,5 @@ bool ex_handler_bpf(const struct exception_table_entry *ex,
 #endif /* !CONFIG_BPF_JIT */
 
 bool fixup_exception(struct pt_regs *regs);
+bool fixup_exception_me(struct pt_regs *regs);
 #endif
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 802231772608..2ac716c0d6d8 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -20,7 +20,7 @@
  *	x0 - bytes not copied
  */
 	.macro ldrb1 reg, ptr, val
-	ldrb  \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldrb  \reg, [\ptr], \val)
 	.endm
 
 	.macro strb1 reg, ptr, val
@@ -28,7 +28,7 @@
 	.endm
 
 	.macro ldrh1 reg, ptr, val
-	ldrh  \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldrh  \reg, [\ptr], \val)
 	.endm
 
 	.macro strh1 reg, ptr, val
@@ -36,7 +36,7 @@
 	.endm
 
 	.macro ldr1 reg, ptr, val
-	ldr \reg, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldr \reg, [\ptr], \val)
 	.endm
 
 	.macro str1 reg, ptr, val
@@ -44,7 +44,7 @@
 	.endm
 
 	.macro ldp1 reg1, reg2, ptr, val
-	ldp \reg1, \reg2, [\ptr], \val
+	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr], \val)
 	.endm
 
 	.macro stp1 reg1, reg2, ptr, val
@@ -64,7 +64,7 @@ SYM_FUNC_START(__arch_copy_to_user)
 9997:	cmp	dst, dstin
 	b.ne	9998f
 	// Before being absolutely sure we couldn't copy anything, try harder
-	ldrb	tmp1w, [srcin]
+KERNEL_ME_SAFE(9998f, ldrb	tmp1w, [srcin])
 USER(9998f, sttrb tmp1w, [dst])
 	add	dst, dst, #1
 9998:	sub	x0, end, dst			// bytes not copied
diff --git a/arch/arm64/mm/extable.c b/arch/arm64/mm/extable.c
index 228d681a8715..8c690ae61944 100644
--- a/arch/arm64/mm/extable.c
+++ b/arch/arm64/mm/extable.c
@@ -72,7 +72,26 @@ bool fixup_exception(struct pt_regs *regs)
 		return ex_handler_uaccess_err_zero(ex, regs);
 	case EX_TYPE_LOAD_UNALIGNED_ZEROPAD:
 		return ex_handler_load_unaligned_zeropad(ex, regs);
+	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
+		return false;
 	}
 
 	BUG();
 }
+
+bool fixup_exception_me(struct pt_regs *regs)
+{
+	const struct exception_table_entry *ex;
+
+	ex = search_exception_tables(instruction_pointer(regs));
+	if (!ex)
+		return false;
+
+	switch (ex->type) {
+	case EX_TYPE_UACCESS_ERR_ZERO:
+	case EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE:
+		return ex_handler_uaccess_err_zero(ex, regs);
+	}
+
+	return false;
+}
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 13189322a38f..78f9d5ce83bb 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -802,21 +802,32 @@ static int do_bad(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	return 1; /* "fault" */
 }
 
+/*
+ * APEI claimed this as a firmware-first notification.
+ * Some processing deferred to task_work before ret_to_user().
+ */
+static bool do_apei_claim_sea(struct pt_regs *regs)
+{
+	if (user_mode(regs)) {
+		if (!apei_claim_sea(regs))
+			return true;
+	} else if (IS_ENABLED(CONFIG_ARCH_HAS_COPY_MC)) {
+		if (fixup_exception_me(regs) && !apei_claim_sea(regs))
+			return true;
+	}
+
+	return false;
+}
+
 static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 {
 	const struct fault_info *inf;
 	unsigned long siaddr;
 
-	inf = esr_to_fault_info(esr);
-
-	if (user_mode(regs) && apei_claim_sea(regs) == 0) {
-		/*
-		 * APEI claimed this as a firmware-first notification.
-		 * Some processing deferred to task_work before ret_to_user().
-		 */
+	if (do_apei_claim_sea(regs))
 		return 0;
-	}
 
+	inf = esr_to_fault_info(esr);
 	if (esr & ESR_ELx_FnV) {
 		siaddr = 0;
 	} else {
-- 
2.25.1



* [PATCH v11 3/5] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage()
  2024-02-07 13:21 [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
  2024-02-07 13:22 ` [PATCH v11 1/5] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
  2024-02-07 13:22 ` [PATCH v11 2/5] arm64: add support for ARCH_HAS_COPY_MC Tong Tiangen
@ 2024-02-07 13:22 ` Tong Tiangen
  2024-02-07 13:22 ` [PATCH v11 4/5] arm64: support copy_mc_[user]_highpage() Tong Tiangen
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tong Tiangen @ 2024-02-07 13:22 UTC (permalink / raw
  To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
	James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
	Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
  Cc: wangkefeng.wang, linux-kernel, linux-mm, Tong Tiangen, Guohanjun,
	linuxppc-dev, linux-arm-kernel

If hardware errors are encountered during page copying, returning the
number of bytes not copied is not meaningful, as the caller cannot do any
processing on the remaining data. Returning -EFAULT is more reasonable: it
represents a hardware error encountered during the copy.

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 include/linux/highmem.h | 8 ++++----
 mm/khugepaged.c         | 4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index 451c1dff0e87..c5ca1a1fc4f5 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -335,8 +335,8 @@ static inline void copy_highpage(struct page *to, struct page *from)
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
- * page with #MC in source page (@from) handled, and return the number
- * of bytes not copied if there was a #MC, otherwise 0 for success.
+ * page with #MC in source page (@from) handled, and return -EFAULT if there
+ * was a #MC, otherwise 0 for success.
  */
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
@@ -352,7 +352,7 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 
 static inline int copy_mc_highpage(struct page *to, struct page *from)
@@ -368,7 +368,7 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 	kunmap_local(vto);
 	kunmap_local(vfrom);
 
-	return ret;
+	return ret ? -EFAULT : 0;
 }
 #else
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index fe43fbc44525..d0f40c42f620 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -797,7 +797,7 @@ static int __collapse_huge_page_copy(pte_t *pte,
 			continue;
 		}
 		src_page = pte_page(pteval);
-		if (copy_mc_user_highpage(page, src_page, _address, vma) > 0) {
+		if (copy_mc_user_highpage(page, src_page, _address, vma)) {
 			result = SCAN_COPY_MC;
 			break;
 		}
@@ -2053,7 +2053,7 @@ static int collapse_file(struct mm_struct *mm, unsigned long addr,
 			clear_highpage(hpage + (index % HPAGE_PMD_NR));
 			index++;
 		}
-		if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page) > 0) {
+		if (copy_mc_highpage(hpage + (page->index % HPAGE_PMD_NR), page)) {
 			result = SCAN_COPY_MC;
 			goto rollback;
 		}
-- 
2.25.1



* [PATCH v11 4/5] arm64: support copy_mc_[user]_highpage()
  2024-02-07 13:21 [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
                   ` (2 preceding siblings ...)
  2024-02-07 13:22 ` [PATCH v11 3/5] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage() Tong Tiangen
@ 2024-02-07 13:22 ` Tong Tiangen
  2024-02-07 13:22 ` [PATCH v11 5/5] arm64: send SIGBUS to user process for SEA exception Tong Tiangen
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: Tong Tiangen @ 2024-02-07 13:22 UTC (permalink / raw
  To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
	James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
	Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
  Cc: wangkefeng.wang, linux-kernel, linux-mm, Tong Tiangen, Guohanjun,
	linuxppc-dev, linux-arm-kernel

Currently, many scenarios that can tolerate memory errors when copying a
page are supported in the kernel[1~5], all implemented via
copy_mc_[user]_highpage(). arm64 should also support this mechanism.

Due to MTE, arm64 needs its own architecture implementation of
copy_mc_[user]_highpage(); the macros __HAVE_ARCH_COPY_MC_HIGHPAGE and
__HAVE_ARCH_COPY_MC_USER_HIGHPAGE are added to control it.

Add a new helper, copy_mc_page(), which provides a page-copy implementation
that is hardware memory error safe. The code logic of copy_mc_page() is the
same as copy_page(); the main difference is that the ldp insns of
copy_mc_page() carry the fixup type EX_TYPE_KACCESS_ERR_ZERO_ME_SAFE.
Therefore, the main logic is extracted into copy_page_template.S.

[1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
[2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
[3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
[4] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
[5] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/include/asm/mte.h        |  9 +++++
 arch/arm64/include/asm/page.h       | 10 ++++++
 arch/arm64/lib/Makefile             |  2 ++
 arch/arm64/lib/copy_mc_page.S       | 37 +++++++++++++++++++
 arch/arm64/lib/copy_page.S          | 50 +++-----------------------
 arch/arm64/lib/copy_page_template.S | 56 +++++++++++++++++++++++++++++
 arch/arm64/lib/mte.S                | 29 +++++++++++++++
 arch/arm64/mm/copypage.c            | 45 +++++++++++++++++++++++
 include/linux/highmem.h             |  8 +++++
 9 files changed, 201 insertions(+), 45 deletions(-)
 create mode 100644 arch/arm64/lib/copy_mc_page.S
 create mode 100644 arch/arm64/lib/copy_page_template.S

diff --git a/arch/arm64/include/asm/mte.h b/arch/arm64/include/asm/mte.h
index 91fbd5c8a391..dc68337c2623 100644
--- a/arch/arm64/include/asm/mte.h
+++ b/arch/arm64/include/asm/mte.h
@@ -92,6 +92,11 @@ static inline bool try_page_mte_tagging(struct page *page)
 void mte_zero_clear_page_tags(void *addr);
 void mte_sync_tags(pte_t pte, unsigned int nr_pages);
 void mte_copy_page_tags(void *kto, const void *kfrom);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int mte_copy_mc_page_tags(void *kto, const void *kfrom);
+#endif
+
 void mte_thread_init_user(void);
 void mte_thread_switch(struct task_struct *next);
 void mte_cpu_setup(void);
@@ -128,6 +133,10 @@ static inline void mte_sync_tags(pte_t pte, unsigned int nr_pages)
 static inline void mte_copy_page_tags(void *kto, const void *kfrom)
 {
 }
+static inline int mte_copy_mc_page_tags(void *kto, const void *kfrom)
+{
+	return 0;
+}
 static inline void mte_thread_init_user(void)
 {
 }
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index 2312e6ee595f..304cc86b8a10 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -29,6 +29,16 @@ void copy_user_highpage(struct page *to, struct page *from,
 void copy_highpage(struct page *to, struct page *from);
 #define __HAVE_ARCH_COPY_HIGHPAGE
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+int copy_mc_page(void *to, const void *from);
+int copy_mc_highpage(struct page *to, struct page *from);
+#define __HAVE_ARCH_COPY_MC_HIGHPAGE
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+		unsigned long vaddr, struct vm_area_struct *vma);
+#define __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
+#endif
+
 struct folio *vma_alloc_zeroed_movable_folio(struct vm_area_struct *vma,
 						unsigned long vaddr);
 #define vma_alloc_zeroed_movable_folio vma_alloc_zeroed_movable_folio
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 29490be2546b..a2fd865b816d 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -15,6 +15,8 @@ endif
 
 lib-$(CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE) += uaccess_flushcache.o
 
+lib-$(CONFIG_ARCH_HAS_COPY_MC) += copy_mc_page.o
+
 obj-$(CONFIG_CRC32) += crc32.o
 
 obj-$(CONFIG_FUNCTION_ERROR_INJECTION) += error-inject.o
diff --git a/arch/arm64/lib/copy_mc_page.S b/arch/arm64/lib/copy_mc_page.S
new file mode 100644
index 000000000000..1e5fe6952869
--- /dev/null
+++ b/arch/arm64/lib/copy_mc_page.S
@@ -0,0 +1,37 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#include <linux/linkage.h>
+#include <linux/const.h>
+#include <asm/assembler.h>
+#include <asm/page.h>
+#include <asm/cpufeature.h>
+#include <asm/alternative.h>
+#include <asm/asm-extable.h>
+#include <asm/asm-uaccess.h>
+
+/*
+ * Copy a page from src to dest (both are page aligned) with memory error safe
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ * Returns:
+ * 	x0 - Return 0 if copy success, or -EFAULT if anything goes wrong
+ *	     while copying.
+ */
+	.macro ldp1 reg1, reg2, ptr, val
+	KERNEL_ME_SAFE(9998f, ldp \reg1, \reg2, [\ptr, \val])
+	.endm
+
+SYM_FUNC_START(__pi_copy_mc_page)
+#include "copy_page_template.S"
+
+	mov x0, #0
+	ret
+
+9998:	mov x0, #-EFAULT
+	ret
+
+SYM_FUNC_END(__pi_copy_mc_page)
+SYM_FUNC_ALIAS(copy_mc_page, __pi_copy_mc_page)
+EXPORT_SYMBOL(copy_mc_page)
diff --git a/arch/arm64/lib/copy_page.S b/arch/arm64/lib/copy_page.S
index 6a56d7cf309d..5499f507bb75 100644
--- a/arch/arm64/lib/copy_page.S
+++ b/arch/arm64/lib/copy_page.S
@@ -17,52 +17,12 @@
  *	x0 - dest
  *	x1 - src
  */
-SYM_FUNC_START(__pi_copy_page)
-	ldp	x2, x3, [x1]
-	ldp	x4, x5, [x1, #16]
-	ldp	x6, x7, [x1, #32]
-	ldp	x8, x9, [x1, #48]
-	ldp	x10, x11, [x1, #64]
-	ldp	x12, x13, [x1, #80]
-	ldp	x14, x15, [x1, #96]
-	ldp	x16, x17, [x1, #112]
-
-	add	x0, x0, #256
-	add	x1, x1, #128
-1:
-	tst	x0, #(PAGE_SIZE - 1)
-
-	stnp	x2, x3, [x0, #-256]
-	ldp	x2, x3, [x1]
-	stnp	x4, x5, [x0, #16 - 256]
-	ldp	x4, x5, [x1, #16]
-	stnp	x6, x7, [x0, #32 - 256]
-	ldp	x6, x7, [x1, #32]
-	stnp	x8, x9, [x0, #48 - 256]
-	ldp	x8, x9, [x1, #48]
-	stnp	x10, x11, [x0, #64 - 256]
-	ldp	x10, x11, [x1, #64]
-	stnp	x12, x13, [x0, #80 - 256]
-	ldp	x12, x13, [x1, #80]
-	stnp	x14, x15, [x0, #96 - 256]
-	ldp	x14, x15, [x1, #96]
-	stnp	x16, x17, [x0, #112 - 256]
-	ldp	x16, x17, [x1, #112]
-
-	add	x0, x0, #128
-	add	x1, x1, #128
-
-	b.ne	1b
-
-	stnp	x2, x3, [x0, #-256]
-	stnp	x4, x5, [x0, #16 - 256]
-	stnp	x6, x7, [x0, #32 - 256]
-	stnp	x8, x9, [x0, #48 - 256]
-	stnp	x10, x11, [x0, #64 - 256]
-	stnp	x12, x13, [x0, #80 - 256]
-	stnp	x14, x15, [x0, #96 - 256]
-	stnp	x16, x17, [x0, #112 - 256]
+	.macro ldp1 reg1, reg2, ptr, val
+	ldp \reg1, \reg2, [\ptr, \val]
+	.endm
 
+SYM_FUNC_START(__pi_copy_page)
+#include "copy_page_template.S"
 	ret
 SYM_FUNC_END(__pi_copy_page)
 SYM_FUNC_ALIAS(copy_page, __pi_copy_page)
diff --git a/arch/arm64/lib/copy_page_template.S b/arch/arm64/lib/copy_page_template.S
new file mode 100644
index 000000000000..b3ddec2c7a27
--- /dev/null
+++ b/arch/arm64/lib/copy_page_template.S
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2012 ARM Ltd.
+ */
+
+/*
+ * Copy a page from src to dest (both are page aligned)
+ *
+ * Parameters:
+ *	x0 - dest
+ *	x1 - src
+ */
+	ldp1	x2, x3, x1, #0
+	ldp1	x4, x5, x1, #16
+	ldp1	x6, x7, x1, #32
+	ldp1	x8, x9, x1, #48
+	ldp1	x10, x11, x1, #64
+	ldp1	x12, x13, x1, #80
+	ldp1	x14, x15, x1, #96
+	ldp1	x16, x17, x1, #112
+
+	add	x0, x0, #256
+	add	x1, x1, #128
+1:
+	tst	x0, #(PAGE_SIZE - 1)
+
+	stnp	x2, x3, [x0, #-256]
+	ldp1	x2, x3, x1, #0
+	stnp	x4, x5, [x0, #16 - 256]
+	ldp1	x4, x5, x1, #16
+	stnp	x6, x7, [x0, #32 - 256]
+	ldp1	x6, x7, x1, #32
+	stnp	x8, x9, [x0, #48 - 256]
+	ldp1	x8, x9, x1, #48
+	stnp	x10, x11, [x0, #64 - 256]
+	ldp1	x10, x11, x1, #64
+	stnp	x12, x13, [x0, #80 - 256]
+	ldp1	x12, x13, x1, #80
+	stnp	x14, x15, [x0, #96 - 256]
+	ldp1	x14, x15, x1, #96
+	stnp	x16, x17, [x0, #112 - 256]
+	ldp1	x16, x17, x1, #112
+
+	add	x0, x0, #128
+	add	x1, x1, #128
+
+	b.ne	1b
+
+	stnp	x2, x3, [x0, #-256]
+	stnp	x4, x5, [x0, #16 - 256]
+	stnp	x6, x7, [x0, #32 - 256]
+	stnp	x8, x9, [x0, #48 - 256]
+	stnp	x10, x11, [x0, #64 - 256]
+	stnp	x12, x13, [x0, #80 - 256]
+	stnp	x14, x15, [x0, #96 - 256]
+	stnp	x16, x17, [x0, #112 - 256]
diff --git a/arch/arm64/lib/mte.S b/arch/arm64/lib/mte.S
index 5018ac03b6bf..50ef24318281 100644
--- a/arch/arm64/lib/mte.S
+++ b/arch/arm64/lib/mte.S
@@ -80,6 +80,35 @@ SYM_FUNC_START(mte_copy_page_tags)
 	ret
 SYM_FUNC_END(mte_copy_page_tags)
 
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Copy the tags from the source page to the destination page, machine check safe
+ *   x0 - address of the destination page
+ *   x1 - address of the source page
+ * Returns:
+ *   x0 - 0 if the copy succeeds, or
+ *        -EFAULT if anything goes wrong while copying.
+ */
+SYM_FUNC_START(mte_copy_mc_page_tags)
+	mov	x2, x0
+	mov	x3, x1
+	multitag_transfer_size x5, x6
+1:
+KERNEL_ME_SAFE(2f, ldgm	x4, [x3])
+	stgm	x4, [x2]
+	add	x2, x2, x5
+	add	x3, x3, x5
+	tst	x2, #(PAGE_SIZE - 1)
+	b.ne	1b
+
+	mov x0, #0
+	ret
+
+2:	mov x0, #-EFAULT
+	ret
+SYM_FUNC_END(mte_copy_mc_page_tags)
+#endif
+
 /*
  * Read tags from a user buffer (one tag per byte) and set the corresponding
  * tags at the given kernel address. Used by PTRACE_POKEMTETAGS.
diff --git a/arch/arm64/mm/copypage.c b/arch/arm64/mm/copypage.c
index a7bb20055ce0..ff0d9ceea2a4 100644
--- a/arch/arm64/mm/copypage.c
+++ b/arch/arm64/mm/copypage.c
@@ -40,3 +40,48 @@ void copy_user_highpage(struct page *to, struct page *from,
 	flush_dcache_page(to);
 }
 EXPORT_SYMBOL_GPL(copy_user_highpage);
+
+#ifdef CONFIG_ARCH_HAS_COPY_MC
+/*
+ * Return -EFAULT if anything goes wrong while copying the page or the MTE tags.
+ */
+int copy_mc_highpage(struct page *to, struct page *from)
+{
+	void *kto = page_address(to);
+	void *kfrom = page_address(from);
+	int ret;
+
+	ret = copy_mc_page(kto, kfrom);
+	if (ret)
+		return -EFAULT;
+
+	if (kasan_hw_tags_enabled())
+		page_kasan_tag_reset(to);
+
+	if (system_supports_mte() && page_mte_tagged(from)) {
+		/* It's a new page, shouldn't have been tagged yet */
+		WARN_ON_ONCE(!try_page_mte_tagging(to));
+		ret = mte_copy_mc_page_tags(kto, kfrom);
+		if (ret)
+			return -EFAULT;
+
+		set_page_mte_tagged(to);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL(copy_mc_highpage);
+
+int copy_mc_user_highpage(struct page *to, struct page *from,
+			unsigned long vaddr, struct vm_area_struct *vma)
+{
+	int ret;
+
+	ret = copy_mc_highpage(to, from);
+	if (!ret)
+		flush_dcache_page(to);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(copy_mc_user_highpage);
+#endif
diff --git a/include/linux/highmem.h b/include/linux/highmem.h
index c5ca1a1fc4f5..a42470ca42f2 100644
--- a/include/linux/highmem.h
+++ b/include/linux/highmem.h
@@ -332,6 +332,7 @@ static inline void copy_highpage(struct page *to, struct page *from)
 #endif
 
 #ifdef copy_mc_to_kernel
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 /*
  * If architecture supports machine check exception handling, define the
  * #MC versions of copy_user_highpage and copy_highpage. They copy a memory
@@ -354,7 +355,9 @@ static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	unsigned long ret;
@@ -370,20 +373,25 @@ static inline int copy_mc_highpage(struct page *to, struct page *from)
 
 	return ret ? -EFAULT : 0;
 }
+#endif
 #else
+#ifndef __HAVE_ARCH_COPY_MC_USER_HIGHPAGE
 static inline int copy_mc_user_highpage(struct page *to, struct page *from,
 					unsigned long vaddr, struct vm_area_struct *vma)
 {
 	copy_user_highpage(to, from, vaddr, vma);
 	return 0;
 }
+#endif
 
+#ifndef __HAVE_ARCH_COPY_MC_HIGHPAGE
 static inline int copy_mc_highpage(struct page *to, struct page *from)
 {
 	copy_highpage(to, from);
 	return 0;
 }
 #endif
+#endif
 
 static inline void memcpy_page(struct page *dst_page, size_t dst_off,
 			       struct page *src_page, size_t src_off,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH v11 5/5] arm64: send SIGBUS to user process for SEA exception
  2024-02-07 13:21 [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
                   ` (3 preceding siblings ...)
  2024-02-07 13:22 ` [PATCH v11 4/5] arm64: support copy_mc_[user]_highpage() Tong Tiangen
@ 2024-02-07 13:22 ` Tong Tiangen
  2024-02-18  7:05 ` [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
  2024-03-27  0:49 ` Tong Tiangen
  6 siblings, 0 replies; 8+ messages in thread
From: Tong Tiangen @ 2024-02-07 13:22 UTC (permalink / raw
  To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
	James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
	Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
  Cc: wangkefeng.wang, linux-kernel, linux-mm, Tong Tiangen, Guohanjun,
	linuxppc-dev, linux-arm-kernel

For an SEA exception, the kernel needs to take some action to recover from the
memory error, such as isolating the poisoned page and killing the failing
thread, which is done in memory_failure().

During testing, the failing thread could not be killed due to this issue[1].
Here, I temporarily work around the issue by sending a signal to user
processes (those with none of PF_KTHREAD|PF_IO_WORKER|PF_WQ_WORKER|PF_USER_WORKER
set) in do_sea(). After [1] is merged, this patch can be rolled back;
otherwise the SIGBUS would be sent repeatedly.

[1]https://lore.kernel.org/lkml/20240204080144.7977-1-xueshuai@linux.alibaba.com/

Signed-off-by: Tong Tiangen <tongtiangen@huawei.com>
---
 arch/arm64/mm/fault.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 78f9d5ce83bb..a27bb2de1a7c 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -824,9 +824,6 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 	const struct fault_info *inf;
 	unsigned long siaddr;
 
-	if (do_apei_claim_sea(regs))
-		return 0;
-
 	inf = esr_to_fault_info(esr);
 	if (esr & ESR_ELx_FnV) {
 		siaddr = 0;
@@ -838,6 +835,19 @@ static int do_sea(unsigned long far, unsigned long esr, struct pt_regs *regs)
 		 */
 		siaddr  = untagged_addr(far);
 	}
+
+	if (do_apei_claim_sea(regs)) {
+		if (!(current->flags & (PF_KTHREAD |
+					PF_USER_WORKER |
+					PF_WQ_WORKER |
+					PF_IO_WORKER))) {
+			set_thread_esr(0, esr);
+			arm64_force_sig_fault(inf->sig, inf->code, siaddr,
+				"Uncorrected memory error on access to poison memory\n");
+		}
+		return 0;
+	}
+
 	arm64_notify_die(inf->name, regs, inf->sig, inf->code, siaddr, esr);
 
 	return 0;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support
  2024-02-07 13:21 [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
                   ` (4 preceding siblings ...)
  2024-02-07 13:22 ` [PATCH v11 5/5] arm64: send SIGBUS to user process for SEA exception Tong Tiangen
@ 2024-02-18  7:05 ` Tong Tiangen
  2024-03-27  0:49 ` Tong Tiangen
  6 siblings, 0 replies; 8+ messages in thread
From: Tong Tiangen @ 2024-02-18  7:05 UTC (permalink / raw
  To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
	James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
	Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
  Cc: wangkefeng.wang, linux-kernel, linux-mm, Guohanjun, linuxppc-dev,
	linux-arm-kernel

Hi Mark:

Kindly ping :)

Thanks.
Tong.

On 2024/2/7 21:21, Tong Tiangen wrote:
> With the increase of memory capacity and density, the probability of memory
> error also increases. The increasing size and density of server RAM in data
> centers and clouds have shown increased uncorrectable memory errors.
> 
> Currently, more and more scenarios can tolerate memory errors, such as
> CoW[1,2], KSM copy[3], coredump copy[4], khugepaged[5,6], uaccess copy[7],
> etc.
> 
> This patchset introduces a new processing framework on ARM64, which enables
> ARM64 to support error recovery in the above scenarios, and more scenarios
> can be expanded based on this in the future.
> 
> On arm64, memory errors are handled in do_sea(), which distinguishes two cases:
>   1. If user space consumed the memory error, the solution is to kill
>      the user process and isolate the error page.
>   2. If the kernel consumed the memory error, the solution is to
>      panic.
> 
> For case 2, an undifferentiated panic may not be the optimal choice, as it
> can often be handled better. In some scenarios we can avoid the panic; for
> example, if a uaccess fails due to a memory error, only the user process is
> affected, so killing the user process and isolating the user page with the
> hardware memory error is a better choice.
> 
> [1] commit d302c2398ba2 ("mm, hwpoison: when copy-on-write hits poison, take page offline")
> [2] commit 1cb9dc4b475c ("mm: hwpoison: support recovery from HugePage copy-on-write faults")
> [3] commit 6b970599e807 ("mm: hwpoison: support recovery from ksm_might_need_to_copy()")
> [4] commit 245f09226893 ("mm: hwpoison: coredump: support recovery from dump_user_range()")
> [5] commit 98c76c9f1ef7 ("mm/khugepaged: recover from poisoned anonymous memory")
> [6] commit 12904d953364 ("mm/khugepaged: recover from poisoned file-backed memory")
> [7] commit 278b917f8cb9 ("x86/mce: Add _ASM_EXTABLE_CPY for copy user access")
> 
> ------------------
> Test result:
> 
> 1. copy_page(), copy_mc_page() basic function tests pass, and the disassembled
>     contents remain the same before and after the refactor.
> 
> 2. copy_to/from_user() accessing a kernel NULL pointer raises a translation
>     fault and dumps an error message, then calls die(); test passes.
> 
> 3. Test following scenarios: copy_from_user(), get_user(), COW.
> 
>     Before patched: trigger a hardware memory error then panic.
>     After  patched: trigger a hardware memory error without panic.
> 
>     Testing steps:
>     step 1: start a user process.
>     step 2: poison (einj) the user process's page.
>     step 3: the user process accesses the poisoned page in kernel mode, then triggers an SEA.
>     step 4: the kernel does not panic; only the user process is killed and the poisoned
>            page is isolated. (Before the patch, the kernel would panic in do_sea().)
> 
> ------------------
> 
> Since V10:
>   According to Mark's suggestion:
>   1. Merge V10's patch2 and patch3 to V11's patch2.
>   2. Patch2(V11): use new fixup_type for ld* in copy_to_user(), fix fatal
>      issues (NULL kernel pointer access) being fixed up incorrectly.
>   3. Patch2(V11): refactoring the logic of do_sea().
>   4. Patch4(V11): Remove duplicate assembly logic and remove do_mte().
> 
>   Besides:
>   1. Patch2(V11): remove st* insns' fixup; st* generally does not trigger memory errors.
>   2. Split a part of the logic of patch2(V11) to patch5(V11), for detail,
>      see patch5(V11)'s commit msg.
>   3. Remove patch6(v10) “arm64: introduce copy_mc_to_kernel() implementation”.
>      During modification, some problems that cannot be solved in a short
>      period are found. The patch will be released after the problems are
>      solved.
>   4. Add test result in this patch.
>   5. Modify patchset title, do not use machine check and remove "-next".
> 
> Since V9:
>   1. Rebase to latest kernel version 6.8-rc2.
>   2. Add patch 6/6 to support copy_mc_to_kernel().
> 
> Since V8:
>   1. Rebase to latest kernel version and fix typos in some of the patches.
>   2. According to the suggestion of Catalin, I attempted to modify the
>      return value of function copy_mc_[user]_highpage() to bytes not copied.
>      During the modification process, I found that it would be more
>      reasonable to return -EFAULT when copy error occurs (referring to the
>      newly added patch 4).
> 
>      For ARM64, the implementation of copy_mc_[user]_highpage() needs to
>      consider MTE. Considering the scenario where data copying is successful
>      but the MTE tag copying fails, it is also not reasonable to return
>      bytes not copied.
>   3. Considering the recent addition of machine check safe support for
>      multiple scenarios, modify commit message for patch 5 (patch 4 for V8).
> 
> Since V7:
>   Currently, there are patches supporting recovery from poison
>   consumption for the CoW scenario[1]. Therefore, supporting the CoW
>   scenario on arm64 only requires modifying the relevant code under
>   arch/.
>   [1]https://lore.kernel.org/lkml/20221031201029.102123-1-tony.luck@intel.com/
> 
> Since V6:
>   Resend patches that are not merged into the mainline in V6.
> 
> Since V5:
>   1. Add patch2/3 to add uaccess assembly helpers.
>   2. Optimize the implementation logic of arm64_do_kernel_sea() in patch8.
>   3. Remove kernel access fixup in patch9.
>   All suggestion are from Mark.
> 
> Since V4:
>   1. According to Michael's suggestion, add patch5.
>   2. According to Mark's suggestion, do some restructuring to arm64
>   extable, then a new adaptation of machine check safe support is made based
>   on this.
>   3. According to Mark's suggestion, support machine check safe in do_mte() in
>   the CoW scenario.
>   4. In V4, two patches have been merged into -next, so V5 not send these
>   two patches.
> 
> Since V3:
>   1. According to Robin's suggestion, directly modify user_ldst and
>   user_ldp in asm-uaccess.h and modify mte.S.
>   2. Add new macro USER_MC in asm-uaccess.h, used in copy_from_user.S
>   and copy_to_user.S.
>   3. According to Robin's suggestion, use a macro in copy_page_mc.S to
>   simplify the code.
>   4. According to KeFeng's suggestion, modify powerpc code in patch1.
>   5. According to KeFeng's suggestion, modify mm/extable.c and some code
>   optimization.
> 
> Since V2:
>   1. According to Mark's suggestion, all uaccess failures due to
>      memory errors can be recovered.
>   2. Scenario pagecache reading is also supported as part of uaccess
>      (copy_to_user()) and duplication code problem is also solved.
>      Thanks for Robin's suggestion.
>   3. According to Mark's suggestion, update commit message of patch 2/5.
>   4. According to Borislav's suggestion, update commit message of patch 1/5.
> 
> Since V1:
>   1. Consistent with PPC/x86, use CONFIG_ARCH_HAS_COPY_MC instead of
>      ARM64_UCE_KERNEL_RECOVERY.
>   2. Add two new scenarios: CoW and pagecache reading.
>   3. Fix two small bugs (the first two patches).
> 
> V1 in here:
> https://lore.kernel.org/lkml/20220323033705.3966643-1-tongtiangen@huawei.com/
> 
> Tong Tiangen (5):
>    uaccess: add generic fallback version of copy_mc_to_user()
>    arm64: add support for ARCH_HAS_COPY_MC
>    mm/hwpoison: return -EFAULT when copy fail in
>      copy_mc_[user]_highpage()
>    arm64: support copy_mc_[user]_highpage()
>    arm64: send SIGBUS to user process for SEA exception
> 
>   arch/arm64/Kconfig                   |  1 +
>   arch/arm64/include/asm/asm-extable.h | 31 ++++++++++++---
>   arch/arm64/include/asm/asm-uaccess.h |  4 ++
>   arch/arm64/include/asm/extable.h     |  1 +
>   arch/arm64/include/asm/mte.h         |  9 +++++
>   arch/arm64/include/asm/page.h        | 10 +++++
>   arch/arm64/lib/Makefile              |  2 +
>   arch/arm64/lib/copy_mc_page.S        | 37 ++++++++++++++++++
>   arch/arm64/lib/copy_page.S           | 50 +++----------------------
>   arch/arm64/lib/copy_page_template.S  | 56 ++++++++++++++++++++++++++++
>   arch/arm64/lib/copy_to_user.S        | 10 ++---
>   arch/arm64/lib/mte.S                 | 29 ++++++++++++++
>   arch/arm64/mm/copypage.c             | 45 ++++++++++++++++++++++
>   arch/arm64/mm/extable.c              | 19 ++++++++++
>   arch/arm64/mm/fault.c                | 39 ++++++++++++++-----
>   arch/powerpc/include/asm/uaccess.h   |  1 +
>   arch/x86/include/asm/uaccess.h       |  1 +
>   include/linux/highmem.h              | 16 ++++++--
>   include/linux/uaccess.h              |  9 +++++
>   mm/khugepaged.c                      |  4 +-
>   20 files changed, 304 insertions(+), 70 deletions(-)
>   create mode 100644 arch/arm64/lib/copy_mc_page.S
>   create mode 100644 arch/arm64/lib/copy_page_template.S
> 

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support
  2024-02-07 13:21 [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
                   ` (5 preceding siblings ...)
  2024-02-18  7:05 ` [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
@ 2024-03-27  0:49 ` Tong Tiangen
  6 siblings, 0 replies; 8+ messages in thread
From: Tong Tiangen @ 2024-03-27  0:49 UTC (permalink / raw
  To: Mark Rutland, Catalin Marinas, Will Deacon, Andrew Morton,
	James Morse, Robin Murphy, Andrey Konovalov, Dmitry Vyukov,
	Vincenzo Frascino, Michael Ellerman, Nicholas Piggin,
	Andrey Ryabinin, Alexander Potapenko, Christophe Leroy,
	Aneesh Kumar K.V, Naveen N. Rao, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin
  Cc: wangkefeng.wang, linux-kernel, linux-mm, Guohanjun, linuxppc-dev,
	linux-arm-kernel

Hi Mark:

	Kindly ping...

Thanks,
Tong.


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2024-03-27  1:09 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-02-07 13:21 [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
2024-02-07 13:22 ` [PATCH v11 1/5] uaccess: add generic fallback version of copy_mc_to_user() Tong Tiangen
2024-02-07 13:22 ` [PATCH v11 2/5] arm64: add support for ARCH_HAS_COPY_MC Tong Tiangen
2024-02-07 13:22 ` [PATCH v11 3/5] mm/hwpoison: return -EFAULT when copy fail in copy_mc_[user]_highpage() Tong Tiangen
2024-02-07 13:22 ` [PATCH v11 4/5] arm64: support copy_mc_[user]_highpage() Tong Tiangen
2024-02-07 13:22 ` [PATCH v11 5/5] arm64: send SIGBUS to user process for SEA exception Tong Tiangen
2024-02-18  7:05 ` [PATCH v11 0/5]arm64: add ARCH_HAS_COPY_MC support Tong Tiangen
2024-03-27  0:49 ` Tong Tiangen
