* [PATCH 00/11] transfer page to folio in KSM
@ 2024-03-20  7:40 alexs
  2024-03-20  7:40 ` [PATCH 01/11] mm/ksm: Convert get_ksm_page to return a folio alexs
                   ` (10 more replies)
  0 siblings, 11 replies; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

This is the first part of the page-to-folio conversion in KSM. Since KSM
only stores single pages, we can safely convert the stable tree pages to
folios.
This patchset reduces ksm.o from 2547952 to 2487136 bytes on the latest
akpm/mm-stable branch with CONFIG_DEBUG_VM enabled.
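
(Background, not part of the series itself: a folio is never a tail page,
so folio accessors can skip the compound_head() lookup that the page-based
helpers must perform. A simplified sketch of the difference, not the exact
kernel definitions:

	/* page helper: must first resolve a possible tail page */
	static inline void put_page(struct page *page)
	{
		folio_put(page_folio(page));	/* page_folio() does compound_head() */
	}

	/* folio helper: a folio is never a tail page, no lookup needed */
	static inline void folio_put(struct folio *folio)
	{
		if (folio_put_testzero(folio))
			__folio_put(folio);
	}

Eliding those lookups at each call site is where the text size reduction
above comes from.)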


Alex Shi (tencent) (11):
  mm/ksm: Convert get_ksm_page to return a folio
  mm/ksm: use a folio in remove_rmap_item_from_tree
  mm/ksm: use a folio in remove_stable_node
  mm/ksm: use folio in stable_node_dup
  mm/ksm: use a folio in scan_get_next_rmap_item func
  mm/ksm: use folio in write_protect_page
  mm/ksm: Convert chain series funcs to use folio
  mm/ksm: Convert stable_tree_insert to use folio
  mm/ksm: Convert stable_tree_search to use folio
  mm/ksm: rename get_ksm_page to get_ksm_folio and return type
  mm/ksm: return folio for chain series funcs

 mm/ksm.c | 244 +++++++++++++++++++++++++++----------------------------
 1 file changed, 122 insertions(+), 122 deletions(-)

-- 
2.43.0



* [PATCH 01/11] mm/ksm: Convert get_ksm_page to return a folio
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
@ 2024-03-20  7:40 ` alexs
  2024-03-20 12:54   ` Matthew Wilcox
  2024-03-20  7:40 ` [PATCH 02/11] mm/ksm: use a folio in remove_rmap_item_from_tree alexs
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

KSM only works with single pages, so use folios instead of pages to save
a couple of compound_head() calls.
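
For reference, pfn_folio() used in this patch is simply:

	static inline struct folio *pfn_folio(unsigned long pfn)
	{
		return page_folio(pfn_to_page(pfn));
	}

so for the order-0 pages KSM tracks, the folio and the page refer to the
same memory.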

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 30 +++++++++++++++---------------
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 8c001819cf10..fda291b054c2 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -915,10 +915,10 @@ enum get_ksm_page_flags {
  * a page to put something that might look like our key in page->mapping.
  * is on its way to being freed; but it is an anomaly to bear in mind.
  */
-static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
+static void *get_ksm_page(struct ksm_stable_node *stable_node,
 				 enum get_ksm_page_flags flags)
 {
-	struct page *page;
+	struct folio *folio;
 	void *expected_mapping;
 	unsigned long kpfn;
 
@@ -926,8 +926,8 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
 					PAGE_MAPPING_KSM);
 again:
 	kpfn = READ_ONCE(stable_node->kpfn); /* Address dependency. */
-	page = pfn_to_page(kpfn);
-	if (READ_ONCE(page->mapping) != expected_mapping)
+	folio = pfn_folio(kpfn);
+	if (READ_ONCE(folio->mapping) != expected_mapping)
 		goto stale;
 
 	/*
@@ -940,7 +940,7 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
 	 * in folio_migrate_mapping(), it might still be our page,
 	 * in which case it's essential to keep the node.
 	 */
-	while (!get_page_unless_zero(page)) {
+	while (!folio_try_get(folio)) {
 		/*
 		 * Another check for page->mapping != expected_mapping would
 		 * work here too.  We have chosen the !PageSwapCache test to
@@ -949,32 +949,32 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
 		 * in the ref_freeze section of __remove_mapping(); but Anon
 		 * page->mapping reset to NULL later, in free_pages_prepare().
 		 */
-		if (!PageSwapCache(page))
+		if (!folio_test_swapcache(folio))
 			goto stale;
 		cpu_relax();
 	}
 
-	if (READ_ONCE(page->mapping) != expected_mapping) {
-		put_page(page);
+	if (READ_ONCE(folio->mapping) != expected_mapping) {
+		folio_put(folio);
 		goto stale;
 	}
 
 	if (flags == GET_KSM_PAGE_TRYLOCK) {
-		if (!trylock_page(page)) {
-			put_page(page);
+		if (!folio_trylock(folio)) {
+			folio_put(folio);
 			return ERR_PTR(-EBUSY);
 		}
 	} else if (flags == GET_KSM_PAGE_LOCK)
-		lock_page(page);
+		folio_lock(folio);
 
 	if (flags != GET_KSM_PAGE_NOLOCK) {
-		if (READ_ONCE(page->mapping) != expected_mapping) {
-			unlock_page(page);
-			put_page(page);
+		if (READ_ONCE(folio->mapping) != expected_mapping) {
+			folio_unlock(folio);
+			folio_put(folio);
 			goto stale;
 		}
 	}
-	return page;
+	return folio;
 
 stale:
 	/*
-- 
2.43.0



* [PATCH 02/11] mm/ksm: use a folio in remove_rmap_item_from_tree
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
  2024-03-20  7:40 ` [PATCH 01/11] mm/ksm: Convert get_ksm_page to return a folio alexs
@ 2024-03-20  7:40 ` alexs
  2024-03-20  7:40 ` [PATCH 03/11] mm/ksm: use a folio in remove_stable_node alexs
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

Save 2 compound_head calls.

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index fda291b054c2..922e33500875 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -998,16 +998,16 @@ static void remove_rmap_item_from_tree(struct ksm_rmap_item *rmap_item)
 {
 	if (rmap_item->address & STABLE_FLAG) {
 		struct ksm_stable_node *stable_node;
-		struct page *page;
+		struct folio *folio;
 
 		stable_node = rmap_item->head;
-		page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
-		if (!page)
+		folio = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
+		if (!folio)
 			goto out;
 
 		hlist_del(&rmap_item->hlist);
-		unlock_page(page);
-		put_page(page);
+		folio_unlock(folio);
+		folio_put(folio);
 
 		if (!hlist_empty(&stable_node->hlist))
 			ksm_pages_sharing--;
-- 
2.43.0



* [PATCH 03/11] mm/ksm: use a folio in remove_stable_node
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
  2024-03-20  7:40 ` [PATCH 01/11] mm/ksm: Convert get_ksm_page to return a folio alexs
  2024-03-20  7:40 ` [PATCH 02/11] mm/ksm: use a folio in remove_rmap_item_from_tree alexs
@ 2024-03-20  7:40 ` alexs
  2024-03-20 13:00   ` Matthew Wilcox
  2024-03-20  7:40 ` [PATCH 04/11] mm/ksm: use folio in stable_node_dup alexs
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

Pages in the stable tree are all single normal pages, so using folios
saves three calls to compound_head().

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 922e33500875..9ea9b5ac44b4 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1107,11 +1107,11 @@ static inline void set_page_stable_node(struct page *page,
  */
 static int remove_stable_node(struct ksm_stable_node *stable_node)
 {
-	struct page *page;
+	struct folio *folio;
 	int err;
 
-	page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
-	if (!page) {
+	folio = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
+	if (!folio) {
 		/*
 		 * get_ksm_page did remove_node_from_stable_tree itself.
 		 */
@@ -1124,22 +1124,22 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
 	 * merge_across_nodes/max_page_sharing be switched.
 	 */
 	err = -EBUSY;
-	if (!page_mapped(page)) {
+	if (!folio_mapped(folio)) {
 		/*
 		 * The stable node did not yet appear stale to get_ksm_page(),
-		 * since that allows for an unmapped ksm page to be recognized
+		 * since that allows for an unmapped ksm folio to be recognized
 		 * right up until it is freed; but the node is safe to remove.
-		 * This page might be in an LRU cache waiting to be freed,
+		 * This folio might be in an LRU cache waiting to be freed,
 		 * or it might be PageSwapCache (perhaps under writeback),
 		 * or it might have been removed from swapcache a moment ago.
 		 */
-		set_page_stable_node(page, NULL);
+		set_page_stable_node(&folio->page, NULL);
 		remove_node_from_stable_tree(stable_node);
 		err = 0;
 	}
 
-	unlock_page(page);
-	put_page(page);
+	folio_unlock(folio);
+	folio_put(folio);
 	return err;
 }
 
-- 
2.43.0



* [PATCH 04/11] mm/ksm: use folio in stable_node_dup
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
                   ` (2 preceding siblings ...)
  2024-03-20  7:40 ` [PATCH 03/11] mm/ksm: use a folio in remove_stable_node alexs
@ 2024-03-20  7:40 ` alexs
  2024-03-20  7:40 ` [PATCH 05/11] mm/ksm: use a folio in scan_get_next_rmap_item func alexs
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

Save 2 compound_head calls.

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 9ea9b5ac44b4..f57817ef75bf 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1617,14 +1617,14 @@ bool is_page_sharing_candidate(struct ksm_stable_node *stable_node)
 	return __is_page_sharing_candidate(stable_node, 0);
 }
 
-static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
-				    struct ksm_stable_node **_stable_node,
-				    struct rb_root *root,
-				    bool prune_stale_stable_nodes)
+static void *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
+			     struct ksm_stable_node **_stable_node,
+			     struct rb_root *root,
+			     bool prune_stale_stable_nodes)
 {
 	struct ksm_stable_node *dup, *found = NULL, *stable_node = *_stable_node;
 	struct hlist_node *hlist_safe;
-	struct page *_tree_page, *tree_page = NULL;
+	struct folio *folio, *tree_folio = NULL;
 	int nr = 0;
 	int found_rmap_hlist_len;
 
@@ -1649,18 +1649,18 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 		 * stable_node parameter itself will be freed from
 		 * under us if it returns NULL.
 		 */
-		_tree_page = get_ksm_page(dup, GET_KSM_PAGE_NOLOCK);
-		if (!_tree_page)
+		folio = get_ksm_page(dup, GET_KSM_PAGE_NOLOCK);
+		if (!folio)
 			continue;
 		nr += 1;
 		if (is_page_sharing_candidate(dup)) {
 			if (!found ||
 			    dup->rmap_hlist_len > found_rmap_hlist_len) {
 				if (found)
-					put_page(tree_page);
+					folio_put(tree_folio);
 				found = dup;
 				found_rmap_hlist_len = found->rmap_hlist_len;
-				tree_page = _tree_page;
+				tree_folio = folio;
 
 				/* skip put_page for found dup */
 				if (!prune_stale_stable_nodes)
@@ -1668,7 +1668,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 				continue;
 			}
 		}
-		put_page(_tree_page);
+		folio_put(folio);
 	}
 
 	if (found) {
@@ -1733,7 +1733,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 	}
 
 	*_stable_node_dup = found;
-	return tree_page;
+	return tree_folio;
 }
 
 static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stable_node,
-- 
2.43.0



* [PATCH 05/11] mm/ksm: use a folio in scan_get_next_rmap_item func
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
                   ` (3 preceding siblings ...)
  2024-03-20  7:40 ` [PATCH 04/11] mm/ksm: use folio in stable_node_dup alexs
@ 2024-03-20  7:40 ` alexs
  2024-03-20  7:40 ` [PATCH 06/11] mm/ksm: use folio in write_protect_page alexs
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

Save a compound_head() call.

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index f57817ef75bf..165a3e4162bf 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2597,14 +2597,14 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 		 */
 		if (!ksm_merge_across_nodes) {
 			struct ksm_stable_node *stable_node, *next;
-			struct page *page;
+			struct folio *folio;
 
 			list_for_each_entry_safe(stable_node, next,
 						 &migrate_nodes, list) {
-				page = get_ksm_page(stable_node,
+				folio = get_ksm_page(stable_node,
 						    GET_KSM_PAGE_NOLOCK);
-				if (page)
-					put_page(page);
+				if (folio)
+					folio_put(folio);
 				cond_resched();
 			}
 		}
-- 
2.43.0



* [PATCH 06/11] mm/ksm: use folio in write_protect_page
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
                   ` (4 preceding siblings ...)
  2024-03-20  7:40 ` [PATCH 05/11] mm/ksm: use a folio in scan_get_next_rmap_item func alexs
@ 2024-03-20  7:40 ` alexs
  2024-03-20 14:57   ` Matthew Wilcox
  2024-03-20  7:40 ` [PATCH 07/11] mm/ksm: Convert chain series funcs to use folio alexs
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

Compound pages are checked and skipped before write_protect_page() is
called, so use a folio to save a few compound_head() checks and drop the
now-redundant compound check inside the function.
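
(The caller-side check referred to above is, roughly, this code in
try_to_merge_one_page():

	if (PageTransCompound(page)) {
		if (split_huge_page(page))
			goto out_unlock;
	}

so by the time write_protect_page() runs, the page is known not to be
compound.)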

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 165a3e4162bf..ad3a0294a2ec 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1275,23 +1275,21 @@ static u32 calc_checksum(struct page *page)
 	return checksum;
 }
 
-static int write_protect_page(struct vm_area_struct *vma, struct page *page,
+static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
 			      pte_t *orig_pte)
 {
 	struct mm_struct *mm = vma->vm_mm;
-	DEFINE_PAGE_VMA_WALK(pvmw, page, vma, 0, 0);
+	DEFINE_PAGE_VMA_WALK(pvmw, &folio->page, vma, 0, 0);
 	int swapped;
 	int err = -EFAULT;
 	struct mmu_notifier_range range;
 	bool anon_exclusive;
 	pte_t entry;
 
-	pvmw.address = page_address_in_vma(page, vma);
+	pvmw.address = page_address_in_vma(&folio->page, vma);
 	if (pvmw.address == -EFAULT)
 		goto out;
 
-	BUG_ON(PageTransCompound(page));
-
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, pvmw.address,
 				pvmw.address + PAGE_SIZE);
 	mmu_notifier_invalidate_range_start(&range);
@@ -1301,12 +1299,12 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 	if (WARN_ONCE(!pvmw.pte, "Unexpected PMD mapping?"))
 		goto out_unlock;
 
-	anon_exclusive = PageAnonExclusive(page);
+	anon_exclusive = PageAnonExclusive(&folio->page);
 	entry = ptep_get(pvmw.pte);
 	if (pte_write(entry) || pte_dirty(entry) ||
 	    anon_exclusive || mm_tlb_flush_pending(mm)) {
-		swapped = PageSwapCache(page);
-		flush_cache_page(vma, pvmw.address, page_to_pfn(page));
+		swapped = folio_test_swapcache(folio);
+		flush_cache_page(vma, pvmw.address, folio_pfn(folio));
 		/*
 		 * Ok this is tricky, when get_user_pages_fast() run it doesn't
 		 * take any lock, therefore the check that we are going to make
@@ -1326,20 +1324,20 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 		 * Check that no O_DIRECT or similar I/O is in progress on the
 		 * page
 		 */
-		if (page_mapcount(page) + 1 + swapped != page_count(page)) {
+		if (folio_mapcount(folio) + 1 + swapped != folio_ref_count(folio)) {
 			set_pte_at(mm, pvmw.address, pvmw.pte, entry);
 			goto out_unlock;
 		}
 
 		/* See folio_try_share_anon_rmap_pte(): clear PTE first. */
 		if (anon_exclusive &&
-		    folio_try_share_anon_rmap_pte(page_folio(page), page)) {
+		    folio_try_share_anon_rmap_pte(folio, &folio->page)) {
 			set_pte_at(mm, pvmw.address, pvmw.pte, entry);
 			goto out_unlock;
 		}
 
 		if (pte_dirty(entry))
-			set_page_dirty(page);
+			folio_mark_dirty(folio);
 		entry = pte_mkclean(entry);
 
 		if (pte_write(entry))
@@ -1505,7 +1503,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
 	 * ptes are necessarily already write-protected.  But in either
 	 * case, we need to lock and check page_count is not raised.
 	 */
-	if (write_protect_page(vma, page, &orig_pte) == 0) {
+	if (write_protect_page(vma, (struct folio *)page, &orig_pte) == 0) {
 		if (!kpage) {
 			/*
 			 * While we hold page lock, upgrade page from
-- 
2.43.0



* [PATCH 07/11] mm/ksm: Convert chain series funcs to use folio
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
                   ` (5 preceding siblings ...)
  2024-03-20  7:40 ` [PATCH 06/11] mm/ksm: use folio in write_protect_page alexs
@ 2024-03-20  7:40 ` alexs
  2024-03-20  7:40 ` [PATCH 08/11] mm/ksm: Convert stable_tree_insert " alexs
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

In the KSM stable tree all pages are single, so let's convert the chain
functions to use folios.

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index ad3a0294a2ec..648fa695424b 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1761,7 +1761,7 @@ static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stabl
  * function and will be overwritten in all cases, the caller doesn't
  * need to initialize it.
  */
-static struct page *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
+static void *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
 					struct ksm_stable_node **_stable_node,
 					struct rb_root *root,
 					bool prune_stale_stable_nodes)
@@ -1783,24 +1783,24 @@ static struct page *__stable_node_chain(struct ksm_stable_node **_stable_node_du
 			       prune_stale_stable_nodes);
 }
 
-static __always_inline struct page *chain_prune(struct ksm_stable_node **s_n_d,
+static __always_inline void *chain_prune(struct ksm_stable_node **s_n_d,
 						struct ksm_stable_node **s_n,
 						struct rb_root *root)
 {
 	return __stable_node_chain(s_n_d, s_n, root, true);
 }
 
-static __always_inline struct page *chain(struct ksm_stable_node **s_n_d,
+static __always_inline void *chain(struct ksm_stable_node **s_n_d,
 					  struct ksm_stable_node *s_n,
 					  struct rb_root *root)
 {
 	struct ksm_stable_node *old_stable_node = s_n;
-	struct page *tree_page;
+	struct folio *tree_folio;
 
-	tree_page = __stable_node_chain(s_n_d, &s_n, root, false);
+	tree_folio = __stable_node_chain(s_n_d, &s_n, root, false);
 	/* not pruning dups so s_n cannot have changed */
 	VM_BUG_ON(s_n != old_stable_node);
-	return tree_page;
+	return tree_folio;
 }
 
 /*
-- 
2.43.0



* [PATCH 08/11] mm/ksm: Convert stable_tree_insert to use folio
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
                   ` (6 preceding siblings ...)
  2024-03-20  7:40 ` [PATCH 07/11] mm/ksm: Convert chain series funcs to use folio alexs
@ 2024-03-20  7:40 ` alexs
  2024-03-20  7:40 ` [PATCH 09/11] mm/ksm: Convert stable_tree_search " alexs
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

The KSM stable tree only stores single pages, so convert the function and
its callers to use folios and save a few compound_head() calls.

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 648fa695424b..71d1a52f344d 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2062,7 +2062,7 @@ static struct page *stable_tree_search(struct page *page)
  * This function returns the stable tree node just allocated on success,
  * NULL otherwise.
  */
-static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
+static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
 {
 	int nid;
 	unsigned long kpfn;
@@ -2072,7 +2072,7 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 	struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
 	bool need_chain = false;
 
-	kpfn = page_to_pfn(kpage);
+	kpfn = folio_pfn(kfolio);
 	nid = get_kpfn_nid(kpfn);
 	root = root_stable_tree + nid;
 again:
@@ -2080,13 +2080,13 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 	new = &root->rb_node;
 
 	while (*new) {
-		struct page *tree_page;
+		struct folio *tree_folio;
 		int ret;
 
 		cond_resched();
 		stable_node = rb_entry(*new, struct ksm_stable_node, node);
 		stable_node_any = NULL;
-		tree_page = chain(&stable_node_dup, stable_node, root);
+		tree_folio = chain(&stable_node_dup, stable_node, root);
 		if (!stable_node_dup) {
 			/*
 			 * Either all stable_node dups were full in
@@ -2108,11 +2108,11 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 			 * write protected at all times. Any will work
 			 * fine to continue the walk.
 			 */
-			tree_page = get_ksm_page(stable_node_any,
+			tree_folio = get_ksm_page(stable_node_any,
 						 GET_KSM_PAGE_NOLOCK);
 		}
 		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
-		if (!tree_page) {
+		if (!tree_folio) {
 			/*
 			 * If we walked over a stale stable_node,
 			 * get_ksm_page() will call rb_erase() and it
@@ -2125,8 +2125,8 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 			goto again;
 		}
 
-		ret = memcmp_pages(kpage, tree_page);
-		put_page(tree_page);
+		ret = memcmp_pages(&kfolio->page, &tree_folio->page);
+		folio_put(tree_folio);
 
 		parent = *new;
 		if (ret < 0)
@@ -2145,7 +2145,7 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
 
 	INIT_HLIST_HEAD(&stable_node_dup->hlist);
 	stable_node_dup->kpfn = kpfn;
-	set_page_stable_node(kpage, stable_node_dup);
+	set_page_stable_node(&kfolio->page, stable_node_dup);
 	stable_node_dup->rmap_hlist_len = 0;
 	DO_NUMA(stable_node_dup->nid = nid);
 	if (!need_chain) {
@@ -2423,7 +2423,7 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
 			 * node in the stable tree and add both rmap_items.
 			 */
 			lock_page(kpage);
-			stable_node = stable_tree_insert(kpage);
+			stable_node = stable_tree_insert(page_folio(kpage));
 			if (stable_node) {
 				stable_tree_append(tree_rmap_item, stable_node,
 						   false);
-- 
2.43.0



* [PATCH 09/11] mm/ksm: Convert stable_tree_search to use folio
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
                   ` (7 preceding siblings ...)
  2024-03-20  7:40 ` [PATCH 08/11] mm/ksm: Convert stable_tree_insert " alexs
@ 2024-03-20  7:40 ` alexs
  2024-03-20 15:26   ` Matthew Wilcox
  2024-03-20  7:40 ` [PATCH 10/11] mm/ksm: rename get_ksm_page to get_ksm_folio and return type alexs
  2024-03-20  7:40 ` [PATCH 11/11] mm/ksm: return folio for chain series funcs alexs
  10 siblings, 1 reply; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

Although the function may be passed a tail page to check its contents,
only single pages exist in the KSM stable tree, so we can still use folios
in stable_tree_search() to save a few compound_head() calls.

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 60 +++++++++++++++++++++++++++++---------------------------
 1 file changed, 31 insertions(+), 29 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 71d1a52f344d..75401b3bae5c 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1812,7 +1812,7 @@ static __always_inline void *chain(struct ksm_stable_node **s_n_d,
  * This function returns the stable tree node of identical content if found,
  * NULL otherwise.
  */
-static struct page *stable_tree_search(struct page *page)
+static void *stable_tree_search(struct page *page)
 {
 	int nid;
 	struct rb_root *root;
@@ -1820,28 +1820,30 @@ static struct page *stable_tree_search(struct page *page)
 	struct rb_node *parent;
 	struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
 	struct ksm_stable_node *page_node;
+	struct folio *folio;
 
-	page_node = page_stable_node(page);
+	folio = (struct folio *)page;
+	page_node = folio_stable_node(folio);
 	if (page_node && page_node->head != &migrate_nodes) {
 		/* ksm page forked */
-		get_page(page);
-		return page;
+		folio_get(folio);
+		return folio;
 	}
 
-	nid = get_kpfn_nid(page_to_pfn(page));
+	nid = get_kpfn_nid(folio_pfn(folio));
 	root = root_stable_tree + nid;
 again:
 	new = &root->rb_node;
 	parent = NULL;
 
 	while (*new) {
-		struct page *tree_page;
+		struct folio *tree_folio;
 		int ret;
 
 		cond_resched();
 		stable_node = rb_entry(*new, struct ksm_stable_node, node);
 		stable_node_any = NULL;
-		tree_page = chain_prune(&stable_node_dup, &stable_node,	root);
+		tree_folio = chain_prune(&stable_node_dup, &stable_node, root);
 		/*
 		 * NOTE: stable_node may have been freed by
 		 * chain_prune() if the returned stable_node_dup is
@@ -1875,11 +1877,11 @@ static struct page *stable_tree_search(struct page *page)
 			 * write protected at all times. Any will work
 			 * fine to continue the walk.
 			 */
-			tree_page = get_ksm_page(stable_node_any,
-						 GET_KSM_PAGE_NOLOCK);
+			tree_folio = get_ksm_page(stable_node_any,
+						  GET_KSM_PAGE_NOLOCK);
 		}
 		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
-		if (!tree_page) {
+		if (!tree_folio) {
 			/*
 			 * If we walked over a stale stable_node,
 			 * get_ksm_page() will call rb_erase() and it
@@ -1892,8 +1894,8 @@ static struct page *stable_tree_search(struct page *page)
 			goto again;
 		}
 
-		ret = memcmp_pages(page, tree_page);
-		put_page(tree_page);
+		ret = memcmp_pages(&folio->page, &tree_folio->page);
+		folio_put(tree_folio);
 
 		parent = *new;
 		if (ret < 0)
@@ -1936,26 +1938,26 @@ static struct page *stable_tree_search(struct page *page)
 			 * It would be more elegant to return stable_node
 			 * than kpage, but that involves more changes.
 			 */
-			tree_page = get_ksm_page(stable_node_dup,
-						 GET_KSM_PAGE_TRYLOCK);
+			tree_folio = get_ksm_page(stable_node_dup,
+						  GET_KSM_PAGE_TRYLOCK);
 
-			if (PTR_ERR(tree_page) == -EBUSY)
+			if (PTR_ERR(tree_folio) == -EBUSY)
 				return ERR_PTR(-EBUSY);
 
-			if (unlikely(!tree_page))
+			if (unlikely(!tree_folio))
 				/*
 				 * The tree may have been rebalanced,
 				 * so re-evaluate parent and new.
 				 */
 				goto again;
-			unlock_page(tree_page);
+			folio_unlock(tree_folio);
 
 			if (get_kpfn_nid(stable_node_dup->kpfn) !=
 			    NUMA(stable_node_dup->nid)) {
-				put_page(tree_page);
+				folio_put(tree_folio);
 				goto replace;
 			}
-			return tree_page;
+			return tree_folio;
 		}
 	}
 
@@ -1968,8 +1970,8 @@ static struct page *stable_tree_search(struct page *page)
 	rb_insert_color(&page_node->node, root);
 out:
 	if (is_page_sharing_candidate(page_node)) {
-		get_page(page);
-		return page;
+		folio_get(folio);
+		return folio;
 	} else
 		return NULL;
 
@@ -1994,12 +1996,12 @@ static struct page *stable_tree_search(struct page *page)
 					&page_node->node,
 					root);
 			if (is_page_sharing_candidate(page_node))
-				get_page(page);
+				folio_get(folio);
 			else
-				page = NULL;
+				folio = NULL;
 		} else {
 			rb_erase(&stable_node_dup->node, root);
-			page = NULL;
+			folio = NULL;
 		}
 	} else {
 		VM_BUG_ON(!is_stable_node_chain(stable_node));
@@ -2010,16 +2012,16 @@ static struct page *stable_tree_search(struct page *page)
 			DO_NUMA(page_node->nid = nid);
 			stable_node_chain_add_dup(page_node, stable_node);
 			if (is_page_sharing_candidate(page_node))
-				get_page(page);
+				folio_get(folio);
 			else
-				page = NULL;
+				folio = NULL;
 		} else {
-			page = NULL;
+			folio = NULL;
 		}
 	}
 	stable_node_dup->head = &migrate_nodes;
 	list_add(&stable_node_dup->list, stable_node_dup->head);
-	return page;
+	return folio;
 
 chain_append:
 	/* stable_node_dup could be null if it reached the limit */
@@ -2109,7 +2111,7 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
 			 * fine to continue the walk.
 			 */
 			tree_folio = get_ksm_page(stable_node_any,
-						 GET_KSM_PAGE_NOLOCK);
+						  GET_KSM_PAGE_NOLOCK);
 		}
 		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
 		if (!tree_folio) {
-- 
2.43.0



* [PATCH 10/11] mm/ksm: rename get_ksm_page to get_ksm_folio and return type
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
                   ` (8 preceding siblings ...)
  2024-03-20  7:40 ` [PATCH 09/11] mm/ksm: Convert stable_tree_search " alexs
@ 2024-03-20  7:40 ` alexs
  2024-03-20  7:40 ` [PATCH 11/11] mm/ksm: return folio for chain series funcs alexs
  10 siblings, 0 replies; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

Now that all callers have been converted to folios, change the return
type to struct folio and rename the function to get_ksm_folio().

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 50 +++++++++++++++++++++++++-------------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 75401b3bae5c..806ad4d2693b 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -890,14 +890,14 @@ static void remove_node_from_stable_tree(struct ksm_stable_node *stable_node)
 	free_stable_node(stable_node);
 }
 
-enum get_ksm_page_flags {
+enum get_ksm_folio_flags {
 	GET_KSM_PAGE_NOLOCK,
 	GET_KSM_PAGE_LOCK,
 	GET_KSM_PAGE_TRYLOCK
 };
 
 /*
- * get_ksm_page: checks if the page indicated by the stable node
+ * get_ksm_folio: checks if the page indicated by the stable node
  * is still its ksm page, despite having held no reference to it.
  * In which case we can trust the content of the page, and it
  * returns the gotten page; but if the page has now been zapped,
@@ -915,8 +915,8 @@ enum get_ksm_page_flags {
  * a page to put something that might look like our key in page->mapping.
  * is on its way to being freed; but it is an anomaly to bear in mind.
  */
-static void *get_ksm_page(struct ksm_stable_node *stable_node,
-				 enum get_ksm_page_flags flags)
+static struct folio *get_ksm_folio(struct ksm_stable_node *stable_node,
+				   enum get_ksm_folio_flags flags)
 {
 	struct folio *folio;
 	void *expected_mapping;
@@ -1001,7 +1001,7 @@ static void remove_rmap_item_from_tree(struct ksm_rmap_item *rmap_item)
 		struct folio *folio;
 
 		stable_node = rmap_item->head;
-		folio = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
+		folio = get_ksm_folio(stable_node, GET_KSM_PAGE_LOCK);
 		if (!folio)
 			goto out;
 
@@ -1110,10 +1110,10 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
 	struct folio *folio;
 	int err;
 
-	folio = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
+	folio = get_ksm_folio(stable_node, GET_KSM_PAGE_LOCK);
 	if (!folio) {
 		/*
-		 * get_ksm_page did remove_node_from_stable_tree itself.
+		 * get_ksm_folio did remove_node_from_stable_tree itself.
 		 */
 		return 0;
 	}
@@ -1126,7 +1126,7 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
 	err = -EBUSY;
 	if (!folio_mapped(folio)) {
 		/*
-		 * The stable node did not yet appear stale to get_ksm_page(),
+		 * The stable node did not yet appear stale to get_ksm_folio(),
 		 * since that allows for an unmapped ksm folio to be recognized
 		 * right up until it is freed; but the node is safe to remove.
 		 * This folio might be in an LRU cache waiting to be freed,
@@ -1641,13 +1641,13 @@ static void *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 		 * We must walk all stable_node_dup to prune the stale
 		 * stable nodes during lookup.
 		 *
-		 * get_ksm_page can drop the nodes from the
+		 * get_ksm_folio can drop the nodes from the
 		 * stable_node->hlist if they point to freed pages
 		 * (that's why we do a _safe walk). The "dup"
 		 * stable_node parameter itself will be freed from
 		 * under us if it returns NULL.
 		 */
-		folio = get_ksm_page(dup, GET_KSM_PAGE_NOLOCK);
+		folio = get_ksm_folio(dup, GET_KSM_PAGE_NOLOCK);
 		if (!folio)
 			continue;
 		nr += 1;
@@ -1748,7 +1748,7 @@ static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stabl
 }
 
 /*
- * Like for get_ksm_page, this function can free the *_stable_node and
+ * Like for get_ksm_folio, this function can free the *_stable_node and
  * *_stable_node_dup if the returned tree_page is NULL.
  *
  * It can also free and overwrite *_stable_node with the found
@@ -1770,7 +1770,7 @@ static void *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
 	if (!is_stable_node_chain(stable_node)) {
 		if (is_page_sharing_candidate(stable_node)) {
 			*_stable_node_dup = stable_node;
-			return get_ksm_page(stable_node, GET_KSM_PAGE_NOLOCK);
+			return get_ksm_folio(stable_node, GET_KSM_PAGE_NOLOCK);
 		}
 		/*
 		 * _stable_node_dup set to NULL means the stable_node
@@ -1877,14 +1877,14 @@ static void *stable_tree_search(struct page *page)
 			 * write protected at all times. Any will work
 			 * fine to continue the walk.
 			 */
-			tree_folio = get_ksm_page(stable_node_any,
-						  GET_KSM_PAGE_NOLOCK);
+			tree_folio = get_ksm_folio(stable_node_any,
+						   GET_KSM_PAGE_NOLOCK);
 		}
 		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
 		if (!tree_folio) {
 			/*
 			 * If we walked over a stale stable_node,
-			 * get_ksm_page() will call rb_erase() and it
+			 * get_ksm_folio() will call rb_erase() and it
 			 * may rebalance the tree from under us. So
 			 * restart the search from scratch. Returning
 			 * NULL would be safe too, but we'd generate
@@ -1938,8 +1938,8 @@ static void *stable_tree_search(struct page *page)
 			 * It would be more elegant to return stable_node
 			 * than kpage, but that involves more changes.
 			 */
-			tree_folio = get_ksm_page(stable_node_dup,
-						  GET_KSM_PAGE_TRYLOCK);
+			tree_folio = get_ksm_folio(stable_node_dup,
+						   GET_KSM_PAGE_TRYLOCK);
 
 			if (PTR_ERR(tree_folio) == -EBUSY)
 				return ERR_PTR(-EBUSY);
@@ -2110,14 +2110,14 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
 			 * write protected at all times. Any will work
 			 * fine to continue the walk.
 			 */
-			tree_folio = get_ksm_page(stable_node_any,
-						  GET_KSM_PAGE_NOLOCK);
+			tree_folio = get_ksm_folio(stable_node_any,
+						   GET_KSM_PAGE_NOLOCK);
 		}
 		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
 		if (!tree_folio) {
 			/*
 			 * If we walked over a stale stable_node,
-			 * get_ksm_page() will call rb_erase() and it
+			 * get_ksm_folio() will call rb_erase() and it
 			 * may rebalance the tree from under us. So
 			 * restart the search from scratch. Returning
 			 * NULL would be safe too, but we'd generate
@@ -2601,8 +2601,8 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 
 			list_for_each_entry_safe(stable_node, next,
 						 &migrate_nodes, list) {
-				folio = get_ksm_page(stable_node,
-						    GET_KSM_PAGE_NOLOCK);
+				folio = get_ksm_folio(stable_node,
+						      GET_KSM_PAGE_NOLOCK);
 				if (folio)
 					folio_put(folio);
 				cond_resched();
@@ -3229,7 +3229,7 @@ void folio_migrate_ksm(struct folio *newfolio, struct folio *folio)
 		/*
 		 * newfolio->mapping was set in advance; now we need smp_wmb()
 		 * to make sure that the new stable_node->kpfn is visible
-		 * to get_ksm_page() before it can see that folio->mapping
+		 * to get_ksm_folio() before it can see that folio->mapping
 		 * has gone stale (or that folio_test_swapcache has been cleared).
 		 */
 		smp_wmb();
@@ -3256,7 +3256,7 @@ static bool stable_node_dup_remove_range(struct ksm_stable_node *stable_node,
 	if (stable_node->kpfn >= start_pfn &&
 	    stable_node->kpfn < end_pfn) {
 		/*
-		 * Don't get_ksm_page, page has already gone:
+		 * Don't get_ksm_folio, page has already gone:
 		 * which is why we keep kpfn instead of page*
 		 */
 		remove_node_from_stable_tree(stable_node);
@@ -3344,7 +3344,7 @@ static int ksm_memory_callback(struct notifier_block *self,
 		 * Most of the work is done by page migration; but there might
 		 * be a few stable_nodes left over, still pointing to struct
 		 * pages which have been offlined: prune those from the tree,
-		 * otherwise get_ksm_page() might later try to access a
+		 * otherwise get_ksm_folio() might later try to access a
 		 * non-existent struct page.
 		 */
 		ksm_check_stable_tree(mn->start_pfn,
-- 
2.43.0



* [PATCH 11/11] mm/ksm: return folio for chain series funcs
  2024-03-20  7:40 [PATCH 00/11] transfer page to folio in KSM alexs
                   ` (9 preceding siblings ...)
  2024-03-20  7:40 ` [PATCH 10/11] mm/ksm: rename get_ksm_page to get_ksm_folio and return type alexs
@ 2024-03-20  7:40 ` alexs
  10 siblings, 0 replies; 18+ messages in thread
From: alexs @ 2024-03-20  7:40 UTC
  To: Izik Eidus, Matthew Wilcox, Andrea Arcangeli, Hugh Dickins,
	Chris Wright, kasong, Andrew Morton, open list:MEMORY MANAGEMENT,
	open list
  Cc: linux-kernel, Alex Shi (tencent)

From: "Alex Shi (tencent)" <alexs@kernel.org>

Since all callers have been converted to folios, change the return type
of these functions to struct folio as well.

Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
 mm/ksm.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 806ad4d2693b..74cf6c028380 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1615,10 +1615,10 @@ bool is_page_sharing_candidate(struct ksm_stable_node *stable_node)
 	return __is_page_sharing_candidate(stable_node, 0);
 }
 
-static void *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
-			     struct ksm_stable_node **_stable_node,
-			     struct rb_root *root,
-			     bool prune_stale_stable_nodes)
+static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
+				     struct ksm_stable_node **_stable_node,
+				     struct rb_root *root,
+				     bool prune_stale_stable_nodes)
 {
 	struct ksm_stable_node *dup, *found = NULL, *stable_node = *_stable_node;
 	struct hlist_node *hlist_safe;
@@ -1761,10 +1761,10 @@ static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stabl
  * function and will be overwritten in all cases, the caller doesn't
  * need to initialize it.
  */
-static void *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
-					struct ksm_stable_node **_stable_node,
-					struct rb_root *root,
-					bool prune_stale_stable_nodes)
+static struct folio *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
+					 struct ksm_stable_node **_stable_node,
+					 struct rb_root *root,
+					 bool prune_stale_stable_nodes)
 {
 	struct ksm_stable_node *stable_node = *_stable_node;
 	if (!is_stable_node_chain(stable_node)) {
@@ -1783,16 +1783,16 @@ static void *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
 			       prune_stale_stable_nodes);
 }
 
-static __always_inline void *chain_prune(struct ksm_stable_node **s_n_d,
-						struct ksm_stable_node **s_n,
-						struct rb_root *root)
+static __always_inline struct folio *chain_prune(struct ksm_stable_node **s_n_d,
+						 struct ksm_stable_node **s_n,
+						 struct rb_root *root)
 {
 	return __stable_node_chain(s_n_d, s_n, root, true);
 }
 
-static __always_inline void *chain(struct ksm_stable_node **s_n_d,
-					  struct ksm_stable_node *s_n,
-					  struct rb_root *root)
+static __always_inline struct folio *chain(struct ksm_stable_node **s_n_d,
+					   struct ksm_stable_node *s_n,
+					   struct rb_root *root)
 {
 	struct ksm_stable_node *old_stable_node = s_n;
 	struct folio *tree_folio;
-- 
2.43.0



* Re: [PATCH 01/11] mm/ksm: Convert get_ksm_page to return a folio
  2024-03-20  7:40 ` [PATCH 01/11] mm/ksm: Convert get_ksm_page to return a folio alexs
@ 2024-03-20 12:54   ` Matthew Wilcox
  2024-03-21  2:07     ` Alex Shi
  0 siblings, 1 reply; 18+ messages in thread
From: Matthew Wilcox @ 2024-03-20 12:54 UTC
  To: alexs
  Cc: Izik Eidus, Andrea Arcangeli, Hugh Dickins, Chris Wright, kasong,
	Andrew Morton, open list:MEMORY MANAGEMENT, open list

On Wed, Mar 20, 2024 at 03:40:37PM +0800, alexs@kernel.org wrote:
> -static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
> +static void *get_ksm_page(struct ksm_stable_node *stable_node,
>  				 enum get_ksm_page_flags flags)

I am really not a fan of returning void * instead of a page or a
folio.  Particularly since you rename this function at the end anyway!

You should do it like this:

In this patch, convert get_ksm_page() to get_ksm_folio() and add:

static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
		enum get_ksm_page_flags flags)
{
	struct folio *folio = get_ksm_folio(stable_node, flags);
	return &folio->page;
}

Then convert each call-site to get_ksm_folio(), and finally delete
get_ksm_page().  That way you're always converting each caller to
the exact code you want it to look like, and your reviewers don't have to
keep three patches in their head at once as they review each place.
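
Each call-site conversion then stands on its own; e.g. the call in
remove_rmap_item_from_tree() would become something like:

	folio = get_ksm_folio(stable_node, GET_KSM_PAGE_LOCK);
	if (!folio)
		goto out;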

Also, I think this should be ksm_get_folio(), not get_ksm_folio().
Seems to fit better.

> @@ -949,32 +949,32 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
>  		 * in the ref_freeze section of __remove_mapping(); but Anon
>  		 * page->mapping reset to NULL later, in free_pages_prepare().

Could you fix page->mapping to folio->mapping in the comment?



* Re: [PATCH 03/11] mm/ksm: use a folio in remove_stable_node
  2024-03-20  7:40 ` [PATCH 03/11] mm/ksm: use a folio in remove_stable_node alexs
@ 2024-03-20 13:00   ` Matthew Wilcox
  0 siblings, 0 replies; 18+ messages in thread
From: Matthew Wilcox @ 2024-03-20 13:00 UTC
  To: alexs
  Cc: Izik Eidus, Andrea Arcangeli, Hugh Dickins, Chris Wright, kasong,
	Andrew Morton, open list:MEMORY MANAGEMENT, open list

On Wed, Mar 20, 2024 at 03:40:39PM +0800, alexs@kernel.org wrote:
> @@ -1124,22 +1124,22 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
>  	 * merge_across_nodes/max_page_sharing be switched.
>  	 */
>  	err = -EBUSY;
> -	if (!page_mapped(page)) {
> +	if (!folio_mapped(folio)) {
>  		/*
>  		 * The stable node did not yet appear stale to get_ksm_page(),
> -		 * since that allows for an unmapped ksm page to be recognized
> +		 * since that allows for an unmapped ksm folio to be recognized
>  		 * right up until it is freed; but the node is safe to remove.
> -		 * This page might be in an LRU cache waiting to be freed,
> +		 * This folio might be in an LRU cache waiting to be freed,
>  		 * or it might be PageSwapCache (perhaps under writeback),

s/PageSwapCache/in the swapcache/

>  		 * or it might have been removed from swapcache a moment ago.
>  		 */
> -		set_page_stable_node(page, NULL);
> +		set_page_stable_node(&folio->page, NULL);

Before this patch, introduce a folio_set_stable_node() (and convert the
one caller which already has a folio).  I'd do it the other way around
from ksm_get_folio(); that is:

static inline void folio_set_stable_node(struct folio *folio,
		struct ksm_stable_node *stable_node)
{
	set_page_stable_node(&folio->page, stable_node);
}

and then we can merge the two later when there are no more calls to
set_page_stable_node().
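
With that helper in place, the call above would presumably become:

	folio_set_stable_node(folio, NULL);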



* Re: [PATCH 06/11] mm/ksm: use folio in write_protect_page
  2024-03-20  7:40 ` [PATCH 06/11] mm/ksm: use folio in write_protect_page alexs
@ 2024-03-20 14:57   ` Matthew Wilcox
  0 siblings, 0 replies; 18+ messages in thread
From: Matthew Wilcox @ 2024-03-20 14:57 UTC
  To: alexs
  Cc: Izik Eidus, Andrea Arcangeli, Hugh Dickins, Chris Wright, kasong,
	Andrew Morton, open list:MEMORY MANAGEMENT, open list

On Wed, Mar 20, 2024 at 03:40:42PM +0800, alexs@kernel.org wrote:
> -static int write_protect_page(struct vm_area_struct *vma, struct page *page,
> +static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
>  			      pte_t *orig_pte)
>  {
>  	struct mm_struct *mm = vma->vm_mm;
> -	DEFINE_PAGE_VMA_WALK(pvmw, page, vma, 0, 0);
> +	DEFINE_PAGE_VMA_WALK(pvmw, &folio->page, vma, 0, 0);

We have a DEFINE_FOLIO_VMA_WALK
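
so the declaration could presumably read:

	DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, 0, 0);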

> -	pvmw.address = page_address_in_vma(page, vma);
> +	pvmw.address = page_address_in_vma(&folio->page, vma);

We don't yet have a folio_address_in_vma().  This needs more study,
so I approve of how you've converted this line.

> -	BUG_ON(PageTransCompound(page));

I might make this a VM_BUG_ON(folio_test_large(folio))

> @@ -1505,7 +1503,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
>  	 * ptes are necessarily already write-protected.  But in either
>  	 * case, we need to lock and check page_count is not raised.
>  	 */
> -	if (write_protect_page(vma, page, &orig_pte) == 0) {
> +	if (write_protect_page(vma, (struct folio *)page, &orig_pte) == 0) {

I don't love this cast.  I see why it's safe (called split_huge_page()
above), but I'd rather see a call to page_folio() just to keep things
tidy.
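
That is, presumably:

	if (write_protect_page(vma, page_folio(page), &orig_pte) == 0) {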


* Re: [PATCH 09/11] mm/ksm: Convert stable_tree_search to use folio
  2024-03-20  7:40 ` [PATCH 09/11] mm/ksm: Convert stable_tree_search " alexs
@ 2024-03-20 15:26   ` Matthew Wilcox
  2024-03-21  1:47     ` Alex Shi
  0 siblings, 1 reply; 18+ messages in thread
From: Matthew Wilcox @ 2024-03-20 15:26 UTC
  To: alexs
  Cc: Izik Eidus, Andrea Arcangeli, Hugh Dickins, Chris Wright, kasong,
	Andrew Morton, open list:MEMORY MANAGEMENT, open list

On Wed, Mar 20, 2024 at 03:40:45PM +0800, alexs@kernel.org wrote:
> @@ -1820,28 +1820,30 @@ static struct page *stable_tree_search(struct page *page)
>  	struct rb_node *parent;
>  	struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
>  	struct ksm_stable_node *page_node;
> +	struct folio *folio;
>  
> -	page_node = page_stable_node(page);
> +	folio = (struct folio *)page;

These casts make me nervous.  Remember that we're heading towards a
future where struct page and struct folio don't point to the same
memory, and part of that will be finding all these casts and removing
them.  Please, unless there's a good reason, just use page_folio().
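
That is, something like:

	folio = page_folio(page);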



* Re: [PATCH 09/11] mm/ksm: Convert stable_tree_search to use folio
  2024-03-20 15:26   ` Matthew Wilcox
@ 2024-03-21  1:47     ` Alex Shi
  0 siblings, 0 replies; 18+ messages in thread
From: Alex Shi @ 2024-03-21  1:47 UTC
  To: Matthew Wilcox, alexs
  Cc: Izik Eidus, Andrea Arcangeli, Hugh Dickins, Chris Wright, kasong,
	Andrew Morton, open list:MEMORY MANAGEMENT, open list



On 3/20/24 11:26 PM, Matthew Wilcox wrote:
> On Wed, Mar 20, 2024 at 03:40:45PM +0800, alexs@kernel.org wrote:
>> @@ -1820,28 +1820,30 @@ static struct page *stable_tree_search(struct page *page)
>>  	struct rb_node *parent;
>>  	struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
>>  	struct ksm_stable_node *page_node;
>> +	struct folio *folio;
>>  
>> -	page_node = page_stable_node(page);
>> +	folio = (struct folio *)page;
> 
> These casts make me nervous.  Remember that we're heading towards a
> future where struct page and struct folio don't point to the same
> memory, and part of that will be finding all these casts and removing
> them.  Please, unless there's a good reason, just use page_folio().
> 

Hi willy,

Thanks a lot for all the comments. Yes, all of them are right. I will
rewrite and resend the patchset.

Best Regards!
Alex


* Re: [PATCH 01/11] mm/ksm: Convert get_ksm_page to return a folio
  2024-03-20 12:54   ` Matthew Wilcox
@ 2024-03-21  2:07     ` Alex Shi
  0 siblings, 0 replies; 18+ messages in thread
From: Alex Shi @ 2024-03-21  2:07 UTC
  To: Matthew Wilcox, alexs
  Cc: Izik Eidus, Andrea Arcangeli, Hugh Dickins, Chris Wright, kasong,
	Andrew Morton, open list:MEMORY MANAGEMENT, open list



On 3/20/24 8:54 PM, Matthew Wilcox wrote:
> On Wed, Mar 20, 2024 at 03:40:37PM +0800, alexs@kernel.org wrote:
>> -static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
>> +static void *get_ksm_page(struct ksm_stable_node *stable_node,
>>  				 enum get_ksm_page_flags flags)
> 
> I am really not a fan of returning void * instead of a page or a
> folio.  Particularly since you rename this function at the end anyway!
> 
> You should do it like this:
> 
> In this patch, convert get_ksm_page() to get_ksm_folio() and add:
> 
> static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
> 		enum get_ksm_page_flags flags)
> {
> 	struct folio *folio = get_ksm_folio(node, flags);
> 	return &folio->page;
> }
> 
> Then convert each call-site to get_ksm_folio(), and finally delete
> get_ksm_page().  That way you're always converting each caller to
> the exact code you want it to look like, and your reiewrs don't have to
> keep three patches in their head at once as they review each place.
> 
> Also, I think this should be ksm_get_folio(), not get_ksm_folio().
> Seems to fit better.
> 
>> @@ -949,32 +949,32 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
>>  		 * in the ref_freeze section of __remove_mapping(); but Anon
>>  		 * page->mapping reset to NULL later, in free_pages_prepare().
> 
> Could you fix page->mapping to folio->mapping in the comment?
> 

Thanks for the comments! I will take your suggestions and resend soon.

Best regards!

