* [PATCH -V2 1/2] hugetlb: Move all the in use pages to active list
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Date: 2012-06-15 10:31 UTC
To: linux-mm, kamezawa.hiroyu, mhocko, akpm; +Cc: Aneesh Kumar K.V
When we fail to allocate a huge page from the reserve pool, hugetlb
falls back to allocating one with alloc_buddy_huge_page. Add these
pages to the active list. The huge page allocated when we soft
offline the old page also needs to be added to the active list.
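
A rough sketch of the pattern this establishes (the helper name
alloc_and_activate below is hypothetical and used only for
illustration; the actual hunks are in the diff that follows):

/*
 * Illustrative only: a huge page obtained from the buddy allocator is
 * moved to the per-hstate active list under hugetlb_lock, so that it
 * is visible to anyone walking h->hugepage_activelist, just like the
 * in-use pages handled earlier in this series.
 */
static struct page *alloc_and_activate(struct hstate *h, int nid)
{
	struct page *page = alloc_buddy_huge_page(h, nid);

	if (page) {
		spin_lock(&hugetlb_lock);
		list_move(&page->lru, &h->hugepage_activelist);
		spin_unlock(&hugetlb_lock);
	}
	return page;
}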
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
mm/hugetlb.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c57740b..ec7b86e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -928,8 +928,14 @@ struct page *alloc_huge_page_node(struct hstate *h, int nid)
page = dequeue_huge_page_node(h, nid);
spin_unlock(&hugetlb_lock);
- if (!page)
+ if (!page) {
page = alloc_buddy_huge_page(h, nid);
+ if (page) {
+ spin_lock(&hugetlb_lock);
+ list_move(&page->lru, &h->hugepage_activelist);
+ spin_unlock(&hugetlb_lock);
+ }
+ }
return page;
}
@@ -1155,6 +1161,9 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
hugepage_subpool_put_pages(spool, chg);
return ERR_PTR(-ENOSPC);
}
+ spin_lock(&hugetlb_lock);
+ list_move(&page->lru, &h->hugepage_activelist);
+ spin_unlock(&hugetlb_lock);
}
set_page_private(page, (unsigned long)spool);
--
1.7.10
* [PATCH -V2 2/2] hugetlb/cgroup: Assign the page hugetlb cgroup when we move the page to active list.
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Date: 2012-06-15 10:31 UTC
To: linux-mm, kamezawa.hiroyu, mhocko, akpm; +Cc: Aneesh Kumar K.V
Assigning a page's hugetlb cgroup and moving the page to the active
list should happen with hugetlb_lock held. Otherwise, when we remove
the hugetlb cgroup, we iterate the active list and may find pages
with a NULL hugetlb cgroup value.
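
To spell out the race, here is a hypothetical sketch of the two sides
(both helper names below are made up; only the lock/list/cgroup
ordering matters, and the real code is in alloc_huge_page and the
hugetlb cgroup removal path):

/* Allocation side before this patch (hypothetical helper). */
static void alloc_side_before_fix(struct hstate *h, struct page *page,
				  int idx, struct hugetlb_cgroup *h_cg)
{
	spin_lock(&hugetlb_lock);
	list_move(&page->lru, &h->hugepage_activelist);
	spin_unlock(&hugetlb_lock);
	/*
	 * Window: the page is already visible on the active list, but
	 * its hugetlb cgroup pointer is still NULL.
	 */
	hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
}

/* Concurrent hugetlb cgroup removal (hypothetical helper). */
static void cgroup_removal_side(struct hstate *h)
{
	struct page *page;

	spin_lock(&hugetlb_lock);
	list_for_each_entry(page, &h->hugepage_activelist, lru) {
		/*
		 * In the window above, this walk sees a page whose
		 * hugetlb cgroup is still NULL.
		 */
	}
	spin_unlock(&hugetlb_lock);
}

With this patch, hugetlb_cgroup_commit_charge is called inside the same
hugetlb_lock critical section as the list_move, so a page can never be
observed on the active list with its hugetlb cgroup unset.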
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
mm/hugetlb.c | 14 +++++++++-----
mm/hugetlb_cgroup.c | 3 +--
2 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ec7b86e..10160cb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1150,9 +1150,13 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
}
spin_lock(&hugetlb_lock);
page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve);
- spin_unlock(&hugetlb_lock);
-
- if (!page) {
+ if (page) {
+ /* update page cgroup details */
+ hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h),
+ h_cg, page);
+ spin_unlock(&hugetlb_lock);
+ } else {
+ spin_unlock(&hugetlb_lock);
page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
if (!page) {
hugetlb_cgroup_uncharge_cgroup(idx,
@@ -1163,14 +1167,14 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
}
spin_lock(&hugetlb_lock);
list_move(&page->lru, &h->hugepage_activelist);
+ hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h),
+ h_cg, page);
spin_unlock(&hugetlb_lock);
}
set_page_private(page, (unsigned long)spool);
vma_commit_reservation(h, vma, addr);
- /* update page cgroup details */
- hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
return page;
}
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 8e7ca0a..d4f3f7b 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -218,6 +218,7 @@ done:
return ret;
}
+/* Should be called with hugetlb_lock held */
void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
struct hugetlb_cgroup *h_cg,
struct page *page)
@@ -225,9 +226,7 @@ void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
if (hugetlb_cgroup_disabled() || !h_cg)
return;
- spin_lock(&hugetlb_lock);
set_hugetlb_cgroup(page, h_cg);
- spin_unlock(&hugetlb_lock);
return;
}
--
1.7.10
* Re: [PATCH -V2 1/2] hugetlb: Move all the in use pages to active list
From: Michal Hocko
Date: 2012-06-15 12:23 UTC
To: Aneesh Kumar K.V; +Cc: linux-mm, kamezawa.hiroyu, akpm
On Fri 15-06-12 16:01:02, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>
> When we fail to allocate a huge page from the reserve pool, hugetlb
> falls back to allocating one with alloc_buddy_huge_page. Add these
> pages to the active list. The huge page allocated when we soft
> offline the old page also needs to be added to the active list.
Yes, I have totally missed this.
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
> ---
> mm/hugetlb.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index c57740b..ec7b86e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -928,8 +928,14 @@ struct page *alloc_huge_page_node(struct hstate *h, int nid)
> page = dequeue_huge_page_node(h, nid);
> spin_unlock(&hugetlb_lock);
>
> - if (!page)
> + if (!page) {
> page = alloc_buddy_huge_page(h, nid);
> + if (page) {
> + spin_lock(&hugetlb_lock);
> + list_move(&page->lru, &h->hugepage_activelist);
> + spin_unlock(&hugetlb_lock);
> + }
> + }
>
> return page;
> }
> @@ -1155,6 +1161,9 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
> hugepage_subpool_put_pages(spool, chg);
> return ERR_PTR(-ENOSPC);
> }
> + spin_lock(&hugetlb_lock);
> + list_move(&page->lru, &h->hugepage_activelist);
> + spin_unlock(&hugetlb_lock);
> }
>
> set_page_private(page, (unsigned long)spool);
> --
> 1.7.10
>
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic
* Re: [PATCH -V2 2/2] hugetlb/cgroup: Assign the page hugetlb cgroup when we move the page to active list.
From: Michal Hocko
Date: 2012-06-15 12:35 UTC
To: Aneesh Kumar K.V; +Cc: linux-mm, kamezawa.hiroyu, akpm
On Fri 15-06-12 16:01:03, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>
> Assigning a page's hugetlb cgroup and moving the page to the active
> list should happen with hugetlb_lock held. Otherwise, when we remove
> the hugetlb cgroup, we iterate the active list and may find pages
> with a NULL hugetlb cgroup value.
>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
> ---
> mm/hugetlb.c | 14 +++++++++-----
> mm/hugetlb_cgroup.c | 3 +--
> 2 files changed, 10 insertions(+), 7 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ec7b86e..10160cb 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1150,9 +1150,13 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
> }
> spin_lock(&hugetlb_lock);
> page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve);
> - spin_unlock(&hugetlb_lock);
> -
> - if (!page) {
> + if (page) {
> + /* update page cgroup details */
> + hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h),
> + h_cg, page);
> + spin_unlock(&hugetlb_lock);
> + } else {
> + spin_unlock(&hugetlb_lock);
> page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
> if (!page) {
> hugetlb_cgroup_uncharge_cgroup(idx,
> @@ -1163,14 +1167,14 @@ static struct page *alloc_huge_page(struct vm_area_struct *vma,
> }
> spin_lock(&hugetlb_lock);
> list_move(&page->lru, &h->hugepage_activelist);
> + hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h),
> + h_cg, page);
> spin_unlock(&hugetlb_lock);
> }
>
> set_page_private(page, (unsigned long)spool);
>
> vma_commit_reservation(h, vma, addr);
> - /* update page cgroup details */
> - hugetlb_cgroup_commit_charge(idx, pages_per_huge_page(h), h_cg, page);
> return page;
> }
>
> diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
> index 8e7ca0a..d4f3f7b 100644
> --- a/mm/hugetlb_cgroup.c
> +++ b/mm/hugetlb_cgroup.c
> @@ -218,6 +218,7 @@ done:
> return ret;
> }
>
> +/* Should be called with hugetlb_lock held */
> void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
> struct hugetlb_cgroup *h_cg,
> struct page *page)
> @@ -225,9 +226,7 @@ void hugetlb_cgroup_commit_charge(int idx, unsigned long nr_pages,
> if (hugetlb_cgroup_disabled() || !h_cg)
> return;
>
> - spin_lock(&hugetlb_lock);
> set_hugetlb_cgroup(page, h_cg);
> - spin_unlock(&hugetlb_lock);
> return;
> }
>
> --
> 1.7.10
>
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic
* Re: [PATCH -V2 1/2] hugetlb: Move all the in use pages to active list
From: Kamezawa Hiroyuki
Date: 2012-06-16 6:23 UTC
To: Aneesh Kumar K.V; +Cc: linux-mm, mhocko, akpm
(2012/06/15 19:31), Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V"<aneesh.kumar@linux.vnet.ibm.com>
>
> When we fail to allocate pages from the reserve pool, hugetlb
> do try to allocate huge pages using alloc_buddy_huge_page.
> Add these to the active list. We also need to add the huge
> page we allocate when we soft offline the oldpage to active
> list.
>
> Signed-off-by: Aneesh Kumar K.V<aneesh.kumar@linux.vnet.ibm.com>
Reviewed-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
* Re: [PATCH -V2 1/2] hugetlb: Move all the in use pages to active list
From: Andrew Morton
Date: 2012-06-22 21:01 UTC
To: Aneesh Kumar K.V; +Cc: linux-mm, kamezawa.hiroyu, mhocko
On Fri, 15 Jun 2012 16:01:02 +0530
"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> wrote:
> When we fail to allocate a huge page from the reserve pool, hugetlb
> falls back to allocating one with alloc_buddy_huge_page. Add these
> pages to the active list. The huge page allocated when we soft
> offline the old page also needs to be added to the active list.
When fixing a bug, please describe the end-user-visible effects of that bug.
Fully. Every time. No exceptions.