* [PATCH v5 0/3] sched/fair: Limit access to overutilized
@ 2024-03-06 10:24 Shrikanth Hegde
  2024-03-06 10:24 ` [PATCH v5 1/3] sched/fair: Add EAS checks before updating overutilized Shrikanth Hegde
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Shrikanth Hegde @ 2024-03-06 10:24 UTC
  To: mingo, peterz, vincent.guittot
  Cc: sshegde, yu.c.chen, dietmar.eggemann, linux-kernel, nysal,
	aboorvad, srikar, vschneid, pierre.gondois, qyousef

When running an ISV workload on a large system (240 cores, SMT8), a perf
profile showed that newidle_balance and enqueue_task_fair were consuming
a disproportionate share of cycles. Perf annotate showed that most of
that time was spent accessing the overutilized field of the root domain.
A similar perf profile was reproduced by making some changes to
stress-ng --wait; there, newidle_balance and enqueue_task_fair each
consume close to 5-7% of cycles.

Overutilized was added for EAS (energy aware scheduler) to decide whether
to do load balancing or not. Simultaneous access to overutilized by
multiple CPUs leads to cache invalidations due to true sharing. Updating
overutilized is not required on non-EAS platforms. Since overutilized and
overload are part of the same cacheline, there is false sharing as well.
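
For reference, a simplified sketch of the relevant root_domain fields
(abridged from kernel/sched/sched.h; the exact offsets and padding are
an assumption here and depend on config and compiler):

struct root_domain {
	/* ... refcounts, rcu head, CPU masks ... */

	/* Pullable load on at least one CPU; read in newidle_balance path */
	int overload;

	/* One or more CPUs over-utilized; the EAS tipping point indicator */
	int overutilized;

	/* ... */
};

The two ints are adjacent, so they will typically share a cacheline:
frequent writes to overutilized invalidate the line and turn unrelated
reads of overload into misses, which is the false sharing described
above.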

Patch 1/3 - Main patch. It helps reduce the issue described above; with
it, neither function shows up in the profile. A before/after comparison
is in the changelog. With the patch, the stated problem in the ISV
workload is also solved and throughput has improved.
Patch 2/3 - Code refactoring to use a helper function instead of direct
access to the field. Keeping this patch so patch 3/3 becomes easier to
understand. Depends on 1/3.
Patch 3/3 - Refactoring the code, since most of the observed patterns
are eas && !overutilized. Changed the helper function accordingly.
Depends on 2/3.

More details can be found in cover letter of v1.

v4 -> v5:
- Added the EAS check inside the helper functions instead of sprinkling
it around. The EAS check is at worst a static branch, so that keeps the
code simpler.
- Added an EAS check inside cpu_overutilized, as suggested by Qais.
- Added patch 3, since most of the code follows the eas && !overutilized
pattern, as suggested by Vincent.

v3 -> v4:
- Corrected a mistake where an EAS check was missed.
v2 -> v3:
- Pierre and Dietmar suggested adding one more EAS check before calling
cpu_overutilized. That makes sense, since the value is not used anyway
in the non-EAS case.
- Refactored the code as Dietmar suggested, to avoid an additional call
to sched_energy_enabled().
- Minor edits to the changelog.
v1 -> v2:
- Chen Yu pointed out a minor issue in the code. Corrected it and
updated the changelog.

v1: https://lore.kernel.org/lkml/20240223150707.410417-1-sshegde@linux.ibm.com/
v2: https://lore.kernel.org/lkml/20240228071621.602596-1-sshegde@linux.ibm.com/
v3: https://lore.kernel.org/lkml/20240229104010.747411-1-sshegde@linux.ibm.com/
v4: https://lore.kernel.org/lkml/20240301151725.874604-1-sshegde@linux.ibm.com/

Shrikanth Hegde (3):
  sched/fair: Add EAS checks before updating overutilized
  sched/fair: Use helper function to access rd->overutilized
  sched/fair: Combine EAS check with overutilized access

 kernel/sched/fair.c | 86 +++++++++++++++++++++++++++++++--------------
 1 file changed, 59 insertions(+), 27 deletions(-)

--
2.39.3



* [PATCH v5 1/3] sched/fair: Add EAS checks before updating overutilized
  2024-03-06 10:24 [PATCH v5 0/3] sched/fair: Limit access to overutilized Shrikanth Hegde
@ 2024-03-06 10:24 ` Shrikanth Hegde
  2024-03-06 17:40   ` Vincent Guittot
  2024-03-06 10:24 ` [PATCH v5 2/3] sched/fair: Use helper function to access rd->overutilized Shrikanth Hegde
  2024-03-06 10:24 ` [PATCH v5 3/3] sched/fair: Combine EAS check with overutilized access Shrikanth Hegde
  2 siblings, 1 reply; 7+ messages in thread
From: Shrikanth Hegde @ 2024-03-06 10:24 UTC
  To: mingo, peterz, vincent.guittot
  Cc: sshegde, yu.c.chen, dietmar.eggemann, linux-kernel, nysal,
	aboorvad, srikar, vschneid, pierre.gondois, qyousef

The overutilized field of the root domain is only used by EAS (energy
aware scheduler) to decide whether to do load balancing or not. It is
not used if EAS is not possible.

Currently enqueue_task_fair and task_tick_fair access, and sometimes
update, this field. In update_sd_lb_stats it is updated often. This
causes cache contention due to true sharing and burns a lot of cycles.
overload and overutilized are part of the same cacheline, and updating
overutilized often invalidates that cacheline. That slows down access to
overload due to false sharing. Hence, add an EAS check before
accessing/updating this field. The EAS check is optimized out at compile
time or is a static branch, so it shouldn't cost much.
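
As context, sched_energy_enabled() is roughly the following (abridged
from kernel/sched/sched.h; the exact config guards are quoted from
memory and should be treated as an assumption of this sketch):

#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
DECLARE_STATIC_KEY_FALSE(sched_energy_present);

static inline bool sched_energy_enabled(void)
{
	/* Runtime-patched jump label: near-zero cost while EAS is off */
	return static_branch_unlikely(&sched_energy_present);
}
#else
/* Constant-folded to false; the guarded code is dropped at compile time */
static inline bool sched_energy_enabled(void)
{
	return false;
}
#endif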

With the patch, both enqueue_task_fair and newidle_balance no longer
show up as hot routines in the perf profile.

6.8-rc4:
7.18%  swapper          [kernel.vmlinux]              [k] enqueue_task_fair
6.78%  s                [kernel.vmlinux]              [k] newidle_balance
+patch:
0.14%  swapper          [kernel.vmlinux]              [k] enqueue_task_fair
0.00%  swapper          [kernel.vmlinux]              [k] newidle_balance

Minor change: trace_sched_overutilized_tp expects its second argument to
be bool, so do an int-to-bool conversion for it.

Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
Reviewed-by: Qais Yousef <qyousef@layalina.io>
Reviewed-by: Srikar Dronamraju <srikar@linux.ibm.com>
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 kernel/sched/fair.c | 62 +++++++++++++++++++++++++++++++--------------
 1 file changed, 43 insertions(+), 19 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6a16129f9a5c..997e570d9423 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6663,22 +6663,51 @@ static inline void hrtick_update(struct rq *rq)
 #ifdef CONFIG_SMP
 static inline bool cpu_overutilized(int cpu)
 {
-	unsigned long rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
-	unsigned long rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
+	unsigned long  rq_util_min, rq_util_max;
+
+	if (!sched_energy_enabled())
+		return false;
+
+	rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
+	rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);

 	/* Return true only if the utilization doesn't fit CPU's capacity */
 	return !util_fits_cpu(cpu_util_cfs(cpu), rq_util_min, rq_util_max, cpu);
 }

-static inline void update_overutilized_status(struct rq *rq)
+static inline void set_rd_overutilized_status(struct root_domain *rd,
+					      unsigned int status)
 {
-	if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu)) {
-		WRITE_ONCE(rq->rd->overutilized, SG_OVERUTILIZED);
-		trace_sched_overutilized_tp(rq->rd, SG_OVERUTILIZED);
-	}
+	if (!sched_energy_enabled())
+		return;
+
+	WRITE_ONCE(rd->overutilized, status);
+	trace_sched_overutilized_tp(rd, !!status);
+}
+
+static inline void check_update_overutilized_status(struct rq *rq)
+{
+	/*
+	 * overutilized field is used for load balancing decisions only
+	 * if energy aware scheduler is being used
+	 */
+	if (!sched_energy_enabled())
+		return;
+
+	if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu))
+		set_rd_overutilized_status(rq->rd, SG_OVERUTILIZED);
 }
 #else
-static inline void update_overutilized_status(struct rq *rq) { }
+static inline void check_update_overutilized_status(struct rq *rq)
+{
+	return 0;
+}
+
+static inline void set_rd_overutilized_status(struct root_domain *rd,
+					      unsigned int status)
+{
+	return 0;
+}
 #endif

 /* Runqueue only has SCHED_IDLE tasks enqueued */
@@ -6779,7 +6808,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	 * and the following generally works well enough in practice.
 	 */
 	if (!task_new)
-		update_overutilized_status(rq);
+		check_update_overutilized_status(rq);

 enqueue_throttle:
 	assert_list_leaf_cfs_rq(rq);
@@ -10596,19 +10625,14 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 		env->fbq_type = fbq_classify_group(&sds->busiest_stat);

 	if (!env->sd->parent) {
-		struct root_domain *rd = env->dst_rq->rd;
-
 		/* update overload indicator if we are at root domain */
-		WRITE_ONCE(rd->overload, sg_status & SG_OVERLOAD);
+		WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);

 		/* Update over-utilization (tipping point, U >= 0) indicator */
-		WRITE_ONCE(rd->overutilized, sg_status & SG_OVERUTILIZED);
-		trace_sched_overutilized_tp(rd, sg_status & SG_OVERUTILIZED);
+		set_rd_overutilized_status(env->dst_rq->rd,
+					   sg_status & SG_OVERUTILIZED);
 	} else if (sg_status & SG_OVERUTILIZED) {
-		struct root_domain *rd = env->dst_rq->rd;
-
-		WRITE_ONCE(rd->overutilized, SG_OVERUTILIZED);
-		trace_sched_overutilized_tp(rd, SG_OVERUTILIZED);
+		set_rd_overutilized_status(env->dst_rq->rd, SG_OVERUTILIZED);
 	}

 	update_idle_cpu_scan(env, sum_util);
@@ -12609,7 +12633,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
 		task_tick_numa(rq, curr);

 	update_misfit_status(curr, rq);
-	update_overutilized_status(task_rq(curr));
+	check_update_overutilized_status(task_rq(curr));

 	task_tick_core(rq, curr);
 }
--
2.39.3



* [PATCH v5 2/3] sched/fair: Use helper function to access rd->overutilized
  2024-03-06 10:24 [PATCH v5 0/3] sched/fair: Limit access to overutilized Shrikanth Hegde
  2024-03-06 10:24 ` [PATCH v5 1/3] sched/fair: Add EAS checks before updating overutilized Shrikanth Hegde
@ 2024-03-06 10:24 ` Shrikanth Hegde
  2024-03-06 17:40   ` Vincent Guittot
  2024-03-06 10:24 ` [PATCH v5 3/3] sched/fair: Combine EAS check with overutilized access Shrikanth Hegde
  2 siblings, 1 reply; 7+ messages in thread
From: Shrikanth Hegde @ 2024-03-06 10:24 UTC
  To: mingo, peterz, vincent.guittot
  Cc: sshegde, yu.c.chen, dietmar.eggemann, linux-kernel, nysal,
	aboorvad, srikar, vschneid, pierre.gondois, qyousef

The overutilized field is accessed directly in multiple places, so it
could use a helper function. That way, one is more aware that it is
meaningful only in the EAS case.

No change in functionality intended.

Reviewed-by: Qais Yousef <qyousef@layalina.io>
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 kernel/sched/fair.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 997e570d9423..e3086c8930ea 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6675,6 +6675,15 @@ static inline bool cpu_overutilized(int cpu)
 	return !util_fits_cpu(cpu_util_cfs(cpu), rq_util_min, rq_util_max, cpu);
 }

+/*
+ * Ensure that caller can do EAS. overutilized value
+ * make sense only if EAS is enabled
+ */
+static inline int is_rd_overutilized(struct root_domain *rd)
+{
+	return READ_ONCE(rd->overutilized);
+}
+
 static inline void set_rd_overutilized_status(struct root_domain *rd,
 					      unsigned int status)
 {
@@ -6694,7 +6703,7 @@ static inline void check_update_overutilized_status(struct rq *rq)
 	if (!sched_energy_enabled())
 		return;

-	if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu))
+	if (!is_rd_overutilized(rq->rd) && cpu_overutilized(rq->cpu))
 		set_rd_overutilized_status(rq->rd, SG_OVERUTILIZED);
 }
 #else
@@ -6708,6 +6717,11 @@ static inline void set_rd_overutilized_status(struct root_domain *rd,
 {
 	return 0;
 }
+
+static inline int is_rd_overutilized(struct root_domain *rd)
+{
+	return 0;
+}
 #endif

 /* Runqueue only has SCHED_IDLE tasks enqueued */
@@ -7989,7 +8003,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)

 	rcu_read_lock();
 	pd = rcu_dereference(rd->pd);
-	if (!pd || READ_ONCE(rd->overutilized))
+	if (!pd || is_rd_overutilized(rd))
 		goto unlock;

 	/*
@@ -10872,7 +10886,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	if (sched_energy_enabled()) {
 		struct root_domain *rd = env->dst_rq->rd;

-		if (rcu_dereference(rd->pd) && !READ_ONCE(rd->overutilized))
+		if (rcu_dereference(rd->pd) && !is_rd_overutilized(rd))
 			goto out_balanced;
 	}

--
2.39.3



* [PATCH v5 3/3] sched/fair: Combine EAS check with overutilized access
  2024-03-06 10:24 [PATCH v5 0/3] sched/fair: Limit access to overutilized Shrikanth Hegde
  2024-03-06 10:24 ` [PATCH v5 1/3] sched/fair: Add EAS checks before updating overutilized Shrikanth Hegde
  2024-03-06 10:24 ` [PATCH v5 2/3] sched/fair: Use helper function to access rd->overutilized Shrikanth Hegde
@ 2024-03-06 10:24 ` Shrikanth Hegde
  2 siblings, 0 replies; 7+ messages in thread
From: Shrikanth Hegde @ 2024-03-06 10:24 UTC
  To: mingo, peterz, vincent.guittot
  Cc: sshegde, yu.c.chen, dietmar.eggemann, linux-kernel, nysal,
	aboorvad, srikar, vschneid, pierre.gondois, qyousef

Access to overutilized is always paired with sched_energy_enabled in
the pattern:

if (sched_energy_enabled && !overutilized)
       do something

So modify the helper function to encapsulate this pattern. This makes
the code more readable, as it now reads: do something when the root
domain is not overutilized.
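
A minimal sketch of the resulting shape, with the helper and one call
site taken as-is from the diff below (names as in this series):

static inline int is_rd_not_overutilized(struct root_domain *rd)
{
	/* Folds the EAS gate and the overutilized read into one check */
	return sched_energy_enabled() && !READ_ONCE(rd->overutilized);
}

/* e.g. in select_task_rq_fair(): one condition instead of two */
if (is_rd_not_overutilized(this_rq()->rd)) {
	new_cpu = find_energy_efficient_cpu(p, prev_cpu);
	if (new_cpu >= 0)
		return new_cpu;
}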

No change in functionality intended.

Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
---
 kernel/sched/fair.c | 26 ++++++++++----------------
 1 file changed, 10 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e3086c8930ea..62f247bdec86 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6676,12 +6676,11 @@ static inline bool cpu_overutilized(int cpu)
 }

 /*
- * Ensure that caller can do EAS. overutilized value
- * make sense only if EAS is enabled
+ * overutilized value make sense only if EAS is enabled
  */
-static inline int is_rd_overutilized(struct root_domain *rd)
+static inline int is_rd_not_overutilized(struct root_domain *rd)
 {
-	return READ_ONCE(rd->overutilized);
+	return sched_energy_enabled() && !READ_ONCE(rd->overutilized);
 }

 static inline void set_rd_overutilized_status(struct root_domain *rd,
@@ -6700,10 +6699,8 @@ static inline void check_update_overutilized_status(struct rq *rq)
 	 * overutilized field is used for load balancing decisions only
 	 * if energy aware scheduler is being used
 	 */
-	if (!sched_energy_enabled())
-		return;

-	if (!is_rd_overutilized(rq->rd) && cpu_overutilized(rq->cpu))
+	if (is_rd_not_overutilized(rq->rd) && cpu_overutilized(rq->cpu))
 		set_rd_overutilized_status(rq->rd, SG_OVERUTILIZED);
 }
 #else
@@ -6718,7 +6715,7 @@ static inline void set_rd_overutilized_status(struct root_domain *rd,
 	return 0;
 }

-static inline int is_rd_overutilized(struct root_domain *rd)
+static inline int is_rd_not_overutilized(struct root_domain *rd)
 {
 	return 0;
 }
@@ -8003,7 +8000,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)

 	rcu_read_lock();
 	pd = rcu_dereference(rd->pd);
-	if (!pd || is_rd_overutilized(rd))
+	if (!pd)
 		goto unlock;

 	/*
@@ -8206,7 +8203,7 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int wake_flags)
 		    cpumask_test_cpu(cpu, p->cpus_ptr))
 			return cpu;

-		if (sched_energy_enabled()) {
+		if (is_rd_not_overutilized(this_rq()->rd)) {
 			new_cpu = find_energy_efficient_cpu(p, prev_cpu);
 			if (new_cpu >= 0)
 				return new_cpu;
@@ -10883,12 +10880,9 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	if (busiest->group_type == group_misfit_task)
 		goto force_balance;

-	if (sched_energy_enabled()) {
-		struct root_domain *rd = env->dst_rq->rd;
-
-		if (rcu_dereference(rd->pd) && !is_rd_overutilized(rd))
-			goto out_balanced;
-	}
+	if (is_rd_not_overutilized(env->dst_rq->rd) &&
+	    rcu_dereference(env->dst_rq->rd->pd))
+		goto out_balanced;

 	/* ASYM feature bypasses nice load balance check */
 	if (busiest->group_type == group_asym_packing)
--
2.39.3



* Re: [PATCH v5 1/3] sched/fair: Add EAS checks before updating overutilized
  2024-03-06 10:24 ` [PATCH v5 1/3] sched/fair: Add EAS checks before updating overutilized Shrikanth Hegde
@ 2024-03-06 17:40   ` Vincent Guittot
  2024-03-07  4:50     ` Shrikanth Hegde
  0 siblings, 1 reply; 7+ messages in thread
From: Vincent Guittot @ 2024-03-06 17:40 UTC
  To: Shrikanth Hegde
  Cc: mingo, peterz, yu.c.chen, dietmar.eggemann, linux-kernel, nysal,
	aboorvad, srikar, vschneid, pierre.gondois, qyousef

On Wed, 6 Mar 2024 at 11:25, Shrikanth Hegde <sshegde@linux.ibm.com> wrote:
>
> The overutilized field of the root domain is only used by EAS (energy
> aware scheduler) to decide whether to do load balancing or not. It is
> not used if EAS is not possible.
>
> Currently enqueue_task_fair and task_tick_fair access, and sometimes
> update, this field. In update_sd_lb_stats it is updated often. This
> causes cache contention due to true sharing and burns a lot of cycles.
> overload and overutilized are part of the same cacheline, and updating
> overutilized often invalidates that cacheline. That slows down access to
> overload due to false sharing. Hence, add an EAS check before
> accessing/updating this field. The EAS check is optimized out at compile
> time or is a static branch, so it shouldn't cost much.
>
> With the patch, both enqueue_task_fair and newidle_balance no longer
> show up as hot routines in the perf profile.
>
> 6.8-rc4:
> 7.18%  swapper          [kernel.vmlinux]              [k] enqueue_task_fair
> 6.78%  s                [kernel.vmlinux]              [k] newidle_balance
> +patch:
> 0.14%  swapper          [kernel.vmlinux]              [k] enqueue_task_fair
> 0.00%  swapper          [kernel.vmlinux]              [k] newidle_balance
>
> Minor change: trace_sched_overutilized_tp expects its second argument to
> be bool, so do an int-to-bool conversion for it.
>
> Fixes: 2802bf3cd936 ("sched/fair: Add over-utilization/tipping point indicator")
> Reviewed-by: Qais Yousef <qyousef@layalina.io>
> Reviewed-by: Srikar Dronamraju <srikar@linux.ibm.com>
> Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
> ---
>  kernel/sched/fair.c | 62 +++++++++++++++++++++++++++++++--------------
>  1 file changed, 43 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6a16129f9a5c..997e570d9423 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6663,22 +6663,51 @@ static inline void hrtick_update(struct rq *rq)
>  #ifdef CONFIG_SMP
>  static inline bool cpu_overutilized(int cpu)
>  {
> -       unsigned long rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
> -       unsigned long rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
> +       unsigned long  rq_util_min, rq_util_max;
> +
> +       if (!sched_energy_enabled())
> +               return false;
> +
> +       rq_util_min = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MIN);
> +       rq_util_max = uclamp_rq_get(cpu_rq(cpu), UCLAMP_MAX);
>
>         /* Return true only if the utilization doesn't fit CPU's capacity */
>         return !util_fits_cpu(cpu_util_cfs(cpu), rq_util_min, rq_util_max, cpu);
>  }
>
> -static inline void update_overutilized_status(struct rq *rq)
> +static inline void set_rd_overutilized_status(struct root_domain *rd,
> +                                             unsigned int status)
>  {
> -       if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu)) {
> -               WRITE_ONCE(rq->rd->overutilized, SG_OVERUTILIZED);
> -               trace_sched_overutilized_tp(rq->rd, SG_OVERUTILIZED);
> -       }
> +       if (!sched_energy_enabled())
> +               return;
> +
> +       WRITE_ONCE(rd->overutilized, status);
> +       trace_sched_overutilized_tp(rd, !!status);
> +}
> +
> +static inline void check_update_overutilized_status(struct rq *rq)
> +{
> +       /*
> +        * overutilized field is used for load balancing decisions only
> +        * if energy aware scheduler is being used
> +        */
> +       if (!sched_energy_enabled())
> +               return;
> +
> +       if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu))
> +               set_rd_overutilized_status(rq->rd, SG_OVERUTILIZED);
>  }
>  #else
> -static inline void update_overutilized_status(struct rq *rq) { }
> +static inline void check_update_overutilized_status(struct rq *rq)
> +{
> +       return 0;
> +}

static inline void check_update_overutilized_status(struct rq *rq) { }

> +
> +static inline void set_rd_overutilized_status(struct root_domain *rd,
> +                                             unsigned int status)
> +{
> +       return 0;
> +}

static inline void set_rd_overutilized_status(struct root_domain *rd,
					      unsigned int status) { }

My comment on v4 about { return 0; } applies only to static inline int
is_rd_overutilized(struct root_domain *rd).
Also, I don't think that set_rd_overutilized_status() is called
outside #ifdef CONFIG_SMP so you can remove it.




>  #endif
>
>  /* Runqueue only has SCHED_IDLE tasks enqueued */
> @@ -6779,7 +6808,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>          * and the following generally works well enough in practice.
>          */
>         if (!task_new)
> -               update_overutilized_status(rq);
> +               check_update_overutilized_status(rq);
>
>  enqueue_throttle:
>         assert_list_leaf_cfs_rq(rq);
> @@ -10596,19 +10625,14 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>                 env->fbq_type = fbq_classify_group(&sds->busiest_stat);
>
>         if (!env->sd->parent) {
> -               struct root_domain *rd = env->dst_rq->rd;
> -
>                 /* update overload indicator if we are at root domain */
> -               WRITE_ONCE(rd->overload, sg_status & SG_OVERLOAD);
> +               WRITE_ONCE(env->dst_rq->rd->overload, sg_status & SG_OVERLOAD);
>
>                 /* Update over-utilization (tipping point, U >= 0) indicator */
> -               WRITE_ONCE(rd->overutilized, sg_status & SG_OVERUTILIZED);
> -               trace_sched_overutilized_tp(rd, sg_status & SG_OVERUTILIZED);
> +               set_rd_overutilized_status(env->dst_rq->rd,
> +                                          sg_status & SG_OVERUTILIZED);
>         } else if (sg_status & SG_OVERUTILIZED) {
> -               struct root_domain *rd = env->dst_rq->rd;
> -
> -               WRITE_ONCE(rd->overutilized, SG_OVERUTILIZED);
> -               trace_sched_overutilized_tp(rd, SG_OVERUTILIZED);
> +               set_rd_overutilized_status(env->dst_rq->rd, SG_OVERUTILIZED);
>         }
>
>         update_idle_cpu_scan(env, sum_util);
> @@ -12609,7 +12633,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
>                 task_tick_numa(rq, curr);
>
>         update_misfit_status(curr, rq);
> -       update_overutilized_status(task_rq(curr));
> +       check_update_overutilized_status(task_rq(curr));
>
>         task_tick_core(rq, curr);
>  }
> --
> 2.39.3
>


* Re: [PATCH v5 2/3] sched/fair: Use helper function to access rd->overutilized
  2024-03-06 10:24 ` [PATCH v5 2/3] sched/fair: Use helper function to access rd->overutilized Shrikanth Hegde
@ 2024-03-06 17:40   ` Vincent Guittot
  0 siblings, 0 replies; 7+ messages in thread
From: Vincent Guittot @ 2024-03-06 17:40 UTC
  To: Shrikanth Hegde
  Cc: mingo, peterz, yu.c.chen, dietmar.eggemann, linux-kernel, nysal,
	aboorvad, srikar, vschneid, pierre.gondois, qyousef

On Wed, 6 Mar 2024 at 11:25, Shrikanth Hegde <sshegde@linux.ibm.com> wrote:
>
> The overutilized field is accessed directly in multiple places, so it
> could use a helper function. That way, one is more aware that it is
> meaningful only in the EAS case.
>
> No change in functionality intended.
>
> Reviewed-by: Qais Yousef <qyousef@layalina.io>
> Signed-off-by: Shrikanth Hegde <sshegde@linux.ibm.com>
> ---
>  kernel/sched/fair.c | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 997e570d9423..e3086c8930ea 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6675,6 +6675,15 @@ static inline bool cpu_overutilized(int cpu)
>         return !util_fits_cpu(cpu_util_cfs(cpu), rq_util_min, rq_util_max, cpu);
>  }
>
> +/*
> + * Ensure that caller can do EAS. overutilized value
> + * make sense only if EAS is enabled
> + */
> +static inline int is_rd_overutilized(struct root_domain *rd)
> +{
> +       return READ_ONCE(rd->overutilized);
> +}
> +
>  static inline void set_rd_overutilized_status(struct root_domain *rd,
>                                               unsigned int status)
>  {
> @@ -6694,7 +6703,7 @@ static inline void check_update_overutilized_status(struct rq *rq)
>         if (!sched_energy_enabled())
>                 return;
>
> -       if (!READ_ONCE(rq->rd->overutilized) && cpu_overutilized(rq->cpu))
> +       if (!is_rd_overutilized(rq->rd) && cpu_overutilized(rq->cpu))
>                 set_rd_overutilized_status(rq->rd, SG_OVERUTILIZED);
>  }
>  #else
> @@ -6708,6 +6717,11 @@ static inline void set_rd_overutilized_status(struct root_domain *rd,
>  {
>         return 0;
>  }
> +
> +static inline int is_rd_overutilized(struct root_domain *rd)

I don't think that is_rd_overutilized() is called outside #ifdef
CONFIG_SMP so you can remove it.


> +{
> +       return 0;
> +}
>  #endif
>
>  /* Runqueue only has SCHED_IDLE tasks enqueued */
> @@ -7989,7 +8003,7 @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
>
>         rcu_read_lock();
>         pd = rcu_dereference(rd->pd);
> -       if (!pd || READ_ONCE(rd->overutilized))
> +       if (!pd || is_rd_overutilized(rd))
>                 goto unlock;
>
>         /*
> @@ -10872,7 +10886,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
>         if (sched_energy_enabled()) {
>                 struct root_domain *rd = env->dst_rq->rd;
>
> -               if (rcu_dereference(rd->pd) && !READ_ONCE(rd->overutilized))
> +               if (rcu_dereference(rd->pd) && !is_rd_overutilized(rd))
>                         goto out_balanced;
>         }
>
> --
> 2.39.3
>


* Re: [PATCH v5 1/3] sched/fair: Add EAS checks before updating overutilized
  2024-03-06 17:40   ` Vincent Guittot
@ 2024-03-07  4:50     ` Shrikanth Hegde
  0 siblings, 0 replies; 7+ messages in thread
From: Shrikanth Hegde @ 2024-03-07  4:50 UTC
  To: Vincent Guittot
  Cc: mingo, peterz, yu.c.chen, dietmar.eggemann, linux-kernel, nysal,
	aboorvad, srikar, vschneid, pierre.gondois, qyousef



On 3/6/24 11:10 PM, Vincent Guittot wrote:
> On Wed, 6 Mar 2024 at 11:25, Shrikanth Hegde <sshegde@linux.ibm.com> wrote:
>>  #else
>> -static inline void update_overutilized_status(struct rq *rq) { }
>> +static inline void check_update_overutilized_status(struct rq *rq)
>> +{
>> +       return 0;
>> +}
> 
> static inline void check_update_overutilized_status(struct rq *rq) { }
> 

Yes, thanks for catching that. I made a mistake there.

>> +
>> +static inline void set_rd_overutilized_status(struct root_domain *rd,
>> +                                             unsigned int status)
>> +{
>> +       return 0;
>> +}
> 
> static inline void set_rd_overutilized_status(struct rq *rq) { }
> 
> my comment on v4 about {return 0; } applies only for static inline int
> is_rd_overutilized(struct root_domain *rd)
> 
> Also, I don't think that set_rd_overutilized_status() is called
> outside #ifdef CONFIG_SMP so you can remove it.
> 

Yes, set_rd_overutilized_status() and is_rd_overutilized() or
is_rd_not_overutilized() can go away in the !CONFIG_SMP case. Let me do
that and send the next version.

> 
> 
> 


