From: Will Deacon <will@kernel.org>
To: Valentin Schneider <valentin.schneider@arm.com>
Cc: linux-arm-kernel@lists.infradead.org, linux-arch@vger.kernel.org,
	linux-kernel@vger.kernel.org, Catalin Marinas <catalin.marinas@arm.com>,
	Marc Zyngier <maz@kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Morten Rasmussen <morten.rasmussen@arm.com>,
	Qais Yousef <qais.yousef@arm.com>,
	Suren Baghdasaryan <surenb@google.com>,
	Quentin Perret <qperret@google.com>, Tejun Heo <tj@kernel.org>,
	Johannes Weiner <hannes@cmpxchg.org>, Ingo Molnar <mingo@redhat.com>,
	Juri Lelli <juri.lelli@redhat.com>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Dietmar Eggemann <dietmar.eggemann@arm.com>,
	Daniel Bristot de Oliveira <bristot@redhat.com>,
	kernel-team@android.com, Li Zefan <lizefan@huawei.com>
Subject: Re: [PATCH v8 06/19] cpuset: Don't use the cpu_possible_mask as a last resort for cgroup v1
Date: Mon, 7 Jun 2021 18:20:42 +0100
Message-ID: <20210607172042.GB7650@willie-the-truck>
In-Reply-To: <877dj9ees8.mognet@arm.com>

On Fri, Jun 04, 2021 at 06:11:03PM +0100, Valentin Schneider wrote:
> On 02/06/21 17:47, Will Deacon wrote:
> > @@ -3322,9 +3322,13 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
> >
> >  void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
> >  {
> > +	const struct cpumask *cs_mask;
> > +	const struct cpumask *possible_mask = task_cpu_possible_mask(tsk);
> > +
> >  	rcu_read_lock();
> > -	do_set_cpus_allowed(tsk, is_in_v2_mode() ?
> > -			    task_cs(tsk)->cpus_allowed : cpu_possible_mask);
> > +	cs_mask = task_cs(tsk)->cpus_allowed;
> > +	if (is_in_v2_mode() && cpumask_subset(cs_mask, possible_mask))
> > +		do_set_cpus_allowed(tsk, cs_mask);
>
> Since the task will still go through the is_cpu_allowed() loop in
> select_fallback_rq() after this, is the subset check actually required
> here?

Yes, I think it's needed.
do_set_cpus_allowed() doesn't do any checking against the
task_cpu_possible_mask, so if we returned to select_fallback_rq() with a
mask containing a mixture of 32-bit-capable and 64-bit-only CPUs then
we'd end up setting an affinity mask for a 32-bit task which contains
64-bit-only cores.

> It would have more merit if cpuset_cpus_allowed_fallback() returned whether
> it actually changed the allowed mask or not, in which case we could branch
> either to the is_cpu_allowed() loop (as we do unconditionally now), or to
> the 'state == possible' switch case.

I think this is a cleanup, so I can include it as a separate patch (see
below).

Will

--->8

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 414a8e694413..d2b9c41c8edf 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -59,7 +59,7 @@ extern void cpuset_wait_for_hotplug(void);
 extern void cpuset_read_lock(void);
 extern void cpuset_read_unlock(void);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
-extern void cpuset_cpus_allowed_fallback(struct task_struct *p);
+extern bool cpuset_cpus_allowed_fallback(struct task_struct *p);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
 #define cpuset_current_mems_allowed (current->mems_allowed)
 void cpuset_init_current_mems_allowed(void);
@@ -188,8 +188,9 @@ static inline void cpuset_cpus_allowed(struct task_struct *p,
 	cpumask_copy(mask, task_cpu_possible_mask(p));
 }
 
-static inline void cpuset_cpus_allowed_fallback(struct task_struct *p)
+static inline bool cpuset_cpus_allowed_fallback(struct task_struct *p)
 {
+	return false;
 }
 
 static inline nodemask_t cpuset_mems_allowed(struct task_struct *p)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 4e7c271e3800..a6bab2259f98 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -3327,17 +3327,22 @@ void cpuset_cpus_allowed(struct task_struct *tsk, struct cpumask *pmask)
  * which will not contain a sane cpumask during cases such as cpu hotplugging.
  * This is the absolute last resort for the scheduler and it is only used if
  * _every_ other avenue has been traveled.
+ *
+ * Returns true if the affinity of @tsk was changed, false otherwise.
  **/
-void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
+bool cpuset_cpus_allowed_fallback(struct task_struct *tsk)
 {
 	const struct cpumask *cs_mask;
+	bool changed = false;
 	const struct cpumask *possible_mask = task_cpu_possible_mask(tsk);
 
 	rcu_read_lock();
 	cs_mask = task_cs(tsk)->cpus_allowed;
-	if (is_in_v2_mode() && cpumask_subset(cs_mask, possible_mask))
+	if (is_in_v2_mode() && cpumask_subset(cs_mask, possible_mask)) {
 		do_set_cpus_allowed(tsk, cs_mask);
+		changed = true;
+	}
 	rcu_read_unlock();
 
 	/*
@@ -3357,6 +3362,7 @@ void cpuset_cpus_allowed_fallback(struct task_struct *tsk)
 	 * select_fallback_rq() will fix things ups and set cpu_possible_mask
 	 * if required.
 	 */
+	return changed;
 }
 
 void __init cpuset_init_current_mems_allowed(void)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fc7de4f955cf..9d7a74a07632 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2951,8 +2951,7 @@ static int select_fallback_rq(int cpu, struct task_struct *p)
 		/* No more Mr. Nice Guy. */
 		switch (state) {
 		case cpuset:
-			if (IS_ENABLED(CONFIG_CPUSETS)) {
-				cpuset_cpus_allowed_fallback(p);
+			if (cpuset_cpus_allowed_fallback(p)) {
 				state = possible;
 				break;
 			}