* [GIT PULL] sched/core: scheduler patches for cmwq
From: Tejun Heo @ 2010-06-04 13:27 UTC
  To: Ingo Molnar; +Cc: Peter Zijlstra, lkml

Hello, Ingo.

Please pull from the following branch to receive four sched/core
patches preparing for cmwq.  All have been reviewed and acked by
Peter[1].  The first two patches are generic updates to when
cpu_active and cpuset configurations are updated during CPU
on/offlining.  The latter two add hardcoded hooks for cmwq (currently
no-ops).

  git://git.kernel.org/pub/scm/linux/kernel/git/tj/wq.git sched-wq

Tejun Heo (4):
      sched: define and use CPU_PRI_* enums for cpu notifier priorities
      sched: adjust when cpu_active and cpuset configurations are updated during cpu on/offlining
      sched: refactor try_to_wake_up()
      sched: add hooks for workqueue

 include/linux/cpu.h        |   25 ++++++
 include/linux/cpuset.h     |    6 ++
 include/linux/perf_event.h |    2 +-
 include/linux/sched.h      |    1 +
 kernel/cpu.c               |    6 --
 kernel/cpuset.c            |   21 +----
 kernel/fork.c              |    2 +-
 kernel/sched.c             |  205 ++++++++++++++++++++++++++++++++------------
 kernel/workqueue_sched.h   |   16 ++++
 9 files changed, 203 insertions(+), 81 deletions(-)
 create mode 100644 kernel/workqueue_sched.h

Thanks.

--
tejun

[1] http://thread.gmane.org/gmane.linux.kernel/992913

* [PATCH UPDATED] sched: adjust when cpu_active and cpuset configurations are updated during cpu on/offlining
From: Tejun Heo @ 2010-06-08 19:46 UTC
  To: Ingo Molnar; +Cc: Peter Zijlstra, lkml

Currently, when a cpu goes down, cpu_active is cleared before
CPU_DOWN_PREPARE starts and cpuset configuration is updated from a
default priority cpu notifier.  When a cpu is coming up, cpu_active
is set before CPU_ONLINE, but the cpuset configuration is again
updated from the same cpu notifier.

For cpu notifiers, this presents an inconsistent state.  Threads
which a CPU_DOWN_PREPARE notifier expects to be bound to the CPU can
be migrated to other cpus because the cpu is no longer active.

Fix it by updating cpu_active in the highest priority cpu notifier
and the cpuset configuration in the second highest when a cpu is
coming up.  The down path is updated symmetrically, with the same
pair running last during CPU_DOWN_PREPARE.  This guarantees that all
other cpu notifiers see a consistent cpu_active mask and cpuset
configuration.
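
For illustration, a sketch of the resulting notifier call order
(assuming, as notifier chains guarantee, that higher priority
notifiers run first; the function names are the ones this patch adds
below):

  CPU_ONLINE / CPU_DOWN_FAILED:
    sched_cpu_active     (CPU_PRI_SCHED_ACTIVE, runs first)
    cpuset_cpu_active    (CPU_PRI_CPUSET_ACTIVE)
    ... all other notifiers see the cpu as active ...

  CPU_DOWN_PREPARE:
    ... all other notifiers still see the cpu as active ...
    sched_cpu_inactive   (CPU_PRI_SCHED_INACTIVE)
    cpuset_cpu_inactive  (CPU_PRI_CPUSET_INACTIVE, runs last)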

The cpuset_track_online_cpus() notifier is converted to
cpuset_update_active_cpus(), which just updates the configuration and
is now called from the cpuset_cpu_[in]active() notifiers registered
from sched_init_smp().  If cpuset is disabled,
cpuset_update_active_cpus() degenerates into
partition_sched_domains(), making a separate notifier for
!CONFIG_CPUSETS unnecessary.

This problem is triggered by cmwq.  During CPU_DOWN_PREPARE, its
hotplug callback creates a kthread and kthread_bind()s it to the
target cpu, expecting the thread to keep running on that cpu.
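
As an illustration only, a minimal sketch of such a CPU_DOWN_PREPARE
callback (my_thread_fn and my_down_prepare are hypothetical names,
not cmwq's actual code; assumes the usual <linux/kthread.h>,
<linux/notifier.h>, <linux/cpu.h> and <linux/err.h> includes):

static int my_thread_fn(void *data)
{
	/* Park until stopped; the thread relies on staying bound. */
	set_current_state(TASK_INTERRUPTIBLE);
	while (!kthread_should_stop()) {
		schedule();
		set_current_state(TASK_INTERRUPTIBLE);
	}
	__set_current_state(TASK_RUNNING);
	return 0;
}

static int my_down_prepare(struct notifier_block *nb,
			   unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;
	struct task_struct *task;

	if ((action & ~CPU_TASKS_FROZEN) != CPU_DOWN_PREPARE)
		return NOTIFY_DONE;

	task = kthread_create(my_thread_fn, NULL, "my/%u", cpu);
	if (IS_ERR(task))
		return notifier_from_errno(PTR_ERR(task));
	/* The binding only sticks because, with this patch, the cpu
	 * is still active while CPU_DOWN_PREPARE notifiers run. */
	kthread_bind(task, cpu);
	wake_up_process(task);
	return NOTIFY_OK;
}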

* Ingo's test discovered __cpuinit/exit markups were incorrect.
  Fixed.

Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Paul Menage <menage@google.com>
---
The second patch incorrectly labeled sched_cpu_inactive() with
__cpuexit while unnecessarily labeling cpuset notifiers __cpuinit
instead of __cpuexit.  This triggered notifier warnings during
!HOTPLUG_CPU testing by Ingo.  Fixed.  Git tree is updated
accordingly.

Thanks.

 include/linux/cpu.h    |   16 +++++++++++
 include/linux/cpuset.h |    6 ++++
 kernel/cpu.c           |    6 ----
 kernel/cpuset.c        |   21 +-------------
 kernel/sched.c         |   67 +++++++++++++++++++++++++++++++++++------------
 5 files changed, 74 insertions(+), 42 deletions(-)

diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 2d90738..de6b172 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -52,6 +52,22 @@ struct notifier_block;
  * CPU notifier priorities.
  */
 enum {
+	/*
+	 * SCHED_ACTIVE marks a cpu which is coming up active during
+	 * CPU_ONLINE and CPU_DOWN_FAILED and must be the first
+	 * notifier.  CPUSET_ACTIVE adjusts cpuset according to
+	 * cpu_active mask right after SCHED_ACTIVE.  During
+	 * CPU_DOWN_PREPARE, SCHED_INACTIVE and CPUSET_INACTIVE are
+	 * ordered in a similar way.
+	 *
+	 * This ordering guarantees consistent cpu_active mask and
+	 * migration behavior to all cpu notifiers.
+	 */
+	CPU_PRI_SCHED_ACTIVE	= INT_MAX,
+	CPU_PRI_CPUSET_ACTIVE	= INT_MAX - 1,
+	CPU_PRI_SCHED_INACTIVE	= INT_MIN + 1,
+	CPU_PRI_CPUSET_INACTIVE	= INT_MIN,
+
 	/* migration should happen before other stuff but after perf */
 	CPU_PRI_PERF		= 20,
 	CPU_PRI_MIGRATION	= 10,
diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 457ed76..f20eb8f 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -20,6 +20,7 @@ extern int number_of_cpusets;	/* How many cpusets are defined in system? */

 extern int cpuset_init(void);
 extern void cpuset_init_smp(void);
+extern void cpuset_update_active_cpus(void);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
 extern int cpuset_cpus_allowed_fallback(struct task_struct *p);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
@@ -132,6 +133,11 @@ static inline void set_mems_allowed(nodemask_t nodemask)
 static inline int cpuset_init(void) { return 0; }
 static inline void cpuset_init_smp(void) {}

+static inline void cpuset_update_active_cpus(void)
+{
+	partition_sched_domains(1, NULL, NULL);
+}
+
 static inline void cpuset_cpus_allowed(struct task_struct *p,
 				       struct cpumask *mask)
 {
diff --git a/kernel/cpu.c b/kernel/cpu.c
index 97d1b42..f6e726f 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -235,11 +235,8 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
 		return -EINVAL;

 	cpu_hotplug_begin();
-	set_cpu_active(cpu, false);
 	err = __cpu_notify(CPU_DOWN_PREPARE | mod, hcpu, -1, &nr_calls);
 	if (err) {
-		set_cpu_active(cpu, true);
-
 		nr_calls--;
 		__cpu_notify(CPU_DOWN_FAILED | mod, hcpu, nr_calls, NULL);
 		printk("%s: attempt to take down CPU %u failed\n",
@@ -249,7 +246,6 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)

 	err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
 	if (err) {
-		set_cpu_active(cpu, true);
 		/* CPU didn't die: tell everyone.  Can't complain. */
 		cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu);

@@ -321,8 +317,6 @@ static int __cpuinit _cpu_up(unsigned int cpu, int tasks_frozen)
 		goto out_notify;
 	BUG_ON(!cpu_online(cpu));

-	set_cpu_active(cpu, true);
-
 	/* Now call notifier in preparation. */
 	cpu_notify(CPU_ONLINE | mod, hcpu);

diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 02b9611..05727dc 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -2113,31 +2113,17 @@ static void scan_for_empty_cpusets(struct cpuset *root)
  * but making no active use of cpusets.
  *
  * This routine ensures that top_cpuset.cpus_allowed tracks
- * cpu_online_map on each CPU hotplug (cpuhp) event.
+ * cpu_active_mask on each CPU hotplug (cpuhp) event.
  *
  * Called within get_online_cpus().  Needs to call cgroup_lock()
  * before calling generate_sched_domains().
  */
-static int cpuset_track_online_cpus(struct notifier_block *unused_nb,
-				unsigned long phase, void *unused_cpu)
+void __cpuexit cpuset_update_active_cpus(void)
 {
 	struct sched_domain_attr *attr;
 	cpumask_var_t *doms;
 	int ndoms;

-	switch (phase) {
-	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
-	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
-	case CPU_DOWN_FAILED:
-	case CPU_DOWN_FAILED_FROZEN:
-		break;
-
-	default:
-		return NOTIFY_DONE;
-	}
-
 	cgroup_lock();
 	mutex_lock(&callback_mutex);
 	cpumask_copy(top_cpuset.cpus_allowed, cpu_active_mask);
@@ -2148,8 +2134,6 @@ static int cpuset_track_online_cpus(struct notifier_block *unused_nb,

 	/* Have scheduler rebuild the domains */
 	partition_sched_domains(ndoms, doms, attr);
-
-	return NOTIFY_OK;
 }

 #ifdef CONFIG_MEMORY_HOTPLUG
@@ -2203,7 +2187,6 @@ void __init cpuset_init_smp(void)
 	cpumask_copy(top_cpuset.cpus_allowed, cpu_active_mask);
 	top_cpuset.mems_allowed = node_states[N_HIGH_MEMORY];

-	hotcpu_notifier(cpuset_track_online_cpus, 0);
 	hotplug_memory_notifier(cpuset_track_online_nodes, 10);

 	cpuset_wq = create_singlethread_workqueue("cpuset");
diff --git a/kernel/sched.c b/kernel/sched.c
index 552faf8..2b942e4 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5804,17 +5804,46 @@ static struct notifier_block __cpuinitdata migration_notifier = {
 	.priority = CPU_PRI_MIGRATION,
 };

+static int __cpuinit sched_cpu_active(struct notifier_block *nfb,
+				      unsigned long action, void *hcpu)
+{
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_ONLINE:
+	case CPU_DOWN_FAILED:
+		set_cpu_active((long)hcpu, true);
+		return NOTIFY_OK;
+	default:
+		return NOTIFY_DONE;
+	}
+}
+
+static int __cpuinit sched_cpu_inactive(struct notifier_block *nfb,
+					unsigned long action, void *hcpu)
+{
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_DOWN_PREPARE:
+		set_cpu_active((long)hcpu, false);
+		return NOTIFY_OK;
+	default:
+		return NOTIFY_DONE;
+	}
+}
+
 static int __init migration_init(void)
 {
 	void *cpu = (void *)(long)smp_processor_id();
 	int err;

-	/* Start one for the boot CPU: */
+	/* Initialize migration for the boot CPU */
 	err = migration_call(&migration_notifier, CPU_UP_PREPARE, cpu);
 	BUG_ON(err == NOTIFY_BAD);
 	migration_call(&migration_notifier, CPU_ONLINE, cpu);
 	register_cpu_notifier(&migration_notifier);

+	/* Register cpu active notifiers */
+	cpu_notifier(sched_cpu_active, CPU_PRI_SCHED_ACTIVE);
+	cpu_notifier(sched_cpu_inactive, CPU_PRI_SCHED_INACTIVE);
+
 	return 0;
 }
 early_initcall(migration_init);
@@ -7273,29 +7302,35 @@ int __init sched_create_sysfs_power_savings_entries(struct sysdev_class *cls)
 }
 #endif /* CONFIG_SCHED_MC || CONFIG_SCHED_SMT */

-#ifndef CONFIG_CPUSETS
 /*
- * Add online and remove offline CPUs from the scheduler domains.
- * When cpusets are enabled they take over this function.
+ * Update cpusets according to cpu_active mask.  If cpusets are
+ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
+ * around partition_sched_domains().
  */
-static int update_sched_domains(struct notifier_block *nfb,
-				unsigned long action, void *hcpu)
+static int __cpuexit cpuset_cpu_active(struct notifier_block *nfb,
+				       unsigned long action, void *hcpu)
 {
-	switch (action) {
+	switch (action & ~CPU_TASKS_FROZEN) {
 	case CPU_ONLINE:
-	case CPU_ONLINE_FROZEN:
-	case CPU_DOWN_PREPARE:
-	case CPU_DOWN_PREPARE_FROZEN:
 	case CPU_DOWN_FAILED:
-	case CPU_DOWN_FAILED_FROZEN:
-		partition_sched_domains(1, NULL, NULL);
+		cpuset_update_active_cpus();
 		return NOTIFY_OK;
+	default:
+		return NOTIFY_DONE;
+	}
+}

+static int __cpuexit cpuset_cpu_inactive(struct notifier_block *nfb,
+					 unsigned long action, void *hcpu)
+{
+	switch (action & ~CPU_TASKS_FROZEN) {
+	case CPU_DOWN_PREPARE:
+		cpuset_update_active_cpus();
+		return NOTIFY_OK;
 	default:
 		return NOTIFY_DONE;
 	}
 }
-#endif

 static int update_runtime(struct notifier_block *nfb,
 				unsigned long action, void *hcpu)
@@ -7341,10 +7376,8 @@ void __init sched_init_smp(void)
 	mutex_unlock(&sched_domains_mutex);
 	put_online_cpus();

-#ifndef CONFIG_CPUSETS
-	/* XXX: Theoretical race here - CPU may be hotplugged now */
-	hotcpu_notifier(update_sched_domains, 0);
-#endif
+	hotcpu_notifier(cpuset_cpu_active, CPU_PRI_CPUSET_ACTIVE);
+	hotcpu_notifier(cpuset_cpu_inactive, CPU_PRI_CPUSET_INACTIVE);

 	/* RT runtime code needs to handle some hotplug events */
 	hotcpu_notifier(update_runtime, 0);
-- 
1.6.4.2


* Re: [PATCH UPDATED] sched: adjust when cpu_active and cpuset configurations are updated during cpu on/offlining
From: Tony Luck @ 2010-06-21 18:28 UTC
  To: Tejun Heo; +Cc: Ingo Molnar, Peter Zijlstra, lkml

On Tue, Jun 8, 2010 at 12:46 PM, Tejun Heo <tj@kernel.org> wrote:
> * Ingo's test discovered __cpuinit/exit markups were incorrect.
>  Fixed.

No it isn't :-(

> +static int __cpuexit cpuset_cpu_active(struct notifier_block *nfb,
> +                                      unsigned long action, void *hcpu)
...
> +static int __cpuexit cpuset_cpu_inactive(struct notifier_block *nfb,
> +                                        unsigned long action, void *hcpu)

This patch arrived in linux-next (tag next-20100621) and breaks the
ia64 build for configurations where CONFIG_HOTPLUG_CPU=n
with the following cryptic error:

`.cpuexit.text' referenced in section `.IA_64.unwind.cpuexit.text' of
kernel/built-in.o: defined in discarded section `.cpuexit.text' of
kernel/built-in.o

This is because the ia64 link stage drops __exit functions from
built-in code (on the grounds that they can never be called).

Is the problem in the !CONFIG_HOTPLUG_CPU definition of
hotcpu_notifier() in <linux/cpu.h>, which still references the
function argument:

#define hotcpu_notifier(fn, pri)        do { (void)(fn); } while (0)

Or should these functions not be marked __cpuexit?
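
For concreteness, the two candidate fixes would look roughly like
this (a sketch only, not taken from any actual patch):

/* (a) make the stub drop the reference, at the cost of
 * "defined but not used" warnings for the callbacks: */
#define hotcpu_notifier(fn, pri)	do { } while (0)

/* (b) keep the reference but mark the callbacks __cpuinit, which
 * !HOTPLUG_CPU builds already treat as freeable init text: */
static int __cpuinit cpuset_cpu_active(struct notifier_block *nfb,
				       unsigned long action, void *hcpu);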

-Tony

* Re: [PATCH UPDATED] sched: adjust when cpu_active and cpuset configurations are updated during cpu on/offlining
From: Tejun Heo @ 2010-06-21 20:55 UTC
  To: Tony Luck; +Cc: Ingo Molnar, Peter Zijlstra, lkml

Hello,

On 06/21/2010 08:28 PM, Tony Luck wrote:
> On Tue, Jun 8, 2010 at 12:46 PM, Tejun Heo <tj@kernel.org> wrote:
>> * Ingo's test discovered __cpuinit/exit markups were incorrect.
>>  Fixed.
> 
> No it isn't :-(

Ah, sorry, my original patch was broken on x86 too, so...

>> +static int __cpuexit cpuset_cpu_active(struct notifier_block *nfb,
>> +                                      unsigned long action, void *hcpu)
> ...
>> +static int __cpuexit cpuset_cpu_inactive(struct notifier_block *nfb,
>> +                                        unsigned long action, void *hcpu)
> 
> This patch arrived in linux-next (tag next-20100621) and breaks the
> ia64 build for configurations where CONFIG_HOTPLUG_CPU=n
> with the following cryptic error:
> 
> `.cpuexit.text' referenced in section `.IA_64.unwind.cpuexit.text' of
> kernel/built-in.o: defined in discarded section `.cpuexit.text' of
> kernel/built-in.o
> 
> This is because ia64 link stage drops __exit functions from
> built-in code (under the logic that they can never be called).
> 
> Is the problem in the !CONFIG_HOTPLUG_CPU definition of
> hotcpu_notifier() in <linux/cpu.h> which still references the
> function argument:
> 
> #define hotcpu_notifier(fn, pri)        do { (void)(fn); } while (0)
> 
> Or should these functions not be marked __cpuexit?

I see.  I think the right solution is removing __cpuexit but it's kind
of silly to have different rules on different architectures.  On x86,
__cpuexit currently means "you can drop it if you're not gonna be
removing cpus after system boot"; IOW, __cpuexit is a strict subset of
__cpuinit.  If you define it as "don't include it in the text at all
if cpus are not gonna be removed", it actually forces you to carry
more text in the running system.  Is there any reason ia64 drops them
during linking?
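
Roughly, the mechanism in question (a sketch; the exact attributes
and linker script details are elided):

/* <linux/init.h>: cpu hotplug code always lands in dedicated
 * sections ... */
#define __cpuinit	__section(.cpuinit.text)
#define __cpuexit	__section(.cpuexit.text)

/* ... and with CONFIG_HOTPLUG_CPU=n each arch's linker script
 * decides their fate: ia64 discards .cpuexit.text at link time,
 * x86 folds it into init text, which is freed after boot. */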

Thanks.

-- 
tejun

* Re: [PATCH UPDATED] sched: adjust when cpu_active and cpuset configurations are updated during cpu on/offlining
From: Tony Luck @ 2010-06-21 21:15 UTC
  To: Tejun Heo; +Cc: Ingo Molnar, Peter Zijlstra, lkml

On Mon, Jun 21, 2010 at 1:55 PM, Tejun Heo <tj@kernel.org> wrote:
> I see.  I think the right solution is removing __cpuexit but it's kind
> of silly to have different rules on different architectures.  On x86,
> __cpuexit currently means "you can drop it if you're not gonna be
> removing cpus after system boot"; IOW, __cpuexit is strict subset of
> __cpuinit.  If you define it as "don't include it in the text at all
> if cpus are not gonna be removed", it actually forces you to carry
> more text in the running system.  Is there any reason ia64 drops them
> during linking?

The history is that __exit functions are those that are called on module
unload.  When a driver is built-in to the kernel, it can obviously never
be unloaded. Therefore the __exit code must just be bloat for the built-in
case.

A system built with CONFIG_HOTPLUG_CPU=n meets the requirement
that cpus will not be removed after system boot. So why do I need to
include the __cpuexit code that should only be used to remove cpus?

-Tony

* Re: [PATCH UPDATED] sched: adjust when cpu_active and cpuset configurations are updated during cpu on/offlining
From: Tejun Heo @ 2010-06-21 21:20 UTC
  To: Tony Luck; +Cc: Ingo Molnar, Peter Zijlstra, lkml

Hello,

On 06/21/2010 11:15 PM, Tony Luck wrote:
> The history is that __exit functions are those that are called on module
> unload.  When a driver is built-in to the kernel, it can obviously never
> be unloaded. Therefore the __exit code must just be bloat for the built-in
> case.

I see.

> A system built with CONFIG_HOTPLUG_CPU=n meets the requirement
> that cpus will not be removed after system boot. So why do I need to
> include the __cpuexit code that should only be used to remove cpus?

I'm primarily curious why different archs are doing things
differently, which causes confusion and reduces test coverage.  Also,
if you just think about the end result, what x86 is doing makes more
sense.  Although it may end up with a larger kernel image, it
actually allows more to be dropped once init is complete.  Anyway,
I'll send a patch to change those __cpuexit's to __cpuinit's.
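
The change would look roughly like this (a sketch, not the actual
posting):

-static int __cpuexit cpuset_cpu_active(struct notifier_block *nfb,
+static int __cpuinit cpuset_cpu_active(struct notifier_block *nfb,
 				       unsigned long action, void *hcpu)

-static int __cpuexit cpuset_cpu_inactive(struct notifier_block *nfb,
+static int __cpuinit cpuset_cpu_inactive(struct notifier_block *nfb,
 					 unsigned long action, void *hcpu)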

Thanks.

-- 
tejun

* Re: [PATCH UPDATED] sched: adjust when cpu_active and cpuset configurations are updated during cpu on/offlining
From: Tony Luck @ 2010-06-21 21:46 UTC
  To: Tejun Heo; +Cc: Ingo Molnar, Peter Zijlstra, lkml

On Mon, Jun 21, 2010 at 2:20 PM, Tejun Heo <tj@kernel.org> wrote:
> I'm primarily curious why different archs are doing things
> differently, which causes confusion and reduces test coverage.  Also,
> if you just think about the end result, what x86 is doing makes more
> sense.  Although it may end up with larger kernel image, it actually
> allows more to be dropped once init is complete.

It allows x86 to drop some code that it never needed in the first place.

I don't think that is better :-)

Maybe someone from x86-land can explain why they *keep* __exit
code, as they are the ones doing it wrong (/me ducks, runs and hides).

-Tony

* Re: [PATCH UPDATED] sched: adjust when cpu_active and cpuset configurations are updated during cpu on/offlining
From: Tejun Heo @ 2010-06-21 22:02 UTC
  To: Tony Luck; +Cc: Ingo Molnar, Peter Zijlstra, lkml

Hello,

On 06/21/2010 11:46 PM, Tony Luck wrote:
> On Mon, Jun 21, 2010 at 2:20 PM, Tejun Heo <tj@kernel.org> wrote:
>> I'm primarily curious why different archs are doing things
>> differently, which causes confusion and reduces test coverage.  Also,
>> if you just think about the end result, what x86 is doing makes more
>> sense.  Although it may end up with larger kernel image, it actually
>> allows more to be dropped once init is complete.
> 
> It allows x86 to drop some code that it never needed in the first place.
> 
> i don't think that is better :-)
> 
> Maybe someone from x86-land can explain why they *keep* __exit
> code as they are the ones doing it wrong (/me ducks, runs and hides)

Oh, it can actually drop more.  Please consider the following classes.

1. Stuff which is used only during system init.
2. Stuff which is used during system init or cpu hotplug.
3. Stuff which is used only during cpu hotplug.

The ia64 way can express #1 and #3, the x86 way #1 and #2.  #2 is a
superset of #3.  So, once init is complete, the x86 way can drop the
larger set.  What matters
is the memory consumption once init is complete, not the image size.
Anyway, in the end, the difference isn't really meaningful, but I do
think that it would be far better to have unified behavior across
different architectures, one way or the other.

Thanks.

-- 
tejun
