
Commit 88bd663

KAGA-KOKO authored and hemantbeast committed
[backport] Hotplug thread infrastructure
This backport squashes the following upstream patches:

smp: Add generic smpboot facility

Start a new file, which will hold SMP and CPU hotplug related generic
infrastructure.

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Srivatsa S. Bhat <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: Jesper Nilsson <[email protected]>
Cc: Richard Kuo <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Hirokazu Takata <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: David Howells <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Change-Id: Ia1ad435435aa12c47ac0d381ae031ebf6edcff1f

smp: Provide generic idle thread allocation

All SMP architectures have magic to fork the idle task and to store it
for reuse when cpu hotplug is enabled. Provide a generic infrastructure
for it.

Create/reinit the idle thread for the cpu which is brought up in the
generic code and hand the thread pointer to the architecture code via
__cpu_up().

Note that fork_idle() is called via a workqueue, because this guarantees
that the idle thread does not get a reference to a user space VM. This
can happen when the boot process did not bring up all possible cpus and
a later cpu_up() is initiated via the sysfs interface. In that case
fork_idle() would be called in the context of the user space task and
take a reference on the user space VM.

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Srivatsa S. Bhat <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Mike Frysinger <[email protected]>
Cc: Jesper Nilsson <[email protected]>
Cc: Richard Kuo <[email protected]>
Cc: Tony Luck <[email protected]>
Cc: Hirokazu Takata <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: David Howells <[email protected]>
Cc: James E.J. Bottomley <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Paul Mundt <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Chris Metcalf <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: [email protected]
Acked-by: Venkatesh Pallipadi <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Change-Id: Ie2d32789f3a69ee15f38ba704aaa84d6be85bcd4

smp, idle: Allocate idle thread for each possible cpu during boot

percpu areas are already allocated during boot for each possible cpu.
percpu idle threads can be considered an extension of the percpu areas,
so allocate them for each possible cpu during boot as well.

This will eliminate the need for workqueue based idle thread allocation.
In future we can move the idle thread area into the percpu area too.

[ tglx: Moved the loop into smpboot.c and added an error check when
  the init code failed to allocate an idle thread for a cpu which
  should be onlined ]

Signed-off-by: Suresh Siddha <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Srivatsa S. Bhat <[email protected]>
Cc: Tejun Heo <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: I36828165fc08b7c0a8a0fe6a2aa24d358e623dd2

smpboot, idle: Optimize calls to smp_processor_id() in idle_threads_init()

While trying to initialize idle threads for all cpus,
idle_threads_init() calls smp_processor_id() in a loop, which is
unnecessary. The intent is to initialize idle threads for all non-boot
cpus. So just use a variable to note the boot cpu and use it in the
loop.

Signed-off-by: Srivatsa S. Bhat <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: Ib65df4c31e93e1622c26f2c2a4946ffd28c1839d

smpboot, idle: Fix comment mismatch over idle_threads_init()

The comment over idle_threads_init() really talks about the
functionality of idle_init(). Move that comment to idle_init(), and add
a suitable comment over idle_threads_init().

Signed-off-by: Srivatsa S. Bhat <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: Ib0cd6d6e19e0c64868a42a77101b080a5f3b04f8

kthread_worker: reorganize to prepare for flush_kthread_work() reimplementation

Make the following two non-functional changes.

* Separate out insert_kthread_work() from queue_kthread_work().

* Relocate struct kthread_flush_work and kthread_flush_work_fn()
  definitions above flush_kthread_work().

v2: Added lockdep_assert_held() in insert_kthread_work() as suggested
    by Andy Walls.

Signed-off-by: Tejun Heo <[email protected]>
Acked-by: Andy Walls <[email protected]>
Change-Id: Ie1eef2c000c328ec16f32db011377415237da93d

kthread_worker: reimplement flush_kthread_work() to allow freeing the work item being executed

kthread_worker provides a minimalistic workqueue-like interface for
users which need a dedicated worker thread (e.g. for realtime priority).
It has basic queue, flush_work, flush_worker operations which mostly
match the workqueue counterparts; however, due to the way flush_work()
is implemented, it has a noticeable difference of not allowing work
items to be freed while being executed.

While the current users of kthread_worker are okay with the current
behavior, the restriction does impede some valid use cases. Also,
removing this difference isn't difficult and actually makes the code
easier to understand.

This patch reimplements flush_kthread_work() such that it uses a
flush_work item instead of queue/done sequence numbers.

Signed-off-by: Tejun Heo <[email protected]>
Change-Id: I06e2ab5ef8ea3caa8e40257da0a636ab9eb5ae55

kthread: Implement park/unpark facility

To avoid the full teardown/setup of per cpu kthreads in the case of cpu
hot(un)plug, provide a facility which allows putting the kthread into a
park position and unparking it when the cpu comes online again.

Signed-off-by: Thomas Gleixner <[email protected]>
Reviewed-by: Namhyung Kim <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Reviewed-by: Srivatsa S. Bhat <[email protected]>
Cc: Rusty Russell <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: I05d28788540b666349bafecf6cb3fdc873b6cdde

smpboot: Provide infrastructure for percpu hotplug threads

Provide a generic interface for setting up and tearing down percpu
threads. On registration the threads for already online cpus are created
and started. On deregistration (modules) the threads are stopped.

During hotplug operations the threads are created, started, parked and
unparked. The data structure for registration provides a pointer to
percpu storage space and optional setup, cleanup, park, unpark
functions. These functions are called when the thread state changes.

Each implementation has to provide a function which is queried and
returns whether the thread should run, plus the thread function itself.
The core code handles all state transitions and avoids duplicated code
in the call sites.

[ paulmck: Preemption leak fix ]

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Reviewed-by: Srivatsa S. Bhat <[email protected]>
Cc: Rusty Russell <[email protected]>
Reviewed-by: Paul E. McKenney <[email protected]>
Cc: Namhyung Kim <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: Ib2ac667cd13cf26a042d65c1b3f20fe7e4b02423

hotplug: Fix UP bug in smpboot hotplug code

Because kernel subsystems need their per-CPU kthreads on UP systems as
well as on SMP systems, the smpboot hotplug kthread functions must be
provided in UP builds as well as in SMP builds. This commit therefore
adds smpboot.c to UP builds and excludes irrelevant code via #ifdef.

Signed-off-by: Paul E. McKenney <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: I7b570d6c241c513227c3fdc1d843bf369bed036c

smpboot: Allow selfparking per cpu threads

The stop machine threads are still killed when a cpu goes offline. The
reason is that the thread is used to bring the cpu down, so it can't be
parked along with the other per cpu threads. Allow a per cpu thread to
be excluded from automatic parking, so it can park itself once it's
done. Add a create callback function as well.

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Paul McKenney <[email protected]>
Cc: Srivatsa S. Bhat <[email protected]>
Cc: Arjan van de Veen <[email protected]>
Cc: Paul Turner <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Magnus Damm <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: I864f39336a2cb648c518526459929c081f831216

kthread: Prevent unpark race which puts threads on the wrong cpu

The smpboot threads rely on the park/unpark mechanism which binds per
cpu threads on a particular core. However, the functionality is racy:

    CPU0                      CPU1                 CPU2
    unpark(T)                                      wake_up_process(T)
      clear(SHOULD_PARK)      T runs
                              leave parkme() due to !SHOULD_PARK
      bind_to(CPU2)           BUG_ON(wrong CPU)

We cannot let the tasks move themselves to the target CPU, as one of
those tasks is actually the migration thread itself, which requires that
it starts running on the target cpu right away.

The solution to this problem is to prevent wakeups in park mode which
are not from unpark(). That way we can guarantee that the association of
the task to the target cpu is working correctly. Add a new task state
(TASK_PARKED) which prevents other wakeups and use this state explicitly
for the unpark wakeup.

Peter noticed: Also, since the task state is visible to userspace and
all the parked tasks are still in the PID space, it's a good hint in ps
and friends that these tasks aren't really there for the moment.

The migration thread has another related issue.

    CPU0                      CPU1
    Bring up CPU2
    create_thread(T)
    park(T)
      wait_for_completion()
                              parkme()
                              complete()
    sched_set_stop_task()
                              schedule(TASK_PARKED)

The sched_set_stop_task() call is issued while the task is on the
runqueue of CPU1 and that confuses the hell out of the stop_task class
on that cpu. So we need the same synchronization before
sched_set_stop_task().

Reported-by: Dave Jones <[email protected]>
Reported-and-tested-by: Dave Hansen <[email protected]>
Reported-and-tested-by: Borislav Petkov <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: Srivatsa S. Bhat <[email protected]>
Cc: [email protected]
Cc: Ingo Molnar <[email protected]>
Cc: [email protected]
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1304091635430.21884@ionos
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: If1e9993951c4ad1f6f35ad0698f6ccd05a67e81f

stop_machine: Store task reference in a separate per cpu variable

To allow the stopper thread to be managed by the smpboot thread
infrastructure, separate out the task storage from the stopper data
structure.

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Paul McKenney <[email protected]>
Cc: Srivatsa S. Bhat <[email protected]>
Cc: Arjan van de Veen <[email protected]>
Cc: Paul Turner <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Magnus Damm <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: Ibfe2389e42fcf2e236940bbc223a36da571ed6e9

stop_machine: Use smpboot threads

Use the smpboot thread infrastructure. Mark the stopper thread
selfparking and park it after it has finished the take_cpu_down() work.

Signed-off-by: Thomas Gleixner <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Rusty Russell <[email protected]>
Cc: Paul McKenney <[email protected]>
Cc: Srivatsa S. Bhat <[email protected]>
Cc: Arjan van de Veen <[email protected]>
Cc: Paul Turner <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Magnus Damm <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: I30771810f2cbb2a64ca090864156edc79d338dfd

stop_machine: Mark per cpu stopper enabled early

commit 14e568e78 (stop_machine: Use smpboot threads) introduced the
following regression:

Before this commit the stopper enabled bit was set in the online
notifier.

    CPU0                      CPU1
    cpu_up
                              cpu online
    hotplug_notifier(ONLINE)
      stopper(CPU1)->enabled = true;
    ...
    stop_machine()

The conversion to smpboot threads moved the enablement to the wakeup
path of the parked thread. The majority of users seem to have the
following working order:

    CPU0                      CPU1
    cpu_up
                              cpu online
    unpark_threads()
      wakeup(stopper[CPU1])
    ....
                              stopper thread runs
                                stopper(CPU1)->enabled = true;
    stop_machine()

But Konrad and Sander have observed:

    CPU0                      CPU1
    cpu_up
                              cpu online
    unpark_threads()
      wakeup(stopper[CPU1])
    ....
    stop_machine()
                              stopper thread runs
                                stopper(CPU1)->enabled = true;

Now the stop machinery kicks CPU0 into the stop loop, where it gets
stuck forever because the queue code saw stopper(CPU1)->enabled ==
false, so CPU0 waits for CPU1 to enter stop_machine(), but the CPU1
stopper work got discarded due to enabled == false.

Add a pre_unpark function to the smpboot thread descriptor and call it
before waking the thread.

This fixes the problem at hand, but the stop_machine code should be more
robust. The stopper->enabled flag smells fishy at best.

Thanks to Konrad for going through a loop of debug patches and providing
the information to decode this issue.

Reported-and-tested-by: Konrad Rzeszutek Wilk <[email protected]>
Reported-and-tested-by: Sander Eikelenboom <[email protected]>
Cc: Srivatsa S. Bhat <[email protected]>
Cc: Rusty Russell <[email protected]>
Link: http://lkml.kernel.org/r/alpine.LFD.2.02.1302261843240.22263@ionos
Signed-off-by: Thomas Gleixner <[email protected]>
Change-Id: Iaff8824879eb21552fc9e46e259b604dfce113bc

Signed-off-by: tarun93 <[email protected]>
1 parent 2d27442 commit 88bd663

File tree

15 files changed: +723 -167 lines changed

arch/Kconfig (3 additions, 0 deletions)

@@ -148,6 +148,9 @@ config HAVE_DMA_CONTIGUOUS
 config USE_GENERIC_SMP_HELPERS
 	bool
 
+config GENERIC_SMP_IDLE_THREAD
+	bool
+
 config HAVE_REGS_AND_STACK_ACCESS_API
 	bool
 	help

arch/arm/Kconfig (1 addition, 0 deletions)

@@ -36,6 +36,7 @@ config ARM
 	select CPU_PM if (SUSPEND || CPU_IDLE)
 	select GENERIC_PCI_IOMAP
 	select HAVE_BPF_JIT if NET
+	select GENERIC_SMP_IDLE_THREAD
 	help
 	  The ARM series is a line of low-power-consumption RISC chip designs
 	  licensed by ARM Ltd and targeted at embedded applications and

fs/proc/array.c (1 addition, 0 deletions)

@@ -142,6 +142,7 @@ static const char * const task_state_array[] = {
 	"x (dead)",		/* 64 */
 	"K (wakekill)",		/* 128 */
 	"W (waking)",		/* 256 */
+	"P (parked)",		/* 512 */
 };
 
 static inline const char *get_task_state(struct task_struct *tsk)

include/linux/kthread.h (12 additions, 7 deletions)

@@ -14,6 +14,11 @@ struct task_struct *kthread_create_on_node(int (*threadfn)(void *data),
 	kthread_create_on_node(threadfn, data, -1, namefmt, ##arg)
 
 
+struct task_struct *kthread_create_on_cpu(int (*threadfn)(void *data),
+					  void *data,
+					  unsigned int cpu,
+					  const char *namefmt);
+
 /**
  * kthread_run - create and wake a thread.
  * @threadfn: the function to run until signal_pending(current).
@@ -34,9 +39,13 @@ struct task_struct *kthread_create_on_node(int (*threadfn)(void *data),
 
 void kthread_bind(struct task_struct *k, unsigned int cpu);
 int kthread_stop(struct task_struct *k);
-int kthread_should_stop(void);
+bool kthread_should_stop(void);
+bool kthread_should_park(void);
 bool kthread_freezable_should_stop(bool *was_frozen);
 void *kthread_data(struct task_struct *k);
+int kthread_park(struct task_struct *k);
+void kthread_unpark(struct task_struct *k);
+void kthread_parkme(void);
 
 int kthreadd(void *unused);
 extern struct task_struct *kthreadd_task;
@@ -49,8 +58,6 @@ extern int tsk_fork_get_node(struct task_struct *tsk);
  * can be queued and flushed using queue/flush_kthread_work()
  * respectively.  Queued kthread_works are processed by a kthread
  * running kthread_worker_fn().
- *
- * A kthread_work can't be freed while it is executing.
  */
 struct kthread_work;
 typedef void (*kthread_work_func_t)(struct kthread_work *work);
@@ -59,15 +66,14 @@ struct kthread_worker {
 	spinlock_t		lock;
 	struct list_head	work_list;
 	struct task_struct	*task;
+	struct kthread_work	*current_work;
 };
 
 struct kthread_work {
 	struct list_head	node;
 	kthread_work_func_t	func;
 	wait_queue_head_t	done;
-	atomic_t		flushing;
-	int			queue_seq;
-	int			done_seq;
+	struct kthread_worker	*worker;
 };
 
 #define KTHREAD_WORKER_INIT(worker)	{				\
@@ -79,7 +85,6 @@ struct kthread_work {
 	.node = LIST_HEAD_INIT((work).node),				\
 	.func = (fn),							\
 	.done = __WAIT_QUEUE_HEAD_INITIALIZER((work).done),		\
-	.flushing = ATOMIC_INIT(0),					\
 	}
 
 #define DEFINE_KTHREAD_WORKER(worker)					\
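The parking primitives declared above are used in pairs: a controller calls kthread_park()/kthread_unpark() on the task, while the thread itself polls kthread_should_park() and enters the parked state via kthread_parkme(). A minimal sketch of such a thread function (demo_kthread is a hypothetical name; this is kernel-tree code and not buildable standalone):

```c
static int demo_kthread(void *data)
{
	while (!kthread_should_stop()) {
		if (kthread_should_park()) {
			/* Sleeps until the controller calls kthread_unpark() */
			kthread_parkme();
			continue;
		}
		/* ... do one unit of work, then sleep ... */
	}
	return 0;
}
```

On cpu offline the controller parks the thread instead of stopping it, so the expensive teardown/setup cycle is avoided when the cpu comes back online.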

include/linux/sched.h (3 additions, 2 deletions)

@@ -194,9 +194,10 @@ print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
 #define TASK_DEAD		64
 #define TASK_WAKEKILL		128
 #define TASK_WAKING		256
-#define TASK_STATE_MAX		512
+#define TASK_PARKED		512
+#define TASK_STATE_MAX		1024
 
-#define TASK_STATE_TO_CHAR_STR "RSDTtZXxKW"
+#define TASK_STATE_TO_CHAR_STR "RSDTtZXxKWP"
 
 extern char ___assert_task_state[1 - 2*!!(
 		sizeof(TASK_STATE_TO_CHAR_STR)-1 != ilog2(TASK_STATE_MAX)+1)];

include/linux/smpboot.h (52 additions, 0 deletions)

@@ -0,0 +1,52 @@
+#ifndef _LINUX_SMPBOOT_H
+#define _LINUX_SMPBOOT_H
+
+#include <linux/types.h>
+
+struct task_struct;
+/* Cookie handed to the thread_fn */
+struct smpboot_thread_data;
+
+/**
+ * struct smp_hotplug_thread - CPU hotplug related thread descriptor
+ * @store:		Pointer to per cpu storage for the task pointers
+ * @list:		List head for core management
+ * @thread_should_run:	Check whether the thread should run or not. Called with
+ *			preemption disabled.
+ * @thread_fn:		The associated thread function
+ * @create:		Optional setup function, called when the thread gets
+ *			created (Not called from the thread context)
+ * @setup:		Optional setup function, called when the thread gets
+ *			operational the first time
+ * @cleanup:		Optional cleanup function, called when the thread
+ *			should stop (module exit)
+ * @park:		Optional park function, called when the thread is
+ *			parked (cpu offline)
+ * @unpark:		Optional unpark function, called when the thread is
+ *			unparked (cpu online)
+ * @pre_unpark:		Optional unpark function, called before the thread is
+ *			unparked (cpu online). This is not guaranteed to be
+ *			called on the target cpu of the thread. Careful!
+ * @selfparking:	Thread is not parked by the park function.
+ * @thread_comm:	The base name of the thread
+ */
+struct smp_hotplug_thread {
+	struct task_struct __percpu	**store;
+	struct list_head		list;
+	int				(*thread_should_run)(unsigned int cpu);
+	void				(*thread_fn)(unsigned int cpu);
+	void				(*create)(unsigned int cpu);
+	void				(*setup)(unsigned int cpu);
+	void				(*cleanup)(unsigned int cpu, bool online);
+	void				(*park)(unsigned int cpu);
+	void				(*unpark)(unsigned int cpu);
+	void				(*pre_unpark)(unsigned int cpu);
+	bool				selfparking;
+	const char			*thread_comm;
+};
+
+int smpboot_register_percpu_thread(struct smp_hotplug_thread *plug_thread);
+void smpboot_unregister_percpu_thread(struct smp_hotplug_thread *plug_thread);
+int smpboot_thread_schedule(void);
+
+#endif
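A sketch of how a subsystem might use this descriptor (all demo_* names are hypothetical; kernel-tree code, not a standalone program). Only store, thread_should_run, thread_fn and thread_comm are mandatory; the core creates, parks and unparks the threads across hotplug:

```c
#include <linux/smpboot.h>
#include <linux/percpu.h>

static DEFINE_PER_CPU(struct task_struct *, demo_task);
static DEFINE_PER_CPU(unsigned long, demo_pending);

/* Called with preemption disabled, as documented for @thread_should_run */
static int demo_should_run(unsigned int cpu)
{
	return __this_cpu_read(demo_pending) != 0;
}

static void demo_thread_fn(unsigned int cpu)
{
	__this_cpu_write(demo_pending, 0);
	/* ... process the per cpu work ... */
}

static struct smp_hotplug_thread demo_threads = {
	.store			= &demo_task,
	.thread_should_run	= demo_should_run,
	.thread_fn		= demo_thread_fn,
	.thread_comm		= "demo/%u",	/* one thread per cpu */
};

static int __init demo_init(void)
{
	/* Creates and starts threads for all online cpus; hotplug
	 * transitions are handled by the smpboot core from here on. */
	return smpboot_register_percpu_thread(&demo_threads);
}
```

This is the pattern the series converts stop_machine to, with selfparking set because the stopper thread must park itself after take_cpu_down().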

include/trace/events/sched.h (1 addition, 1 deletion)

@@ -182,7 +182,7 @@ TRACE_EVENT(sched_switch,
 		__print_flags(__entry->prev_state & (TASK_STATE_MAX-1), "|",
 				{ 1, "S"} , { 2, "D" }, { 4, "T" }, { 8, "t" },
 				{ 16, "Z" }, { 32, "X" }, { 64, "x" },
-				{ 128, "W" }) : "R",
+				{ 128, "K" }, { 256, "W" }, { 512, "P" }) : "R",
 		__entry->prev_state & TASK_STATE_MAX ? "+" : "",
 		__entry->next_comm, __entry->next_pid, __entry->next_prio)
 );

kernel/Makefile (1 addition, 1 deletion)

@@ -10,7 +10,7 @@ obj-y     = fork.o exec_domain.o panic.o printk.o \
 	    kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o mutex.o \
 	    hrtimer.o rwsem.o nsproxy.o srcu.o semaphore.o \
 	    notifier.o ksysfs.o cred.o \
-	    async.o range.o groups.o
+	    async.o range.o groups.o smpboot.o
 
 ifdef CONFIG_FUNCTION_TRACER
 # Do not trace debug files and internal ftrace files

kernel/cpu.c (22 additions, 1 deletion)

@@ -19,6 +19,8 @@
 
 #include <trace/events/sched.h>
 
+#include "smpboot.h"
+
 #ifdef CONFIG_SMP
 /* Serializes the updates to cpu_online_mask, cpu_present_mask */
 static DEFINE_MUTEX(cpu_add_remove_lock);
@@ -210,6 +212,8 @@ static int __ref take_cpu_down(void *_param)
 		return err;
 
 	cpu_notify(CPU_DYING | param->mod, param->hcpu);
+	/* Park the stopper thread */
+	kthread_park(current);
 	return 0;
 }
 
@@ -240,12 +244,13 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen)
 				__func__, cpu);
 		goto out_release;
 	}
+	smpboot_park_threads(cpu);
 
 	err = __stop_machine(take_cpu_down, &tcd_param, cpumask_of(cpu));
 	if (err) {
 		/* CPU didn't die: tell everyone.  Can't complain. */
+		smpboot_unpark_threads(cpu);
 		cpu_notify_nofail(CPU_DOWN_FAILED | mod, hcpu);
-
 		goto out_release;
 	}
 	BUG_ON(cpu_online(cpu));
@@ -302,11 +307,23 @@ static int __cpuinit _cpu_up(unsigned int cpu, int tasks_frozen)
 	int ret, nr_calls = 0;
 	void *hcpu = (void *)(long)cpu;
 	unsigned long mod = tasks_frozen ? CPU_TASKS_FROZEN : 0;
+	struct task_struct *idle;
 
 	if (cpu_online(cpu) || !cpu_present(cpu))
 		return -EINVAL;
 
 	cpu_hotplug_begin();
+
+	idle = idle_thread_get(cpu);
+	if (IS_ERR(idle)) {
+		ret = PTR_ERR(idle);
+		goto out;
+	}
+
+	ret = smpboot_create_threads(cpu);
+	if (ret)
+		goto out;
+
 	ret = __cpu_notify(CPU_UP_PREPARE | mod, hcpu, -1, &nr_calls);
 	if (ret) {
 		nr_calls--;
@@ -321,12 +338,16 @@ static int __cpuinit _cpu_up(unsigned int cpu, int tasks_frozen)
 		goto out_notify;
 	BUG_ON(!cpu_online(cpu));
 
+	/* Wake the per cpu threads */
+	smpboot_unpark_threads(cpu);
+
 	/* Now call notifier in preparation. */
 	cpu_notify(CPU_ONLINE | mod, hcpu);
 
 out_notify:
 	if (ret != 0)
 		__cpu_notify(CPU_UP_CANCELED | mod, hcpu, nr_calls, NULL);
+out:
 	cpu_hotplug_done();
 	trace_sched_cpu_hotplug(cpu, ret, 1);
