Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Documentation: Update CPU hotplug and move it to core-api

The current CPU hotplug documentation is outdated. While updating it to
match the current implementation I partly rewrote it and moved it to the
Sphinx format.

Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Cc: Srivatsa Vaddagiri <vatsa@in.ibm.com>
Cc: Ashok Raj <ashok.raj@intel.com>
Cc: Joel Schopp <jschopp@austin.ibm.com>
Cc: linux-doc@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Jonathan Corbet <corbet@lwn.net>

authored by Sebastian Andrzej Siewior, committed by Jonathan Corbet
ff58fa7f df31175b

+373 -452
+372
Documentation/core-api/cpu_hotplug.rst
=========================
CPU hotplug in the Kernel
=========================

:Date: December, 2016
:Author: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
         Rusty Russell <rusty@rustcorp.com.au>,
         Srivatsa Vaddagiri <vatsa@in.ibm.com>,
         Ashok Raj <ashok.raj@intel.com>,
         Joel Schopp <jschopp@austin.ibm.com>

Introduction
============

Modern advances in system architectures have introduced advanced error
reporting and correction capabilities in processors. A couple of OEMs also
ship hot-pluggable NUMA hardware, where physical node insertion and removal
require support for CPU hotplug.

Such advances require CPUs available to a kernel to be removed either for
provisioning reasons, or for RAS purposes to keep an offending CPU off the
system's execution path. Hence the need for CPU hotplug support in the
Linux kernel.

A more novel use of CPU-hotplug support is its use today in suspend/resume
support for SMP. Dual-core and HT support makes even a laptop run SMP
kernels, which previously didn't support these methods.


Command Line Switches
=====================
``maxcpus=n``
  Restrict boot time CPUs to *n*. If you have four CPUs, using
  ``maxcpus=2`` will only boot two. You can bring the other CPUs online
  later.

``nr_cpus=n``
  Restrict the total number of CPUs the kernel will support. If the number
  supplied here is lower than the number of physically available CPUs, then
  the remaining CPUs can not be brought online later.

``additional_cpus=n``
  Use this to limit hotpluggable CPUs. This option sets
  ``cpu_possible_mask = cpu_present_mask + additional_cpus``

  This option is limited to the IA64 architecture.

``possible_cpus=n``
  This option sets ``possible_cpus`` bits in ``cpu_possible_mask``.
  This option is limited to the X86 and S390 architectures.

``cede_offline={"off","on"}``
  Use this option to disable/enable putting offlined processors to an extended
  ``H_CEDE`` state on supported pseries platforms. If nothing is specified,
  ``cede_offline`` is set to "on".

  This option is limited to the PowerPC architecture.

``cpu0_hotplug``
  Allow to shut down CPU0.

  This option is limited to the X86 architecture.

CPU maps
========

``cpu_possible_mask``
  Bitmap of possible CPUs that can ever be available in the
  system. This is used to allocate some boot time memory for per_cpu variables
  that aren't designed to grow/shrink as CPUs are made available or removed.
  Once set during the boot time discovery phase, the map is static, i.e. no
  bits are added or removed at any time. Trimming it accurately for your
  system needs upfront can save some boot time memory.

``cpu_online_mask``
  Bitmap of all CPUs currently online. It is set in ``__cpu_up()``
  after a CPU is available for kernel scheduling and ready to receive
  interrupts from devices. It is cleared when a CPU is brought down using
  ``__cpu_disable()``, before which all OS services including interrupts are
  migrated to another target CPU.

``cpu_present_mask``
  Bitmap of CPUs currently present in the system. Not all
  of them may be online. While physical hotplug is processed by the relevant
  subsystem (e.g. ACPI), bits can be added to or removed from the map,
  depending on whether the event is a hot-add or a hot-remove. There are
  currently no locking rules. Typical usage is to init the topology during
  boot, at which time hotplug is disabled.

You really don't need to manipulate any of the system CPU maps. They should
be read-only for most use.
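In kernel code these maps are normally consumed through the iterator macros; a boot-time per-CPU allocation might be sketched like this (``struct y_stats`` and ``y_stats_init()`` are hypothetical names, not part of any kernel API):

```c
/* Sketch: boot-time per-CPU allocation over the possible mask.
 * struct y_stats and y_stats_init() are hypothetical. */
#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/percpu.h>

struct y_stats {
	unsigned long events;
};

static struct y_stats __percpu *y_stats;

static int __init y_stats_init(void)
{
	int cpu;

	y_stats = alloc_percpu(struct y_stats);
	if (!y_stats)
		return -ENOMEM;

	/*
	 * The possible mask is static after boot, so this covers every
	 * CPU that can ever be brought online later.
	 */
	for_each_possible_cpu(cpu)
		per_cpu_ptr(y_stats, cpu)->events = 0;

	return 0;
}
```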
When setting up per-cpu resources almost always use
``cpu_possible_mask`` or ``for_each_possible_cpu()`` to iterate. The macro
``for_each_cpu()`` can be used to iterate over a custom CPU mask.

Never use anything other than ``cpumask_t`` to represent a bitmap of CPUs.


Using CPU hotplug
=================
The kernel option *CONFIG_HOTPLUG_CPU* needs to be enabled. It is currently
available on multiple architectures including ARM, MIPS, PowerPC and X86. The
configuration is done via the sysfs interface: ::

  $ ls -lh /sys/devices/system/cpu
  total 0
  drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu0
  drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu1
  drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu2
  drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu3
  drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu4
  drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu5
  drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu6
  drwxr-xr-x  9 root root    0 Dec 21 16:33 cpu7
  drwxr-xr-x  2 root root    0 Dec 21 16:33 hotplug
  -r--r--r--  1 root root 4.0K Dec 21 16:33 offline
  -r--r--r--  1 root root 4.0K Dec 21 16:33 online
  -r--r--r--  1 root root 4.0K Dec 21 16:33 possible
  -r--r--r--  1 root root 4.0K Dec 21 16:33 present

The files *offline*, *online*, *possible*, *present* represent the CPU masks.
Each CPU folder contains an *online* file which controls the logical on (1)
and off (0) state. To logically shut down CPU4: ::

  $ echo 0 > /sys/devices/system/cpu/cpu4/online
  smpboot: CPU 4 is now offline

Once the CPU is shut down, it will be removed from */proc/interrupts* and
*/proc/cpuinfo*, and should no longer be visible in the *top* command. To
bring CPU4 back online: ::

  $ echo 1 > /sys/devices/system/cpu/cpu4/online
  smpboot: Booting Node 0 Processor 4 APIC 0x1

The CPU is usable again. This should work on all CPUs.
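The same toggle can be driven programmatically; a minimal sketch in C (the sysfs root is a parameter so the helper can also be pointed at a scratch directory for testing; against the real */sys* it needs root):

```c
/* Sketch: toggle a CPU's logical online state by writing its sysfs
 * "online" file, equivalent to the echo commands above. */
#include <stdio.h>

int cpu_set_online(const char *sysfs_root, unsigned int cpu, int online)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "%s/devices/system/cpu/cpu%u/online",
		 sysfs_root, cpu);
	f = fopen(path, "w");
	if (!f)
		return -1;
	if (fprintf(f, "%d\n", online ? 1 : 0) < 0) {
		fclose(f);
		return -1;
	}
	return fclose(f);
}
```

For example, ``cpu_set_online("/sys", 4, 0)`` corresponds to the ``echo 0`` command shown above.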
CPU0 is often special
and excluded from CPU hotplug. On X86 the kernel option
*CONFIG_BOOTPARAM_HOTPLUG_CPU0* has to be enabled in order to be able to
shut down CPU0. Alternatively the kernel command line option *cpu0_hotplug*
can be used. Some known dependencies of CPU0:

* Resume from hibernate/suspend. Hibernate/suspend will fail if CPU0 is
  offline.
* PIC interrupts. CPU0 can't be removed if a PIC interrupt is detected.

Please let Fenghua Yu <fenghua.yu@intel.com> know if you find any other
dependencies on CPU0.

The CPU hotplug coordination
============================

The offline case
----------------
Once a CPU has been logically shut down, the teardown callbacks of registered
hotplug states will be invoked, starting with ``CPUHP_ONLINE`` and terminating
at state ``CPUHP_OFFLINE``. This includes:

* If tasks are frozen due to a suspend operation then *cpuhp_tasks_frozen*
  will be set to true.
* All processes are migrated away from this outgoing CPU to new CPUs.
  The new CPU is chosen from each process' current cpuset, which may be
  a subset of all online CPUs.
* All interrupts targeted to this CPU are migrated to a new CPU.
* Timers are also migrated to a new CPU.
* Once all services are migrated, the kernel calls an arch specific routine
  ``__cpu_disable()`` to perform arch specific cleanup.

Using the hotplug API
---------------------
It is possible to receive notifications once a CPU is offlined or onlined.
This might be important to certain drivers which need to perform some kind of
setup or clean up functions based on the number of available CPUs: ::

  #include <linux/cpuhotplug.h>

  ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "X/Y:online",
                          Y_online, Y_prepare_down);

*X* is the subsystem and *Y* the particular driver.
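Both callbacks take the CPU number and return 0 on success or a negative errno on failure; a minimal sketch using the placeholder names from the text:

```c
/* Sketch: callback shapes for cpuhp_setup_state(). Y_online and
 * Y_prepare_down are the placeholder names used in the text. */
#include <linux/cpuhotplug.h>

static int Y_online(unsigned int cpu)
{
	/* allocate or enable this driver's per-CPU resources for @cpu */
	return 0;
}

static int Y_prepare_down(unsigned int cpu)
{
	/* release whatever Y_online() set up for @cpu */
	return 0;
}
```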
The *Y_online* callback
will be invoked during registration on all online CPUs. If an error
occurs during the online callback, the *Y_prepare_down* callback will be
invoked on all CPUs on which the online callback was previously invoked.
After registration has completed, the *Y_online* callback will be invoked
once a CPU is brought online and *Y_prepare_down* will be invoked when a
CPU is shut down. All resources which were previously allocated in
*Y_online* should be released in *Y_prepare_down*.
The return value *ret* is negative if an error occurred during the
registration process. Otherwise a positive value is returned which
contains the allocated hotplug state for dynamically allocated states
(*CPUHP_AP_ONLINE_DYN*); it will be zero for predefined states.

The callback can be removed by invoking ``cpuhp_remove_state()``. In case of
a dynamically allocated state (*CPUHP_AP_ONLINE_DYN*) use the returned state.
During the removal of a hotplug state the teardown callback will be invoked.

Multiple instances
~~~~~~~~~~~~~~~~~~
If a driver has multiple instances and each instance needs to perform the
callback independently then it is likely that a *multi-state* should be used.
First a multi-state state needs to be registered: ::

  ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "X/Y:online",
                                Y_online, Y_prepare_down);
  Y_hp_online = ret;

``cpuhp_setup_state_multi()`` behaves similarly to ``cpuhp_setup_state()``
except that it prepares the callbacks for a multi state and does not invoke
the callbacks. This is a one time setup.
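For a multi-state, the callbacks additionally receive the instance's ``hlist_node``; a sketch of the per-instance structure and callback shapes (``struct y_dev`` is a hypothetical example, not a kernel type):

```c
/* Sketch: per-instance data for a multi-state. The callbacks of a
 * multi-state receive the hlist_node that is later registered via
 * cpuhp_state_add_instance(). struct y_dev is hypothetical. */
#include <linux/cpuhotplug.h>
#include <linux/list.h>

struct y_dev {
	struct hlist_node node;	/* handed to cpuhp_state_add_instance() */
	/* per-instance driver state ... */
};

static int Y_online(unsigned int cpu, struct hlist_node *node)
{
	struct y_dev *d = hlist_entry(node, struct y_dev, node);

	/* set up per-CPU resources for instance @d on @cpu */
	(void)d;
	return 0;
}

static int Y_prepare_down(unsigned int cpu, struct hlist_node *node)
{
	/* undo whatever Y_online() did for this instance on @cpu */
	return 0;
}
```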
Once a new instance is allocated, you need to register this new instance: ::

  ret = cpuhp_state_add_instance(Y_hp_online, &d->node);

This function will add this instance to your previously allocated
*Y_hp_online* state and invoke the previously registered callback
(*Y_online*) on all online CPUs. The *node* element is a ``struct
hlist_node`` member of your per-instance data structure.

On removal of the instance: ::

  cpuhp_state_remove_instance(Y_hp_online, &d->node)

should be invoked, which will invoke the teardown callback on all online
CPUs.

Manual setup
~~~~~~~~~~~~
Usually it is handy to invoke setup and teardown callbacks on registration or
removal of a state, because usually the operation needs to be performed once a
CPU goes online (offline) and during initial setup (shutdown) of the driver.
However each registration and removal function is also available with a
``_nocalls`` suffix which does not invoke the provided callbacks, if the
invocation of the callbacks is not desired. During the manual setup (or
teardown) the functions ``get_online_cpus()`` and ``put_online_cpus()``
should be used to inhibit CPU hotplug operations.


The ordering of the events
--------------------------
The hotplug states are defined in ``include/linux/cpuhotplug.h``:

* The states *CPUHP_OFFLINE* … *CPUHP_AP_OFFLINE* are invoked before the
  CPU is up.
* The states *CPUHP_AP_OFFLINE* … *CPUHP_AP_ONLINE* are invoked
  just after the CPU has been brought up. Interrupts are off and
  the scheduler is not yet active on this CPU. Starting with
  *CPUHP_AP_OFFLINE* the callbacks are invoked on the target CPU.
* The states between *CPUHP_AP_ONLINE_DYN* and *CPUHP_AP_ONLINE_DYN_END* are
  reserved for dynamic allocation.
* The states are invoked in the reverse order on CPU shutdown, starting with
  *CPUHP_ONLINE* and stopping at *CPUHP_OFFLINE*. Here the callbacks are
  invoked on the CPU that will be shut down until *CPUHP_AP_OFFLINE*.

A dynamically allocated state via *CPUHP_AP_ONLINE_DYN* is often enough.
However if an earlier invocation during bring up or shutdown is required,
then an explicit state should be acquired. An explicit state might also be
required if the hotplug event requires specific ordering with respect to
another hotplug event.

Testing of hotplug states
=========================
One way to verify whether a custom state is working as expected is to shut
down a CPU and then put it online again. It is also possible to put the CPU
to a certain state (for instance *CPUHP_AP_ONLINE*) and then go back to
*CPUHP_ONLINE*. This would simulate an error one state after
*CPUHP_AP_ONLINE*, which would lead to a rollback to the online state.

All registered states are enumerated in
``/sys/devices/system/cpu/hotplug/states``: ::

  $ tail /sys/devices/system/cpu/hotplug/states
  138: mm/vmscan:online
  139: mm/vmstat:online
  140: lib/percpu_cnt:online
  141: acpi/cpu-drv:online
  142: base/cacheinfo:online
  143: virtio/net:online
  144: x86/mce:online
  145: printk:online
  168: sched:active
  169: online

To roll CPU4 back to ``lib/percpu_cnt:online`` and back online, just issue: ::

  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
  169
  $ echo 140 > /sys/devices/system/cpu/cpu4/hotplug/target
  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
  140

It is important to note that the teardown callbacks of the states between
169 and the target state 140 have been invoked at this point.
And now get back online: ::

  $ echo 169 > /sys/devices/system/cpu/cpu4/hotplug/target
  $ cat /sys/devices/system/cpu/cpu4/hotplug/state
  169

With trace events enabled, the individual steps are visible, too: ::

  #  TASK-PID   CPU#   TIMESTAMP  FUNCTION
  #     | |       |        |         |
      bash-394  [001]  22.976: cpuhp_enter: cpu: 0004 target: 140 step: 169 (cpuhp_kick_ap_work)
   cpuhp/4-31   [004]  22.977: cpuhp_enter: cpu: 0004 target: 140 step: 168 (sched_cpu_deactivate)
   cpuhp/4-31   [004]  22.990: cpuhp_exit:  cpu: 0004  state: 168 step: 168 ret: 0
   cpuhp/4-31   [004]  22.991: cpuhp_enter: cpu: 0004 target: 140 step: 144 (mce_cpu_pre_down)
   cpuhp/4-31   [004]  22.992: cpuhp_exit:  cpu: 0004  state: 144 step: 144 ret: 0
   cpuhp/4-31   [004]  22.993: cpuhp_multi_enter: cpu: 0004 target: 140 step: 143 (virtnet_cpu_down_prep)
   cpuhp/4-31   [004]  22.994: cpuhp_exit:  cpu: 0004  state: 143 step: 143 ret: 0
   cpuhp/4-31   [004]  22.995: cpuhp_enter: cpu: 0004 target: 140 step: 142 (cacheinfo_cpu_pre_down)
   cpuhp/4-31   [004]  22.996: cpuhp_exit:  cpu: 0004  state: 142 step: 142 ret: 0
      bash-394  [001]  22.997: cpuhp_exit:  cpu: 0004  state: 140 step: 169 ret: 0
      bash-394  [005]  95.540: cpuhp_enter: cpu: 0004 target: 169 step: 140 (cpuhp_kick_ap_work)
   cpuhp/4-31   [004]  95.541: cpuhp_enter: cpu: 0004 target: 169 step: 141 (acpi_soft_cpu_online)
   cpuhp/4-31   [004]  95.542: cpuhp_exit:  cpu: 0004  state: 141 step: 141 ret: 0
   cpuhp/4-31   [004]  95.543: cpuhp_enter: cpu: 0004 target: 169 step: 142 (cacheinfo_cpu_online)
   cpuhp/4-31   [004]  95.544: cpuhp_exit:  cpu: 0004  state: 142 step: 142 ret: 0
   cpuhp/4-31   [004]  95.545: cpuhp_multi_enter: cpu: 0004 target: 169 step: 143 (virtnet_cpu_online)
   cpuhp/4-31   [004]  95.546: cpuhp_exit:  cpu: 0004  state: 143 step: 143 ret: 0
   cpuhp/4-31   [004]  95.547: cpuhp_enter: cpu: 0004 target: 169 step: 144 (mce_cpu_online)
   cpuhp/4-31   [004]  95.548: cpuhp_exit:  cpu: 0004  state: 144 step: 144 ret: 0
   cpuhp/4-31   [004]  95.549: cpuhp_enter: cpu: 0004 target: 169 step: 145 (console_cpu_notify)
   cpuhp/4-31   [004]  95.550: cpuhp_exit:  cpu: 0004  state: 145 step: 145 ret: 0
   cpuhp/4-31   [004]  95.551: cpuhp_enter: cpu: 0004 target: 169 step: 168 (sched_cpu_activate)
   cpuhp/4-31   [004]  95.552: cpuhp_exit:  cpu: 0004  state: 168 step: 168 ret: 0
      bash-394  [005]  95.553: cpuhp_exit:  cpu: 0004  state: 169 step: 140 ret: 0

As can be seen, CPU4 went down until timestamp 22.996 and then back up until
95.552. All invoked callbacks, including their return codes, are visible in
the trace.

Architecture's requirements
===========================
The following functions and configurations are required:

``CONFIG_HOTPLUG_CPU``
  This entry needs to be enabled in Kconfig.

``__cpu_up()``
  Arch interface to bring up a CPU.

``__cpu_disable()``
  Arch interface to shut down a CPU; no more interrupts can be handled by the
  kernel after the routine returns. This includes the shutdown of the timer.

``__cpu_die()``
  This is supposed to actually ensure the death of the CPU. Look at some
  example code in other architectures that implement CPU hotplug. The
  processor is taken down from the ``idle()`` loop for that specific
  architecture. ``__cpu_die()`` typically waits for some per_cpu state to be
  set, to ensure that the processor-dead routine is really called.

User Space Notification
=======================
After a CPU is successfully onlined or offlined, udev events are sent. A udev
rule like: ::

  SUBSYSTEM=="cpu", DRIVERS=="processor", DEVPATH=="/devices/system/cpu/*", RUN+="the_hotplug_receiver.sh"

will receive all events.
A script like: ::

  #!/bin/sh

  if [ "${ACTION}" = "offline" ]
  then
      echo "CPU ${DEVPATH##*/} offline"

  elif [ "${ACTION}" = "online" ]
  then
      echo "CPU ${DEVPATH##*/} online"

  fi

can process the event further.

Kernel Inline Documentation Reference
=====================================

.. kernel-doc:: include/linux/cpuhotplug.h
+1
Documentation/core-api/index.rst
     assoc_array
     atomic_ops
   + cpu_hotplug
     local_ops
     workqueue
-452
Documentation/cpu-hotplug.txt
CPU hotplug Support in Linux(tm) Kernel

Maintainers:
CPU Hotplug Core:
	Rusty Russell <rusty@rustcorp.com.au>
	Srivatsa Vaddagiri <vatsa@in.ibm.com>
i386:
	Zwane Mwaikambo <zwanem@gmail.com>
ppc64:
	Nathan Lynch <nathanl@austin.ibm.com>
	Joel Schopp <jschopp@austin.ibm.com>
ia64/x86_64:
	Ashok Raj <ashok.raj@intel.com>
s390:
	Heiko Carstens <heiko.carstens@de.ibm.com>

Authors: Ashok Raj <ashok.raj@intel.com>
Lots of feedback: Nathan Lynch <nathanl@austin.ibm.com>,
	Joel Schopp <jschopp@austin.ibm.com>

Introduction

Modern advances in system architectures have introduced advanced error
reporting and correction capabilities in processors. CPU architectures permit
partitioning support, where compute resources of a single CPU could be made
available to virtual machine environments. There are couple OEMS that
support NUMA hardware which are hot pluggable as well, where physical
node insertion and removal require support for CPU hotplug.

Such advances require CPUs available to a kernel to be removed either for
provisioning reasons, or for RAS purposes to keep an offending CPU off
system execution path. Hence the need for CPU hotplug support in the
Linux kernel.

A more novel use of CPU-hotplug support is its use today in suspend
resume support for SMP. Dual-core and HT support makes even
a laptop run SMP kernels which didn't support these methods. SMP support
for suspend/resume is a work in progress.

General Stuff about CPU Hotplug
--------------------------------

Command Line Switches
---------------------
maxcpus=n		Restrict boot time cpus to n. Say if you have 4 cpus, using
			maxcpus=2 will only boot 2. You can choose to bring the
			other cpus later online, read FAQ's for more info.

additional_cpus=n (*)	Use this to limit hotpluggable cpus. This option sets
			cpu_possible_mask = cpu_present_mask + additional_cpus

cede_offline={"off","on"}  Use this option to disable/enable putting offlined
			    processors to an extended H_CEDE state on
			    supported pseries platforms.
			    If nothing is specified,
			    cede_offline is set to "on".

(*) Option valid only for following architectures
- ia64

ia64 uses the number of disabled local apics in ACPI tables MADT to
determine the number of potentially hot-pluggable cpus. The implementation
should only rely on this to count the # of cpus, but *MUST* not rely
on the apicid values in those tables for disabled apics. In the event
BIOS doesn't mark such hot-pluggable cpus as disabled entries, one could
use this parameter "additional_cpus=x" to represent those cpus in the
cpu_possible_mask.

possible_cpus=n		[s390,x86_64] use this to set hotpluggable cpus.
			This option sets possible_cpus bits in
			cpu_possible_mask. Thus keeping the numbers of bits set
			constant even if the machine gets rebooted.

CPU maps and such
-----------------
[More on cpumaps and primitive to manipulate, please check
include/linux/cpumask.h that has more descriptive text.]

cpu_possible_mask: Bitmap of possible CPUs that can ever be available in the
system. This is used to allocate some boot time memory for per_cpu variables
that aren't designed to grow/shrink as CPUs are made available or removed.
Once set during boot time discovery phase, the map is static, i.e no bits
are added or removed anytime. Trimming it accurately for your system needs
upfront can save some boot time memory. See below for how we use heuristics
in x86_64 case to keep this under check.

cpu_online_mask: Bitmap of all CPUs currently online. It's set in __cpu_up()
after a CPU is available for kernel scheduling and ready to receive
interrupts from devices. It's cleared when a CPU is brought down using
__cpu_disable(), before which all OS services including interrupts are
migrated to another target CPU.

cpu_present_mask: Bitmap of CPUs currently present in the system. Not all
of them may be online. When physical hotplug is processed by the relevant
subsystem (e.g ACPI) can change and new bit either be added or removed
from the map depending on the event is hot-add/hot-remove. There are currently
no locking rules as of now. Typical usage is to init topology during boot,
at which time hotplug is disabled.

You really dont need to manipulate any of the system cpu maps. They should
be read-only for most use. When setting up per-cpu resources almost always use
cpu_possible_mask/for_each_possible_cpu() to iterate.

Never use anything other than cpumask_t to represent bitmap of CPUs.

	#include <linux/cpumask.h>

	for_each_possible_cpu     - Iterate over cpu_possible_mask
	for_each_online_cpu       - Iterate over cpu_online_mask
	for_each_present_cpu      - Iterate over cpu_present_mask
	for_each_cpu(x,mask)      - Iterate over some random collection of cpu mask.

	#include <linux/cpu.h>
	get_online_cpus() and put_online_cpus():

The above calls are used to inhibit cpu hotplug operations. While the
cpu_hotplug.refcount is non zero, the cpu_online_mask will not change.
If you merely need to avoid cpus going away, you could also use
preempt_disable() and preempt_enable() for those sections.
Just remember the critical section cannot call any
function that can sleep or schedule this process away. The preempt_disable()
will work as long as stop_machine_run() is used to take a cpu down.

CPU Hotplug - Frequently Asked Questions.

Q: How to enable my kernel to support CPU hotplug?
A: When doing make defconfig, Enable CPU hotplug support

   "Processor type and Features" -> Support for Hotpluggable CPUs

Make sure that you have CONFIG_SMP turned on as well.

You would need to enable CONFIG_HOTPLUG_CPU for SMP suspend/resume support
as well.

Q: What architectures support CPU hotplug?
A: As of 2.6.14, the following architectures support CPU hotplug.

   i386 (Intel), ppc, ppc64, parisc, s390, ia64 and x86_64

Q: How to test if hotplug is supported on the newly built kernel?
A: You should now notice an entry in sysfs.

Check if sysfs is mounted, using the "mount" command. You should notice
an entry as shown below in the output.

	....
	none on /sys type sysfs (rw)
	....

If this is not mounted, do the following.

	#mkdir /sys
	#mount -t sysfs sys /sys

Now you should see entries for all present cpu, the following is an example
in a 8-way system.

	#pwd
	#/sys/devices/system/cpu
	#ls -l
	total 0
	drwxr-xr-x  10 root root 0 Sep 19 07:44 .
	drwxr-xr-x  13 root root 0 Sep 19 07:45 ..
	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu0
	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu1
	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu2
	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu3
	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu4
	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu5
	drwxr-xr-x   3 root root 0 Sep 19 07:44 cpu6
	drwxr-xr-x   3 root root 0 Sep 19 07:48 cpu7

Under each directory you would find an "online" file which is the control
file to logically online/offline a processor.

Q: Does hot-add/hot-remove refer to physical add/remove of cpus?
A: The usage of hot-add/remove may not be very consistently used in the code.
CONFIG_HOTPLUG_CPU enables logical online/offline capability in the kernel.
To support physical addition/removal, one would need some BIOS hooks and
the platform should have something like an attention button in PCI hotplug.
CONFIG_ACPI_HOTPLUG_CPU enables ACPI support for physical add/remove of CPUs.

Q: How do I logically offline a CPU?
A: Do the following.

	#echo 0 > /sys/devices/system/cpu/cpuX/online

Once the logical offline is successful, check

	#cat /proc/interrupts

You should now not see the CPU that you removed. Also online file will report
the state as 0 when a CPU is offline and 1 when it's online.

	#To display the current cpu state.
	#cat /sys/devices/system/cpu/cpuX/online

Q: Why can't I remove CPU0 on some systems?
A: Some architectures may have some special dependency on a certain CPU.

For e.g in IA64 platforms we have ability to send platform interrupts to the
OS. a.k.a Corrected Platform Error Interrupts (CPEI). In current ACPI
specifications, we didn't have a way to change the target CPU. Hence if the
current ACPI version doesn't support such re-direction, we disable that CPU
by making it not-removable.

In such cases you will also notice that the online file is missing under cpu0.

Q: Is CPU0 removable on X86?
A: Yes. If kernel is compiled with CONFIG_BOOTPARAM_HOTPLUG_CPU0=y, CPU0 is
removable by default. Otherwise, CPU0 is also removable by kernel option
cpu0_hotplug.

But some features depend on CPU0. Two known dependencies are:

1. Resume from hibernate/suspend depends on CPU0. Hibernate/suspend will fail if
CPU0 is offline and you need to online CPU0 before hibernate/suspend can
continue.
2. PIC interrupts also depend on CPU0. CPU0 can't be removed if a PIC interrupt
is detected.

It's said poweroff/reboot may depend on CPU0 on some machines although I haven't
seen any poweroff/reboot failure so far after CPU0 is offline on a few tested
machines.

Please let me know if you know or see any other dependencies of CPU0.

If the dependencies are under your control, you can turn on CPU0 hotplug feature
either by CONFIG_BOOTPARAM_HOTPLUG_CPU0 or by kernel parameter cpu0_hotplug.

--Fenghua Yu <fenghua.yu@intel.com>

Q: How do I find out if a particular CPU is not removable?
A: Depending on the implementation, some architectures may show this by the
absence of the "online" file. This is done if it can be determined ahead of
time that this CPU cannot be removed.

In some situations, this can be a run time check, i.e if you try to remove the
last CPU, this will not be permitted. You can find such failures by
investigating the return value of the "echo" command.

Q: What happens when a CPU is being logically offlined?
A: The following happen, listed in no particular order :-)

- A notification is sent to in-kernel registered modules by sending an event
  CPU_DOWN_PREPARE or CPU_DOWN_PREPARE_FROZEN, depending on whether or not the
  CPU is being offlined while tasks are frozen due to a suspend operation in
  progress
- All processes are migrated away from this outgoing CPU to new CPUs.
  The new CPU is chosen from each process' current cpuset, which may be
  a subset of all online CPUs.
- All interrupts targeted to this CPU are migrated to a new CPU
- timers/bottom half/task lets are also migrated to a new CPU
- Once all services are migrated, kernel calls an arch specific routine
  __cpu_disable() to perform arch specific cleanup.
257 - - Once this is successful, an event for successful cleanup is sent by an event 258 - CPU_DEAD (or CPU_DEAD_FROZEN if tasks are frozen due to a suspend while the 259 - CPU is being offlined). 260 - 261 - "It is expected that each service cleans up when the CPU_DOWN_PREPARE 262 - notifier is called, when CPU_DEAD is called it's expected there is nothing 263 - running on behalf of this CPU that was offlined" 264 - 265 - Q: If I have some kernel code that needs to be aware of CPU arrival and 266 - departure, how to i arrange for proper notification? 267 - A: This is what you would need in your kernel code to receive notifications. 268 - 269 - #include <linux/cpu.h> 270 - static int foobar_cpu_callback(struct notifier_block *nfb, 271 - unsigned long action, void *hcpu) 272 - { 273 - unsigned int cpu = (unsigned long)hcpu; 274 - 275 - switch (action) { 276 - case CPU_ONLINE: 277 - case CPU_ONLINE_FROZEN: 278 - foobar_online_action(cpu); 279 - break; 280 - case CPU_DEAD: 281 - case CPU_DEAD_FROZEN: 282 - foobar_dead_action(cpu); 283 - break; 284 - } 285 - return NOTIFY_OK; 286 - } 287 - 288 - static struct notifier_block foobar_cpu_notifier = 289 - { 290 - .notifier_call = foobar_cpu_callback, 291 - }; 292 - 293 - You need to call register_cpu_notifier() from your init function. 294 - Init functions could be of two types: 295 - 1. early init (init function called when only the boot processor is online). 296 - 2. late init (init function called _after_ all the CPUs are online). 297 - 298 - For the first case, you should add the following to your init function 299 - 300 - register_cpu_notifier(&foobar_cpu_notifier); 301 - 302 - For the second case, you should add the following to your init function 303 - 304 - register_hotcpu_notifier(&foobar_cpu_notifier); 305 - 306 - You can fail PREPARE notifiers if something doesn't work to prepare resources. 307 - This will stop the activity and send a following CANCELED event back. 
308 -
309 - CPU_DEAD should not be failed; it's just a goodness indication, but bad
310 - things will happen if a notifier in the path returns a BAD notify code.
311 -
312 - Q: Why is my action not being called for CPUs that are already up and running?
313 - A: CPU notifiers are called only when new CPUs are onlined or offlined.
314 - If you need to perform some action for each CPU already in the system, then
315 - do this:
316 -
317 -     for_each_online_cpu(i) {
318 -             foobar_cpu_callback(&foobar_cpu_notifier, CPU_UP_PREPARE, i);
319 -             foobar_cpu_callback(&foobar_cpu_notifier, CPU_ONLINE, i);
320 -     }
321 -
322 - However, if you want to register a hotplug callback, as well as perform
323 - some initialization for CPUs that are already online, then do this:
324 -
325 - Version 1: (Correct)
326 - --------------------
327 -
328 -     cpu_notifier_register_begin();
329 -
330 -     for_each_online_cpu(i) {
331 -             foobar_cpu_callback(&foobar_cpu_notifier,
332 -                                 CPU_UP_PREPARE, i);
333 -             foobar_cpu_callback(&foobar_cpu_notifier,
334 -                                 CPU_ONLINE, i);
335 -     }
336 -
337 -     /* Note the use of the double underscored version of the API */
338 -     __register_cpu_notifier(&foobar_cpu_notifier);
339 -
340 -     cpu_notifier_register_done();
341 -
342 - Note that the following code is *NOT* the right way to achieve this,
343 - because it is prone to an ABBA deadlock between the cpu_add_remove_lock
344 - and the cpu_hotplug.lock:
345 -
346 - Version 2: (Wrong!)
347 - -------------------
348 -
349 -     get_online_cpus();
350 -
351 -     for_each_online_cpu(i) {
352 -             foobar_cpu_callback(&foobar_cpu_notifier,
353 -                                 CPU_UP_PREPARE, i);
354 -             foobar_cpu_callback(&foobar_cpu_notifier,
355 -                                 CPU_ONLINE, i);
356 -     }
357 -
358 -     register_cpu_notifier(&foobar_cpu_notifier);
359 -
360 -     put_online_cpus();
361 -
362 - So always use the first version shown above when you want to register
363 - callbacks as well as initialize the already online CPUs.
364 -
365 -
366 - Q: If I would like to develop CPU hotplug support for a new architecture,
367 - what do I need at a minimum?
368 - A: The following are required for the CPU hotplug infrastructure to work
369 - correctly:
370 -
371 - - Make sure you have an entry in Kconfig to enable CONFIG_HOTPLUG_CPU.
372 - - __cpu_up()      - Arch interface to bring up a CPU.
373 - - __cpu_disable() - Arch interface to shut down a CPU; no more interrupts
374 -                     can be handled by the kernel after the routine
375 -                     returns. This includes shutting down the local APIC
376 -                     timer and similar per-CPU services.
377 - - __cpu_die()     - This is supposed to ensure the death of the CPU. Look
378 -                     at example code in other architectures that implement
379 -                     CPU hotplug. The processor is taken down from the
380 -                     idle() loop for that specific architecture.
381 -                     __cpu_die() typically waits for some per_cpu state to
382 -                     be set, to positively ensure that the processor dead
383 -                     routine has been called.
384 -
385 - Q: I need to ensure that a particular CPU is not removed when there is some
386 - work specific to this CPU in progress. How do I do that?
387 - A: There are two ways. If your code can run in interrupt context, use
388 - smp_call_function_single(); otherwise use work_on_cpu(). Note that
389 - work_on_cpu() is slow and can fail due to being out of memory:
390 -
391 -     int my_func_on_cpu(int cpu)
392 -     {
393 -             int err;
394 -             get_online_cpus();
395 -             if (!cpu_online(cpu))
396 -                     err = -EINVAL;
397 -             else
398 -     #if NEEDS_BLOCKING
399 -                     err = work_on_cpu(cpu, __my_func_on_cpu, NULL);
400 -     #else
401 -                     smp_call_function_single(cpu, __my_func_on_cpu, &err,
402 -                                              true);
403 -     #endif
404 -             put_online_cpus();
405 -             return err;
406 -     }
407 -
408 - Q: How do we determine how many CPUs are available for hotplug?
409 - A: There is no clearly defined way in the ACPI specification to obtain that
410 - information today.
Based on input from Natalie of Unisys,
411 - the ACPI MADT (Multiple APIC Description Table) marks those possible
412 - CPUs in a system with a disabled status.
413 -
414 - Andi implemented a simple heuristic that counts the number of disabled
415 - CPUs in the MADT as hotpluggable CPUs. If there are no disabled CPUs,
416 - we assume that half the number of CPUs currently present can be hotplugged.
417 -
418 - Caveat: The ACPI MADT can only provide 256 entries on systems supporting
419 - only ACPI 2.0c or earlier, because the apicid field in the MADT is only
420 - 8 bits wide. From ACPI 3.0 on, this limitation was removed since the
421 - apicid field was extended to 32 bits with the introduction of x2APIC.
422 -
423 - User Space Notification
424 -
425 - Hotplug support for devices is common in Linux today. It is used to
426 - support automatic configuration of network, USB and PCI devices. A hotplug
427 - event can be used to invoke an agent script to perform the configuration task.
428 -
429 - You can add /etc/hotplug/cpu.agent to handle hotplug notifications with
430 - user space scripts:
431 -
432 -     #!/bin/bash
433 -     # $Id: cpu.agent
434 -     # Kernel hotplug params include:
435 -     #   ACTION=%s [online or offline]
436 -     #   DEVPATH=%s
437 -     #
438 -     cd /etc/hotplug
439 -     . ./hotplug.functions
440 -
441 -     case $ACTION in
442 -     online)
443 -             echo `date` ":cpu.agent" add cpu >> /tmp/hotplug.txt
444 -             ;;
445 -     offline)
446 -             echo `date` ":cpu.agent" remove cpu >> /tmp/hotplug.txt
447 -             ;;
448 -     *)
449 -             debug_mesg CPU $ACTION event not supported
450 -             exit 1
451 -             ;;
452 -     esac