CPUSETS
-------

Copyright (C) 2004 BULL SA.
Written by Simon.Derr@bull.net

Portions Copyright (c) 2004-2006 Silicon Graphics, Inc.
Modified by Paul Jackson <pj@sgi.com>
Modified by Christoph Lameter <clameter@sgi.com>
Modified by Paul Menage <menage@google.com>

CONTENTS:
=========

1. Cpusets
  1.1 What are cpusets ?
  1.2 Why are cpusets needed ?
  1.3 How are cpusets implemented ?
  1.4 What are exclusive cpusets ?
  1.5 What is memory_pressure ?
  1.6 What is memory spread ?
  1.7 What is sched_load_balance ?
  1.8 How do I use cpusets ?
2. Usage Examples and Syntax
  2.1 Basic Usage
  2.2 Adding/removing cpus
  2.3 Setting flags
  2.4 Attaching processes
3. Questions
4. Contact

1. Cpusets
==========

1.1 What are cpusets ?
----------------------

Cpusets provide a mechanism for assigning a set of CPUs and Memory
Nodes to a set of tasks.  In this document, "Memory Node" refers to
an on-line node that contains memory.

Cpusets constrain the CPU and Memory placement of tasks to only
the resources within a task's current cpuset.  They form a nested
hierarchy visible in a virtual file system.  These are the essential
hooks, beyond what is already present, required to manage dynamic
job placement on large systems.

Cpusets use the generic cgroup subsystem described in
Documentation/cgroup.txt.

Requests by a task, using the sched_setaffinity(2) system call to
include CPUs in its CPU affinity mask, and using the mbind(2) and
set_mempolicy(2) system calls to include Memory Nodes in its memory
policy, are both filtered through that task's cpuset, filtering out any
CPUs or Memory Nodes not in that cpuset.  The scheduler will not
schedule a task on a CPU that is not allowed in its cpus_allowed
vector, and the kernel page allocator will not allocate a page on a
node that is not allowed in the requesting task's mems_allowed vector.

User-level code may create and destroy cpusets by name in the cgroup
virtual file system, manage the attributes and permissions of these
cpusets and which CPUs and Memory Nodes are assigned to each cpuset,
specify and query to which cpuset a task is assigned, and list the
task pids assigned to a cpuset.


1.2 Why are cpusets needed ?
----------------------------

The management of large computer systems, with many processors (CPUs),
complex memory cache hierarchies, and multiple Memory Nodes having
non-uniform access times (NUMA), presents additional challenges for
the efficient scheduling and memory placement of processes.

Frequently, more modestly sized systems can be operated with adequate
efficiency just by letting the operating system automatically share
the available CPU and Memory resources amongst the requesting tasks.

But larger systems, which benefit more from careful processor and
memory placement to reduce memory access times and contention,
and which typically represent a larger investment for the customer,
can benefit from explicitly placing jobs on properly sized subsets of
the system.

This can be especially valuable on:

    * Web Servers running multiple instances of the same web application,
    * Servers running different applications (for instance, a web server
      and a database), or
    * NUMA systems running large HPC applications with demanding
      performance characteristics.

These subsets, or "soft partitions", must be able to be dynamically
adjusted, as the job mix changes, without impacting other concurrently
executing jobs.  The location of a running job's pages may also be
moved when its allowed memory locations are changed.

The kernel cpuset patch provides the minimum essential kernel
mechanisms required to efficiently implement such subsets.  It
leverages existing CPU and Memory Placement facilities in the Linux
kernel to avoid any additional impact on the critical scheduler or
memory allocator code.


1.3 How are cpusets implemented ?
---------------------------------

Cpusets provide a Linux kernel mechanism to constrain which CPUs and
Memory Nodes are used by a process or set of processes.

The Linux kernel already has a pair of mechanisms to specify on which
CPUs a task may be scheduled (sched_setaffinity) and on which Memory
Nodes it may obtain memory (mbind, set_mempolicy).

Cpusets extend these two mechanisms as follows:

 - Cpusets are sets of allowed CPUs and Memory Nodes, known to the
   kernel.
 - Each task in the system is attached to a cpuset, via a pointer
   in the task structure to a reference-counted cgroup structure.
 - Calls to sched_setaffinity are filtered to just those CPUs
   allowed in that task's cpuset.
 - Calls to mbind and set_mempolicy are filtered to just
   those Memory Nodes allowed in that task's cpuset.
 - The root cpuset contains all the system's CPUs and Memory
   Nodes.
 - For any cpuset, one can define child cpusets containing a subset
   of the parent's CPU and Memory Node resources.
 - The hierarchy of cpusets can be mounted at /dev/cpuset, for
   browsing and manipulation from user space.
 - A cpuset may be marked exclusive, which ensures that no other
   cpuset (except direct ancestors and descendants) may contain
   any overlapping CPUs or Memory Nodes.
 - You can list all the tasks (by pid) attached to any cpuset.

The implementation of cpusets requires a few, simple hooks
into the rest of the kernel, none in performance-critical paths:

 - in init/main.c, to initialize the root cpuset at system boot.
 - in fork and exit, to attach and detach a task from its cpuset.
 - in sched_setaffinity, to mask the requested CPUs by what's
   allowed in that task's cpuset.
 - in sched.c migrate_all_tasks(), to keep migrating tasks within
   the CPUs allowed by their cpuset, if possible.
 - in the mbind and set_mempolicy system calls, to mask the requested
   Memory Nodes by what's allowed in that task's cpuset.
 - in page_alloc.c, to restrict memory to allowed nodes.
 - in vmscan.c, to restrict page recovery to the current cpuset.

You should mount the "cgroup" filesystem type in order to enable
browsing and modifying the cpusets presently known to the kernel.  No
new system calls are added for cpusets - all support for querying and
modifying cpusets is via this cpuset file system.

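For example, a minimal sketch of mounting the hierarchy (the mount
point /dev/cpuset is the convention used throughout this document,
not a kernel requirement):

# mkdir /dev/cpuset
# mount -t cgroup -o cpuset cpuset /dev/cpuset
# ls /dev/cpuset              # control files of the root cpuset

These are the same commands used in the fuller examples in sections
1.8 and 2.1 below.
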
The /proc/<pid>/status file for each task has two added lines,
displaying the task's cpus_allowed (on which CPUs it may be scheduled)
and mems_allowed (on which Memory Nodes it may obtain memory),
in the format seen in the following example:

  Cpus_allowed:   ffffffff,ffffffff,ffffffff,ffffffff
  Mems_allowed:   ffffffff,ffffffff

Each cpuset is represented by a directory in the cgroup file system
containing (on top of the standard cgroup files) the following
files describing that cpuset:

 - cpus: list of CPUs in that cpuset
 - mems: list of Memory Nodes in that cpuset
 - memory_migrate flag: if set, move pages to the cpuset's nodes
 - cpu_exclusive flag: is cpu placement exclusive?
 - mem_exclusive flag: is memory placement exclusive?
 - memory_pressure: measure of how much paging pressure in cpuset

In addition, only the root cpuset has the following file:
 - memory_pressure_enabled flag: compute memory_pressure?

New cpusets are created using the mkdir system call or shell
command.  The properties of a cpuset, such as its flags, allowed
CPUs and Memory Nodes, and attached tasks, are modified by writing
to the appropriate file in that cpuset's directory, as listed above.

The named hierarchical structure of nested cpusets allows partitioning
a large system into nested, dynamically changeable, "soft partitions".

The attachment of each task, automatically inherited at fork by any
children of that task, to a cpuset allows organizing the workload
on a system into related sets of tasks such that each set is constrained
to using the CPUs and Memory Nodes of a particular cpuset.  A task
may be re-attached to any other cpuset, if allowed by the permissions
on the necessary cpuset file system directories.

Such management of a system "in the large" integrates smoothly with
the detailed placement done on individual tasks and memory regions
using the sched_setaffinity, mbind and set_mempolicy system calls.

The following rules apply to each cpuset:

 - Its CPUs and Memory Nodes must be a subset of its parent's.
 - It can only be marked exclusive if its parent is.
 - If its cpu or memory is exclusive, they may not overlap any sibling.

These rules, and the natural hierarchy of cpusets, enable efficient
enforcement of the exclusive guarantee, without having to scan all
cpusets every time any of them change, to ensure nothing overlaps an
exclusive cpuset.  Also, the use of a Linux virtual file system (vfs)
to represent the cpuset hierarchy provides for a familiar permission
and name space for cpusets, with a minimum of additional kernel code.

The cpus and mems files in the root (top_cpuset) cpuset are
read-only.  The cpus file automatically tracks the value of
cpu_online_map using a CPU hotplug notifier, and the mems file
automatically tracks the value of node_states[N_HIGH_MEMORY]
(i.e., nodes with memory) using the cpuset_track_online_nodes() hook.


1.4 What are exclusive cpusets ?
--------------------------------

If a cpuset is cpu or mem exclusive, no other cpuset, other than
a direct ancestor or descendant, may share any of the same CPUs or
Memory Nodes.

A cpuset that is mem_exclusive restricts kernel allocations for
page, buffer and other data commonly shared by the kernel across
multiple users.  All cpusets, whether mem_exclusive or not, restrict
allocations of memory for user space.
This enables configuring a
system so that several independent jobs can share common kernel data,
such as file system pages, while isolating each job's user allocation in
its own cpuset.  To do this, construct a large mem_exclusive cpuset to
hold all the jobs, and construct child, non-mem_exclusive cpusets for
each individual job.  Only a small amount of typical kernel memory,
such as requests from interrupt handlers, is allowed to be taken
outside even a mem_exclusive cpuset.


1.5 What is memory_pressure ?
-----------------------------
The memory_pressure of a cpuset provides a simple per-cpuset metric
of the rate at which the tasks in a cpuset are attempting to free up
in-use memory on the nodes of the cpuset to satisfy additional memory
requests.

This enables batch managers, monitoring jobs running in dedicated
cpusets, to efficiently detect what level of memory pressure that job
is causing.

This is useful both on tightly managed systems running a wide mix of
submitted jobs, which may choose to terminate or re-prioritize jobs that
are trying to use more memory than allowed on the nodes assigned to them,
and with tightly coupled, long-running, massively parallel scientific
computing jobs that will dramatically fail to meet required performance
goals if they start to use more memory than allowed to them.

This mechanism provides a very economical way for the batch manager
to monitor a cpuset for signs of memory pressure.  It's up to the
batch manager or other user code to decide what to do about it and
take action.

==> Unless this feature is enabled by writing "1" to the special file
    /dev/cpuset/memory_pressure_enabled, the hook in the rebalance
    code of __alloc_pages() for this metric reduces to simply noticing
    that the cpuset_memory_pressure_enabled flag is zero.  So only
    systems that enable this feature will compute the metric.

Why a per-cpuset, running average:

    Because this meter is per-cpuset, rather than per-task or mm,
    the system load imposed by a batch scheduler monitoring this
    metric is sharply reduced on large systems, because a scan of
    the tasklist can be avoided on each set of queries.

    Because this meter is a running average, instead of an accumulating
    counter, a batch scheduler can detect memory pressure with a
    single read, instead of having to read and accumulate results
    for a period of time.

    Because this meter is per-cpuset rather than per-task or mm,
    the batch scheduler can obtain the key information, memory
    pressure in a cpuset, with a single read, rather than having to
    query and accumulate results over all the (dynamically changing)
    set of tasks in the cpuset.

A per-cpuset simple digital filter (requires a spinlock and 3 words
of data per-cpuset) is kept, and updated by any task attached to that
cpuset, if it enters the synchronous (direct) page reclaim code.

A per-cpuset file provides an integer number representing the recent
(half-life of 10 seconds) rate of direct page reclaims caused by
the tasks in the cpuset, in units of reclaims attempted per second,
times 1000.


1.6 What is memory spread ?
---------------------------
There are two boolean flag files per cpuset that control where the
kernel allocates pages for the file system buffers and related
in-kernel data structures.  They are called 'memory_spread_page' and
'memory_spread_slab'.

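Both flags are written like any other cpuset flag file.  As a quick
sketch before the details below (assuming the hierarchy is mounted at
/dev/cpuset and a cpuset named "Charlie" already exists, as in the
example in section 1.8):

# cd /dev/cpuset/Charlie
# /bin/echo 1 > memory_spread_page
# /bin/echo 1 > memory_spread_slab
# cat memory_spread_page      # should now display "1"
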
If the per-cpuset boolean flag file 'memory_spread_page' is set, then
the kernel will spread the file system buffers (page cache) evenly
over all the nodes that the faulting task is allowed to use, instead
of preferring to put those pages on the node where the task is running.

If the per-cpuset boolean flag file 'memory_spread_slab' is set,
then the kernel will spread some file system related slab caches,
such as those for inodes and dentries, evenly over all the nodes that
the faulting task is allowed to use, instead of preferring to put those
pages on the node where the task is running.

The setting of these flags does not affect the anonymous data segment
or stack segment pages of a task.

By default, both kinds of memory spreading are off, and memory
pages are allocated on the node local to where the task is running,
except perhaps as modified by the task's NUMA mempolicy or cpuset
configuration, so long as sufficient free memory pages are available.

When new cpusets are created, they inherit the memory spread settings
of their parent.

Setting memory spreading causes allocations for the affected page
or slab caches to ignore the task's NUMA mempolicy and be spread
instead.  Tasks using mbind() or set_mempolicy() calls to set NUMA
mempolicies will not notice any change in these calls as a result of
their containing task's memory spread settings.  If memory spreading
is turned off, then the currently specified NUMA mempolicy once again
applies to memory page allocations.

Both 'memory_spread_page' and 'memory_spread_slab' are boolean flag
files.  By default they contain "0", meaning that the feature is off
for that cpuset.  If a "1" is written to that file, then that turns
the named feature on.

The implementation is simple.

Setting the flag 'memory_spread_page' turns on a per-process flag
PF_SPREAD_PAGE for each task that is in that cpuset or subsequently
joins that cpuset.  The page allocation calls for the page cache
are modified to perform an inline check for this PF_SPREAD_PAGE task
flag, and if set, a call to a new routine cpuset_mem_spread_node()
returns the node to prefer for the allocation.

Similarly, setting 'memory_spread_slab' turns on the flag
PF_SPREAD_SLAB, and appropriately marked slab caches will allocate
pages from the node returned by cpuset_mem_spread_node().

The cpuset_mem_spread_node() routine is also simple.  It uses the
value of a per-task rotor cpuset_mem_spread_rotor to select the next
node in the current task's mems_allowed to prefer for the allocation.

This memory placement policy is also known (in other contexts) as
round-robin or interleave.

This policy can provide substantial improvements for jobs that need
to place thread-local data on the corresponding node, but that need
to access large file system data sets that must be spread across
the several nodes in the job's cpuset in order to fit.  Without this
policy, especially for jobs that might have one thread reading in the
data set, the memory allocation across the nodes in the job's cpuset
can become very uneven.

1.7 What is sched_load_balance ?
--------------------------------

The kernel scheduler (kernel/sched.c) automatically load balances
tasks.
If one CPU is underutilized, kernel code running on that
CPU will look for tasks on other more overloaded CPUs and move those
tasks to itself, within the constraints of such placement mechanisms
as cpusets and sched_setaffinity.

The algorithmic cost of load balancing and its impact on key shared
kernel data structures such as the task list increases more than
linearly with the number of CPUs being balanced.  So the scheduler
has support to partition the system's CPUs into a number of sched
domains such that it only load balances within each sched domain.
Each sched domain covers some subset of the CPUs in the system;
no two sched domains overlap; some CPUs might not be in any sched
domain and hence won't be load balanced.

Put simply, it costs less to balance between two smaller sched domains
than one big one, but doing so means that overloads in one of the
two domains won't be load balanced to the other one.

By default, there is one sched domain covering all CPUs, except those
marked isolated using the kernel boot time "isolcpus=" argument.

This default load balancing across all CPUs is not well suited to
the following two situations:
 1) On large systems, load balancing across many CPUs is expensive.
    If the system is managed using cpusets to place independent jobs
    on separate sets of CPUs, full load balancing is unnecessary.
 2) Systems supporting realtime on some CPUs need to minimize
    system overhead on those CPUs, including avoiding task load
    balancing if that is not needed.

When the per-cpuset flag "sched_load_balance" is enabled (the default
setting), it requests that all the CPUs in that cpuset's allowed 'cpus'
be contained in a single sched domain, ensuring that load balancing
can move a task (not otherwise pinned, as by sched_setaffinity)
from any CPU in that cpuset to any other.

When the per-cpuset flag "sched_load_balance" is disabled, then the
scheduler will avoid load balancing across the CPUs in that cpuset,
--except-- in so far as is necessary because some overlapping cpuset
has "sched_load_balance" enabled.

So, for example, if the top cpuset has the flag "sched_load_balance"
enabled, then the scheduler will have one sched domain covering all
CPUs, and the setting of the "sched_load_balance" flag in any other
cpusets won't matter, as we're already fully load balancing.

Therefore in the above two situations, the top cpuset flag
"sched_load_balance" should be disabled, and only some of the smaller,
child cpusets should have this flag enabled.

When doing this, you don't usually want to leave any unpinned tasks in
the top cpuset that might use non-trivial amounts of CPU, as such tasks
may be artificially constrained to some subset of CPUs, depending on
the particulars of this flag setting in descendant cpusets.  Even if
such a task could use spare CPU cycles in some other CPUs, the kernel
scheduler might not consider the possibility of load balancing that
task to that underused CPU.

Of course, tasks pinned to a particular CPU can be left in a cpuset
that disables "sched_load_balance", as those tasks aren't going anywhere
else anyway.

There is an impedance mismatch here, between cpusets and sched domains.
Cpusets are hierarchical and nest.  Sched domains are flat; they don't
overlap, and each CPU is in at most one sched domain.

It is necessary for sched domains to be flat because load balancing
across partially overlapping sets of CPUs would risk unstable dynamics
that would be beyond our understanding.  So if each of two partially
overlapping cpusets enables the flag 'sched_load_balance', then we
form a single sched domain that is a superset of both.  We won't move
a task to a CPU outside its cpuset, but the scheduler load balancing
code might waste some compute cycles considering that possibility.

This mismatch is why there is not a simple one-to-one relation
between which cpusets have the flag "sched_load_balance" enabled,
and the sched domain configuration.  If a cpuset enables the flag, it
will get balancing across all its CPUs, but if it disables the flag,
it will only be assured of no load balancing if no other overlapping
cpuset enables the flag.

If two cpusets have partially overlapping 'cpus' allowed, and only
one of them has this flag enabled, then the other may find its
tasks only partially load balanced, just on the overlapping CPUs.
This is just the general case of the top_cpuset example given a few
paragraphs above.  In the general case, as in the top cpuset case,
don't leave tasks that might use non-trivial amounts of CPU in
such partially load balanced cpusets, as they may be artificially
constrained to some subset of the CPUs allowed to them, for lack of
load balancing to the other CPUs.

1.7.1 sched_load_balance implementation details.
------------------------------------------------

The per-cpuset flag 'sched_load_balance' defaults to enabled (contrary
to most cpuset flags.)  When enabled for a cpuset, the kernel will
ensure that it can load balance across all the CPUs in that cpuset
(makes sure that all the CPUs in the cpus_allowed of that cpuset are
in the same sched domain.)

If two overlapping cpusets both have 'sched_load_balance' enabled,
then they will be (must be) both in the same sched domain.

If, as is the default, the top cpuset has 'sched_load_balance' enabled,
then by the above that means there is a single sched domain covering
the whole system, regardless of any other cpuset settings.

The kernel commits to user space that it will avoid load balancing
where it can.  It will pick as fine a granularity partition of sched
domains as it can while still providing load balancing for any set
of CPUs allowed to a cpuset having 'sched_load_balance' enabled.

The internal kernel cpuset-to-scheduler interface passes from the
cpuset code to the scheduler code a partition of the load balanced
CPUs in the system.  This partition is a set of subsets (represented
as an array of cpumask_t) of CPUs, pairwise disjoint, that cover all
the CPUs that must be load balanced.

Whenever the 'sched_load_balance' flag changes, or CPUs come or go
from a cpuset with this flag enabled, or a cpuset with this flag
enabled is removed, the cpuset code builds a new such partition and
passes it to the scheduler sched domain setup code, to have the sched
domains rebuilt as necessary.

This partition exactly defines what sched domains the scheduler should
set up - one sched domain for each element (cpumask_t) in the partition.

The scheduler remembers the currently active sched domain partitions.
When the scheduler routine partition_sched_domains() is invoked from
the cpuset code to update these sched domains, it compares the new
partition requested with the current one, and updates its sched domains,
removing the old and adding the new, for each change.


1.8 How do I use cpusets ?
--------------------------

In order to minimize the impact of cpusets on critical kernel
code, such as the scheduler, and due to the fact that the kernel
does not support one task updating the memory placement of another
task directly, the impact on a task of changing its cpuset CPU
or Memory Node placement, or of changing to which cpuset a task
is attached, is subtle.

If a cpuset has its Memory Nodes modified, then for each task attached
to that cpuset, the next time that the kernel attempts to allocate
a page of memory for that task, the kernel will notice the change
in the task's cpuset, and update its per-task memory placement to
remain within the new cpuset's memory placement.  If the task was using
mempolicy MPOL_BIND, and the nodes to which it was bound overlap with
its new cpuset, then the task will continue to use whatever subset
of MPOL_BIND nodes are still allowed in the new cpuset.  If the task
was using MPOL_BIND and now none of its MPOL_BIND nodes are allowed
in the new cpuset, then the task will be essentially treated as if it
was MPOL_BIND bound to the new cpuset (even though its NUMA placement,
as queried by get_mempolicy(), doesn't change).  If a task is moved
from one cpuset to another, then the kernel will adjust the task's
memory placement, as above, the next time that the kernel attempts
to allocate a page of memory for that task.

If a cpuset has its 'cpus' modified, then each task in that cpuset
will have its allowed CPU placement changed immediately.  Similarly,
if a task's pid is written to a cpuset's 'tasks' file, in either its
current cpuset or another cpuset, then its allowed CPU placement is
changed immediately.  If such a task had been bound to some subset
of its cpuset using the sched_setaffinity() call, the task will be
allowed to run on any CPU allowed in its new cpuset, negating the
effect of the prior sched_setaffinity() call.

In summary, the memory placement of a task whose cpuset is changed is
updated by the kernel, on the next allocation of a page for that task,
but the processor placement is not updated until that task's pid is
rewritten to the 'tasks' file of its cpuset.  This is done to avoid
impacting the scheduler code in the kernel with a check for changes
in a task's processor placement.

Normally, once a page is allocated (given a physical page
of main memory), then that page stays on whatever node it
was allocated on, so long as it remains allocated, even if the
cpuset's memory placement policy 'mems' subsequently changes.
If the cpuset flag file 'memory_migrate' is set true, then when
tasks are attached to that cpuset, any pages that task had
allocated to it on nodes in its previous cpuset are migrated
to the task's new cpuset.  The relative placement of a page within
the cpuset is preserved during these migration operations if possible.
For example, if the page was on the second valid node of the prior
cpuset, then the page will be placed on the second valid node of the
new cpuset.

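For example, a sketch of enabling migration before moving a task into
a cpuset (pid 1234 is hypothetical; this assumes the hierarchy is
mounted at /dev/cpuset and a cpuset named "Charlie" exists, as in the
example below):

# cd /dev/cpuset/Charlie
# /bin/echo 1 > memory_migrate
# /bin/echo 1234 > tasks      # pages task 1234 had allocated on its
                              # old nodes now migrate to Charlie's mems
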
Also, if 'memory_migrate' is set true, then if that cpuset's
'mems' file is modified, pages allocated to tasks in that
cpuset, that were on nodes in the previous setting of 'mems',
will be moved to nodes in the new setting of 'mems'.
Pages that were not in the task's prior cpuset, or in the cpuset's
prior 'mems' setting, will not be moved.

There is an exception to the above.  If hotplug functionality is used
to remove all the CPUs that are currently assigned to a cpuset,
then the kernel will automatically update the cpus_allowed of all
tasks attached to CPUs in that cpuset to allow all CPUs.  When memory
hotplug functionality for removing Memory Nodes is available, a
similar exception is expected to apply there as well.  In general,
the kernel prefers to violate cpuset placement over starving a task
that has had all its allowed CPUs or Memory Nodes taken offline.  User
code should reconfigure cpusets to only refer to online CPUs and Memory
Nodes when using hotplug to add or remove such resources.

There is a second exception to the above.  GFP_ATOMIC requests are
kernel-internal allocations that must be satisfied immediately.
The kernel may drop some request, in rare cases even panic, if a
GFP_ATOMIC alloc fails.  If the request cannot be satisfied within
the current task's cpuset, then we relax the cpuset, and look for
memory anywhere we can find it.  It's better to violate the cpuset
than stress the kernel.

To start a new job that is to be contained within a cpuset, the steps are:

 1) mkdir /dev/cpuset
 2) mount -t cgroup -ocpuset cpuset /dev/cpuset
 3) Create the new cpuset by doing mkdir's and write's (or echo's) in
    the /dev/cpuset virtual file system.
 4) Start a task that will be the "founding father" of the new job.
 5) Attach that task to the new cpuset by writing its pid to the
    /dev/cpuset tasks file for that cpuset.
 6) fork, exec or clone the job tasks from this founding father task.

For example, the following sequence of commands will set up a cpuset
named "Charlie", containing just CPUs 2 and 3, and Memory Node 1,
and then start a subshell 'sh' in that cpuset:

  mount -t cgroup -ocpuset cpuset /dev/cpuset
  cd /dev/cpuset
  mkdir Charlie
  cd Charlie
  /bin/echo 2-3 > cpus
  /bin/echo 1 > mems
  /bin/echo $$ > tasks
  sh
  # The subshell 'sh' is now running in cpuset Charlie
  # The next line should display '/Charlie'
  cat /proc/self/cpuset

In the future, a C library interface to cpusets will likely be
available.  For now, the only way to query or modify cpusets is
via the cpuset file system, using the various cd, mkdir, echo, cat,
rmdir commands from the shell, or their equivalents from C.

The sched_setaffinity calls can also be done at the shell prompt using
SGI's runon or Robert Love's taskset.  The mbind and set_mempolicy
calls can be done at the shell prompt using the numactl command
(part of Andi Kleen's numa package).


2. Usage Examples and Syntax
============================

2.1 Basic Usage
---------------

Creating, modifying and using cpusets can be done through the cpuset
virtual filesystem.

To mount it, type:
# mount -t cgroup -o cpuset cpuset /dev/cpuset

Then under /dev/cpuset you can find a tree that corresponds to the
tree of the cpusets in the system.  For instance, /dev/cpuset
is the cpuset that holds the whole system.

If you want to create a new cpuset under /dev/cpuset:
# cd /dev/cpuset
# mkdir my_cpuset

Now you want to do something with this cpuset.
# cd my_cpuset

In this directory you can find several files:
# ls
cpus  cpu_exclusive  mems  mem_exclusive  tasks

Reading them will give you information about the state of this cpuset:
the CPUs and Memory Nodes it can use, the processes that are using
it, its properties.  By writing to these files you can manipulate
the cpuset.

Set some flags:
# /bin/echo 1 > cpu_exclusive

Add some cpus:
# /bin/echo 0-7 > cpus

Add some mems:
# /bin/echo 0-7 > mems

Now attach your shell to this cpuset:
# /bin/echo $$ > tasks

You can also create cpusets inside your cpuset by using mkdir in this
directory.
# mkdir my_sub_cs

To remove a cpuset, just use rmdir:
# rmdir my_sub_cs
This will fail if the cpuset is in use (has cpusets inside, or has
processes attached).

Note that for legacy reasons, the "cpuset" filesystem exists as a
wrapper around the cgroup filesystem.

The command

mount -t cpuset X /dev/cpuset

is equivalent to

mount -t cgroup -ocpuset X /dev/cpuset
echo "/sbin/cpuset_release_agent" > /dev/cpuset/release_agent

2.2 Adding/removing cpus
------------------------

This is the syntax to use when writing to the cpus or mems files
in cpuset directories:

# /bin/echo 1-4 > cpus       -> set cpus list to cpus 1,2,3,4
# /bin/echo 1,2,3,4 > cpus   -> set cpus list to cpus 1,2,3,4

2.3 Setting flags
-----------------

The syntax is very simple:

# /bin/echo 1 > cpu_exclusive   -> set flag 'cpu_exclusive'
# /bin/echo 0 > cpu_exclusive   -> unset flag 'cpu_exclusive'

2.4 Attaching processes
-----------------------

# /bin/echo PID > tasks

Note that it is PID, not PIDs.  You can only attach ONE task at a time.
If you have several tasks to attach, you have to do it one after another:

# /bin/echo PID1 > tasks
# /bin/echo PID2 > tasks
	...
# /bin/echo PIDn > tasks


3. Questions
============

Q: What's up with this '/bin/echo' ?
A: bash's builtin 'echo' command does not check calls to write() for
   errors.  If you use it in the cpuset file system, you won't be
   able to tell whether a command succeeded or failed.

Q: When I attach processes, only the first one on the line really gets attached !
A: We can only return one error code per call to write().  So you should also
   put only ONE pid.


4. Contact
==========

Web: http://www.bullopensource.org/cpuset