Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

trivial: cgroups: documentation typo and spelling corrections

Minor typo and spelling corrections fixed whilst reading
to learn about cgroups capabilities.

Signed-off-by: Chris Samuel <chris@csamuel.org>
Acked-by: Paul Menage <menage@google.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>

Authored by Chris Samuel and committed by Jiri Kosina (caa790ba c0496f4e)

+13 -13
+5 -5
Documentation/cgroups/cgroups.txt
···
  56   56  state attached to each cgroup in the hierarchy. Each hierarchy has
  57   57  an instance of the cgroup virtual filesystem associated with it.
  58   58  
  59      -At any one time there may be multiple active hierachies of task
       59 +At any one time there may be multiple active hierarchies of task
  60   60  cgroups. Each hierarchy is a partition of all tasks in the system.
  61   61  
  62   62  User level code may create and destroy cgroups by name in an
···
 124  124      / \
 125  125  Prof (15%) students (5%)
 126  126  
 127      -Browsers like firefox/lynx go into the WWW network class, while (k)nfsd go
      127 +Browsers like Firefox/Lynx go into the WWW network class, while (k)nfsd go
 128  128  into NFS network class.
 129  129  
 130      -At the same time firefox/lynx will share an appropriate CPU/Memory class
      130 +At the same time Firefox/Lynx will share an appropriate CPU/Memory class
 131  131  depending on who launched it (prof/student).
 132  132  
 133  133  With the ability to classify tasks differently for different resources
···
 325  325  Creating, modifying, using the cgroups can be done through the cgroup
 326  326  virtual filesystem.
 327  327  
 328      -To mount a cgroup hierarchy will all available subsystems, type:
      328 +To mount a cgroup hierarchy with all available subsystems, type:
 329  329  # mount -t cgroup xxx /dev/cgroup
 330  330  
 331  331  The "xxx" is not interpreted by the cgroup code, but will appear in
···
 521  521  void post_clone(struct cgroup_subsys *ss, struct cgroup *cgrp)
 522  522  (cgroup_mutex held by caller)
 523  523  
 524      -Called at the end of cgroup_clone() to do any paramater
      524 +Called at the end of cgroup_clone() to do any parameter
 525  525  initialization which might be required before a task could attach. For
 526  526  example in cpusets, no task may attach before 'cpus' and 'mems' are set
 527  527  up.
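For context, the mount command whose description is corrected in the hunk above can be exercised as sketched below. This is an illustration, not part of the patch: the mount point /dev/cgroup and the placeholder name "xxx" come straight from cgroups.txt, and the commands require root on a kernel built with cgroup (v1) support.

```shell
# Mount a cgroup hierarchy with all available subsystems.
# "xxx" is an arbitrary label; it is not interpreted by the
# cgroup code, but will appear in /proc/mounts.
mkdir -p /dev/cgroup
mount -t cgroup xxx /dev/cgroup

# Create a child cgroup and attach the current shell to it.
mkdir /dev/cgroup/mygroup
echo $$ > /dev/cgroup/mygroup/tasks

# List the subsystems the running kernel provides.
cat /proc/cgroups
```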
+6 -6
Documentation/cgroups/cpusets.txt
···
 131  131  - The hierarchy of cpusets can be mounted at /dev/cpuset, for
 132  132    browsing and manipulation from user space.
 133  133  - A cpuset may be marked exclusive, which ensures that no other
 134      -  cpuset (except direct ancestors and descendents) may contain
      134 +  cpuset (except direct ancestors and descendants) may contain
 135  135    any overlapping CPUs or Memory Nodes.
 136  136  - You can list all the tasks (by pid) attached to any cpuset.
 137  137  
···
 226  226  --------------------------------
 227  227  
 228  228  If a cpuset is cpu or mem exclusive, no other cpuset, other than
 229      -a direct ancestor or descendent, may share any of the same CPUs or
      229 +a direct ancestor or descendant, may share any of the same CPUs or
 230  230  Memory Nodes.
 231  231  
 232  232  A cpuset that is mem_exclusive *or* mem_hardwall is "hardwalled",
···
 427  427  When doing this, you don't usually want to leave any unpinned tasks in
 428  428  the top cpuset that might use non-trivial amounts of CPU, as such tasks
 429  429  may be artificially constrained to some subset of CPUs, depending on
 430      -the particulars of this flag setting in descendent cpusets. Even if
      430 +the particulars of this flag setting in descendant cpusets. Even if
 431  431  such a task could use spare CPU cycles in some other CPUs, the kernel
 432  432  scheduler might not consider the possibility of load balancing that
 433  433  task to that underused CPU.
···
 531  531  
 532  532  Of course it takes some searching cost to find movable tasks and/or
 533  533  idle CPUs, the scheduler might not search all CPUs in the domain
 534      -everytime. In fact, in some architectures, the searching ranges on
      534 +every time. In fact, in some architectures, the searching ranges on
 535  535  events are limited in the same socket or node where the CPU locates,
 536      -while the load balance on tick searchs all.
      536 +while the load balance on tick searches all.
 537  537  
 538  538  For example, assume CPU Z is relatively far from CPU X. Even if CPU Z
 539  539  is idle while CPU X and the siblings are busy, scheduler can't migrate
···
 601  601  of MPOL_BIND nodes are still allowed in the new cpuset. If the task
 602  602  was using MPOL_BIND and now none of its MPOL_BIND nodes are allowed
 603  603  in the new cpuset, then the task will be essentially treated as if it
 604      -was MPOL_BIND bound to the new cpuset (even though its numa placement,
      604 +was MPOL_BIND bound to the new cpuset (even though its NUMA placement,
 605  605  as queried by get_mempolicy(), doesn't change). If a task is moved
 606  606  from one cpuset to another, then the kernel will adjust the tasks
 607  607  memory placement, as above, the next time that the kernel attempts
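The cpuset attributes touched by the hunks above (the exclusive flags, and the 'cpus'/'mems' placement) are manipulated through the same virtual filesystem. A minimal sketch, assuming root and a cgroup-v1 cpuset controller; the /dev/cpuset mount point follows cpusets.txt, and the cpuset.* file names may appear without the prefix on older kernels:

```shell
# Mount the cpuset hierarchy at /dev/cpuset, as cpusets.txt suggests.
mkdir -p /dev/cpuset
mount -t cgroup -o cpuset cpuset /dev/cpuset

# Create a child cpuset pinned to CPU 1 and memory node 0.
mkdir /dev/cpuset/demo
echo 1 > /dev/cpuset/demo/cpuset.cpus
echo 0 > /dev/cpuset/demo/cpuset.mems

# Mark it CPU-exclusive: no other cpuset except direct ancestors
# and descendants may now contain any of its CPUs.
echo 1 > /dev/cpuset/demo/cpuset.cpu_exclusive

# Attach the current shell to the new cpuset.
echo $$ > /dev/cpuset/demo/tasks
```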
+1 -1
Documentation/cgroups/devices.txt
···
  42   42  movement as people get some experience with this. We may just want
  43   43  to require CAP_SYS_ADMIN, which at least is a separate bit from
  44   44  CAP_MKNOD. We may want to just refuse moving to a cgroup which
  45      -isn't a descendent of the current one. Or we may want to use
       45 +isn't a descendant of the current one. Or we may want to use
  46   46  CAP_MAC_ADMIN, since we really are trying to lock down root.
  47   47  
  48   48  CAP_SYS_ADMIN is needed to modify the whitelist or move another
+1 -1
Documentation/cgroups/memory.txt
···
 302  302  unevictable - # of pages cannot be reclaimed.(mlocked etc)
 303  303  
 304  304  Below is depend on CONFIG_DEBUG_VM.
 305      -inactive_ratio - VM inernal parameter. (see mm/page_alloc.c)
      305 +inactive_ratio - VM internal parameter. (see mm/page_alloc.c)
 306  306  recent_rotated_anon - VM internal parameter. (see mm/vmscan.c)
 307  307  recent_rotated_file - VM internal parameter. (see mm/vmscan.c)
 308  308  recent_scanned_anon - VM internal parameter. (see mm/vmscan.c)