Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

docs: cgroup-v1: add it to the admin-guide book

Those files belong to the admin guide, so add them.

Signed-off-by: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>

+32 -33
+1 -1
Documentation/admin-guide/cgroup-v2.rst
@@ -9,7 +9,7 @@
 conventions of cgroup v2. It describes all userland-visible aspects
 of cgroup including core and specific controller behaviors. All
 future changes must be reflected in this document. Documentation for
-v1 is available under Documentation/cgroup-v1/.
+v1 is available under Documentation/admin-guide/cgroup-v1/.
 
 .. CONTENTS
 
+1
Documentation/admin-guide/index.rst
@@ -59,6 +59,7 @@
 
 initrd
 cgroup-v2
+cgroup-v1/index
 serial-console
 braille-console
 parport
+2 -2
Documentation/admin-guide/kernel-parameters.txt
@@ -4089,7 +4089,7 @@
 
 relax_domain_level=
 [KNL, SMP] Set scheduler's default relax_domain_level.
-See Documentation/cgroup-v1/cpusets.rst.
+See Documentation/admin-guide/cgroup-v1/cpusets.rst.
 
 reserve= [KNL,BUGS] Force kernel to ignore I/O ports or memory
 Format: <base1>,<size1>[,<base2>,<size2>,...]
@@ -4599,7 +4599,7 @@
 swapaccount=[0|1]
 [KNL] Enable accounting of swap in memory resource
 controller if no parameter or 1 is given or disable
-it if 0 is given (See Documentation/cgroup-v1/memory.rst)
+it if 0 is given (See Documentation/admin-guide/cgroup-v1/memory.rst)
 
 swiotlb= [ARM,IA-64,PPC,MIPS,X86]
 Format: { <int> | force | noforce }
+1 -1
Documentation/admin-guide/mm/numa_memory_policy.rst
@@ -15,7 +15,7 @@
 support.
 
 Memory policies should not be confused with cpusets
-(``Documentation/cgroup-v1/cpusets.rst``)
+(``Documentation/admin-guide/cgroup-v1/cpusets.rst``)
 which is an administrative mechanism for restricting the nodes from which
 memory may be allocated by a set of processes. Memory policies are a
 programming interface that a NUMA-aware application can take advantage of. When
+1 -1
Documentation/block/bfq-iosched.rst
@@ -547,7 +547,7 @@
 created, and kept up-to-date by bfq, depends on whether
 CONFIG_BFQ_CGROUP_DEBUG is set. If it is set, then bfq creates all
 the stat files documented in
-Documentation/cgroup-v1/blkio-controller.rst. If, instead,
+Documentation/admin-guide/cgroup-v1/blkio-controller.rst. If, instead,
 CONFIG_BFQ_CGROUP_DEBUG is not set, then bfq creates only the files::
 
 blkio.bfq.io_service_bytes
Documentation/cgroup-v1/blkio-controller.rst → Documentation/admin-guide/cgroup-v1/blkio-controller.rst
+2 -2
Documentation/cgroup-v1/cgroups.rst → Documentation/admin-guide/cgroup-v1/cgroups.rst
@@ -3,7 +3,7 @@
 ==============
 
 Written by Paul Menage <menage@google.com> based on
-Documentation/cgroup-v1/cpusets.rst
+Documentation/admin-guide/cgroup-v1/cpusets.rst
 
 Original copyright statements from cpusets.txt:
 
@@ -76,7 +76,7 @@
 tracking. The intention is that other subsystems hook into the generic
 cgroup support to provide new attributes for cgroups, such as
 accounting/limiting the resources which processes in a cgroup can
-access. For example, cpusets (see Documentation/cgroup-v1/cpusets.rst) allow
+access. For example, cpusets (see Documentation/admin-guide/cgroup-v1/cpusets.rst) allow
 you to associate a set of CPUs and a set of memory nodes with the
 tasks in each cgroup.
 
Documentation/cgroup-v1/cpuacct.rst → Documentation/admin-guide/cgroup-v1/cpuacct.rst
+1 -1
Documentation/cgroup-v1/cpusets.rst → Documentation/admin-guide/cgroup-v1/cpusets.rst
@@ -49,7 +49,7 @@
 job placement on large systems.
 
 Cpusets use the generic cgroup subsystem described in
-Documentation/cgroup-v1/cgroups.rst.
+Documentation/admin-guide/cgroup-v1/cgroups.rst.
 
 Requests by a task, using the sched_setaffinity(2) system call to
 include CPUs in its CPU affinity mask, and using the mbind(2) and
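As a side note on the cpusets hunk above: sched_setaffinity(2) requests are always filtered through the calling task's cpuset, and Python exposes the same syscalls as os.sched_getaffinity/os.sched_setaffinity on Linux. A minimal, guarded sketch (purely illustrative, not part of the patch):

```python
import os

# sched_getaffinity(2) reports the effective CPU mask, already
# intersected with the task's cpuset (Linux-only API, hence the guard).
if hasattr(os, "sched_getaffinity"):
    allowed = os.sched_getaffinity(0)
    # Every runnable task has at least one permitted CPU.
    assert len(allowed) >= 1
    print(len(allowed) >= 1)
```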
Documentation/cgroup-v1/devices.rst → Documentation/admin-guide/cgroup-v1/devices.rst
Documentation/cgroup-v1/freezer-subsystem.rst → Documentation/admin-guide/cgroup-v1/freezer-subsystem.rst
Documentation/cgroup-v1/hugetlb.rst → Documentation/admin-guide/cgroup-v1/hugetlb.rst
-2
Documentation/cgroup-v1/index.rst → Documentation/admin-guide/cgroup-v1/index.rst
@@ -1,5 +1,3 @@
-:orphan:
-
 ========================
 Control Groups version 1
 ========================
+2 -2
Documentation/cgroup-v1/memcg_test.rst → Documentation/admin-guide/cgroup-v1/memcg_test.rst
@@ -10,7 +10,7 @@
 is complex. This is a document for memcg's internal behavior.
 Please note that implementation details can be changed.
 
-(*) Topics on API should be in Documentation/cgroup-v1/memory.rst)
+(*) Topics on API should be in Documentation/admin-guide/cgroup-v1/memory.rst)
 
 0. How to record usage ?
 ========================
@@ -327,7 +327,7 @@
 You can see charges have been moved by reading ``*.usage_in_bytes`` or
 memory.stat of both A and B.
 
-See 8.2 of Documentation/cgroup-v1/memory.rst to see what value should
+See 8.2 of Documentation/admin-guide/cgroup-v1/memory.rst to see what value should
 be written to move_charge_at_immigrate.
 
 9.10 Memory thresholds
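For context on the move_charge_at_immigrate value the hunk above points at: section 8.2 of memory.rst defines it as a bitmask, with bit 0 selecting a task's anonymous pages and bit 1 its file pages. A small illustrative sketch (the constant names are ours, not kernel identifiers):

```python
# Sketch of the move_charge_at_immigrate bitmask documented in
# memory.rst section 8.2; constant names are illustrative only.
MOVE_CHARGE_ANON = 1 << 0  # bit 0: move anonymous pages (and swap of them)
MOVE_CHARGE_FILE = 1 << 1  # bit 1: move file pages

# Writing this value to memory.move_charge_at_immigrate moves both kinds.
print(MOVE_CHARGE_ANON | MOVE_CHARGE_FILE)  # 3
```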
Documentation/cgroup-v1/memory.rst → Documentation/admin-guide/cgroup-v1/memory.rst
Documentation/cgroup-v1/net_cls.rst → Documentation/admin-guide/cgroup-v1/net_cls.rst
Documentation/cgroup-v1/net_prio.rst → Documentation/admin-guide/cgroup-v1/net_prio.rst
Documentation/cgroup-v1/pids.rst → Documentation/admin-guide/cgroup-v1/pids.rst
Documentation/cgroup-v1/rdma.rst → Documentation/admin-guide/cgroup-v1/rdma.rst
+1 -1
Documentation/filesystems/tmpfs.txt
@@ -98,7 +98,7 @@
 use at file creation time. When a task allocates a file in the file
 system, the mount option memory policy will be applied with a NodeList,
 if any, modified by the calling task's cpuset constraints
-[See Documentation/cgroup-v1/cpusets.rst] and any optional flags, listed
+[See Documentation/admin-guide/cgroup-v1/cpusets.rst] and any optional flags, listed
 below. If the resulting NodeLists is the empty set, the effective memory
 policy for the file will revert to "default" policy.
 
+1 -1
Documentation/kernel-per-CPU-kthreads.txt
@@ -12,7 +12,7 @@
 
 - Documentation/IRQ-affinity.txt: Binding interrupts to sets of CPUs.
 
-- Documentation/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
+- Documentation/admin-guide/cgroup-v1: Using cgroups to bind tasks to sets of CPUs.
 
 - man taskset: Using the taskset command to bind tasks to sets
 of CPUs.
+1 -1
Documentation/scheduler/sched-deadline.rst
@@ -669,7 +669,7 @@
 
 -deadline tasks cannot have an affinity mask smaller that the entire
 root_domain they are created on. However, affinities can be specified
-through the cpuset facility (Documentation/cgroup-v1/cpusets.rst).
+through the cpuset facility (Documentation/admin-guide/cgroup-v1/cpusets.rst).
 
 5.1 SCHED_DEADLINE and cpusets HOWTO
 ------------------------------------
+1 -1
Documentation/scheduler/sched-design-CFS.rst
@@ -222,7 +222,7 @@
 
 These options need CONFIG_CGROUPS to be defined, and let the administrator
 create arbitrary groups of tasks, using the "cgroup" pseudo filesystem. See
-Documentation/cgroup-v1/cgroups.rst for more information about this filesystem.
+Documentation/admin-guide/cgroup-v1/cgroups.rst for more information about this filesystem.
 
 When CONFIG_FAIR_GROUP_SCHED is defined, a "cpu.shares" file is created for each
 group created using the pseudo filesystem. See example steps below to create
+1 -1
Documentation/scheduler/sched-rt-group.rst
@@ -133,7 +133,7 @@
 to control the CPU time reserved for each control group.
 
 For more information on working with control groups, you should read
-Documentation/cgroup-v1/cgroups.rst as well.
+Documentation/admin-guide/cgroup-v1/cgroups.rst as well.
 
 Group settings are checked against the following limits in order to keep the
 configuration schedulable:
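The schedulability limits that sched-rt-group.rst goes on to list boil down to: the summed utilization (rt_runtime over rt_period) of a group's children may not exceed the group's own. A hedged sketch of that check (function and argument names are ours, not kernel code):

```python
from fractions import Fraction

# Sketch of the rt-group schedulability rule from sched-rt-group.rst:
# the sum of the children's rt_runtime_us / rt_period_us utilizations
# must not exceed the parent's. Names here are illustrative only.
def rt_schedulable(parent, children):
    parent_util = Fraction(parent[0], parent[1])
    child_util = sum(Fraction(r, p) for r, p in children)
    return child_util <= parent_util

# Two children each asking for 40% fit under a 95% parent budget.
print(rt_schedulable((950_000, 1_000_000),
                     [(400_000, 1_000_000), (400_000, 1_000_000)]))  # True
```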
+2 -2
Documentation/vm/numa.rst
@@ -67,7 +67,7 @@
 physical memory. NUMA emluation is useful for testing NUMA kernel and
 application features on non-NUMA platforms, and as a sort of memory resource
 management mechanism when used together with cpusets.
-[see Documentation/cgroup-v1/cpusets.rst]
+[see Documentation/admin-guide/cgroup-v1/cpusets.rst]
 
 For each node with memory, Linux constructs an independent memory management
 subsystem, complete with its own free page lists, in-use page lists, usage
@@ -114,7 +114,7 @@
 
 System administrators can restrict the CPUs and nodes' memories that a non-
 privileged user can specify in the scheduling or NUMA commands and functions
-using control groups and CPUsets. [see Documentation/cgroup-v1/cpusets.rst]
+using control groups and CPUsets. [see Documentation/admin-guide/cgroup-v1/cpusets.rst]
 
 On architectures that do not hide memoryless nodes, Linux will include only
 zones [nodes] with memory in the zonelists. This means that for a memoryless
+1 -1
Documentation/vm/page_migration.rst
@@ -41,7 +41,7 @@
 Larger installations usually partition the system using cpusets into
 sections of nodes. Paul Jackson has equipped cpusets with the ability to
 move pages when a task is moved to another cpuset (See
-Documentation/cgroup-v1/cpusets.rst).
+Documentation/admin-guide/cgroup-v1/cpusets.rst).
 Cpusets allows the automation of process locality. If a task is moved to
 a new cpuset then also all its pages are moved with it so that the
 performance of the process does not sink dramatically. Also the pages
+1 -1
Documentation/vm/unevictable-lru.rst
@@ -98,7 +98,7 @@
 --------------------------------
 
 The unevictable LRU facility interacts with the memory control group [aka
-memory controller; see Documentation/cgroup-v1/memory.rst] by extending the
+memory controller; see Documentation/admin-guide/cgroup-v1/memory.rst] by extending the
 lru_list enum.
 
 The memory controller data structure automatically gets a per-zone unevictable
+2 -2
Documentation/x86/x86_64/fake-numa-for-cpusets.rst
@@ -15,7 +15,7 @@
 amount of system memory that are available to a certain class of tasks.
 
 For more information on the features of cpusets, see
-Documentation/cgroup-v1/cpusets.rst.
+Documentation/admin-guide/cgroup-v1/cpusets.rst.
 There are a number of different configurations you can use for your needs. For
 more information on the numa=fake command line option and its various ways of
 configuring fake nodes, see Documentation/x86/x86_64/boot-options.rst.
@@ -40,7 +40,7 @@
 On node 3 totalpages: 131072
 
 Now following the instructions for mounting the cpusets filesystem from
-Documentation/cgroup-v1/cpusets.rst, you can assign fake nodes (i.e. contiguous memory
+Documentation/admin-guide/cgroup-v1/cpusets.rst, you can assign fake nodes (i.e. contiguous memory
 address spaces) to individual cpusets::
 
 [root@xroads /]# mkdir exampleset
+2 -2
MAINTAINERS
@@ -4158,7 +4158,7 @@
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
 S: Maintained
 F: Documentation/admin-guide/cgroup-v2.rst
-F: Documentation/cgroup-v1/
+F: Documentation/admin-guide/cgroup-v1/
 F: include/linux/cgroup*
 F: kernel/cgroup/
 
@@ -4169,7 +4169,7 @@
 W: http://oss.sgi.com/projects/cpusets/
 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup.git
 S: Maintained
-F: Documentation/cgroup-v1/cpusets.rst
+F: Documentation/admin-guide/cgroup-v1/cpusets.rst
 F: include/linux/cpuset.h
 F: kernel/cgroup/cpuset.c
 
+1 -1
block/Kconfig
@@ -89,7 +89,7 @@
 one needs to mount and use blkio cgroup controller for creating
 cgroups and specifying per device IO rate policies.
 
-See Documentation/cgroup-v1/blkio-controller.rst for more information.
+See Documentation/admin-guide/cgroup-v1/blkio-controller.rst for more information.
 
 config BLK_DEV_THROTTLING_LOW
 bool "Block throttling .low limit interface support (EXPERIMENTAL)"
+1 -1
include/linux/cgroup-defs.h
@@ -624,7 +624,7 @@
 
 /*
  * Control Group subsystem type.
- * See Documentation/cgroup-v1/cgroups.rst for details
+ * See Documentation/admin-guide/cgroup-v1/cgroups.rst for details
  */
 struct cgroup_subsys {
 struct cgroup_subsys_state *(*css_alloc)(struct cgroup_subsys_state *parent_css);
+1 -1
include/uapi/linux/bpf.h
@@ -806,7 +806,7 @@
  * based on a user-provided identifier for all traffic coming from
  * the tasks belonging to the related cgroup. See also the related
  * kernel documentation, available from the Linux sources in file
- * *Documentation/cgroup-v1/net_cls.rst*.
+ * *Documentation/admin-guide/cgroup-v1/net_cls.rst*.
  *
  * The Linux kernel has two versions for cgroups: there are
  * cgroups v1 and cgroups v2. Both are available to users, who can
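The net_cls controller this bpf.h comment refers to tags packets from a cgroup's tasks with a class identifier. Per net_cls.rst, the classid is written as a hexadecimal value 0xAAAABBBB, where AAAA is the tc major handle and BBBB the minor. A minimal sketch of that encoding (the helper name is ours, not a kernel API):

```python
# Sketch of the net_cls.classid encoding described in net_cls.rst:
# 0xAAAABBBB, where AAAA is the tc major handle and BBBB the minor.
# The helper name is illustrative, not a kernel API.
def net_cls_classid(major: int, minor: int) -> int:
    return (major << 16) | minor

# tc handle 10:1 becomes classid 0x00100001.
print(hex(net_cls_classid(0x10, 0x1)))  # 0x100001
```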
+2 -2
init/Kconfig
@@ -821,7 +821,7 @@
 controls or device isolation.
 See
 - Documentation/scheduler/sched-design-CFS.rst (CFS)
-- Documentation/cgroup-v1/ (features for grouping, isolation
+- Documentation/admin-guide/cgroup-v1/ (features for grouping, isolation
 and resource control)
 
 Say N if unsure.
@@ -883,7 +883,7 @@
 CONFIG_CFQ_GROUP_IOSCHED=y; for enabling throttling policy, set
 CONFIG_BLK_DEV_THROTTLING=y.
 
-See Documentation/cgroup-v1/blkio-controller.rst for more information.
+See Documentation/admin-guide/cgroup-v1/blkio-controller.rst for more information.
 
 config CGROUP_WRITEBACK
 bool
+1 -1
kernel/cgroup/cpuset.c
@@ -729,7 +729,7 @@
  * load balancing domains (sched domains) as specified by that partial
  * partition.
  *
- * See "What is sched_load_balance" in Documentation/cgroup-v1/cpusets.rst
+ * See "What is sched_load_balance" in Documentation/admin-guide/cgroup-v1/cpusets.rst
  * for a background explanation of this.
  *
  * Does not return errors, on the theory that the callers of this
+1 -1
security/device_cgroup.c
@@ -509,7 +509,7 @@
  * This is one of the three key functions for hierarchy implementation.
  * This function is responsible for re-evaluating all the cgroup's active
  * exceptions due to a parent's exception change.
- * Refer to Documentation/cgroup-v1/devices.rst for more details.
+ * Refer to Documentation/admin-guide/cgroup-v1/devices.rst for more details.
  */
 static void revalidate_active_exceptions(struct dev_cgroup *devcg)
 {
+1 -1
tools/include/uapi/linux/bpf.h
@@ -806,7 +806,7 @@
  * based on a user-provided identifier for all traffic coming from
  * the tasks belonging to the related cgroup. See also the related
  * kernel documentation, available from the Linux sources in file
- * *Documentation/cgroup-v1/net_cls.rst*.
+ * *Documentation/admin-guide/cgroup-v1/net_cls.rst*.
  *
  * The Linux kernel has two versions for cgroups: there are
  * cgroups v1 and cgroups v2. Both are available to users, who can