.. _cgroup-v2:

================
Control Group v2
================

:Date: October, 2015
:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   [Whenever any new section is added to this document, please also add
   an entry here.]

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Availability
       2-4-2. Enabling and Disabling
       2-4-3. Top-down Constraint
       2-4-4. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device controller
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. DMEM
       5-8-1. DMEM Interface Files
     5-9. HugeTLB
       5-9-1. HugeTLB Interface Files
     5-10. Misc
       5-10-1. Misc Interface Files
       5-10-2. Migration and Ownership
     5-11. Others
       5-11-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
       R-5-1. Memory


Introduction
============

Terminology
-----------

"cgroup" stands for "control group" and is never capitalized. The
singular form is used to designate the whole feature and also as a
qualifier as in "cgroup controllers". When explicitly referring to
multiple individual control groups, the plural form "cgroups" is used.


What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy,
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.


Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.
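
For example, whether a given mount point is on cgroup2 can be verified
through the magic number (a sketch assuming the hierarchy is mounted
at /sys/fs/cgroup)::

  # stat -f -c %t /sys/fs/cgroup
  63677270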

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled too.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting using the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
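
For example, booting with the following on the kernel command line (a
sketch; "all" may also be replaced with a comma-separated list of
controller names) keeps every controller off the v1 hierarchies::

  cgroup_no_v1=all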

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries. This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace. The mount option is
        ignored on non-init namespace mounts. Please refer to the
        Delegation section for details.

  favordynmods
        Reduce the latencies of dynamic cgroup modifications such as
        task migrations and controller on/offs at the cost of making
        hot path operations such as forks and exits more expensive.
        The static usage pattern of creating a cgroup, enabling
        controllers, and then seeding it with CLONE_INTO_CGROUP is
        not affected by this option.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees. This is legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace. The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups. This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees. This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).

  memory_hugetlb_accounting
        Count HugeTLB memory usage towards the cgroup's overall
        memory usage for the memory controller (for the purpose of
        statistics reporting and memory protection). This is a new
        behavior that could regress existing setups, so it must be
        explicitly opted in with this mount option.

        A few caveats to keep in mind:

        * There is no HugeTLB pool management involved in the memory
          controller. The pre-allocated pool does not belong to anyone.
          Specifically, when a new HugeTLB folio is allocated to
          the pool, it is not accounted for from the perspective of the
          memory controller. It is only charged to a cgroup when it is
          actually used (e.g. at page fault time). Host memory
          overcommit management has to consider this when configuring
          hard limits. In general, HugeTLB pool management should be
          done via other mechanisms (such as the HugeTLB controller).
        * Failure to charge a HugeTLB folio to the memory controller
          results in SIGBUS. This could happen even if the HugeTLB pool
          still has pages available (but the cgroup limit is hit and
          reclaim attempt fails).
        * Charging HugeTLB memory towards the memory controller affects
          memory protection and reclaim dynamics. Any userspace tuning
          (e.g. of low and min limits) needs to take this into account.
        * HugeTLB pages utilized while this option is not selected
          will not be tracked by the memory controller (even if cgroup
          v2 is remounted later on).

  pids_localevents
        This option restores the v1-like behavior of pids.events:max,
        that is, only local (inside cgroup proper) fork failures are
        counted. Without this option, pids.events:max represents any
        pids.max enforcement across the cgroup's subtree.



Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
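
For example, a process with (hypothetical) PID 842 can be moved into a
child cgroup "test" as follows, assuming the current directory is on
the v2 hierarchy::

  # echo 842 > test/cgroup.procs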

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME

"/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested

If the process becomes a zombie and the cgroup it was associated with
is removed subsequently, " (deleted)" is appended to the path::

  # cat /proc/842/cgroup
  ...
  0::/test-cgroup/test-cgroup-nested (deleted)


Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called the threaded domain or thread root interchangeably
and serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain (invalid)"
in these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.

When read, "cgroup.threads" contains the list of the thread IDs of all
threads in the cgroup. Except that the operations are per-thread
instead of per-process, "cgroup.threads" has the same format and
behaves the same way as "cgroup.procs". While "cgroup.threads" can be
written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.
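
For example, a thread with (hypothetical) TID 843 can be moved from
one cgroup of a threaded subtree to a sibling in the same subtree::

  # echo 843 > sub-b/cgroup.threads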

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.

Currently, the following controllers are threaded and can be enabled
in a threaded cgroup:

- cpu
- cpuset
- perf_event
- pids

[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's would be 0.
After the one process in C exits, B and C's "populated" fields would
flip to "0" and file modified events will be generated on the
"cgroup.events" files of both cgroups.
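
One way to wait until a sub-hierarchy becomes empty is to watch
"cgroup.events" for modification events (a sketch assuming the
inotifywait utility from the inotify-tools package is available)::

  # while grep -q 'populated 1' cgroup.events; do
  >     inotifywait -qq -e modify cgroup.events
  > done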


Controlling Controllers
-----------------------

Availability
~~~~~~~~~~~~

A controller is available in a cgroup when it is supported by the kernel (i.e.,
compiled in, not disabled and not attached to a v1 hierarchy) and listed in the
"cgroup.controllers" file. Availability means the controller's interface files
are exposed in the cgroup's directory, allowing the distribution of the target
resource to be observed or controlled within that cgroup.

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same
controller are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.


Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.


No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
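
For example, that leaf-only pattern can be applied to an already
populated cgroup along the following lines (a sketch; error handling
is omitted and the "leaf" name is hypothetical)::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo $pid > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control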


Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, files
outside the namespace should be hidden from the delegatee by the means
of at least mount namespacing, and the kernel rejects writes to all
files on a namespace root from inside the cgroup namespace, except for
those files listed in "/sys/kernel/cgroup/delegate" (including
"cgroup.procs", "cgroup.threads", "cgroup.subtree_control", etc.).

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
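
For example, the first delegation method may look like the following
sketch, where the mount point and the user name "u0" are hypothetical::

  # mkdir /sys/fs/cgroup/delegated
  # chown u0 /sys/fs/cgroup/delegated
  # chown u0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown u0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown u0 /sys/fs/cgroup/delegated/cgroup.subtree_control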


Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy, it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~      - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~      - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.


Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.


Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.


Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.


Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
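
As a sketch, giving one sibling twice the weight of the other
(directory names are hypothetical)::

  # echo 100 > child-a/cpu.weight
  # echo 200 > child-b/cpu.weight

While both children are runnable, child-b receives roughly two thirds
of the contended CPU cycles and child-a the remaining third.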


.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.


Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.


Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.


Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  a two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should be
  generated on the file.


Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled. It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line. The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root. Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line. The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows a space separated list of all controllers available
        to the cgroup. The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups. Starts out empty.

        When read, it shows a space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        A space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers. A controller
        name prefixed with '+' enables the controller and '-' disables
        it. If a controller appears more than once on the list, the
        last one is effective. When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

        populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
        frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file. The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file. The default is "max".

        Maximum allowed descent depth below the current cgroup.
        If the actual descent depth is equal or larger,
        an attempt to create a new child cgroup will fail.

  cgroup.stat
        A read-only flat-keyed file with the following entries:

        nr_descendants
                Total number of visible descendant cgroups.

        nr_dying_descendants
                Total number of dying descendant cgroups. A cgroup
                becomes dying after being deleted by a user. The
                cgroup will remain in the dying state for some
                undefined time (which can depend on system load)
                before being completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding limits, which were active at the moment of
                cgroup deletion.

        nr_subsys_<cgroup_subsys>
                Total number of live cgroup subsystems (e.g. memory
                cgroup) at and beneath the current cgroup.

        nr_dying_subsys_<cgroup_subsys>
                Total number of dying cgroup subsystems (e.g. memory
                cgroup) at and beneath the current cgroup.

  cgroup.stat.local
        A read-only flat-keyed file which exists in non-root cgroups.
        The following entry is defined:

        frozen_usec
                Cumulative time that this cgroup has spent between
                freezing and thawing, regardless of whether by self or
                ancestor groups. NB: (not) reaching the "frozen" state
                is not accounted here.

                Using the following ASCII representation of a cgroup's
                freezer state, ::

                            1      _____
                    frozen  0   __/     \__
                                 ab     cd

                the duration being measured is the span between a and c.

  cgroup.freeze
        A read-write single value file which exists on non-root
        cgroups. Allowed values are "0" and "1". The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups. This means that all belonging processes
        will be stopped and will not run until the cgroup is
        explicitly unfrozen. Freezing of the cgroup may take some
        time; when this action is completed, the "frozen" value in the
        cgroup.events control file will be updated to "1" and the
        corresponding notification will be issued.

        A cgroup can be frozen either by its own settings, or by
        settings of any ancestor cgroups. If any ancestor cgroup is
        frozen, the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal
        signal. They also can enter and leave a frozen cgroup: either
        by an explicit move by a user, or if freezing of the cgroup
        races with fork(). If a process is moved to a frozen cgroup,
        it stops. If a process is moved out of a frozen cgroup, it
        becomes running.

        The frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty)
        cgroup, as well as create new sub-cgroups.
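
        For example (a sketch), a cgroup can be frozen and the
        completed transition confirmed as follows; the "frozen" field
        may still read 0 immediately after the write since the
        transition can take some time::

          # echo 1 > cgroup.freeze
          # grep frozen cgroup.events
          frozen 1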

  cgroup.kill
        A write-only single value file which exists in non-root
        cgroups. The only allowed value is "1".

        Writing "1" to the file causes the cgroup and all descendant
        cgroups to be killed. This means that all processes located in
        the affected cgroup tree will be killed via SIGKILL.

        Killing a cgroup tree will deal with concurrent forks
        appropriately and is protected against migrations.

        In a threaded cgroup, writing this file fails with EOPNOTSUPP
        as killing cgroups is a process directed operation, i.e. it
        affects the whole thread-group.

  cgroup.pressure
        A read-write single value file whose allowed values are "0"
        and "1". The default is "1".

        Writing "0" to the file will disable the cgroup PSI
        accounting. Writing "1" to the file will re-enable the cgroup
        PSI accounting.

        This control attribute is not hierarchical, so disabling or
        enabling PSI accounting in a cgroup does not affect PSI
        accounting in descendants and does not require enablement to
        be passed down via ancestors from the root.

        The reason this control attribute exists is that PSI accounts
        stalls for each cgroup separately and aggregates them at each
        level of the hierarchy. This may cause non-negligible overhead
        for some workloads at deep levels of the hierarchy, in which
        case this control attribute can be used to disable PSI
        accounting in the non-leaf cgroups.

  irq.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for IRQ/SOFTIRQ. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

Controllers
===========

.. _cgroup-v2-cpu:

CPU
---

The "cpu" controller regulates the distribution of CPU cycles. This
controller implements weight and absolute bandwidth limit models for
normal scheduling policy and an absolute bandwidth allocation model
for realtime scheduling policy.

In all the above models, cycles distribution is defined only on a
temporal basis and it does not account for the frequency at which
tasks are executed. The (optional) utilization clamping support allows
hinting the schedutil cpufreq governor about the minimum desired
frequency which should always be provided by a CPU, as well as the
maximum desired frequency, which should not be exceeded by a CPU.

WARNING: cgroup2 cpu controller doesn't yet support the (bandwidth)
control of realtime processes. For a kernel built with the
CONFIG_RT_GROUP_SCHED option enabled for group scheduling of realtime
processes, the cpu controller can only be enabled when all RT
processes are in the root cgroup. Be aware that system management
software may already have placed RT processes into non-root cgroups
during the system boot process, and these processes may need to be
moved to the root cgroup before the cpu controller can be enabled with
a CONFIG_RT_GROUP_SCHED enabled kernel.

With CONFIG_RT_GROUP_SCHED disabled, this limitation does not apply
and some of the interface files either affect realtime processes or
account for them. See the following section for details. Only the cpu
controller is affected by CONFIG_RT_GROUP_SCHED. Other controllers can
be used for the resource control of realtime processes irrespective of
CONFIG_RT_GROUP_SCHED.


CPU Interface Files
~~~~~~~~~~~~~~~~~~~

The interaction of a process with the cpu controller depends on its
scheduling policy and the underlying scheduler. From the point of view
of the cpu controller, processes can be categorized as follows:

* Processes under the fair-class scheduler
* Processes under a BPF scheduler with the ``cgroup_set_weight``
  callback
* Everything else: ``SCHED_{FIFO,RR,DEADLINE}`` and processes under a
  BPF scheduler without the ``cgroup_set_weight`` callback

For details on when a process is under the fair-class scheduler or a
BPF scheduler, check out
:ref:`Documentation/scheduler/sched-ext.rst <sched-ext>`.

For each of the following interface files, the above categories
will be referred to. All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats, which account for
        all the processes in the cgroup:

        - usage_usec
        - user_usec
        - system_usec

        and the following five when the controller is enabled, which
        account for only the processes under the fair-class scheduler:

        - nr_periods
        - nr_throttled
        - throttled_usec
        - nr_bursts
        - burst_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups. The default is "100".

        For non-idle groups (cpu.idle = 0), the weight is in the
        range [1, 10000].

        If the cgroup has been configured to be SCHED_IDLE
        (cpu.idle = 1), then the weight will show as 0.

        This file affects only processes under the fair-class
        scheduler and a BPF scheduler with the ``cgroup_set_weight``
        callback depending on what the callback actually does.

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2). Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

        This file affects only processes under the fair-class
        scheduler and a BPF scheduler with the ``cgroup_set_weight``
        callback depending on what the callback actually does.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit. It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration. "max" for $MAX indicates no limit. If only
        one number is written, $MAX is updated.

        This file affects only processes under the fair-class
        scheduler.
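
        For example (a sketch), the following limits the cgroup to
        half a CPU by allowing 50ms of runtime in every 100ms period::

          # echo "50000 100000" > cpu.max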
1208
1209 cpu.max.burst
1210 A read-write single value file which exists on non-root
1211 cgroups. The default is "0".
1212
1213 The burst in the range [0, $MAX].
1214
1215 This file affects only processes under the fair-class scheduler.
1216
1217 cpu.pressure
1218 A read-write nested-keyed file.
1219
1220 Shows pressure stall information for CPU. See
1221 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1222
1223 This file accounts for all the processes in the cgroup.
1224
1225 cpu.uclamp.min
1226 A read-write single value file which exists on non-root cgroups.
1227 The default is "0", i.e. no utilization boosting.
1228
1229 The requested minimum utilization (protection) as a percentage
1230 rational number, e.g. 12.34 for 12.34%.
1231
1232 This interface allows reading and setting minimum utilization clamp
1233 values similar to the sched_setattr(2). This minimum utilization
1234 value is used to clamp the task specific minimum utilization clamp,
1235 including those of realtime processes.
1236
1237 The requested minimum utilization (protection) is always capped by
1238 the current value for the maximum utilization (limit), i.e.
1239 `cpu.uclamp.max`.
1240
1241 This file affects all the processes in the cgroup.
1242
1243 cpu.uclamp.max
1244 A read-write single value file which exists on non-root cgroups.
1245 The default is "max". i.e. no utilization capping
1246
1247 The requested maximum utilization (limit) as a percentage rational
1248 number, e.g. 98.76 for 98.76%.
1249
1250 This interface allows reading and setting maximum utilization clamp
1251 values similar to the sched_setattr(2). This maximum utilization
1252 value is used to clamp the task specific maximum utilization clamp,
1253 including those of realtime processes.
1254
1255 This file affects all the processes in the cgroup.
1256
1257 cpu.idle
1258 A read-write single value file which exists on non-root cgroups.
1259 The default is 0.
1260
1261 This is the cgroup analog of the per-task SCHED_IDLE sched policy.
1262 Setting this value to a 1 will make the scheduling policy of the
1263 cgroup SCHED_IDLE. The threads inside the cgroup will retain their
1264 own relative priorities, but the cgroup itself will be treated as
1265 very low priority relative to its peers.
1266
1267 This file affects only processes under the fair-class scheduler.
1268
1269Memory
1270------
1271
1272The "memory" controller regulates distribution of memory. Memory is
1273stateful and implements both limit and protection models. Due to the
1274intertwining between memory usage and reclaim pressure and the
1275stateful nature of memory, the distribution model is relatively
1276complex.
1277
1278While not completely water-tight, all major memory usages by a given
1279cgroup are tracked so that the total memory consumption can be
1280accounted and controlled to a reasonable extent. Currently, the
1281following types of memory usages are tracked.
1282
1283- Userland memory - page cache and anonymous memory.
1284
1285- Kernel data structures such as dentries and inodes.
1286
1287- TCP socket buffers.
1288
1289The above list may expand in the future for better coverage.
1290
1291
1292Memory Interface Files
1293~~~~~~~~~~~~~~~~~~~~~~
1294
1295All memory amounts are in bytes. If a value which is not aligned to
1296PAGE_SIZE is written, the value may be rounded up to the closest
1297PAGE_SIZE multiple when read back.
1298
1299 memory.current
1300 A read-only single value file which exists on non-root
1301 cgroups.
1302
1303 The total amount of memory currently being used by the cgroup
1304 and its descendants.
1305
1306 memory.min
1307 A read-write single value file which exists on non-root
1308 cgroups. The default is "0".
1309
1310 Hard memory protection. If the memory usage of a cgroup
1311 is within its effective min boundary, the cgroup's memory
1312 won't be reclaimed under any conditions. If there is no
1313 unprotected reclaimable memory available, OOM killer
1314 is invoked. Above the effective min boundary (or
1315 effective low boundary if it is higher), pages are reclaimed
1316 proportionally to the overage, reducing reclaim pressure for
1317 smaller overages.
1318
1319 Effective min boundary is limited by memory.min values of
1320 all ancestor cgroups. If there is memory.min overcommitment
1321 (child cgroup or cgroups are requiring more protected memory
1322 than parent will allow), then each child cgroup will get
1323 the part of parent's protection proportional to its
1324 actual memory usage below memory.min.
1325
1326 Putting more memory than generally available under this
1327 protection is discouraged and may lead to constant OOMs.
1328
1329 If a memory cgroup is not populated with processes,
1330 its memory.min is ignored.
1331
1332 memory.low
1333 A read-write single value file which exists on non-root
1334 cgroups. The default is "0".
1335
1336 Best-effort memory protection. If the memory usage of a
1337 cgroup is within its effective low boundary, the cgroup's
1338 memory won't be reclaimed unless there is no reclaimable
1339 memory available in unprotected cgroups.
1340 Above the effective low boundary (or
1341 effective min boundary if it is higher), pages are reclaimed
1342 proportionally to the overage, reducing reclaim pressure for
1343 smaller overages.
1344
1345 Effective low boundary is limited by memory.low values of
1346 all ancestor cgroups. If there is memory.low overcommitment
1347 (child cgroup or cgroups are requiring more protected memory
1348 than parent will allow), then each child cgroup will get
1349 the part of parent's protection proportional to its
1350 actual memory usage below memory.low.
1351
1352 Putting more memory than generally available under this
1353 protection is discouraged.
1354
1355 memory.high
1356 A read-write single value file which exists on non-root
1357 cgroups. The default is "max".
1358
1359 Memory usage throttle limit. If a cgroup's usage goes
1360 over the high boundary, the processes of the cgroup are
1361 throttled and put under heavy reclaim pressure.
1362
1363 Going over the high limit never invokes the OOM killer and
1364 under extreme conditions the limit may be breached. The high
1365 limit should be used in scenarios where an external process
1366 monitors the limited cgroup to alleviate heavy reclaim
1367 pressure.
1368
1369 If memory.high is opened with O_NONBLOCK then the synchronous
1370 reclaim is bypassed. This is useful for admin processes that
1371 need to dynamically adjust the job's memory limits without
1372 expending their own CPU resources on memory reclamation. The
1373 job will trigger the reclaim and/or get throttled on its
1374 next charge request.
1375
1376 Please note that with O_NONBLOCK, there is a chance that the
1377 target memory cgroup may take indefinite amount of time to
1378 reduce usage below the limit due to delayed charge request or
1379 busy-hitting its memory to slow down reclaim.
1380
1381 memory.max
1382 A read-write single value file which exists on non-root
1383 cgroups. The default is "max".
1384
1385 Memory usage hard limit. This is the main mechanism to limit
1386 memory usage of a cgroup. If a cgroup's memory usage reaches
1387 this limit and can't be reduced, the OOM killer is invoked in
1388 the cgroup. Under certain circumstances, the usage may go
1389 over the limit temporarily.
1390
1391 In default configuration regular 0-order allocations always
1392 succeed unless OOM killer chooses current task as a victim.
1393
1394 Some kinds of allocations don't invoke the OOM killer.
1395 Caller could retry them differently, return into userspace
1396 as -ENOMEM or silently ignore in cases like disk readahead.
1397
1398 If memory.max is opened with O_NONBLOCK, then the synchronous
1399 reclaim and oom-kill are bypassed. This is useful for admin
1400 processes that need to dynamically adjust the job's memory limits
1401 without expending their own CPU resources on memory reclamation.
1402 The job will trigger the reclaim and/or oom-kill on its next
1403 charge request.
1404
1405 Please note that with O_NONBLOCK, there is a chance that the
1406 target memory cgroup may take an indefinite amount of time to
1407 reduce usage below the limit due to a delayed charge request or
1408 busy-hitting its memory to slow down reclaim.
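
  For example, a hard limit (the 16G value is illustrative) can be set
  with a plain write, and subsequent OOM activity shows up in
  memory.events::

    echo "16G" > memory.max
    grep oom memory.events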
1409
1410 memory.reclaim
1411 A write-only nested-keyed file which exists for all cgroups.
1412
1413 This is a simple interface to trigger memory reclaim in the
1414 target cgroup.
1415
1416 Example::
1417
1418 echo "1G" > memory.reclaim
1419
1420 Please note that the kernel can over- or under-reclaim from
1421 the target cgroup. If fewer bytes are reclaimed than the
1422 specified amount, -EAGAIN is returned.
1423
1424 Please note that the proactive reclaim (triggered by this
1425 interface) is not meant to indicate memory pressure on the
1426 memory cgroup. Therefore, socket memory balancing, which is
1427 normally triggered by memory reclaim, is not exercised in this case.
1428 This means that the networking layer will not adapt based on
1429 reclaim induced by memory.reclaim.
1430
1431 The following nested keys are defined.
1432
1433 ========== ================================
1434 swappiness Swappiness value to reclaim with
1435 ========== ================================
1436
1437 Specifying a swappiness value instructs the kernel to perform
1438 the reclaim with that swappiness value. Note that this has the
1439 same semantics as vm.swappiness applied to memcg reclaim with
1440 all the existing limitations and potential future extensions.
1441
1442 The valid range for swappiness is [0-200] or "max"; setting
1443 swappiness=max exclusively reclaims anonymous memory.
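
  For example, proactive reclaim of 512M that biases reclaim away from
  anonymous memory (the size is illustrative) could be triggered with::

    echo "512M swappiness=0" > memory.reclaim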
1444
1445 memory.peak
1446 A read-write single value file which exists on non-root cgroups.
1447
1448 The max memory usage recorded for the cgroup and its descendants since
1449 either the creation of the cgroup or the most recent reset for that FD.
1450
1451 A write of any non-empty string to this file resets it to the
1452 current memory usage for subsequent reads through the same
1453 file descriptor.
1454
1455 memory.oom.group
1456 A read-write single value file which exists on non-root
1457 cgroups. The default value is "0".
1458
1459 Determines whether the cgroup should be treated as
1460 an indivisible workload by the OOM killer. If set,
1461 all tasks belonging to the cgroup or to its descendants
1462 (if the memory cgroup is not a leaf cgroup) are killed
1463 together or not at all. This can be used to avoid
1464 partial kills to guarantee workload integrity.
1465
1466 Tasks with the OOM protection (oom_score_adj set to -1000)
1467 are treated as an exception and are never killed.
1468
1469 If the OOM killer is invoked in a cgroup, it's not going
1470 to kill any tasks outside of this cgroup, regardless of
1471 the memory.oom.group values of ancestor cgroups.
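
  For example, to make the OOM killer treat the whole cgroup as a
  single indivisible unit::

    echo 1 > memory.oom.group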
1472
1473 memory.events
1474 A read-only flat-keyed file which exists on non-root cgroups.
1475 The following entries are defined. Unless specified
1476 otherwise, a value change in this file generates a file
1477 modified event.
1478
1479 Note that all fields in this file are hierarchical and the
1480 file modified event can be generated due to an event down the
1481 hierarchy. For the local events at the cgroup level see
1482 memory.events.local.
1483
1484 low
1485 The number of times the cgroup is reclaimed due to
1486 high memory pressure even though its usage is under
1487 the low boundary. This usually indicates that the low
1488 boundary is over-committed.
1489
1490 high
1491 The number of times processes of the cgroup are
1492 throttled and routed to perform direct memory reclaim
1493 because the high memory boundary was exceeded. For a
1494 cgroup whose memory usage is capped by the high limit
1495 rather than global memory pressure, this event's
1496 occurrences are expected.
1497
1498 max
1499 The number of times the cgroup's memory usage was
1500 about to go over the max boundary. If direct reclaim
1501 fails to bring it down, the cgroup goes to OOM state.
1502
1503 oom
1504 The number of times the cgroup's memory usage
1505 reached the limit and allocation was about to fail.
1506
1507 This event is not raised if the OOM killer is not
1508 considered an option, e.g. for failed high-order
1509 allocations or if the caller asked not to retry.
1510
1511 oom_kill
1512 The number of processes belonging to this cgroup
1513 killed by any kind of OOM killer.
1514
1515 oom_group_kill
1516 The number of times a group OOM has occurred.
1517
1518 memory.events.local
1519 Similar to memory.events but the fields in the file are local
1520 to the cgroup i.e. not hierarchical. The file modified event
1521 generated on this file reflects only the local events.
1522
1523 memory.stat
1524 A read-only flat-keyed file which exists on non-root cgroups.
1525
1526 This breaks down the cgroup's memory footprint into different
1527 types of memory, type-specific details, and other information
1528 on the state and past events of the memory management system.
1529
1530 All memory amounts are in bytes.
1531
1532 The entries are ordered to be human readable, and new entries
1533 can show up in the middle. Don't rely on items remaining in a
1534 fixed position; use the keys to look up values (see the example after this list).
1535
1536 Entries that have no per-node counter are marked with the
1537 'npn' (non-per-node) tag to indicate that they will not
1538 show up in memory.numa_stat.
1539
1540 anon
1541 Amount of memory used in anonymous mappings such as
1542 brk(), sbrk(), and mmap(MAP_ANONYMOUS). Note that
1543 some kernel configurations might account complete larger
1544 allocations (e.g., THP) if only some, but not all the
1545 memory of such an allocation is mapped anymore.
1546
1547 file
1548 Amount of memory used to cache filesystem data,
1549 including tmpfs and shared memory.
1550
1551 kernel (npn)
1552 Amount of total kernel memory, including
1553 kernel_stack, pagetables, percpu, vmalloc and slab,
1554 in addition to other kernel memory use cases.
1555
1556 kernel_stack
1557 Amount of memory allocated to kernel stacks.
1558
1559 pagetables
1560 Amount of memory allocated for page tables.
1561
1562 sec_pagetables
1563 Amount of memory allocated for secondary page tables,
1564 this currently includes KVM mmu allocations on x86
1565 and arm64 and IOMMU page tables.
1566
1567 percpu (npn)
1568 Amount of memory used for storing per-cpu kernel
1569 data structures.
1570
1571 sock (npn)
1572 Amount of memory used in network transmission buffers
1573
1574 vmalloc (npn)
1575 Amount of memory used for vmap backed memory.
1576
1577 shmem
1578 Amount of cached filesystem data that is swap-backed,
1579 such as tmpfs, shm segments, shared anonymous mmap()s
1580
1581 zswap
1582 Amount of memory consumed by the zswap compression backend.
1583
1584 zswapped
1585 Amount of application memory swapped out to zswap.
1586
1587 file_mapped
1588 Amount of cached filesystem data mapped with mmap(). Note
1589 that some kernel configurations might account complete
1590 larger allocations (e.g., THP) if only some, but
1591 not all, of the memory of such an allocation is mapped.
1592
1593 file_dirty
1594 Amount of cached filesystem data that was modified but
1595 not yet written back to disk
1596
1597 file_writeback
1598 Amount of cached filesystem data that was modified and
1599 is currently being written back to disk
1600
1601 swapcached
1602 Amount of swap cached in memory. The swapcache is accounted
1603 against both memory and swap usage.
1604
1605 anon_thp
1606 Amount of memory used in anonymous mappings backed by
1607 transparent hugepages
1608
1609 file_thp
1610 Amount of cached filesystem data backed by transparent
1611 hugepages
1612
1613 shmem_thp
1614 Amount of shm, tmpfs, shared anonymous mmap()s backed by
1615 transparent hugepages
1616
1617 inactive_anon, active_anon, inactive_file, active_file, unevictable
1618 Amount of memory, swap-backed and filesystem-backed,
1619 on the internal memory management lists used by the
1620 page reclaim algorithm.
1621
1622 As these represent internal list state (eg. shmem pages are on anon
1623 memory management lists), inactive_foo + active_foo may not be equal to
1624 the value for the foo counter, since the foo counter is type-based, not
1625 list-based.
1626
1627 slab_reclaimable
1628 Part of "slab" that might be reclaimed, such as
1629 dentries and inodes.
1630
1631 slab_unreclaimable
1632 Part of "slab" that cannot be reclaimed on memory
1633 pressure.
1634
1635 slab (npn)
1636 Amount of memory used for storing in-kernel data
1637 structures.
1638
1639 workingset_refault_anon
1640 Number of refaults of previously evicted anonymous pages.
1641
1642 workingset_refault_file
1643 Number of refaults of previously evicted file pages.
1644
1645 workingset_activate_anon
1646 Number of refaulted anonymous pages that were immediately
1647 activated.
1648
1649 workingset_activate_file
1650 Number of refaulted file pages that were immediately activated.
1651
1652 workingset_restore_anon
1653 Number of restored anonymous pages which have been detected as
1654 an active workingset before they got reclaimed.
1655
1656 workingset_restore_file
1657 Number of restored file pages which have been detected as an
1658 active workingset before they got reclaimed.
1659
1660 workingset_nodereclaim
1661 Number of times a shadow node has been reclaimed
1662
1663 pswpin (npn)
1664 Number of pages swapped into memory
1665
1666 pswpout (npn)
1667 Number of pages swapped out of memory
1668
1669 pgscan (npn)
1670 Amount of scanned pages (in an inactive LRU list)
1671
1672 pgsteal (npn)
1673 Amount of reclaimed pages
1674
1675 pgscan_kswapd (npn)
1676 Amount of scanned pages by kswapd (in an inactive LRU list)
1677
1678 pgscan_direct (npn)
1679 Amount of scanned pages directly (in an inactive LRU list)
1680
1681 pgscan_khugepaged (npn)
1682 Amount of scanned pages by khugepaged (in an inactive LRU list)
1683
1684 pgscan_proactive (npn)
1685 Amount of scanned pages proactively (in an inactive LRU list)
1686
1687 pgsteal_kswapd (npn)
1688 Amount of reclaimed pages by kswapd
1689
1690 pgsteal_direct (npn)
1691 Amount of reclaimed pages directly
1692
1693 pgsteal_khugepaged (npn)
1694 Amount of reclaimed pages by khugepaged
1695
1696 pgsteal_proactive (npn)
1697 Amount of reclaimed pages proactively
1698
1699 pgfault (npn)
1700 Total number of page faults incurred
1701
1702 pgmajfault (npn)
1703 Number of major page faults incurred
1704
1705 pgrefill (npn)
1706 Amount of scanned pages (in an active LRU list)
1707
1708 pgactivate (npn)
1709 Amount of pages moved to the active LRU list
1710
1711 pgdeactivate (npn)
1712 Amount of pages moved to the inactive LRU list
1713
1714 pglazyfree (npn)
1715 Amount of pages postponed to be freed under memory pressure
1716
1717 pglazyfreed (npn)
1718 Amount of reclaimed lazyfree pages
1719
1720 swpin_zero
1721 Number of pages swapped into memory and filled with zero, where I/O
1722 was optimized out because the page content was detected to be zero
1723 during swapout.
1724
1725 swpout_zero
1726 Number of zero-filled pages swapped out with I/O skipped due to the
1727 content being detected as zero.
1728
1729 zswpin
1730 Number of pages moved in to memory from zswap.
1731
1732 zswpout
1733 Number of pages moved out of memory to zswap.
1734
1735 zswpwb
1736 Number of pages written from zswap to swap.
1737
1738 thp_fault_alloc (npn)
1739 Number of transparent hugepages which were allocated to satisfy
1740 a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
1741 is not set.
1742
1743 thp_collapse_alloc (npn)
1744 Number of transparent hugepages which were allocated to allow
1745 collapsing an existing range of pages. This counter is not
1746 present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
1747
1748 thp_swpout (npn)
1749 Number of transparent hugepages which were swapped out in one
1750 piece without splitting.
1751
1752 thp_swpout_fallback (npn)
1753 Number of transparent hugepages which were split before swapout,
1754 usually because contiguous swap space could not be allocated
1755 for the huge page.
1756
1757 numa_pages_migrated (npn)
1758 Number of pages migrated by NUMA balancing.
1759
1760 numa_pte_updates (npn)
1761 Number of pages whose page table entries are modified by
1762 NUMA balancing to produce NUMA hinting faults on access.
1763
1764 numa_hint_faults (npn)
1765 Number of NUMA hinting faults.
1766
1767 pgdemote_kswapd
1768 Number of pages demoted by kswapd.
1769
1770 pgdemote_direct
1771 Number of pages demoted directly.
1772
1773 pgdemote_khugepaged
1774 Number of pages demoted by khugepaged.
1775
1776 pgdemote_proactive
1777 Number of pages demoted proactively.
1778
1779 hugetlb
1780 Amount of memory used by hugetlb pages. This metric only shows
1781 up if hugetlb usage is accounted for in memory.current (i.e.
1782 cgroup is mounted with the memory_hugetlb_accounting option).
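
  As a sketch of key-based lookup (awk is just one way to do it), a
  single counter can be extracted as follows::

    awk '$1 == "anon" {print $2}' memory.stat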
1783
1784 memory.numa_stat
1785 A read-only nested-keyed file which exists on non-root cgroups.
1786
1787 This breaks down the cgroup's memory footprint into different
1788 types of memory, type-specific details, and other information
1789 per node on the state of the memory management system.
1790
1791 This is useful for providing visibility into the NUMA locality
1792 information within a memcg since the pages are allowed to be
1793 allocated from any physical node. One use case is evaluating
1794 application performance by combining this information with the
1795 application's CPU allocation.
1796
1797 All memory amounts are in bytes.
1798
1799 The output format of memory.numa_stat is::
1800
1801 type N0=<bytes in node 0> N1=<bytes in node 1> ...
1802
1803 The entries are ordered to be human readable, and new entries
1804 can show up in the middle. Don't rely on items remaining in a
1805 fixed position; use the keys to look up specific values!
1806
1807 For the meaning of each entry, refer to memory.stat.
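
  For example, the per-node breakdown of a single counter can be read
  with (the byte values shown are illustrative)::

    $ grep -w anon memory.numa_stat
    anon N0=1234184448 N1=835608576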
1808
1809 memory.swap.current
1810 A read-only single value file which exists on non-root
1811 cgroups.
1812
1813 The total amount of swap currently being used by the cgroup
1814 and its descendants.
1815
1816 memory.swap.high
1817 A read-write single value file which exists on non-root
1818 cgroups. The default is "max".
1819
1820 Swap usage throttle limit. If a cgroup's swap usage exceeds
1821 this limit, all its further allocations will be throttled to
1822 allow userspace to implement custom out-of-memory procedures.
1823
1824 This limit marks a point of no return for the cgroup. It is NOT
1825 designed to manage the amount of swapping a workload does
1826 during regular operation. Compare to memory.swap.max, which
1827 prohibits swapping past a set amount, but lets the cgroup
1828 continue unimpeded as long as other memory can be reclaimed.
1829
1830 Healthy workloads are not expected to reach this limit.
1831
1832 memory.swap.peak
1833 A read-write single value file which exists on non-root cgroups.
1834
1835 The max swap usage recorded for the cgroup and its descendants since
1836 the creation of the cgroup or the most recent reset for that FD.
1837
1838 A write of any non-empty string to this file resets it to the
1839 current swap usage for subsequent reads through the same
1840 file descriptor.
1841
1842 memory.swap.max
1843 A read-write single value file which exists on non-root
1844 cgroups. The default is "max".
1845
1846 Swap usage hard limit. If a cgroup's swap usage reaches this
1847 limit, anonymous memory of the cgroup will not be swapped out.
1848
1849 memory.swap.events
1850 A read-only flat-keyed file which exists on non-root cgroups.
1851 The following entries are defined. Unless specified
1852 otherwise, a value change in this file generates a file
1853 modified event.
1854
1855 high
1856 The number of times the cgroup's swap usage was over
1857 the high threshold.
1858
1859 max
1860 The number of times the cgroup's swap usage was about
1861 to go over the max boundary and swap allocation
1862 failed.
1863
1864 fail
1865 The number of times swap allocation failed either
1866 because of running out of swap system-wide or max
1867 limit.
1868
1869 When the limit is reduced under the current usage, the existing
1870 swap entries are reclaimed gradually and the swap usage may stay
1871 higher than the limit for an extended period of time. This
1872 reduces the impact on the workload and memory management.
1873
1874 memory.zswap.current
1875 A read-only single value file which exists on non-root
1876 cgroups.
1877
1878 The total amount of memory consumed by the zswap compression
1879 backend.
1880
1881 memory.zswap.max
1882 A read-write single value file which exists on non-root
1883 cgroups. The default is "max".
1884
1885 Zswap usage hard limit. If a cgroup's zswap pool reaches this
1886 limit, it will refuse to take any more stores before existing
1887 entries fault back in or are written out to disk.
1888
1889 memory.zswap.writeback
1890 A read-write single value file. The default value is "1".
1891 Note that this setting is hierarchical, i.e. writeback is
1892 implicitly disabled for child cgroups if it is disabled anywhere
1893 in the upper hierarchy.
1894
1895 When this is set to 0, all swapping attempts to swapping devices
1896 are disabled. This includes both zswap writebacks and swapping due
1897 to zswap store failures. If the zswap store failures are recurring
1898 (e.g. if the pages are incompressible), users can observe
1899 reclaim inefficiency after disabling writeback (because the same
1900 pages might be rejected again and again).
1901
1902 Note that this is subtly different from setting memory.swap.max to
1903 0, as it still allows for pages to be written to the zswap pool.
1904 This setting has no effect if zswap is disabled, and swapping
1905 is allowed unless memory.swap.max is set to 0.
1906
1907 memory.pressure
1908 A read-only nested-keyed file.
1909
1910 Shows pressure stall information for memory. See
1911 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1912
1913
1914Usage Guidelines
1915~~~~~~~~~~~~~~~~
1916
1917"memory.high" is the main mechanism to control memory usage.
1918 Over-committing on the high limit (sum of high limits > available
1919 memory) and letting global memory pressure distribute memory
1920 according to usage is a viable strategy.
1921
1922Because breach of the high limit doesn't trigger the OOM killer but
1923throttles the offending cgroup, a management agent has ample
1924opportunities to monitor and take appropriate actions such as granting
1925more memory or terminating the workload.
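
A minimal sketch of such an agent follows; the threshold, interval and
adjustment policy are placeholders, and a real agent would need to be
considerably more careful::

  # run from the workload's cgroup directory: raise memory.high by 10%
  # whenever the cgroup spends more than 20% of recent wall time
  # stalled on memory (PSI "some" avg10, see memory.pressure)
  while sleep 10; do
      stall=$(awk '/^some/ { sub("avg10=", "", $2); print $2 }' memory.pressure)
      if [ "${stall%.*}" -ge 20 ]; then
          cur=$(cat memory.high)
          [ "$cur" != "max" ] && echo $((cur + cur / 10)) > memory.high
      fi
  done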
1926
1927Determining whether a cgroup has enough memory is not trivial as
1928memory usage doesn't indicate whether the workload can benefit from
1929more memory. For example, a workload which writes data received from
1930 the network to a file can use all available memory but can also
1931 perform just as well with a small amount of memory. A measure of
1932 memory pressure - how much the workload is being impacted due to
1933 lack of memory - is necessary to determine whether a workload needs
1934 more memory; the pressure stall information exported through
1935 "memory.pressure" (see Documentation/accounting/psi.rst) serves this purpose.
1936
1937
1938Memory Ownership
1939~~~~~~~~~~~~~~~~
1940
1941A memory area is charged to the cgroup which instantiated it and stays
1942charged to the cgroup until the area is released. Migrating a process
1943to a different cgroup doesn't move the memory usages that it
1944instantiated while in the previous cgroup to the new cgroup.
1945
1946A memory area may be used by processes belonging to different cgroups.
1947 Which cgroup the area is charged to is non-deterministic; however,
1948over time, the memory area is likely to end up in a cgroup which has
1949enough memory allowance to avoid high reclaim pressure.
1950
1951If a cgroup sweeps a considerable amount of memory which is expected
1952to be accessed repeatedly by other cgroups, it may make sense to use
1953POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1954belonging to the affected files to ensure correct memory ownership.
1955
1956
1957IO
1958--
1959
1960The "io" controller regulates the distribution of IO resources. This
1961controller implements both weight based and absolute bandwidth or IOPS
1962limit distribution; however, weight based distribution is available
1963only if cfq-iosched is in use and neither scheme is available for
1964blk-mq devices.
1965
1966
1967IO Interface Files
1968~~~~~~~~~~~~~~~~~~
1969
1970 io.stat
1971 A read-only nested-keyed file.
1972
1973 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1974 The following nested keys are defined.
1975
1976 ====== =====================
1977 rbytes Bytes read
1978 wbytes Bytes written
1979 rios Number of read IOs
1980 wios Number of write IOs
1981 dbytes Bytes discarded
1982 dios Number of discard IOs
1983 ====== =====================
1984
1985 An example read output follows::
1986
1987 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1988 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
1989
1990 io.cost.qos
1991 A read-write nested-keyed file which exists only on the root
1992 cgroup.
1993
1994 This file configures the Quality of Service of the IO cost
1995 model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1996 currently implements "io.weight" proportional control. Lines
1997 are keyed by $MAJ:$MIN device numbers and not ordered. The
1998 line for a given device is populated on the first write for
1999 the device on "io.cost.qos" or "io.cost.model". The following
2000 nested keys are defined.
2001
2002 ====== =====================================
2003 enable Weight-based control enable
2004 ctrl "auto" or "user"
2005 rpct Read latency percentile [0, 100]
2006 rlat Read latency threshold
2007 wpct Write latency percentile [0, 100]
2008 wlat Write latency threshold
2009 min Minimum scaling percentage [1, 10000]
2010 max Maximum scaling percentage [1, 10000]
2011 ====== =====================================
2012
2013 The controller is disabled by default and can be enabled by
2014 setting "enable" to 1. "rpct" and "wpct" parameters default
2015 to zero and the controller uses internal device saturation
2016 state to adjust the overall IO rate between "min" and "max".
2017
2018 When a better control quality is needed, latency QoS
2019 parameters can be configured. For example::
2020
2021 8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.0
2022
2023 shows that on sdb, the controller is enabled, will consider
2024 the device saturated if the 95th percentile of read completion
2025 latencies is above 75ms or that of writes above 150ms, and will
2026 adjust the overall IO issue rate between 50% and 150% accordingly.
2027
2028 The lower the saturation point, the better the latency QoS at
2029 the cost of aggregate bandwidth. The narrower the allowed
2030 adjustment range between "min" and "max", the more conformant
2031 to the cost model the IO behavior. Note that the IO issue
2032 base rate may be far off from 100% and setting "min" and "max"
2033 blindly can lead to a significant loss of device capacity or
2034 control quality. "min" and "max" are useful for regulating
2035 devices which show wide temporary behavior changes - e.g. a
2036 ssd which accepts writes at the line speed for a while and
2037 then completely stalls for multiple seconds.
2038
2039 When "ctrl" is "auto", the parameters are controlled by the
2040 kernel and may change automatically. Setting "ctrl" to "user"
2041 or setting any of the percentile and latency parameters puts
2042 it into "user" mode and disables the automatic changes. The
2043 automatic mode can be restored by setting "ctrl" to "auto".
2044
2045 io.cost.model
2046 A read-write nested-keyed file which exists only on the root
2047 cgroup.
2048
2049 This file configures the cost model of the IO cost model based
2050 controller (CONFIG_BLK_CGROUP_IOCOST) which currently
2051 implements "io.weight" proportional control. Lines are keyed
2052 by $MAJ:$MIN device numbers and not ordered. The line for a
2053 given device is populated on the first write for the device on
2054 "io.cost.qos" or "io.cost.model". The following nested keys
2055 are defined.
2056
2057 ===== ================================
2058 ctrl "auto" or "user"
2059 model The cost model in use - "linear"
2060 ===== ================================
2061
2062 When "ctrl" is "auto", the kernel may change all parameters
2063 dynamically. When "ctrl" is set to "user" or any other
2064 parameters are written to, "ctrl" becomes "user" and the
2065 automatic changes are disabled.
2066
2067 When "model" is "linear", the following model parameters are
2068 defined.
2069
2070 ============= ========================================
2071 [r|w]bps The maximum sequential IO throughput
2072 [r|w]seqiops The maximum 4k sequential IOs per second
2073 [r|w]randiops The maximum 4k random IOs per second
2074 ============= ========================================
2075
2076 From the above, the builtin linear model determines the base
2077 costs of a sequential and random IO and the cost coefficient
2078 for the IO size. While simple, this model can cover most
2079 common device classes acceptably.
2080
2081 The IO cost model isn't expected to be accurate in absolute
2082 sense and is scaled to the device behavior dynamically.
2083
2084 If needed, tools/cgroup/iocost_coef_gen.py can be used to
2085 generate device-specific coefficients.
2086
2087 io.weight
2088 A read-write flat-keyed file which exists on non-root cgroups.
2089 The default is "default 100".
2090
2091 The first line is the default weight applied to devices
2092 without specific override. The rest are overrides keyed by
2093 $MAJ:$MIN device numbers and not ordered. The weights are in
2094 the range [1, 10000] and specify the relative amount of IO time
2095 the cgroup can use in relation to its siblings.
2096
2097 The default weight can be updated by writing either "default
2098 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
2099 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
2100
2101 An example read output follows::
2102
2103 default 100
2104 8:16 200
2105 8:0 50
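
	For example, the default weight and a per-device override can be
	updated with (device numbers are illustrative)::

	  echo 50 > io.weight
	  echo "8:16 200" > io.weight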
2106
2107 io.max
2108 A read-write nested-keyed file which exists on non-root
2109 cgroups.
2110
2111 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
2112 device numbers and not ordered. The following nested keys are
2113 defined.
2114
2115 ===== ==================================
2116 rbps Max read bytes per second
2117 wbps Max write bytes per second
2118 riops Max read IO operations per second
2119 wiops Max write IO operations per second
2120 ===== ==================================
2121
2122 When writing, any number of nested key-value pairs can be
2123 specified in any order. "max" can be specified as the value
2124 to remove a specific limit. If the same key is specified
2125 multiple times, the outcome is undefined.
2126
2127 BPS and IOPS are measured in each IO direction and IOs are
2128 delayed if limit is reached. Temporary bursts are allowed.
2129
2130 Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
2131
2132 echo "8:16 rbps=2097152 wiops=120" > io.max
2133
2134 Reading returns the following::
2135
2136 8:16 rbps=2097152 wbps=max riops=max wiops=120
2137
2138 Write IOPS limit can be removed by writing the following::
2139
2140 echo "8:16 wiops=max" > io.max
2141
2142 Reading now returns the following::
2143
2144 8:16 rbps=2097152 wbps=max riops=max wiops=max
2145
2146 io.pressure
2147 A read-only nested-keyed file.
2148
2149 Shows pressure stall information for IO. See
2150 :ref:`Documentation/accounting/psi.rst <psi>` for details.
2151
2152
2153Writeback
2154~~~~~~~~~
2155
2156Page cache is dirtied through buffered writes and shared mmaps and
2157written asynchronously to the backing filesystem by the writeback
2158mechanism. Writeback sits between the memory and IO domains and
2159regulates the proportion of dirty memory by balancing dirtying and
2160write IOs.
2161
2162The io controller, in conjunction with the memory controller,
2163implements control of page cache writeback IOs. The memory controller
2164 defines the memory domain that the dirty memory ratio is calculated
2165 and maintained for, and the io controller defines the io domain which
2166writes out dirty pages for the memory domain. Both system-wide and
2167per-cgroup dirty memory states are examined and the more restrictive
2168of the two is enforced.
2169
2170cgroup writeback requires explicit support from the underlying
2171filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
2172btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
2173attributed to the root cgroup.
2174
2175There are inherent differences in memory and writeback management
2176 which affect how cgroup ownership is tracked. Memory is tracked per
2177 page, while writeback is tracked per inode. For the purpose of writeback, an
2178inode is assigned to a cgroup and all IO requests to write dirty pages
2179from the inode are attributed to that cgroup.
2180
2181As cgroup ownership for memory is tracked per page, there can be pages
2182which are associated with different cgroups than the one the inode is
2183associated with. These are called foreign pages. The writeback
2184constantly keeps track of foreign pages and, if a particular foreign
2185cgroup becomes the majority over a certain period of time, switches
2186the ownership of the inode to that cgroup.
2187
2188While this model is enough for most use cases where a given inode is
2189mostly dirtied by a single cgroup even when the main writing cgroup
2190changes over time, use cases where multiple cgroups write to a single
2191inode simultaneously are not supported well. In such circumstances, a
2192significant portion of IOs are likely to be attributed incorrectly.
2193 As the memory controller assigns page ownership on first use and
2194doesn't update it until the page is released, even if writeback
2195strictly follows page ownership, multiple cgroups dirtying overlapping
2196areas wouldn't work as expected. It's recommended to avoid such usage
2197patterns.
2198
2199The sysctl knobs which affect writeback behavior are applied to cgroup
2200writeback as follows.
2201
2202 vm.dirty_background_ratio, vm.dirty_ratio
2203 These ratios apply the same to cgroup writeback with the
2204 amount of available memory capped by limits imposed by the
2205 memory controller and system-wide clean memory.
2206
2207 vm.dirty_background_bytes, vm.dirty_bytes
2208 For cgroup writeback, this is calculated into a ratio against
2209 total available memory and applied the same way as
2210 vm.dirty[_background]_ratio.
2211
2212
2213IO Latency
2214~~~~~~~~~~
2215
2216This is a cgroup v2 controller for IO workload protection. You provide a group
2217with a latency target, and if the average latency exceeds that target the
2218controller will throttle any peers that have a lower latency target than the
2219protected workload.
2220
2221The limits are only applied at the peer level in the hierarchy. This means that
2222in the diagram below, only groups A, B, and C will influence each other, and
2223groups D and F will influence each other. Group G will influence nobody::
2224
2225 [root]
2226 / | \
2227 A B C
2228 / \ |
2229 D F G
2230
2231
2232So the ideal way to configure this is to set io.latency in groups A, B, and C.
2233Generally you do not want to set a value lower than the latency your device
2234supports. Experiment to find the value that works best for your workload.
2235Start at higher than the expected latency for your device and watch the
2236avg_lat value in io.stat for your workload group to get an idea of the
2237latency you see during normal operation. Use the avg_lat value as a basis for
2238your real setting, setting at 10-15% higher than the value in io.stat.
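
For example, if avg_lat in io.stat hovers around 65000us for the
workload group (an illustrative number), a starting target roughly 15%
higher could be set with::

  echo "8:16 target=75000" > io.latency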
2239
2240How IO Latency Throttling Works
2241~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2242
2243 io.latency is work conserving: as long as everybody is meeting their latency
2244 target, the controller doesn't do anything. Once a group starts missing its
2245 target, it begins throttling any peer group that has a higher target than itself.
2246This throttling takes 2 forms:
2247
2248 - Queue depth throttling. This is the number of outstanding IOs a group is
2249 allowed to have. We will clamp down relatively quickly, starting at no limit
2250 and going all the way down to 1 IO at a time.
2251
2252- Artificial delay induction. There are certain types of IO that cannot be
2253 throttled without possibly adversely affecting higher priority groups. This
2254 includes swapping and metadata IO. These types of IO are allowed to occur
2255 normally, however they are "charged" to the originating group. If the
2256 originating group is being throttled you will see the use_delay and delay
2257 fields in io.stat increase. The delay value is the number of microseconds
2258 being added to any process that runs in this group. Because this number can
2259 grow quite large if there is a lot of swapping or metadata IO occurring, we
2260 limit the individual delay events to 1 second at a time.
2261
2262Once the victimized group starts meeting its latency target again it will start
2263unthrottling any peer groups that were throttled previously. If the victimized
2264group simply stops doing IO the global counter will unthrottle appropriately.
2265
2266IO Latency Interface Files
2267~~~~~~~~~~~~~~~~~~~~~~~~~~
2268
2269 io.latency
2270 This takes a format similar to that of the other controllers.
2271
2272 "MAJOR:MINOR target=<target time in microseconds>"
2273
2274 io.stat
2275 If the controller is enabled you will see extra stats in io.stat in
2276 addition to the normal ones.
2277
2278 depth
2279 This is the current queue depth for the group.
2280
2281 avg_lat
2282 This is an exponential moving average with a decay rate of 1/exp
2283 bound by the sampling interval. The decay rate interval can be
2284 calculated by multiplying the win value in io.stat by the
2285 corresponding number of samples based on the win value.
2286
2287 win
2288 The sampling window size in milliseconds. This is the minimum
2289 duration of time between evaluation events. Windows only elapse
2290 with IO activity. Idle periods extend the most recent window.
2291
2292IO Priority
2293~~~~~~~~~~~
2294
2295A single attribute controls the behavior of the I/O priority cgroup policy,
2296namely the io.prio.class attribute. The following values are accepted for
2297that attribute:
2298
2299 no-change
2300 Do not modify the I/O priority class.
2301
2302 promote-to-rt
2303 For requests that have a non-RT I/O priority class, change it into RT.
2304 Also change the priority level of these requests to 4. Do not modify
2305 the I/O priority of requests that have priority class RT.
2306
2307 restrict-to-be
2308 For requests that do not have an I/O priority class or that have I/O
2309 priority class RT, change it into BE. Also change the priority level
2310 of these requests to 0. Do not modify the I/O priority class of
2311 requests that have priority class IDLE.
2312
2313 idle
2314 Change the I/O priority class of all requests into IDLE, the lowest
2315 I/O priority class.
2316
2317 none-to-rt
2318 Deprecated. Just an alias for promote-to-rt.
2319
2320The following numerical values are associated with the I/O priority policies:
2321
2322+----------------+---+
2323| no-change | 0 |
2324+----------------+---+
2325| promote-to-rt | 1 |
2326+----------------+---+
2327| restrict-to-be | 2 |
2328+----------------+---+
2329| idle | 3 |
2330+----------------+---+
2331
2332The numerical value that corresponds to each I/O priority class is as follows:
2333
2334+-------------------------------+---+
2335| IOPRIO_CLASS_NONE | 0 |
2336+-------------------------------+---+
2337| IOPRIO_CLASS_RT (real-time) | 1 |
2338+-------------------------------+---+
2339| IOPRIO_CLASS_BE (best effort) | 2 |
2340+-------------------------------+---+
2341| IOPRIO_CLASS_IDLE | 3 |
2342+-------------------------------+---+
2343
2344The algorithm to set the I/O priority class for a request is as follows:
2345
2346- If I/O priority class policy is promote-to-rt, change the request I/O
2347 priority class to IOPRIO_CLASS_RT and change the request I/O priority
2348 level to 4.
2349- If I/O priority class policy is not promote-to-rt, translate the I/O priority
2350 class policy into a number, then change the request I/O priority class
2351 into the maximum of the I/O priority class policy number and the numerical
2352 I/O priority class.
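
As a worked example, applying the restrict-to-be policy (value 2) to
requests of each class yields::

  IOPRIO_CLASS_NONE (0) -> max(2, 0) = 2 (BE)
  IOPRIO_CLASS_RT   (1) -> max(2, 1) = 2 (BE)
  IOPRIO_CLASS_BE   (2) -> max(2, 2) = 2 (BE)
  IOPRIO_CLASS_IDLE (3) -> max(2, 3) = 3 (IDLE, unchanged)

The policy itself is selected by writing to the attribute::

  echo restrict-to-be > io.prio.class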
2353
2354PID
2355---
2356
2357The process number controller is used to allow a cgroup to stop any
2358new tasks from being fork()'d or clone()'d after a specified limit is
2359reached.
2360
2361The number of tasks in a cgroup can be exhausted in ways which other
2362controllers cannot prevent, thus warranting its own controller. For
2363example, a fork bomb is likely to exhaust the number of tasks before
2364hitting memory restrictions.
2365
2366Note that PIDs used in this controller refer to TIDs, process IDs as
2367used by the kernel.
2368
2369
2370PID Interface Files
2371~~~~~~~~~~~~~~~~~~~
2372
2373 pids.max
2374 A read-write single value file which exists on non-root
2375 cgroups. The default is "max".
2376
2377 Hard limit of number of processes.
2378
2379 pids.current
2380 A read-only single value file which exists on non-root cgroups.
2381
2382 The number of processes currently in the cgroup and its
2383 descendants.
2384
2385 pids.peak
2386 A read-only single value file which exists on non-root cgroups.
2387
2388 The maximum value that the number of processes in the cgroup and its
2389 descendants has ever reached.
2390
2391 pids.events
2392 A read-only flat-keyed file which exists on non-root cgroups. Unless
2393 specified otherwise, a value change in this file generates a file
2394 modified event. The following entries are defined.
2395
2396 max
2397 The number of times the cgroup's total number of processes hit the pids.max
2398 limit (see also pids_localevents).
2399
2400 pids.events.local
2401 Similar to pids.events but the fields in the file are local
2402 to the cgroup i.e. not hierarchical. The file modified event
2403 generated on this file reflects only the local events.
2404
2405Organisational operations are not blocked by cgroup policies, so it is
2406possible to have pids.current > pids.max. This can be done by either
2407setting the limit to be smaller than pids.current, or attaching enough
2408processes to the cgroup such that pids.current is larger than
2409pids.max. However, it is not possible to violate a cgroup PID policy
2410through fork() or clone(). These will return -EAGAIN if the creation
2411of a new process would cause a cgroup policy to be violated.
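
For example, with three processes already attached, lowering the limit
below the current count is accepted, but further forks fail::

  echo 2 > pids.max
  cat pids.current    # may read 3, i.e. above pids.max
  # a subsequent fork()/clone() in the cgroup now fails with -EAGAIN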
2412
2413
2414Cpuset
2415------
2416
2417The "cpuset" controller provides a mechanism for constraining
2418the CPU and memory node placement of tasks to only the resources
2419specified in the cpuset interface files in a task's current cgroup.
2420This is especially valuable on large NUMA systems where placing jobs
2421 on properly sized subsets of the system with careful processor and
2422memory placement to reduce cross-node memory access and contention
2423can improve overall system performance.
2424
2425 The "cpuset" controller is hierarchical. That means a cgroup
2426 cannot use CPUs or memory nodes not allowed in its parent.
2427
2428
2429Cpuset Interface Files
2430~~~~~~~~~~~~~~~~~~~~~~
2431
2432 cpuset.cpus
2433 A read-write multiple values file which exists on non-root
2434 cpuset-enabled cgroups.
2435
2436 It lists the requested CPUs to be used by tasks within this
2437 cgroup. The actual list of CPUs to be granted, however, is
2438 subject to constraints imposed by its parent and can differ
2439 from the requested CPUs.
2440
2441 The CPU numbers are comma-separated numbers or ranges.
2442 For example::
2443
2444 # cat cpuset.cpus
2445 0-4,6,8-10
2446
2447 An empty value indicates that the cgroup is using the same
2448 setting as the nearest cgroup ancestor with a non-empty
2449 "cpuset.cpus" or all the available CPUs if none is found.
2450
2451 The value of "cpuset.cpus" stays constant until the next update
2452 and won't be affected by any CPU hotplug events.
2453
2454 cpuset.cpus.effective
2455 A read-only multiple values file which exists on all
2456 cpuset-enabled cgroups.
2457
2458 It lists the onlined CPUs that are actually granted to this
2459 cgroup by its parent. These CPUs are allowed to be used by
2460 tasks within the current cgroup.
2461
2462 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2463 all the CPUs from the parent cgroup that can be available to
2464 be used by this cgroup. Otherwise, it should be a subset of
2465 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2466 can be granted. In this case, it will be treated just like an
2467 empty "cpuset.cpus".
2468
2469 Its value will be affected by CPU hotplug events.
2470
2471 cpuset.mems
2472 A read-write multiple values file which exists on non-root
2473 cpuset-enabled cgroups.
2474
2475 It lists the requested memory nodes to be used by tasks within
2476 this cgroup. The actual list of memory nodes granted, however,
2477 is subject to constraints imposed by its parent and can differ
2478 from the requested memory nodes.
2479
2480 The memory node numbers are comma-separated numbers or ranges.
2481 For example::
2482
2483 # cat cpuset.mems
2484 0-1,3
2485
2486 An empty value indicates that the cgroup is using the same
2487 setting as the nearest cgroup ancestor with a non-empty
2488 "cpuset.mems" or all the available memory nodes if none
2489 is found.
2490
2491 The value of "cpuset.mems" stays constant until the next update
2492 and won't be affected by any memory nodes hotplug events.
2493
2494 Setting a non-empty value to "cpuset.mems" causes memory of
2495 tasks within the cgroup to be migrated to the designated nodes if
2496 they are currently using memory outside of the designated nodes.
2497
2498 There is a cost for this memory migration. The migration
2499 may not be complete and some memory pages may be left behind.
2500 So it is recommended that "cpuset.mems" should be set properly
2501 before spawning new tasks into the cpuset. Even if there is
2502 a need to change "cpuset.mems" with active tasks, it shouldn't
2503 be done frequently.
2504
2505 cpuset.mems.effective
2506 A read-only multiple values file which exists on all
2507 cpuset-enabled cgroups.
2508
2509 It lists the onlined memory nodes that are actually granted to
2510 this cgroup by its parent. These memory nodes are allowed to
2511 be used by tasks within the current cgroup.
2512
2513 If "cpuset.mems" is empty, it shows all the memory nodes from the
2514 parent cgroup that will be available to be used by this cgroup.
2515 Otherwise, it should be a subset of "cpuset.mems" unless none of
2516 the memory nodes listed in "cpuset.mems" can be granted. In this
2517 case, it will be treated just like an empty "cpuset.mems".
2518
2519 Its value will be affected by memory nodes hotplug events.
2520
2521 cpuset.cpus.exclusive
2522 A read-write multiple values file which exists on non-root
2523 cpuset-enabled cgroups.
2524
2525 It lists all the exclusive CPUs that are allowed to be used
2526 to create a new cpuset partition. Its value is not used
2527 unless the cgroup becomes a valid partition root. See the
2528 "cpuset.cpus.partition" section below for a description of what
2529 a cpuset partition is.
2530
2531 When the cgroup becomes a partition root, the actual exclusive
2532 CPUs that are allocated to that partition are listed in
2533 "cpuset.cpus.exclusive.effective" which may be different
2534 from "cpuset.cpus.exclusive". If "cpuset.cpus.exclusive"
2535 has previously been set, "cpuset.cpus.exclusive.effective"
2536 is always a subset of it.
2537
2538 Users can manually set it to a value that is different from
2539 "cpuset.cpus". One constraint in setting it is that the list of
2540 CPUs must be exclusive with respect to "cpuset.cpus.exclusive"
2541 of its sibling. If "cpuset.cpus.exclusive" of a sibling cgroup
2542 isn't set, its "cpuset.cpus" value, if set, cannot be a subset
2543 of it to leave at least one CPU available when the exclusive
2544 CPUs are taken away.
2545
2546 For a parent cgroup, any one of its exclusive CPUs can only
2547 be distributed to at most one of its child cgroups. Having an
2548 exclusive CPU appearing in two or more of its child cgroups is
2549 not allowed (the exclusivity rule). A value that violates the
2550 exclusivity rule will be rejected with a write error.
2551
2552 The root cgroup is a partition root and all its available CPUs
2553 are in its exclusive CPU set.
2554
2555 cpuset.cpus.exclusive.effective
2556 A read-only multiple values file which exists on all non-root
2557 cpuset-enabled cgroups.
2558
2559 This file shows the effective set of exclusive CPUs that
2560 can be used to create a partition root. The content
2561 of this file will always be a subset of its parent's
2562 "cpuset.cpus.exclusive.effective" if its parent is not the root
2563 cgroup. It will also be a subset of "cpuset.cpus.exclusive"
2564 if it is set. If "cpuset.cpus.exclusive" is not set, it is
2565 treated as having an implicit value of "cpuset.cpus" when
2566 forming a local partition.
2567
2568 cpuset.cpus.isolated
2569 A read-only multiple values file which exists only on the root cgroup.
2570
2571 This file shows the set of all isolated CPUs used in existing
2572 isolated partitions. It will be empty if no isolated partition
2573 is created.
2574
2575 cpuset.cpus.partition
2576 A read-write single value file which exists on non-root
2577 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2578 and is not delegatable.
2579
2580 It accepts only the following input values when written to.
2581
2582 ========== =====================================
2583 "member" Non-root member of a partition
2584 "root" Partition root
2585 "isolated" Partition root without load balancing
2586 ========== =====================================
2587
2588 A cpuset partition is a collection of cpuset-enabled cgroups with
2589 a partition root at the top of the hierarchy and its descendants
2590 except those that are separate partition roots themselves and
2591 their descendants. A partition has exclusive access to the
2592 set of exclusive CPUs allocated to it. Other cgroups outside
2593 of that partition cannot use any CPUs in that set.
2594
2595 There are two types of partitions - local and remote. A local
2596 partition is one whose parent cgroup is also a valid partition
2597 root. A remote partition is one whose parent cgroup is not a
2598 valid partition root itself. Writing to "cpuset.cpus.exclusive"
2599 is optional for the creation of a local partition as its
2600 "cpuset.cpus.exclusive" file will assume an implicit value that
2601 is the same as "cpuset.cpus" if it is not set. Writing the
2602 proper "cpuset.cpus.exclusive" values down the cgroup hierarchy
2603 before the target partition root is mandatory for the creation
2604 of a remote partition.
2605
2606 Currently, a remote partition cannot be created under a local
2607 partition. None of the ancestors of a remote partition root,
2608 except the root cgroup, can be a partition root.
2609
2610 The root cgroup is always a partition root and its state cannot
2611 be changed. All other non-root cgroups start out as "member".
2612
2613 When set to "root", the current cgroup is the root of a new
2614 partition or scheduling domain. The set of exclusive CPUs is
2615 determined by the value of its "cpuset.cpus.exclusive.effective".
2616
2617 When set to "isolated", the CPUs in that partition will be in
2618 an isolated state without any load balancing from the scheduler
2619 and excluded from the unbound workqueues. Tasks placed in such
2620 a partition with multiple CPUs should be carefully distributed
2621 and bound to each of the individual CPUs for optimal performance.
2622
2623 A partition root ("root" or "isolated") can be in one of the
2624 two possible states - valid or invalid. An invalid partition
2625 root is in a degraded state where some state information may
2626 be retained, but behaves more like a "member".
2627
2628 All possible state transitions among "member", "root" and
2629 "isolated" are allowed.
2630
2631 On read, the "cpuset.cpus.partition" file can show the following
2632 values.
2633
2634 ============================= =====================================
2635 "member" Non-root member of a partition
2636 "root" Partition root
2637 "isolated" Partition root without load balancing
2638 "root invalid (<reason>)" Invalid partition root
2639 "isolated invalid (<reason>)" Invalid isolated partition root
2640 ============================= =====================================
2641
2642 In the case of an invalid partition root, a descriptive string on
2643 why the partition is invalid is included within parentheses.
2644
2645 For a local partition root to be valid, the following conditions
2646 must be met.
2647
2648 1) The parent cgroup is a valid partition root.
2649 2) The "cpuset.cpus.exclusive.effective" file cannot be empty,
2650 though it may contain offline CPUs.
2651 3) The "cpuset.cpus.effective" cannot be empty unless there is
2652 no task associated with this partition.
2653
2654 For a remote partition root to be valid, all the above conditions
2655 except the first one must be met.
2656
2657 External events like hotplug or changes to "cpuset.cpus" or
2658 "cpuset.cpus.exclusive" can cause a valid partition root to
2659 become invalid and vice versa. Note that a task cannot be
2660 moved to a cgroup with empty "cpuset.cpus.effective".
2661
2662 A valid non-root parent partition may distribute out all its CPUs
2663 to its child local partitions when there is no task associated
2664 with it.
2665
2666 Care must be taken when changing a valid partition root to
2667 "member", as all its child local partitions, if present, will become
2668 invalid, causing disruption to tasks running in those child
2669 partitions. These inactivated partitions can be recovered if
2670 their parent is switched back to a partition root with a proper
2671 value in "cpuset.cpus" or "cpuset.cpus.exclusive".
2672
2673 Poll and inotify events are triggered whenever the state of
2674 "cpuset.cpus.partition" changes. That includes changes caused
2675 by write to "cpuset.cpus.partition", cpu hotplug or other
2676 changes that modify the validity status of the partition.
2677 This will allow user space agents to monitor unexpected changes
2678 to "cpuset.cpus.partition" without the need to do continuous
2679 polling.
2680
2681 A user can pre-configure certain CPUs to an isolated state
2682 with load balancing disabled at boot time with the "isolcpus"
2683 kernel boot command line option. If those CPUs are to be put
2684 into a partition, they have to be used in an isolated partition.
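
As an illustrative sequence (the CPU numbers are arbitrary, and the
parent is assumed to be a valid partition root), a local isolated
partition could be created with::

  echo "2-3" > cpuset.cpus
  echo "2-3" > cpuset.cpus.exclusive   # optional for a local partition
  echo isolated > cpuset.cpus.partition
  cat cpuset.cpus.partition            # expect "isolated" if valid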
2685
2686
2687Device controller
2688-----------------
2689
2690 The device controller manages access to device files. It covers both
2691 the creation of new device files (using mknod) and access to
2692 existing device files.
2693
2694Cgroup v2 device controller has no interface files and is implemented
2695on top of cgroup BPF. To control access to device files, a user may
2696create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2697them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2698device file, corresponding BPF programs will be executed, and depending
2699on the return value the attempt will succeed or fail with -EPERM.
2700
2701A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2702bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2703access type (mknod/read/write) and device (type, major and minor numbers).
2704If the program returns 0, the attempt fails with -EPERM, otherwise it
2705succeeds.
2706
2707An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2708tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
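
As a sketch, a compiled program of this type can be attached with
bpftool, assuming it has been loaded and pinned beforehand (the cgroup
and pin paths are hypothetical)::

  bpftool cgroup attach /sys/fs/cgroup/mygrp device pinned /sys/fs/bpf/dev_prog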
2709
2710
2711RDMA
2712----
2713
2714The "rdma" controller regulates the distribution and accounting of
2715RDMA resources.
2716
2717RDMA Interface Files
2718~~~~~~~~~~~~~~~~~~~~
2719
2720 rdma.max
2721 A read-write nested-keyed file that exists for all cgroups
2722 except root and describes the currently configured resource
2723 limit for an RDMA/IB device.
2724
2725 Lines are keyed by device name and are not ordered.
2726 Each line contains space separated resource name and its configured
2727 limit that can be distributed.
2728
2729 The following nested keys are defined.
2730
2731 ========== =============================
2732 hca_handle Maximum number of HCA Handles
2733 hca_object Maximum number of HCA Objects
2734 ========== =============================
2735
2736 An example for mlx4 and ocrdma device follows::
2737
2738 mlx4_0 hca_handle=2 hca_object=2000
2739 ocrdma1 hca_handle=3 hca_object=max
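
	Limits can be set by writing a similarly formatted line, for
	example (device name and values are illustrative)::

	  echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max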
2740
2741 rdma.current
2742 A read-only file that describes current resource usage.
2743 It exists for all cgroups except root.
2744
2745 An example for mlx4 and ocrdma device follows::
2746
2747 mlx4_0 hca_handle=1 hca_object=20
2748 ocrdma1 hca_handle=1 hca_object=23
2749
2750DMEM
2751----
2752
2753The "dmem" controller regulates the distribution and accounting of
2754device memory regions. Because each memory region may have its own page size,
2755which does not have to be equal to the system page size, the units are always bytes.
2756
2757DMEM Interface Files
2758~~~~~~~~~~~~~~~~~~~~
2759
2760 dmem.max, dmem.min, dmem.low
2761 A read-write nested-keyed file that exists for all cgroups
2762 except root and describes the currently configured resource
2763 limit for a region.
2764
2765 An example for xe follows::
2766
2767 drm/0000:03:00.0/vram0 1073741824
2768 drm/0000:03:00.0/stolen max
2769
2770 The semantics are the same as for the memory cgroup controller, and are
2771 calculated in the same way.
2772
2773 dmem.capacity
2774 A read-only file that describes maximum region capacity.
2775 It only exists on the root cgroup. Not all memory can be
2776 allocated by cgroups, as the kernel reserves some for
2777 internal use.
2778
2779 An example for xe follows::
2780
2781 drm/0000:03:00.0/vram0 8514437120
2782 drm/0000:03:00.0/stolen 67108864
2783
2784 dmem.current
2785 A read-only file that describes current resource usage.
2786 It exists for all cgroups except root.
2787
2788 An example for xe follows::
2789
2790 drm/0000:03:00.0/vram0 12550144
2791 drm/0000:03:00.0/stolen 8650752
2792
2793HugeTLB
2794-------
2795
2796 The HugeTLB controller allows limiting HugeTLB usage per control group
2797 and enforces the limit at page fault time.
2798
2799HugeTLB Interface Files
2800~~~~~~~~~~~~~~~~~~~~~~~
2801
2802 hugetlb.<hugepagesize>.current
2803 Show current usage for "hugepagesize" hugetlb. It exists
2804 for all cgroups except root.
2805
2806 hugetlb.<hugepagesize>.max
2807 Set/show the hard limit of "hugepagesize" hugetlb usage.
2808 The default value is "max". It exists for all cgroups except root.
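
	For example, assuming a system with 2MB hugepages, usage could
	be capped at 512M with::

	  echo 512M > hugetlb.2MB.max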
2809
2810 hugetlb.<hugepagesize>.events
2811 A read-only flat-keyed file which exists on non-root cgroups.
2812
2813 max
2814 The number of allocation failures due to the HugeTLB limit
2815
2816 hugetlb.<hugepagesize>.events.local
2817 Similar to hugetlb.<hugepagesize>.events but the fields in the file
2818 are local to the cgroup i.e. not hierarchical. The file modified event
2819 generated on this file reflects only the local events.
2820
2821 hugetlb.<hugepagesize>.numa_stat
2822 Similar to memory.numa_stat, it shows the numa information of the
2823 hugetlb pages of <hugepagesize> in this cgroup. Only actively
2824 used hugetlb pages are included. The per-node values are in bytes.
2825
2826Misc
2827----
2828
2829The Miscellaneous cgroup provides the resource limiting and tracking
2830mechanism for the scalar resources which cannot be abstracted like the other
2831 cgroup resources. The controller is enabled by the CONFIG_CGROUP_MISC
2832 config option.
2833
2834A resource can be added to the controller via enum misc_res_type{} in the
2835include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
2836in the kernel/cgroup/misc.c file. Provider of the resource must set its
2837capacity prior to using the resource by calling misc_cg_set_capacity().
2838
2839Once a capacity is set then the resource usage can be updated using charge and
2840uncharge APIs. All of the APIs to interact with misc controller are in
2841include/linux/misc_cgroup.h.

Misc Interface Files
~~~~~~~~~~~~~~~~~~~~

The miscellaneous controller provides the following interface files.
If two misc resources (res_a and res_b) are registered, then:

  misc.capacity
    A read-only flat-keyed file shown only in the root cgroup. It shows
    miscellaneous scalar resources available on the platform along with
    their quantities::

      $ cat misc.capacity
      res_a 50
      res_b 10

  misc.current
    A read-only flat-keyed file shown in all cgroups. It shows the
    current usage of the resources in the cgroup and its children::

      $ cat misc.current
      res_a 3
      res_b 0

  misc.peak
    A read-only flat-keyed file shown in all cgroups. It shows the
    historical maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.peak
      res_a 10
      res_b 8

  misc.max
    A read-write flat-keyed file shown in non-root cgroups. It shows
    the allowed maximum usage of the resources in the cgroup and its
    children::

      $ cat misc.max
      res_a max
      res_b 4

    A limit can be set by::

      # echo res_a 1 > misc.max

    A limit can be set to max by::

      # echo res_a max > misc.max

    Limits can be set higher than the capacity value in the
    misc.capacity file.

  misc.events
    A read-only flat-keyed file which exists on non-root cgroups. The
    following entries are defined. Unless specified otherwise, a value
    change in this file generates a file modified event. All fields in
    this file are hierarchical.

    max
      The number of times the cgroup's resource usage was
      about to go over the max boundary.

  misc.events.local
    Similar to misc.events but the fields in the file are local to the
    cgroup, i.e. not hierarchical. The file modified event generated on
    this file reflects only the local events.

Migration and Ownership
~~~~~~~~~~~~~~~~~~~~~~~

A miscellaneous scalar resource is charged to the cgroup in which it is
first used, and stays charged to that cgroup until the resource is
freed. Migrating a process to a different cgroup does not move the
charge to the destination cgroup.

Others
------

perf_event
~~~~~~~~~~

The perf_event controller, if not mounted on a legacy hierarchy, is
automatically enabled on the v2 hierarchy so that perf events can
always be filtered by cgroup v2 path. The controller can still be
moved to a legacy hierarchy after the v2 hierarchy is populated.


Non-normative information
-------------------------

This section contains information that isn't considered to be a part of
the stable kernel API and so is subject to change.


CPU controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When distributing CPU cycles in the root cgroup, each thread in this
cgroup is treated as if it were hosted in a separate child cgroup of
the root cgroup. The weight of this child cgroup is determined by the
nice level of its thread.

For details of this mapping see the sched_prio_to_weight array in the
kernel/sched/core.c file (values from this array should be scaled
appropriately so the neutral - nice 0 - value is 100 instead of 1024).
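
For illustration, a few points of that mapping after scaling (derived
from sched_prio_to_weight; the scaled values are approximate)::

  nice -20: 88761 -> ~8668
  nice   0:  1024 ->   100
  nice  19:    15 ->    ~1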


IO controller root cgroup process behaviour
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Root cgroup processes are hosted in an implicit leaf child node.
When distributing IO resources, this implicit child node is taken into
account as if it were a normal child cgroup of the root cgroup with a
weight value of 200.


Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
"/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
flag can be used with clone(2) and unshare(2) to create a new cgroup
namespace. The process running inside the cgroup namespace will have
its "/proc/$PID/cgroup" output restricted to the cgroupns root. The
cgroupns root is the cgroup of the process at the time of creation of
the cgroup namespace.

Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
complete path of the cgroup of a process. In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
"/proc/$PID/cgroup" file may leak system-level information to the
isolated processes. For example::

  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

The path '/batchjobs/container_id1' can be considered system data
that is undesirable to expose to the isolated processes. cgroup
namespace can be used to restrict visibility of this path. For example,
before creating a cgroup namespace, one would see::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
  # cat /proc/self/cgroup
  0::/batchjobs/container_id1

After unsharing a new namespace, the view changes::

  # ls -l /proc/self/ns/cgroup
  lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
  # cat /proc/self/cgroup
  0::/

When some thread from a multi-threaded process unshares its cgroup
namespace, the new cgroupns gets applied to the entire process (all
the threads). This is natural for the v2 hierarchy; however, for the
legacy hierarchies, this may be unexpected.

A cgroup namespace is alive as long as there are processes inside or
mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain.


The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
process calling unshare(2) is running. For example, if a process in
/batchjobs/container_id1 cgroup calls unshare, cgroup
/batchjobs/container_id1 becomes the cgroupns root. For the
init_cgroup_ns, this is the real root ('/') cgroup.

The cgroupns root cgroup does not change even if the namespace creator
process later moves to a different cgroup::

  # ~/unshare -c # unshare cgroupns in some cgroup
  # cat /proc/self/cgroup
  0::/
  # mkdir sub_cgrp_1
  # echo 0 > sub_cgrp_1/cgroup.procs
  # cat /proc/self/cgroup
  0::/sub_cgrp_1

Each process gets its namespace-specific view of "/proc/$PID/cgroup".

Processes running inside the cgroup namespace will be able to see
cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

  # sleep 100000 &
  [1] 7353
  # echo 7353 > sub_cgrp_1/cgroup.procs
  # cat /proc/7353/cgroup
  0::/sub_cgrp_1

From the initial cgroup namespace, the real cgroup path will be
visible::

  $ cat /proc/7353/cgroup
  0::/batchjobs/container_id1/sub_cgrp_1

From a sibling cgroup namespace (that is, a namespace rooted at a
different cgroup), the cgroup path relative to its own cgroup
namespace root will be shown. For instance, if PID 7353's cgroup
namespace root is at '/batchjobs/container_id2', then it will see::

  # cat /proc/7353/cgroup
  0::/../container_id2/sub_cgrp_1

Note that the relative path always starts with '/' to indicate that
it's relative to the cgroup namespace root of the caller.


Migration and setns(2)
----------------------

Processes inside a cgroup namespace can move into and out of the
namespace root if they have proper access to external cgroups. For
example, from inside a namespace with cgroupns root at
/batchjobs/container_id1, and assuming that the global hierarchy is
still accessible inside cgroupns::

  # cat /proc/7353/cgroup
  0::/sub_cgrp_1
  # echo 7353 > batchjobs/container_id2/cgroup.procs
  # cat /proc/7353/cgroup
  0::/../container_id2

Note that this kind of setup is not encouraged. A task inside cgroup
namespace should only be exposed to its own cgroupns hierarchy.

setns(2) to another cgroup namespace is allowed when:

(a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns

No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching process
under the target cgroup namespace root, as sketched below.
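
As an illustrative sketch using util-linux nsenter (the target PID and
cgroup paths reuse the examples above, and the globally mounted
hierarchy is assumed to still be accessible), one can join an existing
cgroup namespace via setns(2) and then explicitly move under its
cgroupns root::

  # nsenter --cgroup --target 7353 /bin/bash
  # echo $$ > batchjobs/container_id1/cgroup.procs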


Interaction with Other Namespaces
---------------------------------

A namespace-specific cgroup hierarchy can be mounted by a process
running inside a non-init cgroup namespace::

  # mount -t cgroup2 none $MOUNT_POINT

This will mount the unified cgroup hierarchy with the cgroupns root as
the filesystem root. The process needs CAP_SYS_ADMIN against its user
and mount namespaces.

The virtualization of the /proc/self/cgroup file combined with
restricting the view of the cgroup hierarchy by a namespace-private
cgroupfs mount provides a properly isolated cgroup view inside the
container.
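
Putting the pieces together, a minimal container-style setup might look
as follows (an illustrative sketch using util-linux unshare; the mount
point is the conventional one)::

  # unshare --cgroup --mount /bin/bash
  # mount -t cgroup2 none /sys/fs/cgroup
  # cat /proc/self/cgroup
  0::/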


Information on Kernel Programming
=================================

This section contains kernel programming information in the areas
where interacting with cgroup is necessary. cgroup core and
controllers are not covered.


Filesystem Support for Writeback
--------------------------------

A filesystem can support cgroup writeback by updating
address_space_operations->writepages() to annotate bios using the
following two functions.

  wbc_init_bio(@wbc, @bio)
    Should be called for each bio carrying writeback data and
    associates the bio with the inode's owner cgroup and the
    corresponding request queue. This must be called after
    a queue (device) has been associated with the bio and
    before submission.

  wbc_account_cgroup_owner(@wbc, @folio, @bytes)
    Should be called for each data segment being written out.
    While this function doesn't care exactly when it's called
    during the writeback session, it's easiest and most
    natural to call it as data segments are added to a bio.

With writeback bios annotated, cgroup support can be enabled per
super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
selective disabling of cgroup writeback support, which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.

wbc_init_bio() binds the specified bio to its cgroup. Depending on
the configuration, the bio may be executed at a lower priority and, if
the writeback session is holding shared resources, e.g. a journal
entry, may lead to priority inversion. There is no single easy solution
to the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.


Deprecated v1 Core Features
===========================

- Multiple hierarchies including named ones are not supported.

- None of the v1 mount options is supported.

- The "tasks" file is removed and "cgroup.procs" is not sorted.

- "cgroup.clone_children" is removed.

- /proc/cgroups is meaningless for v2. Use "cgroup.controllers" or
  "cgroup.stat" files at the root instead.


Issues with v1 and Rationales for v2
====================================

Multiple Hierarchies
--------------------

cgroup v1 allowed an arbitrary number of hierarchies and each
hierarchy could host any number of controllers. While this seemed to
provide a high level of flexibility, it wasn't useful in practice.

For example, as there is only one instance of each controller, utility
type controllers such as freezer which can be useful in all
hierarchies could only be used in one. The issue is exacerbated by
the fact that controllers couldn't be moved to another hierarchy once
hierarchies were populated. Another issue was that all controllers
bound to a hierarchy were forced to have exactly the same view of the
hierarchy. It wasn't possible to vary the granularity depending on
the specific controller.

In practice, these issues heavily limited which controllers could be
put on the same hierarchy, and most configurations resorted to putting
each controller on its own hierarchy. Only closely related ones, such
as the cpu and cpuacct controllers, made sense to be put on the same
hierarchy. This often meant that userland ended up managing multiple
similar hierarchies, repeating the same steps on each hierarchy
whenever a hierarchy management operation was necessary.

Furthermore, support for multiple hierarchies came at a steep cost.
It greatly complicated cgroup core implementation but, more importantly,
the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.

There was no limit on how many hierarchies there might be, which meant
that a thread's cgroup membership couldn't be described in finite
length. The key might contain any number of entries and was unlimited
in length, which made it highly awkward to manipulate and led to the
addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of a proliferating
number of hierarchies.

Also, as a controller couldn't have any expectation regarding the
topologies of hierarchies other controllers might be on, each
controller had to assume that all other controllers were attached to
completely orthogonal hierarchies. This made it impossible, or at
least very cumbersome, for controllers to cooperate with each other.

In most use cases, putting controllers on hierarchies which are
completely orthogonal to each other isn't necessary. What is usually
called for is the ability to have differing levels of granularity
depending on the specific controller. In other words, a hierarchy may
be collapsed from leaf towards root when viewed from specific
controllers. For example, a given configuration might not care about
how memory is distributed beyond a certain level while still wanting
to control how CPU cycles are distributed.


Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
This didn't make sense for some controllers, and those controllers
ended up implementing different ways to ignore such situations, but
much more importantly it blurred the line between the API exposed to
individual applications and the system management interface.

Generally, in-process knowledge is available only to the process
itself; thus, unlike service-level organization of processes,
categorizing threads of a process requires active participation from
the application which owns the target process.

cgroup v1 had an ambiguously defined delegation model which got abused
in combination with thread granularity. cgroups were delegated to
individual applications so that they could create and manage their own
sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.

First of all, cgroup has a fundamentally inadequate interface to be
exposed this way. For a process to access its own knobs, it has to
extract the path on the target hierarchy from /proc/self/cgroup,
construct the path by appending the name of the knob to the path, open
and then read and/or write to it. This is not only extremely clunky
and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps, and nothing can
guarantee that the process would actually be operating on its own
sub-hierarchy.

cgroup controllers implemented a number of knobs which would never be
accepted as public APIs because they were just adding control knobs to
a system-management pseudo filesystem. cgroup ended up with interface
knobs which were not properly abstracted or refined and directly
revealed kernel internal details. These knobs got exposed to
individual applications through the ill-defined delegation mechanism,
effectively abusing cgroup as a shortcut to implementing public APIs
without going through the required scrutiny.

This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces, and the kernel found
itself inadvertently exposing, and locked into, such constructs.


Competition Between Inner Nodes and Threads
-------------------------------------------

cgroup v1 allowed threads to be in any cgroup, which created an
interesting problem where threads belonging to a parent cgroup and its
children cgroups competed for resources. This was nasty as two
different types of entities competed and there was no obvious way to
settle it. Different controllers did different things.

The cpu controller considered threads and cgroups as equivalents and
mapped nice levels to cgroup weights. This worked for some cases but
fell flat when children wanted to be allocated specific ratios of CPU
cycles and the number of internal threads fluctuated - the ratios
constantly changed as the number of competing entities fluctuated.
There also were other issues. The mapping from nice level to weight
wasn't obvious or universal, and there were various other knobs which
simply weren't available for threads.

The io controller implicitly created a hidden leaf node for each
cgroup to host the threads. The hidden leaf had its own copies of all
the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.

The memory controller didn't have a way to control what happened
between internal tasks and child cgroups, and the behavior was not
clearly defined. There were attempts to add ad-hoc behaviors and
knobs to tailor the behavior to specific workloads, which would have
led to problems extremely difficult to resolve in the long term.

Multiple controllers struggled with internal tasks and came up with
different ways to deal with it; unfortunately, all the approaches were
severely flawed and, furthermore, the widely different behaviors
made cgroup as a whole highly inconsistent.

This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.


Other Interface Issues
----------------------

cgroup v1 grew without oversight and developed a large number of
idiosyncrasies and inconsistencies. One issue on the cgroup core side
was how an empty cgroup was notified - a userland helper binary was
forked and executed for each event. The event delivery wasn't
recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further
complicating the interface.

Controller interfaces were problematic too. An extreme example is
controllers completely ignoring hierarchical organization and treating
all cgroups as if they were all located directly under the root
cgroup. Some controllers exposed a large amount of inconsistent
implementation details to userland.

There also was no consistency across controllers. When a new cgroup
was created, some controllers defaulted to not imposing extra
restrictions while others disallowed any resource usage until
explicitly configured. Configuration knobs for the same type of
control used widely differing naming schemes and formats. Statistics
and information knobs were named arbitrarily and used different
formats and units even in the same controller.

cgroup v2 establishes common conventions where appropriate and updates
controllers so that they expose minimal and consistent interfaces.


Controller Issues and Remedies
------------------------------

Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
that is unset by default. As a result, the set of cgroups that
global reclaim prefers is opt-in, rather than opt-out. The costs for
optimizing these mostly negative lookups are so high that the
implementation, despite its enormous size, does not even provide the
basic desirable behavior. First off, the soft limit has no
hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are
located in the hierarchy. This makes subtree delegation impossible.
Second, the soft limit reclaim pass is so aggressive that it not just
introduces high allocation latencies into the system, but also impacts
system performance due to overreclaim, to the point where the feature
becomes self-defeating.

The memory.low boundary on the other hand is a top-down allocated
reserve. A cgroup enjoys reclaim protection when it's within its
effective low, which makes delegation of subtrees possible. It also
enjoys having reclaim pressure proportional to its overage when
above its effective low.

The original high boundary, the hard limit, is defined as a strict
limit that cannot budge, even if the OOM killer has to be called.
But this generally goes against the goal of making the most out of the
available memory. The memory consumption of workloads varies during
runtime, and that requires users to overcommit. But doing that with a
strict upper limit requires either a fairly accurate prediction of the
working set size or adding slack to the limit. Since working set size
estimation is hard and error prone, and getting it wrong results in
OOM kills, most users tend to err on the side of a looser limit and
end up wasting precious resources.

The memory.high boundary on the other hand can be set much more
conservatively. When hit, it throttles allocations by forcing them
into direct reclaim to work off the excess, but it never invokes the
OOM killer. As a result, a high boundary that is chosen too
aggressively will not terminate the processes, but instead it will
lead to gradual performance degradation. The user can monitor this
and make corrections until the minimal memory footprint that still
gives acceptable performance is found.

In extreme cases, with many concurrent allocations and a complete
breakdown of reclaim progress within the group, the high boundary can
be exceeded. But even then it's mostly better to satisfy the
allocation from the slack available in other groups or the rest of the
system than killing the group. Otherwise, memory.max is there to
limit this type of spillover and ultimately contain buggy or even
malicious applications.

Setting the original memory.limit_in_bytes below the current usage was
subject to a race condition, where concurrent charges could cause the
limit setting to fail. memory.max on the other hand will first set the
limit to prevent new charges, and then reclaim and OOM kill until the
new limit is met - or the task writing to memory.max is killed.

The combined memory+swap accounting and limiting is replaced by real
control over swap space.

The main argument for a combined memory+swap facility in the original
cgroup design was that global or parental pressure would always be
able to swap all anonymous memory of a child group, regardless of the
child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing
their anonymous memory in a tight loop - and an admin cannot assume
full swappability when overcommitting untrusted jobs.

For trusted jobs, on the other hand, a combined counter is not an
intuitive userspace interface, and it flies in the face of the idea
that cgroup controllers should account and limit specific physical
resources. Swap space is a resource like all others in the system,
and that's why the unified hierarchy allows distributing it
separately.