8 :Author: Tejun Heo <tj@kernel.org>
10 This is the authoritative documentation on the design, interface and
11 conventions of cgroup v2. It describes all userland-visible aspects
12 of cgroup including core and specific controller behaviors. All
13 future changes must be reflected in this document. Documentation for
14 v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.
23 2-2. Organizing Processes and Threads
26 2-3. [Un]populated Notification
27 2-4. Controlling Controllers
28 2-4-1. Enabling and Disabling
29 2-4-2. Top-down Constraint
30 2-4-3. No Internal Process Constraint
32 2-5-1. Model of Delegation
33 2-5-2. Delegation Containment
35 2-6-1. Organize Once and Control
36 2-6-2. Avoid Name Collisions
37 3. Resource Distribution Models
45 4-3. Core Interface Files
48 5-1-1. CPU Interface Files
50 5-2-1. Memory Interface Files
51 5-2-2. Usage Guidelines
52 5-2-3. Memory Ownership
54 5-3-1. IO Interface Files
57 5-3-3-1. How IO Latency Throttling Works
58 5-3-3-2. IO Latency Interface Files
61 5-4-1. PID Interface Files
5-5-1. Cpuset Interface Files
66 5-7-1. RDMA Interface Files
5-8-1. HugeTLB Interface Files
5-9-1. Miscellaneous cgroup Interface Files
5-9-2. Migration and Ownership
74 5-N. Non-normative information
75 5-N-1. CPU controller root cgroup process behaviour
76 5-N-2. IO controller root cgroup process behaviour
79 6-2. The Root and Views
80 6-3. Migration and setns(2)
81 6-4. Interaction with Other Namespaces
82 P. Information on Kernel Programming
83 P-1. Filesystem Support for Writeback
84 D. Deprecated v1 Core Features
85 R. Issues with v1 and Rationales for v2
86 R-1. Multiple Hierarchies
87 R-2. Thread Granularity
88 R-3. Competition Between Inner Nodes and Threads
89 R-4. Other Interface Issues
90 R-5. Controller Issues and Remedies
100 "cgroup" stands for "control group" and is never capitalized. The
101 singular form is used to designate the whole feature and also as a
102 qualifier as in "cgroup controllers". When explicitly referring to
103 multiple individual control groups, the plural form "cgroups" is used.
What is cgroup?
---------------

cgroup is a mechanism to organize processes hierarchically and
distribute system resources along the hierarchy in a controlled and
configurable manner.
113 cgroup is largely composed of two parts - the core and controllers.
114 cgroup core is primarily responsible for hierarchically organizing
115 processes. A cgroup controller is usually responsible for
116 distributing a specific type of system resource along the hierarchy
117 although there are utility controllers which serve purposes other than
118 resource distribution.
120 cgroups form a tree structure and every process in the system belongs
121 to one and only one cgroup. All threads of a process belong to the
122 same cgroup. On creation, all processes are put in the cgroup that
123 the parent process belongs to at the time. A process can be migrated
124 to another cgroup. Migration of a process doesn't affect already
125 existing descendant processes.
127 Following certain structural constraints, controllers may be enabled or
128 disabled selectively on a cgroup. All controller behaviors are
129 hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups comprising the inclusive
131 sub-hierarchy of the cgroup. When a controller is enabled on a nested
132 cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy cannot be
134 overridden from further away.
Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
144 hierarchy can be mounted with the following mount command::
146 # mount -t cgroup2 none $MOUNT_POINT
148 cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
149 controllers which support v2 and are not bound to a v1 hierarchy are
150 automatically bound to the v2 hierarchy and show up at the root.
151 Controllers which are not in active use in the v2 hierarchy can be
152 bound to other hierarchies. This allows mixing v2 hierarchy with the
153 legacy v1 multiple hierarchies in a fully backward compatible way.
155 A controller can be moved across hierarchies only after the controller
156 is no longer referenced in its current hierarchy. Because per-cgroup
157 controller states are destroyed asynchronously and controllers may
158 have lingering references, a controller may not show up immediately on
159 the v2 hierarchy after the final umount of the previous hierarchy.
160 Similarly, a controller should be fully disabled to be moved out of
161 the unified hierarchy and it may take some time for the disabled
162 controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled.
166 While useful for development and manual configurations, moving
167 controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before the controllers
are put to actual use after system boot.
172 During transition to v2, system management software might still
173 automount the v1 cgroup filesystem and so hijack all controllers
174 during boot, before manual intervention is possible. To make testing
175 and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and making them always available in v2.
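For example, booting with the following kernel command line fragment
keeps every controller away from v1 (a sketch; "all" can also be
replaced with a comma-separated list of controller names)::

  cgroup_no_v1=all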
178 cgroup v2 currently supports the following mount options.
nsdelegate
Consider cgroup namespaces as delegation boundaries. This
182 option is system wide and can only be set on mount or modified
183 through remount from the init namespace. The mount option is
184 ignored on non-init namespace mounts. Please refer to the
185 Delegation section for details.
favordynmods
Reduce the latencies of dynamic cgroup modifications such as
189 task migrations and controller on/offs at the cost of making
190 hot path operations such as forks and exits more expensive.
191 The static usage pattern of creating a cgroup, enabling
192 controllers, and then seeding it with CLONE_INTO_CGROUP is
193 not affected by this option.
memory_localevents
Only populate memory.events with data for the current cgroup,
and not any subtrees. This is legacy behaviour; the default
behaviour without this option is to include subtree counts.
199 This option is system wide and can only be set on mount or
200 modified through remount from the init namespace. The mount
201 option is ignored on non-init namespace mounts.
memory_recursiveprot
Recursively apply memory.min and memory.low protection to
205 entire subtrees, without requiring explicit downward
206 propagation into leaf cgroups. This allows protecting entire
207 subtrees from one another, while retaining free competition
208 within those subtrees. This should have been the default
209 behavior but is a mount-option to avoid regressing setups
210 relying on the original semantics (e.g. specifying bogusly
211 high 'bypass' protection values at higher tree levels).
214 Organizing Processes and Threads
215 --------------------------------
Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME
225 A given cgroup may have multiple child cgroups forming a tree
226 structure. Each cgroup has a read-writable interface file
227 "cgroup.procs". When read, it lists the PIDs of all processes which
228 belong to the cgroup one-per-line. The PIDs are not ordered and the
229 same PID may show up more than once if the process got moved to
230 another cgroup and then back or the PID got recycled while reading.
232 A process can be migrated into a cgroup by writing its PID to the
233 target cgroup's "cgroup.procs" file. Only one process can be migrated
234 on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
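For example, a minimal sketch, assuming the v2 hierarchy is mounted at
/sys/fs/cgroup and $PID names an existing process::

  # mkdir /sys/fs/cgroup/test
  # echo $PID > /sys/fs/cgroup/test/cgroup.procs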
238 When a process forks a child process, the new process is born into the
239 cgroup that the forking process belongs to at the time of the
240 operation. After exit, a process stays associated with the cgroup
241 that it belonged to at the time of exit until it's reaped; however, a
242 zombie process does not appear in "cgroup.procs" and thus can't be
243 moved to another cgroup.
245 A cgroup which doesn't have any children or live processes can be
246 destroyed by removing the directory. Note that a cgroup which doesn't
247 have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME
252 "/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
253 cgroup is in use in the system, this file may contain multiple lines,
one for each hierarchy. The entry for cgroup v2 is always in the
format "0::$PATH"::
257 # cat /proc/842/cgroup
259 0::/test-cgroup/test-cgroup-nested
261 If the process becomes a zombie and the cgroup it was associated with
262 is removed subsequently, " (deleted)" is appended to the path::
264 # cat /proc/842/cgroup
266 0::/test-cgroup/test-cgroup-nested (deleted)
Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
273 support use cases requiring hierarchical resource distribution across
274 the threads of a group of processes. By default, all threads of a
275 process belong to the same cgroup, which also serves as the resource
276 domain to host resource consumptions which are not specific to a
277 process or thread. The thread mode allows threads to be spread across
278 a subtree while still maintaining the common resource domain for them.
280 Controllers which support thread mode are called threaded controllers.
281 The ones which don't are called domain controllers.
283 Marking a cgroup threaded makes it join the resource domain of its
284 parent as a threaded cgroup. The parent may be another threaded
285 cgroup whose resource domain is further up in the hierarchy. The root
286 of a threaded subtree, that is, the nearest ancestor which is not
287 threaded, is called threaded domain or thread root interchangeably and
288 serves as the resource domain for the entire subtree.
290 Inside a threaded subtree, threads of a process can be put in
291 different cgroups and are not subject to the no internal process
292 constraint - threaded controllers can be enabled on non-leaf cgroups
293 whether they have threads in them or not.
295 As the threaded domain cgroup hosts all the domain resource
296 consumptions of the subtree, it is considered to have internal
297 resource consumptions whether there are processes in it or not and
298 can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it can
300 serve both as a threaded domain and a parent to domain cgroups.
302 The current operation mode or type of the cgroup is shown in the
303 "cgroup.type" file which indicates whether the cgroup is a normal
304 domain, a domain which is serving as the domain of a threaded subtree,
305 or a threaded cgroup.
307 On creation, a cgroup is always a domain cgroup and can be made
308 threaded by writing "threaded" to the "cgroup.type" file. The
309 operation is single direction::
311 # echo threaded > cgroup.type
313 Once threaded, the cgroup can't be made a domain again. To enable the
314 thread mode, the following conditions must be met.
- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.
319 - When the parent is an unthreaded domain, it must not have any domain
320 controllers enabled or populated domain children. The root is
321 exempt from this requirement.
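As a sketch with hypothetical names, assuming "grp" is an unthreaded
domain cgroup with no domain controllers enabled::

  # mkdir -p grp/worker0
  # echo threaded > grp/worker0/cgroup.type
  # cat grp/cgroup.type
  domain threaded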
323 Topology-wise, a cgroup can be in an invalid state. Please consider
324 the following topology::
326 A (threaded domain) - B (threaded) - C (domain, just created)
328 C is created as a domain but isn't connected to a parent which can
329 host child domains. C can't be used until it is turned into a
330 threaded cgroup. "cgroup.type" file will report "domain (invalid)" in
331 these cases. Operations which fail due to invalid topology use
332 EOPNOTSUPP as the errno.
334 A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
336 "cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.
340 When read, "cgroup.threads" contains the list of the thread IDs of all
341 threads in the cgroup. Except that the operations are per-thread
342 instead of per-process, "cgroup.threads" has the same format and
343 behaves the same way as "cgroup.procs". While "cgroup.threads" can be
344 written to in any cgroup, as it can only move threads inside the same
threaded domain, its operations are confined inside each threaded
subtree.
348 The threaded domain cgroup serves as the resource domain for the whole
349 subtree, and, while the threads can be scattered across the subtree,
350 all the processes are considered to be in the threaded domain cgroup.
351 "cgroup.procs" in a threaded domain cgroup contains the PIDs of all
352 processes in the subtree and is not readable in the subtree proper.
353 However, "cgroup.procs" can be written to from anywhere in the subtree
354 to migrate all threads of the matching process to the cgroup.
356 Only threaded controllers can be enabled in a threaded subtree. When
357 a threaded controller is enabled inside a threaded subtree, it only
358 accounts for and controls resource consumptions associated with the
359 threads in the cgroup and its descendants. All consumptions which
360 aren't tied to a specific thread belong to the threaded domain cgroup.
362 Because a threaded subtree is exempt from no internal process
363 constraint, a threaded controller must be able to handle competition
364 between threads in a non-leaf cgroup and its child cgroups. Each
365 threaded controller defines how such competitions are handled.
368 [Un]populated Notification
369 --------------------------
371 Each non-root cgroup has a "cgroup.events" file which contains
372 "populated" field indicating whether the cgroup's sub-hierarchy has
373 live processes in it. Its value is 0 if there is no live process in
374 the cgroup and its descendants; otherwise, 1. poll and [id]notify
375 events are triggered when the value changes. This can be used, for
376 example, to start a clean-up operation after all processes of a given
377 sub-hierarchy have exited. The populated state updates and
378 notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)
385 A, B and C's "populated" fields would be 1 while D's 0. After the one
386 process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
B and C.
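The current state can also be read back directly; for example, a
sketch run inside a populated, unfrozen cgroup's directory::

  # cat cgroup.events
  populated 1
  frozen 0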
391 Controlling Controllers
392 -----------------------
394 Enabling and Disabling
395 ~~~~~~~~~~~~~~~~~~~~~~
397 Each cgroup has a "cgroup.controllers" file which lists all
398 controllers available for the cgroup to enable::
  # cat cgroup.controllers
  cpu io memory
403 No controller is enabled by default. Controllers can be enabled and
404 disabled by writing to the "cgroup.subtree_control" file::
406 # echo "+cpu +memory -io" > cgroup.subtree_control
408 Only controllers which are listed in "cgroup.controllers" can be
409 enabled. When multiple operations are specified as above, either they
all succeed or they all fail. If multiple operations on the same controller
411 are specified, the last one is effective.
413 Enabling a controller in a cgroup indicates that the distribution of
414 the target resource across its immediate children will be controlled.
415 Consider the following sub-hierarchy. The enabled controllers are
416 listed in parentheses::
A(cpu,memory) - B(memory) - C()
                          \ D()
421 As A has "cpu" and "memory" enabled, A will control the distribution
422 of CPU cycles and memory to its children, in this case, B. As B has
423 "memory" enabled but not "CPU", C and D will compete freely on CPU
424 cycles but their division of memory available to B will be controlled.
426 As a controller regulates the distribution of the target resource to
427 the cgroup's children, enabling it creates the controller's interface
428 files in the child cgroups. In the above example, enabling "cpu" on B
429 would create the "cpu." prefixed controller interface files in C and
430 D. Likewise, disabling "memory" from B would remove the "memory."
431 prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.
Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
440 a resource only if the resource has been distributed to it from the
441 parent. This means that all non-root "cgroup.subtree_control" files
442 can only contain controllers which are enabled in the parent's
443 "cgroup.subtree_control" file. A controller can be enabled only if
444 the parent has the controller enabled and a controller can't be
445 disabled if one or more children have it enabled.
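For example, a sketch with hypothetical cgroups A and A/B; the parent
must enable a controller before the child may, and this assumes
"memory" appears in A's "cgroup.controllers"::

  # echo +memory > A/cgroup.subtree_control
  # echo +memory > A/B/cgroup.subtree_control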
448 No Internal Process Constraint
449 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
451 Non-root cgroups can distribute domain resources to their children
452 only when they don't have any processes of their own. In other words,
453 only domain cgroups which don't contain any processes can have domain
454 controllers enabled in their "cgroup.subtree_control" files.
456 This guarantees that, when a domain controller is looking at the part
457 of the hierarchy which has it enabled, processes are always only on
458 the leaves. This rules out situations where child cgroups compete
459 against internal processes of the parent.
461 The root cgroup is exempt from this restriction. Root contains
462 processes and anonymous resource consumption which can't be associated
463 with any other cgroups and requires special treatment from most
464 controllers. How resource consumption in the root cgroup is governed
465 is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).
469 Note that the restriction doesn't get in the way if there is no
470 enabled controller in the cgroup's "cgroup.subtree_control". This is
471 important as otherwise it wouldn't be possible to create children of a
472 populated cgroup. To control resource distribution of a cgroup, the
473 cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
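As a sketch, run from a populated cgroup which wants to enable
controllers for its subtree (names hypothetical; note that
"cgroup.procs" accepts one PID per write)::

  # mkdir leaf
  # for p in $(cat cgroup.procs); do echo $p > leaf/cgroup.procs; done
  # echo +memory > cgroup.subtree_control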
Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
485 user by granting write access of the directory and its "cgroup.procs",
486 "cgroup.threads" and "cgroup.subtree_control" files to the user.
487 Second, if the "nsdelegate" mount option is set, automatically to a
488 cgroup namespace on namespace creation.
490 Because the resource control interface files in a given directory
491 control the distribution of the parent's resources, the delegatee
492 shouldn't be allowed to write to them. For the first method, this is
493 achieved by not granting access to these files. For the second, the
494 kernel rejects writes to all files other than "cgroup.procs" and
495 "cgroup.subtree_control" on a namespace root from inside the
498 The end results are equivalent for both delegation types. Once
499 delegated, the user can build sub-hierarchy under the directory,
500 organize processes inside it as it sees fit and further distribute the
501 resources it received from the parent. The limits and other settings
502 of all resource controllers are hierarchical and regardless of what
503 happens in the delegated sub-hierarchy, nothing can escape the
504 resource restrictions imposed by the parent.
506 Currently, cgroup doesn't impose any restrictions on the number of
507 cgroups in or nesting depth of a delegated sub-hierarchy; however,
508 this may be limited explicitly in the future.
511 Delegation Containment
512 ~~~~~~~~~~~~~~~~~~~~~~
514 A delegated sub-hierarchy is contained in the sense that processes
515 can't be moved into or out of the sub-hierarchy by the delegatee.
517 For delegations to a less privileged user, this is achieved by
518 requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.
522 - The writer must have write access to the "cgroup.procs" file.
524 - The writer must have write access to the "cgroup.procs" file of the
525 common ancestor of the source and destination cgroups.
527 The above two constraints ensure that while a delegatee may migrate
528 processes around freely in the delegated sub-hierarchy it can't pull
529 in from or push out to outside the sub-hierarchy.
531 For an example, let's assume cgroups C0 and C1 have been delegated to
532 user U0 who created C00, C01 under C0 and C10 under C1 as follows and
533 all processes under C0 and C1 belong to U0::
~~~~~~~~~~~~~ - C0 - C00
~ cgroup    ~      \ C01
~ hierarchy ~
~~~~~~~~~~~~~ - C1 - C10
540 Let's also say U0 wants to write the PID of a process which is
541 currently in C10 into "C00/cgroup.procs". U0 has write access to the
542 file; however, the common ancestor of the source cgroup C10 and the
543 destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" file and thus the write
545 will be denied with -EACCES.
547 For delegations to namespaces, containment is achieved by requiring
548 that both the source and destination cgroups are reachable from the
549 namespace of the process which is attempting the migration. If either
550 is not reachable, the migration is rejected with -ENOENT.
556 Organize Once and Control
557 ~~~~~~~~~~~~~~~~~~~~~~~~~
559 Migrating a process across cgroups is a relatively expensive operation
560 and stateful resources such as memory are not moved together with the
561 process. This is an explicit design decision as there often exist
562 inherent trade-offs between migration and various hot paths in terms
563 of synchronization cost.
565 As such, migrating processes across cgroups frequently as a means to
566 apply different resource restrictions is discouraged. A workload
567 should be assigned to a cgroup according to the system's logical and
568 resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.
573 Avoid Name Collisions
574 ~~~~~~~~~~~~~~~~~~~~~
576 Interface files for a cgroup and its children cgroups occupy the same
577 directory and it is possible to create children cgroups which collide
578 with interface files.
580 All cgroup core interface files are prefixed with "cgroup." and each
581 controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
583 '_'s but never begins with an '_' so it can be used as the prefix
584 character for collision avoidance. Also, interface file names won't
585 start or end with terms which are often used in categorizing workloads
586 such as job, service, slice, unit or workload.
588 cgroup doesn't do anything to prevent name collisions and it's the
589 user's responsibility to avoid them.
592 Resource Distribution Models
593 ============================
595 cgroup controllers implement several resource distribution schemes
596 depending on the resource type and expected use cases. This section
597 describes major schemes in use along with their expected behaviors.
Weights
-------

A parent's resource is distributed by adding up the weights of all
604 active children and giving each the fraction matching the ratio of its
605 weight against the sum. As only children which can make use of the
606 resource at the moment participate in the distribution, this is
607 work-conserving. Due to the dynamic nature, this model is usually
608 used for stateless resources.
610 All weights are in the range [1, 10000] with the default at 100. This
611 allows symmetric multiplicative biases in both directions at fine
612 enough granularity while staying in the intuitive range.
614 As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.
618 "cpu.weight" proportionally distributes CPU cycles to active children
619 and is an example of this type.
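For instance, a sketch with two hypothetical sibling cgroups which
gives one four times the CPU share of the other while both stay busy::

  # echo 200 > fast/cpu.weight
  # echo 50 > slow/cpu.weight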
622 .. _cgroupv2-limits-distributor:
Limits
------

A child can only consume up to the configured amount of the resource.
628 Limits can be over-committed - the sum of the limits of children can
629 exceed the amount of resource available to the parent.
Limits are in the range [0, max] and default to "max", which is noop.
633 As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.
637 "io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
638 on an IO device and is an example of this type.
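For instance, a sketch limiting read bandwidth and write IOPS on a
hypothetical 8:16 device::

  # echo "8:16 rbps=2097152 wiops=120" > io.max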
640 .. _cgroupv2-protections-distributor:
Protections
-----------

A cgroup is protected up to the configured amount of the resource
646 as long as the usages of all its ancestors are under their
647 protected levels. Protections can be hard guarantees or best effort
648 soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is
noop.
655 As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.
659 "memory.low" implements best-effort memory protection and is an
660 example of this type.
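For example, a sketch giving a hypothetical workload cgroup a
best-effort protection of half a gigabyte::

  # echo 512M > memory.low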
Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
667 resource. Allocations can't be over-committed - the sum of the
668 allocations of children can not exceed the amount of resource
669 available to the parent.
Allocations are in the range [0, max] and default to 0, which is no
resource at all.
674 As allocations can't be over-committed, some configuration
675 combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.
Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

    VAL0\n
    VAL1\n
    ...

  Space separated values
  (when read-only or multiple values can be written at once)

    VAL0 VAL1 ...\n

  Flat keyed

    KEY0 VAL0\n
    KEY1 VAL1\n
    ...

  Nested keyed

    KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
    KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
    ...
716 For a writable file, the format for writing should generally match
717 reading; however, controllers may allow omitting later fields or
718 implement restricted shortcuts for most common use cases.
720 For both flat and nested keyed files, only the values for a single key
721 can be written at a time. For nested keyed files, the sub key pairs
722 may be specified in any order and not all pairs have to be specified.
Conventions
-----------

- Settings for a single feature should be contained in a single file.
730 - The root cgroup should be exempt from resource control and thus
731 shouldn't have resource control interface files.
733 - The default time unit is microseconds. If a different unit is ever
734 used, an explicit unit suffix must be present.
736 - A parts-per quantity should use a percentage decimal with at least
737 two digit fractional part - e.g. 13.40.
739 - If a controller implements weight based resource distribution, its
740 interface file should be named "weight" and have the range [1,
741 10000] with 100 as the default. The values are chosen to allow
742 enough and symmetric bias in both directions while keeping it
743 intuitive (the default is 100%).
745 - If a controller implements an absolute resource guarantee and/or
746 limit, the interface files should be named "min" and "max"
747 respectively. If a controller implements best effort resource
748 guarantee and/or limit, the interface files should be named "low"
749 and "high" respectively.
751 In the above four control files, the special token "max" should be
752 used to represent upward infinity for both reading and writing.
754 - If a setting has a configurable default value and keyed specific
755 overrides, the default entry should be keyed with "default" and
756 appear as the first entry in the file.
The default value can be updated by writing either "default $VAL" or
"$VAL".
761 When writing to update a specific override, "default" can be used as
762 the value to indicate removal of the override. Override entries
763 with "default" as the value must not appear when read.
For example, a setting which is keyed by major:minor device numbers
with integer values may look like the following::

  # cat cgroup-example-interface-file
  default 150
  8:0 300

The default value can be updated by::

  # echo 125 > cgroup-example-interface-file

or::

  # echo "default 125" > cgroup-example-interface-file

An override can be set by::

  # echo "8:16 170" > cgroup-example-interface-file

and cleared by::

  # echo "8:0 default" > cgroup-example-interface-file
  # cat cgroup-example-interface-file
  default 125
  8:16 170
791 - For events which are not very high frequency, an interface file
792 "events" should be created which lists event key value pairs.
Whenever a notifiable event happens, a file modified event should be
794 generated on the file.
Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."
cgroup.type
A read-write single value file which exists on non-root
cgroups.
806 When read, it indicates the current type of the cgroup, which
807 can be one of the following values.
809 - "domain" : A normal valid domain cgroup.
811 - "domain threaded" : A threaded domain cgroup which is
812 serving as the root of a threaded subtree.
814 - "domain invalid" : A cgroup which is in an invalid state.
815 It can't be populated or have controllers enabled. It may
816 be allowed to become a threaded cgroup.
818 - "threaded" : A threaded cgroup which is a member of a
821 A cgroup can be turned into a threaded cgroup by writing
822 "threaded" to this file.
cgroup.procs
A read-write new-line separated values file which exists on
all cgroups.
828 When read, it lists the PIDs of all processes which belong to
829 the cgroup one-per-line. The PIDs are not ordered and the
830 same PID may show up more than once if the process got moved
to another cgroup and then back or the PID got recycled while
reading.
834 A PID can be written to migrate the process associated with
835 the PID to the cgroup. The writer should match all of the
836 following conditions.
838 - It must have write access to the "cgroup.procs" file.
840 - It must have write access to the "cgroup.procs" file of the
841 common ancestor of the source and destination cgroups.
843 When delegating a sub-hierarchy, write access to this file
844 should be granted along with the containing directory.
846 In a threaded cgroup, reading this file fails with EOPNOTSUPP
847 as all the processes belong to the thread root. Writing is
848 supported and moves every thread of the process to the cgroup.
cgroup.threads
A read-write new-line separated values file which exists on
all cgroups.
854 When read, it lists the TIDs of all threads which belong to
855 the cgroup one-per-line. The TIDs are not ordered and the
856 same TID may show up more than once if the thread got moved to
another cgroup and then back or the TID got recycled while
reading.
860 A TID can be written to migrate the thread associated with the
861 TID to the cgroup. The writer should match all of the
862 following conditions.
864 - It must have write access to the "cgroup.threads" file.
866 - The cgroup that the thread is currently in must be in the
867 same resource domain as the destination cgroup.
869 - It must have write access to the "cgroup.procs" file of the
870 common ancestor of the source and destination cgroups.
872 When delegating a sub-hierarchy, write access to this file
873 should be granted along with the containing directory.
cgroup.controllers
A read-only space separated values file which exists on all
cgroups.
879 It shows space separated list of all controllers available to
880 the cgroup. The controllers are not ordered.
882 cgroup.subtree_control
883 A read-write space separated values file which exists on all
884 cgroups. Starts out empty.
886 When read, it shows space separated list of the controllers
887 which are enabled to control resource distribution from the
888 cgroup to its children.
890 Space separated list of controllers prefixed with '+' or '-'
891 can be written to enable or disable controllers. A controller
892 name prefixed with '+' enables the controller and '-'
893 disables. If a controller appears more than once on the list,
894 the last one is effective. When multiple enable and disable
895 operations are specified, either all succeed or all fail.
cgroup.events
A read-only flat-keyed file which exists on non-root cgroups.
899 The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.

  populated
    1 if the cgroup or its descendants contains any live
    processes; otherwise, 0.
  frozen
    1 if the cgroup is frozen; otherwise, 0.
909 cgroup.max.descendants
A read-write single value file. The default is "max".
Maximum allowed number of descendant cgroups.
913 If the actual number of descendants is equal or larger,
914 an attempt to create a new cgroup in the hierarchy will fail.
cgroup.max.depth
A read-write single value file. The default is "max".
919 Maximum allowed descent depth below the current cgroup.
920 If the actual descent depth is equal or larger,
921 an attempt to create a new child cgroup will fail.
cgroup.stat
A read-only flat-keyed file with the following entries:
  nr_descendants
    Total number of visible descendant cgroups.
  nr_dying_descendants
    Total number of dying descendant cgroups. A cgroup becomes
    dying after being deleted by a user. The cgroup will remain
    in dying state for some undefined time (which can depend
    on system load) before being completely destroyed.
    A process can't enter a dying cgroup under any circumstances,
    and a dying cgroup can't revive.
938 A dying cgroup can consume system resources not exceeding
939 limits, which were active at the moment of cgroup deletion.
cgroup.freeze
A read-write single value file which exists on non-root cgroups.
943 Allowed values are "0" and "1". The default is "0".
945 Writing "1" to the file causes freezing of the cgroup and all
946 descendant cgroups. This means that all belonging processes will
be stopped and will not run until the cgroup is explicitly
948 unfrozen. Freezing of the cgroup may take some time; when this action
949 is completed, the "frozen" value in the cgroup.events control file
950 will be updated to "1" and the corresponding notification will be
A cgroup can be frozen either by its own settings, or by settings
of any ancestor cgroups. If any of the ancestor cgroups is frozen,
the cgroup will remain frozen.
957 Processes in the frozen cgroup can be killed by a fatal signal.
958 They also can enter and leave a frozen cgroup: either by an explicit
959 move by a user, or if freezing of the cgroup races with fork().
960 If a process is moved to a frozen cgroup, it stops. If a process is
961 moved out of a frozen cgroup, it becomes running.
963 Frozen status of a cgroup doesn't affect any cgroup tree operations:
964 it's possible to delete a frozen (and empty) cgroup, as well as
965 create new sub-cgroups.
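For example, a sketch which freezes a cgroup and reads the state back
once freezing has completed (paths assumed)::

  # echo 1 > cgroup.freeze
  # cat cgroup.events
  populated 1
  frozen 1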
cgroup.kill
A write-only single value file which exists in non-root cgroups.
969 The only allowed value is "1".
971 Writing "1" to the file causes the cgroup and all descendant cgroups to
972 be killed. This means that all processes located in the affected cgroup
973 tree will be killed via SIGKILL.
975 Killing a cgroup tree will deal with concurrent forks appropriately and
976 is protected against migrations.
978 In a threaded cgroup, writing this file fails with EOPNOTSUPP as
979 killing cgroups is a process directed operation, i.e. it affects
980 the whole thread-group.
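A sketch, run against an assumed target cgroup directory::

  # echo 1 > cgroup.kill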
cgroup.pressure
A read-write single value file whose allowed values are "0" and "1".
The default is "1".
986 Writing "0" to the file will disable the cgroup PSI accounting.
987 Writing "1" to the file will re-enable the cgroup PSI accounting.
This control attribute is not hierarchical, so disabling or enabling
PSI accounting in a cgroup does not affect PSI accounting in
descendants and doesn't need to pass enablement via ancestors from the root.
993 The reason this control attribute exists is that PSI accounts stalls for
994 each cgroup separately and aggregates it at each level of the hierarchy.
This may cause non-negligible overhead for some workloads when under
deep levels of the hierarchy, in which case this control attribute can
be used to disable PSI accounting in the non-leaf cgroups.
irq.pressure
A read-write nested-keyed file.
1002 Shows pressure stall information for IRQ/SOFTIRQ. See
1003 :ref:`Documentation/accounting/psi.rst <psi>` for details.
1013 The "cpu" controllers regulates distribution of CPU cycles. This
1014 controller implements weight and absolute bandwidth limit models for
1015 normal scheduling policy and absolute bandwidth allocation model for
1016 realtime scheduling policy.
1018 In all the above models, cycles distribution is defined only on a temporal
1019 base and it does not account for the frequency at which tasks are executed.
The (optional) utilization clamping support allows hinting the schedutil
1021 cpufreq governor about the minimum desired frequency which should always be
1022 provided by a CPU, as well as the maximum desired frequency, which should not
1023 be exceeded by a CPU.
1025 WARNING: cgroup2 doesn't yet support control of realtime processes and
1026 the cpu controller can only be enabled when all RT processes are in
1027 the root cgroup. Be aware that system management software may already
1028 have placed RT processes into nonroot cgroups during the system boot
1029 process, and these processes may need to be moved to the root cgroup
1030 before the cpu controller can be enabled.
CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.
cpu.stat
A read-only flat-keyed file.
1040 This file exists whether the controller is enabled or not.
It always reports the following three stats:

- usage_usec
- user_usec
- system_usec

and the following three when the controller is enabled:

- nr_periods
- nr_throttled
- throttled_usec
cpu.weight
A read-write single value file which exists on non-root
1058 cgroups. The default is "100".
1060 The weight in the range [1, 10000].
cpu.weight.nice
A read-write single value file which exists on non-root
1064 cgroups. The default is "0".
1066 The nice value is in the range [-20, 19].
1068 This interface file is an alternative interface for
1069 "cpu.weight" and allows reading and setting weight using the
1070 same values used by nice(2). Because the range is smaller and
1071 granularity is coarser for the nice values, the read value is
1072 the closest approximation of the current weight.
cpu.max
A read-write two value file which exists on non-root cgroups.
1076 The default is "max 100000".
The maximum bandwidth limit. It's in the following format::

  $MAX $PERIOD
1082 which indicates that the group may consume up to $MAX in each
1083 $PERIOD duration. "max" for $MAX indicates no limit. If only
1084 one number is written, $MAX is updated.
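For instance, a sketch capping the cgroup to half a CPU, i.e.
50000us of runtime for every 100000us period (values assumed)::

  # echo "50000 100000" > cpu.max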
cpu.max.burst
A read-write single value file which exists on non-root
1088 cgroups. The default is "0".
1090 The burst in the range [0, $MAX].
cpu.pressure
A read-write nested-keyed file.
1095 Shows pressure stall information for CPU. See
1096 :ref:`Documentation/accounting/psi.rst <psi>` for details.
cpu.uclamp.min
A read-write single value file which exists on non-root cgroups.
1100 The default is "0", i.e. no utilization boosting.
1102 The requested minimum utilization (protection) as a percentage
1103 rational number, e.g. 12.34 for 12.34%.
1105 This interface allows reading and setting minimum utilization clamp
1106 values similar to the sched_setattr(2). This minimum utilization
1107 value is used to clamp the task specific minimum utilization clamp.
1109 The requested minimum utilization (protection) is always capped by
the current value for the maximum utilization (limit), i.e.
`cpu.uclamp.max`.
cpu.uclamp.max
A read-write single value file which exists on non-root cgroups.
The default is "max", i.e. no utilization capping.
1117 The requested maximum utilization (limit) as a percentage rational
1118 number, e.g. 98.76 for 98.76%.
1120 This interface allows reading and setting maximum utilization clamp
1121 values similar to the sched_setattr(2). This maximum utilization
1122 value is used to clamp the task specific maximum utilization clamp.
1129 The "memory" controller regulates distribution of memory. Memory is
1130 stateful and implements both limit and protection models. Due to the
1131 intertwining between memory usage and reclaim pressure and the
stateful nature of memory, the distribution model is relatively
complicated.
1135 While not completely water-tight, all major memory usages by a given
1136 cgroup are tracked so that the total memory consumption can be
1137 accounted and controlled to a reasonable extent. Currently, the
1138 following types of memory usages are tracked.
1140 - Userland memory - page cache and anonymous memory.
1142 - Kernel data structures such as dentries and inodes.
1144 - TCP socket buffers.
1146 The above list may expand in the future for better coverage.
1149 Memory Interface Files
1150 ~~~~~~~~~~~~~~~~~~~~~~
1152 All memory amounts are in bytes. If a value which is not aligned to
1153 PAGE_SIZE is written, the value may be rounded up to the closest
1154 PAGE_SIZE multiple when read back.
memory.current
A read-only single value file which exists on non-root
cgroups.
1160 The total amount of memory currently being used by the cgroup
1161 and its descendants.
memory.min
A read-write single value file which exists on non-root
1165 cgroups. The default is "0".
1167 Hard memory protection. If the memory usage of a cgroup
1168 is within its effective min boundary, the cgroup's memory
1169 won't be reclaimed under any conditions. If there is no
unprotected reclaimable memory available, the OOM killer
1171 is invoked. Above the effective min boundary (or
1172 effective low boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
1176 Effective min boundary is limited by memory.min values of
1177 all ancestor cgroups. If there is memory.min overcommitment
1178 (child cgroup or cgroups are requiring more protected memory
1179 than parent will allow), then each child cgroup will get
1180 the part of parent's protection proportional to its
1181 actual memory usage below memory.min.
1183 Putting more memory than generally available under this
1184 protection is discouraged and may lead to constant OOMs.
1186 If a memory cgroup is not populated with processes,
1187 its memory.min is ignored.
memory.low
A read-write single value file which exists on non-root
1191 cgroups. The default is "0".
1193 Best-effort memory protection. If the memory usage of a
1194 cgroup is within its effective low boundary, the cgroup's
1195 memory won't be reclaimed unless there is no reclaimable
1196 memory available in unprotected cgroups.
1197 Above the effective low boundary (or
1198 effective min boundary if it is higher), pages are reclaimed
proportionally to the overage, reducing reclaim pressure for
smaller overages.
1202 Effective low boundary is limited by memory.low values of
1203 all ancestor cgroups. If there is memory.low overcommitment
1204 (child cgroup or cgroups are requiring more protected memory
1205 than parent will allow), then each child cgroup will get
1206 the part of parent's protection proportional to its
1207 actual memory usage below memory.low.
1209 Putting more memory than generally available under this
1210 protection is discouraged.
memory.high
A read-write single value file which exists on non-root
1214 cgroups. The default is "max".
1216 Memory usage throttle limit. If a cgroup's usage goes
1217 over the high boundary, the processes of the cgroup are
1218 throttled and put under heavy reclaim pressure.
1220 Going over the high limit never invokes the OOM killer and
1221 under extreme conditions the limit may be breached. The high
1222 limit should be used in scenarios where an external process
monitors the limited cgroup to alleviate heavy reclaim
pressure.
memory.max
A read-write single value file which exists on non-root
1228 cgroups. The default is "max".
1230 Memory usage hard limit. This is the main mechanism to limit
1231 memory usage of a cgroup. If a cgroup's memory usage reaches
1232 this limit and can't be reduced, the OOM killer is invoked in
1233 the cgroup. Under certain circumstances, the usage may go
1234 over the limit temporarily.
In the default configuration regular 0-order allocations always
succeed unless the OOM killer chooses the current task as a victim.
Some kinds of allocations don't invoke the OOM killer.
The caller could retry them differently, return into userspace
as -ENOMEM, or silently ignore them in cases like disk readahead.
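For example, a sketch giving an assumed workload cgroup a hard cap of
two gigabytes::

  # echo 2G > memory.max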
memory.reclaim
A write-only nested-keyed file which exists for all cgroups.
This is a simple interface to trigger memory reclaim in the
target cgroup.
1249 This file accepts a single key, the number of bytes to reclaim.
1250 No nested keys are currently supported.
1254 echo "1G" > memory.reclaim
1256 The interface can be later extended with nested keys to
1257 configure the reclaim behavior. For example, specify the
1258 type of memory to reclaim from (anon, file, ..).
1260 Please note that the kernel can over or under reclaim from
the target cgroup. If fewer bytes are reclaimed than the
1262 specified amount, -EAGAIN is returned.
1264 Please note that the proactive reclaim (triggered by this
1265 interface) is not meant to indicate memory pressure on the
1266 memory cgroup. Therefore socket memory balancing triggered by
1267 the memory reclaim normally is not exercised in this case.
1268 This means that the networking layer will not adapt based on
1269 reclaim induced by memory.reclaim.
memory.peak
A read-only single value file which exists on non-root
cgroups.
1275 The max memory usage recorded for the cgroup and its
1276 descendants since the creation of the cgroup.
memory.oom.group
A read-write single value file which exists on non-root
1280 cgroups. The default value is "0".
1282 Determines whether the cgroup should be treated as
1283 an indivisible workload by the OOM killer. If set,
1284 all tasks belonging to the cgroup or to its descendants
1285 (if the memory cgroup is not a leaf cgroup) are killed
1286 together or not at all. This can be used to avoid
1287 partial kills to guarantee workload integrity.
1289 Tasks with the OOM protection (oom_score_adj set to -1000)
1290 are treated as an exception and are never killed.
1292 If the OOM killer is invoked in a cgroup, it's not going
to kill any tasks outside of this cgroup, regardless of the
memory.oom.group values of ancestor cgroups.
memory.events
A read-only flat-keyed file which exists on non-root cgroups.
1298 The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.
1302 Note that all fields in this file are hierarchical and the
1303 file modified event can be generated due to an event down the
1304 hierarchy. For the local events at the cgroup level see
1305 memory.events.local.
  low
    The number of times the cgroup is reclaimed due to
1309 high memory pressure even though its usage is under
1310 the low boundary. This usually indicates that the low
1311 boundary is over-committed.
  high
    The number of times processes of the cgroup are
1315 throttled and routed to perform direct memory reclaim
1316 because the high memory boundary was exceeded. For a
1317 cgroup whose memory usage is capped by the high limit
1318 rather than global memory pressure, this event's
1319 occurrences are expected.
  max
    The number of times the cgroup's memory usage was
1323 about to go over the max boundary. If direct reclaim
1324 fails to bring it down, the cgroup goes to OOM state.
  oom
    The number of times the cgroup's memory usage
    reached the limit and allocation was about to fail.
1330 This event is not raised if the OOM killer is not
1331 considered as an option, e.g. for failed high-order
1332 allocations or if caller asked to not retry attempts.
  oom_kill
    The number of processes belonging to this cgroup
1336 killed by any kind of OOM killer.
  oom_group_kill
    The number of times a group OOM has occurred.
memory.events.local
Similar to memory.events but the fields in the file are local
1343 to the cgroup i.e. not hierarchical. The file modified event
1344 generated on this file reflects only the local events.
memory.stat
A read-only flat-keyed file which exists on non-root cgroups.
1349 This breaks down the cgroup's memory footprint into different
1350 types of memory, type-specific details, and other information
1351 on the state and past events of the memory management system.
1353 All memory amounts are in bytes.
1355 The entries are ordered to be human readable, and new entries
1356 can show up in the middle. Don't rely on items remaining in a
1357 fixed position; use the keys to look up specific values!
If an entry has no per-node counter (i.e., it does not show up in
memory.numa_stat), we use the 'npn' (non-per-node) tag
to indicate that it will not show up in memory.numa_stat.
  anon
    Amount of memory used in anonymous mappings such as
1365 brk(), sbrk(), and mmap(MAP_ANONYMOUS)
  file
    Amount of memory used to cache filesystem data,
1369 including tmpfs and shared memory.
  kernel (npn)
    Amount of total kernel memory, including
1373 (kernel_stack, pagetables, percpu, vmalloc, slab) in
1374 addition to other kernel memory use cases.
  kernel_stack
    Amount of memory allocated to kernel stacks.
  pagetables
    Amount of memory allocated for page tables.
  sec_pagetables
    Amount of memory allocated for secondary page tables,
    this currently includes KVM mmu allocations on x86
    and arm64.
  percpu (npn)
    Amount of memory used for storing per-cpu kernel
    data structures.
  sock (npn)
    Amount of memory used in network transmission buffers
  vmalloc (npn)
    Amount of memory used for vmap backed memory.
  shmem
    Amount of cached filesystem data that is swap-backed,
1399 such as tmpfs, shm segments, shared anonymous mmap()s
  zswap
    Amount of memory consumed by the zswap compression backend.
  zswapped
    Amount of application memory swapped out to zswap.
  file_mapped
    Amount of cached filesystem data mapped with mmap()
  file_dirty
    Amount of cached filesystem data that was modified but
1412 not yet written back to disk
  file_writeback
    Amount of cached filesystem data that was modified and
1416 is currently being written back to disk
  swapcached
    Amount of swap cached in memory. The swapcache is accounted
1420 against both memory and swap usage.
  anon_thp
    Amount of memory used in anonymous mappings backed by
1424 transparent hugepages
  file_thp
    Amount of cached filesystem data backed by transparent
    hugepages
  shmem_thp
    Amount of shm, tmpfs, shared anonymous mmap()s backed by
1432 transparent hugepages
1434 inactive_anon, active_anon, inactive_file, active_file, unevictable
1435 Amount of memory, swap-backed and filesystem-backed,
1436 on the internal memory management lists used by the
1437 page reclaim algorithm.
1439 As these represent internal list state (eg. shmem pages are on anon
1440 memory management lists), inactive_foo + active_foo may not be equal to
1441 the value for the foo counter, since the foo counter is type-based, not
1445 Part of "slab" that might be reclaimed, such as
1446 dentries and inodes.
1449 Part of "slab" that cannot be reclaimed on memory
  slab (npn)
    Amount of memory used for storing in-kernel data
    structures.
1456 workingset_refault_anon
1457 Number of refaults of previously evicted anonymous pages.
1459 workingset_refault_file
1460 Number of refaults of previously evicted file pages.
1462 workingset_activate_anon
    Number of refaulted anonymous pages that were immediately
    activated.
1466 workingset_activate_file
1467 Number of refaulted file pages that were immediately activated.
1469 workingset_restore_anon
1470 Number of restored anonymous pages which have been detected as
1471 an active workingset before they got reclaimed.
1473 workingset_restore_file
1474 Number of restored file pages which have been detected as an
1475 active workingset before they got reclaimed.
1477 workingset_nodereclaim
1478 Number of times a shadow node has been reclaimed
  pgscan (npn)
    Amount of scanned pages (in an inactive LRU list)
  pgsteal (npn)
    Amount of reclaimed pages
  pgscan_kswapd (npn)
    Amount of scanned pages by kswapd (in an inactive LRU list)
  pgscan_direct (npn)
    Amount of scanned pages directly (in an inactive LRU list)
1492 pgscan_khugepaged (npn)
1493 Amount of scanned pages by khugepaged (in an inactive LRU list)
1495 pgsteal_kswapd (npn)
1496 Amount of reclaimed pages by kswapd
1498 pgsteal_direct (npn)
1499 Amount of reclaimed pages directly
1501 pgsteal_khugepaged (npn)
1502 Amount of reclaimed pages by khugepaged
  pgfault (npn)
    Total number of page faults incurred
  pgmajfault (npn)
    Number of major page faults incurred
  pgrefill (npn)
    Amount of scanned pages (in an active LRU list)
  pgactivate (npn)
    Amount of pages moved to the active LRU list
  pgdeactivate (npn)
    Amount of pages moved to the inactive LRU list
  pglazyfree (npn)
    Amount of pages postponed to be freed under memory pressure
  pglazyfreed (npn)
    Amount of reclaimed lazyfree pages
1525 thp_fault_alloc (npn)
1526 Number of transparent hugepages which were allocated to satisfy
    a page fault. This counter is not present when CONFIG_TRANSPARENT_HUGEPAGE
    is not set.
1530 thp_collapse_alloc (npn)
1531 Number of transparent hugepages which were allocated to allow
1532 collapsing an existing range of pages. This counter is not
1533 present when CONFIG_TRANSPARENT_HUGEPAGE is not set.
memory.numa_stat
A read-only nested-keyed file which exists on non-root cgroups.
1538 This breaks down the cgroup's memory footprint into different
1539 types of memory, type-specific details, and other information
1540 per node on the state of the memory management system.
1542 This is useful for providing visibility into the NUMA locality
information within a memcg since the pages are allowed to be
allocated from any physical node. One of the use cases is evaluating
1545 application performance by combining this information with the
1546 application's CPU allocation.
1548 All memory amounts are in bytes.
1550 The output format of memory.numa_stat is::
1552 type N0=<bytes in node 0> N1=<bytes in node 1> ...
1554 The entries are ordered to be human readable, and new entries
1555 can show up in the middle. Don't rely on items remaining in a
1556 fixed position; use the keys to look up specific values!
1558 The entries can refer to the memory.stat.
memory.swap.current
A read-only single value file which exists on non-root
cgroups.
1564 The total amount of swap currently being used by the cgroup
1565 and its descendants.
memory.swap.high
A read-write single value file which exists on non-root
1569 cgroups. The default is "max".
1571 Swap usage throttle limit. If a cgroup's swap usage exceeds
1572 this limit, all its further allocations will be throttled to
1573 allow userspace to implement custom out-of-memory procedures.
1575 This limit marks a point of no return for the cgroup. It is NOT
1576 designed to manage the amount of swapping a workload does
1577 during regular operation. Compare to memory.swap.max, which
1578 prohibits swapping past a set amount, but lets the cgroup
1579 continue unimpeded as long as other memory can be reclaimed.
1581 Healthy workloads are not expected to reach this limit.
memory.swap.peak
A read-only single value file which exists on non-root
cgroups.
1587 The max swap usage recorded for the cgroup and its
1588 descendants since the creation of the cgroup.
memory.swap.max
A read-write single value file which exists on non-root
1592 cgroups. The default is "max".
1594 Swap usage hard limit. If a cgroup's swap usage reaches this
1595 limit, anonymous memory of the cgroup will not be swapped out.
memory.swap.events
A read-only flat-keyed file which exists on non-root cgroups.
1599 The following entries are defined. Unless specified
otherwise, a value change in this file generates a file
modified event.

  high
    The number of times the cgroup's swap usage was over
    the high threshold.

  max
    The number of times the cgroup's swap usage was about
    to go over the max boundary and swap allocation
    failed.
  fail
    The number of times swap allocation failed either
    because of running out of swap system-wide or max
    limit.
1617 When reduced under the current usage, the existing swap
1618 entries are reclaimed gradually and the swap usage may stay
1619 higher than the limit for an extended period of time. This
1620 reduces the impact on the workload and memory management.
1622 memory.zswap.current
A read-only single value file which exists on non-root
cgroups.

The total amount of memory consumed by the zswap compression
backend.
memory.zswap.max
A read-write single value file which exists on non-root
1631 cgroups. The default is "max".
1633 Zswap usage hard limit. If a cgroup's zswap pool reaches this
1634 limit, it will refuse to take any more stores before existing
1635 entries fault back in or are written out to disk.
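As a sketch (cgroup path hypothetical), the zswap pool can be capped
like any other memory knob::

  # echo 256M > /sys/fs/cgroup/workload/memory.zswap.max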
memory.pressure
A read-only nested-keyed file.
1640 Shows pressure stall information for memory. See
1641 :ref:`Documentation/accounting/psi.rst <psi>` for details.
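The file uses the standard PSI format; on a cgroup under no memory
pressure a read would look like::

  some avg10=0.00 avg60=0.00 avg300=0.00 total=0
  full avg10=0.00 avg60=0.00 avg300=0.00 total=0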
1647 "memory.high" is the main mechanism to control memory usage.
1648 Over-committing on high limit (sum of high limits > available memory)
and letting global memory pressure distribute memory according to
1650 usage is a viable strategy.
1652 Because breach of the high limit doesn't trigger the OOM killer but
1653 throttles the offending cgroup, a management agent has ample
1654 opportunities to monitor and take appropriate actions such as granting
1655 more memory or terminating the workload.
1657 Determining whether a cgroup has enough memory is not trivial as
1658 memory usage doesn't indicate whether the workload can benefit from
1659 more memory. For example, a workload which writes data received from
network to a file can use all available memory but can also perform
just as well with a small amount of memory. A measure of memory
1662 pressure - how much the workload is being impacted due to lack of
1663 memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.

Memory Ownership
~~~~~~~~~~~~~~~~
1671 A memory area is charged to the cgroup which instantiated it and stays
1672 charged to the cgroup until the area is released. Migrating a process
1673 to a different cgroup doesn't move the memory usages that it
1674 instantiated while in the previous cgroup to the new cgroup.
1676 A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is non-deterministic; however,
1678 over time, the memory area is likely to end up in a cgroup which has
1679 enough memory allowance to avoid high reclaim pressure.
1681 If a cgroup sweeps a considerable amount of memory which is expected
1682 to be accessed repeatedly by other cgroups, it may make sense to use
1683 POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1684 belonging to the affected files to ensure correct memory ownership.
1690 The "io" controller regulates the distribution of IO resources. This
1691 controller implements both weight based and absolute bandwidth or IOPS
1692 limit distribution; however, weight based distribution is available
only if cfq-iosched is in use and neither scheme is available for
blk-mq devices.
IO Interface Files
~~~~~~~~~~~~~~~~~~

io.stat
A read-only nested-keyed file.
1703 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1704 The following nested keys are defined.
====== =====================
rbytes Bytes read
1708 wbytes Bytes written
1709 rios Number of read IOs
1710 wios Number of write IOs
1711 dbytes Bytes discarded
1712 dios Number of discard IOs
1713 ====== =====================
1715 An example read output follows::
1717 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1718 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
io.cost.qos
A read-write nested-keyed file which exists only on the root
cgroup.
1724 This file configures the Quality of Service of the IO cost
1725 model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1726 currently implements "io.weight" proportional control. Lines
1727 are keyed by $MAJ:$MIN device numbers and not ordered. The
1728 line for a given device is populated on the first write for
1729 the device on "io.cost.qos" or "io.cost.model". The following
1730 nested keys are defined.
1732 ====== =====================================
1733 enable Weight-based control enable
1734 ctrl "auto" or "user"
1735 rpct Read latency percentile [0, 100]
1736 rlat Read latency threshold
1737 wpct Write latency percentile [0, 100]
1738 wlat Write latency threshold
1739 min Minimum scaling percentage [1, 10000]
1740 max Maximum scaling percentage [1, 10000]
1741 ====== =====================================
1743 The controller is disabled by default and can be enabled by
1744 setting "enable" to 1. "rpct" and "wpct" parameters default
1745 to zero and the controller uses internal device saturation
1746 state to adjust the overall IO rate between "min" and "max".
1748 When a better control quality is needed, latency QoS
1749 parameters can be configured. For example::
8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
1753 shows that on sdb, the controller is enabled, will consider
1754 the device saturated if the 95th percentile of read completion
1755 latencies is above 75ms or write 150ms, and adjust the overall
1756 IO issue rate between 50% and 150% accordingly.
1758 The lower the saturation point, the better the latency QoS at
1759 the cost of aggregate bandwidth. The narrower the allowed
1760 adjustment range between "min" and "max", the more conformant
1761 to the cost model the IO behavior. Note that the IO issue
1762 base rate may be far off from 100% and setting "min" and "max"
1763 blindly can lead to a significant loss of device capacity or
1764 control quality. "min" and "max" are useful for regulating
1765 devices which show wide temporary behavior changes - e.g. a
1766 ssd which accepts writes at the line speed for a while and
1767 then completely stalls for multiple seconds.
1769 When "ctrl" is "auto", the parameters are controlled by the
1770 kernel and may change automatically. Setting "ctrl" to "user"
1771 or setting any of the percentile and latency parameters puts
1772 it into "user" mode and disables the automatic changes. The
1773 automatic mode can be restored by setting "ctrl" to "auto".
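For instance, a plausible first step is to enable the controller on a
device and leave parameter selection to the kernel (device numbers
illustrative)::

  # echo "8:16 enable=1 ctrl=auto" > io.cost.qos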
io.cost.model
A read-write nested-keyed file which exists only on the root
cgroup.
1779 This file configures the cost model of the IO cost model based
1780 controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1781 implements "io.weight" proportional control. Lines are keyed
1782 by $MAJ:$MIN device numbers and not ordered. The line for a
1783 given device is populated on the first write for the device on
1784 "io.cost.qos" or "io.cost.model". The following nested keys
1787 ===== ================================
1788 ctrl "auto" or "user"
1789 model The cost model in use - "linear"
1790 ===== ================================
1792 When "ctrl" is "auto", the kernel may change all parameters
1793 dynamically. When "ctrl" is set to "user" or any other
parameters are written to, "ctrl" becomes "user" and the
1795 automatic changes are disabled.
1797 When "model" is "linear", the following model parameters are
1800 ============= ========================================
1801 [r|w]bps The maximum sequential IO throughput
1802 [r|w]seqiops The maximum 4k sequential IOs per second
1803 [r|w]randiops The maximum 4k random IOs per second
1804 ============= ========================================
1806 From the above, the builtin linear model determines the base
1807 costs of a sequential and random IO and the cost coefficient
1808 for the IO size. While simple, this model can cover most
1809 common device classes acceptably.
The IO cost model isn't expected to be accurate in an absolute
1812 sense and is scaled to the device behavior dynamically.
1814 If needed, tools/cgroup/iocost_coef_gen.py can be used to
1815 generate device-specific coefficients.
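As a sketch of the write side (all throughput and IOPS numbers below
are invented placeholders, not measured coefficients)::

  # echo "8:16 model=linear rbps=500000000 rseqiops=50000 rrandiops=40000 wbps=400000000 wseqiops=40000 wrandiops=30000" > io.cost.model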
io.weight
A read-write flat-keyed file which exists on non-root cgroups.
1819 The default is "default 100".
1821 The first line is the default weight applied to devices
1822 without specific override. The rest are overrides keyed by
1823 $MAJ:$MIN device numbers and not ordered. The weights are in
the range [1, 10000] and specify the relative amount of IO time
1825 the cgroup can use in relation to its siblings.
1827 The default weight can be updated by writing either "default
1828 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
1829 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
An example read output follows::

  default 100
  8:16 200
  8:0 50
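The write side accepts the same forms; for example, raising the
default weight, overriding one device, and then dropping the
override::

  # echo 200 > io.weight
  # echo "8:16 300" > io.weight
  # echo "8:16 default" > io.weight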
io.max
A read-write nested-keyed file which exists on non-root
cgroups.
1841 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
device numbers and not ordered. The following nested keys are
defined.
1845 ===== ==================================
1846 rbps Max read bytes per second
1847 wbps Max write bytes per second
1848 riops Max read IO operations per second
1849 wiops Max write IO operations per second
1850 ===== ==================================
1852 When writing, any number of nested key-value pairs can be
1853 specified in any order. "max" can be specified as the value
1854 to remove a specific limit. If the same key is specified
1855 multiple times, the outcome is undefined.
1857 BPS and IOPS are measured in each IO direction and IOs are
1858 delayed if limit is reached. Temporary bursts are allowed.
1860 Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1862 echo "8:16 rbps=2097152 wiops=120" > io.max
1864 Reading returns the following::
1866 8:16 rbps=2097152 wbps=max riops=max wiops=120
1868 Write IOPS limit can be removed by writing the following::
1870 echo "8:16 wiops=max" > io.max
1872 Reading now returns the following::
1874 8:16 rbps=2097152 wbps=max riops=max wiops=max
io.pressure
A read-only nested-keyed file.
1879 Shows pressure stall information for IO. See
1880 :ref:`Documentation/accounting/psi.rst <psi>` for details.
Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
1887 written asynchronously to the backing filesystem by the writeback
1888 mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.
1892 The io controller, in conjunction with the memory controller,
1893 implements control of page cache writeback IOs. The memory controller
defines the memory domain for which the dirty memory ratio is
calculated and maintained, and the io controller defines the io domain which
1896 writes out dirty pages for the memory domain. Both system-wide and
1897 per-cgroup dirty memory states are examined and the more restrictive
1898 of the two is enforced.
1900 cgroup writeback requires explicit support from the underlying
1901 filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
1902 btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
1903 attributed to the root cgroup.
1905 There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of writeback, an
1908 inode is assigned to a cgroup and all IO requests to write dirty pages
1909 from the inode are attributed to that cgroup.
1911 As cgroup ownership for memory is tracked per page, there can be pages
1912 which are associated with different cgroups than the one the inode is
1913 associated with. These are called foreign pages. The writeback
1914 constantly keeps track of foreign pages and, if a particular foreign
1915 cgroup becomes the majority over a certain period of time, switches
1916 the ownership of the inode to that cgroup.
1918 While this model is enough for most use cases where a given inode is
1919 mostly dirtied by a single cgroup even when the main writing cgroup
1920 changes over time, use cases where multiple cgroups write to a single
1921 inode simultaneously are not supported well. In such circumstances, a
1922 significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on first use and
1924 doesn't update it until the page is released, even if writeback
1925 strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.
1929 The sysctl knobs which affect writeback behavior are applied to cgroup
1930 writeback as follows.
1932 vm.dirty_background_ratio, vm.dirty_ratio
1933 These ratios apply the same to cgroup writeback with the
1934 amount of available memory capped by limits imposed by the
1935 memory controller and system-wide clean memory.
1937 vm.dirty_background_bytes, vm.dirty_bytes
For cgroup writeback, this is calculated as a ratio against
1939 total available memory and applied the same way as
1940 vm.dirty[_background]_ratio.
IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide a group
1947 with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a lower latency target than the
protected workload group.
1951 The limits are only applied at the peer level in the hierarchy. This means that
1952 in the diagram below, only groups A, B, and C will influence each other, and
groups D and F will influence each other. Group G will influence nobody::

                        [root]
                /          |            \
                A          B            C
               /  \        |
              D    F       G
1962 So the ideal way to configure this is to set io.latency in groups A, B, and C.
1963 Generally you do not want to set a value lower than the latency your device
1964 supports. Experiment to find the value that works best for your workload.
1965 Start at higher than the expected latency for your device and watch the
1966 avg_lat value in io.stat for your workload group to get an idea of the
1967 latency you see during normal operation. Use the avg_lat value as a basis for
1968 your real setting, setting at 10-15% higher than the value in io.stat.
1970 How IO Latency Throttling Works
1971 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1973 io.latency is work conserving; so as long as everybody is meeting their latency
1974 target the controller doesn't do anything. Once a group starts missing its
1975 target it begins throttling any peer group that has a higher target than itself.
This throttling takes two forms:

- Queue depth throttling. This is the number of outstanding IOs a group is
1979 allowed to have. We will clamp down relatively quickly, starting at no limit
1980 and going all the way down to 1 IO at a time.
1982 - Artificial delay induction. There are certain types of IO that cannot be
1983 throttled without possibly adversely affecting higher priority groups. This
1984 includes swapping and metadata IO. These types of IO are allowed to occur
1985 normally, however they are "charged" to the originating group. If the
1986 originating group is being throttled you will see the use_delay and delay
fields in io.stat increase. The delay value is the number of microseconds
being added to any process that runs in this group. Because this number can
grow quite large if there is a lot of swapping or metadata IO occurring, we
1990 limit the individual delay events to 1 second at a time.
1992 Once the victimized group starts meeting its latency target again it will start
1993 unthrottling any peer groups that were throttled previously. If the victimized
1994 group simply stops doing IO the global counter will unthrottle appropriately.
1996 IO Latency Interface Files
1997 ~~~~~~~~~~~~~~~~~~~~~~~~~~
io.latency
This takes a format similar to the other controllers.
2002 "MAJOR:MINOR target=<target time in microseconds>"
io.stat
If the controller is enabled, you will see extra stats in io.stat in
2006 addition to the normal ones.
depth
This is the current queue depth for the group.
avg_lat
This is an exponential moving average with a decay rate of 1/exp
2013 bound by the sampling interval. The decay rate interval can be
2014 calculated by multiplying the win value in io.stat by the
2015 corresponding number of samples based on the win value.
win
The sampling window size in milliseconds. This is the minimum
2019 duration of time between evaluation events. Windows only elapse
2020 with IO activity. Idle periods extend the most recent window.
IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup policy,
namely the blkio.prio.class attribute. The following values are accepted for
that attribute:
no-change
Do not modify the I/O priority class.
promote-to-rt
For requests that have a non-RT I/O priority class, change it into RT.
2034 Also change the priority level of these requests to 4. Do not modify
2035 the I/O priority of requests that have priority class RT.
restrict-to-be
For requests that do not have an I/O priority class or that have I/O
2039 priority class RT, change it into BE. Also change the priority level
2040 of these requests to 0. Do not modify the I/O priority class of
2041 requests that have priority class IDLE.
idle
Change the I/O priority class of all requests into IDLE, the lowest
I/O priority class.
none-to-rt
Deprecated. Just an alias for promote-to-rt.
2050 The following numerical values are associated with the I/O priority policies:
+----------------+---+
| no-change      | 0 |
+----------------+---+
| promote-to-rt  | 1 |
+----------------+---+
| restrict-to-be | 2 |
+----------------+---+
| idle           | 3 |
+----------------+---+
2060 The numerical value that corresponds to each I/O priority class is as follows:
2062 +-------------------------------+---+
2063 | IOPRIO_CLASS_NONE | 0 |
2064 +-------------------------------+---+
2065 | IOPRIO_CLASS_RT (real-time) | 1 |
2066 +-------------------------------+---+
2067 | IOPRIO_CLASS_BE (best effort) | 2 |
2068 +-------------------------------+---+
2069 | IOPRIO_CLASS_IDLE | 3 |
2070 +-------------------------------+---+
2072 The algorithm to set the I/O priority class for a request is as follows:
- If the I/O priority class policy is promote-to-rt, change the request I/O
  priority class to IOPRIO_CLASS_RT and change the request I/O priority
  level to 4.
- If the I/O priority class policy is not promote-to-rt, translate the I/O
  priority class policy into a number, then change the request I/O priority
  class into the maximum of the I/O priority class policy number and the
  numerical I/O priority class.
PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.
2089 The number of tasks in a cgroup can be exhausted in ways which other
2090 controllers cannot prevent, thus warranting its own controller. For
2091 example, a fork bomb is likely to exhaust the number of tasks before
2092 hitting memory restrictions.
Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.

PID Interface Files
~~~~~~~~~~~~~~~~~~~

pids.max
2102 A read-write single value file which exists on non-root
2103 cgroups. The default is "max".
2105 Hard limit of number of processes.
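For example, capping a cgroup (path hypothetical) at 1024 processes::

  # echo 1024 > /sys/fs/cgroup/services/pids.max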
pids.current
A read-only single value file which exists on all cgroups.
The number of processes currently in the cgroup and its
descendants.
2113 Organisational operations are not blocked by cgroup policies, so it is
2114 possible to have pids.current > pids.max. This can be done by either
2115 setting the limit to be smaller than pids.current, or attaching enough
2116 processes to the cgroup such that pids.current is larger than
2117 pids.max. However, it is not possible to violate a cgroup PID policy
2118 through fork() or clone(). These will return -EAGAIN if the creation
2119 of a new process would cause a cgroup policy to be violated.
2125 The "cpuset" controller provides a mechanism for constraining
2126 the CPU and memory node placement of tasks to only the resources
2127 specified in the cpuset interface files in a task's current cgroup.
2128 This is especially valuable on large NUMA systems where placing jobs
on properly sized subsets of the system with careful processor and
2130 memory placement to reduce cross-node memory access and contention
2131 can improve overall system performance.
2133 The "cpuset" controller is hierarchical. That means the controller
2134 cannot use CPUs or memory nodes not allowed in its parent.
2137 Cpuset Interface Files
2138 ~~~~~~~~~~~~~~~~~~~~~~
cpuset.cpus
A read-write multiple values file which exists on non-root
2142 cpuset-enabled cgroups.
2144 It lists the requested CPUs to be used by tasks within this
2145 cgroup. The actual list of CPUs to be granted, however, is
2146 subjected to constraints imposed by its parent and can differ
2147 from the requested CPUs.
The CPU numbers are comma-separated numbers or ranges.
For example::

  # cat cpuset.cpus
  0-4,6,8-10
2155 An empty value indicates that the cgroup is using the same
2156 setting as the nearest cgroup ancestor with a non-empty
2157 "cpuset.cpus" or all the available CPUs if none is found.
2159 The value of "cpuset.cpus" stays constant until the next update
2160 and won't be affected by any CPU hotplug events.
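A usage sketch, assuming a cpuset-enabled cgroup at the hypothetical
path /sys/fs/cgroup/rt-workload: restrict it to CPUs 0-3::

  # echo 0-3 > /sys/fs/cgroup/rt-workload/cpuset.cpus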
2162 cpuset.cpus.effective
2163 A read-only multiple values file which exists on all
2164 cpuset-enabled cgroups.
2166 It lists the onlined CPUs that are actually granted to this
2167 cgroup by its parent. These CPUs are allowed to be used by
2168 tasks within the current cgroup.
2170 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
all the CPUs from the parent cgroup that are available to
2172 be used by this cgroup. Otherwise, it should be a subset of
2173 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2174 can be granted. In this case, it will be treated just like an
2175 empty "cpuset.cpus".
2177 Its value will be affected by CPU hotplug events.
cpuset.mems
A read-write multiple values file which exists on non-root
2181 cpuset-enabled cgroups.
2183 It lists the requested memory nodes to be used by tasks within
2184 this cgroup. The actual list of memory nodes granted, however,
2185 is subjected to constraints imposed by its parent and can differ
2186 from the requested memory nodes.
The memory node numbers are comma-separated numbers or ranges.
For example::

  # cat cpuset.mems
  0-1,3
2194 An empty value indicates that the cgroup is using the same
2195 setting as the nearest cgroup ancestor with a non-empty
2196 "cpuset.mems" or all the available memory nodes if none
2199 The value of "cpuset.mems" stays constant until the next update
2200 and won't be affected by any memory nodes hotplug events.
2202 Setting a non-empty value to "cpuset.mems" causes memory of
2203 tasks within the cgroup to be migrated to the designated nodes if
2204 they are currently using memory outside of the designated nodes.
2206 There is a cost for this memory migration. The migration
2207 may not be complete and some memory pages may be left behind.
2208 So it is recommended that "cpuset.mems" should be set properly
2209 before spawning new tasks into the cpuset. Even if there is
a need to change "cpuset.mems" with active tasks, it shouldn't
be done frequently.
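Analogously (same hypothetical path as above), binding the group's
allocations to memory node 0::

  # echo 0 > /sys/fs/cgroup/rt-workload/cpuset.mems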
2213 cpuset.mems.effective
2214 A read-only multiple values file which exists on all
2215 cpuset-enabled cgroups.
2217 It lists the onlined memory nodes that are actually granted to
2218 this cgroup by its parent. These memory nodes are allowed to
2219 be used by tasks within the current cgroup.
2221 If "cpuset.mems" is empty, it shows all the memory nodes from the
2222 parent cgroup that will be available to be used by this cgroup.
2223 Otherwise, it should be a subset of "cpuset.mems" unless none of
2224 the memory nodes listed in "cpuset.mems" can be granted. In this
2225 case, it will be treated just like an empty "cpuset.mems".
2227 Its value will be affected by memory nodes hotplug events.
2229 cpuset.cpus.partition
2230 A read-write single value file which exists on non-root
2231 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2232 and is not delegatable.
2234 It accepts only the following input values when written to.
2236 ========== =====================================
2237 "member" Non-root member of a partition
2238 "root" Partition root
2239 "isolated" Partition root without load balancing
2240 ========== =====================================
2242 The root cgroup is always a partition root and its state
cannot be changed. All other non-root cgroups start out as
"member".
2246 When set to "root", the current cgroup is the root of a new
2247 partition or scheduling domain that comprises itself and all
2248 its descendants except those that are separate partition roots
2249 themselves and their descendants.
2251 When set to "isolated", the CPUs in that partition root will
2252 be in an isolated state without any load balancing from the
2253 scheduler. Tasks placed in such a partition with multiple
2254 CPUs should be carefully distributed and bound to each of the
2255 individual CPUs for optimal performance.
2257 The value shown in "cpuset.cpus.effective" of a partition root
2258 is the CPUs that the partition root can dedicate to a potential
2259 new child partition root. The new child subtracts available
2260 CPUs from its parent "cpuset.cpus.effective".
2262 A partition root ("root" or "isolated") can be in one of the
2263 two possible states - valid or invalid. An invalid partition
2264 root is in a degraded state where some state information may
2265 be retained, but behaves more like a "member".
2267 All possible state transitions among "member", "root" and
2268 "isolated" are allowed.
2270 On read, the "cpuset.cpus.partition" file can show the following
2273 ============================= =====================================
2274 "member" Non-root member of a partition
2275 "root" Partition root
2276 "isolated" Partition root without load balancing
2277 "root invalid (<reason>)" Invalid partition root
2278 "isolated invalid (<reason>)" Invalid isolated partition root
2279 ============================= =====================================
2281 In the case of an invalid partition root, a descriptive string on
2282 why the partition is invalid is included within parentheses.
2284 For a partition root to become valid, the following conditions
1) The "cpuset.cpus" is exclusive with its siblings, i.e. they
2288 are not shared by any of its siblings (exclusivity rule).
2289 2) The parent cgroup is a valid partition root.
2290 3) The "cpuset.cpus" is not empty and must contain at least
2291 one of the CPUs from parent's "cpuset.cpus", i.e. they overlap.
2292 4) The "cpuset.cpus.effective" cannot be empty unless there is
2293 no task associated with this partition.
2295 External events like hotplug or changes to "cpuset.cpus" can
2296 cause a valid partition root to become invalid and vice versa.
2297 Note that a task cannot be moved to a cgroup with empty
2298 "cpuset.cpus.effective".
2300 For a valid partition root with the sibling cpu exclusivity
2301 rule enabled, changes made to "cpuset.cpus" that violate the
2302 exclusivity rule will invalidate the partition as well as its
2303 sibling partitions with conflicting cpuset.cpus values. So
care must be taken when changing "cpuset.cpus".
2306 A valid non-root parent partition may distribute out all its CPUs
2307 to its child partitions when there is no task associated with it.
Care must be taken when changing a valid partition root to
2310 "member" as all its child partitions, if present, will become
2311 invalid causing disruption to tasks running in those child
2312 partitions. These inactivated partitions could be recovered if
2313 their parent is switched back to a partition root with a proper
2314 set of "cpuset.cpus".
2316 Poll and inotify events are triggered whenever the state of
2317 "cpuset.cpus.partition" changes. That includes changes caused
2318 by write to "cpuset.cpus.partition", cpu hotplug or other
2319 changes that modify the validity status of the partition.
2320 This will allow user space agents to monitor unexpected changes
2321 to "cpuset.cpus.partition" without the need to do continuous
Device controller
-----------------

Device controller manages access to device files. It includes both
2329 creation of new device files (using mknod), and access to the
2330 existing device files.
2332 Cgroup v2 device controller has no interface files and is implemented
2333 on top of cgroup BPF. To control access to device files, a user may
2334 create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2335 them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2336 device file, corresponding BPF programs will be executed, and depending
2337 on the return value the attempt will succeed or fail with -EPERM.
2339 A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2340 bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2341 access type (mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM, otherwise it
succeeds.
2345 An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2346 tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
2352 The "rdma" controller regulates the distribution and accounting of
2355 RDMA Interface Files
2356 ~~~~~~~~~~~~~~~~~~~~
rdma.max
A read-write nested-keyed file that exists for all the cgroups
2360 except root that describes current configured resource limit
2361 for a RDMA/IB device.
2363 Lines are keyed by device name and are not ordered.
2364 Each line contains space separated resource name and its configured
2365 limit that can be distributed.
2367 The following nested keys are defined.
2369 ========== =============================
2370 hca_handle Maximum number of HCA Handles
2371 hca_object Maximum number of HCA Objects
2372 ========== =============================
2374 An example for mlx4 and ocrdma device follows::
2376 mlx4_0 hca_handle=2 hca_object=2000
2377 ocrdma1 hca_handle=3 hca_object=max
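Limits are set with the same keyed syntax; as a sketch (device name
and values illustrative)::

  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max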
rdma.current
A read-only file that describes current resource usage.
It exists for all the cgroups except root.
2383 An example for mlx4 and ocrdma device follows::
2385 mlx4_0 hca_handle=1 hca_object=20
2386 ocrdma1 hca_handle=1 hca_object=23
HugeTLB
-------

The HugeTLB controller allows limiting the HugeTLB usage per control group
and enforces the limit at page fault time.
2394 HugeTLB Interface Files
2395 ~~~~~~~~~~~~~~~~~~~~~~~
2397 hugetlb.<hugepagesize>.current
2398 Show current usage for "hugepagesize" hugetlb. It exists for all
the cgroups except root.
2401 hugetlb.<hugepagesize>.max
2402 Set/show the hard limit of "hugepagesize" hugetlb usage.
The default value is "max". It exists for all the cgroups except root.
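For instance (path hypothetical; the 2MB page size is just one
possible <hugepagesize>), capping 2MB hugepage usage at 1G::

  # echo 1G > /sys/fs/cgroup/db/hugetlb.2MB.max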
2405 hugetlb.<hugepagesize>.events
A read-only flat-keyed file which exists on non-root cgroups.

max
The number of allocation failures due to the HugeTLB limit
2411 hugetlb.<hugepagesize>.events.local
2412 Similar to hugetlb.<hugepagesize>.events but the fields in the file
2413 are local to the cgroup i.e. not hierarchical. The file modified event
2414 generated on this file reflects only the local events.
2416 hugetlb.<hugepagesize>.numa_stat
2417 Similar to memory.numa_stat, it shows the numa information of the
hugetlb pages of <hugepagesize> in this cgroup. Only active in-use
hugetlb pages are included. The per-node values are in bytes.
Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the other
cgroup resources. The controller is enabled by the CONFIG_CGROUP_MISC config
option.
2429 A resource can be added to the controller via enum misc_res_type{} in the
2430 include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
2431 in the kernel/cgroup/misc.c file. Provider of the resource must set its
2432 capacity prior to using the resource by calling misc_cg_set_capacity().
2434 Once a capacity is set then the resource usage can be updated using charge and
2435 uncharge APIs. All of the APIs to interact with misc controller are in
2436 include/linux/misc_cgroup.h.
2438 Misc Interface Files
2439 ~~~~~~~~~~~~~~~~~~~~
The miscellaneous controller provides three interface files. If two misc
resources (res_a and res_b) are registered then:

misc.capacity
A read-only flat-keyed file shown only in the root cgroup. It shows
miscellaneous scalar resources available on the platform along with
their quantities::

  $ cat misc.capacity
  res_a 50
  res_b 10

misc.current
A read-only flat-keyed file shown in all cgroups. It shows
the current usage of the resources in the cgroup and its children::

  $ cat misc.current
  res_a 3
  res_b 0

misc.max
A read-write flat-keyed file shown in the non-root cgroups. Allowed
maximum usage of the resources in the cgroup and its children::

  $ cat misc.max
  res_a max
  res_b 4
2468 Limit can be set by::
2470 # echo res_a 1 > misc.max
2472 Limit can be set to max by::
2474 # echo res_a max > misc.max
Limits can be set higher than the capacity value in the misc.capacity
file.

misc.events
A read-only flat-keyed file which exists on non-root cgroups. The
2481 following entries are defined. Unless specified otherwise, a value
2482 change in this file generates a file modified event. All fields in
2483 this file are hierarchical.
max
The number of times the cgroup's resource usage was
2487 about to go over the max boundary.
2489 Migration and Ownership
2490 ~~~~~~~~~~~~~~~~~~~~~~~
2492 A miscellaneous scalar resource is charged to the cgroup in which it is used
2493 first, and stays charged to that cgroup until that resource is freed. Migrating
2494 a process to a different cgroup does not move the charge to the destination
2495 cgroup where the process has moved.
Others
------

perf_event
~~~~~~~~~~

perf_event controller, if not mounted on a legacy hierarchy, is
2504 automatically enabled on the v2 hierarchy so that perf events can
2505 always be filtered by cgroup v2 path. The controller can still be
2506 moved to a legacy hierarchy after v2 hierarchy is populated.
2509 Non-normative information
2510 -------------------------
2512 This section contains information that isn't considered to be a part of
2513 the stable kernel API and so is subject to change.
2516 CPU controller root cgroup process behaviour
2517 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2519 When distributing CPU cycles in the root cgroup each thread in this
2520 cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup. This child cgroup's weight is dependent on its thread's nice
level.
2524 For details of this mapping see sched_prio_to_weight array in
2525 kernel/sched/core.c file (values from this array should be scaled
2526 appropriately so the neutral - nice 0 - value is 100 instead of 1024).
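As a worked sketch of that scaling (array values quoted from
kernel/sched/core.c and worth double-checking against your tree): nice 0
maps to 1024, giving a scaled weight of 1024 * 100 / 1024 = 100, while
nice 5 maps to 335 and thus a weight of roughly 335 * 100 / 1024 ≈ 33.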
2529 IO controller root cgroup process behaviour
2530 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2532 Root cgroup processes are hosted in an implicit leaf child node.
2533 When distributing IO resources this implicit child node is taken into
2534 account as if it was a normal child cgroup of the root cgroup with a
2535 weight value of 200.
Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
2545 "/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
2546 flag can be used with clone(2) and unshare(2) to create a new cgroup
2547 namespace. The process running inside the cgroup namespace will have
2548 its "/proc/$PID/cgroup" output restricted to cgroupns root. The
2549 cgroupns root is the cgroup of the process at the time of creation of
2550 the cgroup namespace.
2552 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2553 complete path of the cgroup of a process. In a container setup where
a set of cgroups and namespaces are intended to isolate processes, the
2555 "/proc/$PID/cgroup" file may leak potential system level information
2556 to the isolated processes. For example::
2558 # cat /proc/self/cgroup
2559 0::/batchjobs/container_id1
2561 The path '/batchjobs/container_id1' can be considered as system-data
2562 and undesirable to expose to the isolated processes. cgroup namespace
2563 can be used to restrict visibility of this path. For example, before
2564 creating a cgroup namespace, one would see::
2566 # ls -l /proc/self/ns/cgroup
2567 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2568 # cat /proc/self/cgroup
2569 0::/batchjobs/container_id1
2571 After unsharing a new namespace, the view changes::
2573 # ls -l /proc/self/ns/cgroup
2574 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
# cat /proc/self/cgroup
0::/
2578 When some thread from a multi-threaded process unshares its cgroup
2579 namespace, the new cgroupns gets applied to the entire process (all
2580 the threads). This is natural for the v2 hierarchy; however, for the
2581 legacy hierarchies, this may be unexpected.
2583 A cgroup namespace is alive as long as there are processes inside or
2584 mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain, though.

The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2593 process calling unshare(2) is running. For example, if a process in
2594 /batchjobs/container_id1 cgroup calls unshare, cgroup
2595 /batchjobs/container_id1 becomes the cgroupns root. For the
2596 init_cgroup_ns, this is the real root ('/') cgroup.
2598 The cgroupns root cgroup does not change even if the namespace creator
2599 process later moves to a different cgroup::
2601 # ~/unshare -c # unshare cgroupns in some cgroup
# cat /proc/self/cgroup
0::/
# mkdir sub_cgrp_1
2605 # echo 0 > sub_cgrp_1/cgroup.procs
# cat /proc/self/cgroup
0::/sub_cgrp_1
2609 Each process gets its namespace-specific view of "/proc/$PID/cgroup"
2611 Processes running inside the cgroup namespace will be able to see
2612 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

# sleep 100000 &
[1] 7353
2617 # echo 7353 > sub_cgrp_1/cgroup.procs
# cat /proc/7353/cgroup
0::/sub_cgrp_1
From the initial cgroup namespace, the real cgroup path will be
visible::
2624 $ cat /proc/7353/cgroup
2625 0::/batchjobs/container_id1/sub_cgrp_1
2627 From a sibling cgroup namespace (that is, a namespace rooted at a
2628 different cgroup), the cgroup path relative to its own cgroup
2629 namespace root will be shown. For instance, if PID 7353's cgroup
2630 namespace root is at '/batchjobs/container_id2', then it will see::
2632 # cat /proc/7353/cgroup
2633 0::/../container_id2/sub_cgrp_1
2635 Note that the relative path always starts with '/' to indicate that
it's relative to the cgroup namespace root of the caller.
2639 Migration and setns(2)
2640 ----------------------
2642 Processes inside a cgroup namespace can move into and out of the
2643 namespace root if they have proper access to external cgroups. For
2644 example, from inside a namespace with cgroupns root at
2645 /batchjobs/container_id1, and assuming that the global hierarchy is
2646 still accessible inside cgroupns::
# cat /proc/7353/cgroup
0::/sub_cgrp_1
2650 # echo 7353 > batchjobs/container_id2/cgroup.procs
2651 # cat /proc/7353/cgroup
2652 0::/../container_id2
2654 Note that this kind of setup is not encouraged. A task inside cgroup
2655 namespace should only be exposed to its own cgroupns hierarchy.
2657 setns(2) to another cgroup namespace is allowed when:
2659 (a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
namespace's userns
2663 No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching
2665 process under the target cgroup namespace root.
2668 Interaction with Other Namespaces
2669 ---------------------------------
2671 Namespace specific cgroup hierarchy can be mounted by a process
2672 running inside a non-init cgroup namespace::
2674 # mount -t cgroup2 none $MOUNT_POINT
2676 This will mount the unified cgroup hierarchy with cgroupns root as the
filesystem root. The process needs CAP_SYS_ADMIN against its user and
mount namespaces.
The virtualization of the /proc/self/cgroup file combined with restricting
2681 the view of cgroup hierarchy by namespace-private cgroupfs mount
2682 provides a properly isolated cgroup view inside the container.
2685 Information on Kernel Programming
2686 =================================
2688 This section contains kernel programming information in the areas
2689 where interacting with cgroup is necessary. cgroup core and
2690 controllers are not covered.
2693 Filesystem Support for Writeback
2694 --------------------------------
2696 A filesystem can support cgroup writeback by updating
2697 address_space_operations->writepage[s]() to annotate bio's using the
2698 following two functions.
2700 wbc_init_bio(@wbc, @bio)
2701 Should be called for each bio carrying writeback data and
2702 associates the bio with the inode's owner cgroup and the
2703 corresponding request queue. This must be called after
a queue (device) has been associated with the bio and
before submission.
2707 wbc_account_cgroup_owner(@wbc, @page, @bytes)
2708 Should be called for each data segment being written out.
2709 While this function doesn't care exactly when it's called
2710 during the writeback session, it's the easiest and most
2711 natural to call it as data segments are added to a bio.
2713 With writeback bio's annotated, cgroup support can be enabled per
2714 super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
2715 selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.
2719 wbc_init_bio() binds the specified bio to its cgroup. Depending on
2720 the configuration, the bio may be executed at a lower priority and if
2721 the writeback session is holding shared resources, e.g. a journal
2722 entry, may lead to priority inversion. There is no one easy solution
2723 for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
2728 Deprecated v1 Core Features
2729 ===========================
2731 - Multiple hierarchies including named ones are not supported.
- None of the v1 mount options are supported.
2735 - The "tasks" file is removed and "cgroup.procs" is not sorted.
2737 - "cgroup.clone_children" is removed.
2739 - /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file
2740 at the root instead.
2743 Issues with v1 and Rationales for v2
2744 ====================================
2746 Multiple Hierarchies
2747 --------------------
2749 cgroup v1 allowed an arbitrary number of hierarchies and each
2750 hierarchy could host any number of controllers. While this seemed to
2751 provide a high level of flexibility, it wasn't useful in practice.
2753 For example, as there is only one instance of each controller, utility
2754 type controllers such as freezer which can be useful in all
2755 hierarchies could only be used in one. The issue is exacerbated by
2756 the fact that controllers couldn't be moved to another hierarchy once
2757 hierarchies were populated. Another issue was that all controllers
2758 bound to a hierarchy were forced to have exactly the same view of the
2759 hierarchy. It wasn't possible to vary the granularity depending on
2760 the specific controller.
2762 In practice, these issues heavily limited which controllers could be
2763 put on the same hierarchy and most configurations resorted to putting
2764 each controller on its own hierarchy. Only closely related ones, such
2765 as the cpu and cpuacct controllers, made sense to be put on the same
2766 hierarchy. This often meant that userland ended up managing multiple
2767 similar hierarchies repeating the same steps on each hierarchy
2768 whenever a hierarchy management operation was necessary.
2770 Furthermore, support for multiple hierarchies came at a steep cost.
2771 It greatly complicated cgroup core implementation but more importantly
2772 the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
2775 There was no limit on how many hierarchies there might be, which meant
2776 that a thread's cgroup membership couldn't be described in finite
2777 length. The key might contain any number of entries and was unlimited
2778 in length, which made it highly awkward to manipulate and led to
2779 addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of proliferating number
of hierarchies.
2783 Also, as a controller couldn't have any expectation regarding the
2784 topologies of hierarchies other controllers might be on, each
2785 controller had to assume that all other controllers were attached to
2786 completely orthogonal hierarchies. This made it impossible, or at
2787 least very cumbersome, for controllers to cooperate with each other.
2789 In most use cases, putting controllers on hierarchies which are
2790 completely orthogonal to each other isn't necessary. What usually is
2791 called for is the ability to have differing levels of granularity
2792 depending on the specific controller. In other words, hierarchy may
2793 be collapsed from leaf towards root when viewed from specific
2794 controllers. For example, a given configuration might not care about
2795 how memory is distributed beyond a certain level while still wanting
2796 to control how CPU cycles are distributed.
Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
2803 This didn't make sense for some controllers and those controllers
2804 ended up implementing different ways to ignore such situations but
2805 much more importantly it blurred the line between API exposed to
2806 individual applications and system management interface.
2808 Generally, in-process knowledge is available only to the process
2809 itself; thus, unlike service-level organization of processes,
2810 categorizing threads of a process requires active participation from
2811 the application which owns the target process.
2813 cgroup v1 had an ambiguously defined delegation model which got abused
2814 in combination with thread granularity. cgroups were delegated to
individual applications so that they could create and manage their own
2816 sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.
2820 First of all, cgroup has a fundamentally inadequate interface to be
2821 exposed this way. For a process to access its own knobs, it has to
2822 extract the path on the target hierarchy from /proc/self/cgroup,
2823 construct the path by appending the name of the knob to the path, open
2824 and then read and/or write to it. This is not only extremely clunky
2825 and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can guarantee
2827 that the process would actually be operating on its own sub-hierarchy.
2829 cgroup controllers implemented a number of knobs which would never be
2830 accepted as public APIs because they were just adding control knobs to
2831 system-management pseudo filesystem. cgroup ended up with interface
2832 knobs which were not properly abstracted or refined and directly
2833 revealed kernel internal details. These knobs got exposed to
2834 individual applications through the ill-defined delegation mechanism
2835 effectively abusing cgroup as a shortcut to implementing public APIs
2836 without going through the required scrutiny.
2838 This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel inadvertently
exposing and getting locked into constructs.
2843 Competition Between Inner Nodes and Threads
2844 -------------------------------------------
cgroup v1 allowed threads to be in any cgroup, which created an
2847 interesting problem where threads belonging to a parent cgroup and its
2848 children cgroups competed for resources. This was nasty as two
2849 different types of entities competed and there was no obvious way to
2850 settle it. Different controllers did different things.
2852 The cpu controller considered threads and cgroups as equivalents and
2853 mapped nice levels to cgroup weights. This worked for some cases but
2854 fell flat when children wanted to be allocated specific ratios of CPU
2855 cycles and the number of internal threads fluctuated - the ratios
2856 constantly changed as the number of competing entities fluctuated.
2857 There also were other issues. The mapping from nice level to weight
2858 wasn't obvious or universal, and there were various other knobs which
2859 simply weren't available for threads.
2861 The io controller implicitly created a hidden leaf node for each
2862 cgroup to host the threads. The hidden leaf had its own copies of all
2863 the knobs with ``leaf_`` prefixed. While this allowed equivalent
control over internal threads, it came with serious drawbacks. It
2865 always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.
2869 The memory controller didn't have a way to control what happened
2870 between internal tasks and child cgroups and the behavior was not
2871 clearly defined. There were attempts to add ad-hoc behaviors and
2872 knobs to tailor the behavior to specific workloads which would have
2873 led to problems extremely difficult to resolve in the long term.
2875 Multiple controllers struggled with internal tasks and came up with
2876 different ways to deal with it; unfortunately, all the approaches were
2877 severely flawed and, furthermore, the widely different behaviors
2878 made cgroup as a whole highly inconsistent.
This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.
2884 Other Interface Issues
2885 ----------------------
2887 cgroup v1 grew without oversight and developed a large number of
2888 idiosyncrasies and inconsistencies. One issue on the cgroup core side
2889 was how an empty cgroup was notified - a userland helper binary was
2890 forked and executed for each event. The event delivery wasn't
2891 recursive or delegatable. The limitations of the mechanism also led
to an in-kernel event delivery filtering mechanism, further complicating
the interface.
2895 Controller interfaces were problematic too. An extreme example is
2896 controllers completely ignoring hierarchical organization and treating
2897 all cgroups as if they were all located directly under the root
2898 cgroup. Some controllers exposed a large amount of inconsistent
2899 implementation details to userland.
2901 There also was no consistency across controllers. When a new cgroup
2902 was created, some controllers defaulted to not imposing extra
2903 restrictions while others disallowed any resource usage until
2904 explicitly configured. Configuration knobs for the same type of
2905 control used widely differing naming schemes and formats. Statistics
2906 and information knobs were named arbitrarily and used different
2907 formats and units even in the same controller.
2909 cgroup v2 establishes common conventions where appropriate and updates
2910 controllers so that they expose minimal and consistent interfaces.
2913 Controller Issues and Remedies
2914 ------------------------------
Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
2920 that is per default unset. As a result, the set of cgroups that
2921 global reclaim prefers is opt-in, rather than opt-out. The costs for
2922 optimizing these mostly negative lookups are so high that the
2923 implementation, despite its enormous size, does not even provide the
2924 basic desirable behavior. First off, the soft limit has no
2925 hierarchical meaning. All configured groups are organized in a global
rbtree and treated like equal peers, regardless of where they are located
2927 in the hierarchy. This makes subtree delegation impossible. Second,
2928 the soft limit reclaim pass is so aggressive that it not just
2929 introduces high allocation latencies into the system, but also impacts
2930 system performance due to overreclaim, to the point where the feature
2931 becomes self-defeating.
2933 The memory.low boundary on the other hand is a top-down allocated
2934 reserve. A cgroup enjoys reclaim protection when it's within its
2935 effective low, which makes delegation of subtrees possible. It also
2936 enjoys having reclaim pressure proportional to its overage when
2937 above its effective low.
2939 The original high boundary, the hard limit, is defined as a strict
2940 limit that can not budge, even if the OOM killer has to be called.
2941 But this generally goes against the goal of making the most out of the
2942 available memory. The memory consumption of workloads varies during
2943 runtime, and that requires users to overcommit. But doing that with a
2944 strict upper limit requires either a fairly accurate prediction of the
2945 working set size or adding slack to the limit. Since working set size
2946 estimation is hard and error prone, and getting it wrong results in
2947 OOM kills, most users tend to err on the side of a looser limit and
2948 end up wasting precious resources.
2950 The memory.high boundary on the other hand can be set much more
2951 conservatively. When hit, it throttles allocations by forcing them
2952 into direct reclaim to work off the excess, but it never invokes the
2953 OOM killer. As a result, a high boundary that is chosen too
2954 aggressively will not terminate the processes, but instead it will
2955 lead to gradual performance degradation. The user can monitor this
2956 and make corrections until the minimal memory footprint that still
2957 gives acceptable performance is found.
2959 In extreme cases, with many concurrent allocations and a complete
2960 breakdown of reclaim progress within the group, the high boundary can
2961 be exceeded. But even then it's mostly better to satisfy the
2962 allocation from the slack available in other groups or the rest of the
2963 system than killing the group. Otherwise, memory.max is there to
2964 limit this type of spillover and ultimately contain buggy or even
2965 malicious applications.
2967 Setting the original memory.limit_in_bytes below the current usage was
2968 subject to a race condition, where concurrent charges could cause the
2969 limit setting to fail. memory.max on the other hand will first set the
2970 limit to prevent new charges, and then reclaim and OOM kill until the
2971 new limit is met - or the task writing to memory.max is killed.
2973 The combined memory+swap accounting and limiting is replaced by real
2974 control over swap space.
2976 The main argument for a combined memory+swap facility in the original
2977 cgroup design was that global or parental pressure would always be
2978 able to swap all anonymous memory of a child group, regardless of the
2979 child's own (possibly untrusted) configuration. However, untrusted
groups can sabotage swapping by other means - such as referencing their
2981 anonymous memory in a tight loop - and an admin can not assume full
2982 swappability when overcommitting untrusted jobs.
2984 For trusted jobs, on the other hand, a combined counter is not an
2985 intuitive userspace interface, and it flies in the face of the idea
2986 that cgroup controllers should account and limit specific physical
2987 resources. Swap space is a resource like all others in the system,
2988 and that's why unified hierarchy allows distributing it separately.