:Author: Tejun Heo <tj@kernel.org>

This is the authoritative documentation on the design, interface and
conventions of cgroup v2. It describes all userland-visible aspects
of cgroup including core and specific controller behaviors. All
future changes must be reflected in this document. Documentation for
v1 is available under :ref:`Documentation/admin-guide/cgroup-v1/index.rst <cgroup-v1>`.

.. CONTENTS

   1. Introduction
     1-1. Terminology
     1-2. What is cgroup?
   2. Basic Operations
     2-1. Mounting
     2-2. Organizing Processes and Threads
       2-2-1. Processes
       2-2-2. Threads
     2-3. [Un]populated Notification
     2-4. Controlling Controllers
       2-4-1. Enabling and Disabling
       2-4-2. Top-down Constraint
       2-4-3. No Internal Process Constraint
     2-5. Delegation
       2-5-1. Model of Delegation
       2-5-2. Delegation Containment
     2-6. Guidelines
       2-6-1. Organize Once and Control
       2-6-2. Avoid Name Collisions
   3. Resource Distribution Models
     3-1. Weights
     3-2. Limits
     3-3. Protections
     3-4. Allocations
   4. Interface Files
     4-1. Format
     4-2. Conventions
     4-3. Core Interface Files
   5. Controllers
     5-1. CPU
       5-1-1. CPU Interface Files
     5-2. Memory
       5-2-1. Memory Interface Files
       5-2-2. Usage Guidelines
       5-2-3. Memory Ownership
     5-3. IO
       5-3-1. IO Interface Files
       5-3-2. Writeback
       5-3-3. IO Latency
         5-3-3-1. How IO Latency Throttling Works
         5-3-3-2. IO Latency Interface Files
       5-3-4. IO Priority
     5-4. PID
       5-4-1. PID Interface Files
     5-5. Cpuset
       5-5-1. Cpuset Interface Files
     5-6. Device controller
     5-7. RDMA
       5-7-1. RDMA Interface Files
     5-8. HugeTLB
       5-8-1. HugeTLB Interface Files
     5-9. Misc
       5-9-1. Miscellaneous cgroup Interface Files
       5-9-2. Migration and Ownership
     5-10. Others
       5-10-1. perf_event
     5-N. Non-normative information
       5-N-1. CPU controller root cgroup process behaviour
       5-N-2. IO controller root cgroup process behaviour
   6. Namespace
     6-1. Basics
     6-2. The Root and Views
     6-3. Migration and setns(2)
     6-4. Interaction with Other Namespaces
   P. Information on Kernel Programming
     P-1. Filesystem Support for Writeback
   D. Deprecated v1 Core Features
   R. Issues with v1 and Rationales for v2
     R-1. Multiple Hierarchies
     R-2. Thread Granularity
     R-3. Competition Between Inner Nodes and Threads
     R-4. Other Interface Issues
     R-5. Controller Issues and Remedies
100 "cgroup" stands for "control group" and is never capitalized. The
101 singular form is used to designate the whole feature and also as a
102 qualifier as in "cgroup controllers". When explicitly referring to
103 multiple individual control groups, the plural form "cgroups" is used.
109 cgroup is a mechanism to organize processes hierarchically and
110 distribute system resources along the hierarchy in a controlled and

cgroup is largely composed of two parts - the core and controllers.
cgroup core is primarily responsible for hierarchically organizing
processes. A cgroup controller is usually responsible for
distributing a specific type of system resource along the hierarchy
although there are utility controllers which serve purposes other than
resource distribution.

cgroups form a tree structure and every process in the system belongs
to one and only one cgroup. All threads of a process belong to the
same cgroup. On creation, all processes are put in the cgroup that
the parent process belongs to at the time. A process can be migrated
to another cgroup. Migration of a process doesn't affect already
existing descendant processes.

Following certain structural constraints, controllers may be enabled or
disabled selectively on a cgroup. All controller behaviors are
hierarchical - if a controller is enabled on a cgroup, it affects all
processes which belong to the cgroups constituting the inclusive
sub-hierarchy of the cgroup. When a controller is enabled on a nested
cgroup, it always restricts the resource distribution further. The
restrictions set closer to the root in the hierarchy can not be
overridden from further away.

Basic Operations
================

Mounting
--------

Unlike v1, cgroup v2 has only a single hierarchy. The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT

The cgroup2 filesystem has the magic number 0x63677270 ("cgrp"). All
controllers which support v2 and are not bound to a v1 hierarchy are
automatically bound to the v2 hierarchy and show up at the root.
Controllers which are not in active use in the v2 hierarchy can be
bound to other hierarchies. This allows mixing the v2 hierarchy with
the legacy v1 multiple hierarchies in a fully backward compatible way.

A controller can be moved across hierarchies only after the controller
is no longer referenced in its current hierarchy. Because per-cgroup
controller states are destroyed asynchronously and controllers may
have lingering references, a controller may not show up immediately on
the v2 hierarchy after the final umount of the previous hierarchy.
Similarly, a controller should be fully disabled to be moved out of
the unified hierarchy and it may take some time for the disabled
controller to become available for other hierarchies; furthermore, due
to inter-controller dependencies, other controllers may need to be
disabled.

While useful for development and manual configurations, moving
controllers dynamically between the v2 and other hierarchies is
strongly discouraged for production use. It is recommended to decide
the hierarchies and controller associations before starting to use the
controllers after system boot.

During transition to v2, system management software might still
automount the v1 cgroup filesystem and so hijack all controllers
during boot, before manual intervention is possible. To make testing
and experimenting easier, the kernel parameter cgroup_no_v1= allows
disabling controllers in v1 and makes them always available in v2.

cgroup v2 currently supports the following mount options.

  nsdelegate
        Consider cgroup namespaces as delegation boundaries. This
        option is system wide and can only be set on mount or modified
        through remount from the init namespace. The mount option is
        ignored on non-init namespace mounts. Please refer to the
        Delegation section for details.

  favordynmods
        Reduce the latencies of dynamic cgroup modifications such as
        task migrations and controller on/offs at the cost of making
        hot path operations such as forks and exits more expensive.
        The static usage pattern of creating a cgroup, enabling
        controllers, and then seeding it with CLONE_INTO_CGROUP is
        not affected by this option.

  memory_localevents
        Only populate memory.events with data for the current cgroup,
        and not any subtrees. This is the legacy behaviour; the default
        behaviour without this option is to include subtree counts.
        This option is system wide and can only be set on mount or
        modified through remount from the init namespace. The mount
        option is ignored on non-init namespace mounts.

  memory_recursiveprot
        Recursively apply memory.min and memory.low protection to
        entire subtrees, without requiring explicit downward
        propagation into leaf cgroups. This allows protecting entire
        subtrees from one another, while retaining free competition
        within those subtrees. This should have been the default
        behavior but is a mount-option to avoid regressing setups
        relying on the original semantics (e.g. specifying bogusly
        high 'bypass' protection values at higher tree levels).
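
For example, a setup that wants namespace-based delegation and
recursive memory protection might mount the hierarchy as follows (the
mount point and option choice are illustrative)::

  # mount -t cgroup2 -o nsdelegate,memory_recursiveprot none /sys/fs/cgroup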

Organizing Processes and Threads
--------------------------------

Processes
~~~~~~~~~

Initially, only the root cgroup exists to which all processes belong.
A child cgroup can be created by creating a sub-directory::

  # mkdir $CGROUP_NAME

A given cgroup may have multiple child cgroups forming a tree
structure. Each cgroup has a read-writable interface file
"cgroup.procs". When read, it lists the PIDs of all processes which
belong to the cgroup one-per-line. The PIDs are not ordered and the
same PID may show up more than once if the process got moved to
another cgroup and then back or the PID got recycled while reading.

A process can be migrated into a cgroup by writing its PID to the
target cgroup's "cgroup.procs" file. Only one process can be migrated
on a single write(2) call. If a process is composed of multiple
threads, writing the PID of any thread migrates all threads of the
process.
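
For example, assuming the hierarchy is mounted at /sys/fs/cgroup and
"target" is a hypothetical destination cgroup, a process can be moved
with::

  # echo $PID > /sys/fs/cgroup/target/cgroup.procs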

When a process forks a child process, the new process is born into the
cgroup that the forking process belongs to at the time of the
operation. After exit, a process stays associated with the cgroup
that it belonged to at the time of exit until it's reaped; however, a
zombie process does not appear in "cgroup.procs" and thus can't be
moved to another cgroup.

A cgroup which doesn't have any children or live processes can be
destroyed by removing the directory. Note that a cgroup which doesn't
have any children and is associated only with zombie processes is
considered empty and can be removed::

  # rmdir $CGROUP_NAME
252 "/proc/$PID/cgroup" lists a process's cgroup membership. If legacy
253 cgroup is in use in the system, this file may contain multiple lines,
254 one for each hierarchy. The entry for cgroup v2 is always in the
257 # cat /proc/842/cgroup
259 0::/test-cgroup/test-cgroup-nested
261 If the process becomes a zombie and the cgroup it was associated with
262 is removed subsequently, " (deleted)" is appended to the path::
264 # cat /proc/842/cgroup
266 0::/test-cgroup/test-cgroup-nested (deleted)

Threads
~~~~~~~

cgroup v2 supports thread granularity for a subset of controllers to
support use cases requiring hierarchical resource distribution across
the threads of a group of processes. By default, all threads of a
process belong to the same cgroup, which also serves as the resource
domain to host resource consumptions which are not specific to a
process or thread. The thread mode allows threads to be spread across
a subtree while still maintaining the common resource domain for them.

Controllers which support thread mode are called threaded controllers.
The ones which don't are called domain controllers.

Marking a cgroup threaded makes it join the resource domain of its
parent as a threaded cgroup. The parent may be another threaded
cgroup whose resource domain is further up in the hierarchy. The root
of a threaded subtree, that is, the nearest ancestor which is not
threaded, is called threaded domain or thread root interchangeably and
serves as the resource domain for the entire subtree.

Inside a threaded subtree, threads of a process can be put in
different cgroups and are not subject to the no internal process
constraint - threaded controllers can be enabled on non-leaf cgroups
whether they have threads in them or not.

As the threaded domain cgroup hosts all the domain resource
consumptions of the subtree, it is considered to have internal
resource consumptions whether there are processes in it or not and
can't have populated child cgroups which aren't threaded. Because the
root cgroup is not subject to the no internal process constraint, it
can serve both as a threaded domain and a parent to domain cgroups.

The current operation mode or type of the cgroup is shown in the
"cgroup.type" file which indicates whether the cgroup is a normal
domain, a domain which is serving as the domain of a threaded subtree,
or a threaded cgroup.

On creation, a cgroup is always a domain cgroup and can be made
threaded by writing "threaded" to the "cgroup.type" file. The
operation is one-way::

  # echo threaded > cgroup.type

Once threaded, the cgroup can't be made a domain again. To enable the
thread mode, the following conditions must be met.

- As the cgroup will join the parent's resource domain, the parent
  must either be a valid (threaded) domain or a threaded cgroup.

- When the parent is an unthreaded domain, it must not have any domain
  controllers enabled or populated domain children. The root is
  exempt from this requirement.

Topology-wise, a cgroup can be in an invalid state. Please consider
the following topology::

  A (threaded domain) - B (threaded) - C (domain, just created)

C is created as a domain but isn't connected to a parent which can
host child domains. C can't be used until it is turned into a
threaded cgroup. The "cgroup.type" file will report "domain (invalid)"
in these cases. Operations which fail due to invalid topology use
EOPNOTSUPP as the errno.

A domain cgroup is turned into a threaded domain when one of its child
cgroups becomes threaded or threaded controllers are enabled in the
"cgroup.subtree_control" file while there are processes in the cgroup.
A threaded domain reverts to a normal domain when the conditions
clear.
340 When read, "cgroup.threads" contains the list of the thread IDs of all
341 threads in the cgroup. Except that the operations are per-thread
342 instead of per-process, "cgroup.threads" has the same format and
343 behaves the same way as "cgroup.procs". While "cgroup.threads" can be
344 written to in any cgroup, as it can only move threads inside the same
345 threaded domain, its operations are confined inside each threaded

The threaded domain cgroup serves as the resource domain for the whole
subtree, and, while the threads can be scattered across the subtree,
all the processes are considered to be in the threaded domain cgroup.
"cgroup.procs" in a threaded domain cgroup contains the PIDs of all
processes in the subtree and is not readable in the subtree proper.
However, "cgroup.procs" can be written to from anywhere in the subtree
to migrate all threads of the matching process to the cgroup.

Only threaded controllers can be enabled in a threaded subtree. When
a threaded controller is enabled inside a threaded subtree, it only
accounts for and controls resource consumptions associated with the
threads in the cgroup and its descendants. All consumptions which
aren't tied to a specific thread belong to the threaded domain cgroup.

Because a threaded subtree is exempt from the no internal process
constraint, a threaded controller must be able to handle competition
between threads in a non-leaf cgroup and its child cgroups. Each
threaded controller defines how such competitions are handled.
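
As an illustrative sketch (the cgroup names are hypothetical), a
threaded subtree can be created by marking a fresh child threaded,
which turns its parent into the threaded domain::

  # mkdir tdom tdom/t1
  # echo threaded > tdom/t1/cgroup.type
  # cat tdom/t1/cgroup.type
  threaded
  # cat tdom/cgroup.type
  domain threaded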

[Un]populated Notification
--------------------------

Each non-root cgroup has a "cgroup.events" file which contains a
"populated" field indicating whether the cgroup's sub-hierarchy has
live processes in it. Its value is 0 if there is no live process in
the cgroup and its descendants; otherwise, 1. poll and [id]notify
events are triggered when the value changes. This can be used, for
example, to start a clean-up operation after all processes of a given
sub-hierarchy have exited. The populated state updates and
notifications are recursive. Consider the following sub-hierarchy
where the numbers in the parentheses represent the numbers of processes
in each cgroup::

  A(4) - B(0) - C(1)
              \ D(0)

A, B and C's "populated" fields would be 1 while D's 0. After the one
process in C exits, B and C's "populated" fields would flip to "0" and
file modified events will be generated on the "cgroup.events" files of
both cgroups.
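
For example, a management agent could wait for the notification with
the inotifywait(1) utility from inotify-tools, assuming it is
available, and start cleanup once "populated" reads 0::

  # inotifywait -e modify /sys/fs/cgroup/A/cgroup.events
  # grep populated /sys/fs/cgroup/A/cgroup.events
  populated 0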

Controlling Controllers
-----------------------

Enabling and Disabling
~~~~~~~~~~~~~~~~~~~~~~

Each cgroup has a "cgroup.controllers" file which lists all
controllers available for the cgroup to enable::

  # cat cgroup.controllers
  cpu io memory

No controller is enabled by default. Controllers can be enabled and
disabled by writing to the "cgroup.subtree_control" file::

  # echo "+cpu +memory -io" > cgroup.subtree_control

Only controllers which are listed in "cgroup.controllers" can be
enabled. When multiple operations are specified as above, either they
all succeed or all fail. If multiple operations on the same controller
are specified, the last one is effective.

Enabling a controller in a cgroup indicates that the distribution of
the target resource across its immediate children will be controlled.
Consider the following sub-hierarchy. The enabled controllers are
listed in parentheses::

  A(cpu,memory) - B(memory) - C()
                            \ D()

As A has "cpu" and "memory" enabled, A will control the distribution
of CPU cycles and memory to its children, in this case, B. As B has
"memory" enabled but not "cpu", C and D will compete freely on CPU
cycles but their division of memory available to B will be controlled.

As a controller regulates the distribution of the target resource to
the cgroup's children, enabling it creates the controller's interface
files in the child cgroups. In the above example, enabling "cpu" on B
would create the "cpu." prefixed controller interface files in C and
D. Likewise, disabling "memory" from B would remove the "memory."
prefixed controller interface files from C and D. This means that the
controller interface files - anything which doesn't start with
"cgroup." - are owned by the parent rather than the cgroup itself.

Top-down Constraint
~~~~~~~~~~~~~~~~~~~

Resources are distributed top-down and a cgroup can further distribute
a resource only if the resource has been distributed to it from the
parent. This means that all non-root "cgroup.subtree_control" files
can only contain controllers which are enabled in the parent's
"cgroup.subtree_control" file. A controller can be enabled only if
the parent has the controller enabled and a controller can't be
disabled if one or more children have it enabled.

No Internal Process Constraint
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Non-root cgroups can distribute domain resources to their children
only when they don't have any processes of their own. In other words,
only domain cgroups which don't contain any processes can have domain
controllers enabled in their "cgroup.subtree_control" files.

This guarantees that, when a domain controller is looking at the part
of the hierarchy which has it enabled, processes are always only on
the leaves. This rules out situations where child cgroups compete
against internal processes of the parent.

The root cgroup is exempt from this restriction. Root contains
processes and anonymous resource consumption which can't be associated
with any other cgroups and requires special treatment from most
controllers. How resource consumption in the root cgroup is governed
is up to each controller (for more information on this topic please
refer to the Non-normative information section in the Controllers
chapter).

Note that the restriction doesn't get in the way if there is no
enabled controller in the cgroup's "cgroup.subtree_control". This is
important as otherwise it wouldn't be possible to create children of a
populated cgroup. To control resource distribution of a cgroup, the
cgroup must create children and transfer all its processes to the
children before enabling controllers in its "cgroup.subtree_control"
file.
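
A minimal sketch of this workflow, using a hypothetical child cgroup
named "leaf", might look like::

  # mkdir leaf
  # for pid in $(cat cgroup.procs); do echo "$pid" > leaf/cgroup.procs; done
  # echo "+memory" > cgroup.subtree_control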

Delegation
----------

Model of Delegation
~~~~~~~~~~~~~~~~~~~

A cgroup can be delegated in two ways. First, to a less privileged
user by granting write access of the directory and its "cgroup.procs",
"cgroup.threads" and "cgroup.subtree_control" files to the user.
Second, if the "nsdelegate" mount option is set, automatically to a
cgroup namespace on namespace creation.

Because the resource control interface files in a given directory
control the distribution of the parent's resources, the delegatee
shouldn't be allowed to write to them. For the first method, this is
achieved by not granting access to these files. For the second, the
kernel rejects writes to all files other than "cgroup.procs" and
"cgroup.subtree_control" on a namespace root from inside the
namespace.

The end results are equivalent for both delegation types. Once
delegated, the user can build sub-hierarchy under the directory,
organize processes inside it as it sees fit and further distribute the
resources it received from the parent. The limits and other settings
of all resource controllers are hierarchical and regardless of what
happens in the delegated sub-hierarchy, nothing can escape the
resource restrictions imposed by the parent.

Currently, cgroup doesn't impose any restrictions on the number of
cgroups in or nesting depth of a delegated sub-hierarchy; however,
this may be limited explicitly in the future.
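
For the first delegation method, a sketch of the setup (the directory
name and the user U0 are hypothetical) could be::

  # mkdir /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated
  # chown U0 /sys/fs/cgroup/delegated/cgroup.procs
  # chown U0 /sys/fs/cgroup/delegated/cgroup.threads
  # chown U0 /sys/fs/cgroup/delegated/cgroup.subtree_control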

Delegation Containment
~~~~~~~~~~~~~~~~~~~~~~

A delegated sub-hierarchy is contained in the sense that processes
can't be moved into or out of the sub-hierarchy by the delegatee.

For delegations to a less privileged user, this is achieved by
requiring the following conditions for a process with a non-root euid
to migrate a target process into a cgroup by writing its PID to the
"cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file.

- The writer must have write access to the "cgroup.procs" file of the
  common ancestor of the source and destination cgroups.

The above two constraints ensure that while a delegatee may migrate
processes around freely in the delegated sub-hierarchy it can't pull
in from or push out to outside the sub-hierarchy.

For an example, let's assume cgroups C0 and C1 have been delegated to
user U0 who created C00, C01 under C0 and C10 under C1 as follows and
all processes under C0 and C1 belong to U0::

  ~~~~~~~~~~~~~ - C0 - C00
  ~ cgroup    ~      \ C01
  ~ hierarchy ~
  ~~~~~~~~~~~~~ - C1 - C10

Let's also say U0 wants to write the PID of a process which is
currently in C10 into "C00/cgroup.procs". U0 has write access to the
file; however, the common ancestor of the source cgroup C10 and the
destination cgroup C00 is above the points of delegation and U0 would
not have write access to its "cgroup.procs" files and thus the write
will be denied with -EACCES.

For delegations to namespaces, containment is achieved by requiring
that both the source and destination cgroups are reachable from the
namespace of the process which is attempting the migration. If either
is not reachable, the migration is rejected with -ENOENT.

Guidelines
----------

Organize Once and Control
~~~~~~~~~~~~~~~~~~~~~~~~~

Migrating a process across cgroups is a relatively expensive operation
and stateful resources such as memory are not moved together with the
process. This is an explicit design decision as there often exist
inherent trade-offs between migration and various hot paths in terms
of synchronization cost.

As such, migrating processes across cgroups frequently as a means to
apply different resource restrictions is discouraged. A workload
should be assigned to a cgroup according to the system's logical and
resource structure once on start-up. Dynamic adjustments to resource
distribution can be made by changing controller configuration through
the interface files.

Avoid Name Collisions
~~~~~~~~~~~~~~~~~~~~~

Interface files for a cgroup and its children cgroups occupy the same
directory and it is possible to create children cgroups which collide
with interface files.

All cgroup core interface files are prefixed with "cgroup." and each
controller's interface files are prefixed with the controller name and
a dot. A controller's name is composed of lowercase letters and
'_'s but never begins with an '_' so it can be used as the prefix
character for collision avoidance. Also, interface file names won't
start or end with terms which are often used in categorizing workloads
such as job, service, slice, unit or workload.

cgroup doesn't do anything to prevent name collisions and it's the
user's responsibility to avoid them.

Resource Distribution Models
============================

cgroup controllers implement several resource distribution schemes
depending on the resource type and expected use cases. This section
describes major schemes in use along with their expected behaviors.

Weights
-------

A parent's resource is distributed by adding up the weights of all
active children and giving each the fraction matching the ratio of its
weight against the sum. As only children which can make use of the
resource at the moment participate in the distribution, this is
work-conserving. Due to the dynamic nature, this model is usually
used for stateless resources.

All weights are in the range [1, 10000] with the default at 100. This
allows symmetric multiplicative biases in both directions at fine
enough granularity while staying in the intuitive range.

As long as the weight is in range, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"cpu.weight" proportionally distributes CPU cycles to active children
and is an example of this type.
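
For example, with two always-runnable sibling cgroups A and B
configured as below, A would receive roughly two thirds of the CPU
cycles (200 / (200 + 100)) and B the remaining third::

  # echo 200 > A/cpu.weight
  # echo 100 > B/cpu.weight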

.. _cgroupv2-limits-distributor:

Limits
------

A child can only consume up to the configured amount of the resource.
Limits can be over-committed - the sum of the limits of children can
exceed the amount of resource available to the parent.

Limits are in the range [0, max] and default to "max", which is noop.

As limits can be over-committed, all configuration combinations are
valid and there is no reason to reject configuration changes or
process migrations.

"io.max" limits the maximum BPS and/or IOPS that a cgroup can consume
on an IO device and is an example of this type.
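
For instance, a write like the following - device numbers and values
are illustrative, and the nested keys are the ones defined by the io
controller - caps reads to 2 MB/s and write IOs to 120 per second on
one device::

  # echo "8:16 rbps=2097152 wiops=120" > io.max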

.. _cgroupv2-protections-distributor:

Protections
-----------

A cgroup is protected up to the configured amount of the resource
as long as the usages of all its ancestors are under their
protected levels. Protections can be hard guarantees or best effort
soft boundaries. Protections can also be over-committed in which case
only up to the amount available to the parent is protected among
children.

Protections are in the range [0, max] and default to 0, which is
noop.

As protections can be over-committed, all configuration combinations
are valid and there is no reason to reject configuration changes or
process migrations.

"memory.low" implements best-effort memory protection and is an
example of this type.

Allocations
-----------

A cgroup is exclusively allocated a certain amount of a finite
resource. Allocations can't be over-committed - the sum of the
allocations of children can not exceed the amount of resource
available to the parent.

Allocations are in the range [0, max] and default to 0, which is no
resource at all.

As allocations can't be over-committed, some configuration
combinations are invalid and should be rejected. Also, if the
resource is mandatory for execution of processes, process migrations
may be rejected.

"cpu.rt.max" hard-allocates realtime slices and is an example of this
type.

Interface Files
===============

Format
------

All interface files should be in one of the following formats whenever
possible::

  New-line separated values
  (when only one value can be written at once)

        VAL0\n
        VAL1\n
        ...

  Space separated values
  (when read-only or multiple values can be written at once)

        VAL0 VAL1 ...\n

  Flat keyed

        KEY0 VAL0\n
        KEY1 VAL1\n
        ...

  Nested keyed

        KEY0 SUB_KEY0=VAL00 SUB_KEY1=VAL01...
        KEY1 SUB_KEY0=VAL10 SUB_KEY1=VAL11...
        ...

For a writable file, the format for writing should generally match
reading; however, controllers may allow omitting later fields or
implement restricted shortcuts for most common use cases.

For both flat and nested keyed files, only the values for a single key
can be written at a time. For nested keyed files, the sub key pairs
may be specified in any order and not all pairs have to be specified.

Conventions
-----------

- Settings for a single feature should be contained in a single file.

- The root cgroup should be exempt from resource control and thus
  shouldn't have resource control interface files.

- The default time unit is microseconds. If a different unit is ever
  used, an explicit unit suffix must be present.

- A parts-per quantity should use a percentage decimal with at least
  two-digit fractional part - e.g. 13.40.

- If a controller implements weight based resource distribution, its
  interface file should be named "weight" and have the range [1,
  10000] with 100 as the default. The values are chosen to allow
  enough and symmetric bias in both directions while keeping it
  intuitive (the default is 100%).

- If a controller implements an absolute resource guarantee and/or
  limit, the interface files should be named "min" and "max"
  respectively. If a controller implements best effort resource
  guarantee and/or limit, the interface files should be named "low"
  and "high" respectively.

  In the above four control files, the special token "max" should be
  used to represent upward infinity for both reading and writing.

- If a setting has a configurable default value and keyed specific
  overrides, the default entry should be keyed with "default" and
  appear as the first entry in the file.

  The default value can be updated by writing either "default $VAL" or
  "$VAL".

  When writing to update a specific override, "default" can be used as
  the value to indicate removal of the override. Override entries
  with "default" as the value must not appear when read.

  For example, a setting which is keyed by major:minor device numbers
  with integer values may look like the following::

    # cat cgroup-example-interface-file
    default 150
    8:0 300

  The default value can be updated by::

    # echo 125 > cgroup-example-interface-file

  or the following::

    # echo "default 125" > cgroup-example-interface-file

  An override can be set by::

    # echo "8:16 170" > cgroup-example-interface-file

  and cleared by::

    # echo "8:0 default" > cgroup-example-interface-file
    # cat cgroup-example-interface-file
    default 125
    8:16 170

- For events which are not very high frequency, an interface file
  "events" should be created which lists event key value pairs.
  Whenever a notifiable event happens, a file modified event should
  be generated on the file.

Core Interface Files
--------------------

All cgroup core files are prefixed with "cgroup."

  cgroup.type
        A read-write single value file which exists on non-root
        cgroups.

        When read, it indicates the current type of the cgroup, which
        can be one of the following values.

        - "domain" : A normal valid domain cgroup.

        - "domain threaded" : A threaded domain cgroup which is
          serving as the root of a threaded subtree.

        - "domain invalid" : A cgroup which is in an invalid state.
          It can't be populated or have controllers enabled. It may
          be allowed to become a threaded cgroup.

        - "threaded" : A threaded cgroup which is a member of a
          threaded subtree.

        A cgroup can be turned into a threaded cgroup by writing
        "threaded" to this file.

  cgroup.procs
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the PIDs of all processes which belong to
        the cgroup one-per-line. The PIDs are not ordered and the
        same PID may show up more than once if the process got moved
        to another cgroup and then back or the PID got recycled while
        reading.

        A PID can be written to migrate the process associated with
        the PID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.procs" file.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

        In a threaded cgroup, reading this file fails with EOPNOTSUPP
        as all the processes belong to the thread root. Writing is
        supported and moves every thread of the process to the cgroup.

  cgroup.threads
        A read-write new-line separated values file which exists on
        all cgroups.

        When read, it lists the TIDs of all threads which belong to
        the cgroup one-per-line. The TIDs are not ordered and the
        same TID may show up more than once if the thread got moved to
        another cgroup and then back or the TID got recycled while
        reading.

        A TID can be written to migrate the thread associated with the
        TID to the cgroup. The writer should match all of the
        following conditions.

        - It must have write access to the "cgroup.threads" file.

        - The cgroup that the thread is currently in must be in the
          same resource domain as the destination cgroup.

        - It must have write access to the "cgroup.procs" file of the
          common ancestor of the source and destination cgroups.

        When delegating a sub-hierarchy, write access to this file
        should be granted along with the containing directory.

  cgroup.controllers
        A read-only space separated values file which exists on all
        cgroups.

        It shows space separated list of all controllers available to
        the cgroup. The controllers are not ordered.

  cgroup.subtree_control
        A read-write space separated values file which exists on all
        cgroups. Starts out empty.

        When read, it shows space separated list of the controllers
        which are enabled to control resource distribution from the
        cgroup to its children.

        Space separated list of controllers prefixed with '+' or '-'
        can be written to enable or disable controllers. A controller
        name prefixed with '+' enables the controller and '-'
        disables. If a controller appears more than once on the list,
        the last one is effective. When multiple enable and disable
        operations are specified, either all succeed or all fail.

  cgroup.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          populated
                1 if the cgroup or its descendants contains any live
                processes; otherwise, 0.
          frozen
                1 if the cgroup is frozen; otherwise, 0.

  cgroup.max.descendants
        A read-write single value file. The default is "max".

        Maximum allowed number of descendant cgroups.
        If the actual number of descendants is equal to or larger,
        an attempt to create a new cgroup in the hierarchy will fail.

  cgroup.max.depth
        A read-write single value file. The default is "max".

        Maximum allowed descent depth below the current cgroup.
        If the actual descent depth is equal to or larger,
        an attempt to create a new child cgroup will fail.
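
For example, a delegated subtree could be capped at a hundred cgroups
and ten levels of nesting (the values are illustrative)::

  # echo 100 > cgroup.max.descendants
  # echo 10 > cgroup.max.depth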

  cgroup.stat
        A read-only flat-keyed file with the following entries:

          nr_descendants
                Total number of visible descendant cgroups.

          nr_dying_descendants
                Total number of dying descendant cgroups. A cgroup
                becomes dying after being deleted by a user. The
                cgroup will remain in the dying state for an undefined
                amount of time (which can depend on system load)
                before being completely destroyed.

                A process can't enter a dying cgroup under any
                circumstances, and a dying cgroup can't revive.

                A dying cgroup can consume system resources not
                exceeding limits, which were active at the moment of
                cgroup deletion.

  cgroup.freeze
        A read-write single value file which exists on non-root cgroups.
        Allowed values are "0" and "1". The default is "0".

        Writing "1" to the file causes freezing of the cgroup and all
        descendant cgroups. This means that all belonging processes will
        be stopped and will not run until the cgroup is explicitly
        unfrozen. Freezing of the cgroup may take some time; when this action
        is completed, the "frozen" value in the cgroup.events control file
        will be updated to "1" and the corresponding notification will be
        issued.

        A cgroup can be frozen either by its own settings, or by settings
        of any ancestor cgroups. If any of the ancestor cgroups is frozen,
        the cgroup will remain frozen.

        Processes in the frozen cgroup can be killed by a fatal signal.
        They also can enter and leave a frozen cgroup: either by an explicit
        move by a user, or if freezing of the cgroup races with fork().
        If a process is moved to a frozen cgroup, it stops. If a process is
        moved out of a frozen cgroup, it becomes running.

        The frozen status of a cgroup doesn't affect any cgroup tree
        operations: it's possible to delete a frozen (and empty) cgroup,
        as well as create new sub-cgroups.
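
A short sketch of freezing a cgroup and verifying the state through
"cgroup.events" (the populated value depends on the cgroup's
contents)::

  # echo 1 > cgroup.freeze
  # cat cgroup.events
  populated 1
  frozen 1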

  cgroup.kill
        A write-only single value file which exists in non-root cgroups.
        The only allowed value is "1".

        Writing "1" to the file causes the cgroup and all descendant
        cgroups to be killed. This means that all processes located in
        the affected cgroup tree will be killed via SIGKILL.

        Killing a cgroup tree will deal with concurrent forks
        appropriately and is protected against migrations.

        In a threaded cgroup, writing this file fails with EOPNOTSUPP as
        killing cgroups is a process directed operation, i.e. it affects
        the whole thread-group.

  cgroup.pressure
        A read-write single value file. Allowed values are "0" and "1".
        The default is "1".

        Writing "0" to the file will disable the cgroup PSI accounting.
        Writing "1" to the file will re-enable the cgroup PSI accounting.

        This control attribute is not hierarchical, so disabling or
        enabling PSI accounting in a cgroup does not affect PSI
        accounting in descendants and doesn't need to pass enablement
        via ancestors from the root.

        The reason this control attribute exists is that PSI accounts
        stalls for each cgroup separately and aggregates it at each
        level of the hierarchy. This may cause non-negligible overhead
        for some workloads under a deep level of the hierarchy, in
        which case this control attribute can be used to disable PSI
        accounting in the non-leaf cgroups.

  irq.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for IRQ/SOFTIRQ. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.
1013 The "cpu" controllers regulates distribution of CPU cycles. This
1014 controller implements weight and absolute bandwidth limit models for
1015 normal scheduling policy and absolute bandwidth allocation model for
1016 realtime scheduling policy.
1018 In all the above models, cycles distribution is defined only on a temporal
1019 base and it does not account for the frequency at which tasks are executed.
1020 The (optional) utilization clamping support allows to hint the schedutil
1021 cpufreq governor about the minimum desired frequency which should always be
1022 provided by a CPU, as well as the maximum desired frequency, which should not
1023 be exceeded by a CPU.

WARNING: cgroup2 doesn't yet support control of realtime processes and
the cpu controller can only be enabled when all RT processes are in
the root cgroup. Be aware that system management software may already
have placed RT processes into nonroot cgroups during the system boot
process, and these processes may need to be moved to the root cgroup
before the cpu controller can be enabled.

CPU Interface Files
~~~~~~~~~~~~~~~~~~~

All time durations are in microseconds.

  cpu.stat
        A read-only flat-keyed file.
        This file exists whether the controller is enabled or not.

        It always reports the following three stats:

        - usage_usec
        - user_usec
        - system_usec

        and the following three when the controller is enabled:

        - nr_periods
        - nr_throttled
        - throttled_usec

  cpu.weight
        A read-write single value file which exists on non-root
        cgroups. The default is "100".

        The weight in the range [1, 10000].

  cpu.weight.nice
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The nice value is in the range [-20, 19].

        This interface file is an alternative interface for
        "cpu.weight" and allows reading and setting weight using the
        same values used by nice(2). Because the range is smaller and
        granularity is coarser for the nice values, the read value is
        the closest approximation of the current weight.

  cpu.max
        A read-write two value file which exists on non-root cgroups.
        The default is "max 100000".

        The maximum bandwidth limit. It's in the following format::

          $MAX $PERIOD

        which indicates that the group may consume up to $MAX in each
        $PERIOD duration. "max" for $MAX indicates no limit. If only
        one number is written, $MAX is updated.
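
For example, a group can be limited to half a CPU by allowing 50ms of
runtime for every 100ms period::

  # echo "50000 100000" > cpu.max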

  cpu.max.burst
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        The burst in the range [0, $MAX].

  cpu.pressure
        A read-write nested-keyed file.

        Shows pressure stall information for CPU. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.

  cpu.uclamp.min
        A read-write single value file which exists on non-root cgroups.
        The default is "0", i.e. no utilization boosting.

        The requested minimum utilization (protection) as a percentage
        rational number, e.g. 12.34 for 12.34%.

        This interface allows reading and setting minimum utilization clamp
        values similar to the sched_setattr(2). This minimum utilization
        value is used to clamp the task specific minimum utilization clamp.

        The requested minimum utilization (protection) is always capped by
        the current value for the maximum utilization (limit), i.e.
        "cpu.uclamp.max".

  cpu.uclamp.max
        A read-write single value file which exists on non-root cgroups.
        The default is "max", i.e. no utilization capping.

        The requested maximum utilization (limit) as a percentage rational
        number, e.g. 98.76 for 98.76%.

        This interface allows reading and setting maximum utilization clamp
        values similar to the sched_setattr(2). This maximum utilization
        value is used to clamp the task specific maximum utilization clamp.
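
For example, a cgroup could be constrained to run between 25% and 75%
of the maximum utilization (the values are illustrative)::

  # echo 25.00 > cpu.uclamp.min
  # echo 75.00 > cpu.uclamp.max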
1129 The "memory" controller regulates distribution of memory. Memory is
1130 stateful and implements both limit and protection models. Due to the
1131 intertwining between memory usage and reclaim pressure and the
1132 stateful nature of memory, the distribution model is relatively
1135 While not completely water-tight, all major memory usages by a given
1136 cgroup are tracked so that the total memory consumption can be
1137 accounted and controlled to a reasonable extent. Currently, the
1138 following types of memory usages are tracked.
1140 - Userland memory - page cache and anonymous memory.
1142 - Kernel data structures such as dentries and inodes.
1144 - TCP socket buffers.
1146 The above list may expand in the future for better coverage.

Memory Interface Files
~~~~~~~~~~~~~~~~~~~~~~

All memory amounts are in bytes. If a value which is not aligned to
PAGE_SIZE is written, the value may be rounded up to the closest
PAGE_SIZE multiple when read back.

  memory.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory currently being used by the cgroup
        and its descendants.

  memory.min
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Hard memory protection. If the memory usage of a cgroup
        is within its effective min boundary, the cgroup's memory
        won't be reclaimed under any conditions. If there is no
        unprotected reclaimable memory available, OOM killer
        is invoked. Above the effective min boundary (or
        effective low boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        Effective min boundary is limited by memory.min values of
        all ancestor cgroups. If there is memory.min overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than parent will allow), then each child cgroup will get
        the part of parent's protection proportional to its
        actual memory usage below memory.min.

        Putting more memory than generally available under this
        protection is discouraged and may lead to constant OOMs.

        If a memory cgroup is not populated with processes,
        its memory.min is ignored.

  memory.low
        A read-write single value file which exists on non-root
        cgroups. The default is "0".

        Best-effort memory protection. If the memory usage of a
        cgroup is within its effective low boundary, the cgroup's
        memory won't be reclaimed unless there is no reclaimable
        memory available in unprotected cgroups.
        Above the effective low boundary (or
        effective min boundary if it is higher), pages are reclaimed
        proportionally to the overage, reducing reclaim pressure for
        smaller overages.

        Effective low boundary is limited by memory.low values of
        all ancestor cgroups. If there is memory.low overcommitment
        (child cgroup or cgroups are requiring more protected memory
        than parent will allow), then each child cgroup will get
        the part of parent's protection proportional to its
        actual memory usage below memory.low.

        Putting more memory than generally available under this
        protection is discouraged.

  memory.high
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage throttle limit. If a cgroup's usage goes
        over the high boundary, the processes of the cgroup are
        throttled and put under heavy reclaim pressure.

        Going over the high limit never invokes the OOM killer and
        under extreme conditions the limit may be breached. The high
        limit should be used in scenarios where an external process
        monitors the limited cgroup to alleviate heavy reclaim
        pressure.

  memory.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Memory usage hard limit. This is the main mechanism to limit
        memory usage of a cgroup. If a cgroup's memory usage reaches
        this limit and can't be reduced, the OOM killer is invoked in
        the cgroup. Under certain circumstances, the usage may go
        over the limit temporarily.

        In the default configuration regular 0-order allocations always
        succeed unless the OOM killer chooses the current task as a
        victim.

        Some kinds of allocations don't invoke the OOM killer.
        The caller could retry them differently, return -ENOMEM to
        userspace, or silently ignore the failure in cases like disk
        readahead.
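
Taken together, a sketch of a typical configuration keeps the four
knobs ordered as min <= low <= high <= max (the values are
illustrative)::

  # echo 512M > memory.min
  # echo 1G > memory.low
  # echo 8G > memory.high
  # echo 10G > memory.max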

  memory.reclaim
        A write-only nested-keyed file which exists for all cgroups.

        This is a simple interface to trigger memory reclaim in the
        target cgroup.

        This file accepts a single key, the number of bytes to reclaim.
        No nested keys are currently supported.

        Example::

          echo "1G" > memory.reclaim

        The interface can be later extended with nested keys to
        configure the reclaim behavior. For example, specify the
        type of memory to reclaim from (anon, file, ..).

        Please note that the kernel can over or under reclaim from
        the target cgroup. If fewer bytes are reclaimed than the
        specified amount, -EAGAIN is returned.

        Please note that the proactive reclaim (triggered by this
        interface) is not meant to indicate memory pressure on the
        memory cgroup. Therefore the socket memory balancing normally
        triggered by memory reclaim is not exercised in this case.
        This means that the networking layer will not adapt based on
        reclaim induced by memory.reclaim.

  memory.peak
        A read-only single value file which exists on non-root
        cgroups.

        The max memory usage recorded for the cgroup and its
        descendants since the creation of the cgroup.

  memory.oom.group
        A read-write single value file which exists on non-root
        cgroups. The default value is "0".

        Determines whether the cgroup should be treated as
        an indivisible workload by the OOM killer. If set,
        all tasks belonging to the cgroup or to its descendants
        (if the memory cgroup is not a leaf cgroup) are killed
        together or not at all. This can be used to avoid
        partial kills to guarantee workload integrity.

        Tasks with the OOM protection (oom_score_adj set to -1000)
        are treated as an exception and are never killed.

        If the OOM killer is invoked in a cgroup, it's not going
        to kill any tasks outside of this cgroup, regardless of
        the memory.oom.group values of ancestor cgroups.

  memory.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

        Note that all fields in this file are hierarchical and the
        file modified event can be generated due to an event down the
        hierarchy. For the local events at the cgroup level see
        memory.events.local.

          low
                The number of times the cgroup is reclaimed due to
                high memory pressure even though its usage is under
                the low boundary. This usually indicates that the low
                boundary is over-committed.

          high
                The number of times processes of the cgroup are
                throttled and routed to perform direct memory reclaim
                because the high memory boundary was exceeded. For a
                cgroup whose memory usage is capped by the high limit
                rather than global memory pressure, this event's
                occurrences are expected.

          max
                The number of times the cgroup's memory usage was
                about to go over the max boundary. If direct reclaim
                fails to bring it down, the cgroup goes to OOM state.

          oom
                The number of times the cgroup's memory usage reached
                the limit and allocation was about to fail.

                This event is not raised if the OOM killer is not
                considered as an option, e.g. for failed high-order
                allocations or if the caller asked to not retry
                attempts.

          oom_kill
                The number of processes belonging to this cgroup
                killed by any kind of OOM killer.

          oom_group_kill
                The number of times a group OOM has occurred.

  memory.events.local
        Similar to memory.events but the fields in the file are local
        to the cgroup i.e. not hierarchical. The file modified event
        generated on this file reflects only the local events.

  memory.stat
        A read-only flat-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        on the state and past events of the memory management system.

        All memory amounts are in bytes.

        The entries are ordered to be human readable, and new entries
        can show up in the middle. Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

        Entries marked with the 'npn' (non-per-node) tag have no
        per-node counter and do not show up in memory.numa_stat.

          anon
                Amount of memory used in anonymous mappings such as
                brk(), sbrk(), and mmap(MAP_ANONYMOUS).

          file
                Amount of memory used to cache filesystem data,
                including tmpfs and shared memory.

          kernel (npn)
                Amount of total kernel memory, including
                (kernel_stack, pagetables, percpu, vmalloc, slab) in
                addition to other kernel memory use cases.

          kernel_stack
                Amount of memory allocated to kernel stacks.

          pagetables
                Amount of memory allocated for page tables.

          sec_pagetables
                Amount of memory allocated for secondary page tables;
                this currently includes KVM mmu allocations on x86
                and arm64.

          percpu (npn)
                Amount of memory used for storing per-cpu kernel
                data structures.

          sock (npn)
                Amount of memory used in network transmission buffers.

          vmalloc (npn)
                Amount of memory used for vmap backed memory.

          shmem
                Amount of cached filesystem data that is swap-backed,
                such as tmpfs, shm segments, shared anonymous mmap()s.

          zswap
                Amount of memory consumed by the zswap compression
                backend.

          zswapped
                Amount of application memory swapped out to zswap.

          file_mapped
                Amount of cached filesystem data mapped with mmap().

          file_dirty
                Amount of cached filesystem data that was modified but
                not yet written back to disk.

          file_writeback
                Amount of cached filesystem data that was modified and
                is currently being written back to disk.

          swapcached
                Amount of swap cached in memory. The swapcache is
                accounted against both memory and swap usage.

          anon_thp
                Amount of memory used in anonymous mappings backed by
                transparent hugepages.

          file_thp
                Amount of cached filesystem data backed by transparent
                hugepages.

          shmem_thp
                Amount of shm, tmpfs, shared anonymous mmap()s backed
                by transparent hugepages.

          inactive_anon, active_anon, inactive_file, active_file, unevictable
                Amount of memory, swap-backed and filesystem-backed,
                on the internal memory management lists used by the
                page reclaim algorithm.

                As these represent internal list state (eg. shmem pages
                are on anon memory management lists), inactive_foo +
                active_foo may not be equal to the value for the foo
                counter, since the foo counter is type-based, not
                list-based.

          slab_reclaimable
                Part of "slab" that might be reclaimed, such as
                dentries and inodes.

          slab_unreclaimable
                Part of "slab" that cannot be reclaimed on memory
                pressure.

          slab (npn)
                Amount of memory used for storing in-kernel data
                structures.

          workingset_refault_anon
                Number of refaults of previously evicted anonymous
                pages.

          workingset_refault_file
                Number of refaults of previously evicted file pages.

          workingset_activate_anon
                Number of refaulted anonymous pages that were
                immediately activated.

          workingset_activate_file
                Number of refaulted file pages that were immediately
                activated.

          workingset_restore_anon
                Number of restored anonymous pages which have been
                detected as an active workingset before they got
                reclaimed.

          workingset_restore_file
                Number of restored file pages which have been detected
                as an active workingset before they got reclaimed.

          workingset_nodereclaim
                Number of times a shadow node has been reclaimed.

          pgscan (npn)
                Amount of scanned pages (in an inactive LRU list)

          pgsteal (npn)
                Amount of reclaimed pages

          pgscan_kswapd (npn)
                Amount of scanned pages by kswapd (in an inactive LRU list)

          pgscan_direct (npn)
                Amount of scanned pages directly (in an inactive LRU list)

          pgscan_khugepaged (npn)
                Amount of scanned pages by khugepaged (in an inactive LRU list)

          pgsteal_kswapd (npn)
                Amount of reclaimed pages by kswapd

          pgsteal_direct (npn)
                Amount of reclaimed pages directly

          pgsteal_khugepaged (npn)
                Amount of reclaimed pages by khugepaged

          pgfault (npn)
                Total number of page faults incurred

          pgmajfault (npn)
                Number of major page faults incurred

          pgrefill (npn)
                Amount of scanned pages (in an active LRU list)

          pgactivate (npn)
                Amount of pages moved to the active LRU list

          pgdeactivate (npn)
                Amount of pages moved to the inactive LRU list

          pglazyfree (npn)
                Amount of pages postponed to be freed under memory
                pressure

          pglazyfreed (npn)
                Amount of reclaimed lazyfree pages

          thp_fault_alloc (npn)
                Number of transparent hugepages which were allocated
                to satisfy a page fault. This counter is not present
                when CONFIG_TRANSPARENT_HUGEPAGE is not set.

          thp_collapse_alloc (npn)
                Number of transparent hugepages which were allocated
                to allow collapsing an existing range of pages. This
                counter is not present when
                CONFIG_TRANSPARENT_HUGEPAGE is not set.

  memory.numa_stat
        A read-only nested-keyed file which exists on non-root cgroups.

        This breaks down the cgroup's memory footprint into different
        types of memory, type-specific details, and other information
        per node on the state of the memory management system.

        This is useful for providing visibility into the NUMA locality
        information within a memcg since the pages are allowed to be
        allocated from any physical node. One of the use cases is
        evaluating application performance by combining this
        information with the application's CPU allocation.

        All memory amounts are in bytes.

        The output format of memory.numa_stat is::

          type N0=<bytes in node 0> N1=<bytes in node 1> ...

        The entries are ordered to be human readable, and new entries
        can show up in the middle. Don't rely on items remaining in a
        fixed position; use the keys to look up specific values!

        For the meaning of each entry, refer to memory.stat.

  memory.swap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of swap currently being used by the cgroup
        and its descendants.

  memory.swap.high
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Swap usage throttle limit. If a cgroup's swap usage exceeds
        this limit, all its further allocations will be throttled to
        allow userspace to implement custom out-of-memory procedures.

        This limit marks a point of no return for the cgroup. It is NOT
        designed to manage the amount of swapping a workload does
        during regular operation. Compare to memory.swap.max, which
        prohibits swapping past a set amount, but lets the cgroup
        continue unimpeded as long as other memory can be reclaimed.

        Healthy workloads are not expected to reach this limit.

  memory.swap.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Swap usage hard limit. If a cgroup's swap usage reaches this
        limit, anonymous memory of the cgroup will not be swapped out.

  memory.swap.events
        A read-only flat-keyed file which exists on non-root cgroups.
        The following entries are defined. Unless specified
        otherwise, a value change in this file generates a file
        modified event.

          high
                The number of times the cgroup's swap usage was over
                the high threshold.

          max
                The number of times the cgroup's swap usage was about
                to go over the max boundary and swap allocation
                failed.

          fail
                The number of times swap allocation failed either
                because of running out of swap system-wide or max
                limit.

        When reduced under the current usage, the existing swap
        entries are reclaimed gradually and the swap usage may stay
        higher than the limit for an extended period of time. This
        reduces the impact on the workload and memory management.

  memory.zswap.current
        A read-only single value file which exists on non-root
        cgroups.

        The total amount of memory consumed by the zswap compression
        backend.

  memory.zswap.max
        A read-write single value file which exists on non-root
        cgroups. The default is "max".

        Zswap usage hard limit. If a cgroup's zswap pool reaches this
        limit, it will refuse to take any more stores before existing
        entries fault back in or are written out to disk.

  memory.pressure
        A read-only nested-keyed file.

        Shows pressure stall information for memory. See
        :ref:`Documentation/accounting/psi.rst <psi>` for details.
1640 "memory.high" is the main mechanism to control memory usage.
1641 Over-committing on high limit (sum of high limits > available memory)
1642 and letting global memory pressure to distribute memory according to
1643 usage is a viable strategy.
1645 Because breach of the high limit doesn't trigger the OOM killer but
1646 throttles the offending cgroup, a management agent has ample
1647 opportunities to monitor and take appropriate actions such as granting
1648 more memory or terminating the workload.
1650 Determining whether a cgroup has enough memory is not trivial as
1651 memory usage doesn't indicate whether the workload can benefit from
more memory. For example, a workload which writes data received from
the network to a file can use all available memory but can also
operate equally well with a small amount of memory. A measure of
memory pressure - how much the workload is being impacted due to lack
of memory - is necessary to determine whether a workload needs more
memory; unfortunately, a memory pressure monitoring mechanism isn't
implemented yet.
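As a sketch of the over-commit strategy described above (paths and
sizes are illustrative, not prescriptive)::

  # echo 10G > /sys/fs/cgroup/job-a/memory.high
  # echo 10G > /sys/fs/cgroup/job-b/memory.high   # the sum may exceed physical memory
  # cat /sys/fs/cgroup/job-a/memory.pressure      # monitor and rebalance as needed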
Memory Ownership
~~~~~~~~~~~~~~~~

A memory area is charged to the cgroup which instantiated it and stays
1665 charged to the cgroup until the area is released. Migrating a process
1666 to a different cgroup doesn't move the memory usages that it
1667 instantiated while in the previous cgroup to the new cgroup.
1669 A memory area may be used by processes belonging to different cgroups.
To which cgroup the area will be charged is non-deterministic; however,
1671 over time, the memory area is likely to end up in a cgroup which has
1672 enough memory allowance to avoid high reclaim pressure.
1674 If a cgroup sweeps a considerable amount of memory which is expected
1675 to be accessed repeatedly by other cgroups, it may make sense to use
1676 POSIX_FADV_DONTNEED to relinquish the ownership of memory areas
1677 belonging to the affected files to ensure correct memory ownership.
1683 The "io" controller regulates the distribution of IO resources. This
1684 controller implements both weight based and absolute bandwidth or IOPS
1685 limit distribution; however, weight based distribution is available
1686 only if cfq-iosched is in use and neither scheme is available for
IO Interface Files
~~~~~~~~~~~~~~~~~~

io.stat
A read-only nested-keyed file.
1696 Lines are keyed by $MAJ:$MIN device numbers and not ordered.
1697 The following nested keys are defined.
====== =====================
rbytes Bytes read
wbytes Bytes written
1702 rios Number of read IOs
1703 wios Number of write IOs
1704 dbytes Bytes discarded
1705 dios Number of discard IOs
1706 ====== =====================
1708 An example read output follows::
1710 8:16 rbytes=1459200 wbytes=314773504 rios=192 wios=353 dbytes=0 dios=0
1711 8:0 rbytes=90430464 wbytes=299008000 rios=8950 wios=1252 dbytes=50331648 dios=3021
io.cost.qos
A read-write nested-keyed file which exists only on the root
cgroup.
1717 This file configures the Quality of Service of the IO cost
1718 model based controller (CONFIG_BLK_CGROUP_IOCOST) which
1719 currently implements "io.weight" proportional control. Lines
1720 are keyed by $MAJ:$MIN device numbers and not ordered. The
1721 line for a given device is populated on the first write for
1722 the device on "io.cost.qos" or "io.cost.model". The following
1723 nested keys are defined.
1725 ====== =====================================
1726 enable Weight-based control enable
1727 ctrl "auto" or "user"
1728 rpct Read latency percentile [0, 100]
1729 rlat Read latency threshold
1730 wpct Write latency percentile [0, 100]
1731 wlat Write latency threshold
1732 min Minimum scaling percentage [1, 10000]
1733 max Maximum scaling percentage [1, 10000]
1734 ====== =====================================
1736 The controller is disabled by default and can be enabled by
1737 setting "enable" to 1. "rpct" and "wpct" parameters default
1738 to zero and the controller uses internal device saturation
1739 state to adjust the overall IO rate between "min" and "max".
1741 When a better control quality is needed, latency QoS
1742 parameters can be configured. For example::
8:16 enable=1 ctrl=auto rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00
1746 shows that on sdb, the controller is enabled, will consider
1747 the device saturated if the 95th percentile of read completion
1748 latencies is above 75ms or write 150ms, and adjust the overall
1749 IO issue rate between 50% and 150% accordingly.
1751 The lower the saturation point, the better the latency QoS at
1752 the cost of aggregate bandwidth. The narrower the allowed
1753 adjustment range between "min" and "max", the more conformant
1754 to the cost model the IO behavior. Note that the IO issue
1755 base rate may be far off from 100% and setting "min" and "max"
1756 blindly can lead to a significant loss of device capacity or
1757 control quality. "min" and "max" are useful for regulating
1758 devices which show wide temporary behavior changes - e.g. a
1759 ssd which accepts writes at the line speed for a while and
1760 then completely stalls for multiple seconds.
1762 When "ctrl" is "auto", the parameters are controlled by the
1763 kernel and may change automatically. Setting "ctrl" to "user"
1764 or setting any of the percentile and latency parameters puts
1765 it into "user" mode and disables the automatic changes. The
1766 automatic mode can be restored by setting "ctrl" to "auto".
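For example, the QoS line from the read example above could be
configured with a write like the following (note that this also
switches the device to "user" mode)::

  # echo "8:16 enable=1 rpct=95.00 rlat=75000 wpct=95.00 wlat=150000 min=50.00 max=150.00" > io.cost.qos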
io.cost.model
A read-write nested-keyed file which exists only on the root
cgroup.
1772 This file configures the cost model of the IO cost model based
1773 controller (CONFIG_BLK_CGROUP_IOCOST) which currently
1774 implements "io.weight" proportional control. Lines are keyed
1775 by $MAJ:$MIN device numbers and not ordered. The line for a
1776 given device is populated on the first write for the device on
1777 "io.cost.qos" or "io.cost.model". The following nested keys
1780 ===== ================================
1781 ctrl "auto" or "user"
1782 model The cost model in use - "linear"
1783 ===== ================================
1785 When "ctrl" is "auto", the kernel may change all parameters
1786 dynamically. When "ctrl" is set to "user" or any other
parameters are written to, "ctrl" becomes "user" and the
1788 automatic changes are disabled.
1790 When "model" is "linear", the following model parameters are
1793 ============= ========================================
1794 [r|w]bps The maximum sequential IO throughput
1795 [r|w]seqiops The maximum 4k sequential IOs per second
1796 [r|w]randiops The maximum 4k random IOs per second
1797 ============= ========================================
1799 From the above, the builtin linear model determines the base
1800 costs of a sequential and random IO and the cost coefficient
1801 for the IO size. While simple, this model can cover most
1802 common device classes acceptably.
1804 The IO cost model isn't expected to be accurate in absolute
1805 sense and is scaled to the device behavior dynamically.
1807 If needed, tools/cgroup/iocost_coef_gen.py can be used to
1808 generate device-specific coefficients.
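Hypothetically, coefficients generated that way could then be applied
with a write of the following form (all values invented)::

  # echo "8:0 model=linear rbps=500000000 rseqiops=100000 rrandiops=80000 wbps=400000000 wseqiops=90000 wrandiops=70000" > io.cost.model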
io.weight
A read-write flat-keyed file which exists on non-root cgroups.
The default is "default 100".
1814 The first line is the default weight applied to devices
1815 without specific override. The rest are overrides keyed by
$MAJ:$MIN device numbers and not ordered. The weights are in
the range [1, 10000] and specify the relative amount of IO time
the cgroup can use in relation to its siblings.
1820 The default weight can be updated by writing either "default
1821 $WEIGHT" or simply "$WEIGHT". Overrides can be set by writing
1822 "$MAJ:$MIN $WEIGHT" and unset by writing "$MAJ:$MIN default".
An example read output follows::

  default 100
  8:16 200
  8:0 50

io.max
A read-write nested-keyed file which exists on non-root
cgroups.
1834 BPS and IOPS based IO limit. Lines are keyed by $MAJ:$MIN
device numbers and not ordered. The following nested keys are
defined.
1838 ===== ==================================
1839 rbps Max read bytes per second
1840 wbps Max write bytes per second
1841 riops Max read IO operations per second
1842 wiops Max write IO operations per second
1843 ===== ==================================
1845 When writing, any number of nested key-value pairs can be
1846 specified in any order. "max" can be specified as the value
1847 to remove a specific limit. If the same key is specified
1848 multiple times, the outcome is undefined.
1850 BPS and IOPS are measured in each IO direction and IOs are
1851 delayed if limit is reached. Temporary bursts are allowed.
1853 Setting read limit at 2M BPS and write at 120 IOPS for 8:16::
1855 echo "8:16 rbps=2097152 wiops=120" > io.max
1857 Reading returns the following::
1859 8:16 rbps=2097152 wbps=max riops=max wiops=120
1861 Write IOPS limit can be removed by writing the following::
1863 echo "8:16 wiops=max" > io.max
1865 Reading now returns the following::
1867 8:16 rbps=2097152 wbps=max riops=max wiops=max
io.pressure
A read-only nested-keyed file.
1872 Shows pressure stall information for IO. See
1873 :ref:`Documentation/accounting/psi.rst <psi>` for details.
Writeback
~~~~~~~~~

Page cache is dirtied through buffered writes and shared mmaps and
1880 written asynchronously to the backing filesystem by the writeback
1881 mechanism. Writeback sits between the memory and IO domains and
regulates the proportion of dirty memory by balancing dirtying and
write IOs.
1885 The io controller, in conjunction with the memory controller,
1886 implements control of page cache writeback IOs. The memory controller
1887 defines the memory domain that dirty memory ratio is calculated and
1888 maintained for and the io controller defines the io domain which
1889 writes out dirty pages for the memory domain. Both system-wide and
1890 per-cgroup dirty memory states are examined and the more restrictive
1891 of the two is enforced.
1893 cgroup writeback requires explicit support from the underlying
1894 filesystem. Currently, cgroup writeback is implemented on ext2, ext4,
1895 btrfs, f2fs, and xfs. On other filesystems, all writeback IOs are
1896 attributed to the root cgroup.
1898 There are inherent differences in memory and writeback management
which affect how cgroup ownership is tracked. Memory is tracked per
page while writeback is tracked per inode. For the purpose of writeback, an
1901 inode is assigned to a cgroup and all IO requests to write dirty pages
1902 from the inode are attributed to that cgroup.
1904 As cgroup ownership for memory is tracked per page, there can be pages
1905 which are associated with different cgroups than the one the inode is
1906 associated with. These are called foreign pages. The writeback
1907 constantly keeps track of foreign pages and, if a particular foreign
1908 cgroup becomes the majority over a certain period of time, switches
1909 the ownership of the inode to that cgroup.
1911 While this model is enough for most use cases where a given inode is
1912 mostly dirtied by a single cgroup even when the main writing cgroup
1913 changes over time, use cases where multiple cgroups write to a single
1914 inode simultaneously are not supported well. In such circumstances, a
1915 significant portion of IOs are likely to be attributed incorrectly.
As the memory controller assigns page ownership on the first use and
doesn't update it until the page is released, even if writeback
strictly follows page ownership, multiple cgroups dirtying overlapping
areas wouldn't work as expected. It's recommended to avoid such usage
patterns.
1922 The sysctl knobs which affect writeback behavior are applied to cgroup
1923 writeback as follows.
1925 vm.dirty_background_ratio, vm.dirty_ratio
1926 These ratios apply the same to cgroup writeback with the
1927 amount of available memory capped by limits imposed by the
1928 memory controller and system-wide clean memory.
1930 vm.dirty_background_bytes, vm.dirty_bytes
1931 For cgroup writeback, this is calculated into ratio against
1932 total available memory and applied the same way as
1933 vm.dirty[_background]_ratio.
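As a made-up illustration of this conversion::

  vm.dirty_bytes = 400MB with 4GB total memory -> effective ratio = 10%
  memory available to the cgroup = 1GB         -> writeback kicks in around 100MB dirty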
IO Latency
~~~~~~~~~~

This is a cgroup v2 controller for IO workload protection. You provide a group
with a latency target, and if the average latency exceeds that target the
controller will throttle any peers that have a lower latency target than the
protected workload group.
1944 The limits are only applied at the peer level in the hierarchy. This means that
1945 in the diagram below, only groups A, B, and C will influence each other, and
groups D and F will influence each other. Group G will influence nobody::

                  [root]
          /          |          \
        [A]         [B]         [C]
       /   \         |
     [D]   [F]      [G]
1955 So the ideal way to configure this is to set io.latency in groups A, B, and C.
1956 Generally you do not want to set a value lower than the latency your device
1957 supports. Experiment to find the value that works best for your workload.
1958 Start at higher than the expected latency for your device and watch the
1959 avg_lat value in io.stat for your workload group to get an idea of the
1960 latency you see during normal operation. Use the avg_lat value as a basis for
1961 your real setting, setting at 10-15% higher than the value in io.stat.
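Putting that advice into a sketch (the device number and values are
illustrative)::

  # grep ^8:16 io.stat                     # suppose avg_lat is ~2000 under normal load
  # echo "8:16 target=2300" > io.latency   # ~15% above the observed avg_lat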
1963 How IO Latency Throttling Works
1964 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1966 io.latency is work conserving; so as long as everybody is meeting their latency
1967 target the controller doesn't do anything. Once a group starts missing its
1968 target it begins throttling any peer group that has a higher target than itself.
1969 This throttling takes 2 forms:
- Queue depth throttling. This is the number of outstanding IOs a group is
  allowed to have. We will clamp down relatively quickly, starting at no limit
  and going all the way down to 1 IO at a time.
1975 - Artificial delay induction. There are certain types of IO that cannot be
1976 throttled without possibly adversely affecting higher priority groups. This
1977 includes swapping and metadata IO. These types of IO are allowed to occur
1978 normally, however they are "charged" to the originating group. If the
1979 originating group is being throttled you will see the use_delay and delay
fields in io.stat increase. The delay value is the number of microseconds
being added to any process that runs in this group. Because this number can
grow quite large if there is a lot of swapping or metadata IO occurring, we
limit the individual delay events to 1 second at a time.
1985 Once the victimized group starts meeting its latency target again it will start
1986 unthrottling any peer groups that were throttled previously. If the victimized
1987 group simply stops doing IO the global counter will unthrottle appropriately.
1989 IO Latency Interface Files
1990 ~~~~~~~~~~~~~~~~~~~~~~~~~~
io.latency
This takes a similar format as the other controllers.
1995 "MAJOR:MINOR target=<target time in microseconds>"
io.stat
If the controller is enabled you will see extra stats in io.stat in
1999 addition to the normal ones.
depth
This is the current queue depth for the group.
avg_lat
This is an exponential moving average with a decay rate of 1/exp
2006 bound by the sampling interval. The decay rate interval can be
2007 calculated by multiplying the win value in io.stat by the
2008 corresponding number of samples based on the win value.
win
The sampling window size in milliseconds. This is the minimum
2012 duration of time between evaluation events. Windows only elapse
2013 with IO activity. Idle periods extend the most recent window.
IO Priority
~~~~~~~~~~~

A single attribute controls the behavior of the I/O priority cgroup policy,
namely the blkio.prio.class attribute. The following values are accepted for
that attribute:

no-change
Do not modify the I/O priority class.
none-to-rt
For requests that do not have an I/O priority class (NONE),
2027 change the I/O priority class into RT. Do not modify
2028 the I/O priority class of other requests.
rt-to-be
For requests that do not have an I/O priority class or that have I/O
2032 priority class RT, change it into BE. Do not modify the I/O priority
2033 class of requests that have priority class IDLE.
all-to-idle
Change the I/O priority class of all requests into IDLE, the lowest
I/O priority class.

The following numerical values are associated with the I/O priority policies:

+-------------+---+
| no-change   | 0 |
+-------------+---+
| none-to-rt  | 1 |
+-------------+---+
| rt-to-be    | 2 |
+-------------+---+
| all-to-idle | 3 |
+-------------+---+
2051 The numerical value that corresponds to each I/O priority class is as follows:
2053 +-------------------------------+---+
2054 | IOPRIO_CLASS_NONE | 0 |
2055 +-------------------------------+---+
2056 | IOPRIO_CLASS_RT (real-time) | 1 |
2057 +-------------------------------+---+
2058 | IOPRIO_CLASS_BE (best effort) | 2 |
2059 +-------------------------------+---+
2060 | IOPRIO_CLASS_IDLE | 3 |
2061 +-------------------------------+---+
2063 The algorithm to set the I/O priority class for a request is as follows:
2065 - Translate the I/O priority class policy into a number.
2066 - Change the request I/O priority class into the maximum of the I/O priority
2067 class policy number and the numerical I/O priority class.
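As a made-up worked example of this rule::

  policy rt-to-be (2), request class RT (1)   -> max(2, 1) = 2 -> BE
  policy rt-to-be (2), request class IDLE (3) -> max(2, 3) = 3 -> IDLE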
PID
---

The process number controller is used to allow a cgroup to stop any
new tasks from being fork()'d or clone()'d after a specified limit is
reached.
2076 The number of tasks in a cgroup can be exhausted in ways which other
2077 controllers cannot prevent, thus warranting its own controller. For
2078 example, a fork bomb is likely to exhaust the number of tasks before
2079 hitting memory restrictions.
Note that PIDs used in this controller refer to TIDs, process IDs as
used by the kernel.
PID Interface Files
~~~~~~~~~~~~~~~~~~~

pids.max
A read-write single value file which exists on non-root
2090 cgroups. The default is "max".
2092 Hard limit of number of processes.
pids.current
A read-only single value file which exists on all cgroups.

The number of processes currently in the cgroup and its
descendants.
2100 Organisational operations are not blocked by cgroup policies, so it is
2101 possible to have pids.current > pids.max. This can be done by either
2102 setting the limit to be smaller than pids.current, or attaching enough
2103 processes to the cgroup such that pids.current is larger than
2104 pids.max. However, it is not possible to violate a cgroup PID policy
2105 through fork() or clone(). These will return -EAGAIN if the creation
2106 of a new process would cause a cgroup policy to be violated.
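The following sketch illustrates the hard limit (the cgroup path is
illustrative)::

  # mkdir /sys/fs/cgroup/test
  # echo 2 > /sys/fs/cgroup/test/pids.max
  # echo $$ > /sys/fs/cgroup/test/cgroup.procs
  # sleep 60 &      # second task, still within the limit
  # sleep 60 &      # third task, fork() fails with EAGAIN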
2112 The "cpuset" controller provides a mechanism for constraining
2113 the CPU and memory node placement of tasks to only the resources
2114 specified in the cpuset interface files in a task's current cgroup.
2115 This is especially valuable on large NUMA systems where placing jobs
2116 on properly sized subsets of the systems with careful processor and
2117 memory placement to reduce cross-node memory access and contention
2118 can improve overall system performance.
2120 The "cpuset" controller is hierarchical. That means the controller
2121 cannot use CPUs or memory nodes not allowed in its parent.
2124 Cpuset Interface Files
2125 ~~~~~~~~~~~~~~~~~~~~~~
cpuset.cpus
A read-write multiple values file which exists on non-root
2129 cpuset-enabled cgroups.
2131 It lists the requested CPUs to be used by tasks within this
2132 cgroup. The actual list of CPUs to be granted, however, is
2133 subjected to constraints imposed by its parent and can differ
2134 from the requested CPUs.
The CPU numbers are comma-separated numbers or ranges.
For example::

  # cat cpuset.cpus
  0-4,6,8-10
2142 An empty value indicates that the cgroup is using the same
2143 setting as the nearest cgroup ancestor with a non-empty
2144 "cpuset.cpus" or all the available CPUs if none is found.
2146 The value of "cpuset.cpus" stays constant until the next update
2147 and won't be affected by any CPU hotplug events.
2149 cpuset.cpus.effective
2150 A read-only multiple values file which exists on all
2151 cpuset-enabled cgroups.
2153 It lists the onlined CPUs that are actually granted to this
2154 cgroup by its parent. These CPUs are allowed to be used by
2155 tasks within the current cgroup.
2157 If "cpuset.cpus" is empty, the "cpuset.cpus.effective" file shows
2158 all the CPUs from the parent cgroup that can be available to
2159 be used by this cgroup. Otherwise, it should be a subset of
2160 "cpuset.cpus" unless none of the CPUs listed in "cpuset.cpus"
2161 can be granted. In this case, it will be treated just like an
2162 empty "cpuset.cpus".
2164 Its value will be affected by CPU hotplug events.
cpuset.mems
A read-write multiple values file which exists on non-root
2168 cpuset-enabled cgroups.
2170 It lists the requested memory nodes to be used by tasks within
2171 this cgroup. The actual list of memory nodes granted, however,
2172 is subjected to constraints imposed by its parent and can differ
2173 from the requested memory nodes.
The memory node numbers are comma-separated numbers or ranges.
For example::

  # cat cpuset.mems
  0-1,3
2181 An empty value indicates that the cgroup is using the same
2182 setting as the nearest cgroup ancestor with a non-empty
2183 "cpuset.mems" or all the available memory nodes if none
2186 The value of "cpuset.mems" stays constant until the next update
2187 and won't be affected by any memory nodes hotplug events.
2189 Setting a non-empty value to "cpuset.mems" causes memory of
2190 tasks within the cgroup to be migrated to the designated nodes if
2191 they are currently using memory outside of the designated nodes.
2193 There is a cost for this memory migration. The migration
2194 may not be complete and some memory pages may be left behind.
2195 So it is recommended that "cpuset.mems" should be set properly
2196 before spawning new tasks into the cpuset. Even if there is
a need to change "cpuset.mems" with active tasks, it shouldn't
be done frequently.
2200 cpuset.mems.effective
2201 A read-only multiple values file which exists on all
2202 cpuset-enabled cgroups.
2204 It lists the onlined memory nodes that are actually granted to
2205 this cgroup by its parent. These memory nodes are allowed to
2206 be used by tasks within the current cgroup.
2208 If "cpuset.mems" is empty, it shows all the memory nodes from the
2209 parent cgroup that will be available to be used by this cgroup.
2210 Otherwise, it should be a subset of "cpuset.mems" unless none of
2211 the memory nodes listed in "cpuset.mems" can be granted. In this
2212 case, it will be treated just like an empty "cpuset.mems".
2214 Its value will be affected by memory nodes hotplug events.
2216 cpuset.cpus.partition
2217 A read-write single value file which exists on non-root
2218 cpuset-enabled cgroups. This flag is owned by the parent cgroup
2219 and is not delegatable.
2221 It accepts only the following input values when written to.
2223 ========== =====================================
2224 "member" Non-root member of a partition
2225 "root" Partition root
2226 "isolated" Partition root without load balancing
2227 ========== =====================================
2229 The root cgroup is always a partition root and its state
cannot be changed. All other non-root cgroups start out as
"member".
2233 When set to "root", the current cgroup is the root of a new
2234 partition or scheduling domain that comprises itself and all
2235 its descendants except those that are separate partition roots
2236 themselves and their descendants.
2238 When set to "isolated", the CPUs in that partition root will
2239 be in an isolated state without any load balancing from the
2240 scheduler. Tasks placed in such a partition with multiple
2241 CPUs should be carefully distributed and bound to each of the
2242 individual CPUs for optimal performance.
2244 The value shown in "cpuset.cpus.effective" of a partition root
2245 is the CPUs that the partition root can dedicate to a potential
2246 new child partition root. The new child subtracts available
2247 CPUs from its parent "cpuset.cpus.effective".
2249 A partition root ("root" or "isolated") can be in one of the
2250 two possible states - valid or invalid. An invalid partition
2251 root is in a degraded state where some state information may
2252 be retained, but behaves more like a "member".
2254 All possible state transitions among "member", "root" and
2255 "isolated" are allowed.
2257 On read, the "cpuset.cpus.partition" file can show the following
2260 ============================= =====================================
2261 "member" Non-root member of a partition
2262 "root" Partition root
2263 "isolated" Partition root without load balancing
2264 "root invalid (<reason>)" Invalid partition root
2265 "isolated invalid (<reason>)" Invalid isolated partition root
2266 ============================= =====================================
2268 In the case of an invalid partition root, a descriptive string on
2269 why the partition is invalid is included within parentheses.
2271 For a partition root to become valid, the following conditions
1) The "cpuset.cpus" is exclusive with its siblings, i.e. they
   are not shared by any of its siblings (exclusivity rule).
2276 2) The parent cgroup is a valid partition root.
2277 3) The "cpuset.cpus" is not empty and must contain at least
2278 one of the CPUs from parent's "cpuset.cpus", i.e. they overlap.
2279 4) The "cpuset.cpus.effective" cannot be empty unless there is
2280 no task associated with this partition.
2282 External events like hotplug or changes to "cpuset.cpus" can
2283 cause a valid partition root to become invalid and vice versa.
2284 Note that a task cannot be moved to a cgroup with empty
2285 "cpuset.cpus.effective".
2287 For a valid partition root with the sibling cpu exclusivity
2288 rule enabled, changes made to "cpuset.cpus" that violate the
2289 exclusivity rule will invalidate the partition as well as its
2290 sibling partitions with conflicting cpuset.cpus values. So
care must be taken when changing "cpuset.cpus".
2293 A valid non-root parent partition may distribute out all its CPUs
2294 to its child partitions when there is no task associated with it.
Care must be taken when changing a valid partition root to
"member", as all its child partitions, if present, will become
2298 invalid causing disruption to tasks running in those child
2299 partitions. These inactivated partitions could be recovered if
2300 their parent is switched back to a partition root with a proper
2301 set of "cpuset.cpus".
2303 Poll and inotify events are triggered whenever the state of
2304 "cpuset.cpus.partition" changes. That includes changes caused
2305 by write to "cpuset.cpus.partition", cpu hotplug or other
2306 changes that modify the validity status of the partition.
2307 This will allow user space agents to monitor unexpected changes
2308 to "cpuset.cpus.partition" without the need to do continuous
Device controller
-----------------

The device controller manages access to device files. It includes both
creation of new device files (using mknod) and access to the
existing device files.
The cgroup v2 device controller has no interface files and is implemented
2320 on top of cgroup BPF. To control access to device files, a user may
2321 create bpf programs of type BPF_PROG_TYPE_CGROUP_DEVICE and attach
2322 them to cgroups with BPF_CGROUP_DEVICE flag. On an attempt to access a
2323 device file, corresponding BPF programs will be executed, and depending
2324 on the return value the attempt will succeed or fail with -EPERM.
2326 A BPF_PROG_TYPE_CGROUP_DEVICE program takes a pointer to the
2327 bpf_cgroup_dev_ctx structure, which describes the device access attempt:
2328 access type (mknod/read/write) and device (type, major and minor numbers).
If the program returns 0, the attempt fails with -EPERM, otherwise it
succeeds.
2332 An example of BPF_PROG_TYPE_CGROUP_DEVICE program may be found in
2333 tools/testing/selftests/bpf/progs/dev_cgroup.c in the kernel source tree.
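For instance, assuming a program has already been loaded and pinned, it
could be attached with bpftool (the paths are illustrative)::

  # bpftool cgroup attach /sys/fs/cgroup/cont device pinned /sys/fs/bpf/dev_prog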
2339 The "rdma" controller regulates the distribution and accounting of
2342 RDMA Interface Files
2343 ~~~~~~~~~~~~~~~~~~~~
rdma.max
A read-write nested-keyed file that exists for all the cgroups
except root that describes the current configured resource limit
2348 for a RDMA/IB device.
2350 Lines are keyed by device name and are not ordered.
2351 Each line contains space separated resource name and its configured
2352 limit that can be distributed.
2354 The following nested keys are defined.
2356 ========== =============================
2357 hca_handle Maximum number of HCA Handles
2358 hca_object Maximum number of HCA Objects
2359 ========== =============================
2361 An example for mlx4 and ocrdma device follows::
2363 mlx4_0 hca_handle=2 hca_object=2000
2364 ocrdma1 hca_handle=3 hca_object=max
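A limit could be configured by writing a similarly formatted line, e.g.
(a sketch using the device names above)::

  # echo "mlx4_0 hca_handle=2 hca_object=2000" > rdma.max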
rdma.current
A read-only file that describes current resource usage.
It exists for all the cgroups except root.
2370 An example for mlx4 and ocrdma device follows::
2372 mlx4_0 hca_handle=1 hca_object=20
2373 ocrdma1 hca_handle=1 hca_object=23
HugeTLB
-------

The HugeTLB controller allows limiting HugeTLB usage per control group and
enforces the controller limit during page fault.
2381 HugeTLB Interface Files
2382 ~~~~~~~~~~~~~~~~~~~~~~~
2384 hugetlb.<hugepagesize>.current
2385 Show current usage for "hugepagesize" hugetlb. It exists for all
the cgroups except root.
2388 hugetlb.<hugepagesize>.max
2389 Set/show the hard limit of "hugepagesize" hugetlb usage.
The default value is "max". It exists for all the cgroups except root.
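For example, assuming the file accepts byte values with the usual
suffixes, 2MB huge page usage could be capped at 1G with::

  # echo 1G > hugetlb.2MB.max
  # cat hugetlb.2MB.current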
2392 hugetlb.<hugepagesize>.events
A read-only flat-keyed file which exists on non-root cgroups.

max
The number of allocation failures due to the HugeTLB limit
2398 hugetlb.<hugepagesize>.events.local
2399 Similar to hugetlb.<hugepagesize>.events but the fields in the file
2400 are local to the cgroup i.e. not hierarchical. The file modified event
2401 generated on this file reflects only the local events.
2403 hugetlb.<hugepagesize>.numa_stat
2404 Similar to memory.numa_stat, it shows the numa information of the
2405 hugetlb pages of <hugepagesize> in this cgroup. Only active in
2406 use hugetlb pages are included. The per-node values are in bytes.
Misc
----

The Miscellaneous cgroup provides the resource limiting and tracking
mechanism for the scalar resources which cannot be abstracted like the other
cgroup resources. The controller is enabled by the CONFIG_CGROUP_MISC config
option.
2416 A resource can be added to the controller via enum misc_res_type{} in the
2417 include/linux/misc_cgroup.h file and the corresponding name via misc_res_name[]
2418 in the kernel/cgroup/misc.c file. Provider of the resource must set its
2419 capacity prior to using the resource by calling misc_cg_set_capacity().
2421 Once a capacity is set then the resource usage can be updated using charge and
2422 uncharge APIs. All of the APIs to interact with misc controller are in
2423 include/linux/misc_cgroup.h.
2425 Misc Interface Files
2426 ~~~~~~~~~~~~~~~~~~~~
2428 Miscellaneous controller provides 3 interface files. If two misc resources (res_a and res_b) are registered then:
misc.capacity
A read-only flat-keyed file shown only in the root cgroup. It shows
miscellaneous scalar resources available on the platform along with
their quantities::

  $ cat misc.capacity
  res_a 50
  res_b 10

misc.current
A read-only flat-keyed file shown in the non-root cgroups. It shows
the current usage of the resources in the cgroup and its children::

  $ cat misc.current
  res_a 3
  res_b 0
misc.max
A read-write flat-keyed file shown in the non-root cgroups. Allowed
maximum usage of the resources in the cgroup and its children::

  $ cat misc.max
  res_a max
  res_b 4
2455 Limit can be set by::
2457 # echo res_a 1 > misc.max
2459 Limit can be set to max by::
2461 # echo res_a max > misc.max
Limits can be set higher than the capacity value in the misc.capacity
file.

misc.events
A read-only flat-keyed file which exists on non-root cgroups. The
2468 following entries are defined. Unless specified otherwise, a value
2469 change in this file generates a file modified event. All fields in
2470 this file are hierarchical.
max
The number of times the cgroup's resource usage was
2474 about to go over the max boundary.
2476 Migration and Ownership
2477 ~~~~~~~~~~~~~~~~~~~~~~~
2479 A miscellaneous scalar resource is charged to the cgroup in which it is used
2480 first, and stays charged to that cgroup until that resource is freed. Migrating
2481 a process to a different cgroup does not move the charge to the destination
2482 cgroup where the process has moved.
Others
======

perf_event
----------

perf_event controller, if not mounted on a legacy hierarchy, is
2491 automatically enabled on the v2 hierarchy so that perf events can
2492 always be filtered by cgroup v2 path. The controller can still be
2493 moved to a legacy hierarchy after v2 hierarchy is populated.
2496 Non-normative information
2497 -------------------------
2499 This section contains information that isn't considered to be a part of
2500 the stable kernel API and so is subject to change.
2503 CPU controller root cgroup process behaviour
2504 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2506 When distributing CPU cycles in the root cgroup each thread in this
2507 cgroup is treated as if it was hosted in a separate child cgroup of the
root cgroup. This child cgroup weight is dependent on its thread nice
level.
2511 For details of this mapping see sched_prio_to_weight array in
2512 kernel/sched/core.c file (values from this array should be scaled
2513 appropriately so the neutral - nice 0 - value is 100 instead of 1024).
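As a made-up worked example of that scaling (sched_prio_to_weight maps
nice 0 to 1024 and nice 10 to 110)::

  nice  0 -> 1024 * 100 / 1024 = 100
  nice 10 ->  110 * 100 / 1024 ≈  11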
2516 IO controller root cgroup process behaviour
2517 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2519 Root cgroup processes are hosted in an implicit leaf child node.
2520 When distributing IO resources this implicit child node is taken into
2521 account as if it was a normal child cgroup of the root cgroup with a
2522 weight value of 200.
Namespace
=========

Basics
------

cgroup namespace provides a mechanism to virtualize the view of the
2532 "/proc/$PID/cgroup" file and cgroup mounts. The CLONE_NEWCGROUP clone
2533 flag can be used with clone(2) and unshare(2) to create a new cgroup
2534 namespace. The process running inside the cgroup namespace will have
2535 its "/proc/$PID/cgroup" output restricted to cgroupns root. The
2536 cgroupns root is the cgroup of the process at the time of creation of
2537 the cgroup namespace.
2539 Without cgroup namespace, the "/proc/$PID/cgroup" file shows the
2540 complete path of the cgroup of a process. In a container setup where
2541 a set of cgroups and namespaces are intended to isolate processes the
2542 "/proc/$PID/cgroup" file may leak potential system level information
2543 to the isolated processes. For example::
2545 # cat /proc/self/cgroup
2546 0::/batchjobs/container_id1
2548 The path '/batchjobs/container_id1' can be considered as system-data
2549 and undesirable to expose to the isolated processes. cgroup namespace
2550 can be used to restrict visibility of this path. For example, before
2551 creating a cgroup namespace, one would see::
2553 # ls -l /proc/self/ns/cgroup
2554 lrwxrwxrwx 1 root root 0 2014-07-15 10:37 /proc/self/ns/cgroup -> cgroup:[4026531835]
2555 # cat /proc/self/cgroup
2556 0::/batchjobs/container_id1
2558 After unsharing a new namespace, the view changes::
2560 # ls -l /proc/self/ns/cgroup
2561 lrwxrwxrwx 1 root root 0 2014-07-15 10:35 /proc/self/ns/cgroup -> cgroup:[4026532183]
# cat /proc/self/cgroup
0::/
2565 When some thread from a multi-threaded process unshares its cgroup
2566 namespace, the new cgroupns gets applied to the entire process (all
2567 the threads). This is natural for the v2 hierarchy; however, for the
2568 legacy hierarchies, this may be unexpected.
2570 A cgroup namespace is alive as long as there are processes inside or
2571 mounts pinning it. When the last usage goes away, the cgroup
namespace is destroyed. The cgroupns root and the actual cgroups
remain, though.

The Root and Views
------------------

The 'cgroupns root' for a cgroup namespace is the cgroup in which the
2580 process calling unshare(2) is running. For example, if a process in
2581 /batchjobs/container_id1 cgroup calls unshare, cgroup
2582 /batchjobs/container_id1 becomes the cgroupns root. For the
2583 init_cgroup_ns, this is the real root ('/') cgroup.
2585 The cgroupns root cgroup does not change even if the namespace creator
2586 process later moves to a different cgroup::
2588 # ~/unshare -c # unshare cgroupns in some cgroup
# cat /proc/self/cgroup
0::/
# mkdir sub_cgrp_1
# echo 0 > sub_cgrp_1/cgroup.procs
# cat /proc/self/cgroup
0::/sub_cgrp_1
2596 Each process gets its namespace-specific view of "/proc/$PID/cgroup"
2598 Processes running inside the cgroup namespace will be able to see
2599 cgroup paths (in /proc/self/cgroup) only inside their root cgroup.
From within an unshared cgroupns::

# sleep 100000 &
[1] 7353
# echo 7353 > sub_cgrp_1/cgroup.procs
# cat /proc/7353/cgroup
0::/sub_cgrp_1
From the initial cgroup namespace, the real cgroup path will be
visible::
2611 $ cat /proc/7353/cgroup
2612 0::/batchjobs/container_id1/sub_cgrp_1
2614 From a sibling cgroup namespace (that is, a namespace rooted at a
2615 different cgroup), the cgroup path relative to its own cgroup
2616 namespace root will be shown. For instance, if PID 7353's cgroup
2617 namespace root is at '/batchjobs/container_id2', then it will see::
2619 # cat /proc/7353/cgroup
2620 0::/../container_id2/sub_cgrp_1
2622 Note that the relative path always starts with '/' to indicate that
it is relative to the cgroup namespace root of the caller.
2626 Migration and setns(2)
2627 ----------------------
2629 Processes inside a cgroup namespace can move into and out of the
2630 namespace root if they have proper access to external cgroups. For
2631 example, from inside a namespace with cgroupns root at
2632 /batchjobs/container_id1, and assuming that the global hierarchy is
2633 still accessible inside cgroupns::
# cat /proc/7353/cgroup
0::/sub_cgrp_1
2637 # echo 7353 > batchjobs/container_id2/cgroup.procs
2638 # cat /proc/7353/cgroup
2639 0::/../container_id2
2641 Note that this kind of setup is not encouraged. A task inside cgroup
2642 namespace should only be exposed to its own cgroupns hierarchy.
2644 setns(2) to another cgroup namespace is allowed when:
2646 (a) the process has CAP_SYS_ADMIN against its current user namespace
(b) the process has CAP_SYS_ADMIN against the target cgroup
    namespace's userns
No implicit cgroup changes happen with attaching to another cgroup
namespace. It is expected that someone moves the attaching
process under the target cgroup namespace root.
2655 Interaction with Other Namespaces
2656 ---------------------------------
2658 Namespace specific cgroup hierarchy can be mounted by a process
2659 running inside a non-init cgroup namespace::
2661 # mount -t cgroup2 none $MOUNT_POINT
2663 This will mount the unified cgroup hierarchy with cgroupns root as the
filesystem root. The process needs CAP_SYS_ADMIN against its user and
mount namespaces.
2667 The virtualization of /proc/self/cgroup file combined with restricting
2668 the view of cgroup hierarchy by namespace-private cgroupfs mount
2669 provides a properly isolated cgroup view inside the container.
2672 Information on Kernel Programming
2673 =================================
2675 This section contains kernel programming information in the areas
2676 where interacting with cgroup is necessary. cgroup core and
2677 controllers are not covered.
2680 Filesystem Support for Writeback
2681 --------------------------------
2683 A filesystem can support cgroup writeback by updating
address_space_operations->writepage[s]() to annotate bios using the
2685 following two functions.
2687 wbc_init_bio(@wbc, @bio)
2688 Should be called for each bio carrying writeback data and
2689 associates the bio with the inode's owner cgroup and the
2690 corresponding request queue. This must be called after
a queue (device) has been associated with the bio and
before submission.
2694 wbc_account_cgroup_owner(@wbc, @page, @bytes)
2695 Should be called for each data segment being written out.
2696 While this function doesn't care exactly when it's called
2697 during the writeback session, it's the easiest and most
2698 natural to call it as data segments are added to a bio.
With writeback bios annotated, cgroup support can be enabled per
2701 super_block by setting SB_I_CGROUPWB in ->s_iflags. This allows for
2702 selective disabling of cgroup writeback support which is helpful when
certain filesystem features, e.g. journaled data mode, are
incompatible.
2706 wbc_init_bio() binds the specified bio to its cgroup. Depending on
2707 the configuration, the bio may be executed at a lower priority and if
2708 the writeback session is holding shared resources, e.g. a journal
2709 entry, may lead to priority inversion. There is no one easy solution
2710 for the problem. Filesystems can try to work around specific problem
cases by skipping wbc_init_bio() and using bio_associate_blkg()
directly.
2715 Deprecated v1 Core Features
2716 ===========================
2718 - Multiple hierarchies including named ones are not supported.
- None of the v1 mount options are supported.
2722 - The "tasks" file is removed and "cgroup.procs" is not sorted.
2724 - "cgroup.clone_children" is removed.
2726 - /proc/cgroups is meaningless for v2. Use "cgroup.controllers" file
2727 at the root instead.
2730 Issues with v1 and Rationales for v2
2731 ====================================
2733 Multiple Hierarchies
2734 --------------------
2736 cgroup v1 allowed an arbitrary number of hierarchies and each
2737 hierarchy could host any number of controllers. While this seemed to
2738 provide a high level of flexibility, it wasn't useful in practice.
2740 For example, as there is only one instance of each controller, utility
2741 type controllers such as freezer which can be useful in all
2742 hierarchies could only be used in one. The issue is exacerbated by
2743 the fact that controllers couldn't be moved to another hierarchy once
2744 hierarchies were populated. Another issue was that all controllers
2745 bound to a hierarchy were forced to have exactly the same view of the
2746 hierarchy. It wasn't possible to vary the granularity depending on
2747 the specific controller.
2749 In practice, these issues heavily limited which controllers could be
2750 put on the same hierarchy and most configurations resorted to putting
2751 each controller on its own hierarchy. Only closely related ones, such
2752 as the cpu and cpuacct controllers, made sense to be put on the same
2753 hierarchy. This often meant that userland ended up managing multiple
2754 similar hierarchies repeating the same steps on each hierarchy
2755 whenever a hierarchy management operation was necessary.
2757 Furthermore, support for multiple hierarchies came at a steep cost.
2758 It greatly complicated cgroup core implementation but more importantly
2759 the support for multiple hierarchies restricted how cgroup could be
used in general and what controllers were able to do.
2762 There was no limit on how many hierarchies there might be, which meant
2763 that a thread's cgroup membership couldn't be described in finite
2764 length. The key might contain any number of entries and was unlimited
2765 in length, which made it highly awkward to manipulate and led to
2766 addition of controllers which existed only to identify membership,
which in turn exacerbated the original problem of proliferating number
of hierarchies.
2770 Also, as a controller couldn't have any expectation regarding the
2771 topologies of hierarchies other controllers might be on, each
2772 controller had to assume that all other controllers were attached to
2773 completely orthogonal hierarchies. This made it impossible, or at
2774 least very cumbersome, for controllers to cooperate with each other.
2776 In most use cases, putting controllers on hierarchies which are
2777 completely orthogonal to each other isn't necessary. What usually is
2778 called for is the ability to have differing levels of granularity
2779 depending on the specific controller. In other words, hierarchy may
2780 be collapsed from leaf towards root when viewed from specific
2781 controllers. For example, a given configuration might not care about
2782 how memory is distributed beyond a certain level while still wanting
2783 to control how CPU cycles are distributed.
Thread Granularity
------------------

cgroup v1 allowed threads of a process to belong to different cgroups.
2790 This didn't make sense for some controllers and those controllers
2791 ended up implementing different ways to ignore such situations but
2792 much more importantly it blurred the line between API exposed to
2793 individual applications and system management interface.
2795 Generally, in-process knowledge is available only to the process
2796 itself; thus, unlike service-level organization of processes,
2797 categorizing threads of a process requires active participation from
2798 the application which owns the target process.
2800 cgroup v1 had an ambiguously defined delegation model which got abused
2801 in combination with thread granularity. cgroups were delegated to
2802 individual applications so that they can create and manage their own
2803 sub-hierarchies and control resource distributions along them. This
effectively raised cgroup to the status of a syscall-like API exposed
to lay programs.
2807 First of all, cgroup has a fundamentally inadequate interface to be
2808 exposed this way. For a process to access its own knobs, it has to
2809 extract the path on the target hierarchy from /proc/self/cgroup,
2810 construct the path by appending the name of the knob to the path, open
2811 and then read and/or write to it. This is not only extremely clunky
2812 and unusual but also inherently racy. There is no conventional way to
define a transaction across the required steps and nothing can guarantee
2814 that the process would actually be operating on its own sub-hierarchy.
2816 cgroup controllers implemented a number of knobs which would never be
2817 accepted as public APIs because they were just adding control knobs to
2818 system-management pseudo filesystem. cgroup ended up with interface
2819 knobs which were not properly abstracted or refined and directly
2820 revealed kernel internal details. These knobs got exposed to
2821 individual applications through the ill-defined delegation mechanism
2822 effectively abusing cgroup as a shortcut to implementing public APIs
2823 without going through the required scrutiny.
2825 This was painful for both userland and kernel. Userland ended up with
misbehaving and poorly abstracted interfaces and the kernel
inadvertently exposing and becoming locked into those constructs.
2830 Competition Between Inner Nodes and Threads
2831 -------------------------------------------
2833 cgroup v1 allowed threads to be in any cgroups which created an
2834 interesting problem where threads belonging to a parent cgroup and its
2835 children cgroups competed for resources. This was nasty as two
2836 different types of entities competed and there was no obvious way to
2837 settle it. Different controllers did different things.
2839 The cpu controller considered threads and cgroups as equivalents and
2840 mapped nice levels to cgroup weights. This worked for some cases but
2841 fell flat when children wanted to be allocated specific ratios of CPU
2842 cycles and the number of internal threads fluctuated - the ratios
2843 constantly changed as the number of competing entities fluctuated.
2844 There also were other issues. The mapping from nice level to weight
2845 wasn't obvious or universal, and there were various other knobs which
2846 simply weren't available for threads.
2848 The io controller implicitly created a hidden leaf node for each
2849 cgroup to host the threads. The hidden leaf had its own copies of all
2850 the knobs with ``leaf_`` prefixed. While this allowed equivalent
2851 control over internal threads, it was with serious drawbacks. It
2852 always added an extra layer of nesting which wouldn't be necessary
otherwise, made the interface messy and significantly complicated the
implementation.
2856 The memory controller didn't have a way to control what happened
2857 between internal tasks and child cgroups and the behavior was not
2858 clearly defined. There were attempts to add ad-hoc behaviors and
2859 knobs to tailor the behavior to specific workloads which would have
2860 led to problems extremely difficult to resolve in the long term.
2862 Multiple controllers struggled with internal tasks and came up with
2863 different ways to deal with it; unfortunately, all the approaches were
2864 severely flawed and, furthermore, the widely different behaviors
2865 made cgroup as a whole highly inconsistent.
This clearly is a problem which needs to be addressed from cgroup core
in a uniform way.
2871 Other Interface Issues
2872 ----------------------
2874 cgroup v1 grew without oversight and developed a large number of
2875 idiosyncrasies and inconsistencies. One issue on the cgroup core side
2876 was how an empty cgroup was notified - a userland helper binary was
2877 forked and executed for each event. The event delivery wasn't
2878 recursive or delegatable. The limitations of the mechanism also led
to in-kernel event delivery filtering mechanism further complicating
the interface.
2882 Controller interfaces were problematic too. An extreme example is
2883 controllers completely ignoring hierarchical organization and treating
2884 all cgroups as if they were all located directly under the root
2885 cgroup. Some controllers exposed a large amount of inconsistent
2886 implementation details to userland.
2888 There also was no consistency across controllers. When a new cgroup
2889 was created, some controllers defaulted to not imposing extra
2890 restrictions while others disallowed any resource usage until
2891 explicitly configured. Configuration knobs for the same type of
2892 control used widely differing naming schemes and formats. Statistics
2893 and information knobs were named arbitrarily and used different
2894 formats and units even in the same controller.
2896 cgroup v2 establishes common conventions where appropriate and updates
2897 controllers so that they expose minimal and consistent interfaces.
2900 Controller Issues and Remedies
2901 ------------------------------
Memory
~~~~~~

The original lower boundary, the soft limit, is defined as a limit
2907 that is per default unset. As a result, the set of cgroups that
2908 global reclaim prefers is opt-in, rather than opt-out. The costs for
2909 optimizing these mostly negative lookups are so high that the
2910 implementation, despite its enormous size, does not even provide the
2911 basic desirable behavior. First off, the soft limit has no
2912 hierarchical meaning. All configured groups are organized in a global
2913 rbtree and treated like equal peers, regardless where they are located
2914 in the hierarchy. This makes subtree delegation impossible. Second,
2915 the soft limit reclaim pass is so aggressive that it not just
2916 introduces high allocation latencies into the system, but also impacts
2917 system performance due to overreclaim, to the point where the feature
2918 becomes self-defeating.
2920 The memory.low boundary on the other hand is a top-down allocated
2921 reserve. A cgroup enjoys reclaim protection when it's within its
2922 effective low, which makes delegation of subtrees possible. It also
2923 enjoys having reclaim pressure proportional to its overage when
2924 above its effective low.
2926 The original high boundary, the hard limit, is defined as a strict
2927 limit that can not budge, even if the OOM killer has to be called.
2928 But this generally goes against the goal of making the most out of the
2929 available memory. The memory consumption of workloads varies during
2930 runtime, and that requires users to overcommit. But doing that with a
2931 strict upper limit requires either a fairly accurate prediction of the
2932 working set size or adding slack to the limit. Since working set size
2933 estimation is hard and error prone, and getting it wrong results in
2934 OOM kills, most users tend to err on the side of a looser limit and
2935 end up wasting precious resources.
2937 The memory.high boundary on the other hand can be set much more
2938 conservatively. When hit, it throttles allocations by forcing them
2939 into direct reclaim to work off the excess, but it never invokes the
2940 OOM killer. As a result, a high boundary that is chosen too
2941 aggressively will not terminate the processes, but instead it will
2942 lead to gradual performance degradation. The user can monitor this
2943 and make corrections until the minimal memory footprint that still
2944 gives acceptable performance is found.
2946 In extreme cases, with many concurrent allocations and a complete
2947 breakdown of reclaim progress within the group, the high boundary can
2948 be exceeded. But even then it's mostly better to satisfy the
2949 allocation from the slack available in other groups or the rest of the
2950 system than killing the group. Otherwise, memory.max is there to
2951 limit this type of spillover and ultimately contain buggy or even
2952 malicious applications.
2954 Setting the original memory.limit_in_bytes below the current usage was
2955 subject to a race condition, where concurrent charges could cause the
2956 limit setting to fail. memory.max on the other hand will first set the
2957 limit to prevent new charges, and then reclaim and OOM kill until the
2958 new limit is met - or the task writing to memory.max is killed.
2960 The combined memory+swap accounting and limiting is replaced by real
2961 control over swap space.
2963 The main argument for a combined memory+swap facility in the original
2964 cgroup design was that global or parental pressure would always be
2965 able to swap all anonymous memory of a child group, regardless of the
2966 child's own (possibly untrusted) configuration. However, untrusted
2967 groups can sabotage swapping by other means - such as referencing its
2968 anonymous memory in a tight loop - and an admin can not assume full
2969 swappability when overcommitting untrusted jobs.
2971 For trusted jobs, on the other hand, a combined counter is not an
2972 intuitive userspace interface, and it flies in the face of the idea
2973 that cgroup controllers should account and limit specific physical
2974 resources. Swap space is a resource like all others in the system,
2975 and that's why unified hierarchy allows distributing it separately.