1 .. SPDX-License-Identifier: GPL-2.0
7 ===================== ======================================= ================
8 /proc/sys Terrehon Bowden <terrehon@pacbell.net>, October 7 1999
9 Bodo Bauer <bb@ricochet.net>
10 2.4.x update Jorge Nerin <comandante@zaralinux.com> November 14 2000
11 move /proc/sys Shen Feng <shen@cn.fujitsu.com> April 1 2009
12 fixes/update part 1.1 Stefani Seibold <stefani@seibold.net> June 9 2009
13 ===================== ======================================= ================
20 0.1 Introduction/Credits
23 1 Collecting System Information
24 1.1 Process-Specific Subdirectories
1.3 Networking info in /proc/net
1.4 SCSI info
1.5 Parallel port info in /proc/parport
1.6 TTY info in /proc/tty
1.7 Miscellaneous kernel statistics in /proc/stat
1.8 Ext4 file system parameters
34 2 Modifying System Parameters
36 3 Per-Process Parameters
37 3.1 /proc/<pid>/oom_adj & /proc/<pid>/oom_score_adj - Adjust the oom-killer
39 3.2 /proc/<pid>/oom_score - Display current oom-killer score
40 3.3 /proc/<pid>/io - Display the IO accounting fields
41 3.4 /proc/<pid>/coredump_filter - Core dump filtering settings
42 3.5 /proc/<pid>/mountinfo - Information about mounts
43 3.6 /proc/<pid>/comm & /proc/<pid>/task/<tid>/comm
44 3.7 /proc/<pid>/task/<tid>/children - Information about task children
45 3.8 /proc/<pid>/fdinfo/<fd> - Information about opened file
46 3.9 /proc/<pid>/map_files - Information about memory mapped files
47 3.10 /proc/<pid>/timerslack_ns - Task timerslack value
48 3.11 /proc/<pid>/patch_state - Livepatch patch operation state
49 3.12 /proc/<pid>/arch_status - Task architecture specific information
59 0.1 Introduction/Credits
60 ------------------------
62 This documentation is part of a soon (or so we hope) to be released book on
63 the SuSE Linux distribution. As there is no complete documentation for the
64 /proc file system and we've used many freely available sources to write these
65 chapters, it seems only fair to give the work back to the Linux community.
66 This work is based on the 2.2.* kernel version and the upcoming 2.4.*. I'm
67 afraid it's still far from complete, but we hope it will be useful. As far as
68 we know, it is the first 'all-in-one' document about the /proc file system. It
69 is focused on the Intel x86 hardware, so if you are looking for PPC, ARM,
70 SPARC, AXP, etc., features, you probably won't find what you are looking for.
71 It also only covers IPv4 networking, not IPv6 nor other protocols - sorry. But
72 additions and patches are welcome and will be added to this document if you
75 We'd like to thank Alan Cox, Rik van Riel, and Alexey Kuznetsov and a lot of
76 other people for help compiling this documentation. We'd also like to extend a
77 special thank you to Andi Kleen for documentation, which we relied on heavily
78 to create this document, as well as the additional information he provided.
79 Thanks to everybody else who contributed source or docs to the Linux kernel
80 and helped create a great piece of software... :)
82 If you have any comments, corrections or additions, please don't hesitate to
contact Bodo Bauer at bb@ricochet.net. We'll be happy to add them to this
document.
86 The latest version of this document is available online at
87 http://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/proc.html
If the above direction does not work for you, you could try the kernel
90 mailing list at linux-kernel@vger.kernel.org and/or try to reach me at
91 comandante@zaralinux.com.
96 We don't guarantee the correctness of this document, and if you come to us
97 complaining about how you screwed up your system because of incorrect
98 documentation, we won't feel responsible...
100 Chapter 1: Collecting System Information
101 ========================================
105 * Investigating the properties of the pseudo file system /proc and its
106 ability to provide information on the running Linux system
107 * Examining /proc's structure
* Uncovering various information about the kernel and the processes running
  on the system
111 ------------------------------------------------------------------------------
113 The proc file system acts as an interface to internal data structures in the
114 kernel. It can be used to obtain information about the system and to change
115 certain kernel parameters at runtime (sysctl).
117 First, we'll take a look at the read-only parts of /proc. In Chapter 2, we
118 show you how you can use /proc/sys to change settings.
120 1.1 Process-Specific Subdirectories
121 -----------------------------------
123 The directory /proc contains (among other things) one subdirectory for each
124 process running on the system, which is named after the process ID (PID).
126 The link 'self' points to the process reading the file system. Each process
127 subdirectory has the entries listed in Table 1-1.
129 Note that an open file descriptor to /proc/<pid> or to any of its
130 contained files or subdirectories does not prevent <pid> being reused
131 for some other process in the event that <pid> exits. Operations on
132 open /proc/<pid> file descriptors corresponding to dead processes
133 never act on any new process that the kernel may, through chance, have
134 also assigned the process ID <pid>. Instead, operations on these FDs
135 usually fail with ESRCH.
137 .. table:: Table 1-1: Process specific entries in /proc
139 ============= ===============================================================
141 ============= ===============================================================
142 clear_refs Clears page referenced bits shown in smaps output
143 cmdline Command line arguments
144 cpu Current and last cpu in which it was executed (2.4)(smp)
145 cwd Link to the current working directory
146 environ Values of environment variables
147 exe Link to the executable of this process
148 fd Directory, which contains all file descriptors
149 maps Memory maps to executables and library files (2.4)
150 mem Memory held by this process
151 root Link to the root directory of this process
153 statm Process memory status information
154 status Process status in human readable form
155 wchan Present with CONFIG_KALLSYMS=y: it shows the kernel function
156 symbol the task is blocked in - or "0" if not blocked.
158 stack Report full stack trace, enable via CONFIG_STACKTRACE
159 smaps An extension based on maps, showing the memory consumption of
160 each mapping and flags associated with it
161 smaps_rollup Accumulated smaps stats for all mappings of the process. This
162 can be derived from smaps, but is faster and more convenient
163 numa_maps An extension based on maps, showing the memory locality and
164 binding policy as well as mem usage (in pages) of each mapping.
165 ============= ===============================================================
167 For example, to get the status information of a process, all you have to do is
168 read the file /proc/PID/status::
170 >cat /proc/self/status
200 SigPnd: 0000000000000000
201 ShdPnd: 0000000000000000
202 SigBlk: 0000000000000000
203 SigIgn: 0000000000000000
204 SigCgt: 0000000000000000
205 CapInh: 00000000fffffeff
206 CapPrm: 0000000000000000
207 CapEff: 0000000000000000
208 CapBnd: ffffffffffffffff
209 CapAmb: 0000000000000000
212 Speculation_Store_Bypass: thread vulnerable
213 SpeculationIndirectBranch: conditional enabled
214 voluntary_ctxt_switches: 0
215 nonvoluntary_ctxt_switches: 1
217 This shows you nearly the same information you would get if you viewed it with
218 the ps command. In fact, ps uses the proc file system to obtain its
219 information. But you get a more detailed view of the process by reading the
file /proc/PID/status. Its fields are described in Table 1-2.
222 The statm file contains more detailed information about the process
223 memory usage. Its seven fields are explained in Table 1-3. The stat file
224 contains detailed information about the process itself. Its fields are
225 explained in Table 1-4.
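
For a quick look, the seven statm values can be read and labelled directly
from a shell. This is only a sketch, reading the shell's own statm, with
field names as in Table 1-3 below::

  # statm holds seven space-separated counters, all in pages
  read -r size resident shared trs lrs drs dt < /proc/self/statm
  echo "size=$size resident=$resident shared=$shared"
  echo "trs=$trs lrs=$lrs drs=$drs dt=$dt"
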
(for SMP CONFIG users)

To make accounting scalable, RSS-related information is handled in an
asynchronous manner and the value may not be very precise. To get a precise
snapshot of a given moment, you can read the /proc/<pid>/smaps file, which
scans the page table. It is slow but very precise.
234 .. table:: Table 1-2: Contents of the status files (as of 4.19)
236 ========================== ===================================================
238 ========================== ===================================================
239 Name filename of the executable
240 Umask file mode creation mask
241 State state (R is running, S is sleeping, D is sleeping
242 in an uninterruptible wait, Z is zombie,
243 T is traced or stopped)
245 Ngid NUMA group ID (0 if none)
247 PPid process id of the parent process
248 TracerPid PID of process tracing this process (0 if not)
249 Uid Real, effective, saved set, and file system UIDs
250 Gid Real, effective, saved set, and file system GIDs
251 FDSize number of file descriptor slots currently allocated
252 Groups supplementary group list
253 NStgid descendant namespace thread group ID hierarchy
254 NSpid descendant namespace process ID hierarchy
255 NSpgid descendant namespace process group ID hierarchy
256 NSsid descendant namespace session ID hierarchy
257 VmPeak peak virtual memory size
258 VmSize total program size
259 VmLck locked memory size
260 VmPin pinned memory size
261 VmHWM peak resident set size ("high water mark")
262 VmRSS size of memory portions. It contains the three
264 (VmRSS = RssAnon + RssFile + RssShmem)
265 RssAnon size of resident anonymous memory
266 RssFile size of resident file mappings
267 RssShmem size of resident shmem memory (includes SysV shm,
268 mapping of tmpfs and shared anonymous mappings)
269 VmData size of private data segments
270 VmStk size of stack segments
271 VmExe size of text segment
272 VmLib size of shared library code
273 VmPTE size of page table entries
274 VmSwap amount of swap used by anonymous private data
275 (shmem swap usage is not included)
276 HugetlbPages size of hugetlb memory portions
277 CoreDumping process's memory is currently being dumped
278 (killing the process may lead to a corrupted core)
279 THP_enabled process is allowed to use THP (returns 0 when
280 PR_SET_THP_DISABLE is set on the process
281 Threads number of threads
282 SigQ number of signals queued/max. number for queue
283 SigPnd bitmap of pending signals for the thread
284 ShdPnd bitmap of shared pending signals for the process
285 SigBlk bitmap of blocked signals
286 SigIgn bitmap of ignored signals
287 SigCgt bitmap of caught signals
288 CapInh bitmap of inheritable capabilities
289 CapPrm bitmap of permitted capabilities
290 CapEff bitmap of effective capabilities
291 CapBnd bitmap of capabilities bounding set
292 CapAmb bitmap of ambient capabilities
293 NoNewPrivs no_new_privs, like prctl(PR_GET_NO_NEW_PRIV, ...)
294 Seccomp seccomp mode, like prctl(PR_GET_SECCOMP, ...)
295 Speculation_Store_Bypass speculative store bypass mitigation status
296 SpeculationIndirectBranch indirect branch speculation mode
297 Cpus_allowed mask of CPUs on which this process may run
298 Cpus_allowed_list Same as previous, but in "list format"
299 Mems_allowed mask of memory nodes allowed to this process
300 Mems_allowed_list Same as previous, but in "list format"
301 voluntary_ctxt_switches number of voluntary context switches
302 nonvoluntary_ctxt_switches number of non voluntary context switches
303 ========================== ===================================================
306 .. table:: Table 1-3: Contents of the statm files (as of 2.6.8-rc3)
308 ======== =============================== ==============================
310 ======== =============================== ==============================
311 size total program size (pages) (same as VmSize in status)
312 resident size of memory portions (pages) (same as VmRSS in status)
313 shared number of pages that are shared (i.e. backed by a file, same
314 as RssFile+RssShmem in status)
315 trs number of pages that are 'code' (not including libs; broken,
316 includes data segment)
317 lrs number of pages of library (always 0 on 2.6)
318 drs number of pages of data/stack (including libs; broken,
319 includes library text)
320 dt number of dirty pages (always 0 on 2.6)
321 ======== =============================== ==============================
324 .. table:: Table 1-4: Contents of the stat files (as of 2.6.30-rc7)
326 ============= ===============================================================
328 ============= ===============================================================
330 tcomm filename of the executable
331 state state (R is running, S is sleeping, D is sleeping in an
332 uninterruptible wait, Z is zombie, T is traced or stopped)
333 ppid process id of the parent process
334 pgrp pgrp of the process
336 tty_nr tty the process uses
337 tty_pgrp pgrp of the tty
339 min_flt number of minor faults
340 cmin_flt number of minor faults with child's
341 maj_flt number of major faults
342 cmaj_flt number of major faults with child's
343 utime user mode jiffies
344 stime kernel mode jiffies
345 cutime user mode jiffies with child's
346 cstime kernel mode jiffies with child's
347 priority priority level
349 num_threads number of threads
350 it_real_value (obsolete, always 0)
351 start_time time the process started after system boot
352 vsize virtual memory size
353 rss resident set memory size
354 rsslim current limit in bytes on the rss
355 start_code address above which program text can run
356 end_code address below which program text can run
357 start_stack address of the start of the main process stack
358 esp current value of ESP
359 eip current value of EIP
360 pending bitmap of pending signals
361 blocked bitmap of blocked signals
362 sigign bitmap of ignored signals
363 sigcatch bitmap of caught signals
364 0 (place holder, used to be the wchan address,
365 use /proc/PID/wchan instead)
368 exit_signal signal to send to parent thread on exit
369 task_cpu which CPU the task is scheduled on
370 rt_priority realtime priority
371 policy scheduling policy (man sched_setscheduler)
372 blkio_ticks time spent waiting for block IO
373 gtime guest time of the task in jiffies
374 cgtime guest time of the task children in jiffies
375 start_data address above which program data+bss is placed
376 end_data address below which program data+bss is placed
377 start_brk address above which program heap can be expanded with brk()
378 arg_start address above which program command line is placed
379 arg_end address below which program command line is placed
380 env_start address above which program environment is placed
381 env_end address below which program environment is placed
382 exit_code the thread's exit_code in the form reported by the waitpid
384 ============= ===============================================================
386 The /proc/PID/maps file contains the currently mapped memory regions and
387 their access permissions.
391 address perms offset dev inode pathname
393 08048000-08049000 r-xp 00000000 03:00 8312 /opt/test
394 08049000-0804a000 rw-p 00001000 03:00 8312 /opt/test
395 0804a000-0806b000 rw-p 00000000 00:00 0 [heap]
396 a7cb1000-a7cb2000 ---p 00000000 00:00 0
397 a7cb2000-a7eb2000 rw-p 00000000 00:00 0
398 a7eb2000-a7eb3000 ---p 00000000 00:00 0
399 a7eb3000-a7ed5000 rw-p 00000000 00:00 0
400 a7ed5000-a8008000 r-xp 00000000 03:00 4222 /lib/libc.so.6
401 a8008000-a800a000 r--p 00133000 03:00 4222 /lib/libc.so.6
402 a800a000-a800b000 rw-p 00135000 03:00 4222 /lib/libc.so.6
403 a800b000-a800e000 rw-p 00000000 00:00 0
404 a800e000-a8022000 r-xp 00000000 03:00 14462 /lib/libpthread.so.0
405 a8022000-a8023000 r--p 00013000 03:00 14462 /lib/libpthread.so.0
406 a8023000-a8024000 rw-p 00014000 03:00 14462 /lib/libpthread.so.0
407 a8024000-a8027000 rw-p 00000000 00:00 0
408 a8027000-a8043000 r-xp 00000000 03:00 8317 /lib/ld-linux.so.2
409 a8043000-a8044000 r--p 0001b000 03:00 8317 /lib/ld-linux.so.2
410 a8044000-a8045000 rw-p 0001c000 03:00 8317 /lib/ld-linux.so.2
411 aff35000-aff4a000 rw-p 00000000 00:00 0 [stack]
412 ffffe000-fffff000 r-xp 00000000 00:00 0 [vdso]
414 where "address" is the address space in the process that it occupies, "perms"
415 is a set of permissions::
421 p = private (copy on write)
423 "offset" is the offset into the mapping, "dev" is the device (major:minor), and
424 "inode" is the inode on that device. 0 indicates that no inode is associated
425 with the memory region, as the case would be with BSS (uninitialized data).
426 The "pathname" shows the name associated file for this mapping. If the mapping
427 is not associated with a file:
429 ============= ====================================
430 [heap] the heap of the program
431 [stack] the stack of the main process
432 [vdso] the "virtual dynamic shared object",
433 the kernel system call handler
[anon:<name>] an anonymous mapping that has been
              named by userspace
436 ============= ====================================
438 or if empty, the mapping is anonymous.
440 The /proc/PID/smaps is an extension based on maps, showing the memory
441 consumption for each of the process's mappings. For each mapping (aka Virtual
442 Memory Area, or VMA) there is a series of lines such as the following::
444 08048000-080bc000 r-xp 00000000 03:02 13130 /bin/bash
462 Private_Hugetlb: 0 kB
469 VmFlags: rd ex mr mw me dw
471 The first of these lines shows the same information as is displayed for the
472 mapping in /proc/PID/maps. Following lines show the size of the mapping
473 (size); the size of each page allocated when backing a VMA (KernelPageSize),
474 which is usually the same as the size in the page table entries; the page size
475 used by the MMU when backing a VMA (in most cases, the same as KernelPageSize);
476 the amount of the mapping that is currently resident in RAM (RSS); the
477 process' proportional share of this mapping (PSS); and the number of clean and
478 dirty shared and private pages in the mapping.
480 The "proportional set size" (PSS) of a process is the count of pages it has
481 in memory, where each page is divided by the number of processes sharing it.
482 So if a process has 1000 pages all to itself, and 1000 shared with one other
483 process, its PSS will be 1500. "Pss_Dirty" is the portion of PSS which
484 consists of dirty pages. ("Pss_Clean" is not included, but it can be
485 calculated by subtracting "Pss_Dirty" from "Pss".)
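
As an illustration, the per-mapping Pss values can be summed to obtain a
process-wide figure (shown here for the reading process itself; smaps values
are reported in kB). This is only a sketch - the smaps_rollup file described
below provides the same total much more cheaply::

  # add up the Pss of every mapping of the current process
  awk '/^Pss:/ {total += $2} END {print total " kB"}' /proc/self/smaps
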
487 Note that even a page which is part of a MAP_SHARED mapping, but has only
488 a single pte mapped, i.e. is currently used by only one process, is accounted
489 as private and not as shared.
491 "Referenced" indicates the amount of memory currently marked as referenced or
494 "Anonymous" shows the amount of memory that does not belong to any file. Even
495 a mapping associated with a file may contain anonymous pages: when MAP_PRIVATE
496 and a page is modified, the file page is replaced by a private anonymous copy.
498 "LazyFree" shows the amount of memory which is marked by madvise(MADV_FREE).
499 The memory isn't freed immediately with madvise(). It's freed in memory
500 pressure if the memory is clean. Please note that the printed value might
501 be lower than the real value due to optimizations used in the current
502 implementation. If this is not desirable please file a bug report.
504 "AnonHugePages" shows the ammount of memory backed by transparent hugepage.
506 "ShmemPmdMapped" shows the ammount of shared (shmem/tmpfs) memory backed by
509 "Shared_Hugetlb" and "Private_Hugetlb" show the ammounts of memory backed by
510 hugetlbfs page which is *not* counted in "RSS" or "PSS" field for historical
511 reasons. And these are not included in {Shared,Private}_{Clean,Dirty} field.
513 "Swap" shows how much would-be-anonymous memory is also used, but out on swap.
515 For shmem mappings, "Swap" includes also the size of the mapped (and not
516 replaced by copy-on-write) part of the underlying shmem object out on swap.
517 "SwapPss" shows proportional swap share of this mapping. Unlike "Swap", this
does not take into account swapped out pages of underlying shmem objects.
519 "Locked" indicates whether the mapping is locked in memory or not.
521 "THPeligible" indicates whether the mapping is eligible for allocating THP
522 pages as well as the THP is PMD mappable or not - 1 if true, 0 otherwise.
523 It just shows the current status.
525 "VmFlags" field deserves a separate description. This member represents the
526 kernel flags associated with the particular virtual memory area in two letter
527 encoded manner. The codes are the following:
529 == =======================================
gd stack segment grows down
540 dw disabled write to the mapped file
541 lo pages are locked in memory
542 io memory mapped I/O area
543 sr sequential read advise provided
544 rr random read advise provided
545 dc do not copy area on fork
546 de do not expand area on remapping
547 ac area is accountable
548 nr swap space is not reserved for the area
549 ht area uses huge tlb pages
550 sf synchronous page fault
551 ar architecture specific flag
553 dd do not include area into core dump
556 hg huge page advise flag
557 nh no huge page advise flag
558 mg mergable advise flag
559 bt arm64 BTI guarded page
560 mt arm64 MTE allocation tags are enabled
561 um userfaultfd missing tracking
562 uw userfaultfd wr-protect tracking
563 == =======================================
Note that there is no guarantee that every flag and associated mnemonic will
be present in all future kernel releases. Flags may vanish or, conversely, new
ones may be added, and the interpretation of their meaning might change as
well. Each consumer of these flags therefore has to follow each specific
kernel version for the exact semantics.
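
For a rough overview of which flag combinations occur in a process's mappings,
something like the following can be used (shown for the reading process; the
exact flags seen depend on the kernel version and the workload)::

  # count how often each VmFlags combination appears
  grep '^VmFlags:' /proc/self/smaps | sort | uniq -c | sort -rn
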
This file is only present if the CONFIG_MMU kernel configuration option is
enabled.
574 Note: reading /proc/PID/maps or /proc/PID/smaps is inherently racy (consistent
575 output can be achieved only in the single read call).
This typically manifests when doing partial reads of these files while the
memory map is being modified. Despite the races, we do provide the following
guarantees:

581 1) The mapped addresses never go backwards, which implies no two
582 regions will ever overlap.
583 2) If there is something at a given vaddr during the entirety of the
584 life of the smaps/maps walk, there will be some output for it.
586 The /proc/PID/smaps_rollup file includes the same fields as /proc/PID/smaps,
587 but their values are the sums of the corresponding values for all mappings of
the process. Additionally, it contains these fields:

- Pss_Anon
- Pss_File
- Pss_Shmem

594 They represent the proportional shares of anonymous, file, and shmem pages, as
595 described for smaps above. These fields are omitted in smaps since each
596 mapping identifies the type (anon, file, or shmem) of all pages it contains.
597 Thus all information in smaps_rollup can be derived from smaps, but at a
598 significantly higher cost.
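
For example, a handful of process-wide totals can be obtained with a single,
cheap read of smaps_rollup (the fields selected here are only an
illustration)::

  # one read instead of summing over every mapping in smaps
  grep -E '^(Rss|Pss|Swap):' /proc/self/smaps_rollup
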
The /proc/PID/clear_refs file is used to reset the PG_Referenced and
ACCESSED/YOUNG bits on both physical and virtual pages associated with a
process, and the soft-dirty bit on pte (see
Documentation/admin-guide/mm/soft-dirty.rst for details).
604 To clear the bits for all the pages associated with the process::
606 > echo 1 > /proc/PID/clear_refs
608 To clear the bits for the anonymous pages associated with the process::
610 > echo 2 > /proc/PID/clear_refs
612 To clear the bits for the file mapped pages associated with the process::
614 > echo 3 > /proc/PID/clear_refs
616 To clear the soft-dirty bit::
618 > echo 4 > /proc/PID/clear_refs
To reset the peak resident set size ("high water mark") to the process's
current value::
623 > echo 5 > /proc/PID/clear_refs
625 Any other value written to /proc/PID/clear_refs will have no effect.
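
A typical use of clear_refs is estimating a working set: reset the referenced
bits, let the workload run, then look at how much memory was touched. A
minimal sketch, assuming $PID holds the target process ID and the caller has
the required privileges::

  # reset the referenced bits for all pages of the target process
  echo 1 > /proc/$PID/clear_refs
  # let the workload run for a while
  sleep 10
  # Referenced now reflects only pages accessed since the reset
  grep '^Referenced:' /proc/$PID/smaps_rollup
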
The /proc/pid/pagemap file gives the PFN of each page, which can be used to
find the page flags using /proc/kpageflags and the number of times a page is
mapped using /proc/kpagecount. For a detailed explanation, see
630 Documentation/admin-guide/mm/pagemap.rst.
632 The /proc/pid/numa_maps is an extension based on maps, showing the memory
633 locality and binding policy, as well as the memory usage (in pages) of
each mapping. The output follows a general format where mapping details are
summarized, separated by blank spaces, one mapping per line::
637 address policy mapping details
639 00400000 default file=/usr/local/bin/app mapped=1 active=0 N3=1 kernelpagesize_kB=4
640 00600000 default file=/usr/local/bin/app anon=1 dirty=1 N3=1 kernelpagesize_kB=4
641 3206000000 default file=/lib64/ld-2.12.so mapped=26 mapmax=6 N0=24 N3=2 kernelpagesize_kB=4
642 320621f000 default file=/lib64/ld-2.12.so anon=1 dirty=1 N3=1 kernelpagesize_kB=4
643 3206220000 default file=/lib64/ld-2.12.so anon=1 dirty=1 N3=1 kernelpagesize_kB=4
644 3206221000 default anon=1 dirty=1 N3=1 kernelpagesize_kB=4
645 3206800000 default file=/lib64/libc-2.12.so mapped=59 mapmax=21 active=55 N0=41 N3=18 kernelpagesize_kB=4
646 320698b000 default file=/lib64/libc-2.12.so
647 3206b8a000 default file=/lib64/libc-2.12.so anon=2 dirty=2 N3=2 kernelpagesize_kB=4
648 3206b8e000 default file=/lib64/libc-2.12.so anon=1 dirty=1 N3=1 kernelpagesize_kB=4
649 3206b8f000 default anon=3 dirty=3 active=1 N3=3 kernelpagesize_kB=4
650 7f4dc10a2000 default anon=3 dirty=3 N3=3 kernelpagesize_kB=4
651 7f4dc10b4000 default anon=2 dirty=2 active=1 N3=2 kernelpagesize_kB=4
652 7f4dc1200000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N3=1 kernelpagesize_kB=2048
653 7fff335f0000 default stack anon=3 dirty=3 N3=3 kernelpagesize_kB=4
654 7fff3369d000 default mapped=1 mapmax=35 active=0 N3=1 kernelpagesize_kB=4
658 "address" is the starting address for the mapping;
660 "policy" reports the NUMA memory policy set for the mapping (see Documentation/admin-guide/mm/numa_memory_policy.rst);
662 "mapping details" summarizes mapping data such as mapping type, page usage counters,
663 node locality page counters (N0 == node0, N1 == node1, ...) and the kernel page
size, in KB, that is backing the mapping.
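
As a sketch, on NUMA kernels the per-node counters can be summed to see how
many of a process's pages currently sit on a given node (node 0 is just an
example)::

  # add up the N0=<pages> counters across all mappings of the current process
  grep -o 'N0=[0-9]*' /proc/self/numa_maps |
      awk -F= '{sum += $2} END {print sum + 0, "pages on node 0"}'
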
669 Similar to the process entries, the kernel data files give information about
670 the running kernel. The files used to obtain this information are contained in
/proc and are listed in Table 1-5. Not all of these will be present on your
system. Which files are there and which are missing depends on the kernel
configuration and the loaded modules.
675 .. table:: Table 1-5: Kernel info in /proc
677 ============ ===============================================================
679 ============ ===============================================================
680 apm Advanced power management info
681 buddyinfo Kernel memory allocator information (see text) (2.5)
682 bus Directory containing bus specific information
683 cmdline Kernel command line
684 cpuinfo Info about the CPU
685 devices Available devices (block and character)
dma          Used DMA channels
687 filesystems Supported filesystems
688 driver Various drivers grouped here, currently rtc (2.4)
689 execdomains Execdomains, related to security (2.4)
690 fb Frame Buffer devices (2.4)
691 fs File system parameters, currently nfs/exports (2.4)
692 ide Directory containing info about the IDE subsystem
693 interrupts Interrupt usage
694 iomem Memory map (2.4)
695 ioports I/O port usage
696 irq Masks for irq to cpu affinity (2.4)(smp?)
697 isapnp ISA PnP (Plug&Play) Info (2.4)
698 kcore Kernel core image (can be ELF or A.OUT(deprecated in 2.4))
700 ksyms Kernel symbol table
701 loadavg Load average of last 1, 5 & 15 minutes;
702 number of processes currently runnable (running or on ready queue);
703 total number of processes in system;
705 All fields are separated by one space except "number of
706 processes currently runnable" and "total number of processes
707 in system", which are separated by a slash ('/'). Example:
708 0.61 0.61 0.55 3/828 22084
712 modules List of loaded modules
713 mounts Mounted filesystems
714 net Networking info (see text)
715 pagetypeinfo Additional page allocator information (see text) (2.5)
716 partitions Table of partitions known to the system
717 pci Deprecated info of PCI bus (new way -> /proc/bus/pci/,
718 decoupled by lspci (2.4)
720 scsi SCSI info (see text)
721 slabinfo Slab pool info
722 softirqs softirq usage
723 stat Overall statistics
724 swaps Swap space utilization
726 sysvipc Info of SysVIPC Resources (msg, sem, shm) (2.4)
727 tty Info of tty drivers
728 uptime Wall clock since boot, combined idle time of all cpus
729 version Kernel version
730 video bttv info of video resources (2.4)
731 vmallocinfo Show vmalloced areas
732 ============ ===============================================================
734 You can, for example, check which interrupts are currently in use and what
735 they are used for by looking in the file /proc/interrupts::
737 > cat /proc/interrupts
739 0: 8728810 XT-PIC timer
740 1: 895 XT-PIC keyboard
742 3: 531695 XT-PIC aha152x
743 4: 2014133 XT-PIC serial
744 5: 44401 XT-PIC pcnet_cs
747 12: 182918 XT-PIC PS/2 Mouse
749 14: 1232265 XT-PIC ide0
In 2.4.* a couple of lines were added to this file, LOC & ERR (this time the
output is from an SMP machine)::
756 > cat /proc/interrupts
759 0: 1243498 1214548 IO-APIC-edge timer
760 1: 8949 8958 IO-APIC-edge keyboard
761 2: 0 0 XT-PIC cascade
762 5: 11286 10161 IO-APIC-edge soundblaster
763 8: 1 0 IO-APIC-edge rtc
764 9: 27422 27407 IO-APIC-edge 3c503
765 12: 113645 113873 IO-APIC-edge PS/2 Mouse
767 14: 22491 24012 IO-APIC-edge ide0
768 15: 2183 2415 IO-APIC-edge ide1
769 17: 30564 30414 IO-APIC-level eth0
770 18: 177 164 IO-APIC-level bttv
NMI is incremented in this case because every timer interrupt generates an NMI
776 (Non Maskable Interrupt) which is used by the NMI Watchdog to detect lockups.
778 LOC is the local interrupt counter of the internal APIC of every CPU.
ERR is incremented in the case of errors on the IO-APIC bus (the bus that
connects the CPUs in an SMP system). This means that an error has been
detected; the IO-APIC automatically retries the transmission, so it should not
be a big problem, but you should read the SMP-FAQ.
785 In 2.6.2* /proc/interrupts was expanded again. This time the goal was for
786 /proc/interrupts to display every IRQ vector in use by the system, not
787 just those considered 'most important'. The new vectors are:
790 interrupt raised when a machine check threshold counter
791 (typically counting ECC corrected errors of memory or cache) exceeds
792 a configurable threshold. Only available on some systems.
795 a thermal event interrupt occurs when a temperature threshold
796 has been exceeded for the CPU. This interrupt may also be generated
797 when the temperature drops back to normal.
800 a spurious interrupt is some interrupt that was raised then lowered
801 by some IO device before it could be fully processed by the APIC. Hence
802 the APIC sees the interrupt but does not know what device it came from.
For this case the APIC will generate the interrupt with an IRQ vector
804 of 0xff. This might also be generated by chipset bugs.
807 rescheduling, call and TLB flush interrupts are
808 sent from one CPU to another per the needs of the OS. Typically,
809 their statistics are used by kernel developers and interested users to
810 determine the occurrence of interrupts of the given type.
812 The above IRQ vectors are displayed only when relevant. For example,
813 the threshold vector does not exist on x86_64 platforms. Others are
814 suppressed when the system is a uniprocessor. As of this writing, only
815 i386 and x86_64 platforms support the new IRQ vector displays.
Of some interest is the introduction of the /proc/irq directory in 2.4.
It can be used to set IRQ-to-CPU affinity: you can "hook" an IRQ to only one
CPU, or exclude a CPU from handling IRQs. The contents of the irq subdir are
one subdir for each IRQ, and two files: default_smp_affinity and
prof_cpu_mask.
826 0 10 12 14 16 18 2 4 6 8 prof_cpu_mask
827 1 11 13 15 17 19 3 5 7 9 default_smp_affinity
831 smp_affinity is a bitmask, in which you can specify which CPUs can handle the
832 IRQ. You can set it by doing::
834 > echo 1 > /proc/irq/10/smp_affinity
This means that only the first CPU will handle the IRQ, but you can also echo
5, which means that only the first and third CPUs can handle the IRQ.
839 The contents of each smp_affinity file is the same by default::
841 > cat /proc/irq/0/smp_affinity
844 There is an alternate interface, smp_affinity_list which allows specifying
845 a CPU range instead of a bitmask::
847 > cat /proc/irq/0/smp_affinity_list
850 The default_smp_affinity mask applies to all non-active IRQs, which are the
851 IRQs which have not yet been allocated/activated, and hence which lack a
852 /proc/irq/[0-9]* directory.
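
For instance, the list interface also accepts writes of CPU ranges (IRQ 10 and
the range used here are only examples, and root privileges are required)::

  # restrict IRQ 10 to CPUs 2-5, then read the setting back
  echo 2-5 > /proc/irq/10/smp_affinity_list
  cat /proc/irq/10/smp_affinity_list
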
854 The node file on an SMP system shows the node to which the device using the IRQ
855 reports itself as being attached. This hardware locality information does not
856 include information about any possible driver locality preference.
858 prof_cpu_mask specifies which CPUs are to be profiled by the system wide
859 profiler. Default value is ffffffff (all CPUs if there are only 32 of them).
861 The way IRQs are routed is handled by the IO-APIC, and it's Round Robin
862 between all the CPUs which are allowed to handle it. As usual the kernel has
863 more info than you and does a better job than you, so the defaults are the
best choice for almost everyone. [Note this applies only to those IO-APICs
that support "Round Robin" interrupt distribution.]
867 There are three more important subdirectories in /proc: net, scsi, and sys.
868 The general rule is that the contents, or even the existence of these
869 directories, depend on your kernel configuration. If SCSI is not enabled, the
directory scsi may not exist. The same is true for net, which is there only
when networking support is present in the running kernel.
873 The slabinfo file gives information about memory usage at the slab level.
874 Linux uses slab pools for memory management above page level in version 2.2.
875 Commonly used objects have their own slab pool (such as network buffers,
876 directory cache, and so on).
880 > cat /proc/buddyinfo
882 Node 0, zone DMA 0 4 5 4 4 3 ...
883 Node 0, zone Normal 1 0 0 1 101 8 ...
884 Node 0, zone HighMem 2 0 0 1 1 0 ...
886 External fragmentation is a problem under some workloads, and buddyinfo is a
887 useful tool for helping diagnose these problems. Buddyinfo will give you a
clue as to how big an area you can safely allocate, or why a previous
allocation failed.
891 Each column represents the number of pages of a certain order which are
892 available. In this case, there are 0 chunks of 2^0*PAGE_SIZE available in
893 ZONE_DMA, 4 chunks of 2^1*PAGE_SIZE in ZONE_DMA, 101 chunks of 2^4*PAGE_SIZE
894 available in ZONE_NORMAL, etc...
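
For example, to see at a glance how many blocks of the highest order are still
free in each zone (assuming the usual maximum order of 10, i.e. the last
column)::

  # print each zone and its count of free highest-order blocks
  awk '{print $1, $2, $4 ": " $NF " free blocks at the highest order"}' /proc/buddyinfo
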
More information relevant to external fragmentation can be found in
pagetypeinfo::

899 > cat /proc/pagetypeinfo
903 Free pages count per migrate type at order 0 1 2 3 4 5 6 7 8 9 10
904 Node 0, zone DMA, type Unmovable 0 0 0 1 1 1 1 1 1 1 0
905 Node 0, zone DMA, type Reclaimable 0 0 0 0 0 0 0 0 0 0 0
906 Node 0, zone DMA, type Movable 1 1 2 1 2 1 1 0 1 0 2
907 Node 0, zone DMA, type Reserve 0 0 0 0 0 0 0 0 0 1 0
908 Node 0, zone DMA, type Isolate 0 0 0 0 0 0 0 0 0 0 0
909 Node 0, zone DMA32, type Unmovable 103 54 77 1 1 1 11 8 7 1 9
910 Node 0, zone DMA32, type Reclaimable 0 0 2 1 0 0 0 0 1 0 0
911 Node 0, zone DMA32, type Movable 169 152 113 91 77 54 39 13 6 1 452
912 Node 0, zone DMA32, type Reserve 1 2 2 2 2 0 1 1 1 1 0
913 Node 0, zone DMA32, type Isolate 0 0 0 0 0 0 0 0 0 0 0
915 Number of blocks type Unmovable Reclaimable Movable Reserve Isolate
916 Node 0, zone DMA 2 0 5 1 0
917 Node 0, zone DMA32 41 6 967 2 0
919 Fragmentation avoidance in the kernel works by grouping pages of different
920 migrate types into the same contiguous regions of memory called page blocks.
921 A page block is typically the size of the default hugepage size, e.g. 2MB on
922 X86-64. By keeping pages grouped based on their ability to move, the kernel
923 can reclaim pages within a page block to satisfy a high-order allocation.
The pagetypeinfo begins with information on the size of a page block. It
then gives the same type of information as buddyinfo except broken down
by migrate-type and finishes with details on how many page blocks of each
type exist.
930 If min_free_kbytes has been tuned correctly (recommendations made by hugeadm
931 from libhugetlbfs https://github.com/libhugetlbfs/libhugetlbfs/), one can
932 make an estimate of the likely number of huge pages that can be allocated
933 at a given point in time. All the "Movable" blocks should be allocatable
934 unless memory has been mlock()'d. Some of the Reclaimable blocks should
935 also be allocatable although a lot of filesystem metadata may have to be
936 reclaimed to achieve this.
942 Provides information about distribution and utilization of memory. This
943 varies by architecture and compile options. Some of the counters reported
944 here overlap. The memory reported by the non overlapping counters may not
945 add up to the overall memory usage and the difference for some workloads
946 can be substantial. In many cases there are other means to find out
947 additional memory using subsystem specific interfaces, for instance
948 /proc/net/sockstat for TCP memory allocations.
950 Example output. You may not have all of these fields.
956 MemTotal: 32858820 kB
958 MemAvailable: 27214312 kB
964 Active(anon): 94064 kB
965 Inactive(anon): 4570616 kB
966 Active(file): 3143088 kB
967 Inactive(file): 3015640 kB
976 AnonPages: 4654780 kB
979 KReclaimable: 517708 kB
981 SReclaimable: 517708 kB
982 SUnreclaim: 142336 kB
983 KernelStack: 11168 kB
989 CommitLimit: 16429408 kB
990 Committed_AS: 7715148 kB
991 VmallocTotal: 34359738367 kB
992 VmallocUsed: 40444 kB
995 HardwareCorrupted: 0 kB
996 AnonHugePages: 4149248 kB
1007 Hugepagesize: 2048 kB
1009 DirectMap4k: 401152 kB
1010 DirectMap2M: 10008576 kB
1011 DirectMap1G: 24117248 kB
1014 Total usable RAM (i.e. physical RAM minus a few reserved
1015 bits and the kernel binary code)
1017 Total free RAM. On highmem systems, the sum of LowFree+HighFree
1019 An estimate of how much memory is available for starting new
1020 applications, without swapping. Calculated from MemFree,
1021 SReclaimable, the size of the file LRU lists, and the low
1022 watermarks in each zone.
1023 The estimate takes into account that the system needs some
1024 page cache to function well, and that not all reclaimable
1025 slab will be reclaimable, due to items being in use. The
1026 impact of those factors will vary from system to system.
1028 Relatively temporary storage for raw disk blocks
1029 shouldn't get tremendously large (20MB or so)
1031 In-memory cache for files read from the disk (the
1032 pagecache) as well as tmpfs & shmem.
1033 Doesn't include SwapCached.
1035 Memory that once was swapped out, is swapped back in but
1036 still also is in the swapfile (if memory is needed it
1037 doesn't need to be swapped out AGAIN because it is already
1038 in the swapfile. This saves I/O)
1040 Memory that has been used more recently and usually not
1041 reclaimed unless absolutely necessary.
1043 Memory which has been less recently used. It is more
1044 eligible to be reclaimed for other purposes
1046 Memory allocated for userspace which cannot be reclaimed, such
1047 as mlocked pages, ramfs backing pages, secret memfd pages etc.
1049 Memory locked with mlock().
1051 Highmem is all memory above ~860MB of physical memory.
1052 Highmem areas are for use by userspace programs, or
1053 for the pagecache. The kernel must use tricks to access
1054 this memory, making it slower to access than lowmem.
1056 Lowmem is memory which can be used for everything that
1057 highmem can be used for, but it is also available for the
1058 kernel's use for its own data structures. Among many
1059 other things, it is where everything from the Slab is
1060 allocated. Bad things happen when you're out of lowmem.
1062 total amount of swap space available
Memory which has been evicted from RAM, and is temporarily on the disk
1067 Memory consumed by the zswap backend (compressed size)
1069 Amount of anonymous memory stored in zswap (original size)
1071 Memory which is waiting to get written back to the disk
1073 Memory which is actively being written back to the disk
1075 Non-file backed pages mapped into userspace page tables
1077 files which have been mmaped, such as libraries
1079 Total memory used by shared memory (shmem) and tmpfs
1081 Kernel allocations that the kernel will attempt to reclaim
1082 under memory pressure. Includes SReclaimable (below), and other
1083 direct allocations with a shrinker.
1085 in-kernel data structures cache
1087 Part of Slab, that might be reclaimed, such as caches
1089 Part of Slab, that cannot be reclaimed on memory pressure
1091 Memory consumed by the kernel stacks of all tasks
1093 Memory consumed by userspace page tables
Memory consumed by secondary page tables, this currently
includes KVM mmu allocations on x86 and arm64.
Always zero. Previously this counted pages which had been written to
the server, but had not been committed to stable storage.
1101 Memory used for block device "bounce buffers"
1103 Memory used by FUSE for temporary writeback buffers
1105 Based on the overcommit ratio ('vm.overcommit_ratio'),
1106 this is the total amount of memory currently available to
1107 be allocated on the system. This limit is only adhered to
1108 if strict overcommit accounting is enabled (mode 2 in
1109 'vm.overcommit_memory').
1111 The CommitLimit is calculated with the following formula::
1113 CommitLimit = ([total RAM pages] - [total huge TLB pages]) *
1114 overcommit_ratio / 100 + [total swap pages]
1116 For example, on a system with 1G of physical RAM and 7G
1117 of swap with a `vm.overcommit_ratio` of 30 it would
1118 yield a CommitLimit of 7.3G.
1120 For more details, see the memory overcommit documentation
1121 in mm/overcommit-accounting.
1123 The amount of memory presently allocated on the system.
1124 The committed memory is a sum of all of the memory which
1125 has been allocated by processes, even if it has not been
1126 "used" by them as of yet. A process which malloc()'s 1G
1127 of memory, but only touches 300M of it will show up as
1128 using 1G. This 1G is memory which has been "committed" to
1129 by the VM and can be used at any time by the allocating
1130 application. With strict overcommit enabled on the system
1131 (mode 2 in 'vm.overcommit_memory'), allocations which would
1132 exceed the CommitLimit (detailed above) will not be permitted.
1133 This is useful if one needs to guarantee that processes will
1134 not fail due to lack of memory once that memory has been
1135 successfully allocated.
1137 total size of vmalloc virtual address space
1139 amount of vmalloc area which is used
1141 largest contiguous block of vmalloc area which is free
1143 Memory allocated to the percpu allocator used to back percpu
1144 allocations. This stat excludes the cost of metadata.
The amount of RAM/memory in KB, the kernel identifies as corrupted.
1149 Non-file backed huge pages mapped into userspace page tables
Memory used by shared memory (shmem) and tmpfs allocated with huge pages
1154 Shared memory mapped into userspace with huge pages
Memory used for filesystem data (page cache) allocated with huge pages
1159 Page cache mapped into userspace with huge pages
1161 Memory reserved for the Contiguous Memory Allocator (CMA)
1163 Free remaining memory in the CMA reserves
1164 HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp, Hugepagesize, Hugetlb
1165 See Documentation/admin-guide/mm/hugetlbpage.rst.
1166 DirectMap4k, DirectMap2M, DirectMap1G
1167 Breakdown of page table sizes used in the kernel's
1168 identity mapping of RAM
Provides information about vmalloced/vmapped areas. One line per area,
1174 containing the virtual address range of the area, size in bytes,
1175 caller information of the creator, and optional information depending
1176 on the kind of area:
1178 ========== ===================================================
1179 pages=nr number of pages
1180 phys=addr if a physical address was specified
1181 ioremap I/O mapping (ioremap() and friends)
1182 vmalloc vmalloc() area
1184 user VM_USERMAP area
1185 vpages buffer for pages pointers was vmalloced (huge area)
1186 N<node>=nr (Only on NUMA kernels)
1187 Number of pages allocated on memory node <node>
1188 ========== ===================================================
1192 > cat /proc/vmallocinfo
1193 0xffffc20000000000-0xffffc20000201000 2101248 alloc_large_system_hash+0x204 ...
1194 /0x2c0 pages=512 vmalloc N0=128 N1=128 N2=128 N3=128
1195 0xffffc20000201000-0xffffc20000302000 1052672 alloc_large_system_hash+0x204 ...
1196 /0x2c0 pages=256 vmalloc N0=64 N1=64 N2=64 N3=64
1197 0xffffc20000302000-0xffffc20000304000 8192 acpi_tb_verify_table+0x21/0x4f...
1198 phys=7fee8000 ioremap
1199 0xffffc20000304000-0xffffc20000307000 12288 acpi_tb_verify_table+0x21/0x4f...
1200 phys=7fee7000 ioremap
1201 0xffffc2000031d000-0xffffc2000031f000 8192 init_vdso_vars+0x112/0x210
1202 0xffffc2000031f000-0xffffc2000032b000 49152 cramfs_uncompress_init+0x2e ...
1203 /0x80 pages=11 vmalloc N0=3 N1=3 N2=2 N3=3
1204 0xffffc2000033a000-0xffffc2000033d000 12288 sys_swapon+0x640/0xac0 ...
1205 pages=2 vmalloc N1=2
1206 0xffffc20000347000-0xffffc2000034c000 20480 xt_alloc_table_info+0xfe ...
1207 /0x130 [x_tables] pages=4 vmalloc N0=4
1208 0xffffffffa0000000-0xffffffffa000f000 61440 sys_init_module+0xc27/0x1d00 ...
1209 pages=14 vmalloc N2=14
1210 0xffffffffa000f000-0xffffffffa0014000 20480 sys_init_module+0xc27/0x1d00 ...
1211 pages=4 vmalloc N1=4
1212 0xffffffffa0014000-0xffffffffa0017000 12288 sys_init_module+0xc27/0x1d00 ...
1213 pages=2 vmalloc N1=2
1214 0xffffffffa0017000-0xffffffffa0022000 45056 sys_init_module+0xc27/0x1d00 ...
1215 pages=10 vmalloc N0=10
1221 Provides counts of softirq handlers serviced since boot time, for each CPU.
1225 > cat /proc/softirqs
1228 TIMER: 27166 27120 27097 27034
1233 SCHED: 27035 26983 26971 26746
1235 RCU: 1678 1769 2178 2250
1237 1.3 Networking info in /proc/net
1238 --------------------------------
1240 The subdirectory /proc/net follows the usual pattern. Table 1-8 shows the
1241 additional values you get for IP version 6 if you configure the kernel to
1242 support this. Table 1-9 lists the files and their meaning.
1245 .. table:: Table 1-8: IPv6 info in /proc/net
1247 ========== =====================================================
1249 ========== =====================================================
1250 udp6 UDP sockets (IPv6)
1251 tcp6 TCP sockets (IPv6)
1252 raw6 Raw device statistics (IPv6)
1253 igmp6 IP multicast addresses, which this host joined (IPv6)
1254 if_inet6 List of IPv6 interface addresses
1255 ipv6_route Kernel routing table for IPv6
1256 rt6_stats Global IPv6 routing tables statistics
1257 sockstat6 Socket statistics (IPv6)
1258 snmp6 Snmp data (IPv6)
1259 ========== =====================================================
1261 .. table:: Table 1-9: Network info in /proc/net
1263 ============= ================================================================
1265 ============= ================================================================
1266 arp Kernel ARP table
1267 dev network devices with statistics
dev_mcast     the Layer2 multicast groups a device is listening to
              (interface index, label, number of references, number of
              bound addresses)
1271 dev_stat network device status
1272 ip_fwchains Firewall chain linkage
1273 ip_fwnames Firewall chain names
1274 ip_masq Directory containing the masquerading tables
1275 ip_masquerade Major masquerading table
1276 netstat Network statistics
1277 raw raw device statistics
1278 route Kernel routing table
1279 rpc Directory containing rpc info
1280 rt_cache Routing cache
1282 sockstat Socket statistics
1285 unix UNIX domain sockets
1286 wireless Wireless interface data (Wavelan etc)
1287 igmp IP multicast addresses, which this host joined
1288 psched Global packet scheduler parameters.
1289 netlink List of PF_NETLINK sockets
1290 ip_mr_vifs List of multicast virtual interfaces
1291 ip_mr_cache List of multicast routing cache
1292 ============= ================================================================
1294 You can use this information to see which network devices are available in
1295 your system and how much traffic was routed over those devices::
1298 Inter-|Receive |[...
1299 face |bytes packets errs drop fifo frame compressed multicast|[...
1300 lo: 908188 5596 0 0 0 0 0 0 [...
1301 ppp0:15475140 20721 410 0 0 410 0 0 [...
1302 eth0: 614530 7085 0 0 0 0 0 1 [...
1305 ...] bytes packets errs drop fifo colls carrier compressed
1306 ...] 908188 5596 0 0 0 0 0 0
1307 ...] 1375103 17405 0 0 0 0 0 0
1308 ...] 1703981 5535 0 0 0 3 0 0
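
As a sketch, the counters for a single interface can be extracted like this
(eth0 is only an example name)::

  # strip everything up to the colon, then print the 1st (rx bytes) and
  # 9th (tx bytes) of the remaining counters
  awk '/eth0:/ { sub(/.*:/, ""); split($0, f); print "rx_bytes=" f[1], "tx_bytes=" f[9] }' /proc/net/dev
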
1310 In addition, each Channel Bond interface has its own directory. For
1311 example, the bond0 device will have a directory called /proc/net/bond0/.
1312 It will contain information that is specific to that bond, such as the
1313 current slaves of the bond, the link status of the slaves, and how
many times the slaves' link has failed.
1319 If you have a SCSI host adapter in your system, you'll find a subdirectory
1320 named after the driver for this adapter in /proc/scsi. You'll also see a list
1321 of all recognized SCSI devices in /proc/scsi::
1323 >cat /proc/scsi/scsi
1325 Host: scsi0 Channel: 00 Id: 00 Lun: 00
1326 Vendor: IBM Model: DGHS09U Rev: 03E0
1327 Type: Direct-Access ANSI SCSI revision: 03
1328 Host: scsi0 Channel: 00 Id: 06 Lun: 00
1329 Vendor: PIONEER Model: CD-ROM DR-U06S Rev: 1.04
1330 Type: CD-ROM ANSI SCSI revision: 02
1333 The directory named after the driver has one file for each adapter found in
1334 the system. These files contain information about the controller, including
1335 the used IRQ and the IO address range. The amount of information shown is
1336 dependent on the adapter you use. The example shows the output for an Adaptec
1337 AHA-2940 SCSI adapter::
1339 > cat /proc/scsi/aic7xxx/0
1341 Adaptec AIC7xxx driver version: 5.1.19/3.2.4
1343 TCQ Enabled By Default : Disabled
1344 AIC7XXX_PROC_STATS : Disabled
1345 AIC7XXX_RESET_DELAY : 5
1346 Adapter Configuration:
1347 SCSI Adapter: Adaptec AHA-294X Ultra SCSI host adapter
1348 Ultra Wide Controller
1349 PCI MMAPed I/O Base: 0xeb001000
1350 Adapter SEEPROM Config: SEEPROM found and used.
1351 Adaptec SCSI BIOS: Enabled
1353 SCBs: Active 0, Max Active 2,
1354 Allocated 15, HW 16, Page 255
1356 BIOS Control Word: 0x18b6
1357 Adapter Control Word: 0x005b
1358 Extended Translation: Enabled
1359 Disconnect Enable Flags: 0xffff
1360 Ultra Enable Flags: 0x0001
1361 Tag Queue Enable Flags: 0x0000
1362 Ordered Queue Tag Flags: 0x0000
1363 Default Tag Queue Depth: 8
1364 Tagged Queue By Device array for aic7xxx host instance 0:
1365 {255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255}
1366 Actual queue depth per device for aic7xxx host instance 0:
1367 {1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}
1370 Device using Wide/Sync transfers at 40.0 MByte/sec, offset 8
1371 Transinfo settings: current(12/8/1/0), goal(12/8/1/0), user(12/15/1/0)
1372 Total transfers 160151 (74577 reads and 85574 writes)
1374 Device using Narrow/Sync transfers at 5.0 MByte/sec, offset 15
1375 Transinfo settings: current(50/15/0/0), goal(50/15/0/0), user(50/15/0/0)
1376 Total transfers 0 (0 reads and 0 writes)
1379 1.5 Parallel port info in /proc/parport
1380 ---------------------------------------
1382 The directory /proc/parport contains information about the parallel ports of
1383 your system. It has one subdirectory for each port, named after the port
1386 These directories contain the four files shown in Table 1-10.
1389 .. table:: Table 1-10: Files in /proc/parport
1391 ========= ====================================================================
1393 ========= ====================================================================
1394 autoprobe Any IEEE-1284 device ID information that has been acquired.
1395 devices list of the device drivers using that port. A + will appear by the
1396 name of the device currently using the port (it might not appear
1398 hardware Parallel port's base address, IRQ line and DMA channel.
1399 irq IRQ that parport is using for that port. This is in a separate
1400 file to allow you to alter it by writing a new value in (IRQ
1402 ========= ====================================================================
1404 1.6 TTY info in /proc/tty
1405 -------------------------
1407 Information about the available and actually used tty's can be found in the
1408 directory /proc/tty. You'll find entries for drivers and line disciplines in
1409 this directory, as shown in Table 1-11.
1412 .. table:: Table 1-11: Files in /proc/tty
1414 ============= ==============================================
1416 ============= ==============================================
1417 drivers list of drivers and their usage
1418 ldiscs registered line disciplines
1419 driver/serial usage statistic and status of single tty lines
1420 ============= ==============================================
To see which tty's are currently in use, you can simply look into the file
/proc/tty/drivers::

1425 > cat /proc/tty/drivers
1426 pty_slave /dev/pts 136 0-255 pty:slave
1427 pty_master /dev/ptm 128 0-255 pty:master
1428 pty_slave /dev/ttyp 3 0-255 pty:slave
1429 pty_master /dev/pty 2 0-255 pty:master
1430 serial /dev/cua 5 64-67 serial:callout
1431 serial /dev/ttyS 4 64-67 serial
1432 /dev/tty0 /dev/tty0 4 0 system:vtmaster
1433 /dev/ptmx /dev/ptmx 5 2 system
1434 /dev/console /dev/console 5 1 system:console
1435 /dev/tty /dev/tty 5 0 system:/dev/tty
1436 unknown /dev/tty 4 1-63 console
1439 1.7 Miscellaneous kernel statistics in /proc/stat
1440 -------------------------------------------------
1442 Various pieces of information about kernel activity are available in the
1443 /proc/stat file. All of the numbers reported in this file are aggregates
1444 since the system first booted. For a quick look, simply cat the file::
1447 cpu 2255 34 2290 22625563 6290 127 456 0 0 0
1448 cpu0 1132 34 1441 11311718 3675 127 438 0 0 0
1449 cpu1 1123 0 849 11313845 2614 0 18 0 0 0
1450 intr 114930548 113199788 3 0 5 263 0 4 [... lots more numbers ...]
1456 softirq 183433 0 21755 12 39 1137 231 21459 2263
1458 The very first "cpu" line aggregates the numbers in all of the other "cpuN"
1459 lines. These numbers identify the amount of time the CPU has spent performing
1460 different kinds of work. Time units are in USER_HZ (typically hundredths of a
1461 second). The meanings of the columns are as follows, from left to right:
1463 - user: normal processes executing in user mode
1464 - nice: niced processes executing in user mode
1465 - system: processes executing in kernel mode
1466 - idle: twiddling thumbs
1467 - iowait: In a word, iowait stands for waiting for I/O to complete. But there
1468 are several problems:
1470 1. CPU will not wait for I/O to complete, iowait is the time that a task is
1471 waiting for I/O to complete. When CPU goes into idle state for
1472 outstanding task I/O, another task will be scheduled on this CPU.
1473 2. In a multi-core CPU, the task waiting for I/O to complete is not running
1474 on any CPU, so the iowait of each CPU is difficult to calculate.
3. The value of iowait field in /proc/stat will decrease in certain
   conditions.

So, the iowait value read from /proc/stat is not reliable.
1479 - irq: servicing interrupts
1480 - softirq: servicing softirqs
1481 - steal: involuntary wait
1482 - guest: running a normal guest
1483 - guest_nice: running a niced guest
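
As a rough illustration of how these counters can be used, the aggregate "cpu"
line can be sampled twice to compute a crude busy percentage. This sketch only
counts user, nice and system time as busy and ignores iowait, irq, softirq,
steal and guest time::

  # sample the first line of /proc/stat twice, one second apart
  read -r _ u1 n1 s1 i1 _ < /proc/stat
  sleep 1
  read -r _ u2 n2 s2 i2 _ < /proc/stat
  busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
  idle=$(( i2 - i1 ))
  echo "cpu busy: $(( 100 * busy / (busy + idle) ))%"
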
1485 The "intr" line gives counts of interrupts serviced since boot time, for each
1486 of the possible system interrupts. The first column is the total of all
1487 interrupts serviced including unnumbered architecture specific interrupts;
1488 each subsequent column is the total for that particular numbered interrupt.
1489 Unnumbered interrupts are not shown, only summed into the total.
1491 The "ctxt" line gives the total number of context switches across all CPUs.
1493 The "btime" line gives the time at which the system booted, in seconds since
1496 The "processes" line gives the number of processes and threads created, which
1497 includes (but is not limited to) those created by calls to the fork() and
1498 clone() system calls.
1500 The "procs_running" line gives the total number of threads that are
1501 running or ready to run (i.e., the total number of runnable threads).
1503 The "procs_blocked" line gives the number of processes currently blocked,
1504 waiting for I/O to complete.
1506 The "softirq" line gives counts of softirqs serviced since boot time, for each
1507 of the possible system softirqs. The first column is the total of all
softirqs serviced; each subsequent column is the total for that particular
softirq.
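
Since "btime" is expressed in seconds since the Unix epoch, it can be turned
into a human-readable date with standard tools, for example (assuming GNU
date)::

  $ date -d "@$(awk '/^btime/ {print $2}' /proc/stat)"
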
1512 1.8 Ext4 file system parameters
1513 -------------------------------
1515 Information about mounted ext4 file systems can be found in
1516 /proc/fs/ext4. Each mounted filesystem will have a directory in
1517 /proc/fs/ext4 based on its device name (i.e., /proc/fs/ext4/hdc or
1518 /proc/fs/ext4/dm-0). The files in each per-device directory are shown
1519 in Table 1-12, below.
1521 .. table:: Table 1-12: Files in /proc/fs/ext4/<devname>
 ==============  ==========================================================
 File            Content
 mb_groups       details of multiblock allocator buddy cache of free blocks
 ==============  ==========================================================
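
For example, for a filesystem mounted from the dm-0 device mentioned above,
the buddy cache details can be inspected like this (a sketch; the per-device
directory name depends on your system)::

  $ head -3 /proc/fs/ext4/dm-0/mb_groups
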
1530 Shows registered system console lines.
1532 To see which character device lines are currently used for the system console
1533 /dev/console, you may simply look into the file /proc/consoles::
1535 > cat /proc/consoles
+--------------------+-------------------------------------------------------+
| device             | name of the device                                    |
+====================+=======================================================+
| operations         | * R = can do read operations                          |
|                    | * W = can do write operations                         |
|                    | * U = can do unblank                                  |
+--------------------+-------------------------------------------------------+
| flags              | * E = it is enabled                                   |
|                    | * C = it is preferred console                         |
|                    | * B = it is primary boot console                      |
|                    | * p = it is used for printk buffer                    |
|                    | * b = it is not a TTY but a Braille device            |
|                    | * a = it is safe to use when cpu is offline           |
+--------------------+-------------------------------------------------------+
| major:minor        | major and minor number of the device separated by a   |
|                    | colon                                                 |
+--------------------+-------------------------------------------------------+
1562 The /proc file system serves information about the running system. It not only
1563 allows access to process data but also allows you to request the kernel status
1564 by reading files in the hierarchy.
1566 The directory structure of /proc reflects the types of information and makes
1567 it easy, if not obvious, where to look for specific data.
1569 Chapter 2: Modifying System Parameters
1570 ======================================
1575 * Modifying kernel parameters by writing into files found in /proc/sys
1576 * Exploring the files which modify certain parameters
1577 * Review of the /proc/sys file tree
1579 ------------------------------------------------------------------------------
1581 A very interesting part of /proc is the directory /proc/sys. This is not only
1582 a source of information, it also allows you to change parameters within the
1583 kernel. Be very careful when attempting this. You can optimize your system,
1584 but you can also cause it to crash. Never alter kernel parameters on a
1585 production system. Set up a development machine and test to make sure that
1586 everything works the way you want it to. You may have no alternative but to
1587 reboot the machine once an error has been made.
1589 To change a value, simply echo the new value into the file.
1590 You need to be root to do this. You can create your own boot script
1591 to perform this every time your system boots.
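
For example, IPv4 forwarding can be enabled at runtime by writing to
/proc/sys/net/ipv4/ip_forward (shown here only as an illustration; the setting
is lost on reboot unless it is reapplied from a boot script)::

  # cat /proc/sys/net/ipv4/ip_forward
  0
  # echo 1 > /proc/sys/net/ipv4/ip_forward
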
1593 The files in /proc/sys can be used to fine tune and monitor miscellaneous and
1594 general things in the operation of the Linux kernel. Since some of the files
1595 can inadvertently disrupt your system, it is advisable to read both
1596 documentation and source before actually making adjustments. In any case, be
1597 very careful when writing to any of these files. The entries in /proc may
1598 change slightly between the 2.1.* and the 2.2 kernel, so if there is any doubt
1599 review the kernel documentation in the directory /usr/src/linux/Documentation.
1600 This chapter is heavily based on the documentation included in the pre 2.2
1601 kernels, and became part of it in version 2.2.1 of the Linux kernel.
Please see the Documentation/admin-guide/sysctl/ directory for descriptions of
these entries.
1609 Certain aspects of kernel behavior can be modified at runtime, without the
1610 need to recompile the kernel, or even to reboot the system. The files in the
1611 /proc/sys tree can not only be read, but also modified. You can use the echo
command to write a value into these files, thereby changing the default
settings of the kernel.
1616 Chapter 3: Per-process Parameters
1617 =================================
3.1 /proc/<pid>/oom_adj & /proc/<pid>/oom_score_adj - Adjust the oom-killer score
----------------------------------------------------------------------------------
1622 These files can be used to adjust the badness heuristic used to select which
1623 process gets killed in out of memory (oom) conditions.
1625 The badness heuristic assigns a value to each candidate task ranging from 0
1626 (never kill) to 1000 (always kill) to determine which process is targeted. The
1627 units are roughly a proportion along that range of allowed memory the process
1628 may allocate from based on an estimation of its current memory and swap use.
1629 For example, if a task is using all allowed memory, its badness score will be
1630 1000. If it is using half of its allowed memory, its score will be 500.
1632 The amount of "allowed" memory depends on the context in which the oom killer
1633 was called. If it is due to the memory assigned to the allocating task's cpuset
1634 being exhausted, the allowed memory represents the set of mems assigned to that
1635 cpuset. If it is due to a mempolicy's node(s) being exhausted, the allowed
1636 memory represents the set of mempolicy nodes. If it is due to a memory
1637 limit (or swap limit) being reached, the allowed memory is that configured
1638 limit. Finally, if it is due to the entire system being out of memory, the
1639 allowed memory represents all allocatable resources.
1641 The value of /proc/<pid>/oom_score_adj is added to the badness score before it
1642 is used to determine which task to kill. Acceptable values range from -1000
1643 (OOM_SCORE_ADJ_MIN) to +1000 (OOM_SCORE_ADJ_MAX). This allows userspace to
1644 polarize the preference for oom killing either by always preferring a certain
1645 task or completely disabling it. The lowest possible value, -1000, is
1646 equivalent to disabling oom killing entirely for that task since it will always
1647 report a badness score of 0.
1649 Consequently, it is very simple for userspace to define the amount of memory to
1650 consider for each task. Setting a /proc/<pid>/oom_score_adj value of +500, for
1651 example, is roughly equivalent to allowing the remainder of tasks sharing the
1652 same system, cpuset, mempolicy, or memory controller resources to use at least
1653 50% more memory. A value of -500, on the other hand, would be roughly
1654 equivalent to discounting 50% of the task's allowed memory from being considered
1655 as scoring against the task.
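
As a concrete sketch (pid 1234 is only a placeholder), a critical daemon can be
exempted from oom killing entirely, or a low-priority batch job can be made the
preferred victim::

  # echo -1000 > /proc/1234/oom_score_adj   # never oom-kill this task
  # echo 500 > /proc/1234/oom_score_adj     # prefer killing this task instead
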
1657 For backwards compatibility with previous kernels, /proc/<pid>/oom_adj may also
1658 be used to tune the badness score. Its acceptable values range from -16
1659 (OOM_ADJUST_MIN) to +15 (OOM_ADJUST_MAX) and a special value of -17
1660 (OOM_DISABLE) to disable oom killing entirely for that task. Its value is
1661 scaled linearly with /proc/<pid>/oom_score_adj.
1663 The value of /proc/<pid>/oom_score_adj may be reduced no lower than the last
1664 value set by a CAP_SYS_RESOURCE process. To reduce the value any lower
1665 requires CAP_SYS_RESOURCE.
1668 3.2 /proc/<pid>/oom_score - Display current oom-killer score
1669 -------------------------------------------------------------
1671 This file can be used to check the current score used by the oom-killer for
1672 any given <pid>. Use it together with /proc/<pid>/oom_score_adj to tune which
1673 process should be killed in an out-of-memory situation.
1675 Please note that the exported value includes oom_score_adj so it is
1676 effectively in range [0,2000].
1679 3.3 /proc/<pid>/io - Display the IO accounting fields
1680 -------------------------------------------------------
1682 This file contains IO statistics for each running process.
1689 test:/tmp # dd if=/dev/zero of=/tmp/test.dat &
1692 test:/tmp # cat /proc/3828/io
1698 write_bytes: 323932160
1699 cancelled_write_bytes: 0
1708 I/O counter: chars read
1709 The number of bytes which this task has caused to be read from storage. This
1710 is simply the sum of bytes which this process passed to read() and pread().
1711 It includes things like tty IO and it is unaffected by whether or not actual
physical disk IO was required (the read might have been satisfied from
pagecache).
1719 I/O counter: chars written
1720 The number of bytes which this task has caused, or shall cause to be written
1721 to disk. Similar caveats apply here as with rchar.
1727 I/O counter: read syscalls
Attempt to count the number of read I/O operations, i.e. syscalls like read()
and pread().
1735 I/O counter: write syscalls
1736 Attempt to count the number of write I/O operations, i.e. syscalls like
1737 write() and pwrite().
1743 I/O counter: bytes read
1744 Attempt to count the number of bytes which this process really did cause to
1745 be fetched from the storage layer. Done at the submit_bio() level, so it is
1746 accurate for block-backed filesystems. <please add status regarding NFS and
1747 CIFS at a later time>
1753 I/O counter: bytes written
1754 Attempt to count the number of bytes which this process caused to be sent to
1755 the storage layer. This is done at page-dirtying time.
1758 cancelled_write_bytes
1759 ^^^^^^^^^^^^^^^^^^^^^
1761 The big inaccuracy here is truncate. If a process writes 1MB to a file and
1762 then deletes the file, it will in fact perform no writeout. But it will have
1763 been accounted as having caused 1MB of write.
1764 In other words: The number of bytes which this process caused to not happen,
1765 by truncating pagecache. A task can cause "negative" IO too. If this task
1766 truncates some dirty pagecache, some IO which another task has been accounted
1767 for (in its write_bytes) will not be happening. We _could_ just subtract that
from the truncating task's write_bytes, but there is information loss in doing
so.
1774 At its current implementation state, this is a bit racy on 32-bit machines:
1775 if process A reads process B's /proc/pid/io while process B is updating one
1776 of those 64-bit counters, process A could see an intermediate result.
1779 More information about this can be found within the taskstats documentation in
1780 Documentation/accounting.
1782 3.4 /proc/<pid>/coredump_filter - Core dump filtering settings
1783 ---------------------------------------------------------------
1784 When a process is dumped, all anonymous memory is written to a core file as
1785 long as the size of the core file isn't limited. But sometimes we don't want
1786 to dump some memory segments, for example, huge shared memory or DAX.
1787 Conversely, sometimes we want to save file-backed memory segments into a core
1788 file, not only the individual files.
1790 /proc/<pid>/coredump_filter allows you to customize which memory segments
1791 will be dumped when the <pid> process is dumped. coredump_filter is a bitmask
1792 of memory types. If a bit of the bitmask is set, memory segments of the
1793 corresponding memory type are dumped, otherwise they are not dumped.
1795 The following 9 memory types are supported:
1797 - (bit 0) anonymous private memory
1798 - (bit 1) anonymous shared memory
1799 - (bit 2) file-backed private memory
1800 - (bit 3) file-backed shared memory
1801 - (bit 4) ELF header pages in file-backed private memory areas (it is
1802 effective only if the bit 2 is cleared)
1803 - (bit 5) hugetlb private memory
1804 - (bit 6) hugetlb shared memory
1805 - (bit 7) DAX private memory
1806 - (bit 8) DAX shared memory
1808 Note that MMIO pages such as frame buffer are never dumped and vDSO pages
1809 are always dumped regardless of the bitmask status.
Note that bits 0-4 don't affect hugetlb or DAX memory. hugetlb memory is
only affected by bits 5-6, and DAX is only affected by bits 7-8.
1814 The default value of coredump_filter is 0x33; this means all anonymous memory
1815 segments, ELF header pages and hugetlb private memory are dumped.
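
For instance, the default mask can be reproduced from the bit numbers listed
above (bits 0, 1, 4 and 5) with shell arithmetic::

  $ printf '0x%x\n' $(( (1 << 0) | (1 << 1) | (1 << 4) | (1 << 5) ))
  0x33
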
1817 If you don't want to dump all shared memory segments attached to pid 1234,
1818 write 0x31 to the process's proc file::
1820 $ echo 0x31 > /proc/1234/coredump_filter
1822 When a new process is created, the process inherits the bitmask status from its
1823 parent. It is useful to set up coredump_filter before the program runs.
1826 $ echo 0x7 > /proc/self/coredump_filter
1829 3.5 /proc/<pid>/mountinfo - Information about mounts
1830 --------------------------------------------------------
1832 This file contains lines of the form::
1834 36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue
1835 (1)(2)(3) (4) (5) (6) (n…m) (m+1)(m+2) (m+3) (m+4)
1837 (1) mount ID: unique identifier of the mount (may be reused after umount)
1838 (2) parent ID: ID of parent (or of self for the top of the mount tree)
1839 (3) major:minor: value of st_dev for files on filesystem
1840 (4) root: root of the mount within the filesystem
1841 (5) mount point: mount point relative to the process's root
1842 (6) mount options: per mount options
1843 (n…m) optional fields: zero or more fields of the form "tag[:value]"
1844 (m+1) separator: marks the end of the optional fields
1845 (m+2) filesystem type: name of filesystem of the form "type[.subtype]"
1846 (m+3) mount source: filesystem specific information or "none"
1847 (m+4) super options: per super block options
1849 Parsers should ignore all unrecognised optional fields. Currently the
1850 possible optional fields are:
1852 ================ ==============================================================
1853 shared:X mount is shared in peer group X
1854 master:X mount is slave to peer group X
1855 propagate_from:X mount is slave and receives propagation from peer group X [#]_
1856 unbindable mount is unbindable
1857 ================ ==============================================================
1859 .. [#] X is the closest dominant peer group under the process's root. If
1860 X is the immediate master of the mount, or if there's no dominant peer
1861 group under the same root, then only the "master:X" field is present
1862 and not the "propagate_from:X" field.
1864 For more information on mount propagation see:
1866 Documentation/filesystems/sharedsubtree.rst
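
Because the number of optional fields varies, parsers should locate the "-"
separator rather than assume fixed field positions. A minimal awk sketch of
that approach (illustrative only)::

  $ awk '{ i = 7; while ($i != "-") i++;
           print "mountpoint:", $5, "type:", $(i+1), "source:", $(i+2) }' \
        /proc/self/mountinfo
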
1869 3.6 /proc/<pid>/comm & /proc/<pid>/task/<tid>/comm
1870 --------------------------------------------------------
These files provide a method to access a task's comm value. They also allow a
task to set its own comm value or that of one of its thread siblings. The comm
value is limited in size compared to the cmdline value, so writing anything
longer than the kernel's TASK_COMM_LEN (currently 16 chars) will result in a
truncated comm value.
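
For instance, a task can rename itself through this file (a sketch assuming a
bash shell, where $$ and the echo builtin refer to the shell process itself)::

  $ cat /proc/$$/comm
  bash
  $ echo -n worker > /proc/$$/comm
  $ cat /proc/$$/comm
  worker
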
1878 3.7 /proc/<pid>/task/<tid>/children - Information about task children
1879 -------------------------------------------------------------------------
This file provides a fast way to retrieve the first-level children pids of a
task pointed to by the <pid>/<tid> pair. The format is a space separated
stream of pids.
1884 Note the "first level" here -- if a child has its own children they will
1885 not be listed here; one needs to read /proc/<children-pid>/task/<tid>/children
1886 to obtain the descendants.
1888 Since this interface is intended to be fast and cheap it doesn't
1889 guarantee to provide precise results and some children might be
1890 skipped, especially if they've exited right after we printed their
1891 pids, so one needs to either stop or freeze processes being inspected
1892 if precise results are needed.
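
A quick way to see it in action is to list the current shell's own first-level
children (output omitted here since the pids are system specific)::

  $ sleep 60 &
  $ cat /proc/$$/task/$$/children
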
1895 3.8 /proc/<pid>/fdinfo/<fd> - Information about opened file
1896 ---------------------------------------------------------------
1897 This file provides information associated with an opened file. The regular
1898 files have at least four fields -- 'pos', 'flags', 'mnt_id' and 'ino'.
1899 The 'pos' represents the current offset of the opened file in decimal
1900 form [see lseek(2) for details], 'flags' denotes the octal O_xxx mask the
1901 file has been created with [see open(2) for details] and 'mnt_id' represents
1902 mount ID of the file system containing the opened file [see 3.5
/proc/<pid>/mountinfo for details]. 'ino' represents the inode number of the
file.
1906 A typical output is::
1913 All locks associated with a file descriptor are shown in its fdinfo too::
1915 lock: 1: FLOCK ADVISORY WRITE 359 00:13:11691 0 EOF
Files such as eventfd, fsnotify, signalfd and epoll provide, in addition to
the regular pos/flags pair, information particular to the objects they
represent.
where 'eventfd-count' is the hex value of a counter.
1942 sigmask: 0000000000000200
where 'sigmask' is the hex value of the signal mask associated with a file.
1956 tfd: 5 events: 1d data: ffffffffffffffff pos:0 ino:61af sdev:7
where 'tfd' is a target file descriptor number in decimal form, 'events' is
the events mask being watched and 'data' is the data associated with a target
[see epoll(7) for more details].
The 'pos' is the current offset of the target file in decimal form [see
lseek(2)], while 'ino' and 'sdev' are the inode and device numbers where the
target file resides, both in hex format.
1968 For inotify files the format is the following::
1974 inotify wd:3 ino:9e7e sdev:800013 mask:800afce ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:7e9e0000640d1b6d
1976 where 'wd' is a watch descriptor in decimal form, i.e. a target file
1977 descriptor number, 'ino' and 'sdev' are inode and device where the
1978 target file resides and the 'mask' is the mask of events, all in hex
1979 form [see inotify(7) for more details].
1981 If the kernel was built with exportfs support, the path to the target
1982 file is encoded as a file handle. The file handle is provided by three
fields 'fhandle-bytes', 'fhandle-type' and 'f_handle', all in hex format.
If the kernel is built without exportfs support, the file handle won't be
printed.
1989 If there is no inotify mark attached yet the 'inotify' line will be omitted.
1991 For fanotify files the format is::
1997 fanotify flags:10 event-flags:0
1998 fanotify mnt_id:12 mflags:40 mask:38 ignored_mask:40000003
1999 fanotify ino:4f969 sdev:800013 mflags:0 mask:3b ignored_mask:40000000 fhandle-bytes:8 fhandle-type:1 f_handle:69f90400c275b5b4
where fanotify 'flags' and 'event-flags' are the values used in the
fanotify_init call, 'mnt_id' is the mount point identifier, and 'mflags' is
the value of the flags associated with the mark, which are tracked separately
from the events mask. 'ino' and 'sdev' are the target inode and device,
'mask' is the events mask and 'ignored_mask' is the mask of events which are
to be ignored. All are in hex format. Together, 'mflags', 'mask' and
'ignored_mask' provide information about the flags and mask used in the
fanotify_mark call [see the fsnotify manpage for details].
While the first three lines are mandatory and always printed, the rest is
optional and may be omitted if no marks have been created yet.
2025 it_value: (0, 49406829)
where 'clockid' is the clock type and 'ticks' is the number of timer
expirations that have occurred [see timerfd_create(2) for details]. 'settime
flags' are the flags, in octal form, used to set up the timer [see
timerfd_settime(2) for details]. 'it_value' is the remaining time until the
timer expires and 'it_interval' is the interval for the timer. Note that the
timer might be set up with the TIMER_ABSTIME option, which will be shown in
'settime flags', but 'it_value' still shows the timer's remaining time.
2047 exp_name: system-heap
2049 where 'size' is the size of the DMA buffer in bytes. 'count' is the file count of
2050 the DMA buffer file. 'exp_name' is the name of the DMA buffer exporter.
2052 3.9 /proc/<pid>/map_files - Information about memory mapped files
2053 ---------------------------------------------------------------------
2054 This directory contains symbolic links which represent memory mapped files
2055 the process is maintaining. Example output::
2057 | lr-------- 1 root root 64 Jan 27 11:24 333c600000-333c620000 -> /usr/lib64/ld-2.18.so
2058 | lr-------- 1 root root 64 Jan 27 11:24 333c81f000-333c820000 -> /usr/lib64/ld-2.18.so
2059 | lr-------- 1 root root 64 Jan 27 11:24 333c820000-333c821000 -> /usr/lib64/ld-2.18.so
2061 | lr-------- 1 root root 64 Jan 27 11:24 35d0421000-35d0422000 -> /usr/lib64/libselinux.so.1
2062 | lr-------- 1 root root 64 Jan 27 11:24 400000-41a000 -> /usr/bin/ls
2064 The name of a link represents the virtual memory bounds of a mapping, i.e.
2065 vm_area_struct::vm_start-vm_area_struct::vm_end.
2067 The main purpose of the map_files is to retrieve a set of memory mapped
2068 files in a fast way instead of parsing /proc/<pid>/maps or
2069 /proc/<pid>/smaps, both of which contain many more records. At the same
2070 time one can open(2) mappings from the listings of two processes and
compare their inode numbers to figure out which anonymous memory areas
2072 are actually shared.
2074 3.10 /proc/<pid>/timerslack_ns - Task timerslack value
2075 ---------------------------------------------------------
This file provides the value of the task's timerslack, in nanoseconds.
2077 This value specifies an amount of time that normal timers may be deferred
2078 in order to coalesce timers and avoid unnecessary wakeups.
This allows a task's interactivity vs power consumption tradeoff to be
adjusted.
2083 Writing 0 to the file will set the task's timerslack to the default value.
Valid values range from 0 to ULLONG_MAX.
2087 An application setting the value must have PTRACE_MODE_ATTACH_FSCREDS level
2088 permissions on the task specified to change its timerslack_ns value.
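
For example (the default slack is typically 50000 ns; treat the values below
as illustrative only)::

  $ cat /proc/self/timerslack_ns
  50000
  $ echo 1000000 > /proc/self/timerslack_ns   # allow up to 1 ms of timer slack
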
2090 3.11 /proc/<pid>/patch_state - Livepatch patch operation state
2091 -----------------------------------------------------------------
2092 When CONFIG_LIVEPATCH is enabled, this file displays the value of the
2093 patch state for the task.
2095 A value of '-1' indicates that no patch is in transition.
2097 A value of '0' indicates that a patch is in transition and the task is
2098 unpatched. If the patch is being enabled, then the task hasn't been
patched yet. If the patch is being disabled, then the task has already been
unpatched.
2102 A value of '1' indicates that a patch is in transition and the task is
2103 patched. If the patch is being enabled, then the task has already been
patched. If the patch is being disabled, then the task hasn't been unpatched
yet.
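
When no live patch transition is in progress, reading the file simply returns
-1 (a sketch; the file is present only when CONFIG_LIVEPATCH is enabled)::

  $ cat /proc/self/patch_state
  -1
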
2107 3.12 /proc/<pid>/arch_status - task architecture specific status
2108 -------------------------------------------------------------------
2109 When CONFIG_PROC_PID_ARCH_STATUS is enabled, this file displays the
2110 architecture specific status of the task.
2117 $ cat /proc/6753/arch_status
2118 AVX512_elapsed_ms: 8
2123 x86 specific entries
2124 ~~~~~~~~~~~~~~~~~~~~~
2129 If AVX512 is supported on the machine, this entry shows the milliseconds
2130 elapsed since the last time AVX512 usage was recorded. The recording
2131 happens on a best effort basis when a task is scheduled out. This means
2132 that the value depends on two factors:
2134 1) The time which the task spent on the CPU without being scheduled
out. With CPU isolation and a single runnable task this can take a long
time.
2138 2) The time since the task was scheduled out last. Depending on the
2139 reason for being scheduled out (time slice exhausted, syscall ...)
this can be an arbitrarily long time.
2142 As a consequence the value cannot be considered precise and authoritative
2143 information. The application which uses this information has to be aware
2144 of the overall scenario on the system in order to determine whether a
2145 task is a real AVX512 user or not. Precise information can be obtained
2146 with performance counters.
A special value of '-1' indicates that no AVX512 usage was recorded; the task
is thus unlikely to be an AVX512 user, but depending on the workload and the
scheduling scenario it could also be a false negative as described above.
2152 Chapter 4: Configuring procfs
2153 =============================
2156 ---------------------
2158 The following mount options are supported:
=========  ========================================================
hidepid=   Set /proc/<pid>/ access mode.
gid=       Set the group authorized to learn process information.
subset=    Show only the specified subset of procfs.
=========  ========================================================
2166 hidepid=off or hidepid=0 means classic mode - everybody may access all
2167 /proc/<pid>/ directories (default).
2169 hidepid=noaccess or hidepid=1 means users may not access any /proc/<pid>/
2170 directories but their own. Sensitive files like cmdline, sched*, status are now
2171 protected against other users. This makes it impossible to learn whether any
user runs a specific program (given the program doesn't reveal itself by its
behaviour). As an additional bonus, since /proc/<pid>/cmdline is inaccessible to
2174 other users, poorly written programs passing sensitive information via program
2175 arguments are now protected against local eavesdroppers.
2177 hidepid=invisible or hidepid=2 means hidepid=1 plus all /proc/<pid>/ will be
fully invisible to other users. It doesn't hide the fact that a process with
a specific pid value exists (this can be learned by other means, e.g. by
"kill -0 $PID"), but it hides the process's uid and gid, which could otherwise
be learned by stat()'ing /proc/<pid>/. It greatly complicates an intruder's
task of gathering information about running processes: whether some daemon
runs with elevated privileges, whether another user runs some sensitive
program, whether other users run any program at all, etc.
2186 hidepid=ptraceable or hidepid=4 means that procfs should only contain
2187 /proc/<pid>/ directories that the caller can ptrace.
gid= defines a group authorized to learn process information otherwise
prohibited by hidepid=. If you use a daemon like identd which needs to learn
information about processes, just add identd to this group.
2193 subset=pid hides all top level files and directories in the procfs that
2194 are not related to tasks.
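
For example, an administrator might combine these options when remounting
/proc (a sketch for kernels that accept the string values of hidepid; gid 4
merely stands in for whatever group should retain full visibility)::

  # mount -o remount,hidepid=invisible,gid=4 /proc
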
2196 Chapter 5: Filesystem behavior
2197 ==============================
Originally, before the advent of pid namespaces, procfs was a global file
system: there was only one procfs instance in the system.
When pid namespaces were added, a separate procfs instance was mounted in
2203 each pid namespace. So, procfs mount options are global among all
2204 mountpoints within the same namespace::
2206 # grep ^proc /proc/mounts
2207 proc /proc proc rw,relatime,hidepid=2 0 0
2209 # strace -e mount mount -o hidepid=1 -t proc proc /tmp/proc
2210 mount("proc", "/tmp/proc", "proc", 0, "hidepid=1") = 0
2211 +++ exited with 0 +++
2213 # grep ^proc /proc/mounts
2214 proc /proc proc rw,relatime,hidepid=2 0 0
2215 proc /tmp/proc proc rw,relatime,hidepid=2 0 0
and only after remounting procfs will the mount options change at all
mountpoints::
2220 # mount -o remount,hidepid=1 -t proc proc /tmp/proc
2222 # grep ^proc /proc/mounts
2223 proc /proc proc rw,relatime,hidepid=1 0 0
2224 proc /tmp/proc proc rw,relatime,hidepid=1 0 0
2226 This behavior is different from the behavior of other filesystems.
The new procfs behavior is more like that of other filesystems. Each procfs
mount creates a new procfs instance, and mount options affect only that
instance. This means it is now possible to have several procfs instances
displaying tasks with different filtering options in one pid namespace::
2233 # mount -o hidepid=invisible -t proc proc /proc
2234 # mount -o hidepid=noaccess -t proc proc /tmp/proc
2235 # grep ^proc /proc/mounts
2236 proc /proc proc rw,relatime,hidepid=invisible 0 0
2237 proc /tmp/proc proc rw,relatime,hidepid=noaccess 0 0