Kernel bug reports often come with a stack dump like the one below::

  ------------[ cut here ]------------
  WARNING: CPU: 1 PID: 28102 at kernel/module.c:1108 module_put+0x57/0x70
  Modules linked in: dvb_usb_gp8psk(-) dvb_usb dvb_core nvidia_drm(PO) nvidia_modeset(PO) snd_hda_codec_hdmi snd_hda_intel snd_hda_codec snd_hwdep snd_hda_core snd_pcm snd_timer snd soundcore nvidia(PO) [last unloaded: rc_core]
  CPU: 1 PID: 28102 Comm: rmmod Tainted: P WC O 4.8.4-build.1 #1
  Hardware name: MSI MS-7309/MS-7309, BIOS V1.12 02/23/2009
  00000000 c12ba080 00000000 00000000 c103ed6a c1616014 00000001 00006dc6
  c1615862 00000454 c109e8a7 c109e8a7 00000009 ffffffff 00000000 f13f6a10
  f5f5a600 c103ee33 00000009 00000000 00000000 c109e8a7 f80ca4d0 c109f617
  Call Trace:
  [<c12ba080>] ? dump_stack+0x44/0x64
  [<c103ed6a>] ? __warn+0xfa/0x120
  [<c109e8a7>] ? module_put+0x57/0x70
  [<c109e8a7>] ? module_put+0x57/0x70
  [<c103ee33>] ? warn_slowpath_null+0x23/0x30
  [<c109e8a7>] ? module_put+0x57/0x70
  [<f80ca4d0>] ? gp8psk_fe_set_frontend+0x460/0x460 [dvb_usb_gp8psk]
  [<c109f617>] ? symbol_put_addr+0x27/0x50
  [<f80bc9ca>] ? dvb_usb_adapter_frontend_exit+0x3a/0x70 [dvb_usb]
  [<f80bb3bf>] ? dvb_usb_exit+0x2f/0xd0 [dvb_usb]
  [<c13d03bc>] ? usb_disable_endpoint+0x7c/0xb0
  [<f80bb48a>] ? dvb_usb_device_exit+0x2a/0x50 [dvb_usb]
  [<c13d2882>] ? usb_unbind_interface+0x62/0x250
  [<c136b514>] ? __pm_runtime_idle+0x44/0x70
  [<c13620d8>] ? __device_release_driver+0x78/0x120
  [<c1362907>] ? driver_detach+0x87/0x90
  [<c1361c48>] ? bus_remove_driver+0x38/0x90
  [<c13d1c18>] ? usb_deregister+0x58/0xb0
  [<c109fbb0>] ? SyS_delete_module+0x130/0x1f0
  [<c1055654>] ? task_work_run+0x64/0x80
  [<c1000fa5>] ? exit_to_usermode_loop+0x85/0x90
  [<c10013f0>] ? do_fast_syscall_32+0x80/0x130
  [<c1549f43>] ? sysenter_past_esp+0x40/0x6a
  ---[ end trace 6ebc60ef3981792f ]---

Such stack traces provide enough information to identify the line inside the
kernel's source code where the bug happened. Depending on the severity of
the issue, it may also contain the word **Oops**, as in this one::

  BUG: unable to handle kernel NULL pointer dereference at (null)
  IP: [<c06969d4>] iret_exc+0x7d0/0xa59
  *pdpt = 000000002258a001 *pde = 0000000000000000
  Oops: 0002 [#1] PREEMPT SMP

Whether the report is an **Oops** or some other sort of stack trace,
identifying the offending line is usually required in order to analyze and
fix the bug. Throughout this chapter, we will use "Oops" to refer to all
kinds of stack traces that need to be analyzed.

If the kernel is compiled with ``CONFIG_DEBUG_INFO``, you can enhance the
quality of the stack trace by using ``scripts/decode_stacktrace.sh``.

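
For instance, assuming the Oops text has been saved to a file (here the
hypothetical ``oops.txt``) and that the matching ``vmlinux`` sits in the
current build tree, a possible invocation would be::

  $ ./scripts/decode_stacktrace.sh vmlinux < oops.txt

This prints the trace again with the addresses resolved to source file names
and line numbers.
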

Modules linked in
-----------------

Modules that are tainted or are being loaded or unloaded are marked with
"(...)", where the taint flags are described in
``Documentation/admin-guide/tainted-kernels.rst``, "being loaded" is
annotated with "+", and "being unloaded" is annotated with "-". In the
example at the beginning of this chapter, ``dvb_usb_gp8psk(-)`` was being
unloaded, while ``nvidia(PO)`` carries the "P" (proprietary) and "O"
(out-of-tree) taint flags.


Where is the Oops message located?
-----------------------------------

Normally the Oops text is read from the kernel buffers by ``klogd`` and
handed to ``syslogd``, which writes it to a syslog file, typically
``/var/log/messages`` (this depends on ``/etc/syslog.conf``). On systems
running systemd, it may also be stored by the ``journald`` daemon and
accessed by running the ``journalctl`` command.

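
For instance, the kernel messages of the current and of the previous boot can
be retrieved with something like the following (the ``-b -1`` form requires
persistent journaling to be enabled)::

  $ journalctl -k             # kernel messages from the current boot
  $ journalctl -k -b -1       # kernel messages from the previous boot
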

Sometimes ``klogd`` dies, in which case you can run ``dmesg > file`` to
read the data from the kernel buffers and save it. Or you can
``cat /proc/kmsg > file``; however, you have to break in to stop the transfer,
since ``kmsg`` is a "never ending file".

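
For example, either of the following would save the currently buffered kernel
messages to a hypothetical ``oops.txt`` file::

  $ dmesg > oops.txt
  $ cat /proc/kmsg > oops.txt     # stop it with Ctrl-C once the Oops is captured
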

If the machine has crashed so badly that you cannot enter commands or
the disk is not available, then you have three options:


(1) Hand copy the text from the screen and type it in after the machine
    has restarted. Messy but it is the only option if you have not
    planned for a crash. Alternatively, you can take a picture of
    the screen with a digital camera - not nice, but better than
    nothing. If the messages scroll off the top of the console, you
    may find that booting with a higher resolution (e.g., ``vga=791``)
    will allow you to read more of the text. (Caveat: This needs ``vesafb``,
    so won't help for 'early' oopses.)


(2) Boot with a serial console (see
    :ref:`Documentation/admin-guide/serial-console.rst <serial_console>`),
    run a null modem to a second machine and capture the output there
    using your favourite communication program. Minicom works well.
    A sample setup is sketched after this list.


(3) Use Kdump (see Documentation/admin-guide/kdump/kdump.rst) and extract
    the kernel ring buffer from old memory using the dmesg gdbmacro in
    Documentation/admin-guide/kdump/gdbmacros.txt.

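
As a sketch of option (2), assuming the crashing machine uses its first
serial port (``ttyS0``) at 115200 baud, the kernel command line could be
extended with::

  console=ttyS0,115200n8 console=tty0

and the output captured on the second machine with, for example::

  $ minicom -D /dev/ttyS0 -b 115200

The exact device name and speed depend on your hardware; see the serial
console documentation referenced above for the details.
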

Finding the bug's location
--------------------------


Reporting a bug works best if you point out the location of the bug in the
kernel source file. There are two methods for doing that. Usually, using
``gdb`` is easier, but the kernel must have been compiled with debug info
for that to work.


gdb
^^^

The GNU debugger (``gdb``) is the best way to figure out the exact file and
line number of the Oops from the ``vmlinux`` file.

The usage of gdb works best on a kernel compiled with ``CONFIG_DEBUG_INFO``.
This can be set by running::

  $ ./scripts/config -d COMPILE_TEST -e DEBUG_KERNEL -e DEBUG_INFO

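
After changing the configuration this way, the kernel needs to be rebuilt so
that the debug info is actually generated. One possible way to do that,
assuming an already configured build tree, is::

  $ make olddefconfig
  $ make -j$(nproc)
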

On a kernel compiled with ``CONFIG_DEBUG_INFO``, you can simply copy the
EIP value from the Oops::

  EIP: 0060:[<c021e50e>] Not tainted VLI

And use GDB to translate that to a human-readable form::

  $ gdb vmlinux
  (gdb) l *0xc021e50e


If you don't have ``CONFIG_DEBUG_INFO`` enabled, use the function
offset from the Oops instead::

  EIP is at vt_ioctl+0xda8/0x1482

And recompile the kernel with ``CONFIG_DEBUG_INFO`` enabled::

  $ ./scripts/config -d COMPILE_TEST -e DEBUG_KERNEL -e DEBUG_INFO
  $ make vmlinux
  $ gdb vmlinux
  (gdb) l *vt_ioctl+0xda8
  0x1888 is in vt_ioctl (drivers/tty/vt/vt_ioctl.c:293).
  289             struct vc_data *vc = NULL;
  293             if (VT_BUSY(vc_num))
  296                     vc = vc_deallocate(vc_num);
  297             console_unlock();


or, if you want to be more verbose, print the function to get its address and
list from there::

  (gdb) p vt_ioctl
  $1 = {int (struct tty_struct *, unsigned int, unsigned long)} 0xae0 <vt_ioctl>
  (gdb) l *0xae0+0xda8


You could, instead, use the object file::

  $ make drivers/tty/
  $ gdb drivers/tty/vt/vt_ioctl.o
  (gdb) l *vt_ioctl+0xda8


If you have a call trace, such as::

  [<ffffffff8802c8e9>] :jbd:log_wait_commit+0xa3/0xf5
  [<ffffffff810482d9>] autoremove_wake_function+0x0/0x2e
  [<ffffffff8802770b>] :jbd:journal_stop+0x1be/0x1ee


this shows that the problem is likely in the :jbd: module. You can load that
module in gdb and list the relevant code::

  $ gdb fs/jbd/jbd.ko
  (gdb) l *log_wait_commit+0xa3


You can also do the same for any function call in the stack trace, such as
this one::

  [<f80bc9ca>] ? dvb_usb_adapter_frontend_exit+0x3a/0x70 [dvb_usb]

The position where the above call happened can be seen with::

  $ gdb drivers/media/usb/dvb-usb/dvb-usb.o
  (gdb) l *dvb_usb_adapter_frontend_exit+0x3a


objdump
^^^^^^^

To debug a kernel, use objdump and look for the hex offset from the crash
output to find the valid line of code/assembler. Without debug symbols, you
will see the assembler code for the routine shown, but if your kernel has
debug symbols the C code will also be available. (Debug symbols can be
enabled in the kernel hacking menu of the menu configuration.) For example::

  $ objdump -r -S -l --disassemble net/dccp/ipv4.o

.. note::

   You need to be at the top level of the kernel tree for this to pick up
   your C files.


If you don't have access to the source code you can still debug some crash
dumps using the following method (example crash dump output as shown by
ksymoops)::

  EIP is at +0x14/0x4c0

  Code: 44 24 04 e8 6f 05 00 00 e9 e8 fe ff ff 8d 76 00 8d bc 27 00 00
  00 00 55 57 56 53 81 ec bc 00 00 00 8b ac 24 d0 00 00 00 8b 5d 08
  <8b> 83 3c 01 00 00 89 44 24 14 8b 45 28 85 c0 89 44 24 18 0f 85


Put the bytes into a "foo.s" file like this::

          .text
          .globl foo
  foo:
          .byte  .... /* bytes from Code: part of OOPS dump */


Compile it with "gcc -c -o foo.o foo.s" then look at the output of
"objdump --disassemble foo.o". Output::

  mov        0xd0(%esp), %ebp        ! %ebp = arg0 (skb)
  mov        0x8(%ebp), %ebx         ! %ebx = skb->sk
  mov        0x13c(%ebx), %eax       ! %eax = inet_sk(sk)->opt


``scripts/decodecode`` can be used to automate most of this, depending
on what CPU architecture is being debugged.

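
For example, assuming the Oops (including its ``Code:`` line) has been saved
to a hypothetical ``oops.txt`` file, you could run::

  $ ./scripts/decodecode < oops.txt

which disassembles the bytes from the ``Code:`` line and marks the trapping
instruction.
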

Reporting the bug
-----------------

Once you find where the bug happened, by inspecting its location,
you could either try to fix it yourself or report it upstream.

In order to report it upstream, you should identify the mailing list
used for the development of the affected code. This can be done by using
the ``get_maintainer.pl`` script.


For example, if you find a bug in the gspca driver's sonixj.c file, you can
get its maintainers with::

  $ ./scripts/get_maintainer.pl -f drivers/media/usb/gspca/sonixj.c
  Hans Verkuil <hverkuil@xs4all.nl> (odd fixer:GSPCA USB WEBCAM DRIVER,commit_signer:1/1=100%)
  Mauro Carvalho Chehab <mchehab@kernel.org> (maintainer:MEDIA INPUT INFRASTRUCTURE (V4L/DVB),commit_signer:1/1=100%)
  Tejun Heo <tj@kernel.org> (commit_signer:1/1=100%)
  Bhaktipriya Shridhar <bhaktipriya96@gmail.com> (commit_signer:1/1=100%,authored:1/1=100%,added_lines:4/4=100%,removed_lines:9/9=100%)
  linux-media@vger.kernel.org (open list:GSPCA USB WEBCAM DRIVER)
  linux-kernel@vger.kernel.org (open list)


Please notice that it will point to:

- The last developers that touched the source code (if this is done inside
  a git tree). In the above example, Tejun and Bhaktipriya (in this
  specific case, neither was really involved in the development of this file);
- The driver maintainer (Hans Verkuil);
- The subsystem maintainer (Mauro Carvalho Chehab);
- The driver and/or subsystem mailing list (linux-media@vger.kernel.org);
- The Linux Kernel mailing list (linux-kernel@vger.kernel.org).


Usually, the fastest way to have your bug fixed is to report it to the mailing
list used for the development of the code (the linux-media ML), copying the
driver maintainer (Hans).


If you are totally stumped as to whom to send the report, and
``get_maintainer.pl`` didn't provide you with anything useful, send it to
linux-kernel@vger.kernel.org.


Thanks for your help in making Linux as stable as humanly possible.


Fixing the bug
--------------

If you know programming, you could help us by not only reporting the bug,
but also providing us with a solution. After all, open source is about
sharing what you do, and don't you want to be recognised for your genius?

If you decide to go that way, once you have worked out a fix, please submit
it upstream. Please do read
:ref:`Documentation/process/submitting-patches.rst <submittingpatches>` though
to help your code get accepted.


---------------------------------------------------------------------------


Notes on Oops tracing with ``klogd``
------------------------------------

In order to help Linus and the other kernel developers, there has been
substantial support incorporated into ``klogd`` for processing protection
faults. To have full support for address resolution, at least
version 1.3-pl3 of the ``sysklogd`` package should be used.


When a protection fault occurs the ``klogd`` daemon automatically
translates important addresses in the kernel log messages to their
symbolic equivalents. This translated kernel message is then
forwarded through whatever reporting mechanism ``klogd`` is using. The
protection fault message can be simply cut out of the message files
and forwarded to the kernel developers.


Two types of address resolution are performed by ``klogd``: static
translation and dynamic translation. Static translation uses the System.map
file. In order to do static translation, the ``klogd`` daemon must be able to
find a system map file at daemon initialization time. See the klogd man page
for information on how ``klogd`` searches for map files.

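
For instance, with sysklogd's ``klogd`` the map file can also be passed
explicitly on the command line; assuming the usual naming convention for an
installed kernel, something like the following could be used::

  $ klogd -k /boot/System.map-$(uname -r)
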

Dynamic address translation is important when kernel loadable modules
are being used. Since memory for kernel modules is allocated from the
kernel's dynamic memory pools there are no fixed locations for either
the start of the module or for functions and symbols in the module.


The kernel supports system calls which allow a program to determine
which modules are loaded and their location in memory. Using these
system calls the klogd daemon builds a symbol table which can be used
to debug a protection fault which occurs in a loadable kernel module.


At the very minimum klogd will provide the name of the module which
generated the protection fault. There may be additional symbolic
information available if the developer of the loadable module chose to
export symbol information from the module.


Since the kernel module environment can be dynamic, there must be a
mechanism for notifying the ``klogd`` daemon when a change in module
environment occurs. There are command line options available which
allow klogd to signal the currently executing daemon that symbol
information should be refreshed. See the ``klogd`` manual page for more
information.

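
With sysklogd's ``klogd``, for instance, the running daemon can be told to
reload its symbol information with::

  $ klogd -i      # reload module symbol information
  $ klogd -I      # reload both static kernel and module symbol information
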

A patch is included with the sysklogd distribution which modifies the
``modules-2.0.0`` package to automatically signal klogd whenever a module
is loaded or unloaded. Applying this patch provides essentially
seamless support for debugging protection faults which occur with
kernel loadable modules.


The following is an example of a protection fault in a loadable module
processed by ``klogd``::

  Aug 29 09:51:01 blizard kernel: Unable to handle kernel paging request at virtual address f15e97cc
  Aug 29 09:51:01 blizard kernel: current->tss.cr3 = 0062d000, %cr3 = 0062d000
  Aug 29 09:51:01 blizard kernel: *pde = 00000000
  Aug 29 09:51:01 blizard kernel: Oops: 0002
  Aug 29 09:51:01 blizard kernel: CPU:    0
  Aug 29 09:51:01 blizard kernel: EIP:    0010:[oops:_oops+16/3868]
  Aug 29 09:51:01 blizard kernel: EFLAGS: 00010212
  Aug 29 09:51:01 blizard kernel: eax: 315e97cc   ebx: 003a6f80   ecx: 001be77b   edx: 00237c0c
  Aug 29 09:51:01 blizard kernel: esi: 00000000   edi: bffffdb3   ebp: 00589f90   esp: 00589f8c
  Aug 29 09:51:01 blizard kernel: ds: 0018   es: 0018   fs: 002b   gs: 002b   ss: 0018
  Aug 29 09:51:01 blizard kernel: Process oops_test (pid: 3374, process nr: 21, stackpage=00589000)
  Aug 29 09:51:01 blizard kernel: Stack: 315e97cc 00589f98 0100b0b4 bffffed4 0012e38e 00240c64 003a6f80 00000001
  Aug 29 09:51:01 blizard kernel:        00000000 00237810 bfffff00 0010a7fa 00000003 00000001 00000000 bfffff00
  Aug 29 09:51:01 blizard kernel:        bffffdb3 bffffed4 ffffffda 0000002b 0007002b 0000002b 0000002b 00000036
  Aug 29 09:51:01 blizard kernel: Call Trace: [oops:_oops_ioctl+48/80] [_sys_ioctl+254/272] [_system_call+82/128]
  Aug 29 09:51:01 blizard kernel: Code: c7 00 05 00 00 00 eb 08 90 90 90 90 90 90 90 90 89 ec 5d c3


---------------------------------------------------------------------------

::

  Dr. G.W. Wettstein                        Oncology Research Div. Computing Facility
  Roger Maris Cancer Center                 INTERNET: greg@wind.rmcc.com
