1 \input texinfo @c -*-texinfo-*-
2 @setfilename gprof.info
6 This file documents the gprof profiler of the GNU system.
8 Copyright (C) 1988, 1992 Free Software Foundation, Inc.
10 Permission is granted to make and distribute verbatim copies of
11 this manual provided the copyright notice and this permission notice
12 are preserved on all copies.
Permission is granted to process this file through TeX and print the
16 results, provided the printed document carries copying permission
17 notice identical to this one except for the removal of this paragraph
18 (this paragraph not being relevant to the printed manual).
21 Permission is granted to copy and distribute modified versions of this
22 manual under the conditions for verbatim copying, provided that the entire
23 resulting derived work is distributed under the terms of a permission
24 notice identical to this one.
26 Permission is granted to copy and distribute translations of this manual
27 into another language, under the above conditions for modified versions.
35 @subtitle The @sc{gnu} Profiler
36 @author Jay Fenlason and Richard Stallman
40 This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
41 can use it to determine which parts of a program are taking most of the
42 execution time. We assume that you know how to write, compile, and
43 execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
45 This manual was edited January 1993 by Jeffrey Osier.
47 @vskip 0pt plus 1filll
48 Copyright @copyright{} 1988, 1992 Free Software Foundation, Inc.
50 Permission is granted to make and distribute verbatim copies of
51 this manual provided the copyright notice and this permission notice
52 are preserved on all copies.
55 Permission is granted to process this file through TeX and print the
56 results, provided the printed document carries copying permission
57 notice identical to this one except for the removal of this paragraph
58 (this paragraph not being relevant to the printed manual).
61 Permission is granted to copy and distribute modified versions of this
62 manual under the conditions for verbatim copying, provided that the entire
63 resulting derived work is distributed under the terms of a permission
64 notice identical to this one.
66 Permission is granted to copy and distribute translations of this manual
67 into another language, under the same conditions as for modified versions.
73 @top Profiling a Program: Where Does It Spend Its Time?
75 This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
76 can use it to determine which parts of a program are taking most of the
77 execution time. We assume that you know how to write, compile, and
78 execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
81 * Why:: What profiling means, and why it is useful.
82 * Compiling:: How to compile your program for profiling.
83 * Executing:: How to execute your program to generate the
84 profile data file @file{gmon.out}.
* Invoking:: How to run @code{gprof}, and how to specify options for it.
88 * Flat Profile:: The flat profile shows how much time was spent
89 executing directly in each function.
90 * Call Graph:: The call graph shows which functions called which
91 others, and how much time each function used
92 when its subroutine calls are included.
94 * Implementation:: How the profile data is recorded and written.
95 * Sampling Error:: Statistical margins of error.
96 How to accumulate data from several runs
97 to make it more accurate.
99 * Assumptions:: Some of @code{gprof}'s measurements are based
100 on assumptions about your program
101 that could be very wrong.
* Incompatibilities:: Differences between GNU @code{gprof} and Unix @code{gprof}.
110 Profiling allows you to learn where your program spent its time and which
111 functions called which other functions while it was executing. This
112 information can show you which pieces of your program are slower than you
113 expected, and might be candidates for rewriting to make your program
114 execute faster. It can also tell you which functions are being called more
or less often than you expected. This may help you spot bugs that had
otherwise gone unnoticed.
118 Since the profiler uses information collected during the actual execution
119 of your program, it can be used on programs that are too large or too
120 complex to analyze by reading the source. However, how your program is run
121 will affect the information that shows up in the profile data. If you
122 don't use some feature of your program while it is being profiled, no
123 profile information will be generated for that feature.
125 Profiling has several steps:
129 You must compile and link your program with profiling enabled.
133 You must execute your program to generate a profile data file.
137 You must run @code{gprof} to analyze the profile data.
141 The next three chapters explain these steps in greater detail.
143 The result of the analysis is a file containing two tables, the
144 @dfn{flat profile} and the @dfn{call graph} (plus blurbs which briefly
145 explain the contents of these tables).
147 The flat profile shows how much time your program spent in each function,
148 and how many times that function was called. If you simply want to know
149 which functions burn most of the cycles, it is stated concisely here.
152 The call graph shows, for each function, which functions called it, which
153 other functions it called, and how many times. There is also an estimate
154 of how much time was spent in the subroutines of each function. This can
155 suggest places where you might try to eliminate function calls that use a
156 lot of time. @xref{Call Graph}.
159 @chapter Compiling a Program for Profiling
161 The first step in generating profile information for your program is
162 to compile and link it with profiling enabled.
164 To compile a source file for profiling, specify the @samp{-pg} option when
you run the compiler. (This is in addition to the options you normally
use.)
168 To link the program for profiling, if you use a compiler such as @code{cc}
169 to do the linking, simply specify @samp{-pg} in addition to your usual
170 options. The same option, @samp{-pg}, alters either compilation or linking
171 to do what is necessary for profiling. Here are examples:
174 cc -g -c myprog.c utils.c -pg
175 cc -o myprog myprog.o utils.o -pg
178 The @samp{-pg} option also works with a command that both compiles and links:
181 cc -o myprog myprog.c utils.c -g -pg
184 If you run the linker @code{ld} directly instead of through a compiler
185 such as @code{cc}, you must specify the profiling startup file
186 @file{/lib/gcrt0.o} as the first input file instead of the usual startup
187 file @file{/lib/crt0.o}. In addition, you would probably want to
188 specify the profiling C library, @file{/usr/lib/libc_p.a}, by writing
189 @samp{-lc_p} instead of the usual @samp{-lc}. This is not absolutely
190 necessary, but doing this gives you number-of-calls information for
standard library functions such as @code{read} and @code{open}. For
example:
195 ld -o myprog /lib/gcrt0.o myprog.o utils.o -lc_p
198 If you compile only some of the modules of the program with @samp{-pg}, you
199 can still profile the program, but you won't get complete information about
200 the modules that were compiled without @samp{-pg}. The only information
201 you get for the functions in those modules is the total time spent in them;
202 there is no record of how many times they were called, or from where. This
203 will not affect the flat profile (except that the @code{calls} field for
the functions will be blank), but will greatly reduce the usefulness of the
call graph.
208 @chapter Executing the Program to Generate Profile Data
210 Once the program is compiled for profiling, you must run it in order to
211 generate the information that @code{gprof} needs. Simply run the program
212 as usual, using the normal arguments, file names, etc. The program should
213 run normally, producing the same output as usual. It will, however, run
somewhat slower than normal because of the time spent collecting and
writing the profile data.
217 The way you run the program---the arguments and input that you give
218 it---may have a dramatic effect on what the profile information shows. The
219 profile data will describe the parts of the program that were activated for
220 the particular input you use. For example, if the first command you give
221 to your program is to quit, the profile data will show the time used in
222 initialization and in cleanup, but not much else.
Your program will write the profile data into a file called @file{gmon.out}
225 just before exiting. If there is already a file called @file{gmon.out},
226 its contents are overwritten. There is currently no way to tell the
227 program to write the profile data under a different name, but you can rename
228 the file afterward if you are concerned that it may be overwritten.
230 In order to write the @file{gmon.out} file properly, your program must exit
231 normally: by returning from @code{main} or by calling @code{exit}. Calling
232 the low-level function @code{_exit} does not write the profile data, and
233 neither does abnormal termination due to an unhandled signal.
235 The @file{gmon.out} file is written in the program's @emph{current working
236 directory} at the time it exits. This means that if your program calls
237 @code{chdir}, the @file{gmon.out} file will be left in the last directory
238 your program @code{chdir}'d to. If you don't have permission to write in
239 this directory, the file is not written. You may get a confusing error
message if this happens. (We have not yet replaced the part of Unix
responsible for this; when we do, we will make the error message
clearer.)
245 @chapter @code{gprof} Command Summary
247 After you have a profile data file @file{gmon.out}, you can run @code{gprof}
248 to interpret the information in it. The @code{gprof} program prints a
249 flat profile and a call graph on standard output. Typically you would
250 redirect the output of @code{gprof} into a file with @samp{>}.
252 You run @code{gprof} like this:
255 gprof @var{options} [@var{executable-file} [@var{profile-data-files}@dots{}]] [> @var{outfile}]
Here, square brackets indicate optional arguments.
261 If you omit the executable file name, the file @file{a.out} is used. If
262 you give no profile data file name, the file @file{gmon.out} is used. If
263 any file is not in the proper format, or if the profile data file does not
264 appear to belong to the executable file, an error message is printed.
266 You can give more than one profile data file by entering all their names
after the executable file name; then the statistics in all the data files
are summed together.
270 The following options may be used to selectively include or exclude
271 functions in the output:
275 The @samp{-a} option causes @code{gprof} to suppress the printing of
276 statically declared (private) functions. (These are functions whose
277 names are not listed as global, and which are not visible outside the
278 file/function/block where they were defined.) Time spent in these
279 functions, calls to/from them, etc, will all be attributed to the
280 function that was loaded directly before it in the executable file.
281 @c This is compatible with Unix @code{gprof}, but a bad idea.
282 This option affects both the flat profile and the call graph.
284 @item -e @var{function_name}
285 The @samp{-e @var{function}} option tells @code{gprof} to not print
286 information about the function @var{function_name} (and its
287 children@dots{}) in the call graph. The function will still be listed
288 as a child of any functions that call it, but its index number will be
289 shown as @samp{[not printed]}. More than one @samp{-e} option may be
given; only one @var{function_name} may be indicated with each @samp{-e}
option.
293 @item -E @var{function_name}
294 The @code{-E @var{function}} option works like the @code{-e} option, but
295 time spent in the function (and children who were not called from
296 anywhere else), will not be used to compute the percentages-of-time for
297 the call graph. More than one @samp{-E} option may be given; only one
298 @var{function_name} may be indicated with each @samp{-E} option.
300 @item -f @var{function_name}
301 The @samp{-f @var{function}} option causes @code{gprof} to limit the
302 call graph to the function @var{function_name} and its children (and
303 their children@dots{}). More than one @samp{-f} option may be given;
only one @var{function_name} may be indicated with each @samp{-f}
option.
307 @item -F @var{function_name}
308 The @samp{-F @var{function}} option works like the @code{-f} option, but
309 only time spent in the function and its children (and their
310 children@dots{}) will be used to determine total-time and
311 percentages-of-time for the call graph. More than one @samp{-F} option
312 may be given; only one @var{function_name} may be indicated with each
313 @samp{-F} option. The @samp{-F} option overrides the @samp{-E} option.
315 @item -k @var{from@dots{}} @var{to@dots{}}
316 The @samp{-k} option allows you to delete from the profile any arcs from
317 routine @var{from} to routine @var{to}.
320 If you give the @samp{-z} option, @code{gprof} will mention all
321 functions in the flat profile, even those that were never called, and
322 that had no time spent in them. This is useful in conjunction with the
323 @samp{-c} option for discovering which routines were never called.
326 The order of these options does not matter.
328 Note that only one function can be specified with each @code{-e},
329 @code{-E}, @code{-f} or @code{-F} option. To specify more than one
330 function, use multiple options. For example, this command:
333 gprof -e boring -f foo -f bar myprogram > gprof.output
337 lists in the call graph all functions that were reached from either
338 @code{foo} or @code{bar} and were not reachable from @code{boring}.
340 There are a few other useful @code{gprof} options:
344 If the @samp{-b} option is given, @code{gprof} doesn't print the
345 verbose blurbs that try to explain the meaning of all of the fields in
346 the tables. This is useful if you intend to print out the output, or
347 are tired of seeing the blurbs.
350 The @samp{-c} option causes the static call-graph of the program to be
351 discovered by a heuristic which examines the text space of the object
file. Static-only parents or children are indicated with call counts of
@samp{0}.
356 The @samp{-d @var{num}} option specifies debugging options.
360 The @samp{-s} option causes @code{gprof} to summarize the information
361 in the profile data files it read in, and write out a profile data
362 file called @file{gmon.sum}, which contains all the information from
363 the profile data files that @code{gprof} read in. The file @file{gmon.sum}
364 may be one of the specified input files; the effect of this is to
365 merge the data in the other input files into @file{gmon.sum}.
366 @xref{Sampling Error}.
368 Eventually you can run @code{gprof} again without @samp{-s} to analyze the
369 cumulative data in the file @file{gmon.sum}.
372 The @samp{-T} option causes @code{gprof} to print its output in
373 ``traditional'' BSD style.
377 @chapter How to Understand the Flat Profile
380 The @dfn{flat profile} shows the total amount of time your program
381 spent executing each function. Unless the @samp{-z} option is given,
382 functions with no apparent time spent in them, and no apparent calls
383 to them, are not mentioned. Note that if a function was not compiled
384 for profiling, and didn't run long enough to show up on the program
counter histogram, it will be indistinguishable from a function that was
never called.
388 This is part of a flat profile for a small program:
394 Each sample counts as 0.01 seconds.
395 % cumulative self self total
396 time seconds seconds calls ms/call ms/call name
397 33.34 0.02 0.02 7208 0.00 0.00 open
398 16.67 0.03 0.01 244 0.04 0.12 offtime
399 16.67 0.04 0.01 8 1.25 1.25 memccpy
400 16.67 0.05 0.01 7 1.43 1.43 write
401 16.67 0.06 0.01 mcount
402 0.00 0.06 0.00 236 0.00 0.00 tzset
403 0.00 0.06 0.00 192 0.00 0.00 tolower
404 0.00 0.06 0.00 47 0.00 0.00 strlen
405 0.00 0.06 0.00 45 0.00 0.00 strchr
406 0.00 0.06 0.00 1 0.00 50.00 main
407 0.00 0.06 0.00 1 0.00 0.00 memcpy
408 0.00 0.06 0.00 1 0.00 10.11 print
409 0.00 0.06 0.00 1 0.00 0.00 profil
410 0.00 0.06 0.00 1 0.00 50.00 report
416 The functions are sorted by decreasing run-time spent in them. The
417 functions @samp{mcount} and @samp{profil} are part of the profiling
apparatus and appear in every flat profile; their time gives a measure of
419 the amount of overhead due to profiling.
421 The sampling period estimates the margin of error in each of the time
422 figures. A time figure that is not much larger than this is not
423 reliable. In this example, the @samp{self seconds} field for
424 @samp{mcount} might well be @samp{0} or @samp{0.04} in another run.
425 @xref{Sampling Error}, for a complete discussion.
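The rule of thumb behind this (a sketch only; the Sampling Error chapter
gives the full discussion) is that a figure built from @math{n} samples
carries a statistical error of roughly @math{\sqrt{n}} sampling periods.
With sampling period @math{p}, a time figure @math{t} therefore has

```latex
t = n\,p, \qquad \Delta t \approx \sqrt{n}\;p = \sqrt{t\,p}.
```

For @samp{mcount} above, @math{t = 0.01} s and @math{p = 0.01} s give
@math{\Delta t \approx 0.01} s, as large as the figure itself, which is
why a repeat run can plausibly show anything from zero to a few
hundredths of a second.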
427 Here is what the fields in each line mean:
431 This is the percentage of the total execution time your program spent
432 in this function. These should all add up to 100%.
434 @item cumulative seconds
435 This is the cumulative total number of seconds the computer spent
executing this function, plus the time spent in all the functions
437 above this one in this table.
440 This is the number of seconds accounted for by this function alone.
441 The flat profile listing is sorted first by this number.
444 This is the total number of times the function was called. If the
445 function was never called, or the number of times it was called cannot
446 be determined (probably because the function was not compiled with
447 profiling enabled), the @dfn{calls} field is blank.
450 This represents the average number of milliseconds spent in this
451 function per call, if this function is profiled. Otherwise, this field
452 is blank for this function.
455 This represents the average number of milliseconds spent in this
456 function and its descendants per call, if this function is profiled.
457 Otherwise, this field is blank for this function.
This is the name of the function. The flat profile is sorted
alphabetically by this field, after sorting by the @dfn{self seconds}
field.
465 @chapter How to Read the Call Graph
468 The @dfn{call graph} shows how much time was spent in each function
469 and its children. From this information, you can find functions that,
470 while they themselves may not have used much time, called other
471 functions that did use unusual amounts of time.
Here is a sample call graph from a small program. This call graph came
from the same @code{gprof} run as the flat profile example in the
previous chapter.
479 granularity: each sample hit covers 2 byte(s) for 20.00% of 0.05 seconds
481 index % time self children called name
483 [1] 100.0 0.00 0.05 start [1]
484 0.00 0.05 1/1 main [2]
485 0.00 0.00 1/2 on_exit [28]
486 0.00 0.00 1/1 exit [59]
487 -----------------------------------------------
488 0.00 0.05 1/1 start [1]
489 [2] 100.0 0.00 0.05 1 main [2]
490 0.00 0.05 1/1 report [3]
491 -----------------------------------------------
492 0.00 0.05 1/1 main [2]
493 [3] 100.0 0.00 0.05 1 report [3]
494 0.00 0.03 8/8 timelocal [6]
495 0.00 0.01 1/1 print [9]
496 0.00 0.01 9/9 fgets [12]
497 0.00 0.00 12/34 strncmp <cycle 1> [40]
498 0.00 0.00 8/8 lookup [20]
499 0.00 0.00 1/1 fopen [21]
500 0.00 0.00 8/8 chewtime [24]
501 0.00 0.00 8/16 skipspace [44]
502 -----------------------------------------------
503 [4] 59.8 0.01 0.02 8+472 <cycle 2 as a whole> [4]
504 0.01 0.02 244+260 offtime <cycle 2> [7]
505 0.00 0.00 236+1 tzset <cycle 2> [26]
506 -----------------------------------------------
510 The lines full of dashes divide this table into @dfn{entries}, one for each
511 function. Each entry has one or more lines.
513 In each entry, the primary line is the one that starts with an index number
514 in square brackets. The end of this line says which function the entry is
515 for. The preceding lines in the entry describe the callers of this
516 function and the following lines describe its subroutines (also called
517 @dfn{children} when we speak of the call graph).
519 The entries are sorted by time spent in the function and its subroutines.
521 The internal profiling function @code{mcount} (@pxref{Flat Profile})
522 is never mentioned in the call graph.
525 * Primary:: Details of the primary line's contents.
526 * Callers:: Details of caller-lines' contents.
527 * Subroutines:: Details of subroutine-lines' contents.
528 * Cycles:: When there are cycles of recursion,
529 such as @code{a} calls @code{b} calls @code{a}@dots{}
533 @section The Primary Line
535 The @dfn{primary line} in a call graph entry is the line that
536 describes the function which the entry is about and gives the overall
537 statistics for this function.
539 For reference, we repeat the primary line from the entry for function
540 @code{report} in our main example, together with the heading line that
541 shows the names of the fields:
545 index % time self children called name
547 [3] 100.0 0.00 0.05 1 report [3]
551 Here is what the fields in the primary line mean:
555 Entries are numbered with consecutive integers. Each function
therefore has an index number, which appears at the beginning of its
primary line.
559 Each cross-reference to a function, as a caller or subroutine of
560 another, gives its index number as well as its name. The index number
561 guides you if you wish to look for the entry for that function.
564 This is the percentage of the total time that was spent in this
function, including time spent in subroutines called from this
function.
568 The time spent in this function is counted again for the callers of
569 this function. Therefore, adding up these percentages is meaningless.
572 This is the total amount of time spent in this function. This
573 should be identical to the number printed in the @code{seconds} field
574 for this function in the flat profile.
577 This is the total amount of time spent in the subroutine calls made by
578 this function. This should be equal to the sum of all the @code{self}
and @code{children} entries of the children listed directly below this
function.
583 This is the number of times the function was called.
585 If the function called itself recursively, there are two numbers,
586 separated by a @samp{+}. The first number counts non-recursive calls,
587 and the second counts recursive calls.
In the example above, the function @code{report} was called once from
@code{main}.
This is the name of the current function. The index number is
repeated after it.
596 If the function is part of a cycle of recursion, the cycle number is
597 printed between the function's name and the index number
598 (@pxref{Cycles}). For example, if function @code{gnurr} is part of
cycle number one, and has index number twelve, its primary line would
end with @samp{gnurr <cycle 1> [12]}.
607 @node Callers, Subroutines, Primary, Call Graph
608 @section Lines for a Function's Callers
610 A function's entry has a line for each function it was called by.
611 These lines' fields correspond to the fields of the primary line, but
612 their meanings are different because of the difference in context.
614 For reference, we repeat two lines from the entry for the function
615 @code{report}, the primary line and one caller-line preceding it, together
616 with the heading line that shows the names of the fields:
619 index % time self children called name
621 0.00 0.05 1/1 main [2]
622 [3] 100.0 0.00 0.05 1 report [3]
625 Here are the meanings of the fields in the caller-line for @code{report}
626 called from @code{main}:
630 An estimate of the amount of time spent in @code{report} itself when it was
631 called from @code{main}.
634 An estimate of the amount of time spent in subroutines of @code{report}
635 when @code{report} was called from @code{main}.
637 The sum of the @code{self} and @code{children} fields is an estimate
638 of the amount of time spent within calls to @code{report} from @code{main}.
641 Two numbers: the number of times @code{report} was called from @code{main},
followed by the total number of nonrecursive calls to @code{report} from
all its callers.
645 @item name and index number
646 The name of the caller of @code{report} to which this line applies,
647 followed by the caller's index number.
649 Not all functions have entries in the call graph; some
650 options to @code{gprof} request the omission of certain functions.
651 When a caller has no entry of its own, it still has caller-lines
652 in the entries of the functions it calls.
654 If the caller is part of a recursion cycle, the cycle number is
655 printed between the name and the index number.
658 If the identity of the callers of a function cannot be determined, a
659 dummy caller-line is printed which has @samp{<spontaneous>} as the
``caller's name'' and all other fields blank. This can happen for
signal handlers.
662 @c What if some calls have determinable callers' names but not all?
663 @c FIXME - still relevant?
665 @node Subroutines, Cycles, Callers, Call Graph
666 @section Lines for a Function's Subroutines
668 A function's entry has a line for each of its subroutines---in other
669 words, a line for each other function that it called. These lines'
670 fields correspond to the fields of the primary line, but their meanings
671 are different because of the difference in context.
673 For reference, we repeat two lines from the entry for the function
674 @code{main}, the primary line and a line for a subroutine, together
675 with the heading line that shows the names of the fields:
678 index % time self children called name
680 [2] 100.0 0.00 0.05 1 main [2]
681 0.00 0.05 1/1 report [3]
684 Here are the meanings of the fields in the subroutine-line for @code{main}
685 calling @code{report}:
689 An estimate of the amount of time spent directly within @code{report}
690 when @code{report} was called from @code{main}.
693 An estimate of the amount of time spent in subroutines of @code{report}
694 when @code{report} was called from @code{main}.
696 The sum of the @code{self} and @code{children} fields is an estimate
697 of the total time spent in calls to @code{report} from @code{main}.
700 Two numbers, the number of calls to @code{report} from @code{main}
701 followed by the total number of nonrecursive calls to @code{report}.
704 The name of the subroutine of @code{main} to which this line applies,
705 followed by the subroutine's index number.
707 If the caller is part of a recursion cycle, the cycle number is
708 printed between the name and the index number.
711 @node Cycles,, Subroutines, Call Graph
712 @section How Mutually Recursive Functions Are Described
714 @cindex recursion cycle
716 The graph may be complicated by the presence of @dfn{cycles of
717 recursion} in the call graph. A cycle exists if a function calls
718 another function that (directly or indirectly) calls (or appears to
719 call) the original function. For example: if @code{a} calls @code{b},
720 and @code{b} calls @code{a}, then @code{a} and @code{b} form a cycle.
722 Whenever there are call-paths both ways between a pair of functions, they
723 belong to the same cycle. If @code{a} and @code{b} call each other and
724 @code{b} and @code{c} call each other, all three make one cycle. Note that
725 even if @code{b} only calls @code{a} if it was not called from @code{a},
@code{gprof} cannot determine this, so @code{a} and @code{b} are still
considered a cycle.
729 The cycles are numbered with consecutive integers. When a function
730 belongs to a cycle, each time the function name appears in the call graph
731 it is followed by @samp{<cycle @var{number}>}.
733 The reason cycles matter is that they make the time values in the call
734 graph paradoxical. The ``time spent in children'' of @code{a} should
735 include the time spent in its subroutine @code{b} and in @code{b}'s
736 subroutines---but one of @code{b}'s subroutines is @code{a}! How much of
737 @code{a}'s time should be included in the children of @code{a}, when
738 @code{a} is indirectly recursive?
740 The way @code{gprof} resolves this paradox is by creating a single entry
741 for the cycle as a whole. The primary line of this entry describes the
742 total time spent directly in the functions of the cycle. The
743 ``subroutines'' of the cycle are the individual functions of the cycle, and
744 all other functions that were called directly by them. The ``callers'' of
the cycle are the functions, outside the cycle, that called functions in
the cycle.
748 Here is an example portion of a call graph which shows a cycle containing
749 functions @code{a} and @code{b}. The cycle was entered by a call to
750 @code{a} from @code{main}; both @code{a} and @code{b} called @code{c}.
753 index % time self children called name
754 ----------------------------------------
756 [3] 91.71 1.77 0 1+5 <cycle 1 as a whole> [3]
757 1.02 0 3 b <cycle 1> [4]
758 0.75 0 2 a <cycle 1> [5]
759 ----------------------------------------
761 [4] 52.85 1.02 0 0 b <cycle 1> [4]
764 ----------------------------------------
767 [5] 38.86 0.75 0 1 a <cycle 1> [5]
770 ----------------------------------------
774 (The entire call graph for this program contains in addition an entry for
775 @code{main}, which calls @code{a}, and an entry for @code{c}, with callers
776 @code{a} and @code{b}.)
779 index % time self children called name
781 [1] 100.00 0 1.93 0 start [1]
782 0.16 1.77 1/1 main [2]
783 ----------------------------------------
784 0.16 1.77 1/1 start [1]
785 [2] 100.00 0.16 1.77 1 main [2]
786 1.77 0 1/1 a <cycle 1> [5]
787 ----------------------------------------
789 [3] 91.71 1.77 0 1+5 <cycle 1 as a whole> [3]
790 1.02 0 3 b <cycle 1> [4]
791 0.75 0 2 a <cycle 1> [5]
793 ----------------------------------------
795 [4] 52.85 1.02 0 0 b <cycle 1> [4]
798 ----------------------------------------
801 [5] 38.86 0.75 0 1 a <cycle 1> [5]
804 ----------------------------------------
805 0 0 3/6 b <cycle 1> [4]
806 0 0 3/6 a <cycle 1> [5]
808 ----------------------------------------
811 The @code{self} field of the cycle's primary line is the total time
812 spent in all the functions of the cycle. It equals the sum of the
813 @code{self} fields for the individual functions in the cycle, found
in the subroutine lines of the cycle's entry for these functions.
816 The @code{children} fields of the cycle's primary line and subroutine lines
817 count only subroutines outside the cycle. Even though @code{a} calls
818 @code{b}, the time spent in those calls to @code{b} is not counted in
819 @code{a}'s @code{children} time. Thus, we do not encounter the problem of
820 what to do when the time in those calls to @code{b} includes indirect
821 recursive calls back to @code{a}.
823 The @code{children} field of a caller-line in the cycle's entry estimates
824 the amount of time spent @emph{in the whole cycle}, and its other
825 subroutines, on the times when that caller called a function in the cycle.
827 The @code{calls} field in the primary line for the cycle has two numbers:
828 first, the number of times functions in the cycle were called by functions
829 outside the cycle; second, the number of times they were called by
830 functions in the cycle (including times when a function in the cycle calls
itself). This is a generalization of the usual split into nonrecursive and
recursive calls.
834 The @code{calls} field of a subroutine-line for a cycle member in the
cycle's entry says how many times that function was called from functions in
the cycle. The total of all these is the second number in the primary line's
@code{calls} field.
In the individual entry for a function in a cycle, the other functions in
the same cycle can appear as subroutines and as callers.  These lines show
how many times each function in the cycle called or was called from each other
function in the cycle.  The @code{self} and @code{children} fields in these
lines are blank because of the difficulty of defining meanings for them
when recursion is going on.

@node Implementation, Sampling Error, Call Graph, Top
@chapter Implementation of Profiling

Profiling works by changing how every function in your program is compiled
so that when it is called, it will stash away some information about where
it was called from.  From this, the profiler can figure out what function
called it, and can count how many times it was called.  This change is made
by the compiler when your program is compiled with the @samp{-pg} option.

Profiling also involves watching your program as it runs, and keeping a
histogram of where the program counter happens to be every now and then.
Typically the program counter is looked at around 100 times per second of
run time, but the exact frequency may vary from system to system.

A special startup routine allocates memory for the histogram and sets up
a clock signal handler to make entries in it.  Use of this special
startup routine is one of the effects of using @samp{gcc @dots{} -pg} to
link.  The startup file also includes an @samp{exit} function which is
responsible for writing the file @file{gmon.out}.

Number-of-calls information for library routines is collected by using a
special version of the C library.  The programs in it are the same as in
the usual C library, but they were compiled with @samp{-pg}.  If you
link your program with @samp{gcc @dots{} -pg}, it automatically uses the
profiling version of the library.

The output from @code{gprof} gives no indication of parts of your program that
are limited by I/O or swapping bandwidth.  This is because samples of the
program counter are taken at fixed intervals of run time.  Therefore, the
time measurements in @code{gprof} output say nothing about time that your
program was not running.  For example, a part of the program that creates
so much data that it cannot all fit in physical memory at once may run very
slowly due to thrashing, but @code{gprof} will say it uses little time.  On
the other hand, sampling by run time has the advantage that the amount of
load due to other users won't directly affect the output you get.

@node Sampling Error, Assumptions, Implementation, Top
@chapter Statistical Inaccuracy of @code{gprof} Output

The run-time figures that @code{gprof} gives you are based on a sampling
process, so they are subject to statistical inaccuracy.  If a function runs
only a small amount of time, so that on the average the sampling process
ought to catch that function in the act only once, there is a pretty good
chance it will actually find that function zero times, or twice.

By contrast, the number-of-calls figures are derived by counting, not
sampling.  They are completely accurate and will not vary from run to run
if your program is deterministic.

The @dfn{sampling period} that is printed at the beginning of the flat
profile says how often samples are taken.  The rule of thumb is that a
run-time figure is accurate if it is considerably bigger than the sampling
period.

The actual amount of error is usually more than one sampling period.  In
fact, if a value is @var{n} times the sampling period, the @emph{expected}
error in it is the square-root of @var{n} sampling periods.  If the
sampling period is 0.01 seconds and @code{foo}'s run-time is 1 second, the
expected error in @code{foo}'s run-time is 0.1 seconds.  It is likely to
vary this much @emph{on the average} from one profiling run to the next.
(@emph{Sometimes} it will vary more.)

This does not mean that a small run-time figure is devoid of information.
If the program's @emph{total} run-time is large, a small run-time for one
function does tell you that that function used an insignificant fraction of
the whole program's time.  Usually this means it is not worth optimizing.

One way to get more accuracy is to give your program more (but similar)
input data so it will take longer.  Another way is to combine the data from
several runs, using the @samp{-s} option of @code{gprof}.  Here is how:

@enumerate
@item
Run your program once.

@item
Issue the command @samp{mv gmon.out gmon.sum}.

@item
Run your program again, the same as before.

@item
Merge the new data in @file{gmon.out} into @file{gmon.sum} with this command:

@example
gprof -s @var{executable-file} gmon.out gmon.sum
@end example

@item
Repeat the last two steps as often as you wish.

@item
Analyze the cumulative data using this command:

@example
gprof @var{executable-file} gmon.sum > @var{output-file}
@end example
@end enumerate

@node Assumptions, Incompatibilities, Sampling Error, Top
@chapter Estimating @code{children} Times Uses an Assumption

Some of the figures in the call graph are estimates---for example, the
@code{children} time values and all the time figures in caller and
subroutine lines.

There is no direct information about these measurements in the profile
data itself.  Instead, @code{gprof} estimates them by making an assumption
about your program that might or might not be true.

The assumption made is that the average time spent in each call to any
function @code{foo} is not correlated with who called @code{foo}.  If
@code{foo} used 5 seconds in all, and 2/5 of the calls to @code{foo} came
from @code{a}, then @code{foo} contributes 2 seconds to @code{a}'s
@code{children} time, by assumption.

This assumption is usually true enough, but for some programs it is far
from true.  Suppose that @code{foo} returns very quickly when its argument
is zero; suppose that @code{a} always passes zero as an argument, while
other callers of @code{foo} pass other arguments.  In this program, all the
time spent in @code{foo} is in the calls from callers other than @code{a}.
But @code{gprof} has no way of knowing this; it will blindly and
incorrectly charge 2 seconds of time in @code{foo} to the children of
@code{a}.

@c FIXME - has this been fixed?
We hope some day to put more complete data into @file{gmon.out}, so that
this assumption is no longer needed, if we can figure out how.  For the
nonce, the estimated figures are usually more useful than misleading.

@node Incompatibilities, , Assumptions, Top
@chapter Incompatibilities with Unix @code{gprof}

@sc{gnu} @code{gprof} and Berkeley Unix @code{gprof} use the same data
file @file{gmon.out}, and provide essentially the same information.  But
there are a few differences.

@itemize @bullet
@item
For a recursive function, Unix @code{gprof} lists the function as a
parent and as a child, with a @code{calls} field that lists the number
of recursive calls.  @sc{gnu} @code{gprof} omits these lines and puts
the number of recursive calls in the primary line.

@item
When a function is suppressed from the call graph with @samp{-e}, @sc{gnu}
@code{gprof} still lists it as a subroutine of functions that call it.

@ignore - it does this now
@item
The function names printed in @sc{gnu} @code{gprof} output do not include
the leading underscores that are added internally to the front of all
C identifiers on many operating systems.
@end ignore

@item
The blurbs, field widths, and output formats are different.  @sc{gnu}
@code{gprof} prints blurbs after the tables, so that you can see the
tables without skipping the blurbs.
@end itemize

The @file{gmon.out} file is written in the program's @emph{current working
directory} at the time it exits.  This means that if your program calls
@code{chdir}, the @file{gmon.out} file will be left in the last directory
your program @code{chdir}'d to.  If you don't have permission to write in
this directory, the file is not written.  You may get a confusing error
message if this happens.  (We have not yet replaced the part of Unix
responsible for this; when we do, we will make the error message
clearer.)

-d debugging...?  should this be documented?

-T - "traditional BSD style": How is it different?  Should the
differences be documented?

what is this about?  (and to think, I *wrote* it...)

The @samp{-c} option causes the static call-graph of the program to be
discovered by a heuristic which examines the text space of the object
file.  Static-only parents or children are indicated with call counts of
@samp{0}.

example flat file adds up to 100.01%...

note: time estimates now only go out to one decimal place (0.0), where
they used to extend two (78.67).