1 \input texinfo @c -*-texinfo-*-
2 @setfilename gprof.info
7 @c This is a dir.info fragment to support semi-automated addition of
8 @c manuals to an info tree. zoo@cygnus.com is developing this facility.
11 * gprof:: Profiling your program's execution
17 This file documents the gprof profiler of the GNU system.
19 Copyright (C) 1988, 1992 Free Software Foundation, Inc.
21 Permission is granted to make and distribute verbatim copies of
22 this manual provided the copyright notice and this permission notice
23 are preserved on all copies.
26 Permission is granted to process this file through TeX and print the
27 results, provided the printed document carries copying permission
28 notice identical to this one except for the removal of this paragraph
29 (this paragraph not being relevant to the printed manual).
32 Permission is granted to copy and distribute modified versions of this
33 manual under the conditions for verbatim copying, provided that the entire
34 resulting derived work is distributed under the terms of a permission
35 notice identical to this one.
37 Permission is granted to copy and distribute translations of this manual
38 into another language, under the above conditions for modified versions.
46 @subtitle The @sc{gnu} Profiler
47 @author Jay Fenlason and Richard Stallman
51 This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
52 can use it to determine which parts of a program are taking most of the
53 execution time. We assume that you know how to write, compile, and
54 execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
56 This manual was edited in January 1993 by Jeffrey Osier.
58 @vskip 0pt plus 1filll
59 Copyright @copyright{} 1988, 1992 Free Software Foundation, Inc.
61 Permission is granted to make and distribute verbatim copies of
62 this manual provided the copyright notice and this permission notice
63 are preserved on all copies.
66 Permission is granted to process this file through TeX and print the
67 results, provided the printed document carries copying permission
68 notice identical to this one except for the removal of this paragraph
69 (this paragraph not being relevant to the printed manual).
72 Permission is granted to copy and distribute modified versions of this
73 manual under the conditions for verbatim copying, provided that the entire
74 resulting derived work is distributed under the terms of a permission
75 notice identical to this one.
77 Permission is granted to copy and distribute translations of this manual
78 into another language, under the same conditions as for modified versions.
84 @top Profiling a Program: Where Does It Spend Its Time?
86 This manual describes the @sc{gnu} profiler, @code{gprof}, and how you
87 can use it to determine which parts of a program are taking most of the
88 execution time. We assume that you know how to write, compile, and
89 execute programs. @sc{gnu} @code{gprof} was written by Jay Fenlason.
92 * Why:: What profiling means, and why it is useful.
93 * Compiling:: How to compile your program for profiling.
94 * Executing:: How to execute your program to generate the
95 profile data file @file{gmon.out}.
96 * Invoking:: How to run @code{gprof}, and how to specify
99 * Flat Profile:: The flat profile shows how much time was spent
100 executing directly in each function.
101 * Call Graph:: The call graph shows which functions called which
102 others, and how much time each function used
103 when its subroutine calls are included.
105 * Implementation:: How the profile data is recorded and written.
106 * Sampling Error:: Statistical margins of error.
107 How to accumulate data from several runs
108 to make it more accurate.
110 * Assumptions:: Some of @code{gprof}'s measurements are based
111 on assumptions about your program
112 that could be very wrong.
114 * Incompatibilities:: Differences between GNU @code{gprof} and Unix @code{gprof}.
121 Profiling allows you to learn where your program spent its time and which
122 functions called which other functions while it was executing. This
123 information can show you which pieces of your program are slower than you
124 expected, and might be candidates for rewriting to make your program
125 execute faster. It can also tell you which functions are being called more
126 or less often than you expected. This may help you spot bugs that might
127 otherwise have gone unnoticed.
129 Since the profiler uses information collected during the actual execution
130 of your program, it can be used on programs that are too large or too
131 complex to analyze by reading the source. However, how your program is run
132 will affect the information that shows up in the profile data. If you
133 don't use some feature of your program while it is being profiled, no
134 profile information will be generated for that feature.
136 Profiling has several steps:
140 You must compile and link your program with profiling enabled.
144 You must execute your program to generate a profile data file.
148 You must run @code{gprof} to analyze the profile data.
152 The next three chapters explain these steps in greater detail.
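For example, a complete session might look like this (the program and file names here are only illustrative):

```
cc -o myprog myprog.c utils.c -g -pg
myprog
gprof myprog gmon.out > gprof.output
```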
154 The result of the analysis is a file containing two tables, the
155 @dfn{flat profile} and the @dfn{call graph} (plus blurbs which briefly
156 explain the contents of these tables).
158 The flat profile shows how much time your program spent in each function,
159 and how many times that function was called. If you simply want to know
160 which functions burn most of the cycles, it is stated concisely here.
163 The call graph shows, for each function, which functions called it, which
164 other functions it called, and how many times. There is also an estimate
165 of how much time was spent in the subroutines of each function. This can
166 suggest places where you might try to eliminate function calls that use a
167 lot of time. @xref{Call Graph}.
170 @chapter Compiling a Program for Profiling
172 The first step in generating profile information for your program is
173 to compile and link it with profiling enabled.
175 To compile a source file for profiling, specify the @samp{-pg} option when
176 you run the compiler. (This is in addition to the options you normally
179 To link the program for profiling, if you use a compiler such as @code{cc}
180 to do the linking, simply specify @samp{-pg} in addition to your usual
181 options. The same option, @samp{-pg}, alters either compilation or linking
182 to do what is necessary for profiling. Here are examples:
185 cc -g -c myprog.c utils.c -pg
186 cc -o myprog myprog.o utils.o -pg
189 The @samp{-pg} option also works with a command that both compiles and links:
192 cc -o myprog myprog.c utils.c -g -pg
195 If you run the linker @code{ld} directly instead of through a compiler
196 such as @code{cc}, you must specify the profiling startup file
197 @file{/lib/gcrt0.o} as the first input file instead of the usual startup
198 file @file{/lib/crt0.o}. In addition, you would probably want to
199 specify the profiling C library, @file{/usr/lib/libc_p.a}, by writing
200 @samp{-lc_p} instead of the usual @samp{-lc}. This is not absolutely
201 necessary, but doing this gives you number-of-calls information for
202 standard library functions such as @code{read} and @code{open}. For
206 ld -o myprog /lib/gcrt0.o myprog.o utils.o -lc_p
209 If you compile only some of the modules of the program with @samp{-pg}, you
210 can still profile the program, but you won't get complete information about
211 the modules that were compiled without @samp{-pg}. The only information
212 you get for the functions in those modules is the total time spent in them;
213 there is no record of how many times they were called, or from where. This
214 will not affect the flat profile (except that the @code{calls} field for
215 the functions will be blank), but will greatly reduce the usefulness of the call graph.
219 @chapter Executing the Program to Generate Profile Data
221 Once the program is compiled for profiling, you must run it in order to
222 generate the information that @code{gprof} needs. Simply run the program
223 as usual, using the normal arguments, file names, etc. The program should
224 run normally, producing the same output as usual. It will, however, run
225 somewhat slower than normal because of the time spent collecting and
226 writing the profile data.
228 The way you run the program---the arguments and input that you give
229 it---may have a dramatic effect on what the profile information shows. The
230 profile data will describe the parts of the program that were activated for
231 the particular input you use. For example, if the first command you give
232 to your program is to quit, the profile data will show the time used in
233 initialization and in cleanup, but not much else.
235 Your program will write the profile data into a file called @file{gmon.out}
236 just before exiting. If there is already a file called @file{gmon.out},
237 its contents are overwritten. There is currently no way to tell the
238 program to write the profile data under a different name, but you can rename
239 the file afterward if you are concerned that it may be overwritten.
241 In order to write the @file{gmon.out} file properly, your program must exit
242 normally: by returning from @code{main} or by calling @code{exit}. Calling
243 the low-level function @code{_exit} does not write the profile data, and
244 neither does abnormal termination due to an unhandled signal.
246 The @file{gmon.out} file is written in the program's @emph{current working
247 directory} at the time it exits. This means that if your program calls
248 @code{chdir}, the @file{gmon.out} file will be left in the last directory
249 your program @code{chdir}'d to. If you don't have permission to write in
250 this directory, the file is not written. You may get a confusing error
251 message if this happens. (We have not yet replaced the part of Unix
252 responsible for this; when we do, we will make the error message clearer.)
256 @chapter @code{gprof} Command Summary
258 After you have a profile data file @file{gmon.out}, you can run @code{gprof}
259 to interpret the information in it. The @code{gprof} program prints a
260 flat profile and a call graph on standard output. Typically you would
261 redirect the output of @code{gprof} into a file with @samp{>}.
263 You run @code{gprof} like this:
266 gprof @var{options} [@var{executable-file} [@var{profile-data-files}@dots{}]] [> @var{outfile}]
270 Here square brackets indicate optional arguments.
272 If you omit the executable file name, the file @file{a.out} is used. If
273 you give no profile data file name, the file @file{gmon.out} is used. If
274 any file is not in the proper format, or if the profile data file does not
275 appear to belong to the executable file, an error message is printed.
277 You can give more than one profile data file by entering all their names
278 after the executable file name; then the statistics in all the data files are summed together.
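For example, you can accumulate data from several runs by renaming @file{gmon.out} after each one (the program and input names here are placeholders):

```
myprog < input1
mv gmon.out gmon.run1
myprog < input2
mv gmon.out gmon.run2
gprof myprog gmon.run1 gmon.run2 > gprof.output
```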
281 The following options may be used to selectively include or exclude
282 functions in the output:
286 The @samp{-a} option causes @code{gprof} to suppress the printing of
287 statically declared (private) functions. (These are functions whose
288 names are not listed as global, and which are not visible outside the
289 file/function/block where they were defined.) Time spent in these
290 functions, calls to/from them, etc., will all be attributed to the
291 function that was loaded directly before it in the executable file.
292 @c This is compatible with Unix @code{gprof}, but a bad idea.
293 This option affects both the flat profile and the call graph.
295 @item -e @var{function_name}
296 The @samp{-e @var{function}} option tells @code{gprof} not to print
297 information about the function @var{function_name} (and its
298 children@dots{}) in the call graph. The function will still be listed
299 as a child of any functions that call it, but its index number will be
300 shown as @samp{[not printed]}. More than one @samp{-e} option may be
301 given; only one @var{function_name} may be indicated with each @samp{-e}
option.
304 @item -E @var{function_name}
305 The @code{-E @var{function}} option works like the @code{-e} option, but
306 time spent in the function (and children who were not called from
307 anywhere else) will not be used to compute the percentages-of-time for
308 the call graph. More than one @samp{-E} option may be given; only one
309 @var{function_name} may be indicated with each @samp{-E} option.
311 @item -f @var{function_name}
312 The @samp{-f @var{function}} option causes @code{gprof} to limit the
313 call graph to the function @var{function_name} and its children (and
314 their children@dots{}). More than one @samp{-f} option may be given;
315 only one @var{function_name} may be indicated with each @samp{-f} option.
318 @item -F @var{function_name}
319 The @samp{-F @var{function}} option works like the @code{-f} option, but
320 only time spent in the function and its children (and their
321 children@dots{}) will be used to determine total-time and
322 percentages-of-time for the call graph. More than one @samp{-F} option
323 may be given; only one @var{function_name} may be indicated with each
324 @samp{-F} option. The @samp{-F} option overrides the @samp{-E} option.
326 @item -k @var{from@dots{}} @var{to@dots{}}
327 The @samp{-k} option allows you to delete from the profile any arcs from
328 routine @var{from} to routine @var{to}.
331 If you give the @samp{-z} option, @code{gprof} will mention all
332 functions in the flat profile, even those that were never called, and
333 that had no time spent in them. This is useful in conjunction with the
334 @samp{-c} option for discovering which routines were never called.
337 The order of these options does not matter.
339 Note that only one function can be specified with each @code{-e},
340 @code{-E}, @code{-f} or @code{-F} option. To specify more than one
341 function, use multiple options. For example, this command:
344 gprof -e boring -f foo -f bar myprogram > gprof.output
348 lists in the call graph all functions that were reached from either
349 @code{foo} or @code{bar} and were not reachable from @code{boring}.
351 There are a few other useful @code{gprof} options:
355 If the @samp{-b} option is given, @code{gprof} doesn't print the
356 verbose blurbs that try to explain the meaning of all of the fields in
357 the tables. This is useful if you intend to print out the output, or
358 are tired of seeing the blurbs.
361 The @samp{-c} option causes the static call-graph of the program to be
362 discovered by a heuristic which examines the text space of the object
363 file. Static-only parents or children are indicated with call counts of @samp{0}.
367 The @samp{-d @var{num}} option specifies debugging options.
371 The @samp{-s} option causes @code{gprof} to summarize the information
372 in the profile data files it read in, and write out a profile data
373 file called @file{gmon.sum}, which contains all the information from
374 the profile data files that @code{gprof} read in. The file @file{gmon.sum}
375 may be one of the specified input files; the effect of this is to
376 merge the data in the other input files into @file{gmon.sum}.
377 @xref{Sampling Error}.
379 Eventually you can run @code{gprof} again without @samp{-s} to analyze the
380 cumulative data in the file @file{gmon.sum}.
383 The @samp{-T} option causes @code{gprof} to print its output in
384 ``traditional'' BSD style.
388 @chapter How to Understand the Flat Profile
391 The @dfn{flat profile} shows the total amount of time your program
392 spent executing each function. Unless the @samp{-z} option is given,
393 functions with no apparent time spent in them, and no apparent calls
394 to them, are not mentioned. Note that if a function was not compiled
395 for profiling, and didn't run long enough to show up on the program
396 counter histogram, it will be indistinguishable from a function that was never called.
399 This is part of a flat profile for a small program:
405 Each sample counts as 0.01 seconds.
406   %  cumulative    self               self    total
407  time   seconds   seconds    calls  ms/call  ms/call  name
408 33.34      0.02      0.02     7208     0.00     0.00  open
409 16.67      0.03      0.01      244     0.04     0.12  offtime
410 16.67      0.04      0.01        8     1.25     1.25  memccpy
411 16.67      0.05      0.01        7     1.43     1.43  write
412 16.67      0.06      0.01                             mcount
413  0.00      0.06      0.00      236     0.00     0.00  tzset
414  0.00      0.06      0.00      192     0.00     0.00  tolower
415  0.00      0.06      0.00       47     0.00     0.00  strlen
416  0.00      0.06      0.00       45     0.00     0.00  strchr
417  0.00      0.06      0.00        1     0.00    50.00  main
418  0.00      0.06      0.00        1     0.00     0.00  memcpy
419  0.00      0.06      0.00        1     0.00    10.11  print
420  0.00      0.06      0.00        1     0.00     0.00  profil
421  0.00      0.06      0.00        1     0.00    50.00  report
427 The functions are sorted by decreasing run-time spent in them. The
428 functions @samp{mcount} and @samp{profil} are part of the profiling
429 apparatus and appear in every flat profile; their time gives a measure of
430 the amount of overhead due to profiling.
432 The sampling period estimates the margin of error in each of the time
433 figures. A time figure that is not much larger than this is not
434 reliable. In this example, the @samp{self seconds} field for
435 @samp{mcount} might well be @samp{0} or @samp{0.04} in another run.
436 @xref{Sampling Error}, for a complete discussion.
438 Here is what the fields in each line mean:
442 This is the percentage of the total execution time your program spent
443 in this function. These should all add up to 100%.
445 @item cumulative seconds
446 This is the cumulative total number of seconds the computer spent
447 executing this function, plus the time spent in all the functions
448 above this one in this table.
451 This is the number of seconds accounted for by this function alone.
452 The flat profile listing is sorted first by this number.
455 This is the total number of times the function was called. If the
456 function was never called, or the number of times it was called cannot
457 be determined (probably because the function was not compiled with
458 profiling enabled), the @dfn{calls} field is blank.
461 This represents the average number of milliseconds spent in this
462 function per call, if this function is profiled. Otherwise, this field
463 is blank for this function.
466 This represents the average number of milliseconds spent in this
467 function and its descendants per call, if this function is profiled.
468 Otherwise, this field is blank for this function.
471 This is the name of the function. The flat profile is sorted by this
472 field alphabetically after the @dfn{self seconds} field is sorted.
476 @chapter How to Read the Call Graph
479 The @dfn{call graph} shows how much time was spent in each function
480 and its children. From this information, you can find functions that,
481 while they themselves may not have used much time, called other
482 functions that did use unusual amounts of time.
484 Here is a sample call graph from a small program. It came from the
485 same @code{gprof} run as the flat profile example in the previous chapter.
490 granularity: each sample hit covers 2 byte(s) for 20.00% of 0.05 seconds
492 index  % time    self  children    called     name
494 [1]     100.0    0.00      0.05                 start [1]
495                  0.00      0.05       1/1           main [2]
496                  0.00      0.00       1/2           on_exit [28]
497                  0.00      0.00       1/1           exit [59]
498 -----------------------------------------------
499                  0.00      0.05       1/1           start [1]
500 [2]     100.0    0.00      0.05       1         main [2]
501                  0.00      0.05       1/1           report [3]
502 -----------------------------------------------
503                  0.00      0.05       1/1           main [2]
504 [3]     100.0    0.00      0.05       1         report [3]
505                  0.00      0.03       8/8           timelocal [6]
506                  0.00      0.01       1/1           print [9]
507                  0.00      0.01       9/9           fgets [12]
508                  0.00      0.00      12/34          strncmp <cycle 1> [40]
509                  0.00      0.00       8/8           lookup [20]
510                  0.00      0.00       1/1           fopen [21]
511                  0.00      0.00       8/8           chewtime [24]
512                  0.00      0.00       8/16          skipspace [44]
513 -----------------------------------------------
514 [4]      59.8    0.01      0.02       8+472     <cycle 2 as a whole> [4]
515                  0.01      0.02     244+260        offtime <cycle 2> [7]
516                  0.00      0.00     236+1          tzset <cycle 2> [26]
517 -----------------------------------------------
521 The lines full of dashes divide this table into @dfn{entries}, one for each
522 function. Each entry has one or more lines.
524 In each entry, the primary line is the one that starts with an index number
525 in square brackets. The end of this line says which function the entry is
526 for. The preceding lines in the entry describe the callers of this
527 function and the following lines describe its subroutines (also called
528 @dfn{children} when we speak of the call graph).
530 The entries are sorted by time spent in the function and its subroutines.
532 The internal profiling function @code{mcount} (@pxref{Flat Profile})
533 is never mentioned in the call graph.
536 * Primary:: Details of the primary line's contents.
537 * Callers:: Details of caller-lines' contents.
538 * Subroutines:: Details of subroutine-lines' contents.
539 * Cycles:: When there are cycles of recursion,
540 such as @code{a} calls @code{b} calls @code{a}@dots{}
544 @section The Primary Line
546 The @dfn{primary line} in a call graph entry is the line that
547 describes the function which the entry is about and gives the overall
548 statistics for this function.
550 For reference, we repeat the primary line from the entry for function
551 @code{report} in our main example, together with the heading line that
552 shows the names of the fields:
556 index  % time    self  children    called     name
558 [3]     100.0    0.00      0.05       1         report [3]
562 Here is what the fields in the primary line mean:
566 Entries are numbered with consecutive integers. Each function
567 therefore has an index number, which appears at the beginning of its primary line.
570 Each cross-reference to a function, as a caller or subroutine of
571 another, gives its index number as well as its name. The index number
572 guides you if you wish to look for the entry for that function.
575 This is the percentage of the total time that was spent in this
576 function, including time spent in subroutines called from this function.
579 The time spent in this function is counted again for the callers of
580 this function. Therefore, adding up these percentages is meaningless.
583 This is the total amount of time spent in this function. This
584 should be identical to the number printed in the @code{seconds} field
585 for this function in the flat profile.
588 This is the total amount of time spent in the subroutine calls made by
589 this function. This should be equal to the sum of all the @code{self}
590 and @code{children} entries of the children listed directly below this function.
594 This is the number of times the function was called.
596 If the function called itself recursively, there are two numbers,
597 separated by a @samp{+}. The first number counts non-recursive calls,
598 and the second counts recursive calls.
600 In the example above, the function @code{report} was called once from @code{main}.
604 This is the name of the current function. The index number is repeated after it.
607 If the function is part of a cycle of recursion, the cycle number is
608 printed between the function's name and the index number
609 (@pxref{Cycles}). For example, if function @code{gnurr} is part of
610 cycle number one, and has index number twelve, its primary line would end with @samp{gnurr <cycle 1> [12]}.
618 @node Callers, Subroutines, Primary, Call Graph
619 @section Lines for a Function's Callers
621 A function's entry has a line for each function it was called by.
622 These lines' fields correspond to the fields of the primary line, but
623 their meanings are different because of the difference in context.
625 For reference, we repeat two lines from the entry for the function
626 @code{report}, the primary line and one caller-line preceding it, together
627 with the heading line that shows the names of the fields:
630 index  % time    self  children    called     name
632                  0.00      0.05       1/1           main [2]
633 [3]     100.0    0.00      0.05       1         report [3]
636 Here are the meanings of the fields in the caller-line for @code{report}
637 called from @code{main}:
641 An estimate of the amount of time spent in @code{report} itself when it was
642 called from @code{main}.
645 An estimate of the amount of time spent in subroutines of @code{report}
646 when @code{report} was called from @code{main}.
648 The sum of the @code{self} and @code{children} fields is an estimate
649 of the amount of time spent within calls to @code{report} from @code{main}.
652 Two numbers: the number of times @code{report} was called from @code{main},
653 followed by the total number of nonrecursive calls to @code{report} from all its callers.
656 @item name and index number
657 The name of the caller of @code{report} to which this line applies,
658 followed by the caller's index number.
660 Not all functions have entries in the call graph; some
661 options to @code{gprof} request the omission of certain functions.
662 When a caller has no entry of its own, it still has caller-lines
663 in the entries of the functions it calls.
665 If the caller is part of a recursion cycle, the cycle number is
666 printed between the name and the index number.
669 If the identity of the callers of a function cannot be determined, a
670 dummy caller-line is printed which has @samp{<spontaneous>} as the
671 ``caller's name'' and all other fields blank. This can happen for signal handlers.
673 @c What if some calls have determinable callers' names but not all?
674 @c FIXME - still relevant?
676 @node Subroutines, Cycles, Callers, Call Graph
677 @section Lines for a Function's Subroutines
679 A function's entry has a line for each of its subroutines---in other
680 words, a line for each other function that it called. These lines'
681 fields correspond to the fields of the primary line, but their meanings
682 are different because of the difference in context.
684 For reference, we repeat two lines from the entry for the function
685 @code{main}, the primary line and a line for a subroutine, together
686 with the heading line that shows the names of the fields:
689 index  % time    self  children    called     name
691 [2]     100.0    0.00      0.05       1         main [2]
692                  0.00      0.05       1/1           report [3]
695 Here are the meanings of the fields in the subroutine-line for @code{main}
696 calling @code{report}:
700 An estimate of the amount of time spent directly within @code{report}
701 when @code{report} was called from @code{main}.
704 An estimate of the amount of time spent in subroutines of @code{report}
705 when @code{report} was called from @code{main}.
707 The sum of the @code{self} and @code{children} fields is an estimate
708 of the total time spent in calls to @code{report} from @code{main}.
711 Two numbers, the number of calls to @code{report} from @code{main}
712 followed by the total number of nonrecursive calls to @code{report}.
715 The name of the subroutine of @code{main} to which this line applies,
716 followed by the subroutine's index number.
718 If the caller is part of a recursion cycle, the cycle number is
719 printed between the name and the index number.
722 @node Cycles,, Subroutines, Call Graph
723 @section How Mutually Recursive Functions Are Described
725 @cindex recursion cycle
727 The graph may be complicated by the presence of @dfn{cycles of
728 recursion} in the call graph. A cycle exists if a function calls
729 another function that (directly or indirectly) calls (or appears to
730 call) the original function. For example: if @code{a} calls @code{b},
731 and @code{b} calls @code{a}, then @code{a} and @code{b} form a cycle.
733 Whenever there are call-paths both ways between a pair of functions, they
734 belong to the same cycle. If @code{a} and @code{b} call each other and
735 @code{b} and @code{c} call each other, all three make one cycle. Note that
736 even if @code{b} only calls @code{a} when it was not called from @code{a},
737 @code{gprof} cannot determine this, so @code{a} and @code{b} are still considered a cycle.
740 The cycles are numbered with consecutive integers. When a function
741 belongs to a cycle, each time the function name appears in the call graph
742 it is followed by @samp{<cycle @var{number}>}.
744 The reason cycles matter is that they make the time values in the call
745 graph paradoxical. The ``time spent in children'' of @code{a} should
746 include the time spent in its subroutine @code{b} and in @code{b}'s
747 subroutines---but one of @code{b}'s subroutines is @code{a}! How much of
748 @code{a}'s time should be included in the children of @code{a}, when
749 @code{a} is indirectly recursive?
751 The way @code{gprof} resolves this paradox is by creating a single entry
752 for the cycle as a whole. The primary line of this entry describes the
753 total time spent directly in the functions of the cycle. The
754 ``subroutines'' of the cycle are the individual functions of the cycle, and
755 all other functions that were called directly by them. The ``callers'' of
756 the cycle are the functions, outside the cycle, that called functions in the cycle.
759 Here is an example portion of a call graph which shows a cycle containing
760 functions @code{a} and @code{b}. The cycle was entered by a call to
761 @code{a} from @code{main}; both @code{a} and @code{b} called @code{c}.
764 index  % time    self  children    called     name
765 ----------------------------------------
767 [3]     91.71    1.77      0        1+5       <cycle 1 as a whole> [3]
768                  1.02      0           3          b <cycle 1> [4]
769                  0.75      0           2          a <cycle 1> [5]
770 ----------------------------------------
772 [4]     52.85    1.02      0        0         b <cycle 1> [4]
775 ----------------------------------------
778 [5]     38.86    0.75      0        1         a <cycle 1> [5]
781 ----------------------------------------
785 (The entire call graph for this program contains in addition an entry for
786 @code{main}, which calls @code{a}, and an entry for @code{c}, with callers
787 @code{a} and @code{b}.)
790 index  % time    self  children    called     name
792 [1]    100.00    0         1.93     0         start [1]
793                  0.16      1.77        1/1        main [2]
794 ----------------------------------------
795                  0.16      1.77        1/1        start [1]
796 [2]    100.00    0.16      1.77     1         main [2]
797                  1.77      0           1/1        a <cycle 1> [5]
798 ----------------------------------------
800 [3]     91.71    1.77      0        1+5       <cycle 1 as a whole> [3]
801                  1.02      0           3          b <cycle 1> [4]
802                  0.75      0           2          a <cycle 1> [5]
804 ----------------------------------------
806 [4]     52.85    1.02      0        0         b <cycle 1> [4]
809 ----------------------------------------
812 [5]     38.86    0.75      0        1         a <cycle 1> [5]
815 ----------------------------------------
816                  0         0           3/6        b <cycle 1> [4]
817                  0         0           3/6        a <cycle 1> [5]
819 ----------------------------------------
822 The @code{self} field of the cycle's primary line is the total time
823 spent in all the functions of the cycle. It equals the sum of the
824 @code{self} fields for the individual functions in the cycle, found
825 in the cycle's entry, in the subroutine lines for these functions.
827 The @code{children} fields of the cycle's primary line and subroutine lines
828 count only subroutines outside the cycle. Even though @code{a} calls
829 @code{b}, the time spent in those calls to @code{b} is not counted in
830 @code{a}'s @code{children} time. Thus, we do not encounter the problem of
831 what to do when the time in those calls to @code{b} includes indirect
832 recursive calls back to @code{a}.
834 The @code{children} field of a caller-line in the cycle's entry estimates
835 the amount of time spent @emph{in the whole cycle}, and its other
836 subroutines, on the times when that caller called a function in the cycle.
838 The @code{calls} field in the primary line for the cycle has two numbers:
839 first, the number of times functions in the cycle were called by functions
840 outside the cycle; second, the number of times they were called by
841 functions in the cycle (including times when a function in the cycle calls
842 itself). This is a generalization of the usual split into nonrecursive and recursive calls.
845 The @code{calls} field of a subroutine-line for a cycle member in the
846 cycle's entry says how many times that function was called from functions in
847 the cycle. The total of all these is the second number in the primary line's @code{calls} field.
In the individual entry for a function in a cycle, the other functions in
the same cycle can appear as subroutines and as callers.  These lines show
how many times each function in the cycle called or was called from each other
function in the cycle.  The @code{self} and @code{children} fields in these
lines are blank because of the difficulty of defining meanings for them
when recursion is going on.
@node Implementation, Sampling Error, Call Graph, Top
@chapter Implementation of Profiling
Profiling works by changing how every function in your program is compiled
so that when it is called, it will stash away some information about where
it was called from.  From this, the profiler can figure out what function
called it, and can count how many times it was called.  This change is made
by the compiler when your program is compiled with the @samp{-pg} option.
Profiling also involves watching your program as it runs, and keeping a
histogram of where the program counter happens to be every now and then.
Typically the program counter is looked at around 100 times per second of
run time, but the exact frequency may vary from system to system.
A special startup routine allocates memory for the histogram and sets up
a clock signal handler to make entries in it.  Use of this special
startup routine is one of the effects of using @samp{gcc @dots{} -pg} to
link.  The startup file also includes an @samp{exit} function which is
responsible for writing the file @file{gmon.out}.
Number-of-calls information for library routines is collected by using a
special version of the C library.  The programs in it are the same as in
the usual C library, but they were compiled with @samp{-pg}.  If you
link your program with @samp{gcc @dots{} -pg}, it automatically uses the
profiling version of the library.
The output from @code{gprof} gives no indication of parts of your program that
are limited by I/O or swapping bandwidth.  This is because samples of the
program counter are taken at fixed intervals of run time.  Therefore, the
time measurements in @code{gprof} output say nothing about time that your
program was not running.  For example, a part of the program that creates
so much data that it cannot all fit in physical memory at once may run very
slowly due to thrashing, but @code{gprof} will say it uses little time.  On
the other hand, sampling by run time has the advantage that the amount of
load due to other users won't directly affect the output you get.
@node Sampling Error, Assumptions, Implementation, Top
@chapter Statistical Inaccuracy of @code{gprof} Output
The run-time figures that @code{gprof} gives you are based on a sampling
process, so they are subject to statistical inaccuracy.  If a function runs
only a small amount of time, so that on the average the sampling process
ought to catch that function in the act only once, there is a pretty good
chance it will actually find that function zero times, or twice.
By contrast, the number-of-calls figures are derived by counting, not
sampling.  They are completely accurate and will not vary from run to run
if your program is deterministic.
The @dfn{sampling period} that is printed at the beginning of the flat
profile says how often samples are taken.  The rule of thumb is that a
run-time figure is accurate if it is considerably bigger than the sampling
period.
The actual amount of error is usually more than one sampling period.  In
fact, if a value is @var{n} times the sampling period, the @emph{expected}
error in it is the square root of @var{n} sampling periods.  If the
sampling period is 0.01 seconds and @code{foo}'s run-time is 1 second, the
expected error in @code{foo}'s run-time is 0.1 seconds.  It is likely to
vary this much @emph{on the average} from one profiling run to the next.
(@emph{Sometimes} it will vary more.)
This does not mean that a small run-time figure is devoid of information.
If the program's @emph{total} run-time is large, a small run-time for one
function does tell you that that function used an insignificant fraction of
the whole program's time.  Usually this means it is not worth optimizing.
One way to get more accuracy is to give your program more (but similar)
input data so it will take longer.  Another way is to combine the data from
several runs, using the @samp{-s} option of @code{gprof}.  Here is how:

@enumerate
@item
Run your program once.

@item
Issue the command @samp{mv gmon.out gmon.sum}.

@item
Run your program again, the same as before.

@item
Merge the new data in @file{gmon.out} into @file{gmon.sum} with this command:

@example
gprof -s @var{executable-file} gmon.out gmon.sum
@end example

@item
Repeat the last two steps as often as you wish.

@item
Analyze the cumulative data using this command:

@example
gprof @var{executable-file} gmon.sum > @var{output-file}
@end example
@end enumerate
@node Assumptions, Incompatibilities, Sampling Error, Top
@chapter Estimating @code{children} Times Uses an Assumption
Some of the figures in the call graph are estimates---for example, the
@code{children} time values and all the time figures in caller and
subroutine lines.

There is no direct information about these measurements in the profile
data itself.  Instead, @code{gprof} estimates them by making an assumption
about your program that might or might not be true.
The assumption made is that the average time spent in each call to any
function @code{foo} is not correlated with who called @code{foo}.  If
@code{foo} used 5 seconds in all, and 2/5 of the calls to @code{foo} came
from @code{a}, then @code{foo} contributes 2 seconds to @code{a}'s
@code{children} time, by assumption.
This assumption is usually true enough, but for some programs it is far
from true.  Suppose that @code{foo} returns very quickly when its argument
is zero; suppose that @code{a} always passes zero as an argument, while
other callers of @code{foo} pass other arguments.  In this program, all the
time spent in @code{foo} is in the calls from callers other than @code{a}.
But @code{gprof} has no way of knowing this; it will blindly and
incorrectly charge 2 seconds of time in @code{foo} to the children of
@code{a}.
@c FIXME - has this been fixed?
We hope some day to put more complete data into @file{gmon.out}, so that
this assumption is no longer needed, if we can figure out how.  For the
nonce, the estimated figures are usually more useful than misleading.
@node Incompatibilities, , Assumptions, Top
@chapter Incompatibilities with Unix @code{gprof}

@sc{gnu} @code{gprof} and Berkeley Unix @code{gprof} use the same data
file @file{gmon.out}, and provide essentially the same information.  But
there are a few differences.
@itemize @bullet
@item
For a recursive function, Unix @code{gprof} lists the function as a
parent and as a child, with a @code{calls} field that lists the number
of recursive calls.  @sc{gnu} @code{gprof} omits these lines and puts
the number of recursive calls in the primary line.

@item
When a function is suppressed from the call graph with @samp{-e}, @sc{gnu}
@code{gprof} still lists it as a subroutine of functions that call it.

@ignore - it does this now
@item
The function names printed in @sc{gnu} @code{gprof} output do not include
the leading underscores that are added internally to the front of all
C identifiers on many operating systems.
@end ignore

@item
The blurbs, field widths, and output formats are different.  @sc{gnu}
@code{gprof} prints blurbs after the tables, so that you can see the
tables without skipping the blurbs.
@end itemize
The @file{gmon.out} file is written in the program's @emph{current working
directory} at the time it exits.  This means that if your program calls
@code{chdir}, the @file{gmon.out} file will be left in the last directory
your program @code{chdir}'d to.  If you don't have permission to write in
this directory, the file is not written.  You may get a confusing error
message if this happens.  (We have not yet replaced the part of Unix
responsible for this; when we do, we will make the error message
clearer.)
@ignore
-d debugging...?  should this be documented?

-T - "traditional BSD style": How is it different?  Should the
differences be documented?

what is this about?  (and to think, I *wrote* it...)

The @samp{-c} option causes the static call-graph of the program to be
discovered by a heuristic which examines the text space of the object
file.  Static-only parents or children are indicated with call counts of
@samp{0}.

example flat file adds up to 100.01%...

note: time estimates now only go out to one decimal place (0.0), where
they used to extend two (78.67).
@end ignore