1 # Advanced googletest Topics
3 <!-- GOOGLETEST_CM0016 DO NOT DELETE -->
7 Now that you have read the [googletest Primer](primer.md) and learned how to
8 write tests using googletest, it's time to learn some new tricks. This document
9 will show you more assertions as well as how to construct complex failure
10 messages, propagate fatal failures, reuse and speed up your test fixtures, and
11 use various flags with your tests.
This section covers some less frequently used, but still significant,
assertions.
18 ### Explicit Success and Failure
20 These three assertions do not actually test a value or expression. Instead, they
21 generate a success or failure directly. Like the macros that actually perform a
22 test, you may stream a custom failure message into them.
28 Generates a success. This does **NOT** make the overall test succeed. A test is
29 considered successful only if none of its assertions fail during its execution.
NOTE: `SUCCEED()` is purely documentary and currently doesn't generate any
user-visible output. However, we may add `SUCCEED()` messages to googletest's
output in the future.
FAIL();
ADD_FAILURE();
ADD_FAILURE_AT("file_path", line_number);
41 `FAIL()` generates a fatal failure, while `ADD_FAILURE()` and `ADD_FAILURE_AT()`
42 generate a nonfatal failure. These are useful when control flow, rather than a
43 Boolean expression, determines the test's success or failure. For example, you
44 might want to write something like:
51 ... some other checks ...
53 FAIL() << "We shouldn't get here.";
57 NOTE: you can only use `FAIL()` in functions that return `void`. See the
58 [Assertion Placement section](#assertion-placement) for more information.
60 ### Exception Assertions
62 These are for verifying that a piece of code throws (or does not throw) an
63 exception of the given type:
65 Fatal assertion | Nonfatal assertion | Verifies
66 ------------------------------------------ | ------------------------------------------ | --------
67 `ASSERT_THROW(statement, exception_type);` | `EXPECT_THROW(statement, exception_type);` | `statement` throws an exception of the given type
68 `ASSERT_ANY_THROW(statement);` | `EXPECT_ANY_THROW(statement);` | `statement` throws an exception of any type
69 `ASSERT_NO_THROW(statement);` | `EXPECT_NO_THROW(statement);` | `statement` doesn't throw any exception
74 ASSERT_THROW(Foo(5), bar_exception);
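For instance, a complete test might look like this (a minimal sketch;
`ThrowingFunc` is a hypothetical function that throws on negative input):

```c++
#include <stdexcept>
#include "gtest/gtest.h"

void ThrowingFunc(int n) {
  if (n < 0) throw std::runtime_error("negative input");
}

TEST(ExceptionTest, ThrowsOnNegativeInput) {
  EXPECT_THROW(ThrowingFunc(-1), std::runtime_error);  // an exception of this type
  EXPECT_ANY_THROW(ThrowingFunc(-1));                  // an exception of any type
  EXPECT_NO_THROW(ThrowingFunc(1));                    // no exception at all
}
```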
82 **Availability**: requires exceptions to be enabled in the build environment
84 ### Predicate Assertions for Better Error Messages
Even though googletest has a rich set of assertions, they can never be complete,
as it's neither possible nor advisable to anticipate all the scenarios a user
might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check
a complex expression, for lack of a better macro. This has the problem of not
showing you the values of the parts of the expression, making it hard to
understand what went wrong. As a workaround, some users construct the failure
message themselves and stream it into `EXPECT_TRUE()`. However, this is awkward,
especially when the expression has side effects or is expensive to
96 googletest gives you three different options to solve this problem:
98 #### Using an Existing Boolean Function
100 If you already have a function or functor that returns `bool` (or a type that
101 can be implicitly converted to `bool`), you can use it in a *predicate
102 assertion* to get the function arguments printed for free:
104 <!-- mdformat off(github rendering does not support multiline tables) -->
106 | Fatal assertion | Nonfatal assertion | Verifies |
107 | --------------------------------- | --------------------------------- | --------------------------- |
108 | `ASSERT_PRED1(pred1, val1)` | `EXPECT_PRED1(pred1, val1)` | `pred1(val1)` is true |
| `ASSERT_PRED2(pred2, val1, val2)` | `EXPECT_PRED2(pred2, val1, val2)` | `pred2(val1, val2)` is true |
110 | `...` | `...` | `...` |
113 In the above, `predn` is an `n`-ary predicate function or functor, where `val1`,
114 `val2`, ..., and `valn` are its arguments. The assertion succeeds if the
115 predicate returns `true` when applied to the given arguments, and fails
116 otherwise. When the assertion fails, it prints the value of each argument. In
117 either case, the arguments are evaluated exactly once.
119 Here's an example. Given
122 // Returns true if m and n have no common divisors except 1.
123 bool MutuallyPrime(int m, int n) { ... }
133 EXPECT_PRED2(MutuallyPrime, a, b);
136 will succeed, while the assertion
139 EXPECT_PRED2(MutuallyPrime, b, c);
142 will fail with the message
MutuallyPrime(b, c) is false, where
b is 4
c is 10
152 > 1. If you see a compiler error "no matching function to call" when using
153 > `ASSERT_PRED*` or `EXPECT_PRED*`, please see
154 > [this](faq.md#the-compiler-complains-no-matching-function-to-call-when-i-use-assert-pred-how-do-i-fix-it)
155 > for how to resolve it.
157 #### Using a Function That Returns an AssertionResult
While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not
satisfactory: you have to use different macros for different arities, and it
feels more like Lisp than C++. The `::testing::AssertionResult` class solves
this problem.
164 An `AssertionResult` object represents the result of an assertion (whether it's
165 a success or a failure, and an associated message). You can create an
166 `AssertionResult` using one of these factory functions:
// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();
You can then use the `<<` operator to stream messages to the `AssertionResult`
object.
185 To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
186 write a predicate function that returns `AssertionResult` instead of `bool`. For
187 example, if you define `IsEven()` as:
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess();
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
206 the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:
Value of: IsEven(Fib(4))
  Actual: false (3 is odd)
Expected: true
214 instead of a more opaque
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
222 If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well
223 (one third of Boolean assertions in the Google code base are negative ones), and
are fine with making the predicate slower in the success case, you can supply a
success message:
::testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return ::testing::AssertionSuccess() << n << " is even";
  else
    return ::testing::AssertionFailure() << n << " is odd";
}
236 Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print
Value of: IsEven(Fib(6))
  Actual: true (8 is even)
Expected: false
244 #### Using a Predicate-Formatter
246 If you find the default message generated by `(ASSERT|EXPECT)_PRED*` and
247 `(ASSERT|EXPECT)_(TRUE|FALSE)` unsatisfactory, or some arguments to your
248 predicate do not support streaming to `ostream`, you can instead use the
following *predicate-formatter assertions* to *fully* customize how the message
is formatted:
252 Fatal assertion | Nonfatal assertion | Verifies
253 ------------------------------------------------ | ------------------------------------------------ | --------
254 `ASSERT_PRED_FORMAT1(pred_format1, val1);` | `EXPECT_PRED_FORMAT1(pred_format1, val1);` | `pred_format1(val1)` is successful
255 `ASSERT_PRED_FORMAT2(pred_format2, val1, val2);` | `EXPECT_PRED_FORMAT2(pred_format2, val1, val2);` | `pred_format2(val1, val2)` is successful
258 The difference between this and the previous group of macros is that instead of
259 a predicate, `(ASSERT|EXPECT)_PRED_FORMAT*` take a *predicate-formatter*
260 (`pred_formatn`), which is a function or functor with the signature:
::testing::AssertionResult PredicateFormattern(const char* expr1,
                                               const char* expr2,
                                               ...,
                                               const char* exprn,
                                               T1 val1, T2 val2, ..., Tn valn);
273 where `val1`, `val2`, ..., and `valn` are the values of the predicate arguments,
274 and `expr1`, `expr2`, ..., and `exprn` are the corresponding expressions as they
275 appear in the source code. The types `T1`, `T2`, ..., and `Tn` can be either
276 value types or reference types. For example, if an argument has type `Foo`, you
277 can declare it as either `Foo` or `const Foo&`, whichever is appropriate.
279 As an example, let's improve the failure message in `MutuallyPrime()`, which was
280 used with `EXPECT_PRED2()`:
283 // Returns the smallest prime common divisor of m and n,
284 // or 1 when m and n are mutually prime.
285 int SmallestPrimeCommonDivisor(int m, int n) { ... }
// A predicate-formatter for asserting that two integers are mutually prime.
::testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                               const char* n_expr,
                                               int m,
                                               int n) {
  if (MutuallyPrime(m, n)) return ::testing::AssertionSuccess();

  return ::testing::AssertionFailure() << m_expr << " and " << n_expr
      << " (" << m << " and " << n << ") are not mutually prime, "
      << "as they have a common divisor " << SmallestPrimeCommonDivisor(m, n);
}
300 With this predicate-formatter, we can use
303 EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c);
306 to generate the message
309 b and c (4 and 10) are not mutually prime, as they have a common divisor 2.
312 As you may have realized, many of the built-in assertions we introduced earlier
313 are special cases of `(EXPECT|ASSERT)_PRED_FORMAT*`. In fact, most of them are
314 indeed defined using `(EXPECT|ASSERT)_PRED_FORMAT*`.
316 ### Floating-Point Comparison
Comparing floating-point numbers is tricky. Due to round-off errors, it is very
unlikely that two floating-point values will match exactly. Therefore,
`ASSERT_EQ`'s naive comparison usually doesn't work. And since floating-point
values can have a wide value range, no single fixed error bound works. It's
better to compare by a fixed relative error bound, except for values close to
0, due to the loss of precision there.
325 In general, for floating-point comparison to make sense, the user needs to
326 carefully choose the error bound. If they don't want or care to, comparing in
327 terms of Units in the Last Place (ULPs) is a good default, and googletest
328 provides assertions to do this. Full details about ULPs are quite long; if you
329 want to learn more, see
330 [here](https://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/).
332 #### Floating-Point Macros
334 <!-- mdformat off(github rendering does not support multiline tables) -->
336 | Fatal assertion | Nonfatal assertion | Verifies |
337 | ------------------------------- | ------------------------------- | ---------------------------------------- |
338 | `ASSERT_FLOAT_EQ(val1, val2);` | `EXPECT_FLOAT_EQ(val1, val2);` | the two `float` values are almost equal |
339 | `ASSERT_DOUBLE_EQ(val1, val2);` | `EXPECT_DOUBLE_EQ(val1, val2);` | the two `double` values are almost equal |
By "almost equal" we mean the values are within 4 ULPs of each other.
345 The following assertions allow you to choose the acceptable error bound:
347 <!-- mdformat off(github rendering does not support multiline tables) -->
349 | Fatal assertion | Nonfatal assertion | Verifies |
350 | ------------------------------------- | ------------------------------------- | -------------------------------------------------------------------------------- |
351 | `ASSERT_NEAR(val1, val2, abs_error);` | `EXPECT_NEAR(val1, val2, abs_error);` | the difference between `val1` and `val2` doesn't exceed the given absolute error |
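To make the difference concrete, here is a minimal sketch: summing `0.1` ten
times accumulates round-off error, so an exact comparison fails while the
ULP-based and absolute-error assertions pass.

```c++
#include "gtest/gtest.h"

TEST(FloatingPointTest, SumOfTenths) {
  double sum = 0.0;
  for (int i = 0; i < 10; ++i) sum += 0.1;

  // EXPECT_EQ(sum, 1.0) would fail: sum ends up about one ULP below 1.0.
  EXPECT_DOUBLE_EQ(sum, 1.0);   // passes: within 4 ULPs
  EXPECT_NEAR(sum, 1.0, 1e-9);  // passes: within the given absolute error
}
```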
355 #### Floating-Point Predicate-Format Functions
357 Some floating-point operations are useful, but not that often used. In order to
358 avoid an explosion of new macros, we provide them as predicate-format functions
that can be used in predicate assertion macros (e.g. `EXPECT_PRED_FORMAT2`,
etc.):
363 EXPECT_PRED_FORMAT2(::testing::FloatLE, val1, val2);
364 EXPECT_PRED_FORMAT2(::testing::DoubleLE, val1, val2);
Verifies that `val1` is less than, or almost equal to, `val2`. You can replace
`EXPECT_PRED_FORMAT2` in the examples above with `ASSERT_PRED_FORMAT2`.
370 ### Asserting Using gMock Matchers
372 [gMock](../../googlemock) comes with a library of matchers for validating
373 arguments passed to mock objects. A gMock *matcher* is basically a predicate
374 that knows how to describe itself. It can be used in these assertion macros:
376 <!-- mdformat off(github rendering does not support multiline tables) -->
378 | Fatal assertion | Nonfatal assertion | Verifies |
379 | ------------------------------ | ------------------------------ | --------------------- |
380 | `ASSERT_THAT(value, matcher);` | `EXPECT_THAT(value, matcher);` | value matches matcher |
384 For example, `StartsWith(prefix)` is a matcher that matches a string starting
385 with `prefix`, and you can write:
388 using ::testing::StartsWith;
390 // Verifies that Foo() returns a string starting with "Hello".
391 EXPECT_THAT(Foo(), StartsWith("Hello"));
395 [recipe](../../googlemock/docs/cook_book.md#using-matchers-in-googletest-assertions)
396 in the gMock Cookbook for more details.
398 gMock has a rich set of matchers. You can do many things googletest cannot do
399 alone with them. For a list of matchers gMock provides, read
[this](../../googlemock/docs/cook_book.md#using-matchers). It's easy to write
401 your [own matchers](../../googlemock/docs/cook_book.md#NewMatchers) too.
gMock is bundled with googletest, so you don't need to add any build dependency
in order to take advantage of this. Just include `"gmock/gmock.h"` and you're
ready to go.
407 ### More String Assertions
(Please read the [previous](#asserting-using-gmock-matchers) section first if
you haven't.)
412 You can use the gMock
413 [string matchers](../../googlemock/docs/cheat_sheet.md#string-matchers) with
414 `EXPECT_THAT()` or `ASSERT_THAT()` to do more string comparison tricks
(sub-string, prefix, suffix, regular expression, etc.). For example,
418 using ::testing::HasSubstr;
419 using ::testing::MatchesRegex;
421 ASSERT_THAT(foo_string, HasSubstr("needle"));
422 EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
425 If the string contains a well-formed HTML or XML document, you can check whether
426 its DOM tree matches an
427 [XPath expression](http://www.w3.org/TR/xpath/#contents):
430 // Currently still in //template/prototemplate/testing:xpath_matcher
431 #include "template/prototemplate/testing/xpath_matcher.h"
432 using prototemplate::testing::MatchesXPath;
433 EXPECT_THAT(html_string, MatchesXPath("//a[text()='click here']"));
436 ### Windows HRESULT assertions
438 These assertions test for `HRESULT` success or failure.
440 Fatal assertion | Nonfatal assertion | Verifies
441 -------------------------------------- | -------------------------------------- | --------
442 `ASSERT_HRESULT_SUCCEEDED(expression)` | `EXPECT_HRESULT_SUCCEEDED(expression)` | `expression` is a success `HRESULT`
443 `ASSERT_HRESULT_FAILED(expression)` | `EXPECT_HRESULT_FAILED(expression)` | `expression` is a failure `HRESULT`
445 The generated output contains the human-readable error message associated with
446 the `HRESULT` code returned by `expression`.
448 You might use them like this:
451 CComPtr<IShellDispatch2> shell;
452 ASSERT_HRESULT_SUCCEEDED(shell.CoCreateInstance(L"Shell.Application"));
454 ASSERT_HRESULT_SUCCEEDED(shell->ShellExecute(CComBSTR(url), empty, empty, empty, empty));
459 You can call the function
462 ::testing::StaticAssertTypeEq<T1, T2>();
to assert that types `T1` and `T2` are the same. The function does nothing if
the assertion is satisfied. If the types are different, the function call will
fail to compile, and the compiler error message will say that
`type1 and type2 are not the same type`; most likely (depending on the
compiler) it will also show you the actual values of `T1` and `T2`. This is
mainly useful inside template code.
472 **Caveat**: When used inside a member function of a class template or a function
473 template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is
474 instantiated. For example, given:
477 template <typename T> class Foo {
479 void Bar() { ::testing::StaticAssertTypeEq<int, T>(); }
486 void Test1() { Foo<bool> foo; }
489 will not generate a compiler error, as `Foo<bool>::Bar()` is never actually
490 instantiated. Instead, you need:
493 void Test2() { Foo<bool> foo; foo.Bar(); }
496 to cause a compiler error.
498 ### Assertion Placement
500 You can use assertions in any C++ function. In particular, it doesn't have to be
501 a method of the test fixture class. The one constraint is that assertions that
502 generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in
503 void-returning functions. This is a consequence of Google's not using
exceptions. If you place such an assertion in a non-void function, you'll get a confusing compile
505 error like `"error: void value not ignored as it ought to be"` or `"cannot
506 initialize return object of type 'bool' with an rvalue of type 'void'"` or
507 `"error: no viable conversion from 'void' to 'string'"`.
509 If you need to use fatal assertions in a function that returns non-void, one
510 option is to make the function return the value in an out parameter instead. For
511 example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
512 need to make sure that `*result` contains some sensible value even when the
513 function returns prematurely. As the function now returns `void`, you can use
514 any assertion inside of it.
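For example, a value-returning helper could be reshaped like this (a minimal
sketch; `GetNthElement` is a hypothetical helper, not a googletest API):

```c++
#include <vector>
#include "gtest/gtest.h"

// Instead of:  int GetNthElement(const std::vector<int>& v, size_t n);
void GetNthElement(const std::vector<int>& v, size_t n, int* result) {
  *result = 0;  // Keep *result sensible even if the assertion returns early.
  ASSERT_LT(n, v.size()) << "index " << n << " is out of range";
  *result = v[n];
}
```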
516 If changing the function's type is not an option, you should just use assertions
517 that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.
519 NOTE: Constructors and destructors are not considered void-returning functions,
520 according to the C++ language specification, and so you may not use fatal
521 assertions in them; you'll get a compilation error if you try. Instead, either
522 call `abort` and crash the entire test executable, or put the fatal assertion in
523 a `SetUp`/`TearDown` function; see
[constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp).
WARNING: A fatal assertion in a helper function (private void-returning method)
called from a constructor or destructor does not terminate the current test, as
your intuition might suggest: it merely returns from the constructor or
529 destructor early, possibly leaving your object in a partially-constructed or
530 partially-destructed state! You almost certainly want to `abort` or use
531 `SetUp`/`TearDown` instead.
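A sketch of the safer pattern, using a `FILE*` as a stand-in resource and a
hypothetical `testdata.txt` file:

```c++
#include <cstdio>
#include "gtest/gtest.h"

class MyFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    resource_ = std::fopen("testdata.txt", "r");  // stand-in for a real resource
    // A fatal failure here skips the test body entirely,
    // unlike a fatal failure inside the fixture's constructor.
    ASSERT_NE(resource_, nullptr) << "failed to open test data";
  }
  void TearDown() override {
    if (resource_ != nullptr) std::fclose(resource_);
  }

  std::FILE* resource_ = nullptr;
};
```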
533 ## Teaching googletest How to Print Your Values
535 When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument
536 values to help you debug. It does this using a user-extensible value printer.
538 This printer knows how to print built-in C++ types, native arrays, STL
539 containers, and any type that supports the `<<` operator. For other types, it
prints the raw bytes in the value and hopes that you, the user, can figure it
out.
542 As mentioned earlier, the printer is *extensible*. That means you can teach it
543 to do a better job at printing your particular type than to dump the bytes. To
544 do that, define `<<` for your type:
551 class Bar { // We want googletest to be able to print instances of this.
553 // Create a free inline friend function.
554 friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
555 return os << bar.DebugString(); // whatever needed to print bar to os
// If you can't declare the function in the class, it's important that the
// << operator is defined in the SAME namespace that defines Bar. C++'s look-up
// rules rely on that.
562 std::ostream& operator<<(std::ostream& os, const Bar& bar) {
563 return os << bar.DebugString(); // whatever needed to print bar to os
569 Sometimes, this might not be an option: your team may consider it bad style to
570 have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that
571 doesn't do what you want (and you cannot change it). If so, you can instead
572 define a `PrintTo()` function like this:
581 friend void PrintTo(const Bar& bar, std::ostream* os) {
582 *os << bar.DebugString(); // whatever needed to print bar to os
// If you can't declare the function in the class, it's important that PrintTo()
// is defined in the SAME namespace that defines Bar. C++'s look-up rules rely
// on that.
589 void PrintTo(const Bar& bar, std::ostream* os) {
590 *os << bar.DebugString(); // whatever needed to print bar to os
If you have defined both `<<` and `PrintTo()`, googletest uses the latter.
This allows you to customize how the value appears in googletest's output
without affecting code that relies on the behavior of its `<<` operator.
601 If you want to print a value `x` using googletest's value printer yourself, just
602 call `::testing::PrintToString(x)`, which returns an `std::string`:
605 vector<pair<Bar, int> > bar_ints = GetBarIntVector();
607 EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
608 << "bar_ints = " << ::testing::PrintToString(bar_ints);
613 In many applications, there are assertions that can cause application failure if
614 a condition is not met. These sanity checks, which ensure that the program is in
615 a known good state, are there to fail at the earliest possible time after some
616 program state is corrupted. If the assertion checks the wrong condition, then
617 the program may proceed in an erroneous state, which could lead to memory
618 corruption, security holes, or worse. Hence it is vitally important to test that
619 such assertion statements work as expected.
621 Since these precondition checks cause the processes to die, we call such tests
622 _death tests_. More generally, any test that checks that a program terminates
623 (except by throwing an exception) in an expected fashion is also a death test.
625 Note that if a piece of code throws an exception, we don't consider it "death"
626 for the purpose of death tests, as the caller of the code could catch the
627 exception and avoid the crash. If you want to verify exceptions thrown by your
628 code, see [Exception Assertions](#ExceptionAssertions).
If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see
[Catching Failures](#catching-failures).
633 ### How to Write a Death Test
635 googletest has the following macros to support death tests:
637 Fatal assertion | Nonfatal assertion | Verifies
638 ------------------------------------------------ | ------------------------------------------------ | --------
639 `ASSERT_DEATH(statement, matcher);` | `EXPECT_DEATH(statement, matcher);` | `statement` crashes with the given error
640 `ASSERT_DEATH_IF_SUPPORTED(statement, matcher);` | `EXPECT_DEATH_IF_SUPPORTED(statement, matcher);` | if death tests are supported, verifies that `statement` crashes with the given error; otherwise verifies nothing
641 `ASSERT_EXIT(statement, predicate, matcher);` | `EXPECT_EXIT(statement, predicate, matcher);` | `statement` exits with the given error, and its exit code matches `predicate`
643 where `statement` is a statement that is expected to cause the process to die,
644 `predicate` is a function or function object that evaluates an integer exit
645 status, and `matcher` is either a GMock matcher matching a `const std::string&`
646 or a (Perl) regular expression - either of which is matched against the stderr
647 output of `statement`. For legacy reasons, a bare string (i.e. with no matcher)
648 is interpreted as `ContainsRegex(str)`, **not** `Eq(str)`. Note that `statement`
can be *any valid statement* (including a *compound statement*) and doesn't have
to be an expression.
652 As usual, the `ASSERT` variants abort the current test function, while the
653 `EXPECT` variants do not.
655 > NOTE: We use the word "crash" here to mean that the process terminates with a
656 > *non-zero* exit status code. There are two possibilities: either the process
> has called `exit()` or `_exit()` with a non-zero value, or it may be killed
> by a signal.
> This means that if `statement` terminates the process with a 0 exit code, it
661 > is *not* considered a crash by `EXPECT_DEATH`. Use `EXPECT_EXIT` instead if
662 > this is the case, or if you want to restrict the exit code more precisely.
664 A predicate here must accept an `int` and return a `bool`. The death test
665 succeeds only if the predicate returns `true`. googletest defines a few
666 predicates that handle the most common cases:
669 ::testing::ExitedWithCode(exit_code)
This expression is `true` if the program exited normally with the given exit
code.
676 ::testing::KilledBySignal(signal_number) // Not available on Windows.
679 This expression is `true` if the program was killed by the given signal.
681 The `*_DEATH` macros are convenient wrappers for `*_EXIT` that use a predicate
682 that verifies the process' exit code is non-zero.
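For example, if a hypothetical `Crash()` prints `"out of memory"` to stderr and
exits with status 1, both assertions below pass, but only the `EXPECT_EXIT`
form pins down the exact exit code:

```c++
EXPECT_DEATH(Crash(), "out of memory");  // any non-zero exit status
EXPECT_EXIT(Crash(), ::testing::ExitedWithCode(1), "out of memory");
```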
684 Note that a death test only cares about three things:
686 1. does `statement` abort or exit the process?
687 2. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
688 satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`)
689 is the exit status non-zero? And
3. does the stderr output match `matcher`?
692 In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it
will **not** cause the death test to fail, as googletest assertions don't abort
the process.
696 To write a death test, simply use one of the above macros inside your test
697 function. For example,
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  EXPECT_DEATH({
    Foo(5);
  }, "Error on line .* of Foo()");
}
708 TEST(MyDeathTest, NormalExit) {
709 EXPECT_EXIT(NormalExit(), ::testing::ExitedWithCode(0), "Success");
712 TEST(MyDeathTest, KillMyself) {
713 EXPECT_EXIT(KillMyself(), ::testing::KilledBySignal(SIGKILL),
714 "Sending myself unblockable signal");
720 * calling `Foo(5)` causes the process to die with the given error message,
721 * calling `NormalExit()` causes the process to print `"Success"` to stderr and
722 exit with exit code 0, and
723 * calling `KillMyself()` kills the process with signal `SIGKILL`.
The test function body may contain other assertions and statements as well, if
necessary.
728 ### Death Test Naming
IMPORTANT: We strongly recommend that you follow the convention of naming your
731 **test suite** (not test) `*DeathTest` when it contains a death test, as
732 demonstrated in the above example. The
733 [Death Tests And Threads](#death-tests-and-threads) section below explains why.
735 If a test fixture class is shared by normal tests and death tests, you can use
736 `using` or `typedef` to introduce an alias for the fixture class and avoid
737 duplicating its code:
740 class FooTest : public ::testing::Test { ... };
742 using FooDeathTest = FooTest;
744 TEST_F(FooTest, DoesThis) {
748 TEST_F(FooDeathTest, DoesThat) {
753 ### Regular Expression Syntax
755 On POSIX systems (e.g. Linux, Cygwin, and Mac), googletest uses the
756 [POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04)
757 syntax. To learn about this syntax, you may want to read this
758 [Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).
760 On Windows, googletest uses its own simple regular expression implementation. It
761 lacks many features. For example, we don't support union (`"x|y"`), grouping
762 (`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among
763 others. Below is what we do support (`A` denotes a literal character, period
(`.`), or a single `\\ ` escape sequence; `x` and `y` denote regular
expressions.):
Expression | Meaning
---------- | --------------------------------------------------------------
769 `c` | matches any literal character `c`
770 `\\d` | matches any decimal digit
771 `\\D` | matches any character that's not a decimal digit
775 `\\s` | matches any ASCII whitespace, including `\n`
`\\S` | matches any character that's not whitespace
779 `\\w` | matches any letter, `_`, or decimal digit
780 `\\W` | matches any character that `\\w` doesn't match
781 `\\c` | matches any literal character `c`, which must be a punctuation
782 `.` | matches any single character except `\n`
783 `A?` | matches 0 or 1 occurrences of `A`
784 `A*` | matches 0 or many occurrences of `A`
785 `A+` | matches 1 or many occurrences of `A`
786 `^` | matches the beginning of a string (not that of each line)
787 `$` | matches the end of a string (not that of each line)
788 `xy` | matches `x` followed by `y`
To help you determine which capability is available on your system, googletest
defines macros to govern which regular expression syntax it uses:
`GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death
tests to work in all cases, you can either `#if` on these macros or use only
the more limited syntax.
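For example, a death test could select its regular expression based on these
macros (a sketch; `Crash()` is hypothetical):

```c++
TEST(MyDeathTest, PortableRegex) {
#if GTEST_USES_POSIX_RE
  // POSIX extended syntax, e.g. alternation, is available.
  EXPECT_DEATH(Crash(), "out-of-memory|OOM");
#else
  // Fall back to the limited syntax supported everywhere.
  EXPECT_DEATH(Crash(), "out-of-memory");
#endif
}
```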
798 Under the hood, `ASSERT_EXIT()` spawns a new process and executes the death test
799 statement in that process. The details of how precisely that happens depend on
the platform and the variable `::testing::GTEST_FLAG(death_test_style)` (which is
801 initialized from the command-line flag `--gtest_death_test_style`).
* On POSIX systems, `fork()` (or `clone()` on Linux) is used to spawn the
  child, after which:
805 * If the variable's value is `"fast"`, the death test statement is
806 immediately executed.
807 * If the variable's value is `"threadsafe"`, the child process re-executes
808 the unit test binary just as it was originally invoked, but with some
extra flags to cause just the single death test under consideration to be run.
811 * On Windows, the child is spawned using the `CreateProcess()` API, and
812 re-executes the binary to cause just the single death test under
813 consideration to be run - much like the `threadsafe` mode on POSIX.
Other values for the variable are illegal and will cause the death test to fail.
Currently, the flag's default value is **"fast"**. In either case, the parent
process waits for the child process to finish, and checks that

818 1. the child's exit status satisfies the predicate, and
819 2. the child's stderr matches the regular expression.
821 If the death test statement runs to completion without dying, the child process
822 will nonetheless terminate, and the assertion fails.
824 ### Death Tests And Threads
826 The reason for the two death test styles has to do with thread safety. Due to
827 well-known problems with forking in the presence of threads, death tests should
828 be run in a single-threaded context. Sometimes, however, it isn't feasible to
829 arrange that kind of environment. For example, statically-initialized modules
830 may start threads before main is ever reached. Once threads have been created,
831 it may be difficult or impossible to clean them up.
833 googletest has three features intended to raise awareness of threading issues.
1. A warning is emitted if multiple threads are running when a death test is
   encountered.
2. Test suites with a name ending in "DeathTest" are run before all other
   tests.
3. It uses `clone()` instead of `fork()` to spawn the child process on Linux
   (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely
   to cause the child to hang when the parent process has multiple threads.
843 It's perfectly fine to create threads inside a death test statement; they are
844 executed in a separate process and cannot affect the parent.
846 ### Death Test Styles
848 The "threadsafe" death test style was introduced in order to help mitigate the
849 risks of testing in a possibly multithreaded environment. It trades increased
850 test execution time (potentially dramatically so) for improved thread safety.
852 The automated testing framework does not set the style flag. You can choose a
853 particular style of death tests by setting the flag programmatically:
testing::FLAGS_gtest_death_test_style = "threadsafe";
859 You can do this in `main()` to set the style for all death tests in the binary,
860 or in individual tests. Recall that flags are saved before running each test and
861 restored afterwards, so you need not do that yourself. For example:
864 int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
866 ::testing::FLAGS_gtest_death_test_style = "fast";
867 return RUN_ALL_TESTS();
870 TEST(MyDeathTest, TestOne) {
871 ::testing::FLAGS_gtest_death_test_style = "threadsafe";
872 // This test is run in the "threadsafe" style:
873 ASSERT_DEATH(ThisShouldDie(), "");
876 TEST(MyDeathTest, TestTwo) {
877 // This test is run in the "fast" style:
878 ASSERT_DEATH(ThisShouldDie(), "");
884 The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If
885 it leaves the current function via a `return` statement or by throwing an
886 exception, the death test is considered to have failed. Some googletest macros
may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to
avoid them in `statement`.
890 Since `statement` runs in the child process, any in-memory side effect (e.g.
891 modifying a variable, releasing memory, etc) it causes will *not* be observable
892 in the parent process. In particular, if you release memory in a death test,
893 your program will fail the heap check as the parent process will never see the
894 memory reclaimed. To solve this problem, you can
896 1. try not to free memory in a death test;
897 2. free the memory again in the parent process; or
898 3. do not use the heap checker in your program.
900 Due to an implementation detail, you cannot place multiple death test assertions
901 on the same line; otherwise, compilation will fail with an unobvious error
904 Despite the improved thread safety afforded by the "threadsafe" style of death
905 test, thread problems such as deadlock are still possible in the presence of
906 handlers registered with `pthread_atfork(3)`.
909 ## Using Assertions in Sub-routines
911 ### Adding Traces to Assertions
913 If a test sub-routine is called from several places, when an assertion inside it
914 fails, it can be hard to tell which invocation of the sub-routine the failure is
915 from. You can alleviate this problem using extra logging or custom failure
916 messages, but that usually clutters up your tests. A better solution is to use
917 the `SCOPED_TRACE` macro or the `ScopedTrace` utility:
920 SCOPED_TRACE(message);
921 ScopedTrace trace("file_path", line_number, message);
where `message` can be anything streamable to `std::ostream`. The
`SCOPED_TRACE` macro will cause the current file name, line number, and the
given message to be added to every failure message. `ScopedTrace` accepts
explicit file name and line number as arguments, which is useful for writing
test helpers. The effect will be undone when control leaves the current
lexical scope.
933 10: void Sub1(int n) {
934 11: EXPECT_EQ(Bar(n), 1);
935 12: EXPECT_EQ(Bar(n + 1), 2);
938 15: TEST(FooTest, Bar) {
940 17: SCOPED_TRACE("A"); // This trace point will be included in
941 18: // every failure in this scope.
949 could result in messages like these:
952 path/to/foo_test.cc:11: Failure
957 path/to/foo_test.cc:17: A
959 path/to/foo_test.cc:12: Failure
Without the trace, it would've been difficult to know which invocation of
`Sub1()` the two failures come from respectively. (You could add an extra
message to each assertion in `Sub1()` to indicate the value of `n`, but that's
tedious.)
970 Some tips on using `SCOPED_TRACE`:
1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the
   beginning of a sub-routine, instead of at each call site.
2. When calling sub-routines inside a loop, make the loop iterator part of the
   message in `SCOPED_TRACE` so that you can tell which iteration the failure
   is from (see the sketch after this list).
3. Sometimes the line number of the trace point is enough for identifying the
   particular invocation of a sub-routine. In this case, you don't have to
   choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
4. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer
   scope. In this case, all active trace points will be included in the failure
   messages, in the reverse order in which they are encountered.
5. The trace dump is clickable in Emacs - hit `return` on a line number and
   you'll be taken to that line in the source file!
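For instance, tip 2 might look like this (a sketch reusing `Sub1()` from the
example above):

```c++
TEST(FooTest, SubroutineInLoop) {
  for (int i = 1; i <= 3; ++i) {
    // Each failure inside Sub1() now reports which iteration it came from.
    SCOPED_TRACE(testing::Message() << "iteration " << i);
    Sub1(i);
  }
}
```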
986 ### Propagating Fatal Failures
988 A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
989 when they fail they only abort the _current function_, not the entire test. For
990 example, the following test will segfault:
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);
  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();  // The intended behavior is for the fatal failure
                 // in Subroutine() to abort the entire test.
  // The actual behavior: the function goes on after Subroutine() returns.
  int* p = nullptr;
  *p = 3;  // Segfault!
}
To alleviate this, googletest provides three different solutions: you can use
exceptions, the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions, or the
`HasFatalFailure()` function. They are described in the following subsections.
1016 #### Asserting on Subroutines with an exception
The following code can turn a fatal assertion failure into an exception:
1021 class ThrowListener : public testing::EmptyTestEventListener {
1022 void OnTestPartResult(const testing::TestPartResult& result) override {
1023 if (result.type() == testing::TestPartResult::kFatalFailure) {
1024 throw testing::AssertionException(result);
1028 int main(int argc, char** argv) {
1030 testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener);
1031 return RUN_ALL_TESTS();
This listener should be added after other listeners if you have any; otherwise
they won't see failed `OnTestPartResult` events.
1038 #### Asserting on Subroutines
1040 As shown above, if your test calls a subroutine that has an `ASSERT_*` failure
in it, the test will continue after the subroutine returns. This may not be
what you want.
1044 Often people want fatal failures to propagate like exceptions. For that
1045 googletest offers the following macros:
1047 Fatal assertion | Nonfatal assertion | Verifies
1048 ------------------------------------- | ------------------------------------- | --------
1049 `ASSERT_NO_FATAL_FAILURE(statement);` | `EXPECT_NO_FATAL_FAILURE(statement);` | `statement` doesn't generate any new fatal failures in the current thread.
Only failures in the thread that executes the assertion are checked to
determine the result of this type of assertion. If `statement` creates new
threads, failures in these threads are ignored.
1058 ASSERT_NO_FATAL_FAILURE(Foo());
int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
1066 Assertions from multiple threads are currently not supported on Windows.
1068 #### Checking for Failures in the Current Test
1070 `HasFatalFailure()` in the `::testing::Test` class returns `true` if an
1071 assertion in the current test has suffered a fatal failure. This allows
1072 functions to catch fatal failures in a sub-routine and return early.
1078 static bool HasFatalFailure();
The typical usage, which basically simulates the behavior of a thrown
exception, is:
TEST(FooTest, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure()) return;

  // The following won't be executed.
  ...
}
If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test
fixture, you must add the `::testing::Test::` prefix, as in:
1100 if (::testing::Test::HasFatalFailure()) return;
1103 Similarly, `HasNonfatalFailure()` returns `true` if the current test has at
1104 least one non-fatal failure, and `HasFailure()` returns `true` if the current
1105 test has at least one failure of either kind.
1107 ## Logging Additional Information
1109 In your test code, you can call `RecordProperty("key", value)` to log additional
1110 information, where `value` can be either a string or an `int`. The *last* value
1111 recorded for a key will be emitted to the
1112 [XML output](#generating-an-xml-report) if you specify one. For example, the
1116 TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
1117 RecordProperty("MaximumWidgets", ComputeMaxUsage());
1118 RecordProperty("MinimumWidgets", ComputeMinUsage());
1122 will output XML like this:
1126 <testcase name="MinAndMaxWidgets" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" />
1132 > * `RecordProperty()` is a static member of the `Test` class. Therefore it
1133 > needs to be prefixed with `::testing::Test::` if used outside of the
1134 > `TEST` body and the test fixture class.
> * `key` must be a valid XML attribute name, and cannot conflict with the
1136 > ones already used by googletest (`name`, `status`, `time`, `classname`,
1137 > `type_param`, and `value_param`).
1138 > * Calling `RecordProperty()` outside of the lifespan of a test is allowed.
1139 > If it's called outside of a test but between a test suite's
1140 > `SetUpTestSuite()` and `TearDownTestSuite()` methods, it will be
1141 > attributed to the XML element for the test suite. If it's called outside
1142 > of all test suites (e.g. in a test environment), it will be attributed to
1143 > the top-level XML element.
1145 ## Sharing Resources Between Tests in the Same Test Suite
1147 googletest creates a new test fixture object for each test in order to make
1148 tests independent and easier to debug. However, sometimes tests use resources
that are expensive to set up, making the one-copy-per-test model prohibitively
expensive.
1152 If the tests don't change the resource, there's no harm in their sharing a
1153 single resource copy. So, in addition to per-test set-up/tear-down, googletest
1154 also supports per-test-suite set-up/tear-down. To use it:
1156 1. In your test fixture class (say `FooTest` ), declare as `static` some member
1157 variables to hold the shared resources.
1158 2. Outside your test fixture class (typically just below it), define those
1159 member variables, optionally giving them initial values.
1160 3. In the same test fixture class, define a `static void SetUpTestSuite()`
1161 function (remember not to spell it as **`SetupTestSuite`** with a small
1162 `u`!) to set up the shared resources and a `static void TearDownTestSuite()`
1163 function to tear them down.
1165 That's it! googletest automatically calls `SetUpTestSuite()` before running the
1166 *first test* in the `FooTest` test suite (i.e. before creating the first
1167 `FooTest` object), and calls `TearDownTestSuite()` after running the *last test*
1168 in it (i.e. after deleting the last `FooTest` object). In between, the tests can
1169 use the shared resources.
1171 Remember that the test order is undefined, so your code can't depend on a test
1172 preceding or following another. Also, the tests must either not modify the state
1173 of any shared resource, or, if they do modify the state, they must restore the
1174 state to its original value before passing control to the next test.
1176 Here's an example of per-test-suite set-up and tear-down:
1179 class FooTest : public ::testing::Test {
1181 // Per-test-suite set-up.
1182 // Called before the first test in this test suite.
1183 // Can be omitted if not needed.
1184 static void SetUpTestSuite() {
1185 shared_resource_ = new ...;
1188 // Per-test-suite tear-down.
1189 // Called after the last test in this test suite.
1190 // Can be omitted if not needed.
1191 static void TearDownTestSuite() {
1192 delete shared_resource_;
1193 shared_resource_ = NULL;
1196 // You can define per-test set-up logic as usual.
1197 virtual void SetUp() { ... }
1199 // You can define per-test tear-down logic as usual.
1200 virtual void TearDown() { ... }
1202 // Some expensive resource shared by all tests.
1203 static T* shared_resource_;
1206 T* FooTest::shared_resource_ = NULL;
1208 TEST_F(FooTest, Test1) {
1209 ... you can refer to shared_resource_ here ...
1212 TEST_F(FooTest, Test2) {
1213 ... you can refer to shared_resource_ here ...
NOTE: Though the above code declares `SetUpTestSuite()` protected, it may
sometimes be necessary to declare it public, such as when using it with
`TEST_P`.
1221 ## Global Set-Up and Tear-Down
1223 Just as you can do set-up and tear-down at the test level and the test suite
1224 level, you can also do it at the test program level. Here's how.
1226 First, you subclass the `::testing::Environment` class to define a test
environment, which knows how to set up and tear down:
1230 class Environment : public ::testing::Environment {
1232 virtual ~Environment() {}
1234 // Override this to define how to set up the environment.
1235 void SetUp() override {}
1237 // Override this to define how to tear down the environment.
1238 void TearDown() override {}
1242 Then, you register an instance of your environment class with googletest by
1243 calling the `::testing::AddGlobalTestEnvironment()` function:
1246 Environment* AddGlobalTestEnvironment(Environment* env);
1249 Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of
1250 each environment object, then runs the tests if none of the environments
1251 reported fatal failures and `GTEST_SKIP()` was not called. `RUN_ALL_TESTS()`
1252 always calls `TearDown()` with each environment object, regardless of whether or
1253 not the tests were run.
It's OK to register multiple environment objects. In this case, their `SetUp()`
1256 will be called in the order they are registered, and their `TearDown()` will be
1257 called in the reverse order.
1259 Note that googletest takes ownership of the registered environment objects.
1260 Therefore **do not delete them** by yourself.
1262 You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called,
1263 probably in `main()`. If you use `gtest_main`, you need to call this before
1264 `main()` starts for it to take effect. One way to do this is to define a global
1268 ::testing::Environment* const foo_env =
1269 ::testing::AddGlobalTestEnvironment(new FooEnvironment);
However, we strongly recommend that you write your own `main()` and call
1273 `AddGlobalTestEnvironment()` there, as relying on initialization of global
1274 variables makes the code harder to read and may cause problems when you register
1275 multiple environments from different translation units and the environments have
1276 dependencies among them (remember that the compiler doesn't guarantee the order
1277 in which global variables from different translation units are initialized).
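A minimal sketch of the recommended approach, reusing the `FooEnvironment`
subclass from above:

```c++
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // googletest takes ownership of the environment; do not delete it.
  ::testing::AddGlobalTestEnvironment(new FooEnvironment);
  return RUN_ALL_TESTS();
}
```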
1279 ## Value-Parameterized Tests
1281 *Value-parameterized tests* allow you to test your code with different
1282 parameters without writing multiple copies of the same test. This is useful in a
1283 number of situations, for example:
1285 * You have a piece of code whose behavior is affected by one or more
1286 command-line flags. You want to make sure your code performs correctly for
1287 various values of those flags.
1288 * You want to test different implementations of an OO interface.
1289 * You want to test your code over various inputs (a.k.a. data-driven testing).
This feature is easy to abuse, so please exercise your good sense when doing
so!
1293 ### How to Write Value-Parameterized Tests
1295 To write value-parameterized tests, first you should define a fixture class. It
1296 must be derived from both `testing::Test` and `testing::WithParamInterface<T>`
1297 (the latter is a pure interface), where `T` is the type of your parameter
1298 values. For convenience, you can just derive the fixture class from
1299 `testing::TestWithParam<T>`, which itself is derived from both `testing::Test`
1300 and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a
raw pointer, you are responsible for managing the lifespan of the pointed-to
values.
1304 NOTE: If your test fixture defines `SetUpTestSuite()` or `TearDownTestSuite()`
they must be declared **public** rather than **protected** in order to use
`TEST_P`.
class FooTest : public testing::TestWithParam<const char*> {
1311 // You can implement all the usual fixture class members here.
1312 // To access the test parameter, call GetParam() from class
1313 // TestWithParam<T>.
1316 // Or, when you want to add parameters to a pre-existing fixture class:
1317 class BaseTest : public testing::Test {
1320 class BarTest : public BaseTest,
1321 public testing::WithParamInterface<const char*> {
1326 Then, use the `TEST_P` macro to define as many test patterns using this fixture
as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you
prefer to think.
1331 TEST_P(FooTest, DoesBlah) {
1332 // Inside a test, access the test parameter with the GetParam() method
1333 // of the TestWithParam<T> class:
1334 EXPECT_TRUE(foo.Blah(GetParam()));
1338 TEST_P(FooTest, HasBlahBlah) {
1343 Finally, you can use `INSTANTIATE_TEST_SUITE_P` to instantiate the test suite
1344 with any set of parameters you want. googletest defines a number of functions
1345 for generating test parameters. They return what we call (surprise!) *parameter
generators*. Here is a summary of them, which are all in the `testing`
namespace:
1349 <!-- mdformat off(github rendering does not support multiline tables) -->
1351 | Parameter Generator | Behavior |
1352 | ----------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
1353 | `Range(begin, end [, step])` | Yields values `{begin, begin+step, begin+step+step, ...}`. The values do not include `end`. `step` defaults to 1. |
1354 | `Values(v1, v2, ..., vN)` | Yields values `{v1, v2, ..., vN}`. |
1355 | `ValuesIn(container)` and `ValuesIn(begin,end)` | Yields values from a C-style array, an STL-style container, or an iterator range `[begin, end)` |
1356 | `Bool()` | Yields sequence `{false, true}`. |
1357 | `Combine(g1, g2, ..., gN)` | Yields all combinations (Cartesian product) as std\:\:tuples of the values generated by the `N` generators. |
1361 For more details, see the comments at the definitions of these functions.
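For instance, `Bool()` and `Combine()` make it easy to cover every combination
of a flag and a size (a sketch; `FlagCombinationTest` and `RunPipeline` are
hypothetical):

```c++
#include <tuple>
#include "gtest/gtest.h"

class FlagCombinationTest
    : public testing::TestWithParam<std::tuple<bool, int>> {};

TEST_P(FlagCombinationTest, Works) {
  const bool use_cache = std::get<0>(GetParam());
  const int num_threads = std::get<1>(GetParam());
  EXPECT_NO_FATAL_FAILURE(RunPipeline(use_cache, num_threads));  // hypothetical
}

// Instantiates the test for all six combinations of
// {false, true} x {1, 2, 4}.
INSTANTIATE_TEST_SUITE_P(
    AllCombinations, FlagCombinationTest,
    testing::Combine(testing::Bool(), testing::Values(1, 2, 4)));
```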
1363 The following statement will instantiate tests from the `FooTest` test suite
1364 each with parameter values `"meeny"`, `"miny"`, and `"moe"`.
1367 INSTANTIATE_TEST_SUITE_P(InstantiationName,
1369 testing::Values("meeny", "miny", "moe"));
NOTE: The code above must be placed at global or namespace scope, not at
function scope.

NOTE: Don't forget this step! If you do, your test will silently pass, but none
of its suites will ever run!
1378 To distinguish different instances of the pattern (yes, you can instantiate it
1379 more than once), the first argument to `INSTANTIATE_TEST_SUITE_P` is a prefix
1380 that will be added to the actual test suite name. Remember to pick unique
1381 prefixes for different instantiations. The tests from the instantiation above
1382 will have these names:
1384 * `InstantiationName/FooTest.DoesBlah/0` for `"meeny"`
1385 * `InstantiationName/FooTest.DoesBlah/1` for `"miny"`
1386 * `InstantiationName/FooTest.DoesBlah/2` for `"moe"`
1387 * `InstantiationName/FooTest.HasBlahBlah/0` for `"meeny"`
1388 * `InstantiationName/FooTest.HasBlahBlah/1` for `"miny"`
1389 * `InstantiationName/FooTest.HasBlahBlah/2` for `"moe"`
1391 You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).
1393 This statement will instantiate all tests from `FooTest` again, each with
1394 parameter values `"cat"` and `"dog"`:
1397 const char* pets[] = {"cat", "dog"};
1398 INSTANTIATE_TEST_SUITE_P(AnotherInstantiationName, FooTest,
1399 testing::ValuesIn(pets));
1402 The tests from the instantiation above will have these names:
1404 * `AnotherInstantiationName/FooTest.DoesBlah/0` for `"cat"`
1405 * `AnotherInstantiationName/FooTest.DoesBlah/1` for `"dog"`
1406 * `AnotherInstantiationName/FooTest.HasBlahBlah/0` for `"cat"`
1407 * `AnotherInstantiationName/FooTest.HasBlahBlah/1` for `"dog"`
1409 Please note that `INSTANTIATE_TEST_SUITE_P` will instantiate *all* tests in the
1410 given test suite, whether their definitions come before or *after* the
1411 `INSTANTIATE_TEST_SUITE_P` statement.
1413 You can see [sample7_unittest.cc] and [sample8_unittest.cc] for more examples.
1415 [sample7_unittest.cc]: ../samples/sample7_unittest.cc "Parameterized Test example"
1416 [sample8_unittest.cc]: ../samples/sample8_unittest.cc "Parameterized Test example with multiple parameters"
1418 ### Creating Value-Parameterized Abstract Tests
1420 In the above, we define and instantiate `FooTest` in the *same* source file.
1421 Sometimes you may want to define value-parameterized tests in a library and let
1422 other people instantiate them later. This pattern is known as *abstract tests*.
1423 As an example of its application, when you are designing an interface you can
1424 write a standard suite of abstract tests (perhaps using a factory function as
1425 the test parameter) that all implementations of the interface are expected to
1426 pass. When someone implements the interface, they can instantiate your suite to
1427 get all the interface-conformance tests for free.
1429 To define abstract tests, you should organize your code like this:
1431 1. Put the definition of the parameterized test fixture class (e.g. `FooTest`)
in a header file, say `foo_param_test.h`. Think of this as *declaring* your
abstract tests.
1434 2. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes
1435 `foo_param_test.h`. Think of this as *implementing* your abstract tests.
1437 Once they are defined, you can instantiate them by including `foo_param_test.h`,
1438 invoking `INSTANTIATE_TEST_SUITE_P()`, and depending on the library target that
1439 contains `foo_param_test.cc`. You can instantiate the same abstract test suite
1440 multiple times, possibly in different source files.
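For example, an instantiating file might look like this (a sketch; the file
name and parameter values are hypothetical):

```c++
// my_foo_test.cc
#include "foo_param_test.h"

// Instantiates the abstract FooTest suite defined in foo_param_test.cc.
INSTANTIATE_TEST_SUITE_P(MyInstantiation, FooTest,
                         testing::Values("meeny", "miny", "moe"));
```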
1442 ### Specifying Names for Value-Parameterized Test Parameters
1444 The optional last argument to `INSTANTIATE_TEST_SUITE_P()` allows the user to
1445 specify a function or functor that generates custom test name suffixes based on
1446 the test parameters. The function should accept one argument of type
1447 `testing::TestParamInfo<class ParamType>`, and return `std::string`.
`testing::PrintToStringParamName` is a built-in test suffix generator that
1450 returns the value of `testing::PrintToString(GetParam())`. It does not work for
1451 `std::string` or C strings.
1453 NOTE: test names must be non-empty, unique, and may only contain ASCII
1454 alphanumeric characters. In particular, they
1455 [should not contain underscores](faq.md#why-should-test-suite-names-and-test-names-not-contain-underscore)
1458 class MyTestSuite : public testing::TestWithParam<int> {};
TEST_P(MyTestSuite, MyTest) {
1462 std::cout << "Example Test Param: " << GetParam() << std::endl;
1465 INSTANTIATE_TEST_SUITE_P(MyGroup, MyTestSuite, testing::Range(0, 10),
1466 testing::PrintToStringParamName());
1469 Providing a custom functor allows for more control over test parameter name
1470 generation, especially for types where the automatic conversion does not
1471 generate helpful parameter names (e.g. strings as demonstrated above). The
1472 following example illustrates this for multiple parameters, an enumeration type
and a string, and also demonstrates how to combine generators. It uses a lambda
for conciseness:
enum class MyType { MY_FOO = 0, MY_BAR = 1 };

class MyTestSuite
    : public testing::TestWithParam<std::tuple<MyType, std::string>> {};

INSTANTIATE_TEST_SUITE_P(
    MyGroup, MyTestSuite,
    testing::Combine(
        testing::Values(MyType::MY_FOO, MyType::MY_BAR),
        testing::Values("A", "B")),
    [](const testing::TestParamInfo<MyTestSuite::ParamType>& info) {
      std::string name = absl::StrCat(
          std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar", "_",
          std::get<1>(info.param));
      absl::c_replace_if(name, [](char c) { return !std::isalnum(c); }, '_');
      return name;
    });
1498 Suppose you have multiple implementations of the same interface and want to make
1499 sure that all of them satisfy some common requirements. Or, you may have defined
1500 several types that are supposed to conform to the same "concept" and you want to
verify it. In both cases, you want the same test logic repeated for different
types.
1504 While you can write one `TEST` or `TEST_F` for each type you want to test (and
1505 you may even factor the test logic into a function template that you invoke from
1506 the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n`
1507 types, you'll end up writing `m*n` `TEST`s.
1509 *Typed tests* allow you to repeat the same test logic over a list of types. You
1510 only need to write the test logic once, although you must know the type list
1511 when writing typed tests. Here's how you do it:
1513 First, define a fixture class template. It should be parameterized by a type.
1514 Remember to derive it from `::testing::Test`:
```c++
template <typename T>
class FooTest : public ::testing::Test {
 public:
  ...
  typedef std::list<T> List;
  static T shared_;
  T value_;
};
```
Next, associate a list of types with the test suite, which will be repeated for
each type in the list:
```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_SUITE(FooTest, MyTypes);
```
The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_SUITE`
macro to parse correctly. Otherwise the compiler will think that each comma in
the type list introduces a new macro argument.
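To see why, here is a sketch of the failure mode the alias avoids:

```c++
// Broken: the preprocessor splits the second macro argument at the comma,
// passing 'FooTest', '::testing::Types<char' and 'int>' as three separate
// arguments.
//   TYPED_TEST_SUITE(FooTest, ::testing::Types<char, int>);

// OK: the alias hides the comma from the preprocessor.
using MyTypes = ::testing::Types<char, int>;
TYPED_TEST_SUITE(FooTest, MyTypes);
```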
Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this
test suite. You can repeat this as many times as you want:
```c++
TYPED_TEST(FooTest, DoesBlah) {
  // Inside a test, refer to the special name TypeParam to get the type
  // parameter.  Since we are inside a derived class template, C++ requires
  // us to visit the members of FooTest via 'this'.
  TypeParam n = this->value_;

  // To visit static members of the fixture, add the 'TestFixture::'
  // prefix.
  n += TestFixture::shared_;

  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
  // prefix.  The 'typename' is required to satisfy the compiler.
  typename TestFixture::List values;
  values.push_back(n);
  ...
}

TYPED_TEST(FooTest, HasPropertyA) { ... }
```
You can see [sample6_unittest.cc] for a complete example.

[sample6_unittest.cc]: ../samples/sample6_unittest.cc "Typed Test example"
## Type-Parameterized Tests

*Type-parameterized tests* are like typed tests, except that they don't require
you to know the list of types ahead of time. Instead, you can define the test
logic first and instantiate it with different type lists later. You can even
instantiate it more than once in the same program.

If you are designing an interface or concept, you can define a suite of
type-parameterized tests to verify properties that any valid implementation of
the interface/concept should have. Then, the author of each implementation can
just instantiate the test suite with their type to verify that it conforms to
the requirements, without having to write similar tests repeatedly. Here's an
example:

First, define a fixture class template, as we did with typed tests:
```c++
template <typename T>
class FooTest : public ::testing::Test {
  ...
};
```
Next, declare that you will define a type-parameterized test suite:
```c++
TYPED_TEST_SUITE_P(FooTest);
```
Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat
this as many times as you want:
```c++
TYPED_TEST_P(FooTest, DoesBlah) {
  // Inside a test, refer to TypeParam to get the type parameter.
  TypeParam n = 0;
  ...
}

TYPED_TEST_P(FooTest, HasPropertyA) { ... }
```
Now the tricky part: you need to register all test patterns using the
`REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first
argument of the macro is the test suite name; the rest are the names of the
tests in this test suite:
```c++
REGISTER_TYPED_TEST_SUITE_P(FooTest,
                            DoesBlah, HasPropertyA);
```
Finally, you are free to instantiate the pattern with the types you want. If
you put the above code in a header file, you can `#include` it in multiple C++
source files and instantiate it multiple times.
```c++
typedef ::testing::Types<char, int, unsigned int> MyTypes;
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes);
```
To distinguish different instances of the pattern, the first argument to the
`INSTANTIATE_TYPED_TEST_SUITE_P` macro is a prefix that will be added to the
actual test suite name. Remember to pick unique prefixes for different
instantiations.
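For example, instantiating the pattern twice with distinct prefixes might look
like this sketch (the prefixes are illustrative; the instantiated tests get
full names along the lines of `Char/FooTest/0.DoesBlah`):

```c++
INSTANTIATE_TYPED_TEST_SUITE_P(Char, FooTest, ::testing::Types<char>);
INSTANTIATE_TYPED_TEST_SUITE_P(Int, FooTest, ::testing::Types<int>);
```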
In the special case where the type list contains only one type, you can write
that type directly without `::testing::Types<...>`, like this:
```c++
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int);
```
You can see [sample6_unittest.cc] for a complete example.
## Testing Private Code

If you change your software's internal implementation, your tests should not
break as long as the change is not observable by users. Therefore, **per the
black-box testing principle, most of the time you should test your code through
its public interfaces.**

**If you still find yourself needing to test internal implementation code,
consider if there's a better design.** The desire to test internal
implementation is often a sign that the class is doing too much. Consider
extracting an implementation class, and testing it. Then use that
implementation class in the original class.

If you absolutely have to test non-public interface code though, you can. There
are two cases to consider:

*   Static functions ( *not* the same as static member functions!) or unnamed
    namespaces, and
*   Private or protected class members
To test them, we use the following special techniques:

*   Both static functions and definitions/declarations in an unnamed namespace
    are only visible within the same translation unit. To test them, you can
    `#include` the entire `.cc` file being tested in your `*_test.cc` file.
    (#including `.cc` files is not a good way to reuse code - you should not do
    this in production code!)

    However, a better approach is to move the private code into the
    `foo::internal` namespace, where `foo` is the namespace your project
    normally uses, and put the private declarations in a `*-internal.h` file.
    Your production `.cc` files and your tests are allowed to include this
    internal header, but your clients are not. This way, you can fully test
    your internal implementation without leaking it to your clients.
*   Private class members are only accessible from within the class or by
    friends. To access a class' private members, you can declare your test
    fixture as a friend to the class and define accessors in your fixture.
    Tests using the fixture can then access the private members of your
    production class via the accessors in the fixture (see the sketch after
    this list). Note that even though your fixture is a friend to your
    production class, your tests are not automatically friends to it, as they
    are technically defined in sub-classes of the fixture.

    Another way to test private members is to refactor them into an
    implementation class, which is then declared in a `*-internal.h` file.
    Your clients aren't allowed to include this header but your tests can.
    Such is the rationale behind the
    [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/)
    (Private Implementation) idiom.
    Or, you can declare an individual test as a friend of your class by adding
    this line in the class body:

    ```c++
    FRIEND_TEST(TestSuiteName, TestName);
    ```

    For example,

    ```c++
    // foo.h
    class Foo {
      ...
     private:
      FRIEND_TEST(FooTest, BarReturnsZeroOnNull);

      int Bar(void* x);
    };

    // foo_test.cc
    ...
    TEST(FooTest, BarReturnsZeroOnNull) {
      Foo foo;
      EXPECT_EQ(foo.Bar(NULL), 0);  // Uses Foo's private member Bar().
    }
    ```
    Pay special attention when your class is defined in a namespace, as you
    should define your test fixtures and tests in the same namespace if you
    want them to be friends of your class. For example, if the code to be
    tested looks like:
    ```c++
    namespace my_namespace {

    class Foo {
      friend class FooTest;
      FRIEND_TEST(FooTest, Bar);
      FRIEND_TEST(FooTest, Baz);
      ... definition of the class Foo ...
    };

    }  // namespace my_namespace
    ```
    Your test code should be something like:
    ```c++
    namespace my_namespace {

    class FooTest : public ::testing::Test {
     protected:
      ...
    };

    TEST_F(FooTest, Bar) { ... }
    TEST_F(FooTest, Baz) { ... }

    }  // namespace my_namespace
    ```
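Here is a sketch of the friend-fixture-with-accessors pattern described above,
assuming `Foo` declares `friend class FooTest;` and has a private member `bar_`
(both names are illustrative):

```c++
namespace my_namespace {

class FooTest : public ::testing::Test {
 protected:
  // The fixture is a friend of Foo, so it may read Foo's private members;
  // the tests themselves are not friends of Foo, hence this accessor.
  static int GetBar(const Foo& foo) { return foo.bar_; }
};

TEST_F(FooTest, BarStartsAtZero) {
  Foo foo;
  EXPECT_EQ(GetBar(foo), 0);
}

}  // namespace my_namespace
```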
1755 ## "Catching" Failures
1757 If you are building a testing utility on top of googletest, you'll want to test
1758 your utility. What framework would you use to test it? googletest, of course.
1760 The challenge is to verify that your testing utility reports failures correctly.
1761 In frameworks that report a failure by throwing an exception, you could catch
1762 the exception and assert on it. But googletest doesn't use exceptions, so how do
1763 we test that a piece of code generates an expected failure?
1765 gunit-spi.h contains some constructs to do this. After #including this header,
```c++
  EXPECT_FATAL_FAILURE(statement, substring);
```
to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
current thread whose message contains the given `substring`, or use
```c++
  EXPECT_NONFATAL_FAILURE(statement, substring);
```
if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.
Only failures in the current thread are checked to determine the result of this
type of expectation. If `statement` creates new threads, failures in these
threads are also ignored. If you want to catch failures in other threads as
well, use one of the following macros instead:
```c++
  EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
  EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```
NOTE: Assertions from multiple threads are currently not supported on Windows.

For technical reasons, there are some caveats:

1.  You cannot stream a failure message to either macro.

2.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
    local non-static variables or non-static members of `this` object.

3.  `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a
    value.
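For example, a test for a hypothetical `EXPECT_POSITIVE` helper macro could be
sketched as follows (the helper itself is invented for illustration):

```c++
#include "gtest/gtest.h"
#include "gtest/gtest-spi.h"

// A hypothetical assertion utility built on top of googletest.
#define EXPECT_POSITIVE(x) EXPECT_GT(x, 0) << "value must be positive"

TEST(MyUtilityTest, ReportsNonPositiveValues) {
  // The statement must produce exactly one non-fatal failure whose
  // message contains the given substring.
  EXPECT_NONFATAL_FAILURE(EXPECT_POSITIVE(-5), "value must be positive");
}
```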
## Registering tests programmatically

The `TEST` macros handle the vast majority of all use cases, but there are a
few cases where runtime registration logic is required. For those cases, the
framework provides the `::testing::RegisterTest` function, which allows callers
to register arbitrary tests dynamically.

This is an advanced API only to be used when the `TEST` macros are
insufficient. The macros should be preferred when possible, as they avoid most
of the complexity of calling this function.

It provides the following signature:
```c++
template <typename Factory>
TestInfo* RegisterTest(const char* test_suite_name, const char* test_name,
                       const char* type_param, const char* value_param,
                       const char* file, int line, Factory factory);
```
The `factory` argument is a factory callable (move-constructible) object or
function pointer that creates a new instance of the Test object. Ownership of
the new instance is passed to the caller. The signature of the callable is
`Fixture*()`, where `Fixture` is the test fixture class for the test. All tests
registered with the same `test_suite_name` must return the same fixture type.
This is checked at runtime.

The framework will infer the fixture class from the factory and will call the
`SetUpTestSuite` and `TearDownTestSuite` methods for it.

The function must be called before `RUN_ALL_TESTS()` is invoked; otherwise the
behavior is undefined.

Use case example:
```c++
class MyFixture : public ::testing::Test {
 public:
  // All of these optional, just like in regular macro usage.
  static void SetUpTestSuite() { ... }
  static void TearDownTestSuite() { ... }
  void SetUp() override { ... }
  void TearDown() override { ... }
};

class MyTest : public MyFixture {
 public:
  explicit MyTest(int data) : data_(data) {}
  void TestBody() override { ... }

 private:
  int data_;
};

void RegisterMyTests(const std::vector<int>& values) {
  for (int v : values) {
    ::testing::RegisterTest(
        "MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr,
        std::to_string(v).c_str(), __FILE__, __LINE__,
        // Important to use the fixture type as the return type here.
        [=]() -> MyFixture* { return new MyTest(v); });
  }
}

int main(int argc, char** argv) {
  std::vector<int> values_to_test = LoadValuesFromConfig();
  RegisterMyTests(values_to_test);
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```
## Getting the Current Test's Name

Sometimes a function may need to know the name of the currently running test.
For example, you may be using the `SetUp()` method of your test fixture to set
the golden file name based on which test is running. The `::testing::TestInfo`
class has this information:
```c++
  // Returns the test suite name and the test name, respectively.
  //
  // Do NOT delete or free the return value - it's managed by the
  // TestInfo class.
  const char* test_suite_name() const;
  const char* name() const;
```
To obtain a `TestInfo` object for the currently running test, call
`current_test_info()` on the `UnitTest` singleton object:
```c++
  // Gets information about the currently running test.
  // Do NOT delete the returned object - it's managed by the UnitTest class.
  const ::testing::TestInfo* const test_info =
      ::testing::UnitTest::GetInstance()->current_test_info();

  printf("We are in test %s of test suite %s.\n",
         test_info->name(),
         test_info->test_suite_name());
```
`current_test_info()` returns a null pointer if no test is running. In
particular, you cannot find the test suite name in `SetUpTestSuite()`,
`TearDownTestSuite()` (where you know the test suite name implicitly), or
functions called from them.
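As a sketch of the golden-file idea mentioned above (the directory layout and
naming scheme are illustrative):

```c++
class GoldenTest : public ::testing::Test {
 protected:
  void SetUp() override {
    const ::testing::TestInfo* const info =
        ::testing::UnitTest::GetInstance()->current_test_info();
    // Produces e.g. "testdata/GoldenTest.ProducesExpectedOutput.golden".
    golden_path_ = std::string("testdata/") + info->test_suite_name() + "." +
                   info->name() + ".golden";
  }

  std::string golden_path_;
};
```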
## Extending googletest by Handling Test Events

googletest provides an **event listener API** to let you receive notifications
about the progress of a test program and test failures. The events you can
listen to include the start and end of the test program, a test suite, or a
test method, among others. You may use this API to augment or replace the
standard console output, replace the XML output, or provide a completely
different form of output, such as a GUI or a database. You can also use test
events as checkpoints to implement a resource leak checker, for example.
### Defining Event Listeners

To define an event listener, you subclass either testing::TestEventListener or
testing::EmptyTestEventListener. The former is an (abstract) interface, where
*each pure virtual method can be overridden to handle a test event* (for
example, when a test starts, the `OnTestStart()` method will be called). The
latter provides an empty implementation of all methods in the interface, such
that a subclass only needs to override the methods it cares about.
When an event is fired, its context is passed to the handler function as an
argument. The following argument types are used:

*   UnitTest reflects the state of the entire test program,
*   TestSuite has information about a test suite, which can contain one or more
    test methods,
*   TestInfo contains the state of a test, and
*   TestPartResult represents the result of a test assertion.
An event handler function can examine the argument it receives to find out
interesting information about the event and the test program's state. Here's an
example:
```c++
  class MinimalistPrinter : public ::testing::EmptyTestEventListener {
    // Called before a test starts.
    virtual void OnTestStart(const ::testing::TestInfo& test_info) {
      printf("*** Test %s.%s starting.\n",
             test_info.test_suite_name(), test_info.name());
    }

    // Called after a failed assertion or a SUCCEED().
    virtual void OnTestPartResult(
        const ::testing::TestPartResult& test_part_result) {
      printf("%s in %s:%d\n%s\n",
             test_part_result.failed() ? "*** Failure" : "Success",
             test_part_result.file_name(),
             test_part_result.line_number(),
             test_part_result.summary());
    }

    // Called after a test ends.
    virtual void OnTestEnd(const ::testing::TestInfo& test_info) {
      printf("*** Test %s.%s ending.\n",
             test_info.test_suite_name(), test_info.name());
    }
  };
```
### Using Event Listeners

To use the event listener you have defined, add an instance of it to the
googletest event listener list (represented by class TestEventListeners - note
the "s" at the end of the name) in your `main()` function, before calling
`RUN_ALL_TESTS()`:
```c++
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // Gets hold of the event listener list.
  ::testing::TestEventListeners& listeners =
      ::testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end.  googletest takes the ownership.
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
}
```
There's only one problem: the default test result printer is still in effect,
so its output will mingle with the output from your minimalist printer. To
suppress the default printer, just release it from the event listener list and
delete it. You can do so by adding one line:
```c++
  ...
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
```
Now, sit back and enjoy a completely different output from your tests. For more
details, see [sample9_unittest.cc].

[sample9_unittest.cc]: ../samples/sample9_unittest.cc "Event listener example"
You may append more than one listener to the list. When an `On*Start()` or
`OnTestPartResult()` event is fired, the listeners will receive it in the order
they appear in the list (since new listeners are added to the end of the list,
the default text printer and the default XML generator will receive the event
first). An `On*End()` event will be received by the listeners in the *reverse*
order. This allows output by listeners added later to be framed by output from
listeners added earlier.
### Generating Failures in Listeners

You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc)
when processing an event. There are some restrictions:

1.  You cannot generate any failure in `OnTestPartResult()` (otherwise it will
    cause `OnTestPartResult()` to be called recursively).
2.  A listener that handles `OnTestPartResult()` is not allowed to generate any
    failure.
When you add listeners to the listener list, you should put listeners that
handle `OnTestPartResult()` *before* listeners that can generate failures. This
ensures that failures generated by the latter are attributed to the right test
by the former.

See [sample10_unittest.cc] for an example of a failure-raising listener.

[sample10_unittest.cc]: ../samples/sample10_unittest.cc "Failure-raising listener example"
## Running Test Programs: Advanced Options

googletest test programs are ordinary executables. Once built, you can run them
directly and affect their behavior via the following environment variables
and/or command line flags. For the flags to work, your programs must call
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.

To see a list of supported flags and their usage, please run your test program
with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.

If an option is specified both by an environment variable and by a flag, the
flag takes precedence.

### Selecting Tests
#### Listing Test Names

Sometimes it is necessary to list the available tests in a program before
running them so that a filter may be applied if needed. Including the flag
`--gtest_list_tests` overrides all other flags and lists tests in the following
format:
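(The suite and test names below are placeholders.)

```
TestSuite1.
  TestName1
  TestName2
TestSuite2.
  TestName
```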
None of the tests listed are actually run if the flag is provided. There is no
corresponding environment variable for this flag.
#### Running a Subset of the Tests

By default, a googletest program runs all tests the user has defined.
Sometimes, you want to run only a subset of the tests (e.g. for debugging or
quickly verifying a change). If you set the `GTEST_FILTER` environment variable
or the `--gtest_filter` flag to a filter string, googletest will only run the
tests whose full names (in the form of `TestSuiteName.TestName`) match the
filter.

The format of a filter is a '`:`'-separated list of wildcard patterns (called
the *positive patterns*) optionally followed by a '`-`' and another
'`:`'-separated pattern list (called the *negative patterns*). A test matches
the filter if and only if it matches any of the positive patterns but does not
match any of the negative patterns.

A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
character). For convenience, the filter `'*-NegativePatterns'` can be also
written as `'-NegativePatterns'`.
For example:

*   `./foo_test` Has no flag, and thus runs all its tests.
*   `./foo_test --gtest_filter=*` Also runs everything, due to the single
    match-everything `*` value.
*   `./foo_test --gtest_filter=FooTest.*` Runs everything in test suite
    `FooTest`.
*   `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
    name contains either `"Null"` or `"Constructor"`.
*   `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
*   `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
    suite `FooTest` except `FooTest.Bar`.
*   `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo`
    Runs everything in test suite `FooTest` except `FooTest.Bar` and everything
    in test suite `BarTest` except `BarTest.Foo`.
#### Temporarily Disabling Tests

If you have a broken test that you cannot fix right away, you can add the
`DISABLED_` prefix to its name. This will exclude it from execution. This is
better than commenting out the code or using `#if 0`, as disabled tests are
still compiled (and thus won't rot).

If you need to disable all tests in a test suite, you can either add
`DISABLED_` to the front of the name of each test, or alternatively add it to
the front of the test suite name.

For example, the following tests won't be run by googletest, even though they
will still be compiled:
```c++
// Tests that Foo does Abc.
TEST(FooTest, DISABLED_DoesAbc) { ... }

class DISABLED_BarTest : public ::testing::Test { ... };

// Tests that Bar does Xyz.
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
```
NOTE: This feature should only be used for temporary pain-relief. You still
have to fix the disabled tests at a later date. As a reminder, googletest will
print a banner warning you if a test program contains any disabled tests.

TIP: You can easily count the number of disabled tests you have using `gsearch`
and/or `grep`. This number can be used as a metric for improving your test
quality.
#### Temporarily Enabling Disabled Tests

To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag or set the
`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.
You can combine this with the `--gtest_filter` flag to further select which
disabled tests to run.
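For example, to run only the disabled test from the earlier example (assuming
it lives in `foo_test`):

```
$ foo_test --gtest_also_run_disabled_tests --gtest_filter=FooTest.DISABLED_DoesAbc
```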
### Repeating the Tests

Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
will fail only 1% of the time, making it rather hard to reproduce the bug under
a debugger. This can be a major source of frustration.

The `--gtest_repeat` flag allows you to repeat all (or selected) test methods
in a program many times. Hopefully, a flaky test will eventually fail and give
you a chance to debug. Here's how to use it:
```
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.

$ foo_test --gtest_repeat=-1
A negative count means repeating forever.

$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure.  This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.

$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.*
Repeat the tests whose name matches the filter 1000 times.
```
If your test program contains
[global set-up/tear-down](#global-set-up-and-tear-down) code, it will be
repeated in each iteration as well, as the flakiness may be in it. You can also
specify the repeat count by setting the `GTEST_REPEAT` environment variable.
### Shuffling the Tests

You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
environment variable to `1`) to run the tests in a program in a random order.
This helps to reveal bad dependencies between tests.

By default, googletest uses a random seed calculated from the current time.
Therefore you'll get a different order every time. The console output includes
the random seed value, such that you can reproduce an order-related test
failure later. To specify the random seed explicitly, use the
`--gtest_random_seed=SEED` flag (or set the `GTEST_RANDOM_SEED` environment
variable), where `SEED` is an integer in the range [0, 99999]. The seed value 0
is special: it tells googletest to do the default behavior of calculating the
seed from the current time.
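For example, if a shuffled run that used seed 12345 (a made-up value; the
actual seed is printed in the console output) exposed an ordering bug, the same
order can be reproduced with:

```
$ foo_test --gtest_shuffle --gtest_random_seed=12345
```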
If you combine this with `--gtest_repeat=N`, googletest will pick a different
random seed and re-shuffle the tests in each iteration.

### Controlling Test Output

#### Colored Terminal Output

googletest can use colors in its terminal output to make it easier to spot the
important information:

<font color="green">[----------]</font><font color="black"> 1 test from
FooTest</font><br/>
<font color="green">[ RUN      ]</font><font color="black">
FooTest.DoesAbc</font><br/>
<font color="green">[       OK ]</font><font color="black">
FooTest.DoesAbc </font><br/>
<font color="green">[----------]</font><font color="black">
2 tests from BarTest</font><br/>
<font color="green">[ RUN      ]</font><font color="black">
BarTest.HasXyzProperty </font><br/>
<font color="green">[       OK ]</font><font color="black">
BarTest.HasXyzProperty</font><br/>
<font color="green">[ RUN      ]</font><font color="black">
BarTest.ReturnsTrueOnSuccess ... some error messages ...</font><br/>
<font color="red">[  FAILED  ]</font><font color="black">
BarTest.ReturnsTrueOnSuccess ...</font><br/>
<font color="green">[==========]</font><font color="black">
30 tests from 14 test suites ran.</font><br/>
<font color="green">[  PASSED  ]</font><font color="black">
28 tests.</font><br/>
<font color="red">[  FAILED  ]</font><font color="black">
2 tests, listed below:</font><br/>
<font color="red">[  FAILED  ]</font><font color="black">
BarTest.ReturnsTrueOnSuccess</font><br/>
<font color="red">[  FAILED  ]</font><font color="black">
AnotherTest.DoesXyz<br/>
<br/>
 2 FAILED TESTS</font>
You can set the `GTEST_COLOR` environment variable or the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let googletest decide. When the value is `auto`, googletest
will use colors if and only if the output goes to a terminal and (on
non-Windows platforms) the `TERM` environment variable is set to `xterm` or
`xterm-color`.
#### Suppressing the Elapsed Time

By default, googletest prints the time it takes to run each test. To disable
that, run the test program with the `--gtest_print_time=0` command line flag,
or set the `GTEST_PRINT_TIME` environment variable to `0`.
#### Suppressing UTF-8 Text Output

In case of assertion failures, googletest prints expected and actual values of
type `string` both as hex-encoded strings as well as in readable UTF-8 text if
they contain valid non-ASCII UTF-8 characters. If you want to suppress the
UTF-8 text because, for example, you don't have a UTF-8 compatible output
medium, run the test program with `--gtest_print_utf8=0` or set the
`GTEST_PRINT_UTF8` environment variable to `0`.
#### Generating an XML Report

googletest can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can
help you identify slow tests. The report is also used by the http://unittest
dashboard to show per-test-method error messages.

To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will
create the file at the given location. You can also just use the string
`"xml"`, in which case the output can be found in the `test_detail.xml` file in
the current directory.
If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), googletest will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
over from a previous run), googletest will pick a different name (e.g.
`foo_test_1.xml`) to avoid overwriting it.
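For example (paths are illustrative), the first command below names the output
file explicitly, while the second names a directory, so the report ends up in
`reports/foo_test.xml`:

```
$ foo_test --gtest_output=xml:reports/foo_test_report.xml
$ foo_test --gtest_output=xml:reports/
```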
The report is based on the `junitreport` Ant task. Since that format was
originally intended for Java, a little interpretation is required to make it
apply to googletest tests, as shown here:
```xml
<testsuites name="AllTests" ...>
  <testsuite name="test_case_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```
*   The root `<testsuites>` element corresponds to the entire test program.
*   `<testsuite>` elements correspond to googletest test suites.
*   `<testcase>` elements correspond to googletest test functions.

For instance, the following program
```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015">
    <testcase name="Addition" status="run" time="0.007" classname="">
      <failure message="Value of: add(1, 1)&#x0A;  Actual: 3&#x0A;Expected: 2" type=""/>
      <failure message="Value of: add(1, -1)&#x0A;  Actual: 1&#x0A;Expected: 0" type=""/>
    </testcase>
    <testcase name="Subtraction" status="run" time="0.005" classname="" />
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005">
    <testcase name="NonContradiction" status="run" time="0.005" classname="" />
  </testsuite>
</testsuites>
```
Things to note:

*   The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells
    how many test functions the googletest program or test suite contains,
    while the `failures` attribute tells how many of them failed.

*   The `time` attribute expresses the duration of the test, test suite, or
    entire test program in seconds.

*   The `timestamp` attribute records the local date and time of the test
    execution.

*   Each `<failure>` element corresponds to a single failed googletest
    assertion.
#### Generating a JSON Report

googletest can also emit a JSON report as an alternative format to XML. To
generate the JSON report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"json:path_to_output_file"`, which will
create the file at the given location. You can also just use the string
`"json"`, in which case the output can be found in the `test_detail.json` file
in the current directory.
The report format conforms to the following JSON Schema:

```json
{
  "$schema": "http://json-schema.org/schema#",
  "type": "object",
  "definitions": {
    "TestCase": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "tests": { "type": "integer" },
        "failures": { "type": "integer" },
        "disabled": { "type": "integer" },
        "time": { "type": "string" },
        "testsuite": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/TestInfo"
          }
        }
      }
    },
    "TestInfo": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "status": {
          "type": "string",
          "enum": ["RUN", "NOTRUN"]
        },
        "time": { "type": "string" },
        "classname": { "type": "string" },
        "failures": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/Failure"
          }
        }
      }
    },
    "Failure": {
      "type": "object",
      "properties": {
        "failures": { "type": "string" },
        "type": { "type": "string" }
      }
    }
  },
  "properties": {
    "tests": { "type": "integer" },
    "failures": { "type": "integer" },
    "disabled": { "type": "integer" },
    "errors": { "type": "integer" },
    "timestamp": {
      "type": "string",
      "format": "date-time"
    },
    "time": { "type": "string" },
    "name": { "type": "string" },
    "testsuites": {
      "type": "array",
      "items": {
        "$ref": "#/definitions/TestCase"
      }
    }
  },
  "required": ["tests", "failures", "disabled", "errors", "timestamp", "time",
               "name", "testsuites"]
}
```
Equivalently, the report conforms to the following Proto3 definition, rendered
using the
[JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json):

```proto
syntax = "proto3";

package googletest;

import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";

message UnitTest {
  int32 tests = 1;
  int32 failures = 2;
  int32 disabled = 3;
  int32 errors = 4;
  google.protobuf.Timestamp timestamp = 5;
  google.protobuf.Duration time = 6;
  string name = 7;
  repeated TestCase testsuites = 8;
}

message TestCase {
  string name = 1;
  int32 tests = 2;
  int32 failures = 3;
  int32 disabled = 4;
  int32 errors = 5;
  google.protobuf.Duration time = 6;
  repeated TestInfo testsuite = 7;
}

message TestInfo {
  string name = 1;
  string status = 2;  // "RUN" or "NOTRUN"
  google.protobuf.Duration time = 3;
  string classname = 4;
  message Failure {
    string failures = 1;
    string type = 2;
  }
  repeated Failure failures = 5;
}
```
For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:
2487 "timestamp": "2011-10-31T18:52:42Z",
2504 "message": "Value of: add(1, 1)\n Actual: 3\nExpected: 2",
2508 "message": "Value of: add(1, -1)\n Actual: 1\nExpected: 0",
2514 "name": "Subtraction",
2522 "name": "LogicTest",
2529 "name": "NonContradiction",
IMPORTANT: The exact format of the JSON document is subject to change.

### Controlling How Failures Are Reported

#### Turning Assertion Failures into Break-Points

When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. googletest's *break-on-failure* mode supports this behavior.

To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
command line flag.
#### Disabling Catching Test-Thrown Exceptions

googletest can be used either with or without exceptions enabled. If a test
throws a C++ exception or (on Windows) a structured exception (SEH), by default
googletest catches it, reports it as a test failure, and continues with the
next test method. This maximizes the coverage of a test run. Also, on Windows
an uncaught exception will cause a pop-up window, so catching the exceptions
allows you to run the tests automatically.

When debugging the test failures, however, you may instead want the exceptions
to be handled by the debugger, such that you can examine the call stack when an
exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when
running the tests.