1 # Advanced googletest Topics
5 Now that you have read the [googletest Primer](primer.md) and learned how to
6 write tests using googletest, it's time to learn some new tricks. This document
7 will show you more assertions as well as how to construct complex failure
8 messages, propagate fatal failures, reuse and speed up your test fixtures, and
9 use various flags with your tests.
## More Assertions

This section covers some less frequently used, but still significant, assertions.
16 ### Explicit Success and Failure
18 See [Explicit Success and Failure](reference/assertions.md#success-failure) in
19 the Assertions Reference.
21 ### Exception Assertions
See [Exception Assertions](reference/assertions.md#exceptions) in the Assertions Reference.
26 ### Predicate Assertions for Better Error Messages
Even though googletest has a rich set of assertions, they can never be complete, as it's impossible (and not a good idea) to anticipate all the scenarios a user might run into. Therefore, sometimes a user has to use `EXPECT_TRUE()` to check a complex expression, for lack of a better macro. This has the problem of not showing you the values of the parts of the expression, making it hard to understand what went wrong. As a workaround, some users choose to construct the failure message by themselves, streaming it into `EXPECT_TRUE()`. However, this is awkward especially when the expression has side-effects or is expensive to evaluate.
38 googletest gives you three different options to solve this problem:
40 #### Using an Existing Boolean Function
42 If you already have a function or functor that returns `bool` (or a type that
43 can be implicitly converted to `bool`), you can use it in a *predicate
44 assertion* to get the function arguments printed for free. See
45 [`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) in the Assertions
46 Reference for details.
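For instance, here is a minimal sketch of the kind of output you get for free; `MutuallyPrime` is a hypothetical helper, and the exact failure wording may vary by version:

```c++
#include <numeric>  // std::gcd requires C++17.

// Hypothetical predicate: true if m and n share no divisor other than 1.
bool MutuallyPrime(int m, int n) { return std::gcd(m, n) == 1; }

TEST(PredicateTest, PrintsArguments) {
  int b = 4, c = 10;
  EXPECT_PRED2(MutuallyPrime, 3, 5);  // Succeeds.
  // On failure, each argument expression and its value is printed, roughly:
  //   MutuallyPrime(b, c) evaluates to false, where b is 4, c is 10
  EXPECT_PRED2(MutuallyPrime, b, c);  // Fails, printing b and c.
}
```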
48 #### Using a Function That Returns an AssertionResult
50 While `EXPECT_PRED*()` and friends are handy for a quick job, the syntax is not
51 satisfactory: you have to use different macros for different arities, and it
feels more like Lisp than C++. The `::testing::AssertionResult` class solves this problem.
55 An `AssertionResult` object represents the result of an assertion (whether it's
56 a success or a failure, and an associated message). You can create an
57 `AssertionResult` using one of these factory functions:
```c++
namespace testing {

// Returns an AssertionResult object to indicate that an assertion has
// succeeded.
AssertionResult AssertionSuccess();

// Returns an AssertionResult object to indicate that an assertion has
// failed.
AssertionResult AssertionFailure();

}
```
You can then use the `<<` operator to stream messages to the `AssertionResult` object.
76 To provide more readable messages in Boolean assertions (e.g. `EXPECT_TRUE()`),
77 write a predicate function that returns `AssertionResult` instead of `bool`. For
78 example, if you define `IsEven()` as:
```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess();
  else
    return testing::AssertionFailure() << n << " is odd";
}
```

instead of:

```c++
bool IsEven(int n) {
  return (n % 2) == 0;
}
```
97 the failed assertion `EXPECT_TRUE(IsEven(Fib(4)))` will print:
```
Value of: IsEven(Fib(4))
  Actual: false (3 is odd)
Expected: true
```
105 instead of a more opaque
```
Value of: IsEven(Fib(4))
  Actual: false
Expected: true
```
113 If you want informative messages in `EXPECT_FALSE` and `ASSERT_FALSE` as well
114 (one third of Boolean assertions in the Google code base are negative ones), and
are fine with making the predicate slower in the success case, you can supply a success message in `IsEven()` as well:
```c++
testing::AssertionResult IsEven(int n) {
  if ((n % 2) == 0)
    return testing::AssertionSuccess() << n << " is even";
  else
    return testing::AssertionFailure() << n << " is odd";
}
```
127 Then the statement `EXPECT_FALSE(IsEven(Fib(6)))` will print
```
Value of: IsEven(Fib(6))
  Actual: true (8 is even)
Expected: false
```
135 #### Using a Predicate-Formatter
137 If you find the default message generated by
138 [`EXPECT_PRED*`](reference/assertions.md#EXPECT_PRED) and
139 [`EXPECT_TRUE`](reference/assertions.md#EXPECT_TRUE) unsatisfactory, or some
140 arguments to your predicate do not support streaming to `ostream`, you can
141 instead use *predicate-formatter assertions* to *fully* customize how the
142 message is formatted. See
143 [`EXPECT_PRED_FORMAT*`](reference/assertions.md#EXPECT_PRED_FORMAT) in the
144 Assertions Reference for details.
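As a rough sketch of the shape such a function takes (continuing the hypothetical `MutuallyPrime` example; the argument-expression strings come first, followed by the values):

```c++
#include <numeric>  // std::gcd requires C++17.

// A predicate-formatter receives the text of each argument expression,
// then the argument values, and builds the entire failure message itself.
testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                             const char* n_expr,
                                             int m, int n) {
  if (std::gcd(m, n) == 1) return testing::AssertionSuccess();
  return testing::AssertionFailure()
         << m_expr << " and " << n_expr << " (" << m << " and " << n
         << ") are not mutually prime";
}

// Usage: EXPECT_PRED_FORMAT2(AssertMutuallyPrime, b, c);
```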
146 ### Floating-Point Comparison
148 See [Floating-Point Comparison](reference/assertions.md#floating-point) in the
149 Assertions Reference.
151 #### Floating-Point Predicate-Format Functions
153 Some floating-point operations are useful, but not that often used. In order to
154 avoid an explosion of new macros, we provide them as predicate-format functions
155 that can be used in the predicate assertion macro
[`EXPECT_PRED_FORMAT2`](reference/assertions.md#EXPECT_PRED_FORMAT), for example:
```c++
using ::testing::FloatLE;
using ::testing::DoubleLE;
...
EXPECT_PRED_FORMAT2(FloatLE, val1, val2);
EXPECT_PRED_FORMAT2(DoubleLE, val1, val2);
```
The above code verifies that `val1` is less than, or approximately equal to, `val2`.
170 ### Asserting Using gMock Matchers
See [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) in the Assertions Reference.
175 ### More String Assertions
(Please read the [previous](#asserting-using-gmock-matchers) section first if you haven't.)
You can use the gMock [string matchers](reference/matchers.md#string-matchers) with [`EXPECT_THAT`](reference/assertions.md#EXPECT_THAT) to do more string comparison tricks (substring, prefix, suffix, regular expression, etc.). For example:
```c++
using ::testing::HasSubstr;
using ::testing::MatchesRegex;

ASSERT_THAT(foo_string, HasSubstr("needle"));
EXPECT_THAT(bar_string, MatchesRegex("\\w*\\d+"));
```
193 ### Windows HRESULT assertions
195 See [Windows HRESULT Assertions](reference/assertions.md#HRESULT) in the
196 Assertions Reference.
### Type Assertions

You can call the function
```c++
::testing::StaticAssertTypeEq<T1, T2>();
```
206 to assert that types `T1` and `T2` are the same. The function does nothing if
207 the assertion is satisfied. If the types are different, the function call will
fail to compile, and the compiler error message will say that `T1 and T2 are not the
209 same type` and most likely (depending on the compiler) show you the actual
210 values of `T1` and `T2`. This is mainly useful inside template code.
212 **Caveat**: When used inside a member function of a class template or a function
213 template, `StaticAssertTypeEq<T1, T2>()` is effective only if the function is
214 instantiated. For example, given:
```c++
template <typename T> class Foo {
 public:
  void Bar() { testing::StaticAssertTypeEq<int, T>(); }
};
```
the code:

```c++
void Test1() { Foo<bool> foo; }
```
229 will not generate a compiler error, as `Foo<bool>::Bar()` is never actually
230 instantiated. Instead, you need:
```c++
void Test2() { Foo<bool> foo; foo.Bar(); }
```
236 to cause a compiler error.
238 ### Assertion Placement
240 You can use assertions in any C++ function. In particular, it doesn't have to be
241 a method of the test fixture class. The one constraint is that assertions that
242 generate a fatal failure (`FAIL*` and `ASSERT_*`) can only be used in
243 void-returning functions. This is a consequence of Google's not using
244 exceptions. By placing it in a non-void function you'll get a confusing compile
245 error like `"error: void value not ignored as it ought to be"` or `"cannot
246 initialize return object of type 'bool' with an rvalue of type 'void'"` or
247 `"error: no viable conversion from 'void' to 'string'"`.
249 If you need to use fatal assertions in a function that returns non-void, one
250 option is to make the function return the value in an out parameter instead. For
251 example, you can rewrite `T2 Foo(T1 x)` to `void Foo(T1 x, T2* result)`. You
252 need to make sure that `*result` contains some sensible value even when the
253 function returns prematurely. As the function now returns `void`, you can use
254 any assertion inside of it.
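A minimal sketch of that rewrite, assuming a hypothetical `GetAnswer` helper:

```c++
// Before: `int GetAnswer(int x)` could not contain fatal assertions, because
// ASSERT_* may only be used in void-returning functions.

// After: the result travels through an out parameter.
void GetAnswer(int x, int* result) {
  *result = 0;      // Keep *result sensible even if we return early.
  ASSERT_GT(x, 0);  // On failure, returns from GetAnswer() here.
  *result = x * 2;
}
```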
256 If changing the function's type is not an option, you should just use assertions
257 that generate non-fatal failures, such as `ADD_FAILURE*` and `EXPECT_*`.
260 NOTE: Constructors and destructors are not considered void-returning functions,
261 according to the C++ language specification, and so you may not use fatal
262 assertions in them; you'll get a compilation error if you try. Instead, either
263 call `abort` and crash the entire test executable, or put the fatal assertion in
264 a `SetUp`/`TearDown` function; see
[constructor/destructor vs. `SetUp`/`TearDown`](faq.md#CtorVsSetUp) for more details.
267 {: .callout .warning}
268 WARNING: A fatal assertion in a helper function (private void-returning method)
269 called from a constructor or destructor does not terminate the current test, as
270 your intuition might suggest: it merely returns from the constructor or
271 destructor early, possibly leaving your object in a partially-constructed or
272 partially-destructed state! You almost certainly want to `abort` or use
273 `SetUp`/`TearDown` instead.
275 ## Skipping test execution
277 Related to the assertions `SUCCEED()` and `FAIL()`, you can prevent further test
278 execution at runtime with the `GTEST_SKIP()` macro. This is useful when you need
279 to check for preconditions of the system under test during runtime and skip
280 tests in a meaningful way.
282 `GTEST_SKIP()` can be used in individual test cases or in the `SetUp()` methods
283 of classes derived from either `::testing::Environment` or `::testing::Test`.
```c++
TEST(SkipTest, DoesSkip) {
  GTEST_SKIP() << "Skipping single test";
  EXPECT_EQ(0, 1);  // Won't fail; it won't be executed
}

class SkipFixture : public ::testing::Test {
 protected:
  void SetUp() override {
    GTEST_SKIP() << "Skipping all tests for this fixture";
  }
};

// Tests for SkipFixture won't be executed.
TEST_F(SkipFixture, SkipsOneTest) {
  EXPECT_EQ(5, 7);  // Won't fail
}
```
305 As with assertion macros, you can stream a custom message into `GTEST_SKIP()`.
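The same mechanism works for a whole test program. A sketch, assuming a hypothetical `ServiceIsAvailable()` precondition check:

```c++
// Skips every test in the binary when a required external service is absent.
class ServiceEnvironment : public ::testing::Environment {
 public:
  void SetUp() override {
    if (!ServiceIsAvailable()) {  // Hypothetical precondition check.
      GTEST_SKIP() << "External service unavailable; skipping all tests";
    }
  }
};
```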
307 ## Teaching googletest How to Print Your Values
309 When a test assertion such as `EXPECT_EQ` fails, googletest prints the argument
310 values to help you debug. It does this using a user-extensible value printer.
312 This printer knows how to print built-in C++ types, native arrays, STL
313 containers, and any type that supports the `<<` operator. For other types, it
prints the raw bytes in the value and hopes that you, the user, can figure it out.
316 As mentioned earlier, the printer is *extensible*. That means you can teach it
317 to do a better job at printing your particular type than to dump the bytes. To
318 do that, define `<<` for your type:
```c++
#include <ostream>

namespace foo {

class Bar {  // We want googletest to be able to print instances of this.
  ...
  // Create a free inline friend function.
  friend std::ostream& operator<<(std::ostream& os, const Bar& bar) {
    return os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that the
// << operator is defined in the SAME namespace that defines Bar. C++'s look-up
// rules rely on that.
std::ostream& operator<<(std::ostream& os, const Bar& bar) {
  return os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```
343 Sometimes, this might not be an option: your team may consider it bad style to
344 have a `<<` operator for `Bar`, or `Bar` may already have a `<<` operator that
345 doesn't do what you want (and you cannot change it). If so, you can instead
346 define a `PrintTo()` function like this:
```c++
#include <ostream>

namespace foo {

class Bar {
  ...
  friend void PrintTo(const Bar& bar, std::ostream* os) {
    *os << bar.DebugString();  // whatever needed to print bar to os
  }
};

// If you can't declare the function in the class it's important that PrintTo()
// is defined in the SAME namespace that defines Bar. C++'s look-up rules rely
// on that.
void PrintTo(const Bar& bar, std::ostream* os) {
  *os << bar.DebugString();  // whatever needed to print bar to os
}

}  // namespace foo
```
If you have defined both `<<` and `PrintTo()`, the latter is used as far as googletest is concerned. This allows you to customize how the value appears in googletest's output without affecting code that relies on the behavior of its `<<` operator.
375 If you want to print a value `x` using googletest's value printer yourself, just
376 call `::testing::PrintToString(x)`, which returns an `std::string`:
```c++
std::vector<std::pair<Bar, int>> bar_ints = GetBarIntVector();

EXPECT_TRUE(IsCorrectBarIntVector(bar_ints))
    << "bar_ints = " << testing::PrintToString(bar_ints);
```
## Death Tests

In many applications, there are assertions that can cause application failure if
388 a condition is not met. These consistency checks, which ensure that the program
389 is in a known good state, are there to fail at the earliest possible time after
390 some program state is corrupted. If the assertion checks the wrong condition,
391 then the program may proceed in an erroneous state, which could lead to memory
392 corruption, security holes, or worse. Hence it is vitally important to test that
393 such assertion statements work as expected.
395 Since these precondition checks cause the processes to die, we call such tests
396 _death tests_. More generally, any test that checks that a program terminates
397 (except by throwing an exception) in an expected fashion is also a death test.
399 Note that if a piece of code throws an exception, we don't consider it "death"
400 for the purpose of death tests, as the caller of the code could catch the
401 exception and avoid the crash. If you want to verify exceptions thrown by your
402 code, see [Exception Assertions](#ExceptionAssertions).
404 If you want to test `EXPECT_*()/ASSERT_*()` failures in your test code, see
405 ["Catching" Failures](#catching-failures).
407 ### How to Write a Death Test
409 GoogleTest provides assertion macros to support death tests. See
[Death Assertions](reference/assertions.md#death) in the Assertions Reference for details.
To write a death test, simply use one of the macros inside your test function. For example,
```c++
TEST(MyDeathTest, Foo) {
  // This death test uses a compound statement.
  ASSERT_DEATH({
    int n = 5;
    Foo(&n);
  }, "Error on line .* of Foo()");
}

TEST(MyDeathTest, NormalExit) {
  EXPECT_EXIT(NormalExit(), testing::ExitedWithCode(0), "Success");
}

TEST(MyDeathTest, KillProcess) {
  EXPECT_EXIT(KillProcess(), testing::KilledBySignal(SIGKILL),
              "Sending myself unblockable signal");
}
```

verifies that:
437 * calling `Foo(5)` causes the process to die with the given error message,
438 * calling `NormalExit()` causes the process to print `"Success"` to stderr and
439 exit with exit code 0, and
440 * calling `KillProcess()` kills the process with signal `SIGKILL`.
The test function body may contain other assertions and statements as well, if necessary.
445 Note that a death test only cares about three things:
447 1. does `statement` abort or exit the process?
448 2. (in the case of `ASSERT_EXIT` and `EXPECT_EXIT`) does the exit status
449 satisfy `predicate`? Or (in the case of `ASSERT_DEATH` and `EXPECT_DEATH`)
450 is the exit status non-zero? And
451 3. does the stderr output match `matcher`?
453 In particular, if `statement` generates an `ASSERT_*` or `EXPECT_*` failure, it
will **not** cause the death test to fail, as googletest assertions don't abort the process.
457 ### Death Test Naming
459 {: .callout .important}
460 IMPORTANT: We strongly recommend you to follow the convention of naming your
461 **test suite** (not test) `*DeathTest` when it contains a death test, as
462 demonstrated in the above example. The
463 [Death Tests And Threads](#death-tests-and-threads) section below explains why.
465 If a test fixture class is shared by normal tests and death tests, you can use
466 `using` or `typedef` to introduce an alias for the fixture class and avoid
467 duplicating its code:
```c++
class FooTest : public testing::Test { ... };

using FooDeathTest = FooTest;

TEST_F(FooTest, DoesThis) {
  // normal test
}

TEST_F(FooDeathTest, DoesThat) {
  // death test
}
```
483 ### Regular Expression Syntax
485 When built with Bazel and using Abseil, googletest uses the
486 [RE2](https://github.com/google/re2/wiki/Syntax) syntax. Otherwise, for POSIX
487 systems (Linux, Cygwin, Mac), googletest uses the
488 [POSIX extended regular expression](http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap09.html#tag_09_04)
489 syntax. To learn about POSIX syntax, you may want to read this
490 [Wikipedia entry](http://en.wikipedia.org/wiki/Regular_expression#POSIX_Extended_Regular_Expressions).
492 On Windows, googletest uses its own simple regular expression implementation. It
493 lacks many features. For example, we don't support union (`"x|y"`), grouping
494 (`"(xy)"`), brackets (`"[xy]"`), and repetition count (`"x{5,7}"`), among
495 others. Below is what we do support (`A` denotes a literal character, period
(`.`), or a single `\\ ` escape sequence; `x` and `y` denote regular expressions.):

Expression | Meaning
---------- | --------------------------------------------------------------
501 `c` | matches any literal character `c`
502 `\\d` | matches any decimal digit
`\\D` | matches any character that's not a decimal digit
`\\f` | matches `\f`
`\\n` | matches `\n`
`\\r` | matches `\r`
`\\s` | matches any ASCII whitespace, including `\n`
`\\S` | matches any character that's not a whitespace
`\\t` | matches `\t`
`\\v` | matches `\v`
`\\w` | matches any letter, `_`, or decimal digit
512 `\\W` | matches any character that `\\w` doesn't match
513 `\\c` | matches any literal character `c`, which must be a punctuation
514 `.` | matches any single character except `\n`
515 `A?` | matches 0 or 1 occurrences of `A`
516 `A*` | matches 0 or many occurrences of `A`
517 `A+` | matches 1 or many occurrences of `A`
518 `^` | matches the beginning of a string (not that of each line)
519 `$` | matches the end of a string (not that of each line)
520 `xy` | matches `x` followed by `y`
522 To help you determine which capability is available on your system, googletest
523 defines macros to govern which regular expression it is using. The macros are:
524 `GTEST_USES_SIMPLE_RE=1` or `GTEST_USES_POSIX_RE=1`. If you want your death
tests to work in all cases, you can either `#if` on these macros or use the more limited syntax only.
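For instance, a sketch of branching on these macros; `DoThis()` is hypothetical:

```c++
TEST(MyDeathTest, MatchesErrorMessage) {
#if GTEST_USES_POSIX_RE
  // Full POSIX extended syntax, including alternation, is available.
  EXPECT_DEATH(DoThis(), "Error: (disk|network) failure");
#else  // GTEST_USES_SIMPLE_RE
  // The simple syntax has no alternation; use a more permissive pattern.
  EXPECT_DEATH(DoThis(), "Error: \\w+ failure");
#endif
}
```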
### How It Works

See [Death Assertions](reference/assertions.md#death) in the Assertions Reference.
533 ### Death Tests And Threads
535 The reason for the two death test styles has to do with thread safety. Due to
536 well-known problems with forking in the presence of threads, death tests should
537 be run in a single-threaded context. Sometimes, however, it isn't feasible to
538 arrange that kind of environment. For example, statically-initialized modules
539 may start threads before main is ever reached. Once threads have been created,
540 it may be difficult or impossible to clean them up.
542 googletest has three features intended to raise awareness of threading issues.
1. A warning is emitted if multiple threads are running when a death test is encountered.
2. Test suites with a name ending in "DeathTest" are run before all other tests.
548 3. It uses `clone()` instead of `fork()` to spawn the child process on Linux
549 (`clone()` is not available on Cygwin and Mac), as `fork()` is more likely
550 to cause the child to hang when the parent process has multiple threads.
552 It's perfectly fine to create threads inside a death test statement; they are
553 executed in a separate process and cannot affect the parent.
555 ### Death Test Styles
557 The "threadsafe" death test style was introduced in order to help mitigate the
558 risks of testing in a possibly multithreaded environment. It trades increased
559 test execution time (potentially dramatically so) for improved thread safety.
561 The automated testing framework does not set the style flag. You can choose a
562 particular style of death tests by setting the flag programmatically:
```c++
GTEST_FLAG_SET(death_test_style, "threadsafe");
```
568 You can do this in `main()` to set the style for all death tests in the binary,
569 or in individual tests. Recall that flags are saved before running each test and
570 restored afterwards, so you need not do that yourself. For example:
```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  GTEST_FLAG_SET(death_test_style, "fast");
  return RUN_ALL_TESTS();
}

TEST(MyDeathTest, TestOne) {
  GTEST_FLAG_SET(death_test_style, "threadsafe");
  // This test is run in the "threadsafe" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}

TEST(MyDeathTest, TestTwo) {
  // This test is run in the "fast" style:
  ASSERT_DEATH(ThisShouldDie(), "");
}
```
### Caveats

The `statement` argument of `ASSERT_EXIT()` can be any valid C++ statement. If
594 it leaves the current function via a `return` statement or by throwing an
595 exception, the death test is considered to have failed. Some googletest macros
may return from the current function (e.g. `ASSERT_TRUE()`), so be sure to avoid them in `statement`.
599 Since `statement` runs in the child process, any in-memory side effect (e.g.
600 modifying a variable, releasing memory, etc) it causes will *not* be observable
601 in the parent process. In particular, if you release memory in a death test,
602 your program will fail the heap check as the parent process will never see the
603 memory reclaimed. To solve this problem, you can
605 1. try not to free memory in a death test;
606 2. free the memory again in the parent process; or
607 3. do not use the heap checker in your program.
609 Due to an implementation detail, you cannot place multiple death test assertions
on the same line; otherwise, compilation will fail with an unobvious error message.
613 Despite the improved thread safety afforded by the "threadsafe" style of death
614 test, thread problems such as deadlock are still possible in the presence of
615 handlers registered with `pthread_atfork(3)`.
617 ## Using Assertions in Sub-routines
620 Note: If you want to put a series of test assertions in a subroutine to check
621 for a complex condition, consider using
622 [a custom GMock matcher](gmock_cook_book.md#NewMatchers) instead. This lets you
623 provide a more readable error message in case of failure and avoid all of the
624 issues described below.
626 ### Adding Traces to Assertions
628 If a test sub-routine is called from several places, when an assertion inside it
629 fails, it can be hard to tell which invocation of the sub-routine the failure is
630 from. You can alleviate this problem using extra logging or custom failure
631 messages, but that usually clutters up your tests. A better solution is to use
632 the `SCOPED_TRACE` macro or the `ScopedTrace` utility:
```c++
SCOPED_TRACE(message);
```

```c++
ScopedTrace trace("file_path", line_number, message);
```
where `message` can be anything streamable to `std::ostream`. The `SCOPED_TRACE` macro causes the current file name, line number, and the given message to be included in every failure message. `ScopedTrace` accepts explicit file name and line number as arguments, which is useful for writing test helpers. The effect will be undone when control leaves the current lexical scope.
For example,

```c++
10: void Sub1(int n) {
11:   EXPECT_EQ(Bar(n), 1);
12:   EXPECT_EQ(Bar(n + 1), 2);
13: }
14:
15: TEST(FooTest, Bar) {
16:   {
17:     SCOPED_TRACE("A");  // This trace point will be included in
18:                         // every failure in this scope.
19:     Sub1(1);
20:   }
21:   // Now it won't.
22:   Sub1(9);
23: }
```
667 could result in messages like these:
```
path/to/foo_test.cc:11: Failure
Value of: Bar(n)
Expected: 1
  Actual: 2
Google Test trace:
path/to/foo_test.cc:17: A

path/to/foo_test.cc:12: Failure
Value of: Bar(n + 1)
Expected: 2
  Actual: 3
Google Test trace:
path/to/foo_test.cc:17: A
```
683 Without the trace, it would've been difficult to know which invocation of
684 `Sub1()` the two failures come from respectively. (You could add an extra
message to each assertion in `Sub1()` to indicate the value of `n`, but that's tedious.)
688 Some tips on using `SCOPED_TRACE`:
690 1. With a suitable message, it's often enough to use `SCOPED_TRACE` at the
691 beginning of a sub-routine, instead of at each call site.
2. When calling sub-routines inside a loop, make the loop iterator part of the message in `SCOPED_TRACE` so that you can know which iteration the failure is from (see the sketch after this list).
695 3. Sometimes the line number of the trace point is enough for identifying the
696 particular invocation of a sub-routine. In this case, you don't have to
697 choose a unique message for `SCOPED_TRACE`. You can simply use `""`.
698 4. You can use `SCOPED_TRACE` in an inner scope when there is one in the outer
699 scope. In this case, all active trace points will be included in the failure
messages, in the reverse order in which they are encountered.
701 5. The trace dump is clickable in Emacs - hit `return` on a line number and
702 you'll be taken to that line in the source file!
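A quick sketch of tip 2, reusing the `Sub1()` helper from above:

```c++
TEST(FooTest, BarInLoop) {
  for (int i = 1; i <= 9; ++i) {
    // Each failure below will carry "i = <value>" in its trace.
    SCOPED_TRACE("i = " + std::to_string(i));
    Sub1(i);
  }
}
```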
704 ### Propagating Fatal Failures
706 A common pitfall when using `ASSERT_*` and `FAIL*` is not understanding that
707 when they fail they only abort the _current function_, not the entire test. For
708 example, the following test will segfault:
```c++
void Subroutine() {
  // Generates a fatal failure and aborts the current function.
  ASSERT_EQ(1, 2);

  // The following won't be executed.
  ...
}

TEST(FooTest, Bar) {
  Subroutine();  // The intended behavior is for the fatal failure
                 // in Subroutine() to abort the entire test.

  // The actual behavior: the function goes on after Subroutine() returns.
  int* p = nullptr;
  *p = 3;  // Segfault!
}
```
To alleviate this, googletest provides three different solutions. You could use either exceptions, the `(ASSERT|EXPECT)_NO_FATAL_FAILURE` assertions, or the `HasFatalFailure()` function. They are described in the following subsections.
734 #### Asserting on Subroutines with an exception
736 The following code can turn ASSERT-failure into an exception:
```c++
class ThrowListener : public testing::EmptyTestEventListener {
  void OnTestPartResult(const testing::TestPartResult& result) override {
    if (result.type() == testing::TestPartResult::kFatalFailure) {
      throw testing::AssertionException(result);
    }
  }
};

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  testing::UnitTest::GetInstance()->listeners().Append(new ThrowListener);
  return RUN_ALL_TESTS();
}
```
753 This listener should be added after other listeners if you have any, otherwise
754 they won't see failed `OnTestPartResult`.
756 #### Asserting on Subroutines
758 As shown above, if your test calls a subroutine that has an `ASSERT_*` failure
in it, the test will continue after the subroutine returns. This may not be what you want.
762 Often people want fatal failures to propagate like exceptions. For that
763 googletest offers the following macros:
765 Fatal assertion | Nonfatal assertion | Verifies
766 ------------------------------------- | ------------------------------------- | --------
767 `ASSERT_NO_FATAL_FAILURE(statement);` | `EXPECT_NO_FATAL_FAILURE(statement);` | `statement` doesn't generate any new fatal failures in the current thread.
769 Only failures in the thread that executes the assertion are checked to determine
the result of this type of assertion. If `statement` creates new threads,
771 failures in these threads are ignored.
Examples:

```c++
ASSERT_NO_FATAL_FAILURE(Foo());

int i;
EXPECT_NO_FATAL_FAILURE({
  i = Bar();
});
```
784 Assertions from multiple threads are currently not supported on Windows.
786 #### Checking for Failures in the Current Test
788 `HasFatalFailure()` in the `::testing::Test` class returns `true` if an
789 assertion in the current test has suffered a fatal failure. This allows
790 functions to catch fatal failures in a sub-routine and return early.
```c++
class Test {
 public:
  ...
  static bool HasFatalFailure();
};
```
The typical usage, which basically simulates the behavior of a thrown exception, is:
```c++
TEST(Foo, Bar) {
  Subroutine();
  // Aborts if Subroutine() had a fatal failure.
  if (HasFatalFailure()) return;

  // The following won't be executed.
  ...
}
```
If `HasFatalFailure()` is used outside of `TEST()`, `TEST_F()`, or a test fixture, you must add the `::testing::Test::` prefix, as in:
```c++
if (testing::Test::HasFatalFailure()) return;
```
821 Similarly, `HasNonfatalFailure()` returns `true` if the current test has at
822 least one non-fatal failure, and `HasFailure()` returns `true` if the current
823 test has at least one failure of either kind.
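A sketch of using these checks to short-circuit a helper; `CheckPartOne()` and `CheckPartTwo()` are hypothetical:

```c++
void CheckAll() {
  CheckPartOne();
  // Skip the second phase if the first one already failed in any way.
  if (testing::Test::HasFailure()) return;
  CheckPartTwo();
}
```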
825 ## Logging Additional Information
827 In your test code, you can call `RecordProperty("key", value)` to log additional
828 information, where `value` can be either a string or an `int`. The *last* value
829 recorded for a key will be emitted to the
[XML output](#generating-an-xml-report) if you specify one. For example, the test code
```c++
TEST_F(WidgetUsageTest, MinAndMaxWidgets) {
  RecordProperty("MaximumWidgets", ComputeMaxUsage());
  RecordProperty("MinimumWidgets", ComputeMinUsage());
}
```
840 will output XML like this:
```xml
<testcase name="MinAndMaxWidgets" file="test.cpp" line="1" status="run" time="0.006" classname="WidgetUsageTest" MaximumWidgets="12" MinimumWidgets="9" />
```
> NOTE:
>
> * `RecordProperty()` is a static member of the `Test` class. Therefore it
852 > needs to be prefixed with `::testing::Test::` if used outside of the
853 > `TEST` body and the test fixture class.
854 > * *`key`* must be a valid XML attribute name, and cannot conflict with the
855 > ones already used by googletest (`name`, `status`, `time`, `classname`,
856 > `type_param`, and `value_param`).
857 > * Calling `RecordProperty()` outside of the lifespan of a test is allowed.
858 > If it's called outside of a test but between a test suite's
859 > `SetUpTestSuite()` and `TearDownTestSuite()` methods, it will be
860 > attributed to the XML element for the test suite. If it's called outside
861 > of all test suites (e.g. in a test environment), it will be attributed to
862 > the top-level XML element.
864 ## Sharing Resources Between Tests in the Same Test Suite
866 googletest creates a new test fixture object for each test in order to make
867 tests independent and easier to debug. However, sometimes tests use resources
that are expensive to set up, making the one-copy-per-test model prohibitively expensive.
871 If the tests don't change the resource, there's no harm in their sharing a
872 single resource copy. So, in addition to per-test set-up/tear-down, googletest
873 also supports per-test-suite set-up/tear-down. To use it:
875 1. In your test fixture class (say `FooTest` ), declare as `static` some member
876 variables to hold the shared resources.
877 2. Outside your test fixture class (typically just below it), define those
878 member variables, optionally giving them initial values.
879 3. In the same test fixture class, define a `static void SetUpTestSuite()`
880 function (remember not to spell it as **`SetupTestSuite`** with a small
881 `u`!) to set up the shared resources and a `static void TearDownTestSuite()`
882 function to tear them down.
884 That's it! googletest automatically calls `SetUpTestSuite()` before running the
885 *first test* in the `FooTest` test suite (i.e. before creating the first
886 `FooTest` object), and calls `TearDownTestSuite()` after running the *last test*
887 in it (i.e. after deleting the last `FooTest` object). In between, the tests can
888 use the shared resources.
890 Remember that the test order is undefined, so your code can't depend on a test
891 preceding or following another. Also, the tests must either not modify the state
892 of any shared resource, or, if they do modify the state, they must restore the
893 state to its original value before passing control to the next test.
895 Note that `SetUpTestSuite()` may be called multiple times for a test fixture
896 class that has derived classes, so you should not expect code in the function
897 body to be run only once. Also, derived classes still have access to shared
898 resources defined as static members, so careful consideration is needed when
899 managing shared resources to avoid memory leaks.
901 Here's an example of per-test-suite set-up and tear-down:
```c++
class FooTest : public testing::Test {
 protected:
  // Per-test-suite set-up.
  // Called before the first test in this test suite.
  // Can be omitted if not needed.
  static void SetUpTestSuite() {
    // Avoid reallocating static objects if called in subclasses of FooTest.
    if (shared_resource_ == nullptr) {
      shared_resource_ = new ...;
    }
  }

  // Per-test-suite tear-down.
  // Called after the last test in this test suite.
  // Can be omitted if not needed.
  static void TearDownTestSuite() {
    delete shared_resource_;
    shared_resource_ = nullptr;
  }

  // You can define per-test set-up logic as usual.
  void SetUp() override { ... }

  // You can define per-test tear-down logic as usual.
  void TearDown() override { ... }

  // Some expensive resource shared by all tests.
  static T* shared_resource_;
};

T* FooTest::shared_resource_ = nullptr;

TEST_F(FooTest, Test1) {
  ... you can refer to shared_resource_ here ...
}

TEST_F(FooTest, Test2) {
  ... you can refer to shared_resource_ here ...
}
```
946 NOTE: Though the above code declares `SetUpTestSuite()` protected, it may
sometimes be necessary to declare it public, such as when using it with `TEST_P`.
950 ## Global Set-Up and Tear-Down
952 Just as you can do set-up and tear-down at the test level and the test suite
953 level, you can also do it at the test program level. Here's how.
955 First, you subclass the `::testing::Environment` class to define a test
956 environment, which knows how to set-up and tear-down:
```c++
class Environment : public ::testing::Environment {
 public:
  ~Environment() override {}

  // Override this to define how to set up the environment.
  void SetUp() override {}

  // Override this to define how to tear down the environment.
  void TearDown() override {}
};
```
971 Then, you register an instance of your environment class with googletest by
972 calling the `::testing::AddGlobalTestEnvironment()` function:
```c++
Environment* AddGlobalTestEnvironment(Environment* env);
```
978 Now, when `RUN_ALL_TESTS()` is called, it first calls the `SetUp()` method of
979 each environment object, then runs the tests if none of the environments
980 reported fatal failures and `GTEST_SKIP()` was not called. `RUN_ALL_TESTS()`
981 always calls `TearDown()` with each environment object, regardless of whether or
982 not the tests were run.
It's OK to register multiple environment objects. Their `SetUp()` methods will be called in the order they are registered, and their `TearDown()` methods in the reverse order.
988 Note that googletest takes ownership of the registered environment objects.
989 Therefore **do not delete them** by yourself.
991 You should call `AddGlobalTestEnvironment()` before `RUN_ALL_TESTS()` is called,
992 probably in `main()`. If you use `gtest_main`, you need to call this before
`main()` starts for it to take effect. One way to do this is to define a global variable like this:
```c++
testing::Environment* const foo_env =
    testing::AddGlobalTestEnvironment(new FooEnvironment);
```
1001 However, we strongly recommend you to write your own `main()` and call
1002 `AddGlobalTestEnvironment()` there, as relying on initialization of global
1003 variables makes the code harder to read and may cause problems when you register
1004 multiple environments from different translation units and the environments have
1005 dependencies among them (remember that the compiler doesn't guarantee the order
1006 in which global variables from different translation units are initialized).
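A minimal sketch of the recommended approach, reusing the `FooEnvironment` name from above:

```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // googletest takes ownership of the environment object; do not delete it.
  testing::AddGlobalTestEnvironment(new FooEnvironment);
  return RUN_ALL_TESTS();
}
```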
1008 ## Value-Parameterized Tests
1010 *Value-parameterized tests* allow you to test your code with different
1011 parameters without writing multiple copies of the same test. This is useful in a
1012 number of situations, for example:
1014 * You have a piece of code whose behavior is affected by one or more
1015 command-line flags. You want to make sure your code performs correctly for
1016 various values of those flags.
1017 * You want to test different implementations of an OO interface.
* You want to test your code over various inputs (a.k.a. data-driven testing). This feature is easy to abuse, so please exercise your good sense when doing it!
1022 ### How to Write Value-Parameterized Tests
1024 To write value-parameterized tests, first you should define a fixture class. It
1025 must be derived from both `testing::Test` and `testing::WithParamInterface<T>`
1026 (the latter is a pure interface), where `T` is the type of your parameter
1027 values. For convenience, you can just derive the fixture class from
1028 `testing::TestWithParam<T>`, which itself is derived from both `testing::Test`
1029 and `testing::WithParamInterface<T>`. `T` can be any copyable type. If it's a
raw pointer, you are responsible for managing the lifespan of the pointed-to values.
1034 NOTE: If your test fixture defines `SetUpTestSuite()` or `TearDownTestSuite()`
they must be declared **public** rather than **protected** in order to use `TEST_P`.
```c++
class FooTest :
    public testing::TestWithParam<const char*> {
  // You can implement all the usual fixture class members here.
  // To access the test parameter, call GetParam() from class
  // TestWithParam<T>.
};

// Or, when you want to add parameters to a pre-existing fixture class:
class BaseTest : public testing::Test {
  ...
};
class BarTest : public BaseTest,
                public testing::WithParamInterface<const char*> {
  ...
};
```
1056 Then, use the `TEST_P` macro to define as many test patterns using this fixture
as you want. The `_P` suffix is for "parameterized" or "pattern", whichever you prefer to think.
```c++
TEST_P(FooTest, DoesBlah) {
  // Inside a test, access the test parameter with the GetParam() method
  // of the TestWithParam<T> class:
  EXPECT_TRUE(foo.Blah(GetParam()));
  ...
}

TEST_P(FooTest, HasBlahBlah) {
  ...
}
```
1073 Finally, you can use the `INSTANTIATE_TEST_SUITE_P` macro to instantiate the
1074 test suite with any set of parameters you want. GoogleTest defines a number of
1075 functions for generating test parameters—see details at
1076 [`INSTANTIATE_TEST_SUITE_P`](reference/testing.md#INSTANTIATE_TEST_SUITE_P) in
1077 the Testing Reference.
1079 For example, the following statement will instantiate tests from the `FooTest`
1080 test suite each with parameter values `"meeny"`, `"miny"`, and `"moe"` using the
1081 [`Values`](reference/testing.md#param-generators) parameter generator:
```c++
INSTANTIATE_TEST_SUITE_P(MeenyMinyMoe,
                         FooTest,
                         testing::Values("meeny", "miny", "moe"));
```
NOTE: The code above must be placed at global or namespace scope, not at function scope.
1093 The first argument to `INSTANTIATE_TEST_SUITE_P` is a unique name for the
1094 instantiation of the test suite. The next argument is the name of the test
1095 pattern, and the last is the
1096 [parameter generator](reference/testing.md#param-generators).
1098 You can instantiate a test pattern more than once, so to distinguish different
1099 instances of the pattern, the instantiation name is added as a prefix to the
1100 actual test suite name. Remember to pick unique prefixes for different
1101 instantiations. The tests from the instantiation above will have these names:
1103 * `MeenyMinyMoe/FooTest.DoesBlah/0` for `"meeny"`
1104 * `MeenyMinyMoe/FooTest.DoesBlah/1` for `"miny"`
1105 * `MeenyMinyMoe/FooTest.DoesBlah/2` for `"moe"`
1106 * `MeenyMinyMoe/FooTest.HasBlahBlah/0` for `"meeny"`
1107 * `MeenyMinyMoe/FooTest.HasBlahBlah/1` for `"miny"`
1108 * `MeenyMinyMoe/FooTest.HasBlahBlah/2` for `"moe"`
1110 You can use these names in [`--gtest_filter`](#running-a-subset-of-the-tests).
1112 The following statement will instantiate all tests from `FooTest` again, each
1113 with parameter values `"cat"` and `"dog"` using the
1114 [`ValuesIn`](reference/testing.md#param-generators) parameter generator:
```c++
const char* pets[] = {"cat", "dog"};
INSTANTIATE_TEST_SUITE_P(Pets, FooTest, testing::ValuesIn(pets));
```
1121 The tests from the instantiation above will have these names:
1123 * `Pets/FooTest.DoesBlah/0` for `"cat"`
1124 * `Pets/FooTest.DoesBlah/1` for `"dog"`
1125 * `Pets/FooTest.HasBlahBlah/0` for `"cat"`
1126 * `Pets/FooTest.HasBlahBlah/1` for `"dog"`
1128 Please note that `INSTANTIATE_TEST_SUITE_P` will instantiate *all* tests in the
1129 given test suite, whether their definitions come before or *after* the
1130 `INSTANTIATE_TEST_SUITE_P` statement.
1132 Additionally, by default, every `TEST_P` without a corresponding
1133 `INSTANTIATE_TEST_SUITE_P` causes a failing test in test suite
1134 `GoogleTestVerification`. If you have a test suite where that omission is not an
1135 error, for example it is in a library that may be linked in for other reasons or
1136 where the list of test cases is dynamic and may be empty, then this check can be
1137 suppressed by tagging the test suite:
```c++
GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(FooTest);
```
1143 You can see [sample7_unittest.cc] and [sample8_unittest.cc] for more examples.
1145 [sample7_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample7_unittest.cc "Parameterized Test example"
1146 [sample8_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample8_unittest.cc "Parameterized Test example with multiple parameters"
1148 ### Creating Value-Parameterized Abstract Tests
1150 In the above, we define and instantiate `FooTest` in the *same* source file.
1151 Sometimes you may want to define value-parameterized tests in a library and let
1152 other people instantiate them later. This pattern is known as *abstract tests*.
1153 As an example of its application, when you are designing an interface you can
1154 write a standard suite of abstract tests (perhaps using a factory function as
1155 the test parameter) that all implementations of the interface are expected to
1156 pass. When someone implements the interface, they can instantiate your suite to
1157 get all the interface-conformance tests for free.
1159 To define abstract tests, you should organize your code like this:
1161 1. Put the definition of the parameterized test fixture class (e.g. `FooTest`)
in a header file, say `foo_param_test.h`. Think of this as *declaring* your abstract tests.
1164 2. Put the `TEST_P` definitions in `foo_param_test.cc`, which includes
1165 `foo_param_test.h`. Think of this as *implementing* your abstract tests.
1167 Once they are defined, you can instantiate them by including `foo_param_test.h`,
1168 invoking `INSTANTIATE_TEST_SUITE_P()`, and depending on the library target that
1169 contains `foo_param_test.cc`. You can instantiate the same abstract test suite
1170 multiple times, possibly in different source files.
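A sketch of that file layout (all names are illustrative):

```c++
// foo_param_test.h -- declares the abstract test fixture.
class FooTest : public testing::TestWithParam<const char*> {};

// foo_param_test.cc -- implements the test patterns; includes the header.
TEST_P(FooTest, DoesBlah) { ... }

// my_impl_test.cc -- a client instantiates the suite with its own values.
#include "foo_param_test.h"
INSTANTIATE_TEST_SUITE_P(MyImpl, FooTest, testing::Values("a", "b"));
```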
1172 ### Specifying Names for Value-Parameterized Test Parameters
1174 The optional last argument to `INSTANTIATE_TEST_SUITE_P()` allows the user to
1175 specify a function or functor that generates custom test name suffixes based on
1176 the test parameters. The function should accept one argument of type
1177 `testing::TestParamInfo<class ParamType>`, and return `std::string`.
1179 `testing::PrintToStringParamName` is a builtin test suffix generator that
1180 returns the value of `testing::PrintToString(GetParam())`. It does not work for
1181 `std::string` or C strings.
1184 NOTE: test names must be non-empty, unique, and may only contain ASCII
1185 alphanumeric characters. In particular, they
[should not contain underscores](faq.md#why-should-test-suite-names-and-test-names-not-contain-underscore).
```c++
class MyTestSuite : public testing::TestWithParam<int> {};

TEST_P(MyTestSuite, MyTest)
{
  std::cout << "Example Test Param: " << GetParam() << std::endl;
}

INSTANTIATE_TEST_SUITE_P(MyGroup, MyTestSuite, testing::Range(0, 10),
                         testing::PrintToStringParamName());
```
1200 Providing a custom functor allows for more control over test parameter name
1201 generation, especially for types where the automatic conversion does not
1202 generate helpful parameter names (e.g. strings as demonstrated above). The
1203 following example illustrates this for multiple parameters, an enumeration type
and a string, and also demonstrates how to combine generators. It uses a lambda for conciseness:
```c++
enum class MyType { MY_FOO = 0, MY_BAR = 1 };

class MyTestSuite : public testing::TestWithParam<std::tuple<MyType, std::string>> {
};

INSTANTIATE_TEST_SUITE_P(
    MyGroup, MyTestSuite,
    testing::Combine(
        testing::Values(MyType::MY_FOO, MyType::MY_BAR),
        testing::Values("A", "B")),
    [](const testing::TestParamInfo<MyTestSuite::ParamType>& info) {
      std::string name = absl::StrCat(
          std::get<0>(info.param) == MyType::MY_FOO ? "Foo" : "Bar",
          std::get<1>(info.param));
      absl::c_replace_if(name, [](char c) { return !std::isalnum(c); }, '_');
      return name;
    });
```

## Typed Tests
1229 Suppose you have multiple implementations of the same interface and want to make
1230 sure that all of them satisfy some common requirements. Or, you may have defined
1231 several types that are supposed to conform to the same "concept" and you want to
1232 verify it. In both cases, you want the same test logic repeated for different
1235 While you can write one `TEST` or `TEST_F` for each type you want to test (and
1236 you may even factor the test logic into a function template that you invoke from
1237 the `TEST`), it's tedious and doesn't scale: if you want `m` tests over `n`
1238 types, you'll end up writing `m*n` `TEST`s.
1240 *Typed tests* allow you to repeat the same test logic over a list of types. You
1241 only need to write the test logic once, although you must know the type list
1242 when writing typed tests. Here's how you do it:
1244 First, define a fixture class template. It should be parameterized by a type.
1245 Remember to derive it from `::testing::Test`:
```c++
template <typename T>
class FooTest : public testing::Test {
 public:
  ...
  using List = std::list<T>;
  static T shared_;
  T value_;
};
```
1258 Next, associate a list of types with the test suite, which will be repeated for
1259 each type in the list:
```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
TYPED_TEST_SUITE(FooTest, MyTypes);
```
1266 The type alias (`using` or `typedef`) is necessary for the `TYPED_TEST_SUITE`
1267 macro to parse correctly. Otherwise the compiler will think that each comma in
1268 the type list introduces a new macro argument.
1270 Then, use `TYPED_TEST()` instead of `TEST_F()` to define a typed test for this
1271 test suite. You can repeat this as many times as you want:
```c++
TYPED_TEST(FooTest, DoesBlah) {
  // Inside a test, refer to the special name TypeParam to get the type
  // parameter. Since we are inside a derived class template, C++ requires
  // us to visit the members of FooTest via 'this'.
  TypeParam n = this->value_;

  // To visit static members of the fixture, add the 'TestFixture::'
  // prefix.
  n += TestFixture::shared_;

  // To refer to typedefs in the fixture, add the 'typename TestFixture::'
  // prefix. The 'typename' is required to satisfy the compiler.
  typename TestFixture::List values;

  values.push_back(n);
  ...
}

TYPED_TEST(FooTest, HasPropertyA) { ... }
```
1295 You can see [sample6_unittest.cc] for a complete example.
1297 [sample6_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample6_unittest.cc "Typed Test example"
1299 ## Type-Parameterized Tests
1301 *Type-parameterized tests* are like typed tests, except that they don't require
1302 you to know the list of types ahead of time. Instead, you can define the test
1303 logic first and instantiate it with different type lists later. You can even
1304 instantiate it more than once in the same program.
1306 If you are designing an interface or concept, you can define a suite of
1307 type-parameterized tests to verify properties that any valid implementation of
1308 the interface/concept should have. Then, the author of each implementation can
1309 just instantiate the test suite with their type to verify that it conforms to
1310 the requirements, without having to write similar tests repeatedly. Here's an
1313 First, define a fixture class template, as we did with typed tests:
```c++
template <typename T>
class FooTest : public testing::Test {
  void DoSomethingInteresting();
  ...
};
```
1323 Next, declare that you will define a type-parameterized test suite:
```c++
TYPED_TEST_SUITE_P(FooTest);
```
1329 Then, use `TYPED_TEST_P()` to define a type-parameterized test. You can repeat
1330 this as many times as you want:
```c++
TYPED_TEST_P(FooTest, DoesBlah) {
  // Inside a test, refer to TypeParam to get the type parameter.
  TypeParam n = 0;

  // You will need to use `this` explicitly to refer to fixture members.
  this->DoSomethingInteresting();
  ...
}

TYPED_TEST_P(FooTest, HasPropertyA) { ... }
```
1345 Now the tricky part: you need to register all test patterns using the
1346 `REGISTER_TYPED_TEST_SUITE_P` macro before you can instantiate them. The first
1347 argument of the macro is the test suite name; the rest are the names of the
1348 tests in this test suite:
```c++
REGISTER_TYPED_TEST_SUITE_P(FooTest,
                            DoesBlah, HasPropertyA);
```
1355 Finally, you are free to instantiate the pattern with the types you want. If you
1356 put the above code in a header file, you can `#include` it in multiple C++
1357 source files and instantiate it multiple times.
```c++
using MyTypes = ::testing::Types<char, int, unsigned int>;
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, MyTypes);
```
1364 To distinguish different instances of the pattern, the first argument to the
1365 `INSTANTIATE_TYPED_TEST_SUITE_P` macro is a prefix that will be added to the
actual test suite name. Remember to pick unique prefixes for different instantiations.
1369 In the special case where the type list contains only one type, you can write
1370 that type directly without `::testing::Types<...>`, like this:
```c++
INSTANTIATE_TYPED_TEST_SUITE_P(My, FooTest, int);
```
1376 You can see [sample6_unittest.cc] for a complete example.
1378 ## Testing Private Code
1380 If you change your software's internal implementation, your tests should not
1381 break as long as the change is not observable by users. Therefore, **per the
1382 black-box testing principle, most of the time you should test your code through
1383 its public interfaces.**
1385 **If you still find yourself needing to test internal implementation code,
1386 consider if there's a better design.** The desire to test internal
1387 implementation is often a sign that the class is doing too much. Consider
1388 extracting an implementation class, and testing it. Then use that implementation
1389 class in the original class.
1391 If you absolutely have to test non-public interface code though, you can. There
1392 are two cases to consider:
* Static functions (*not* the same as static member functions!) or unnamed namespaces, and
1396 * Private or protected class members
1398 To test them, we use the following special techniques:
1400 * Both static functions and definitions/declarations in an unnamed namespace
1401 are only visible within the same translation unit. To test them, you can
1402 `#include` the entire `.cc` file being tested in your `*_test.cc` file.
1403 (#including `.cc` files is not a good way to reuse code - you should not do
1404 this in production code!)
1406 However, a better approach is to move the private code into the
1407 `foo::internal` namespace, where `foo` is the namespace your project
1408 normally uses, and put the private declarations in a `*-internal.h` file.
1409 Your production `.cc` files and your tests are allowed to include this
1410 internal header, but your clients are not. This way, you can fully test your
1411 internal implementation without leaking it to your clients.
1413 * Private class members are only accessible from within the class or by
1414 friends. To access a class' private members, you can declare your test
1415 fixture as a friend to the class and define accessors in your fixture. Tests
1416 using the fixture can then access the private members of your production
1417 class via the accessors in the fixture. Note that even though your fixture
1418 is a friend to your production class, your tests are not automatically
friends to it, as they are technically defined in sub-classes of the fixture.
1422 Another way to test private members is to refactor them into an
1423 implementation class, which is then declared in a `*-internal.h` file. Your
clients aren't allowed to include this header but your tests can. Such is called the
1426 [Pimpl](https://www.gamedev.net/articles/programming/general-and-gameplay-programming/the-c-pimpl-r1794/)
1427 (Private Implementation) idiom.
1429 Or, you can declare an individual test as a friend of your class by adding
1430 this line in the class body:
```c++
FRIEND_TEST(TestSuiteName, TestName);
```
For example,

```c++
// foo.h
class Foo {
  ...
 private:
  FRIEND_TEST(FooTest, BarReturnsZeroOnNull);

  int Bar(void* x);  // An internal member function to test.
};

// foo_test.cc
...
TEST(FooTest, BarReturnsZeroOnNull) {
  Foo foo;
  EXPECT_EQ(foo.Bar(NULL), 0);  // Uses Foo's private member Bar().
}
```
1456 Pay special attention when your class is defined in a namespace. If you want
1457 your test fixtures and tests to be friends of your class, then they must be
1458 defined in the exact same namespace (no anonymous or inline namespaces).
1460 For example, if the code to be tested looks like:
```c++
namespace my_namespace {

class Foo {
  friend class FooTest;
  FRIEND_TEST(FooTest, Bar);
  FRIEND_TEST(FooTest, Baz);
  ... definition of the class Foo ...
};

}  // namespace my_namespace
```
1475 Your test code should be something like:
```c++
namespace my_namespace {

class FooTest : public testing::Test {
 protected:
  ...
};

TEST_F(FooTest, Bar) { ... }
TEST_F(FooTest, Baz) { ... }

}  // namespace my_namespace
```
1491 ## "Catching" Failures
1493 If you are building a testing utility on top of googletest, you'll want to test
1494 your utility. What framework would you use to test it? googletest, of course.
1496 The challenge is to verify that your testing utility reports failures correctly.
1497 In frameworks that report a failure by throwing an exception, you could catch
1498 the exception and assert on it. But googletest doesn't use exceptions, so how do
1499 we test that a piece of code generates an expected failure?
1501 `"gtest/gtest-spi.h"` contains some constructs to do this.
1502 After #including this header, you can use
```c++
EXPECT_FATAL_FAILURE(statement, substring);
```
1508 to assert that `statement` generates a fatal (e.g. `ASSERT_*`) failure in the
1509 current thread whose message contains the given `substring`, or use
```c++
EXPECT_NONFATAL_FAILURE(statement, substring);
```
1515 if you are expecting a non-fatal (e.g. `EXPECT_*`) failure.
1517 Only failures in the current thread are checked to determine the result of this
type of expectation. If `statement` creates new threads, failures in these
1519 threads are also ignored. If you want to catch failures in other threads as
1520 well, use one of the following macros instead:
```c++
EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substring);
EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substring);
```
1528 NOTE: Assertions from multiple threads are currently not supported on Windows.
1530 For technical reasons, there are some caveats:
1532 1. You cannot stream a failure message to either macro.
1534 2. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot reference
1535 local non-static variables or non-static members of `this` object.
3. `statement` in `EXPECT_FATAL_FAILURE{_ON_ALL_THREADS}()` cannot return a value.
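As a small sketch, a utility assertion can be exercised like this; `MyExpectNonNegative` is hypothetical:

```c++
#include "gtest/gtest-spi.h"

// A testing utility built on top of googletest.
void MyExpectNonNegative(int value) {
  EXPECT_GE(value, 0) << "negative value";
}

// Verify that the utility reports the expected non-fatal failure.
TEST(MyUtilityTest, ReportsNegativeValues) {
  EXPECT_NONFATAL_FAILURE(MyExpectNonNegative(-1), "negative value");
}
```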
1540 ## Registering tests programmatically
The `TEST` macros handle the vast majority of all use cases, but there are a few where runtime registration logic is required. For those cases, the framework provides `::testing::RegisterTest`, which allows callers to register arbitrary tests dynamically.
1547 This is an advanced API only to be used when the `TEST` macros are insufficient.
1548 The macros should be preferred when possible, as they avoid most of the
1549 complexity of calling this function.
1551 It provides the following signature:
```c++
template <typename Factory>
TestInfo* RegisterTest(const char* test_suite_name, const char* test_name,
                       const char* type_param, const char* value_param,
                       const char* file, int line, Factory factory);
```
The `factory` argument is a factory callable (move-constructible) object or function pointer that creates a new instance of the Test object, handing ownership to the caller. The signature of the callable is `Fixture*()`, where `Fixture` is the test fixture class for the test. All tests registered with the same `test_suite_name` must return the same fixture type. This is checked at run time.
1567 The framework will infer the fixture class from the factory and will call the
1568 `SetUpTestSuite` and `TearDownTestSuite` for it.
Must be called before `RUN_ALL_TESTS()` is invoked, otherwise behavior is undefined.
Use case example:

```c++
class MyFixture : public testing::Test {
 public:
  // All of these optional, just like in regular macro usage.
  static void SetUpTestSuite() { ... }
  static void TearDownTestSuite() { ... }
  void SetUp() override { ... }
  void TearDown() override { ... }
};

class MyTest : public MyFixture {
 public:
  explicit MyTest(int data) : data_(data) {}
  void TestBody() override { ... }

 private:
  int data_;
};

void RegisterMyTests(const std::vector<int>& values) {
  for (int v : values) {
    testing::RegisterTest(
        "MyFixture", ("Test" + std::to_string(v)).c_str(), nullptr,
        std::to_string(v).c_str(),
        __FILE__, __LINE__,
        // Important to use the fixture type as the return type here.
        [=]() -> MyFixture* { return new MyTest(v); });
  }
}

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  std::vector<int> values_to_test = LoadValuesFromConfig();
  RegisterMyTests(values_to_test);
  return RUN_ALL_TESTS();
}
```
## Getting the Current Test's Name

Sometimes a function may need to know the name of the currently running test.
For example, you may be using the `SetUp()` method of your test fixture to set
the golden file name based on which test is running. The
[`TestInfo`](reference/testing.md#TestInfo) class has this information.

To obtain a `TestInfo` object for the currently running test, call
`current_test_info()` on the [`UnitTest`](reference/testing.md#UnitTest)
singleton object:

```c++
  // Gets information about the currently running test.
  // Do NOT delete the returned object - it's managed by the UnitTest class.
  const testing::TestInfo* const test_info =
      testing::UnitTest::GetInstance()->current_test_info();

  printf("We are in test %s of test suite %s.\n",
         test_info->name(),
         test_info->test_suite_name());
```

`current_test_info()` returns a null pointer if no test is running. In
particular, you cannot find the test suite name in `SetUpTestSuite()`,
`TearDownTestSuite()` (where you know the test suite name implicitly), or
functions called from them.

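As a sketch of the golden-file scenario mentioned above (the `goldens/` path
layout is an arbitrary assumption for illustration):

```c++
#include <string>

#include "gtest/gtest.h"

class GoldenFileTest : public testing::Test {
 protected:
  void SetUp() override {
    const testing::TestInfo* const info =
        testing::UnitTest::GetInstance()->current_test_info();
    // Yields e.g. "goldens/GoldenFileTest_ParsesHeader.txt" for
    // TEST_F(GoldenFileTest, ParsesHeader).
    golden_path_ = std::string("goldens/") + info->test_suite_name() + "_" +
                   info->name() + ".txt";
  }

  std::string golden_path_;
};
```
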
## Extending googletest by Handling Test Events

googletest provides an **event listener API** to let you receive notifications
about the progress of a test program and test failures. The events you can
listen to include the start and end of the test program, a test suite, or a test
method, among others. You may use this API to augment or replace the standard
console output, replace the XML output, or provide a completely different form
of output, such as a GUI or a database. You can also use test events as
checkpoints to implement a resource leak checker, for example.

1651 ### Defining Event Listeners
1653 To define a event listener, you subclass either
1654 [`testing::TestEventListener`](reference/testing.md#TestEventListener) or
1655 [`testing::EmptyTestEventListener`](reference/testing.md#EmptyTestEventListener)
1656 The former is an (abstract) interface, where *each pure virtual method can be
1657 overridden to handle a test event* (For example, when a test starts, the
1658 `OnTestStart()` method will be called.). The latter provides an empty
1659 implementation of all methods in the interface, such that a subclass only needs
1660 to override the methods it cares about.
When an event is fired, its context is passed to the handler function as an
argument. The following argument types are used:

* `UnitTest` reflects the state of the entire test program,
* `TestSuite` has information about a test suite, which can contain one or
  more tests,
* `TestInfo` contains the state of a test, and
* `TestPartResult` represents the result of a test assertion.

An event handler function can examine the argument it receives to find out
interesting information about the event and the test program's state. Here's an
example:

```c++
class MinimalistPrinter : public testing::EmptyTestEventListener {
  // Called before a test starts.
  void OnTestStart(const testing::TestInfo& test_info) override {
    printf("*** Test %s.%s starting.\n",
           test_info.test_suite_name(), test_info.name());
  }

  // Called after a failed assertion or a SUCCEED() invocation.
  void OnTestPartResult(const testing::TestPartResult& test_part_result) override {
    printf("%s in %s:%d\n%s\n",
           test_part_result.failed() ? "*** Failure" : "Success",
           test_part_result.file_name(),
           test_part_result.line_number(),
           test_part_result.summary());
  }

  // Called after a test ends.
  void OnTestEnd(const testing::TestInfo& test_info) override {
    printf("*** Test %s.%s ending.\n",
           test_info.test_suite_name(), test_info.name());
  }
};
```

### Using Event Listeners

To use the event listener you have defined, add an instance of it to the
googletest event listener list (represented by class
[`TestEventListeners`](reference/testing.md#TestEventListeners) - note the "s"
at the end of the name) in your `main()` function, before calling
`RUN_ALL_TESTS()`:

```c++
int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // Gets hold of the event listener list.
  testing::TestEventListeners& listeners =
      testing::UnitTest::GetInstance()->listeners();
  // Adds a listener to the end. googletest takes the ownership.
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
}
```

There's only one problem: the default test result printer is still in effect, so
its output will mingle with the output from your minimalist printer. To suppress
the default printer, just release it from the event listener list and delete it.
You can do so by adding one line:

```c++
  ...
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new MinimalistPrinter);
  return RUN_ALL_TESTS();
```

Now, sit back and enjoy a completely different output from your tests. For more
details, see [sample9_unittest.cc].

[sample9_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample9_unittest.cc "Event listener example"

You may append more than one listener to the list. When an `On*Start()` or
`OnTestPartResult()` event is fired, the listeners will receive it in the order
they appear in the list (since new listeners are added to the end of the list,
the default text printer and the default XML generator will receive the event
first). An `On*End()` event will be received by the listeners in the *reverse*
order. This allows output by listeners added later to be framed by output from
listeners added earlier.

### Generating Failures in Listeners

You may use failure-raising macros (`EXPECT_*()`, `ASSERT_*()`, `FAIL()`, etc)
when processing an event. There are some restrictions:

1. You cannot generate any failure in `OnTestPartResult()` (otherwise it will
   cause `OnTestPartResult()` to be called recursively).
2. A listener that handles `OnTestPartResult()` is not allowed to generate any
   failure.

When you add listeners to the listener list, you should put listeners that
handle `OnTestPartResult()` *before* listeners that can generate failures. This
ensures that failures generated by the latter are attributed to the right test
by the former.

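For example, here is a rough sketch of a failure-raising listener implementing
the resource-leak checkpoint idea mentioned earlier. `Widget` and its
`InstanceCount()` accessor are hypothetical, standing in for whatever resource
your code tracks:

```c++
#include "gtest/gtest.h"

// Hypothetical resource type whose live-instance count we can query,
// e.g. backed by a global counter in the production code.
class Widget {
 public:
  static int InstanceCount();
};

class WidgetLeakChecker : public testing::EmptyTestEventListener {
  void OnTestStart(const testing::TestInfo& /*test_info*/) override {
    pre_test_count_ = Widget::InstanceCount();
  }

  // Raising a failure here is fine: this listener does not handle
  // OnTestPartResult(), so the restrictions above are respected.
  void OnTestEnd(const testing::TestInfo& test_info) override {
    EXPECT_EQ(Widget::InstanceCount(), pre_test_count_)
        << "Test " << test_info.name() << " leaked Widget instances.";
  }

  int pre_test_count_ = 0;
};
```
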
See [sample10_unittest.cc] for an example of a failure-raising listener.

[sample10_unittest.cc]: https://github.com/google/googletest/blob/master/googletest/samples/sample10_unittest.cc "Failure-raising listener example"

## Running Test Programs: Advanced Options

googletest test programs are ordinary executables. Once built, you can run them
directly and affect their behavior via the following environment variables
and/or command line flags. For the flags to work, your programs must call
`::testing::InitGoogleTest()` before calling `RUN_ALL_TESTS()`.

To see a list of supported flags and their usage, please run your test program
with the `--help` flag. You can also use `-h`, `-?`, or `/?` for short.

If an option is specified both by an environment variable and by a flag, the
latter takes precedence.

### Selecting Tests

#### Listing Test Names

Sometimes it is necessary to list the available tests in a program before
running them so that a filter may be applied if needed. Including the flag
`--gtest_list_tests` overrides all other flags and lists tests in the following
format:

```none
TestSuite1.
  TestName1
  TestName2
TestSuite2.
  TestName
```

None of the tests listed are actually run if the flag is provided. There is no
corresponding environment variable for this flag.

#### Running a Subset of the Tests

By default, a googletest program runs all tests the user has defined. Sometimes,
you want to run only a subset of the tests (e.g. for debugging or quickly
verifying a change). If you set the `GTEST_FILTER` environment variable or the
`--gtest_filter` flag to a filter string, googletest will only run the tests
whose full names (in the form of `TestSuiteName.TestName`) match the filter.

The format of a filter is a '`:`'-separated list of wildcard patterns (called
the *positive patterns*) optionally followed by a '`-`' and another
'`:`'-separated pattern list (called the *negative patterns*). A test matches
the filter if and only if it matches any of the positive patterns but does not
match any of the negative patterns.

A pattern may contain `'*'` (matches any string) or `'?'` (matches any single
character). For convenience, the filter `'*-NegativePatterns'` can also be
written as `'-NegativePatterns'`.

For example:

* `./foo_test` Has no flag, and thus runs all its tests.
* `./foo_test --gtest_filter=*` Also runs everything, due to the single
  match-everything `*` value.
* `./foo_test --gtest_filter=FooTest.*` Runs everything in test suite
  `FooTest`.
* `./foo_test --gtest_filter=*Null*:*Constructor*` Runs any test whose full
  name contains either `"Null"` or `"Constructor"`.
* `./foo_test --gtest_filter=-*DeathTest.*` Runs all non-death tests.
* `./foo_test --gtest_filter=FooTest.*-FooTest.Bar` Runs everything in test
  suite `FooTest` except `FooTest.Bar`.
* `./foo_test --gtest_filter=FooTest.*:BarTest.*-FooTest.Bar:BarTest.Foo` Runs
  everything in test suite `FooTest` except `FooTest.Bar` and everything in
  test suite `BarTest` except `BarTest.Foo`.

#### Stop test execution upon first failure

By default, a googletest program runs all tests the user has defined. In some
cases (e.g. iterative test development & execution) it may be desirable to stop
test execution upon the first failure (trading improved latency for
completeness). If the `GTEST_FAIL_FAST` environment variable or the
`--gtest_fail_fast` flag is set, the test runner will stop execution as soon as
the first test failure is found.

1840 #### Temporarily Disabling Tests
1842 If you have a broken test that you cannot fix right away, you can add the
1843 `DISABLED_` prefix to its name. This will exclude it from execution. This is
1844 better than commenting out the code or using `#if 0`, as disabled tests are
1845 still compiled (and thus won't rot).
1847 If you need to disable all tests in a test suite, you can either add `DISABLED_`
1848 to the front of the name of each test, or alternatively add it to the front of
1849 the test suite name.
1851 For example, the following tests won't be run by googletest, even though they
1852 will still be compiled:
```c++
// Tests that Foo does Abc.
TEST(FooTest, DISABLED_DoesAbc) { ... }

class DISABLED_BarTest : public testing::Test { ... };

// Tests that Bar does Xyz.
TEST_F(DISABLED_BarTest, DoesXyz) { ... }
```

{: .callout .note}
NOTE: This feature should only be used for temporary pain-relief. You still have
to fix the disabled tests at a later date. As a reminder, googletest will print
a banner warning you if a test program contains any disabled tests.

{: .callout .tip}
TIP: You can easily count the number of disabled tests you have using `grep`.
This number can be used as a metric for improving your test quality.

#### Temporarily Enabling Disabled Tests

To include disabled tests in test execution, just invoke the test program with
the `--gtest_also_run_disabled_tests` flag or set the
`GTEST_ALSO_RUN_DISABLED_TESTS` environment variable to a value other than `0`.
You can combine this with the `--gtest_filter` flag to further select which
disabled tests to run.

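For instance, assuming the `DISABLED_DoesAbc` test from the previous section, a
command line along these lines would run just that test:

```none
$ ./foo_test --gtest_also_run_disabled_tests --gtest_filter=FooTest.DISABLED_DoesAbc
```
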
### Repeating the Tests

Once in a while you'll run into a test whose result is hit-or-miss. Perhaps it
will fail only 1% of the time, making it rather hard to reproduce the bug under
a debugger. This can be a major source of frustration.

The `--gtest_repeat` flag allows you to repeat all (or selected) test methods in
a program many times. Hopefully, a flaky test will eventually fail and give you
a chance to debug. Here's how to use it:

```none
$ foo_test --gtest_repeat=1000
Repeat foo_test 1000 times and don't stop at failures.

$ foo_test --gtest_repeat=-1
A negative count means repeating forever.

$ foo_test --gtest_repeat=1000 --gtest_break_on_failure
Repeat foo_test 1000 times, stopping at the first failure. This
is especially useful when running under a debugger: when the test
fails, it will drop into the debugger and you can then inspect
variables and stacks.

$ foo_test --gtest_repeat=1000 --gtest_filter=FooBar.*
Repeat the tests whose name matches the filter 1000 times.
```

If your test program contains
[global set-up/tear-down](#global-set-up-and-tear-down) code, it will be
repeated in each iteration as well, as the flakiness may be in it. You can also
specify the repeat count by setting the `GTEST_REPEAT` environment variable.

### Shuffling the Tests

You can specify the `--gtest_shuffle` flag (or set the `GTEST_SHUFFLE`
environment variable to `1`) to run the tests in a program in a random order.
This helps to reveal bad dependencies between tests.

By default, googletest uses a random seed calculated from the current time.
Therefore you'll get a different order every time. The console output includes
the random seed value, such that you can reproduce an order-related test failure
later. To specify the random seed explicitly, use the `--gtest_random_seed=SEED`
flag (or set the `GTEST_RANDOM_SEED` environment variable), where `SEED` is an
integer in the range [0, 99999]. The seed value 0 is special: it tells
googletest to do the default behavior of calculating the seed from the current
time.

If you combine this with `--gtest_repeat=N`, googletest will pick a different
random seed and re-shuffle the tests in each iteration.

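As a sketch, a typical debugging workflow might be: run shuffled, note the seed
reported in the console output, then re-run with that seed to reproduce the same
order (the seed value below is arbitrary):

```none
$ ./foo_test --gtest_shuffle                            # prints the seed it picked
$ ./foo_test --gtest_shuffle --gtest_random_seed=12345  # reproduce that order
```
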
### Distributing Test Functions to Multiple Machines

If you have more than one machine you can use to run a test program, you might
want to run the test functions in parallel and get the result faster. We call
this technique *sharding*, where each machine is called a *shard*.

GoogleTest is compatible with test sharding. To take advantage of this feature,
your test runner (not part of GoogleTest) needs to do the following:

1. Allocate a number of machines (shards) to run the tests.
1. On each shard, set the `GTEST_TOTAL_SHARDS` environment variable to the
   total number of shards. It must be the same for all shards.
1. On each shard, set the `GTEST_SHARD_INDEX` environment variable to the index
   of the shard. Different shards must be assigned different indices, which
   must be in the range `[0, GTEST_TOTAL_SHARDS - 1]`.
1. Run the same test program on all shards (see the sketch after this list).
   When GoogleTest sees the above two environment variables, it will select a
   subset of the test functions to run. Across all shards, each test function
   in the program will be run exactly once.
1. Wait for all shards to finish, then collect and report the results.

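For example, a runner could launch shard 1 of 3 with an invocation like the
following; the other shards would differ only in `GTEST_SHARD_INDEX`:

```none
$ GTEST_TOTAL_SHARDS=3 GTEST_SHARD_INDEX=1 ./foo_test
```
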
Your project may have tests that were written without GoogleTest and thus don't
understand this protocol. In order for your test runner to figure out which test
supports sharding, it can set the environment variable `GTEST_SHARD_STATUS_FILE`
to a non-existent file path. If a test program supports sharding, it will create
this file to acknowledge that fact; otherwise it will not create it. The actual
contents of the file are not important at this time, although we may put some
useful information in it in the future.

Here's an example to make it clear. Suppose you have a test program `foo_test`
that contains the following 5 test functions:

```
TEST(A, V)
TEST(A, W)
TEST(B, X)
TEST(B, Y)
TEST(B, Z)
```

Suppose you have 3 machines at your disposal. To run the test functions in
parallel, you would set `GTEST_TOTAL_SHARDS` to 3 on all machines, and set
`GTEST_SHARD_INDEX` to 0, 1, and 2 on the machines respectively. Then you would
run the same `foo_test` on each machine.

GoogleTest reserves the right to change how the work is distributed across the
shards, but here's one possible scenario:

* Machine #0 runs `A.V` and `B.X`.
* Machine #1 runs `A.W` and `B.Y`.
* Machine #2 runs `B.Z`.

### Controlling Test Output

#### Colored Terminal Output

googletest can use colors in its terminal output to make it easier to spot the
important information:

<pre>...
<font color="green">[----------]</font> 1 test from FooTest
<font color="green">[ RUN      ]</font> FooTest.DoesAbc
<font color="green">[       OK ]</font> FooTest.DoesAbc
<font color="green">[----------]</font> 2 tests from BarTest
<font color="green">[ RUN      ]</font> BarTest.HasXyzProperty
<font color="green">[       OK ]</font> BarTest.HasXyzProperty
<font color="green">[ RUN      ]</font> BarTest.ReturnsTrueOnSuccess
... some error messages ...
<font color="red">[  FAILED  ]</font> BarTest.ReturnsTrueOnSuccess
...
<font color="green">[==========]</font> 30 tests from 14 test suites ran.
<font color="green">[  PASSED  ]</font> 28 tests.
<font color="red">[  FAILED  ]</font> 2 tests, listed below:
<font color="red">[  FAILED  ]</font> BarTest.ReturnsTrueOnSuccess
<font color="red">[  FAILED  ]</font> AnotherTest.DoesXyz
</pre>

You can set the `GTEST_COLOR` environment variable or the `--gtest_color`
command line flag to `yes`, `no`, or `auto` (the default) to enable colors,
disable colors, or let googletest decide. When the value is `auto`, googletest
will use colors if and only if the output goes to a terminal and (on non-Windows
platforms) the `TERM` environment variable is set to `xterm` or `xterm-color`.

#### Suppressing test passes

By default, googletest prints 1 line of output for each test, indicating if it
passed or failed. To show only test failures, run the test program with
`--gtest_brief=1`, or set the `GTEST_BRIEF` environment variable to `1`.

#### Suppressing the Elapsed Time

By default, googletest prints the time it takes to run each test. To disable
that, run the test program with the `--gtest_print_time=0` command line flag, or
set the `GTEST_PRINT_TIME` environment variable to `0`.

#### Suppressing UTF-8 Text Output

In case of assertion failures, googletest prints expected and actual values of
type `string` both as hex-encoded strings and as readable UTF-8 text if they
contain valid non-ASCII UTF-8 characters. If you want to suppress the UTF-8
text because, for example, you don't have a UTF-8 compatible output medium, run
the test program with `--gtest_print_utf8=0` or set the `GTEST_PRINT_UTF8`
environment variable to `0`.

#### Generating an XML Report

googletest can emit a detailed XML report to a file in addition to its normal
textual output. The report contains the duration of each test, and thus can help
you identify slow tests.

To generate the XML report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"xml:path_to_output_file"`, which will
create the file at the given location. You can also just use the string `"xml"`,
in which case the output can be found in the `test_detail.xml` file in the
current directory.

If you specify a directory (for example, `"xml:output/directory/"` on Linux or
`"xml:output\directory\"` on Windows), googletest will create the XML file in
that directory, named after the test executable (e.g. `foo_test.xml` for test
program `foo_test` or `foo_test.exe`). If the file already exists (perhaps left
over from a previous run), googletest will pick a different name (e.g.
`foo_test_1.xml`) to avoid overwriting it.

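For example, either of the following invocations produces an XML report (the
paths are arbitrary):

```none
$ ./foo_test --gtest_output=xml:reports/foo_test_report.xml
$ GTEST_OUTPUT="xml:reports/" ./foo_test
```
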
The report is based on the `junitreport` Ant task. Since that format was
originally intended for Java, a little interpretation is required to make it
apply to googletest tests, as shown here:

```xml
<testsuites name="AllTests" ...>
  <testsuite name="test_case_name" ...>
    <testcase name="test_name" ...>
      <failure message="..."/>
      <failure message="..."/>
      <failure message="..."/>
    </testcase>
  </testsuite>
</testsuites>
```

* The root `<testsuites>` element corresponds to the entire test program.
* `<testsuite>` elements correspond to googletest test suites.
* `<testcase>` elements correspond to googletest test functions.

For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="1" errors="0" time="0.035" timestamp="2011-10-31T18:52:42" name="AllTests">
  <testsuite name="MathTest" tests="2" failures="1" errors="0" time="0.015">
    <testcase name="Addition" file="test.cpp" line="1" status="run" time="0.007" classname="">
      <failure message="Value of: add(1, 1)
  Actual: 3
Expected: 2" type="">...</failure>
      <failure message="Value of: add(1, -1)
  Actual: 1
Expected: 0" type="">...</failure>
    </testcase>
    <testcase name="Subtraction" file="test.cpp" line="2" status="run" time="0.005" classname="" />
  </testsuite>
  <testsuite name="LogicTest" tests="1" failures="0" errors="0" time="0.005">
    <testcase name="NonContradiction" file="test.cpp" line="3" status="run" time="0.005" classname="" />
  </testsuite>
</testsuites>
```

Things to note:

* The `tests` attribute of a `<testsuites>` or `<testsuite>` element tells how
  many test functions the googletest program or test suite contains, while the
  `failures` attribute tells how many of them failed.

* The `time` attribute expresses the duration of the test, test suite, or
  entire test program in seconds.

* The `timestamp` attribute records the local date and time of the test
  execution.

* The `file` and `line` attributes record the source file location where the
  test was defined.

* Each `<failure>` element corresponds to a single failed googletest
  assertion.

#### Generating a JSON Report

googletest can also emit a JSON report as an alternative format to XML. To
generate the JSON report, set the `GTEST_OUTPUT` environment variable or the
`--gtest_output` flag to the string `"json:path_to_output_file"`, which will
create the file at the given location. You can also just use the string
`"json"`, in which case the output can be found in the `test_detail.json` file
in the current directory.

The report format conforms to the following JSON Schema:

```json
{
  "$schema": "http://json-schema.org/schema#",
  "type": "object",
  "definitions": {
    "TestCase": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "tests": { "type": "integer" },
        "failures": { "type": "integer" },
        "disabled": { "type": "integer" },
        "time": { "type": "string" },
        "testsuite": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/TestInfo"
          }
        }
      }
    },
    "TestInfo": {
      "type": "object",
      "properties": {
        "name": { "type": "string" },
        "file": { "type": "string" },
        "line": { "type": "integer" },
        "status": {
          "type": "string",
          "enum": ["RUN", "NOTRUN"]
        },
        "time": { "type": "string" },
        "classname": { "type": "string" },
        "failures": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/Failure"
          }
        }
      }
    },
    "Failure": {
      "type": "object",
      "properties": {
        "failures": { "type": "string" },
        "type": { "type": "string" }
      }
    }
  },
  "properties": {
    "tests": { "type": "integer" },
    "failures": { "type": "integer" },
    "disabled": { "type": "integer" },
    "errors": { "type": "integer" },
    "timestamp": {
      "type": "string",
      "format": "date-time"
    },
    "time": { "type": "string" },
    "name": { "type": "string" },
    "testsuites": {
      "type": "array",
      "items": {
        "$ref": "#/definitions/TestCase"
      }
    }
  }
}
```

The report uses the format that conforms to the following Proto3 using the
[JSON encoding](https://developers.google.com/protocol-buffers/docs/proto3#json):

```proto
syntax = "proto3";

package googletest;

import "google/protobuf/timestamp.proto";
import "google/protobuf/duration.proto";

message UnitTest {
  int32 tests = 1;
  int32 failures = 2;
  int32 disabled = 3;
  int32 errors = 4;
  google.protobuf.Timestamp timestamp = 5;
  google.protobuf.Duration time = 6;
  string name = 7;
  repeated TestCase testsuites = 8;
}

message TestCase {
  string name = 1;
  int32 tests = 2;
  int32 failures = 3;
  int32 disabled = 4;
  int32 errors = 5;
  google.protobuf.Duration time = 6;
  repeated TestInfo testsuite = 7;
}

message TestInfo {
  string name = 1;
  string file = 6;
  int32 line = 7;
  enum Status {
    RUN = 0;
    NOTRUN = 1;
  }
  Status status = 2;
  google.protobuf.Duration time = 3;
  string classname = 4;
  message Failure {
    string failures = 1;
    string type = 2;
  }
  repeated Failure failures = 5;
}
```

For instance, the following program

```c++
TEST(MathTest, Addition) { ... }
TEST(MathTest, Subtraction) { ... }
TEST(LogicTest, NonContradiction) { ... }
```

could generate this report:

```json
{
  "tests": 3,
  "failures": 1,
  "disabled": 0,
  "errors": 0,
  "timestamp": "2011-10-31T18:52:42Z",
  "time": "0.035s",
  "name": "AllTests",
  "testsuites": [
    {
      "name": "MathTest",
      "tests": 2,
      "failures": 1,
      "disabled": 0,
      "errors": 0,
      "time": "0.015s",
      "testsuite": [
        {
          "name": "Addition",
          "file": "test.cpp",
          "line": 1,
          "status": "RUN",
          "time": "0.007s",
          "classname": "",
          "failures": [
            {
              "message": "Value of: add(1, 1)\n  Actual: 3\nExpected: 2",
              "type": ""
            },
            {
              "message": "Value of: add(1, -1)\n  Actual: 1\nExpected: 0",
              "type": ""
            }
          ]
        },
        {
          "name": "Subtraction",
          "file": "test.cpp",
          "line": 2,
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    },
    {
      "name": "LogicTest",
      "tests": 1,
      "failures": 0,
      "disabled": 0,
      "errors": 0,
      "time": "0.005s",
      "testsuite": [
        {
          "name": "NonContradiction",
          "file": "test.cpp",
          "line": 3,
          "status": "RUN",
          "time": "0.005s",
          "classname": ""
        }
      ]
    }
  ]
}
```

{: .callout .important}
IMPORTANT: The exact format of the JSON document is subject to change.

### Controlling How Failures Are Reported

#### Detecting Test Premature Exit

Google Test implements the _premature-exit-file_ protocol for test runners to
catch any kind of unexpected exits of test programs. Upon start, Google Test
creates a file at the path given by the `TEST_PREMATURE_EXIT_FILE` environment
variable and automatically deletes it after all work has been finished. The test
runner can then check whether this file exists: if it remains undeleted, the
inspected test program has exited prematurely.

This feature is enabled only if the `TEST_PREMATURE_EXIT_FILE` environment
variable has been set.

#### Turning Assertion Failures into Break-Points

When running test programs under a debugger, it's very convenient if the
debugger can catch an assertion failure and automatically drop into interactive
mode. googletest's *break-on-failure* mode supports this behavior.

To enable it, set the `GTEST_BREAK_ON_FAILURE` environment variable to a value
other than `0`. Alternatively, you can use the `--gtest_break_on_failure`
command line flag.

#### Disabling Catching Test-Thrown Exceptions

googletest can be used either with or without exceptions enabled. If a test
throws a C++ exception or (on Windows) a structured exception (SEH), by default
googletest catches it, reports it as a test failure, and continues with the next
test method. This maximizes the coverage of a test run. Also, on Windows an
uncaught exception will cause a pop-up window, so catching the exceptions allows
you to run the tests automatically.

When debugging the test failures, however, you may instead want the exceptions
to be handled by the debugger, such that you can examine the call stack when an
exception is thrown. To achieve that, set the `GTEST_CATCH_EXCEPTIONS`
environment variable to `0`, or use the `--gtest_catch_exceptions=0` flag when
running the tests.

### Sanitizer Integration

The
[Undefined Behavior Sanitizer](https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html),
[Address Sanitizer](https://github.com/google/sanitizers/wiki/AddressSanitizer),
and
[Thread Sanitizer](https://github.com/google/sanitizers/wiki/ThreadSanitizerCppManual)
all provide weak functions that you can override to trigger explicit failures
when they detect sanitizer errors, such as creating a reference from `nullptr`.
To override these functions, place definitions for them in a source file that
you compile as part of your main binary:

```c++
extern "C" {
void __ubsan_on_report() {
  FAIL() << "Encountered an undefined behavior sanitizer error";
}
void __asan_on_error() {
  FAIL() << "Encountered an address sanitizer error";
}
void __tsan_on_report() {
  FAIL() << "Encountered a thread sanitizer error";
}
}  // extern "C"
```

After compiling your project with one of the sanitizers enabled, if a particular
test triggers a sanitizer error, googletest will report that it failed.