The Automake test suite


User interface
==============

Running all tests
-----------------

  make check

You can use `-jN' for faster completion (it helps even on a
uniprocessor system, because of the unavoidable sleep delays noted
below).

Interpretation
--------------

Successes:
  PASS   - success
  XFAIL  - expected failure

Failures:
  FAIL   - failure
  XPASS  - unexpected success

Other:
  SKIP   - skipped tests (third-party tools not available)

Getting details from failures
-----------------------------

Each test is a shell script, and by default it is run by /bin/sh.  In
a non-VPATH build you can run the tests directly; they will be
verbose.

By default, the verbose output of a test foo.test is retained in the
log file foo.log.  A summary log is created in the file
test-suite.log.

You can limit the set of tests that are run using the TESTS variable,
and enable detailed test output at the end of the test run with the
VERBOSE variable:

  env VERBOSE=x TESTS='first.test second.test ...' make -e check

Supported shells
----------------

The test scripts are written with portability in mind, so they should
run with any decent Bourne-compatible shell.  However, some care must
be taken with Zsh: unless it is started directly in
Bourne-compatibility mode, it handles `$0' in a way that conflicts
with our usage and has no easy workaround.  Thus, if you want to run a
test script, say foo.test, with Zsh, you *can't* simply do
`zsh foo.test'; you *must* resort to:

  zsh -o no_function_argzero foo.test

Note that this problem does not occur when zsh is executed through a
symlink with a basename of `sh', since in that case it starts in
Bourne-compatibility mode.  So you should be perfectly safe when
/bin/sh is zsh.

Reporting failures
------------------

Send the verbose output of the failing tests, i.e., the contents of
test-suite.log, to the Automake bug-reporting address, along with the
usual version numbers (which Automake, which Autoconf, which operating
system, which make version, which shell, etc.).


Writing test cases
==================

Do
--

If you plan to fix a bug, write the test case first.  This way you
will make sure the test catches the bug, and that it succeeds once you
have fixed the bug.

Add a copyright/license paragraph.

Explain what the test does.  Cite the PR number (if any) and the
original reporter (if any), so that we can find or ask for more
information if needed.

If a test checks examples or idioms given in the documentation, make
sure the documentation references it appropriately in comments, as in:

  @c Keep in sync with autodist-config-headers.test.
  @example
  ...
  @end example

Use `required=...' for required tools.  Do not explicitly require
tools that can be taken for granted because they are listed in the GNU
Coding Standards (for example, `gzip').

Include ./defs in every test script (see existing tests for examples
of how to do this).

Use the `skip_' function to skip tests, with a meaningful message if
possible.  Where convenient, use the `warn_' function to print generic
warnings, the `fail_' function for test failures, and the `fatal_'
function for hard errors.  If a hard error is due to a failed set-up
of a test scenario, you can use the `framework_fail_' function
instead.

For tests that use the `parallel-tests' Automake option, set the shell
variable `parallel_tests' to "yes" before including ./defs.  Also, do
not give such a test a name ending in `-p.test', since that would risk
clashing with automatically-generated tests.
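As a rough illustration of the points above, the beginning of a new
test script could look like the sketch below.  This is only a sketch:
the license paragraph and the exact idiom for including ./defs should
be copied from an existing test, and the required tool, the feature
description, and the PR/reporter lines are placeholders.

  #! /bin/sh
  # (Copyright and license paragraph, copied from an existing test.)

  # Check that <some feature> works as documented.
  # Report from <original reporter>; see PR <number>, if any.

  required=gcc         # only name tools the test really needs
  parallel_tests=yes   # only for tests using the parallel-tests option
  . ./defs || Exit 1   # copy the exact inclusion idiom from existing tests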
For tests that are *not* meant to work with the `parallel-tests'
Automake option (these should be very few), set the shell variable
`parallel_tests' to "no" before including ./defs.

./defs sets up a skeleton configure.in.  If possible, append to this
file.  In some cases you will have to overwrite it, but this should be
the exception.  Note that configure.in registers Makefile.in but does
not output anything by default.  If you need ./configure to create
Makefile, append AC_OUTPUT to configure.in.

Use `set -e' to catch failures you might not have thought of.

End the test script with a `:' or `Exit 0'.  Otherwise, when somebody
changes the test by adding a failing command after the last command,
the test will spuriously fail because $? is nonzero at the end.  Note
that this is relevant also for tests using `set -e', if they contain
commands like "grep ... Makefile.in && Exit 1" (and there are indeed a
lot of such tests).

Use $ACLOCAL, $AUTOMAKE, $AUTOCONF, $AUTOUPDATE, $AUTOHEADER, $PERL,
$MAKE, $EGREP, and $FGREP, instead of the corresponding commands.

Use $sleep when you have to make sure that some file is newer than
another.

Use `cat' or `grep' to display (parts of) files that may be
interesting for debugging, so that when a user sends us verbose output
we don't have to ask for more details.  Display stderr output on the
stderr file descriptor.  If some redirected command is likely to fail,
and `set -e' is in effect, display its output even in the failure
case, before exiting.

Use `Exit' rather than `exit' to abort a test.

Use `$PATH_SEPARATOR', not a hard-coded `:', as the separator of
PATH's entries.

It is more important to make sure that a feature works than to make
sure that Automake's output looks correct.  It might look correct and
still fail to work.  In other words, prefer running `make' over
grepping `Makefile.in' (or do both).

If you run $AUTOMAKE or $AUTOCONF several times in the same test and
change `configure.in' in the meantime, do `rm -rf autom4te.cache'
before the subsequent runs.  On fast machines the new `configure.in'
could otherwise have the same timestamp as the old `autom4te.cache'.
Alternatively, use `--force' for the subsequent runs of the tools.

Use file names with two consecutive spaces when testing that some code
preserves file names with spaces.  This will catch errors like
`echo $filename | ...'.

Before committing: make sure the test is executable, add it to TESTS
in Makefile.am (and to XFAIL_TESTS as well, if needed), write a
ChangeLog entry, and send the diff to the Automake mailing list.

Do not
------

Do not test an Automake error with `$AUTOMAKE && Exit 1', or in three
years we will discover that this test failed for some other bogus
reason.  This has happened many times.  Better use something like

  AUTOMAKE_fails
  grep 'expected diagnostic' stderr

(Note that this does not prevent the test from failing for another
reason, but at least it makes sure the original error is still there.)

Do not override Makefile variables using make arguments, as in, e.g.:

  $MAKE prefix=/opt install

This is not portable for recursive targets (targets that invoke a
sub-make may not pass `prefix=/opt' along).  Use the following
instead:

  prefix=/opt $MAKE -e install
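Putting several of the above recommendations together, the body of a
typical feature test could end roughly as sketched below.  This is
only a sketch, assuming the skeleton configure.in set up by ./defs;
the empty Makefile.am and the `inst' install prefix are placeholders.
Note how the feature is checked by actually running make and make
install (with the portable, environment-based prefix override), and
how the script ends with `:'.

  set -e

  echo AC_OUTPUT >> configure.in   # let ./configure create Makefile

  : > Makefile.am                  # placeholder: a real test adds here the
                                   # rules and variables it wants to check

  $ACLOCAL
  $AUTOCONF
  $AUTOMAKE --add-missing          # --add-missing copies install-sh etc.

  ./configure
  $MAKE
  instdir=`pwd`/inst
  prefix="$instdir" $MAKE -e install

  :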