Takafumi Arakaki
Peter Bengtsson
Gary Donovan
+Brendan McCollam
+Erik Rose
+1.2.1
+
+- Correct nose.__version__ (#549). Thanks to Chris Withers for the bug report.
+
+1.2.0
+
+- Fixed issue where plugins included with `addplugins` keyword could
+ be overridden by built-in plugins (or third-party plugins registered
+ with setuptools) of the same name (#466).
+ Patch by Brendan McCollam.
+- Adds :option:`--cover-xml` and :option:`--cover-xml-file` (#311).
+ Patch by Timothée Peignier.
+- Adds support for :option:`--cover-branches` (related to #370).
+ Patch by Timothée Peignier.
+- Fixed Unicode issue on Python 3.1 with coverage (#442).
+- Fixed class-level fixture handling in the multiprocessing plugin.
+- Clue in the ``unittest`` module so it no longer prints traceback frames for
+ our clones of their simple assertion helpers (#453). Patch by Erik Rose.
+- Stop using the ``assert`` statement in ``ok_`` and ``eq_`` so they work under
+ ``python -O`` (#504). Patch by Erik Rose.
+- Add loglevel option to logcapture plugin (#493). Patch by Arach Tchoupani.
+- Add doctest options flag (#7 from google code tracker). Patch by Michael
+ Forbes.
+- Add support for using 2to3 with the nosetests setuptools command. Patch by
+ Andrey Golovizin.
+- Add --cover-min-percentage flag to force test runs without sufficient
+ coverage to fail (#540). Patch by Domen Kožar.
+- Add travis-ci configuration (#545). Patch by Domen Kožar.
+- Call reactor.stop from twisted thread (#301). Patch by Adi Roiban.
+
+
1.1.2
- Fixed regression where the .coverage file was not saved (#439).
--- /dev/null
+include AUTHORS
+include CHANGELOG
+include NEWS
+include README.txt
+include lgpl.txt
+include nosetests.1
+include install-rpm.sh
+include bin/nosetests
+include distribute_setup.py
+include selftest.py
+include setup3lib.py
+include patch.py
+include nose/usage.txt
+graft doc
+graft examples
+graft unit_tests
+graft functional_tests
+prune doc/.build
-1.0!
-----
+1.2
+---
-nose version 1.0 adds support for python 3. The thanks of a grateful
-nation go out to Alex Stewart, aka foogod, for all of his great work.
+This release was a long time in coming, and wouldn't have been
+possible at all without the contributions of numerous developers and
+community members. Please check out the changelog to see what's new
+and who to thank.
+
+If you're interested in the future of nose, please take a look at the
+nose2 project on github (https://github.com/nose-devs/nose2) or pypi
+(http://pypi.python.org/pypi/nose2/0.4.1).
+
+And lastly, if you have the money to spare, please consider donating to
+the John Hunter memorial fund (http://numfocus.org/johnhunter/). We
+all give up time with our families to work on free software; now the
+free software community that has benefited so much from that time can
+give something back to his family.
-Metadata-Version: 1.0
+Metadata-Version: 1.1
Name: nose
-Version: 1.1.2
+Version: 1.2.1
Summary: nose extends unittest to make testing easier
Home-page: http://readthedocs.org/docs/nose/
Author: Jason Pellerin
coverage and profiling, flexible attribute-based test selection,
output capture and more. More information about writing plugins may be
found in the nose API documentation, here:
- http://somethingaboutorange.com/mrl/projects/nose/
+ http://readthedocs.org/docs/nose/
If you have recently reported a bug marked as fixed, or have a craving for
- the very latest, you may want the unstable development version instead:
- http://bitbucket.org/jpellerin/nose/get/tip.gz#egg=nose-dev
+ the very latest, you may want the development version instead:
+ https://github.com/nose-devs/nose/tarball/master#egg=nose-dev
Keywords: test unittest doctest automatic discovery
Platform: UNKNOWN
-Classifier: Development Status :: 4 - Beta
+Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)
Classifier: Natural Language :: English
configuration options in your project's *setup.cfg* file, or a .noserc
or nose.cfg file in your home directory. In any of these standard
.ini-style config files, you put your nosetests configuration in a
-``[nosetests]`` section. Options are the same as on the command line,
+"[nosetests]" section. Options are the same as on the command line,
with the -- prefix removed. For options that are simple switches, you
must supply a value:
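
As a sketch, a minimal ``[nosetests]`` section in *setup.cfg* might look
like this (the ``verbosity`` and ``with-doctest`` options shown are ordinary
nosetests options; substitute whichever apply to your project):

```ini
[nosetests]
# Same as passing "--verbosity=2" on the command line.
verbosity=2
# "--with-doctest" is a simple switch, so it must be given a value here.
with-doctest=1
```

Note how the ``--`` prefix is dropped, and how the switch ``--with-doctest``
becomes ``with-doctest=1`` with an explicit value.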
All configuration files that are found will be loaded and their
options combined. You can override the standard config file loading
-with the ``-c`` option.
+with the "-c" option.
Using Plugins
Produce HTML coverage information in dir
+--cover-branches
+
+ Include branch coverage in coverage report [NOSE_COVER_BRANCHES]
+
+--cover-xml
+
+ Produce XML coverage information
+
+--cover-xml-file=FILE
+
+ Produce XML coverage information in file
+
--pdb
Drop into debugger on errors
# built documents.
#
# The short X.Y version.
-version = '1.1'
+version = '1.2'
# The full version, including alpha/beta/rc tags.
-release = '1.1.2'
+release = '1.2.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
====================
You'd like to contribute to nose? Great! Now that nose is hosted on
-`Mercurial <http://selenic.com/mercurial/>`__, contributing is even easier.
+`GitHub <http://github.com/>`__, contributing is even easier.
Get the code!
-------------
-Start by getting a local working copy of nose, either stable, from google code::
+Start by getting a local working copy of nose from github::
- hg clone http://python-nose.googlecode.com/hg/ nose-stable
-
-or unstable, from bitbucket::
-
- hg clone http://bitbucket.org/jpellerin/nose/ nose-unstable
+ git clone https://github.com/nose-devs/nose
If you plan to submit changes back to the core repository, you should set up a
-public repository of your own somewhere. `Bitbucket <http://bitbucket.org>`__
-is a good place to do that. Once you've set up your bitbucket nose repository,
-if working from **stable**, pull from your working copy of nose-stable, and push
-to bitbucket. That (with occasional merging) will be your normal practice for
-keeping your repository up to date. If you're on bitbucket and working from
-**unstable**, just **fork** http://bitbucket.org/jpellerin/nose/.
+public fork of your own somewhere (`GitHub <http://github.com/>`__ is a good
+place to do that). See GitHub's `help <http://help.github.com/>`__ for details
+on how to contribute to a Git hosted project like nose.
Running nose's tests
--------------------
What to work on?
----------------
-You can find a list of open issues at nose's `google code repository
-<http://code.google.com/p/python-nose/issues>`__. If you'd like to
+You can find a list of open issues at nose's `issue tracker
+<http://github.com/nose-devs/nose/issues>`__. If you'd like to
work on an issue, leave a comment on the issue detailing how you plan
-to fix it, and where to find the Mercurial repository where you will
-publish your changes.
+to fix it, or simply submit a pull request.
I have a great idea for a plugin...
-----------------------------------
Get the code
------------
-The stable branch of nose is hosted at `google code
-<http://code.google.com/p/python-nose/>`__. You should clone this
-branch if you're developing a plugin or working on bug fixes for nose::
+nose is hosted at `GitHub
+<http://github.com/nose-devs/nose/>`__. You should clone this
+repository if you're developing a plugin or working on bug fixes for nose::
- hg clone http://python-nose.googlecode.com/hg/ nose-stable
+ git clone https://github.com/nose-devs/nose
-The **unstable** branch of nose is hosted at `bitbucket
-<http://bitbucket.org/jpellerin/nose/>`__. You should **fork** this branch if
-you are developing new features for nose. Then clone your fork, and submit
-your changes as a pull request. If you just want to use unstable, you can
-clone the branch::
-
- hg clone http://bitbucket.org/jpellerin/nose/ nose-unstable
+You should **fork** this repository if you are developing new features for
+nose. Then submit your changes as a pull request.
Read
+++ /dev/null
-def test():
- pass
+++ /dev/null
-Using custom plugins without setuptools
----------------------------------------
-
-If you have one or more custom plugins that you'd like to use with nose, but
-can't or don't want to register that plugin as a setuptools entrypoint, you
-can use the ``addplugins`` keyword argument to :func:`nose.core.main` or
-:func:`nose.core.run` to make the plugins available.
-
-To do this you would construct a launcher script for nose, something like::
-
-    from nose import main
-    from yourpackage import YourPlugin, YourOtherPlugin
-
-    if __name__ == '__main__':
-        main(addplugins=[YourPlugin(), YourOtherPlugin()])
-
-Here's an example. Say that you don't like the fact that the collect-only
-plugin outputs 'ok' for each test it finds; instead you want it to output
-'maybe.' You could modify the plugin itself, or instead, create a Maybe plugin
-that transforms the output into your desired shape.
-
-Without the plugin, we get 'ok.'
-
->>> import os
->>> support = os.path.join(os.path.dirname(__file__), 'support')
->>> from nose.plugins.plugintest import run_buffered as run
->>> argv = [__file__, '-v', support] # --collect-only
->>> run(argv=argv)
-test.test ... ok
-<BLANKLINE>
-----------------------------------------------------------------------
-Ran 1 test in ...s
-<BLANKLINE>
-OK
-
-Without '-v', we get a dot.
-
->>> run(argv=[__file__, support])
-.
-----------------------------------------------------------------------
-Ran 1 test in ...s
-<BLANKLINE>
-OK
-
-The plugin is simple. It captures and wraps the test result output stream and
-replaces 'ok' with 'maybe' and '.' with '?'.
-
->>> from nose.plugins.base import Plugin
->>> class Maybe(Plugin):
-... def setOutputStream(self, stream):
-... self.stream = stream
-... return self
-... def flush(self):
-... self.stream.flush()
-... def writeln(self, out=""):
-... self.write(out + "\n")
-... def write(self, out):
-... if out == "ok\n":
-... out = "maybe\n"
-... elif out == ".":
-... out = "?"
-... self.stream.write(out)
-
-To activate the plugin, we pass an instance in the addplugins list.
-
->>> run(argv=argv + ['--with-maybe'], addplugins=[Maybe()])
-test.test ... maybe
-<BLANKLINE>
-----------------------------------------------------------------------
-Ran 1 test in ...s
-<BLANKLINE>
-OK
-
->>> run(argv=[__file__, support, '--with-maybe'], addplugins=[Maybe()])
-?
-----------------------------------------------------------------------
-Ran 1 test in ...s
-<BLANKLINE>
-OK
-
+++ /dev/null
-def test():
- pass
-
-def test_fails():
- assert False, "This test fails"
+++ /dev/null
-def test():
- pass
+++ /dev/null
-Finding tests in all modules
-============================
-
-Normally, nose only looks for tests in modules whose names match testMatch. By
-default that means modules with 'test' or 'Test' at the start of the name, or
-directly after an underscore (_), dash (-) or other non-alphanumeric character.
-
-If you want to collect tests from all modules, use the ``--all-modules``
-command line argument to activate the :doc:`allmodules plugin
-<../../plugins/allmodules>`.
-
-.. Note ::
-
- The function :func:`nose.plugins.plugintest.run` reformats test result
- output to remove timings, which will vary from run to run, and
- redirects the output to stdout.
-
- >>> from nose.plugins.plugintest import run_buffered as run
-
-..
-
- >>> import os
- >>> support = os.path.join(os.path.dirname(__file__), 'support')
- >>> argv = [__file__, '-v', support]
-
-The target directory contains a test module and a normal module.
-
- >>> support_files = [d for d in os.listdir(support)
- ... if not d.startswith('.')
- ... and d.endswith('.py')]
- >>> support_files.sort()
- >>> support_files
- ['mod.py', 'test.py']
-
-When run without ``--all-modules``, only the test module is examined for tests.
-
- >>> run(argv=argv)
- test.test ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
-
-When ``--all-modules`` is active, both modules are examined.
-
- >>> from nose.plugins.allmodules import AllModules
- >>> argv = [__file__, '-v', '--all-modules', support]
- >>> run(argv=argv, plugins=[AllModules()]) # doctest: +REPORT_NDIFF
- mod.test ... ok
- mod.test_fails ... FAIL
- test.test ... ok
- <BLANKLINE>
- ======================================================================
- FAIL: mod.test_fails
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- AssertionError: This test fails
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 3 tests in ...s
- <BLANKLINE>
- FAILED (failures=1)
-
-
-
+++ /dev/null
-Generating HTML Coverage with nose
-----------------------------------
-
-.. Note ::
-
- HTML coverage requires Ned Batchelder's `coverage.py`_ module.
-..
-
-Console coverage output is useful but terse. For a more browseable view of
-code coverage, the coverage plugin supports basic HTML coverage output.
-
-.. hide this from the actual documentation:
- >>> from nose.plugins.plugintest import run_buffered as run
- >>> import os
- >>> support = os.path.join(os.path.dirname(__file__), 'support')
- >>> cover_html_dir = os.path.join(support, 'cover')
- >>> cover_file = os.path.join(os.getcwd(), '.coverage')
- >>> if os.path.exists(cover_file):
- ... os.unlink(cover_file)
- ...
-
-
-The console coverage output is printed, as normal.
-
- >>> from nose.plugins.cover import Coverage
- >>> cover_html_dir = os.path.join(support, 'cover')
- >>> run(argv=[__file__, '-v', '--with-coverage', '--cover-package=blah',
- ... '--cover-html', '--cover-html-dir=' + cover_html_dir,
- ... support, ],
- ... plugins=[Coverage()]) # doctest: +REPORT_NDIFF
- test_covered.test_blah ... hi
- ok
- <BLANKLINE>
- Name Stmts Miss Cover Missing
- -------------------------------------
- blah 4 1 75% 6
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
-
-The html coverage reports are saved to disk in the directory specified by the
-``--cover-html-dir`` option. In that directory you'll find ``index.html``
-which links to a detailed coverage report for each module in the report. The
-detail pages show the module source, colorized to indicate which lines are
-covered and which are not. There is an example of this HTML output in the
-`coverage.py`_ docs.
-
-.. hide this from the actual documentation:
- >>> os.path.exists(cover_file)
- True
- >>> os.path.exists(os.path.join(cover_html_dir, 'index.html'))
- True
- >>> os.path.exists(os.path.join(cover_html_dir, 'blah.html'))
- True
-
-.. _`coverage.py`: http://nedbatchelder.com/code/coverage/
+++ /dev/null
---- coverage_html.rst.orig 2010-08-31 23:13:33.000000000 -0700
-+++ coverage_html.rst 2010-08-31 23:14:25.000000000 -0700
-@@ -78,11 +78,11 @@
- </div>
- <div class="coverage">
- <div class="cov"><span class="num"><pre>1</pre></span><pre>def dostuff():</pre></div>
-- <div class="cov"><span class="num"><pre>2</pre></span><pre> print 'hi'</pre></div>
-+ <div class="cov"><span class="num"><pre>2</pre></span><pre> print('hi')</pre></div>
- <div class="skip"><span class="num"><pre>3</pre></span><pre></pre></div>
- <div class="skip"><span class="num"><pre>4</pre></span><pre></pre></div>
- <div class="cov"><span class="num"><pre>5</pre></span><pre>def notcov():</pre></div>
-- <div class="nocov"><span class="num"><pre>6</pre></span><pre> print 'not covered'</pre></div>
-+ <div class="nocov"><span class="num"><pre>6</pre></span><pre> print('not covered')</pre></div>
- <div class="skip"><span class="num"><pre>7</pre></span><pre></pre></div>
- </div>
- </body>
+++ /dev/null
-import sys
-import os
-import shutil
-from nose.plugins.skip import SkipTest
-from nose.plugins.cover import Coverage
-from nose.plugins.plugintest import munge_nose_output_for_doctest
-
-# This fixture is not reentrant because we have to clean up the files that
-# coverage produces once all tests have finished running.
-_multiprocess_shared_ = True
-
-def setup_module():
- try:
- import coverage
- if 'active' in Coverage.status:
- raise SkipTest("Coverage plugin is active. Skipping tests of "
- "plugin itself.")
- except ImportError:
- raise SkipTest("coverage module not available")
-
-def teardown_module():
- # Clean up the files produced by coverage
- cover_html_dir = os.path.join(os.path.dirname(__file__), 'support', 'cover')
- if os.path.exists(cover_html_dir):
- shutil.rmtree(cover_html_dir)
-
+++ /dev/null
-Doctest Fixtures
-----------------
-
-Doctest files, like other tests, can be made more efficient or meaningful or
-at least easier to write by judicious use of fixtures. nose supports limited
-fixtures for use with doctest files.
-
-Module-level fixtures
-=====================
-
-Fixtures for a doctest file may define any or all of the following methods for
-module-level setup:
-
-* setup
-* setup_module
-* setupModule
-* setUpModule
-
-Each module-level setup function may optionally take a single argument, the
-fixtures module itself.
-
-Example::
-
- def setup_module(module):
- module.called[:] = []
-
-Similarly, module-level teardown methods are available, which also optionally
-take the fixtures module as an argument:
-
-* teardown
-* teardown_module
-* teardownModule
-* tearDownModule
-
-Example::
-
- def teardown_module(module):
- module.called[:] = []
- module.done = True
-
-Module-level setup executes **before any tests are loaded** from the doctest
-file. This is the right place to raise :class:`nose.plugins.skip.SkipTest`,
-for example.
-
-Test-level fixtures
-===================
-
-In addition to module-level fixtures, *test*-level fixtures are
-supported. Keep in mind that in the doctest lexicon, the *test* is the *entire
-doctest file* -- not each individual example within the file. So, like the
-module-level fixtures, test-level fixtures execute *once per file*. The
-differences are that:
-
-- test-level fixtures execute **after** tests have been loaded, but **before**
- any tests have executed.
-- test-level fixtures receive the doctest :class:`doctest.DocFileCase` loaded
- from the file as their one *required* argument.
-
-**setup_test(test)** is called before the test is run.
-
-Example::
-
- def setup_test(test):
- called.append(test)
- test.globs['count'] = len(called)
- setup_test.__test__ = False
-
-**teardown_test(test)** is called after the test, unless setup raised an
-uncaught exception. The argument is the :class:`doctest.DocFileCase` object,
-*not* a unittest.TestCase.
-
-Example::
-
- def teardown_test(test):
- pass
- teardown_test.__test__ = False
-
-Bottom line: setup_test and teardown_test have access to the *doctest test*,
-while setup, setup_module, etc. have access to the *fixture*
-module. setup_module runs before tests are loaded, setup_test after.
-
-.. note ::
-
- As in the examples, it's a good idea to tag your setup_test/teardown_test
- functions with ``__test__ = False`` to avoid them being collected as tests.
-
-Lastly, the fixtures for a doctest file may supply a **globs(globs)**
-function. The dict returned by this function will be passed to the doctest
-runner as the globals available to the test. You can use this, for example, to
-easily inject a module's globals into a doctest that has been moved from the
-module to a separate file.
-
-Example
-=======
-
-This doctest has some simple fixtures:
-
-.. include :: doctest_fixtures_fixtures.py
- :literal:
-
-The ``globs`` defined in the fixtures make the variable ``something``
-available in all examples.
-
- >>> something
- 'Something?'
-
-The ``count`` variable is injected by the test-level fixture.
-
- >>> count
- 1
-
-.. warning ::
-
- This whole file is one doctest test. setup_test doesn't do what you think!
- It exists to give you access to the test case and examples, but it runs
- *once*, before all of them, not before each.
-
- >>> count
- 1
-
- Thus, ``count`` stays 1 throughout the test, no matter how many examples it
- includes.
\ No newline at end of file
+++ /dev/null
-called = []
-
-def globs(globs):
- globs['something'] = 'Something?'
- return globs
-
-def setup_module(module):
- module.called[:] = []
-
-def setup_test(test):
- called.append(test)
- test.globs['count'] = len(called)
-setup_test.__test__ = False
-
-def teardown_test(test):
- pass
-teardown_test.__test__ = False
+++ /dev/null
-[DEFAULT]
-can_frobnicate = 1
-likes_cheese = 0
+++ /dev/null
-Running Initialization Code Before the Test Run
------------------------------------------------
-
-Many applications, especially those using web frameworks like Pylons_
-or Django_, can't be tested without first being configured or
-otherwise initialized. Plugins can fulfill this requirement by
-implementing :meth:`begin() <nose.plugins.base.IPluginInterface.begin>`.
-
-In this example, we'll use a very simple example: a widget class that
-can't be tested without a configuration.
-
-Here's the widget class. It's configured at the class or instance
-level by setting the ``cfg`` attribute to a dictionary.
-
- >>> class ConfigurableWidget(object):
- ... cfg = None
- ... def can_frobnicate(self):
- ... return self.cfg.get('can_frobnicate', True)
- ... def likes_cheese(self):
- ... return self.cfg.get('likes_cheese', True)
-
-The tests verify that the widget's methods can be called without
-raising any exceptions.
-
- >>> import unittest
- >>> class TestConfigurableWidget(unittest.TestCase):
- ... longMessage = False
- ... def setUp(self):
- ... self.widget = ConfigurableWidget()
- ... def test_can_frobnicate(self):
- ... """Widgets can frobnicate (or not)"""
- ... self.widget.can_frobnicate()
- ... def test_likes_cheese(self):
- ... """Widgets might like cheese"""
- ... self.widget.likes_cheese()
- ... def shortDescription(self): # 2.7 compat
- ... try:
- ... doc = self._testMethodDoc
- ... except AttributeError:
- ... # 2.4 compat
- ... doc = self._TestCase__testMethodDoc
- ... return doc and doc.split("\n")[0].strip() or None
-
-The tests are bundled into a suite that we can pass to the test runner.
-
- >>> def suite():
- ... return unittest.TestSuite([
- ... TestConfigurableWidget('test_can_frobnicate'),
- ... TestConfigurableWidget('test_likes_cheese')])
-
-When we run tests without first configuring the ConfigurableWidget,
-the tests fail.
-
-.. Note ::
-
- The function :func:`nose.plugins.plugintest.run` reformats test result
- output to remove timings, which will vary from run to run, and
- redirects the output to stdout.
-
- >>> from nose.plugins.plugintest import run_buffered as run
-
-..
-
- >>> argv = [__file__, '-v']
- >>> run(argv=argv, suite=suite()) # doctest: +REPORT_NDIFF
- Widgets can frobnicate (or not) ... ERROR
- Widgets might like cheese ... ERROR
- <BLANKLINE>
- ======================================================================
- ERROR: Widgets can frobnicate (or not)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- AttributeError: 'NoneType' object has no attribute 'get'
- <BLANKLINE>
- ======================================================================
- ERROR: Widgets might like cheese
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- AttributeError: 'NoneType' object has no attribute 'get'
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 2 tests in ...s
- <BLANKLINE>
- FAILED (errors=2)
-
-To configure the widget system before running tests, write a plugin
-that implements :meth:`begin() <nose.plugins.base.IPluginInterface.begin>`
-and initializes the system with a hard-coded configuration. (Later, we'll
-write a better plugin that accepts a command-line argument specifying the
-configuration file.)
-
- >>> from nose.plugins import Plugin
- >>> class ConfiguringPlugin(Plugin):
- ... enabled = True
- ... def configure(self, options, conf):
- ... pass # always on
- ... def begin(self):
- ... ConfigurableWidget.cfg = {}
-
-Now configure and execute a new test run using the plugin, which will
-inject the hard-coded configuration.
-
- >>> run(argv=argv, suite=suite(),
- ... plugins=[ConfiguringPlugin()]) # doctest: +REPORT_NDIFF
- Widgets can frobnicate (or not) ... ok
- Widgets might like cheese ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 2 tests in ...s
- <BLANKLINE>
- OK
-
-This time the tests pass, because the widget class is configured.
-
-But the ConfiguringPlugin is pretty lame -- the configuration it
-installs is hard coded. A better plugin would allow the user to
-specify a configuration file on the command line:
-
- >>> class BetterConfiguringPlugin(Plugin):
- ... def options(self, parser, env={}):
- ... parser.add_option('--widget-config', action='store',
- ... dest='widget_config', default=None,
- ... help='Specify path to widget config file')
- ... def configure(self, options, conf):
- ... if options.widget_config:
- ... self.load_config(options.widget_config)
- ... self.enabled = True
- ... def begin(self):
- ... ConfigurableWidget.cfg = self.cfg
- ... def load_config(self, path):
- ... from ConfigParser import ConfigParser
- ... p = ConfigParser()
- ... p.read([path])
- ... self.cfg = dict(p.items('DEFAULT'))
-
-To use the plugin, we need a config file.
-
- >>> import os
- >>> cfg_file = os.path.join(os.path.dirname(__file__), 'example.cfg')
- >>> bytes = open(cfg_file, 'w').write("""\
- ... [DEFAULT]
- ... can_frobnicate = 1
- ... likes_cheese = 0
- ... """)
-
-Now we can execute a test run using that configuration, after first
-resetting the widget system to an unconfigured state.
-
- >>> ConfigurableWidget.cfg = None
- >>> argv = [__file__, '-v', '--widget-config', cfg_file]
- >>> run(argv=argv, suite=suite(),
- ... plugins=[BetterConfiguringPlugin()]) # doctest: +REPORT_NDIFF
- Widgets can frobnicate (or not) ... ok
- Widgets might like cheese ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 2 tests in ...s
- <BLANKLINE>
- OK
-
-.. _Pylons: http://pylonshq.com/
-.. _Django: http://www.djangoproject.com/
+++ /dev/null
---- init_plugin.rst.orig 2010-08-31 10:36:54.000000000 -0700
-+++ init_plugin.rst 2010-08-31 10:37:30.000000000 -0700
-@@ -143,6 +143,7 @@
- ... can_frobnicate = 1
- ... likes_cheese = 0
- ... """)
-+ 46
-
- Now we can execute a test run using that configuration, after first
- resetting the widget system to an unconfigured state.
+++ /dev/null
-def test_spam():
- assert True
-
+++ /dev/null
-def test_eggs():
- assert True
-
+++ /dev/null
-Excluding Unwanted Packages
----------------------------
-
-Normally, nose discovery descends into all packages. Plugins can
-change this behavior by implementing :meth:`IPluginInterface.wantDirectory()`.
-
-In this example, we have a wanted package called ``wanted_package``
-and an unwanted package called ``unwanted_package``.
-
- >>> import os
- >>> support = os.path.join(os.path.dirname(__file__), 'support')
- >>> support_files = [d for d in os.listdir(support)
- ... if not d.startswith('.')]
- >>> support_files.sort()
- >>> support_files
- ['unwanted_package', 'wanted_package']
-
-When we run nose normally, tests are loaded from both packages.
-
-.. Note ::
-
- The function :func:`nose.plugins.plugintest.run` reformats test result
- output to remove timings, which will vary from run to run, and
- redirects the output to stdout.
-
- >>> from nose.plugins.plugintest import run_buffered as run
-
-..
-
- >>> argv = [__file__, '-v', support]
- >>> run(argv=argv) # doctest: +REPORT_NDIFF
- unwanted_package.test_spam.test_spam ... ok
- wanted_package.test_eggs.test_eggs ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 2 tests in ...s
- <BLANKLINE>
- OK
-
-To exclude the tests in the unwanted package, we can write a simple
-plugin that implements :meth:`IPluginInterface.wantDirectory()` and returns ``False`` if
-the basename of the directory is ``"unwanted_package"``. This will
-prevent nose from descending into the unwanted package.
-
- >>> from nose.plugins import Plugin
- >>> class UnwantedPackagePlugin(Plugin):
- ... # no command line arg needed to activate plugin
- ... enabled = True
- ... name = "unwanted-package"
- ...
- ... def configure(self, options, conf):
- ... pass # always on
- ...
- ... def wantDirectory(self, dirname):
- ... want = None
- ... if os.path.basename(dirname) == "unwanted_package":
- ... want = False
- ... return want
-
-In the next test run we use the plugin, and the unwanted package is
-not discovered.
-
- >>> run(argv=argv,
- ... plugins=[UnwantedPackagePlugin()]) # doctest: +REPORT_NDIFF
- wanted_package.test_eggs.test_eggs ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
\ No newline at end of file
+++ /dev/null
-nose.plugins.plugintest, os.environ and sys.argv
-------------------------------------------------
-
-:class:`nose.plugins.plugintest.PluginTester` and
-:func:`nose.plugins.plugintest.run` are utilities for testing nose
-plugins. When testing plugins, it should be possible to control the
-environment seen by plugins under test, and that environment should never
-be affected by ``os.environ`` or ``sys.argv``.
-
- >>> import os
- >>> import sys
- >>> import unittest
- >>> import nose.config
- >>> from nose.plugins import Plugin
- >>> from nose.plugins.builtin import FailureDetail, Capture
- >>> from nose.plugins.plugintest import PluginTester
-
-Our test plugin takes no command-line arguments and simply prints the
-environment it's given by nose.
-
- >>> class PrintEnvPlugin(Plugin):
- ... name = "print-env"
- ...
- ... # no command line arg needed to activate plugin
- ... enabled = True
- ... def configure(self, options, conf):
- ... if not self.can_configure:
- ... return
- ... self.conf = conf
- ...
- ... def options(self, parser, env={}):
- ... print "env:", env
-
-To test the argv, we use a config class that prints the argv it's
-given by nose. We need to monkeypatch nose.config.Config, so that we
-can test the cases where that is used as the default.
-
- >>> old_config = nose.config.Config
- >>> class PrintArgvConfig(old_config):
- ...
- ... def configure(self, argv=None, doc=None):
- ... print "argv:", argv
- ... old_config.configure(self, argv, doc)
- >>> nose.config.Config = PrintArgvConfig
-
-The class under test, PluginTester, is designed to be used by
-subclassing.
-
- >>> class Tester(PluginTester):
- ... activate = "-v"
- ... plugins = [PrintEnvPlugin(),
- ... FailureDetail(),
- ... Capture(),
- ... ]
- ...
- ... def makeSuite(self):
- ... return unittest.TestSuite(tests=[])
-
-For the purposes of this test, we need a known ``os.environ`` and
-``sys.argv``.
-
- >>> old_environ = os.environ
- >>> old_argv = sys.argv
- >>> os.environ = {"spam": "eggs"}
- >>> sys.argv = ["spamtests"]
-
-PluginTester always uses ``[nosetests, self.activate]`` as its argv.
-If ``env`` is not overridden, the default is an empty ``env``.
-
- >>> tester = Tester()
- >>> tester.setUp()
- argv: ['nosetests', '-v']
- env: {}
-
-An empty ``env`` is respected...
-
- >>> class EmptyEnvTester(Tester):
- ... env = {}
- >>> tester = EmptyEnvTester()
- >>> tester.setUp()
- argv: ['nosetests', '-v']
- env: {}
-
-... as is a non-empty ``env``.
-
- >>> class NonEmptyEnvTester(Tester):
- ... env = {"foo": "bar"}
- >>> tester = NonEmptyEnvTester()
- >>> tester.setUp()
- argv: ['nosetests', '-v']
- env: {'foo': 'bar'}
-
-
-``nose.plugins.plugintest.run()`` should work analogously.
-
- >>> from nose.plugins.plugintest import run_buffered as run
- >>> run(suite=unittest.TestSuite(tests=[]),
- ... plugins=[PrintEnvPlugin()]) # doctest: +REPORT_NDIFF
- argv: ['nosetests', '-v']
- env: {}
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 0 tests in ...s
- <BLANKLINE>
- OK
- >>> run(env={},
- ... suite=unittest.TestSuite(tests=[]),
- ... plugins=[PrintEnvPlugin()]) # doctest: +REPORT_NDIFF
- argv: ['nosetests', '-v']
- env: {}
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 0 tests in ...s
- <BLANKLINE>
- OK
- >>> run(env={"foo": "bar"},
- ... suite=unittest.TestSuite(tests=[]),
- ... plugins=[PrintEnvPlugin()]) # doctest: +REPORT_NDIFF
- argv: ['nosetests', '-v']
- env: {'foo': 'bar'}
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 0 tests in ...s
- <BLANKLINE>
- OK
-
-An explicit argv parameter is honoured:
-
- >>> run(argv=["spam"],
- ... suite=unittest.TestSuite(tests=[]),
- ... plugins=[PrintEnvPlugin()]) # doctest: +REPORT_NDIFF
- argv: ['spam']
- env: {}
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 0 tests in ...s
- <BLANKLINE>
- OK
-
-An explicit config parameter with an env is honoured:
-
- >>> from nose.plugins.manager import PluginManager
- >>> manager = PluginManager(plugins=[PrintEnvPlugin()])
- >>> config = PrintArgvConfig(env={"foo": "bar"}, plugins=manager)
- >>> run(config=config,
- ... suite=unittest.TestSuite(tests=[])) # doctest: +REPORT_NDIFF
- argv: ['nosetests', '-v']
- env: {'foo': 'bar'}
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 0 tests in ...s
- <BLANKLINE>
- OK
-
-
-Clean up.
-
- >>> os.environ = old_environ
- >>> sys.argv = old_argv
- >>> nose.config.Config = old_config
+++ /dev/null
-When Plugins Fail
------------------
-
-Plugin methods should not fail silently. When a plugin method raises
-an exception before or during the execution of a test, the exception
-will be wrapped in a :class:`nose.failure.Failure` instance and appear as a
-failing test. Exceptions raised at other times -- such as in the
-preparation phase with ``prepareTestLoader`` or ``prepareTestResult``,
-or after a test executes, in ``afterTest`` -- will stop the entire
-test run.
-
- >>> import os
- >>> import sys
- >>> from nose.plugins import Plugin
- >>> from nose.plugins.plugintest import run_buffered as run
-
-Our first test plugins take no command-line arguments and raise
-AttributeError in beforeTest and afterTest.
-
- >>> class EnabledPlugin(Plugin):
- ... """Plugin that takes no command-line arguments"""
- ...
- ... enabled = True
- ...
- ... def configure(self, options, conf):
- ... pass
- ... def options(self, parser, env={}):
- ... pass
- >>> class FailBeforePlugin(EnabledPlugin):
- ... name = "fail-before"
- ...
- ... def beforeTest(self, test):
- ... raise AttributeError()
- >>> class FailAfterPlugin(EnabledPlugin):
- ... name = "fail-after"
- ...
- ... def afterTest(self, test):
- ... raise AttributeError()
-
-Running tests with the fail-before plugin enabled will result in all
-tests failing.
-
- >>> support = os.path.join(os.path.dirname(__file__), 'support')
- >>> suitepath = os.path.join(support, 'test_spam.py')
- >>> run(argv=['nosetests', suitepath],
- ... plugins=[FailBeforePlugin()])
- EE
- ======================================================================
- ERROR: test_spam.test_spam
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- AttributeError
- <BLANKLINE>
- ======================================================================
- ERROR: test_spam.test_eggs
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- AttributeError
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 0 tests in ...s
- <BLANKLINE>
- FAILED (errors=2)
-
-But with the fail-after plugin, the entire test run will fail.
-
- >>> run(argv=['nosetests', suitepath],
- ... plugins=[FailAfterPlugin()])
- Traceback (most recent call last):
- ...
- AttributeError
-
-Likewise, since the next plugin fails in a preparatory method, outside
-of test execution, the entire test run fails when the plugin is used.
-
- >>> class FailPreparationPlugin(EnabledPlugin):
- ... name = "fail-prepare"
- ...
- ... def prepareTestLoader(self, loader):
- ... raise TypeError("That loader is not my type")
- >>> run(argv=['nosetests', suitepath],
- ... plugins=[FailPreparationPlugin()])
- Traceback (most recent call last):
- ...
- TypeError: That loader is not my type
-
-
-Even AttributeErrors and TypeErrors are not silently suppressed as
-they used to be for some generative plugin methods (issue152).
-
-Before issue152 was fixed, these methods caught TypeError and
-AttributeError without recording the exception: .loadTestsFromDir(),
-.loadTestsFromModule(), .loadTestsFromTestCase(),
-.loadTestsFromTestClass(), and .makeTest(). Now the exception is
-still caught, but logged as a Failure.
-
- >>> class FailLoadPlugin(EnabledPlugin):
- ... name = "fail-load"
- ...
- ... def loadTestsFromModule(self, module):
- ... # we're testing exception handling behaviour during
- ... # iteration, so be a generator function, without
- ... # actually yielding any tests
- ... if False:
- ... yield None
- ... raise TypeError("bug in plugin")
- >>> run(argv=['nosetests', suitepath],
- ... plugins=[FailLoadPlugin()])
- ..E
- ======================================================================
- ERROR: Failure: TypeError (bug in plugin)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- TypeError: bug in plugin
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 3 tests in ...s
- <BLANKLINE>
- FAILED (errors=1)
-
-
-Also, before issue152 was resolved, .loadTestsFromFile() and
-.loadTestsFromName() didn't catch these errors at all, so the
-following test would crash nose:
-
- >>> class FailLoadFromNamePlugin(EnabledPlugin):
- ... name = "fail-load-from-name"
- ...
- ... def loadTestsFromName(self, name, module=None, importPath=None):
- ... if False:
- ... yield None
- ... raise TypeError("bug in plugin")
- >>> run(argv=['nosetests', suitepath],
- ... plugins=[FailLoadFromNamePlugin()])
- E
- ======================================================================
- ERROR: Failure: TypeError (bug in plugin)
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- TypeError: bug in plugin
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- FAILED (errors=1)
+++ /dev/null
-def test_spam():
- assert True
-
-def test_eggs():
- pass
+++ /dev/null
-Minimal plugin
---------------
-
-Plugins work as long as they implement the minimal interface required
-by nose.plugins.base. They do not have to derive from
-nose.plugins.Plugin.
-
- >>> class NullPlugin(object):
- ...
- ... enabled = True
- ... name = "null"
- ... score = 100
- ...
- ... def options(self, parser, env):
- ... pass
- ...
- ... def configure(self, options, conf):
- ... pass
- >>> import unittest
- >>> from nose.plugins.plugintest import run_buffered as run
- >>> run(suite=unittest.TestSuite(tests=[]),
- ... plugins=[NullPlugin()]) # doctest: +REPORT_NDIFF
- ----------------------------------------------------------------------
- Ran 0 tests in ...s
- <BLANKLINE>
- OK
-
-Plugins can derive from nose.plugins.base and do nothing except set a
-name.
-
- >>> import os
- >>> from nose.plugins import Plugin
- >>> class DerivedNullPlugin(Plugin):
- ...
- ... name = "derived-null"
-
-An enabled plugin that is otherwise empty:
-
- >>> class EnabledDerivedNullPlugin(Plugin):
- ...
- ... enabled = True
- ... name = "enabled-derived-null"
- ...
- ... def options(self, parser, env=os.environ):
- ... pass
- ...
- ... def configure(self, options, conf):
- ... if not self.can_configure:
- ... return
- ... self.conf = conf
- >>> run(suite=unittest.TestSuite(tests=[]),
- ... plugins=[DerivedNullPlugin(), EnabledDerivedNullPlugin()])
- ... # doctest: +REPORT_NDIFF
- ----------------------------------------------------------------------
- Ran 0 tests in ...s
- <BLANKLINE>
- OK
+++ /dev/null
-import os
-import unittest
-from nose.plugins import Plugin
-from nose.plugins.plugintest import PluginTester
-from nose.plugins.manager import ZeroNinePlugin
-
-here = os.path.abspath(os.path.dirname(__file__))
-
-support = os.path.join(os.path.dirname(os.path.dirname(here)), 'support')
-
-
-class EmptyPlugin(Plugin):
- pass
-
-class TestEmptyPlugin(PluginTester, unittest.TestCase):
- activate = '--with-empty'
- plugins = [ZeroNinePlugin(EmptyPlugin())]
- suitepath = os.path.join(here, 'empty_plugin.rst')
-
- def test_empty_zero_nine_does_not_crash(self):
- print self.output
- assert "'EmptyPlugin' object has no attribute 'loadTestsFromPath'" \
- not in self.output
-
-
-
+++ /dev/null
-Failure of Errorclasses
------------------------
-
-Errorclasses (skips, deprecations, etc.) define whether or not they
-represent test failures.
-
- >>> import os
- >>> import sys
- >>> from nose.plugins.plugintest import run_buffered as run
- >>> from nose.plugins.skip import Skip
- >>> from nose.plugins.deprecated import Deprecated
- >>> support = os.path.join(os.path.dirname(__file__), 'support')
- >>> sys.path.insert(0, support)
- >>> from errorclass_failure_plugin import Todo, TodoPlugin, \
- ... NonFailureTodoPlugin
- >>> todo_test = os.path.join(support, 'errorclass_failing_test.py')
- >>> misc_test = os.path.join(support, 'errorclass_tests.py')
-
-nose.plugins.errorclass.ErrorClass has an argument ``isfailure``. With a
-true isfailure, when the errorclass' exception is raised by a test,
-tracebacks are printed.
-
- >>> run(argv=["nosetests", "-v", "--with-todo", todo_test],
- ... plugins=[TodoPlugin()]) # doctest: +REPORT_NDIFF
- errorclass_failing_test.test_todo ... TODO: fix me
- errorclass_failing_test.test_2 ... ok
- <BLANKLINE>
- ======================================================================
- TODO: errorclass_failing_test.test_todo
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- Todo: fix me
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 2 tests in ...s
- <BLANKLINE>
- FAILED (TODO=1)
-
-
-Also, ``--stop`` stops the test run.
-
- >>> run(argv=["nosetests", "-v", "--with-todo", "--stop", todo_test],
- ... plugins=[TodoPlugin()]) # doctest: +REPORT_NDIFF
- errorclass_failing_test.test_todo ... TODO: fix me
- <BLANKLINE>
- ======================================================================
- TODO: errorclass_failing_test.test_todo
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- Todo: fix me
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- FAILED (TODO=1)
-
-
-With a false .isfailure, errorclass exceptions raised by tests are
-treated as "ignored errors." For ignored errors, tracebacks are not
-printed, and the test run does not stop.
-
- >>> run(argv=["nosetests", "-v", "--with-non-failure-todo", "--stop",
- ... todo_test],
- ... plugins=[NonFailureTodoPlugin()]) # doctest: +REPORT_NDIFF
- errorclass_failing_test.test_todo ... TODO: fix me
- errorclass_failing_test.test_2 ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 2 tests in ...s
- <BLANKLINE>
- OK (TODO=1)
-
-
-Exception detail strings of errorclass errors are always printed when
--v is in effect, regardless of whether the error is ignored. Note
-that exception detail strings may have more than one line.
-
- >>> run(argv=["nosetests", "-v", "--with-todo", misc_test],
- ... plugins=[TodoPlugin(), Skip(), Deprecated()])
- ... # doctest: +REPORT_NDIFF
- errorclass_tests.test_todo ... TODO: fix me
- errorclass_tests.test_2 ... ok
- errorclass_tests.test_3 ... SKIP: skipety-skip
- errorclass_tests.test_4 ... SKIP
- errorclass_tests.test_5 ... DEPRECATED: spam
- eggs
- <BLANKLINE>
- spam
- errorclass_tests.test_6 ... DEPRECATED: spam
- <BLANKLINE>
- ======================================================================
- TODO: errorclass_tests.test_todo
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- Todo: fix me
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 6 tests in ...s
- <BLANKLINE>
- FAILED (DEPRECATED=2, SKIP=2, TODO=1)
-
-Without -v, the exception detail strings are only displayed if the
-error is not ignored (otherwise, there's no traceback).
-
- >>> run(argv=["nosetests", "--with-todo", misc_test],
- ... plugins=[TodoPlugin(), Skip(), Deprecated()])
- ... # doctest: +REPORT_NDIFF
- T.SSDD
- ======================================================================
- TODO: errorclass_tests.test_todo
- ----------------------------------------------------------------------
- Traceback (most recent call last):
- ...
- Todo: fix me
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 6 tests in ...s
- <BLANKLINE>
- FAILED (DEPRECATED=2, SKIP=2, TODO=1)
-
->>> sys.path.remove(support)
+++ /dev/null
-from errorclass_failure_plugin import Todo
-
-def test_todo():
- raise Todo("fix me")
-
-def test_2():
- pass
+++ /dev/null
-from nose.plugins.errorclass import ErrorClass, ErrorClassPlugin
-
-class Todo(Exception):
- pass
-
-class TodoPlugin(ErrorClassPlugin):
-
- name = "todo"
-
- todo = ErrorClass(Todo, label='TODO', isfailure=True)
-
-class NonFailureTodoPlugin(ErrorClassPlugin):
-
- name = "non-failure-todo"
-
- todo = ErrorClass(Todo, label='TODO', isfailure=False)
+++ /dev/null
-from errorclass_failure_plugin import Todo
-from nose import SkipTest, DeprecatedTest
-
-def test_todo():
- raise Todo('fix me')
-
-def test_2():
- pass
-
-def test_3():
- raise SkipTest('skipety-skip')
-
-def test_4():
- raise SkipTest()
-
-def test_5():
- raise DeprecatedTest('spam\neggs\n\nspam')
-
-def test_6():
- raise DeprecatedTest('spam')
+++ /dev/null
-Importing Tests
----------------
-
-When a package imports tests from another package, the tests are
-**completely** relocated into the importing package. This means that the
-fixtures from the source package are **not** run when the tests in the
-importing package are executed.
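The relocation above follows from how tests are collected. A minimal sketch of the mechanism (the module objects and names here are invented for illustration, mirroring the packages used below): a function's ``__module__`` still names the module where it was *defined*, but nose names and groups tests by the module it *collected* them from, so the importing package's fixtures apply.

```python
import types

# A "source" module that defines a test function.
source = types.ModuleType('package1.test_module')
exec("def test_function():\n    pass", source.__dict__)

# An "importing" module, as if it did: from package1.test_module import test_function
importer = types.ModuleType('package2f.test_module')
importer.test_function = source.test_function

# Collection by attribute lookup sees the importing module's name,
# even though the function remembers where it was defined.
collected_name = "%s.%s" % (importer.__name__, importer.test_function.__name__)
print(collected_name)                       # package2f.test_module.test_function
print(source.test_function.__module__)      # package1.test_module
```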
-
-For example, consider this collection of packages:
-
- >>> import os
- >>> support = os.path.join(os.path.dirname(__file__), 'support')
- >>> from nose.util import ls_tree
- >>> print ls_tree(support) # doctest: +REPORT_NDIFF
- |-- package1
- | |-- __init__.py
- | `-- test_module.py
- |-- package2c
- | |-- __init__.py
- | `-- test_module.py
- `-- package2f
- |-- __init__.py
- `-- test_module.py
-
-In these packages, the tests are all defined in package1, and are imported
-into package2f and package2c.
-
-.. Note ::
-
- The run() function in :mod:`nose.plugins.plugintest` reformats test result
- output to remove timings, which will vary from run to run, and
- redirects the output to stdout.
-
- >>> from nose.plugins.plugintest import run_buffered as run
-
-..
-
-package1 has fixtures, which we can see by running all of the tests. Note
-below that the test names reflect the modules into which the tests are
-imported, not the source modules.
-
- >>> argv = [__file__, '-v', support]
- >>> run(argv=argv) # doctest: +REPORT_NDIFF
- package1 setup
- test (package1.test_module.TestCase) ... ok
- package1.test_module.TestClass.test_class ... ok
- package1.test_module.test_function ... ok
- package2c setup
- test (package2c.test_module.TestCase) ... ok
- package2c.test_module.TestClass.test_class ... ok
- package2f setup
- package2f.test_module.test_function ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 6 tests in ...s
- <BLANKLINE>
- OK
-
-When tests are run in package2f or package2c, only the fixtures from those
-packages are executed.
-
- >>> argv = [__file__, '-v', os.path.join(support, 'package2f')]
- >>> run(argv=argv) # doctest: +REPORT_NDIFF
- package2f setup
- package2f.test_module.test_function ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
- >>> argv = [__file__, '-v', os.path.join(support, 'package2c')]
- >>> run(argv=argv) # doctest: +REPORT_NDIFF
- package2c setup
- test (package2c.test_module.TestCase) ... ok
- package2c.test_module.TestClass.test_class ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 2 tests in ...s
- <BLANKLINE>
- OK
-
-This also applies when only the specific tests are selected via the
-command-line.
-
- >>> argv = [__file__, '-v',
- ... os.path.join(support, 'package2c', 'test_module.py') +
- ... ':TestClass.test_class']
- >>> run(argv=argv) # doctest: +REPORT_NDIFF
- package2c setup
- package2c.test_module.TestClass.test_class ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
- >>> argv = [__file__, '-v',
- ... os.path.join(support, 'package2c', 'test_module.py') +
- ... ':TestCase.test']
- >>> run(argv=argv) # doctest: +REPORT_NDIFF
- package2c setup
- test (package2c.test_module.TestCase) ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
- >>> argv = [__file__, '-v',
- ... os.path.join(support, 'package2f', 'test_module.py') +
- ... ':test_function']
- >>> run(argv=argv) # doctest: +REPORT_NDIFF
- package2f setup
- package2f.test_module.test_function ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
+++ /dev/null
-def setup():
- print 'package1 setup'
+++ /dev/null
-import unittest
-
-def test_function():
- pass
-
-class TestClass:
- def test_class(self):
- pass
-
-class TestCase(unittest.TestCase):
- def test(self):
- pass
+++ /dev/null
-def setup():
- print 'package2c setup'
+++ /dev/null
-from package1.test_module import TestClass, TestCase
+++ /dev/null
-def setup():
- print 'package2f setup'
+++ /dev/null
-from package1.test_module import test_function
+++ /dev/null
-Parallel Testing with nose
---------------------------
-
-.. Note ::
-
- Use of the multiprocess plugin on python 2.5 or earlier requires
- the multiprocessing_ module, available from PyPI and at
- http://code.google.com/p/python-multiprocessing/.
-
-..
-
-Using the `nose.plugins.multiprocess` plugin, you can parallelize a
-test run across a configurable number of worker processes. While this
-can speed up CPU-bound test runs, it is mainly useful for IO-bound
-tests that spend most of their time waiting for data to arrive from
-elsewhere, and so benefit most from parallelization.
-
-.. _multiprocessing : http://code.google.com/p/python-multiprocessing/
-
-How tests are distributed
-=========================
-
-The ideal case would be to dispatch each test to a worker process separately,
-and to have enough worker processes that the entire test run takes only as
-long as the slowest test. This ideal is not attainable in all cases, however,
-because many test suites depend on context (class, module or package)
-fixtures.
-
-Some context fixtures are re-entrant -- that is, they can be called many times
-concurrently. Other context fixtures can be shared among tests running in
-different processes. Still others must be run once and only once for a given
-set of tests, and must be in the same process as the tests themselves.
-
-The plugin can't know the difference between these types of context fixtures
-unless you tell it, so the default behavior is to dispatch the entire context
-suite to a worker as a unit. This way, the fixtures are run once, in the same
-process as the tests. (That, of course, is how they are run when the plugin
-is not active: All tests are run in a single process.)
-
-Controlling distribution
-^^^^^^^^^^^^^^^^^^^^^^^^
-
-There are two context-level variables that you can use to control this default
-behavior.
-
-If a context's fixtures are re-entrant, set ``_multiprocess_can_split_ = True``
-in the context, and the plugin will dispatch tests in suites bound to that
-context as if the context had no fixtures. This means that the fixtures will
-execute multiple times, typically once per test, and concurrently.
-
-For example, a module that contains re-entrant fixtures might look like::
-
- _multiprocess_can_split_ = True
-
- def setup():
- ...
-
-A class might look like::
-
- class TestClass:
- _multiprocess_can_split_ = True
-
- @classmethod
- def setup_class(cls):
- ...
-
-Alternatively, if a context's fixtures may only be run once, or may not run
-concurrently, but *may* be shared by tests running in different processes
--- for instance a package-level fixture that starts an external http server or
-initializes a shared database -- then set ``_multiprocess_shared_ = True`` in
-the context. Fixtures for contexts so marked will execute in the primary nose
-process, and tests in those contexts will be individually dispatched to run in
-parallel.
-
-A module with shareable fixtures might look like::
-
- _multiprocess_shared_ = True
-
- def setup():
- ...
-
-A class might look like::
-
- class TestClass:
- _multiprocess_shared_ = True
-
- @classmethod
- def setup_class(cls):
- ...
-
-These options are mutually exclusive: you can't mark a context as both
-splittable and shareable.
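The decision procedure implied by these markers can be sketched as follows. This is illustrative only, not nose's actual dispatch code: ``classify_context`` and the ``'grouped'`` label are names invented here.

```python
import types

def classify_context(context):
    """Classify a test context (module or class) by its multiprocess markers."""
    can_split = getattr(context, '_multiprocess_can_split_', False)
    shared = getattr(context, '_multiprocess_shared_', False)
    if can_split and shared:
        # the two markers are mutually exclusive
        raise ValueError("a context cannot be both splittable and shared")
    if can_split:
        return 'split'    # fixtures are re-entrant: dispatch tests individually
    if shared:
        return 'shared'   # run fixtures once, in the primary nose process
    return 'grouped'      # default: dispatch the whole context suite to one worker

shared_mod = types.ModuleType('test_shared')
shared_mod._multiprocess_shared_ = True
print(classify_context(shared_mod))             # shared
print(classify_context(types.ModuleType('m')))  # grouped
```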
-
-Example
-~~~~~~~
-
-Consider three versions of the same test suite. One
-is marked ``_multiprocess_shared_``, another ``_multiprocess_can_split_``,
-and the third is unmarked. They all define the same fixtures:
-
- called = []
-
- def setup():
- print "setup called"
- called.append('setup')
-
- def teardown():
- print "teardown called"
- called.append('teardown')
-
-And each has two tests that just test that ``setup()`` has been called
-once and only once.
-
-When run without the multiprocess plugin, fixtures for the shared,
-can-split and not-shared test suites execute at the same times, and
-all tests pass.
-
-.. Note ::
-
- The run() function in :mod:`nose.plugins.plugintest` reformats test result
- output to remove timings, which will vary from run to run, and
- redirects the output to stdout.
-
- >>> from nose.plugins.plugintest import run_buffered as run
-
-..
-
- >>> import os
- >>> support = os.path.join(os.path.dirname(__file__), 'support')
- >>> test_not_shared = os.path.join(support, 'test_not_shared.py')
- >>> test_shared = os.path.join(support, 'test_shared.py')
- >>> test_can_split = os.path.join(support, 'test_can_split.py')
-
-The module with shared fixtures passes.
-
- >>> run(argv=['nosetests', '-v', test_shared]) #doctest: +REPORT_NDIFF
- setup called
- test_shared.TestMe.test_one ... ok
- test_shared.test_a ... ok
- test_shared.test_b ... ok
- teardown called
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 3 tests in ...s
- <BLANKLINE>
- OK
-
-As does the module with no fixture annotations.
-
- >>> run(argv=['nosetests', '-v', test_not_shared]) #doctest: +REPORT_NDIFF
- setup called
- test_not_shared.TestMe.test_one ... ok
- test_not_shared.test_a ... ok
- test_not_shared.test_b ... ok
- teardown called
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 3 tests in ...s
- <BLANKLINE>
- OK
-
-And the module that marks its fixtures as re-entrant.
-
- >>> run(argv=['nosetests', '-v', test_can_split]) #doctest: +REPORT_NDIFF
- setup called
- test_can_split.TestMe.test_one ... ok
- test_can_split.test_a ... ok
- test_can_split.test_b ... ok
- teardown called
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 3 tests in ...s
- <BLANKLINE>
- OK
-
-However, when run with the ``--processes=2`` switch, each test module
-behaves differently.
-
- >>> from nose.plugins.multiprocess import MultiProcess
-
-The module marked ``_multiprocess_shared_`` executes correctly, although as with
-any use of the multiprocess plugin, the order in which the tests execute is
-indeterminate.
-
-First we have to reset all of the test modules.
-
- >>> import sys
- >>> sys.modules['test_not_shared'].called[:] = []
- >>> sys.modules['test_can_split'].called[:] = []
-
-Then we can run the tests again with the multiprocess plugin active.
-
- >>> run(argv=['nosetests', '-v', '--processes=2', test_shared],
- ... plugins=[MultiProcess()]) #doctest: +ELLIPSIS
- setup called
- test_shared.... ok
- teardown called
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 3 tests in ...s
- <BLANKLINE>
- OK
-
-As does the one not marked -- however in this case, ``--processes=2``
-will do *nothing at all*: since the tests are in a module with
-unmarked fixtures, the entire test module will be dispatched to a
-single runner process.
-
-However, the module marked ``_multiprocess_can_split_`` will fail, since
-the fixtures *are not reentrant*. A module such as this *must not* be
-marked ``_multiprocess_can_split_``, or tests will fail in one or more
-runner processes as fixtures are re-executed.
-
-We have to reset all of the test modules again.
-
- >>> import sys
- >>> sys.modules['test_not_shared'].called[:] = []
- >>> sys.modules['test_can_split'].called[:] = []
-
-Then we can run again and see the failures.
-
- >>> run(argv=['nosetests', '-v', '--processes=2', test_can_split],
- ... plugins=[MultiProcess()]) #doctest: +ELLIPSIS
- setup called
- teardown called
- test_can_split....
- ...
- FAILED (failures=...)
-
-Other differences in test running
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The main difference between using the multiprocess plugin and not doing so
-is obviously that tests run concurrently under multiprocess. However, there
-are a few other differences that may impact your test suite:
-
-* More tests may be found
-
- Because tests are dispatched to worker processes by name, a worker
- process may find and run tests in a module that would not be found during a
- normal test run. For instance, if a non-test module contains a test-like
- function, that function would be discovered as a test in a worker process
- if the entire module is dispatched to the worker. This is because worker
- processes load tests in *directed* mode -- the same way that nose loads
- tests when you explicitly name a module -- rather than in *discovered* mode,
- the mode nose uses when looking for tests in a directory.
-
-* Out-of-order output
-
- Test results are collected by workers and returned to the master process for
- output. Since different processes may complete their tests at different
- times, test result output order is not determinate.
-
-* Plugin interaction warning
-
- The multiprocess plugin does not work well with other plugins that expect to
- wrap or gain control of the test-running process. Examples from nose's
- builtin plugins include coverage and profiling: a test run using
- both multiprocess and either of those is likely to fail in some
- confusing and spectacular way.
-
-* Python 2.6 warning
-
- This is unlikely to impact you unless you are writing tests for nose itself,
- but be aware that under python 2.6, the multiprocess plugin is not
- re-entrant. For example, when running nose with the plugin active, you can't
- use subprocess to launch another copy of nose that also uses the
- multiprocess plugin. This is why this test is skipped under python 2.6 when
- run with the ``--processes`` switch.
+++ /dev/null
-from nose.plugins.skip import SkipTest
-from nose.plugins.multiprocess import MultiProcess
-
-_multiprocess_can_split_ = True
-
-def setup_module():
- try:
- import multiprocessing
- if 'active' in MultiProcess.status:
- raise SkipTest("Multiprocess plugin is active. Skipping tests of "
- "plugin itself.")
- except ImportError:
- raise SkipTest("multiprocessing module not available")
-
-
-
+++ /dev/null
-import sys
-called = []
-
-_multiprocess_can_split_ = 1
-
-def setup():
- print >> sys.stderr, "setup called"
- called.append('setup')
-
-
-def teardown():
- print >> sys.stderr, "teardown called"
- called.append('teardown')
-
-
-def test_a():
- assert len(called) == 1, "len(%s) !=1" % called
-
-
-def test_b():
- assert len(called) == 1, "len(%s) !=1" % called
-
-
-class TestMe:
- def setup_class(cls):
- cls._setup = True
- setup_class = classmethod(setup_class)
-
- def test_one(self):
- assert self._setup, "Class was not set up"
+++ /dev/null
-import sys
-called = []
-
-_multiprocess_ = 1
-
-def setup():
- print >> sys.stderr, "setup called"
- called.append('setup')
-
-
-def teardown():
- print >> sys.stderr, "teardown called"
- called.append('teardown')
-
-
-def test_a():
- assert len(called) == 1, "len(%s) !=1" % called
-
-
-def test_b():
- assert len(called) == 1, "len(%s) !=1" % called
-
-
-class TestMe:
- def setup_class(cls):
- cls._setup = True
- setup_class = classmethod(setup_class)
-
- def test_one(self):
- assert self._setup, "Class was not set up"
+++ /dev/null
-import os
-import sys
-
-here = os.path.dirname(__file__)
-flag = os.path.join(here, 'shared_flag')
-
-_multiprocess_shared_ = 1
-
-def _log(val):
- ff = open(flag, 'a+')
- ff.write(val)
- ff.write("\n")
- ff.close()
-
-
-def _clear():
- if os.path.isfile(flag):
- os.unlink(flag)
-
-
-def logged():
- return [line for line in open(flag, 'r')]
-
-
-def setup():
- print >> sys.stderr, "setup called"
- _log('setup')
-
-
-def teardown():
- print >> sys.stderr, "teardown called"
- _clear()
-
-
-def test_a():
-    assert len(logged()) == 1, "len(%s) != 1" % logged()
-
-
-def test_b():
-    assert len(logged()) == 1, "len(%s) != 1" % logged()
-
-
-class TestMe:
- def setup_class(cls):
- cls._setup = True
- setup_class = classmethod(setup_class)
-
- def test_one(self):
- assert self._setup, "Class was not set up"
+++ /dev/null
-Restricted Plugin Managers
---------------------------
-
-In some cases, such as running under the ``python setup.py test`` command,
-nose is not able to use all available plugins. In those cases, a
-`nose.plugins.manager.RestrictedPluginManager` is used to exclude plugins that
-implement API methods that nose is unable to call.
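The idea behind such a manager can be sketched like this (a hypothetical helper, not nose's actual ``RestrictedPluginManager`` implementation): any plugin that implements one of the excluded hook methods is set aside rather than loaded.

```python
def restrict(plugins, exclude):
    """Partition plugins into (allowed, excluded) by forbidden hook names."""
    allowed, excluded = [], []
    for plugin in plugins:
        if any(hasattr(plugin, method) for method in exclude):
            excluded.append(plugin)
        else:
            allowed.append(plugin)
    return allowed, excluded

class StartPlugin(object):
    def startTest(self, test):
        pass

class HarmlessPlugin(object):
    pass

allowed, excluded = restrict([StartPlugin(), HarmlessPlugin()],
                             exclude=('startTest',))
print([type(p).__name__ for p in allowed])   # ['HarmlessPlugin']
print([type(p).__name__ for p in excluded])  # ['StartPlugin']
```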
-
-Support files for this test are in the support directory.
-
- >>> import os
- >>> support = os.path.join(os.path.dirname(__file__), 'support')
-
-For this test, we'll use a simple plugin that implements the ``startTest``
-method.
-
- >>> from nose.plugins.base import Plugin
- >>> from nose.plugins.manager import RestrictedPluginManager
- >>> class StartPlugin(Plugin):
- ... def startTest(self, test):
- ... print "started %s" % test
-
-.. Note ::
-
- The run() function in :mod:`nose.plugins.plugintest` reformats test result
- output to remove timings, which will vary from run to run, and
- redirects the output to stdout.
-
- >>> from nose.plugins.plugintest import run_buffered as run
-
-..
-
-When run with a normal plugin manager, the plugin executes.
-
- >>> argv = ['plugintest', '-v', '--with-startplugin', support]
- >>> run(argv=argv, plugins=[StartPlugin()]) # doctest: +REPORT_NDIFF
- started test.test
- test.test ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
-
-However, when run with a restricted plugin manager configured to exclude
-plugins implementing `startTest`, an exception is raised and nose exits.
-
- >>> restricted = RestrictedPluginManager(
- ... plugins=[StartPlugin()], exclude=('startTest',), load=False)
- >>> run(argv=argv, plugins=restricted) #doctest: +REPORT_NDIFF +ELLIPSIS
- Traceback (most recent call last):
- ...
- SystemExit: ...
-
-Errors are only raised when options defined by excluded plugins are used.
-
- >>> argv = ['plugintest', '-v', support]
- >>> run(argv=argv, plugins=restricted) # doctest: +REPORT_NDIFF
- test.test ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
-
-When a disabled option appears in a configuration file, instead of on the
-command line, a warning is raised instead of an exception.
-
- >>> argv = ['plugintest', '-v', '-c', os.path.join(support, 'start.cfg'),
- ... support]
- >>> run(argv=argv, plugins=restricted) # doctest: +ELLIPSIS
- RuntimeWarning: Option 'with-startplugin' in config file '...start.cfg' ignored: excluded by runtime environment
- test.test ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
-
-However, if an option appears in a configuration file that is not recognized
-either as an option defined by nose, or by an active or excluded plugin, an
-error is raised.
-
- >>> argv = ['plugintest', '-v', '-c', os.path.join(support, 'bad.cfg'),
- ... support]
- >>> run(argv=argv, plugins=restricted) # doctest: +ELLIPSIS
- Traceback (most recent call last):
- ...
- ConfigError: Error reading config file '...bad.cfg': no such option 'with-meltedcheese'
+++ /dev/null
---- restricted_plugin_options.rst.orig 2010-08-31 10:57:04.000000000 -0700
-+++ restricted_plugin_options.rst 2010-08-31 10:57:51.000000000 -0700
-@@ -86,5 +86,5 @@
- >>> run(argv=argv, plugins=restricted) # doctest: +ELLIPSIS
- Traceback (most recent call last):
- ...
-- ConfigError: Error reading config file '...bad.cfg': no such option 'with-meltedcheese'
-+ nose.config.ConfigError: Error reading config file '...bad.cfg': no such option 'with-meltedcheese'
-
+++ /dev/null
-[nosetests]
-with-meltedcheese=1
\ No newline at end of file
+++ /dev/null
-[nosetests]
-with-startplugin=1
\ No newline at end of file
+++ /dev/null
-def test():
- pass
+++ /dev/null
-Using a Custom Selector
------------------------
-
-By default, nose uses a `nose.selector.Selector` instance to decide
-what is and is not a test. The default selector is fairly simple: for
-the most part, if an object's name matches the ``testMatch`` regular
-expression defined in the active `nose.config.Config` instance, the
-object is selected as a test.
-
-This behavior is fine for new projects, but may be undesireable for
-older projects with a different test naming scheme. Fortunately, you
-can easily override this behavior by providing a custom selector using
-a plugin.
-
- >>> import os
- >>> support = os.path.join(os.path.dirname(__file__), 'support')
-
-In this example, the project to be tested consists of a module and
-package and associated tests, laid out like this::
-
- >>> from nose.util import ls_tree
- >>> print ls_tree(support)
- |-- mymodule.py
- |-- mypackage
- | |-- __init__.py
- | |-- strings.py
- | `-- math
- | |-- __init__.py
- | `-- basic.py
- `-- tests
- |-- testlib.py
- |-- math
- | `-- basic.py
- |-- mymodule
- | `-- my_function.py
- `-- strings
- `-- cat.py
-
-Because the test modules do not include ``test`` in their names,
-nose's default selector is unable to discover this project's tests.
-
-.. Note ::
-
- The run() function in :mod:`nose.plugins.plugintest` reformats test result
- output to remove timings, which will vary from run to run, and
- redirects the output to stdout.
-
- >>> from nose.plugins.plugintest import run_buffered as run
-
-..
-
- >>> argv = [__file__, '-v', support]
- >>> run(argv=argv)
- ----------------------------------------------------------------------
- Ran 0 tests in ...s
- <BLANKLINE>
- OK
-
-The tests for the example project follow a few basic conventions:
-
-* The are all located under the tests/ directory.
-* Test modules are organized into groups under directories named for
- the module or package they test.
-* testlib is *not* a test module, but it must be importable by the
- test modules.
-* Test modules contain unitest.TestCase classes that are tests, and
- may contain other functions or classes that are NOT tests, no matter
- how they are named.
-
-We can codify those conventions in a selector class.
-
- >>> from nose.selector import Selector
- >>> import unittest
- >>> class MySelector(Selector):
- ... def wantDirectory(self, dirname):
- ... # we want the tests directory and all directories
- ... # beneath it, and no others
- ... parts = dirname.split(os.path.sep)
- ... return 'tests' in parts
- ... def wantFile(self, filename):
- ... # we want python modules under tests/, except testlib
- ... parts = filename.split(os.path.sep)
- ... base, ext = os.path.splitext(parts[-1])
- ... return 'tests' in parts and ext == '.py' and base != 'testlib'
- ... def wantModule(self, module):
- ... # wantDirectory and wantFile above will ensure that
- ... # we never see an unwanted module
- ... return True
- ... def wantFunction(self, function):
- ... # never collect functions
- ... return False
- ... def wantClass(self, cls):
- ... # only collect TestCase subclasses
- ... return issubclass(cls, unittest.TestCase)
-
-To use our selector class, we need a plugin that can inject it into
-the test loader.
-
- >>> from nose.plugins import Plugin
- >>> class UseMySelector(Plugin):
- ... enabled = True
- ... def configure(self, options, conf):
- ... pass # always on
- ... def prepareTestLoader(self, loader):
- ... loader.selector = MySelector(loader.config)
-
-Now we can execute a test run using the custom selector, and the
-project's tests will be collected.
-
- >>> run(argv=argv, plugins=[UseMySelector()])
- test_add (basic.TestBasicMath) ... ok
- test_sub (basic.TestBasicMath) ... ok
- test_tuple_groups (my_function.MyFunction) ... ok
- test_cat (cat.StringsCat) ... ok
- <BLANKLINE>
- ----------------------------------------------------------------------
- Ran 4 tests in ...s
- <BLANKLINE>
- OK
+++ /dev/null
-def my_function(a, b, c):
- return (a, (b, c))
+++ /dev/null
-from mypackage.math.basic import *
+++ /dev/null
-def add(a, b):
- return a + b
-
-def sub(a, b):
- return a - b
+++ /dev/null
-def cat(a, b):
- return "%s%s" % (a, b)
+++ /dev/null
-import testlib
-from mypackage import math
-
-
-class TestBasicMath(testlib.Base):
-
- def test_add(self):
- self.assertEqual(math.add(1, 2), 3)
-
- def test_sub(self):
- self.assertEqual(math.sub(3, 1), 2)
-
-
-class TestHelperClass:
- def __init__(self):
- raise Exception(
- "This test helper class should not be collected")
+++ /dev/null
-import mymodule
-import testlib
-
-class MyFunction(testlib.Base):
-
- def test_tuple_groups(self):
- self.assertEqual(mymodule.my_function(1, 2, 3), (1, (2, 3)))
+++ /dev/null
-import testlib
-from mypackage import strings
-
-class StringsCat(testlib.Base):
-
- def test_cat(self):
- self.assertEqual(strings.cat('one', 'two'), 'onetwo')
-
-
-def test_helper_function():
- raise Exception(
- "This test helper function should not be collected")
+++ /dev/null
-import unittest
-
-class Base(unittest.TestCase):
- """Use this base class for all tests.
- """
- pass
+++ /dev/null
-<?xml version="1.0" encoding="UTF-8"?><testsuite name="nosetests" tests="4" errors="1" failures="1" skip="1"><testcase classname="test_skip" name="test_ok" time="0.002" /><testcase classname="test_skip" name="test_err" time="0.002"><error type="exceptions.Exception" message="oh no"><![CDATA[Traceback (most recent call last):
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 260, in run
- testMethod()
- File "/private/tmp/nose_release_1.1.2/nose/case.py", line 197, in runTest
- self.test(*self.arg)
- File "/private/tmp/nose_release_1.1.2/functional_tests/doc_tests/test_xunit_plugin/support/test_skip.py", line 7, in test_err
- raise Exception("oh no")
-Exception: oh no
-]]></error></testcase><testcase classname="test_skip" name="test_fail" time="0.003"><failure type="exceptions.AssertionError" message="bye"><![CDATA[Traceback (most recent call last):
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 260, in run
- testMethod()
- File "/private/tmp/nose_release_1.1.2/nose/case.py", line 197, in runTest
- self.test(*self.arg)
- File "/private/tmp/nose_release_1.1.2/functional_tests/doc_tests/test_xunit_plugin/support/test_skip.py", line 10, in test_fail
- assert False, "bye"
-AssertionError: bye
-]]></failure></testcase><testcase classname="test_skip" name="test_skip" time="0.002"><skipped type="nose.plugins.skip.SkipTest" message="not me"><![CDATA[Traceback (most recent call last):
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 260, in run
- testMethod()
- File "/private/tmp/nose_release_1.1.2/nose/case.py", line 197, in runTest
- self.test(*self.arg)
- File "/private/tmp/nose_release_1.1.2/functional_tests/doc_tests/test_xunit_plugin/support/test_skip.py", line 13, in test_skip
- raise SkipTest("not me")
-SkipTest: not me
-]]></skipped></testcase></testsuite>
\ No newline at end of file
+++ /dev/null
-from nose.exc import SkipTest
-
-def test_ok():
- pass
-
-def test_err():
- raise Exception("oh no")
-
-def test_fail():
- assert False, "bye"
-
-def test_skip():
- raise SkipTest("not me")
+++ /dev/null
-XUnit output supports skips
----------------------------
-
->>> import os
->>> from nose.plugins.xunit import Xunit
->>> from nose.plugins.skip import SkipTest, Skip
->>> support = os.path.join(os.path.dirname(__file__), 'support')
->>> outfile = os.path.join(support, 'nosetests.xml')
->>> from nose.plugins.plugintest import run_buffered as run
->>> argv = [__file__, '-v', '--with-xunit', support,
-... '--xunit-file=%s' % outfile]
->>> run(argv=argv, plugins=[Xunit(), Skip()]) # doctest: +ELLIPSIS
-test_skip.test_ok ... ok
-test_skip.test_err ... ERROR
-test_skip.test_fail ... FAIL
-test_skip.test_skip ... SKIP: not me
-<BLANKLINE>
-======================================================================
-ERROR: test_skip.test_err
-----------------------------------------------------------------------
-Traceback (most recent call last):
-...
-Exception: oh no
-<BLANKLINE>
-======================================================================
-FAIL: test_skip.test_fail
-----------------------------------------------------------------------
-Traceback (most recent call last):
-...
-AssertionError: bye
-<BLANKLINE>
-----------------------------------------------------------------------
-XML: ...nosetests.xml
-----------------------------------------------------------------------
-Ran 4 tests in ...s
-<BLANKLINE>
-FAILED (SKIP=1, errors=1, failures=1)
-
->>> open(outfile, 'r').read() # doctest: +ELLIPSIS
-'<?xml version="1.0" encoding="UTF-8"?><testsuite name="nosetests" tests="4" errors="1" failures="1" skip="1"><testcase classname="test_skip" name="test_ok" time="..." /><testcase classname="test_skip" name="test_err" time="..."><error type="...Exception" message="oh no">...</error></testcase><testcase classname="test_skip" name="test_fail" time="..."><failure type="...AssertionError" message="bye">...</failure></testcase><testcase classname="test_skip" name="test_skip" time="..."><skipped type="...SkipTest" message="not me">...</skipped></testcase></testsuite>'
have distribute installed, ``python3 setup.py install`` will install
it via distribute's bootstrap script.
+Additionally, if your project uses `2to3
+<http://docs.python.org/library/2to3.html>`_, the ``python3 setup.py
+nosetests`` command will automatically convert your sources with 2to3 and
+then run the tests under Python 3.
+
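+For example, a minimal ``setup.py`` for such a project might look like this
+(the project name is illustrative; ``use_2to3`` is a distribute/setuptools
+option, not a nose one)::
+
+    from setuptools import setup
+
+    setup(
+        name='mypackage',
+        packages=['mypackage'],
+        use_2to3=True,
+    )
+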
.. warning ::
nose itself supports python 3, but many 3rd-party plugins do not!
+++ /dev/null
-Generating HTML Coverage with nose
-----------------------------------
-
-.. Note ::
-
- HTML coverage requires Ned Batchelder's `coverage.py`_ module.
-..
-
-Console coverage output is useful but terse. For a more browseable view of
-code coverage, the coverage plugin supports basic HTML coverage output.
-
-.. hide this from the actual documentation:
- >>> from nose.plugins.plugintest import run_buffered as run
- >>> import os
- >>> support = os.path.join(os.path.dirname(__file__), 'support')
- >>> cover_html_dir = os.path.join(support, 'cover')
- >>> cover_file = os.path.join(os.getcwd(), '.coverage')
- >>> if os.path.exists(cover_file):
- ... os.unlink(cover_file)
- ...
-
-
-The console coverage output is printed, as normal.
-
- >>> from nose.plugins.cover import Coverage
- >>> cover_html_dir = os.path.join(support, 'cover')
- >>> run(argv=[__file__, '-v', '--with-coverage', '--cover-package=blah',
- ... '--cover-html', '--cover-html-dir=' + cover_html_dir,
- ... support, ],
- ... plugins=[Coverage()]) # doctest: +REPORT_NDIFF
- test_covered.test_blah ... hi
- ok
- <BLANKLINE>
- Name Stmts Miss Cover Missing
- -------------------------------------
- blah 4 1 75% 6
- ----------------------------------------------------------------------
- Ran 1 test in ...s
- <BLANKLINE>
- OK
-
-The html coverage reports are saved to disk in the directory specified by the
-``--cover-html-dir`` option. In that directory you'll find ``index.html``
-which links to a detailed coverage report for each module in the report. The
-detail pages show the module source, colorized to indicated which lines are
-covered and which are not. There is an example of this HTML output in the
-`coverage.py`_ docs.
-
-.. hide this from the actual documentation:
- >>> os.path.exists(cover_file)
- True
- >>> os.path.exists(os.path.join(cover_html_dir, 'index.html'))
- True
- >>> os.path.exists(os.path.join(cover_html_dir, 'blah.html'))
- True
-
-.. _`coverage.py`: http://nedbatchelder.com/code/coverage/
+++ /dev/null
---- coverage_html.rst.orig 2010-08-31 23:13:33.000000000 -0700
-+++ coverage_html.rst 2010-08-31 23:14:25.000000000 -0700
-@@ -78,11 +78,11 @@
- </div>
- <div class="coverage">
- <div class="cov"><span class="num"><pre>1</pre></span><pre>def dostuff():</pre></div>
-- <div class="cov"><span class="num"><pre>2</pre></span><pre> print 'hi'</pre></div>
-+ <div class="cov"><span class="num"><pre>2</pre></span><pre> print('hi')</pre></div>
- <div class="skip"><span class="num"><pre>3</pre></span><pre></pre></div>
- <div class="skip"><span class="num"><pre>4</pre></span><pre></pre></div>
- <div class="cov"><span class="num"><pre>5</pre></span><pre>def notcov():</pre></div>
-- <div class="nocov"><span class="num"><pre>6</pre></span><pre> print 'not covered'</pre></div>
-+ <div class="nocov"><span class="num"><pre>6</pre></span><pre> print('not covered')</pre></div>
- <div class="skip"><span class="num"><pre>7</pre></span><pre></pre></div>
- </div>
- </body>
+++ /dev/null
-import sys
-import os
-import shutil
-from nose.plugins.skip import SkipTest
-from nose.plugins.cover import Coverage
-from nose.plugins.plugintest import munge_nose_output_for_doctest
-
-# This fixture is not reentrant because we have to cleanup the files that
-# coverage produces once all tests have finished running.
-_multiprocess_shared_ = True
-
-def setup_module():
- try:
- import coverage
- if 'active' in Coverage.status:
- raise SkipTest("Coverage plugin is active. Skipping tests of "
- "plugin itself.")
- except ImportError:
- raise SkipTest("coverage module not available")
-
-def teardown_module():
- # Clean up the files produced by coverage
- cover_html_dir = os.path.join(os.path.dirname(__file__), 'support', 'cover')
- if os.path.exists(cover_html_dir):
- shutil.rmtree(cover_html_dir)
-
-<?xml version="1.0" encoding="UTF-8"?><testsuite name="nosetests" tests="4" errors="1" failures="1" skip="1"><testcase classname="test_skip" name="test_ok" time="0.002" /><testcase classname="test_skip" name="test_err" time="0.002"><error type="exceptions.Exception" message="oh no"><![CDATA[Traceback (most recent call last):
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 260, in run
+<?xml version="1.0" encoding="UTF-8"?><testsuite name="nosetests" tests="4" errors="1" failures="1" skip="1"><testcase classname="test_skip" name="test_ok" time="0.002" /><testcase classname="test_skip" name="test_err" time="0.000"><error type="exceptions.Exception" message="oh no"><![CDATA[Traceback (most recent call last):
+ File "/usr/local/lib/python2.6/unittest.py", line 279, in run
testMethod()
- File "/private/tmp/nose_release_1.1.2/nose/case.py", line 197, in runTest
+ File "/home/jpellerin/code/nose-gh/nose/case.py", line 197, in runTest
self.test(*self.arg)
- File "/private/tmp/nose_release_1.1.2/functional_tests/doc_tests/test_xunit_plugin/support/test_skip.py", line 7, in test_err
+ File "/home/jpellerin/code/nose-gh/functional_tests/doc_tests/test_xunit_plugin/support/test_skip.py", line 7, in test_err
raise Exception("oh no")
Exception: oh no
-]]></error></testcase><testcase classname="test_skip" name="test_fail" time="0.003"><failure type="exceptions.AssertionError" message="bye"><![CDATA[Traceback (most recent call last):
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 260, in run
+]]></error></testcase><testcase classname="test_skip" name="test_fail" time="0.000"><failure type="exceptions.AssertionError" message="bye"><![CDATA[Traceback (most recent call last):
+ File "/usr/local/lib/python2.6/unittest.py", line 279, in run
testMethod()
- File "/private/tmp/nose_release_1.1.2/nose/case.py", line 197, in runTest
+ File "/home/jpellerin/code/nose-gh/nose/case.py", line 197, in runTest
self.test(*self.arg)
- File "/private/tmp/nose_release_1.1.2/functional_tests/doc_tests/test_xunit_plugin/support/test_skip.py", line 10, in test_fail
+ File "/home/jpellerin/code/nose-gh/functional_tests/doc_tests/test_xunit_plugin/support/test_skip.py", line 10, in test_fail
assert False, "bye"
AssertionError: bye
-]]></failure></testcase><testcase classname="test_skip" name="test_skip" time="0.002"><skipped type="nose.plugins.skip.SkipTest" message="not me"><![CDATA[Traceback (most recent call last):
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 260, in run
+]]></failure></testcase><testcase classname="test_skip" name="test_skip" time="0.000"><skipped type="nose.plugins.skip.SkipTest" message="not me"><![CDATA[Traceback (most recent call last):
+ File "/usr/local/lib/python2.6/unittest.py", line 279, in run
testMethod()
- File "/private/tmp/nose_release_1.1.2/nose/case.py", line 197, in runTest
+ File "/home/jpellerin/code/nose-gh/nose/case.py", line 197, in runTest
self.test(*self.arg)
- File "/private/tmp/nose_release_1.1.2/functional_tests/doc_tests/test_xunit_plugin/support/test_skip.py", line 13, in test_skip
+ File "/home/jpellerin/code/nose-gh/functional_tests/doc_tests/test_xunit_plugin/support/test_skip.py", line 13, in test_skip
raise SkipTest("not me")
SkipTest: not me
]]></skipped></testcase></testsuite>
\ No newline at end of file
--- /dev/null
+def moo():
+ print 'covered'
import blah
+import moo
def test_blah():
blah.dostuff()
+
+def test_moo():
+    moo.moo()
<?xml version="1.0" encoding="UTF-8"?><testsuite name="nosetests" tests="6" errors="2" failures="1" skip="1"><testcase classname="test_xunit_as_suite.TestForXunit" name="test_error" time="0.002"><error type="exceptions.TypeError" message="oops, wrong type"><![CDATA[Traceback (most recent call last):
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 260, in run
+ File "/usr/lib/python2.7/unittest/case.py", line 327, in run
testMethod()
- File "/private/tmp/nose_release_1.1.2/functional_tests/support/xunit/test_xunit_as_suite.py", line 15, in test_error
+ File "/home/jpellerin/code/nose-gh/functional_tests/support/xunit/test_xunit_as_suite.py", line 15, in test_error
raise TypeError("oops, wrong type")
TypeError: oops, wrong type
-]]></error></testcase><testcase classname="test_xunit_as_suite.TestForXunit" name="test_fail" time="0.002"><failure type="exceptions.AssertionError" message="'this' != 'that'"><![CDATA[Traceback (most recent call last):
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 260, in run
+]]></error></testcase><testcase classname="test_xunit_as_suite.TestForXunit" name="test_fail" time="0.000"><failure type="exceptions.AssertionError" message="'this' != 'that'"><![CDATA[Traceback (most recent call last):
+ File "/usr/lib/python2.7/unittest/case.py", line 327, in run
testMethod()
- File "/private/tmp/nose_release_1.1.2/functional_tests/support/xunit/test_xunit_as_suite.py", line 12, in test_fail
+ File "/home/jpellerin/code/nose-gh/functional_tests/support/xunit/test_xunit_as_suite.py", line 12, in test_fail
self.assertEqual("this","that")
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 333, in failUnlessEqual
- raise self.failureException, \
+ File "/usr/lib/python2.7/unittest/case.py", line 511, in assertEqual
+ assertion_func(first, second, msg=msg)
+ File "/usr/lib/python2.7/unittest/case.py", line 504, in _baseAssertEqual
+ raise self.failureException(msg)
AssertionError: 'this' != 'that'
-]]></failure></testcase><testcase classname="test_xunit_as_suite.TestForXunit" name="test_non_ascii_error" time="0.001"><error type="exceptions.Exception" message="日本"><![CDATA[Traceback (most recent call last):
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 260, in run
+]]></failure></testcase><testcase classname="test_xunit_as_suite.TestForXunit" name="test_non_ascii_error" time="0.000"><error type="exceptions.Exception" message="日本"><![CDATA[Traceback (most recent call last):
+ File "/usr/lib/python2.7/unittest/case.py", line 327, in run
testMethod()
- File "/private/tmp/nose_release_1.1.2/functional_tests/support/xunit/test_xunit_as_suite.py", line 18, in test_non_ascii_error
+ File "/home/jpellerin/code/nose-gh/functional_tests/support/xunit/test_xunit_as_suite.py", line 18, in test_non_ascii_error
raise Exception(u"日本")
-Exception: <unprintable Exception object>
-]]></error></testcase><testcase classname="test_xunit_as_suite.TestForXunit" name="test_output" time="0.001" /><testcase classname="test_xunit_as_suite.TestForXunit" name="test_skip" time="0.000"><skipped type="nose.plugins.skip.SkipTest" message="skipit"><![CDATA[Traceback (most recent call last):
- File "/usr/local/Cellar/jython/2.5.1/libexec/Lib/unittest.py", line 260, in run
- testMethod()
- File "/private/tmp/nose_release_1.1.2/functional_tests/support/xunit/test_xunit_as_suite.py", line 24, in test_skip
- raise SkipTest("skipit")
-SkipTest: skipit
-]]></skipped></testcase><testcase classname="test_xunit_as_suite.TestForXunit" name="test_success" time="0.001" /></testsuite>
\ No newline at end of file
+Exception: \u65e5\u672c
+]]></error></testcase><testcase classname="test_xunit_as_suite.TestForXunit" name="test_output" time="0.000" /><testcase classname="test_xunit_as_suite.TestForXunit" name="test_skip" time="0.000"><skipped type="unittest.case.SkipTest" message="skipit"><![CDATA[SkipTest: skipit
+]]></skipped></testcase><testcase classname="test_xunit_as_suite.TestForXunit" name="test_success" time="0.000" /></testsuite>
\ No newline at end of file
--- /dev/null
+"""Test the coverage plugin."""
+import os
+import unittest
+import shutil
+
+from nose.plugins import PluginTester
+from nose.plugins.cover import Coverage
+
+support = os.path.join(os.path.dirname(__file__), 'support')
+
+
+class TestCoveragePlugin(PluginTester, unittest.TestCase):
+ activate = "--with-coverage"
+ args = ['-v', '--cover-package=blah', '--cover-html', '--cover-min-percentage', '25']
+ plugins = [Coverage()]
+ suitepath = os.path.join(support, 'coverage')
+
+ def setUp(self):
+ self.cover_file = os.path.join(os.getcwd(), '.coverage')
+ self.cover_html_dir = os.path.join(os.getcwd(), 'cover')
+ if os.path.exists(self.cover_file):
+ os.unlink(self.cover_file)
+ if os.path.exists(self.cover_html_dir):
+ shutil.rmtree(self.cover_html_dir)
+ super(TestCoveragePlugin, self).setUp()
+
+ def runTest(self):
+ self.assertTrue("blah 4 3 25% 1" in self.output)
+        self.assertTrue("Ran 1 test in" in self.output)
+ # Assert coverage html report exists
+ self.assertTrue(os.path.exists(os.path.join(self.cover_html_dir,
+ 'index.html')))
+ # Assert coverage data is saved
+ self.assertTrue(os.path.exists(self.cover_file))
+
+
+class TestCoverageMinPercentagePlugin(PluginTester, unittest.TestCase):
+ activate = "--with-coverage"
+ args = ['-v', '--cover-package=blah', '--cover-min-percentage', '100']
+ plugins = [Coverage()]
+ suitepath = os.path.join(support, 'coverage')
+
+ def setUp(self):
+ self.cover_file = os.path.join(os.getcwd(), '.coverage')
+ self.cover_html_dir = os.path.join(os.getcwd(), 'cover')
+ if os.path.exists(self.cover_file):
+ os.unlink(self.cover_file)
+ if os.path.exists(self.cover_html_dir):
+ shutil.rmtree(self.cover_html_dir)
+ self.assertRaises(SystemExit,
+ super(TestCoverageMinPercentagePlugin, self).setUp)
+
+ def runTest(self):
+ pass
+
+
+class TestCoverageMinPercentageTOTALPlugin(PluginTester, unittest.TestCase):
+ activate = "--with-coverage"
+ args = ['-v', '--cover-package=blah', '--cover-package=moo',
+ '--cover-min-percentage', '100']
+ plugins = [Coverage()]
+ suitepath = os.path.join(support, 'coverage2')
+
+ def setUp(self):
+ self.cover_file = os.path.join(os.getcwd(), '.coverage')
+ self.cover_html_dir = os.path.join(os.getcwd(), 'cover')
+ if os.path.exists(self.cover_file):
+ os.unlink(self.cover_file)
+ if os.path.exists(self.cover_html_dir):
+ shutil.rmtree(self.cover_html_dir)
+ self.assertRaises(SystemExit,
+ super(TestCoverageMinPercentageTOTALPlugin, self).setUp)
+
+ def runTest(self):
+ pass
+
+if __name__ == '__main__':
+ unittest.main()
--- /dev/null
+import unittest
+from nose.plugins import Plugin
+from nose.plugins.manager import DefaultPluginManager
+
+class OverridesSkip(Plugin):
+ """Plugin to override the built-in Skip"""
+ enabled = True
+ name = 'skip'
+ is_overridden = True
+
+
+class TestDefaultPluginManager(unittest.TestCase):
+
+ def test_extraplugins_override_builtins(self):
+ pm = DefaultPluginManager()
+ pm.addPlugins(extraplugins=[OverridesSkip()])
+ pm.loadPlugins()
+ for plugin in pm.plugins:
+ if plugin.name == "skip":
+ break
+ overridden = getattr(plugin, 'is_overridden', False)
+ self.assertTrue(overridden)
--- /dev/null
+import os
+from unittest import TestCase
+
+from nose.plugins import PluginTester
+from nose.plugins.skip import SkipTest
+from nose.plugins.multiprocess import MultiProcess
+
+support = os.path.join(os.path.dirname(__file__), 'support')
+
+def setup():
+ try:
+ import multiprocessing
+ if 'active' in MultiProcess.status:
+ raise SkipTest("Multiprocess plugin is active. Skipping tests of "
+ "plugin itself.")
+ except ImportError:
+ raise SkipTest("multiprocessing module not available")
+
+class MPTestBase(PluginTester, TestCase):
+ processes = 1
+ activate = '--processes=1'
+ plugins = [MultiProcess()]
+ suitepath = os.path.join(support, 'timeout.py')
+
+ def __init__(self, *args, **kwargs):
+ self.activate = '--processes=%d' % self.processes
+ PluginTester.__init__(self)
+ TestCase.__init__(self, *args, **kwargs)
--- /dev/null
+class TestFunctionalTest(object):
+ counter = 0
+ @classmethod
+ def setup_class(cls):
+ cls.counter += 1
+ @classmethod
+ def teardown_class(cls):
+ cls.counter -= 1
+ def _run(self):
+ assert self.counter==1
+ def test1(self):
+ self._run()
+ def test2(self):
+ self._run()
--- /dev/null
+counter=[0]
+_multiprocess_shared_ = True
+def setup_package():
+ counter[0] += 1
+def teardown_package():
+ counter[0] -= 1
--- /dev/null
+#from . import counter
+from time import sleep
+#_multiprocess_can_split_ = True
+class Test1(object):
+ def test1(self):
+ sleep(1)
+ pass
+class Test2(object):
+ def test2(self):
+ sleep(1)
+ pass
--- /dev/null
+import os
+import sys
+
+import nose
+from nose.plugins.multiprocess import MultiProcess
+from nose.config import Config
+from nose.plugins.manager import PluginManager
+
+if __name__ == '__main__':
+ if len(sys.argv) < 3:
+ print "USAGE: %s TEST_FILE LOG_FILE" % sys.argv[0]
+ sys.exit(1)
+ os.environ['NOSE_MP_LOG']=sys.argv[2]
+ nose.main(defaultTest=sys.argv[1], argv=[sys.argv[0],'--processes=1','-v'], config=Config(plugins=PluginManager(plugins=[MultiProcess()])))
--- /dev/null
+import os
+
+from time import sleep
+
+if 'NOSE_MP_LOG' not in os.environ:
+ raise Exception('Environment variable NOSE_MP_LOG is not set')
+
+logfile = os.environ['NOSE_MP_LOG']
+
+def log(w):
+ f = open(logfile, 'a')
+ f.write(w+"\n")
+ f.close()
+#make sure all tests in this file are dispatched to the same subprocess
+def setup():
+ log('setup')
+
+def test_timeout():
+ log('test_timeout')
+ sleep(2)
+ log('test_timeout_finished')
+
+# check that a timeout does not prevent the remaining tests dispatched to the
+# same subprocess from running
+def test_pass():
+ log('test_pass')
+
+def teardown():
+ log('teardown')
--- /dev/null
+import os
+
+from time import sleep
+
+if 'NOSE_MP_LOG' not in os.environ:
+ raise Exception('Environment variable NOSE_MP_LOG is not set')
+
+logfile = os.environ['NOSE_MP_LOG']
+
+def log(w):
+ f = open(logfile, 'a')
+ f.write(w+"\n")
+ f.close()
+#make sure all tests in this file are dispatched to the same subprocess
+def setup():
+ log('setup')
+
+def test_timeout():
+ log('test_timeout')
+ sleep(2)
+ log('test_timeout_finished')
+
+# check that a timeout does not prevent the remaining tests dispatched to the
+# same subprocess from running
+def test_pass():
+ log('test_pass')
+
+def teardown():
+ log('teardown')
+ sleep(10)
+ log('teardown_finished')
+#make sure all tests in this file are dispatched to the same subprocess
+def setup():
+ pass
def test_timeout():
"this test *should* fail when process-timeout=1"
from time import sleep
sleep(2)
+# check that a timeout does not prevent the remaining tests dispatched to the
+# same subprocess from running
+def test_pass():
+ pass
--- /dev/null
+import os
+
+from test_multiprocessing import MPTestBase
+
+
+#test case for #462
+class TestClassFixture(MPTestBase):
+ suitepath = os.path.join(os.path.dirname(__file__), 'support', 'class.py')
+
+ def runTest(self):
+ assert str(self.output).strip().endswith('OK')
+ assert 'Ran 2 tests' in self.output
+
--- /dev/null
+import os
+
+from test_multiprocessing import MPTestBase
+
+class TestConcurrentShared(MPTestBase):
+ processes = 2
+ suitepath = os.path.join(os.path.dirname(__file__), 'support',
+ 'concurrent_shared')
+
+ def runTest(self):
+        assert 'Ran 2 tests in 1.' in self.output, "make sure the two tests run in 1.x seconds (no more than 2 seconds)"
+ assert str(self.output).strip().endswith('OK')
+
--- /dev/null
+from subprocess import Popen,PIPE
+import os
+import sys
+from time import sleep
+import signal
+
+import nose
+
+support = os.path.join(os.path.dirname(__file__), 'support')
+
+PYTHONPATH = os.environ.get('PYTHONPATH', '')
+def setup():
+ nose_parent_dir = os.path.normpath(os.path.join(os.path.abspath(os.path.dirname(nose.__file__)),'..'))
+ paths = [nose_parent_dir]
+ if PYTHONPATH:
+ paths.append(PYTHONPATH)
+ os.environ['PYTHONPATH'] = os.pathsep.join(paths)
+def teardown():
+ if PYTHONPATH:
+ os.environ['PYTHONPATH'] = PYTHONPATH
+ else:
+ del os.environ['PYTHONPATH']
+
+runner = os.path.join(support, 'fake_nosetest.py')
+def keyboardinterrupt(case):
+    #os.setsid creates a process group, so signals sent to the
+    #parent process propagate to all child processes
+ from tempfile import mktemp
+ logfile = mktemp()
+ process = Popen([sys.executable,runner,os.path.join(support,case),logfile], preexec_fn=os.setsid, stdout=PIPE, stderr=PIPE, bufsize=-1)
+
+ #wait until logfile is created:
+ retry=100
+ while not os.path.exists(logfile):
+ sleep(0.1)
+ retry -= 1
+ if not retry:
+ raise Exception('Timeout while waiting for log file to be created by fake_nosetest.py')
+
+ os.killpg(process.pid, signal.SIGINT)
+ return process, logfile
+
+def get_log_content(logfile):
+ f = open(logfile)
+ content = f.read()
+ f.close()
+ os.remove(logfile)
+ return content
+
+def test_keyboardinterrupt():
+ process, logfile = keyboardinterrupt('keyboardinterrupt.py')
+ stdout, stderr = [s.decode('utf-8') for s in process.communicate(None)]
+ print(stderr)
+ log = get_log_content(logfile)
+ assert 'setup' in log
+ assert 'test_timeout' in log
+ assert 'test_timeout_finished' not in log
+ assert 'test_pass' not in log
+ assert 'teardown' in log
+ assert 'Ran 0 tests' in stderr
+ assert 'KeyboardInterrupt' in stderr
+ assert 'FAILED (errors=1)' in stderr
+ assert 'ERROR: Worker 0 keyboard interrupt, failing current test '+os.path.join(support,'keyboardinterrupt.py') in stderr
+
+
+def test_keyboardinterrupt_twice():
+ process, logfile = keyboardinterrupt('keyboardinterrupt_twice.py')
+ sleep(0.5)
+ os.killpg(process.pid, signal.SIGINT)
+ stdout, stderr = [s.decode('utf-8') for s in process.communicate(None)]
+ log = get_log_content(logfile)
+ assert 'setup' in log
+ assert 'test_timeout' in log
+ assert 'test_timeout_finished' not in log
+ assert 'test_pass' not in log
+ assert 'teardown' in log
+ assert 'teardown_finished' not in log
+ assert 'Ran 0 tests' in stderr
+ assert 'KeyboardInterrupt' in stderr
+ assert 'FAILED (errors=1)' in stderr
import os
-import unittest
-from nose.plugins import PluginTester
-from nose.plugins.skip import SkipTest
-from nose.plugins.multiprocess import MultiProcess
+from test_multiprocessing import support, MPTestBase
-
-support = os.path.join(os.path.dirname(__file__), 'support')
-
-
-def setup():
- try:
- import multiprocessing
- if 'active' in MultiProcess.status:
- raise SkipTest("Multiprocess plugin is active. Skipping tests of "
- "plugin itself.")
- except ImportError:
- raise SkipTest("multiprocessing module not available")
-
-
-class TestMPNameError(PluginTester, unittest.TestCase):
- activate = '--processes=2'
- plugins = [MultiProcess()]
- suitepath = os.path.join(support, 'nameerror.py')
+class TestMPNameError(MPTestBase):
+ processes = 2
+ suitepath = os.path.join(os.path.dirname(__file__), 'support', 'nameerror.py')
def runTest(self):
print str(self.output)
import os
-import unittest
-from nose.plugins import PluginTester
-from nose.plugins.skip import SkipTest
-from nose.plugins.multiprocess import MultiProcess
+from test_multiprocessing import MPTestBase
-support = os.path.join(os.path.dirname(__file__), 'support')
-
-
-def setup():
- try:
- import multiprocessing
- if 'active' in MultiProcess.status:
- raise SkipTest("Multiprocess plugin is active. Skipping tests of "
- "plugin itself.")
- except ImportError:
- raise SkipTest("multiprocessing module not available")
-
-
-
-class TestMPTimeout(PluginTester, unittest.TestCase):
- activate = '--processes=2'
+class TestMPTimeout(MPTestBase):
args = ['--process-timeout=1']
- plugins = [MultiProcess()]
- suitepath = os.path.join(support, 'timeout.py')
+ suitepath = os.path.join(os.path.dirname(__file__), 'support', 'timeout.py')
def runTest(self):
assert "TimedOutException: 'timeout.test_timeout'" in self.output
-
+ assert "Ran 2 tests in" in self.output
+ assert "FAILED (errors=1)" in self.output
class TestMPTimeoutPass(TestMPTimeout):
args = ['--process-timeout=3']
def runTest(self):
assert "TimedOutException: 'timeout.test_timeout'" not in self.output
+ assert "Ran 2 tests in" in self.output
assert str(self.output).strip().endswith('OK')
+
--- /dev/null
+Metadata-Version: 1.1
+Name: nose
+Version: 1.2.1
+Summary: nose extends unittest to make testing easier
+Home-page: http://readthedocs.org/docs/nose/
+Author: Jason Pellerin
+Author-email: jpellerin+nose@gmail.com
+License: GNU LGPL
+Description: nose extends the test loading and running features of unittest, making
+ it easier to write, find and run tests.
+
+ By default, nose will run tests in files or directories under the current
+ working directory whose names include "test" or "Test" at a word boundary
+ (like "test_this" or "functional_test" or "TestClass" but not
+ "libtest"). Test output is similar to that of unittest, but also includes
+ captured stdout output from failing tests, for easy print-style debugging.
+
+ These features, and many more, are customizable through the use of
+ plugins. Plugins included with nose provide support for doctest, code
+ coverage and profiling, flexible attribute-based test selection,
+ output capture and more. More information about writing plugins may be
+ found in the nose API documentation, here:
+ http://readthedocs.org/docs/nose/
+
+ If you have recently reported a bug marked as fixed, or have a craving for
+ the very latest, you may want the development version instead:
+ https://github.com/nose-devs/nose/tarball/master#egg=nose-dev
+
+Keywords: test unittest doctest automatic discovery
+Platform: UNKNOWN
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)
+Classifier: Natural Language :: English
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Topic :: Software Development :: Testing
--- /dev/null
+AUTHORS
+CHANGELOG
+MANIFEST.in
+NEWS
+README.txt
+distribute_setup.py
+install-rpm.sh
+lgpl.txt
+nosetests.1
+patch.py
+selftest.py
+setup.cfg
+setup.py
+setup3lib.py
+bin/nosetests
+doc/Makefile
+doc/api.rst
+doc/conf.py
+doc/contributing.rst
+doc/developing.rst
+doc/docstring.py
+doc/finding_tests.rst
+doc/further_reading.rst
+doc/index.html
+doc/index.rst
+doc/man.rst
+doc/manbuilder.py
+doc/manbuilder.pyc
+doc/manpage.py
+doc/manpage.pyc
+doc/more_info.rst
+doc/news.rst
+doc/plugins.rst
+doc/rtd-requirements.txt
+doc/setuptools_integration.rst
+doc/testing.rst
+doc/testing_tools.rst
+doc/usage.rst
+doc/writing_tests.rst
+doc/.static/nose.css
+doc/.templates/index.html
+doc/.templates/indexsidebar.html
+doc/.templates/layout.html
+doc/.templates/page.html
+doc/api/commands.rst
+doc/api/config.rst
+doc/api/core.rst
+doc/api/importer.rst
+doc/api/inspector.rst
+doc/api/loader.rst
+doc/api/plugin_manager.rst
+doc/api/proxy.rst
+doc/api/result.rst
+doc/api/selector.rst
+doc/api/suite.rst
+doc/api/test_cases.rst
+doc/api/twistedtools.rst
+doc/api/util.rst
+doc/plugins/allmodules.rst
+doc/plugins/attrib.rst
+doc/plugins/builtin.rst
+doc/plugins/capture.rst
+doc/plugins/collect.rst
+doc/plugins/cover.rst
+doc/plugins/debug.rst
+doc/plugins/deprecated.rst
+doc/plugins/doctests.rst
+doc/plugins/documenting.rst
+doc/plugins/errorclasses.rst
+doc/plugins/failuredetail.rst
+doc/plugins/interface.rst
+doc/plugins/isolate.rst
+doc/plugins/logcapture.rst
+doc/plugins/multiprocess.rst
+doc/plugins/other.rst
+doc/plugins/prof.rst
+doc/plugins/skip.rst
+doc/plugins/testid.rst
+doc/plugins/testing.rst
+doc/plugins/writing.rst
+doc/plugins/xunit.rst
+examples/attrib_plugin.py
+examples/html_plugin/htmlplug.py
+examples/html_plugin/setup.py
+examples/plugin/plug.py
+examples/plugin/setup.py
+functional_tests/test_attribute_plugin.py
+functional_tests/test_attribute_plugin.pyc
+functional_tests/test_buggy_generators.py
+functional_tests/test_buggy_generators.pyc
+functional_tests/test_cases.py
+functional_tests/test_cases.pyc
+functional_tests/test_collector.py
+functional_tests/test_collector.pyc
+functional_tests/test_commands.py
+functional_tests/test_commands.pyc
+functional_tests/test_config_files.py
+functional_tests/test_config_files.pyc
+functional_tests/test_coverage_plugin.py
+functional_tests/test_coverage_plugin.pyc
+functional_tests/test_defaultpluginmanager.py
+functional_tests/test_defaultpluginmanager.pyc
+functional_tests/test_doctest_plugin.py
+functional_tests/test_doctest_plugin.pyc
+functional_tests/test_entrypoints.py
+functional_tests/test_entrypoints.pyc
+functional_tests/test_failuredetail_plugin.py
+functional_tests/test_failuredetail_plugin.pyc
+functional_tests/test_generator_fixtures.py
+functional_tests/test_generator_fixtures.pyc
+functional_tests/test_id_plugin.py
+functional_tests/test_id_plugin.pyc
+functional_tests/test_importer.py
+functional_tests/test_importer.pyc
+functional_tests/test_isolate_plugin.py
+functional_tests/test_isolate_plugin.pyc
+functional_tests/test_issue_072.py
+functional_tests/test_issue_072.pyc
+functional_tests/test_issue_082.py
+functional_tests/test_issue_082.pyc
+functional_tests/test_issue_408.py
+functional_tests/test_issue_408.pyc
+functional_tests/test_load_tests_from_test_case.py
+functional_tests/test_load_tests_from_test_case.pyc
+functional_tests/test_loader.py
+functional_tests/test_loader.pyc
+functional_tests/test_namespace_pkg.py
+functional_tests/test_namespace_pkg.pyc
+functional_tests/test_plugin_api.py
+functional_tests/test_plugin_api.pyc
+functional_tests/test_plugins.py
+functional_tests/test_plugins.pyc
+functional_tests/test_plugintest.py
+functional_tests/test_plugintest.pyc
+functional_tests/test_program.py
+functional_tests/test_program.pyc
+functional_tests/test_result.py
+functional_tests/test_result.pyc
+functional_tests/test_selector.py
+functional_tests/test_selector.pyc
+functional_tests/test_skip_pdb_interaction.py
+functional_tests/test_skip_pdb_interaction.pyc
+functional_tests/test_success.py
+functional_tests/test_success.pyc
+functional_tests/test_suite.py
+functional_tests/test_suite.pyc
+functional_tests/test_withid_failures.rst
+functional_tests/test_xunit.py
+functional_tests/test_xunit.pyc
+functional_tests/doc_tests/test_addplugins/test_addplugins.rst
+functional_tests/doc_tests/test_addplugins/support/test.py
+functional_tests/doc_tests/test_addplugins/support/test.pyc
+functional_tests/doc_tests/test_allmodules/test_allmodules.rst
+functional_tests/doc_tests/test_allmodules/support/mod.py
+functional_tests/doc_tests/test_allmodules/support/mod.pyc
+functional_tests/doc_tests/test_allmodules/support/test.py
+functional_tests/doc_tests/test_allmodules/support/test.pyc
+functional_tests/doc_tests/test_coverage_html/coverage_html_fixtures.pyc
+functional_tests/doc_tests/test_coverage_html/support/blah.pyc
+functional_tests/doc_tests/test_coverage_html/support/tests/test_covered.pyc
+functional_tests/doc_tests/test_doctest_fixtures/doctest_fixtures.rst
+functional_tests/doc_tests/test_doctest_fixtures/doctest_fixtures_fixtures.py
+functional_tests/doc_tests/test_doctest_fixtures/doctest_fixtures_fixtures.pyc
+functional_tests/doc_tests/test_init_plugin/example.cfg
+functional_tests/doc_tests/test_init_plugin/init_plugin.rst
+functional_tests/doc_tests/test_init_plugin/init_plugin.rst.py3.patch
+functional_tests/doc_tests/test_issue089/unwanted_package.rst
+functional_tests/doc_tests/test_issue089/support/unwanted_package/__init__.py
+functional_tests/doc_tests/test_issue089/support/unwanted_package/__init__.pyc
+functional_tests/doc_tests/test_issue089/support/unwanted_package/test_spam.py
+functional_tests/doc_tests/test_issue089/support/unwanted_package/test_spam.pyc
+functional_tests/doc_tests/test_issue089/support/wanted_package/__init__.py
+functional_tests/doc_tests/test_issue089/support/wanted_package/__init__.pyc
+functional_tests/doc_tests/test_issue089/support/wanted_package/test_eggs.py
+functional_tests/doc_tests/test_issue089/support/wanted_package/test_eggs.pyc
+functional_tests/doc_tests/test_issue097/plugintest_environment.rst
+functional_tests/doc_tests/test_issue107/plugin_exceptions.rst
+functional_tests/doc_tests/test_issue107/support/test_spam.py
+functional_tests/doc_tests/test_issue107/support/test_spam.pyc
+functional_tests/doc_tests/test_issue119/empty_plugin.rst
+functional_tests/doc_tests/test_issue119/test_zeronine.py
+functional_tests/doc_tests/test_issue119/test_zeronine.pyc
+functional_tests/doc_tests/test_issue142/errorclass_failure.rst
+functional_tests/doc_tests/test_issue142/support/errorclass_failing_test.py
+functional_tests/doc_tests/test_issue142/support/errorclass_failing_test.pyc
+functional_tests/doc_tests/test_issue142/support/errorclass_failure_plugin.py
+functional_tests/doc_tests/test_issue142/support/errorclass_failure_plugin.pyc
+functional_tests/doc_tests/test_issue142/support/errorclass_tests.py
+functional_tests/doc_tests/test_issue142/support/errorclass_tests.pyc
+functional_tests/doc_tests/test_issue145/imported_tests.rst
+functional_tests/doc_tests/test_issue145/support/package1/__init__.py
+functional_tests/doc_tests/test_issue145/support/package1/__init__.pyc
+functional_tests/doc_tests/test_issue145/support/package1/test_module.py
+functional_tests/doc_tests/test_issue145/support/package1/test_module.pyc
+functional_tests/doc_tests/test_issue145/support/package2c/__init__.py
+functional_tests/doc_tests/test_issue145/support/package2c/__init__.pyc
+functional_tests/doc_tests/test_issue145/support/package2c/test_module.py
+functional_tests/doc_tests/test_issue145/support/package2c/test_module.pyc
+functional_tests/doc_tests/test_issue145/support/package2f/__init__.py
+functional_tests/doc_tests/test_issue145/support/package2f/__init__.pyc
+functional_tests/doc_tests/test_issue145/support/package2f/test_module.py
+functional_tests/doc_tests/test_issue145/support/package2f/test_module.pyc
+functional_tests/doc_tests/test_multiprocess/multiprocess.rst
+functional_tests/doc_tests/test_multiprocess/multiprocess_fixtures.py
+functional_tests/doc_tests/test_multiprocess/multiprocess_fixtures.pyc
+functional_tests/doc_tests/test_multiprocess/support/test_can_split.py
+functional_tests/doc_tests/test_multiprocess/support/test_can_split.pyc
+functional_tests/doc_tests/test_multiprocess/support/test_not_shared.py
+functional_tests/doc_tests/test_multiprocess/support/test_not_shared.pyc
+functional_tests/doc_tests/test_multiprocess/support/test_shared.py
+functional_tests/doc_tests/test_multiprocess/support/test_shared.pyc
+functional_tests/doc_tests/test_restricted_plugin_options/restricted_plugin_options.rst
+functional_tests/doc_tests/test_restricted_plugin_options/restricted_plugin_options.rst.py3.patch
+functional_tests/doc_tests/test_restricted_plugin_options/support/bad.cfg
+functional_tests/doc_tests/test_restricted_plugin_options/support/start.cfg
+functional_tests/doc_tests/test_restricted_plugin_options/support/test.py
+functional_tests/doc_tests/test_restricted_plugin_options/support/test.pyc
+functional_tests/doc_tests/test_selector_plugin/selector_plugin.rst
+functional_tests/doc_tests/test_selector_plugin/support/mymodule.py
+functional_tests/doc_tests/test_selector_plugin/support/mymodule.pyc
+functional_tests/doc_tests/test_selector_plugin/support/mypackage/__init__.py
+functional_tests/doc_tests/test_selector_plugin/support/mypackage/__init__.pyc
+functional_tests/doc_tests/test_selector_plugin/support/mypackage/strings.py
+functional_tests/doc_tests/test_selector_plugin/support/mypackage/strings.pyc
+functional_tests/doc_tests/test_selector_plugin/support/mypackage/math/__init__.py
+functional_tests/doc_tests/test_selector_plugin/support/mypackage/math/__init__.pyc
+functional_tests/doc_tests/test_selector_plugin/support/mypackage/math/basic.py
+functional_tests/doc_tests/test_selector_plugin/support/mypackage/math/basic.pyc
+functional_tests/doc_tests/test_selector_plugin/support/tests/testlib.py
+functional_tests/doc_tests/test_selector_plugin/support/tests/testlib.pyc
+functional_tests/doc_tests/test_selector_plugin/support/tests/math/basic.py
+functional_tests/doc_tests/test_selector_plugin/support/tests/math/basic.pyc
+functional_tests/doc_tests/test_selector_plugin/support/tests/mymodule/my_function.py
+functional_tests/doc_tests/test_selector_plugin/support/tests/mymodule/my_function.pyc
+functional_tests/doc_tests/test_selector_plugin/support/tests/strings/cat.py
+functional_tests/doc_tests/test_selector_plugin/support/tests/strings/cat.pyc
+functional_tests/doc_tests/test_xunit_plugin/test_skips.rst
+functional_tests/doc_tests/test_xunit_plugin/support/nosetests.xml
+functional_tests/doc_tests/test_xunit_plugin/support/test_skip.py
+functional_tests/doc_tests/test_xunit_plugin/support/test_skip.pyc
+functional_tests/support/test.cfg
+functional_tests/support/test_buggy_generators.py
+functional_tests/support/test_buggy_generators.pyc
+functional_tests/support/xunit.xml
+functional_tests/support/att/test_attr.py
+functional_tests/support/att/test_attr.pyc
+functional_tests/support/coverage/blah.py
+functional_tests/support/coverage/blah.pyc
+functional_tests/support/coverage/tests/test_covered.py
+functional_tests/support/coverage/tests/test_covered.pyc
+functional_tests/support/coverage2/blah.py
+functional_tests/support/coverage2/moo.py
+functional_tests/support/coverage2/moo.pyc
+functional_tests/support/coverage2/tests/test_covered.py
+functional_tests/support/coverage2/tests/test_covered.pyc
+functional_tests/support/ctx/mod_import_skip.py
+functional_tests/support/ctx/mod_import_skip.pyc
+functional_tests/support/ctx/mod_setup_fails.py
+functional_tests/support/ctx/mod_setup_fails.pyc
+functional_tests/support/ctx/mod_setup_skip.py
+functional_tests/support/ctx/mod_setup_skip.pyc
+functional_tests/support/dir1/mod.py
+functional_tests/support/dir1/mod.pyc
+functional_tests/support/dir1/pak/__init__.py
+functional_tests/support/dir1/pak/__init__.pyc
+functional_tests/support/dir1/pak/mod.py
+functional_tests/support/dir1/pak/mod.pyc
+functional_tests/support/dir1/pak/sub/__init__.py
+functional_tests/support/dir1/pak/sub/__init__.pyc
+functional_tests/support/dir2/mod.py
+functional_tests/support/dir2/mod.pyc
+functional_tests/support/dir2/pak/__init__.py
+functional_tests/support/dir2/pak/__init__.pyc
+functional_tests/support/dir2/pak/mod.py
+functional_tests/support/dir2/pak/mod.pyc
+functional_tests/support/dir2/pak/sub/__init__.py
+functional_tests/support/dir2/pak/sub/__init__.pyc
+functional_tests/support/dtt/some_mod.py
+functional_tests/support/dtt/some_mod.pyc
+functional_tests/support/dtt/docs/doc.txt
+functional_tests/support/dtt/docs/errdoc.txt
+functional_tests/support/dtt/docs/nodoc.txt
+functional_tests/support/empty/.hidden
+functional_tests/support/ep/setup.py
+functional_tests/support/ep/someplugin.py
+functional_tests/support/ep/Some_plugin.egg-info/PKG-INFO
+functional_tests/support/ep/Some_plugin.egg-info/SOURCES.txt
+functional_tests/support/ep/Some_plugin.egg-info/dependency_links.txt
+functional_tests/support/ep/Some_plugin.egg-info/entry_points.txt
+functional_tests/support/ep/Some_plugin.egg-info/top_level.txt
+functional_tests/support/fdp/test_fdp.py
+functional_tests/support/fdp/test_fdp.pyc
+functional_tests/support/fdp/test_fdp_no_capt.py
+functional_tests/support/fdp/test_fdp_no_capt.pyc
+functional_tests/support/gen/test.py
+functional_tests/support/gen/test.pyc
+functional_tests/support/id_fails/test_a.py
+functional_tests/support/id_fails/test_a.pyc
+functional_tests/support/id_fails/test_b.py
+functional_tests/support/id_fails/test_b.pyc
+functional_tests/support/idp/exm.py
+functional_tests/support/idp/exm.pyc
+functional_tests/support/idp/tests.py
+functional_tests/support/idp/tests.pyc
+functional_tests/support/ipt/test1/ipthelp.py
+functional_tests/support/ipt/test1/ipthelp.pyc
+functional_tests/support/ipt/test1/tests.py
+functional_tests/support/ipt/test1/tests.pyc
+functional_tests/support/ipt/test2/ipthelp.py
+functional_tests/support/ipt/test2/ipthelp.pyc
+functional_tests/support/ipt/test2/tests.py
+functional_tests/support/ipt/test2/tests.pyc
+functional_tests/support/issue038/test.py
+functional_tests/support/issue038/test.pyc
+functional_tests/support/issue072/test.py
+functional_tests/support/issue072/test.pyc
+functional_tests/support/issue082/_mypackage/__init__.py
+functional_tests/support/issue082/_mypackage/_eggs.py
+functional_tests/support/issue082/_mypackage/bacon.py
+functional_tests/support/issue082/mypublicpackage/__init__.py
+functional_tests/support/issue082/mypublicpackage/__init__.pyc
+functional_tests/support/issue082/mypublicpackage/_foo.py
+functional_tests/support/issue082/mypublicpackage/_foo.pyc
+functional_tests/support/issue082/mypublicpackage/bar.py
+functional_tests/support/issue082/mypublicpackage/bar.pyc
+functional_tests/support/issue130/test.py
+functional_tests/support/issue130/test.pyc
+functional_tests/support/issue143/not-a-package/__init__.py
+functional_tests/support/issue143/not-a-package/test.py
+functional_tests/support/issue191/setup.cfg
+functional_tests/support/issue191/setup.py
+functional_tests/support/issue191/test.py
+functional_tests/support/issue191/test.pyc
+functional_tests/support/issue191/UNKNOWN.egg-info/PKG-INFO
+functional_tests/support/issue191/UNKNOWN.egg-info/SOURCES.txt
+functional_tests/support/issue191/UNKNOWN.egg-info/dependency_links.txt
+functional_tests/support/issue191/UNKNOWN.egg-info/top_level.txt
+functional_tests/support/issue269/test_bad_class.py
+functional_tests/support/issue269/test_bad_class.pyc
+functional_tests/support/issue279/test_mod_setup_fails.py
+functional_tests/support/issue279/test_mod_setup_fails.pyc
+functional_tests/support/issue408/nosetests.xml
+functional_tests/support/issue408/test.py
+functional_tests/support/issue408/test.pyc
+functional_tests/support/ltfn/state.py
+functional_tests/support/ltfn/state.pyc
+functional_tests/support/ltfn/test_mod.py
+functional_tests/support/ltfn/test_mod.pyc
+functional_tests/support/ltfn/test_pak1/__init__.py
+functional_tests/support/ltfn/test_pak1/__init__.pyc
+functional_tests/support/ltfn/test_pak1/test_mod.py
+functional_tests/support/ltfn/test_pak1/test_mod.pyc
+functional_tests/support/ltfn/test_pak2/__init__.py
+functional_tests/support/ltfn/test_pak2/__init__.pyc
+functional_tests/support/ltftc/tests.py
+functional_tests/support/ltftc/tests.pyc
+functional_tests/support/namespace_pkg/namespace_pkg/__init__.py
+functional_tests/support/namespace_pkg/namespace_pkg/__init__.pyc
+functional_tests/support/namespace_pkg/namespace_pkg/example.py
+functional_tests/support/namespace_pkg/namespace_pkg/example.pyc
+functional_tests/support/namespace_pkg/namespace_pkg/test_pkg.py
+functional_tests/support/namespace_pkg/namespace_pkg/test_pkg.pyc
+functional_tests/support/namespace_pkg/site-packages/namespace_pkg/__init__.py
+functional_tests/support/namespace_pkg/site-packages/namespace_pkg/example2.py
+functional_tests/support/namespace_pkg/site-packages/namespace_pkg/example2.pyc
+functional_tests/support/namespace_pkg/site-packages/namespace_pkg/test_pkg2.py
+functional_tests/support/namespace_pkg/site-packages/namespace_pkg/test_pkg2.pyc
+functional_tests/support/package1/example.py
+functional_tests/support/package1/example.pyc
+functional_tests/support/package1/tests/test_example_function.py
+functional_tests/support/package1/tests/test_example_function.pyc
+functional_tests/support/package2/maths.py
+functional_tests/support/package2/maths.pyc
+functional_tests/support/package2/test_pak/__init__.py
+functional_tests/support/package2/test_pak/__init__.pyc
+functional_tests/support/package2/test_pak/test_mod.py
+functional_tests/support/package2/test_pak/test_mod.pyc
+functional_tests/support/package2/test_pak/test_sub/__init__.py
+functional_tests/support/package2/test_pak/test_sub/__init__.pyc
+functional_tests/support/package2/test_pak/test_sub/test_mod.py
+functional_tests/support/package2/test_pak/test_sub/test_mod.pyc
+functional_tests/support/package3/lib/a.py
+functional_tests/support/package3/lib/a.pyc
+functional_tests/support/package3/src/b.py
+functional_tests/support/package3/src/b.pyc
+functional_tests/support/package3/tests/test_a.py
+functional_tests/support/package3/tests/test_a.pyc
+functional_tests/support/package3/tests/test_b.py
+functional_tests/support/package3/tests/test_b.pyc
+functional_tests/support/pass/test.py
+functional_tests/support/pass/test.pyc
+functional_tests/support/todo/test_with_todo.py
+functional_tests/support/todo/test_with_todo.pyc
+functional_tests/support/todo/todoplug.py
+functional_tests/support/todo/todoplug.pyc
+functional_tests/support/twist/test_twisted.py
+functional_tests/support/twist/test_twisted.pyc
+functional_tests/support/xunit/test_xunit_as_suite.py
+functional_tests/support/xunit/test_xunit_as_suite.pyc
+functional_tests/test_issue120/test_named_test_with_doctest.rst
+functional_tests/test_issue120/support/some_test.py
+functional_tests/test_issue120/support/some_test.pyc
+functional_tests/test_multiprocessing/__init__.py
+functional_tests/test_multiprocessing/__init__.pyc
+functional_tests/test_multiprocessing/test_class.py
+functional_tests/test_multiprocessing/test_class.pyc
+functional_tests/test_multiprocessing/test_concurrent_shared.py
+functional_tests/test_multiprocessing/test_concurrent_shared.pyc
+functional_tests/test_multiprocessing/test_keyboardinterrupt.py
+functional_tests/test_multiprocessing/test_keyboardinterrupt.pyc
+functional_tests/test_multiprocessing/test_nameerror.py
+functional_tests/test_multiprocessing/test_nameerror.pyc
+functional_tests/test_multiprocessing/test_process_timeout.py
+functional_tests/test_multiprocessing/test_process_timeout.pyc
+functional_tests/test_multiprocessing/support/class.py
+functional_tests/test_multiprocessing/support/class.pyc
+functional_tests/test_multiprocessing/support/fake_nosetest.py
+functional_tests/test_multiprocessing/support/keyboardinterrupt.py
+functional_tests/test_multiprocessing/support/keyboardinterrupt.pyc
+functional_tests/test_multiprocessing/support/keyboardinterrupt_twice.py
+functional_tests/test_multiprocessing/support/keyboardinterrupt_twice.pyc
+functional_tests/test_multiprocessing/support/nameerror.py
+functional_tests/test_multiprocessing/support/nameerror.pyc
+functional_tests/test_multiprocessing/support/timeout.py
+functional_tests/test_multiprocessing/support/timeout.pyc
+functional_tests/test_multiprocessing/support/concurrent_shared/__init__.py
+functional_tests/test_multiprocessing/support/concurrent_shared/__init__.pyc
+functional_tests/test_multiprocessing/support/concurrent_shared/test.py
+functional_tests/test_multiprocessing/support/concurrent_shared/test.pyc
+nose/__init__.py
+nose/case.py
+nose/commands.py
+nose/config.py
+nose/core.py
+nose/exc.py
+nose/failure.py
+nose/importer.py
+nose/inspector.py
+nose/loader.py
+nose/proxy.py
+nose/pyversion.py
+nose/result.py
+nose/selector.py
+nose/suite.py
+nose/twistedtools.py
+nose/usage.txt
+nose/util.py
+nose.egg-info/PKG-INFO
+nose.egg-info/SOURCES.txt
+nose.egg-info/dependency_links.txt
+nose.egg-info/entry_points.txt
+nose.egg-info/not-zip-safe
+nose.egg-info/top_level.txt
+nose/ext/__init__.py
+nose/ext/dtcompat.py
+nose/plugins/__init__.py
+nose/plugins/allmodules.py
+nose/plugins/attrib.py
+nose/plugins/base.py
+nose/plugins/builtin.py
+nose/plugins/capture.py
+nose/plugins/collect.py
+nose/plugins/cover.py
+nose/plugins/debug.py
+nose/plugins/deprecated.py
+nose/plugins/doctests.py
+nose/plugins/errorclass.py
+nose/plugins/failuredetail.py
+nose/plugins/isolate.py
+nose/plugins/logcapture.py
+nose/plugins/manager.py
+nose/plugins/multiprocess.py
+nose/plugins/plugintest.py
+nose/plugins/prof.py
+nose/plugins/skip.py
+nose/plugins/testid.py
+nose/plugins/xunit.py
+nose/sphinx/__init__.py
+nose/sphinx/pluginopts.py
+nose/tools/__init__.py
+nose/tools/nontrivial.py
+nose/tools/trivial.py
+unit_tests/helpers.py
+unit_tests/helpers.pyc
+unit_tests/mock.py
+unit_tests/mock.pyc
+unit_tests/test_attribute_plugin.py
+unit_tests/test_attribute_plugin.pyc
+unit_tests/test_bug105.py
+unit_tests/test_bug105.pyc
+unit_tests/test_capture_plugin.py
+unit_tests/test_capture_plugin.pyc
+unit_tests/test_cases.py
+unit_tests/test_cases.pyc
+unit_tests/test_config.py
+unit_tests/test_config.pyc
+unit_tests/test_config_defaults.rst
+unit_tests/test_core.py
+unit_tests/test_core.pyc
+unit_tests/test_deprecated_plugin.py
+unit_tests/test_deprecated_plugin.pyc
+unit_tests/test_doctest_error_handling.py
+unit_tests/test_doctest_error_handling.pyc
+unit_tests/test_doctest_munging.rst
+unit_tests/test_id_plugin.py
+unit_tests/test_id_plugin.pyc
+unit_tests/test_importer.py
+unit_tests/test_importer.pyc
+unit_tests/test_inspector.py
+unit_tests/test_inspector.pyc
+unit_tests/test_isolation_plugin.py
+unit_tests/test_isolation_plugin.pyc
+unit_tests/test_issue155.rst
+unit_tests/test_issue270.rst
+unit_tests/test_issue270_fixtures.py
+unit_tests/test_issue270_fixtures.pyc
+unit_tests/test_issue_006.py
+unit_tests/test_issue_006.pyc
+unit_tests/test_issue_064.py
+unit_tests/test_issue_064.pyc
+unit_tests/test_issue_065.py
+unit_tests/test_issue_065.pyc
+unit_tests/test_issue_100.rst
+unit_tests/test_issue_100.rst.py3.patch
+unit_tests/test_issue_101.py
+unit_tests/test_issue_101.pyc
+unit_tests/test_issue_159.rst
+unit_tests/test_issue_227.py
+unit_tests/test_issue_227.pyc
+unit_tests/test_issue_230.py
+unit_tests/test_issue_230.pyc
+unit_tests/test_lazy_suite.py
+unit_tests/test_lazy_suite.pyc
+unit_tests/test_loader.py
+unit_tests/test_loader.pyc
+unit_tests/test_logcapture_plugin.py
+unit_tests/test_logcapture_plugin.pyc
+unit_tests/test_logging.py
+unit_tests/test_logging.pyc
+unit_tests/test_ls_tree.rst
+unit_tests/test_multiprocess.py
+unit_tests/test_multiprocess.pyc
+unit_tests/test_multiprocess_runner.py
+unit_tests/test_multiprocess_runner.pyc
+unit_tests/test_pdb_plugin.py
+unit_tests/test_pdb_plugin.pyc
+unit_tests/test_plugin.py
+unit_tests/test_plugin.pyc
+unit_tests/test_plugin_interfaces.py
+unit_tests/test_plugin_interfaces.pyc
+unit_tests/test_plugin_manager.py
+unit_tests/test_plugin_manager.pyc
+unit_tests/test_plugins.py
+unit_tests/test_plugins.pyc
+unit_tests/test_result_proxy.py
+unit_tests/test_result_proxy.pyc
+unit_tests/test_selector.py
+unit_tests/test_selector.pyc
+unit_tests/test_selector_plugins.py
+unit_tests/test_selector_plugins.pyc
+unit_tests/test_skip_plugin.py
+unit_tests/test_skip_plugin.pyc
+unit_tests/test_suite.py
+unit_tests/test_suite.pyc
+unit_tests/test_tools.py
+unit_tests/test_tools.pyc
+unit_tests/test_twisted.py
+unit_tests/test_twisted.pyc
+unit_tests/test_twisted_testcase.py
+unit_tests/test_twisted_testcase.pyc
+unit_tests/test_utils.py
+unit_tests/test_utils.pyc
+unit_tests/test_xunit.py
+unit_tests/test_xunit.pyc
+unit_tests/support/script.py
+unit_tests/support/test.py
+unit_tests/support/bug101/tests.py
+unit_tests/support/bug105/tests.py
+unit_tests/support/bug105/tests.pyc
+unit_tests/support/config_defaults/a.cfg
+unit_tests/support/config_defaults/b.cfg
+unit_tests/support/config_defaults/invalid.cfg
+unit_tests/support/config_defaults/invalid_value.cfg
+unit_tests/support/doctest/err_doctests.py
+unit_tests/support/doctest/err_doctests.pyc
+unit_tests/support/doctest/no_doctests.py
+unit_tests/support/doctest/no_doctests.pyc
+unit_tests/support/foo/__init__.py
+unit_tests/support/foo/__init__.pyc
+unit_tests/support/foo/doctests.txt
+unit_tests/support/foo/test_foo.py
+unit_tests/support/foo/bar/__init__.py
+unit_tests/support/foo/bar/__init__.pyc
+unit_tests/support/foo/bar/buz.py
+unit_tests/support/foo/bar/buz.pyc
+unit_tests/support/foo/tests/dir_test_file.py
+unit_tests/support/issue006/tests.py
+unit_tests/support/issue006/tests.pyc
+unit_tests/support/issue065/tests.py
+unit_tests/support/issue065/tests.pyc
+unit_tests/support/issue270/__init__.py
+unit_tests/support/issue270/__init__.pyc
+unit_tests/support/issue270/foo_test.py
+unit_tests/support/issue270/foo_test.pyc
+unit_tests/support/other/file.txt
+unit_tests/support/pkgorg/lib/modernity.py
+unit_tests/support/pkgorg/tests/test_mod.py
+unit_tests/support/test-dir/test.py
\ No newline at end of file
--- /dev/null
+[console_scripts]
+nosetests = nose:run_exit
+nosetests-2.7 = nose:run_exit
+
+[distutils.commands]
+nosetests = nose.commands:nosetests
+
from nose.tools import with_setup
__author__ = 'Jason Pellerin'
-__versioninfo__ = (1, 1, 2)
+__versioninfo__ = (1, 2, 1)
__version__ = '.'.join(map(str, __versioninfo__))
__all__ = [
def run(self):
"""ensure tests are capable of being run, then
run nose.main with a reconstructed argument list"""
- self.run_command('egg_info')
+ if getattr(self.distribution, 'use_2to3', False):
+            # If we run 2to3 we cannot do this in place:
- # Build extensions in-place
- self.reinitialize_command('build_ext', inplace=1)
- self.run_command('build_ext')
+ # Ensure metadata is up-to-date
+ self.reinitialize_command('build_py', inplace=0)
+ self.run_command('build_py')
+ bpy_cmd = self.get_finalized_command("build_py")
+ build_path = bpy_cmd.build_lib
+
+ # Build extensions
+ self.reinitialize_command('egg_info', egg_base=build_path)
+ self.run_command('egg_info')
+
+ self.reinitialize_command('build_ext', inplace=0)
+ self.run_command('build_ext')
+ else:
+ self.run_command('egg_info')
+
+ # Build extensions in-place
+ self.reinitialize_command('build_ext', inplace=1)
+ self.run_command('build_ext')
if self.distribution.install_requires:
self.distribution.fetch_build_eggs(
self.distribution.fetch_build_eggs(
self.distribution.tests_require)
- argv = ['nosetests']
+ ei_cmd = self.get_finalized_command("egg_info")
+ argv = ['nosetests', ei_cmd.egg_base]
for (option_name, cmd_name) in self.option_to_cmds.items():
if option_name in option_blacklist:
continue
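The 2to3 branch above only takes effect when a project opts in via setuptools. A hypothetical `setup.py` fragment (project name and version are assumptions) that would trigger the build-then-test path handled by that branch:

```python
# Hypothetical setup.py fragment (name/version are placeholders) showing how a
# project opts into the 2to3 build path: with use_2to3 set, the nosetests
# command builds the converted sources into build_lib and runs tests there
# instead of in place.
from setuptools import setup

setup(
    name='example',
    version='0.1',
    use_2to3=True,
    setup_requires=['nose>=1.2'],
)
```

Running `python setup.py nosetests` then builds metadata and sources under `build_lib` first, and the egg base directory is appended to the nosetests argv so the converted tree is what gets tested.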
self.verbosity = int(env.get('NOSE_VERBOSE', 1))
self.where = ()
self.py3where = ()
- self.workingDir = None
+ self.workingDir = None
"""
def __init__(self, **kw):
self.firstPackageWins = False
self.parserClass = OptionParser
self.worker = False
-
+
self._default = self.__dict__.copy()
self.update(kw)
self._orig = self.__dict__.copy()
dummy_parser = self.parserClass()
self.plugins.addOptions(dummy_parser, {})
self.plugins.configure(self.options, self)
-
+
def __repr__(self):
d = self.__dict__.copy()
# don't expose env, could include sensitive info
if sys.version_info >= (3,):
options.where = options.py3where
- # `where` is an append action, so it can't have a default value
+ # `where` is an append action, so it can't have a default value
# in the parser, or that default will always be in the list
if not options.where:
options.where = env.get('NOSE_WHERE', None)
if options.where is not None:
self.configureWhere(options.where)
-
+
if options.testMatch:
self.testMatch = re.compile(options.testMatch)
-
+
if options.ignoreFiles:
self.ignoreFiles = map(re.compile, tolist(options.ignoreFiles))
log.info("Ignoring files matching %s", options.ignoreFiles)
else:
log.info("Ignoring files matching %s", self.ignoreFilesDefaultStrings)
-
+
if options.include:
self.include = map(re.compile, tolist(options.include))
log.info("Including tests matching %s", options.include)
from logging.config import fileConfig
fileConfig(self.loggingConfig)
return
-
+
format = logging.Formatter('%(name)s: %(levelname)s: %(message)s')
if self.debugLog:
handler = logging.FileHandler(self.debugLog)
if handler not in logger.handlers:
logger.addHandler(handler)
- # default level
+ # default level
lvl = logging.WARNING
if self.verbosity >= 5:
lvl = 0
def todict(self):
return self.__dict__.copy()
-
+
def update(self, d):
self.__dict__.update(d)
"""
def __getstate__(self):
return {}
-
+
def __setstate__(self, state):
pass
def __getnewargs__(self):
return ()
-
+
def __getattr__(self, attr):
return None
__all__ = ['TestProgram', 'main', 'run', 'run_exit', 'runmodule', 'collector',
'TextTestRunner']
-
+
class TextTestRunner(unittest.TextTestRunner):
"""Test runner that uses nose's TextTestResult to enable errorClasses,
as well as providing hooks for plugins to override or replace the test
output stream, results, and the test case itself.
- """
+ """
def __init__(self, stream=sys.stderr, descriptions=1, verbosity=1,
config=None):
if config is None:
self.config = config
unittest.TextTestRunner.__init__(self, stream, descriptions, verbosity)
-
+
def _makeResult(self):
return TextTestResult(self.stream,
self.descriptions,
wrapper = self.config.plugins.prepareTest(test)
if wrapper is not None:
test = wrapper
-
+
# plugins can decorate or capture the output stream
wrapped = self.config.plugins.setOutputStream(self.stream)
if wrapped is not None:
self.stream = wrapped
-
+
result = self._makeResult()
start = time.time()
test(result)
self.config.plugins.finalize(result)
return result
-
+
class TestProgram(unittest.TestProgram):
"""Collect and run tests, returning success or failure.
if config is None:
config = self.makeConfig(env, plugins)
if addplugins:
- config.plugins.addPlugins(addplugins)
+ config.plugins.addPlugins(extraplugins=addplugins)
self.config = config
self.suite = suite
self.exit = exit
"""Load a Config, pre-filled with user config files if any are
found.
"""
- cfg_files = all_config_files()
+ cfg_files = all_config_files()
if plugins:
manager = PluginManager(plugins=plugins)
else:
manager = DefaultPluginManager()
return Config(
env=env, files=cfg_files, plugins=manager)
-
+
def parseArgs(self, argv):
"""Parse argv and env and configure running environment.
"""
if self.config.options.showPlugins:
self.showPlugins()
sys.exit(0)
-
+
if self.testLoader is None:
self.testLoader = defaultTestLoader(config=self.config)
elif isclass(self.testLoader):
if plug_loader is not None:
self.testLoader = plug_loader
log.debug("test loader is %s", self.testLoader)
-
+
# FIXME if self.module is a string, add it to self.testNames? not sure
if self.config.testNames:
if self.config.workingDir is not None:
os.chdir(self.config.workingDir)
self.createTests()
-
+
def createTests(self):
"""Create the tests to run. If a self.suite
is set, then that suite will be used. Otherwise, tests will be
self.options = []
def add_option(self, *arg, **kw):
self.options.append((arg, kw.pop('help', '')))
-
+
v = self.config.verbosity
self.config.plugins.sort()
- for p in self.config.plugins:
+ for p in self.config.plugins:
print "Plugin %s" % p.name
if v >= 2:
print " score: %s" % p.score
initial_indent=' ',
subsequent_indent=' '))
print
-
+
def usage(cls):
import nose
if hasattr(nose, '__loader__'):
* addplugins: List of **extra** plugins to use. Pass a list of plugin
instances in this argument to make custom plugins available while
still using the DefaultPluginManager.
-
+
With the exception that the ``exit`` argument is always set
- to False.
+ to False.
"""
kw['exit'] = False
return TestProgram(*arg, **kw).success
unittest.TestSuite. The collector will, by default, load options from
all config files and execute loader.loadTestsFromNames() on the
configured testNames, or '.' if no testNames are configured.
- """
+ """
# plugins that implement any of these methods are disabled, since
# we don't control the test runner and won't be able to run them
# finalize() is also not called, but plugins that use it aren't disabled,
* Writing a plugin that adds detail to error reports
- Implement ``formatError`` and/or ``formatFailture``. The error tuple
+ Implement ``formatError`` and/or ``formatFailure``. The error tuple
you return (error class, error message, traceback) will replace the
original error tuple.
msg = len(ev.args) and ev.args[0] or ''
else:
msg = ev.message
- if not isinstance(msg, unicode):
+ if (isinstance(msg, basestring) and
+ not isinstance(msg, unicode)):
msg = msg.decode('utf8', 'replace')
ev = u'%s: %s' % (ev.__class__.__name__, msg)
if not isinstance(output, unicode):
.. _coverage: http://www.nedbatchelder.com/code/modules/coverage.html
"""
import logging
-import os
import re
import sys
+import StringIO
from nose.plugins.base import Plugin
from nose.util import src, tolist
-log = logging.getLogger(__name__)
-
-COVERAGE_TEMPLATE = '''<html>
-<head>
-%(title)s
-</head>
-<body>
-%(header)s
-<style>
-.coverage pre {float: left; margin: 0px 1em; border: none;
- padding: 0px; }
-.num pre { margin: 0px }
-.nocov, .nocov pre {background-color: #faa}
-.cov, .cov pre {background-color: #cfc}
-div.coverage div { clear: both; height: 1.1em}
-</style>
-<div class="stats">
-%(stats)s
-</div>
-<div class="coverage">
-%(body)s
-</div>
-</body>
-</html>
-'''
-
-COVERAGE_STATS_TEMPLATE = '''Covered: %(covered)s lines<br/>
-Missed: %(missed)s lines<br/>
-Skipped %(skipped)s lines<br/>
-Percent: %(percent)s %%<br/>
-'''
+log = logging.getLogger(__name__)
class Coverage(Plugin):
"""
coverTests = False
coverPackages = None
- _coverInstance = None
+ coverInstance = None
+ coverErase = False
+ coverMinPercentage = None
score = 200
status = {}
- def coverInstance(self):
- if not self._coverInstance:
- import coverage
- try:
- self._coverInstance = coverage.coverage()
- except coverage.CoverageException:
- self._coverInstance = coverage
- return self._coverInstance
- coverInstance = property(coverInstance)
-
def options(self, parser, env):
"""
Add options to command line.
"""
- Plugin.options(self, parser, env)
+ super(Coverage, self).options(parser, env)
parser.add_option("--cover-package", action="append",
default=env.get('NOSE_COVER_PACKAGE'),
metavar="PACKAGE",
default=env.get('NOSE_COVER_TESTS'),
help="Include test modules in coverage report "
"[NOSE_COVER_TESTS]")
+ parser.add_option("--cover-min-percentage", action="store",
+ dest="cover_min_percentage",
+ default=env.get('NOSE_COVER_MIN_PERCENTAGE'),
+                          help="Minimum percentage of coverage for tests "
+ "to pass [NOSE_COVER_MIN_PERCENTAGE]")
parser.add_option("--cover-inclusive", action="store_true",
dest="cover_inclusive",
default=env.get('NOSE_COVER_INCLUSIVE'),
dest='cover_html_dir',
metavar='DIR',
help='Produce HTML coverage information in dir')
-
- def configure(self, options, config):
+ parser.add_option("--cover-branches", action="store_true",
+ default=env.get('NOSE_COVER_BRANCHES'),
+ dest="cover_branches",
+ help="Include branch coverage in coverage report "
+ "[NOSE_COVER_BRANCHES]")
+ parser.add_option("--cover-xml", action="store_true",
+ default=env.get('NOSE_COVER_XML'),
+ dest="cover_xml",
+ help="Produce XML coverage information")
+ parser.add_option("--cover-xml-file", action="store",
+ default=env.get('NOSE_COVER_XML_FILE', 'coverage.xml'),
+ dest="cover_xml_file",
+ metavar="FILE",
+ help="Produce XML coverage information in file")
+
+ def configure(self, options, conf):
"""
Configure plugin.
"""
self.status.pop('active')
except KeyError:
pass
- Plugin.configure(self, options, config)
- if config.worker:
+ super(Coverage, self).configure(options, conf)
+ if conf.worker:
return
if self.enabled:
try:
"unable to import coverage module")
self.enabled = False
return
- self.conf = config
+ self.conf = conf
self.coverErase = options.cover_erase
self.coverTests = options.cover_tests
self.coverPackages = []
if options.cover_html:
self.coverHtmlDir = options.cover_html_dir
log.debug('Will put HTML coverage report in %s', self.coverHtmlDir)
+ self.coverBranches = options.cover_branches
+ self.coverXmlFile = None
+ if options.cover_min_percentage:
+ self.coverMinPercentage = int(options.cover_min_percentage.rstrip('%'))
+ if options.cover_xml:
+ self.coverXmlFile = options.cover_xml_file
+ log.debug('Will put XML coverage report in %s', self.coverXmlFile)
if self.enabled:
self.status['active'] = True
+ self.coverInstance = coverage.coverage(auto_data=False,
+ branch=self.coverBranches, data_suffix=None)
def begin(self):
"""
self.skipModules = sys.modules.keys()[:]
if self.coverErase:
log.debug("Clearing previously collected coverage statistics")
+ self.coverInstance.combine()
self.coverInstance.erase()
self.coverInstance.exclude('#pragma[: ]+[nN][oO] [cC][oO][vV][eE][rR]')
+ self.coverInstance.load()
self.coverInstance.start()
def report(self, stream):
"""
log.debug("Coverage report")
self.coverInstance.stop()
+ self.coverInstance.combine()
self.coverInstance.save()
- modules = [ module
+ modules = [module
for name, module in sys.modules.items()
- if self.wantModuleCoverage(name, module) ]
+ if self.wantModuleCoverage(name, module)]
log.debug("Coverage report will cover modules: %s", modules)
self.coverInstance.report(modules, file=stream)
if self.coverHtmlDir:
log.debug("Generating HTML coverage report")
- if hasattr(self.coverInstance, 'html_report'):
- self.coverInstance.html_report(modules, self.coverHtmlDir)
- else:
- self.report_html(modules)
-
- def report_html(self, modules):
- if not os.path.exists(self.coverHtmlDir):
- os.makedirs(self.coverHtmlDir)
- files = {}
- for m in modules:
- if hasattr(m, '__name__') and hasattr(m, '__file__'):
- files[m.__name__] = m.__file__
- self.coverInstance.annotate(files.values())
- global_stats = {'covered': 0, 'missed': 0, 'skipped': 0}
- file_list = []
- for m, f in files.iteritems():
- if f.endswith('pyc'):
- f = f[:-1]
- coverfile = f+',cover'
- outfile, stats = self.htmlAnnotate(m, f, coverfile,
- self.coverHtmlDir)
- for field in ('covered', 'missed', 'skipped'):
- global_stats[field] += stats[field]
- file_list.append((stats['percent'], m, outfile, stats))
- os.unlink(coverfile)
- file_list.sort()
- global_stats['percent'] = self.computePercent(
- global_stats['covered'], global_stats['missed'])
- # Now write out an index file for the coverage HTML
- index = open(os.path.join(self.coverHtmlDir, 'index.html'), 'w')
- index.write('<html><head><title>Coverage Index</title></head>'
- '<body><p>')
- index.write(COVERAGE_STATS_TEMPLATE % global_stats)
- index.write('<table><tr><td>File</td><td>Covered</td><td>Missed'
- '</td><td>Skipped</td><td>Percent</td></tr>')
- for junk, name, outfile, stats in file_list:
- stats['a'] = '<a href="%s">%s</a>' % (outfile, name)
- index.write('<tr><td>%(a)s</td><td>%(covered)s</td><td>'
- '%(missed)s</td><td>%(skipped)s</td><td>'
- '%(percent)s %%</td></tr>' % stats)
- index.write('</table></p></html')
- index.close()
-
- def htmlAnnotate(self, name, file, coverfile, outputDir):
- log.debug('Name: %s file: %s' % (name, file, ))
- rows = []
- data = open(coverfile, 'r').read().split('\n')
- padding = len(str(len(data)))
- stats = {'covered': 0, 'missed': 0, 'skipped': 0}
- for lineno, line in enumerate(data):
- lineno += 1
- if line:
- status = line[0]
- line = line[2:]
- else:
- status = ''
- line = ''
- lineno = (' ' * (padding - len(str(lineno)))) + str(lineno)
- for old, new in (('&', '&'), ('<', '<'), ('>', '>'),
- ('"', '"'), ):
- line = line.replace(old, new)
- if status == '!':
- rows.append('<div class="nocov"><span class="num"><pre>'
- '%s</pre></span><pre>%s</pre></div>' % (lineno,
- line))
- stats['missed'] += 1
- elif status == '>':
- rows.append('<div class="cov"><span class="num"><pre>%s</pre>'
- '</span><pre>%s</pre></div>' % (lineno, line))
- stats['covered'] += 1
+ self.coverInstance.html_report(modules, self.coverHtmlDir)
+ if self.coverXmlFile:
+ log.debug("Generating XML coverage report")
+ self.coverInstance.xml_report(modules, self.coverXmlFile)
+
+ # make sure we have minimum required coverage
+ if self.coverMinPercentage:
+ f = StringIO.StringIO()
+ self.coverInstance.report(modules, file=f)
+ m = re.search(r'-------\s\w+\s+\d+\s+\d+\s+(\d+)%\s+\d*\s{0,1}$', f.getvalue())
+ if m:
+ percentage = int(m.groups()[0])
+ if percentage < self.coverMinPercentage:
+ log.error('TOTAL Coverage did not reach minimum '
+ 'required: %d%%' % self.coverMinPercentage)
+ sys.exit(1)
else:
- rows.append('<div class="skip"><span class="num"><pre>%s</pre>'
- '</span><pre>%s</pre></div>' % (lineno, line))
- stats['skipped'] += 1
- stats['percent'] = self.computePercent(stats['covered'],
- stats['missed'])
- html = COVERAGE_TEMPLATE % {'title': '<title>%s</title>' % name,
- 'header': name,
- 'body': '\n'.join(rows),
- 'stats': COVERAGE_STATS_TEMPLATE % stats,
- }
- outfilename = name + '.html'
- outfile = open(os.path.join(outputDir, outfilename), 'w')
- outfile.write(html)
- outfile.close()
- return outfilename, stats
+ log.error("No total percentage was found in coverage output, "
+ "something went wrong.")
- def computePercent(self, covered, missed):
- if covered + missed == 0:
- percent = 1
- else:
- percent = covered/(covered+missed+0.0)
- return int(percent * 100)
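The minimum-coverage check above replays the text report into a `StringIO` and scrapes the TOTAL percentage with a regex. A simplified stand-in for that scraping (the report layout shown is an assumption about coverage's text output, and the regex here is deliberately looser than the plugin's):

```python
import re

def total_percentage(report_text):
    # Simplified stand-in for the plugin's regex: pull the percentage
    # from the TOTAL line of a textual coverage report.
    m = re.search(r'TOTAL\s+\d+\s+\d+\s+(\d+)%', report_text)
    if m is None:
        raise ValueError('no total percentage found in coverage output')
    return int(m.group(1))

report = ('Name    Stmts   Miss  Cover\n'
          '---------------------------\n'
          'mymod      10      2    80%\n'
          '---------------------------\n'
          'TOTAL      10      2    80%\n')
print(total_percentage(report))  # 80
```

As in the plugin, a value below the `--cover-min-percentage` threshold would then fail the run (the plugin calls `sys.exit(1)`).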
def wantModuleCoverage(self, name, module):
if not hasattr(module, '__file__'):
help="Find fixtures for a doctest file in module "
"with this name appended to the base name "
"of the doctest file")
+ parser.add_option('--doctest-options', action="append",
+ dest="doctestOptions",
+ metavar="OPTIONS",
+ help="Specify options to pass to doctest. " +
+                          "E.g. '+ELLIPSIS,+NORMALIZE_WHITESPACE'")
# Set the default as a list, if given in env; otherwise
# an additional value set on the command line will cause
# an error.
self.extension = tolist(options.doctestExtension)
self.fixtures = options.doctestFixtures
self.finder = doctest.DocTestFinder()
-
+ self.optionflags = 0
+ if options.doctestOptions:
+ flags = ",".join(options.doctestOptions).split(',')
+ for flag in flags:
+ try:
+ if flag.startswith('+'):
+ self.optionflags |= getattr(doctest, flag[1:])
+ elif flag.startswith('-'):
+ self.optionflags &= ~getattr(doctest, flag[1:])
+ else:
+ raise ValueError(
+ "Must specify doctest options with starting " +
+ "'+' or '-'. Got %s" % (flag,))
+ except AttributeError:
+ raise ValueError("Unknown doctest option %s" %
+ (flag[1:],))
+
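The parsing above maps each `+NAME`/`-NAME` entry onto the `doctest` module's flag constants with bitwise OR and AND-NOT. A standalone sketch of the same logic:

```python
import doctest

def parse_doctest_flags(spec):
    # Mirror the plugin's parsing: '+NAME' sets a flag, '-NAME' clears it;
    # names are resolved against the doctest module's option constants.
    optionflags = 0
    for flag in spec.split(','):
        if flag.startswith('+'):
            optionflags |= getattr(doctest, flag[1:])
        elif flag.startswith('-'):
            optionflags &= ~getattr(doctest, flag[1:])
        else:
            raise ValueError("Must specify doctest options with starting "
                             "'+' or '-'. Got %s" % (flag,))
    return optionflags

print(parse_doctest_flags('+ELLIPSIS') == doctest.ELLIPSIS)  # True
```

So `--doctest-options='+ELLIPSIS,+NORMALIZE_WHITESPACE'` yields `doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE`, which is then handed to each `DocTestCase` as `optionflags`.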
def prepareTestLoader(self, loader):
"""Capture loader's suiteClass.
continue
if not test.filename:
test.filename = module_file
- cases.append(DocTestCase(test, result_var=self.doctest_result_var))
+ cases.append(DocTestCase(test,
+ optionflags=self.optionflags,
+ result_var=self.doctest_result_var))
if cases:
yield self.suiteClass(cases, context=module, can_split=False)
if test.examples:
case = DocFileCase(
test,
+ optionflags=self.optionflags,
setUp=getattr(fixture_context, 'setup_test', None),
tearDown=getattr(fixture_context, 'teardown_test', None),
result_var=self.doctest_result_var)
for test in doctests:
if len(test.examples) == 0:
continue
- yield DocTestCase(test, obj=obj,
+ yield DocTestCase(test, obj=obj, optionflags=self.optionflags,
result_var=self.doctest_result_var)
def matches(self, name):
"--logging-clear-handlers", action="store_true",
default=False, dest="logcapture_clear",
help="Clear all other logging handlers")
+ parser.add_option(
+ "--logging-level", action="store",
+ default='NOTSET', dest="logcapture_level",
+ help="Set the log level to capture")
def configure(self, options, conf):
"""Configure plugin.
self.logformat = options.logcapture_format
self.logdatefmt = options.logcapture_datefmt
self.clear = options.logcapture_clear
+ self.loglevel = options.logcapture_level
if options.logcapture_filters:
self.filters = options.logcapture_filters.split(',')
root_logger.handlers.remove(handler)
root_logger.addHandler(self.handler)
# to make sure everything gets captured
- root_logger.setLevel(logging.NOTSET)
+ loglevel = getattr(self, "loglevel", "NOTSET")
+ root_logger.setLevel(getattr(logging, loglevel))
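The new `--logging-level` option stores a level *name*; `getattr(logging, loglevel)` resolves it to the numeric level, so the default `'NOTSET'` (0) preserves the old capture-everything behaviour:

```python
import logging

def resolve_level(name):
    # The logcapture plugin stores the option as a level name and resolves
    # it against the logging module, exactly like getattr(logging, loglevel).
    return getattr(logging, name)

print(resolve_level('NOTSET'))  # 0
print(resolve_level('DEBUG'))   # 10
print(resolve_level('ERROR'))   # 40
```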
def begin(self):
"""Set up logging handler before test run begins.
:class:`EntryPointPluginManager`
This manager uses setuptools entrypoints to load plugins.
+:class:`ExtraPluginsPluginManager`
+ This manager loads extra plugins specified with the keyword
+ `addplugins`.
+
:class:`DefaultPluginMananger`
This is the manager class that will be used by default. If
setuptools is installed, it is a subclass of
import logging
import os
import sys
+from itertools import chain as iterchain
from warnings import warn
import nose.config
from nose.failure import Failure
class PluginManager(object):
- """Base class for plugin managers. Does not implement loadPlugins, so it
- may only be used with a static list of plugins.
+ """Base class for plugin managers. PluginManager is intended to be
+ used only with a static list of plugins. The loadPlugins() implementation
+ only reloads plugins from _extraplugins to prevent those from being
+ overridden by a subclass.
The basic functionality of a plugin manager is to proxy all unknown
attributes through a ``PluginProxy`` to a list of plugins.
def __init__(self, plugins=(), proxyClass=None):
self._plugins = []
+ self._extraplugins = ()
self._proxies = {}
if plugins:
self.addPlugins(plugins)
if getattr(p, 'name', None) != new_name]
self._plugins.append(plug)
- def addPlugins(self, plugins):
- for plug in plugins:
+ def addPlugins(self, plugins=(), extraplugins=()):
+ """extraplugins are maintained in a separate list and
+ re-added by loadPlugins() to prevent their being overwritten
+ by plugins added by a subclass of PluginManager
+ """
+ self._extraplugins = extraplugins
+ for plug in iterchain(plugins, extraplugins):
self.addPlugin(plug)
def configure(self, options, config):
log.debug("Plugins enabled: %s", enabled)
def loadPlugins(self):
- pass
+ for plug in self._extraplugins:
+ self.addPlugin(plug)
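The mechanics behind the #466 fix can be sketched in isolation (class and method names below are illustrative, not nose's API): `addPlugin` drops any existing plugin with the same name before appending, so a built-in or entry-point plugin registered later would silently replace a user-supplied one; keeping the extras in a separate list and re-adding them after `loadPlugins` lets them win:

```python
# Minimal sketch of the override problem and its fix (illustrative names).
class Manager(object):
    def __init__(self):
        self.plugins = []
        self.extras = []

    def add(self, plug):
        # last plugin with a given name wins
        self.plugins = [p for p in self.plugins if p.name != plug.name]
        self.plugins.append(plug)

    def add_extras(self, extras):
        self.extras = list(extras)
        for p in extras:
            self.add(p)

    def load_builtins(self, builtins):
        for p in builtins:
            self.add(p)
        # re-add extras so they take precedence over same-named
        # built-in plugins (the #466 fix)
        for p in self.extras:
            self.add(p)

class Plug(object):
    def __init__(self, name, tag):
        self.name, self.tag = name, tag

m = Manager()
m.add_extras([Plug('cov', 'user')])
m.load_builtins([Plug('cov', 'builtin')])
print([p.tag for p in m.plugins])  # ['user']
```

Without the re-add step the final list would be `['builtin']`, which is exactly the behaviour the `addplugins` keyword exhibited before this patch.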
def sort(self):
return sort_list(self._plugins, lambda x: getattr(x, 'score', 1), reverse=True)
def loadPlugins(self):
"""Load plugins by iterating the `nose.plugins` entry point.
"""
- super(EntryPointPluginManager, self).loadPlugins()
from pkg_resources import iter_entry_points
-
loaded = {}
for entry_point, adapt in self.entry_points:
for ep in iter_entry_points(entry_point):
else:
plug = plugcls()
self.addPlugin(plug)
+ super(EntryPointPluginManager, self).loadPlugins()
class BuiltinPluginManager(PluginManager):
try:
import pkg_resources
- class DefaultPluginManager(BuiltinPluginManager, EntryPointPluginManager):
+ class DefaultPluginManager(EntryPointPluginManager, BuiltinPluginManager):
pass
-except ImportError:
- DefaultPluginManager = BuiltinPluginManager
+except ImportError:
+ class DefaultPluginManager(BuiltinPluginManager):
+ pass
class RestrictedPluginManager(DefaultPluginManager):
"""Plugin manager that restricts the plugin list to those not
Process = Queue = Pool = Event = Value = Array = None
-class TimedOutException(Exception):
+# have to inherit KeyboardInterrupt so it will interrupt the process properly
+class TimedOutException(KeyboardInterrupt):
def __init__(self, value = "Timed Out"):
self.value = value
def __str__(self):
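The reason for basing `TimedOutException` on `KeyboardInterrupt` can be shown in a few lines: `KeyboardInterrupt` derives from `BaseException`, not `Exception`, so broad `except Exception:` handlers in worker code cannot swallow the timeout and it propagates like a real Ctrl-C:

```python
# Sketch mirroring the patch: a KeyboardInterrupt subclass escapes ordinary
# `except Exception:` handlers, which is what makes it interrupt workers.
class TimedOutException(KeyboardInterrupt):
    def __init__(self, value="Timed Out"):
        self.value = value
    def __str__(self):
        return repr(self.value)

caught = None
try:
    try:
        raise TimedOutException()
    except Exception:           # ordinary handlers do not see it...
        caught = 'exception'
except KeyboardInterrupt:       # ...it propagates like Ctrl-C
    caught = 'interrupt'
print(caught)  # interrupt
```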
global Process, Queue, Pool, Event, Value, Array
try:
from multiprocessing import Manager, Process
+            # The manager spawns a server process that holds Python objects
+            # and lets other processes manipulate them through proxies.
+            # Prevent that server from being interrupted by SIGINT
+            # (KeyboardInterrupt) so the communication channel between the
+            # subprocesses and the main process is still usable after
+            # Ctrl-C reaches the main process.
+ old=signal.signal(signal.SIGINT, signal.SIG_IGN)
m = Manager()
+            # restore the handler so the main process still receives a
+            # KeyboardInterrupt on Ctrl-C
+ signal.signal(signal.SIGINT, old)
Queue, Pool, Event, Value, Array = (
m.Queue, m.Pool, m.Event, m.Value, m.Array
)
config=self.config,
loaderClass=self.loaderClass)
+def signalhandler(sig, frame):
+ raise TimedOutException()
+
class MultiProcessTestRunner(TextTestRunner):
waitkilltime = 5.0 # max time to wait to terminate a process that does not
- # respond to SIGINT
+ # respond to SIGILL
def __init__(self, **kw):
self.loaderClass = kw.pop('loaderClass', loader.defaultTestLoader)
super(MultiProcessTestRunner, self).__init__(**kw)
- def run(self, test):
- """
- Execute the test (which may be a test suite). If the test is a suite,
- distribute it out among as many processes as have been configured, at
- as fine a level as is possible given the context fixtures defined in
- the suite or any sub-suites.
-
- """
- log.debug("%s.run(%s) (%s)", self, test, os.getpid())
- wrapper = self.config.plugins.prepareTest(test)
- if wrapper is not None:
- test = wrapper
-
- # plugins can decorate or capture the output stream
- wrapped = self.config.plugins.setOutputStream(self.stream)
- if wrapped is not None:
- self.stream = wrapped
-
- testQueue = Queue()
- resultQueue = Queue()
- tasks = []
- completed = []
- workers = []
- to_teardown = []
- shouldStop = Event()
-
- result = self._makeResult()
- start = time.time()
-
+ def collect(self, test, testQueue, tasks, to_teardown, result):
# dispatch and collect results
# put indexes only on queue because tests aren't picklable
for case in self.nextBatch(test):
result.addError(case, sys.exc_info())
else:
to_teardown.append(case)
- for _t in case:
- test_addr = self.addtask(testQueue,tasks,_t)
- log.debug("Queued shared-fixture test %s (%s) to %s",
- len(tasks), test_addr, testQueue)
+ if case.factory:
+ ancestors=case.factory.context.get(case, [])
+ for an in ancestors[:2]:
+ #log.debug('reset ancestor %s', an)
+ if getattr(an, '_multiprocess_shared_', False):
+ an._multiprocess_can_split_=True
+ #an._multiprocess_shared_=False
+ self.collect(case, testQueue, tasks, to_teardown, result)
else:
test_addr = self.addtask(testQueue,tasks,case)
log.debug("Queued test %s (%s) to %s",
len(tasks), test_addr, testQueue)
+ def startProcess(self, iworker, testQueue, resultQueue, shouldStop, result):
+ currentaddr = Value('c',bytes_(''))
+ currentstart = Value('d',time.time())
+ keyboardCaught = Event()
+ p = Process(target=runner,
+ args=(iworker, testQueue,
+ resultQueue,
+ currentaddr,
+ currentstart,
+ keyboardCaught,
+ shouldStop,
+ self.loaderClass,
+ result.__class__,
+ pickle.dumps(self.config)))
+ p.currentaddr = currentaddr
+ p.currentstart = currentstart
+ p.keyboardCaught = keyboardCaught
+ old = signal.signal(signal.SIGILL, signalhandler)
+ p.start()
+ signal.signal(signal.SIGILL, old)
+ return p
+
+ def run(self, test):
+ """
+ Execute the test (which may be a test suite). If the test is a suite,
+ distribute it out among as many processes as have been configured, at
+ as fine a level as is possible given the context fixtures defined in
+ the suite or any sub-suites.
+
+ """
+ log.debug("%s.run(%s) (%s)", self, test, os.getpid())
+ wrapper = self.config.plugins.prepareTest(test)
+ if wrapper is not None:
+ test = wrapper
+
+ # plugins can decorate or capture the output stream
+ wrapped = self.config.plugins.setOutputStream(self.stream)
+ if wrapped is not None:
+ self.stream = wrapped
+
+ testQueue = Queue()
+ resultQueue = Queue()
+ tasks = []
+ completed = []
+ workers = []
+ to_teardown = []
+ shouldStop = Event()
+
+ result = self._makeResult()
+ start = time.time()
+
+ self.collect(test, testQueue, tasks, to_teardown, result)
+
log.debug("Starting %s workers", self.config.multiprocess_workers)
for i in range(self.config.multiprocess_workers):
- currentaddr = Value('c',bytes_(''))
- currentstart = Value('d',0.0)
- keyboardCaught = Event()
- p = Process(target=runner, args=(i, testQueue, resultQueue,
- currentaddr, currentstart,
- keyboardCaught, shouldStop,
- self.loaderClass,
- result.__class__,
- pickle.dumps(self.config)))
- p.currentaddr = currentaddr
- p.currentstart = currentstart
- p.keyboardCaught = keyboardCaught
- # p.setDaemon(True)
- p.start()
+ p = self.startProcess(i, testQueue, resultQueue, shouldStop, result)
workers.append(p)
log.debug("Started worker process %s", i+1)
# need to keep track of the next time to check for timeouts in case
# more than one process times out at the same time.
nexttimeout=self.config.multiprocess_timeout
- while tasks:
- log.debug("Waiting for results (%s/%s tasks), next timeout=%.3fs",
- len(completed), total_tasks,nexttimeout)
- try:
- iworker, addr, newtask_addrs, batch_result = resultQueue.get(
- timeout=nexttimeout)
- log.debug('Results received for worker %d, %s, new tasks: %d',
- iworker,addr,len(newtask_addrs))
+ thrownError = None
+
+ try:
+ while tasks:
+ log.debug("Waiting for results (%s/%s tasks), next timeout=%.3fs",
+ len(completed), total_tasks,nexttimeout)
try:
+ iworker, addr, newtask_addrs, batch_result = resultQueue.get(
+ timeout=nexttimeout)
+ log.debug('Results received for worker %d, %s, new tasks: %d',
+ iworker,addr,len(newtask_addrs))
try:
- tasks.remove(addr)
- except ValueError:
- log.warn('worker %s failed to remove from tasks: %s',
- iworker,addr)
- total_tasks += len(newtask_addrs)
- for newaddr in newtask_addrs:
- tasks.append(newaddr)
- except KeyError:
- log.debug("Got result for unknown task? %s", addr)
- log.debug("current: %s",str(list(tasks)[0]))
- else:
- completed.append([addr,batch_result])
- self.consolidate(result, batch_result)
- if (self.config.stopOnError
- and not result.wasSuccessful()):
- # set the stop condition
- shouldStop.set()
- break
- if self.config.multiprocess_restartworker:
- log.debug('joining worker %s',iworker)
- # wait for working, but not that important if worker
- # cannot be joined in fact, for workers that add to
- # testQueue, they will not terminate until all their
- # items are read
- workers[iworker].join(timeout=1)
- if not shouldStop.is_set() and not testQueue.empty():
- log.debug('starting new process on worker %s',iworker)
- currentaddr = Value('c',bytes_(''))
- currentstart = Value('d',time.time())
- keyboardCaught = Event()
- workers[iworker] = Process(target=runner,
- args=(iworker, testQueue,
- resultQueue,
- currentaddr,
- currentstart,
- keyboardCaught,
- shouldStop,
- self.loaderClass,
- result.__class__,
- pickle.dumps(self.config)))
- workers[iworker].currentaddr = currentaddr
- workers[iworker].currentstart = currentstart
- workers[iworker].keyboardCaught = keyboardCaught
- workers[iworker].start()
- except Empty:
- log.debug("Timed out with %s tasks pending "
- "(empty testQueue=%d): %s",
- len(tasks),testQueue.empty(),str(tasks))
- any_alive = False
- for iworker, w in enumerate(workers):
- if w.is_alive():
- worker_addr = bytes_(w.currentaddr.value,'ascii')
- timeprocessing = time.time() - w.currentstart.value
- if ( len(worker_addr) == 0
+ try:
+ tasks.remove(addr)
+ except ValueError:
+ log.warn('worker %s failed to remove from tasks: %s',
+ iworker,addr)
+ total_tasks += len(newtask_addrs)
+ tasks.extend(newtask_addrs)
+ except KeyError:
+ log.debug("Got result for unknown task? %s", addr)
+ log.debug("current: %s",str(list(tasks)[0]))
+ else:
+ completed.append([addr,batch_result])
+ self.consolidate(result, batch_result)
+ if (self.config.stopOnError
+ and not result.wasSuccessful()):
+ # set the stop condition
+ shouldStop.set()
+ break
+ if self.config.multiprocess_restartworker:
+ log.debug('joining worker %s',iworker)
+                        # wait for the worker; it is not that important if it
+                        # cannot be joined, since workers that add to
+                        # testQueue will not terminate until all their items
+                        # are read
+ workers[iworker].join(timeout=1)
+ if not shouldStop.is_set() and not testQueue.empty():
+ log.debug('starting new process on worker %s',iworker)
+ workers[iworker] = self.startProcess(iworker, testQueue, resultQueue, shouldStop, result)
+ except Empty:
+ log.debug("Timed out with %s tasks pending "
+ "(empty testQueue=%r): %s",
+ len(tasks),testQueue.empty(),str(tasks))
+ any_alive = False
+ for iworker, w in enumerate(workers):
+ if w.is_alive():
+ worker_addr = bytes_(w.currentaddr.value,'ascii')
+ timeprocessing = time.time() - w.currentstart.value
+ if ( len(worker_addr) == 0
+ and timeprocessing > self.config.multiprocess_timeout-0.1):
+ log.debug('worker %d has finished its work item, '
+ 'but is not exiting? do we wait for it?',
+ iworker)
+ else:
+ any_alive = True
+ if (len(worker_addr) > 0
and timeprocessing > self.config.multiprocess_timeout-0.1):
- log.debug('worker %d has finished its work item, '
- 'but is not exiting? do we wait for it?',
- iworker)
- else:
- any_alive = True
- if (len(worker_addr) > 0
- and timeprocessing > self.config.multiprocess_timeout-0.1):
- log.debug('timed out worker %s: %s',
- iworker,worker_addr)
- w.currentaddr.value = bytes_('')
- # If the process is in C++ code, sending a SIGINT
- # might not send a python KeybordInterrupt exception
- # therefore, send multiple signals until an
- # exception is caught. If this takes too long, then
- # terminate the process
- w.keyboardCaught.clear()
- startkilltime = time.time()
- while not w.keyboardCaught.is_set() and w.is_alive():
- if time.time()-startkilltime > self.waitkilltime:
- # have to terminate...
- log.error("terminating worker %s",iworker)
- w.terminate()
- currentaddr = Value('c',bytes_(''))
- currentstart = Value('d',time.time())
- keyboardCaught = Event()
- workers[iworker] = Process(target=runner,
- args=(iworker, testQueue, resultQueue,
- currentaddr, currentstart,
- keyboardCaught, shouldStop,
- self.loaderClass,
- result.__class__,
- pickle.dumps(self.config)))
- workers[iworker].currentaddr = currentaddr
- workers[iworker].currentstart = currentstart
- workers[iworker].keyboardCaught = keyboardCaught
- workers[iworker].start()
- # there is a small probability that the
- # terminated process might send a result,
- # which has to be specially handled or
- # else processes might get orphaned.
- w = workers[iworker]
- break
- os.kill(w.pid, signal.SIGINT)
- time.sleep(0.1)
- if not any_alive and testQueue.empty():
- log.debug("All workers dead")
- break
- nexttimeout=self.config.multiprocess_timeout
- for w in workers:
- if w.is_alive() and len(w.currentaddr.value) > 0:
- timeprocessing = time.time()-w.currentstart.value
- if timeprocessing <= self.config.multiprocess_timeout:
- nexttimeout = min(nexttimeout,
- self.config.multiprocess_timeout-timeprocessing)
-
- log.debug("Completed %s tasks (%s remain)", len(completed), len(tasks))
-
- for case in to_teardown:
- log.debug("Tearing down shared fixtures for %s", case)
- try:
- case.tearDown()
- except (KeyboardInterrupt, SystemExit):
- raise
- except:
- result.addError(case, sys.exc_info())
+ log.debug('timed out worker %s: %s',
+ iworker,worker_addr)
+ w.currentaddr.value = bytes_('')
+                                    # If the process is in C++ code, sending a SIGILL
+                                    # might not raise a Python KeyboardInterrupt exception;
+ # therefore, send multiple signals until an
+ # exception is caught. If this takes too long, then
+ # terminate the process
+ w.keyboardCaught.clear()
+ startkilltime = time.time()
+ while not w.keyboardCaught.is_set() and w.is_alive():
+ if time.time()-startkilltime > self.waitkilltime:
+ # have to terminate...
+ log.error("terminating worker %s",iworker)
+ w.terminate()
+ # there is a small probability that the
+ # terminated process might send a result,
+ # which has to be specially handled or
+ # else processes might get orphaned.
+ workers[iworker] = w = self.startProcess(iworker, testQueue, resultQueue, shouldStop, result)
+ break
+ os.kill(w.pid, signal.SIGILL)
+ time.sleep(0.1)
+ if not any_alive and testQueue.empty():
+ log.debug("All workers dead")
+ break
+ nexttimeout=self.config.multiprocess_timeout
+ for w in workers:
+ if w.is_alive() and len(w.currentaddr.value) > 0:
+ timeprocessing = time.time()-w.currentstart.value
+ if timeprocessing <= self.config.multiprocess_timeout:
+ nexttimeout = min(nexttimeout,
+ self.config.multiprocess_timeout-timeprocessing)
+ log.debug("Completed %s tasks (%s remain)", len(completed), len(tasks))
+
+ except (KeyboardInterrupt, SystemExit), e:
+ log.info('parent received ctrl-c when waiting for test results')
+ thrownError = e
+ #resultQueue.get(False)
+
+ result.addError(test, sys.exc_info())
+
+ try:
+ for case in to_teardown:
+ log.debug("Tearing down shared fixtures for %s", case)
+ try:
+ case.tearDown()
+ except (KeyboardInterrupt, SystemExit):
+ raise
+ except:
+ result.addError(case, sys.exc_info())
- stop = time.time()
+ stop = time.time()
- # first write since can freeze on shutting down processes
- result.printErrors()
- result.printSummary(start, stop)
- self.config.plugins.finalize(result)
+ # first write since can freeze on shutting down processes
+ result.printErrors()
+ result.printSummary(start, stop)
+ self.config.plugins.finalize(result)
- log.debug("Tell all workers to stop")
- for w in workers:
- if w.is_alive():
- testQueue.put('STOP', block=False)
+ if thrownError is None:
+ log.debug("Tell all workers to stop")
+ for w in workers:
+ if w.is_alive():
+ testQueue.put('STOP', block=False)
- # wait for the workers to end
- try:
+ # wait for the workers to end
for iworker,worker in enumerate(workers):
if worker.is_alive():
log.debug('joining worker %s',iworker)
- worker.join()#10)
+ worker.join()
if worker.is_alive():
log.debug('failed to join worker %s',iworker)
- except KeyboardInterrupt:
- log.info('parent received ctrl-c')
+ except (KeyboardInterrupt, SystemExit):
+ log.info('parent received ctrl-c when shutting down: stop all processes')
for worker in workers:
- worker.terminate()
- worker.join()
+ if worker.is_alive():
+ worker.terminate()
+ if thrownError: raise thrownError
+ else: raise
return result
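The parent loop above blocks on `resultQueue.get` with a sliding timeout: it waits only as long as the busiest worker's remaining allowance before re-checking worker health. That bookkeeping can be sketched on its own (a minimal sketch; the `(currentaddr, currentstart)` pairs are a stand-in for nose's worker process attributes, not its actual API):

```python
import time

def next_result_timeout(workers, multiprocess_timeout, now=None):
    """Return how long the parent should block on the result queue.

    Each worker is a (currentaddr, currentstart) pair: a non-empty
    currentaddr means the worker is busy on a test it started at
    currentstart.  The wait is the smallest remaining allowance over
    all busy workers, capped at the full multiprocess_timeout.
    """
    now = time.time() if now is None else now
    nexttimeout = multiprocess_timeout
    for currentaddr, currentstart in workers:
        if currentaddr:  # worker is mid-test
            elapsed = now - currentstart
            if elapsed <= multiprocess_timeout:
                nexttimeout = min(nexttimeout,
                                  multiprocess_timeout - elapsed)
    return nexttimeout
```

With a 10-second budget, a worker that has already run for 4 seconds caps the next wait at 6 seconds; idle or already-overdue workers do not shorten it further.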
for batch in self.nextBatch(case):
yield batch
- def checkCanSplit(self, context, fixt):
+ def checkCanSplit(context, fixt):
"""
Callback that we use to check whether the fixtures found in a
context or ancestor are ones we care about.
if getattr(context, '_multiprocess_can_split_', False):
return False
return True
+ checkCanSplit = staticmethod(checkCanSplit)
def sharedFixtures(self, case):
context = getattr(case, 'context', None)
test(result)
currentaddr.value = bytes_('')
resultQueue.put((ix, test_addr, test.tasks, batch(result)))
- except KeyboardInterrupt:
- keyboardCaught.set()
- if len(currentaddr.value) > 0:
- log.exception('Worker %s keyboard interrupt, failing '
- 'current test %s',ix,test_addr)
+        except KeyboardInterrupt, e: # possibly a TimedOutException from the timeout signal handler
+ timeout = isinstance(e, TimedOutException)
+ if timeout:
+ keyboardCaught.set()
+ if len(currentaddr.value):
+ if timeout:
+ msg = 'Worker %s timed out, failing current test %s'
+ else:
+ msg = 'Worker %s keyboard interrupt, failing current test %s'
+ log.exception(msg,ix,test_addr)
currentaddr.value = bytes_('')
failure.Failure(*sys.exc_info())(result)
resultQueue.put((ix, test_addr, test.tasks, batch(result)))
else:
- log.debug('Worker %s test %s timed out',ix,test_addr)
+ if timeout:
+ msg = 'Worker %s test %s timed out'
+ else:
+ msg = 'Worker %s test %s keyboard interrupt'
+ log.debug(msg,ix,test_addr)
resultQueue.put((ix, test_addr, test.tasks, batch(result)))
+ if not timeout:
+ raise
except SystemExit:
currentaddr.value = bytes_('')
log.exception('Worker %s system exit',ix)
else:
result, orig = result, result
try:
+ #log.debug('setUp for %s', id(self));
self.setUp()
except KeyboardInterrupt:
raise
result.addError(self, self._exc_info())
return
try:
- localtests = [test for test in self._tests]
- if len(localtests) > 1 and self.testQueue is not None:
- log.debug("queue %d tests"%len(localtests))
- for test in localtests:
- if isinstance(test.test,nose.failure.Failure):
- # proably failed in the generator, so execute directly
- # to get the exception
- test(orig)
- else:
- MultiProcessTestRunner.addtask(self.testQueue,
- self.tasks, test)
- else:
- for test in localtests:
- if (isinstance(test,nose.case.Test)
- and self.arg is not None):
- test.test.arg = self.arg
+ for test in self._tests:
+ if (isinstance(test,nose.case.Test)
+ and self.arg is not None):
+ test.test.arg = self.arg
+ else:
+ test.arg = self.arg
+ test.testQueue = self.testQueue
+ test.tasks = self.tasks
+ if result.shouldStop:
+ log.debug("stopping")
+ break
+ # each nose.case.Test will create its own result proxy
+ # so the cases need the original result, to avoid proxy
+ # chains
+ #log.debug('running test %s in suite %s', test, self);
+ try:
+ test(orig)
+ except KeyboardInterrupt, e:
+ timeout = isinstance(e, TimedOutException)
+ if timeout:
+ msg = 'Timeout when running test %s in suite %s'
else:
- test.arg = self.arg
- test.testQueue = self.testQueue
- test.tasks = self.tasks
- if result.shouldStop:
- log.debug("stopping")
- break
- # each nose.case.Test will create its own result proxy
- # so the cases need the original result, to avoid proxy
- # chains
- try:
- test(orig)
- except KeyboardInterrupt,e:
- err = (TimedOutException,TimedOutException(str(test)),
- sys.exc_info()[2])
- test.config.plugins.addError(test,err)
- orig.addError(test,err)
+ msg = 'KeyboardInterrupt when running test %s in suite %s'
+ log.debug(msg, test, self)
+ err = (TimedOutException,TimedOutException(str(test)),
+ sys.exc_info()[2])
+ test.config.plugins.addError(test,err)
+ orig.addError(test,err)
+ if not timeout:
+ raise
finally:
self.has_run = True
try:
+ #log.debug('tearDown for %s', id(self));
self.tearDown()
except KeyboardInterrupt:
raise
===============
The plugin interface is well-tested enough to safely unit test your
-use of its hooks with some level of confidence. However, there is also
-a mixin for unittest.TestCase called PluginTester that's designed to
+use of its hooks with some level of confidence. However, there is also
+a mixin for unittest.TestCase called PluginTester that's designed to
test plugins in their native runtime environment.
Here's a simple example with a do-nothing plugin and a composed suite.
... for line in self.output:
... # i.e. check for patterns
... pass
- ...
+ ...
... # or check for a line containing ...
... assert "ValueError" in self.output
... def makeSuite(self):
And here is a more complex example of testing a plugin that has extra
arguments and reads environment variables.
-
+
>>> import unittest, os
>>> from nose.plugins import Plugin, PluginTester
>>> class FancyOutputter(Plugin):
... self.fanciness = 2
... if 'EVEN_FANCIER' in self.env:
... self.fanciness = 3
- ...
+ ...
... def options(self, parser, env=os.environ):
... self.env = env
... parser.add_option('--more-fancy', action='store_true')
... Plugin.options(self, parser, env=env)
- ...
+ ...
... def report(self, stream):
... stream.write("FANCY " * self.fanciness)
- ...
+ ...
>>> class TestFancyOutputter(PluginTester, unittest.TestCase):
... activate = '--with-fancy' # enables the plugin
... plugins = [FancyOutputter()]
... args = ['--more-fancy']
... env = {'EVEN_FANCIER': '1'}
- ...
+ ...
... def test_fancy_output(self):
... assert "FANCY FANCY FANCY" in self.output, (
... "got: %s" % self.output)
... def runTest(self):
... raise ValueError("I hate fancy stuff")
... return unittest.TestSuite([TC()])
- ...
+ ...
>>> res = unittest.TestResult()
>>> case = TestFancyOutputter('test_fancy_output')
>>> case(res)
return self.__buffer.getvalue()
def __getattr__(self, attr):
return getattr(self.__buffer, attr)
-
+
try:
from multiprocessing import Manager
Buffer = MultiProcessFile
class PluginTester(object):
"""A mixin for testing nose plugins in their runtime environment.
-
- Subclass this and mix in unittest.TestCase to run integration/functional
- tests on your plugin. When setUp() is called, the stub test suite is
- executed with your plugin so that during an actual test you can inspect the
+
+ Subclass this and mix in unittest.TestCase to run integration/functional
+ tests on your plugin. When setUp() is called, the stub test suite is
+ executed with your plugin so that during an actual test you can inspect the
artifacts of how your plugin interacted with the stub test suite.
-
+
- activate
-
+
- the argument to send nosetests to activate the plugin
-
+
- suitepath
-
+
- if set, this is the path of the suite to test. Otherwise, you
will need to use the hook, makeSuite()
-
+
- plugins
- the list of plugins to make available during the run. Note
that this does not mean these plugins will be *enabled* during
the run -- only the plugins enabled by the activate argument
or other settings in argv or env will be enabled.
-
+
- args
-
+
- a list of arguments to add to the nosetests command, in addition to
the activate argument
-
+
- env
-
+
- optional dict of environment variables to send nosetests
"""
argv = None
plugins = []
ignoreFiles = None
-
+
def makeSuite(self):
"""returns a suite object of tests to run (unittest.TestSuite())
-
- If self.suitepath is None, this must be implemented. The returned suite
- object will be executed with all plugins activated. It may return
+
+ If self.suitepath is None, this must be implemented. The returned suite
+ object will be executed with all plugins activated. It may return
None.
-
+
Here is an example of a basic suite object you can return ::
-
+
>>> import unittest
>>> class SomeTest(unittest.TestCase):
... def runTest(self):
... raise ValueError("Now do something, plugin!")
- ...
+ ...
>>> unittest.TestSuite([SomeTest()]) # doctest: +ELLIPSIS
<unittest...TestSuite tests=[<...SomeTest testMethod=runTest>]>
-
+
"""
raise NotImplementedError
-
+
def _execPlugin(self):
"""execute the plugin on the internal test suite.
"""
from nose.config import Config
from nose.core import TestProgram
from nose.plugins.manager import PluginManager
-
+
suite = None
stream = Buffer()
conf = Config(env=self.env,
conf.ignoreFiles = self.ignoreFiles
if not self.suitepath:
suite = self.makeSuite()
-
+
self.nose = TestProgram(argv=self.argv, config=conf, suite=suite,
exit=False)
self.output = AccessDecorator(stream)
-
+
def setUp(self):
- """runs nosetests with the specified test suite, all plugins
+ """runs nosetests with the specified test suite, all plugins
activated.
"""
self.argv = ['nosetests', self.activate]
if self.args:
self.argv.extend(self.args)
if self.suitepath:
- self.argv.append(self.suitepath)
+ self.argv.append(self.suitepath)
self._execPlugin()
if 'argv' not in kw:
kw['argv'] = ['nosetests', '-v']
kw['config'].stream = buffer
-
+
# Set up buffering so that all output goes to our buffer,
# or warn user if deprecated behavior is active. If this is not
# done, prints and warnings will either be out of place or
out = buffer.getvalue()
print munge_nose_output_for_doctest(out)
-
+
def run_buffered(*arg, **kw):
kw['buffer_all'] = True
run(*arg, **kw)
from nose.util import src, set
try:
- from pickle import dump, load
+ from cPickle import dump, load
except ImportError:
from pickle import dump, load
--- /dev/null
+"""
+Tools for testing
+-----------------
+
+nose.tools provides a few convenience functions to make writing tests
+easier. You don't have to use them; nothing in the rest of nose depends
+on any of these methods.
+
+"""
+from nose.tools.nontrivial import *
+from nose.tools.trivial import *
-"""
-Tools for testing
------------------
-
-nose.tools provides a few convenience functions to make writing tests
-easier. You don't have to use them; nothing in the rest of nose depends
-on any of these methods.
-"""
-import re
+"""Tools not exempt from being descended into in tracebacks"""
+
import time
-import unittest
-__all__ = ['ok_', 'eq_', 'make_decorator', 'raises', 'set_trace', 'timed',
- 'with_setup', 'TimeExpired', 'istest', 'nottest']
+__all__ = ['make_decorator', 'raises', 'set_trace', 'timed', 'with_setup',
+ 'TimeExpired', 'istest', 'nottest']
class TimeExpired(AssertionError):
pass
-def ok_(expr, msg=None):
- """Shorthand for assert. Saves 3 whole characters!
- """
- assert expr, msg
-
-
-def eq_(a, b, msg=None):
- """Shorthand for 'assert a == b, "%r != %r" % (a, b)
- """
- assert a == b, msg or "%r != %r" % (a, b)
-
-
def make_decorator(func):
"""
Wraps a test decorator so as to properly replicate metadata
"""
func.__test__ = False
return func
-
-#
-# Expose assert* from unittest.TestCase
-# - give them pep8 style names
-#
-caps = re.compile('([A-Z])')
-
-def pep8(name):
- return caps.sub(lambda m: '_' + m.groups()[0].lower(), name)
-
-class Dummy(unittest.TestCase):
- def nop():
- pass
-_t = Dummy('nop')
-
-for at in [ at for at in dir(_t)
- if at.startswith('assert') and not '_' in at ]:
- pepd = pep8(at)
- vars()[pepd] = getattr(_t, at)
- __all__.append(pepd)
-
-del Dummy
-del _t
-del pep8
--- /dev/null
+"""Tools so trivial that tracebacks should not descend into them
+
+We define the ``__unittest`` symbol in their module namespace so unittest will
+skip them when printing tracebacks, just as it does for their corresponding
+methods in ``unittest`` proper.
+
+"""
+import re
+import unittest
+
+
+__all__ = ['ok_', 'eq_']
+
+# Use the same flag as unittest itself to prevent descent into these functions:
+__unittest = 1
+
+
+def ok_(expr, msg=None):
+ """Shorthand for assert. Saves 3 whole characters!
+ """
+ if not expr:
+ raise AssertionError(msg)
+
+
+def eq_(a, b, msg=None):
+    """Shorthand for 'assert a == b, "%r != %r" % (a, b)'
+ """
+ if not a == b:
+ raise AssertionError(msg or "%r != %r" % (a, b))
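The point of spelling out the `raise` instead of using `assert` (per #504) is that `python -O` compiles bare `assert` statements away, silently disabling any helper built on them. A quick demonstration of the difference, using subprocesses (illustrative only, not part of the patch):

```python
import subprocess
import sys

# Under ``python -O`` a bare assert is compiled away, so the first child
# exits cleanly even though its assertion is false.  An explicit raise,
# as in the ok_()/eq_() definitions above, still fails under -O.
bare = subprocess.run(
    [sys.executable, "-O", "-c", "assert False"],
    capture_output=True)
explicit = subprocess.run(
    [sys.executable, "-O", "-c", "raise AssertionError('still fails')"],
    capture_output=True)
# bare.returncode == 0, explicit.returncode == 1
```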
+
+
+#
+# Expose assert* from unittest.TestCase
+# - give them pep8 style names
+#
+caps = re.compile('([A-Z])')
+
+def pep8(name):
+ return caps.sub(lambda m: '_' + m.groups()[0].lower(), name)
+
+class Dummy(unittest.TestCase):
+ def nop():
+ pass
+_t = Dummy('nop')
+
+for at in [ at for at in dir(_t)
+ if at.startswith('assert') and not '_' in at ]:
+ pepd = pep8(at)
+ vars()[pepd] = getattr(_t, at)
+ __all__.append(pepd)
+
+del Dummy
+del _t
+del pep8
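The renaming loop above mechanically derives PEP 8 aliases for the `unittest.TestCase` assert methods (`assertEqual` becomes `assert_equal`, and so on). The conversion itself is a single regex substitution and can be exercised in isolation:

```python
import re

caps = re.compile('([A-Z])')

def pep8(name):
    # Replace each capital with an underscore plus its lowercase form:
    # assertEqual -> assert_equal, assertNotEqual -> assert_not_equal
    return caps.sub(lambda m: '_' + m.groups()[0].lower(), name)
```

Because the module iterates over `dir()` of a live `TestCase` instance, the alias set tracks whatever assert methods the running Python's `unittest` provides, rather than a hard-coded list.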
you mix tests using these tools and tests using twisted.trial.
"""
global _twisted_thread
- reactor.stop()
+
+ def stop_reactor():
+        '''Helper for calling stop from within the reactor thread.'''
+ reactor.stop()
+
+ reactor.callFromThread(stop_reactor)
reactor_thread.join()
for p in reactor.getDelayedCalls():
if p.active():
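`reactor.stop()` is only safe to call from the reactor's own thread, which is why the patch routes it through `reactor.callFromThread` (#301). A toy analogue of that pattern, with a queue-driven loop standing in for the reactor (this is not Twisted's implementation, just the same thread-handoff idea):

```python
import queue
import threading

class MiniLoop:
    """Toy analogue of Twisted's reactor: runs callables in its own
    thread.  callFromThread() schedules work thread-safely; stop()
    must execute inside the loop thread, like reactor.stop()."""

    def __init__(self):
        self._calls = queue.Queue()
        self._running = True
        self._thread = threading.Thread(target=self._run)

    def start(self):
        self._thread.start()

    def _run(self):
        # Consume scheduled callables until stop() flips the flag.
        while self._running:
            self._calls.get()()

    def callFromThread(self, f):
        # Thread-safe scheduling: the queue hands f to the loop thread.
        self._calls.put(f)

    def stop(self):
        # Only safe inside the loop thread -- hence callFromThread(stop).
        self._running = False

loop = MiniLoop()
loop.start()
results = []
loop.callFromThread(lambda: results.append('ran'))
loop.callFromThread(loop.stop)  # the patch's fix: stop via callFromThread
loop._thread.join()
```

Scheduling `stop` through the queue guarantees it runs after any earlier scheduled work and on the loop's own thread, so the loop exits cleanly instead of racing a cross-thread shutdown.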
.TP
+\fB\-\-cover\-branches\fR\fR
+Include branch coverage in coverage report [NOSE_COVER_BRANCHES]
+
+
+.TP
+\fB\-\-cover\-xml\fR\fR
+Produce XML coverage information
+
+
+.TP
+\fB\-\-cover\-xml\-file\fR\fR=FILE
+Produce XML coverage information in file
+
+
+.TP
\fB\-\-pdb\fR\fR
Drop into debugger on errors
.SH COPYRIGHT
LGPL
-.\" Generated by docutils manpage writer on 2011-07-30 18:55.
+.\" Generated by docutils manpage writer on 2012-09-03 09:06.
.\"
[nosetests]
-with-doctest=1
-doctest-extension=.rst
-doctest-fixtures=_fixtures
-py3where=build/tests
+with-doctest = 1
+doctest-extension = .rst
+doctest-fixtures = _fixtures
+py3where = build/tests
[bdist_rpm]
-doc_files = man/man1/nosetests.1 README.txt
-;; Uncomment if your platform automatically gzips man pages
-;; See README.BDIST_RPM
-;; install_script = install-rpm.sh
+doc_files = README.txt
+
+[egg_info]
+tag_build =
+tag_date = 0
+tag_svn_revision = 0
+
import sys
import os
-VERSION = '1.1.2'
+VERSION = '1.2.1'
py_vers_tag = '-%s.%s' % sys.version_info[:2]
test_dirs = ['functional_tests', 'unit_tests', os.path.join('doc','doc_tests'), 'nose']
coverage and profiling, flexible attribute-based test selection,
output capture and more. More information about writing plugins may be
found on in the nose API documentation, here:
- http://somethingaboutorange.com/mrl/projects/nose/
+ http://readthedocs.org/docs/nose/
If you have recently reported a bug marked as fixed, or have a craving for
- the very latest, you may want the unstable development version instead:
- http://bitbucket.org/jpellerin/nose/get/tip.gz#egg=nose-dev
+ the very latest, you may want the development version instead:
+ https://github.com/nose-devs/nose/tarball/master#egg=nose-dev
""",
license = 'GNU LGPL',
keywords = 'test unittest doctest automatic discovery',
'examples/*.py',
'examples/*/*.py']},
classifiers = [
- 'Development Status :: 4 - Beta',
+ 'Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'License :: OSI Approved :: GNU Library or Lesser General Public License (LGPL)',
'Natural Language :: English',
del kwargs[a]
return _setup(*args, **kwargs)
else:
+ import glob
import os
import re
import logging
from setuptools.command.build_py import Mixin2to3
from distutils import dir_util, file_util, log
import setuptools.command.test
- from pkg_resources import normalize_path
+ from pkg_resources import normalize_path, Environment
try:
import patch
patch.logger.setLevel(logging.WARN)
# Override 'test' command to make sure 'build_tests' gets run first.
def run(self):
self.run_command('build_tests')
+ this_dir = os.path.normpath(os.path.abspath(os.path.dirname(__file__)))
+ lib_dirs = glob.glob(os.path.join(this_dir, 'build', 'lib*'))
+ test_dir = os.path.join(this_dir, 'build', 'tests')
+ env = Environment(search_path=lib_dirs)
+ distributions = env["nose"]
+ assert len(distributions) == 1, (
+ "Incorrect number of distributions found")
+ dist = distributions[0]
+ dist.activate()
+ sys.path.insert(0, test_dir)
setuptools.command.test.test.run(self)
def setup(*args, **kwargs):
eq_(1, len(c.handler.buffer))
eq_("Hello", c.handler.buffer[0].msg)
+ def test_loglevel(self):
+ c = LogCapture()
+ parser = OptionParser()
+ c.addOptions(parser, {})
+ options, args = parser.parse_args(['--logging-level', 'INFO'])
+ c.configure(options, Config())
+ c.start()
+ log = logging.getLogger("loglevel")
+ log.debug("Hello")
+ log.info("Goodbye")
+ c.end()
+ records = c.formatLogRecords()
+ eq_(1, len(c.handler.buffer))
+ eq_("Goodbye", c.handler.buffer[0].msg)
+ eq_("loglevel: INFO: Goodbye", records[0])
+
def test_clears_all_existing_log_handlers(self):
c = LogCapture()
parser = OptionParser()
c.beforeTest(mktest())
c.end()
-
if py27:
expect = ["<class 'nose.plugins.logcapture.MyMemoryHandler'>"]
else:
assert records[0].startswith('foo:'), records[0]
assert records[1].startswith('foo.x:'), records[1]
assert records[2].startswith('bar.quux:'), records[2]
-
+
def test_logging_filter_exclude(self):
env = {'NOSE_LOGFILTER': '-foo,-bar'}
c = LogCapture()
eq_(2, len(records))
assert records[0].startswith('foobar.something:'), records[0]
assert records[1].startswith('abara:'), records[1]
-
+
def test_logging_filter_exclude_and_include(self):
env = {'NOSE_LOGFILTER': 'foo,-foo.bar'}
c = LogCapture()
else:
self.fail("eq_(1, 0) did not raise assertion error")
+ def test_eq_unittest_flag(self):
+ """Make sure eq_() is in a namespace that has __unittest = 1.
+
+ This lets tracebacks refrain from descending into the eq_ frame.
+
+ """
+ assert '__unittest' in eq_.func_globals
+
+ def test_istest_unittest_flag(self):
+ """Make sure istest() is not in a namespace that has __unittest = 1.
+
+ That is, make sure our __unittest labeling didn't get overzealous.
+
+ """
+ assert '__unittest' not in istest.func_globals
+
def test_raises(self):
from nose.case import FunctionTestCase
-
+
def raise_typeerror():
raise TypeError("foo")
def noraise():
pass
-
+
raise_good = raises(TypeError)(raise_typeerror)
raise_other = raises(ValueError)(raise_typeerror)
no_raise = raises(TypeError)(noraise)
tc = FunctionTestCase(raise_good)
self.assertEqual(str(tc), "%s.%s" % (__name__, 'raise_typeerror'))
-
+
raise_good()
try:
raise_other()
def too_slow():
time.sleep(.3)
too_slow = timed(.2)(too_slow)
-
+
def quick():
time.sleep(.1)
quick = timed(.2)(quick)
def f1():
pass
-
+
f2 = make_decorator(func)(f1)
-
+
assert f2.setup == 'setup'
assert f2.teardown == 'teardown'
def test_nested_decorators(self):
from nose.tools import raises, timed, with_setup
-
+
def test():
pass
-
+
def foo():
pass
-
+
test = with_setup(foo, foo)(test)
test = timed(1.0)(test)
test = raises(TypeError)(test)
def test_decorator_func_sorting(self):
from nose.tools import raises, timed, with_setup
from nose.util import func_lineno
-
+
def test1():
pass
self.assertEqual(func_lineno(test1), test1_pos)
self.assertEqual(func_lineno(test2), test2_pos)
self.assertEqual(func_lineno(test3), test3_pos)
-
+
def test_testcase_funcs(self):
import nose.tools
tc_asserts = [ at for at in dir(nose.tools)
if at.startswith('assert_') ]
print tc_asserts
-
+
# FIXME: not sure which of these are in all supported
# versions of python
assert 'assert_raises' in tc_asserts
from nose.tools import with_setup
from nose.case import FunctionTestCase
from unittest import TestResult
-
+
called = []
def test():
called.append('test')
def test3():
called.append('test3')
-
+
def s1():
called.append('s1')
def s3():
called.append('s3')
-
+
def t1():
called.append('t1')
def t3():
called.append('t3')
-
+
ws1 = with_setup(s1, t1)(test)
case1 = FunctionTestCase(ws1)
case1(TestResult())
case3(TestResult())
self.assertEqual(called, ['s1', 's2', 's3',
'test3', 't3', 't2', 't1'])
-
+
if __name__ == '__main__':
unittest.main()