[bumpversion]
-current_version = 64.0.0
+current_version = 63.1.0
commit = True
tag = True
distutils:
- local
python:
- - 3.7-dev
- - 3.10-dev
+ # Build on pre-releases until stable, then stable releases.
+ # actions/setup-python#213
+ - ~3.7.0-0
+ - ~3.10.0-0
# disabled due to #3365
- # - 3.11-dev
+ # - ~3.11.0-0
- pypy-3.7
platform:
- ubuntu-latest
- name: Setup Python
uses: actions/setup-python@v3
with:
- python-version: "3.11-dev"
+ python-version: "3.10"
- name: Install tox
run: |
python -m pip install tox
-v64.0.0
--------
-
-
-Deprecations
-^^^^^^^^^^^^
-* #3380: Passing some types of parameters via ``--global-option`` to setuptools PEP 517/PEP 660 backend
- is now considered deprecated. The user can pass the same arbitrary parameter
- via ``--build-option`` (``--global-option`` is now reserved for flags like
- ``--verbose`` or ``--quiet``).
-
- Both ``--build-option`` and ``--global-option`` are supported as a **transitional** effort (a.k.a. "escape hatch").
- In the future a proper list of allowed ``config_settings`` may be created.
-
-Breaking Changes
-^^^^^^^^^^^^^^^^
-* #3265: Added implementation for *editable install* hooks (PEP 660).
-
- By default the users will experience a *lenient* behavior which prioritises
- the ability of the users of changing the distributed packages (e.g. adding new
- files or removing old ones).
- But they can also opt into a *strict* mode, which will try to replicate as much
- as possible the behavior of the package as if it would be normally installed by
- end users. The *strict* editable installation is not able to detect if files
- are added or removed from the project (a new installation is required).
-
- .. important::
- The *editable* aspect of the *editable install* supported this implementation
- is restricted to the Python modules contained in the distributed package.
- Changes in binary extensions (e.g. C/C++), entry-point definitions,
- dependencies, metadata, datafiles, etc may require a new installation.
-
-Changes
-^^^^^^^
-* #3380: Improved the handling of the ``config_settings`` parameter in both PEP 517 and
- PEP 660 interfaces:
-
- - It is possible now to pass both ``--global-option`` and ``--build-option``.
- As discussed in #1928, arbitrary arguments passed via ``--global-option``
- should be placed before the name of the setuptools' internal command, while
- ``--build-option`` should come after.
-
- - Users can pass ``editable-mode=strict`` to select a strict behaviour for the
- editable installation.
-* #3392: Exposed ``get_output_mapping()`` from ``build_py`` and ``build_ext``
- subcommands. This interface is reserved for the use of ``setuptools``
- Extensions and third part packages are explicitly disallowed to calling it.
- However, any implementation overwriting ``build_py`` or ``build_ext`` are
- required to honour this interface.
-* #3412: Added ability of collecting source files from custom build sub-commands to
- ``sdist``. This allows plugins and customization scripts to automatically
- add required source files in the source distribution.
-* #3414: Users can *temporarily* specify an environment variable
- ``SETUPTOOLS_ENABLE_FEATURE=legacy-editable`` as a escape hatch for the
- :pep:`660` behavior. This setting is **transitional** and may be removed in the
- future.
-* #3484: Added *transient* ``compat`` mode to editable installs.
- This more will be temporarily available (to facilitate the transition period)
- for those that want to emulate the behavior of the ``develop`` command
- (in terms of what is added to ``sys.path``).
- This mode is provided "as is", with limited support, and will be removed in
- future versions of ``setuptools``.
-
-Documentation changes
-^^^^^^^^^^^^^^^^^^^^^
-* #3414: Updated :doc:`Development Mode </userguide/development_mode>` to reflect on the
- implementation of :pep:`660`.
-
-
-v63.4.3
--------
-
-
-Misc
-^^^^
-* #3496: Update to pypa/distutils@b65aa40 including more robust support for library/include dir handling in msvccompiler (pypa/distutils#153) and test suite improvements.
-
-
-v63.4.2
--------
-
-
-Misc
-^^^^
-* #3453: Bump vendored version of :pypi:`pyparsing` to 3.0.9.
-* #3481: Add warning for potential ``install_requires`` and ``extras_require``
- misconfiguration in ``setup.cfg``
-* #3487: Modified ``pyproject.toml`` validation exception handling to
- make relevant debugging information easier to spot.
-
-
-v63.4.1
--------
-
-
-Misc
-^^^^
-* #3482: Sync with pypa/distutils@274758f1c02048d295efdbc13d2f88d9923547f8, restoring compatibility shim in bdist.format_commands.
-
-
-v63.4.0
--------
-
-
-Changes
-^^^^^^^
-* #2971: ``upload_docs`` command is deprecated once again.
-
-Documentation changes
-^^^^^^^^^^^^^^^^^^^^^
-* #3443: Installed ``sphinx-hoverxref`` extension to show tooltips on internal an external references.
- -- by :user:`humitos`
-* #3444: Installed ``sphinx-notfound-page`` extension to generate nice 404 pages.
- -- by :user:`humitos`
-
-Misc
-^^^^
-* #3480: Merge with pypa/distutils@c397f4c
-
-
-v63.3.0
--------
-
-
-Changes
-^^^^^^^
-* #3475: Merge with pypa/distutils@129480b, including substantial delinting and cleanup, some refactoring around compiler logic, better messaging in cygwincompiler (pypa/distutils#161).
-
-
-v63.2.0
--------
-
-
-Changes
-^^^^^^^
-* #3395: Included a performance optimization: ``setuptools.build_meta`` no longer tries
- to :func:`compile` the setup script code before :func:`exec`-ing it.
-
-Misc
-^^^^
-* #3435: Corrected issue in macOS framework builds on Python 3.9 not installed by homebrew (pypa/distutils#158).
-
-
v63.1.0
-------
^^^^^^^^^^^^^^^^
* #3421: Drop setuptools' support for installing an entry point's extra requirements at load time:
- the functionality has been broken since v60.8.0.
- - the mechanism to do so is deprecated (``fetch_build_eggs``).
+ - the mechanism to do so is deprecated (``fetch_build_eggs``).
- that use case (e.g. a custom command class entrypoint) is covered by making sure the necessary build requirements are declared.
Documentation changes
--- /dev/null
+Added implementation for *editable install* hooks (PEP 660) - **beta** stage.
+
+- The user will be able to select between two distinct behaviors:
+
+  - *lax*, which prioritises the ability of users to change the
+    distributed packages (e.g. adding new files or removing old ones)
+
+  - *strict*, which will try to replicate as much as possible the behavior of
+    the package as if it were normally installed by end users.
+    The *strict* editable installation is not able to detect if files are
+    added or removed from the project (a new installation is required).
+
+.. important::
+   The *editable* aspect of the *editable install* supported by this implementation
+   is restricted to the Python modules contained in the distributed package.
+   Changes in binary extensions (e.g. C/C++), entry-point definitions,
+   dependencies, metadata, datafiles, etc. require a new installation.
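+
+As a minimal sketch only (installers such as ``pip`` normally call the hook for
+you; the temporary directory handling below is illustrative and assumes the
+current working directory is the project root), the *strict* behavior could be
+requested directly through the :pep:`660` hook:
+
+.. code-block:: python
+
+    import tempfile
+
+    from setuptools import build_meta
+
+    # Build an editable wheel, opting into the strict behavior via the
+    # ``editable-mode`` setting described above.
+    with tempfile.TemporaryDirectory() as tmp:
+        wheel = build_meta.build_editable(
+            tmp, config_settings={"editable-mode": "strict"}
+        )
+        print("editable wheel written:", wheel)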
--- /dev/null
+Improved the handling of the ``config_settings`` parameter in both PEP 517 and
+PEP 660 interfaces:
+
+- It is now possible to pass both ``--global-option`` and ``--build-option``.
+  As discussed in #1928, arbitrary arguments passed via ``--global-option``
+  should be placed before the name of the setuptools internal command, while
+  ``--build-option`` should come after.
+
+- Users can pass ``editable-mode=strict`` to select a strict behaviour for the
+ editable installation.
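+
+For illustration only (this is not how installers organize their internals, and
+the option values below are placeholders), the backend ultimately receives these
+flags as a ``config_settings`` dictionary when a :pep:`517` hook is invoked from
+the project root:
+
+.. code-block:: python
+
+    import os
+
+    from setuptools import build_meta
+
+    os.makedirs("dist", exist_ok=True)
+    config_settings = {
+        # flags placed before the internal command name
+        "--global-option": ["--quiet"],
+        # arguments handed to the command itself (placeholder values)
+        "--build-option": ["--python-tag", "py3"],
+    }
+    build_meta.build_wheel("dist", config_settings=config_settings)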
--- /dev/null
+Passing some types of parameters via ``--global-option`` to the setuptools PEP 517/PEP 660 backend
+is now considered deprecated. The user can pass the same arbitrary parameter
+via ``--build-option`` (``--global-option`` is now reserved for flags like
+``--verbose`` or ``--quiet``).
+
+Both ``--build-option`` and ``--global-option`` are supported as a **transitional** effort (a.k.a. "escape hatch").
+In the future a proper list of allowed ``config_settings`` may be created.
--- /dev/null
+Exposed ``get_output_mapping()`` from ``build_py`` and ``build_ext``
+subcommands. This interface is reserved for the use of ``setuptools``;
+extensions and third-party packages are explicitly disallowed from calling it.
+However, any implementation overriding ``build_py`` or ``build_ext`` is
+required to honour this interface.
--- /dev/null
+Added the ability to collect source files from custom build sub-commands into
+``sdist``. This allows plugins and customization scripts to automatically
+add required source files to the source distribution.
--- /dev/null
+Users can *temporarily* specify an environment variable
+``SETUPTOOLS_ENABLE_FEATURE=legacy-editable`` as an escape hatch for the
+:pep:`660` behavior. This setting is **transitional** and may be removed in the
+future.
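+
+For illustration, the variable (named exactly as in this entry) can be exported
+for a single installer run; a small sketch using :mod:`subprocess`:
+
+.. code-block:: python
+
+    import os
+    import subprocess
+    import sys
+
+    # Enable the escape hatch only for this one ``pip`` invocation.
+    env = dict(os.environ, SETUPTOOLS_ENABLE_FEATURE="legacy-editable")
+    subprocess.run(
+        [sys.executable, "-m", "pip", "install", "-e", "."],
+        env=env,
+        check=True,
+    )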
--- /dev/null
+Updated :doc:`Development Mode </userguide/development_mode>` to reflect the
+implementation of :pep:`660`.
.. code-block:: toml
[build-system]
- requires = ["setuptools"]
+ requires = ["setuptools", "wheel"]
build-backend = "backend"
backend-path = ["_custom_build"]
),
})
-# Support tooltips on references
-extensions += ['hoverxref.extension']
-hoverxref_auto_ref = True
-hoverxref_intersphinx = [
- 'python',
- 'pip',
- 'build',
- 'PyPUG',
- 'packaging',
- 'twine',
- 'importlib-resources',
-]
-
# Add support for linking usernames
github_url = 'https://github.com'
github_repo_org = 'pypa'
('py:exc', 'LibError'), # undocumented
('py:exc', 'LinkError'), # undocumented
('py:exc', 'PreprocessError'), # undocumented
- ('py:exc', 'setuptools.errors.PlatformError'), # sphinx cannot find it
('py:func', 'distutils.CCompiler.new_compiler'), # undocumented
# undocumented:
('py:func', 'distutils.dist.DistributionMetadata.read_pkg_file'),
extensions += ['sphinx-favicon']
html_static_path = ['images'] # should contain the folder with icons
-# Add support for nice Not Found 404 pages
-extensions += ['notfound.extension']
-
# List of dicts with <link> HTML attributes
# static-file points to files in the html_static_path (href is computed)
favicons = [
* ``setuptools`` projects without ``setup.py`` (e.g., ``setup.cfg``-only)::
- python -c "from setuptools import setup; setup()" --help
+      python -c "import setuptools; setuptools.setup()" --help
* ``distutils`` projects (with a ``setup.py`` importing ``distutils``)::
[build-system]
requires = [
"setuptools >= 40.9.0",
+ "wheel",
]
build-backend = "setuptools.build_meta"
1) build system requirement, 2) required dependency and 3) optional
dependency.
-Each dependency, regardless of type, needs to be specified according to :pep:`508`
-and :pep:`440`.
-This allows adding version :pep:`range restrictions <440#version-specifiers>`
-and :ref:`environment markers <environment-markers>`.
+.. attention::
+ Each dependency, regardless of type, needs to be specified according to :pep:`508`.
+ This allows adding version :pep:`range restrictions <440#version-specifiers>`
+ and :ref:`environment markers <environment-markers>`.
+   Please note, however, that public package indexes, such as `PyPI`_,
+ might not accept packages that declare dependencies using
+ :pep:`direct URLs <440#direct-references>`.
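+
+For example (package names below are placeholders), a version range and an
+environment marker can be combined in a single :pep:`508` string:
+
+.. code-block:: python
+
+    from setuptools import setup
+
+    setup(
+        install_requires=[
+            "package-a>=1.2,<2",                   # version range restriction
+            'package-b; python_version < "3.10"',  # environment marker
+        ],
+    )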
.. _build-requires:
to implement custom detection logic.
-Direct URL dependencies
------------------------
-
-.. attention::
- `PyPI`_ and other standards-conformant package indices **do not** accept
- packages that declare dependencies using direct URLs. ``pip`` will accept them
- when installing packages from the local filesystem or from another URL,
- however.
-
-Dependencies that are not available on a package index but can be downloaded
-elsewhere in the form of a source repository or archive may be specified
-using a variant of :pep:`PEP 440's direct references <440#direct-references>`:
-
-.. tab:: pyproject.toml
-
- .. code-block:: toml
-
- [project]
- # ...
- dependencies = [
- "Package-A @ git+https://example.net/package-a.git@main",
- "Package-B @ https://example.net/archives/package-b.whl",
- ]
-
-.. tab:: setup.cfg
-
- .. code-block:: ini
-
- [options]
- #...
- install_requires =
- Package-A @ git+https://example.net/package-a.git@main
- Package-B @ https://example.net/archives/package-b.whl
-
-.. tab:: setup.py
-
- .. code-block:: python
-
- setup(
- install_requires=[
- "Package-A @ git+https://example.net/package-a.git@main",
- "Package-B @ https://example.net/archives/package-b.whl",
- ],
- ...,
- )
-
-For source repository URLs, a list of supported protocols and VCS-specific
-features such as selecting certain branches or tags can be found in pip's
-documentation on `VCS support <https://pip.pypa.io/en/latest/topics/vcs-support/>`_.
-Supported formats for archive URLs are sdists and wheels.
-
-
Optional dependencies
=====================
Setuptools allows you to declare dependencies that are not installed by default.
which other components can refer and have them installed.
A use case for this approach is that other package can use this "extra" for their
-own dependencies. For example, if ``Package-B`` needs ``Package-A`` with PDF support
+own dependencies. For example, if ``Package-B`` needs ``Package-A`` with PDF support
installed, it might declare the dependency like this:
.. tab:: pyproject.toml
.. admonition:: Virtual Environments
- You can think about virtual environments as "isolated Python runtime deployments"
+   You can think of virtual environments as "isolated Python runtime deployments"
that allow users to install different sets of libraries and tools without
messing with the global behaviour of the system.
- They are a safe way of testing new projects and can be created easily
+   They are a safe way of testing new projects and can be created easily
with the :mod:`venv` module from the standard library.
Please note however that depending on your operating system or distribution,
.. note::
.. versionadded:: v64.0.0
- Added new *strict* mode for editable installations.
- The exact details of how this mode is implemented may vary.
+ *Strict* mode implemented as **EXPERIMENTAL**.
Limitations
</userguide/entry_point>` to work properly.
- *Strict* editable installs require the file system to support
either :wiki:`symbolic <symbolic link>` or :wiki:`hard links <hard link>`.
- This installation mode might also generate auxiliary files under the project directory.
-- There is *no guarantee* that the editable installation will be performed
- using a specific technique. Depending on each project, ``setuptools`` may
- select a different approach to ensure the package is importable at runtime.
-- There is *no guarantee* that files outside the top-level package directory
- will be accessible after an editable install.
-- There is *no guarantee* that attributes like ``__path__`` or ``__file__``
- will correspond to the exact location of the original files (e.g.,
- ``setuptools`` might employ file links to perform the editable installation).
- Users are encouraged to use tools like :mod:`importlib.resources` or
- :mod:`importlib.metadata` when trying to access package files directly.
- Editable installations may not work with
:doc:`namespaces created with pkgutil or pkg_resouces
<PyPUG:guides/packaging-namespace-packages>`.
- Please use :pep:`420`-style implicit namespaces [#namespaces]_.
+ Please use :pep:`420`-style implicit namespaces.
- Support for :pep:`420`-style implicit namespace packages for
projects structured using :ref:`flat-layout` is still **experimental**.
If you experience problems, you can try converting your package structure
---------------
If your project is not compatible with the new "editable installs" or you wish
-to replicate the legacy behavior, for the time being you can also perform the
-installation in the ``compat`` mode:
-
-.. code-block:: bash
-
- pip install -e . --config-settings editable_mode=compat
-
-This installation mode will try to emulate how ``python setup.py develop``
-works (still within the context of :pep:`660`).
-
-.. warning::
- The ``compat`` mode is *transitional* and will be removed in
- future versions of ``setuptools``, it exists only to help during the
- migration period.
- Also note that support for this mode is limited:
- it is safe to assume that the ``compat`` mode is offered "as is", and
- improvements are unlikely to be implemented.
- Users are encouraged to try out the new editable installation techniques
- and make the necessary adaptations.
-
-If the ``compat`` mode does not work for you, you can also disable the
-:pep:`editable install <660>` hooks in ``setuptools`` by setting an environment
-variable:
+to use the legacy behavior (that mimics the old and deprecated
+``python setup.py develop`` command), you can set an environment variable:
.. code-block::
SETUPTOOLS_USE_FEATURE="legacy-editable"
-
-This *may* cause the installer (e.g. ``pip``) to effectively run the "legacy"
-installation command: ``python setup.py develop`` [#installer]_.
-
-
-How editable installations work?
---------------------------------
-
-*Advanced topic*
-
-There are many techniques that can be used to expose packages under development
-in such a way that they are available as if they were installed.
-Depending on the project file structure and the selected mode, ``setuptools``
-will choose one of these approaches for the editable installation [#criteria]_.
-
-A non-exhaustive list of implementation mechanisms is presented below.
-More information is available on the text of :pep:`PEP 660 <660#what-to-put-in-the-wheel>`.
-
-- A static ``.pth`` file [#static_pth]_ can be added to one of the directories
- listed in :func:`site.getsitepackages` or :func:`site.getusersitepackages` to
- extend :obj:`sys.path`.
-- A directory containing a *farm of file links* that mimic the
- project structure and point to the original files can be employed.
- This directory can then be added to :obj:`sys.path` using a static ``.pth`` file.
-- A dynamic ``.pth`` file [#dynamic_pth]_ can also be used to install an
- "import :term:`finder`" (:obj:`~importlib.abc.MetaPathFinder` or
- :obj:`~importlib.abc.PathEntryFinder`) that will hook into Python's
- :doc:`import system <python:reference/import>` machinery.
-
-.. attention::
- ``Setuptools`` offers **no guarantee** of which technique will be used to
- perform an editable installation. This will vary from project to project
- and may change depending on the specific version of ``setuptools`` being
- used.
-
-
-----
-
-.. rubric:: Notes
-
-.. [#namespaces]
- You *may* be able to use *strict* editable installations with namespace
- packages created with ``pkgutil`` or ``pkg_namespaces``, however this is not
- officially supported.
-
-.. [#installer]
- For this workaround to work, the installer tool needs to support legacy
- editable installations. (Future versions of ``pip``, for example, may drop
- support for this feature).
-
-.. [#criteria]
- ``setuptools`` strives to find a balance between allowing the user to see
- the effects of project files being edited while still trying to keep the
- editable installation as similar as possible to a regular installation.
-
-.. [#static_pth]
- i.e., a ``.pth`` file where each line correspond to a path that should be
- added to :obj:`sys.path`. See :mod:`Site-specific configuration hook <site>`.
-
-.. [#dynamic_pth]
- i.e., a ``.pth`` file that starts where each line starts with an ``import``
- statement and executes arbitrary Python code. See :mod:`Site-specific
- configuration hook <site>`.
to validate the ``setup()`` argument, if it's supplied. The ``Distribution``
object will have the initial value of the attribute set to ``None``, and the
validation function will only be called if the ``setup()`` call sets it to
-a non-``None`` value. Here's an example validation function::
+a non-``None`` value. Here's an example validation function::
def assert_bool(dist, attr, value):
"""Verify that value is True, False, 0, or 1"""
if bool(value) != value:
- raise SetupError(
+        raise SetupError(
"%r must be a boolean value (got %r)" % (attr,value)
)
Your function should accept three arguments: the ``Distribution`` object,
the attribute name, and the attribute value. It should raise a
``SetupError`` (from the ``setuptools.errors`` module) if the argument
-is invalid. Remember, your function will only be called with non-``None`` values,
-and the default value of arguments defined this way is always ``None``. So, your
+is invalid. Remember, your function will only be called with non-``None`` values,
+and the default value of arguments defined this way is always ``None``. So, your
commands should always be prepared for the possibility that the attribute will
be ``None`` when they access it later.
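
For completeness, here is a minimal sketch of how such a validator is usually
registered by a plugin (the project, module, and keyword names below are
hypothetical):

.. code-block:: python

    # setup.py of the *plugin* distribution
    from setuptools import setup

    setup(
        name="example-plugin",
        entry_points={
            # Associates the new ``setup()`` keyword with its validation function
            "distutils.setup_keywords": [
                "use_feature = example_plugin.hooks:assert_bool",
            ],
        },
    )
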
Customizing Distribution Options
--------------------------------
-Plugins may wish to extend or alter the options on a ``Distribution`` object to
+Plugins may wish to extend or alter the options on a ``Distribution`` object to
suit the purposes of that project. For example, a tool that infers the
``Distribution.version`` from SCM-metadata may need to hook into the
option finalization. To enable this feature, Setuptools offers an entry
-point ``setuptools.finalize_distribution_options``. That entry point must
-be a callable taking one argument (the ``Distribution`` instance).
+point ``setuptools.finalize_distribution_options``. That entry point must
+be a callable taking one argument (the ``Distribution`` instance).
If the callable has an ``.order`` property, that value will be used to
determine the order in which the hook is called. Lower numbers are called
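
A hypothetical sketch of such a hook (the names and the ``order`` value are
placeholders, and the version logic is illustrative only):

.. code-block:: python

    # example_plugin/hooks.py -- registered under the
    # ``setuptools.finalize_distribution_options`` entry-point group.
    def finalize(dist):
        """Fill in ``dist.metadata.version`` if nothing else provided it."""
        if dist.metadata.version is None:
            dist.metadata.version = "0.0.1.dev0"  # placeholder inference

    # Optional: influences when this hook runs relative to other plugins.
    finalize.order = 100
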
[tool.setuptools.packages]
find = {} # Scanning implicit namespaces is active by default
# OR
- find = {namespaces = false} # Disable implicit namespaces
+    find = {namespaces = false}  # Disable implicit namespaces
Finding simple packages
.. note:: New in 61.0.0
.. important::
- If compatibility with legacy builds or versions of tools that don't support
- certain packaging standards (e.g. :pep:`517` or :pep:`660`), a simple ``setup.py``
- script can be added to your project [#setupcfg-caveats]_
- (while keeping the configuration in ``pyproject.toml``):
+ For the time being, ``pip`` still might require a ``setup.py`` file
+ to support :doc:`editable installs <pip:cli/pip_install>`.
+
+ A simple script will suffice, for example:
.. code-block:: python
.. rubric:: Notes
-.. [#setupcfg-caveats] ``pip`` may allow editable install only with ``pyproject.toml``
- and ``setup.cfg``. However, this behavior may not be consistent over various ``pip``
- versions and other packaging-related tools
- (``setup.py`` is more reliable on those scenarios).
-
.. [#entry-points] Dynamic ``scripts`` and ``gui-scripts`` are a special case.
When resolving these metadata keys, ``setuptools`` will look for
``tool.setuptools.dynamic.entry-points``, and use the values of the
(optional files marked with ``#``)::
mypackage
- ├── pyproject.toml # and/or setup.cfg/setup.py (depending on the configuration method)
+ ├── pyproject.toml
+   |   # setup.cfg or setup.py (depending on the configuration method)
| # README.rst or README.md (a nice description of your package)
| # LICENCE (properly chosen license information, e.g. MIT, BSD-3, GPL-3, MPL-2, etc...)
└── mypackage
# OR
[tool.setuptools.packages.find]
- # All the following settings are optional:
where = ["src"] # ["."] by default
include = ["mypackage*"] # ["*"] by default
exclude = ["mypackage.tests*"] # empty by default
.. code-block:: ini
[options]
- packages = find: # OR `find_namespace:` if you want to use namespaces
+    packages = find: # OR `find_namespace:` if you want to use namespaces
- [options.packages.find] # (always `find` even if `find_namespace:` was used before)
- # This section is optional as well as each of the following options:
- where=src # . by default
- include=mypackage* # * by default
- exclude=mypackage.tests* # empty by default
+    [options.packages.find] # (always `find` even if `find_namespace:` was used before)
+ # This section is optional
+ # Each entry in this section is optional, and if not specified, the default values are:
+ # `where=.`, `include=*` and `exclude=` (empty).
+ include=mypackage*
+ exclude=mypackage.tests*
.. tab:: setup.py [#setup.py]_
setup(
# ...
packages=find_packages(
- # All keyword arguments below are optional:
- where='src', # '.' by default
- include=['mypackage*'], # ['*'] by default
+ where='.',
+ include=['mypackage*'], # ["*"] by default
exclude=['mypackage.tests'], # empty by default
),
# ...
)
When you pass the above information, alongside other necessary information,
-``setuptools`` walks through the directory specified in ``where`` (defaults to ``.``) and filters the packages
+``setuptools`` walks through the directory specified in ``where`` (defaulting
+to the current directory when omitted) and filters the packages
it can find following the ``include`` patterns (defaults to ``*``), then it removes
-those that match the ``exclude`` patterns (defaults to empty) and returns a list of Python packages.
+those that match the ``exclude`` patterns and returns a list of Python packages.
For more details and advanced use, go to :ref:`package_discovery`.
pip install --editable .
-See :doc:`development_mode` for more information.
+This creates a link file in your interpreter's ``site-packages`` directory that
+points to your source code. For more information, see :doc:`development_mode`.
.. tip::
Prior to :ref:`pip v21.1 <pip:v21-1>`, a ``setup.py`` script was
required to be compatible with development mode. With late
- versions of pip, projects without ``setup.py`` may be installed in this mode.
+ versions of pip, ``setup.cfg``-only projects may be installed in this mode.
- If you have a version of ``pip`` older than v21.1 or is using a different
- packaging-related tool that does not support :pep:`660`, you might need to keep a
+ If you are experimenting with :doc:`configuration using pyproject.toml <pyproject_config>`,
+    or have a version of ``pip`` older than v21.1, you might need to keep a
    ``setup.py`` file in your repository if you want to use editable
- installs.
+ installs (for the time being).
A simple script will suffice, for example:
setup()
- You can still keep all the configuration in
- :doc:`pyproject.toml </userguide/pyproject_config>` and/or
- :doc:`setup.cfg </userguide/declarative_config>`
+ You can still keep all the configuration in :doc:`setup.cfg </userguide/declarative_config>`
+ (or :doc:`pyproject.toml </userguide/pyproject_config>`).
Uploading your package to PyPI
+++ /dev/null
-Permission is hereby granted, free of charge, to any person obtaining
-a copy of this software and associated documentation files (the
-"Software"), to deal in the Software without restriction, including
-without limitation the rights to use, copy, modify, merge, publish,
-distribute, sublicense, and/or sell copies of the Software, and to
-permit persons to whom the Software is furnished to do so, subject to
-the following conditions:
-
-The above copyright notice and this permission notice shall be
-included in all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
-CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
-TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
-SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+++ /dev/null
-Metadata-Version: 2.1
-Name: pyparsing
-Version: 3.0.9
-Summary: pyparsing module - Classes and methods to define and execute parsing grammars
-Author-email: Paul McGuire <ptmcg.gm+pyparsing@gmail.com>
-Requires-Python: >=3.6.8
-Description-Content-Type: text/x-rst
-Classifier: Development Status :: 5 - Production/Stable
-Classifier: Intended Audience :: Developers
-Classifier: Intended Audience :: Information Technology
-Classifier: License :: OSI Approved :: MIT License
-Classifier: Operating System :: OS Independent
-Classifier: Programming Language :: Python
-Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.6
-Classifier: Programming Language :: Python :: 3.7
-Classifier: Programming Language :: Python :: 3.8
-Classifier: Programming Language :: Python :: 3.9
-Classifier: Programming Language :: Python :: 3.10
-Classifier: Programming Language :: Python :: 3 :: Only
-Classifier: Programming Language :: Python :: Implementation :: CPython
-Classifier: Programming Language :: Python :: Implementation :: PyPy
-Classifier: Typing :: Typed
-Requires-Dist: railroad-diagrams ; extra == "diagrams"
-Requires-Dist: jinja2 ; extra == "diagrams"
-Project-URL: Homepage, https://github.com/pyparsing/pyparsing/
-Provides-Extra: diagrams
-
-PyParsing -- A Python Parsing Module
-====================================
-
-|Build Status| |Coverage|
-
-Introduction
-============
-
-The pyparsing module is an alternative approach to creating and
-executing simple grammars, vs. the traditional lex/yacc approach, or the
-use of regular expressions. The pyparsing module provides a library of
-classes that client code uses to construct the grammar directly in
-Python code.
-
-*[Since first writing this description of pyparsing in late 2003, this
-technique for developing parsers has become more widespread, under the
-name Parsing Expression Grammars - PEGs. See more information on PEGs*
-`here <https://en.wikipedia.org/wiki/Parsing_expression_grammar>`__
-*.]*
-
-Here is a program to parse ``"Hello, World!"`` (or any greeting of the form
-``"salutation, addressee!"``):
-
-.. code:: python
-
- from pyparsing import Word, alphas
- greet = Word(alphas) + "," + Word(alphas) + "!"
- hello = "Hello, World!"
- print(hello, "->", greet.parseString(hello))
-
-The program outputs the following::
-
- Hello, World! -> ['Hello', ',', 'World', '!']
-
-The Python representation of the grammar is quite readable, owing to the
-self-explanatory class names, and the use of '+', '|' and '^' operator
-definitions.
-
-The parsed results returned from ``parseString()`` is a collection of type
-``ParseResults``, which can be accessed as a
-nested list, a dictionary, or an object with named attributes.
-
-The pyparsing module handles some of the problems that are typically
-vexing when writing text parsers:
-
-- extra or missing whitespace (the above program will also handle ``"Hello,World!"``, ``"Hello , World !"``, etc.)
-- quoted strings
-- embedded comments
-
-The examples directory includes a simple SQL parser, simple CORBA IDL
-parser, a config file parser, a chemical formula parser, and a four-
-function algebraic notation parser, among many others.
-
-Documentation
-=============
-
-There are many examples in the online docstrings of the classes
-and methods in pyparsing. You can find them compiled into `online docs <https://pyparsing-docs.readthedocs.io/en/latest/>`__. Additional
-documentation resources and project info are listed in the online
-`GitHub wiki <https://github.com/pyparsing/pyparsing/wiki>`__. An
-entire directory of examples can be found `here <https://github.com/pyparsing/pyparsing/tree/master/examples>`__.
-
-License
-=======
-
-MIT License. See header of the `pyparsing.py <https://github.com/pyparsing/pyparsing/blob/master/pyparsing/__init__.py#L1-L23>`__ file.
-
-History
-=======
-
-See `CHANGES <https://github.com/pyparsing/pyparsing/blob/master/CHANGES>`__ file.
-
-.. |Build Status| image:: https://github.com/pyparsing/pyparsing/actions/workflows/ci.yml/badge.svg
- :target: https://github.com/pyparsing/pyparsing/actions/workflows/ci.yml
-.. |Coverage| image:: https://codecov.io/gh/pyparsing/pyparsing/branch/master/graph/badge.svg
- :target: https://codecov.io/gh/pyparsing/pyparsing
-
+++ /dev/null
-pyparsing-3.0.9.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
-pyparsing-3.0.9.dist-info/LICENSE,sha256=ENUSChaAWAT_2otojCIL-06POXQbVzIGBNRVowngGXI,1023
-pyparsing-3.0.9.dist-info/METADATA,sha256=h_fpm9rwvgZsE8v5YNF4IAo-IpaFWCOfUEm5MMByIiM,4207
-pyparsing-3.0.9.dist-info/RECORD,,
-pyparsing-3.0.9.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-pyparsing-3.0.9.dist-info/WHEEL,sha256=jPMR_Dzkc4X4icQtmz81lnNY_kAsfog7ry7qoRvYLXw,81
-pyparsing/__init__.py,sha256=52QH3lgPbJhba0estckoGPHRH8JvQSSCGoWiEn2m0bU,9159
-pyparsing/__pycache__/__init__.cpython-38.pyc,,
-pyparsing/__pycache__/actions.cpython-38.pyc,,
-pyparsing/__pycache__/common.cpython-38.pyc,,
-pyparsing/__pycache__/core.cpython-38.pyc,,
-pyparsing/__pycache__/exceptions.cpython-38.pyc,,
-pyparsing/__pycache__/helpers.cpython-38.pyc,,
-pyparsing/__pycache__/results.cpython-38.pyc,,
-pyparsing/__pycache__/testing.cpython-38.pyc,,
-pyparsing/__pycache__/unicode.cpython-38.pyc,,
-pyparsing/__pycache__/util.cpython-38.pyc,,
-pyparsing/actions.py,sha256=wU9i32e0y1ymxKE3OUwSHO-SFIrt1h_wv6Ws0GQjpNU,6426
-pyparsing/common.py,sha256=lFL97ooIeR75CmW5hjURZqwDCTgruqltcTCZ-ulLO2Q,12936
-pyparsing/core.py,sha256=u8GptQE_H6wMkl8OZhxeK1aAPIDXXNgwdShORBwBVS4,213310
-pyparsing/diagram/__init__.py,sha256=f_EfxahqrdkRVahmTwLJXkZ9EEDKNd-O7lBbpJYlE1g,23668
-pyparsing/diagram/__pycache__/__init__.cpython-38.pyc,,
-pyparsing/exceptions.py,sha256=3LbSafD32NYb1Tzt85GHNkhEAU1eZkTtNSk24cPMemo,9023
-pyparsing/helpers.py,sha256=QpUOjW0-psvueMwWb9bQpU2noqKCv98_wnw1VSzSdVo,39129
-pyparsing/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-pyparsing/results.py,sha256=HgNvWVXBdQP-Q6PtJfoCEeOJk2nwEvG-2KVKC5sGA30,25341
-pyparsing/testing.py,sha256=7tu4Abp4uSeJV0N_yEPRmmNUhpd18ZQP3CrX41DM814,13402
-pyparsing/unicode.py,sha256=fwuhMj30SQ165Cv7HJpu-rSxGbRm93kN9L4Ei7VGc1Y,10787
-pyparsing/util.py,sha256=kq772O5YSeXOSdP-M31EWpbH_ayj7BMHImBYo9xPD5M,6805
+++ /dev/null
-Wheel-Version: 1.0
-Generator: flit 3.6.0
-Root-Is-Purelib: true
-Tag: py3-none-any
)
-__version_info__ = version_info(3, 0, 9, "final", 0)
-__version_time__ = "05 May 2022 07:02 UTC"
+__version_info__ = version_info(3, 0, 8, "final", 0)
+__version_time__ = "09 Apr 2022 23:29 UTC"
__version__ = __version_info__.__version__
__versionTime__ = __version_time__
__author__ = "Paul McGuire <ptmcg.gm+pyparsing@gmail.com>"
na = one_of("N/A NA").set_parse_action(replace_with(math.nan))
term = na | num
- term[1, ...].parse_string("324 234 N/A 234") # -> [324, 234, nan, 234]
+ OneOrMore(term).parse_string("324 234 N/A 234") # -> [324, 234, nan, 234]
"""
return lambda s, l, t: [repl_str]
# core.py
#
import os
-import typing
from typing import (
+ Optional as OptionalType,
+ Iterable as IterableType,
NamedTuple,
Union,
Callable,
List,
TextIO,
Set,
+ Dict as DictType,
Sequence,
)
from abc import ABC, abstractmethod
def _should_enable_warnings(
- cmd_line_warn_options: typing.Iterable[str], warn_env_var: typing.Optional[str]
+ cmd_line_warn_options: IterableType[str], warn_env_var: OptionalType[str]
) -> bool:
enable = bool(warn_env_var)
for warn_opt in cmd_line_warn_options:
DEFAULT_WHITE_CHARS: str = " \n\t\r"
verbose_stacktrace: bool = False
- _literalStringClass: typing.Optional[type] = None
+ _literalStringClass: OptionalType[type] = None
@staticmethod
def set_default_whitespace_chars(chars: str) -> None:
Example::
# default whitespace chars are space, <TAB> and newline
- Word(alphas)[1, ...].parse_string("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl']
+ OneOrMore(Word(alphas)).parse_string("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl']
# change to just treat newline as significant
ParserElement.set_default_whitespace_chars(" \t")
- Word(alphas)[1, ...].parse_string("abc def\nghi jkl") # -> ['abc', 'def']
+ OneOrMore(Word(alphas)).parse_string("abc def\nghi jkl") # -> ['abc', 'def']
"""
ParserElement.DEFAULT_WHITE_CHARS = chars
ParserElement._literalStringClass = cls
class DebugActions(NamedTuple):
- debug_try: typing.Optional[DebugStartAction]
- debug_match: typing.Optional[DebugSuccessAction]
- debug_fail: typing.Optional[DebugExceptionAction]
+ debug_try: OptionalType[DebugStartAction]
+ debug_match: OptionalType[DebugSuccessAction]
+ debug_fail: OptionalType[DebugExceptionAction]
def __init__(self, savelist: bool = False):
self.parseAction: List[ParseAction] = list()
- self.failAction: typing.Optional[ParseFailAction] = None
+ self.failAction: OptionalType[ParseFailAction] = None
self.customName = None
self._defaultName = None
self.resultsName = None
integerK = integer.copy().add_parse_action(lambda toks: toks[0] * 1024) + Suppress("K")
integerM = integer.copy().add_parse_action(lambda toks: toks[0] * 1024 * 1024) + Suppress("M")
- print((integerK | integerM | integer)[1, ...].parse_string("5K 100 640K 256M"))
+ print(OneOrMore(integerK | integerM | integer).parse_string("5K 100 640K 256M"))
prints::
# cache for left-recursion in Forward references
recursion_lock = RLock()
- recursion_memos: typing.Dict[
+ recursion_memos: DictType[
Tuple[int, "Forward", bool], Tuple[int, Union[ParseResults, Exception]]
] = {}
@staticmethod
def enable_left_recursion(
- cache_size_limit: typing.Optional[int] = None, *, force=False
+ cache_size_limit: OptionalType[int] = None, *, force=False
) -> None:
"""
Enables "bounded recursion" parsing, which allows for both direct and indirect
Example::
- patt = Word(alphas)[1, ...]
+ patt = OneOrMore(Word(alphas))
patt.parse_string('ablaj /* comment */ lskjd')
# -> ['ablaj']
# turn on debugging for wd
wd.set_debug()
- term[1, ...].parse_string("abc 123 xyz 890")
+ OneOrMore(term).parse_string("abc 123 xyz 890")
prints::
self,
tests: Union[str, List[str]],
parse_all: bool = True,
- comment: typing.Optional[Union["ParserElement", str]] = "#",
+ comment: OptionalType[Union["ParserElement", str]] = "#",
full_dump: bool = True,
print_results: bool = True,
failure_tests: bool = False,
post_parse: Callable[[str, ParseResults], str] = None,
- file: typing.Optional[TextIO] = None,
+ file: OptionalType[TextIO] = None,
with_line_numbers: bool = False,
*,
parseAll: bool = True,
def __init__(
self,
match_string: str = "",
- ident_chars: typing.Optional[str] = None,
+ ident_chars: OptionalType[str] = None,
caseless: bool = False,
*,
matchString: str = "",
- identChars: typing.Optional[str] = None,
+ identChars: OptionalType[str] = None,
):
super().__init__()
identChars = identChars or ident_chars
Example::
- CaselessLiteral("CMD")[1, ...].parse_string("cmd CMD Cmd10")
+ OneOrMore(CaselessLiteral("CMD")).parse_string("cmd CMD Cmd10")
# -> ['CMD', 'CMD', 'CMD']
(Contrast with example for :class:`CaselessKeyword`.)
Example::
- CaselessKeyword("CMD")[1, ...].parse_string("cmd CMD Cmd10")
+ OneOrMore(CaselessKeyword("CMD")).parse_string("cmd CMD Cmd10")
# -> ['CMD', 'CMD']
(Contrast with example for :class:`CaselessLiteral`.)
def __init__(
self,
match_string: str = "",
- ident_chars: typing.Optional[str] = None,
+ ident_chars: OptionalType[str] = None,
*,
matchString: str = "",
- identChars: typing.Optional[str] = None,
+ identChars: OptionalType[str] = None,
):
identChars = identChars or ident_chars
match_string = matchString or match_string
def __init__(
self,
init_chars: str = "",
- body_chars: typing.Optional[str] = None,
+ body_chars: OptionalType[str] = None,
min: int = 1,
max: int = 0,
exact: int = 0,
as_keyword: bool = False,
- exclude_chars: typing.Optional[str] = None,
+ exclude_chars: OptionalType[str] = None,
*,
- initChars: typing.Optional[str] = None,
- bodyChars: typing.Optional[str] = None,
+ initChars: OptionalType[str] = None,
+ bodyChars: OptionalType[str] = None,
asKeyword: bool = False,
- excludeChars: typing.Optional[str] = None,
+ excludeChars: OptionalType[str] = None,
):
initChars = initChars or init_chars
bodyChars = bodyChars or body_chars
self,
charset: str,
as_keyword: bool = False,
- exclude_chars: typing.Optional[str] = None,
+ exclude_chars: OptionalType[str] = None,
*,
asKeyword: bool = False,
- excludeChars: typing.Optional[str] = None,
+ excludeChars: OptionalType[str] = None,
):
asKeyword = asKeyword or as_keyword
excludeChars = excludeChars or exclude_chars
def __init__(
self,
quote_char: str = "",
- esc_char: typing.Optional[str] = None,
- esc_quote: typing.Optional[str] = None,
+ esc_char: OptionalType[str] = None,
+ esc_quote: OptionalType[str] = None,
multiline: bool = False,
unquote_results: bool = True,
- end_quote_char: typing.Optional[str] = None,
+ end_quote_char: OptionalType[str] = None,
convert_whitespace_escapes: bool = True,
*,
quoteChar: str = "",
- escChar: typing.Optional[str] = None,
- escQuote: typing.Optional[str] = None,
+ escChar: OptionalType[str] = None,
+ escQuote: OptionalType[str] = None,
unquoteResults: bool = True,
- endQuoteChar: typing.Optional[str] = None,
+ endQuoteChar: OptionalType[str] = None,
convertWhitespaceEscapes: bool = True,
):
super().__init__()
post-processing parsed tokens.
"""
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False):
+ def __init__(self, exprs: IterableType[ParserElement], savelist: bool = False):
super().__init__(savelist)
self.exprs: List[ParserElement]
if isinstance(exprs, _generatorType):
Example::
integer = Word(nums)
- name_expr = Word(alphas)[1, ...]
+ name_expr = OneOrMore(Word(alphas))
expr = And([integer("id"), name_expr("name"), integer("age")])
# more easily written as:
def _generateDefaultName(self):
return "-"
- def __init__(
- self, exprs_arg: typing.Iterable[ParserElement], savelist: bool = True
- ):
+ def __init__(self, exprs_arg: IterableType[ParserElement], savelist: bool = True):
exprs: List[ParserElement] = list(exprs_arg)
if exprs and Ellipsis in exprs:
tmp = []
[['123'], ['3.1416'], ['789']]
"""
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False):
+ def __init__(self, exprs: IterableType[ParserElement], savelist: bool = False):
super().__init__(exprs, savelist)
if self.exprs:
self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
print(number.search_string("123 3.1416 789")) # Better -> [['123'], ['3.1416'], ['789']]
"""
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False):
+ def __init__(self, exprs: IterableType[ParserElement], savelist: bool = False):
super().__init__(exprs, savelist)
if self.exprs:
self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
- size: 20
"""
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = True):
+ def __init__(self, exprs: IterableType[ParserElement], savelist: bool = True):
super().__init__(exprs, savelist)
if self.exprs:
self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs)
label = data_word + FollowedBy(':')
attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
- attr_expr[1, ...].parse_string("shape: SQUARE color: BLACK posn: upper left").pprint()
+ OneOrMore(attr_expr).parse_string("shape: SQUARE color: BLACK posn: upper left").pprint()
prints::
"""
def __init__(
- self, expr: Union[ParserElement, str], retreat: typing.Optional[int] = None
+ self, expr: Union[ParserElement, str], retreat: OptionalType[int] = None
):
super().__init__(expr)
self.expr = self.expr().leave_whitespace()
# very crude boolean expression - to support parenthesis groups and
# operation hierarchy, use infix_notation
- boolean_expr = boolean_term + ((AND | OR) + boolean_term)[...]
+ boolean_expr = boolean_term + ZeroOrMore((AND | OR) + boolean_term)
# integers that are followed by "." are actually floats
integer = Word(nums) + ~Char(".")
def __init__(
self,
expr: ParserElement,
- stop_on: typing.Optional[Union[ParserElement, str]] = None,
+ stop_on: OptionalType[Union[ParserElement, str]] = None,
*,
- stopOn: typing.Optional[Union[ParserElement, str]] = None,
+ stopOn: OptionalType[Union[ParserElement, str]] = None,
):
super().__init__(expr)
stopOn = stopOn or stop_on
attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).set_parse_action(' '.join))
text = "shape: SQUARE posn: upper left color: BLACK"
- attr_expr[1, ...].parse_string(text).pprint() # Fail! read 'color' as data instead of next label -> [['shape', 'SQUARE color']]
+ OneOrMore(attr_expr).parse_string(text).pprint() # Fail! read 'color' as data instead of next label -> [['shape', 'SQUARE color']]
# use stop_on attribute for OneOrMore to avoid reading label string as part of the data
attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
def __init__(
self,
expr: ParserElement,
- stop_on: typing.Optional[Union[ParserElement, str]] = None,
+ stop_on: OptionalType[Union[ParserElement, str]] = None,
*,
- stopOn: typing.Optional[Union[ParserElement, str]] = None,
+ stopOn: OptionalType[Union[ParserElement, str]] = None,
):
super().__init__(expr, stopOn=stopOn or stop_on)
self.mayReturnEmpty = True
other: Union[ParserElement, str],
include: bool = False,
ignore: bool = None,
- fail_on: typing.Optional[Union[ParserElement, str]] = None,
+ fail_on: OptionalType[Union[ParserElement, str]] = None,
*,
failOn: Union[ParserElement, str] = None,
):
parser created using ``Forward``.
"""
- def __init__(self, other: typing.Optional[Union[ParserElement, str]] = None):
+ def __init__(self, other: OptionalType[Union[ParserElement, str]] = None):
self.caller_frame = traceback.extract_stack(limit=2)[0]
super().__init__(other, savelist=False)
self.lshift_line = None
join_string: str = "",
adjacent: bool = True,
*,
- joinString: typing.Optional[str] = None,
+ joinString: OptionalType[str] = None,
):
super().__init__(expr)
joinString = joinString if joinString is not None else join_string
attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
# print attributes as plain groups
- print(attr_expr[1, ...].parse_string(text).dump())
+ print(OneOrMore(attr_expr).parse_string(text).dump())
- # instead of OneOrMore(expr), parse using Dict(Group(expr)[1, ...]) - Dict will auto-assign names
- result = Dict(Group(attr_expr)[1, ...]).parse_string(text)
+ # instead of OneOrMore(expr), parse using Dict(OneOrMore(Group(expr))) - Dict will auto-assign names
+ result = Dict(OneOrMore(Group(attr_expr))).parse_string(text)
print(result.dump())
# access named fields as dict entries, or output as dict
source = "a, b, c,d"
wd = Word(alphas)
- wd_list1 = wd + (',' + wd)[...]
+ wd_list1 = wd + ZeroOrMore(',' + wd)
print(wd_list1.parse_string(source))
# often, delimiters that are useful during parsing are just in the
# way afterward - use Suppress to keep them out of the parsed output
- wd_list2 = wd + (Suppress(',') + wd)[...]
+ wd_list2 = wd + ZeroOrMore(Suppress(',') + wd)
print(wd_list2.parse_string(source))
# Skipped text (using '...') can be suppressed as well
def remove_duplicate_chars(tokens):
return ''.join(sorted(set(''.join(tokens))))
- wds = wd[1, ...].set_parse_action(remove_duplicate_chars)
+ wds = OneOrMore(wd).set_parse_action(remove_duplicate_chars)
print(wds.parse_string("slkdjs sld sldd sdlf sdljf"))
prints::
Example (compare the last to example in :class:`ParserElement.transform_string`::
- hex_ints = Word(hexnums)[1, ...].set_parse_action(token_map(int, 16))
+ hex_ints = OneOrMore(Word(hexnums)).set_parse_action(token_map(int, 16))
hex_ints.run_tests('''
00 11 22 aa FF 0a 0d 1a
''')
upperword = Word(alphas).set_parse_action(token_map(str.upper))
- upperword[1, ...].run_tests('''
+ OneOrMore(upperword).run_tests('''
my kingdom for a horse
''')
wd = Word(alphas).set_parse_action(token_map(str.title))
- wd[1, ...].set_parse_action(' '.join).run_tests('''
+ OneOrMore(wd).set_parse_action(' '.join).run_tests('''
now is the winter of our discontent made glorious summer by this sun of york
''')
# build list of built-in expressions, for future reference if a global default value
# gets updated
-_builtin_exprs: List[ParserElement] = [
- v for v in vars().values() if isinstance(v, ParserElement)
-]
+_builtin_exprs = [v for v in vars().values() if isinstance(v, ParserElement)]
# backward compatibility names
tokenMap = token_map
import railroad
import pyparsing
-import typing
+from pkg_resources import resource_filename
from typing import (
List,
+ Optional,
NamedTuple,
Generic,
TypeVar,
import inspect
-jinja2_template_source = """\
-<!DOCTYPE html>
-<html>
-<head>
- {% if not head %}
- <style type="text/css">
- .railroad-heading {
- font-family: monospace;
- }
- </style>
- {% else %}
- {{ head | safe }}
- {% endif %}
-</head>
-<body>
-{{ body | safe }}
-{% for diagram in diagrams %}
- <div class="railroad-group">
- <h1 class="railroad-heading">{{ diagram.title }}</h1>
- <div class="railroad-description">{{ diagram.text }}</div>
- <div class="railroad-svg">
- {{ diagram.svg }}
- </div>
- </div>
-{% endfor %}
-</body>
-</html>
-"""
-
-template = Template(jinja2_template_source)
+with open(resource_filename(__name__, "template.jinja2"), encoding="utf-8") as fp:
+ template = Template(fp.read())
# Note: ideally this would be a dataclass, but we're supporting Python 3.5+ so we can't do this yet
NamedDiagram = NamedTuple(
"NamedDiagram",
- [("name", str), ("diagram", typing.Optional[railroad.DiagramItem]), ("index", int)],
+ [("name", str), ("diagram", Optional[railroad.DiagramItem]), ("index", int)],
)
"""
A simple structure for associating a name with a railroad diagram
"""
data = []
for diagram in diagrams:
- if diagram.diagram is None:
- continue
io = StringIO()
diagram.diagram.writeSvg(io.write)
title = diagram.name
def to_railroad(
element: pyparsing.ParserElement,
- diagram_kwargs: typing.Optional[dict] = None,
+ diagram_kwargs: Optional[dict] = None,
vertical: int = 3,
show_results_names: bool = False,
show_groups: bool = False,
parent: EditablePartial,
number: int,
name: str = None,
- parent_index: typing.Optional[int] = None,
+ parent_index: Optional[int] = None,
):
#: The pyparsing element that this represents
self.element: pyparsing.ParserElement = element
#: The name of the element
- self.name: typing.Optional[str] = name
+ self.name: str = name
#: The output Railroad element in an unconverted state
self.converted: EditablePartial = converted
#: The parent Railroad element, which we store so that we can extract this if it's duplicated
#: The order in which we found this element, used for sorting diagrams if this is extracted into a diagram
self.number: int = number
#: The index of this inside its parent
- self.parent_index: typing.Optional[int] = parent_index
+ self.parent_index: Optional[int] = parent_index
#: If true, we should extract this out into a subdiagram
self.extract: bool = False
#: If true, all of this element's children have been filled out
Stores some state that persists between recursions into the element tree
"""
- def __init__(self, diagram_kwargs: typing.Optional[dict] = None):
+ def __init__(self, diagram_kwargs: Optional[dict] = None):
#: A dictionary mapping ParserElements to state relating to them
self._element_diagram_states: Dict[int, ElementState] = {}
#: A dictionary mapping ParserElement IDs to subdiagrams generated from them
def _inner(
element: pyparsing.ParserElement,
- parent: typing.Optional[EditablePartial],
+ parent: Optional[EditablePartial],
lookup: ConverterState = None,
vertical: int = None,
index: int = 0,
name_hint: str = None,
show_results_names: bool = False,
show_groups: bool = False,
- ) -> typing.Optional[EditablePartial]:
+ ) -> Optional[EditablePartial]:
ret = fn(
element,
@_apply_diagram_item_enhancements
def _to_diagram_element(
element: pyparsing.ParserElement,
- parent: typing.Optional[EditablePartial],
+ parent: Optional[EditablePartial],
lookup: ConverterState = None,
vertical: int = None,
index: int = 0,
name_hint: str = None,
show_results_names: bool = False,
show_groups: bool = False,
-) -> typing.Optional[EditablePartial]:
+) -> Optional[EditablePartial]:
"""
Recursively converts a PyParsing Element to a railroad Element
:param lookup: The shared converter state that keeps track of useful things
else:
ret = EditablePartial.from_call(railroad.Group, label="", item="")
elif isinstance(element, pyparsing.TokenConverter):
- ret = EditablePartial.from_call(
- AnnotatedItem, label=type(element).__name__.lower(), item=""
- )
+ ret = EditablePartial.from_call(AnnotatedItem, label=type(element).__name__.lower(), item="")
elif isinstance(element, pyparsing.Opt):
ret = EditablePartial.from_call(railroad.Optional, item="")
elif isinstance(element, pyparsing.OneOrMore):
--- /dev/null
+<!DOCTYPE html>
+<html>
+<head>
+ {% if not head %}
+ <style type="text/css">
+ .railroad-heading {
+ font-family: monospace;
+ }
+ </style>
+ {% else %}
+  {{ head | safe }}
+ {% endif %}
+</head>
+<body>
+{{ body | safe }}
+{% for diagram in diagrams %}
+ <div class="railroad-group">
+ <h1 class="railroad-heading">{{ diagram.title }}</h1>
+ <div class="railroad-description">{{ diagram.text }}</div>
+ <div class="railroad-svg">
+ {{ diagram.svg }}
+ </div>
+ </div>
+{% endfor %}
+</body>
+</html>
import re
import sys
-import typing
+from typing import Optional
from .util import col, line, lineno, _collapse_string_to_ranges
from .unicode import pyparsing_unicode as ppu
self,
pstr: str,
loc: int = 0,
- msg: typing.Optional[str] = None,
+ msg: Optional[str] = None,
elem=None,
):
self.loc = loc
# helpers.py
import html.entities
import re
-import typing
from . import __diag__
from .core import *
expr: Union[str, ParserElement],
delim: Union[str, ParserElement] = ",",
combine: bool = False,
- min: typing.Optional[int] = None,
- max: typing.Optional[int] = None,
+ min: OptionalType[int] = None,
+ max: OptionalType[int] = None,
*,
allow_trailing_delim: bool = False,
) -> ParserElement:
def counted_array(
expr: ParserElement,
- int_expr: typing.Optional[ParserElement] = None,
+ int_expr: OptionalType[ParserElement] = None,
*,
- intExpr: typing.Optional[ParserElement] = None,
+ intExpr: OptionalType[ParserElement] = None,
) -> ParserElement:
"""Helper to define a counted list of expressions.
def one_of(
- strs: Union[typing.Iterable[str], str],
+ strs: Union[IterableType[str], str],
caseless: bool = False,
use_regex: bool = True,
as_keyword: bool = False,
text = "shape: SQUARE posn: upper left color: light blue texture: burlap"
attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
- print(attr_expr[1, ...].parse_string(text).dump())
+ print(OneOrMore(attr_expr).parse_string(text).dump())
attr_label = label
attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)
def nested_expr(
opener: Union[str, ParserElement] = "(",
closer: Union[str, ParserElement] = ")",
- content: typing.Optional[ParserElement] = None,
+ content: OptionalType[ParserElement] = None,
ignore_expr: ParserElement = quoted_string(),
*,
ignoreExpr: ParserElement = quoted_string(),
return _makeTags(tag_str, True)
-any_open_tag: ParserElement
-any_close_tag: ParserElement
any_open_tag, any_close_tag = make_html_tags(
Word(alphas, alphanums + "_:").set_name("any tag")
)
InfixNotationOperatorArgType,
int,
OpAssoc,
- typing.Optional[ParseAction],
+ OptionalType[ParseAction],
],
Tuple[
InfixNotationOperatorArgType,
if rightLeftAssoc not in (OpAssoc.LEFT, OpAssoc.RIGHT):
raise ValueError("operator must indicate right or left associativity")
- thisExpr: Forward = Forward().set_name(term_name)
+ thisExpr = Forward().set_name(term_name)
if rightLeftAssoc is OpAssoc.LEFT:
if arity == 1:
matchExpr = _FB(lastExpr + opExpr) + Group(lastExpr + opExpr[1, ...])
assignment = Group(identifier + "=" + rvalue)
stmt << (funcDef | assignment | identifier)
- module_body = stmt[1, ...]
+ module_body = OneOrMore(stmt)
parseTree = module_body.parseString(data)
parseTree.pprint()
# build list of built-in expressions, for future reference if a global default value
# gets updated
-_builtin_exprs: List[ParserElement] = [
- v for v in vars().values() if isinstance(v, ParserElement)
-]
+_builtin_exprs = [v for v in vars().values() if isinstance(v, ParserElement)]
# pre-PEP8 compatible names
print(numlist.parse_string("0 123 321")) # -> ['123', '321']
label = Word(alphas)
- patt = label("LABEL") + Word(nums)[1, ...]
+ patt = label("LABEL") + OneOrMore(Word(nums))
print(patt.parse_string("AAB 123 321").dump())
# Use pop() in a parse action to remove named result (note that corresponding value is not
Example::
- patt = Word(alphas)[1, ...]
+ patt = OneOrMore(Word(alphas))
# use a parse action to append the reverse of the matched strings, to make a palindrome
def make_palindrome(tokens):
Example::
- patt = Word(alphas)[1, ...]
+ patt = OneOrMore(Word(alphas))
result = patt.parse_string("sldkj lsdkj sldkj")
# even though the result prints in string-like form, it is actually a pyparsing ParseResults
print(type(result), result) # -> <class 'pyparsing.ParseResults'> ['sldkj', 'lsdkj', 'sldkj']
user_data = (Group(house_number_expr)("house_number")
| Group(ssn_expr)("ssn")
| Group(integer)("age"))
- user_info = user_data[1, ...]
+ user_info = OneOrMore(user_data)
result = user_info.parse_string("22 111-22-3333 #221B")
for item in result:
# testing.py
from contextlib import contextmanager
-import typing
+from typing import Optional
from .core import (
ParserElement,
@staticmethod
def with_line_numbers(
s: str,
- start_line: typing.Optional[int] = None,
- end_line: typing.Optional[int] = None,
+ start_line: Optional[int] = None,
+ end_line: Optional[int] = None,
expand_tabs: bool = True,
eol_mark: str = "|",
- mark_spaces: typing.Optional[str] = None,
- mark_control: typing.Optional[str] = None,
+ mark_spaces: Optional[str] = None,
+ mark_control: Optional[str] = None,
) -> str:
"""
Helpful method for debugging a parser - prints a string with line and column numbers.
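# A minimal usage sketch (illustrative; assumes the surrounding class is exposed
# as pyparsing.testing.pyparsing_test, as in pyparsing 3.x).
from pyparsing.testing import pyparsing_test as ppt
print(ppt.with_line_numbers("abc\ndef", eol_mark="|"))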
A namespace class for defining common language unicode_sets.
"""
- # fmt: off
-
- # define ranges in language character sets
- _ranges: UnicodeRangeList = [
- (0x0020, sys.maxunicode),
- ]
-
- class BasicMultilingualPlane(unicode_set):
- "Unicode set for the Basic Multilingual Plane"
- _ranges: UnicodeRangeList = [
- (0x0020, 0xFFFF),
- ]
+ _ranges: UnicodeRangeList = [(32, sys.maxunicode)]
class Latin1(unicode_set):
"Unicode set for Latin-1 Unicode Character Range"
class CJK(Chinese, Japanese, Hangul):
"Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range"
+ pass
class Thai(unicode_set):
"Unicode set for Thai Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0E01, 0x0E3A),
- (0x0E3F, 0x0E5B)
- ]
+ _ranges: UnicodeRangeList = [(0x0E01, 0x0E3A), (0x0E3F, 0x0E5B)]
class Arabic(unicode_set):
"Unicode set for Arabic Unicode Character Range"
class Devanagari(unicode_set):
"Unicode set for Devanagari Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0900, 0x097F),
- (0xA8E0, 0xA8FF)
- ]
-
- # fmt: on
+ _ranges: UnicodeRangeList = [(0x0900, 0x097F), (0xA8E0, 0xA8FF)]
pyparsing_unicode.Japanese._ranges = (
+ pyparsing_unicode.Japanese.Katakana._ranges
)
-pyparsing_unicode.BMP = pyparsing_unicode.BasicMultilingualPlane
-
-# add language identifiers using language Unicode
+# define ranges in language character sets
pyparsing_unicode.العربية = pyparsing_unicode.Arabic
pyparsing_unicode.中文 = pyparsing_unicode.Chinese
pyparsing_unicode.кириллица = pyparsing_unicode.Cyrillic
packaging==21.3
-pyparsing==3.0.9
+pyparsing==3.0.8
appdirs==1.4.3
jaraco.text==3.7.0
# required for jaraco.text on older Pythons
[tool.pytest-enabler.cov]
addopts = "--cov"
-[tool.pytest-enabler.xdist]
+[pytest.enabler.xdist]
addopts = "-n auto"
[tool.towncrier]
[metadata]
name = setuptools
-version = 64.0.0
+version = 64.0.0b1
author = Python Packaging Authority
author_email = distutils-sig@python.org
description = Easily download, build, install, upgrade, and uninstall Python packages
pytest >= 6
pytest-checkdocs >= 2.4
pytest-flake8
- # workaround for tholo/pytest-flake8#87
- flake8 < 5
pytest-black >= 0.3.7; \
# workaround for jaraco/skeleton#22
python_implementation != "PyPy"
jaraco.packaging >= 9
rst.linker >= 1.9
jaraco.tidelift >= 1.4
- sphinx-notfound-page == 0.8.3
- sphinx-hoverxref < 2
# local
pygments-github-lexers==0.0.5
import subprocess
import contextlib
import warnings
-import unittest.mock as mock
+import unittest.mock
with contextlib.suppress(ImportError):
import winreg
try:
out = subprocess.check_output(
- f'cmd /u /c "{vcvarsall}" {plat_spec} && set',
+ 'cmd /u /c "{}" {} && set'.format(vcvarsall, plat_spec),
stderr=subprocess.STDOUT,
).decode('utf-16le', errors='replace')
except subprocess.CalledProcessError as exc:
log.error(exc.output)
- raise DistutilsPlatformError(f"Error executing {exc.cmd}")
+ raise DistutilsPlatformError("Error executing {}".format(exc.cmd))
env = {
key.lower(): value
self.plat_name = None
self.initialized = False
- @classmethod
- def _configure(cls, vc_env):
- """
- Set class-level include/lib dirs.
- """
- cls.include_dirs = cls._parse_path(vc_env.get('include', ''))
- cls.library_dirs = cls._parse_path(vc_env.get('lib', ''))
-
- @staticmethod
- def _parse_path(val):
- return [dir.rstrip(os.sep) for dir in val.split(os.pathsep) if dir]
-
def initialize(self, plat_name=None):
# multi-init means we would need to check platform same each time...
assert not self.initialized, "don't init multiple times"
# sanity check for platforms to prevent obscure errors later.
if plat_name not in PLAT_TO_VCVARS:
raise DistutilsPlatformError(
- f"--plat-name must be one of {tuple(PLAT_TO_VCVARS)}"
+ "--plat-name must be one of {}".format(tuple(PLAT_TO_VCVARS))
)
# Get the vcvarsall.bat spec for the requested platform.
raise DistutilsPlatformError(
"Unable to find a compatible " "Visual Studio installation."
)
- self._configure(vc_env)
self._paths = vc_env.get('path', '')
paths = self._paths.split(os.pathsep)
self.mc = _find_exe("mc.exe", paths) # message compiler
self.mt = _find_exe("mt.exe", paths)  # manifest tool
+ for dir in vc_env.get('include', '').split(os.pathsep):
+ if dir:
+ self.add_include_dir(dir.rstrip(os.sep))
+
+ for dir in vc_env.get('lib', '').split(os.pathsep):
+ if dir:
+ self.add_library_dir(dir.rstrip(os.sep))
+
self.preprocess_options = None
# bpo-38597: Always compile with dynamic linking
# Future releases of Python 3.x will include all past
# Better to raise an exception instead of silently continuing
# and later complain about sources and targets having
# different lengths
- raise CompileError(f"Don't know how to compile {p}")
+ raise CompileError("Don't know how to compile {}".format(p))
return list(map(make_out_path, source_filenames))
- def compile( # noqa: C901
+ def compile(
self,
sources,
output_dir=None,
continue
else:
# how to handle this file?
- raise CompileError(f"Don't know how to compile {src} to {obj}")
+ raise CompileError(
+ "Don't know how to compile {} to {}".format(src, obj)
+ )
args = [self.cc] + compile_opts + pp_opts
if add_cpp_opts:
else:
return
warnings.warn("Fallback spawn triggered. Please update distutils monkeypatch.")
- with mock.patch.dict('os.environ', env):
+ with unittest.mock.patch.dict('os.environ', env):
bag.value = super().spawn(cmd)
# -- Miscellaneous methods -----------------------------------------
# compression using `compress`
if compress == 'compress':
- warn("'compress' is deprecated.", DeprecationWarning)
+ warn("'compress' will be deprecated.", PendingDeprecationWarning)
# the option varies depending on the platform
compressed_name = archive_name + compress_ext[compress]
if sys.platform == 'win32':
return archive_name
-def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): # noqa: C901
+def make_zipfile(base_name, base_dir, verbose=0, dry_run=0):
"""Create a zip file from all the files under 'base_dir'.
The output zip file will be named 'base_name' + ".zip". Uses either the
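# A minimal usage sketch (illustrative): archive a scratch directory;
# make_zipfile returns the path of the archive it wrote ("<base_name>.zip").
import os
import tempfile
from distutils.archive_util import make_zipfile
print(make_zipfile(os.path.join(tempfile.mkdtemp(), "example"), tempfile.mkdtemp()))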
# -- Worker methods ------------------------------------------------
- def compile( # noqa: C901
+ def compile(
self,
sources,
output_dir=None,
# create_static_lib ()
- def link( # noqa: C901
+ def link(
self,
target_desc,
objects,
def_file = os.path.join(temp_dir, '%s.def' % modname)
contents = ['EXPORTS']
for sym in export_symbols or []:
- contents.append(' {}=_{}'.format(sym, sym))
+ contents.append(' %s=_%s' % (sym, sym))
self.execute(write_file, (def_file, contents), "writing %s" % def_file)
# Borland C++ has problems with '/' in paths
else:
objects.append(file)
- for ell in library_dirs:
- ld_args.append("/L%s" % os.path.normpath(ell))
+ for l in library_dirs:
+ ld_args.append("/L%s" % os.path.normpath(l))
ld_args.append("/L.") # we sometimes use relative paths
# list of object files
(base, ext) = os.path.splitext(os.path.normcase(src_name))
if ext not in (self.src_extensions + ['.rc', '.res']):
raise UnknownFileError(
- "unknown file type '{}' (from '{}')".format(ext, src_name)
+ "unknown file type '%s' (from '%s')" % (ext, src_name)
)
if strip_dir:
base = os.path.basename(base)
Contains CCompiler, an abstract base class that defines the interface
for the Distutils compiler abstraction model."""
-import sys
-import os
-import re
-from distutils.errors import (
- CompileError,
- LinkError,
- UnknownFileError,
- DistutilsPlatformError,
- DistutilsModuleError,
-)
+import sys, os, re
+from distutils.errors import *
from distutils.spawn import spawn
from distutils.file_util import move_file
from distutils.dir_util import mkpath
}
language_order = ["c++", "objc", "c"]
- include_dirs = []
- """
- include dirs specific to this compiler class
- """
-
- library_dirs = []
- """
- library dirs specific to this compiler class
- """
-
def __init__(self, verbose=0, dry_run=0, force=0):
self.dry_run = dry_run
self.force = force
def _setup_compile(self, outdir, macros, incdirs, sources, depends, extra):
"""Process arguments and decide which source files to compile."""
- outdir, macros, incdirs = self._fix_compile_args(outdir, macros, incdirs)
+ if outdir is None:
+ outdir = self.output_dir
+ elif not isinstance(outdir, str):
+ raise TypeError("'output_dir' must be a string or None")
+
+ if macros is None:
+ macros = self.macros
+ elif isinstance(macros, list):
+ macros = macros + (self.macros or [])
+ else:
+ raise TypeError("'macros' (if supplied) must be a list of tuples")
+
+ if incdirs is None:
+ incdirs = self.include_dirs
+ elif isinstance(incdirs, (list, tuple)):
+ incdirs = list(incdirs) + (self.include_dirs or [])
+ else:
+ raise TypeError("'include_dirs' (if supplied) must be a list of strings")
if extra is None:
extra = []
else:
raise TypeError("'include_dirs' (if supplied) must be a list of strings")
- # add include dirs for class
- include_dirs += self.__class__.include_dirs
-
return output_dir, macros, include_dirs
def _prep_compile(self, sources, output_dir, depends=None):
else:
raise TypeError("'library_dirs' (if supplied) must be a list of strings")
- # add library dirs for class
- library_dirs += self.__class__.library_dirs
-
if runtime_library_dirs is None:
runtime_library_dirs = self.runtime_library_dirs
elif isinstance(runtime_library_dirs, (list, tuple)):
"""
raise NotImplementedError
- def has_function( # noqa: C901
+ def has_function(
self,
funcname,
includes=None,
base = base[os.path.isabs(base) :] # If abs, chop off leading /
if ext not in self.src_extensions:
raise UnknownFileError(
- "unknown file type '{}' (from '{}')".format(ext, src_name)
+ "unknown file type '%s' (from '%s')" % (ext, src_name)
)
if strip_dir:
base = os.path.basename(base)
self, libname, lib_type='static', strip_dir=0, output_dir='' # or 'shared'
):
assert output_dir is not None
- expected = '"static", "shared", "dylib", "xcode_stub"'
- if lib_type not in eval(expected):
- raise ValueError(f"'lib_type' must be {expected}")
+ if lib_type not in ("static", "shared", "dylib", "xcode_stub"):
+ raise ValueError(
+ "'lib_type' must be \"static\", \"shared\", \"dylib\", or \"xcode_stub\""
+ )
fmt = getattr(self, lib_type + "_lib_format")
ext = getattr(self, lib_type + "_lib_extension")
in the distutils.command package.
"""
-import sys
-import os
-import re
+import sys, os, re
from distutils.errors import DistutilsOptionError
from distutils import util, dir_util, file_util, archive_util, dep_util
from distutils import log
if option[-1] == "=":
option = option[:-1]
value = getattr(self, option)
- self.announce(indent + "{} = {}".format(option, value), level=log.INFO)
+ self.announce(indent + "%s = %s" % (option, value), level=log.INFO)
def run(self):
"""A command's raison d'etre: carry out the action it exists to
return default
elif not isinstance(val, str):
raise DistutilsOptionError(
- "'{}' must be a {} (got `{}`)".format(option, what, val)
+ "'%s' must be a %s (got `%s`)" % (option, what, val)
)
return val
ok = False
if not ok:
raise DistutilsOptionError(
- "'{}' must be a list of strings (got {!r})".format(option, val)
+ "'%s' must be a list of strings (got %r)" % (option, val)
)
def _ensure_tested_string(self, option, tester, what, error_fmt, default=None):
raise TypeError("'infiles' must be a string, or a list or tuple of strings")
if exec_msg is None:
- exec_msg = "generating {} from {}".format(outfile, ', '.join(infiles))
+ exec_msg = "generating %s from %s" % (outfile, ', '.join(infiles))
# If 'outfile' must be regenerated (either because it doesn't
# exist, is out-of-date, or the 'force' flag is true) then
Package containing implementation of all the standard Distutils
commands."""
-__all__ = [ # noqa: F822
+__all__ = [
'build',
'build_py',
'build_ext',
'bdist_wininst',
'check',
'upload',
+ # These two are reserved for future use:
+ #'bdist_sdux',
+ #'bdist_pkgtool',
+ # Note:
+ # bdist_packager is not included because it only provides
+ # an abstract base class
]
import os
import functools
import subprocess
-import sysconfig
@functools.lru_cache()
def enabled():
"""
- Only enabled for Python 3.9 framework homebrew builds
- except ensurepip and venv.
+ Only enabled for Python 3.9 framework builds except ensurepip and venv.
"""
PY39 = (3, 9) < sys.version_info < (3, 10)
framework = sys.platform == 'darwin' and sys._framework
- homebrew = "Cellar" in sysconfig.get_config_var('projectbase')
venv = sys.prefix != sys.base_prefix
ensurepip = os.environ.get("ENSUREPIP_OPTIONS")
- return PY39 and framework and homebrew and not venv and not ensurepip
+ return PY39 and framework and not venv and not ensurepip
schemes = dict(
distribution)."""
import os
-import warnings
-
from distutils.core import Command
-from distutils.errors import DistutilsPlatformError, DistutilsOptionError
+from distutils.errors import *
from distutils.util import get_platform
formats = []
for format in bdist.format_commands:
- formats.append(("formats=" + format, None, bdist.format_commands[format][1]))
+ formats.append(("formats=" + format, None, bdist.format_command[format][1]))
pretty_printer = FancyGetopt(formats)
pretty_printer.print_help("List of available distribution formats:")
-class ListCompat(dict):
- # adapter to allow for Setuptools compatibility in format_commands
- def append(self, item):
- warnings.warn(
- """format_commands is now a dict. append is deprecated.""",
- DeprecationWarning,
- stacklevel=2,
- )
-
-
class bdist(Command):
description = "create a built (binary) distribution"
# Debian-ish Linux, Solaris, FreeBSD, ..., Windows, Mac OS.
default_format = {'posix': 'gztar', 'nt': 'zip'}
- # Define commands in preferred order for the --help-formats option
- format_commands = ListCompat(
- {
- 'rpm': ('bdist_rpm', "RPM distribution"),
- 'gztar': ('bdist_dumb', "gzip'ed tar file"),
- 'bztar': ('bdist_dumb', "bzip2'ed tar file"),
- 'xztar': ('bdist_dumb', "xz'ed tar file"),
- 'ztar': ('bdist_dumb', "compressed tar file"),
- 'tar': ('bdist_dumb', "tar file"),
- 'wininst': ('bdist_wininst', "Windows executable installer"),
- 'zip': ('bdist_dumb', "ZIP file"),
- 'msi': ('bdist_msi', "Microsoft Installer"),
- }
- )
-
- # for compatibility until consumers only reference format_commands
- format_command = format_commands
+ # Establish the preferred order (for the --help-formats option).
+ format_commands = [
+ 'rpm',
+ 'gztar',
+ 'bztar',
+ 'xztar',
+ 'ztar',
+ 'tar',
+ 'wininst',
+ 'zip',
+ 'msi',
+ ]
+
+ # And the real information.
+ format_command = {
+ 'rpm': ('bdist_rpm', "RPM distribution"),
+ 'gztar': ('bdist_dumb', "gzip'ed tar file"),
+ 'bztar': ('bdist_dumb', "bzip2'ed tar file"),
+ 'xztar': ('bdist_dumb', "xz'ed tar file"),
+ 'ztar': ('bdist_dumb', "compressed tar file"),
+ 'tar': ('bdist_dumb', "tar file"),
+ 'wininst': ('bdist_wininst', "Windows executable installer"),
+ 'zip': ('bdist_dumb', "ZIP file"),
+ 'msi': ('bdist_msi', "Microsoft Installer"),
+ }
def initialize_options(self):
self.bdist_base = None
commands = []
for format in self.formats:
try:
- commands.append(self.format_commands[format][0])
+ commands.append(self.format_command[format][0])
except KeyError:
raise DistutilsOptionError("invalid format '%s'" % format)
from distutils.core import Command
from distutils.util import get_platform
from distutils.dir_util import remove_tree, ensure_relative
-from distutils.errors import DistutilsPlatformError
+from distutils.errors import *
from distutils.sysconfig import get_python_version
from distutils import log
# And make an archive relative to the root of the
# pseudo-installation tree.
- archive_basename = "{}.{}".format(
- self.distribution.get_fullname(), self.plat_name
- )
+ archive_basename = "%s.%s" % (self.distribution.get_fullname(), self.plat_name)
pseudoinstall_root = os.path.join(self.dist_dir, archive_basename)
if not self.relative:
default, cancel, bitmap=true)"""
super().__init__(*args)
ruler = self.h - 36
+ bmwidth = 152 * ruler / 328
+ # if kw.get("bitmap", True):
+ # self.bitmap("Bitmap", 0, 0, bmwidth, ruler, "PythonWin")
self.line("BottomLine", 0, ruler, self.w, 0)
def title(self, title):
)
self.install_script_key = None
- def run(self): # noqa: C901
+ def run(self):
if not self.skip_build:
self.run_command('build')
if not target_version:
assert self.skip_build, "Should have already checked this"
target_version = '%d.%d' % sys.version_info[:2]
- plat_specifier = ".{}-{}".format(self.plat_name, target_version)
+ plat_specifier = ".%s-%s" % (self.plat_name, target_version)
build = self.get_finalized_command('build')
build.build_lib = os.path.join(build.build_base, 'lib' + plat_specifier)
# in Add/Remove Programs (ARP)
fullname = self.distribution.get_fullname()
if self.target_version:
- product_name = "Python {} {}".format(self.target_version, fullname)
+ product_name = "Python %s %s" % (self.target_version, fullname)
else:
product_name = "Python %s" % (fullname)
self.db = msilib.init_database(
if not self.keep_temp:
remove_tree(self.bdist_dir, dry_run=self.dry_run)
- def add_files(self): # noqa: C901
+ def add_files(self):
db = self.db
cab = msilib.CAB("distfiles")
rootdir = os.path.abspath(self.bdist_dir)
for file in os.listdir(dir.absolute):
afile = os.path.join(dir.absolute, file)
if os.path.isdir(afile):
- short = "{}|{}".format(dir.make_short(file), file)
+ short = "%s|%s" % (dir.make_short(file), file)
default = file + version
newdir = Directory(db, cab, dir, file, default, short)
todo.append(newdir)
exe_action = "PythonExe" + ver
target_dir_prop = "TARGETDIR" + ver
exe_prop = "PYTHON" + ver
-
- # Type: msidbLocatorTypeRawValue + msidbLocatorType64bit
- Type = 2 + 16 * bool(msilib.Win64)
+ if msilib.Win64:
+ # type: msidbLocatorTypeRawValue + msidbLocatorType64bit
+ Type = 2 + 16
+ else:
+ Type = 2
add_data(
self.db,
"RegLocator",
# see "Dialog Style Bits"
modal = 3 # visible | modal
modeless = 1 # visible
+ track_disk_space = 32
# UI customization properties
add_data(
320,
80,
0x30003,
- "[ProductName] setup ended prematurely because of an error. "
- "Your system has not been modified. To install this program "
- "at a later time, please run the installation again.",
+ "[ProductName] setup ended prematurely because of an error. Your system has not been modified. To install this program at a later time, please run the installation again.",
)
fatal.text(
"Description2",
80,
0x30003,
"[ProductName] setup was interrupted. Your system has not been modified. "
- "To install this program at a later time, please run the installation "
- "again.",
+ "To install this program at a later time, please run the installation again.",
)
user_exit.text(
"Description2",
330,
50,
3,
- "The following applications are using files that need to be updated by "
- "this "
- "setup. Close these applications and then click Retry to continue the "
- "installation or Cancel to exit it.",
+ "The following applications are using files that need to be updated by this setup. Close these applications and then click Retry to continue the installation or Cancel to exit it.",
)
inuse.control(
"List",
None,
)
error.text("ErrorText", 50, 9, 280, 48, 3, "")
+ # error.control("ErrorIcon", "Icon", 15, 9, 24, 24, 5242881, None, "py.ico", None, None)
error.pushbutton("N", 120, 72, 81, 21, 3, "No", None).event(
"EndDialog", "ErrorNo"
)
194,
30,
3,
- "Please wait while the installer finishes determining your disk space "
- "requirements.",
+ "Please wait while the installer finishes determining your disk space requirements.",
)
c = costing.pushbutton("Return", 102, 57, 56, 17, 3, "Return", None)
c.event("EndDialog", "Exit")
320,
40,
0x30003,
- "Please wait while the Installer prepares to guide you through the "
- "installation.",
+ "Please wait while the Installer prepares to guide you through the installation.",
)
prep.title("Welcome to the [ProductName] Installer")
c = prep.text("ActionText", 15, 110, 320, 20, 0x30003, "Pondering...")
# Close dialog when maintenance action scheduled
c.event("EndDialog", "Return", 'MaintenanceForm_Action<>"Change"', 20)
+ # c.event("NewDialog", "SelectFeaturesDlg", 'MaintenanceForm_Action="Change"', 21)
maint.cancel("Cancel", "RepairRadioGroup").event("SpawnDialog", "CancelDlg")
def get_installer_filename(self, fullname):
# Factored out to allow overriding in subclasses
if self.target_version:
- base_name = "{}.{}-py{}.msi".format(
+ base_name = "%s.%s-py%s.msi" % (
fullname,
self.plat_name,
self.target_version,
)
else:
- base_name = "{}.{}.msi".format(fullname, self.plat_name)
+ base_name = "%s.%s.msi" % (fullname, self.plat_name)
installer_name = os.path.join(self.dist_dir, base_name)
return installer_name
Implements the Distutils 'bdist_rpm' command (create RPM source and binary
distributions)."""
-import subprocess
-import sys
-import os
-
+import subprocess, sys, os
from distutils.core import Command
from distutils.debug import DEBUG
from distutils.file_util import write_file
-from distutils.errors import (
- DistutilsOptionError,
- DistutilsPlatformError,
- DistutilsFileError,
- DistutilsExecError,
-)
+from distutils.errors import *
from distutils.sysconfig import get_python_version
from distutils import log
self.ensure_string('force_arch')
- def run(self): # noqa: C901
+ def run(self):
if DEBUG:
print("before _get_package_data():")
print("vendor =", self.vendor)
nvr_string = "%{name}-%{version}-%{release}"
src_rpm = nvr_string + ".src.rpm"
non_src_rpm = "%{arch}/" + nvr_string + ".%{arch}.rpm"
- q_cmd = r"rpm -q --qf '{} {}\n' --specfile '{}'".format(
+ q_cmd = r"rpm -q --qf '%s %s\n' --specfile '%s'" % (
src_rpm,
non_src_rpm,
spec_path,
line = out.readline()
if not line:
break
- ell = line.strip().split()
- assert len(ell) == 2
- binary_rpms.append(ell[1])
+ l = line.strip().split()
+ assert len(l) == 2
+ binary_rpms.append(l[1])
# The source rpm is named after the first entry in the spec file
if source_rpm is None:
- source_rpm = ell[0]
+ source_rpm = l[0]
status = out.close()
if status:
def _dist_path(self, path):
return os.path.join(self.dist_dir, os.path.basename(path))
- def _make_spec_file(self): # noqa: C901
+ def _make_spec_file(self):
"""Generate the text of an RPM spec file and return it as a
list of strings (one per line).
"""
):
val = getattr(self, field.lower())
if isinstance(val, list):
- spec_file.append('{}: {}'.format(field, ' '.join(val)))
+ spec_file.append('%s: %s' % (field, ' '.join(val)))
elif val is not None:
- spec_file.append('{}: {}'.format(field, val))
+ spec_file.append('%s: %s' % (field, val))
if self.distribution.get_url():
spec_file.append('Url: ' + self.distribution.get_url())
# rpm scripts
# figure out default build script
- def_setup_call = "{} {}".format(self.python, os.path.basename(sys.argv[0]))
+ def_setup_call = "%s %s" % (self.python, os.path.basename(sys.argv[0]))
def_build = "%s build" % def_setup_call
if self.use_rpm_opt_flags:
def_build = 'env CFLAGS="$RPM_OPT_FLAGS" ' + def_build
from distutils.core import Command
from distutils.util import get_platform
from distutils.dir_util import remove_tree
-from distutils.errors import DistutilsOptionError, DistutilsPlatformError
+from distutils.errors import *
from distutils.sysconfig import get_python_version
from distutils import log
if not target_version:
assert self.skip_build, "Should have already checked this"
target_version = '%d.%d' % sys.version_info[:2]
- plat_specifier = ".{}-{}".format(self.plat_name, target_version)
+ plat_specifier = ".%s-%s" % (self.plat_name, target_version)
build = self.get_finalized_command('build')
build.build_lib = os.path.join(build.build_base, 'lib' + plat_specifier)
]:
data = getattr(metadata, name, "")
if data:
- info = info + ("\n {}: {}".format(name.capitalize(), escape(data)))
- lines.append("{}={}".format(name, escape(data)))
+ info = info + ("\n %s: %s" % (name.capitalize(), escape(data)))
+ lines.append("%s=%s" % (name, escape(data)))
# The [setup] section contains entries controlling
# the installer runtime.
import time
import distutils
- build_info = "Built {} with distutils-{}".format(
+ build_info = "Built %s with distutils-%s" % (
time.ctime(time.time()),
distutils.__version__,
)
# We need to normalize newlines, so we open in text mode and
# convert back to bytes. "latin-1" simply avoids any possible
# failures.
- with open(self.pre_install_script, encoding="latin-1") as script:
+ with open(self.pre_install_script, "r", encoding="latin-1") as script:
script_data = script.read().encode("latin-1")
cfgdata = cfgdata + script_data + b"\n\0"
else:
# it's better to include this in the name
installer_name = os.path.join(
self.dist_dir,
- "{}.{}-py{}.exe".format(fullname, self.plat_name, self.target_version),
+ "%s.%s-py%s.exe" % (fullname, self.plat_name, self.target_version),
)
else:
installer_name = os.path.join(
- self.dist_dir, "{}.{}.exe".format(fullname, self.plat_name)
+ self.dist_dir, "%s.%s.exe" % (fullname, self.plat_name)
)
return installer_name
- def get_exe_bytes(self): # noqa: C901
+ def get_exe_bytes(self):
# If a target-version other than the current version has been
# specified, then using the MSVC version from *this* build is no good.
# Without actually finding and executing the target version and parsing
else:
sfix = ''
- filename = os.path.join(directory, "wininst-{}{}.exe".format(bv, sfix))
+ filename = os.path.join(directory, "wininst-%s%s.exe" % (bv, sfix))
f = open(filename, "rb")
try:
return f.read()
Implements the Distutils 'build' command."""
-import sys
-import os
+import sys, os
from distutils.core import Command
from distutils.errors import DistutilsOptionError
from distutils.util import get_platform
self.executable = None
self.parallel = None
- def finalize_options(self): # noqa: C901
+ def finalize_options(self):
if self.plat_name is None:
self.plat_name = get_platform()
else:
"using './configure --help' on your platform)"
)
- plat_specifier = ".{}-{}".format(self.plat_name, sys.implementation.cache_tag)
+ plat_specifier = ".%s-%s" % (self.plat_name, sys.implementation.cache_tag)
# Make it so Python 2.x and Python 2.x with --with-pydebug don't
# share the same build directories. Doing so confuses the build
import os
from distutils.core import Command
-from distutils.errors import DistutilsSetupError
+from distutils.errors import *
from distutils.sysconfig import customize_compiler
from distutils import log
import re
import sys
from distutils.core import Command
-from distutils.errors import (
- DistutilsOptionError,
- DistutilsSetupError,
- CCompilerError,
- DistutilsError,
- CompileError,
- DistutilsPlatformError,
-)
+from distutils.errors import *
from distutils.sysconfig import customize_compiler, get_python_version
from distutils.sysconfig import get_config_h_filename
from distutils.dep_util import newer_group
self.user = None
self.parallel = None
- def finalize_options(self): # noqa: C901
+ def finalize_options(self):
from distutils import sysconfig
self.set_undefined_options(
except ValueError:
raise DistutilsOptionError("parallel should be an integer")
- def run(self): # noqa: C901
+ def run(self):
from distutils.ccompiler import new_compiler
# 'self.extensions', as supplied by setup.py, is a list of
# Now actually compile and link everything.
self.build_extensions()
- def check_extensions_list(self, extensions): # noqa: C901
+ def check_extensions_list(self, extensions):
"""Ensure that the list of extensions (presumably provided as a
command option 'extensions') is valid, i.e. it is a list of
Extension objects. We also support the old-style list of 2-tuples,
except (CCompilerError, DistutilsError, CompileError) as e:
if not ext.optional:
raise
- self.warn('building extension "{}" failed: {}'.format(ext.name, e))
+ self.warn('building extension "%s" failed: %s' % (ext.name, e))
def build_extension(self, ext):
sources = ext.sources
ext.export_symbols.append(initfunc_name)
return ext.export_symbols
- def get_libraries(self, ext): # noqa: C901
+ def get_libraries(self, ext):
"""Return the list of libraries to link against when building a
shared extension. On most platforms, this is just 'ext.libraries';
on Windows, we add the Python library (eg. python20.dll).
import glob
from distutils.core import Command
-from distutils.errors import DistutilsOptionError, DistutilsFileError
+from distutils.errors import *
from distutils.util import convert_path
from distutils import log
def build_package_data(self):
"""Copy data files into build directory"""
+ lastdir = None
for package, src_dir, build_dir, filenames in self.data_files:
for filename in filenames:
target = os.path.join(build_dir, filename)
return outfiles, updated_files
- def _copy_script(self, script, outfiles, updated_files): # noqa: C901
+ def _copy_script(self, script, outfiles, updated_files):
shebang_match = None
script = convert_path(script)
outfile = os.path.join(self.build_dir, os.path.basename(script))
Implements the Distutils 'check' command.
"""
-import contextlib
+from email.utils import getaddresses
from distutils.core import Command
from distutils.errors import DistutilsSetupError
-with contextlib.suppress(ImportError):
- import docutils.utils
- import docutils.parsers.rst
- import docutils.frontend
- import docutils.nodes
+try:
+ # docutils is installed
+ from docutils.utils import Reporter
+ from docutils.parsers.rst import Parser
+ from docutils import frontend
+ from docutils import nodes
- class SilentReporter(docutils.utils.Reporter):
+ class SilentReporter(Reporter):
def __init__(
self,
source,
def system_message(self, level, message, *children, **kwargs):
self.messages.append((level, message, children, kwargs))
- return docutils.nodes.system_message(
+ return nodes.system_message(
message, level=level, type=self.levels[level], *children, **kwargs
)
+ HAS_DOCUTILS = True
+except Exception:
+ # Catch all exceptions because exceptions besides ImportError probably
+ # indicate that docutils is not ported to Py3k.
+ HAS_DOCUTILS = False
+
class check(Command):
"""This command checks the meta-data of the package."""
if self.metadata:
self.check_metadata()
if self.restructuredtext:
- if 'docutils' in globals():
- try:
- self.check_restructuredtext()
- except TypeError as exc:
- raise DistutilsSetupError(str(exc))
+ if HAS_DOCUTILS:
+ self.check_restructuredtext()
elif self.strict:
raise DistutilsSetupError('The docutils package is needed.')
if line is None:
warning = warning[1]
else:
- warning = '{} (line {})'.format(warning[1], line)
+ warning = '%s (line %s)' % (warning[1], line)
self.warn(warning)
def _check_rst_data(self, data):
"""Returns warnings when the provided data doesn't compile."""
# the include and csv_table directives need this to be a path
source_path = self.distribution.script_name or 'setup.py'
- parser = docutils.parsers.rst.Parser()
- settings = docutils.frontend.OptionParser(
- components=(docutils.parsers.rst.Parser,)
- ).get_default_values()
+ parser = Parser()
+ settings = frontend.OptionParser(components=(Parser,)).get_default_values()
settings.tab_width = 4
settings.pep_references = None
settings.rfc_references = None
error_handler=settings.error_encoding_error_handler,
)
- document = docutils.nodes.document(settings, reporter, source=source_path)
+ document = nodes.document(settings, reporter, source=source_path)
document.note_source(source_path, -1)
try:
parser.parse(data, document)
this header file lives".
"""
-import os
-import re
+import os, re
from distutils.core import Command
from distutils.errors import DistutilsExecError
from distutils.core import Command
from distutils.debug import DEBUG
from distutils.sysconfig import get_config_vars
+from distutils.errors import DistutilsPlatformError
from distutils.file_util import write_file
from distutils.util import convert_path, subst_vars, change_root
from distutils.util import get_platform
-from distutils.errors import DistutilsOptionError, DistutilsPlatformError
+from distutils.errors import DistutilsOptionError
from . import _framework_compat as fw
from .. import _collections
INSTALL_SCHEMES = {
'posix_prefix': {
'purelib': '{base}/lib/{implementation_lower}{py_version_short}/site-packages',
- 'platlib': '{platbase}/{platlibdir}/{implementation_lower}'
- '{py_version_short}/site-packages',
- 'headers': '{base}/include/{implementation_lower}'
- '{py_version_short}{abiflags}/{dist_name}',
+ 'platlib': '{platbase}/{platlibdir}/{implementation_lower}{py_version_short}/site-packages',
+ 'headers': '{base}/include/{implementation_lower}{py_version_short}{abiflags}/{dist_name}',
'scripts': '{base}/bin',
'data': '{base}',
},
INSTALL_SCHEMES['nt_user'] = {
'purelib': '{usersite}',
'platlib': '{usersite}',
- 'headers': '{userbase}/{implementation}{py_version_nodot_plat}'
- '/Include/{dist_name}',
+ 'headers': '{userbase}/{implementation}{py_version_nodot_plat}/Include/{dist_name}',
'scripts': '{userbase}/{implementation}{py_version_nodot_plat}/Scripts',
'data': '{userbase}',
}
INSTALL_SCHEMES['posix_user'] = {
'purelib': '{usersite}',
'platlib': '{usersite}',
- 'headers': '{userbase}/include/{implementation_lower}'
- '{py_version_short}{abiflags}/{dist_name}',
+ 'headers': '{userbase}/include/{implementation_lower}{py_version_short}{abiflags}/{dist_name}',
'scripts': '{userbase}/bin',
'data': '{userbase}',
}
# party Python modules on various platforms given a wide
# array of user input is decided. Yes, it's quite complex!)
- def finalize_options(self): # noqa: C901
+ def finalize_options(self):
"""Finalizes options."""
# This method (and its helpers, like 'finalize_unix()',
# 'finalize_other()', and 'select_scheme()') is where the default
-"""
-distutils.command.install_egg_info
+"""distutils.command.install_egg_info
Implements the Distutils 'install_egg_info' command, for installing
-a package's PKG-INFO metadata.
-"""
+a package's PKG-INFO metadata."""
-import os
-import sys
-import re
from distutils.cmd import Command
from distutils import log, dir_util
+import os, sys, re
class install_egg_info(Command):
import getpass
import io
-import urllib.parse
-import urllib.request
+import urllib.parse, urllib.request
from warnings import warn
from distutils.core import PyPIRCCommand
+from distutils.errors import *
from distutils import log
def check_metadata(self):
"""Deprecated API."""
warn(
- "distutils.command.register.check_metadata is deprecated; "
- "use the check command instead",
- DeprecationWarning,
+ "distutils.command.register.check_metadata is deprecated, \
+ use the check command instead",
+ PendingDeprecationWarning,
)
check = self.distribution.get_command_obj('check')
check.ensure_finalized()
(code, result) = self.post_to_server(self.build_post_data('verify'))
log.info('Server response (%s): %s', code, result)
- def send_metadata(self): # noqa: C901
+ def send_metadata(self):
'''Send the metadata to the package index server.
Well, do the following:
auth.add_password(self.realm, host, username, password)
# send the info to the server and report the result
code, result = self.post_to_server(self.build_post_data('submit'), auth)
- self.announce('Server response ({}): {}'.format(code, result), log.INFO)
+ self.announce('Server response (%s): %s' % (code, result), log.INFO)
# possibly save the login
if code == 200:
log.info('Server response (%s): %s', code, result)
else:
log.info('You will receive an email shortly.')
- log.info('Follow the instructions in it to ' 'complete registration.')
+ log.info(('Follow the instructions in it to ' 'complete registration.'))
elif choice == '3':
data = {':action': 'password_reset'}
data['email'] = ''
data['metadata_version'] = '1.1'
return data
- def post_to_server(self, data, auth=None): # noqa: C901
+ def post_to_server(self, data, auth=None):
'''Post a query to the server, and return a string response.'''
if 'name' in data:
self.announce(
- 'Registering {} to {}'.format(data['name'], self.repository), log.INFO
+ 'Registering %s to %s' % (data['name'], self.repository), log.INFO
)
# Build up the MIME payload for the urllib2 POST data
boundary = '--------------GHSKFJDLGDS7543FJKLFHRE75642756743254'
from distutils.filelist import FileList
from distutils import log
from distutils.util import convert_path
-from distutils.errors import DistutilsOptionError, DistutilsTemplateError
+from distutils.errors import DistutilsTemplateError, DistutilsOptionError
def show_formats():
seps = '/'
vcs_dirs = ['RCS', 'CVS', r'\.svn', r'\.hg', r'\.git', r'\.bzr', '_darcs']
- vcs_ptrn = r'(^|{})({})({}).*'.format(seps, '|'.join(vcs_dirs), seps)
+ vcs_ptrn = r'(^|%s)(%s)(%s).*' % (seps, '|'.join(vcs_dirs), seps)
self.filelist.exclude_pattern(vcs_ptrn, is_regex=1)
def write_manifest(self):
for command, pyversion, filename in self.distribution.dist_files:
self.upload_file(command, pyversion, filename)
- def upload_file(self, command, pyversion, filename): # noqa: C901
+ def upload_file(self, command, pyversion, filename):
# Makes sure the repository URL is compliant
schema, netloc, url, params, query, fragments = urlparse(self.repository)
if params or query or fragments:
body.write(end_boundary)
body = body.getvalue()
- msg = "Submitting {} to {}".format(filename, self.repository)
+ msg = "Submitting %s to %s" % (filename, self.repository)
self.announce(msg, log.INFO)
# build the Request
raise
if status == 200:
- self.announce('Server response ({}): {}'.format(status, reason), log.INFO)
+ self.announce('Server response (%s): %s' % (status, reason), log.INFO)
if self.show_response:
text = self._read_pypi_response(result)
msg = '\n'.join(('-' * 75, text, '-' * 75))
self.announce(msg, log.INFO)
else:
- msg = 'Upload failed ({}): {}'.format(status, reason)
+ msg = 'Upload failed (%s): %s' % (status, reason)
self.announce(msg, log.ERROR)
raise DistutilsError(msg)
with os.fdopen(os.open(rc, os.O_CREAT | os.O_WRONLY, 0o600), 'w') as f:
f.write(DEFAULT_PYPIRC % (username, password))
- def _read_pypirc(self): # noqa: C901
+ def _read_pypirc(self):
"""Reads the .pypirc file."""
rc = self._get_rc_file()
if os.path.exists(rc):
import tokenize
from distutils.debug import DEBUG
-from distutils.errors import (
- DistutilsSetupError,
- DistutilsError,
- CCompilerError,
- DistutilsArgError,
-)
+from distutils.errors import *
# Mainly import these so setup scripts can "from distutils.core import" them.
from distutils.dist import Distribution
from distutils.config import PyPIRCCommand
from distutils.extension import Extension
-
-__all__ = ['Distribution', 'Command', 'PyPIRCCommand', 'Extension', 'setup']
-
# This is a barebones help message generated displayed when the user
# runs the setup script with no arguments at all. More useful help
# is generated with various --help options: global help, list commands,
def gen_usage(script_name):
script = os.path.basename(script_name)
- return USAGE % locals()
+ return USAGE % vars()
# Some mild magic to control the behaviour of 'setup()' from 'run_setup()'.
)
-def setup(**attrs): # noqa: C901
+def setup(**attrs):
"""The gateway to the Distutils: do everything your setup script needs
to do, in a highly flexible and user-driven way. Briefly: create a
Distribution instance; find and parse config files; parse the command
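# A minimal usage sketch (illustrative): the smallest driver of this gateway.
# The name/version are placeholders; script_args avoids depending on sys.argv.
from distutils.core import setup
dist = setup(name="example", version="0.1", py_modules=["example"], script_args=["--name"])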
if 'name' not in attrs:
raise SystemExit("error in setup command: %s" % msg)
else:
- raise SystemExit("error in {} setup command: {}".format(attrs['name'], msg))
+ raise SystemExit("error in %s setup command: %s" % (attrs['name'], msg))
if _setup_stop_after == "init":
return dist
raise SystemExit("interrupted")
except OSError as exc:
if DEBUG:
- sys.stderr.write("error: {}\n".format(exc))
+ sys.stderr.write("error: %s\n" % (exc,))
raise
else:
- raise SystemExit("error: {}".format(exc))
+ raise SystemExit("error: %s" % (exc,))
except (DistutilsError, CCompilerError) as msg:
if DEBUG:
used to drive the Distutils.
"""
if stop_after not in ('init', 'config', 'commandline', 'run'):
- raise ValueError("invalid value for 'stop_after': {!r}".format(stop_after))
+ raise ValueError("invalid value for 'stop_after': %r" % (stop_after,))
global _setup_stop_after, _setup_distribution
_setup_stop_after = stop_after
raise ValueError("Unknown MS Compiler version %s " % msc_ver)
-_runtime_library_dirs_msg = (
- "Unable to set runtime library search path on Windows, "
- "usually indicated by `runtime_library_dirs` parameter to Extension"
-)
-
-
class CygwinCCompiler(UnixCCompiler):
"""Handles the Cygwin port of the GNU C compiler to Windows."""
super().__init__(verbose, dry_run, force)
status, details = check_config_h()
- self.debug_print(
- "Python's GCC status: {} (details: {})".format(status, details)
- )
+ self.debug_print("Python's GCC status: %s (details: %s)" % (status, details))
if status is not CONFIG_H_OK:
self.warn(
"Python's pyconfig.h doesn't seem to support your compiler. "
compiler_so='%s -mcygwin -mdll -O -Wall' % self.cc,
compiler_cxx='%s -mcygwin -O -Wall' % self.cxx,
linker_exe='%s -mcygwin' % self.cc,
- linker_so=('{} -mcygwin {}'.format(self.linker_dll, shared_option)),
+ linker_so=('%s -mcygwin %s' % (self.linker_dll, shared_option)),
)
# Include the appropriate MSVC runtime library if Python was built
objects = copy.copy(objects or [])
if runtime_library_dirs:
- self.warn(_runtime_library_dirs_msg)
+ self.warn(
+ "I don't know what to do with 'runtime_library_dirs': "
+ + str(runtime_library_dirs)
+ )
# Additional libraries
libraries.extend(self.dll_libraries)
# generate the filenames for these files
def_file = os.path.join(temp_dir, dll_name + ".def")
+ lib_file = os.path.join(temp_dir, 'lib' + dll_name + ".a")
# Generate .def file
contents = ["LIBRARY %s" % os.path.basename(output_filename), "EXPORTS"]
contents.append(sym)
self.execute(write_file, (def_file, contents), "writing %s" % def_file)
- # next add options for def-file
+ # next add options for def-file and to creating import libraries
+ # doesn't work: bfd_close build\...\libfoo.a: Invalid operation
+ # extra_preargs.extend(["-Wl,--out-implib,%s" % lib_file])
# for gcc/ld the def-file is specified as any object files
objects.append(def_file)
# cygwin doesn't support rpath. While in theory we could error
# out like MSVC does, code might expect it to work like on Unix, so
# just warn and hope for the best.
- self.warn(_runtime_library_dirs_msg)
+ self.warn("don't know how to set runtime library search path on Windows")
return []
# -- Miscellaneous methods -----------------------------------------
base, ext = os.path.splitext(os.path.normcase(src_name))
if ext not in (self.src_extensions + ['.rc', '.res']):
raise UnknownFileError(
- "unknown file type '{}' (from '{}')".format(ext, src_name)
+ "unknown file type '%s' (from '%s')" % (ext, src_name)
)
if strip_dir:
base = os.path.basename(base)
compiler_so='%s -mdll -O -Wall' % self.cc,
compiler_cxx='%s -O -Wall' % self.cxx,
linker_exe='%s' % self.cc,
- linker_so='{} {}'.format(self.linker_dll, shared_option),
+ linker_so='%s %s' % (self.linker_dll, shared_option),
)
# Maybe we should also append -mthreads, but then the finished
self.dll_libraries = get_msvcr()
def runtime_library_dir_option(self, dir):
- raise DistutilsPlatformError(_runtime_library_dirs_msg)
+ raise DistutilsPlatformError(
+ "don't know how to set runtime library search path on Windows"
+ )
# Because these compilers aren't configured in Python's pyconfig.h file by
finally:
config_h.close()
except OSError as exc:
- return (CONFIG_H_UNCERTAIN, "couldn't read '{}': {}".format(fn, exc.strerror))
+ return (CONFIG_H_UNCERTAIN, "couldn't read '%s': %s" % (fn, exc.strerror))
def is_cygwincc(cc):
if missing == 'error': # blow up when we stat() the file
pass
elif missing == 'ignore': # missing source dropped from
- continue # target's dependency list
+ continue # target's dependency list
elif missing == 'newer': # missing source means target is
- return 1 # out-of-date
+ return 1 # out-of-date
source_mtime = os.stat(source)[ST_MTIME]
if source_mtime > target_mtime:
import os
import errno
-from distutils.errors import DistutilsInternalError, DistutilsFileError
+from distutils.errors import DistutilsFileError, DistutilsInternalError
from distutils import log
# cache used by mkpath() -- in addition to cheapening redundant calls,
# eliminates redundant "creating /foo/bar/baz" messages in dry-run mode
_path_created = {}
-
-def mkpath(name, mode=0o777, verbose=1, dry_run=0): # noqa: C901
+# I don't use os.makedirs because a) it's new to Python 1.5.2, and
+# b) it blows up if the directory already exists (I want to silently
+# succeed in that case).
+def mkpath(name, mode=0o777, verbose=1, dry_run=0):
"""Create a directory and any missing ancestor directories.
If the directory already exists (or if 'name' is the empty string, which
(eg. some sub-path exists, but is a file rather than a directory).
If 'verbose' is true, print a one-line summary of each mkdir to stdout.
Return the list of directories actually created.
-
- os.makedirs is not used because:
-
- a) It's new to Python 1.5.2, and
- b) it blows up if the directory already exists (in which case it should
- silently succeed).
"""
global _path_created
# Detect a common bug -- name is None
if not isinstance(name, str):
raise DistutilsInternalError(
- "mkpath: 'name' must be a string (got {!r})".format(name)
+ "mkpath: 'name' must be a string (got %r)" % (name,)
)
# XXX what's the better way to handle verbosity? print as we create
except OSError as exc:
if not (exc.errno == errno.EEXIST and os.path.isdir(head)):
raise DistutilsFileError(
- "could not create '{}': {}".format(head, exc.args[-1])
+ "could not create '%s': %s" % (head, exc.args[-1])
)
created_dirs.append(head)
mkpath(dir, mode, verbose=verbose, dry_run=dry_run)
-def copy_tree( # noqa: C901
+def copy_tree(
src,
dst,
preserve_mode=1,
names = []
else:
raise DistutilsFileError(
- "error listing files in '{}': {}".format(src, e.strerror)
+ "error listing files in '%s': %s" % (src, e.strerror)
)
if not dry_run:
except ImportError:
warnings = None
-from distutils.errors import (
- DistutilsOptionError,
- DistutilsModuleError,
- DistutilsArgError,
- DistutilsClassError,
-)
+from distutils.errors import *
from distutils.fancy_getopt import FancyGetopt, translate_longopt
from distutils.util import check_environ, strtobool, rfc822_escape
from distutils import log
# -- Creation/initialization methods -------------------------------
- def __init__(self, attrs=None): # noqa: C901
+ def __init__(self, attrs=None):
"""Construct a new Distribution instance: initialize all the
attributes of a Distribution, and then use 'attrs' (a dictionary
mapping attribute names to values) to assign some of those
return files
- def parse_config_files(self, filenames=None): # noqa: C901
+ def parse_config_files(self, filenames=None):
from configparser import ConfigParser
# Ignore install directory options if we have a venv
),
]
- def _parse_command_opts(self, parser, args): # noqa: C901
+ def _parse_command_opts(self, parser, args):
"""Parse the command-line options for a single command.
'parser' must be a FancyGetopt instance; 'args' must be the list
of arguments, starting with the current command (whose options
return klass
for pkgname in self.get_command_packages():
- module_name = "{}.{}".format(pkgname, command)
+ module_name = "%s.%s" % (pkgname, command)
klass_name = command
try:
return cmd_obj
- def _set_command_options(self, command_obj, option_dict=None): # noqa: C901
+ def _set_command_options(self, command_obj, option_dict=None):
"""Set the options for 'command_obj' from 'option_dict'. Basically
this means copying elements of a dictionary ('option_dict') to
attributes of an instance ('command').
self.announce(" setting options for '%s' command:" % command_name)
for (option, (source, value)) in option_dict.items():
if DEBUG:
- self.announce(" {} = {} (from {})".format(option, value, source))
+ self.announce(" %s = %s (from %s)" % (option, value, source))
try:
bool_opts = [translate_longopt(o) for o in command_obj.boolean_options]
except AttributeError:
def maybe_write(header, val):
if val:
- file.write(f"{header}: {val}\n")
+ file.write("{}: {}\n".format(header, val))
# optional fields
maybe_write("Summary", self.get_description())
def _write_list(self, file, name, values):
values = values or []
for value in values:
- file.write('{}: {}\n'.format(name, value))
+ file.write('%s: %s\n' % (name, value))
# -- Metadata query methods ----------------------------------------
return self.version or "0.0.0"
def get_fullname(self):
- return "{}-{}".format(self.get_name(), self.get_version())
+ return "%s-%s" % (self.get_name(), self.get_version())
def get_author(self):
return self.author
warnings.warn(msg)
def __repr__(self):
- return '<{}.{}({!r}) at {:#x}>'.format(
+ return '<%s.%s(%r) at %#x>' % (
self.__class__.__module__,
self.__class__.__qualname__,
self.name,
)
-def read_setup_file(filename): # noqa: C901
+def read_setup_file(filename):
"""Reads a Setup file and returns Extension instances."""
from distutils.sysconfig import parse_makefile, expand_makefile_vars, _variable_rx
* options set attributes of a passed-in object
"""
-import sys
-import string
-import re
+import sys, string, re
import getopt
-from distutils.errors import DistutilsGetoptError, DistutilsArgError
+from distutils.errors import *
# Much like command_re in distutils.core, this is close to but not quite
# the same as a Python NAME -- except, in the spirit of most GNU
longopt_re = re.compile(r'^%s$' % longopt_pat)
# For recognizing "negative alias" options, eg. "quiet=!verbose"
-neg_alias_re = re.compile("^({})=!({})$".format(longopt_pat, longopt_pat))
+neg_alias_re = re.compile("^(%s)=!(%s)$" % (longopt_pat, longopt_pat))
# This is used to translate long options to legitimate Python identifiers
# (for use as attributes of some object).
self._check_alias_dict(negative_alias, "negative alias")
self.negative_alias = negative_alias
- def _grok_option_table(self): # noqa: C901
+ def _grok_option_table(self):
"""Populate the various data structures that keep tabs on the
option table. Called by 'getopt()' before it can do anything
worthwhile.
else:
# the option table is part of the code, so simply
# assert that it is correct
- raise ValueError("invalid option tuple: {!r}".format(option))
+ raise ValueError("invalid option tuple: %r" % (option,))
# Type- and value-check the option names
if not isinstance(long, str) or len(long) < 2:
self.short_opts.append(short)
self.short2long[short[0]] = long
- def getopt(self, args=None, object=None): # noqa: C901
+ def getopt(self, args=None, object=None):
"""Parse command-line options in args. Store as attributes on object.
If 'args' is None or not supplied, uses 'sys.argv[1:]'. If
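# A minimal usage sketch (illustrative): a 3-tuple option table parsed onto a
# plain holder object; leftover positional arguments are returned.
from distutils.fancy_getopt import FancyGetopt
class _Opts:
    pass
opts = _Opts()
leftover = FancyGetopt([("verbose", "v", "run verbosely")]).getopt(["--verbose", "extra"], opts)
print(opts.verbose, leftover)  # -> 1 ['extra']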
else:
return self.option_order
- def generate_help(self, header=None): # noqa: C901
+ def generate_help(self, header=None):
"""Generate help text (a list of strings, one per suggested line of
output) from the option table for this FancyGetopt object.
"""
for option in self.option_table:
long = option[0]
short = option[1]
- ell = len(long)
+ l = len(long)
if long[-1] == '=':
- ell = ell - 1
+ l = l - 1
if short is not None:
- ell = ell + 5 # " (-x)" where short == 'x'
- if ell > max_opt:
- max_opt = ell
+ l = l + 5 # " (-x)" where short == 'x'
+ if l > max_opt:
+ max_opt = l
opt_width = max_opt + 2 + 2 + 2 # room for indent + dashes + gutter
# Case 2: we have a short option, so we have to include it
# just after the long option
else:
- opt_names = "{} (-{})".format(long, short)
+ opt_names = "%s (-%s)" % (long, short)
if text:
lines.append(" --%-*s %s" % (max_opt, opt_names, text[0]))
else:
lines.append(" --%-*s" % opt_names)
- for ell in text[1:]:
- lines.append(big_indent + ell)
+ for l in text[1:]:
+ lines.append(big_indent + l)
return lines
def print_help(self, header=None, file=None):
cur_len = 0 # length of current line
while chunks:
- ell = len(chunks[0])
- if cur_len + ell <= width: # can squeeze (at least) this chunk in
+ l = len(chunks[0])
+ if cur_len + l <= width: # can squeeze (at least) this chunk in
cur_line.append(chunks[0])
del chunks[0]
- cur_len = cur_len + ell
+ cur_len = cur_len + l
else: # this line is full
# drop last chunk if all space
if cur_line and cur_line[-1][0] == ' ':
_copy_action = {None: 'copying', 'hard': 'hard linking', 'sym': 'symbolically linking'}
-def _copy_file_contents(src, dst, buffer_size=16 * 1024): # noqa: C901
+def _copy_file_contents(src, dst, buffer_size=16 * 1024):
"""Copy the file 'src' to 'dst'; both must be filenames. Any error
opening either file, reading from 'src', or writing to 'dst', raises
DistutilsFileError. Data is read/written in chunks of 'buffer_size'
try:
fsrc = open(src, 'rb')
except OSError as e:
- raise DistutilsFileError("could not open '{}': {}".format(src, e.strerror))
+ raise DistutilsFileError("could not open '%s': %s" % (src, e.strerror))
if os.path.exists(dst):
try:
os.unlink(dst)
except OSError as e:
raise DistutilsFileError(
- "could not delete '{}': {}".format(dst, e.strerror)
+ "could not delete '%s': %s" % (dst, e.strerror)
)
try:
fdst = open(dst, 'wb')
except OSError as e:
- raise DistutilsFileError(
- "could not create '{}': {}".format(dst, e.strerror)
- )
+ raise DistutilsFileError("could not create '%s': %s" % (dst, e.strerror))
while True:
try:
buf = fsrc.read(buffer_size)
except OSError as e:
raise DistutilsFileError(
- "could not read from '{}': {}".format(src, e.strerror)
+ "could not read from '%s': %s" % (src, e.strerror)
)
if not buf:
fdst.write(buf)
except OSError as e:
raise DistutilsFileError(
- "could not write to '{}': {}".format(dst, e.strerror)
+ "could not write to '%s': %s" % (dst, e.strerror)
)
finally:
if fdst:
fsrc.close()
-def copy_file( # noqa: C901
+def copy_file(
src,
dst,
preserve_mode=1,
# XXX I suspect this is Unix-specific -- need porting help!
-def move_file(src, dst, verbose=1, dry_run=0): # noqa: C901
+def move_file(src, dst, verbose=1, dry_run=0):
"""Move a file 'src' to 'dst'. If 'dst' is a directory, the file will
be moved into it with the same name; otherwise, 'src' is just renamed
dst = os.path.join(dst, basename(src))
elif exists(dst):
raise DistutilsFileError(
- "can't move '{}': destination '{}' already exists".format(src, dst)
+ "can't move '%s': destination '%s' already exists" % (src, dst)
)
if not isdir(dirname(dst)):
raise DistutilsFileError(
- "can't move '{}': destination '{}' not a valid path".format(src, dst)
+ "can't move '%s': destination '%s' not a valid path" % (src, dst)
)
copy_it = False
if num == errno.EXDEV:
copy_it = True
else:
- raise DistutilsFileError(
- "couldn't move '{}' to '{}': {}".format(src, dst, msg)
- )
+ raise DistutilsFileError("couldn't move '%s' to '%s': %s" % (src, dst, msg))
if copy_it:
copy_file(src, dst, verbose=verbose)
return (action, patterns, dir, dir_pattern)
- def process_template_line(self, line): # noqa: C901
+ def process_template_line(self, line):
# Parse the line: split it up, make sure the right number of words
# is there, and return the relevant words. 'action' is always
# defined: it's the first word of the line. Which of the other
)
elif action == 'recursive-include':
- self.debug_print("recursive-include {} {}".format(dir, ' '.join(patterns)))
+ self.debug_print("recursive-include %s %s" % (dir, ' '.join(patterns)))
for pattern in patterns:
if not self.include_pattern(pattern, prefix=dir):
msg = (
log.warn(msg, pattern, dir)
elif action == 'recursive-exclude':
- self.debug_print("recursive-exclude {} {}".format(dir, ' '.join(patterns)))
+ self.debug_print("recursive-exclude %s %s" % (dir, ' '.join(patterns)))
for pattern in patterns:
if not self.exclude_pattern(pattern, prefix=dir):
log.warn(
if os.sep == '\\':
sep = r'\\'
pattern_re = pattern_re[len(start) : len(pattern_re) - len(end)]
- pattern_re = r'{}\A{}{}.*{}{}'.format(start, prefix_re, sep, pattern_re, end)
+ pattern_re = r'%s\A%s%s.*%s%s' % (start, prefix_re, sep, pattern_re, end)
else: # no prefix -- respect anchor flag
if anchor:
- pattern_re = r'{}\A{}'.format(start, pattern_re[len(start) :])
+ pattern_re = r'%s\A%s' % (start, pattern_re[len(start) :])
return re.compile(pattern_re)
except RegError:
continue
key = RegEnumKey(h, 0)
- d = Reg.get_value(base, r"{}\{}".format(p, key))
+ d = Reg.get_value(base, r"%s\%s" % (p, key))
self.macros["$(FrameworkVersion)"] = d["version"]
def sub(self, s):
raise DistutilsPlatformError("Unable to find vcvarsall.bat")
log.debug("Calling 'vcvarsall.bat %s' (version=%s)", arch, version)
popen = subprocess.Popen(
- '"{}" {} & set'.format(vcvarsall, arch),
+ '"%s" %s & set' % (vcvarsall, arch),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
self.__arch = None # deprecated name
self.initialized = False
- def initialize(self, plat_name=None): # noqa: C901
+ def initialize(self, plat_name=None):
# multi-init means we would need to check platform same each time...
assert not self.initialized, "don't init multiple times"
if self.__version < 8.0:
# sanity check for platforms to prevent obscure errors later.
ok_plats = 'win32', 'win-amd64'
if plat_name not in ok_plats:
- raise DistutilsPlatformError(
- "--plat-name must be one of {}".format(ok_plats)
- )
+ raise DistutilsPlatformError("--plat-name must be one of %s" % (ok_plats,))
if (
"DISTUTILS_USE_SDK" in os.environ
obj_names.append(os.path.join(output_dir, base + self.obj_extension))
return obj_names
- def compile( # noqa: C901
+ def compile(
self,
sources,
output_dir=None,
continue
else:
# how to handle this file?
- raise CompileError(
- "Don't know how to compile {} to {}".format(src, obj)
- )
+ raise CompileError("Don't know how to compile %s to %s" % (src, obj))
output_opt = "/Fo" + obj
try:
else:
log.debug("skipping %s (up-to-date)", output_filename)
- def link( # noqa: C901
+ def link(
self,
target_desc,
objects,
mfinfo = self.manifest_get_embed_info(target_desc, ld_args)
if mfinfo is not None:
mffilename, mfid = mfinfo
- out_arg = '-outputresource:{};{}'.format(output_filename, mfid)
+ out_arg = '-outputresource:%s;%s' % (output_filename, mfid)
try:
self.spawn(['mt.exe', '-nologo', '-manifest', mffilename, out_arg])
except DistutilsExecError as msg:
# hacked by Robin Becker and Thomas Heller to do a better job of
# finding DevStudio (through the registry)
-import sys
-import os
+import sys, os
from distutils.errors import (
DistutilsExecError,
DistutilsPlatformError,
self.set_macro("FrameworkSDKDir", net, "sdkinstallrootv1.1")
else:
self.set_macro("FrameworkSDKDir", net, "sdkinstallroot")
- except KeyError:
+ except KeyError as exc: #
raise DistutilsPlatformError(
"""Python was built with Visual Studio 2003;
extensions must be built with a compiler than can generate compatible binaries.
except RegError:
continue
key = RegEnumKey(h, 0)
- d = read_values(base, r"{}\{}".format(p, key))
+ d = read_values(base, r"%s\%s" % (p, key))
self.macros["$(FrameworkVersion)"] = d["version"]
def sub(self, s):
obj_names.append(os.path.join(output_dir, base + self.obj_extension))
return obj_names
- def compile( # noqa: C901
+ def compile(
self,
sources,
output_dir=None,
continue
else:
# how to handle this file?
- raise CompileError(
- "Don't know how to compile {} to {}".format(src, obj)
- )
+ raise CompileError("Don't know how to compile %s to %s" % (src, obj))
output_opt = "/Fo" + obj
try:
else:
log.debug("skipping %s (up-to-date)", output_filename)
- def link( # noqa: C901
+ def link(
self,
target_desc,
objects,
path = path + " dirs"
if self.__version >= 7:
- key = r"{}\{:0.1f}\VC\VC_OBJECTS_PLATFORM_INFO\Win32\Directories".format(
+ key = r"%s\%0.1f\VC\VC_OBJECTS_PLATFORM_INFO\Win32\Directories" % (
self.__root,
self.__version,
)
from distutils.msvc9compiler import MSVCCompiler
# get_build_architecture not really relevant now we support cross-compile
- from distutils.msvc9compiler import MacroExpander # noqa: F811
+ from distutils.msvc9compiler import MacroExpander
return _aix_support.aix_platform()
except ImportError:
pass
- return "{}-{}.{}".format(osname, version, release)
+ return "%s-%s.%s" % (osname, version, release)
from distutils import log
-def spawn(cmd, search_path=1, verbose=0, dry_run=0, env=None): # noqa: C901
+def spawn(cmd, search_path=1, verbose=0, dry_run=0, env=None):
"""Run another program, specified as a command list 'cmd', in a new process.
'cmd' is just the argument list for the new process, ie.
except OSError as exc:
if not DEBUG:
cmd = cmd[0]
- raise DistutilsExecError(
- "command {!r} failed: {}".format(cmd, exc.args[-1])
- ) from exc
+ raise DistutilsExecError("command %r failed: %s" % (cmd, exc.args[-1])) from exc
if exitcode:
if not DEBUG:
cmd = cmd[0]
raise DistutilsExecError(
- "command {!r} failed with exit code {}".format(cmd, exitcode)
+ "command %r failed with exit code %s" % (cmd, exitcode)
)
)
-def customize_compiler(compiler): # noqa: C901
+def customize_compiler(compiler):
"""Do any platform-specific customization of a CCompiler instance.
Mainly needed on Unix, so we can plug in the information that
_findvar2_rx = re.compile(r"\${([A-Za-z][A-Za-z0-9_]*)}")
-def parse_makefile(fn, g=None): # noqa: C901
+def parse_makefile(fn, g=None):
"""Parse a Makefile-style file.
A dictionary containing name/value pairs is returned. If an
-"""
-Test suite for distutils.
+"""Test suite for distutils.
+
+This test suite consists of a collection of test modules in the
+distutils.tests package. Each test module has a name starting with
+'test' and contains a function test_suite(). The function is expected
+to return an initialized unittest.TestSuite instance.
Tests for the command classes in the distutils.command package are
included in distutils.tests as well, instead of using a separate
distutils.command.tests package, since command identification is done
by import rather than matching pre-defined names.
+
"""
+
+import os
+import sys
+import unittest
+from test.support import run_unittest
+
+from .py38compat import save_restore_warnings_filters
+
+
+here = os.path.dirname(__file__) or os.curdir
+
+
+def test_suite():
+ suite = unittest.TestSuite()
+ for fn in os.listdir(here):
+ if fn.startswith("test") and fn.endswith(".py"):
+ modname = "distutils.tests." + fn[:-3]
+ # bpo-40055: Save/restore warnings filters to leave them unchanged.
+ # Importing tests imports docutils which imports pkg_resources
+ # which adds a warnings filter.
+ with save_restore_warnings_filters():
+ __import__(modname)
+ module = sys.modules[modname]
+ suite.addTest(module.test_suite())
+ return suite
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
)
+# From Python 3.9
+@contextlib.contextmanager
+def _save_restore_warnings_filters():
+ old_filters = warnings.filters[:]
+ try:
+ yield
+ finally:
+ warnings.filters[:] = old_filters
+
+
try:
- from test.support.import_helper import (
- DirsOnSysPath,
- CleanImport,
- )
+ from test.support.warnings_helper import save_restore_warnings_filters
except (ModuleNotFoundError, ImportError):
- from test.support import (
- DirsOnSysPath,
- CleanImport,
- )
+ save_restore_warnings_filters = _save_restore_warnings_filters
if sys.version_info < (3, 9):
import sys
import shutil
import tempfile
+import unittest
import sysconfig
-import itertools
+from copy import deepcopy
-import pytest
+from . import py38compat as os_helper
+from distutils import log
from distutils.log import DEBUG, INFO, WARN, ERROR, FATAL
from distutils.core import Distribution
-@pytest.mark.usefixtures('distutils_logging_silencer')
-class LoggingSilencer:
+class LoggingSilencer(object):
+ def setUp(self):
+ super().setUp()
+ self.threshold = log.set_threshold(log.FATAL)
+ # catching warnings
+ # when log will be replaced by logging
+ # we won't need such monkey-patch anymore
+ self._old_log = log.Log._log
+ log.Log._log = self._log
+ self.logs = []
+
+ def tearDown(self):
+ log.set_threshold(self.threshold)
+ log.Log._log = self._old_log
+ super().tearDown()
+
def _log(self, level, msg, args):
if level not in (DEBUG, INFO, WARN, ERROR, FATAL):
raise ValueError('%s wrong log level' % str(level))
self.logs = []
-@pytest.mark.usefixtures('distutils_managed_tempdir')
-class TempdirManager:
- """
- Mix-in class that handles temporary directories for test cases.
+class TempdirManager(object):
+ """Mix-in class that handles temporary directories for test cases.
+
+ This is intended to be used with unittest.TestCase.
"""
+ def setUp(self):
+ super().setUp()
+ self.old_cwd = os.getcwd()
+ self.tempdirs = []
+
+ def tearDown(self):
+ # Restore working dir, for Solaris and derivatives, where rmdir()
+ # on the current directory fails.
+ os.chdir(self.old_cwd)
+ super().tearDown()
+ while self.tempdirs:
+ tmpdir = self.tempdirs.pop()
+ os_helper.rmtree(tmpdir)
+
def mkdtemp(self):
"""Create a temporary directory that will be cleaned up.
"""Class to store options for retrieval via set_undefined_options()."""
def __init__(self, **kwargs):
- vars(self).update(kwargs)
+ for kw, val in kwargs.items():
+ setattr(self, kw, val)
def ensure_finalized(self):
pass
+class EnvironGuard(object):
+ def setUp(self):
+ super(EnvironGuard, self).setUp()
+ self.old_environ = deepcopy(os.environ)
+
+ def tearDown(self):
+ for key, value in self.old_environ.items():
+ if os.environ.get(key) != value:
+ os.environ[key] = value
+
+ for key in tuple(os.environ.keys()):
+ if key not in self.old_environ:
+ del os.environ[key]
+
+ super(EnvironGuard, self).tearDown()
+
+
def copy_xxmodule_c(directory):
"""Helper for tests that need the xxmodule.c source file.
If the source file can be found, it will be copied to *directory*. If not,
the test will be skipped. Errors during copy are not caught.
"""
- shutil.copy(_get_xxmodule_path(), os.path.join(directory, 'xxmodule.c'))
+ filename = _get_xxmodule_path()
+ if filename is None:
+ raise unittest.SkipTest(
+ 'cannot find xxmodule.c (test must run in ' 'the python build dir)'
+ )
+ shutil.copy(filename, directory)
def _get_xxmodule_path():
- source_name = 'xxmodule.c' if sys.version_info > (3, 9) else 'xxmodule-3.8.c'
- return os.path.join(os.path.dirname(__file__), source_name)
+ srcdir = sysconfig.get_config_var('srcdir')
+ candidates = [
+ # use installed copy if available
+ os.path.join(os.path.dirname(__file__), 'xxmodule.c'),
+ # otherwise try using copy from build directory
+ os.path.join(srcdir, 'Modules', 'xxmodule.c'),
+ # srcdir mysteriously can be $srcdir/Lib/distutils/tests when
+ # this file is run from its parent directory, so walk up the
+ # tree to find the real srcdir
+ os.path.join(srcdir, '..', '..', '..', 'Modules', 'xxmodule.c'),
+ ]
+ for path in candidates:
+ if os.path.exists(path):
+ return path
def fixup_build_ext(cmd):
else:
name, equals, value = runshared.partition('=')
cmd.library_dirs = [d for d in value.split(os.pathsep) if d]
-
-
-def combine_markers(cls):
- """
- pytest will honor markers as found on the class, but when
- markers are on multiple subclasses, only one appears. Use
- this decorator to combine those markers.
- """
- cls.pytestmark = [
- mark
- for base in itertools.chain([cls], cls.__bases__)
- for mark in getattr(base, 'pytestmark', [])
- ]
- return cls
+# -*- coding: utf-8 -*-
"""Tests for distutils.archive_util."""
+import unittest
import os
import sys
import tarfile
from os.path import splitdrive
import warnings
-import functools
-import operator
-import pathlib
-
-import pytest
from distutils import archive_util
from distutils.archive_util import (
make_archive,
ARCHIVE_FORMATS,
)
-from distutils.spawn import spawn
+from distutils.spawn import find_executable, spawn
from distutils.tests import support
-from test.support import patch
+from test.support import run_unittest, patch
from .unix_compat import require_unix_id, require_uid_0, grp, pwd, UID_0_SUPPORT
from .py38compat import change_cwd
from .py38compat import check_warnings
+try:
+ import zipfile
+
+ ZIP_SUPPORT = True
+except ImportError:
+ ZIP_SUPPORT = find_executable('zip')
+
+try:
+ import zlib
+
+ ZLIB_SUPPORT = True
+except ImportError:
+ ZLIB_SUPPORT = False
+
+try:
+ import bz2
+except ImportError:
+ bz2 = None
+
+try:
+ import lzma
+except ImportError:
+ lzma = None
+
+
def can_fs_encode(filename):
"""
Return True if the filename can be saved in the file system.
return True
-def all_equal(values):
- return functools.reduce(operator.eq, values)
-
-
-def same_drive(*paths):
- return all_equal(pathlib.Path(path).drive for path in paths)
-
-
-class ArchiveUtilTestCase(support.TempdirManager, support.LoggingSilencer):
- @pytest.mark.usefixtures('needs_zlib')
+class ArchiveUtilTestCase(
+ support.TempdirManager, support.LoggingSilencer, unittest.TestCase
+):
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
def test_make_tarball(self, name='archive'):
# creating something to tar
tmpdir = self._create_files()
# trying an uncompressed one
self._make_tarball(tmpdir, name, '.tar', compress=None)
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
def test_make_tarball_gzip(self):
tmpdir = self._create_files()
self._make_tarball(tmpdir, 'archive', '.tar.gz', compress='gzip')
+ @unittest.skipUnless(bz2, 'Need bz2 support to run')
def test_make_tarball_bzip2(self):
- pytest.importorskip('bz2')
tmpdir = self._create_files()
self._make_tarball(tmpdir, 'archive', '.tar.bz2', compress='bzip2')
+ @unittest.skipUnless(lzma, 'Need lzma support to run')
def test_make_tarball_xz(self):
- pytest.importorskip('lzma')
tmpdir = self._create_files()
self._make_tarball(tmpdir, 'archive', '.tar.xz', compress='xz')
- @pytest.mark.skipif("not can_fs_encode('årchiv')")
+ @unittest.skipUnless(
+ can_fs_encode('årchiv'), 'File system cannot handle this filename'
+ )
def test_make_tarball_latin1(self):
"""
Mirror test_make_tarball, except filename contains latin characters.
"""
self.test_make_tarball('årchiv') # note this isn't a real word
- @pytest.mark.skipif("not can_fs_encode('のアーカイブ')")
+ @unittest.skipUnless(
+ can_fs_encode('のアーカイブ'), 'File system cannot handle this filename'
+ )
def test_make_tarball_extended(self):
"""
Mirror test_make_tarball, except filename contains extended
def _make_tarball(self, tmpdir, target_name, suffix, **kwargs):
tmpdir2 = self.mkdtemp()
- if same_drive(tmpdir, tmpdir2):
- pytest.skip("source and target should be on same drive")
+ unittest.skipUnless(
+ splitdrive(tmpdir)[0] == splitdrive(tmpdir2)[0],
+ "source and target should be on same drive",
+ )
base_name = os.path.join(tmpdir2, target_name)
# check if the compressed tarball was created
tarball = base_name + suffix
- assert os.path.exists(tarball)
- assert self._tarinfo(tarball) == self._created_files
+ self.assertTrue(os.path.exists(tarball))
+ self.assertEqual(self._tarinfo(tarball), self._created_files)
def _tarinfo(self, path):
tar = tarfile.open(path)
os.mkdir(os.path.join(dist, 'sub2'))
return tmpdir
- @pytest.mark.usefixtures('needs_zlib')
- @pytest.mark.skipif("not (find_executable('tar') and find_executable('gzip'))")
+ @unittest.skipUnless(
+ find_executable('tar') and find_executable('gzip') and ZLIB_SUPPORT,
+ 'Need the tar and gzip commands and zlib support to run',
+ )
def test_tarfile_vs_tar(self):
tmpdir = self._create_files()
tmpdir2 = self.mkdtemp()
# check if the compressed tarball was created
tarball = base_name + '.tar.gz'
- assert os.path.exists(tarball)
+ self.assertTrue(os.path.exists(tarball))
# now create another tarball using `tar`
tarball2 = os.path.join(tmpdir, 'archive2.tar.gz')
finally:
os.chdir(old_dir)
- assert os.path.exists(tarball2)
+ self.assertTrue(os.path.exists(tarball2))
# let's compare both tarballs
- assert self._tarinfo(tarball) == self._created_files
- assert self._tarinfo(tarball2) == self._created_files
+ self.assertEqual(self._tarinfo(tarball), self._created_files)
+ self.assertEqual(self._tarinfo(tarball2), self._created_files)
# trying an uncompressed one
base_name = os.path.join(tmpdir2, 'archive')
finally:
os.chdir(old_dir)
tarball = base_name + '.tar'
- assert os.path.exists(tarball)
+ self.assertTrue(os.path.exists(tarball))
# now for a dry_run
base_name = os.path.join(tmpdir2, 'archive')
finally:
os.chdir(old_dir)
tarball = base_name + '.tar'
- assert os.path.exists(tarball)
+ self.assertTrue(os.path.exists(tarball))
- @pytest.mark.skipif("not find_executable('compress')")
+ @unittest.skipUnless(
+ find_executable('compress'), 'The compress program is required'
+ )
def test_compress_deprecated(self):
tmpdir = self._create_files()
base_name = os.path.join(self.mkdtemp(), 'archive')
- # using compress and testing the DeprecationWarning
+ # using compress and testing the PendingDeprecationWarning
old_dir = os.getcwd()
os.chdir(tmpdir)
try:
finally:
os.chdir(old_dir)
tarball = base_name + '.tar.Z'
- assert os.path.exists(tarball)
- assert len(w.warnings) == 1
+ self.assertTrue(os.path.exists(tarball))
+ self.assertEqual(len(w.warnings), 1)
# same test with dry_run
os.remove(tarball)
make_tarball(base_name, 'dist', compress='compress', dry_run=True)
finally:
os.chdir(old_dir)
- assert not os.path.exists(tarball)
- assert len(w.warnings) == 1
+ self.assertFalse(os.path.exists(tarball))
+ self.assertEqual(len(w.warnings), 1)
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(
+ ZIP_SUPPORT and ZLIB_SUPPORT, 'Need zip and zlib support to run'
+ )
def test_make_zipfile(self):
- zipfile = pytest.importorskip('zipfile')
# creating something to tar
tmpdir = self._create_files()
base_name = os.path.join(self.mkdtemp(), 'archive')
# check if the compressed tarball was created
tarball = base_name + '.zip'
- assert os.path.exists(tarball)
+ self.assertTrue(os.path.exists(tarball))
with zipfile.ZipFile(tarball) as zf:
- assert sorted(zf.namelist()) == self._zip_created_files
+ self.assertEqual(sorted(zf.namelist()), self._zip_created_files)
+ @unittest.skipUnless(ZIP_SUPPORT, 'Need zip support to run')
def test_make_zipfile_no_zlib(self):
- zipfile = pytest.importorskip('zipfile')
patch(self, archive_util.zipfile, 'zlib', None) # force zlib ImportError
called = []
make_zipfile(base_name, 'dist')
tarball = base_name + '.zip'
- assert called == [((tarball, "w"), {'compression': zipfile.ZIP_STORED})]
- assert os.path.exists(tarball)
+ self.assertEqual(
+ called, [((tarball, "w"), {'compression': zipfile.ZIP_STORED})]
+ )
+ self.assertTrue(os.path.exists(tarball))
with zipfile.ZipFile(tarball) as zf:
- assert sorted(zf.namelist()) == self._zip_created_files
+ self.assertEqual(sorted(zf.namelist()), self._zip_created_files)
def test_check_archive_formats(self):
- assert check_archive_formats(['gztar', 'xxx', 'zip']) == 'xxx'
- assert (
+ self.assertEqual(check_archive_formats(['gztar', 'xxx', 'zip']), 'xxx')
+ self.assertIsNone(
check_archive_formats(['gztar', 'bztar', 'xztar', 'ztar', 'tar', 'zip'])
- is None
)
def test_make_archive(self):
tmpdir = self.mkdtemp()
base_name = os.path.join(tmpdir, 'archive')
- with pytest.raises(ValueError):
- make_archive(base_name, 'xxx')
+ self.assertRaises(ValueError, make_archive, base_name, 'xxx')
def test_make_archive_cwd(self):
current_dir = os.getcwd()
try:
try:
make_archive('xxx', 'xxx', root_dir=self.mkdtemp())
- except Exception:
+ except:
pass
- assert os.getcwd() == current_dir
+ self.assertEqual(os.getcwd(), current_dir)
finally:
del ARCHIVE_FORMATS['xxx']
base_dir = self._create_files()
base_name = os.path.join(self.mkdtemp(), 'archive')
res = make_archive(base_name, 'tar', base_dir, 'dist')
- assert os.path.exists(res)
- assert os.path.basename(res) == 'archive.tar'
- assert self._tarinfo(res) == self._created_files
+ self.assertTrue(os.path.exists(res))
+ self.assertEqual(os.path.basename(res), 'archive.tar')
+ self.assertEqual(self._tarinfo(res), self._created_files)
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
def test_make_archive_gztar(self):
base_dir = self._create_files()
base_name = os.path.join(self.mkdtemp(), 'archive')
res = make_archive(base_name, 'gztar', base_dir, 'dist')
- assert os.path.exists(res)
- assert os.path.basename(res) == 'archive.tar.gz'
- assert self._tarinfo(res) == self._created_files
+ self.assertTrue(os.path.exists(res))
+ self.assertEqual(os.path.basename(res), 'archive.tar.gz')
+ self.assertEqual(self._tarinfo(res), self._created_files)
+ @unittest.skipUnless(bz2, 'Need bz2 support to run')
def test_make_archive_bztar(self):
- pytest.importorskip('bz2')
base_dir = self._create_files()
base_name = os.path.join(self.mkdtemp(), 'archive')
res = make_archive(base_name, 'bztar', base_dir, 'dist')
- assert os.path.exists(res)
- assert os.path.basename(res) == 'archive.tar.bz2'
- assert self._tarinfo(res) == self._created_files
+ self.assertTrue(os.path.exists(res))
+ self.assertEqual(os.path.basename(res), 'archive.tar.bz2')
+ self.assertEqual(self._tarinfo(res), self._created_files)
+ @unittest.skipUnless(lzma, 'Need xz support to run')
def test_make_archive_xztar(self):
- pytest.importorskip('lzma')
base_dir = self._create_files()
base_name = os.path.join(self.mkdtemp(), 'archive')
res = make_archive(base_name, 'xztar', base_dir, 'dist')
- assert os.path.exists(res)
- assert os.path.basename(res) == 'archive.tar.xz'
- assert self._tarinfo(res) == self._created_files
+ self.assertTrue(os.path.exists(res))
+ self.assertEqual(os.path.basename(res), 'archive.tar.xz')
+ self.assertEqual(self._tarinfo(res), self._created_files)
def test_make_archive_owner_group(self):
# testing make_archive with owner and group, with various combinations
res = make_archive(
base_name, 'zip', root_dir, base_dir, owner=owner, group=group
)
- assert os.path.exists(res)
+ self.assertTrue(os.path.exists(res))
res = make_archive(base_name, 'zip', root_dir, base_dir)
- assert os.path.exists(res)
+ self.assertTrue(os.path.exists(res))
res = make_archive(
base_name, 'tar', root_dir, base_dir, owner=owner, group=group
)
- assert os.path.exists(res)
+ self.assertTrue(os.path.exists(res))
res = make_archive(
base_name, 'tar', root_dir, base_dir, owner='kjhkjhkjg', group='oihohoh'
)
- assert os.path.exists(res)
+ self.assertTrue(os.path.exists(res))
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, "Requires zlib")
@require_unix_id
@require_uid_0
def test_tarfile_root_owner(self):
os.chdir(old_dir)
# check if the compressed tarball was created
- assert os.path.exists(archive_name)
+ self.assertTrue(os.path.exists(archive_name))
# now checks the rights
archive = tarfile.open(archive_name)
try:
for member in archive.getmembers():
- assert member.uid == 0
- assert member.gid == 0
+ self.assertEqual(member.uid, 0)
+ self.assertEqual(member.gid, 0)
finally:
archive.close()
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(ArchiveUtilTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.command.bdist."""
import os
+import unittest
+from test.support import run_unittest
import warnings
from distutils.command.bdist import bdist
from distutils.tests import support
-class TestBuild(support.TempdirManager):
+class BuildTestCase(support.TempdirManager, unittest.TestCase):
def test_formats(self):
# let's create a command and make sure
# we can set the format
cmd = bdist(dist)
cmd.formats = ['msi']
cmd.ensure_finalized()
- assert cmd.formats == ['msi']
+ self.assertEqual(cmd.formats, ['msi'])
# what formats does bdist offer?
formats = [
'zip',
'ztar',
]
- found = sorted(cmd.format_commands)
- assert found == formats
+ found = sorted(cmd.format_command)
+ self.assertEqual(found, formats)
def test_skip_build(self):
# bug #10946: bdist --skip-build should trickle down to subcommands
if getattr(subcmd, '_unsupported', False):
# command is not supported on this build
continue
- assert subcmd.skip_build, '%s should take --skip-build from bdist' % name
+ self.assertTrue(
+ subcmd.skip_build, '%s should take --skip-build from bdist' % name
+ )
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(BuildTestCase)
+
+
+if __name__ == '__main__':
+ run_unittest(test_suite())
import os
import sys
import zipfile
-
-import pytest
+import unittest
+from test.support import run_unittest
from distutils.core import Distribution
from distutils.command.bdist_dumb import bdist_dumb
"""
+try:
+ import zlib
+
+ ZLIB_SUPPORT = True
+except ImportError:
+ ZLIB_SUPPORT = False
-@support.combine_markers
-@pytest.mark.usefixtures('save_env')
-@pytest.mark.usefixtures('save_argv')
-@pytest.mark.usefixtures('save_cwd')
-class TestBuildDumb(
+
+class BuildDumbTestCase(
support.TempdirManager,
support.LoggingSilencer,
+ support.EnvironGuard,
+ unittest.TestCase,
):
- @pytest.mark.usefixtures('needs_zlib')
+ def setUp(self):
+ super(BuildDumbTestCase, self).setUp()
+ self.old_location = os.getcwd()
+ self.old_sys_argv = sys.argv, sys.argv[:]
+
+ def tearDown(self):
+ os.chdir(self.old_location)
+ sys.argv = self.old_sys_argv[0]
+ sys.argv[:] = self.old_sys_argv[1]
+ super(BuildDumbTestCase, self).tearDown()
+
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
def test_simple_built(self):
# let's create a simple package
# see what we have
dist_created = os.listdir(os.path.join(pkg_dir, 'dist'))
- base = "{}.{}.zip".format(dist.get_fullname(), cmd.plat_name)
+ base = "%s.%s.zip" % (dist.get_fullname(), cmd.plat_name)
- assert dist_created == [base]
+ self.assertEqual(dist_created, [base])
# now let's check what we have in the zip file
fp = zipfile.ZipFile(os.path.join('dist', base))
wanted = ['foo-0.1-py%s.%s.egg-info' % sys.version_info[:2], 'foo.py']
if not sys.dont_write_bytecode:
wanted.append('foo.%s.pyc' % sys.implementation.cache_tag)
- assert contents == sorted(wanted)
+ self.assertEqual(contents, sorted(wanted))
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(BuildDumbTestCase)
+
+
+if __name__ == '__main__':
+ run_unittest(test_suite())
"""Tests for distutils.command.bdist_msi."""
-import pytest
-
+import sys
+import unittest
+from test.support import run_unittest
from distutils.tests import support
from .py38compat import check_warnings
-pytest.importorskip('msilib')
-
-
-class TestBDistMSI(support.TempdirManager, support.LoggingSilencer):
+@unittest.skipUnless(sys.platform == 'win32', 'these tests require Windows')
+class BDistMSITestCase(
+ support.TempdirManager, support.LoggingSilencer, unittest.TestCase
+):
def test_minimal(self):
# minimal test XXX need more tests
from distutils.command.bdist_msi import bdist_msi
with check_warnings(("", DeprecationWarning)):
cmd = bdist_msi(dist)
cmd.ensure_finalized()
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(BDistMSITestCase)
+
+
+if __name__ == '__main__':
+ run_unittest(test_suite())
"""Tests for distutils.command.bdist_rpm."""
+import unittest
import sys
import os
-
-import pytest
+from test.support import run_unittest
from distutils.core import Distribution
from distutils.command.bdist_rpm import bdist_rpm
from distutils.tests import support
-from distutils.spawn import find_executable # noqa: F401
+from distutils.spawn import find_executable
from .py38compat import requires_zlib
"""
-@pytest.fixture(autouse=True)
-def sys_executable_encodable():
- try:
- sys.executable.encode('UTF-8')
- except UnicodeEncodeError:
- pytest.skip("sys.executable is not encodable to UTF-8")
-
-
-mac_woes = pytest.mark.skipif(
- "not sys.platform.startswith('linux')",
- reason='spurious sdtout/stderr output under macOS',
-)
-
-
-@pytest.mark.usefixtures('save_env')
-@pytest.mark.usefixtures('save_argv')
-@pytest.mark.usefixtures('save_cwd')
-class TestBuildRpm(
+class BuildRpmTestCase(
support.TempdirManager,
+ support.EnvironGuard,
support.LoggingSilencer,
+ unittest.TestCase,
):
- @mac_woes
+ def setUp(self):
+ try:
+ sys.executable.encode("UTF-8")
+ except UnicodeEncodeError:
+ raise unittest.SkipTest("sys.executable is not encodable to UTF-8")
+
+ super(BuildRpmTestCase, self).setUp()
+ self.old_location = os.getcwd()
+ self.old_sys_argv = sys.argv, sys.argv[:]
+
+ def tearDown(self):
+ os.chdir(self.old_location)
+ sys.argv = self.old_sys_argv[0]
+ sys.argv[:] = self.old_sys_argv[1]
+ super(BuildRpmTestCase, self).tearDown()
+
+ # XXX I am unable yet to make this test work without
+ # spurious stdout/stderr output under Mac OS X
+ @unittest.skipUnless(
+ sys.platform.startswith('linux'), 'spurious stdout/stderr output under Mac OS X'
+ )
@requires_zlib()
- @pytest.mark.skipif("not find_executable('rpm')")
- @pytest.mark.skipif("not find_executable('rpmbuild')")
+ @unittest.skipIf(find_executable('rpm') is None, 'the rpm command is not found')
+ @unittest.skipIf(
+ find_executable('rpmbuild') is None, 'the rpmbuild command is not found'
+ )
def test_quiet(self):
# let's create a package
tmp_dir = self.mkdtemp()
cmd.run()
dist_created = os.listdir(os.path.join(pkg_dir, 'dist'))
- assert 'foo-0.1-1.noarch.rpm' in dist_created
+ self.assertIn('foo-0.1-1.noarch.rpm', dist_created)
# bug #2945: upload ignores bdist_rpm files
- assert ('bdist_rpm', 'any', 'dist/foo-0.1-1.src.rpm') in dist.dist_files
- assert ('bdist_rpm', 'any', 'dist/foo-0.1-1.noarch.rpm') in dist.dist_files
+ self.assertIn(('bdist_rpm', 'any', 'dist/foo-0.1-1.src.rpm'), dist.dist_files)
+ self.assertIn(
+ ('bdist_rpm', 'any', 'dist/foo-0.1-1.noarch.rpm'), dist.dist_files
+ )
- @mac_woes
+ # XXX I am unable yet to make this test work without
+ # spurious stdout/stderr output under Mac OS X
+ @unittest.skipUnless(
+ sys.platform.startswith('linux'), 'spurious stdout/stderr output under Mac OS X'
+ )
@requires_zlib()
# http://bugs.python.org/issue1533164
- @pytest.mark.skipif("not find_executable('rpm')")
- @pytest.mark.skipif("not find_executable('rpmbuild')")
+ @unittest.skipIf(find_executable('rpm') is None, 'the rpm command is not found')
+ @unittest.skipIf(
+ find_executable('rpmbuild') is None, 'the rpmbuild command is not found'
+ )
def test_no_optimize_flag(self):
# let's create a package that breaks bdist_rpm
tmp_dir = self.mkdtemp()
cmd.run()
dist_created = os.listdir(os.path.join(pkg_dir, 'dist'))
- assert 'foo-0.1-1.noarch.rpm' in dist_created
+ self.assertIn('foo-0.1-1.noarch.rpm', dist_created)
# bug #2945: upload ignores bdist_rpm files
- assert ('bdist_rpm', 'any', 'dist/foo-0.1-1.src.rpm') in dist.dist_files
- assert ('bdist_rpm', 'any', 'dist/foo-0.1-1.noarch.rpm') in dist.dist_files
+ self.assertIn(('bdist_rpm', 'any', 'dist/foo-0.1-1.src.rpm'), dist.dist_files)
+ self.assertIn(
+ ('bdist_rpm', 'any', 'dist/foo-0.1-1.noarch.rpm'), dist.dist_files
+ )
os.remove(os.path.join(pkg_dir, 'dist', 'foo-0.1-1.noarch.rpm'))
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(BuildRpmTestCase)
+
+
+if __name__ == '__main__':
+ run_unittest(test_suite())
"""Tests for distutils.command.bdist_wininst."""
-import pytest
+import sys
+import platform
+import unittest
+from test.support import run_unittest
from .py38compat import check_warnings
from distutils.tests import support
-@pytest.mark.skipif("platform.machine() == 'ARM64'")
-@pytest.mark.skipif("bdist_wininst._unsupported")
-class TestBuildWinInst(support.TempdirManager, support.LoggingSilencer):
+@unittest.skipIf(
+ sys.platform == 'win32' and platform.machine() == 'ARM64',
+ 'bdist_wininst is not supported in this install',
+)
+@unittest.skipIf(
+ getattr(bdist_wininst, '_unsupported', False),
+ 'bdist_wininst is not supported in this install',
+)
+class BuildWinInstTestCase(
+ support.TempdirManager, support.LoggingSilencer, unittest.TestCase
+):
def test_get_exe_bytes(self):
# issue5731: command was broken on non-windows platforms
# and make sure it finds it and returns its content
# no matter what platform we have
exe_file = cmd.get_exe_bytes()
- assert len(exe_file) > 10
+ self.assertGreater(len(exe_file), 10)
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(BuildWinInstTestCase)
+
+
+if __name__ == '__main__':
+ run_unittest(test_suite())
"""Tests for distutils.command.build."""
+import unittest
import os
import sys
+from test.support import run_unittest
from distutils.command.build import build
from distutils.tests import support
from sysconfig import get_platform
-class TestBuild(support.TempdirManager, support.LoggingSilencer):
+class BuildTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase):
def test_finalize_options(self):
pkg_dir, dist = self.create_dist()
cmd = build(dist)
cmd.finalize_options()
# if not specified, plat_name gets the current platform
- assert cmd.plat_name == get_platform()
+ self.assertEqual(cmd.plat_name, get_platform())
# build_purelib is build + lib
wanted = os.path.join(cmd.build_base, 'lib')
- assert cmd.build_purelib == wanted
+ self.assertEqual(cmd.build_purelib, wanted)
# build_platlib is 'build/lib.platform-cache_tag[-pydebug]'
# examples:
# build/lib.macosx-10.3-i386-cpython39
- plat_spec = '.{}-{}'.format(cmd.plat_name, sys.implementation.cache_tag)
+ plat_spec = '.%s-%s' % (cmd.plat_name, sys.implementation.cache_tag)
if hasattr(sys, 'gettotalrefcount'):
- assert cmd.build_platlib.endswith('-pydebug')
+ self.assertTrue(cmd.build_platlib.endswith('-pydebug'))
plat_spec += '-pydebug'
wanted = os.path.join(cmd.build_base, 'lib' + plat_spec)
- assert cmd.build_platlib == wanted
+ self.assertEqual(cmd.build_platlib, wanted)
# by default, build_lib = build_purelib
- assert cmd.build_lib == cmd.build_purelib
+ self.assertEqual(cmd.build_lib, cmd.build_purelib)
# build_temp is build/temp.<plat>
wanted = os.path.join(cmd.build_base, 'temp' + plat_spec)
- assert cmd.build_temp == wanted
+ self.assertEqual(cmd.build_temp, wanted)
# build_scripts is build/scripts-x.x
wanted = os.path.join(cmd.build_base, 'scripts-%d.%d' % sys.version_info[:2])
- assert cmd.build_scripts == wanted
+ self.assertEqual(cmd.build_scripts, wanted)
# executable is os.path.normpath(sys.executable)
- assert cmd.executable == os.path.normpath(sys.executable)
+ self.assertEqual(cmd.executable, os.path.normpath(sys.executable))
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(BuildTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.command.build_clib."""
+import unittest
import os
+import sys
-from test.support import missing_compiler_executable
-
-import pytest
+from test.support import run_unittest, missing_compiler_executable
from distutils.command.build_clib import build_clib
from distutils.errors import DistutilsSetupError
from distutils.tests import support
-class TestBuildCLib(support.TempdirManager, support.LoggingSilencer):
+class BuildCLibTestCase(
+ support.TempdirManager, support.LoggingSilencer, unittest.TestCase
+):
def test_check_library_dist(self):
pkg_dir, dist = self.create_dist()
cmd = build_clib(dist)
# 'libraries' option must be a list
- with pytest.raises(DistutilsSetupError):
- cmd.check_library_list('foo')
+ self.assertRaises(DistutilsSetupError, cmd.check_library_list, 'foo')
# each element of 'libraries' must a 2-tuple
- with pytest.raises(DistutilsSetupError):
- cmd.check_library_list(['foo1', 'foo2'])
+ self.assertRaises(DistutilsSetupError, cmd.check_library_list, ['foo1', 'foo2'])
# first element of each tuple in 'libraries'
# must be a string (the library name)
- with pytest.raises(DistutilsSetupError):
- cmd.check_library_list([(1, 'foo1'), ('name', 'foo2')])
+ self.assertRaises(
+ DistutilsSetupError, cmd.check_library_list, [(1, 'foo1'), ('name', 'foo2')]
+ )
# library name may not contain directory separators
- with pytest.raises(DistutilsSetupError):
- cmd.check_library_list(
- [('name', 'foo1'), ('another/name', 'foo2')],
- )
+ self.assertRaises(
+ DistutilsSetupError,
+ cmd.check_library_list,
+ [('name', 'foo1'), ('another/name', 'foo2')],
+ )
# second element of each tuple must be a dictionary (build info)
- with pytest.raises(DistutilsSetupError):
- cmd.check_library_list(
- [('name', {}), ('another', 'foo2')],
- )
+ self.assertRaises(
+ DistutilsSetupError,
+ cmd.check_library_list,
+ [('name', {}), ('another', 'foo2')],
+ )
# those work
libs = [('name', {}), ('name', {'ok': 'good'})]
# "in 'libraries' option 'sources' must be present and must be
# a list of source filenames
cmd.libraries = [('name', {})]
- with pytest.raises(DistutilsSetupError):
- cmd.get_source_files()
+ self.assertRaises(DistutilsSetupError, cmd.get_source_files)
cmd.libraries = [('name', {'sources': 1})]
- with pytest.raises(DistutilsSetupError):
- cmd.get_source_files()
+ self.assertRaises(DistutilsSetupError, cmd.get_source_files)
cmd.libraries = [('name', {'sources': ['a', 'b']})]
- assert cmd.get_source_files() == ['a', 'b']
+ self.assertEqual(cmd.get_source_files(), ['a', 'b'])
cmd.libraries = [('name', {'sources': ('a', 'b')})]
- assert cmd.get_source_files() == ['a', 'b']
+ self.assertEqual(cmd.get_source_files(), ['a', 'b'])
cmd.libraries = [
('name', {'sources': ('a', 'b')}),
('name2', {'sources': ['c', 'd']}),
]
- assert cmd.get_source_files() == ['a', 'b', 'c', 'd']
+ self.assertEqual(cmd.get_source_files(), ['a', 'b', 'c', 'd'])
def test_build_libraries(self):
# build_libraries is also doing a bit of typo checking
lib = [('name', {'sources': 'notvalid'})]
- with pytest.raises(DistutilsSetupError):
- cmd.build_libraries(lib)
+ self.assertRaises(DistutilsSetupError, cmd.build_libraries, lib)
lib = [('name', {'sources': list()})]
cmd.build_libraries(lib)
cmd.include_dirs = 'one-dir'
cmd.finalize_options()
- assert cmd.include_dirs == ['one-dir']
+ self.assertEqual(cmd.include_dirs, ['one-dir'])
cmd.include_dirs = None
cmd.finalize_options()
- assert cmd.include_dirs == []
+ self.assertEqual(cmd.include_dirs, [])
cmd.distribution.libraries = 'WONTWORK'
- with pytest.raises(DistutilsSetupError):
- cmd.finalize_options()
+ self.assertRaises(DistutilsSetupError, cmd.finalize_options)
- @pytest.mark.skipif('platform.system() == "Windows"')
+ @unittest.skipIf(sys.platform == 'win32', "can't test on Windows")
def test_run(self):
pkg_dir, dist = self.create_dist()
cmd = build_clib(dist)
cmd.run()
# let's check the result
- assert 'libfoo.a' in os.listdir(build_temp)
+ self.assertIn('libfoo.a', os.listdir(build_temp))
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(BuildCLibTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
import os
from io import StringIO
import textwrap
-import site
-import contextlib
-import platform
-import tempfile
-import importlib
-import shutil
from distutils.core import Distribution
from distutils.command.build_ext import build_ext
UnknownFileError,
)
+import unittest
from test import support
from . import py38compat as os_helper
-from . import py38compat as import_helper
-import pytest
-import re
-
-
-@pytest.fixture()
-def user_site_dir(request):
- self = request.instance
- self.tmp_dir = self.mkdtemp()
- from distutils.command import build_ext
-
- orig_user_base = site.USER_BASE
-
- site.USER_BASE = self.mkdtemp()
- build_ext.USER_BASE = site.USER_BASE
-
- # bpo-30132: On Windows, a .pdb file may be created in the current
- # working directory. Create a temporary working directory to cleanup
- # everything at the end of the test.
- with os_helper.change_cwd(self.tmp_dir):
- yield
-
- site.USER_BASE = orig_user_base
- build_ext.USER_BASE = orig_user_base
-
-
-@contextlib.contextmanager
-def safe_extension_import(name, path):
- with import_helper.CleanImport(name):
- with extension_redirect(name, path) as new_path:
- with import_helper.DirsOnSysPath(new_path):
- yield
-
-
-@contextlib.contextmanager
-def extension_redirect(mod, path):
- """
- Tests will fail to tear down an extension module if it's been imported.
-
- Before importing, copy the file to a temporary directory that won't
- be cleaned up. Yield the new path.
- """
- if platform.system() != "Windows" and sys.platform != "cygwin":
- yield path
- return
- with import_helper.DirsOnSysPath(path):
- spec = importlib.util.find_spec(mod)
- filename = os.path.basename(spec.origin)
- trash_dir = tempfile.mkdtemp(prefix='deleteme')
- dest = os.path.join(trash_dir, os.path.basename(filename))
- shutil.copy(spec.origin, dest)
- yield trash_dir
- # TODO: can the file be scheduled for deletion?
-
-
-@pytest.mark.usefixtures('user_site_dir')
-class TestBuildExt(TempdirManager, LoggingSilencer):
+from test.support.script_helper import assert_python_ok
+
+# http://bugs.python.org/issue4373
+# Don't load the xx module more than once.
+ALREADY_TESTED = False
+
+
+class BuildExtTestCase(TempdirManager, LoggingSilencer, unittest.TestCase):
+ def setUp(self):
+ # Create a simple test environment
+ super(BuildExtTestCase, self).setUp()
+ self.tmp_dir = self.mkdtemp()
+ import site
+
+ self.old_user_base = site.USER_BASE
+ site.USER_BASE = self.mkdtemp()
+ from distutils.command import build_ext
+
+ build_ext.USER_BASE = site.USER_BASE
+
+ # bpo-30132: On Windows, a .pdb file may be created in the current
+ # working directory. Create a temporary working directory to clean up
+ # everything at the end of the test.
+ change_cwd = os_helper.change_cwd(self.tmp_dir)
+ change_cwd.__enter__()
+ self.addCleanup(change_cwd.__exit__, None, None, None)
+
+ def tearDown(self):
+ import site
+
+ site.USER_BASE = self.old_user_base
+ from distutils.command import build_ext
+
+ build_ext.USER_BASE = self.old_user_base
+ super(BuildExtTestCase, self).tearDown()
+
def build_ext(self, *args, **kwargs):
return build_ext(*args, **kwargs)
def test_build_ext(self):
cmd = support.missing_compiler_executable()
+ if cmd is not None:
+ self.skipTest('The %r command is not found' % cmd)
+ global ALREADY_TESTED
copy_xxmodule_c(self.tmp_dir)
xx_c = os.path.join(self.tmp_dir, 'xxmodule.c')
xx_ext = Extension('xx', [xx_c])
finally:
sys.stdout = old_stdout
- with safe_extension_import('xx', self.tmp_dir):
- self._test_xx()
+ if ALREADY_TESTED:
+ self.skipTest('Already tested in %s' % ALREADY_TESTED)
+ else:
+ ALREADY_TESTED = type(self).__name__
- @staticmethod
- def _test_xx():
- import xx
+ code = textwrap.dedent(
+ """
+ tmp_dir = {self.tmp_dir!r}
- for attr in ('error', 'foo', 'new', 'roj'):
- assert hasattr(xx, attr)
+ import sys
+ import unittest
+ from test import support
- assert xx.foo(2, 5) == 7
- assert xx.foo(13, 15) == 28
- assert xx.new().demo() is None
- if support.HAVE_DOCSTRINGS:
- doc = 'This is a template module just for instruction.'
- assert xx.__doc__ == doc
- assert isinstance(xx.Null(), xx.Null)
- assert isinstance(xx.Str(), xx.Str)
+ sys.path.insert(0, tmp_dir)
+ import xx
+
+ class Tests(unittest.TestCase):
+ def test_xx(self):
+ for attr in ('error', 'foo', 'new', 'roj'):
+ self.assertTrue(hasattr(xx, attr))
+
+ self.assertEqual(xx.foo(2, 5), 7)
+ self.assertEqual(xx.foo(13,15), 28)
+ self.assertEqual(xx.new().demo(), None)
+ if support.HAVE_DOCSTRINGS:
+ doc = 'This is a template module just for instruction.'
+ self.assertEqual(xx.__doc__, doc)
+ self.assertIsInstance(xx.Null(), xx.Null)
+ self.assertIsInstance(xx.Str(), xx.Str)
+
+
+ unittest.main()
+ """.format(
+ **locals()
+ )
+ )
+ assert_python_ok('-c', code)
def test_solaris_enable_shared(self):
dist = Distribution({'name': 'xx'})
_config_vars['Py_ENABLE_SHARED'] = old_var
# make sure we get some library dirs under solaris
- assert len(cmd.library_dirs) > 0
+ self.assertGreater(len(cmd.library_dirs), 0)
def test_user_site(self):
import site
# making sure the user option is there
options = [name for name, short, lable in cmd.user_options]
- assert 'user' in options
+ self.assertIn('user', options)
# setting a value
cmd.user = 1
# see if include_dirs and library_dirs
# were set
- assert lib in cmd.library_dirs
- assert lib in cmd.rpath
- assert incl in cmd.include_dirs
+ self.assertIn(lib, cmd.library_dirs)
+ self.assertIn(lib, cmd.rpath)
+ self.assertIn(incl, cmd.include_dirs)
def test_optional_extension(self):
dist = Distribution({'name': 'xx', 'ext_modules': modules})
cmd = self.build_ext(dist)
cmd.ensure_finalized()
- with pytest.raises((UnknownFileError, CompileError)):
- cmd.run() # should raise an error
+ self.assertRaises(
+ (UnknownFileError, CompileError), cmd.run
+ ) # should raise an error
modules = [Extension('foo', ['xxx'], optional=True)]
dist = Distribution({'name': 'xx', 'ext_modules': modules})
py_include = sysconfig.get_python_inc()
for p in py_include.split(os.path.pathsep):
- assert p in cmd.include_dirs
+ self.assertIn(p, cmd.include_dirs)
plat_py_include = sysconfig.get_python_inc(plat_specific=1)
for p in plat_py_include.split(os.path.pathsep):
- assert p in cmd.include_dirs
+ self.assertIn(p, cmd.include_dirs)
# make sure cmd.libraries is turned into a list
# if it's a string
cmd = self.build_ext(dist)
cmd.libraries = 'my_lib, other_lib lastlib'
cmd.finalize_options()
- assert cmd.libraries == ['my_lib', 'other_lib', 'lastlib']
+ self.assertEqual(cmd.libraries, ['my_lib', 'other_lib', 'lastlib'])
# make sure cmd.library_dirs is turned into a list
# if it's a string
cmd = self.build_ext(dist)
cmd.library_dirs = 'my_lib_dir%sother_lib_dir' % os.pathsep
cmd.finalize_options()
- assert 'my_lib_dir' in cmd.library_dirs
- assert 'other_lib_dir' in cmd.library_dirs
+ self.assertIn('my_lib_dir', cmd.library_dirs)
+ self.assertIn('other_lib_dir', cmd.library_dirs)
# make sure rpath is turned into a list
# if it's a string
cmd = self.build_ext(dist)
cmd.rpath = 'one%stwo' % os.pathsep
cmd.finalize_options()
- assert cmd.rpath == ['one', 'two']
+ self.assertEqual(cmd.rpath, ['one', 'two'])
# make sure cmd.link_objects is turned into a list
# if it's a string
cmd = build_ext(dist)
cmd.link_objects = 'one two,three'
cmd.finalize_options()
- assert cmd.link_objects == ['one', 'two', 'three']
+ self.assertEqual(cmd.link_objects, ['one', 'two', 'three'])
# XXX more tests to perform for win32
cmd = self.build_ext(dist)
cmd.define = 'one,two'
cmd.finalize_options()
- assert cmd.define == [('one', '1'), ('two', '1')]
+ self.assertEqual(cmd.define, [('one', '1'), ('two', '1')])
# make sure undef is turned into a list of
# strings if they are ','-separated strings
cmd = self.build_ext(dist)
cmd.undef = 'one,two'
cmd.finalize_options()
- assert cmd.undef == ['one', 'two']
+ self.assertEqual(cmd.undef, ['one', 'two'])
# make sure swig_opts is turned into a list
cmd = self.build_ext(dist)
cmd.swig_opts = None
cmd.finalize_options()
- assert cmd.swig_opts == []
+ self.assertEqual(cmd.swig_opts, [])
cmd = self.build_ext(dist)
cmd.swig_opts = '1 2'
cmd.finalize_options()
- assert cmd.swig_opts == ['1', '2']
+ self.assertEqual(cmd.swig_opts, ['1', '2'])
def test_check_extensions_list(self):
dist = Distribution()
cmd = self.build_ext(dist)
cmd.finalize_options()
- # 'extensions' option must be a list of Extension instances
- with pytest.raises(DistutilsSetupError):
- cmd.check_extensions_list('foo')
+ # 'extensions' option must be a list of Extension instances
+ self.assertRaises(DistutilsSetupError, cmd.check_extensions_list, 'foo')
# each element of 'ext_modules' option must be an
# Extension instance or 2-tuple
exts = [('bar', 'foo', 'bar'), 'foo']
- with pytest.raises(DistutilsSetupError):
- cmd.check_extensions_list(exts)
+ self.assertRaises(DistutilsSetupError, cmd.check_extensions_list, exts)
# first element of each tuple in 'ext_modules'
# must be the extension name (a string) and match
# a python dotted-separated name
exts = [('foo-bar', '')]
- with pytest.raises(DistutilsSetupError):
- cmd.check_extensions_list(exts)
+ self.assertRaises(DistutilsSetupError, cmd.check_extensions_list, exts)
# second element of each tuple in 'ext_modules'
# must be a dictionary (build info)
exts = [('foo.bar', '')]
- with pytest.raises(DistutilsSetupError):
- cmd.check_extensions_list(exts)
+ self.assertRaises(DistutilsSetupError, cmd.check_extensions_list, exts)
# ok this one should pass
exts = [('foo.bar', {'sources': [''], 'libraries': 'foo', 'some': 'bar'})]
cmd.check_extensions_list(exts)
ext = exts[0]
- assert isinstance(ext, Extension)
+ self.assertIsInstance(ext, Extension)
# check_extensions_list adds in ext the values passed
# when they are in ('include_dirs', 'library_dirs', 'libraries'
# 'extra_objects', 'extra_compile_args', 'extra_link_args')
- assert ext.libraries == 'foo'
- assert not hasattr(ext, 'some')
+ self.assertEqual(ext.libraries, 'foo')
+ self.assertFalse(hasattr(ext, 'some'))
# 'macros' element of build info dict must be 1- or 2-tuple
exts = [
},
)
]
- with pytest.raises(DistutilsSetupError):
- cmd.check_extensions_list(exts)
+ self.assertRaises(DistutilsSetupError, cmd.check_extensions_list, exts)
exts[0][1]['macros'] = [('1', '2'), ('3',)]
cmd.check_extensions_list(exts)
- assert exts[0].undef_macros == ['3']
- assert exts[0].define_macros == [('1', '2')]
+ self.assertEqual(exts[0].undef_macros, ['3'])
+ self.assertEqual(exts[0].define_macros, [('1', '2')])
def test_get_source_files(self):
modules = [Extension('foo', ['xxx'], optional=False)]
dist = Distribution({'name': 'xx', 'ext_modules': modules})
cmd = self.build_ext(dist)
cmd.ensure_finalized()
- assert cmd.get_source_files() == ['xxx']
+ self.assertEqual(cmd.get_source_files(), ['xxx'])
def test_unicode_module_names(self):
modules = [
dist = Distribution({'name': 'xx', 'ext_modules': modules})
cmd = self.build_ext(dist)
cmd.ensure_finalized()
- assert re.search(r'foo(_d)?\..*', cmd.get_ext_filename(modules[0].name))
- assert re.search(r'föö(_d)?\..*', cmd.get_ext_filename(modules[1].name))
- assert cmd.get_export_symbols(modules[0]) == ['PyInit_foo']
- assert cmd.get_export_symbols(modules[1]) == ['PyInitU_f_1gaa']
+ self.assertRegex(cmd.get_ext_filename(modules[0].name), r'foo(_d)?\..*')
+ self.assertRegex(cmd.get_ext_filename(modules[1].name), r'föö(_d)?\..*')
+ self.assertEqual(cmd.get_export_symbols(modules[0]), ['PyInit_foo'])
+ self.assertEqual(cmd.get_export_symbols(modules[1]), ['PyInitU_f_1gaa'])
def test_compiler_option(self):
# cmd.compiler is an option and
cmd.compiler = 'unix'
cmd.ensure_finalized()
cmd.run()
- assert cmd.compiler == 'unix'
+ self.assertEqual(cmd.compiler, 'unix')
def test_get_outputs(self):
cmd = support.missing_compiler_executable()
+ if cmd is not None:
+ self.skipTest('The %r command is not found' % cmd)
tmp_dir = self.mkdtemp()
c_file = os.path.join(tmp_dir, 'foo.c')
self.write_file(c_file, 'void PyInit_foo(void) {}\n')
cmd = self.build_ext(dist)
fixup_build_ext(cmd)
cmd.ensure_finalized()
- assert len(cmd.get_outputs()) == 1
+ self.assertEqual(len(cmd.get_outputs()), 1)
cmd.build_lib = os.path.join(self.tmp_dir, 'build')
cmd.build_temp = os.path.join(self.tmp_dir, 'tempt')
so_file = cmd.get_outputs()[0]
finally:
os.chdir(old_wd)
- assert os.path.exists(so_file)
+ self.assertTrue(os.path.exists(so_file))
ext_suffix = sysconfig.get_config_var('EXT_SUFFIX')
- assert so_file.endswith(ext_suffix)
+ self.assertTrue(so_file.endswith(ext_suffix))
so_dir = os.path.dirname(so_file)
- assert so_dir == other_tmp_dir
+ self.assertEqual(so_dir, other_tmp_dir)
cmd.inplace = 0
cmd.compiler = None
cmd.run()
so_file = cmd.get_outputs()[0]
- assert os.path.exists(so_file)
- assert so_file.endswith(ext_suffix)
+ self.assertTrue(os.path.exists(so_file))
+ self.assertTrue(so_file.endswith(ext_suffix))
so_dir = os.path.dirname(so_file)
- assert so_dir == cmd.build_lib
+ self.assertEqual(so_dir, cmd.build_lib)
# inplace = 0, cmd.package = 'bar'
build_py = cmd.get_finalized_command('build_py')
path = cmd.get_ext_fullpath('foo')
# checking that the last directory is the build_dir
path = os.path.split(path)[0]
- assert path == cmd.build_lib
+ self.assertEqual(path, cmd.build_lib)
# inplace = 1, cmd.package = 'bar'
cmd.inplace = 1
# checking that the last directory is bar
path = os.path.split(path)[0]
lastdir = os.path.split(path)[-1]
- assert lastdir == 'bar'
+ self.assertEqual(lastdir, 'bar')
def test_ext_fullpath(self):
ext = sysconfig.get_config_var('EXT_SUFFIX')
curdir = os.getcwd()
wanted = os.path.join(curdir, 'src', 'lxml', 'etree' + ext)
path = cmd.get_ext_fullpath('lxml.etree')
- assert wanted == path
+ self.assertEqual(wanted, path)
# building lxml.etree not inplace
cmd.inplace = 0
cmd.build_lib = os.path.join(curdir, 'tmpdir')
wanted = os.path.join(curdir, 'tmpdir', 'lxml', 'etree' + ext)
path = cmd.get_ext_fullpath('lxml.etree')
- assert wanted == path
+ self.assertEqual(wanted, path)
# building twisted.runner.portmap not inplace
build_py = cmd.get_finalized_command('build_py')
cmd.distribution.packages = ['twisted', 'twisted.runner.portmap']
path = cmd.get_ext_fullpath('twisted.runner.portmap')
wanted = os.path.join(curdir, 'tmpdir', 'twisted', 'runner', 'portmap' + ext)
- assert wanted == path
+ self.assertEqual(wanted, path)
# building twisted.runner.portmap inplace
cmd.inplace = 1
path = cmd.get_ext_fullpath('twisted.runner.portmap')
wanted = os.path.join(curdir, 'twisted', 'runner', 'portmap' + ext)
- assert wanted == path
+ self.assertEqual(wanted, path)
- @pytest.mark.skipif('platform.system() != "Darwin"')
- @pytest.mark.usefixtures('save_env')
+ @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX')
def test_deployment_target_default(self):
# Issue 9516: Test that, in the absence of the environment variable,
# an extension module is compiled with the same deployment target as
# the interpreter.
self._try_compile_deployment_target('==', None)
- @pytest.mark.skipif('platform.system() != "Darwin"')
- @pytest.mark.usefixtures('save_env')
+ @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX')
def test_deployment_target_too_low(self):
# Issue 9516: Test that an extension module is not allowed to be
# compiled with a deployment target less than that of the interpreter.
- with pytest.raises(DistutilsPlatformError):
- self._try_compile_deployment_target('>', '10.1')
+ self.assertRaises(
+ DistutilsPlatformError, self._try_compile_deployment_target, '>', '10.1'
+ )
- @pytest.mark.skipif('platform.system() != "Darwin"')
- @pytest.mark.usefixtures('save_env')
+ @unittest.skipUnless(sys.platform == 'darwin', 'test only relevant for MacOSX')
def test_deployment_target_higher_ok(self):
# Issue 9516: Test that an extension module can be compiled with a
# deployment target higher than that of the interpreter: the ext
self._try_compile_deployment_target('<', deptarget)
def _try_compile_deployment_target(self, operator, target):
+ orig_environ = os.environ
+ os.environ = orig_environ.copy()
+ self.addCleanup(setattr, os, 'environ', orig_environ)
+
if target is None:
if os.environ.get('MACOSX_DEPLOYMENT_TARGET'):
del os.environ['MACOSX_DEPLOYMENT_TARGET']
deptarget_ext = Extension(
'deptarget',
[deptarget_c],
- extra_compile_args=['-DTARGET={}'.format(target)],
+ extra_compile_args=['-DTARGET=%s' % (target,)],
)
dist = Distribution({'name': 'deptarget', 'ext_modules': [deptarget_ext]})
dist.package_dir = self.tmp_dir
self.fail("Wrong deployment target during compilation")
-class TestParallelBuildExt(TestBuildExt):
+class ParallelBuildExtTestCase(BuildExtTestCase):
def build_ext(self, *args, **kwargs):
build_ext = super().build_ext(*args, **kwargs)
build_ext.parallel = True
return build_ext
+
+
+def test_suite():
+ suite = unittest.TestSuite()
+ suite.addTest(unittest.TestLoader().loadTestsFromTestCase(BuildExtTestCase))
+ suite.addTest(unittest.TestLoader().loadTestsFromTestCase(ParallelBuildExtTestCase))
+ return suite
+
+
+if __name__ == '__main__':
+ support.run_unittest(__name__)
import os
import sys
-import unittest.mock as mock
-
-import pytest
+import unittest
from distutils.command.build_py import build_py
from distutils.core import Distribution
from distutils.errors import DistutilsFileError
+from unittest.mock import patch
from distutils.tests import support
+from test.support import run_unittest
-@support.combine_markers
-class TestBuildPy(support.TempdirManager, support.LoggingSilencer):
+class BuildPyTestCase(
+ support.TempdirManager, support.LoggingSilencer, unittest.TestCase
+):
def test_package_data(self):
sources = self.mkdtemp()
f = open(os.path.join(sources, "__init__.py"), "w")
cmd = build_py(dist)
cmd.compile = 1
cmd.ensure_finalized()
- assert cmd.package_data == dist.package_data
+ self.assertEqual(cmd.package_data, dist.package_data)
cmd.run()
# This makes sure the list of outputs includes byte-compiled
# files for Python modules but not for package data files
# (there shouldn't *be* byte-code files for those!).
- assert len(cmd.get_outputs()) == 3
+ self.assertEqual(len(cmd.get_outputs()), 3)
pkgdest = os.path.join(destination, "pkg")
files = os.listdir(pkgdest)
pycache_dir = os.path.join(pkgdest, "__pycache__")
- assert "__init__.py" in files
- assert "README.txt" in files
+ self.assertIn("__init__.py", files)
+ self.assertIn("README.txt", files)
if sys.dont_write_bytecode:
- assert not os.path.exists(pycache_dir)
+ self.assertFalse(os.path.exists(pycache_dir))
else:
pyc_files = os.listdir(pycache_dir)
- assert "__init__.%s.pyc" % sys.implementation.cache_tag in pyc_files
+ self.assertIn("__init__.%s.pyc" % sys.implementation.cache_tag, pyc_files)
def test_empty_package_dir(self):
# See bugs #1668596/#1720897
except DistutilsFileError:
self.fail("failed package_data test when package_dir is ''")
- @pytest.mark.skipif('sys.dont_write_bytecode')
+ @unittest.skipIf(sys.dont_write_bytecode, 'byte-compile disabled')
def test_byte_compile(self):
project_dir, dist = self.create_dist(py_modules=['boiledeggs'])
os.chdir(project_dir)
cmd.run()
found = os.listdir(cmd.build_lib)
- assert sorted(found) == ['__pycache__', 'boiledeggs.py']
+ self.assertEqual(sorted(found), ['__pycache__', 'boiledeggs.py'])
found = os.listdir(os.path.join(cmd.build_lib, '__pycache__'))
- assert found == ['boiledeggs.%s.pyc' % sys.implementation.cache_tag]
+ self.assertEqual(found, ['boiledeggs.%s.pyc' % sys.implementation.cache_tag])
- @pytest.mark.skipif('sys.dont_write_bytecode')
+ @unittest.skipIf(sys.dont_write_bytecode, 'byte-compile disabled')
def test_byte_compile_optimized(self):
project_dir, dist = self.create_dist(py_modules=['boiledeggs'])
os.chdir(project_dir)
cmd.run()
found = os.listdir(cmd.build_lib)
- assert sorted(found) == ['__pycache__', 'boiledeggs.py']
+ self.assertEqual(sorted(found), ['__pycache__', 'boiledeggs.py'])
found = os.listdir(os.path.join(cmd.build_lib, '__pycache__'))
- expect = f'boiledeggs.{sys.implementation.cache_tag}.opt-1.pyc'
- assert sorted(found) == [expect]
+ expect = 'boiledeggs.{}.opt-1.pyc'.format(sys.implementation.cache_tag)
+ self.assertEqual(sorted(found), [expect])
def test_dir_in_package_data(self):
"""
finally:
sys.dont_write_bytecode = old_dont_write_bytecode
- assert 'byte-compiling is disabled' in self.logs[0][1] % self.logs[0][2]
+ self.assertIn('byte-compiling is disabled', self.logs[0][1] % self.logs[0][2])
- @mock.patch("distutils.command.build_py.log.warn")
+ @patch("distutils.command.build_py.log.warn")
def test_namespace_package_does_not_warn(self, log_warn):
"""
Originally distutils implementation did not account for PEP 420
cmd.run()
# Test should complete successfully with no exception
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(BuildPyTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.command.build_scripts."""
import os
+import unittest
from distutils.command.build_scripts import build_scripts
from distutils.core import Distribution
from distutils import sysconfig
from distutils.tests import support
+from test.support import run_unittest
-class TestBuildScripts(support.TempdirManager, support.LoggingSilencer):
+class BuildScriptsTestCase(
+ support.TempdirManager, support.LoggingSilencer, unittest.TestCase
+):
def test_default_settings(self):
cmd = self.get_build_scripts_cmd("/foo/bar", [])
- assert not cmd.force
- assert cmd.build_dir is None
+ self.assertFalse(cmd.force)
+ self.assertIsNone(cmd.build_dir)
cmd.finalize_options()
- assert cmd.force
- assert cmd.build_dir == "/foo/bar"
+ self.assertTrue(cmd.force)
+ self.assertEqual(cmd.build_dir, "/foo/bar")
def test_build(self):
source = self.mkdtemp()
built = os.listdir(target)
for name in expected:
- assert name in built
+ self.assertIn(name, built)
def get_build_scripts_cmd(self, target, scripts):
import sys
built = os.listdir(target)
for name in expected:
- assert name in built
+ self.assertIn(name, built)
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(BuildScriptsTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
+++ /dev/null
-import os
-import sys
-import platform
-import textwrap
-import sysconfig
-
-import pytest
-
-from distutils import ccompiler
-
-
-def _make_strs(paths):
- """
- Convert paths to strings for legacy compatibility.
- """
- if sys.version_info > (3, 8) and platform.system() != "Windows":
- return paths
- return list(map(os.fspath, paths))
-
-
-@pytest.fixture
-def c_file(tmp_path):
- c_file = tmp_path / 'foo.c'
- gen_headers = ('Python.h',)
- is_windows = platform.system() == "Windows"
- plat_headers = ('windows.h',) * is_windows
- all_headers = gen_headers + plat_headers
- headers = '\n'.join(f'#include <{header}>\n' for header in all_headers)
- payload = (
- textwrap.dedent(
- """
- #headers
- void PyInit_foo(void) {}
- """
- )
- .lstrip()
- .replace('#headers', headers)
- )
- c_file.write_text(payload)
- return c_file
-
-
-def test_set_include_dirs(c_file):
- """
- Extensions should build even if set_include_dirs is invoked.
- In particular, compiler-specific paths should not be overridden.
- """
- compiler = ccompiler.new_compiler()
- python = sysconfig.get_paths()['include']
- compiler.set_include_dirs([python])
- compiler.compile(_make_strs([c_file]))
-
- # do it again, setting include dirs after any initialization
- compiler.set_include_dirs([python])
- compiler.compile(_make_strs([c_file]))
"""Tests for distutils.command.check."""
import os
import textwrap
+import unittest
+from test.support import run_unittest
-import pytest
-
-from distutils.command.check import check
+from distutils.command.check import check, HAS_DOCUTILS
from distutils.tests import support
from distutils.errors import DistutilsSetupError
HERE = os.path.dirname(__file__)
-@support.combine_markers
-class TestCheck(support.LoggingSilencer, support.TempdirManager):
+class CheckTestCase(support.LoggingSilencer, support.TempdirManager, unittest.TestCase):
def _run(self, metadata=None, cwd=None, **options):
if metadata is None:
metadata = {}
# by default, check is checking the metadata
# should have some warnings
cmd = self._run()
- assert cmd._warnings == 1
+ self.assertEqual(cmd._warnings, 1)
# now let's add the required fields
# and run it again, to make sure we don't get
'version': 'xxx',
}
cmd = self._run(metadata)
- assert cmd._warnings == 0
+ self.assertEqual(cmd._warnings, 0)
# now with the strict mode, we should
# get an error if there are missing metadata
- with pytest.raises(DistutilsSetupError):
- self._run({}, **{'strict': 1})
+ self.assertRaises(DistutilsSetupError, self._run, {}, **{'strict': 1})
# and of course, no error when all metadata are present
cmd = self._run(metadata, strict=1)
- assert cmd._warnings == 0
+ self.assertEqual(cmd._warnings, 0)
# now a test with non-ASCII characters
metadata = {
'long_description': 'More things about esszet \u00df',
}
cmd = self._run(metadata)
- assert cmd._warnings == 0
+ self.assertEqual(cmd._warnings, 0)
def test_check_author_maintainer(self):
for kind in ("author", "maintainer"):
'version': 'xxx',
}
cmd = self._run(metadata)
- assert cmd._warnings == 0
+ self.assertEqual(cmd._warnings, 0)
# the check should not warn if only email is given
metadata[kind + '_email'] = 'name@email.com'
cmd = self._run(metadata)
- assert cmd._warnings == 0
+ self.assertEqual(cmd._warnings, 0)
# the check should not warn if only the name is given
metadata[kind] = "Name"
del metadata[kind + '_email']
cmd = self._run(metadata)
- assert cmd._warnings == 0
+ self.assertEqual(cmd._warnings, 0)
+ @unittest.skipUnless(HAS_DOCUTILS, "won't test without docutils")
def test_check_document(self):
- pytest.importorskip('docutils')
pkg_info, dist = self.create_dist()
cmd = check(dist)
# let's see if it detects broken rest
broken_rest = 'title\n===\n\ntest'
msgs = cmd._check_rst_data(broken_rest)
- assert len(msgs) == 1
+ self.assertEqual(len(msgs), 1)
# and non-broken rest
rest = 'title\n=====\n\ntest'
msgs = cmd._check_rst_data(rest)
- assert len(msgs) == 0
+ self.assertEqual(len(msgs), 0)
+ @unittest.skipUnless(HAS_DOCUTILS, "won't test without docutils")
def test_check_restructuredtext(self):
- pytest.importorskip('docutils')
# let's see if it detects broken rest in long_description
broken_rest = 'title\n===\n\ntest'
pkg_info, dist = self.create_dist(long_description=broken_rest)
cmd = check(dist)
cmd.check_restructuredtext()
- assert cmd._warnings == 1
+ self.assertEqual(cmd._warnings, 1)
# let's see if we have an error with strict=1
metadata = {
'version': 'xxx',
'long_description': broken_rest,
}
- with pytest.raises(DistutilsSetupError):
- self._run(metadata, **{'strict': 1, 'restructuredtext': 1})
+ self.assertRaises(
+ DistutilsSetupError,
+ self._run,
+ metadata,
+ **{'strict': 1, 'restructuredtext': 1}
+ )
# and non-broken rest, including a non-ASCII character to test #12114
metadata['long_description'] = 'title\n=====\n\ntest \u00df'
cmd = self._run(metadata, strict=1, restructuredtext=1)
- assert cmd._warnings == 0
+ self.assertEqual(cmd._warnings, 0)
# check that includes work to test #31292
metadata['long_description'] = 'title\n=====\n\n.. include:: includetest.rst'
cmd = self._run(metadata, cwd=HERE, strict=1, restructuredtext=1)
- assert cmd._warnings == 0
+ self.assertEqual(cmd._warnings, 0)
+ @unittest.skipUnless(HAS_DOCUTILS, "won't test without docutils")
def test_check_restructuredtext_with_syntax_highlight(self):
- pytest.importorskip('docutils')
# Don't fail if there is a `code` or `code-block` directive
example_rst_docs = []
cmd.check_restructuredtext()
msgs = cmd._check_rst_data(rest_with_code)
if pygments is not None:
- assert len(msgs) == 0
+ self.assertEqual(len(msgs), 0)
else:
- assert len(msgs) == 1
- assert (
- str(msgs[0][1])
- == 'Cannot analyze code. Pygments package not found.'
+ self.assertEqual(len(msgs), 1)
+ self.assertEqual(
+ str(msgs[0][1]), 'Cannot analyze code. Pygments package not found.'
)
def test_check_all(self):
- with pytest.raises(DistutilsSetupError):
- self._run({}, **{'strict': 1, 'restructuredtext': 1})
+
+ metadata = {'url': 'xxx', 'author': 'xxx'}
+ self.assertRaises(
+ DistutilsSetupError, self._run, {}, **{'strict': 1, 'restructuredtext': 1}
+ )
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(CheckTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.command.clean."""
import os
+import unittest
from distutils.command.clean import clean
from distutils.tests import support
+from test.support import run_unittest
-class TestClean(support.TempdirManager, support.LoggingSilencer):
+class cleanTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase):
def test_simple_run(self):
pkg_dir, dist = self.create_dist()
cmd = clean(dist)
# make sure the files were removed
for name, path in dirs:
- assert not os.path.exists(path), '%s was not removed' % path
+ self.assertFalse(os.path.exists(path), '%s was not removed' % path)
# let's run the command again (should spit warnings but succeed)
cmd.all = 1
cmd.ensure_finalized()
cmd.run()
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(cleanTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.cmd."""
+import unittest
import os
-from test.support import captured_stdout
+from test.support import captured_stdout, run_unittest
from distutils.cmd import Command
from distutils.dist import Distribution
from distutils.errors import DistutilsOptionError
from distutils import debug
-import pytest
class MyCmd(Command):
pass
-@pytest.fixture
-def cmd(request):
- return MyCmd(Distribution())
+class CommandTestCase(unittest.TestCase):
+ def setUp(self):
+ dist = Distribution()
+ self.cmd = MyCmd(dist)
+ def test_ensure_string_list(self):
-class TestCommand:
- def test_ensure_string_list(self, cmd):
+ cmd = self.cmd
cmd.not_string_list = ['one', 2, 'three']
cmd.yes_string_list = ['one', 'two', 'three']
cmd.not_string_list2 = object()
cmd.ensure_string_list('yes_string_list')
cmd.ensure_string_list('yes_string_list2')
- with pytest.raises(DistutilsOptionError):
- cmd.ensure_string_list('not_string_list')
+ self.assertRaises(
+ DistutilsOptionError, cmd.ensure_string_list, 'not_string_list'
+ )
- with pytest.raises(DistutilsOptionError):
- cmd.ensure_string_list('not_string_list2')
+ self.assertRaises(
+ DistutilsOptionError, cmd.ensure_string_list, 'not_string_list2'
+ )
cmd.option1 = 'ok,dok'
cmd.ensure_string_list('option1')
- assert cmd.option1 == ['ok', 'dok']
+ self.assertEqual(cmd.option1, ['ok', 'dok'])
cmd.option2 = ['xxx', 'www']
cmd.ensure_string_list('option2')
cmd.option3 = ['ok', 2]
- with pytest.raises(DistutilsOptionError):
- cmd.ensure_string_list('option3')
+ self.assertRaises(DistutilsOptionError, cmd.ensure_string_list, 'option3')
+
+ def test_make_file(self):
+
+ cmd = self.cmd
- def test_make_file(self, cmd):
# making sure it raises when infiles is not a string or a list/tuple
- with pytest.raises(TypeError):
- cmd.make_file(infiles=1, outfile='', func='func', args=())
+ self.assertRaises(
+ TypeError, cmd.make_file, infiles=1, outfile='', func='func', args=()
+ )
# making sure execute gets called properly
def _execute(func, args, exec_msg, level):
- assert exec_msg == 'generating out from in'
+ self.assertEqual(exec_msg, 'generating out from in')
cmd.force = True
cmd.execute = _execute
cmd.make_file(infiles='in', outfile='out', func='func', args=())
- def test_dump_options(self, cmd):
+ def test_dump_options(self):
msgs = []
def _announce(msg, level):
msgs.append(msg)
+ cmd = self.cmd
cmd.announce = _announce
cmd.option1 = 1
cmd.option2 = 1
cmd.dump_options()
wanted = ["command options for 'MyCmd':", ' option1 = 1', ' option2 = 1']
- assert msgs == wanted
+ self.assertEqual(msgs, wanted)
- def test_ensure_string(self, cmd):
+ def test_ensure_string(self):
+ cmd = self.cmd
cmd.option1 = 'ok'
cmd.ensure_string('option1')
cmd.option2 = None
cmd.ensure_string('option2', 'xxx')
- assert hasattr(cmd, 'option2')
+ self.assertTrue(hasattr(cmd, 'option2'))
cmd.option3 = 1
- with pytest.raises(DistutilsOptionError):
- cmd.ensure_string('option3')
+ self.assertRaises(DistutilsOptionError, cmd.ensure_string, 'option3')
- def test_ensure_filename(self, cmd):
+ def test_ensure_filename(self):
+ cmd = self.cmd
cmd.option1 = __file__
cmd.ensure_filename('option1')
cmd.option2 = 'xxx'
- with pytest.raises(DistutilsOptionError):
- cmd.ensure_filename('option2')
+ self.assertRaises(DistutilsOptionError, cmd.ensure_filename, 'option2')
- def test_ensure_dirname(self, cmd):
+ def test_ensure_dirname(self):
+ cmd = self.cmd
cmd.option1 = os.path.dirname(__file__) or os.curdir
cmd.ensure_dirname('option1')
cmd.option2 = 'xxx'
- with pytest.raises(DistutilsOptionError):
- cmd.ensure_dirname('option2')
+ self.assertRaises(DistutilsOptionError, cmd.ensure_dirname, 'option2')
- def test_debug_print(self, cmd):
+ def test_debug_print(self):
+ cmd = self.cmd
with captured_stdout() as stdout:
cmd.debug_print('xxx')
stdout.seek(0)
- assert stdout.read() == ''
+ self.assertEqual(stdout.read(), '')
debug.DEBUG = True
try:
with captured_stdout() as stdout:
cmd.debug_print('xxx')
stdout.seek(0)
- assert stdout.read() == 'xxx\n'
+ self.assertEqual(stdout.read(), 'xxx\n')
finally:
debug.DEBUG = False
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(CommandTestCase)
+
+
+if __name__ == '__main__':
+ run_unittest(test_suite())
"""Tests for distutils.pypirc.pypirc."""
import os
+import unittest
-import pytest
+from distutils.core import PyPIRCCommand
+from distutils.core import Distribution
+from distutils.log import set_threshold
+from distutils.log import WARN
from distutils.tests import support
+from test.support import run_unittest
PYPIRC = """\
[distutils]
"""
-@support.combine_markers
-@pytest.mark.usefixtures('threshold_warn')
-@pytest.mark.usefixtures('pypirc')
class BasePyPIRCCommandTestCase(
support.TempdirManager,
support.LoggingSilencer,
+ support.EnvironGuard,
+ unittest.TestCase,
):
- pass
+ def setUp(self):
+ """Patches the environment."""
+ super(BasePyPIRCCommandTestCase, self).setUp()
+ self.tmp_dir = self.mkdtemp()
+ os.environ['HOME'] = self.tmp_dir
+ os.environ['USERPROFILE'] = self.tmp_dir
+ self.rc = os.path.join(self.tmp_dir, '.pypirc')
+ self.dist = Distribution()
+
+ class command(PyPIRCCommand):
+ def __init__(self, dist):
+ super().__init__(dist)
+
+ def initialize_options(self):
+ pass
+
+ finalize_options = initialize_options
+
+ self._cmd = command
+ self.old_threshold = set_threshold(WARN)
+
+ def tearDown(self):
+ """Removes the patch."""
+ set_threshold(self.old_threshold)
+ super(BasePyPIRCCommandTestCase, self).tearDown()
class PyPIRCCommandTestCase(BasePyPIRCCommandTestCase):
('server', 'server1'),
('username', 'me'),
]
- assert config == waited
+ self.assertEqual(config, waited)
# old format
self.write_file(self.rc, PYPIRC_OLD)
('server', 'server-login'),
('username', 'tarek'),
]
- assert config == waited
+ self.assertEqual(config, waited)
def test_server_empty_registration(self):
cmd = self._cmd(self.dist)
rc = cmd._get_rc_file()
- assert not os.path.exists(rc)
+ self.assertFalse(os.path.exists(rc))
cmd._store_pypirc('tarek', 'xxx')
- assert os.path.exists(rc)
+ self.assertTrue(os.path.exists(rc))
f = open(rc)
try:
content = f.read()
- assert content == WANTED
+ self.assertEqual(content, WANTED)
finally:
f.close()
('server', 'server3'),
('username', 'cbiggles'),
]
- assert config == waited
+ self.assertEqual(config, waited)
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(PyPIRCCommandTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.command.config."""
+import unittest
import os
import sys
-from test.support import missing_compiler_executable
-
-import pytest
+from test.support import run_unittest, missing_compiler_executable
from distutils.command.config import dump_file, config
from distutils.tests import support
from distutils import log
-@pytest.fixture(autouse=True)
-def info_log(request, monkeypatch):
- self = request.instance
- self._logs = []
- monkeypatch.setattr(log, 'info', self._info)
-
-
-@support.combine_markers
-class TestConfig(support.LoggingSilencer, support.TempdirManager):
+class ConfigTestCase(
+ support.LoggingSilencer, support.TempdirManager, unittest.TestCase
+):
def _info(self, msg, *args):
for line in msg.splitlines():
self._logs.append(line)
+ def setUp(self):
+ super(ConfigTestCase, self).setUp()
+ self._logs = []
+ self.old_log = log.info
+ log.info = self._info
+
+ def tearDown(self):
+ log.info = self.old_log
+ super(ConfigTestCase, self).tearDown()
+
def test_dump_file(self):
this_file = os.path.splitext(__file__)[0] + '.py'
f = open(this_file)
f.close()
dump_file(this_file, 'I am the header')
- assert len(self._logs) == numlines + 1
+ self.assertEqual(len(self._logs), numlines + 1)
- @pytest.mark.skipif('platform.system() == "Windows"')
+ @unittest.skipIf(sys.platform == 'win32', "can't test on Windows")
def test_search_cpp(self):
cmd = missing_compiler_executable(['preprocessor'])
if cmd is not None:
# simple pattern searches
match = cmd.search_cpp(pattern='xxx', body='/* xxx */')
- assert match == 0
+ self.assertEqual(match, 0)
match = cmd.search_cpp(pattern='_configtest', body='/* xxx */')
- assert match == 1
+ self.assertEqual(match, 1)
def test_finalize_options(self):
# finalize_options does a bit of transformation
cmd.library_dirs = 'three%sfour' % os.pathsep
cmd.ensure_finalized()
- assert cmd.include_dirs == ['one', 'two']
- assert cmd.libraries == ['one']
- assert cmd.library_dirs == ['three', 'four']
+ self.assertEqual(cmd.include_dirs, ['one', 'two'])
+ self.assertEqual(cmd.libraries, ['one'])
+ self.assertEqual(cmd.library_dirs, ['three', 'four'])
def test_clean(self):
# _clean removes files
self.write_file(f2, 'xxx')
for f in (f1, f2):
- assert os.path.exists(f)
+ self.assertTrue(os.path.exists(f))
pkg_dir, dist = self.create_dist()
cmd = config(dist)
cmd._clean(f1, f2)
for f in (f1, f2):
- assert not os.path.exists(f)
+ self.assertFalse(os.path.exists(f))
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(ConfigTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
import io
import distutils.core
import os
+import shutil
import sys
-from test.support import captured_stdout
-
-import pytest
-
+from test.support import captured_stdout, run_unittest
from . import py38compat as os_helper
+import unittest
+from distutils.tests import support
+from distutils import log
from distutils.dist import Distribution
# setup script that uses __file__
"""
-@pytest.fixture(autouse=True)
-def save_stdout(monkeypatch):
- monkeypatch.setattr(sys, 'stdout', sys.stdout)
+class CoreTestCase(support.EnvironGuard, unittest.TestCase):
+ def setUp(self):
+ super(CoreTestCase, self).setUp()
+ self.old_stdout = sys.stdout
+ self.cleanup_testfn()
+ self.old_argv = sys.argv, sys.argv[:]
+ self.addCleanup(log.set_threshold, log._global_log.threshold)
+
+ def tearDown(self):
+ sys.stdout = self.old_stdout
+ self.cleanup_testfn()
+ sys.argv = self.old_argv[0]
+ sys.argv[:] = self.old_argv[1]
+ super(CoreTestCase, self).tearDown()
+
+ def cleanup_testfn(self):
+ path = os_helper.TESTFN
+ if os.path.isfile(path):
+ os.remove(path)
+ elif os.path.isdir(path):
+ shutil.rmtree(path)
-
-@pytest.mark.usefixtures('save_env')
-@pytest.mark.usefixtures('save_argv')
-@pytest.mark.usefixtures('cleanup_testfn')
-class TestCore:
def write_setup(self, text, path=os_helper.TESTFN):
f = open(path, "w")
try:
# Make sure run_setup does not clobber sys.argv
argv_copy = sys.argv.copy()
distutils.core.run_setup(self.write_setup(setup_does_nothing))
- assert sys.argv == argv_copy
+ self.assertEqual(sys.argv, argv_copy)
def test_run_setup_defines_subclass(self):
# Make sure the script can use __file__; if that's missing, the test
# setup.py script will raise NameError.
dist = distutils.core.run_setup(self.write_setup(setup_defines_subclass))
install = dist.get_command_obj('install')
- assert 'cmd' in install.sub_commands
+ self.assertIn('cmd', install.sub_commands)
def test_run_setup_uses_current_dir(self):
# This tests that the setup script is run with the current directory
output = sys.stdout.getvalue()
if output.endswith("\n"):
output = output[:-1]
- assert cwd == output
+ self.assertEqual(cwd, output)
def test_run_setup_within_if_main(self):
dist = distutils.core.run_setup(
self.write_setup(setup_within_if_main), stop_after="config"
)
- assert isinstance(dist, Distribution)
- assert dist.get_name() == "setup_within_if_main"
+ self.assertIsInstance(dist, Distribution)
+ self.assertEqual(dist.get_name(), "setup_within_if_main")
def test_run_commands(self):
sys.argv = ['setup.py', 'build']
dist = distutils.core.run_setup(
self.write_setup(setup_within_if_main), stop_after="commandline"
)
- assert 'build' not in dist.have_run
+ self.assertNotIn('build', dist.have_run)
distutils.core.run_commands(dist)
- assert 'build' in dist.have_run
+ self.assertIn('build', dist.have_run)
def test_debug_mode(self):
# this covers the code called when DEBUG is set
with captured_stdout() as stdout:
distutils.core.setup(name='bar')
stdout.seek(0)
- assert stdout.read() == 'bar\n'
+ self.assertEqual(stdout.read(), 'bar\n')
distutils.core.DEBUG = True
try:
distutils.core.DEBUG = False
stdout.seek(0)
wanted = "options (after parsing config files):\n"
- assert stdout.readlines()[0] == wanted
+ self.assertEqual(stdout.readlines()[0], wanted)
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(CoreTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.cygwinccompiler."""
+import unittest
import sys
import os
-
-import pytest
+from test.support import run_unittest
from distutils.cygwinccompiler import (
check_config_h,
get_msvcr,
)
from distutils.tests import support
-from distutils import sysconfig
-@pytest.fixture(autouse=True)
-def stuff(request, monkeypatch, distutils_managed_tempdir):
- self = request.instance
- self.python_h = os.path.join(self.mkdtemp(), 'python.h')
- monkeypatch.setattr(sysconfig, 'get_config_h_filename', self._get_config_h_filename)
- monkeypatch.setattr(sys, 'version', sys.version)
+class CygwinCCompilerTestCase(support.TempdirManager, unittest.TestCase):
+ def setUp(self):
+ super(CygwinCCompilerTestCase, self).setUp()
+ self.version = sys.version
+ self.python_h = os.path.join(self.mkdtemp(), 'python.h')
+ from distutils import sysconfig
+
+ self.old_get_config_h_filename = sysconfig.get_config_h_filename
+ sysconfig.get_config_h_filename = self._get_config_h_filename
+
+ def tearDown(self):
+ sys.version = self.version
+ from distutils import sysconfig
+ sysconfig.get_config_h_filename = self.old_get_config_h_filename
+ super(CygwinCCompilerTestCase, self).tearDown()
-class TestCygwinCCompiler(support.TempdirManager):
def _get_config_h_filename(self):
return self.python_h
- @pytest.mark.skipif('sys.platform != "cygwin"')
- @pytest.mark.skipif('not os.path.exists("/usr/lib/libbash.dll.a")')
+ @unittest.skipIf(sys.platform != "cygwin", "Not running on Cygwin")
+ @unittest.skipIf(
+ not os.path.exists("/usr/lib/libbash.dll.a"), "Don't know a linkable library"
+ )
def test_find_library_file(self):
from distutils.cygwinccompiler import CygwinCCompiler
compiler = CygwinCCompiler()
link_name = "bash"
linkable_file = compiler.find_library_file(["/usr/lib"], link_name)
- assert linkable_file is not None
- assert os.path.exists(linkable_file)
- assert linkable_file == f"/usr/lib/lib{link_name:s}.dll.a"
+ self.assertIsNotNone(linkable_file)
+ self.assertTrue(os.path.exists(linkable_file))
+ self.assertEqual(linkable_file, "/usr/lib/lib{:s}.dll.a".format(link_name))
- @pytest.mark.skipif('sys.platform != "cygwin"')
+ @unittest.skipIf(sys.platform != "cygwin", "Not running on Cygwin")
def test_runtime_library_dir_option(self):
from distutils.cygwinccompiler import CygwinCCompiler
-
compiler = CygwinCCompiler()
- assert compiler.runtime_library_dir_option('/foo') == []
+ self.assertEqual(compiler.runtime_library_dir_option('/foo'), [])
def test_check_config_h(self):
'4.0.1 (Apple Computer, Inc. build 5370)]'
)
- assert check_config_h()[0] == CONFIG_H_OK
+ self.assertEqual(check_config_h()[0], CONFIG_H_OK)
# then it tries to see if it can find "__GNUC__" in pyconfig.h
sys.version = 'something without the *CC word'
# if the file doesn't exist it returns CONFIG_H_UNCERTAIN
- assert check_config_h()[0] == CONFIG_H_UNCERTAIN
+ self.assertEqual(check_config_h()[0], CONFIG_H_UNCERTAIN)
# if it exists but does not contain __GNUC__, it returns CONFIG_H_NOTOK
self.write_file(self.python_h, 'xxx')
- assert check_config_h()[0] == CONFIG_H_NOTOK
+ self.assertEqual(check_config_h()[0], CONFIG_H_NOTOK)
# and CONFIG_H_OK if __GNUC__ is found
self.write_file(self.python_h, 'xxx __GNUC__ xxx')
- assert check_config_h()[0] == CONFIG_H_OK
+ self.assertEqual(check_config_h()[0], CONFIG_H_OK)
def test_get_msvcr(self):
'2.6.1 (r261:67515, Dec 6 2008, 16:42:21) '
'\n[GCC 4.0.1 (Apple Computer, Inc. build 5370)]'
)
- assert get_msvcr() is None
+ self.assertEqual(get_msvcr(), None)
# MSVC 7.0
sys.version = (
'2.5.1 (r251:54863, Apr 18 2007, 08:51:08) ' '[MSC v.1300 32 bits (Intel)]'
)
- assert get_msvcr() == ['msvcr70']
+ self.assertEqual(get_msvcr(), ['msvcr70'])
# MSVC 7.1
sys.version = (
'2.5.1 (r251:54863, Apr 18 2007, 08:51:08) ' '[MSC v.1310 32 bits (Intel)]'
)
- assert get_msvcr() == ['msvcr71']
+ self.assertEqual(get_msvcr(), ['msvcr71'])
# VS2005 / MSVC 8.0
sys.version = (
'2.5.1 (r251:54863, Apr 18 2007, 08:51:08) ' '[MSC v.1400 32 bits (Intel)]'
)
- assert get_msvcr() == ['msvcr80']
+ self.assertEqual(get_msvcr(), ['msvcr80'])
# VS2008 / MSVC 9.0
sys.version = (
'2.5.1 (r251:54863, Apr 18 2007, 08:51:08) ' '[MSC v.1500 32 bits (Intel)]'
)
- assert get_msvcr() == ['msvcr90']
+ self.assertEqual(get_msvcr(), ['msvcr90'])
- sys.version = (
- '3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 18:46:30) '
- '[MSC v.1929 32 bit (Intel)]'
- )
- assert get_msvcr() == ['ucrt', 'vcruntime140']
+ sys.version = '3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 18:46:30) [MSC v.1929 32 bit (Intel)]'
+ self.assertEqual(get_msvcr(), ['ucrt', 'vcruntime140'])
# unknown
sys.version = (
'2.5.1 (r251:54863, Apr 18 2007, 08:51:08) ' '[MSC v.2000 32 bits (Intel)]'
)
- with pytest.raises(ValueError):
- get_msvcr()
+ self.assertRaises(ValueError, get_msvcr)
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(CygwinCCompilerTestCase)
+
+
+if __name__ == '__main__':
+ run_unittest(test_suite())
"""Tests for distutils.dep_util."""
+import unittest
import os
from distutils.dep_util import newer, newer_pairwise, newer_group
from distutils.errors import DistutilsFileError
from distutils.tests import support
-import pytest
+from test.support import run_unittest
-class TestDepUtil(support.TempdirManager):
+class DepUtilTestCase(support.TempdirManager, unittest.TestCase):
def test_newer(self):
tmpdir = self.mkdtemp()
old_file = os.path.abspath(__file__)
# Raise DistutilsFileError if 'new_file' does not exist.
- with pytest.raises(DistutilsFileError):
- newer(new_file, old_file)
+ self.assertRaises(DistutilsFileError, newer, new_file, old_file)
# Return true if 'new_file' exists and is more recently modified than
# 'old_file', or if 'new_file' exists and 'old_file' doesn't.
self.write_file(new_file)
- assert newer(new_file, 'I_dont_exist')
- assert newer(new_file, old_file)
+ self.assertTrue(newer(new_file, 'I_dont_exist'))
+ self.assertTrue(newer(new_file, old_file))
# Return false if both exist and 'old_file' is the same age or younger
# than 'new_file'.
- assert not newer(old_file, new_file)
+ self.assertFalse(newer(old_file, new_file))
def test_newer_pairwise(self):
tmpdir = self.mkdtemp()
self.write_file(two)
self.write_file(four)
- assert newer_pairwise([one, two], [three, four]) == ([one], [three])
+ self.assertEqual(newer_pairwise([one, two], [three, four]), ([one], [three]))
def test_newer_group(self):
tmpdir = self.mkdtemp()
self.write_file(one)
self.write_file(two)
self.write_file(three)
- assert newer_group([one, two, three], old_file)
- assert not newer_group([one, two, old_file], three)
+ self.assertTrue(newer_group([one, two, three], old_file))
+ self.assertFalse(newer_group([one, two, old_file], three))
# missing handling
os.remove(one)
- with pytest.raises(OSError):
- newer_group([one, two, old_file], three)
+ self.assertRaises(OSError, newer_group, [one, two, old_file], three)
- assert not newer_group([one, two, old_file], three, missing='ignore')
+ self.assertFalse(newer_group([one, two, old_file], three, missing='ignore'))
- assert newer_group([one, two, old_file], three, missing='newer')
+ self.assertTrue(newer_group([one, two, old_file], three, missing='newer'))
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(DepUtilTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.dir_util."""
+import unittest
import os
import stat
-import unittest.mock as mock
+import sys
+from unittest.mock import patch
from distutils import dir_util, errors
from distutils.dir_util import (
from distutils import log
from distutils.tests import support
-import pytest
+from test.support import run_unittest
-@pytest.fixture(autouse=True)
-def stuff(request, monkeypatch, distutils_managed_tempdir):
- self = request.instance
- self._logs = []
- tmp_dir = self.mkdtemp()
- self.root_target = os.path.join(tmp_dir, 'deep')
- self.target = os.path.join(self.root_target, 'here')
- self.target2 = os.path.join(tmp_dir, 'deep2')
- monkeypatch.setattr(log, 'info', self._log)
-
-
-class TestDirUtil(support.TempdirManager):
+class DirUtilTestCase(support.TempdirManager, unittest.TestCase):
def _log(self, msg, *args):
if len(args) > 0:
self._logs.append(msg % args)
else:
self._logs.append(msg)
+ def setUp(self):
+ super(DirUtilTestCase, self).setUp()
+ self._logs = []
+ tmp_dir = self.mkdtemp()
+ self.root_target = os.path.join(tmp_dir, 'deep')
+ self.target = os.path.join(self.root_target, 'here')
+ self.target2 = os.path.join(tmp_dir, 'deep2')
+ self.old_log = log.info
+ log.info = self._log
+
+ def tearDown(self):
+ log.info = self.old_log
+ super(DirUtilTestCase, self).tearDown()
+
def test_mkpath_remove_tree_verbosity(self):
mkpath(self.target, verbose=0)
wanted = []
- assert self._logs == wanted
+ self.assertEqual(self._logs, wanted)
remove_tree(self.root_target, verbose=0)
mkpath(self.target, verbose=1)
wanted = ['creating %s' % self.root_target, 'creating %s' % self.target]
- assert self._logs == wanted
+ self.assertEqual(self._logs, wanted)
self._logs = []
remove_tree(self.root_target, verbose=1)
wanted = ["removing '%s' (and everything under it)" % self.root_target]
- assert self._logs == wanted
+ self.assertEqual(self._logs, wanted)
- @pytest.mark.skipif("platform.system() == 'Windows'")
+ @unittest.skipIf(
+ sys.platform.startswith('win'),
+ "This test is only appropriate for POSIX-like systems.",
+ )
def test_mkpath_with_custom_mode(self):
# Get and set the current umask value for testing mode bits.
umask = os.umask(0o002)
os.umask(umask)
mkpath(self.target, 0o700)
- assert stat.S_IMODE(os.stat(self.target).st_mode) == 0o700 & ~umask
+ self.assertEqual(stat.S_IMODE(os.stat(self.target).st_mode), 0o700 & ~umask)
mkpath(self.target2, 0o555)
- assert stat.S_IMODE(os.stat(self.target2).st_mode) == 0o555 & ~umask
+ self.assertEqual(stat.S_IMODE(os.stat(self.target2).st_mode), 0o555 & ~umask)
def test_create_tree_verbosity(self):
create_tree(self.root_target, ['one', 'two', 'three'], verbose=0)
- assert self._logs == []
+ self.assertEqual(self._logs, [])
remove_tree(self.root_target, verbose=0)
wanted = ['creating %s' % self.root_target]
create_tree(self.root_target, ['one', 'two', 'three'], verbose=1)
- assert self._logs == wanted
+ self.assertEqual(self._logs, wanted)
remove_tree(self.root_target, verbose=0)
mkpath(self.target, verbose=0)
copy_tree(self.target, self.target2, verbose=0)
- assert self._logs == []
+ self.assertEqual(self._logs, [])
remove_tree(self.root_target, verbose=0)
with open(a_file, 'w') as f:
f.write('some content')
- wanted = ['copying {} -> {}'.format(a_file, self.target2)]
+ wanted = ['copying %s -> %s' % (a_file, self.target2)]
copy_tree(self.target, self.target2, verbose=1)
- assert self._logs == wanted
+ self.assertEqual(self._logs, wanted)
remove_tree(self.root_target, verbose=0)
remove_tree(self.target2, verbose=0)
fh.write('some content')
copy_tree(self.target, self.target2)
- assert os.listdir(self.target2) == ['ok.txt']
+ self.assertEqual(os.listdir(self.target2), ['ok.txt'])
remove_tree(self.root_target, verbose=0)
remove_tree(self.target2, verbose=0)
def test_ensure_relative(self):
if os.sep == '/':
- assert ensure_relative('/home/foo') == 'home/foo'
- assert ensure_relative('some/path') == 'some/path'
+ self.assertEqual(ensure_relative('/home/foo'), 'home/foo')
+ self.assertEqual(ensure_relative('some/path'), 'some/path')
else: # \\
- assert ensure_relative('c:\\home\\foo') == 'c:home\\foo'
- assert ensure_relative('home\\foo') == 'home\\foo'
+ self.assertEqual(ensure_relative('c:\\home\\foo'), 'c:home\\foo')
+ self.assertEqual(ensure_relative('home\\foo'), 'home\\foo')
def test_copy_tree_exception_in_listdir(self):
"""
An exception in listdir should raise a DistutilsFileError
"""
- with mock.patch("os.listdir", side_effect=OSError()), pytest.raises(
+ with patch("os.listdir", side_effect=OSError()), self.assertRaises(
errors.DistutilsFileError
):
src = self.tempdirs[-1]
dir_util.copy_tree(src, None)
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(DirUtilTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
import os
import io
import sys
+import unittest
import warnings
import textwrap
-import functools
-import unittest.mock as mock
-import pytest
+from unittest import mock
from distutils.dist import Distribution, fix_help_options
from distutils.cmd import Command
-from test.support import captured_stdout, captured_stderr
+from test.support import captured_stdout, captured_stderr, run_unittest
from .py38compat import TESTFN
from distutils.tests import support
from distutils import log
return self._config_files
-@pytest.fixture
-def clear_argv():
- del sys.argv[1:]
-
-
-@support.combine_markers
-@pytest.mark.usefixtures('save_env')
-@pytest.mark.usefixtures('save_argv')
-class TestDistributionBehavior(
+class DistributionTestCase(
support.LoggingSilencer,
support.TempdirManager,
+ support.EnvironGuard,
+ unittest.TestCase,
):
+ def setUp(self):
+ super(DistributionTestCase, self).setUp()
+ self.argv = sys.argv, sys.argv[:]
+ del sys.argv[1:]
+
+ def tearDown(self):
+ sys.argv = self.argv[0]
+ sys.argv[:] = self.argv[1]
+ super(DistributionTestCase, self).tearDown()
+
def create_distribution(self, configfiles=()):
d = TestDistribution()
d._config_files = configfiles
d.parse_command_line()
return d
- def test_command_packages_unspecified(self, clear_argv):
+ def test_command_packages_unspecified(self):
sys.argv.append("build")
d = self.create_distribution()
- assert d.get_command_packages() == ["distutils.command"]
+ self.assertEqual(d.get_command_packages(), ["distutils.command"])
- def test_command_packages_cmdline(self, clear_argv):
+ def test_command_packages_cmdline(self):
from distutils.tests.test_dist import test_dist
sys.argv.extend(
)
d = self.create_distribution()
# let's actually try to load our test command:
- assert d.get_command_packages() == [
- "distutils.command",
- "foo.bar",
- "distutils.tests",
- ]
+ self.assertEqual(
+ d.get_command_packages(),
+ ["distutils.command", "foo.bar", "distutils.tests"],
+ )
cmd = d.get_command_obj("test_dist")
- assert isinstance(cmd, test_dist)
- assert cmd.sample_option == "sometext"
+ self.assertIsInstance(cmd, test_dist)
+ self.assertEqual(cmd.sample_option, "sometext")
- @pytest.mark.skipif(
+ @unittest.skipIf(
'distutils' not in Distribution.parse_config_files.__module__,
- reason='Cannot test when virtualenv has monkey-patched Distribution',
+ 'Cannot test when virtualenv has monkey-patched Distribution.',
)
- def test_venv_install_options(self, request):
+ def test_venv_install_options(self):
sys.argv.append("install")
- request.addfinalizer(functools.partial(os.unlink, TESTFN))
+ self.addCleanup(os.unlink, TESTFN)
fakepath = '/somedir'
)
# Base case: Not in a Virtual Environment
- with mock.patch.multiple(sys, prefix='/a', base_prefix='/a'):
+ with mock.patch.multiple(sys, prefix='/a', base_prefix='/a') as values:
d = self.create_distribution([TESTFN])
option_tuple = (TESTFN, fakepath)
'root': option_tuple,
}
- assert sorted(d.command_options.get('install').keys()) == sorted(
- result_dict.keys()
+ self.assertEqual(
+ sorted(d.command_options.get('install').keys()), sorted(result_dict.keys())
)
for (key, value) in d.command_options.get('install').items():
- assert value == result_dict[key]
+ self.assertEqual(value, result_dict[key])
# Test case: In a Virtual Environment
- with mock.patch.multiple(sys, prefix='/a', base_prefix='/b'):
+ with mock.patch.multiple(sys, prefix='/a', base_prefix='/b') as values:
d = self.create_distribution([TESTFN])
for key in result_dict.keys():
- assert key not in d.command_options.get('install', {})
+ self.assertNotIn(key, d.command_options.get('install', {}))
- def test_command_packages_configfile(self, request, clear_argv):
+ def test_command_packages_configfile(self):
sys.argv.append("build")
- request.addfinalizer(functools.partial(os.unlink, TESTFN))
+ self.addCleanup(os.unlink, TESTFN)
f = open(TESTFN, "w")
try:
print("[global]", file=f)
f.close()
d = self.create_distribution([TESTFN])
- assert d.get_command_packages() == ["distutils.command", "foo.bar", "splat"]
+ self.assertEqual(
+ d.get_command_packages(), ["distutils.command", "foo.bar", "splat"]
+ )
# ensure command line overrides config:
sys.argv[1:] = ["--command-packages", "spork", "build"]
d = self.create_distribution([TESTFN])
- assert d.get_command_packages() == ["distutils.command", "spork"]
+ self.assertEqual(d.get_command_packages(), ["distutils.command", "spork"])
# Setting --command-packages to '' should cause the default to
# be used even if a config file specified something else:
sys.argv[1:] = ["--command-packages", "", "build"]
d = self.create_distribution([TESTFN])
- assert d.get_command_packages() == ["distutils.command"]
+ self.assertEqual(d.get_command_packages(), ["distutils.command"])
- def test_empty_options(self, request):
+ def test_empty_options(self):
# an empty options dictionary should not stay in the
# list of attributes
def _warn(msg):
warns.append(msg)
- request.addfinalizer(
- functools.partial(setattr, warnings, 'warn', warnings.warn)
- )
+ self.addCleanup(setattr, warnings, 'warn', warnings.warn)
warnings.warn = _warn
dist = Distribution(
attrs={
}
)
- assert len(warns) == 0
- assert 'options' not in dir(dist)
+ self.assertEqual(len(warns), 0)
+ self.assertNotIn('options', dir(dist))
def test_finalize_options(self):
attrs = {'keywords': 'one,two', 'platforms': 'one,two'}
dist.finalize_options()
# finalize_options splits platforms and keywords
- assert dist.metadata.platforms == ['one', 'two']
- assert dist.metadata.keywords == ['one', 'two']
+ self.assertEqual(dist.metadata.platforms, ['one', 'two'])
+ self.assertEqual(dist.metadata.keywords, ['one', 'two'])
attrs = {'keywords': 'foo bar', 'platforms': 'foo bar'}
dist = Distribution(attrs=attrs)
dist.finalize_options()
- assert dist.metadata.platforms == ['foo bar']
- assert dist.metadata.keywords == ['foo bar']
+ self.assertEqual(dist.metadata.platforms, ['foo bar'])
+ self.assertEqual(dist.metadata.keywords, ['foo bar'])
def test_get_command_packages(self):
dist = Distribution()
- assert dist.command_packages is None
+ self.assertEqual(dist.command_packages, None)
cmds = dist.get_command_packages()
- assert cmds == ['distutils.command']
- assert dist.command_packages == ['distutils.command']
+ self.assertEqual(cmds, ['distutils.command'])
+ self.assertEqual(dist.command_packages, ['distutils.command'])
dist.command_packages = 'one,two'
cmds = dist.get_command_packages()
- assert cmds == ['distutils.command', 'one', 'two']
+ self.assertEqual(cmds, ['distutils.command', 'one', 'two'])
def test_announce(self):
# make sure the level is known
dist = Distribution()
args = ('ok',)
kwargs = {'level': 'ok2'}
- with pytest.raises(ValueError):
- dist.announce(args, kwargs)
+ self.assertRaises(ValueError, dist.announce, args, kwargs)
def test_find_config_files_disable(self):
# Ticket #1180: Allow user to disable their home config file.
os.path.expanduser = old_expander
# make sure --no-user-cfg disables the user cfg file
- assert len(all_files) - 1 == len(files)
+ self.assertEqual(len(all_files) - 1, len(files))
+
+class MetadataTestCase(support.TempdirManager, support.EnvironGuard, unittest.TestCase):
+ def setUp(self):
+ super(MetadataTestCase, self).setUp()
+ self.argv = sys.argv, sys.argv[:]
+
+ def tearDown(self):
+ sys.argv = self.argv[0]
+ sys.argv[:] = self.argv[1]
+ super(MetadataTestCase, self).tearDown()
-@pytest.mark.usefixtures('save_env')
-@pytest.mark.usefixtures('save_argv')
-class MetadataTestCase(support.TempdirManager):
def format_metadata(self, dist):
sio = io.StringIO()
dist.metadata.write_pkg_file(sio)
attrs = {"name": "package", "version": "1.0"}
dist = Distribution(attrs)
meta = self.format_metadata(dist)
- assert "Metadata-Version: 1.0" in meta
- assert "provides:" not in meta.lower()
- assert "requires:" not in meta.lower()
- assert "obsoletes:" not in meta.lower()
+ self.assertIn("Metadata-Version: 1.0", meta)
+ self.assertNotIn("provides:", meta.lower())
+ self.assertNotIn("requires:", meta.lower())
+ self.assertNotIn("obsoletes:", meta.lower())
def test_provides(self):
attrs = {
"provides": ["package", "package.sub"],
}
dist = Distribution(attrs)
- assert dist.metadata.get_provides() == ["package", "package.sub"]
- assert dist.get_provides() == ["package", "package.sub"]
+ self.assertEqual(dist.metadata.get_provides(), ["package", "package.sub"])
+ self.assertEqual(dist.get_provides(), ["package", "package.sub"])
meta = self.format_metadata(dist)
- assert "Metadata-Version: 1.1" in meta
- assert "requires:" not in meta.lower()
- assert "obsoletes:" not in meta.lower()
+ self.assertIn("Metadata-Version: 1.1", meta)
+ self.assertNotIn("requires:", meta.lower())
+ self.assertNotIn("obsoletes:", meta.lower())
def test_provides_illegal(self):
- with pytest.raises(ValueError):
- Distribution(
- {"name": "package", "version": "1.0", "provides": ["my.pkg (splat)"]},
- )
+ self.assertRaises(
+ ValueError,
+ Distribution,
+ {"name": "package", "version": "1.0", "provides": ["my.pkg (splat)"]},
+ )
def test_requires(self):
attrs = {
"requires": ["other", "another (==1.0)"],
}
dist = Distribution(attrs)
- assert dist.metadata.get_requires() == ["other", "another (==1.0)"]
- assert dist.get_requires() == ["other", "another (==1.0)"]
+ self.assertEqual(dist.metadata.get_requires(), ["other", "another (==1.0)"])
+ self.assertEqual(dist.get_requires(), ["other", "another (==1.0)"])
meta = self.format_metadata(dist)
- assert "Metadata-Version: 1.1" in meta
- assert "provides:" not in meta.lower()
- assert "Requires: other" in meta
- assert "Requires: another (==1.0)" in meta
- assert "obsoletes:" not in meta.lower()
+ self.assertIn("Metadata-Version: 1.1", meta)
+ self.assertNotIn("provides:", meta.lower())
+ self.assertIn("Requires: other", meta)
+ self.assertIn("Requires: another (==1.0)", meta)
+ self.assertNotIn("obsoletes:", meta.lower())
def test_requires_illegal(self):
- with pytest.raises(ValueError):
- Distribution(
- {"name": "package", "version": "1.0", "requires": ["my.pkg (splat)"]},
- )
+ self.assertRaises(
+ ValueError,
+ Distribution,
+ {"name": "package", "version": "1.0", "requires": ["my.pkg (splat)"]},
+ )
def test_requires_to_list(self):
attrs = {"name": "package", "requires": iter(["other"])}
dist = Distribution(attrs)
- assert isinstance(dist.metadata.requires, list)
+ self.assertIsInstance(dist.metadata.requires, list)
def test_obsoletes(self):
attrs = {
"obsoletes": ["other", "another (<1.0)"],
}
dist = Distribution(attrs)
- assert dist.metadata.get_obsoletes() == ["other", "another (<1.0)"]
- assert dist.get_obsoletes() == ["other", "another (<1.0)"]
+ self.assertEqual(dist.metadata.get_obsoletes(), ["other", "another (<1.0)"])
+ self.assertEqual(dist.get_obsoletes(), ["other", "another (<1.0)"])
meta = self.format_metadata(dist)
- assert "Metadata-Version: 1.1" in meta
- assert "provides:" not in meta.lower()
- assert "requires:" not in meta.lower()
- assert "Obsoletes: other" in meta
- assert "Obsoletes: another (<1.0)" in meta
+ self.assertIn("Metadata-Version: 1.1", meta)
+ self.assertNotIn("provides:", meta.lower())
+ self.assertNotIn("requires:", meta.lower())
+ self.assertIn("Obsoletes: other", meta)
+ self.assertIn("Obsoletes: another (<1.0)", meta)
def test_obsoletes_illegal(self):
- with pytest.raises(ValueError):
- Distribution(
- {"name": "package", "version": "1.0", "obsoletes": ["my.pkg (splat)"]},
- )
+ self.assertRaises(
+ ValueError,
+ Distribution,
+ {"name": "package", "version": "1.0", "obsoletes": ["my.pkg (splat)"]},
+ )
def test_obsoletes_to_list(self):
attrs = {"name": "package", "obsoletes": iter(["other"])}
dist = Distribution(attrs)
- assert isinstance(dist.metadata.obsoletes, list)
+ self.assertIsInstance(dist.metadata.obsoletes, list)
def test_classifier(self):
attrs = {
'classifiers': ['Programming Language :: Python :: 3'],
}
dist = Distribution(attrs)
- assert dist.get_classifiers() == ['Programming Language :: Python :: 3']
+ self.assertEqual(
+ dist.get_classifiers(), ['Programming Language :: Python :: 3']
+ )
meta = self.format_metadata(dist)
- assert 'Metadata-Version: 1.1' in meta
+ self.assertIn('Metadata-Version: 1.1', meta)
def test_classifier_invalid_type(self):
attrs = {
with captured_stderr() as error:
d = Distribution(attrs)
# should have warning about passing a non-list
- assert 'should be a list' in error.getvalue()
+ self.assertIn('should be a list', error.getvalue())
# should be converted to a list
- assert isinstance(d.metadata.classifiers, list)
- assert d.metadata.classifiers == list(attrs['classifiers'])
+ self.assertIsInstance(d.metadata.classifiers, list)
+ self.assertEqual(d.metadata.classifiers, list(attrs['classifiers']))
def test_keywords(self):
attrs = {
'keywords': ['spam', 'eggs', 'life of brian'],
}
dist = Distribution(attrs)
- assert dist.get_keywords() == ['spam', 'eggs', 'life of brian']
+ self.assertEqual(dist.get_keywords(), ['spam', 'eggs', 'life of brian'])
def test_keywords_invalid_type(self):
attrs = {
with captured_stderr() as error:
d = Distribution(attrs)
# should have warning about passing a non-list
- assert 'should be a list' in error.getvalue()
+ self.assertIn('should be a list', error.getvalue())
# should be converted to a list
- assert isinstance(d.metadata.keywords, list)
- assert d.metadata.keywords == list(attrs['keywords'])
+ self.assertIsInstance(d.metadata.keywords, list)
+ self.assertEqual(d.metadata.keywords, list(attrs['keywords']))
def test_platforms(self):
attrs = {
'platforms': ['GNU/Linux', 'Some Evil Platform'],
}
dist = Distribution(attrs)
- assert dist.get_platforms() == ['GNU/Linux', 'Some Evil Platform']
+ self.assertEqual(dist.get_platforms(), ['GNU/Linux', 'Some Evil Platform'])
def test_platforms_invalid_types(self):
attrs = {
with captured_stderr() as error:
d = Distribution(attrs)
# should have warning about passing a non-list
- assert 'should be a list' in error.getvalue()
+ self.assertIn('should be a list', error.getvalue())
# should be converted to a list
- assert isinstance(d.metadata.platforms, list)
- assert d.metadata.platforms == list(attrs['platforms'])
+ self.assertIsInstance(d.metadata.platforms, list)
+ self.assertEqual(d.metadata.platforms, list(attrs['platforms']))
def test_download_url(self):
attrs = {
}
dist = Distribution(attrs)
meta = self.format_metadata(dist)
- assert 'Metadata-Version: 1.1' in meta
+ self.assertIn('Metadata-Version: 1.1', meta)
def test_long_description(self):
long_desc = textwrap.dedent(
dist = Distribution(attrs)
meta = self.format_metadata(dist)
meta = meta.replace('\n' + 8 * ' ', '\n')
- assert long_desc in meta
+ self.assertIn(long_desc, meta)
def test_custom_pydistutils(self):
# fixes #2166
if sys.platform in ('linux', 'darwin'):
os.environ['HOME'] = temp_dir
files = dist.find_config_files()
- assert user_filename in files
+ self.assertIn(user_filename, files)
# win32-style
if sys.platform == 'win32':
# home drive should be found
os.environ['USERPROFILE'] = temp_dir
files = dist.find_config_files()
- assert user_filename in files, '{!r} not found in {!r}'.format(
- user_filename, files
+ self.assertIn(
+ user_filename, files, '%r not found in %r' % (user_filename, files)
)
finally:
os.remove(user_filename)
def test_fix_help_options(self):
help_tuples = [('a', 'b', 'c', 'd'), (1, 2, 3, 4)]
fancy_options = fix_help_options(help_tuples)
- assert fancy_options[0] == ('a', 'b', 'c')
- assert fancy_options[1] == (1, 2, 3)
+ self.assertEqual(fancy_options[0], ('a', 'b', 'c'))
+ self.assertEqual(fancy_options[1], (1, 2, 3))
def test_show_help(self):
# smoke test, just makes sure some help is displayed
dist.parse_command_line()
output = [line for line in s.getvalue().split('\n') if line.strip() != '']
- assert output
+ self.assertTrue(output)
def test_read_metadata(self):
attrs = {
PKG_INFO.seek(0)
metadata.read_pkg_file(PKG_INFO)
- assert metadata.name == "package"
- assert metadata.version == "1.0"
- assert metadata.description == "xxx"
- assert metadata.download_url == 'http://example.com'
- assert metadata.keywords == ['one', 'two']
- assert metadata.platforms is None
- assert metadata.obsoletes is None
- assert metadata.requires == ['foo']
+ self.assertEqual(metadata.name, "package")
+ self.assertEqual(metadata.version, "1.0")
+ self.assertEqual(metadata.description, "xxx")
+ self.assertEqual(metadata.download_url, 'http://example.com')
+ self.assertEqual(metadata.keywords, ['one', 'two'])
+ self.assertEqual(metadata.platforms, None)
+ self.assertEqual(metadata.obsoletes, None)
+ self.assertEqual(metadata.requires, ['foo'])
+
+
+def test_suite():
+ suite = unittest.TestSuite()
+ suite.addTest(unittest.TestLoader().loadTestsFromTestCase(DistributionTestCase))
+ suite.addTest(unittest.TestLoader().loadTestsFromTestCase(MetadataTestCase))
+ return suite
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.extension."""
+import unittest
import os
import warnings
+from test.support import run_unittest
from distutils.extension import read_setup_file, Extension
from .py38compat import check_warnings
-import pytest
-class TestExtension:
+class ExtensionTestCase(unittest.TestCase):
def test_read_setup_file(self):
# trying to read a Setup file
# (sample extracted from the PyGame project)
'transform',
]
- assert names == wanted
+ self.assertEqual(names, wanted)
def test_extension_init(self):
# the first argument, which is the name, must be a string
- with pytest.raises(AssertionError):
- Extension(1, [])
+ self.assertRaises(AssertionError, Extension, 1, [])
ext = Extension('name', [])
- assert ext.name == 'name'
+ self.assertEqual(ext.name, 'name')
# the second argument, which is the list of files, must
# be a list of strings
- with pytest.raises(AssertionError):
- Extension('name', 'file')
- with pytest.raises(AssertionError):
- Extension('name', ['file', 1])
+ self.assertRaises(AssertionError, Extension, 'name', 'file')
+ self.assertRaises(AssertionError, Extension, 'name', ['file', 1])
ext = Extension('name', ['file1', 'file2'])
- assert ext.sources == ['file1', 'file2']
+ self.assertEqual(ext.sources, ['file1', 'file2'])
# other arguments have defaults
for attr in (
'swig_opts',
'depends',
):
- assert getattr(ext, attr) == []
+ self.assertEqual(getattr(ext, attr), [])
- assert ext.language is None
- assert ext.optional is None
+ self.assertEqual(ext.language, None)
+ self.assertEqual(ext.optional, None)
# if there are unknown keyword options, warn about them
with check_warnings() as w:
warnings.simplefilter('always')
ext = Extension('name', ['file1', 'file2'], chic=True)
- assert len(w.warnings) == 1
- assert str(w.warnings[0].message) == "Unknown Extension options: 'chic'"
+ self.assertEqual(len(w.warnings), 1)
+ self.assertEqual(
+ str(w.warnings[0].message), "Unknown Extension options: 'chic'"
+ )
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(ExtensionTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.file_util."""
+import unittest
import os
import errno
-import unittest.mock as mock
+from unittest.mock import patch
from distutils.file_util import move_file, copy_file
from distutils import log
from distutils.tests import support
from distutils.errors import DistutilsFileError
+from test.support import run_unittest
from .py38compat import unlink
-import pytest
-@pytest.fixture(autouse=True)
-def stuff(request, monkeypatch, distutils_managed_tempdir):
- self = request.instance
- self._logs = []
- tmp_dir = self.mkdtemp()
- self.source = os.path.join(tmp_dir, 'f1')
- self.target = os.path.join(tmp_dir, 'f2')
- self.target_dir = os.path.join(tmp_dir, 'd1')
- monkeypatch.setattr(log, 'info', self._log)
-
-
-class TestFileUtil(support.TempdirManager):
+class FileUtilTestCase(support.TempdirManager, unittest.TestCase):
def _log(self, msg, *args):
if len(args) > 0:
self._logs.append(msg % args)
else:
self._logs.append(msg)
+ def setUp(self):
+ super(FileUtilTestCase, self).setUp()
+ self._logs = []
+ self.old_log = log.info
+ log.info = self._log
+ tmp_dir = self.mkdtemp()
+ self.source = os.path.join(tmp_dir, 'f1')
+ self.target = os.path.join(tmp_dir, 'f2')
+ self.target_dir = os.path.join(tmp_dir, 'd1')
+
+ def tearDown(self):
+ log.info = self.old_log
+ super(FileUtilTestCase, self).tearDown()
+
def test_move_file_verbosity(self):
f = open(self.source, 'w')
try:
move_file(self.source, self.target, verbose=0)
wanted = []
- assert self._logs == wanted
+ self.assertEqual(self._logs, wanted)
# back to original state
move_file(self.target, self.source, verbose=0)
move_file(self.source, self.target, verbose=1)
- wanted = ['moving {} -> {}'.format(self.source, self.target)]
- assert self._logs == wanted
+ wanted = ['moving %s -> %s' % (self.source, self.target)]
+ self.assertEqual(self._logs, wanted)
# back to original state
move_file(self.target, self.source, verbose=0)
# now the target is a dir
os.mkdir(self.target_dir)
move_file(self.source, self.target_dir, verbose=1)
- wanted = ['moving {} -> {}'.format(self.source, self.target_dir)]
- assert self._logs == wanted
+ wanted = ['moving %s -> %s' % (self.source, self.target_dir)]
+ self.assertEqual(self._logs, wanted)
def test_move_file_exception_unpacking_rename(self):
# see issue 22182
- with mock.patch("os.rename", side_effect=OSError("wrong", 1)), pytest.raises(
+ with patch("os.rename", side_effect=OSError("wrong", 1)), self.assertRaises(
DistutilsFileError
):
with open(self.source, 'w') as fobj:
def test_move_file_exception_unpacking_unlink(self):
# see issue 22182
- with mock.patch(
- "os.rename", side_effect=OSError(errno.EXDEV, "wrong")
- ), mock.patch("os.unlink", side_effect=OSError("wrong", 1)), pytest.raises(
- DistutilsFileError
- ):
+ with patch("os.rename", side_effect=OSError(errno.EXDEV, "wrong")), patch(
+ "os.unlink", side_effect=OSError("wrong", 1)
+ ), self.assertRaises(DistutilsFileError):
with open(self.source, 'w') as fobj:
fobj.write('spam eggs')
move_file(self.source, self.target, verbose=0)
copy_file(self.source, self.target, link='hard')
st2 = os.stat(self.source)
st3 = os.stat(self.target)
- assert os.path.samestat(st, st2), (st, st2)
- assert os.path.samestat(st2, st3), (st2, st3)
- with open(self.source) as f:
- assert f.read() == 'some content'
+ self.assertTrue(os.path.samestat(st, st2), (st, st2))
+ self.assertTrue(os.path.samestat(st2, st3), (st2, st3))
+ with open(self.source, 'r') as f:
+ self.assertEqual(f.read(), 'some content')
def test_copy_file_hard_link_failure(self):
# If hard linking fails, copy_file() falls back on copying file
with open(self.source, 'w') as f:
f.write('some content')
st = os.stat(self.source)
- with mock.patch("os.link", side_effect=OSError(0, "linking unsupported")):
+ with patch("os.link", side_effect=OSError(0, "linking unsupported")):
copy_file(self.source, self.target, link='hard')
st2 = os.stat(self.source)
st3 = os.stat(self.target)
- assert os.path.samestat(st, st2), (st, st2)
- assert not os.path.samestat(st2, st3), (st2, st3)
+ self.assertTrue(os.path.samestat(st, st2), (st, st2))
+ self.assertFalse(os.path.samestat(st2, st3), (st2, st3))
for fn in (self.source, self.target):
- with open(fn) as f:
- assert f.read() == 'some content'
+ with open(fn, 'r') as f:
+ self.assertEqual(f.read(), 'some content')
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(FileUtilTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
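The setUp/tearDown pair restored above saves and restores distutils.log.info by hand; the same save/restore can also be expressed with addCleanup so the restore runs even if setUp fails partway. A minimal standalone sketch, not part of the patch (class and test names are hypothetical):

    import unittest
    from distutils import log

    class LogCaptureExample(unittest.TestCase):
        def setUp(self):
            self._logs = []
            old_info = log.info
            # capture formatted messages, mirroring the _log helper above
            log.info = lambda msg, *args: self._logs.append(msg % args if args else msg)
            self.addCleanup(setattr, log, 'info', old_info)

        def test_capture(self):
            log.info('moving %s -> %s', 'f1', 'f2')
            self.assertEqual(self._logs, ['moving f1 -> f2'])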
"""Tests for distutils.filelist."""
import os
import re
+import unittest
from distutils import debug
from distutils.log import WARN
from distutils.errors import DistutilsTemplateError
from distutils.filelist import glob_to_re, translate_pattern, FileList
from distutils import filelist
-from test.support import captured_stdout
+from test.support import captured_stdout, run_unittest
from distutils.tests import support
from . import py38compat as os_helper
-import pytest
MANIFEST_IN = """\
return s.replace('/', os.sep)
-class TestFileList(support.LoggingSilencer):
+class FileListTestCase(support.LoggingSilencer, unittest.TestCase):
def assertNoWarnings(self):
- assert self.get_logs(WARN) == []
+ self.assertEqual(self.get_logs(WARN), [])
self.clear_logs()
def assertWarnings(self):
- assert len(self.get_logs(WARN)) > 0
+ self.assertGreater(len(self.get_logs(WARN)), 0)
self.clear_logs()
def test_glob_to_re(self):
(r'foo\\??', r'(?s:foo\\\\[^%(sep)s][^%(sep)s])\Z'),
):
regex = regex % {'sep': sep}
- assert glob_to_re(glob) == regex
+ self.assertEqual(glob_to_re(glob), regex)
def test_process_template_line(self):
# testing all MANIFEST.in template patterns
mlp('dir/dir2/graft2'),
]
- assert file_list.files == wanted
+ self.assertEqual(file_list.files, wanted)
def test_debug_print(self):
file_list = FileList()
with captured_stdout() as stdout:
file_list.debug_print('xxx')
- assert stdout.getvalue() == ''
+ self.assertEqual(stdout.getvalue(), '')
debug.DEBUG = True
try:
with captured_stdout() as stdout:
file_list.debug_print('xxx')
- assert stdout.getvalue() == 'xxx\n'
+ self.assertEqual(stdout.getvalue(), 'xxx\n')
finally:
debug.DEBUG = False
file_list = FileList()
files = ['a', 'b', 'c']
file_list.set_allfiles(files)
- assert file_list.allfiles == files
+ self.assertEqual(file_list.allfiles, files)
def test_remove_duplicates(self):
file_list = FileList()
# files must be sorted beforehand (sdist does it)
file_list.sort()
file_list.remove_duplicates()
- assert file_list.files == ['a', 'b', 'c', 'g']
+ self.assertEqual(file_list.files, ['a', 'b', 'c', 'g'])
def test_translate_pattern(self):
# not regex
- assert hasattr(translate_pattern('a', anchor=True, is_regex=False), 'search')
+ self.assertTrue(
+ hasattr(translate_pattern('a', anchor=True, is_regex=False), 'search')
+ )
# is a regex
regex = re.compile('a')
- assert translate_pattern(regex, anchor=True, is_regex=True) == regex
+ self.assertEqual(translate_pattern(regex, anchor=True, is_regex=True), regex)
# plain string flagged as regex
- assert hasattr(translate_pattern('a', anchor=True, is_regex=True), 'search')
+ self.assertTrue(
+ hasattr(translate_pattern('a', anchor=True, is_regex=True), 'search')
+ )
# glob support
- assert translate_pattern('*.py', anchor=True, is_regex=False).search(
- 'filelist.py'
+ self.assertTrue(
+ translate_pattern('*.py', anchor=True, is_regex=False).search('filelist.py')
)
def test_exclude_pattern(self):
# return False if no match
file_list = FileList()
- assert not file_list.exclude_pattern('*.py')
+ self.assertFalse(file_list.exclude_pattern('*.py'))
# return True if files match
file_list = FileList()
file_list.files = ['a.py', 'b.py']
- assert file_list.exclude_pattern('*.py')
+ self.assertTrue(file_list.exclude_pattern('*.py'))
# test excludes
file_list = FileList()
file_list.files = ['a.py', 'a.txt']
file_list.exclude_pattern('*.py')
- assert file_list.files == ['a.txt']
+ self.assertEqual(file_list.files, ['a.txt'])
def test_include_pattern(self):
# return False if no match
file_list = FileList()
file_list.set_allfiles([])
- assert not file_list.include_pattern('*.py')
+ self.assertFalse(file_list.include_pattern('*.py'))
# return True if files match
file_list = FileList()
file_list.set_allfiles(['a.py', 'b.txt'])
- assert file_list.include_pattern('*.py')
+ self.assertTrue(file_list.include_pattern('*.py'))
# test * matches all files
file_list = FileList()
- assert file_list.allfiles is None
+ self.assertIsNone(file_list.allfiles)
file_list.set_allfiles(['a.py', 'b.txt'])
file_list.include_pattern('*')
- assert file_list.allfiles == ['a.py', 'b.txt']
+ self.assertEqual(file_list.allfiles, ['a.py', 'b.txt'])
def test_process_template(self):
mlp = make_local_path
'prune',
'blarg',
):
- with pytest.raises(DistutilsTemplateError):
- file_list.process_template_line(action)
+ self.assertRaises(
+ DistutilsTemplateError, file_list.process_template_line, action
+ )
# include
file_list = FileList()
file_list.set_allfiles(['a.py', 'b.txt', mlp('d/c.py')])
file_list.process_template_line('include *.py')
- assert file_list.files == ['a.py']
+ self.assertEqual(file_list.files, ['a.py'])
self.assertNoWarnings()
file_list.process_template_line('include *.rb')
- assert file_list.files == ['a.py']
+ self.assertEqual(file_list.files, ['a.py'])
self.assertWarnings()
# exclude
file_list.files = ['a.py', 'b.txt', mlp('d/c.py')]
file_list.process_template_line('exclude *.py')
- assert file_list.files == ['b.txt', mlp('d/c.py')]
+ self.assertEqual(file_list.files, ['b.txt', mlp('d/c.py')])
self.assertNoWarnings()
file_list.process_template_line('exclude *.rb')
- assert file_list.files == ['b.txt', mlp('d/c.py')]
+ self.assertEqual(file_list.files, ['b.txt', mlp('d/c.py')])
self.assertWarnings()
# global-include
file_list.set_allfiles(['a.py', 'b.txt', mlp('d/c.py')])
file_list.process_template_line('global-include *.py')
- assert file_list.files == ['a.py', mlp('d/c.py')]
+ self.assertEqual(file_list.files, ['a.py', mlp('d/c.py')])
self.assertNoWarnings()
file_list.process_template_line('global-include *.rb')
- assert file_list.files == ['a.py', mlp('d/c.py')]
+ self.assertEqual(file_list.files, ['a.py', mlp('d/c.py')])
self.assertWarnings()
# global-exclude
file_list.files = ['a.py', 'b.txt', mlp('d/c.py')]
file_list.process_template_line('global-exclude *.py')
- assert file_list.files == ['b.txt']
+ self.assertEqual(file_list.files, ['b.txt'])
self.assertNoWarnings()
file_list.process_template_line('global-exclude *.rb')
- assert file_list.files == ['b.txt']
+ self.assertEqual(file_list.files, ['b.txt'])
self.assertWarnings()
# recursive-include
file_list.set_allfiles(['a.py', mlp('d/b.py'), mlp('d/c.txt'), mlp('d/d/e.py')])
file_list.process_template_line('recursive-include d *.py')
- assert file_list.files == [mlp('d/b.py'), mlp('d/d/e.py')]
+ self.assertEqual(file_list.files, [mlp('d/b.py'), mlp('d/d/e.py')])
self.assertNoWarnings()
file_list.process_template_line('recursive-include e *.py')
- assert file_list.files == [mlp('d/b.py'), mlp('d/d/e.py')]
+ self.assertEqual(file_list.files, [mlp('d/b.py'), mlp('d/d/e.py')])
self.assertWarnings()
# recursive-exclude
file_list.files = ['a.py', mlp('d/b.py'), mlp('d/c.txt'), mlp('d/d/e.py')]
file_list.process_template_line('recursive-exclude d *.py')
- assert file_list.files == ['a.py', mlp('d/c.txt')]
+ self.assertEqual(file_list.files, ['a.py', mlp('d/c.txt')])
self.assertNoWarnings()
file_list.process_template_line('recursive-exclude e *.py')
- assert file_list.files == ['a.py', mlp('d/c.txt')]
+ self.assertEqual(file_list.files, ['a.py', mlp('d/c.txt')])
self.assertWarnings()
# graft
file_list.set_allfiles(['a.py', mlp('d/b.py'), mlp('d/d/e.py'), mlp('f/f.py')])
file_list.process_template_line('graft d')
- assert file_list.files == [mlp('d/b.py'), mlp('d/d/e.py')]
+ self.assertEqual(file_list.files, [mlp('d/b.py'), mlp('d/d/e.py')])
self.assertNoWarnings()
file_list.process_template_line('graft e')
- assert file_list.files == [mlp('d/b.py'), mlp('d/d/e.py')]
+ self.assertEqual(file_list.files, [mlp('d/b.py'), mlp('d/d/e.py')])
self.assertWarnings()
# prune
file_list.files = ['a.py', mlp('d/b.py'), mlp('d/d/e.py'), mlp('f/f.py')]
file_list.process_template_line('prune d')
- assert file_list.files == ['a.py', mlp('f/f.py')]
+ self.assertEqual(file_list.files, ['a.py', mlp('f/f.py')])
self.assertNoWarnings()
file_list.process_template_line('prune e')
- assert file_list.files == ['a.py', mlp('f/f.py')]
+ self.assertEqual(file_list.files, ['a.py', mlp('f/f.py')])
self.assertWarnings()
-class TestFindAll:
+class FindAllTestCase(unittest.TestCase):
@os_helper.skip_unless_symlink
def test_missing_symlink(self):
with os_helper.temp_cwd():
os.symlink('foo', 'bar')
- assert filelist.findall() == []
+ self.assertEqual(filelist.findall(), [])
def test_basic_discovery(self):
"""
file2 = os.path.join('bar', 'file2.txt')
os_helper.create_empty_file(file2)
expected = [file2, file1]
- assert sorted(filelist.findall()) == expected
+ self.assertEqual(sorted(filelist.findall()), expected)
def test_non_local_discovery(self):
"""
file1 = os.path.join(temp_dir, 'file1.txt')
os_helper.create_empty_file(file1)
expected = [file1]
- assert filelist.findall(temp_dir) == expected
+ self.assertEqual(filelist.findall(temp_dir), expected)
@os_helper.skip_unless_symlink
def test_symlink_loop(self):
os.symlink('.', link)
files = filelist.findall(temp_dir)
assert len(files) == 1
+
+
+def test_suite():
+ return unittest.TestSuite(
+ [
+ unittest.TestLoader().loadTestsFromTestCase(FileListTestCase),
+ unittest.TestLoader().loadTestsFromTestCase(FindAllTestCase),
+ ]
+ )
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
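For reference, the include/exclude behaviour these assertions exercise can be reproduced directly with distutils.filelist.FileList; a short standalone sketch (the file names are made up):

    from distutils.filelist import FileList

    fl = FileList()
    fl.set_allfiles(['a.py', 'b.txt'])
    fl.include_pattern('*.py')   # copies matching names from allfiles into files
    assert fl.files == ['a.py']
    fl.exclude_pattern('*.py')   # removes matching names from files again
    assert fl.files == []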
import os
import sys
+import unittest
import site
-import pathlib
-from test.support import captured_stdout
-
-import pytest
+from test.support import captured_stdout, run_unittest
from distutils import sysconfig
from distutils.command.install import install
from distutils.tests import support
from test import support as test_support
+import pytest
+
def _make_ext_name(modname):
return modname + sysconfig.get_config_var('EXT_SUFFIX')
-@support.combine_markers
-@pytest.mark.usefixtures('save_env')
-class TestInstall(
+class InstallTestCase(
support.TempdirManager,
+ support.EnvironGuard,
support.LoggingSilencer,
+ unittest.TestCase,
):
@pytest.mark.xfail(
'platform.system() == "Windows" and sys.version_info > (3, 11)',
cmd.home = destination
cmd.ensure_finalized()
- assert cmd.install_base == destination
- assert cmd.install_platbase == destination
+ self.assertEqual(cmd.install_base, destination)
+ self.assertEqual(cmd.install_platbase, destination)
def check_path(got, expected):
got = os.path.normpath(got)
expected = os.path.normpath(expected)
- assert got == expected
+ self.assertEqual(got, expected)
impl_name = sys.implementation.name.replace("cpython", "python")
libdir = os.path.join(destination, "lib", impl_name)
check_path(cmd.install_scripts, os.path.join(destination, "bin"))
check_path(cmd.install_data, destination)
- def test_user_site(self, monkeypatch):
+ def test_user_site(self):
# test install with --user
# preparing the environment for the test
+ self.old_user_base = site.USER_BASE
+ self.old_user_site = site.USER_SITE
self.tmpdir = self.mkdtemp()
- orig_site = site.USER_SITE
- orig_base = site.USER_BASE
- monkeypatch.setattr(site, 'USER_BASE', os.path.join(self.tmpdir, 'B'))
- monkeypatch.setattr(site, 'USER_SITE', os.path.join(self.tmpdir, 'S'))
- monkeypatch.setattr(install_module, 'USER_BASE', site.USER_BASE)
- monkeypatch.setattr(install_module, 'USER_SITE', site.USER_SITE)
+ self.user_base = os.path.join(self.tmpdir, 'B')
+ self.user_site = os.path.join(self.tmpdir, 'S')
+ site.USER_BASE = self.user_base
+ site.USER_SITE = self.user_site
+ install_module.USER_BASE = self.user_base
+ install_module.USER_SITE = self.user_site
def _expanduser(path):
if path.startswith('~'):
return os.path.normpath(self.tmpdir + path[1:])
return path
- monkeypatch.setattr(os.path, 'expanduser', _expanduser)
+ self.old_expand = os.path.expanduser
+ os.path.expanduser = _expanduser
+
+ def cleanup():
+ site.USER_BASE = self.old_user_base
+ site.USER_SITE = self.old_user_site
+ install_module.USER_BASE = self.old_user_base
+ install_module.USER_SITE = self.old_user_site
+ os.path.expanduser = self.old_expand
+
+ self.addCleanup(cleanup)
for key in ('nt_user', 'posix_user'):
- assert key in INSTALL_SCHEMES
+ self.assertIn(key, INSTALL_SCHEMES)
dist = Distribution({'name': 'xx'})
cmd = install(dist)
# making sure the user option is there
 options = [name for name, short, label in cmd.user_options]
- assert 'user' in options
+ self.assertIn('user', options)
# setting a value
cmd.user = 1
# user base and site shouldn't be created yet
- assert not os.path.exists(site.USER_BASE)
- assert not os.path.exists(site.USER_SITE)
+ self.assertFalse(os.path.exists(self.user_base))
+ self.assertFalse(os.path.exists(self.user_site))
# let's run finalize
cmd.ensure_finalized()
# now they should
- assert os.path.exists(site.USER_BASE)
- assert os.path.exists(site.USER_SITE)
+ self.assertTrue(os.path.exists(self.user_base))
+ self.assertTrue(os.path.exists(self.user_site))
- assert 'userbase' in cmd.config_vars
- assert 'usersite' in cmd.config_vars
+ self.assertIn('userbase', cmd.config_vars)
+ self.assertIn('usersite', cmd.config_vars)
- actual_headers = os.path.relpath(cmd.install_headers, site.USER_BASE)
+ actual_headers = os.path.relpath(cmd.install_headers, self.user_base)
if os.name == 'nt':
- site_path = os.path.relpath(os.path.dirname(orig_site), orig_base)
+ site_path = os.path.relpath(
+ os.path.dirname(self.old_user_site), self.old_user_base
+ )
include = os.path.join(site_path, 'Include')
else:
include = sysconfig.get_python_inc(0, '')
expect_headers = os.path.join(include, 'xx')
- assert os.path.normcase(actual_headers) == os.path.normcase(expect_headers)
+ self.assertEqual(
+ os.path.normcase(actual_headers), os.path.normcase(expect_headers)
+ )
def test_handle_extra_path(self):
dist = Distribution({'name': 'xx', 'extra_path': 'path,dirs'})
# two elements
cmd.handle_extra_path()
- assert cmd.extra_path == ['path', 'dirs']
- assert cmd.extra_dirs == 'dirs'
- assert cmd.path_file == 'path'
+ self.assertEqual(cmd.extra_path, ['path', 'dirs'])
+ self.assertEqual(cmd.extra_dirs, 'dirs')
+ self.assertEqual(cmd.path_file, 'path')
# one element
cmd.extra_path = ['path']
cmd.handle_extra_path()
- assert cmd.extra_path == ['path']
- assert cmd.extra_dirs == 'path'
- assert cmd.path_file == 'path'
+ self.assertEqual(cmd.extra_path, ['path'])
+ self.assertEqual(cmd.extra_dirs, 'path')
+ self.assertEqual(cmd.path_file, 'path')
# none
dist.extra_path = cmd.extra_path = None
cmd.handle_extra_path()
- assert cmd.extra_path is None
- assert cmd.extra_dirs == ''
- assert cmd.path_file is None
+ self.assertEqual(cmd.extra_path, None)
+ self.assertEqual(cmd.extra_dirs, '')
+ self.assertEqual(cmd.path_file, None)
# three elements (no way !)
cmd.extra_path = 'path,dirs,again'
- with pytest.raises(DistutilsOptionError):
- cmd.handle_extra_path()
+ self.assertRaises(DistutilsOptionError, cmd.handle_extra_path)
def test_finalize_options(self):
dist = Distribution({'name': 'xx'})
# install-base/install-platbase -- not both
cmd.prefix = 'prefix'
cmd.install_base = 'base'
- with pytest.raises(DistutilsOptionError):
- cmd.finalize_options()
+ self.assertRaises(DistutilsOptionError, cmd.finalize_options)
# must supply either home or prefix/exec-prefix -- not both
cmd.install_base = None
cmd.home = 'home'
- with pytest.raises(DistutilsOptionError):
- cmd.finalize_options()
+ self.assertRaises(DistutilsOptionError, cmd.finalize_options)
# can't combine user with prefix/exec_prefix/home or
# install_(plat)base
cmd.prefix = None
cmd.user = 'user'
- with pytest.raises(DistutilsOptionError):
- cmd.finalize_options()
+ self.assertRaises(DistutilsOptionError, cmd.finalize_options)
def test_record(self):
install_dir = self.mkdtemp()
'sayhi',
'UNKNOWN-0.0.0-py%s.%s.egg-info' % sys.version_info[:2],
]
- assert found == expected
+ self.assertEqual(found, expected)
def test_record_extensions(self):
cmd = test_support.missing_compiler_executable()
if cmd is not None:
- pytest.skip('The %r command is not found' % cmd)
+ self.skipTest('The %r command is not found' % cmd)
install_dir = self.mkdtemp()
project_dir, dist = self.create_dist(
ext_modules=[Extension('xx', ['xxmodule.c'])]
cmd.ensure_finalized()
cmd.run()
- content = pathlib.Path(cmd.record).read_text()
+ f = open(cmd.record)
+ try:
+ content = f.read()
+ finally:
+ f.close()
found = [os.path.basename(line) for line in content.splitlines()]
expected = [
_make_ext_name('xx'),
'UNKNOWN-0.0.0-py%s.%s.egg-info' % sys.version_info[:2],
]
- assert found == expected
+ self.assertEqual(found, expected)
def test_debug_mode(self):
# this covers the code called when DEBUG is set
self.test_record()
finally:
install_module.DEBUG = False
- assert len(self.logs) > old_logs_len
+ self.assertGreater(len(self.logs), old_logs_len)
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(InstallTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
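The extra_path handling asserted in test_handle_extra_path above can also be driven outside the test suite; a minimal sketch using the same public distutils API (the 'xx' project name and 'path,dirs' value are placeholders):

    from distutils.core import Distribution
    from distutils.command.install import install

    dist = Distribution({'name': 'xx', 'extra_path': 'path,dirs'})
    cmd = install(dist)
    cmd.handle_extra_path()
    assert cmd.extra_path == ['path', 'dirs']   # split on the comma
    assert cmd.extra_dirs == 'dirs'             # second element becomes the sub-directory
    assert cmd.path_file == 'path'              # first element becomes the .pth file name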
"""Tests for distutils.command.install_data."""
import os
-
-import pytest
+import unittest
from distutils.command.install_data import install_data
from distutils.tests import support
+from test.support import run_unittest
-@pytest.mark.usefixtures('save_env')
-class TestInstallData(
+class InstallDataTestCase(
support.TempdirManager,
support.LoggingSilencer,
+ support.EnvironGuard,
+ unittest.TestCase,
):
def test_simple_run(self):
pkg_dir, dist = self.create_dist()
self.write_file(two, 'xxx')
cmd.data_files = [one, (inst2, [two])]
- assert cmd.get_inputs() == [one, (inst2, [two])]
+ self.assertEqual(cmd.get_inputs(), [one, (inst2, [two])])
# let's run the command
cmd.ensure_finalized()
cmd.run()
# let's check the result
- assert len(cmd.get_outputs()) == 2
+ self.assertEqual(len(cmd.get_outputs()), 2)
rtwo = os.path.split(two)[-1]
- assert os.path.exists(os.path.join(inst2, rtwo))
+ self.assertTrue(os.path.exists(os.path.join(inst2, rtwo)))
rone = os.path.split(one)[-1]
- assert os.path.exists(os.path.join(inst, rone))
+ self.assertTrue(os.path.exists(os.path.join(inst, rone)))
cmd.outfiles = []
# let's try with warn_dir one
cmd.run()
# let's check the result
- assert len(cmd.get_outputs()) == 2
- assert os.path.exists(os.path.join(inst2, rtwo))
- assert os.path.exists(os.path.join(inst, rone))
+ self.assertEqual(len(cmd.get_outputs()), 2)
+ self.assertTrue(os.path.exists(os.path.join(inst2, rtwo)))
+ self.assertTrue(os.path.exists(os.path.join(inst, rone)))
cmd.outfiles = []
# now using root and empty dir
cmd.root = os.path.join(pkg_dir, 'root')
+ inst3 = os.path.join(cmd.install_dir, 'inst3')
inst4 = os.path.join(pkg_dir, 'inst4')
three = os.path.join(cmd.install_dir, 'three')
self.write_file(three, 'xx')
cmd.run()
# let's check the result
- assert len(cmd.get_outputs()) == 4
- assert os.path.exists(os.path.join(inst2, rtwo))
- assert os.path.exists(os.path.join(inst, rone))
+ self.assertEqual(len(cmd.get_outputs()), 4)
+ self.assertTrue(os.path.exists(os.path.join(inst2, rtwo)))
+ self.assertTrue(os.path.exists(os.path.join(inst, rone)))
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(InstallDataTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.command.install_headers."""
import os
-
-import pytest
+import unittest
from distutils.command.install_headers import install_headers
from distutils.tests import support
+from test.support import run_unittest
-@pytest.mark.usefixtures('save_env')
-class TestInstallHeaders(
+class InstallHeadersTestCase(
support.TempdirManager,
support.LoggingSilencer,
+ support.EnvironGuard,
+ unittest.TestCase,
):
def test_simple_run(self):
# we have two headers
pkg_dir, dist = self.create_dist(headers=headers)
cmd = install_headers(dist)
- assert cmd.get_inputs() == headers
+ self.assertEqual(cmd.get_inputs(), headers)
# let's run the command
cmd.install_dir = os.path.join(pkg_dir, 'inst')
cmd.run()
# let's check the results
- assert len(cmd.get_outputs()) == 2
+ self.assertEqual(len(cmd.get_outputs()), 2)
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(InstallHeadersTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
import sys
import os
import importlib.util
-
-import pytest
+import unittest
from distutils.command.install_lib import install_lib
from distutils.extension import Extension
from distutils.tests import support
from distutils.errors import DistutilsOptionError
+from test.support import run_unittest
-@support.combine_markers
-@pytest.mark.usefixtures('save_env')
-class TestInstallLib(
+class InstallLibTestCase(
support.TempdirManager,
support.LoggingSilencer,
+ support.EnvironGuard,
+ unittest.TestCase,
):
def test_finalize_options(self):
dist = self.create_dist()[1]
cmd = install_lib(dist)
cmd.finalize_options()
- assert cmd.compile == 1
- assert cmd.optimize == 0
+ self.assertEqual(cmd.compile, 1)
+ self.assertEqual(cmd.optimize, 0)
# optimize must be 0, 1, or 2
cmd.optimize = 'foo'
- with pytest.raises(DistutilsOptionError):
- cmd.finalize_options()
+ self.assertRaises(DistutilsOptionError, cmd.finalize_options)
cmd.optimize = '4'
- with pytest.raises(DistutilsOptionError):
- cmd.finalize_options()
+ self.assertRaises(DistutilsOptionError, cmd.finalize_options)
cmd.optimize = '2'
cmd.finalize_options()
- assert cmd.optimize == 2
+ self.assertEqual(cmd.optimize, 2)
- @pytest.mark.skipif('sys.dont_write_bytecode')
+ @unittest.skipIf(sys.dont_write_bytecode, 'byte-compile disabled')
def test_byte_compile(self):
project_dir, dist = self.create_dist()
os.chdir(project_dir)
pyc_opt_file = importlib.util.cache_from_source(
'foo.py', optimization=cmd.optimize
)
- assert os.path.exists(pyc_file)
- assert os.path.exists(pyc_opt_file)
+ self.assertTrue(os.path.exists(pyc_file))
+ self.assertTrue(os.path.exists(pyc_opt_file))
def test_get_outputs(self):
project_dir, dist = self.create_dist()
# get_outputs should return 4 elements: spam/__init__.py and .pyc,
# foo.import-tag-abiflags.so / foo.pyd
outputs = cmd.get_outputs()
- assert len(outputs) == 4, outputs
+ self.assertEqual(len(outputs), 4, outputs)
def test_get_inputs(self):
project_dir, dist = self.create_dist()
# get_inputs should return 2 elements: spam/__init__.py and
# foo.import-tag-abiflags.so / foo.pyd
inputs = cmd.get_inputs()
- assert len(inputs) == 2, inputs
+ self.assertEqual(len(inputs), 2, inputs)
def test_dont_write_bytecode(self):
# makes sure byte_compile is not used
finally:
sys.dont_write_bytecode = old_dont_write_bytecode
- assert 'byte-compiling is disabled' in self.logs[0][1] % self.logs[0][2]
+ self.assertIn('byte-compiling is disabled', self.logs[0][1] % self.logs[0][2])
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(InstallLibTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
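test_byte_compile above relies on importlib.util.cache_from_source to locate the expected .pyc files; a quick standalone illustration of that lookup (the 'foo.py' path is arbitrary):

    import importlib.util

    # e.g. __pycache__/foo.cpython-310.pyc
    print(importlib.util.cache_from_source('foo.py'))
    # e.g. __pycache__/foo.cpython-310.opt-2.pyc
    print(importlib.util.cache_from_source('foo.py', optimization=2))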
"""Tests for distutils.command.install_scripts."""
import os
+import unittest
from distutils.command.install_scripts import install_scripts
from distutils.core import Distribution
from distutils.tests import support
+from test.support import run_unittest
-class TestInstallScripts(support.TempdirManager, support.LoggingSilencer):
+class InstallScriptsTestCase(
+ support.TempdirManager, support.LoggingSilencer, unittest.TestCase
+):
def test_default_settings(self):
dist = Distribution()
dist.command_obj["build"] = support.DummyCommand(build_scripts="/foo/bar")
skip_build=1,
)
cmd = install_scripts(dist)
- assert not cmd.force
- assert not cmd.skip_build
- assert cmd.build_dir is None
- assert cmd.install_dir is None
+ self.assertFalse(cmd.force)
+ self.assertFalse(cmd.skip_build)
+ self.assertIsNone(cmd.build_dir)
+ self.assertIsNone(cmd.install_dir)
cmd.finalize_options()
- assert cmd.force
- assert cmd.skip_build
- assert cmd.build_dir == "/foo/bar"
- assert cmd.install_dir == "/splat/funk"
+ self.assertTrue(cmd.force)
+ self.assertTrue(cmd.skip_build)
+ self.assertEqual(cmd.build_dir, "/foo/bar")
+ self.assertEqual(cmd.install_dir, "/splat/funk")
def test_installation(self):
source = self.mkdtemp()
installed = os.listdir(target)
for name in expected:
- assert name in installed
+ self.assertIn(name, installed)
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(InstallScriptsTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
import io
import sys
-from test.support import swap_attr
-
-import pytest
+import unittest
+from test.support import swap_attr, run_unittest
from distutils import log
-class TestLog:
- @pytest.mark.parametrize(
- 'errors',
- (
+class TestLog(unittest.TestCase):
+ def test_non_ascii(self):
+ # Issues #8663, #34421: test that non-encodable text is escaped with
+ # backslashreplace error handler and encodable non-ASCII text is
+ # output as is.
+ for errors in (
'strict',
'backslashreplace',
'surrogateescape',
'replace',
'ignore',
- ),
- )
- def test_non_ascii(self, errors):
- # Issues #8663, #34421: test that non-encodable text is escaped with
- # backslashreplace error handler and encodable non-ASCII text is
- # output as is.
- stdout = io.TextIOWrapper(io.BytesIO(), encoding='cp437', errors=errors)
- stderr = io.TextIOWrapper(io.BytesIO(), encoding='cp437', errors=errors)
- old_threshold = log.set_threshold(log.DEBUG)
- try:
- with swap_attr(sys, 'stdout', stdout), swap_attr(sys, 'stderr', stderr):
- log.debug('Dεbug\tMėssãge')
- log.fatal('Fαtal\tÈrrōr')
- finally:
- log.set_threshold(old_threshold)
-
- stdout.seek(0)
- assert stdout.read().rstrip() == (
- 'Dεbug\tM?ss?ge'
- if errors == 'replace'
- else 'Dεbug\tMssge'
- if errors == 'ignore'
- else 'Dεbug\tM\\u0117ss\\xe3ge'
- )
- stderr.seek(0)
- assert stderr.read().rstrip() == (
- 'Fαtal\t?rr?r'
- if errors == 'replace'
- else 'Fαtal\trrr'
- if errors == 'ignore'
- else 'Fαtal\t\\xc8rr\\u014dr'
- )
+ ):
+ with self.subTest(errors=errors):
+ stdout = io.TextIOWrapper(io.BytesIO(), encoding='cp437', errors=errors)
+ stderr = io.TextIOWrapper(io.BytesIO(), encoding='cp437', errors=errors)
+ old_threshold = log.set_threshold(log.DEBUG)
+ try:
+ with swap_attr(sys, 'stdout', stdout), swap_attr(
+ sys, 'stderr', stderr
+ ):
+ log.debug('Dεbug\tMėssãge')
+ log.fatal('Fαtal\tÈrrōr')
+ finally:
+ log.set_threshold(old_threshold)
+
+ stdout.seek(0)
+ self.assertEqual(
+ stdout.read().rstrip(),
+ 'Dεbug\tM?ss?ge'
+ if errors == 'replace'
+ else 'Dεbug\tMssge'
+ if errors == 'ignore'
+ else 'Dεbug\tM\\u0117ss\\xe3ge',
+ )
+ stderr.seek(0)
+ self.assertEqual(
+ stderr.read().rstrip(),
+ 'Fαtal\t?rr?r'
+ if errors == 'replace'
+ else 'Fαtal\trrr'
+ if errors == 'ignore'
+ else 'Fαtal\t\\xc8rr\\u014dr',
+ )
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(TestLog)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
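The hunk above folds a parametrized test back into a loop over self.subTest, so each error handler is still reported as its own sub-result; a compact sketch of the same pattern (class name and sample data are hypothetical):

    import unittest

    class SubTestExample(unittest.TestCase):
        def test_error_handlers(self):
            for errors in ('replace', 'ignore', 'backslashreplace'):
                with self.subTest(errors=errors):
                    encoded = 'Dεbug'.encode('ascii', errors)
                    self.assertIsInstance(encoded, bytes)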
"""Tests for distutils.msvc9compiler."""
import sys
+import unittest
import os
from distutils.errors import DistutilsPlatformError
from distutils.tests import support
-import pytest
+from test.support import run_unittest
# A manifest with the only assembly reference being the msvcrt assembly, so
# should have the assembly completely stripped. Note that although the
SKIP_MESSAGE = "These tests are only for win32"
-@pytest.mark.skipif('SKIP_MESSAGE', reason=SKIP_MESSAGE)
-class Testmsvc9compiler(support.TempdirManager):
+@unittest.skipUnless(SKIP_MESSAGE is None, SKIP_MESSAGE)
+class msvc9compilerTestCase(support.TempdirManager, unittest.TestCase):
def test_no_compiler(self):
# makes sure query_vcvarsall raises
# a DistutilsPlatformError if the compiler
old_find_vcvarsall = msvc9compiler.find_vcvarsall
msvc9compiler.find_vcvarsall = _find_vcvarsall
try:
- with pytest.raises(DistutilsPlatformError):
- query_vcvarsall('wont find this version')
+ self.assertRaises(
+ DistutilsPlatformError, query_vcvarsall, 'wont find this version'
+ )
finally:
msvc9compiler.find_vcvarsall = old_find_vcvarsall
def test_reg_class(self):
from distutils.msvc9compiler import Reg
- with pytest.raises(KeyError):
- Reg.get_value('xxx', 'xxx')
+ self.assertRaises(KeyError, Reg.get_value, 'xxx', 'xxx')
# looking for values that should exist on all
# windows registry versions.
path = r'Control Panel\Desktop'
v = Reg.get_value(path, 'dragfullwindows')
- assert v in ('0', '1', '2')
+ self.assertIn(v, ('0', '1', '2'))
import winreg
HKCU = winreg.HKEY_CURRENT_USER
keys = Reg.read_keys(HKCU, 'xxxx')
- assert keys is None
+ self.assertEqual(keys, None)
keys = Reg.read_keys(HKCU, r'Control Panel')
- assert 'Desktop' in keys
+ self.assertIn('Desktop', keys)
def test_remove_visual_c_ref(self):
from distutils.msvc9compiler import MSVCCompiler
f.close()
# makes sure the manifest was properly cleaned
- assert content == _CLEANED_MANIFEST
+ self.assertEqual(content, _CLEANED_MANIFEST)
def test_remove_entire_manifest(self):
from distutils.msvc9compiler import MSVCCompiler
compiler = MSVCCompiler()
got = compiler._remove_visual_c_ref(manifest)
- assert got is None
+ self.assertIsNone(got)
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(msvc9compilerTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils._msvccompiler."""
import sys
+import unittest
+import unittest.mock
import os
import threading
-import unittest.mock as mock
-
-import pytest
from distutils.errors import DistutilsPlatformError
from distutils.tests import support
-from distutils import _msvccompiler
+from test.support import run_unittest
-needs_winreg = pytest.mark.skipif('not hasattr(_msvccompiler, "winreg")')
+SKIP_MESSAGE = None if sys.platform == "win32" else "These tests are only for win32"
-class Testmsvccompiler(support.TempdirManager):
+@unittest.skipUnless(SKIP_MESSAGE is None, SKIP_MESSAGE)
+class msvccompilerTestCase(support.TempdirManager, unittest.TestCase):
def test_no_compiler(self):
+ import distutils._msvccompiler as _msvccompiler
+
# makes sure query_vcvarsall raises
# a DistutilsPlatformError if the compiler
# is not found
old_find_vcvarsall = _msvccompiler._find_vcvarsall
_msvccompiler._find_vcvarsall = _find_vcvarsall
try:
- with pytest.raises(DistutilsPlatformError):
- _msvccompiler._get_vc_env(
- 'wont find this version',
- )
+ self.assertRaises(
+ DistutilsPlatformError,
+ _msvccompiler._get_vc_env,
+ 'wont find this version',
+ )
finally:
_msvccompiler._find_vcvarsall = old_find_vcvarsall
- @needs_winreg
def test_get_vc_env_unicode(self):
+ import distutils._msvccompiler as _msvccompiler
+
test_var = 'ṰḖṤṪ┅ṼẨṜ'
test_value = '₃⁴₅'
os.environ[test_var] = test_value
try:
env = _msvccompiler._get_vc_env('x86')
- assert test_var.lower() in env
- assert test_value == env[test_var.lower()]
+ self.assertIn(test_var.lower(), env)
+ self.assertEqual(test_value, env[test_var.lower()])
finally:
os.environ.pop(test_var)
if old_distutils_use_sdk:
os.environ['DISTUTILS_USE_SDK'] = old_distutils_use_sdk
- @needs_winreg
- @pytest.mark.parametrize('ver', (2015, 2017))
- def test_get_vc(self, ver):
- # This function cannot be mocked, so pass if VC is found
- # and skip otherwise.
- lookup = getattr(_msvccompiler, f'_find_vc{ver}')
- expected_version = {2015: 14, 2017: 15}[ver]
- version, path = lookup()
- if not version:
- pytest.skip(f"VS {ver} is not installed")
- assert version >= expected_version
- assert os.path.isdir(path)
+ def test_get_vc2017(self):
+ import distutils._msvccompiler as _msvccompiler
+
+ # This function cannot be mocked, so pass it if we find VS 2017
+ # and mark it skipped if we do not.
+ version, path = _msvccompiler._find_vc2017()
+ if version:
+ self.assertGreaterEqual(version, 15)
+ self.assertTrue(os.path.isdir(path))
+ else:
+ raise unittest.SkipTest("VS 2017 is not installed")
+
+ def test_get_vc2015(self):
+ import distutils._msvccompiler as _msvccompiler
+
+ # This function cannot be mocked, so pass it if we find VS 2015
+ # and mark it skipped if we do not.
+ version, path = _msvccompiler._find_vc2015()
+ if version:
+ self.assertGreaterEqual(version, 14)
+ self.assertTrue(os.path.isdir(path))
+ else:
+ raise unittest.SkipTest("VS 2015 is not installed")
class CheckThread(threading.Thread):
return not self.exc_info
-class TestSpawn:
+class TestSpawn(unittest.TestCase):
def test_concurrent_safe(self):
"""
Concurrent calls to spawn should have consistent results.
"""
+ import distutils._msvccompiler as _msvccompiler
+
compiler = _msvccompiler.MSVCCompiler()
compiler._paths = "expected"
inner_cmd = 'import os; assert os.environ["PATH"] == "expected"'
If CCompiler.spawn has been monkey-patched without support
for an env, it should still execute.
"""
+ import distutils._msvccompiler as _msvccompiler
from distutils import ccompiler
compiler = _msvccompiler.MSVCCompiler()
"A spawn without an env argument."
assert os.environ["PATH"] == "expected"
- with mock.patch.object(ccompiler.CCompiler, 'spawn', CCompiler_spawn):
+ with unittest.mock.patch.object(ccompiler.CCompiler, 'spawn', CCompiler_spawn):
compiler.spawn(["n/a"])
assert os.environ.get("PATH") != "expected"
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(msvccompilerTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
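The test_get_vc2015/test_get_vc2017 methods restored above skip from inside the test body because the probe cannot be decorated away; the general pattern looks like this (the toolchain flag is a stand-in, not real detection code):

    import unittest

    class RuntimeSkipExample(unittest.TestCase):
        def test_optional_toolchain(self):
            toolchain_found = False  # stand-in for a _find_vc2017()-style probe
            if not toolchain_found:
                raise unittest.SkipTest("toolchain is not installed")
            self.assertTrue(toolchain_found)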
"""Tests for distutils.command.register."""
import os
+import unittest
import getpass
import urllib
+import warnings
+
+from test.support import run_unittest
+
+from .py38compat import check_warnings
from distutils.command import register as register_module
from distutils.command.register import register
from distutils.log import INFO
from distutils.tests.test_config import BasePyPIRCCommandTestCase
-import pytest
try:
import docutils
"""
-class Inputs:
+class Inputs(object):
"""Fakes user inputs."""
def __init__(self, *answers):
self.index += 1
-class FakeOpener:
+class FakeOpener(object):
"""Fakes a PyPI server"""
def __init__(self):
}.get(name.lower(), default)
-@pytest.fixture(autouse=True)
-def autopass(monkeypatch):
- monkeypatch.setattr(getpass, 'getpass', lambda prompt: 'password')
+class RegisterTestCase(BasePyPIRCCommandTestCase):
+ def setUp(self):
+ super(RegisterTestCase, self).setUp()
+ # patching the password prompt
+ self._old_getpass = getpass.getpass
+ def _getpass(prompt):
+ return 'password'
-@pytest.fixture(autouse=True)
-def fake_opener(monkeypatch, request):
- opener = FakeOpener()
- monkeypatch.setattr(urllib.request, 'build_opener', opener)
- monkeypatch.setattr(urllib.request, '_opener', None)
- request.instance.conn = opener
+ getpass.getpass = _getpass
+ urllib.request._opener = None
+ self.old_opener = urllib.request.build_opener
+ self.conn = urllib.request.build_opener = FakeOpener()
+ def tearDown(self):
+ getpass.getpass = self._old_getpass
+ urllib.request._opener = None
+ urllib.request.build_opener = self.old_opener
+ super(RegisterTestCase, self).tearDown()
-class TestRegister(BasePyPIRCCommandTestCase):
def _get_cmd(self, metadata=None):
if metadata is None:
metadata = {
'author_email': 'xxx',
'name': 'xxx',
'version': 'xxx',
- 'long_description': 'xxx',
}
pkg_info, dist = self.create_dist(**metadata)
return register(dist)
cmd = self._get_cmd()
# we shouldn't have a .pypirc file yet
- assert not os.path.exists(self.rc)
+ self.assertFalse(os.path.exists(self.rc))
# patching input and getpass.getpass
# so register gets happy
del register_module.input
# we should have a brand new .pypirc file
- assert os.path.exists(self.rc)
+ self.assertTrue(os.path.exists(self.rc))
# with the content similar to WANTED_PYPIRC
f = open(self.rc)
try:
content = f.read()
- assert content == WANTED_PYPIRC
+ self.assertEqual(content, WANTED_PYPIRC)
finally:
f.close()
# let's see what the server received : we should
# have 2 similar requests
- assert len(self.conn.reqs) == 2
+ self.assertEqual(len(self.conn.reqs), 2)
req1 = dict(self.conn.reqs[0].headers)
req2 = dict(self.conn.reqs[1].headers)
- assert req1['Content-length'] == '1358'
- assert req2['Content-length'] == '1358'
- assert b'xxx' in self.conn.reqs[1].data
+ self.assertEqual(req1['Content-length'], '1359')
+ self.assertEqual(req2['Content-length'], '1359')
+ self.assertIn(b'xxx', self.conn.reqs[1].data)
def test_password_not_in_file(self):
# dist.password should be set
# therefore used afterwards by other commands
- assert cmd.distribution.password == 'password'
+ self.assertEqual(cmd.distribution.password, 'password')
def test_registering(self):
# this test runs choice 2
del register_module.input
# we should have send a request
- assert len(self.conn.reqs) == 1
+ self.assertEqual(len(self.conn.reqs), 1)
req = self.conn.reqs[0]
headers = dict(req.headers)
- assert headers['Content-length'] == '608'
- assert b'tarek' in req.data
+ self.assertEqual(headers['Content-length'], '608')
+ self.assertIn(b'tarek', req.data)
def test_password_reset(self):
# this test runs choice 3
del register_module.input
# we should have send a request
- assert len(self.conn.reqs) == 1
+ self.assertEqual(len(self.conn.reqs), 1)
req = self.conn.reqs[0]
headers = dict(req.headers)
- assert headers['Content-length'] == '290'
- assert b'tarek' in req.data
+ self.assertEqual(headers['Content-length'], '290')
+ self.assertIn(b'tarek', req.data)
+ @unittest.skipUnless(docutils is not None, 'needs docutils')
def test_strict(self):
- # testing the strict option
+ # testing the strict option
# when on, the register command stops if
# the metadata is incomplete or if
# long_description is not reSt compliant
- pytest.importorskip('docutils')
-
# empty metadata
cmd = self._get_cmd({})
cmd.ensure_finalized()
cmd.strict = 1
- with pytest.raises(DistutilsSetupError):
- cmd.run()
+ self.assertRaises(DistutilsSetupError, cmd.run)
# metadata are OK but long_description is broken
metadata = {
cmd = self._get_cmd(metadata)
cmd.ensure_finalized()
cmd.strict = 1
- with pytest.raises(DistutilsSetupError):
- cmd.run()
+ self.assertRaises(DistutilsSetupError, cmd.run)
# now something that works
metadata['long_description'] = 'title\n=====\n\ntext'
finally:
del register_module.input
- def test_register_invalid_long_description(self, monkeypatch):
- pytest.importorskip('docutils')
+ @unittest.skipUnless(docutils is not None, 'needs docutils')
+ def test_register_invalid_long_description(self):
description = ':funkie:`str`' # mimic Sphinx-specific markup
metadata = {
'url': 'xxx',
cmd.ensure_finalized()
cmd.strict = True
inputs = Inputs('2', 'tarek', 'tarek@ziade.org')
- monkeypatch.setattr(register_module, 'input', inputs, raising=False)
+ register_module.input = inputs
+ self.addCleanup(delattr, register_module, 'input')
- with pytest.raises(DistutilsSetupError):
- cmd.run()
+ self.assertRaises(DistutilsSetupError, cmd.run)
+
+ def test_check_metadata_deprecated(self):
+ # makes sure check_metadata is deprecated
+ cmd = self._get_cmd()
+ with check_warnings() as w:
+ warnings.simplefilter("always")
+ cmd.check_metadata()
+ self.assertEqual(len(w.warnings), 1)
def test_list_classifiers(self):
cmd = self._get_cmd()
cmd.list_classifiers = 1
cmd.run()
results = self.get_logs(INFO)
- assert results == ['running check', 'xxx']
+ self.assertEqual(results, ['running check', 'xxx'])
def test_show_response(self):
# test that the --show-response option return a well formatted response
del register_module.input
results = self.get_logs(INFO)
- assert results[3] == 75 * '-' + '\nxxx\n' + 75 * '-'
+ self.assertEqual(results[3], 75 * '-' + '\nxxx\n' + 75 * '-')
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(RegisterTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
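test_check_metadata_deprecated above records warnings through the py38compat check_warnings shim; the plain standard-library equivalent, for reference (the helper function is invented for the sketch):

    import warnings

    def call_deprecated():
        warnings.warn("check_metadata is deprecated", DeprecationWarning)

    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        call_deprecated()

    assert len(caught) == 1
    assert issubclass(caught[0].category, DeprecationWarning)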
"""Tests for distutils.command.sdist."""
import os
import tarfile
+import unittest
import warnings
import zipfile
from os.path import join
from textwrap import dedent
-from test.support import captured_stdout
+from test.support import captured_stdout, run_unittest
from .unix_compat import require_unix_id, require_uid_0, pwd, grp
-import pytest
-import path
-import jaraco.path
-
from .py38compat import check_warnings
+try:
+ import zlib
+
+ ZLIB_SUPPORT = True
+except ImportError:
+ ZLIB_SUPPORT = False
+
from distutils.command.sdist import sdist, show_formats
from distutils.core import Distribution
from distutils.tests.test_config import BasePyPIRCCommandTestCase
from distutils.errors import DistutilsOptionError
-from distutils.spawn import find_executable # noqa: F401
+from distutils.spawn import find_executable
from distutils.log import WARN
from distutils.filelist import FileList
from distutils.archive_util import ARCHIVE_FORMATS
"""
-@pytest.fixture(autouse=True)
-def project_dir(request, pypirc):
- self = request.instance
- jaraco.path.build(
- {
- 'somecode': {
- '__init__.py': '#',
- },
- 'README': 'xxx',
- 'setup.py': SETUP_PY,
- },
- self.tmp_dir,
- )
- with path.Path(self.tmp_dir):
- yield
-
-
-class TestSDist(BasePyPIRCCommandTestCase):
+class SDistTestCase(BasePyPIRCCommandTestCase):
+ def setUp(self):
+ # PyPIRCCommandTestCase creates a temp dir already
+ # and put it in self.tmp_dir
+ super(SDistTestCase, self).setUp()
+ # setting up an environment
+ self.old_path = os.getcwd()
+ os.mkdir(join(self.tmp_dir, 'somecode'))
+ os.mkdir(join(self.tmp_dir, 'dist'))
+ # a package, and a README
+ self.write_file((self.tmp_dir, 'README'), 'xxx')
+ self.write_file((self.tmp_dir, 'somecode', '__init__.py'), '#')
+ self.write_file((self.tmp_dir, 'setup.py'), SETUP_PY)
+ os.chdir(self.tmp_dir)
+
+ def tearDown(self):
+ # back to normal
+ os.chdir(self.old_path)
+ super(SDistTestCase, self).tearDown()
+
def get_cmd(self, metadata=None):
"""Returns a cmd"""
if metadata is None:
cmd.dist_dir = 'dist'
return dist, cmd
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
def test_prune_file_list(self):
# this test creates a project with some VCS dirs and an NFS rename
# file, then launches sdist to check they get pruned on all systems
# now let's check what we have
dist_folder = join(self.tmp_dir, 'dist')
files = os.listdir(dist_folder)
- assert files == ['fake-1.0.zip']
+ self.assertEqual(files, ['fake-1.0.zip'])
zip_file = zipfile.ZipFile(join(dist_folder, 'fake-1.0.zip'))
try:
'somecode/',
'somecode/__init__.py',
]
- assert sorted(content) == ['fake-1.0/' + x for x in expected]
+ self.assertEqual(sorted(content), ['fake-1.0/' + x for x in expected])
- @pytest.mark.usefixtures('needs_zlib')
- @pytest.mark.skipif("not find_executable('tar')")
- @pytest.mark.skipif("not find_executable('gzip')")
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
+ @unittest.skipIf(find_executable('tar') is None, "The tar command is not found")
+ @unittest.skipIf(find_executable('gzip') is None, "The gzip command is not found")
def test_make_distribution(self):
# now building a sdist
dist, cmd = self.get_cmd()
dist_folder = join(self.tmp_dir, 'dist')
result = os.listdir(dist_folder)
result.sort()
- assert result == ['fake-1.0.tar', 'fake-1.0.tar.gz']
+ self.assertEqual(result, ['fake-1.0.tar', 'fake-1.0.tar.gz'])
os.remove(join(dist_folder, 'fake-1.0.tar'))
os.remove(join(dist_folder, 'fake-1.0.tar.gz'))
result = os.listdir(dist_folder)
result.sort()
- assert result == ['fake-1.0.tar', 'fake-1.0.tar.gz']
+ self.assertEqual(result, ['fake-1.0.tar', 'fake-1.0.tar.gz'])
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
def test_add_defaults(self):
# http://bugs.python.org/issue2279
# now let's check what we have
dist_folder = join(self.tmp_dir, 'dist')
files = os.listdir(dist_folder)
- assert files == ['fake-1.0.zip']
+ self.assertEqual(files, ['fake-1.0.zip'])
zip_file = zipfile.ZipFile(join(dist_folder, 'fake-1.0.zip'))
try:
'somecode/doc.dat',
'somecode/doc.txt',
]
- assert sorted(content) == ['fake-1.0/' + x for x in expected]
+ self.assertEqual(sorted(content), ['fake-1.0/' + x for x in expected])
# checking the MANIFEST
f = open(join(self.tmp_dir, 'MANIFEST'))
manifest = f.read()
finally:
f.close()
- assert manifest == MANIFEST % {'sep': os.sep}
+ self.assertEqual(manifest, MANIFEST % {'sep': os.sep})
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
def test_metadata_check_option(self):
 # testing the `metadata-check` option
dist, cmd = self.get_cmd(metadata={})
warnings = [
msg for msg in self.get_logs(WARN) if msg.startswith('warning: check:')
]
- assert len(warnings) == 1
+ self.assertEqual(len(warnings), 1)
# trying with a complete set of metadata
self.clear_logs()
warnings = [
msg for msg in self.get_logs(WARN) if msg.startswith('warning: check:')
]
- assert len(warnings) == 0
+ self.assertEqual(len(warnings), 0)
def test_check_metadata_deprecated(self):
# makes sure make_metadata is deprecated
with check_warnings() as w:
warnings.simplefilter("always")
cmd.check_metadata()
- assert len(w.warnings) == 1
+ self.assertEqual(len(w.warnings), 1)
def test_show_formats(self):
with captured_stdout() as stdout:
for line in stdout.getvalue().split('\n')
if line.strip().startswith('--formats=')
]
- assert len(output) == num_formats
+ self.assertEqual(len(output), num_formats)
def test_finalize_options(self):
dist, cmd = self.get_cmd()
cmd.finalize_options()
# default options set by finalize
- assert cmd.manifest == 'MANIFEST'
- assert cmd.template == 'MANIFEST.in'
- assert cmd.dist_dir == 'dist'
+ self.assertEqual(cmd.manifest, 'MANIFEST')
+ self.assertEqual(cmd.template, 'MANIFEST.in')
+ self.assertEqual(cmd.dist_dir, 'dist')
# formats has to be a string splitable on (' ', ',') or
# a stringlist
cmd.formats = 1
- with pytest.raises(DistutilsOptionError):
- cmd.finalize_options()
+ self.assertRaises(DistutilsOptionError, cmd.finalize_options)
cmd.formats = ['zip']
cmd.finalize_options()
# formats has to be known
cmd.formats = 'supazipa'
- with pytest.raises(DistutilsOptionError):
- cmd.finalize_options()
+ self.assertRaises(DistutilsOptionError, cmd.finalize_options)
# the following tests make sure there is a nice error message instead
# of a traceback when parsing an invalid manifest template
cmd.filelist = FileList()
cmd.read_template()
warnings = self.get_logs(WARN)
- assert len(warnings) == 1
+ self.assertEqual(len(warnings), 1)
def test_invalid_template_unknown_command(self):
self._check_template('taunt knights *')
# this manifest command takes one argument
self._check_template('prune')
- @pytest.mark.skipif("platform.system() != 'Windows'")
+ @unittest.skipIf(os.name != 'nt', 'test relevant for Windows only')
def test_invalid_template_wrong_path(self):
# on Windows, trailing slashes are not allowed
# this used to crash instead of raising a warning: #8286
self._check_template('include examples/')
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
def test_get_file_list(self):
# make sure MANIFEST is recalculated
dist, cmd = self.get_cmd()
finally:
f.close()
- assert len(manifest) == 5
+ self.assertEqual(len(manifest), 5)
# adding a file
self.write_file((self.tmp_dir, 'somecode', 'doc2.txt'), '#')
f.close()
# do we have the new file in MANIFEST ?
- assert len(manifest2) == 6
- assert 'doc2.txt' in manifest2[-1]
+ self.assertEqual(len(manifest2), 6)
+ self.assertIn('doc2.txt', manifest2[-1])
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
def test_manifest_marker(self):
# check that autogenerated MANIFESTs have a marker
dist, cmd = self.get_cmd()
finally:
f.close()
- assert manifest[0] == '# file GENERATED by distutils, do NOT edit'
+ self.assertEqual(manifest[0], '# file GENERATED by distutils, do NOT edit')
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, "Need zlib support to run")
def test_manifest_comments(self):
# make sure comments don't cause exceptions or wrong includes
contents = dedent(
self.write_file((self.tmp_dir, 'bad.py'), "# don't pick me!")
self.write_file((self.tmp_dir, '#bad.py'), "# don't pick me!")
cmd.run()
- assert cmd.filelist.files == ['good.py']
+ self.assertEqual(cmd.filelist.files, ['good.py'])
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, 'Need zlib support to run')
def test_manual_manifest(self):
# check that a MANIFEST without a marker is left alone
dist, cmd = self.get_cmd()
'This project maintains its MANIFEST file itself.',
)
cmd.run()
- assert cmd.filelist.files == ['README.manual']
+ self.assertEqual(cmd.filelist.files, ['README.manual'])
f = open(cmd.manifest)
try:
finally:
f.close()
- assert manifest == ['README.manual']
+ self.assertEqual(manifest, ['README.manual'])
archive_name = join(self.tmp_dir, 'dist', 'fake-1.0.tar.gz')
archive = tarfile.open(archive_name)
filenames = [tarinfo.name for tarinfo in archive]
finally:
archive.close()
- assert sorted(filenames) == [
- 'fake-1.0',
- 'fake-1.0/PKG-INFO',
- 'fake-1.0/README.manual',
- ]
+ self.assertEqual(
+ sorted(filenames),
+ ['fake-1.0', 'fake-1.0/PKG-INFO', 'fake-1.0/README.manual'],
+ )
- @pytest.mark.usefixtures('needs_zlib')
+ @unittest.skipUnless(ZLIB_SUPPORT, "requires zlib")
@require_unix_id
@require_uid_0
- @pytest.mark.skipif("not find_executable('tar')")
- @pytest.mark.skipif("not find_executable('gzip')")
+ @unittest.skipIf(find_executable('tar') is None, "The tar command is not found")
+ @unittest.skipIf(find_executable('gzip') is None, "The gzip command is not found")
def test_make_distribution_owner_group(self):
# now building a sdist
dist, cmd = self.get_cmd()
archive = tarfile.open(archive_name)
try:
for member in archive.getmembers():
- assert member.uid == 0
- assert member.gid == 0
+ self.assertEqual(member.uid, 0)
+ self.assertEqual(member.gid, 0)
finally:
archive.close()
# rights (see #7408)
try:
for member in archive.getmembers():
- assert member.uid == os.getuid()
+ self.assertEqual(member.uid, os.getuid())
finally:
archive.close()
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(SDistTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
import os
import stat
import sys
-import unittest.mock as mock
-
-from test.support import unix_shell
+import unittest.mock
+from test.support import run_unittest, unix_shell
from . import py38compat as os_helper
from distutils.spawn import spawn
from distutils.errors import DistutilsExecError
from distutils.tests import support
-import pytest
-class TestSpawn(support.TempdirManager, support.LoggingSilencer):
- @pytest.mark.skipif("os.name not in ('nt', 'posix')")
+class SpawnTestCase(support.TempdirManager, support.LoggingSilencer, unittest.TestCase):
+ @unittest.skipUnless(os.name in ('nt', 'posix'), 'Runs only under posix or nt')
def test_spawn(self):
tmpdir = self.mkdtemp()
self.write_file(exe, 'exit 1')
os.chmod(exe, 0o777)
- with pytest.raises(DistutilsExecError):
- spawn([exe])
+ self.assertRaises(DistutilsExecError, spawn, [exe])
# now something that works
if sys.platform != 'win32':
# test path parameter
rv = find_executable(program, path=tmp_dir)
- assert rv == filename
+ self.assertEqual(rv, filename)
if sys.platform == 'win32':
# test without ".exe" extension
rv = find_executable(program_noeext, path=tmp_dir)
- assert rv == filename
+ self.assertEqual(rv, filename)
# test find in the current directory
with os_helper.change_cwd(tmp_dir):
rv = find_executable(program)
- assert rv == program
+ self.assertEqual(rv, program)
# test non-existent program
dont_exist_program = "dontexist_" + program
rv = find_executable(dont_exist_program, path=tmp_dir)
- assert rv is None
+ self.assertIsNone(rv)
# PATH='': no match, except in the current directory
with os_helper.EnvironmentVarGuard() as env:
env['PATH'] = ''
- with mock.patch(
+ with unittest.mock.patch(
'distutils.spawn.os.confstr', return_value=tmp_dir, create=True
- ), mock.patch('distutils.spawn.os.defpath', tmp_dir):
+ ), unittest.mock.patch('distutils.spawn.os.defpath', tmp_dir):
rv = find_executable(program)
- assert rv is None
+ self.assertIsNone(rv)
# look in current directory
with os_helper.change_cwd(tmp_dir):
rv = find_executable(program)
- assert rv == program
+ self.assertEqual(rv, program)
# PATH=':': explicitly looks in the current directory
with os_helper.EnvironmentVarGuard() as env:
env['PATH'] = os.pathsep
- with mock.patch(
+ with unittest.mock.patch(
'distutils.spawn.os.confstr', return_value='', create=True
- ), mock.patch('distutils.spawn.os.defpath', ''):
+ ), unittest.mock.patch('distutils.spawn.os.defpath', ''):
rv = find_executable(program)
- assert rv is None
+ self.assertIsNone(rv)
# look in current directory
with os_helper.change_cwd(tmp_dir):
rv = find_executable(program)
- assert rv == program
+ self.assertEqual(rv, program)
# missing PATH: test os.confstr("CS_PATH") and os.defpath
with os_helper.EnvironmentVarGuard() as env:
env.pop('PATH', None)
# without confstr
- with mock.patch(
+ with unittest.mock.patch(
'distutils.spawn.os.confstr', side_effect=ValueError, create=True
- ), mock.patch('distutils.spawn.os.defpath', tmp_dir):
+ ), unittest.mock.patch('distutils.spawn.os.defpath', tmp_dir):
rv = find_executable(program)
- assert rv == filename
+ self.assertEqual(rv, filename)
# with confstr
- with mock.patch(
+ with unittest.mock.patch(
'distutils.spawn.os.confstr', return_value=tmp_dir, create=True
- ), mock.patch('distutils.spawn.os.defpath', ''):
+ ), unittest.mock.patch('distutils.spawn.os.defpath', ''):
rv = find_executable(program)
- assert rv == filename
+ self.assertEqual(rv, filename)
def test_spawn_missing_exe(self):
- with pytest.raises(DistutilsExecError) as ctx:
+ with self.assertRaises(DistutilsExecError) as ctx:
spawn(['does-not-exist'])
- assert "command 'does-not-exist' failed" in str(ctx.value)
+ self.assertIn("command 'does-not-exist' failed", str(ctx.exception))
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(SpawnTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
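The find_executable lookups asserted above boil down to a PATH (or explicit directory) search; a tiny standalone sketch (results are platform-dependent):

    from distutils.spawn import find_executable

    print(find_executable('python'))                    # full path, or None if not on PATH
    print(find_executable('definitely-not-a-program'))  # None
    print(find_executable('sh', path='/bin'))           # /bin/sh on most POSIX systems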
import subprocess
import sys
import textwrap
+import unittest
-import pytest
import jaraco.envs
import distutils
from distutils import sysconfig
-from distutils.ccompiler import get_default_compiler # noqa: F401
+from distutils.ccompiler import get_default_compiler
from distutils.unixccompiler import UnixCCompiler
-from test.support import swap_item
+from distutils.tests import support
+from test.support import run_unittest, swap_item
from .py38compat import TESTFN
-@pytest.mark.usefixtures('save_env')
-@pytest.mark.usefixtures('cleanup_testfn')
-class TestSysconfig:
+class SysconfigTestCase(support.EnvironGuard, unittest.TestCase):
+ def setUp(self):
+ super(SysconfigTestCase, self).setUp()
+ self.makefile = None
+
+ def tearDown(self):
+ if self.makefile is not None:
+ os.unlink(self.makefile)
+ self.cleanup_testfn()
+ super(SysconfigTestCase, self).tearDown()
+
def cleanup_testfn(self):
if os.path.isfile(TESTFN):
os.remove(TESTFN)
def test_get_config_h_filename(self):
config_h = sysconfig.get_config_h_filename()
- assert os.path.isfile(config_h), config_h
+ self.assertTrue(os.path.isfile(config_h), config_h)
- @pytest.mark.skipif("platform.system() == 'Windows'")
- @pytest.mark.skipif("sys.implementation.name != 'cpython'")
+ @unittest.skipIf(
+ sys.platform == 'win32', 'Makefile only exists on Unix like systems'
+ )
+ @unittest.skipIf(
+ sys.implementation.name != 'cpython', 'Makefile only exists in CPython'
+ )
def test_get_makefile_filename(self):
makefile = sysconfig.get_makefile_filename()
- assert os.path.isfile(makefile), makefile
+ self.assertTrue(os.path.isfile(makefile), makefile)
def test_get_python_lib(self):
# XXX doesn't work on Linux when Python was never installed before
# self.assertTrue(os.path.isdir(lib_dir), lib_dir)
# test for pythonxx.lib?
- assert sysconfig.get_python_lib() != sysconfig.get_python_lib(prefix=TESTFN)
+ self.assertNotEqual(
+ sysconfig.get_python_lib(), sysconfig.get_python_lib(prefix=TESTFN)
+ )
def test_get_config_vars(self):
cvars = sysconfig.get_config_vars()
- assert isinstance(cvars, dict)
- assert cvars
+ self.assertIsInstance(cvars, dict)
+ self.assertTrue(cvars)
- @pytest.mark.skipif('sysconfig.IS_PYPY')
- @pytest.mark.xfail(reason="broken")
+ @unittest.skip('sysconfig.IS_PYPY')
def test_srcdir(self):
# See Issues #15322, #15364.
srcdir = sysconfig.get_config_var('srcdir')
- assert os.path.isabs(srcdir), srcdir
- assert os.path.isdir(srcdir), srcdir
+ self.assertTrue(os.path.isabs(srcdir), srcdir)
+ self.assertTrue(os.path.isdir(srcdir), srcdir)
if sysconfig.python_build:
# The python executable has not been installed so srcdir
# should be a full source checkout.
Python_h = os.path.join(srcdir, 'Include', 'Python.h')
- assert os.path.exists(Python_h), Python_h
- assert sysconfig._is_python_source_dir(srcdir)
+ self.assertTrue(os.path.exists(Python_h), Python_h)
+ self.assertTrue(sysconfig._is_python_source_dir(srcdir))
elif os.name == 'posix':
- assert os.path.dirname(sysconfig.get_makefile_filename()) == srcdir
+ self.assertEqual(os.path.dirname(sysconfig.get_makefile_filename()), srcdir)
def test_srcdir_independent_of_cwd(self):
# srcdir should be independent of the current working directory
srcdir2 = sysconfig.get_config_var('srcdir')
finally:
os.chdir(cwd)
- assert srcdir == srcdir2
+ self.assertEqual(srcdir, srcdir2)
def customize_compiler(self):
# make sure AR gets caught
return comp
- @pytest.mark.skipif("get_default_compiler() != 'unix'")
+ @unittest.skipUnless(
+ get_default_compiler() == 'unix', 'not testing if default compiler is not unix'
+ )
def test_customize_compiler(self):
# Make sure that sysconfig._config_vars is initialized
sysconfig.get_config_vars()
os.environ['RANLIB'] = 'env_ranlib'
comp = self.customize_compiler()
- assert comp.exes['archiver'] == 'env_ar --env-arflags'
- assert comp.exes['preprocessor'] == 'env_cpp --env-cppflags'
- assert comp.exes['compiler'] == 'env_cc --sc-cflags --env-cflags --env-cppflags'
- assert comp.exes['compiler_so'] == (
- 'env_cc --sc-cflags ' '--env-cflags ' '--env-cppflags --sc-ccshared'
+ self.assertEqual(comp.exes['archiver'], 'env_ar --env-arflags')
+ self.assertEqual(comp.exes['preprocessor'], 'env_cpp --env-cppflags')
+ self.assertEqual(
+ comp.exes['compiler'], 'env_cc --sc-cflags --env-cflags --env-cppflags'
+ )
+ self.assertEqual(
+ comp.exes['compiler_so'],
+ ('env_cc --sc-cflags ' '--env-cflags ' '--env-cppflags --sc-ccshared'),
)
- assert comp.exes['compiler_cxx'] == 'env_cxx --env-cxx-flags'
- assert comp.exes['linker_exe'] == 'env_cc'
- assert comp.exes['linker_so'] == (
- 'env_ldshared --env-ldflags --env-cflags' ' --env-cppflags'
+ self.assertEqual(comp.exes['compiler_cxx'], 'env_cxx --env-cxx-flags')
+ self.assertEqual(comp.exes['linker_exe'], 'env_cc')
+ self.assertEqual(
+ comp.exes['linker_so'],
+ ('env_ldshared --env-ldflags --env-cflags' ' --env-cppflags'),
)
- assert comp.shared_lib_extension == 'sc_shutil_suffix'
+ self.assertEqual(comp.shared_lib_extension, 'sc_shutil_suffix')
if sys.platform == "darwin":
- assert comp.exes['ranlib'] == 'env_ranlib'
+ self.assertEqual(comp.exes['ranlib'], 'env_ranlib')
else:
- assert 'ranlib' not in comp.exes
+ self.assertNotIn('ranlib', comp.exes)
del os.environ['AR']
del os.environ['CC']
del os.environ['RANLIB']
comp = self.customize_compiler()
- assert comp.exes['archiver'] == 'sc_ar --sc-arflags'
- assert comp.exes['preprocessor'] == 'sc_cc -E'
- assert comp.exes['compiler'] == 'sc_cc --sc-cflags'
- assert comp.exes['compiler_so'] == 'sc_cc --sc-cflags --sc-ccshared'
- assert comp.exes['compiler_cxx'] == 'sc_cxx'
- assert comp.exes['linker_exe'] == 'sc_cc'
- assert comp.exes['linker_so'] == 'sc_ldshared'
- assert comp.shared_lib_extension == 'sc_shutil_suffix'
- assert 'ranlib' not in comp.exes
+ self.assertEqual(comp.exes['archiver'], 'sc_ar --sc-arflags')
+ self.assertEqual(comp.exes['preprocessor'], 'sc_cc -E')
+ self.assertEqual(comp.exes['compiler'], 'sc_cc --sc-cflags')
+ self.assertEqual(comp.exes['compiler_so'], 'sc_cc --sc-cflags --sc-ccshared')
+ self.assertEqual(comp.exes['compiler_cxx'], 'sc_cxx')
+ self.assertEqual(comp.exes['linker_exe'], 'sc_cc')
+ self.assertEqual(comp.exes['linker_so'], 'sc_ldshared')
+ self.assertEqual(comp.shared_lib_extension, 'sc_shutil_suffix')
+ self.assertNotIn('ranlib', comp.exes)
def test_parse_makefile_base(self):
self.makefile = TESTFN
finally:
fd.close()
d = sysconfig.parse_makefile(self.makefile)
- assert d == {'CONFIG_ARGS': "'--arg1=optarg1' 'ENV=LIB'", 'OTHER': 'foo'}
+ self.assertEqual(
+ d, {'CONFIG_ARGS': "'--arg1=optarg1' 'ENV=LIB'", 'OTHER': 'foo'}
+ )
def test_parse_makefile_literal_dollar(self):
self.makefile = TESTFN
finally:
fd.close()
d = sysconfig.parse_makefile(self.makefile)
- assert d == {'CONFIG_ARGS': r"'--arg1=optarg1' 'ENV=\$LIB'", 'OTHER': 'foo'}
+ self.assertEqual(
+ d, {'CONFIG_ARGS': r"'--arg1=optarg1' 'ENV=\$LIB'", 'OTHER': 'foo'}
+ )
def test_sysconfig_module(self):
import sysconfig as global_sysconfig
- assert global_sysconfig.get_config_var('CFLAGS') == sysconfig.get_config_var(
- 'CFLAGS'
+ self.assertEqual(
+ global_sysconfig.get_config_var('CFLAGS'),
+ sysconfig.get_config_var('CFLAGS'),
)
- assert global_sysconfig.get_config_var('LDFLAGS') == sysconfig.get_config_var(
- 'LDFLAGS'
+ self.assertEqual(
+ global_sysconfig.get_config_var('LDFLAGS'),
+ sysconfig.get_config_var('LDFLAGS'),
)
- @pytest.mark.skipif("sysconfig.get_config_var('CUSTOMIZED_OSX_COMPILER')")
+ @unittest.skipIf(
+ sysconfig.get_config_var('CUSTOMIZED_OSX_COMPILER'), 'compiler flags customized'
+ )
def test_sysconfig_compiler_vars(self):
# On OS X, binary installers support extension module building on
# various levels of the operating system with differing Xcode
import sysconfig as global_sysconfig
if sysconfig.get_config_var('CUSTOMIZED_OSX_COMPILER'):
- pytest.skip('compiler flags customized')
- assert global_sysconfig.get_config_var('LDSHARED') == sysconfig.get_config_var(
- 'LDSHARED'
+ self.skipTest('compiler flags customized')
+ self.assertEqual(
+ global_sysconfig.get_config_var('LDSHARED'),
+ sysconfig.get_config_var('LDSHARED'),
+ )
+ self.assertEqual(
+ global_sysconfig.get_config_var('CC'), sysconfig.get_config_var('CC')
)
- assert global_sysconfig.get_config_var('CC') == sysconfig.get_config_var('CC')
- @pytest.mark.skipif("not sysconfig.get_config_var('EXT_SUFFIX')")
+ @unittest.skipIf(
+ sysconfig.get_config_var('EXT_SUFFIX') is None,
+ 'EXT_SUFFIX required for this test',
+ )
def test_SO_deprecation(self):
- with pytest.warns(DeprecationWarning):
- sysconfig.get_config_var('SO')
+ self.assertWarns(DeprecationWarning, sysconfig.get_config_var, 'SO')
def test_customize_compiler_before_get_config_vars(self):
# Issue #21923: test that a Distribution compiler
universal_newlines=True,
)
outs, errs = p.communicate()
- assert 0 == p.returncode, "Subprocess failed: " + outs
+ self.assertEqual(0, p.returncode, "Subprocess failed: " + outs)
def test_parse_config_h(self):
config_h = sysconfig.get_config_h_filename()
input = {}
with open(config_h, encoding="utf-8") as f:
result = sysconfig.parse_config_h(f, g=input)
- assert input is result
+ self.assertIs(input, result)
with open(config_h, encoding="utf-8") as f:
result = sysconfig.parse_config_h(f)
- assert isinstance(result, dict)
+ self.assertIsInstance(result, dict)
- @pytest.mark.skipif("platform.system() != 'Windows'")
- @pytest.mark.skipif("sys.implementation.name != 'cpython'")
+ @unittest.skipUnless(sys.platform == 'win32', 'Testing windows pyd suffix')
+ @unittest.skipUnless(
+ sys.implementation.name == 'cpython', 'Need cpython for this test'
+ )
def test_win_ext_suffix(self):
- assert sysconfig.get_config_var("EXT_SUFFIX").endswith(".pyd")
- assert sysconfig.get_config_var("EXT_SUFFIX") != ".pyd"
-
- @pytest.mark.skipif("platform.system() != 'Windows'")
- @pytest.mark.skipif("sys.implementation.name != 'cpython'")
- @pytest.mark.skipif(
- '\\PCbuild\\'.casefold() not in sys.executable.casefold(),
- reason='Need sys.executable to be in a source tree',
+ self.assertTrue(sysconfig.get_config_var("EXT_SUFFIX").endswith(".pyd"))
+ self.assertNotEqual(sysconfig.get_config_var("EXT_SUFFIX"), ".pyd")
+
+ @unittest.skipUnless(sys.platform == 'win32', 'Testing Windows build layout')
+ @unittest.skipUnless(
+ sys.implementation.name == 'cpython', 'Need cpython for this test'
+ )
+ @unittest.skipUnless(
+ '\\PCbuild\\'.casefold() in sys.executable.casefold(),
+ 'Need sys.executable to be in a source tree',
)
def test_win_build_venv_from_source_tree(self):
"""Ensure distutils.sysconfig detects venvs from source tree builds."""
cmd, env={**os.environ, "PYTHONPATH": distutils_path}
)
assert out == "True"
+
+
+def test_suite():
+ suite = unittest.TestSuite()
+ suite.addTest(unittest.TestLoader().loadTestsFromTestCase(SysconfigTestCase))
+ return suite
+
+
+if __name__ == '__main__':
+ run_unittest(test_suite())
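(A quick orientation sketch for the distutils.sysconfig calls the converted tests above rely on; all output is interpreter- and platform-specific.)

from distutils import sysconfig

print(sysconfig.get_config_h_filename())     # path to pyconfig.h
print(sysconfig.get_config_var('CC'))        # e.g. 'gcc' on many Linux builds
# get_python_lib() honours an alternate prefix, which is what
# test_get_python_lib checks against TESTFN:
print(sysconfig.get_python_lib())
print(sysconfig.get_python_lib(prefix='/tmp/alt-prefix'))   # hypothetical prefix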
"""Tests for distutils.text_file."""
import os
+import unittest
from distutils.text_file import TextFile
from distutils.tests import support
+from test.support import run_unittest
TEST_DATA = """# test file
"""
-class TestTextFile(support.TempdirManager):
+class TextFileTestCase(support.TempdirManager, unittest.TestCase):
def test_class(self):
# old tests moved from text_file.__main__
# so they are really called by the buildbots
def test_input(count, description, file, expected_result):
result = file.readlines()
- assert result == expected_result
+ self.assertEqual(result, expected_result)
tmpdir = self.mkdtemp()
filename = os.path.join(tmpdir, "test.txt")
test_input(6, "join lines with collapsing", in_file, result6)
finally:
in_file.close()
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(TextFileTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.unixccompiler."""
import os
import sys
-import unittest.mock as mock
+import unittest
+from test.support import run_unittest
+from unittest.mock import patch
from .py38compat import EnvironmentVarGuard
from distutils.util import _clear_cached_macosx_ver
from . import support
-import pytest
-@pytest.fixture(autouse=True)
-def save_values(monkeypatch):
- monkeypatch.setattr(sys, 'platform', sys.platform)
- monkeypatch.setattr(sysconfig, 'get_config_var', sysconfig.get_config_var)
- monkeypatch.setattr(sysconfig, 'get_config_vars', sysconfig.get_config_vars)
+class UnixCCompilerTestCase(support.TempdirManager, unittest.TestCase):
+ def setUp(self):
+ super().setUp()
+ self._backup_platform = sys.platform
+ self._backup_get_config_var = sysconfig.get_config_var
+ self._backup_get_config_vars = sysconfig.get_config_vars
+ class CompilerWrapper(UnixCCompiler):
+ def rpath_foo(self):
+ return self.runtime_library_dir_option('/foo')
-@pytest.fixture(autouse=True)
-def compiler_wrapper(request):
- class CompilerWrapper(UnixCCompiler):
- def rpath_foo(self):
- return self.runtime_library_dir_option('/foo')
+ self.cc = CompilerWrapper()
- request.instance.cc = CompilerWrapper()
+ def tearDown(self):
+ super().tearDown()
+ sys.platform = self._backup_platform
+ sysconfig.get_config_var = self._backup_get_config_var
+ sysconfig.get_config_vars = self._backup_get_config_vars
-
-class TestUnixCCompiler(support.TempdirManager):
- @pytest.mark.skipif('platform.system == "Windows"') # noqa: C901
- def test_runtime_libdir_option(self): # noqa: C901
+ @unittest.skipIf(sys.platform == 'win32', "can't test on Windows")
+ def test_runtime_libdir_option(self):
# Issue #5900; GitHub Issue #37
#
# Ensure RUNPATH is added to extension modules with RPATH if
def do_darwin_test(syscfg_macosx_ver, env_macosx_ver, expected_flag):
env = os.environ
- msg = "macOS version = (sysconfig={!r}, env={!r})".format(
+ msg = "macOS version = (sysconfig=%r, env=%r)" % (
syscfg_macosx_ver,
env_macosx_ver,
)
# Run the test
if expected_flag is not None:
- assert self.cc.rpath_foo() == expected_flag, msg
+ self.assertEqual(self.cc.rpath_foo(), expected_flag, msg=msg)
else:
- with pytest.raises(
- DistutilsPlatformError, match=darwin_ver_var + r' mismatch'
+ with self.assertRaisesRegex(
+ DistutilsPlatformError, darwin_ver_var + r' mismatch', msg=msg
):
self.cc.rpath_foo()
return 'xxx'
sysconfig.get_config_var = gcv
- assert self.cc.rpath_foo() == ['+s', '-L/foo']
+ self.assertEqual(self.cc.rpath_foo(), ['+s', '-L/foo'])
def gcv(v):
return 'gcc'
sysconfig.get_config_var = gcv
- assert self.cc.rpath_foo() == ['-Wl,+s', '-L/foo']
+ self.assertEqual(self.cc.rpath_foo(), ['-Wl,+s', '-L/foo'])
def gcv(v):
return 'g++'
sysconfig.get_config_var = gcv
- assert self.cc.rpath_foo() == ['-Wl,+s', '-L/foo']
+ self.assertEqual(self.cc.rpath_foo(), ['-Wl,+s', '-L/foo'])
sysconfig.get_config_var = old_gcv
return 'yes'
sysconfig.get_config_var = gcv
- assert self.cc.rpath_foo() == '-Wl,--enable-new-dtags,-R/foo'
+ self.assertEqual(self.cc.rpath_foo(), '-Wl,--enable-new-dtags,-R/foo')
def gcv(v):
if v == 'CC':
return 'yes'
sysconfig.get_config_var = gcv
- assert self.cc.rpath_foo() == '-Wl,--enable-new-dtags,-R/foo'
+ self.assertEqual(self.cc.rpath_foo(), '-Wl,--enable-new-dtags,-R/foo')
# GCC non-GNULD
sys.platform = 'bar'
return 'no'
sysconfig.get_config_var = gcv
- assert self.cc.rpath_foo() == '-Wl,-R/foo'
+ self.assertEqual(self.cc.rpath_foo(), '-Wl,-R/foo')
# GCC GNULD with fully qualified configuration prefix
# see #7617
return 'yes'
sysconfig.get_config_var = gcv
- assert self.cc.rpath_foo() == '-Wl,--enable-new-dtags,-R/foo'
+ self.assertEqual(self.cc.rpath_foo(), '-Wl,--enable-new-dtags,-R/foo')
# non-GCC GNULD
sys.platform = 'bar'
return 'yes'
sysconfig.get_config_var = gcv
- assert self.cc.rpath_foo() == '-Wl,--enable-new-dtags,-R/foo'
+ self.assertEqual(self.cc.rpath_foo(), '-Wl,--enable-new-dtags,-R/foo')
# non-GCC non-GNULD
sys.platform = 'bar'
return 'no'
sysconfig.get_config_var = gcv
- assert self.cc.rpath_foo() == '-Wl,-R/foo'
+ self.assertEqual(self.cc.rpath_foo(), '-Wl,-R/foo')
- @pytest.mark.skipif('platform.system == "Windows"')
+ @unittest.skipIf(sys.platform == 'win32', "can't test on Windows")
def test_cc_overrides_ldshared(self):
# Issue #18080:
# ensure that setting CC env variable also changes default linker
env['CC'] = 'my_cc'
del env['LDSHARED']
sysconfig.customize_compiler(self.cc)
- assert self.cc.linker_so[0] == 'my_cc'
+ self.assertEqual(self.cc.linker_so[0], 'my_cc')
- @pytest.mark.skipif('platform.system == "Windows"')
+ @unittest.skipIf(sys.platform == 'win32', "can't test on Windows")
def test_cc_overrides_ldshared_for_cxx_correctly(self):
"""
Ensure that setting CC env variable also changes default linker
sysconfig.get_config_var = gcv
sysconfig.get_config_vars = gcvs
- with mock.patch.object(
+ with patch.object(
self.cc, 'spawn', return_value=None
- ) as mock_spawn, mock.patch.object(
+ ) as mock_spawn, patch.object(
self.cc, '_need_link', return_value=True
- ), mock.patch.object(
+ ), patch.object(
self.cc, 'mkpath', return_value=None
), EnvironmentVarGuard() as env:
env['CC'] = 'ccache my_cc'
env['CXX'] = 'my_cxx'
del env['LDSHARED']
sysconfig.customize_compiler(self.cc)
- assert self.cc.linker_so[0:2] == ['ccache', 'my_cc']
+ self.assertEqual(self.cc.linker_so[0:2], ['ccache', 'my_cc'])
self.cc.link(None, [], 'a.out', target_lang='c++')
call_args = mock_spawn.call_args[0][0]
expected = ['my_cxx', '-bundle', '-undefined', 'dynamic_lookup']
assert call_args[:4] == expected
- @pytest.mark.skipif('platform.system == "Windows"')
+ @unittest.skipIf(sys.platform == 'win32', "can't test on Windows")
def test_explicit_ldshared(self):
# Issue #18080:
# ensure that setting CC env variable does not change
env['CC'] = 'my_cc'
env['LDSHARED'] = 'my_ld -bundle -dynamic'
sysconfig.customize_compiler(self.cc)
- assert self.cc.linker_so[0] == 'my_ld'
+ self.assertEqual(self.cc.linker_so[0], 'my_ld')
def test_has_function(self):
# Issue https://github.com/pypa/distutils/issues/64:
self.cc.output_dir = 'scratch'
os.chdir(self.mkdtemp())
self.cc.has_function('abort', includes=['stdlib.h'])
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(UnixCCompilerTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
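(The rpath expectations above depend on what sysconfig reports for the compiler and linker; on a stock gcc + GNU ld toolchain the option collapses to the --enable-new-dtags form. A sketch, with the caveat that the result is toolchain-dependent.)

from distutils.unixccompiler import UnixCCompiler

cc = UnixCCompiler()
# With CC=gcc and GNULD=yes (the common Linux case) this prints
# '-Wl,--enable-new-dtags,-R/foo'; other configurations produce the other
# variants asserted in test_runtime_libdir_option.
print(cc.runtime_library_dir_option('/foo'))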
"""Tests for distutils.command.upload."""
import os
+import unittest
import unittest.mock as mock
from urllib.request import HTTPError
+from test.support import run_unittest
from distutils.command import upload as upload_mod
from distutils.command.upload import upload
from distutils.log import ERROR, INFO
from distutils.tests.test_config import PYPIRC, BasePyPIRCCommandTestCase
-import pytest
PYPIRC_LONG_PASSWORD = """\
[distutils]
"""
-class FakeOpen:
+class FakeOpen(object):
def __init__(self, url, msg=None, code=None):
self.url = url
if not isinstance(url, str):
return self.code
-@pytest.fixture(autouse=True)
-def urlopen(request, monkeypatch):
- self = request.instance
- monkeypatch.setattr(upload_mod, 'urlopen', self._urlopen)
- self.next_msg = self.next_code = None
+class uploadTestCase(BasePyPIRCCommandTestCase):
+ def setUp(self):
+ super(uploadTestCase, self).setUp()
+ self.old_open = upload_mod.urlopen
+ upload_mod.urlopen = self._urlopen
+ self.last_open = None
+ self.next_msg = None
+ self.next_code = None
+ def tearDown(self):
+ upload_mod.urlopen = self.old_open
+ super(uploadTestCase, self).tearDown()
-class TestUpload(BasePyPIRCCommandTestCase):
def _urlopen(self, url):
self.last_open = FakeOpen(url, msg=self.next_msg, code=self.next_code)
return self.last_open
('realm', 'pypi'),
('repository', 'https://upload.pypi.org/legacy/'),
):
- assert getattr(cmd, attr) == waited
+ self.assertEqual(getattr(cmd, attr), waited)
def test_saved_password(self):
# file with no password
dist = Distribution()
cmd = upload(dist)
cmd.finalize_options()
- assert cmd.password is None
+ self.assertIsNone(cmd.password)
# make sure we get it as well, if another command
# initialized it at the dist level
dist.password = 'xxx'
cmd = upload(dist)
cmd.finalize_options()
- assert cmd.password == 'xxx'
+ self.assertEqual(cmd.password, 'xxx')
def test_upload(self):
tmp = self.mkdtemp()
# what did we send ?
headers = dict(self.last_open.req.headers)
- assert int(headers['Content-length']) >= 2162
+ self.assertGreaterEqual(int(headers['Content-length']), 2162)
content_type = headers['Content-type']
- assert content_type.startswith('multipart/form-data')
- assert self.last_open.req.get_method() == 'POST'
+ self.assertTrue(content_type.startswith('multipart/form-data'))
+ self.assertEqual(self.last_open.req.get_method(), 'POST')
expected_url = 'https://upload.pypi.org/legacy/'
- assert self.last_open.req.get_full_url() == expected_url
+ self.assertEqual(self.last_open.req.get_full_url(), expected_url)
data = self.last_open.req.data
- assert b'xxx' in data
- assert b'protocol_version' in data
- assert b'sha256_digest' in data
- assert (
- b'cd2eb0837c9b4c962c22d2ff8b5441b7b45805887f051d39bf133b583baf'
- b'6860' in data
+ self.assertIn(b'xxx', data)
+ self.assertIn(b'protocol_version', data)
+ self.assertIn(b'sha256_digest', data)
+ self.assertIn(
+ b'cd2eb0837c9b4c962c22d2ff8b5441b7b45805887f051d39bf133b583baf' b'6860',
+ data,
)
if b'md5_digest' in data:
- assert b'f561aaf6ef0bf14d4208bb46a4ccb3ad' in data
+ self.assertIn(b'f561aaf6ef0bf14d4208bb46a4ccb3ad', data)
if b'blake2_256_digest' in data:
- assert (
+ self.assertIn(
b'b6f289a27d4fe90da63c503bfe0a9b761a8f76bb86148565065f040be'
b'6d1c3044cf7ded78ef800509bccb4b648e507d88dc6383d67642aadcc'
- b'ce443f1534330a' in data
+ b'ce443f1534330a',
+ data,
)
# The PyPI response body was echoed
results = self.get_logs(INFO)
- assert results[-1] == 75 * '-' + '\nxyzzy\n' + 75 * '-'
+ self.assertEqual(results[-1], 75 * '-' + '\nxyzzy\n' + 75 * '-')
# bpo-32304: archives whose last byte was b'\r' were corrupted due to
# normalization intended for Mac OS 9.
cmd.run()
headers = dict(self.last_open.req.headers)
- assert int(headers['Content-length']) >= 2172
- assert b'long description\r' in self.last_open.req.data
+ self.assertGreaterEqual(int(headers['Content-length']), 2172)
+ self.assertIn(b'long description\r', self.last_open.req.data)
def test_upload_fails(self):
self.next_msg = "Not Found"
self.next_code = 404
- with pytest.raises(DistutilsError):
- self.test_upload()
+ self.assertRaises(DistutilsError, self.test_upload)
- @pytest.mark.parametrize(
- 'exception,expected,raised_exception',
- [
- (OSError('oserror'), 'oserror', OSError),
- pytest.param(
- HTTPError('url', 400, 'httperror', {}, None),
- 'Upload failed (400): httperror',
- DistutilsError,
- id="HTTP 400",
- ),
- ],
- )
- def test_wrong_exception_order(self, exception, expected, raised_exception):
+ def test_wrong_exception_order(self):
tmp = self.mkdtemp()
path = os.path.join(tmp, 'xxx')
self.write_file(path)
self.write_file(self.rc, PYPIRC_LONG_PASSWORD)
pkg_dir, dist = self.create_dist(dist_files=dist_files)
-
- with mock.patch(
- 'distutils.command.upload.urlopen',
- new=mock.Mock(side_effect=exception),
- ):
- with pytest.raises(raised_exception):
- cmd = upload(dist)
- cmd.ensure_finalized()
- cmd.run()
- results = self.get_logs(ERROR)
- assert expected in results[-1]
- self.clear_logs()
+ tests = [
+ (OSError('oserror'), 'oserror', OSError),
+ (
+ HTTPError('url', 400, 'httperror', {}, None),
+ 'Upload failed (400): httperror',
+ DistutilsError,
+ ),
+ ]
+ for exception, expected, raised_exception in tests:
+ with self.subTest(exception=type(exception).__name__):
+ with mock.patch(
+ 'distutils.command.upload.urlopen',
+ new=mock.Mock(side_effect=exception),
+ ):
+ with self.assertRaises(raised_exception):
+ cmd = upload(dist)
+ cmd.ensure_finalized()
+ cmd.run()
+ results = self.get_logs(ERROR)
+ self.assertIn(expected, results[-1])
+ self.clear_logs()
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(uploadTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
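(If the digest constants asserted in test_upload look opaque: they are simply the SHA-256 and MD5 of the default b'xxx' payload that the test's write_file() helper creates, so they can be reproduced directly.)

import hashlib

payload = b'xxx'   # default content written by the write_file() helper
print(hashlib.sha256(payload).hexdigest())
# cd2eb0837c9b4c962c22d2ff8b5441b7b45805887f051d39bf133b583baf6860
print(hashlib.md5(payload).hexdigest())
# f561aaf6ef0bf14d4208bb46a4ccb3ad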
"""Tests for distutils.util."""
import os
import sys
+import unittest
import sysconfig as stdlib_sysconfig
-import unittest.mock as mock
from copy import copy
+from test.support import run_unittest
+from unittest import mock
-import pytest
-
+from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError
from distutils.util import (
get_platform,
convert_path,
grok_environment_error,
get_host_platform,
)
-from distutils import util
+from distutils import util # used to patch _environ_checked
from distutils import sysconfig
-from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError
-
-
-@pytest.fixture(autouse=True)
-def environment(monkeypatch):
- monkeypatch.setattr(os, 'name', os.name)
- monkeypatch.setattr(sys, 'platform', sys.platform)
- monkeypatch.setattr(sys, 'version', sys.version)
- monkeypatch.setattr(os, 'sep', os.sep)
- monkeypatch.setattr(os.path, 'join', os.path.join)
- monkeypatch.setattr(os.path, 'isabs', os.path.isabs)
- monkeypatch.setattr(os.path, 'splitdrive', os.path.splitdrive)
- monkeypatch.setattr(sysconfig, '_config_vars', copy(sysconfig._config_vars))
-
+from distutils.tests import support
+
+
+class UtilTestCase(support.EnvironGuard, unittest.TestCase):
+ def setUp(self):
+ super(UtilTestCase, self).setUp()
+ # saving the environment
+ self.name = os.name
+ self.platform = sys.platform
+ self.version = sys.version
+ self.sep = os.sep
+ self.join = os.path.join
+ self.isabs = os.path.isabs
+ self.splitdrive = os.path.splitdrive
+ self._config_vars = copy(sysconfig._config_vars)
+
+ # patching os.uname
+ if hasattr(os, 'uname'):
+ self.uname = os.uname
+ self._uname = os.uname()
+ else:
+ self.uname = None
+ self._uname = None
+
+ os.uname = self._get_uname
+
+ def tearDown(self):
+ # getting back the environment
+ os.name = self.name
+ sys.platform = self.platform
+ sys.version = self.version
+ os.sep = self.sep
+ os.path.join = self.join
+ os.path.isabs = self.isabs
+ os.path.splitdrive = self.splitdrive
+ if self.uname is not None:
+ os.uname = self.uname
+ else:
+ del os.uname
+ sysconfig._config_vars = copy(self._config_vars)
+ super(UtilTestCase, self).tearDown()
+
+ def _set_uname(self, uname):
+ self._uname = uname
+
+ def _get_uname(self):
+ return self._uname
-@pytest.mark.usefixtures('save_env')
-class TestUtil:
def test_get_host_platform(self):
- with mock.patch('os.name', 'nt'):
- with mock.patch('sys.version', '... [... (ARM64)]'):
- assert get_host_platform() == 'win-arm64'
- with mock.patch('sys.version', '... [... (ARM)]'):
- assert get_host_platform() == 'win-arm32'
+ with unittest.mock.patch('os.name', 'nt'):
+ with unittest.mock.patch('sys.version', '... [... (ARM64)]'):
+ self.assertEqual(get_host_platform(), 'win-arm64')
+ with unittest.mock.patch('sys.version', '... [... (ARM)]'):
+ self.assertEqual(get_host_platform(), 'win-arm32')
- with mock.patch('sys.version_info', (3, 9, 0, 'final', 0)):
- assert get_host_platform() == stdlib_sysconfig.get_platform()
+ with unittest.mock.patch('sys.version_info', (3, 9, 0, 'final', 0)):
+ self.assertEqual(get_host_platform(), stdlib_sysconfig.get_platform())
def test_get_platform(self):
- with mock.patch('os.name', 'nt'):
- with mock.patch.dict('os.environ', {'VSCMD_ARG_TGT_ARCH': 'x86'}):
- assert get_platform() == 'win32'
- with mock.patch.dict('os.environ', {'VSCMD_ARG_TGT_ARCH': 'x64'}):
- assert get_platform() == 'win-amd64'
- with mock.patch.dict('os.environ', {'VSCMD_ARG_TGT_ARCH': 'arm'}):
- assert get_platform() == 'win-arm32'
- with mock.patch.dict('os.environ', {'VSCMD_ARG_TGT_ARCH': 'arm64'}):
- assert get_platform() == 'win-arm64'
+ with unittest.mock.patch('os.name', 'nt'):
+ with unittest.mock.patch.dict('os.environ', {'VSCMD_ARG_TGT_ARCH': 'x86'}):
+ self.assertEqual(get_platform(), 'win32')
+ with unittest.mock.patch.dict('os.environ', {'VSCMD_ARG_TGT_ARCH': 'x64'}):
+ self.assertEqual(get_platform(), 'win-amd64')
+ with unittest.mock.patch.dict('os.environ', {'VSCMD_ARG_TGT_ARCH': 'arm'}):
+ self.assertEqual(get_platform(), 'win-arm32')
+ with unittest.mock.patch.dict(
+ 'os.environ', {'VSCMD_ARG_TGT_ARCH': 'arm64'}
+ ):
+ self.assertEqual(get_platform(), 'win-arm64')
def test_convert_path(self):
# linux/mac
os.path.join = _join
- assert convert_path('/home/to/my/stuff') == '/home/to/my/stuff'
+ self.assertEqual(convert_path('/home/to/my/stuff'), '/home/to/my/stuff')
# win
os.sep = '\\'
os.path.join = _join
- with pytest.raises(ValueError):
- convert_path('/home/to/my/stuff')
- with pytest.raises(ValueError):
- convert_path('home/to/my/stuff/')
+ self.assertRaises(ValueError, convert_path, '/home/to/my/stuff')
+ self.assertRaises(ValueError, convert_path, 'home/to/my/stuff/')
- assert convert_path('home/to/my/stuff') == 'home\\to\\my\\stuff'
- assert convert_path('.') == os.curdir
+ self.assertEqual(convert_path('home/to/my/stuff'), 'home\\to\\my\\stuff')
+ self.assertEqual(convert_path('.'), os.curdir)
def test_change_root(self):
# linux/mac
os.path.join = _join
- assert change_root('/root', '/old/its/here') == '/root/old/its/here'
- assert change_root('/root', 'its/here') == '/root/its/here'
+ self.assertEqual(change_root('/root', '/old/its/here'), '/root/old/its/here')
+ self.assertEqual(change_root('/root', 'its/here'), '/root/its/here')
# windows
os.name = 'nt'
os.path.join = _join
- assert (
- change_root('c:\\root', 'c:\\old\\its\\here') == 'c:\\root\\old\\its\\here'
+ self.assertEqual(
+ change_root('c:\\root', 'c:\\old\\its\\here'), 'c:\\root\\old\\its\\here'
)
- assert change_root('c:\\root', 'its\\here') == 'c:\\root\\its\\here'
+ self.assertEqual(change_root('c:\\root', 'its\\here'), 'c:\\root\\its\\here')
# BugsBunny os (it's a great os)
os.name = 'BugsBunny'
- with pytest.raises(DistutilsPlatformError):
- change_root('c:\\root', 'its\\here')
+ self.assertRaises(DistutilsPlatformError, change_root, 'c:\\root', 'its\\here')
# XXX platforms to be covered: mac
check_environ()
- assert os.environ['PLAT'] == get_platform()
- assert util._environ_checked == 1
+ self.assertEqual(os.environ['PLAT'], get_platform())
+ self.assertEqual(util._environ_checked, 1)
- @pytest.mark.skipif("os.name != 'posix'")
+ @unittest.skipUnless(os.name == 'posix', 'specific to posix')
def test_check_environ_getpwuid(self):
util._environ_checked = 0
os.environ.pop('HOME', None)
)
with mock.patch.object(pwd, 'getpwuid', return_value=result):
check_environ()
- assert os.environ['HOME'] == '/home/distutils'
+ self.assertEqual(os.environ['HOME'], '/home/distutils')
util._environ_checked = 0
os.environ.pop('HOME', None)
# bpo-10496: Catch pwd.getpwuid() error
with mock.patch.object(pwd, 'getpwuid', side_effect=KeyError):
check_environ()
- assert 'HOME' not in os.environ
+ self.assertNotIn('HOME', os.environ)
def test_split_quoted(self):
- assert split_quoted('""one"" "two" \'three\' \\four') == [
- 'one',
- 'two',
- 'three',
- 'four',
- ]
+ self.assertEqual(
+ split_quoted('""one"" "two" \'three\' \\four'),
+ ['one', 'two', 'three', 'four'],
+ )
def test_strtobool(self):
yes = ('y', 'Y', 'yes', 'True', 't', 'true', 'True', 'On', 'on', '1')
no = ('n', 'no', 'f', 'false', 'off', '0', 'Off', 'No', 'N')
for y in yes:
- assert strtobool(y)
+ self.assertTrue(strtobool(y))
for n in no:
- assert not strtobool(n)
+ self.assertFalse(strtobool(n))
def test_rfc822_escape(self):
header = 'I am a\npoor\nlonesome\nheader\n'
wanted = ('I am a%(8s)spoor%(8s)slonesome%(8s)s' 'header%(8s)s') % {
'8s': '\n' + 8 * ' '
}
- assert res == wanted
+ self.assertEqual(res, wanted)
def test_dont_write_bytecode(self):
# makes sure byte_compile raise a DistutilsError
old_dont_write_bytecode = sys.dont_write_bytecode
sys.dont_write_bytecode = True
try:
- with pytest.raises(DistutilsByteCompileError):
- byte_compile([])
+ self.assertRaises(DistutilsByteCompileError, byte_compile, [])
finally:
sys.dont_write_bytecode = old_dont_write_bytecode
# test obsolete function to ensure backward compat (#4931)
exc = IOError("Unable to find batch file")
msg = grok_environment_error(exc)
- assert msg == "error: Unable to find batch file"
+ self.assertEqual(msg, "error: Unable to find batch file")
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(UtilTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
"""Tests for distutils.version."""
-import pytest
-
+import unittest
import distutils
from distutils.version import LooseVersion
from distutils.version import StrictVersion
+from test.support import run_unittest
-@pytest.fixture(autouse=True)
-def suppress_deprecation():
- with distutils.version.suppress_known_deprecation():
- yield
+class VersionTestCase(unittest.TestCase):
+ def setUp(self):
+ self.ctx = distutils.version.suppress_known_deprecation()
+ self.ctx.__enter__()
+ def tearDown(self):
+ self.ctx.__exit__(None, None, None)
-class TestVersion:
def test_prerelease(self):
version = StrictVersion('1.2.3a1')
- assert version.version == (1, 2, 3)
- assert version.prerelease == ('a', 1)
- assert str(version) == '1.2.3a1'
+ self.assertEqual(version.version, (1, 2, 3))
+ self.assertEqual(version.prerelease, ('a', 1))
+ self.assertEqual(str(version), '1.2.3a1')
version = StrictVersion('1.2.0')
- assert str(version) == '1.2'
+ self.assertEqual(str(version), '1.2')
def test_cmp_strict(self):
versions = (
raise AssertionError(
("cmp(%s, %s) " "shouldn't raise ValueError") % (v1, v2)
)
- assert res == wanted, 'cmp({}, {}) should be {}, got {}'.format(
- v1, v2, wanted, res
+ self.assertEqual(
+ res, wanted, 'cmp(%s, %s) should be %s, got %s' % (v1, v2, wanted, res)
)
res = StrictVersion(v1)._cmp(v2)
- assert res == wanted, 'cmp({}, {}) should be {}, got {}'.format(
- v1, v2, wanted, res
+ self.assertEqual(
+ res, wanted, 'cmp(%s, %s) should be %s, got %s' % (v1, v2, wanted, res)
)
res = StrictVersion(v1)._cmp(object())
- assert (
- res is NotImplemented
- ), 'cmp({}, {}) should be NotImplemented, got {}'.format(v1, v2, res)
+ self.assertIs(
+ res,
+ NotImplemented,
+ 'cmp(%s, %s) should be NotImplemented, got %s' % (v1, v2, res),
+ )
def test_cmp(self):
versions = (
for v1, v2, wanted in versions:
res = LooseVersion(v1)._cmp(LooseVersion(v2))
- assert res == wanted, 'cmp({}, {}) should be {}, got {}'.format(
- v1, v2, wanted, res
+ self.assertEqual(
+ res, wanted, 'cmp(%s, %s) should be %s, got %s' % (v1, v2, wanted, res)
)
res = LooseVersion(v1)._cmp(v2)
- assert res == wanted, 'cmp({}, {}) should be {}, got {}'.format(
- v1, v2, wanted, res
+ self.assertEqual(
+ res, wanted, 'cmp(%s, %s) should be %s, got %s' % (v1, v2, wanted, res)
)
res = LooseVersion(v1)._cmp(object())
- assert (
- res is NotImplemented
- ), 'cmp({}, {}) should be NotImplemented, got {}'.format(v1, v2, res)
+ self.assertIs(
+ res,
+ NotImplemented,
+ 'cmp(%s, %s) should be NotImplemented, got %s' % (v1, v2, res),
+ )
+
+
+def test_suite():
+ return unittest.TestLoader().loadTestsFromTestCase(VersionTestCase)
+
+
+if __name__ == "__main__":
+ run_unittest(test_suite())
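(A minimal sketch of the version classes these tests cover, using the same deprecation-suppression helper that setUp() above wraps around every test.)

import distutils.version as dv

with dv.suppress_known_deprecation():
    v = dv.StrictVersion('1.2.3a1')
    print(v.version, v.prerelease, str(v))    # (1, 2, 3) ('a', 1) 1.2.3a1
    print(dv.StrictVersion('1.2.0') < v)      # True: 1.2.0 precedes 1.2.3a1
    print(dv.LooseVersion('1.5.1') < dv.LooseVersion('1.5.2'))   # True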
+"""Tests harness for distutils.versionpredicate.
+
+"""
+
+import distutils.versionpredicate
+import doctest
+from test.support import run_unittest
+
+
+def test_suite():
+ return doctest.DocTestSuite(distutils.versionpredicate)
+
+
+if __name__ == '__main__':
+ run_unittest(test_suite())
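(The doctest suite above just runs the examples embedded in distutils.versionpredicate itself; the core API looks like this, with a made-up package name.)

from distutils.versionpredicate import VersionPredicate

pred = VersionPredicate('SomePackage (>=1.0, <2.0)')   # hypothetical requirement
print(pred.satisfied_by('1.3'))   # True
print(pred.satisfied_by('2.0'))   # False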
import sys
+import unittest
try:
import grp
except ImportError:
grp = pwd = None
-import pytest
-
UNIX_ID_SUPPORT = grp and pwd
UID_0_SUPPORT = UNIX_ID_SUPPORT and sys.platform != "cygwin"
-require_unix_id = pytest.mark.skipif(
- not UNIX_ID_SUPPORT, reason="Requires grp and pwd support"
-)
-require_uid_0 = pytest.mark.skipif(not UID_0_SUPPORT, reason="Requires UID 0 support")
+require_unix_id = unittest.skipUnless(UNIX_ID_SUPPORT, "Requires grp and pwd support")
+require_uid_0 = unittest.skipUnless(UID_0_SUPPORT, "Requires UID 0 support")
+++ /dev/null
-
-/* Use this file as a template to start implementing a module that
- also declares object types. All occurrences of 'Xxo' should be changed
- to something reasonable for your objects. After that, all other
- occurrences of 'xx' should be changed to something reasonable for your
- module. If your module is named foo your sourcefile should be named
- foomodule.c.
-
- You will probably want to delete all references to 'x_attr' and add
- your own types of attributes instead. Maybe you want to name your
- local variables other than 'self'. If your object type is needed in
- other files, you'll have to create a file "foobarobject.h"; see
- floatobject.h for an example. */
-
-/* Xxo objects */
-
-#include "Python.h"
-
-static PyObject *ErrorObject;
-
-typedef struct {
- PyObject_HEAD
- PyObject *x_attr; /* Attributes dictionary */
-} XxoObject;
-
-static PyTypeObject Xxo_Type;
-
-#define XxoObject_Check(v) (Py_TYPE(v) == &Xxo_Type)
-
-static XxoObject *
-newXxoObject(PyObject *arg)
-{
- XxoObject *self;
- self = PyObject_New(XxoObject, &Xxo_Type);
- if (self == NULL)
- return NULL;
- self->x_attr = NULL;
- return self;
-}
-
-/* Xxo methods */
-
-static void
-Xxo_dealloc(XxoObject *self)
-{
- Py_XDECREF(self->x_attr);
- PyObject_Del(self);
-}
-
-static PyObject *
-Xxo_demo(XxoObject *self, PyObject *args)
-{
- if (!PyArg_ParseTuple(args, ":demo"))
- return NULL;
- Py_INCREF(Py_None);
- return Py_None;
-}
-
-static PyMethodDef Xxo_methods[] = {
- {"demo", (PyCFunction)Xxo_demo, METH_VARARGS,
- PyDoc_STR("demo() -> None")},
- {NULL, NULL} /* sentinel */
-};
-
-static PyObject *
-Xxo_getattro(XxoObject *self, PyObject *name)
-{
- if (self->x_attr != NULL) {
- PyObject *v = PyDict_GetItemWithError(self->x_attr, name);
- if (v != NULL) {
- Py_INCREF(v);
- return v;
- }
- else if (PyErr_Occurred()) {
- return NULL;
- }
- }
- return PyObject_GenericGetAttr((PyObject *)self, name);
-}
-
-static int
-Xxo_setattr(XxoObject *self, const char *name, PyObject *v)
-{
- if (self->x_attr == NULL) {
- self->x_attr = PyDict_New();
- if (self->x_attr == NULL)
- return -1;
- }
- if (v == NULL) {
- int rv = PyDict_DelItemString(self->x_attr, name);
- if (rv < 0 && PyErr_ExceptionMatches(PyExc_KeyError))
- PyErr_SetString(PyExc_AttributeError,
- "delete non-existing Xxo attribute");
- return rv;
- }
- else
- return PyDict_SetItemString(self->x_attr, name, v);
-}
-
-static PyTypeObject Xxo_Type = {
- /* The ob_type field must be initialized in the module init function
- * to be portable to Windows without using C++. */
- PyVarObject_HEAD_INIT(NULL, 0)
- "xxmodule.Xxo", /*tp_name*/
- sizeof(XxoObject), /*tp_basicsize*/
- 0, /*tp_itemsize*/
- /* methods */
- (destructor)Xxo_dealloc, /*tp_dealloc*/
- 0, /*tp_vectorcall_offset*/
- (getattrfunc)0, /*tp_getattr*/
- (setattrfunc)Xxo_setattr, /*tp_setattr*/
- 0, /*tp_as_async*/
- 0, /*tp_repr*/
- 0, /*tp_as_number*/
- 0, /*tp_as_sequence*/
- 0, /*tp_as_mapping*/
- 0, /*tp_hash*/
- 0, /*tp_call*/
- 0, /*tp_str*/
- (getattrofunc)Xxo_getattro, /*tp_getattro*/
- 0, /*tp_setattro*/
- 0, /*tp_as_buffer*/
- Py_TPFLAGS_DEFAULT, /*tp_flags*/
- 0, /*tp_doc*/
- 0, /*tp_traverse*/
- 0, /*tp_clear*/
- 0, /*tp_richcompare*/
- 0, /*tp_weaklistoffset*/
- 0, /*tp_iter*/
- 0, /*tp_iternext*/
- Xxo_methods, /*tp_methods*/
- 0, /*tp_members*/
- 0, /*tp_getset*/
- 0, /*tp_base*/
- 0, /*tp_dict*/
- 0, /*tp_descr_get*/
- 0, /*tp_descr_set*/
- 0, /*tp_dictoffset*/
- 0, /*tp_init*/
- 0, /*tp_alloc*/
- 0, /*tp_new*/
- 0, /*tp_free*/
- 0, /*tp_is_gc*/
-};
-/* --------------------------------------------------------------------- */
-
-/* Function of two integers returning integer */
-
-PyDoc_STRVAR(xx_foo_doc,
-"foo(i,j)\n\
-\n\
-Return the sum of i and j.");
-
-static PyObject *
-xx_foo(PyObject *self, PyObject *args)
-{
- long i, j;
- long res;
- if (!PyArg_ParseTuple(args, "ll:foo", &i, &j))
- return NULL;
- res = i+j; /* XXX Do something here */
- return PyLong_FromLong(res);
-}
-
-
-/* Function of no arguments returning new Xxo object */
-
-static PyObject *
-xx_new(PyObject *self, PyObject *args)
-{
- XxoObject *rv;
-
- if (!PyArg_ParseTuple(args, ":new"))
- return NULL;
- rv = newXxoObject(args);
- if (rv == NULL)
- return NULL;
- return (PyObject *)rv;
-}
-
-/* Example with subtle bug from extensions manual ("Thin Ice"). */
-
-static PyObject *
-xx_bug(PyObject *self, PyObject *args)
-{
- PyObject *list, *item;
-
- if (!PyArg_ParseTuple(args, "O:bug", &list))
- return NULL;
-
- item = PyList_GetItem(list, 0);
- /* Py_INCREF(item); */
- PyList_SetItem(list, 1, PyLong_FromLong(0L));
- PyObject_Print(item, stdout, 0);
- printf("\n");
- /* Py_DECREF(item); */
-
- Py_INCREF(Py_None);
- return Py_None;
-}
-
-/* Test bad format character */
-
-static PyObject *
-xx_roj(PyObject *self, PyObject *args)
-{
- PyObject *a;
- long b;
- if (!PyArg_ParseTuple(args, "O#:roj", &a, &b))
- return NULL;
- Py_INCREF(Py_None);
- return Py_None;
-}
-
-
-/* ---------- */
-
-static PyTypeObject Str_Type = {
- /* The ob_type field must be initialized in the module init function
- * to be portable to Windows without using C++. */
- PyVarObject_HEAD_INIT(NULL, 0)
- "xxmodule.Str", /*tp_name*/
- 0, /*tp_basicsize*/
- 0, /*tp_itemsize*/
- /* methods */
- 0, /*tp_dealloc*/
- 0, /*tp_vectorcall_offset*/
- 0, /*tp_getattr*/
- 0, /*tp_setattr*/
- 0, /*tp_as_async*/
- 0, /*tp_repr*/
- 0, /*tp_as_number*/
- 0, /*tp_as_sequence*/
- 0, /*tp_as_mapping*/
- 0, /*tp_hash*/
- 0, /*tp_call*/
- 0, /*tp_str*/
- 0, /*tp_getattro*/
- 0, /*tp_setattro*/
- 0, /*tp_as_buffer*/
- Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/
- 0, /*tp_doc*/
- 0, /*tp_traverse*/
- 0, /*tp_clear*/
- 0, /*tp_richcompare*/
- 0, /*tp_weaklistoffset*/
- 0, /*tp_iter*/
- 0, /*tp_iternext*/
- 0, /*tp_methods*/
- 0, /*tp_members*/
- 0, /*tp_getset*/
- 0, /* see PyInit_xx */ /*tp_base*/
- 0, /*tp_dict*/
- 0, /*tp_descr_get*/
- 0, /*tp_descr_set*/
- 0, /*tp_dictoffset*/
- 0, /*tp_init*/
- 0, /*tp_alloc*/
- 0, /*tp_new*/
- 0, /*tp_free*/
- 0, /*tp_is_gc*/
-};
-
-/* ---------- */
-
-static PyObject *
-null_richcompare(PyObject *self, PyObject *other, int op)
-{
- Py_INCREF(Py_NotImplemented);
- return Py_NotImplemented;
-}
-
-static PyTypeObject Null_Type = {
- /* The ob_type field must be initialized in the module init function
- * to be portable to Windows without using C++. */
- PyVarObject_HEAD_INIT(NULL, 0)
- "xxmodule.Null", /*tp_name*/
- 0, /*tp_basicsize*/
- 0, /*tp_itemsize*/
- /* methods */
- 0, /*tp_dealloc*/
- 0, /*tp_vectorcall_offset*/
- 0, /*tp_getattr*/
- 0, /*tp_setattr*/
- 0, /*tp_as_async*/
- 0, /*tp_repr*/
- 0, /*tp_as_number*/
- 0, /*tp_as_sequence*/
- 0, /*tp_as_mapping*/
- 0, /*tp_hash*/
- 0, /*tp_call*/
- 0, /*tp_str*/
- 0, /*tp_getattro*/
- 0, /*tp_setattro*/
- 0, /*tp_as_buffer*/
- Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/
- 0, /*tp_doc*/
- 0, /*tp_traverse*/
- 0, /*tp_clear*/
- null_richcompare, /*tp_richcompare*/
- 0, /*tp_weaklistoffset*/
- 0, /*tp_iter*/
- 0, /*tp_iternext*/
- 0, /*tp_methods*/
- 0, /*tp_members*/
- 0, /*tp_getset*/
- 0, /* see PyInit_xx */ /*tp_base*/
- 0, /*tp_dict*/
- 0, /*tp_descr_get*/
- 0, /*tp_descr_set*/
- 0, /*tp_dictoffset*/
- 0, /*tp_init*/
- 0, /*tp_alloc*/
- PyType_GenericNew, /*tp_new*/
- 0, /*tp_free*/
- 0, /*tp_is_gc*/
-};
-
-
-/* ---------- */
-
-
-/* List of functions defined in the module */
-
-static PyMethodDef xx_methods[] = {
- {"roj", xx_roj, METH_VARARGS,
- PyDoc_STR("roj(a,b) -> None")},
- {"foo", xx_foo, METH_VARARGS,
- xx_foo_doc},
- {"new", xx_new, METH_VARARGS,
- PyDoc_STR("new() -> new Xx object")},
- {"bug", xx_bug, METH_VARARGS,
- PyDoc_STR("bug(o) -> None")},
- {NULL, NULL} /* sentinel */
-};
-
-PyDoc_STRVAR(module_doc,
-"This is a template module just for instruction.");
-
-
-static int
-xx_exec(PyObject *m)
-{
- /* Slot initialization is subject to the rules of initializing globals.
- C99 requires the initializers to be "address constants". Function
- designators like 'PyType_GenericNew', with implicit conversion to
- a pointer, are valid C99 address constants.
-
- However, the unary '&' operator applied to a non-static variable
- like 'PyBaseObject_Type' is not required to produce an address
- constant. Compilers may support this (gcc does), MSVC does not.
-
- Both compilers are strictly standard conforming in this particular
- behavior.
- */
- Null_Type.tp_base = &PyBaseObject_Type;
- Str_Type.tp_base = &PyUnicode_Type;
-
- /* Finalize the type object including setting type of the new type
- * object; doing it here is required for portability, too. */
- if (PyType_Ready(&Xxo_Type) < 0)
- goto fail;
-
- /* Add some symbolic constants to the module */
- if (ErrorObject == NULL) {
- ErrorObject = PyErr_NewException("xx.error", NULL, NULL);
- if (ErrorObject == NULL)
- goto fail;
- }
- Py_INCREF(ErrorObject);
- PyModule_AddObject(m, "error", ErrorObject);
-
- /* Add Str */
- if (PyType_Ready(&Str_Type) < 0)
- goto fail;
- PyModule_AddObject(m, "Str", (PyObject *)&Str_Type);
-
- /* Add Null */
- if (PyType_Ready(&Null_Type) < 0)
- goto fail;
- PyModule_AddObject(m, "Null", (PyObject *)&Null_Type);
- return 0;
- fail:
- Py_XDECREF(m);
- return -1;
-}
-
-static struct PyModuleDef_Slot xx_slots[] = {
- {Py_mod_exec, xx_exec},
- {0, NULL},
-};
-
-static struct PyModuleDef xxmodule = {
- PyModuleDef_HEAD_INIT,
- "xx",
- module_doc,
- 0,
- xx_methods,
- xx_slots,
- NULL,
- NULL,
- NULL
-};
-
-/* Export function for the module (*must* be called PyInit_xx) */
-
-PyMODINIT_FUNC
-PyInit_xx(void)
-{
- return PyModuleDef_Init(&xxmodule);
-}
+++ /dev/null
-
-/* Use this file as a template to start implementing a module that
- also declares object types. All occurrences of 'Xxo' should be changed
- to something reasonable for your objects. After that, all other
- occurrences of 'xx' should be changed to something reasonable for your
- module. If your module is named foo your sourcefile should be named
- foomodule.c.
-
- You will probably want to delete all references to 'x_attr' and add
- your own types of attributes instead. Maybe you want to name your
- local variables other than 'self'. If your object type is needed in
- other files, you'll have to create a file "foobarobject.h"; see
- floatobject.h for an example. */
-
-/* Xxo objects */
-
-#include "Python.h"
-
-static PyObject *ErrorObject;
-
-typedef struct {
- PyObject_HEAD
- PyObject *x_attr; /* Attributes dictionary */
-} XxoObject;
-
-static PyTypeObject Xxo_Type;
-
-#define XxoObject_Check(v) Py_IS_TYPE(v, &Xxo_Type)
-
-static XxoObject *
-newXxoObject(PyObject *arg)
-{
- XxoObject *self;
- self = PyObject_New(XxoObject, &Xxo_Type);
- if (self == NULL)
- return NULL;
- self->x_attr = NULL;
- return self;
-}
-
-/* Xxo methods */
-
-static void
-Xxo_dealloc(XxoObject *self)
-{
- Py_XDECREF(self->x_attr);
- PyObject_Free(self);
-}
-
-static PyObject *
-Xxo_demo(XxoObject *self, PyObject *args)
-{
- if (!PyArg_ParseTuple(args, ":demo"))
- return NULL;
- Py_INCREF(Py_None);
- return Py_None;
-}
-
-static PyMethodDef Xxo_methods[] = {
- {"demo", (PyCFunction)Xxo_demo, METH_VARARGS,
- PyDoc_STR("demo() -> None")},
- {NULL, NULL} /* sentinel */
-};
-
-static PyObject *
-Xxo_getattro(XxoObject *self, PyObject *name)
-{
- if (self->x_attr != NULL) {
- PyObject *v = PyDict_GetItemWithError(self->x_attr, name);
- if (v != NULL) {
- Py_INCREF(v);
- return v;
- }
- else if (PyErr_Occurred()) {
- return NULL;
- }
- }
- return PyObject_GenericGetAttr((PyObject *)self, name);
-}
-
-static int
-Xxo_setattr(XxoObject *self, const char *name, PyObject *v)
-{
- if (self->x_attr == NULL) {
- self->x_attr = PyDict_New();
- if (self->x_attr == NULL)
- return -1;
- }
- if (v == NULL) {
- int rv = PyDict_DelItemString(self->x_attr, name);
- if (rv < 0 && PyErr_ExceptionMatches(PyExc_KeyError))
- PyErr_SetString(PyExc_AttributeError,
- "delete non-existing Xxo attribute");
- return rv;
- }
- else
- return PyDict_SetItemString(self->x_attr, name, v);
-}
-
-static PyTypeObject Xxo_Type = {
- /* The ob_type field must be initialized in the module init function
- * to be portable to Windows without using C++. */
- PyVarObject_HEAD_INIT(NULL, 0)
- "xxmodule.Xxo", /*tp_name*/
- sizeof(XxoObject), /*tp_basicsize*/
- 0, /*tp_itemsize*/
- /* methods */
- (destructor)Xxo_dealloc, /*tp_dealloc*/
- 0, /*tp_vectorcall_offset*/
- (getattrfunc)0, /*tp_getattr*/
- (setattrfunc)Xxo_setattr, /*tp_setattr*/
- 0, /*tp_as_async*/
- 0, /*tp_repr*/
- 0, /*tp_as_number*/
- 0, /*tp_as_sequence*/
- 0, /*tp_as_mapping*/
- 0, /*tp_hash*/
- 0, /*tp_call*/
- 0, /*tp_str*/
- (getattrofunc)Xxo_getattro, /*tp_getattro*/
- 0, /*tp_setattro*/
- 0, /*tp_as_buffer*/
- Py_TPFLAGS_DEFAULT, /*tp_flags*/
- 0, /*tp_doc*/
- 0, /*tp_traverse*/
- 0, /*tp_clear*/
- 0, /*tp_richcompare*/
- 0, /*tp_weaklistoffset*/
- 0, /*tp_iter*/
- 0, /*tp_iternext*/
- Xxo_methods, /*tp_methods*/
- 0, /*tp_members*/
- 0, /*tp_getset*/
- 0, /*tp_base*/
- 0, /*tp_dict*/
- 0, /*tp_descr_get*/
- 0, /*tp_descr_set*/
- 0, /*tp_dictoffset*/
- 0, /*tp_init*/
- 0, /*tp_alloc*/
- 0, /*tp_new*/
- 0, /*tp_free*/
- 0, /*tp_is_gc*/
-};
-/* --------------------------------------------------------------------- */
-
-/* Function of two integers returning integer */
-
-PyDoc_STRVAR(xx_foo_doc,
-"foo(i,j)\n\
-\n\
-Return the sum of i and j.");
-
-static PyObject *
-xx_foo(PyObject *self, PyObject *args)
-{
- long i, j;
- long res;
- if (!PyArg_ParseTuple(args, "ll:foo", &i, &j))
- return NULL;
- res = i+j; /* XXX Do something here */
- return PyLong_FromLong(res);
-}
-
-
-/* Function of no arguments returning new Xxo object */
-
-static PyObject *
-xx_new(PyObject *self, PyObject *args)
-{
- XxoObject *rv;
-
- if (!PyArg_ParseTuple(args, ":new"))
- return NULL;
- rv = newXxoObject(args);
- if (rv == NULL)
- return NULL;
- return (PyObject *)rv;
-}
-
-/* Example with subtle bug from extensions manual ("Thin Ice"). */
-
-static PyObject *
-xx_bug(PyObject *self, PyObject *args)
-{
- PyObject *list, *item;
-
- if (!PyArg_ParseTuple(args, "O:bug", &list))
- return NULL;
-
- item = PyList_GetItem(list, 0);
- /* Py_INCREF(item); */
- PyList_SetItem(list, 1, PyLong_FromLong(0L));
- PyObject_Print(item, stdout, 0);
- printf("\n");
- /* Py_DECREF(item); */
-
- Py_INCREF(Py_None);
- return Py_None;
-}
-
-/* Test bad format character */
-
-static PyObject *
-xx_roj(PyObject *self, PyObject *args)
-{
- PyObject *a;
- long b;
- if (!PyArg_ParseTuple(args, "O#:roj", &a, &b))
- return NULL;
- Py_INCREF(Py_None);
- return Py_None;
-}
-
-
-/* ---------- */
-
-static PyTypeObject Str_Type = {
- /* The ob_type field must be initialized in the module init function
- * to be portable to Windows without using C++. */
- PyVarObject_HEAD_INIT(NULL, 0)
- "xxmodule.Str", /*tp_name*/
- 0, /*tp_basicsize*/
- 0, /*tp_itemsize*/
- /* methods */
- 0, /*tp_dealloc*/
- 0, /*tp_vectorcall_offset*/
- 0, /*tp_getattr*/
- 0, /*tp_setattr*/
- 0, /*tp_as_async*/
- 0, /*tp_repr*/
- 0, /*tp_as_number*/
- 0, /*tp_as_sequence*/
- 0, /*tp_as_mapping*/
- 0, /*tp_hash*/
- 0, /*tp_call*/
- 0, /*tp_str*/
- 0, /*tp_getattro*/
- 0, /*tp_setattro*/
- 0, /*tp_as_buffer*/
- Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/
- 0, /*tp_doc*/
- 0, /*tp_traverse*/
- 0, /*tp_clear*/
- 0, /*tp_richcompare*/
- 0, /*tp_weaklistoffset*/
- 0, /*tp_iter*/
- 0, /*tp_iternext*/
- 0, /*tp_methods*/
- 0, /*tp_members*/
- 0, /*tp_getset*/
- 0, /* see PyInit_xx */ /*tp_base*/
- 0, /*tp_dict*/
- 0, /*tp_descr_get*/
- 0, /*tp_descr_set*/
- 0, /*tp_dictoffset*/
- 0, /*tp_init*/
- 0, /*tp_alloc*/
- 0, /*tp_new*/
- 0, /*tp_free*/
- 0, /*tp_is_gc*/
-};
-
-/* ---------- */
-
-static PyObject *
-null_richcompare(PyObject *self, PyObject *other, int op)
-{
- Py_INCREF(Py_NotImplemented);
- return Py_NotImplemented;
-}
-
-static PyTypeObject Null_Type = {
- /* The ob_type field must be initialized in the module init function
- * to be portable to Windows without using C++. */
- PyVarObject_HEAD_INIT(NULL, 0)
- "xxmodule.Null", /*tp_name*/
- 0, /*tp_basicsize*/
- 0, /*tp_itemsize*/
- /* methods */
- 0, /*tp_dealloc*/
- 0, /*tp_vectorcall_offset*/
- 0, /*tp_getattr*/
- 0, /*tp_setattr*/
- 0, /*tp_as_async*/
- 0, /*tp_repr*/
- 0, /*tp_as_number*/
- 0, /*tp_as_sequence*/
- 0, /*tp_as_mapping*/
- 0, /*tp_hash*/
- 0, /*tp_call*/
- 0, /*tp_str*/
- 0, /*tp_getattro*/
- 0, /*tp_setattro*/
- 0, /*tp_as_buffer*/
- Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE, /*tp_flags*/
- 0, /*tp_doc*/
- 0, /*tp_traverse*/
- 0, /*tp_clear*/
- null_richcompare, /*tp_richcompare*/
- 0, /*tp_weaklistoffset*/
- 0, /*tp_iter*/
- 0, /*tp_iternext*/
- 0, /*tp_methods*/
- 0, /*tp_members*/
- 0, /*tp_getset*/
- 0, /* see PyInit_xx */ /*tp_base*/
- 0, /*tp_dict*/
- 0, /*tp_descr_get*/
- 0, /*tp_descr_set*/
- 0, /*tp_dictoffset*/
- 0, /*tp_init*/
- 0, /*tp_alloc*/
- PyType_GenericNew, /*tp_new*/
- 0, /*tp_free*/
- 0, /*tp_is_gc*/
-};
-
-
-/* ---------- */
-
-
-/* List of functions defined in the module */
-
-static PyMethodDef xx_methods[] = {
- {"roj", xx_roj, METH_VARARGS,
- PyDoc_STR("roj(a,b) -> None")},
- {"foo", xx_foo, METH_VARARGS,
- xx_foo_doc},
- {"new", xx_new, METH_VARARGS,
- PyDoc_STR("new() -> new Xx object")},
- {"bug", xx_bug, METH_VARARGS,
- PyDoc_STR("bug(o) -> None")},
- {NULL, NULL} /* sentinel */
-};
-
-PyDoc_STRVAR(module_doc,
-"This is a template module just for instruction.");
-
-
-static int
-xx_exec(PyObject *m)
-{
- /* Slot initialization is subject to the rules of initializing globals.
- C99 requires the initializers to be "address constants". Function
- designators like 'PyType_GenericNew', with implicit conversion to
- a pointer, are valid C99 address constants.
-
- However, the unary '&' operator applied to a non-static variable
- like 'PyBaseObject_Type' is not required to produce an address
- constant. Compilers may support this (gcc does), MSVC does not.
-
- Both compilers are strictly standard conforming in this particular
- behavior.
- */
- Null_Type.tp_base = &PyBaseObject_Type;
- Str_Type.tp_base = &PyUnicode_Type;
-
- /* Finalize the type object including setting type of the new type
- * object; doing it here is required for portability, too. */
- if (PyType_Ready(&Xxo_Type) < 0) {
- return -1;
- }
-
- /* Add some symbolic constants to the module */
- if (ErrorObject == NULL) {
- ErrorObject = PyErr_NewException("xx.error", NULL, NULL);
- if (ErrorObject == NULL) {
- return -1;
- }
- }
- int rc = PyModule_AddType(m, (PyTypeObject *)ErrorObject);
- Py_DECREF(ErrorObject);
- if (rc < 0) {
- return -1;
- }
-
- /* Add Str and Null types */
- if (PyModule_AddType(m, &Str_Type) < 0) {
- return -1;
- }
- if (PyModule_AddType(m, &Null_Type) < 0) {
- return -1;
- }
-
- return 0;
-}
-
-static struct PyModuleDef_Slot xx_slots[] = {
- {Py_mod_exec, xx_exec},
- {0, NULL},
-};
-
-static struct PyModuleDef xxmodule = {
- PyModuleDef_HEAD_INIT,
- "xx",
- module_doc,
- 0,
- xx_methods,
- xx_slots,
- NULL,
- NULL,
- NULL
-};
-
-/* Export function for the module (*must* be called PyInit_xx) */
-
-PyMODINIT_FUNC
-PyInit_xx(void)
-{
- return PyModuleDef_Init(&xxmodule);
-}
that (optionally) takes care of stripping comments, ignoring blank
lines, and joining lines with backslashes."""
-import sys
+import sys, io
class TextFile:
"""Open a new file named 'filename'. This overrides both the
'filename' and 'file' arguments to the constructor."""
self.filename = filename
- self.file = open(self.filename, errors=self.errors)
+ self.file = io.open(self.filename, 'r', errors=self.errors)
self.current_line = 0
def close(self):
line."""
sys.stderr.write("warning: " + self.gen_error(msg, line) + "\n")
- def readline(self): # noqa: C901
+ def readline(self):
"""Read and return a single logical line from the current file (or
from an internal buffer if lines have previously been "unread"
with 'unreadline()'). If the 'join_lines' option is true, this
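(To make the options named in the docstring above concrete, here is a small sketch of TextFile in action; the file name is illustrative.)

from distutils.text_file import TextFile

with open('example.txt', 'w') as f:
    f.write('# a comment-only line\n'
            '\n'
            'first \\\n'
            'second\n')

tf = TextFile('example.txt', strip_comments=1, skip_blanks=1, join_lines=1)
print(tf.readlines())   # ['first second']: comment and blank line dropped,
tf.close()              # backslash continuation joined into one logical line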
* link shared library handled by 'cc -shared'
"""
-import os
-import sys
-import re
-import shlex
-import itertools
+import os, sys, re, shlex
from distutils import sysconfig
from distutils.dep_util import newer
pp_args.extend(extra_postargs)
pp_args.append(source)
- # reasons to preprocess:
- # - force is indicated
- # - output is directed to stdout
- # - source file is newer than the target
- preprocess = self.force or output_file is None or newer(source, output_file)
- if not preprocess:
- return
-
- if output_file:
- self.mkpath(os.path.dirname(output_file))
-
- try:
- self.spawn(pp_args)
- except DistutilsExecError as msg:
- raise CompileError(msg)
+ # We need to preprocess: either we're being forced to, or we're
+ # generating output to stdout, or there's a target output file and
+ # the source file is newer than the target (or the target doesn't
+ # exist).
+ if self.force or output_file is None or newer(source, output_file):
+ if output_file:
+ self.mkpath(os.path.dirname(output_file))
+ try:
+ self.spawn(pp_args)
+ except DistutilsExecError as msg:
+ raise CompileError(msg)
def _compile(self, obj, src, ext, cc_args, extra_postargs, pp_opts):
compiler_so = compiler_fixup(self.compiler_so, cc_args + extra_postargs)
def library_option(self, lib):
return "-l" + lib
- @staticmethod
- def _library_root(dir):
- """
- macOS users can specify an alternate SDK using'-isysroot'.
- Calculate the SDK root if it is specified.
-
- Note that, as of Xcode 7, Apple SDKs may contain textual stub
- libraries with .tbd extensions rather than the normal .dylib
- shared libraries installed in /. The Apple compiler tool
- chain handles this transparently but it can cause problems
- for programs that are being built with an SDK and searching
- for specific libraries. Callers of find_library_file need to
- keep in mind that the base filename of the returned SDK library
- file might have a different extension from that of the library
- file installed on the running system, for example:
- /Applications/Xcode.app/Contents/Developer/Platforms/
- MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/
- usr/lib/libedit.tbd
- vs
- /usr/lib/libedit.dylib
- """
- cflags = sysconfig.get_config_var('CFLAGS')
- match = re.search(r'-isysroot\s*(\S+)', cflags)
-
- apply_root = (
- sys.platform == 'darwin'
- and match
- and (
+ def find_library_file(self, dirs, lib, debug=0):
+ shared_f = self.library_filename(lib, lib_type='shared')
+ dylib_f = self.library_filename(lib, lib_type='dylib')
+ xcode_stub_f = self.library_filename(lib, lib_type='xcode_stub')
+ static_f = self.library_filename(lib, lib_type='static')
+
+ if sys.platform == 'darwin':
+ # On OSX users can specify an alternate SDK using
+ # '-isysroot', calculate the SDK root if it is specified
+ # (and use it further on)
+ #
+ # Note that, as of Xcode 7, Apple SDKs may contain textual stub
+ # libraries with .tbd extensions rather than the normal .dylib
+ # shared libraries installed in /. The Apple compiler tool
+ # chain handles this transparently but it can cause problems
+ # for programs that are being built with an SDK and searching
+ # for specific libraries. Callers of find_library_file need to
+ # keep in mind that the base filename of the returned SDK library
+ # file might have a different extension from that of the library
+ # file installed on the running system, for example:
+ # /Applications/Xcode.app/Contents/Developer/Platforms/
+ # MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/
+ # usr/lib/libedit.tbd
+ # vs
+ # /usr/lib/libedit.dylib
+ cflags = sysconfig.get_config_var('CFLAGS')
+ m = re.search(r'-isysroot\s*(\S+)', cflags)
+ if m is None:
+ sysroot = '/'
+ else:
+ sysroot = m.group(1)
+
+ for dir in dirs:
+ shared = os.path.join(dir, shared_f)
+ dylib = os.path.join(dir, dylib_f)
+ static = os.path.join(dir, static_f)
+ xcode_stub = os.path.join(dir, xcode_stub_f)
+
+ if sys.platform == 'darwin' and (
dir.startswith('/System/')
or (dir.startswith('/usr/') and not dir.startswith('/usr/local/'))
- )
- )
-
- return os.path.join(match.group(1), dir[1:]) if apply_root else dir
-
- def find_library_file(self, dirs, lib, debug=0):
- r"""
- Second-guess the linker with not much hard
- data to go on: GCC seems to prefer the shared library, so
- assume that *all* Unix C compilers do,
- ignoring even GCC's "-static" option.
-
- >>> compiler = UnixCCompiler()
- >>> compiler._library_root = lambda dir: dir
- >>> monkeypatch = getfixture('monkeypatch')
- >>> monkeypatch.setattr(os.path, 'exists', lambda d: 'existing' in d)
- >>> dirs = ('/foo/bar/missing', '/foo/bar/existing')
- >>> compiler.find_library_file(dirs, 'abc').replace('\\', '/')
- '/foo/bar/existing/libabc.dylib'
- >>> compiler.find_library_file(reversed(dirs), 'abc').replace('\\', '/')
- '/foo/bar/existing/libabc.dylib'
- >>> monkeypatch.setattr(os.path, 'exists',
- ... lambda d: 'existing' in d and '.a' in d)
- >>> compiler.find_library_file(dirs, 'abc').replace('\\', '/')
- '/foo/bar/existing/libabc.a'
- >>> compiler.find_library_file(reversed(dirs), 'abc').replace('\\', '/')
- '/foo/bar/existing/libabc.a'
- """
- lib_names = (
- self.library_filename(lib, lib_type=type)
- for type in 'dylib xcode_stub shared static'.split()
- )
-
- roots = map(self._library_root, dirs)
-
- searched = (
- os.path.join(root, lib_name)
- for root, lib_name in itertools.product(roots, lib_names)
- )
-
- found = filter(os.path.exists, searched)
-
- # Return None if it could not be found in any dir.
- return next(found, None)
+ ):
+
+ shared = os.path.join(sysroot, dir[1:], shared_f)
+ dylib = os.path.join(sysroot, dir[1:], dylib_f)
+ static = os.path.join(sysroot, dir[1:], static_f)
+ xcode_stub = os.path.join(sysroot, dir[1:], xcode_stub_f)
+
+ # We're second-guessing the linker here, with not much hard
+ # data to go on: GCC seems to prefer the shared library, so I'm
+ # assuming that *all* Unix C compilers do. And of course I'm
+ # ignoring even GCC's "-static" option. So sue me.
+ if os.path.exists(dylib):
+ return dylib
+ elif os.path.exists(xcode_stub):
+ return xcode_stub
+ elif os.path.exists(shared):
+ return shared
+ elif os.path.exists(static):
+ return static
+
+ # Oops, didn't find it in *any* of 'dirs'
+ return None
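
The restored ``find_library_file`` above checks each directory for a ``.dylib`` first, then an Xcode ``.tbd`` stub, then a generic shared library, then a static archive, and on macOS re-roots ``/System/`` and ``/usr/`` (but not ``/usr/local/``) under the SDK named by ``-isysroot`` in ``CFLAGS``. A minimal sketch of that re-rooting step, not part of the patch; the helper name and the SDK path are illustrative::

    import os
    import re

    def _apply_sysroot(cflags, directory, filename):
        # Mirror the hunk above: fall back to '/' when no -isysroot is given.
        match = re.search(r'-isysroot\s*(\S+)', cflags or '')
        sysroot = match.group(1) if match else '/'
        is_system_dir = directory.startswith('/System/') or (
            directory.startswith('/usr/') and not directory.startswith('/usr/local/')
        )
        if is_system_dir:
            # System paths are looked up inside the SDK, so the match may be a
            # .tbd stub rather than the .dylib installed on the running system.
            return os.path.join(sysroot, directory[1:], filename)
        return os.path.join(directory, filename)

    print(_apply_sysroot('-isysroot /Library/SDKs/MacOSX10.11.sdk', '/usr/lib', 'libedit.tbd'))
    # -> /Library/SDKs/MacOSX10.11.sdk/usr/lib/libedit.tbd
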
import subprocess
import sys
import sysconfig
-from distutils.errors import DistutilsPlatformError, DistutilsByteCompileError
+from distutils.errors import DistutilsPlatformError
from distutils.dep_util import newer
from distutils.spawn import spawn
from distutils import log
+from distutils.errors import DistutilsByteCompileError
def get_host_platform():
print.
"""
if msg is None:
- msg = "{}{!r}".format(func.__name__, args)
+ msg = "%s%r" % (func.__name__, args)
if msg[-2:] == ',)': # correct for singleton tuple
msg = msg[0:-2] + ')'
elif val in ('n', 'no', 'f', 'false', 'off', '0'):
return 0
else:
- raise ValueError("invalid truth value {!r}".format(val))
+ raise ValueError("invalid truth value %r" % (val,))
-def byte_compile( # noqa: C901
+def byte_compile(
py_files,
optimize=0,
force=0,
)
def __repr__(self):
- return "{} ('{}')".format(self.__class__.__name__, str(self))
+ return "%s ('%s')" % (self.__class__.__name__, str(self))
def __eq__(self, other):
c = self._cmp(other)
return vstring
- def _cmp(self, other): # noqa: C901
+ def _cmp(self, other):
if isinstance(other, str):
with suppress_known_deprecation():
other = StrictVersion(other)
--- /dev/null
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so, subject to
+the following conditions:
+
+The above copyright notice and this permission notice shall be
+included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
--- /dev/null
+Metadata-Version: 2.1
+Name: pyparsing
+Version: 3.0.8
+Summary: pyparsing module - Classes and methods to define and execute parsing grammars
+Author-email: Paul McGuire <ptmcg.gm+pyparsing@gmail.com>
+Requires-Python: >=3.6.8
+Description-Content-Type: text/x-rst
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Intended Audience :: Information Technology
+Classifier: License :: OSI Approved :: MIT License
+Classifier: Operating System :: OS Independent
+Classifier: Programming Language :: Python
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.6
+Classifier: Programming Language :: Python :: 3.7
+Classifier: Programming Language :: Python :: 3.8
+Classifier: Programming Language :: Python :: 3.9
+Classifier: Programming Language :: Python :: 3.10
+Classifier: Programming Language :: Python :: 3 :: Only
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Classifier: Programming Language :: Python :: Implementation :: PyPy
+Classifier: Typing :: Typed
+Requires-Dist: railroad-diagrams ; extra == "diagrams"
+Requires-Dist: jinja2 ; extra == "diagrams"
+Project-URL: Homepage, https://github.com/pyparsing/pyparsing/
+Provides-Extra: diagrams
+
+PyParsing -- A Python Parsing Module
+====================================
+
+|Build Status| |Coverage|
+
+Introduction
+============
+
+The pyparsing module is an alternative approach to creating and
+executing simple grammars, vs. the traditional lex/yacc approach, or the
+use of regular expressions. The pyparsing module provides a library of
+classes that client code uses to construct the grammar directly in
+Python code.
+
+*[Since first writing this description of pyparsing in late 2003, this
+technique for developing parsers has become more widespread, under the
+name Parsing Expression Grammars - PEGs. See more information on PEGs*
+`here <https://en.wikipedia.org/wiki/Parsing_expression_grammar>`__
+*.]*
+
+Here is a program to parse ``"Hello, World!"`` (or any greeting of the form
+``"salutation, addressee!"``):
+
+.. code:: python
+
+ from pyparsing import Word, alphas
+ greet = Word(alphas) + "," + Word(alphas) + "!"
+ hello = "Hello, World!"
+ print(hello, "->", greet.parseString(hello))
+
+The program outputs the following::
+
+ Hello, World! -> ['Hello', ',', 'World', '!']
+
+The Python representation of the grammar is quite readable, owing to the
+self-explanatory class names, and the use of '+', '|' and '^' operator
+definitions.
+
+The parsed results returned from ``parseString()`` is a collection of type
+``ParseResults``, which can be accessed as a
+nested list, a dictionary, or an object with named attributes.
+
+The pyparsing module handles some of the problems that are typically
+vexing when writing text parsers:
+
+- extra or missing whitespace (the above program will also handle ``"Hello,World!"``, ``"Hello , World !"``, etc.)
+- quoted strings
+- embedded comments
+
+The examples directory includes a simple SQL parser, simple CORBA IDL
+parser, a config file parser, a chemical formula parser, and a four-
+function algebraic notation parser, among many others.
+
+Documentation
+=============
+
+There are many examples in the online docstrings of the classes
+and methods in pyparsing. You can find them compiled into `online docs <https://pyparsing-docs.readthedocs.io/en/latest/>`__. Additional
+documentation resources and project info are listed in the online
+`GitHub wiki <https://github.com/pyparsing/pyparsing/wiki>`__. An
+entire directory of examples can be found `here <https://github.com/pyparsing/pyparsing/tree/master/examples>`__.
+
+License
+=======
+
+MIT License. See header of the `pyparsing.py <https://github.com/pyparsing/pyparsing/blob/master/pyparsing/__init__.py#L1-L23>`__ file.
+
+History
+=======
+
+See `CHANGES <https://github.com/pyparsing/pyparsing/blob/master/CHANGES>`__ file.
+
+.. |Build Status| image:: https://github.com/pyparsing/pyparsing/actions/workflows/ci.yml/badge.svg
+ :target: https://github.com/pyparsing/pyparsing/actions/workflows/ci.yml
+.. |Coverage| image:: https://codecov.io/gh/pyparsing/pyparsing/branch/master/graph/badge.svg
+ :target: https://codecov.io/gh/pyparsing/pyparsing
+
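
The README restored above says ``parse_string()`` returns a ``ParseResults`` object that can be read as a nested list, as a dictionary, or through named attributes. A small sketch of those three access styles, not part of the vendored files; the results names are illustrative::

    from pyparsing import Word, alphas

    greet = Word(alphas)("salutation") + "," + Word(alphas)("addressee") + "!"
    result = greet.parse_string("Hello, World!")

    print(list(result))          # list view      -> ['Hello', ',', 'World', '!']
    print(result["salutation"])  # dict access    -> Hello
    print(result.addressee)      # attribute view -> World
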
--- /dev/null
+pyparsing-3.0.8.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
+pyparsing-3.0.8.dist-info/LICENSE,sha256=ENUSChaAWAT_2otojCIL-06POXQbVzIGBNRVowngGXI,1023
+pyparsing-3.0.8.dist-info/METADATA,sha256=dEvZBGz3Owm5LYEaqDeKb6e3ZgOrF48WaCI_PG1n5BE,4207
+pyparsing-3.0.8.dist-info/RECORD,,
+pyparsing-3.0.8.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+pyparsing-3.0.8.dist-info/WHEEL,sha256=jPMR_Dzkc4X4icQtmz81lnNY_kAsfog7ry7qoRvYLXw,81
+pyparsing/__init__.py,sha256=EMa1HCuq9HJhEDR8fUThu2gD0nl6Cs8FFEWZZ0eRCM8,9159
+pyparsing/__pycache__/__init__.cpython-38.pyc,,
+pyparsing/__pycache__/actions.cpython-38.pyc,,
+pyparsing/__pycache__/common.cpython-38.pyc,,
+pyparsing/__pycache__/core.cpython-38.pyc,,
+pyparsing/__pycache__/exceptions.cpython-38.pyc,,
+pyparsing/__pycache__/helpers.cpython-38.pyc,,
+pyparsing/__pycache__/results.cpython-38.pyc,,
+pyparsing/__pycache__/testing.cpython-38.pyc,,
+pyparsing/__pycache__/unicode.cpython-38.pyc,,
+pyparsing/__pycache__/util.cpython-38.pyc,,
+pyparsing/actions.py,sha256=60v7mETOBzc01YPH_qQD5isavgcSJpAfIKpzgjM3vaU,6429
+pyparsing/common.py,sha256=lFL97ooIeR75CmW5hjURZqwDCTgruqltcTCZ-ulLO2Q,12936
+pyparsing/core.py,sha256=zBzGw5vcSd58pB1QkYpY6O_XCcHVKX_nH5xglRx_L-M,213278
+pyparsing/diagram/__init__.py,sha256=oU_UEh6O5voKSFjUdq462_mpmURLOfUIsmWvxi1qgTQ,23003
+pyparsing/diagram/__pycache__/__init__.cpython-38.pyc,,
+pyparsing/diagram/template.jinja2,sha256=SfQ8SLktSBqI5W1DGcUVH1vdflRD6x2sQBApxrcNg7s,589
+pyparsing/exceptions.py,sha256=H4D9gqMavqmAFSsdrU_J6bO-jA-T-A7yvtXWZpooIUA,9030
+pyparsing/helpers.py,sha256=EyjpgDOc3ivwRsU4VXxAWdgIs5gaqMDaLWcwRh5mqxc,39007
+pyparsing/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
+pyparsing/results.py,sha256=Hd6FAAh5sF8zGXpwsamdVqFUblIwyQf0FH0t7FCb1OY,25353
+pyparsing/testing.py,sha256=szs8AKZREZMhL0y0vsMfaTVAnpqPHetg6VKJBNmc4QY,13388
+pyparsing/unicode.py,sha256=IR-ioeGY29cZ49tG8Ts7ITPWWNP5G2DcZs58oa8zn44,10381
+pyparsing/util.py,sha256=kq772O5YSeXOSdP-M31EWpbH_ayj7BMHImBYo9xPD5M,6805
--- /dev/null
+Wheel-Version: 1.0
+Generator: flit 3.6.0
+Root-Is-Purelib: true
+Tag: py3-none-any
+++ /dev/null
-Permission is hereby granted, free of charge, to any person obtaining
-a copy of this software and associated documentation files (the
-"Software"), to deal in the Software without restriction, including
-without limitation the rights to use, copy, modify, merge, publish,
-distribute, sublicense, and/or sell copies of the Software, and to
-permit persons to whom the Software is furnished to do so, subject to
-the following conditions:
-
-The above copyright notice and this permission notice shall be
-included in all copies or substantial portions of the Software.
-
-THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
-IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
-CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
-TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
-SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+++ /dev/null
-Metadata-Version: 2.1
-Name: pyparsing
-Version: 3.0.9
-Summary: pyparsing module - Classes and methods to define and execute parsing grammars
-Author-email: Paul McGuire <ptmcg.gm+pyparsing@gmail.com>
-Requires-Python: >=3.6.8
-Description-Content-Type: text/x-rst
-Classifier: Development Status :: 5 - Production/Stable
-Classifier: Intended Audience :: Developers
-Classifier: Intended Audience :: Information Technology
-Classifier: License :: OSI Approved :: MIT License
-Classifier: Operating System :: OS Independent
-Classifier: Programming Language :: Python
-Classifier: Programming Language :: Python :: 3
-Classifier: Programming Language :: Python :: 3.6
-Classifier: Programming Language :: Python :: 3.7
-Classifier: Programming Language :: Python :: 3.8
-Classifier: Programming Language :: Python :: 3.9
-Classifier: Programming Language :: Python :: 3.10
-Classifier: Programming Language :: Python :: 3 :: Only
-Classifier: Programming Language :: Python :: Implementation :: CPython
-Classifier: Programming Language :: Python :: Implementation :: PyPy
-Classifier: Typing :: Typed
-Requires-Dist: railroad-diagrams ; extra == "diagrams"
-Requires-Dist: jinja2 ; extra == "diagrams"
-Project-URL: Homepage, https://github.com/pyparsing/pyparsing/
-Provides-Extra: diagrams
-
-PyParsing -- A Python Parsing Module
-====================================
-
-|Build Status| |Coverage|
-
-Introduction
-============
-
-The pyparsing module is an alternative approach to creating and
-executing simple grammars, vs. the traditional lex/yacc approach, or the
-use of regular expressions. The pyparsing module provides a library of
-classes that client code uses to construct the grammar directly in
-Python code.
-
-*[Since first writing this description of pyparsing in late 2003, this
-technique for developing parsers has become more widespread, under the
-name Parsing Expression Grammars - PEGs. See more information on PEGs*
-`here <https://en.wikipedia.org/wiki/Parsing_expression_grammar>`__
-*.]*
-
-Here is a program to parse ``"Hello, World!"`` (or any greeting of the form
-``"salutation, addressee!"``):
-
-.. code:: python
-
- from pyparsing import Word, alphas
- greet = Word(alphas) + "," + Word(alphas) + "!"
- hello = "Hello, World!"
- print(hello, "->", greet.parseString(hello))
-
-The program outputs the following::
-
- Hello, World! -> ['Hello', ',', 'World', '!']
-
-The Python representation of the grammar is quite readable, owing to the
-self-explanatory class names, and the use of '+', '|' and '^' operator
-definitions.
-
-The parsed results returned from ``parseString()`` is a collection of type
-``ParseResults``, which can be accessed as a
-nested list, a dictionary, or an object with named attributes.
-
-The pyparsing module handles some of the problems that are typically
-vexing when writing text parsers:
-
-- extra or missing whitespace (the above program will also handle ``"Hello,World!"``, ``"Hello , World !"``, etc.)
-- quoted strings
-- embedded comments
-
-The examples directory includes a simple SQL parser, simple CORBA IDL
-parser, a config file parser, a chemical formula parser, and a four-
-function algebraic notation parser, among many others.
-
-Documentation
-=============
-
-There are many examples in the online docstrings of the classes
-and methods in pyparsing. You can find them compiled into `online docs <https://pyparsing-docs.readthedocs.io/en/latest/>`__. Additional
-documentation resources and project info are listed in the online
-`GitHub wiki <https://github.com/pyparsing/pyparsing/wiki>`__. An
-entire directory of examples can be found `here <https://github.com/pyparsing/pyparsing/tree/master/examples>`__.
-
-License
-=======
-
-MIT License. See header of the `pyparsing.py <https://github.com/pyparsing/pyparsing/blob/master/pyparsing/__init__.py#L1-L23>`__ file.
-
-History
-=======
-
-See `CHANGES <https://github.com/pyparsing/pyparsing/blob/master/CHANGES>`__ file.
-
-.. |Build Status| image:: https://github.com/pyparsing/pyparsing/actions/workflows/ci.yml/badge.svg
- :target: https://github.com/pyparsing/pyparsing/actions/workflows/ci.yml
-.. |Coverage| image:: https://codecov.io/gh/pyparsing/pyparsing/branch/master/graph/badge.svg
- :target: https://codecov.io/gh/pyparsing/pyparsing
-
+++ /dev/null
-pyparsing-3.0.9.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4
-pyparsing-3.0.9.dist-info/LICENSE,sha256=ENUSChaAWAT_2otojCIL-06POXQbVzIGBNRVowngGXI,1023
-pyparsing-3.0.9.dist-info/METADATA,sha256=h_fpm9rwvgZsE8v5YNF4IAo-IpaFWCOfUEm5MMByIiM,4207
-pyparsing-3.0.9.dist-info/RECORD,,
-pyparsing-3.0.9.dist-info/REQUESTED,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-pyparsing-3.0.9.dist-info/WHEEL,sha256=jPMR_Dzkc4X4icQtmz81lnNY_kAsfog7ry7qoRvYLXw,81
-pyparsing/__init__.py,sha256=52QH3lgPbJhba0estckoGPHRH8JvQSSCGoWiEn2m0bU,9159
-pyparsing/__pycache__/__init__.cpython-38.pyc,,
-pyparsing/__pycache__/actions.cpython-38.pyc,,
-pyparsing/__pycache__/common.cpython-38.pyc,,
-pyparsing/__pycache__/core.cpython-38.pyc,,
-pyparsing/__pycache__/exceptions.cpython-38.pyc,,
-pyparsing/__pycache__/helpers.cpython-38.pyc,,
-pyparsing/__pycache__/results.cpython-38.pyc,,
-pyparsing/__pycache__/testing.cpython-38.pyc,,
-pyparsing/__pycache__/unicode.cpython-38.pyc,,
-pyparsing/__pycache__/util.cpython-38.pyc,,
-pyparsing/actions.py,sha256=wU9i32e0y1ymxKE3OUwSHO-SFIrt1h_wv6Ws0GQjpNU,6426
-pyparsing/common.py,sha256=lFL97ooIeR75CmW5hjURZqwDCTgruqltcTCZ-ulLO2Q,12936
-pyparsing/core.py,sha256=u8GptQE_H6wMkl8OZhxeK1aAPIDXXNgwdShORBwBVS4,213310
-pyparsing/diagram/__init__.py,sha256=f_EfxahqrdkRVahmTwLJXkZ9EEDKNd-O7lBbpJYlE1g,23668
-pyparsing/diagram/__pycache__/__init__.cpython-38.pyc,,
-pyparsing/exceptions.py,sha256=3LbSafD32NYb1Tzt85GHNkhEAU1eZkTtNSk24cPMemo,9023
-pyparsing/helpers.py,sha256=QpUOjW0-psvueMwWb9bQpU2noqKCv98_wnw1VSzSdVo,39129
-pyparsing/py.typed,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
-pyparsing/results.py,sha256=HgNvWVXBdQP-Q6PtJfoCEeOJk2nwEvG-2KVKC5sGA30,25341
-pyparsing/testing.py,sha256=7tu4Abp4uSeJV0N_yEPRmmNUhpd18ZQP3CrX41DM814,13402
-pyparsing/unicode.py,sha256=fwuhMj30SQ165Cv7HJpu-rSxGbRm93kN9L4Ei7VGc1Y,10787
-pyparsing/util.py,sha256=kq772O5YSeXOSdP-M31EWpbH_ayj7BMHImBYo9xPD5M,6805
+++ /dev/null
-Wheel-Version: 1.0
-Generator: flit 3.6.0
-Root-Is-Purelib: true
-Tag: py3-none-any
)
-__version_info__ = version_info(3, 0, 9, "final", 0)
-__version_time__ = "05 May 2022 07:02 UTC"
+__version_info__ = version_info(3, 0, 8, "final", 0)
+__version_time__ = "09 Apr 2022 23:29 UTC"
__version__ = __version_info__.__version__
__versionTime__ = __version_time__
__author__ = "Paul McGuire <ptmcg.gm+pyparsing@gmail.com>"
na = one_of("N/A NA").set_parse_action(replace_with(math.nan))
term = na | num
- term[1, ...].parse_string("324 234 N/A 234") # -> [324, 234, nan, 234]
+ OneOrMore(term).parse_string("324 234 N/A 234") # -> [324, 234, nan, 234]
"""
return lambda s, l, t: [repl_str]
# core.py
#
import os
-import typing
from typing import (
+ Optional as OptionalType,
+ Iterable as IterableType,
NamedTuple,
Union,
Callable,
List,
TextIO,
Set,
+ Dict as DictType,
Sequence,
)
from abc import ABC, abstractmethod
def _should_enable_warnings(
- cmd_line_warn_options: typing.Iterable[str], warn_env_var: typing.Optional[str]
+ cmd_line_warn_options: IterableType[str], warn_env_var: OptionalType[str]
) -> bool:
enable = bool(warn_env_var)
for warn_opt in cmd_line_warn_options:
DEFAULT_WHITE_CHARS: str = " \n\t\r"
verbose_stacktrace: bool = False
- _literalStringClass: typing.Optional[type] = None
+ _literalStringClass: OptionalType[type] = None
@staticmethod
def set_default_whitespace_chars(chars: str) -> None:
Example::
# default whitespace chars are space, <TAB> and newline
- Word(alphas)[1, ...].parse_string("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl']
+ OneOrMore(Word(alphas)).parse_string("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl']
# change to just treat newline as significant
ParserElement.set_default_whitespace_chars(" \t")
- Word(alphas)[1, ...].parse_string("abc def\nghi jkl") # -> ['abc', 'def']
+ OneOrMore(Word(alphas)).parse_string("abc def\nghi jkl") # -> ['abc', 'def']
"""
ParserElement.DEFAULT_WHITE_CHARS = chars
ParserElement._literalStringClass = cls
class DebugActions(NamedTuple):
- debug_try: typing.Optional[DebugStartAction]
- debug_match: typing.Optional[DebugSuccessAction]
- debug_fail: typing.Optional[DebugExceptionAction]
+ debug_try: OptionalType[DebugStartAction]
+ debug_match: OptionalType[DebugSuccessAction]
+ debug_fail: OptionalType[DebugExceptionAction]
def __init__(self, savelist: bool = False):
self.parseAction: List[ParseAction] = list()
- self.failAction: typing.Optional[ParseFailAction] = None
+ self.failAction: OptionalType[ParseFailAction] = None
self.customName = None
self._defaultName = None
self.resultsName = None
integerK = integer.copy().add_parse_action(lambda toks: toks[0] * 1024) + Suppress("K")
integerM = integer.copy().add_parse_action(lambda toks: toks[0] * 1024 * 1024) + Suppress("M")
- print((integerK | integerM | integer)[1, ...].parse_string("5K 100 640K 256M"))
+ print(OneOrMore(integerK | integerM | integer).parse_string("5K 100 640K 256M"))
prints::
# cache for left-recursion in Forward references
recursion_lock = RLock()
- recursion_memos: typing.Dict[
+ recursion_memos: DictType[
Tuple[int, "Forward", bool], Tuple[int, Union[ParseResults, Exception]]
] = {}
@staticmethod
def enable_left_recursion(
- cache_size_limit: typing.Optional[int] = None, *, force=False
+ cache_size_limit: OptionalType[int] = None, *, force=False
) -> None:
"""
Enables "bounded recursion" parsing, which allows for both direct and indirect
Example::
- patt = Word(alphas)[1, ...]
+ patt = OneOrMore(Word(alphas))
patt.parse_string('ablaj /* comment */ lskjd')
# -> ['ablaj']
# turn on debugging for wd
wd.set_debug()
- term[1, ...].parse_string("abc 123 xyz 890")
+ OneOrMore(term).parse_string("abc 123 xyz 890")
prints::
self,
tests: Union[str, List[str]],
parse_all: bool = True,
- comment: typing.Optional[Union["ParserElement", str]] = "#",
+ comment: OptionalType[Union["ParserElement", str]] = "#",
full_dump: bool = True,
print_results: bool = True,
failure_tests: bool = False,
post_parse: Callable[[str, ParseResults], str] = None,
- file: typing.Optional[TextIO] = None,
+ file: OptionalType[TextIO] = None,
with_line_numbers: bool = False,
*,
parseAll: bool = True,
def __init__(
self,
match_string: str = "",
- ident_chars: typing.Optional[str] = None,
+ ident_chars: OptionalType[str] = None,
caseless: bool = False,
*,
matchString: str = "",
- identChars: typing.Optional[str] = None,
+ identChars: OptionalType[str] = None,
):
super().__init__()
identChars = identChars or ident_chars
Example::
- CaselessLiteral("CMD")[1, ...].parse_string("cmd CMD Cmd10")
+ OneOrMore(CaselessLiteral("CMD")).parse_string("cmd CMD Cmd10")
# -> ['CMD', 'CMD', 'CMD']
(Contrast with example for :class:`CaselessKeyword`.)
Example::
- CaselessKeyword("CMD")[1, ...].parse_string("cmd CMD Cmd10")
+ OneOrMore(CaselessKeyword("CMD")).parse_string("cmd CMD Cmd10")
# -> ['CMD', 'CMD']
(Contrast with example for :class:`CaselessLiteral`.)
def __init__(
self,
match_string: str = "",
- ident_chars: typing.Optional[str] = None,
+ ident_chars: OptionalType[str] = None,
*,
matchString: str = "",
- identChars: typing.Optional[str] = None,
+ identChars: OptionalType[str] = None,
):
identChars = identChars or ident_chars
match_string = matchString or match_string
def __init__(
self,
init_chars: str = "",
- body_chars: typing.Optional[str] = None,
+ body_chars: OptionalType[str] = None,
min: int = 1,
max: int = 0,
exact: int = 0,
as_keyword: bool = False,
- exclude_chars: typing.Optional[str] = None,
+ exclude_chars: OptionalType[str] = None,
*,
- initChars: typing.Optional[str] = None,
- bodyChars: typing.Optional[str] = None,
+ initChars: OptionalType[str] = None,
+ bodyChars: OptionalType[str] = None,
asKeyword: bool = False,
- excludeChars: typing.Optional[str] = None,
+ excludeChars: OptionalType[str] = None,
):
initChars = initChars or init_chars
bodyChars = bodyChars or body_chars
self,
charset: str,
as_keyword: bool = False,
- exclude_chars: typing.Optional[str] = None,
+ exclude_chars: OptionalType[str] = None,
*,
asKeyword: bool = False,
- excludeChars: typing.Optional[str] = None,
+ excludeChars: OptionalType[str] = None,
):
asKeyword = asKeyword or as_keyword
excludeChars = excludeChars or exclude_chars
def __init__(
self,
quote_char: str = "",
- esc_char: typing.Optional[str] = None,
- esc_quote: typing.Optional[str] = None,
+ esc_char: OptionalType[str] = None,
+ esc_quote: OptionalType[str] = None,
multiline: bool = False,
unquote_results: bool = True,
- end_quote_char: typing.Optional[str] = None,
+ end_quote_char: OptionalType[str] = None,
convert_whitespace_escapes: bool = True,
*,
quoteChar: str = "",
- escChar: typing.Optional[str] = None,
- escQuote: typing.Optional[str] = None,
+ escChar: OptionalType[str] = None,
+ escQuote: OptionalType[str] = None,
unquoteResults: bool = True,
- endQuoteChar: typing.Optional[str] = None,
+ endQuoteChar: OptionalType[str] = None,
convertWhitespaceEscapes: bool = True,
):
super().__init__()
post-processing parsed tokens.
"""
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False):
+ def __init__(self, exprs: IterableType[ParserElement], savelist: bool = False):
super().__init__(savelist)
self.exprs: List[ParserElement]
if isinstance(exprs, _generatorType):
Example::
integer = Word(nums)
- name_expr = Word(alphas)[1, ...]
+ name_expr = OneOrMore(Word(alphas))
expr = And([integer("id"), name_expr("name"), integer("age")])
# more easily written as:
def _generateDefaultName(self):
return "-"
- def __init__(
- self, exprs_arg: typing.Iterable[ParserElement], savelist: bool = True
- ):
+ def __init__(self, exprs_arg: IterableType[ParserElement], savelist: bool = True):
exprs: List[ParserElement] = list(exprs_arg)
if exprs and Ellipsis in exprs:
tmp = []
[['123'], ['3.1416'], ['789']]
"""
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False):
+ def __init__(self, exprs: IterableType[ParserElement], savelist: bool = False):
super().__init__(exprs, savelist)
if self.exprs:
self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
print(number.search_string("123 3.1416 789")) # Better -> [['123'], ['3.1416'], ['789']]
"""
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False):
+ def __init__(self, exprs: IterableType[ParserElement], savelist: bool = False):
super().__init__(exprs, savelist)
if self.exprs:
self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs)
- size: 20
"""
- def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = True):
+ def __init__(self, exprs: IterableType[ParserElement], savelist: bool = True):
super().__init__(exprs, savelist)
if self.exprs:
self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs)
label = data_word + FollowedBy(':')
attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
- attr_expr[1, ...].parse_string("shape: SQUARE color: BLACK posn: upper left").pprint()
+ OneOrMore(attr_expr).parse_string("shape: SQUARE color: BLACK posn: upper left").pprint()
prints::
"""
def __init__(
- self, expr: Union[ParserElement, str], retreat: typing.Optional[int] = None
+ self, expr: Union[ParserElement, str], retreat: OptionalType[int] = None
):
super().__init__(expr)
self.expr = self.expr().leave_whitespace()
# very crude boolean expression - to support parenthesis groups and
# operation hierarchy, use infix_notation
- boolean_expr = boolean_term + ((AND | OR) + boolean_term)[...]
+ boolean_expr = boolean_term + ZeroOrMore((AND | OR) + boolean_term)
# integers that are followed by "." are actually floats
integer = Word(nums) + ~Char(".")
def __init__(
self,
expr: ParserElement,
- stop_on: typing.Optional[Union[ParserElement, str]] = None,
+ stop_on: OptionalType[Union[ParserElement, str]] = None,
*,
- stopOn: typing.Optional[Union[ParserElement, str]] = None,
+ stopOn: OptionalType[Union[ParserElement, str]] = None,
):
super().__init__(expr)
stopOn = stopOn or stop_on
attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).set_parse_action(' '.join))
text = "shape: SQUARE posn: upper left color: BLACK"
- attr_expr[1, ...].parse_string(text).pprint() # Fail! read 'color' as data instead of next label -> [['shape', 'SQUARE color']]
+ OneOrMore(attr_expr).parse_string(text).pprint() # Fail! read 'color' as data instead of next label -> [['shape', 'SQUARE color']]
# use stop_on attribute for OneOrMore to avoid reading label string as part of the data
attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
def __init__(
self,
expr: ParserElement,
- stop_on: typing.Optional[Union[ParserElement, str]] = None,
+ stop_on: OptionalType[Union[ParserElement, str]] = None,
*,
- stopOn: typing.Optional[Union[ParserElement, str]] = None,
+ stopOn: OptionalType[Union[ParserElement, str]] = None,
):
super().__init__(expr, stopOn=stopOn or stop_on)
self.mayReturnEmpty = True
other: Union[ParserElement, str],
include: bool = False,
ignore: bool = None,
- fail_on: typing.Optional[Union[ParserElement, str]] = None,
+ fail_on: OptionalType[Union[ParserElement, str]] = None,
*,
failOn: Union[ParserElement, str] = None,
):
parser created using ``Forward``.
"""
- def __init__(self, other: typing.Optional[Union[ParserElement, str]] = None):
+ def __init__(self, other: OptionalType[Union[ParserElement, str]] = None):
self.caller_frame = traceback.extract_stack(limit=2)[0]
super().__init__(other, savelist=False)
self.lshift_line = None
join_string: str = "",
adjacent: bool = True,
*,
- joinString: typing.Optional[str] = None,
+ joinString: OptionalType[str] = None,
):
super().__init__(expr)
joinString = joinString if joinString is not None else join_string
attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
# print attributes as plain groups
- print(attr_expr[1, ...].parse_string(text).dump())
+ print(OneOrMore(attr_expr).parse_string(text).dump())
- # instead of OneOrMore(expr), parse using Dict(Group(expr)[1, ...]) - Dict will auto-assign names
- result = Dict(Group(attr_expr)[1, ...]).parse_string(text)
+ # instead of OneOrMore(expr), parse using Dict(OneOrMore(Group(expr))) - Dict will auto-assign names
+ result = Dict(OneOrMore(Group(attr_expr))).parse_string(text)
print(result.dump())
# access named fields as dict entries, or output as dict
source = "a, b, c,d"
wd = Word(alphas)
- wd_list1 = wd + (',' + wd)[...]
+ wd_list1 = wd + ZeroOrMore(',' + wd)
print(wd_list1.parse_string(source))
# often, delimiters that are useful during parsing are just in the
# way afterward - use Suppress to keep them out of the parsed output
- wd_list2 = wd + (Suppress(',') + wd)[...]
+ wd_list2 = wd + ZeroOrMore(Suppress(',') + wd)
print(wd_list2.parse_string(source))
# Skipped text (using '...') can be suppressed as well
def remove_duplicate_chars(tokens):
return ''.join(sorted(set(''.join(tokens))))
- wds = wd[1, ...].set_parse_action(remove_duplicate_chars)
+ wds = OneOrMore(wd).set_parse_action(remove_duplicate_chars)
print(wds.parse_string("slkdjs sld sldd sdlf sdljf"))
prints::
Example (compare the last to example in :class:`ParserElement.transform_string`::
- hex_ints = Word(hexnums)[1, ...].set_parse_action(token_map(int, 16))
+ hex_ints = OneOrMore(Word(hexnums)).set_parse_action(token_map(int, 16))
hex_ints.run_tests('''
00 11 22 aa FF 0a 0d 1a
''')
upperword = Word(alphas).set_parse_action(token_map(str.upper))
- upperword[1, ...].run_tests('''
+ OneOrMore(upperword).run_tests('''
my kingdom for a horse
''')
wd = Word(alphas).set_parse_action(token_map(str.title))
- wd[1, ...].set_parse_action(' '.join).run_tests('''
+ OneOrMore(wd).set_parse_action(' '.join).run_tests('''
now is the winter of our discontent made glorious summer by this sun of york
''')
# build list of built-in expressions, for future reference if a global default value
# gets updated
-_builtin_exprs: List[ParserElement] = [
- v for v in vars().values() if isinstance(v, ParserElement)
-]
+_builtin_exprs = [v for v in vars().values() if isinstance(v, ParserElement)]
# backward compatibility names
tokenMap = token_map
import railroad
import pyparsing
-import typing
+from pkg_resources import resource_filename
from typing import (
List,
+ Optional,
NamedTuple,
Generic,
TypeVar,
import inspect
-jinja2_template_source = """\
-<!DOCTYPE html>
-<html>
-<head>
- {% if not head %}
- <style type="text/css">
- .railroad-heading {
- font-family: monospace;
- }
- </style>
- {% else %}
- {{ head | safe }}
- {% endif %}
-</head>
-<body>
-{{ body | safe }}
-{% for diagram in diagrams %}
- <div class="railroad-group">
- <h1 class="railroad-heading">{{ diagram.title }}</h1>
- <div class="railroad-description">{{ diagram.text }}</div>
- <div class="railroad-svg">
- {{ diagram.svg }}
- </div>
- </div>
-{% endfor %}
-</body>
-</html>
-"""
-
-template = Template(jinja2_template_source)
+with open(resource_filename(__name__, "template.jinja2"), encoding="utf-8") as fp:
+ template = Template(fp.read())
# Note: ideally this would be a dataclass, but we're supporting Python 3.5+ so we can't do this yet
NamedDiagram = NamedTuple(
"NamedDiagram",
- [("name", str), ("diagram", typing.Optional[railroad.DiagramItem]), ("index", int)],
+ [("name", str), ("diagram", Optional[railroad.DiagramItem]), ("index", int)],
)
"""
A simple structure for associating a name with a railroad diagram
"""
data = []
for diagram in diagrams:
- if diagram.diagram is None:
- continue
io = StringIO()
diagram.diagram.writeSvg(io.write)
title = diagram.name
def to_railroad(
element: pyparsing.ParserElement,
- diagram_kwargs: typing.Optional[dict] = None,
+ diagram_kwargs: Optional[dict] = None,
vertical: int = 3,
show_results_names: bool = False,
show_groups: bool = False,
parent: EditablePartial,
number: int,
name: str = None,
- parent_index: typing.Optional[int] = None,
+ parent_index: Optional[int] = None,
):
#: The pyparsing element that this represents
self.element: pyparsing.ParserElement = element
#: The name of the element
- self.name: typing.Optional[str] = name
+ self.name: str = name
#: The output Railroad element in an unconverted state
self.converted: EditablePartial = converted
#: The parent Railroad element, which we store so that we can extract this if it's duplicated
#: The order in which we found this element, used for sorting diagrams if this is extracted into a diagram
self.number: int = number
#: The index of this inside its parent
- self.parent_index: typing.Optional[int] = parent_index
+ self.parent_index: Optional[int] = parent_index
#: If true, we should extract this out into a subdiagram
self.extract: bool = False
#: If true, all of this element's children have been filled out
Stores some state that persists between recursions into the element tree
"""
- def __init__(self, diagram_kwargs: typing.Optional[dict] = None):
+ def __init__(self, diagram_kwargs: Optional[dict] = None):
#: A dictionary mapping ParserElements to state relating to them
self._element_diagram_states: Dict[int, ElementState] = {}
#: A dictionary mapping ParserElement IDs to subdiagrams generated from them
def _inner(
element: pyparsing.ParserElement,
- parent: typing.Optional[EditablePartial],
+ parent: Optional[EditablePartial],
lookup: ConverterState = None,
vertical: int = None,
index: int = 0,
name_hint: str = None,
show_results_names: bool = False,
show_groups: bool = False,
- ) -> typing.Optional[EditablePartial]:
+ ) -> Optional[EditablePartial]:
ret = fn(
element,
@_apply_diagram_item_enhancements
def _to_diagram_element(
element: pyparsing.ParserElement,
- parent: typing.Optional[EditablePartial],
+ parent: Optional[EditablePartial],
lookup: ConverterState = None,
vertical: int = None,
index: int = 0,
name_hint: str = None,
show_results_names: bool = False,
show_groups: bool = False,
-) -> typing.Optional[EditablePartial]:
+) -> Optional[EditablePartial]:
"""
Recursively converts a PyParsing Element to a railroad Element
:param lookup: The shared converter state that keeps track of useful things
else:
ret = EditablePartial.from_call(railroad.Group, label="", item="")
elif isinstance(element, pyparsing.TokenConverter):
- ret = EditablePartial.from_call(
- AnnotatedItem, label=type(element).__name__.lower(), item=""
- )
+ ret = EditablePartial.from_call(AnnotatedItem, label=type(element).__name__.lower(), item="")
elif isinstance(element, pyparsing.Opt):
ret = EditablePartial.from_call(railroad.Optional, item="")
elif isinstance(element, pyparsing.OneOrMore):
--- /dev/null
+<!DOCTYPE html>
+<html>
+<head>
+ {% if not head %}
+ <style type="text/css">
+ .railroad-heading {
+ font-family: monospace;
+ }
+ </style>
+ {% else %}
+ {{ head | safe }}
+ {% endif %}
+</head>
+<body>
+{{ body | safe }}
+{% for diagram in diagrams %}
+ <div class="railroad-group">
+ <h1 class="railroad-heading">{{ diagram.title }}</h1>
+ <div class="railroad-description">{{ diagram.text }}</div>
+ <div class="railroad-svg">
+ {{ diagram.svg }}
+ </div>
+ </div>
+{% endfor %}
+</body>
+</html>
import re
import sys
-import typing
+from typing import Optional
from .util import col, line, lineno, _collapse_string_to_ranges
from .unicode import pyparsing_unicode as ppu
self,
pstr: str,
loc: int = 0,
- msg: typing.Optional[str] = None,
+ msg: Optional[str] = None,
elem=None,
):
self.loc = loc
# helpers.py
import html.entities
import re
-import typing
from . import __diag__
from .core import *
expr: Union[str, ParserElement],
delim: Union[str, ParserElement] = ",",
combine: bool = False,
- min: typing.Optional[int] = None,
- max: typing.Optional[int] = None,
+ min: OptionalType[int] = None,
+ max: OptionalType[int] = None,
*,
allow_trailing_delim: bool = False,
) -> ParserElement:
def counted_array(
expr: ParserElement,
- int_expr: typing.Optional[ParserElement] = None,
+ int_expr: OptionalType[ParserElement] = None,
*,
- intExpr: typing.Optional[ParserElement] = None,
+ intExpr: OptionalType[ParserElement] = None,
) -> ParserElement:
"""Helper to define a counted list of expressions.
def one_of(
- strs: Union[typing.Iterable[str], str],
+ strs: Union[IterableType[str], str],
caseless: bool = False,
use_regex: bool = True,
as_keyword: bool = False,
text = "shape: SQUARE posn: upper left color: light blue texture: burlap"
attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join))
- print(attr_expr[1, ...].parse_string(text).dump())
+ print(OneOrMore(attr_expr).parse_string(text).dump())
attr_label = label
attr_value = Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)
def nested_expr(
opener: Union[str, ParserElement] = "(",
closer: Union[str, ParserElement] = ")",
- content: typing.Optional[ParserElement] = None,
+ content: OptionalType[ParserElement] = None,
ignore_expr: ParserElement = quoted_string(),
*,
ignoreExpr: ParserElement = quoted_string(),
return _makeTags(tag_str, True)
-any_open_tag: ParserElement
-any_close_tag: ParserElement
any_open_tag, any_close_tag = make_html_tags(
Word(alphas, alphanums + "_:").set_name("any tag")
)
InfixNotationOperatorArgType,
int,
OpAssoc,
- typing.Optional[ParseAction],
+ OptionalType[ParseAction],
],
Tuple[
InfixNotationOperatorArgType,
if rightLeftAssoc not in (OpAssoc.LEFT, OpAssoc.RIGHT):
raise ValueError("operator must indicate right or left associativity")
- thisExpr: Forward = Forward().set_name(term_name)
+ thisExpr = Forward().set_name(term_name)
if rightLeftAssoc is OpAssoc.LEFT:
if arity == 1:
matchExpr = _FB(lastExpr + opExpr) + Group(lastExpr + opExpr[1, ...])
assignment = Group(identifier + "=" + rvalue)
stmt << (funcDef | assignment | identifier)
- module_body = stmt[1, ...]
+ module_body = OneOrMore(stmt)
parseTree = module_body.parseString(data)
parseTree.pprint()
# build list of built-in expressions, for future reference if a global default value
# gets updated
-_builtin_exprs: List[ParserElement] = [
- v for v in vars().values() if isinstance(v, ParserElement)
-]
+_builtin_exprs = [v for v in vars().values() if isinstance(v, ParserElement)]
# pre-PEP8 compatible names
print(numlist.parse_string("0 123 321")) # -> ['123', '321']
label = Word(alphas)
- patt = label("LABEL") + Word(nums)[1, ...]
+ patt = label("LABEL") + OneOrMore(Word(nums))
print(patt.parse_string("AAB 123 321").dump())
# Use pop() in a parse action to remove named result (note that corresponding value is not
Example::
- patt = Word(alphas)[1, ...]
+ patt = OneOrMore(Word(alphas))
# use a parse action to append the reverse of the matched strings, to make a palindrome
def make_palindrome(tokens):
Example::
- patt = Word(alphas)[1, ...]
+ patt = OneOrMore(Word(alphas))
result = patt.parse_string("sldkj lsdkj sldkj")
# even though the result prints in string-like form, it is actually a pyparsing ParseResults
print(type(result), result) # -> <class 'pyparsing.ParseResults'> ['sldkj', 'lsdkj', 'sldkj']
user_data = (Group(house_number_expr)("house_number")
| Group(ssn_expr)("ssn")
| Group(integer)("age"))
- user_info = user_data[1, ...]
+ user_info = OneOrMore(user_data)
result = user_info.parse_string("22 111-22-3333 #221B")
for item in result:
# testing.py
from contextlib import contextmanager
-import typing
+from typing import Optional
from .core import (
ParserElement,
@staticmethod
def with_line_numbers(
s: str,
- start_line: typing.Optional[int] = None,
- end_line: typing.Optional[int] = None,
+ start_line: Optional[int] = None,
+ end_line: Optional[int] = None,
expand_tabs: bool = True,
eol_mark: str = "|",
- mark_spaces: typing.Optional[str] = None,
- mark_control: typing.Optional[str] = None,
+ mark_spaces: Optional[str] = None,
+ mark_control: Optional[str] = None,
) -> str:
"""
Helpful method for debugging a parser - prints a string with line and column numbers.
A namespace class for defining common language unicode_sets.
"""
- # fmt: off
-
- # define ranges in language character sets
- _ranges: UnicodeRangeList = [
- (0x0020, sys.maxunicode),
- ]
-
- class BasicMultilingualPlane(unicode_set):
- "Unicode set for the Basic Multilingual Plane"
- _ranges: UnicodeRangeList = [
- (0x0020, 0xFFFF),
- ]
+ _ranges: UnicodeRangeList = [(32, sys.maxunicode)]
class Latin1(unicode_set):
"Unicode set for Latin-1 Unicode Character Range"
class CJK(Chinese, Japanese, Hangul):
"Unicode set for combined Chinese, Japanese, and Korean (CJK) Unicode Character Range"
+ pass
class Thai(unicode_set):
"Unicode set for Thai Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0E01, 0x0E3A),
- (0x0E3F, 0x0E5B)
- ]
+ _ranges: UnicodeRangeList = [(0x0E01, 0x0E3A), (0x0E3F, 0x0E5B)]
class Arabic(unicode_set):
"Unicode set for Arabic Unicode Character Range"
class Devanagari(unicode_set):
"Unicode set for Devanagari Unicode Character Range"
- _ranges: UnicodeRangeList = [
- (0x0900, 0x097F),
- (0xA8E0, 0xA8FF)
- ]
-
- # fmt: on
+ _ranges: UnicodeRangeList = [(0x0900, 0x097F), (0xA8E0, 0xA8FF)]
pyparsing_unicode.Japanese._ranges = (
+ pyparsing_unicode.Japanese.Katakana._ranges
)
-pyparsing_unicode.BMP = pyparsing_unicode.BasicMultilingualPlane
-
-# add language identifiers using language Unicode
+# define ranges in language character sets
pyparsing_unicode.العربية = pyparsing_unicode.Arabic
pyparsing_unicode.中文 = pyparsing_unicode.Chinese
pyparsing_unicode.кириллица = pyparsing_unicode.Cyrillic
packaging==21.3
-pyparsing==3.0.9
+pyparsing==3.0.8
ordered-set==3.1.1
more_itertools==8.8.0
jaraco.text==3.7.0
'build_wheel',
'build_sdist',
'get_requires_for_build_editable',
- 'prepare_metadata_for_build_editable',
'build_editable',
'__legacy__',
'SetupRequirementsError']
>>> list(fn(None))
[]
>>> list(fn({"editable-mode": "strict"}))
- ['--mode', 'strict']
+ ['--strict']
+ >>> list(fn({"editable-mode": "other"}))
+ Traceback (most recent call last):
+ ...
+ ValueError: Invalid value for `editable-mode`: 'other'. Try: 'strict'.
"""
cfg = config_settings or {}
- mode = cfg.get("editable-mode") or cfg.get("editable_mode")
- if not mode:
+ if "editable-mode" not in cfg and "editable_mode" not in cfg:
return
- yield from ["--mode", str(mode)]
+ mode = cfg.get("editable-mode") or cfg.get("editable_mode")
+ if mode != "strict":
+ msg = f"Invalid value for `editable-mode`: {mode!r}. Try: 'strict'."
+ raise ValueError(msg)
+ yield "--strict"
def _arbitrary_args(self, config_settings: _ConfigSettings) -> Iterator[str]:
"""
with _open_setup_script(__file__) as f:
code = f.read().replace(r'\r\n', r'\n')
- exec(code, locals())
+ exec(compile(code, __file__, 'exec'), locals())
def get_requires_for_build_wheel(self, config_settings=None):
return self._get_build_requires(config_settings, requirements=['wheel'])
import sys
if 'egg' not in bdist.format_commands:
- try:
- bdist.format_commands['egg'] = ('bdist_egg', "Python .egg file")
- except TypeError:
- # For backward compatibility with older distutils (stdlib)
- bdist.format_command['egg'] = ('bdist_egg', "Python .egg file")
- bdist.format_commands.append('egg')
+ bdist.format_command['egg'] = ('bdist_egg', "Python .egg file")
+ bdist.format_commands.append('egg')
del bdist, sys
import traceback
import warnings
from contextlib import suppress
-from enum import Enum
from inspect import cleandoc
from itertools import chain
from pathlib import Path
Optional,
Tuple,
TypeVar,
- Union,
+ Union
)
-from setuptools import Command, SetuptoolsDeprecationWarning, errors, namespaces
+from setuptools import Command, errors, namespaces
from setuptools.discovery import find_package_path
from setuptools.dist import Distribution
_logger = logging.getLogger(__name__)
-class _EditableMode(Enum):
- """
- Possible editable installation modes:
- `lenient` (new files automatically added to the package - DEFAULT);
- `strict` (requires a new installation when files are added/removed); or
- `compat` (attempts to emulate `python setup.py develop` - DEPRECATED).
- """
-
- STRICT = "strict"
- LENIENT = "lenient"
- COMPAT = "compat" # TODO: Remove `compat` after Dec/2022.
-
- @classmethod
- def convert(cls, mode: Optional[str]) -> "_EditableMode":
- if not mode:
- return _EditableMode.LENIENT # default
-
- _mode = mode.upper()
- if _mode not in _EditableMode.__members__:
- raise errors.OptionError(f"Invalid editable mode: {mode!r}. Try: 'strict'.")
-
- if _mode == "COMPAT":
- msg = """
- The 'compat' editable mode is transitional and will be removed
- in future versions of `setuptools`.
- Please adapt your code accordingly to use either the 'strict' or the
- 'lenient' modes.
-
- For more information, please check:
- https://setuptools.pypa.io/en/latest/userguide/development_mode.html
- """
- warnings.warn(msg, SetuptoolsDeprecationWarning)
-
- return _EditableMode[_mode]
-
-
_STRICT_WARNING = """
New or renamed files may not be automatically picked up without a new installation.
"""
-_LENIENT_WARNING = """
+_LAX_WARNING = """
Options like `package-data`, `include/exclude-package-data` or
`packages.find.exclude/include` may have no effect.
"""
class editable_wheel(Command):
- """Build 'editable' wheel for development.
- (This command is reserved for internal use of setuptools).
- """
+ """Build 'editable' wheel for development"""
description = "create a PEP 660 'editable' wheel"
user_options = [
("dist-dir=", "d", "directory to put final built distributions in"),
("dist-info-dir=", "I", "path to a pre-build .dist-info directory"),
- ("mode=", None, cleandoc(_EditableMode.__doc__ or "")),
+ ("strict", None, "perform an strict installation"),
]
+ boolean_options = ["strict"]
+
def initialize_options(self):
self.dist_dir = None
self.dist_info_dir = None
self.project_dir = None
- self.mode = None
+ self.strict = False
def finalize_options(self):
dist = self.distribution
"""Decides which strategy to use to implement an editable installation."""
build_name = f"__editable__.{name}-{tag}"
project_dir = Path(self.project_dir)
- mode = _EditableMode.convert(self.mode)
- if mode is _EditableMode.STRICT:
+ if self.strict or os.getenv("SETUPTOOLS_EDITABLE", None) == "strict":
auxiliary_dir = _empty_dir(Path(self.project_dir, "build", build_name))
return _LinkTree(self.distribution, name, auxiliary_dir, build_lib)
packages = _find_packages(self.distribution)
has_simple_layout = _simple_layout(packages, self.package_dir, project_dir)
- is_compat_mode = mode is _EditableMode.COMPAT
- if set(self.package_dir) == {""} and has_simple_layout or is_compat_mode:
+ if set(self.package_dir) == {""} and has_simple_layout:
# src-layout(ish) is relatively safe for a simple pth file
- src_dir = self.package_dir.get("", ".")
+ src_dir = self.package_dir[""]
return _StaticPth(self.distribution, name, [Path(project_dir, src_dir)])
# Use a MetaPathFinder to avoid adding accidental top-level packages/modules
Editable install will be performed using .pth file to extend `sys.path` with:
{self.path_entries!r}
"""
- _logger.warning(msg + _LENIENT_WARNING)
+ _logger.warning(msg + _LAX_WARNING)
return self
def __exit__(self, _exc_type, _exc_value, _traceback):
def __enter__(self):
msg = "Editable install will be performed using a meta path finder.\n"
- _logger.warning(msg + _LENIENT_WARNING)
+ _logger.warning(msg + _LAX_WARNING)
return self
def __exit__(self, _exc_type, _exc_value, _traceback):
self.target_dir = None
def finalize_options(self):
- log.warn(
- "Upload_docs command is deprecated. Use Read the Docs "
- "(https://readthedocs.org) instead.")
upload.finalize_options(self)
if self.upload_dir is None:
if self.has_sphinx():
else:
self.ensure_dirname('upload_dir')
self.target_dir = self.upload_dir
+ if 'pypi.python.org' in self.repository:
+ log.warn("Upload_docs command is deprecated for PyPi. Use RTD instead.")
self.announce('Using upload directory %s' % self.target_dir)
def create_zipfile(self, filename):
to access a backward compatible API, but this module is provisional
and might be removed in the future.
"""
- warnings.warn(dedent(msg), SetuptoolsDeprecationWarning, stacklevel=2)
+ warnings.warn(dedent(msg), SetuptoolsDeprecationWarning)
return fn(*args, **kwargs)
return cast(Fn, _wrapper)
try:
return validator.validate(config)
except validator.ValidationError as ex:
- summary = f"configuration error: {ex.summary}"
- if ex.name.strip("`") != "project":
- # Probably it is just a field missing/misnamed, not worthy the verbosity...
- _logger.debug(summary)
- _logger.debug(ex.details)
-
- error = f"invalid pyproject.toml config: {ex.name}."
- raise ValueError(f"{error}\n{summary}") from None
+ _logger.error(f"configuration error: {ex.summary}") # type: ignore
+ _logger.debug(ex.details) # type: ignore
+ error = ValueError(f"invalid pyproject.toml config: {ex.name}") # type: ignore
+ raise error from None
def apply_configuration(
"""
import os
-import contextlib
-import functools
import warnings
+import functools
from collections import defaultdict
from functools import partial
from functools import wraps
Optional, Tuple, TypeVar, Union)
from distutils.errors import DistutilsOptionError, DistutilsFileError
-from setuptools.extern.packaging.requirements import Requirement, InvalidRequirement
from setuptools.extern.packaging.version import Version, InvalidVersion
from setuptools.extern.packaging.specifiers import SpecifierSet
from setuptools._deprecation_warning import SetuptoolsDeprecationWarning
return meta, options
-def _warn_accidental_env_marker_misconfig(label: str, orig_value: str, parsed: list):
- """Because users sometimes misinterpret this configuration:
-
- [options.extras_require]
- foo = bar;python_version<"4"
-
- It looks like one requirement with an environment marker
- but because there is no newline, it's parsed as two requirements
- with a semicolon as separator.
-
- Therefore, if:
- * input string does not contain a newline AND
- * parsed result contains two requirements AND
- * parsing of the two parts from the result ("<first>;<second>")
- leads in a valid Requirement with a valid marker
- a UserWarning is shown to inform the user about the possible problem.
- """
- if "\n" in orig_value or len(parsed) != 2:
- return
-
- with contextlib.suppress(InvalidRequirement):
- original_requirements_str = ";".join(parsed)
- req = Requirement(original_requirements_str)
- if req.marker is not None:
- msg = (
- f"One of the parsed requirements in `{label}` "
- f"looks like a valid environment marker: '{parsed[1]}'\n"
- "Make sure that the config is correct and check "
- "https://setuptools.pypa.io/en/latest/userguide/declarative_config.html#opt-2" # noqa: E501
- )
- warnings.warn(msg, UserWarning)
-
-
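
The docstring of the removed helper above describes the trap it guarded against: a requirement written on one line with an environment marker is split at the semicolon into two entries. A short sketch of the two readings, not part of the patch and assuming the standalone ``packaging`` distribution is installed (setuptools itself uses its vendored copy)::

    from packaging.requirements import Requirement

    value = 'bar;python_version<"4"'

    # How a semicolon-separated list reads it: two entries, the second of
    # which happens to be a bare environment marker.
    print([part.strip() for part in value.split(';')])   # -> ['bar', 'python_version<"4"']

    # What the author most likely meant: one requirement carrying a marker.
    req = Requirement('bar; python_version<"4"')
    print(req.name, req.marker)                           # -> bar python_version < "4"
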
class ConfigHandler(Generic[Target]):
"""Handles metadata supplied in configuration files."""
return parse
@classmethod
- def _parse_section_to_dict_with_key(cls, section_options, values_parser):
+ def _parse_section_to_dict(cls, section_options, values_parser=None):
"""Parses section options into a dictionary.
- Applies a given parser to each option in a section.
+ Optionally applies a given parser to values.
:param dict section_options:
- :param callable values_parser: function with 2 args corresponding to key, value
+ :param callable values_parser:
:rtype: dict
"""
value = {}
+ values_parser = values_parser or (lambda val: val)
for key, (_, val) in section_options.items():
- value[key] = values_parser(key, val)
+ value[key] = values_parser(val)
return value
- @classmethod
- def _parse_section_to_dict(cls, section_options, values_parser=None):
- """Parses section options into a dictionary.
-
- Optionally applies a given parser to each value.
-
- :param dict section_options:
- :param callable values_parser: function with 1 arg corresponding to option value
- :rtype: dict
- """
- parser = (lambda _, v: values_parser(v)) if values_parser else (lambda _, v: v)
- return cls._parse_section_to_dict_with_key(section_options, parser)
-
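
For reference, the restored ``_parse_section_to_dict`` above runs every option value of a config section through an optional parser. A minimal sketch of that mapping, not part of the patch; the shape of ``section_options`` (2-tuples whose second element is the raw value) and the sample options are illustrative::

    section_options = {
        'docs': (None, 'sphinx\nfuro'),
        'tests': (None, 'pytest'),
    }

    def values_parser(val):
        # Split a multi-line option value into a list, dropping blank lines.
        return [line for line in val.splitlines() if line]

    parsed = {key: values_parser(val) for key, (_, val) in section_options.items()}
    print(parsed)  # -> {'docs': ['sphinx', 'furo'], 'tests': ['pytest']}
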
def parse_section(self, section_options):
"""Parses configuration file section.
:param dict section_options:
"""
for (name, (_, value)) in section_options.items():
- with contextlib.suppress(KeyError):
- # Keep silent for a new option may appear anytime.
+ try:
self[name] = value
+ except KeyError:
+ pass # Keep silent for a new option may appear anytime.
+
def parse(self):
"""Parses configuration file items from one
or more related sections.
def _parse_file_in_root(self, value):
return self._parse_file(value, root_dir=self.root_dir)
- def _parse_requirements_list(self, label: str, value: str):
+ def _parse_requirements_list(self, value):
# Parse a requirements list, either by reading in a `file:`, or a list.
parsed = self._parse_list_semicolon(self._parse_file_in_root(value))
- _warn_accidental_env_marker_misconfig(label, value, parsed)
# Filter it to only include lines that are not comments. `parse_list`
# will have stripped each line and filtered out empties.
return [line for line in parsed if not line.startswith("#")]
"consider using implicit namespaces instead (PEP 420).",
SetuptoolsDeprecationWarning,
),
- 'install_requires': partial(
- self._parse_requirements_list, "install_requires"
- ),
+ 'install_requires': self._parse_requirements_list,
'setup_requires': self._parse_list_semicolon,
'tests_require': self._parse_list_semicolon,
'packages': self._parse_packages,
:param dict section_options:
"""
- parsed = self._parse_section_to_dict_with_key(
+ parsed = self._parse_section_to_dict(
section_options,
- lambda k, v: self._parse_requirements_list(f"extras_require[{k}]", v)
+ self._parse_requirements_list,
)
-
self['extras_require'] = parsed
def parse_section_data_files(self, section_options):
:keyword list[str] runtime_library_dirs:
list of directories to search for C/C++ libraries at run time
- (for shared extensions, this is when the extension is loaded).
- Setting this will cause an exception during build on Windows
- platforms.
+ (for shared extensions, this is when the extension is loaded)
:keyword list[str] extra_objects:
list of extra files to link with (eg. object files not implied
:keyword bool py_limited_api:
opt-in flag for the usage of :doc:`Python's limited API <python:c-api/stable>`.
-
- :raises setuptools.errors.PlatformError: if 'runtime_library_dirs' is
- specified on Windows. (since v63)
"""
def __init__(self, name, sources, *args, **kw):
+import logging
import re
from configparser import ConfigParser
from inspect import cleandoc
@pytest.mark.parametrize(
- "example, error_msg",
+ "example, error_msg, value_shown_in_debug",
[
(
"""
version = "1.2"
requires = ['pywin32; platform_system=="Windows"' ]
""",
- "configuration error: .project. must not contain ..requires.. properties",
+ "configuration error: `project` must not contain {'requires'} properties",
+ '"requires": ["pywin32; platform_system==\\"Windows\\""]',
),
],
)
-def test_invalid_example(tmp_path, example, error_msg):
+def test_invalid_example(tmp_path, caplog, example, error_msg, value_shown_in_debug):
+ caplog.set_level(logging.DEBUG)
pyproject = tmp_path / "pyproject.toml"
pyproject.write_text(cleandoc(example))
- pattern = re.compile(f"invalid pyproject.toml.*{error_msg}.*", re.M | re.S)
- with pytest.raises(ValueError, match=pattern):
+ caplog.clear()
+ with pytest.raises(ValueError, match="invalid pyproject.toml"):
read_configuration(pyproject)
+ # Make sure the logs give guidance to the user
+ error_log = caplog.record_tuples[0]
+ assert error_log[1] == logging.ERROR
+ assert error_msg in error_log[2]
+
+ debug_log = caplog.record_tuples[1]
+ assert debug_log[1] == logging.DEBUG
+ debug_msg = "".join(line.strip() for line in debug_log[2].splitlines())
+ assert value_shown_in_debug in debug_msg
+
@pytest.mark.parametrize("config", ("", "[tool.something]\nvalue = 42"))
def test_empty(tmp_path, config):
}
assert dist.metadata.provides_extras == set(['pdf', 'rest'])
- @pytest.mark.parametrize(
- "config",
- [
- "[options.extras_require]\nfoo = bar;python_version<'3'",
- "[options.extras_require]\nfoo = bar;os_name=='linux'",
- "[options.extras_require]\nfoo = bar;python_version<'3'\n",
- "[options.extras_require]\nfoo = bar;os_name=='linux'\n",
- "[options]\ninstall_requires = bar;python_version<'3'",
- "[options]\ninstall_requires = bar;os_name=='linux'",
- "[options]\ninstall_requires = bar;python_version<'3'\n",
- "[options]\ninstall_requires = bar;os_name=='linux'\n",
- ],
- )
- def test_warn_accidental_env_marker_misconfig(self, config, tmpdir):
- fake_env(tmpdir, config)
- match = (
- r"One of the parsed requirements in `(install_requires|extras_require.+)` "
- "looks like a valid environment marker.*"
- )
- with pytest.warns(UserWarning, match=match):
- with get_dist(tmpdir) as _:
- pass
-
- @pytest.mark.parametrize(
- "config",
- [
- "[options.extras_require]\nfoo =\n bar;python_version<'3'",
- "[options.extras_require]\nfoo = bar;baz\nboo = xxx;yyy",
- "[options.extras_require]\nfoo =\n bar;python_version<'3'\n",
- "[options.extras_require]\nfoo = bar;baz\nboo = xxx;yyy\n",
- "[options.extras_require]\nfoo =\n bar\n python_version<'3'\n",
- "[options]\ninstall_requires =\n bar;python_version<'3'",
- "[options]\ninstall_requires = bar;baz\nboo = xxx;yyy",
- "[options]\ninstall_requires =\n bar;python_version<'3'\n",
- "[options]\ninstall_requires = bar;baz\nboo = xxx;yyy\n",
- "[options]\ninstall_requires =\n bar\n python_version<'3'\n",
- ],
- )
- def test_nowarn_accidental_env_marker_misconfig(self, config, tmpdir, recwarn):
- fake_env(tmpdir, config)
- with get_dist(tmpdir) as _:
- pass
- # The examples are valid, no warnings shown
- assert not any(w.category == UserWarning for w in recwarn)
-
def test_dash_preserved_extras_require(self, tmpdir):
fake_env(tmpdir, '[options.extras_require]\n' 'foo-a = foo\n' 'foo_b = test\n')
pytestmark = pytest.mark.integration
-LATEST, = Enum("v", "LATEST")
+LATEST, = list(Enum("v", "LATEST"))
"""Default version to be checked"""
# There are positive and negative aspects of checking the latest version of the
# packages.
# means it will download the previous stable version of setuptools.
# `pip` flags can avoid that (the version of setuptools under test
# should be the one to be used)
-INSTALL_OPTIONS = (
+SDIST_OPTIONS = (
"--ignore-installed",
"--no-build-isolation",
- # Omit "--no-binary :all:" the sdist is supplied directly.
- # Allows dependencies as wheels.
+ # We don't need "--no-binary :all:" since we specify the path to the sdist.
+ # It also helps with performance, since dependencies can come from wheels.
)
# The downside of `--no-build-isolation` is that pip will not download build
# dependencies. The test script will have to also handle that.
# Use a virtualenv to simulate PEP 517 isolation
# but install fresh setuptools wheel to ensure the version under development
run([*venv_pip, "install", "-I", setuptools_wheel])
- run([*venv_pip, "install", *INSTALL_OPTIONS, sdist])
+ run([*venv_pip, "install", *SDIST_OPTIONS, sdist])
# Execute a simple script to make sure the package was installed correctly
script = f"import {package}; print(getattr({package}, '__version__', 0))"
raise ValueError(f"Release for {package} {version} was yanked")
version = metadata["info"]["version"]
- release = metadata["releases"][version] if version is LATEST else metadata["urls"]
- sdist, = filter(lambda d: d["packagetype"] == "sdist", release)
- return sdist
+ release = metadata["releases"][version]
+ dists = [d for d in release if d["packagetype"] == "sdist"]
+ if len(dists) == 0:
+ raise ValueError(f"No sdist found for {package} {version}")
+
+ for dist in dists:
+ if dist["filename"].endswith(".tar.gz"):
+ return dist
+
+    # Not all packages publish a tar.gz sdist
+ return dist
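The selection above relies on the general shape of the package index's JSON metadata (apparently PyPI's ``/pypi/<package>/json`` endpoint); roughly, with fields trimmed and values invented:

    metadata = {
        "info": {"version": "1.0"},
        "releases": {
            "1.0": [
                {"packagetype": "bdist_wheel", "filename": "pkg-1.0-py3-none-any.whl"},
                {"packagetype": "sdist", "filename": "pkg-1.0.tar.gz"},
            ]
        },
    }
    release = metadata["releases"][metadata["info"]["version"]]
    dists = [d for d in release if d["packagetype"] == "sdist"]
    assert dists[0]["filename"].endswith(".tar.gz")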
def download(url, dest, md5_digest):
def build_deps(package, sdist_file):
"""Find out what are the build dependencies for a package.
- "Manually" install them, since pip will not install build
+ We need to "manually" install them, since pip will not install build
deps with `--no-build-isolation`.
"""
import tomli as toml
# testenv without tomli
archive = Archive(sdist_file)
- info = toml.loads(_read_pyproject(archive))
+ pyproject = _read_pyproject(archive)
+
+ info = toml.loads(pyproject)
deps = info.get("build-system", {}).get("requires", [])
deps += EXTRA_BUILD_DEPS.get(package, [])
# Remove setuptools from requirements (and deduplicate)
def _read_pyproject(archive):
- contents = (
- archive.get_content(member)
- for member in archive
- if os.path.basename(archive.get_name(member)) == "pyproject.toml"
- )
- return next(contents, "")
+ for member in archive:
+ if os.path.basename(archive.get_name(member)) == "pyproject.toml":
+ return archive.get_content(member)
+ return ""
import tarfile
import importlib
import contextlib
+import subprocess
from concurrent import futures
import re
from zipfile import ZipFile
build_backend = self.get_build_backend()
assert not Path("build").exists()
- cfg = {"--global-option": ["--mode", "strict"]}
+ cfg = {"--global-option": "--strict"}
build_backend.prepare_metadata_for_build_editable("_meta", cfg)
build_backend.build_editable("temp", cfg, "_meta")
def test_editable_without_config_settings(self, tmpdir_cwd):
"""
- Sanity check to ensure tests with --mode=strict are different from the ones
- without --mode.
+ Sanity check to ensure tests with --strict are different from the ones
+ without --strict.
- --mode=strict should create a local directory with a package tree.
+ --strict should create a local directory with a package tree.
The directory should not get created otherwise.
"""
path.build(self._simple_pyproject_example)
@pytest.mark.parametrize(
"config_settings", [
- {"--build-option": ["--mode", "strict"]},
+ {"--build-option": "--strict"},
{"editable-mode": "strict"},
]
)
build_backend.build_sdist("temp")
-def test_legacy_editable_install(venv, tmpdir, tmpdir_cwd):
+def test_legacy_editable_install(tmpdir, tmpdir_cwd):
pyproject = """
[build-system]
requires = ["setuptools"]
path.build({"pyproject.toml": DALS(pyproject), "mymod.py": ""})
# First: sanity check
- cmd = ["pip", "install", "--no-build-isolation", "-e", "."]
- output = str(venv.run(cmd, cwd=tmpdir), "utf-8").lower()
+ cmd = [sys.executable, "-m", "pip", "install", "--no-build-isolation", "-e", "."]
+ output = str(subprocess.check_output(cmd, cwd=tmpdir), "utf-8").lower()
assert "running setup.py develop for myproj" not in output
assert "created wheel for myproj" in output
# Then: real test
env = {**os.environ, "SETUPTOOLS_ENABLE_FEATURES": "legacy-editable"}
- cmd = ["pip", "install", "--no-build-isolation", "-e", "."]
- output = str(venv.run(cmd, cwd=tmpdir, env=env), "utf-8").lower()
+ cmd = [sys.executable, "-m", "pip", "install", "--no-build-isolation", "-e", "."]
+ output = str(subprocess.check_output(cmd, cwd=tmpdir, env=env), "utf-8").lower()
assert "running setup.py develop for myproj" in output
build_py.editable_mode = True
build_py.ensure_finalized()
build_lib = build_py.build_lib.replace(os.sep, "/")
- outputs = {x.replace(os.sep, "/") for x in build_py.get_outputs()}
- assert outputs == {
+ outputs = [x.replace(os.sep, "/") for x in build_py.get_outputs()]
+ assert outputs == [
f"{build_lib}/mypkg/__init__.py",
f"{build_lib}/mypkg/resource_file.txt",
f"{build_lib}/mypkg/sub1/__init__.py",
f"{build_lib}/mypkg/sub2/mod2.py",
f"{build_lib}/mypkg/sub2/nested/__init__.py",
f"{build_lib}/mypkg/sub2/nested/mod3.py",
- }
+ ]
mapping = {
k.replace(os.sep, "/"): v.replace(os.sep, "/")
for k, v in build_py.get_output_mapping().items()
from setuptools.dist import Distribution
-@pytest.fixture(params=["strict", "lenient"])
-def editable_opts(request):
+@pytest.fixture(params=["strict", "lax"])
+def editable_mode(request, monkeypatch):
if request.param == "strict":
- return ["--config-settings", "editable-mode=strict"]
- return []
+ monkeypatch.setenv("SETUPTOOLS_EDITABLE", "strict")
+ yield
EXAMPLE = {
EXAMPLE, # No setup.py script
]
)
-def test_editable_with_pyproject(tmp_path, venv, files, editable_opts):
+def test_editable_with_pyproject(tmp_path, venv, files, editable_mode):
project = tmp_path / "mypkg"
project.mkdir()
jaraco.path.build(files, prefix=project)
cmd = [venv.exe(), "-m", "pip", "install",
"--no-build-isolation", # required to force current version of setuptools
- "-e", str(project), *editable_opts]
+ "-e", str(project)]
print(str(subprocess.check_output(cmd), "utf-8"))
cmd = [venv.exe(), "-m", "mypkg"]
assert subprocess.check_output(cmd).strip() == b"3.14159.post0 foobar 42"
-def test_editable_with_flat_layout(tmp_path, venv, editable_opts):
+def test_editable_with_flat_layout(tmp_path, venv, editable_mode):
files = {
"mypkg": {
"pyproject.toml": dedent("""\
cmd = [venv.exe(), "-m", "pip", "install",
"--no-build-isolation", # required to force current version of setuptools
- "-e", str(project), *editable_opts]
+ "-e", str(project)]
print(str(subprocess.check_output(cmd), "utf-8"))
cmd = [venv.exe(), "-c", "import pkg, mod; print(pkg.a, mod.b)"]
assert subprocess.check_output(cmd).strip() == b"4 2"
class TestLegacyNamespaces:
"""Ported from test_develop"""
- def test_namespace_package_importable(self, venv, tmp_path, editable_opts):
+ def test_namespace_package_importable(self, venv, tmp_path, editable_mode):
"""
Installing two packages sharing the same namespace, one installed
naturally using pip or `--single-version-externally-managed`
pkg_A = namespaces.build_namespace_package(tmp_path, 'myns.pkgA')
pkg_B = namespaces.build_namespace_package(tmp_path, 'myns.pkgB')
# use pip to install to the target directory
- opts = editable_opts[:]
- opts.append("--no-build-isolation") # force current version of setuptools
+ opts = ["--no-build-isolation"] # force current version of setuptools
venv.run(["python", "-m", "pip", "install", str(pkg_A), *opts])
venv.run(["python", "-m", "pip", "install", "-e", str(pkg_B), *opts])
venv.run(["python", "-c", "import myns.pkgA; import myns.pkgB"])
class TestPep420Namespaces:
- def test_namespace_package_importable(self, venv, tmp_path, editable_opts):
+ def test_namespace_package_importable(self, venv, tmp_path, editable_mode):
"""
Installing two packages sharing the same namespace, one installed
normally using pip and the other installed in editable mode
pkg_A = namespaces.build_pep420_namespace_package(tmp_path, 'myns.n.pkgA')
pkg_B = namespaces.build_pep420_namespace_package(tmp_path, 'myns.n.pkgB')
# use pip to install to the target directory
- opts = editable_opts[:]
- opts.append("--no-build-isolation") # force current version of setuptools
+ opts = ["--no-build-isolation"] # force current version of setuptools
venv.run(["python", "-m", "pip", "install", str(pkg_A), *opts])
venv.run(["python", "-m", "pip", "install", "-e", str(pkg_B), *opts])
venv.run(["python", "-c", "import myns.n.pkgA; import myns.n.pkgB"])
- def test_namespace_created_via_package_dir(self, venv, tmp_path, editable_opts):
+ def test_namespace_created_via_package_dir(self, venv, tmp_path, editable_mode):
"""Currently users can create a namespace by tweaking `package_dir`"""
files = {
"pkgA": {
pkg_C = namespaces.build_pep420_namespace_package(tmp_path, 'myns.n.pkgC')
# use pip to install to the target directory
- opts = editable_opts[:]
- opts.append("--no-build-isolation") # force current version of setuptools
+ opts = ["--no-build-isolation"] # force current version of setuptools
venv.run(["python", "-m", "pip", "install", str(pkg_A), *opts])
venv.run(["python", "-m", "pip", "install", "-e", str(pkg_B), *opts])
venv.run(["python", "-m", "pip", "install", "-e", str(pkg_C), *opts])
platform.python_implementation() == 'PyPy',
reason="Workaround fails on PyPy (why?)",
)
-def test_editable_with_prefix(tmp_path, sample_project, editable_opts):
+@pytest.mark.parametrize("mode", ("strict", "lax"))
+def test_editable_with_prefix(tmp_path, sample_project, mode):
"""
Editable install to a prefix should be discoverable.
"""
# install workaround
pip_run.launch.inject_sitecustomize(str(site_packages))
- env = dict(os.environ, PYTHONPATH=str(site_packages))
+ env = dict(os.environ, PYTHONPATH=str(site_packages), SETUPTOOLS_EDITABLE=mode)
cmd = [
sys.executable,
'-m',
'--prefix',
str(prefix),
'--no-build-isolation',
- *editable_opts,
]
subprocess.check_call(cmd, env=env)
}
@pytest.mark.parametrize("layout", EXAMPLES.keys())
- def test_editable_install(self, tmp_path, venv, layout, editable_opts):
- opts = editable_opts
- project = install_project("mypkg", venv, tmp_path, self.EXAMPLES[layout], *opts)
+ def test_editable_install(self, tmp_path, venv, layout, editable_mode):
+ project = install_project("mypkg", venv, tmp_path, self.EXAMPLES[layout])
# Ensure stray files are not importable
cmd_import_error = """\
assert next(aux.glob("**/resource.not_in_manifest"), None) is None
- def test_strict_install(self, tmp_path, venv):
- opts = ["--config-settings", "editable-mode=strict"]
- install_project("mypkg", venv, tmp_path, self.FILES, *opts)
+ def test_strict_install(self, tmp_path, venv, monkeypatch):
+ monkeypatch.setenv("SETUPTOOLS_EDITABLE", "strict")
+ install_project("mypkg", venv, tmp_path, self.FILES)
out = venv.run(["python", "-c", "import mypkg.mod1; print(mypkg.mod1.var)"])
assert b"42" in out
assert b"resource.not_in_manifest" in out
-@pytest.mark.filterwarnings("ignore:.*compat.*:setuptools.SetuptoolsDeprecationWarning")
-def test_compat_install(tmp_path, venv):
- # TODO: Remove `compat` after Dec/2022.
- opts = ["--config-settings", "editable-mode=compat"]
- files = TestOverallBehaviour.EXAMPLES["custom-layout"]
- install_project("mypkg", venv, tmp_path, files, *opts)
-
- out = venv.run(["python", "-c", "import mypkg.mod1; print(mypkg.mod1.var)"])
- assert b"42" in out
-
- expected_path = comparable_path(str(tmp_path))
-
- # Compatible behaviour will make spurious modules and excluded
- # files importable directly from the original path
- for cmd in (
- "import otherfile; print(otherfile)",
- "import other; print(other)",
- "import mypkg; print(mypkg)",
- ):
- out = comparable_path(str(venv.run(["python", "-c", cmd]), "utf-8"))
- assert expected_path in out
-
- # Compatible behaviour will not consider custom mappings
- cmd = """\
- try:
- from mypkg import subpackage;
- except ImportError as ex:
- print(ex)
- """
- out = str(venv.run(["python", "-c", dedent(cmd)]), "utf-8")
- assert "cannot import name 'subpackage'" in out
-
-
-def install_project(name, venv, tmp_path, files, *opts):
+def install_project(name, venv, tmp_path, files):
project = tmp_path / name
project.mkdir()
jaraco.path.build(files, prefix=project)
- opts = [*opts, "--no-build-isolation"] # force current version of setuptools
+ opts = ["--no-build-isolation"] # force current version of setuptools
venv.run(["python", "-m", "pip", "install", "-e", str(project), *opts])
return project
other_stat = other.stat()
assert file_stat[stat.ST_INO] == other_stat[stat.ST_INO]
assert file_stat[stat.ST_DEV] == other_stat[stat.ST_DEV]
-
-
-def comparable_path(str_with_path: str) -> str:
- return str_with_path.lower().replace(os.sep, "/").replace("//", "/")
'arg5 a\\\\b',
]
proc = subprocess.Popen(
- cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE, text=True)
- stdout, stderr = proc.communicate('hello\nworld\n')
- actual = stdout.replace('\r\n', '\n')
+ cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
+ stdout, stderr = proc.communicate('hello\nworld\n'.encode('ascii'))
+ actual = stdout.decode('ascii').replace('\r\n', '\n')
expected = textwrap.dedent(r"""
\foo-script.py
['arg1', 'arg 2', 'arg "2\\"', 'arg 4\\', 'arg5 a\\\\b']
cmd,
stdout=subprocess.PIPE,
stdin=subprocess.PIPE,
- stderr=subprocess.STDOUT,
- text=True,
- )
+ stderr=subprocess.STDOUT)
stdout, stderr = proc.communicate()
- actual = stdout.replace('\r\n', '\n')
+ actual = stdout.decode('ascii').replace('\r\n', '\n')
expected = textwrap.dedent(r"""
\foo-script.py
[]
]
proc = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stdin=subprocess.PIPE,
- stderr=subprocess.STDOUT, text=True)
+ stderr=subprocess.STDOUT)
stdout, stderr = proc.communicate()
assert not stdout
assert not stderr
setenv =
PROJECT_ROOT = {toxinidir}
commands =
- pytest --integration {posargs:-vv --durations=10} setuptools/tests/integration
+ pytest --integration {posargs:-vv --durations=10 setuptools/tests/integration}
# use verbose mode by default to facilitate debugging from CI logs
[testenv:docs]