* "pydoc-topics", which builds a Python module containing a dictionary with
plain text documentation for the labels defined in
- `tools/pyspecific.py` -- pydoc needs these to show topic and keyword help.
+ ``tools/pyspecific.py`` -- pydoc needs these to show topic and keyword help.
* "suspicious", which checks the parsed markup for text that looks like
malformed and thus unconverted reST.
Free the given *key* allocated by :c:func:`PyThread_tss_alloc`, after
first calling :c:func:`PyThread_tss_delete` to ensure any associated
thread locals have been unassigned. This is a no-op if the *key*
- argument is `NULL`.
+ argument is ``NULL``.
.. note::
A freed key becomes a dangling pointer. You should reset the key to
.. c:member:: int configure_locale
- Set the LC_CTYPE locale to the user preferred locale?
+ Set the LC_CTYPE locale to the user preferred locale.
If equal to 0, set the :c:member:`~PyPreConfig.coerce_c_locale` and
:c:member:`~PyPreConfig.coerce_c_locale_warn` members to 0.
Allocator Domains
=================
+.. _allocator-domains:
+
All allocating functions belong to one of three different "domains" (see also
:c:type:`PyMemAllocatorDomain`). These domains represent different allocation
strategies and are optimized for different purposes. The specific details on
debug hooks on top of the new allocator.
+ .. warning::
+
+ :c:func:`PyMem_SetAllocator` does have the following contract:
+
+ * It can be called after :c:func:`Py_PreInitialize` and before
+ :c:func:`Py_InitializeFromConfig` to install a custom memory
+ allocator. There are no restrictions over the installed allocator
+ other than the ones imposed by the domain (for instance, the Raw
+ Domain allows the allocator to be called without the GIL held). See
+ :ref:`the section on allocator domains <allocator-domains>` for more
+ information.
+
+ * If called after Python has finished initializing (after
+ :c:func:`Py_InitializeFromConfig` has been called) the allocator
+ **must** wrap the existing allocator. Substituting the current
+ allocator for some other arbitrary one is **not supported**.
+
.. c:function:: void PyMem_SetupDebugHooks(void)
Setup :ref:`debug hooks in the Python memory allocators <pymem-debug-hooks>`
.. c:function:: unsigned long PyType_GetFlags(PyTypeObject* type)
Return the :c:member:`~PyTypeObject.tp_flags` member of *type*. This function is primarily
- meant for use with `Py_LIMITED_API`; the individual flag bits are
+ meant for use with ``Py_LIMITED_API``; the individual flag bits are
guaranteed to be stable across Python releases, but access to
:c:member:`~PyTypeObject.tp_flags` itself is not part of the limited API.
+------------------------------------------------+-----------------------------------+-------------------+---+---+---+---+
.. [#slots]
- A slot name in parentheses indicates it is (effectively) deprecated.
- Names in angle brackets should be treated as read-only.
- Names in square brackets are for internal use only.
- "<R>" (as a prefix) means the field is required (must be non-``NULL``).
+
+ **()**: A slot name in parentheses indicates it is (effectively) deprecated.
+
+ **<>**: Names in angle brackets should be initially set to ``NULL`` and
+ treated as read-only.
+
+ **[]**: Names in square brackets are for internal use only.
+
+ **<R>** (as a prefix) means the field is required (must be non-``NULL``).
+
.. [#cols] Columns:
**"O"**: set on :c:type:`PyBaseObject_Type`
**Inheritance:**
This flag is not inherited.
+ However, subclasses will not be instantiable unless they provide a
+ non-NULL :c:member:`~PyTypeObject.tp_new` (which is only possible
+ via the C API).
+
+ .. note::
+
+ To disallow instantiating a class directly but allow instantiating
+ its subclasses (e.g. for an :term:`abstract base class`),
+ do not use this flag.
+ Instead, make :c:member:`~PyTypeObject.tp_new` only succeed for
+ subclasses.
.. versionadded:: 3.10
Tuple of base types.
- This is set for types created by a class statement. It should be ``NULL`` for
- statically defined types.
+ This field should be set to ``NULL`` and treated as read-only.
+ Python will fill it in when the type is :c:func:`initialized <PyType_Ready>`.
+
+ For dynamically created classes, the ``Py_tp_bases``
+ :c:type:`slot <PyType_Slot>` can be used instead of the *bases* argument
+ of :c:func:`PyType_FromSpecWithBases`.
+ The argument form is preferred.
+
+ .. warning::
+
+ Multiple inheritance does not work well for statically defined types.
+ If you set ``tp_bases`` to a tuple, Python will not raise an error,
+ but some slots will only be inherited from the first base.
**Inheritance:**
Tuple containing the expanded set of base types, starting with the type itself
and ending with :class:`object`, in Method Resolution Order.
+ This field should be set to ``NULL`` and treated as read-only.
+ Python will fill it in when the type is :c:func:`initialized <PyType_Ready>`.
**Inheritance:**
PyImport_Import:PyObject*:name:0:
PyImport_ImportFrozenModule:int:::
-PyImport_ImportFrozenModule:const char*:::
+PyImport_ImportFrozenModule:const char*:name::
PyImport_ImportFrozenModuleObject:int:::
-PyImport_ImportFrozenModuleObject:PyObject*::+1:
+PyImport_ImportFrozenModuleObject:PyObject*:name:+1:
PyImport_ImportModule:PyObject*::+1:
PyImport_ImportModule:const char*:name::
.. code-block:: shell-session
- $ /opt/bin/python3.4-config --cflags
- -I/opt/include/python3.4m -I/opt/include/python3.4m -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes
+ $ /opt/bin/python3.11-config --cflags
+ -I/opt/include/python3.11 -I/opt/include/python3.11 -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall
-* ``pythonX.Y-config --ldflags`` will give you the recommended flags when
- linking:
+* ``pythonX.Y-config --ldflags --embed`` will give you the recommended flags
+ when linking:
.. code-block:: shell-session
- $ /opt/bin/python3.4-config --ldflags
- -L/opt/lib/python3.4/config-3.4m -lpthread -ldl -lutil -lm -lpython3.4m -Xlinker -export-dynamic
+ $ /opt/bin/python3.11-config --ldflags --embed
+ -L/opt/lib/python3.11/config-3.11-x86_64-linux-gnu -L/opt/lib -lpython3.11 -lpthread -ldl -lutil -lm
.. note::
To avoid confusion between several Python installations (and especially
Starting in Python 3.8, you can!
-Assignment expressions using the walrus operator `:=` assign a variable in an
+Assignment expressions using the walrus operator ``:=`` assign a variable in an
expression::
while chunk := fp.read(200):
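A self-contained version of the loop above, using :class:`io.StringIO` in
place of a real file so it runs as-is (the 450-character payload is
illustrative):

```python
import io

# Stand-in for a real file object opened for reading
fp = io.StringIO("a" * 450)

chunks = []
# := binds fp.read(200) to chunk and tests its truthiness in one expression
while chunk := fp.read(200):
    chunks.append(chunk)

print([len(c) for c in chunks])  # [200, 200, 50]
```

The loop stops when ``read()`` returns the empty string, which is falsy.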
How does the Python version numbering scheme work?
--------------------------------------------------
-Python versions are numbered A.B.C or A.B. A is the major version number -- it
-is only incremented for really major changes in the language. B is the minor
-version number, incremented for less earth-shattering changes. C is the
-micro-level -- it is incremented for each bugfix release. See :pep:`6` for more
-information about bugfix releases.
+Python versions are numbered "A.B.C" or "A.B":
+
+* *A* is the major version number -- it is only incremented for really major
+ changes in the language.
+* *B* is the minor version number -- it is incremented for less earth-shattering
+ changes.
+* *C* is the micro version number -- it is incremented for each bugfix release.
+
+See :pep:`6` for more information about bugfix releases.
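At runtime, these components are exposed as the named fields of
:data:`sys.version_info`:

```python
import sys

vi = sys.version_info
# major is A, minor is B, micro is C in the scheme described above
print(vi.major, vi.minor, vi.micro)

# releaselevel and serial describe pre-release suffixes
# ("final" and 0 for an ordinary release)
print(vi.releaselevel, vi.serial)
```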
Not all releases are bugfix releases. In the run-up to a new major release, a
series of development releases are made, denoted as alpha, beta, or release
modules, and release candidates are frozen, making no changes except as needed
to fix critical bugs.
-Alpha, beta and release candidate versions have an additional suffix. The
-suffix for an alpha version is "aN" for some small number N, the suffix for a
-beta version is "bN" for some small number N, and the suffix for a release
-candidate version is "rcN" for some small number N. In other words, all versions
-labeled 2.0aN precede the versions labeled 2.0bN, which precede versions labeled
-2.0rcN, and *those* precede 2.0.
+Alpha, beta and release candidate versions have an additional suffix:
+
+* The suffix for an alpha version is "aN" for some small number *N*.
+* The suffix for a beta version is "bN" for some small number *N*.
+* The suffix for a release candidate version is "rcN" for some small number *N*.
+
+In other words, all versions labeled *2.0aN* precede the versions labeled
+*2.0bN*, which precede versions labeled *2.0rcN*, and *those* precede 2.0.
You may also find version numbers with a "+" suffix, e.g. "2.2+". These are
unreleased versions, built directly from the CPython development repository. In
programming.
There are also good IDEs for Python. IDLE is a cross-platform IDE for Python
-that is written in Python using Tkinter. PythonWin is a Windows-specific IDE.
+that is written in Python using Tkinter.
Emacs users will be happy to know that there is a very good Python mode for
Emacs. All of these programming environments provide syntax highlighting,
auto-indenting, and access to the interactive interpreter while coding. Consult
for pdb as an example.
The IDLE interactive development environment, which is part of the standard
-Python distribution (normally available as Tools/scripts/idle), includes a
-graphical debugger.
+Python distribution (normally available as
+`Tools/scripts/idle3 <https://github.com/python/cpython/blob/main/Tools/scripts/idle3>`_),
+includes a graphical debugger.
PythonWin is a Python IDE that includes a GUI debugger based on pdb. The
PythonWin debugger colors breakpoints and has quite a few cool features such as
Python binary to produce a single executable.
One is to use the freeze tool, which is included in the Python source tree as
-``Tools/freeze``. It converts Python byte code to C arrays; with a C compiler you can
+`Tools/freeze <https://github.com/python/cpython/tree/main/Tools/freeze>`_.
+It converts Python byte code to C arrays; with a C compiler you can
embed all your modules into a new program, which is then linked with the
standard Python modules.
Why am I getting an UnboundLocalError when the variable has a value?
--------------------------------------------------------------------
-It can be a surprise to get the UnboundLocalError in previously working
+It can be a surprise to get the :exc:`UnboundLocalError` in previously working
code when it is modified by adding an assignment statement somewhere in
the body of a function.
>>> x = 10
>>> def bar():
... print(x)
+ ...
>>> bar()
10
... print(x)
... x += 1
-results in an UnboundLocalError:
+results in an :exc:`!UnboundLocalError`:
>>> foo()
Traceback (most recent call last):
... global x
... print(x)
... x += 1
+ ...
>>> foobar()
10
... x += 1
... bar()
... print(x)
+ ...
>>> foo()
10
11
import mod
print(config.x)
-Note that using a module is also the basis for implementing the Singleton design
+Note that using a module is also the basis for implementing the singleton design
pattern, for the same reason.
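A runnable sketch of the idea; here the ``config`` module is simulated with
:class:`types.ModuleType` so the example needs no extra file on disk:

```python
import sys
import types

# Simulate a config.py module; in real code this would just be a file
config = types.ModuleType("config")
config.x = 0
sys.modules["config"] = config

import config as a
import config as b  # both imports yield the same cached module object

a.x = 42
print(b.x)     # 42 -- shared state: singleton behaviour
print(a is b)  # True
```

Because imports are cached in :data:`sys.modules`, every importer sees the
same module object.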
It's good practice if you import modules in the following order:
-1. standard library modules -- e.g. ``sys``, ``os``, ``getopt``, ``re``
+1. standard library modules -- e.g. :mod:`sys`, :mod:`os`, :mod:`argparse`, :mod:`re`
2. third-party library modules (anything installed in Python's site-packages
- directory) -- e.g. mx.DateTime, ZODB, PIL.Image, etc.
+ directory) -- e.g. :mod:`!dateutil`, :mod:`!requests`, :mod:`!PIL.Image`
3. locally developed modules
It is sometimes necessary to move imports to a function or class to avoid
Some operations (for example ``y.append(10)`` and ``y.sort()``) mutate the
object, whereas superficially similar operations (for example ``y = y + [10]``
-and ``sorted(y)``) create a new object. In general in Python (and in all cases
+and :func:`sorted(y) <sorted>`) create a new object. In general in Python (and in all cases
in the standard library) a method that mutates an object will return ``None``
to help avoid getting the two types of operations confused. So if you
mistakenly write ``y.sort()`` thinking it will give you a sorted copy of ``y``,
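The difference can be seen directly:

```python
y = [3, 1, 2]

print(sorted(y))   # [1, 2, 3] -- a new list; y is unchanged
print(y)           # [3, 1, 2]

result = y.sort()  # mutates y in place...
print(result)      # None     -- ...and returns None, by convention
print(y)           # [1, 2, 3]
```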
How can I find the methods or attributes of an object?
------------------------------------------------------
-For an instance x of a user-defined class, ``dir(x)`` returns an alphabetized
+For an instance ``x`` of a user-defined class, :func:`dir(x) <dir>` returns an alphabetized
list of the names containing the instance attributes and methods and attributes
defined by its class.
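For example (the class and attribute names here are illustrative):

```python
class A:
    class_attr = 1

    def method(self):
        return self.class_attr

x = A()
x.instance_attr = 2

names = dir(x)
# Instance attributes, class attributes and methods all appear,
# and the result is alphabetized
print('instance_attr' in names and 'class_attr' in names and 'method' in names)
print(names == sorted(names))  # True
```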
<__main__.A object at 0x16D07CC>
Arguably the class has a name: even though it is bound to two names and invoked
-through the name B the created instance is still reported as an instance of
-class A. However, it is impossible to say whether the instance's name is a or
-b, since both names are bound to the same value.
+through the name ``B`` the created instance is still reported as an instance of
+class ``A``. However, it is impossible to say whether the instance's name is ``a`` or
+``b``, since both names are bound to the same value.
Generally speaking it should not be necessary for your code to "know the names"
of particular values. Unless you are deliberately writing introspective
----------------------------------------------------------
Trying to lookup an ``int`` literal attribute in the normal manner gives
-a syntax error because the period is seen as a decimal point::
+a :exc:`SyntaxError` because the period is seen as a decimal point::
>>> 1.__class__
File "<stdin>", line 1
How do I convert a number to a string?
--------------------------------------
-To convert, e.g., the number 144 to the string '144', use the built-in type
+To convert, e.g., the number ``144`` to the string ``'144'``, use the built-in type
constructor :func:`str`. If you want a hexadecimal or octal representation, use
the built-in functions :func:`hex` or :func:`oct`. For fancy formatting, see
the :ref:`f-strings` and :ref:`formatstrings` sections,
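For example:

```python
n = 144
print(str(n))      # '144'
print(hex(n))      # '0x90'
print(oct(n))      # '0o220'
print(f"{n:.2f}")  # '144.00' -- fancy formatting with an f-string
```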
For simple input parsing, the easiest approach is usually to split the line into
whitespace-delimited words using the :meth:`~str.split` method of string objects
and then convert decimal strings to numeric values using :func:`int` or
-:func:`float`. ``split()`` supports an optional "sep" parameter which is useful
+:func:`float`. :meth:`!split()` supports an optional "sep" parameter which is useful
if the line uses something other than whitespace as a separator.
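A small illustration (the input line is made up):

```python
line = "1.5, 2.5, 3.25"

# Default split() uses runs of whitespace; passing a sep argument handles
# other delimiters. float() tolerates the leftover surrounding spaces.
values = [float(part) for part in line.split(",")]
print(values)  # [1.5, 2.5, 3.25]
```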
For more complicated input parsing, regular expressions are more powerful
-than C's :c:func:`sscanf` and better suited for the task.
+than C's ``sscanf`` and better suited for the task.
What does 'UnicodeDecodeError' or 'UnicodeEncodeError' error mean?
The ``array`` module also provides methods for creating arrays of fixed types
with compact representations, but they are slower to index than lists. Also
-note that NumPy and other third party packages define array-like structures with
+note that `NumPy <https://numpy.org/>`_
+and other third party packages define array-like structures with
various characteristics as well.
-To get Lisp-style linked lists, you can emulate cons cells using tuples::
+To get Lisp-style linked lists, you can emulate *cons cells* using tuples::
lisp_list = ("like", ("this", ("example", None)))
If mutability is desired, you could use lists instead of tuples. Here the
-analogue of lisp car is ``lisp_list[0]`` and the analogue of cdr is
+analogue of a Lisp *car* is ``lisp_list[0]`` and the analogue of *cdr* is
``lisp_list[1]``. Only do this if you're sure you really need to, because it's
usually a lot slower than using Python lists.
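Walking such a structure is a simple loop over *car*/*cdr* pairs:

```python
lisp_list = ("like", ("this", ("example", None)))

items = []
node = lisp_list
while node is not None:
    car, cdr = node   # car is the head, cdr the rest of the list
    items.append(car)
    node = cdr

print(items)  # ['like', 'this', 'example']
```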
['foo', 'item']
To see why this happens, you need to know that (a) if an object implements an
-``__iadd__`` magic method, it gets called when the ``+=`` augmented assignment
+:meth:`~object.__iadd__` magic method, it gets called when the ``+=`` augmented
+assignment
is executed, and its return value is what gets used in the assignment statement;
-and (b) for lists, ``__iadd__`` is equivalent to calling ``extend`` on the list
+and (b) for lists, :meth:`!__iadd__` is equivalent to calling :meth:`~list.extend` on the list
and returning the list. That's why we say that for lists, ``+=`` is a
-"shorthand" for ``list.extend``::
+"shorthand" for :meth:`!list.extend`::
>>> a_list = []
>>> a_list += [1]
...
TypeError: 'tuple' object does not support item assignment
-The ``__iadd__`` succeeds, and thus the list is extended, but even though
+The :meth:`!__iadd__` succeeds, and thus the list is extended, but even though
``result`` points to the same object that ``a_tuple[0]`` already points to,
that final assignment still results in an error, because tuples are immutable.
How do I check if an object is an instance of a given class or of a subclass of it?
-----------------------------------------------------------------------------------
-Use the built-in function ``isinstance(obj, cls)``. You can check if an object
+Use the built-in function :func:`isinstance(obj, cls) <isinstance>`. You can
+check if an object
is an instance of any of a number of classes by providing a tuple instead of a
single class, e.g. ``isinstance(obj, (class1, class2, ...))``, and can also
check whether an object is one of Python's built-in types, e.g.
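For example:

```python
obj = 3

print(isinstance(obj, int))         # True
print(isinstance(obj, (str, int)))  # True -- any class in the tuple matches
print(isinstance(obj, str))         # False
print(isinstance(True, int))        # True -- bool is a subclass of int
```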
argument string to uppercase before calling the underlying
``self._outfile.write()`` method. All other methods are delegated to the
underlying ``self._outfile`` object. The delegation is accomplished via the
-``__getattr__`` method; consult :ref:`the language reference <attribute-access>`
+:meth:`~object.__getattr__` method; consult :ref:`the language reference <attribute-access>`
for more information about controlling attribute access.
Note that for more general cases delegation can get trickier. When attributes
-must be set as well as retrieved, the class must define a :meth:`__setattr__`
+must be set as well as retrieved, the class must define a :meth:`~object.__setattr__`
method too, and it must do so carefully. The basic implementation of
-:meth:`__setattr__` is roughly equivalent to the following::
+:meth:`!__setattr__` is roughly equivalent to the following::
class X:
...
self.__dict__[name] = value
...
-Most :meth:`__setattr__` implementations must modify ``self.__dict__`` to store
+Most :meth:`!__setattr__` implementations must modify
+:meth:`self.__dict__ <object.__dict__>` to store
local state for self without causing an infinite recursion.
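A minimal runnable version of that pattern (the class name is illustrative):

```python
class Tracked:
    def __setattr__(self, name, value):
        # Custom processing could go here. The store must go through
        # self.__dict__ (not plain attribute assignment), otherwise
        # __setattr__ would call itself forever.
        self.__dict__[name] = value

t = Tracked()
t.x = 10                 # goes through __setattr__
print(t.x, t.__dict__)   # 10 {'x': 10}
```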
There are several possible reasons for this.
-The del statement does not necessarily call :meth:`__del__` -- it simply
+The :keyword:`del` statement does not necessarily call :meth:`~object.__del__` -- it simply
decrements the object's reference count, and if this reaches zero
-:meth:`__del__` is called.
+:meth:`!__del__` is called.
If your data structures contain circular links (e.g. a tree where each child has
a parent reference and each parent has a list of children) the reference counts
will never go back to zero. Once in a while Python runs an algorithm to detect
such cycles, but the garbage collector might run some time after the last
-reference to your data structure vanishes, so your :meth:`__del__` method may be
+reference to your data structure vanishes, so your :meth:`!__del__` method may be
called at an inconvenient and random time. This is inconvenient if you're trying
-to reproduce a problem. Worse, the order in which object's :meth:`__del__`
+to reproduce a problem. Worse, the order in which objects' :meth:`!__del__`
methods are executed is arbitrary. You can run :func:`gc.collect` to force a
collection, but there *are* pathological cases where objects will never be
collected.
Despite the cycle collector, it's still a good idea to define an explicit
``close()`` method on objects to be called whenever you're done with them. The
``close()`` method can then remove attributes that refer to subobjects. Don't
-call :meth:`__del__` directly -- :meth:`__del__` should call ``close()`` and
+call :meth:`!__del__` directly -- :meth:`!__del__` should call ``close()`` and
``close()`` should make sure that it can be called more than once for the same
object.
Normally, calling :func:`sys.exc_clear` will take care of this by clearing
the last recorded exception.
-Finally, if your :meth:`__del__` method raises an exception, a warning message
+Finally, if your :meth:`!__del__` method raises an exception, a warning message
is printed to :data:`sys.stderr`.
How can a subclass control what data is stored in an immutable instance?
------------------------------------------------------------------------
-When subclassing an immutable type, override the :meth:`__new__` method
-instead of the :meth:`__init__` method. The latter only runs *after* an
+When subclassing an immutable type, override the :meth:`~object.__new__` method
+instead of the :meth:`~object.__init__` method. The latter only runs *after* an
instance is created, which is too late to alter data in an immutable
instance.
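For example, with the immutable :class:`str` type (the subclass is
illustrative):

```python
class UpperStr(str):
    """A str whose contents are uppercased at creation time."""

    def __new__(cls, value):
        # The data must be altered here: by the time __init__ runs,
        # the string's contents are already fixed
        return super().__new__(cls, value.upper())

s = UpperStr("hello")
print(s)                   # HELLO
print(isinstance(s, str))  # True
```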
Embedding the Python interpreter in a Windows app can be summarized as follows:
-1. Do _not_ build Python into your .exe file directly. On Windows, Python must
+1. Do **not** build Python into your .exe file directly. On Windows, Python must
be a DLL to handle importing modules that are themselves DLL's. (This is the
first key undocumented fact.) Instead, link to :file:`python{NN}.dll`; it is
typically installed in ``C:\Windows\System``. *NN* is the Python version, a
2. If you use SWIG, it is easy to create a Python "extension module" that will
make the app's data and methods available to Python. SWIG will handle just
about all the grungy details for you. The result is C code that you link
- *into* your .exe file (!) You do _not_ have to create a DLL file, and this
 *into* your .exe file (!). You do **not** have to create a DLL file, and this
also simplifies linking.
3. SWIG will create an init function (a C function) whose name depends on the
5. There are two problems with Python's C API which will become apparent if you
use a compiler other than MSVC, the compiler used to build pythonNN.dll.
- Problem 1: The so-called "Very High Level" functions that take FILE *
+ Problem 1: The so-called "Very High Level" functions that take ``FILE *``
arguments will not work in a multi-compiler environment because each
- compiler's notion of a struct FILE will be different. From an implementation
- standpoint these are very _low_ level functions.
+ compiler's notion of a ``struct FILE`` will be different. From an implementation
+ standpoint these are very low level functions.
Problem 2: SWIG generates the following code when generating wrappers to void
functions:
A list of bytecode instructions can be found in the documentation for
:ref:`the dis module <bytecodes>`.
+ callable
+ A callable is an object that can be called, possibly with a set
+ of arguments (see :term:`argument`), with the following syntax::
+
+ callable(argument1, argument2, ...)
+
+ A :term:`function`, and by extension a :term:`method`, is a callable.
+ An instance of a class that implements the :meth:`~object.__call__`
+ method is also a callable.
+
callback
A subroutine function which is passed as an argument to be executed at
some point in the future.
package
A Python :term:`module` which can contain submodules or recursively,
- subpackages. Technically, a package is a Python module with an
+ subpackages. Technically, a package is a Python module with a
``__path__`` attribute.
See also :term:`regular package` and :term:`namespace package`.
if args.quiet:
print(answer)
elif args.verbose:
- print("{} to the power {} equals {}".format(args.x, args.y, answer))
+ print(f"{args.x} to the power {args.y} equals {answer}")
else:
- print("{}^{} == {}".format(args.x, args.y, answer))
+ print(f"{args.x}^{args.y} == {answer}")
Note the slight difference in the usage text. Note the ``[-v | -q]``,
which tells us that we can either use ``-v`` or ``-q``,
.. highlight:: c
+.. _howto-clinic:
+
**********************
Argument Clinic How-To
**********************
and that you have the permissions to create and update files in it.
+.. _custom-level-handling:
+
+Custom handling of levels
+-------------------------
+
+Sometimes, you might want to do something slightly different from the standard
+handling of levels in handlers, where all levels at or above a threshold get
+processed by a handler. To do this, you need to use filters. Let's look at a
+scenario where you want to arrange things as follows:
+
+* Send messages of severity ``INFO`` and ``WARNING`` to ``sys.stdout``
+* Send messages of severity ``ERROR`` and above to ``sys.stderr``
+* Send messages of severity ``DEBUG`` and above to file ``app.log``
+
+Suppose you configure logging with the following JSON:
+
+.. code-block:: json
+
+ {
+ "version": 1,
+ "disable_existing_loggers": false,
+ "formatters": {
+ "simple": {
+ "format": "%(levelname)-8s - %(message)s"
+ }
+ },
+ "handlers": {
+ "stdout": {
+ "class": "logging.StreamHandler",
+ "level": "INFO",
+ "formatter": "simple",
+ "stream": "ext://sys.stdout"
+ },
+ "stderr": {
+ "class": "logging.StreamHandler",
+ "level": "ERROR",
+ "formatter": "simple",
+ "stream": "ext://sys.stderr"
+ },
+ "file": {
+ "class": "logging.FileHandler",
+ "formatter": "simple",
+ "filename": "app.log",
+ "mode": "w"
+ }
+ },
+ "root": {
+ "level": "DEBUG",
+ "handlers": [
+ "stderr",
+ "stdout",
+ "file"
+ ]
+ }
+ }
+
+This configuration does *almost* what we want, except that ``sys.stdout`` would
+show messages of severity ``ERROR`` and above as well as ``INFO`` and
+``WARNING`` messages. To prevent this, we can set up a filter which excludes
+those messages and add it to the relevant handler. This can be configured by
+adding a ``filters`` section parallel to ``formatters`` and ``handlers``:
+
+.. code-block:: json
+
+ "filters": {
+ "warnings_and_below": {
+ "()" : "__main__.filter_maker",
+ "level": "WARNING"
+ }
+ }
+
+and changing the section on the ``stdout`` handler to add it:
+
+.. code-block:: json
+
+ "stdout": {
+ "class": "logging.StreamHandler",
+ "level": "INFO",
+ "formatter": "simple",
+ "stream": "ext://sys.stdout",
+ "filters": ["warnings_and_below"]
+ }
+
+A filter is just a function, so we can define the ``filter_maker`` (a factory
+function) as follows:
+
+.. code-block:: python
+
+ def filter_maker(level):
+ level = getattr(logging, level)
+
+ def filter(record):
+ return record.levelno <= level
+
+ return filter
+
+This converts the string argument passed in to a numeric level, and returns a
+function which only returns ``True`` if the level of the passed in record is
+at or below the specified level. Note that in this example I have defined the
+``filter_maker`` in a test script ``main.py`` that I run from the command line,
+so its module will be ``__main__`` - hence the ``__main__.filter_maker`` in the
+filter configuration. You will need to change that if you define it in a
+different module.
+
+With the filter added, we can run ``main.py``, which in full is:
+
+.. code-block:: python
+
+ import json
+ import logging
+ import logging.config
+
+ CONFIG = '''
+ {
+ "version": 1,
+ "disable_existing_loggers": false,
+ "formatters": {
+ "simple": {
+ "format": "%(levelname)-8s - %(message)s"
+ }
+ },
+ "filters": {
+ "warnings_and_below": {
+ "()" : "__main__.filter_maker",
+ "level": "WARNING"
+ }
+ },
+ "handlers": {
+ "stdout": {
+ "class": "logging.StreamHandler",
+ "level": "INFO",
+ "formatter": "simple",
+ "stream": "ext://sys.stdout",
+ "filters": ["warnings_and_below"]
+ },
+ "stderr": {
+ "class": "logging.StreamHandler",
+ "level": "ERROR",
+ "formatter": "simple",
+ "stream": "ext://sys.stderr"
+ },
+ "file": {
+ "class": "logging.FileHandler",
+ "formatter": "simple",
+ "filename": "app.log",
+ "mode": "w"
+ }
+ },
+ "root": {
+ "level": "DEBUG",
+ "handlers": [
+ "stderr",
+ "stdout",
+ "file"
+ ]
+ }
+ }
+ '''
+
+ def filter_maker(level):
+ level = getattr(logging, level)
+
+ def filter(record):
+ return record.levelno <= level
+
+ return filter
+
+ logging.config.dictConfig(json.loads(CONFIG))
+ logging.debug('A DEBUG message')
+ logging.info('An INFO message')
+ logging.warning('A WARNING message')
+ logging.error('An ERROR message')
+ logging.critical('A CRITICAL message')
+
+And after running it like this:
+
+.. code-block:: shell
+
+ python main.py 2>stderr.log >stdout.log
+
+We can see the results are as expected:
+
+.. code-block:: shell
+
+ $ more *.log
+ ::::::::::::::
+ app.log
+ ::::::::::::::
+ DEBUG - A DEBUG message
+ INFO - An INFO message
+ WARNING - A WARNING message
+ ERROR - An ERROR message
+ CRITICAL - A CRITICAL message
+ ::::::::::::::
+ stderr.log
+ ::::::::::::::
+ ERROR - An ERROR message
+ CRITICAL - A CRITICAL message
+ ::::::::::::::
+ stdout.log
+ ::::::::::::::
+ INFO - An INFO message
+ WARNING - A WARNING message
+
+
Configuration server example
----------------------------
print('complete')
+.. _blocking-handlers:
+
Dealing with handlers that block
--------------------------------
Running a logging socket listener in production
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-To run a logging listener in production, you may need to use a process-management tool
-such as `Supervisor <http://supervisord.org/>`_. `Here
-<https://gist.github.com/vsajip/4b227eeec43817465ca835ca66f75e2b>`_ is a Gist which
-provides the bare-bones files to run the above functionality using Supervisor: you
-will need to change the `/path/to/` parts in the Gist to reflect the actual paths you
-want to use.
-
+.. _socket-listener-gist: https://gist.github.com/vsajip/4b227eeec43817465ca835ca66f75e2b
+
+To run a logging listener in production, you may need to use a
+process-management tool such as `Supervisor <http://supervisord.org/>`_.
+`Here is a Gist <socket-listener-gist_>`__
+which provides the bare-bones files to run the above functionality using
+Supervisor. It consists of the following files:
+
++-------------------------+----------------------------------------------------+
+| File | Purpose |
++=========================+====================================================+
+| :file:`prepare.sh` | A Bash script to prepare the environment for |
+| | testing |
++-------------------------+----------------------------------------------------+
+| :file:`supervisor.conf` | The Supervisor configuration file, which has |
+| | entries for the listener and a multi-process web |
+| | application |
++-------------------------+----------------------------------------------------+
+| :file:`ensure_app.sh` | A Bash script to ensure that Supervisor is running |
+| | with the above configuration |
++-------------------------+----------------------------------------------------+
+| :file:`log_listener.py` | The socket listener program which receives log |
+| | events and records them to a file |
++-------------------------+----------------------------------------------------+
+| :file:`main.py` | A simple web application which performs logging |
+| | via a socket connected to the listener |
++-------------------------+----------------------------------------------------+
+| :file:`webapp.json` | A JSON configuration file for the web application |
++-------------------------+----------------------------------------------------+
+| :file:`client.py` | A Python script to exercise the web application |
++-------------------------+----------------------------------------------------+
+
+The web application uses `Gunicorn <https://gunicorn.org/>`_, which is a
+popular web application server that starts multiple worker processes to handle
+requests. This example setup shows how the workers can write to the same log file
+without conflicting with one another --- they all go through the socket listener.
+
+To test these files, do the following in a POSIX environment:
+
+#. Download `the Gist <socket-listener-gist_>`__
+ as a ZIP archive using the :guilabel:`Download ZIP` button.
+
+#. Unzip the above files from the archive into a scratch directory.
+
+#. In the scratch directory, run ``bash prepare.sh`` to get things ready.
+ This creates a :file:`run` subdirectory to contain Supervisor-related and
+ log files, and a :file:`venv` subdirectory to contain a virtual environment
+ into which ``bottle``, ``gunicorn`` and ``supervisor`` are installed.
+
+#. Run ``bash ensure_app.sh`` to ensure that Supervisor is running with
+ the above configuration.
+
+#. Run ``venv/bin/python client.py`` to exercise the web application,
+ which will lead to records being written to the log.
+
+#. Inspect the log files in the :file:`run` subdirectory. You should see the
+ most recent log lines in files matching the pattern :file:`app.log*`. They won't be in
+ any particular order, since they have been handled concurrently by different
+ worker processes in a non-deterministic way.
+
+#. You can shut down the listener and the web application by running
+ ``venv/bin/supervisorctl -c supervisor.conf shutdown``.
+
+You may need to tweak the configuration files in the unlikely event that the
+configured ports clash with something else in your test environment.
.. _context-info:
--------------------------------------------------
Sometimes you want to format times using UTC, which can be done using a class
-such as `UTCFormatter`, shown below::
+such as ``UTCFormatter``, shown below::
import logging
import time
i = 1
logger.debug('Message %d', i, extra=extra)
+How to treat a logger like an output stream
+-------------------------------------------
+
+Sometimes, you need to interface to a third-party API which expects a file-like
+object to write to, but you want to direct the API's output to a logger. You
+can do this using a class which wraps a logger with a file-like API.
+Here's a short script illustrating such a class:
+
+.. code-block:: python
+
+ import logging
+
+ class LoggerWriter:
+ def __init__(self, logger, level):
+ self.logger = logger
+ self.level = level
+
+ def write(self, message):
+ if message != '\n': # avoid printing bare newlines, if you like
+ self.logger.log(self.level, message)
+
+ def flush(self):
+ # doesn't actually do anything, but might be expected of a file-like
+ # object - so optional depending on your situation
+ pass
+
+ def close(self):
+ # doesn't actually do anything, but might be expected of a file-like
+ # object - so optional depending on your situation. You might want
+ # to set a flag so that later calls to write raise an exception
+ pass
+
+ def main():
+ logging.basicConfig(level=logging.DEBUG)
+ logger = logging.getLogger('demo')
+ info_fp = LoggerWriter(logger, logging.INFO)
+ debug_fp = LoggerWriter(logger, logging.DEBUG)
+ print('An INFO message', file=info_fp)
+ print('A DEBUG message', file=debug_fp)
+
+ if __name__ == "__main__":
+ main()
+
+When this script is run, it prints
+
+.. code-block:: text
+
+ INFO:demo:An INFO message
+ DEBUG:demo:A DEBUG message
+
+You could also use ``LoggerWriter`` to redirect ``sys.stdout`` and
+``sys.stderr`` by doing something like this:
+
+.. code-block:: python
+
+ import sys
+
+ sys.stdout = LoggerWriter(logger, logging.INFO)
+ sys.stderr = LoggerWriter(logger, logging.WARNING)
+
+You should do this *after* configuring logging for your needs. In the above
+example, the :func:`~logging.basicConfig` call does this (using the
+``sys.stderr`` value *before* it is overwritten by a ``LoggerWriter``
+instance). Then, you'd get this kind of result:
+
+.. code-block:: pycon
+
+ >>> print('Foo')
+ INFO:demo:Foo
+ >>> print('Bar', file=sys.stderr)
+ WARNING:demo:Bar
+ >>>
+
+Of course, these above examples show output according to the format used by
+:func:`~logging.basicConfig`, but you can use a different formatter when you
+configure logging.
+
+Note that with the above scheme, you are somewhat at the mercy of buffering and
+the sequence of write calls which you are intercepting. For example, with the
+definition of ``LoggerWriter`` above, if you have the snippet
+
+.. code-block:: python
+
+ sys.stderr = LoggerWriter(logger, logging.WARNING)
+ 1 / 0
+
+then running the script results in
+
+.. code-block:: text
+
+ WARNING:demo:Traceback (most recent call last):
+
+ WARNING:demo: File "/home/runner/cookbook-loggerwriter/test.py", line 53, in <module>
+
+ WARNING:demo:
+ WARNING:demo:main()
+ WARNING:demo: File "/home/runner/cookbook-loggerwriter/test.py", line 49, in main
+
+ WARNING:demo:
+ WARNING:demo:1 / 0
+ WARNING:demo:ZeroDivisionError
+ WARNING:demo::
+ WARNING:demo:division by zero
+
+As you can see, this output isn't ideal. That's because the underlying code
+which writes to ``sys.stderr`` makes multiple writes, each of which results in a
+separate logged line (for example, the last three lines above). To get around
+this problem, you need to buffer things and only output log lines when newlines
+are seen. Let's use a slightly better implementation of ``LoggerWriter``:
+
+.. code-block:: python
+
+ class BufferingLoggerWriter(LoggerWriter):
+ def __init__(self, logger, level):
+ super().__init__(logger, level)
+ self.buffer = ''
+
+ def write(self, message):
+ if '\n' not in message:
+ self.buffer += message
+ else:
+ parts = message.split('\n')
+ if self.buffer:
+ s = self.buffer + parts.pop(0)
+ self.logger.log(self.level, s)
+ self.buffer = parts.pop()
+ for part in parts:
+ self.logger.log(self.level, part)
+
+This just buffers up stuff until a newline is seen, and then logs complete
+lines. With this approach, you get better output:
+
+.. code-block:: text
+
+ WARNING:demo:Traceback (most recent call last):
+ WARNING:demo: File "/home/runner/cookbook-loggerwriter/main.py", line 55, in <module>
+ WARNING:demo: main()
+ WARNING:demo: File "/home/runner/cookbook-loggerwriter/main.py", line 52, in main
+ WARNING:demo: 1/0
+ WARNING:demo:ZeroDivisionError: division by zero
+
.. patterns-to-avoid:
%Y-%m-%d %H:%M:%S
-with the milliseconds tacked on at the end. The ``style`` is one of `%`, '{'
-or '$'. If one of these is not specified, then '%' will be used.
+with the milliseconds tacked on at the end. The ``style`` is one of ``'%'``,
+``'{'``, or ``'$'``. If one of these is not specified, then ``'%'`` will be used.
-If the ``style`` is '%', the message format string uses
+If the ``style`` is ``'%'``, the message format string uses
``%(<dictionary key>)s`` styled string substitution; the possible keys are
-documented in :ref:`logrecord-attributes`. If the style is '{', the message
+documented in :ref:`logrecord-attributes`. If the style is ``'{'``, the message
format string is assumed to be compatible with :meth:`str.format` (using
-keyword arguments), while if the style is '$' then the message format string
+keyword arguments), while if the style is ``'$'`` then the message format string
should conform to what is expected by :meth:`string.Template.substitute`.
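A minimal sketch of the three styles side by side (the ``'demo'`` logger name and
the message are arbitrary); all three format strings produce identical output here:

```python
import logging

# Equivalent format strings in the three supported styles.
percent_fmt = logging.Formatter('%(levelname)s:%(name)s:%(message)s', style='%')
brace_fmt = logging.Formatter('{levelname}:{name}:{message}', style='{')
dollar_fmt = logging.Formatter('$levelname:$name:$message', style='$')

record = logging.LogRecord('demo', logging.INFO, 'example.py', 1,
                           'hello', None, None)
for fmt in (percent_fmt, brace_fmt, dollar_fmt):
    print(fmt.format(record))  # INFO:demo:hello
```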
.. versionchanged:: 3.2
+--------------+-------------------------------------------------+-------+
On all platforms, the "personal" file can be temporarily disabled by
-passing the `--no-user-cfg` option.
+passing the ``--no-user-cfg`` option.
Notes:
* usage_ - The string describing the program usage (default: generated from
arguments added to parser)
- * description_ - Text to display before the argument help (default: none)
+ * description_ - Text to display before the argument help
+ (by default, no text)
- * epilog_ - Text to display after the argument help (default: none)
+ * epilog_ - Text to display after the argument help (by default, no text)
* parents_ - A list of :class:`ArgumentParser` objects whose arguments should
also be included
Namespace(out=<_io.TextIOWrapper name='file.txt' mode='w' encoding='UTF-8'>, raw=<_io.FileIO name='raw.dat' mode='wb'>)
FileType objects understand the pseudo-argument ``'-'`` and automatically
- convert this into ``sys.stdin`` for readable :class:`FileType` objects and
- ``sys.stdout`` for writable :class:`FileType` objects::
+ convert this into :data:`sys.stdin` for readable :class:`FileType` objects and
+ :data:`sys.stdout` for writable :class:`FileType` objects::
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument('infile', type=argparse.FileType('r'))
Network logging can block the event loop. It is recommended to use
-a separate thread for handling logs or use non-blocking IO.
+a separate thread for handling logs or use non-blocking IO. For example,
+see :ref:`blocking-handlers`.
.. _asyncio-coroutine-not-scheduled:
Return the running event loop in the current OS thread.
- If there is no running event loop a :exc:`RuntimeError` is raised.
+ Raise a :exc:`RuntimeError` if there is no running event loop.
+
This function can only be called from a coroutine or a callback.
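For example, a minimal sketch:

```python
import asyncio

async def main():
    # Valid here: we are inside a coroutine, so a loop is running.
    loop = asyncio.get_running_loop()
    return loop.is_running()

print(asyncio.run(main()))  # True
```

Calling ``asyncio.get_running_loop()`` outside a running loop would instead
raise :exc:`RuntimeError`.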
.. versionadded:: 3.7
Get the current event loop.
- If there is no current event loop set in the current OS thread,
- the OS thread is main, and :func:`set_event_loop` has not yet
- been called, asyncio will create a new event loop and set it as the
- current one.
+ When called from a coroutine or a callback (e.g. scheduled with
+ call_soon or similar API), this function will always return the
+ running event loop.
+
+ If there is no running event loop set, the function will return
+ the result of ``get_event_loop_policy().get_event_loop()`` call.
Because this function has rather complex behavior (especially
when custom event loop policies are in use), using the
:func:`get_running_loop` function is preferred to :func:`get_event_loop`
in coroutines and callbacks.
- Consider also using the :func:`asyncio.run` function instead of using
- lower level functions to manually create and close an event loop.
+ As noted above, consider using the higher-level :func:`asyncio.run` function,
+ instead of using these lower level functions to manually create and close an
+ event loop.
.. deprecated:: 3.10
- Deprecation warning is emitted if there is no running event loop.
- In future Python releases, this function will be an alias of
- :func:`get_running_loop`.
+ Deprecation warning is emitted if there is no current event loop.
+ In Python 3.12 it will be an error.
+
+ .. note::
+ In Python versions 3.10.0--3.10.8 this function
+ (and other functions which used it implicitly) emitted a
+ :exc:`DeprecationWarning` if there was no running event loop, even if
+ the current loop was set.
.. function:: set_event_loop(loop)
- Set *loop* as a current event loop for the current OS thread.
+ Set *loop* as the current event loop for the current OS thread.
.. function:: new_event_loop()
Upgrade an existing transport-based connection to TLS.
- Return a new transport instance, that the *protocol* must start using
- immediately after the *await*. The *transport* instance passed to
- the *start_tls* method should never be used again.
+ Create a TLS coder/decoder instance and insert it between the *transport*
+ and the *protocol*. The coder/decoder implements both *transport*-facing
+ protocol and *protocol*-facing transport.
+
+ Return the created two-interface instance. After *await*, the *protocol*
+ must stop using the original *transport* and communicate with the returned
+ object only because the coder caches *protocol*-side data and sporadically
+ exchanges extra TLS session packets with *transport*.
Parameters:
pool, cpu_bound)
print('custom process pool', result)
- asyncio.run(main())
+ if __name__ == '__main__':
+ asyncio.run(main())
+
+ Note that the entry point guard (``if __name__ == '__main__'``)
+ is required for option 3 due to the peculiarities of :mod:`multiprocessing`,
+ which is used by :class:`~concurrent.futures.ProcessPoolExecutor`.
+ See :ref:`Safe importing of main module <multiprocessing-safe-main-import>`.
This method returns a :class:`asyncio.Future` object.
- The **preferred** function to get the running event loop.
* - :func:`asyncio.get_event_loop`
- - Get an event loop instance (current or via the policy).
+ - Get an event loop instance (running or current via the current policy).
* - :func:`asyncio.set_event_loop`
- Set the event loop as current via the current policy.
On Windows, :class:`ProactorEventLoop` is now used by default.
+ .. deprecated:: 3.10.9
+ :meth:`get_event_loop` now emits a :exc:`DeprecationWarning` if there
+ is no current event loop set and a new event loop has been implicitly
+ created. In Python 3.12 it will be an error.
+
.. class:: WindowsSelectorEventLoopPolicy
will be received. After all buffered data is flushed, the
protocol's :meth:`protocol.connection_lost()
<BaseProtocol.connection_lost>` method will be called with
- :const:`None` as its argument.
+ :const:`None` as its argument. The transport should not be
+ used once it is closed.
.. method:: BaseTransport.is_closing()
a connection is open.
However, :meth:`protocol.eof_received() <Protocol.eof_received>`
- is called at most once. Once `eof_received()` is called,
+ is called at most once. Once ``eof_received()`` is called,
``data_received()`` is not called anymore.
.. method:: Protocol.eof_received()
.. class:: StreamReader
Represents a reader object that provides APIs to read data
- from the IO stream.
+ from the IO stream. As an :term:`asynchronous iterable`, the
+ object supports the :keyword:`async for` statement.
It is not recommended to instantiate *StreamReader* objects
directly; use :func:`open_connection` and :func:`start_server`
* the :meth:`~asyncio.subprocess.Process.communicate` and
:meth:`~asyncio.subprocess.Process.wait` methods don't have a
- *timeout* parameter: use the :func:`wait_for` function;
+ *timeout* parameter: use the :func:`~asyncio.wait_for` function;
* the :meth:`Process.wait() <asyncio.subprocess.Process.wait>` method
is asynchronous, whereas :meth:`subprocess.Popen.wait` method
Coroutines
==========
+**Source code:** :source:`Lib/asyncio/coroutines.py`
+
+----------------------------------------------------
+
:term:`Coroutines <coroutine>` declared with the async/await syntax are the
preferred way of writing asyncio applications. For example, the following
snippet of code prints "hello", waits 1 second,
Creating Tasks
==============
+**Source code:** :source:`Lib/asyncio/tasks.py`
+
+-----------------------------------------------
+
.. function:: create_task(coro, *, name=None)
Wrap the *coro* :ref:`coroutine <coroutine>` into a :class:`Task`
# blocking_io complete at 19:50:54
# finished main at 19:50:54
- Directly calling `blocking_io()` in any coroutine would block the event loop
+ Directly calling ``blocking_io()`` in any coroutine would block the event loop
for its duration, resulting in an additional 1 second of run time. Instead,
- by using `asyncio.to_thread()`, we can run it in a separate thread without
+ by using ``asyncio.to_thread()``, we can run it in a separate thread without
blocking the event loop.
.. note::
- Due to the :term:`GIL`, `asyncio.to_thread()` can typically only be used
+ Due to the :term:`GIL`, ``asyncio.to_thread()`` can typically only be used
to make IO-bound functions non-blocking. However, for extension modules
that release the GIL or alternative Python implementations that don't
- have one, `asyncio.to_thread()` can also be used for CPU-bound functions.
+ have one, ``asyncio.to_thread()`` can also be used for CPU-bound functions.
.. versionadded:: 3.9
Encode the :term:`bytes-like object` *s* using Base64 and return the encoded
:class:`bytes`.
- Optional *altchars* must be a :term:`bytes-like object` of at least
- length 2 (additional characters are ignored) which specifies an alternative
- alphabet for the ``+`` and ``/`` characters. This allows an application to e.g.
- generate URL or filesystem safe Base64 strings. The default is ``None``, for
- which the standard Base64 alphabet is used.
+ Optional *altchars* must be a :term:`bytes-like object` of length 2 which
+ specifies an alternative alphabet for the ``+`` and ``/`` characters.
+ This allows an application to e.g. generate URL or filesystem safe Base64
+ strings. The default is ``None``, for which the standard Base64 alphabet is used.
+
+ May assert or raise a :exc:`ValueError` if the length of *altchars* is not 2. Raises a
+ :exc:`TypeError` if *altchars* is not a :term:`bytes-like object`.
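As a small illustrative sketch (the two-byte input here is arbitrary, chosen so
its standard encoding contains a ``+``), passing ``altchars=b'-_'`` yields URL-
and filesystem-safe output:

```python
import base64

data = b'\xfb\xef'  # arbitrary bytes whose standard encoding uses '+'
print(base64.b64encode(data))                  # b'++8='
print(base64.b64encode(data, altchars=b'-_'))  # b'--8='
```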
.. function:: b64decode(s, altchars=None, validate=False)
Decode the Base64 encoded :term:`bytes-like object` or ASCII string
*s* and return the decoded :class:`bytes`.
- Optional *altchars* must be a :term:`bytes-like object` or ASCII string of
- at least length 2 (additional characters are ignored) which specifies the
- alternative alphabet used instead of the ``+`` and ``/`` characters.
+ Optional *altchars* must be a :term:`bytes-like object` or ASCII string
+ of length 2 which specifies the alternative alphabet used instead of the
+ ``+`` and ``/`` characters.
A :exc:`binascii.Error` exception is raised
if *s* is incorrectly padded.
these non-alphabet characters in the input result in a
:exc:`binascii.Error`.
+ May assert or raise a :exc:`ValueError` if the length of *altchars* is not 2.
.. function:: standard_b64encode(s)
For real file names, the canonical form is an operating-system-dependent,
:func:`case-normalized <os.path.normcase>` :func:`absolute path
- <os.path.abspath>`. A *filename* with angle brackets, such as `"<stdin>"`
+ <os.path.abspath>`. A *filename* with angle brackets, such as ``"<stdin>"``
generated in interactive mode, is returned unchanged.
.. method:: reset()
will be set to ``True``.
Attempting to decompress data after the end of stream is reached
- raises an `EOFError`. Any data found after the end of the
+ raises an :exc:`EOFError`. Any data found after the end of the
stream is ignored and saved in the :attr:`~.unused_data` attribute.
.. versionchanged:: 3.5
>>> out = out + comp.flush()
The example above uses a very "nonrandom" stream of data
-(a stream of `b"z"` chunks). Random data tends to compress poorly,
+(a stream of ``b"z"`` chunks). Random data tends to compress poorly,
while ordered, repetitive data usually yields a high compression ratio.
Writing and reading a bzip2-compressed file in binary mode:
.. note::
- Underlying encoded files are always opened in binary mode.
+ If *encoding* is not ``None``, then the
+ underlying encoded files are always opened in binary mode.
No automatic conversion of ``'\n'`` is done on reading and writing.
The *mode* argument may be any binary mode acceptable to the built-in
:func:`open` function; the ``'b'`` is automatically added.
All threads enqueued to ``ThreadPoolExecutor`` will be joined before the
interpreter can exit. Note that the exit handler which does this is
- executed *before* any exit handlers added using `atexit`. This means
+ executed *before* any exit handlers added using ``atexit``. This means
exceptions in the main thread must be caught and handled in order to
signal threads to exit gracefully. For this reason, it is recommended
that ``ThreadPoolExecutor`` not be used for long-running tasks.
tests.
If the method returns ``False`` then the :class:`Future` was cancelled,
- i.e. :meth:`Future.cancel` was called and returned `True`. Any threads
+ i.e. :meth:`Future.cancel` was called and returned ``True``. Any threads
waiting on the :class:`Future` completing (i.e. through
:func:`as_completed` or :func:`wait`) will be woken up.
If the method returns ``True`` then the :class:`Future` was not cancelled
and has been put in the running state, i.e. calls to
- :meth:`Future.running` will return `True`.
+ :meth:`Future.running` will return ``True``.
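A minimal sketch of both outcomes, using bare :class:`Future` objects created
directly (as the surrounding text notes, this is normally only done in tests):

```python
from concurrent.futures import Future

# A pending Future transitions to the running state; the call returns True.
f = Future()
print(f.set_running_or_notify_cancel())  # True

# A cancelled Future cannot be run; the call returns False.
g = Future()
g.cancel()
print(g.set_running_or_notify_cancel())  # False
```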
This method can only be called once and cannot be called after
:meth:`Future.set_result` or :meth:`Future.set_exception` have been
# Code to release resource, e.g.:
release_resource(resource)
+ The function can then be used like this::
+
>>> with managed_resource(timeout=3600) as resource:
... # Resource is released at the end of this block,
... # even if code in the block raises an exception
finally:
print(f'it took {time.monotonic() - now}s to run')
- @timeit()
- async def main():
- # ... async code ...
+ @timeit()
+ async def main():
+ # ... async code ...
When used as a decorator, a new generator instance is implicitly created on
each function call. This allows the otherwise "one-shot" context managers
:ref:`asynchronous context managers <async-context-managers>`::
async def send_http(session=None):
- if not session:
- # If no http session, create it with aiohttp
- cm = aiohttp.ClientSession()
- else:
- # Caller is responsible for closing the session
- cm = nullcontext(session)
+ if not session:
+ # If no http session, create it with aiohttp
+ cm = aiohttp.ClientSession()
+ else:
+ # Caller is responsible for closing the session
+ cm = nullcontext(session)
- async with cm as session:
- # Send http requests with session
+ async with cm as session:
+ # Send http requests with session
.. versionadded:: 3.7
print('Finishing')
return False
+ The class can then be used like this::
+
>>> @mycontext()
... def function():
... print('The bit in the middle')
print('Finishing')
return False
+ The class can then be used like this::
+
>>> @mycontext()
... async def function():
... print('The bit in the middle')
A read-only property. Set to the value the variable had before
the :meth:`ContextVar.set` method call that created the token.
- It points to :attr:`Token.MISSING` is the variable was not set
+ It points to :attr:`Token.MISSING` if the variable was not set
before the call.
.. attribute:: Token.MISSING
.. moduleauthor:: Thomas Heller <theller@python.net>
+**Source code:** :source:`Lib/ctypes`
+
--------------
:mod:`ctypes` is a foreign function library for Python. It provides C compatible
>>> printf(b"%f bottles of beer\n", 42.5)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
- ArgumentError: argument 2: exceptions.TypeError: Don't know how to convert parameter 2
+ ArgumentError: argument 2: TypeError: Don't know how to convert parameter 2
>>>
As has been mentioned before, all Python types except integers, strings, and
31
>>>
+.. _ctypes-calling-variadic-functions:
+
+Calling variadic functions
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+On most platforms, calling variadic functions through ctypes is exactly the same
+as calling functions with a fixed number of parameters. On some platforms, in
+particular ARM64 for Apple platforms, the calling convention for variadic
+functions differs from that for regular functions.
+
+On those platforms it is required to specify the *argtypes* attribute for the
+regular, non-variadic, function arguments:
+
+.. code-block:: python3
+
+ libc.printf.argtypes = [ctypes.c_char_p]
+
+Because specifying the attribute does not inhibit portability it is advised to
+always specify ``argtypes`` for all variadic functions.
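A short POSIX-only sketch (it assumes a ``printf`` symbol is reachable through
the process handle obtained with ``CDLL(None)``; on Windows you would load a
specific DLL instead):

```python
import ctypes

libc = ctypes.CDLL(None)  # POSIX: handle to the current process's symbols
# Declare the fixed (non-variadic) argument; extra variadic arguments
# may still be passed at call time.
libc.printf.argtypes = [ctypes.c_char_p]
n = libc.printf(b"%d bottles\n", 99)
print(n)  # printf returns the number of characters written: 11
```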
+
.. _ctypes-calling-functions-with-own-custom-data-types:
>>> printf(b"%d %d %d", 1, 2, 3)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
- ArgumentError: argument 2: exceptions.TypeError: wrong type
+ ArgumentError: argument 2: TypeError: wrong type
>>> printf(b"%s %d %f\n", b"X", 2, 3)
X 2 3.000000
13
>>> strchr(b"abcdef", b"def")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
- ArgumentError: argument 2: exceptions.TypeError: one character string expected
+ ArgumentError: argument 2: TypeError: one character string expected
>>> print(strchr(b"abcdef", b"x"))
None
>>> strchr(b"abcdef", b"d")
.. function:: GetLastError()
Windows only: Returns the last error code set by Windows in the calling thread.
- This function calls the Windows `GetLastError()` function directly,
+ This function calls the Windows ``GetLastError()`` function directly,
it does not return the ctypes-private copy of the error code.
.. function:: get_errno()
.. moduleauthor:: Eric S. Raymond <esr@thyrsus.com>
.. sectionauthor:: Eric S. Raymond <esr@thyrsus.com>
+**Source code:** :source:`Lib/curses/ascii.py`
+
--------------
The :mod:`curses.ascii` module supplies name constants for ASCII characters and
.. sectionauthor:: Moshe Zadka <moshez@zadka.site.co.il>
.. sectionauthor:: Eric Raymond <esr@thyrsus.com>
+**Source code:** :source:`Lib/curses`
+
--------------
The :mod:`curses` module provides an interface to the curses library, the
Change the definition of a color, taking the number of the color to be changed
followed by three RGB values (for the amounts of red, green, and blue
components). The value of *color_number* must be between ``0`` and
- `COLORS - 1`. Each of *r*, *g*, *b*, must be a value between ``0`` and
+ ``COLORS - 1``. Each of *r*, *g*, *b*, must be a value between ``0`` and
``1000``. When :func:`init_color` is used, all occurrences of that color on the
screen immediately change to the new definition. This function is a no-op on
most terminals; it is active only if :func:`can_change_color` returns ``True``.
class C:
...
- @dataclass(init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False, match_args=True, kw_only=False, slots=False)
+ @dataclass(init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False,
+ match_args=True, kw_only=False, slots=False)
class C:
...
@dataclass
class C:
i: int
- j: int = None
- database: InitVar[DatabaseType] = None
+ j: int | None = None
+ database: InitVar[DatabaseType | None] = None
def __post_init__(self, database):
if self.j is None and database is not None:
two digits of ``offset.hours`` and ``offset.minutes`` respectively.
.. versionchanged:: 3.6
- Name generated from ``offset=timedelta(0)`` is now plain `'UTC'`, not
+ Name generated from ``offset=timedelta(0)`` is now plain ``'UTC'``, not
``'UTC+00:00'``.
Alternative constructor that only accepts instances of :class:`float` or
:class:`int`.
- Note `Decimal.from_float(0.1)` is not the same as `Decimal('0.1')`.
+ Note ``Decimal.from_float(0.1)`` is not the same as ``Decimal('0.1')``.
Since 0.1 is not exactly representable in binary floating point, the
value is stored as the nearest representable value which is
- `0x1.999999999999ap-4`. That equivalent value in decimal is
- `0.1000000000000000055511151231257827021181583404541015625`.
+ ``0x1.999999999999ap-4``. That equivalent value in decimal is
+ ``0.1000000000000000055511151231257827021181583404541015625``.
.. note:: From Python 3.2 onwards, a :class:`Decimal` instance
can also be constructed directly from a :class:`float`.
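For example:

```python
from decimal import Decimal

# from_float exposes the exact binary value stored for 0.1,
# while the string constructor gives the exact decimal 0.1.
print(Decimal.from_float(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
print(Decimal('0.1'))
# 0.1
```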
.. method:: exp(x)
- Returns `e ** x`.
+ Returns ``e ** x``.
.. method:: fma(x, y, z)
.. productionlist:: doctest
directive: "#" "doctest:" `directive_options`
- directive_options: `directive_option` ("," `directive_option`)\*
+ directive_options: `directive_option` ("," `directive_option`)*
directive_option: `on_or_off` `directive_option_name`
- on_or_off: "+" \| "-"
- directive_option_name: "DONT_ACCEPT_BLANKLINE" \| "NORMALIZE_WHITESPACE" \| ...
+ on_or_off: "+" | "-"
+ directive_option_name: "DONT_ACCEPT_BLANKLINE" | "NORMALIZE_WHITESPACE" | ...
Whitespace is not allowed between the ``+`` or ``-`` and the directive option
name. The directive option name can be any of the option flag names explained
In a model generated from bytes, any header values that (in contravention of
the RFCs) contain non-ASCII bytes will, when retrieved through this
interface, be represented as :class:`~email.header.Header` objects with
- a charset of `unknown-8bit`.
+ a charset of ``unknown-8bit``.
.. method:: __len__()
a multiple of 4). The encoded block was kept as-is.
* :class:`InvalidDateDefect` -- When decoding an invalid or unparsable date field.
- The original value is kept as-is.
\ No newline at end of file
+ The original value is kept as-is.
specified as ``-0000`` (indicating it is in UTC but contains no
information about the source timezone), then :attr:`.datetime` will be a
naive :class:`~datetime.datetime`. If a specific timezone offset is
- found (including `+0000`), then :attr:`.datetime` will contain an aware
+ found (including ``+0000``), then :attr:`.datetime` will contain an aware
``datetime`` that uses :class:`datetime.timezone` to record the timezone
offset.
.. versionadded:: 3.4
+**Source code:** :source:`Lib/ensurepip`
+
--------------
The :mod:`ensurepip` package provides support for bootstrapping the ``pip``
supported.
-.. function:: print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False)
+.. function:: print(*objects, sep=' ', end='\n', file=None, flush=False)
Print *objects* to the text stream *file*, separated by *sep* and followed
by *end*. *sep*, *end*, *file*, and *flush*, if present, must be given as keyword
BLAKE2s, 0 in sequential mode).
* *last_node*: boolean indicating whether the processed node is the last
- one (`False` for sequential mode).
+ one (``False`` for sequential mode).
.. figure:: hashlib-blake2-tree.png
:alt: Explanation of tree mode parameters.
:class:`SimpleHTTPRequestHandler` will follow symbolic links when handling
requests, this makes it possible for files outside of the specified directory
to be served.
+
+Earlier versions of Python did not scrub control characters from the
+log messages emitted to stderr from ``python -m http.server`` or the
+default :class:`BaseHTTPRequestHandler` ``.log_message``
+implementation. This could allow remote clients connecting to your
+server to send nefarious control codes to your terminal.
+
+.. versionadded:: 3.10.9
+ Control characters are scrubbed in stderr logs.
An abstract base class for resource readers capable of serving
the ``files`` interface. Subclasses ResourceReader and provides
concrete implementations of the ResourceReader's abstract
- methods. Therefore, any loader supplying TraversableReader
+ methods. Therefore, any loader supplying TraversableResources
also supplies ResourceReader.
Loaders that wish to support resource reading are expected to
| ``CHAR_MAX`` | Nothing is specified in this locale. |
+--------------+-----------------------------------------+
- The function sets temporarily the ``LC_CTYPE`` locale to the ``LC_NUMERIC``
+ The function temporarily sets the ``LC_CTYPE`` locale to the ``LC_NUMERIC``
locale or the ``LC_MONETARY`` locale if locales are different and numeric or
monetary strings are non-ASCII. This temporary change affects other threads.
.. versionchanged:: 3.7
- The function now sets temporarily the ``LC_CTYPE`` locale to the
+ The function now temporarily sets the ``LC_CTYPE`` locale to the
``LC_NUMERIC`` locale in some cases.
Get a regular expression that can be used with the regex function to
recognize a positive response to a yes/no question.
- .. note::
-
- The expression is in the syntax suitable for the :c:func:`regex` function
- from the C library, which might differ from the syntax used in :mod:`re`.
-
.. data:: NOEXPR
Get a regular expression that can be used with the regex(3) function to
recognize a negative response to a yes/no question.
+ .. note::
+
+ The regular expressions for :const:`YESEXPR` and
+ :const:`NOEXPR` use syntax suitable for the
+ :c:func:`regex` function from the C library, which might
+ differ from the syntax used in :mod:`re`.
+
.. data:: CRNCYSTR
Get the currency symbol, preceded by "-" if the symbol should appear before
Formats a number *val* according to the current :const:`LC_NUMERIC` setting.
The format follows the conventions of the ``%`` operator. For floating point
- values, the decimal point is modified if appropriate. If *grouping* is true,
+ values, the decimal point is modified if appropriate. If *grouping* is ``True``,
also takes the grouping into account.
If *monetary* is true, the conversion uses monetary thousands separator and
Formats a number *val* according to the current :const:`LC_MONETARY` settings.
The returned string includes the currency symbol if *symbol* is true, which is
- the default. If *grouping* is true (which is not the default), grouping is done
- with the value. If *international* is true (which is not the default), the
+ the default. If *grouping* is ``True`` (which is not the default), grouping is done
+ with the value. If *international* is ``True`` (which is not the default), the
international currency symbol is used.
- Note that this function will not work with the 'C' locale, so you have to set a
- locale via :func:`setlocale` first.
+ .. note::
+
+ This function will not work with the 'C' locale, so you have to set a
+ locale via :func:`setlocale` first.
.. function:: str(float)
:c:func:`gettext` or :c:func:`dcgettext`. For these applications, it may be
necessary to bind the text domain, so that the libraries can properly locate
their message catalogs.
-
will be set to ``True``.
Attempting to decompress data after the end of stream is reached
- raises an `EOFError`. Any data found after the end of the
+ raises an :exc:`EOFError`. Any data found after the end of the
stream is ignored and saved in the :attr:`~.unused_data` attribute.
.. versionchanged:: 3.5
However, global variables which are just module level constants cause no
problems.
+.. _multiprocessing-safe-main-import:
+
Safe importing of main module
Make sure that the main module can be safely imported by a new Python
.. function:: remove(path, *, dir_fd=None)
Remove (delete) the file *path*. If *path* is a directory, an
- :exc:`IsADirectoryError` is raised. Use :func:`rmdir` to remove directories.
+ :exc:`OSError` is raised. Use :func:`rmdir` to remove directories.
If the file does not exist, a :exc:`FileNotFoundError` is raised.
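The behavior described above can be sketched with a throwaway temporary file:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
os.remove(path)
gone = not os.path.exists(path)

# Removing the same path again raises FileNotFoundError.
try:
    os.remove(path)
    raised = False
except FileNotFoundError:
    raised = True
```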
This function can support :ref:`paths relative to directory descriptors
system records access and modification times; see :func:`~os.stat`. The best
way to preserve exact times is to use the *st_atime_ns* and *st_mtime_ns*
fields from the :func:`os.stat` result object with the *ns* parameter to
- `utime`.
+ :func:`utime`.
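For example, exact timestamps can be preserved across a copy by passing the nanosecond fields straight through:

```python
import os
import shutil
import tempfile

fd, src = tempfile.mkstemp()
os.close(fd)
fd, dst = tempfile.mkstemp()
os.close(fd)
shutil.copyfile(src, dst)

# Copy the nanosecond timestamp fields from stat() straight to utime().
st = os.stat(src)
os.utime(dst, ns=(st.st_atime_ns, st.st_mtime_ns))
preserved = os.stat(dst).st_mtime_ns == st.st_mtime_ns
os.remove(src)
os.remove(dst)
```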
This function can support :ref:`specifying a file descriptor <path_fd>`,
:ref:`paths relative to directory descriptors <dir_fd>` and :ref:`not
library :c:data:`POSIX_SPAWN_RESETIDS` flag.
If the *setsid* argument is ``True``, it will create a new session ID
- for `posix_spawn`. *setsid* requires :c:data:`POSIX_SPAWN_SETSID`
+ for ``posix_spawn``. *setsid* requires :c:data:`POSIX_SPAWN_SETSID`
or :c:data:`POSIX_SPAWN_SETSID_NP` flag. Otherwise, :exc:`NotImplementedError`
is raised.
some fixed length. Patterns which start with negative lookbehind assertions may
match at the beginning of the string being searched.
+.. _re-conditional-expression:
+.. index:: single: (?(; in regular expressions
+
``(?(id/name)yes-pattern|no-pattern)``
Will try to match with ``yes-pattern`` if the group with given *id* or
*name* exists, and with ``no-pattern`` if it doesn't. ``no-pattern`` is

.. sectionauthor:: Fred L. Drake, Jr. <fdrake@acm.org>
-Python offers two different primitive operations based on regular expressions:
-:func:`re.match` checks for a match only at the beginning of the string, while
-:func:`re.search` checks for a match anywhere in the string (this is what Perl
-does by default).
+Python offers different primitive operations based on regular expressions:
+
++ :func:`re.match` checks for a match only at the beginning of the string
++ :func:`re.search` checks for a match anywhere in the string
+ (this is what Perl does by default)
++ :func:`re.fullmatch` checks whether the entire string matches
+
For example::
>>> re.match("c", "abcdef") # No match
>>> re.search("c", "abcdef") # Match
<re.Match object; span=(2, 3), match='c'>
+ >>> re.fullmatch("p.*n", "python") # Match
+ <re.Match object; span=(0, 6), match='python'>
+ >>> re.fullmatch("r.*n", "python") # No match
Regular expressions beginning with ``'^'`` can be used with :func:`search` to
restrict the match at the beginning of the string::
beginning of the string, whereas using :func:`search` with a regular expression
beginning with ``'^'`` will match at the beginning of each line. ::
- >>> re.match('X', 'A\nB\nX', re.MULTILINE) # No match
- >>> re.search('^X', 'A\nB\nX', re.MULTILINE) # Match
+ >>> re.match("X", "A\nB\nX", re.MULTILINE) # No match
+ >>> re.search("^X", "A\nB\nX", re.MULTILINE) # Match
<re.Match object; span=(4, 5), match='X'>
.. function:: compare_digest(a, b)
- Return ``True`` if strings *a* and *b* are equal, otherwise ``False``,
+ Return ``True`` if strings or
+ :term:`bytes-like objects <bytes-like object>`
+ *a* and *b* are equal, otherwise ``False``,
using a "constant-time compare" to reduce the risk of
`timing attacks <https://codahale.com/a-lesson-in-timing-attacks/>`_.
See :func:`hmac.compare_digest` for additional details.
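A quick sketch of :func:`!compare_digest` from :mod:`secrets`; both arguments must be of the same type (both :class:`str` or both bytes-like):

```python
import secrets

# Constant-time comparison: equal inputs, then a one-byte difference.
match = secrets.compare_digest("token-123", "token-123")
mismatch = secrets.compare_digest(b"token-123", b"token-124")
```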
events.
*sizehint* informs epoll about the expected number of events to be
- registered. It must be positive, or `-1` to use the default. It is only
+ registered. It must be positive, or ``-1`` to use the default. It is only
used on older systems where :c:func:`epoll_create1` is not available;
otherwise it has no effect (though its value is still checked).
.. module:: signal
:synopsis: Set handlers for asynchronous events.
+**Source code:** :source:`Lib/signal.py`
+
--------------
This module provides mechanisms to use signal handlers in Python.
When :const:`SOCK_NONBLOCK` or :const:`SOCK_CLOEXEC`
bit flags are applied to *type* they are cleared, and
:attr:`socket.type` will not reflect them. They are still passed
- to the underlying system `socket()` call. Therefore,
+ to the underlying system ``socket()`` call. Therefore,
::
Note that :class:`UnixDatagramServer` derives from :class:`UDPServer`, not from
:class:`UnixStreamServer` --- the only difference between an IP and a Unix
-stream server is the address family, which is simply repeated in both Unix
-server classes.
+server is the address family.
.. class:: ForkingMixIn
The :attr:`self.rfile` and :attr:`self.wfile` attributes can be
read or written, respectively, to get the request data or return data
to the client.
-
- The :attr:`rfile` attributes of both classes support the
- :class:`io.BufferedIOBase` readable interface, and
- :attr:`DatagramRequestHandler.wfile` supports the
- :class:`io.BufferedIOBase` writable interface.
+ The :attr:`!rfile` attributes support the :class:`io.BufferedIOBase` readable interface,
+ and :attr:`!wfile` attributes support the :class:`!io.BufferedIOBase` writable interface.
.. versionchanged:: 3.6
:attr:`StreamRequestHandler.wfile` also supports the
* :ref:`sqlite3-adapters`
* :ref:`sqlite3-converters`
* :ref:`sqlite3-connection-context-manager`
+ * :ref:`sqlite3-howto-row-factory`
* :ref:`sqlite3-explanation` for in-depth background on transaction control.
.. note::
- The :mod:`!sqlite3` module supports both ``qmark`` and ``numeric`` DB-API
- parameter styles, because that is what the underlying SQLite library
- supports. However, the DB-API does not allow multiple values for
+ The :mod:`!sqlite3` module supports ``qmark``, ``numeric``,
+ and ``named`` DB-API parameter styles,
+ because that is what the underlying SQLite library supports.
+ However, the DB-API does not allow multiple values for
the ``paramstyle`` attribute.
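The ``qmark`` and ``named`` styles can be sketched side by side:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# qmark style: positional "?" placeholders bound from a sequence.
qmark_result = con.execute("SELECT ? + ?", (1, 2)).fetchone()[0]
# named style: ":name" placeholders bound from a dict.
named_result = con.execute("SELECT :a * :b", {"a": 3, "b": 4}).fetchone()[0]
con.close()
```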
.. data:: sqlite_version
:meth:`~Cursor.executescript` on it with the given *sql_script*.
Return the new cursor object.
- .. method:: create_function(name, narg, func, \*, deterministic=False)
+ .. method:: create_function(name, narg, func, *, deterministic=False)
Create or remove a user-defined SQL function.
con.close()
- .. method:: backup(target, \*, pages=-1, progress=None, name="main", sleep=0.250)
+ .. method:: backup(target, *, pages=-1, progress=None, name="main", sleep=0.250)
Create a backup of an SQLite database.
.. attribute:: row_factory
- A callable that accepts two arguments,
- a :class:`Cursor` object and the raw row results as a :class:`tuple`,
- and returns a custom object representing an SQLite row.
-
- Example:
-
- .. doctest::
-
- >>> def dict_factory(cursor, row):
- ... col_names = [col[0] for col in cursor.description]
- ... return {key: value for key, value in zip(col_names, row)}
- >>> con = sqlite3.connect(":memory:")
- >>> con.row_factory = dict_factory
- >>> for row in con.execute("SELECT 1 AS a, 2 AS b"):
- ... print(row)
- {'a': 1, 'b': 2}
+ The initial :attr:`~Cursor.row_factory`
+ for :class:`Cursor` objects created from this connection.
+ Assigning to this attribute does not affect the :attr:`!row_factory`
+ of existing cursors belonging to this connection, only new ones.
+ Is ``None`` by default,
+ meaning each row is returned as a :class:`tuple`.
- If returning a tuple doesn't suffice and you want name-based access to
- columns, you should consider setting :attr:`row_factory` to the
- highly optimized :class:`sqlite3.Row` type. :class:`Row` provides both
- index-based and case-insensitive name-based access to columns with almost no
- memory overhead. It will probably be better than your own custom
- dictionary-based approach or even a db_row based solution.
-
- .. XXX what's a db_row-based solution?
+ See :ref:`sqlite3-howto-row-factory` for more details.
.. attribute:: text_factory
.. method:: fetchone()
- If :attr:`~Connection.row_factory` is ``None``,
+ If :attr:`~Cursor.row_factory` is ``None``,
return the next row query result set as a :class:`tuple`.
Else, pass it to the row factory and return its result.
Return ``None`` if no more data is available.
including :abbr:`CTE (Common Table Expression)` queries.
It is only updated by the :meth:`execute` and :meth:`executemany` methods.
+ .. attribute:: row_factory
+
+ Control how a row fetched from this :class:`!Cursor` is represented.
+ If ``None``, a row is represented as a :class:`tuple`.
+ Can be set to the included :class:`sqlite3.Row`;
+ or a :term:`callable` that accepts two arguments,
+ a :class:`Cursor` object and the :class:`!tuple` of row values,
+ and returns a custom object representing an SQLite row.
+
+ Defaults to what :attr:`Connection.row_factory` was set to
+ when the :class:`!Cursor` was created.
+ Assigning to this attribute does not affect
+ :attr:`Connection.row_factory` of the parent connection.
+
+ See :ref:`sqlite3-howto-row-factory` for more details.
+
.. The sqlite3.Row example used to be a how-to. It has now been incorporated
into the Row reference. We keep the anchor here in order not to break
It supports iteration, equality testing, :func:`len`,
and :term:`mapping` access by column name and index.
- Two row objects compare equal if have equal columns and equal members.
+ Two :class:`!Row` objects compare equal
+ if they have identical column names and values.
+
+ See :ref:`sqlite3-howto-row-factory` for more details.
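The equality rule can be sketched like so:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row
a = con.execute("SELECT 1 AS x").fetchone()
b = con.execute("SELECT 1 AS x").fetchone()
c = con.execute("SELECT 2 AS x").fetchone()
rows_equal = a == b    # identical column names and values
rows_differ = a != c   # same column name, different value
con.close()
```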
.. method:: keys
.. versionchanged:: 3.5
Added support of slicing.
- Example:
-
- .. doctest::
-
- >>> con = sqlite3.connect(":memory:")
- >>> con.row_factory = sqlite3.Row
- >>> res = con.execute("SELECT 'Earth' AS name, 6378 AS radius")
- >>> row = res.fetchone()
- >>> row.keys()
- ['name', 'radius']
- >>> row[0], row["name"] # Access by index and name.
- ('Earth', 'Earth')
- >>> row["RADIUS"] # Column names are case-insensitive.
- 6378
-
PrepareProtocol objects
^^^^^^^^^^^^^^^^^^^^^^^
return f"Point({self.x}, {self.y})"
def adapt_point(point):
- return f"{point.x};{point.y}".encode("utf-8")
+ return f"{point.x};{point.y}"
def convert_point(s):
x, y = list(map(float, s.split(b";")))
def convert_date(val):
"""Convert ISO 8601 date to datetime.date object."""
- return datetime.date.fromisoformat(val)
+ return datetime.date.fromisoformat(val.decode())
def convert_datetime(val):
"""Convert ISO 8601 datetime to datetime.datetime object."""
- return datetime.datetime.fromisoformat(val)
+ return datetime.datetime.fromisoformat(val.decode())
def convert_timestamp(val):
"""Convert Unix epoch timestamp to datetime.datetime object."""
- return datetime.datetime.fromtimestamp(val)
+ return datetime.datetime.fromtimestamp(int(val))
sqlite3.register_converter("date", convert_date)
sqlite3.register_converter("datetime", convert_datetime)
sqlite3.register_converter("timestamp", convert_timestamp)
+.. testcode::
+ :hide:
+
+ dt = datetime.datetime(2019, 5, 18, 15, 17, 8, 123456)
+
+ assert adapt_date_iso(dt.date()) == "2019-05-18"
+ assert convert_date(b"2019-05-18") == dt.date()
+
+ assert adapt_datetime_iso(dt) == "2019-05-18T15:17:08.123456"
+ assert convert_datetime(b"2019-05-18T15:17:08.123456") == dt
+
+ # Using current time as fromtimestamp() returns local date/time.
+    # Dropping microseconds as adapt_datetime_epoch truncates the fractional part.
+ now = datetime.datetime.now().replace(microsecond=0)
+ current_timestamp = int(now.timestamp())
+
+ assert adapt_datetime_epoch(now) == current_timestamp
+ assert convert_timestamp(str(current_timestamp).encode()) == now
+
.. _sqlite3-connection-shortcuts:
.. _SQLite URI documentation: https://www.sqlite.org/uri.html
+.. _sqlite3-howto-row-factory:
+
+How to create and use row factories
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+By default, :mod:`!sqlite3` represents each row as a :class:`tuple`.
+If a :class:`!tuple` does not suit your needs,
+you can use the :class:`sqlite3.Row` class
+or a custom :attr:`~Cursor.row_factory`.
+
+While :attr:`!row_factory` exists as an attribute both on the
+:class:`Cursor` and the :class:`Connection`,
+it is recommended to set :class:`Connection.row_factory`,
+so all cursors created from the connection will use the same row factory.
+
+:class:`!Row` provides indexed and case-insensitive named access to columns,
+with minimal memory overhead and performance impact over a :class:`!tuple`.
+To use :class:`!Row` as a row factory,
+assign it to the :attr:`!row_factory` attribute:
+
+.. doctest::
+
+ >>> con = sqlite3.connect(":memory:")
+ >>> con.row_factory = sqlite3.Row
+
+Queries now return :class:`!Row` objects:
+
+.. doctest::
+
+ >>> res = con.execute("SELECT 'Earth' AS name, 6378 AS radius")
+ >>> row = res.fetchone()
+ >>> row.keys()
+ ['name', 'radius']
+ >>> row[0] # Access by index.
+ 'Earth'
+ >>> row["name"] # Access by name.
+ 'Earth'
+ >>> row["RADIUS"] # Column names are case-insensitive.
+ 6378
+
+You can create a custom :attr:`~Cursor.row_factory`
+that returns each row as a :class:`dict`, with column names mapped to values:
+
+.. testcode::
+
+ def dict_factory(cursor, row):
+ fields = [column[0] for column in cursor.description]
+ return {key: value for key, value in zip(fields, row)}
+
+Using it, queries now return a :class:`!dict` instead of a :class:`!tuple`:
+
+.. doctest::
+
+ >>> con = sqlite3.connect(":memory:")
+ >>> con.row_factory = dict_factory
+ >>> for row in con.execute("SELECT 1 AS a, 2 AS b"):
+ ... print(row)
+ {'a': 1, 'b': 2}
+
+The following row factory returns a :term:`named tuple`:
+
+.. testcode::
+
+ from collections import namedtuple
+
+ def namedtuple_factory(cursor, row):
+ fields = [column[0] for column in cursor.description]
+ cls = namedtuple("Row", fields)
+ return cls._make(row)
+
+:func:`!namedtuple_factory` can be used as follows:
+
+.. doctest::
+
+ >>> con = sqlite3.connect(":memory:")
+ >>> con.row_factory = namedtuple_factory
+ >>> cur = con.execute("SELECT 1 AS a, 2 AS b")
+ >>> row = cur.fetchone()
+ >>> row
+ Row(a=1, b=2)
+ >>> row[0] # Indexed access.
+ 1
+ >>> row.b # Attribute access.
+ 2
+
+With some adjustments, the above recipe can be adapted to use a
+:class:`~dataclasses.dataclass`, or any other custom class,
+instead of a :class:`~collections.namedtuple`.
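One possible adjustment, sketched with :func:`dataclasses.make_dataclass` (building a new class per row is simple but slow; cache the class if the query shape is fixed):

```python
import sqlite3
from dataclasses import make_dataclass

def dataclass_factory(cursor, row):
    # Build a dataclass whose fields mirror the column names.
    fields = [column[0] for column in cursor.description]
    return make_dataclass("Row", fields)(*row)

con = sqlite3.connect(":memory:")
con.row_factory = dataclass_factory
row = con.execute("SELECT 1 AS a, 2 AS b").fetchone()
con.close()
```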
+
+
.. _sqlite3-explanation:
Explanation
The relative likelihood is computed as the probability of a sample
occurring in a narrow range divided by the width of the range (hence
the word "density"). Since the likelihood is relative to other points,
- its value can be greater than `1.0`.
+ its value can be greater than ``1.0``.
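For instance, a narrow distribution has a density above ``1.0`` at its mean:

```python
from statistics import NormalDist

# With sigma=0.1, the density at the mean is 1/(0.1*sqrt(2*pi)), about 3.99.
narrow = NormalDist(mu=0.0, sigma=0.1)
density = narrow.pdf(0.0)
```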
.. method:: NormalDist.cdf(x)
range [*start*, *end*]. Optional arguments *start* and *end* are
interpreted as in slice notation.
+   If *sub* is empty, returns the number of empty strings between characters,
+ which is the length of the string plus one.
+
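The empty-substring case above can be sketched as:

```python
text = "abc"
# Empty substrings are counted between characters and at both ends:
# |a|b|c| gives four positions.
empty_count = text.count("")
```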
.. method:: str.encode(encoding="utf-8", errors="strict")
The subsequence to search for may be any :term:`bytes-like object` or an
integer in the range 0 to 255.
+   If *sub* is empty, returns the number of empty slices between characters,
+ which is the length of the bytes object plus one.
+
.. versionchanged:: 3.3
Also accept an integer in the range 0 to 255 as the subsequence.
A dictionary's keys are *almost* arbitrary values. Values that are not
:term:`hashable`, that is, values containing lists, dictionaries or other
mutable types (that are compared by value rather than by object identity) may
-not be used as keys. Numeric types used for keys obey the normal rules for
-numeric comparison: if two numbers compare equal (such as ``1`` and ``1.0``)
-then they can be used interchangeably to index the same dictionary entry. (Note
-however, that since computers store floating-point numbers as approximations it
-is usually unwise to use them as dictionary keys.)
+not be used as keys.
+Values that compare equal (such as ``1``, ``1.0``, and ``True``)
+can be used interchangeably to index the same dictionary entry.
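The interchangeability of equal keys can be sketched as:

```python
d = {}
d[1] = "int"
d[1.0] = "float"
d[True] = "bool"
# 1, 1.0 and True compare (and hash) equal, so they share one entry;
# each assignment overwrote the previous value.
```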
.. class:: dict(**kwargs)
dict(mapping, **kwargs)
--------------
-This module performs conversions between Python values and C structs represented
-as Python :class:`bytes` objects. This can be used in handling binary data
-stored in files or from network connections, among other sources. It uses
-:ref:`struct-format-strings` as compact descriptions of the layout of the C
-structs and the intended conversion to/from Python values.
+This module converts between Python values and C structs represented
+as Python :class:`bytes` objects. Compact :ref:`format strings <struct-format-strings>`
+describe the intended conversions to/from Python values.
+The module's functions and objects can be used for two largely
+distinct applications, data exchange with external sources (files or
+network connections), or data transfer between the Python application
+and the C layer.
.. note::
- By default, the result of packing a given C struct includes pad bytes in
- order to maintain proper alignment for the C types involved; similarly,
- alignment is taken into account when unpacking. This behavior is chosen so
- that the bytes of a packed struct correspond exactly to the layout in memory
- of the corresponding C struct. To handle platform-independent data formats
- or omit implicit pad bytes, use ``standard`` size and alignment instead of
- ``native`` size and alignment: see :ref:`struct-alignment` for details.
+ When no prefix character is given, native mode is the default. It
+ packs or unpacks data based on the platform and compiler on which
+ the Python interpreter was built.
+ The result of packing a given C struct includes pad bytes which
+ maintain proper alignment for the C types involved; similarly,
+ alignment is taken into account when unpacking. In contrast, when
+ communicating data between external sources, the programmer is
+ responsible for defining byte ordering and padding between elements.
+ See :ref:`struct-alignment` for details.
Several :mod:`struct` functions (and methods of :class:`Struct`) take a *buffer*
argument. This refers to objects that implement the :ref:`bufferobjects` and
Format Strings
--------------
-Format strings are the mechanism used to specify the expected layout when
-packing and unpacking data. They are built up from :ref:`format-characters`,
-which specify the type of data being packed/unpacked. In addition, there are
-special characters for controlling the :ref:`struct-alignment`.
+Format strings describe the data layout when
+packing and unpacking data. They are built up from :ref:`format characters<format-characters>`,
+which specify the type of data being packed/unpacked. In addition,
+special characters control the :ref:`byte order, size and alignment<struct-alignment>`.
+Each format string consists of an optional prefix character which
+describes the overall properties of the data and one or more format
+characters which describe the actual data values and padding.
.. _struct-alignment:
By default, C types are represented in the machine's native format and byte
order, and properly aligned by skipping pad bytes if necessary (according to the
rules used by the C compiler).
+This behavior is chosen so
+that the bytes of a packed struct correspond exactly to the memory layout
+of the corresponding C struct.
+Whether to use native byte ordering
+and padding or standard formats depends on the application.
.. index::
single: @ (at); in struct format strings
If the first character is not one of these, ``'@'`` is assumed.
-Native byte order is big-endian or little-endian, depending on the host
-system. For example, Intel x86 and AMD64 (x86-64) are little-endian;
-IBM z and most legacy architectures are big-endian;
-and ARM, RISC-V and IBM Power feature switchable endianness
-(bi-endian, though the former two are nearly always little-endian in practice).
-Use ``sys.byteorder`` to check the endianness of your system.
+Native byte order is big-endian or little-endian, depending on the
+host system. For example, Intel x86, AMD64 (x86-64), and Apple M1 are
+little-endian; IBM z and many legacy architectures are big-endian.
+Use :data:`sys.byteorder` to check the endianness of your system.
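A minimal check, packing a 16-bit value in native mode:

```python
import struct
import sys

# Native packing of a 16-bit value reveals the host byte order.
native = struct.pack("@h", 1)
expected = b"\x01\x00" if sys.byteorder == "little" else b"\x00\x01"
```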
Native size and alignment are determined using the C compiler's
``sizeof`` expression. This is always combined with native byte order.
+--------+--------------------------+--------------------+----------------+------------+
| Format | C Type | Python type | Standard size | Notes |
+========+==========================+====================+================+============+
-| ``x`` | pad byte | no value | | |
+| ``x`` | pad byte | no value | | \(7) |
+--------+--------------------------+--------------------+----------------+------------+
| ``c`` | :c:expr:`char` | bytes of length 1 | 1 | |
+--------+--------------------------+--------------------+----------------+------------+
+--------+--------------------------+--------------------+----------------+------------+
| ``d`` | :c:expr:`double` | float | 8 | \(4) |
+--------+--------------------------+--------------------+----------------+------------+
-| ``s`` | :c:expr:`char[]` | bytes | | |
+| ``s`` | :c:expr:`char[]` | bytes | | \(9) |
+--------+--------------------------+--------------------+----------------+------------+
-| ``p`` | :c:expr:`char[]` | bytes | | |
+| ``p`` | :c:expr:`char[]` | bytes | | \(8) |
+--------+--------------------------+--------------------+----------------+------------+
| ``P`` | :c:expr:`void \*` | integer | | \(5) |
+--------+--------------------------+--------------------+----------------+------------+
operations. See the Wikipedia page on the `half-precision floating-point
format <half precision format_>`_ for more information.
+(7)
+ When packing, ``'x'`` inserts one NUL byte.
+
+(8)
+ The ``'p'`` format character encodes a "Pascal string", meaning a short
+ variable-length string stored in a *fixed number of bytes*, given by the count.
+ The first byte stored is the length of the string, or 255, whichever is
+ smaller. The bytes of the string follow. If the string passed in to
+ :func:`pack` is too long (longer than the count minus 1), only the leading
+ ``count-1`` bytes of the string are stored. If the string is shorter than
+ ``count-1``, it is padded with null bytes so that exactly count bytes in all
+ are used. Note that for :func:`unpack`, the ``'p'`` format character consumes
+ ``count`` bytes, but that the string returned can never contain more than 255
+ bytes.
+
+(9)
+ For the ``'s'`` format character, the count is interpreted as the length of the
+ bytes, not a repeat count like for the other format characters; for example,
+ ``'10s'`` means a single 10-byte string mapping to or from a single
+ Python byte string, while ``'10c'`` means 10
+    separate one-byte character elements (e.g., ``cccccccccc``) mapping
+    to or from ten different Python bytes objects. (See :ref:`struct-examples`
+ for a concrete demonstration of the difference.)
+ If a count is not given, it defaults to 1. For packing, the string is
+ truncated or padded with null bytes as appropriate to make it fit. For
+ unpacking, the resulting bytes object always has exactly the specified number
+ of bytes. As a special case, ``'0s'`` means a single, empty string (while
+ ``'0c'`` means 0 characters).
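The ``'p'`` truncation rule described in note (8) can be sketched as:

```python
import struct

# '5p' reserves five bytes: one length byte plus at most four data bytes.
packed = struct.pack("5p", b"abcdef")       # input truncated to four bytes
unpacked = struct.unpack("5p", packed)[0]
```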
A format character may be preceded by an integral repeat count. For example,
the format string ``'4h'`` means exactly the same as ``'hhhh'``.
Whitespace characters between formats are ignored; a count and its format must
not contain whitespace though.
-For the ``'s'`` format character, the count is interpreted as the length of the
-bytes, not a repeat count like for the other format characters; for example,
-``'10s'`` means a single 10-byte string, while ``'10c'`` means 10 characters.
-If a count is not given, it defaults to 1. For packing, the string is
-truncated or padded with null bytes as appropriate to make it fit. For
-unpacking, the resulting bytes object always has exactly the specified number
-of bytes. As a special case, ``'0s'`` means a single, empty string (while
-``'0c'`` means 0 characters).
-
When packing a value ``x`` using one of the integer formats (``'b'``,
``'B'``, ``'h'``, ``'H'``, ``'i'``, ``'I'``, ``'l'``, ``'L'``,
``'q'``, ``'Q'``), if ``x`` is outside the valid range for that format
Previously, some of the integer formats wrapped out-of-range values and
raised :exc:`DeprecationWarning` instead of :exc:`struct.error`.
-The ``'p'`` format character encodes a "Pascal string", meaning a short
-variable-length string stored in a *fixed number of bytes*, given by the count.
-The first byte stored is the length of the string, or 255, whichever is
-smaller. The bytes of the string follow. If the string passed in to
-:func:`pack` is too long (longer than the count minus 1), only the leading
-``count-1`` bytes of the string are stored. If the string is shorter than
-``count-1``, it is padded with null bytes so that exactly count bytes in all
-are used. Note that for :func:`unpack`, the ``'p'`` format character consumes
-``count`` bytes, but that the string returned can never contain more than 255
-bytes.
-
.. index:: single: ? (question mark); in struct format strings
For the ``'?'`` format character, the return value is either :const:`True` or
^^^^^^^^
.. note::
- All examples assume a native byte order, size, and alignment with a
- big-endian machine.
+ Native byte order examples (designated by the ``'@'`` format prefix or
+ lack of any prefix character) may not match what the reader's
+ machine produces as
+ that depends on the platform and compiler.
+
+Pack and unpack integers of three different sizes, using big-endian
+ordering::
+
+ >>> from struct import *
+ >>> pack(">bhl", 1, 2, 3)
+ b'\x01\x00\x02\x00\x00\x00\x03'
+   >>> unpack('>bhl', b'\x01\x00\x02\x00\x00\x00\x03')
+ (1, 2, 3)
+ >>> calcsize('>bhl')
+ 7
-A basic example of packing/unpacking three integers::
+Attempt to pack an integer which is too large for the defined field::
- >>> from struct import *
- >>> pack('hhl', 1, 2, 3)
- b'\x00\x01\x00\x02\x00\x00\x00\x03'
- >>> unpack('hhl', b'\x00\x01\x00\x02\x00\x00\x00\x03')
- (1, 2, 3)
- >>> calcsize('hhl')
- 8
+ >>> pack(">h", 99999)
+ Traceback (most recent call last):
+ File "<stdin>", line 1, in <module>
+ struct.error: 'h' format requires -32768 <= number <= 32767
+
+Demonstrate the difference between ``'s'`` and ``'c'`` format
+characters::
+
+ >>> pack("@ccc", b'1', b'2', b'3')
+ b'123'
+ >>> pack("@3s", b'123')
+ b'123'
Unpacked fields can be named by assigning them to variables or by wrapping
the result in a named tuple::
>>> Student._make(unpack('<10sHHb', record))
Student(name=b'raymond ', serialnum=4658, school=264, gradelevel=8)
-The ordering of format characters may have an impact on size since the padding
-needed to satisfy alignment requirements is different::
-
- >>> pack('ci', b'*', 0x12131415)
- b'*\x00\x00\x00\x12\x13\x14\x15'
- >>> pack('ic', 0x12131415, b'*')
- b'\x12\x13\x14\x15*'
- >>> calcsize('ci')
+The ordering of format characters may have an impact on size in native
+mode since padding is implicit. In standard mode, the user is
+responsible for inserting any desired padding.
+Note in
+the first ``pack`` call below that three NUL bytes were added after the
+packed ``'#'`` to align the following integer on a four-byte boundary.
+In this example, the output was produced on a little-endian machine::
+
+ >>> pack('@ci', b'#', 0x12131415)
+ b'#\x00\x00\x00\x15\x14\x13\x12'
+ >>> pack('@ic', 0x12131415, b'#')
+ b'\x15\x14\x13\x12#'
+ >>> calcsize('@ci')
8
- >>> calcsize('ic')
+ >>> calcsize('@ic')
5
-The following format ``'llh0l'`` specifies two pad bytes at the end, assuming
-longs are aligned on 4-byte boundaries::
+The following format ``'llh0l'`` results in two pad bytes being added
+at the end, assuming the platform's longs are aligned on 4-byte boundaries::
- >>> pack('llh0l', 1, 2, 3)
+ >>> pack('@llh0l', 1, 2, 3)
b'\x00\x00\x00\x01\x00\x00\x00\x02\x00\x03\x00\x00'
-This only works when native size and alignment are in effect; standard size and
-alignment does not enforce any alignment.
-
.. seealso::
Module :mod:`array`
Packed binary storage of homogeneous data.
- Module :mod:`xdrlib`
- Packing and unpacking of XDR data.
+ Module :mod:`json`
+ JSON encoder and decoder.
+
+ Module :mod:`pickle`
+ Python object serialization.
+
+
+.. _applications:
+
+Applications
+------------
+
+Two main applications for the :mod:`struct` module exist, data
+interchange between Python and C code within an application or another
+application compiled using the same compiler (:ref:`native formats<struct-native-formats>`), and
+data interchange between applications using agreed upon data layout
+(:ref:`standard formats<struct-standard-formats>`). Generally speaking, the format strings
+constructed for these two domains are distinct.
+
+
+.. _struct-native-formats:
+
+Native Formats
+^^^^^^^^^^^^^^
+
+When constructing format strings which mimic native layouts, the
+compiler and machine architecture determine byte ordering and padding.
+In such cases, the ``@`` format character should be used to specify
+native byte ordering and data sizes. Internal pad bytes are normally inserted
+automatically. It is possible that a zero-repeat format code will be
+needed at the end of a format string to round up to the correct
+byte boundary for proper alignment of consecutive chunks of data.
+
+Consider these two simple examples (on a 64-bit, little-endian
+machine)::
+
+ >>> calcsize('@lhl')
+ 24
+ >>> calcsize('@llh')
+ 18
+
+Data is not padded to an 8-byte boundary at the end of the second
+format string without the use of extra padding. A zero-repeat format
+code solves that problem::
+
+ >>> calcsize('@llh0l')
+ 24
+
+The ``'x'`` format code can be used to specify the repeat, but for
+native formats it is better to use a zero-repeat format like ``'0l'``.
+
+By default, native byte ordering and alignment is used, but it is
+better to be explicit and use the ``'@'`` prefix character.
+
+
+.. _struct-standard-formats:
+
+Standard Formats
+^^^^^^^^^^^^^^^^
+
+When exchanging data beyond your process, such as over a network or in storage,
+be precise. Specify the exact byte order, size, and alignment. Do
+not assume they match the native order of a particular machine.
+For example, network byte order is big-endian, while many popular CPUs
+are little-endian. By defining this explicitly, the user need not
+care about the specifics of the platform their code is running on.
+The first character should typically be ``<`` or ``>``
+(or ``!``). Padding is the responsibility of the programmer. The
+zero-repeat format character won't work. Instead, the user must
+explicitly add ``'x'`` pad bytes where needed. Revisiting the
+examples from the previous section, we have::
+
+ >>> calcsize('<qh6xq')
+ 24
+ >>> pack('<qh6xq', 1, 2, 3) == pack('@lhl', 1, 2, 3)
+ True
+ >>> calcsize('@llh')
+ 18
+ >>> pack('@llh', 1, 2, 3) == pack('<qqh', 1, 2, 3)
+ True
+ >>> calcsize('<qqh6x')
+ 24
+ >>> calcsize('@llh0l')
+ 24
+ >>> pack('@llh0l', 1, 2, 3) == pack('<qqh6x', 1, 2, 3)
+ True
+
+The above results (executed on a 64-bit machine) aren't guaranteed to
+match when executed on different machines. For example, the examples
+below were executed on a 32-bit machine::
+
+ >>> calcsize('<qqh6x')
+ 24
+ >>> calcsize('@llh0l')
+ 12
+ >>> pack('@llh0l', 1, 2, 3) == pack('<qqh6x', 1, 2, 3)
+ False
.. _struct-objects:
.. class:: Struct(format)
Return a new Struct object which writes and reads binary data according to
- the format string *format*. Creating a Struct object once and calling its
- methods is more efficient than calling the :mod:`struct` functions with the
- same format since the format string only needs to be compiled once.
+ the format string *format*. Creating a ``Struct`` object once and calling its
+ methods is more efficient than calling module-level functions with the
+ same format since the format string is only compiled once.
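As a sketch of the compile-once pattern (the record format below is invented for illustration), the ``Struct`` instance parses the format string a single time and then reuses the compiled form for every call:

```python
import struct

# Compile the (hypothetical) record format once: standard sizes,
# little-endian, 8 + 2 + 6 pad + 8 = 24 bytes.
record = struct.Struct('<qh6xq')

# ...then reuse it for every pack/unpack call.
data = record.pack(1, 2, 3)
assert record.size == 24 == len(data)
assert record.unpack(data) == (1, 2, 3)
```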
.. note::
If *env* is not ``None``, it must be a mapping that defines the environment
variables for the new process; these are used instead of the default
- behavior of inheriting the current process' environment. It is passed directly
- to :class:`Popen`.
+ behavior of inheriting the current process' environment. It is passed
+ directly to :class:`Popen`. This mapping can be str to str on any platform
+ or bytes to bytes on POSIX platforms, much like :data:`os.environ` or
+ :data:`os.environb`.
Examples::
If *env* is not ``None``, it must be a mapping that defines the environment
variables for the new process; these are used instead of the default
- behavior of inheriting the current process' environment.
+ behavior of inheriting the current process' environment. This mapping can be
+ str to str on any platform or bytes to bytes on POSIX platforms, much like
+ :data:`os.environ` or :data:`os.environb`.
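A minimal sketch of passing a str-to-str mapping (the ``GREETING`` variable is invented for illustration); the child process sees only what the mapping contains:

```python
import os
import subprocess
import sys

# Build an explicit environment for the child; without env=..., the
# child would inherit the parent's environment instead.
child_env = {**os.environ, 'GREETING': 'hello'}
result = subprocess.run(
    [sys.executable, '-c', 'import os; print(os.environ["GREETING"])'],
    env=child_env, capture_output=True, text=True, check=True,
)
assert result.stdout.strip() == 'hello'
```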
.. note::
can then log the event, raise an exception to abort the operation,
or terminate the process entirely.
+ Note that audit hooks are primarily for collecting information about internal
+ or otherwise unobservable actions, whether by Python or libraries written in
+ Python. They are not suitable for implementing a "sandbox". In particular,
+ malicious code can trivially disable or bypass hooks added using this
+ function. At a minimum, any security-sensitive hooks must be added using the
+ C API :c:func:`PySys_AddAuditHook` before initialising the runtime, and any
+ modules allowing arbitrary memory modification (such as :mod:`ctypes`) should
+ be completely removed or closely monitored.
+
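A short sketch of an observing hook; the ``example.greet`` event name is made up here and raised via :func:`sys.audit`:

```python
import sys

seen = []

def hook(event, args):
    # Hooks receive every audit event; filter early and stay cheap,
    # since a hook can never be removed once added.
    if event == 'example.greet':
        seen.append(args)

sys.addaudithook(hook)
sys.audit('example.greet', 'world')  # raise a custom auditing event
assert seen == [('world',)]
```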
.. audit-event:: sys.addaudithook "" sys.addaudithook
Calling :func:`sys.addaudithook` will itself raise an auditing event
Print low-level information to stderr about the state of CPython's memory
allocator.
- If Python is `built in debug mode <debug-build>` (:option:`configure
+ If Python is :ref:`built in debug mode <debug-build>` (:option:`configure
--with-pydebug option <--with-pydebug>`), it also performs some expensive
internal consistency checks.
files to (and read them from) a parallel directory tree rooted at this
directory, rather than from ``__pycache__`` directories in the source code
tree. Any ``__pycache__`` directories in the source code tree will be ignored
- and new `.pyc` files written within the pycache prefix. Thus if you use
+ and new ``.pyc`` files written within the pycache prefix. Thus if you use
:mod:`compileall` as a pre-build step, you must ensure you run it with the
same pycache prefix (if any) that you will use at runtime.
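A quick way to see the effect (a sketch; the prefix path is hypothetical) is to ask a child interpreter where it would write ``.pyc`` files:

```python
import subprocess
import sys

# -X pycache_prefix=... reroutes bytecode caches away from __pycache__.
out = subprocess.run(
    [sys.executable, '-X', 'pycache_prefix=/tmp/pyc-cache', '-c',
     'import sys; print(sys.pycache_prefix)'],
    capture_output=True, text=True, check=True,
)
assert out.stdout.strip() == '/tmp/pyc-cache'
```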
.. function:: get_asyncgen_hooks()
Returns an *asyncgen_hooks* object, which is similar to a
- :class:`~collections.namedtuple` of the form `(firstiter, finalizer)`,
+ :class:`~collections.namedtuple` of the form ``(firstiter, finalizer)``,
where *firstiter* and *finalizer* are expected to be either ``None`` or
functions which take an :term:`asynchronous generator iterator` as an
argument, and are used to schedule finalization of an asynchronous
.. Other sections I have in mind are
Tkinter internals
- Freezing Tkinter applications
\ No newline at end of file
+ Freezing Tkinter applications
.. seealso::
Module :mod:`tkinter.commondialog`
- Tkinter standard dialog module
\ No newline at end of file
+ Tkinter standard dialog module
.. seealso::
- :ref:`Bindings-and-Events`
\ No newline at end of file
+ :ref:`Bindings-and-Events`
askokcancel(title=None, message=None, **options)
askretrycancel(title=None, message=None, **options)
askyesno(title=None, message=None, **options)
- askyesnocancel(title=None, message=None, **options)
\ No newline at end of file
+ askyesnocancel(title=None, message=None, **options)
The :term:`loader` which loaded the module. Defaults to ``None``.
This attribute is to match :attr:`importlib.machinery.ModuleSpec.loader`
- as stored in the attr:`__spec__` object.
+ as stored in the :attr:`__spec__` object.
.. note::
A future version of Python may stop setting this attribute by default.
:attr:`__name__` if the module is a package itself). Defaults to ``None``.
This attribute is to match :attr:`importlib.machinery.ModuleSpec.parent`
- as stored in the attr:`__spec__` object.
+ as stored in the :attr:`__spec__` object.
.. note::
A future version of Python may stop setting this attribute by default.
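The relationship between these attributes and ``__spec__`` can be sketched like this (using :mod:`json` purely as an example module):

```python
import importlib.util

spec = importlib.util.find_spec('json')
module = importlib.util.module_from_spec(spec)

# module_from_spec() initialises the module attributes from the spec,
# so __loader__ and __spec__.loader agree by construction.
assert module.__spec__ is spec
assert module.__loader__ is spec.loader
assert module.__package__ == spec.parent
```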
def scale(scalar: float, vector: Vector) -> Vector:
return [scalar * num for num in vector]
- # typechecks; a list of floats qualifies as a Vector.
+ # passes type checking; a list of floats qualifies as a Vector.
new_vector = scale(2.0, [1.0, -4.2, 5.4])
Type aliases are useful for simplifying complex type signatures. For example::
def get_user_name(user_id: UserId) -> str:
...
- # typechecks
+ # passes type checking
user_a = get_user_name(UserId(42351))
- # does not typecheck; an int is not a UserId
+ # fails type checking; an int is not a UserId
user_b = get_user_name(-1)
You may still perform all ``int`` operations on a variable of type ``UserId``,
UserId = NewType('UserId', int)
- # Fails at runtime and does not typecheck
+ # Fails at runtime and does not pass type checking
class AdminUserId(UserId): pass
However, it is possible to create a :class:`NewType` based on a 'derived' ``NewType``::
.. versionchanged:: 3.10
``Callable`` now supports :class:`ParamSpec` and :data:`Concatenate`.
- See :pep:`612` for more information.
+ See :pep:`612` for more details.
.. seealso::
The documentation for :class:`ParamSpec` and :class:`Concatenate` provides
class body.
The :class:`Generic` base class defines :meth:`~object.__class_getitem__` so
-that ``LoggedVar[t]`` is valid as a type::
+that ``LoggedVar[T]`` is valid as a type::
from collections.abc import Iterable
s = a # OK
def foo(item: Any) -> int:
- # Typechecks; 'item' could be any type,
+ # Passes type checking; 'item' could be any type,
# and that type might have a 'bar' method
item.bar()
...
-Notice that no typechecking is performed when assigning a value of type
+Notice that no type checking is performed when assigning a value of type
:data:`Any` to a more precise type. For example, the static type checker did
not report an error when assigning ``a`` to ``s`` even though ``s`` was
declared to be of type :class:`str` and receives an :class:`int` value at
it as a return value) of a more specialized type is a type error. For example::
def hash_a(item: object) -> int:
- # Fails; an object does not have a 'magic' method.
+ # Fails type checking; an object does not have a 'magic' method.
item.magic()
...
def hash_b(item: Any) -> int:
- # Typechecks
+ # Passes type checking
item.magic()
...
- # Typechecks, since ints and strs are subclasses of object
+ # Passes type checking, since ints and strs are subclasses of object
hash_a(42)
hash_a("foo")
- # Typechecks, since Any is compatible with all types
+ # Passes type checking, since Any is compatible with all types
hash_b(42)
hash_b("foo")
is equivalent to ``Tuple[Any, ...]``, and in turn to :class:`tuple`.
.. deprecated:: 3.9
- :class:`builtins.tuple <tuple>` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`builtins.tuple <tuple>` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. data:: Union
respectively.
.. deprecated:: 3.9
- :class:`collections.abc.Callable` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`collections.abc.Callable` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. versionchanged:: 3.10
``Callable`` now supports :class:`ParamSpec` and :data:`Concatenate`.
- See :pep:`612` for more information.
+ See :pep:`612` for more details.
.. seealso::
The documentation for :class:`ParamSpec` and :class:`Concatenate` provides
.. versionadded:: 3.5.2
.. deprecated:: 3.9
- :class:`builtins.type <type>` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`builtins.type <type>` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. data:: Literal
is not a subtype of the former, since ``List`` is invariant.
The responsibility of writing type-safe type guards is left to the user.
- ``TypeGuard`` also works with type variables. For more information, see
- :pep:`647` (User-Defined Type Guards).
+ ``TypeGuard`` also works with type variables. See :pep:`647` for more details.
.. versionadded:: 3.10
func(C()) # Passes static type check
- See :pep:`544` for details. Protocol classes decorated with
+ See :pep:`544` for more details. Protocol classes decorated with
:func:`runtime_checkable` (described later) act as simple-minded runtime
protocols that check only the presence of given attributes, ignoring their
type signatures.
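For instance, the runtime check only verifies that the member exists (the ``Closable`` protocol here is an invented example):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Closable(Protocol):
    def close(self) -> None: ...

class Resource:
    def close(self) -> None:  # the signature is NOT checked at runtime
        pass

# isinstance() merely looks for a 'close' attribute on the class.
assert isinstance(Resource(), Closable)
assert not isinstance(object(), Closable)
```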
...
.. deprecated:: 3.9
- :class:`builtins.dict <dict>` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`builtins.dict <dict>` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: List(list, MutableSequence[T])
return [item for item in vector if item > 0]
.. deprecated:: 3.9
- :class:`builtins.list <list>` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`builtins.list <list>` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Set(set, MutableSet[T])
to use an abstract collection type such as :class:`AbstractSet`.
.. deprecated:: 3.9
- :class:`builtins.set <set>` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`builtins.set <set>` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: FrozenSet(frozenset, AbstractSet[T_co])
A generic version of :class:`builtins.frozenset <frozenset>`.
.. deprecated:: 3.9
- :class:`builtins.frozenset <frozenset>` now supports ``[]``. See
- :pep:`585` and :ref:`types-genericalias`.
+ :class:`builtins.frozenset <frozenset>`
+ now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. note:: :data:`Tuple` is a special form.
.. versionadded:: 3.5.2
.. deprecated:: 3.9
- :class:`collections.defaultdict` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`collections.defaultdict` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: OrderedDict(collections.OrderedDict, MutableMapping[KT, VT])
.. versionadded:: 3.7.2
.. deprecated:: 3.9
- :class:`collections.OrderedDict` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`collections.OrderedDict` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: ChainMap(collections.ChainMap, MutableMapping[KT, VT])
.. versionadded:: 3.6.1
.. deprecated:: 3.9
- :class:`collections.ChainMap` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`collections.ChainMap` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Counter(collections.Counter, Dict[T, int])
.. versionadded:: 3.6.1
.. deprecated:: 3.9
- :class:`collections.Counter` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`collections.Counter` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Deque(deque, MutableSequence[T])
.. versionadded:: 3.6.1
.. deprecated:: 3.9
- :class:`collections.deque` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`collections.deque` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
Other concrete types
""""""""""""""""""""
represent the types of I/O streams such as returned by
:func:`open`.
- .. deprecated-removed:: 3.8 3.12
+ .. deprecated-removed:: 3.8 3.13
The ``typing.io`` namespace is deprecated and will be removed.
These types should be directly imported from ``typing`` instead.
``Pattern[str]``, ``Pattern[bytes]``, ``Match[str]``, or
``Match[bytes]``.
- .. deprecated-removed:: 3.8 3.12
+ .. deprecated-removed:: 3.8 3.13
The ``typing.re`` namespace is deprecated and will be removed.
These types should be directly imported from ``typing`` instead.
Corresponding to collections in :mod:`collections.abc`
""""""""""""""""""""""""""""""""""""""""""""""""""""""
-.. class:: AbstractSet(Sized, Collection[T_co])
+.. class:: AbstractSet(Collection[T_co])
A generic version of :class:`collections.abc.Set`.
.. deprecated:: 3.9
- :class:`collections.abc.Set` now supports ``[]``. See :pep:`585` and
- :ref:`types-genericalias`.
+ :class:`collections.abc.Set` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: ByteString(Sequence[int])
annotate arguments of any of the types mentioned above.
.. deprecated:: 3.9
- :class:`collections.abc.ByteString` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.ByteString` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Collection(Sized, Iterable[T_co], Container[T_co])
.. versionadded:: 3.6.0
.. deprecated:: 3.9
- :class:`collections.abc.Collection` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.Collection` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Container(Generic[T_co])
A generic version of :class:`collections.abc.Container`.
.. deprecated:: 3.9
- :class:`collections.abc.Container` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.Container` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
-.. class:: ItemsView(MappingView, Generic[KT_co, VT_co])
+.. class:: ItemsView(MappingView, AbstractSet[tuple[KT_co, VT_co]])
A generic version of :class:`collections.abc.ItemsView`.
.. deprecated:: 3.9
- :class:`collections.abc.ItemsView` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.ItemsView` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
-.. class:: KeysView(MappingView[KT_co], AbstractSet[KT_co])
+.. class:: KeysView(MappingView, AbstractSet[KT_co])
A generic version of :class:`collections.abc.KeysView`.
.. deprecated:: 3.9
- :class:`collections.abc.KeysView` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.KeysView` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
-.. class:: Mapping(Sized, Collection[KT], Generic[VT_co])
+.. class:: Mapping(Collection[KT], Generic[KT, VT_co])
A generic version of :class:`collections.abc.Mapping`.
This type can be used as follows::
return word_list[word]
.. deprecated:: 3.9
- :class:`collections.abc.Mapping` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.Mapping` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
-.. class:: MappingView(Sized, Iterable[T_co])
+.. class:: MappingView(Sized)
A generic version of :class:`collections.abc.MappingView`.
.. deprecated:: 3.9
- :class:`collections.abc.MappingView` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.MappingView` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: MutableMapping(Mapping[KT, VT])
A generic version of :class:`collections.abc.MutableMapping`.
.. deprecated:: 3.9
- :class:`collections.abc.MutableMapping` now supports ``[]``. See
- :pep:`585` and :ref:`types-genericalias`.
+ :class:`collections.abc.MutableMapping`
+ now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: MutableSequence(Sequence[T])
A generic version of :class:`collections.abc.MutableSequence`.
.. deprecated:: 3.9
- :class:`collections.abc.MutableSequence` now supports ``[]``. See
- :pep:`585` and :ref:`types-genericalias`.
+ :class:`collections.abc.MutableSequence`
+ now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: MutableSet(AbstractSet[T])
A generic version of :class:`collections.abc.MutableSet`.
.. deprecated:: 3.9
- :class:`collections.abc.MutableSet` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.MutableSet` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Sequence(Reversible[T_co], Collection[T_co])
A generic version of :class:`collections.abc.Sequence`.
.. deprecated:: 3.9
- :class:`collections.abc.Sequence` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.Sequence` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
-.. class:: ValuesView(MappingView[VT_co])
+.. class:: ValuesView(MappingView, Collection[VT_co])
A generic version of :class:`collections.abc.ValuesView`.
.. deprecated:: 3.9
- :class:`collections.abc.ValuesView` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.ValuesView` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
Corresponding to other types in :mod:`collections.abc`
""""""""""""""""""""""""""""""""""""""""""""""""""""""
A generic version of :class:`collections.abc.Iterable`.
.. deprecated:: 3.9
- :class:`collections.abc.Iterable` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.Iterable` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Iterator(Iterable[T_co])
A generic version of :class:`collections.abc.Iterator`.
.. deprecated:: 3.9
- :class:`collections.abc.Iterator` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.Iterator` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Generator(Iterator[T_co], Generic[T_co, T_contra, V_co])
start += 1
.. deprecated:: 3.9
- :class:`collections.abc.Generator` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.Generator` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Hashable
A generic version of :class:`collections.abc.Reversible`.
.. deprecated:: 3.9
- :class:`collections.abc.Reversible` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.Reversible` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Sized
.. versionadded:: 3.5.3
.. deprecated:: 3.9
- :class:`collections.abc.Coroutine` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.Coroutine` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: AsyncGenerator(AsyncIterator[T_co], Generic[T_co, T_contra])
.. versionadded:: 3.6.1
.. deprecated:: 3.9
- :class:`collections.abc.AsyncGenerator` now supports ``[]``. See
- :pep:`585` and :ref:`types-genericalias`.
+ :class:`collections.abc.AsyncGenerator`
+ now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: AsyncIterable(Generic[T_co])
.. versionadded:: 3.5.2
.. deprecated:: 3.9
- :class:`collections.abc.AsyncIterable` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.AsyncIterable` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: AsyncIterator(AsyncIterable[T_co])
.. versionadded:: 3.5.2
.. deprecated:: 3.9
- :class:`collections.abc.AsyncIterator` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.AsyncIterator` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: Awaitable(Generic[T_co])
.. versionadded:: 3.5.2
.. deprecated:: 3.9
- :class:`collections.abc.Awaitable` now supports ``[]``. See :pep:`585`
- and :ref:`types-genericalias`.
+ :class:`collections.abc.Awaitable` now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
Context manager types
.. versionadded:: 3.6.0
.. deprecated:: 3.9
- :class:`contextlib.AbstractContextManager` now supports ``[]``. See
- :pep:`585` and :ref:`types-genericalias`.
+ :class:`contextlib.AbstractContextManager`
+ now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
.. class:: AsyncContextManager(Generic[T_co])
.. versionadded:: 3.6.2
.. deprecated:: 3.9
- :class:`contextlib.AbstractAsyncContextManager` now supports ``[]``. See
- :pep:`585` and :ref:`types-genericalias`.
+ :class:`contextlib.AbstractAsyncContextManager`
+ now supports subscripting (``[]``).
+ See :pep:`585` and :ref:`types-genericalias`.
Protocols
---------
def process(response):
<actual implementation>
- See :pep:`484` for details and comparison with other typing semantics.
+ See :pep:`484` for more details and comparison with other typing semantics.
.. decorator:: final
That aside there is a way to use ``mock`` to affect the results of an import.
Importing fetches an *object* from the :data:`sys.modules` dictionary. Note that it
fetches an *object*, which need not be a module. Importing a module for the
-first time results in a module object being put in `sys.modules`, so usually
+first time results in a module object being put in ``sys.modules``, so usually
when you import something you get a module back. This need not be the case
however.
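A sketch of that idea with :func:`unittest.mock.patch.dict` (the module name ``fake_module`` is invented):

```python
import sys
from unittest import mock

fake = mock.MagicMock()

# Whatever object sits in sys.modules under a name is what 'import'
# returns for that name -- it does not have to be a real module.
with mock.patch.dict(sys.modules, {'fake_module': fake}):
    import fake_module
    assert fake_module is fake

# patch.dict restored sys.modules on exit.
assert 'fake_module' not in sys.modules
```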
--------------
-The :mod:`venv` module provides support for creating lightweight "virtual
-environments" with their own site directories, optionally isolated from system
-site directories. Each virtual environment has its own Python binary (which
-matches the version of the binary that was used to create this environment) and
-can have its own independent set of installed Python packages in its site
-directories.
+.. _venv-def:
+.. _venv-intro:
+
+The :mod:`!venv` module supports creating lightweight "virtual environments",
+each with their own independent set of Python packages installed in
+their :mod:`site` directories.
+A virtual environment is created on top of an existing
+Python installation, known as the virtual environment's "base" Python, and may
+optionally be isolated from the packages in the base environment,
+so only those explicitly installed in the virtual environment are available.
+
+When used from within a virtual environment, common installation tools such as
+`pip`_ will install Python packages into a virtual environment
+without needing to be told to do so explicitly.
-See :pep:`405` for more information about Python virtual environments.
+See :pep:`405` for more background on Python virtual environments.
.. seealso::
`Python Packaging User Guide: Creating and using virtual environments
<https://packaging.python.org/guides/installing-using-pip-and-virtual-environments/#creating-a-virtual-environment>`__
-
Creating virtual environments
-----------------------------
.. include:: /using/venv-create.inc
+.. _venv-explanation:
-.. _venv-def:
+How venvs work
+--------------
-.. note:: A virtual environment is a Python environment such that the Python
- interpreter, libraries and scripts installed into it are isolated from those
- installed in other virtual environments, and (by default) any libraries
- installed in a "system" Python, i.e., one which is installed as part of your
- operating system.
-
- A virtual environment is a directory tree which contains Python executable
- files and other files which indicate that it is a virtual environment.
-
- Common installation tools such as setuptools_ and pip_ work as
- expected with virtual environments. In other words, when a virtual
- environment is active, they install Python packages into the virtual
- environment without needing to be told to do so explicitly.
-
- When a virtual environment is active (i.e., the virtual environment's Python
- interpreter is running), the attributes :attr:`sys.prefix` and
- :attr:`sys.exec_prefix` point to the base directory of the virtual
- environment, whereas :attr:`sys.base_prefix` and
- :attr:`sys.base_exec_prefix` point to the non-virtual environment Python
- installation which was used to create the virtual environment. If a virtual
- environment is not active, then :attr:`sys.prefix` is the same as
- :attr:`sys.base_prefix` and :attr:`sys.exec_prefix` is the same as
- :attr:`sys.base_exec_prefix` (they all point to a non-virtual environment
- Python installation).
-
- When a virtual environment is active, any options that change the
- installation path will be ignored from all :mod:`distutils` configuration
- files to prevent projects being inadvertently installed outside of the
- virtual environment.
-
- When working in a command shell, users can make a virtual environment active
- by running an ``activate`` script in the virtual environment's executables
- directory (the precise filename and command to use the file is
- shell-dependent), which prepends the virtual environment's directory for
- executables to the ``PATH`` environment variable for the running shell. There
- should be no need in other circumstances to activate a virtual
- environment; scripts installed into virtual environments have a "shebang"
- line which points to the virtual environment's Python interpreter. This means
- that the script will run with that interpreter regardless of the value of
- ``PATH``. On Windows, "shebang" line processing is supported if you have the
- Python Launcher for Windows installed (this was added to Python in 3.3 - see
- :pep:`397` for more details). Thus, double-clicking an installed script in a
- Windows Explorer window should run the script with the correct interpreter
- without there needing to be any reference to its virtual environment in
- ``PATH``.
+When a Python interpreter is running from a virtual environment,
+:data:`sys.prefix` and :data:`sys.exec_prefix`
+point to the directories of the virtual environment,
+whereas :data:`sys.base_prefix` and :data:`sys.base_exec_prefix`
+point to those of the base Python used to create the environment.
+It is sufficient to check
+``sys.prefix != sys.base_prefix`` to determine if the current interpreter is
+running from a virtual environment.
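That check can be wrapped in a tiny helper (a sketch; the function name is invented):

```python
import sys

def in_virtual_environment() -> bool:
    # Inside a venv, sys.prefix points at the environment while
    # sys.base_prefix still points at the base installation.
    return sys.prefix != sys.base_prefix

print(in_virtual_environment())
```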
+
+A virtual environment may be "activated" using a script in its binary directory
+(``bin`` on POSIX; ``Scripts`` on Windows).
+This will prepend that directory to your :envvar:`!PATH`, so that running
+:program:`!python` will invoke the environment's Python interpreter
+and you can run installed scripts without having to use their full path.
+The invocation of the activation script is platform-specific
+(:samp:`{<venv>}` must be replaced by the path to the directory
+containing the virtual environment):
+
++-------------+------------+--------------------------------------------------+
+| Platform | Shell | Command to activate virtual environment |
++=============+============+==================================================+
+| POSIX | bash/zsh | :samp:`$ source {<venv>}/bin/activate` |
+| +------------+--------------------------------------------------+
+| | fish | :samp:`$ source {<venv>}/bin/activate.fish` |
+| +------------+--------------------------------------------------+
+| | csh/tcsh | :samp:`$ source {<venv>}/bin/activate.csh` |
+| +------------+--------------------------------------------------+
+| | PowerShell | :samp:`$ {<venv>}/bin/Activate.ps1` |
++-------------+------------+--------------------------------------------------+
+| Windows | cmd.exe | :samp:`C:\\> {<venv>}\\Scripts\\activate.bat` |
+| +------------+--------------------------------------------------+
+| | PowerShell | :samp:`PS C:\\> {<venv>}\\Scripts\\Activate.ps1` |
++-------------+------------+--------------------------------------------------+
+
+.. versionadded:: 3.4
+ :program:`!fish` and :program:`!csh` activation scripts.
+
+.. versionadded:: 3.8
+ PowerShell activation scripts installed under POSIX for PowerShell Core
+ support.
+
+You don't specifically *need* to activate a virtual environment,
+as you can just specify the full path to that environment's
+Python interpreter when invoking Python.
+Furthermore, all scripts installed in the environment
+should be runnable without activating it.
+
+In order to achieve this, scripts installed into virtual environments have
+a "shebang" line which points to the environment's Python interpreter,
+i.e. :samp:`#!/{<path-to-venv>}/bin/python`.
+This means that the script will run with that interpreter regardless of the
+value of :envvar:`!PATH`. On Windows, "shebang" line processing is supported if
+you have the :ref:`launcher` installed. Thus, double-clicking an installed
+script in a Windows Explorer window should run it with the correct interpreter
+without the environment needing to be activated or on the :envvar:`!PATH`.
+
+When a virtual environment has been activated, the :envvar:`!VIRTUAL_ENV`
+environment variable is set to the path of the environment.
+Since explicitly activating a virtual environment is not required to use it,
+:envvar:`!VIRTUAL_ENV` cannot be relied upon to determine
+whether a virtual environment is being used.
.. warning:: Because scripts installed in environments should not expect the
environment to be activated, their shebang lines contain the absolute paths
environment in its new location. Otherwise, software installed into the
environment may not work as expected.
+You can deactivate a virtual environment by typing ``deactivate`` in your shell.
+The exact mechanism is platform-specific and is an internal implementation
+detail (typically, a script or shell function will be used).
+
+
.. _venv-api:
API
.. method:: ensure_directories(env_dir)
- Creates the environment directory and all necessary directories, and
- returns a context object. This is just a holder for attributes (such as
- paths), for use by the other methods. The directories are allowed to
- exist already, as long as either ``clear`` or ``upgrade`` were
- specified to allow operating on an existing environment directory.
+ Creates the environment directory and all necessary subdirectories that
+ don't already exist, and returns a context object. This context object
+ is just a holder for attributes (such as paths) for use by the other
+ methods. If the :class:`EnvBuilder` is created with the argument
+ ``clear=True``, the contents of the environment directory will be
+ cleared and then all necessary subdirectories will be recreated.
+
+ The returned context object is a :class:`types.SimpleNamespace` with the
+ following attributes:
+
+ * ``env_dir`` - The location of the virtual environment. Used for
+ ``__VENV_DIR__`` in activation scripts (see :meth:`install_scripts`).
+
+ * ``env_name`` - The name of the virtual environment. Used for
+ ``__VENV_NAME__`` in activation scripts (see :meth:`install_scripts`).
+
+ * ``prompt`` - The prompt to be used by the activation scripts. Used for
+ ``__VENV_PROMPT__`` in activation scripts (see :meth:`install_scripts`).
+
+ * ``executable`` - The underlying Python executable used by the virtual
+ environment. This takes into account the case where a virtual environment
+ is created from another virtual environment.
+
+ * ``inc_path`` - The include path for the virtual environment.
+
+ * ``lib_path`` - The purelib path for the virtual environment.
+
+ * ``bin_path`` - The script path for the virtual environment.
+
+ * ``bin_name`` - The name of the script path relative to the virtual
+ environment location. Used for ``__VENV_BIN_NAME__`` in activation
+ scripts (see :meth:`install_scripts`).
+
+ * ``env_exe`` - The name of the Python interpreter in the virtual
+ environment. Used for ``__VENV_PYTHON__`` in activation scripts
+ (see :meth:`install_scripts`).
+
+ * ``env_exec_cmd`` - The name of the Python interpreter, taking into
+ account filesystem redirections. This can be used to run Python in
+ the virtual environment.
+
+
+ .. versionchanged:: 3.12
+ The attribute ``lib_path`` was added to the context, and the context
+ object was documented.
+
+ .. versionchanged:: 3.11
+ The *venv*
+ :ref:`sysconfig installation scheme <installation_paths>`
+ is used to construct the paths of the created directories.
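The context object described above can be inspected without building a full environment, since :meth:`ensure_directories` only creates the directory skeleton. A minimal sketch (note that ``lib_path`` is only present on Python 3.12+, so it is not checked here):

```python
import os
import tempfile
import venv

# Create only the directory skeleton and inspect the returned context.
builder = venv.EnvBuilder()
with tempfile.TemporaryDirectory() as tmp:
    env_dir = os.path.join(tmp, "demo-env")
    ctx = builder.ensure_directories(env_dir)
    # The context is a SimpleNamespace holding the computed paths.
    assert ctx.env_name == "demo-env"
    assert os.path.isdir(ctx.bin_path)
    assert ctx.env_exe.startswith(ctx.env_dir)
```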
.. method:: create_configuration(context)
prevent their use as dictionary keys. *callback* is the same as the parameter
of the same name to the :func:`ref` function.
+ Accessing an attribute of the proxy object after the referent is
+ garbage collected raises :exc:`ReferenceError`.
+
.. versionchanged:: 3.8
Extended the operator support on proxy objects to include the matrix
multiplication operators ``@`` and ``@=``.
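A minimal illustration of the :exc:`ReferenceError` behaviour described above:

```python
import gc
import weakref

class Resource:
    pass

obj = Resource()
proxy = weakref.proxy(obj)
assert isinstance(proxy, weakref.ProxyTypes)  # usable while obj is alive

del obj
gc.collect()

# The referent is gone, so attribute access now raises ReferenceError.
try:
    proxy.anything
    died = False
except ReferenceError:
    died = True
assert died
```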
.. moduleauthor:: Phillip J. Eby <pje@telecommunity.com>
.. sectionauthor:: Phillip J. Eby <pje@telecommunity.com>
+**Source code:** :source:`Lib/wsgiref`
+
--------------
The Web Server Gateway Interface (WSGI) is a standard interface between web
Similarly, explicitly stating the *standalone* argument causes the
standalone document declarations to be added to the prologue of the XML
document.
- If the value is set to `True`, `standalone="yes"` is added,
- otherwise it is set to `"no"`.
+ If the value is set to ``True``, ``standalone="yes"`` is added,
+ otherwise it is set to ``"no"``.
Not stating the argument will omit the declaration from the document.
.. versionchanged:: 3.8
may be passed to calls.
The *headers* parameter is an optional sequence of HTTP headers to send with
each request, expressed as a sequence of 2-tuples representing the header
- name and value. (e.g. `[('Header-Name', 'value')]`).
+ name and value, e.g. ``[('Header-Name', 'value')]``.
The obsolete *use_datetime* flag is similar to *use_builtin_types* but it
applies only to date/time values.
The client that interacts with the above server is included in
-`Lib/xmlrpc/client.py`::
+``Lib/xmlrpc/client.py``::
server = ServerProxy("http://localhost:8000")
.. moduleauthor:: Paul Ganssle <paul@ganssle.io>
.. sectionauthor:: Paul Ganssle <paul@ganssle.io>
+**Source code:** :source:`Lib/zoneinfo`
+
--------------
The :mod:`zoneinfo` module provides a concrete time zone implementation to
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+Audioop
+-------
+
+The audioop module uses the code base in the g771.c file of the SoX project::
+
+ Programming the AdLib/Sound Blaster
+ FM Music Chips
+ Version 2.0 (24 Feb 1992)
+ Copyright (c) 1991, 1992 by Jeffrey S. Lee
+ jlee@smylex.uucp
+ Warranty and Copyright Policy
+ This document is provided on an "as-is" basis, and its author makes
+ no warranty or representation, express or implied, with respect to
+ its quality performance or fitness for a particular purpose. In no
+ event will the author of this document be liable for direct, indirect,
+ special, incidental, or consequential damages arising out of the use
+ or inability to use the information contained within. Use of this
+ document is at your own risk.
+ This file may be used and copied freely so long as the applicable
+ copyright notices are retained, and no modifications are made to the
+ text of the document. No money shall be charged for its distribution
+ beyond reasonable shipping, handling and duplication costs, nor shall
+ proprietary changes be made to this document so that it cannot be
+ distributed freely. This document may not be included in published
+ material or commercial packages without the written consent of its
+ author.
keyword: if
keyword: as
pair: match; case
+ single: as; match statement
single: : (colon); compound statement
.. versionadded:: 3.10
and *__weakref__* for each instance.
+.. _datamodel-note-slots:
+
Notes on using *__slots__*
""""""""""""""""""""""""""
When using a class name in a pattern, positional arguments in the pattern are not
allowed by default, i.e. ``case MyClass(x, y)`` is typically invalid without special
-support in ``MyClass``. To be able to use that kind of patterns, the class needs to
+support in ``MyClass``. To be able to use that kind of pattern, the class needs to
define a *__match_args__* attribute.
.. data:: object.__match_args__
true).
* Mappings (instances of :class:`dict`) compare equal if and only if they have
- equal `(key, value)` pairs. Equality comparison of the keys and values
+ equal ``(key, value)`` pairs. Equality comparison of the keys and values
enforces reflexivity.
Order comparisons (``<``, ``>``, ``<=``, and ``>=``) raise :exc:`TypeError`.
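Both rules can be demonstrated directly:

```python
# Equality ignores insertion order; only the (key, value) pairs matter.
assert {"a": 1, "b": 2} == {"b": 2, "a": 1}
assert {"a": 1} != {"a": 2}

# Order comparisons are not defined for dicts and raise TypeError.
try:
    {"a": 1} < {"a": 2}
    ordered = True
except TypeError:
    ordered = False
assert not ordered
```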
In particular, ``&`` followed by a symbol, token or parenthesized
group indicates a positive lookahead (i.e., is required to match but
not consumed), while ``!`` indicates a negative lookahead (i.e., is
-required _not_ to match). We use the ``|`` separator to mean PEG's
+required *not* to match). We use the ``|`` separator to mean PEG's
"ordered choice" (written as ``/`` in traditional PEG grammars). See
:pep:`617` for more details on the grammar's syntax.
for each of these, looks for an appropriate :term:`path entry finder`
(:class:`~importlib.abc.PathEntryFinder`) for the
path entry. Because this can be an expensive operation (e.g. there may be
-`stat()` call overheads for this search), the path based finder maintains
+``stat()`` call overheads for this search), the path based finder maintains
a cache mapping path entries to path entry finders. This cache is maintained
in :data:`sys.path_importer_cache` (despite the name, this cache actually
stores finder objects rather than being limited to :term:`importer` objects).
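The cache can be inspected directly; after any successful import it maps the path entries that have been searched to their finders:

```python
import sys
import json  # any successful import populates the cache

# Each searched sys.path entry maps to the finder that handles it
# (or to None where no finder could handle the entry).
assert isinstance(sys.path_importer_cache, dict)
entry = next(iter(sys.path_importer_cache))
assert isinstance(entry, str)
```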
annotated_assignment_stmt: `augtarget` ":" `expression`
: ["=" (`starred_expression` | `yield_expression`)]
-The difference from normal :ref:`assignment` is that only single target is allowed.
+The difference from normal :ref:`assignment` is that only a single target is allowed.
For simple names as assignment targets, if in class or module scope,
the annotations are evaluated and stored in a special class or module
IDEs.
.. versionchanged:: 3.8
- Now annotated assignments allow same expressions in the right hand side as
- the regular assignments. Previously, some expressions (like un-parenthesized
+ Now annotated assignments allow the same expressions on the right-hand side as
+ regular assignments. Previously, some expressions (like un-parenthesized
tuple expressions) caused a syntax error.
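A short sketch of annotated assignment at class scope, including the un-parenthesized tuple form allowed since 3.8:

```python
class Config:
    name: str = "default"
    # Since Python 3.8 the right-hand side may be an un-parenthesized tuple:
    pair: tuple = 1, 2

# The annotations are evaluated and stored on the class.
assert Config.pair == (1, 2)
assert Config.__annotations__ == {"name": str, "pair": tuple}
```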
as though the clauses had been separated out into individual import
statements.
-The details of the first step, finding and loading modules are described in
+The details of the first step, finding and loading modules, are described in
greater detail in the section on the :ref:`import system <importsystem>`,
which also describes the various types of packages and modules that can
be imported, as well as all the hooks that can be used to customize
.. productionlist:: python-grammar
nonlocal_stmt: "nonlocal" `identifier` ("," `identifier`)*
-.. XXX add when implemented
- : ["=" (`target_list` "=")+ starred_expression]
- : | "nonlocal" identifier augop expression_list
-
The :keyword:`nonlocal` statement causes the listed identifiers to refer to
previously bound variables in the nearest enclosing scope excluding globals.
This is important because the default behavior for binding is to search the
local namespace first. The statement allows encapsulated code to rebind
variables outside of the local scope besides the global (module) scope.
-.. XXX not implemented
- The :keyword:`nonlocal` statement may prepend an assignment or augmented
- assignment, but not an expression.
-
Names listed in a :keyword:`nonlocal` statement, unlike those listed in a
:keyword:`global` statement, must refer to pre-existing bindings in an
enclosing scope (the scope in which a new binding should be created cannot
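The rebinding behaviour can be sketched with a simple closure:

```python
def make_counter():
    count = 0

    def bump():
        # Rebind ``count`` in the nearest enclosing (non-global) scope;
        # without ``nonlocal`` the assignment would create a new local.
        nonlocal count
        count += 1
        return count

    return bump

counter = make_counter()
assert [counter(), counter(), counter()] == [1, 2, 3]
```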
from sphinx.errors import NoUri
except ImportError:
from sphinx.environment import NoUri
-from sphinx.locale import translators
+from sphinx.locale import _ as sphinx_gettext
from sphinx.util import status_iterator, logging
from sphinx.util.nodes import split_explicit_title
from sphinx.writers.text import TextWriter, TextTranslator
def run(self):
self.assert_has_content()
pnode = nodes.compound(classes=['impl-detail'])
- label = translators['sphinx'].gettext(self.label_text)
+ label = sphinx_gettext(self.label_text)
content = self.content
add_text = nodes.strong(label, label)
self.state.nested_parse(content, self.content_offset, pnode)
else:
args = []
- label = translators['sphinx'].gettext(self._label[min(2, len(args))])
+ label = sphinx_gettext(self._label[min(2, len(args))])
text = label.format(name="``{}``".format(name),
args=", ".join("``{}``".format(a) for a in args if a))
else:
label = self._removed_label
- label = translators['sphinx'].gettext(label)
+ label = sphinx_gettext(label)
text = label.format(deprecated=self.arguments[0], removed=self.arguments[1])
if len(self.arguments) == 3:
inodes, messages = self.state.inline_text(self.arguments[2],
You might have noticed that methods like ``insert``, ``remove`` or ``sort`` that
only modify the list have no return value printed -- they return the default
-``None``. [1]_ This is a design principle for all mutable data structures in
+``None``. [#]_ This is a design principle for all mutable data structures in
Python.
Another thing you might notice is that not all data can be sorted or
.. rubric:: Footnotes
-.. [1] Other languages may return the mutated object, which allows method
+.. [#] Other languages may return the mutated object, which allows method
chaining, such as ``d->insert("a")->remove("b")->sort();``.
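The design principle above is easy to check interactively:

```python
lst = [3, 1, 2]
result = lst.sort()      # mutates the list in place
assert result is None    # ... and returns the default None
assert lst == [1, 2, 3]

assert lst.append(4) is None  # same principle for append()
assert lst == [1, 2, 3, 4]
```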
which executes the statement(s) in *command*, analogous to the shell's
:option:`-c` option. Since Python statements often contain spaces or other
characters that are special to the shell, it is usually advised to quote
-*command* in its entirety with single quotes.
+*command* in its entirety.
Some Python modules are also useful as scripts. These can be invoked using
``python -m module [arg] ...``, which executes the source file for *module* as
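Both invocation styles can be sketched programmatically via :mod:`subprocess` (shown here instead of a shell session; passing the command as a single argv element side-steps shell quoting entirely):

```python
import subprocess
import sys

# Equivalent of running ``python -c "command"``.
proc = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.version_info[0])"],
    capture_output=True, text=True,
)
assert proc.stdout.strip() == "3"

# Equivalent of ``python -m module``; json.tool pretty-prints stdin.
proc = subprocess.run(
    [sys.executable, "-m", "json.tool"],
    input='{"ok": true}', capture_output=True, text=True,
)
assert '"ok": true' in proc.stdout
```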
In particular, :envvar:`CFLAGS` should not contain:
- * the compiler flag `-I` (for setting the search path for include files).
- The `-I` flags are processed from left to right, and any flags in
- :envvar:`CFLAGS` would take precedence over user- and package-supplied `-I`
+ * the compiler flag ``-I`` (for setting the search path for include files).
+ The ``-I`` flags are processed from left to right, and any flags in
+ :envvar:`CFLAGS` would take precedence over user- and package-supplied ``-I``
flags.
- * hardening flags such as `-Werror` because distributions cannot control
+ * hardening flags such as ``-Werror`` because distributions cannot control
whether packages installed by users conform to such heightened
standards.
In particular, :envvar:`LDFLAGS` should not contain:
- * the compiler flag `-L` (for setting the search path for libraries).
- The `-L` flags are processed from left to right, and any flags in
- :envvar:`LDFLAGS` would take precedence over user- and package-supplied `-L`
+ * the compiler flag ``-L`` (for setting the search path for libraries).
+ The ``-L`` flags are processed from left to right, and any flags in
+ :envvar:`LDFLAGS` would take precedence over user- and package-supplied ``-L``
flags.
.. envvar:: CONFIGURE_LDFLAGS_NODIST
$ popd
3. Build Python with custom OpenSSL
- (see the configure `--with-openssl` and `--with-openssl-rpath` options)
+ (see the configure ``--with-openssl`` and ``--with-openssl-rpath`` options)
.. code-block:: shell-session
.. deprecated:: 3.6
``pyvenv`` was the recommended tool for creating virtual environments for
- Python 3.3 and 3.4, and is `deprecated in Python 3.6
- <https://docs.python.org/dev/whatsnew/3.6.html#id8>`_.
+ Python 3.3 and 3.4, and is
+ :ref:`deprecated in Python 3.6 <whatsnew36-venv>`.
.. versionchanged:: 3.5
The use of ``venv`` is now recommended for creating virtual environments.
environment will be created, according to the given options, at each provided
path.
-Once a virtual environment has been created, it can be "activated" using a
-script in the virtual environment's binary directory. The invocation of the
-script is platform-specific (`<venv>` must be replaced by the path of the
-directory containing the virtual environment):
-
-+-------------+-----------------+-----------------------------------------+
-| Platform | Shell | Command to activate virtual environment |
-+=============+=================+=========================================+
-| POSIX | bash/zsh | $ source <venv>/bin/activate |
-+-------------+-----------------+-----------------------------------------+
-| | fish | $ source <venv>/bin/activate.fish |
-+-------------+-----------------+-----------------------------------------+
-| | csh/tcsh | $ source <venv>/bin/activate.csh |
-+-------------+-----------------+-----------------------------------------+
-| | PowerShell Core | $ <venv>/bin/Activate.ps1 |
-+-------------+-----------------+-----------------------------------------+
-| Windows | cmd.exe | C:\\> <venv>\\Scripts\\activate.bat |
-+-------------+-----------------+-----------------------------------------+
-| | PowerShell | PS C:\\> <venv>\\Scripts\\Activate.ps1 |
-+-------------+-----------------+-----------------------------------------+
-
-When a virtual environment is active, the :envvar:`VIRTUAL_ENV` environment
-variable is set to the path of the virtual environment. This can be used to
-check if one is running inside a virtual environment.
-
-You don't specifically *need* to activate an environment; activation just
-prepends the virtual environment's binary directory to your path, so that
-"python" invokes the virtual environment's Python interpreter and you can run
-installed scripts without having to use their full path. However, all scripts
-installed in a virtual environment should be runnable without activating it,
-and run with the virtual environment's Python automatically.
-
-You can deactivate a virtual environment by typing "deactivate" in your shell.
-The exact mechanism is platform-specific and is an internal implementation
-detail (typically a script or shell function will be used).
-
-.. versionadded:: 3.4
- ``fish`` and ``csh`` activation scripts.
-
-.. versionadded:: 3.8
- PowerShell activation scripts installed under POSIX for PowerShell Core
- support.
+---------------------------+--------------------------------------+--------------------------+
| Include_pip | Install bundled pip and setuptools | 1 |
+---------------------------+--------------------------------------+--------------------------+
-| Include_symbols | Install debugging symbols (`*`.pdb) | 0 |
+| Include_symbols | Install debugging symbols (``*.pdb``)| 0 |
+---------------------------+--------------------------------------+--------------------------+
| Include_tcltk | Install Tcl/Tk support and IDLE | 1 |
+---------------------------+--------------------------------------+--------------------------+
of an explanation to start you programming, but many details have been
simplified or ignored. Where should you go to get a more complete picture?
-https://docs.python.org/dev/howto/descriptor.html is a lengthy tutorial introduction to
+The :ref:`descriptorhowto` is a lengthy tutorial introduction to
the descriptor features, written by Guido van Rossum. If my description has
whetted your appetite, go read this tutorial next, because it goes into much
more detail about the new features while still remaining quite easy to read.
PEP 3101: Advanced String Formatting
=====================================================
-In Python 3.0, the `%` operator is supplemented by a more powerful string
+In Python 3.0, the ``%`` operator is supplemented by a more powerful string
formatting method, :meth:`format`. Support for the :meth:`str.format` method
has been backported to Python 2.6.
-In 2.6, both 8-bit and Unicode strings have a `.format()` method that
+In 2.6, both 8-bit and Unicode strings have a ``.format()`` method that
treats the string as a template and takes the arguments to be formatted.
-The formatting template uses curly brackets (`{`, `}`) as special characters::
+The formatting template uses curly brackets (``{``, ``}``) as special characters::
>>> # Substitute positional argument 0 into the string.
>>> "User ID: {0}".format("root")
* The ElementTree library, :mod:`xml.etree`, no longer escapes
ampersands and angle brackets when outputting an XML processing
- instruction (which looks like `<?xml-stylesheet href="#style1"?>`)
- or comment (which looks like `<!-- comment -->`).
+ instruction (which looks like ``<?xml-stylesheet href="#style1"?>``)
+ or comment (which looks like ``<!-- comment -->``).
(Patch by Neil Muller; :issue:`2746`.)
* The :meth:`~StringIO.StringIO.readline` method of :class:`~StringIO.StringIO` objects now does
New typing features:
* :pep:`604`, Allow writing union types as X | Y
-* :pep:`613`, Explicit Type Aliases
* :pep:`612`, Parameter Specification Variables
+* :pep:`613`, Explicit Type Aliases
+* :pep:`647`, User-Defined Type Guards
Important deprecations, removals or restrictions:
New in 3.10 maintenance releases.
-Apply syntax highlighting to `.pyi` files. (Contributed by Alex
+Apply syntax highlighting to ``.pyi`` files. (Contributed by Alex
Waygood and Terry Jan Reedy in :issue:`45447`.)
Include prompts when saving Shell with inputs and outputs.
* The ``PY_SSIZE_T_CLEAN`` macro must now be defined to use
:c:func:`PyArg_ParseTuple` and :c:func:`Py_BuildValue` formats which use
``#``: ``es#``, ``et#``, ``s#``, ``u#``, ``y#``, ``z#``, ``U#`` and ``Z#``.
- See :ref:`Parsing arguments and building values
- <arg-parsing>` and the :pep:`353`.
+ See :ref:`arg-parsing` and :pep:`353`.
(Contributed by Victor Stinner in :issue:`40943`.)
* Since :c:func:`Py_REFCNT()` is changed to the inline static function,
:c:func:`Py_GetProgramFullPath`, :c:func:`Py_GetPythonHome` and
:c:func:`Py_GetProgramName` functions now return ``NULL`` if called before
:c:func:`Py_Initialize` (before Python is initialized). Use the new
- :ref:`Python Initialization Configuration API <init-config>` to get the
- :ref:`Python Path Configuration. <init-path-config>`.
+ :ref:`init-config` API to get the :ref:`init-path-config`.
(Contributed by Victor Stinner in :issue:`42260`.)
* :c:func:`PyList_SET_ITEM`, :c:func:`PyTuple_SET_ITEM` and
``picklebufobject.h``, ``pyarena.h``, ``pyctype.h``, ``pydebug.h``,
``pyfpe.h``, and ``pytime.h`` have been moved to the ``Include/cpython``
directory. These files must not be included directly, as they are already
- included in ``Python.h``: :ref:`Include Files <api-includes>`. If they have
+ included in ``Python.h``; see :ref:`api-includes`. If they have
been included directly, consider including ``Python.h`` instead.
(Contributed by Nicholas Sim in :issue:`35134`.)
instead of module names for running specific tests (:issue:`10620`). The new
test discovery can find tests within packages, locating any test importable
from the top-level directory. The top-level directory can be specified with
- the `-t` option, a pattern for matching files with ``-p``, and a directory to
+ the ``-t`` option, a pattern for matching files with ``-p``, and a directory to
start discovery with ``-s``:
.. code-block:: shell-session
:class:`asyncore.dispatcher` now provides a
:meth:`~asyncore.dispatcher.handle_accepted()` method
-returning a `(sock, addr)` pair which is called when a connection has actually
+returning a ``(sock, addr)`` pair which is called when a connection has actually
been established with a new remote endpoint. This is supposed to be used as a
replacement for old :meth:`~asyncore.dispatcher.handle_accept()` and avoids
the user to call :meth:`~asyncore.dispatcher.accept()` directly.
:attr:`sys.path_importer_cache` where it represents the use of implicit
finders, but semantically it should not change anything.
-* :class:`importlib.abc.Finder` no longer specifies a `find_module()` abstract
+* :class:`importlib.abc.Finder` no longer specifies a ``find_module()`` abstract
method that must be implemented. If you were relying on subclasses to
implement that method, make sure to check for the method's existence first.
- You will probably want to check for `find_loader()` first, though, in the
+ You will probably want to check for ``find_loader()`` first, though, in the
case of working with :term:`path entry finders <path entry finder>`.
* :mod:`pkgutil` has been converted to use :mod:`importlib` internally. This
``opt-`` tag in ``.pyc`` file names. The
:func:`importlib.util.cache_from_source` has gained an *optimization*
parameter to help control the ``opt-`` tag. Because of this, the
- *debug_override* parameter of the function is now deprecated. `.pyo` files
+ *debug_override* parameter of the function is now deprecated. ``.pyo`` files
are also no longer supported as a file argument to the Python interpreter and
thus serve no purpose when distributed on their own (i.e. sourceless code
distribution). Due to the fact that the magic number for bytecode has changed
- in Python 3.5, all old `.pyo` files from previous versions of Python are
+ in Python 3.5, all old ``.pyo`` files from previous versions of Python are
invalid regardless of this PEP.
* The :mod:`socket` module now exports the :data:`~socket.CAN_RAW_FD_FRAMES`
The :class:`contextlib.AbstractContextManager` class has been added to
provide an abstract base class for context managers. It provides a
-sensible default implementation for `__enter__()` which returns
-``self`` and leaves `__exit__()` an abstract method. A matching
+sensible default implementation for ``__enter__()`` which returns
+``self`` and leaves ``__exit__()`` an abstract method. A matching
class has been added to the :mod:`typing` module as
:class:`typing.ContextManager`.
(Contributed by Brett Cannon in :issue:`25609`.)
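A minimal sketch of subclassing the new base class; only ``__exit__`` has to be supplied:

```python
from contextlib import AbstractContextManager

class Tracked(AbstractContextManager):
    # The inherited default __enter__ returns self.
    def __exit__(self, exc_type, exc, tb):
        return None  # do not suppress exceptions

with Tracked() as cm:
    entered = cm

assert isinstance(entered, Tracked)
```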
site
----
-When specifying paths to add to :attr:`sys.path` in a `.pth` file,
+When specifying paths to add to :attr:`sys.path` in a ``.pth`` file,
you may now specify file paths on top of directories (e.g. zip files).
(Contributed by Wolfgang Langner in :issue:`26587`).
Victor Stinner.)
New Linux constants ``TCP_USER_TIMEOUT`` and ``TCP_CONGESTION`` were added.
-(Contributed by Omar Sandoval, issue:`26273`).
+(Contributed by Omar Sandoval, :issue:`26273`).
socketserver
The :mod:`tkinter.tix` module is now deprecated. :mod:`tkinter` users
should use :mod:`tkinter.ttk` instead.
+.. _whatsnew36-venv:
+
venv
~~~~
* :c:func:`PySys_AddWarnOptionUnicode` is not currently usable by embedding
applications due to the requirement to create a Unicode object prior to
- calling `Py_Initialize`. Use :c:func:`PySys_AddWarnOption` instead.
+ calling ``Py_Initialize``. Use :c:func:`PySys_AddWarnOption` instead.
* warnings filters added by an embedding application with
:c:func:`PySys_AddWarnOption` should now more consistently take precedence
There is a new function parameter syntax ``/`` to indicate that some
function parameters must be specified positionally and cannot be used as
keyword arguments. This is the same notation shown by ``help()`` for C
-functions annotated with Larry Hastings' `Argument Clinic
-<https://docs.python.org/3/howto/clinic.html>`_ tool.
+functions annotated with Larry Hastings'
+:ref:`Argument Clinic <howto-clinic>` tool.
In the following example, parameters *a* and *b* are positional-only,
while *c* or *d* can be positional or keyword, and *e* or *f* are
the new Windows tab. (Contributed by Mark Roseman and Terry Jan Reedy in
:issue:`33962`.)
-Apply syntax highlighting to `.pyi` files. (Contributed by Alex
+Apply syntax highlighting to ``.pyi`` files. (Contributed by Alex
Waygood and Terry Jan Reedy in :issue:`45447`.)
imaplib
/*--start constants--*/
#define PY_MAJOR_VERSION 3
#define PY_MINOR_VERSION 10
-#define PY_MICRO_VERSION 8
+#define PY_MICRO_VERSION 9
#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL
#define PY_RELEASE_SERIAL 0
/* Version as a string */
-#define PY_VERSION "3.10.8"
+#define PY_VERSION "3.10.9"
/*--end constants--*/
/* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2.
def __parameters__(self):
params = []
for arg in self.__args__:
+ if isinstance(arg, type) and not isinstance(arg, GenericAlias):
+ continue
# Looks like a genericalias
if hasattr(arg, "__parameters__") and isinstance(arg.__parameters__, tuple):
params.extend(arg.__parameters__)
subst = dict(zip(self.__parameters__, item))
new_args = []
for arg in self.__args__:
+ if isinstance(arg, type) and not isinstance(arg, GenericAlias):
+ new_args.append(arg)
+ continue
if _is_typevarlike(arg):
if _is_param_expr(arg):
arg = subst[arg]
# arguments, try to parse more single-dash options out
# of the tail of the option string
chars = self.prefix_chars
- if arg_count == 0 and option_string[1] not in chars:
+ if (
+ arg_count == 0
+ and option_string[1] not in chars
+ and explicit_arg != ''
+ ):
action_tuples.append((action, [], option_string))
char = option_string[0]
option_string = char + explicit_arg[0]
location in a file.
"""
for child in walk(node):
+ # TypeIgnore is a special case where lineno is not an attribute
+ # but rather a field of the node itself.
+ if isinstance(child, TypeIgnore):
+ child.lineno = getattr(child, 'lineno', 0) + n
+ continue
+
if 'lineno' in child._attributes:
child.lineno = getattr(child, 'lineno', 0) + n
if (
if sock is not None:
sock.close()
raise
+ finally:
+ exceptions = my_exceptions = None
async def create_connection(
self, protocol_factory, host=None, port=None,
if sock is None:
exceptions = [exc for sub in exceptions for exc in sub]
- if len(exceptions) == 1:
- raise exceptions[0]
- else:
- # If they all have the same str(), raise one.
- model = str(exceptions[0])
- if all(str(exc) == model for exc in exceptions):
+ try:
+ if len(exceptions) == 1:
raise exceptions[0]
- # Raise a combined exception so the user can see all
- # the various error messages.
- raise OSError('Multiple exceptions: {}'.format(
- ', '.join(str(exc) for exc in exceptions)))
+ else:
+ # If they all have the same str(), raise one.
+ model = str(exceptions[0])
+ if all(str(exc) == model for exc in exceptions):
+ raise exceptions[0]
+ # Raise a combined exception so the user can see all
+ # the various error messages.
+ raise OSError('Multiple exceptions: {}'.format(
+ ', '.join(str(exc) for exc in exceptions)))
+ finally:
+ exceptions = None
else:
if sock is None:
event_list = self._selector.select(timeout)
self._process_events(event_list)
+ # Needed to break cycles when an exception occurs.
+ event_list = None
# Handle 'later' callbacks that are ready.
end_time = self.time() + self._clock_resolution
if (self._local._loop is None and
not self._local._set_called and
threading.current_thread() is threading.main_thread()):
+ stacklevel = 2
+ try:
+ f = sys._getframe(1)
+ except AttributeError:
+ pass
+ else:
+ while f:
+ module = f.f_globals.get('__name__')
+ if not (module == 'asyncio' or module.startswith('asyncio.')):
+ break
+ f = f.f_back
+ stacklevel += 1
+ import warnings
+ warnings.warn('There is no current event loop',
+ DeprecationWarning, stacklevel=stacklevel)
self.set_event_loop(self.new_event_loop())
if self._local._loop is None:
def _get_event_loop(stacklevel=3):
+ # This internal method is going away in Python 3.12, left here only for
+ # backwards compatibility with 3.10.0 - 3.10.8 and 3.11.0.
+ # Similarly, this method's C equivalent in _asyncio is going away as well.
+ # See GH-99949 for more details.
current_loop = _get_running_loop()
if current_loop is not None:
return current_loop
- import warnings
- warnings.warn('There is no current event loop',
- DeprecationWarning, stacklevel=stacklevel)
return get_event_loop_policy().get_event_loop()
self._pending_write = 0
self._conn_lost = 0
self._closing = False # Set when close() called.
+ self._called_connection_lost = False
self._eof_written = False
if self._server is not None:
self._server._attach()
self._empty_waiter.set_result(None)
else:
self._empty_waiter.set_exception(exc)
- if self._closing:
+ if self._closing and self._called_connection_lost:
return
self._closing = True
self._conn_lost += 1
self._loop.call_soon(self._call_connection_lost, exc)
def _call_connection_lost(self, exc):
+ if self._called_connection_lost:
+ return
try:
self._protocol.connection_lost(exc)
finally:
if server is not None:
server._detach()
self._server = None
+ self._called_connection_lost = True
def get_write_buffer_size(self):
size = self._pending_write
fut = self.create_future()
self._sock_connect(fut, sock, address)
- return await fut
+ try:
+ return await fut
+ finally:
+ # Needed to break cycles when an exception occurs.
+ fut = None
def _sock_connect(self, fut, sock, address):
fd = sock.fileno()
fut.set_exception(exc)
else:
fut.set_result(None)
+ finally:
+ fut = None
def _sock_write_done(self, fd, fut, handle=None):
if handle is None or not handle.cancelled():
fut.set_exception(exc)
else:
fut.set_result(None)
+ finally:
+ fut = None
async def sock_accept(self, sock):
"""Accept a connection.
def _start(self, args, shell, stdin, stdout, stderr, bufsize, **kwargs):
stdin_w = None
- if stdin == subprocess.PIPE:
- # Use a socket pair for stdin, since not all platforms
+ if stdin == subprocess.PIPE and sys.platform.startswith('aix'):
+ # Use a socket pair for stdin on AIX, since it does not
# support selecting read events on the write end of a
# socket (which we use in order to detect closing of the
- # other end). Notably this is needed on AIX, and works
- # just fine on other platforms.
+ # other end).
stdin, stdin_w = socket.socketpair()
try:
self._proc = subprocess.Popen(
self._poll(timeout)
tmp = self._results
self._results = []
- return tmp
+ try:
+ return tmp
+ finally:
+ # Needed to break cycles when an exception occurs.
+ tmp = None
def _result(self, value):
fut = self._loop.create_future()
else:
f.set_result(value)
self._results.append(f)
+ finally:
+ f = None
# Remove unregistered futures
for ov in self._unregistered:
codecs. Output is also codec dependent and will usually be
Unicode as well.
- Underlying encoded files are always opened in binary mode.
+ If encoding is not None, then the
+ underlying encoded files are always opened in binary mode.
The default file mode is 'r', meaning to open the file in read mode.
encoding specifies the encoding which is to be used for the
x.char = b'a\0b\0'
self.assertEqual(bytes(x), b'a\x00###')
+ def test_gh99275(self):
+ class BrokenStructure(Structure):
+ def __init_subclass__(cls, **kwargs):
+ cls._fields_ = [] # This line will fail, `stgdict` is not ready
+
+ with self.assertRaisesRegex(TypeError,
+ 'ctypes state is not initialized'):
+ class Subclass(BrokenStructure): ...
+
# __set__ and __get__ should raise a TypeError in case their self
# argument is not a ctype instance.
def test___set__(self):
cls, msg = self.get_except(Person, b"Someone", (1, 2))
self.assertEqual(cls, RuntimeError)
self.assertEqual(msg,
- "(Phone) <class 'TypeError'>: "
+ "(Phone) TypeError: "
"expected bytes, int found")
cls, msg = self.get_except(Person, b"Someone", (b"a", b"b", b"c"))
self.assertEqual(cls, RuntimeError)
self.assertEqual(msg,
- "(Phone) <class 'TypeError'>: too many initializers")
+ "(Phone) TypeError: too many initializers")
def test_huge_field_name(self):
# issue12881: segfault with large structure field names
def _create_fn(name, args, body, *, globals=None, locals=None,
return_type=MISSING):
- # Note that we mutate locals when exec() is called. Caller
- # beware! The only callers are internal to this module, so no
+ # Note that we may mutate locals. Callers beware!
+ # The only callers are internal to this module, so no
# worries about external callers.
if locals is None:
locals = {}
- if 'BUILTINS' not in locals:
- locals['BUILTINS'] = builtins
return_annotation = ''
if return_type is not MISSING:
locals['_return_type'] = return_type
# self_name is what "self" is called in this function: don't
# hard-code "self", since that might be a field name.
if frozen:
- return f'BUILTINS.object.__setattr__({self_name},{name!r},{value})'
+ return f'__dataclass_builtins_object__.__setattr__({self_name},{name!r},{value})'
return f'{self_name}.{name}={value}'
locals.update({
'MISSING': MISSING,
'_HAS_DEFAULT_FACTORY': _HAS_DEFAULT_FACTORY,
+ '__dataclass_builtins_object__': object,
})
body_lines = []
# Check bidi
RandAL = [stringprep.in_table_d1(x) for x in label]
- for c in RandAL:
- if c:
- # There is a RandAL char in the string. Must perform further
- # tests:
- # 1) The characters in section 5.8 MUST be prohibited.
- # This is table C.8, which was already checked
- # 2) If a string contains any RandALCat character, the string
- # MUST NOT contain any LCat character.
- if any(stringprep.in_table_d2(x) for x in label):
- raise UnicodeError("Violation of BIDI requirement 2")
-
- # 3) If a string contains any RandALCat character, a
- # RandALCat character MUST be the first character of the
- # string, and a RandALCat character MUST be the last
- # character of the string.
- if not RandAL[0] or not RandAL[-1]:
- raise UnicodeError("Violation of BIDI requirement 3")
+ if any(RandAL):
+ # There is a RandAL char in the string. Must perform further
+ # tests:
+ # 1) The characters in section 5.8 MUST be prohibited.
+ # This is table C.8, which was already checked
+ # 2) If a string contains any RandALCat character, the string
+ # MUST NOT contain any LCat character.
+ if any(stringprep.in_table_d2(x) for x in label):
+ raise UnicodeError("Violation of BIDI requirement 2")
+ # 3) If a string contains any RandALCat character, a
+ # RandALCat character MUST be the first character of the
+ # string, and a RandALCat character MUST be the last
+ # character of the string.
+ if not RandAL[0] or not RandAL[-1]:
+ raise UnicodeError("Violation of BIDI requirement 3")
return label
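The refactor computes the per-character RandALCat flags once and guards both checks with a single `any()`. A minimal standalone sketch of the same logic (`check_bidi` is a hypothetical helper name):

```python
import stringprep

def check_bidi(label):
    # Flag each character that is RandALCat (table D.1, right-to-left).
    rand_al = [stringprep.in_table_d1(ch) for ch in label]
    if any(rand_al):
        # Requirement 2: no LCat (table D.2) characters may be mixed in.
        if any(stringprep.in_table_d2(ch) for ch in label):
            raise UnicodeError("Violation of BIDI requirement 2")
        # Requirement 3: first and last characters must be RandALCat.
        if not rand_al[0] or not rand_al[-1]:
            raise UnicodeError("Violation of BIDI requirement 3")
    return label

check_bidi("\u05d0\u05d1")   # an all-Hebrew label passes both checks
```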
__all__ = ["version", "bootstrap"]
_PACKAGE_NAMES = ('setuptools', 'pip')
-_SETUPTOOLS_VERSION = "63.2.0"
-_PIP_VERSION = "22.2.2"
+_SETUPTOOLS_VERSION = "65.5.0"
+_PIP_VERSION = "22.3.1"
_PROJECTS = [
("setuptools", _SETUPTOOLS_VERSION, "py3"),
("pip", _PIP_VERSION, "py3"),
import html
import http.client
import io
+import itertools
import mimetypes
import os
import posixpath
self.log_message(format, *args)
+ # https://en.wikipedia.org/wiki/List_of_Unicode_characters#Control_codes
+ _control_char_table = str.maketrans(
+ {c: fr'\x{c:02x}' for c in itertools.chain(range(0x20), range(0x7f,0xa0))})
+ _control_char_table[ord('\\')] = r'\\'
+
def log_message(self, format, *args):
"""Log an arbitrary message.
The client ip and current date/time are prefixed to
every message.
+ Unicode control characters are replaced with escaped hex
+ before writing the output to stderr.
+
"""
+ message = format % args
sys.stderr.write("%s - - [%s] %s\n" %
(self.address_string(),
self.log_date_time_string(),
- format%args))
+ message.translate(self._control_char_table)))
def version_string(self):
"""Return the server software version string."""
=========================
+gh-97527: Fix a bug in the previous bugfix that caused IDLE to not
+start when run with 3.10.8, 3.12.0a1, and at least Microsoft Python
+3.10.2288.0 installed without the Lib/test package. 3.11.0 was never
+affected.
+
gh-65802: Document handling of extensions in Save As dialogs.
gh-95191: Include prompts when saving Shell (interactive input/output).
from os.path import expanduser
import plistlib
from sys import platform # Used in _init_tk_type, changed by test.
-from test.support import requires, ResourceDenied
import tkinter
def _init_tk_type():
""" Initialize _tk_type for isXyzTk functions.
+
+ This function is only called once, when _tk_type is still None.
"""
global _tk_type
if platform == 'darwin':
- try:
- requires('gui')
- except ResourceDenied: # Possible when testing.
- _tk_type = "cocoa" # Newest and most common.
- else:
- root = tkinter.Tk()
- ws = root.tk.call('tk', 'windowingsystem')
- if 'x11' in ws:
- _tk_type = "xquartz"
- elif 'aqua' not in ws:
- _tk_type = "other"
- elif 'AppKit' in root.tk.call('winfo', 'server', '.'):
+
+ # When running IDLE, GUI is present, test/* may not be.
+ # When running tests, test/* is present, GUI may not be.
+ # If not, guess most common. Does not matter for testing.
+ from idlelib.__init__ import testing
+ if testing:
+ from test.support import requires, ResourceDenied
+ try:
+ requires('gui')
+ except ResourceDenied:
_tk_type = "cocoa"
- else:
- _tk_type = "carbon"
- root.destroy()
+ return
+
+ root = tkinter.Tk()
+ ws = root.tk.call('tk', 'windowingsystem')
+ if 'x11' in ws:
+ _tk_type = "xquartz"
+ elif 'aqua' not in ws:
+ _tk_type = "other"
+ elif 'AppKit' in root.tk.call('winfo', 'server', '.'):
+ _tk_type = "cocoa"
+ else:
+ _tk_type = "carbon"
+ root.destroy()
else:
_tk_type = "other"
+ return
def isAquaTk():
"""
from . import _adapters, _meta
from ._meta import PackageMetadata
from ._collections import FreezableDefaultDict, Pair
-from ._functools import method_cache
+from ._functools import method_cache, pass_none
from ._itertools import unique_everseen
from ._meta import PackageMetadata, SimplePath
normalized name from the file system path.
"""
stem = os.path.basename(str(self._path))
- return self._name_from_stem(stem) or super()._normalized_name
+ return (
+ pass_none(Prepared.normalize)(self._name_from_stem(stem))
+ or super()._normalized_name
+ )
- def _name_from_stem(self, stem):
- name, ext = os.path.splitext(stem)
+ @staticmethod
+ def _name_from_stem(stem):
+ """
+ >>> PathDistribution._name_from_stem('foo-3.0.egg-info')
+ 'foo'
+ >>> PathDistribution._name_from_stem('CherryPy-3.0.dist-info')
+ 'CherryPy'
+ >>> PathDistribution._name_from_stem('face.egg-info')
+ 'face'
+ """
+ filename, ext = os.path.splitext(stem)
if ext not in ('.dist-info', '.egg-info'):
return
- name, sep, rest = stem.partition('-')
+ name, sep, rest = filename.partition('-')
return name
wrapper.cache_clear = lambda: None
return wrapper
+
+
+# From jaraco.functools 3.3
+def pass_none(func):
+ """
+ Wrap func so it's not called if its first param is None
+
+ >>> print_text = pass_none(print)
+ >>> print_text('text')
+ text
+ >>> print_text(None)
+ """
+
+ @functools.wraps(func)
+ def wrapper(param, *args, **kwargs):
+ if param is not None:
+ return func(param, *args, **kwargs)
+
+ return wrapper
# Was this function wrapped by a decorator?
if follow_wrapper_chains:
- obj = unwrap(obj, stop=(lambda f: hasattr(f, "__signature__")))
+ # Unwrap until we find an explicit signature or a MethodType (which will be
+ # handled explicitly below).
+ obj = unwrap(obj, stop=(lambda f: hasattr(f, "__signature__")
+ or isinstance(f, types.MethodType)))
if isinstance(obj, types.MethodType):
# If the unwrapped object is a *method*, we might want to
# skip its first parameter (self).
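The effect of stopping the unwrap at a `MethodType` can be reproduced with a small decorated method; `deco` and `Greeter` are hypothetical names for illustration:

```python
import functools
import inspect

def deco(func):
    @functools.wraps(func)            # sets wrapper.__wrapped__ = func
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

class Greeter:
    @deco
    def greet(self, name):
        return f'hello {name}'

# Unwrapping stops at the bound method, so the method branch still runs
# and "self" is dropped: the signature shows only "name".
sig = inspect.signature(Greeter().greet)
params = list(sig.parameters)
```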
def usesTime(self):
fmt = self._fmt
- return fmt.find('$asctime') >= 0 or fmt.find(self.asctime_format) >= 0
+ return fmt.find('$asctime') >= 0 or fmt.find(self.asctime_search) >= 0
def validate(self):
pattern = Template.pattern
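With the fix, `usesTime()` for `$`-style formatters matches both the bare `$asctime` spelling and the braced `${asctime}` search pattern. This can be checked through the public `logging.Formatter` API:

```python
import logging

# "$"-style formatting accepts both $asctime and ${asctime};
# usesTime() must report True for either spelling.
plain = logging.Formatter('$asctime $message', style='$')
braced = logging.Formatter('${asctime} ${message}', style='$')

uses_plain = plain.usesTime()
uses_braced = braced.usesTime()
```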
if family == 'AF_INET':
return ('localhost', 0)
elif family == 'AF_UNIX':
- # Prefer abstract sockets if possible to avoid problems with the address
- # size. When coding portable applications, some implementations have
- # sun_path as short as 92 bytes in the sockaddr_un struct.
- if util.abstract_sockets_supported:
- return f"\0listener-{os.getpid()}-{next(_mmap_counter)}"
return tempfile.mktemp(prefix='listener-', dir=util.get_temp_dir())
elif family == 'AF_PIPE':
return tempfile.mktemp(prefix=r'\\.\pipe\pyc-%d-%d-' %
)
finally:
_winapi.CloseHandle(h_map)
- size = _winapi.VirtualQuerySize(p_buf)
+ try:
+ size = _winapi.VirtualQuerySize(p_buf)
+ finally:
+ _winapi.UnmapViewOfFile(p_buf)
self._mmap = mmap.mmap(-1, size, tagname=name)
self._size = size
if last is None:
last = first + 10
filename = self.curframe.f_code.co_filename
+ # gh-93696: stdlib frozen modules provide a useful __file__
+ # this workaround can be removed with the closure of gh-89815
+ if filename.startswith("<frozen"):
+ tmp = self.curframe.f_globals.get("__file__")
+ if isinstance(tmp, str):
+ filename = tmp
breaklist = self.get_file_breaks(filename)
try:
lines = linecache.getlines(filename, self.curframe.f_globals)
except when needed.
"""
+ _fields = ('system', 'node', 'release', 'version', 'machine', 'processor')
+
@functools.cached_property
def processor(self):
return _unknown_as_blank(_Processor.get())
@classmethod
def _make(cls, iterable):
# override factory to affect length check
- num_fields = len(cls._fields)
+ num_fields = len(cls._fields) - 1
result = cls.__new__(cls, *iterable)
if len(result) != num_fields + 1:
msg = f'Expected {num_fields} arguments, got {len(result)}'
return len(tuple(iter(self)))
def __reduce__(self):
- return uname_result, tuple(self)[:len(self._fields)]
+ return uname_result, tuple(self)[:len(self._fields) - 1]
_uname_cache = None
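Because `processor` is now a cached property rather than a stored field, `__reduce__` must pickle one element fewer than `_fields` lists. A round-trip check through the public API:

```python
import pickle
import platform

u = platform.uname()

# The result pickles and unpickles cleanly even though "processor" is a
# lazily computed cached property, not a stored tuple element.
u2 = pickle.loads(pickle.dumps(u))
```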
# -*- coding: utf-8 -*-
-# Autogenerated by Sphinx on Tue Oct 11 12:21:26 2022
+# Autogenerated by Sphinx on Tue Dec 6 18:31:02 2022
topics = {'assert': 'The "assert" statement\n'
'**********************\n'
'\n'
'yield_expression)]\n'
'\n'
'The difference from normal Assignment statements is that only '
- 'single\n'
+ 'a single\n'
'target is allowed.\n'
'\n'
'For simple names as assignment targets, if in class or module '
'analysis\n'
' tools and IDEs.\n'
'\n'
- 'Changed in version 3.8: Now annotated assignments allow same\n'
- 'expressions in the right hand side as the regular '
- 'assignments.\n'
- 'Previously, some expressions (like un-parenthesized tuple '
- 'expressions)\n'
- 'caused a syntax error.\n',
+ 'Changed in version 3.8: Now annotated assignments allow the '
+ 'same\n'
+ 'expressions in the right hand side as regular assignments. '
+ 'Previously,\n'
+ 'some expressions (like un-parenthesized tuple expressions) '
+ 'caused a\n'
+ 'syntax error.\n',
'async': 'Coroutines\n'
'**********\n'
'\n'
'\n'
'* Mappings (instances of "dict") compare equal if and only if '
'they\n'
- ' have equal *(key, value)* pairs. Equality comparison of the '
+ ' have equal "(key, value)" pairs. Equality comparison of the '
'keys and\n'
' values enforces reflexivity.\n'
'\n'
'the clauses had been separated out into individual import '
'statements.\n'
'\n'
- 'The details of the first step, finding and loading modules are\n'
+ 'The details of the first step, finding and loading modules, are\n'
'described in greater detail in the section on the import system, '
'which\n'
'also describes the various types of packages and modules that can '
'y)" is\n'
'typically invalid without special support in "MyClass". To '
'be able to\n'
- 'use that kind of patterns, the class needs to define a\n'
- '*__match_args__* attribute.\n'
+ 'use that kind of pattern, the class needs to define a '
+ '*__match_args__*\n'
+ 'attribute.\n'
'\n'
'object.__match_args__\n'
'\n'
'*start* and\n'
' *end* are interpreted as in slice notation.\n'
'\n'
+ ' If *sub* is empty, returns the number of empty strings '
+ 'between\n'
+ ' characters which is the length of the string plus one.\n'
+ '\n'
"str.encode(encoding='utf-8', errors='strict')\n"
'\n'
' Return an encoded version of the string as a bytes '
'followed by\n'
' the string itself.\n'
'\n'
- 'str.rsplit(sep=None, maxsplit=- 1)\n'
+ 'str.rsplit(sep=None, maxsplit=-1)\n'
'\n'
' Return a list of the words in the string, using *sep* '
'as the\n'
" >>> 'Monty Python'.removesuffix(' Python')\n"
" 'Monty'\n"
'\n'
- 'str.split(sep=None, maxsplit=- 1)\n'
+ 'str.split(sep=None, maxsplit=-1)\n'
'\n'
' Return a list of the words in the string, using *sep* '
'as the\n'
'dictionaries or\n'
'other mutable types (that are compared by value rather than '
'by object\n'
- 'identity) may not be used as keys. Numeric types used for '
- 'keys obey\n'
- 'the normal rules for numeric comparison: if two numbers '
- 'compare equal\n'
- '(such as "1" and "1.0") then they can be used '
- 'interchangeably to index\n'
- 'the same dictionary entry. (Note however, that since '
- 'computers store\n'
- 'floating-point numbers as approximations it is usually '
- 'unwise to use\n'
- 'them as dictionary keys.)\n'
+ 'identity) may not be used as keys. Values that compare equal '
+ '(such as\n'
+ '"1", "1.0", and "True") can be used interchangeably to index '
+ 'the same\n'
+ 'dictionary entry.\n'
'\n'
'class dict(**kwargs)\n'
'class dict(mapping, **kwargs)\n'
# otherwise let the copy occur. copy2 will raise an error
if srcentry.is_dir():
copytree(srcobj, dstname, symlinks, ignore,
- copy_function, dirs_exist_ok=dirs_exist_ok)
+ copy_function, ignore_dangling_symlinks,
+ dirs_exist_ok)
else:
copy_function(srcobj, dstname)
elif srcentry.is_dir():
copytree(srcobj, dstname, symlinks, ignore, copy_function,
- dirs_exist_ok=dirs_exist_ok)
+ ignore_dangling_symlinks, dirs_exist_ok)
else:
# Will raise a SpecialFileError for unsupported file types
copy_function(srcobj, dstname)
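The recursive calls now forward `ignore_dangling_symlinks` positionally instead of dropping it. A quick check that nested trees still copy correctly when the destination already exists (temporary paths only, nothing outside `tempfile`):

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
src = os.path.join(base, 'src')
os.makedirs(os.path.join(src, 'nested'))
with open(os.path.join(src, 'nested', 'f.txt'), 'w') as f:
    f.write('payload')

dst = os.path.join(base, 'dst')
os.makedirs(os.path.join(dst, 'nested'))  # destination already exists

# dirs_exist_ok (and ignore_dangling_symlinks) must propagate into the
# recursive copytree call for the pre-existing "nested" directory.
shutil.copytree(src, dst, ignore_dangling_symlinks=True, dirs_exist_ok=True)

with open(os.path.join(dst, 'nested', 'f.txt')) as fh:
    copied = fh.read()
shutil.rmtree(base)
```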
def __repr__(self):
return f'{type(self).__name__}(mu={self._mu!r}, sigma={self._sigma!r})'
+
+ def __getstate__(self):
+ return self._mu, self._sigma
+
+ def __setstate__(self, state):
+ self._mu, self._sigma = state
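The `__getstate__`/`__setstate__` pair makes pickling independent of the class's attribute layout. A round-trip sketch with illustrative parameter values:

```python
import pickle
from statistics import NormalDist

d = NormalDist(mu=100.0, sigma=15.0)

# __getstate__ returns the (mu, sigma) pair and __setstate__ restores it,
# so pickled copies preserve the distribution's parameters.
d2 = pickle.loads(pickle.dumps(d))
params = (d2.mean, d2.stdev)
```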
class SemLock(_multiprocessing.SemLock):
pass
name = f'test_semlock_subclass-{os.getpid()}'
- s = SemLock(1, 0, 10, name, 0)
+ s = SemLock(1, 0, 10, name, False)
_multiprocessing.sem_unlink(name)
syslog.closelog()
+def test_not_in_gc():
+ import gc
+
+ hook = lambda *a: None
+ sys.addaudithook(hook)
+
+ for o in gc.get_objects():
+ if isinstance(o, list):
+ assert hook not in o
+
+
if __name__ == "__main__":
from test.support import suppress_msvcrt_asserts
Raise an AssertionError if the process exit code is not equal to exitcode.
- If the process runs longer than timeout seconds (SHORT_TIMEOUT by default),
+ If the process runs longer than timeout seconds (LONG_TIMEOUT by default),
kill the process (if signal.SIGKILL is available) and raise an
AssertionError. The timeout feature is not available on Windows.
"""
import signal
if timeout is None:
- timeout = SHORT_TIMEOUT
+ timeout = LONG_TIMEOUT
t0 = time.monotonic()
sleep = 0.001
max_sleep = 0.1
# process is still running
dt = time.monotonic() - t0
- if dt > SHORT_TIMEOUT:
+ if dt > timeout:
try:
os.kill(pid, signal.SIGKILL)
os.waitpid(pid, 0)
Sig('-z'),
]
failures = ['a', '--foo', '-xa', '-x --foo', '-x -z', '-z -x',
- '-yx', '-yz a', '-yyyx', '-yyyza', '-xyza']
+ '-yx', '-yz a', '-yyyx', '-yyyza', '-xyza', '-x=']
successes = [
('', NS(x=False, yyy=None, z=None)),
('-x', NS(x=True, yyy=None, z=None)),
self.assertEqual(ast.increment_lineno(src).lineno, 2)
self.assertIsNone(ast.increment_lineno(src).end_lineno)
+ def test_increment_lineno_on_module(self):
+ src = ast.parse(dedent("""\
+ a = 1
+ b = 2 # type: ignore
+ c = 3
+ d = 4 # type: ignore@tag
+ """), type_comments=True)
+ ast.increment_lineno(src, n=5)
+ self.assertEqual(src.type_ignores[0].lineno, 7)
+ self.assertEqual(src.type_ignores[1].lineno, 9)
+ self.assertEqual(src.type_ignores[1].tag, '@tag')
+
def test_iter_fields(self):
node = ast.parse('foo()', mode='eval')
d = dict(ast.iter_fields(node.body))
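The fixed `increment_lineno` shifts line numbers on every node in the tree (the new test above additionally covers `TypeIgnore` nodes). Its basic behavior in isolation:

```python
import ast

tree = ast.parse('x = 1\ny = 2\n')
ast.increment_lineno(tree, n=10)

# Every statement's line number is shifted by n.
linenos = [node.lineno for node in tree.body]
```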
def test_env_var_debug(self):
code = '\n'.join((
'import asyncio',
- 'loop = asyncio.get_event_loop()',
+ 'loop = asyncio.new_event_loop()',
'print(loop.get_debug())'))
# Test with -E to not fail if the unit test was run with
def test_get_event_loop(self):
policy = asyncio.DefaultEventLoopPolicy()
self.assertIsNone(policy._local._loop)
-
- loop = policy.get_event_loop()
+ with self.assertWarns(DeprecationWarning) as cm:
+ loop = policy.get_event_loop()
+ self.assertEqual(cm.filename, __file__)
self.assertIsInstance(loop, asyncio.AbstractEventLoop)
self.assertIs(policy._local._loop, loop)
policy, "set_event_loop",
wraps=policy.set_event_loop) as m_set_event_loop:
- loop = policy.get_event_loop()
+ with self.assertWarns(DeprecationWarning) as cm:
+ loop = policy.get_event_loop()
+ self.addCleanup(loop.close)
+ self.assertEqual(cm.filename, __file__)
# policy._local._loop must be set through .set_event_loop()
# (the unix DefaultEventLoopPolicy needs this call to attach
def test_set_event_loop(self):
policy = asyncio.DefaultEventLoopPolicy()
- old_loop = policy.get_event_loop()
+ old_loop = policy.new_event_loop()
+ policy.set_event_loop(old_loop)
self.assertRaises(AssertionError, policy.set_event_loop, object())
asyncio.set_event_loop_policy(Policy())
loop = asyncio.new_event_loop()
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaises(TestError):
- asyncio.get_event_loop()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaises(TestError):
+ asyncio.get_event_loop()
asyncio.set_event_loop(None)
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaises(TestError):
- asyncio.get_event_loop()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaises(TestError):
+ asyncio.get_event_loop()
with self.assertRaisesRegex(RuntimeError, 'no running'):
asyncio.get_running_loop()
loop.run_until_complete(func())
asyncio.set_event_loop(loop)
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaises(TestError):
- asyncio.get_event_loop()
- self.assertEqual(cm.warnings[0].filename, __file__)
-
+ with self.assertRaises(TestError):
+ asyncio.get_event_loop()
asyncio.set_event_loop(None)
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaises(TestError):
- asyncio.get_event_loop()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaises(TestError):
+ asyncio.get_event_loop()
finally:
asyncio.set_event_loop_policy(old_policy)
self.addCleanup(loop2.close)
self.assertEqual(cm.warnings[0].filename, __file__)
asyncio.set_event_loop(None)
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaisesRegex(RuntimeError, 'no current'):
- asyncio.get_event_loop()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current'):
+ asyncio.get_event_loop()
with self.assertRaisesRegex(RuntimeError, 'no running'):
asyncio.get_running_loop()
loop.run_until_complete(func())
asyncio.set_event_loop(loop)
- with self.assertWarns(DeprecationWarning) as cm:
- self.assertIs(asyncio.get_event_loop(), loop)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ self.assertIs(asyncio.get_event_loop(), loop)
asyncio.set_event_loop(None)
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaisesRegex(RuntimeError, 'no current'):
- asyncio.get_event_loop()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current'):
+ asyncio.get_event_loop()
finally:
asyncio.set_event_loop_policy(old_policy)
self.assertTrue(f.cancelled())
def test_constructor_without_loop(self):
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
- self._new_future()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
+ self._new_future()
def test_constructor_use_running_loop(self):
async def test():
self.assertIs(f.get_loop(), self.loop)
def test_constructor_use_global_loop(self):
- # Deprecated in 3.10
+ # Deprecated in 3.10, undeprecated in 3.11.1
asyncio.set_event_loop(self.loop)
self.addCleanup(asyncio.set_event_loop, None)
- with self.assertWarns(DeprecationWarning) as cm:
- f = self._new_future()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ f = self._new_future()
self.assertIs(f._loop, self.loop)
self.assertIs(f.get_loop(), self.loop)
return (arg, threading.get_ident())
ex = concurrent.futures.ThreadPoolExecutor(1)
f1 = ex.submit(run, 'oi')
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaises(RuntimeError):
- asyncio.wrap_future(f1)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
+ asyncio.wrap_future(f1)
ex.shutdown(wait=True)
def test_wrap_future_use_running_loop(self):
ex.shutdown(wait=True)
def test_wrap_future_use_global_loop(self):
- # Deprecated in 3.10
+ # Deprecated in 3.10, undeprecated in 3.11.1
asyncio.set_event_loop(self.loop)
self.addCleanup(asyncio.set_event_loop, None)
def run(arg):
return (arg, threading.get_ident())
ex = concurrent.futures.ThreadPoolExecutor(1)
f1 = ex.submit(run, 'oi')
- with self.assertWarns(DeprecationWarning) as cm:
- f2 = asyncio.wrap_future(f1)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ f2 = asyncio.wrap_future(f1)
self.assertIs(self.loop, f2._loop)
ex.shutdown(wait=True)
tr._closing = True
tr._force_close(None)
test_utils.run_briefly(self.loop)
+ # See https://github.com/python/cpython/issues/89237
+        # `protocol.connection_lost` should be called even if
+        # the transport was closed forcefully; otherwise the
+        # resources held by the protocol are never freed,
+        # waiters are never notified, and the code hangs.
+ self.assertTrue(self.protocol.connection_lost.called)
+
+ def test_force_close_protocol_connection_lost_once(self):
+ tr = self.socket_transport()
self.assertFalse(self.protocol.connection_lost.called)
+ tr._closing = True
+ # Calling _force_close twice should not call
+ # protocol.connection_lost twice
+ tr._force_close(None)
+ tr._force_close(None)
+ test_utils.run_briefly(self.loop)
+ self.assertEqual(1, self.protocol.connection_lost.call_count)
+
+ def test_close_protocol_connection_lost_once(self):
+ tr = self.socket_transport()
+ self.assertFalse(self.protocol.connection_lost.called)
+ # Calling close twice should not call
+ # protocol.connection_lost twice
+ tr.close()
+ tr.close()
+ test_utils.run_briefly(self.loop)
+ self.assertEqual(1, self.protocol.connection_lost.call_count)
def test_fatal_error_2(self):
tr = self.socket_transport()
"""Tests for sendfile functionality."""
import asyncio
+import errno
import os
import socket
import sys
srv_proto, cli_proto = self.prepare_sendfile(close_after=1024)
with self.assertRaises(ConnectionError):
- self.run_loop(
- self.loop.sendfile(cli_proto.transport, self.file))
+ try:
+ self.run_loop(
+ self.loop.sendfile(cli_proto.transport, self.file))
+ except OSError as e:
+ # macOS may raise OSError of EPROTOTYPE when writing to a
+ # socket that is in the process of closing down.
+ if e.errno == errno.EPROTOTYPE and sys.platform == "darwin":
+ raise ConnectionError
+ else:
+ raise
+
self.run_loop(srv_proto.done)
self.assertTrue(1024 <= srv_proto.nbytes < len(self.DATA),
self.assertEqual(data, b'data')
def test_streamreader_constructor_without_loop(self):
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
- asyncio.StreamReader()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
+ asyncio.StreamReader()
def test_streamreader_constructor_use_running_loop(self):
# asyncio issue #184: Ensure that StreamReaderProtocol constructor
def test_streamreader_constructor_use_global_loop(self):
# asyncio issue #184: Ensure that StreamReaderProtocol constructor
# retrieves the current loop if the loop parameter is not set
- # Deprecated in 3.10
+ # Deprecated in 3.10, undeprecated in 3.11.1
self.addCleanup(asyncio.set_event_loop, None)
asyncio.set_event_loop(self.loop)
- with self.assertWarns(DeprecationWarning) as cm:
- reader = asyncio.StreamReader()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ reader = asyncio.StreamReader()
self.assertIs(reader._loop, self.loop)
def test_streamreaderprotocol_constructor_without_loop(self):
reader = mock.Mock()
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
- asyncio.StreamReaderProtocol(reader)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
+ asyncio.StreamReaderProtocol(reader)
def test_streamreaderprotocol_constructor_use_running_loop(self):
# asyncio issue #184: Ensure that StreamReaderProtocol constructor
def test_streamreaderprotocol_constructor_use_global_loop(self):
# asyncio issue #184: Ensure that StreamReaderProtocol constructor
# retrieves the current loop if the loop parameter is not set
- # Deprecated in 3.10
+ # Deprecated in 3.10, undeprecated in 3.11.1
self.addCleanup(asyncio.set_event_loop, None)
asyncio.set_event_loop(self.loop)
reader = mock.Mock()
- with self.assertWarns(DeprecationWarning) as cm:
- protocol = asyncio.StreamReaderProtocol(reader)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ protocol = asyncio.StreamReaderProtocol(reader)
self.assertIs(protocol._loop, self.loop)
def test_multiple_drain(self):
self.assertEqual(output, None)
self.assertEqual(exitcode, 0)
+ @unittest.skipIf(sys.platform != 'linux', "Don't have /dev/stdin")
+ def test_devstdin_input(self):
+
+ async def devstdin_input(message):
+ code = 'file = open("/dev/stdin"); data = file.read(); print(len(data))'
+ proc = await asyncio.create_subprocess_exec(
+ sys.executable, '-c', code,
+ stdin=asyncio.subprocess.PIPE,
+ stdout=asyncio.subprocess.PIPE,
+ stderr=asyncio.subprocess.PIPE,
+ close_fds=False,
+ )
+ stdout, stderr = await proc.communicate(message)
+ exitcode = await proc.wait()
+ return (stdout, exitcode)
+
+ output, exitcode = self.loop.run_until_complete(devstdin_input(b'abc'))
+ self.assertEqual(output.rstrip(), b'3')
+ self.assertEqual(exitcode, 0)
+
def test_cancel_process_wait(self):
# Issue #23140: cancel Process.wait()
a = notmuch()
self.addCleanup(a.close)
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
- asyncio.ensure_future(a)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
+ asyncio.ensure_future(a)
async def test():
return asyncio.ensure_future(notmuch())
self.assertTrue(t.done())
self.assertEqual(t.result(), 'ok')
- # Deprecated in 3.10
+ # Deprecated in 3.10.0, undeprecated in 3.10.9
asyncio.set_event_loop(self.loop)
self.addCleanup(asyncio.set_event_loop, None)
- with self.assertWarns(DeprecationWarning) as cm:
- t = asyncio.ensure_future(notmuch())
- self.assertEqual(cm.warnings[0].filename, __file__)
+ t = asyncio.ensure_future(notmuch())
self.assertIs(t._loop, self.loop)
self.loop.run_until_complete(t)
self.assertTrue(t.done())
a = notmuch()
self.addCleanup(a.close)
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
- asyncio.ensure_future(a)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
+ asyncio.ensure_future(a)
async def test():
return asyncio.ensure_future(notmuch())
self.assertTrue(t.done())
self.assertEqual(t.result(), 'ok')
- # Deprecated in 3.10
+ # Deprecated in 3.10.0, undeprecated in 3.10.9
asyncio.set_event_loop(self.loop)
self.addCleanup(asyncio.set_event_loop, None)
- with self.assertWarns(DeprecationWarning) as cm:
- t = asyncio.ensure_future(notmuch())
- self.assertEqual(cm.warnings[0].filename, __file__)
+ t = asyncio.ensure_future(notmuch())
self.assertIs(t._loop, self.loop)
self.loop.run_until_complete(t)
self.assertTrue(t.done())
self.addCleanup(a.close)
futs = asyncio.as_completed([a])
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
- list(futs)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
+ list(futs)
def test_as_completed_coroutine_use_running_loop(self):
loop = self.new_test_loop()
loop.run_until_complete(test())
def test_as_completed_coroutine_use_global_loop(self):
- # Deprecated in 3.10
+ # Deprecated in 3.10.0, undeprecated in 3.10.9
async def coro():
return 42
loop = self.new_test_loop()
asyncio.set_event_loop(loop)
self.addCleanup(asyncio.set_event_loop, None)
- futs = asyncio.as_completed([coro()])
- with self.assertWarns(DeprecationWarning) as cm:
- futs = list(futs)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ futs = list(asyncio.as_completed([coro()]))
self.assertEqual(len(futs), 1)
self.assertEqual(loop.run_until_complete(futs[0]), 42)
inner = coro()
self.addCleanup(inner.close)
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'):
- asyncio.shield(inner)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
+ asyncio.shield(inner)
def test_shield_coroutine_use_running_loop(self):
async def coro():
self.assertEqual(res, 42)
def test_shield_coroutine_use_global_loop(self):
- # Deprecated in 3.10
+ # Deprecated in 3.10.0, undeprecated in 3.10.9
async def coro():
return 42
asyncio.set_event_loop(self.loop)
self.addCleanup(asyncio.set_event_loop, None)
- with self.assertWarns(DeprecationWarning) as cm:
- outer = asyncio.shield(coro())
- self.assertEqual(cm.warnings[0].filename, __file__)
+ outer = asyncio.shield(coro())
self.assertEqual(outer._loop, self.loop)
res = self.loop.run_until_complete(outer)
self.assertEqual(res, 42)
self.assertIsNone(asyncio.current_task(loop=self.loop))
def test_current_task_no_running_loop_implicit(self):
- with self.assertRaises(RuntimeError):
+ with self.assertRaisesRegex(RuntimeError, 'no running event loop'):
asyncio.current_task()
def test_current_task_with_implicit_loop(self):
return asyncio.gather(*args, **kwargs)
def test_constructor_empty_sequence_without_loop(self):
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaises(RuntimeError):
- asyncio.gather()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
+ asyncio.gather()
def test_constructor_empty_sequence_use_running_loop(self):
async def gather():
self.assertEqual(fut.result(), [])
def test_constructor_empty_sequence_use_global_loop(self):
- # Deprecated in 3.10
+ # Deprecated in 3.10.0, undeprecated in 3.10.9
asyncio.set_event_loop(self.one_loop)
self.addCleanup(asyncio.set_event_loop, None)
- with self.assertWarns(DeprecationWarning) as cm:
- fut = asyncio.gather()
- self.assertEqual(cm.warnings[0].filename, __file__)
+ fut = asyncio.gather()
self.assertIsInstance(fut, asyncio.Future)
self.assertIs(fut._loop, self.one_loop)
self._run_loop(self.one_loop)
self.addCleanup(gen1.close)
gen2 = coro()
self.addCleanup(gen2.close)
- with self.assertWarns(DeprecationWarning) as cm:
- with self.assertRaises(RuntimeError):
- asyncio.gather(gen1, gen2)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ with self.assertRaisesRegex(RuntimeError, 'no current event loop'):
+ asyncio.gather(gen1, gen2)
def test_constructor_use_running_loop(self):
async def coro():
self.one_loop.run_until_complete(fut)
def test_constructor_use_global_loop(self):
- # Deprecated in 3.10
+ # Deprecated in 3.10.0, undeprecated in 3.10.9
async def coro():
return 'abc'
asyncio.set_event_loop(self.other_loop)
self.addCleanup(asyncio.set_event_loop, None)
gen1 = coro()
gen2 = coro()
- with self.assertWarns(DeprecationWarning) as cm:
- fut = asyncio.gather(gen1, gen2)
- self.assertEqual(cm.warnings[0].filename, __file__)
+ fut = asyncio.gather(gen1, gen2)
self.assertIs(fut._loop, self.other_loop)
self.other_loop.run_until_complete(fut)
def test_child_watcher_replace_mainloop_existing(self):
policy = self.create_policy()
- loop = policy.get_event_loop()
+ loop = policy.new_event_loop()
+ policy.set_event_loop(loop)
# Explicitly setup SafeChildWatcher,
# default ThreadedChildWatcher has no _loop property
self.close_loop(self.loop)
self.assertFalse(self.loop.call_exception_handler.called)
+ def test_address_argument_type_error(self):
+ # Regression test for https://github.com/python/cpython/issues/98793
+ proactor = self.loop._proactor
+ sock = socket.socket(type=socket.SOCK_DGRAM)
+ bad_address = None
+ with self.assertRaises(TypeError):
+ proactor.connect(sock, bad_address)
+ with self.assertRaises(TypeError):
+ proactor.sendto(sock, b'abc', addr=bad_address)
+ sock.close()
+
class WinPolicyTests(test_utils.TestCase):
('syslog.closelog', '', '')]
)
+ def test_not_in_gc(self):
+ returncode, _, stderr = self.run_python("test_not_in_gc")
+ if returncode:
+ self.fail(stderr)
+
if __name__ == "__main__":
unittest.main()
__import__('string')
__import__(name='sys')
__import__(name='time', level=0)
- self.assertRaises(ImportError, __import__, 'spamspam')
+ self.assertRaises(ModuleNotFoundError, __import__, 'spamspam')
self.assertRaises(TypeError, __import__, 1, 2, 3, 4)
self.assertRaises(ValueError, __import__, '')
self.assertRaises(TypeError, __import__, 'sys', name='sys')
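The tightened assertion above works because `ModuleNotFoundError` is a subclass of `ImportError`, so the old check was merely less precise. A minimal sketch (the module name `spamspam` is the same placeholder the test uses):

```python
# ModuleNotFoundError subclasses ImportError, so code catching
# ImportError still handles the more precise exception.
try:
    __import__('spamspam')  # placeholder name, assumed not installed
except ModuleNotFoundError as exc:
    assert isinstance(exc, ImportError)
    caught = type(exc).__name__

assert caught == 'ModuleNotFoundError'
```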
self.assertEqual(A.__module__, __name__)
with self.assertRaises(ValueError):
type('A\x00B', (), {})
- with self.assertRaises(ValueError):
+ with self.assertRaises(UnicodeEncodeError):
type('A\udcdcB', (), {})
with self.assertRaises(TypeError):
type(b'A', (), {})
with self.assertRaises(ValueError):
A.__name__ = 'A\x00B'
self.assertEqual(A.__name__, 'C')
- with self.assertRaises(ValueError):
+ with self.assertRaises(UnicodeEncodeError):
A.__name__ = 'A\udcdcB'
self.assertEqual(A.__name__, 'C')
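Switching the expected exception from `ValueError` to `UnicodeEncodeError` is backward compatible because `UnicodeEncodeError` derives from `ValueError` (via `UnicodeError`); a lone surrogate such as `'\udcdc'` cannot be UTF-8 encoded, which is why it is rejected as a type name. A quick sketch of both facts:

```python
# UnicodeEncodeError is a ValueError subclass, so callers catching
# ValueError keep working after the exception type was narrowed.
assert issubclass(UnicodeEncodeError, ValueError)

# A lone surrogate cannot be encoded to UTF-8.
try:
    'A\udcdcB'.encode('utf-8')
except UnicodeEncodeError:
    rejected = True

assert rejected
```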
with self.assertRaises(TypeError):
self.kwargs.clear()
gc.collect()
return 0
- x = IntWithDict(dont_inherit=IntWithDict())
+ x = IntWithDict(optimize=IntWithDict())
# We test the argument handling of "compile" here, the compilation
# itself is not relevant. When we pass flags=x below, x.__index__() is
# called, which changes the keywords dict.
--- /dev/null
+import os
+from test.support import load_package_tests
+
+def load_tests(*args):
+ return load_package_tests(os.path.dirname(__file__), *args)
--- /dev/null
+import unittest
+
+unittest.main('test.test_capi')
self.assertRaises(TypeError, _testcapi.get_mapping_values, bad_mapping)
self.assertRaises(TypeError, _testcapi.get_mapping_items, bad_mapping)
+ def test_mapping_has_key(self):
+ dct = {'a': 1}
+ self.assertTrue(_testcapi.mapping_has_key(dct, 'a'))
+ self.assertFalse(_testcapi.mapping_has_key(dct, 'b'))
+
+ class SubDict(dict):
+ pass
+
+ dct2 = SubDict({'a': 1})
+ self.assertTrue(_testcapi.mapping_has_key(dct2, 'a'))
+ self.assertFalse(_testcapi.mapping_has_key(dct2, 'b'))
+
@unittest.skipUnless(hasattr(_testcapi, 'negative_refcount'),
'need _testcapi.negative_refcount')
def test_negative_refcount(self):
--- /dev/null
+import unittest
+import sys
+import warnings
+from test import support
+from test.support import import_helper
+from test.support import warnings_helper
+
+try:
+ import _testcapi
+except ImportError:
+ _testcapi = None
+
+
+class CAPITest(unittest.TestCase):
+
+ # Test PyUnicode_FromFormat()
+ def test_from_format(self):
+ import_helper.import_module('ctypes')
+ from ctypes import (
+ c_char_p,
+ pythonapi, py_object, sizeof,
+ c_int, c_long, c_longlong, c_ssize_t,
+ c_uint, c_ulong, c_ulonglong, c_size_t, c_void_p)
+ name = "PyUnicode_FromFormat"
+ _PyUnicode_FromFormat = getattr(pythonapi, name)
+ _PyUnicode_FromFormat.argtypes = (c_char_p,)
+ _PyUnicode_FromFormat.restype = py_object
+
+ def PyUnicode_FromFormat(format, *args):
+ cargs = tuple(
+ py_object(arg) if isinstance(arg, str) else arg
+ for arg in args)
+ return _PyUnicode_FromFormat(format, *cargs)
+
+ def check_format(expected, format, *args):
+ text = PyUnicode_FromFormat(format, *args)
+ self.assertEqual(expected, text)
+
+ # ascii format, non-ascii argument
+ check_format('ascii\x7f=unicode\xe9',
+ b'ascii\x7f=%U', 'unicode\xe9')
+
+ # non-ascii format, ascii argument: ensure that PyUnicode_FromFormatV()
+ # raises an error
+ self.assertRaisesRegex(ValueError,
+ r'^PyUnicode_FromFormatV\(\) expects an ASCII-encoded format '
+ 'string, got a non-ASCII byte: 0xe9$',
+ PyUnicode_FromFormat, b'unicode\xe9=%s', 'ascii')
+
+ # test "%c"
+ check_format('\uabcd',
+ b'%c', c_int(0xabcd))
+ check_format('\U0010ffff',
+ b'%c', c_int(0x10ffff))
+ with self.assertRaises(OverflowError):
+ PyUnicode_FromFormat(b'%c', c_int(0x110000))
+ # Issue #18183
+ check_format('\U00010000\U00100000',
+ b'%c%c', c_int(0x10000), c_int(0x100000))
+
+ # test "%"
+ check_format('%',
+ b'%')
+ check_format('%',
+ b'%%')
+ check_format('%s',
+ b'%%s')
+ check_format('[%]',
+ b'[%%]')
+ check_format('%abc',
+ b'%%%s', b'abc')
+
+ # truncated string
+ check_format('abc',
+ b'%.3s', b'abcdef')
+ check_format('abc[\ufffd',
+ b'%.5s', 'abc[\u20ac]'.encode('utf8'))
+ check_format("'\\u20acABC'",
+ b'%A', '\u20acABC')
+ check_format("'\\u20",
+ b'%.5A', '\u20acABCDEF')
+ check_format("'\u20acABC'",
+ b'%R', '\u20acABC')
+ check_format("'\u20acA",
+ b'%.3R', '\u20acABCDEF')
+ check_format('\u20acAB',
+ b'%.3S', '\u20acABCDEF')
+ check_format('\u20acAB',
+ b'%.3U', '\u20acABCDEF')
+ check_format('\u20acAB',
+ b'%.3V', '\u20acABCDEF', None)
+ check_format('abc[\ufffd',
+ b'%.5V', None, 'abc[\u20ac]'.encode('utf8'))
+
+ # The following tests come from #7330.
+ # test width modifier and precision modifier with %S
+ check_format("repr= abc",
+ b'repr=%5S', 'abc')
+ check_format("repr=ab",
+ b'repr=%.2S', 'abc')
+ check_format("repr= ab",
+ b'repr=%5.2S', 'abc')
+
+ # test width modifier and precision modifier with %R
+ check_format("repr= 'abc'",
+ b'repr=%8R', 'abc')
+ check_format("repr='ab",
+ b'repr=%.3R', 'abc')
+ check_format("repr= 'ab",
+ b'repr=%5.3R', 'abc')
+
+ # test width modifier and precision modifier with %A
+ check_format("repr= 'abc'",
+ b'repr=%8A', 'abc')
+ check_format("repr='ab",
+ b'repr=%.3A', 'abc')
+ check_format("repr= 'ab",
+ b'repr=%5.3A', 'abc')
+
+ # test width modifier and precision modifier with %s
+ check_format("repr= abc",
+ b'repr=%5s', b'abc')
+ check_format("repr=ab",
+ b'repr=%.2s', b'abc')
+ check_format("repr= ab",
+ b'repr=%5.2s', b'abc')
+
+ # test width modifier and precision modifier with %U
+ check_format("repr= abc",
+ b'repr=%5U', 'abc')
+ check_format("repr=ab",
+ b'repr=%.2U', 'abc')
+ check_format("repr= ab",
+ b'repr=%5.2U', 'abc')
+
+ # test width modifier and precision modifier with %V
+ check_format("repr= abc",
+ b'repr=%5V', 'abc', b'123')
+ check_format("repr=ab",
+ b'repr=%.2V', 'abc', b'123')
+ check_format("repr= ab",
+ b'repr=%5.2V', 'abc', b'123')
+ check_format("repr= 123",
+ b'repr=%5V', None, b'123')
+ check_format("repr=12",
+ b'repr=%.2V', None, b'123')
+ check_format("repr= 12",
+ b'repr=%5.2V', None, b'123')
+
+ # test integer formats (%i, %d, %u)
+ check_format('010',
+ b'%03i', c_int(10))
+ check_format('0010',
+ b'%0.4i', c_int(10))
+ check_format('-123',
+ b'%i', c_int(-123))
+ check_format('-123',
+ b'%li', c_long(-123))
+ check_format('-123',
+ b'%lli', c_longlong(-123))
+ check_format('-123',
+ b'%zi', c_ssize_t(-123))
+
+ check_format('-123',
+ b'%d', c_int(-123))
+ check_format('-123',
+ b'%ld', c_long(-123))
+ check_format('-123',
+ b'%lld', c_longlong(-123))
+ check_format('-123',
+ b'%zd', c_ssize_t(-123))
+
+ check_format('123',
+ b'%u', c_uint(123))
+ check_format('123',
+ b'%lu', c_ulong(123))
+ check_format('123',
+ b'%llu', c_ulonglong(123))
+ check_format('123',
+ b'%zu', c_size_t(123))
+
+ # test long output
+ min_longlong = -(2 ** (8 * sizeof(c_longlong) - 1))
+ max_longlong = -min_longlong - 1
+ check_format(str(min_longlong),
+ b'%lld', c_longlong(min_longlong))
+ check_format(str(max_longlong),
+ b'%lld', c_longlong(max_longlong))
+ max_ulonglong = 2 ** (8 * sizeof(c_ulonglong)) - 1
+ check_format(str(max_ulonglong),
+ b'%llu', c_ulonglong(max_ulonglong))
+ PyUnicode_FromFormat(b'%p', c_void_p(-1))
+
+ # test padding (width and/or precision)
+ check_format('123'.rjust(10, '0'),
+ b'%010i', c_int(123))
+ check_format('123'.rjust(100),
+ b'%100i', c_int(123))
+ check_format('123'.rjust(100, '0'),
+ b'%.100i', c_int(123))
+ check_format('123'.rjust(80, '0').rjust(100),
+ b'%100.80i', c_int(123))
+
+ check_format('123'.rjust(10, '0'),
+ b'%010u', c_uint(123))
+ check_format('123'.rjust(100),
+ b'%100u', c_uint(123))
+ check_format('123'.rjust(100, '0'),
+ b'%.100u', c_uint(123))
+ check_format('123'.rjust(80, '0').rjust(100),
+ b'%100.80u', c_uint(123))
+
+ check_format('123'.rjust(10, '0'),
+ b'%010x', c_int(0x123))
+ check_format('123'.rjust(100),
+ b'%100x', c_int(0x123))
+ check_format('123'.rjust(100, '0'),
+ b'%.100x', c_int(0x123))
+ check_format('123'.rjust(80, '0').rjust(100),
+ b'%100.80x', c_int(0x123))
+
+ # test %A
+ check_format(r"%A:'abc\xe9\uabcd\U0010ffff'",
+ b'%%A:%A', 'abc\xe9\uabcd\U0010ffff')
+
+ # test %V
+ check_format('repr=abc',
+ b'repr=%V', 'abc', b'xyz')
+
+ # test %p
+ # We cannot test the exact result,
+ # because it returns a hex representation of a C pointer,
+ # which is going to be different each time, but we can test the format.
+ p_format_regex = r'^0x[a-zA-Z0-9]{3,}$'
+ p_format1 = PyUnicode_FromFormat(b'%p', 'abc')
+ self.assertIsInstance(p_format1, str)
+ self.assertRegex(p_format1, p_format_regex)
+
+ p_format2 = PyUnicode_FromFormat(b'%p %p', '123456', b'xyz')
+ self.assertIsInstance(p_format2, str)
+ self.assertRegex(p_format2,
+ r'0x[a-zA-Z0-9]{3,} 0x[a-zA-Z0-9]{3,}')
+
+ # Extra args are ignored:
+ p_format3 = PyUnicode_FromFormat(b'%p', '123456', None, b'xyz')
+ self.assertIsInstance(p_format3, str)
+ self.assertRegex(p_format3, p_format_regex)
+
+ # Test string decode from parameter of %s using utf-8.
+ # b'\xe4\xba\xba\xe6\xb0\x91' is the UTF-8-encoded byte sequence of
+ # '\u4eba\u6c11'
+ check_format('repr=\u4eba\u6c11',
+ b'repr=%V', None, b'\xe4\xba\xba\xe6\xb0\x91')
+
+ # Test the replace error handler.
+ check_format('repr=abc\ufffd',
+ b'repr=%V', None, b'abc\xff')
+
+ # not supported: copy the raw format string. These tests are just here
+ # to check for crashes and should not be considered a specification.
+ check_format('%s',
+ b'%1%s', b'abc')
+ check_format('%1abc',
+ b'%1abc')
+ check_format('%+i',
+ b'%+i', c_int(10))
+ check_format('%.%s',
+ b'%.%s', b'abc')
+
+ # Issue #33817: empty strings
+ check_format('',
+ b'')
+ check_format('',
+ b'%s', b'')
+
+ # Test PyUnicode_AsWideChar()
+ @support.cpython_only
+ def test_aswidechar(self):
+ from _testcapi import unicode_aswidechar
+ import_helper.import_module('ctypes')
+ from ctypes import c_wchar, sizeof
+
+ wchar, size = unicode_aswidechar('abcdef', 2)
+ self.assertEqual(size, 2)
+ self.assertEqual(wchar, 'ab')
+
+ wchar, size = unicode_aswidechar('abc', 3)
+ self.assertEqual(size, 3)
+ self.assertEqual(wchar, 'abc')
+
+ wchar, size = unicode_aswidechar('abc', 4)
+ self.assertEqual(size, 3)
+ self.assertEqual(wchar, 'abc\0')
+
+ wchar, size = unicode_aswidechar('abc', 10)
+ self.assertEqual(size, 3)
+ self.assertEqual(wchar, 'abc\0')
+
+ wchar, size = unicode_aswidechar('abc\0def', 20)
+ self.assertEqual(size, 7)
+ self.assertEqual(wchar, 'abc\0def\0')
+
+ nonbmp = chr(0x10ffff)
+ if sizeof(c_wchar) == 2:
+ buflen = 3
+ nchar = 2
+ else: # sizeof(c_wchar) == 4
+ buflen = 2
+ nchar = 1
+ wchar, size = unicode_aswidechar(nonbmp, buflen)
+ self.assertEqual(size, nchar)
+ self.assertEqual(wchar, nonbmp + '\0')
+
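The `buflen`/`nchar` branch above reflects that a non-BMP code point needs two 16-bit `wchar_t` units (a surrogate pair) but only one 32-bit unit. The same arithmetic can be checked from pure Python without `_testcapi`:

```python
# A non-BMP code point is a surrogate pair (two units) in UTF-16
# but a single unit in UCS-4, mirroring the sizeof(c_wchar) branches.
nonbmp = chr(0x10ffff)
utf16_units = len(nonbmp.encode('utf-16-le')) // 2
ucs4_units = len(nonbmp.encode('utf-32-le')) // 4
assert utf16_units == 2   # sizeof(wchar_t) == 2 case
assert ucs4_units == 1    # sizeof(wchar_t) == 4 case
```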
+ # Test PyUnicode_AsWideCharString()
+ @support.cpython_only
+ def test_aswidecharstring(self):
+ from _testcapi import unicode_aswidecharstring
+ import_helper.import_module('ctypes')
+ from ctypes import c_wchar, sizeof
+
+ wchar, size = unicode_aswidecharstring('abc')
+ self.assertEqual(size, 3)
+ self.assertEqual(wchar, 'abc\0')
+
+ wchar, size = unicode_aswidecharstring('abc\0def')
+ self.assertEqual(size, 7)
+ self.assertEqual(wchar, 'abc\0def\0')
+
+ nonbmp = chr(0x10ffff)
+ if sizeof(c_wchar) == 2:
+ nchar = 2
+ else: # sizeof(c_wchar) == 4
+ nchar = 1
+ wchar, size = unicode_aswidecharstring(nonbmp)
+ self.assertEqual(size, nchar)
+ self.assertEqual(wchar, nonbmp + '\0')
+
+ # Test PyUnicode_AsUCS4()
+ @support.cpython_only
+ def test_asucs4(self):
+ from _testcapi import unicode_asucs4
+ for s in ['abc', '\xa1\xa2', '\u4f60\u597d', 'a\U0001f600',
+ 'a\ud800b\udfffc', '\ud834\udd1e']:
+ l = len(s)
+ self.assertEqual(unicode_asucs4(s, l, True), s+'\0')
+ self.assertEqual(unicode_asucs4(s, l, False), s+'\uffff')
+ self.assertEqual(unicode_asucs4(s, l+1, True), s+'\0\uffff')
+ self.assertEqual(unicode_asucs4(s, l+1, False), s+'\0\uffff')
+ self.assertRaises(SystemError, unicode_asucs4, s, l-1, True)
+ self.assertRaises(SystemError, unicode_asucs4, s, l-2, False)
+ s = '\0'.join([s, s])
+ self.assertEqual(unicode_asucs4(s, len(s), True), s+'\0')
+ self.assertEqual(unicode_asucs4(s, len(s), False), s+'\uffff')
+
+ # Test PyUnicode_AsUTF8()
+ @support.cpython_only
+ def test_asutf8(self):
+ from _testcapi import unicode_asutf8
+
+ bmp = '\u0100'
+ bmp2 = '\uffff'
+ nonbmp = chr(0x10ffff)
+
+ self.assertEqual(unicode_asutf8(bmp), b'\xc4\x80')
+ self.assertEqual(unicode_asutf8(bmp2), b'\xef\xbf\xbf')
+ self.assertEqual(unicode_asutf8(nonbmp), b'\xf4\x8f\xbf\xbf')
+ self.assertRaises(UnicodeEncodeError, unicode_asutf8, 'a\ud800b\udfffc')
+
+ # Test PyUnicode_AsUTF8AndSize()
+ @support.cpython_only
+ def test_asutf8andsize(self):
+ from _testcapi import unicode_asutf8andsize
+
+ bmp = '\u0100'
+ bmp2 = '\uffff'
+ nonbmp = chr(0x10ffff)
+
+ self.assertEqual(unicode_asutf8andsize(bmp), (b'\xc4\x80', 2))
+ self.assertEqual(unicode_asutf8andsize(bmp2), (b'\xef\xbf\xbf', 3))
+ self.assertEqual(unicode_asutf8andsize(nonbmp), (b'\xf4\x8f\xbf\xbf', 4))
+ self.assertRaises(UnicodeEncodeError, unicode_asutf8andsize, 'a\ud800b\udfffc')
+
+ # Test PyUnicode_FindChar()
+ @support.cpython_only
+ def test_findchar(self):
+ from _testcapi import unicode_findchar
+
+ for str in "\xa1", "\u8000\u8080", "\ud800\udc02", "\U0001f100\U0001f1f1":
+ for i, ch in enumerate(str):
+ self.assertEqual(unicode_findchar(str, ord(ch), 0, len(str), 1), i)
+ self.assertEqual(unicode_findchar(str, ord(ch), 0, len(str), -1), i)
+
+ str = "!>_<!"
+ self.assertEqual(unicode_findchar(str, 0x110000, 0, len(str), 1), -1)
+ self.assertEqual(unicode_findchar(str, 0x110000, 0, len(str), -1), -1)
+ # start < end
+ self.assertEqual(unicode_findchar(str, ord('!'), 1, len(str)+1, 1), 4)
+ self.assertEqual(unicode_findchar(str, ord('!'), 1, len(str)+1, -1), 4)
+ # start >= end
+ self.assertEqual(unicode_findchar(str, ord('!'), 0, 0, 1), -1)
+ self.assertEqual(unicode_findchar(str, ord('!'), len(str), 0, 1), -1)
+ # negative
+ self.assertEqual(unicode_findchar(str, ord('!'), -len(str), -1, 1), 0)
+ self.assertEqual(unicode_findchar(str, ord('!'), -len(str), -1, -1), 0)
+
+ # Test PyUnicode_CopyCharacters()
+ @support.cpython_only
+ def test_copycharacters(self):
+ from _testcapi import unicode_copycharacters
+
+ strings = [
+ 'abcde', '\xa1\xa2\xa3\xa4\xa5',
+ '\u4f60\u597d\u4e16\u754c\uff01',
+ '\U0001f600\U0001f601\U0001f602\U0001f603\U0001f604'
+ ]
+
+ for idx, from_ in enumerate(strings):
+ # wide -> narrow: exceed maxchar limitation
+ for to in strings[:idx]:
+ self.assertRaises(
+ SystemError,
+ unicode_copycharacters, to, 0, from_, 0, 5
+ )
+ # same kind
+ for from_start in range(5):
+ self.assertEqual(
+ unicode_copycharacters(from_, 0, from_, from_start, 5),
+ (from_[from_start:from_start+5].ljust(5, '\0'),
+ 5-from_start)
+ )
+ for to_start in range(5):
+ self.assertEqual(
+ unicode_copycharacters(from_, to_start, from_, to_start, 5),
+ (from_[to_start:to_start+5].rjust(5, '\0'),
+ 5-to_start)
+ )
+ # narrow -> wide
+ # Tests omitted since this creates invalid strings.
+
+ s = strings[0]
+ self.assertRaises(IndexError, unicode_copycharacters, s, 6, s, 0, 5)
+ self.assertRaises(IndexError, unicode_copycharacters, s, -1, s, 0, 5)
+ self.assertRaises(IndexError, unicode_copycharacters, s, 0, s, 6, 5)
+ self.assertRaises(IndexError, unicode_copycharacters, s, 0, s, -1, 5)
+ self.assertRaises(SystemError, unicode_copycharacters, s, 1, s, 0, 5)
+ self.assertRaises(SystemError, unicode_copycharacters, s, 0, s, 0, -1)
+ self.assertRaises(SystemError, unicode_copycharacters, s, 0, b'', 0, 0)
+
+ @support.cpython_only
+ @support.requires_legacy_unicode_capi
+ def test_encode_decimal(self):
+ from _testcapi import unicode_encodedecimal
+ with warnings_helper.check_warnings():
+ warnings.simplefilter('ignore', DeprecationWarning)
+ self.assertEqual(unicode_encodedecimal('123'),
+ b'123')
+ self.assertEqual(unicode_encodedecimal('\u0663.\u0661\u0664'),
+ b'3.14')
+ self.assertEqual(unicode_encodedecimal(
+ "\N{EM SPACE}3.14\N{EN SPACE}"), b' 3.14 ')
+ self.assertRaises(UnicodeEncodeError,
+ unicode_encodedecimal, "123\u20ac", "strict")
+ self.assertRaisesRegex(
+ ValueError,
+ "^'decimal' codec can't encode character",
+ unicode_encodedecimal, "123\u20ac", "replace")
+
+ @support.cpython_only
+ @support.requires_legacy_unicode_capi
+ def test_transform_decimal(self):
+ from _testcapi import unicode_transformdecimaltoascii as transform_decimal
+ with warnings_helper.check_warnings():
+ warnings.simplefilter('ignore', DeprecationWarning)
+ self.assertEqual(transform_decimal('123'),
+ '123')
+ self.assertEqual(transform_decimal('\u0663.\u0661\u0664'),
+ '3.14')
+ self.assertEqual(transform_decimal("\N{EM SPACE}3.14\N{EN SPACE}"),
+ "\N{EM SPACE}3.14\N{EN SPACE}")
+ self.assertEqual(transform_decimal('123\u20ac'),
+ '123\u20ac')
+
+ @support.cpython_only
+ def test_pep393_utf8_caching_bug(self):
+ # Issue #25709: Problem with string concatenation and utf-8 cache
+ from _testcapi import getargs_s_hash
+ for k in 0x24, 0xa4, 0x20ac, 0x1f40d:
+ s = ''
+ for i in range(5):
+ # Due to CPython specific optimization the 's' string can be
+ # resized in-place.
+ s += chr(k)
+ # Parsing with the "s#" format code calls indirectly
+ # PyUnicode_AsUTF8AndSize() which creates the UTF-8
+ # encoded string cached in the Unicode object.
+ self.assertEqual(getargs_s_hash(s), chr(k).encode() * (i + 1))
+ # Check that the second call returns the same result
+ self.assertEqual(getargs_s_hash(s), chr(k).encode() * (i + 1))
+
+
+if __name__ == "__main__":
+ unittest.main()
"spamspam", self.spambe)
def test_bug691291(self):
- # Files are always opened in binary mode, even if no binary mode was
+ # If encoding is not None, then
+ # files are always opened in binary mode, even if no binary mode was
# specified. This means that no automatic conversion of '\n' is done
# on reading and writing.
s1 = 'Hello\r\nworld\r\n'
self.assertEqual("pyth\xf6n.org".encode("idna"), b"xn--pythn-mua.org")
self.assertEqual("pyth\xf6n.org.".encode("idna"), b"xn--pythn-mua.org.")
+ def test_builtin_decode_length_limit(self):
+ with self.assertRaisesRegex(UnicodeError, "too long"):
+ (b"xn--016c"+b"a"*1100).decode("idna")
+ with self.assertRaisesRegex(UnicodeError, "too long"):
+ (b"xn--016c"+b"a"*70).decode("idna")
+
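The new length-limit test sits alongside ordinary round-trips through the `"idna"` codec; as a reminder of the codec's normal behavior (independent of the limit fix), non-ASCII labels are punycode-encoded and decode back losslessly:

```python
# The "idna" codec punycode-encodes non-ASCII labels; the
# surrounding tests rely on this encode/decode round-trip.
encoded = "pyth\xf6n.org".encode("idna")
assert encoded == b"xn--pythn-mua.org"
assert encoded.decode("idna") == "pyth\xf6n.org"
```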
def test_stream(self):
r = codecs.getreader("idna")(io.BytesIO(b"abc"))
r.read(3)
def __await__(self):
yield
+ self.validate_abstract_methods(Awaitable, '__await__')
+
non_samples = [None, int(), gen(), object()]
for x in non_samples:
self.assertNotIsInstance(x, Awaitable)
def __await__(self):
yield
+ self.validate_abstract_methods(Coroutine, '__await__', 'send', 'throw')
+
non_samples = [None, int(), gen(), object(), Bar()]
for x in non_samples:
self.assertNotIsInstance(x, Coroutine)
containers = [
seq,
ItemsView({1: nan, 2: obj}),
+ KeysView({1: nan, 2: obj}),
ValuesView({1: nan, 2: obj})
]
for container in containers:
mymap['red'] = 5
self.assertIsInstance(mymap.keys(), Set)
self.assertIsInstance(mymap.keys(), KeysView)
+ self.assertIsInstance(mymap.values(), Collection)
+ self.assertIsInstance(mymap.values(), ValuesView)
self.assertIsInstance(mymap.items(), Set)
self.assertIsInstance(mymap.items(), ItemsView)
self.assertFalse(issubclass(sample, ByteString))
self.assertNotIsInstance(memoryview(b""), ByteString)
self.assertFalse(issubclass(memoryview, ByteString))
+ self.validate_abstract_methods(ByteString, '__getitem__', '__len__')
def test_MutableSequence(self):
for sample in [tuple, str, bytes]:
self.assertClose(complex(5.3, 9.8).conjugate(), 5.3-9.8j)
def test_constructor(self):
- class OS:
+ class NS:
def __init__(self, value): self.value = value
def __complex__(self): return self.value
- class NS(object):
- def __init__(self, value): self.value = value
- def __complex__(self): return self.value
- self.assertEqual(complex(OS(1+10j)), 1+10j)
self.assertEqual(complex(NS(1+10j)), 1+10j)
- self.assertRaises(TypeError, complex, OS(None))
self.assertRaises(TypeError, complex, NS(None))
self.assertRaises(TypeError, complex, {})
self.assertRaises(TypeError, complex, NS(1.5))
def test_unawaited_warning_during_shutdown(self):
code = ("import asyncio\n"
"async def f(): pass\n"
- "asyncio.gather(f())\n")
+ "async def t(): asyncio.gather(f())\n"
+ "asyncio.run(t())\n")
assert_python_ok("-c", code)
code = ("import sys\n"
c = C('foo')
self.assertEqual(c.object, 'foo')
+ def test_field_named_BUILTINS_frozen(self):
+ # gh-96151
+ @dataclass(frozen=True)
+ class C:
+ BUILTINS: int
+ c = C(5)
+ self.assertEqual(c.BUILTINS, 5)
+
def test_field_named_like_builtin(self):
# Attribute names can shadow built-in names
# since code generation is used.
self.assertIsInstance(d.values(), collections.abc.ValuesView)
self.assertIsInstance(d.values(), collections.abc.MappingView)
self.assertIsInstance(d.values(), collections.abc.Sized)
+ self.assertIsInstance(d.values(), collections.abc.Collection)
+ self.assertIsInstance(d.values(), collections.abc.Iterable)
+ self.assertIsInstance(d.values(), collections.abc.Container)
self.assertIsInstance(d.items(), collections.abc.ItemsView)
self.assertIsInstance(d.items(), collections.abc.MappingView)
def test_case_md5_uintmax(self, size):
self.check('md5', b'A'*size, '28138d306ff1b8281f1a9067e1a1a2b3')
+ @unittest.skipIf(sys.maxsize < _4G - 1, 'test cannot run on 32-bit systems')
+ @bigmemtest(size=_4G - 1, memuse=1, dry_run=False)
+ def test_sha3_update_overflow(self, size):
+ """Regression test for gh-98517 CVE-2022-37454."""
+ h = hashlib.sha3_224()
+ h.update(b'\x01')
+ h.update(b'\x01'*0xffff_ffff)
+ self.assertEqual(h.hexdigest(), '80762e8ce6700f114fec0f621fd97c4b9c00147fa052215294cceeed')
+
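The CVE-2022-37454 regression test above exercises the invariant that incremental `update()` calls must produce the same digest as one call over the concatenated data, at the 32-bit length boundary. The invariant itself can be sketched at a small size:

```python
import hashlib

# Incremental updates must equal a one-shot hash of the same bytes;
# the regression test above checks this at the 0xffff_ffff boundary,
# here we use a small stand-in for the multi-gigabyte input.
h = hashlib.sha3_224()
h.update(b'\x01')
h.update(b'\x01' * 1024)
one_shot = hashlib.sha3_224(b'\x01' * 1025)
assert h.hexdigest() == one_shot.hexdigest()
```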
# use the three examples from Federal Information Processing Standards
# Publication 180-1, Secure Hash Standard, 1995 April 17
# http://www.itl.nist.gov/div897/pubs/fip180-1.htm
import datetime
import threading
from unittest import mock
-from io import BytesIO
+from io import BytesIO, StringIO
import unittest
from test import support
match = self.HTTPResponseMatch.search(response)
self.assertIsNotNone(match)
+ def test_unprintable_not_logged(self):
+ # We call the method from the class directly as our Socketless
+ # Handler subclass overrode it... nice for everything BUT this test.
+ self.handler.client_address = ('127.0.0.1', 1337)
+ log_message = BaseHTTPRequestHandler.log_message
+ with mock.patch.object(sys, 'stderr', StringIO()) as fake_stderr:
+ log_message(self.handler, '/foo')
+ log_message(self.handler, '/\033bar\000\033')
+ log_message(self.handler, '/spam %s.', 'a')
+ log_message(self.handler, '/spam %s.', '\033\x7f\x9f\xa0beans')
+ log_message(self.handler, '"GET /foo\\b"ar\007 HTTP/1.0"')
+ stderr = fake_stderr.getvalue()
+ self.assertNotIn('\033', stderr) # non-printable chars are caught.
+ self.assertNotIn('\000', stderr) # non-printable chars are caught.
+ lines = stderr.splitlines()
+ self.assertIn('/foo', lines[0])
+ self.assertIn(r'/\x1bbar\x00\x1b', lines[1])
+ self.assertIn('/spam a.', lines[2])
+ self.assertIn('/spam \\x1b\\x7f\\x9f\xa0beans.', lines[3])
+ self.assertIn(r'"GET /foo\\b"ar\x07 HTTP/1.0"', lines[4])
+
def test_http_1_1(self):
result = self.send_typical_request(b'GET / HTTP/1.1\r\n\r\n')
self.verify_http_server_response(result[0])
+import gc
import importlib
import importlib.util
import os
self.assertEqual(mod.x, 42)
+ @support.cpython_only
+ def test_create_builtin_subinterp(self):
+ # gh-99578: create_builtin() behavior changes after the creation of the
+ # first sub-interpreter. Test both code paths, before and after the
+ # creation of a sub-interpreter. Previously, create_builtin() had
+ # a reference leak after the creation of the first sub-interpreter.
+
+ import builtins
+ create_builtin = support.get_attribute(_imp, "create_builtin")
+ class Spec:
+ name = "builtins"
+ spec = Spec()
+
+ def check_get_builtins():
+ refcnt = sys.getrefcount(builtins)
+ mod = _imp.create_builtin(spec)
+ self.assertIs(mod, builtins)
+ self.assertEqual(sys.getrefcount(builtins), refcnt + 1)
+ # Check that a GC collection doesn't crash
+ gc.collect()
+
+ check_get_builtins()
+
+ ret = support.run_in_subinterp("import builtins")
+ self.assertEqual(ret, 0)
+
+ check_get_builtins()
+
+
class ReloadTests(unittest.TestCase):
"""Very basic tests to make sure that imp.reload() operates just like
self.assertIn(ep.dist.name, ('distinfo-pkg', 'egginfo-pkg'))
self.assertEqual(ep.dist.version, "1.0.0")
- def test_entry_points_unique_packages(self):
- # Entry points should only be exposed for the first package
- # on sys.path with a given name.
+ def test_entry_points_unique_packages_normalized(self):
+ """
+ Entry points should only be exposed for the first package
+ on sys.path with a given name (even when normalized).
+ """
alt_site_dir = self.fixtures.enter_context(fixtures.tempdir())
self.fixtures.enter_context(self.add_sys_path(alt_site_dir))
alt_pkg = {
- "distinfo_pkg-1.1.0.dist-info": {
+ "DistInfo_pkg-1.1.0.dist-info": {
"METADATA": """
Name: distinfo-pkg
Version: 1.1.0
"""Decorator to protect sys.dont_write_bytecode from mutation and to skip
tests that require it to be set to False."""
if sys.dont_write_bytecode:
- return lambda *args, **kwargs: None
+ return unittest.skip("relies on writing bytecode")(fxn)
@functools.wraps(fxn)
def wrapper(*args, **kwargs):
original = sys.dont_write_bytecode
self.assertEqual(str(inspect.signature(foo)), '(a)')
def test_signature_on_decorated(self):
- import functools
-
def decorator(func):
@functools.wraps(func)
def wrapper(*args, **kwargs) -> int:
def bar(self, a, b):
pass
+ bar = decorator(Foo().bar)
+
self.assertEqual(self.signature(Foo.bar),
((('self', ..., ..., "positional_or_keyword"),
('a', ..., ..., "positional_or_keyword"),
# from "func" to "wrapper", hence no
# return_annotation
+ self.assertEqual(self.signature(bar),
+ ((('a', ..., ..., "positional_or_keyword"),
+ ('b', ..., ..., "positional_or_keyword")),
+ ...))
+
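The decorated-signature tests depend on `functools.wraps` setting `__wrapped__`, which `inspect.signature` follows by default, so a wrapper reports the wrapped function's parameters. A self-contained sketch:

```python
import functools
import inspect

# functools.wraps records the original function as __wrapped__;
# inspect.signature follows that chain by default.
def decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@decorator
def add(a, b):
    return a + b

assert str(inspect.signature(add)) == '(a, b)'
```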
# Test that we handle method wrappers correctly
def decorator(func):
@functools.wraps(func)
open('non-existent', 'r', opener=badopener)
self.assertEqual(str(cm.exception), 'opener returned -2')
+ def test_opener_invalid_fd(self):
+ # Check that OSError is raised with error code EBADF if the
+ # opener returns an invalid file descriptor (see gh-82212).
+ fd = os_helper.make_bad_fd()
+ with self.assertRaises(OSError) as cm:
+ self.open('foo', opener=lambda name, flags: fd)
+ self.assertEqual(cm.exception.errno, errno.EBADF)
+
def test_fileio_closefd(self):
# Issue #4841
with self.open(__file__, 'rb') as f1, \
self.assertEqual(decoder.decode(b"\r\r\n"), "\r\r\n")
class CIncrementalNewlineDecoderTest(IncrementalNewlineDecoderTest):
- pass
+ @support.cpython_only
+ def test_uninitialized(self):
+ uninitialized = self.IncrementalNewlineDecoder.__new__(
+ self.IncrementalNewlineDecoder)
+ self.assertRaises(ValueError, uninitialized.decode, b'bar')
+ self.assertRaises(ValueError, uninitialized.getstate)
+ self.assertRaises(ValueError, uninitialized.setstate, (b'foo', 0))
+ self.assertRaises(ValueError, uninitialized.reset)
+
class PyIncrementalNewlineDecoderTest(IncrementalNewlineDecoderTest):
pass
self.assertRaises(TypeError, cycle, 5)
self.assertEqual(list(islice(cycle(gen3()),10)), [0,1,2,0,1,2,0,1,2,0])
+ def test_cycle_copy_pickle(self):
# check copy, deepcopy, pickle
c = cycle('abc')
self.assertEqual(next(c), 'a')
d = pickle.loads(p) # rebuild the cycle object
self.assertEqual(take(20, d), list('cdeabcdeabcdeabcdeab'))
+ def test_cycle_unpickle_compat(self):
+ testcases = [
+ b'citertools\ncycle\n(c__builtin__\niter\n((lI1\naI2\naI3\natRI1\nbtR((lI1\naI0\ntb.',
+ b'citertools\ncycle\n(c__builtin__\niter\n(](K\x01K\x02K\x03etRK\x01btR(]K\x01aK\x00tb.',
+ b'\x80\x02citertools\ncycle\nc__builtin__\niter\n](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01aK\x00\x86b.',
+ b'\x80\x03citertools\ncycle\ncbuiltins\niter\n](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01aK\x00\x86b.',
+ b'\x80\x04\x95=\x00\x00\x00\x00\x00\x00\x00\x8c\titertools\x8c\x05cycle\x93\x8c\x08builtins\x8c\x04iter\x93](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01aK\x00\x86b.',
+
+ b'citertools\ncycle\n(c__builtin__\niter\n((lp0\nI1\naI2\naI3\natRI1\nbtR(g0\nI1\ntb.',
+ b'citertools\ncycle\n(c__builtin__\niter\n(]q\x00(K\x01K\x02K\x03etRK\x01btR(h\x00K\x01tb.',
+ b'\x80\x02citertools\ncycle\nc__builtin__\niter\n]q\x00(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00K\x01\x86b.',
+ b'\x80\x03citertools\ncycle\ncbuiltins\niter\n]q\x00(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00K\x01\x86b.',
+ b'\x80\x04\x95<\x00\x00\x00\x00\x00\x00\x00\x8c\titertools\x8c\x05cycle\x93\x8c\x08builtins\x8c\x04iter\x93]\x94(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00K\x01\x86b.',
+
+ b'citertools\ncycle\n(c__builtin__\niter\n((lI1\naI2\naI3\natRI1\nbtR((lI1\naI00\ntb.',
+ b'citertools\ncycle\n(c__builtin__\niter\n(](K\x01K\x02K\x03etRK\x01btR(]K\x01aI00\ntb.',
+ b'\x80\x02citertools\ncycle\nc__builtin__\niter\n](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01a\x89\x86b.',
+ b'\x80\x03citertools\ncycle\ncbuiltins\niter\n](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01a\x89\x86b.',
+ b'\x80\x04\x95<\x00\x00\x00\x00\x00\x00\x00\x8c\titertools\x8c\x05cycle\x93\x8c\x08builtins\x8c\x04iter\x93](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01a\x89\x86b.',
+
+ b'citertools\ncycle\n(c__builtin__\niter\n((lp0\nI1\naI2\naI3\natRI1\nbtR(g0\nI01\ntb.',
+ b'citertools\ncycle\n(c__builtin__\niter\n(]q\x00(K\x01K\x02K\x03etRK\x01btR(h\x00I01\ntb.',
+ b'\x80\x02citertools\ncycle\nc__builtin__\niter\n]q\x00(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00\x88\x86b.',
+ b'\x80\x03citertools\ncycle\ncbuiltins\niter\n]q\x00(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00\x88\x86b.',
+ b'\x80\x04\x95;\x00\x00\x00\x00\x00\x00\x00\x8c\titertools\x8c\x05cycle\x93\x8c\x08builtins\x8c\x04iter\x93]\x94(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00\x88\x86b.',
+ ]
+ assert len(testcases) == 20
+ for t in testcases:
+ it = pickle.loads(t)
+ self.assertEqual(take(10, it), [2, 3, 1, 2, 3, 1, 2, 3, 1, 2])
+
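The compatibility payloads above are pre-baked pickles of a mid-iteration `cycle`; the underlying behavior they pin down is that pickling a `cycle` preserves its position, so unpickling resumes where the original left off:

```python
import itertools
import pickle

# Pickling a cycle mid-iteration saves both the remaining iterator
# and the elements already seen, so the copy resumes in place.
c = itertools.cycle([1, 2, 3])
assert next(c) == 1
d = pickle.loads(pickle.dumps(c))
resumed = [next(d) for _ in range(5)]
assert resumed == [2, 3, 1, 2, 3]
```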
def test_cycle_setstate(self):
# Verify both modes for restoring state
self.assertEqual(math.dist(p, q), 5*scale)
self.assertEqual(math.dist(q, p), 5*scale)
+ def test_math_dist_leak(self):
+ # gh-98897: Check for error handling does not leak memory
+ with self.assertRaises(ValueError):
+ math.dist([1, 2], [3, 4, 5])
+
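The leak test above drives the error path of `math.dist`, which validates that both points have the same dimension before doing any work; the happy path is a plain Euclidean distance:

```python
import math

# math.dist computes Euclidean distance and raises ValueError when
# the two points have different dimensions (the path gh-98897 fixed).
assert math.dist([0, 0], [3, 4]) == 5.0
try:
    math.dist([1, 2], [3, 4, 5])   # mismatched dimensions
except ValueError:
    mismatched = True

assert mismatched
```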
def testIsqrt(self):
# Test a variety of inputs, large and small.
test_values = (
self.assertEqual(stdout.split('\n')[6].rstrip('\r'), expected)
+ def test_gh_93696_frozen_list(self):
+ frozen_src = """
+ def func():
+ x = "Sentinel string for gh-93696"
+ print(x)
+ """
+ host_program = """
+ import os
+ import sys
+
+ def _create_fake_frozen_module():
+ with open('gh93696.py') as f:
+ src = f.read()
+
+ # this function has a co_filename as if it were in a frozen module
+ dummy_mod = compile(src, "<frozen gh93696>", "exec")
+ func_code = dummy_mod.co_consts[0]
+
+ mod = type(sys)("gh93696")
+ mod.func = type(lambda: None)(func_code, mod.__dict__)
+ mod.__file__ = 'gh93696.py'
+
+ return mod
+
+ mod = _create_fake_frozen_module()
+ mod.func()
+ """
+ commands = """
+ break 20
+ continue
+ step
+ list
+ quit
+ """
+ with open('gh93696.py', 'w') as f:
+ f.write(textwrap.dedent(frozen_src))
+
+ with open('gh93696_host.py', 'w') as f:
+ f.write(textwrap.dedent(host_program))
+
+ self.addCleanup(os_helper.unlink, 'gh93696.py')
+ self.addCleanup(os_helper.unlink, 'gh93696_host.py')
+ stdout, stderr = self._run_pdb(["gh93696_host.py"], commands)
+ # verify that pdb found the source of the "frozen" function
+ self.assertIn('x = "Sentinel string for gh-93696"', stdout, "Sentinel statement not found")
+
class ChecklineTests(unittest.TestCase):
def setUp(self):
linecache.clearcache() # Pdb.checkline() uses linecache.getline()
self.assertEqual(res[:], expected)
self.assertEqual(res[:5], expected[:5])
+ def test_uname_fields(self):
+ self.assertIn('processor', platform.uname()._fields)
+
+ def test_uname_asdict(self):
+ res = platform.uname()._asdict()
+ self.assertEqual(len(res), 6)
+ self.assertIn('processor', res)
+
@unittest.skipIf(sys.platform in ['win32', 'OpenVMS'], "uname -p not used")
def test_uname_processor(self):
"""
self.checkPatternError(r'()(?(2)a)',
"invalid group reference 2", 5)
+ def test_re_groupref_exists_validation_bug(self):
+ for i in range(256):
+ with self.subTest(code=i):
+ re.compile(r'()(?(1)\x%02x?)' % i)
+
def test_re_groupref_overflow(self):
from sre_constants import MAXGROUPS
self.checkTemplateError('()', r'\g<%s>' % MAXGROUPS, 'xx',
@os_helper.skip_unless_symlink
def test_copytree_dangling_symlinks(self):
- # a dangling symlink raises an error at the end
src_dir = self.mkdtemp()
+ valid_file = os.path.join(src_dir, 'test.txt')
+ write_file(valid_file, 'abc')
+ dir_a = os.path.join(src_dir, 'dir_a')
+ os.mkdir(dir_a)
+ for d in src_dir, dir_a:
+ os.symlink('IDONTEXIST', os.path.join(d, 'broken'))
+ os.symlink(valid_file, os.path.join(d, 'valid'))
+
+ # A dangling symlink should raise an error.
dst_dir = os.path.join(self.mkdtemp(), 'destination')
- os.symlink('IDONTEXIST', os.path.join(src_dir, 'test.txt'))
- os.mkdir(os.path.join(src_dir, 'test_dir'))
- write_file((src_dir, 'test_dir', 'test.txt'), '456')
self.assertRaises(Error, shutil.copytree, src_dir, dst_dir)
- # a dangling symlink is ignored with the proper flag
+ # Dangling symlinks should be ignored with the proper flag.
dst_dir = os.path.join(self.mkdtemp(), 'destination2')
shutil.copytree(src_dir, dst_dir, ignore_dangling_symlinks=True)
- self.assertNotIn('test.txt', os.listdir(dst_dir))
+ for root, dirs, files in os.walk(dst_dir):
+ self.assertNotIn('broken', files)
+ self.assertIn('valid', files)
# a dangling symlink is copied if symlinks=True
dst_dir = os.path.join(self.mkdtemp(), 'destination3')
nd = NormalDist(100, 15)
self.assertNotEqual(nd, lnd)
- def test_pickle_and_copy(self):
+ def test_copy(self):
nd = self.module.NormalDist(37.5, 5.625)
nd1 = copy.copy(nd)
self.assertEqual(nd, nd1)
nd2 = copy.deepcopy(nd)
self.assertEqual(nd, nd2)
- nd3 = pickle.loads(pickle.dumps(nd))
- self.assertEqual(nd, nd3)
+
+ def test_pickle(self):
+ nd = self.module.NormalDist(37.5, 5.625)
+ for proto in range(pickle.HIGHEST_PROTOCOL + 1):
+ with self.subTest(proto=proto):
+ pickled = pickle.loads(pickle.dumps(nd, protocol=proto))
+ self.assertEqual(nd, pickled)
def test_hashability(self):
ND = self.module.NormalDist
from unittest import TestCase, mock
from test.test_grammar import (VALID_UNDERSCORE_LITERALS,
INVALID_UNDERSCORE_LITERALS)
+from test.support import os_helper
+from test.support.script_helper import run_test_script, make_script
import os
import token
self.check_roundtrip(code)
+class CTokenizerBufferTests(unittest.TestCase):
+ def test_newline_at_the_end_of_buffer(self):
+ # See issue 99581: Make sure that if we need to add a new line at the
+ # end of the buffer, we have enough space in the buffer, especially when
+ # the current line is as long as the buffer space available.
+ test_script = f"""\
+ #coding: latin-1
+ #{"a"*10000}
+ #{"a"*10002}"""
+ with os_helper.temp_dir() as temp_dir:
+ file_name = make_script(temp_dir, 'foo', test_script)
+ run_test_script(file_name)
+
+
if __name__ == "__main__":
unittest.main()
'''))
self.assertFalse([msgid for msgid in msgids if 'doc' in msgid])
+ def test_moduledocstring(self):
+ for doc in ('"""doc"""', "r'''doc'''", "R'doc'", 'u"doc"'):
+ with self.subTest(doc):
+ msgids = self.extract_docstrings_from_str(dedent('''\
+ %s
+ ''' % doc))
+ self.assertIn('doc', msgids)
+
+ def test_moduledocstring_bytes(self):
+ msgids = self.extract_docstrings_from_str(dedent('''\
+ b"""doc"""
+ '''))
+ self.assertFalse([msgid for msgid in msgids if 'doc' in msgid])
+
+ def test_moduledocstring_fstring(self):
+ msgids = self.extract_docstrings_from_str(dedent('''\
+ f"""doc"""
+ '''))
+ self.assertFalse([msgid for msgid in msgids if 'doc' in msgid])
+
def test_msgid(self):
msgids = self.extract_docstrings_from_str(
'''_("""doc""" r'str' u"ing")''')
import os
+from pickle import dump
import sys
from test.support import captured_stdout
from test.support.os_helper import (TESTFN, rmtree, unlink)
self.assertIn(modname, coverage)
self.assertEqual(coverage[modname], (5, 100))
+ def test_coverageresults_update(self):
+ # Update empty CoverageResults with a non-empty infile.
+ infile = TESTFN + '-infile'
+ with open(infile, 'wb') as f:
+ dump(({}, {}, {'caller': 1}), f, protocol=1)
+ self.addCleanup(unlink, infile)
+ results = trace.CoverageResults({}, {}, infile, {})
+ self.assertEqual(results.callers, {'caller': 1})
+
### Tests that don't mess with sys.settrace and can be traced
### themselves TODO: Skip tests that do mess with sys.settrace when
### regrtest is invoked with -T option.
self.assertEqual(x.bar, 'abc')
self.assertEqual(x.x, 1)
self.assertEqual(x.__dict__, {'foo': 42, 'bar': 'abc'})
- s = pickle.dumps(P)
+ s = pickle.dumps(P, proto)
D = pickle.loads(s)
class E:
self.assertEqual(D.__parameters__, ())
- with self.assertRaises(Exception):
+ with self.assertRaises(TypeError):
D[int]
- with self.assertRaises(Exception):
+ with self.assertRaises(TypeError):
D[Any]
- with self.assertRaises(Exception):
+ with self.assertRaises(TypeError):
D[T]
def test_new_with_args(self):
class Foo(obj):
pass
+ def test_complex_subclasses(self):
+ T_co = TypeVar("T_co", covariant=True)
+
+ class Base(Generic[T_co]):
+ ...
+
+ T = TypeVar("T")
+
+ # gh-94607: this used to fail before the fix
+ class Sub(Base, Generic[T]):
+ ...
+
+ def test_parameter_detection(self):
+ self.assertEqual(List[T].__parameters__, (T,))
+ self.assertEqual(List[List[T]].__parameters__, (T,))
+ class A:
+ __parameters__ = (T,)
+ # Bare classes should be skipped
+ for a in (List, list):
+ for b in (int, TypeVar, ParamSpec, types.GenericAlias, types.UnionType):
+ with self.subTest(generic=a, sub=b):
+ with self.assertRaisesRegex(TypeError,
+ '.* is not a generic class|'
+ 'no type variables left'):
+ a[b][str]
+ # Duck-typing anything that looks like it has __parameters__.
+ # C version of GenericAlias
+ self.assertEqual(list[A()].__parameters__, (T,))
+
+ def test_non_generic_subscript(self):
+ T = TypeVar('T')
+ class G(Generic[T]):
+ pass
+
+ for s in (int, G, List, list,
+ TypeVar, ParamSpec,
+ types.GenericAlias, types.UnionType):
+
+ for t in Tuple, tuple:
+ with self.subTest(tuple=t, sub=s):
+ self.assertEqual(t[s, T][int], t[s, int])
+ self.assertEqual(t[T, s][int], t[int, s])
+ a = t[s]
+ with self.assertRaises(TypeError):
+ a[int]
+
+ for c in Callable, collections.abc.Callable:
+ with self.subTest(callable=c, sub=s):
+ self.assertEqual(c[[s], T][int], c[[s], int])
+ self.assertEqual(c[[T], s][int], c[[int], s])
+ a = c[[s], s]
+ with self.assertRaises(TypeError):
+ a[int]
+
+
class ClassVarTests(BaseTestCase):
def test_basics(self):
import unicodedata
import unittest
import warnings
-from test.support import import_helper
from test.support import warnings_helper
from test import support, string_tests
from test.support.script_helper import assert_python_failure
self.checkequalnofix(9, 'abcdefghiabc', 'find', 'abc', 1)
self.checkequalnofix(-1, 'abcdefghiabc', 'find', 'def', 4)
+ # test utf-8 non-ascii char
+ self.checkequal(0, 'тест', 'find', 'т')
+ self.checkequal(3, 'тест', 'find', 'т', 1)
+ self.checkequal(-1, 'тест', 'find', 'т', 1, 3)
+ self.checkequal(-1, 'тест', 'find', 'e') # english `e`
+ # test utf-8 non-ascii slice
+ self.checkequal(1, 'тест тест', 'find', 'ес')
+ self.checkequal(1, 'тест тест', 'find', 'ес', 1)
+ self.checkequal(1, 'тест тест', 'find', 'ес', 1, 3)
+ self.checkequal(6, 'тест тест', 'find', 'ес', 2)
+ self.checkequal(-1, 'тест тест', 'find', 'ес', 6, 7)
+ self.checkequal(-1, 'тест тест', 'find', 'ес', 7)
+ self.checkequal(-1, 'тест тест', 'find', 'ec') # english `ec`
+
self.assertRaises(TypeError, 'hello'.find)
self.assertRaises(TypeError, 'hello'.find, 42)
# test mixed kinds
self.checkequalnofix(9, 'abcdefghiabc', 'rfind', 'abc')
self.checkequalnofix(12, 'abcdefghiabc', 'rfind', '')
self.checkequalnofix(12, 'abcdefghiabc', 'rfind', '')
+ # test utf-8 non-ascii char
+ self.checkequal(1, 'тест', 'rfind', 'е')
+ self.checkequal(1, 'тест', 'rfind', 'е', 1)
+ self.checkequal(-1, 'тест', 'rfind', 'е', 2)
+ self.checkequal(-1, 'тест', 'rfind', 'e') # english `e`
+ # test utf-8 non-ascii slice
+ self.checkequal(6, 'тест тест', 'rfind', 'ес')
+ self.checkequal(6, 'тест тест', 'rfind', 'ес', 1)
+ self.checkequal(1, 'тест тест', 'rfind', 'ес', 1, 3)
+ self.checkequal(6, 'тест тест', 'rfind', 'ес', 2)
+ self.checkequal(-1, 'тест тест', 'rfind', 'ес', 6, 7)
+ self.checkequal(-1, 'тест тест', 'rfind', 'ес', 7)
+ self.checkequal(-1, 'тест тест', 'rfind', 'ec') # english `ec`
# test mixed kinds
self.checkequal(0, 'a' + '\u0102' * 100, 'rfind', 'a')
self.checkequal(0, 'a' + '\U00100304' * 100, 'rfind', 'a')
self.assertEqual(proc.rc, 10, proc)
-class CAPITest(unittest.TestCase):
-
- # Test PyUnicode_FromFormat()
- def test_from_format(self):
- import_helper.import_module('ctypes')
- from ctypes import (
- c_char_p,
- pythonapi, py_object, sizeof,
- c_int, c_long, c_longlong, c_ssize_t,
- c_uint, c_ulong, c_ulonglong, c_size_t, c_void_p)
- name = "PyUnicode_FromFormat"
- _PyUnicode_FromFormat = getattr(pythonapi, name)
- _PyUnicode_FromFormat.argtypes = (c_char_p,)
- _PyUnicode_FromFormat.restype = py_object
-
- def PyUnicode_FromFormat(format, *args):
- cargs = tuple(
- py_object(arg) if isinstance(arg, str) else arg
- for arg in args)
- return _PyUnicode_FromFormat(format, *cargs)
-
- def check_format(expected, format, *args):
- text = PyUnicode_FromFormat(format, *args)
- self.assertEqual(expected, text)
-
- # ascii format, non-ascii argument
- check_format('ascii\x7f=unicode\xe9',
- b'ascii\x7f=%U', 'unicode\xe9')
-
- # non-ascii format, ascii argument: ensure that PyUnicode_FromFormatV()
- # raises an error
- self.assertRaisesRegex(ValueError,
- r'^PyUnicode_FromFormatV\(\) expects an ASCII-encoded format '
- 'string, got a non-ASCII byte: 0xe9$',
- PyUnicode_FromFormat, b'unicode\xe9=%s', 'ascii')
-
- # test "%c"
- check_format('\uabcd',
- b'%c', c_int(0xabcd))
- check_format('\U0010ffff',
- b'%c', c_int(0x10ffff))
- with self.assertRaises(OverflowError):
- PyUnicode_FromFormat(b'%c', c_int(0x110000))
- # Issue #18183
- check_format('\U00010000\U00100000',
- b'%c%c', c_int(0x10000), c_int(0x100000))
-
- # test "%"
- check_format('%',
- b'%')
- check_format('%',
- b'%%')
- check_format('%s',
- b'%%s')
- check_format('[%]',
- b'[%%]')
- check_format('%abc',
- b'%%%s', b'abc')
-
- # truncated string
- check_format('abc',
- b'%.3s', b'abcdef')
- check_format('abc[\ufffd',
- b'%.5s', 'abc[\u20ac]'.encode('utf8'))
- check_format("'\\u20acABC'",
- b'%A', '\u20acABC')
- check_format("'\\u20",
- b'%.5A', '\u20acABCDEF')
- check_format("'\u20acABC'",
- b'%R', '\u20acABC')
- check_format("'\u20acA",
- b'%.3R', '\u20acABCDEF')
- check_format('\u20acAB',
- b'%.3S', '\u20acABCDEF')
- check_format('\u20acAB',
- b'%.3U', '\u20acABCDEF')
- check_format('\u20acAB',
- b'%.3V', '\u20acABCDEF', None)
- check_format('abc[\ufffd',
- b'%.5V', None, 'abc[\u20ac]'.encode('utf8'))
-
- # following tests comes from #7330
- # test width modifier and precision modifier with %S
- check_format("repr= abc",
- b'repr=%5S', 'abc')
- check_format("repr=ab",
- b'repr=%.2S', 'abc')
- check_format("repr= ab",
- b'repr=%5.2S', 'abc')
-
- # test width modifier and precision modifier with %R
- check_format("repr= 'abc'",
- b'repr=%8R', 'abc')
- check_format("repr='ab",
- b'repr=%.3R', 'abc')
- check_format("repr= 'ab",
- b'repr=%5.3R', 'abc')
-
- # test width modifier and precision modifier with %A
- check_format("repr= 'abc'",
- b'repr=%8A', 'abc')
- check_format("repr='ab",
- b'repr=%.3A', 'abc')
- check_format("repr= 'ab",
- b'repr=%5.3A', 'abc')
-
- # test width modifier and precision modifier with %s
- check_format("repr= abc",
- b'repr=%5s', b'abc')
- check_format("repr=ab",
- b'repr=%.2s', b'abc')
- check_format("repr= ab",
- b'repr=%5.2s', b'abc')
-
- # test width modifier and precision modifier with %U
- check_format("repr= abc",
- b'repr=%5U', 'abc')
- check_format("repr=ab",
- b'repr=%.2U', 'abc')
- check_format("repr= ab",
- b'repr=%5.2U', 'abc')
-
- # test width modifier and precision modifier with %V
- check_format("repr= abc",
- b'repr=%5V', 'abc', b'123')
- check_format("repr=ab",
- b'repr=%.2V', 'abc', b'123')
- check_format("repr= ab",
- b'repr=%5.2V', 'abc', b'123')
- check_format("repr= 123",
- b'repr=%5V', None, b'123')
- check_format("repr=12",
- b'repr=%.2V', None, b'123')
- check_format("repr= 12",
- b'repr=%5.2V', None, b'123')
-
- # test integer formats (%i, %d, %u)
- check_format('010',
- b'%03i', c_int(10))
- check_format('0010',
- b'%0.4i', c_int(10))
- check_format('-123',
- b'%i', c_int(-123))
- check_format('-123',
- b'%li', c_long(-123))
- check_format('-123',
- b'%lli', c_longlong(-123))
- check_format('-123',
- b'%zi', c_ssize_t(-123))
-
- check_format('-123',
- b'%d', c_int(-123))
- check_format('-123',
- b'%ld', c_long(-123))
- check_format('-123',
- b'%lld', c_longlong(-123))
- check_format('-123',
- b'%zd', c_ssize_t(-123))
-
- check_format('123',
- b'%u', c_uint(123))
- check_format('123',
- b'%lu', c_ulong(123))
- check_format('123',
- b'%llu', c_ulonglong(123))
- check_format('123',
- b'%zu', c_size_t(123))
-
- # test long output
- min_longlong = -(2 ** (8 * sizeof(c_longlong) - 1))
- max_longlong = -min_longlong - 1
- check_format(str(min_longlong),
- b'%lld', c_longlong(min_longlong))
- check_format(str(max_longlong),
- b'%lld', c_longlong(max_longlong))
- max_ulonglong = 2 ** (8 * sizeof(c_ulonglong)) - 1
- check_format(str(max_ulonglong),
- b'%llu', c_ulonglong(max_ulonglong))
- PyUnicode_FromFormat(b'%p', c_void_p(-1))
-
- # test padding (width and/or precision)
- check_format('123'.rjust(10, '0'),
- b'%010i', c_int(123))
- check_format('123'.rjust(100),
- b'%100i', c_int(123))
- check_format('123'.rjust(100, '0'),
- b'%.100i', c_int(123))
- check_format('123'.rjust(80, '0').rjust(100),
- b'%100.80i', c_int(123))
-
- check_format('123'.rjust(10, '0'),
- b'%010u', c_uint(123))
- check_format('123'.rjust(100),
- b'%100u', c_uint(123))
- check_format('123'.rjust(100, '0'),
- b'%.100u', c_uint(123))
- check_format('123'.rjust(80, '0').rjust(100),
- b'%100.80u', c_uint(123))
-
- check_format('123'.rjust(10, '0'),
- b'%010x', c_int(0x123))
- check_format('123'.rjust(100),
- b'%100x', c_int(0x123))
- check_format('123'.rjust(100, '0'),
- b'%.100x', c_int(0x123))
- check_format('123'.rjust(80, '0').rjust(100),
- b'%100.80x', c_int(0x123))
-
- # test %A
- check_format(r"%A:'abc\xe9\uabcd\U0010ffff'",
- b'%%A:%A', 'abc\xe9\uabcd\U0010ffff')
-
- # test %V
- check_format('repr=abc',
- b'repr=%V', 'abc', b'xyz')
-
- # test %p
- # We cannot test the exact result,
- # because it returns a hex representation of a C pointer,
- # which is going to be different each time. But, we can test the format.
- p_format_regex = r'^0x[a-zA-Z0-9]{3,}$'
- p_format1 = PyUnicode_FromFormat(b'%p', 'abc')
- self.assertIsInstance(p_format1, str)
- self.assertRegex(p_format1, p_format_regex)
-
- p_format2 = PyUnicode_FromFormat(b'%p %p', '123456', b'xyz')
- self.assertIsInstance(p_format2, str)
- self.assertRegex(p_format2,
- r'0x[a-zA-Z0-9]{3,} 0x[a-zA-Z0-9]{3,}')
-
- # Extra args are ignored:
- p_format3 = PyUnicode_FromFormat(b'%p', '123456', None, b'xyz')
- self.assertIsInstance(p_format3, str)
- self.assertRegex(p_format3, p_format_regex)
-
- # Test string decode from parameter of %s using utf-8.
- # b'\xe4\xba\xba\xe6\xb0\x91' is utf-8 encoded byte sequence of
- # '\u4eba\u6c11'
- check_format('repr=\u4eba\u6c11',
- b'repr=%V', None, b'\xe4\xba\xba\xe6\xb0\x91')
-
- #Test replace error handler.
- check_format('repr=abc\ufffd',
- b'repr=%V', None, b'abc\xff')
-
- # not supported: copy the raw format string. these tests are just here
- # to check for crashes and should not be considered as specifications
- check_format('%s',
- b'%1%s', b'abc')
- check_format('%1abc',
- b'%1abc')
- check_format('%+i',
- b'%+i', c_int(10))
- check_format('%.%s',
- b'%.%s', b'abc')
-
- # Issue #33817: empty strings
- check_format('',
- b'')
- check_format('',
- b'%s', b'')
-
- # Test PyUnicode_AsWideChar()
- @support.cpython_only
- def test_aswidechar(self):
- from _testcapi import unicode_aswidechar
- import_helper.import_module('ctypes')
- from ctypes import c_wchar, sizeof
-
- wchar, size = unicode_aswidechar('abcdef', 2)
- self.assertEqual(size, 2)
- self.assertEqual(wchar, 'ab')
-
- wchar, size = unicode_aswidechar('abc', 3)
- self.assertEqual(size, 3)
- self.assertEqual(wchar, 'abc')
-
- wchar, size = unicode_aswidechar('abc', 4)
- self.assertEqual(size, 3)
- self.assertEqual(wchar, 'abc\0')
-
- wchar, size = unicode_aswidechar('abc', 10)
- self.assertEqual(size, 3)
- self.assertEqual(wchar, 'abc\0')
-
- wchar, size = unicode_aswidechar('abc\0def', 20)
- self.assertEqual(size, 7)
- self.assertEqual(wchar, 'abc\0def\0')
-
- nonbmp = chr(0x10ffff)
- if sizeof(c_wchar) == 2:
- buflen = 3
- nchar = 2
- else: # sizeof(c_wchar) == 4
- buflen = 2
- nchar = 1
- wchar, size = unicode_aswidechar(nonbmp, buflen)
- self.assertEqual(size, nchar)
- self.assertEqual(wchar, nonbmp + '\0')
-
- # Test PyUnicode_AsWideCharString()
- @support.cpython_only
- def test_aswidecharstring(self):
- from _testcapi import unicode_aswidecharstring
- import_helper.import_module('ctypes')
- from ctypes import c_wchar, sizeof
-
- wchar, size = unicode_aswidecharstring('abc')
- self.assertEqual(size, 3)
- self.assertEqual(wchar, 'abc\0')
-
- wchar, size = unicode_aswidecharstring('abc\0def')
- self.assertEqual(size, 7)
- self.assertEqual(wchar, 'abc\0def\0')
-
- nonbmp = chr(0x10ffff)
- if sizeof(c_wchar) == 2:
- nchar = 2
- else: # sizeof(c_wchar) == 4
- nchar = 1
- wchar, size = unicode_aswidecharstring(nonbmp)
- self.assertEqual(size, nchar)
- self.assertEqual(wchar, nonbmp + '\0')
-
- # Test PyUnicode_AsUCS4()
- @support.cpython_only
- def test_asucs4(self):
- from _testcapi import unicode_asucs4
- for s in ['abc', '\xa1\xa2', '\u4f60\u597d', 'a\U0001f600',
- 'a\ud800b\udfffc', '\ud834\udd1e']:
- l = len(s)
- self.assertEqual(unicode_asucs4(s, l, True), s+'\0')
- self.assertEqual(unicode_asucs4(s, l, False), s+'\uffff')
- self.assertEqual(unicode_asucs4(s, l+1, True), s+'\0\uffff')
- self.assertEqual(unicode_asucs4(s, l+1, False), s+'\0\uffff')
- self.assertRaises(SystemError, unicode_asucs4, s, l-1, True)
- self.assertRaises(SystemError, unicode_asucs4, s, l-2, False)
- s = '\0'.join([s, s])
- self.assertEqual(unicode_asucs4(s, len(s), True), s+'\0')
- self.assertEqual(unicode_asucs4(s, len(s), False), s+'\uffff')
-
- # Test PyUnicode_AsUTF8()
- @support.cpython_only
- def test_asutf8(self):
- from _testcapi import unicode_asutf8
-
- bmp = '\u0100'
- bmp2 = '\uffff'
- nonbmp = chr(0x10ffff)
-
- self.assertEqual(unicode_asutf8(bmp), b'\xc4\x80')
- self.assertEqual(unicode_asutf8(bmp2), b'\xef\xbf\xbf')
- self.assertEqual(unicode_asutf8(nonbmp), b'\xf4\x8f\xbf\xbf')
- self.assertRaises(UnicodeEncodeError, unicode_asutf8, 'a\ud800b\udfffc')
-
- # Test PyUnicode_AsUTF8AndSize()
- @support.cpython_only
- def test_asutf8andsize(self):
- from _testcapi import unicode_asutf8andsize
-
- bmp = '\u0100'
- bmp2 = '\uffff'
- nonbmp = chr(0x10ffff)
-
- self.assertEqual(unicode_asutf8andsize(bmp), (b'\xc4\x80', 2))
- self.assertEqual(unicode_asutf8andsize(bmp2), (b'\xef\xbf\xbf', 3))
- self.assertEqual(unicode_asutf8andsize(nonbmp), (b'\xf4\x8f\xbf\xbf', 4))
- self.assertRaises(UnicodeEncodeError, unicode_asutf8andsize, 'a\ud800b\udfffc')
-
- # Test PyUnicode_FindChar()
- @support.cpython_only
- def test_findchar(self):
- from _testcapi import unicode_findchar
-
- for str in "\xa1", "\u8000\u8080", "\ud800\udc02", "\U0001f100\U0001f1f1":
- for i, ch in enumerate(str):
- self.assertEqual(unicode_findchar(str, ord(ch), 0, len(str), 1), i)
- self.assertEqual(unicode_findchar(str, ord(ch), 0, len(str), -1), i)
-
- str = "!>_<!"
- self.assertEqual(unicode_findchar(str, 0x110000, 0, len(str), 1), -1)
- self.assertEqual(unicode_findchar(str, 0x110000, 0, len(str), -1), -1)
- # start < end
- self.assertEqual(unicode_findchar(str, ord('!'), 1, len(str)+1, 1), 4)
- self.assertEqual(unicode_findchar(str, ord('!'), 1, len(str)+1, -1), 4)
- # start >= end
- self.assertEqual(unicode_findchar(str, ord('!'), 0, 0, 1), -1)
- self.assertEqual(unicode_findchar(str, ord('!'), len(str), 0, 1), -1)
- # negative
- self.assertEqual(unicode_findchar(str, ord('!'), -len(str), -1, 1), 0)
- self.assertEqual(unicode_findchar(str, ord('!'), -len(str), -1, -1), 0)
-
- # Test PyUnicode_CopyCharacters()
- @support.cpython_only
- def test_copycharacters(self):
- from _testcapi import unicode_copycharacters
-
- strings = [
- 'abcde', '\xa1\xa2\xa3\xa4\xa5',
- '\u4f60\u597d\u4e16\u754c\uff01',
- '\U0001f600\U0001f601\U0001f602\U0001f603\U0001f604'
- ]
-
- for idx, from_ in enumerate(strings):
- # wide -> narrow: exceed maxchar limitation
- for to in strings[:idx]:
- self.assertRaises(
- SystemError,
- unicode_copycharacters, to, 0, from_, 0, 5
- )
- # same kind
- for from_start in range(5):
- self.assertEqual(
- unicode_copycharacters(from_, 0, from_, from_start, 5),
- (from_[from_start:from_start+5].ljust(5, '\0'),
- 5-from_start)
- )
- for to_start in range(5):
- self.assertEqual(
- unicode_copycharacters(from_, to_start, from_, to_start, 5),
- (from_[to_start:to_start+5].rjust(5, '\0'),
- 5-to_start)
- )
- # narrow -> wide
- # Tests omitted since this creates invalid strings.
-
- s = strings[0]
- self.assertRaises(IndexError, unicode_copycharacters, s, 6, s, 0, 5)
- self.assertRaises(IndexError, unicode_copycharacters, s, -1, s, 0, 5)
- self.assertRaises(IndexError, unicode_copycharacters, s, 0, s, 6, 5)
- self.assertRaises(IndexError, unicode_copycharacters, s, 0, s, -1, 5)
- self.assertRaises(SystemError, unicode_copycharacters, s, 1, s, 0, 5)
- self.assertRaises(SystemError, unicode_copycharacters, s, 0, s, 0, -1)
- self.assertRaises(SystemError, unicode_copycharacters, s, 0, b'', 0, 0)
-
- @support.cpython_only
- @support.requires_legacy_unicode_capi
- def test_encode_decimal(self):
- from _testcapi import unicode_encodedecimal
- with warnings_helper.check_warnings():
- warnings.simplefilter('ignore', DeprecationWarning)
- self.assertEqual(unicode_encodedecimal('123'),
- b'123')
- self.assertEqual(unicode_encodedecimal('\u0663.\u0661\u0664'),
- b'3.14')
- self.assertEqual(unicode_encodedecimal(
- "\N{EM SPACE}3.14\N{EN SPACE}"), b' 3.14 ')
- self.assertRaises(UnicodeEncodeError,
- unicode_encodedecimal, "123\u20ac", "strict")
- self.assertRaisesRegex(
- ValueError,
- "^'decimal' codec can't encode character",
- unicode_encodedecimal, "123\u20ac", "replace")
-
- @support.cpython_only
- @support.requires_legacy_unicode_capi
- def test_transform_decimal(self):
- from _testcapi import unicode_transformdecimaltoascii as transform_decimal
- with warnings_helper.check_warnings():
- warnings.simplefilter('ignore', DeprecationWarning)
- self.assertEqual(transform_decimal('123'),
- '123')
- self.assertEqual(transform_decimal('\u0663.\u0661\u0664'),
- '3.14')
- self.assertEqual(transform_decimal("\N{EM SPACE}3.14\N{EN SPACE}"),
- "\N{EM SPACE}3.14\N{EN SPACE}")
- self.assertEqual(transform_decimal('123\u20ac'),
- '123\u20ac')
-
- @support.cpython_only
- def test_pep393_utf8_caching_bug(self):
- # Issue #25709: Problem with string concatenation and utf-8 cache
- from _testcapi import getargs_s_hash
- for k in 0x24, 0xa4, 0x20ac, 0x1f40d:
- s = ''
- for i in range(5):
- # Due to CPython specific optimization the 's' string can be
- # resized in-place.
- s += chr(k)
- # Parsing with the "s#" format code calls indirectly
- # PyUnicode_AsUTF8AndSize() which creates the UTF-8
- # encoded string cached in the Unicode object.
- self.assertEqual(getargs_s_hash(s), chr(k).encode() * (i + 1))
- # Check that the second call returns the same result
- self.assertEqual(getargs_s_hash(s), chr(k).encode() * (i + 1))
-
class StringModuleTest(unittest.TestCase):
def test_formatter_parser(self):
def parse(format):
import unicodedata
import unittest
from test.support import (open_urlresource, requires_resource, script_helper,
- cpython_only, check_disallow_instantiation)
+ cpython_only, check_disallow_instantiation,
+ ResourceDenied)
class UnicodeMethodsTest(unittest.TestCase):
except PermissionError:
self.skipTest(f"Permission error when downloading {TESTDATAURL} "
f"into the test data directory")
- except (OSError, HTTPException):
- self.fail(f"Could not retrieve {TESTDATAURL}")
+ except (OSError, HTTPException) as exc:
+ self.skipTest(f"Failed to download {TESTDATAURL}: {exc}")
with testdata:
self.run_normalization_tests(testdata)
"""Check handling of invalid ports."""
for bytes in (False, True):
for parse in (urllib.parse.urlsplit, urllib.parse.urlparse):
- for port in ("foo", "1.5", "-1", "0x10"):
+ for port in ("foo", "1.5", "-1", "0x10", "-0", "1_1", " 1", "1 ", "६"):
with self.subTest(bytes=bytes, parse=parse, port=port):
netloc = "www.example.net:" + port
url = "http://" + netloc
if bytes:
- netloc = netloc.encode("ascii")
- url = url.encode("ascii")
+ if netloc.isascii() and port.isascii():
+ netloc = netloc.encode("ascii")
+ url = url.encode("ascii")
+ else:
+ continue
p = parse(url)
self.assertEqual(p.netloc, netloc)
with self.assertRaises(ValueError):
self.assertEqual(splitnport('127.0.0.1', 55), ('127.0.0.1', 55))
self.assertEqual(splitnport('parrot:cheese'), ('parrot', None))
self.assertEqual(splitnport('parrot:cheese', 55), ('parrot', None))
+ self.assertEqual(splitnport('parrot: +1_0 '), ('parrot', None))
def test_splitquery(self):
# Normal cases are exercised by other tests; ensure that we also
if sys.platform == 'win32':
expect_exe = os.path.normcase(os.path.realpath(expect_exe))
- def pip_cmd_checker(cmd):
+ def pip_cmd_checker(cmd, **kwargs):
cmd[0] = os.path.normcase(cmd[0])
self.assertEqual(
cmd,
)
fake_context = builder.ensure_directories(fake_env_dir)
- with patch('venv.subprocess.check_call', pip_cmd_checker):
+ with patch('venv.subprocess.check_output', pip_cmd_checker):
builder.upgrade_dependencies(fake_context)
@requireVenvCreate
try:
yield
except subprocess.CalledProcessError as exc:
- out = exc.output.decode(errors="replace")
- err = exc.stderr.decode(errors="replace")
+ out = (exc.output or b'').decode(errors="replace")
+ err = (exc.stderr or b'').decode(errors="replace")
self.fail(
f"{exc}\n\n"
f"**Subprocess Output**\n{out}\n\n"
# deallocation of c2.
del c2
- def test_callback_in_cycle_1(self):
+ def test_callback_in_cycle(self):
import gc
class J(object):
del I, J, II
gc.collect()
- def test_callback_in_cycle_2(self):
+ def test_callback_reachable_one_way(self):
import gc
- # This is just like test_callback_in_cycle_1, except that II is an
- # old-style class. The symptom is different then: an instance of an
- # old-style class looks in its own __dict__ first. 'J' happens to
- # get cleared from I.__dict__ before 'wr', and 'J' was never in II's
- # __dict__, so the attribute isn't found. The difference is that
- # the old-style II doesn't have a NULL __mro__ (it doesn't have any
- # __mro__), so no segfault occurs. Instead it got:
- # test_callback_in_cycle_2 (__main__.ReferencesTestCase) ...
- # Exception exceptions.AttributeError:
- # "II instance has no attribute 'J'" in <bound method II.acallback
- # of <?.II instance at 0x00B9B4B8>> ignored
-
- class J(object):
- pass
-
- class II:
- def acallback(self, ignore):
- self.J
-
- I = II()
- I.J = J
- I.wr = weakref.ref(J, I.acallback)
-
- del I, J, II
- gc.collect()
-
- def test_callback_in_cycle_3(self):
- import gc
-
- # This one broke the first patch that fixed the last two. In this
- # case, the objects reachable from the callback aren't also reachable
+ # This one broke the first patch that fixed the previous test. In this case,
+ # the objects reachable from the callback aren't also reachable
# from the object (c1) *triggering* the callback: you can get to
# c1 from c2, but not vice-versa. The result was that c2's __dict__
# got tp_clear'ed by the time the c2.cb callback got invoked.
del c1, c2
gc.collect()
- def test_callback_in_cycle_4(self):
+ def test_callback_different_classes(self):
import gc
- # Like test_callback_in_cycle_3, except c2 and c1 have different
+ # Like test_callback_reachable_one_way, except c2 and c1 have different
# classes. c2's class (C) isn't reachable from c1 then, so protecting
# objects reachable from the dying object (c1) isn't enough to stop
# c2's class (C) from getting tp_clear'ed before c2.cb is invoked.
"lines", "xpixels" and "ypixels". There is an additional possible
option "update", which if given then all subsequent options ensure
that any possible out of date information is recalculated."""
- args = ['-%s' % arg for arg in args if not arg.startswith('-')]
+ args = ['-%s' % arg for arg in args]
args += [index1, index2]
res = self.tk.call(self._w, 'count', *args) or None
if res is not None and len(args) <= 3:
self.assertEqual(text.search('-test', '1.0', 'end'), '1.2')
self.assertEqual(text.search('test', '1.0', 'end'), '1.3')
+ def test_count(self):
+ # XXX Some assertions do not check against the intended result,
+ # but instead check the current result to prevent regression.
+ text = self.text
+ text.insert('1.0',
+ 'Lorem ipsum dolor sit amet,\n'
+ 'consectetur adipiscing elit,\n'
+ 'sed do eiusmod tempor incididunt\n'
+ 'ut labore et dolore magna aliqua.')
+
+ options = ('chars', 'indices', 'lines',
+ 'displaychars', 'displayindices', 'displaylines',
+ 'xpixels', 'ypixels')
+ if self.wantobjects:
+ self.assertEqual(len(text.count('1.0', 'end', *options)), 8)
+ else:
+ text.count('1.0', 'end', *options)
+ self.assertEqual(text.count('1.0', 'end', 'chars', 'lines'), (124, 4)
+ if self.wantobjects else '124 4')
+ self.assertEqual(text.count('1.3', '4.5', 'chars', 'lines'), (92, 3)
+ if self.wantobjects else '92 3')
+ self.assertEqual(text.count('4.5', '1.3', 'chars', 'lines'), (-92, -3)
+ if self.wantobjects else '-92 -3')
+ self.assertEqual(text.count('1.3', '1.3', 'chars', 'lines'), (0, 0)
+ if self.wantobjects else '0 0')
+ self.assertEqual(text.count('1.0', 'end', 'lines'), (4,)
+ if self.wantobjects else ('4',))
+ self.assertEqual(text.count('end', '1.0', 'lines'), (-4,)
+ if self.wantobjects else ('-4',))
+ self.assertEqual(text.count('1.3', '1.5', 'lines'), None
+ if self.wantobjects else ('0',))
+ self.assertEqual(text.count('1.3', '1.3', 'lines'), None
+ if self.wantobjects else ('0',))
+ self.assertEqual(text.count('1.0', 'end'), (124,) # 'indices' by default
+ if self.wantobjects else ('124',))
+ self.assertRaises(tkinter.TclError, text.count, '1.0', 'end', 'spam')
+ self.assertRaises(tkinter.TclError, text.count, '1.0', 'end', '-lines')
+
+ self.assertIsInstance(text.count('1.3', '1.5', 'ypixels'), tuple)
+ self.assertIsInstance(text.count('1.3', '1.5', 'update', 'ypixels'), int
+ if self.wantobjects else str)
+ self.assertEqual(text.count('1.3', '1.3', 'update', 'ypixels'), None
+ if self.wantobjects else '0')
+ self.assertEqual(text.count('1.3', '1.5', 'update', 'indices'), 2
+ if self.wantobjects else '2')
+ self.assertEqual(text.count('1.3', '1.3', 'update', 'indices'), None
+ if self.wantobjects else '0')
+ self.assertEqual(text.count('1.3', '1.5', 'update'), (2,)
+ if self.wantobjects else ('2',))
+ self.assertEqual(text.count('1.3', '1.3', 'update'), None
+ if self.wantobjects else ('0',))
+
if __name__ == "__main__":
unittest.main()
self.checkPixelsParam(widget, 'yscrollincrement',
10, 0, 11.2, 13.6, -10, '0.1i')
+ def _test_option_joinstyle(self, c, factory):
+ for joinstyle in 'bevel', 'miter', 'round':
+ i = factory(joinstyle=joinstyle)
+ self.assertEqual(c.itemcget(i, 'joinstyle'), joinstyle)
+ self.assertRaises(TclError, factory, joinstyle='spam')
+
+ def _test_option_smooth(self, c, factory):
+ for smooth in 1, True, '1', 'true', 'yes', 'on':
+ i = factory(smooth=smooth)
+ self.assertEqual(c.itemcget(i, 'smooth'), 'true')
+ for smooth in 0, False, '0', 'false', 'no', 'off':
+ i = factory(smooth=smooth)
+ self.assertEqual(c.itemcget(i, 'smooth'), '0')
+ i = factory(smooth=True, splinestep=30)
+ self.assertEqual(c.itemcget(i, 'smooth'), 'true')
+ self.assertEqual(c.itemcget(i, 'splinestep'), '30')
+ i = factory(smooth='raw', splinestep=30)
+ self.assertEqual(c.itemcget(i, 'smooth'), 'raw')
+ self.assertEqual(c.itemcget(i, 'splinestep'), '30')
+ self.assertRaises(TclError, factory, smooth='spam')
+
+ def test_create_rectangle(self):
+ c = self.create()
+ i1 = c.create_rectangle(20, 30, 60, 10)
+ self.assertEqual(c.coords(i1), [20.0, 10.0, 60.0, 30.0])
+ self.assertEqual(c.bbox(i1), (19, 9, 61, 31))
+
+ i2 = c.create_rectangle([21, 31, 61, 11])
+ self.assertEqual(c.coords(i2), [21.0, 11.0, 61.0, 31.0])
+ self.assertEqual(c.bbox(i2), (20, 10, 62, 32))
+
+ i3 = c.create_rectangle((22, 32), (62, 12))
+ self.assertEqual(c.coords(i3), [22.0, 12.0, 62.0, 32.0])
+ self.assertEqual(c.bbox(i3), (21, 11, 63, 33))
+
+ i4 = c.create_rectangle([(23, 33), (63, 13)])
+ self.assertEqual(c.coords(i4), [23.0, 13.0, 63.0, 33.0])
+ self.assertEqual(c.bbox(i4), (22, 12, 64, 34))
+
+ self.assertRaises(TclError, c.create_rectangle, 20, 30, 60)
+ self.assertRaises(TclError, c.create_rectangle, [20, 30, 60])
+ self.assertRaises(TclError, c.create_rectangle, 20, 30, 40, 50, 60, 10)
+ self.assertRaises(TclError, c.create_rectangle, [20, 30, 40, 50, 60, 10])
+ self.assertRaises(TclError, c.create_rectangle, 20, 30)
+ self.assertRaises(TclError, c.create_rectangle, [20, 30])
+ self.assertRaises(IndexError, c.create_rectangle)
+ self.assertRaises(IndexError, c.create_rectangle, [])
+
+ def test_create_line(self):
+ c = self.create()
+ i1 = c.create_line(20, 30, 40, 50, 60, 10)
+ self.assertEqual(c.coords(i1), [20.0, 30.0, 40.0, 50.0, 60.0, 10.0])
+ self.assertEqual(c.bbox(i1), (18, 8, 62, 52))
+ self.assertEqual(c.itemcget(i1, 'arrow'), 'none')
+ self.assertEqual(c.itemcget(i1, 'arrowshape'), '8 10 3')
+ self.assertEqual(c.itemcget(i1, 'capstyle'), 'butt')
+ self.assertEqual(c.itemcget(i1, 'joinstyle'), 'round')
+ self.assertEqual(c.itemcget(i1, 'smooth'), '0')
+ self.assertEqual(c.itemcget(i1, 'splinestep'), '12')
+
+ i2 = c.create_line([21, 31, 41, 51, 61, 11])
+ self.assertEqual(c.coords(i2), [21.0, 31.0, 41.0, 51.0, 61.0, 11.0])
+ self.assertEqual(c.bbox(i2), (19, 9, 63, 53))
+
+ i3 = c.create_line((22, 32), (42, 52), (62, 12))
+ self.assertEqual(c.coords(i3), [22.0, 32.0, 42.0, 52.0, 62.0, 12.0])
+ self.assertEqual(c.bbox(i3), (20, 10, 64, 54))
+
+ i4 = c.create_line([(23, 33), (43, 53), (63, 13)])
+ self.assertEqual(c.coords(i4), [23.0, 33.0, 43.0, 53.0, 63.0, 13.0])
+ self.assertEqual(c.bbox(i4), (21, 11, 65, 55))
+
+ self.assertRaises(TclError, c.create_line, 20, 30, 60)
+ self.assertRaises(TclError, c.create_line, [20, 30, 60])
+ self.assertRaises(TclError, c.create_line, 20, 30)
+ self.assertRaises(TclError, c.create_line, [20, 30])
+ self.assertRaises(IndexError, c.create_line)
+ self.assertRaises(IndexError, c.create_line, [])
+
+ for arrow in 'none', 'first', 'last', 'both':
+ i = c.create_line(20, 30, 60, 10, arrow=arrow)
+ self.assertEqual(c.itemcget(i, 'arrow'), arrow)
+ i = c.create_line(20, 30, 60, 10, arrow='first', arrowshape=[10, 15, 5])
+ self.assertEqual(c.itemcget(i, 'arrowshape'), '10 15 5')
+ self.assertRaises(TclError, c.create_line, 20, 30, 60, 10, arrow='spam')
+
+ for capstyle in 'butt', 'projecting', 'round':
+ i = c.create_line(20, 30, 60, 10, capstyle=capstyle)
+ self.assertEqual(c.itemcget(i, 'capstyle'), capstyle)
+ self.assertRaises(TclError, c.create_line, 20, 30, 60, 10, capstyle='spam')
+
+ self._test_option_joinstyle(c,
+ lambda **kwargs: c.create_line(20, 30, 40, 50, 60, 10, **kwargs))
+ self._test_option_smooth(c,
+ lambda **kwargs: c.create_line(20, 30, 60, 10, **kwargs))
+
+ def test_create_polygon(self):
+ c = self.create()
+ i1 = c.create_polygon(20, 30, 40, 50, 60, 10)
+ self.assertEqual(c.coords(i1), [20.0, 30.0, 40.0, 50.0, 60.0, 10.0])
+ self.assertEqual(c.bbox(i1), (19, 9, 61, 51))
+ self.assertEqual(c.itemcget(i1, 'joinstyle'), 'round')
+ self.assertEqual(c.itemcget(i1, 'smooth'), '0')
+ self.assertEqual(c.itemcget(i1, 'splinestep'), '12')
+
+ i2 = c.create_polygon([21, 31, 41, 51, 61, 11])
+ self.assertEqual(c.coords(i2), [21.0, 31.0, 41.0, 51.0, 61.0, 11.0])
+ self.assertEqual(c.bbox(i2), (20, 10, 62, 52))
+
+ i3 = c.create_polygon((22, 32), (42, 52), (62, 12))
+ self.assertEqual(c.coords(i3), [22.0, 32.0, 42.0, 52.0, 62.0, 12.0])
+ self.assertEqual(c.bbox(i3), (21, 11, 63, 53))
+
+ i4 = c.create_polygon([(23, 33), (43, 53), (63, 13)])
+ self.assertEqual(c.coords(i4), [23.0, 33.0, 43.0, 53.0, 63.0, 13.0])
+ self.assertEqual(c.bbox(i4), (22, 12, 64, 54))
+
+ self.assertRaises(TclError, c.create_polygon, 20, 30, 60)
+ self.assertRaises(TclError, c.create_polygon, [20, 30, 60])
+ self.assertRaises(IndexError, c.create_polygon)
+ self.assertRaises(IndexError, c.create_polygon, [])
+
+ self._test_option_joinstyle(c,
+ lambda **kwargs: c.create_polygon(20, 30, 40, 50, 60, 10, **kwargs))
+ self._test_option_smooth(c,
+ lambda **kwargs: c.create_polygon(20, 30, 40, 50, 60, 10, **kwargs))
+
+ def test_coords(self):
+ c = self.create()
+ i = c.create_line(20, 30, 40, 50, 60, 10, tags='x')
+ self.assertEqual(c.coords(i), [20.0, 30.0, 40.0, 50.0, 60.0, 10.0])
+ self.assertEqual(c.coords('x'), [20.0, 30.0, 40.0, 50.0, 60.0, 10.0])
+ self.assertEqual(c.bbox(i), (18, 8, 62, 52))
+
+ c.coords(i, 50, 60, 70, 80, 90, 40)
+ self.assertEqual(c.coords(i), [50.0, 60.0, 70.0, 80.0, 90.0, 40.0])
+ self.assertEqual(c.bbox(i), (48, 38, 92, 82))
+
+ c.coords(i, [21, 31, 41, 51, 61, 11])
+ self.assertEqual(c.coords(i), [21.0, 31.0, 41.0, 51.0, 61.0, 11.0])
+
+ c.coords(i, 20, 30, 60, 10)
+ self.assertEqual(c.coords(i), [20.0, 30.0, 60.0, 10.0])
+ self.assertEqual(c.bbox(i), (18, 8, 62, 32))
+
+ self.assertRaises(TclError, c.coords, i, 20, 30, 60)
+ self.assertRaises(TclError, c.coords, i, [20, 30, 60])
+ self.assertRaises(TclError, c.coords, i, 20, 30)
+ self.assertRaises(TclError, c.coords, i, [20, 30])
+
+ c.coords(i, '20', '30c', '60i', '10p')
+ coords = c.coords(i)
+ self.assertIsInstance(coords, list)
+ self.assertEqual(len(coords), 4)
+ self.assertEqual(coords[0], 20)
+ for i in range(4):
+ self.assertIsInstance(coords[i], float)
+
@requires_tcl(8, 6)
def test_moveto(self):
widget = self.create()
try:
with open(self.infile, 'rb') as f:
counts, calledfuncs, callers = pickle.load(f)
- self.update(self.__class__(counts, calledfuncs, callers))
+ self.update(self.__class__(counts, calledfuncs, callers=callers))
except (OSError, EOFError, ValueError) as err:
print(("Skipping counts file %r: %s"
% (self.infile, err)), file=sys.stderr)
# of difflib. See #11763.
_diffThreshold = 2**16
- # Attribute used by TestSuite for classSetUp
-
- _classSetupFailed = False
-
- _class_cleanups = []
+ def __init_subclass__(cls, *args, **kwargs):
+ # Attribute used by TestSuite for classSetUp
+ cls._classSetupFailed = False
+ cls._class_cleanups = []
+ super().__init_subclass__(*args, **kwargs)
def __init__(self, methodName='runTest'):
"""Create an instance of the class that will use the named test
from types import CodeType, ModuleType, MethodType
from unittest.util import safe_repr
from functools import wraps, partial
+from threading import RLock
class InvalidSpecError(Exception):
class NonCallableMock(Base):
"""A non-callable version of `Mock`"""
+ # Store a mutex as a class attribute in order to protect concurrent access
+ # to mock attributes. Using a class attribute allows all NonCallableMock
+ # instances to share the mutex for simplicity.
+ #
+ # See https://github.com/python/cpython/issues/98624 for why this is
+ # necessary.
+ _lock = RLock()
+
def __new__(cls, /, *args, **kw):
# every instance has its own class
# so we can create magic methods on the
f"{name!r} is not a valid assertion. Use a spec "
f"for the mock if {name!r} is meant to be an attribute.")
- result = self._mock_children.get(name)
- if result is _deleted:
- raise AttributeError(name)
- elif result is None:
- wraps = None
- if self._mock_wraps is not None:
- # XXXX should we get the attribute without triggering code
- # execution?
- wraps = getattr(self._mock_wraps, name)
-
- result = self._get_child_mock(
- parent=self, name=name, wraps=wraps, _new_name=name,
- _new_parent=self
- )
- self._mock_children[name] = result
-
- elif isinstance(result, _SpecState):
- try:
- result = create_autospec(
- result.spec, result.spec_set, result.instance,
- result.parent, result.name
+ with NonCallableMock._lock:
+ result = self._mock_children.get(name)
+ if result is _deleted:
+ raise AttributeError(name)
+ elif result is None:
+ wraps = None
+ if self._mock_wraps is not None:
+ # XXXX should we get the attribute without triggering code
+ # execution?
+ wraps = getattr(self._mock_wraps, name)
+
+ result = self._get_child_mock(
+ parent=self, name=name, wraps=wraps, _new_name=name,
+ _new_parent=self
)
- except InvalidSpecError:
- target_name = self.__dict__['_mock_name'] or self
- raise InvalidSpecError(
- f'Cannot autospec attr {name!r} from target '
- f'{target_name!r} as it has already been mocked out. '
- f'[target={self!r}, attr={result.spec!r}]')
- self._mock_children[name] = result
+ self._mock_children[name] = result
+
+ elif isinstance(result, _SpecState):
+ try:
+ result = create_autospec(
+ result.spec, result.spec_set, result.instance,
+ result.parent, result.name
+ )
+ except InvalidSpecError:
+ target_name = self.__dict__['_mock_name'] or self
+ raise InvalidSpecError(
+ f'Cannot autospec attr {name!r} from target '
+ f'{target_name!r} as it has already been mocked out. '
+ f'[target={self!r}, attr={result.spec!r}]')
+ self._mock_children[name] = result
return result
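The locked lookup above is a check-then-create pattern guarded by a shared re-entrant lock. A minimal standalone sketch of the same idea (the `ChildRegistry` class here is illustrative, not part of `unittest.mock`):

```python
import threading

class ChildRegistry:
    """Lazily creates child objects; a shared RLock makes lookup thread-safe."""
    # Class-level lock, like NonCallableMock._lock: one mutex for all instances.
    _lock = threading.RLock()

    def __init__(self):
        self._children = {}

    def get_child(self, name):
        # Re-entrant lock: still safe if child creation recurses into get_child().
        with ChildRegistry._lock:
            child = self._children.get(name)
            if child is None:
                child = object()
                self._children[name] = child
            return child

reg = ChildRegistry()
a = reg.get_child('spam')
assert reg.get_child('spam') is a  # repeated lookup returns the same child
```

Without the lock, two threads racing through `get_child('spam')` could each create a child and observe different objects; holding the lock across both the lookup and the insert removes that window.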
def __call__(self, f):
if isinstance(f, type):
return self.decorate_class(f)
+ if inspect.iscoroutinefunction(f):
+ return self.decorate_async_callable(f)
+ return self.decorate_callable(f)
+
+
+ def decorate_callable(self, f):
@wraps(f)
def _inner(*args, **kw):
self._patch_dict()
return _inner
+ def decorate_async_callable(self, f):
+ @wraps(f)
+ async def _inner(*args, **kw):
+ self._patch_dict()
+ try:
+ return await f(*args, **kw)
+ finally:
+ self._unpatch_dict()
+
+ return _inner
+
+
def decorate_class(self, klass):
for attr in dir(klass):
attr_value = getattr(klass, attr)
ret = None
first = True
excs = [(exctype, value, tb)]
+ seen = {id(value)} # Detect loops in chained exceptions.
while excs:
(exctype, value, tb) = excs.pop()
# Skip test runner traceback levels
if value is not None:
for c in (value.__cause__, value.__context__):
- if c is not None:
+ if c is not None and id(c) not in seen:
excs.append((type(c), c, c.__traceback__))
+ seen.add(id(c))
return ret
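The `seen` set of object ids is what breaks self-referential and cyclic exception chains. A standalone sketch of the same traversal (the helper name `iter_chain` is hypothetical):

```python
def iter_chain(exc):
    """Walk __cause__/__context__ without looping on cyclic chains."""
    excs = [exc]
    seen = {id(exc)}  # track ids, since exceptions are not hashable by value
    while excs:
        value = excs.pop()
        yield value
        for c in (value.__cause__, value.__context__):
            if c is not None and id(c) not in seen:
                excs.append(c)
                seen.add(id(c))

# A self-referencing chain like the one in the test above:
loop = Exception("Loop")
loop.__cause__ = loop
loop.__context__ = loop
assert len(list(iter_chain(loop))) == 1  # terminates despite the self-loop
```

Without the `seen` guard, the `while` loop would push `loop` back onto the stack forever, which is exactly the hang reported in gh-98458.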
def _is_relevant_tb_level(self, tb):
self.addCleanup(support.gc_collect)
def test_full_cycle(self):
+ expected = ['setUp',
+ 'asyncSetUp',
+ 'test',
+ 'asyncTearDown',
+ 'tearDown',
+ 'cleanup6',
+ 'cleanup5',
+ 'cleanup4',
+ 'cleanup3',
+ 'cleanup2',
+ 'cleanup1']
class Test(unittest.IsolatedAsyncioTestCase):
def setUp(self):
self.assertEqual(events, [])
events.append('setUp')
+ self.addCleanup(self.on_cleanup1)
+ self.addAsyncCleanup(self.on_cleanup2)
async def asyncSetUp(self):
- self.assertEqual(events, ['setUp'])
+ self.assertEqual(events, expected[:1])
events.append('asyncSetUp')
- self.addAsyncCleanup(self.on_cleanup1)
+ self.addCleanup(self.on_cleanup3)
+ self.addAsyncCleanup(self.on_cleanup4)
async def test_func(self):
- self.assertEqual(events, ['setUp',
- 'asyncSetUp'])
+ self.assertEqual(events, expected[:2])
events.append('test')
- self.addAsyncCleanup(self.on_cleanup2)
+ self.addCleanup(self.on_cleanup5)
+ self.addAsyncCleanup(self.on_cleanup6)
async def asyncTearDown(self):
- self.assertEqual(events, ['setUp',
- 'asyncSetUp',
- 'test'])
+ self.assertEqual(events, expected[:3])
events.append('asyncTearDown')
def tearDown(self):
- self.assertEqual(events, ['setUp',
- 'asyncSetUp',
- 'test',
- 'asyncTearDown'])
+ self.assertEqual(events, expected[:4])
events.append('tearDown')
- async def on_cleanup1(self):
- self.assertEqual(events, ['setUp',
- 'asyncSetUp',
- 'test',
- 'asyncTearDown',
- 'tearDown',
- 'cleanup2'])
+ def on_cleanup1(self):
+ self.assertEqual(events, expected[:10])
events.append('cleanup1')
async def on_cleanup2(self):
- self.assertEqual(events, ['setUp',
- 'asyncSetUp',
- 'test',
- 'asyncTearDown',
- 'tearDown'])
+ self.assertEqual(events, expected[:9])
events.append('cleanup2')
+ def on_cleanup3(self):
+ self.assertEqual(events, expected[:8])
+ events.append('cleanup3')
+
+ async def on_cleanup4(self):
+ self.assertEqual(events, expected[:7])
+ events.append('cleanup4')
+
+ def on_cleanup5(self):
+ self.assertEqual(events, expected[:6])
+ events.append('cleanup5')
+
+ async def on_cleanup6(self):
+ self.assertEqual(events, expected[:5])
+ events.append('cleanup6')
+
events = []
test = Test("test_func")
result = test.run()
self.assertEqual(result.errors, [])
self.assertEqual(result.failures, [])
- expected = ['setUp', 'asyncSetUp', 'test',
- 'asyncTearDown', 'tearDown', 'cleanup2', 'cleanup1']
self.assertEqual(events, expected)
events = []
self.assertEqual(len(dropped), 1)
self.assertIn("raise self.failureException(msg)", dropped[0])
+ def test_addFailure_filter_traceback_frames_chained_exception_self_loop(self):
+ class Foo(unittest.TestCase):
+ def test_1(self):
+ pass
+
+ def get_exc_info():
+ try:
+ loop = Exception("Loop")
+ loop.__cause__ = loop
+ loop.__context__ = loop
+ raise loop
+ except:
+ return sys.exc_info()
+
+ exc_info_tuple = get_exc_info()
+
+ test = Foo('test_1')
+ result = unittest.TestResult()
+ result.startTest(test)
+ result.addFailure(test, exc_info_tuple)
+ result.stopTest(test)
+
+ formatted_exc = result.failures[0][1]
+ self.assertEqual(formatted_exc.count("Exception: Loop\n"), 1)
+
+ def test_addFailure_filter_traceback_frames_chained_exception_cycle(self):
+ class Foo(unittest.TestCase):
+ def test_1(self):
+ pass
+
+ def get_exc_info():
+ try:
+ # Create two directionally opposed cycles
+ # __cause__ in one direction, __context__ in the other
+ A, B, C = Exception("A"), Exception("B"), Exception("C")
+ edges = [(C, B), (B, A), (A, C)]
+ for ex1, ex2 in edges:
+ ex1.__cause__ = ex2
+ ex2.__context__ = ex1
+ raise C
+ except:
+ return sys.exc_info()
+
+ exc_info_tuple = get_exc_info()
+
+ test = Foo('test_1')
+ result = unittest.TestResult()
+ result.startTest(test)
+ result.addFailure(test, exc_info_tuple)
+ result.stopTest(test)
+
+ formatted_exc = result.failures[0][1]
+ self.assertEqual(formatted_exc.count("Exception: A\n"), 1)
+ self.assertEqual(formatted_exc.count("Exception: B\n"), 1)
+ self.assertEqual(formatted_exc.count("Exception: C\n"), 1)
+
# "addError(test, err)"
# ...
# "Called when the test case test raises an unexpected exception err
class TestableTest(unittest.TestCase):
def setUp(self):
ordering.append('setUp')
+ test.addCleanup(cleanup2)
if blowUp:
raise Exception('foo')
def testNothing(self):
ordering.append('test')
+ test.addCleanup(cleanup3)
def tearDown(self):
ordering.append('tearDown')
ordering.append('cleanup1')
def cleanup2():
ordering.append('cleanup2')
+ def cleanup3():
+ ordering.append('cleanup3')
test.addCleanup(cleanup1)
- test.addCleanup(cleanup2)
def success(some_test):
self.assertEqual(some_test, test)
result.addSuccess = success
test.run(result)
- self.assertEqual(ordering, ['setUp', 'test', 'tearDown',
+ self.assertEqual(ordering, ['setUp', 'test', 'tearDown', 'cleanup3',
'cleanup2', 'cleanup1', 'success'])
blowUp = True
test = TestableTest('testNothing')
test.addCleanup(cleanup1)
test.run(result)
- self.assertEqual(ordering, ['setUp', 'cleanup1'])
+ self.assertEqual(ordering, ['setUp', 'cleanup2', 'cleanup1'])
def testTestCaseDebugExecutesCleanups(self):
ordering = []
def testNothing(self):
ordering.append('test')
+ self.addCleanup(cleanup3)
def tearDown(self):
ordering.append('tearDown')
+ test.addCleanup(cleanup4)
test = TestableTest('testNothing')
test.addCleanup(cleanup2)
def cleanup2():
ordering.append('cleanup2')
+ def cleanup3():
+ ordering.append('cleanup3')
+ def cleanup4():
+ ordering.append('cleanup4')
test.debug()
- self.assertEqual(ordering, ['setUp', 'test', 'tearDown', 'cleanup1', 'cleanup2'])
+ self.assertEqual(ordering, ['setUp', 'test', 'tearDown', 'cleanup4',
+ 'cleanup3', 'cleanup1', 'cleanup2'])
class TestClassCleanup(unittest.TestCase):
ordering.append('test')
@classmethod
def tearDownClass(cls):
+ ordering.append('tearDownClass')
raise Exception('TearDownClassExc')
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestableTest)
with self.assertRaises(Exception) as cm:
suite.debug()
self.assertEqual(str(cm.exception), 'TearDownClassExc')
- self.assertEqual(ordering, ['setUpClass', 'test'])
+ self.assertEqual(ordering, ['setUpClass', 'test', 'tearDownClass'])
self.assertTrue(TestableTest._class_cleanups)
TestableTest._class_cleanups.clear()
with self.assertRaises(Exception) as cm:
suite.debug()
self.assertEqual(str(cm.exception), 'TearDownClassExc')
- self.assertEqual(ordering, ['setUpClass', 'test'])
+ self.assertEqual(ordering, ['setUpClass', 'test', 'tearDownClass'])
self.assertTrue(TestableTest._class_cleanups)
TestableTest._class_cleanups.clear()
self.assertEqual(ordering,
['setUpClass', 'test', 'tearDownClass', 'cleanup_good'])
+ def test_run_nested_test(self):
+ ordering = []
+
+ class InnerTest(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ ordering.append('inner setup')
+ cls.addClassCleanup(ordering.append, 'inner cleanup')
+ def test(self):
+ ordering.append('inner test')
+
+ class OuterTest(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ ordering.append('outer setup')
+ cls.addClassCleanup(ordering.append, 'outer cleanup')
+ def test(self):
+ ordering.append('start outer test')
+ runTests(InnerTest)
+ ordering.append('end outer test')
+
+ runTests(OuterTest)
+ self.assertEqual(ordering, [
+ 'outer setup', 'start outer test',
+ 'inner setup', 'inner test', 'inner cleanup',
+ 'end outer test', 'outer cleanup'])
+
class TestModuleCleanUp(unittest.TestCase):
def test_add_and_do_ModuleCleanup(self):
unittest.addModuleCleanup(cleanup, ordering)
@staticmethod
def tearDownModule():
+ ordering.append('tearDownModule')
raise Exception('CleanUpExc')
class TestableTest(unittest.TestCase):
self.assertEqual(result.errors[0][1].splitlines()[-1],
'Exception: CleanUpExc')
self.assertEqual(ordering, ['setUpModule', 'setUpClass', 'test',
- 'tearDownClass', 'cleanup_good'])
+ 'tearDownClass', 'tearDownModule',
+ 'cleanup_good'])
self.assertEqual(unittest.case._module_cleanups, [])
def test_debug_module_executes_cleanUp(self):
unittest.addModuleCleanup(cleanup, ordering, blowUp=blowUp)
@staticmethod
def tearDownModule():
+ ordering.append('tearDownModule')
raise Exception('TearDownModuleExc')
class TestableTest(unittest.TestCase):
suite.debug()
self.assertEqual(str(cm.exception), 'TearDownModuleExc')
self.assertEqual(ordering, ['setUpModule', 'setUpClass', 'test',
- 'tearDownClass'])
+ 'tearDownClass', 'tearDownModule'])
self.assertTrue(unittest.case._module_cleanups)
unittest.case._module_cleanups.clear()
suite.debug()
self.assertEqual(str(cm.exception), 'TearDownModuleExc')
self.assertEqual(ordering, ['setUpModule', 'setUpClass', 'test',
- 'tearDownClass'])
+ 'tearDownClass', 'tearDownModule'])
self.assertTrue(unittest.case._module_cleanups)
unittest.case._module_cleanups.clear()
run(test_async())
+ def test_patch_dict_async_def(self):
+ foo = {'a': 'a'}
+ @patch.dict(foo, {'a': 'b'})
+ async def test_async():
+ self.assertEqual(foo['a'], 'b')
+
+ self.assertTrue(iscoroutinefunction(test_async))
+ run(test_async())
+
+ def test_patch_dict_async_def_context(self):
+ foo = {'a': 'a'}
+ async def test_async():
+ with patch.dict(foo, {'a': 'b'}):
+ self.assertEqual(foo['a'], 'b')
+
+ run(test_async())
+
class AsyncMockTest(unittest.TestCase):
def test_iscoroutinefunction_default(self):
def port(self):
port = self._hostinfo[1]
if port is not None:
- try:
- port = int(port, 10)
- except ValueError:
- message = f'Port could not be cast to integer value as {port!r}'
- raise ValueError(message) from None
- if not ( 0 <= port <= 65535):
+ if port.isdigit() and port.isascii():
+ port = int(port)
+ else:
+ raise ValueError(f"Port could not be cast to integer value as {port!r}")
+ if not (0 <= port <= 65535):
raise ValueError("Port out of range 0-65535")
return port
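The stricter check above deliberately avoids `int()`'s lenient parsing, which accepts leading `+`, surrounding whitespace, underscores, and non-ASCII digits. A standalone sketch of the same validation (the helper name `parse_port` is hypothetical):

```python
def parse_port(port):
    """Accept only ASCII decimal digits, unlike a bare int() conversion."""
    if port.isdigit() and port.isascii():
        value = int(port)
    else:
        raise ValueError(f"Port could not be cast to integer value as {port!r}")
    if not (0 <= value <= 65535):
        raise ValueError("Port out of range 0-65535")
    return value

assert parse_port('8080') == 8080
# int() would happily accept every one of these; the stricter check rejects them.
for bad in ('+80', ' 80', '8_0', '\u0668\u0660'):
    try:
        parse_port(bad)
    except ValueError:
        pass
    else:
        raise AssertionError(bad)
```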
def _splitnport(host, defport=-1):
"""Split host and port, returning numeric port.
Return given default port if no ':' found; defaults to -1.
- Return numerical port if a valid number are found after ':'.
+ Return numerical port if a valid number is found after ':'.
Return None if ':' but not a valid number."""
host, delim, port = host.rpartition(':')
if not delim:
host = port
elif port:
- try:
+ if port.isdigit() and port.isascii():
nport = int(port)
- except ValueError:
+ else:
nport = None
return host, nport
return host, defport
# for are actually localized, but in theory some system could do so.)
env = dict(os.environ)
env['LC_ALL'] = 'C'
- proc = subprocess.Popen((executable,) + args,
+    # Empty strings will be quoted by Popen so we should just omit them
+ if args != ('',):
+ command = (executable, *args)
+ else:
+ command = (executable,)
+ proc = subprocess.Popen(command,
stdout=subprocess.PIPE,
stderr=subprocess.DEVNULL,
env=env)
mac = _find_mac_near_keyword('ifconfig', args, keywords, lambda i: i+1)
if mac:
return mac
- return None
+ return None
def _ip_getnode():
"""Get the hardware address on Unix by running ip."""
shutil.copyfile(src, dst)
break
+ def _call_new_python(self, context, *py_args, **kwargs):
+ """Executes the newly created Python using safe-ish options"""
+ # gh-98251: We do not want to just use '-I' because that masks
+ # legitimate user preferences (such as not writing bytecode). All we
+ # really need is to ensure that the path variables do not overrule
+ # normal venv handling.
+ args = [context.env_exec_cmd, *py_args]
+ kwargs['env'] = env = os.environ.copy()
+ env['VIRTUAL_ENV'] = context.env_dir
+ env.pop('PYTHONHOME', None)
+ env.pop('PYTHONPATH', None)
+ kwargs['cwd'] = context.env_dir
+ kwargs['executable'] = context.env_exec_cmd
+ subprocess.check_output(args, **kwargs)
+
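The environment handling in `_call_new_python` can be sketched on its own: copy the parent environment, point `VIRTUAL_ENV` at the venv, and drop only the two variables that would override venv path resolution. (`env_dir` below is a stand-in for `context.env_dir`.)

```python
import os

def venv_subprocess_env(env_dir):
    """Build the environment for running the venv's own interpreter.

    Per gh-98251, PYTHON* variables such as PYTHONDONTWRITEBYTECODE are
    deliberately kept; only the two that overrule venv path handling go.
    """
    env = os.environ.copy()
    env['VIRTUAL_ENV'] = env_dir
    env.pop('PYTHONHOME', None)
    env.pop('PYTHONPATH', None)
    return env

env = venv_subprocess_env('/tmp/example-venv')
assert env['VIRTUAL_ENV'] == '/tmp/example-venv'
assert 'PYTHONHOME' not in env and 'PYTHONPATH' not in env
```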
def _setup_pip(self, context):
"""Installs or upgrades pip in a virtual environment"""
- # We run ensurepip in isolated mode to avoid side effects from
- # environment vars, the current directory and anything else
- # intended for the global Python environment
- cmd = [context.env_exec_cmd, '-Im', 'ensurepip', '--upgrade',
- '--default-pip']
- subprocess.check_output(cmd, stderr=subprocess.STDOUT)
+ self._call_new_python(context, '-m', 'ensurepip', '--upgrade',
+ '--default-pip', stderr=subprocess.STDOUT)
def setup_scripts(self, context):
"""
logger.debug(
f'Upgrading {CORE_VENV_DEPS} packages in {context.bin_path}'
)
- cmd = [context.env_exec_cmd, '-m', 'pip', 'install', '--upgrade']
- cmd.extend(CORE_VENV_DEPS)
- subprocess.check_call(cmd)
+ self._call_new_python(context, '-m', 'pip', 'install', '--upgrade',
+ *CORE_VENV_DEPS)
def create(env_dir, system_site_packages=False, clear=False,
end
if test -n "$_OLD_FISH_PROMPT_OVERRIDE"
- functions -e fish_prompt
set -e _OLD_FISH_PROMPT_OVERRIDE
- functions -c _old_fish_prompt fish_prompt
- functions -e _old_fish_prompt
+ # prevents error when using nested fish instances (Issue #93858)
+ if functions -q _old_fish_prompt
+ functions -e fish_prompt
+ functions -c _old_fish_prompt fish_prompt
+ functions -e _old_fish_prompt
+ end
end
set -e VIRTUAL_ENV
),
),
dict(
- name="SQLite 3.37.2",
- url="https://sqlite.org/2022/sqlite-autoconf-3370200.tar.gz",
- checksum='683cc5312ee74e71079c14d24b7a6d27',
+ name="SQLite 3.39.4",
+ url="https://sqlite.org/2022/sqlite-autoconf-3390400.tar.gz",
+ checksum="44b7e6691b0954086f717a6c43b622a5",
extra_cflags=('-Os '
'-DSQLITE_ENABLE_FTS5 '
'-DSQLITE_ENABLE_FTS4 '
This document provides a quick overview of some macOS specific features in
the Python distribution.
+Compilers for building on macOS
+===============================
+
+The core developers primarily test builds on macOS with Apple's compiler tools,
+either Xcode or the Command Line Tools. For these we only support building with
+a compiler that includes an SDK targeting the OS on the build machine, that is,
+the version of Xcode that shipped with that OS version or a newer one.
+
+For example, for macOS 12 we support Xcode 13 and Xcode 14 (or the corresponding
+Command Line Tools).
+
+Building with other compilers, such as GCC, likely works, but is not actively supported.
+
macOS specific arguments to configure
=====================================
-rm -f gb-18030-2000.xml
docclean:
- $(MAKE) -C Doc clean
+ $(MAKE) -C $(srcdir)/Doc clean
# like the 'clean' target but retain the profile guided optimization (PGO)
# data. The PGO data is only valid if source code remains unchanged.
Python News
+++++++++++
+What's New in Python 3.10.9 final?
+==================================
+
+*Release date: 2022-12-06*
+
+Security
+--------
+
+- gh-issue-100001: ``python -m http.server`` no longer allows terminal
+ control characters sent within a garbage request to be printed to the
+ stderr server log.
+
+ This is done by changing the :mod:`http.server`
+ :class:`BaseHTTPRequestHandler` ``.log_message`` method to replace control
+ characters with a ``\xHH`` hex escape before printing.
+
+- gh-issue-87604: Avoid publishing list of active per-interpreter audit
+  hooks via the :mod:`gc` module.
+
+- gh-issue-98433: The IDNA codec decoder used on DNS hostnames by
+ :mod:`socket` or :mod:`asyncio` related name resolution functions no
+ longer involves a quadratic algorithm. This prevents a potential CPU
+ denial of service if an out-of-spec excessive length hostname involving
+ bidirectional characters were decoded. Some protocols such as
+  :mod:`urllib` http ``3xx`` redirects potentially allow an attacker to
+ supply such a name.
+
+- gh-issue-98739: Update bundled libexpat to 2.5.0
+
+- gh-issue-98517: Port XKCP's fix for the buffer overflows in SHA-3
+ (CVE-2022-37454).
+
+- gh-issue-97514: On Linux the :mod:`multiprocessing` module returns to
+ using filesystem backed unix domain sockets for communication with the
+ *forkserver* process instead of the Linux abstract socket namespace. Only
+ code that chooses to use the :ref:`"forkserver" start method
+ <multiprocessing-start-methods>` is affected.
+
+ Abstract sockets have no permissions and could allow any user on the
+ system in the same `network namespace
+ <https://man7.org/linux/man-pages/man7/network_namespaces.7.html>`_ (often
+ the whole system) to inject code into the multiprocessing *forkserver*
+ process. This was a potential privilege escalation. Filesystem based
+ socket permissions restrict this to the *forkserver* process user as was
+ the default in Python 3.8 and earlier.
+
+ This prevents Linux `CVE-2022-42919
+ <https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-42919>`_.
+
+Core and Builtins
+-----------------
+
+- gh-issue-99578: Fix a reference bug in :func:`_imp.create_builtin()` after
+ the creation of the first sub-interpreter for modules ``builtins`` and
+ ``sys``. Patch by Victor Stinner.
+
+- gh-issue-99581: Fixed a bug that was causing a buffer overflow if the
+  tokenizer copies a line missing the newline character from a file that is
+  as long as the available tokenizer buffer. Patch by Pablo Galindo.
+
+- gh-issue-96055: Update :mod:`faulthandler` to emit an error message with
+ the proper unexpected signal number. Patch by Dong-hee Na.
+
+- gh-issue-98852: Fix subscription of :class:`types.GenericAlias` instances
+ containing bare generic types: for example ``tuple[A, T][int]``, where
+ ``A`` is a generic type, and ``T`` is a type variable.
+
+- gh-issue-98415: Fix detection of MAC addresses for :mod:`uuid` on certain
+  OSs. Patch by Chaim Sanders.
+
+- gh-issue-92119: Print exception class name instead of its string
+ representation when raising errors from :mod:`ctypes` calls.
+
+- gh-issue-93696: Allow :mod:`pdb` to locate source for frozen modules in
+ the standard library.
+
+- bpo-31718: Raise :exc:`ValueError` instead of :exc:`SystemError` when
+ methods of uninitialized :class:`io.IncrementalNewlineDecoder` objects are
+ called. Patch by Oren Milman.
+
+- bpo-38031: Fix a possible assertion failure in :class:`io.FileIO` when the
+ opener returns an invalid file descriptor.
+
+Library
+-------
+
+- gh-issue-100001: Also escape the backslash character in the http.server
+  ``BaseHTTPRequestHandler.log_message`` so that it is technically possible
+  to parse the line and reconstruct what the original data was. Without this
+  a ``\xHH`` is ambiguous as to whether it is a hex replacement we put in or
+  whether the characters ``r"\x"`` came through in the original request line.
+
+- gh-issue-93453: :func:`asyncio.get_event_loop` now only emits a
+ deprecation warning when a new event loop was created implicitly. It no
+ longer emits a deprecation warning if the current event loop was set.
+
+- gh-issue-51524: Fix a bug when calling ``trace.CoverageResults`` with a
+  valid infile.
+
+- gh-issue-99645: Fix a bug in handling class cleanups in
+ :class:`unittest.TestCase`. Now ``addClassCleanup()`` uses separate lists
+ for different ``TestCase`` subclasses, and ``doClassCleanups()`` only
+ cleans up the particular class.
+
+- gh-issue-97001: Release the GIL when calling termios APIs to avoid
+ blocking threads.
+
+- gh-issue-99341: Fix :func:`ast.increment_lineno` to also cover
+ :class:`ast.TypeIgnore` when changing line numbers.
+
+- gh-issue-74044: Fixed bug where :func:`inspect.signature` reported
+ incorrect arguments for decorated methods.
+
+- gh-issue-99275: Fix ``SystemError`` in :mod:`ctypes` when exception was
+  not set during ``__init_subclass__``.
+
+- gh-issue-99155: Fix :class:`statistics.NormalDist` pickle with ``0`` and
+ ``1`` protocols.
+
+- gh-issue-99134: Update the bundled copy of pip to version 22.3.1.
+
+- gh-issue-99130: Apply bugfixes from `importlib_metadata 4.11.4
+ <https://importlib-metadata.readthedocs.io/en/latest/history.html#v4-11-4>`_,
+ namely: In ``PathDistribution._name_from_stem``, avoid including parts of
+ the extension in the result. In ``PathDistribution._normalized_name``,
+ ensure names loaded from the stem of the filename are also normalized,
+ ensuring duplicate entry points by packages varying only by non-normalized
+ name are hidden.
+
+- gh-issue-83004: Clean up refleak on failed module initialisation in
+  :mod:`_zoneinfo`.
+
+- gh-issue-83004: Clean up refleaks on failed module initialisation in
+  :mod:`_pickle`.
+
+- gh-issue-83004: Clean up refleak on failed module initialisation in
+ :mod:`_io`.
+
+- gh-issue-98897: Fix memory leak in :func:`math.dist` when both points
+ don't have the same dimension. Patch by Kumar Aditya.
+
+- gh-issue-98793: Fix argument typechecks in :func:`!_overlapped.WSAConnect`
+ and :func:`!_overlapped.Overlapped.WSASendTo` functions.
+
+- gh-issue-98740: Fix internal error in the :mod:`re` module which in very
+ rare circumstances prevented compilation of a regular expression
+ containing a :ref:`conditional expression <re-conditional-expression>`
+ without the "else" branch.
+
+- gh-issue-98703: Fix :meth:`asyncio.StreamWriter.drain` to call
+ ``protocol.connection_lost`` callback only once on Windows.
+
+- gh-issue-98624: Add a mutex to unittest.mock.NonCallableMock to protect
+ concurrent access to mock attributes.
+
+- gh-issue-89237: Fix hang on Windows in ``subprocess.wait_closed()`` in
+ :mod:`asyncio` with :class:`~asyncio.ProactorEventLoop`. Patch by Kumar
+ Aditya.
+
+- gh-issue-98458: Fix infinite loop in unittest when a self-referencing
+  chained exception is raised.
+
+- gh-issue-97928: :meth:`tkinter.Text.count` now raises an exception for
+  options starting with "-" instead of silently ignoring them.
+
+- gh-issue-97966: On ``uname_result``, restored expectation that ``_fields``
+ and ``_asdict`` would include all six properties including ``processor``.
+
+- gh-issue-98331: Update the bundled copies of pip and setuptools to
+ versions 22.3 and 65.5.0 respectively.
+
+- gh-issue-96035: Fix bug in :func:`urllib.parse.urlparse` that causes
+ certain port numbers containing whitespace, underscores, plus and minus
+ signs, or non-ASCII digits to be incorrectly accepted.
+
+- gh-issue-98251: Allow :mod:`venv` to pass along :envvar:`PYTHON*`
+  variables to ``ensurepip`` and ``pip`` when they do not impact path
+  resolution.
+
+- gh-issue-98178: On macOS, fix a crash in :func:`syslog.syslog` in
+ multi-threaded applications. On macOS, the libc ``syslog()`` function is
+ not thread-safe, so :func:`syslog.syslog` no longer releases the GIL to
+ call it. Patch by Victor Stinner.
+
+- gh-issue-96151: Allow ``BUILTINS`` to be a valid field name for frozen
+ dataclasses.
+
+- gh-issue-98086: Make sure ``patch.dict()`` can be applied on async
+ functions.
+
+- gh-issue-88863: To avoid apparent memory leaks when
+  :func:`asyncio.open_connection` raises, break reference cycles generated
+  by local exception and future instances (which hold the exception
+  instance as a member variable). Patch by Dong Uk, Kang.
+
+- gh-issue-93858: Prevent error when activating venv in nested fish
+ instances.
+
+- bpo-46364: Restrict use of sockets instead of pipes for stdin of
+ subprocesses created by :mod:`asyncio` to AIX platform only.
+
+- bpo-38523: :func:`shutil.copytree` now applies the
+ *ignore_dangling_symlinks* argument recursively.
+
+- bpo-36267: Fix IndexError in :class:`argparse.ArgumentParser` when a
+ ``store_true`` action is given an explicit argument.
+
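To illustrate: a ``store_true`` option takes no value, so an explicit ``--flag=1`` is a usage error; with the fix it is reported as such instead of crashing with ``IndexError``.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--flag", action="store_true")

assert parser.parse_args([]).flag is False
assert parser.parse_args(["--flag"]).flag is True

# An explicit value for a zero-argument action is a usage error,
# which argparse reports by exiting (SystemExit), not IndexError.
try:
    parser.parse_args(["--flag=1"])
except SystemExit:
    pass
```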
+Documentation
+-------------
+
+- gh-issue-92892: Document that calling variadic functions with ctypes
+ requires special care on macOS/arm64 (and possibly other platforms).
+
+Tests
+-----
+
+- gh-issue-99892: Skip test_normalization() of test_unicodedata if it fails
+ to download NormalizationTest.txt file from pythontest.net. Patch by
+ Victor Stinner.
+
+- bpo-34272: Some C API tests were moved into the new Lib/test/test_capi/
+ directory.
+
+Build
+-----
+
+- gh-issue-99086: Fix ``-Wimplicit-int``, ``-Wstrict-prototypes``, and
+ ``-Wimplicit-function-declaration`` compiler warnings in
+ :program:`configure` checks.
+
+- gh-issue-99086: Fix ``-Wimplicit-int`` compiler warning in
+ :program:`configure` check for ``PTHREAD_SCOPE_SYSTEM``.
+
+- gh-issue-97731: Specify the full path to the source location for ``make
+ docclean`` (needed for cross-builds).
+
+- gh-issue-98671: Fix ``NO_MISALIGNED_ACCESSES`` not being defined for the
+  SHA3 extension when ``HAVE_ALIGNED_REQUIRED`` is set, allowing builds on
+  hardware where unaligned memory accesses are not allowed.
+
+Windows
+-------
+
+- gh-issue-99345: Use faster initialization functions to detect the install
+  location for the Windows Store package.
+
+- gh-issue-98689: Update Windows builds to zlib v1.2.13. v1.2.12 has
+ CVE-2022-37434, but the vulnerable ``inflateGetHeader`` API is not used by
+ Python.
+
+- gh-issue-94328: Update Windows installer to use SQLite 3.39.4.
+
+- bpo-40882: Fix a memory leak in
+ :class:`multiprocessing.shared_memory.SharedMemory` on Windows.
+
+macOS
+-----
+
+- gh-issue-94328: Update macOS installer to SQLite 3.39.4.
+
+IDLE
+----
+
+- gh-issue-97527: Fix a bug in the previous bugfix that caused IDLE to not
+ start when run with 3.10.8, 3.12.0a1, and at least Microsoft Python
+ 3.10.2288.0 installed without the Lib/test package. 3.11.0 was never
+ affected.
+
+Tools/Demos
+-----------
+
+- gh-issue-95731: Fix handling of module docstrings in
+ :file:`Tools/i18n/pygettext.py`.
+
+
What's New in Python 3.10.8 final?
==================================
creates new event loop only if called from the main thread.
- bpo-38918: Add an entry for ``__module__`` in the "function" & "method"
- sections of the `inspect docs types and members table
- <https://docs.python.org/3/library/inspect.html#types-and-members>`_
+ sections of the :mod:`inspect` docs' :ref:`inspect-types` table.
- bpo-3530: In the :mod:`ast` module documentation, fix a misleading
``NodeTransformer`` example and add advice on when to use the
return loop;
}
- if (PyErr_WarnEx(PyExc_DeprecationWarning,
- "There is no current event loop",
- stacklevel))
- {
- return NULL;
- }
-
policy = PyObject_CallNoArgs(asyncio_get_event_loop_policy);
if (policy == NULL) {
return NULL;
return get_event_loop(1);
}
+// This internal method is going away in Python 3.12, left here only for
+// backwards compatibility with 3.10.0 - 3.10.8 and 3.11.0.
+// Similarly, this method's Python equivalent in asyncio.events is going
+// away as well.
+// See GH-99949 for more details.
/*[clinic input]
_asyncio._get_event_loop
stacklevel: int = 3
PyErr_Fetch(&tp, &v, &tb);
PyErr_NormalizeException(&tp, &v, &tb);
- cls_str = PyObject_Str(tp);
+ if (PyType_Check(tp))
+ cls_str = PyUnicode_FromString(_PyType_Name((PyTypeObject *)tp));
+ else
+ cls_str = PyObject_Str(tp);
if (cls_str) {
PyUnicode_AppendAndDel(&s, cls_str);
PyUnicode_AppendAndDel(&s, PyUnicode_FromString(": "));
}
stgdict = PyType_stgdict(type);
- if (!stgdict)
+ if (!stgdict) {
+ PyErr_SetString(PyExc_TypeError,
+ "ctypes state is not initialized");
return -1;
+ }
/* If this structure/union is already marked final we cannot assign
_fields_ anymore. */
goto fail;
/* BlockingIOError, for compatibility */
- Py_INCREF(PyExc_BlockingIOError);
- if (PyModule_AddObject(m, "BlockingIOError",
- (PyObject *) PyExc_BlockingIOError) < 0)
+ if (PyModule_AddObjectRef(m, "BlockingIOError",
+ (PyObject *) PyExc_BlockingIOError) < 0) {
goto fail;
+ }
/* Concrete base types of the IO ABCs.
(the ABCs themselves are declared through inheritance in io.py)
ret = -1;
if (!fd_is_own)
self->fd = -1;
- if (self->fd >= 0)
+ if (self->fd >= 0) {
+ PyObject *exc, *val, *tb;
+ PyErr_Fetch(&exc, &val, &tb);
internal_close(self);
+ _PyErr_ChainExceptions(exc, val, tb);
+ }
done:
#ifdef MS_WINDOWS
PyObject *errors)
/*[clinic end generated code: output=fbd04d443e764ec2 input=89db6b19c6b126bf]*/
{
- self->decoder = decoder;
- Py_INCREF(decoder);
if (errors == NULL) {
- self->errors = _PyUnicode_FromId(&PyId_strict);
- if (self->errors == NULL)
+ errors = _PyUnicode_FromId(&PyId_strict);
+ if (errors == NULL) {
return -1;
+ }
}
- else {
- self->errors = errors;
- }
- Py_INCREF(self->errors);
+ Py_XSETREF(self->errors, Py_NewRef(errors));
+ Py_XSETREF(self->decoder, Py_NewRef(decoder));
self->translate = translate ? 1 : 0;
self->seennl = 0;
self->pendingcr = 0;
return 0;
}
+#define CHECK_INITIALIZED_DECODER(self) \
+ if (self->errors == NULL) { \
+ PyErr_SetString(PyExc_ValueError, \
+ "IncrementalNewlineDecoder.__init__() not called"); \
+ return NULL; \
+ }
+
#define SEEN_CR 1
#define SEEN_LF 2
#define SEEN_CRLF 4
Py_ssize_t output_len;
nldecoder_object *self = (nldecoder_object *) myself;
- if (self->decoder == NULL) {
- PyErr_SetString(PyExc_ValueError,
- "IncrementalNewlineDecoder.__init__ not called");
- return NULL;
- }
+ CHECK_INITIALIZED_DECODER(self);
/* decode input (with the eventual \r from a previous pass) */
if (self->decoder != Py_None) {
PyObject *buffer;
unsigned long long flag;
+ CHECK_INITIALIZED_DECODER(self);
+
if (self->decoder != Py_None) {
PyObject *state = PyObject_CallMethodNoArgs(self->decoder,
_PyIO_str_getstate);
PyObject *buffer;
unsigned long long flag;
+ CHECK_INITIALIZED_DECODER(self);
+
if (!PyTuple_Check(state)) {
PyErr_SetString(PyExc_TypeError, "state argument must be a tuple");
return NULL;
_io_IncrementalNewlineDecoder_reset_impl(nldecoder_object *self)
/*[clinic end generated code: output=32fa40c7462aa8ff input=728678ddaea776df]*/
{
+ CHECK_INITIALIZED_DECODER(self);
+
self->seennl = 0;
self->pendingcr = 0;
if (self->decoder != Py_None)
static PyObject *
incrementalnewlinedecoder_newlines_get(nldecoder_object *self, void *context)
{
+ CHECK_INITIALIZED_DECODER(self);
+
switch (self->seennl) {
case SEEN_CR:
return PyUnicode_FromString("\r");
if (st->UnpicklingError == NULL)
return NULL;
- Py_INCREF(st->PickleError);
- if (PyModule_AddObject(m, "PickleError", st->PickleError) < 0)
+ if (PyModule_AddObjectRef(m, "PickleError", st->PickleError) < 0) {
return NULL;
- Py_INCREF(st->PicklingError);
- if (PyModule_AddObject(m, "PicklingError", st->PicklingError) < 0)
+ }
+ if (PyModule_AddObjectRef(m, "PicklingError", st->PicklingError) < 0) {
return NULL;
- Py_INCREF(st->UnpicklingError);
- if (PyModule_AddObject(m, "UnpicklingError", st->UnpicklingError) < 0)
+ }
+ if (PyModule_AddObjectRef(m, "UnpicklingError", st->UnpicklingError) < 0) {
return NULL;
-
+ }
if (_Pickle_InitState(st) < 0)
return NULL;
i = 0;
curData = data;
while(i < dataByteLen) {
- if ((instance->byteIOIndex == 0) && (dataByteLen >= (i + rateInBytes))) {
+ if ((instance->byteIOIndex == 0) && (dataByteLen-i >= rateInBytes)) {
#ifdef SnP_FastLoop_Absorb
/* processing full blocks first */
}
else {
/* normal lane: using the message queue */
-
- partialBlock = (unsigned int)(dataByteLen - i);
- if (partialBlock+instance->byteIOIndex > rateInBytes)
+ if (dataByteLen-i > rateInBytes-instance->byteIOIndex)
partialBlock = rateInBytes-instance->byteIOIndex;
+ else
+ partialBlock = (unsigned int)(dataByteLen - i);
#ifdef KeccakReference
displayBytes(1, "Block to be absorbed (part)", curData, partialBlock);
#endif
i = 0;
curData = data;
while(i < dataByteLen) {
- if ((instance->byteIOIndex == rateInBytes) && (dataByteLen >= (i + rateInBytes))) {
+ if ((instance->byteIOIndex == rateInBytes) && (dataByteLen-i >= rateInBytes)) {
for(j=dataByteLen-i; j>=rateInBytes; j-=rateInBytes) {
SnP_Permute(instance->state);
SnP_ExtractBytes(instance->state, curData, 0, rateInBytes);
SnP_Permute(instance->state);
instance->byteIOIndex = 0;
}
- partialBlock = (unsigned int)(dataByteLen - i);
- if (partialBlock+instance->byteIOIndex > rateInBytes)
+ if (dataByteLen-i > rateInBytes-instance->byteIOIndex)
partialBlock = rateInBytes-instance->byteIOIndex;
+ else
+ partialBlock = (unsigned int)(dataByteLen - i);
i += partialBlock;
SnP_ExtractBytes(instance->state, curData, instance->byteIOIndex, partialBlock);
#endif
/* Prevent bus errors on platforms requiring aligned accesses such ARM. */
-#if HAVE_ALIGNED_REQUIRED && !defined(NO_MISALIGNED_ACCESSES)
+#if defined(HAVE_ALIGNED_REQUIRED) && !defined(NO_MISALIGNED_ACCESSES)
#define NO_MISALIGNED_ACCESSES
#endif
#endif
/* Report failure */
-#define FAIL do { VTRACE(("FAIL: %d\n", __LINE__)); return 0; } while (0)
+#define FAIL do { VTRACE(("FAIL: %d\n", __LINE__)); return -1; } while (0)
/* Extract opcode, argument, or skip count from code array */
#define GET_OP \
skip = *code; \
VTRACE(("%lu (skip to %p)\n", \
(unsigned long)skip, code+skip)); \
- if (skip-adj > (uintptr_t)(end - code)) \
+ if (skip-adj > (uintptr_t)(end - code)) \
FAIL; \
code++; \
} while (0)
}
}
- return 1;
+ return 0;
}
+/* Returns 0 on success, -1 on failure, and 1 if the last op is JUMP. */
static int
_validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups)
{
case SRE_OP_IN_LOC_IGNORE:
GET_SKIP;
/* Stop 1 before the end; we check the FAILURE below */
- if (!_validate_charset(code, code+skip-2))
+ if (_validate_charset(code, code+skip-2))
FAIL;
if (code[skip-2] != SRE_OP_FAILURE)
FAIL;
}
/* Validate the charset */
if (flags & SRE_INFO_CHARSET) {
- if (!_validate_charset(code, newcode-1))
+ if (_validate_charset(code, newcode-1))
FAIL;
if (newcode[-1] != SRE_OP_FAILURE)
FAIL;
if (skip == 0)
break;
/* Stop 2 before the end; we check the JUMP below */
- if (!_validate_inner(code, code+skip-3, groups))
+ if (_validate_inner(code, code+skip-3, groups))
FAIL;
code += skip-3;
/* Check that it ends with a JUMP, and that each JUMP
else if (code+skip-1 != target)
FAIL;
}
+ if (code != target)
+ FAIL;
}
break;
FAIL;
if (max > SRE_MAXREPEAT)
FAIL;
- if (!_validate_inner(code, code+skip-4, groups))
+ if (_validate_inner(code, code+skip-4, groups))
FAIL;
code += skip-4;
GET_OP;
FAIL;
if (max > SRE_MAXREPEAT)
FAIL;
- if (!_validate_inner(code, code+skip-3, groups))
+ if (_validate_inner(code, code+skip-3, groups))
FAIL;
code += skip-3;
GET_OP;
to allow arbitrary jumps anywhere in the code; so we just look
for a JUMP opcode preceding our skip target.
*/
- if (skip >= 3 && skip-3 < (uintptr_t)(end - code) &&
- code[skip-3] == SRE_OP_JUMP)
- {
- VTRACE(("both then and else parts present\n"));
- if (!_validate_inner(code+1, code+skip-3, groups))
- FAIL;
+ VTRACE(("then part:\n"));
+ int rc = _validate_inner(code+1, code+skip-1, groups);
+ if (rc == 1) {
+ VTRACE(("else part:\n"));
code += skip-2; /* Position after JUMP, at <skipno> */
GET_SKIP;
- if (!_validate_inner(code, code+skip-1, groups))
- FAIL;
- code += skip-1;
- }
- else {
- VTRACE(("only a then part present\n"));
- if (!_validate_inner(code+1, code+skip-1, groups))
- FAIL;
- code += skip-1;
+ rc = _validate_inner(code, code+skip-1, groups);
}
+ if (rc)
+ FAIL;
+ code += skip-1;
break;
case SRE_OP_ASSERT:
if (arg & 0x80000000)
FAIL; /* Width too large */
/* Stop 1 before the end; we check the SUCCESS below */
- if (!_validate_inner(code+1, code+skip-2, groups))
+ if (_validate_inner(code+1, code+skip-2, groups))
FAIL;
code += skip-2;
GET_OP;
FAIL;
break;
+ case SRE_OP_JUMP:
+ if (code + 1 != end)
+ FAIL;
+ VTRACE(("JUMP: %d\n", __LINE__));
+ return 1;
+
default:
FAIL;
}
VTRACE(("okay\n"));
- return 1;
+ return 0;
}
static int
static int
_validate(PatternObject *self)
{
- if (!_validate_outer(self->code, self->code+self->codesize, self->groups))
+ if (_validate_outer(self->code, self->code+self->codesize, self->groups))
{
PyErr_SetString(PyExc_RuntimeError, "invalid SRE code");
return 0;
return PyMapping_Items(obj);
}
+static PyObject *
+test_mapping_has_key_string(PyObject *self, PyObject *Py_UNUSED(args))
+{
+ PyObject *context = PyDict_New();
+ PyObject *val = PyLong_FromLong(1);
+
+ // Since this uses `const char*` it is easier to test this in C:
+ PyDict_SetItemString(context, "a", val);
+ if (!PyMapping_HasKeyString(context, "a")) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "Existing mapping key does not exist");
+ return NULL;
+ }
+ if (PyMapping_HasKeyString(context, "b")) {
+ PyErr_SetString(PyExc_RuntimeError,
+ "Missing mapping key exists");
+ return NULL;
+ }
+
+ Py_DECREF(val);
+ Py_DECREF(context);
+ Py_RETURN_NONE;
+}
+
+static PyObject *
+mapping_has_key(PyObject* self, PyObject *args)
+{
+ PyObject *context, *key;
+ if (!PyArg_ParseTuple(args, "OO", &context, &key)) {
+ return NULL;
+ }
+ return PyLong_FromLong(PyMapping_HasKey(context, key));
+}
+
static PyObject *
test_pythread_tss_key_state(PyObject *self, PyObject *args)
{"get_mapping_keys", get_mapping_keys, METH_O},
{"get_mapping_values", get_mapping_values, METH_O},
{"get_mapping_items", get_mapping_items, METH_O},
+ {"test_mapping_has_key_string", test_mapping_has_key_string, METH_NOARGS},
+ {"mapping_has_key", mapping_has_key, METH_VARARGS},
{"test_pythread_tss_key_state", test_pythread_tss_key_state, METH_VARARGS},
{"hamt", new_hamt, METH_NOARGS},
{"bad_get", (PyCFunction)(void(*)(void))bad_get, METH_FASTCALL},
}
/*[clinic input]
+_winapi.UnmapViewOfFile
+
+ address: LPCVOID
+ /
+[clinic start generated code]*/
+
+static PyObject *
+_winapi_UnmapViewOfFile_impl(PyObject *module, LPCVOID address)
+/*[clinic end generated code: output=4f7e18ac75d19744 input=8c4b6119ad9288a3]*/
+{
+ BOOL success;
+
+ Py_BEGIN_ALLOW_THREADS
+ success = UnmapViewOfFile(address);
+ Py_END_ALLOW_THREADS
+
+ if (!success) {
+ return PyErr_SetFromWindowsErr(0);
+ }
+
+ Py_RETURN_NONE;
+}
+
+/*[clinic input]
_winapi.OpenFileMapping -> HANDLE
desired_access: DWORD
_WINAPI_READFILE_METHODDEF
_WINAPI_SETNAMEDPIPEHANDLESTATE_METHODDEF
_WINAPI_TERMINATEPROCESS_METHODDEF
+ _WINAPI_UNMAPVIEWOFFILE_METHODDEF
_WINAPI_VIRTUALQUERYSIZE_METHODDEF
_WINAPI_WAITNAMEDPIPE_METHODDEF
_WINAPI_WAITFORMULTIPLEOBJECTS_METHODDEF
goto error;
}
- Py_INCREF(&PyZoneInfo_ZoneInfoType);
- PyModule_AddObject(m, "ZoneInfo", (PyObject *)&PyZoneInfo_ZoneInfoType);
+ if (PyModule_AddObjectRef(m, "ZoneInfo", (PyObject *)&PyZoneInfo_ZoneInfoType) < 0) {
+ goto error;
+ }
/* Populate imports */
PyObject *_tzpath_module = PyImport_ImportModule("zoneinfo._tzpath");
+/* The audioop module uses the code base in g777.c file of the Sox project.
+ * Source: https://web.archive.org/web/19970716121258/http://www.spies.com/Sox/Archive/soxgamma.tar.gz
+ * Programming the AdLib/Sound Blaster
+ * FM Music Chips
+ * Version 2.0 (24 Feb 1992)
+ *
+ * Copyright (c) 1991, 1992 by Jeffrey S. Lee
+ *
+ * jlee@smylex.uucp
+ *
+ *
+ *
+ * Warranty and Copyright Policy
+ *
+ * This document is provided on an "as-is" basis, and its author makes
+ * no warranty or representation, express or implied, with respect to
+ * its quality performance or fitness for a particular purpose. In no
+ * event will the author of this document be liable for direct, indirect,
+ * special, incidental, or consequential damages arising out of the use
+ * or inability to use the information contained within. Use of this
+ * document is at your own risk.
+ *
+ * This file may be used and copied freely so long as the applicable
+ * copyright notices are retained, and no modifications are made to the
+ * text of the document. No money shall be charged for its distribution
+ * beyond reasonable shipping, handling and duplication costs, nor shall
+ * proprietary changes be made to this document so that it cannot be
+ * distributed freely. This document may not be included in published
+ * material or commercial packages without the written consent of its
+ * author. */
/* audioopmodule - Module to detect peak values in arrays */
}
-/* Code shamelessly stolen from sox, 12.17.7, g711.c
-** (c) Craig Reese, Joe Campbell and Jeff Poskanzer 1989 */
-
-/* From g711.c:
- *
- * December 30, 1994:
- * Functions linear2alaw, linear2ulaw have been updated to correctly
- * convert unquantized 16 bit values.
- * Tables for direct u- to A-law and A- to u-law conversions have been
- * corrected.
- * Borge Lindberg, Center for PersonKommunikation, Aalborg University.
- * bli@cpk.auc.dk
- *
- */
#define BIAS 0x84 /* define the add-in bias for 16 bit samples */
#define CLIP 32635
#define SIGN_BIT (0x80) /* Sign bit for an A-law byte. */
return return_value;
}
+PyDoc_STRVAR(_winapi_UnmapViewOfFile__doc__,
+"UnmapViewOfFile($module, address, /)\n"
+"--\n"
+"\n");
+
+#define _WINAPI_UNMAPVIEWOFFILE_METHODDEF \
+ {"UnmapViewOfFile", (PyCFunction)_winapi_UnmapViewOfFile, METH_O, _winapi_UnmapViewOfFile__doc__},
+
+static PyObject *
+_winapi_UnmapViewOfFile_impl(PyObject *module, LPCVOID address);
+
+static PyObject *
+_winapi_UnmapViewOfFile(PyObject *module, PyObject *arg)
+{
+ PyObject *return_value = NULL;
+ LPCVOID address;
+
+ if (!PyArg_Parse(arg, "" F_POINTER ":UnmapViewOfFile", &address)) {
+ goto exit;
+ }
+ return_value = _winapi_UnmapViewOfFile_impl(module, address);
+
+exit:
+ return return_value;
+}
+
PyDoc_STRVAR(_winapi_OpenFileMapping__doc__,
"OpenFileMapping($module, desired_access, inherit_handle, name, /)\n"
"--\n"
exit:
return return_value;
}
-/*[clinic end generated code: output=d76d0a5901db2e2a input=a9049054013a1b77]*/
+/*[clinic end generated code: output=acabf8f2b5cc44a1 input=a9049054013a1b77]*/
HANDLE ConnectSocket;
PyObject *AddressObj;
- if (!_PyArg_ParseStack(args, nargs, ""F_HANDLE"O:WSAConnect",
- &ConnectSocket, &AddressObj)) {
+ if (!_PyArg_ParseStack(args, nargs, ""F_HANDLE"O!:WSAConnect",
+ &ConnectSocket, &PyTuple_Type, &AddressObj)) {
goto exit;
}
return_value = _overlapped_WSAConnect_impl(module, ConnectSocket, AddressObj);
DWORD flags;
PyObject *AddressObj;
- if (!_PyArg_ParseStack(args, nargs, ""F_HANDLE"OkO:WSASendTo",
- &handle, &bufobj, &flags, &AddressObj)) {
+ if (!_PyArg_ParseStack(args, nargs, ""F_HANDLE"OkO!:WSASendTo",
+ &handle, &bufobj, &flags, &PyTuple_Type, &AddressObj)) {
goto exit;
}
return_value = _overlapped_Overlapped_WSASendTo_impl(self, handle, bufobj, flags, AddressObj);
exit:
return return_value;
}
-/*[clinic end generated code: output=d3215a6ca589735a input=a9049054013a1b77]*/
+/*[clinic end generated code: output=e685b61b3da0524d input=a9049054013a1b77]*/
See http://semver.org.
*/
#define XML_MAJOR_VERSION 2
-#define XML_MINOR_VERSION 4
-#define XML_MICRO_VERSION 9
+#define XML_MINOR_VERSION 5
+#define XML_MICRO_VERSION 0
#ifdef __cplusplus
}
-/* 90815a2b2c80c03b2b889fe1d427bb2b9e3282aa065e42784e001db4f23de324 (2.4.9+)
+/* 5ab094ffadd6edfc94c3eee53af44a86951f9f1f0933ada3114bbce2bfb02c99 (2.5.0+)
__ __ _
___\ \/ /_ __ __ _| |_
/ _ \\ /| '_ \ / _` | __|
Copyright (c) 2021 Dong-hee Na <donghee.na@python.org>
Copyright (c) 2022 Samanta Navarro <ferivoz@riseup.net>
Copyright (c) 2022 Jeffrey Walton <noloader@gmail.com>
+ Copyright (c) 2022 Jann Horn <jannh@google.com>
Licensed under the MIT license:
Permission is hereby granted, free of charge, to any person obtaining
parserInit(parser, encodingName);
if (encodingName && ! parser->m_protocolEncodingName) {
+ if (dtd) {
+ // We need to stop the upcoming call to XML_ParserFree from happily
+ // destroying parser->m_dtd because the DTD is shared with the parent
+ // parser and the only guard that keeps XML_ParserFree from destroying
+ // parser->m_dtd is parser->m_isParamEntity but it will be set to
+ // XML_TRUE only later in XML_ExternalEntityParserCreate (or not at all).
+ parser->m_dtd = NULL;
+ }
XML_ParserFree(parser);
return NULL;
}
int len;
const char *rawName;
TAG *tag = parser->m_tagStack;
- parser->m_tagStack = tag->parent;
- tag->parent = parser->m_freeTagList;
- parser->m_freeTagList = tag;
rawName = s + enc->minBytesPerChar * 2;
len = XmlNameLength(enc, rawName);
if (len != tag->rawNameLength
*eventPP = rawName;
return XML_ERROR_TAG_MISMATCH;
}
+ parser->m_tagStack = tag->parent;
+ tag->parent = parser->m_freeTagList;
+ parser->m_freeTagList = tag;
--parser->m_tagLevel;
if (parser->m_endElementHandler) {
const XML_Char *localPart;
parser->m_handlerArg, parser->m_declElementType->name,
parser->m_declAttributeId->name, parser->m_declAttributeType, 0,
role == XML_ROLE_REQUIRED_ATTRIBUTE_VALUE);
- poolClear(&parser->m_tempPool);
handleDefault = XML_FALSE;
}
}
+ poolClear(&parser->m_tempPool);
break;
case XML_ROLE_DEFAULT_ATTRIBUTE_VALUE:
case XML_ROLE_FIXED_ATTRIBUTE_VALUE:
*
* If 'standalone' is false, the DTD must have no
* parameter entities or we wouldn't have passed the outer
- * 'if' statement. That measn the only entity in the hash
+ * 'if' statement. That means the only entity in the hash
* table is the external subset name "#" which cannot be
* given as a parameter entity name in XML syntax, so the
* lookup must have returned NULL and we don't even reach
if (result != XML_ERROR_NONE)
return result;
- else if (textEnd != next
- && parser->m_parsingStatus.parsing == XML_SUSPENDED) {
+
+ if (textEnd != next && parser->m_parsingStatus.parsing == XML_SUSPENDED) {
entity->processed = (int)(next - (const char *)entity->textPtr);
return result;
- } else {
+ }
+
#ifdef XML_DTD
- entityTrackingOnClose(parser, entity, __LINE__);
+ entityTrackingOnClose(parser, entity, __LINE__);
#endif
- entity->open = XML_FALSE;
- parser->m_openInternalEntities = openEntity->next;
- /* put openEntity back in list of free instances */
- openEntity->next = parser->m_freeInternalEntities;
- parser->m_freeInternalEntities = openEntity;
+ entity->open = XML_FALSE;
+ parser->m_openInternalEntities = openEntity->next;
+ /* put openEntity back in list of free instances */
+ openEntity->next = parser->m_freeInternalEntities;
+ parser->m_freeInternalEntities = openEntity;
+
+ // If there are more open entities we want to stop right here and have the
+ // upcoming call to XML_ResumeParser continue with entity content, or it would
+ // be ignored altogether.
+ if (parser->m_openInternalEntities != NULL
+ && parser->m_parsingStatus.parsing == XML_SUSPENDED) {
+ return XML_ERROR_NONE;
}
#ifdef XML_DTD
BT_LF, /* line feed = "\n" */
BT_GT, /* greater than = ">" */
BT_QUOT, /* quotation character = "\"" */
- BT_APOS, /* aposthrophe = "'" */
+ BT_APOS, /* apostrophe = "'" */
BT_EQUALS, /* equal sign = "=" */
BT_QUEST, /* question mark = "?" */
BT_EXCL, /* exclamation mark = "!" */
size_t i;
fault_handler_t *handler = NULL;
int save_errno = errno;
+ int found = 0;
if (!fatal_error.enabled)
return;
for (i=0; i < faulthandler_nsignals; i++) {
handler = &faulthandler_handlers[i];
- if (handler->signum == signum)
+ if (handler->signum == signum) {
+ found = 1;
break;
+ }
}
if (handler == NULL) {
/* faulthandler_nsignals == 0 (unlikely) */
/* restore the previous handler */
faulthandler_disable_fatal_handler(handler);
- PUTS(fd, "Fatal Python error: ");
- PUTS(fd, handler->name);
- PUTS(fd, "\n\n");
+ if (found) {
+ PUTS(fd, "Fatal Python error: ");
+ PUTS(fd, handler->name);
+ PUTS(fd, "\n\n");
+ }
+ else {
+ char unknown_signum[23] = {0,};
+ snprintf(unknown_signum, 23, "%d", signum);
+ PUTS(fd, "Fatal Python error from unexpected signum: ");
+ PUTS(fd, unknown_signum);
+ PUTS(fd, "\n\n");
+ }
faulthandler_dump_traceback(fd, fatal_error.all_threads,
fatal_error.interp);
PyErr_SetString(PyExc_TypeError, "state is not a tuple");
return NULL;
}
+ // The second item can be 1/0 in old pickles and True/False in new pickles
if (!PyArg_ParseTuple(state, "O!i", &PyList_Type, &saved, &firstpass)) {
return NULL;
}
if (m != n) {
PyErr_SetString(PyExc_ValueError,
"both points must have the same number of dimensions");
- return NULL;
-
+ goto error_exit;
}
if (n > NUM_STACK_ELEMS) {
diffs = (double *) PyObject_Malloc(n * sizeof(double));
if (diffs == NULL) {
- return PyErr_NoMemory();
+ PyErr_NoMemory();
+ goto error_exit;
}
}
for (i=0 ; i<n ; i++) {
_overlapped.WSAConnect
client_handle as ConnectSocket: HANDLE
- address_as_bytes as AddressObj: object
+ address_as_bytes as AddressObj: object(subclass_of='&PyTuple_Type')
/
Bind a remote address to a connectionless (UDP) socket.
static PyObject *
_overlapped_WSAConnect_impl(PyObject *module, HANDLE ConnectSocket,
PyObject *AddressObj)
-/*[clinic end generated code: output=ea0b4391e94dad63 input=169f8075e9ae7fa4]*/
+/*[clinic end generated code: output=ea0b4391e94dad63 input=7cf65313d49c015a]*/
{
char AddressBuf[sizeof(struct sockaddr_in6)];
SOCKADDR *Address = (SOCKADDR*)AddressBuf;
handle: HANDLE
buf as bufobj: object
flags: DWORD
- address_as_bytes as AddressObj: object
+ address_as_bytes as AddressObj: object(subclass_of='&PyTuple_Type')
/
Start overlapped sendto over a connectionless (UDP) socket.
_overlapped_Overlapped_WSASendTo_impl(OverlappedObject *self, HANDLE handle,
PyObject *bufobj, DWORD flags,
PyObject *AddressObj)
-/*[clinic end generated code: output=fe0ff55eb60d65e1 input=f709e6ecebd9bc18]*/
+/*[clinic end generated code: output=fe0ff55eb60d65e1 input=932a34941465df43]*/
{
char AddressBuf[sizeof(struct sockaddr_in6)];
SOCKADDR *Address = (SOCKADDR*)AddressBuf;
FSCTL_GET_REPARSE_POINT is not exported with WIN32_LEAN_AND_MEAN. */
# include <windows.h>
# include <pathcch.h>
+# include <lmcons.h> // UNLEN
+# include "osdefs.h" // SEP
+# define HAVE_SYMLINK
#endif
#ifdef __VXWORKS__
# ifdef HAVE_PROCESS_H
# include <process.h>
# endif
-# ifndef IO_REPARSE_TAG_SYMLINK
-# define IO_REPARSE_TAG_SYMLINK (0xA000000CL)
-# endif
-# ifndef IO_REPARSE_TAG_MOUNT_POINT
-# define IO_REPARSE_TAG_MOUNT_POINT (0xA0000003L)
-# endif
-# include "osdefs.h" // SEP
# include <malloc.h>
-# include <windows.h>
-# include <shellapi.h> // ShellExecute()
-# include <lmcons.h> // UNLEN
-# define HAVE_SYMLINK
#endif /* _MSC_VER */
#ifndef MAXPATHLEN
rl_attempted_completion_function = flex_complete;
/* Set Python word break characters */
completer_word_break_characters =
- rl_completer_word_break_characters =
strdup(" \t\n`~!@#$%^&*()-=+[{]}\\|;:'\",<>/?");
/* All nonalphanums except '.' */
+ rl_completer_word_break_characters = completer_word_break_characters;
mod_state->begidx = PyLong_FromLong(0L);
mod_state->endidx = PyLong_FromLong(0L);
*/
PyObject *ident = S_ident_o;
Py_XINCREF(ident);
+#ifdef __APPLE__
+ // gh-98178: On macOS, libc syslog() is not thread-safe
+ syslog(priority, "%s", message);
+#else
Py_BEGIN_ALLOW_THREADS;
syslog(priority, "%s", message);
Py_END_ALLOW_THREADS;
+#endif
Py_XDECREF(ident);
Py_RETURN_NONE;
}
{
termiosmodulestate *state = PyModule_GetState(module);
struct termios mode;
- if (tcgetattr(fd, &mode) == -1) {
+ int r;
+
+ Py_BEGIN_ALLOW_THREADS
+ r = tcgetattr(fd, &mode);
+ Py_END_ALLOW_THREADS
+ if (r == -1) {
return PyErr_SetFromErrno(state->TermiosError);
}
/* Get the old mode, in case there are any hidden fields... */
termiosmodulestate *state = PyModule_GetState(module);
struct termios mode;
- if (tcgetattr(fd, &mode) == -1) {
+ int r;
+
+ Py_BEGIN_ALLOW_THREADS
+ r = tcgetattr(fd, &mode);
+ Py_END_ALLOW_THREADS
+ if (r == -1) {
return PyErr_SetFromErrno(state->TermiosError);
}
return PyErr_SetFromErrno(state->TermiosError);
if (cfsetospeed(&mode, (speed_t) ospeed) == -1)
return PyErr_SetFromErrno(state->TermiosError);
- if (tcsetattr(fd, when, &mode) == -1)
+
+ Py_BEGIN_ALLOW_THREADS
+ r = tcsetattr(fd, when, &mode);
+ Py_END_ALLOW_THREADS
+
+ if (r == -1)
return PyErr_SetFromErrno(state->TermiosError);
Py_RETURN_NONE;
/*[clinic end generated code: output=5945f589b5d3ac66 input=dc2f32417691f8ed]*/
{
termiosmodulestate *state = PyModule_GetState(module);
- if (tcsendbreak(fd, duration) == -1) {
+ int r;
+
+ Py_BEGIN_ALLOW_THREADS
+ r = tcsendbreak(fd, duration);
+ Py_END_ALLOW_THREADS
+
+ if (r == -1) {
return PyErr_SetFromErrno(state->TermiosError);
}
/*[clinic end generated code: output=5fd86944c6255955 input=c99241b140b32447]*/
{
termiosmodulestate *state = PyModule_GetState(module);
- if (tcdrain(fd) == -1) {
+ int r;
+
+ Py_BEGIN_ALLOW_THREADS
+ r = tcdrain(fd);
+ Py_END_ALLOW_THREADS
+
+ if (r == -1) {
return PyErr_SetFromErrno(state->TermiosError);
}
/*[clinic end generated code: output=2424f80312ec2f21 input=0f7d08122ddc07b5]*/
{
termiosmodulestate *state = PyModule_GetState(module);
- if (tcflush(fd, queue) == -1) {
+ int r;
+
+ Py_BEGIN_ALLOW_THREADS
+ r = tcflush(fd, queue);
+ Py_END_ALLOW_THREADS
+
+ if (r == -1) {
return PyErr_SetFromErrno(state->TermiosError);
}
/*[clinic end generated code: output=afd10928e6ea66eb input=c6aff0640b6efd9c]*/
{
termiosmodulestate *state = PyModule_GetState(module);
- if (tcflow(fd, action) == -1) {
+ int r;
+
+ Py_BEGIN_ALLOW_THREADS
+ r = tcflow(fd, action);
+ Py_END_ALLOW_THREADS
+
+ if (r == -1) {
return PyErr_SetFromErrno(state->TermiosError);
}
Py_ssize_t iparam = 0;
for (Py_ssize_t iarg = 0; iarg < nargs; iarg++) {
PyObject *t = PyTuple_GET_ITEM(args, iarg);
+ if (PyType_Check(t)) {
+ continue;
+ }
int typevar = is_typevar(t);
if (typevar < 0) {
Py_DECREF(parameters);
static PyObject *
subs_tvars(PyObject *obj, PyObject *params, PyObject **argitems)
{
+ if (PyType_Check(obj)) {
+ Py_INCREF(obj);
+ return obj;
+ }
+
_Py_IDENTIFIER(__parameters__);
PyObject *subparams;
if (_PyObject_LookupAttrId(obj, &PyId___parameters__, &subparams) < 0) {
tsize = -tsize;
}
size_t size = _PyObject_VAR_SIZE(tp, tsize);
+ assert(size <= (size_t)PY_SSIZE_T_MAX);
+ dictoffset += (Py_ssize_t)size;
- dictoffset += (long)size;
_PyObject_ASSERT(obj, dictoffset > 0);
_PyObject_ASSERT(obj, dictoffset % SIZEOF_VOID_P == 0);
}
* will be living in full pools -- would be a shame to miss them.
*/
for (i = 0; i < maxarenas; ++i) {
- uint j;
uintptr_t base = arenas[i].address;
/* Skip arenas which are not allocated. */
/* visit every pool in the arena */
assert(base <= (uintptr_t) arenas[i].pool_address);
- for (j = 0; base < (uintptr_t) arenas[i].pool_address;
- ++j, base += POOL_SIZE) {
+ for (; base < (uintptr_t) arenas[i].pool_address; base += POOL_SIZE) {
poolp p = (poolp)base;
const uint sz = p->szidx;
uint freeblocks;
#include <string>
+#include <appmodel.h>
#include <winrt\Windows.ApplicationModel.h>
#include <winrt\Windows.Storage.h>
#endif
static std::wstring
-get_user_base()
+get_package_family()
{
try {
- const auto appData = winrt::Windows::Storage::ApplicationData::Current();
- if (appData) {
- const auto localCache = appData.LocalCacheFolder();
- if (localCache) {
- auto path = localCache.Path();
- if (!path.empty()) {
- return std::wstring(path) + L"\\local-packages";
- }
- }
+ UINT32 nameLength = MAX_PATH;
+ std::wstring name;
+ name.resize(nameLength);
+ DWORD rc = GetCurrentPackageFamilyName(&nameLength, name.data());
+ if (rc == ERROR_SUCCESS) {
+ name.resize(nameLength - 1);
+ return name;
}
- } catch (...) {
+ else if (rc != ERROR_INSUFFICIENT_BUFFER) {
+ throw rc;
+ }
+ name.resize(nameLength);
+ rc = GetCurrentPackageFamilyName(&nameLength, name.data());
+ if (rc != ERROR_SUCCESS) {
+ throw rc;
+ }
+ name.resize(nameLength - 1);
+ return name;
}
+ catch (...) {
+ }
+
return std::wstring();
}
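The rewritten `get_package_family()` above uses the standard Win32 "call twice" idiom: try a guessed buffer size, and if the API returns `ERROR_INSUFFICIENT_BUFFER`, retry with the exact size it reported. A minimal Python sketch of that idiom; `fake_family_name` is a hypothetical stand-in for `GetCurrentPackageFamilyName()`, not a real API:

```python
ERROR_SUCCESS = 0
ERROR_INSUFFICIENT_BUFFER = 122

def call_twice(api, guess):
    """Win32 'call twice' idiom: try a guessed buffer size, then retry
    once with the size the API reports. Other failures are raised."""
    rc, needed, value = api(guess)
    if rc == ERROR_SUCCESS:
        return value
    if rc != ERROR_INSUFFICIENT_BUFFER:
        raise OSError(rc)
    rc, needed, value = api(needed)
    if rc != ERROR_SUCCESS:
        raise OSError(rc)
    return value

# Hypothetical stand-in for GetCurrentPackageFamilyName().
def fake_family_name(buf_len, _name="Example.Package_abc123"):
    needed = len(_name) + 1          # room for the terminating NUL
    if buf_len < needed:
        return ERROR_INSUFFICIENT_BUFFER, needed, None
    return ERROR_SUCCESS, needed, _name

print(call_twice(fake_family_name, 4))
```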
static std::wstring
-get_package_family()
+get_user_base()
{
try {
- const auto package = winrt::Windows::ApplicationModel::Package::Current();
- if (package) {
- const auto id = package.Id();
- if (id) {
- return std::wstring(id.FamilyName());
+ const auto appData = winrt::Windows::Storage::ApplicationData::Current();
+ if (appData) {
+ const auto localCache = appData.LocalCacheFolder();
+ if (localCache) {
+ std::wstring path { localCache.Path().c_str() };
+ if (!path.empty()) {
+ return path + L"\\local-packages";
+ }
}
}
- }
- catch (...) {
+ } catch (...) {
}
return std::wstring();
get_package_home()
{
try {
- const auto package = winrt::Windows::ApplicationModel::Package::Current();
- if (package) {
- const auto path = package.InstalledLocation();
- if (path) {
- return std::wstring(path.Path());
- }
+ UINT32 pathLength = MAX_PATH;
+ std::wstring path;
+ path.resize(pathLength);
+ DWORD rc = GetCurrentPackagePath(&pathLength, path.data());
+ if (rc == ERROR_SUCCESS) {
+ path.resize(pathLength - 1);
+ return path;
+ }
+ else if (rc != ERROR_INSUFFICIENT_BUFFER) {
+ throw rc;
+ }
+ path.resize(pathLength);
+ rc = GetCurrentPackagePath(&pathLength, path.data());
+ if (rc != ERROR_SUCCESS) {
+ throw rc;
}
+ path.resize(pathLength - 1);
+ return path;
}
catch (...) {
}
set libraries=%libraries% bzip2-1.0.8\r
if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.3.0\r
if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1q\r
-set libraries=%libraries% sqlite-3.37.2.0\r
+set libraries=%libraries% sqlite-3.39.4.0\r
if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.12.0\r
if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tk-8.6.12.0\r
if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tix-8.4.3.6\r
set libraries=%libraries% xz-5.2.5\r
-set libraries=%libraries% zlib-1.2.12\r
+set libraries=%libraries% zlib-1.2.13\r
\r
for %%e in (%libraries%) do (\r
if exist "%EXTERNALS_DIR%\%%e" (\r
<ExternalsDir>$(EXTERNALS_DIR)</ExternalsDir>\r
<ExternalsDir Condition="$(ExternalsDir) == ''">$([System.IO.Path]::GetFullPath(`$(PySourcePath)externals`))</ExternalsDir>\r
<ExternalsDir Condition="!HasTrailingSlash($(ExternalsDir))">$(ExternalsDir)\</ExternalsDir>\r
- <sqlite3Dir>$(ExternalsDir)sqlite-3.37.2.0\</sqlite3Dir>\r
+ <sqlite3Dir>$(ExternalsDir)sqlite-3.39.4.0\</sqlite3Dir>\r
<bz2Dir>$(ExternalsDir)bzip2-1.0.8\</bz2Dir>\r
<lzmaDir>$(ExternalsDir)xz-5.2.5\</lzmaDir>\r
<libffiDir>$(ExternalsDir)libffi-3.3.0\</libffiDir>\r
<opensslOutDir>$(ExternalsDir)openssl-bin-1.1.1q\$(ArchName)\</opensslOutDir>\r
<opensslIncludeDir>$(opensslOutDir)include</opensslIncludeDir>\r
<nasmDir>$(ExternalsDir)\nasm-2.11.06\</nasmDir>\r
- <zlibDir>$(ExternalsDir)\zlib-1.2.12\</zlibDir>\r
+ <zlibDir>$(ExternalsDir)\zlib-1.2.13\</zlibDir>\r
\r
<!-- Suffix for all binaries when building for debug -->\r
<PyDebugExt Condition="'$(PyDebugExt)' == '' and $(Configuration) == 'Debug'">_d</PyDebugExt>\r
again when building.\r
\r
_sqlite3\r
- Wraps SQLite 3.37.2, which is itself built by sqlite3.vcxproj\r
+ Wraps SQLite 3.39.4, which is itself built by sqlite3.vcxproj\r
Homepage:\r
https://www.sqlite.org/\r
_tkinter\r
error_ret(tok);
goto error;
}
- if (!tok_reserve_buf(tok, buflen + 1)) {
+ // Make room for the null terminator *and* potentially
+ // an extra newline character that we may need to artificially
+ // add.
+ size_t buffer_size = buflen + 2;
+ if (!tok_reserve_buf(tok, buffer_size)) {
goto error;
}
memcpy(tok->inp, buf, buflen);
return 0;
}
if (tok->inp[-1] != '\n') {
+ assert(tok->inp + 1 < tok->end);
/* Last line does not end in \n, fake one */
*tok->inp++ = '\n';
*tok->inp = '\0';
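The hunk above reserves `buflen + 2` bytes so the tokenizer can append both an artificial trailing `'\n'` and the NUL terminator in place when the last line of input has no newline. The same idea, sketched in Python:

```python
def ensure_trailing_newline(buf: bytes) -> bytes:
    # Mirrors the tokenizer change: if the last line does not end in
    # '\n', fake one (the C code pre-reserves space to do this in place).
    if buf and not buf.endswith(b"\n"):
        buf += b"\n"
    return buf

print(ensure_trailing_newline(b"x = 1"))
print(ensure_trailing_newline(b"x = 1\n"))
```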
if (_PyUnicode_EqualToASCIIString(name, p->name)) {
if (p->initfunc == NULL) {
/* Cannot re-init internal module ("sys" or "builtins") */
- return PyImport_AddModuleObject(name);
+ mod = PyImport_AddModuleObject(name);
+ return Py_XNewRef(mod);
}
mod = (*p->initfunc)();
if (interp->audit_hooks == NULL) {
return NULL;
}
+ /* Avoid having our list of hooks show up in the GC module */
+ PyObject_GC_UnTrack(interp->audit_hooks);
}
if (PyList_Append(interp->audit_hooks, hook) < 0) {
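The `PyObject_GC_UnTrack` call above keeps the interpreter's private list of audit hooks out of `gc.get_objects()` output. The hooks themselves still fire normally; a quick Python check of the mechanism via `sys.addaudithook`:

```python
import sys

seen = []

def hook(event, args):
    # Many built-in audit events fire constantly, so filter to ours.
    if event == "demo.event":
        seen.append(args)

sys.addaudithook(hook)        # stored in the interpreter's hook list
sys.audit("demo.event", 1, 2)
print(seen)
```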
-This is Python version 3.10.8
+This is Python version 3.10.9
=============================
.. image:: https://travis-ci.com/python/cpython.svg?branch=master
if ttype == tokenize.STRING and is_literal_string(tstring):
self.__addentry(safe_eval(tstring), lineno, isdocstring=1)
self.__freshmodule = 0
- elif ttype not in (tokenize.COMMENT, tokenize.NL):
- self.__freshmodule = 0
- return
+ return
+ if ttype in (tokenize.COMMENT, tokenize.NL, tokenize.ENCODING):
+ return
+ self.__freshmodule = 0
# class or func/method docstring?
if ttype == tokenize.NAME and tstring in ('class', 'def'):
self.__state = self.__suiteseen
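The pygettext fix above lets `ENCODING`, `COMMENT` and `NL` tokens precede the module docstring without ending the "fresh module" state, while any other token still does. A small sketch of the same scan using the `tokenize` module directly:

```python
import io
import tokenize

def module_docstring(source: bytes):
    # ENCODING, COMMENT and NL tokens may precede the module docstring;
    # any other token means the module has no docstring.
    skip = {tokenize.ENCODING, tokenize.COMMENT, tokenize.NL}
    for tok in tokenize.tokenize(io.BytesIO(source).readline):
        if tok.type in skip:
            continue
        return tok.string if tok.type == tokenize.STRING else None

print(module_docstring(b'# leading comment\n"""doc"""\n'))
print(module_docstring(b'x = 1\n"""not a docstring"""\n'))
```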
CC="$CC -pg"
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
-int main() { return 0; }
+int main(void) { return 0; }
_ACEOF
if ac_fn_c_try_link "$LINENO"; then :
void* routine(void* p){return NULL;}
-int main(){
+int main(void){
pthread_t p;
if(pthread_create(&p,NULL,routine,NULL)!=0)
return 1;
void* routine(void* p){return NULL;}
-int main(){
+int main(void){
pthread_t p;
if(pthread_create(&p,NULL,routine,NULL)!=0)
return 1;
void* routine(void* p){return NULL;}
-int main(){
+int main(void){
pthread_t p;
if(pthread_create(&p,NULL,routine,NULL)!=0)
return 1;
void* routine(void* p){return NULL;}
-int main(){
+int main(void){
pthread_t p;
if(pthread_create(&p,NULL,routine,NULL)!=0)
return 1;
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
-int main()
+int main(void)
{
char s[16];
int i, *p1, *p2;
/* end confdefs.h. */
#include <stdio.h>
+#include <stdlib.h>
#include <pthread.h>
void * start_routine (void *arg) { exit (0); }
void *foo(void *parm) {
return NULL;
}
- main() {
+ int main(void) {
pthread_attr_t attr;
pthread_t id;
if (pthread_attr_init(&attr)) return (-1);
#include <sys/stat.h>
#include <unistd.h>
-int main(int argc, char*argv[])
+int main(int argc, char *argv[])
{
if(chflags(argv[0], 0) != 0)
return 1;
#include <sys/stat.h>
#include <unistd.h>
-int main(int argc, char*argv[])
+int main(int argc, char *argv[])
{
if(lchflags(argv[0], 0) != 0)
return 1;
#include <sys/socket.h>
#include <netinet/in.h>
-int main()
+int main(void)
{
int passive, gaierr, inet4 = 0, inet6 = 0;
struct addrinfo hints, *ai, *aitop;
#include <stdlib.h>
#include <math.h>
-int main() {
+int main(void) {
volatile double x, y, z;
/* 1./(1-2**-53) -> 1+2**-52 (correct), 1.0 (double rounding) */
x = 0.99999999999999989; /* 1-2**-53 */
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
-int main()
+int main(void)
{
return (((-1)>>3 == -1) ? 0 : 1);
}
#include <stdlib.h>
#include <unistd.h>
-int main()
+int main(void)
{
int val1 = nice(1);
if (val1 != -1 && val1 == nice(2))
#include <poll.h>
#include <unistd.h>
-int main()
+int main(void)
{
struct pollfd poll_struct = { 42, POLLIN|POLLPRI|POLLOUT, 0 };
int poll_test;
extern char *tzname[];
#endif
-int main()
+int main(void)
{
/* Note that we need to ensure that not only does tzset(3)
do 'something' with localtime, but it works as documented
cat confdefs.h - <<_ACEOF >conftest.$ac_ext
/* end confdefs.h. */
+#include <stddef.h>
#include <stdio.h>
-#include<stdlib.h>
-int main() {
+#include <stdlib.h>
+int main(void) {
size_t len = -1;
const char *str = "text";
len = mbstowcs(NULL, str, 0);
#include <stdlib.h>
#include <string.h>
void foo(void *p, void *q) { memmove(p, q, 19); }
-int main() {
+int main(void) {
char a[32] = "123456789000000000";
foo(&a[9], a);
if (strcmp(a, "123456789123456789000000000") != 0)
);
return r;
}
- int main() {
+ int main(void) {
int p = 8;
if ((foo(&p) ? : p) != 6)
return 1;
#include <stdatomic.h>
atomic_int int_var;
atomic_uintptr_t uintptr_var;
- int main() {
+ int main(void) {
atomic_store_explicit(&int_var, 5, memory_order_relaxed);
atomic_store_explicit(&uintptr_var, 0, memory_order_relaxed);
int loaded_value = atomic_load_explicit(&int_var, memory_order_seq_cst);
int val;
- int main() {
+ int main(void) {
__atomic_store_n(&val, 1, __ATOMIC_SEQ_CST);
(void)__atomic_load_n(&val, __ATOMIC_SEQ_CST);
return 0;
#include <dirent.h>
- int main() {
+ int main(void) {
struct dirent entry;
return entry.d_type == DT_UNKNOWN;
}
/* end confdefs.h. */
+ #include <stddef.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/random.h>
- int main() {
+ int main(void) {
char buffer[1];
const size_t buflen = sizeof(buffer);
const int flags = GRND_NONBLOCK;
/* end confdefs.h. */
+ #include <stddef.h>
#include <sys/random.h>
- int main() {
+ int main(void) {
char buffer[1];
const size_t buflen = sizeof(buffer);
const int flags = 0;
if test "x$enable_profiling" = xyes; then
ac_save_cc="$CC"
CC="$CC -pg"
- AC_LINK_IFELSE([AC_LANG_SOURCE([[int main() { return 0; }]])],
+ AC_LINK_IFELSE([AC_LANG_SOURCE([[int main(void) { return 0; }]])],
[],
[enable_profiling=no])
CC="$ac_save_cc"
void* routine(void* p){return NULL;}
-int main(){
+int main(void){
pthread_t p;
if(pthread_create(&p,NULL,routine,NULL)!=0)
return 1;
void* routine(void* p){return NULL;}
-int main(){
+int main(void){
pthread_t p;
if(pthread_create(&p,NULL,routine,NULL)!=0)
return 1;
void* routine(void* p){return NULL;}
-int main(){
+int main(void){
pthread_t p;
if(pthread_create(&p,NULL,routine,NULL)!=0)
return 1;
void* routine(void* p){return NULL;}
-int main(){
+int main(void){
pthread_t p;
if(pthread_create(&p,NULL,routine,NULL)!=0)
return 1;
AC_MSG_CHECKING(aligned memory access is required)
AC_CACHE_VAL(ac_cv_aligned_required,
[AC_RUN_IFELSE([AC_LANG_SOURCE([[
-int main()
+int main(void)
{
char s[16];
int i, *p1, *p2;
AC_MSG_CHECKING([for pthread_create in -lpthread])
AC_LINK_IFELSE([AC_LANG_PROGRAM([[
#include <stdio.h>
+#include <stdlib.h>
#include <pthread.h>
void * start_routine (void *arg) { exit (0); }]], [[
void *foo(void *parm) {
return NULL;
}
- main() {
+ int main(void) {
pthread_attr_t attr;
pthread_t id;
if (pthread_attr_init(&attr)) return (-1);
AC_RUN_IFELSE([AC_LANG_SOURCE([[
#include <sys/stat.h>
#include <unistd.h>
-int main(int argc, char*argv[])
+int main(int argc, char *argv[])
{
if(chflags(argv[0], 0) != 0)
return 1;
AC_RUN_IFELSE([AC_LANG_SOURCE([[
#include <sys/stat.h>
#include <unistd.h>
-int main(int argc, char*argv[])
+int main(int argc, char *argv[])
{
if(lchflags(argv[0], 0) != 0)
return 1;
#include <sys/socket.h>
#include <netinet/in.h>
-int main()
+int main(void)
{
int passive, gaierr, inet4 = 0, inet6 = 0;
struct addrinfo hints, *ai, *aitop;
AC_RUN_IFELSE([AC_LANG_SOURCE([[
#include <stdlib.h>
#include <math.h>
-int main() {
+int main(void) {
volatile double x, y, z;
/* 1./(1-2**-53) -> 1+2**-52 (correct), 1.0 (double rounding) */
x = 0.99999999999999989; /* 1-2**-53 */
AC_MSG_CHECKING(whether right shift extends the sign bit)
AC_CACHE_VAL(ac_cv_rshift_extends_sign, [
AC_RUN_IFELSE([AC_LANG_SOURCE([[
-int main()
+int main(void)
{
return (((-1)>>3 == -1) ? 0 : 1);
}
AC_RUN_IFELSE([AC_LANG_SOURCE([[
#include <stdlib.h>
#include <unistd.h>
-int main()
+int main(void)
{
int val1 = nice(1);
if (val1 != -1 && val1 == nice(2))
#include <poll.h>
#include <unistd.h>
-int main()
+int main(void)
{
struct pollfd poll_struct = { 42, POLLIN|POLLPRI|POLLOUT, 0 };
int poll_test;
extern char *tzname[];
#endif
-int main()
+int main(void)
{
/* Note that we need to ensure that not only does tzset(3)
do 'something' with localtime, but it works as documented
AC_MSG_CHECKING(for broken mbstowcs)
AC_CACHE_VAL(ac_cv_broken_mbstowcs,
AC_RUN_IFELSE([AC_LANG_SOURCE([[
+#include <stddef.h>
#include <stdio.h>
-#include<stdlib.h>
-int main() {
+#include <stdlib.h>
+int main(void) {
size_t len = -1;
const char *str = "text";
len = mbstowcs(NULL, str, 0);
#include <stdlib.h>
#include <string.h>
void foo(void *p, void *q) { memmove(p, q, 19); }
-int main() {
+int main(void) {
char a[32] = "123456789000000000";
foo(&a[9], a);
if (strcmp(a, "123456789123456789000000000") != 0)
);
return r;
}
- int main() {
+ int main(void) {
int p = 8;
if ((foo(&p) ? : p) != 6)
return 1;
#include <stdatomic.h>
atomic_int int_var;
atomic_uintptr_t uintptr_var;
- int main() {
+ int main(void) {
atomic_store_explicit(&int_var, 5, memory_order_relaxed);
atomic_store_explicit(&uintptr_var, 0, memory_order_relaxed);
int loaded_value = atomic_load_explicit(&int_var, memory_order_seq_cst);
[
AC_LANG_SOURCE([[
int val;
- int main() {
+ int main(void) {
__atomic_store_n(&val, 1, __ATOMIC_SEQ_CST);
(void)__atomic_load_n(&val, __ATOMIC_SEQ_CST);
return 0;
AC_LANG_SOURCE([[
#include <dirent.h>
- int main() {
+ int main(void) {
struct dirent entry;
return entry.d_type == DT_UNKNOWN;
}
AC_LINK_IFELSE(
[
AC_LANG_SOURCE([[
+ #include <stddef.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/random.h>
- int main() {
+ int main(void) {
char buffer[1];
const size_t buflen = sizeof(buffer);
const int flags = GRND_NONBLOCK;
AC_LINK_IFELSE(
[
AC_LANG_SOURCE([[
+ #include <stddef.h>
#include <sys/random.h>
- int main() {
+ int main(void) {
char buffer[1];
const size_t buflen = sizeof(buffer);
const int flags = 0;