From: JinWang An Date: Wed, 18 Jan 2023 06:01:48 +0000 (+0900) Subject: Imported Upstream version 3.10.9 X-Git-Tag: upstream/3.10.9^0 X-Git-Url: http://review.tizen.org/git/?a=commitdiff_plain;h=647a127df741e75721bfed8c31cf7f5fd371eb26;p=platform%2Fupstream%2Fpython3.git Imported Upstream version 3.10.9 --- diff --git a/Doc/README.rst b/Doc/README.rst index 729f4f85..5c85ad7c 100644 --- a/Doc/README.rst +++ b/Doc/README.rst @@ -91,7 +91,7 @@ Available make targets are: * "pydoc-topics", which builds a Python module containing a dictionary with plain text documentation for the labels defined in - `tools/pyspecific.py` -- pydoc needs these to show topic and keyword help. + ``tools/pyspecific.py`` -- pydoc needs these to show topic and keyword help. * "suspicious", which checks the parsed markup for text that looks like malformed and thus unconverted reST. diff --git a/Doc/c-api/init.rst b/Doc/c-api/init.rst index 31921896..483bcd99 100644 --- a/Doc/c-api/init.rst +++ b/Doc/c-api/init.rst @@ -1722,7 +1722,7 @@ is not possible due to its implementation being opaque at build time. Free the given *key* allocated by :c:func:`PyThread_tss_alloc`, after first calling :c:func:`PyThread_tss_delete` to ensure any associated thread locals have been unassigned. This is a no-op if the *key* - argument is `NULL`. + argument is ``NULL``. .. note:: A freed key becomes a dangling pointer. You should reset the key to diff --git a/Doc/c-api/init_config.rst b/Doc/c-api/init_config.rst index 2b6da2a7..15794fb5 100644 --- a/Doc/c-api/init_config.rst +++ b/Doc/c-api/init_config.rst @@ -254,7 +254,7 @@ PyPreConfig .. c:member:: int configure_locale - Set the LC_CTYPE locale to the user preferred locale? + Set the LC_CTYPE locale to the user preferred locale. If equals to 0, set :c:member:`~PyPreConfig.coerce_c_locale` and :c:member:`~PyPreConfig.coerce_c_locale_warn` members to 0. diff --git a/Doc/c-api/memory.rst b/Doc/c-api/memory.rst index 1372dc45..046719c3 100644 --- a/Doc/c-api/memory.rst +++ b/Doc/c-api/memory.rst @@ -95,6 +95,8 @@ for the I/O buffer escapes completely the Python memory manager. Allocator Domains ================= +.. _allocator-domains: + All allocating functions belong to one of three different "domains" (see also :c:type:`PyMemAllocatorDomain`). These domains represent different allocation strategies and are optimized for different purposes. The specific details on @@ -477,6 +479,25 @@ Customize Memory Allocators debug hooks on top on the new allocator. + .. warning:: + + :c:func:`PyMem_SetAllocator` does have the following contract: + + * It can be called after :c:func:`Py_PreInitialize` and before + :c:func:`Py_InitializeFromConfig` to install a custom memory + allocator. There are no restrictions over the installed allocator + other than the ones imposed by the domain (for instance, the Raw + Domain allows the allocator to be called without the GIL held). See + :ref:`the section on allocator domains ` for more + information. + + * If called after Python has finish initializing (after + :c:func:`Py_InitializeFromConfig` has been called) the allocator + **must** wrap the existing allocator. Substituting the current + allocator for some other arbitrary one is **not supported**. + + + .. c:function:: void PyMem_SetupDebugHooks(void) Setup :ref:`debug hooks in the Python memory allocators ` diff --git a/Doc/c-api/type.rst b/Doc/c-api/type.rst index 01d00bed..97816948 100644 --- a/Doc/c-api/type.rst +++ b/Doc/c-api/type.rst @@ -40,7 +40,7 @@ Type Objects .. 
c:function:: unsigned long PyType_GetFlags(PyTypeObject* type) Return the :c:member:`~PyTypeObject.tp_flags` member of *type*. This function is primarily - meant for use with `Py_LIMITED_API`; the individual flag bits are + meant for use with ``Py_LIMITED_API``; the individual flag bits are guaranteed to be stable across Python releases, but access to :c:member:`~PyTypeObject.tp_flags` itself is not part of the limited API. diff --git a/Doc/c-api/typeobj.rst b/Doc/c-api/typeobj.rst index 21385afb..53062e1d 100644 --- a/Doc/c-api/typeobj.rst +++ b/Doc/c-api/typeobj.rst @@ -149,10 +149,16 @@ Quick Reference +------------------------------------------------+-----------------------------------+-------------------+---+---+---+---+ .. [#slots] - A slot name in parentheses indicates it is (effectively) deprecated. - Names in angle brackets should be treated as read-only. - Names in square brackets are for internal use only. - "" (as a prefix) means the field is required (must be non-``NULL``). + + **()**: A slot name in parentheses indicates it is (effectively) deprecated. + + **<>**: Names in angle brackets should be initially set to ``NULL`` and + treated as read-only. + + **[]**: Names in square brackets are for internal use only. + + **** (as a prefix) means the field is required (must be non-``NULL``). + .. [#cols] Columns: **"O"**: set on :c:type:`PyBaseObject_Type` @@ -1212,6 +1218,17 @@ and :c:type:`PyType_Type` effectively act as defaults.) **Inheritance:** This flag is not inherited. + However, subclasses will not be instantiable unless they provide a + non-NULL :c:member:`~PyTypeObject.tp_new` (which is only possible + via the C API). + + .. note:: + + To disallow instantiating a class directly but allow instantiating + its subclasses (e.g. for an :term:`abstract base class`), + do not use this flag. + Instead, make :c:member:`~PyTypeObject.tp_new` only succeed for + subclasses. .. versionadded:: 3.10 @@ -1898,8 +1915,19 @@ and :c:type:`PyType_Type` effectively act as defaults.) Tuple of base types. - This is set for types created by a class statement. It should be ``NULL`` for - statically defined types. + This field should be set to ``NULL`` and treated as read-only. + Python will fill it in when the type is :c:func:`initialized `. + + For dynamically created classes, the ``Py_tp_bases`` + :c:type:`slot ` can be used instead of the *bases* argument + of :c:func:`PyType_FromSpecWithBases`. + The argument form is preferred. + + .. warning:: + + Multiple inheritance does not work well for statically defined types. + If you set ``tp_bases`` to a tuple, Python will not raise an error, + but some slots will only be inherited from the first base. **Inheritance:** @@ -1911,6 +1939,8 @@ and :c:type:`PyType_Type` effectively act as defaults.) Tuple containing the expanded set of base types, starting with the type itself and ending with :class:`object`, in Method Resolution Order. + This field should be set to ``NULL`` and treated as read-only. + Python will fill it in when the type is :c:func:`initialized `. 
**Inheritance:** diff --git a/Doc/data/refcounts.dat b/Doc/data/refcounts.dat index 89b64e6c..cab22abc 100644 --- a/Doc/data/refcounts.dat +++ b/Doc/data/refcounts.dat @@ -1010,10 +1010,10 @@ PyImport_Import:PyObject*::+1: PyImport_Import:PyObject*:name:0: PyImport_ImportFrozenModule:int::: -PyImport_ImportFrozenModule:const char*::: +PyImport_ImportFrozenModule:const char*:name:: PyImport_ImportFrozenModuleObject:int::: -PyImport_ImportFrozenModuleObject:PyObject*::+1: +PyImport_ImportFrozenModuleObject:PyObject*:name:+1: PyImport_ImportModule:PyObject*::+1: PyImport_ImportModule:const char*:name:: diff --git a/Doc/extending/embedding.rst b/Doc/extending/embedding.rst index 5f5abdf9..e64db373 100644 --- a/Doc/extending/embedding.rst +++ b/Doc/extending/embedding.rst @@ -298,16 +298,16 @@ be directly useful to you: .. code-block:: shell-session - $ /opt/bin/python3.4-config --cflags - -I/opt/include/python3.4m -I/opt/include/python3.4m -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes + $ /opt/bin/python3.11-config --cflags + -I/opt/include/python3.11 -I/opt/include/python3.11 -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -* ``pythonX.Y-config --ldflags`` will give you the recommended flags when - linking: +* ``pythonX.Y-config --ldflags --embed`` will give you the recommended flags + when linking: .. code-block:: shell-session - $ /opt/bin/python3.4-config --ldflags - -L/opt/lib/python3.4/config-3.4m -lpthread -ldl -lutil -lm -lpython3.4m -Xlinker -export-dynamic + $ /opt/bin/python3.11-config --ldflags --embed + -L/opt/lib/python3.11/config-3.11-x86_64-linux-gnu -L/opt/lib -lpython3.11 -lpthread -ldl -lutil -lm .. note:: To avoid confusion between several Python installations (and especially diff --git a/Doc/faq/design.rst b/Doc/faq/design.rst index 9da1d01a..9dbfacd7 100644 --- a/Doc/faq/design.rst +++ b/Doc/faq/design.rst @@ -155,7 +155,7 @@ Why can't I use an assignment in an expression? Starting in Python 3.8, you can! -Assignment expressions using the walrus operator `:=` assign a variable in an +Assignment expressions using the walrus operator ``:=`` assign a variable in an expression:: while chunk := fp.read(200): diff --git a/Doc/faq/general.rst b/Doc/faq/general.rst index 7f64d847..16f726c5 100644 --- a/Doc/faq/general.rst +++ b/Doc/faq/general.rst @@ -125,11 +125,15 @@ find packages of interest to you. How does the Python version numbering scheme work? -------------------------------------------------- -Python versions are numbered A.B.C or A.B. A is the major version number -- it -is only incremented for really major changes in the language. B is the minor -version number, incremented for less earth-shattering changes. C is the -micro-level -- it is incremented for each bugfix release. See :pep:`6` for more -information about bugfix releases. +Python versions are numbered "A.B.C" or "A.B": + +* *A* is the major version number -- it is only incremented for really major + changes in the language. +* *B* is the minor version number -- it is incremented for less earth-shattering + changes. +* *C* is the micro version number -- it is incremented for each bugfix release. + +See :pep:`6` for more information about bugfix releases. Not all releases are bugfix releases. In the run-up to a new major release, a series of development releases are made, denoted as alpha, beta, or release @@ -139,12 +143,14 @@ Betas are more stable, preserving existing interfaces but possibly adding new modules, and release candidates are frozen, making no changes except as needed to fix critical bugs. 
-Alpha, beta and release candidate versions have an additional suffix. The -suffix for an alpha version is "aN" for some small number N, the suffix for a -beta version is "bN" for some small number N, and the suffix for a release -candidate version is "rcN" for some small number N. In other words, all versions -labeled 2.0aN precede the versions labeled 2.0bN, which precede versions labeled -2.0rcN, and *those* precede 2.0. +Alpha, beta and release candidate versions have an additional suffix: + +* The suffix for an alpha version is "aN" for some small number *N*. +* The suffix for a beta version is "bN" for some small number *N*. +* The suffix for a release candidate version is "rcN" for some small number *N*. + +In other words, all versions labeled *2.0aN* precede the versions labeled +*2.0bN*, which precede versions labeled *2.0rcN*, and *those* precede 2.0. You may also find version numbers with a "+" suffix, e.g. "2.2+". These are unreleased versions, built directly from the CPython development repository. In @@ -435,7 +441,7 @@ With the interpreter, documentation is never far from the student as they are programming. There are also good IDEs for Python. IDLE is a cross-platform IDE for Python -that is written in Python using Tkinter. PythonWin is a Windows-specific IDE. +that is written in Python using Tkinter. Emacs users will be happy to know that there is a very good Python mode for Emacs. All of these programming environments provide syntax highlighting, auto-indenting, and access to the interactive interpreter while coding. Consult diff --git a/Doc/faq/programming.rst b/Doc/faq/programming.rst index a33774ac..f8140481 100644 --- a/Doc/faq/programming.rst +++ b/Doc/faq/programming.rst @@ -25,8 +25,9 @@ Reference Manual `. You can also write your own debugger by using the code for pdb as an example. The IDLE interactive development environment, which is part of the standard -Python distribution (normally available as Tools/scripts/idle), includes a -graphical debugger. +Python distribution (normally available as +`Tools/scripts/idle3 `_), +includes a graphical debugger. PythonWin is a Python IDE that includes a GUI debugger based on pdb. The PythonWin debugger colors breakpoints and has quite a few cool features such as @@ -78,7 +79,8 @@ set of modules required by a program and bind these modules together with a Python binary to produce a single executable. One is to use the freeze tool, which is included in the Python source tree as -``Tools/freeze``. It converts Python byte code to C arrays; with a C compiler you can +`Tools/freeze `_. +It converts Python byte code to C arrays; with a C compiler you can embed all your modules into a new program, which is then linked with the standard Python modules. @@ -114,7 +116,7 @@ Core Language Why am I getting an UnboundLocalError when the variable has a value? -------------------------------------------------------------------- -It can be a surprise to get the UnboundLocalError in previously working +It can be a surprise to get the :exc:`UnboundLocalError` in previously working code when it is modified by adding an assignment statement somewhere in the body of a function. @@ -123,6 +125,7 @@ This code: >>> x = 10 >>> def bar(): ... print(x) + ... >>> bar() 10 @@ -133,7 +136,7 @@ works, but this code: ... print(x) ... x += 1 -results in an UnboundLocalError: +results in an :exc:`!UnboundLocalError`: >>> foo() Traceback (most recent call last): @@ -155,6 +158,7 @@ global: ... global x ... print(x) ... x += 1 + ... 
>>> foobar() 10 @@ -176,6 +180,7 @@ keyword: ... x += 1 ... bar() ... print(x) + ... >>> foo() 10 11 @@ -273,7 +278,7 @@ main.py:: import mod print(config.x) -Note that using a module is also the basis for implementing the Singleton design +Note that using a module is also the basis for implementing the singleton design pattern, for the same reason. @@ -291,9 +296,9 @@ using multiple imports per line uses less screen space. It's good practice if you import modules in the following order: -1. standard library modules -- e.g. ``sys``, ``os``, ``getopt``, ``re`` +1. standard library modules -- e.g. :mod:`sys`, :mod:`os`, :mod:`argparse`, :mod:`re` 2. third-party library modules (anything installed in Python's site-packages - directory) -- e.g. mx.DateTime, ZODB, PIL.Image, etc. + directory) -- e.g. :mod:`!dateutil`, :mod:`!requests`, :mod:`!PIL.Image` 3. locally developed modules It is sometimes necessary to move imports to a function or class to avoid @@ -471,7 +476,7 @@ object ``x`` refers to). After this assignment we have two objects (the ints Some operations (for example ``y.append(10)`` and ``y.sort()``) mutate the object, whereas superficially similar operations (for example ``y = y + [10]`` -and ``sorted(y)``) create a new object. In general in Python (and in all cases +and :func:`sorted(y) `) create a new object. In general in Python (and in all cases in the standard library) a method that mutates an object will return ``None`` to help avoid getting the two types of operations confused. So if you mistakenly write ``y.sort()`` thinking it will give you a sorted copy of ``y``, @@ -644,7 +649,7 @@ Sequences can be copied by slicing:: How can I find the methods or attributes of an object? ------------------------------------------------------ -For an instance x of a user-defined class, ``dir(x)`` returns an alphabetized +For an instance ``x`` of a user-defined class, :func:`dir(x) ` returns an alphabetized list of the names containing the instance attributes and methods and attributes defined by its class. @@ -669,9 +674,9 @@ callable. Consider the following code:: <__main__.A object at 0x16D07CC> Arguably the class has a name: even though it is bound to two names and invoked -through the name B the created instance is still reported as an instance of -class A. However, it is impossible to say whether the instance's name is a or -b, since both names are bound to the same value. +through the name ``B`` the created instance is still reported as an instance of +class ``A``. However, it is impossible to say whether the instance's name is ``a`` or +``b``, since both names are bound to the same value. Generally speaking it should not be necessary for your code to "know the names" of particular values. Unless you are deliberately writing introspective @@ -841,7 +846,7 @@ How do I get int literal attribute instead of SyntaxError? ---------------------------------------------------------- Trying to lookup an ``int`` literal attribute in the normal manner gives -a syntax error because the period is seen as a decimal point:: +a :exc:`SyntaxError` because the period is seen as a decimal point:: >>> 1.__class__ File "", line 1 @@ -887,7 +892,7 @@ leading '0' in a decimal number (except '0'). How do I convert a number to a string? -------------------------------------- -To convert, e.g., the number 144 to the string '144', use the built-in type +To convert, e.g., the number ``144`` to the string ``'144'``, use the built-in type constructor :func:`str`. 
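For example, a quick interactive check (the values here are arbitrary)::

    >>> str(144)
    '144'
    >>> str(0.1)
    '0.1'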
If you want a hexadecimal or octal representation, use the built-in functions :func:`hex` or :func:`oct`. For fancy formatting, see the :ref:`f-strings` and :ref:`formatstrings` sections, @@ -1006,11 +1011,11 @@ Not as such. For simple input parsing, the easiest approach is usually to split the line into whitespace-delimited words using the :meth:`~str.split` method of string objects and then convert decimal strings to numeric values using :func:`int` or -:func:`float`. ``split()`` supports an optional "sep" parameter which is useful +:func:`float`. :meth:`!split()` supports an optional "sep" parameter which is useful if the line uses something other than whitespace as a separator. For more complicated input parsing, regular expressions are more powerful -than C's :c:func:`sscanf` and better suited for the task. +than C's ``sscanf`` and better suited for the task. What does 'UnicodeDecodeError' or 'UnicodeEncodeError' error mean? @@ -1206,15 +1211,16 @@ difference is that a Python list can contain objects of many different types. The ``array`` module also provides methods for creating arrays of fixed types with compact representations, but they are slower to index than lists. Also -note that NumPy and other third party packages define array-like structures with +note that `NumPy `_ +and other third party packages define array-like structures with various characteristics as well. -To get Lisp-style linked lists, you can emulate cons cells using tuples:: +To get Lisp-style linked lists, you can emulate *cons cells* using tuples:: lisp_list = ("like", ("this", ("example", None) ) ) If mutability is desired, you could use lists instead of tuples. Here the -analogue of lisp car is ``lisp_list[0]`` and the analogue of cdr is +analogue of a Lisp *car* is ``lisp_list[0]`` and the analogue of *cdr* is ``lisp_list[1]``. Only do this if you're sure you really need to, because it's usually a lot slower than using Python lists. @@ -1334,11 +1340,12 @@ that even though there was an error, the append worked:: ['foo', 'item'] To see why this happens, you need to know that (a) if an object implements an -``__iadd__`` magic method, it gets called when the ``+=`` augmented assignment +:meth:`~object.__iadd__` magic method, it gets called when the ``+=`` augmented +assignment is executed, and its return value is what gets used in the assignment statement; -and (b) for lists, ``__iadd__`` is equivalent to calling ``extend`` on the list +and (b) for lists, :meth:`!__iadd__` is equivalent to calling :meth:`~list.extend` on the list and returning the list. That's why we say that for lists, ``+=`` is a -"shorthand" for ``list.extend``:: +"shorthand" for :meth:`!list.extend`:: >>> a_list = [] >>> a_list += [1] @@ -1363,7 +1370,7 @@ Thus, in our tuple example what is happening is equivalent to:: ... TypeError: 'tuple' object does not support item assignment -The ``__iadd__`` succeeds, and thus the list is extended, but even though +The :meth:`!__iadd__` succeeds, and thus the list is extended, but even though ``result`` points to the same object that ``a_tuple[0]`` already points to, that final assignment still results in an error, because tuples are immutable. @@ -1440,7 +1447,8 @@ See also :ref:`why-self`. How do I check if an object is an instance of a given class or of a subclass of it? ----------------------------------------------------------------------------------- -Use the built-in function ``isinstance(obj, cls)``. You can check if an object +Use the built-in function :func:`isinstance(obj, cls) `. 
You can +check if an object is an instance of any of a number of classes by providing a tuple instead of a single class, e.g. ``isinstance(obj, (class1, class2, ...))``, and can also check whether an object is one of Python's built-in types, e.g. @@ -1537,13 +1545,13 @@ Here the ``UpperOut`` class redefines the ``write()`` method to convert the argument string to uppercase before calling the underlying ``self._outfile.write()`` method. All other methods are delegated to the underlying ``self._outfile`` object. The delegation is accomplished via the -``__getattr__`` method; consult :ref:`the language reference ` +:meth:`~object.__getattr__` method; consult :ref:`the language reference ` for more information about controlling attribute access. Note that for more general cases delegation can get trickier. When attributes -must be set as well as retrieved, the class must define a :meth:`__setattr__` +must be set as well as retrieved, the class must define a :meth:`~object.__setattr__` method too, and it must do so carefully. The basic implementation of -:meth:`__setattr__` is roughly equivalent to the following:: +:meth:`!__setattr__` is roughly equivalent to the following:: class X: ... @@ -1551,7 +1559,8 @@ method too, and it must do so carefully. The basic implementation of self.__dict__[name] = value ... -Most :meth:`__setattr__` implementations must modify ``self.__dict__`` to store +Most :meth:`!__setattr__` implementations must modify +:meth:`self.__dict__ ` to store local state for self without causing an infinite recursion. @@ -1689,17 +1698,17 @@ My class defines __del__ but it is not called when I delete the object. There are several possible reasons for this. -The del statement does not necessarily call :meth:`__del__` -- it simply +The :keyword:`del` statement does not necessarily call :meth:`~object.__del__` -- it simply decrements the object's reference count, and if this reaches zero -:meth:`__del__` is called. +:meth:`!__del__` is called. If your data structures contain circular links (e.g. a tree where each child has a parent reference and each parent has a list of children) the reference counts will never go back to zero. Once in a while Python runs an algorithm to detect such cycles, but the garbage collector might run some time after the last -reference to your data structure vanishes, so your :meth:`__del__` method may be +reference to your data structure vanishes, so your :meth:`!__del__` method may be called at an inconvenient and random time. This is inconvenient if you're trying -to reproduce a problem. Worse, the order in which object's :meth:`__del__` +to reproduce a problem. Worse, the order in which object's :meth:`!__del__` methods are executed is arbitrary. You can run :func:`gc.collect` to force a collection, but there *are* pathological cases where objects will never be collected. @@ -1707,7 +1716,7 @@ collected. Despite the cycle collector, it's still a good idea to define an explicit ``close()`` method on objects to be called whenever you're done with them. The ``close()`` method can then remove attributes that refer to subobjects. Don't -call :meth:`__del__` directly -- :meth:`__del__` should call ``close()`` and +call :meth:`!__del__` directly -- :meth:`!__del__` should call ``close()`` and ``close()`` should make sure that it can be called more than once for the same object. @@ -1724,7 +1733,7 @@ and sibling references (if they need them!). Normally, calling :func:`sys.exc_clear` will take care of this by clearing the last recorded exception. 
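A minimal sketch of the ``close()`` pattern described above (the class and attribute names are made up for illustration)::

    class Resource:
        def __init__(self, handle):
            self.handle = handle

        def close(self):
            # Safe to call more than once: release the handle only the first time.
            if self.handle is not None:
                self.handle.close()
                self.handle = None

        def __del__(self):
            # __del__ only delegates to close(); all cleanup lives in close().
            self.close()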
-Finally, if your :meth:`__del__` method raises an exception, a warning message +Finally, if your :meth:`!__del__` method raises an exception, a warning message is printed to :data:`sys.stderr`. @@ -1852,8 +1861,8 @@ For example, here is the implementation of How can a subclass control what data is stored in an immutable instance? ------------------------------------------------------------------------ -When subclassing an immutable type, override the :meth:`__new__` method -instead of the :meth:`__init__` method. The latter only runs *after* an +When subclassing an immutable type, override the :meth:`~object.__new__` method +instead of the :meth:`~object.__init__` method. The latter only runs *after* an instance is created, which is too late to alter data in an immutable instance. diff --git a/Doc/faq/windows.rst b/Doc/faq/windows.rst index e9a573da..a926fb4c 100644 --- a/Doc/faq/windows.rst +++ b/Doc/faq/windows.rst @@ -167,7 +167,7 @@ How can I embed Python into a Windows application? Embedding the Python interpreter in a Windows app can be summarized as follows: -1. Do _not_ build Python into your .exe file directly. On Windows, Python must +1. Do **not** build Python into your .exe file directly. On Windows, Python must be a DLL to handle importing modules that are themselves DLL's. (This is the first key undocumented fact.) Instead, link to :file:`python{NN}.dll`; it is typically installed in ``C:\Windows\System``. *NN* is the Python version, a @@ -191,7 +191,7 @@ Embedding the Python interpreter in a Windows app can be summarized as follows: 2. If you use SWIG, it is easy to create a Python "extension module" that will make the app's data and methods available to Python. SWIG will handle just about all the grungy details for you. The result is C code that you link - *into* your .exe file (!) You do _not_ have to create a DLL file, and this + *into* your .exe file (!) You do **not** have to create a DLL file, and this also simplifies linking. 3. SWIG will create an init function (a C function) whose name depends on the @@ -218,10 +218,10 @@ Embedding the Python interpreter in a Windows app can be summarized as follows: 5. There are two problems with Python's C API which will become apparent if you use a compiler other than MSVC, the compiler used to build pythonNN.dll. - Problem 1: The so-called "Very High Level" functions that take FILE * + Problem 1: The so-called "Very High Level" functions that take ``FILE *`` arguments will not work in a multi-compiler environment because each - compiler's notion of a struct FILE will be different. From an implementation - standpoint these are very _low_ level functions. + compiler's notion of a ``struct FILE`` will be different. From an implementation + standpoint these are very low level functions. Problem 2: SWIG generates the following code when generating wrappers to void functions: diff --git a/Doc/glossary.rst b/Doc/glossary.rst index aa9768f0..6e519df6 100644 --- a/Doc/glossary.rst +++ b/Doc/glossary.rst @@ -210,6 +210,16 @@ Glossary A list of bytecode instructions can be found in the documentation for :ref:`the dis module `. + callable + A callable is an object that can be called, possibly with a set + of arguments (see :term:`argument`), with the following syntax:: + + callable(argument1, argument2, ...) + + A :term:`function`, and by extension a :term:`method`, is a callable. + An instance of a class that implements the :meth:`~object.__call__` + method is also a callable. 
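For example, a small illustration of the last point (the class here is invented for the example)::

    >>> class Adder:
    ...     def __init__(self, n):
    ...         self.n = n
    ...     def __call__(self, x):    # instances become callable via __call__
    ...         return self.n + x
    ...
    >>> add_three = Adder(3)
    >>> add_three(4)                  # calling the instance invokes __call__
    7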
+ callback A subroutine function which is passed as an argument to be executed at some point in the future. @@ -883,7 +893,7 @@ Glossary package A Python :term:`module` which can contain submodules or recursively, - subpackages. Technically, a package is a Python module with an + subpackages. Technically, a package is a Python module with a ``__path__`` attribute. See also :term:`regular package` and :term:`namespace package`. diff --git a/Doc/howto/argparse.rst b/Doc/howto/argparse.rst index a97d10cf..adc2f373 100644 --- a/Doc/howto/argparse.rst +++ b/Doc/howto/argparse.rst @@ -732,9 +732,9 @@ your program, just in case they don't know:: if args.quiet: print(answer) elif args.verbose: - print("{} to the power {} equals {}".format(args.x, args.y, answer)) + print(f"{args.x} to the power {args.y} equals {answer}") else: - print("{}^{} == {}".format(args.x, args.y, answer)) + print(f"{args.x}^{args.y} == {answer}") Note that slight difference in the usage text. Note the ``[-v | -q]``, which tells us that we can either use ``-v`` or ``-q``, diff --git a/Doc/howto/clinic.rst b/Doc/howto/clinic.rst index 5a5cbab6..62ce28bd 100644 --- a/Doc/howto/clinic.rst +++ b/Doc/howto/clinic.rst @@ -1,5 +1,7 @@ .. highlight:: c +.. _howto-clinic: + ********************** Argument Clinic How-To ********************** diff --git a/Doc/howto/logging-cookbook.rst b/Doc/howto/logging-cookbook.rst index 5b079744..fecc729c 100644 --- a/Doc/howto/logging-cookbook.rst +++ b/Doc/howto/logging-cookbook.rst @@ -276,6 +276,211 @@ choose a different directory name for the log - just ensure that the directory e and that you have the permissions to create and update files in it. +.. _custom-level-handling: + +Custom handling of levels +------------------------- + +Sometimes, you might want to do something slightly different from the standard +handling of levels in handlers, where all levels above a threshold get +processed by a handler. To do this, you need to use filters. Let's look at a +scenario where you want to arrange things as follows: + +* Send messages of severity ``INFO`` and ``WARNING`` to ``sys.stdout`` +* Send messages of severity ``ERROR`` and above to ``sys.stderr`` +* Send messages of severity ``DEBUG`` and above to file ``app.log`` + +Suppose you configure logging with the following JSON: + +.. code-block:: json + + { + "version": 1, + "disable_existing_loggers": false, + "formatters": { + "simple": { + "format": "%(levelname)-8s - %(message)s" + } + }, + "handlers": { + "stdout": { + "class": "logging.StreamHandler", + "level": "INFO", + "formatter": "simple", + "stream": "ext://sys.stdout", + }, + "stderr": { + "class": "logging.StreamHandler", + "level": "ERROR", + "formatter": "simple", + "stream": "ext://sys.stderr" + }, + "file": { + "class": "logging.FileHandler", + "formatter": "simple", + "filename": "app.log", + "mode": "w" + } + }, + "root": { + "level": "DEBUG", + "handlers": [ + "stderr", + "stdout", + "file" + ] + } + } + +This configuration does *almost* what we want, except that ``sys.stdout`` would +show messages of severity ``ERROR`` and above as well as ``INFO`` and +``WARNING`` messages. To prevent this, we can set up a filter which excludes +those messages and add it to the relevant handler. This can be configured by +adding a ``filters`` section parallel to ``formatters`` and ``handlers``: + +.. 
code-block:: json + + "filters": { + "warnings_and_below": { + "()" : "__main__.filter_maker", + "level": "WARNING" + } + } + +and changing the section on the ``stdout`` handler to add it: + +.. code-block:: json + + "stdout": { + "class": "logging.StreamHandler", + "level": "INFO", + "formatter": "simple", + "stream": "ext://sys.stdout", + "filters": ["warnings_and_below"] + } + +A filter is just a function, so we can define the ``filter_maker`` (a factory +function) as follows: + +.. code-block:: python + + def filter_maker(level): + level = getattr(logging, level) + + def filter(record): + return record.levelno <= level + + return filter + +This converts the string argument passed in to a numeric level, and returns a +function which only returns ``True`` if the level of the passed in record is +at or below the specified level. Note that in this example I have defined the +``filter_maker`` in a test script ``main.py`` that I run from the command line, +so its module will be ``__main__`` - hence the ``__main__.filter_maker`` in the +filter configuration. You will need to change that if you define it in a +different module. + +With the filter added, we can run ``main.py``, which in full is: + +.. code-block:: python + + import json + import logging + import logging.config + + CONFIG = ''' + { + "version": 1, + "disable_existing_loggers": false, + "formatters": { + "simple": { + "format": "%(levelname)-8s - %(message)s" + } + }, + "filters": { + "warnings_and_below": { + "()" : "__main__.filter_maker", + "level": "WARNING" + } + }, + "handlers": { + "stdout": { + "class": "logging.StreamHandler", + "level": "INFO", + "formatter": "simple", + "stream": "ext://sys.stdout", + "filters": ["warnings_and_below"] + }, + "stderr": { + "class": "logging.StreamHandler", + "level": "ERROR", + "formatter": "simple", + "stream": "ext://sys.stderr" + }, + "file": { + "class": "logging.FileHandler", + "formatter": "simple", + "filename": "app.log", + "mode": "w" + } + }, + "root": { + "level": "DEBUG", + "handlers": [ + "stderr", + "stdout", + "file" + ] + } + } + ''' + + def filter_maker(level): + level = getattr(logging, level) + + def filter(record): + return record.levelno <= level + + return filter + + logging.config.dictConfig(json.loads(CONFIG)) + logging.debug('A DEBUG message') + logging.info('An INFO message') + logging.warning('A WARNING message') + logging.error('An ERROR message') + logging.critical('A CRITICAL message') + +And after running it like this: + +.. code-block:: shell + + python main.py 2>stderr.log >stdout.log + +We can see the results are as expected: + +.. code-block:: shell + + $ more *.log + :::::::::::::: + app.log + :::::::::::::: + DEBUG - A DEBUG message + INFO - An INFO message + WARNING - A WARNING message + ERROR - An ERROR message + CRITICAL - A CRITICAL message + :::::::::::::: + stderr.log + :::::::::::::: + ERROR - An ERROR message + CRITICAL - A CRITICAL message + :::::::::::::: + stdout.log + :::::::::::::: + INFO - An INFO message + WARNING - A WARNING message + + Configuration server example ---------------------------- @@ -332,6 +537,8 @@ configuration:: print('complete') +.. _blocking-handlers: + Dealing with handlers that block -------------------------------- @@ -558,13 +765,71 @@ serialization. Running a logging socket listener in production ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -To run a logging listener in production, you may need to use a process-management tool -such as `Supervisor `_. 
`Here -`_ is a Gist which -provides the bare-bones files to run the above functionality using Supervisor: you -will need to change the `/path/to/` parts in the Gist to reflect the actual paths you -want to use. - +.. _socket-listener-gist: https://gist.github.com/vsajip/4b227eeec43817465ca835ca66f75e2b + +To run a logging listener in production, you may need to use a +process-management tool such as `Supervisor `_. +`Here is a Gist `__ +which provides the bare-bones files to run the above functionality using +Supervisor. It consists of the following files: + ++-------------------------+----------------------------------------------------+ +| File | Purpose | ++=========================+====================================================+ +| :file:`prepare.sh` | A Bash script to prepare the environment for | +| | testing | ++-------------------------+----------------------------------------------------+ +| :file:`supervisor.conf` | The Supervisor configuration file, which has | +| | entries for the listener and a multi-process web | +| | application | ++-------------------------+----------------------------------------------------+ +| :file:`ensure_app.sh` | A Bash script to ensure that Supervisor is running | +| | with the above configuration | ++-------------------------+----------------------------------------------------+ +| :file:`log_listener.py` | The socket listener program which receives log | +| | events and records them to a file | ++-------------------------+----------------------------------------------------+ +| :file:`main.py` | A simple web application which performs logging | +| | via a socket connected to the listener | ++-------------------------+----------------------------------------------------+ +| :file:`webapp.json` | A JSON configuration file for the web application | ++-------------------------+----------------------------------------------------+ +| :file:`client.py` | A Python script to exercise the web application | ++-------------------------+----------------------------------------------------+ + +The web application uses `Gunicorn `_, which is a +popular web application server that starts multiple worker processes to handle +requests. This example setup shows how the workers can write to the same log file +without conflicting with one another --- they all go through the socket listener. + +To test these files, do the following in a POSIX environment: + +#. Download `the Gist `__ + as a ZIP archive using the :guilabel:`Download ZIP` button. + +#. Unzip the above files from the archive into a scratch directory. + +#. In the scratch directory, run ``bash prepare.sh`` to get things ready. + This creates a :file:`run` subdirectory to contain Supervisor-related and + log files, and a :file:`venv` subdirectory to contain a virtual environment + into which ``bottle``, ``gunicorn`` and ``supervisor`` are installed. + +#. Run ``bash ensure_app.sh`` to ensure that Supervisor is running with + the above configuration. + +#. Run ``venv/bin/python client.py`` to exercise the web application, + which will lead to records being written to the log. + +#. Inspect the log files in the :file:`run` subdirectory. You should see the + most recent log lines in files matching the pattern :file:`app.log*`. They won't be in + any particular order, since they have been handled concurrently by different + worker processes in a non-deterministic way. + +#. You can shut down the listener and the web application by running + ``venv/bin/supervisorctl -c supervisor.conf shutdown``. 
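The key idea is that each worker only needs a :class:`~logging.handlers.SocketHandler` aimed at the listener. A stand-alone sketch of that worker-side setup (not taken from the Gist; the host, port and message are illustrative) looks like this:

.. code-block:: python

    import logging
    import logging.handlers

    # Each worker sends its records over TCP to the listener process,
    # which is the only process that writes to the log file.
    handler = logging.handlers.SocketHandler(
        'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT)
    root = logging.getLogger()
    root.setLevel(logging.DEBUG)
    root.addHandler(handler)

    logging.info('Handled a request')  # pickled and sent to the socket listener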
+ +You may need to tweak the configuration files in the unlikely event that the +configured ports clash with something else in your test environment. .. _context-info: @@ -2774,7 +3039,7 @@ Formatting times using UTC (GMT) via configuration -------------------------------------------------- Sometimes you want to format times using UTC, which can be done using a class -such as `UTCFormatter`, shown below:: +such as ``UTCFormatter``, shown below:: import logging import time @@ -3428,6 +3693,147 @@ the above handler, you'd pass structured data using something like this:: i = 1 logger.debug('Message %d', i, extra=extra) +How to treat a logger like an output stream +------------------------------------------- + +Sometimes, you need to interface to a third-party API which expects a file-like +object to write to, but you want to direct the API's output to a logger. You +can do this using a class which wraps a logger with a file-like API. +Here's a short script illustrating such a class: + +.. code-block:: python + + import logging + + class LoggerWriter: + def __init__(self, logger, level): + self.logger = logger + self.level = level + + def write(self, message): + if message != '\n': # avoid printing bare newlines, if you like + self.logger.log(self.level, message) + + def flush(self): + # doesn't actually do anything, but might be expected of a file-like + # object - so optional depending on your situation + pass + + def close(self): + # doesn't actually do anything, but might be expected of a file-like + # object - so optional depending on your situation. You might want + # to set a flag so that later calls to write raise an exception + pass + + def main(): + logging.basicConfig(level=logging.DEBUG) + logger = logging.getLogger('demo') + info_fp = LoggerWriter(logger, logging.INFO) + debug_fp = LoggerWriter(logger, logging.DEBUG) + print('An INFO message', file=info_fp) + print('A DEBUG message', file=debug_fp) + + if __name__ == "__main__": + main() + +When this script is run, it prints + +.. code-block:: text + + INFO:demo:An INFO message + DEBUG:demo:A DEBUG message + +You could also use ``LoggerWriter`` to redirect ``sys.stdout`` and +``sys.stderr`` by doing something like this: + +.. code-block:: python + + import sys + + sys.stdout = LoggerWriter(logger, logging.INFO) + sys.stderr = LoggerWriter(logger, logging.WARNING) + +You should do this *after* configuring logging for your needs. In the above +example, the :func:`~logging.basicConfig` call does this (using the +``sys.stderr`` value *before* it is overwritten by a ``LoggerWriter`` +instance). Then, you'd get this kind of result: + +.. code-block:: pycon + + >>> print('Foo') + INFO:demo:Foo + >>> print('Bar', file=sys.stderr) + WARNING:demo:Bar + >>> + +Of course, these above examples show output according to the format used by +:func:`~logging.basicConfig`, but you can use a different formatter when you +configure logging. + +Note that with the above scheme, you are somewhat at the mercy of buffering and +the sequence of write calls which you are intercepting. For example, with the +definition of ``LoggerWriter`` above, if you have the snippet + +.. code-block:: python + + sys.stderr = LoggerWriter(logger, logging.WARNING) + 1 / 0 + +then running the script results in + +.. 
code-block:: text + + WARNING:demo:Traceback (most recent call last): + + WARNING:demo: File "/home/runner/cookbook-loggerwriter/test.py", line 53, in + + WARNING:demo: + WARNING:demo:main() + WARNING:demo: File "/home/runner/cookbook-loggerwriter/test.py", line 49, in main + + WARNING:demo: + WARNING:demo:1 / 0 + WARNING:demo:ZeroDivisionError + WARNING:demo:: + WARNING:demo:division by zero + +As you can see, this output isn't ideal. That's because the underlying code +which writes to ``sys.stderr`` makes multiple writes, each of which results in a +separate logged line (for example, the last three lines above). To get around +this problem, you need to buffer things and only output log lines when newlines +are seen. Let's use a slightly better implementation of ``LoggerWriter``: + +.. code-block:: python + + class BufferingLoggerWriter(LoggerWriter): + def __init__(self, logger, level): + super().__init__(logger, level) + self.buffer = '' + + def write(self, message): + if '\n' not in message: + self.buffer += message + else: + parts = message.split('\n') + if self.buffer: + s = self.buffer + parts.pop(0) + self.logger.log(self.level, s) + self.buffer = parts.pop() + for part in parts: + self.logger.log(self.level, part) + +This just buffers up stuff until a newline is seen, and then logs complete +lines. With this approach, you get better output: + +.. code-block:: text + + WARNING:demo:Traceback (most recent call last): + WARNING:demo: File "/home/runner/cookbook-loggerwriter/main.py", line 55, in + WARNING:demo: main() + WARNING:demo: File "/home/runner/cookbook-loggerwriter/main.py", line 52, in main + WARNING:demo: 1/0 + WARNING:demo:ZeroDivisionError: division by zero + .. patterns-to-avoid: diff --git a/Doc/howto/logging.rst b/Doc/howto/logging.rst index 0115a941..b2276595 100644 --- a/Doc/howto/logging.rst +++ b/Doc/howto/logging.rst @@ -552,14 +552,14 @@ raw message. If there is no date format string, the default date format is: %Y-%m-%d %H:%M:%S -with the milliseconds tacked on at the end. The ``style`` is one of `%`, '{' -or '$'. If one of these is not specified, then '%' will be used. +with the milliseconds tacked on at the end. The ``style`` is one of ``'%'``, +``'{'``, or ``'$'``. If one of these is not specified, then ``'%'`` will be used. -If the ``style`` is '%', the message format string uses +If the ``style`` is ``'%'``, the message format string uses ``%()s`` styled string substitution; the possible keys are -documented in :ref:`logrecord-attributes`. If the style is '{', the message +documented in :ref:`logrecord-attributes`. If the style is ``'{'``, the message format string is assumed to be compatible with :meth:`str.format` (using -keyword arguments), while if the style is '$' then the message format string +keyword arguments), while if the style is ``'$'`` then the message format string should conform to what is expected by :meth:`string.Template.substitute`. .. versionchanged:: 3.2 diff --git a/Doc/install/index.rst b/Doc/install/index.rst index 84df5e7c..d2d8e567 100644 --- a/Doc/install/index.rst +++ b/Doc/install/index.rst @@ -761,7 +761,7 @@ And on Windows, the configuration files are: +--------------+-------------------------------------------------+-------+ On all platforms, the "personal" file can be temporarily disabled by -passing the `--no-user-cfg` option. +passing the ``--no-user-cfg`` option.
Notes: diff --git a/Doc/library/argparse.rst b/Doc/library/argparse.rst index d96f17b8..bdc2cceb 100644 --- a/Doc/library/argparse.rst +++ b/Doc/library/argparse.rst @@ -154,9 +154,10 @@ ArgumentParser objects * usage_ - The string describing the program usage (default: generated from arguments added to parser) - * description_ - Text to display before the argument help (default: none) + * description_ - Text to display before the argument help + (by default, no text) - * epilog_ - Text to display after the argument help (default: none) + * epilog_ - Text to display after the argument help (by default, no text) * parents_ - A list of :class:`ArgumentParser` objects whose arguments should also be included @@ -1831,8 +1832,8 @@ FileType objects Namespace(out=<_io.TextIOWrapper name='file.txt' mode='w' encoding='UTF-8'>, raw=<_io.FileIO name='raw.dat' mode='wb'>) FileType objects understand the pseudo-argument ``'-'`` and automatically - convert this into ``sys.stdin`` for readable :class:`FileType` objects and - ``sys.stdout`` for writable :class:`FileType` objects:: + convert this into :data:`sys.stdin` for readable :class:`FileType` objects and + :data:`sys.stdout` for writable :class:`FileType` objects:: >>> parser = argparse.ArgumentParser() >>> parser.add_argument('infile', type=argparse.FileType('r')) diff --git a/Doc/library/asyncio-dev.rst b/Doc/library/asyncio-dev.rst index 7ed597a5..0816492a 100644 --- a/Doc/library/asyncio-dev.rst +++ b/Doc/library/asyncio-dev.rst @@ -149,7 +149,8 @@ adjusted:: Network logging can block the event loop. It is recommended to use -a separate thread for handling logs or use non-blocking IO. +a separate thread for handling logs or use non-blocking IO. For example, +see :ref:`blocking-handlers`. .. _asyncio-coroutine-not-scheduled: diff --git a/Doc/library/asyncio-eventloop.rst b/Doc/library/asyncio-eventloop.rst index 343672d3..a23be64e 100644 --- a/Doc/library/asyncio-eventloop.rst +++ b/Doc/library/asyncio-eventloop.rst @@ -33,7 +33,8 @@ an event loop: Return the running event loop in the current OS thread. - If there is no running event loop a :exc:`RuntimeError` is raised. + Raise a :exc:`RuntimeError` if there is no running event loop. + This function can only be called from a coroutine or a callback. .. versionadded:: 3.7 @@ -42,27 +43,35 @@ an event loop: Get the current event loop. - If there is no current event loop set in the current OS thread, - the OS thread is main, and :func:`set_event_loop` has not yet - been called, asyncio will create a new event loop and set it as the - current one. + When called from a coroutine or a callback (e.g. scheduled with + call_soon or similar API), this function will always return the + running event loop. + + If there is no running event loop set, the function will return + the result of ``get_event_loop_policy().get_event_loop()`` call. Because this function has rather complex behavior (especially when custom event loop policies are in use), using the :func:`get_running_loop` function is preferred to :func:`get_event_loop` in coroutines and callbacks. - Consider also using the :func:`asyncio.run` function instead of using - lower level functions to manually create and close an event loop. + As noted above, consider using the higher-level :func:`asyncio.run` function, + instead of using these lower level functions to manually create and close an + event loop. .. deprecated:: 3.10 - Deprecation warning is emitted if there is no running event loop. 
- In future Python releases, this function will be an alias of - :func:`get_running_loop`. + Deprecation warning is emitted if there is no current event loop. + In Python 3.12 it will be an error. + + .. note:: + In Python versions 3.10.0--3.10.8 this function + (and other functions which used it implicitly) emitted a + :exc:`DeprecationWarning` if there was no running event loop, even if + the current loop was set. .. function:: set_event_loop(loop) - Set *loop* as a current event loop for the current OS thread. + Set *loop* as the current event loop for the current OS thread. .. function:: new_event_loop() @@ -808,9 +817,14 @@ TLS Upgrade Upgrade an existing transport-based connection to TLS. - Return a new transport instance, that the *protocol* must start using - immediately after the *await*. The *transport* instance passed to - the *start_tls* method should never be used again. + Create a TLS coder/decoder instance and insert it between the *transport* + and the *protocol*. The coder/decoder implements both *transport*-facing + protocol and *protocol*-facing transport. + + Return the created two-interface instance. After *await*, the *protocol* + must stop using the original *transport* and communicate with the returned + object only because the coder caches *protocol*-side data and sporadically + exchanges extra TLS session packets with *transport*. Parameters: @@ -1140,7 +1154,13 @@ Executing code in thread or process pools pool, cpu_bound) print('custom process pool', result) - asyncio.run(main()) + if __name__ == '__main__': + asyncio.run(main()) + + Note that the entry point guard (``if __name__ == '__main__'``) + is required for option 3 due to the peculiarities of :mod:`multiprocessing`, + which is used by :class:`~concurrent.futures.ProcessPoolExecutor`. + See :ref:`Safe importing of main module `. This method returns a :class:`asyncio.Future` object. diff --git a/Doc/library/asyncio-llapi-index.rst b/Doc/library/asyncio-llapi-index.rst index 63ab1483..d903ea0d 100644 --- a/Doc/library/asyncio-llapi-index.rst +++ b/Doc/library/asyncio-llapi-index.rst @@ -19,7 +19,7 @@ Obtaining the Event Loop - The **preferred** function to get the running event loop. * - :func:`asyncio.get_event_loop` - - Get an event loop instance (current or via the policy). + - Get an event loop instance (running or current via the current policy). * - :func:`asyncio.set_event_loop` - Set the event loop as current via the current policy. diff --git a/Doc/library/asyncio-policy.rst b/Doc/library/asyncio-policy.rst index bfc3e309..a4cd5aa9 100644 --- a/Doc/library/asyncio-policy.rst +++ b/Doc/library/asyncio-policy.rst @@ -112,6 +112,11 @@ asyncio ships with the following built-in policies: On Windows, :class:`ProactorEventLoop` is now used by default. + .. deprecated:: 3.10.9 + :meth:`get_event_loop` now emits a :exc:`DeprecationWarning` if there + is no current event loop set and a new event loop has been implicitly + created. In Python 3.12 it will be an error. + .. class:: WindowsSelectorEventLoopPolicy diff --git a/Doc/library/asyncio-protocol.rst b/Doc/library/asyncio-protocol.rst index 8b67f4b8..7bc906ea 100644 --- a/Doc/library/asyncio-protocol.rst +++ b/Doc/library/asyncio-protocol.rst @@ -156,7 +156,8 @@ Base Transport will be received. After all buffered data is flushed, the protocol's :meth:`protocol.connection_lost() ` method will be called with - :const:`None` as its argument. + :const:`None` as its argument. The transport should not be + used once it is closed. .. 
method:: BaseTransport.is_closing() @@ -553,7 +554,7 @@ accept factories that return streaming protocols. a connection is open. However, :meth:`protocol.eof_received() ` - is called at most once. Once `eof_received()` is called, + is called at most once. Once ``eof_received()`` is called, ``data_received()`` is not called anymore. .. method:: Protocol.eof_received() diff --git a/Doc/library/asyncio-stream.rst b/Doc/library/asyncio-stream.rst index 19210a55..bed10a40 100644 --- a/Doc/library/asyncio-stream.rst +++ b/Doc/library/asyncio-stream.rst @@ -183,7 +183,8 @@ StreamReader .. class:: StreamReader Represents a reader object that provides APIs to read data - from the IO stream. + from the IO stream. As an :term:`asynchronous iterable`, the + object supports the :keyword:`async for` statement. It is not recommended to instantiate *StreamReader* objects directly; use :func:`open_connection` and :func:`start_server` diff --git a/Doc/library/asyncio-subprocess.rst b/Doc/library/asyncio-subprocess.rst index 28d0b21e..4274638c 100644 --- a/Doc/library/asyncio-subprocess.rst +++ b/Doc/library/asyncio-subprocess.rst @@ -175,7 +175,7 @@ their completion. * the :meth:`~asyncio.subprocess.Process.communicate` and :meth:`~asyncio.subprocess.Process.wait` methods don't have a - *timeout* parameter: use the :func:`wait_for` function; + *timeout* parameter: use the :func:`~asyncio.wait_for` function; * the :meth:`Process.wait() ` method is asynchronous, whereas :meth:`subprocess.Popen.wait` method diff --git a/Doc/library/asyncio-task.rst b/Doc/library/asyncio-task.rst index 8f9cef72..5608022c 100644 --- a/Doc/library/asyncio-task.rst +++ b/Doc/library/asyncio-task.rst @@ -18,6 +18,10 @@ and Tasks. Coroutines ========== +**Source code:** :source:`Lib/asyncio/coroutines.py` + +---------------------------------------------------- + :term:`Coroutines ` declared with the async/await syntax is the preferred way of writing asyncio applications. For example, the following snippet of code prints "hello", waits 1 second, @@ -247,6 +251,10 @@ Running an asyncio Program Creating Tasks ============== +**Source code:** :source:`Lib/asyncio/tasks.py` + +----------------------------------------------- + .. function:: create_task(coro, *, name=None) Wrap the *coro* :ref:`coroutine ` into a :class:`Task` @@ -707,17 +715,17 @@ Running in Threads # blocking_io complete at 19:50:54 # finished main at 19:50:54 - Directly calling `blocking_io()` in any coroutine would block the event loop + Directly calling ``blocking_io()`` in any coroutine would block the event loop for its duration, resulting in an additional 1 second of run time. Instead, - by using `asyncio.to_thread()`, we can run it in a separate thread without + by using ``asyncio.to_thread()``, we can run it in a separate thread without blocking the event loop. .. note:: - Due to the :term:`GIL`, `asyncio.to_thread()` can typically only be used + Due to the :term:`GIL`, ``asyncio.to_thread()`` can typically only be used to make IO-bound functions non-blocking. However, for extension modules that release the GIL or alternative Python implementations that don't - have one, `asyncio.to_thread()` can also be used for CPU-bound functions. + have one, ``asyncio.to_thread()`` can also be used for CPU-bound functions. .. 
versionadded:: 3.9 diff --git a/Doc/library/base64.rst b/Doc/library/base64.rst index f1063a78..694c1666 100644 --- a/Doc/library/base64.rst +++ b/Doc/library/base64.rst @@ -53,11 +53,13 @@ The modern interface provides: Encode the :term:`bytes-like object` *s* using Base64 and return the encoded :class:`bytes`. - Optional *altchars* must be a :term:`bytes-like object` of at least - length 2 (additional characters are ignored) which specifies an alternative - alphabet for the ``+`` and ``/`` characters. This allows an application to e.g. - generate URL or filesystem safe Base64 strings. The default is ``None``, for - which the standard Base64 alphabet is used. + Optional *altchars* must be a :term:`bytes-like object` of length 2 which + specifies an alternative alphabet for the ``+`` and ``/`` characters. + This allows an application to e.g. generate URL or filesystem safe Base64 + strings. The default is ``None``, for which the standard Base64 alphabet is used. + + May assert or raise a :exc:`ValueError` if the length of *altchars* is not 2. Raises a + :exc:`TypeError` if *altchars* is not a :term:`bytes-like object`. .. function:: b64decode(s, altchars=None, validate=False) Decode the Base64 encoded :term:`bytes-like object` or ASCII string *s* and return the decoded :class:`bytes`. - Optional *altchars* must be a :term:`bytes-like object` or ASCII string of - at least length 2 (additional characters are ignored) which specifies the - alternative alphabet used instead of the ``+`` and ``/`` characters. + Optional *altchars* must be a :term:`bytes-like object` or ASCII string + of length 2 which specifies the alternative alphabet used instead of the + ``+`` and ``/`` characters. A :exc:`binascii.Error` exception is raised if *s* is incorrectly padded. @@ -78,6 +80,7 @@ The modern interface provides: these non-alphabet characters in the input result in a :exc:`binascii.Error`. + May assert or raise a :exc:`ValueError` if the length of *altchars* is not 2. .. function:: standard_b64encode(s) diff --git a/Doc/library/bdb.rst b/Doc/library/bdb.rst index 7b74bbd6..d201dc96 100644 --- a/Doc/library/bdb.rst +++ b/Doc/library/bdb.rst @@ -143,7 +143,7 @@ The :mod:`bdb` module also defines two classes: For real file names, the canonical form is an operating-system-dependent, :func:`case-normalized ` :func:`absolute path - `. A *filename* with angle brackets, such as `""` + `. A *filename* with angle brackets, such as ``""`` generated in interactive mode, is returned unchanged. .. method:: reset() diff --git a/Doc/library/bz2.rst b/Doc/library/bz2.rst index 999892e9..ae5a1598 100644 --- a/Doc/library/bz2.rst +++ b/Doc/library/bz2.rst @@ -206,7 +206,7 @@ Incremental (de)compression will be set to ``True``. Attempting to decompress data after the end of stream is reached - raises an `EOFError`. Any data found after the end of the + raises an :exc:`EOFError`. Any data found after the end of the stream is ignored and saved in the :attr:`~.unused_data` attribute. .. versionchanged:: 3.5 @@ -303,7 +303,7 @@ Using :class:`BZ2Compressor` for incremental compression: >>> out = out + comp.flush() The example above uses a very "nonrandom" stream of data -(a stream of `b"z"` chunks). Random data tends to compress poorly, +(a stream of ``b"z"`` chunks). Random data tends to compress poorly, while ordered, repetitive data usually yields a high compression ratio.
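A matching sketch for incremental *decompression* with :class:`BZ2Decompressor` (the sample data and chunk size below are arbitrary)::

    >>> import bz2
    >>> data = bz2.compress(b"sample data " * 1000)
    >>> decomp = bz2.BZ2Decompressor()
    >>> out = b""
    >>> for i in range(0, len(data), 64):          # feed the stream in small chunks
    ...     out += decomp.decompress(data[i:i+64])
    ...
    >>> decomp.eof                                 # True once the end of the stream is reached
    True
    >>> out == b"sample data " * 1000
    True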
Writing and reading a bzip2-compressed file in binary mode: diff --git a/Doc/library/codecs.rst b/Doc/library/codecs.rst index 1c10462c..7d56327a 100644 --- a/Doc/library/codecs.rst +++ b/Doc/library/codecs.rst @@ -189,7 +189,8 @@ wider range of codecs when working with binary files: .. note:: - Underlying encoded files are always opened in binary mode. + If *encoding* is not ``None``, then the + underlying encoded files are always opened in binary mode. No automatic conversion of ``'\n'`` is done on reading and writing. The *mode* argument may be any binary mode acceptable to the built-in :func:`open` function; the ``'b'`` is automatically added. diff --git a/Doc/library/concurrent.futures.rst b/Doc/library/concurrent.futures.rst index 8c43590c..a25ad09c 100644 --- a/Doc/library/concurrent.futures.rst +++ b/Doc/library/concurrent.futures.rst @@ -151,7 +151,7 @@ And:: All threads enqueued to ``ThreadPoolExecutor`` will be joined before the interpreter can exit. Note that the exit handler which does this is - executed *before* any exit handlers added using `atexit`. This means + executed *before* any exit handlers added using ``atexit``. This means exceptions in the main thread must be caught and handled in order to signal threads to exit gracefully. For this reason, it is recommended that ``ThreadPoolExecutor`` not be used for long-running tasks. @@ -398,13 +398,13 @@ The :class:`Future` class encapsulates the asynchronous execution of a callable. tests. If the method returns ``False`` then the :class:`Future` was cancelled, - i.e. :meth:`Future.cancel` was called and returned `True`. Any threads + i.e. :meth:`Future.cancel` was called and returned ``True``. Any threads waiting on the :class:`Future` completing (i.e. through :func:`as_completed` or :func:`wait`) will be woken up. If the method returns ``True`` then the :class:`Future` was not cancelled and has been put in the running state, i.e. calls to - :meth:`Future.running` will return `True`. + :meth:`Future.running` will return ``True``. This method can only be called once and cannot be called after :meth:`Future.set_result` or :meth:`Future.set_exception` have been diff --git a/Doc/library/contextlib.rst b/Doc/library/contextlib.rst index 7c0b8314..c6567071 100644 --- a/Doc/library/contextlib.rst +++ b/Doc/library/contextlib.rst @@ -66,6 +66,8 @@ Functions and classes provided: # Code to release resource, e.g.: release_resource(resource) + The function can then be used like this:: + >>> with managed_resource(timeout=3600) as resource: ... # Resource is released at the end of this block, ... # even if code in the block raises an exception @@ -140,9 +142,9 @@ Functions and classes provided: finally: print(f'it took {time.monotonic() - now}s to run') - @timeit() - async def main(): - # ... async code ... + @timeit() + async def main(): + # ... async code ... When used as a decorator, a new generator instance is implicitly created on each function call. 
This allows the otherwise "one-shot" context managers @@ -249,15 +251,15 @@ Functions and classes provided: :ref:`asynchronous context managers `:: async def send_http(session=None): - if not session: - # If no http session, create it with aiohttp - cm = aiohttp.ClientSession() - else: - # Caller is responsible for closing the session - cm = nullcontext(session) + if not session: + # If no http session, create it with aiohttp + cm = aiohttp.ClientSession() + else: + # Caller is responsible for closing the session + cm = nullcontext(session) - async with cm as session: - # Send http requests with session + async with cm as session: + # Send http requests with session .. versionadded:: 3.7 @@ -379,6 +381,8 @@ Functions and classes provided: print('Finishing') return False + The class can then be used like this:: + >>> @mycontext() ... def function(): ... print('The bit in the middle') @@ -449,6 +453,8 @@ Functions and classes provided: print('Finishing') return False + The class can then be used like this:: + >>> @mycontext() ... async def function(): ... print('The bit in the middle') diff --git a/Doc/library/contextvars.rst b/Doc/library/contextvars.rst index be1dd0c9..08a7c7d7 100644 --- a/Doc/library/contextvars.rst +++ b/Doc/library/contextvars.rst @@ -110,7 +110,7 @@ Context Variables A read-only property. Set to the value the variable had before the :meth:`ContextVar.set` method call that created the token. - It points to :attr:`Token.MISSING` is the variable was not set + It points to :attr:`Token.MISSING` if the variable was not set before the call. .. attribute:: Token.MISSING diff --git a/Doc/library/ctypes.rst b/Doc/library/ctypes.rst index 822a9b0d..077a2056 100644 --- a/Doc/library/ctypes.rst +++ b/Doc/library/ctypes.rst @@ -6,6 +6,8 @@ .. moduleauthor:: Thomas Heller +**Source code:** :source:`Lib/ctypes` + -------------- :mod:`ctypes` is a foreign function library for Python. It provides C compatible @@ -359,7 +361,7 @@ from within *IDLE* or *PythonWin*:: >>> printf(b"%f bottles of beer\n", 42.5) Traceback (most recent call last): File "", line 1, in - ArgumentError: argument 2: exceptions.TypeError: Don't know how to convert parameter 2 + ArgumentError: argument 2: TypeError: Don't know how to convert parameter 2 >>> As has been mentioned before, all Python types except integers, strings, and @@ -371,6 +373,26 @@ that they can be converted to the required C data type:: 31 >>> +.. _ctypes-calling-variadic-functions: + +Calling variadic functions +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +On many platforms, calling variadic functions through ctypes is exactly the same +as calling functions with a fixed number of parameters. On some platforms, and in +particular ARM64 on Apple platforms, the calling convention for variadic functions +is different from that for regular functions. + +On those platforms it is required to specify the *argtypes* attribute for the +regular, non-variadic function arguments: + +.. code-block:: python3 + + libc.printf.argtypes = [ctypes.c_char_p] + +Because specifying the attribute inhibits portability, it is advised to always +specify ``argtypes`` for all variadic functions. + ..
_ctypes-calling-functions-with-own-custom-data-types: @@ -422,7 +444,7 @@ prototype for a C function), and tries to convert the arguments to valid types:: >>> printf(b"%d %d %d", 1, 2, 3) Traceback (most recent call last): File "", line 1, in - ArgumentError: argument 2: exceptions.TypeError: wrong type + ArgumentError: argument 2: TypeError: wrong type >>> printf(b"%s %d %f\n", b"X", 2, 3) X 2 3.000000 13 @@ -472,7 +494,7 @@ single character Python bytes object into a C char:: >>> strchr(b"abcdef", b"def") Traceback (most recent call last): File "", line 1, in - ArgumentError: argument 2: exceptions.TypeError: one character string expected + ArgumentError: argument 2: TypeError: one character string expected >>> print(strchr(b"abcdef", b"x")) None >>> strchr(b"abcdef", b"d") @@ -1935,7 +1957,7 @@ Utility functions .. function:: GetLastError() Windows only: Returns the last error code set by Windows in the calling thread. - This function calls the Windows `GetLastError()` function directly, + This function calls the Windows ``GetLastError()`` function directly, it does not return the ctypes-private copy of the error code. .. function:: get_errno() diff --git a/Doc/library/curses.ascii.rst b/Doc/library/curses.ascii.rst index a69dbb2a..e1d11719 100644 --- a/Doc/library/curses.ascii.rst +++ b/Doc/library/curses.ascii.rst @@ -7,6 +7,8 @@ .. moduleauthor:: Eric S. Raymond .. sectionauthor:: Eric S. Raymond +**Source code:** :source:`Lib/curses/ascii.py` + -------------- The :mod:`curses.ascii` module supplies name constants for ASCII characters and diff --git a/Doc/library/curses.rst b/Doc/library/curses.rst index efbece43..55ab0601 100644 --- a/Doc/library/curses.rst +++ b/Doc/library/curses.rst @@ -9,6 +9,8 @@ .. sectionauthor:: Moshe Zadka .. sectionauthor:: Eric Raymond +**Source code:** :source:`Lib/curses` + -------------- The :mod:`curses` module provides an interface to the curses library, the @@ -292,7 +294,7 @@ The module :mod:`curses` defines the following functions: Change the definition of a color, taking the number of the color to be changed followed by three RGB values (for the amounts of red, green, and blue components). The value of *color_number* must be between ``0`` and - `COLORS - 1`. Each of *r*, *g*, *b*, must be a value between ``0`` and + ``COLORS - 1``. Each of *r*, *g*, *b*, must be a value between ``0`` and ``1000``. When :func:`init_color` is used, all occurrences of that color on the screen immediately change to the new definition. This function is a no-op on most terminals; it is active only if :func:`can_change_color` returns ``True``. diff --git a/Doc/library/dataclasses.rst b/Doc/library/dataclasses.rst index 02de6428..add6043b 100644 --- a/Doc/library/dataclasses.rst +++ b/Doc/library/dataclasses.rst @@ -79,7 +79,8 @@ Module contents class C: ... - @dataclass(init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False, match_args=True, kw_only=False, slots=False) + @dataclass(init=True, repr=True, eq=True, order=False, unsafe_hash=False, frozen=False, + match_args=True, kw_only=False, slots=False) class C: ... 
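As a quick illustration of two of these parameters, the editor-added sketch below (the ``Point`` class is hypothetical, not taken from the upstream page) shows that ``order=True`` generates the rich comparison methods while ``frozen=True`` makes instances read-only::

    from dataclasses import FrozenInstanceError, dataclass

    @dataclass(order=True, frozen=True)
    class Point:
        x: float
        y: float

    p, q = Point(1.0, 2.0), Point(1.0, 3.0)
    assert p < q                  # __lt__ generated because order=True
    try:
        p.x = 5.0                 # rejected because frozen=True
    except FrozenInstanceError:
        print("Point instances are immutable")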
@@ -561,8 +562,8 @@ value is not provided when creating the class:: @dataclass class C: i: int - j: int = None - database: InitVar[DatabaseType] = None + j: int | None = None + database: InitVar[DatabaseType | None] = None def __post_init__(self, database): if self.j is None and database is not None: diff --git a/Doc/library/datetime.rst b/Doc/library/datetime.rst index 2f51dc35..c2d7715a 100644 --- a/Doc/library/datetime.rst +++ b/Doc/library/datetime.rst @@ -2246,7 +2246,7 @@ where historical changes have been made to civil time. two digits of ``offset.hours`` and ``offset.minutes`` respectively. .. versionchanged:: 3.6 - Name generated from ``offset=timedelta(0)`` is now plain `'UTC'`, not + Name generated from ``offset=timedelta(0)`` is now plain ``'UTC'``, not ``'UTC+00:00'``. diff --git a/Doc/library/decimal.rst b/Doc/library/decimal.rst index ab3d3b8d..38ebb44c 100644 --- a/Doc/library/decimal.rst +++ b/Doc/library/decimal.rst @@ -576,11 +576,11 @@ Decimal objects Alternative constructor that only accepts instances of :class:`float` or :class:`int`. - Note `Decimal.from_float(0.1)` is not the same as `Decimal('0.1')`. + Note ``Decimal.from_float(0.1)`` is not the same as ``Decimal('0.1')``. Since 0.1 is not exactly representable in binary floating point, the value is stored as the nearest representable value which is - `0x1.999999999999ap-4`. That equivalent value in decimal is - `0.1000000000000000055511151231257827021181583404541015625`. + ``0x1.999999999999ap-4``. That equivalent value in decimal is + ``0.1000000000000000055511151231257827021181583404541015625``. .. note:: From Python 3.2 onwards, a :class:`Decimal` instance can also be constructed directly from a :class:`float`. @@ -1193,7 +1193,7 @@ In addition to the three supplied contexts, new contexts can be created with the .. method:: exp(x) - Returns `e ** x`. + Returns ``e ** x``. .. method:: fma(x, y, z) diff --git a/Doc/library/doctest.rst b/Doc/library/doctest.rst index e683ad07..48e27306 100644 --- a/Doc/library/doctest.rst +++ b/Doc/library/doctest.rst @@ -697,10 +697,10 @@ special Python comments following an example's source code: .. productionlist:: doctest directive: "#" "doctest:" `directive_options` - directive_options: `directive_option` ("," `directive_option`)\* + directive_options: `directive_option` ("," `directive_option`)* directive_option: `on_or_off` `directive_option_name` - on_or_off: "+" \| "-" - directive_option_name: "DONT_ACCEPT_BLANKLINE" \| "NORMALIZE_WHITESPACE" \| ... + on_or_off: "+" | "-" + directive_option_name: "DONT_ACCEPT_BLANKLINE" | "NORMALIZE_WHITESPACE" | ... Whitespace is not allowed between the ``+`` or ``-`` and the directive option name. The directive option name can be any of the option flag names explained diff --git a/Doc/library/email.compat32-message.rst b/Doc/library/email.compat32-message.rst index 4eaa9d58..5bef155a 100644 --- a/Doc/library/email.compat32-message.rst +++ b/Doc/library/email.compat32-message.rst @@ -298,7 +298,7 @@ Here are the methods of the :class:`Message` class: In a model generated from bytes, any header values that (in contravention of the RFCs) contain non-ASCII bytes will, when retrieved through this interface, be represented as :class:`~email.header.Header` objects with - a charset of `unknown-8bit`. + a charset of ``unknown-8bit``. .. 
method:: __len__() diff --git a/Doc/library/email.errors.rst b/Doc/library/email.errors.rst index 7a776405..194a9869 100644 --- a/Doc/library/email.errors.rst +++ b/Doc/library/email.errors.rst @@ -114,4 +114,4 @@ All defect classes are subclassed from :class:`email.errors.MessageDefect`. a multiple of 4). The encoded block was kept as-is. * :class:`InvalidDateDefect` -- When decoding an invalid or unparsable date field. - The original value is kept as-is. \ No newline at end of file + The original value is kept as-is. diff --git a/Doc/library/email.headerregistry.rst b/Doc/library/email.headerregistry.rst index 3e1d97a0..528c9af4 100644 --- a/Doc/library/email.headerregistry.rst +++ b/Doc/library/email.headerregistry.rst @@ -153,7 +153,7 @@ headers. specified as ``-0000`` (indicating it is in UTC but contains no information about the source timezone), then :attr:`.datetime` will be a naive :class:`~datetime.datetime`. If a specific timezone offset is - found (including `+0000`), then :attr:`.datetime` will contain an aware + found (including ``+0000``), then :attr:`.datetime` will contain an aware ``datetime`` that uses :class:`datetime.timezone` to record the timezone offset. diff --git a/Doc/library/ensurepip.rst b/Doc/library/ensurepip.rst index fa1b42cf..9035ded3 100644 --- a/Doc/library/ensurepip.rst +++ b/Doc/library/ensurepip.rst @@ -7,6 +7,8 @@ .. versionadded:: 3.4 +**Source code:** :source:`Lib/ensurepip` + -------------- The :mod:`ensurepip` package provides support for bootstrapping the ``pip`` diff --git a/Doc/library/functions.rst b/Doc/library/functions.rst index 86b88c0c..97641d1b 100644 --- a/Doc/library/functions.rst +++ b/Doc/library/functions.rst @@ -1399,7 +1399,7 @@ are always available. They are listed here in alphabetical order. supported. -.. function:: print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False) +.. function:: print(*objects, sep=' ', end='\n', file=None, flush=False) Print *objects* to the text stream *file*, separated by *sep* and followed by *end*. *sep*, *end*, *file*, and *flush*, if present, must be given as keyword diff --git a/Doc/library/hashlib.rst b/Doc/library/hashlib.rst index 2e753745..f642d04d 100644 --- a/Doc/library/hashlib.rst +++ b/Doc/library/hashlib.rst @@ -391,7 +391,7 @@ Constructor functions also accept the following tree hashing parameters: BLAKE2s, 0 in sequential mode). * *last_node*: boolean indicating whether the processed node is the last - one (`False` for sequential mode). + one (``False`` for sequential mode). .. figure:: hashlib-blake2-tree.png :alt: Explanation of tree mode parameters. diff --git a/Doc/library/http.server.rst b/Doc/library/http.server.rst index 89a25b8c..8bd22eac 100644 --- a/Doc/library/http.server.rst +++ b/Doc/library/http.server.rst @@ -499,3 +499,12 @@ Security Considerations :class:`SimpleHTTPRequestHandler` will follow symbolic links when handling requests, this makes it possible for files outside of the specified directory to be served. + +Earlier versions of Python did not scrub control characters from the +log messages emitted to stderr from ``python -m http.server`` or the +default :class:`BaseHTTPRequestHandler` ``.log_message`` +implementation. This could allow remote clients connecting to your +server to send nefarious control codes to your terminal. + +.. versionadded:: 3.10.9 + Control characters are scrubbed in stderr logs. 
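Applications that must keep running on releases without this fix can scrub the log output themselves. The handler below is an editor-added sketch only; the class name and the escaping strategy are assumptions for illustration and do not mirror the stdlib implementation::

    import http.server

    class ScrubbingHandler(http.server.SimpleHTTPRequestHandler):
        def log_message(self, format, *args):
            # Escape ASCII control characters before the message reaches stderr.
            cleaned = tuple(
                "".join(ch if ch.isprintable() or ch in " \t" else repr(ch)[1:-1]
                        for ch in str(arg))
                for arg in args
            )
            super().log_message(format, *cleaned)

    # Example wiring (blocking): serve the current directory with the handler above.
    # http.server.ThreadingHTTPServer(("localhost", 8000), ScrubbingHandler).serve_forever()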
diff --git a/Doc/library/importlib.rst b/Doc/library/importlib.rst index 1addba3f..f1cf9eec 100644 --- a/Doc/library/importlib.rst +++ b/Doc/library/importlib.rst @@ -862,7 +862,7 @@ ABC hierarchy:: An abstract base class for resource readers capable of serving the ``files`` interface. Subclasses ResourceReader and provides concrete implementations of the ResourceReader's abstract - methods. Therefore, any loader supplying TraversableReader + methods. Therefore, any loader supplying TraversableResources also supplies ResourceReader. Loaders that wish to support resource reading are expected to diff --git a/Doc/library/locale.rst b/Doc/library/locale.rst index dd14a379..cc1f5b42 100644 --- a/Doc/library/locale.rst +++ b/Doc/library/locale.rst @@ -147,12 +147,12 @@ The :mod:`locale` module defines the following exception and functions: | ``CHAR_MAX`` | Nothing is specified in this locale. | +--------------+-----------------------------------------+ - The function sets temporarily the ``LC_CTYPE`` locale to the ``LC_NUMERIC`` + The function temporarily sets the ``LC_CTYPE`` locale to the ``LC_NUMERIC`` locale or the ``LC_MONETARY`` locale if locales are different and numeric or monetary strings are non-ASCII. This temporary change affects other threads. .. versionchanged:: 3.7 - The function now sets temporarily the ``LC_CTYPE`` locale to the + The function now temporarily sets the ``LC_CTYPE`` locale to the ``LC_NUMERIC`` locale in some cases. @@ -227,16 +227,18 @@ The :mod:`locale` module defines the following exception and functions: Get a regular expression that can be used with the regex function to recognize a positive response to a yes/no question. - .. note:: - - The expression is in the syntax suitable for the :c:func:`regex` function - from the C library, which might differ from the syntax used in :mod:`re`. - .. data:: NOEXPR Get a regular expression that can be used with the regex(3) function to recognize a negative response to a yes/no question. + .. note:: + + The regular expressions for :const:`YESEXPR` and + :const:`NOEXPR` use syntax suitable for the + :c:func:`regex` function from the C library, which might + differ from the syntax used in :mod:`re`. + .. data:: CRNCYSTR Get the currency symbol, preceded by "-" if the symbol should appear before @@ -375,7 +377,7 @@ The :mod:`locale` module defines the following exception and functions: Formats a number *val* according to the current :const:`LC_NUMERIC` setting. The format follows the conventions of the ``%`` operator. For floating point - values, the decimal point is modified if appropriate. If *grouping* is true, + values, the decimal point is modified if appropriate. If *grouping* is ``True``, also takes the grouping into account. If *monetary* is true, the conversion uses monetary thousands separator and @@ -405,12 +407,14 @@ The :mod:`locale` module defines the following exception and functions: Formats a number *val* according to the current :const:`LC_MONETARY` settings. The returned string includes the currency symbol if *symbol* is true, which is - the default. If *grouping* is true (which is not the default), grouping is done - with the value. If *international* is true (which is not the default), the + the default. If *grouping* is ``True`` (which is not the default), grouping is done + with the value. If *international* is ``True`` (which is not the default), the international currency symbol is used. 
- Note that this function will not work with the 'C' locale, so you have to set a - locale via :func:`setlocale` first. + .. note:: + + This function will not work with the 'C' locale, so you have to set a + locale via :func:`setlocale` first. .. function:: str(float) @@ -597,4 +601,3 @@ applications that link with additional C libraries which internally invoke :c:func:`gettext` or :c:func:`dcgettext`. For these applications, it may be necessary to bind the text domain, so that the libraries can properly locate their message catalogs. - diff --git a/Doc/library/lzma.rst b/Doc/library/lzma.rst index 21092645..868d4dcf 100644 --- a/Doc/library/lzma.rst +++ b/Doc/library/lzma.rst @@ -258,7 +258,7 @@ Compressing and decompressing data in memory will be set to ``True``. Attempting to decompress data after the end of stream is reached - raises an `EOFError`. Any data found after the end of the + raises an :exc:`EOFError`. Any data found after the end of the stream is ignored and saved in the :attr:`~.unused_data` attribute. .. versionchanged:: 3.5 diff --git a/Doc/library/multiprocessing.rst b/Doc/library/multiprocessing.rst index 95e74b9b..f59a61fa 100644 --- a/Doc/library/multiprocessing.rst +++ b/Doc/library/multiprocessing.rst @@ -2920,6 +2920,8 @@ Global variables However, global variables which are just module level constants cause no problems. +.. _multiprocessing-safe-main-import: + Safe importing of main module Make sure that the main module can be safely imported by a new Python diff --git a/Doc/library/os.rst b/Doc/library/os.rst index 87531324..7a5efcf5 100644 --- a/Doc/library/os.rst +++ b/Doc/library/os.rst @@ -2256,7 +2256,7 @@ features: .. function:: remove(path, *, dir_fd=None) Remove (delete) the file *path*. If *path* is a directory, an - :exc:`IsADirectoryError` is raised. Use :func:`rmdir` to remove directories. + :exc:`OSError` is raised. Use :func:`rmdir` to remove directories. If the file does not exist, a :exc:`FileNotFoundError` is raised. This function can support :ref:`paths relative to directory descriptors @@ -3107,7 +3107,7 @@ features: system records access and modification times; see :func:`~os.stat`. The best way to preserve exact times is to use the *st_atime_ns* and *st_mtime_ns* fields from the :func:`os.stat` result object with the *ns* parameter to - `utime`. + :func:`utime`. This function can support :ref:`specifying a file descriptor `, :ref:`paths relative to directory descriptors ` and :ref:`not @@ -3984,7 +3984,7 @@ written in Python, such as a mail server's external command delivery program. library :c:data:`POSIX_SPAWN_RESETIDS` flag. If the *setsid* argument is ``True``, it will create a new session ID - for `posix_spawn`. *setsid* requires :c:data:`POSIX_SPAWN_SETSID` + for ``posix_spawn``. *setsid* requires :c:data:`POSIX_SPAWN_SETSID` or :c:data:`POSIX_SPAWN_SETSID_NP` flag. Otherwise, :exc:`NotImplementedError` is raised. diff --git a/Doc/library/re.rst b/Doc/library/re.rst index e7d1f056..a40c99bf 100644 --- a/Doc/library/re.rst +++ b/Doc/library/re.rst @@ -421,6 +421,9 @@ The special characters are: some fixed length. Patterns which start with negative lookbehind assertions may match at the beginning of the string being searched. +.. _re-conditional-expression: +.. index:: single: (?(; in regular expressions + ``(?(id/name)yes-pattern|no-pattern)`` Will try to match with ``yes-pattern`` if the group with given *id* or *name* exists, and with ``no-pattern`` if it doesn't. ``no-pattern`` is @@ -1468,16 +1471,22 @@ search() vs. match() .. 
sectionauthor:: Fred L. Drake, Jr. -Python offers two different primitive operations based on regular expressions: -:func:`re.match` checks for a match only at the beginning of the string, while -:func:`re.search` checks for a match anywhere in the string (this is what Perl -does by default). +Python offers different primitive operations based on regular expressions: + ++ :func:`re.match` checks for a match only at the beginning of the string ++ :func:`re.search` checks for a match anywhere in the string + (this is what Perl does by default) ++ :func:`re.fullmatch` checks for entire string to be a match + For example:: >>> re.match("c", "abcdef") # No match >>> re.search("c", "abcdef") # Match + >>> re.fullmatch("p.*n", "python") # Match + + >>> re.fullmatch("r.*n", "python") # No match Regular expressions beginning with ``'^'`` can be used with :func:`search` to restrict the match at the beginning of the string:: @@ -1491,8 +1500,8 @@ Note however that in :const:`MULTILINE` mode :func:`match` only matches at the beginning of the string, whereas using :func:`search` with a regular expression beginning with ``'^'`` will match at the beginning of each line. :: - >>> re.match('X', 'A\nB\nX', re.MULTILINE) # No match - >>> re.search('^X', 'A\nB\nX', re.MULTILINE) # Match + >>> re.match("X", "A\nB\nX", re.MULTILINE) # No match + >>> re.search("^X", "A\nB\nX", re.MULTILINE) # Match diff --git a/Doc/library/secrets.rst b/Doc/library/secrets.rst index dc8e5f46..4405dfc0 100644 --- a/Doc/library/secrets.rst +++ b/Doc/library/secrets.rst @@ -128,7 +128,9 @@ Other functions .. function:: compare_digest(a, b) - Return ``True`` if strings *a* and *b* are equal, otherwise ``False``, + Return ``True`` if strings or + :term:`bytes-like objects ` + *a* and *b* are equal, otherwise ``False``, using a "constant-time compare" to reduce the risk of `timing attacks `_. See :func:`hmac.compare_digest` for additional details. diff --git a/Doc/library/select.rst b/Doc/library/select.rst index 1c3d10ef..1cbe97d1 100644 --- a/Doc/library/select.rst +++ b/Doc/library/select.rst @@ -60,7 +60,7 @@ The module defines the following: events. *sizehint* informs epoll about the expected number of events to be - registered. It must be positive, or `-1` to use the default. It is only + registered. It must be positive, or ``-1`` to use the default. It is only used on older systems where :c:func:`epoll_create1` is not available; otherwise it has no effect (though its value is still checked). diff --git a/Doc/library/signal.rst b/Doc/library/signal.rst index e763e16d..40c359f5 100644 --- a/Doc/library/signal.rst +++ b/Doc/library/signal.rst @@ -4,6 +4,8 @@ .. module:: signal :synopsis: Set handlers for asynchronous events. +**Source code:** :source:`Lib/signal.py` + -------------- This module provides mechanisms to use signal handlers in Python. diff --git a/Doc/library/socket.rst b/Doc/library/socket.rst index 205d08bf..0a8f35ee 100644 --- a/Doc/library/socket.rst +++ b/Doc/library/socket.rst @@ -604,7 +604,7 @@ The following functions all create :ref:`socket objects `. When :const:`SOCK_NONBLOCK` or :const:`SOCK_CLOEXEC` bit flags are applied to *type* they are cleared, and :attr:`socket.type` will not reflect them. They are still passed - to the underlying system `socket()` call. Therefore, + to the underlying system ``socket()`` call. 
Therefore, :: diff --git a/Doc/library/socketserver.rst b/Doc/library/socketserver.rst index b65a3e8f..744cdaa9 100644 --- a/Doc/library/socketserver.rst +++ b/Doc/library/socketserver.rst @@ -94,8 +94,7 @@ synchronous servers of four types:: Note that :class:`UnixDatagramServer` derives from :class:`UDPServer`, not from :class:`UnixStreamServer` --- the only difference between an IP and a Unix -stream server is the address family, which is simply repeated in both Unix -server classes. +server is the address family. .. class:: ForkingMixIn @@ -430,11 +429,8 @@ Request Handler Objects The :attr:`self.rfile` and :attr:`self.wfile` attributes can be read or written, respectively, to get the request data or return data to the client. - - The :attr:`rfile` attributes of both classes support the - :class:`io.BufferedIOBase` readable interface, and - :attr:`DatagramRequestHandler.wfile` supports the - :class:`io.BufferedIOBase` writable interface. + The :attr:`!rfile` attributes support the :class:`io.BufferedIOBase` readable interface, + and :attr:`!wfile` attributes support the :class:`!io.BufferedIOBase` writable interface. .. versionchanged:: 3.6 :attr:`StreamRequestHandler.wfile` also supports the diff --git a/Doc/library/sqlite3.rst b/Doc/library/sqlite3.rst index 5f276c8d..9775f806 100644 --- a/Doc/library/sqlite3.rst +++ b/Doc/library/sqlite3.rst @@ -239,6 +239,7 @@ inserted data and retrieved values from it in multiple ways. * :ref:`sqlite3-adapters` * :ref:`sqlite3-converters` * :ref:`sqlite3-connection-context-manager` + * :ref:`sqlite3-howto-row-factory` * :ref:`sqlite3-explanation` for in-depth background on transaction control. @@ -452,9 +453,10 @@ Module constants .. note:: - The :mod:`!sqlite3` module supports both ``qmark`` and ``numeric`` DB-API - parameter styles, because that is what the underlying SQLite library - supports. However, the DB-API does not allow multiple values for + The :mod:`!sqlite3` module supports ``qmark``, ``numeric``, + and ``named`` DB-API parameter styles, + because that is what the underlying SQLite library supports. + However, the DB-API does not allow multiple values for the ``paramstyle`` attribute. .. data:: sqlite_version @@ -557,7 +559,7 @@ Connection objects :meth:`~Cursor.executescript` on it with the given *sql_script*. Return the new cursor object. - .. method:: create_function(name, narg, func, \*, deterministic=False) + .. method:: create_function(name, narg, func, *, deterministic=False) Create or remove a user-defined SQL function. @@ -855,7 +857,7 @@ Connection objects con.close() - .. method:: backup(target, \*, pages=-1, progress=None, name="main", sleep=0.250) + .. method:: backup(target, *, pages=-1, progress=None, name="main", sleep=0.250) Create a backup of an SQLite database. @@ -945,31 +947,14 @@ Connection objects .. attribute:: row_factory - A callable that accepts two arguments, - a :class:`Cursor` object and the raw row results as a :class:`tuple`, - and returns a custom object representing an SQLite row. - - Example: - - .. doctest:: - - >>> def dict_factory(cursor, row): - ... col_names = [col[0] for col in cursor.description] - ... return {key: value for key, value in zip(col_names, row)} - >>> con = sqlite3.connect(":memory:") - >>> con.row_factory = dict_factory - >>> for row in con.execute("SELECT 1 AS a, 2 AS b"): - ... print(row) - {'a': 1, 'b': 2} + The initial :attr:`~Cursor.row_factory` + for :class:`Cursor` objects created from this connection. 
+ Assigning to this attribute does not affect the :attr:`!row_factory` + of existing cursors belonging to this connection, only new ones. + Is ``None`` by default, + meaning each row is returned as a :class:`tuple`. - If returning a tuple doesn't suffice and you want name-based access to - columns, you should consider setting :attr:`row_factory` to the - highly optimized :class:`sqlite3.Row` type. :class:`Row` provides both - index-based and case-insensitive name-based access to columns with almost no - memory overhead. It will probably be better than your own custom - dictionary-based approach or even a db_row based solution. - - .. XXX what's a db_row-based solution? + See :ref:`sqlite3-howto-row-factory` for more details. .. attribute:: text_factory @@ -1121,7 +1106,7 @@ Cursor objects .. method:: fetchone() - If :attr:`~Connection.row_factory` is ``None``, + If :attr:`~Cursor.row_factory` is ``None``, return the next row query result set as a :class:`tuple`. Else, pass it to the row factory and return its result. Return ``None`` if no more data is available. @@ -1215,6 +1200,22 @@ Cursor objects including :abbr:`CTE (Common Table Expression)` queries. It is only updated by the :meth:`execute` and :meth:`executemany` methods. + .. attribute:: row_factory + + Control how a row fetched from this :class:`!Cursor` is represented. + If ``None``, a row is represented as a :class:`tuple`. + Can be set to the included :class:`sqlite3.Row`; + or a :term:`callable` that accepts two arguments, + a :class:`Cursor` object and the :class:`!tuple` of row values, + and returns a custom object representing an SQLite row. + + Defaults to what :attr:`Connection.row_factory` was set to + when the :class:`!Cursor` was created. + Assigning to this attribute does not affect + :attr:`Connection.row_factory` of the parent connection. + + See :ref:`sqlite3-howto-row-factory` for more details. + .. The sqlite3.Row example used to be a how-to. It has now been incorporated into the Row reference. We keep the anchor here in order not to break @@ -1233,7 +1234,10 @@ Row objects It supports iteration, equality testing, :func:`len`, and :term:`mapping` access by column name and index. - Two row objects compare equal if have equal columns and equal members. + Two :class:`!Row` objects compare equal + if they have identical column names and values. + + See :ref:`sqlite3-howto-row-factory` for more details. .. method:: keys @@ -1244,21 +1248,6 @@ Row objects .. versionchanged:: 3.5 Added support of slicing. - Example: - - .. doctest:: - - >>> con = sqlite3.connect(":memory:") - >>> con.row_factory = sqlite3.Row - >>> res = con.execute("SELECT 'Earth' AS name, 6378 AS radius") - >>> row = res.fetchone() - >>> row.keys() - ['name', 'radius'] - >>> row[0], row["name"] # Access by index and name. - ('Earth', 'Earth') - >>> row["RADIUS"] # Column names are case-insensitive. - 6378 - PrepareProtocol objects ^^^^^^^^^^^^^^^^^^^^^^^ @@ -1616,7 +1605,7 @@ The following example illustrates the implicit and explicit approaches: return f"Point({self.x}, {self.y})" def adapt_point(point): - return f"{point.x};{point.y}".encode("utf-8") + return f"{point.x};{point.y}" def convert_point(s): x, y = list(map(float, s.split(b";"))) @@ -1682,20 +1671,39 @@ This section shows recipes for common adapters and converters. 
def convert_date(val): """Convert ISO 8601 date to datetime.date object.""" - return datetime.date.fromisoformat(val) + return datetime.date.fromisoformat(val.decode()) def convert_datetime(val): """Convert ISO 8601 datetime to datetime.datetime object.""" - return datetime.datetime.fromisoformat(val) + return datetime.datetime.fromisoformat(val.decode()) def convert_timestamp(val): """Convert Unix epoch timestamp to datetime.datetime object.""" - return datetime.datetime.fromtimestamp(val) + return datetime.datetime.fromtimestamp(int(val)) sqlite3.register_converter("date", convert_date) sqlite3.register_converter("datetime", convert_datetime) sqlite3.register_converter("timestamp", convert_timestamp) +.. testcode:: + :hide: + + dt = datetime.datetime(2019, 5, 18, 15, 17, 8, 123456) + + assert adapt_date_iso(dt.date()) == "2019-05-18" + assert convert_date(b"2019-05-18") == dt.date() + + assert adapt_datetime_iso(dt) == "2019-05-18T15:17:08.123456" + assert convert_datetime(b"2019-05-18T15:17:08.123456") == dt + + # Using current time as fromtimestamp() returns local date/time. + # Droping microseconds as adapt_datetime_epoch truncates fractional second part. + now = datetime.datetime.now().replace(microsecond=0) + current_timestamp = int(now.timestamp()) + + assert adapt_datetime_epoch(now) == current_timestamp + assert convert_timestamp(str(current_timestamp).encode()) == now + .. _sqlite3-connection-shortcuts: @@ -1835,6 +1843,96 @@ can be found in the `SQLite URI documentation`_. .. _SQLite URI documentation: https://www.sqlite.org/uri.html +.. _sqlite3-howto-row-factory: + +How to create and use row factories +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +By default, :mod:`!sqlite3` represents each row as a :class:`tuple`. +If a :class:`!tuple` does not suit your needs, +you can use the :class:`sqlite3.Row` class +or a custom :attr:`~Cursor.row_factory`. + +While :attr:`!row_factory` exists as an attribute both on the +:class:`Cursor` and the :class:`Connection`, +it is recommended to set :class:`Connection.row_factory`, +so all cursors created from the connection will use the same row factory. + +:class:`!Row` provides indexed and case-insensitive named access to columns, +with minimal memory overhead and performance impact over a :class:`!tuple`. +To use :class:`!Row` as a row factory, +assign it to the :attr:`!row_factory` attribute: + +.. doctest:: + + >>> con = sqlite3.connect(":memory:") + >>> con.row_factory = sqlite3.Row + +Queries now return :class:`!Row` objects: + +.. doctest:: + + >>> res = con.execute("SELECT 'Earth' AS name, 6378 AS radius") + >>> row = res.fetchone() + >>> row.keys() + ['name', 'radius'] + >>> row[0] # Access by index. + 'Earth' + >>> row["name"] # Access by name. + 'Earth' + >>> row["RADIUS"] # Column names are case-insensitive. + 6378 + +You can create a custom :attr:`~Cursor.row_factory` +that returns each row as a :class:`dict`, with column names mapped to values: + +.. testcode:: + + def dict_factory(cursor, row): + fields = [column[0] for column in cursor.description] + return {key: value for key, value in zip(fields, row)} + +Using it, queries now return a :class:`!dict` instead of a :class:`!tuple`: + +.. doctest:: + + >>> con = sqlite3.connect(":memory:") + >>> con.row_factory = dict_factory + >>> for row in con.execute("SELECT 1 AS a, 2 AS b"): + ... print(row) + {'a': 1, 'b': 2} + +The following row factory returns a :term:`named tuple`: + +.. 
testcode:: + + from collections import namedtuple + + def namedtuple_factory(cursor, row): + fields = [column[0] for column in cursor.description] + cls = namedtuple("Row", fields) + return cls._make(row) + +:func:`!namedtuple_factory` can be used as follows: + +.. doctest:: + + >>> con = sqlite3.connect(":memory:") + >>> con.row_factory = namedtuple_factory + >>> cur = con.execute("SELECT 1 AS a, 2 AS b") + >>> row = cur.fetchone() + >>> row + Row(a=1, b=2) + >>> row[0] # Indexed access. + 1 + >>> row.b # Attribute access. + 2 + +With some adjustments, the above recipe can be adapted to use a +:class:`~dataclasses.dataclass`, or any other custom class, +instead of a :class:`~collections.namedtuple`. + + .. _sqlite3-explanation: Explanation diff --git a/Doc/library/statistics.rst b/Doc/library/statistics.rst index 1ff6faec..afa2bea9 100644 --- a/Doc/library/statistics.rst +++ b/Doc/library/statistics.rst @@ -786,7 +786,7 @@ of applications in statistics. The relative likelihood is computed as the probability of a sample occurring in a narrow range divided by the width of the range (hence the word "density"). Since the likelihood is relative to other points, - its value can be greater than `1.0`. + its value can be greater than ``1.0``. .. method:: NormalDist.cdf(x) diff --git a/Doc/library/stdtypes.rst b/Doc/library/stdtypes.rst index 868d7bb2..8f90bd3e 100644 --- a/Doc/library/stdtypes.rst +++ b/Doc/library/stdtypes.rst @@ -1579,6 +1579,9 @@ expression support in the :mod:`re` module). range [*start*, *end*]. Optional arguments *start* and *end* are interpreted as in slice notation. + If *sub* is empty, returns the number of empty strings between characters + which is the length of the string plus one. + .. method:: str.encode(encoding="utf-8", errors="strict") @@ -2658,6 +2661,9 @@ arbitrary binary data. The subsequence to search for may be any :term:`bytes-like object` or an integer in the range 0 to 255. + If *sub* is empty, returns the number of empty slices between characters + which is the length of the bytes object plus one. + .. versionchanged:: 3.3 Also accept an integer in the range 0 to 255 as the subsequence. @@ -4330,11 +4336,9 @@ type, the :dfn:`dictionary`. (For other containers see the built-in A dictionary's keys are *almost* arbitrary values. Values that are not :term:`hashable`, that is, values containing lists, dictionaries or other mutable types (that are compared by value rather than by object identity) may -not be used as keys. Numeric types used for keys obey the normal rules for -numeric comparison: if two numbers compare equal (such as ``1`` and ``1.0``) -then they can be used interchangeably to index the same dictionary entry. (Note -however, that since computers store floating-point numbers as approximations it -is usually unwise to use them as dictionary keys.) +not be used as keys. +Values that compare equal (such as ``1``, ``1.0``, and ``True``) +can be used interchangeably to index the same dictionary entry. .. class:: dict(**kwargs) dict(mapping, **kwargs) diff --git a/Doc/library/struct.rst b/Doc/library/struct.rst index d12a5732..50d70731 100644 --- a/Doc/library/struct.rst +++ b/Doc/library/struct.rst @@ -12,21 +12,25 @@ -------------- -This module performs conversions between Python values and C structs represented -as Python :class:`bytes` objects. This can be used in handling binary data -stored in files or from network connections, among other sources. 
It uses -:ref:`struct-format-strings` as compact descriptions of the layout of the C -structs and the intended conversion to/from Python values. +This module converts between Python values and C structs represented +as Python :class:`bytes` objects. Compact :ref:`format strings ` +describe the intended conversions to/from Python values. +The module's functions and objects can be used for two largely +distinct applications, data exchange with external sources (files or +network connections), or data transfer between the Python application +and the C layer. .. note:: - By default, the result of packing a given C struct includes pad bytes in - order to maintain proper alignment for the C types involved; similarly, - alignment is taken into account when unpacking. This behavior is chosen so - that the bytes of a packed struct correspond exactly to the layout in memory - of the corresponding C struct. To handle platform-independent data formats - or omit implicit pad bytes, use ``standard`` size and alignment instead of - ``native`` size and alignment: see :ref:`struct-alignment` for details. + When no prefix character is given, native mode is the default. It + packs or unpacks data based on the platform and compiler on which + the Python interpreter was built. + The result of packing a given C struct includes pad bytes which + maintain proper alignment for the C types involved; similarly, + alignment is taken into account when unpacking. In contrast, when + communicating data between external sources, the programmer is + responsible for defining byte ordering and padding between elements. + See :ref:`struct-alignment` for details. Several :mod:`struct` functions (and methods of :class:`Struct`) take a *buffer* argument. This refers to objects that implement the :ref:`bufferobjects` and @@ -102,10 +106,13 @@ The module defines the following exception and functions: Format Strings -------------- -Format strings are the mechanism used to specify the expected layout when -packing and unpacking data. They are built up from :ref:`format-characters`, -which specify the type of data being packed/unpacked. In addition, there are -special characters for controlling the :ref:`struct-alignment`. +Format strings describe the data layout when +packing and unpacking data. They are built up from :ref:`format characters`, +which specify the type of data being packed/unpacked. In addition, +special characters control the :ref:`byte order, size and alignment`. +Each format string consists of an optional prefix character which +describes the overall properties of the data and one or more format +characters which describe the actual data values and padding. .. _struct-alignment: @@ -116,6 +123,11 @@ Byte Order, Size, and Alignment By default, C types are represented in the machine's native format and byte order, and properly aligned by skipping pad bytes if necessary (according to the rules used by the C compiler). +This behavior is chosen so +that the bytes of a packed struct correspond exactly to the memory layout +of the corresponding C struct. +Whether to use native byte ordering +and padding or standard formats depends on the application. .. index:: single: @ (at); in struct format strings @@ -144,12 +156,10 @@ following table: If the first character is not one of these, ``'@'`` is assumed. -Native byte order is big-endian or little-endian, depending on the host -system. 
For example, Intel x86 and AMD64 (x86-64) are little-endian; -IBM z and most legacy architectures are big-endian; -and ARM, RISC-V and IBM Power feature switchable endianness -(bi-endian, though the former two are nearly always little-endian in practice). -Use ``sys.byteorder`` to check the endianness of your system. +Native byte order is big-endian or little-endian, depending on the +host system. For example, Intel x86, AMD64 (x86-64), and Apple M1 are +little-endian; IBM z and many legacy architectures are big-endian. +Use :data:`sys.byteorder` to check the endianness of your system. Native size and alignment are determined using the C compiler's ``sizeof`` expression. This is always combined with native byte order. @@ -194,7 +204,7 @@ platform-dependent. +--------+--------------------------+--------------------+----------------+------------+ | Format | C Type | Python type | Standard size | Notes | +========+==========================+====================+================+============+ -| ``x`` | pad byte | no value | | | +| ``x`` | pad byte | no value | | \(7) | +--------+--------------------------+--------------------+----------------+------------+ | ``c`` | :c:expr:`char` | bytes of length 1 | 1 | | +--------+--------------------------+--------------------+----------------+------------+ @@ -231,9 +241,9 @@ platform-dependent. +--------+--------------------------+--------------------+----------------+------------+ | ``d`` | :c:expr:`double` | float | 8 | \(4) | +--------+--------------------------+--------------------+----------------+------------+ -| ``s`` | :c:expr:`char[]` | bytes | | | +| ``s`` | :c:expr:`char[]` | bytes | | \(9) | +--------+--------------------------+--------------------+----------------+------------+ -| ``p`` | :c:expr:`char[]` | bytes | | | +| ``p`` | :c:expr:`char[]` | bytes | | \(8) | +--------+--------------------------+--------------------+----------------+------------+ | ``P`` | :c:expr:`void \*` | integer | | \(5) | +--------+--------------------------+--------------------+----------------+------------+ @@ -291,6 +301,34 @@ Notes: operations. See the Wikipedia page on the `half-precision floating-point format `_ for more information. +(7) + When packing, ``'x'`` inserts one NUL byte. + +(8) + The ``'p'`` format character encodes a "Pascal string", meaning a short + variable-length string stored in a *fixed number of bytes*, given by the count. + The first byte stored is the length of the string, or 255, whichever is + smaller. The bytes of the string follow. If the string passed in to + :func:`pack` is too long (longer than the count minus 1), only the leading + ``count-1`` bytes of the string are stored. If the string is shorter than + ``count-1``, it is padded with null bytes so that exactly count bytes in all + are used. Note that for :func:`unpack`, the ``'p'`` format character consumes + ``count`` bytes, but that the string returned can never contain more than 255 + bytes. + +(9) + For the ``'s'`` format character, the count is interpreted as the length of the + bytes, not a repeat count like for the other format characters; for example, + ``'10s'`` means a single 10-byte string mapping to or from a single + Python byte string, while ``'10c'`` means 10 + separate one byte character elements (e.g., ``cccccccccc``) mapping + to or from ten different Python byte objects. (See :ref:`struct-examples` + for a concrete demonstration of the difference.) + If a count is not given, it defaults to 1. 
For packing, the string is + truncated or padded with null bytes as appropriate to make it fit. For + unpacking, the resulting bytes object always has exactly the specified number + of bytes. As a special case, ``'0s'`` means a single, empty string (while + ``'0c'`` means 0 characters). A format character may be preceded by an integral repeat count. For example, the format string ``'4h'`` means exactly the same as ``'hhhh'``. @@ -298,15 +336,6 @@ the format string ``'4h'`` means exactly the same as ``'hhhh'``. Whitespace characters between formats are ignored; a count and its format must not contain whitespace though. -For the ``'s'`` format character, the count is interpreted as the length of the -bytes, not a repeat count like for the other format characters; for example, -``'10s'`` means a single 10-byte string, while ``'10c'`` means 10 characters. -If a count is not given, it defaults to 1. For packing, the string is -truncated or padded with null bytes as appropriate to make it fit. For -unpacking, the resulting bytes object always has exactly the specified number -of bytes. As a special case, ``'0s'`` means a single, empty string (while -``'0c'`` means 0 characters). - When packing a value ``x`` using one of the integer formats (``'b'``, ``'B'``, ``'h'``, ``'H'``, ``'i'``, ``'I'``, ``'l'``, ``'L'``, ``'q'``, ``'Q'``), if ``x`` is outside the valid range for that format @@ -316,17 +345,6 @@ then :exc:`struct.error` is raised. Previously, some of the integer formats wrapped out-of-range values and raised :exc:`DeprecationWarning` instead of :exc:`struct.error`. -The ``'p'`` format character encodes a "Pascal string", meaning a short -variable-length string stored in a *fixed number of bytes*, given by the count. -The first byte stored is the length of the string, or 255, whichever is -smaller. The bytes of the string follow. If the string passed in to -:func:`pack` is too long (longer than the count minus 1), only the leading -``count-1`` bytes of the string are stored. If the string is shorter than -``count-1``, it is padded with null bytes so that exactly count bytes in all -are used. Note that for :func:`unpack`, the ``'p'`` format character consumes -``count`` bytes, but that the string returned can never contain more than 255 -bytes. - .. index:: single: ? (question mark); in struct format strings For the ``'?'`` format character, the return value is either :const:`True` or @@ -342,18 +360,36 @@ Examples ^^^^^^^^ .. note:: - All examples assume a native byte order, size, and alignment with a - big-endian machine. + Native byte order examples (designated by the ``'@'`` format prefix or + lack of any prefix character) may not match what the reader's + machine produces as + that depends on the platform and compiler. 
+ +Pack and unpack integers of three different sizes, using big endian +ordering:: + + >>> from struct import * + >>> pack(">bhl", 1, 2, 3) + b'\x01\x00\x02\x00\x00\x00\x03' + >>> unpack('>bhl', b'\x01\x00\x02\x00\x00\x00\x03' + (1, 2, 3) + >>> calcsize('>bhl') + 7 -A basic example of packing/unpacking three integers:: +Attempt to pack an integer which is too large for the defined field:: - >>> from struct import * - >>> pack('hhl', 1, 2, 3) - b'\x00\x01\x00\x02\x00\x00\x00\x03' - >>> unpack('hhl', b'\x00\x01\x00\x02\x00\x00\x00\x03') - (1, 2, 3) - >>> calcsize('hhl') - 8 + >>> pack(">h", 99999) + Traceback (most recent call last): + File "", line 1, in + struct.error: 'h' format requires -32768 <= number <= 32767 + +Demonstrate the difference between ``'s'`` and ``'c'`` format +characters:: + + >>> pack("@ccc", b'1', b'2', b'3') + b'123' + >>> pack("@3s", b'123') + b'123' Unpacked fields can be named by assigning them to variables or by wrapping the result in a named tuple:: @@ -366,35 +402,132 @@ the result in a named tuple:: >>> Student._make(unpack('<10sHHb', record)) Student(name=b'raymond ', serialnum=4658, school=264, gradelevel=8) -The ordering of format characters may have an impact on size since the padding -needed to satisfy alignment requirements is different:: - - >>> pack('ci', b'*', 0x12131415) - b'*\x00\x00\x00\x12\x13\x14\x15' - >>> pack('ic', 0x12131415, b'*') - b'\x12\x13\x14\x15*' - >>> calcsize('ci') +The ordering of format characters may have an impact on size in native +mode since padding is implicit. In standard mode, the user is +responsible for inserting any desired padding. +Note in +the first ``pack`` call below that three NUL bytes were added after the +packed ``'#'`` to align the following integer on a four-byte boundary. +In this example, the output was produced on a little endian machine:: + + >>> pack('@ci', b'#', 0x12131415) + b'#\x00\x00\x00\x15\x14\x13\x12' + >>> pack('@ic', 0x12131415, b'#') + b'\x15\x14\x13\x12#' + >>> calcsize('@ci') 8 - >>> calcsize('ic') + >>> calcsize('@ic') 5 -The following format ``'llh0l'`` specifies two pad bytes at the end, assuming -longs are aligned on 4-byte boundaries:: +The following format ``'llh0l'`` results in two pad bytes being added +at the end, assuming the platform's longs are aligned on 4-byte boundaries:: - >>> pack('llh0l', 1, 2, 3) + >>> pack('@llh0l', 1, 2, 3) b'\x00\x00\x00\x01\x00\x00\x00\x02\x00\x03\x00\x00' -This only works when native size and alignment are in effect; standard size and -alignment does not enforce any alignment. - .. seealso:: Module :mod:`array` Packed binary storage of homogeneous data. - Module :mod:`xdrlib` - Packing and unpacking of XDR data. + Module :mod:`json` + JSON encoder and decoder. + + Module :mod:`pickle` + Python object serialization. + + +.. _applications: + +Applications +------------ + +Two main applications for the :mod:`struct` module exist, data +interchange between Python and C code within an application or another +application compiled using the same compiler (:ref:`native formats`), and +data interchange between applications using agreed upon data layout +(:ref:`standard formats`). Generally speaking, the format strings +constructed for these two domains are distinct. + + +.. _struct-native-formats: + +Native Formats +^^^^^^^^^^^^^^ + +When constructing format strings which mimic native layouts, the +compiler and machine architecture determine byte ordering and padding. 
+In such cases, the ``@`` format character should be used to specify +native byte ordering and data sizes. Internal pad bytes are normally inserted +automatically. It is possible that a zero-repeat format code will be +needed at the end of a format string to round up to the correct +byte boundary for proper alignment of consective chunks of data. + +Consider these two simple examples (on a 64-bit, little-endian +machine):: + + >>> calcsize('@lhl') + 24 + >>> calcsize('@llh') + 18 + +Data is not padded to an 8-byte boundary at the end of the second +format string without the use of extra padding. A zero-repeat format +code solves that problem:: + + >>> calcsize('@llh0l') + 24 + +The ``'x'`` format code can be used to specify the repeat, but for +native formats it is better to use a zero-repeat format like ``'0l'``. + +By default, native byte ordering and alignment is used, but it is +better to be explicit and use the ``'@'`` prefix character. + + +.. _struct-standard-formats: + +Standard Formats +^^^^^^^^^^^^^^^^ + +When exchanging data beyond your process such as networking or storage, +be precise. Specify the exact byte order, size, and alignment. Do +not assume they match the native order of a particular machine. +For example, network byte order is big-endian, while many popular CPUs +are little-endian. By defining this explicitly, the user need not +care about the specifics of the platform their code is running on. +The first character should typically be ``<`` or ``>`` +(or ``!``). Padding is the responsibility of the programmer. The +zero-repeat format character won't work. Instead, the user must +explicitly add ``'x'`` pad bytes where needed. Revisiting the +examples from the previous section, we have:: + + >>> calcsize('>> pack('>> calcsize('@llh') + 18 + >>> pack('@llh', 1, 2, 3) == pack('>> calcsize('>> calcsize('@llh0l') + 24 + >>> pack('@llh0l', 1, 2, 3) == pack('>> calcsize('>> calcsize('@llh0l') + 12 + >>> pack('@llh0l', 1, 2, 3) == pack('` (:option:`configure + If Python is :ref:`built in debug mode ` (:option:`configure --with-pydebug option <--with-pydebug>`), it also performs some expensive internal consistency checks. @@ -320,7 +329,7 @@ always available. files to (and read them from) a parallel directory tree rooted at this directory, rather than from ``__pycache__`` directories in the source code tree. Any ``__pycache__`` directories in the source code tree will be ignored - and new `.pyc` files written within the pycache prefix. Thus if you use + and new ``.pyc`` files written within the pycache prefix. Thus if you use :mod:`compileall` as a pre-build step, you must ensure you run it with the same pycache prefix (if any) that you will use at runtime. @@ -828,7 +837,7 @@ always available. .. function:: get_asyncgen_hooks() Returns an *asyncgen_hooks* object, which is similar to a - :class:`~collections.namedtuple` of the form `(firstiter, finalizer)`, + :class:`~collections.namedtuple` of the form ``(firstiter, finalizer)``, where *firstiter* and *finalizer* are expected to be either ``None`` or functions which take an :term:`asynchronous generator iterator` as an argument, and are used to schedule finalization of an asynchronous diff --git a/Doc/library/tk.rst b/Doc/library/tk.rst index 0cb8fda4..3dc21305 100644 --- a/Doc/library/tk.rst +++ b/Doc/library/tk.rst @@ -44,4 +44,4 @@ alternative `GUI frameworks and tools Vector: return [scalar * num for num in vector] - # typechecks; a list of floats qualifies as a Vector. 
+ # passes type checking; a list of floats qualifies as a Vector. new_vector = scale(2.0, [1.0, -4.2, 5.4]) Type aliases are useful for simplifying complex type signatures. For example:: @@ -133,10 +133,10 @@ of the original type. This is useful in helping catch logical errors:: def get_user_name(user_id: UserId) -> str: ... - # typechecks + # passes type checking user_a = get_user_name(UserId(42351)) - # does not typecheck; an int is not a UserId + # fails type checking; an int is not a UserId user_b = get_user_name(-1) You may still perform all ``int`` operations on a variable of type ``UserId``, @@ -162,7 +162,7 @@ It is invalid to create a subtype of ``Derived``:: UserId = NewType('UserId', int) - # Fails at runtime and does not typecheck + # Fails at runtime and does not pass type checking class AdminUserId(UserId): pass However, it is possible to create a :class:`NewType` based on a 'derived' ``NewType``:: @@ -234,7 +234,7 @@ respectively. .. versionchanged:: 3.10 ``Callable`` now supports :class:`ParamSpec` and :data:`Concatenate`. - See :pep:`612` for more information. + See :pep:`612` for more details. .. seealso:: The documentation for :class:`ParamSpec` and :class:`Concatenate` provides @@ -305,7 +305,7 @@ single type parameter ``T`` . This also makes ``T`` valid as a type within the class body. The :class:`Generic` base class defines :meth:`~object.__class_getitem__` so -that ``LoggedVar[t]`` is valid as a type:: +that ``LoggedVar[T]`` is valid as a type:: from collections.abc import Iterable @@ -449,12 +449,12 @@ value of type :data:`Any` and assign it to any variable:: s = a # OK def foo(item: Any) -> int: - # Typechecks; 'item' could be any type, + # Passes type checking; 'item' could be any type, # and that type might have a 'bar' method item.bar() ... -Notice that no typechecking is performed when assigning a value of type +Notice that no type checking is performed when assigning a value of type :data:`Any` to a more precise type. For example, the static type checker did not report an error when assigning ``a`` to ``s`` even though ``s`` was declared to be of type :class:`str` and receives an :class:`int` value at @@ -486,20 +486,20 @@ reject almost all operations on it, and assigning it to a variable (or using it as a return value) of a more specialized type is a type error. For example:: def hash_a(item: object) -> int: - # Fails; an object does not have a 'magic' method. + # Fails type checking; an object does not have a 'magic' method. item.magic() ... def hash_b(item: Any) -> int: - # Typechecks + # Passes type checking item.magic() ... - # Typechecks, since ints and strs are subclasses of object + # Passes type checking, since ints and strs are subclasses of object hash_a(42) hash_a("foo") - # Typechecks, since Any is compatible with all types + # Passes type checking, since Any is compatible with all types hash_b(42) hash_b("foo") @@ -631,8 +631,8 @@ These can be used as types in annotations using ``[]``, each having a unique syn is equivalent to ``Tuple[Any, ...]``, and in turn to :class:`tuple`. .. deprecated:: 3.9 - :class:`builtins.tuple ` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`builtins.tuple ` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. data:: Union @@ -720,12 +720,12 @@ These can be used as types in annotations using ``[]``, each having a unique syn respectively. .. deprecated:: 3.9 - :class:`collections.abc.Callable` now supports ``[]``. 
See :pep:`585` and - :ref:`types-genericalias`. + :class:`collections.abc.Callable` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. versionchanged:: 3.10 ``Callable`` now supports :class:`ParamSpec` and :data:`Concatenate`. - See :pep:`612` for more information. + See :pep:`612` for more details. .. seealso:: The documentation for :class:`ParamSpec` and :class:`Concatenate` provide @@ -827,8 +827,8 @@ These can be used as types in annotations using ``[]``, each having a unique syn .. versionadded:: 3.5.2 .. deprecated:: 3.9 - :class:`builtins.type ` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`builtins.type ` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. data:: Literal @@ -1050,8 +1050,7 @@ These can be used as types in annotations using ``[]``, each having a unique syn is not a subtype of the former, since ``List`` is invariant. The responsibility of writing type-safe type guards is left to the user. - ``TypeGuard`` also works with type variables. For more information, see - :pep:`647` (User-Defined Type Guards). + ``TypeGuard`` also works with type variables. See :pep:`647` for more details. .. versionadded:: 3.10 @@ -1323,7 +1322,7 @@ These are not used in annotations. They are building blocks for creating generic func(C()) # Passes static type check - See :pep:`544` for details. Protocol classes decorated with + See :pep:`544` for more details. Protocol classes decorated with :func:`runtime_checkable` (described later) act as simple-minded runtime protocols that check only the presence of given attributes, ignoring their type signatures. @@ -1598,8 +1597,8 @@ Corresponding to built-in types ... .. deprecated:: 3.9 - :class:`builtins.dict ` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`builtins.dict ` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: List(list, MutableSequence[T]) @@ -1619,8 +1618,8 @@ Corresponding to built-in types return [item for item in vector if item > 0] .. deprecated:: 3.9 - :class:`builtins.list ` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`builtins.list ` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: Set(set, MutableSet[T]) @@ -1629,16 +1628,17 @@ Corresponding to built-in types to use an abstract collection type such as :class:`AbstractSet`. .. deprecated:: 3.9 - :class:`builtins.set ` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`builtins.set ` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: FrozenSet(frozenset, AbstractSet[T_co]) A generic version of :class:`builtins.frozenset `. .. deprecated:: 3.9 - :class:`builtins.frozenset ` now supports ``[]``. See - :pep:`585` and :ref:`types-genericalias`. + :class:`builtins.frozenset ` + now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. note:: :data:`Tuple` is a special form. @@ -1652,8 +1652,8 @@ Corresponding to types in :mod:`collections` .. versionadded:: 3.5.2 .. deprecated:: 3.9 - :class:`collections.defaultdict` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`collections.defaultdict` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. 
class:: OrderedDict(collections.OrderedDict, MutableMapping[KT, VT]) @@ -1662,8 +1662,8 @@ Corresponding to types in :mod:`collections` .. versionadded:: 3.7.2 .. deprecated:: 3.9 - :class:`collections.OrderedDict` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`collections.OrderedDict` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: ChainMap(collections.ChainMap, MutableMapping[KT, VT]) @@ -1673,8 +1673,8 @@ Corresponding to types in :mod:`collections` .. versionadded:: 3.6.1 .. deprecated:: 3.9 - :class:`collections.ChainMap` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`collections.ChainMap` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: Counter(collections.Counter, Dict[T, int]) @@ -1684,8 +1684,8 @@ Corresponding to types in :mod:`collections` .. versionadded:: 3.6.1 .. deprecated:: 3.9 - :class:`collections.Counter` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`collections.Counter` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: Deque(deque, MutableSequence[T]) @@ -1695,8 +1695,8 @@ Corresponding to types in :mod:`collections` .. versionadded:: 3.6.1 .. deprecated:: 3.9 - :class:`collections.deque` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`collections.deque` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. Other concrete types """""""""""""""""""" @@ -1710,7 +1710,7 @@ Other concrete types represent the types of I/O streams such as returned by :func:`open`. - .. deprecated-removed:: 3.8 3.12 + .. deprecated-removed:: 3.8 3.13 The ``typing.io`` namespace is deprecated and will be removed. These types should be directly imported from ``typing`` instead. @@ -1724,7 +1724,7 @@ Other concrete types ``Pattern[str]``, ``Pattern[bytes]``, ``Match[str]``, or ``Match[bytes]``. - .. deprecated-removed:: 3.8 3.12 + .. deprecated-removed:: 3.8 3.13 The ``typing.re`` namespace is deprecated and will be removed. These types should be directly imported from ``typing`` instead. @@ -1752,13 +1752,13 @@ Abstract Base Classes Corresponding to collections in :mod:`collections.abc` """""""""""""""""""""""""""""""""""""""""""""""""""""" -.. class:: AbstractSet(Sized, Collection[T_co]) +.. class:: AbstractSet(Collection[T_co]) A generic version of :class:`collections.abc.Set`. .. deprecated:: 3.9 - :class:`collections.abc.Set` now supports ``[]``. See :pep:`585` and - :ref:`types-genericalias`. + :class:`collections.abc.Set` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: ByteString(Sequence[int]) @@ -1771,8 +1771,8 @@ Corresponding to collections in :mod:`collections.abc` annotate arguments of any of the types mentioned above. .. deprecated:: 3.9 - :class:`collections.abc.ByteString` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.ByteString` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: Collection(Sized, Iterable[T_co], Container[T_co]) @@ -1781,34 +1781,34 @@ Corresponding to collections in :mod:`collections.abc` .. versionadded:: 3.6.0 .. deprecated:: 3.9 - :class:`collections.abc.Collection` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.Collection` now supports subscripting (``[]``). 
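+
+      For instance (a minimal sketch; the function below is only
+      illustrative), the ABC itself can be subscripted directly in
+      annotations::
+
+         from collections.abc import Collection
+
+         def total(values: Collection[int]) -> int:
+             # No typing.Collection import is needed on Python 3.9+.
+             return sum(values)
+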
+ See :pep:`585` and :ref:`types-genericalias`. .. class:: Container(Generic[T_co]) A generic version of :class:`collections.abc.Container`. .. deprecated:: 3.9 - :class:`collections.abc.Container` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.Container` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. -.. class:: ItemsView(MappingView, Generic[KT_co, VT_co]) +.. class:: ItemsView(MappingView, AbstractSet[tuple[KT_co, VT_co]]) A generic version of :class:`collections.abc.ItemsView`. .. deprecated:: 3.9 - :class:`collections.abc.ItemsView` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.ItemsView` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. -.. class:: KeysView(MappingView[KT_co], AbstractSet[KT_co]) +.. class:: KeysView(MappingView, AbstractSet[KT_co]) A generic version of :class:`collections.abc.KeysView`. .. deprecated:: 3.9 - :class:`collections.abc.KeysView` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.KeysView` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. -.. class:: Mapping(Sized, Collection[KT], Generic[VT_co]) +.. class:: Mapping(Collection[KT], Generic[KT, VT_co]) A generic version of :class:`collections.abc.Mapping`. This type can be used as follows:: @@ -1817,56 +1817,58 @@ Corresponding to collections in :mod:`collections.abc` return word_list[word] .. deprecated:: 3.9 - :class:`collections.abc.Mapping` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.Mapping` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. -.. class:: MappingView(Sized, Iterable[T_co]) +.. class:: MappingView(Sized) A generic version of :class:`collections.abc.MappingView`. .. deprecated:: 3.9 - :class:`collections.abc.MappingView` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.MappingView` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: MutableMapping(Mapping[KT, VT]) A generic version of :class:`collections.abc.MutableMapping`. .. deprecated:: 3.9 - :class:`collections.abc.MutableMapping` now supports ``[]``. See - :pep:`585` and :ref:`types-genericalias`. + :class:`collections.abc.MutableMapping` + now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: MutableSequence(Sequence[T]) A generic version of :class:`collections.abc.MutableSequence`. .. deprecated:: 3.9 - :class:`collections.abc.MutableSequence` now supports ``[]``. See - :pep:`585` and :ref:`types-genericalias`. + :class:`collections.abc.MutableSequence` + now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: MutableSet(AbstractSet[T]) A generic version of :class:`collections.abc.MutableSet`. .. deprecated:: 3.9 - :class:`collections.abc.MutableSet` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.MutableSet` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: Sequence(Reversible[T_co], Collection[T_co]) A generic version of :class:`collections.abc.Sequence`. .. deprecated:: 3.9 - :class:`collections.abc.Sequence` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.Sequence` now supports subscripting (``[]``). 
+ See :pep:`585` and :ref:`types-genericalias`. -.. class:: ValuesView(MappingView[VT_co]) +.. class:: ValuesView(MappingView, Collection[_VT_co]) A generic version of :class:`collections.abc.ValuesView`. .. deprecated:: 3.9 - :class:`collections.abc.ValuesView` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.ValuesView` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. Corresponding to other types in :mod:`collections.abc` """""""""""""""""""""""""""""""""""""""""""""""""""""" @@ -1876,16 +1878,16 @@ Corresponding to other types in :mod:`collections.abc` A generic version of :class:`collections.abc.Iterable`. .. deprecated:: 3.9 - :class:`collections.abc.Iterable` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.Iterable` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: Iterator(Iterable[T_co]) A generic version of :class:`collections.abc.Iterator`. .. deprecated:: 3.9 - :class:`collections.abc.Iterator` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.Iterator` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: Generator(Iterator[T_co], Generic[T_co, T_contra, V_co]) @@ -1919,8 +1921,8 @@ Corresponding to other types in :mod:`collections.abc` start += 1 .. deprecated:: 3.9 - :class:`collections.abc.Generator` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.Generator` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: Hashable @@ -1931,8 +1933,8 @@ Corresponding to other types in :mod:`collections.abc` A generic version of :class:`collections.abc.Reversible`. .. deprecated:: 3.9 - :class:`collections.abc.Reversible` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.Reversible` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: Sized @@ -1956,8 +1958,8 @@ Asynchronous programming .. versionadded:: 3.5.3 .. deprecated:: 3.9 - :class:`collections.abc.Coroutine` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.Coroutine` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: AsyncGenerator(AsyncIterator[T_co], Generic[T_co, T_contra]) @@ -1993,8 +1995,9 @@ Asynchronous programming .. versionadded:: 3.6.1 .. deprecated:: 3.9 - :class:`collections.abc.AsyncGenerator` now supports ``[]``. See - :pep:`585` and :ref:`types-genericalias`. + :class:`collections.abc.AsyncGenerator` + now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: AsyncIterable(Generic[T_co]) @@ -2003,8 +2006,8 @@ Asynchronous programming .. versionadded:: 3.5.2 .. deprecated:: 3.9 - :class:`collections.abc.AsyncIterable` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.AsyncIterable` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: AsyncIterator(AsyncIterable[T_co]) @@ -2013,8 +2016,8 @@ Asynchronous programming .. versionadded:: 3.5.2 .. deprecated:: 3.9 - :class:`collections.abc.AsyncIterator` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.AsyncIterator` now supports subscripting (``[]``). 
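+
+      For instance (a minimal sketch; the async generator below is only
+      illustrative)::
+
+         from collections.abc import AsyncIterator
+
+         async def countdown(n: int) -> AsyncIterator[int]:
+             # An async generator is a convenient way to provide an
+             # AsyncIterator; the annotation uses the ABC directly.
+             while n > 0:
+                 yield n
+                 n -= 1
+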
+ See :pep:`585` and :ref:`types-genericalias`. .. class:: Awaitable(Generic[T_co]) @@ -2023,8 +2026,8 @@ Asynchronous programming .. versionadded:: 3.5.2 .. deprecated:: 3.9 - :class:`collections.abc.Awaitable` now supports ``[]``. See :pep:`585` - and :ref:`types-genericalias`. + :class:`collections.abc.Awaitable` now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. Context manager types @@ -2038,8 +2041,9 @@ Context manager types .. versionadded:: 3.6.0 .. deprecated:: 3.9 - :class:`contextlib.AbstractContextManager` now supports ``[]``. See - :pep:`585` and :ref:`types-genericalias`. + :class:`contextlib.AbstractContextManager` + now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. .. class:: AsyncContextManager(Generic[T_co]) @@ -2049,8 +2053,9 @@ Context manager types .. versionadded:: 3.6.2 .. deprecated:: 3.9 - :class:`contextlib.AbstractAsyncContextManager` now supports ``[]``. See - :pep:`585` and :ref:`types-genericalias`. + :class:`contextlib.AbstractAsyncContextManager` + now supports subscripting (``[]``). + See :pep:`585` and :ref:`types-genericalias`. Protocols --------- @@ -2127,7 +2132,7 @@ Functions and decorators def process(response): - See :pep:`484` for details and comparison with other typing semantics. + See :pep:`484` for more details and comparison with other typing semantics. .. decorator:: final diff --git a/Doc/library/unittest.mock-examples.rst b/Doc/library/unittest.mock-examples.rst index 24a18c68..c82d3332 100644 --- a/Doc/library/unittest.mock-examples.rst +++ b/Doc/library/unittest.mock-examples.rst @@ -1116,7 +1116,7 @@ on first use). That aside there is a way to use ``mock`` to affect the results of an import. Importing fetches an *object* from the :data:`sys.modules` dictionary. Note that it fetches an *object*, which need not be a module. Importing a module for the -first time results in a module object being put in `sys.modules`, so usually +first time results in a module object being put in ``sys.modules``, so usually when you import something you get a module back. This need not be the case however. diff --git a/Doc/library/venv.rst b/Doc/library/venv.rst index fe5e4c0c..3ab83a23 100644 --- a/Doc/library/venv.rst +++ b/Doc/library/venv.rst @@ -15,74 +15,99 @@ -------------- -The :mod:`venv` module provides support for creating lightweight "virtual -environments" with their own site directories, optionally isolated from system -site directories. Each virtual environment has its own Python binary (which -matches the version of the binary that was used to create this environment) and -can have its own independent set of installed Python packages in its site -directories. +.. _venv-def: +.. _venv-intro: + +The :mod:`!venv` module supports creating lightweight "virtual environments", +each with their own independent set of Python packages installed in +their :mod:`site` directories. +A virtual environment is created on top of an existing +Python installation, known as the virtual environment's "base" Python, and may +optionally be isolated from the packages in the base environment, +so only those explicitly installed in the virtual environment are available. + +When used from within a virtual environment, common installation tools such as +`pip`_ will install Python packages into a virtual environment +without needing to be told to do so explicitly. -See :pep:`405` for more information about Python virtual environments. +See :pep:`405` for more background on Python virtual environments. .. 
seealso:: `Python Packaging User Guide: Creating and using virtual environments `__ - Creating virtual environments ----------------------------- .. include:: /using/venv-create.inc +.. _venv-explanation: -.. _venv-def: +How venvs work +-------------- -.. note:: A virtual environment is a Python environment such that the Python - interpreter, libraries and scripts installed into it are isolated from those - installed in other virtual environments, and (by default) any libraries - installed in a "system" Python, i.e., one which is installed as part of your - operating system. - - A virtual environment is a directory tree which contains Python executable - files and other files which indicate that it is a virtual environment. - - Common installation tools such as setuptools_ and pip_ work as - expected with virtual environments. In other words, when a virtual - environment is active, they install Python packages into the virtual - environment without needing to be told to do so explicitly. - - When a virtual environment is active (i.e., the virtual environment's Python - interpreter is running), the attributes :attr:`sys.prefix` and - :attr:`sys.exec_prefix` point to the base directory of the virtual - environment, whereas :attr:`sys.base_prefix` and - :attr:`sys.base_exec_prefix` point to the non-virtual environment Python - installation which was used to create the virtual environment. If a virtual - environment is not active, then :attr:`sys.prefix` is the same as - :attr:`sys.base_prefix` and :attr:`sys.exec_prefix` is the same as - :attr:`sys.base_exec_prefix` (they all point to a non-virtual environment - Python installation). - - When a virtual environment is active, any options that change the - installation path will be ignored from all :mod:`distutils` configuration - files to prevent projects being inadvertently installed outside of the - virtual environment. - - When working in a command shell, users can make a virtual environment active - by running an ``activate`` script in the virtual environment's executables - directory (the precise filename and command to use the file is - shell-dependent), which prepends the virtual environment's directory for - executables to the ``PATH`` environment variable for the running shell. There - should be no need in other circumstances to activate a virtual - environment; scripts installed into virtual environments have a "shebang" - line which points to the virtual environment's Python interpreter. This means - that the script will run with that interpreter regardless of the value of - ``PATH``. On Windows, "shebang" line processing is supported if you have the - Python Launcher for Windows installed (this was added to Python in 3.3 - see - :pep:`397` for more details). Thus, double-clicking an installed script in a - Windows Explorer window should run the script with the correct interpreter - without there needing to be any reference to its virtual environment in - ``PATH``. +When a Python interpreter is running from a virtual environment, +:data:`sys.prefix` and :data:`sys.exec_prefix` +point to the directories of the virtual environment, +whereas :data:`sys.base_prefix` and :data:`sys.base_exec_prefix` +point to those of the base Python used to create the environment. +It is sufficient to check +``sys.prefix == sys.base_prefix`` to determine if the current interpreter is +running from a virtual environment. + +A virtual environment may be "activated" using a script in its binary directory +(``bin`` on POSIX; ``Scripts`` on Windows). 
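+For example, the prefix check described above can be written as a small
+helper (an illustrative sketch, not part of the :mod:`!venv` API)::
+
+   import sys
+
+   def in_virtual_environment() -> bool:
+       # Inside a virtual environment, sys.prefix points at the environment
+       # while sys.base_prefix points at the base Python installation.
+       return sys.prefix != sys.base_prefix
+
+Activation itself is performed by the script in the environment's binary
+directory mentioned above.
+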
+This will prepend that directory to your :envvar:`!PATH`, so that running
+:program:`!python` will invoke the environment's Python interpreter
+and you can run installed scripts without having to use their full path.
+The invocation of the activation script is platform-specific
+(:samp:`{<venv>}` must be replaced by the path to the directory
+containing the virtual environment):
+
++-------------+------------+--------------------------------------------------+
+| Platform    | Shell      | Command to activate virtual environment          |
++=============+============+==================================================+
+| POSIX       | bash/zsh   | :samp:`$ source {<venv>}/bin/activate`           |
+|             +------------+--------------------------------------------------+
+|             | fish       | :samp:`$ source {<venv>}/bin/activate.fish`      |
+|             +------------+--------------------------------------------------+
+|             | csh/tcsh   | :samp:`$ source {<venv>}/bin/activate.csh`       |
+|             +------------+--------------------------------------------------+
+|             | PowerShell | :samp:`$ {<venv>}/bin/Activate.ps1`              |
++-------------+------------+--------------------------------------------------+
+| Windows     | cmd.exe    | :samp:`C:\\> {<venv>}\\Scripts\\activate.bat`    |
+|             +------------+--------------------------------------------------+
+|             | PowerShell | :samp:`PS C:\\> {<venv>}\\Scripts\\Activate.ps1` |
++-------------+------------+--------------------------------------------------+
+
+.. versionadded:: 3.4
+   :program:`!fish` and :program:`!csh` activation scripts.
+
+.. versionadded:: 3.8
+   PowerShell activation scripts installed under POSIX for PowerShell Core
+   support.
+
+You don't specifically *need* to activate a virtual environment,
+as you can just specify the full path to that environment's
+Python interpreter when invoking Python.
+Furthermore, all scripts installed in the environment
+should be runnable without activating it.
+
+In order to achieve this, scripts installed into virtual environments have
+a "shebang" line which points to the environment's Python interpreter,
+i.e. :samp:`#!/{<path-to-venv>}/bin/python`.
+This means that the script will run with that interpreter regardless of the
+value of :envvar:`!PATH`. On Windows, "shebang" line processing is supported if
+you have the :ref:`launcher` installed. Thus, double-clicking an installed
+script in a Windows Explorer window should run it with the correct interpreter
+without the environment needing to be activated or on the :envvar:`!PATH`.
+
+When a virtual environment has been activated, the :envvar:`!VIRTUAL_ENV`
+environment variable is set to the path of the environment.
+Since explicitly activating a virtual environment is not required to use it,
+:envvar:`!VIRTUAL_ENV` cannot be relied upon to determine
+whether a virtual environment is being used.
 .. warning:: Because scripts installed in environments should not expect the environment to be activated, their shebang lines contain the absolute paths @@ -98,6 +123,11 @@ Creating virtual environments environment in its new location. Otherwise, software installed into the environment may not work as expected. +You can deactivate a virtual environment by typing ``deactivate`` in your shell. +The exact mechanism is platform-specific and is an internal implementation +detail (typically, a script or shell function will be used). + + .. _venv-api: API @@ -183,11 +213,56 @@ creation according to their needs, the :class:`EnvBuilder` class. .. method:: ensure_directories(env_dir) - Creates the environment directory and all necessary directories, and - returns a context object.
This is just a holder for attributes (such as - paths), for use by the other methods. The directories are allowed to - exist already, as long as either ``clear`` or ``upgrade`` were - specified to allow operating on an existing environment directory. + Creates the environment directory and all necessary subdirectories that + don't already exist, and returns a context object. This context object + is just a holder for attributes (such as paths) for use by the other + methods. If the :class:`EnvBuilder` is created with the arg + ``clear=True``, contents of the environment directory will be cleared + and then all necessary subdirectories will be recreated. + + The returned context object is a :class:`types.SimpleNamespace` with the + following attributes: + + * ``env_dir`` - The location of the virtual environment. Used for + ``__VENV_DIR__`` in activation scripts (see :meth:`install_scripts`). + + * ``env_name`` - The name of the virtual environment. Used for + ``__VENV_NAME__`` in activation scripts (see :meth:`install_scripts`). + + * ``prompt`` - The prompt to be used by the activation scripts. Used for + ``__VENV_PROMPT__`` in activation scripts (see :meth:`install_scripts`). + + * ``executable`` - The underlying Python executable used by the virtual + environment. This takes into account the case where a virtual environment + is created from another virtual environment. + + * ``inc_path`` - The include path for the virtual environment. + + * ``lib_path`` - The purelib path for the virtual environment. + + * ``bin_path`` - The script path for the virtual environment. + + * ``bin_name`` - The name of the script path relative to the virtual + environment location. Used for ``__VENV_BIN_NAME__`` in activation + scripts (see :meth:`install_scripts`). + + * ``env_exe`` - The name of the Python interpreter in the virtual + environment. Used for ``__VENV_PYTHON__`` in activation scripts + (see :meth:`install_scripts`). + + * ``env_exec_cmd`` - The name of the Python interpreter, taking into + account filesystem redirections. This can be used to run Python in + the virtual environment. + + + .. versionchanged:: 3.12 + The attribute ``lib_path`` was added to the context, and the context + object was documented. + + .. versionchanged:: 3.11 + The *venv* + :ref:`sysconfig installation scheme ` + is used to construct the paths of the created directories. .. method:: create_configuration(context) diff --git a/Doc/library/weakref.rst b/Doc/library/weakref.rst index 4b0945c0..9a8289a7 100644 --- a/Doc/library/weakref.rst +++ b/Doc/library/weakref.rst @@ -144,6 +144,9 @@ See :ref:`__slots__ documentation ` for details. prevent their use as dictionary keys. *callback* is the same as the parameter of the same name to the :func:`ref` function. + Accessing an attribute of the proxy object after the referent is + garbage collected raises :exc:`ReferenceError`. + .. versionchanged:: 3.8 Extended the operator support on proxy objects to include the matrix multiplication operators ``@`` and ``@=``. diff --git a/Doc/library/wsgiref.rst b/Doc/library/wsgiref.rst index 2e0a16e5..7be4be47 100644 --- a/Doc/library/wsgiref.rst +++ b/Doc/library/wsgiref.rst @@ -7,6 +7,8 @@ .. moduleauthor:: Phillip J. Eby .. sectionauthor:: Phillip J. 
Eby +**Source code:** :source:`Lib/wsgiref` + -------------- The Web Server Gateway Interface (WSGI) is a standard interface between web diff --git a/Doc/library/xml.dom.minidom.rst b/Doc/library/xml.dom.minidom.rst index 82e5d6ae..72a7a98c 100644 --- a/Doc/library/xml.dom.minidom.rst +++ b/Doc/library/xml.dom.minidom.rst @@ -148,8 +148,8 @@ module documentation. This section lists the differences between the API and Similarly, explicitly stating the *standalone* argument causes the standalone document declarations to be added to the prologue of the XML document. - If the value is set to `True`, `standalone="yes"` is added, - otherwise it is set to `"no"`. + If the value is set to ``True``, ``standalone="yes"`` is added, + otherwise it is set to ``"no"``. Not stating the argument will omit the declaration from the document. .. versionchanged:: 3.8 diff --git a/Doc/library/xmlrpc.client.rst b/Doc/library/xmlrpc.client.rst index f4a7a4cf..1486874e 100644 --- a/Doc/library/xmlrpc.client.rst +++ b/Doc/library/xmlrpc.client.rst @@ -58,7 +58,7 @@ between conformable Python objects and XML on the wire. may be passed to calls. The *headers* parameter is an optional sequence of HTTP headers to send with each request, expressed as a sequence of 2-tuples representing the header - name and value. (e.g. `[('Header-Name', 'value')]`). + name and value. (e.g. ``[('Header-Name', 'value')]``). The obsolete *use_datetime* flag is similar to *use_builtin_types* but it applies only to date/time values. diff --git a/Doc/library/xmlrpc.server.rst b/Doc/library/xmlrpc.server.rst index 7d561e23..7dbcc350 100644 --- a/Doc/library/xmlrpc.server.rst +++ b/Doc/library/xmlrpc.server.rst @@ -262,7 +262,7 @@ This ExampleService demo can be invoked from the command line:: The client that interacts with the above server is included in -`Lib/xmlrpc/client.py`:: +``Lib/xmlrpc/client.py``:: server = ServerProxy("http://localhost:8000") diff --git a/Doc/library/zoneinfo.rst b/Doc/library/zoneinfo.rst index 1b2ba2af..b0a68e12 100644 --- a/Doc/library/zoneinfo.rst +++ b/Doc/library/zoneinfo.rst @@ -9,6 +9,8 @@ .. moduleauthor:: Paul Ganssle .. sectionauthor:: Paul Ganssle +**Source code:** :source:`Lib/zoneinfo` + -------------- The :mod:`zoneinfo` module provides a concrete time zone implementation to diff --git a/Doc/license.rst b/Doc/license.rst index 00691b30..4caecdce 100644 --- a/Doc/license.rst +++ b/Doc/license.rst @@ -984,3 +984,31 @@ https://www.w3.org/TR/xml-c14n2-testcases/ and is distributed under the THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + + +Audioop +------- + +The audioop module uses the code base in g771.c file of the SoX project:: + + Programming the AdLib/Sound Blaster + FM Music Chips + Version 2.0 (24 Feb 1992) + Copyright (c) 1991, 1992 by Jeffrey S. Lee + jlee@smylex.uucp + Warranty and Copyright Policy + This document is provided on an "as-is" basis, and its author makes + no warranty or representation, express or implied, with respect to + its quality performance or fitness for a particular purpose. In no + event will the author of this document be liable for direct, indirect, + special, incidental, or consequential damages arising out of the use + or inability to use the information contained within. Use of this + document is at your own risk. 
+ This file may be used and copied freely so long as the applicable + copyright notices are retained, and no modifications are made to the + text of the document. No money shall be charged for its distribution + beyond reasonable shipping, handling and duplication costs, nor shall + proprietary changes be made to this document so that it cannot be + distributed freely. This document may not be included in published + material or commercial packages without the written consent of its + author. diff --git a/Doc/reference/compound_stmts.rst b/Doc/reference/compound_stmts.rst index 911c38f7..93f5682f 100644 --- a/Doc/reference/compound_stmts.rst +++ b/Doc/reference/compound_stmts.rst @@ -503,6 +503,7 @@ The :keyword:`!match` statement keyword: if keyword: as pair: match; case + single: as; match statement single: : (colon); compound statement .. versionadded:: 3.10 diff --git a/Doc/reference/datamodel.rst b/Doc/reference/datamodel.rst index e92b9a2e..eafb6ff1 100644 --- a/Doc/reference/datamodel.rst +++ b/Doc/reference/datamodel.rst @@ -1837,6 +1837,8 @@ Attribute lookup speed can be significantly improved as well. and *__weakref__* for each instance. +.. _datamodel-note-slots: + Notes on using *__slots__* """""""""""""""""""""""""" @@ -2750,7 +2752,7 @@ Customizing positional arguments in class pattern matching When using a class name in a pattern, positional arguments in the pattern are not allowed by default, i.e. ``case MyClass(x, y)`` is typically invalid without special -support in ``MyClass``. To be able to use that kind of patterns, the class needs to +support in ``MyClass``. To be able to use that kind of pattern, the class needs to define a *__match_args__* attribute. .. data:: object.__match_args__ diff --git a/Doc/reference/expressions.rst b/Doc/reference/expressions.rst index 31cdc5c1..60c34076 100644 --- a/Doc/reference/expressions.rst +++ b/Doc/reference/expressions.rst @@ -1544,7 +1544,7 @@ built-in types. true). * Mappings (instances of :class:`dict`) compare equal if and only if they have - equal `(key, value)` pairs. Equality comparison of the keys and values + equal ``(key, value)`` pairs. Equality comparison of the keys and values enforces reflexivity. Order comparisons (``<``, ``>``, ``<=``, and ``>=``) raise :exc:`TypeError`. diff --git a/Doc/reference/grammar.rst b/Doc/reference/grammar.rst index 59b45005..bc1db7b0 100644 --- a/Doc/reference/grammar.rst +++ b/Doc/reference/grammar.rst @@ -12,7 +12,7 @@ and `PEG `_. In particular, ``&`` followed by a symbol, token or parenthesized group indicates a positive lookahead (i.e., is required to match but not consumed), while ``!`` indicates a negative lookahead (i.e., is -required _not_ to match). We use the ``|`` separator to mean PEG's +required *not* to match). We use the ``|`` separator to mean PEG's "ordered choice" (written as ``/`` in traditional PEG grammars). See :pep:`617` for more details on the grammar's syntax. diff --git a/Doc/reference/import.rst b/Doc/reference/import.rst index 38379750..3f55a0b7 100644 --- a/Doc/reference/import.rst +++ b/Doc/reference/import.rst @@ -812,7 +812,7 @@ The path based finder iterates over every entry in the search path, and for each of these, looks for an appropriate :term:`path entry finder` (:class:`~importlib.abc.PathEntryFinder`) for the path entry. Because this can be an expensive operation (e.g. 
there may be -`stat()` call overheads for this search), the path based finder maintains +``stat()`` call overheads for this search), the path based finder maintains a cache mapping path entries to path entry finders. This cache is maintained in :data:`sys.path_importer_cache` (despite the name, this cache actually stores finder objects rather than being limited to :term:`importer` objects). diff --git a/Doc/reference/simple_stmts.rst b/Doc/reference/simple_stmts.rst index d5f1e045..12253303 100644 --- a/Doc/reference/simple_stmts.rst +++ b/Doc/reference/simple_stmts.rst @@ -330,7 +330,7 @@ statement, of a variable or attribute annotation and an optional assignment stat annotated_assignment_stmt: `augtarget` ":" `expression` : ["=" (`starred_expression` | `yield_expression`)] -The difference from normal :ref:`assignment` is that only single target is allowed. +The difference from normal :ref:`assignment` is that only a single target is allowed. For simple names as assignment targets, if in class or module scope, the annotations are evaluated and stored in a special class or module @@ -365,8 +365,8 @@ target, then the interpreter evaluates the target except for the last IDEs. .. versionchanged:: 3.8 - Now annotated assignments allow same expressions in the right hand side as - the regular assignments. Previously, some expressions (like un-parenthesized + Now annotated assignments allow the same expressions in the right hand side as + regular assignments. Previously, some expressions (like un-parenthesized tuple expressions) caused a syntax error. @@ -750,7 +750,7 @@ commas) the two steps are carried out separately for each clause, just as though the clauses had been separated out into individual import statements. -The details of the first step, finding and loading modules are described in +The details of the first step, finding and loading modules, are described in greater detail in the section on the :ref:`import system `, which also describes the various types of packages and modules that can be imported, as well as all the hooks that can be used to customize @@ -988,20 +988,12 @@ The :keyword:`!nonlocal` statement .. productionlist:: python-grammar nonlocal_stmt: "nonlocal" `identifier` ("," `identifier`)* -.. XXX add when implemented - : ["=" (`target_list` "=")+ starred_expression] - : | "nonlocal" identifier augop expression_list - The :keyword:`nonlocal` statement causes the listed identifiers to refer to previously bound variables in the nearest enclosing scope excluding globals. This is important because the default behavior for binding is to search the local namespace first. The statement allows encapsulated code to rebind variables outside of the local scope besides the global (module) scope. -.. XXX not implemented - The :keyword:`nonlocal` statement may prepend an assignment or augmented - assignment, but not an expression. 
- Names listed in a :keyword:`nonlocal` statement, unlike those listed in a :keyword:`global` statement, must refer to pre-existing bindings in an enclosing scope (the scope in which a new binding should be created cannot diff --git a/Doc/tools/extensions/pyspecific.py b/Doc/tools/extensions/pyspecific.py index 9abdde0d..0d8c6346 100644 --- a/Doc/tools/extensions/pyspecific.py +++ b/Doc/tools/extensions/pyspecific.py @@ -26,7 +26,7 @@ try: from sphinx.errors import NoUri except ImportError: from sphinx.environment import NoUri -from sphinx.locale import translators +from sphinx.locale import _ as sphinx_gettext from sphinx.util import status_iterator, logging from sphinx.util.nodes import split_explicit_title from sphinx.writers.text import TextWriter, TextTranslator @@ -109,7 +109,7 @@ class ImplementationDetail(Directive): def run(self): self.assert_has_content() pnode = nodes.compound(classes=['impl-detail']) - label = translators['sphinx'].gettext(self.label_text) + label = sphinx_gettext(self.label_text) content = self.content add_text = nodes.strong(label, label) self.state.nested_parse(content, self.content_offset, pnode) @@ -203,7 +203,7 @@ class AuditEvent(Directive): else: args = [] - label = translators['sphinx'].gettext(self._label[min(2, len(args))]) + label = sphinx_gettext(self._label[min(2, len(args))]) text = label.format(name="``{}``".format(name), args=", ".join("``{}``".format(a) for a in args if a)) @@ -382,7 +382,7 @@ class DeprecatedRemoved(Directive): else: label = self._removed_label - label = translators['sphinx'].gettext(label) + label = sphinx_gettext(label) text = label.format(deprecated=self.arguments[0], removed=self.arguments[1]) if len(self.arguments) == 3: inodes, messages = self.state.inline_text(self.arguments[2], diff --git a/Doc/tutorial/datastructures.rst b/Doc/tutorial/datastructures.rst index a39dc834..6ad0b0ae 100644 --- a/Doc/tutorial/datastructures.rst +++ b/Doc/tutorial/datastructures.rst @@ -122,7 +122,7 @@ An example that uses most of the list methods:: You might have noticed that methods like ``insert``, ``remove`` or ``sort`` that only modify the list have no return value printed -- they return the default -``None``. [1]_ This is a design principle for all mutable data structures in +``None``. [#]_ This is a design principle for all mutable data structures in Python. Another thing you might notice is that not all data can be sorted or @@ -731,5 +731,5 @@ interpreter will raise a :exc:`TypeError` exception. .. rubric:: Footnotes -.. [1] Other languages may return the mutated object, which allows method +.. [#] Other languages may return the mutated object, which allows method chaining, such as ``d->insert("a")->remove("b")->sort();``. diff --git a/Doc/tutorial/interpreter.rst b/Doc/tutorial/interpreter.rst index d2733a99..e804a9d0 100644 --- a/Doc/tutorial/interpreter.rst +++ b/Doc/tutorial/interpreter.rst @@ -52,7 +52,7 @@ A second way of starting the interpreter is ``python -c command [arg] ...``, which executes the statement(s) in *command*, analogous to the shell's :option:`-c` option. Since Python statements often contain spaces or other characters that are special to the shell, it is usually advised to quote -*command* in its entirety with single quotes. +*command* in its entirety. Some Python modules are also useful as scripts. 
These can be invoked using ``python -m module [arg] ...``, which executes the source file for *module* as diff --git a/Doc/using/configure.rst b/Doc/using/configure.rst index 13c33946..87228361 100644 --- a/Doc/using/configure.rst +++ b/Doc/using/configure.rst @@ -654,12 +654,12 @@ Compiler flags In particular, :envvar:`CFLAGS` should not contain: - * the compiler flag `-I` (for setting the search path for include files). - The `-I` flags are processed from left to right, and any flags in - :envvar:`CFLAGS` would take precedence over user- and package-supplied `-I` + * the compiler flag ``-I`` (for setting the search path for include files). + The ``-I`` flags are processed from left to right, and any flags in + :envvar:`CFLAGS` would take precedence over user- and package-supplied ``-I`` flags. - * hardening flags such as `-Werror` because distributions cannot control + * hardening flags such as ``-Werror`` because distributions cannot control whether packages installed by users conform to such heightened standards. @@ -777,9 +777,9 @@ Linker flags In particular, :envvar:`LDFLAGS` should not contain: - * the compiler flag `-L` (for setting the search path for libraries). - The `-L` flags are processed from left to right, and any flags in - :envvar:`LDFLAGS` would take precedence over user- and package-supplied `-L` + * the compiler flag ``-L`` (for setting the search path for libraries). + The ``-L`` flags are processed from left to right, and any flags in + :envvar:`LDFLAGS` would take precedence over user- and package-supplied ``-L`` flags. .. envvar:: CONFIGURE_LDFLAGS_NODIST diff --git a/Doc/using/unix.rst b/Doc/using/unix.rst index 061cfa5b..24c02c99 100644 --- a/Doc/using/unix.rst +++ b/Doc/using/unix.rst @@ -170,7 +170,7 @@ Custom OpenSSL $ popd 3. Build Python with custom OpenSSL - (see the configure `--with-openssl` and `--with-openssl-rpath` options) + (see the configure ``--with-openssl`` and ``--with-openssl-rpath`` options) .. code-block:: shell-session diff --git a/Doc/using/venv-create.inc b/Doc/using/venv-create.inc index b9785832..0422cd2e 100644 --- a/Doc/using/venv-create.inc +++ b/Doc/using/venv-create.inc @@ -16,8 +16,8 @@ re-used. .. deprecated:: 3.6 ``pyvenv`` was the recommended tool for creating virtual environments for - Python 3.3 and 3.4, and is `deprecated in Python 3.6 - `_. + Python 3.3 and 3.4, and is + :ref:`deprecated in Python 3.6 `. .. versionchanged:: 3.5 The use of ``venv`` is now recommended for creating virtual environments. @@ -105,45 +105,3 @@ Multiple paths can be given to ``venv``, in which case an identical virtual environment will be created, according to the given options, at each provided path. -Once a virtual environment has been created, it can be "activated" using a -script in the virtual environment's binary directory. 
The invocation of the -script is platform-specific (`` must be replaced by the path of the -directory containing the virtual environment): - -+-------------+-----------------+-----------------------------------------+ -| Platform | Shell | Command to activate virtual environment | -+=============+=================+=========================================+ -| POSIX | bash/zsh | $ source /bin/activate | -+-------------+-----------------+-----------------------------------------+ -| | fish | $ source /bin/activate.fish | -+-------------+-----------------+-----------------------------------------+ -| | csh/tcsh | $ source /bin/activate.csh | -+-------------+-----------------+-----------------------------------------+ -| | PowerShell Core | $ /bin/Activate.ps1 | -+-------------+-----------------+-----------------------------------------+ -| Windows | cmd.exe | C:\\> \\Scripts\\activate.bat | -+-------------+-----------------+-----------------------------------------+ -| | PowerShell | PS C:\\> \\Scripts\\Activate.ps1 | -+-------------+-----------------+-----------------------------------------+ - -When a virtual environment is active, the :envvar:`VIRTUAL_ENV` environment -variable is set to the path of the virtual environment. This can be used to -check if one is running inside a virtual environment. - -You don't specifically *need* to activate an environment; activation just -prepends the virtual environment's binary directory to your path, so that -"python" invokes the virtual environment's Python interpreter and you can run -installed scripts without having to use their full path. However, all scripts -installed in a virtual environment should be runnable without activating it, -and run with the virtual environment's Python automatically. - -You can deactivate a virtual environment by typing "deactivate" in your shell. -The exact mechanism is platform-specific and is an internal implementation -detail (typically a script or shell function will be used). - -.. versionadded:: 3.4 - ``fish`` and ``csh`` activation scripts. - -.. versionadded:: 3.8 - PowerShell activation scripts installed under POSIX for PowerShell Core - support. diff --git a/Doc/using/windows.rst b/Doc/using/windows.rst index 35e26eb2..9489609b 100644 --- a/Doc/using/windows.rst +++ b/Doc/using/windows.rst @@ -199,7 +199,7 @@ of available options is shown below. +---------------------------+--------------------------------------+--------------------------+ | Include_pip | Install bundled pip and setuptools | 1 | +---------------------------+--------------------------------------+--------------------------+ -| Include_symbols | Install debugging symbols (`*`.pdb) | 0 | +| Include_symbols | Install debugging symbols (``*.pdb``)| 0 | +---------------------------+--------------------------------------+--------------------------+ | Include_tcltk | Install Tcl/Tk support and IDLE | 1 | +---------------------------+--------------------------------------+--------------------------+ diff --git a/Doc/whatsnew/2.2.rst b/Doc/whatsnew/2.2.rst index 39997661..0c3bfda1 100644 --- a/Doc/whatsnew/2.2.rst +++ b/Doc/whatsnew/2.2.rst @@ -395,7 +395,7 @@ This section has just been a quick overview of the new features, giving enough of an explanation to start you programming, but many details have been simplified or ignored. Where should you go to get a more complete picture? 
-https://docs.python.org/dev/howto/descriptor.html is a lengthy tutorial introduction to +The :ref:`descriptorhowto` is a lengthy tutorial introduction to the descriptor features, written by Guido van Rossum. If my description has whetted your appetite, go read this tutorial next, because it goes into much more detail about the new features while still remaining quite easy to read. diff --git a/Doc/whatsnew/2.6.rst b/Doc/whatsnew/2.6.rst index 731ce6aa..34f2656f 100644 --- a/Doc/whatsnew/2.6.rst +++ b/Doc/whatsnew/2.6.rst @@ -717,13 +717,13 @@ This will produce the output:: PEP 3101: Advanced String Formatting ===================================================== -In Python 3.0, the `%` operator is supplemented by a more powerful string +In Python 3.0, the ``%`` operator is supplemented by a more powerful string formatting method, :meth:`format`. Support for the :meth:`str.format` method has been backported to Python 2.6. -In 2.6, both 8-bit and Unicode strings have a `.format()` method that +In 2.6, both 8-bit and Unicode strings have a ``.format()`` method that treats the string as a template and takes the arguments to be formatted. -The formatting template uses curly brackets (`{`, `}`) as special characters:: +The formatting template uses curly brackets (``{``, ``}``) as special characters:: >>> # Substitute positional argument 0 into the string. >>> "User ID: {0}".format("root") diff --git a/Doc/whatsnew/2.7.rst b/Doc/whatsnew/2.7.rst index fbfcc5db..08aa1102 100644 --- a/Doc/whatsnew/2.7.rst +++ b/Doc/whatsnew/2.7.rst @@ -2485,8 +2485,8 @@ In the standard library: * The ElementTree library, :mod:`xml.etree`, no longer escapes ampersands and angle brackets when outputting an XML processing - instruction (which looks like ``) - or comment (which looks like ``). + instruction (which looks like ````) + or comment (which looks like ````). (Patch by Neil Muller; :issue:`2746`.) * The :meth:`~StringIO.StringIO.readline` method of :class:`~StringIO.StringIO` objects now does diff --git a/Doc/whatsnew/3.10.rst b/Doc/whatsnew/3.10.rst index 67eaeffa..ab93491c 100644 --- a/Doc/whatsnew/3.10.rst +++ b/Doc/whatsnew/3.10.rst @@ -77,8 +77,9 @@ Interpreter improvements: New typing features: * :pep:`604`, Allow writing union types as X | Y -* :pep:`613`, Explicit Type Aliases * :pep:`612`, Parameter Specification Variables +* :pep:`613`, Explicit Type Aliases +* :pep:`647`, User-Defined Type Guards Important deprecations, removals or restrictions: @@ -1172,7 +1173,7 @@ and will be incorrect in some rare cases, including some ``_``-s in New in 3.10 maintenance releases. -Apply syntax highlighting to `.pyi` files. (Contributed by Alex +Apply syntax highlighting to ``.pyi`` files. (Contributed by Alex Waygood and Terry Jan Reedy in :issue:`45447`.) Include prompts when saving Shell with inputs and outputs. @@ -2147,8 +2148,7 @@ Porting to Python 3.10 * The ``PY_SSIZE_T_CLEAN`` macro must now be defined to use :c:func:`PyArg_ParseTuple` and :c:func:`Py_BuildValue` formats which use ``#``: ``es#``, ``et#``, ``s#``, ``u#``, ``y#``, ``z#``, ``U#`` and ``Z#``. - See :ref:`Parsing arguments and building values - ` and the :pep:`353`. + See :ref:`arg-parsing` and :pep:`353`. (Contributed by Victor Stinner in :issue:`40943`.) 
* Since :c:func:`Py_REFCNT()` is changed to the inline static function, @@ -2179,8 +2179,7 @@ Porting to Python 3.10 :c:func:`Py_GetProgramFullPath`, :c:func:`Py_GetPythonHome` and :c:func:`Py_GetProgramName` functions now return ``NULL`` if called before :c:func:`Py_Initialize` (before Python is initialized). Use the new - :ref:`Python Initialization Configuration API ` to get the - :ref:`Python Path Configuration. `. + :ref:`init-config` API to get the :ref:`init-path-config`. (Contributed by Victor Stinner in :issue:`42260`.) * :c:func:`PyList_SET_ITEM`, :c:func:`PyTuple_SET_ITEM` and @@ -2194,7 +2193,7 @@ Porting to Python 3.10 ``picklebufobject.h``, ``pyarena.h``, ``pyctype.h``, ``pydebug.h``, ``pyfpe.h``, and ``pytime.h`` have been moved to the ``Include/cpython`` directory. These files must not be included directly, as they are already - included in ``Python.h``: :ref:`Include Files `. If they have + included in ``Python.h``; see :ref:`api-includes`. If they have been included directly, consider including ``Python.h`` instead. (Contributed by Nicholas Sim in :issue:`35134`.) diff --git a/Doc/whatsnew/3.2.rst b/Doc/whatsnew/3.2.rst index 9b5bbd3c..a4a9779a 100644 --- a/Doc/whatsnew/3.2.rst +++ b/Doc/whatsnew/3.2.rst @@ -1745,7 +1745,7 @@ names. instead of module names for running specific tests (:issue:`10620`). The new test discovery can find tests within packages, locating any test importable from the top-level directory. The top-level directory can be specified with - the `-t` option, a pattern for matching files with ``-p``, and a directory to + the ``-t`` option, a pattern for matching files with ``-p``, and a directory to start discovery with ``-s``: .. code-block:: shell-session @@ -1857,7 +1857,7 @@ asyncore :class:`asyncore.dispatcher` now provides a :meth:`~asyncore.dispatcher.handle_accepted()` method -returning a `(sock, addr)` pair which is called when a connection has actually +returning a ``(sock, addr)`` pair which is called when a connection has actually been established with a new remote endpoint. This is supposed to be used as a replacement for old :meth:`~asyncore.dispatcher.handle_accept()` and avoids the user to call :meth:`~asyncore.dispatcher.accept()` directly. diff --git a/Doc/whatsnew/3.3.rst b/Doc/whatsnew/3.3.rst index fef1a8ac..96a63257 100644 --- a/Doc/whatsnew/3.3.rst +++ b/Doc/whatsnew/3.3.rst @@ -2389,10 +2389,10 @@ Porting Python code :attr:`sys.path_importer_cache` where it represents the use of implicit finders, but semantically it should not change anything. -* :class:`importlib.abc.Finder` no longer specifies a `find_module()` abstract +* :class:`importlib.abc.Finder` no longer specifies a ``find_module()`` abstract method that must be implemented. If you were relying on subclasses to implement that method, make sure to check for the method's existence first. - You will probably want to check for `find_loader()` first, though, in the + You will probably want to check for ``find_loader()`` first, though, in the case of working with :term:`path entry finders `. * :mod:`pkgutil` has been converted to use :mod:`importlib` internally. This diff --git a/Doc/whatsnew/3.5.rst b/Doc/whatsnew/3.5.rst index 625373d5..f9cceecb 100644 --- a/Doc/whatsnew/3.5.rst +++ b/Doc/whatsnew/3.5.rst @@ -2469,11 +2469,11 @@ Changes in the Python API ``opt-`` tag in ``.pyc`` file names. The :func:`importlib.util.cache_from_source` has gained an *optimization* parameter to help control the ``opt-`` tag. 
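  As a quick illustration (a sketch: the module name is made up, and the exact
  interpreter tag in the result depends on the running Python and platform)::

     >>> import importlib.util
     >>> importlib.util.cache_from_source('mymod.py', optimization='2')
     '__pycache__/mymod.cpython-35.opt-2.pyc'
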
Because of this, the - *debug_override* parameter of the function is now deprecated. `.pyo` files + *debug_override* parameter of the function is now deprecated. ``.pyo`` files are also no longer supported as a file argument to the Python interpreter and thus serve no purpose when distributed on their own (i.e. sourceless code distribution). Due to the fact that the magic number for bytecode has changed - in Python 3.5, all old `.pyo` files from previous versions of Python are + in Python 3.5, all old ``.pyo`` files from previous versions of Python are invalid regardless of this PEP. * The :mod:`socket` module now exports the :data:`~socket.CAN_RAW_FD_FRAMES` diff --git a/Doc/whatsnew/3.6.rst b/Doc/whatsnew/3.6.rst index d1a9aa7d..f8f45a31 100644 --- a/Doc/whatsnew/3.6.rst +++ b/Doc/whatsnew/3.6.rst @@ -960,8 +960,8 @@ contextlib The :class:`contextlib.AbstractContextManager` class has been added to provide an abstract base class for context managers. It provides a -sensible default implementation for `__enter__()` which returns -``self`` and leaves `__exit__()` an abstract method. A matching +sensible default implementation for ``__enter__()`` which returns +``self`` and leaves ``__exit__()`` an abstract method. A matching class has been added to the :mod:`typing` module as :class:`typing.ContextManager`. (Contributed by Brett Cannon in :issue:`25609`.) @@ -1388,7 +1388,7 @@ are treated as punctuation. site ---- -When specifying paths to add to :attr:`sys.path` in a `.pth` file, +When specifying paths to add to :attr:`sys.path` in a ``.pth`` file, you may now specify file paths on top of directories (e.g. zip files). (Contributed by Wolfgang Langner in :issue:`26587`). @@ -1422,7 +1422,7 @@ The socket module now supports the address family Victor Stinner.) New Linux constants ``TCP_USER_TIMEOUT`` and ``TCP_CONGESTION`` were added. -(Contributed by Omar Sandoval, issue:`26273`). +(Contributed by Omar Sandoval, :issue:`26273`). socketserver @@ -2052,6 +2052,8 @@ tkinter The :mod:`tkinter.tix` module is now deprecated. :mod:`tkinter` users should use :mod:`tkinter.ttk` instead. +.. _whatsnew36-venv: + venv ~~~~ diff --git a/Doc/whatsnew/3.7.rst b/Doc/whatsnew/3.7.rst index ece40698..f45eaf82 100644 --- a/Doc/whatsnew/3.7.rst +++ b/Doc/whatsnew/3.7.rst @@ -2497,7 +2497,7 @@ number of other issues). Some known details affected: * :c:func:`PySys_AddWarnOptionUnicode` is not currently usable by embedding applications due to the requirement to create a Unicode object prior to - calling `Py_Initialize`. Use :c:func:`PySys_AddWarnOption` instead. + calling ``Py_Initialize``. Use :c:func:`PySys_AddWarnOption` instead. * warnings filters added by an embedding application with :c:func:`PySys_AddWarnOption` should now more consistently take precedence diff --git a/Doc/whatsnew/3.8.rst b/Doc/whatsnew/3.8.rst index 7f85ff3f..4e2dbe3b 100644 --- a/Doc/whatsnew/3.8.rst +++ b/Doc/whatsnew/3.8.rst @@ -122,8 +122,8 @@ Positional-only parameters There is a new function parameter syntax ``/`` to indicate that some function parameters must be specified positionally and cannot be used as keyword arguments. This is the same notation shown by ``help()`` for C -functions annotated with Larry Hastings' `Argument Clinic -`_ tool. +functions annotated with Larry Hastings' +:ref:`Argument Clinic ` tool. 
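A minimal sketch of the ``/`` syntax (the signature below is illustrative,
using the same parameter names as the example discussed next)::

    def f(a, b, /, c, d, *, e, f):
        # a and b: positional-only; c and d: positional or keyword;
        # e and f: keyword-only.
        print(a, b, c, d, e, f)
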
In the following example, parameters *a* and *b* are positional-only, while *c* or *d* can be positional or keyword, and *e* or *f* are diff --git a/Doc/whatsnew/3.9.rst b/Doc/whatsnew/3.9.rst index 20d79def..34fd1c11 100644 --- a/Doc/whatsnew/3.9.rst +++ b/Doc/whatsnew/3.9.rst @@ -500,7 +500,7 @@ Reedy in :issue:`40468`.) Move the indent space setting from the Font tab to the new Windows tab. (Contributed by Mark Roseman and Terry Jan Reedy in :issue:`33962`.) -Apply syntax highlighting to `.pyi` files. (Contributed by Alex +Apply syntax highlighting to ``.pyi`` files. (Contributed by Alex Waygood and Terry Jan Reedy in :issue:`45447`.) imaplib diff --git a/Include/patchlevel.h b/Include/patchlevel.h index 273caa97..48a34d0e 100644 --- a/Include/patchlevel.h +++ b/Include/patchlevel.h @@ -18,12 +18,12 @@ /*--start constants--*/ #define PY_MAJOR_VERSION 3 #define PY_MINOR_VERSION 10 -#define PY_MICRO_VERSION 8 +#define PY_MICRO_VERSION 9 #define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL #define PY_RELEASE_SERIAL 0 /* Version as a string */ -#define PY_VERSION "3.10.8" +#define PY_VERSION "3.10.9" /*--end constants--*/ /* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2. diff --git a/Lib/_collections_abc.py b/Lib/_collections_abc.py index 40417dc1..72fd633c 100644 --- a/Lib/_collections_abc.py +++ b/Lib/_collections_abc.py @@ -441,6 +441,8 @@ class _CallableGenericAlias(GenericAlias): def __parameters__(self): params = [] for arg in self.__args__: + if isinstance(arg, type) and not isinstance(arg, GenericAlias): + continue # Looks like a genericalias if hasattr(arg, "__parameters__") and isinstance(arg.__parameters__, tuple): params.extend(arg.__parameters__) @@ -486,6 +488,9 @@ class _CallableGenericAlias(GenericAlias): subst = dict(zip(self.__parameters__, item)) new_args = [] for arg in self.__args__: + if isinstance(arg, type) and not isinstance(arg, GenericAlias): + new_args.append(arg) + continue if _is_typevarlike(arg): if _is_param_expr(arg): arg = subst[arg] diff --git a/Lib/argparse.py b/Lib/argparse.py index 9be18488..fb042a86 100644 --- a/Lib/argparse.py +++ b/Lib/argparse.py @@ -1962,7 +1962,11 @@ class ArgumentParser(_AttributeHolder, _ActionsContainer): # arguments, try to parse more single-dash options out # of the tail of the option string chars = self.prefix_chars - if arg_count == 0 and option_string[1] not in chars: + if ( + arg_count == 0 + and option_string[1] not in chars + and explicit_arg != '' + ): action_tuples.append((action, [], option_string)) char = option_string[0] option_string = char + explicit_arg[0] diff --git a/Lib/ast.py b/Lib/ast.py index 6f235c2f..4f5f9827 100644 --- a/Lib/ast.py +++ b/Lib/ast.py @@ -236,6 +236,12 @@ def increment_lineno(node, n=1): location in a file. """ for child in walk(node): + # TypeIgnore is a special case where lineno is not an attribute + # but rather a field of the node itself. 
+ if isinstance(child, TypeIgnore): + child.lineno = getattr(child, 'lineno', 0) + n + continue + if 'lineno' in child._attributes: child.lineno = getattr(child, 'lineno', 0) + n if ( diff --git a/Lib/asyncio/base_events.py b/Lib/asyncio/base_events.py index 23849852..9a05abf4 100644 --- a/Lib/asyncio/base_events.py +++ b/Lib/asyncio/base_events.py @@ -971,6 +971,8 @@ class BaseEventLoop(events.AbstractEventLoop): if sock is not None: sock.close() raise + finally: + exceptions = my_exceptions = None async def create_connection( self, protocol_factory, host=None, port=None, @@ -1063,17 +1065,20 @@ class BaseEventLoop(events.AbstractEventLoop): if sock is None: exceptions = [exc for sub in exceptions for exc in sub] - if len(exceptions) == 1: - raise exceptions[0] - else: - # If they all have the same str(), raise one. - model = str(exceptions[0]) - if all(str(exc) == model for exc in exceptions): + try: + if len(exceptions) == 1: raise exceptions[0] - # Raise a combined exception so the user can see all - # the various error messages. - raise OSError('Multiple exceptions: {}'.format( - ', '.join(str(exc) for exc in exceptions))) + else: + # If they all have the same str(), raise one. + model = str(exceptions[0]) + if all(str(exc) == model for exc in exceptions): + raise exceptions[0] + # Raise a combined exception so the user can see all + # the various error messages. + raise OSError('Multiple exceptions: {}'.format( + ', '.join(str(exc) for exc in exceptions))) + finally: + exceptions = None else: if sock is None: @@ -1862,6 +1867,8 @@ class BaseEventLoop(events.AbstractEventLoop): event_list = self._selector.select(timeout) self._process_events(event_list) + # Needed to break cycles when an exception occurs. + event_list = None # Handle 'later' callbacks that are ready. end_time = self.time() + self._clock_resolution diff --git a/Lib/asyncio/events.py b/Lib/asyncio/events.py index 5ab1acc4..faac5376 100644 --- a/Lib/asyncio/events.py +++ b/Lib/asyncio/events.py @@ -650,6 +650,21 @@ class BaseDefaultEventLoopPolicy(AbstractEventLoopPolicy): if (self._local._loop is None and not self._local._set_called and threading.current_thread() is threading.main_thread()): + stacklevel = 2 + try: + f = sys._getframe(1) + except AttributeError: + pass + else: + while f: + module = f.f_globals.get('__name__') + if not (module == 'asyncio' or module.startswith('asyncio.')): + break + f = f.f_back + stacklevel += 1 + import warnings + warnings.warn('There is no current event loop', + DeprecationWarning, stacklevel=stacklevel) self.set_event_loop(self.new_event_loop()) if self._local._loop is None: @@ -763,12 +778,13 @@ def get_event_loop(): def _get_event_loop(stacklevel=3): + # This internal method is going away in Python 3.12, left here only for + # backwards compatibility with 3.10.0 - 3.10.8 and 3.11.0. + # Similarly, this method's C equivalent in _asyncio is going away as well. + # See GH-99949 for more details. 
current_loop = _get_running_loop() if current_loop is not None: return current_loop - import warnings - warnings.warn('There is no current event loop', - DeprecationWarning, stacklevel=stacklevel) return get_event_loop_policy().get_event_loop() diff --git a/Lib/asyncio/proactor_events.py b/Lib/asyncio/proactor_events.py index 9657f968..0916d9eb 100644 --- a/Lib/asyncio/proactor_events.py +++ b/Lib/asyncio/proactor_events.py @@ -60,6 +60,7 @@ class _ProactorBasePipeTransport(transports._FlowControlMixin, self._pending_write = 0 self._conn_lost = 0 self._closing = False # Set when close() called. + self._called_connection_lost = False self._eof_written = False if self._server is not None: self._server._attach() @@ -136,7 +137,7 @@ class _ProactorBasePipeTransport(transports._FlowControlMixin, self._empty_waiter.set_result(None) else: self._empty_waiter.set_exception(exc) - if self._closing: + if self._closing and self._called_connection_lost: return self._closing = True self._conn_lost += 1 @@ -151,6 +152,8 @@ class _ProactorBasePipeTransport(transports._FlowControlMixin, self._loop.call_soon(self._call_connection_lost, exc) def _call_connection_lost(self, exc): + if self._called_connection_lost: + return try: self._protocol.connection_lost(exc) finally: @@ -166,6 +169,7 @@ class _ProactorBasePipeTransport(transports._FlowControlMixin, if server is not None: server._detach() self._server = None + self._called_connection_lost = True def get_write_buffer_size(self): size = self._pending_write diff --git a/Lib/asyncio/selector_events.py b/Lib/asyncio/selector_events.py index 572d4a8c..8282f280 100644 --- a/Lib/asyncio/selector_events.py +++ b/Lib/asyncio/selector_events.py @@ -497,7 +497,11 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop): fut = self.create_future() self._sock_connect(fut, sock, address) - return await fut + try: + return await fut + finally: + # Needed to break cycles when an exception occurs. + fut = None def _sock_connect(self, fut, sock, address): fd = sock.fileno() @@ -519,6 +523,8 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop): fut.set_exception(exc) else: fut.set_result(None) + finally: + fut = None def _sock_write_done(self, fd, fut, handle=None): if handle is None or not handle.cancelled(): @@ -542,6 +548,8 @@ class BaseSelectorEventLoop(base_events.BaseEventLoop): fut.set_exception(exc) else: fut.set_result(None) + finally: + fut = None async def sock_accept(self, sock): """Accept a connection. diff --git a/Lib/asyncio/unix_events.py b/Lib/asyncio/unix_events.py index 39b5e838..faaca305 100644 --- a/Lib/asyncio/unix_events.py +++ b/Lib/asyncio/unix_events.py @@ -789,12 +789,11 @@ class _UnixSubprocessTransport(base_subprocess.BaseSubprocessTransport): def _start(self, args, shell, stdin, stdout, stderr, bufsize, **kwargs): stdin_w = None - if stdin == subprocess.PIPE: - # Use a socket pair for stdin, since not all platforms + if stdin == subprocess.PIPE and sys.platform.startswith('aix'): + # Use a socket pair for stdin on AIX, since it does not # support selecting read events on the write end of a # socket (which we use in order to detect closing of the - # other end). Notably this is needed on AIX, and works - # just fine on other platforms. + # other end). 
stdin, stdin_w = socket.socketpair() try: self._proc = subprocess.Popen( diff --git a/Lib/asyncio/windows_events.py b/Lib/asyncio/windows_events.py index da81ab43..ed1cd19c 100644 --- a/Lib/asyncio/windows_events.py +++ b/Lib/asyncio/windows_events.py @@ -439,7 +439,11 @@ class IocpProactor: self._poll(timeout) tmp = self._results self._results = [] - return tmp + try: + return tmp + finally: + # Needed to break cycles when an exception occurs. + tmp = None def _result(self, value): fut = self._loop.create_future() @@ -821,6 +825,8 @@ class IocpProactor: else: f.set_result(value) self._results.append(f) + finally: + f = None # Remove unregistered futures for ov in self._unregistered: diff --git a/Lib/codecs.py b/Lib/codecs.py index e6ad6e3a..3b173b61 100644 --- a/Lib/codecs.py +++ b/Lib/codecs.py @@ -878,7 +878,8 @@ def open(filename, mode='r', encoding=None, errors='strict', buffering=-1): codecs. Output is also codec dependent and will usually be Unicode as well. - Underlying encoded files are always opened in binary mode. + If encoding is not None, then the + underlying encoded files are always opened in binary mode. The default file mode is 'r', meaning to open the file in read mode. encoding specifies the encoding which is to be used for the diff --git a/Lib/ctypes/test/test_struct_fields.py b/Lib/ctypes/test/test_struct_fields.py index ee8415f3..fefeaea1 100644 --- a/Lib/ctypes/test/test_struct_fields.py +++ b/Lib/ctypes/test/test_struct_fields.py @@ -54,6 +54,15 @@ class StructFieldsTestCase(unittest.TestCase): x.char = b'a\0b\0' self.assertEqual(bytes(x), b'a\x00###') + def test_gh99275(self): + class BrokenStructure(Structure): + def __init_subclass__(cls, **kwargs): + cls._fields_ = [] # This line will fail, `stgdict` is not ready + + with self.assertRaisesRegex(TypeError, + 'ctypes state is not initialized'): + class Subclass(BrokenStructure): ... + # __set__ and __get__ should raise a TypeError in case their self # argument is not a ctype instance. def test___set__(self): diff --git a/Lib/ctypes/test/test_structures.py b/Lib/ctypes/test/test_structures.py index 97ad2b8e..f95d5a99 100644 --- a/Lib/ctypes/test/test_structures.py +++ b/Lib/ctypes/test/test_structures.py @@ -332,13 +332,13 @@ class StructureTestCase(unittest.TestCase): cls, msg = self.get_except(Person, b"Someone", (1, 2)) self.assertEqual(cls, RuntimeError) self.assertEqual(msg, - "(Phone) : " + "(Phone) TypeError: " "expected bytes, int found") cls, msg = self.get_except(Person, b"Someone", (b"a", b"b", b"c")) self.assertEqual(cls, RuntimeError) self.assertEqual(msg, - "(Phone) : too many initializers") + "(Phone) TypeError: too many initializers") def test_huge_field_name(self): # issue12881: segfault with large structure field names diff --git a/Lib/dataclasses.py b/Lib/dataclasses.py index 105a95b9..e1687a11 100644 --- a/Lib/dataclasses.py +++ b/Lib/dataclasses.py @@ -411,13 +411,11 @@ def _recursive_repr(user_function): def _create_fn(name, args, body, *, globals=None, locals=None, return_type=MISSING): - # Note that we mutate locals when exec() is called. Caller - # beware! The only callers are internal to this module, so no + # Note that we may mutate locals. Callers beware! + # The only callers are internal to this module, so no # worries about external callers. 
if locals is None: locals = {} - if 'BUILTINS' not in locals: - locals['BUILTINS'] = builtins return_annotation = '' if return_type is not MISSING: locals['_return_type'] = return_type @@ -443,7 +441,7 @@ def _field_assign(frozen, name, value, self_name): # self_name is what "self" is called in this function: don't # hard-code "self", since that might be a field name. if frozen: - return f'BUILTINS.object.__setattr__({self_name},{name!r},{value})' + return f'__dataclass_builtins_object__.__setattr__({self_name},{name!r},{value})' return f'{self_name}.{name}={value}' @@ -550,6 +548,7 @@ def _init_fn(fields, std_fields, kw_only_fields, frozen, has_post_init, locals.update({ 'MISSING': MISSING, '_HAS_DEFAULT_FACTORY': _HAS_DEFAULT_FACTORY, + '__dataclass_builtins_object__': object, }) body_lines = [] diff --git a/Lib/encodings/idna.py b/Lib/encodings/idna.py index ea405851..bf98f513 100644 --- a/Lib/encodings/idna.py +++ b/Lib/encodings/idna.py @@ -39,23 +39,21 @@ def nameprep(label): # Check bidi RandAL = [stringprep.in_table_d1(x) for x in label] - for c in RandAL: - if c: - # There is a RandAL char in the string. Must perform further - # tests: - # 1) The characters in section 5.8 MUST be prohibited. - # This is table C.8, which was already checked - # 2) If a string contains any RandALCat character, the string - # MUST NOT contain any LCat character. - if any(stringprep.in_table_d2(x) for x in label): - raise UnicodeError("Violation of BIDI requirement 2") - - # 3) If a string contains any RandALCat character, a - # RandALCat character MUST be the first character of the - # string, and a RandALCat character MUST be the last - # character of the string. - if not RandAL[0] or not RandAL[-1]: - raise UnicodeError("Violation of BIDI requirement 3") + if any(RandAL): + # There is a RandAL char in the string. Must perform further + # tests: + # 1) The characters in section 5.8 MUST be prohibited. + # This is table C.8, which was already checked + # 2) If a string contains any RandALCat character, the string + # MUST NOT contain any LCat character. + if any(stringprep.in_table_d2(x) for x in label): + raise UnicodeError("Violation of BIDI requirement 2") + # 3) If a string contains any RandALCat character, a + # RandALCat character MUST be the first character of the + # string, and a RandALCat character MUST be the last + # character of the string. 
+ if not RandAL[0] or not RandAL[-1]: + raise UnicodeError("Violation of BIDI requirement 3") return label diff --git a/Lib/ensurepip/__init__.py b/Lib/ensurepip/__init__.py index 1826cdcf..71d3dfd8 100644 --- a/Lib/ensurepip/__init__.py +++ b/Lib/ensurepip/__init__.py @@ -11,8 +11,8 @@ from importlib import resources __all__ = ["version", "bootstrap"] _PACKAGE_NAMES = ('setuptools', 'pip') -_SETUPTOOLS_VERSION = "63.2.0" -_PIP_VERSION = "22.2.2" +_SETUPTOOLS_VERSION = "65.5.0" +_PIP_VERSION = "22.3.1" _PROJECTS = [ ("setuptools", _SETUPTOOLS_VERSION, "py3"), ("pip", _PIP_VERSION, "py3"), diff --git a/Lib/ensurepip/_bundled/pip-22.2.2-py3-none-any.whl b/Lib/ensurepip/_bundled/pip-22.2.2-py3-none-any.whl deleted file mode 100644 index 03099718..00000000 Binary files a/Lib/ensurepip/_bundled/pip-22.2.2-py3-none-any.whl and /dev/null differ diff --git a/Lib/ensurepip/_bundled/pip-22.3.1-py3-none-any.whl b/Lib/ensurepip/_bundled/pip-22.3.1-py3-none-any.whl new file mode 100644 index 00000000..c5b7753e Binary files /dev/null and b/Lib/ensurepip/_bundled/pip-22.3.1-py3-none-any.whl differ diff --git a/Lib/ensurepip/_bundled/setuptools-63.2.0-py3-none-any.whl b/Lib/ensurepip/_bundled/setuptools-63.2.0-py3-none-any.whl deleted file mode 100644 index e3b5446e..00000000 Binary files a/Lib/ensurepip/_bundled/setuptools-63.2.0-py3-none-any.whl and /dev/null differ diff --git a/Lib/ensurepip/_bundled/setuptools-65.5.0-py3-none-any.whl b/Lib/ensurepip/_bundled/setuptools-65.5.0-py3-none-any.whl new file mode 100644 index 00000000..123a13e2 Binary files /dev/null and b/Lib/ensurepip/_bundled/setuptools-65.5.0-py3-none-any.whl differ diff --git a/Lib/http/server.py b/Lib/http/server.py index e8517a7e..03dbaa51 100644 --- a/Lib/http/server.py +++ b/Lib/http/server.py @@ -93,6 +93,7 @@ import email.utils import html import http.client import io +import itertools import mimetypes import os import posixpath @@ -563,6 +564,11 @@ class BaseHTTPRequestHandler(socketserver.StreamRequestHandler): self.log_message(format, *args) + # https://en.wikipedia.org/wiki/List_of_Unicode_characters#Control_codes + _control_char_table = str.maketrans( + {c: fr'\x{c:02x}' for c in itertools.chain(range(0x20), range(0x7f,0xa0))}) + _control_char_table[ord('\\')] = r'\\' + def log_message(self, format, *args): """Log an arbitrary message. @@ -578,12 +584,16 @@ class BaseHTTPRequestHandler(socketserver.StreamRequestHandler): The client ip and current date/time are prefixed to every message. + Unicode control characters are replaced with escaped hex + before writing the output to stderr. + """ + message = format % args sys.stderr.write("%s - - [%s] %s\n" % (self.address_string(), self.log_date_time_string(), - format%args)) + message.translate(self._control_char_table))) def version_string(self): """Return the server software version string.""" diff --git a/Lib/idlelib/NEWS.txt b/Lib/idlelib/NEWS.txt index 277fd942..521b1f12 100644 --- a/Lib/idlelib/NEWS.txt +++ b/Lib/idlelib/NEWS.txt @@ -4,6 +4,11 @@ Released 2023-04-03? ========================= +gh-97527: Fix a bug in the previous bugfix that caused IDLE to not +start when run with 3.10.8, 3.12.0a1, and at least Microsoft Python +3.10.2288.0 installed without the Lib/test package. 3.11.0 was never +affected. + gh-65802: Document handling of extensions in Save As dialogs. gh-95191: Include prompts when saving Shell (interactive input/output). 
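Editor's note: earlier in this patch, http.server's BaseHTTPRequestHandler gains a _control_char_table so that control characters in logged request lines are written as hex escapes instead of raw bytes. The short standalone sketch below is not part of the patch; it only illustrates the str.maketrans technique the new code relies on, with illustrative variable names.

import itertools

# Map the C0 controls (0x00-0x1f) plus DEL and the C1 controls (0x7f-0x9f)
# to \xNN escapes, and escape the backslash itself so output stays unambiguous.
control_char_table = str.maketrans(
    {c: fr'\x{c:02x}' for c in itertools.chain(range(0x20), range(0x7f, 0xa0))})
control_char_table[ord('\\')] = r'\\'

request_line = 'GET /\r\nFake-Header: injected'
print(request_line.translate(control_char_table))
# prints: GET /\x0d\x0aFake-Header: injected

This mirrors how log_message now applies message.translate(self._control_char_table) before writing to stderr, so a malicious request line cannot inject raw newlines or terminal escapes into the log.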
diff --git a/Lib/idlelib/macosx.py b/Lib/idlelib/macosx.py index 1085d689..f53bd589 100644 --- a/Lib/idlelib/macosx.py +++ b/Lib/idlelib/macosx.py @@ -4,7 +4,6 @@ A number of functions that enhance IDLE on macOS. from os.path import expanduser import plistlib from sys import platform # Used in _init_tk_type, changed by test. -from test.support import requires, ResourceDenied import tkinter @@ -16,27 +15,38 @@ _tk_type = None def _init_tk_type(): """ Initialize _tk_type for isXyzTk functions. + + This function is only called once, when _tk_type is still None. """ global _tk_type if platform == 'darwin': - try: - requires('gui') - except ResourceDenied: # Possible when testing. - _tk_type = "cocoa" # Newest and most common. - else: - root = tkinter.Tk() - ws = root.tk.call('tk', 'windowingsystem') - if 'x11' in ws: - _tk_type = "xquartz" - elif 'aqua' not in ws: - _tk_type = "other" - elif 'AppKit' in root.tk.call('winfo', 'server', '.'): + + # When running IDLE, GUI is present, test/* may not be. + # When running tests, test/* is present, GUI may not be. + # If not, guess most common. Does not matter for testing. + from idlelib.__init__ import testing + if testing: + from test.support import requires, ResourceDenied + try: + requires('gui') + except ResourceDenied: _tk_type = "cocoa" - else: - _tk_type = "carbon" - root.destroy() + return + + root = tkinter.Tk() + ws = root.tk.call('tk', 'windowingsystem') + if 'x11' in ws: + _tk_type = "xquartz" + elif 'aqua' not in ws: + _tk_type = "other" + elif 'AppKit' in root.tk.call('winfo', 'server', '.'): + _tk_type = "cocoa" + else: + _tk_type = "carbon" + root.destroy() else: _tk_type = "other" + return def isAquaTk(): """ diff --git a/Lib/importlib/metadata/__init__.py b/Lib/importlib/metadata/__init__.py index b3063cd9..682067d3 100644 --- a/Lib/importlib/metadata/__init__.py +++ b/Lib/importlib/metadata/__init__.py @@ -17,7 +17,7 @@ import collections from . import _adapters, _meta from ._meta import PackageMetadata from ._collections import FreezableDefaultDict, Pair -from ._functools import method_cache +from ._functools import method_cache, pass_none from ._itertools import unique_everseen from ._meta import PackageMetadata, SimplePath @@ -938,13 +938,25 @@ class PathDistribution(Distribution): normalized name from the file system path. 
""" stem = os.path.basename(str(self._path)) - return self._name_from_stem(stem) or super()._normalized_name + return ( + pass_none(Prepared.normalize)(self._name_from_stem(stem)) + or super()._normalized_name + ) - def _name_from_stem(self, stem): - name, ext = os.path.splitext(stem) + @staticmethod + def _name_from_stem(stem): + """ + >>> PathDistribution._name_from_stem('foo-3.0.egg-info') + 'foo' + >>> PathDistribution._name_from_stem('CherryPy-3.0.dist-info') + 'CherryPy' + >>> PathDistribution._name_from_stem('face.egg-info') + 'face' + """ + filename, ext = os.path.splitext(stem) if ext not in ('.dist-info', '.egg-info'): return - name, sep, rest = stem.partition('-') + name, sep, rest = filename.partition('-') return name diff --git a/Lib/importlib/metadata/_functools.py b/Lib/importlib/metadata/_functools.py index 73f50d00..71f66bd0 100644 --- a/Lib/importlib/metadata/_functools.py +++ b/Lib/importlib/metadata/_functools.py @@ -83,3 +83,22 @@ def method_cache(method, cache_wrapper=None): wrapper.cache_clear = lambda: None return wrapper + + +# From jaraco.functools 3.3 +def pass_none(func): + """ + Wrap func so it's not called if its first param is None + + >>> print_text = pass_none(print) + >>> print_text('text') + text + >>> print_text(None) + """ + + @functools.wraps(func) + def wrapper(param, *args, **kwargs): + if param is not None: + return func(param, *args, **kwargs) + + return wrapper diff --git a/Lib/inspect.py b/Lib/inspect.py index 60740c63..ad16f5c4 100644 --- a/Lib/inspect.py +++ b/Lib/inspect.py @@ -2406,7 +2406,10 @@ def _signature_from_callable(obj, *, # Was this function wrapped by a decorator? if follow_wrapper_chains: - obj = unwrap(obj, stop=(lambda f: hasattr(f, "__signature__"))) + # Unwrap until we find an explicit signature or a MethodType (which will be + # handled explicitly below). + obj = unwrap(obj, stop=(lambda f: hasattr(f, "__signature__") + or isinstance(f, types.MethodType))) if isinstance(obj, types.MethodType): # If the unwrapped object is a *method*, we might want to # skip its first parameter (self). diff --git a/Lib/logging/__init__.py b/Lib/logging/__init__.py index 09810bdf..d1d43338 100644 --- a/Lib/logging/__init__.py +++ b/Lib/logging/__init__.py @@ -487,7 +487,7 @@ class StringTemplateStyle(PercentStyle): def usesTime(self): fmt = self._fmt - return fmt.find('$asctime') >= 0 or fmt.find(self.asctime_format) >= 0 + return fmt.find('$asctime') >= 0 or fmt.find(self.asctime_search) >= 0 def validate(self): pattern = Template.pattern diff --git a/Lib/multiprocessing/connection.py b/Lib/multiprocessing/connection.py index 510e4b5a..8e2facf9 100644 --- a/Lib/multiprocessing/connection.py +++ b/Lib/multiprocessing/connection.py @@ -73,11 +73,6 @@ def arbitrary_address(family): if family == 'AF_INET': return ('localhost', 0) elif family == 'AF_UNIX': - # Prefer abstract sockets if possible to avoid problems with the address - # size. When coding portable applications, some implementations have - # sun_path as short as 92 bytes in the sockaddr_un struct. 
- if util.abstract_sockets_supported: - return f"\0listener-{os.getpid()}-{next(_mmap_counter)}" return tempfile.mktemp(prefix='listener-', dir=util.get_temp_dir()) elif family == 'AF_PIPE': return tempfile.mktemp(prefix=r'\\.\pipe\pyc-%d-%d-' % diff --git a/Lib/multiprocessing/shared_memory.py b/Lib/multiprocessing/shared_memory.py index 881f2001..9a1e5aa1 100644 --- a/Lib/multiprocessing/shared_memory.py +++ b/Lib/multiprocessing/shared_memory.py @@ -173,7 +173,10 @@ class SharedMemory: ) finally: _winapi.CloseHandle(h_map) - size = _winapi.VirtualQuerySize(p_buf) + try: + size = _winapi.VirtualQuerySize(p_buf) + finally: + _winapi.UnmapViewOfFile(p_buf) self._mmap = mmap.mmap(-1, size, tagname=name) self._size = size diff --git a/Lib/pdb.py b/Lib/pdb.py index 7ab50b48..7401a675 100755 --- a/Lib/pdb.py +++ b/Lib/pdb.py @@ -1248,6 +1248,12 @@ class Pdb(bdb.Bdb, cmd.Cmd): if last is None: last = first + 10 filename = self.curframe.f_code.co_filename + # gh-93696: stdlib frozen modules provide a useful __file__ + # this workaround can be removed with the closure of gh-89815 + if filename.startswith(">> 'Monty Python'.removesuffix(' Python')\n" " 'Monty'\n" '\n' - 'str.split(sep=None, maxsplit=- 1)\n' + 'str.split(sep=None, maxsplit=-1)\n' '\n' ' Return a list of the words in the string, using *sep* ' 'as the\n' @@ -13733,17 +13739,11 @@ topics = {'assert': 'The "assert" statement\n' 'dictionaries or\n' 'other mutable types (that are compared by value rather than ' 'by object\n' - 'identity) may not be used as keys. Numeric types used for ' - 'keys obey\n' - 'the normal rules for numeric comparison: if two numbers ' - 'compare equal\n' - '(such as "1" and "1.0") then they can be used ' - 'interchangeably to index\n' - 'the same dictionary entry. (Note however, that since ' - 'computers store\n' - 'floating-point numbers as approximations it is usually ' - 'unwise to use\n' - 'them as dictionary keys.)\n' + 'identity) may not be used as keys. Values that compare equal ' + '(such as\n' + '"1", "1.0", and "True") can be used interchangeably to index ' + 'the same\n' + 'dictionary entry.\n' '\n' 'class dict(**kwargs)\n' 'class dict(mapping, **kwargs)\n' diff --git a/Lib/shutil.py b/Lib/shutil.py index 0d278801..b7bffa3e 100644 --- a/Lib/shutil.py +++ b/Lib/shutil.py @@ -487,12 +487,13 @@ def _copytree(entries, src, dst, symlinks, ignore, copy_function, # otherwise let the copy occur. 
copy2 will raise an error if srcentry.is_dir(): copytree(srcobj, dstname, symlinks, ignore, - copy_function, dirs_exist_ok=dirs_exist_ok) + copy_function, ignore_dangling_symlinks, + dirs_exist_ok) else: copy_function(srcobj, dstname) elif srcentry.is_dir(): copytree(srcobj, dstname, symlinks, ignore, copy_function, - dirs_exist_ok=dirs_exist_ok) + ignore_dangling_symlinks, dirs_exist_ok) else: # Will raise a SpecialFileError for unsupported file types copy_function(srcobj, dstname) diff --git a/Lib/statistics.py b/Lib/statistics.py index f6624538..52f17851 100644 --- a/Lib/statistics.py +++ b/Lib/statistics.py @@ -1265,3 +1265,9 @@ class NormalDist: def __repr__(self): return f'{type(self).__name__}(mu={self._mu!r}, sigma={self._sigma!r})' + + def __getstate__(self): + return self._mu, self._sigma + + def __setstate__(self, state): + self._mu, self._sigma = state diff --git a/Lib/test/_test_multiprocessing.py b/Lib/test/_test_multiprocessing.py index be174aae..57eada63 100644 --- a/Lib/test/_test_multiprocessing.py +++ b/Lib/test/_test_multiprocessing.py @@ -5982,5 +5982,5 @@ class SemLockTests(unittest.TestCase): class SemLock(_multiprocessing.SemLock): pass name = f'test_semlock_subclass-{os.getpid()}' - s = SemLock(1, 0, 10, name, 0) + s = SemLock(1, 0, 10, name, False) _multiprocessing.sem_unlink(name) diff --git a/Lib/test/audit-tests.py b/Lib/test/audit-tests.py index b781b994..481aedd6 100644 --- a/Lib/test/audit-tests.py +++ b/Lib/test/audit-tests.py @@ -429,6 +429,17 @@ def test_syslog(): syslog.closelog() +def test_not_in_gc(): + import gc + + hook = lambda *a: None + sys.addaudithook(hook) + + for o in gc.get_objects(): + if isinstance(o, list): + assert hook not in o + + if __name__ == "__main__": from test.support import suppress_msvcrt_asserts diff --git a/Lib/test/support/__init__.py b/Lib/test/support/__init__.py index c9a80c2a..b7cf1e28 100644 --- a/Lib/test/support/__init__.py +++ b/Lib/test/support/__init__.py @@ -1986,7 +1986,7 @@ def wait_process(pid, *, exitcode, timeout=None): Raise an AssertionError if the process exit code is not equal to exitcode. - If the process runs longer than timeout seconds (SHORT_TIMEOUT by default), + If the process runs longer than timeout seconds (LONG_TIMEOUT by default), kill the process (if signal.SIGKILL is available) and raise an AssertionError. The timeout feature is not available on Windows. 
""" @@ -1994,7 +1994,7 @@ def wait_process(pid, *, exitcode, timeout=None): import signal if timeout is None: - timeout = SHORT_TIMEOUT + timeout = LONG_TIMEOUT t0 = time.monotonic() sleep = 0.001 max_sleep = 0.1 @@ -2005,7 +2005,7 @@ def wait_process(pid, *, exitcode, timeout=None): # process is still running dt = time.monotonic() - t0 - if dt > SHORT_TIMEOUT: + if dt > timeout: try: os.kill(pid, signal.SIGKILL) os.waitpid(pid, 0) diff --git a/Lib/test/test_argparse.py b/Lib/test/test_argparse.py index 0b237ab5..4d43e331 100644 --- a/Lib/test/test_argparse.py +++ b/Lib/test/test_argparse.py @@ -295,7 +295,7 @@ class TestOptionalsSingleDashCombined(ParserTestCase): Sig('-z'), ] failures = ['a', '--foo', '-xa', '-x --foo', '-x -z', '-z -x', - '-yx', '-yz a', '-yyyx', '-yyyza', '-xyza'] + '-yx', '-yz a', '-yyyx', '-yyyza', '-xyza', '-x='] successes = [ ('', NS(x=False, yyy=None, z=None)), ('-x', NS(x=True, yyy=None, z=None)), diff --git a/Lib/test/test_ast.py b/Lib/test/test_ast.py index 57b97350..55a3e6b8 100644 --- a/Lib/test/test_ast.py +++ b/Lib/test/test_ast.py @@ -914,6 +914,18 @@ Module( self.assertEqual(ast.increment_lineno(src).lineno, 2) self.assertIsNone(ast.increment_lineno(src).end_lineno) + def test_increment_lineno_on_module(self): + src = ast.parse(dedent("""\ + a = 1 + b = 2 # type: ignore + c = 3 + d = 4 # type: ignore@tag + """), type_comments=True) + ast.increment_lineno(src, n=5) + self.assertEqual(src.type_ignores[0].lineno, 7) + self.assertEqual(src.type_ignores[1].lineno, 9) + self.assertEqual(src.type_ignores[1].tag, '@tag') + def test_iter_fields(self): node = ast.parse('foo()', mode='eval') d = dict(ast.iter_fields(node.body)) diff --git a/Lib/test/test_asyncio/test_base_events.py b/Lib/test/test_asyncio/test_base_events.py index d77bf95a..7f7e371d 100644 --- a/Lib/test/test_asyncio/test_base_events.py +++ b/Lib/test/test_asyncio/test_base_events.py @@ -752,7 +752,7 @@ class BaseEventLoopTests(test_utils.TestCase): def test_env_var_debug(self): code = '\n'.join(( 'import asyncio', - 'loop = asyncio.get_event_loop()', + 'loop = asyncio.new_event_loop()', 'print(loop.get_debug())')) # Test with -E to not fail if the unit test was run with diff --git a/Lib/test/test_asyncio/test_events.py b/Lib/test/test_asyncio/test_events.py index bed7b5d3..92c69de7 100644 --- a/Lib/test/test_asyncio/test_events.py +++ b/Lib/test/test_asyncio/test_events.py @@ -2561,8 +2561,9 @@ class PolicyTests(unittest.TestCase): def test_get_event_loop(self): policy = asyncio.DefaultEventLoopPolicy() self.assertIsNone(policy._local._loop) - - loop = policy.get_event_loop() + with self.assertWarns(DeprecationWarning) as cm: + loop = policy.get_event_loop() + self.assertEqual(cm.filename, __file__) self.assertIsInstance(loop, asyncio.AbstractEventLoop) self.assertIs(policy._local._loop, loop) @@ -2576,7 +2577,10 @@ class PolicyTests(unittest.TestCase): policy, "set_event_loop", wraps=policy.set_event_loop) as m_set_event_loop: - loop = policy.get_event_loop() + with self.assertWarns(DeprecationWarning) as cm: + loop = policy.get_event_loop() + self.addCleanup(loop.close) + self.assertEqual(cm.filename, __file__) # policy._local._loop must be set through .set_event_loop() # (the unix DefaultEventLoopPolicy needs this call to attach @@ -2610,7 +2614,8 @@ class PolicyTests(unittest.TestCase): def test_set_event_loop(self): policy = asyncio.DefaultEventLoopPolicy() - old_loop = policy.get_event_loop() + old_loop = policy.new_event_loop() + policy.set_event_loop(old_loop) 
self.assertRaises(AssertionError, policy.set_event_loop, object()) @@ -2723,15 +2728,11 @@ class GetEventLoopTestsMixin: asyncio.set_event_loop_policy(Policy()) loop = asyncio.new_event_loop() - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaises(TestError): - asyncio.get_event_loop() - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaises(TestError): + asyncio.get_event_loop() asyncio.set_event_loop(None) - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaises(TestError): - asyncio.get_event_loop() - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaises(TestError): + asyncio.get_event_loop() with self.assertRaisesRegex(RuntimeError, 'no running'): asyncio.get_running_loop() @@ -2745,16 +2746,11 @@ class GetEventLoopTestsMixin: loop.run_until_complete(func()) asyncio.set_event_loop(loop) - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaises(TestError): - asyncio.get_event_loop() - self.assertEqual(cm.warnings[0].filename, __file__) - + with self.assertRaises(TestError): + asyncio.get_event_loop() asyncio.set_event_loop(None) - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaises(TestError): - asyncio.get_event_loop() - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaises(TestError): + asyncio.get_event_loop() finally: asyncio.set_event_loop_policy(old_policy) @@ -2778,10 +2774,8 @@ class GetEventLoopTestsMixin: self.addCleanup(loop2.close) self.assertEqual(cm.warnings[0].filename, __file__) asyncio.set_event_loop(None) - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaisesRegex(RuntimeError, 'no current'): - asyncio.get_event_loop() - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current'): + asyncio.get_event_loop() with self.assertRaisesRegex(RuntimeError, 'no running'): asyncio.get_running_loop() @@ -2795,15 +2789,11 @@ class GetEventLoopTestsMixin: loop.run_until_complete(func()) asyncio.set_event_loop(loop) - with self.assertWarns(DeprecationWarning) as cm: - self.assertIs(asyncio.get_event_loop(), loop) - self.assertEqual(cm.warnings[0].filename, __file__) + self.assertIs(asyncio.get_event_loop(), loop) asyncio.set_event_loop(None) - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaisesRegex(RuntimeError, 'no current'): - asyncio.get_event_loop() - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current'): + asyncio.get_event_loop() finally: asyncio.set_event_loop_policy(old_policy) diff --git a/Lib/test/test_asyncio/test_futures.py b/Lib/test/test_asyncio/test_futures.py index 01267426..0b47aa28 100644 --- a/Lib/test/test_asyncio/test_futures.py +++ b/Lib/test/test_asyncio/test_futures.py @@ -145,10 +145,8 @@ class BaseFutureTests: self.assertTrue(f.cancelled()) def test_constructor_without_loop(self): - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'): - self._new_future() - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current event loop'): + self._new_future() def test_constructor_use_running_loop(self): async def test(): @@ -158,12 +156,10 @@ class BaseFutureTests: self.assertIs(f.get_loop(), self.loop) def test_constructor_use_global_loop(self): - # Deprecated in 3.10 + # Deprecated in 3.10, undeprecated in 3.11.1 asyncio.set_event_loop(self.loop) 
self.addCleanup(asyncio.set_event_loop, None) - with self.assertWarns(DeprecationWarning) as cm: - f = self._new_future() - self.assertEqual(cm.warnings[0].filename, __file__) + f = self._new_future() self.assertIs(f._loop, self.loop) self.assertIs(f.get_loop(), self.loop) @@ -499,10 +495,8 @@ class BaseFutureTests: return (arg, threading.get_ident()) ex = concurrent.futures.ThreadPoolExecutor(1) f1 = ex.submit(run, 'oi') - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaises(RuntimeError): - asyncio.wrap_future(f1) - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current event loop'): + asyncio.wrap_future(f1) ex.shutdown(wait=True) def test_wrap_future_use_running_loop(self): @@ -517,16 +511,14 @@ class BaseFutureTests: ex.shutdown(wait=True) def test_wrap_future_use_global_loop(self): - # Deprecated in 3.10 + # Deprecated in 3.10, undeprecated in 3.11.1 asyncio.set_event_loop(self.loop) self.addCleanup(asyncio.set_event_loop, None) def run(arg): return (arg, threading.get_ident()) ex = concurrent.futures.ThreadPoolExecutor(1) f1 = ex.submit(run, 'oi') - with self.assertWarns(DeprecationWarning) as cm: - f2 = asyncio.wrap_future(f1) - self.assertEqual(cm.warnings[0].filename, __file__) + f2 = asyncio.wrap_future(f1) self.assertIs(self.loop, f2._loop) ex.shutdown(wait=True) diff --git a/Lib/test/test_asyncio/test_proactor_events.py b/Lib/test/test_asyncio/test_proactor_events.py index fc6ee1c1..60b30c6a 100644 --- a/Lib/test/test_asyncio/test_proactor_events.py +++ b/Lib/test/test_asyncio/test_proactor_events.py @@ -289,7 +289,33 @@ class ProactorSocketTransportTests(test_utils.TestCase): tr._closing = True tr._force_close(None) test_utils.run_briefly(self.loop) + # See https://github.com/python/cpython/issues/89237 + # `protocol.connection_lost` should be called even if + # the transport was closed forcefully otherwise + # the resources held by protocol will never be freed + # and waiters will never be notified leading to hang. 
+ self.assertTrue(self.protocol.connection_lost.called) + + def test_force_close_protocol_connection_lost_once(self): + tr = self.socket_transport() self.assertFalse(self.protocol.connection_lost.called) + tr._closing = True + # Calling _force_close twice should not call + # protocol.connection_lost twice + tr._force_close(None) + tr._force_close(None) + test_utils.run_briefly(self.loop) + self.assertEqual(1, self.protocol.connection_lost.call_count) + + def test_close_protocol_connection_lost_once(self): + tr = self.socket_transport() + self.assertFalse(self.protocol.connection_lost.called) + # Calling close twice should not call + # protocol.connection_lost twice + tr.close() + tr.close() + test_utils.run_briefly(self.loop) + self.assertEqual(1, self.protocol.connection_lost.call_count) def test_fatal_error_2(self): tr = self.socket_transport() diff --git a/Lib/test/test_asyncio/test_sendfile.py b/Lib/test/test_asyncio/test_sendfile.py index effca664..aaaad9df 100644 --- a/Lib/test/test_asyncio/test_sendfile.py +++ b/Lib/test/test_asyncio/test_sendfile.py @@ -1,6 +1,7 @@ """Tests for sendfile functionality.""" import asyncio +import errno import os import socket import sys @@ -484,8 +485,17 @@ class SendfileMixin(SendfileBase): srv_proto, cli_proto = self.prepare_sendfile(close_after=1024) with self.assertRaises(ConnectionError): - self.run_loop( - self.loop.sendfile(cli_proto.transport, self.file)) + try: + self.run_loop( + self.loop.sendfile(cli_proto.transport, self.file)) + except OSError as e: + # macOS may raise OSError of EPROTOTYPE when writing to a + # socket that is in the process of closing down. + if e.errno == errno.EPROTOTYPE and sys.platform == "darwin": + raise ConnectionError + else: + raise + self.run_loop(srv_proto.done) self.assertTrue(1024 <= srv_proto.nbytes < len(self.DATA), diff --git a/Lib/test/test_asyncio/test_streams.py b/Lib/test/test_asyncio/test_streams.py index 44fcd658..994041c1 100644 --- a/Lib/test/test_asyncio/test_streams.py +++ b/Lib/test/test_asyncio/test_streams.py @@ -747,10 +747,8 @@ os.close(fd) self.assertEqual(data, b'data') def test_streamreader_constructor_without_loop(self): - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'): - asyncio.StreamReader() - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current event loop'): + asyncio.StreamReader() def test_streamreader_constructor_use_running_loop(self): # asyncio issue #184: Ensure that StreamReaderProtocol constructor @@ -764,21 +762,17 @@ os.close(fd) def test_streamreader_constructor_use_global_loop(self): # asyncio issue #184: Ensure that StreamReaderProtocol constructor # retrieves the current loop if the loop parameter is not set - # Deprecated in 3.10 + # Deprecated in 3.10, undeprecated in 3.11.1 self.addCleanup(asyncio.set_event_loop, None) asyncio.set_event_loop(self.loop) - with self.assertWarns(DeprecationWarning) as cm: - reader = asyncio.StreamReader() - self.assertEqual(cm.warnings[0].filename, __file__) + reader = asyncio.StreamReader() self.assertIs(reader._loop, self.loop) def test_streamreaderprotocol_constructor_without_loop(self): reader = mock.Mock() - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'): - asyncio.StreamReaderProtocol(reader) - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current event loop'): + 
asyncio.StreamReaderProtocol(reader) def test_streamreaderprotocol_constructor_use_running_loop(self): # asyncio issue #184: Ensure that StreamReaderProtocol constructor @@ -792,13 +786,11 @@ os.close(fd) def test_streamreaderprotocol_constructor_use_global_loop(self): # asyncio issue #184: Ensure that StreamReaderProtocol constructor # retrieves the current loop if the loop parameter is not set - # Deprecated in 3.10 + # Deprecated in 3.10, undeprecated in 3.11.1 self.addCleanup(asyncio.set_event_loop, None) asyncio.set_event_loop(self.loop) reader = mock.Mock() - with self.assertWarns(DeprecationWarning) as cm: - protocol = asyncio.StreamReaderProtocol(reader) - self.assertEqual(cm.warnings[0].filename, __file__) + protocol = asyncio.StreamReaderProtocol(reader) self.assertIs(protocol._loop, self.loop) def test_multiple_drain(self): diff --git a/Lib/test/test_asyncio/test_subprocess.py b/Lib/test/test_asyncio/test_subprocess.py index 14fa6dd7..7cd80fd6 100644 --- a/Lib/test/test_asyncio/test_subprocess.py +++ b/Lib/test/test_asyncio/test_subprocess.py @@ -401,6 +401,26 @@ class SubprocessMixin: self.assertEqual(output, None) self.assertEqual(exitcode, 0) + @unittest.skipIf(sys.platform != 'linux', "Don't have /dev/stdin") + def test_devstdin_input(self): + + async def devstdin_input(message): + code = 'file = open("/dev/stdin"); data = file.read(); print(len(data))' + proc = await asyncio.create_subprocess_exec( + sys.executable, '-c', code, + stdin=asyncio.subprocess.PIPE, + stdout=asyncio.subprocess.PIPE, + stderr=asyncio.subprocess.PIPE, + close_fds=False, + ) + stdout, stderr = await proc.communicate(message) + exitcode = await proc.wait() + return (stdout, exitcode) + + output, exitcode = self.loop.run_until_complete(devstdin_input(b'abc')) + self.assertEqual(output.rstrip(), b'3') + self.assertEqual(exitcode, 0) + def test_cancel_process_wait(self): # Issue #23140: cancel Process.wait() diff --git a/Lib/test/test_asyncio/test_tasks.py b/Lib/test/test_asyncio/test_tasks.py index 05a822ba..b08068e6 100644 --- a/Lib/test/test_asyncio/test_tasks.py +++ b/Lib/test/test_asyncio/test_tasks.py @@ -210,10 +210,8 @@ class BaseTaskTests: a = notmuch() self.addCleanup(a.close) - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'): - asyncio.ensure_future(a) - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current event loop'): + asyncio.ensure_future(a) async def test(): return asyncio.ensure_future(notmuch()) @@ -223,12 +221,10 @@ class BaseTaskTests: self.assertTrue(t.done()) self.assertEqual(t.result(), 'ok') - # Deprecated in 3.10 + # Deprecated in 3.10.0, undeprecated in 3.10.9 asyncio.set_event_loop(self.loop) self.addCleanup(asyncio.set_event_loop, None) - with self.assertWarns(DeprecationWarning) as cm: - t = asyncio.ensure_future(notmuch()) - self.assertEqual(cm.warnings[0].filename, __file__) + t = asyncio.ensure_future(notmuch()) self.assertIs(t._loop, self.loop) self.loop.run_until_complete(t) self.assertTrue(t.done()) @@ -247,10 +243,8 @@ class BaseTaskTests: a = notmuch() self.addCleanup(a.close) - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'): - asyncio.ensure_future(a) - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'): + asyncio.ensure_future(a) async def test(): return 
asyncio.ensure_future(notmuch()) @@ -260,12 +254,10 @@ class BaseTaskTests: self.assertTrue(t.done()) self.assertEqual(t.result(), 'ok') - # Deprecated in 3.10 + # Deprecated in 3.10.0, undeprecated in 3.10.9 asyncio.set_event_loop(self.loop) self.addCleanup(asyncio.set_event_loop, None) - with self.assertWarns(DeprecationWarning) as cm: - t = asyncio.ensure_future(notmuch()) - self.assertEqual(cm.warnings[0].filename, __file__) + t = asyncio.ensure_future(notmuch()) self.assertIs(t._loop, self.loop) self.loop.run_until_complete(t) self.assertTrue(t.done()) @@ -1488,10 +1480,8 @@ class BaseTaskTests: self.addCleanup(a.close) futs = asyncio.as_completed([a]) - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'): - list(futs) - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current event loop'): + list(futs) def test_as_completed_coroutine_use_running_loop(self): loop = self.new_test_loop() @@ -1507,17 +1497,14 @@ class BaseTaskTests: loop.run_until_complete(test()) def test_as_completed_coroutine_use_global_loop(self): - # Deprecated in 3.10 + # Deprecated in 3.10.0, undeprecated in 3.10.9 async def coro(): return 42 loop = self.new_test_loop() asyncio.set_event_loop(loop) self.addCleanup(asyncio.set_event_loop, None) - futs = asyncio.as_completed([coro()]) - with self.assertWarns(DeprecationWarning) as cm: - futs = list(futs) - self.assertEqual(cm.warnings[0].filename, __file__) + futs = list(asyncio.as_completed([coro()])) self.assertEqual(len(futs), 1) self.assertEqual(loop.run_until_complete(futs[0]), 42) @@ -1987,10 +1974,8 @@ class BaseTaskTests: inner = coro() self.addCleanup(inner.close) - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaisesRegex(RuntimeError, 'There is no current event loop'): - asyncio.shield(inner) - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current event loop'): + asyncio.shield(inner) def test_shield_coroutine_use_running_loop(self): async def coro(): @@ -2004,15 +1989,13 @@ class BaseTaskTests: self.assertEqual(res, 42) def test_shield_coroutine_use_global_loop(self): - # Deprecated in 3.10 + # Deprecated in 3.10.0, undeprecated in 3.10.9 async def coro(): return 42 asyncio.set_event_loop(self.loop) self.addCleanup(asyncio.set_event_loop, None) - with self.assertWarns(DeprecationWarning) as cm: - outer = asyncio.shield(coro()) - self.assertEqual(cm.warnings[0].filename, __file__) + outer = asyncio.shield(coro()) self.assertEqual(outer._loop, self.loop) res = self.loop.run_until_complete(outer) self.assertEqual(res, 42) @@ -2950,7 +2933,7 @@ class BaseCurrentLoopTests: self.assertIsNone(asyncio.current_task(loop=self.loop)) def test_current_task_no_running_loop_implicit(self): - with self.assertRaises(RuntimeError): + with self.assertRaisesRegex(RuntimeError, 'no running event loop'): asyncio.current_task() def test_current_task_with_implicit_loop(self): @@ -3114,10 +3097,8 @@ class FutureGatherTests(GatherTestsBase, test_utils.TestCase): return asyncio.gather(*args, **kwargs) def test_constructor_empty_sequence_without_loop(self): - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaises(RuntimeError): - asyncio.gather() - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current event loop'): + asyncio.gather() def test_constructor_empty_sequence_use_running_loop(self): async def gather(): 
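Editor's note: the test changes in this file track the 3.10.9 rollback of the "no current event loop" deprecation seen in the asyncio changes earlier in this patch: helpers that fall back to the loop installed with set_event_loop() no longer emit a DeprecationWarning, while calling them with no loop at all raises RuntimeError. The sketch below is an illustration of that resulting behaviour under those assumptions, not code from the patch.

import asyncio
import warnings

# With a loop explicitly installed, helpers such as gather() pick it up
# silently again (the 3.10.0 DeprecationWarning was removed in 3.10.9).
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
with warnings.catch_warnings():
    warnings.simplefilter('error', DeprecationWarning)
    fut = asyncio.gather()          # empty gather: future bound to `loop`
loop.run_until_complete(fut)
loop.close()

# With no loop installed and none running, the same call raises RuntimeError.
asyncio.set_event_loop(None)
try:
    asyncio.gather()
except RuntimeError as exc:
    print(exc)                      # e.g. "There is no current event loop ..."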
@@ -3130,12 +3111,10 @@ class FutureGatherTests(GatherTestsBase, test_utils.TestCase): self.assertEqual(fut.result(), []) def test_constructor_empty_sequence_use_global_loop(self): - # Deprecated in 3.10 + # Deprecated in 3.10.0, undeprecated in 3.10.9 asyncio.set_event_loop(self.one_loop) self.addCleanup(asyncio.set_event_loop, None) - with self.assertWarns(DeprecationWarning) as cm: - fut = asyncio.gather() - self.assertEqual(cm.warnings[0].filename, __file__) + fut = asyncio.gather() self.assertIsInstance(fut, asyncio.Future) self.assertIs(fut._loop, self.one_loop) self._run_loop(self.one_loop) @@ -3223,10 +3202,8 @@ class CoroutineGatherTests(GatherTestsBase, test_utils.TestCase): self.addCleanup(gen1.close) gen2 = coro() self.addCleanup(gen2.close) - with self.assertWarns(DeprecationWarning) as cm: - with self.assertRaises(RuntimeError): - asyncio.gather(gen1, gen2) - self.assertEqual(cm.warnings[0].filename, __file__) + with self.assertRaisesRegex(RuntimeError, 'no current event loop'): + asyncio.gather(gen1, gen2) def test_constructor_use_running_loop(self): async def coro(): @@ -3240,16 +3217,14 @@ class CoroutineGatherTests(GatherTestsBase, test_utils.TestCase): self.one_loop.run_until_complete(fut) def test_constructor_use_global_loop(self): - # Deprecated in 3.10 + # Deprecated in 3.10.0, undeprecated in 3.10.9 async def coro(): return 'abc' asyncio.set_event_loop(self.other_loop) self.addCleanup(asyncio.set_event_loop, None) gen1 = coro() gen2 = coro() - with self.assertWarns(DeprecationWarning) as cm: - fut = asyncio.gather(gen1, gen2) - self.assertEqual(cm.warnings[0].filename, __file__) + fut = asyncio.gather(gen1, gen2) self.assertIs(fut._loop, self.other_loop) self.other_loop.run_until_complete(fut) diff --git a/Lib/test/test_asyncio/test_unix_events.py b/Lib/test/test_asyncio/test_unix_events.py index 1d922783..01c1214c 100644 --- a/Lib/test/test_asyncio/test_unix_events.py +++ b/Lib/test/test_asyncio/test_unix_events.py @@ -1740,7 +1740,8 @@ class PolicyTests(unittest.TestCase): def test_child_watcher_replace_mainloop_existing(self): policy = self.create_policy() - loop = policy.get_event_loop() + loop = policy.new_event_loop() + policy.set_event_loop(loop) # Explicitly setup SafeChildWatcher, # default ThreadedChildWatcher has no _loop property diff --git a/Lib/test/test_asyncio/test_windows_events.py b/Lib/test/test_asyncio/test_windows_events.py index f276cd20..afd30288 100644 --- a/Lib/test/test_asyncio/test_windows_events.py +++ b/Lib/test/test_asyncio/test_windows_events.py @@ -239,6 +239,17 @@ class ProactorTests(test_utils.TestCase): self.close_loop(self.loop) self.assertFalse(self.loop.call_exception_handler.called) + def test_address_argument_type_error(self): + # Regression test for https://github.com/python/cpython/issues/98793 + proactor = self.loop._proactor + sock = socket.socket(type=socket.SOCK_DGRAM) + bad_address = None + with self.assertRaises(TypeError): + proactor.connect(sock, bad_address) + with self.assertRaises(TypeError): + proactor.sendto(sock, b'abc', addr=bad_address) + sock.close() + class WinPolicyTests(test_utils.TestCase): diff --git a/Lib/test/test_audit.py b/Lib/test/test_audit.py index cea452dd..10a61c60 100644 --- a/Lib/test/test_audit.py +++ b/Lib/test/test_audit.py @@ -195,6 +195,11 @@ class AuditTest(unittest.TestCase): ('syslog.closelog', '', '')] ) + def test_not_in_gc(self): + returncode, _, stderr = self.run_python("test_not_in_gc") + if returncode: + self.fail(stderr) + if __name__ == "__main__": unittest.main() diff --git 
a/Lib/test/test_builtin.py b/Lib/test/test_builtin.py index 4b0b15f0..aabf0ab5 100644 --- a/Lib/test/test_builtin.py +++ b/Lib/test/test_builtin.py @@ -159,7 +159,7 @@ class BuiltinTest(unittest.TestCase): __import__('string') __import__(name='sys') __import__(name='time', level=0) - self.assertRaises(ImportError, __import__, 'spamspam') + self.assertRaises(ModuleNotFoundError, __import__, 'spamspam') self.assertRaises(TypeError, __import__, 1, 2, 3, 4) self.assertRaises(ValueError, __import__, '') self.assertRaises(TypeError, __import__, 'sys', name='sys') @@ -2248,7 +2248,7 @@ class TestType(unittest.TestCase): self.assertEqual(A.__module__, __name__) with self.assertRaises(ValueError): type('A\x00B', (), {}) - with self.assertRaises(ValueError): + with self.assertRaises(UnicodeEncodeError): type('A\udcdcB', (), {}) with self.assertRaises(TypeError): type(b'A', (), {}) @@ -2265,7 +2265,7 @@ class TestType(unittest.TestCase): with self.assertRaises(ValueError): A.__name__ = 'A\x00B' self.assertEqual(A.__name__, 'C') - with self.assertRaises(ValueError): + with self.assertRaises(UnicodeEncodeError): A.__name__ = 'A\udcdcB' self.assertEqual(A.__name__, 'C') with self.assertRaises(TypeError): diff --git a/Lib/test/test_call.py b/Lib/test/test_call.py index eee26909..1bf1f79f 100644 --- a/Lib/test/test_call.py +++ b/Lib/test/test_call.py @@ -546,7 +546,7 @@ class FastCallTests(unittest.TestCase): self.kwargs.clear() gc.collect() return 0 - x = IntWithDict(dont_inherit=IntWithDict()) + x = IntWithDict(optimize=IntWithDict()) # We test the argument handling of "compile" here, the compilation # itself is not relevant. When we pass flags=x below, x.__index__() is # called, which changes the keywords dict. diff --git a/Lib/test/test_capi.py b/Lib/test/test_capi.py deleted file mode 100644 index 0adb689b..00000000 --- a/Lib/test/test_capi.py +++ /dev/null @@ -1,1065 +0,0 @@ -# Run the _testcapi module tests (tests for the Python/C API): by defn, -# these are all functions _testcapi exports whose name begins with 'test_'. - -from collections import OrderedDict -import importlib.machinery -import importlib.util -import os -import pickle -import random -import re -import subprocess -import sys -import textwrap -import threading -import time -import unittest -import weakref -from test import support -from test.support import MISSING_C_DOCSTRINGS -from test.support import import_helper -from test.support import threading_helper -from test.support import warnings_helper -from test.support.script_helper import assert_python_failure, assert_python_ok -try: - import _posixsubprocess -except ImportError: - _posixsubprocess = None - -# Skip this test if the _testcapi module isn't available. -_testcapi = import_helper.import_module('_testcapi') - -import _testinternalcapi - -# Were we compiled --with-pydebug or with #define Py_DEBUG? 
-Py_DEBUG = hasattr(sys, 'gettotalrefcount') - - -def decode_stderr(err): - return err.decode('utf-8', 'replace').replace('\r', '') - - -def testfunction(self): - """some doc""" - return self - - -class InstanceMethod: - id = _testcapi.instancemethod(id) - testfunction = _testcapi.instancemethod(testfunction) - -class CAPITest(unittest.TestCase): - - def test_instancemethod(self): - inst = InstanceMethod() - self.assertEqual(id(inst), inst.id()) - self.assertTrue(inst.testfunction() is inst) - self.assertEqual(inst.testfunction.__doc__, testfunction.__doc__) - self.assertEqual(InstanceMethod.testfunction.__doc__, testfunction.__doc__) - - InstanceMethod.testfunction.attribute = "test" - self.assertEqual(testfunction.attribute, "test") - self.assertRaises(AttributeError, setattr, inst.testfunction, "attribute", "test") - - def test_no_FatalError_infinite_loop(self): - with support.SuppressCrashReport(): - p = subprocess.Popen([sys.executable, "-c", - 'import _testcapi;' - '_testcapi.crash_no_current_thread()'], - stdout=subprocess.PIPE, - stderr=subprocess.PIPE) - (out, err) = p.communicate() - self.assertEqual(out, b'') - # This used to cause an infinite loop. - self.assertTrue(err.rstrip().startswith( - b'Fatal Python error: ' - b'PyThreadState_Get: ' - b'the function must be called with the GIL held, ' - b'but the GIL is released ' - b'(the current Python thread state is NULL)'), - err) - - def test_memoryview_from_NULL_pointer(self): - self.assertRaises(ValueError, _testcapi.make_memoryview_from_NULL_pointer) - - def test_exc_info(self): - raised_exception = ValueError("5") - new_exc = TypeError("TEST") - try: - raise raised_exception - except ValueError as e: - tb = e.__traceback__ - orig_sys_exc_info = sys.exc_info() - orig_exc_info = _testcapi.set_exc_info(new_exc.__class__, new_exc, None) - new_sys_exc_info = sys.exc_info() - new_exc_info = _testcapi.set_exc_info(*orig_exc_info) - reset_sys_exc_info = sys.exc_info() - - self.assertEqual(orig_exc_info[1], e) - - self.assertSequenceEqual(orig_exc_info, (raised_exception.__class__, raised_exception, tb)) - self.assertSequenceEqual(orig_sys_exc_info, orig_exc_info) - self.assertSequenceEqual(reset_sys_exc_info, orig_exc_info) - self.assertSequenceEqual(new_exc_info, (new_exc.__class__, new_exc, None)) - self.assertSequenceEqual(new_sys_exc_info, new_exc_info) - else: - self.assertTrue(False) - - @unittest.skipUnless(_posixsubprocess, '_posixsubprocess required for this test.') - def test_seq_bytes_to_charp_array(self): - # Issue #15732: crash in _PySequence_BytesToCharpArray() - class Z(object): - def __len__(self): - return 1 - self.assertRaises(TypeError, _posixsubprocess.fork_exec, - 1,Z(),3,(1, 2),5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21) - # Issue #15736: overflow in _PySequence_BytesToCharpArray() - class Z(object): - def __len__(self): - return sys.maxsize - def __getitem__(self, i): - return b'x' - self.assertRaises(MemoryError, _posixsubprocess.fork_exec, - 1,Z(),3,(1, 2),5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21) - - @unittest.skipUnless(_posixsubprocess, '_posixsubprocess required for this test.') - def test_subprocess_fork_exec(self): - class Z(object): - def __len__(self): - return 1 - - # Issue #15738: crash in subprocess_fork_exec() - self.assertRaises(TypeError, _posixsubprocess.fork_exec, - Z(),[b'1'],3,(1, 2),5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21) - - @unittest.skipIf(MISSING_C_DOCSTRINGS, - "Signature information for builtins requires docstrings") - def test_docstring_signature_parsing(self): - - 
self.assertEqual(_testcapi.no_docstring.__doc__, None) - self.assertEqual(_testcapi.no_docstring.__text_signature__, None) - - self.assertEqual(_testcapi.docstring_empty.__doc__, None) - self.assertEqual(_testcapi.docstring_empty.__text_signature__, None) - - self.assertEqual(_testcapi.docstring_no_signature.__doc__, - "This docstring has no signature.") - self.assertEqual(_testcapi.docstring_no_signature.__text_signature__, None) - - self.assertEqual(_testcapi.docstring_with_invalid_signature.__doc__, - "docstring_with_invalid_signature($module, /, boo)\n" - "\n" - "This docstring has an invalid signature." - ) - self.assertEqual(_testcapi.docstring_with_invalid_signature.__text_signature__, None) - - self.assertEqual(_testcapi.docstring_with_invalid_signature2.__doc__, - "docstring_with_invalid_signature2($module, /, boo)\n" - "\n" - "--\n" - "\n" - "This docstring also has an invalid signature." - ) - self.assertEqual(_testcapi.docstring_with_invalid_signature2.__text_signature__, None) - - self.assertEqual(_testcapi.docstring_with_signature.__doc__, - "This docstring has a valid signature.") - self.assertEqual(_testcapi.docstring_with_signature.__text_signature__, "($module, /, sig)") - - self.assertEqual(_testcapi.docstring_with_signature_but_no_doc.__doc__, None) - self.assertEqual(_testcapi.docstring_with_signature_but_no_doc.__text_signature__, - "($module, /, sig)") - - self.assertEqual(_testcapi.docstring_with_signature_and_extra_newlines.__doc__, - "\nThis docstring has a valid signature and some extra newlines.") - self.assertEqual(_testcapi.docstring_with_signature_and_extra_newlines.__text_signature__, - "($module, /, parameter)") - - def test_c_type_with_matrix_multiplication(self): - M = _testcapi.matmulType - m1 = M() - m2 = M() - self.assertEqual(m1 @ m2, ("matmul", m1, m2)) - self.assertEqual(m1 @ 42, ("matmul", m1, 42)) - self.assertEqual(42 @ m1, ("matmul", 42, m1)) - o = m1 - o @= m2 - self.assertEqual(o, ("imatmul", m1, m2)) - o = m1 - o @= 42 - self.assertEqual(o, ("imatmul", m1, 42)) - o = 42 - o @= m1 - self.assertEqual(o, ("matmul", 42, m1)) - - def test_c_type_with_ipow(self): - # When the __ipow__ method of a type was implemented in C, using the - # modulo param would cause segfaults. 
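For orientation, the three-argument in-place power call that the deleted test below exercises can be sketched in pure Python; the class name here is illustrative and not part of _testcapi:

# Minimal sketch: a stand-in that records the arguments __ipow__ receives,
# mirroring the (other, modulo) tuples the C ipowType test asserts.
class IPowRecorder:
    def __ipow__(self, other, modulo=None):
        return (other, modulo)

o = IPowRecorder()
assert o.__ipow__(1) == (1, None)    # no modulo supplied
assert o.__ipow__(2, 2) == (2, 2)    # explicit modulo -- the case that used to segfault in C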
- o = _testcapi.ipowType() - self.assertEqual(o.__ipow__(1), (1, None)) - self.assertEqual(o.__ipow__(2, 2), (2, 2)) - - def test_return_null_without_error(self): - # Issue #23571: A function must not return NULL without setting an - # error - if Py_DEBUG: - code = textwrap.dedent(""" - import _testcapi - from test import support - - with support.SuppressCrashReport(): - _testcapi.return_null_without_error() - """) - rc, out, err = assert_python_failure('-c', code) - err = decode_stderr(err) - self.assertRegex(err, - r'Fatal Python error: _Py_CheckFunctionResult: ' - r'a function returned NULL without setting an exception\n' - r'Python runtime state: initialized\n' - r'SystemError: ' - r'returned NULL without setting an exception\n' - r'\n' - r'Current thread.*:\n' - r' File .*", line 6 in \n') - else: - with self.assertRaises(SystemError) as cm: - _testcapi.return_null_without_error() - self.assertRegex(str(cm.exception), - 'return_null_without_error.* ' - 'returned NULL without setting an exception') - - def test_return_result_with_error(self): - # Issue #23571: A function must not return a result with an error set - if Py_DEBUG: - code = textwrap.dedent(""" - import _testcapi - from test import support - - with support.SuppressCrashReport(): - _testcapi.return_result_with_error() - """) - rc, out, err = assert_python_failure('-c', code) - err = decode_stderr(err) - self.assertRegex(err, - r'Fatal Python error: _Py_CheckFunctionResult: ' - r'a function returned a result with an exception set\n' - r'Python runtime state: initialized\n' - r'ValueError\n' - r'\n' - r'The above exception was the direct cause ' - r'of the following exception:\n' - r'\n' - r'SystemError: ' - r'returned a result with an exception set\n' - r'\n' - r'Current thread.*:\n' - r' File .*, line 6 in \n') - else: - with self.assertRaises(SystemError) as cm: - _testcapi.return_result_with_error() - self.assertRegex(str(cm.exception), - 'return_result_with_error.* ' - 'returned a result with an exception set') - - def test_getitem_with_error(self): - # Test _Py_CheckSlotResult(). Raise an exception and then calls - # PyObject_GetItem(): check that the assertion catches the bug. - # PyObject_GetItem() must not be called with an exception set. - code = textwrap.dedent(""" - import _testcapi - from test import support - - with support.SuppressCrashReport(): - _testcapi.getitem_with_error({1: 2}, 1) - """) - rc, out, err = assert_python_failure('-c', code) - err = decode_stderr(err) - if 'SystemError: ' not in err: - self.assertRegex(err, - r'Fatal Python error: _Py_CheckSlotResult: ' - r'Slot __getitem__ of type dict succeeded ' - r'with an exception set\n' - r'Python runtime state: initialized\n' - r'ValueError: bug\n' - r'\n' - r'Current thread .* \(most recent call first\):\n' - r' File .*, line 6 in \n' - r'\n' - r'Extension modules: _testcapi \(total: 1\)\n') - else: - # Python built with NDEBUG macro defined: - # test _Py_CheckFunctionResult() instead. - self.assertIn('returned a result with an exception set', err) - - def test_buildvalue_N(self): - _testcapi.test_buildvalue_N() - - def test_set_nomemory(self): - code = """if 1: - import _testcapi - - class C(): pass - - # The first loop tests both functions and that remove_mem_hooks() - # can be called twice in a row. The second loop checks a call to - # set_nomemory() after a call to remove_mem_hooks(). The third - # loop checks the start and stop arguments of set_nomemory(). 
- for outer_cnt in range(1, 4): - start = 10 * outer_cnt - for j in range(100): - if j == 0: - if outer_cnt != 3: - _testcapi.set_nomemory(start) - else: - _testcapi.set_nomemory(start, start + 1) - try: - C() - except MemoryError as e: - if outer_cnt != 3: - _testcapi.remove_mem_hooks() - print('MemoryError', outer_cnt, j) - _testcapi.remove_mem_hooks() - break - """ - rc, out, err = assert_python_ok('-c', code) - self.assertIn(b'MemoryError 1 10', out) - self.assertIn(b'MemoryError 2 20', out) - self.assertIn(b'MemoryError 3 30', out) - - def test_mapping_keys_values_items(self): - class Mapping1(dict): - def keys(self): - return list(super().keys()) - def values(self): - return list(super().values()) - def items(self): - return list(super().items()) - class Mapping2(dict): - def keys(self): - return tuple(super().keys()) - def values(self): - return tuple(super().values()) - def items(self): - return tuple(super().items()) - dict_obj = {'foo': 1, 'bar': 2, 'spam': 3} - - for mapping in [{}, OrderedDict(), Mapping1(), Mapping2(), - dict_obj, OrderedDict(dict_obj), - Mapping1(dict_obj), Mapping2(dict_obj)]: - self.assertListEqual(_testcapi.get_mapping_keys(mapping), - list(mapping.keys())) - self.assertListEqual(_testcapi.get_mapping_values(mapping), - list(mapping.values())) - self.assertListEqual(_testcapi.get_mapping_items(mapping), - list(mapping.items())) - - def test_mapping_keys_values_items_bad_arg(self): - self.assertRaises(AttributeError, _testcapi.get_mapping_keys, None) - self.assertRaises(AttributeError, _testcapi.get_mapping_values, None) - self.assertRaises(AttributeError, _testcapi.get_mapping_items, None) - - class BadMapping: - def keys(self): - return None - def values(self): - return None - def items(self): - return None - bad_mapping = BadMapping() - self.assertRaises(TypeError, _testcapi.get_mapping_keys, bad_mapping) - self.assertRaises(TypeError, _testcapi.get_mapping_values, bad_mapping) - self.assertRaises(TypeError, _testcapi.get_mapping_items, bad_mapping) - - @unittest.skipUnless(hasattr(_testcapi, 'negative_refcount'), - 'need _testcapi.negative_refcount') - def test_negative_refcount(self): - # bpo-35059: Check that Py_DECREF() reports the correct filename - # when calling _Py_NegativeRefcount() to abort Python. 
- code = textwrap.dedent(""" - import _testcapi - from test import support - - with support.SuppressCrashReport(): - _testcapi.negative_refcount() - """) - rc, out, err = assert_python_failure('-c', code) - self.assertRegex(err, - br'_testcapimodule\.c:[0-9]+: ' - br'_Py_NegativeRefcount: Assertion failed: ' - br'object has negative ref count') - - def test_trashcan_subclass(self): - # bpo-35983: Check that the trashcan mechanism for "list" is NOT - # activated when its tp_dealloc is being called by a subclass - from _testcapi import MyList - L = None - for i in range(1000): - L = MyList((L,)) - - @support.requires_resource('cpu') - def test_trashcan_python_class1(self): - self.do_test_trashcan_python_class(list) - - @support.requires_resource('cpu') - def test_trashcan_python_class2(self): - from _testcapi import MyList - self.do_test_trashcan_python_class(MyList) - - def do_test_trashcan_python_class(self, base): - # Check that the trashcan mechanism works properly for a Python - # subclass of a class using the trashcan (this specific test assumes - # that the base class "base" behaves like list) - class PyList(base): - # Count the number of PyList instances to verify that there is - # no memory leak - num = 0 - def __init__(self, *args): - __class__.num += 1 - super().__init__(*args) - def __del__(self): - __class__.num -= 1 - - for parity in (0, 1): - L = None - # We need in the order of 2**20 iterations here such that a - # typical 8MB stack would overflow without the trashcan. - for i in range(2**20): - L = PyList((L,)) - L.attr = i - if parity: - # Add one additional nesting layer - L = (L,) - self.assertGreater(PyList.num, 0) - del L - self.assertEqual(PyList.num, 0) - - def test_heap_ctype_doc_and_text_signature(self): - self.assertEqual(_testcapi.HeapDocCType.__doc__, "somedoc") - self.assertEqual(_testcapi.HeapDocCType.__text_signature__, "(arg1, arg2)") - - def test_null_type_doc(self): - self.assertEqual(_testcapi.NullTpDocType.__doc__, None) - - def test_subclass_of_heap_gc_ctype_with_tpdealloc_decrefs_once(self): - class HeapGcCTypeSubclass(_testcapi.HeapGcCType): - def __init__(self): - self.value2 = 20 - super().__init__() - - subclass_instance = HeapGcCTypeSubclass() - type_refcnt = sys.getrefcount(HeapGcCTypeSubclass) - - # Test that subclass instance was fully created - self.assertEqual(subclass_instance.value, 10) - self.assertEqual(subclass_instance.value2, 20) - - # Test that the type reference count is only decremented once - del subclass_instance - self.assertEqual(type_refcnt - 1, sys.getrefcount(HeapGcCTypeSubclass)) - - def test_subclass_of_heap_gc_ctype_with_del_modifying_dunder_class_only_decrefs_once(self): - class A(_testcapi.HeapGcCType): - def __init__(self): - self.value2 = 20 - super().__init__() - - class B(A): - def __init__(self): - super().__init__() - - def __del__(self): - self.__class__ = A - A.refcnt_in_del = sys.getrefcount(A) - B.refcnt_in_del = sys.getrefcount(B) - - subclass_instance = B() - type_refcnt = sys.getrefcount(B) - new_type_refcnt = sys.getrefcount(A) - - # Test that subclass instance was fully created - self.assertEqual(subclass_instance.value, 10) - self.assertEqual(subclass_instance.value2, 20) - - del subclass_instance - - # Test that setting __class__ modified the reference counts of the types - self.assertEqual(type_refcnt - 1, B.refcnt_in_del) - self.assertEqual(new_type_refcnt + 1, A.refcnt_in_del) - - # Test that the original type already has decreased its refcnt - self.assertEqual(type_refcnt - 1, sys.getrefcount(B)) - - 
# Test that subtype_dealloc decref the newly assigned __class__ only once - self.assertEqual(new_type_refcnt, sys.getrefcount(A)) - - def test_heaptype_with_dict(self): - inst = _testcapi.HeapCTypeWithDict() - inst.foo = 42 - self.assertEqual(inst.foo, 42) - self.assertEqual(inst.dictobj, inst.__dict__) - self.assertEqual(inst.dictobj, {"foo": 42}) - - inst = _testcapi.HeapCTypeWithDict() - self.assertEqual({}, inst.__dict__) - - def test_heaptype_with_negative_dict(self): - inst = _testcapi.HeapCTypeWithNegativeDict() - inst.foo = 42 - self.assertEqual(inst.foo, 42) - self.assertEqual(inst.dictobj, inst.__dict__) - self.assertEqual(inst.dictobj, {"foo": 42}) - - inst = _testcapi.HeapCTypeWithNegativeDict() - self.assertEqual({}, inst.__dict__) - - def test_heaptype_with_weakref(self): - inst = _testcapi.HeapCTypeWithWeakref() - ref = weakref.ref(inst) - self.assertEqual(ref(), inst) - self.assertEqual(inst.weakreflist, ref) - - def test_heaptype_with_buffer(self): - inst = _testcapi.HeapCTypeWithBuffer() - b = bytes(inst) - self.assertEqual(b, b"1234") - - def test_c_subclass_of_heap_ctype_with_tpdealloc_decrefs_once(self): - subclass_instance = _testcapi.HeapCTypeSubclass() - type_refcnt = sys.getrefcount(_testcapi.HeapCTypeSubclass) - - # Test that subclass instance was fully created - self.assertEqual(subclass_instance.value, 10) - self.assertEqual(subclass_instance.value2, 20) - - # Test that the type reference count is only decremented once - del subclass_instance - self.assertEqual(type_refcnt - 1, sys.getrefcount(_testcapi.HeapCTypeSubclass)) - - def test_c_subclass_of_heap_ctype_with_del_modifying_dunder_class_only_decrefs_once(self): - subclass_instance = _testcapi.HeapCTypeSubclassWithFinalizer() - type_refcnt = sys.getrefcount(_testcapi.HeapCTypeSubclassWithFinalizer) - new_type_refcnt = sys.getrefcount(_testcapi.HeapCTypeSubclass) - - # Test that subclass instance was fully created - self.assertEqual(subclass_instance.value, 10) - self.assertEqual(subclass_instance.value2, 20) - - # The tp_finalize slot will set __class__ to HeapCTypeSubclass - del subclass_instance - - # Test that setting __class__ modified the reference counts of the types - self.assertEqual(type_refcnt - 1, _testcapi.HeapCTypeSubclassWithFinalizer.refcnt_in_del) - self.assertEqual(new_type_refcnt + 1, _testcapi.HeapCTypeSubclass.refcnt_in_del) - - # Test that the original type already has decreased its refcnt - self.assertEqual(type_refcnt - 1, sys.getrefcount(_testcapi.HeapCTypeSubclassWithFinalizer)) - - # Test that subtype_dealloc decref the newly assigned __class__ only once - self.assertEqual(new_type_refcnt, sys.getrefcount(_testcapi.HeapCTypeSubclass)) - - def test_heaptype_with_setattro(self): - obj = _testcapi.HeapCTypeSetattr() - self.assertEqual(obj.pvalue, 10) - obj.value = 12 - self.assertEqual(obj.pvalue, 12) - del obj.value - self.assertEqual(obj.pvalue, 0) - - def test_pynumber_tobase(self): - from _testcapi import pynumber_tobase - self.assertEqual(pynumber_tobase(123, 2), '0b1111011') - self.assertEqual(pynumber_tobase(123, 8), '0o173') - self.assertEqual(pynumber_tobase(123, 10), '123') - self.assertEqual(pynumber_tobase(123, 16), '0x7b') - self.assertEqual(pynumber_tobase(-123, 2), '-0b1111011') - self.assertEqual(pynumber_tobase(-123, 8), '-0o173') - self.assertEqual(pynumber_tobase(-123, 10), '-123') - self.assertEqual(pynumber_tobase(-123, 16), '-0x7b') - self.assertRaises(TypeError, pynumber_tobase, 123.0, 10) - self.assertRaises(TypeError, pynumber_tobase, '123', 10) - 
self.assertRaises(SystemError, pynumber_tobase, 123, 0) - - def check_fatal_error(self, code, expected, not_expected=()): - with support.SuppressCrashReport(): - rc, out, err = assert_python_failure('-sSI', '-c', code) - - err = decode_stderr(err) - self.assertIn('Fatal Python error: test_fatal_error: MESSAGE\n', - err) - - match = re.search(r'^Extension modules:(.*) \(total: ([0-9]+)\)$', - err, re.MULTILINE) - if not match: - self.fail(f"Cannot find 'Extension modules:' in {err!r}") - modules = set(match.group(1).strip().split(', ')) - total = int(match.group(2)) - - for name in expected: - self.assertIn(name, modules) - for name in not_expected: - self.assertNotIn(name, modules) - self.assertEqual(len(modules), total) - - def test_fatal_error(self): - # By default, stdlib extension modules are ignored, - # but not test modules. - expected = ('_testcapi',) - not_expected = ('sys',) - code = 'import _testcapi, sys; _testcapi.fatal_error(b"MESSAGE")' - self.check_fatal_error(code, expected, not_expected) - - # Mark _testcapi as stdlib module, but not sys - expected = ('sys',) - not_expected = ('_testcapi',) - code = textwrap.dedent(''' - import _testcapi, sys - sys.stdlib_module_names = frozenset({"_testcapi"}) - _testcapi.fatal_error(b"MESSAGE") - ''') - self.check_fatal_error(code, expected) - - def test_pyobject_repr_from_null(self): - s = _testcapi.pyobject_repr_from_null() - self.assertEqual(s, '') - - def test_pyobject_str_from_null(self): - s = _testcapi.pyobject_str_from_null() - self.assertEqual(s, '') - - def test_pyobject_bytes_from_null(self): - s = _testcapi.pyobject_bytes_from_null() - self.assertEqual(s, b'') - - def test_Py_CompileString(self): - # Check that Py_CompileString respects the coding cookie - _compile = _testcapi.Py_CompileString - code = b"# -*- coding: latin1 -*-\nprint('\xc2\xa4')\n" - result = _compile(code) - expected = compile(code, "", "exec") - self.assertEqual(result.co_consts, expected.co_consts) - - -class TestPendingCalls(unittest.TestCase): - - def pendingcalls_submit(self, l, n): - def callback(): - #this function can be interrupted by thread switching so let's - #use an atomic operation - l.append(None) - - for i in range(n): - time.sleep(random.random()*0.02) #0.01 secs on average - #try submitting callback until successful. - #rely on regular interrupt to flush queue if we are - #unsuccessful. - while True: - if _testcapi._pending_threadfunc(callback): - break - - def pendingcalls_wait(self, l, n, context = None): - #now, stick around until l[0] has grown to 10 - count = 0 - while len(l) != n: - #this busy loop is where we expect to be interrupted to - #run our callbacks. 
Note that callbacks are only run on the - #main thread - if False and support.verbose: - print("(%i)"%(len(l),),) - for i in range(1000): - a = i*i - if context and not context.event.is_set(): - continue - count += 1 - self.assertTrue(count < 10000, - "timeout waiting for %i callbacks, got %i"%(n, len(l))) - if False and support.verbose: - print("(%i)"%(len(l),)) - - def test_pendingcalls_threaded(self): - - #do every callback on a separate thread - n = 32 #total callbacks - threads = [] - class foo(object):pass - context = foo() - context.l = [] - context.n = 2 #submits per thread - context.nThreads = n // context.n - context.nFinished = 0 - context.lock = threading.Lock() - context.event = threading.Event() - - threads = [threading.Thread(target=self.pendingcalls_thread, - args=(context,)) - for i in range(context.nThreads)] - with threading_helper.start_threads(threads): - self.pendingcalls_wait(context.l, n, context) - - def pendingcalls_thread(self, context): - try: - self.pendingcalls_submit(context.l, context.n) - finally: - with context.lock: - context.nFinished += 1 - nFinished = context.nFinished - if False and support.verbose: - print("finished threads: ", nFinished) - if nFinished == context.nThreads: - context.event.set() - - def test_pendingcalls_non_threaded(self): - #again, just using the main thread, likely they will all be dispatched at - #once. It is ok to ask for too many, because we loop until we find a slot. - #the loop can be interrupted to dispatch. - #there are only 32 dispatch slots, so we go for twice that! - l = [] - n = 64 - self.pendingcalls_submit(l, n) - self.pendingcalls_wait(l, n) - - -class SubinterpreterTest(unittest.TestCase): - - def test_subinterps(self): - import builtins - r, w = os.pipe() - code = """if 1: - import sys, builtins, pickle - with open({:d}, "wb") as f: - pickle.dump(id(sys.modules), f) - pickle.dump(id(builtins), f) - """.format(w) - with open(r, "rb") as f: - ret = support.run_in_subinterp(code) - self.assertEqual(ret, 0) - self.assertNotEqual(pickle.load(f), id(sys.modules)) - self.assertNotEqual(pickle.load(f), id(builtins)) - - def test_subinterps_recent_language_features(self): - r, w = os.pipe() - code = """if 1: - import pickle - with open({:d}, "wb") as f: - - @(lambda x:x) # Py 3.9 - def noop(x): return x - - a = (b := f'1{{2}}3') + noop('x') # Py 3.8 (:=) / 3.6 (f'') - - async def foo(arg): return await arg # Py 3.5 - - pickle.dump(dict(a=a, b=b), f) - """.format(w) - - with open(r, "rb") as f: - ret = support.run_in_subinterp(code) - self.assertEqual(ret, 0) - self.assertEqual(pickle.load(f), {'a': '123x', 'b': '123'}) - - def test_mutate_exception(self): - """ - Exceptions saved in global module state get shared between - individual module instances. This test checks whether or not - a change in one interpreter's module gets reflected into the - other ones. - """ - import binascii - - support.run_in_subinterp("import binascii; binascii.Error.foobar = 'foobar'") - - self.assertFalse(hasattr(binascii.Error, "foobar")) - - def test_module_state_shared_in_global(self): - """ - bpo-44050: Extension module state should be shared between interpreters - when it doesn't support sub-interpreters. 
- """ - r, w = os.pipe() - self.addCleanup(os.close, r) - self.addCleanup(os.close, w) - - script = textwrap.dedent(f""" - import importlib.machinery - import importlib.util - import os - - fullname = '_test_module_state_shared' - origin = importlib.util.find_spec('_testmultiphase').origin - loader = importlib.machinery.ExtensionFileLoader(fullname, origin) - spec = importlib.util.spec_from_loader(fullname, loader) - module = importlib.util.module_from_spec(spec) - attr_id = str(id(module.Error)).encode() - - os.write({w}, attr_id) - """) - exec(script) - main_attr_id = os.read(r, 100) - - ret = support.run_in_subinterp(script) - self.assertEqual(ret, 0) - subinterp_attr_id = os.read(r, 100) - self.assertEqual(main_attr_id, subinterp_attr_id) - - -class TestThreadState(unittest.TestCase): - - @threading_helper.reap_threads - def test_thread_state(self): - # some extra thread-state tests driven via _testcapi - def target(): - idents = [] - - def callback(): - idents.append(threading.get_ident()) - - _testcapi._test_thread_state(callback) - a = b = callback - time.sleep(1) - # Check our main thread is in the list exactly 3 times. - self.assertEqual(idents.count(threading.get_ident()), 3, - "Couldn't find main thread correctly in the list") - - target() - t = threading.Thread(target=target) - t.start() - t.join() - - @threading_helper.reap_threads - def test_gilstate_ensure_no_deadlock(self): - # See https://github.com/python/cpython/issues/96071 - code = textwrap.dedent(f""" - import _testcapi - - def callback(): - print('callback called') - - _testcapi._test_thread_state(callback) - """) - ret = assert_python_ok('-X', 'tracemalloc', '-c', code) - self.assertIn(b'callback called', ret.out) - - -class Test_testcapi(unittest.TestCase): - locals().update((name, getattr(_testcapi, name)) - for name in dir(_testcapi) - if name.startswith('test_') and not name.endswith('_code')) - - # Suppress warning from PyUnicode_FromUnicode(). - @warnings_helper.ignore_warnings(category=DeprecationWarning) - def test_widechar(self): - _testcapi.test_widechar() - - -class Test_testinternalcapi(unittest.TestCase): - locals().update((name, getattr(_testinternalcapi, name)) - for name in dir(_testinternalcapi) - if name.startswith('test_')) - - -class PyMemDebugTests(unittest.TestCase): - PYTHONMALLOC = 'debug' - # '0x04c06e0' or '04C06E0' - PTR_REGEX = r'(?:0x)?[0-9a-fA-F]+' - - def check(self, code): - with support.SuppressCrashReport(): - out = assert_python_failure( - '-c', code, - PYTHONMALLOC=self.PYTHONMALLOC, - # FreeBSD: instruct jemalloc to not fill freed() memory - # with junk byte 0x5a, see JEMALLOC(3) - MALLOC_CONF="junk:false", - ) - stderr = out.err - return stderr.decode('ascii', 'replace') - - def test_buffer_overflow(self): - out = self.check('import _testcapi; _testcapi.pymem_buffer_overflow()') - regex = (r"Debug memory block at address p={ptr}: API 'm'\n" - r" 16 bytes originally requested\n" - r" The [0-9] pad bytes at p-[0-9] are FORBIDDENBYTE, as expected.\n" - r" The [0-9] pad bytes at tail={ptr} are not all FORBIDDENBYTE \(0x[0-9a-f]{{2}}\):\n" - r" at tail\+0: 0x78 \*\*\* OUCH\n" - r" at tail\+1: 0xfd\n" - r" at tail\+2: 0xfd\n" - r" .*\n" - r"( The block was made by call #[0-9]+ to debug malloc/realloc.\n)?" 
- r" Data at p: cd cd cd .*\n" - r"\n" - r"Enable tracemalloc to get the memory block allocation traceback\n" - r"\n" - r"Fatal Python error: _PyMem_DebugRawFree: bad trailing pad byte") - regex = regex.format(ptr=self.PTR_REGEX) - regex = re.compile(regex, flags=re.DOTALL) - self.assertRegex(out, regex) - - def test_api_misuse(self): - out = self.check('import _testcapi; _testcapi.pymem_api_misuse()') - regex = (r"Debug memory block at address p={ptr}: API 'm'\n" - r" 16 bytes originally requested\n" - r" The [0-9] pad bytes at p-[0-9] are FORBIDDENBYTE, as expected.\n" - r" The [0-9] pad bytes at tail={ptr} are FORBIDDENBYTE, as expected.\n" - r"( The block was made by call #[0-9]+ to debug malloc/realloc.\n)?" - r" Data at p: cd cd cd .*\n" - r"\n" - r"Enable tracemalloc to get the memory block allocation traceback\n" - r"\n" - r"Fatal Python error: _PyMem_DebugRawFree: bad ID: Allocated using API 'm', verified using API 'r'\n") - regex = regex.format(ptr=self.PTR_REGEX) - self.assertRegex(out, regex) - - def check_malloc_without_gil(self, code): - out = self.check(code) - expected = ('Fatal Python error: _PyMem_DebugMalloc: ' - 'Python memory allocator called without holding the GIL') - self.assertIn(expected, out) - - def test_pymem_malloc_without_gil(self): - # Debug hooks must raise an error if PyMem_Malloc() is called - # without holding the GIL - code = 'import _testcapi; _testcapi.pymem_malloc_without_gil()' - self.check_malloc_without_gil(code) - - def test_pyobject_malloc_without_gil(self): - # Debug hooks must raise an error if PyObject_Malloc() is called - # without holding the GIL - code = 'import _testcapi; _testcapi.pyobject_malloc_without_gil()' - self.check_malloc_without_gil(code) - - def check_pyobject_is_freed(self, func_name): - code = textwrap.dedent(f''' - import gc, os, sys, _testcapi - # Disable the GC to avoid crash on GC collection - gc.disable() - try: - _testcapi.{func_name}() - # Exit immediately to avoid a crash while deallocating - # the invalid object - os._exit(0) - except _testcapi.error: - os._exit(1) - ''') - assert_python_ok( - '-c', code, - PYTHONMALLOC=self.PYTHONMALLOC, - MALLOC_CONF="junk:false", - ) - - def test_pyobject_null_is_freed(self): - self.check_pyobject_is_freed('check_pyobject_null_is_freed') - - def test_pyobject_uninitialized_is_freed(self): - self.check_pyobject_is_freed('check_pyobject_uninitialized_is_freed') - - def test_pyobject_forbidden_bytes_is_freed(self): - self.check_pyobject_is_freed('check_pyobject_forbidden_bytes_is_freed') - - def test_pyobject_freed_is_freed(self): - self.check_pyobject_is_freed('check_pyobject_freed_is_freed') - - -class PyMemMallocDebugTests(PyMemDebugTests): - PYTHONMALLOC = 'malloc_debug' - - -@unittest.skipUnless(support.with_pymalloc(), 'need pymalloc') -class PyMemPymallocDebugTests(PyMemDebugTests): - PYTHONMALLOC = 'pymalloc_debug' - - -@unittest.skipUnless(Py_DEBUG, 'need Py_DEBUG') -class PyMemDefaultTests(PyMemDebugTests): - # test default allocator of Python compiled in debug mode - PYTHONMALLOC = '' - - -class Test_ModuleStateAccess(unittest.TestCase): - """Test access to module start (PEP 573)""" - - # The C part of the tests lives in _testmultiphase, in a module called - # _testmultiphase_meth_state_access. - # This module has multi-phase initialization, unlike _testcapi. 
- - def setUp(self): - fullname = '_testmultiphase_meth_state_access' # XXX - origin = importlib.util.find_spec('_testmultiphase').origin - loader = importlib.machinery.ExtensionFileLoader(fullname, origin) - spec = importlib.util.spec_from_loader(fullname, loader) - module = importlib.util.module_from_spec(spec) - loader.exec_module(module) - self.module = module - - def test_subclass_get_module(self): - """PyType_GetModule for defining_class""" - class StateAccessType_Subclass(self.module.StateAccessType): - pass - - instance = StateAccessType_Subclass() - self.assertIs(instance.get_defining_module(), self.module) - - def test_subclass_get_module_with_super(self): - class StateAccessType_Subclass(self.module.StateAccessType): - def get_defining_module(self): - return super().get_defining_module() - - instance = StateAccessType_Subclass() - self.assertIs(instance.get_defining_module(), self.module) - - def test_state_access(self): - """Checks methods defined with and without argument clinic - - This tests a no-arg method (get_count) and a method with - both a positional and keyword argument. - """ - - a = self.module.StateAccessType() - b = self.module.StateAccessType() - - methods = { - 'clinic': a.increment_count_clinic, - 'noclinic': a.increment_count_noclinic, - } - - for name, increment_count in methods.items(): - with self.subTest(name): - self.assertEqual(a.get_count(), b.get_count()) - self.assertEqual(a.get_count(), 0) - - increment_count() - self.assertEqual(a.get_count(), b.get_count()) - self.assertEqual(a.get_count(), 1) - - increment_count(3) - self.assertEqual(a.get_count(), b.get_count()) - self.assertEqual(a.get_count(), 4) - - increment_count(-2, twice=True) - self.assertEqual(a.get_count(), b.get_count()) - self.assertEqual(a.get_count(), 0) - - with self.assertRaises(TypeError): - increment_count(thrice=3) - - with self.assertRaises(TypeError): - increment_count(1, 2, 3) - - def test_get_module_bad_def(self): - # _PyType_GetModuleByDef fails gracefully if it doesn't - # find what it's looking for. - # see bpo-46433 - instance = self.module.StateAccessType() - with self.assertRaises(TypeError): - instance.getmodulebydef_bad_def() - - def test_get_module_static_in_mro(self): - # Here, the class _PyType_GetModuleByDef is looking for - # appears in the MRO after a static type (Exception). 
- # see bpo-46433 - class Subclass(BaseException, self.module.StateAccessType): - pass - self.assertIs(Subclass().get_defining_module(), self.module) - - -if __name__ == "__main__": - unittest.main() diff --git a/Lib/test/test_capi/__init__.py b/Lib/test/test_capi/__init__.py new file mode 100644 index 00000000..4b16ecc3 --- /dev/null +++ b/Lib/test/test_capi/__init__.py @@ -0,0 +1,5 @@ +import os +from test.support import load_package_tests + +def load_tests(*args): + return load_package_tests(os.path.dirname(__file__), *args) diff --git a/Lib/test/test_capi/__main__.py b/Lib/test/test_capi/__main__.py new file mode 100644 index 00000000..05d01775 --- /dev/null +++ b/Lib/test/test_capi/__main__.py @@ -0,0 +1,3 @@ +import unittest + +unittest.main('test.test_capi') diff --git a/Lib/test/test_capi/test_getargs.py b/Lib/test/test_capi/test_getargs.py new file mode 100644 index 00000000..72b6d64a --- /dev/null +++ b/Lib/test/test_capi/test_getargs.py @@ -0,0 +1,1318 @@ +import unittest +import math +import string +import sys +import warnings +from test import support +from test.support import import_helper +from test.support import warnings_helper +# Skip this test if the _testcapi module isn't available. +_testcapi = import_helper.import_module('_testcapi') +from _testcapi import getargs_keywords, getargs_keyword_only + +# > How about the following counterproposal. This also changes some of +# > the other format codes to be a little more regular. +# > +# > Code C type Range check +# > +# > b unsigned char 0..UCHAR_MAX +# > h signed short SHRT_MIN..SHRT_MAX +# > B unsigned char none ** +# > H unsigned short none ** +# > k * unsigned long none +# > I * unsigned int 0..UINT_MAX +# +# +# > i int INT_MIN..INT_MAX +# > l long LONG_MIN..LONG_MAX +# +# > K * unsigned long long none +# > L long long LLONG_MIN..LLONG_MAX +# +# > Notes: +# > +# > * New format codes. +# > +# > ** Changed from previous "range-and-a-half" to "none"; the +# > range-and-a-half checking wasn't particularly useful. +# +# Plus a C API or two, e.g. PyLong_AsUnsignedLongMask() -> +# unsigned long and PyLong_AsUnsignedLongLongMask() -> unsigned +# long long (if that exists). 
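As a quick orientation to the table above, the difference between a range-checked unit ('b') and an unchecked, masking unit ('B') can be seen directly from the _testcapi helpers this file imports; the behaviour shown is exactly what the Unsigned_TestCase methods below assert:

# Sketch only; relies on the _testcapi getargs helpers used throughout this file.
from _testcapi import getargs_b, getargs_B, UCHAR_MAX

assert getargs_b(UCHAR_MAX) == UCHAR_MAX   # 'b' range-checks 0..UCHAR_MAX
try:
    getargs_b(UCHAR_MAX + 1)               # out of range for 'b'
except OverflowError:
    pass
assert getargs_B(UCHAR_MAX + 1) == 0       # 'B' does no range check; the value is masked
assert getargs_B(-1) == UCHAR_MAX          # negative values wrap the same way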
+ +LARGE = 0x7FFFFFFF +VERY_LARGE = 0xFF0000121212121212121242 + +from _testcapi import UCHAR_MAX, USHRT_MAX, UINT_MAX, ULONG_MAX, INT_MAX, \ + INT_MIN, LONG_MIN, LONG_MAX, PY_SSIZE_T_MIN, PY_SSIZE_T_MAX, \ + SHRT_MIN, SHRT_MAX, FLT_MIN, FLT_MAX, DBL_MIN, DBL_MAX + +DBL_MAX_EXP = sys.float_info.max_exp +INF = float('inf') +NAN = float('nan') + +# fake, they are not defined in Python's header files +LLONG_MAX = 2**63-1 +LLONG_MIN = -2**63 +ULLONG_MAX = 2**64-1 + +class Index: + def __index__(self): + return 99 + +class IndexIntSubclass(int): + def __index__(self): + return 99 + +class BadIndex: + def __index__(self): + return 1.0 + +class BadIndex2: + def __index__(self): + return True + +class BadIndex3(int): + def __index__(self): + return True + + +class Int: + def __int__(self): + return 99 + +class IntSubclass(int): + def __int__(self): + return 99 + +class BadInt: + def __int__(self): + return 1.0 + +class BadInt2: + def __int__(self): + return True + +class BadInt3(int): + def __int__(self): + return True + + +class Float: + def __float__(self): + return 4.25 + +class FloatSubclass(float): + pass + +class FloatSubclass2(float): + def __float__(self): + return 4.25 + +class BadFloat: + def __float__(self): + return 687 + +class BadFloat2: + def __float__(self): + return FloatSubclass(4.25) + +class BadFloat3(float): + def __float__(self): + return FloatSubclass(4.25) + + +class Complex: + def __complex__(self): + return 4.25+0.5j + +class ComplexSubclass(complex): + pass + +class ComplexSubclass2(complex): + def __complex__(self): + return 4.25+0.5j + +class BadComplex: + def __complex__(self): + return 1.25 + +class BadComplex2: + def __complex__(self): + return ComplexSubclass(4.25+0.5j) + +class BadComplex3(complex): + def __complex__(self): + return ComplexSubclass(4.25+0.5j) + + +class TupleSubclass(tuple): + pass + +class DictSubclass(dict): + pass + + +class Unsigned_TestCase(unittest.TestCase): + def test_b(self): + from _testcapi import getargs_b + # b returns 'unsigned char', and does range checking (0 ... 
UCHAR_MAX) + self.assertRaises(TypeError, getargs_b, 3.14) + self.assertEqual(99, getargs_b(Index())) + self.assertEqual(0, getargs_b(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_b, BadIndex()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(1, getargs_b(BadIndex2())) + self.assertEqual(0, getargs_b(BadIndex3())) + self.assertRaises(TypeError, getargs_b, Int()) + self.assertEqual(0, getargs_b(IntSubclass())) + self.assertRaises(TypeError, getargs_b, BadInt()) + self.assertRaises(TypeError, getargs_b, BadInt2()) + self.assertEqual(0, getargs_b(BadInt3())) + + self.assertRaises(OverflowError, getargs_b, -1) + self.assertEqual(0, getargs_b(0)) + self.assertEqual(UCHAR_MAX, getargs_b(UCHAR_MAX)) + self.assertRaises(OverflowError, getargs_b, UCHAR_MAX + 1) + + self.assertEqual(42, getargs_b(42)) + self.assertRaises(OverflowError, getargs_b, VERY_LARGE) + + def test_B(self): + from _testcapi import getargs_B + # B returns 'unsigned char', no range checking + self.assertRaises(TypeError, getargs_B, 3.14) + self.assertEqual(99, getargs_B(Index())) + self.assertEqual(0, getargs_B(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_B, BadIndex()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(1, getargs_B(BadIndex2())) + self.assertEqual(0, getargs_B(BadIndex3())) + self.assertRaises(TypeError, getargs_B, Int()) + self.assertEqual(0, getargs_B(IntSubclass())) + self.assertRaises(TypeError, getargs_B, BadInt()) + self.assertRaises(TypeError, getargs_B, BadInt2()) + self.assertEqual(0, getargs_B(BadInt3())) + + self.assertEqual(UCHAR_MAX, getargs_B(-1)) + self.assertEqual(0, getargs_B(0)) + self.assertEqual(UCHAR_MAX, getargs_B(UCHAR_MAX)) + self.assertEqual(0, getargs_B(UCHAR_MAX+1)) + + self.assertEqual(42, getargs_B(42)) + self.assertEqual(UCHAR_MAX & VERY_LARGE, getargs_B(VERY_LARGE)) + + def test_H(self): + from _testcapi import getargs_H + # H returns 'unsigned short', no range checking + self.assertRaises(TypeError, getargs_H, 3.14) + self.assertEqual(99, getargs_H(Index())) + self.assertEqual(0, getargs_H(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_H, BadIndex()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(1, getargs_H(BadIndex2())) + self.assertEqual(0, getargs_H(BadIndex3())) + self.assertRaises(TypeError, getargs_H, Int()) + self.assertEqual(0, getargs_H(IntSubclass())) + self.assertRaises(TypeError, getargs_H, BadInt()) + self.assertRaises(TypeError, getargs_H, BadInt2()) + self.assertEqual(0, getargs_H(BadInt3())) + + self.assertEqual(USHRT_MAX, getargs_H(-1)) + self.assertEqual(0, getargs_H(0)) + self.assertEqual(USHRT_MAX, getargs_H(USHRT_MAX)) + self.assertEqual(0, getargs_H(USHRT_MAX+1)) + + self.assertEqual(42, getargs_H(42)) + + self.assertEqual(VERY_LARGE & USHRT_MAX, getargs_H(VERY_LARGE)) + + def test_I(self): + from _testcapi import getargs_I + # I returns 'unsigned int', no range checking + self.assertRaises(TypeError, getargs_I, 3.14) + self.assertEqual(99, getargs_I(Index())) + self.assertEqual(0, getargs_I(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_I, BadIndex()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(1, getargs_I(BadIndex2())) + self.assertEqual(0, getargs_I(BadIndex3())) + self.assertRaises(TypeError, getargs_I, Int()) + self.assertEqual(0, getargs_I(IntSubclass())) + self.assertRaises(TypeError, getargs_I, BadInt()) + self.assertRaises(TypeError, getargs_I, BadInt2()) + self.assertEqual(0, getargs_I(BadInt3())) + + 
self.assertEqual(UINT_MAX, getargs_I(-1)) + self.assertEqual(0, getargs_I(0)) + self.assertEqual(UINT_MAX, getargs_I(UINT_MAX)) + self.assertEqual(0, getargs_I(UINT_MAX+1)) + + self.assertEqual(42, getargs_I(42)) + + self.assertEqual(VERY_LARGE & UINT_MAX, getargs_I(VERY_LARGE)) + + def test_k(self): + from _testcapi import getargs_k + # k returns 'unsigned long', no range checking + # it does not accept float, or instances with __int__ + self.assertRaises(TypeError, getargs_k, 3.14) + self.assertRaises(TypeError, getargs_k, Index()) + self.assertEqual(0, getargs_k(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_k, BadIndex()) + self.assertRaises(TypeError, getargs_k, BadIndex2()) + self.assertEqual(0, getargs_k(BadIndex3())) + self.assertRaises(TypeError, getargs_k, Int()) + self.assertEqual(0, getargs_k(IntSubclass())) + self.assertRaises(TypeError, getargs_k, BadInt()) + self.assertRaises(TypeError, getargs_k, BadInt2()) + self.assertEqual(0, getargs_k(BadInt3())) + + self.assertEqual(ULONG_MAX, getargs_k(-1)) + self.assertEqual(0, getargs_k(0)) + self.assertEqual(ULONG_MAX, getargs_k(ULONG_MAX)) + self.assertEqual(0, getargs_k(ULONG_MAX+1)) + + self.assertEqual(42, getargs_k(42)) + + self.assertEqual(VERY_LARGE & ULONG_MAX, getargs_k(VERY_LARGE)) + +class Signed_TestCase(unittest.TestCase): + def test_h(self): + from _testcapi import getargs_h + # h returns 'short', and does range checking (SHRT_MIN ... SHRT_MAX) + self.assertRaises(TypeError, getargs_h, 3.14) + self.assertEqual(99, getargs_h(Index())) + self.assertEqual(0, getargs_h(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_h, BadIndex()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(1, getargs_h(BadIndex2())) + self.assertEqual(0, getargs_h(BadIndex3())) + self.assertRaises(TypeError, getargs_h, Int()) + self.assertEqual(0, getargs_h(IntSubclass())) + self.assertRaises(TypeError, getargs_h, BadInt()) + self.assertRaises(TypeError, getargs_h, BadInt2()) + self.assertEqual(0, getargs_h(BadInt3())) + + self.assertRaises(OverflowError, getargs_h, SHRT_MIN-1) + self.assertEqual(SHRT_MIN, getargs_h(SHRT_MIN)) + self.assertEqual(SHRT_MAX, getargs_h(SHRT_MAX)) + self.assertRaises(OverflowError, getargs_h, SHRT_MAX+1) + + self.assertEqual(42, getargs_h(42)) + self.assertRaises(OverflowError, getargs_h, VERY_LARGE) + + def test_i(self): + from _testcapi import getargs_i + # i returns 'int', and does range checking (INT_MIN ... INT_MAX) + self.assertRaises(TypeError, getargs_i, 3.14) + self.assertEqual(99, getargs_i(Index())) + self.assertEqual(0, getargs_i(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_i, BadIndex()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(1, getargs_i(BadIndex2())) + self.assertEqual(0, getargs_i(BadIndex3())) + self.assertRaises(TypeError, getargs_i, Int()) + self.assertEqual(0, getargs_i(IntSubclass())) + self.assertRaises(TypeError, getargs_i, BadInt()) + self.assertRaises(TypeError, getargs_i, BadInt2()) + self.assertEqual(0, getargs_i(BadInt3())) + + self.assertRaises(OverflowError, getargs_i, INT_MIN-1) + self.assertEqual(INT_MIN, getargs_i(INT_MIN)) + self.assertEqual(INT_MAX, getargs_i(INT_MAX)) + self.assertRaises(OverflowError, getargs_i, INT_MAX+1) + + self.assertEqual(42, getargs_i(42)) + self.assertRaises(OverflowError, getargs_i, VERY_LARGE) + + def test_l(self): + from _testcapi import getargs_l + # l returns 'long', and does range checking (LONG_MIN ... 
LONG_MAX) + self.assertRaises(TypeError, getargs_l, 3.14) + self.assertEqual(99, getargs_l(Index())) + self.assertEqual(0, getargs_l(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_l, BadIndex()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(1, getargs_l(BadIndex2())) + self.assertEqual(0, getargs_l(BadIndex3())) + self.assertRaises(TypeError, getargs_l, Int()) + self.assertEqual(0, getargs_l(IntSubclass())) + self.assertRaises(TypeError, getargs_l, BadInt()) + self.assertRaises(TypeError, getargs_l, BadInt2()) + self.assertEqual(0, getargs_l(BadInt3())) + + self.assertRaises(OverflowError, getargs_l, LONG_MIN-1) + self.assertEqual(LONG_MIN, getargs_l(LONG_MIN)) + self.assertEqual(LONG_MAX, getargs_l(LONG_MAX)) + self.assertRaises(OverflowError, getargs_l, LONG_MAX+1) + + self.assertEqual(42, getargs_l(42)) + self.assertRaises(OverflowError, getargs_l, VERY_LARGE) + + def test_n(self): + from _testcapi import getargs_n + # n returns 'Py_ssize_t', and does range checking + # (PY_SSIZE_T_MIN ... PY_SSIZE_T_MAX) + self.assertRaises(TypeError, getargs_n, 3.14) + self.assertEqual(99, getargs_n(Index())) + self.assertEqual(0, getargs_n(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_n, BadIndex()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(1, getargs_n(BadIndex2())) + self.assertEqual(0, getargs_n(BadIndex3())) + self.assertRaises(TypeError, getargs_n, Int()) + self.assertEqual(0, getargs_n(IntSubclass())) + self.assertRaises(TypeError, getargs_n, BadInt()) + self.assertRaises(TypeError, getargs_n, BadInt2()) + self.assertEqual(0, getargs_n(BadInt3())) + + self.assertRaises(OverflowError, getargs_n, PY_SSIZE_T_MIN-1) + self.assertEqual(PY_SSIZE_T_MIN, getargs_n(PY_SSIZE_T_MIN)) + self.assertEqual(PY_SSIZE_T_MAX, getargs_n(PY_SSIZE_T_MAX)) + self.assertRaises(OverflowError, getargs_n, PY_SSIZE_T_MAX+1) + + self.assertEqual(42, getargs_n(42)) + self.assertRaises(OverflowError, getargs_n, VERY_LARGE) + + +class LongLong_TestCase(unittest.TestCase): + def test_L(self): + from _testcapi import getargs_L + # L returns 'long long', and does range checking (LLONG_MIN + # ... 
LLONG_MAX) + self.assertRaises(TypeError, getargs_L, 3.14) + self.assertRaises(TypeError, getargs_L, "Hello") + self.assertEqual(99, getargs_L(Index())) + self.assertEqual(0, getargs_L(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_L, BadIndex()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(1, getargs_L(BadIndex2())) + self.assertEqual(0, getargs_L(BadIndex3())) + self.assertRaises(TypeError, getargs_L, Int()) + self.assertEqual(0, getargs_L(IntSubclass())) + self.assertRaises(TypeError, getargs_L, BadInt()) + self.assertRaises(TypeError, getargs_L, BadInt2()) + self.assertEqual(0, getargs_L(BadInt3())) + + self.assertRaises(OverflowError, getargs_L, LLONG_MIN-1) + self.assertEqual(LLONG_MIN, getargs_L(LLONG_MIN)) + self.assertEqual(LLONG_MAX, getargs_L(LLONG_MAX)) + self.assertRaises(OverflowError, getargs_L, LLONG_MAX+1) + + self.assertEqual(42, getargs_L(42)) + self.assertRaises(OverflowError, getargs_L, VERY_LARGE) + + def test_K(self): + from _testcapi import getargs_K + # K return 'unsigned long long', no range checking + self.assertRaises(TypeError, getargs_K, 3.14) + self.assertRaises(TypeError, getargs_K, Index()) + self.assertEqual(0, getargs_K(IndexIntSubclass())) + self.assertRaises(TypeError, getargs_K, BadIndex()) + self.assertRaises(TypeError, getargs_K, BadIndex2()) + self.assertEqual(0, getargs_K(BadIndex3())) + self.assertRaises(TypeError, getargs_K, Int()) + self.assertEqual(0, getargs_K(IntSubclass())) + self.assertRaises(TypeError, getargs_K, BadInt()) + self.assertRaises(TypeError, getargs_K, BadInt2()) + self.assertEqual(0, getargs_K(BadInt3())) + + self.assertEqual(ULLONG_MAX, getargs_K(ULLONG_MAX)) + self.assertEqual(0, getargs_K(0)) + self.assertEqual(0, getargs_K(ULLONG_MAX+1)) + + self.assertEqual(42, getargs_K(42)) + + self.assertEqual(VERY_LARGE & ULLONG_MAX, getargs_K(VERY_LARGE)) + + +class Float_TestCase(unittest.TestCase): + def assertEqualWithSign(self, actual, expected): + self.assertEqual(actual, expected) + self.assertEqual(math.copysign(1, actual), math.copysign(1, expected)) + + def test_f(self): + from _testcapi import getargs_f + self.assertEqual(getargs_f(4.25), 4.25) + self.assertEqual(getargs_f(4), 4.0) + self.assertRaises(TypeError, getargs_f, 4.25+0j) + self.assertEqual(getargs_f(Float()), 4.25) + self.assertEqual(getargs_f(FloatSubclass(7.5)), 7.5) + self.assertEqual(getargs_f(FloatSubclass2(7.5)), 7.5) + self.assertRaises(TypeError, getargs_f, BadFloat()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(getargs_f(BadFloat2()), 4.25) + self.assertEqual(getargs_f(BadFloat3(7.5)), 7.5) + self.assertEqual(getargs_f(Index()), 99.0) + self.assertRaises(TypeError, getargs_f, Int()) + + for x in (FLT_MIN, -FLT_MIN, FLT_MAX, -FLT_MAX, INF, -INF): + self.assertEqual(getargs_f(x), x) + if FLT_MAX < DBL_MAX: + self.assertEqual(getargs_f(DBL_MAX), INF) + self.assertEqual(getargs_f(-DBL_MAX), -INF) + if FLT_MIN > DBL_MIN: + self.assertEqualWithSign(getargs_f(DBL_MIN), 0.0) + self.assertEqualWithSign(getargs_f(-DBL_MIN), -0.0) + self.assertEqualWithSign(getargs_f(0.0), 0.0) + self.assertEqualWithSign(getargs_f(-0.0), -0.0) + r = getargs_f(NAN) + self.assertNotEqual(r, r) + + @support.requires_IEEE_754 + def test_f_rounding(self): + from _testcapi import getargs_f + self.assertEqual(getargs_f(3.40282356e38), FLT_MAX) + self.assertEqual(getargs_f(-3.40282356e38), -FLT_MAX) + + def test_d(self): + from _testcapi import getargs_d + self.assertEqual(getargs_d(4.25), 4.25) + self.assertEqual(getargs_d(4), 4.0) + 
self.assertRaises(TypeError, getargs_d, 4.25+0j) + self.assertEqual(getargs_d(Float()), 4.25) + self.assertEqual(getargs_d(FloatSubclass(7.5)), 7.5) + self.assertEqual(getargs_d(FloatSubclass2(7.5)), 7.5) + self.assertRaises(TypeError, getargs_d, BadFloat()) + with self.assertWarns(DeprecationWarning): + self.assertEqual(getargs_d(BadFloat2()), 4.25) + self.assertEqual(getargs_d(BadFloat3(7.5)), 7.5) + self.assertEqual(getargs_d(Index()), 99.0) + self.assertRaises(TypeError, getargs_d, Int()) + + for x in (DBL_MIN, -DBL_MIN, DBL_MAX, -DBL_MAX, INF, -INF): + self.assertEqual(getargs_d(x), x) + self.assertRaises(OverflowError, getargs_d, 1< 1 + self.assertEqual(getargs_c(b'a'), 97) + self.assertEqual(getargs_c(bytearray(b'a')), 97) + self.assertRaises(TypeError, getargs_c, memoryview(b'a')) + self.assertRaises(TypeError, getargs_c, 's') + self.assertRaises(TypeError, getargs_c, 97) + self.assertRaises(TypeError, getargs_c, None) + + def test_y(self): + from _testcapi import getargs_y + self.assertRaises(TypeError, getargs_y, 'abc\xe9') + self.assertEqual(getargs_y(b'bytes'), b'bytes') + self.assertRaises(ValueError, getargs_y, b'nul:\0') + self.assertRaises(TypeError, getargs_y, bytearray(b'bytearray')) + self.assertRaises(TypeError, getargs_y, memoryview(b'memoryview')) + self.assertRaises(TypeError, getargs_y, None) + + def test_y_star(self): + from _testcapi import getargs_y_star + self.assertRaises(TypeError, getargs_y_star, 'abc\xe9') + self.assertEqual(getargs_y_star(b'bytes'), b'bytes') + self.assertEqual(getargs_y_star(b'nul:\0'), b'nul:\0') + self.assertEqual(getargs_y_star(bytearray(b'bytearray')), b'bytearray') + self.assertEqual(getargs_y_star(memoryview(b'memoryview')), b'memoryview') + self.assertRaises(TypeError, getargs_y_star, None) + + def test_y_hash(self): + from _testcapi import getargs_y_hash + self.assertRaises(TypeError, getargs_y_hash, 'abc\xe9') + self.assertEqual(getargs_y_hash(b'bytes'), b'bytes') + self.assertEqual(getargs_y_hash(b'nul:\0'), b'nul:\0') + self.assertRaises(TypeError, getargs_y_hash, bytearray(b'bytearray')) + self.assertRaises(TypeError, getargs_y_hash, memoryview(b'memoryview')) + self.assertRaises(TypeError, getargs_y_hash, None) + + def test_w_star(self): + # getargs_w_star() modifies first and last byte + from _testcapi import getargs_w_star + self.assertRaises(TypeError, getargs_w_star, 'abc\xe9') + self.assertRaises(TypeError, getargs_w_star, b'bytes') + self.assertRaises(TypeError, getargs_w_star, b'nul:\0') + self.assertRaises(TypeError, getargs_w_star, memoryview(b'bytes')) + buf = bytearray(b'bytearray') + self.assertEqual(getargs_w_star(buf), b'[ytearra]') + self.assertEqual(buf, bytearray(b'[ytearra]')) + buf = bytearray(b'memoryview') + self.assertEqual(getargs_w_star(memoryview(buf)), b'[emoryvie]') + self.assertEqual(buf, bytearray(b'[emoryvie]')) + self.assertRaises(TypeError, getargs_w_star, None) + + +class String_TestCase(unittest.TestCase): + def test_C(self): + from _testcapi import getargs_C + self.assertRaises(TypeError, getargs_C, 'abc') # len > 1 + self.assertEqual(getargs_C('a'), 97) + self.assertEqual(getargs_C('\u20ac'), 0x20ac) + self.assertEqual(getargs_C('\U0001f40d'), 0x1f40d) + self.assertRaises(TypeError, getargs_C, b'a') + self.assertRaises(TypeError, getargs_C, bytearray(b'a')) + self.assertRaises(TypeError, getargs_C, memoryview(b'a')) + self.assertRaises(TypeError, getargs_C, 97) + self.assertRaises(TypeError, getargs_C, None) + + def test_s(self): + from _testcapi import getargs_s + 
self.assertEqual(getargs_s('abc\xe9'), b'abc\xc3\xa9') + self.assertRaises(ValueError, getargs_s, 'nul:\0') + self.assertRaises(TypeError, getargs_s, b'bytes') + self.assertRaises(TypeError, getargs_s, bytearray(b'bytearray')) + self.assertRaises(TypeError, getargs_s, memoryview(b'memoryview')) + self.assertRaises(TypeError, getargs_s, None) + + def test_s_star(self): + from _testcapi import getargs_s_star + self.assertEqual(getargs_s_star('abc\xe9'), b'abc\xc3\xa9') + self.assertEqual(getargs_s_star('nul:\0'), b'nul:\0') + self.assertEqual(getargs_s_star(b'bytes'), b'bytes') + self.assertEqual(getargs_s_star(bytearray(b'bytearray')), b'bytearray') + self.assertEqual(getargs_s_star(memoryview(b'memoryview')), b'memoryview') + self.assertRaises(TypeError, getargs_s_star, None) + + def test_s_hash(self): + from _testcapi import getargs_s_hash + self.assertEqual(getargs_s_hash('abc\xe9'), b'abc\xc3\xa9') + self.assertEqual(getargs_s_hash('nul:\0'), b'nul:\0') + self.assertEqual(getargs_s_hash(b'bytes'), b'bytes') + self.assertRaises(TypeError, getargs_s_hash, bytearray(b'bytearray')) + self.assertRaises(TypeError, getargs_s_hash, memoryview(b'memoryview')) + self.assertRaises(TypeError, getargs_s_hash, None) + + def test_s_hash_int(self): + # "s#" without PY_SSIZE_T_CLEAN defined. + from _testcapi import getargs_s_hash_int + from _testcapi import getargs_s_hash_int2 + buf = bytearray([1, 2]) + self.assertRaises(SystemError, getargs_s_hash_int, buf, "abc") + self.assertRaises(SystemError, getargs_s_hash_int, buf, x=42) + self.assertRaises(SystemError, getargs_s_hash_int, buf, x="abc") + self.assertRaises(SystemError, getargs_s_hash_int2, buf, ("abc",)) + self.assertRaises(SystemError, getargs_s_hash_int2, buf, x=42) + self.assertRaises(SystemError, getargs_s_hash_int2, buf, x="abc") + buf.append(3) # still mutable -- not locked by a buffer export + # getargs_s_hash_int(buf) may not raise SystemError because skipitem() + # is not called. But it is an implementation detail. 
+ # getargs_s_hash_int(buf) + # getargs_s_hash_int2(buf) + + def test_z(self): + from _testcapi import getargs_z + self.assertEqual(getargs_z('abc\xe9'), b'abc\xc3\xa9') + self.assertRaises(ValueError, getargs_z, 'nul:\0') + self.assertRaises(TypeError, getargs_z, b'bytes') + self.assertRaises(TypeError, getargs_z, bytearray(b'bytearray')) + self.assertRaises(TypeError, getargs_z, memoryview(b'memoryview')) + self.assertIsNone(getargs_z(None)) + + def test_z_star(self): + from _testcapi import getargs_z_star + self.assertEqual(getargs_z_star('abc\xe9'), b'abc\xc3\xa9') + self.assertEqual(getargs_z_star('nul:\0'), b'nul:\0') + self.assertEqual(getargs_z_star(b'bytes'), b'bytes') + self.assertEqual(getargs_z_star(bytearray(b'bytearray')), b'bytearray') + self.assertEqual(getargs_z_star(memoryview(b'memoryview')), b'memoryview') + self.assertIsNone(getargs_z_star(None)) + + def test_z_hash(self): + from _testcapi import getargs_z_hash + self.assertEqual(getargs_z_hash('abc\xe9'), b'abc\xc3\xa9') + self.assertEqual(getargs_z_hash('nul:\0'), b'nul:\0') + self.assertEqual(getargs_z_hash(b'bytes'), b'bytes') + self.assertRaises(TypeError, getargs_z_hash, bytearray(b'bytearray')) + self.assertRaises(TypeError, getargs_z_hash, memoryview(b'memoryview')) + self.assertIsNone(getargs_z_hash(None)) + + def test_es(self): + from _testcapi import getargs_es + self.assertEqual(getargs_es('abc\xe9'), b'abc\xc3\xa9') + self.assertEqual(getargs_es('abc\xe9', 'latin1'), b'abc\xe9') + self.assertRaises(UnicodeEncodeError, getargs_es, 'abc\xe9', 'ascii') + self.assertRaises(LookupError, getargs_es, 'abc\xe9', 'spam') + self.assertRaises(TypeError, getargs_es, b'bytes', 'latin1') + self.assertRaises(TypeError, getargs_es, bytearray(b'bytearray'), 'latin1') + self.assertRaises(TypeError, getargs_es, memoryview(b'memoryview'), 'latin1') + self.assertRaises(TypeError, getargs_es, None, 'latin1') + self.assertRaises(TypeError, getargs_es, 'nul:\0', 'latin1') + + def test_et(self): + from _testcapi import getargs_et + self.assertEqual(getargs_et('abc\xe9'), b'abc\xc3\xa9') + self.assertEqual(getargs_et('abc\xe9', 'latin1'), b'abc\xe9') + self.assertRaises(UnicodeEncodeError, getargs_et, 'abc\xe9', 'ascii') + self.assertRaises(LookupError, getargs_et, 'abc\xe9', 'spam') + self.assertEqual(getargs_et(b'bytes', 'latin1'), b'bytes') + self.assertEqual(getargs_et(bytearray(b'bytearray'), 'latin1'), b'bytearray') + self.assertRaises(TypeError, getargs_et, memoryview(b'memoryview'), 'latin1') + self.assertRaises(TypeError, getargs_et, None, 'latin1') + self.assertRaises(TypeError, getargs_et, 'nul:\0', 'latin1') + self.assertRaises(TypeError, getargs_et, b'nul:\0', 'latin1') + self.assertRaises(TypeError, getargs_et, bytearray(b'nul:\0'), 'latin1') + + def test_es_hash(self): + from _testcapi import getargs_es_hash + self.assertEqual(getargs_es_hash('abc\xe9'), b'abc\xc3\xa9') + self.assertEqual(getargs_es_hash('abc\xe9', 'latin1'), b'abc\xe9') + self.assertRaises(UnicodeEncodeError, getargs_es_hash, 'abc\xe9', 'ascii') + self.assertRaises(LookupError, getargs_es_hash, 'abc\xe9', 'spam') + self.assertRaises(TypeError, getargs_es_hash, b'bytes', 'latin1') + self.assertRaises(TypeError, getargs_es_hash, bytearray(b'bytearray'), 'latin1') + self.assertRaises(TypeError, getargs_es_hash, memoryview(b'memoryview'), 'latin1') + self.assertRaises(TypeError, getargs_es_hash, None, 'latin1') + self.assertEqual(getargs_es_hash('nul:\0', 'latin1'), b'nul:\0') + + buf = bytearray(b'x'*8) + self.assertEqual(getargs_es_hash('abc\xe9', 
'latin1', buf), b'abc\xe9') + self.assertEqual(buf, bytearray(b'abc\xe9\x00xxx')) + buf = bytearray(b'x'*5) + self.assertEqual(getargs_es_hash('abc\xe9', 'latin1', buf), b'abc\xe9') + self.assertEqual(buf, bytearray(b'abc\xe9\x00')) + buf = bytearray(b'x'*4) + self.assertRaises(ValueError, getargs_es_hash, 'abc\xe9', 'latin1', buf) + self.assertEqual(buf, bytearray(b'x'*4)) + buf = bytearray() + self.assertRaises(ValueError, getargs_es_hash, 'abc\xe9', 'latin1', buf) + + def test_et_hash(self): + from _testcapi import getargs_et_hash + self.assertEqual(getargs_et_hash('abc\xe9'), b'abc\xc3\xa9') + self.assertEqual(getargs_et_hash('abc\xe9', 'latin1'), b'abc\xe9') + self.assertRaises(UnicodeEncodeError, getargs_et_hash, 'abc\xe9', 'ascii') + self.assertRaises(LookupError, getargs_et_hash, 'abc\xe9', 'spam') + self.assertEqual(getargs_et_hash(b'bytes', 'latin1'), b'bytes') + self.assertEqual(getargs_et_hash(bytearray(b'bytearray'), 'latin1'), b'bytearray') + self.assertRaises(TypeError, getargs_et_hash, memoryview(b'memoryview'), 'latin1') + self.assertRaises(TypeError, getargs_et_hash, None, 'latin1') + self.assertEqual(getargs_et_hash('nul:\0', 'latin1'), b'nul:\0') + self.assertEqual(getargs_et_hash(b'nul:\0', 'latin1'), b'nul:\0') + self.assertEqual(getargs_et_hash(bytearray(b'nul:\0'), 'latin1'), b'nul:\0') + + buf = bytearray(b'x'*8) + self.assertEqual(getargs_et_hash('abc\xe9', 'latin1', buf), b'abc\xe9') + self.assertEqual(buf, bytearray(b'abc\xe9\x00xxx')) + buf = bytearray(b'x'*5) + self.assertEqual(getargs_et_hash('abc\xe9', 'latin1', buf), b'abc\xe9') + self.assertEqual(buf, bytearray(b'abc\xe9\x00')) + buf = bytearray(b'x'*4) + self.assertRaises(ValueError, getargs_et_hash, 'abc\xe9', 'latin1', buf) + self.assertEqual(buf, bytearray(b'x'*4)) + buf = bytearray() + self.assertRaises(ValueError, getargs_et_hash, 'abc\xe9', 'latin1', buf) + + @support.requires_legacy_unicode_capi + def test_u(self): + from _testcapi import getargs_u + with self.assertWarns(DeprecationWarning): + self.assertEqual(getargs_u('abc\xe9'), 'abc\xe9') + with self.assertWarns(DeprecationWarning): + self.assertRaises(ValueError, getargs_u, 'nul:\0') + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_u, b'bytes') + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_u, bytearray(b'bytearray')) + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_u, memoryview(b'memoryview')) + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_u, None) + with warnings.catch_warnings(): + warnings.simplefilter('error', DeprecationWarning) + self.assertRaises(DeprecationWarning, getargs_u, 'abc\xe9') + + @support.requires_legacy_unicode_capi + def test_u_hash(self): + from _testcapi import getargs_u_hash + with self.assertWarns(DeprecationWarning): + self.assertEqual(getargs_u_hash('abc\xe9'), 'abc\xe9') + with self.assertWarns(DeprecationWarning): + self.assertEqual(getargs_u_hash('nul:\0'), 'nul:\0') + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_u_hash, b'bytes') + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_u_hash, bytearray(b'bytearray')) + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_u_hash, memoryview(b'memoryview')) + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_u_hash, None) + with warnings.catch_warnings(): + warnings.simplefilter('error', 
DeprecationWarning) + self.assertRaises(DeprecationWarning, getargs_u_hash, 'abc\xe9') + + @support.requires_legacy_unicode_capi + def test_Z(self): + from _testcapi import getargs_Z + with self.assertWarns(DeprecationWarning): + self.assertEqual(getargs_Z('abc\xe9'), 'abc\xe9') + with self.assertWarns(DeprecationWarning): + self.assertRaises(ValueError, getargs_Z, 'nul:\0') + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_Z, b'bytes') + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_Z, bytearray(b'bytearray')) + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_Z, memoryview(b'memoryview')) + with self.assertWarns(DeprecationWarning): + self.assertIsNone(getargs_Z(None)) + with warnings.catch_warnings(): + warnings.simplefilter('error', DeprecationWarning) + self.assertRaises(DeprecationWarning, getargs_Z, 'abc\xe9') + + @support.requires_legacy_unicode_capi + def test_Z_hash(self): + from _testcapi import getargs_Z_hash + with self.assertWarns(DeprecationWarning): + self.assertEqual(getargs_Z_hash('abc\xe9'), 'abc\xe9') + with self.assertWarns(DeprecationWarning): + self.assertEqual(getargs_Z_hash('nul:\0'), 'nul:\0') + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_Z_hash, b'bytes') + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_Z_hash, bytearray(b'bytearray')) + with self.assertWarns(DeprecationWarning): + self.assertRaises(TypeError, getargs_Z_hash, memoryview(b'memoryview')) + with self.assertWarns(DeprecationWarning): + self.assertIsNone(getargs_Z_hash(None)) + with warnings.catch_warnings(): + warnings.simplefilter('error', DeprecationWarning) + self.assertRaises(DeprecationWarning, getargs_Z_hash, 'abc\xe9') + + +class Object_TestCase(unittest.TestCase): + def test_S(self): + from _testcapi import getargs_S + obj = b'bytes' + self.assertIs(getargs_S(obj), obj) + self.assertRaises(TypeError, getargs_S, bytearray(b'bytearray')) + self.assertRaises(TypeError, getargs_S, 'str') + self.assertRaises(TypeError, getargs_S, None) + self.assertRaises(TypeError, getargs_S, memoryview(obj)) + + def test_Y(self): + from _testcapi import getargs_Y + obj = bytearray(b'bytearray') + self.assertIs(getargs_Y(obj), obj) + self.assertRaises(TypeError, getargs_Y, b'bytes') + self.assertRaises(TypeError, getargs_Y, 'str') + self.assertRaises(TypeError, getargs_Y, None) + self.assertRaises(TypeError, getargs_Y, memoryview(obj)) + + def test_U(self): + from _testcapi import getargs_U + obj = 'str' + self.assertIs(getargs_U(obj), obj) + self.assertRaises(TypeError, getargs_U, b'bytes') + self.assertRaises(TypeError, getargs_U, bytearray(b'bytearray')) + self.assertRaises(TypeError, getargs_U, None) + + +# Bug #6012 +class Test6012(unittest.TestCase): + def test(self): + self.assertEqual(_testcapi.argparsing("Hello", "World"), 1) + + +class SkipitemTest(unittest.TestCase): + + # u, and Z raises DeprecationWarning + @warnings_helper.ignore_warnings(category=DeprecationWarning) + def test_skipitem(self): + """ + If this test failed, you probably added a new "format unit" + in Python/getargs.c, but neglected to update our poor friend + skipitem() in the same file. (If so, shame on you!) 
+ + With a few exceptions**, this function brute-force tests all + printable ASCII*** characters (32 to 126 inclusive) as format units, + checking to see that PyArg_ParseTupleAndKeywords() return consistent + errors both when the unit is attempted to be used and when it is + skipped. If the format unit doesn't exist, we'll get one of two + specific error messages (one for used, one for skipped); if it does + exist we *won't* get that error--we'll get either no error or some + other error. If we get the specific "does not exist" error for one + test and not for the other, there's a mismatch, and the test fails. + + ** Some format units have special funny semantics and it would + be difficult to accommodate them here. Since these are all + well-established and properly skipped in skipitem() we can + get away with not testing them--this test is really intended + to catch *new* format units. + + *** Python C source files must be ASCII. Therefore it's impossible + to have non-ASCII format units. + + """ + empty_tuple = () + tuple_1 = (0,) + dict_b = {'b':1} + keywords = ["a", "b"] + + for i in range(32, 127): + c = chr(i) + + # skip parentheses, the error reporting is inconsistent about them + # skip 'e', it's always a two-character code + # skip '|' and '$', they don't represent arguments anyway + if c in '()e|$': + continue + + # test the format unit when not skipped + format = c + "i" + try: + _testcapi.parse_tuple_and_keywords(tuple_1, dict_b, + format, keywords) + when_not_skipped = False + except SystemError as e: + s = "argument 1 (impossible)" + when_not_skipped = (str(e) == s) + except TypeError: + when_not_skipped = False + + # test the format unit when skipped + optional_format = "|" + format + try: + _testcapi.parse_tuple_and_keywords(empty_tuple, dict_b, + optional_format, keywords) + when_skipped = False + except SystemError as e: + s = "impossible: '{}'".format(format) + when_skipped = (str(e) == s) + + message = ("test_skipitem_parity: " + "detected mismatch between convertsimple and skipitem " + "for format unit '{}' ({}), not skipped {}, skipped {}".format( + c, i, when_skipped, when_not_skipped)) + self.assertIs(when_skipped, when_not_skipped, message) + + def test_skipitem_with_suffix(self): + parse = _testcapi.parse_tuple_and_keywords + empty_tuple = () + tuple_1 = (0,) + dict_b = {'b':1} + keywords = ["a", "b"] + + supported = ('s#', 's*', 'z#', 'z*', 'u#', 'Z#', 'y#', 'y*', 'w#', 'w*') + for c in string.ascii_letters: + for c2 in '#*': + f = c + c2 + with self.subTest(format=f): + optional_format = "|" + f + "i" + if f in supported: + parse(empty_tuple, dict_b, optional_format, keywords) + else: + with self.assertRaisesRegex(SystemError, + 'impossible'): + parse(empty_tuple, dict_b, optional_format, keywords) + + for c in map(chr, range(32, 128)): + f = 'e' + c + optional_format = "|" + f + "i" + with self.subTest(format=f): + if c in 'st': + parse(empty_tuple, dict_b, optional_format, keywords) + else: + with self.assertRaisesRegex(SystemError, + 'impossible'): + parse(empty_tuple, dict_b, optional_format, keywords) + + +class ParseTupleAndKeywords_Test(unittest.TestCase): + + def test_parse_tuple_and_keywords(self): + # Test handling errors in the parse_tuple_and_keywords helper itself + self.assertRaises(TypeError, _testcapi.parse_tuple_and_keywords, + (), {}, 42, []) + self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords, + (), {}, '', 42) + self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords, + (), {}, '', [''] * 42) + 
self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords, + (), {}, '', [42]) + + def test_bad_use(self): + # Test handling invalid format and keywords in + # PyArg_ParseTupleAndKeywords() + self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, + (1,), {}, '||O', ['a']) + self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, + (1, 2), {}, '|O|O', ['a', 'b']) + self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, + (), {'a': 1}, '$$O', ['a']) + self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, + (), {'a': 1, 'b': 2}, '$O$O', ['a', 'b']) + self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, + (), {'a': 1}, '$|O', ['a']) + self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, + (), {'a': 1, 'b': 2}, '$O|O', ['a', 'b']) + self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, + (1,), {}, '|O', ['a', 'b']) + self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, + (1,), {}, '|OO', ['a']) + self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, + (), {}, '|$O', ['']) + self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, + (), {}, '|OO', ['a', '']) + + def test_positional_only(self): + parse = _testcapi.parse_tuple_and_keywords + + parse((1, 2, 3), {}, 'OOO', ['', '', 'a']) + parse((1, 2), {'a': 3}, 'OOO', ['', '', 'a']) + with self.assertRaisesRegex(TypeError, + r'function takes at least 2 positional arguments \(1 given\)'): + parse((1,), {'a': 3}, 'OOO', ['', '', 'a']) + parse((1,), {}, 'O|OO', ['', '', 'a']) + with self.assertRaisesRegex(TypeError, + r'function takes at least 1 positional argument \(0 given\)'): + parse((), {}, 'O|OO', ['', '', 'a']) + parse((1, 2), {'a': 3}, 'OO$O', ['', '', 'a']) + with self.assertRaisesRegex(TypeError, + r'function takes exactly 2 positional arguments \(1 given\)'): + parse((1,), {'a': 3}, 'OO$O', ['', '', 'a']) + parse((1,), {}, 'O|O$O', ['', '', 'a']) + with self.assertRaisesRegex(TypeError, + r'function takes at least 1 positional argument \(0 given\)'): + parse((), {}, 'O|O$O', ['', '', 'a']) + with self.assertRaisesRegex(SystemError, r'Empty parameter name after \$'): + parse((1,), {}, 'O|$OO', ['', '', 'a']) + with self.assertRaisesRegex(SystemError, 'Empty keyword'): + parse((1,), {}, 'O|OO', ['', 'a', '']) + + +class Test_testcapi(unittest.TestCase): + locals().update((name, getattr(_testcapi, name)) + for name in dir(_testcapi) + if name.startswith('test_') and name.endswith('_code')) + + @warnings_helper.ignore_warnings(category=DeprecationWarning) + def test_u_code(self): + _testcapi.test_u_code() + + @warnings_helper.ignore_warnings(category=DeprecationWarning) + def test_Z_code(self): + _testcapi.test_Z_code() + + +if __name__ == "__main__": + unittest.main() diff --git a/Lib/test/test_capi/test_misc.py b/Lib/test/test_capi/test_misc.py new file mode 100644 index 00000000..404a13a0 --- /dev/null +++ b/Lib/test/test_capi/test_misc.py @@ -0,0 +1,1077 @@ +# Run the _testcapi module tests (tests for the Python/C API): by defn, +# these are all functions _testcapi exports whose name begins with 'test_'. 
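+#
+# For instance, the Test_testcapi class near the end of this file binds those
+# exported functions as test methods using (roughly) this idiom:
+#
+#     locals().update((name, getattr(_testcapi, name))
+#                     for name in dir(_testcapi)
+#                     if name.startswith('test_') and not name.endswith('_code'))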
+ +from collections import OrderedDict +import importlib.machinery +import importlib.util +import os +import pickle +import random +import re +import subprocess +import sys +import textwrap +import threading +import time +import unittest +import weakref +from test import support +from test.support import MISSING_C_DOCSTRINGS +from test.support import import_helper +from test.support import threading_helper +from test.support import warnings_helper +from test.support.script_helper import assert_python_failure, assert_python_ok +try: + import _posixsubprocess +except ImportError: + _posixsubprocess = None + +# Skip this test if the _testcapi module isn't available. +_testcapi = import_helper.import_module('_testcapi') + +import _testinternalcapi + +# Were we compiled --with-pydebug or with #define Py_DEBUG? +Py_DEBUG = hasattr(sys, 'gettotalrefcount') + + +def decode_stderr(err): + return err.decode('utf-8', 'replace').replace('\r', '') + + +def testfunction(self): + """some doc""" + return self + + +class InstanceMethod: + id = _testcapi.instancemethod(id) + testfunction = _testcapi.instancemethod(testfunction) + +class CAPITest(unittest.TestCase): + + def test_instancemethod(self): + inst = InstanceMethod() + self.assertEqual(id(inst), inst.id()) + self.assertTrue(inst.testfunction() is inst) + self.assertEqual(inst.testfunction.__doc__, testfunction.__doc__) + self.assertEqual(InstanceMethod.testfunction.__doc__, testfunction.__doc__) + + InstanceMethod.testfunction.attribute = "test" + self.assertEqual(testfunction.attribute, "test") + self.assertRaises(AttributeError, setattr, inst.testfunction, "attribute", "test") + + def test_no_FatalError_infinite_loop(self): + with support.SuppressCrashReport(): + p = subprocess.Popen([sys.executable, "-c", + 'import _testcapi;' + '_testcapi.crash_no_current_thread()'], + stdout=subprocess.PIPE, + stderr=subprocess.PIPE) + (out, err) = p.communicate() + self.assertEqual(out, b'') + # This used to cause an infinite loop. 
+ self.assertTrue(err.rstrip().startswith( + b'Fatal Python error: ' + b'PyThreadState_Get: ' + b'the function must be called with the GIL held, ' + b'but the GIL is released ' + b'(the current Python thread state is NULL)'), + err) + + def test_memoryview_from_NULL_pointer(self): + self.assertRaises(ValueError, _testcapi.make_memoryview_from_NULL_pointer) + + def test_exc_info(self): + raised_exception = ValueError("5") + new_exc = TypeError("TEST") + try: + raise raised_exception + except ValueError as e: + tb = e.__traceback__ + orig_sys_exc_info = sys.exc_info() + orig_exc_info = _testcapi.set_exc_info(new_exc.__class__, new_exc, None) + new_sys_exc_info = sys.exc_info() + new_exc_info = _testcapi.set_exc_info(*orig_exc_info) + reset_sys_exc_info = sys.exc_info() + + self.assertEqual(orig_exc_info[1], e) + + self.assertSequenceEqual(orig_exc_info, (raised_exception.__class__, raised_exception, tb)) + self.assertSequenceEqual(orig_sys_exc_info, orig_exc_info) + self.assertSequenceEqual(reset_sys_exc_info, orig_exc_info) + self.assertSequenceEqual(new_exc_info, (new_exc.__class__, new_exc, None)) + self.assertSequenceEqual(new_sys_exc_info, new_exc_info) + else: + self.assertTrue(False) + + @unittest.skipUnless(_posixsubprocess, '_posixsubprocess required for this test.') + def test_seq_bytes_to_charp_array(self): + # Issue #15732: crash in _PySequence_BytesToCharpArray() + class Z(object): + def __len__(self): + return 1 + self.assertRaises(TypeError, _posixsubprocess.fork_exec, + 1,Z(),3,(1, 2),5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21) + # Issue #15736: overflow in _PySequence_BytesToCharpArray() + class Z(object): + def __len__(self): + return sys.maxsize + def __getitem__(self, i): + return b'x' + self.assertRaises(MemoryError, _posixsubprocess.fork_exec, + 1,Z(),3,(1, 2),5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21) + + @unittest.skipUnless(_posixsubprocess, '_posixsubprocess required for this test.') + def test_subprocess_fork_exec(self): + class Z(object): + def __len__(self): + return 1 + + # Issue #15738: crash in subprocess_fork_exec() + self.assertRaises(TypeError, _posixsubprocess.fork_exec, + Z(),[b'1'],3,(1, 2),5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21) + + @unittest.skipIf(MISSING_C_DOCSTRINGS, + "Signature information for builtins requires docstrings") + def test_docstring_signature_parsing(self): + + self.assertEqual(_testcapi.no_docstring.__doc__, None) + self.assertEqual(_testcapi.no_docstring.__text_signature__, None) + + self.assertEqual(_testcapi.docstring_empty.__doc__, None) + self.assertEqual(_testcapi.docstring_empty.__text_signature__, None) + + self.assertEqual(_testcapi.docstring_no_signature.__doc__, + "This docstring has no signature.") + self.assertEqual(_testcapi.docstring_no_signature.__text_signature__, None) + + self.assertEqual(_testcapi.docstring_with_invalid_signature.__doc__, + "docstring_with_invalid_signature($module, /, boo)\n" + "\n" + "This docstring has an invalid signature." + ) + self.assertEqual(_testcapi.docstring_with_invalid_signature.__text_signature__, None) + + self.assertEqual(_testcapi.docstring_with_invalid_signature2.__doc__, + "docstring_with_invalid_signature2($module, /, boo)\n" + "\n" + "--\n" + "\n" + "This docstring also has an invalid signature." 
+ ) + self.assertEqual(_testcapi.docstring_with_invalid_signature2.__text_signature__, None) + + self.assertEqual(_testcapi.docstring_with_signature.__doc__, + "This docstring has a valid signature.") + self.assertEqual(_testcapi.docstring_with_signature.__text_signature__, "($module, /, sig)") + + self.assertEqual(_testcapi.docstring_with_signature_but_no_doc.__doc__, None) + self.assertEqual(_testcapi.docstring_with_signature_but_no_doc.__text_signature__, + "($module, /, sig)") + + self.assertEqual(_testcapi.docstring_with_signature_and_extra_newlines.__doc__, + "\nThis docstring has a valid signature and some extra newlines.") + self.assertEqual(_testcapi.docstring_with_signature_and_extra_newlines.__text_signature__, + "($module, /, parameter)") + + def test_c_type_with_matrix_multiplication(self): + M = _testcapi.matmulType + m1 = M() + m2 = M() + self.assertEqual(m1 @ m2, ("matmul", m1, m2)) + self.assertEqual(m1 @ 42, ("matmul", m1, 42)) + self.assertEqual(42 @ m1, ("matmul", 42, m1)) + o = m1 + o @= m2 + self.assertEqual(o, ("imatmul", m1, m2)) + o = m1 + o @= 42 + self.assertEqual(o, ("imatmul", m1, 42)) + o = 42 + o @= m1 + self.assertEqual(o, ("matmul", 42, m1)) + + def test_c_type_with_ipow(self): + # When the __ipow__ method of a type was implemented in C, using the + # modulo param would cause segfaults. + o = _testcapi.ipowType() + self.assertEqual(o.__ipow__(1), (1, None)) + self.assertEqual(o.__ipow__(2, 2), (2, 2)) + + def test_return_null_without_error(self): + # Issue #23571: A function must not return NULL without setting an + # error + if Py_DEBUG: + code = textwrap.dedent(""" + import _testcapi + from test import support + + with support.SuppressCrashReport(): + _testcapi.return_null_without_error() + """) + rc, out, err = assert_python_failure('-c', code) + err = decode_stderr(err) + self.assertRegex(err, + r'Fatal Python error: _Py_CheckFunctionResult: ' + r'a function returned NULL without setting an exception\n' + r'Python runtime state: initialized\n' + r'SystemError: ' + r'returned NULL without setting an exception\n' + r'\n' + r'Current thread.*:\n' + r' File .*", line 6 in \n') + else: + with self.assertRaises(SystemError) as cm: + _testcapi.return_null_without_error() + self.assertRegex(str(cm.exception), + 'return_null_without_error.* ' + 'returned NULL without setting an exception') + + def test_return_result_with_error(self): + # Issue #23571: A function must not return a result with an error set + if Py_DEBUG: + code = textwrap.dedent(""" + import _testcapi + from test import support + + with support.SuppressCrashReport(): + _testcapi.return_result_with_error() + """) + rc, out, err = assert_python_failure('-c', code) + err = decode_stderr(err) + self.assertRegex(err, + r'Fatal Python error: _Py_CheckFunctionResult: ' + r'a function returned a result with an exception set\n' + r'Python runtime state: initialized\n' + r'ValueError\n' + r'\n' + r'The above exception was the direct cause ' + r'of the following exception:\n' + r'\n' + r'SystemError: ' + r'returned a result with an exception set\n' + r'\n' + r'Current thread.*:\n' + r' File .*, line 6 in \n') + else: + with self.assertRaises(SystemError) as cm: + _testcapi.return_result_with_error() + self.assertRegex(str(cm.exception), + 'return_result_with_error.* ' + 'returned a result with an exception set') + + def test_getitem_with_error(self): + # Test _Py_CheckSlotResult(). Raise an exception and then calls + # PyObject_GetItem(): check that the assertion catches the bug. 
+ # PyObject_GetItem() must not be called with an exception set. + code = textwrap.dedent(""" + import _testcapi + from test import support + + with support.SuppressCrashReport(): + _testcapi.getitem_with_error({1: 2}, 1) + """) + rc, out, err = assert_python_failure('-c', code) + err = decode_stderr(err) + if 'SystemError: ' not in err: + self.assertRegex(err, + r'Fatal Python error: _Py_CheckSlotResult: ' + r'Slot __getitem__ of type dict succeeded ' + r'with an exception set\n' + r'Python runtime state: initialized\n' + r'ValueError: bug\n' + r'\n' + r'Current thread .* \(most recent call first\):\n' + r' File .*, line 6 in \n' + r'\n' + r'Extension modules: _testcapi \(total: 1\)\n') + else: + # Python built with NDEBUG macro defined: + # test _Py_CheckFunctionResult() instead. + self.assertIn('returned a result with an exception set', err) + + def test_buildvalue_N(self): + _testcapi.test_buildvalue_N() + + def test_set_nomemory(self): + code = """if 1: + import _testcapi + + class C(): pass + + # The first loop tests both functions and that remove_mem_hooks() + # can be called twice in a row. The second loop checks a call to + # set_nomemory() after a call to remove_mem_hooks(). The third + # loop checks the start and stop arguments of set_nomemory(). + for outer_cnt in range(1, 4): + start = 10 * outer_cnt + for j in range(100): + if j == 0: + if outer_cnt != 3: + _testcapi.set_nomemory(start) + else: + _testcapi.set_nomemory(start, start + 1) + try: + C() + except MemoryError as e: + if outer_cnt != 3: + _testcapi.remove_mem_hooks() + print('MemoryError', outer_cnt, j) + _testcapi.remove_mem_hooks() + break + """ + rc, out, err = assert_python_ok('-c', code) + self.assertIn(b'MemoryError 1 10', out) + self.assertIn(b'MemoryError 2 20', out) + self.assertIn(b'MemoryError 3 30', out) + + def test_mapping_keys_values_items(self): + class Mapping1(dict): + def keys(self): + return list(super().keys()) + def values(self): + return list(super().values()) + def items(self): + return list(super().items()) + class Mapping2(dict): + def keys(self): + return tuple(super().keys()) + def values(self): + return tuple(super().values()) + def items(self): + return tuple(super().items()) + dict_obj = {'foo': 1, 'bar': 2, 'spam': 3} + + for mapping in [{}, OrderedDict(), Mapping1(), Mapping2(), + dict_obj, OrderedDict(dict_obj), + Mapping1(dict_obj), Mapping2(dict_obj)]: + self.assertListEqual(_testcapi.get_mapping_keys(mapping), + list(mapping.keys())) + self.assertListEqual(_testcapi.get_mapping_values(mapping), + list(mapping.values())) + self.assertListEqual(_testcapi.get_mapping_items(mapping), + list(mapping.items())) + + def test_mapping_keys_values_items_bad_arg(self): + self.assertRaises(AttributeError, _testcapi.get_mapping_keys, None) + self.assertRaises(AttributeError, _testcapi.get_mapping_values, None) + self.assertRaises(AttributeError, _testcapi.get_mapping_items, None) + + class BadMapping: + def keys(self): + return None + def values(self): + return None + def items(self): + return None + bad_mapping = BadMapping() + self.assertRaises(TypeError, _testcapi.get_mapping_keys, bad_mapping) + self.assertRaises(TypeError, _testcapi.get_mapping_values, bad_mapping) + self.assertRaises(TypeError, _testcapi.get_mapping_items, bad_mapping) + + def test_mapping_has_key(self): + dct = {'a': 1} + self.assertTrue(_testcapi.mapping_has_key(dct, 'a')) + self.assertFalse(_testcapi.mapping_has_key(dct, 'b')) + + class SubDict(dict): + pass + + dct2 = SubDict({'a': 1}) + 
self.assertTrue(_testcapi.mapping_has_key(dct2, 'a')) + self.assertFalse(_testcapi.mapping_has_key(dct2, 'b')) + + @unittest.skipUnless(hasattr(_testcapi, 'negative_refcount'), + 'need _testcapi.negative_refcount') + def test_negative_refcount(self): + # bpo-35059: Check that Py_DECREF() reports the correct filename + # when calling _Py_NegativeRefcount() to abort Python. + code = textwrap.dedent(""" + import _testcapi + from test import support + + with support.SuppressCrashReport(): + _testcapi.negative_refcount() + """) + rc, out, err = assert_python_failure('-c', code) + self.assertRegex(err, + br'_testcapimodule\.c:[0-9]+: ' + br'_Py_NegativeRefcount: Assertion failed: ' + br'object has negative ref count') + + def test_trashcan_subclass(self): + # bpo-35983: Check that the trashcan mechanism for "list" is NOT + # activated when its tp_dealloc is being called by a subclass + from _testcapi import MyList + L = None + for i in range(1000): + L = MyList((L,)) + + @support.requires_resource('cpu') + def test_trashcan_python_class1(self): + self.do_test_trashcan_python_class(list) + + @support.requires_resource('cpu') + def test_trashcan_python_class2(self): + from _testcapi import MyList + self.do_test_trashcan_python_class(MyList) + + def do_test_trashcan_python_class(self, base): + # Check that the trashcan mechanism works properly for a Python + # subclass of a class using the trashcan (this specific test assumes + # that the base class "base" behaves like list) + class PyList(base): + # Count the number of PyList instances to verify that there is + # no memory leak + num = 0 + def __init__(self, *args): + __class__.num += 1 + super().__init__(*args) + def __del__(self): + __class__.num -= 1 + + for parity in (0, 1): + L = None + # We need in the order of 2**20 iterations here such that a + # typical 8MB stack would overflow without the trashcan. 
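+            # (Rough arithmetic: 2**20 nested deallocations over an 8 MB stack
+            # would leave only about 8 bytes per C stack frame, far less than a
+            # real frame needs, so a recursive dealloc would overflow here.)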
+ for i in range(2**20): + L = PyList((L,)) + L.attr = i + if parity: + # Add one additional nesting layer + L = (L,) + self.assertGreater(PyList.num, 0) + del L + self.assertEqual(PyList.num, 0) + + def test_heap_ctype_doc_and_text_signature(self): + self.assertEqual(_testcapi.HeapDocCType.__doc__, "somedoc") + self.assertEqual(_testcapi.HeapDocCType.__text_signature__, "(arg1, arg2)") + + def test_null_type_doc(self): + self.assertEqual(_testcapi.NullTpDocType.__doc__, None) + + def test_subclass_of_heap_gc_ctype_with_tpdealloc_decrefs_once(self): + class HeapGcCTypeSubclass(_testcapi.HeapGcCType): + def __init__(self): + self.value2 = 20 + super().__init__() + + subclass_instance = HeapGcCTypeSubclass() + type_refcnt = sys.getrefcount(HeapGcCTypeSubclass) + + # Test that subclass instance was fully created + self.assertEqual(subclass_instance.value, 10) + self.assertEqual(subclass_instance.value2, 20) + + # Test that the type reference count is only decremented once + del subclass_instance + self.assertEqual(type_refcnt - 1, sys.getrefcount(HeapGcCTypeSubclass)) + + def test_subclass_of_heap_gc_ctype_with_del_modifying_dunder_class_only_decrefs_once(self): + class A(_testcapi.HeapGcCType): + def __init__(self): + self.value2 = 20 + super().__init__() + + class B(A): + def __init__(self): + super().__init__() + + def __del__(self): + self.__class__ = A + A.refcnt_in_del = sys.getrefcount(A) + B.refcnt_in_del = sys.getrefcount(B) + + subclass_instance = B() + type_refcnt = sys.getrefcount(B) + new_type_refcnt = sys.getrefcount(A) + + # Test that subclass instance was fully created + self.assertEqual(subclass_instance.value, 10) + self.assertEqual(subclass_instance.value2, 20) + + del subclass_instance + + # Test that setting __class__ modified the reference counts of the types + self.assertEqual(type_refcnt - 1, B.refcnt_in_del) + self.assertEqual(new_type_refcnt + 1, A.refcnt_in_del) + + # Test that the original type already has decreased its refcnt + self.assertEqual(type_refcnt - 1, sys.getrefcount(B)) + + # Test that subtype_dealloc decref the newly assigned __class__ only once + self.assertEqual(new_type_refcnt, sys.getrefcount(A)) + + def test_heaptype_with_dict(self): + inst = _testcapi.HeapCTypeWithDict() + inst.foo = 42 + self.assertEqual(inst.foo, 42) + self.assertEqual(inst.dictobj, inst.__dict__) + self.assertEqual(inst.dictobj, {"foo": 42}) + + inst = _testcapi.HeapCTypeWithDict() + self.assertEqual({}, inst.__dict__) + + def test_heaptype_with_negative_dict(self): + inst = _testcapi.HeapCTypeWithNegativeDict() + inst.foo = 42 + self.assertEqual(inst.foo, 42) + self.assertEqual(inst.dictobj, inst.__dict__) + self.assertEqual(inst.dictobj, {"foo": 42}) + + inst = _testcapi.HeapCTypeWithNegativeDict() + self.assertEqual({}, inst.__dict__) + + def test_heaptype_with_weakref(self): + inst = _testcapi.HeapCTypeWithWeakref() + ref = weakref.ref(inst) + self.assertEqual(ref(), inst) + self.assertEqual(inst.weakreflist, ref) + + def test_heaptype_with_buffer(self): + inst = _testcapi.HeapCTypeWithBuffer() + b = bytes(inst) + self.assertEqual(b, b"1234") + + def test_c_subclass_of_heap_ctype_with_tpdealloc_decrefs_once(self): + subclass_instance = _testcapi.HeapCTypeSubclass() + type_refcnt = sys.getrefcount(_testcapi.HeapCTypeSubclass) + + # Test that subclass instance was fully created + self.assertEqual(subclass_instance.value, 10) + self.assertEqual(subclass_instance.value2, 20) + + # Test that the type reference count is only decremented once + del subclass_instance + 
self.assertEqual(type_refcnt - 1, sys.getrefcount(_testcapi.HeapCTypeSubclass)) + + def test_c_subclass_of_heap_ctype_with_del_modifying_dunder_class_only_decrefs_once(self): + subclass_instance = _testcapi.HeapCTypeSubclassWithFinalizer() + type_refcnt = sys.getrefcount(_testcapi.HeapCTypeSubclassWithFinalizer) + new_type_refcnt = sys.getrefcount(_testcapi.HeapCTypeSubclass) + + # Test that subclass instance was fully created + self.assertEqual(subclass_instance.value, 10) + self.assertEqual(subclass_instance.value2, 20) + + # The tp_finalize slot will set __class__ to HeapCTypeSubclass + del subclass_instance + + # Test that setting __class__ modified the reference counts of the types + self.assertEqual(type_refcnt - 1, _testcapi.HeapCTypeSubclassWithFinalizer.refcnt_in_del) + self.assertEqual(new_type_refcnt + 1, _testcapi.HeapCTypeSubclass.refcnt_in_del) + + # Test that the original type already has decreased its refcnt + self.assertEqual(type_refcnt - 1, sys.getrefcount(_testcapi.HeapCTypeSubclassWithFinalizer)) + + # Test that subtype_dealloc decref the newly assigned __class__ only once + self.assertEqual(new_type_refcnt, sys.getrefcount(_testcapi.HeapCTypeSubclass)) + + def test_heaptype_with_setattro(self): + obj = _testcapi.HeapCTypeSetattr() + self.assertEqual(obj.pvalue, 10) + obj.value = 12 + self.assertEqual(obj.pvalue, 12) + del obj.value + self.assertEqual(obj.pvalue, 0) + + def test_pynumber_tobase(self): + from _testcapi import pynumber_tobase + self.assertEqual(pynumber_tobase(123, 2), '0b1111011') + self.assertEqual(pynumber_tobase(123, 8), '0o173') + self.assertEqual(pynumber_tobase(123, 10), '123') + self.assertEqual(pynumber_tobase(123, 16), '0x7b') + self.assertEqual(pynumber_tobase(-123, 2), '-0b1111011') + self.assertEqual(pynumber_tobase(-123, 8), '-0o173') + self.assertEqual(pynumber_tobase(-123, 10), '-123') + self.assertEqual(pynumber_tobase(-123, 16), '-0x7b') + self.assertRaises(TypeError, pynumber_tobase, 123.0, 10) + self.assertRaises(TypeError, pynumber_tobase, '123', 10) + self.assertRaises(SystemError, pynumber_tobase, 123, 0) + + def check_fatal_error(self, code, expected, not_expected=()): + with support.SuppressCrashReport(): + rc, out, err = assert_python_failure('-sSI', '-c', code) + + err = decode_stderr(err) + self.assertIn('Fatal Python error: test_fatal_error: MESSAGE\n', + err) + + match = re.search(r'^Extension modules:(.*) \(total: ([0-9]+)\)$', + err, re.MULTILINE) + if not match: + self.fail(f"Cannot find 'Extension modules:' in {err!r}") + modules = set(match.group(1).strip().split(', ')) + total = int(match.group(2)) + + for name in expected: + self.assertIn(name, modules) + for name in not_expected: + self.assertNotIn(name, modules) + self.assertEqual(len(modules), total) + + def test_fatal_error(self): + # By default, stdlib extension modules are ignored, + # but not test modules. 
+ expected = ('_testcapi',) + not_expected = ('sys',) + code = 'import _testcapi, sys; _testcapi.fatal_error(b"MESSAGE")' + self.check_fatal_error(code, expected, not_expected) + + # Mark _testcapi as stdlib module, but not sys + expected = ('sys',) + not_expected = ('_testcapi',) + code = textwrap.dedent(''' + import _testcapi, sys + sys.stdlib_module_names = frozenset({"_testcapi"}) + _testcapi.fatal_error(b"MESSAGE") + ''') + self.check_fatal_error(code, expected) + + def test_pyobject_repr_from_null(self): + s = _testcapi.pyobject_repr_from_null() + self.assertEqual(s, '') + + def test_pyobject_str_from_null(self): + s = _testcapi.pyobject_str_from_null() + self.assertEqual(s, '') + + def test_pyobject_bytes_from_null(self): + s = _testcapi.pyobject_bytes_from_null() + self.assertEqual(s, b'') + + def test_Py_CompileString(self): + # Check that Py_CompileString respects the coding cookie + _compile = _testcapi.Py_CompileString + code = b"# -*- coding: latin1 -*-\nprint('\xc2\xa4')\n" + result = _compile(code) + expected = compile(code, "", "exec") + self.assertEqual(result.co_consts, expected.co_consts) + + +class TestPendingCalls(unittest.TestCase): + + def pendingcalls_submit(self, l, n): + def callback(): + #this function can be interrupted by thread switching so let's + #use an atomic operation + l.append(None) + + for i in range(n): + time.sleep(random.random()*0.02) #0.01 secs on average + #try submitting callback until successful. + #rely on regular interrupt to flush queue if we are + #unsuccessful. + while True: + if _testcapi._pending_threadfunc(callback): + break + + def pendingcalls_wait(self, l, n, context = None): + #now, stick around until l[0] has grown to 10 + count = 0 + while len(l) != n: + #this busy loop is where we expect to be interrupted to + #run our callbacks. Note that callbacks are only run on the + #main thread + if False and support.verbose: + print("(%i)"%(len(l),),) + for i in range(1000): + a = i*i + if context and not context.event.is_set(): + continue + count += 1 + self.assertTrue(count < 10000, + "timeout waiting for %i callbacks, got %i"%(n, len(l))) + if False and support.verbose: + print("(%i)"%(len(l),)) + + def test_pendingcalls_threaded(self): + + #do every callback on a separate thread + n = 32 #total callbacks + threads = [] + class foo(object):pass + context = foo() + context.l = [] + context.n = 2 #submits per thread + context.nThreads = n // context.n + context.nFinished = 0 + context.lock = threading.Lock() + context.event = threading.Event() + + threads = [threading.Thread(target=self.pendingcalls_thread, + args=(context,)) + for i in range(context.nThreads)] + with threading_helper.start_threads(threads): + self.pendingcalls_wait(context.l, n, context) + + def pendingcalls_thread(self, context): + try: + self.pendingcalls_submit(context.l, context.n) + finally: + with context.lock: + context.nFinished += 1 + nFinished = context.nFinished + if False and support.verbose: + print("finished threads: ", nFinished) + if nFinished == context.nThreads: + context.event.set() + + def test_pendingcalls_non_threaded(self): + #again, just using the main thread, likely they will all be dispatched at + #once. It is ok to ask for too many, because we loop until we find a slot. + #the loop can be interrupted to dispatch. + #there are only 32 dispatch slots, so we go for twice that! 
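+        #(_pending_threadfunc is expected to be a thin wrapper around
+        # Py_AddPendingCall(), which reports failure while the queue is full,
+        # hence the retry loop in pendingcalls_submit above.)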
+ l = [] + n = 64 + self.pendingcalls_submit(l, n) + self.pendingcalls_wait(l, n) + + +class SubinterpreterTest(unittest.TestCase): + + def test_subinterps(self): + import builtins + r, w = os.pipe() + code = """if 1: + import sys, builtins, pickle + with open({:d}, "wb") as f: + pickle.dump(id(sys.modules), f) + pickle.dump(id(builtins), f) + """.format(w) + with open(r, "rb") as f: + ret = support.run_in_subinterp(code) + self.assertEqual(ret, 0) + self.assertNotEqual(pickle.load(f), id(sys.modules)) + self.assertNotEqual(pickle.load(f), id(builtins)) + + def test_subinterps_recent_language_features(self): + r, w = os.pipe() + code = """if 1: + import pickle + with open({:d}, "wb") as f: + + @(lambda x:x) # Py 3.9 + def noop(x): return x + + a = (b := f'1{{2}}3') + noop('x') # Py 3.8 (:=) / 3.6 (f'') + + async def foo(arg): return await arg # Py 3.5 + + pickle.dump(dict(a=a, b=b), f) + """.format(w) + + with open(r, "rb") as f: + ret = support.run_in_subinterp(code) + self.assertEqual(ret, 0) + self.assertEqual(pickle.load(f), {'a': '123x', 'b': '123'}) + + def test_mutate_exception(self): + """ + Exceptions saved in global module state get shared between + individual module instances. This test checks whether or not + a change in one interpreter's module gets reflected into the + other ones. + """ + import binascii + + support.run_in_subinterp("import binascii; binascii.Error.foobar = 'foobar'") + + self.assertFalse(hasattr(binascii.Error, "foobar")) + + def test_module_state_shared_in_global(self): + """ + bpo-44050: Extension module state should be shared between interpreters + when it doesn't support sub-interpreters. + """ + r, w = os.pipe() + self.addCleanup(os.close, r) + self.addCleanup(os.close, w) + + script = textwrap.dedent(f""" + import importlib.machinery + import importlib.util + import os + + fullname = '_test_module_state_shared' + origin = importlib.util.find_spec('_testmultiphase').origin + loader = importlib.machinery.ExtensionFileLoader(fullname, origin) + spec = importlib.util.spec_from_loader(fullname, loader) + module = importlib.util.module_from_spec(spec) + attr_id = str(id(module.Error)).encode() + + os.write({w}, attr_id) + """) + exec(script) + main_attr_id = os.read(r, 100) + + ret = support.run_in_subinterp(script) + self.assertEqual(ret, 0) + subinterp_attr_id = os.read(r, 100) + self.assertEqual(main_attr_id, subinterp_attr_id) + + +class TestThreadState(unittest.TestCase): + + @threading_helper.reap_threads + def test_thread_state(self): + # some extra thread-state tests driven via _testcapi + def target(): + idents = [] + + def callback(): + idents.append(threading.get_ident()) + + _testcapi._test_thread_state(callback) + a = b = callback + time.sleep(1) + # Check our main thread is in the list exactly 3 times. 
+ self.assertEqual(idents.count(threading.get_ident()), 3, + "Couldn't find main thread correctly in the list") + + target() + t = threading.Thread(target=target) + t.start() + t.join() + + @threading_helper.reap_threads + def test_gilstate_ensure_no_deadlock(self): + # See https://github.com/python/cpython/issues/96071 + code = textwrap.dedent(f""" + import _testcapi + + def callback(): + print('callback called') + + _testcapi._test_thread_state(callback) + """) + ret = assert_python_ok('-X', 'tracemalloc', '-c', code) + self.assertIn(b'callback called', ret.out) + + +class Test_testcapi(unittest.TestCase): + locals().update((name, getattr(_testcapi, name)) + for name in dir(_testcapi) + if name.startswith('test_') and not name.endswith('_code')) + + # Suppress warning from PyUnicode_FromUnicode(). + @warnings_helper.ignore_warnings(category=DeprecationWarning) + def test_widechar(self): + _testcapi.test_widechar() + + +class Test_testinternalcapi(unittest.TestCase): + locals().update((name, getattr(_testinternalcapi, name)) + for name in dir(_testinternalcapi) + if name.startswith('test_')) + + +class PyMemDebugTests(unittest.TestCase): + PYTHONMALLOC = 'debug' + # '0x04c06e0' or '04C06E0' + PTR_REGEX = r'(?:0x)?[0-9a-fA-F]+' + + def check(self, code): + with support.SuppressCrashReport(): + out = assert_python_failure( + '-c', code, + PYTHONMALLOC=self.PYTHONMALLOC, + # FreeBSD: instruct jemalloc to not fill freed() memory + # with junk byte 0x5a, see JEMALLOC(3) + MALLOC_CONF="junk:false", + ) + stderr = out.err + return stderr.decode('ascii', 'replace') + + def test_buffer_overflow(self): + out = self.check('import _testcapi; _testcapi.pymem_buffer_overflow()') + regex = (r"Debug memory block at address p={ptr}: API 'm'\n" + r" 16 bytes originally requested\n" + r" The [0-9] pad bytes at p-[0-9] are FORBIDDENBYTE, as expected.\n" + r" The [0-9] pad bytes at tail={ptr} are not all FORBIDDENBYTE \(0x[0-9a-f]{{2}}\):\n" + r" at tail\+0: 0x78 \*\*\* OUCH\n" + r" at tail\+1: 0xfd\n" + r" at tail\+2: 0xfd\n" + r" .*\n" + r"( The block was made by call #[0-9]+ to debug malloc/realloc.\n)?" + r" Data at p: cd cd cd .*\n" + r"\n" + r"Enable tracemalloc to get the memory block allocation traceback\n" + r"\n" + r"Fatal Python error: _PyMem_DebugRawFree: bad trailing pad byte") + regex = regex.format(ptr=self.PTR_REGEX) + regex = re.compile(regex, flags=re.DOTALL) + self.assertRegex(out, regex) + + def test_api_misuse(self): + out = self.check('import _testcapi; _testcapi.pymem_api_misuse()') + regex = (r"Debug memory block at address p={ptr}: API 'm'\n" + r" 16 bytes originally requested\n" + r" The [0-9] pad bytes at p-[0-9] are FORBIDDENBYTE, as expected.\n" + r" The [0-9] pad bytes at tail={ptr} are FORBIDDENBYTE, as expected.\n" + r"( The block was made by call #[0-9]+ to debug malloc/realloc.\n)?" 
+ r" Data at p: cd cd cd .*\n" + r"\n" + r"Enable tracemalloc to get the memory block allocation traceback\n" + r"\n" + r"Fatal Python error: _PyMem_DebugRawFree: bad ID: Allocated using API 'm', verified using API 'r'\n") + regex = regex.format(ptr=self.PTR_REGEX) + self.assertRegex(out, regex) + + def check_malloc_without_gil(self, code): + out = self.check(code) + expected = ('Fatal Python error: _PyMem_DebugMalloc: ' + 'Python memory allocator called without holding the GIL') + self.assertIn(expected, out) + + def test_pymem_malloc_without_gil(self): + # Debug hooks must raise an error if PyMem_Malloc() is called + # without holding the GIL + code = 'import _testcapi; _testcapi.pymem_malloc_without_gil()' + self.check_malloc_without_gil(code) + + def test_pyobject_malloc_without_gil(self): + # Debug hooks must raise an error if PyObject_Malloc() is called + # without holding the GIL + code = 'import _testcapi; _testcapi.pyobject_malloc_without_gil()' + self.check_malloc_without_gil(code) + + def check_pyobject_is_freed(self, func_name): + code = textwrap.dedent(f''' + import gc, os, sys, _testcapi + # Disable the GC to avoid crash on GC collection + gc.disable() + try: + _testcapi.{func_name}() + # Exit immediately to avoid a crash while deallocating + # the invalid object + os._exit(0) + except _testcapi.error: + os._exit(1) + ''') + assert_python_ok( + '-c', code, + PYTHONMALLOC=self.PYTHONMALLOC, + MALLOC_CONF="junk:false", + ) + + def test_pyobject_null_is_freed(self): + self.check_pyobject_is_freed('check_pyobject_null_is_freed') + + def test_pyobject_uninitialized_is_freed(self): + self.check_pyobject_is_freed('check_pyobject_uninitialized_is_freed') + + def test_pyobject_forbidden_bytes_is_freed(self): + self.check_pyobject_is_freed('check_pyobject_forbidden_bytes_is_freed') + + def test_pyobject_freed_is_freed(self): + self.check_pyobject_is_freed('check_pyobject_freed_is_freed') + + +class PyMemMallocDebugTests(PyMemDebugTests): + PYTHONMALLOC = 'malloc_debug' + + +@unittest.skipUnless(support.with_pymalloc(), 'need pymalloc') +class PyMemPymallocDebugTests(PyMemDebugTests): + PYTHONMALLOC = 'pymalloc_debug' + + +@unittest.skipUnless(Py_DEBUG, 'need Py_DEBUG') +class PyMemDefaultTests(PyMemDebugTests): + # test default allocator of Python compiled in debug mode + PYTHONMALLOC = '' + + +class Test_ModuleStateAccess(unittest.TestCase): + """Test access to module start (PEP 573)""" + + # The C part of the tests lives in _testmultiphase, in a module called + # _testmultiphase_meth_state_access. + # This module has multi-phase initialization, unlike _testcapi. 
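+    # (Sketch of the C side, for orientation: the module's methods are defined
+    # with the METH_METHOD calling convention, so they receive their defining
+    # class and can reach per-module state roughly via
+    # PyType_GetModule(defining_class) / PyType_GetModuleState(defining_class),
+    # as described in PEP 573.)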
+ + def setUp(self): + fullname = '_testmultiphase_meth_state_access' # XXX + origin = importlib.util.find_spec('_testmultiphase').origin + loader = importlib.machinery.ExtensionFileLoader(fullname, origin) + spec = importlib.util.spec_from_loader(fullname, loader) + module = importlib.util.module_from_spec(spec) + loader.exec_module(module) + self.module = module + + def test_subclass_get_module(self): + """PyType_GetModule for defining_class""" + class StateAccessType_Subclass(self.module.StateAccessType): + pass + + instance = StateAccessType_Subclass() + self.assertIs(instance.get_defining_module(), self.module) + + def test_subclass_get_module_with_super(self): + class StateAccessType_Subclass(self.module.StateAccessType): + def get_defining_module(self): + return super().get_defining_module() + + instance = StateAccessType_Subclass() + self.assertIs(instance.get_defining_module(), self.module) + + def test_state_access(self): + """Checks methods defined with and without argument clinic + + This tests a no-arg method (get_count) and a method with + both a positional and keyword argument. + """ + + a = self.module.StateAccessType() + b = self.module.StateAccessType() + + methods = { + 'clinic': a.increment_count_clinic, + 'noclinic': a.increment_count_noclinic, + } + + for name, increment_count in methods.items(): + with self.subTest(name): + self.assertEqual(a.get_count(), b.get_count()) + self.assertEqual(a.get_count(), 0) + + increment_count() + self.assertEqual(a.get_count(), b.get_count()) + self.assertEqual(a.get_count(), 1) + + increment_count(3) + self.assertEqual(a.get_count(), b.get_count()) + self.assertEqual(a.get_count(), 4) + + increment_count(-2, twice=True) + self.assertEqual(a.get_count(), b.get_count()) + self.assertEqual(a.get_count(), 0) + + with self.assertRaises(TypeError): + increment_count(thrice=3) + + with self.assertRaises(TypeError): + increment_count(1, 2, 3) + + def test_get_module_bad_def(self): + # _PyType_GetModuleByDef fails gracefully if it doesn't + # find what it's looking for. + # see bpo-46433 + instance = self.module.StateAccessType() + with self.assertRaises(TypeError): + instance.getmodulebydef_bad_def() + + def test_get_module_static_in_mro(self): + # Here, the class _PyType_GetModuleByDef is looking for + # appears in the MRO after a static type (Exception). + # see bpo-46433 + class Subclass(BaseException, self.module.StateAccessType): + pass + self.assertIs(Subclass().get_defining_module(), self.module) + + +if __name__ == "__main__": + unittest.main() diff --git a/Lib/test/test_capi/test_structmembers.py b/Lib/test/test_capi/test_structmembers.py new file mode 100644 index 00000000..07d2f623 --- /dev/null +++ b/Lib/test/test_capi/test_structmembers.py @@ -0,0 +1,145 @@ +import unittest +from test.support import import_helper +from test.support import warnings_helper + +# Skip this test if the _testcapi module isn't available. 
+import_helper.import_module('_testcapi') +from _testcapi import _test_structmembersType, \ + CHAR_MAX, CHAR_MIN, UCHAR_MAX, \ + SHRT_MAX, SHRT_MIN, USHRT_MAX, \ + INT_MAX, INT_MIN, UINT_MAX, \ + LONG_MAX, LONG_MIN, ULONG_MAX, \ + LLONG_MAX, LLONG_MIN, ULLONG_MAX, \ + PY_SSIZE_T_MAX, PY_SSIZE_T_MIN + +ts=_test_structmembersType(False, # T_BOOL + 1, # T_BYTE + 2, # T_UBYTE + 3, # T_SHORT + 4, # T_USHORT + 5, # T_INT + 6, # T_UINT + 7, # T_LONG + 8, # T_ULONG + 23, # T_PYSSIZET + 9.99999,# T_FLOAT + 10.1010101010, # T_DOUBLE + "hi" # T_STRING_INPLACE + ) + +class ReadWriteTests(unittest.TestCase): + + def test_bool(self): + ts.T_BOOL = True + self.assertEqual(ts.T_BOOL, True) + ts.T_BOOL = False + self.assertEqual(ts.T_BOOL, False) + self.assertRaises(TypeError, setattr, ts, 'T_BOOL', 1) + + def test_byte(self): + ts.T_BYTE = CHAR_MAX + self.assertEqual(ts.T_BYTE, CHAR_MAX) + ts.T_BYTE = CHAR_MIN + self.assertEqual(ts.T_BYTE, CHAR_MIN) + ts.T_UBYTE = UCHAR_MAX + self.assertEqual(ts.T_UBYTE, UCHAR_MAX) + + def test_short(self): + ts.T_SHORT = SHRT_MAX + self.assertEqual(ts.T_SHORT, SHRT_MAX) + ts.T_SHORT = SHRT_MIN + self.assertEqual(ts.T_SHORT, SHRT_MIN) + ts.T_USHORT = USHRT_MAX + self.assertEqual(ts.T_USHORT, USHRT_MAX) + + def test_int(self): + ts.T_INT = INT_MAX + self.assertEqual(ts.T_INT, INT_MAX) + ts.T_INT = INT_MIN + self.assertEqual(ts.T_INT, INT_MIN) + ts.T_UINT = UINT_MAX + self.assertEqual(ts.T_UINT, UINT_MAX) + + def test_long(self): + ts.T_LONG = LONG_MAX + self.assertEqual(ts.T_LONG, LONG_MAX) + ts.T_LONG = LONG_MIN + self.assertEqual(ts.T_LONG, LONG_MIN) + ts.T_ULONG = ULONG_MAX + self.assertEqual(ts.T_ULONG, ULONG_MAX) + + def test_py_ssize_t(self): + ts.T_PYSSIZET = PY_SSIZE_T_MAX + self.assertEqual(ts.T_PYSSIZET, PY_SSIZE_T_MAX) + ts.T_PYSSIZET = PY_SSIZE_T_MIN + self.assertEqual(ts.T_PYSSIZET, PY_SSIZE_T_MIN) + + @unittest.skipUnless(hasattr(ts, "T_LONGLONG"), "long long not present") + def test_longlong(self): + ts.T_LONGLONG = LLONG_MAX + self.assertEqual(ts.T_LONGLONG, LLONG_MAX) + ts.T_LONGLONG = LLONG_MIN + self.assertEqual(ts.T_LONGLONG, LLONG_MIN) + + ts.T_ULONGLONG = ULLONG_MAX + self.assertEqual(ts.T_ULONGLONG, ULLONG_MAX) + + ## make sure these will accept a plain int as well as a long + ts.T_LONGLONG = 3 + self.assertEqual(ts.T_LONGLONG, 3) + ts.T_ULONGLONG = 4 + self.assertEqual(ts.T_ULONGLONG, 4) + + def test_bad_assignments(self): + integer_attributes = [ + 'T_BOOL', + 'T_BYTE', 'T_UBYTE', + 'T_SHORT', 'T_USHORT', + 'T_INT', 'T_UINT', + 'T_LONG', 'T_ULONG', + 'T_PYSSIZET' + ] + if hasattr(ts, 'T_LONGLONG'): + integer_attributes.extend(['T_LONGLONG', 'T_ULONGLONG']) + + # issue8014: this produced 'bad argument to internal function' + # internal error + for nonint in None, 3.2j, "full of eels", {}, []: + for attr in integer_attributes: + self.assertRaises(TypeError, setattr, ts, attr, nonint) + + def test_inplace_string(self): + self.assertEqual(ts.T_STRING_INPLACE, "hi") + self.assertRaises(TypeError, setattr, ts, "T_STRING_INPLACE", "s") + self.assertRaises(TypeError, delattr, ts, "T_STRING_INPLACE") + + +class TestWarnings(unittest.TestCase): + + def test_byte_max(self): + with warnings_helper.check_warnings(('', RuntimeWarning)): + ts.T_BYTE = CHAR_MAX+1 + + def test_byte_min(self): + with warnings_helper.check_warnings(('', RuntimeWarning)): + ts.T_BYTE = CHAR_MIN-1 + + def test_ubyte_max(self): + with warnings_helper.check_warnings(('', RuntimeWarning)): + ts.T_UBYTE = UCHAR_MAX+1 + + def test_short_max(self): + with warnings_helper.check_warnings(('', 
RuntimeWarning)): + ts.T_SHORT = SHRT_MAX+1 + + def test_short_min(self): + with warnings_helper.check_warnings(('', RuntimeWarning)): + ts.T_SHORT = SHRT_MIN-1 + + def test_ushort_max(self): + with warnings_helper.check_warnings(('', RuntimeWarning)): + ts.T_USHORT = USHRT_MAX+1 + + +if __name__ == "__main__": + unittest.main() diff --git a/Lib/test/test_capi/test_unicode.py b/Lib/test/test_capi/test_unicode.py new file mode 100644 index 00000000..8c2ada8e --- /dev/null +++ b/Lib/test/test_capi/test_unicode.py @@ -0,0 +1,502 @@ +import unittest +import sys +import warnings +from test import support +from test.support import import_helper +from test.support import warnings_helper + +try: + import _testcapi +except ImportError: + _testcapi = None + + +class CAPITest(unittest.TestCase): + + # Test PyUnicode_FromFormat() + def test_from_format(self): + import_helper.import_module('ctypes') + from ctypes import ( + c_char_p, + pythonapi, py_object, sizeof, + c_int, c_long, c_longlong, c_ssize_t, + c_uint, c_ulong, c_ulonglong, c_size_t, c_void_p) + name = "PyUnicode_FromFormat" + _PyUnicode_FromFormat = getattr(pythonapi, name) + _PyUnicode_FromFormat.argtypes = (c_char_p,) + _PyUnicode_FromFormat.restype = py_object + + def PyUnicode_FromFormat(format, *args): + cargs = tuple( + py_object(arg) if isinstance(arg, str) else arg + for arg in args) + return _PyUnicode_FromFormat(format, *cargs) + + def check_format(expected, format, *args): + text = PyUnicode_FromFormat(format, *args) + self.assertEqual(expected, text) + + # ascii format, non-ascii argument + check_format('ascii\x7f=unicode\xe9', + b'ascii\x7f=%U', 'unicode\xe9') + + # non-ascii format, ascii argument: ensure that PyUnicode_FromFormatV() + # raises an error + self.assertRaisesRegex(ValueError, + r'^PyUnicode_FromFormatV\(\) expects an ASCII-encoded format ' + 'string, got a non-ASCII byte: 0xe9$', + PyUnicode_FromFormat, b'unicode\xe9=%s', 'ascii') + + # test "%c" + check_format('\uabcd', + b'%c', c_int(0xabcd)) + check_format('\U0010ffff', + b'%c', c_int(0x10ffff)) + with self.assertRaises(OverflowError): + PyUnicode_FromFormat(b'%c', c_int(0x110000)) + # Issue #18183 + check_format('\U00010000\U00100000', + b'%c%c', c_int(0x10000), c_int(0x100000)) + + # test "%" + check_format('%', + b'%') + check_format('%', + b'%%') + check_format('%s', + b'%%s') + check_format('[%]', + b'[%%]') + check_format('%abc', + b'%%%s', b'abc') + + # truncated string + check_format('abc', + b'%.3s', b'abcdef') + check_format('abc[\ufffd', + b'%.5s', 'abc[\u20ac]'.encode('utf8')) + check_format("'\\u20acABC'", + b'%A', '\u20acABC') + check_format("'\\u20", + b'%.5A', '\u20acABCDEF') + check_format("'\u20acABC'", + b'%R', '\u20acABC') + check_format("'\u20acA", + b'%.3R', '\u20acABCDEF') + check_format('\u20acAB', + b'%.3S', '\u20acABCDEF') + check_format('\u20acAB', + b'%.3U', '\u20acABCDEF') + check_format('\u20acAB', + b'%.3V', '\u20acABCDEF', None) + check_format('abc[\ufffd', + b'%.5V', None, 'abc[\u20ac]'.encode('utf8')) + + # following tests comes from #7330 + # test width modifier and precision modifier with %S + check_format("repr= abc", + b'repr=%5S', 'abc') + check_format("repr=ab", + b'repr=%.2S', 'abc') + check_format("repr= ab", + b'repr=%5.2S', 'abc') + + # test width modifier and precision modifier with %R + check_format("repr= 'abc'", + b'repr=%8R', 'abc') + check_format("repr='ab", + b'repr=%.3R', 'abc') + check_format("repr= 'ab", + b'repr=%5.3R', 'abc') + + # test width modifier and precision modifier with %A + check_format("repr= 
'abc'", + b'repr=%8A', 'abc') + check_format("repr='ab", + b'repr=%.3A', 'abc') + check_format("repr= 'ab", + b'repr=%5.3A', 'abc') + + # test width modifier and precision modifier with %s + check_format("repr= abc", + b'repr=%5s', b'abc') + check_format("repr=ab", + b'repr=%.2s', b'abc') + check_format("repr= ab", + b'repr=%5.2s', b'abc') + + # test width modifier and precision modifier with %U + check_format("repr= abc", + b'repr=%5U', 'abc') + check_format("repr=ab", + b'repr=%.2U', 'abc') + check_format("repr= ab", + b'repr=%5.2U', 'abc') + + # test width modifier and precision modifier with %V + check_format("repr= abc", + b'repr=%5V', 'abc', b'123') + check_format("repr=ab", + b'repr=%.2V', 'abc', b'123') + check_format("repr= ab", + b'repr=%5.2V', 'abc', b'123') + check_format("repr= 123", + b'repr=%5V', None, b'123') + check_format("repr=12", + b'repr=%.2V', None, b'123') + check_format("repr= 12", + b'repr=%5.2V', None, b'123') + + # test integer formats (%i, %d, %u) + check_format('010', + b'%03i', c_int(10)) + check_format('0010', + b'%0.4i', c_int(10)) + check_format('-123', + b'%i', c_int(-123)) + check_format('-123', + b'%li', c_long(-123)) + check_format('-123', + b'%lli', c_longlong(-123)) + check_format('-123', + b'%zi', c_ssize_t(-123)) + + check_format('-123', + b'%d', c_int(-123)) + check_format('-123', + b'%ld', c_long(-123)) + check_format('-123', + b'%lld', c_longlong(-123)) + check_format('-123', + b'%zd', c_ssize_t(-123)) + + check_format('123', + b'%u', c_uint(123)) + check_format('123', + b'%lu', c_ulong(123)) + check_format('123', + b'%llu', c_ulonglong(123)) + check_format('123', + b'%zu', c_size_t(123)) + + # test long output + min_longlong = -(2 ** (8 * sizeof(c_longlong) - 1)) + max_longlong = -min_longlong - 1 + check_format(str(min_longlong), + b'%lld', c_longlong(min_longlong)) + check_format(str(max_longlong), + b'%lld', c_longlong(max_longlong)) + max_ulonglong = 2 ** (8 * sizeof(c_ulonglong)) - 1 + check_format(str(max_ulonglong), + b'%llu', c_ulonglong(max_ulonglong)) + PyUnicode_FromFormat(b'%p', c_void_p(-1)) + + # test padding (width and/or precision) + check_format('123'.rjust(10, '0'), + b'%010i', c_int(123)) + check_format('123'.rjust(100), + b'%100i', c_int(123)) + check_format('123'.rjust(100, '0'), + b'%.100i', c_int(123)) + check_format('123'.rjust(80, '0').rjust(100), + b'%100.80i', c_int(123)) + + check_format('123'.rjust(10, '0'), + b'%010u', c_uint(123)) + check_format('123'.rjust(100), + b'%100u', c_uint(123)) + check_format('123'.rjust(100, '0'), + b'%.100u', c_uint(123)) + check_format('123'.rjust(80, '0').rjust(100), + b'%100.80u', c_uint(123)) + + check_format('123'.rjust(10, '0'), + b'%010x', c_int(0x123)) + check_format('123'.rjust(100), + b'%100x', c_int(0x123)) + check_format('123'.rjust(100, '0'), + b'%.100x', c_int(0x123)) + check_format('123'.rjust(80, '0').rjust(100), + b'%100.80x', c_int(0x123)) + + # test %A + check_format(r"%A:'abc\xe9\uabcd\U0010ffff'", + b'%%A:%A', 'abc\xe9\uabcd\U0010ffff') + + # test %V + check_format('repr=abc', + b'repr=%V', 'abc', b'xyz') + + # test %p + # We cannot test the exact result, + # because it returns a hex representation of a C pointer, + # which is going to be different each time. But, we can test the format. 
+ p_format_regex = r'^0x[a-zA-Z0-9]{3,}$' + p_format1 = PyUnicode_FromFormat(b'%p', 'abc') + self.assertIsInstance(p_format1, str) + self.assertRegex(p_format1, p_format_regex) + + p_format2 = PyUnicode_FromFormat(b'%p %p', '123456', b'xyz') + self.assertIsInstance(p_format2, str) + self.assertRegex(p_format2, + r'0x[a-zA-Z0-9]{3,} 0x[a-zA-Z0-9]{3,}') + + # Extra args are ignored: + p_format3 = PyUnicode_FromFormat(b'%p', '123456', None, b'xyz') + self.assertIsInstance(p_format3, str) + self.assertRegex(p_format3, p_format_regex) + + # Test string decode from parameter of %s using utf-8. + # b'\xe4\xba\xba\xe6\xb0\x91' is utf-8 encoded byte sequence of + # '\u4eba\u6c11' + check_format('repr=\u4eba\u6c11', + b'repr=%V', None, b'\xe4\xba\xba\xe6\xb0\x91') + + #Test replace error handler. + check_format('repr=abc\ufffd', + b'repr=%V', None, b'abc\xff') + + # not supported: copy the raw format string. these tests are just here + # to check for crashes and should not be considered as specifications + check_format('%s', + b'%1%s', b'abc') + check_format('%1abc', + b'%1abc') + check_format('%+i', + b'%+i', c_int(10)) + check_format('%.%s', + b'%.%s', b'abc') + + # Issue #33817: empty strings + check_format('', + b'') + check_format('', + b'%s', b'') + + # Test PyUnicode_AsWideChar() + @support.cpython_only + def test_aswidechar(self): + from _testcapi import unicode_aswidechar + import_helper.import_module('ctypes') + from ctypes import c_wchar, sizeof + + wchar, size = unicode_aswidechar('abcdef', 2) + self.assertEqual(size, 2) + self.assertEqual(wchar, 'ab') + + wchar, size = unicode_aswidechar('abc', 3) + self.assertEqual(size, 3) + self.assertEqual(wchar, 'abc') + + wchar, size = unicode_aswidechar('abc', 4) + self.assertEqual(size, 3) + self.assertEqual(wchar, 'abc\0') + + wchar, size = unicode_aswidechar('abc', 10) + self.assertEqual(size, 3) + self.assertEqual(wchar, 'abc\0') + + wchar, size = unicode_aswidechar('abc\0def', 20) + self.assertEqual(size, 7) + self.assertEqual(wchar, 'abc\0def\0') + + nonbmp = chr(0x10ffff) + if sizeof(c_wchar) == 2: + buflen = 3 + nchar = 2 + else: # sizeof(c_wchar) == 4 + buflen = 2 + nchar = 1 + wchar, size = unicode_aswidechar(nonbmp, buflen) + self.assertEqual(size, nchar) + self.assertEqual(wchar, nonbmp + '\0') + + # Test PyUnicode_AsWideCharString() + @support.cpython_only + def test_aswidecharstring(self): + from _testcapi import unicode_aswidecharstring + import_helper.import_module('ctypes') + from ctypes import c_wchar, sizeof + + wchar, size = unicode_aswidecharstring('abc') + self.assertEqual(size, 3) + self.assertEqual(wchar, 'abc\0') + + wchar, size = unicode_aswidecharstring('abc\0def') + self.assertEqual(size, 7) + self.assertEqual(wchar, 'abc\0def\0') + + nonbmp = chr(0x10ffff) + if sizeof(c_wchar) == 2: + nchar = 2 + else: # sizeof(c_wchar) == 4 + nchar = 1 + wchar, size = unicode_aswidecharstring(nonbmp) + self.assertEqual(size, nchar) + self.assertEqual(wchar, nonbmp + '\0') + + # Test PyUnicode_AsUCS4() + @support.cpython_only + def test_asucs4(self): + from _testcapi import unicode_asucs4 + for s in ['abc', '\xa1\xa2', '\u4f60\u597d', 'a\U0001f600', + 'a\ud800b\udfffc', '\ud834\udd1e']: + l = len(s) + self.assertEqual(unicode_asucs4(s, l, True), s+'\0') + self.assertEqual(unicode_asucs4(s, l, False), s+'\uffff') + self.assertEqual(unicode_asucs4(s, l+1, True), s+'\0\uffff') + self.assertEqual(unicode_asucs4(s, l+1, False), s+'\0\uffff') + self.assertRaises(SystemError, unicode_asucs4, s, l-1, True) + self.assertRaises(SystemError, 
unicode_asucs4, s, l-2, False) + s = '\0'.join([s, s]) + self.assertEqual(unicode_asucs4(s, len(s), True), s+'\0') + self.assertEqual(unicode_asucs4(s, len(s), False), s+'\uffff') + + # Test PyUnicode_AsUTF8() + @support.cpython_only + def test_asutf8(self): + from _testcapi import unicode_asutf8 + + bmp = '\u0100' + bmp2 = '\uffff' + nonbmp = chr(0x10ffff) + + self.assertEqual(unicode_asutf8(bmp), b'\xc4\x80') + self.assertEqual(unicode_asutf8(bmp2), b'\xef\xbf\xbf') + self.assertEqual(unicode_asutf8(nonbmp), b'\xf4\x8f\xbf\xbf') + self.assertRaises(UnicodeEncodeError, unicode_asutf8, 'a\ud800b\udfffc') + + # Test PyUnicode_AsUTF8AndSize() + @support.cpython_only + def test_asutf8andsize(self): + from _testcapi import unicode_asutf8andsize + + bmp = '\u0100' + bmp2 = '\uffff' + nonbmp = chr(0x10ffff) + + self.assertEqual(unicode_asutf8andsize(bmp), (b'\xc4\x80', 2)) + self.assertEqual(unicode_asutf8andsize(bmp2), (b'\xef\xbf\xbf', 3)) + self.assertEqual(unicode_asutf8andsize(nonbmp), (b'\xf4\x8f\xbf\xbf', 4)) + self.assertRaises(UnicodeEncodeError, unicode_asutf8andsize, 'a\ud800b\udfffc') + + # Test PyUnicode_FindChar() + @support.cpython_only + def test_findchar(self): + from _testcapi import unicode_findchar + + for str in "\xa1", "\u8000\u8080", "\ud800\udc02", "\U0001f100\U0001f1f1": + for i, ch in enumerate(str): + self.assertEqual(unicode_findchar(str, ord(ch), 0, len(str), 1), i) + self.assertEqual(unicode_findchar(str, ord(ch), 0, len(str), -1), i) + + str = "!>_<!" + self.assertEqual(unicode_findchar(str, 0x110000, 0, len(str), 1), -1) + self.assertEqual(unicode_findchar(str, 0x110000, 0, len(str), -1), -1) + # start < end + self.assertEqual(unicode_findchar(str, ord('!'), 1, len(str)+1, 1), 4) + self.assertEqual(unicode_findchar(str, ord('!'), 1, len(str)+1, -1), 4) + # start >= end + self.assertEqual(unicode_findchar(str, ord('!'), 0, 0, 1), -1) + self.assertEqual(unicode_findchar(str, ord('!'), len(str), 0, 1), -1) + # negative + self.assertEqual(unicode_findchar(str, ord('!'), -len(str), -1, 1), 0) + self.assertEqual(unicode_findchar(str, ord('!'), -len(str), -1, -1), 0) + + # Test PyUnicode_CopyCharacters() + @support.cpython_only + def test_copycharacters(self): + from _testcapi import unicode_copycharacters + + strings = [ + 'abcde', '\xa1\xa2\xa3\xa4\xa5', + '\u4f60\u597d\u4e16\u754c\uff01', + '\U0001f600\U0001f601\U0001f602\U0001f603\U0001f604' + ] + + for idx, from_ in enumerate(strings): + # wide -> narrow: exceed maxchar limitation + for to in strings[:idx]: + self.assertRaises( + SystemError, + unicode_copycharacters, to, 0, from_, 0, 5 + ) + # same kind + for from_start in range(5): + self.assertEqual( + unicode_copycharacters(from_, 0, from_, from_start, 5), + (from_[from_start:from_start+5].ljust(5, '\0'), + 5-from_start) + ) + for to_start in range(5): + self.assertEqual( + unicode_copycharacters(from_, to_start, from_, to_start, 5), + (from_[to_start:to_start+5].rjust(5, '\0'), + 5-to_start) + ) + # narrow -> wide + # Tests omitted since this creates invalid strings.
+ + s = strings[0] + self.assertRaises(IndexError, unicode_copycharacters, s, 6, s, 0, 5) + self.assertRaises(IndexError, unicode_copycharacters, s, -1, s, 0, 5) + self.assertRaises(IndexError, unicode_copycharacters, s, 0, s, 6, 5) + self.assertRaises(IndexError, unicode_copycharacters, s, 0, s, -1, 5) + self.assertRaises(SystemError, unicode_copycharacters, s, 1, s, 0, 5) + self.assertRaises(SystemError, unicode_copycharacters, s, 0, s, 0, -1) + self.assertRaises(SystemError, unicode_copycharacters, s, 0, b'', 0, 0) + + @support.cpython_only + @support.requires_legacy_unicode_capi + def test_encode_decimal(self): + from _testcapi import unicode_encodedecimal + with warnings_helper.check_warnings(): + warnings.simplefilter('ignore', DeprecationWarning) + self.assertEqual(unicode_encodedecimal('123'), + b'123') + self.assertEqual(unicode_encodedecimal('\u0663.\u0661\u0664'), + b'3.14') + self.assertEqual(unicode_encodedecimal( + "\N{EM SPACE}3.14\N{EN SPACE}"), b' 3.14 ') + self.assertRaises(UnicodeEncodeError, + unicode_encodedecimal, "123\u20ac", "strict") + self.assertRaisesRegex( + ValueError, + "^'decimal' codec can't encode character", + unicode_encodedecimal, "123\u20ac", "replace") + + @support.cpython_only + @support.requires_legacy_unicode_capi + def test_transform_decimal(self): + from _testcapi import unicode_transformdecimaltoascii as transform_decimal + with warnings_helper.check_warnings(): + warnings.simplefilter('ignore', DeprecationWarning) + self.assertEqual(transform_decimal('123'), + '123') + self.assertEqual(transform_decimal('\u0663.\u0661\u0664'), + '3.14') + self.assertEqual(transform_decimal("\N{EM SPACE}3.14\N{EN SPACE}"), + "\N{EM SPACE}3.14\N{EN SPACE}") + self.assertEqual(transform_decimal('123\u20ac'), + '123\u20ac') + + @support.cpython_only + def test_pep393_utf8_caching_bug(self): + # Issue #25709: Problem with string concatenation and utf-8 cache + from _testcapi import getargs_s_hash + for k in 0x24, 0xa4, 0x20ac, 0x1f40d: + s = '' + for i in range(5): + # Due to CPython specific optimization the 's' string can be + # resized in-place. + s += chr(k) + # Parsing with the "s#" format code calls indirectly + # PyUnicode_AsUTF8AndSize() which creates the UTF-8 + # encoded string cached in the Unicode object. + self.assertEqual(getargs_s_hash(s), chr(k).encode() * (i + 1)) + # Check that the second call returns the same result + self.assertEqual(getargs_s_hash(s), chr(k).encode() * (i + 1)) + + +if __name__ == "__main__": + unittest.main() diff --git a/Lib/test/test_codecs.py b/Lib/test/test_codecs.py index a9c43d95..24075672 100644 --- a/Lib/test/test_codecs.py +++ b/Lib/test/test_codecs.py @@ -705,7 +705,8 @@ class UTF16Test(ReadTest, unittest.TestCase): "spamspam", self.spambe) def test_bug691291(self): - # Files are always opened in binary mode, even if no binary mode was + # If encoding is not None, then + # files are always opened in binary mode, even if no binary mode was # specified. This means that no automatic conversion of '\n' is done # on reading and writing. 
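Aside (illustration only, not part of the patch): the reworded comment above pins down when the no-newline-translation rule applies -- once an encoding is given, codecs streams open the underlying file in binary mode and pass '\r\n' through untouched. A minimal sketch of that behaviour, assuming only a writable temporary directory:

    import codecs, os, tempfile

    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, 'newlines.txt')
        with codecs.open(path, 'w', encoding='utf-16') as f:
            f.write('Hello\r\nworld\r\n')
        with codecs.open(path, 'r', encoding='utf-16') as f:
            assert f.read() == 'Hello\r\nworld\r\n'   # '\r\n' is not converted to '\n'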
s1 = 'Hello\r\nworld\r\n' @@ -1534,6 +1535,12 @@ class IDNACodecTest(unittest.TestCase): self.assertEqual("pyth\xf6n.org".encode("idna"), b"xn--pythn-mua.org") self.assertEqual("pyth\xf6n.org.".encode("idna"), b"xn--pythn-mua.org.") + def test_builtin_decode_length_limit(self): + with self.assertRaisesRegex(UnicodeError, "too long"): + (b"xn--016c"+b"a"*1100).decode("idna") + with self.assertRaisesRegex(UnicodeError, "too long"): + (b"xn--016c"+b"a"*70).decode("idna") + def test_stream(self): r = codecs.getreader("idna")(io.BytesIO(b"abc")) r.read(3) diff --git a/Lib/test/test_collections.py b/Lib/test/test_collections.py index 3404b8ad..f5af55a3 100644 --- a/Lib/test/test_collections.py +++ b/Lib/test/test_collections.py @@ -788,6 +788,8 @@ class TestOneTrickPonyABCs(ABCTestCase): def __await__(self): yield + self.validate_abstract_methods(Awaitable, '__await__') + non_samples = [None, int(), gen(), object()] for x in non_samples: self.assertNotIsInstance(x, Awaitable) @@ -838,6 +840,8 @@ class TestOneTrickPonyABCs(ABCTestCase): def __await__(self): yield + self.validate_abstract_methods(Coroutine, '__await__', 'send', 'throw') + non_samples = [None, int(), gen(), object(), Bar()] for x in non_samples: self.assertNotIsInstance(x, Coroutine) @@ -1580,6 +1584,7 @@ class TestCollectionABCs(ABCTestCase): containers = [ seq, ItemsView({1: nan, 2: obj}), + KeysView({1: nan, 2: obj}), ValuesView({1: nan, 2: obj}) ] for container in containers: @@ -1843,6 +1848,8 @@ class TestCollectionABCs(ABCTestCase): mymap['red'] = 5 self.assertIsInstance(mymap.keys(), Set) self.assertIsInstance(mymap.keys(), KeysView) + self.assertIsInstance(mymap.values(), Collection) + self.assertIsInstance(mymap.values(), ValuesView) self.assertIsInstance(mymap.items(), Set) self.assertIsInstance(mymap.items(), ItemsView) @@ -1918,6 +1925,7 @@ class TestCollectionABCs(ABCTestCase): self.assertFalse(issubclass(sample, ByteString)) self.assertNotIsInstance(memoryview(b""), ByteString) self.assertFalse(issubclass(memoryview, ByteString)) + self.validate_abstract_methods(ByteString, '__getitem__', '__len__') def test_MutableSequence(self): for sample in [tuple, str, bytes]: diff --git a/Lib/test/test_complex.py b/Lib/test/test_complex.py index c6a261b4..c9fd6a5c 100644 --- a/Lib/test/test_complex.py +++ b/Lib/test/test_complex.py @@ -306,15 +306,10 @@ class ComplexTest(unittest.TestCase): self.assertClose(complex(5.3, 9.8).conjugate(), 5.3-9.8j) def test_constructor(self): - class OS: + class NS: def __init__(self, value): self.value = value def __complex__(self): return self.value - class NS(object): - def __init__(self, value): self.value = value - def __complex__(self): return self.value - self.assertEqual(complex(OS(1+10j)), 1+10j) self.assertEqual(complex(NS(1+10j)), 1+10j) - self.assertRaises(TypeError, complex, OS(None)) self.assertRaises(TypeError, complex, NS(None)) self.assertRaises(TypeError, complex, {}) self.assertRaises(TypeError, complex, NS(1.5)) diff --git a/Lib/test/test_coroutines.py b/Lib/test/test_coroutines.py index acff2453..f4c52689 100644 --- a/Lib/test/test_coroutines.py +++ b/Lib/test/test_coroutines.py @@ -2319,7 +2319,8 @@ class UnawaitedWarningDuringShutdownTest(unittest.TestCase): def test_unawaited_warning_during_shutdown(self): code = ("import asyncio\n" "async def f(): pass\n" - "asyncio.gather(f())\n") + "async def t(): asyncio.gather(f())\n" + "asyncio.run(t())\n") assert_python_ok("-c", code) code = ("import sys\n" diff --git a/Lib/test/test_dataclasses.py b/Lib/test/test_dataclasses.py 
index f72e81c3..e805f0cf 100644 --- a/Lib/test/test_dataclasses.py +++ b/Lib/test/test_dataclasses.py @@ -230,6 +230,14 @@ class TestCase(unittest.TestCase): c = C('foo') self.assertEqual(c.object, 'foo') + def test_field_named_BUILTINS_frozen(self): + # gh-96151 + @dataclass(frozen=True) + class C: + BUILTINS: int + c = C(5) + self.assertEqual(c.BUILTINS, 5) + def test_field_named_like_builtin(self): # Attribute names can shadow built-in names # since code generation is used. diff --git a/Lib/test/test_dictviews.py b/Lib/test/test_dictviews.py index be271beb..dae93740 100644 --- a/Lib/test/test_dictviews.py +++ b/Lib/test/test_dictviews.py @@ -320,6 +320,9 @@ class DictSetTest(unittest.TestCase): self.assertIsInstance(d.values(), collections.abc.ValuesView) self.assertIsInstance(d.values(), collections.abc.MappingView) self.assertIsInstance(d.values(), collections.abc.Sized) + self.assertIsInstance(d.values(), collections.abc.Collection) + self.assertIsInstance(d.values(), collections.abc.Iterable) + self.assertIsInstance(d.values(), collections.abc.Container) self.assertIsInstance(d.items(), collections.abc.ItemsView) self.assertIsInstance(d.items(), collections.abc.MappingView) diff --git a/Lib/test/test_getargs2.py b/Lib/test/test_getargs2.py deleted file mode 100644 index 72b6d64a..00000000 --- a/Lib/test/test_getargs2.py +++ /dev/null @@ -1,1318 +0,0 @@ -import unittest -import math -import string -import sys -import warnings -from test import support -from test.support import import_helper -from test.support import warnings_helper -# Skip this test if the _testcapi module isn't available. -_testcapi = import_helper.import_module('_testcapi') -from _testcapi import getargs_keywords, getargs_keyword_only - -# > How about the following counterproposal. This also changes some of -# > the other format codes to be a little more regular. -# > -# > Code C type Range check -# > -# > b unsigned char 0..UCHAR_MAX -# > h signed short SHRT_MIN..SHRT_MAX -# > B unsigned char none ** -# > H unsigned short none ** -# > k * unsigned long none -# > I * unsigned int 0..UINT_MAX -# -# -# > i int INT_MIN..INT_MAX -# > l long LONG_MIN..LONG_MAX -# -# > K * unsigned long long none -# > L long long LLONG_MIN..LLONG_MAX -# -# > Notes: -# > -# > * New format codes. -# > -# > ** Changed from previous "range-and-a-half" to "none"; the -# > range-and-a-half checking wasn't particularly useful. -# -# Plus a C API or two, e.g. PyLong_AsUnsignedLongMask() -> -# unsigned long and PyLong_AsUnsignedLongLongMask() -> unsigned -# long long (if that exists). 
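Aside (illustration only, not part of the deleted file): the table in the comment above separates converters that range-check from those that accept any integer and mask it, which is exactly the split the removed tests exercise one format unit at a time. A minimal sketch of the difference for 'b' versus 'B', assuming the _testcapi extension built with CPython's test suite (the same facts are asserted by the removed tests):

    from _testcapi import getargs_b, getargs_B, UCHAR_MAX

    assert getargs_b(UCHAR_MAX) == UCHAR_MAX   # 'b' range-checks 0..UCHAR_MAX
    try:
        getargs_b(UCHAR_MAX + 1)               # out of range -> OverflowError
    except OverflowError:
        pass
    assert getargs_B(UCHAR_MAX + 1) == 0       # 'B' masks instead of checking
    assert getargs_B(-1) == UCHAR_MAX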
- -LARGE = 0x7FFFFFFF -VERY_LARGE = 0xFF0000121212121212121242 - -from _testcapi import UCHAR_MAX, USHRT_MAX, UINT_MAX, ULONG_MAX, INT_MAX, \ - INT_MIN, LONG_MIN, LONG_MAX, PY_SSIZE_T_MIN, PY_SSIZE_T_MAX, \ - SHRT_MIN, SHRT_MAX, FLT_MIN, FLT_MAX, DBL_MIN, DBL_MAX - -DBL_MAX_EXP = sys.float_info.max_exp -INF = float('inf') -NAN = float('nan') - -# fake, they are not defined in Python's header files -LLONG_MAX = 2**63-1 -LLONG_MIN = -2**63 -ULLONG_MAX = 2**64-1 - -class Index: - def __index__(self): - return 99 - -class IndexIntSubclass(int): - def __index__(self): - return 99 - -class BadIndex: - def __index__(self): - return 1.0 - -class BadIndex2: - def __index__(self): - return True - -class BadIndex3(int): - def __index__(self): - return True - - -class Int: - def __int__(self): - return 99 - -class IntSubclass(int): - def __int__(self): - return 99 - -class BadInt: - def __int__(self): - return 1.0 - -class BadInt2: - def __int__(self): - return True - -class BadInt3(int): - def __int__(self): - return True - - -class Float: - def __float__(self): - return 4.25 - -class FloatSubclass(float): - pass - -class FloatSubclass2(float): - def __float__(self): - return 4.25 - -class BadFloat: - def __float__(self): - return 687 - -class BadFloat2: - def __float__(self): - return FloatSubclass(4.25) - -class BadFloat3(float): - def __float__(self): - return FloatSubclass(4.25) - - -class Complex: - def __complex__(self): - return 4.25+0.5j - -class ComplexSubclass(complex): - pass - -class ComplexSubclass2(complex): - def __complex__(self): - return 4.25+0.5j - -class BadComplex: - def __complex__(self): - return 1.25 - -class BadComplex2: - def __complex__(self): - return ComplexSubclass(4.25+0.5j) - -class BadComplex3(complex): - def __complex__(self): - return ComplexSubclass(4.25+0.5j) - - -class TupleSubclass(tuple): - pass - -class DictSubclass(dict): - pass - - -class Unsigned_TestCase(unittest.TestCase): - def test_b(self): - from _testcapi import getargs_b - # b returns 'unsigned char', and does range checking (0 ... 
UCHAR_MAX) - self.assertRaises(TypeError, getargs_b, 3.14) - self.assertEqual(99, getargs_b(Index())) - self.assertEqual(0, getargs_b(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_b, BadIndex()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(1, getargs_b(BadIndex2())) - self.assertEqual(0, getargs_b(BadIndex3())) - self.assertRaises(TypeError, getargs_b, Int()) - self.assertEqual(0, getargs_b(IntSubclass())) - self.assertRaises(TypeError, getargs_b, BadInt()) - self.assertRaises(TypeError, getargs_b, BadInt2()) - self.assertEqual(0, getargs_b(BadInt3())) - - self.assertRaises(OverflowError, getargs_b, -1) - self.assertEqual(0, getargs_b(0)) - self.assertEqual(UCHAR_MAX, getargs_b(UCHAR_MAX)) - self.assertRaises(OverflowError, getargs_b, UCHAR_MAX + 1) - - self.assertEqual(42, getargs_b(42)) - self.assertRaises(OverflowError, getargs_b, VERY_LARGE) - - def test_B(self): - from _testcapi import getargs_B - # B returns 'unsigned char', no range checking - self.assertRaises(TypeError, getargs_B, 3.14) - self.assertEqual(99, getargs_B(Index())) - self.assertEqual(0, getargs_B(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_B, BadIndex()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(1, getargs_B(BadIndex2())) - self.assertEqual(0, getargs_B(BadIndex3())) - self.assertRaises(TypeError, getargs_B, Int()) - self.assertEqual(0, getargs_B(IntSubclass())) - self.assertRaises(TypeError, getargs_B, BadInt()) - self.assertRaises(TypeError, getargs_B, BadInt2()) - self.assertEqual(0, getargs_B(BadInt3())) - - self.assertEqual(UCHAR_MAX, getargs_B(-1)) - self.assertEqual(0, getargs_B(0)) - self.assertEqual(UCHAR_MAX, getargs_B(UCHAR_MAX)) - self.assertEqual(0, getargs_B(UCHAR_MAX+1)) - - self.assertEqual(42, getargs_B(42)) - self.assertEqual(UCHAR_MAX & VERY_LARGE, getargs_B(VERY_LARGE)) - - def test_H(self): - from _testcapi import getargs_H - # H returns 'unsigned short', no range checking - self.assertRaises(TypeError, getargs_H, 3.14) - self.assertEqual(99, getargs_H(Index())) - self.assertEqual(0, getargs_H(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_H, BadIndex()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(1, getargs_H(BadIndex2())) - self.assertEqual(0, getargs_H(BadIndex3())) - self.assertRaises(TypeError, getargs_H, Int()) - self.assertEqual(0, getargs_H(IntSubclass())) - self.assertRaises(TypeError, getargs_H, BadInt()) - self.assertRaises(TypeError, getargs_H, BadInt2()) - self.assertEqual(0, getargs_H(BadInt3())) - - self.assertEqual(USHRT_MAX, getargs_H(-1)) - self.assertEqual(0, getargs_H(0)) - self.assertEqual(USHRT_MAX, getargs_H(USHRT_MAX)) - self.assertEqual(0, getargs_H(USHRT_MAX+1)) - - self.assertEqual(42, getargs_H(42)) - - self.assertEqual(VERY_LARGE & USHRT_MAX, getargs_H(VERY_LARGE)) - - def test_I(self): - from _testcapi import getargs_I - # I returns 'unsigned int', no range checking - self.assertRaises(TypeError, getargs_I, 3.14) - self.assertEqual(99, getargs_I(Index())) - self.assertEqual(0, getargs_I(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_I, BadIndex()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(1, getargs_I(BadIndex2())) - self.assertEqual(0, getargs_I(BadIndex3())) - self.assertRaises(TypeError, getargs_I, Int()) - self.assertEqual(0, getargs_I(IntSubclass())) - self.assertRaises(TypeError, getargs_I, BadInt()) - self.assertRaises(TypeError, getargs_I, BadInt2()) - self.assertEqual(0, getargs_I(BadInt3())) - - 
self.assertEqual(UINT_MAX, getargs_I(-1)) - self.assertEqual(0, getargs_I(0)) - self.assertEqual(UINT_MAX, getargs_I(UINT_MAX)) - self.assertEqual(0, getargs_I(UINT_MAX+1)) - - self.assertEqual(42, getargs_I(42)) - - self.assertEqual(VERY_LARGE & UINT_MAX, getargs_I(VERY_LARGE)) - - def test_k(self): - from _testcapi import getargs_k - # k returns 'unsigned long', no range checking - # it does not accept float, or instances with __int__ - self.assertRaises(TypeError, getargs_k, 3.14) - self.assertRaises(TypeError, getargs_k, Index()) - self.assertEqual(0, getargs_k(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_k, BadIndex()) - self.assertRaises(TypeError, getargs_k, BadIndex2()) - self.assertEqual(0, getargs_k(BadIndex3())) - self.assertRaises(TypeError, getargs_k, Int()) - self.assertEqual(0, getargs_k(IntSubclass())) - self.assertRaises(TypeError, getargs_k, BadInt()) - self.assertRaises(TypeError, getargs_k, BadInt2()) - self.assertEqual(0, getargs_k(BadInt3())) - - self.assertEqual(ULONG_MAX, getargs_k(-1)) - self.assertEqual(0, getargs_k(0)) - self.assertEqual(ULONG_MAX, getargs_k(ULONG_MAX)) - self.assertEqual(0, getargs_k(ULONG_MAX+1)) - - self.assertEqual(42, getargs_k(42)) - - self.assertEqual(VERY_LARGE & ULONG_MAX, getargs_k(VERY_LARGE)) - -class Signed_TestCase(unittest.TestCase): - def test_h(self): - from _testcapi import getargs_h - # h returns 'short', and does range checking (SHRT_MIN ... SHRT_MAX) - self.assertRaises(TypeError, getargs_h, 3.14) - self.assertEqual(99, getargs_h(Index())) - self.assertEqual(0, getargs_h(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_h, BadIndex()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(1, getargs_h(BadIndex2())) - self.assertEqual(0, getargs_h(BadIndex3())) - self.assertRaises(TypeError, getargs_h, Int()) - self.assertEqual(0, getargs_h(IntSubclass())) - self.assertRaises(TypeError, getargs_h, BadInt()) - self.assertRaises(TypeError, getargs_h, BadInt2()) - self.assertEqual(0, getargs_h(BadInt3())) - - self.assertRaises(OverflowError, getargs_h, SHRT_MIN-1) - self.assertEqual(SHRT_MIN, getargs_h(SHRT_MIN)) - self.assertEqual(SHRT_MAX, getargs_h(SHRT_MAX)) - self.assertRaises(OverflowError, getargs_h, SHRT_MAX+1) - - self.assertEqual(42, getargs_h(42)) - self.assertRaises(OverflowError, getargs_h, VERY_LARGE) - - def test_i(self): - from _testcapi import getargs_i - # i returns 'int', and does range checking (INT_MIN ... INT_MAX) - self.assertRaises(TypeError, getargs_i, 3.14) - self.assertEqual(99, getargs_i(Index())) - self.assertEqual(0, getargs_i(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_i, BadIndex()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(1, getargs_i(BadIndex2())) - self.assertEqual(0, getargs_i(BadIndex3())) - self.assertRaises(TypeError, getargs_i, Int()) - self.assertEqual(0, getargs_i(IntSubclass())) - self.assertRaises(TypeError, getargs_i, BadInt()) - self.assertRaises(TypeError, getargs_i, BadInt2()) - self.assertEqual(0, getargs_i(BadInt3())) - - self.assertRaises(OverflowError, getargs_i, INT_MIN-1) - self.assertEqual(INT_MIN, getargs_i(INT_MIN)) - self.assertEqual(INT_MAX, getargs_i(INT_MAX)) - self.assertRaises(OverflowError, getargs_i, INT_MAX+1) - - self.assertEqual(42, getargs_i(42)) - self.assertRaises(OverflowError, getargs_i, VERY_LARGE) - - def test_l(self): - from _testcapi import getargs_l - # l returns 'long', and does range checking (LONG_MIN ... 
LONG_MAX) - self.assertRaises(TypeError, getargs_l, 3.14) - self.assertEqual(99, getargs_l(Index())) - self.assertEqual(0, getargs_l(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_l, BadIndex()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(1, getargs_l(BadIndex2())) - self.assertEqual(0, getargs_l(BadIndex3())) - self.assertRaises(TypeError, getargs_l, Int()) - self.assertEqual(0, getargs_l(IntSubclass())) - self.assertRaises(TypeError, getargs_l, BadInt()) - self.assertRaises(TypeError, getargs_l, BadInt2()) - self.assertEqual(0, getargs_l(BadInt3())) - - self.assertRaises(OverflowError, getargs_l, LONG_MIN-1) - self.assertEqual(LONG_MIN, getargs_l(LONG_MIN)) - self.assertEqual(LONG_MAX, getargs_l(LONG_MAX)) - self.assertRaises(OverflowError, getargs_l, LONG_MAX+1) - - self.assertEqual(42, getargs_l(42)) - self.assertRaises(OverflowError, getargs_l, VERY_LARGE) - - def test_n(self): - from _testcapi import getargs_n - # n returns 'Py_ssize_t', and does range checking - # (PY_SSIZE_T_MIN ... PY_SSIZE_T_MAX) - self.assertRaises(TypeError, getargs_n, 3.14) - self.assertEqual(99, getargs_n(Index())) - self.assertEqual(0, getargs_n(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_n, BadIndex()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(1, getargs_n(BadIndex2())) - self.assertEqual(0, getargs_n(BadIndex3())) - self.assertRaises(TypeError, getargs_n, Int()) - self.assertEqual(0, getargs_n(IntSubclass())) - self.assertRaises(TypeError, getargs_n, BadInt()) - self.assertRaises(TypeError, getargs_n, BadInt2()) - self.assertEqual(0, getargs_n(BadInt3())) - - self.assertRaises(OverflowError, getargs_n, PY_SSIZE_T_MIN-1) - self.assertEqual(PY_SSIZE_T_MIN, getargs_n(PY_SSIZE_T_MIN)) - self.assertEqual(PY_SSIZE_T_MAX, getargs_n(PY_SSIZE_T_MAX)) - self.assertRaises(OverflowError, getargs_n, PY_SSIZE_T_MAX+1) - - self.assertEqual(42, getargs_n(42)) - self.assertRaises(OverflowError, getargs_n, VERY_LARGE) - - -class LongLong_TestCase(unittest.TestCase): - def test_L(self): - from _testcapi import getargs_L - # L returns 'long long', and does range checking (LLONG_MIN - # ... 
LLONG_MAX) - self.assertRaises(TypeError, getargs_L, 3.14) - self.assertRaises(TypeError, getargs_L, "Hello") - self.assertEqual(99, getargs_L(Index())) - self.assertEqual(0, getargs_L(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_L, BadIndex()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(1, getargs_L(BadIndex2())) - self.assertEqual(0, getargs_L(BadIndex3())) - self.assertRaises(TypeError, getargs_L, Int()) - self.assertEqual(0, getargs_L(IntSubclass())) - self.assertRaises(TypeError, getargs_L, BadInt()) - self.assertRaises(TypeError, getargs_L, BadInt2()) - self.assertEqual(0, getargs_L(BadInt3())) - - self.assertRaises(OverflowError, getargs_L, LLONG_MIN-1) - self.assertEqual(LLONG_MIN, getargs_L(LLONG_MIN)) - self.assertEqual(LLONG_MAX, getargs_L(LLONG_MAX)) - self.assertRaises(OverflowError, getargs_L, LLONG_MAX+1) - - self.assertEqual(42, getargs_L(42)) - self.assertRaises(OverflowError, getargs_L, VERY_LARGE) - - def test_K(self): - from _testcapi import getargs_K - # K return 'unsigned long long', no range checking - self.assertRaises(TypeError, getargs_K, 3.14) - self.assertRaises(TypeError, getargs_K, Index()) - self.assertEqual(0, getargs_K(IndexIntSubclass())) - self.assertRaises(TypeError, getargs_K, BadIndex()) - self.assertRaises(TypeError, getargs_K, BadIndex2()) - self.assertEqual(0, getargs_K(BadIndex3())) - self.assertRaises(TypeError, getargs_K, Int()) - self.assertEqual(0, getargs_K(IntSubclass())) - self.assertRaises(TypeError, getargs_K, BadInt()) - self.assertRaises(TypeError, getargs_K, BadInt2()) - self.assertEqual(0, getargs_K(BadInt3())) - - self.assertEqual(ULLONG_MAX, getargs_K(ULLONG_MAX)) - self.assertEqual(0, getargs_K(0)) - self.assertEqual(0, getargs_K(ULLONG_MAX+1)) - - self.assertEqual(42, getargs_K(42)) - - self.assertEqual(VERY_LARGE & ULLONG_MAX, getargs_K(VERY_LARGE)) - - -class Float_TestCase(unittest.TestCase): - def assertEqualWithSign(self, actual, expected): - self.assertEqual(actual, expected) - self.assertEqual(math.copysign(1, actual), math.copysign(1, expected)) - - def test_f(self): - from _testcapi import getargs_f - self.assertEqual(getargs_f(4.25), 4.25) - self.assertEqual(getargs_f(4), 4.0) - self.assertRaises(TypeError, getargs_f, 4.25+0j) - self.assertEqual(getargs_f(Float()), 4.25) - self.assertEqual(getargs_f(FloatSubclass(7.5)), 7.5) - self.assertEqual(getargs_f(FloatSubclass2(7.5)), 7.5) - self.assertRaises(TypeError, getargs_f, BadFloat()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(getargs_f(BadFloat2()), 4.25) - self.assertEqual(getargs_f(BadFloat3(7.5)), 7.5) - self.assertEqual(getargs_f(Index()), 99.0) - self.assertRaises(TypeError, getargs_f, Int()) - - for x in (FLT_MIN, -FLT_MIN, FLT_MAX, -FLT_MAX, INF, -INF): - self.assertEqual(getargs_f(x), x) - if FLT_MAX < DBL_MAX: - self.assertEqual(getargs_f(DBL_MAX), INF) - self.assertEqual(getargs_f(-DBL_MAX), -INF) - if FLT_MIN > DBL_MIN: - self.assertEqualWithSign(getargs_f(DBL_MIN), 0.0) - self.assertEqualWithSign(getargs_f(-DBL_MIN), -0.0) - self.assertEqualWithSign(getargs_f(0.0), 0.0) - self.assertEqualWithSign(getargs_f(-0.0), -0.0) - r = getargs_f(NAN) - self.assertNotEqual(r, r) - - @support.requires_IEEE_754 - def test_f_rounding(self): - from _testcapi import getargs_f - self.assertEqual(getargs_f(3.40282356e38), FLT_MAX) - self.assertEqual(getargs_f(-3.40282356e38), -FLT_MAX) - - def test_d(self): - from _testcapi import getargs_d - self.assertEqual(getargs_d(4.25), 4.25) - self.assertEqual(getargs_d(4), 4.0) - 
self.assertRaises(TypeError, getargs_d, 4.25+0j) - self.assertEqual(getargs_d(Float()), 4.25) - self.assertEqual(getargs_d(FloatSubclass(7.5)), 7.5) - self.assertEqual(getargs_d(FloatSubclass2(7.5)), 7.5) - self.assertRaises(TypeError, getargs_d, BadFloat()) - with self.assertWarns(DeprecationWarning): - self.assertEqual(getargs_d(BadFloat2()), 4.25) - self.assertEqual(getargs_d(BadFloat3(7.5)), 7.5) - self.assertEqual(getargs_d(Index()), 99.0) - self.assertRaises(TypeError, getargs_d, Int()) - - for x in (DBL_MIN, -DBL_MIN, DBL_MAX, -DBL_MAX, INF, -INF): - self.assertEqual(getargs_d(x), x) - self.assertRaises(OverflowError, getargs_d, 1< 1 - self.assertEqual(getargs_c(b'a'), 97) - self.assertEqual(getargs_c(bytearray(b'a')), 97) - self.assertRaises(TypeError, getargs_c, memoryview(b'a')) - self.assertRaises(TypeError, getargs_c, 's') - self.assertRaises(TypeError, getargs_c, 97) - self.assertRaises(TypeError, getargs_c, None) - - def test_y(self): - from _testcapi import getargs_y - self.assertRaises(TypeError, getargs_y, 'abc\xe9') - self.assertEqual(getargs_y(b'bytes'), b'bytes') - self.assertRaises(ValueError, getargs_y, b'nul:\0') - self.assertRaises(TypeError, getargs_y, bytearray(b'bytearray')) - self.assertRaises(TypeError, getargs_y, memoryview(b'memoryview')) - self.assertRaises(TypeError, getargs_y, None) - - def test_y_star(self): - from _testcapi import getargs_y_star - self.assertRaises(TypeError, getargs_y_star, 'abc\xe9') - self.assertEqual(getargs_y_star(b'bytes'), b'bytes') - self.assertEqual(getargs_y_star(b'nul:\0'), b'nul:\0') - self.assertEqual(getargs_y_star(bytearray(b'bytearray')), b'bytearray') - self.assertEqual(getargs_y_star(memoryview(b'memoryview')), b'memoryview') - self.assertRaises(TypeError, getargs_y_star, None) - - def test_y_hash(self): - from _testcapi import getargs_y_hash - self.assertRaises(TypeError, getargs_y_hash, 'abc\xe9') - self.assertEqual(getargs_y_hash(b'bytes'), b'bytes') - self.assertEqual(getargs_y_hash(b'nul:\0'), b'nul:\0') - self.assertRaises(TypeError, getargs_y_hash, bytearray(b'bytearray')) - self.assertRaises(TypeError, getargs_y_hash, memoryview(b'memoryview')) - self.assertRaises(TypeError, getargs_y_hash, None) - - def test_w_star(self): - # getargs_w_star() modifies first and last byte - from _testcapi import getargs_w_star - self.assertRaises(TypeError, getargs_w_star, 'abc\xe9') - self.assertRaises(TypeError, getargs_w_star, b'bytes') - self.assertRaises(TypeError, getargs_w_star, b'nul:\0') - self.assertRaises(TypeError, getargs_w_star, memoryview(b'bytes')) - buf = bytearray(b'bytearray') - self.assertEqual(getargs_w_star(buf), b'[ytearra]') - self.assertEqual(buf, bytearray(b'[ytearra]')) - buf = bytearray(b'memoryview') - self.assertEqual(getargs_w_star(memoryview(buf)), b'[emoryvie]') - self.assertEqual(buf, bytearray(b'[emoryvie]')) - self.assertRaises(TypeError, getargs_w_star, None) - - -class String_TestCase(unittest.TestCase): - def test_C(self): - from _testcapi import getargs_C - self.assertRaises(TypeError, getargs_C, 'abc') # len > 1 - self.assertEqual(getargs_C('a'), 97) - self.assertEqual(getargs_C('\u20ac'), 0x20ac) - self.assertEqual(getargs_C('\U0001f40d'), 0x1f40d) - self.assertRaises(TypeError, getargs_C, b'a') - self.assertRaises(TypeError, getargs_C, bytearray(b'a')) - self.assertRaises(TypeError, getargs_C, memoryview(b'a')) - self.assertRaises(TypeError, getargs_C, 97) - self.assertRaises(TypeError, getargs_C, None) - - def test_s(self): - from _testcapi import getargs_s - 
self.assertEqual(getargs_s('abc\xe9'), b'abc\xc3\xa9') - self.assertRaises(ValueError, getargs_s, 'nul:\0') - self.assertRaises(TypeError, getargs_s, b'bytes') - self.assertRaises(TypeError, getargs_s, bytearray(b'bytearray')) - self.assertRaises(TypeError, getargs_s, memoryview(b'memoryview')) - self.assertRaises(TypeError, getargs_s, None) - - def test_s_star(self): - from _testcapi import getargs_s_star - self.assertEqual(getargs_s_star('abc\xe9'), b'abc\xc3\xa9') - self.assertEqual(getargs_s_star('nul:\0'), b'nul:\0') - self.assertEqual(getargs_s_star(b'bytes'), b'bytes') - self.assertEqual(getargs_s_star(bytearray(b'bytearray')), b'bytearray') - self.assertEqual(getargs_s_star(memoryview(b'memoryview')), b'memoryview') - self.assertRaises(TypeError, getargs_s_star, None) - - def test_s_hash(self): - from _testcapi import getargs_s_hash - self.assertEqual(getargs_s_hash('abc\xe9'), b'abc\xc3\xa9') - self.assertEqual(getargs_s_hash('nul:\0'), b'nul:\0') - self.assertEqual(getargs_s_hash(b'bytes'), b'bytes') - self.assertRaises(TypeError, getargs_s_hash, bytearray(b'bytearray')) - self.assertRaises(TypeError, getargs_s_hash, memoryview(b'memoryview')) - self.assertRaises(TypeError, getargs_s_hash, None) - - def test_s_hash_int(self): - # "s#" without PY_SSIZE_T_CLEAN defined. - from _testcapi import getargs_s_hash_int - from _testcapi import getargs_s_hash_int2 - buf = bytearray([1, 2]) - self.assertRaises(SystemError, getargs_s_hash_int, buf, "abc") - self.assertRaises(SystemError, getargs_s_hash_int, buf, x=42) - self.assertRaises(SystemError, getargs_s_hash_int, buf, x="abc") - self.assertRaises(SystemError, getargs_s_hash_int2, buf, ("abc",)) - self.assertRaises(SystemError, getargs_s_hash_int2, buf, x=42) - self.assertRaises(SystemError, getargs_s_hash_int2, buf, x="abc") - buf.append(3) # still mutable -- not locked by a buffer export - # getargs_s_hash_int(buf) may not raise SystemError because skipitem() - # is not called. But it is an implementation detail. 
- # getargs_s_hash_int(buf) - # getargs_s_hash_int2(buf) - - def test_z(self): - from _testcapi import getargs_z - self.assertEqual(getargs_z('abc\xe9'), b'abc\xc3\xa9') - self.assertRaises(ValueError, getargs_z, 'nul:\0') - self.assertRaises(TypeError, getargs_z, b'bytes') - self.assertRaises(TypeError, getargs_z, bytearray(b'bytearray')) - self.assertRaises(TypeError, getargs_z, memoryview(b'memoryview')) - self.assertIsNone(getargs_z(None)) - - def test_z_star(self): - from _testcapi import getargs_z_star - self.assertEqual(getargs_z_star('abc\xe9'), b'abc\xc3\xa9') - self.assertEqual(getargs_z_star('nul:\0'), b'nul:\0') - self.assertEqual(getargs_z_star(b'bytes'), b'bytes') - self.assertEqual(getargs_z_star(bytearray(b'bytearray')), b'bytearray') - self.assertEqual(getargs_z_star(memoryview(b'memoryview')), b'memoryview') - self.assertIsNone(getargs_z_star(None)) - - def test_z_hash(self): - from _testcapi import getargs_z_hash - self.assertEqual(getargs_z_hash('abc\xe9'), b'abc\xc3\xa9') - self.assertEqual(getargs_z_hash('nul:\0'), b'nul:\0') - self.assertEqual(getargs_z_hash(b'bytes'), b'bytes') - self.assertRaises(TypeError, getargs_z_hash, bytearray(b'bytearray')) - self.assertRaises(TypeError, getargs_z_hash, memoryview(b'memoryview')) - self.assertIsNone(getargs_z_hash(None)) - - def test_es(self): - from _testcapi import getargs_es - self.assertEqual(getargs_es('abc\xe9'), b'abc\xc3\xa9') - self.assertEqual(getargs_es('abc\xe9', 'latin1'), b'abc\xe9') - self.assertRaises(UnicodeEncodeError, getargs_es, 'abc\xe9', 'ascii') - self.assertRaises(LookupError, getargs_es, 'abc\xe9', 'spam') - self.assertRaises(TypeError, getargs_es, b'bytes', 'latin1') - self.assertRaises(TypeError, getargs_es, bytearray(b'bytearray'), 'latin1') - self.assertRaises(TypeError, getargs_es, memoryview(b'memoryview'), 'latin1') - self.assertRaises(TypeError, getargs_es, None, 'latin1') - self.assertRaises(TypeError, getargs_es, 'nul:\0', 'latin1') - - def test_et(self): - from _testcapi import getargs_et - self.assertEqual(getargs_et('abc\xe9'), b'abc\xc3\xa9') - self.assertEqual(getargs_et('abc\xe9', 'latin1'), b'abc\xe9') - self.assertRaises(UnicodeEncodeError, getargs_et, 'abc\xe9', 'ascii') - self.assertRaises(LookupError, getargs_et, 'abc\xe9', 'spam') - self.assertEqual(getargs_et(b'bytes', 'latin1'), b'bytes') - self.assertEqual(getargs_et(bytearray(b'bytearray'), 'latin1'), b'bytearray') - self.assertRaises(TypeError, getargs_et, memoryview(b'memoryview'), 'latin1') - self.assertRaises(TypeError, getargs_et, None, 'latin1') - self.assertRaises(TypeError, getargs_et, 'nul:\0', 'latin1') - self.assertRaises(TypeError, getargs_et, b'nul:\0', 'latin1') - self.assertRaises(TypeError, getargs_et, bytearray(b'nul:\0'), 'latin1') - - def test_es_hash(self): - from _testcapi import getargs_es_hash - self.assertEqual(getargs_es_hash('abc\xe9'), b'abc\xc3\xa9') - self.assertEqual(getargs_es_hash('abc\xe9', 'latin1'), b'abc\xe9') - self.assertRaises(UnicodeEncodeError, getargs_es_hash, 'abc\xe9', 'ascii') - self.assertRaises(LookupError, getargs_es_hash, 'abc\xe9', 'spam') - self.assertRaises(TypeError, getargs_es_hash, b'bytes', 'latin1') - self.assertRaises(TypeError, getargs_es_hash, bytearray(b'bytearray'), 'latin1') - self.assertRaises(TypeError, getargs_es_hash, memoryview(b'memoryview'), 'latin1') - self.assertRaises(TypeError, getargs_es_hash, None, 'latin1') - self.assertEqual(getargs_es_hash('nul:\0', 'latin1'), b'nul:\0') - - buf = bytearray(b'x'*8) - self.assertEqual(getargs_es_hash('abc\xe9', 
'latin1', buf), b'abc\xe9') - self.assertEqual(buf, bytearray(b'abc\xe9\x00xxx')) - buf = bytearray(b'x'*5) - self.assertEqual(getargs_es_hash('abc\xe9', 'latin1', buf), b'abc\xe9') - self.assertEqual(buf, bytearray(b'abc\xe9\x00')) - buf = bytearray(b'x'*4) - self.assertRaises(ValueError, getargs_es_hash, 'abc\xe9', 'latin1', buf) - self.assertEqual(buf, bytearray(b'x'*4)) - buf = bytearray() - self.assertRaises(ValueError, getargs_es_hash, 'abc\xe9', 'latin1', buf) - - def test_et_hash(self): - from _testcapi import getargs_et_hash - self.assertEqual(getargs_et_hash('abc\xe9'), b'abc\xc3\xa9') - self.assertEqual(getargs_et_hash('abc\xe9', 'latin1'), b'abc\xe9') - self.assertRaises(UnicodeEncodeError, getargs_et_hash, 'abc\xe9', 'ascii') - self.assertRaises(LookupError, getargs_et_hash, 'abc\xe9', 'spam') - self.assertEqual(getargs_et_hash(b'bytes', 'latin1'), b'bytes') - self.assertEqual(getargs_et_hash(bytearray(b'bytearray'), 'latin1'), b'bytearray') - self.assertRaises(TypeError, getargs_et_hash, memoryview(b'memoryview'), 'latin1') - self.assertRaises(TypeError, getargs_et_hash, None, 'latin1') - self.assertEqual(getargs_et_hash('nul:\0', 'latin1'), b'nul:\0') - self.assertEqual(getargs_et_hash(b'nul:\0', 'latin1'), b'nul:\0') - self.assertEqual(getargs_et_hash(bytearray(b'nul:\0'), 'latin1'), b'nul:\0') - - buf = bytearray(b'x'*8) - self.assertEqual(getargs_et_hash('abc\xe9', 'latin1', buf), b'abc\xe9') - self.assertEqual(buf, bytearray(b'abc\xe9\x00xxx')) - buf = bytearray(b'x'*5) - self.assertEqual(getargs_et_hash('abc\xe9', 'latin1', buf), b'abc\xe9') - self.assertEqual(buf, bytearray(b'abc\xe9\x00')) - buf = bytearray(b'x'*4) - self.assertRaises(ValueError, getargs_et_hash, 'abc\xe9', 'latin1', buf) - self.assertEqual(buf, bytearray(b'x'*4)) - buf = bytearray() - self.assertRaises(ValueError, getargs_et_hash, 'abc\xe9', 'latin1', buf) - - @support.requires_legacy_unicode_capi - def test_u(self): - from _testcapi import getargs_u - with self.assertWarns(DeprecationWarning): - self.assertEqual(getargs_u('abc\xe9'), 'abc\xe9') - with self.assertWarns(DeprecationWarning): - self.assertRaises(ValueError, getargs_u, 'nul:\0') - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_u, b'bytes') - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_u, bytearray(b'bytearray')) - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_u, memoryview(b'memoryview')) - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_u, None) - with warnings.catch_warnings(): - warnings.simplefilter('error', DeprecationWarning) - self.assertRaises(DeprecationWarning, getargs_u, 'abc\xe9') - - @support.requires_legacy_unicode_capi - def test_u_hash(self): - from _testcapi import getargs_u_hash - with self.assertWarns(DeprecationWarning): - self.assertEqual(getargs_u_hash('abc\xe9'), 'abc\xe9') - with self.assertWarns(DeprecationWarning): - self.assertEqual(getargs_u_hash('nul:\0'), 'nul:\0') - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_u_hash, b'bytes') - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_u_hash, bytearray(b'bytearray')) - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_u_hash, memoryview(b'memoryview')) - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_u_hash, None) - with warnings.catch_warnings(): - warnings.simplefilter('error', 
DeprecationWarning) - self.assertRaises(DeprecationWarning, getargs_u_hash, 'abc\xe9') - - @support.requires_legacy_unicode_capi - def test_Z(self): - from _testcapi import getargs_Z - with self.assertWarns(DeprecationWarning): - self.assertEqual(getargs_Z('abc\xe9'), 'abc\xe9') - with self.assertWarns(DeprecationWarning): - self.assertRaises(ValueError, getargs_Z, 'nul:\0') - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_Z, b'bytes') - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_Z, bytearray(b'bytearray')) - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_Z, memoryview(b'memoryview')) - with self.assertWarns(DeprecationWarning): - self.assertIsNone(getargs_Z(None)) - with warnings.catch_warnings(): - warnings.simplefilter('error', DeprecationWarning) - self.assertRaises(DeprecationWarning, getargs_Z, 'abc\xe9') - - @support.requires_legacy_unicode_capi - def test_Z_hash(self): - from _testcapi import getargs_Z_hash - with self.assertWarns(DeprecationWarning): - self.assertEqual(getargs_Z_hash('abc\xe9'), 'abc\xe9') - with self.assertWarns(DeprecationWarning): - self.assertEqual(getargs_Z_hash('nul:\0'), 'nul:\0') - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_Z_hash, b'bytes') - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_Z_hash, bytearray(b'bytearray')) - with self.assertWarns(DeprecationWarning): - self.assertRaises(TypeError, getargs_Z_hash, memoryview(b'memoryview')) - with self.assertWarns(DeprecationWarning): - self.assertIsNone(getargs_Z_hash(None)) - with warnings.catch_warnings(): - warnings.simplefilter('error', DeprecationWarning) - self.assertRaises(DeprecationWarning, getargs_Z_hash, 'abc\xe9') - - -class Object_TestCase(unittest.TestCase): - def test_S(self): - from _testcapi import getargs_S - obj = b'bytes' - self.assertIs(getargs_S(obj), obj) - self.assertRaises(TypeError, getargs_S, bytearray(b'bytearray')) - self.assertRaises(TypeError, getargs_S, 'str') - self.assertRaises(TypeError, getargs_S, None) - self.assertRaises(TypeError, getargs_S, memoryview(obj)) - - def test_Y(self): - from _testcapi import getargs_Y - obj = bytearray(b'bytearray') - self.assertIs(getargs_Y(obj), obj) - self.assertRaises(TypeError, getargs_Y, b'bytes') - self.assertRaises(TypeError, getargs_Y, 'str') - self.assertRaises(TypeError, getargs_Y, None) - self.assertRaises(TypeError, getargs_Y, memoryview(obj)) - - def test_U(self): - from _testcapi import getargs_U - obj = 'str' - self.assertIs(getargs_U(obj), obj) - self.assertRaises(TypeError, getargs_U, b'bytes') - self.assertRaises(TypeError, getargs_U, bytearray(b'bytearray')) - self.assertRaises(TypeError, getargs_U, None) - - -# Bug #6012 -class Test6012(unittest.TestCase): - def test(self): - self.assertEqual(_testcapi.argparsing("Hello", "World"), 1) - - -class SkipitemTest(unittest.TestCase): - - # u, and Z raises DeprecationWarning - @warnings_helper.ignore_warnings(category=DeprecationWarning) - def test_skipitem(self): - """ - If this test failed, you probably added a new "format unit" - in Python/getargs.c, but neglected to update our poor friend - skipitem() in the same file. (If so, shame on you!) 
- - With a few exceptions**, this function brute-force tests all - printable ASCII*** characters (32 to 126 inclusive) as format units, - checking to see that PyArg_ParseTupleAndKeywords() return consistent - errors both when the unit is attempted to be used and when it is - skipped. If the format unit doesn't exist, we'll get one of two - specific error messages (one for used, one for skipped); if it does - exist we *won't* get that error--we'll get either no error or some - other error. If we get the specific "does not exist" error for one - test and not for the other, there's a mismatch, and the test fails. - - ** Some format units have special funny semantics and it would - be difficult to accommodate them here. Since these are all - well-established and properly skipped in skipitem() we can - get away with not testing them--this test is really intended - to catch *new* format units. - - *** Python C source files must be ASCII. Therefore it's impossible - to have non-ASCII format units. - - """ - empty_tuple = () - tuple_1 = (0,) - dict_b = {'b':1} - keywords = ["a", "b"] - - for i in range(32, 127): - c = chr(i) - - # skip parentheses, the error reporting is inconsistent about them - # skip 'e', it's always a two-character code - # skip '|' and '$', they don't represent arguments anyway - if c in '()e|$': - continue - - # test the format unit when not skipped - format = c + "i" - try: - _testcapi.parse_tuple_and_keywords(tuple_1, dict_b, - format, keywords) - when_not_skipped = False - except SystemError as e: - s = "argument 1 (impossible)" - when_not_skipped = (str(e) == s) - except TypeError: - when_not_skipped = False - - # test the format unit when skipped - optional_format = "|" + format - try: - _testcapi.parse_tuple_and_keywords(empty_tuple, dict_b, - optional_format, keywords) - when_skipped = False - except SystemError as e: - s = "impossible: '{}'".format(format) - when_skipped = (str(e) == s) - - message = ("test_skipitem_parity: " - "detected mismatch between convertsimple and skipitem " - "for format unit '{}' ({}), not skipped {}, skipped {}".format( - c, i, when_skipped, when_not_skipped)) - self.assertIs(when_skipped, when_not_skipped, message) - - def test_skipitem_with_suffix(self): - parse = _testcapi.parse_tuple_and_keywords - empty_tuple = () - tuple_1 = (0,) - dict_b = {'b':1} - keywords = ["a", "b"] - - supported = ('s#', 's*', 'z#', 'z*', 'u#', 'Z#', 'y#', 'y*', 'w#', 'w*') - for c in string.ascii_letters: - for c2 in '#*': - f = c + c2 - with self.subTest(format=f): - optional_format = "|" + f + "i" - if f in supported: - parse(empty_tuple, dict_b, optional_format, keywords) - else: - with self.assertRaisesRegex(SystemError, - 'impossible'): - parse(empty_tuple, dict_b, optional_format, keywords) - - for c in map(chr, range(32, 128)): - f = 'e' + c - optional_format = "|" + f + "i" - with self.subTest(format=f): - if c in 'st': - parse(empty_tuple, dict_b, optional_format, keywords) - else: - with self.assertRaisesRegex(SystemError, - 'impossible'): - parse(empty_tuple, dict_b, optional_format, keywords) - - -class ParseTupleAndKeywords_Test(unittest.TestCase): - - def test_parse_tuple_and_keywords(self): - # Test handling errors in the parse_tuple_and_keywords helper itself - self.assertRaises(TypeError, _testcapi.parse_tuple_and_keywords, - (), {}, 42, []) - self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords, - (), {}, '', 42) - self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords, - (), {}, '', [''] * 42) - 
self.assertRaises(ValueError, _testcapi.parse_tuple_and_keywords, - (), {}, '', [42]) - - def test_bad_use(self): - # Test handling invalid format and keywords in - # PyArg_ParseTupleAndKeywords() - self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, - (1,), {}, '||O', ['a']) - self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, - (1, 2), {}, '|O|O', ['a', 'b']) - self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, - (), {'a': 1}, '$$O', ['a']) - self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, - (), {'a': 1, 'b': 2}, '$O$O', ['a', 'b']) - self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, - (), {'a': 1}, '$|O', ['a']) - self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, - (), {'a': 1, 'b': 2}, '$O|O', ['a', 'b']) - self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, - (1,), {}, '|O', ['a', 'b']) - self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, - (1,), {}, '|OO', ['a']) - self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, - (), {}, '|$O', ['']) - self.assertRaises(SystemError, _testcapi.parse_tuple_and_keywords, - (), {}, '|OO', ['a', '']) - - def test_positional_only(self): - parse = _testcapi.parse_tuple_and_keywords - - parse((1, 2, 3), {}, 'OOO', ['', '', 'a']) - parse((1, 2), {'a': 3}, 'OOO', ['', '', 'a']) - with self.assertRaisesRegex(TypeError, - r'function takes at least 2 positional arguments \(1 given\)'): - parse((1,), {'a': 3}, 'OOO', ['', '', 'a']) - parse((1,), {}, 'O|OO', ['', '', 'a']) - with self.assertRaisesRegex(TypeError, - r'function takes at least 1 positional argument \(0 given\)'): - parse((), {}, 'O|OO', ['', '', 'a']) - parse((1, 2), {'a': 3}, 'OO$O', ['', '', 'a']) - with self.assertRaisesRegex(TypeError, - r'function takes exactly 2 positional arguments \(1 given\)'): - parse((1,), {'a': 3}, 'OO$O', ['', '', 'a']) - parse((1,), {}, 'O|O$O', ['', '', 'a']) - with self.assertRaisesRegex(TypeError, - r'function takes at least 1 positional argument \(0 given\)'): - parse((), {}, 'O|O$O', ['', '', 'a']) - with self.assertRaisesRegex(SystemError, r'Empty parameter name after \$'): - parse((1,), {}, 'O|$OO', ['', '', 'a']) - with self.assertRaisesRegex(SystemError, 'Empty keyword'): - parse((1,), {}, 'O|OO', ['', 'a', '']) - - -class Test_testcapi(unittest.TestCase): - locals().update((name, getattr(_testcapi, name)) - for name in dir(_testcapi) - if name.startswith('test_') and name.endswith('_code')) - - @warnings_helper.ignore_warnings(category=DeprecationWarning) - def test_u_code(self): - _testcapi.test_u_code() - - @warnings_helper.ignore_warnings(category=DeprecationWarning) - def test_Z_code(self): - _testcapi.test_Z_code() - - -if __name__ == "__main__": - unittest.main() diff --git a/Lib/test/test_hashlib.py b/Lib/test/test_hashlib.py index 535f4aa3..9aa6c1f0 100644 --- a/Lib/test/test_hashlib.py +++ b/Lib/test/test_hashlib.py @@ -495,6 +495,15 @@ class HashLibTestCase(unittest.TestCase): def test_case_md5_uintmax(self, size): self.check('md5', b'A'*size, '28138d306ff1b8281f1a9067e1a1a2b3') + @unittest.skipIf(sys.maxsize < _4G - 1, 'test cannot run on 32-bit systems') + @bigmemtest(size=_4G - 1, memuse=1, dry_run=False) + def test_sha3_update_overflow(self, size): + """Regression test for gh-98517 CVE-2022-37454.""" + h = hashlib.sha3_224() + h.update(b'\x01') + h.update(b'\x01'*0xffff_ffff) + self.assertEqual(h.hexdigest(), '80762e8ce6700f114fec0f621fd97c4b9c00147fa052215294cceeed') + # use the three examples from 
Federal Information Processing Standards # Publication 180-1, Secure Hash Standard, 1995 April 17 # http://www.itl.nist.gov/div897/pubs/fip180-1.htm diff --git a/Lib/test/test_httpservers.py b/Lib/test/test_httpservers.py index 8fdbab4e..ac8da494 100644 --- a/Lib/test/test_httpservers.py +++ b/Lib/test/test_httpservers.py @@ -26,7 +26,7 @@ import time import datetime import threading from unittest import mock -from io import BytesIO +from io import BytesIO, StringIO import unittest from test import support @@ -984,6 +984,27 @@ class BaseHTTPRequestHandlerTestCase(unittest.TestCase): match = self.HTTPResponseMatch.search(response) self.assertIsNotNone(match) + def test_unprintable_not_logged(self): + # We call the method from the class directly as our Socketless + # Handler subclass overrode it... nice for everything BUT this test. + self.handler.client_address = ('127.0.0.1', 1337) + log_message = BaseHTTPRequestHandler.log_message + with mock.patch.object(sys, 'stderr', StringIO()) as fake_stderr: + log_message(self.handler, '/foo') + log_message(self.handler, '/\033bar\000\033') + log_message(self.handler, '/spam %s.', 'a') + log_message(self.handler, '/spam %s.', '\033\x7f\x9f\xa0beans') + log_message(self.handler, '"GET /foo\\b"ar\007 HTTP/1.0"') + stderr = fake_stderr.getvalue() + self.assertNotIn('\033', stderr) # non-printable chars are caught. + self.assertNotIn('\000', stderr) # non-printable chars are caught. + lines = stderr.splitlines() + self.assertIn('/foo', lines[0]) + self.assertIn(r'/\x1bbar\x00\x1b', lines[1]) + self.assertIn('/spam a.', lines[2]) + self.assertIn('/spam \\x1b\\x7f\\x9f\xa0beans.', lines[3]) + self.assertIn(r'"GET /foo\\b"ar\x07 HTTP/1.0"', lines[4]) + def test_http_1_1(self): result = self.send_typical_request(b'GET / HTTP/1.1\r\n\r\n') self.verify_http_server_response(result[0]) diff --git a/Lib/test/test_imp.py b/Lib/test/test_imp.py index 5abe28ef..21fec2ad 100644 --- a/Lib/test/test_imp.py +++ b/Lib/test/test_imp.py @@ -1,3 +1,4 @@ +import gc import importlib import importlib.util import os @@ -379,6 +380,35 @@ class ImportTests(unittest.TestCase): self.assertEqual(mod.x, 42) + @support.cpython_only + def test_create_builtin_subinterp(self): + # gh-99578: create_builtin() behavior changes after the creation of the + # first sub-interpreter. Test both code paths, before and after the + # creation of a sub-interpreter. Previously, create_builtin() had + # a reference leak after the creation of the first sub-interpreter. 
+ + import builtins + create_builtin = support.get_attribute(_imp, "create_builtin") + class Spec: + name = "builtins" + spec = Spec() + + def check_get_builtins(): + refcnt = sys.getrefcount(builtins) + mod = _imp.create_builtin(spec) + self.assertIs(mod, builtins) + self.assertEqual(sys.getrefcount(builtins), refcnt + 1) + # Check that a GC collection doesn't crash + gc.collect() + + check_get_builtins() + + ret = support.run_in_subinterp("import builtins") + self.assertEqual(ret, 0) + + check_get_builtins() + + class ReloadTests(unittest.TestCase): """Very basic tests to make sure that imp.reload() operates just like diff --git a/Lib/test/test_importlib/test_metadata_api.py b/Lib/test/test_importlib/test_metadata_api.py index 799f0070..ab482817 100644 --- a/Lib/test/test_importlib/test_metadata_api.py +++ b/Lib/test/test_importlib/test_metadata_api.py @@ -89,13 +89,15 @@ class APITests( self.assertIn(ep.dist.name, ('distinfo-pkg', 'egginfo-pkg')) self.assertEqual(ep.dist.version, "1.0.0") - def test_entry_points_unique_packages(self): - # Entry points should only be exposed for the first package - # on sys.path with a given name. + def test_entry_points_unique_packages_normalized(self): + """ + Entry points should only be exposed for the first package + on sys.path with a given name (even when normalized). + """ alt_site_dir = self.fixtures.enter_context(fixtures.tempdir()) self.fixtures.enter_context(self.add_sys_path(alt_site_dir)) alt_pkg = { - "distinfo_pkg-1.1.0.dist-info": { + "DistInfo_pkg-1.1.0.dist-info": { "METADATA": """ Name: distinfo-pkg Version: 1.1.0 diff --git a/Lib/test/test_importlib/util.py b/Lib/test/test_importlib/util.py index ca0d8c9b..b14ecb51 100644 --- a/Lib/test/test_importlib/util.py +++ b/Lib/test/test_importlib/util.py @@ -307,7 +307,7 @@ def writes_bytecode_files(fxn): """Decorator to protect sys.dont_write_bytecode from mutation and to skip tests that require it to be set to False.""" if sys.dont_write_bytecode: - return lambda *args, **kwargs: None + return unittest.skip("relies on writing bytecode")(fxn) @functools.wraps(fxn) def wrapper(*args, **kwargs): original = sys.dont_write_bytecode diff --git a/Lib/test/test_inspect.py b/Lib/test/test_inspect.py index 16fef5ca..03cb3bdd 100644 --- a/Lib/test/test_inspect.py +++ b/Lib/test/test_inspect.py @@ -2957,8 +2957,6 @@ class TestSignatureObject(unittest.TestCase): self.assertEqual(str(inspect.signature(foo)), '(a)') def test_signature_on_decorated(self): - import functools - def decorator(func): @functools.wraps(func) def wrapper(*args, **kwargs) -> int: @@ -2970,6 +2968,8 @@ class TestSignatureObject(unittest.TestCase): def bar(self, a, b): pass + bar = decorator(Foo().bar) + self.assertEqual(self.signature(Foo.bar), ((('self', ..., ..., "positional_or_keyword"), ('a', ..., ..., "positional_or_keyword"), @@ -2988,6 +2988,11 @@ class TestSignatureObject(unittest.TestCase): # from "func" to "wrapper", hence no # return_annotation + self.assertEqual(self.signature(bar), + ((('a', ..., ..., "positional_or_keyword"), + ('b', ..., ..., "positional_or_keyword")), + ...)) + # Test that we handle method wrappers correctly def decorator(func): @functools.wraps(func) diff --git a/Lib/test/test_io.py b/Lib/test/test_io.py index fb83762c..8dae85ac 100644 --- a/Lib/test/test_io.py +++ b/Lib/test/test_io.py @@ -880,6 +880,14 @@ class IOTest(unittest.TestCase): open('non-existent', 'r', opener=badopener) self.assertEqual(str(cm.exception), 'opener returned -2') + def test_opener_invalid_fd(self): + # Check that OSError 
is raised with error code EBADF if the + # opener returns an invalid file descriptor (see gh-82212). + fd = os_helper.make_bad_fd() + with self.assertRaises(OSError) as cm: + self.open('foo', opener=lambda name, flags: fd) + self.assertEqual(cm.exception.errno, errno.EBADF) + def test_fileio_closefd(self): # Issue #4841 with self.open(__file__, 'rb') as f1, \ @@ -3918,7 +3926,15 @@ class IncrementalNewlineDecoderTest(unittest.TestCase): self.assertEqual(decoder.decode(b"\r\r\n"), "\r\r\n") class CIncrementalNewlineDecoderTest(IncrementalNewlineDecoderTest): - pass + @support.cpython_only + def test_uninitialized(self): + uninitialized = self.IncrementalNewlineDecoder.__new__( + self.IncrementalNewlineDecoder) + self.assertRaises(ValueError, uninitialized.decode, b'bar') + self.assertRaises(ValueError, uninitialized.getstate) + self.assertRaises(ValueError, uninitialized.setstate, (b'foo', 0)) + self.assertRaises(ValueError, uninitialized.reset) + class PyIncrementalNewlineDecoderTest(IncrementalNewlineDecoderTest): pass diff --git a/Lib/test/test_itertools.py b/Lib/test/test_itertools.py index 4c9c597c..0ecc80d8 100644 --- a/Lib/test/test_itertools.py +++ b/Lib/test/test_itertools.py @@ -634,6 +634,7 @@ class TestBasicOps(unittest.TestCase): self.assertRaises(TypeError, cycle, 5) self.assertEqual(list(islice(cycle(gen3()),10)), [0,1,2,0,1,2,0,1,2,0]) + def test_cycle_copy_pickle(self): # check copy, deepcopy, pickle c = cycle('abc') self.assertEqual(next(c), 'a') @@ -669,6 +670,37 @@ class TestBasicOps(unittest.TestCase): d = pickle.loads(p) # rebuild the cycle object self.assertEqual(take(20, d), list('cdeabcdeabcdeabcdeab')) + def test_cycle_unpickle_compat(self): + testcases = [ + b'citertools\ncycle\n(c__builtin__\niter\n((lI1\naI2\naI3\natRI1\nbtR((lI1\naI0\ntb.', + b'citertools\ncycle\n(c__builtin__\niter\n(](K\x01K\x02K\x03etRK\x01btR(]K\x01aK\x00tb.', + b'\x80\x02citertools\ncycle\nc__builtin__\niter\n](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01aK\x00\x86b.', + b'\x80\x03citertools\ncycle\ncbuiltins\niter\n](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01aK\x00\x86b.', + b'\x80\x04\x95=\x00\x00\x00\x00\x00\x00\x00\x8c\titertools\x8c\x05cycle\x93\x8c\x08builtins\x8c\x04iter\x93](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01aK\x00\x86b.', + + b'citertools\ncycle\n(c__builtin__\niter\n((lp0\nI1\naI2\naI3\natRI1\nbtR(g0\nI1\ntb.', + b'citertools\ncycle\n(c__builtin__\niter\n(]q\x00(K\x01K\x02K\x03etRK\x01btR(h\x00K\x01tb.', + b'\x80\x02citertools\ncycle\nc__builtin__\niter\n]q\x00(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00K\x01\x86b.', + b'\x80\x03citertools\ncycle\ncbuiltins\niter\n]q\x00(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00K\x01\x86b.', + b'\x80\x04\x95<\x00\x00\x00\x00\x00\x00\x00\x8c\titertools\x8c\x05cycle\x93\x8c\x08builtins\x8c\x04iter\x93]\x94(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00K\x01\x86b.', + + b'citertools\ncycle\n(c__builtin__\niter\n((lI1\naI2\naI3\natRI1\nbtR((lI1\naI00\ntb.', + b'citertools\ncycle\n(c__builtin__\niter\n(](K\x01K\x02K\x03etRK\x01btR(]K\x01aI00\ntb.', + b'\x80\x02citertools\ncycle\nc__builtin__\niter\n](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01a\x89\x86b.', + b'\x80\x03citertools\ncycle\ncbuiltins\niter\n](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01a\x89\x86b.', + b'\x80\x04\x95<\x00\x00\x00\x00\x00\x00\x00\x8c\titertools\x8c\x05cycle\x93\x8c\x08builtins\x8c\x04iter\x93](K\x01K\x02K\x03e\x85RK\x01b\x85R]K\x01a\x89\x86b.', + + b'citertools\ncycle\n(c__builtin__\niter\n((lp0\nI1\naI2\naI3\natRI1\nbtR(g0\nI01\ntb.', + 
b'citertools\ncycle\n(c__builtin__\niter\n(]q\x00(K\x01K\x02K\x03etRK\x01btR(h\x00I01\ntb.', + b'\x80\x02citertools\ncycle\nc__builtin__\niter\n]q\x00(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00\x88\x86b.', + b'\x80\x03citertools\ncycle\ncbuiltins\niter\n]q\x00(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00\x88\x86b.', + b'\x80\x04\x95;\x00\x00\x00\x00\x00\x00\x00\x8c\titertools\x8c\x05cycle\x93\x8c\x08builtins\x8c\x04iter\x93]\x94(K\x01K\x02K\x03e\x85RK\x01b\x85Rh\x00\x88\x86b.', + ] + assert len(testcases) == 20 + for t in testcases: + it = pickle.loads(t) + self.assertEqual(take(10, it), [2, 3, 1, 2, 3, 1, 2, 3, 1, 2]) + def test_cycle_setstate(self): # Verify both modes for restoring state diff --git a/Lib/test/test_math.py b/Lib/test/test_math.py index e5f4e2bb..ada196a1 100644 --- a/Lib/test/test_math.py +++ b/Lib/test/test_math.py @@ -979,6 +979,11 @@ class MathTests(unittest.TestCase): self.assertEqual(math.dist(p, q), 5*scale) self.assertEqual(math.dist(q, p), 5*scale) + def test_math_dist_leak(self): + # gh-98897: Check for error handling does not leak memory + with self.assertRaises(ValueError): + math.dist([1, 2], [3, 4, 5]) + def testIsqrt(self): # Test a variety of inputs, large and small. test_values = ( diff --git a/Lib/test/test_pdb.py b/Lib/test/test_pdb.py index 6ac1a4a3..7f0bc719 100644 --- a/Lib/test/test_pdb.py +++ b/Lib/test/test_pdb.py @@ -1915,6 +1915,52 @@ def bœr(): self.assertEqual(stdout.split('\n')[6].rstrip('\r'), expected) + def test_gh_93696_frozen_list(self): + frozen_src = """ + def func(): + x = "Sentinel string for gh-93696" + print(x) + """ + host_program = """ + import os + import sys + + def _create_fake_frozen_module(): + with open('gh93696.py') as f: + src = f.read() + + # this function has a co_filename as if it were in a frozen module + dummy_mod = compile(src, "", "exec") + func_code = dummy_mod.co_consts[0] + + mod = type(sys)("gh93696") + mod.func = type(lambda: None)(func_code, mod.__dict__) + mod.__file__ = 'gh93696.py' + + return mod + + mod = _create_fake_frozen_module() + mod.func() + """ + commands = """ + break 20 + continue + step + list + quit + """ + with open('gh93696.py', 'w') as f: + f.write(textwrap.dedent(frozen_src)) + + with open('gh93696_host.py', 'w') as f: + f.write(textwrap.dedent(host_program)) + + self.addCleanup(os_helper.unlink, 'gh93696.py') + self.addCleanup(os_helper.unlink, 'gh93696_host.py') + stdout, stderr = self._run_pdb(["gh93696_host.py"], commands) + # verify that pdb found the source of the "frozen" function + self.assertIn('x = "Sentinel string for gh-93696"', stdout, "Sentinel statement not found") + class ChecklineTests(unittest.TestCase): def setUp(self): linecache.clearcache() # Pdb.checkline() uses linecache.getline() diff --git a/Lib/test/test_platform.py b/Lib/test/test_platform.py index 1a688775..2d411666 100644 --- a/Lib/test/test_platform.py +++ b/Lib/test/test_platform.py @@ -268,6 +268,14 @@ class PlatformTest(unittest.TestCase): self.assertEqual(res[:], expected) self.assertEqual(res[:5], expected[:5]) + def test_uname_fields(self): + self.assertIn('processor', platform.uname()._fields) + + def test_uname_asdict(self): + res = platform.uname()._asdict() + self.assertEqual(len(res), 6) + self.assertIn('processor', res) + @unittest.skipIf(sys.platform in ['win32', 'OpenVMS'], "uname -p not used") def test_uname_processor(self): """ diff --git a/Lib/test/test_re.py b/Lib/test/test_re.py index 62bfc3a7..010c52e7 100644 --- a/Lib/test/test_re.py +++ b/Lib/test/test_re.py @@ -578,6 +578,11 @@ class 
ReTests(unittest.TestCase): self.checkPatternError(r'()(?(2)a)', "invalid group reference 2", 5) + def test_re_groupref_exists_validation_bug(self): + for i in range(256): + with self.subTest(code=i): + re.compile(r'()(?(1)\x%02x?)' % i) + def test_re_groupref_overflow(self): from sre_constants import MAXGROUPS self.checkTemplateError('()', r'\g<%s>' % MAXGROUPS, 'xx', diff --git a/Lib/test/test_shutil.py b/Lib/test/test_shutil.py index 62e91803..0935b60d 100644 --- a/Lib/test/test_shutil.py +++ b/Lib/test/test_shutil.py @@ -731,18 +731,25 @@ class TestCopyTree(BaseTest, unittest.TestCase): @os_helper.skip_unless_symlink def test_copytree_dangling_symlinks(self): - # a dangling symlink raises an error at the end src_dir = self.mkdtemp() + valid_file = os.path.join(src_dir, 'test.txt') + write_file(valid_file, 'abc') + dir_a = os.path.join(src_dir, 'dir_a') + os.mkdir(dir_a) + for d in src_dir, dir_a: + os.symlink('IDONTEXIST', os.path.join(d, 'broken')) + os.symlink(valid_file, os.path.join(d, 'valid')) + + # A dangling symlink should raise an error. dst_dir = os.path.join(self.mkdtemp(), 'destination') - os.symlink('IDONTEXIST', os.path.join(src_dir, 'test.txt')) - os.mkdir(os.path.join(src_dir, 'test_dir')) - write_file((src_dir, 'test_dir', 'test.txt'), '456') self.assertRaises(Error, shutil.copytree, src_dir, dst_dir) - # a dangling symlink is ignored with the proper flag + # Dangling symlinks should be ignored with the proper flag. dst_dir = os.path.join(self.mkdtemp(), 'destination2') shutil.copytree(src_dir, dst_dir, ignore_dangling_symlinks=True) - self.assertNotIn('test.txt', os.listdir(dst_dir)) + for root, dirs, files in os.walk(dst_dir): + self.assertNotIn('broken', files) + self.assertIn('valid', files) # a dangling symlink is copied if symlinks=True dst_dir = os.path.join(self.mkdtemp(), 'destination3') diff --git a/Lib/test/test_statistics.py b/Lib/test/test_statistics.py index 2853b1b2..341abef8 100644 --- a/Lib/test/test_statistics.py +++ b/Lib/test/test_statistics.py @@ -2880,14 +2880,19 @@ class TestNormalDist: nd = NormalDist(100, 15) self.assertNotEqual(nd, lnd) - def test_pickle_and_copy(self): + def test_copy(self): nd = self.module.NormalDist(37.5, 5.625) nd1 = copy.copy(nd) self.assertEqual(nd, nd1) nd2 = copy.deepcopy(nd) self.assertEqual(nd, nd2) - nd3 = pickle.loads(pickle.dumps(nd)) - self.assertEqual(nd, nd3) + + def test_pickle(self): + nd = self.module.NormalDist(37.5, 5.625) + for proto in range(pickle.HIGHEST_PROTOCOL + 1): + with self.subTest(proto=proto): + pickled = pickle.loads(pickle.dumps(nd, protocol=proto)) + self.assertEqual(nd, pickled) def test_hashability(self): ND = self.module.NormalDist diff --git a/Lib/test/test_structmembers.py b/Lib/test/test_structmembers.py deleted file mode 100644 index 07d2f623..00000000 --- a/Lib/test/test_structmembers.py +++ /dev/null @@ -1,145 +0,0 @@ -import unittest -from test.support import import_helper -from test.support import warnings_helper - -# Skip this test if the _testcapi module isn't available. 
-import_helper.import_module('_testcapi') -from _testcapi import _test_structmembersType, \ - CHAR_MAX, CHAR_MIN, UCHAR_MAX, \ - SHRT_MAX, SHRT_MIN, USHRT_MAX, \ - INT_MAX, INT_MIN, UINT_MAX, \ - LONG_MAX, LONG_MIN, ULONG_MAX, \ - LLONG_MAX, LLONG_MIN, ULLONG_MAX, \ - PY_SSIZE_T_MAX, PY_SSIZE_T_MIN - -ts=_test_structmembersType(False, # T_BOOL - 1, # T_BYTE - 2, # T_UBYTE - 3, # T_SHORT - 4, # T_USHORT - 5, # T_INT - 6, # T_UINT - 7, # T_LONG - 8, # T_ULONG - 23, # T_PYSSIZET - 9.99999,# T_FLOAT - 10.1010101010, # T_DOUBLE - "hi" # T_STRING_INPLACE - ) - -class ReadWriteTests(unittest.TestCase): - - def test_bool(self): - ts.T_BOOL = True - self.assertEqual(ts.T_BOOL, True) - ts.T_BOOL = False - self.assertEqual(ts.T_BOOL, False) - self.assertRaises(TypeError, setattr, ts, 'T_BOOL', 1) - - def test_byte(self): - ts.T_BYTE = CHAR_MAX - self.assertEqual(ts.T_BYTE, CHAR_MAX) - ts.T_BYTE = CHAR_MIN - self.assertEqual(ts.T_BYTE, CHAR_MIN) - ts.T_UBYTE = UCHAR_MAX - self.assertEqual(ts.T_UBYTE, UCHAR_MAX) - - def test_short(self): - ts.T_SHORT = SHRT_MAX - self.assertEqual(ts.T_SHORT, SHRT_MAX) - ts.T_SHORT = SHRT_MIN - self.assertEqual(ts.T_SHORT, SHRT_MIN) - ts.T_USHORT = USHRT_MAX - self.assertEqual(ts.T_USHORT, USHRT_MAX) - - def test_int(self): - ts.T_INT = INT_MAX - self.assertEqual(ts.T_INT, INT_MAX) - ts.T_INT = INT_MIN - self.assertEqual(ts.T_INT, INT_MIN) - ts.T_UINT = UINT_MAX - self.assertEqual(ts.T_UINT, UINT_MAX) - - def test_long(self): - ts.T_LONG = LONG_MAX - self.assertEqual(ts.T_LONG, LONG_MAX) - ts.T_LONG = LONG_MIN - self.assertEqual(ts.T_LONG, LONG_MIN) - ts.T_ULONG = ULONG_MAX - self.assertEqual(ts.T_ULONG, ULONG_MAX) - - def test_py_ssize_t(self): - ts.T_PYSSIZET = PY_SSIZE_T_MAX - self.assertEqual(ts.T_PYSSIZET, PY_SSIZE_T_MAX) - ts.T_PYSSIZET = PY_SSIZE_T_MIN - self.assertEqual(ts.T_PYSSIZET, PY_SSIZE_T_MIN) - - @unittest.skipUnless(hasattr(ts, "T_LONGLONG"), "long long not present") - def test_longlong(self): - ts.T_LONGLONG = LLONG_MAX - self.assertEqual(ts.T_LONGLONG, LLONG_MAX) - ts.T_LONGLONG = LLONG_MIN - self.assertEqual(ts.T_LONGLONG, LLONG_MIN) - - ts.T_ULONGLONG = ULLONG_MAX - self.assertEqual(ts.T_ULONGLONG, ULLONG_MAX) - - ## make sure these will accept a plain int as well as a long - ts.T_LONGLONG = 3 - self.assertEqual(ts.T_LONGLONG, 3) - ts.T_ULONGLONG = 4 - self.assertEqual(ts.T_ULONGLONG, 4) - - def test_bad_assignments(self): - integer_attributes = [ - 'T_BOOL', - 'T_BYTE', 'T_UBYTE', - 'T_SHORT', 'T_USHORT', - 'T_INT', 'T_UINT', - 'T_LONG', 'T_ULONG', - 'T_PYSSIZET' - ] - if hasattr(ts, 'T_LONGLONG'): - integer_attributes.extend(['T_LONGLONG', 'T_ULONGLONG']) - - # issue8014: this produced 'bad argument to internal function' - # internal error - for nonint in None, 3.2j, "full of eels", {}, []: - for attr in integer_attributes: - self.assertRaises(TypeError, setattr, ts, attr, nonint) - - def test_inplace_string(self): - self.assertEqual(ts.T_STRING_INPLACE, "hi") - self.assertRaises(TypeError, setattr, ts, "T_STRING_INPLACE", "s") - self.assertRaises(TypeError, delattr, ts, "T_STRING_INPLACE") - - -class TestWarnings(unittest.TestCase): - - def test_byte_max(self): - with warnings_helper.check_warnings(('', RuntimeWarning)): - ts.T_BYTE = CHAR_MAX+1 - - def test_byte_min(self): - with warnings_helper.check_warnings(('', RuntimeWarning)): - ts.T_BYTE = CHAR_MIN-1 - - def test_ubyte_max(self): - with warnings_helper.check_warnings(('', RuntimeWarning)): - ts.T_UBYTE = UCHAR_MAX+1 - - def test_short_max(self): - with warnings_helper.check_warnings(('', 
RuntimeWarning)): - ts.T_SHORT = SHRT_MAX+1 - - def test_short_min(self): - with warnings_helper.check_warnings(('', RuntimeWarning)): - ts.T_SHORT = SHRT_MIN-1 - - def test_ushort_max(self): - with warnings_helper.check_warnings(('', RuntimeWarning)): - ts.T_USHORT = USHRT_MAX+1 - - -if __name__ == "__main__": - unittest.main() diff --git a/Lib/test/test_tokenize.py b/Lib/test/test_tokenize.py index 127f0a17..c55dff62 100644 --- a/Lib/test/test_tokenize.py +++ b/Lib/test/test_tokenize.py @@ -10,6 +10,8 @@ from textwrap import dedent from unittest import TestCase, mock from test.test_grammar import (VALID_UNDERSCORE_LITERALS, INVALID_UNDERSCORE_LITERALS) +from test.support import os_helper +from test.support.script_helper import run_test_script, make_script import os import token @@ -1654,5 +1656,19 @@ class TestRoundtrip(TestCase): self.check_roundtrip(code) +class CTokenizerBufferTests(unittest.TestCase): + def test_newline_at_the_end_of_buffer(self): + # See issue 99581: Make sure that if we need to add a new line at the + # end of the buffer, we have enough space in the buffer, specially when + # the current line is as long as the buffer space available. + test_script = f"""\ + #coding: latin-1 + #{"a"*10000} + #{"a"*10002}""" + with os_helper.temp_dir() as temp_dir: + file_name = make_script(temp_dir, 'foo', test_script) + run_test_script(file_name) + + if __name__ == "__main__": unittest.main() diff --git a/Lib/test/test_tools/test_i18n.py b/Lib/test/test_tools/test_i18n.py index 12f778db..985e2262 100644 --- a/Lib/test/test_tools/test_i18n.py +++ b/Lib/test/test_tools/test_i18n.py @@ -155,6 +155,26 @@ class Test_pygettext(unittest.TestCase): ''')) self.assertFalse([msgid for msgid in msgids if 'doc' in msgid]) + def test_moduledocstring(self): + for doc in ('"""doc"""', "r'''doc'''", "R'doc'", 'u"doc"'): + with self.subTest(doc): + msgids = self.extract_docstrings_from_str(dedent('''\ + %s + ''' % doc)) + self.assertIn('doc', msgids) + + def test_moduledocstring_bytes(self): + msgids = self.extract_docstrings_from_str(dedent('''\ + b"""doc""" + ''')) + self.assertFalse([msgid for msgid in msgids if 'doc' in msgid]) + + def test_moduledocstring_fstring(self): + msgids = self.extract_docstrings_from_str(dedent('''\ + f"""doc""" + ''')) + self.assertFalse([msgid for msgid in msgids if 'doc' in msgid]) + def test_msgid(self): msgids = self.extract_docstrings_from_str( '''_("""doc""" r'str' u"ing")''') diff --git a/Lib/test/test_trace.py b/Lib/test/test_trace.py index 75478557..3b3b2912 100644 --- a/Lib/test/test_trace.py +++ b/Lib/test/test_trace.py @@ -1,4 +1,5 @@ import os +from pickle import dump import sys from test.support import captured_stdout from test.support.os_helper import (TESTFN, rmtree, unlink) @@ -407,6 +408,15 @@ class TestCoverage(unittest.TestCase): self.assertIn(modname, coverage) self.assertEqual(coverage[modname], (5, 100)) + def test_coverageresults_update(self): + # Update empty CoverageResults with a non-empty infile. + infile = TESTFN + '-infile' + with open(infile, 'wb') as f: + dump(({}, {}, {'caller': 1}), f, protocol=1) + self.addCleanup(unlink, infile) + results = trace.CoverageResults({}, {}, infile, {}) + self.assertEqual(results.callers, {'caller': 1}) + ### Tests that don't mess with sys.settrace and can be traced ### themselves TODO: Skip tests that do mess with sys.settrace when ### regrtest is invoked with -T option. 
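The new test_coverageresults_update test above exercises the companion fix to Lib/trace.py later in this patch: CoverageResults appears to take (counts, calledfuncs, infile, callers, outfile), so the old call self.__class__(counts, calledfuncs, callers) fed the unpickled callers dict into the infile slot and silently dropped it, while the patched call passes callers=callers. A minimal sketch of that round-trip, assuming that keyword signature and mirroring the test's infile format:

    import os
    import pickle
    import tempfile
    import trace

    # Write an "infile" in the format trace expects: (counts, calledfuncs, callers).
    with tempfile.NamedTemporaryFile(delete=False) as f:
        pickle.dump(({}, {}, {'caller': 1}), f, protocol=1)

    # With the callers=callers fix, the callers mapping loaded from the infile
    # is merged into the fresh (empty) CoverageResults instead of being lost.
    results = trace.CoverageResults({}, {}, f.name, {})
    assert results.callers == {'caller': 1}
    os.unlink(f.name)
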
diff --git a/Lib/test/test_typing.py b/Lib/test/test_typing.py index 3f38e308..7e033c02 100644 --- a/Lib/test/test_typing.py +++ b/Lib/test/test_typing.py @@ -1560,7 +1560,7 @@ class ProtocolTests(BaseTestCase): self.assertEqual(x.bar, 'abc') self.assertEqual(x.x, 1) self.assertEqual(x.__dict__, {'foo': 42, 'bar': 'abc'}) - s = pickle.dumps(P) + s = pickle.dumps(P, proto) D = pickle.loads(s) class E: @@ -2405,11 +2405,11 @@ class GenericTests(BaseTestCase): self.assertEqual(D.__parameters__, ()) - with self.assertRaises(Exception): + with self.assertRaises(TypeError): D[int] - with self.assertRaises(Exception): + with self.assertRaises(TypeError): D[Any] - with self.assertRaises(Exception): + with self.assertRaises(TypeError): D[T] def test_new_with_args(self): @@ -2498,6 +2498,61 @@ class GenericTests(BaseTestCase): class Foo(obj): pass + def test_complex_subclasses(self): + T_co = TypeVar("T_co", covariant=True) + + class Base(Generic[T_co]): + ... + + T = TypeVar("T") + + # see gh-94607: this fails in that bug + class Sub(Base, Generic[T]): + ... + + def test_parameter_detection(self): + self.assertEqual(List[T].__parameters__, (T,)) + self.assertEqual(List[List[T]].__parameters__, (T,)) + class A: + __parameters__ = (T,) + # Bare classes should be skipped + for a in (List, list): + for b in (int, TypeVar, ParamSpec, types.GenericAlias, types.UnionType): + with self.subTest(generic=a, sub=b): + with self.assertRaisesRegex(TypeError, + '.* is not a generic class|' + 'no type variables left'): + a[b][str] + # Duck-typing anything that looks like it has __parameters__. + # C version of GenericAlias + self.assertEqual(list[A()].__parameters__, (T,)) + + def test_non_generic_subscript(self): + T = TypeVar('T') + class G(Generic[T]): + pass + + for s in (int, G, List, list, + TypeVar, ParamSpec, + types.GenericAlias, types.UnionType): + + for t in Tuple, tuple: + with self.subTest(tuple=t, sub=s): + self.assertEqual(t[s, T][int], t[s, int]) + self.assertEqual(t[T, s][int], t[int, s]) + a = t[s] + with self.assertRaises(TypeError): + a[int] + + for c in Callable, collections.abc.Callable: + with self.subTest(callable=c, sub=s): + self.assertEqual(c[[s], T][int], c[[s], int]) + self.assertEqual(c[[T], s][int], c[[int], s]) + a = c[[s], s] + with self.assertRaises(TypeError): + a[int] + + class ClassVarTests(BaseTestCase): def test_basics(self): diff --git a/Lib/test/test_unicode.py b/Lib/test/test_unicode.py index f5ce095d..f6a1651e 100644 --- a/Lib/test/test_unicode.py +++ b/Lib/test/test_unicode.py @@ -15,7 +15,6 @@ import textwrap import unicodedata import unittest import warnings -from test.support import import_helper from test.support import warnings_helper from test import support, string_tests from test.support.script_helper import assert_python_failure @@ -221,6 +220,20 @@ class UnicodeTest(string_tests.CommonTest, self.checkequalnofix(9, 'abcdefghiabc', 'find', 'abc', 1) self.checkequalnofix(-1, 'abcdefghiabc', 'find', 'def', 4) + # test utf-8 non-ascii char + self.checkequal(0, 'тест', 'find', 'т') + self.checkequal(3, 'тест', 'find', 'т', 1) + self.checkequal(-1, 'тест', 'find', 'т', 1, 3) + self.checkequal(-1, 'тест', 'find', 'e') # english `e` + # test utf-8 non-ascii slice + self.checkequal(1, 'тест тест', 'find', 'ес') + self.checkequal(1, 'тест тест', 'find', 'ес', 1) + self.checkequal(1, 'тест тест', 'find', 'ес', 1, 3) + self.checkequal(6, 'тест тест', 'find', 'ес', 2) + self.checkequal(-1, 'тест тест', 'find', 'ес', 6, 7) + self.checkequal(-1, 'тест тест', 'find', 'ес', 7) + 
self.checkequal(-1, 'тест тест', 'find', 'ec') # english `ec` + self.assertRaises(TypeError, 'hello'.find) self.assertRaises(TypeError, 'hello'.find, 42) # test mixed kinds @@ -251,6 +264,19 @@ class UnicodeTest(string_tests.CommonTest, self.checkequalnofix(9, 'abcdefghiabc', 'rfind', 'abc') self.checkequalnofix(12, 'abcdefghiabc', 'rfind', '') self.checkequalnofix(12, 'abcdefghiabc', 'rfind', '') + # test utf-8 non-ascii char + self.checkequal(1, 'тест', 'rfind', 'е') + self.checkequal(1, 'тест', 'rfind', 'е', 1) + self.checkequal(-1, 'тест', 'rfind', 'е', 2) + self.checkequal(-1, 'тест', 'rfind', 'e') # english `e` + # test utf-8 non-ascii slice + self.checkequal(6, 'тест тест', 'rfind', 'ес') + self.checkequal(6, 'тест тест', 'rfind', 'ес', 1) + self.checkequal(1, 'тест тест', 'rfind', 'ес', 1, 3) + self.checkequal(6, 'тест тест', 'rfind', 'ес', 2) + self.checkequal(-1, 'тест тест', 'rfind', 'ес', 6, 7) + self.checkequal(-1, 'тест тест', 'rfind', 'ес', 7) + self.checkequal(-1, 'тест тест', 'rfind', 'ec') # english `ec` # test mixed kinds self.checkequal(0, 'a' + '\u0102' * 100, 'rfind', 'a') self.checkequal(0, 'a' + '\U00100304' * 100, 'rfind', 'a') @@ -2543,492 +2569,6 @@ class UnicodeTest(string_tests.CommonTest, self.assertEqual(proc.rc, 10, proc) -class CAPITest(unittest.TestCase): - - # Test PyUnicode_FromFormat() - def test_from_format(self): - import_helper.import_module('ctypes') - from ctypes import ( - c_char_p, - pythonapi, py_object, sizeof, - c_int, c_long, c_longlong, c_ssize_t, - c_uint, c_ulong, c_ulonglong, c_size_t, c_void_p) - name = "PyUnicode_FromFormat" - _PyUnicode_FromFormat = getattr(pythonapi, name) - _PyUnicode_FromFormat.argtypes = (c_char_p,) - _PyUnicode_FromFormat.restype = py_object - - def PyUnicode_FromFormat(format, *args): - cargs = tuple( - py_object(arg) if isinstance(arg, str) else arg - for arg in args) - return _PyUnicode_FromFormat(format, *cargs) - - def check_format(expected, format, *args): - text = PyUnicode_FromFormat(format, *args) - self.assertEqual(expected, text) - - # ascii format, non-ascii argument - check_format('ascii\x7f=unicode\xe9', - b'ascii\x7f=%U', 'unicode\xe9') - - # non-ascii format, ascii argument: ensure that PyUnicode_FromFormatV() - # raises an error - self.assertRaisesRegex(ValueError, - r'^PyUnicode_FromFormatV\(\) expects an ASCII-encoded format ' - 'string, got a non-ASCII byte: 0xe9$', - PyUnicode_FromFormat, b'unicode\xe9=%s', 'ascii') - - # test "%c" - check_format('\uabcd', - b'%c', c_int(0xabcd)) - check_format('\U0010ffff', - b'%c', c_int(0x10ffff)) - with self.assertRaises(OverflowError): - PyUnicode_FromFormat(b'%c', c_int(0x110000)) - # Issue #18183 - check_format('\U00010000\U00100000', - b'%c%c', c_int(0x10000), c_int(0x100000)) - - # test "%" - check_format('%', - b'%') - check_format('%', - b'%%') - check_format('%s', - b'%%s') - check_format('[%]', - b'[%%]') - check_format('%abc', - b'%%%s', b'abc') - - # truncated string - check_format('abc', - b'%.3s', b'abcdef') - check_format('abc[\ufffd', - b'%.5s', 'abc[\u20ac]'.encode('utf8')) - check_format("'\\u20acABC'", - b'%A', '\u20acABC') - check_format("'\\u20", - b'%.5A', '\u20acABCDEF') - check_format("'\u20acABC'", - b'%R', '\u20acABC') - check_format("'\u20acA", - b'%.3R', '\u20acABCDEF') - check_format('\u20acAB', - b'%.3S', '\u20acABCDEF') - check_format('\u20acAB', - b'%.3U', '\u20acABCDEF') - check_format('\u20acAB', - b'%.3V', '\u20acABCDEF', None) - check_format('abc[\ufffd', - b'%.5V', None, 'abc[\u20ac]'.encode('utf8')) - - # following 
tests comes from #7330 - # test width modifier and precision modifier with %S - check_format("repr= abc", - b'repr=%5S', 'abc') - check_format("repr=ab", - b'repr=%.2S', 'abc') - check_format("repr= ab", - b'repr=%5.2S', 'abc') - - # test width modifier and precision modifier with %R - check_format("repr= 'abc'", - b'repr=%8R', 'abc') - check_format("repr='ab", - b'repr=%.3R', 'abc') - check_format("repr= 'ab", - b'repr=%5.3R', 'abc') - - # test width modifier and precision modifier with %A - check_format("repr= 'abc'", - b'repr=%8A', 'abc') - check_format("repr='ab", - b'repr=%.3A', 'abc') - check_format("repr= 'ab", - b'repr=%5.3A', 'abc') - - # test width modifier and precision modifier with %s - check_format("repr= abc", - b'repr=%5s', b'abc') - check_format("repr=ab", - b'repr=%.2s', b'abc') - check_format("repr= ab", - b'repr=%5.2s', b'abc') - - # test width modifier and precision modifier with %U - check_format("repr= abc", - b'repr=%5U', 'abc') - check_format("repr=ab", - b'repr=%.2U', 'abc') - check_format("repr= ab", - b'repr=%5.2U', 'abc') - - # test width modifier and precision modifier with %V - check_format("repr= abc", - b'repr=%5V', 'abc', b'123') - check_format("repr=ab", - b'repr=%.2V', 'abc', b'123') - check_format("repr= ab", - b'repr=%5.2V', 'abc', b'123') - check_format("repr= 123", - b'repr=%5V', None, b'123') - check_format("repr=12", - b'repr=%.2V', None, b'123') - check_format("repr= 12", - b'repr=%5.2V', None, b'123') - - # test integer formats (%i, %d, %u) - check_format('010', - b'%03i', c_int(10)) - check_format('0010', - b'%0.4i', c_int(10)) - check_format('-123', - b'%i', c_int(-123)) - check_format('-123', - b'%li', c_long(-123)) - check_format('-123', - b'%lli', c_longlong(-123)) - check_format('-123', - b'%zi', c_ssize_t(-123)) - - check_format('-123', - b'%d', c_int(-123)) - check_format('-123', - b'%ld', c_long(-123)) - check_format('-123', - b'%lld', c_longlong(-123)) - check_format('-123', - b'%zd', c_ssize_t(-123)) - - check_format('123', - b'%u', c_uint(123)) - check_format('123', - b'%lu', c_ulong(123)) - check_format('123', - b'%llu', c_ulonglong(123)) - check_format('123', - b'%zu', c_size_t(123)) - - # test long output - min_longlong = -(2 ** (8 * sizeof(c_longlong) - 1)) - max_longlong = -min_longlong - 1 - check_format(str(min_longlong), - b'%lld', c_longlong(min_longlong)) - check_format(str(max_longlong), - b'%lld', c_longlong(max_longlong)) - max_ulonglong = 2 ** (8 * sizeof(c_ulonglong)) - 1 - check_format(str(max_ulonglong), - b'%llu', c_ulonglong(max_ulonglong)) - PyUnicode_FromFormat(b'%p', c_void_p(-1)) - - # test padding (width and/or precision) - check_format('123'.rjust(10, '0'), - b'%010i', c_int(123)) - check_format('123'.rjust(100), - b'%100i', c_int(123)) - check_format('123'.rjust(100, '0'), - b'%.100i', c_int(123)) - check_format('123'.rjust(80, '0').rjust(100), - b'%100.80i', c_int(123)) - - check_format('123'.rjust(10, '0'), - b'%010u', c_uint(123)) - check_format('123'.rjust(100), - b'%100u', c_uint(123)) - check_format('123'.rjust(100, '0'), - b'%.100u', c_uint(123)) - check_format('123'.rjust(80, '0').rjust(100), - b'%100.80u', c_uint(123)) - - check_format('123'.rjust(10, '0'), - b'%010x', c_int(0x123)) - check_format('123'.rjust(100), - b'%100x', c_int(0x123)) - check_format('123'.rjust(100, '0'), - b'%.100x', c_int(0x123)) - check_format('123'.rjust(80, '0').rjust(100), - b'%100.80x', c_int(0x123)) - - # test %A - check_format(r"%A:'abc\xe9\uabcd\U0010ffff'", - b'%%A:%A', 'abc\xe9\uabcd\U0010ffff') - - # test %V - 
check_format('repr=abc', - b'repr=%V', 'abc', b'xyz') - - # test %p - # We cannot test the exact result, - # because it returns a hex representation of a C pointer, - # which is going to be different each time. But, we can test the format. - p_format_regex = r'^0x[a-zA-Z0-9]{3,}$' - p_format1 = PyUnicode_FromFormat(b'%p', 'abc') - self.assertIsInstance(p_format1, str) - self.assertRegex(p_format1, p_format_regex) - - p_format2 = PyUnicode_FromFormat(b'%p %p', '123456', b'xyz') - self.assertIsInstance(p_format2, str) - self.assertRegex(p_format2, - r'0x[a-zA-Z0-9]{3,} 0x[a-zA-Z0-9]{3,}') - - # Extra args are ignored: - p_format3 = PyUnicode_FromFormat(b'%p', '123456', None, b'xyz') - self.assertIsInstance(p_format3, str) - self.assertRegex(p_format3, p_format_regex) - - # Test string decode from parameter of %s using utf-8. - # b'\xe4\xba\xba\xe6\xb0\x91' is utf-8 encoded byte sequence of - # '\u4eba\u6c11' - check_format('repr=\u4eba\u6c11', - b'repr=%V', None, b'\xe4\xba\xba\xe6\xb0\x91') - - #Test replace error handler. - check_format('repr=abc\ufffd', - b'repr=%V', None, b'abc\xff') - - # not supported: copy the raw format string. these tests are just here - # to check for crashes and should not be considered as specifications - check_format('%s', - b'%1%s', b'abc') - check_format('%1abc', - b'%1abc') - check_format('%+i', - b'%+i', c_int(10)) - check_format('%.%s', - b'%.%s', b'abc') - - # Issue #33817: empty strings - check_format('', - b'') - check_format('', - b'%s', b'') - - # Test PyUnicode_AsWideChar() - @support.cpython_only - def test_aswidechar(self): - from _testcapi import unicode_aswidechar - import_helper.import_module('ctypes') - from ctypes import c_wchar, sizeof - - wchar, size = unicode_aswidechar('abcdef', 2) - self.assertEqual(size, 2) - self.assertEqual(wchar, 'ab') - - wchar, size = unicode_aswidechar('abc', 3) - self.assertEqual(size, 3) - self.assertEqual(wchar, 'abc') - - wchar, size = unicode_aswidechar('abc', 4) - self.assertEqual(size, 3) - self.assertEqual(wchar, 'abc\0') - - wchar, size = unicode_aswidechar('abc', 10) - self.assertEqual(size, 3) - self.assertEqual(wchar, 'abc\0') - - wchar, size = unicode_aswidechar('abc\0def', 20) - self.assertEqual(size, 7) - self.assertEqual(wchar, 'abc\0def\0') - - nonbmp = chr(0x10ffff) - if sizeof(c_wchar) == 2: - buflen = 3 - nchar = 2 - else: # sizeof(c_wchar) == 4 - buflen = 2 - nchar = 1 - wchar, size = unicode_aswidechar(nonbmp, buflen) - self.assertEqual(size, nchar) - self.assertEqual(wchar, nonbmp + '\0') - - # Test PyUnicode_AsWideCharString() - @support.cpython_only - def test_aswidecharstring(self): - from _testcapi import unicode_aswidecharstring - import_helper.import_module('ctypes') - from ctypes import c_wchar, sizeof - - wchar, size = unicode_aswidecharstring('abc') - self.assertEqual(size, 3) - self.assertEqual(wchar, 'abc\0') - - wchar, size = unicode_aswidecharstring('abc\0def') - self.assertEqual(size, 7) - self.assertEqual(wchar, 'abc\0def\0') - - nonbmp = chr(0x10ffff) - if sizeof(c_wchar) == 2: - nchar = 2 - else: # sizeof(c_wchar) == 4 - nchar = 1 - wchar, size = unicode_aswidecharstring(nonbmp) - self.assertEqual(size, nchar) - self.assertEqual(wchar, nonbmp + '\0') - - # Test PyUnicode_AsUCS4() - @support.cpython_only - def test_asucs4(self): - from _testcapi import unicode_asucs4 - for s in ['abc', '\xa1\xa2', '\u4f60\u597d', 'a\U0001f600', - 'a\ud800b\udfffc', '\ud834\udd1e']: - l = len(s) - self.assertEqual(unicode_asucs4(s, l, True), s+'\0') - self.assertEqual(unicode_asucs4(s, l, 
False), s+'\uffff') - self.assertEqual(unicode_asucs4(s, l+1, True), s+'\0\uffff') - self.assertEqual(unicode_asucs4(s, l+1, False), s+'\0\uffff') - self.assertRaises(SystemError, unicode_asucs4, s, l-1, True) - self.assertRaises(SystemError, unicode_asucs4, s, l-2, False) - s = '\0'.join([s, s]) - self.assertEqual(unicode_asucs4(s, len(s), True), s+'\0') - self.assertEqual(unicode_asucs4(s, len(s), False), s+'\uffff') - - # Test PyUnicode_AsUTF8() - @support.cpython_only - def test_asutf8(self): - from _testcapi import unicode_asutf8 - - bmp = '\u0100' - bmp2 = '\uffff' - nonbmp = chr(0x10ffff) - - self.assertEqual(unicode_asutf8(bmp), b'\xc4\x80') - self.assertEqual(unicode_asutf8(bmp2), b'\xef\xbf\xbf') - self.assertEqual(unicode_asutf8(nonbmp), b'\xf4\x8f\xbf\xbf') - self.assertRaises(UnicodeEncodeError, unicode_asutf8, 'a\ud800b\udfffc') - - # Test PyUnicode_AsUTF8AndSize() - @support.cpython_only - def test_asutf8andsize(self): - from _testcapi import unicode_asutf8andsize - - bmp = '\u0100' - bmp2 = '\uffff' - nonbmp = chr(0x10ffff) - - self.assertEqual(unicode_asutf8andsize(bmp), (b'\xc4\x80', 2)) - self.assertEqual(unicode_asutf8andsize(bmp2), (b'\xef\xbf\xbf', 3)) - self.assertEqual(unicode_asutf8andsize(nonbmp), (b'\xf4\x8f\xbf\xbf', 4)) - self.assertRaises(UnicodeEncodeError, unicode_asutf8andsize, 'a\ud800b\udfffc') - - # Test PyUnicode_FindChar() - @support.cpython_only - def test_findchar(self): - from _testcapi import unicode_findchar - - for str in "\xa1", "\u8000\u8080", "\ud800\udc02", "\U0001f100\U0001f1f1": - for i, ch in enumerate(str): - self.assertEqual(unicode_findchar(str, ord(ch), 0, len(str), 1), i) - self.assertEqual(unicode_findchar(str, ord(ch), 0, len(str), -1), i) - - str = "!>_= end - self.assertEqual(unicode_findchar(str, ord('!'), 0, 0, 1), -1) - self.assertEqual(unicode_findchar(str, ord('!'), len(str), 0, 1), -1) - # negative - self.assertEqual(unicode_findchar(str, ord('!'), -len(str), -1, 1), 0) - self.assertEqual(unicode_findchar(str, ord('!'), -len(str), -1, -1), 0) - - # Test PyUnicode_CopyCharacters() - @support.cpython_only - def test_copycharacters(self): - from _testcapi import unicode_copycharacters - - strings = [ - 'abcde', '\xa1\xa2\xa3\xa4\xa5', - '\u4f60\u597d\u4e16\u754c\uff01', - '\U0001f600\U0001f601\U0001f602\U0001f603\U0001f604' - ] - - for idx, from_ in enumerate(strings): - # wide -> narrow: exceed maxchar limitation - for to in strings[:idx]: - self.assertRaises( - SystemError, - unicode_copycharacters, to, 0, from_, 0, 5 - ) - # same kind - for from_start in range(5): - self.assertEqual( - unicode_copycharacters(from_, 0, from_, from_start, 5), - (from_[from_start:from_start+5].ljust(5, '\0'), - 5-from_start) - ) - for to_start in range(5): - self.assertEqual( - unicode_copycharacters(from_, to_start, from_, to_start, 5), - (from_[to_start:to_start+5].rjust(5, '\0'), - 5-to_start) - ) - # narrow -> wide - # Tests omitted since this creates invalid strings. 
- - s = strings[0] - self.assertRaises(IndexError, unicode_copycharacters, s, 6, s, 0, 5) - self.assertRaises(IndexError, unicode_copycharacters, s, -1, s, 0, 5) - self.assertRaises(IndexError, unicode_copycharacters, s, 0, s, 6, 5) - self.assertRaises(IndexError, unicode_copycharacters, s, 0, s, -1, 5) - self.assertRaises(SystemError, unicode_copycharacters, s, 1, s, 0, 5) - self.assertRaises(SystemError, unicode_copycharacters, s, 0, s, 0, -1) - self.assertRaises(SystemError, unicode_copycharacters, s, 0, b'', 0, 0) - - @support.cpython_only - @support.requires_legacy_unicode_capi - def test_encode_decimal(self): - from _testcapi import unicode_encodedecimal - with warnings_helper.check_warnings(): - warnings.simplefilter('ignore', DeprecationWarning) - self.assertEqual(unicode_encodedecimal('123'), - b'123') - self.assertEqual(unicode_encodedecimal('\u0663.\u0661\u0664'), - b'3.14') - self.assertEqual(unicode_encodedecimal( - "\N{EM SPACE}3.14\N{EN SPACE}"), b' 3.14 ') - self.assertRaises(UnicodeEncodeError, - unicode_encodedecimal, "123\u20ac", "strict") - self.assertRaisesRegex( - ValueError, - "^'decimal' codec can't encode character", - unicode_encodedecimal, "123\u20ac", "replace") - - @support.cpython_only - @support.requires_legacy_unicode_capi - def test_transform_decimal(self): - from _testcapi import unicode_transformdecimaltoascii as transform_decimal - with warnings_helper.check_warnings(): - warnings.simplefilter('ignore', DeprecationWarning) - self.assertEqual(transform_decimal('123'), - '123') - self.assertEqual(transform_decimal('\u0663.\u0661\u0664'), - '3.14') - self.assertEqual(transform_decimal("\N{EM SPACE}3.14\N{EN SPACE}"), - "\N{EM SPACE}3.14\N{EN SPACE}") - self.assertEqual(transform_decimal('123\u20ac'), - '123\u20ac') - - @support.cpython_only - def test_pep393_utf8_caching_bug(self): - # Issue #25709: Problem with string concatenation and utf-8 cache - from _testcapi import getargs_s_hash - for k in 0x24, 0xa4, 0x20ac, 0x1f40d: - s = '' - for i in range(5): - # Due to CPython specific optimization the 's' string can be - # resized in-place. - s += chr(k) - # Parsing with the "s#" format code calls indirectly - # PyUnicode_AsUTF8AndSize() which creates the UTF-8 - # encoded string cached in the Unicode object. 
- self.assertEqual(getargs_s_hash(s), chr(k).encode() * (i + 1)) - # Check that the second call returns the same result - self.assertEqual(getargs_s_hash(s), chr(k).encode() * (i + 1)) - class StringModuleTest(unittest.TestCase): def test_formatter_parser(self): def parse(format): diff --git a/Lib/test/test_unicodedata.py b/Lib/test/test_unicodedata.py index 2a93f0fa..ad9d3d6a 100644 --- a/Lib/test/test_unicodedata.py +++ b/Lib/test/test_unicodedata.py @@ -12,7 +12,8 @@ import sys import unicodedata import unittest from test.support import (open_urlresource, requires_resource, script_helper, - cpython_only, check_disallow_instantiation) + cpython_only, check_disallow_instantiation, + ResourceDenied) class UnicodeMethodsTest(unittest.TestCase): @@ -345,8 +346,8 @@ class NormalizationTest(unittest.TestCase): except PermissionError: self.skipTest(f"Permission error when downloading {TESTDATAURL} " f"into the test data directory") - except (OSError, HTTPException): - self.fail(f"Could not retrieve {TESTDATAURL}") + except (OSError, HTTPException) as exc: + self.skipTest(f"Failed to download {TESTDATAURL}: {exc}") with testdata: self.run_normalization_tests(testdata) diff --git a/Lib/test/test_urlparse.py b/Lib/test/test_urlparse.py index 31943f35..ca37c3c4 100644 --- a/Lib/test/test_urlparse.py +++ b/Lib/test/test_urlparse.py @@ -653,13 +653,16 @@ class UrlParseTestCase(unittest.TestCase): """Check handling of invalid ports.""" for bytes in (False, True): for parse in (urllib.parse.urlsplit, urllib.parse.urlparse): - for port in ("foo", "1.5", "-1", "0x10"): + for port in ("foo", "1.5", "-1", "0x10", "-0", "1_1", " 1", "1 ", "६"): with self.subTest(bytes=bytes, parse=parse, port=port): netloc = "www.example.net:" + port url = "http://" + netloc if bytes: - netloc = netloc.encode("ascii") - url = url.encode("ascii") + if netloc.isascii() and port.isascii(): + netloc = netloc.encode("ascii") + url = url.encode("ascii") + else: + continue p = parse(url) self.assertEqual(p.netloc, netloc) with self.assertRaises(ValueError): @@ -1186,6 +1189,7 @@ class Utility_Tests(unittest.TestCase): self.assertEqual(splitnport('127.0.0.1', 55), ('127.0.0.1', 55)) self.assertEqual(splitnport('parrot:cheese'), ('parrot', None)) self.assertEqual(splitnport('parrot:cheese', 55), ('parrot', None)) + self.assertEqual(splitnport('parrot: +1_0 '), ('parrot', None)) def test_splitquery(self): # Normal cases are exercised by other tests; ensure that we also diff --git a/Lib/test/test_venv.py b/Lib/test/test_venv.py index eca35ec4..fea986b9 100644 --- a/Lib/test/test_venv.py +++ b/Lib/test/test_venv.py @@ -159,7 +159,7 @@ class BasicTest(BaseTest): if sys.platform == 'win32': expect_exe = os.path.normcase(os.path.realpath(expect_exe)) - def pip_cmd_checker(cmd): + def pip_cmd_checker(cmd, **kwargs): cmd[0] = os.path.normcase(cmd[0]) self.assertEqual( cmd, @@ -175,7 +175,7 @@ class BasicTest(BaseTest): ) fake_context = builder.ensure_directories(fake_env_dir) - with patch('venv.subprocess.check_call', pip_cmd_checker): + with patch('venv.subprocess.check_output', pip_cmd_checker): builder.upgrade_dependencies(fake_context) @requireVenvCreate @@ -547,8 +547,8 @@ class EnsurePipTest(BaseTest): try: yield except subprocess.CalledProcessError as exc: - out = exc.output.decode(errors="replace") - err = exc.stderr.decode(errors="replace") + out = (exc.output or b'').decode(errors="replace") + err = (exc.stderr or b'').decode(errors="replace") self.fail( f"{exc}\n\n" f"**Subprocess Output**\n{out}\n\n" diff --git 
a/Lib/test/test_weakref.py b/Lib/test/test_weakref.py index f612e79d..478926b5 100644 --- a/Lib/test/test_weakref.py +++ b/Lib/test/test_weakref.py @@ -595,7 +595,7 @@ class ReferencesTestCase(TestBase): # deallocation of c2. del c2 - def test_callback_in_cycle_1(self): + def test_callback_in_cycle(self): import gc class J(object): @@ -635,40 +635,11 @@ class ReferencesTestCase(TestBase): del I, J, II gc.collect() - def test_callback_in_cycle_2(self): + def test_callback_reachable_one_way(self): import gc - # This is just like test_callback_in_cycle_1, except that II is an - # old-style class. The symptom is different then: an instance of an - # old-style class looks in its own __dict__ first. 'J' happens to - # get cleared from I.__dict__ before 'wr', and 'J' was never in II's - # __dict__, so the attribute isn't found. The difference is that - # the old-style II doesn't have a NULL __mro__ (it doesn't have any - # __mro__), so no segfault occurs. Instead it got: - # test_callback_in_cycle_2 (__main__.ReferencesTestCase) ... - # Exception exceptions.AttributeError: - # "II instance has no attribute 'J'" in > ignored - - class J(object): - pass - - class II: - def acallback(self, ignore): - self.J - - I = II() - I.J = J - I.wr = weakref.ref(J, I.acallback) - - del I, J, II - gc.collect() - - def test_callback_in_cycle_3(self): - import gc - - # This one broke the first patch that fixed the last two. In this - # case, the objects reachable from the callback aren't also reachable + # This one broke the first patch that fixed the previous test. In this case, + # the objects reachable from the callback aren't also reachable # from the object (c1) *triggering* the callback: you can get to # c1 from c2, but not vice-versa. The result was that c2's __dict__ # got tp_clear'ed by the time the c2.cb callback got invoked. @@ -688,10 +659,10 @@ class ReferencesTestCase(TestBase): del c1, c2 gc.collect() - def test_callback_in_cycle_4(self): + def test_callback_different_classes(self): import gc - # Like test_callback_in_cycle_3, except c2 and c1 have different + # Like test_callback_reachable_one_way, except c2 and c1 have different # classes. c2's class (C) isn't reachable from c1 then, so protecting # objects reachable from the dying object (c1) isn't enough to stop # c2's class (C) from getting tp_clear'ed before c2.cb is invoked. diff --git a/Lib/tkinter/__init__.py b/Lib/tkinter/__init__.py index c7176e69..d42d9a01 100644 --- a/Lib/tkinter/__init__.py +++ b/Lib/tkinter/__init__.py @@ -3621,7 +3621,7 @@ class Text(Widget, XView, YView): "lines", "xpixels" and "ypixels". 
There is an additional possible option "update", which if given then all subsequent options ensure that any possible out of date information is recalculated.""" - args = ['-%s' % arg for arg in args if not arg.startswith('-')] + args = ['-%s' % arg for arg in args] args += [index1, index2] res = self.tk.call(self._w, 'count', *args) or None if res is not None and len(args) <= 3: diff --git a/Lib/tkinter/test/test_tkinter/test_text.py b/Lib/tkinter/test/test_tkinter/test_text.py index 482f150d..ea557586 100644 --- a/Lib/tkinter/test/test_tkinter/test_text.py +++ b/Lib/tkinter/test/test_tkinter/test_text.py @@ -40,6 +40,58 @@ class TextTest(AbstractTkTest, unittest.TestCase): self.assertEqual(text.search('-test', '1.0', 'end'), '1.2') self.assertEqual(text.search('test', '1.0', 'end'), '1.3') + def test_count(self): + # XXX Some assertions do not check against the intended result, + # but instead check the current result to prevent regression. + text = self.text + text.insert('1.0', + 'Lorem ipsum dolor sit amet,\n' + 'consectetur adipiscing elit,\n' + 'sed do eiusmod tempor incididunt\n' + 'ut labore et dolore magna aliqua.') + + options = ('chars', 'indices', 'lines', + 'displaychars', 'displayindices', 'displaylines', + 'xpixels', 'ypixels') + if self.wantobjects: + self.assertEqual(len(text.count('1.0', 'end', *options)), 8) + else: + text.count('1.0', 'end', *options) + self.assertEqual(text.count('1.0', 'end', 'chars', 'lines'), (124, 4) + if self.wantobjects else '124 4') + self.assertEqual(text.count('1.3', '4.5', 'chars', 'lines'), (92, 3) + if self.wantobjects else '92 3') + self.assertEqual(text.count('4.5', '1.3', 'chars', 'lines'), (-92, -3) + if self.wantobjects else '-92 -3') + self.assertEqual(text.count('1.3', '1.3', 'chars', 'lines'), (0, 0) + if self.wantobjects else '0 0') + self.assertEqual(text.count('1.0', 'end', 'lines'), (4,) + if self.wantobjects else ('4',)) + self.assertEqual(text.count('end', '1.0', 'lines'), (-4,) + if self.wantobjects else ('-4',)) + self.assertEqual(text.count('1.3', '1.5', 'lines'), None + if self.wantobjects else ('0',)) + self.assertEqual(text.count('1.3', '1.3', 'lines'), None + if self.wantobjects else ('0',)) + self.assertEqual(text.count('1.0', 'end'), (124,) # 'indices' by default + if self.wantobjects else ('124',)) + self.assertRaises(tkinter.TclError, text.count, '1.0', 'end', 'spam') + self.assertRaises(tkinter.TclError, text.count, '1.0', 'end', '-lines') + + self.assertIsInstance(text.count('1.3', '1.5', 'ypixels'), tuple) + self.assertIsInstance(text.count('1.3', '1.5', 'update', 'ypixels'), int + if self.wantobjects else str) + self.assertEqual(text.count('1.3', '1.3', 'update', 'ypixels'), None + if self.wantobjects else '0') + self.assertEqual(text.count('1.3', '1.5', 'update', 'indices'), 2 + if self.wantobjects else '2') + self.assertEqual(text.count('1.3', '1.3', 'update', 'indices'), None + if self.wantobjects else '0') + self.assertEqual(text.count('1.3', '1.5', 'update'), (2,) + if self.wantobjects else ('2',)) + self.assertEqual(text.count('1.3', '1.3', 'update'), None + if self.wantobjects else ('0',)) + if __name__ == "__main__": unittest.main() diff --git a/Lib/tkinter/test/test_tkinter/test_widgets.py b/Lib/tkinter/test/test_tkinter/test_widgets.py index 562f4718..7c512a05 100644 --- a/Lib/tkinter/test/test_tkinter/test_widgets.py +++ b/Lib/tkinter/test/test_tkinter/test_widgets.py @@ -781,6 +781,164 @@ class CanvasTest(AbstractWidgetTest, unittest.TestCase): self.checkPixelsParam(widget, 'yscrollincrement', 10, 0, 
11.2, 13.6, -10, '0.1i') + def _test_option_joinstyle(self, c, factory): + for joinstyle in 'bevel', 'miter', 'round': + i = factory(joinstyle=joinstyle) + self.assertEqual(c.itemcget(i, 'joinstyle'), joinstyle) + self.assertRaises(TclError, factory, joinstyle='spam') + + def _test_option_smooth(self, c, factory): + for smooth in 1, True, '1', 'true', 'yes', 'on': + i = factory(smooth=smooth) + self.assertEqual(c.itemcget(i, 'smooth'), 'true') + for smooth in 0, False, '0', 'false', 'no', 'off': + i = factory(smooth=smooth) + self.assertEqual(c.itemcget(i, 'smooth'), '0') + i = factory(smooth=True, splinestep=30) + self.assertEqual(c.itemcget(i, 'smooth'), 'true') + self.assertEqual(c.itemcget(i, 'splinestep'), '30') + i = factory(smooth='raw', splinestep=30) + self.assertEqual(c.itemcget(i, 'smooth'), 'raw') + self.assertEqual(c.itemcget(i, 'splinestep'), '30') + self.assertRaises(TclError, factory, smooth='spam') + + def test_create_rectangle(self): + c = self.create() + i1 = c.create_rectangle(20, 30, 60, 10) + self.assertEqual(c.coords(i1), [20.0, 10.0, 60.0, 30.0]) + self.assertEqual(c.bbox(i1), (19, 9, 61, 31)) + + i2 = c.create_rectangle([21, 31, 61, 11]) + self.assertEqual(c.coords(i2), [21.0, 11.0, 61.0, 31.0]) + self.assertEqual(c.bbox(i2), (20, 10, 62, 32)) + + i3 = c.create_rectangle((22, 32), (62, 12)) + self.assertEqual(c.coords(i3), [22.0, 12.0, 62.0, 32.0]) + self.assertEqual(c.bbox(i3), (21, 11, 63, 33)) + + i4 = c.create_rectangle([(23, 33), (63, 13)]) + self.assertEqual(c.coords(i4), [23.0, 13.0, 63.0, 33.0]) + self.assertEqual(c.bbox(i4), (22, 12, 64, 34)) + + self.assertRaises(TclError, c.create_rectangle, 20, 30, 60) + self.assertRaises(TclError, c.create_rectangle, [20, 30, 60]) + self.assertRaises(TclError, c.create_rectangle, 20, 30, 40, 50, 60, 10) + self.assertRaises(TclError, c.create_rectangle, [20, 30, 40, 50, 60, 10]) + self.assertRaises(TclError, c.create_rectangle, 20, 30) + self.assertRaises(TclError, c.create_rectangle, [20, 30]) + self.assertRaises(IndexError, c.create_rectangle) + self.assertRaises(IndexError, c.create_rectangle, []) + + def test_create_line(self): + c = self.create() + i1 = c.create_line(20, 30, 40, 50, 60, 10) + self.assertEqual(c.coords(i1), [20.0, 30.0, 40.0, 50.0, 60.0, 10.0]) + self.assertEqual(c.bbox(i1), (18, 8, 62, 52)) + self.assertEqual(c.itemcget(i1, 'arrow'), 'none') + self.assertEqual(c.itemcget(i1, 'arrowshape'), '8 10 3') + self.assertEqual(c.itemcget(i1, 'capstyle'), 'butt') + self.assertEqual(c.itemcget(i1, 'joinstyle'), 'round') + self.assertEqual(c.itemcget(i1, 'smooth'), '0') + self.assertEqual(c.itemcget(i1, 'splinestep'), '12') + + i2 = c.create_line([21, 31, 41, 51, 61, 11]) + self.assertEqual(c.coords(i2), [21.0, 31.0, 41.0, 51.0, 61.0, 11.0]) + self.assertEqual(c.bbox(i2), (19, 9, 63, 53)) + + i3 = c.create_line((22, 32), (42, 52), (62, 12)) + self.assertEqual(c.coords(i3), [22.0, 32.0, 42.0, 52.0, 62.0, 12.0]) + self.assertEqual(c.bbox(i3), (20, 10, 64, 54)) + + i4 = c.create_line([(23, 33), (43, 53), (63, 13)]) + self.assertEqual(c.coords(i4), [23.0, 33.0, 43.0, 53.0, 63.0, 13.0]) + self.assertEqual(c.bbox(i4), (21, 11, 65, 55)) + + self.assertRaises(TclError, c.create_line, 20, 30, 60) + self.assertRaises(TclError, c.create_line, [20, 30, 60]) + self.assertRaises(TclError, c.create_line, 20, 30) + self.assertRaises(TclError, c.create_line, [20, 30]) + self.assertRaises(IndexError, c.create_line) + self.assertRaises(IndexError, c.create_line, []) + + for arrow in 'none', 'first', 'last', 'both': + i = 
c.create_line(20, 30, 60, 10, arrow=arrow) + self.assertEqual(c.itemcget(i, 'arrow'), arrow) + i = c.create_line(20, 30, 60, 10, arrow='first', arrowshape=[10, 15, 5]) + self.assertEqual(c.itemcget(i, 'arrowshape'), '10 15 5') + self.assertRaises(TclError, c.create_line, 20, 30, 60, 10, arrow='spam') + + for capstyle in 'butt', 'projecting', 'round': + i = c.create_line(20, 30, 60, 10, capstyle=capstyle) + self.assertEqual(c.itemcget(i, 'capstyle'), capstyle) + self.assertRaises(TclError, c.create_line, 20, 30, 60, 10, capstyle='spam') + + self._test_option_joinstyle(c, + lambda **kwargs: c.create_line(20, 30, 40, 50, 60, 10, **kwargs)) + self._test_option_smooth(c, + lambda **kwargs: c.create_line(20, 30, 60, 10, **kwargs)) + + def test_create_polygon(self): + c = self.create() + i1 = c.create_polygon(20, 30, 40, 50, 60, 10) + self.assertEqual(c.coords(i1), [20.0, 30.0, 40.0, 50.0, 60.0, 10.0]) + self.assertEqual(c.bbox(i1), (19, 9, 61, 51)) + self.assertEqual(c.itemcget(i1, 'joinstyle'), 'round') + self.assertEqual(c.itemcget(i1, 'smooth'), '0') + self.assertEqual(c.itemcget(i1, 'splinestep'), '12') + + i2 = c.create_polygon([21, 31, 41, 51, 61, 11]) + self.assertEqual(c.coords(i2), [21.0, 31.0, 41.0, 51.0, 61.0, 11.0]) + self.assertEqual(c.bbox(i2), (20, 10, 62, 52)) + + i3 = c.create_polygon((22, 32), (42, 52), (62, 12)) + self.assertEqual(c.coords(i3), [22.0, 32.0, 42.0, 52.0, 62.0, 12.0]) + self.assertEqual(c.bbox(i3), (21, 11, 63, 53)) + + i4 = c.create_polygon([(23, 33), (43, 53), (63, 13)]) + self.assertEqual(c.coords(i4), [23.0, 33.0, 43.0, 53.0, 63.0, 13.0]) + self.assertEqual(c.bbox(i4), (22, 12, 64, 54)) + + self.assertRaises(TclError, c.create_polygon, 20, 30, 60) + self.assertRaises(TclError, c.create_polygon, [20, 30, 60]) + self.assertRaises(IndexError, c.create_polygon) + self.assertRaises(IndexError, c.create_polygon, []) + + self._test_option_joinstyle(c, + lambda **kwargs: c.create_polygon(20, 30, 40, 50, 60, 10, **kwargs)) + self._test_option_smooth(c, + lambda **kwargs: c.create_polygon(20, 30, 40, 50, 60, 10, **kwargs)) + + def test_coords(self): + c = self.create() + i = c.create_line(20, 30, 40, 50, 60, 10, tags='x') + self.assertEqual(c.coords(i), [20.0, 30.0, 40.0, 50.0, 60.0, 10.0]) + self.assertEqual(c.coords('x'), [20.0, 30.0, 40.0, 50.0, 60.0, 10.0]) + self.assertEqual(c.bbox(i), (18, 8, 62, 52)) + + c.coords(i, 50, 60, 70, 80, 90, 40) + self.assertEqual(c.coords(i), [50.0, 60.0, 70.0, 80.0, 90.0, 40.0]) + self.assertEqual(c.bbox(i), (48, 38, 92, 82)) + + c.coords(i, [21, 31, 41, 51, 61, 11]) + self.assertEqual(c.coords(i), [21.0, 31.0, 41.0, 51.0, 61.0, 11.0]) + + c.coords(i, 20, 30, 60, 10) + self.assertEqual(c.coords(i), [20.0, 30.0, 60.0, 10.0]) + self.assertEqual(c.bbox(i), (18, 8, 62, 32)) + + self.assertRaises(TclError, c.coords, i, 20, 30, 60) + self.assertRaises(TclError, c.coords, i, [20, 30, 60]) + self.assertRaises(TclError, c.coords, i, 20, 30) + self.assertRaises(TclError, c.coords, i, [20, 30]) + + c.coords(i, '20', '30c', '60i', '10p') + coords = c.coords(i) + self.assertIsInstance(coords, list) + self.assertEqual(len(coords), 4) + self.assertEqual(coords[0], 20) + for i in range(4): + self.assertIsInstance(coords[i], float) + @requires_tcl(8, 6) def test_moveto(self): widget = self.create() diff --git a/Lib/trace.py b/Lib/trace.py index 2cf36438..213e4651 100755 --- a/Lib/trace.py +++ b/Lib/trace.py @@ -172,7 +172,7 @@ class CoverageResults: try: with open(self.infile, 'rb') as f: counts, calledfuncs, callers = pickle.load(f) - 
self.update(self.__class__(counts, calledfuncs, callers)) + self.update(self.__class__(counts, calledfuncs, callers=callers)) except (OSError, EOFError, ValueError) as err: print(("Skipping counts file %r: %s" % (self.infile, err)), file=sys.stderr) diff --git a/Lib/unittest/case.py b/Lib/unittest/case.py index 61003d0c..50100e90 100644 --- a/Lib/unittest/case.py +++ b/Lib/unittest/case.py @@ -348,11 +348,11 @@ class TestCase(object): # of difflib. See #11763. _diffThreshold = 2**16 - # Attribute used by TestSuite for classSetUp - - _classSetupFailed = False - - _class_cleanups = [] + def __init_subclass__(cls, *args, **kwargs): + # Attribute used by TestSuite for classSetUp + cls._classSetupFailed = False + cls._class_cleanups = [] + super().__init_subclass__(*args, **kwargs) def __init__(self, methodName='runTest'): """Create an instance of the class that will use the named test diff --git a/Lib/unittest/mock.py b/Lib/unittest/mock.py index 3f7185b9..a647e5db 100644 --- a/Lib/unittest/mock.py +++ b/Lib/unittest/mock.py @@ -34,6 +34,7 @@ from asyncio import iscoroutinefunction from types import CodeType, ModuleType, MethodType from unittest.util import safe_repr from functools import wraps, partial +from threading import RLock class InvalidSpecError(Exception): @@ -401,6 +402,14 @@ class Base(object): class NonCallableMock(Base): """A non-callable version of `Mock`""" + # Store a mutex as a class attribute in order to protect concurrent access + # to mock attributes. Using a class attribute allows all NonCallableMock + # instances to share the mutex for simplicity. + # + # See https://github.com/python/cpython/issues/98624 for why this is + # necessary. + _lock = RLock() + def __new__(cls, /, *args, **kw): # every instance has its own class # so we can create magic methods on the @@ -640,35 +649,36 @@ class NonCallableMock(Base): f"{name!r} is not a valid assertion. Use a spec " f"for the mock if {name!r} is meant to be an attribute.") - result = self._mock_children.get(name) - if result is _deleted: - raise AttributeError(name) - elif result is None: - wraps = None - if self._mock_wraps is not None: - # XXXX should we get the attribute without triggering code - # execution? - wraps = getattr(self._mock_wraps, name) - - result = self._get_child_mock( - parent=self, name=name, wraps=wraps, _new_name=name, - _new_parent=self - ) - self._mock_children[name] = result - - elif isinstance(result, _SpecState): - try: - result = create_autospec( - result.spec, result.spec_set, result.instance, - result.parent, result.name + with NonCallableMock._lock: + result = self._mock_children.get(name) + if result is _deleted: + raise AttributeError(name) + elif result is None: + wraps = None + if self._mock_wraps is not None: + # XXXX should we get the attribute without triggering code + # execution? + wraps = getattr(self._mock_wraps, name) + + result = self._get_child_mock( + parent=self, name=name, wraps=wraps, _new_name=name, + _new_parent=self ) - except InvalidSpecError: - target_name = self.__dict__['_mock_name'] or self - raise InvalidSpecError( - f'Cannot autospec attr {name!r} from target ' - f'{target_name!r} as it has already been mocked out. 
' - f'[target={self!r}, attr={result.spec!r}]') - self._mock_children[name] = result + self._mock_children[name] = result + + elif isinstance(result, _SpecState): + try: + result = create_autospec( + result.spec, result.spec_set, result.instance, + result.parent, result.name + ) + except InvalidSpecError: + target_name = self.__dict__['_mock_name'] or self + raise InvalidSpecError( + f'Cannot autospec attr {name!r} from target ' + f'{target_name!r} as it has already been mocked out. ' + f'[target={self!r}, attr={result.spec!r}]') + self._mock_children[name] = result return result @@ -1810,6 +1820,12 @@ class _patch_dict(object): def __call__(self, f): if isinstance(f, type): return self.decorate_class(f) + if inspect.iscoroutinefunction(f): + return self.decorate_async_callable(f) + return self.decorate_callable(f) + + + def decorate_callable(self, f): @wraps(f) def _inner(*args, **kw): self._patch_dict() @@ -1821,6 +1837,18 @@ class _patch_dict(object): return _inner + def decorate_async_callable(self, f): + @wraps(f) + async def _inner(*args, **kw): + self._patch_dict() + try: + return await f(*args, **kw) + finally: + self._unpatch_dict() + + return _inner + + def decorate_class(self, klass): for attr in dir(klass): attr_value = getattr(klass, attr) diff --git a/Lib/unittest/result.py b/Lib/unittest/result.py index 3da7005e..5ca4c232 100644 --- a/Lib/unittest/result.py +++ b/Lib/unittest/result.py @@ -196,6 +196,7 @@ class TestResult(object): ret = None first = True excs = [(exctype, value, tb)] + seen = {id(value)} # Detect loops in chained exceptions. while excs: (exctype, value, tb) = excs.pop() # Skip test runner traceback levels @@ -214,8 +215,9 @@ class TestResult(object): if value is not None: for c in (value.__cause__, value.__context__): - if c is not None: + if c is not None and id(c) not in seen: excs.append((type(c), c, c.__traceback__)) + seen.add(id(c)) return ret def _is_relevant_tb_level(self, tb): diff --git a/Lib/unittest/test/test_async_case.py b/Lib/unittest/test/test_async_case.py index b97ad939..a68ae86e 100644 --- a/Lib/unittest/test/test_async_case.py +++ b/Lib/unittest/test/test_async_case.py @@ -20,59 +20,73 @@ class TestAsyncCase(unittest.TestCase): self.addCleanup(support.gc_collect) def test_full_cycle(self): + expected = ['setUp', + 'asyncSetUp', + 'test', + 'asyncTearDown', + 'tearDown', + 'cleanup6', + 'cleanup5', + 'cleanup4', + 'cleanup3', + 'cleanup2', + 'cleanup1'] class Test(unittest.IsolatedAsyncioTestCase): def setUp(self): self.assertEqual(events, []) events.append('setUp') + self.addCleanup(self.on_cleanup1) + self.addAsyncCleanup(self.on_cleanup2) async def asyncSetUp(self): - self.assertEqual(events, ['setUp']) + self.assertEqual(events, expected[:1]) events.append('asyncSetUp') - self.addAsyncCleanup(self.on_cleanup1) + self.addCleanup(self.on_cleanup3) + self.addAsyncCleanup(self.on_cleanup4) async def test_func(self): - self.assertEqual(events, ['setUp', - 'asyncSetUp']) + self.assertEqual(events, expected[:2]) events.append('test') - self.addAsyncCleanup(self.on_cleanup2) + self.addCleanup(self.on_cleanup5) + self.addAsyncCleanup(self.on_cleanup6) async def asyncTearDown(self): - self.assertEqual(events, ['setUp', - 'asyncSetUp', - 'test']) + self.assertEqual(events, expected[:3]) events.append('asyncTearDown') def tearDown(self): - self.assertEqual(events, ['setUp', - 'asyncSetUp', - 'test', - 'asyncTearDown']) + self.assertEqual(events, expected[:4]) events.append('tearDown') - async def on_cleanup1(self): - self.assertEqual(events, 
['setUp', - 'asyncSetUp', - 'test', - 'asyncTearDown', - 'tearDown', - 'cleanup2']) + def on_cleanup1(self): + self.assertEqual(events, expected[:10]) events.append('cleanup1') async def on_cleanup2(self): - self.assertEqual(events, ['setUp', - 'asyncSetUp', - 'test', - 'asyncTearDown', - 'tearDown']) + self.assertEqual(events, expected[:9]) events.append('cleanup2') + def on_cleanup3(self): + self.assertEqual(events, expected[:8]) + events.append('cleanup3') + + async def on_cleanup4(self): + self.assertEqual(events, expected[:7]) + events.append('cleanup4') + + def on_cleanup5(self): + self.assertEqual(events, expected[:6]) + events.append('cleanup5') + + async def on_cleanup6(self): + self.assertEqual(events, expected[:5]) + events.append('cleanup6') + events = [] test = Test("test_func") result = test.run() self.assertEqual(result.errors, []) self.assertEqual(result.failures, []) - expected = ['setUp', 'asyncSetUp', 'test', - 'asyncTearDown', 'tearDown', 'cleanup2', 'cleanup1'] self.assertEqual(events, expected) events = [] diff --git a/Lib/unittest/test/test_result.py b/Lib/unittest/test/test_result.py index c5aaba0f..90484fd1 100644 --- a/Lib/unittest/test/test_result.py +++ b/Lib/unittest/test/test_result.py @@ -275,6 +275,62 @@ class Test_TestResult(unittest.TestCase): self.assertEqual(len(dropped), 1) self.assertIn("raise self.failureException(msg)", dropped[0]) + def test_addFailure_filter_traceback_frames_chained_exception_self_loop(self): + class Foo(unittest.TestCase): + def test_1(self): + pass + + def get_exc_info(): + try: + loop = Exception("Loop") + loop.__cause__ = loop + loop.__context__ = loop + raise loop + except: + return sys.exc_info() + + exc_info_tuple = get_exc_info() + + test = Foo('test_1') + result = unittest.TestResult() + result.startTest(test) + result.addFailure(test, exc_info_tuple) + result.stopTest(test) + + formatted_exc = result.failures[0][1] + self.assertEqual(formatted_exc.count("Exception: Loop\n"), 1) + + def test_addFailure_filter_traceback_frames_chained_exception_cycle(self): + class Foo(unittest.TestCase): + def test_1(self): + pass + + def get_exc_info(): + try: + # Create two directionally opposed cycles + # __cause__ in one direction, __context__ in the other + A, B, C = Exception("A"), Exception("B"), Exception("C") + edges = [(C, B), (B, A), (A, C)] + for ex1, ex2 in edges: + ex1.__cause__ = ex2 + ex2.__context__ = ex1 + raise C + except: + return sys.exc_info() + + exc_info_tuple = get_exc_info() + + test = Foo('test_1') + result = unittest.TestResult() + result.startTest(test) + result.addFailure(test, exc_info_tuple) + result.stopTest(test) + + formatted_exc = result.failures[0][1] + self.assertEqual(formatted_exc.count("Exception: A\n"), 1) + self.assertEqual(formatted_exc.count("Exception: B\n"), 1) + self.assertEqual(formatted_exc.count("Exception: C\n"), 1) + # "addError(test, err)" # ... 
# "Called when the test case test raises an unexpected exception err diff --git a/Lib/unittest/test/test_runner.py b/Lib/unittest/test/test_runner.py index 453e6c3d..0082d394 100644 --- a/Lib/unittest/test/test_runner.py +++ b/Lib/unittest/test/test_runner.py @@ -106,11 +106,13 @@ class TestCleanUp(unittest.TestCase): class TestableTest(unittest.TestCase): def setUp(self): ordering.append('setUp') + test.addCleanup(cleanup2) if blowUp: raise Exception('foo') def testNothing(self): ordering.append('test') + test.addCleanup(cleanup3) def tearDown(self): ordering.append('tearDown') @@ -121,8 +123,9 @@ class TestCleanUp(unittest.TestCase): ordering.append('cleanup1') def cleanup2(): ordering.append('cleanup2') + def cleanup3(): + ordering.append('cleanup3') test.addCleanup(cleanup1) - test.addCleanup(cleanup2) def success(some_test): self.assertEqual(some_test, test) @@ -132,7 +135,7 @@ class TestCleanUp(unittest.TestCase): result.addSuccess = success test.run(result) - self.assertEqual(ordering, ['setUp', 'test', 'tearDown', + self.assertEqual(ordering, ['setUp', 'test', 'tearDown', 'cleanup3', 'cleanup2', 'cleanup1', 'success']) blowUp = True @@ -140,7 +143,7 @@ class TestCleanUp(unittest.TestCase): test = TestableTest('testNothing') test.addCleanup(cleanup1) test.run(result) - self.assertEqual(ordering, ['setUp', 'cleanup1']) + self.assertEqual(ordering, ['setUp', 'cleanup2', 'cleanup1']) def testTestCaseDebugExecutesCleanups(self): ordering = [] @@ -152,9 +155,11 @@ class TestCleanUp(unittest.TestCase): def testNothing(self): ordering.append('test') + self.addCleanup(cleanup3) def tearDown(self): ordering.append('tearDown') + test.addCleanup(cleanup4) test = TestableTest('testNothing') @@ -163,9 +168,14 @@ class TestCleanUp(unittest.TestCase): test.addCleanup(cleanup2) def cleanup2(): ordering.append('cleanup2') + def cleanup3(): + ordering.append('cleanup3') + def cleanup4(): + ordering.append('cleanup4') test.debug() - self.assertEqual(ordering, ['setUp', 'test', 'tearDown', 'cleanup1', 'cleanup2']) + self.assertEqual(ordering, ['setUp', 'test', 'tearDown', 'cleanup4', + 'cleanup3', 'cleanup1', 'cleanup2']) class TestClassCleanup(unittest.TestCase): @@ -291,13 +301,14 @@ class TestClassCleanup(unittest.TestCase): ordering.append('test') @classmethod def tearDownClass(cls): + ordering.append('tearDownClass') raise Exception('TearDownClassExc') suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestableTest) with self.assertRaises(Exception) as cm: suite.debug() self.assertEqual(str(cm.exception), 'TearDownClassExc') - self.assertEqual(ordering, ['setUpClass', 'test']) + self.assertEqual(ordering, ['setUpClass', 'test', 'tearDownClass']) self.assertTrue(TestableTest._class_cleanups) TestableTest._class_cleanups.clear() @@ -307,7 +318,7 @@ class TestClassCleanup(unittest.TestCase): with self.assertRaises(Exception) as cm: suite.debug() self.assertEqual(str(cm.exception), 'TearDownClassExc') - self.assertEqual(ordering, ['setUpClass', 'test']) + self.assertEqual(ordering, ['setUpClass', 'test', 'tearDownClass']) self.assertTrue(TestableTest._class_cleanups) TestableTest._class_cleanups.clear() @@ -446,6 +457,33 @@ class TestClassCleanup(unittest.TestCase): self.assertEqual(ordering, ['setUpClass', 'test', 'tearDownClass', 'cleanup_good']) + def test_run_nested_test(self): + ordering = [] + + class InnerTest(unittest.TestCase): + @classmethod + def setUpClass(cls): + ordering.append('inner setup') + cls.addClassCleanup(ordering.append, 'inner cleanup') + def test(self): + 
ordering.append('inner test') + + class OuterTest(unittest.TestCase): + @classmethod + def setUpClass(cls): + ordering.append('outer setup') + cls.addClassCleanup(ordering.append, 'outer cleanup') + def test(self): + ordering.append('start outer test') + runTests(InnerTest) + ordering.append('end outer test') + + runTests(OuterTest) + self.assertEqual(ordering, [ + 'outer setup', 'start outer test', + 'inner setup', 'inner test', 'inner cleanup', + 'end outer test', 'outer cleanup']) + class TestModuleCleanUp(unittest.TestCase): def test_add_and_do_ModuleCleanup(self): @@ -657,6 +695,7 @@ class TestModuleCleanUp(unittest.TestCase): unittest.addModuleCleanup(cleanup, ordering) @staticmethod def tearDownModule(): + ordering.append('tearDownModule') raise Exception('CleanUpExc') class TestableTest(unittest.TestCase): @@ -675,7 +714,8 @@ class TestModuleCleanUp(unittest.TestCase): self.assertEqual(result.errors[0][1].splitlines()[-1], 'Exception: CleanUpExc') self.assertEqual(ordering, ['setUpModule', 'setUpClass', 'test', - 'tearDownClass', 'cleanup_good']) + 'tearDownClass', 'tearDownModule', + 'cleanup_good']) self.assertEqual(unittest.case._module_cleanups, []) def test_debug_module_executes_cleanUp(self): @@ -729,6 +769,7 @@ class TestModuleCleanUp(unittest.TestCase): unittest.addModuleCleanup(cleanup, ordering, blowUp=blowUp) @staticmethod def tearDownModule(): + ordering.append('tearDownModule') raise Exception('TearDownModuleExc') class TestableTest(unittest.TestCase): @@ -748,7 +789,7 @@ class TestModuleCleanUp(unittest.TestCase): suite.debug() self.assertEqual(str(cm.exception), 'TearDownModuleExc') self.assertEqual(ordering, ['setUpModule', 'setUpClass', 'test', - 'tearDownClass']) + 'tearDownClass', 'tearDownModule']) self.assertTrue(unittest.case._module_cleanups) unittest.case._module_cleanups.clear() @@ -759,7 +800,7 @@ class TestModuleCleanUp(unittest.TestCase): suite.debug() self.assertEqual(str(cm.exception), 'TearDownModuleExc') self.assertEqual(ordering, ['setUpModule', 'setUpClass', 'test', - 'tearDownClass']) + 'tearDownClass', 'tearDownModule']) self.assertTrue(unittest.case._module_cleanups) unittest.case._module_cleanups.clear() diff --git a/Lib/unittest/test/testmock/testasync.py b/Lib/unittest/test/testmock/testasync.py index e1866a34..22228b47 100644 --- a/Lib/unittest/test/testmock/testasync.py +++ b/Lib/unittest/test/testmock/testasync.py @@ -146,6 +146,23 @@ class AsyncPatchCMTest(unittest.TestCase): run(test_async()) + def test_patch_dict_async_def(self): + foo = {'a': 'a'} + @patch.dict(foo, {'a': 'b'}) + async def test_async(): + self.assertEqual(foo['a'], 'b') + + self.assertTrue(iscoroutinefunction(test_async)) + run(test_async()) + + def test_patch_dict_async_def_context(self): + foo = {'a': 'a'} + async def test_async(): + with patch.dict(foo, {'a': 'b'}): + self.assertEqual(foo['a'], 'b') + + run(test_async()) + class AsyncMockTest(unittest.TestCase): def test_iscoroutinefunction_default(self): diff --git a/Lib/urllib/parse.py b/Lib/urllib/parse.py index b35997bc..26ddf307 100644 --- a/Lib/urllib/parse.py +++ b/Lib/urllib/parse.py @@ -171,12 +171,11 @@ class _NetlocResultMixinBase(object): def port(self): port = self._hostinfo[1] if port is not None: - try: - port = int(port, 10) - except ValueError: - message = f'Port could not be cast to integer value as {port!r}' - raise ValueError(message) from None - if not ( 0 <= port <= 65535): + if port.isdigit() and port.isascii(): + port = int(port) + else: + raise ValueError(f"Port could not be cast to integer 
value as {port!r}") + if not (0 <= port <= 65535): raise ValueError("Port out of range 0-65535") return port @@ -1125,15 +1124,15 @@ def splitnport(host, defport=-1): def _splitnport(host, defport=-1): """Split host and port, returning numeric port. Return given default port if no ':' found; defaults to -1. - Return numerical port if a valid number are found after ':'. + Return numerical port if a valid number is found after ':'. Return None if ':' but not a valid number.""" host, delim, port = host.rpartition(':') if not delim: host = port elif port: - try: + if port.isdigit() and port.isascii(): nport = int(port) - except ValueError: + else: nport = None return host, nport return host, defport diff --git a/Lib/uuid.py b/Lib/uuid.py index 5ae0a3e5..fe9f87b7 100644 --- a/Lib/uuid.py +++ b/Lib/uuid.py @@ -370,7 +370,12 @@ def _get_command_stdout(command, *args): # for are actually localized, but in theory some system could do so.) env = dict(os.environ) env['LC_ALL'] = 'C' - proc = subprocess.Popen((executable,) + args, + # Empty strings will be quoted by popen so we should just omit it + if args != ('',): + command = (executable, *args) + else: + command = (executable,) + proc = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, env=env) @@ -510,7 +515,7 @@ def _ifconfig_getnode(): mac = _find_mac_near_keyword('ifconfig', args, keywords, lambda i: i+1) if mac: return mac - return None + return None def _ip_getnode(): """Get the hardware address on Unix by running ip.""" diff --git a/Lib/venv/__init__.py b/Lib/venv/__init__.py index 88520085..69155e5c 100644 --- a/Lib/venv/__init__.py +++ b/Lib/venv/__init__.py @@ -308,14 +308,25 @@ class EnvBuilder: shutil.copyfile(src, dst) break + def _call_new_python(self, context, *py_args, **kwargs): + """Executes the newly created Python using safe-ish options""" + # gh-98251: We do not want to just use '-I' because that masks + # legitimate user preferences (such as not writing bytecode). All we + # really need is to ensure that the path variables do not overrule + # normal venv handling.
+ args = [context.env_exec_cmd, *py_args] + kwargs['env'] = env = os.environ.copy() + env['VIRTUAL_ENV'] = context.env_dir + env.pop('PYTHONHOME', None) + env.pop('PYTHONPATH', None) + kwargs['cwd'] = context.env_dir + kwargs['executable'] = context.env_exec_cmd + subprocess.check_output(args, **kwargs) + def _setup_pip(self, context): """Installs or upgrades pip in a virtual environment""" - # We run ensurepip in isolated mode to avoid side effects from - # environment vars, the current directory and anything else - # intended for the global Python environment - cmd = [context.env_exec_cmd, '-Im', 'ensurepip', '--upgrade', - '--default-pip'] - subprocess.check_output(cmd, stderr=subprocess.STDOUT) + self._call_new_python(context, '-m', 'ensurepip', '--upgrade', + '--default-pip', stderr=subprocess.STDOUT) def setup_scripts(self, context): """ @@ -414,9 +425,8 @@ class EnvBuilder: logger.debug( f'Upgrading {CORE_VENV_DEPS} packages in {context.bin_path}' ) - cmd = [context.env_exec_cmd, '-m', 'pip', 'install', '--upgrade'] - cmd.extend(CORE_VENV_DEPS) - subprocess.check_call(cmd) + self._call_new_python(context, '-m', 'pip', 'install', '--upgrade', + *CORE_VENV_DEPS) def create(env_dir, system_site_packages=False, clear=False, diff --git a/Lib/venv/scripts/posix/activate.fish b/Lib/venv/scripts/posix/activate.fish index e40a1d71..9aa44460 100644 --- a/Lib/venv/scripts/posix/activate.fish +++ b/Lib/venv/scripts/posix/activate.fish @@ -13,10 +13,13 @@ function deactivate -d "Exit virtual environment and return to normal shell env end if test -n "$_OLD_FISH_PROMPT_OVERRIDE" - functions -e fish_prompt set -e _OLD_FISH_PROMPT_OVERRIDE - functions -c _old_fish_prompt fish_prompt - functions -e _old_fish_prompt + # prevents error when using nested fish instances (Issue #93858) + if functions -q _old_fish_prompt + functions -e fish_prompt + functions -c _old_fish_prompt fish_prompt + functions -e _old_fish_prompt + end end set -e VIRTUAL_ENV diff --git a/Mac/BuildScript/build-installer.py b/Mac/BuildScript/build-installer.py index 67a6e4b2..a200fb3f 100755 --- a/Mac/BuildScript/build-installer.py +++ b/Mac/BuildScript/build-installer.py @@ -359,9 +359,9 @@ def library_recipes(): ), ), dict( - name="SQLite 3.37.2", - url="https://sqlite.org/2022/sqlite-autoconf-3370200.tar.gz", - checksum='683cc5312ee74e71079c14d24b7a6d27', + name="SQLite 3.39.4", + url="https://sqlite.org/2022/sqlite-autoconf-3390400.tar.gz", + checksum="44b7e6691b0954086f717a6c43b622a5", extra_cflags=('-Os ' '-DSQLITE_ENABLE_FTS5 ' '-DSQLITE_ENABLE_FTS4 ' diff --git a/Mac/README.rst b/Mac/README.rst index 8f2f153d..e746ba87 100644 --- a/Mac/README.rst +++ b/Mac/README.rst @@ -10,6 +10,19 @@ Python on macOS README This document provides a quick overview of some macOS specific features in the Python distribution. +Compilers for building on macOS +=============================== + +The core developers primarily test builds on macOS with Apple's compiler tools, +either Xcode or the Command Line Tools. For these we only support building with +a compiler that includes an SDK that targets the OS on the build machine, that is +the version of Xcode that shipped with the OS version or one newer. + +For example, for macOS 12 we support Xcode 13 and Xcode 14 (or the corresponding +Command Line Tools). + +Building with other compilers, such as GCC, likely works, but is not actively supported. 
+ macOS specific arguments to configure ===================================== diff --git a/Makefile.pre.in b/Makefile.pre.in index 8ee44bfc..51c31b94 100644 --- a/Makefile.pre.in +++ b/Makefile.pre.in @@ -1898,7 +1898,7 @@ rmtestturds: -rm -f gb-18030-2000.xml docclean: - $(MAKE) -C Doc clean + $(MAKE) -C $(srcdir)/Doc clean # like the 'clean' target but retain the profile guided optimization (PGO) # data. The PGO data is only valid if source code remains unchanged. diff --git a/Misc/NEWS b/Misc/NEWS index 4f27bca3..c5f23384 100644 --- a/Misc/NEWS +++ b/Misc/NEWS @@ -2,6 +2,281 @@ Python News +++++++++++ +What's New in Python 3.10.9 final? +================================== + +*Release date: 2022-12-06* + +Security +-------- + +- gh-issue-100001: ``python -m http.server`` no longer allows terminal + control characters sent within a garbage request to be printed to the + stderr server log. + + This is done by changing the :mod:`http.server` + :class:`BaseHTTPRequestHandler` ``.log_message`` method to replace control + characters with a ``\xHH`` hex escape before printing. + +- gh-issue-87604: Avoid publishing list of active per-interpreter audit + hooks via the :mod:`gc` module + +- gh-issue-98433: The IDNA codec decoder used on DNS hostnames by + :mod:`socket` or :mod:`asyncio` related name resolution functions no + longer involves a quadratic algorithm. This prevents a potential CPU + denial of service if an out-of-spec excessive length hostname involving + bidirectional characters were decoded. Some protocols such as + :mod:`urllib` http ``3xx`` redirects potentially allow for an attacker to + supply such a name. + +- gh-issue-98739: Update bundled libexpat to 2.5.0 + +- gh-issue-98517: Port XKCP's fix for the buffer overflows in SHA-3 + (CVE-2022-37454). + +- gh-issue-97514: On Linux the :mod:`multiprocessing` module returns to + using filesystem backed unix domain sockets for communication with the + *forkserver* process instead of the Linux abstract socket namespace. Only + code that chooses to use the :ref:`"forkserver" start method + ` is affected. + + Abstract sockets have no permissions and could allow any user on the + system in the same `network namespace + `_ (often + the whole system) to inject code into the multiprocessing *forkserver* + process. This was a potential privilege escalation. Filesystem based + socket permissions restrict this to the *forkserver* process user as was + the default in Python 3.8 and earlier. + + This prevents Linux `CVE-2022-42919 + `_. + +Core and Builtins +----------------- + +- gh-issue-99578: Fix a reference bug in :func:`_imp.create_builtin()` after + the creation of the first sub-interpreter for modules ``builtins`` and + ``sys``. Patch by Victor Stinner. + +- gh-issue-99581: Fixed a bug that was causing a buffer overflow if the + tokenizer copies a line missing the newline character from a file that is + as long as the available tokenizer buffer. Patch by Pablo Galindo + +- gh-issue-96055: Update :mod:`faulthandler` to emit an error message with + the proper unexpected signal number. Patch by Dong-hee Na. + +- gh-issue-98852: Fix subscription of :class:`types.GenericAlias` instances + containing bare generic types: for example ``tuple[A, T][int]``, where + ``A`` is a generic type, and ``T`` is a type variable. + +- gh-issue-98415: Fix detection of MAC addresses for :mod:`uuid` on certain + OSs.
Patch by Chaim Sanders + +- gh-issue-92119: Print exception class name instead of its string + representation when raising errors from :mod:`ctypes` calls. + +- gh-issue-93696: Allow :mod:`pdb` to locate source for frozen modules in + the standard library. + +- bpo-31718: Raise :exc:`ValueError` instead of :exc:`SystemError` when + methods of uninitialized :class:`io.IncrementalNewlineDecoder` objects are + called. Patch by Oren Milman. + +- bpo-38031: Fix a possible assertion failure in :class:`io.FileIO` when the + opener returns an invalid file descriptor. + +Library +------- + +- gh-issue-100001: Also \ escape \s in the http.server + BaseHTTPRequestHandler.log_message so that it is technically possible to + parse the line and reconstruct what the original data was. Without this a + \xHH is ambiguous as to whether it is a hex replacement we put in or + whether the characters r"\x" came through in the original request line. + +- gh-issue-93453: :func:`asyncio.get_event_loop` now only emits a + deprecation warning when a new event loop was created implicitly. It no + longer emits a deprecation warning if the current event loop was set. + +- gh-issue-51524: Fix bug when calling trace.CoverageResults with valid + infile. + +- gh-issue-99645: Fix a bug in handling class cleanups in + :class:`unittest.TestCase`. Now ``addClassCleanup()`` uses separate lists + for different ``TestCase`` subclasses, and ``doClassCleanups()`` only + cleans up the particular class. + +- gh-issue-97001: Release the GIL when calling termios APIs to avoid + blocking threads. + +- gh-issue-99341: Fix :func:`ast.increment_lineno` to also cover + :class:`ast.TypeIgnore` when changing line numbers. + +- gh-issue-74044: Fixed bug where :func:`inspect.signature` reported + incorrect arguments for decorated methods. + +- gh-issue-99275: Fix ``SystemError`` in :mod:`ctypes` when exception was + not set during ``__initsubclass__``. + +- gh-issue-99155: Fix :class:`statistics.NormalDist` pickle with ``0`` and + ``1`` protocols. + +- gh-issue-99134: Update the bundled copy of pip to version 22.3.1. + +- gh-issue-99130: Apply bugfixes from `importlib_metadata 4.11.4 + `_, + namely: In ``PathDistribution._name_from_stem``, avoid including parts of + the extension in the result. In ``PathDistribution._normalized_name``, + ensure names loaded from the stem of the filename are also normalized, + ensuring duplicate entry points by packages varying only by non-normalized + name are hidden. + +- gh-issue-83004: Clean up refleak on failed module initialisation in + :mod:`_zoneinfo` + +- gh-issue-83004: Clean up refleaks on failed module initialisation in + :mod:`_pickle` + +- gh-issue-83004: Clean up refleak on failed module initialisation in + :mod:`_io`. + +- gh-issue-98897: Fix memory leak in :func:`math.dist` when both points + don't have the same dimension. Patch by Kumar Aditya. + +- gh-issue-98793: Fix argument typechecks in :func:`!_overlapped.WSAConnect` + and :func:`!_overlapped.Overlapped.WSASendTo` functions. + +- gh-issue-98740: Fix internal error in the :mod:`re` module which in very + rare circumstances prevented compilation of a regular expression + containing a :ref:`conditional expression ` + without the "else" branch. + +- gh-issue-98703: Fix :meth:`asyncio.StreamWriter.drain` to call + ``protocol.connection_lost`` callback only once on Windows. + +- gh-issue-98624: Add a mutex to unittest.mock.NonCallableMock to protect + concurrent access to mock attributes.
+ +- gh-issue-89237: Fix hang on Windows in ``subprocess.wait_closed()`` in + :mod:`asyncio` with :class:`~asyncio.ProactorEventLoop`. Patch by Kumar + Aditya. + +- gh-issue-98458: Fix infinite loop in unittest when a self-referencing + chained exception is raised. + +- gh-issue-97928: :meth:`tkinter.Text.count` now raises an exception for + options starting with "-" instead of silently ignoring them. + +- gh-issue-97966: On ``uname_result``, restored expectation that ``_fields`` + and ``_asdict`` would include all six properties including ``processor``. + +- gh-issue-98331: Update the bundled copies of pip and setuptools to + versions 22.3 and 65.5.0 respectively. + +- gh-issue-96035: Fix bug in :func:`urllib.parse.urlparse` that causes + certain port numbers containing whitespace, underscores, plus and minus + signs, or non-ASCII digits to be incorrectly accepted. + +- gh-issue-98251: Allow :mod:`venv` to pass along :envvar:`PYTHON*` + variables to ``ensurepip`` and ``pip`` when they do not impact path + resolution + +- gh-issue-98178: On macOS, fix a crash in :func:`syslog.syslog` in + multi-threaded applications. On macOS, the libc ``syslog()`` function is + not thread-safe, so :func:`syslog.syslog` no longer releases the GIL to + call it. Patch by Victor Stinner. + +- gh-issue-96151: Allow ``BUILTINS`` to be a valid field name for frozen + dataclasses. + +- gh-issue-98086: Make sure ``patch.dict()`` can be applied on async + functions. + +- gh-issue-88863: To avoid apparent memory leaks when + :func:`asyncio.open_connection` raises, break reference cycles generated + by local exception and future instances (which have the exception + instance as a member variable). Patch by Dong Uk, Kang. + +- gh-issue-93858: Prevent error when activating venv in nested fish + instances. + +- bpo-46364: Restrict use of sockets instead of pipes for stdin of + subprocesses created by :mod:`asyncio` to AIX platform only. + +- bpo-38523: :func:`shutil.copytree` now applies the + *ignore_dangling_symlinks* argument recursively. + +- bpo-36267: Fix IndexError in :class:`argparse.ArgumentParser` when a + ``store_true`` action is given an explicit argument. + +Documentation +------------- + +- gh-issue-92892: Document that calling variadic functions with ctypes + requires special care on macOS/arm64 (and possibly other platforms). + +Tests +----- + +- gh-issue-99892: Skip test_normalization() of test_unicodedata if it fails + to download NormalizationTest.txt file from pythontest.net. Patch by + Victor Stinner. + +- bpo-34272: Some C API tests were moved into the new Lib/test/test_capi/ + directory. + +Build +----- + +- gh-issue-99086: Fix ``-Wimplicit-int``, ``-Wstrict-prototypes``, and + ``-Wimplicit-function-declaration`` compiler warnings in + :program:`configure` checks. + +- gh-issue-99086: Fix ``-Wimplicit-int`` compiler warning in + :program:`configure` check for ``PTHREAD_SCOPE_SYSTEM``. + +- gh-issue-97731: Specify the full path to the source location for ``make + docclean`` (needed for cross-builds). + +- gh-issue-98671: Fix ``NO_MISALIGNED_ACCESSES`` not being defined for the + SHA3 extension when ``HAVE_ALIGNED_REQUIRED`` is set, allowing builds on + hardware where unaligned memory accesses are not allowed. + +Windows +------- + +- gh-issue-99345: Use faster initialization functions to detect install + location for Windows Store package + +- gh-issue-98689: Update Windows builds to zlib v1.2.13. v1.2.12 has + CVE-2022-37434, but the vulnerable ``inflateGetHeader`` API is not used by + Python.
+ +- gh-issue-94328: Update Windows installer to use SQLite 3.39.4. + +- bpo-40882: Fix a memory leak in + :class:`multiprocessing.shared_memory.SharedMemory` on Windows. + +macOS +----- + +- gh-issue-94328: Update macOS installer to SQLite 3.39.4. + +IDLE +---- + +- gh-issue-97527: Fix a bug in the previous bugfix that caused IDLE to not + start when run with 3.10.8, 3.12.0a1, and at least Microsoft Python + 3.10.2288.0 installed without the Lib/test package. 3.11.0 was never + affected. + +Tools/Demos +----------- + +- gh-issue-95731: Fix handling of module docstrings in + :file:`Tools/i18n/pygettext.py`. + + What's New in Python 3.10.8 final? ================================== @@ -9244,8 +9519,7 @@ Documentation creates new event loop only if called from the main thread. - bpo-38918: Add an entry for ``__module__`` in the "function" & "method" - sections of the `inspect docs types and members table - `_ + sections of the :mod:`inspect` docs' :ref:`inspect-types` table. - bpo-3530: In the :mod:`ast` module documentation, fix a misleading ``NodeTransformer`` example and add advice on when to use the diff --git a/Modules/_asynciomodule.c b/Modules/_asynciomodule.c index c627382a..8d0ff690 100644 --- a/Modules/_asynciomodule.c +++ b/Modules/_asynciomodule.c @@ -332,13 +332,6 @@ get_event_loop(int stacklevel) return loop; } - if (PyErr_WarnEx(PyExc_DeprecationWarning, - "There is no current event loop", - stacklevel)) - { - return NULL; - } - policy = PyObject_CallNoArgs(asyncio_get_event_loop_policy); if (policy == NULL) { return NULL; @@ -3092,6 +3085,11 @@ _asyncio_get_event_loop_impl(PyObject *module) return get_event_loop(1); } +// This internal method is going away in Python 3.12, left here only for +// backwards compatibility with 3.10.0 - 3.10.8 and 3.11.0. +// Similarly, this method's Python equivalent in asyncio.events is going +// away as well. +// See GH-99949 for more details. /*[clinic input] _asyncio._get_event_loop stacklevel: int = 3 diff --git a/Modules/_ctypes/callproc.c b/Modules/_ctypes/callproc.c index 48694760..e009661d 100644 --- a/Modules/_ctypes/callproc.c +++ b/Modules/_ctypes/callproc.c @@ -1014,7 +1014,10 @@ void _ctypes_extend_error(PyObject *exc_class, const char *fmt, ...) PyErr_Fetch(&tp, &v, &tb); PyErr_NormalizeException(&tp, &v, &tb); - cls_str = PyObject_Str(tp); + if (PyType_Check(tp)) + cls_str = PyUnicode_FromString(_PyType_Name((PyTypeObject *)tp)); + else + cls_str = PyObject_Str(tp); if (cls_str) { PyUnicode_AppendAndDel(&s, cls_str); PyUnicode_AppendAndDel(&s, PyUnicode_FromString(": ")); diff --git a/Modules/_ctypes/stgdict.c b/Modules/_ctypes/stgdict.c index 747339de..f9811ace 100644 --- a/Modules/_ctypes/stgdict.c +++ b/Modules/_ctypes/stgdict.c @@ -420,8 +420,11 @@ PyCStructUnionType_update_stgdict(PyObject *type, PyObject *fields, int isStruct } stgdict = PyType_stgdict(type); - if (!stgdict) + if (!stgdict) { + PyErr_SetString(PyExc_TypeError, + "ctypes state is not initialized"); return -1; + } /* If this structure/union is already marked final we cannot assign _fields_ anymore. 
*/ diff --git a/Modules/_io/_iomodule.c b/Modules/_io/_iomodule.c index 170dea41..9e53de59 100644 --- a/Modules/_io/_iomodule.c +++ b/Modules/_io/_iomodule.c @@ -718,10 +718,10 @@ PyInit__io(void) goto fail; /* BlockingIOError, for compatibility */ - Py_INCREF(PyExc_BlockingIOError); - if (PyModule_AddObject(m, "BlockingIOError", - (PyObject *) PyExc_BlockingIOError) < 0) + if (PyModule_AddObjectRef(m, "BlockingIOError", + (PyObject *) PyExc_BlockingIOError) < 0) { goto fail; + } /* Concrete base types of the IO ABCs. (the ABCs themselves are declared through inheritance in io.py) diff --git a/Modules/_io/fileio.c b/Modules/_io/fileio.c index b9856b3b..bf34b7cd 100644 --- a/Modules/_io/fileio.c +++ b/Modules/_io/fileio.c @@ -494,8 +494,12 @@ _Py_COMP_DIAG_POP ret = -1; if (!fd_is_own) self->fd = -1; - if (self->fd >= 0) + if (self->fd >= 0) { + PyObject *exc, *val, *tb; + PyErr_Fetch(&exc, &val, &tb); internal_close(self); + _PyErr_ChainExceptions(exc, val, tb); + } done: #ifdef MS_WINDOWS diff --git a/Modules/_io/textio.c b/Modules/_io/textio.c index d5b311a9..e1199a56 100644 --- a/Modules/_io/textio.c +++ b/Modules/_io/textio.c @@ -251,19 +251,16 @@ _io_IncrementalNewlineDecoder___init___impl(nldecoder_object *self, PyObject *errors) /*[clinic end generated code: output=fbd04d443e764ec2 input=89db6b19c6b126bf]*/ { - self->decoder = decoder; - Py_INCREF(decoder); if (errors == NULL) { - self->errors = _PyUnicode_FromId(&PyId_strict); - if (self->errors == NULL) + errors = _PyUnicode_FromId(&PyId_strict); + if (errors == NULL) { return -1; + } } - else { - self->errors = errors; - } - Py_INCREF(self->errors); + Py_XSETREF(self->errors, Py_NewRef(errors)); + Py_XSETREF(self->decoder, Py_NewRef(decoder)); self->translate = translate ? 1 : 0; self->seennl = 0; self->pendingcr = 0; @@ -298,6 +295,13 @@ check_decoded(PyObject *decoded) return 0; } +#define CHECK_INITIALIZED_DECODER(self) \ + if (self->errors == NULL) { \ + PyErr_SetString(PyExc_ValueError, \ + "IncrementalNewlineDecoder.__init__() not called"); \ + return NULL; \ + } + #define SEEN_CR 1 #define SEEN_LF 2 #define SEEN_CRLF 4 @@ -311,11 +315,7 @@ _PyIncrementalNewlineDecoder_decode(PyObject *myself, Py_ssize_t output_len; nldecoder_object *self = (nldecoder_object *) myself; - if (self->decoder == NULL) { - PyErr_SetString(PyExc_ValueError, - "IncrementalNewlineDecoder.__init__ not called"); - return NULL; - } + CHECK_INITIALIZED_DECODER(self); /* decode input (with the eventual \r from a previous pass) */ if (self->decoder != Py_None) { @@ -529,6 +529,8 @@ _io_IncrementalNewlineDecoder_getstate_impl(nldecoder_object *self) PyObject *buffer; unsigned long long flag; + CHECK_INITIALIZED_DECODER(self); + if (self->decoder != Py_None) { PyObject *state = PyObject_CallMethodNoArgs(self->decoder, _PyIO_str_getstate); @@ -573,6 +575,8 @@ _io_IncrementalNewlineDecoder_setstate(nldecoder_object *self, PyObject *buffer; unsigned long long flag; + CHECK_INITIALIZED_DECODER(self); + if (!PyTuple_Check(state)) { PyErr_SetString(PyExc_TypeError, "state argument must be a tuple"); return NULL; @@ -601,6 +605,8 @@ static PyObject * _io_IncrementalNewlineDecoder_reset_impl(nldecoder_object *self) /*[clinic end generated code: output=32fa40c7462aa8ff input=728678ddaea776df]*/ { + CHECK_INITIALIZED_DECODER(self); + self->seennl = 0; self->pendingcr = 0; if (self->decoder != Py_None) @@ -612,6 +618,8 @@ _io_IncrementalNewlineDecoder_reset_impl(nldecoder_object *self) static PyObject * incrementalnewlinedecoder_newlines_get(nldecoder_object *self, 
void *context) { + CHECK_INITIALIZED_DECODER(self); + switch (self->seennl) { case SEEN_CR: return PyUnicode_FromString("\r"); diff --git a/Modules/_pickle.c b/Modules/_pickle.c index ce54a2db..956c7b6f 100644 --- a/Modules/_pickle.c +++ b/Modules/_pickle.c @@ -8027,16 +8027,15 @@ PyInit__pickle(void) if (st->UnpicklingError == NULL) return NULL; - Py_INCREF(st->PickleError); - if (PyModule_AddObject(m, "PickleError", st->PickleError) < 0) + if (PyModule_AddObjectRef(m, "PickleError", st->PickleError) < 0) { return NULL; - Py_INCREF(st->PicklingError); - if (PyModule_AddObject(m, "PicklingError", st->PicklingError) < 0) + } + if (PyModule_AddObjectRef(m, "PicklingError", st->PicklingError) < 0) { return NULL; - Py_INCREF(st->UnpicklingError); - if (PyModule_AddObject(m, "UnpicklingError", st->UnpicklingError) < 0) + } + if (PyModule_AddObjectRef(m, "UnpicklingError", st->UnpicklingError) < 0) { return NULL; - + } if (_Pickle_InitState(st) < 0) return NULL; diff --git a/Modules/_sha3/kcp/KeccakSponge.inc b/Modules/_sha3/kcp/KeccakSponge.inc index e10739de..cf92e4db 100644 --- a/Modules/_sha3/kcp/KeccakSponge.inc +++ b/Modules/_sha3/kcp/KeccakSponge.inc @@ -171,7 +171,7 @@ int SpongeAbsorb(SpongeInstance *instance, const unsigned char *data, size_t dat i = 0; curData = data; while(i < dataByteLen) { - if ((instance->byteIOIndex == 0) && (dataByteLen >= (i + rateInBytes))) { + if ((instance->byteIOIndex == 0) && (dataByteLen-i >= rateInBytes)) { #ifdef SnP_FastLoop_Absorb /* processing full blocks first */ @@ -199,10 +199,10 @@ int SpongeAbsorb(SpongeInstance *instance, const unsigned char *data, size_t dat } else { /* normal lane: using the message queue */ - - partialBlock = (unsigned int)(dataByteLen - i); - if (partialBlock+instance->byteIOIndex > rateInBytes) + if (dataByteLen-i > rateInBytes-instance->byteIOIndex) partialBlock = rateInBytes-instance->byteIOIndex; + else + partialBlock = (unsigned int)(dataByteLen - i); #ifdef KeccakReference displayBytes(1, "Block to be absorbed (part)", curData, partialBlock); #endif @@ -281,7 +281,7 @@ int SpongeSqueeze(SpongeInstance *instance, unsigned char *data, size_t dataByte i = 0; curData = data; while(i < dataByteLen) { - if ((instance->byteIOIndex == rateInBytes) && (dataByteLen >= (i + rateInBytes))) { + if ((instance->byteIOIndex == rateInBytes) && (dataByteLen-i >= rateInBytes)) { for(j=dataByteLen-i; j>=rateInBytes; j-=rateInBytes) { SnP_Permute(instance->state); SnP_ExtractBytes(instance->state, curData, 0, rateInBytes); @@ -299,9 +299,10 @@ int SpongeSqueeze(SpongeInstance *instance, unsigned char *data, size_t dataByte SnP_Permute(instance->state); instance->byteIOIndex = 0; } - partialBlock = (unsigned int)(dataByteLen - i); - if (partialBlock+instance->byteIOIndex > rateInBytes) + if (dataByteLen-i > rateInBytes-instance->byteIOIndex) partialBlock = rateInBytes-instance->byteIOIndex; + else + partialBlock = (unsigned int)(dataByteLen - i); i += partialBlock; SnP_ExtractBytes(instance->state, curData, instance->byteIOIndex, partialBlock); diff --git a/Modules/_sha3/sha3module.c b/Modules/_sha3/sha3module.c index 3974e0b6..a2f9d8c7 100644 --- a/Modules/_sha3/sha3module.c +++ b/Modules/_sha3/sha3module.c @@ -65,7 +65,7 @@ #endif /* Prevent bus errors on platforms requiring aligned accesses such ARM. 
*/ -#if HAVE_ALIGNED_REQUIRED && !defined(NO_MISALIGNED_ACCESSES) +#if defined(HAVE_ALIGNED_REQUIRED) && !defined(NO_MISALIGNED_ACCESSES) #define NO_MISALIGNED_ACCESSES #endif diff --git a/Modules/_sre.c b/Modules/_sre.c index 911626da..2dfbf854 100644 --- a/Modules/_sre.c +++ b/Modules/_sre.c @@ -1519,7 +1519,7 @@ _sre_compile_impl(PyObject *module, PyObject *pattern, int flags, #endif /* Report failure */ -#define FAIL do { VTRACE(("FAIL: %d\n", __LINE__)); return 0; } while (0) +#define FAIL do { VTRACE(("FAIL: %d\n", __LINE__)); return -1; } while (0) /* Extract opcode, argument, or skip count from code array */ #define GET_OP \ @@ -1543,7 +1543,7 @@ _sre_compile_impl(PyObject *module, PyObject *pattern, int flags, skip = *code; \ VTRACE(("%lu (skip to %p)\n", \ (unsigned long)skip, code+skip)); \ - if (skip-adj > (uintptr_t)(end - code)) \ + if (skip-adj > (uintptr_t)(end - code)) \ FAIL; \ code++; \ } while (0) @@ -1632,9 +1632,10 @@ _validate_charset(SRE_CODE *code, SRE_CODE *end) } } - return 1; + return 0; } +/* Returns 0 on success, -1 on failure, and 1 if the last op is JUMP. */ static int _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) { @@ -1712,7 +1713,7 @@ _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) case SRE_OP_IN_LOC_IGNORE: GET_SKIP; /* Stop 1 before the end; we check the FAILURE below */ - if (!_validate_charset(code, code+skip-2)) + if (_validate_charset(code, code+skip-2)) FAIL; if (code[skip-2] != SRE_OP_FAILURE) FAIL; @@ -1766,7 +1767,7 @@ _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) } /* Validate the charset */ if (flags & SRE_INFO_CHARSET) { - if (!_validate_charset(code, newcode-1)) + if (_validate_charset(code, newcode-1)) FAIL; if (newcode[-1] != SRE_OP_FAILURE) FAIL; @@ -1787,7 +1788,7 @@ _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) if (skip == 0) break; /* Stop 2 before the end; we check the JUMP below */ - if (!_validate_inner(code, code+skip-3, groups)) + if (_validate_inner(code, code+skip-3, groups)) FAIL; code += skip-3; /* Check that it ends with a JUMP, and that each JUMP @@ -1801,6 +1802,8 @@ _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) else if (code+skip-1 != target) FAIL; } + if (code != target) + FAIL; } break; @@ -1815,7 +1818,7 @@ _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) FAIL; if (max > SRE_MAXREPEAT) FAIL; - if (!_validate_inner(code, code+skip-4, groups)) + if (_validate_inner(code, code+skip-4, groups)) FAIL; code += skip-4; GET_OP; @@ -1834,7 +1837,7 @@ _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) FAIL; if (max > SRE_MAXREPEAT) FAIL; - if (!_validate_inner(code, code+skip-3, groups)) + if (_validate_inner(code, code+skip-3, groups)) FAIL; code += skip-3; GET_OP; @@ -1886,24 +1889,17 @@ _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) to allow arbitrary jumps anywhere in the code; so we just look for a JUMP opcode preceding our skip target. 
*/ - if (skip >= 3 && skip-3 < (uintptr_t)(end - code) && - code[skip-3] == SRE_OP_JUMP) - { - VTRACE(("both then and else parts present\n")); - if (!_validate_inner(code+1, code+skip-3, groups)) - FAIL; + VTRACE(("then part:\n")); + int rc = _validate_inner(code+1, code+skip-1, groups); + if (rc == 1) { + VTRACE(("else part:\n")); code += skip-2; /* Position after JUMP, at */ GET_SKIP; - if (!_validate_inner(code, code+skip-1, groups)) - FAIL; - code += skip-1; - } - else { - VTRACE(("only a then part present\n")); - if (!_validate_inner(code+1, code+skip-1, groups)) - FAIL; - code += skip-1; + rc = _validate_inner(code, code+skip-1, groups); } + if (rc) + FAIL; + code += skip-1; break; case SRE_OP_ASSERT: @@ -1914,7 +1910,7 @@ _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) if (arg & 0x80000000) FAIL; /* Width too large */ /* Stop 1 before the end; we check the SUCCESS below */ - if (!_validate_inner(code+1, code+skip-2, groups)) + if (_validate_inner(code+1, code+skip-2, groups)) FAIL; code += skip-2; GET_OP; @@ -1922,6 +1918,12 @@ _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) FAIL; break; + case SRE_OP_JUMP: + if (code + 1 != end) + FAIL; + VTRACE(("JUMP: %d\n", __LINE__)); + return 1; + default: FAIL; @@ -1929,7 +1931,7 @@ _validate_inner(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) } VTRACE(("okay\n")); - return 1; + return 0; } static int @@ -1944,7 +1946,7 @@ _validate_outer(SRE_CODE *code, SRE_CODE *end, Py_ssize_t groups) static int _validate(PatternObject *self) { - if (!_validate_outer(self->code, self->code+self->codesize, self->groups)) + if (_validate_outer(self->code, self->code+self->codesize, self->groups)) { PyErr_SetString(PyExc_RuntimeError, "invalid SRE code"); return 0; diff --git a/Modules/_testcapimodule.c b/Modules/_testcapimodule.c index 5c582dda..20c90c44 100644 --- a/Modules/_testcapimodule.c +++ b/Modules/_testcapimodule.c @@ -5155,6 +5155,40 @@ get_mapping_items(PyObject* self, PyObject *obj) return PyMapping_Items(obj); } +static PyObject * +test_mapping_has_key_string(PyObject *self, PyObject *Py_UNUSED(args)) +{ + PyObject *context = PyDict_New(); + PyObject *val = PyLong_FromLong(1); + + // Since this uses `const char*` it is easier to test this in C: + PyDict_SetItemString(context, "a", val); + if (!PyMapping_HasKeyString(context, "a")) { + PyErr_SetString(PyExc_RuntimeError, + "Existing mapping key does not exist"); + return NULL; + } + if (PyMapping_HasKeyString(context, "b")) { + PyErr_SetString(PyExc_RuntimeError, + "Missing mapping key exists"); + return NULL; + } + + Py_DECREF(val); + Py_DECREF(context); + Py_RETURN_NONE; +} + +static PyObject * +mapping_has_key(PyObject* self, PyObject *args) +{ + PyObject *context, *key; + if (!PyArg_ParseTuple(args, "OO", &context, &key)) { + return NULL; + } + return PyLong_FromLong(PyMapping_HasKey(context, key)); +} + static PyObject * test_pythread_tss_key_state(PyObject *self, PyObject *args) @@ -5894,6 +5928,8 @@ static PyMethodDef TestMethods[] = { {"get_mapping_keys", get_mapping_keys, METH_O}, {"get_mapping_values", get_mapping_values, METH_O}, {"get_mapping_items", get_mapping_items, METH_O}, + {"test_mapping_has_key_string", test_mapping_has_key_string, METH_NOARGS}, + {"mapping_has_key", mapping_has_key, METH_VARARGS}, {"test_pythread_tss_key_state", test_pythread_tss_key_state, METH_VARARGS}, {"hamt", new_hamt, METH_NOARGS}, {"bad_get", (PyCFunction)(void(*)(void))bad_get, METH_FASTCALL}, diff --git a/Modules/_winapi.c b/Modules/_winapi.c index 
9b30a900..f6bb07fd 100644 --- a/Modules/_winapi.c +++ b/Modules/_winapi.c @@ -1402,6 +1402,30 @@ _winapi_MapViewOfFile_impl(PyObject *module, HANDLE file_map, return address; } +/*[clinic input] +_winapi.UnmapViewOfFile + + address: LPCVOID + / +[clinic start generated code]*/ + +static PyObject * +_winapi_UnmapViewOfFile_impl(PyObject *module, LPCVOID address) +/*[clinic end generated code: output=4f7e18ac75d19744 input=8c4b6119ad9288a3]*/ +{ + BOOL success; + + Py_BEGIN_ALLOW_THREADS + success = UnmapViewOfFile(address); + Py_END_ALLOW_THREADS + + if (!success) { + return PyErr_SetFromWindowsErr(0); + } + + Py_RETURN_NONE; +} + /*[clinic input] _winapi.OpenFileMapping -> HANDLE @@ -2095,6 +2119,7 @@ static PyMethodDef winapi_functions[] = { _WINAPI_READFILE_METHODDEF _WINAPI_SETNAMEDPIPEHANDLESTATE_METHODDEF _WINAPI_TERMINATEPROCESS_METHODDEF + _WINAPI_UNMAPVIEWOFFILE_METHODDEF _WINAPI_VIRTUALQUERYSIZE_METHODDEF _WINAPI_WAITNAMEDPIPE_METHODDEF _WINAPI_WAITFORMULTIPLEOBJECTS_METHODDEF diff --git a/Modules/_zoneinfo.c b/Modules/_zoneinfo.c index 0388d27c..836d1cfd 100644 --- a/Modules/_zoneinfo.c +++ b/Modules/_zoneinfo.c @@ -2648,8 +2648,9 @@ zoneinfomodule_exec(PyObject *m) goto error; } - Py_INCREF(&PyZoneInfo_ZoneInfoType); - PyModule_AddObject(m, "ZoneInfo", (PyObject *)&PyZoneInfo_ZoneInfoType); + if (PyModule_AddObjectRef(m, "ZoneInfo", (PyObject *)&PyZoneInfo_ZoneInfoType) < 0) { + goto error; + } /* Populate imports */ PyObject *_tzpath_module = PyImport_ImportModule("zoneinfo._tzpath"); diff --git a/Modules/audioop.c b/Modules/audioop.c index 2a5d805c..798e3c46 100644 --- a/Modules/audioop.c +++ b/Modules/audioop.c @@ -1,3 +1,33 @@ +/* The audioop module uses the code base in g777.c file of the Sox project. + * Source: https://web.archive.org/web/19970716121258/http://www.spies.com/Sox/Archive/soxgamma.tar.gz + * Programming the AdLib/Sound Blaster + * FM Music Chips + * Version 2.0 (24 Feb 1992) + * + * Copyright (c) 1991, 1992 by Jeffrey S. Lee + * + * jlee@smylex.uucp + * + * + * + * Warranty and Copyright Policy + * + * This document is provided on an "as-is" basis, and its author makes + * no warranty or representation, express or implied, with respect to + * its quality performance or fitness for a particular purpose. In no + * event will the author of this document be liable for direct, indirect, + * special, incidental, or consequential damages arising out of the use + * or inability to use the information contained within. Use of this + * document is at your own risk. + * + * This file may be used and copied freely so long as the applicable + * copyright notices are retained, and no modifications are made to the + * text of the document. No money shall be charged for its distribution + * beyond reasonable shipping, handling and duplication costs, nor shall + * proprietary changes be made to this document so that it cannot be + * distributed freely. This document may not be included in published + * material or commercial packages without the written consent of its + * author. */ /* audioopmodule - Module to detect peak values in arrays */ @@ -28,20 +58,6 @@ fbound(double val, double minval, double maxval) } -/* Code shamelessly stolen from sox, 12.17.7, g711.c -** (c) Craig Reese, Joe Campbell and Jeff Poskanzer 1989 */ - -/* From g711.c: - * - * December 30, 1994: - * Functions linear2alaw, linear2ulaw have been updated to correctly - * convert unquantized 16 bit values. - * Tables for direct u- to A-law and A- to u-law conversions have been - * corrected. 
- * Borge Lindberg, Center for PersonKommunikation, Aalborg University. - * bli@cpk.auc.dk - * - */ #define BIAS 0x84 /* define the add-in bias for 16 bit samples */ #define CLIP 32635 #define SIGN_BIT (0x80) /* Sign bit for an A-law byte. */ diff --git a/Modules/clinic/_winapi.c.h b/Modules/clinic/_winapi.c.h index f5683571..e3ed148a 100644 --- a/Modules/clinic/_winapi.c.h +++ b/Modules/clinic/_winapi.c.h @@ -731,6 +731,32 @@ exit: return return_value; } +PyDoc_STRVAR(_winapi_UnmapViewOfFile__doc__, +"UnmapViewOfFile($module, address, /)\n" +"--\n" +"\n"); + +#define _WINAPI_UNMAPVIEWOFFILE_METHODDEF \ + {"UnmapViewOfFile", (PyCFunction)_winapi_UnmapViewOfFile, METH_O, _winapi_UnmapViewOfFile__doc__}, + +static PyObject * +_winapi_UnmapViewOfFile_impl(PyObject *module, LPCVOID address); + +static PyObject * +_winapi_UnmapViewOfFile(PyObject *module, PyObject *arg) +{ + PyObject *return_value = NULL; + LPCVOID address; + + if (!PyArg_Parse(arg, "" F_POINTER ":UnmapViewOfFile", &address)) { + goto exit; + } + return_value = _winapi_UnmapViewOfFile_impl(module, address); + +exit: + return return_value; +} + PyDoc_STRVAR(_winapi_OpenFileMapping__doc__, "OpenFileMapping($module, desired_access, inherit_handle, name, /)\n" "--\n" @@ -1216,4 +1242,4 @@ _winapi__mimetypes_read_windows_registry(PyObject *module, PyObject *const *args exit: return return_value; } -/*[clinic end generated code: output=d76d0a5901db2e2a input=a9049054013a1b77]*/ +/*[clinic end generated code: output=acabf8f2b5cc44a1 input=a9049054013a1b77]*/ diff --git a/Modules/clinic/overlapped.c.h b/Modules/clinic/overlapped.c.h index 43e14a97..fab0633a 100644 --- a/Modules/clinic/overlapped.c.h +++ b/Modules/clinic/overlapped.c.h @@ -831,8 +831,8 @@ _overlapped_WSAConnect(PyObject *module, PyObject *const *args, Py_ssize_t nargs HANDLE ConnectSocket; PyObject *AddressObj; - if (!_PyArg_ParseStack(args, nargs, ""F_HANDLE"O:WSAConnect", - &ConnectSocket, &AddressObj)) { + if (!_PyArg_ParseStack(args, nargs, ""F_HANDLE"O!:WSAConnect", + &ConnectSocket, &PyTuple_Type, &AddressObj)) { goto exit; } return_value = _overlapped_WSAConnect_impl(module, ConnectSocket, AddressObj); @@ -864,8 +864,8 @@ _overlapped_Overlapped_WSASendTo(OverlappedObject *self, PyObject *const *args, DWORD flags; PyObject *AddressObj; - if (!_PyArg_ParseStack(args, nargs, ""F_HANDLE"OkO:WSASendTo", - &handle, &bufobj, &flags, &AddressObj)) { + if (!_PyArg_ParseStack(args, nargs, ""F_HANDLE"OkO!:WSASendTo", + &handle, &bufobj, &flags, &PyTuple_Type, &AddressObj)) { goto exit; } return_value = _overlapped_Overlapped_WSASendTo_impl(self, handle, bufobj, flags, AddressObj); @@ -905,4 +905,4 @@ _overlapped_Overlapped_WSARecvFrom(OverlappedObject *self, PyObject *const *args exit: return return_value; } -/*[clinic end generated code: output=d3215a6ca589735a input=a9049054013a1b77]*/ +/*[clinic end generated code: output=e685b61b3da0524d input=a9049054013a1b77]*/ diff --git a/Modules/expat/expat.h b/Modules/expat/expat.h index 2b47ce2a..1c83563c 100644 --- a/Modules/expat/expat.h +++ b/Modules/expat/expat.h @@ -1054,8 +1054,8 @@ XML_SetBillionLaughsAttackProtectionActivationThreshold( See http://semver.org. 
*/ #define XML_MAJOR_VERSION 2 -#define XML_MINOR_VERSION 4 -#define XML_MICRO_VERSION 9 +#define XML_MINOR_VERSION 5 +#define XML_MICRO_VERSION 0 #ifdef __cplusplus } diff --git a/Modules/expat/xmlparse.c b/Modules/expat/xmlparse.c index c0bece51..b6c2eca9 100644 --- a/Modules/expat/xmlparse.c +++ b/Modules/expat/xmlparse.c @@ -1,4 +1,4 @@ -/* 90815a2b2c80c03b2b889fe1d427bb2b9e3282aa065e42784e001db4f23de324 (2.4.9+) +/* 5ab094ffadd6edfc94c3eee53af44a86951f9f1f0933ada3114bbce2bfb02c99 (2.5.0+) __ __ _ ___\ \/ /_ __ __ _| |_ / _ \\ /| '_ \ / _` | __| @@ -35,6 +35,7 @@ Copyright (c) 2021 Dong-hee Na Copyright (c) 2022 Samanta Navarro Copyright (c) 2022 Jeffrey Walton + Copyright (c) 2022 Jann Horn Licensed under the MIT license: Permission is hereby granted, free of charge, to any person obtaining @@ -1068,6 +1069,14 @@ parserCreate(const XML_Char *encodingName, parserInit(parser, encodingName); if (encodingName && ! parser->m_protocolEncodingName) { + if (dtd) { + // We need to stop the upcoming call to XML_ParserFree from happily + // destroying parser->m_dtd because the DTD is shared with the parent + // parser and the only guard that keeps XML_ParserFree from destroying + // parser->m_dtd is parser->m_isParamEntity but it will be set to + // XML_TRUE only later in XML_ExternalEntityParserCreate (or not at all). + parser->m_dtd = NULL; + } XML_ParserFree(parser); return NULL; } @@ -3011,9 +3020,6 @@ doContent(XML_Parser parser, int startTagLevel, const ENCODING *enc, int len; const char *rawName; TAG *tag = parser->m_tagStack; - parser->m_tagStack = tag->parent; - tag->parent = parser->m_freeTagList; - parser->m_freeTagList = tag; rawName = s + enc->minBytesPerChar * 2; len = XmlNameLength(enc, rawName); if (len != tag->rawNameLength @@ -3021,6 +3027,9 @@ doContent(XML_Parser parser, int startTagLevel, const ENCODING *enc, *eventPP = rawName; return XML_ERROR_TAG_MISMATCH; } + parser->m_tagStack = tag->parent; + tag->parent = parser->m_freeTagList; + parser->m_freeTagList = tag; --parser->m_tagLevel; if (parser->m_endElementHandler) { const XML_Char *localPart; @@ -4975,10 +4984,10 @@ doProlog(XML_Parser parser, const ENCODING *enc, const char *s, const char *end, parser->m_handlerArg, parser->m_declElementType->name, parser->m_declAttributeId->name, parser->m_declAttributeType, 0, role == XML_ROLE_REQUIRED_ATTRIBUTE_VALUE); - poolClear(&parser->m_tempPool); handleDefault = XML_FALSE; } } + poolClear(&parser->m_tempPool); break; case XML_ROLE_DEFAULT_ATTRIBUTE_VALUE: case XML_ROLE_FIXED_ATTRIBUTE_VALUE: @@ -5386,7 +5395,7 @@ doProlog(XML_Parser parser, const ENCODING *enc, const char *s, const char *end, * * If 'standalone' is false, the DTD must have no * parameter entities or we wouldn't have passed the outer - * 'if' statement. That measn the only entity in the hash + * 'if' statement. 
That means the only entity in the hash * table is the external subset name "#" which cannot be * given as a parameter entity name in XML syntax, so the * lookup must have returned NULL and we don't even reach @@ -5798,19 +5807,27 @@ internalEntityProcessor(XML_Parser parser, const char *s, const char *end, if (result != XML_ERROR_NONE) return result; - else if (textEnd != next - && parser->m_parsingStatus.parsing == XML_SUSPENDED) { + + if (textEnd != next && parser->m_parsingStatus.parsing == XML_SUSPENDED) { entity->processed = (int)(next - (const char *)entity->textPtr); return result; - } else { + } + #ifdef XML_DTD - entityTrackingOnClose(parser, entity, __LINE__); + entityTrackingOnClose(parser, entity, __LINE__); #endif - entity->open = XML_FALSE; - parser->m_openInternalEntities = openEntity->next; - /* put openEntity back in list of free instances */ - openEntity->next = parser->m_freeInternalEntities; - parser->m_freeInternalEntities = openEntity; + entity->open = XML_FALSE; + parser->m_openInternalEntities = openEntity->next; + /* put openEntity back in list of free instances */ + openEntity->next = parser->m_freeInternalEntities; + parser->m_freeInternalEntities = openEntity; + + // If there are more open entities we want to stop right here and have the + // upcoming call to XML_ResumeParser continue with entity content, or it would + // be ignored altogether. + if (parser->m_openInternalEntities != NULL + && parser->m_parsingStatus.parsing == XML_SUSPENDED) { + return XML_ERROR_NONE; } #ifdef XML_DTD diff --git a/Modules/expat/xmltok_impl.h b/Modules/expat/xmltok_impl.h index c518aada..3469c4ae 100644 --- a/Modules/expat/xmltok_impl.h +++ b/Modules/expat/xmltok_impl.h @@ -45,7 +45,7 @@ enum { BT_LF, /* line feed = "\n" */ BT_GT, /* greater than = ">" */ BT_QUOT, /* quotation character = "\"" */ - BT_APOS, /* aposthrophe = "'" */ + BT_APOS, /* apostrophe = "'" */ BT_EQUALS, /* equal sign = "=" */ BT_QUEST, /* question mark = "?" */ BT_EXCL, /* exclamation mark = "!" 
*/ diff --git a/Modules/faulthandler.c b/Modules/faulthandler.c index 3ae62692..8d2221cf 100644 --- a/Modules/faulthandler.c +++ b/Modules/faulthandler.c @@ -349,14 +349,17 @@ faulthandler_fatal_error(int signum) size_t i; fault_handler_t *handler = NULL; int save_errno = errno; + int found = 0; if (!fatal_error.enabled) return; for (i=0; i < faulthandler_nsignals; i++) { handler = &faulthandler_handlers[i]; - if (handler->signum == signum) + if (handler->signum == signum) { + found = 1; break; + } } if (handler == NULL) { /* faulthandler_nsignals == 0 (unlikely) */ @@ -366,9 +369,18 @@ faulthandler_fatal_error(int signum) /* restore the previous handler */ faulthandler_disable_fatal_handler(handler); - PUTS(fd, "Fatal Python error: "); - PUTS(fd, handler->name); - PUTS(fd, "\n\n"); + if (found) { + PUTS(fd, "Fatal Python error: "); + PUTS(fd, handler->name); + PUTS(fd, "\n\n"); + } + else { + char unknown_signum[23] = {0,}; + snprintf(unknown_signum, 23, "%d", signum); + PUTS(fd, "Fatal Python error from unexpected signum: "); + PUTS(fd, unknown_signum); + PUTS(fd, "\n\n"); + } faulthandler_dump_traceback(fd, fatal_error.all_threads, fatal_error.interp); diff --git a/Modules/itertoolsmodule.c b/Modules/itertoolsmodule.c index f8e2c45a..25c36e77 100644 --- a/Modules/itertoolsmodule.c +++ b/Modules/itertoolsmodule.c @@ -1204,6 +1204,7 @@ cycle_setstate(cycleobject *lz, PyObject *state) PyErr_SetString(PyExc_TypeError, "state is not a tuple"); return NULL; } + // The second item can be 1/0 in old pickles and True/False in new pickles if (!PyArg_ParseTuple(state, "O!i", &PyList_Type, &saved, &firstpass)) { return NULL; } diff --git a/Modules/mathmodule.c b/Modules/mathmodule.c index 4534176a..f31ca832 100644 --- a/Modules/mathmodule.c +++ b/Modules/mathmodule.c @@ -2658,13 +2658,13 @@ math_dist_impl(PyObject *module, PyObject *p, PyObject *q) if (m != n) { PyErr_SetString(PyExc_ValueError, "both points must have the same number of dimensions"); - return NULL; - + goto error_exit; } if (n > NUM_STACK_ELEMS) { diffs = (double *) PyObject_Malloc(n * sizeof(double)); if (diffs == NULL) { - return PyErr_NoMemory(); + PyErr_NoMemory(); + goto error_exit; } } for (i=0 ; i # include +# include // UNLEN +# include "osdefs.h" // SEP +# define HAVE_SYMLINK #endif #ifdef __VXWORKS__ @@ -430,18 +433,7 @@ extern char *ctermid_r(char *); # ifdef HAVE_PROCESS_H # include # endif -# ifndef IO_REPARSE_TAG_SYMLINK -# define IO_REPARSE_TAG_SYMLINK (0xA000000CL) -# endif -# ifndef IO_REPARSE_TAG_MOUNT_POINT -# define IO_REPARSE_TAG_MOUNT_POINT (0xA0000003L) -# endif -# include "osdefs.h" // SEP # include -# include -# include // ShellExecute() -# include // UNLEN -# define HAVE_SYMLINK #endif /* _MSC_VER */ #ifndef MAXPATHLEN diff --git a/Modules/readline.c b/Modules/readline.c index c79d22f8..1d506720 100644 --- a/Modules/readline.c +++ b/Modules/readline.c @@ -1256,9 +1256,9 @@ setup_readline(readlinestate *mod_state) rl_attempted_completion_function = flex_complete; /* Set Python word break characters */ completer_word_break_characters = - rl_completer_word_break_characters = strdup(" \t\n`~!@#$%^&*()-=+[{]}\\|;:'\",<>/?"); /* All nonalphanums except '.' 
*/ + rl_completer_word_break_characters = completer_word_break_characters; mod_state->begidx = PyLong_FromLong(0L); mod_state->endidx = PyLong_FromLong(0L); diff --git a/Modules/syslogmodule.c b/Modules/syslogmodule.c index bfa2ca2b..8adbe214 100644 --- a/Modules/syslogmodule.c +++ b/Modules/syslogmodule.c @@ -207,9 +207,14 @@ syslog_syslog(PyObject * self, PyObject * args) */ PyObject *ident = S_ident_o; Py_XINCREF(ident); +#ifdef __APPLE__ + // gh-98178: On macOS, libc syslog() is not thread-safe + syslog(priority, "%s", message); +#else Py_BEGIN_ALLOW_THREADS; syslog(priority, "%s", message); Py_END_ALLOW_THREADS; +#endif Py_XDECREF(ident); Py_RETURN_NONE; } diff --git a/Modules/termios.c b/Modules/termios.c index fdfe589e..0f238cb5 100644 --- a/Modules/termios.c +++ b/Modules/termios.c @@ -82,7 +82,12 @@ termios_tcgetattr_impl(PyObject *module, int fd) { termiosmodulestate *state = PyModule_GetState(module); struct termios mode; - if (tcgetattr(fd, &mode) == -1) { + int r; + + Py_BEGIN_ALLOW_THREADS + r = tcgetattr(fd, &mode); + Py_END_ALLOW_THREADS + if (r == -1) { return PyErr_SetFromErrno(state->TermiosError); } @@ -169,7 +174,12 @@ termios_tcsetattr_impl(PyObject *module, int fd, int when, PyObject *term) /* Get the old mode, in case there are any hidden fields... */ termiosmodulestate *state = PyModule_GetState(module); struct termios mode; - if (tcgetattr(fd, &mode) == -1) { + int r; + + Py_BEGIN_ALLOW_THREADS + r = tcgetattr(fd, &mode); + Py_END_ALLOW_THREADS + if (r == -1) { return PyErr_SetFromErrno(state->TermiosError); } @@ -211,7 +221,12 @@ termios_tcsetattr_impl(PyObject *module, int fd, int when, PyObject *term) return PyErr_SetFromErrno(state->TermiosError); if (cfsetospeed(&mode, (speed_t) ospeed) == -1) return PyErr_SetFromErrno(state->TermiosError); - if (tcsetattr(fd, when, &mode) == -1) + + Py_BEGIN_ALLOW_THREADS + r = tcsetattr(fd, when, &mode); + Py_END_ALLOW_THREADS + + if (r == -1) return PyErr_SetFromErrno(state->TermiosError); Py_RETURN_NONE; @@ -235,7 +250,13 @@ termios_tcsendbreak_impl(PyObject *module, int fd, int duration) /*[clinic end generated code: output=5945f589b5d3ac66 input=dc2f32417691f8ed]*/ { termiosmodulestate *state = PyModule_GetState(module); - if (tcsendbreak(fd, duration) == -1) { + int r; + + Py_BEGIN_ALLOW_THREADS + r = tcsendbreak(fd, duration); + Py_END_ALLOW_THREADS + + if (r == -1) { return PyErr_SetFromErrno(state->TermiosError); } @@ -256,7 +277,13 @@ termios_tcdrain_impl(PyObject *module, int fd) /*[clinic end generated code: output=5fd86944c6255955 input=c99241b140b32447]*/ { termiosmodulestate *state = PyModule_GetState(module); - if (tcdrain(fd) == -1) { + int r; + + Py_BEGIN_ALLOW_THREADS + r = tcdrain(fd); + Py_END_ALLOW_THREADS + + if (r == -1) { return PyErr_SetFromErrno(state->TermiosError); } @@ -282,7 +309,13 @@ termios_tcflush_impl(PyObject *module, int fd, int queue) /*[clinic end generated code: output=2424f80312ec2f21 input=0f7d08122ddc07b5]*/ { termiosmodulestate *state = PyModule_GetState(module); - if (tcflush(fd, queue) == -1) { + int r; + + Py_BEGIN_ALLOW_THREADS + r = tcflush(fd, queue); + Py_END_ALLOW_THREADS + + if (r == -1) { return PyErr_SetFromErrno(state->TermiosError); } @@ -308,7 +341,13 @@ termios_tcflow_impl(PyObject *module, int fd, int action) /*[clinic end generated code: output=afd10928e6ea66eb input=c6aff0640b6efd9c]*/ { termiosmodulestate *state = PyModule_GetState(module); - if (tcflow(fd, action) == -1) { + int r; + + Py_BEGIN_ALLOW_THREADS + r = tcflow(fd, action); + Py_END_ALLOW_THREADS + + 
if (r == -1) { return PyErr_SetFromErrno(state->TermiosError); } diff --git a/Objects/genericaliasobject.c b/Objects/genericaliasobject.c index f52bc974..9edb6d23 100644 --- a/Objects/genericaliasobject.c +++ b/Objects/genericaliasobject.c @@ -209,6 +209,9 @@ _Py_make_parameters(PyObject *args) Py_ssize_t iparam = 0; for (Py_ssize_t iarg = 0; iarg < nargs; iarg++) { PyObject *t = PyTuple_GET_ITEM(args, iarg); + if (PyType_Check(t)) { + continue; + } int typevar = is_typevar(t); if (typevar < 0) { Py_DECREF(parameters); @@ -260,6 +263,11 @@ _Py_make_parameters(PyObject *args) static PyObject * subs_tvars(PyObject *obj, PyObject *params, PyObject **argitems) { + if (PyType_Check(obj)) { + Py_INCREF(obj); + return obj; + } + _Py_IDENTIFIER(__parameters__); PyObject *subparams; if (_PyObject_LookupAttrId(obj, &PyId___parameters__, &subparams) < 0) { diff --git a/Objects/object.c b/Objects/object.c index 6d80d6df..0bef2e9d 100644 --- a/Objects/object.c +++ b/Objects/object.c @@ -1091,8 +1091,9 @@ _PyObject_GetDictPtr(PyObject *obj) tsize = -tsize; } size_t size = _PyObject_VAR_SIZE(tp, tsize); + assert(size <= (size_t)PY_SSIZE_T_MAX); + dictoffset += (Py_ssize_t)size; - dictoffset += (long)size; _PyObject_ASSERT(obj, dictoffset > 0); _PyObject_ASSERT(obj, dictoffset % SIZEOF_VOID_P == 0); } diff --git a/Objects/obmalloc.c b/Objects/obmalloc.c index ed8dd5a5..224206b9 100644 --- a/Objects/obmalloc.c +++ b/Objects/obmalloc.c @@ -2962,7 +2962,6 @@ _PyObject_DebugMallocStats(FILE *out) * will be living in full pools -- would be a shame to miss them. */ for (i = 0; i < maxarenas; ++i) { - uint j; uintptr_t base = arenas[i].address; /* Skip arenas which are not allocated. */ @@ -2981,8 +2980,7 @@ _PyObject_DebugMallocStats(FILE *out) /* visit every pool in the arena */ assert(base <= (uintptr_t) arenas[i].pool_address); - for (j = 0; base < (uintptr_t) arenas[i].pool_address; - ++j, base += POOL_SIZE) { + for (; base < (uintptr_t) arenas[i].pool_address; base += POOL_SIZE) { poolp p = (poolp)base; const uint sz = p->szidx; uint freeblocks; diff --git a/PC/python_uwp.cpp b/PC/python_uwp.cpp index 88369e8f..2beea60e 100644 --- a/PC/python_uwp.cpp +++ b/PC/python_uwp.cpp @@ -10,6 +10,7 @@ #include +#include #include #include @@ -28,37 +29,49 @@ const wchar_t *PROGNAME = L"python.exe"; #endif static std::wstring -get_user_base() +get_package_family() { try { - const auto appData = winrt::Windows::Storage::ApplicationData::Current(); - if (appData) { - const auto localCache = appData.LocalCacheFolder(); - if (localCache) { - auto path = localCache.Path(); - if (!path.empty()) { - return std::wstring(path) + L"\\local-packages"; - } - } + UINT32 nameLength = MAX_PATH; + std::wstring name; + name.resize(nameLength); + DWORD rc = GetCurrentPackageFamilyName(&nameLength, name.data()); + if (rc == ERROR_SUCCESS) { + name.resize(nameLength - 1); + return name; } - } catch (...) { + else if (rc != ERROR_INSUFFICIENT_BUFFER) { + throw rc; + } + name.resize(nameLength); + rc = GetCurrentPackageFamilyName(&nameLength, name.data()); + if (rc != ERROR_SUCCESS) { + throw rc; + } + name.resize(nameLength - 1); + return name; } + catch (...) 
{ + } + return std::wstring(); } static std::wstring -get_package_family() +get_user_base() { try { - const auto package = winrt::Windows::ApplicationModel::Package::Current(); - if (package) { - const auto id = package.Id(); - if (id) { - return std::wstring(id.FamilyName()); + const auto appData = winrt::Windows::Storage::ApplicationData::Current(); + if (appData) { + const auto localCache = appData.LocalCacheFolder(); + if (localCache) { + std::wstring path { localCache.Path().c_str() }; + if (!path.empty()) { + return path + L"\\local-packages"; + } } } - } - catch (...) { + } catch (...) { } return std::wstring(); @@ -68,13 +81,24 @@ static std::wstring get_package_home() { try { - const auto package = winrt::Windows::ApplicationModel::Package::Current(); - if (package) { - const auto path = package.InstalledLocation(); - if (path) { - return std::wstring(path.Path()); - } + UINT32 pathLength = MAX_PATH; + std::wstring path; + path.resize(pathLength); + DWORD rc = GetCurrentPackagePath(&pathLength, path.data()); + if (rc == ERROR_SUCCESS) { + path.resize(pathLength - 1); + return path; + } + else if (rc != ERROR_INSUFFICIENT_BUFFER) { + throw rc; + } + path.resize(pathLength); + rc = GetCurrentPackagePath(&pathLength, path.data()); + if (rc != ERROR_SUCCESS) { + throw rc; } + path.resize(pathLength - 1); + return path; } catch (...) { } diff --git a/PCbuild/get_externals.bat b/PCbuild/get_externals.bat index c7191145..e53f8ab8 100644 --- a/PCbuild/get_externals.bat +++ b/PCbuild/get_externals.bat @@ -54,12 +54,12 @@ set libraries= set libraries=%libraries% bzip2-1.0.8 if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi-3.3.0 if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1q -set libraries=%libraries% sqlite-3.37.2.0 +set libraries=%libraries% sqlite-3.39.4.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.12.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tk-8.6.12.0 if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tix-8.4.3.6 set libraries=%libraries% xz-5.2.5 -set libraries=%libraries% zlib-1.2.12 +set libraries=%libraries% zlib-1.2.13 for %%e in (%libraries%) do ( if exist "%EXTERNALS_DIR%\%%e" ( diff --git a/PCbuild/python.props b/PCbuild/python.props index bfb09888..fad98f78 100644 --- a/PCbuild/python.props +++ b/PCbuild/python.props @@ -57,7 +57,7 @@ $(EXTERNALS_DIR) $([System.IO.Path]::GetFullPath(`$(PySourcePath)externals`)) $(ExternalsDir)\ - $(ExternalsDir)sqlite-3.37.2.0\ + $(ExternalsDir)sqlite-3.39.4.0\ $(ExternalsDir)bzip2-1.0.8\ $(ExternalsDir)xz-5.2.5\ $(ExternalsDir)libffi-3.3.0\ @@ -67,7 +67,7 @@ $(ExternalsDir)openssl-bin-1.1.1q\$(ArchName)\ $(opensslOutDir)include $(ExternalsDir)\nasm-2.11.06\ - $(ExternalsDir)\zlib-1.2.12\ + $(ExternalsDir)\zlib-1.2.13\ _d diff --git a/PCbuild/readme.txt b/PCbuild/readme.txt index d819637c..804acd63 100644 --- a/PCbuild/readme.txt +++ b/PCbuild/readme.txt @@ -186,7 +186,7 @@ _ssl again when building. 
_sqlite3 - Wraps SQLite 3.37.2, which is itself built by sqlite3.vcxproj + Wraps SQLite 3.39.4, which is itself built by sqlite3.vcxproj Homepage: https://www.sqlite.org/ _tkinter diff --git a/Parser/tokenizer.c b/Parser/tokenizer.c index 0bbf1b17..13b666ca 100644 --- a/Parser/tokenizer.c +++ b/Parser/tokenizer.c @@ -419,7 +419,11 @@ tok_readline_recode(struct tok_state *tok) { error_ret(tok); goto error; } - if (!tok_reserve_buf(tok, buflen + 1)) { + // Make room for the null terminator *and* potentially + // an extra newline character that we may need to artificially + // add. + size_t buffer_size = buflen + 2; + if (!tok_reserve_buf(tok, buffer_size)) { goto error; } memcpy(tok->inp, buf, buflen); @@ -973,6 +977,7 @@ tok_underflow_file(struct tok_state *tok) { return 0; } if (tok->inp[-1] != '\n') { + assert(tok->inp + 1 < tok->end); /* Last line does not end in \n, fake one */ *tok->inp++ = '\n'; *tok->inp = '\0'; diff --git a/Python/import.c b/Python/import.c index 58d11170..18ef26f5 100644 --- a/Python/import.c +++ b/Python/import.c @@ -1006,7 +1006,8 @@ create_builtin(PyThreadState *tstate, PyObject *name, PyObject *spec) if (_PyUnicode_EqualToASCIIString(name, p->name)) { if (p->initfunc == NULL) { /* Cannot re-init internal module ("sys" or "builtins") */ - return PyImport_AddModuleObject(name); + mod = PyImport_AddModuleObject(name); + return Py_XNewRef(mod); } mod = (*p->initfunc)(); diff --git a/Python/sysmodule.c b/Python/sysmodule.c index 1d5a06a6..2b5c9d3e 100644 --- a/Python/sysmodule.c +++ b/Python/sysmodule.c @@ -462,6 +462,8 @@ sys_addaudithook_impl(PyObject *module, PyObject *hook) if (interp->audit_hooks == NULL) { return NULL; } + /* Avoid having our list of hooks show up in the GC module */ + PyObject_GC_UnTrack(interp->audit_hooks); } if (PyList_Append(interp->audit_hooks, hook) < 0) { diff --git a/README.rst b/README.rst index 986940db..4e40478c 100644 --- a/README.rst +++ b/README.rst @@ -1,4 +1,4 @@ -This is Python version 3.10.8 +This is Python version 3.10.9 ============================= .. image:: https://travis-ci.com/python/cpython.svg?branch=master diff --git a/Tools/i18n/pygettext.py b/Tools/i18n/pygettext.py index 6f889adf..7ada7910 100755 --- a/Tools/i18n/pygettext.py +++ b/Tools/i18n/pygettext.py @@ -335,9 +335,10 @@ class TokenEater: if ttype == tokenize.STRING and is_literal_string(tstring): self.__addentry(safe_eval(tstring), lineno, isdocstring=1) self.__freshmodule = 0 - elif ttype not in (tokenize.COMMENT, tokenize.NL): - self.__freshmodule = 0 - return + return + if ttype in (tokenize.COMMENT, tokenize.NL, tokenize.ENCODING): + return + self.__freshmodule = 0 # class or func/method docstring? if ttype == tokenize.NAME and tstring in ('class', 'def'): self.__state = self.__suiteseen diff --git a/configure b/configure index bad61996..4b71c4e0 100755 --- a/configure +++ b/configure @@ -5929,7 +5929,7 @@ if test "x$enable_profiling" = xyes; then CC="$CC -pg" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ -int main() { return 0; } +int main(void) { return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : @@ -7752,7 +7752,7 @@ else void* routine(void* p){return NULL;} -int main(){ +int main(void){ pthread_t p; if(pthread_create(&p,NULL,routine,NULL)!=0) return 1; @@ -7808,7 +7808,7 @@ else void* routine(void* p){return NULL;} -int main(){ +int main(void){ pthread_t p; if(pthread_create(&p,NULL,routine,NULL)!=0) return 1; @@ -7858,7 +7858,7 @@ else void* routine(void* p){return NULL;} -int main(){ +int main(void){ pthread_t p; if(pthread_create(&p,NULL,routine,NULL)!=0) return 1; @@ -7908,7 +7908,7 @@ else void* routine(void* p){return NULL;} -int main(){ +int main(void){ pthread_t p; if(pthread_create(&p,NULL,routine,NULL)!=0) return 1; @@ -10447,7 +10447,7 @@ else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ -int main() +int main(void) { char s[16]; int i, *p1, *p2; @@ -11024,6 +11024,7 @@ $as_echo_n "checking for pthread_create in -lpthread... " >&6; } /* end confdefs.h. */ #include +#include #include void * start_routine (void *arg) { exit (0); } @@ -11325,7 +11326,7 @@ else void *foo(void *parm) { return NULL; } - main() { + int main(void) { pthread_attr_t attr; pthread_t id; if (pthread_attr_init(&attr)) return (-1); @@ -12687,7 +12688,7 @@ else #include #include -int main(int argc, char*argv[]) +int main(int argc, char *argv[]) { if(chflags(argv[0], 0) != 0) return 1; @@ -12736,7 +12737,7 @@ else #include #include -int main(int argc, char*argv[]) +int main(int argc, char *argv[]) { if(lchflags(argv[0], 0) != 0) return 1; @@ -13653,7 +13654,7 @@ else #include #include -int main() +int main(void) { int passive, gaierr, inet4 = 0, inet6 = 0; struct addrinfo hints, *ai, *aitop; @@ -14880,7 +14881,7 @@ else #include #include -int main() { +int main(void) { volatile double x, y, z; /* 1./(1-2**-53) -> 1+2**-52 (correct), 1.0 (double rounding) */ x = 0.99999999999999989; /* 1-2**-53 */ @@ -15730,7 +15731,7 @@ else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ -int main() +int main(void) { return (((-1)>>3 == -1) ? 0 : 1); } @@ -16178,7 +16179,7 @@ else #include #include -int main() +int main(void) { int val1 = nice(1); if (val1 != -1 && val1 == nice(2)) @@ -16221,7 +16222,7 @@ else #include #include -int main() +int main(void) { struct pollfd poll_struct = { 42, POLLIN|POLLPRI|POLLOUT, 0 }; int poll_test; @@ -16279,7 +16280,7 @@ else extern char *tzname[]; #endif -int main() +int main(void) { /* Note that we need to ensure that not only does tzset(3) do 'something' with localtime, but it works as documented @@ -17040,9 +17041,10 @@ else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ +#include #include -#include -int main() { +#include +int main(void) { size_t len = -1; const char *str = "text"; len = mbstowcs(NULL, str, 0); @@ -17219,7 +17221,7 @@ else #include #include void foo(void *p, void *q) { memmove(p, q, 19); } -int main() { +int main(void) { char a[32] = "123456789000000000"; foo(&a[9], a); if (strcmp(a, "123456789123456789000000000") != 0) @@ -17274,7 +17276,7 @@ else ); return r; } - int main() { + int main(void) { int p = 8; if ((foo(&p) ? 
: p) != 6) return 1; @@ -17313,7 +17315,7 @@ cat confdefs.h - <<_ACEOF >conftest.$ac_ext #include atomic_int int_var; atomic_uintptr_t uintptr_var; - int main() { + int main(void) { atomic_store_explicit(&int_var, 5, memory_order_relaxed); atomic_store_explicit(&uintptr_var, 0, memory_order_relaxed); int loaded_value = atomic_load_explicit(&int_var, memory_order_seq_cst); @@ -17347,7 +17349,7 @@ cat confdefs.h - <<_ACEOF >conftest.$ac_ext int val; - int main() { + int main(void) { __atomic_store_n(&val, 1, __ATOMIC_SEQ_CST); (void)__atomic_load_n(&val, __ATOMIC_SEQ_CST); return 0; @@ -17406,7 +17408,7 @@ cat confdefs.h - <<_ACEOF >conftest.$ac_ext #include - int main() { + int main(void) { struct dirent entry; return entry.d_type == DT_UNKNOWN; } @@ -17436,11 +17438,12 @@ cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ + #include #include #include #include - int main() { + int main(void) { char buffer[1]; const size_t buflen = sizeof(buffer); const int flags = GRND_NONBLOCK; @@ -17475,9 +17478,10 @@ cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ + #include #include - int main() { + int main(void) { char buffer[1]; const size_t buflen = sizeof(buffer); const int flags = 0; diff --git a/configure.ac b/configure.ac index cc69015b..ac3be385 100644 --- a/configure.ac +++ b/configure.ac @@ -1099,7 +1099,7 @@ AC_ARG_ENABLE(profiling, if test "x$enable_profiling" = xyes; then ac_save_cc="$CC" CC="$CC -pg" - AC_LINK_IFELSE([AC_LANG_SOURCE([[int main() { return 0; }]])], + AC_LINK_IFELSE([AC_LANG_SOURCE([[int main(void) { return 0; }]])], [], [enable_profiling=no]) CC="$ac_save_cc" @@ -2057,7 +2057,7 @@ AC_CACHE_VAL(ac_cv_pthread_is_default, void* routine(void* p){return NULL;} -int main(){ +int main(void){ pthread_t p; if(pthread_create(&p,NULL,routine,NULL)!=0) return 1; @@ -2092,7 +2092,7 @@ AC_RUN_IFELSE([AC_LANG_SOURCE([[ void* routine(void* p){return NULL;} -int main(){ +int main(void){ pthread_t p; if(pthread_create(&p,NULL,routine,NULL)!=0) return 1; @@ -2121,7 +2121,7 @@ AC_RUN_IFELSE([AC_LANG_SOURCE([[ void* routine(void* p){return NULL;} -int main(){ +int main(void){ pthread_t p; if(pthread_create(&p,NULL,routine,NULL)!=0) return 1; @@ -2150,7 +2150,7 @@ AC_RUN_IFELSE([AC_LANG_SOURCE([[ void* routine(void* p){return NULL;} -int main(){ +int main(void){ pthread_t p; if(pthread_create(&p,NULL,routine,NULL)!=0) return 1; @@ -2991,7 +2991,7 @@ esac AC_MSG_CHECKING(aligned memory access is required) AC_CACHE_VAL(ac_cv_aligned_required, [AC_RUN_IFELSE([AC_LANG_SOURCE([[ -int main() +int main(void) { char s[16]; int i, *p1, *p2; @@ -3279,6 +3279,7 @@ yes AC_MSG_CHECKING([for pthread_create in -lpthread]) AC_LINK_IFELSE([AC_LANG_PROGRAM([[ #include +#include #include void * start_routine (void *arg) { exit (0); }]], [[ @@ -3344,7 +3345,7 @@ if test "$posix_threads" = "yes"; then void *foo(void *parm) { return NULL; } - main() { + int main(void) { pthread_attr_t attr; pthread_t id; if (pthread_attr_init(&attr)) return (-1); @@ -3912,7 +3913,7 @@ AC_CACHE_CHECK([for chflags], [ac_cv_have_chflags], [dnl AC_RUN_IFELSE([AC_LANG_SOURCE([[ #include #include -int main(int argc, char*argv[]) +int main(int argc, char *argv[]) { if(chflags(argv[0], 0) != 0) return 1; @@ -3934,7 +3935,7 @@ AC_CACHE_CHECK([for lchflags], [ac_cv_have_lchflags], [dnl AC_RUN_IFELSE([AC_LANG_SOURCE([[ #include #include -int main(int argc, char*argv[]) +int main(int argc, char *argv[]) { if(lchflags(argv[0], 0) != 0) return 1; @@ -4135,7 +4136,7 @@ then #include #include -int main() +int 
main(void) { int passive, gaierr, inet4 = 0, inet6 = 0; struct addrinfo hints, *ai, *aitop; @@ -4593,7 +4594,7 @@ CC="$CC $BASECFLAGS" AC_RUN_IFELSE([AC_LANG_SOURCE([[ #include #include -int main() { +int main(void) { volatile double x, y, z; /* 1./(1-2**-53) -> 1+2**-52 (correct), 1.0 (double rounding) */ x = 0.99999999999999989; /* 1-2**-53 */ @@ -4910,7 +4911,7 @@ fi], AC_MSG_CHECKING(whether right shift extends the sign bit) AC_CACHE_VAL(ac_cv_rshift_extends_sign, [ AC_RUN_IFELSE([AC_LANG_SOURCE([[ -int main() +int main(void) { return (((-1)>>3 == -1) ? 0 : 1); } @@ -5069,7 +5070,7 @@ AC_CACHE_VAL(ac_cv_broken_nice, [ AC_RUN_IFELSE([AC_LANG_SOURCE([[ #include #include -int main() +int main(void) { int val1 = nice(1); if (val1 != -1 && val1 == nice(2)) @@ -5093,7 +5094,7 @@ AC_RUN_IFELSE([AC_LANG_SOURCE([[ #include #include -int main() +int main(void) { struct pollfd poll_struct = { 42, POLLIN|POLLPRI|POLLOUT, 0 }; int poll_test; @@ -5131,7 +5132,7 @@ AC_RUN_IFELSE([AC_LANG_SOURCE([[ extern char *tzname[]; #endif -int main() +int main(void) { /* Note that we need to ensure that not only does tzset(3) do 'something' with localtime, but it works as documented @@ -5487,9 +5488,10 @@ AC_CHECK_TYPE(socklen_t,, AC_MSG_CHECKING(for broken mbstowcs) AC_CACHE_VAL(ac_cv_broken_mbstowcs, AC_RUN_IFELSE([AC_LANG_SOURCE([[ +#include #include -#include -int main() { +#include +int main(void) { size_t len = -1; const char *str = "text"; len = mbstowcs(NULL, str, 0); @@ -5600,7 +5602,7 @@ AC_RUN_IFELSE([AC_LANG_SOURCE([[ #include #include void foo(void *p, void *q) { memmove(p, q, 19); } -int main() { +int main(void) { char a[32] = "123456789000000000"; foo(&a[9], a); if (strcmp(a, "123456789123456789000000000") != 0) @@ -5641,7 +5643,7 @@ if test "$have_gcc_asm_for_x87" = yes; then ); return r; } - int main() { + int main(void) { int p = 8; if ((foo(&p) ? : p) != 6) return 1; @@ -5669,7 +5671,7 @@ AC_LINK_IFELSE( #include atomic_int int_var; atomic_uintptr_t uintptr_var; - int main() { + int main(void) { atomic_store_explicit(&int_var, 5, memory_order_relaxed); atomic_store_explicit(&uintptr_var, 0, memory_order_relaxed); int loaded_value = atomic_load_explicit(&int_var, memory_order_seq_cst); @@ -5691,7 +5693,7 @@ AC_LINK_IFELSE( [ AC_LANG_SOURCE([[ int val; - int main() { + int main(void) { __atomic_store_n(&val, 1, __ATOMIC_SEQ_CST); (void)__atomic_load_n(&val, __ATOMIC_SEQ_CST); return 0; @@ -5727,7 +5729,7 @@ AC_LINK_IFELSE( AC_LANG_SOURCE([[ #include - int main() { + int main(void) { struct dirent entry; return entry.d_type == DT_UNKNOWN; } @@ -5745,11 +5747,12 @@ AC_MSG_CHECKING(for the Linux getrandom() syscall) AC_LINK_IFELSE( [ AC_LANG_SOURCE([[ + #include #include #include #include - int main() { + int main(void) { char buffer[1]; const size_t buflen = sizeof(buffer); const int flags = GRND_NONBLOCK; @@ -5772,9 +5775,10 @@ AC_MSG_CHECKING(for the getrandom() function) AC_LINK_IFELSE( [ AC_LANG_SOURCE([[ + #include #include - int main() { + int main(void) { char buffer[1]; const size_t buflen = sizeof(buffer); const int flags = 0;
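/* Editor's note: an illustrative sketch, not part of the configure probe
 * it sits next to.  It shows the getrandom() API that these configure
 * checks are testing for (declared in <sys/random.h> on glibc).  The
 * buffer size and the error handling are assumptions for the example;
 * the remark about falling back to /dev/urandom describes CPython's
 * runtime behaviour, not anything configure itself verifies. */
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/random.h>

int main(void)
{
    unsigned char buffer[16];
    ssize_t n = getrandom(buffer, sizeof(buffer), GRND_NONBLOCK);
    if (n < 0) {
        /* ENOSYS: kernel predates getrandom(); EAGAIN: entropy pool not
         * yet initialized.  A real program would fall back to reading
         * /dev/urandom in these cases, which is what CPython does. */
        perror("getrandom");
        return 1;
    }
    printf("got %zd random bytes\n", n);
    return 0;
}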