.. c:function:: PyObject* PyObject_Init(PyObject *op, PyTypeObject *type)
- Initialize a newly-allocated object *op* with its type and initial
+ Initialize a newly allocated object *op* with its type and initial
reference. Returns the initialized object. If *type* indicates that the
object participates in the cyclic garbage detector, it is added to the
detector's set of observed objects. Other fields of the object are not
However, the function ``PyVectorcall_NARGS`` should be used to allow
for future extensions.
- This function is not part of the :ref:`limited API <stable>`.
-
.. versionadded:: 3.8
.. c:function:: vectorcallfunc PyVectorcall_Function(PyObject *op)
This is mostly useful to check whether or not *op* supports vectorcall,
which can be done by checking ``PyVectorcall_Function(op) != NULL``.
- This function is not part of the :ref:`limited API <stable>`.
-
.. versionadded:: 3.8
.. c:function:: PyObject* PyVectorcall_Call(PyObject *callable, PyObject *tuple, PyObject *dict)
It does not check the :const:`Py_TPFLAGS_HAVE_VECTORCALL` flag
and it does not fall back to ``tp_call``.
- This function is not part of the :ref:`limited API <stable>`.
-
.. versionadded:: 3.8
Return the result of the call on success, or raise an exception and return
*NULL* on failure.
- This function is not part of the :ref:`limited API <stable>`.
-
.. versionadded:: 3.9
Return the result of the call on success, or raise an exception and return
*NULL* on failure.
- This function is not part of the :ref:`limited API <stable>`.
-
.. versionadded:: 3.9
Return the result of the call on success, or raise an exception and return
*NULL* on failure.
- This function is not part of the :ref:`limited API <stable>`.
-
.. versionadded:: 3.9
Return the result of the call on success, or raise an exception and return
*NULL* on failure.
- This function is not part of the :ref:`limited API <stable>`.
-
.. versionadded:: 3.9
.. c:function:: PyObject* PyObject_VectorcallDict(PyObject *callable, PyObject *const *args, size_t nargsf, PyObject *kwdict)
already has a dictionary ready to use for the keyword arguments,
but not a tuple for the positional arguments.
- This function is not part of the :ref:`limited API <stable>`.
-
.. versionadded:: 3.9
.. c:function:: PyObject* PyObject_VectorcallMethod(PyObject *name, PyObject *const *args, size_t nargsf, PyObject *kwnames)
Return the result of the call on success, or raise an exception and return
*NULL* on failure.
- This function is not part of the :ref:`limited API <stable>`.
-
.. versionadded:: 3.9
resulting number of microseconds and seconds lie in the ranges documented for
:class:`datetime.timedelta` objects.
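From Python, the normalization into those documented ranges can be observed
directly (a quick illustrative check, not part of the C API):

```python
import datetime

# timedelta normalizes its components so that
# 0 <= microseconds < 1000000 and 0 <= seconds < 24*3600;
# any overflow is carried into days.
d = datetime.timedelta(seconds=90061, microseconds=1_500_000)
print(d.days, d.seconds, d.microseconds)  # 1 3662 500000
```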
+
.. c:function:: PyObject* PyTimeZone_FromOffset(PyDateTime_DeltaType* offset)
Return a :class:`datetime.timezone` object with an unnamed fixed offset
.. versionadded:: 3.7
+
.. c:function:: PyObject* PyTimeZone_FromOffsetAndName(PyDateTime_DeltaType* offset, PyUnicode* name)
Return a :class:`datetime.timezone` object with a fixed offset represented
Return the microsecond, as an int from 0 through 999999.
+
+.. c:function:: int PyDateTime_DATE_GET_FOLD(PyDateTime_DateTime *o)
+
+ Return the fold, as an int from 0 through 1.
+
+ .. versionadded:: 3.6
+
+
.. c:function:: PyObject* PyDateTime_DATE_GET_TZINFO(PyDateTime_DateTime *o)
Return the tzinfo (which may be ``None``).
.. versionadded:: 3.10
+
Macros to extract fields from time objects. The argument must be an instance of
:c:data:`PyDateTime_Time`, including subclasses. The argument must not be ``NULL``,
and the type is not checked:
Return the microsecond, as an int from 0 through 999999.
+
+.. c:function:: int PyDateTime_TIME_GET_FOLD(PyDateTime_Time *o)
+
+ Return the fold, as an int from 0 through 1.
+
+ .. versionadded:: 3.6
+
+
.. c:function:: PyObject* PyDateTime_TIME_GET_TZINFO(PyDateTime_Time *o)
Return the tzinfo (which may be ``None``).
* ``"utf-8"`` if :c:member:`PyPreConfig.utf8_mode` is non-zero.
* ``"ascii"`` if Python detects that ``nl_langinfo(CODESET)`` announces
- the ASCII encoding (or Roman8 encoding on HP-UX), whereas the
- ``mbstowcs()`` function decodes from a different encoding (usually
- Latin1).
+ the ASCII encoding, whereas the ``mbstowcs()`` function
+ decodes from a different encoding (usually Latin1).
* ``"utf-8"`` if ``nl_langinfo(CODESET)`` returns an empty string.
* Otherwise, use the :term:`locale encoding`:
``nl_langinfo(CODESET)`` result.
:file:`Misc/SpecialBuilds.txt` in the Python source distribution. Builds are
available that support tracing of reference counts, debugging the memory
allocator, or low-level profiling of the main interpreter loop. Only the most
-frequently-used builds will be described in the remainder of this section.
+frequently used builds will be described in the remainder of this section.
Compiling the interpreter with the :c:macro:`Py_DEBUG` macro defined produces
what is generally meant by :ref:`a debug build of Python <debug-build>`.
with new object types written in C. Another reason for using the Python heap is
the desire to *inform* the Python memory manager about the memory needs of the
extension module. Even when the requested memory is used exclusively for
-internal, highly-specific purposes, delegating all memory requests to the Python
+internal, highly specific purposes, delegating all memory requests to the Python
memory manager causes the interpreter to have a more accurate image of its
memory footprint as a whole. Consequently, under certain circumstances, the
Python memory manager may or may not trigger appropriate actions, like garbage
or possibly ``NULL`` if there are no keywords. The values of the keyword
arguments are stored in the *args* array, after the positional arguments.
- This is not part of the :ref:`limited API <stable>`.
-
.. versionadded:: 3.7
``PyObject_HEAD_INIT`` macro. For :ref:`statically allocated objects
<static-types>`, these fields always remain ``NULL``. For :ref:`dynamically
allocated objects <heap-types>`, these two fields are used to link the
- object into a doubly-linked list of *all* live objects on the heap.
+ object into a doubly linked list of *all* live objects on the heap.
This could be used for various debugging purposes; currently the only uses
are the :func:`sys.getobjects` function and to print the objects that are
When a user sets :attr:`__call__` in Python code, only *tp_call* is updated,
likely making it inconsistent with the vectorcall function.
- .. note::
-
- The semantics of the ``tp_vectorcall_offset`` slot are provisional and
- expected to be finalized in Python 3.9.
- If you use vectorcall, plan for updating your code for Python 3.9.
-
.. versionchanged:: 3.8
Before version 3.8, this slot was named ``tp_print``.
.. c:type:: PyObject *(*descrgetfunc)(PyObject *, PyObject *, PyObject *)
- See :c:member:`~PyTypeObject.tp_descrget`.
+ See :c:member:`~PyTypeObject.tp_descr_get`.
.. c:type:: int (*descrsetfunc)(PyObject *, PyObject *, PyObject *)
- See :c:member:`~PyTypeObject.tp_descrset`.
+ See :c:member:`~PyTypeObject.tp_descr_set`.
.. c:type:: Py_hash_t (*hashfunc)(PyObject *)
callable object that receives notification when *ob* is garbage collected; it
should accept a single parameter, which will be the weak reference object
itself. *callback* may also be ``None`` or ``NULL``. If *ob* is not a
- weakly-referencable object, or if *callback* is not callable, ``None``, or
+ weakly referencable object, or if *callback* is not callable, ``None``, or
``NULL``, this will return ``NULL`` and raise :exc:`TypeError`.
be a callable object that receives notification when *ob* is garbage
collected; it should accept a single parameter, which will be the weak
reference object itself. *callback* may also be ``None`` or ``NULL``. If *ob*
- is not a weakly-referencable object, or if *callback* is not callable,
+ is not a weakly referencable object, or if *callback* is not callable,
``None``, or ``NULL``, this will return ``NULL`` and raise :exc:`TypeError`.
highlight_language = 'python3'
# Minimum version of sphinx required
-needs_sphinx = '1.8'
+needs_sphinx = '3.2'
# Ignore any .rst files in the venv/ directory.
exclude_patterns = ['venv/*', 'README.rst']
# bpo-40204: Disable warnings on Sphinx 2 syntax of the C domain since the
# documentation is built with -W (warnings treated as errors).
c_warn_on_allowed_pre_v3 = False
-
-strip_signature_backslash = True
and other APIs, makes the API consistent across different Python versions,
and is hence recommended over using ``distutils`` directly.
-.. _New and changed setup.py arguments in setuptools: https://setuptools.readthedocs.io/en/latest/setuptools.html#new-and-changed-setup-keywords
+.. _New and changed setup.py arguments in setuptools: https://web.archive.org/web/20210614192516/https://setuptools.pypa.io/en/stable/userguide/keywords.html
.. include:: ./_setuptools_disclaimer.rst
it contains certain values: see :func:`check_environ`. Raise :exc:`ValueError`
for any variables not found in either *local_vars* or ``os.environ``.
- Note that this is not a fully-fledged string interpolation function. A valid
+ Note that this is not a full-fledged string interpolation function. A valid
``$variable`` can consist only of upper and lower case letters, numbers and an
underscore. No { } or ( ) style quoting is available.
.. c:function:: PyObject* PyInit_modulename(void)
-It returns either a fully-initialized module, or a :c:type:`PyModuleDef`
+It returns either a fully initialized module, or a :c:type:`PyModuleDef`
instance. See :ref:`initializing-modules` for details.
.. highlight:: python
}
If no :c:member:`~PyTypeObject.tp_repr` handler is specified, the interpreter will supply a
-representation that uses the type's :c:member:`~PyTypeObject.tp_name` and a uniquely-identifying
+representation that uses the type's :c:member:`~PyTypeObject.tp_name` and a uniquely identifying
value for the object.
The :c:member:`~PyTypeObject.tp_str` handler is to :func:`str` what the :c:member:`~PyTypeObject.tp_repr` handler
PyObject *weakreflist; /* List of weak references */
} TrivialObject;
-And the corresponding member in the statically-declared type object::
+And the corresponding member in the statically declared type object::
static PyTypeObject TrivialType = {
PyVarObject_HEAD_INIT(NULL, 0)
Functions are already first class objects in Python, and can be declared in a
local scope. Therefore the only advantage of using a lambda instead of a
-locally-defined function is that you don't need to invent a name for the
+locally defined function is that you don't need to invent a name for the
function -- but that's just a local variable to which the function object (which
is exactly the same type of object that a lambda expression yields) is assigned!
How do I copy a file?
---------------------
-The :mod:`shutil` module contains a :func:`~shutil.copyfile` function. Note
-that on MacOS 9 it doesn't copy the resource fork and Finder info.
+The :mod:`shutil` module contains a :func:`~shutil.copyfile` function.
+Note that on Windows NTFS volumes, it does not copy
+`alternate data streams
+<https://en.wikipedia.org/wiki/NTFS#Alternate_data_stream_(ADS)>`_
+nor `resource forks <https://en.wikipedia.org/wiki/Resource_fork>`__
+on macOS HFS+ volumes, though both are now rarely used.
+It also doesn't copy file permissions and metadata, though using
+:func:`shutil.copy2` instead will preserve most (though not all) of it.
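A minimal sketch of the difference (file names here are arbitrary examples):

```python
import os
import shutil
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "source.txt")
    with open(src, "w") as f:
        f.write("hello")

    # copyfile() copies the file contents only.
    dst = os.path.join(tmp, "copy.txt")
    shutil.copyfile(src, dst)

    # copy2() also copies most metadata, such as timestamps.
    dst2 = os.path.join(tmp, "copy2.txt")
    shutil.copy2(src, dst2)

    print(open(dst).read(), open(dst2).read())  # hello hello
```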
How do I read (or write) binary data?
https://wiki.python.org/moin/WebProgramming\ .
Cameron Laird maintains a useful set of pages about Python web technologies at
-http://phaseit.net/claird/comp.lang.python/web_python.
+https://web.archive.org/web/20210224183619/http://phaseit.net/claird/comp.lang.python/web_python.
How can I mimic CGI form submission (METHOD=POST)?
Yes.
-`Pylint <https://www.pylint.org/>`_ and
+`Pylint <https://pylint.pycqa.org/en/latest/index.html>`_ and
`Pyflakes <https://github.com/PyCQA/pyflakes>`_ do basic checking that will
help you catch bugs sooner.
1. standard library modules -- e.g. ``sys``, ``os``, ``getopt``, ``re``
2. third-party library modules (anything installed in Python's site-packages
directory) -- e.g. mx.DateTime, ZODB, PIL.Image, etc.
-3. locally-developed modules
+3. locally developed modules
It is sometimes necessary to move imports to a function or class to avoid
problems with circular imports. Gordon McMillan says:
:term:`Parameters <parameter>` are defined by the names that appear in a
function definition, whereas :term:`arguments <argument>` are the values
-actually passed to a function when calling it. Parameters define what types of
-arguments a function can accept. For example, given the function definition::
+actually passed to a function when calling it. Parameters define what
+:term:`kind of arguments <parameter>` a function can accept. For
+example, given the function definition::
def func(foo, bar=None, **kwargs):
pass
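To make the distinction concrete, with ``func`` as defined above:

```python
def func(foo, bar=None, **kwargs):
    pass

# foo, bar and kwargs are parameters; the values passed in each
# call below (42, 314 and 'spam') are arguments.
func(42)
func(42, bar=314)
func(42, bar=314, extra='spam')
```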
A slash in the argument list of a function denotes that the parameters prior to
it are positional-only. Positional-only parameters are the ones without an
-externally-usable name. Upon calling a function that accepts positional-only
+externally usable name. Upon calling a function that accepts positional-only
parameters, arguments are mapped to parameters based solely on their position.
For example, :func:`divmod` is a function that accepts positional-only
parameters. Its documentation looks like this::
machines.
However, some extension modules, either standard or third-party,
- are designed so as to release the GIL when doing computationally-intensive
+ are designed so as to release the GIL when doing computationally intensive
tasks such as compression or hashing. Also, the GIL is always released
when doing I/O.
16. Compile, then run the relevant portions of the regression-test suite.
This change should not introduce any new compile-time warnings or errors,
- and there should be no externally-visible change to Python's behavior.
+ and there should be no externally visible change to Python's behavior.
Well, except for one difference: ``inspect.signature()`` run on your function
should now provide a valid signature!
``module.class`` in the sample just to illustrate that you must
use the full path to *both* functions.)
-Sorry, there's no syntax for partially-cloning a function, or cloning a function
+Sorry, there's no syntax for partially cloning a function, or cloning a function
then modifying it. Cloning is an all-or-nothing proposition.
Also, the function you are cloning from must have been previously defined
there is no default, but not specifying a default may
result in an "uninitialized variable" warning. This can
easily happen when using option groups—although
- properly-written code will never actually use this value,
+ properly written code will never actually use this value,
the variable does get passed in to the impl, and the
C compiler will complain about the "use" of the
uninitialized value. This value should always be a
Sets can take their contents from an iterable and let you iterate over the set's
elements::
- S = {2, 3, 5, 7, 11, 13}
- for i in S:
- print(i)
+ >>> S = {2, 3, 5, 7, 11, 13}
+ >>> for i in S:
+ ... print(i)
+ 2
+ 3
+ 5
+ 7
+ 11
+ 13
functional programming language Haskell (https://www.haskell.org/). You can strip
all the whitespace from a stream of strings with the following code::
- line_list = [' line 1\n', 'line 2 \n', ...]
+ >>> line_list = [' line 1\n', 'line 2 \n', ' \n', '']
- # Generator expression -- returns iterator
- stripped_iter = (line.strip() for line in line_list)
+ >>> # Generator expression -- returns iterator
+ >>> stripped_iter = (line.strip() for line in line_list)
- # List comprehension -- returns list
- stripped_list = [line.strip() for line in line_list]
+ >>> # List comprehension -- returns list
+ >>> stripped_list = [line.strip() for line in line_list]
You can select only certain elements by adding an ``"if"`` condition::
- stripped_list = [line.strip() for line in line_list
- if line != ""]
+ >>> stripped_list = [line.strip() for line in line_list
+ ... if line != ""]
With a list comprehension, you get back a Python list; ``stripped_list`` is a
list containing the resulting lines, not an iterator. Generator expressions
if condition1
for expr2 in sequence2
if condition2
- for expr3 in sequence3 ...
+ for expr3 in sequence3
+ ...
if condition3
for exprN in sequenceN
if conditionN )
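A concrete two-level instance of this nesting (an illustrative sketch):

```python
# Nested "for" clauses iterate left to right, like nested loops:
pairs = [(x, y)
         for x in [1, 2]
         for y in ['a', 'b']]
print(pairs)  # [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]
```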
The itertools module
====================
-The :mod:`itertools` module contains a number of commonly-used iterators as well
+The :mod:`itertools` module contains a number of commonly used iterators as well
as functions for combining several iterators. This section will introduce the
module's contents by showing small examples.
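As a taste of what follows, two commonly used building blocks:

```python
import itertools

# count() yields an infinite arithmetic sequence; islice() takes a
# bounded slice of any iterator, even an infinite one.
evens = itertools.count(0, 2)            # 0, 2, 4, ...
print(list(itertools.islice(evens, 5)))  # [0, 2, 4, 6, 8]

# chain() concatenates several iterables into one stream.
print(list(itertools.chain([1, 2], [3, 4])))  # [1, 2, 3, 4]
```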
Arguments: 8@%rbp 8@%r12 -4@%eax
The above metadata contains information for SystemTap describing how it
-can patch strategically-placed machine code instructions to enable the
+can patch strategically placed machine code instructions to enable the
tracing hooks used by a SystemTap script.
The following script uses the tapset above to provide a top-like view of all
-running CPython code, showing the top 20 most frequently-entered bytecode
+running CPython code, showing the top 20 most frequently entered bytecode
frames, each second, across the whole system:
.. code-block:: none
:Author: Vinay Sajip <vinay_sajip at red-dove dot com>
This page contains a number of recipes related to logging, which have been found
-useful in the past.
+useful in the past. For links to tutorial and reference information, please see
+:ref:`cookbook-ref-links`.
.. currentmodule:: logging
logging.basicConfig(level=logging.DEBUG,
format='%(asctime)s %(name)-12s %(levelname)-8s %(message)s',
datefmt='%m-%d %H:%M',
- filename='/temp/myapp.log',
+ filename='/tmp/myapp.log',
filemode='w')
# define a Handler which writes INFO messages or higher to the sys.stderr
console = logging.StreamHandler()
This example uses console and file handlers, but you can use any number and
combination of handlers you choose.
+Note that the above choice of log filename ``/tmp/myapp.log`` implies use of a
+standard location for temporary files on POSIX systems. On Windows, you may need to
+choose a different directory name for the log - just ensure that the directory exists
+and that you have the permissions to create and update files in it.
+
Configuration server example
----------------------------
2010-09-06 22:38:15,301 d.e.f DEBUG IP: 123.231.231.123 User: fred A message at DEBUG level with 2 parameters
2010-09-06 22:38:15,301 d.e.f INFO IP: 123.231.231.123 User: fred A message at INFO level with 2 parameters
+Use of ``contextvars``
+----------------------
+
+Since Python 3.7, the :mod:`contextvars` module has provided context-local storage
+which works for both :mod:`threading` and :mod:`asyncio` processing needs. This type
+of storage may thus be generally preferable to thread-locals. The following example
+shows how, in a multi-threaded environment, logs can be populated with contextual
+information such as, for example, request attributes handled by web applications.
+
+For the purposes of illustration, say that you have different web applications, each
+independent of the other but running in the same Python process and using a library
+common to them. How can each of these applications have its own log, where all
+logging messages from the library (and other request processing code) are directed to
+the appropriate application's log file, while including in the log additional
+contextual information such as client IP, HTTP request method and client username?
+
+Let's assume that the library can be simulated by the following code:
+
+.. code-block:: python
+
+ # webapplib.py
+ import logging
+ import time
+
+ logger = logging.getLogger(__name__)
+
+ def useful():
+ # Just a representative event logged from the library
+ logger.debug('Hello from webapplib!')
+ # Just sleep for a bit so other threads get to run
+ time.sleep(0.01)
+
+We can simulate the multiple web applications by means of two simple classes,
+``Request`` and ``WebApp``. These simulate how real threaded web applications work -
+each request is handled by a thread:
+
+.. code-block:: python
+
+ # main.py
+ import argparse
+ from contextvars import ContextVar
+ import logging
+ import os
+ from random import choice
+ import threading
+ import webapplib
+
+ logger = logging.getLogger(__name__)
+ root = logging.getLogger()
+ root.setLevel(logging.DEBUG)
+
+ class Request:
+ """
+ A simple dummy request class which just holds dummy HTTP request method,
+ client IP address and client username
+ """
+ def __init__(self, method, ip, user):
+ self.method = method
+ self.ip = ip
+ self.user = user
+
+ # A dummy set of requests which will be used in the simulation - we'll just pick
+ # from this list randomly. Note that all GET requests are from 192.168.2.XXX
+    # addresses, whereas POST requests are from 192.168.3.XXX addresses. Three users
+ # are represented in the sample requests.
+
+ REQUESTS = [
+ Request('GET', '192.168.2.20', 'jim'),
+ Request('POST', '192.168.3.20', 'fred'),
+ Request('GET', '192.168.2.21', 'sheila'),
+ Request('POST', '192.168.3.21', 'jim'),
+ Request('GET', '192.168.2.22', 'fred'),
+ Request('POST', '192.168.3.22', 'sheila'),
+ ]
+
+ # Note that the format string includes references to request context information
+ # such as HTTP method, client IP and username
+
+ formatter = logging.Formatter('%(threadName)-11s %(appName)s %(name)-9s %(user)-6s %(ip)s %(method)-4s %(message)s')
+
+ # Create our context variables. These will be filled at the start of request
+ # processing, and used in the logging that happens during that processing
+
+ ctx_request = ContextVar('request')
+ ctx_appname = ContextVar('appname')
+
+ class InjectingFilter(logging.Filter):
+ """
+ A filter which injects context-specific information into logs and ensures
+ that only information for a specific webapp is included in its log
+ """
+ def __init__(self, app):
+ self.app = app
+
+ def filter(self, record):
+ request = ctx_request.get()
+ record.method = request.method
+ record.ip = request.ip
+ record.user = request.user
+ record.appName = appName = ctx_appname.get()
+ return appName == self.app.name
+
+ class WebApp:
+ """
+ A dummy web application class which has its own handler and filter for a
+ webapp-specific log.
+ """
+ def __init__(self, name):
+ self.name = name
+ handler = logging.FileHandler(name + '.log', 'w')
+ f = InjectingFilter(self)
+ handler.setFormatter(formatter)
+ handler.addFilter(f)
+ root.addHandler(handler)
+ self.num_requests = 0
+
+ def process_request(self, request):
+ """
+ This is the dummy method for processing a request. It's called on a
+ different thread for every request. We store the context information into
+ the context vars before doing anything else.
+ """
+ ctx_request.set(request)
+ ctx_appname.set(self.name)
+ self.num_requests += 1
+ logger.debug('Request processing started')
+ webapplib.useful()
+ logger.debug('Request processing finished')
+
+ def main():
+ fn = os.path.splitext(os.path.basename(__file__))[0]
+ adhf = argparse.ArgumentDefaultsHelpFormatter
+ ap = argparse.ArgumentParser(formatter_class=adhf, prog=fn,
+ description='Simulate a couple of web '
+ 'applications handling some '
+ 'requests, showing how request '
+ 'context can be used to '
+ 'populate logs')
+ aa = ap.add_argument
+        aa('--count', '-c', type=int, default=100, help='How many requests to simulate')
+ options = ap.parse_args()
+
+ # Create the dummy webapps and put them in a list which we can use to select
+ # from randomly
+ app1 = WebApp('app1')
+ app2 = WebApp('app2')
+ apps = [app1, app2]
+ threads = []
+ # Add a common handler which will capture all events
+ handler = logging.FileHandler('app.log', 'w')
+ handler.setFormatter(formatter)
+ root.addHandler(handler)
+
+ # Generate calls to process requests
+ for i in range(options.count):
+ try:
+ # Pick an app at random and a request for it to process
+ app = choice(apps)
+ request = choice(REQUESTS)
+ # Process the request in its own thread
+ t = threading.Thread(target=app.process_request, args=(request,))
+ threads.append(t)
+ t.start()
+ except KeyboardInterrupt:
+ break
+
+ # Wait for the threads to terminate
+ for t in threads:
+ t.join()
+
+ for app in apps:
+ print('%s processed %s requests' % (app.name, app.num_requests))
+
+ if __name__ == '__main__':
+ main()
+
+If you run the above, you should find that roughly half the requests go
+into :file:`app1.log` and the rest into :file:`app2.log`, and all the requests are
+logged to :file:`app.log`. Each webapp-specific log will contain log entries only
+for that webapp, and the request information will be displayed consistently in the
+log (i.e. the information in each dummy request will always appear together in a log
+line). This is illustrated by the following shell output:
+
+.. code-block:: shell
+
+ ~/logging-contextual-webapp$ python main.py
+ app1 processed 51 requests
+ app2 processed 49 requests
+ ~/logging-contextual-webapp$ wc -l *.log
+ 153 app1.log
+ 147 app2.log
+ 300 app.log
+ 600 total
+ ~/logging-contextual-webapp$ head -3 app1.log
+ Thread-3 (process_request) app1 __main__ jim 192.168.3.21 POST Request processing started
+ Thread-3 (process_request) app1 webapplib jim 192.168.3.21 POST Hello from webapplib!
+ Thread-5 (process_request) app1 __main__ jim 192.168.3.21 POST Request processing started
+ ~/logging-contextual-webapp$ head -3 app2.log
+ Thread-1 (process_request) app2 __main__ sheila 192.168.2.21 GET Request processing started
+ Thread-1 (process_request) app2 webapplib sheila 192.168.2.21 GET Hello from webapplib!
+ Thread-2 (process_request) app2 __main__ jim 192.168.2.20 GET Request processing started
+ ~/logging-contextual-webapp$ head app.log
+ Thread-1 (process_request) app2 __main__ sheila 192.168.2.21 GET Request processing started
+ Thread-1 (process_request) app2 webapplib sheila 192.168.2.21 GET Hello from webapplib!
+ Thread-2 (process_request) app2 __main__ jim 192.168.2.20 GET Request processing started
+ Thread-3 (process_request) app1 __main__ jim 192.168.3.21 POST Request processing started
+ Thread-2 (process_request) app2 webapplib jim 192.168.2.20 GET Hello from webapplib!
+ Thread-3 (process_request) app1 webapplib jim 192.168.3.21 POST Hello from webapplib!
+ Thread-4 (process_request) app2 __main__ fred 192.168.2.22 GET Request processing started
+ Thread-5 (process_request) app1 __main__ jim 192.168.3.21 POST Request processing started
+ Thread-4 (process_request) app2 webapplib fred 192.168.2.22 GET Hello from webapplib!
+ Thread-6 (process_request) app1 __main__ jim 192.168.3.21 POST Request processing started
+ ~/logging-contextual-webapp$ grep app1 app1.log | wc -l
+ 153
+ ~/logging-contextual-webapp$ grep app2 app2.log | wc -l
+ 147
+ ~/logging-contextual-webapp$ grep app1 app.log | wc -l
+ 153
+ ~/logging-contextual-webapp$ grep app2 app.log | wc -l
+ 147
+
+
+Imparting contextual information in handlers
+--------------------------------------------
+
+Each :class:`~Handler` has its own chain of filters.
+If you want to add contextual information to a :class:`LogRecord` without leaking
+it to other handlers, you can use a filter that returns
+a new :class:`~LogRecord` instead of modifying it in-place, as shown in the following script::
+
+ import copy
+ import logging
+
+ def filter(record: logging.LogRecord):
+ record = copy.copy(record)
+ record.user = 'jim'
+ return record
+
+ if __name__ == '__main__':
+ logger = logging.getLogger()
+ logger.setLevel(logging.INFO)
+ handler = logging.StreamHandler()
+ formatter = logging.Formatter('%(message)s from %(user)-8s')
+ handler.setFormatter(formatter)
+ handler.addFilter(filter)
+ logger.addHandler(handler)
+
+ logger.info('A log message')
.. _multiple-processes:
If you need more specialised processing, you can use a custom JSON encoder,
as in the following complete example::
- from __future__ import unicode_literals
-
import json
import logging
- # This next bit is to ensure the script runs unchanged on 2.x and 3.x
- try:
- unicode
- except NameError:
- unicode = str
class Encoder(json.JSONEncoder):
def default(self, o):
if isinstance(o, set):
return tuple(o)
- elif isinstance(o, unicode):
+ elif isinstance(o, str):
return o.encode('unicode_escape').decode('ascii')
return super().default(o)
if __name__=='__main__':
main()
+Logging to syslog with RFC5424 support
+--------------------------------------
+
+Although :rfc:`5424` dates from 2009, most syslog servers are configured by default to
+use the older :rfc:`3164`, which hails from 2001. When ``logging`` was added to Python
+in 2003, it supported the earlier (and then only existing) protocol. Since
+RFC 5424 came out, it has not seen widespread deployment in syslog
+servers, so the :class:`~logging.handlers.SysLogHandler` functionality has not been
+updated.
+
+RFC 5424 contains some useful features such as support for structured data, and if you
+need to be able to log to a syslog server with support for it, you can do so with a
+subclassed handler which looks something like this::
+
+ import datetime
+ import logging.handlers
+ import re
+ import socket
+ import time
+
+ class SysLogHandler5424(logging.handlers.SysLogHandler):
+
+ tz_offset = re.compile(r'([+-]\d{2})(\d{2})$')
+ escaped = re.compile(r'([\]"\\])')
+
+ def __init__(self, *args, **kwargs):
+ self.msgid = kwargs.pop('msgid', None)
+ self.appname = kwargs.pop('appname', None)
+ super().__init__(*args, **kwargs)
+
+ def format(self, record):
+ version = 1
+ asctime = datetime.datetime.fromtimestamp(record.created).isoformat()
+ m = self.tz_offset.match(time.strftime('%z'))
+ has_offset = False
+ if m and time.timezone:
+ hrs, mins = m.groups()
+ if int(hrs) or int(mins):
+ has_offset = True
+ if not has_offset:
+ asctime += 'Z'
+ else:
+ asctime += f'{hrs}:{mins}'
+ try:
+ hostname = socket.gethostname()
+ except Exception:
+ hostname = '-'
+ appname = self.appname or '-'
+ procid = record.process
+ msgid = '-'
+ msg = super().format(record)
+ sdata = '-'
+ if hasattr(record, 'structured_data'):
+ sd = record.structured_data
+ # This should be a dict where the keys are SD-ID and the value is a
+ # dict mapping PARAM-NAME to PARAM-VALUE (refer to the RFC for what these
+ # mean)
+ # There's no error checking here - it's purely for illustration, and you
+ # can adapt this code for use in production environments
+ parts = []
+
+ def replacer(m):
+ g = m.groups()
+ return '\\' + g[0]
+
+ for sdid, dv in sd.items():
+ part = f'[{sdid}'
+ for k, v in dv.items():
+ s = str(v)
+ s = self.escaped.sub(replacer, s)
+ part += f' {k}="{s}"'
+ part += ']'
+ parts.append(part)
+ sdata = ''.join(parts)
+ return f'{version} {asctime} {hostname} {appname} {procid} {msgid} {sdata} {msg}'
+
+You'll need to be familiar with RFC 5424 to fully understand the above code, and it
+may be that you have slightly different needs (e.g. for how you pass structured data
+to the log). Nevertheless, the above should be adaptable to your specific needs. With
+the above handler, you'd pass structured data using something like this::
+
+ sd = {
+ 'foo@12345': {'bar': 'baz', 'baz': 'bozz', 'fizz': r'buzz'},
+ 'foo@54321': {'rab': 'baz', 'zab': 'bozz', 'zzif': r'buzz'}
+ }
+ extra = {'structured_data': sd}
+ i = 1
+ logger.debug('Message %d', i, extra=extra)
+
.. patterns-to-avoid:
* An attempt to delete a file (e.g. during file rotation) silently fails,
because there is another reference pointing to it. This can lead to confusion
and wasted debugging time - log entries end up in unexpected places, or are
- lost altogether.
+  lost altogether. Or a file that was supposed to be moved remains in place,
+  and grows in size unexpectedly despite size-based rotation supposedly being
+  in place.
Use the techniques outlined in :ref:`multiple-processes` to circumvent such
issues.
information into your logs and restrict the loggers created to those describing
areas within your application (generally modules, but occasionally slightly
more fine-grained than that).
+
+.. _cookbook-ref-links:
+
+Other resources
+---------------
+
+.. seealso::
+
+ Module :mod:`logging`
+ API reference for the logging module.
+
+ Module :mod:`logging.config`
+ Configuration API for the logging module.
+
+ Module :mod:`logging.handlers`
+ Useful handlers included with the logging module.
+
+ :ref:`Basic Tutorial <logging-basic-tutorial>`
+
+ :ref:`Advanced Tutorial <logging-advanced-tutorial>`
^^^^^^^^^^^^^^^^^
A very common situation is that of recording logging events in a file, so let's
-look at that next. Be sure to try the following in a newly-started Python
+look at that next. Be sure to try the following in a newly started Python
interpreter, and don't just continue from the session described above::
import logging
>>> m.groupdict()
{'first': 'Jane', 'last': 'Doe'}
-Named groups are handy because they let you use easily-remembered names, instead
+Named groups are handy because they let you use easily remembered names, instead
of having to remember numbers. Here's an example RE from the :mod:`imaplib`
module::
-----------
It is perfectly possible to send binary data over a socket. The major problem is
-that not all machines use the same formats for binary data. For example, a
-Motorola chip will represent a 16 bit integer with the value 1 as the two hex
-bytes 00 01. Intel and DEC, however, are byte-reversed - that same 1 is 01 00.
+that not all machines use the same formats for binary data. For example,
+`network byte order <https://en.wikipedia.org/wiki/Endianness#Networking>`_
+is big-endian, with the most significant byte first,
+so a 16 bit integer with the value ``1`` would be the two hex bytes ``00 01``.
+However, most common processors (x86/AMD64, ARM, RISC-V) are little-endian,
+with the least significant byte first - that same ``1`` would be ``01 00``.
+
Socket libraries have calls for converting 16 and 32 bit integers - ``ntohl,
htonl, ntohs, htons`` where "n" means *network* and "h" means *host*, "s" means
*short* and "l" means *long*. Where network order is host order, these do
nothing, but where the machine is byte-reversed, these swap the bytes around
appropriately.
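Python code can inspect both orderings with the :mod:`struct` module; a quick
sketch:

```python
import struct

# Pack the 16-bit integer 1 in network (big-endian) and
# little-endian byte order.
big = struct.pack('>H', 1)     # '>' = big-endian, 'H' = unsigned short
little = struct.pack('<H', 1)  # '<' = little-endian

print(big.hex())     # 0001
print(little.hex())  # 0100
```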
-In these days of 32 bit machines, the ascii representation of binary data is
+In these days of 64-bit machines, the ASCII representation of binary data is
frequently smaller than the binary representation. That's because a surprising
-amount of the time, all those longs have the value 0, or maybe 1. The string "0"
-would be two bytes, while binary is four. Of course, this doesn't fit well with
-fixed-length messages. Decisions, decisions.
+amount of the time, most integers have the value 0, or maybe 1.
+The string ``"0"`` would be two bytes, while a full 64-bit integer would be 8.
+Of course, this doesn't fit well with fixed-length messages.
+Decisions, decisions.
Disconnecting
HOWTO Fetch Internet Resources Using The urllib Package
***********************************************************
-:Author: `Michael Foord <http://www.voidspace.org.uk/python/index.shtml>`_
+:Author: `Michael Foord <https://agileabstractions.com/>`_
.. note::
There is a French translation of an earlier revision of this
HOWTO, available at `urllib2 - Le Manuel manquant
- <http://www.voidspace.org.uk/python/articles/urllib2_francais.shtml>`_.
+ <https://web.archive.org/web/20200910051922/http://www.voidspace.org.uk/python/articles/urllib2_francais.shtml>`_.
You may also find useful the following article on fetching web resources
with Python:
- * `Basic Authentication <http://www.voidspace.org.uk/python/articles/authentication.shtml>`_
+ * `Basic Authentication <https://web.archive.org/web/20201215133350/http://www.voidspace.org.uk/python/articles/authentication.shtml>`_
A tutorial on *Basic Authentication*, with examples in Python.
====================
When you fetch a URL you use an opener (an instance of the perhaps
-confusingly-named :class:`urllib.request.OpenerDirector`). Normally we have been using
+confusingly named :class:`urllib.request.OpenerDirector`). Normally we have been using
the default opener - via ``urlopen`` - but you can create custom
openers. Openers use handlers. All the "heavy lifting" is done by the
handlers. Each handler knows how to open URLs for a particular URL scheme (http,
+++ /dev/null
-import sqlite3
-import datetime
-import time
-
-def adapt_datetime(ts):
- return time.mktime(ts.timetuple())
-
-sqlite3.register_adapter(datetime.datetime, adapt_datetime)
-
-con = sqlite3.connect(":memory:")
-cur = con.cursor()
-
-now = datetime.datetime.now()
-cur.execute("select ?", (now,))
-print(cur.fetchone()[0])
-
-con.close()
self.x, self.y = x, y
def __repr__(self):
- return "(%f;%f)" % (self.x, self.y)
+ return f"Point({self.x}, {self.y})"
def adapt_point(point):
- return ("%f;%f" % (point.x, point.y)).encode('ascii')
+ return f"{point.x};{point.y}".encode("utf-8")
def convert_point(s):
x, y = list(map(float, s.split(b";")))
return Point(x, y)
-# Register the adapter
+# Register the adapter and converter
sqlite3.register_adapter(Point, adapt_point)
-
-# Register the converter
sqlite3.register_converter("point", convert_point)
+# 1) Parse using declared types
p = Point(4.0, -3.2)
-
-#########################
-# 1) Using declared types
con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
-cur = con.cursor()
-cur.execute("create table test(p point)")
+cur = con.execute("create table test(p point)")
cur.execute("insert into test(p) values (?)", (p,))
cur.execute("select p from test")
cur.close()
con.close()
-#######################
-# 1) Using column names
+# 2) Parse using column names
con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_COLNAMES)
-cur = con.cursor()
-cur.execute("create table test(p)")
+cur = con.execute("create table test(p)")
cur.execute("insert into test(p) values (?)", (p,))
cur.execute('select p as "p [point]" from test')
+++ /dev/null
-import sqlite3
-
-class IterChars:
- def __init__(self):
- self.count = ord('a')
-
- def __iter__(self):
- return self
-
- def __next__(self):
- if self.count > ord('z'):
- raise StopIteration
- self.count += 1
- return (chr(self.count - 1),) # this is a 1-tuple
-
-con = sqlite3.connect(":memory:")
-cur = con.cursor()
-cur.execute("create table characters(c)")
-
-theIter = IterChars()
-cur.executemany("insert into characters(c) values (?)", theIter)
-
-cur.execute("select c from characters")
-print(cur.fetchall())
-
-con.close()
+++ /dev/null
-import sqlite3
-import string
-
-def char_generator():
- for c in string.ascii_lowercase:
- yield (c,)
-
-con = sqlite3.connect(":memory:")
-cur = con.cursor()
-cur.execute("create table characters(c)")
-
-cur.executemany("insert into characters(c) values (?)", char_generator())
-
-cur.execute("select c from characters")
-print(cur.fetchall())
-
-con.close()
+++ /dev/null
-import sqlite3
-
-con = sqlite3.connect(":memory:")
-cur = con.cursor()
-cur.executescript("""
- create table person(
- firstname,
- lastname,
- age
- );
-
- create table book(
- title,
- author,
- published
- );
-
- insert into book(title, author, published)
- values (
- 'Dirk Gently''s Holistic Detective Agency',
- 'Douglas Adams',
- 1987
- );
- """)
-con.close()
was packaged and distributed in the standard way, i.e. using the Distutils.
First, the distribution's name and version number will be featured prominently
in the name of the downloaded archive, e.g. :file:`foo-1.0.tar.gz` or
-:file:`widget-0.9.7.zip`. Next, the archive will unpack into a similarly-named
+:file:`widget-0.9.7.zip`. Next, the archive will unpack into a similarly named
directory: :file:`foo-1.0` or :file:`widget-0.9.7`. Additionally, the
distribution will contain a setup script :file:`setup.py`, and a file named
:file:`README.txt` or possibly just :file:`README`, which should explain that
python -m ensurepip --default-pip
There are also additional resources for `installing pip.
-<https://packaging.python.org/tutorials/installing-packages/#install-pip-setuptools-and-wheel>`__
+<https://packaging.python.org/en/latest/tutorials/installing-packages/#ensure-pip-setuptools-and-wheel-are-up-to-date>`__
Installing binary extensions
.. _2to3-reference:
-2to3 - Automated Python 2 to 3 code translation
-===============================================
+2to3 --- Automated Python 2 to 3 code translation
+=================================================
.. sectionauthor:: Benjamin Peterson <benjamin@python.org>
``from future_builtins import zip`` appears.
-:mod:`lib2to3` - 2to3's library
--------------------------------
+:mod:`lib2to3` --- 2to3's library
+---------------------------------
.. module:: lib2to3
:synopsis: The 2to3 library
.. versionchanged:: 3.9
``array('u')`` now uses ``wchar_t`` as C type instead of deprecated
- ``Py_UNICODE``. This change doesn't affect to its behavior because
+ ``Py_UNICODE``. This change doesn't affect its behavior because
``Py_UNICODE`` is alias of ``wchar_t`` since Python 3.3.
.. deprecated-removed:: 3.3 4.0
This is the base of all AST node classes. The actual node classes are
derived from the :file:`Parser/Python.asdl` file, which is reproduced
- :ref:`below <abstract-grammar>`. They are defined in the :mod:`_ast` C
+ :ref:`above <abstract-grammar>`. They are defined in the :mod:`_ast` C
module and re-exported in :mod:`ast`.
There is one class defined for each left-hand side symbol in the abstract
If source contains a null character ('\0'), :exc:`ValueError` is raised.
- .. warning::
+ .. warning::
Note that successfully parsing source code into an AST object doesn't
guarantee that the source code provided is valid Python code that can
be executed as the compilation step can raise further :exc:`SyntaxError`
.. method:: async_chat.push_with_producer(producer)
Takes a producer object and adds it to the producer queue associated with
- the channel. When all currently-pushed producers have been exhausted the
+ the channel. When all currently pushed producers have been exhausted the
channel will consume this producer's data by calling its :meth:`more`
method and send the data to the remote endpoint.
* - :meth:`transport.get_write_buffer_size()
<WriteTransport.get_write_buffer_size>`
+ - Return the current size of the output buffer.
+
+ * - :meth:`transport.get_write_buffer_limits()
+ <WriteTransport.get_write_buffer_limits>`
- Return high and low water marks for write flow control.
* - :meth:`transport.set_write_buffer_limits()
can be read. Use the :attr:`IncompleteReadError.partial`
attribute to get the partially read data.
- .. coroutinemethod:: readuntil(separator=b'\\n')
+ .. coroutinemethod:: readuntil(separator=b'\n')
Read data from the stream until *separator* is found.
.. important::
Save a reference to the result of this function, to avoid
- a task disappearing mid execution.
+ a task disappearing mid execution. The event loop only keeps
+ weak references to tasks. A task that isn't referenced elsewhere
+ may get garbage-collected at any time, even before it's done.
+ For reliable "fire-and-forget" background tasks, gather them in
+ a collection::
+
+ background_tasks = set()
+
+ for i in range(10):
+ task = asyncio.create_task(some_coro(param=i))
+
+ # Add task to the set. This creates a strong reference.
+ background_tasks.add(task)
+
+ # To prevent keeping references to finished tasks forever,
+ # make each task remove its own reference from the set after
+ # completion:
+ task.add_done_callback(background_tasks.discard)
.. versionadded:: 3.7
.. versionadded:: 3.4
-.. function:: a85decode(b, *, foldspaces=False, adobe=False, ignorechars=b' \\t\\n\\r\\v')
+.. function:: a85decode(b, *, foldspaces=False, adobe=False, ignorechars=b' \t\n\r\v')
Decode the Ascii85 encoded :term:`bytes-like object` or ASCII string *b* and
return the decoded :class:`bytes`.
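A quick round trip with an illustrative input:

```python
import base64

data = b'Hello, world'
encoded = base64.a85encode(data)
decoded = base64.a85decode(encoded)
print(decoded == data)  # True
```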
An :class:`Executor` subclass that uses a pool of at most *max_workers*
threads to execute calls asynchronously.
+ All threads enqueued to ``ThreadPoolExecutor`` will be joined before the
+ interpreter can exit. Note that the exit handler which does this is
+   executed *before* any exit handlers added using :mod:`atexit`. This means
+ exceptions in the main thread must be caught and handled in order to
+ signal threads to exit gracefully. For this reason, it is recommended
+ that ``ThreadPoolExecutor`` not be used for long-running tasks.
+
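For short-lived work, using the executor as a context manager avoids the
exit-handler interaction described above, because all worker threads are joined
when the ``with`` block ends; a minimal sketch:

```python
from concurrent.futures import ThreadPoolExecutor

# All worker threads are joined when the with-block exits.
with ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(pow, [2, 3, 4], [5, 2, 3]))

print(results)  # [32, 9, 64]
```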
*initializer* is an optional callable that is called at the start of
each worker thread; *initargs* is a tuple of arguments passed to the
initializer. Should *initializer* raise an exception, all currently
Python's interactive interpreter. If you want a Python interpreter that
supports some special feature in addition to the Python language, you should
look at the :mod:`code` module. (The :mod:`codeop` module is lower-level, used
-to support compiling a possibly-incomplete chunk of Python code.)
+to support compiling a possibly incomplete chunk of Python code.)
The full list of modules described in this chapter is:
x: list = field(default_factory=list)
assert D().x is not D().x
+
+Descriptor-typed fields
+-----------------------
+
+Fields that are assigned :ref:`descriptor objects <descriptors>` as their
+default value have the following special behaviors:
+
+* The value for the field passed to the dataclass's ``__init__`` method is
+ passed to the descriptor's ``__set__`` method rather than overwriting the
+ descriptor object.
+* Similarly, when getting or setting the field, the descriptor's
+ ``__get__`` or ``__set__`` method is called rather than returning or
+ overwriting the descriptor object.
+* To determine whether a field contains a default value, ``dataclasses``
+ will call the descriptor's ``__get__`` method using its class access
+  form (i.e. ``descriptor.__get__(obj=None, type=cls)``). If the
+ descriptor returns a value in this case, it will be used as the
+ field's default. On the other hand, if the descriptor raises
+ :exc:`AttributeError` in this situation, no default value will be
+ provided for the field.
+
+::
+
+ class IntConversionDescriptor:
+ def __init__(self, *, default):
+ self._default = default
+
+ def __set_name__(self, owner, name):
+ self._name = "_" + name
+
+ def __get__(self, obj, type):
+ if obj is None:
+ return self._default
+
+ return getattr(obj, self._name, self._default)
+
+ def __set__(self, obj, value):
+ setattr(obj, self._name, int(value))
+
+ @dataclass
+ class InventoryItem:
+ quantity_on_hand: IntConversionDescriptor = IntConversionDescriptor(default=100)
+
+ i = InventoryItem()
+ print(i.quantity_on_hand) # 100
+ i.quantity_on_hand = 2.5 # calls __set__ with 2.5
+ print(i.quantity_on_hand) # 2
+
+Note that if a field is annotated with a descriptor type, but is not assigned
+a descriptor object as its default value, the field will act like a normal
+field.
+-------------------------------+----------------------------------------------+
| Operation | Result |
+===============================+==============================================+
-| ``date2 = date1 + timedelta`` | *date2* is ``timedelta.days`` days removed |
-| | from *date1*. (1) |
+| ``date2 = date1 + timedelta`` | *date2* will be ``timedelta.days`` days |
+| | after *date1*. (1) |
+-------------------------------+----------------------------------------------+
| ``date2 = date1 - timedelta`` | Computes *date2* such that ``date2 + |
| | timedelta == date1``. (2) |
many other calendar systems.
.. [#] See R. H. van Gent's `guide to the mathematics of the ISO 8601 calendar
- <https://www.staff.science.uu.nl/~gent0113/calendar/isocalendar.htm>`_
+ <https://web.archive.org/web/20220531051136/https://webspace.science.uu.nl/~gent0113/calendar/isocalendar.htm>`_
for a good explanation.
.. [#] Passing ``datetime.strptime('Feb 29', '%b %d')`` will fail since ``1900`` is not a leap year.
--------------
-The :mod:`decimal` module provides support for fast correctly-rounded
+The :mod:`decimal` module provides support for fast correctly rounded
decimal floating point arithmetic. It offers several advantages over the
:class:`float` datatype:
With two arguments, compute ``x**y``. If ``x`` is negative then ``y``
must be integral. The result will be inexact unless ``y`` is integral and
the result is finite and can be expressed exactly in 'precision' digits.
- The rounding mode of the context is used. Results are always correctly-rounded
+ The rounding mode of the context is used. Results are always correctly rounded
in the Python version.
``Decimal(0) ** Decimal(0)`` results in ``InvalidOperation``, and if ``InvalidOperation``
is not trapped, then results in ``Decimal('NaN')``.
.. versionchanged:: 3.3
- The C module computes :meth:`power` in terms of the correctly-rounded
+ The C module computes :meth:`power` in terms of the correctly rounded
:meth:`exp` and :meth:`ln` functions. The result is well-defined but
- only "almost always correctly-rounded".
+ only "almost always correctly rounded".
With three arguments, compute ``(x**y) % modulo``. For the three argument
form, the following restrictions on the arguments hold:
A. Yes. In the CPython and PyPy3 implementations, the C/CFFI versions of
the decimal module integrate the high speed `libmpdec
<https://www.bytereef.org/mpdecimal/doc/libmpdec/index.html>`_ library for
-arbitrary precision correctly-rounded decimal floating point arithmetic [#]_.
+arbitrary precision correctly rounded decimal floating point arithmetic [#]_.
``libmpdec`` uses `Karatsuba multiplication
<https://en.wikipedia.org/wiki/Karatsuba_algorithm>`_
for medium-sized numbers and the `Number Theoretic Transform
contains a good example of its use.
-.. function:: context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\\n')
+.. function:: context_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n')
Compare *a* and *b* (lists of strings); return a delta (a :term:`generator`
generating the delta lines) in context diff format.
emu
-.. function:: unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\\n')
+.. function:: unified_diff(a, b, fromfile='', tofile='', fromfiledate='', tofiledate='', n=3, lineterm='\n')
Compare *a* and *b* (lists of strings); return a delta (a :term:`generator`
generating the delta lines) in unified diff format.
See :ref:`difflib-interface` for a more detailed example.
-.. function:: diff_bytes(dfunc, a, b, fromfile=b'', tofile=b'', fromfiledate=b'', tofiledate=b'', n=3, lineterm=b'\\n')
+.. function:: diff_bytes(dfunc, a, b, fromfile=b'', tofile=b'', fromfiledate=b'', tofiledate=b'', n=3, lineterm=b'\n')
Compare *a* and *b* (lists of bytes objects) using *dfunc*; yield a
sequence of delta lines (also bytes) in the format returned by *dfunc*.
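A minimal call to :func:`unified_diff`, for illustration:

```python
import difflib

a = ['one\n', 'two\n', 'three\n']
b = ['one\n', 'TWO\n', 'three\n']

# Each yielded string already ends with lineterm (default '\n').
diff = list(difflib.unified_diff(a, b, fromfile='old', tofile='new'))
print(''.join(diff))
```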
.. function:: findlinestarts(code)
- This generator function uses the ``co_firstlineno`` and ``co_lnotab``
- attributes of the code object *code* to find the offsets which are starts of
+ This generator function uses the ``co_lines`` method
+ of the code object *code* to find the offsets which are starts of
lines in the source code. They are generated as ``(offset, lineno)`` pairs.
- See :source:`Objects/lnotab_notes.txt` for the ``co_lnotab`` format and
- how to decode it.
.. versionchanged:: 3.6
Line numbers can be decreasing. Before, they were always increasing.
+ .. versionchanged:: 3.10
+ The :pep:`626` ``co_lines`` method is used instead of the ``co_firstlineno``
+ and ``co_lnotab`` attributes of the code object.
+
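A quick illustration (the exact offsets vary between CPython versions, so only
the shape of the result is shown):

```python
import dis

def f():
    x = 1
    return x

# Each pair is (bytecode offset, source line number).
starts = list(dis.findlinestarts(f.__code__))
print(starts)
```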
.. function:: findlabels(code)
.. opcode:: GET_ANEXT
- Implements ``PUSH(get_awaitable(TOS.__anext__()))``. See ``GET_AWAITABLE``
- for details about ``get_awaitable``
+ Pushes ``get_awaitable(TOS.__anext__())`` to the stack. See
+ ``GET_AWAITABLE`` for details about ``get_awaitable``.
.. versionadded:: 3.5
When specified, doctests expecting exceptions pass so long as an exception
of the expected type is raised, even if the details
- (message and fully-qualified exception name) don't match.
+ (message and fully qualified exception name) don't match.
For example, an example expecting ``ValueError: 42`` will pass if the actual
exception raised is ``ValueError: 3*14``, but will fail if, say, a
:exc:`TypeError` is raised instead.
- It will also ignore any fully-qualified name included before the
+ It will also ignore any fully qualified name included before the
exception class, which can vary between implementations and versions
of Python and the code/libraries in use.
Hence, all three of these variations will work with the flag specified:
if *s* is a byte string.
- .. method:: encode(splitchars=';, \\t', maxlinelen=None, linesep='\\n')
+ .. method:: encode(splitchars=';, \t', maxlinelen=None, linesep='\n')
Encode a message header into an RFC-compliant format, possibly wrapping
long lines and encapsulating non-ASCII parts in base64 or quoted-printable
supported.
-.. function:: print(*objects, sep=' ', end='\\n', file=sys.stdout, flush=False)
+.. function:: print(*objects, sep=' ', end='\n', file=sys.stdout, flush=False)
Print *objects* to the text stream *file*, separated by *sep* and followed
by *end*. *sep*, *end*, *file*, and *flush*, if present, must be given as keyword
.. function:: glob(pathname, *, root_dir=None, dir_fd=None, recursive=False)
- Return a possibly-empty list of path names that match *pathname*, which must be
+ Return a possibly empty list of path names that match *pathname*, which must be
a string containing a path specification. *pathname* can be either absolute
(like :file:`/usr/src/Python-1.5/Makefile`) or relative (like
:file:`../../Tools/\*/\*.gif`), and can contain shell-style wildcards. Broken
hash function used in the protocol summarily stops this type of attack.
(`The Skein Hash Function Family
- <http://www.skein-hash.info/sites/default/files/skein1.3.pdf>`_,
+ <https://www.schneier.com/wp-content/uploads/2016/02/skein.pdf>`_,
p. 21)
BLAKE2 can be personalized by passing bytes to the *person* argument::
.. warning::
- When comparing the output of :meth:`digest` to an externally-supplied
+ When comparing the output of :meth:`digest` to an externally supplied
digest during a verification routine, it is recommended to use the
:func:`compare_digest` function instead of the ``==`` operator
to reduce the vulnerability to timing attacks.
.. warning::
- When comparing the output of :meth:`hexdigest` to an externally-supplied
+ When comparing the output of :meth:`hexdigest` to an externally supplied
digest during a verification routine, it is recommended to use the
:func:`compare_digest` function instead of the ``==`` operator
to reduce the vulnerability to timing attacks.
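For example, verifying a keyed BLAKE2 digest with :func:`hmac.compare_digest`
(the key and message here are illustrative):

```python
import hashlib
import hmac

key = b'secret-key'
msg = b'payload'

good = hashlib.blake2b(msg, key=key).hexdigest()
candidate = hashlib.blake2b(msg, key=key).hexdigest()

# Unlike ==, compare_digest does not short-circuit at the first
# differing character, reducing exposure to timing attacks.
print(hmac.compare_digest(good, candidate))  # True
```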
.. rubric:: Footnotes
-.. [#] See https://html.spec.whatwg.org/multipage/syntax.html#named-character-references
+.. [#] See https://html.spec.whatwg.org/multipage/named-characters.html#named-character-references
:meth:`value_decode` are inverses on the range of *value_decode*.
-.. method:: BaseCookie.output(attrs=None, header='Set-Cookie:', sep='\\r\\n')
+.. method:: BaseCookie.output(attrs=None, header='Set-Cookie:', sep='\r\n')
Return a string representation suitable to be sent as HTTP headers. *attrs* and
*header* are sent to each :class:`Morsel`'s :meth:`output` method. *sep* is used
.. warning::
:mod:`http.server` is not recommended for production. It only implements
- basic security checks.
+ :ref:`basic security checks <http.server-security>`.
One class, :class:`HTTPServer`, is a :class:`socketserver.TCPServer` subclass.
It creates and listens at the HTTP socket, dispatching the requests to a
the ``--cgi`` option::
python -m http.server --cgi
+
+.. _http.server-security:
+
+Security Considerations
+-----------------------
+
+.. index:: pair: http.server; security
+
+:class:`SimpleHTTPRequestHandler` will follow symbolic links when handling
+requests. This makes it possible for files outside of the specified directory
+to be served.
single: Clear Breakpoint
single: breakpoints
-Context Menus
+Context menus
^^^^^^^^^^^^^^^^^^^^^^^^^^
Open a context menu by right-clicking in a window (Control-click on macOS).
.. _editing-and-navigation:
-Editing and navigation
+Editing and Navigation
----------------------
Editor windows
The text and background colors for the context pane can be configured under
the Highlights tab in the Configure IDLE dialog.
-Python Shell window
-^^^^^^^^^^^^^^^^^^^
+Shell window
+^^^^^^^^^^^^
-With IDLE's Shell, one enters, edits, and recalls complete statements.
-Most consoles and terminals only work with a single physical line at a time.
+In IDLE's Shell, enter, edit, and recall complete statements. (Most
+consoles and terminals only work with a single physical line at a time.)
+
+Submit a single-line statement for execution by hitting :kbd:`Return`
+with the cursor anywhere on the line. If a line is extended with
+Backslash (:kbd:`\\`), the cursor must be on the last physical line.
+Submit a multi-line compound statement by entering a blank line after
+the statement.
When one pastes code into Shell, it is not compiled and possibly executed
-until one hits :kbd:`Return`. One may edit pasted code first.
-If one pastes more that one statement into Shell, the result will be a
+until one hits :kbd:`Return`, as specified above.
+One may edit pasted code first.
+If one pastes more than one statement into Shell, the result will be a
:exc:`SyntaxError` when multiple statements are compiled as if they were one.
+Lines containing ``RESTART`` mean that the user execution process has been
+re-started. This occurs when the user execution process has crashed,
+when one requests a restart on the Shell menu, or when one runs code
+in an editor window.
+
The editing features described in previous subsections work when entering
code interactively. IDLE's Shell window also responds to the following keys.
* :kbd:`Alt-n` retrieves next. On macOS use :kbd:`C-n`.
- * :kbd:`Return` while on any previous command retrieves that command
+ * :kbd:`Return` while the cursor is on any previous command
+ retrieves that command
Text colors
^^^^^^^^^^^
text in popups and dialogs is not user-configurable.
-Startup and code execution
+Startup and Code Execution
--------------------------
Upon startup with the ``-s`` option, IDLE will execute the file referenced by
created in the execution process, whether directly by user code or by
modules such as multiprocessing. If such subprocess use ``input`` from
sys.stdin or ``print`` or ``write`` to sys.stdout or sys.stderr,
-IDLE should be started in a command line window. The secondary subprocess
+IDLE should be started in a command line window. (On Windows,
+use ``python`` or ``py`` rather than ``pythonw`` or ``pyw``.)
+The secondary subprocess
will then be attached to that window for input and output.
If ``sys`` is reset by user code, such as with ``importlib.reload(sys)``,
.. deprecated:: 3.4
-Help and preferences
+Help and Preferences
--------------------
.. _help-sources:
The ``group`` and ``name`` are arbitrary values defined by the package author
and usually a client will wish to resolve all entry points for a particular
group. Read `the setuptools docs
-<https://setuptools.readthedocs.io/en/latest/setuptools.html#dynamic-discovery-of-services-and-plugins>`_
+<https://setuptools.pypa.io/en/latest/userguide/entry_point.html>`_
for more information on entry points, their definition, and usage.
*Compatibility Note*
package. This attribute is not set on modules.
- :attr:`__package__`
- The fully-qualified name of the package under which the module was
+ The fully qualified name of the package under which the module was
loaded as a submodule (or the empty string for top-level modules).
For packages, it is the same as :attr:`__name__`. The
:func:`importlib.util.module_for_loader` decorator can handle the
(``__name__``)
- A string for the fully-qualified name of the module.
+ A string for the fully qualified name of the module.
.. attribute:: loader
(``__package__``)
- (Read-only) The fully-qualified name of the package under which the module
+ (Read-only) The fully qualified name of the package under which the module
should be loaded as a submodule (or the empty string for top-level modules).
For packages, it is the same as :attr:`__name__`.
doesn't have its own annotations dict, returns an empty dict.
* All accesses to object members and dict values are done
using ``getattr()`` and ``dict.get()`` for safety.
- * Always, always, always returns a freshly-created dict.
+ * Always, always, always returns a freshly created dict.
``eval_str`` controls whether or not values of type ``str`` are replaced
with the result of calling :func:`eval()` on those values:
.. versionadded:: 3.7
-.. class:: StringIO(initial_value='', newline='\\n')
+.. class:: StringIO(initial_value='', newline='\n')
A text stream using an in-memory text buffer. It inherits
:class:`TextIOBase`.
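A minimal illustration:

```python
import io

buf = io.StringIO()
buf.write('hello ')
buf.write('world')

# getvalue() returns the entire buffer contents regardless of
# the current stream position.
print(buf.getvalue())  # hello world
```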
have the task tracking API, which means that you can use
:class:`~queue.SimpleQueue` instances for *queue*.
+ .. note:: If you are using :mod:`multiprocessing`, you should avoid using
+ :class:`~queue.SimpleQueue` and instead use :class:`multiprocessing.Queue`.
.. method:: emit(record)
method is enqueued.
The base implementation formats the record to merge the message,
- arguments, and exception information, if present. It also
- removes unpickleable items from the record in-place.
+ arguments, exception and stack information, if present. It also removes
+ unpickleable items from the record in-place. Specifically, it overwrites
+ the record's :attr:`msg` and :attr:`message` attributes with the merged
+ message (obtained by calling the handler's :meth:`format` method), and
+ sets the :attr:`args`, :attr:`exc_info` and :attr:`exc_text` attributes
+ to ``None``.
You might want to override this method if you want to convert
the record to a dict or JSON string, or send a modified copy
want to override this if you want to use blocking behaviour, or a
timeout, or a customized queue implementation.
+ .. attribute:: listener
+ When created via configuration using :func:`~logging.config.dictConfig`, this
+ attribute will contain a :class:`QueueListener` instance for use with this
+ handler. Otherwise, it will be ``None``.
+
+ .. versionadded:: 3.12
.. _queue-listener:
task tracking API (though it's used if available), which means that you can
use :class:`~queue.SimpleQueue` instances for *queue*.
+ .. note:: If you are using :mod:`multiprocessing`, you should avoid using
+ :class:`~queue.SimpleQueue` and instead use :class:`multiprocessing.Queue`.
+
If ``respect_handler_level`` is ``True``, a handler's level is respected
(compared with the level for the message) when deciding whether to pass
messages to that handler; otherwise, the behaviour is as in previous Python
can include your own messages integrated with messages from third-party
modules.
+The simplest example:
+
+.. code-block:: none
+
+ >>> import logging
+ >>> logging.warning('Watch out!')
+ WARNING:root:Watch out!
+
The module provides a lot of functionality and flexibility. If you are
-unfamiliar with logging, the best way to get to grips with it is to see the
-tutorials (see the links on the right).
+unfamiliar with logging, the best way to get to grips with it is to view the
+tutorials (**see the links above and on the right**).
The basic classes defined by the module, together with their functions, are
listed below.
2006-02-08 22:20:02,165 192.168.0.1 fbloggs Protocol problem: connection reset
The keys in the dictionary passed in *extra* should not clash with the keys used
- by the logging system. (See the :class:`Formatter` documentation for more
+ by the logging system. (See the section on :ref:`logrecord-attributes` for more
information on which keys are used by the logging system.)
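For example, the log line shown above could be produced with non-clashing keys
like ``ip`` and ``user`` (the handler and format string are illustrative):

```python
import io
import logging

stream = io.StringIO()
handler = logging.StreamHandler(stream)
# 'ip' and 'user' are our own keys; they don't clash with reserved
# LogRecord attribute names such as 'msg', 'args' or 'levelname'.
handler.setFormatter(logging.Formatter('%(ip)s %(user)s %(message)s'))

logger = logging.getLogger('extra-demo')
logger.addHandler(handler)
logger.propagate = False
logger.warning('Protocol problem', extra={'ip': '192.168.0.1', 'user': 'fbloggs'})

print(stream.getvalue().strip())  # 192.168.0.1 fbloggs Protocol problem
```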
If you choose to use these attributes in logged messages, you need to exercise
Raised when some mailbox-related condition beyond the control of the program
causes it to be unable to proceed, such as when failing to acquire a lock that
- another program already holds a lock, or when a uniquely-generated file name
+ another program already holds a lock, or when a uniquely generated file name
already exists.
The :func:`erf` function can be used to compute traditional statistical
functions such as the `cumulative standard normal distribution
- <https://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution_function>`_::
+ <https://en.wikipedia.org/wiki/Normal_distribution#Cumulative_distribution_functions>`_::
def phi(x):
'Cumulative distribution function for the standard normal distribution'
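The body of ``phi`` is elided in this hunk; it can be completed with the standard identity Φ(x) = (1 + erf(x/√2))/2, as this sketch shows:

```python
from math import erf, sqrt

def phi(x):
    'Cumulative distribution function for the standard normal distribution'
    return (1.0 + erf(x / sqrt(2.0))) / 2.0

print(phi(0.0))             # 0.5 -- half the mass lies below the mean
print(round(phi(1.96), 3))  # 0.975
```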
To ensure validity of the created memory mapping the file specified
by the descriptor *fileno* is internally automatically synchronized
- with physical backing store on macOS and OpenVMS.
+ with the physical backing store on macOS.
This example shows a simple way of using :class:`~mmap.mmap`::
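The example itself is elided in this hunk; a minimal sketch of such a usage (the temporary file and its contents are arbitrary) might look like:

```python
import mmap
import os
import tempfile

# create a small file to map (a throwaway temporary file)
fd, path = tempfile.mkstemp()
os.write(fd, b"Hello Python!\n")
os.close(fd)

with open(path, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)   # map the whole file
    line = mm.readline()            # reads b'Hello Python!\n'
    mm[0:5] = b"Howdy"              # slice assignment writes through to the file
    mm.close()

with open(path, "rb") as f:
    contents = f.read()
os.remove(path)
print(line, contents)
```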
to start a process. These *start methods* are
*spawn*
- The parent process starts a fresh python interpreter process. The
+ The parent process starts a fresh Python interpreter process. The
child process will only inherit those resources necessary to run
the process object's :meth:`~Process.run` method. In particular,
unnecessary file descriptors and handles from the parent process
-:mod:`multiprocessing.shared_memory` --- Provides shared memory for direct access across processes
-===================================================================================================
+:mod:`multiprocessing.shared_memory` --- Shared memory for direct access across processes
+=========================================================================================
.. module:: multiprocessing.shared_memory
:synopsis: Provides shared memory for direct access across processes.
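As an illustration of what the module provides, here is a minimal sketch: one block is created, then attached to by name through a second handle (in a real program the second attach would happen in another process):

```python
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b"hello"
    # attach to the same block by name, as another process would
    attached = shared_memory.SharedMemory(name=shm.name)
    data = bytes(attached.buf[:5])
    attached.close()
finally:
    shm.close()
    shm.unlink()   # free the block once no one needs it

print(data)  # b'hello'
```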
line-wrapping---\ :mod:`optparse` takes care of wrapping lines and making
the help output look good.
-* options that take a value indicate this fact in their automatically-generated
+* options that take a value indicate this fact in their automatically generated
help message, e.g. for the "mode" option::
-m MODE, --mode=MODE
:mod:`optparse` converts the destination variable name to uppercase and uses
that for the meta-variable. Sometimes, that's not what you want---for
example, the ``--filename`` option explicitly sets ``metavar="FILE"``,
- resulting in this automatically-generated option description::
+ resulting in this automatically generated option description::
-f FILE, --filename=FILE
parser.add_option("-n", "--dry-run", ..., help="do no harm")
parser.add_option("-n", "--noisy", ..., help="be noisy")
-At this point, :mod:`optparse` detects that a previously-added option is already
+At this point, :mod:`optparse` detects that a previously added option is already
using the ``-n`` option string. Since ``conflict_handler`` is ``"resolve"``,
it resolves the situation by removing ``-n`` from the earlier option's list of
option strings. Now ``--dry-run`` is the only way for the user to activate
...
-n, --noisy be noisy
-It's possible to whittle away the option strings for a previously-added option
+It's possible to whittle away the option strings for a previously added option
until there are none left, and the user has no way of invoking that option from
the command line. In that case, :mod:`optparse` removes that option completely,
so it doesn't show up in help text or anywhere else. Carrying on with our
The *mode* parameter is passed to :func:`mkdir` for creating the leaf
directory; see :ref:`the mkdir() description <mkdir_modebits>` for how it
- is interpreted. To set the file permission bits of any newly-created parent
+ is interpreted. To set the file permission bits of any newly created parent
directories you can set the umask before invoking :func:`makedirs`. The
file permission bits of existing parent directories are not changed.
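A small sketch of the umask approach described above (POSIX-only; the directory names are arbitrary):

```python
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    old_umask = os.umask(0o077)   # clear group/other bits for new directories
    try:
        os.makedirs(os.path.join(tmp, "parent", "leaf"))
    finally:
        os.umask(old_umask)       # always restore the previous umask
    mode = stat.S_IMODE(os.stat(os.path.join(tmp, "parent")).st_mode)
    print(oct(mode))  # 0o700 on POSIX: the umask was applied to the parent
```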
.. versionchanged:: 3.7
The *mode* argument no longer affects the file permission bits of
- newly-created intermediate-level directories.
+ newly created intermediate-level directories.
.. function:: mkfifo(path, mode=0o666, *, dir_fd=None)
:exc:`IsADirectoryError` or a :exc:`NotADirectoryError` will be raised
respectively. If both are directories and *dst* is empty, *dst* will be
silently replaced. If *dst* is a non-empty directory, an :exc:`OSError`
- is raised. If both are files, *dst* it will be replaced silently if the user
+ is raised. If both are files, *dst* will be replaced silently if the user
has permission. The operation may fail on some Unix flavors if *src* and
*dst* are on different filesystems. If successful, the renaming will be an
atomic operation (this is a POSIX requirement).
:attr:`!children_system`, and :attr:`!elapsed` in that order.
See the Unix manual page
- :manpage:`times(2)` and :manpage:`times(3)` manual page on Unix or `the GetProcessTimes MSDN
:manpage:`times(2)` and `times(3) <https://www.freebsd.org/cgi/man.cgi?times(3)>`_ manual page on Unix or `the GetProcessTimes MSDN
<https://docs.microsoft.com/windows/win32/api/processthreadsapi/nf-processthreadsapi-getprocesstimes>`_
on Windows. On Windows, only :attr:`!user` and :attr:`!system` are known; the other attributes are zero.
PureWindowsPath('c:/Program Files')
Spurious slashes and single dots are collapsed, but double dots (``'..'``)
- are not, since this would change the meaning of a path in the face of
- symbolic links::
+ and leading double slashes (``'//'``) are not, since this would change the
+ meaning of a path for various reasons (e.g. symbolic links, UNC paths)::
>>> PurePath('foo//bar')
PurePosixPath('foo/bar')
+ >>> PurePath('//foo/bar')
+ PurePosixPath('//foo/bar')
>>> PurePath('foo/./bar')
PurePosixPath('foo/bar')
>>> PurePath('foo/../bar')
.. class:: PureWindowsPath(*pathsegments)
A subclass of :class:`PurePath`, this path flavour represents Windows
- filesystem paths::
+ filesystem paths, including `UNC paths`_::
>>> PureWindowsPath('c:/Program Files/')
PureWindowsPath('c:/Program Files')
+ >>> PureWindowsPath('//server/share/file')
+ PureWindowsPath('//server/share/file')
*pathsegments* is specified similarly to :class:`PurePath`.
+ .. _unc paths: https://en.wikipedia.org/wiki/Path_(computing)#UNC
+
Regardless of the system you're running on, you can instantiate all of
these classes, since they don't provide any operation that does system calls.
>>> PureWindowsPath('//host/share').root
'\\'
+ If the path starts with more than two successive slashes,
+ :class:`~pathlib.PurePosixPath` collapses them::
+
+ >>> PurePosixPath('//etc').root
+ '//'
+ >>> PurePosixPath('///etc').root
+ '/'
+ >>> PurePosixPath('////etc').root
+ '/'
+
+ .. note::
+
+ This behavior conforms to *The Open Group Base Specifications Issue 6*,
+ paragraph `4.11 Pathname Resolution
+ <https://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap04.html#tag_04_11>`_:
+
+ *"A pathname that begins with two successive slashes may be interpreted in
+ an implementation-defined manner, although more than two leading slashes
+ shall be treated as a single slash."*
+
.. data:: PurePath.anchor
The concatenation of the drive and root::
:func:`os.link` :meth:`Path.hardlink_to`
:func:`os.symlink` :meth:`Path.symlink_to`
:func:`os.readlink` :meth:`Path.readlink`
-:func:`os.path.relpath` :meth:`Path.relative_to` [#]_
+:func:`os.path.relpath` :meth:`PurePath.relative_to` [#]_
:func:`os.stat` :meth:`Path.stat`,
:meth:`Path.owner`,
:meth:`Path.group`
:func:`os.path.basename` :data:`PurePath.name`
:func:`os.path.dirname` :data:`PurePath.parent`
:func:`os.path.samefile` :meth:`Path.samefile`
-:func:`os.path.splitext` :data:`PurePath.suffix`
+:func:`os.path.splitext` :data:`PurePath.stem` and
+ :data:`PurePath.suffix`
==================================== ==============================
.. rubric:: Footnotes
.. [#] :func:`os.path.abspath` does not resolve symbolic links while :meth:`Path.resolve` does.
-.. [#] :meth:`Path.relative_to` requires ``self`` to be the subpath of the argument, but :func:`os.path.relpath` does not.
+.. [#] :meth:`PurePath.relative_to` requires ``self`` to be the subpath of the argument, but :func:`os.path.relpath` does not.
.. function:: machine()
- Returns the machine type, e.g. ``'i386'``. An empty string is returned if the
+ Returns the machine type, e.g. ``'AMD64'``. An empty string is returned if the
value cannot be determined.
.. sectionauthor:: Steve Clift <clift@mail.anacapa.net>
-Several operating systems (including AIX, HP-UX and Solaris) provide
+Several operating systems (including AIX and Solaris) provide
support for files that are larger than 2 GiB from a C programming model where
:c:type:`int` and :c:type:`long` are 32-bit values. This is typically accomplished
by defining the relevant size and offset types as 64-bit values. Such files are
in ``.pyc``.
For example, if *file* is ``/foo/bar/baz.py`` *cfile* will default to
``/foo/bar/__pycache__/baz.cpython-32.pyc`` for Python 3.2. If *dfile* is
- specified, it is used as the name of the source file in error messages
- instead of *file*. If *doraise* is true, a :exc:`PyCompileError` is raised
+ specified, it is used instead of *file* as the name of the source file from
+ which source lines are obtained for display in exception tracebacks.
+ If *doraise* is true, a :exc:`PyCompileError` is raised
when an error is encountered while compiling *file*. If *doraise* is false
(the default), an error string is written to ``sys.stderr``, but no exception
is raised. This function returns the path to the byte-compiled file, i.e.
Example of how to wait for enqueued tasks to be completed::
- import threading, queue
+ import threading
+ import queue
q = queue.Queue()
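The rest of the example is elided in this hunk; a self-contained sketch of the same pattern (``task_done``/``join``, with a ``None`` sentinel to stop the worker) could read:

```python
import threading
import queue

q = queue.Queue()
results = []

def worker():
    while True:
        item = q.get()
        if item is None:          # sentinel: stop the worker
            q.task_done()
            break
        results.append(item * 2)  # stand-in for real work
        q.task_done()

t = threading.Thread(target=worker)
t.start()
for n in range(5):
    q.put(n)
q.join()        # blocks until task_done() has been called for every item
q.put(None)
t.join()
print(sorted(results))  # [0, 2, 4, 6, 8]
```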
.. function:: choice(sequence)
- Return a randomly-chosen element from a non-empty sequence.
+ Return a randomly chosen element from a non-empty sequence.
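For instance, ``choice`` can be combined with ``str.join`` to build a random token (the alphabet here is arbitrary):

```python
import secrets
import string

alphabet = string.ascii_letters + string.digits
token = "".join(secrets.choice(alphabet) for _ in range(10))
print(token)  # e.g. 'D4kX9qLmZ2'
```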
.. function:: randbelow(n)
.. function:: compare_digest(a, b)
Return ``True`` if strings *a* and *b* are equal, otherwise ``False``,
- in such a way as to reduce the risk of
+ using a "constant-time compare" to reduce the risk of
`timing attacks <https://codahale.com/a-lesson-in-timing-attacks/>`_.
See :func:`hmac.compare_digest` for additional details.
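A quick sketch of the constant-time comparison (the second string is deliberately altered in its last character):

```python
import secrets

token = secrets.token_hex(16)
other = token[:-1] + ("0" if token[-1] != "0" else "1")

match = secrets.compare_digest(token, token)     # True
mismatch = secrets.compare_digest(token, other)  # False
print(match, mismatch)
```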
Applications should not
`store passwords in a recoverable format <http://cwe.mitre.org/data/definitions/257.html>`_,
whether plain text or encrypted. They should be salted and hashed
- using a cryptographically-strong one-way (irreversible) hash function.
+ using a cryptographically strong one-way (irreversible) hash function.
Generate a ten-character alphanumeric password with at least one
argument disabling known insecure and blocked algorithms
<hashlib-usedforsecurity>`
* :mod:`http.server` is not suitable for production use, only implementing
- basic security checks
+ basic security checks. See the :ref:`security considerations <http.server-security>`.
* :mod:`logging`: :ref:`Logging configuration uses eval()
<logging-eval-security>`
* :mod:`multiprocessing`: :ref:`Connection.recv() uses pickle
.. method:: devpoll.poll([timeout])
- Polls the set of registered file descriptors, and returns a possibly-empty list
+ Polls the set of registered file descriptors, and returns a possibly empty list
containing ``(fd, event)`` 2-tuples for the descriptors that have events or
errors to report. *fd* is the file descriptor, and *event* is a bitmask with
bits set for the reported events for that descriptor --- :const:`POLLIN` for
.. method:: poll.poll([timeout])
- Polls the set of registered file descriptors, and returns a possibly-empty list
+ Polls the set of registered file descriptors, and returns a possibly empty list
containing ``(fd, event)`` 2-tuples for the descriptors that have events or
errors to report. *fd* is the file descriptor, and *event* is a bitmask with
bits set for the reported events for that descriptor --- :const:`POLLIN` for
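A minimal POSIX-only sketch of ``poll()`` using a pipe, showing the empty list before data arrives and the ``(fd, event)`` tuple afterwards:

```python
import os
import select

r, w = os.pipe()
poller = select.poll()
poller.register(r, select.POLLIN)

events = poller.poll(0)      # nothing readable yet -> empty list
os.write(w, b"x")
ready = poller.poll(1000)    # now (r, POLLIN) is reported
os.close(w)
os.close(r)
print(events, ready)
```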
When *follow_symlinks* is false, and *src* is a symbolic
link, :func:`copy2` attempts to copy all metadata from the
- *src* symbolic link to the newly-created *dst* symbolic link.
+ *src* symbolic link to the newly created *dst* symbolic link.
However, this functionality is not available on all platforms.
On platforms where some or all of this functionality is
unavailable, :func:`copy2` will preserve all the metadata
.. note::
- This function is not thread-safe.
+ This function is not thread-safe when custom archivers registered
+ with :func:`register_archive_format` are used. In this case it
+ temporarily changes the current working directory of the process
+ to perform archiving.
.. versionchanged:: 3.8
The modern pax (POSIX.1-2001) format is now used instead of
the legacy GNU format for archives created with ``format="tar"``.
+ .. versionchanged:: 3.10.6
+ This function is now made thread-safe during creation of standard
+ ``.zip`` and tar archives.
.. function:: get_archive_formats()
def __enter__(self):
# If KeyboardInterrupt occurs here, everything is fine
self.lock.acquire()
- # If KeyboardInterrupt occcurs here, __exit__ will not be called
+ # If KeyboardInterrupt occurs here, __exit__ will not be called
...
# KeyboardInterrupt could occur just before the function returns
.. attribute:: fqdn
- Holds the fully-qualified domain name of the server as returned by
+ Holds the fully qualified domain name of the server as returned by
:func:`socket.getfqdn`.
.. attribute:: peer
- A string or a tuple ``(id, unit)`` is used for the :const:`SYSPROTO_CONTROL`
protocol of the :const:`PF_SYSTEM` family. The string is the name of a
- kernel control using a dynamically-assigned ID. The tuple can be used if ID
+ kernel control using a dynamically assigned ID. The tuple can be used if ID
and unit number of the kernel control are known or if a registered ID is
used.
.. function:: getnameinfo(sockaddr, flags)
Translate a socket address *sockaddr* into a 2-tuple ``(host, port)``. Depending
- on the settings of *flags*, the result can contain a fully-qualified domain name
+ on the settings of *flags*, the result can contain a fully qualified domain name
or numeric address representation in *host*. Similarly, *port* can contain a
string port name or a numeric port number.
# Include IP headers
s.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1)
- # receive all packages
+ # receive all packets
s.ioctl(socket.SIO_RCVALL, socket.RCVALL_ON)
- # receive a package
+ # receive a packet
print(s.recvfrom(65565))
# disabled promiscuous mode
**Source code:** :source:`Lib/sqlite3/`
---------------
+
+.. _sqlite3-intro:
+
+Introduction
+------------
SQLite is a C library that provides a lightweight disk-based database that
doesn't require a separate server process and allows accessing the database
compliant with the DB-API 2.0 specification described by :pep:`249`, and
requires SQLite 3.7.15 or newer.
+This document includes four main sections:
+
+* :ref:`sqlite3-tutorial` teaches how to use the sqlite3 module.
+* :ref:`sqlite3-reference` describes the classes and functions this module
+ defines.
+* :ref:`sqlite3-howtos` details how to handle specific tasks.
+* :ref:`sqlite3-explanation` provides in-depth background on
+ transaction control.
+
+
+.. _sqlite3-tutorial:
+
+Tutorial
+--------
+
To use the module, start by creating a :class:`Connection` object that
represents the database. Here the data will be stored in the
:file:`example.db` file::
con = sqlite3.connect('example.db')
cur = con.cursor()
-To retrieve data after executing a SELECT statement, either treat the cursor as
-an :term:`iterator`, call the cursor's :meth:`~Cursor.fetchone` method to
-retrieve a single matching row, or call :meth:`~Cursor.fetchall` to get a list
-of the matching rows.
+At this point, our database only contains one row::
-This example uses the iterator form::
+ >>> res = cur.execute('SELECT count(rowid) FROM stocks')
+ >>> print(res.fetchone())
+ (1,)
+
+The result is a one-item :class:`tuple`:
+one row, with one column.
+Now, let us insert three more rows of data,
+using :meth:`~Cursor.executemany`::
+
+ >>> data = [
+ ... ('2006-03-28', 'BUY', 'IBM', 1000, 45.0),
+ ... ('2006-04-05', 'BUY', 'MSFT', 1000, 72.0),
+ ... ('2006-04-06', 'SELL', 'IBM', 500, 53.0),
+ ... ]
+ >>> cur.executemany('INSERT INTO stocks VALUES(?, ?, ?, ?, ?)', data)
+
+Then, retrieve the data by iterating over the result of a ``SELECT`` statement::
>>> for row in cur.execute('SELECT * FROM stocks ORDER BY price'):
- print(row)
+ ... print(row)
('2006-01-05', 'BUY', 'RHAT', 100, 35.14)
('2006-03-28', 'BUY', 'IBM', 1000, 45.0)
PEP written by Marc-André Lemburg.
+.. _sqlite3-reference:
+
+Reference
+---------
+
.. _sqlite3-module-contents:
Module functions and constants
-------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. data:: apilevel
.. data:: version
- The version number of this module, as a string. This is not the version of
- the SQLite library.
+ Version number of this module as a :class:`string <str>`.
+ This is not the version of the SQLite library.
.. data:: version_info
- The version number of this module, as a tuple of integers. This is not the
- version of the SQLite library.
+ Version number of this module as a :class:`tuple` of :class:`integers <int>`.
+ This is not the version of the SQLite library.
.. data:: sqlite_version
- The version number of the run-time SQLite library, as a string.
+ Version number of the runtime SQLite library as a :class:`string <str>`.
.. data:: sqlite_version_info
- The version number of the run-time SQLite library, as a tuple of integers.
+ Version number of the runtime SQLite library as a :class:`tuple` of
+ :class:`integers <int>`.
.. data:: threadsafety
.. data:: PARSE_DECLTYPES
- This constant is meant to be used with the *detect_types* parameter of the
- :func:`connect` function.
+ Pass this flag value to the *detect_types* parameter of
+ :func:`connect` to look up a converter function using
+ the declared types for each column.
+ The types are declared when the database table is created.
+ ``sqlite3`` will look up a converter function using the first word of the
+ declared type as the converter dictionary key.
+ For example:
+
+ .. code-block:: sql
- Setting it makes the :mod:`sqlite3` module parse the declared type for each
- column it returns. It will parse out the first word of the declared type,
- i. e. for "integer primary key", it will parse out "integer", or for
- "number(10)" it will parse out "number". Then for that column, it will look
- into the converters dictionary and use the converter function registered for
- that type there.
+ CREATE TABLE test(
+ i integer primary key, -- will look up a converter named "integer"
+ p point, -- will look up a converter named "point"
+ n number(10) -- will look up a converter named "number"
+ )
+
+ This flag may be combined with :const:`PARSE_COLNAMES` using the ``|``
+ (bitwise or) operator.
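A sketch of how the declared-type lookup plays out in practice (the ``point`` converter and its ``x;y`` text encoding are hypothetical):

```python
import sqlite3

# hypothetical converter: map the declared type "point" to a tuple of floats
sqlite3.register_converter("point", lambda b: tuple(map(float, b.split(b";"))))

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("CREATE TABLE test(p point)")
con.execute("INSERT INTO test(p) VALUES ('4.0;-3.2')")
row = con.execute("SELECT p FROM test").fetchone()
con.close()
print(row)  # ((4.0, -3.2),)
```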
.. data:: PARSE_COLNAMES
- This constant is meant to be used with the *detect_types* parameter of the
- :func:`connect` function.
-
- Setting this makes the SQLite interface parse the column name for each column it
- returns. It will look for a string formed [mytype] in there, and then decide
- that 'mytype' is the type of the column. It will try to find an entry of
- 'mytype' in the converters dictionary and then use the converter function found
- there to return the value. The column name found in :attr:`Cursor.description`
- does not include the type, i. e. if you use something like
- ``'as "Expiration date [datetime]"'`` in your SQL, then we will parse out
- everything until the first ``'['`` for the column name and strip
- the preceding space: the column name would simply be "Expiration date".
-
-
-.. function:: connect(database[, timeout, detect_types, isolation_level, check_same_thread, factory, cached_statements, uri])
-
- Opens a connection to the SQLite database file *database*. By default returns a
- :class:`Connection` object, unless a custom *factory* is given.
-
- *database* is a :term:`path-like object` giving the pathname (absolute or
- relative to the current working directory) of the database file to be opened.
- You can use ``":memory:"`` to open a database connection to a database that
- resides in RAM instead of on disk.
-
- When a database is accessed by multiple connections, and one of the processes
- modifies the database, the SQLite database is locked until that transaction is
- committed. The *timeout* parameter specifies how long the connection should wait
- for the lock to go away until raising an exception. The default for the timeout
- parameter is 5.0 (five seconds).
-
- For the *isolation_level* parameter, please see the
- :attr:`~Connection.isolation_level` property of :class:`Connection` objects.
-
- SQLite natively supports only the types TEXT, INTEGER, REAL, BLOB and NULL. If
- you want to use other types you must add support for them yourself. The
- *detect_types* parameter and the using custom **converters** registered with the
- module-level :func:`register_converter` function allow you to easily do that.
-
- *detect_types* defaults to 0 (i. e. off, no type detection), you can set it to
- any combination of :const:`PARSE_DECLTYPES` and :const:`PARSE_COLNAMES` to turn
- type detection on. Due to SQLite behaviour, types can't be detected for generated
- fields (for example ``max(data)``), even when *detect_types* parameter is set. In
- such case, the returned type is :class:`str`.
-
- By default, *check_same_thread* is :const:`True` and only the creating thread may
- use the connection. If set :const:`False`, the returned connection may be shared
- across multiple threads. When using multiple threads with the same connection
- writing operations should be serialized by the user to avoid data corruption.
-
- By default, the :mod:`sqlite3` module uses its :class:`Connection` class for the
- connect call. You can, however, subclass the :class:`Connection` class and make
- :func:`connect` use your class instead by providing your class for the *factory*
- parameter.
-
- Consult the section :ref:`sqlite3-types` of this manual for details.
-
- The :mod:`sqlite3` module internally uses a statement cache to avoid SQL parsing
- overhead. If you want to explicitly set the number of statements that are cached
- for the connection, you can set the *cached_statements* parameter. The currently
- implemented default is to cache 100 statements.
-
- If *uri* is :const:`True`, *database* is interpreted as a
- :abbr:`URI (Uniform Resource Identifier)` with a file path and an optional
- query string. The scheme part *must* be ``"file:"``. The path can be a
- relative or absolute file path. The query string allows us to pass
- parameters to SQLite. Some useful URI tricks include::
-
- # Open a database in read-only mode.
- con = sqlite3.connect("file:template.db?mode=ro", uri=True)
-
- # Don't implicitly create a new database file if it does not already exist.
- # Will raise sqlite3.OperationalError if unable to open a database file.
- con = sqlite3.connect("file:nosuchdb.db?mode=rw", uri=True)
-
- # Create a shared named in-memory database.
- con1 = sqlite3.connect("file:mem1?mode=memory&cache=shared", uri=True)
- con2 = sqlite3.connect("file:mem1?mode=memory&cache=shared", uri=True)
- con1.executescript("create table t(t); insert into t values(28);")
- rows = con2.execute("select * from t").fetchall()
-
- More information about this feature, including a list of recognized
- parameters, can be found in the
- `SQLite URI documentation <https://www.sqlite.org/uri.html>`_.
+ Pass this flag value to the *detect_types* parameter of
+ :func:`connect` to look up a converter function by
+ using the type name, parsed from the query column name,
+ as the converter dictionary key.
+ The type name must be wrapped in square brackets (``[]``).
+
+ .. code-block:: sql
+
+ SELECT p as "p [point]" FROM test; -- will look up converter "point"
+
+ This flag may be combined with :const:`PARSE_DECLTYPES` using the ``|``
+ (bitwise or) operator.
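A sketch of the column-name form (the ``upper`` converter is hypothetical; note the ``[upper]`` suffix in the column alias):

```python
import sqlite3

sqlite3.register_converter("upper", lambda b: b.decode().upper())

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_COLNAMES)
row = con.execute("""SELECT 'spam' AS "x [upper]" """).fetchone()
con.close()
print(row)  # ('SPAM',)
```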
+
+
+.. function:: connect(database, timeout=5.0, detect_types=0, isolation_level="DEFERRED", check_same_thread=True, factory=sqlite3.Connection, cached_statements=128, uri=False)
+
+ Open a connection to an SQLite database.
+
+ :param database:
+ The path to the database file to be opened.
+ Pass ``":memory:"`` to open a connection to a database that is
+ in RAM instead of on disk.
+ :type database: :term:`path-like object`
+
+ :param timeout:
+ How many seconds the connection should wait before raising
+ an exception, if the database is locked by another connection.
+ If another connection opens a transaction to modify the database,
+ it will be locked until that transaction is committed.
+ Default five seconds.
+ :type timeout: float
+
+ :param detect_types:
+ Control whether and how data types not
+ :ref:`natively supported by SQLite <sqlite3-types>`
+ are looked up to be converted to Python types,
+ using the converters registered with :func:`register_converter`.
+ Set it to any combination (using ``|``, bitwise or) of
+ :const:`PARSE_DECLTYPES` and :const:`PARSE_COLNAMES`
+ to enable this.
+ Column names take precedence over declared types if both flags are set.
+ Types cannot be detected for generated fields (for example ``max(data)``),
+ even when the *detect_types* parameter is set; :class:`str` will be
+ returned instead.
+ By default (``0``), type detection is disabled.
+ :type detect_types: int
+
+ :param isolation_level:
+ The :attr:`~Connection.isolation_level` of the connection,
+ controlling whether and how transactions are implicitly opened.
+ Can be ``"DEFERRED"`` (default), ``"EXCLUSIVE"`` or ``"IMMEDIATE"``;
+ or :const:`None` to disable opening transactions implicitly.
+ See :ref:`sqlite3-controlling-transactions` for more.
+ :type isolation_level: str | :const:`None`
+
+ :param check_same_thread:
+ If :const:`True` (default), only the creating thread may use the connection.
+ If :const:`False`, the connection may be shared across multiple threads;
+ if so, write operations should be serialized by the user to avoid data
+ corruption.
+ :type check_same_thread: bool
+
+ :param factory:
+ A custom subclass of :class:`Connection` to create the connection with,
+ if not the default :class:`Connection` class.
+ :type factory: :class:`Connection`
+
+ :param cached_statements:
+ The number of statements that ``sqlite3``
+ should internally cache for this connection, to avoid parsing overhead.
+ By default, 128 statements.
+ :type cached_statements: int
+
+ :param uri:
+ If set to :const:`True`, *database* is interpreted as a
+ :abbr:`URI (Uniform Resource Identifier)` with a file path
+ and an optional query string.
+ The scheme part *must* be ``"file:"``,
+ and the path can be relative or absolute.
+ The query string allows passing parameters to SQLite,
+ enabling various :ref:`sqlite3-uri-tricks`.
+ :type uri: bool
+
+ :rtype: Connection
.. audit-event:: sqlite3.connect database sqlite3.connect
.. audit-event:: sqlite3.connect/handle connection_handle sqlite3.connect
- .. versionchanged:: 3.4
- Added the *uri* parameter.
+ .. versionadded:: 3.4
+ The *uri* parameter.
.. versionchanged:: 3.7
*database* can now also be a :term:`path-like object`, not only a string.
- .. versionchanged:: 3.10
- Added the ``sqlite3.connect/handle`` auditing event.
+ .. versionadded:: 3.10
+ The ``sqlite3.connect/handle`` auditing event.
+
+.. function:: register_converter(typename, converter, /)
-.. function:: register_converter(typename, callable)
+ Register the *converter* callable to convert SQLite objects of type
+ *typename* into a Python object of a specific type.
+ The converter is invoked for all SQLite values of type *typename*;
+ it is passed a :class:`bytes` object and should return an object of the
+ desired Python type.
+ Consult the parameter *detect_types* of
+ :func:`connect` for information regarding how type detection works.
- Registers a callable to convert a bytestring from the database into a custom
- Python type. The callable will be invoked for all database values that are of
- the type *typename*. Confer the parameter *detect_types* of the :func:`connect`
- function for how the type detection works. Note that *typename* and the name of
- the type in your query are matched in case-insensitive manner.
+ Note: *typename* and the name of the type in your query are matched
+ case-insensitively.
-.. function:: register_adapter(type, callable)
+.. function:: register_adapter(type, adapter, /)
- Registers a callable to convert the custom Python type *type* into one of
- SQLite's supported types. The callable *callable* accepts as single parameter
- the Python value, and must return a value of the following types: int,
- float, str or bytes.
+ Register an *adapter* callable to adapt the Python type *type* into an
+ SQLite type.
+ The adapter is called with a Python object of type *type* as its sole
+ argument, and must return a value of a
+ :ref:`type that SQLite natively understands<sqlite3-types>`.
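A small sketch of an adapter (the ``Point`` class and its ``x;y`` text encoding are hypothetical):

```python
import sqlite3

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

# adapt Point to a TEXT value that SQLite natively understands
sqlite3.register_adapter(Point, lambda p: f"{p.x};{p.y}")

con = sqlite3.connect(":memory:")
value = con.execute("SELECT ?", (Point(4.0, -3.2),)).fetchone()[0]
con.close()
print(value)  # '4.0;-3.2'
```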
-.. function:: complete_statement(sql)
+.. function:: complete_statement(statement)
- Returns :const:`True` if the string *sql* contains one or more complete SQL
+ Returns :const:`True` if the string *statement* contains one or more complete SQL
statements terminated by semicolons. It does not verify that the SQL is
syntactically correct, only that there are no unclosed string literals and the
statement is terminated by a semicolon.
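The behaviour described above can be sketched directly:

```python
import sqlite3

print(sqlite3.complete_statement("SELECT 1;"))           # True
print(sqlite3.complete_statement("SELECT 1"))            # False: no semicolon
print(sqlite3.complete_statement("SELECT 'unclosed;"))   # False: open string literal
```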
.. literalinclude:: ../includes/sqlite3/complete_statement.py
-.. function:: enable_callback_tracebacks(flag)
+.. function:: enable_callback_tracebacks(flag, /)
+ Enable or disable callback tracebacks.
By default you will not get any tracebacks in user-defined functions,
aggregates, converters, authorizer callbacks etc. If you want to debug them,
you can call this function with *flag* set to ``True``. Afterwards, you will
.. _sqlite3-connection-objects:
-Connection Objects
-------------------
+Connection objects
+^^^^^^^^^^^^^^^^^^
.. class:: Connection
+ Each open SQLite database is represented by a ``Connection`` object,
+ which is created using :func:`sqlite3.connect`.
+ Their main purpose is creating :class:`Cursor` objects,
+ and :ref:`sqlite3-controlling-transactions`.
+
+ .. seealso::
+
+ * :ref:`sqlite3-connection-shortcuts`
+ * :ref:`sqlite3-connection-context-manager`
+
An SQLite database connection has the following attributes and methods:
.. attribute:: isolation_level
- Get or set the current default isolation level. :const:`None` for autocommit mode or
- one of "DEFERRED", "IMMEDIATE" or "EXCLUSIVE". See section
- :ref:`sqlite3-controlling-transactions` for a more detailed explanation.
+ This attribute controls the :ref:`transaction handling
+ <sqlite3-controlling-transactions>` performed by ``sqlite3``.
+ If set to :const:`None`, transactions are never implicitly opened.
+ If set to one of ``"DEFERRED"``, ``"IMMEDIATE"``, or ``"EXCLUSIVE"``,
+ corresponding to the underlying `SQLite transaction behaviour`_,
+ implicit :ref:`transaction management
+ <sqlite3-controlling-transactions>` is performed.
+
+ If not overridden by the *isolation_level* parameter of :func:`connect`,
+ the default is ``""``, which is an alias for ``"DEFERRED"``.
.. attribute:: in_transaction
+ This read-only attribute corresponds to the low-level SQLite
+ `autocommit mode`_.
+
:const:`True` if a transaction is active (there are uncommitted changes),
- :const:`False` otherwise. Read-only attribute.
+ :const:`False` otherwise.
.. versionadded:: 3.2
.. method:: cursor(factory=Cursor)
+ Create and return a :class:`Cursor` object.
The cursor method accepts a single optional parameter *factory*. If
supplied, this must be a callable returning an instance of :class:`Cursor`
or its subclasses.
.. method:: commit()
- This method commits the current transaction. If you don't call this method,
- anything you did since the last call to ``commit()`` is not visible from
- other database connections. If you wonder why you don't see the data you've
- written to the database, please check you didn't forget to call this method.
+ Commit any pending transaction to the database.
+ If there is no open transaction, this method is a no-op.
.. method:: rollback()
- This method rolls back any changes to the database since the last call to
- :meth:`commit`.
+ Roll back to the start of any pending transaction.
+ If there is no open transaction, this method is a no-op.
.. method:: close()
- This closes the database connection. Note that this does not automatically
- call :meth:`commit`. If you just close your database connection without
- calling :meth:`commit` first, your changes will be lost!
+ Close the database connection.
+ Any pending transaction is not committed implicitly;
+ make sure to :meth:`commit` before closing
+ to avoid losing pending changes.
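A short sketch of the commit-then-close pattern described above (table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table lang(name)")
con.execute("insert into lang values ('Python')")

# The INSERT opened an implicit transaction; commit it before
# closing, or pending changes would be lost for on-disk databases.
assert con.in_transaction
con.commit()
assert not con.in_transaction

con.close()
```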
- .. method:: execute(sql[, parameters])
+ .. method:: execute(sql, parameters=(), /)
Create a new :class:`Cursor` object and call
:meth:`~Cursor.execute` on it with the given *sql* and *parameters*.
Return the new cursor object.
- .. method:: executemany(sql[, parameters])
+ .. method:: executemany(sql, parameters, /)
Create a new :class:`Cursor` object and call
:meth:`~Cursor.executemany` on it with the given *sql* and *parameters*.
Return the new cursor object.
- .. method:: executescript(sql_script)
+ .. method:: executescript(sql_script, /)
Create a new :class:`Cursor` object and call
:meth:`~Cursor.executescript` on it with the given *sql_script*.
Return the new cursor object.
- .. method:: create_function(name, num_params, func, *, deterministic=False)
+ .. method:: create_function(name, narg, func, *, deterministic=False)
+
+ Create or remove a user-defined SQL function.
- Creates a user-defined function that you can later use from within SQL
- statements under the function name *name*. *num_params* is the number of
- parameters the function accepts (if *num_params* is -1, the function may
- take any number of arguments), and *func* is a Python callable that is
- called as the SQL function. If *deterministic* is true, the created function
- is marked as `deterministic <https://sqlite.org/deterministic.html>`_, which
- allows SQLite to perform additional optimizations. This flag is supported by
- SQLite 3.8.3 or higher, :exc:`NotSupportedError` will be raised if used
- with older versions.
+ :param name:
+ The name of the SQL function.
+ :type name: str
- The function can return any of the types supported by SQLite: bytes, str, int,
- float and ``None``.
+ :param narg:
+ The number of arguments the SQL function can accept.
+ If ``-1``, it may take any number of arguments.
+ :type narg: int
- .. versionchanged:: 3.8
- The *deterministic* parameter was added.
+ :param func:
+ A callable that is called when the SQL function is invoked.
+ The callable must return :ref:`a type natively supported by SQLite
+ <sqlite3-types>`.
+ Set to :const:`None` to remove an existing SQL function.
+ :type func: :term:`callback` | :const:`None`
+
+ :param deterministic:
+ If :const:`True`, the created SQL function is marked as
+ `deterministic <https://sqlite.org/deterministic.html>`_,
+ which allows SQLite to perform additional optimizations.
+ :type deterministic: bool
+
+ :raises NotSupportedError:
+ If *deterministic* is used with SQLite versions older than 3.8.3.
+
+ .. versionadded:: 3.8
+ The *deterministic* parameter.
Example:
.. literalinclude:: ../includes/sqlite3/md5func.py
- .. method:: create_aggregate(name, num_params, aggregate_class)
+ .. method:: create_aggregate(name, /, n_arg, aggregate_class)
+
+ Create or remove a user-defined SQL aggregate function.
+
+ :param name:
+ The name of the SQL aggregate function.
+ :type name: str
+
+ :param n_arg:
+ The number of arguments the SQL aggregate function can accept.
+ If ``-1``, it may take any number of arguments.
+ :type n_arg: int
+
+ :param aggregate_class:
+ A class must implement the following methods:
- Creates a user-defined aggregate function.
+ * ``step()``: Add a row to the aggregate.
+ * ``finalize()``: Return the final result of the aggregate as
+ :ref:`a type natively supported by SQLite <sqlite3-types>`.
- The aggregate class must implement a ``step`` method, which accepts the number
- of parameters *num_params* (if *num_params* is -1, the function may take
- any number of arguments), and a ``finalize`` method which will return the
- final result of the aggregate.
+ The number of arguments that the ``step()`` method must accept
+ is controlled by *n_arg*.
- The ``finalize`` method can return any of the types supported by SQLite:
- bytes, str, int, float and ``None``.
+ Set to :const:`None` to remove an existing SQL aggregate function.
+ :type aggregate_class: :term:`class` | :const:`None`
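As a minimal sketch of the parameters described above, a hypothetical single-argument aggregate computing a sum of squares could look like this (the class and function names are illustrative, not part of the module):

```python
import sqlite3

class SumSquares:
    """Aggregate whose step() takes one argument (n_arg == 1)."""
    def __init__(self):
        self.total = 0

    def step(self, value):
        self.total += value * value

    def finalize(self):
        return self.total  # an int, natively supported by SQLite

con = sqlite3.connect(":memory:")
con.create_aggregate("sumsquares", 1, SumSquares)
con.execute("create table t(x)")
con.executemany("insert into t values (?)", [(1,), (2,), (3,)])
cur = con.execute("select sumsquares(x) from t")
print(cur.fetchone()[0])  # -> 14
con.close()
```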
Example:
.. method:: interrupt()
- You can call this method from a different thread to abort any queries that might
- be executing on the connection. The query will then abort and the caller will
- get an exception.
+ Call this method from a different thread to abort any queries that might
+ be executing on the connection.
+ Aborted queries will raise an exception.
.. method:: set_authorizer(authorizer_callback)
- This routine registers a callback. The callback is invoked for each attempt to
+ Register callable *authorizer_callback* to be invoked for each attempt to
access a column of a table in the database. The callback should return
:const:`SQLITE_OK` if access is allowed, :const:`SQLITE_DENY` if the entire SQL
statement should be aborted with an error and :const:`SQLITE_IGNORE` if the
one. All necessary constants are available in the :mod:`sqlite3` module.
- .. method:: set_progress_handler(handler, n)
+ .. method:: set_progress_handler(progress_handler, n)
- This routine registers a callback. The callback is invoked for every *n*
+ Register callable *progress_handler* to be invoked for every *n*
instructions of the SQLite virtual machine. This is useful if you want to
get called from SQLite during long-running operations, for example to update
a GUI.
If you want to clear any previously installed progress handler, call the
- method with :const:`None` for *handler*.
+ method with :const:`None` for *progress_handler*.
Returning a non-zero value from the handler function will terminate the
currently executing query and cause it to raise an :exc:`OperationalError`
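A minimal sketch of registering and clearing a progress handler; the handler name and the small *n* are chosen here only so the handler fires during a modest workload:

```python
import sqlite3

con = sqlite3.connect(":memory:")
calls = []

def progress():
    calls.append(1)
    return 0  # return non-zero to abort the running query

# Invoke the handler every 5 SQLite virtual machine instructions.
con.set_progress_handler(progress, 5)

con.execute("create table t(x)")
con.executemany("insert into t values (?)", [(i,) for i in range(1000)])
assert len(calls) > 0  # the handler was invoked during the inserts

# Clear the previously installed handler.
con.set_progress_handler(None, 5)
con.close()
```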
.. method:: set_trace_callback(trace_callback)
- Registers *trace_callback* to be called for each SQL statement that is
- actually executed by the SQLite backend.
+ Register callable *trace_callback* to be invoked for each SQL statement
+ that is actually executed by the SQLite backend.
The only argument passed to the callback is the statement (as
:class:`str`) that is being executed. The return value of the callback is
.. versionadded:: 3.3
- .. method:: enable_load_extension(enabled)
+ .. method:: enable_load_extension(enabled, /)
- This routine allows/disallows the SQLite engine to load SQLite extensions
- from shared libraries. SQLite extensions can define new functions,
+ Enable the SQLite engine to load SQLite extensions from shared libraries
+ if *enabled* is :const:`True`;
+ else, disallow loading SQLite extensions.
+ SQLite extensions can define new functions,
aggregates or whole new virtual table implementations. One well-known
extension is the fulltext-search extension distributed with SQLite.
- Loadable extensions are disabled by default. See [#f1]_.
+ .. note::
+
+ The ``sqlite3`` module is not built with loadable extension support by
+ default, because some platforms (notably macOS) have SQLite
+ libraries which are compiled without this feature.
+ To get loadable extension support,
+ you must pass the :option:`--enable-loadable-sqlite-extensions` option
+ to :program:`configure`.
- .. audit-event:: sqlite3.enable_load_extension connection,enabled sqlite3.enable_load_extension
+ .. audit-event:: sqlite3.enable_load_extension connection,enabled sqlite3.Connection.enable_load_extension
.. versionadded:: 3.2
.. literalinclude:: ../includes/sqlite3/load_extension.py
- .. method:: load_extension(path)
+ .. method:: load_extension(path, /)
- This routine loads an SQLite extension from a shared library. You have to
- enable extension loading with :meth:`enable_load_extension` before you can
- use this routine.
+ Load an SQLite extension from a shared library located at *path*.
+ Enable extension loading with :meth:`enable_load_extension` before
+ calling this method.
- Loadable extensions are disabled by default. See [#f1]_.
-
- .. audit-event:: sqlite3.load_extension connection,path sqlite3.load_extension
+ .. audit-event:: sqlite3.load_extension connection,path sqlite3.Connection.load_extension
.. versionadded:: 3.2
.. attribute:: row_factory
- You can change this attribute to a callable that accepts the cursor and the
- original row as a tuple and will return the real result row. This way, you can
- implement more advanced ways of returning results, such as returning an object
- that can also access columns by name.
+ A callable that accepts two arguments,
+ a :class:`Cursor` object and the raw row results as a :class:`tuple`,
+ and returns a custom object representing an SQLite row.
Example:
If returning a tuple doesn't suffice and you want name-based access to
columns, you should consider setting :attr:`row_factory` to the
- highly-optimized :class:`sqlite3.Row` type. :class:`Row` provides both
+ highly optimized :class:`sqlite3.Row` type. :class:`Row` provides both
index-based and case-insensitive name-based access to columns with almost no
memory overhead. It will probably be better than your own custom
dictionary-based approach or even a db_row based solution.
.. attribute:: text_factory
- Using this attribute you can control what objects are returned for the ``TEXT``
- data type. By default, this attribute is set to :class:`str` and the
- :mod:`sqlite3` module will return :class:`str` objects for ``TEXT``.
- If you want to return :class:`bytes` instead, you can set it to :class:`bytes`.
-
- You can also set it to any other callable that accepts a single bytestring
- parameter and returns the resulting object.
+ A callable that accepts a :class:`bytes` parameter and returns a text
+ representation of it.
+ The callable is invoked for SQLite values with the ``TEXT`` data type.
+ By default, this attribute is set to :class:`str`.
+ If you want to return ``bytes`` instead, set *text_factory* to ``bytes``.
- See the following example code for illustration:
+ Example:
.. literalinclude:: ../includes/sqlite3/text_factory.py
.. attribute:: total_changes
- Returns the total number of database rows that have been modified, inserted, or
+ Return the total number of database rows that have been modified, inserted, or
deleted since the database connection was opened.
.. method:: iterdump
- Returns an iterator to dump the database in an SQL text format. Useful when
- saving an in-memory database for later restoration. This function provides
- the same capabilities as the :kbd:`.dump` command in the :program:`sqlite3`
- shell.
+ Return an :term:`iterator` to dump the database as SQL source code.
+ Useful when saving an in-memory database for later restoration.
+ Similar to the ``.dump`` command in the :program:`sqlite3` shell.
Example::
.. method:: backup(target, *, pages=-1, progress=None, name="main", sleep=0.250)
- This method makes a backup of an SQLite database even while it's being accessed
- by other clients, or concurrently by the same connection. The copy will be
- written into the mandatory argument *target*, that must be another
- :class:`Connection` instance.
-
- By default, or when *pages* is either ``0`` or a negative integer, the entire
- database is copied in a single step; otherwise the method performs a loop
- copying up to *pages* pages at a time.
-
- If *progress* is specified, it must either be ``None`` or a callable object that
- will be executed at each iteration with three integer arguments, respectively
- the *status* of the last iteration, the *remaining* number of pages still to be
- copied and the *total* number of pages.
-
- The *name* argument specifies the database name that will be copied: it must be
- a string containing either ``"main"``, the default, to indicate the main
- database, ``"temp"`` to indicate the temporary database or the name specified
- after the ``AS`` keyword in an ``ATTACH DATABASE`` statement for an attached
- database.
-
- The *sleep* argument specifies the number of seconds to sleep by between
- successive attempts to backup remaining pages, can be specified either as an
- integer or a floating point value.
+ Create a backup of an SQLite database.
+
+ Works even if the database is being accessed by other clients
+ or concurrently by the same connection.
+
+ :param target:
+ The database connection to save the backup to.
+ :type target: Connection
+
+ :param pages:
+ The number of pages to copy at a time.
+ If equal to or less than ``0``,
+ the entire database is copied in a single step.
+ Defaults to ``-1``.
+ :type pages: int
+
+ :param progress:
+ If set to a callable, it is invoked with three integer arguments for
+ every backup iteration:
+ the *status* of the last iteration,
+ the *remaining* number of pages still to be copied,
+ and the *total* number of pages.
+ Defaults to :const:`None`.
+ :type progress: :term:`callback` | :const:`None`
+
+ :param name:
+ The name of the database to back up.
+ Either ``"main"`` (the default) for the main database,
+ ``"temp"`` for the temporary database,
+ or the name of a custom database as attached using the
+ ``ATTACH DATABASE`` SQL statement.
+ :type name: str
+
+ :param sleep:
+ The number of seconds to sleep between successive attempts
+ to back up remaining pages.
+ :type sleep: float
Example 1, copy an existing database into another::
.. _sqlite3-cursor-objects:
-Cursor Objects
---------------
+Cursor objects
+^^^^^^^^^^^^^^
+
+ A ``Cursor`` object represents a `database cursor`_
+ which is used to execute SQL statements,
+ and manage the context of a fetch operation.
+ Cursors are created using :meth:`Connection.cursor`,
+ or by using any of the :ref:`connection shortcut methods
+ <sqlite3-connection-shortcuts>`.
+
+ Cursor objects are :term:`iterators <iterator>`,
+ meaning that if you :meth:`~Cursor.execute` a ``SELECT`` query,
+ you can simply iterate over the cursor to fetch the resulting rows::
+
+ for row in cur.execute("select * from data"):
+ print(row)
+
+ .. _database cursor: https://en.wikipedia.org/wiki/Cursor_(databases)
.. class:: Cursor
.. index:: single: ? (question mark); in SQL statements
.. index:: single: : (colon); in SQL statements
- .. method:: execute(sql[, parameters])
+ .. method:: execute(sql, parameters=(), /)
- Executes an SQL statement. Values may be bound to the statement using
- :ref:`placeholders <sqlite3-placeholders>`.
+ Execute SQL statement *sql*.
+ Bind values to the statement using :ref:`placeholders
+ <sqlite3-placeholders>` that map to the :term:`sequence` or :class:`dict`
+ *parameters*.
:meth:`execute` will only execute a single SQL statement. If you try to execute
- more than one statement with it, it will raise a :exc:`.Warning`. Use
+ more than one statement with it, it will raise a :exc:`Warning`. Use
:meth:`executescript` if you want to execute multiple SQL statements with one
call.
+ If :attr:`~Connection.isolation_level` is not :const:`None`,
+ *sql* is an ``INSERT``, ``UPDATE``, ``DELETE``, or ``REPLACE`` statement,
+ and there is no open transaction,
+ a transaction is implicitly opened before executing *sql*.
+
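The placeholder binding described above can be sketched like this, using both the question mark and named styles (table and column names are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table lang(name, first_appeared)")

# qmark style: bind a sequence of parameters
cur.execute("insert into lang values (?, ?)", ("C", 1972))

# named style: bind a dict of parameters
cur.execute("insert into lang values (:name, :year)",
            {"name": "Python", "year": 1991})

cur.execute("select name from lang order by first_appeared")
print(cur.fetchall())  # -> [('C',), ('Python',)]
con.close()
```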
- .. method:: executemany(sql, seq_of_parameters)
+ .. method:: executemany(sql, parameters, /)
- Executes a :ref:`parameterized <sqlite3-placeholders>` SQL command
+ Execute :ref:`parameterized <sqlite3-placeholders>` SQL statement *sql*
against all parameter sequences or mappings found in the sequence
- *seq_of_parameters*. The :mod:`sqlite3` module also allows using an
+ *parameters*. It is also possible to use an
:term:`iterator` yielding parameters instead of a sequence.
+ Uses the same implicit transaction handling as :meth:`~Cursor.execute`.
- .. literalinclude:: ../includes/sqlite3/executemany_1.py
-
- Here's a shorter example using a :term:`generator`:
-
- .. literalinclude:: ../includes/sqlite3/executemany_2.py
+ Example::
+ data = [
+ ("row1",),
+ ("row2",),
+ ]
+ # cur is an sqlite3.Cursor object
+ cur.executemany("insert into t values(?)", data)
- .. method:: executescript(sql_script)
+ .. method:: executescript(sql_script, /)
- This is a nonstandard convenience method for executing multiple SQL statements
- at once. It issues a ``COMMIT`` statement first, then executes the SQL script it
- gets as a parameter. This method disregards :attr:`isolation_level`; any
- transaction control must be added to *sql_script*.
+ Execute the SQL statements in *sql_script*.
+ If there is a pending transaction,
+ an implicit ``COMMIT`` statement is executed first.
+ No other implicit transaction control is performed;
+ any transaction control must be added to *sql_script*.
- *sql_script* can be an instance of :class:`str`.
+ *sql_script* must be a :class:`string <str>`.
- Example:
+ Example::
- .. literalinclude:: ../includes/sqlite3/executescript.py
+ # cur is an sqlite3.Cursor object
+ cur.executescript("""
+ begin;
+ create table person(firstname, lastname, age);
+ create table book(title, author, published);
+ create table publisher(name, address);
+ commit;
+ """)
.. method:: fetchone()
- Fetches the next row of a query result set, returning a single sequence,
- or :const:`None` when no more data is available.
+ Fetch the next row of a query result set as a :class:`tuple`.
+ Return :const:`None` if no more data is available.
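A minimal sketch of the behaviour described above (the table name is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table t(x)")
cur.executemany("insert into t values (?)", [(1,), (2,)])

cur.execute("select x from t order by x")
print(cur.fetchone())  # -> (1,)
print(cur.fetchone())  # -> (2,)
print(cur.fetchone())  # -> None (result set exhausted)
con.close()
```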
.. method:: fetchmany(size=cursor.arraysize)
- Fetches the next set of rows of a query result, returning a list. An empty
- list is returned when no more rows are available.
+ Fetch the next set of rows of a query result as a :class:`list`.
+ Return an empty list if no more rows are available.
The number of rows to fetch per call is specified by the *size* parameter.
- If it is not given, the cursor's arraysize determines the number of rows
- to be fetched. The method should try to fetch as many rows as indicated by
- the size parameter. If this is not possible due to the specified number of
- rows not being available, fewer rows may be returned.
+ If *size* is not given, :attr:`arraysize` determines the number of rows
+ to be fetched.
+ If fewer than *size* rows are available,
+ as many rows as are available are returned.
Note there are performance considerations involved with the *size* parameter.
For optimal performance, it is usually best to use the arraysize attribute.
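The *size* behaviour described above can be sketched as follows (the table name is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table t(x)")
cur.executemany("insert into t values (?)", [(i,) for i in range(5)])

cur.execute("select x from t order by x")
print(cur.fetchmany(2))  # -> [(0,), (1,)]
print(cur.fetchmany(2))  # -> [(2,), (3,)]
print(cur.fetchmany(2))  # -> [(4,)]  (fewer than *size* rows were left)
print(cur.fetchmany(2))  # -> []
con.close()
```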
.. method:: fetchall()
- Fetches all (remaining) rows of a query result, returning a list. Note that
- the cursor's arraysize attribute can affect the performance of this operation.
- An empty list is returned when no rows are available.
+ Fetch all (remaining) rows of a query result as a :class:`list`.
+ Return an empty list if no rows are available.
+ Note that the :attr:`arraysize` attribute can affect the performance of
+ this operation.
.. method:: close()
The cursor will be unusable from this point forward; a :exc:`ProgrammingError`
exception will be raised if any operation is attempted with the cursor.
- .. method:: setinputsizes(sizes)
+ .. method:: setinputsizes(sizes, /)
Required by the DB-API. Does nothing in :mod:`sqlite3`.
- .. method:: setoutputsize(size [, column])
+ .. method:: setoutputsize(size, column=None, /)
Required by the DB-API. Does nothing in :mod:`sqlite3`.
.. attribute:: rowcount
- Although the :class:`Cursor` class of the :mod:`sqlite3` module implements this
- attribute, the database engine's own support for the determination of "rows
- affected"/"rows selected" is quirky.
-
- For :meth:`executemany` statements, the number of modifications are summed up
- into :attr:`rowcount`.
-
- As required by the Python DB API Spec, the :attr:`rowcount` attribute "is -1 in
- case no ``executeXX()`` has been performed on the cursor or the rowcount of the
- last operation is not determinable by the interface". This includes ``SELECT``
- statements because we cannot determine the number of rows a query produced
- until all rows were fetched.
+ Read-only attribute that provides the number of modified rows for
+ ``INSERT``, ``UPDATE``, ``DELETE``, and ``REPLACE`` statements;
+ is ``-1`` for other statements,
+ including :abbr:`CTE (Common Table Expression)` queries.
+ It is only updated by the :meth:`execute` and :meth:`executemany` methods.
.. attribute:: lastrowid
- This read-only attribute provides the row id of the last inserted row. It
+ Read-only attribute that provides the row id of the last inserted row. It
is only updated after successful ``INSERT`` or ``REPLACE`` statements
using the :meth:`execute` method. For other statements, after
:meth:`executemany` or :meth:`executescript`, or if the insertion failed,
.. attribute:: description
- This read-only attribute provides the column names of the last query. To
+ Read-only attribute that provides the column names of the last query. To
remain compatible with the Python DB API, it returns a 7-tuple for each
column where the last six items of each tuple are :const:`None`.
.. attribute:: connection
- This read-only attribute provides the SQLite database :class:`Connection`
- used by the :class:`Cursor` object. A :class:`Cursor` object created by
+ Read-only attribute that provides the SQLite database :class:`Connection`
+ belonging to the cursor. A :class:`Cursor` object created by
calling :meth:`con.cursor() <Connection.cursor>` will have a
:attr:`connection` attribute that refers to *con*::
.. _sqlite3-row-objects:
-Row Objects
------------
+Row objects
+^^^^^^^^^^^
.. class:: Row
A :class:`Row` instance serves as a highly optimized
:attr:`~Connection.row_factory` for :class:`Connection` objects.
- It tries to mimic a tuple in most of its features.
+ It tries to mimic a :class:`tuple` in most of its features,
+ and supports iteration, :func:`repr`, equality testing, :func:`len`,
+ and :term:`mapping` access by column name and index.
- It supports mapping access by column name and index, iteration,
- representation, equality testing and :func:`len`.
-
- If two :class:`Row` objects have exactly the same columns and their
- members are equal, they compare equal.
+ Two row objects compare equal if they have equal columns and equal members.
.. method:: keys
- This method returns a list of column names. Immediately after a query,
+ Return a :class:`list` of column names as :class:`strings <str>`.
+ Immediately after a query,
it is the first member of each tuple in :attr:`Cursor.description`.
.. versionchanged:: 3.5
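A short sketch of the :class:`Row` features described above (the column aliases ``a`` and ``b`` are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.row_factory = sqlite3.Row
cur = con.execute("select 1 as a, 2 as b")
row = cur.fetchone()

print(row.keys())        # -> ['a', 'b']
print(row["a"], row[1])  # name-based (case-insensitive) and index-based access
print(tuple(row))        # -> (1, 2)
con.close()
```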
+PrepareProtocol objects
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. class:: PrepareProtocol
+
+ The PrepareProtocol type's single purpose is to act as a :pep:`246` style
+ adaption protocol for objects that can :ref:`adapt themselves
+ <sqlite3-conform>` to :ref:`native SQLite types <sqlite3-types>`.
+
+
.. _sqlite3-exceptions:
Exceptions
-----------
+^^^^^^^^^^
+
+The exception hierarchy is defined by the DB-API 2.0 (:pep:`249`).
.. exception:: Warning
- A subclass of :exc:`Exception`.
+ This exception is raised by ``sqlite3`` if an SQL query is not a
+ :class:`string <str>`, or if multiple statements are passed to
+ :meth:`~Cursor.execute` or :meth:`~Cursor.executemany`.
+ ``Warning`` is a subclass of :exc:`Exception`.
.. exception:: Error
- The base class of the other exceptions in this module. It is a subclass
- of :exc:`Exception`.
+ The base class of the other exceptions in this module.
+ Use this to catch all errors with one single :keyword:`except` statement.
+ ``Error`` is a subclass of :exc:`Exception`.
+
+.. exception:: InterfaceError
+
+ This exception is raised by ``sqlite3`` for fetch across rollback,
+ or if ``sqlite3`` is unable to bind parameters.
+ ``InterfaceError`` is a subclass of :exc:`Error`.
.. exception:: DatabaseError
Exception raised for errors that are related to the database.
+ This serves as the base exception for several types of database errors.
+ It is only raised implicitly through the specialised subclasses.
+ ``DatabaseError`` is a subclass of :exc:`Error`.
+
+.. exception:: DataError
+
+ Exception raised for errors caused by problems with the processed data,
+ like numeric values out of range, and strings which are too long.
+ ``DataError`` is a subclass of :exc:`DatabaseError`.
+
+.. exception:: OperationalError
+
+ Exception raised for errors that are related to the database's operation,
+ and not necessarily under the control of the programmer.
+ For example, the database path is not found,
+ or a transaction could not be processed.
+ ``OperationalError`` is a subclass of :exc:`DatabaseError`.
.. exception:: IntegrityError
Exception raised when the relational integrity of the database is affected,
e.g. a foreign key check fails. It is a subclass of :exc:`DatabaseError`.
-.. exception:: ProgrammingError
+.. exception:: InternalError
- Exception raised for programming errors, e.g. table not found or already
- exists, syntax error in the SQL statement, wrong number of parameters
- specified, etc. It is a subclass of :exc:`DatabaseError`.
+ Exception raised when SQLite encounters an internal error.
+ If this is raised, it may indicate that there is a problem with the runtime
+ SQLite library.
+ ``InternalError`` is a subclass of :exc:`DatabaseError`.
-.. exception:: OperationalError
+.. exception:: ProgrammingError
- Exception raised for errors that are related to the database's operation
- and not necessarily under the control of the programmer, e.g. an unexpected
- disconnect occurs, the data source name is not found, a transaction could
- not be processed, etc. It is a subclass of :exc:`DatabaseError`.
+ Exception raised for ``sqlite3`` API programming errors,
+ for example trying to operate on a closed :class:`Connection`,
+ or trying to execute non-DML statements with :meth:`~Cursor.executemany`.
+ ``ProgrammingError`` is a subclass of :exc:`DatabaseError`.
.. exception:: NotSupportedError
- Exception raised in case a method or database API was used which is not
- supported by the database, e.g. calling the :meth:`~Connection.rollback`
- method on a connection that does not support transaction or has
- transactions turned off. It is a subclass of :exc:`DatabaseError`.
+ Exception raised in case a method or database API is not supported by the
+ underlying SQLite library. For example, setting *deterministic* to
+ :const:`True` in :meth:`~Connection.create_function`, if the underlying SQLite library
+ does not support deterministic functions.
+ ``NotSupportedError`` is a subclass of :exc:`DatabaseError`.
.. _sqlite3-types:
SQLite and Python types
------------------------
-
-
-Introduction
-^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^
SQLite natively supports the following types: ``NULL``, ``INTEGER``,
``REAL``, ``TEXT``, ``BLOB``.
+-------------+----------------------------------------------+
The type system of the :mod:`sqlite3` module is extensible in two ways: you can
-store additional Python types in an SQLite database via object adaptation, and
-you can let the :mod:`sqlite3` module convert SQLite types to different Python
-types via converters.
+store additional Python types in an SQLite database via
+:ref:`object adapters <sqlite3-adapters>`,
+and you can let the ``sqlite3`` module convert SQLite types to
+Python types via :ref:`converters <sqlite3-converters>`.
-Using adapters to store additional Python types in SQLite databases
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. _sqlite3-howtos:
-As described before, SQLite supports only a limited set of types natively. To
-use other Python types with SQLite, you must **adapt** them to one of the
-sqlite3 module's supported types for SQLite: one of NoneType, int, float,
-str, bytes.
+How-to guides
+-------------
-There are two ways to enable the :mod:`sqlite3` module to adapt a custom Python
-type to one of the supported ones.
+.. _sqlite3-adapters:
+Using adapters to store custom Python types in SQLite databases
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Letting your object adapt itself
-""""""""""""""""""""""""""""""""
+SQLite supports only a limited set of data types natively.
+To store custom Python types in SQLite databases, *adapt* them to one of the
+:ref:`Python types SQLite natively understands <sqlite3-types>`.
+
+There are two ways to adapt Python objects to SQLite types:
+letting your object adapt itself, or using an *adapter callable*.
+The latter will take precedence over the former.
+For a library that exports a custom type,
+it may make sense to enable that type to adapt itself.
+As an application developer, it may make more sense to take direct control by
+registering custom adapter functions.
-This is a good approach if you write the class yourself. Let's suppose you have
-a class like this::
- class Point:
- def __init__(self, x, y):
- self.x, self.y = x, y
+.. _sqlite3-conform:
-Now you want to store the point in a single SQLite column. First you'll have to
-choose one of the supported types to be used for representing the point.
-Let's just use str and separate the coordinates using a semicolon. Then you need
-to give your class a method ``__conform__(self, protocol)`` which must return
-the converted value. The parameter *protocol* will be :class:`PrepareProtocol`.
+Letting your object adapt itself
+""""""""""""""""""""""""""""""""
+
+Suppose we have a ``Point`` class that represents a pair of coordinates,
+``x`` and ``y``, in a Cartesian coordinate system.
+The coordinate pair will be stored as a text string in the database,
+using a semicolon to separate the coordinates.
+This can be implemented by adding a ``__conform__(self, protocol)``
+method which returns the adapted value.
+The object passed to *protocol* will be of type :class:`PrepareProtocol`.
.. literalinclude:: ../includes/sqlite3/adapter_point_1.py
Registering an adapter callable
"""""""""""""""""""""""""""""""
-The other possibility is to create a function that converts the type to the
-string representation and register the function with :meth:`register_adapter`.
+The other possibility is to create a function that converts the Python object
+to an SQLite-compatible type.
+This function can then be registered using :func:`register_adapter`.
.. literalinclude:: ../includes/sqlite3/adapter_point_2.py
-The :mod:`sqlite3` module has two default adapters for Python's built-in
-:class:`datetime.date` and :class:`datetime.datetime` types. Now let's suppose
-we want to store :class:`datetime.datetime` objects not in ISO representation,
-but as a Unix timestamp.
-
-.. literalinclude:: ../includes/sqlite3/adapter_datetime.py
+.. _sqlite3-converters:
Converting SQLite values to custom Python types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Writing an adapter lets you send custom Python types to SQLite. But to make it
-really useful we need to make the Python to SQLite to Python roundtrip work.
-
-Enter converters.
+Writing an adapter lets you convert *from* custom Python types *to* SQLite
+values.
+To be able to convert *from* SQLite values *to* custom Python types,
+we use *converters*.
Let's go back to the :class:`Point` class. We stored the x and y coordinates
separated via semicolons as strings in SQLite.
.. note::
- Converter functions **always** get called with a :class:`bytes` object, no
- matter under which data type you sent the value to SQLite.
+ Converter functions are **always** passed a :class:`bytes` object,
+ no matter the underlying SQLite data type.
::
x, y = map(float, s.split(b";"))
return Point(x, y)
-Now you need to make the :mod:`sqlite3` module know that what you select from
-the database is actually a point. There are two ways of doing this:
-
-* Implicitly via the declared type
+We now need to tell ``sqlite3`` when it should convert a given SQLite value.
+This is done when connecting to a database, using the *detect_types* parameter
+of :func:`connect`. There are three options:
-* Explicitly via the column name
+* Implicit: set *detect_types* to :const:`PARSE_DECLTYPES`
+* Explicit: set *detect_types* to :const:`PARSE_COLNAMES`
+* Both: set *detect_types* to
+ ``sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES``.
+ Column names take precedence over declared types.
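For example, enabling both detection mechanisms when connecting (a sketch using an in-memory database):

```python
import sqlite3

# Combine both flags; column names take precedence over declared types.
con = sqlite3.connect(
    ":memory:",
    detect_types=sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES,
)
```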
-Both ways are described in section :ref:`sqlite3-module-contents`, in the entries
-for the constants :const:`PARSE_DECLTYPES` and :const:`PARSE_COLNAMES`.
-
-The following example illustrates both approaches.
+The following example illustrates the implicit and explicit approaches:
.. literalinclude:: ../includes/sqlite3/converter_point.py
offsets in timestamps, either leave converters disabled, or register an
offset-aware converter with :func:`register_converter`.
-.. _sqlite3-controlling-transactions:
-Controlling Transactions
-------------------------
+.. _sqlite3-adapter-converter-recipes:
-The underlying ``sqlite3`` library operates in ``autocommit`` mode by default,
-but the Python :mod:`sqlite3` module by default does not.
+Adapter and converter recipes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-``autocommit`` mode means that statements that modify the database take effect
-immediately. A ``BEGIN`` or ``SAVEPOINT`` statement disables ``autocommit``
-mode, and a ``COMMIT``, a ``ROLLBACK``, or a ``RELEASE`` that ends the
-outermost transaction, turns ``autocommit`` mode back on.
+This section shows recipes for common adapters and converters.
-The Python :mod:`sqlite3` module by default issues a ``BEGIN`` statement
-implicitly before a Data Modification Language (DML) statement (i.e.
-``INSERT``/``UPDATE``/``DELETE``/``REPLACE``).
+.. code-block:: python
-You can control which kind of ``BEGIN`` statements :mod:`sqlite3` implicitly
-executes via the *isolation_level* parameter to the :func:`connect`
-call, or via the :attr:`isolation_level` property of connections.
-If you specify no *isolation_level*, a plain ``BEGIN`` is used, which is
-equivalent to specifying ``DEFERRED``. Other possible values are ``IMMEDIATE``
-and ``EXCLUSIVE``.
+ import datetime
+ import sqlite3
-You can disable the :mod:`sqlite3` module's implicit transaction management by
-setting :attr:`isolation_level` to ``None``. This will leave the underlying
-``sqlite3`` library operating in ``autocommit`` mode. You can then completely
-control the transaction state by explicitly issuing ``BEGIN``, ``ROLLBACK``,
-``SAVEPOINT``, and ``RELEASE`` statements in your code.
+ def adapt_date_iso(val):
+ """Adapt datetime.date to ISO 8601 date."""
+ return val.isoformat()
-Note that :meth:`~Cursor.executescript` disregards
-:attr:`isolation_level`; any transaction control must be added explicitly.
+ def adapt_datetime_iso(val):
+ """Adapt datetime.datetime to timezone-naive ISO 8601 date."""
+ return val.isoformat()
-.. versionchanged:: 3.6
- :mod:`sqlite3` used to implicitly commit an open transaction before DDL
- statements. This is no longer the case.
+ def adapt_datetime_epoch(val):
+ """Adapt datetime.datetime to Unix timestamp."""
+ return int(val.timestamp())
+
+ sqlite3.register_adapter(datetime.date, adapt_date_iso)
+ sqlite3.register_adapter(datetime.datetime, adapt_datetime_iso)
+ sqlite3.register_adapter(datetime.datetime, adapt_datetime_epoch)
+
+ def convert_date(val):
+ """Convert ISO 8601 date to datetime.date object."""
+ return datetime.date.fromisoformat(val.decode())
+
+ def convert_datetime(val):
+ """Convert ISO 8601 datetime to datetime.datetime object."""
+ return datetime.datetime.fromisoformat(val.decode())
+
+ def convert_timestamp(val):
+ """Convert Unix epoch timestamp to datetime.datetime object."""
+ return datetime.datetime.fromtimestamp(int(val))
-Using :mod:`sqlite3` efficiently
---------------------------------
+ sqlite3.register_converter("date", convert_date)
+ sqlite3.register_converter("datetime", convert_datetime)
+ sqlite3.register_converter("timestamp", convert_timestamp)
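As a quick check of the recipes above, a round-trip through an in-memory database might look like this (a sketch; the converter receives ``bytes`` and must decode them before parsing):

```python
import datetime
import sqlite3

def adapt_date_iso(val):
    """Adapt datetime.date to ISO 8601 date."""
    return val.isoformat()

def convert_date(val):
    """Convert an ISO 8601 date (bytes) back to datetime.date."""
    return datetime.date.fromisoformat(val.decode())

sqlite3.register_adapter(datetime.date, adapt_date_iso)
sqlite3.register_converter("date", convert_date)

con = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
con.execute("CREATE TABLE events(d date)")
con.execute("INSERT INTO events VALUES(?)", (datetime.date(2021, 3, 9),))
row = con.execute("SELECT d FROM events").fetchone()
# row[0] is a datetime.date again
```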
-Using shortcut methods
-^^^^^^^^^^^^^^^^^^^^^^
+.. _sqlite3-connection-shortcuts:
-Using the nonstandard :meth:`execute`, :meth:`executemany` and
-:meth:`executescript` methods of the :class:`Connection` object, your code can
+Using connection shortcut methods
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Using the :meth:`~Connection.execute`,
+:meth:`~Connection.executemany`, and :meth:`~Connection.executescript`
+methods of the :class:`Connection` class, your code can
be written more concisely because you don't have to create the (often
superfluous) :class:`Cursor` objects explicitly. Instead, the :class:`Cursor`
objects are created implicitly and these shortcut methods return the cursor
.. literalinclude:: ../includes/sqlite3/shortcut_methods.py
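The shortcut pattern can be sketched like this, using an in-memory database:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# execute() creates the cursor implicitly and returns it.
cur = con.execute("SELECT 1, 2")
result = cur.fetchone()
```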
+.. _sqlite3-columns-by-name:
+
Accessing columns by name instead of by index
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. literalinclude:: ../includes/sqlite3/rowclass.py
+.. _sqlite3-connection-context-manager:
+
Using the connection as a context manager
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Connection objects can be used as context managers
-that automatically commit or rollback transactions. In the event of an
-exception, the transaction is rolled back; otherwise, the transaction is
-committed:
+A :class:`Connection` object can be used as a context manager that
+automatically commits or rolls back open transactions when leaving the body of
+the context manager.
+If the body of the :keyword:`with` statement finishes without exceptions,
+the transaction is committed.
+If this commit fails,
+or if the body of the ``with`` statement raises an uncaught exception,
+the transaction is rolled back.
+
+If there is no open transaction upon leaving the body of the ``with`` statement,
+the context manager is a no-op.
+
+.. note::
+
+ The context manager neither implicitly opens a new transaction
+ nor closes the connection.
.. literalinclude:: ../includes/sqlite3/ctx_manager.py
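As a minimal sketch of the rollback behaviour described above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lang(name)")

try:
    with con:
        con.execute("INSERT INTO lang VALUES('Python')")
        raise ValueError("simulated failure")
except ValueError:
    pass

# The uncaught exception inside the body rolled the insert back.
rows = con.execute("SELECT name FROM lang").fetchall()
```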
-.. rubric:: Footnotes
+.. _sqlite3-uri-tricks:
+
+Working with SQLite URIs
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Some useful URI tricks include:
+
+* Open a database in read-only mode::
+
+ con = sqlite3.connect("file:template.db?mode=ro", uri=True)
+
+* Do not implicitly create a new database file if it does not already exist;
+ :exc:`~sqlite3.OperationalError` is raised if the file cannot be opened::
+
+ con = sqlite3.connect("file:nosuchdb.db?mode=rw", uri=True)
+
+* Create a shared named in-memory database::
+
+ con1 = sqlite3.connect("file:mem1?mode=memory&cache=shared", uri=True)
+ con2 = sqlite3.connect("file:mem1?mode=memory&cache=shared", uri=True)
+ con1.execute("create table t(t)")
+ con1.execute("insert into t values(28)")
+ con1.commit()
+ rows = con2.execute("select * from t").fetchall()
+
+More information about this feature, including a list of parameters,
+can be found in the `SQLite URI documentation`_.
+
+.. _SQLite URI documentation: https://www.sqlite.org/uri.html
+
+
+.. _sqlite3-explanation:
+
+Explanation
+-----------
+
+.. _sqlite3-controlling-transactions:
+
+Transaction control
+^^^^^^^^^^^^^^^^^^^
+
+The ``sqlite3`` module does not adhere to the transaction handling recommended
+by :pep:`249`.
+
+If the connection attribute :attr:`~Connection.isolation_level`
+is not :const:`None`,
+new transactions are implicitly opened before
+:meth:`~Cursor.execute` and :meth:`~Cursor.executemany` execute
+``INSERT``, ``UPDATE``, ``DELETE``, or ``REPLACE`` statements.
+Use the :meth:`~Connection.commit` and :meth:`~Connection.rollback` methods
+to respectively commit and roll back pending transactions.
+You can choose the underlying `SQLite transaction behaviour`_
+(that is, whether and what type of ``BEGIN`` statements ``sqlite3``
+implicitly executes)
+via the :attr:`~Connection.isolation_level` attribute.
+
+If :attr:`~Connection.isolation_level` is set to :const:`None`,
+no transactions are implicitly opened at all.
+This leaves the underlying SQLite library in `autocommit mode`_,
+but also allows the user to perform their own transaction handling
+using explicit SQL statements.
+Whether a transaction is currently open can be queried using the
+:attr:`~Connection.in_transaction` attribute.
+
+The :meth:`~Cursor.executescript` method implicitly commits
+any pending transaction before execution of the given SQL script,
+regardless of the value of :attr:`~Connection.isolation_level`.
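For illustration, manual transaction handling with ``isolation_level=None`` might look like this (a sketch using an in-memory database):

```python
import sqlite3

# isolation_level=None leaves the underlying SQLite library in autocommit mode.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE lang(name)")

con.execute("BEGIN")
con.execute("INSERT INTO lang VALUES('Python')")
con.execute("ROLLBACK")  # discard the insert

rows = con.execute("SELECT name FROM lang").fetchall()
```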
+
+.. versionchanged:: 3.6
+ :mod:`sqlite3` used to implicitly commit an open transaction before DDL
+ statements. This is no longer the case.
+
+.. _autocommit mode:
+ https://www.sqlite.org/lang_transaction.html#implicit_versus_explicit_transactions
-.. [#f1] The sqlite3 module is not built with loadable extension support by
- default, because some platforms (notably macOS) have SQLite
- libraries which are compiled without this feature. To get loadable
- extension support, you must pass the
- :option:`--enable-loadable-sqlite-extensions` option to configure.
+.. _SQLite transaction behaviour:
+ https://www.sqlite.org/lang_transaction.html#deferred_immediate_and_exclusive_transactions
.. method:: SSLContext.session_stats()
Get statistics about the SSL sessions created or managed by this context.
- A dictionary is returned which maps the names of each `piece of information <https://www.openssl.org/docs/man1.1.1/ssl/SSL_CTX_sess_number.html>`_ to their
+ A dictionary is returned which maps the names of each `piece of information <https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_sess_number.html>`_ to their
numeric values. For example, here is the total number of hits and misses
in the session cache since the context was created::
:meth:`SSLContext.set_ciphers` method. Starting from Python 3.2.3, the
ssl module disables certain weak ciphers by default, but you may want
to further restrict the cipher choice. Be sure to read OpenSSL's documentation
-about the `cipher list format <https://www.openssl.org/docs/manmaster/man1/ciphers.html#CIPHER-LIST-FORMAT>`_.
+about the `cipher list format <https://www.openssl.org/docs/man1.1.1/man1/ciphers.html#CIPHER-LIST-FORMAT>`_.
If you want to check which ciphers are enabled by a given cipher list, use
:meth:`SSLContext.get_ciphers` or the ``openssl ciphers`` command on your
system.
you may be able to use :func:`map` to ensure a consistent result, for
example: ``map(float, input_data)``.
+Some datasets use ``NaN`` (not a number) values to represent missing data.
+Since NaNs have unusual comparison semantics, they cause surprising or
+undefined behaviors in the statistics functions that sort data or that count
+occurrences. The functions affected are ``median()``, ``median_low()``,
+``median_high()``, ``median_grouped()``, ``mode()``, ``multimode()``, and
+``quantiles()``. The ``NaN`` values should be stripped before calling these
+functions::
+
+ >>> from statistics import median
+ >>> from math import isnan
+ >>> from itertools import filterfalse
+
+ >>> data = [20.7, float('NaN'), 19.2, 18.3, float('NaN'), 14.4]
+ >>> sorted(data) # This has surprising behavior
+ [20.7, nan, 14.4, 18.3, 19.2, nan]
+ >>> median(data) # This result is unexpected
+ 16.35
+
+ >>> sum(map(isnan, data)) # Number of missing values
+ 2
+ >>> clean = list(filterfalse(isnan, data)) # Strip NaN values
+ >>> clean
+ [20.7, 19.2, 18.3, 14.4]
+ >>> sorted(clean) # Sorting now works as expected
+ [14.4, 18.3, 19.2, 20.7]
+ >>> median(clean) # This result is now well defined
+ 18.75
+
+
Averages and measures of central location
-----------------------------------------
Compute the inverse cumulative distribution function, also known as the
`quantile function <https://en.wikipedia.org/wiki/Quantile_function>`_
or the `percent-point
- <https://www.statisticshowto.datasciencecentral.com/inverse-distribution-function/>`_
+ <https://web.archive.org/web/20190203145224/https://www.statisticshowto.datasciencecentral.com/inverse-distribution-function/>`_
function. Mathematically, it is written ``x : P(X <= x) = p``.
Finds the value *x* of the random variable *X* such that the
Normal distributions commonly arise in machine learning problems.
Wikipedia has a `nice example of a Naive Bayesian Classifier
-<https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Sex_classification>`_.
+<https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Person_classification>`_.
The challenge is to predict a person's gender from measurements of normally
distributed features including height, weight, and foot size.
Iteratively unpack from the buffer *buffer* according to the format
string *format*. This function returns an iterator which will read
- equally-sized chunks from the buffer until all its contents have been
+ equally sized chunks from the buffer until all its contents have been
consumed. The buffer's size in bytes must be a multiple of the size
required by the format, as reflected by :func:`calcsize`.
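For instance, unpacking a buffer of ``short`` pairs chunk by chunk:

```python
import struct

# Two records of two shorts each; the total size is a multiple of calcsize("hh").
buf = struct.pack("hh", 1, 2) + struct.pack("hh", 3, 4)
pairs = list(struct.iter_unpack("hh", buf))
# pairs == [(1, 2), (3, 4)]
```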
Native byte order is big-endian or little-endian, depending on the host
system. For example, Intel x86 and AMD64 (x86-64) are little-endian;
-Motorola 68000 and PowerPC G5 are big-endian; ARM and Intel Itanium feature
-switchable endianness (bi-endian). Use ``sys.byteorder`` to check the
-endianness of your system.
+IBM z and most legacy architectures are big-endian;
+and ARM, RISC-V and IBM Power feature switchable endianness
+(bi-endian, though the former two are nearly always little-endian in practice).
+Use ``sys.byteorder`` to check the endianness of your system.
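A quick way to see byte order in action with :mod:`struct`:

```python
import struct
import sys

# Explicit prefixes produce the same bytes on any host.
assert struct.pack(">H", 1) == b"\x00\x01"   # big-endian
assert struct.pack("<H", 1) == b"\x01\x00"   # little-endian

# "=" uses native byte order, which matches sys.byteorder.
native = struct.pack("=H", 1)
expected = b"\x01\x00" if sys.byteorder == "little" else b"\x00\x01"
```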
Native size and alignment are determined using the C compiler's
``sizeof`` expression. This is always combined with native byte order.
.. _half precision format: https://en.wikipedia.org/wiki/Half-precision_floating-point_format
-.. _ieee 754 standard: https://en.wikipedia.org/wiki/IEEE_floating_point#IEEE_754-2008
+.. _ieee 754 standard: https://en.wikipedia.org/wiki/IEEE_754-2008_revision
.. _IETF RFC 1700: https://tools.ietf.org/html/rfc1700
.. warning::
- For maximum reliability, use a fully-qualified path for the executable.
+ For maximum reliability, use a fully qualified path for the executable.
To search for an unqualified name on :envvar:`PATH`, use
:meth:`shutil.which`. On all platforms, passing :data:`sys.executable`
is the recommended way to launch the current Python interpreter again,
.. method:: get_identifiers()
- Return a list of names of symbols in this table.
+ Return a view object containing the names of symbols in the table.
+ See the :ref:`documentation of view objects <dict-views>`.
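For example (a sketch; the view reflects the two names assigned at module scope):

```python
import symtable

table = symtable.symtable("x = 1\ny = x + 1", "<example>", "exec")
# get_identifiers() returns a view object over the symbol names.
names = table.get_identifiers()
```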
.. method:: lookup(name)
A list of :term:`meta path finder` objects that have their
:meth:`~importlib.abc.MetaPathFinder.find_spec` methods called to see if one
- of the objects can find the module to be imported. The
+ of the objects can find the module to be imported. By default, it holds entries
+ that implement Python's default import semantics. The
:meth:`~importlib.abc.MetaPathFinder.find_spec` method is called with at
least the absolute name of the module being imported. If the module to be
imported is contained in a package, then the parent package's :attr:`__path__`
files and stores pathnames in a portable way. Modern tar implementations,
including GNU tar, bsdtar/libarchive and star, fully support extended *pax*
features; some old or unmaintained libraries may not, but should treat
- *pax* archives as if they were in the universally-supported *ustar* format.
+ *pax* archives as if they were in the universally supported *ustar* format.
It is the current default format for new archives.
It extends the existing *ustar* format with extra headers for information
Modify or inquire widget state. If *statespec* is specified, sets the
widget state according to it and return a new *statespec* indicating
which flags were changed. If *statespec* is not specified, returns
- the currently-enabled state flags.
+ the currently enabled state flags.
*statespec* will usually be a list or a tuple.
Ttk Notebook widget manages a collection of windows and displays a single
one at a time. Each child window is associated with a tab, which the user
-may select to change the currently-displayed window.
+may select to change the currently displayed window.
Options
* An integer between zero and the number of tabs
* The name of a child window
* A positional specification of the form "@x,y", which identifies the tab
-* The literal string "current", which identifies the currently-selected tab
+* The literal string "current", which identifies the currently selected tab
* The literal string "end", which returns the number of tabs (only valid for
:meth:`Notebook.index`)
Selects the specified *tab_id*.
The associated child window will be displayed, and the
- previously-selected window (if different) is unmapped. If *tab_id* is
+ previously selected window (if different) is unmapped. If *tab_id* is
omitted, returns the widget name of the currently selected pane.
Bound type variables are particularly useful for annotating
:func:`classmethods <classmethod>` that serve as alternative constructors.
- In the following example (©
+ In the following example (by
`Raymond Hettinger <https://www.youtube.com/watch?v=HTLu2DFOdTg>`_), the
type variable ``C`` is bound to the ``Circle`` class through the use of a
forward reference. Using this type variable to annotate the
methods, static methods and properties. You should patch these on the *class*
rather than an instance. They also work with *some* objects
that proxy attribute access, like the `django settings object
-<http://www.voidspace.org.uk/python/weblog/arch_d7_2010_12_04.shtml#e1198>`_.
+<https://web.archive.org/web/20200603181648/http://www.voidspace.org.uk/python/weblog/arch_d7_2010_12_04.shtml#e1198>`_.
MagicMock and magic method support
Example::
with self.assertLogs('foo', level='INFO') as cm:
- logging.getLogger('foo').info('first message')
- logging.getLogger('foo.bar').error('second message')
+ logging.getLogger('foo').info('first message')
+ logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
'ERROR:foo.bar:second message'])
obtain the HTTP proxy's URL.
This example replaces the default :class:`ProxyHandler` with one that uses
-programmatically-supplied proxy URLs, and adds proxy authorization support with
+programmatically supplied proxy URLs, and adds proxy authorization support with
:class:`ProxyBasicAuthHandler`. ::
proxy_handler = urllib.request.ProxyHandler({'http': 'http://www.example.com:3128/'})
.. data:: NAMESPACE_DNS
- When this namespace is specified, the *name* string is a fully-qualified domain
+ When this namespace is specified, the *name* string is a fully qualified domain
name.
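For example, name-based UUIDs generated from a fully qualified domain name are deterministic:

```python
import uuid

# The same namespace and name always yield the same UUID.
u1 = uuid.uuid5(uuid.NAMESPACE_DNS, "python.org")
u2 = uuid.uuid5(uuid.NAMESPACE_DNS, "python.org")
```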
category must be a subclass in order to match.
* *module* is a string containing a regular expression that the start of the
- fully-qualified module name must match, case-sensitively. In :option:`-W` and
+ fully qualified module name must match, case-sensitively. In :option:`-W` and
:envvar:`PYTHONWARNINGS`, *module* is a literal string that the
- fully-qualified module name must be equal to (case-sensitively), ignoring any
+ fully qualified module name must be equal to (case-sensitively), ignoring any
whitespace at the start or end of *module*.
* *lineno* is an integer that the line number where the warning occurred must
.. method:: WSGIServer.get_app()
- Returns the currently-set application callable.
+ Returns the currently set application callable.
Normally, however, you do not need to use these additional methods, as
:meth:`set_app` is normally called by :func:`make_server`, and the
.. method:: BaseHandler.setup_environ()
- Set the :attr:`environ` attribute to a fully-populated WSGI environment. The
+ Set the :attr:`environ` attribute to a fully populated WSGI environment. The
default implementation uses all of the above methods and attributes, plus the
:meth:`get_stdin`, :meth:`get_stderr`, and :meth:`add_cgi_vars` methods and the
:attr:`wsgi_file_wrapper` attribute. It also inserts a ``SERVER_SOFTWARE`` key
When you are finished with a DOM tree, you may optionally call the
:meth:`unlink` method to encourage early cleanup of the now-unneeded
objects. :meth:`unlink` is an :mod:`xml.dom.minidom`\ -specific
-extension to the DOM API that renders the node and its descendants are
+extension to the DOM API that renders the node and its descendants
essentially useless. Otherwise, Python's garbage collector will
eventually take care of the objects in the tree.
.. versionchanged:: 3.9
The *standalone* parameter was added.
-.. method:: Node.toprettyxml(indent="\\t", newl="\\n", encoding=None, \
+.. method:: Node.toprettyxml(indent="\t", newl="\n", encoding=None, \
standalone=None)
Return a pretty-printed version of the document. *indent* specifies the
entity expansion, too. Instead of nested entities it repeats one large entity
with a couple of thousand chars over and over again. The attack isn't as
efficient as the exponential case but it avoids triggering parser countermeasures
- that forbid deeply-nested entities.
+ that forbid deeply nested entities.
external entity expansion
Entity declarations can contain more than just text for replacement. They can
The following parameters govern the use of the returned proxy instance.
If *allow_none* is true, the Python constant ``None`` will be translated into
XML; the default behaviour is for ``None`` to raise a :exc:`TypeError`. This is
- a commonly-used extension to the XML-RPC specification, but isn't supported by
+ a commonly used extension to the XML-RPC specification, but isn't supported by
all clients and servers; see `http://ontosys.com/xml-rpc/extensions.php
<https://web.archive.org/web/20130120074804/http://ontosys.com/xml-rpc/extensions.php>`_
for a description.
A boolean indicating whether the end of the compressed data stream has been
reached.
- This makes it possible to distinguish between a properly-formed compressed
+ This makes it possible to distinguish between a properly formed compressed
stream, and an incomplete or truncated one.
.. versionadded:: 3.3
The file :file:`Python/dtoa.c`, which supplies C functions dtoa and
strtod for conversion of C doubles to and from strings, is derived
from the file of the same name by David M. Gay, currently available
-from http://www.netlib.org/fp/. The original file, as retrieved on
-March 16, 2009, contains the following copyright and licensing
-notice::
+from https://web.archive.org/web/20220517033456/http://www.netlib.org/fp/dtoa.c.
+The original file, as retrieved on March 16, 2009, contains the following
+copyright and licensing notice::
/****************************************************************
*
else:
SUITE2
-See also :meth:`__aiter__` and :meth:`__anext__` for details.
+See also :meth:`~object.__aiter__` and :meth:`~object.__anext__` for details.
It is a :exc:`SyntaxError` to use an ``async for`` statement outside the
body of a coroutine function.
if not hit_except:
await aexit(manager, None, None, None)
-See also :meth:`__aenter__` and :meth:`__aexit__` for details.
+See also :meth:`~object.__aenter__` and :meth:`~object.__aexit__` for details.
It is a :exc:`SyntaxError` to use an ``async with`` statement outside the
body of a coroutine function.
Typical implementations create a new instance of the class by invoking the
superclass's :meth:`__new__` method using ``super().__new__(cls[, ...])``
- with appropriate arguments and then modifying the newly-created instance
+ with appropriate arguments and then modifying the newly created instance
as necessary before returning it.
If :meth:`__new__` is invoked during object construction and it returns an
predictable between repeated invocations of Python.
This is intended to provide protection against a denial-of-service caused
- by carefully-chosen inputs that exploit the worst case performance of a
+ by carefully chosen inputs that exploit the worst case performance of a
dict insertion, O(n\ :sup:`2`) complexity. See
http://www.ocert.org/advisories/ocert-2011-003.html for details.
In typical use, this is called with a single exception instance similar to the
way the :keyword:`raise` keyword is used.
- For backwards compatability, however, the second signature is
+ For backwards compatibility, however, the second signature is
supported, following a convention from older versions of Python.
The *type* argument should be an exception class, and *value*
should be an exception instance. If the *value* is not provided, the
.. attribute:: __name__
- The ``__name__`` attribute must be set to the fully-qualified name of
+ The ``__name__`` attribute must be set to the fully qualified name of
the module. This name is used to uniquely identify the module in
the import system.
Top-level format specifiers may include nested replacement fields. These nested
fields may include their own conversion fields and :ref:`format specifiers
-<formatspec>`, but may not include more deeply-nested replacement fields. The
+<formatspec>`, but may not include more deeply nested replacement fields. The
:ref:`format specifier mini-language <formatspec>` is the same as that used by
the :meth:`str.format` method.
# Sphinx version is pinned so that new versions that introduce new warnings
# won't suddenly cause build failures. Updating the version is fine as long
# as no warnings are raised by doing so.
-sphinx==3.2.1
+sphinx==3.4.3
# Docutils version is pinned to a version compatible with Sphinx
-# version 3.2.1. It can be removed after bumping Sphinx version to at
+# version <3.5.4. It can be removed after bumping Sphinx version to at
# least 3.5.4.
docutils==0.17.1
-# Jinja version is pinned to a version compatible with Sphinx version 3.2.1.
+# Jinja version is pinned to a version compatible with Sphinx version <4.5.
jinja2==3.0.3
blurb
translatable=False)
node.append(para)
env = self.state.document.settings.env
- # deprecated pre-Sphinx-2 method
- if hasattr(env, 'note_versionchange'):
- env.note_versionchange('deprecated', version[0], node, self.lineno)
- # new method
- else:
- env.get_domain('changeset').note_changeset(node)
+ env.get_domain('changeset').note_changeset(node)
return [node] + messages
import os
import re
import csv
-import sys
from docutils import nodes
from sphinx.builders import Builder
:[a-zA-Z][a-zA-Z0-9]+| # :foo
`| # ` (seldom used by itself)
(?<!\.)\.\.[ \t]*\w+: # .. foo: (but NOT ... else:)
- ''', re.UNICODE | re.VERBOSE).finditer
-
-py3 = sys.version_info >= (3, 0)
+ ''', re.VERBOSE).finditer
class Rule:
def report_issue(self, text, lineno, issue):
self.any_issue = True
self.write_log_entry(lineno, issue, text)
- if py3:
- self.logger.warning('[%s:%d] "%s" found in "%-.120s"' %
+ self.logger.warning('[%s:%d] "%s" found in "%-.120s"' %
(self.docname, lineno, issue, text))
- else:
- self.logger.warning(
- '[%s:%d] "%s" found in "%-.120s"' % (
- self.docname.encode(sys.getdefaultencoding(),'replace'),
- lineno,
- issue.encode(sys.getdefaultencoding(),'replace'),
- text.strip().encode(sys.getdefaultencoding(),'replace')))
self.app.statuscode = 1
def write_log_entry(self, lineno, issue, text):
- if py3:
- f = open(self.log_file_name, 'a')
- writer = csv.writer(f, dialect)
- writer.writerow([self.docname, lineno, issue, text.strip()])
- f.close()
- else:
- f = open(self.log_file_name, 'ab')
- writer = csv.writer(f, dialect)
- writer.writerow([self.docname.encode('utf-8'),
- lineno,
- issue.encode('utf-8'),
- text.strip().encode('utf-8')])
- f.close()
+ f = open(self.log_file_name, 'a')
+ writer = csv.writer(f, dialect)
+ writer.writerow([self.docname, lineno, issue, text.strip()])
+ f.close()
def load_rules(self, filename):
"""Load database of previously ignored issues.
self.logger.info("loading ignore rules... ", nonl=1)
self.rules = rules = []
try:
- if py3:
- f = open(filename, 'r')
- else:
- f = open(filename, 'rb')
+ f = open(filename, 'r')
except IOError:
return
for i, row in enumerate(csv.reader(f)):
lineno = int(lineno)
else:
lineno = None
- if not py3:
- docname = docname.decode('utf-8')
- issue = issue.decode('utf-8')
- text = text.decode('utf-8')
rule = Rule(docname, lineno, issue, text)
rules.append(rule)
f.close()
library/itertools,,:step,elements from seq[start:stop:step]
library/itertools,,::,kernel = tuple(kernel)[::-1]
library/itertools,,:stop,elements from seq[start:stop:step]
+library/logging,,:root,WARNING:root:Watch out!
+library/logging,,:Watch,WARNING:root:Watch out!
library/logging.handlers,,:port,host:port
library/mmap,,:i2,obj[i1:i2]
library/multiprocessing,,`,# Add more tasks using `put()`
{% if daily is defined %}
{% set dlbase = pathto('archives', 1) %}
{% else %}
+ {#
+ The link below returns HTTP 404 until the first related alpha release.
+ This is expected; use daily documentation builds for CPython development.
+ #}
{% set dlbase = 'https://docs.python.org/ftp/python/doc/' + release %}
{% endif %}
self.data = []
When a class defines an :meth:`__init__` method, class instantiation
-automatically invokes :meth:`__init__` for the newly-created class instance. So
+automatically invokes :meth:`__init__` for the newly created class instance. So
in this example, a new, initialized instance can be obtained by::
x = MyClass()
A :keyword:`match` statement takes an expression and compares its value to successive
patterns given as one or more case blocks. This is superficially
similar to a switch statement in C, Java or JavaScript (and many
-other languages), but it can also extract components (sequence elements or
-object attributes) from the value into variables.
+other languages), but it's more similar to pattern matching in
+languages like Rust or Haskell. Only the first pattern that matches
+gets executed and it can also extract components (sequence elements
+or object attributes) from the value into variables.
The simplest form compares a subject value against one or more literals::
[(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25)]
>>> # the tuple must be parenthesized, otherwise an error is raised
>>> [x, x**2 for x in range(6)]
- File "<stdin>", line 1, in <module>
+ File "<stdin>", line 1
[x, x**2 for x in range(6)]
- ^
- SyntaxError: invalid syntax
+ ^^^^^^^
+ SyntaxError: did you forget parentheses around the comprehension target?
>>> # flatten a list using a listcomp with two 'for'
>>> vec = [[1,2,3], [4,5,6], [7,8,9]]
>>> [num for elem in vec for num in elem]
Positional and keyword arguments can be arbitrarily combined::
>>> print('The story of {0}, {1}, and {other}.'.format('Bill', 'Manfred',
- other='Georg'))
+ ... other='Georg'))
The story of Bill, Manfred, and Georg.
If you have a really long format string that you don't want to split up, it
... 'Dcab: {0[Dcab]:d}'.format(table))
Jack: 4098; Sjoerd: 4127; Dcab: 8637678
-This could also be done by passing the table as keyword arguments with the '**'
+This could also be done by passing the ``table`` dictionary as keyword arguments with the ``**``
notation. ::
>>> table = {'Sjoerd': 4127, 'Jack': 4098, 'Dcab': 8637678}
This is particularly useful in combination with the built-in function
:func:`vars`, which returns a dictionary containing all local variables.
-As an example, the following lines produce a tidily-aligned
+As an example, the following lines produce a tidily aligned
set of columns giving integers and their squares and cubes::
>>> for x in range(1, 11):
>>> import fibo
-This does not enter the names of the functions defined in ``fibo`` directly in
-the current symbol table; it only enters the module name ``fibo`` there. Using
+This does not add the names of the functions defined in ``fibo`` directly to
+the current :term:`namespace` (see :ref:`tut-scopes` for more details);
+it only adds the module name ``fibo`` there. Using
the module name you can access the functions::
>>> fibo.fib(1000)
the *first* time the module name is encountered in an import statement. [#]_
(They are also run if the file is executed as a script.)
-Each module has its own private symbol table, which is used as the global symbol
-table by all functions defined in the module. Thus, the author of a module can
+Each module has its own private namespace, which is used as the global namespace
+by all functions defined in the module. Thus, the author of a module can
use global variables in the module without worrying about accidental clashes
with a user's global variables. On the other hand, if you know what you are
doing you can touch a module's global variables with the same notation used to
Modules can import other modules. It is customary but not required to place all
:keyword:`import` statements at the beginning of a module (or script, for that
-matter). The imported module names are placed in the importing module's global
-symbol table.
+matter). The imported module names, if placed at the top level of a module
+(outside any functions or classes), are added to the module's global namespace.
There is a variant of the :keyword:`import` statement that imports names from a
-module directly into the importing module's symbol table. For example::
+module directly into the importing module's namespace. For example::
>>> from fibo import fib, fib2
>>> fib(500)
0 1 1 2 3 5 8 13 21 34 55 89 144 233 377
This does not introduce the module name from which the imports are taken in the
-local symbol table (so in the example, ``fibo`` is not defined).
+local namespace (so in the example, ``fibo`` is not defined).
There is even a variant to import all names that a module defines::
.. rubric:: Footnotes
.. [#] In fact function definitions are also 'statements' that are 'executed'; the
- execution of a module-level function definition enters the function name in
- the module's global symbol table.
+ execution of a module-level function definition adds the function name to
+ the module's global namespace.
>>> photofiles = ['img_1074.jpg', 'img_1076.jpg', 'img_1077.jpg']
>>> class BatchRename(Template):
... delimiter = '%'
+ ...
>>> fmt = input('Enter rename style (%d-date %n-seqnum %f-format): ')
Enter rename style (%d-date %n-seqnum %f-format): Ashley_%n%f
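The ``BatchRename`` subclass above works by overriding the class attribute ``Template.delimiter``. A minimal self-contained sketch of the same technique (the placeholder names ``n`` and ``f`` follow the rename-style example above):

```python
from string import Template

class BatchRename(Template):
    # Use '%' instead of the default '$' as the placeholder delimiter.
    delimiter = '%'

t = BatchRename('Ashley_%n%f')
# substitute() maps each %-placeholder to the supplied keyword value.
result = t.substitute(d='2024-01-01', n='1', f='.jpg')
print(result)  # Ashley_1.jpg
```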
More Python resources:
* https://www.python.org: The major Python web site. It contains code,
- documentation, and pointers to Python-related pages around the web. This web
- site is mirrored in various places around the world, such as Europe, Japan, and
- Australia; a mirror may be faster than the main site, depending on your
- geographical location.
+ documentation, and pointers to Python-related pages around the web.
* https://docs.python.org: Fast access to Python's documentation.
Particularly notable contributions are collected in a book also titled Python
Cookbook (O'Reilly & Associates, ISBN 0-596-00797-3.)
-* http://www.pyvideo.org collects links to Python-related videos from
+* https://pyvideo.org collects links to Python-related videos from
conferences and user-group meetings.
* https://scipy.org: The Scientific Python project includes modules for fast
between repeated invocations of Python.
Hash randomization is intended to provide protection against a
- denial-of-service caused by carefully-chosen inputs that exploit the worst
+ denial-of-service caused by carefully chosen inputs that exploit the worst
case performance of a dict construction, O(n\ :sup:`2`) complexity. See
http://www.ocert.org/advisories/ocert-2011-003.html for details.
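The effect of hash randomization can be observed by fixing :envvar:`PYTHONHASHSEED` in child interpreters; with the same seed, string hashes are reproducible across runs. A sketch, assuming ``sys.executable`` points at a working interpreter:

```python
import os
import subprocess
import sys

def string_hash_with_seed(seed: str) -> str:
    # Spawn a fresh interpreter so the seed is applied at startup.
    env = dict(os.environ, PYTHONHASHSEED=seed)
    proc = subprocess.run(
        [sys.executable, "-c", "print(hash('spam'))"],
        env=env, capture_output=True, text=True, check=True,
    )
    return proc.stdout.strip()

# The same seed yields the same hash in every run...
first = string_hash_with_seed("1")
second = string_hash_with_seed("1")
print(first == second)  # True
```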
whether the actual warning category of the message is a subclass of the
specified warning category.
- The *module* field matches the (fully-qualified) module name; this match is
+ The *module* field matches the (fully qualified) module name; this match is
case-sensitive.
The *lineno* field matches the line number, where zero matches all line
extensions. Use it when a compiler flag should *not* be part of the
distutils :envvar:`CFLAGS` once Python is installed (:issue:`21121`).
+ In particular, :envvar:`CFLAGS` should not contain:
+
+ * the compiler flag ``-I`` (for setting the search path for include files).
+ The ``-I`` flags are processed from left to right, and any flags in
+ :envvar:`CFLAGS` would take precedence over user- and package-supplied ``-I``
+ flags.
+
+ * hardening flags such as ``-Werror`` because distributions cannot control
+ whether packages installed by users conform to such heightened
+ standards.
+
.. versionadded:: 3.5
.. envvar:: EXTRA_CFLAGS
:envvar:`CFLAGS_NODIST`. Use it when a linker flag should *not* be part of
the distutils :envvar:`LDFLAGS` once Python is installed (:issue:`35257`).
+ In particular, :envvar:`LDFLAGS` should not contain:
+
+ * the compiler flag ``-L`` (for setting the search path for libraries).
+ The ``-L`` flags are processed from left to right, and any flags in
+ :envvar:`LDFLAGS` would take precedence over user- and package-supplied ``-L``
+ flags.
+
.. envvar:: CONFIGURE_LDFLAGS_NODIST
Value of :envvar:`LDFLAGS_NODIST` variable passed to the ``./configure``
If you want to compile CPython yourself, first thing you should do is get the
`source <https://www.python.org/downloads/source/>`_. You can download either the
latest release's source or just grab a fresh `clone
-<https://devguide.python.org/setup/#getting-the-source-code>`_. (If you want
+<https://devguide.python.org/setup/#get-the-source-code>`_. (If you want
to contribute patches, you will need a clone.)
The build process consists of the usual commands::
.. deprecated:: 3.6
``pyvenv`` was the recommended tool for creating virtual environments for
Python 3.3 and 3.4, and is `deprecated in Python 3.6
- <https://docs.python.org/dev/whatsnew/3.6.html#deprecated-features>`_.
+ <https://docs.python.org/dev/whatsnew/3.6.html#id8>`_.
.. versionchanged:: 3.5
The use of ``venv`` is now recommended for creating virtual environments.
:ref:`windows-store` is a simple installation of Python that is suitable for
running scripts and packages, and using IDLE or other development environments.
-It requires Windows 10, but can be safely installed without corrupting other
+It requires Windows 10 or later, but can be safely installed without corrupting other
programs. It also provides many convenient commands for launching Python and
its tools.
remove all packages you installed directly into this Python installation, but
will not remove any virtual environments
-Known Issues
+Known issues
------------
+Redirection of local data, registry, and temporary paths
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
Because of restrictions on Microsoft Store apps, Python scripts may not have
-full write access to shared locations such as ``TEMP`` and the registry.
+full write access to shared locations such as :envvar:`TEMP` and the registry.
Instead, it will write to a private copy. If your scripts must modify the
shared locations, you will need to install the full installer.
+At runtime, Python will use a private copy of well-known Windows folders and the registry.
+For example, if the environment variable :envvar:`APPDATA` is :file:`C:\\Users\\<user>\\AppData\\`,
+then writing to :file:`C:\\Users\\<user>\\AppData\\Local` will actually write to
+:file:`C:\\Users\\<user>\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\Local\\`.
+
+When reading files, Windows will return the file from the private folder, or if that does not exist, the
+real Windows directory. For example, reading :file:`C:\\Windows\\System32` returns the contents of :file:`C:\\Windows\\System32`
+plus the contents of :file:`C:\\Program Files\\WindowsApps\\package_name\\VFS\\SystemX86`.
+
+You can find the real path of any existing file using :func:`os.path.realpath`:
+
+.. code-block:: python
+
+ >>> import os
+ >>> test_file = 'C:\\Users\\example\\AppData\\Local\\test.txt'
+ >>> os.path.realpath(test_file)
+ 'C:\\Users\\example\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\\LocalCache\\Local\\test.txt'
+
+When reading from or writing to the Windows Registry, the following behaviors apply:
+
+* Reading from ``HKLM\Software`` is allowed and results are merged with the :file:`registry.dat` file in the package.
+* Writing to ``HKLM\Software`` is not allowed if the corresponding key/value already exists, i.e. modifying existing keys is not allowed.
+* Writing to ``HKLM\Software`` is allowed as long as a corresponding key/value does not exist in the package
+  and the user has the correct access permissions.
+
For more detail on the technical basis for these limitations, please consult
Microsoft's documentation on packaged full-trust apps, currently available at
`docs.microsoft.com/en-us/windows/msix/desktop/desktop-to-uwp-behind-the-scenes
Popular scientific modules (such as numpy, scipy and pandas) and the
``conda`` package manager.
-`Canopy <https://www.enthought.com/product/canopy/>`_
- A "comprehensive Python analysis environment" with editors and other
- development tools.
+`Enthought Deployment Manager <https://www.enthought.com/edm/>`_
+ "The Next Generation Python Environment and Package Manager".
+
+ Previously Enthought provided Canopy, but it `reached end of life in 2016
+ <https://support.enthought.com/hc/en-us/articles/360038600051-Canopy-GUI-end-of-life-transition-to-the-Enthought-Deployment-Manager-EDM-and-Visual-Studio-Code>`_.
`WinPython <https://winpython.github.io/>`_
Windows-specific distribution with prebuilt scientific packages and
If you want to compile CPython yourself, first thing you should do is get the
`source <https://www.python.org/downloads/source/>`_. You can download either the
latest release's source or just grab a fresh `checkout
-<https://devguide.python.org/setup/#getting-the-source-code>`_.
+<https://devguide.python.org/setup/#get-the-source-code>`_.
The source tree contains a build solution and project files for Microsoft
Visual Studio, which is the compiler used to build the official Python
of bug fixes and improvements are always being submitted. A host of minor fixes,
a few optimizations, additional docstrings, and better error messages went into
2.0; to list them all would be impossible, but they're certainly significant.
-Consult the publicly-available CVS logs if you want to see the full list. This
+Consult the publicly available CVS logs if you want to see the full list. This
progress is due to the fact that the five developers working for PythonLabs are
now getting paid to spend their days fixing bugs, and also due to the improved
communication resulting from moving to SourceForge.
A common complaint from Python users is that there's no single catalog of all
the Python modules in existence. T. Middleton's Vaults of Parnassus at
-http://www.vex.net/parnassus/ are the largest catalog of Python modules, but
-registering software at the Vaults is optional, and many people don't bother.
+``www.vex.net/parnassus/`` (retired in February 2009, `available in the
+Internet Archive Wayback Machine
+<https://web.archive.org/web/20090130140102/http://www.vex.net/parnassus/>`_)
+was the largest catalog of Python modules, but
+registering software at the Vaults was optional, and many people did not bother.
As a first small step toward fixing the problem, Python software packaged using
the Distutils :command:`sdist` command will include a file named
of filters, and each filter can cause the :class:`LogRecord` to be ignored or
can modify the record before passing it along. When they're finally output,
:class:`LogRecord` instances are converted to text by a :class:`Formatter`
-class. All of these classes can be replaced by your own specially-written
+class. All of these classes can be replaced by your own specially written
classes.
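As a sketch of replacing one of these classes, the snippet below swaps in a hand-written :class:`Formatter` subclass (the ``TaggedFormatter`` name and output format are invented for illustration):

```python
import io
import logging

class TaggedFormatter(logging.Formatter):
    def format(self, record):
        # Render the record ourselves instead of using a format string.
        return f"[{record.levelname}] {record.getMessage()}"

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(TaggedFormatter())

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.propagate = False

logger.info("server %s started", "alpha")
print(stream.getvalue().strip())  # [INFO] server alpha started
```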
With all of these features the :mod:`logging` package should provide enough
into this picture very well because they produce a Python list object containing
all of the items. This unavoidably pulls all of the objects into memory, which
can be a problem if your data set is very large. When trying to write a
-functionally-styled program, it would be natural to write something like::
+functionally styled program, it would be natural to write something like::
links = [link for link in get_all_links() if not link.followed]
for link in links:
(Contributed by Raymond Hettinger.)
-* Encountering a failure while importing a module no longer leaves a partially-initialized
+* Encountering a failure while importing a module no longer leaves a partially initialized
module object in ``sys.modules``. The incomplete module object left
behind would fool further imports of the same module into succeeding, leading to
confusing errors. (Fixed by Tim Peters.)
* The :mod:`tarfile` module now generates GNU-format tar files by default.
* Encountering a failure while importing a module no longer leaves a
- partially-initialized module object in ``sys.modules``.
+ partially initialized module object in ``sys.modules``.
* :const:`None` is now a constant; code that binds a new value to the name
``None`` is now a syntax error.
The changes in Python 2.5 are an interesting mix of language and library
improvements. The library enhancements will be more important to Python's user
-community, I think, because several widely-useful packages were added. New
+community, I think, because several widely useful packages were added. New
modules include ElementTree for XML processing (:mod:`xml.etree`),
the SQLite database module (:mod:`sqlite`), and the :mod:`ctypes`
module for calling C functions.
https://en.wikipedia.org/wiki/Coroutine
The Wikipedia entry for coroutines.
- http://www.sidhe.org/~dan/blog/archives/000178.html
+ https://web.archive.org/web/20160321211320/http://www.sidhe.org/~dan/blog/archives/000178.html
An explanation of coroutines from a Perl point of view, written by Dan Sugalski.
.. ======================================================================
received several enhancements and a number of bugfixes. You can now set the
maximum size in bytes of a field by calling the
``csv.field_size_limit(new_limit)`` function; omitting the *new_limit*
- argument will return the currently-set limit. The :class:`reader` class now has
+ argument will return the currently set limit. The :class:`reader` class now has
a :attr:`line_num` attribute that counts the number of physical lines read from
the source; records can span multiple physical lines, so :attr:`line_num` is not
the same as the number of records read.
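A short sketch of the two calling conventions: with no argument the function reports the current limit, and with an argument it installs a new one and returns the old:

```python
import csv

old_limit = csv.field_size_limit()          # query only
previous = csv.field_size_limit(1_000_000)  # set, returning the old value
assert previous == old_limit

print(csv.field_size_limit())  # 1000000

csv.field_size_limit(old_limit)  # restore the original limit
```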
.. seealso::
- http://starship.python.net/crew/theller/ctypes/
- The ctypes web page, with a tutorial, reference, and FAQ.
+ https://web.archive.org/web/20180410025338/http://starship.python.net/crew/theller/ctypes/
+ The pre-stdlib ctypes web page, with a tutorial, reference, and FAQ.
The documentation for the :mod:`ctypes` module.
.. seealso::
- http://www.wsgi.org
+ https://web.archive.org/web/20160331090247/http://wsgi.readthedocs.org/en/latest/
A central web site for WSGI-related resources.
:pep:`333` - Python Web Server Gateway Interface v1.0
of Stellenbosch, South Africa. Martin von Löwis put a
lot of effort into importing existing bugs and patches from
SourceForge; his scripts for this import operation are at
-http://svn.python.org/view/tracker/importer/ and may be useful to
+``http://svn.python.org/view/tracker/importer/`` and may be useful to
other projects wishing to move from SourceForge to Roundup.
.. seealso::
When you run Python, the module search path ``sys.path`` usually
includes a directory whose path ends in ``"site-packages"``. This
-directory is intended to hold locally-installed packages available to
+directory is intended to hold locally installed packages available to
all users using a machine or a particular site installation.
Python 2.6 introduces a convention for user-specific site directories.
The function must take a filename and return true if the file
should be excluded or false if it should be archived.
The function is applied to both the name initially passed to :meth:`add`
- and to the names of files in recursively-added directories.
+ and to the names of files in recursively added directories.
(All changes contributed by Lars Gustäbel).
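The *exclude* hook described here was later superseded by the *filter* argument of :meth:`TarFile.add`, which receives a :class:`TarInfo` and returns ``None`` to drop a member. A sketch using the modern spelling (the directory layout is invented):

```python
import io
import os
import tarfile
import tempfile

with tempfile.TemporaryDirectory() as root:
    open(os.path.join(root, "keep.txt"), "w").close()
    open(os.path.join(root, "skip.log"), "w").close()

    def exclude_logs(tarinfo):
        # Returning None drops the member; returning it (possibly
        # modified) keeps it. It is applied to recursively added
        # files as well as the name initially passed to add().
        return None if tarinfo.name.endswith(".log") else tarinfo

    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        tar.add(root, arcname="tree", filter=exclude_logs)

    buf.seek(0)
    with tarfile.open(fileobj=buf) as tar:
        names = tar.getnames()

print(sorted(names))  # ['tree', 'tree/keep.txt']
```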
(Contributed by Brett Cannon.)
* The :mod:`textwrap` module can now preserve existing whitespace
- at the beginnings and ends of the newly-created lines
+ at the beginnings and ends of the newly created lines
by specifying ``drop_whitespace=False``
as an argument::
ignores the insertion order and just compares the keys and values.
How does the :class:`~collections.OrderedDict` work? It maintains a
-doubly-linked list of keys, appending new keys to the list as they're inserted.
+doubly linked list of keys, appending new keys to the list as they're inserted.
A secondary dictionary maps keys to their corresponding list node, so
deletion doesn't have to traverse the entire linked list and therefore
remains O(1).
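The linked-list bookkeeping is internal, but its O(1) effects are visible through ``move_to_end`` and deletion:

```python
from collections import OrderedDict

d = OrderedDict.fromkeys("abcde")

d.move_to_end("b")               # relink 'b' at the tail in O(1)
print("".join(d))                # acdeb

d.move_to_end("b", last=False)   # relink 'b' at the head
print("".join(d))                # bacde

del d["c"]                       # the key -> node map makes this O(1)
print("".join(d))                # bade
```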
*ciphers* argument that's a string listing the encryption algorithms
to be allowed; the format of the string is described
`in the OpenSSL documentation
- <https://www.openssl.org/docs/manmaster/man1/ciphers.html#CIPHER-LIST-FORMAT>`__.
+ <https://www.openssl.org/docs/man1.0.2/man1/ciphers.html>`__.
(Added by Antoine Pitrou; :issue:`8322`.)
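The same cipher-list format is accepted by :meth:`ssl.SSLContext.set_ciphers`, the modern counterpart of the *ciphers* argument; a sketch (the cipher string is just an example):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# OpenSSL cipher-list format: '+' intersects cipher families,
# ':' separates alternatives, '!' permanently removes.
ctx.set_ciphers("ECDHE+AESGCM:!aNULL")

enabled = {c["name"] for c in ctx.get_ciphers()}
print(len(enabled) > 0)  # True: at least one matching cipher is enabled
```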
Another change makes the extension load all of OpenSSL's ciphers and
* :meth:`~unittest.TestCase.assertAlmostEqual` and :meth:`~unittest.TestCase.assertNotAlmostEqual` test
whether *first* and *second* are approximately equal. This method
- can either round their difference to an optionally-specified number
+ can either round their difference to an optionally specified number
of *places* (the default is 7) and compare it to zero, or require
the difference to be smaller than a supplied *delta* value.
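Both calling styles in a runnable sketch:

```python
import unittest

class ApproxTest(unittest.TestCase):
    def test_places(self):
        # The difference, rounded to 3 decimal places, must equal zero.
        self.assertAlmostEqual(1.0015, 1.0014, places=3)

    def test_delta(self):
        # Alternatively, require |first - second| <= delta.
        self.assertAlmostEqual(100.0, 100.4, delta=0.5)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApproxTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```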
.. seealso::
- http://www.voidspace.org.uk/python/articles/unittest2.shtml
+ https://web.archive.org/web/20210619163128/http://www.voidspace.org.uk/python/articles/unittest2.shtml
Describes the new features, how to use them, and the
rationale for various design decisions. (By Michael Foord.)
* The :mod:`_winreg` module for accessing the registry now implements
the :func:`~_winreg.CreateKeyEx` and :func:`~_winreg.DeleteKeyEx`
- functions, extended versions of previously-supported functions that
+ functions, extended versions of previously supported functions that
take several extra arguments. The :func:`~_winreg.DisableReflectionKey`,
:func:`~_winreg.EnableReflectionKey`, and :func:`~_winreg.QueryReflectionKey`
were also tested and documented.
:data:`string.ascii_letters` etc. instead. (The reason for the
removal is that :data:`string.letters` and friends had
locale-specific behavior, which is a bad idea for such
- attractively-named global "constants".)
+ attractively named global "constants".)
* Renamed module :mod:`__builtin__` to :mod:`builtins` (removing the
underscores, adding an 's'). The :data:`__builtins__` variable
^
SyntaxError: expected ':'
- (Contributed by Pablo Galindo in :issue:`42997`)
+ (Contributed by Pablo Galindo in :issue:`42997`.)
* Unparenthesised tuples in comprehension targets:
^
SyntaxError: did you forget parentheses around the comprehension target?
- (Contributed by Pablo Galindo in :issue:`43017`)
+ (Contributed by Pablo Galindo in :issue:`43017`.)
* Missing commas in collection literals and between expressions:
^
SyntaxError: invalid syntax. Perhaps you forgot a comma?
- (Contributed by Pablo Galindo in :issue:`43822`)
+ (Contributed by Pablo Galindo in :issue:`43822`.)
* Multiple Exception types without parentheses:
^
SyntaxError: multiple exception types must be parenthesized
- (Contributed by Pablo Galindo in :issue:`43149`)
+ (Contributed by Pablo Galindo in :issue:`43149`.)
* Missing ``:`` and values in dictionary literals:
^
SyntaxError: ':' expected after dictionary key
- (Contributed by Pablo Galindo in :issue:`43823`)
+ (Contributed by Pablo Galindo in :issue:`43823`.)
* ``try`` blocks without ``except`` or ``finally`` blocks:
^^^^^^^^^
SyntaxError: expected 'except' or 'finally' block
- (Contributed by Pablo Galindo in :issue:`44305`)
+ (Contributed by Pablo Galindo in :issue:`44305`.)
* Usage of ``=`` instead of ``==`` in comparisons:
^
SyntaxError: cannot assign to attribute here. Maybe you meant '==' instead of '='?
- (Contributed by Pablo Galindo in :issue:`43797`)
+ (Contributed by Pablo Galindo in :issue:`43797`.)
* Usage of ``*`` in f-strings:
^
SyntaxError: f-string: cannot use starred expression here
- (Contributed by Pablo Galindo in :issue:`41064`)
+ (Contributed by Pablo Galindo in :issue:`41064`.)
IndentationErrors
~~~~~~~~~~~~~~~~~
Here, ``z`` and ``t`` are keyword-only parameters, while ``x`` and
``y`` are not.
-(Contributed by Eric V. Smith in :issue:`43532`)
+(Contributed by Eric V. Smith in :issue:`43532`.)
.. _distutils-deprecated:
-------
Add slice support to :attr:`PurePath.parents <pathlib.PurePath.parents>`.
-(Contributed by Joshua Cannon in :issue:`35498`)
+(Contributed by Joshua Cannon in :issue:`35498`.)
Add negative indexing support to :attr:`PurePath.parents
<pathlib.PurePath.parents>`.
-(Contributed by Yaroslav Pankovych in :issue:`21041`)
+(Contributed by Yaroslav Pankovych in :issue:`21041`.)
Add :meth:`Path.hardlink_to <pathlib.Path.hardlink_to>` method that
supersedes :meth:`~pathlib.Path.link_to`. The new method has the same argument
Add :func:`platform.freedesktop_os_release()` to retrieve operation system
identification from `freedesktop.org os-release
<https://www.freedesktop.org/software/systemd/man/os-release.html>`_ standard file.
-(Contributed by Christian Heimes in :issue:`28468`)
+(Contributed by Christian Heimes in :issue:`28468`.)
pprint
------
Add new function :func:`typing.is_typeddict` to introspect if an annotation
is a :class:`typing.TypedDict`.
-(Contributed by Patrick Reader in :issue:`41792`)
+(Contributed by Patrick Reader in :issue:`41792`.)
Subclasses of ``typing.Protocol`` which only have data variables declared
will now raise a ``TypeError`` when checked with ``isinstance`` unless they
passed silently. Users should decorate their
subclasses with the :func:`runtime_checkable` decorator
if they want runtime protocols.
-(Contributed by Yurii Karabas in :issue:`38908`)
+(Contributed by Yurii Karabas in :issue:`38908`.)
Importing from the ``typing.io`` and ``typing.re`` submodules will now emit
:exc:`DeprecationWarning`. These submodules have been deprecated since
Python 3.8 and will be removed in a future version of Python. Anything
belonging to those submodules should be imported directly from
:mod:`typing` instead.
-(Contributed by Sebastian Rittau in :issue:`38291`)
+(Contributed by Sebastian Rittau in :issue:`38291`.)
unittest
--------
strings, and the function object lazily converts this into the annotations dict
on demand. This optimization cuts the CPU time needed to define an annotated
function by half.
- (Contributed by Yurii Karabas and Inada Naoki in :issue:`42202`)
+ (Contributed by Yurii Karabas and Inada Naoki in :issue:`42202`.)
* Substring search functions such as ``str1 in str2`` and ``str2.find(str1)``
now sometimes use Crochemore & Perrin's "Two-Way" string searching
* Add micro-optimizations to ``_PyType_Lookup()`` to improve type attribute cache lookup
performance in the common case of cache hits. This makes the interpreter 1.04 times faster
- on average. (Contributed by Dino Viehland in :issue:`43452`)
+ on average. (Contributed by Dino Viehland in :issue:`43452`.)
* The following built-in functions now support the faster :pep:`590` vectorcall calling convention:
:func:`map`, :func:`filter`, :func:`reversed`, :func:`bool` and :func:`float`.
- (Contributed by Dong-hee Na and Jeroen Demeyer in :issue:`43575`, :issue:`43287`, :issue:`41922`, :issue:`41873` and :issue:`41870`)
+ (Contributed by Dong-hee Na and Jeroen Demeyer in :issue:`43575`, :issue:`43287`, :issue:`41922`, :issue:`41873` and :issue:`41870`.)
* :class:`BZ2File` performance is improved by removing internal ``RLock``.
This makes :class:`BZ2File` thread unsafe in the face of multiple simultaneous
readers or writers, just like its equivalent classes in :mod:`gzip` and
- :mod:`lzma` have always been. (Contributed by Inada Naoki in :issue:`43785`).
+ :mod:`lzma` have always been. (Contributed by Inada Naoki in :issue:`43785`.)
.. _whatsnew310-deprecated:
:keyword:`for`, :keyword:`if`, :keyword:`in`, :keyword:`is` and :keyword:`or`.
In future releases it will be changed to syntax warning, and finally to
syntax error.
- (Contributed by Serhiy Storchaka in :issue:`43833`).
+ (Contributed by Serhiy Storchaka in :issue:`43833`.)
* Starting in this release, there will be a concerted effort to begin
cleaning up old import semantics that were kept for Python 2.7
* ``threading.Thread.setDaemon`` => :attr:`threading.Thread.daemon`
- (Contributed by Jelle Zijlstra in :issue:`21574`.)
+ (Contributed by Jelle Zijlstra in :gh:`87889`.)
* :meth:`pathlib.Path.link_to` is deprecated and slated for removal in
Python 3.12. Use :meth:`pathlib.Path.hardlink_to` instead.
:exc:`DeprecationWarning`. These submodules will be removed in a future version
of Python. Anything belonging to these submodules should be imported directly
from :mod:`typing` instead.
- (Contributed by Sebastian Rittau in :issue:`38291`)
+ (Contributed by Sebastian Rittau in :issue:`38291`.)
.. _whatsnew310-removed:
syntax error. To get rid of the warning and make the code compatible with
future releases just add a space between the numeric literal and the
following keyword.
- (Contributed by Serhiy Storchaka in :issue:`43833`).
+ (Contributed by Serhiy Storchaka in :issue:`43833`.)
.. _changes-python-api:
* The ``MAKE_FUNCTION`` instruction now accepts either a dict or a tuple of
strings as the function's annotations.
- (Contributed by Yurii Karabas and Inada Naoki in :issue:`42202`)
+ (Contributed by Yurii Karabas and Inada Naoki in :issue:`42202`.)
Build Changes
=============
directory. These files must not be included directly, as they are already
included in ``Python.h``: :ref:`Include Files <api-includes>`. If they have
been included directly, consider including ``Python.h`` instead.
- (Contributed by Nicholas Sim in :issue:`35134`)
+ (Contributed by Nicholas Sim in :issue:`35134`.)
* Use the :c:data:`Py_TPFLAGS_IMMUTABLETYPE` type flag to create immutable type
objects. Do not rely on :c:data:`Py_TPFLAGS_HEAPTYPE` to decide if a type
* The undocumented function ``Py_FrozenMain`` has been removed from the
limited API. The function is mainly useful for custom builds of Python.
- (Contributed by Petr Viktorin in :issue:`26241`)
+ (Contributed by Petr Viktorin in :issue:`26241`.)
Deprecated
----------
This article explains the new features in Python 3.2 as compared to 3.1.
Python 3.2 was released on February 20, 2011. It
focuses on a few highlights and gives a few examples. For full details, see the
-`Misc/NEWS
-<https://github.com/python/cpython/blob/076ca6c3c8df3030307e548d9be792ce3c1c6eea/Misc/NEWS>`_
+`Misc/NEWS <https://github.com/python/cpython/blob/v3.2.6/Misc/NEWS>`_
file.
.. seealso::
* The :func:`ssl.wrap_socket` constructor function now takes a *ciphers*
argument. The *ciphers* string lists the allowed encryption algorithms using
the format described in the `OpenSSL documentation
- <https://www.openssl.org/docs/manmaster/man1/ciphers.html#CIPHER-LIST-FORMAT>`__.
+ <https://www.openssl.org/docs/man1.0.2/man1/ciphers.html#CIPHER-LIST-FORMAT>`__.
* When linked against recent versions of OpenSSL, the :mod:`ssl` module now
supports the Server Name Indication extension to the TLS protocol, allowing
longer used and it had never been documented (:issue:`8837`).
There were a number of other small changes to the C-API. See the
-:source:`Misc/NEWS` file for a complete list.
+`Misc/NEWS <https://github.com/python/cpython/blob/v3.2.6/Misc/NEWS>`_
+file for a complete list.
Also, there were a number of updates to the Mac OS X build, see
-:source:`Mac/BuildScript/README.txt` for details. For users running a 32/64-bit
+`Mac/BuildScript/README.txt <https://github.com/python/cpython/blob/v3.2.6/Mac/BuildScript/README.txt>`_
+for details. For users running a 32/64-bit
build, there is a known problem with the default Tcl/Tk on Mac OS X 10.6.
Accordingly, we recommend installing an updated alternative such as
`ActiveState Tcl/Tk 8.5.9 <https://www.activestate.com/activetcl/downloads>`_\.
Virtual environments help create separate Python setups while sharing a
system-wide base install, for ease of maintenance. Virtual environments
-have their own set of private site packages (i.e. locally-installed
+have their own set of private site packages (i.e. locally installed
libraries), and are optionally segregated from the system-wide site
packages. Their concept and implementation are inspired by the popular
``virtualenv`` third-party package, but benefit from tighter integration
lzma
----
-The newly-added :mod:`lzma` module provides data compression and decompression
+The newly added :mod:`lzma` module provides data compression and decompression
using the LZMA algorithm, including support for the ``.xz`` and ``.lzma``
file formats.
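A round-trip sketch:

```python
import lzma

data = b"repetitive payload " * 200

packed = lzma.compress(data)    # .xz container format by default
restored = lzma.decompress(packed)

print(restored == data)         # True
print(len(packed) < len(data))  # True: repetitive input compresses well
```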
C-module and libmpdec written by Stefan Krah.
The new C version of the decimal module integrates the high speed libmpdec
-library for arbitrary precision correctly-rounded decimal floating point
+library for arbitrary precision correctly rounded decimal floating point
arithmetic. libmpdec conforms to IBM's General Decimal Arithmetic Specification.
Performance gains range from 10x for database applications to 100x for
in order to obtain a rounded or inexact value.
-* The power function in decimal.py is always correctly-rounded. In the
- C version, it is defined in terms of the correctly-rounded
+* The power function in decimal.py is always correctly rounded. In the
+ C version, it is defined in terms of the correctly rounded
:meth:`~decimal.Decimal.exp` and :meth:`~decimal.Decimal.ln` functions,
but the final result is only "almost always correctly rounded".
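Correct rounding is observable directly from the context precision:

```python
from decimal import Decimal, getcontext

getcontext().prec = 10   # results are correctly rounded to 10 digits

third = Decimal(1) / Decimal(3)
print(third)             # 0.3333333333

# Decimal also avoids binary floating-point artifacts:
print(Decimal("0.1") + Decimal("0.2"))  # 0.3
```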
----
New attribute :attr:`zlib.Decompress.eof` makes it possible to distinguish
-between a properly-formed compressed stream and an incomplete or truncated one.
+between a properly formed compressed stream and an incomplete or truncated one.
(Contributed by Nadeem Vawda in :issue:`12646`.)
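A sketch contrasting a complete stream with a truncated one:

```python
import zlib

payload = zlib.compress(b"stream contents")

whole = zlib.decompressobj()
whole.decompress(payload)
print(whole.eof)        # True: the end-of-stream marker was reached

truncated = zlib.decompressobj()
truncated.decompress(payload[:-4])   # drop the trailing checksum bytes
print(truncated.eof)    # False: the stream never terminated properly
```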
New attribute :attr:`zlib.ZLIB_RUNTIME_VERSION` reports the version string of
:keyword:`with` block becomes a "sub-test". This context manager allows a test
method to dynamically generate subtests by, say, calling the ``subTest``
context manager inside a loop. A single test method can thereby produce an
-indefinite number of separately-identified and separately-counted tests, all of
+indefinite number of separately identified and separately counted tests, all of
which will run even if one or more of them fail. For example::
class NumbersTest(unittest.TestCase):
``malloc`` in ``obmalloc``. Artificial benchmarks show about a 3% memory
savings.
-* :func:`os.urandom` now uses a lazily-opened persistent file descriptor
+* :func:`os.urandom` now uses a lazily opened persistent file descriptor
so as to avoid using many file descriptors when run in parallel from
multiple threads. (Contributed by Antoine Pitrou in :issue:`18756`.)
platform specific ``Lib/plat-*/`` directories, but were chronically out of
date, inconsistently available across platforms, and unmaintained. The
script that created these modules is still available in the source
- distribution at :source:`Tools/scripts/h2py.py`.
+ distribution at `Tools/scripts/h2py.py
+ <https://github.com/python/cpython/blob/v3.6.15/Tools/scripts/h2py.py>`_.
* The deprecated ``asynchat.fifo`` class has been removed.
While Python provides a C API for thread-local storage support, the existing
:ref:`Thread Local Storage (TLS) API <thread-local-storage-api>` has used
:c:type:`int` to represent TLS keys across all platforms. This has not
-generally been a problem for officially-support platforms, but that is neither
+generally been a problem for officially supported platforms, but that is neither
POSIX-compliant, nor portable in any practical sense.
:pep:`539` changes this by providing a new :ref:`Thread Specific Storage (TSS)
:issue:`31368`.)
The mode argument of :func:`os.makedirs` no longer affects the file
-permission bits of newly-created intermediate-level directories.
+permission bits of newly created intermediate-level directories.
(Contributed by Serhiy Storchaka in :issue:`19930`.)
:func:`os.dup2` now returns the new file descriptor. Previously, ``None``
of other LTS Linux releases (e.g. RHEL/CentOS 7.5, SLES 12-SP3), use OpenSSL
1.0.2 or later, and remain supported in the default build configuration.
- CPython's own :source:`CI configuration file <.travis.yml>` provides an
+ CPython's own `CI configuration file
+ <https://github.com/python/cpython/blob/v3.7.13/.travis.yml>`_ provides an
example of using the SSL
:source:`compatibility testing infrastructure <Tools/ssl/multissltests.py>` in
CPython's test suite to build and link against OpenSSL 1.1.0 rather than an
(Contributed by Serhiy Storchaka in :issue:`29192`.)
* The *mode* argument of :func:`os.makedirs` no longer affects the file
- permission bits of newly-created intermediate-level directories.
+ permission bits of newly created intermediate-level directories.
To set their file permission bits you can set the umask before invoking
``makedirs()``.
(Contributed by Serhiy Storchaka in :issue:`19930`.)
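A sketch of the documented behavior (POSIX-specific; the exact bits depend on the platform): the umask, not the *mode* argument, governs the intermediate directories.

```python
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    old_umask = os.umask(0o077)  # owner-only for anything newly created
    try:
        # mode= applies only to the leaf "c"; intermediates "a" and "b"
        # are created with the default mode filtered by the umask.
        os.makedirs(os.path.join(tmp, "a", "b", "c"), mode=0o777)
    finally:
        os.umask(old_umask)
    inter = stat.S_IMODE(os.stat(os.path.join(tmp, "a")).st_mode)
    print(oct(inter))  # 0o700 on typical POSIX systems
```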
``replace()`` method of :class:`types.CodeType` can be used to make the code
future-proof.
+* The parameter ``digestmod`` for :func:`hmac.new` no longer uses the MD5 digest
+ by default.
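A minimal sketch of the new requirement: ``digestmod`` must be supplied explicitly, for example as a :mod:`hashlib` constructor.

```python
import hashlib
import hmac

# digestmod is passed explicitly; omitting it no longer falls back to MD5.
mac = hmac.new(b"secret-key", b"message", digestmod=hashlib.sha256)
print(mac.name, len(mac.hexdigest()))  # hmac-sha256 64
```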
Changes in the C API
--------------------
for_stmt[stmt_ty]:
| invalid_for_stmt
- | 'for' t=star_targets 'in' ~ ex=star_expressions &&':' tc=[TYPE_COMMENT] b=block el=[else_block] {
+ | 'for' t=star_targets 'in' ~ ex=star_expressions ':' tc=[TYPE_COMMENT] b=block el=[else_block] {
_PyAST_For(t, ex, b, el, NEW_TYPE_COMMENT(p, tc), EXTRA) }
- | ASYNC 'for' t=star_targets 'in' ~ ex=star_expressions &&':' tc=[TYPE_COMMENT] b=block el=[else_block] {
+ | ASYNC 'for' t=star_targets 'in' ~ ex=star_expressions ':' tc=[TYPE_COMMENT] b=block el=[else_block] {
CHECK_VERSION(stmt_ty, 5, "Async for loops are", _PyAST_AsyncFor(t, ex, b, el, NEW_TYPE_COMMENT(p, tc), EXTRA)) }
| invalid_for_target
with_stmt[stmt_ty]:
| invalid_with_stmt_indent
| 'with' '(' a[asdl_withitem_seq*]=','.with_item+ ','? ')' ':' b=block {
- _PyAST_With(a, b, NULL, EXTRA) }
+ CHECK_VERSION(stmt_ty, 9, "Parenthesized context managers are", _PyAST_With(a, b, NULL, EXTRA)) }
| 'with' a[asdl_withitem_seq*]=','.with_item+ ':' tc=[TYPE_COMMENT] b=block {
_PyAST_With(a, b, NEW_TYPE_COMMENT(p, tc), EXTRA) }
| ASYNC 'with' '(' a[asdl_withitem_seq*]=','.with_item+ ','? ')' ':' b=block {
or_pattern[pattern_ty]:
| patterns[asdl_pattern_seq*]='|'.closed_pattern+ {
asdl_seq_LEN(patterns) == 1 ? asdl_seq_GET(patterns, 0) : _PyAST_MatchOr(patterns, EXTRA) }
-closed_pattern[pattern_ty]:
+
+closed_pattern[pattern_ty] (memo):
| literal_pattern
| capture_pattern
| wildcard_pattern
maybe_star_pattern[pattern_ty]:
| star_pattern
| pattern
-star_pattern[pattern_ty]:
+
+star_pattern[pattern_ty] (memo):
| '*' target=pattern_capture_target {
_PyAST_MatchStar(target->v.Name.id, EXTRA) }
| '*' wildcard_pattern {
| class_def_raw
class_def_raw[stmt_ty]:
| invalid_class_def_raw
- | 'class' a=NAME b=['(' z=[arguments] ')' { z }] &&':' c=block {
+ | 'class' a=NAME b=['(' z=[arguments] ')' { z }] ':' c=block {
_PyAST_ClassDef(a->v.Name.id,
(b) ? ((expr_ty) b)->v.Call.args : NULL,
(b) ? ((expr_ty) b)->v.Call.keywords : NULL,
assignment_expression[expr_ty]:
- | a=NAME ':=' ~ b=expression { _PyAST_NamedExpr(CHECK(expr_ty, _PyPegen_set_expr_context(p, a, Store)), b, EXTRA) }
+ | a=NAME ':=' ~ b=expression {
+ CHECK_VERSION(expr_ty, 8, "Assignment expressions are",
+ _PyAST_NamedExpr(CHECK(expr_ty, _PyPegen_set_expr_context(p, a, Store)), b, EXTRA)) }
named_expression[expr_ty]:
| assignment_expression
RAISE_SYNTAX_ERROR("trailing comma not allowed without surrounding parentheses") }
invalid_with_stmt:
- | [ASYNC] 'with' ','.(expression ['as' star_target])+ &&':'
- | [ASYNC] 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' &&':'
+ | [ASYNC] 'with' ','.(expression ['as' star_target])+ NEWLINE { RAISE_SYNTAX_ERROR("expected ':'") }
+ | [ASYNC] 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' NEWLINE { RAISE_SYNTAX_ERROR("expected ':'") }
invalid_with_stmt_indent:
| [ASYNC] a='with' ','.(expression ['as' star_target])+ ':' NEWLINE !INDENT {
RAISE_INDENTATION_ERROR("expected an indented block after 'with' statement on line %d", a->lineno) }
RAISE_INDENTATION_ERROR("expected an indented block after 'except' statement on line %d", a->lineno) }
| a='except' ':' NEWLINE !INDENT { RAISE_INDENTATION_ERROR("expected an indented block after 'except' statement on line %d", a->lineno) }
invalid_match_stmt:
- | "match" subject_expr !':' { CHECK_VERSION(void*, 10, "Pattern matching is", RAISE_SYNTAX_ERROR("expected ':'") ) }
+ | "match" subject_expr NEWLINE { CHECK_VERSION(void*, 10, "Pattern matching is", RAISE_SYNTAX_ERROR("expected ':'") ) }
| a="match" subject=subject_expr ':' NEWLINE !INDENT {
RAISE_INDENTATION_ERROR("expected an indented block after 'match' statement on line %d", a->lineno) }
invalid_case_block:
- | "case" patterns guard? !':' { RAISE_SYNTAX_ERROR("expected ':'") }
+ | "case" patterns guard? NEWLINE { RAISE_SYNTAX_ERROR("expected ':'") }
| a="case" patterns guard? ':' NEWLINE !INDENT {
RAISE_INDENTATION_ERROR("expected an indented block after 'case' statement on line %d", a->lineno) }
invalid_as_pattern:
| a='while' named_expression ':' NEWLINE !INDENT {
RAISE_INDENTATION_ERROR("expected an indented block after 'while' statement on line %d", a->lineno) }
invalid_for_stmt:
+ | [ASYNC] 'for' star_targets 'in' star_expressions NEWLINE { RAISE_SYNTAX_ERROR("expected ':'") }
| [ASYNC] a='for' star_targets 'in' star_expressions ':' NEWLINE !INDENT {
RAISE_INDENTATION_ERROR("expected an indented block after 'for' statement on line %d", a->lineno) }
invalid_def_raw:
| [ASYNC] a='def' NAME '(' [params] ')' ['->' expression] ':' NEWLINE !INDENT {
RAISE_INDENTATION_ERROR("expected an indented block after function definition on line %d", a->lineno) }
invalid_class_def_raw:
+ | 'class' NAME ['(' [arguments] ')'] NEWLINE { RAISE_SYNTAX_ERROR("expected ':'") }
| a='class' NAME ['('[arguments] ')'] ':' NEWLINE !INDENT {
RAISE_INDENTATION_ERROR("expected an indented block after class definition on line %d", a->lineno) }
| a=expression !(':') {
RAISE_ERROR_KNOWN_LOCATION(p, PyExc_SyntaxError, a->lineno, a->end_col_offset - 1, a->end_lineno, -1, "':' expected after dictionary key") }
| expression ':' a='*' bitwise_or { RAISE_SYNTAX_ERROR_STARTING_FROM(a, "cannot use a starred expression in a dictionary value") }
- | expression a=':' {RAISE_SYNTAX_ERROR_KNOWN_LOCATION(a, "expression expected after dictionary key and ':'") }
+ | expression a=':' &('}'|',') {RAISE_SYNTAX_ERROR_KNOWN_LOCATION(a, "expression expected after dictionary key and ':'") }
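The effect of these error rules can be sketched from Python; the precise message differs between releases, so only the exception type is checked here:

```python
samples = (
    "for x in range(3)\n    pass\n",    # for-statement missing ':'
    "class C\n    pass\n",              # class definition missing ':'
    "with open('f') as f\n    pass\n",  # with-statement missing ':'
)
for src in samples:
    try:
        compile(src, "<example>", "exec")
        outcome = "compiled"
    except SyntaxError as exc:
        outcome = f"SyntaxError: {exc.msg}"
    print(outcome)
```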
collection of frozen modules: */
PyAPI_DATA(const struct _frozen *) PyImport_FrozenModules;
+
+PyAPI_DATA(PyObject *) _PyImport_GetModuleAttr(PyObject *, PyObject *);
+PyAPI_DATA(PyObject *) _PyImport_GetModuleAttrString(const char *, const char *);
/*--start constants--*/
#define PY_MAJOR_VERSION 3
#define PY_MINOR_VERSION 10
-#define PY_MICRO_VERSION 5
+#define PY_MICRO_VERSION 6
#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL
#define PY_RELEASE_SERIAL 0
/* Version as a string */
-#define PY_VERSION "3.10.5"
+#define PY_VERSION "3.10.6"
/*--end constants--*/
/* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2.
raise exceptions.InvalidStateError('Result is not ready.')
self.__log_traceback = False
if self._exception is not None:
- raise self._exception
+ raise self._exception.with_traceback(self._exception_tb)
return self._result
def exception(self):
raise TypeError("StopIteration interacts badly with generators "
"and cannot be raised into a Future")
self._exception = exception
+ self._exception_tb = exception.__traceback__
self._state = _FINISHED
self.__schedule_callbacks()
self.__log_traceback = True
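A sketch of the behavior being fixed: the traceback captured at ``set_exception()`` time survives to the later ``result()`` call.

```python
import asyncio

async def demo():
    fut = asyncio.get_running_loop().create_future()
    try:
        raise ValueError("boom")
    except ValueError as exc:
        fut.set_exception(exc)  # traceback is snapshotted here
    try:
        fut.result()            # re-raises with the saved traceback
    except ValueError as exc:
        return exc.__traceback__ is not None

print(asyncio.run(demo()))  # True
```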
def __del__(self, _warn=warnings.warn):
if self._sock is not None:
_warn(f"unclosed transport {self!r}", ResourceWarning, source=self)
- self.close()
+ self._sock.close()
def _fatal_error(self, exc, message='Fatal error on pipe transport'):
try:
done.update(waiter.finished_futures)
return DoneAndNotDoneFutures(done, fs - done)
+
+def _result_or_cancel(fut, timeout=None):
+ try:
+ try:
+ return fut.result(timeout)
+ finally:
+ fut.cancel()
+ finally:
+ # Break a reference cycle with the exception in self._exception
+ del fut
+
+
class Future(object):
"""Represents the result of an asynchronous computation."""
while fs:
# Careful not to keep a reference to the popped future
if timeout is None:
- yield fs.pop().result()
+ yield _result_or_cancel(fs.pop())
else:
- yield fs.pop().result(end_time - time.monotonic())
+ yield _result_or_cancel(fs.pop(), end_time - time.monotonic())
finally:
for future in fs:
future.cancel()
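The user-visible behavior of :meth:`Executor.map` is unchanged by this refactor; a sketch of the code path it cleans up:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=2) as ex:
    # The generator behind map() now routes each result through
    # _result_or_cancel(), cancelling and dropping each future promptly
    # so no reference cycle keeps a stored exception alive.
    results = list(ex.map(square, range(4), timeout=30))
print(results)  # [0, 1, 4, 9]
```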
def _reconstruct(x, memo, func, args,
state=None, listiter=None, dictiter=None,
- deepcopy=deepcopy):
+ *, deepcopy=deepcopy):
deep = memo is not None
if deep and args:
args = (deepcopy(arg, memo) for arg in args)
return None
data = data[:5]
[dd, mm, yy, tm, tz] = data
+ if not (dd and mm and yy):
+ return None
mm = mm.lower()
if mm not in _monthnames:
dd, mm = mm, dd.lower()
yy, tm = tm, yy
if yy[-1] == ',':
yy = yy[:-1]
+ if not yy:
+ return None
if not yy[0].isdigit():
yy, tz = tz, yy
if tm[-1] == ',':
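These guards make :func:`email.utils.parsedate` return ``None`` for malformed input instead of raising; a sketch:

```python
from email.utils import parsedate

# Well-formed RFC 2822 date: a 9-tuple starting (year, month, day, ...).
ok = parsedate("Mon, 16 Nov 2009 13:32:02 +0100")
print(ok[:3])                # (2009, 11, 16)

# Malformed input is rejected cleanly rather than crashing the parser.
print(parsedate("invalid"))  # None
```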
__all__ = ["version", "bootstrap"]
_PACKAGE_NAMES = ('setuptools', 'pip')
-_SETUPTOOLS_VERSION = "58.1.0"
-_PIP_VERSION = "22.0.4"
+_SETUPTOOLS_VERSION = "63.2.0"
+_PIP_VERSION = "22.2.1"
_PROJECTS = [
("setuptools", _SETUPTOOLS_VERSION, "py3"),
("pip", _PIP_VERSION, "py3"),
sys.argv[1:] = {args}
runpy.run_module("pip", run_name="__main__", alter_sys=True)
"""
- return subprocess.run([sys.executable, '-W', 'ignore::DeprecationWarning',
- "-c", code], check=True).returncode
+
+ cmd = [
+ sys.executable,
+ '-W',
+ 'ignore::DeprecationWarning',
+ '-c',
+ code,
+ ]
+ if sys.flags.isolated:
+ # run code in isolated mode if currently running isolated
+ cmd.insert(1, '-I')
+ return subprocess.run(cmd, check=True).returncode
def version():
pass
# The next few lines may raise OSError
os.rename(self._filename, self._backupfilename)
- self._file = open(self._backupfilename, self._mode, encoding=encoding)
+ self._file = open(self._backupfilename, self._mode,
+ encoding=encoding, errors=self._errors)
try:
perm = os.fstat(self._file.fileno()).st_mode
except OSError:
- self._output = open(self._filename, self._write_mode, encoding=encoding)
+ self._output = open(self._filename, self._write_mode,
+ encoding=encoding, errors=self._errors)
else:
mode = os.O_CREAT | os.O_WRONLY | os.O_TRUNC
if hasattr(os, 'O_BINARY'):
mode |= os.O_BINARY
fd = os.open(self._filename, mode, perm)
- self._output = os.fdopen(fd, self._write_mode, encoding=encoding)
+ self._output = os.fdopen(fd, self._write_mode,
+ encoding=encoding, errors=self._errors)
try:
os.chmod(self._filename, perm)
except OSError:
(self.host,self.port), self.timeout, self.source_address)
# Might fail in OSs that don't implement TCP_NODELAY
try:
- self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
+ self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
except OSError as e:
if e.errno != errno.ENOPROTOOPT:
raise
return False
self.command, self.path = command, path
+ # gh-87389: The purpose of replacing '//' with '/' is to protect
+ # against open redirect attacks possibly triggered if the path starts
+ # with '//' because http clients treat //path as an absolute URI
+ # without scheme (similar to http://path) rather than a path.
+ if self.path.startswith('//'):
+ self.path = '/' + self.path.lstrip('/') # Reduce to a single /
+
# Examine the headers and look for a Connection directive.
try:
self.headers = http.client.parse_headers(self.rfile,
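The normalization above can be exercised standalone (``collapse_leading_slashes`` is an illustrative helper, not part of :mod:`http.server`):

```python
def collapse_leading_slashes(path):
    # Mirror of the gh-87389 fix: reduce any run of leading '/' to one,
    # so '//host/...' cannot be read as a scheme-relative URI by clients.
    if path.startswith('//'):
        path = '/' + path.lstrip('/')
    return path

print(collapse_leading_slashes('//evil.example/x'))  # /evil.example/x
print(collapse_leading_slashes('/index.html'))       # /index.html
```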
What's New in IDLE 3.10.z
-after 3.10.0 until 3.10.?
-Released on 2022-05-16
+after 3.10.0 until 3.10.10?
+Released 2023-04-03?
=========================
+gh-95511: Fix the Shell context menu copy-with-prompts bug of copying
+an extra line when one selects whole lines.
+
+gh-95471: Tweak Edit menu. Move 'Select All' above 'Cut' as it is used
+with 'Cut' and 'Copy' but not 'Paste'. Add a separator between 'Replace'
+and 'Go to Line' to help IDLE issue triagers.
+
+gh-95411: Enable using IDLE's module browser with .pyw files.
+
+gh-89610: Add .pyi as a recognized extension for IDLE on macOS. This allows
+opening stub files by double clicking on them in the Finder.
+
bpo-28950: Apply IDLE syntax highlighting to `.pyi` files. Add util.py
for common components. Patch by Alex Waygood and Terry Jan Reedy.
Released on 2021-10-04
=========================
+bpo-45193: Make completion boxes appear on Ubuntu again.
bpo-40128: Mostly fix completions on macOS when not using tcl/tk 8.6.11
(as with 3.9).
(or recheck on window popup)
- add popup menu with more options (e.g. doc strings, base classes, imports)
- add base classes to class browser tree
-- finish removing limitation to x.py files (ModuleBrowserTreeItem)
"""
import os
from idlelib.config import idleConf
from idlelib import pyshell
from idlelib.tree import TreeNode, TreeItem, ScrolledCanvas
+from idlelib.util import py_extensions
from idlelib.window import ListedToplevel
file_open = None # Method...Item and Class...Item use this.
# Normally pyshell.flist.open, but there is no pyshell.flist for htest.
+# The browser depends on pyclbr and importlib which do not support .pyi files.
+browseable_extension_blocklist = ('.pyi',)
+
+
+def is_browseable_extension(path):
+ _, ext = os.path.splitext(path)
+ ext = os.path.normcase(ext)
+ return ext in py_extensions and ext not in browseable_extension_blocklist
+
def transform_children(child_dict, modname=None):
"""Transform a child dictionary to an ordered sequence of objects.
Instance variables:
name: Module name.
- file: Full path and module with .py extension. Used in
- creating ModuleBrowserTreeItem as the rootnode for
+ file: Full path and module with supported extension.
+ Used in creating ModuleBrowserTreeItem as the rootnode for
the tree and subsequently in the children.
"""
self.master = master
def OnDoubleClick(self):
"Open a module in an editor window when double clicked."
- if os.path.normcase(self.file[-3:]) != ".py":
+ if not is_browseable_extension(self.file):
return
if not os.path.exists(self.file):
return
file_open(self.file)
def IsExpandable(self):
- "Return True if Python (.py) file."
- return os.path.normcase(self.file[-3:]) == ".py"
+ "Return True if Python file."
+ return is_browseable_extension(self.file)
def listchildren(self):
"Return sequenced classes and functions in the module."
- dir, base = os.path.split(self.file)
- name, ext = os.path.splitext(base)
- if os.path.normcase(ext) != ".py":
+ if not is_browseable_extension(self.file):
return []
+ dir, base = os.path.split(self.file)
+ name, _ = os.path.splitext(base)
try:
tree = pyclbr.readmodule_ex(name, [dir] + sys.path)
except ImportError:
"""
import re
-from tkinter import (Toplevel, Listbox, Scale, Canvas,
+from tkinter import (Toplevel, Listbox, Canvas,
StringVar, BooleanVar, IntVar, TRUE, FALSE,
TOP, BOTTOM, RIGHT, LEFT, SOLID, GROOVE,
NONE, BOTH, X, Y, W, E, EW, NS, NSEW, NW,
want = ((have - 1) // self.indentwidth) * self.indentwidth
# Debug prompt is multilined....
ncharsdeleted = 0
- while 1:
+ while True:
chars = chars[:-1]
ncharsdeleted = ncharsdeleted + 1
have = len(chars.expandtabs(tabwidth))
<!DOCTYPE html>
-<html>
+<html lang="en">
<head>
<meta charset="utf-8" />
- <meta name="viewport" content="width=device-width, initial-scale=1.0" />
- <title>IDLE — Python 3.11.0a4 documentation</title>
- <link rel="stylesheet" href="../_static/pydoctheme.css" type="text/css" />
- <link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0" /><meta name="generator" content="Docutils 0.18.1: http://docutils.sourceforge.net/" />
- <script id="documentation_options" data-url_root="../" src="../_static/documentation_options.js"></script>
+ <title>IDLE — Python 3.12.0a0 documentation</title><meta name="viewport" content="width=device-width, initial-scale=1.0">
+
+ <link rel="stylesheet" type="text/css" href="../_static/pygments.css" />
+ <link rel="stylesheet" type="text/css" href="../_static/pydoctheme.css?2022.1" />
+
+ <script data-url_root="../" id="documentation_options" src="../_static/documentation_options.js"></script>
<script src="../_static/jquery.js"></script>
<script src="../_static/underscore.js"></script>
+ <script src="../_static/_sphinx_javascript_frameworks_compat.js"></script>
<script src="../_static/doctools.js"></script>
- <script src="../_static/language_data.js"></script>
<script src="../_static/sidebar.js"></script>
<link rel="search" type="application/opensearchdescription+xml"
- title="Search within Python 3.11.0a4 documentation"
+ title="Search within Python 3.12.0a0 documentation"
href="../_static/opensearch.xml"/>
<link rel="author" title="About these documents" href="../about.html" />
<link rel="index" title="Index" href="../genindex.html" />
}
}
</style>
+<link rel="shortcut icon" type="image/png" href="../_static/py.svg" />
+ <script type="text/javascript" src="../_static/copybutton.js"></script>
+ <script type="text/javascript" src="../_static/menu.js"></script>
- <link rel="shortcut icon" type="image/png" href="../_static/py.png" />
-
- <script type="text/javascript" src="../_static/copybutton.js"></script>
-
-
+ </head>
+<body>
+<div class="mobile-nav">
+ <input type="checkbox" id="menuToggler" class="toggler__input" aria-controls="navigation"
+ aria-pressed="false" aria-expanded="false" role="button" aria-label="Menu" />
+ <label for="menuToggler" class="toggler__label">
+ <span></span>
+ </label>
+ <nav class="nav-content" role="navigation">
+ <a href="https://www.python.org/" class="nav-logo">
+ <img src="../_static/py.svg" alt="Logo"/>
+ </a>
+ <div class="version_switcher_placeholder"></div>
+ <form role="search" class="search" action="../search.html" method="get">
+ <svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" class="search-icon">
+ <path fill-rule="nonzero"
+ d="M15.5 14h-.79l-.28-.27a6.5 6.5 0 001.48-5.34c-.47-2.78-2.79-5-5.59-5.34a6.505 6.505 0 00-7.27 7.27c.34 2.8 2.56 5.12 5.34 5.59a6.5 6.5 0 005.34-1.48l.27.28v.79l4.25 4.25c.41.41 1.08.41 1.49 0 .41-.41.41-1.08 0-1.49L15.5 14zm-6 0C7.01 14 5 11.99 5 9.5S7.01 5 9.5 5 14 7.01 14 9.5 11.99 14 9.5 14z" fill="#444"></path>
+ </svg>
+ <input type="text" name="q" aria-label="Quick search"/>
+ <input type="submit" value="Go"/>
+ </form>
+ </nav>
+ <div class="menu-wrapper">
+ <nav class="menu" role="navigation" aria-label="main navigation">
+ <div class="language_switcher_placeholder"></div>
+ <div>
+ <h3><a href="../contents.html">Table of Contents</a></h3>
+ <ul>
+<li><a class="reference internal" href="#">IDLE</a><ul>
+<li><a class="reference internal" href="#menus">Menus</a><ul>
+<li><a class="reference internal" href="#file-menu-shell-and-editor">File menu (Shell and Editor)</a></li>
+<li><a class="reference internal" href="#edit-menu-shell-and-editor">Edit menu (Shell and Editor)</a></li>
+<li><a class="reference internal" href="#format-menu-editor-window-only">Format menu (Editor window only)</a></li>
+<li><a class="reference internal" href="#run-menu-editor-window-only">Run menu (Editor window only)</a></li>
+<li><a class="reference internal" href="#shell-menu-shell-window-only">Shell menu (Shell window only)</a></li>
+<li><a class="reference internal" href="#debug-menu-shell-window-only">Debug menu (Shell window only)</a></li>
+<li><a class="reference internal" href="#options-menu-shell-and-editor">Options menu (Shell and Editor)</a></li>
+<li><a class="reference internal" href="#window-menu-shell-and-editor">Window menu (Shell and Editor)</a></li>
+<li><a class="reference internal" href="#help-menu-shell-and-editor">Help menu (Shell and Editor)</a></li>
+<li><a class="reference internal" href="#context-menus">Context menus</a></li>
+</ul>
+</li>
+<li><a class="reference internal" href="#editing-and-navigation">Editing and Navigation</a><ul>
+<li><a class="reference internal" href="#editor-windows">Editor windows</a></li>
+<li><a class="reference internal" href="#key-bindings">Key bindings</a></li>
+<li><a class="reference internal" href="#automatic-indentation">Automatic indentation</a></li>
+<li><a class="reference internal" href="#completions">Completions</a></li>
+<li><a class="reference internal" href="#calltips">Calltips</a></li>
+<li><a class="reference internal" href="#code-context">Code Context</a></li>
+<li><a class="reference internal" href="#shell-window">Shell window</a></li>
+<li><a class="reference internal" href="#text-colors">Text colors</a></li>
+</ul>
+</li>
+<li><a class="reference internal" href="#startup-and-code-execution">Startup and Code Execution</a><ul>
+<li><a class="reference internal" href="#command-line-usage">Command line usage</a></li>
+<li><a class="reference internal" href="#startup-failure">Startup failure</a></li>
+<li><a class="reference internal" href="#running-user-code">Running user code</a></li>
+<li><a class="reference internal" href="#user-output-in-shell">User output in Shell</a></li>
+<li><a class="reference internal" href="#developing-tkinter-applications">Developing tkinter applications</a></li>
+<li><a class="reference internal" href="#running-without-a-subprocess">Running without a subprocess</a></li>
+</ul>
+</li>
+<li><a class="reference internal" href="#help-and-preferences">Help and Preferences</a><ul>
+<li><a class="reference internal" href="#help-sources">Help sources</a></li>
+<li><a class="reference internal" href="#setting-preferences">Setting preferences</a></li>
+<li><a class="reference internal" href="#idle-on-macos">IDLE on macOS</a></li>
+<li><a class="reference internal" href="#extensions">Extensions</a></li>
+</ul>
+</li>
+</ul>
+</li>
+</ul>
+ </div>
+ <div>
+ <h4>Previous topic</h4>
+ <p class="topless"><a href="tkinter.tix.html"
+ title="previous chapter"><code class="xref py py-mod docutils literal notranslate"><span class="pre">tkinter.tix</span></code> — Extension widgets for Tk</a></p>
+ </div>
+ <div>
+ <h4>Next topic</h4>
+ <p class="topless"><a href="development.html"
+ title="next chapter">Development Tools</a></p>
+ </div>
+ <div role="note" aria-label="source link">
+ <h3>This Page</h3>
+ <ul class="this-page-menu">
+ <li><a href="../bugs.html">Report a Bug</a></li>
+ <li>
+ <a href="https://github.com/python/cpython/blob/main/Doc/library/idle.rst"
+ rel="nofollow">Show Source
+ </a>
+ </li>
+ </ul>
+ </div>
+ </nav>
+ </div>
+</div>
- </head><body>
<div class="related" role="navigation" aria-label="related navigation">
<h3>Navigation</h3>
<a href="tkinter.tix.html" title="tkinter.tix — Extension widgets for Tk"
accesskey="P">previous</a> |</li>
- <li><img src="../_static/py.png" alt=""
- style="vertical-align: middle; margin-top: -1px"/></li>
- <li><a href="https://www.python.org/">Python</a> »</li>
-
+ <li><img src="../_static/py.svg" alt="python logo" style="vertical-align: middle; margin-top: -1px"/></li>
+ <li><a href="https://www.python.org/">Python</a> »</li>
+ <li class="switchers">
+ <div class="language_switcher_placeholder"></div>
+ <div class="version_switcher_placeholder"></div>
+ </li>
+ <li>
+ </li>
<li id="cpython-language-and-version">
- <a href="../index.html">3.11.0a4 Documentation</a> »
+ <a href="../index.html">3.12.0a0 Documentation</a> »
</li>
<li class="nav-item nav-item-1"><a href="index.html" >The Python Standard Library</a> »</li>
<li class="nav-item nav-item-2"><a href="tk.html" accesskey="U">Graphical User Interfaces with Tk</a> »</li>
<li class="nav-item nav-item-this"><a href="">IDLE</a></li>
- <li class="right">
+ <li class="right">
- <div class="inline-search" style="display: none" role="search">
+ <div class="inline-search" role="search">
<form class="inline-search" action="../search.html" method="get">
- <input placeholder="Quick search" type="text" name="q" />
+ <input placeholder="Quick search" aria-label="Quick search" type="text" name="q" />
<input type="submit" value="Go" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
- <script type="text/javascript">$('.inline-search').show(0);</script>
- |
- </li>
+ |
+ </li>
</ul>
</div>
<div class="bodywrapper">
<div class="body" role="main">
- <div class="section" id="idle">
-<span id="id1"></span><h1>IDLE<a class="headerlink" href="#idle" title="Permalink to this headline">¶</a></h1>
+ <section id="idle">
+<span id="id1"></span><h1>IDLE<a class="headerlink" href="#idle" title="Permalink to this heading">¶</a></h1>
<p><strong>Source code:</strong> <a class="reference external" href="https://github.com/python/cpython/tree/main/Lib/idlelib/">Lib/idlelib/</a></p>
<hr class="docutils" id="index-0" />
<p>IDLE is Python’s Integrated Development and Learning Environment.</p>
of global and local namespaces</p></li>
<li><p>configuration, browsers, and other dialogs</p></li>
</ul>
-<div class="section" id="menus">
-<h2>Menus<a class="headerlink" href="#menus" title="Permalink to this headline">¶</a></h2>
+<section id="menus">
+<h2>Menus<a class="headerlink" href="#menus" title="Permalink to this heading">¶</a></h2>
<p>IDLE has two main window types, the Shell window and the Editor window. It is
possible to have multiple editor windows simultaneously. On Windows and
Linux, each has its own top menu. Each menu documented below indicates
<p>On macOS, there is one application menu. It dynamically changes according
to the window currently selected. It has an IDLE menu, and some entries
described below are moved around to conform to Apple guidelines.</p>
-<div class="section" id="file-menu-shell-and-editor">
-<h3>File menu (Shell and Editor)<a class="headerlink" href="#file-menu-shell-and-editor" title="Permalink to this headline">¶</a></h3>
+<section id="file-menu-shell-and-editor">
+<h3>File menu (Shell and Editor)<a class="headerlink" href="#file-menu-shell-and-editor" title="Permalink to this heading">¶</a></h3>
<dl class="simple">
<dt>New File</dt><dd><p>Create a new file editing window.</p>
</dd>
<dt>Exit IDLE</dt><dd><p>Close all windows and quit IDLE (ask to save unsaved edit windows).</p>
</dd>
</dl>
-</div>
-<div class="section" id="edit-menu-shell-and-editor">
-<h3>Edit menu (Shell and Editor)<a class="headerlink" href="#edit-menu-shell-and-editor" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="edit-menu-shell-and-editor">
+<h3>Edit menu (Shell and Editor)<a class="headerlink" href="#edit-menu-shell-and-editor" title="Permalink to this heading">¶</a></h3>
<dl class="simple">
<dt>Undo</dt><dd><p>Undo the last change to the current window. A maximum of 1000 changes may
be undone.</p>
<dt>Show surrounding parens</dt><dd><p>Highlight the surrounding parenthesis.</p>
</dd>
</dl>
-</div>
-<div class="section" id="format-menu-editor-window-only">
-<span id="format-menu"></span><h3>Format menu (Editor window only)<a class="headerlink" href="#format-menu-editor-window-only" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="format-menu-editor-window-only">
+<span id="format-menu"></span><h3>Format menu (Editor window only)<a class="headerlink" href="#format-menu-editor-window-only" title="Permalink to this heading">¶</a></h3>
<dl class="simple">
<dt>Indent Region</dt><dd><p>Shift selected lines right by the indent width (default 4 spaces).</p>
</dd>
remove extra newlines at the end of the file.</p>
</dd>
</dl>
-</div>
-<div class="section" id="run-menu-editor-window-only">
-<span id="index-2"></span><h3>Run menu (Editor window only)<a class="headerlink" href="#run-menu-editor-window-only" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="run-menu-editor-window-only">
+<span id="index-2"></span><h3>Run menu (Editor window only)<a class="headerlink" href="#run-menu-editor-window-only" title="Permalink to this heading">¶</a></h3>
<dl class="simple" id="run-module">
<dt>Run Module</dt><dd><p>Do <a class="reference internal" href="#check-module"><span class="std std-ref">Check Module</span></a>. If no error, restart the shell to clean the
environment, then execute the module. Output is displayed in the Shell
<dt>Python Shell</dt><dd><p>Open or wake up the Python Shell window.</p>
</dd>
</dl>
-</div>
-<div class="section" id="shell-menu-shell-window-only">
-<h3>Shell menu (Shell window only)<a class="headerlink" href="#shell-menu-shell-window-only" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="shell-menu-shell-window-only">
+<h3>Shell menu (Shell window only)<a class="headerlink" href="#shell-menu-shell-window-only" title="Permalink to this heading">¶</a></h3>
<dl class="simple">
<dt>View Last Restart</dt><dd><p>Scroll the shell window to the last Shell restart.</p>
</dd>
<dt>Interrupt Execution</dt><dd><p>Stop a running program.</p>
</dd>
</dl>
-</div>
-<div class="section" id="debug-menu-shell-window-only">
-<h3>Debug menu (Shell window only)<a class="headerlink" href="#debug-menu-shell-window-only" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="debug-menu-shell-window-only">
+<h3>Debug menu (Shell window only)<a class="headerlink" href="#debug-menu-shell-window-only" title="Permalink to this heading">¶</a></h3>
<dl class="simple">
<dt>Go to File/Line</dt><dd><p>Look on the current line, with the cursor, and the line above for a filename
and line number. If found, open the file if not already open, and show the
<dt>Auto-open Stack Viewer</dt><dd><p>Toggle automatically opening the stack viewer on an unhandled exception.</p>
</dd>
</dl>
-</div>
-<div class="section" id="options-menu-shell-and-editor">
-<h3>Options menu (Shell and Editor)<a class="headerlink" href="#options-menu-shell-and-editor" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="options-menu-shell-and-editor">
+<h3>Options menu (Shell and Editor)<a class="headerlink" href="#options-menu-shell-and-editor" title="Permalink to this heading">¶</a></h3>
<dl class="simple">
<dt>Configure IDLE</dt><dd><p>Open a configuration dialog and change preferences for the following:
fonts, indentation, keybindings, text color themes, startup windows and
no effect when a window is maximized.</p>
</dd>
</dl>
-</div>
-<div class="section" id="window-menu-shell-and-editor">
-<h3>Window menu (Shell and Editor)<a class="headerlink" href="#window-menu-shell-and-editor" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="window-menu-shell-and-editor">
+<h3>Window menu (Shell and Editor)<a class="headerlink" href="#window-menu-shell-and-editor" title="Permalink to this heading">¶</a></h3>
<p>Lists the names of all open windows; select one to bring it to the foreground
(deiconifying it if necessary).</p>
-</div>
-<div class="section" id="help-menu-shell-and-editor">
-<h3>Help menu (Shell and Editor)<a class="headerlink" href="#help-menu-shell-and-editor" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="help-menu-shell-and-editor">
+<h3>Help menu (Shell and Editor)<a class="headerlink" href="#help-menu-shell-and-editor" title="Permalink to this heading">¶</a></h3>
<dl class="simple">
<dt>About IDLE</dt><dd><p>Display version, copyright, license, credits, and more.</p>
</dd>
<p>Additional help sources may be added here with the Configure IDLE dialog under
the General tab. See the <a class="reference internal" href="#help-sources"><span class="std std-ref">Help sources</span></a> subsection below
for more on Help menu choices.</p>
-</div>
-<div class="section" id="context-menus">
-<span id="index-4"></span><h3>Context Menus<a class="headerlink" href="#context-menus" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="context-menus">
+<span id="index-4"></span><h3>Context menus<a class="headerlink" href="#context-menus" title="Permalink to this heading">¶</a></h3>
<p>Open a context menu by right-clicking in a window (Control-click on macOS).
Context menus have the standard clipboard functions also on the Edit menu.</p>
<dl class="simple">
the code above and the prompt below down to a ‘Squeezed text’ label.</p>
</dd>
</dl>
-</div>
-</div>
-<div class="section" id="editing-and-navigation">
-<span id="id2"></span><h2>Editing and navigation<a class="headerlink" href="#editing-and-navigation" title="Permalink to this headline">¶</a></h2>
-<div class="section" id="editor-windows">
-<h3>Editor windows<a class="headerlink" href="#editor-windows" title="Permalink to this headline">¶</a></h3>
+</section>
+</section>
+<section id="editing-and-navigation">
+<span id="id2"></span><h2>Editing and Navigation<a class="headerlink" href="#editing-and-navigation" title="Permalink to this heading">¶</a></h2>
+<section id="editor-windows">
+<h3>Editor windows<a class="headerlink" href="#editor-windows" title="Permalink to this heading">¶</a></h3>
<p>IDLE may open editor windows when it starts, depending on settings
and how you start IDLE. Thereafter, use the File menu. There can be only
one open editor window for a given file.</p>
column numbers with 0.</p>
<p>IDLE assumes that files with a known .py* extension contain Python code
and that other files do not. Run Python code with the Run menu.</p>
-</div>
-<div class="section" id="key-bindings">
-<h3>Key bindings<a class="headerlink" href="#key-bindings" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="key-bindings">
+<h3>Key bindings<a class="headerlink" href="#key-bindings" title="Permalink to this heading">¶</a></h3>
<p>In this section, ‘C’ refers to the <kbd class="kbd docutils literal notranslate">Control</kbd> key on Windows and Unix and
the <kbd class="kbd docutils literal notranslate">Command</kbd> key on macOS.</p>
<ul>
<li><p><kbd class="kbd docutils literal notranslate">Backspace</kbd> deletes to the left; <kbd class="kbd docutils literal notranslate">Del</kbd> deletes to the right</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">Backspace</kbd></kbd> delete word left; <kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">Del</kbd></kbd> delete word to the right</p></li>
-<li><p>Arrow keys and <kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">Page</kbd> <kbd class="kbd docutils literal notranslate">Up</kbd></kbd>/<kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">Page</kbd> <kbd class="kbd docutils literal notranslate">Down</kbd></kbd> to move around</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">LeftArrow</kbd></kbd> and <kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">RightArrow</kbd></kbd> moves by words</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">Backspace</kbd></kbd> delete word left; <kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">Del</kbd></kbd> delete word to the right</p></li>
+<li><p>Arrow keys and <kbd class="kbd docutils literal notranslate">Page Up</kbd>/<kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">Page</kbd> <kbd class="kbd docutils literal notranslate">Down</kbd></kbd> to move around</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">LeftArrow</kbd></kbd> and <kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">RightArrow</kbd></kbd> moves by words</p></li>
<li><p><kbd class="kbd docutils literal notranslate">Home</kbd>/<kbd class="kbd docutils literal notranslate">End</kbd> go to begin/end of line</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">Home</kbd></kbd>/<kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">End</kbd></kbd> go to begin/end of file</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">Home</kbd></kbd>/<kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">End</kbd></kbd> go to begin/end of file</p></li>
<li><p>Some useful Emacs bindings are inherited from Tcl/Tk:</p>
<blockquote>
<div><ul class="simple">
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">a</kbd></kbd> beginning of line</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">e</kbd></kbd> end of line</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">k</kbd></kbd> kill line (but doesn’t put it in clipboard)</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">l</kbd></kbd> center window around the insertion point</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">b</kbd></kbd> go backward one character without deleting (usually you can
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">a</kbd></kbd> beginning of line</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">e</kbd></kbd> end of line</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">k</kbd></kbd> kill line (but doesn’t put it in clipboard)</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">l</kbd></kbd> center window around the insertion point</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">b</kbd></kbd> go backward one character without deleting (usually you can
also use the cursor key for this)</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">f</kbd></kbd> go forward one character without deleting (usually you can
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">f</kbd></kbd> go forward one character without deleting (usually you can
also use the cursor key for this)</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">p</kbd></kbd> go up one line (usually you can also use the cursor key for
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">p</kbd></kbd> go up one line (usually you can also use the cursor key for
this)</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">d</kbd></kbd> delete next character</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">d</kbd></kbd> delete next character</p></li>
</ul>
</div></blockquote>
</li>
</ul>
-<p>Standard keybindings (like <kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">c</kbd></kbd> to copy and <kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">v</kbd></kbd> to paste)
+<p>Standard keybindings (like <kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">c</kbd></kbd> to copy and <kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">v</kbd></kbd> to paste)
may work. Keybindings are selected in the Configure IDLE dialog.</p>
-</div>
-<div class="section" id="automatic-indentation">
-<h3>Automatic indentation<a class="headerlink" href="#automatic-indentation" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="automatic-indentation">
+<h3>Automatic indentation<a class="headerlink" href="#automatic-indentation" title="Permalink to this heading">¶</a></h3>
<p>After a block-opening statement, the next line is indented by 4 spaces (in the
Python Shell window by one tab). After certain keywords (break, return etc.)
the next line is dedented. In leading indentation, <kbd class="kbd docutils literal notranslate">Backspace</kbd> deletes up
are restricted to four spaces due to Tcl/Tk limitations.</p>
<p>See also the indent/dedent region commands on the
<a class="reference internal" href="#format-menu"><span class="std std-ref">Format menu</span></a>.</p>
-</div>
-<div class="section" id="completions">
-<span id="id3"></span><h3>Completions<a class="headerlink" href="#completions" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="completions">
+<span id="id3"></span><h3>Completions<a class="headerlink" href="#completions" title="Permalink to this heading">¶</a></h3>
<p>Completions are supplied, when requested and available, for module
names, attributes of classes or functions, or filenames. Each request
method displays a completion box with existing names. (See tab
directory name and a separator.</p>
<p>Instead of waiting, or after a box is closed, open a completion box
immediately with Show Completions on the Edit menu. The default hot
-key is <kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">space</kbd></kbd>. If one types a prefix for the desired name
+key is <kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">space</kbd></kbd>. If one types a prefix for the desired name
before opening the box, the first match or near miss is made visible.
The result is the same as if one enters a prefix
after the box is displayed. Show Completions after a quote completes
<p>Completion boxes initially exclude names beginning with ‘_’ or, for
modules, not included in ‘__all__’. The hidden names can be accessed
by typing ‘_’ after ‘.’, either before or after the box is opened.</p>
-</div>
-<div class="section" id="calltips">
-<span id="id4"></span><h3>Calltips<a class="headerlink" href="#calltips" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="calltips">
+<span id="id4"></span><h3>Calltips<a class="headerlink" href="#calltips" title="Permalink to this heading">¶</a></h3>
<p>A calltip is shown automatically when one types <kbd class="kbd docutils literal notranslate">(</kbd> after the name
of an <em>accessible</em> function. A function name expression may include
dots and subscripts. A calltip remains until it is clicked, the cursor
<p>In an editor, import statements have no effect until one runs the file.
One might want to run a file after writing import statements, after
adding function definitions, or after opening an existing file.</p>
-</div>
-<div class="section" id="code-context">
-<span id="id5"></span><h3>Code Context<a class="headerlink" href="#code-context" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="code-context">
+<span id="id5"></span><h3>Code Context<a class="headerlink" href="#code-context" title="Permalink to this heading">¶</a></h3>
<p>Within an editor window containing Python code, code context can be toggled
in order to show or hide a pane at the top of the window. When shown, this
pane freezes the opening lines for block code, such as those beginning with
line to the top of the editor.</p>
<p>The text and background colors for the context pane can be configured under
the Highlights tab in the Configure IDLE dialog.</p>
-</div>
-<div class="section" id="python-shell-window">
-<h3>Python Shell window<a class="headerlink" href="#python-shell-window" title="Permalink to this headline">¶</a></h3>
-<p>With IDLE’s Shell, one enters, edits, and recalls complete statements.
-Most consoles and terminals only work with a single physical line at a time.</p>
+</section>
+<section id="shell-window">
+<h3>Shell window<a class="headerlink" href="#shell-window" title="Permalink to this heading">¶</a></h3>
+<p>In IDLE’s Shell, enter, edit, and recall complete statements. (Most
+consoles and terminals only work with a single physical line at a time.)</p>
+<p>Submit a single-line statement for execution by hitting <kbd class="kbd docutils literal notranslate">Return</kbd>
+with the cursor anywhere on the line. If a line is extended with
+Backslash (<kbd class="kbd docutils literal notranslate">\</kbd>), the cursor must be on the last physical line.
+Submit a multi-line compound statement by entering a blank line after
+the statement.</p>
<p>When one pastes code into Shell, it is not compiled and possibly executed
-until one hits <kbd class="kbd docutils literal notranslate">Return</kbd>. One may edit pasted code first.
-If one pastes more that one statement into Shell, the result will be a
+until one hits <kbd class="kbd docutils literal notranslate">Return</kbd>, as specified above.
+One may edit pasted code first.
+If one pastes more than one statement into Shell, the result will be a
<a class="reference internal" href="exceptions.html#SyntaxError" title="SyntaxError"><code class="xref py py-exc docutils literal notranslate"><span class="pre">SyntaxError</span></code></a> when multiple statements are compiled as if they were one.</p>
+<p>Lines containing <code class="docutils literal notranslate"><span class="pre">RESTART</span></code> mean that the user execution process has been
+re-started. This occurs when the user execution process has crashed,
+when one requests a restart on the Shell menu, or when one runs code
+in an editor window.</p>
<p>The editing features described in previous subsections work when entering
code interactively. IDLE’s Shell window also responds to the following keys.</p>
<ul>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">c</kbd></kbd> interrupts executing command</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">d</kbd></kbd> sends end-of-file; closes window if typed at a <code class="docutils literal notranslate"><span class="pre">>>></span></code> prompt</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">Alt</kbd>-<kbd class="kbd docutils literal notranslate">/</kbd></kbd> (Expand word) is also useful to reduce typing</p>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">c</kbd></kbd> interrupts executing command</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">d</kbd></kbd> sends end-of-file; closes window if typed at a <code class="docutils literal notranslate"><span class="pre">>>></span></code> prompt</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">Alt</kbd>-<kbd class="kbd docutils literal notranslate">/</kbd></kbd> (Expand word) is also useful to reduce typing</p>
<p>Command history</p>
<ul class="simple">
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">Alt</kbd>-<kbd class="kbd docutils literal notranslate">p</kbd></kbd> retrieves previous command matching what you have typed. On
-macOS use <kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">p</kbd></kbd>.</p></li>
-<li><p><kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">Alt</kbd>-<kbd class="kbd docutils literal notranslate">n</kbd></kbd> retrieves next. On macOS use <kbd class="kbd docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">n</kbd></kbd>.</p></li>
-<li><p><kbd class="kbd docutils literal notranslate">Return</kbd> while on any previous command retrieves that command</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">Alt</kbd>-<kbd class="kbd docutils literal notranslate">p</kbd></kbd> retrieves previous command matching what you have typed. On
+macOS use <kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">p</kbd></kbd>.</p></li>
+<li><p><kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">Alt</kbd>-<kbd class="kbd docutils literal notranslate">n</kbd></kbd> retrieves next. On macOS use <kbd class="kbd compound docutils literal notranslate"><kbd class="kbd docutils literal notranslate">C</kbd>-<kbd class="kbd docutils literal notranslate">n</kbd></kbd>.</p></li>
+<li><p><kbd class="kbd docutils literal notranslate">Return</kbd> while the cursor is on any previous command
+retrieves that command</p></li>
</ul>
</li>
</ul>
-</div>
-<div class="section" id="text-colors">
-<h3>Text colors<a class="headerlink" href="#text-colors" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="text-colors">
+<h3>Text colors<a class="headerlink" href="#text-colors" title="Permalink to this heading">¶</a></h3>
<p>IDLE defaults to black on white text, but colors text with special meanings.
For the shell, these are shell output, shell error, user output, and
user error. For Python code, at the shell prompt or in an editor, these are
visible. To change the color scheme, use the Configure IDLE dialog
Highlighting tab. The marking of debugger breakpoint lines in the editor and
text in popups and dialogs is not user-configurable.</p>
-</div>
-</div>
-<div class="section" id="startup-and-code-execution">
-<h2>Startup and code execution<a class="headerlink" href="#startup-and-code-execution" title="Permalink to this headline">¶</a></h2>
+</section>
+</section>
+<section id="startup-and-code-execution">
+<h2>Startup and Code Execution<a class="headerlink" href="#startup-and-code-execution" title="Permalink to this heading">¶</a></h2>
<p>Upon startup with the <code class="docutils literal notranslate"><span class="pre">-s</span></code> option, IDLE will execute the file referenced by
the environment variables <span class="target" id="index-5"></span><code class="xref std std-envvar docutils literal notranslate"><span class="pre">IDLESTARTUP</span></code> or <span class="target" id="index-6"></span><a class="reference internal" href="../using/cmdline.html#envvar-PYTHONSTARTUP"><code class="xref std std-envvar docutils literal notranslate"><span class="pre">PYTHONSTARTUP</span></code></a>.
IDLE first checks for <code class="docutils literal notranslate"><span class="pre">IDLESTARTUP</span></code>; if <code class="docutils literal notranslate"><span class="pre">IDLESTARTUP</span></code> is present the file
looked for in the user’s home directory. Statements in this file will be
executed in the Tk namespace, so this file is not useful for importing
functions to be used from IDLE’s Python shell.</p>
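The precedence described above (with ``-s``, ``IDLESTARTUP`` is consulted first, then ``PYTHONSTARTUP``) can be sketched as follows. This is a hypothetical helper illustrating the lookup order, not IDLE's actual implementation:

```python
import os

def startup_file(environ=os.environ):
    """Return the startup script path per the documented precedence:
    IDLESTARTUP is checked first, then PYTHONSTARTUP; None if neither
    is set.  (Illustrative sketch only, not idlelib code.)"""
    for var in ("IDLESTARTUP", "PYTHONSTARTUP"):
        path = environ.get(var)
        if path:
            return path
    return None
```

Passing a mapping instead of ``os.environ`` makes the precedence easy to see in isolation.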
-<div class="section" id="command-line-usage">
-<h3>Command line usage<a class="headerlink" href="#command-line-usage" title="Permalink to this headline">¶</a></h3>
+<section id="command-line-usage">
+<h3>Command line usage<a class="headerlink" href="#command-line-usage" title="Permalink to this heading">¶</a></h3>
<div class="highlight-none notranslate"><div class="highlight"><pre><span></span>idle.py [-c command] [-d] [-e] [-h] [-i] [-r file] [-s] [-t title] [-] [arg] ...
-c command run command in the shell window
<li><p>Otherwise, arguments are files opened for editing and
<code class="docutils literal notranslate"><span class="pre">sys.argv</span></code> reflects the arguments passed to IDLE itself.</p></li>
</ul>
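The option summary above can also be driven programmatically. The helper below is hypothetical; it builds an argument vector for launching IDLE via ``python -m idlelib`` (an alternative to invoking ``idle.py`` directly), assuming the ``-c`` and ``-e`` options behave as listed:

```python
import sys

def idle_argv(*files, command=None, edit=False):
    """Build an argv for launching IDLE as a module (illustrative
    sketch, not part of idlelib)."""
    argv = [sys.executable, "-m", "idlelib"]
    if command is not None:
        argv += ["-c", command]   # -c: run command in the shell window
    if edit:
        argv.append("-e")         # -e: open the remaining args for editing
    argv += list(files)           # trailing args: files, reflected in sys.argv
    return argv
```

The resulting list could be passed to ``subprocess.run`` to start IDLE from another program.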
-</div>
-<div class="section" id="startup-failure">
-<h3>Startup failure<a class="headerlink" href="#startup-failure" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="startup-failure">
+<h3>Startup failure<a class="headerlink" href="#startup-failure" title="Permalink to this heading">¶</a></h3>
<p>IDLE uses a socket to communicate between the IDLE GUI process and the user
code execution process. A connection must be established whenever the Shell
starts or restarts. (The latter is indicated by a divider line that says
if one starts IDLE to edit a file with such a character or later
when entering such a character. If one cannot upgrade tcl/tk,
then re-configure IDLE to use a font that works better.</p>
-</div>
-<div class="section" id="running-user-code">
-<h3>Running user code<a class="headerlink" href="#running-user-code" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="running-user-code">
+<h3>Running user code<a class="headerlink" href="#running-user-code" title="Permalink to this heading">¶</a></h3>
<p>With rare exceptions, the result of executing Python code with IDLE is
intended to be the same as executing the same code by the default method,
directly with Python in a text-mode system console or terminal window.
created in the execution process, whether directly by user code or by
modules such as multiprocessing. If such subprocesses use <code class="docutils literal notranslate"><span class="pre">input</span></code> from
sys.stdin or <code class="docutils literal notranslate"><span class="pre">print</span></code> or <code class="docutils literal notranslate"><span class="pre">write</span></code> to sys.stdout or sys.stderr,
-IDLE should be started in a command line window. The secondary subprocess
+IDLE should be started in a command line window. (On Windows,
+use <code class="docutils literal notranslate"><span class="pre">python</span></code> or <code class="docutils literal notranslate"><span class="pre">py</span></code> rather than <code class="docutils literal notranslate"><span class="pre">pythonw</span></code> or <code class="docutils literal notranslate"><span class="pre">pyw</span></code>.)
+The secondary subprocess
will then be attached to that window for input and output.</p>
<p>If <code class="docutils literal notranslate"><span class="pre">sys</span></code> is reset by user code, such as with <code class="docutils literal notranslate"><span class="pre">importlib.reload(sys)</span></code>,
IDLE’s changes are lost and input from the keyboard and output to the screen
frames.</p>
<p>When user code raises SystemExit either directly or by calling sys.exit,
IDLE returns to a Shell prompt instead of exiting.</p>
-</div>
-<div class="section" id="user-output-in-shell">
-<h3>User output in Shell<a class="headerlink" href="#user-output-in-shell" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="user-output-in-shell">
+<h3>User output in Shell<a class="headerlink" href="#user-output-in-shell" title="Permalink to this heading">¶</a></h3>
<p>When a program outputs text, the result is determined by the
corresponding output device. When IDLE executes user code, <code class="docutils literal notranslate"><span class="pre">sys.stdout</span></code>
and <code class="docutils literal notranslate"><span class="pre">sys.stderr</span></code> are connected to the display area of IDLE’s Shell. Some of
<p>Squeezed output is expanded in place by double-clicking the label.
It can also be sent to the clipboard or a separate view window by
right-clicking the label.</p>
-</div>
-<div class="section" id="developing-tkinter-applications">
-<h3>Developing tkinter applications<a class="headerlink" href="#developing-tkinter-applications" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="developing-tkinter-applications">
+<h3>Developing tkinter applications<a class="headerlink" href="#developing-tkinter-applications" title="Permalink to this heading">¶</a></h3>
<p>IDLE is intentionally different from standard Python in order to
facilitate development of tkinter programs. Enter <code class="docutils literal notranslate"><span class="pre">import</span> <span class="pre">tkinter</span> <span class="pre">as</span> <span class="pre">tk;</span>
<span class="pre">root</span> <span class="pre">=</span> <span class="pre">tk.Tk()</span></code> in standard Python and nothing appears. Enter the same
the mainloop call. One then gets a shell prompt immediately and can
interact with the live application. One just has to remember to
re-enable the mainloop call when running in standard Python.</p>
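The "remember to re-enable mainloop" chore can be handled with a guard. The check below is an assumption for illustration (IDLE's user-code process has ``idlelib.run`` imported), not an official API:

```python
import sys

def needs_mainloop():
    """Heuristic guard (an assumption, not an official API): IDLE's
    user-code process imports idlelib.run and keeps a Tk event loop
    running, so an explicit mainloop() call there is redundant."""
    return "idlelib.run" not in sys.modules

# Usage sketch -- under standard Python the window only appears once
# mainloop() runs; under IDLE it appears and responds immediately:
#
#   import tkinter as tk
#   root = tk.Tk()
#   if needs_mainloop():
#       root.mainloop()
```

With this guard the same script behaves sensibly both inside IDLE and under standard Python.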
-</div>
-<div class="section" id="running-without-a-subprocess">
-<h3>Running without a subprocess<a class="headerlink" href="#running-without-a-subprocess" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="running-without-a-subprocess">
+<h3>Running without a subprocess<a class="headerlink" href="#running-without-a-subprocess" title="Permalink to this heading">¶</a></h3>
<p>By default, IDLE executes user code in a separate subprocess via a socket,
which uses the internal loopback interface. This connection is not
externally visible and no data is sent to or received from the internet.
<div class="deprecated">
<p><span class="versionmodified deprecated">Deprecated since version 3.4.</span></p>
</div>
-</div>
-</div>
-<div class="section" id="help-and-preferences">
-<h2>Help and preferences<a class="headerlink" href="#help-and-preferences" title="Permalink to this headline">¶</a></h2>
-<div class="section" id="help-sources">
-<span id="id6"></span><h3>Help sources<a class="headerlink" href="#help-sources" title="Permalink to this headline">¶</a></h3>
+</section>
+</section>
+<section id="help-and-preferences">
+<h2>Help and Preferences<a class="headerlink" href="#help-and-preferences" title="Permalink to this heading">¶</a></h2>
+<section id="help-sources">
+<span id="id6"></span><h3>Help sources<a class="headerlink" href="#help-sources" title="Permalink to this heading">¶</a></h3>
<p>Help menu entry “IDLE Help” displays a formatted html version of the
IDLE chapter of the Library Reference. The result, in a read-only
tkinter text window, is close to what one sees in a web browser.
that will be opened instead.</p>
<p>Selected URLs can be added or removed from the help menu at any time using the
General tab of the Configure IDLE dialog.</p>
-</div>
-<div class="section" id="setting-preferences">
-<span id="preferences"></span><h3>Setting preferences<a class="headerlink" href="#setting-preferences" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="setting-preferences">
+<span id="preferences"></span><h3>Setting preferences<a class="headerlink" href="#setting-preferences" title="Permalink to this heading">¶</a></h3>
<p>The font preferences, highlighting, keys, and general preferences can be
changed via Configure IDLE on the Option menu.
Non-default user settings are saved in a <code class="docutils literal notranslate"><span class="pre">.idlerc</span></code> directory in the user’s
and key set. To use a newer built-in color theme or key set with older
IDLEs, save it as a new custom theme or key set and it will be accessible
to older IDLEs.</p>
-</div>
-<div class="section" id="idle-on-macos">
-<h3>IDLE on macOS<a class="headerlink" href="#idle-on-macos" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="idle-on-macos">
+<h3>IDLE on macOS<a class="headerlink" href="#idle-on-macos" title="Permalink to this heading">¶</a></h3>
<p>Under System Preferences: Dock, one can set “Prefer tabs when opening
documents” to “Always”. This setting is not compatible with the tk/tkinter
GUI framework used by IDLE, and it breaks a few IDLE features.</p>
-</div>
-<div class="section" id="extensions">
-<h3>Extensions<a class="headerlink" href="#extensions" title="Permalink to this headline">¶</a></h3>
+</section>
+<section id="extensions">
+<h3>Extensions<a class="headerlink" href="#extensions" title="Permalink to this heading">¶</a></h3>
<p>IDLE contains an extension facility. Preferences for extensions can be
changed with the Extensions tab of the preferences dialog. See the
beginning of config-extensions.def in the idlelib directory for further
information. The only current default extension is zzdummy, an example
also used for testing.</p>
-</div>
-</div>
-</div>
+</section>
+</section>
+</section>
<div class="clearer"></div>
</div>
<div class="sphinxsidebar" role="navigation" aria-label="main navigation">
<div class="sphinxsidebarwrapper">
- <h3><a href="../contents.html">Table of Contents</a></h3>
- <ul>
+ <div>
+ <h3><a href="../contents.html">Table of Contents</a></h3>
+ <ul>
<li><a class="reference internal" href="#">IDLE</a><ul>
<li><a class="reference internal" href="#menus">Menus</a><ul>
<li><a class="reference internal" href="#file-menu-shell-and-editor">File menu (Shell and Editor)</a></li>
<li><a class="reference internal" href="#options-menu-shell-and-editor">Options menu (Shell and Editor)</a></li>
<li><a class="reference internal" href="#window-menu-shell-and-editor">Window menu (Shell and Editor)</a></li>
<li><a class="reference internal" href="#help-menu-shell-and-editor">Help menu (Shell and Editor)</a></li>
-<li><a class="reference internal" href="#context-menus">Context Menus</a></li>
+<li><a class="reference internal" href="#context-menus">Context menus</a></li>
</ul>
</li>
-<li><a class="reference internal" href="#editing-and-navigation">Editing and navigation</a><ul>
+<li><a class="reference internal" href="#editing-and-navigation">Editing and Navigation</a><ul>
<li><a class="reference internal" href="#editor-windows">Editor windows</a></li>
<li><a class="reference internal" href="#key-bindings">Key bindings</a></li>
<li><a class="reference internal" href="#automatic-indentation">Automatic indentation</a></li>
<li><a class="reference internal" href="#completions">Completions</a></li>
<li><a class="reference internal" href="#calltips">Calltips</a></li>
<li><a class="reference internal" href="#code-context">Code Context</a></li>
-<li><a class="reference internal" href="#python-shell-window">Python Shell window</a></li>
+<li><a class="reference internal" href="#shell-window">Shell window</a></li>
<li><a class="reference internal" href="#text-colors">Text colors</a></li>
</ul>
</li>
-<li><a class="reference internal" href="#startup-and-code-execution">Startup and code execution</a><ul>
+<li><a class="reference internal" href="#startup-and-code-execution">Startup and Code Execution</a><ul>
<li><a class="reference internal" href="#command-line-usage">Command line usage</a></li>
<li><a class="reference internal" href="#startup-failure">Startup failure</a></li>
<li><a class="reference internal" href="#running-user-code">Running user code</a></li>
<li><a class="reference internal" href="#running-without-a-subprocess">Running without a subprocess</a></li>
</ul>
</li>
-<li><a class="reference internal" href="#help-and-preferences">Help and preferences</a><ul>
+<li><a class="reference internal" href="#help-and-preferences">Help and Preferences</a><ul>
<li><a class="reference internal" href="#help-sources">Help sources</a></li>
<li><a class="reference internal" href="#setting-preferences">Setting preferences</a></li>
<li><a class="reference internal" href="#idle-on-macos">IDLE on macOS</a></li>
</li>
</ul>
- <h4>Previous topic</h4>
- <p class="topless"><a href="tkinter.tix.html"
- title="previous chapter"><code class="xref py py-mod docutils literal notranslate"><span class="pre">tkinter.tix</span></code> — Extension widgets for Tk</a></p>
- <h4>Next topic</h4>
- <p class="topless"><a href="development.html"
- title="next chapter">Development Tools</a></p>
+ </div>
+ <div>
+ <h4>Previous topic</h4>
+ <p class="topless"><a href="tkinter.tix.html"
+ title="previous chapter"><code class="xref py py-mod docutils literal notranslate"><span class="pre">tkinter.tix</span></code> — Extension widgets for Tk</a></p>
+ </div>
+ <div>
+ <h4>Next topic</h4>
+ <p class="topless"><a href="development.html"
+ title="next chapter">Development Tools</a></p>
+ </div>
<div role="note" aria-label="source link">
<h3>This Page</h3>
<ul class="this-page-menu">
</ul>
</div>
</div>
+<div id="sidebarbutton" title="Collapse sidebar">
+<span>«</span>
+</div>
+
</div>
<div class="clearer"></div>
</div>
<a href="tkinter.tix.html" title="tkinter.tix — Extension widgets for Tk"
>previous</a> |</li>
- <li><img src="../_static/py.png" alt=""
- style="vertical-align: middle; margin-top: -1px"/></li>
- <li><a href="https://www.python.org/">Python</a> »</li>
-
+ <li><img src="../_static/py.svg" alt="python logo" style="vertical-align: middle; margin-top: -1px"/></li>
+ <li><a href="https://www.python.org/">Python</a> »</li>
+ <li class="switchers">
+ <div class="language_switcher_placeholder"></div>
+ <div class="version_switcher_placeholder"></div>
+ </li>
+ <li>
+ </li>
<li id="cpython-language-and-version">
- <a href="../index.html">3.11.0a4 Documentation</a> »
+ <a href="../index.html">3.12.0a0 Documentation</a> »
</li>
<li class="nav-item nav-item-1"><a href="index.html" >The Python Standard Library</a> »</li>
<li class="nav-item nav-item-2"><a href="tk.html" >Graphical User Interfaces with Tk</a> »</li>
<li class="nav-item nav-item-this"><a href="">IDLE</a></li>
- <li class="right">
+ <li class="right">
- <div class="inline-search" style="display: none" role="search">
+ <div class="inline-search" role="search">
<form class="inline-search" action="../search.html" method="get">
- <input placeholder="Quick search" type="text" name="q" />
+ <input placeholder="Quick search" aria-label="Quick search" type="text" name="q" />
<input type="submit" value="Go" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
- <script type="text/javascript">$('.inline-search').show(0);</script>
- |
- </li>
+ |
+ </li>
</ul>
</div>
<div class="footer">
© <a href="../copyright.html">Copyright</a> 2001-2022, Python Software Foundation.
<br />
+ This page is licensed under the Python Software Foundation License Version 2.
+ <br />
+ Examples, recipes, and other code in the documentation are additionally licensed under the Zero Clause BSD License.
+ <br />
+ See <a href="/license.html">History and License</a> for more information.<br />
+ <br />
The Python Software Foundation is a non-profit corporation.
<a href="https://www.python.org/psf/donations/">Please donate.</a>
<br />
<br />
- Last updated on Jan 26, 2022.
+ Last updated on Jul 03, 2022.
<a href="/bugs.html">Found a bug</a>?
<br />
- Created using <a href="https://www.sphinx-doc.org/">Sphinx</a> 3.2.1.
+ Created using <a href="https://www.sphinx-doc.org/">Sphinx</a> 5.0.2.
</div>
</body>
if a == 'class':
class_ = v
s = ''
- if tag == 'div' and class_ == 'section':
+ if tag == 'section' and attrs == [('id', 'idle')]:
self.show = True # Start main content.
- elif tag == 'div' and class_ == 'sphinxsidebar':
+ elif tag == 'div' and class_ == 'clearer':
self.show = False # End main content.
elif tag == 'p' and self.prevtag and not self.prevtag[0]:
# Begin a new block for <p> tags after a closed tag.
email = Label(frame_background, text='email: idle-dev@python.org',
justify=LEFT, fg=self.fg, bg=self.bg)
email.grid(row=6, column=0, columnspan=2, sticky=W, padx=10, pady=0)
- docs = Label(frame_background, text="https://docs.python.org/"
- f"{version[:version.rindex('.')]}/library/idle.html",
+ docs_url = ("https://docs.python.org/%d.%d/library/idle.html" %
+ sys.version_info[:2])
+ docs = Label(frame_background, text=docs_url,
justify=LEFT, fg=self.fg, bg=self.bg)
docs.grid(row=7, column=0, columnspan=2, sticky=W, padx=10, pady=0)
docs.bind("<Button-1>", lambda event: webbrowser.open(docs['text']))
self.text.bell()
return
nprefix = len(prefix)
- while 1:
+ while True:
pointer += -1 if reverse else 1
if pointer < 0 or pointer >= nhist:
self.text.bell()
last_identifier_pos = pos
postdot_phase = True
- while 1:
+ while True:
# Eat whitespaces, comments, and if postdot_phase is False - a dot
- while 1:
+ while True:
if pos>brck_limit and rawtext[pos-1] in self._whitespace_chars:
# Eat a whitespace
pos -= 1
import unittest
from unittest import mock
from idlelib.idle_test.mock_idle import Func
+from idlelib.util import py_extensions
from collections import deque
import os.path
self.assertTrue(mb.node.destroy.called)
del mb.top.destroy, mb.node.destroy
+ def test_is_browseable_extension(self):
+ path = "/path/to/file"
+ for ext in py_extensions:
+ with self.subTest(ext=ext):
+ filename = f'{path}{ext}'
+ actual = browser.is_browseable_extension(filename)
+ expected = ext not in browser.browseable_extension_blocklist
+ self.assertEqual(actual, expected)
+
# Nested tree same as in test_pyclbr.py except for supers on C0. C1.
mb = pyclbr
from idlelib import debugger_r
import unittest
-from test.support import requires
-from tkinter import Tk
-
-
-class Test(unittest.TestCase):
+# Boilerplate likely to be needed for future test classes.
+##from test.support import requires
+##from tkinter import Tk
+##class Test(unittest.TestCase):
## @classmethod
## def setUpClass(cls):
## requires('gui')
## cls.root = Tk()
-##
## @classmethod
## def tearDownClass(cls):
## cls.root.destroy()
-## del cls.root
-
- def test_init(self):
- self.assertTrue(True) # Get coverage of import
-
-# Classes GUIProxy, IdbAdapter, FrameProxy, CodeProxy, DictProxy,
-# GUIAdapter, IdbProxy plus 7 module functions.
+# GUIProxy, IdbAdapter, FrameProxy, CodeProxy, DictProxy,
+# GUIAdapter, IdbProxy, and 7 functions still need tests.
class IdbAdapterTest(unittest.TestCase):
from collections import namedtuple
from test.support import requires
from tkinter import Tk
-from idlelib.idle_test.mock_idle import Func
Editor = editor.EditorWindow
first_line = get_end_linenumber(text)
self.do_input(dedent('''\
if True:
- print(1)
+ print(1)
'''))
yield
selected_lines_text = text.get('sel.first linestart', 'sel.last')
selected_lines = selected_lines_text.split('\n')
- # Expect a block of input, a single output line, and a new prompt
+ selected_lines.pop() # Final '' is a split artifact, not a line.
+ # Expect a block of input and a single output line.
expected_prompts = \
- ['>>>'] + ['...'] * (len(selected_lines) - 3) + [None, '>>>']
+ ['>>>'] + ['...'] * (len(selected_lines) - 2) + [None]
selected_text_with_prompts = '\n'.join(
line if prompt is None else prompt + ' ' + line
for prompt, line in zip(expected_prompts,
from tkinter import messagebox
from tkinter.simpledialog import askstring
-import idlelib
from idlelib.config import idleConf
from idlelib.util import py_extensions
py_extensions = ' '.join("*"+ext for ext in py_extensions)
-
encoding = 'utf-8'
-if sys.platform == 'win32':
- errors = 'surrogatepass'
-else:
- errors = 'surrogateescape'
-
+errors = 'surrogatepass' if sys.platform == 'win32' else 'surrogateescape'
class IOBinding:
('_Undo', '<<undo>>'),
('_Redo', '<<redo>>'),
None,
+ ('Select _All', '<<select-all>>'),
('Cu_t', '<<cut>>'),
('_Copy', '<<copy>>'),
('_Paste', '<<paste>>'),
- ('Select _All', '<<select-all>>'),
None,
('_Find...', '<<find>>'),
('Find A_gain', '<<find-again>>'),
('Find _Selection', '<<find-selection>>'),
('Find in Files...', '<<find-in-files>>'),
('R_eplace...', '<<replace>>'),
+ None,
('Go to _Line', '<<goto-line>>'),
('S_how Completions', '<<force-open-completions>>'),
('E_xpand Word', '<<expand-word>>'),
and/or last lines is selected.
"""
text = self.text
-
- selection_indexes = (
- self.text.index("sel.first linestart"),
- self.text.index("sel.last +1line linestart"),
- )
- if selection_indexes[0] is None:
- # There is no selection, so do nothing.
- return
-
- selected_text = self.text.get(*selection_indexes)
+ selfirst = text.index('sel.first linestart')
+ if selfirst is None: # Should not be possible.
+ return # No selection, do nothing.
+ sellast = text.index('sel.last')
+ if sellast[-1] != '0':
+ sellast = text.index("sel.last+1line linestart")
+
+ selected_text = self.text.get(selfirst, sellast)
selection_lineno_range = range(
- int(float(selection_indexes[0])),
- int(float(selection_indexes[1]))
+ int(float(selfirst)),
+ int(float(sellast))
)
prompts = [
self.shell_sidebar.line_prompts.get(lineno)
self.debug("_getresponse:myseq:", myseq)
if threading.current_thread() is self.sockthread:
# this thread does all reading of requests or responses
- while 1:
+ while True:
response = self.pollresponse(myseq, wait)
if response is not None:
return response
self.responses and notify the owning thread.
"""
- while 1:
+ while True:
# send queued response if there is one available
try:
qmsg = response_queue.get(0)
args=((LOCALHOST, port),))
sockthread.daemon = True
sockthread.start()
- while 1:
+ while True:
try:
if exit_now:
try:
wrapped = 0
startline = line
chars = text.get("%d.0" % line, "%d.0" % (line+1))
- while 1:
+ while True:
m = search_reverse(prog, chars[:-1], col)
if m:
if ok or m.start() < col:
* std streams (pyshell, run),
* warning stuff (pyshell, run).
"""
-from os import path
# .pyw is for Windows; .pyi is for stub files.
py_extensions = ('.py', '.pyw', '.pyi') # Order needed for open/save dialogs.
@contextlib.contextmanager
-def _tempfile(reader, suffix=''):
+def _tempfile(reader, suffix='',
+ # gh-93353: Keep a reference to call os.remove() in late Python
+ # finalization.
+ *, _os_remove=os.remove):
# Not using tempfile.NamedTemporaryFile as it leads to deeper 'try'
# blocks due to the need to close the temporary file to work on Windows
# properly.
yield pathlib.Path(raw_path)
finally:
try:
- os.remove(raw_path)
+ _os_remove(raw_path)
except FileNotFoundError:
pass
while ismethod(f):
f = f.__func__
f = functools._unwrap_partial(f)
- if not isfunction(f):
+ if not (isfunction(f) or _signature_is_functionlike(f)):
return False
return bool(f.__code__.co_flags & flag)
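The hunk above widens the code-flag check to accept function-like objects as well as plain functions. A small sketch of the observable behavior this path supports (the unwrap of `functools.partial` before inspecting code flags is already in the surrounding code):

```python
import functools
import inspect

async def fetch():
    return 42

# iscoroutinefunction unwraps methods and functools.partial objects
# before checking the underlying code object's CO_COROUTINE flag.
wrapped = functools.partial(fetch)
print(inspect.iscoroutinefunction(fetch))    # True
print(inspect.iscoroutinefunction(wrapped))  # True
```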
try:
with tokenize.open(fullname) as fp:
lines = fp.readlines()
- except OSError:
+ except (OSError, UnicodeDecodeError, SyntaxError):
return []
if lines and not lines[-1].endswith('\n'):
lines[-1] += '\n'
# (if there's exception data), and also returns the formatted
# message. We can then use this to replace the original
# msg + args, as these might be unpickleable. We also zap the
- # exc_info and exc_text attributes, as they are no longer
+ # exc_info, exc_text and stack_info attributes, as they are no longer
# needed and, if not None, will typically not be pickleable.
msg = self.format(record)
# bpo-35726: make copy of record to avoid affecting other handlers in the chain.
record.args = None
record.exc_info = None
record.exc_text = None
+ record.stack_info = None
return record
def emit(self, record):
def _Popen(process_obj):
return _default_context.get_context().Process._Popen(process_obj)
+ @staticmethod
+ def _after_fork():
+ return _default_context.get_context().Process._after_fork()
+
class DefaultContext(BaseContext):
Process = Process
from .popen_spawn_posix import Popen
return Popen(process_obj)
+ @staticmethod
+ def _after_fork():
+ # process is spawned, nothing to do
+ pass
+
class ForkServerProcess(process.BaseProcess):
_start_method = 'forkserver'
@staticmethod
from .popen_spawn_win32 import Popen
return Popen(process_obj)
+ @staticmethod
+ def _after_fork():
+ # process is spawned, nothing to do
+ pass
+
class SpawnContext(BaseContext):
_name = 'spawn'
Process = SpawnProcess
processes = os.cpu_count() or 1
if processes < 1:
raise ValueError("Number of processes must be at least 1")
+ if maxtasksperchild is not None:
+ if not isinstance(maxtasksperchild, int) or maxtasksperchild <= 0:
+ raise ValueError("maxtasksperchild must be a positive int or None")
if initializer is not None and not callable(initializer):
raise TypeError('initializer must be a callable')
if threading._HAVE_THREAD_NATIVE_ID:
threading.main_thread()._set_native_id()
try:
- util._finalizer_registry.clear()
- util._run_after_forkers()
+ self._after_fork()
finally:
# delay finalization of the old process object until after
# _run_after_forkers() is executed
return exitcode
+ @staticmethod
+ def _after_fork():
+ from . import util
+ util._finalizer_registry.clear()
+ util._run_after_forkers()
+
+
#
# We subclass bytes to avoid accidental transmission of auth keys over network
#
import _posixshmem
_USE_POSIX = True
+from . import resource_tracker
_O_CREX = os.O_CREAT | os.O_EXCL
self.unlink()
raise
- from .resource_tracker import register
- register(self._name, "shared_memory")
+ resource_tracker.register(self._name, "shared_memory")
else:
called once (and only once) across all processes which have access
to the shared memory block."""
if _USE_POSIX and self._name:
- from .resource_tracker import unregister
_posixshmem.shm_unlink(self._name)
- unregister(self._name, "shared_memory")
+ resource_tracker.unregister(self._name, "shared_memory")
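The hunk moves the `resource_tracker` import to module level instead of importing inside the methods, presumably so registration and unregistration still work during late interpreter finalization. A minimal sketch of the public `SharedMemory` lifecycle the change leaves intact (POSIX-only `unlink` semantics assumed here):

```python
from multiprocessing import shared_memory

# Create a block, write to it, then release it; on POSIX the block is
# registered with and unregistered from the resource tracker internally.
shm = shared_memory.SharedMemory(create=True, size=16)
try:
    shm.buf[:5] = b'hello'
    data = bytes(shm.buf[:5])
finally:
    shm.close()
    shm.unlink()
print(data)  # b'hello'
```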
_encoding = "utf8"
import genericpath
from genericpath import *
+
__all__ = ["normcase","isabs","join","splitdrive","split","splitext",
"basename","dirname","commonprefix","getsize","getmtime",
"getatime","getctime", "islink","exists","lexists","isdir","isfile",
# Other normalizations (such as optimizing '../' away) are not done
# (this is done by normpath).
-def normcase(s):
- """Normalize case of pathname.
-
- Makes all characters lowercase and all slashes into backslashes."""
- s = os.fspath(s)
- if isinstance(s, bytes):
- return s.replace(b'/', b'\\').lower()
- else:
+try:
+ from _winapi import (
+ LCMapStringEx as _LCMapStringEx,
+ LOCALE_NAME_INVARIANT as _LOCALE_NAME_INVARIANT,
+ LCMAP_LOWERCASE as _LCMAP_LOWERCASE)
+
+ def normcase(s):
+ """Normalize case of pathname.
+
+ Makes all characters lowercase and all slashes into backslashes.
+ """
+ s = os.fspath(s)
+ if not s:
+ return s
+ if isinstance(s, bytes):
+ encoding = sys.getfilesystemencoding()
+ s = s.decode(encoding, 'surrogateescape').replace('/', '\\')
+ s = _LCMapStringEx(_LOCALE_NAME_INVARIANT,
+ _LCMAP_LOWERCASE, s)
+ return s.encode(encoding, 'surrogateescape')
+ else:
+ return _LCMapStringEx(_LOCALE_NAME_INVARIANT,
+ _LCMAP_LOWERCASE,
+ s.replace('/', '\\'))
+except ImportError:
+ def normcase(s):
+ """Normalize case of pathname.
+
+ Makes all characters lowercase and all slashes into backslashes.
+ """
+ s = os.fspath(s)
+ if isinstance(s, bytes):
+ return os.fsencode(os.fsdecode(s).replace('/', '\\').lower())
return s.replace('/', '\\').lower()
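Both branches of the rewritten `normcase` (the `LCMapStringEx`-backed version on Windows and the pure-Python fallback) produce the same result for ordinary paths: slashes become backslashes and characters are lowercased. A quick illustration via `ntpath`, which works on any platform:

```python
import ntpath

# normcase lowercases and converts forward slashes to backslashes;
# on Windows builds the new code delegates lowercasing to
# LCMapStringEx with the invariant locale.
print(ntpath.normcase('C:/Spam/EGGS.txt'))  # c:\spam\eggs.txt
```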
"persistent IDs in protocol 0 must be ASCII strings")
def save_reduce(self, func, args, state=None, listitems=None,
- dictitems=None, state_setter=None, obj=None):
+ dictitems=None, state_setter=None, *, obj=None):
# This API is called by some subclasses
if not isinstance(args, tuple):
# -*- coding: utf-8 -*-
-# Autogenerated by Sphinx on Mon Jun 6 12:53:10 2022
+# Autogenerated by Sphinx on Mon Aug 1 21:23:42 2022
topics = {'assert': 'The "assert" statement\n'
'**********************\n'
'\n'
' invoking the superclass’s "__new__()" method using\n'
' "super().__new__(cls[, ...])" with appropriate arguments '
'and then\n'
- ' modifying the newly-created instance as necessary before '
+ ' modifying the newly created instance as necessary before '
'returning\n'
' it.\n'
'\n'
                          'Python. This is\n'
' intended to provide protection against a '
'denial-of-service caused\n'
- ' by carefully-chosen inputs that exploit the worst '
+ ' by carefully chosen inputs that exploit the worst '
'case\n'
' performance of a dict insertion, O(n^2) complexity. '
'See\n'
' invoking the superclass’s "__new__()" method using\n'
' "super().__new__(cls[, ...])" with appropriate arguments '
'and then\n'
- ' modifying the newly-created instance as necessary before '
+ ' modifying the newly created instance as necessary before '
'returning\n'
' it.\n'
'\n'
'is\n'
' intended to provide protection against a '
'denial-of-service caused\n'
- ' by carefully-chosen inputs that exploit the worst case\n'
+ ' by carefully chosen inputs that exploit the worst case\n'
' performance of a dict insertion, O(n^2) complexity. '
'See\n'
' http://www.ocert.org/advisories/ocert-2011-003.html '
return None
def _make_tarball(base_name, base_dir, compress="gzip", verbose=0, dry_run=0,
- owner=None, group=None, logger=None):
+ owner=None, group=None, logger=None, root_dir=None):
"""Create a (possibly compressed) tar file from all the files under
'base_dir'.
if not dry_run:
tar = tarfile.open(archive_name, 'w|%s' % tar_compression)
+ arcname = base_dir
+ if root_dir is not None:
+ base_dir = os.path.join(root_dir, base_dir)
try:
- tar.add(base_dir, filter=_set_uid_gid)
+ tar.add(base_dir, arcname, filter=_set_uid_gid)
finally:
tar.close()
+ if root_dir is not None:
+ archive_name = os.path.abspath(archive_name)
return archive_name
-def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0, logger=None):
+def _make_zipfile(base_name, base_dir, verbose=0, dry_run=0,
+ logger=None, owner=None, group=None, root_dir=None):
"""Create a zip file from all the files under 'base_dir'.
The output zip file will be named 'base_name' + ".zip". Returns the
if not dry_run:
with zipfile.ZipFile(zip_filename, "w",
compression=zipfile.ZIP_DEFLATED) as zf:
- path = os.path.normpath(base_dir)
- if path != os.curdir:
- zf.write(path, path)
+ arcname = os.path.normpath(base_dir)
+ if root_dir is not None:
+ base_dir = os.path.join(root_dir, base_dir)
+ base_dir = os.path.normpath(base_dir)
+ if arcname != os.curdir:
+ zf.write(base_dir, arcname)
if logger is not None:
- logger.info("adding '%s'", path)
+ logger.info("adding '%s'", base_dir)
for dirpath, dirnames, filenames in os.walk(base_dir):
+ arcdirpath = dirpath
+ if root_dir is not None:
+ arcdirpath = os.path.relpath(arcdirpath, root_dir)
+ arcdirpath = os.path.normpath(arcdirpath)
for name in sorted(dirnames):
- path = os.path.normpath(os.path.join(dirpath, name))
- zf.write(path, path)
+ path = os.path.join(dirpath, name)
+ arcname = os.path.join(arcdirpath, name)
+ zf.write(path, arcname)
if logger is not None:
logger.info("adding '%s'", path)
for name in filenames:
- path = os.path.normpath(os.path.join(dirpath, name))
+ path = os.path.join(dirpath, name)
+ path = os.path.normpath(path)
if os.path.isfile(path):
- zf.write(path, path)
+ arcname = os.path.join(arcdirpath, name)
+ zf.write(path, arcname)
if logger is not None:
logger.info("adding '%s'", path)
+ if root_dir is not None:
+ zip_filename = os.path.abspath(zip_filename)
return zip_filename
+# Maps the name of the archive format to a tuple containing:
+# * the archiving function
+# * extra keyword arguments
+# * description
+# * does it support the root_dir argument?
_ARCHIVE_FORMATS = {
- 'tar': (_make_tarball, [('compress', None)], "uncompressed tar file"),
+ 'tar': (_make_tarball, [('compress', None)],
+ "uncompressed tar file", True),
}
if _ZLIB_SUPPORTED:
_ARCHIVE_FORMATS['gztar'] = (_make_tarball, [('compress', 'gzip')],
- "gzip'ed tar-file")
- _ARCHIVE_FORMATS['zip'] = (_make_zipfile, [], "ZIP file")
+ "gzip'ed tar-file", True)
+ _ARCHIVE_FORMATS['zip'] = (_make_zipfile, [], "ZIP file", True)
if _BZ2_SUPPORTED:
_ARCHIVE_FORMATS['bztar'] = (_make_tarball, [('compress', 'bzip2')],
- "bzip2'ed tar-file")
+ "bzip2'ed tar-file", True)
if _LZMA_SUPPORTED:
_ARCHIVE_FORMATS['xztar'] = (_make_tarball, [('compress', 'xz')],
- "xz'ed tar-file")
+ "xz'ed tar-file", True)
def get_archive_formats():
"""Returns a list of supported formats for archiving and unarchiving.
if not isinstance(element, (tuple, list)) or len(element) !=2:
raise TypeError('extra_args elements are : (arg_name, value)')
- _ARCHIVE_FORMATS[name] = (function, extra_args, description)
+ _ARCHIVE_FORMATS[name] = (function, extra_args, description, False)
def unregister_archive_format(name):
del _ARCHIVE_FORMATS[name]
uses the current owner and group.
"""
sys.audit("shutil.make_archive", base_name, format, root_dir, base_dir)
- save_cwd = os.getcwd()
- if root_dir is not None:
- if logger is not None:
- logger.debug("changing into '%s'", root_dir)
- base_name = os.path.abspath(base_name)
- if not dry_run:
- os.chdir(root_dir)
-
- if base_dir is None:
- base_dir = os.curdir
-
- kwargs = {'dry_run': dry_run, 'logger': logger}
-
try:
format_info = _ARCHIVE_FORMATS[format]
except KeyError:
raise ValueError("unknown archive format '%s'" % format) from None
+ kwargs = {'dry_run': dry_run, 'logger': logger,
+ 'owner': owner, 'group': group}
+
func = format_info[0]
for arg, val in format_info[1]:
kwargs[arg] = val
- if format != 'zip':
- kwargs['owner'] = owner
- kwargs['group'] = group
+ if base_dir is None:
+ base_dir = os.curdir
+
+ support_root_dir = format_info[3]
+ save_cwd = None
+ if root_dir is not None:
+ if support_root_dir:
+ # Support path-like base_name here for backwards-compatibility.
+ base_name = os.fspath(base_name)
+ kwargs['root_dir'] = root_dir
+ else:
+ save_cwd = os.getcwd()
+ if logger is not None:
+ logger.debug("changing into '%s'", root_dir)
+ base_name = os.path.abspath(base_name)
+ if not dry_run:
+ os.chdir(root_dir)
try:
filename = func(base_name, base_dir, **kwargs)
finally:
- if root_dir is not None:
+ if save_cwd is not None:
if logger is not None:
logger.debug("changing back to '%s'", save_cwd)
os.chdir(save_cwd)
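The rewrite above lets archivers that accept `root_dir` avoid the `os.chdir` dance entirely, falling back to the old chdir behavior only for registered formats that don't support it. The public contract of `make_archive` is unchanged; a sketch:

```python
import os
import shutil
import tempfile
import zipfile

# Paths inside the archive are relative to root_dir; with the rewrite,
# formats that support root_dir natively no longer chdir into it.
with tempfile.TemporaryDirectory() as src, tempfile.TemporaryDirectory() as dst:
    with open(os.path.join(src, 'hello.txt'), 'w') as f:
        f.write('hi')
    archive = shutil.make_archive(os.path.join(dst, 'demo'), 'zip', root_dir=src)
    with zipfile.ZipFile(archive) as zf:
        names = zf.namelist()
print(names)  # ['hello.txt']
```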
finally:
tarobj.close()
+# Maps the name of the unpack format to a tuple containing:
+# * extensions
+# * the unpacking function
+# * extra keyword arguments
+# * description
_UNPACK_FORMATS = {
'tar': (['.tar'], _unpack_tarfile, [], "uncompressed tar file"),
'zip': (['.zip'], _unpack_zipfile, [], "ZIP file"),
ORDER BY "name"
"""
schema_res = cu.execute(q)
+ sqlite_sequence = []
for table_name, type, sql in schema_res.fetchall():
if table_name == 'sqlite_sequence':
- yield('DELETE FROM "sqlite_sequence";')
+ rows = cu.execute('SELECT * FROM "sqlite_sequence";').fetchall()
+ sqlite_sequence = ['DELETE FROM "sqlite_sequence"']
+ sqlite_sequence += [
+ f'INSERT INTO "sqlite_sequence" VALUES(\'{row[0]}\',{row[1]})'
+ for row in rows
+ ]
+ continue
elif table_name == 'sqlite_stat1':
yield('ANALYZE "sqlite_master";')
elif table_name.startswith('sqlite_'):
for name, type, sql in schema_res.fetchall():
yield('{0};'.format(sql))
+ # gh-79009: Yield statements concerning the sqlite_sequence table at the
+ # end of the transaction.
+ for row in sqlite_sequence:
+ yield('{0};'.format(row))
+
yield('COMMIT;')
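Per gh-79009, statements touching `sqlite_sequence` are now collected and yielded at the end of the dump (including `INSERT`s that preserve the sequence counters), rather than emitted before the dependent tables are recreated. A sketch of the dump shape:

```python
import sqlite3

cx = sqlite3.connect(':memory:')
cx.execute('CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v)')
cx.execute('INSERT INTO t(v) VALUES (1)')
cx.commit()

# iterdump always emits one full transaction; with the fix, any
# sqlite_sequence statements land after the table dumps, before COMMIT.
dump = list(cx.iterdump())
print(dump[0], dump[-1])  # BEGIN TRANSACTION; COMMIT;
```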
self.cu.executemany("insert into test(name) values (?)", [(1,), (2,), (3,)])
self.assertEqual(self.cu.rowcount, 3)
+ @unittest.skipIf(sqlite.sqlite_version_info < (3, 35, 0),
+ "Requires SQLite 3.35.0 or newer")
+ def test_rowcount_update_returning(self):
+ # gh-93421: rowcount is updated correctly for UPDATE...RETURNING queries
+ self.cu.execute("update test set name='bar' where name='foo' returning 1")
+ self.assertEqual(self.cu.fetchone()[0], 1)
+ self.assertEqual(self.cu.rowcount, 1)
+
def test_total_changes(self):
self.cu.execute("insert into test(name) values ('foo')")
self.cu.execute("insert into test(name) values ('foo')")
import unittest
import sqlite3 as sqlite
+
class DumpTests(unittest.TestCase):
def setUp(self):
self.cx = sqlite.connect(":memory:")
[self.assertEqual(expected_sqls[i], actual_sqls[i])
for i in range(len(expected_sqls))]
+ def test_dump_autoincrement(self):
+ expected = [
+ 'CREATE TABLE "t1" (id integer primary key autoincrement);',
+ 'INSERT INTO "t1" VALUES(NULL);',
+ 'CREATE TABLE "t2" (id integer primary key autoincrement);',
+ ]
+ self.cu.executescript("".join(expected))
+
+        # the NULL value should now automatically be set to 1
+ expected[1] = expected[1].replace("NULL", "1")
+ expected.insert(0, "BEGIN TRANSACTION;")
+ expected.extend([
+ 'DELETE FROM "sqlite_sequence";',
+ 'INSERT INTO "sqlite_sequence" VALUES(\'t1\',1);',
+ 'COMMIT;',
+ ])
+
+ actual = [stmt for stmt in self.cx.iterdump()]
+ self.assertEqual(expected, actual)
+
+ def test_dump_autoincrement_create_new_db(self):
+ self.cu.execute("BEGIN TRANSACTION")
+ self.cu.execute("CREATE TABLE t1 (id integer primary key autoincrement)")
+ self.cu.execute("CREATE TABLE t2 (id integer primary key autoincrement)")
+ self.cu.executemany("INSERT INTO t1 VALUES(?)", ((None,) for _ in range(9)))
+ self.cu.executemany("INSERT INTO t2 VALUES(?)", ((None,) for _ in range(4)))
+ self.cx.commit()
+
+ cx2 = sqlite.connect(":memory:")
+ query = "".join(self.cx.iterdump())
+ cx2.executescript(query)
+ cu2 = cx2.cursor()
+
+ dataset = (
+ ("t1", 9),
+ ("t2", 4),
+ )
+ for table, seq in dataset:
+ with self.subTest(table=table, seq=seq):
+ res = cu2.execute("""
+ SELECT "seq" FROM "sqlite_sequence" WHERE "name" == ?
+ """, (table,))
+ rows = res.fetchall()
+ self.assertEqual(rows[0][0], seq)
+
def test_unorderable_row(self):
# iterdump() should be able to cope with unorderable row types (issue #15545)
class UnorderableRow:
import os, unittest
import sqlite3 as sqlite
-def get_db_path():
- return "sqlite_testdb"
+from test.support import LOOPBACK_TIMEOUT
+from test.support.os_helper import TESTFN, unlink
+
+
+TIMEOUT = LOOPBACK_TIMEOUT / 10
+
class TransactionTests(unittest.TestCase):
def setUp(self):
- try:
- os.remove(get_db_path())
- except OSError:
- pass
-
- self.con1 = sqlite.connect(get_db_path(), timeout=0.1)
+ self.con1 = sqlite.connect(TESTFN, timeout=TIMEOUT)
self.cur1 = self.con1.cursor()
- self.con2 = sqlite.connect(get_db_path(), timeout=0.1)
+ self.con2 = sqlite.connect(TESTFN, timeout=TIMEOUT)
self.cur2 = self.con2.cursor()
def tearDown(self):
- self.cur1.close()
- self.con1.close()
+ try:
+ self.cur1.close()
+ self.con1.close()
- self.cur2.close()
- self.con2.close()
+ self.cur2.close()
+ self.con2.close()
- try:
- os.unlink(get_db_path())
- except OSError:
- pass
+ finally:
+ unlink(TESTFN)
def test_dml_does_not_auto_commit_before(self):
self.cur1.execute("create table test(i)")
return bool(self._table.children)
def get_identifiers(self):
- """Return a list of names of symbols in the table.
+ """Return a view object containing the names of symbols in the table.
"""
return self._table.symbols.keys()
# header information.
self._apply_pax_info(tarfile.pax_headers, tarfile.encoding, tarfile.errors)
+ # Remove redundant slashes from directories. This is to be consistent
+ # with frombuf().
+ if self.isdir():
+ self.name = self.name.rstrip("/")
+
return self
def _proc_gnulong(self, tarfile):
elif self.type == GNUTYPE_LONGLINK:
next.linkname = nts(buf, tarfile.encoding, tarfile.errors)
+ # Remove redundant slashes from directories. This is to be consistent
+ # with frombuf().
+ if next.isdir():
+ next.name = next.name.removesuffix("/")
+
return next
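Both hunks strip the trailing slash from directory names recovered via pax headers and GNU long names, matching what `frombuf()` already does for plain headers. A sketch of the invariant:

```python
import io
import tarfile

# Directory members are stored with a trailing slash in the archive,
# but the names tarfile reports back have it stripped.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w') as tf:
    info = tarfile.TarInfo('pkg/')
    info.type = tarfile.DIRTYPE
    tf.addfile(info)

buf.seek(0)
with tarfile.open(fileobj=buf) as tf:
    names = tf.getnames()
print(names)  # ['pkg']
```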
def _proc_sparse(self, tarfile):
fd = _os.open(filename, _bin_openflags, 0o600)
try:
try:
- with _io.open(fd, 'wb', closefd=False) as fp:
- fp.write(b'blat')
+ _os.write(fd, b'blat')
finally:
_os.close(fd)
finally:
def _mkstemp_inner(dir, pre, suf, flags, output_type):
"""Code common to mkstemp, TemporaryFile, and NamedTemporaryFile."""
+ dir = _os.path.abspath(dir)
names = _get_candidate_names()
if output_type is bytes:
names = map(_os.fsencode, names)
continue
else:
raise
- return (fd, _os.path.abspath(file))
+ return fd, file
raise FileExistsError(_errno.EEXIST,
"No usable temporary file name found")
if "b" not in mode:
encoding = _io.text_encoding(encoding)
- (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
+ name = None
+ def opener(*args):
+ nonlocal name
+ fd, name = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
+ return fd
try:
- file = _io.open(fd, mode, buffering=buffering,
- newline=newline, encoding=encoding, errors=errors)
-
- return _TemporaryFileWrapper(file, name, delete)
- except BaseException:
- _os.unlink(name)
- _os.close(fd)
+ file = _io.open(dir, mode, buffering=buffering,
+ newline=newline, encoding=encoding, errors=errors,
+ opener=opener)
+ try:
+ raw = getattr(file, 'buffer', file)
+ raw = getattr(raw, 'raw', raw)
+ raw.name = name
+ return _TemporaryFileWrapper(file, name, delete)
+ except:
+ file.close()
+ raise
+ except:
+ if name is not None and not (_os.name == 'nt' and delete):
+ _os.unlink(name)
raise
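The `NamedTemporaryFile` rewrite routes creation through an `opener` callback so `_io.open` can be given the directory path while the opener supplies the descriptor, and the raw file object then reports the real path as its `name`. The public contract is unchanged; a sketch:

```python
import os
import tempfile

# NamedTemporaryFile still yields a wrapper whose .name is a real
# filesystem path, removed on close by default.
with tempfile.NamedTemporaryFile('w+', suffix='.txt') as f:
    f.write('scratch data')
    f.flush()
    path = f.name
    exists_during = os.path.exists(path)
print(exists_during, os.path.exists(path))  # True False
```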
if _os.name != 'posix' or _sys.platform == 'cygwin':
flags = _bin_openflags
if _O_TMPFILE_WORKS:
- try:
+ fd = None
+ def opener(*args):
+ nonlocal fd
flags2 = (flags | _os.O_TMPFILE) & ~_os.O_CREAT
fd = _os.open(dir, flags2, 0o600)
+ return fd
+ try:
+ file = _io.open(dir, mode, buffering=buffering,
+ newline=newline, encoding=encoding,
+ errors=errors, opener=opener)
+ raw = getattr(file, 'buffer', file)
+ raw = getattr(raw, 'raw', raw)
+ raw.name = fd
+ return file
except IsADirectoryError:
# Linux kernel older than 3.11 ignores the O_TMPFILE flag:
# O_TMPFILE is read as O_DIRECTORY. Trying to open a directory
# fails with NotADirectoryError, because O_TMPFILE is read as
# O_DIRECTORY.
pass
- else:
- try:
- return _io.open(fd, mode, buffering=buffering,
- newline=newline, encoding=encoding,
- errors=errors)
- except:
- _os.close(fd)
- raise
# Fallback to _mkstemp_inner().
- (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
- try:
- _os.unlink(name)
- return _io.open(fd, mode, buffering=buffering,
- newline=newline, encoding=encoding, errors=errors)
- except:
- _os.close(fd)
- raise
+ fd = None
+ def opener(*args):
+ nonlocal fd
+ fd, name = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
+ try:
+ _os.unlink(name)
+        except BaseException:
+ _os.close(fd)
+ raise
+ return fd
+ file = _io.open(dir, mode, buffering=buffering,
+ newline=newline, encoding=encoding, errors=errors,
+ opener=opener)
+ raw = getattr(file, 'buffer', file)
+ raw = getattr(raw, 'raw', raw)
+ raw.name = fd
+ return file
class SpooledTemporaryFile:
"""Temporary file wrapper, specialized to switch from BytesIO
import unittest
import unittest.mock
import queue as pyqueue
+import textwrap
import time
import io
import itertools
for (j, res) in enumerate(results):
self.assertEqual(res.get(), sqr(j))
+ def test_pool_maxtasksperchild_invalid(self):
+ for value in [0, -1, 0.5, "12"]:
+ with self.assertRaises(ValueError):
+ multiprocessing.Pool(3, maxtasksperchild=value)
+
def test_worker_finalization_via_atexit_handler_of_multiprocessing(self):
# tests cases against bpo-38744 and bpo-39360
cmd = '''if 1:
'multiprocessing.shared_memory._make_filename') as mock_make_filename:
NAME_PREFIX = shared_memory._SHM_NAME_PREFIX
- names = ['test01_fn', 'test02_fn']
+ names = [self._new_shm_name('test03_fn'), self._new_shm_name('test04_fn')]
# Prepend NAME_PREFIX which can be '/psm_' or 'wnsm_', necessary
# because some POSIX compliant systems require name to start with /
names = [NAME_PREFIX + name for name in names]
self.run_worker(self._test_namespace, o)
+class TestNamedResource(unittest.TestCase):
+ def test_global_named_resource_spawn(self):
+ #
+ # gh-90549: Check that global named resources in the main module
+ # are not leaked by a subprocess in the spawn context.
+ #
+ testfn = os_helper.TESTFN
+ self.addCleanup(os_helper.unlink, testfn)
+ with open(testfn, 'w', encoding='utf-8') as f:
+ f.write(textwrap.dedent('''\
+ import multiprocessing as mp
+
+ ctx = mp.get_context('spawn')
+
+ global_resource = ctx.Semaphore()
+
+ def submain(): pass
+
+ if __name__ == '__main__':
+ p = ctx.Process(target=submain)
+ p.start()
+ p.join()
+ '''))
+ rc, out, err = test.support.script_helper.assert_python_ok(testfn)
+ # on error, err = 'UserWarning: resource_tracker: There appear to
+ # be 1 leaked semaphore objects to clean up at shutdown'
+ self.assertEqual(err, b'')
+
+
class MiscTestCase(unittest.TestCase):
def test__all__(self):
# Just make sure names in not_exported are excluded
remote_globs['setUpModule'] = setUpModule
remote_globs['tearDownModule'] = tearDownModule
+
+
+@unittest.skipIf(not hasattr(_multiprocessing, 'SemLock'), 'SemLock not available')
+@unittest.skipIf(sys.platform != "linux", "Linux only")
+class SemLockTests(unittest.TestCase):
+
+ def test_semlock_subclass(self):
+ class SemLock(_multiprocessing.SemLock):
+ pass
+ name = f'test_semlock_subclass-{os.getpid()}'
+ s = SemLock(1, 0, 10, name, 0)
+ _multiprocessing.sem_unlink(name)
#define TEST_PREPROCESSOR_GUARDED_ELSE_METHODDEF
#endif /* !defined(TEST_PREPROCESSOR_GUARDED_ELSE_METHODDEF) */
/*[clinic end generated code: output=3804bb18d454038c input=3fc80c9989d2f2e1]*/
+
+/*[clinic input]
+test_paramname_module
+
+ module as mod: object
+[clinic start generated code]*/
+
+PyDoc_STRVAR(test_paramname_module__doc__,
+"test_paramname_module($module, /, module)\n"
+"--\n"
+"\n");
+
+#define TEST_PARAMNAME_MODULE_METHODDEF \
+ {"test_paramname_module", (PyCFunction)(void(*)(void))test_paramname_module, METH_FASTCALL|METH_KEYWORDS, test_paramname_module__doc__},
+
+static PyObject *
+test_paramname_module_impl(PyObject *module, PyObject *mod);
+
+static PyObject *
+test_paramname_module(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames)
+{
+ PyObject *return_value = NULL;
+ static const char * const _keywords[] = {"module", NULL};
+ static _PyArg_Parser _parser = {NULL, _keywords, "test_paramname_module", 0};
+ PyObject *argsbuf[1];
+ PyObject *mod;
+
+ args = _PyArg_UnpackKeywords(args, nargs, NULL, kwnames, &_parser, 1, 1, 0, argsbuf);
+ if (!args) {
+ goto exit;
+ }
+ mod = args[0];
+ return_value = test_paramname_module_impl(module, mod);
+
+exit:
+ return return_value;
+}
+
+static PyObject *
+test_paramname_module_impl(PyObject *module, PyObject *mod)
+/*[clinic end generated code: output=2a6769d34d1b2be0 input=afefe259667f13ba]*/
except (ValueError, OSError) as err:
print(f"Unable to raise RLIMIT_NOFILE from {fd_limit} to "
f"{new_fd_limit}: {err}.")
-
# 2-D, non-contiguous
check_array(arr[::2])
+ def test_evil_class_mutating_dict(self):
+ # https://github.com/python/cpython/issues/92930
+ from random import getrandbits
+
+ global Bad
+ class Bad:
+ def __eq__(self, other):
+ return ENABLED
+ def __hash__(self):
+ return 42
+ def __reduce__(self):
+ if getrandbits(6) == 0:
+ collection.clear()
+ return (Bad, ())
+
+ for proto in protocols:
+ for _ in range(20):
+ ENABLED = False
+ collection = {Bad(): Bad() for _ in range(20)}
+ for bad in collection:
+ bad.bad = bad
+ bad.collection = collection
+ ENABLED = True
+ try:
+ data = self.dumps(collection, proto)
+ self.loads(data)
+ except RuntimeError as e:
+ expected = "changed size during iteration"
+ self.assertIn(expected, str(e))
+
+ def test_evil_pickler_mutating_collection(self):
+ # https://github.com/python/cpython/issues/92930
+ if not hasattr(self, "pickler"):
+ raise self.skipTest(f"{type(self)} has no associated pickler type")
+
+ global Clearer
+ class Clearer:
+ pass
+
+ def check(collection):
+ class EvilPickler(self.pickler):
+ def persistent_id(self, obj):
+ if isinstance(obj, Clearer):
+ collection.clear()
+ return None
+ pickler = EvilPickler(io.BytesIO(), proto)
+ try:
+ pickler.dump(collection)
+ except RuntimeError as e:
+ expected = "changed size during iteration"
+ self.assertIn(expected, str(e))
+
+ for proto in protocols:
+ check([Clearer()])
+ check([Clearer(), Clearer()])
+ check({Clearer()})
+ check({Clearer(), Clearer()})
+ check({Clearer(): 1})
+ check({Clearer(): 1, Clearer(): 2})
+ check({1: Clearer(), 2: Clearer()})
+
class BigmemPickleTests:
setattr(object_to_patch, attr_name, new_value)
+@contextlib.contextmanager
+def patch_list(orig):
+ """Like unittest.mock.patch.dict, but for lists."""
+ try:
+ saved = orig[:]
+ yield
+ finally:
+ orig[:] = saved
+
+
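A quick standalone sketch of how this helper behaves (assuming the same `patch_list` definition as above):

```python
import contextlib

@contextlib.contextmanager
def patch_list(orig):
    """Like unittest.mock.patch.dict, but for lists."""
    try:
        saved = orig[:]
        yield
    finally:
        orig[:] = saved

items = [1, 2, 3]
with patch_list(items):
    items.append(4)       # mutate freely inside the block
    items[:] = ['x']
    assert items == ['x']
# On exit the original contents are restored in place.
assert items == [1, 2, 3]
```

Because the restore uses slice assignment (`orig[:] = saved`), the same list object is restored, which is what makes it safe for shared globals like `sys.meta_path`.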
def run_in_subinterp(code):
"""
Run code in a subinterpreter. Raise unittest.SkipTest if the tracemalloc
expressions[0] = f"expr = {ast.expr.__subclasses__()[0].__doc__}"
self.assertCountEqual(ast.expr.__doc__.split("\n"), expressions)
+ def test_parenthesized_with_feature_version(self):
+ ast.parse('with (CtxManager() as example): ...', feature_version=(3, 10))
+ # While advertised as a feature in Python 3.10, this was allowed starting in 3.9
+ ast.parse('with (CtxManager() as example): ...', feature_version=(3, 9))
+ with self.assertRaises(SyntaxError):
+ ast.parse('with (CtxManager() as example): ...', feature_version=(3, 8))
+ ast.parse('with CtxManager() as example: ...', feature_version=(3, 8))
+
def test_issue40614_feature_version(self):
ast.parse('f"{x=}"', feature_version=(3, 8))
with self.assertRaises(SyntaxError):
ast.parse('f"{x=}"', feature_version=(3, 7))
+ def test_assignment_expression_feature_version(self):
+ ast.parse('(x := 0)', feature_version=(3, 8))
+ with self.assertRaises(SyntaxError):
+ ast.parse('(x := 0)', feature_version=(3, 7))
+
def test_constant_as_name(self):
for constant in "True", "False", "None":
expr = ast.Expression(ast.Name(constant, ast.Load()))
# IsolatedAsyncioTestCase based tests
import asyncio
+import traceback
import unittest
+from asyncio import tasks
def tearDownModule():
asyncio.set_event_loop_policy(None)
-class FutureTests(unittest.IsolatedAsyncioTestCase):
+class FutureTests:
+
+ async def test_future_traceback(self):
+
+ async def raise_exc():
+ raise TypeError(42)
+
+ future = self.cls(raise_exc())
+
+ for _ in range(5):
+ try:
+ await future
+ except TypeError as e:
+ tb = ''.join(traceback.format_tb(e.__traceback__))
+ self.assertEqual(tb.count("await future"), 1)
+ else:
+ self.fail('TypeError was not raised')
+
+@unittest.skipUnless(hasattr(tasks, '_CTask'),
+ 'requires the C _asyncio module')
+class CFutureTests(FutureTests, unittest.IsolatedAsyncioTestCase):
+ cls = tasks._CTask
+
+class PyFutureTests(FutureTests, unittest.IsolatedAsyncioTestCase):
+ cls = tasks._PyTask
+
+class FutureReprTests(unittest.IsolatedAsyncioTestCase):
+
async def test_recursive_repr_for_pending_tasks(self):
# The call crashes if the guard for recursive call
# in base_futures:_future_repr_info is absent
self.assertTrue(asyncio.iscoroutinefunction(fn2))
self.assertFalse(asyncio.iscoroutinefunction(mock.Mock()))
+ self.assertTrue(asyncio.iscoroutinefunction(mock.AsyncMock()))
def test_yield_vs_yield_from(self):
fut = self.new_future(self.loop)
def test_sqlite3(self):
- try:
- import sqlite3
- except ImportError:
- return
+ sqlite3 = import_helper.import_module("sqlite3")
returncode, events, stderr = self.run_python("test_sqlite3")
if returncode:
self.fail(stderr)
from itertools import islice, repeat
from test.support import import_helper
from test.support import os_helper
+from test.support import patch_list
class BdbException(Exception): pass
with TracerRun(self) as tracer:
tracer.runcall(tfunc_main)
+ @patch_list(sys.meta_path)
def test_skip(self):
# Check that tracing is skipped over the import statement in
# 'tfunc_import()'.
+
+ # Remove all but the standard importers.
+ sys.meta_path[:] = (
+ item
+ for item in sys.meta_path
+ if item.__module__.startswith('_frozen_importlib')
+ )
+
code = """
def main():
lno = 3
self.assertEqual(b1, b)
self.assertEqual(b3, b'xcxcxc')
+ def test_mutating_index(self):
+ class Boom:
+ def __index__(self):
+ b.clear()
+ return 0
+
+ with self.subTest("tp_as_mapping"):
+ b = bytearray(b'Now you see me...')
+ with self.assertRaises(IndexError):
+ b[0] = Boom()
+
+ with self.subTest("tp_as_sequence"):
+ _testcapi = import_helper.import_module('_testcapi')
+ b = bytearray(b'Now you see me...')
+ with self.assertRaises(IndexError):
+ _testcapi.sequence_setitem(b, 0, Boom())
+
class AssortedBytesTest(unittest.TestCase):
#
new_october = calendar.TextCalendar().formatmonthname(2010, 10, 10)
self.assertEqual(old_october, new_october)
+ def test_locale_calendar_formatweekday(self):
+ try:
+ # formatweekday uses different day names based on the available width.
+ cal = calendar.LocaleTextCalendar(locale='en_US')
+ # For short widths, a centered, abbreviated name is used.
+ self.assertEqual(cal.formatweekday(0, 5), " Mon ")
+ # For really short widths, even the abbreviated name is truncated.
+ self.assertEqual(cal.formatweekday(0, 2), "Mo")
+ # For long widths, the full day name is used.
+ self.assertEqual(cal.formatweekday(0, 10), " Monday ")
+ except locale.Error:
+ raise unittest.SkipTest('cannot set the en_US locale')
+
def test_locale_html_calendar_custom_css_class_month_name(self):
try:
cal = calendar.LocaleHTMLCalendar(locale='')
import contextlib
+class BadStr(str):
+ def __eq__(self, other):
+ return True
+ def __hash__(self):
+ # Guaranteed different hash
+ return str.__hash__(self) ^ 3
+
+
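Why a subclass like this defeats keyword-argument lookup: dict lookup probes by hash first, so a key that compares equal but hashes differently never reaches the stored entry. A minimal sketch:

```python
class BadStr(str):
    def __eq__(self, other):
        return True
    def __hash__(self):
        # Guaranteed different hash from the plain str
        return str.__hash__(self) ^ 3

d = {'foo': 1}
# Equality says the keys match, but the hash-based probe misses the slot:
assert BadStr('foo') == 'foo'
assert BadStr('foo') not in d
```

This is the mismatch the tests below exercise in the C argument-parsing helpers.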
class FunctionCalls(unittest.TestCase):
def test_kwargs_order(self):
self.assertRaisesRegex(TypeError, msg,
print, 0, sep=1, end=2, file=3, flush=4, foo=5)
+ def test_varargs18_kw(self):
+ # _PyArg_UnpackKeywordsWithVararg()
+ msg = r"invalid keyword argument for print\(\)$"
+ with self.assertRaisesRegex(TypeError, msg):
+ print(0, 1, **{BadStr('foo'): ','})
+
+ def test_varargs19_kw(self):
+ # _PyArg_UnpackKeywords()
+ msg = r"invalid keyword argument for round\(\)$"
+ with self.assertRaisesRegex(TypeError, msg):
+ round(1.75, **{BadStr('foo'): 1})
+
def test_oldargs0_1(self):
msg = r"keys\(\) takes no arguments \(1 given\)"
self.assertRaisesRegex(TypeError, msg, {}.keys, 0)
code += " x and x\n" * self.N
self.check_stack_size(code)
+ def test_stack_3050(self):
+ M = 3050
+ code = "x," * M + "=t"
+ # This raised on 3.10.0 to 3.10.5
+ compile(code, "<foo>", "single")
+
class TestStackSizeStability(unittest.TestCase):
# Check that repeating certain snippets doesn't increase the stack size
t = ThreadPoolExecutor()
t.submit(sleep_and_print, .1, "apple")
t.shutdown(wait=False, cancel_futures=True)
- """.format(executor_type=self.executor_type.__name__))
+ """)
# Errors in atexit hooks don't change the process exit code, check
# stderr manually.
self.assertFalse(err)
with futures.ProcessPoolExecutor(1, mp_context=mp.get_context('fork')) as workers:
workers.submit(tuple)
+ def test_executor_map_current_future_cancel(self):
+ stop_event = threading.Event()
+ log = []
+
+ def log_n_wait(ident):
+ log.append(f"{ident=} started")
+ try:
+ stop_event.wait()
+ finally:
+ log.append(f"{ident=} stopped")
+
+ with self.executor_type(max_workers=1) as pool:
+ # submit work to saturate the pool
+ fut = pool.submit(log_n_wait, ident="first")
+ try:
+ with contextlib.closing(
+ pool.map(log_n_wait, ["second", "third"], timeout=0)
+ ) as gen:
+ with self.assertRaises(futures.TimeoutError):
+ next(gen)
+ finally:
+ stop_event.set()
+ fut.result()
+ # ident='second' is cancelled as a result of raising a TimeoutError
+ # ident='third' is cancelled because it remained in the collection of futures
+ self.assertListEqual(log, ["ident='first' started", "ident='first' stopped"])
+
class ProcessPoolExecutorTest(ExecutorTest):
self.assertIsNot(x, y)
self.assertIsNot(x["foo"], y["foo"])
+ def test_reduce_6tuple(self):
+ def state_setter(*args, **kwargs):
+ self.fail("shouldn't call this")
+ class C:
+ def __reduce__(self):
+ return C, (), self.__dict__, None, None, state_setter
+ x = C()
+ with self.assertRaises(TypeError):
+ copy.copy(x)
+ with self.assertRaises(TypeError):
+ copy.deepcopy(x)
+
+ def test_reduce_6tuple_none(self):
+ class C:
+ def __reduce__(self):
+ return C, (), self.__dict__, None, None, None
+ x = C()
+ with self.assertRaises(TypeError):
+ copy.copy(x)
+ with self.assertRaises(TypeError):
+ copy.deepcopy(x)
+
def test_copy_slots(self):
class C(object):
__slots__ = ["foo"]
self.assertEqual(D.__set_name__.call_count, 1)
+ def test_init_calls_set(self):
+ class D:
+ pass
+
+ D.__set__ = Mock()
+
+ @dataclass
+ class C:
+ i: D = D()
+
+ # Make sure D.__set__ is called.
+ D.__set__.reset_mock()
+ c = C(5)
+ self.assertEqual(D.__set__.call_count, 1)
+
+ def test_getting_field_calls_get(self):
+ class D:
+ pass
+
+ D.__set__ = Mock()
+ D.__get__ = Mock()
+
+ @dataclass
+ class C:
+ i: D = D()
+
+ c = C(5)
+
+ # Make sure D.__get__ is called.
+ D.__get__.reset_mock()
+ value = c.i
+ self.assertEqual(D.__get__.call_count, 1)
+
+ def test_setting_field_calls_set(self):
+ class D:
+ pass
+
+ D.__set__ = Mock()
+
+ @dataclass
+ class C:
+ i: D = D()
+
+ c = C(5)
+
+ # Make sure D.__set__ is called.
+ D.__set__.reset_mock()
+ c.i = 10
+ self.assertEqual(D.__set__.call_count, 1)
+
+ def test_setting_uninitialized_descriptor_field(self):
+ class D:
+ pass
+
+ D.__set__ = Mock()
+
+ @dataclass
+ class C:
+ i: D
+
+ # D.__set__ is not called because there's no D instance to call it on
+ D.__set__.reset_mock()
+ c = C(5)
+ self.assertEqual(D.__set__.call_count, 0)
+
+ # D.__set__ still isn't called after setting i to an instance of D
+ # because descriptors don't behave like that when stored as instance vars
+ c.i = D()
+ c.i = 5
+ self.assertEqual(D.__set__.call_count, 0)
+
+ def test_default_value(self):
+ class D:
+ def __get__(self, instance: Any, owner: object) -> int:
+ if instance is None:
+ return 100
+
+ return instance._x
+
+ def __set__(self, instance: Any, value: int) -> None:
+ instance._x = value
+
+ @dataclass
+ class C:
+ i: D = D()
+
+ c = C()
+ self.assertEqual(c.i, 100)
+
+ c = C(5)
+ self.assertEqual(c.i, 5)
+
+ def test_no_default_value(self):
+ class D:
+ def __get__(self, instance: Any, owner: object) -> int:
+ if instance is None:
+ raise AttributeError()
+
+ return instance._x
+
+ def __set__(self, instance: Any, value: int) -> None:
+ instance._x = value
+
+ @dataclass
+ class C:
+ i: D = D()
+
+ with self.assertRaisesRegex(TypeError, 'missing 1 required positional argument'):
+ c = C()
class TestStringAnnotations(unittest.TestCase):
def test_classvar(self):
# parsedate and parsedate_tz will become deprecated interfaces someday
def test_parsedate_returns_None_for_invalid_strings(self):
- self.assertIsNone(utils.parsedate(''))
- self.assertIsNone(utils.parsedate_tz(''))
- self.assertIsNone(utils.parsedate(' '))
- self.assertIsNone(utils.parsedate_tz(' '))
- self.assertIsNone(utils.parsedate('0'))
- self.assertIsNone(utils.parsedate_tz('0'))
- self.assertIsNone(utils.parsedate('A Complete Waste of Time'))
- self.assertIsNone(utils.parsedate_tz('A Complete Waste of Time'))
- self.assertIsNone(utils.parsedate_tz('Wed, 3 Apr 2002 12.34.56.78+0800'))
+ # See also test_parsedate_to_datetime_with_invalid_raises_valueerror
+ # in test_utils.
+ invalid_dates = [
+ '',
+ ' ',
+ '0',
+ 'A Complete Waste of Time',
+ 'Wed, 3 Apr 2002 12.34.56.78+0800',
+ '17 June , 2022',
+ 'Friday, -Nov-82 16:14:55 EST',
+ 'Friday, Nov--82 16:14:55 EST',
+ 'Friday, 19-Nov- 16:14:55 EST',
+ ]
+ for dtstr in invalid_dates:
+ with self.subTest(dtstr=dtstr):
+ self.assertIsNone(utils.parsedate(dtstr))
+ self.assertIsNone(utils.parsedate_tz(dtstr))
# Not a part of the spec, but this has historically worked:
self.assertIsNone(utils.parsedate(None))
self.assertIsNone(utils.parsedate_tz(None))
def test_parsedate_compact(self):
+ self.assertEqual(utils.parsedate_tz('Wed, 3 Apr 2002 14:58:26 +0800'),
+ (2002, 4, 3, 14, 58, 26, 0, 1, -1, 28800))
# The FWS after the comma is optional
- self.assertEqual(utils.parsedate('Wed,3 Apr 2002 14:58:26 +0800'),
- utils.parsedate('Wed, 3 Apr 2002 14:58:26 +0800'))
+ self.assertEqual(utils.parsedate_tz('Wed,3 Apr 2002 14:58:26 +0800'),
+ (2002, 4, 3, 14, 58, 26, 0, 1, -1, 28800))
+ # The comma is optional
+ self.assertEqual(utils.parsedate_tz('Wed 3 Apr 2002 14:58:26 +0800'),
+ (2002, 4, 3, 14, 58, 26, 0, 1, -1, 28800))
def test_parsedate_no_dayofweek(self):
eq = self.assertEqual
- eq(utils.parsedate_tz('25 Feb 2003 13:47:26 -0800'),
- (2003, 2, 25, 13, 47, 26, 0, 1, -1, -28800))
-
- def test_parsedate_compact_no_dayofweek(self):
- eq = self.assertEqual
eq(utils.parsedate_tz('5 Feb 2003 13:47:26 -0800'),
(2003, 2, 5, 13, 47, 26, 0, 1, -1, -28800))
+ eq(utils.parsedate_tz('February 5, 2003 13:47:26 -0800'),
+ (2003, 2, 5, 13, 47, 26, 0, 1, -1, -28800))
def test_parsedate_no_space_before_positive_offset(self):
self.assertEqual(utils.parsedate_tz('Wed, 3 Apr 2002 14:58:26+0800'),
self.assertEqual(utils.parsedate_tz('Wed, 3 Apr 2002 14:58:26-0800'),
(2002, 4, 3, 14, 58, 26, 0, 1, -1, -28800))
-
def test_parsedate_accepts_time_with_dots(self):
eq = self.assertEqual
eq(utils.parsedate_tz('5 Feb 2003 13.47.26 -0800'),
eq(utils.parsedate_tz('5 Feb 2003 13.47 -0800'),
(2003, 2, 5, 13, 47, 0, 0, 1, -1, -28800))
+ def test_parsedate_rfc_850(self):
+ self.assertEqual(utils.parsedate_tz('Friday, 19-Nov-82 16:14:55 EST'),
+ (1982, 11, 19, 16, 14, 55, 0, 1, -1, -18000))
+
+ def test_parsedate_no_seconds(self):
+ self.assertEqual(utils.parsedate_tz('Wed, 3 Apr 2002 14:58 +0800'),
+ (2002, 4, 3, 14, 58, 0, 0, 1, -1, 28800))
+
+ def test_parsedate_dot_time_delimiter(self):
+ self.assertEqual(utils.parsedate_tz('Wed, 3 Apr 2002 14.58.26 +0800'),
+ (2002, 4, 3, 14, 58, 26, 0, 1, -1, 28800))
+ self.assertEqual(utils.parsedate_tz('Wed, 3 Apr 2002 14.58 +0800'),
+ (2002, 4, 3, 14, 58, 0, 0, 1, -1, 28800))
+
def test_parsedate_acceptable_to_time_functions(self):
eq = self.assertEqual
timetup = utils.parsedate('5 Feb 2003 13:47:26 -0800')
self.naive_dt)
def test_parsedate_to_datetime_with_invalid_raises_valueerror(self):
- invalid_dates = ['',
- '0',
- 'A Complete Waste of Time'
- 'Tue, 06 Jun 2017 27:39:33 +0600',
- 'Tue, 06 Jun 2017 07:39:33 +2600',
- 'Tue, 06 Jun 2017 27:39:33']
+ # See also test_parsedate_returns_None_for_invalid_strings in test_email.
+ invalid_dates = [
+ '',
+ ' ',
+ '0',
+ 'A Complete Waste of Time',
+ 'Wed, 3 Apr 2002 12.34.56.78+0800',
+ 'Tue, 06 Jun 2017 27:39:33 +0600',
+ 'Tue, 06 Jun 2017 07:39:33 +2600',
+ 'Tue, 06 Jun 2017 27:39:33',
+ '17 June , 2022',
+ 'Friday, -Nov-82 16:14:55 EST',
+ 'Friday, Nov--82 16:14:55 EST',
+ 'Friday, 19-Nov- 16:14:55 EST',
+ ]
for dtstr in invalid_dates:
with self.subTest(dtstr=dtstr):
self.assertRaises(ValueError, utils.parsedate_to_datetime, dtstr)
with open(temp_file, 'rb') as f:
self.assertEqual(f.read(), b'New line.')
+ def test_inplace_encoding_errors(self):
+ temp_file = self.writeTmp(b'Initial text \x88', mode='wb')
+ with FileInput(temp_file, inplace=True,
+ encoding="ascii", errors="replace") as fobj:
+ line = fobj.readline()
+ self.assertEqual(line, 'Initial text \ufffd')
+ print("New line \x88")
+ with open(temp_file, 'rb') as f:
+ self.assertEqual(f.read().rstrip(b'\r\n'), b'New line ?')
+
def test_file_hook_backward_compatibility(self):
def old_hook(filename, mode):
return io.StringIO("I used to receive only filename and mode")
self.assertEqual(binop.lineno, 4)
self.assertEqual(binop.left.lineno, 4)
self.assertEqual(binop.right.lineno, 6)
- self.assertEqual(binop.col_offset, 4)
- self.assertEqual(binop.left.col_offset, 4)
+ self.assertEqual(binop.col_offset, 3)
+ self.assertEqual(binop.left.col_offset, 3)
self.assertEqual(binop.right.col_offset, 7)
+ expr = """
+a = f'''
+ {blech}
+ '''
+"""
+ t = ast.parse(expr)
+ self.assertEqual(type(t), ast.Module)
+ self.assertEqual(len(t.body), 1)
+ # Check f'...'
+ self.assertEqual(type(t.body[0]), ast.Assign)
+ self.assertEqual(type(t.body[0].value), ast.JoinedStr)
+ self.assertEqual(len(t.body[0].value.values), 3)
+ self.assertEqual(type(t.body[0].value.values[1]), ast.FormattedValue)
+ self.assertEqual(t.body[0].lineno, 2)
+ self.assertEqual(t.body[0].value.lineno, 2)
+ self.assertEqual(t.body[0].value.values[0].lineno, 2)
+ self.assertEqual(t.body[0].value.values[1].lineno, 2)
+ self.assertEqual(t.body[0].value.values[2].lineno, 2)
+ self.assertEqual(t.body[0].col_offset, 0)
+ self.assertEqual(t.body[0].value.col_offset, 4)
+ self.assertEqual(t.body[0].value.values[0].col_offset, 4)
+ self.assertEqual(t.body[0].value.values[1].col_offset, 4)
+ self.assertEqual(t.body[0].value.values[2].col_offset, 4)
+ # Check {blech}
+ self.assertEqual(t.body[0].value.values[1].value.lineno, 3)
+ self.assertEqual(t.body[0].value.values[1].value.end_lineno, 3)
+ self.assertEqual(t.body[0].value.values[1].value.col_offset, 11)
+ self.assertEqual(t.body[0].value.values[1].value.end_col_offset, 16)
+
def test_ast_line_numbers_with_parentheses(self):
expr = """
x = (
import math
import string
import sys
+import warnings
from test import support
from test.support import import_helper
from test.support import warnings_helper
"'\udc80' is an invalid keyword argument for this function"):
getargs_keyword_only(1, 2, **{'\uDC80': 10})
+ def test_weird_str_subclass(self):
+ class BadStr(str):
+ def __eq__(self, other):
+ return True
+ def __hash__(self):
+ # Guaranteed different hash
+ return str.__hash__(self) ^ 3
+ with self.assertRaisesRegex(TypeError,
+ "invalid keyword argument for this function"):
+ getargs_keyword_only(1, 2, **{BadStr("keyword_only"): 3})
+ with self.assertRaisesRegex(TypeError,
+ "invalid keyword argument for this function"):
+ getargs_keyword_only(1, 2, **{BadStr("monster"): 666})
+
+ def test_weird_str_subclass2(self):
+ class BadStr(str):
+ def __eq__(self, other):
+ return False
+ def __hash__(self):
+ return str.__hash__(self)
+ with self.assertRaisesRegex(TypeError,
+ "invalid keyword argument for this function"):
+ getargs_keyword_only(1, 2, **{BadStr("keyword_only"): 3})
+ with self.assertRaisesRegex(TypeError,
+ "invalid keyword argument for this function"):
+ getargs_keyword_only(1, 2, **{BadStr("monster"): 666})
+
class PositionalOnlyAndKeywords_TestCase(unittest.TestCase):
from _testcapi import getargs_positional_only_and_keywords as getargs
def test_s_hash_int(self):
# "s#" without PY_SSIZE_T_CLEAN defined.
from _testcapi import getargs_s_hash_int
- self.assertRaises(SystemError, getargs_s_hash_int, "abc")
- self.assertRaises(SystemError, getargs_s_hash_int, x=42)
- # getargs_s_hash_int() don't raise SystemError because skipitem() is not called.
+ from _testcapi import getargs_s_hash_int2
+ buf = bytearray([1, 2])
+ self.assertRaises(SystemError, getargs_s_hash_int, buf, "abc")
+ self.assertRaises(SystemError, getargs_s_hash_int, buf, x=42)
+ self.assertRaises(SystemError, getargs_s_hash_int, buf, x="abc")
+ self.assertRaises(SystemError, getargs_s_hash_int2, buf, ("abc",))
+ self.assertRaises(SystemError, getargs_s_hash_int2, buf, x=42)
+ self.assertRaises(SystemError, getargs_s_hash_int2, buf, x="abc")
+ buf.append(3) # still mutable -- not locked by a buffer export
+ # getargs_s_hash_int(buf) may not raise SystemError because skipitem()
+ # is not called, but that is an implementation detail.
+ # getargs_s_hash_int(buf)
+ # getargs_s_hash_int2(buf)
def test_z(self):
from _testcapi import getargs_z
self.assertRaises(TypeError, getargs_u, memoryview(b'memoryview'))
with self.assertWarns(DeprecationWarning):
self.assertRaises(TypeError, getargs_u, None)
+ with warnings.catch_warnings():
+ warnings.simplefilter('error', DeprecationWarning)
+ self.assertRaises(DeprecationWarning, getargs_u, 'abc\xe9')
@support.requires_legacy_unicode_capi
def test_u_hash(self):
self.assertRaises(TypeError, getargs_u_hash, memoryview(b'memoryview'))
with self.assertWarns(DeprecationWarning):
self.assertRaises(TypeError, getargs_u_hash, None)
+ with warnings.catch_warnings():
+ warnings.simplefilter('error', DeprecationWarning)
+ self.assertRaises(DeprecationWarning, getargs_u_hash, 'abc\xe9')
@support.requires_legacy_unicode_capi
def test_Z(self):
self.assertRaises(TypeError, getargs_Z, memoryview(b'memoryview'))
with self.assertWarns(DeprecationWarning):
self.assertIsNone(getargs_Z(None))
+ with warnings.catch_warnings():
+ warnings.simplefilter('error', DeprecationWarning)
+ self.assertRaises(DeprecationWarning, getargs_Z, 'abc\xe9')
@support.requires_legacy_unicode_capi
def test_Z_hash(self):
self.assertRaises(TypeError, getargs_Z_hash, memoryview(b'memoryview'))
with self.assertWarns(DeprecationWarning):
self.assertIsNone(getargs_Z_hash(None))
+ with warnings.catch_warnings():
+ warnings.simplefilter('error', DeprecationWarning)
+ self.assertRaises(DeprecationWarning, getargs_Z_hash, 'abc\xe9')
class Object_TestCase(unittest.TestCase):
pass
def setUp(self):
- BaseTestCase.setUp(self)
+ super().setUp()
self.cwd = os.getcwd()
basetempdir = tempfile.gettempdir()
os.chdir(basetempdir)
except:
pass
finally:
- BaseTestCase.tearDown(self)
+ super().tearDown()
def check_status_and_reason(self, response, status, data=None):
def close_conn():
self.check_status_and_reason(response, HTTPStatus.OK,
data=os_helper.TESTFN_UNDECODABLE)
+ def test_get_dir_redirect_location_domain_injection_bug(self):
+ """Ensure //evil.co/..%2f../../X does not put //evil.co/ in Location.
+
+ //netloc/ in a Location header is a redirect to a new host.
+ https://github.com/python/cpython/issues/87389
+
+ This checks that a path resolving to a directory on our server cannot
+ resolve into a redirect to another server.
+ """
+ os.mkdir(os.path.join(self.tempdir, 'existing_directory'))
+ url = f'/python.org/..%2f..%2f..%2f..%2f..%2f../%0a%0d/../{self.tempdir_name}/existing_directory'
+ expected_location = f'{url}/' # /python.org.../ single slash single prefix, trailing slash
+ # Canonicalizes to /tmp/tempdir_name/existing_directory which does
+ # exist and is a dir, triggering the 301 redirect logic.
+ response = self.request(url)
+ self.check_status_and_reason(response, HTTPStatus.MOVED_PERMANENTLY)
+ location = response.getheader('Location')
+ self.assertEqual(location, expected_location, msg='non-attack failed!')
+
+ # //python.org... multi-slash prefix, no trailing slash
+ attack_url = f'/{url}'
+ response = self.request(attack_url)
+ self.check_status_and_reason(response, HTTPStatus.MOVED_PERMANENTLY)
+ location = response.getheader('Location')
+ self.assertFalse(location.startswith('//'), msg=location)
+ self.assertEqual(location, expected_location,
+ msg='Expected Location header to start with a single / and '
+ 'end with a / as this is a directory redirect.')
+
+ # ///python.org... triple-slash prefix, no trailing slash
+ attack3_url = f'//{url}'
+ response = self.request(attack3_url)
+ self.check_status_and_reason(response, HTTPStatus.MOVED_PERMANENTLY)
+ self.assertEqual(response.getheader('Location'), expected_location)
+
+ # If the second word in the http request (Request-URI for the http
+ # method) is a full URI, we don't worry about it, as that'll be parsed
+ # and reassembled as a full URI within BaseHTTPRequestHandler.send_head
+ # so no errant scheme-less //netloc//evil.co/ domain mixup can happen.
+ attack_scheme_netloc_2slash_url = f'https://pypi.org/{url}'
+ expected_scheme_netloc_location = f'{attack_scheme_netloc_2slash_url}/'
+ response = self.request(attack_scheme_netloc_2slash_url)
+ self.check_status_and_reason(response, HTTPStatus.MOVED_PERMANENTLY)
+ location = response.getheader('Location')
+ # We're just ensuring that the scheme and domain make it through, if
+ # there are or aren't multiple slashes at the start of the path that
+ # follows that isn't important in this Location: header.
+ self.assertTrue(location.startswith('https://pypi.org/'), msg=location)
+
def test_get(self):
#constructs the path relative to the root directory of the HTTPServer
response = self.request(self.base_url + '/test')
gen_coroutine_function_example))))
self.assertTrue(inspect.isgenerator(gen_coro))
+ self.assertFalse(
+ inspect.iscoroutinefunction(unittest.mock.Mock()))
+ self.assertTrue(
+ inspect.iscoroutinefunction(unittest.mock.AsyncMock()))
self.assertTrue(
inspect.iscoroutinefunction(coroutine_function_example))
self.assertTrue(
self.assertTrue(inspect.iscoroutine(coro))
self.assertFalse(
+ inspect.isgeneratorfunction(unittest.mock.Mock()))
+ self.assertFalse(
+ inspect.isgeneratorfunction(unittest.mock.AsyncMock()))
+ self.assertFalse(
inspect.isgeneratorfunction(coroutine_function_example))
self.assertFalse(
inspect.isgeneratorfunction(
coroutine_function_example))))
self.assertFalse(inspect.isgenerator(coro))
+ self.assertFalse(
+ inspect.isasyncgenfunction(unittest.mock.Mock()))
+ self.assertFalse(
+ inspect.isasyncgenfunction(unittest.mock.AsyncMock()))
+ self.assertFalse(
+ inspect.isasyncgenfunction(coroutine_function_example))
self.assertTrue(
inspect.isasyncgenfunction(async_generator_function_example))
self.assertTrue(
# file_byte_string = b'Bad data goes here'
def test_getline(self):
- self.assertRaises((SyntaxError, UnicodeDecodeError),
- linecache.getline, self.file_name, 1)
+ self.assertEqual(linecache.getline(self.file_name, 1), '')
def test_getlines(self):
- self.assertRaises((SyntaxError, UnicodeDecodeError),
- linecache.getlines, self.file_name)
+ self.assertEqual(linecache.getlines(self.file_name), [])
class EmptyFile(GetLineTestsGoodData, unittest.TestCase):
class GoodUnicode(GetLineTestsGoodData, unittest.TestCase):
file_list = ['á\n', 'b\n', 'abcdef\n', 'ááááá\n']
+class BadUnicode_NoDeclaration(GetLineTestsBadData, unittest.TestCase):
+ file_byte_string = b'\n\x80abc'
-class BadUnicode(GetLineTestsBadData, unittest.TestCase):
- file_byte_string = b'\x80abc'
+class BadUnicode_WithDeclaration(GetLineTestsBadData, unittest.TestCase):
+ file_byte_string = b'# coding=utf-8\n\x80abc'
class LineCacheTests(unittest.TestCase):
self.sl_hdlr = hcls((server.server_address[0], server.port))
else:
self.sl_hdlr = hcls(server.server_address)
- self.log_output = ''
+ self.log_output = b''
self.root_logger.removeHandler(self.root_logger.handlers[0])
self.root_logger.addHandler(self.sl_hdlr)
self.handled = threading.Event()
@unittest.skipUnless(hasattr(logging.handlers, 'QueueListener'),
'logging.handlers.QueueListener required for this test')
def test_queue_listener_with_StreamHandler(self):
- # Test that traceback only appends once (bpo-34334).
+ # Test that traceback and stack-info are appended only once (bpo-34334, bpo-46755).
listener = logging.handlers.QueueListener(self.queue, self.root_hdlr)
listener.start()
try:
except ZeroDivisionError as e:
exc = e
self.que_logger.exception(self.next_message(), exc_info=exc)
+ self.que_logger.error(self.next_message(), stack_info=True)
listener.stop()
self.assertEqual(self.stream.getvalue().strip().count('Traceback'), 1)
+ self.assertEqual(self.stream.getvalue().strip().count('Stack'), 1)
@unittest.skipUnless(hasattr(logging.handlers, 'QueueListener'),
'logging.handlers.QueueListener required for this test')
with self.assertRaises(TypeError):
pickle.dumps(m, proto)
+ def test_use_released_memory(self):
+ # gh-92888: Previously it was possible to use a memoryview even after
+ # the backing buffer was freed in certain cases. This tests that those
+ # cases raise an exception.
+ size = 128
+ def release():
+ m.release()
+ nonlocal ba
+ ba = bytearray(size)
+ class MyIndex:
+ def __index__(self):
+ release()
+ return 4
+ class MyFloat:
+ def __float__(self):
+ release()
+ return 4.25
+ class MyBool:
+ def __bool__(self):
+ release()
+ return True
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size))
+ with self.assertRaises(ValueError):
+ m[MyIndex()]
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size))
+ self.assertEqual(list(m[:MyIndex()]), [255] * 4)
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size))
+ self.assertEqual(list(m[MyIndex():8]), [255] * 4)
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size)).cast('B', (64, 2))
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[MyIndex(), 0]
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size)).cast('B', (2, 64))
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[0, MyIndex()]
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size))
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[MyIndex()] = 42
+ self.assertEqual(ba[:8], b'\0'*8)
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size))
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[:MyIndex()] = b'spam'
+ self.assertEqual(ba[:8], b'\0'*8)
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size))
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[MyIndex():8] = b'spam'
+ self.assertEqual(ba[:8], b'\0'*8)
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size)).cast('B', (64, 2))
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[MyIndex(), 0] = 42
+ self.assertEqual(ba[8:16], b'\0'*8)
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size)).cast('B', (2, 64))
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[0, MyIndex()] = 42
+ self.assertEqual(ba[:8], b'\0'*8)
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size))
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[0] = MyIndex()
+ self.assertEqual(ba[:8], b'\0'*8)
+
+ for fmt in 'bhilqnBHILQN':
+ with self.subTest(fmt=fmt):
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size)).cast(fmt)
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[0] = MyIndex()
+ self.assertEqual(ba[:8], b'\0'*8)
+
+ for fmt in 'fd':
+ with self.subTest(fmt=fmt):
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size)).cast(fmt)
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[0] = MyFloat()
+ self.assertEqual(ba[:8], b'\0'*8)
+
+ ba = None
+ m = memoryview(bytearray(b'\xff'*size)).cast('?')
+ with self.assertRaisesRegex(ValueError, "operation forbidden"):
+ m[0] = MyBool()
+ self.assertEqual(ba[:8], b'\0'*8)
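The hazard exercised above can be seen in isolation: a side-effecting `__index__` releases the view while the subscript operation is still in flight, and a fixed interpreter refuses to touch the freed buffer. A minimal sketch, assuming a Python with the gh-92888 fix applied:

```python
# Sketch of the gh-92888 hazard: __index__ releases the memoryview
# mid-subscript, before the element access happens.
m = memoryview(bytearray(b"\xff" * 8))

class ReleasingIndex:
    def __index__(self):
        m.release()  # frees the backing buffer while m[...] is in progress
        return 0

try:
    m[ReleasingIndex()]
except ValueError as exc:
    # Fixed interpreters raise instead of reading freed memory.
    print("refused:", exc)
```

The same pattern underlies the `MyFloat` and `MyBool` variants: any conversion hook invoked during an indexing or assignment operation can release the buffer.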
if __name__ == "__main__":
unittest.main()
def test_path_normcase(self):
self._check_function(self.path.normcase)
+ if sys.platform == 'win32':
+ self.assertEqual(ntpath.normcase('\u03a9\u2126'), 'ωΩ')
def test_path_isabs(self):
self._check_function(self.path.isabs)
self.assertListEqual(self._trace(f, "go x"), [1, 2, 3])
self.assertListEqual(self._trace(f, "spam"), [1, 2, 3])
+ def test_parser_deeply_nested_patterns(self):
+ # Deeply nested patterns can cause exponential backtracking when parsing.
+ # See gh-93671 for more information.
+
+ levels = 100
+
+ patterns = [
+ "A" + "(" * levels + ")" * levels,
+ "{1:" * levels + "1" + "}" * levels,
+ "[" * levels + "1" + "]" * levels,
+ ]
+
+ for pattern in patterns:
+ with self.subTest(pattern):
+ code = inspect.cleandoc("""
+ match None:
+ case {}:
+ pass
+ """.format(pattern))
+ compile(code, "<string>", "exec")
+
if __name__ == "__main__":
"""
check(PersPickler)
@support.cpython_only
+ def test_custom_pickler_dispatch_table_memleak(self):
+ # See https://github.com/python/cpython/issues/89988
+
+ class Pickler(self.pickler):
+ def __init__(self, *args, **kwargs):
+ self.dispatch_table = table
+ super().__init__(*args, **kwargs)
+
+ class DispatchTable:
+ pass
+
+ table = DispatchTable()
+ pickler = Pickler(io.BytesIO())
+ self.assertIs(pickler.dispatch_table, table)
+ table_ref = weakref.ref(table)
+ self.assertIsNotNone(table_ref())
+ del pickler
+ del table
+ support.gc_collect()
+ self.assertIsNone(table_ref())
+
+
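The leak check above follows a general idiom: take a weak reference to the suspect object, drop all strong references, force a collection, and assert the referent died. A minimal sketch with a hypothetical stand-in table, assuming a Python with the gh-89988 fix:

```python
import gc
import io
import pickle
import weakref

class Table:  # hypothetical stand-in for a custom dispatch table
    pass

table = Table()
pickler = pickle.Pickler(io.BytesIO())
pickler.dispatch_table = table

ref = weakref.ref(table)
del pickler, table
gc.collect()
# With the fix, the pickler holds no hidden reference to the table.
assert ref() is None
```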
+ @support.cpython_only
def test_unpickler_reference_cycle(self):
def check(Unpickler):
for proto in range(pickle.HIGHEST_PROTOCOL + 1):
except ImportError:
_winapi = None
+no_chdir = unittest.mock.patch('os.chdir',
+ side_effect=AssertionError("shouldn't call os.chdir()"))
+
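The `no_chdir` object above relies on a `mock.patch` instance being reusable as a context manager across several `with` blocks. The same guard pattern, sketched with a hypothetical ban on `time.sleep`:

```python
from unittest import mock
import time

# A reusable patcher: entering it swaps in a side_effect that fails loudly
# if the guarded function is ever called inside the block.
no_sleep = mock.patch('time.sleep',
                      side_effect=AssertionError("shouldn't call time.sleep()"))

with no_sleep:
    try:
        time.sleep(0)
    except AssertionError as exc:
        print(exc)

time.sleep(0)  # outside the block the real function is restored
```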
def _fake_rename(*args, **kwargs):
# Pretend the destination path is on a different filesystem.
raise OSError(getattr(errno, 'EXDEV', 18), "Invalid cross-device link")
work_dir = os.path.dirname(tmpdir2)
rel_base_name = os.path.join(os.path.basename(tmpdir2), 'archive')
- with os_helper.change_cwd(work_dir):
+ with os_helper.change_cwd(work_dir), no_chdir:
base_name = os.path.abspath(rel_base_name)
tarball = make_archive(rel_base_name, 'gztar', root_dir, '.')
'./file1', './file2', './sub/file3'])
# trying an uncompressed one
- with os_helper.change_cwd(work_dir):
+ with os_helper.change_cwd(work_dir), no_chdir:
tarball = make_archive(rel_base_name, 'tar', root_dir, '.')
self.assertEqual(tarball, base_name + '.tar')
self.assertTrue(os.path.isfile(tarball))
def test_tarfile_vs_tar(self):
root_dir, base_dir = self._create_files()
base_name = os.path.join(self.mkdtemp(), 'archive')
- tarball = make_archive(base_name, 'gztar', root_dir, base_dir)
+ with no_chdir:
+ tarball = make_archive(base_name, 'gztar', root_dir, base_dir)
# check if the compressed tarball was created
self.assertEqual(tarball, base_name + '.tar.gz')
self.assertEqual(self._tarinfo(tarball), self._tarinfo(tarball2))
# trying an uncompressed one
- tarball = make_archive(base_name, 'tar', root_dir, base_dir)
+ with no_chdir:
+ tarball = make_archive(base_name, 'tar', root_dir, base_dir)
self.assertEqual(tarball, base_name + '.tar')
self.assertTrue(os.path.isfile(tarball))
# now for a dry_run
- tarball = make_archive(base_name, 'tar', root_dir, base_dir,
- dry_run=True)
+ with no_chdir:
+ tarball = make_archive(base_name, 'tar', root_dir, base_dir,
+ dry_run=True)
self.assertEqual(tarball, base_name + '.tar')
self.assertTrue(os.path.isfile(tarball))
work_dir = os.path.dirname(tmpdir2)
rel_base_name = os.path.join(os.path.basename(tmpdir2), 'archive')
- with os_helper.change_cwd(work_dir):
+ with os_helper.change_cwd(work_dir), no_chdir:
base_name = os.path.abspath(rel_base_name)
res = make_archive(rel_base_name, 'zip', root_dir)
'dist/file1', 'dist/file2', 'dist/sub/file3',
'outer'])
- with os_helper.change_cwd(work_dir):
+ with os_helper.change_cwd(work_dir), no_chdir:
base_name = os.path.abspath(rel_base_name)
res = make_archive(rel_base_name, 'zip', root_dir, base_dir)
def test_zipfile_vs_zip(self):
root_dir, base_dir = self._create_files()
base_name = os.path.join(self.mkdtemp(), 'archive')
- archive = make_archive(base_name, 'zip', root_dir, base_dir)
+ with no_chdir:
+ archive = make_archive(base_name, 'zip', root_dir, base_dir)
# check if ZIP file was created
self.assertEqual(archive, base_name + '.zip')
def test_unzip_zipfile(self):
root_dir, base_dir = self._create_files()
base_name = os.path.join(self.mkdtemp(), 'archive')
- archive = make_archive(base_name, 'zip', root_dir, base_dir)
+ with no_chdir:
+ archive = make_archive(base_name, 'zip', root_dir, base_dir)
# check if ZIP file was created
self.assertEqual(archive, base_name + '.zip')
base_name = os.path.join(self.mkdtemp(), 'archive')
group = grp.getgrgid(0)[0]
owner = pwd.getpwuid(0)[0]
- with os_helper.change_cwd(root_dir):
+ with os_helper.change_cwd(root_dir), no_chdir:
archive_name = make_archive(base_name, 'gztar', root_dir, 'dist',
owner=owner, group=group)
def test_make_archive_cwd(self):
current_dir = os.getcwd()
+ root_dir = self.mkdtemp()
def _breaks(*args, **kw):
raise RuntimeError()
+ dirs = []
+ def _chdir(path):
+ dirs.append(path)
+ orig_chdir(path)
register_archive_format('xxx', _breaks, [], 'xxx file')
try:
- try:
- make_archive('xxx', 'xxx', root_dir=self.mkdtemp())
- except Exception:
- pass
+ with support.swap_attr(os, 'chdir', _chdir) as orig_chdir:
+ try:
+ make_archive('xxx', 'xxx', root_dir=root_dir)
+ except Exception:
+ pass
self.assertEqual(os.getcwd(), current_dir)
+ self.assertEqual(dirs, [root_dir, current_dir])
finally:
unregister_archive_format('xxx')
def test_make_tarfile_in_curdir(self):
# Issue #21280
root_dir = self.mkdtemp()
- with os_helper.change_cwd(root_dir):
+ with os_helper.change_cwd(root_dir), no_chdir:
self.assertEqual(make_archive('test', 'tar'), 'test.tar')
self.assertTrue(os.path.isfile('test.tar'))
def test_make_zipfile_in_curdir(self):
# Issue #21280
root_dir = self.mkdtemp()
- with os_helper.change_cwd(root_dir):
+ with os_helper.change_cwd(root_dir), no_chdir:
self.assertEqual(make_archive('test', 'zip'), 'test.zip')
self.assertTrue(os.path.isfile('test.zip'))
scheme = 'osx_framework_user'
else:
scheme = os.name + '_user'
- self.assertEqual(site._get_path(site._getuserbase()),
+ self.assertEqual(os.path.normpath(site._get_path(site._getuserbase())),
sysconfig.get_path('purelib', scheme))
@unittest.skipUnless(site.ENABLE_USER_SITE, "requires access to PEP 370 "
"user-site (site.ENABLE_USER_SITE)")
def test_s_option(self):
# (ncoghlan) Change this to use script_helper...
- usersite = site.USER_SITE
+ usersite = os.path.normpath(site.USER_SITE)
self.assertIn(usersite, sys.path)
env = os.environ.copy()
dll_file = os.path.join(temp_dir, os.path.split(dll_src_file)[1])
shutil.copy(sys.executable, exe_file)
shutil.copy(dll_src_file, dll_file)
+ for fn in glob.glob(os.path.join(os.path.split(dll_src_file)[0], "vcruntime*.dll")):
+ shutil.copy(fn, os.path.join(temp_dir, os.path.split(fn)[1]))
if exe_pth:
_pth_file = os.path.splitext(exe_file)[0] + '._pth'
else:
s.bind(bytearray(b"\x00python\x00test\x00"))
self.assertEqual(s.getsockname(), b"\x00python\x00test\x00")
+ def testAutobind(self):
+ # Check that binding to an empty string binds to an available address
+ # in the abstract namespace as specified in unix(7) "Autobind feature".
+ abstract_address = b"^\0[0-9a-f]{5}"
+ with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s1:
+ s1.bind("")
+ self.assertRegex(s1.getsockname(), abstract_address)
+ # Each socket is bound to a different abstract address.
+ with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s2:
+ s2.bind("")
+ self.assertRegex(s2.getsockname(), abstract_address)
+ self.assertNotEqual(s1.getsockname(), s2.getsockname())
+
+
@unittest.skipUnless(hasattr(socket, 'AF_UNIX'), 'test needs socket.AF_UNIX')
class TestUnixDomain(unittest.TestCase):
self.addCleanup(os_helper.unlink, path)
self.assertEqual(self.sock.getsockname(), path)
+ @unittest.skipIf(sys.platform == 'linux', 'Linux autobinds empty addresses')
+ def testEmptyAddress(self):
+ # Test that binding an empty address fails.
+ self.assertRaises(OSError, self.sock.bind, "")
+
class BufferIOTest(SocketConnectedTest):
"""
)
for protocol in protocols:
+ if not has_tls_protocol(protocol):
+ continue
with self.subTest(protocol=protocol):
with self.assertWarns(DeprecationWarning) as cm:
ssl.SSLContext(protocol)
)
for version in versions:
+ if not has_tls_version(version):
+ continue
with self.subTest(version=version):
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
with self.assertWarns(DeprecationWarning) as cm:
def test_constructor(self):
for protocol in PROTOCOLS:
- with warnings_helper.check_warnings():
- ctx = ssl.SSLContext(protocol)
- self.assertEqual(ctx.protocol, protocol)
+ if has_tls_protocol(protocol):
+ with warnings_helper.check_warnings():
+ ctx = ssl.SSLContext(protocol)
+ self.assertEqual(ctx.protocol, protocol)
with warnings_helper.check_warnings():
ctx = ssl.SSLContext()
self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers('AESGCM')
names = set(d['name'] for d in ctx.get_ciphers())
- self.assertIn('AES256-GCM-SHA384', names)
- self.assertIn('AES128-GCM-SHA256', names)
+ expected = {
+ 'AES128-GCM-SHA256',
+ 'ECDHE-ECDSA-AES128-GCM-SHA256',
+ 'ECDHE-RSA-AES128-GCM-SHA256',
+ 'DHE-RSA-AES128-GCM-SHA256',
+ 'AES256-GCM-SHA384',
+ 'ECDHE-ECDSA-AES256-GCM-SHA384',
+ 'ECDHE-RSA-AES256-GCM-SHA384',
+ 'DHE-RSA-AES256-GCM-SHA384',
+ }
+ intersection = names.intersection(expected)
+ self.assertGreaterEqual(
+ len(intersection), 2, f"\ngot: {sorted(names)}\nexpected: {sorted(expected)}"
+ )
def test_options(self):
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.maximum_version = ssl.TLSVersion.MINIMUM_SUPPORTED
self.assertIn(
ctx.maximum_version,
- {ssl.TLSVersion.TLSv1, ssl.TLSVersion.SSLv3}
+ {ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_1, ssl.TLSVersion.SSLv3}
)
ctx.minimum_version = ssl.TLSVersion.MAXIMUM_SUPPORTED
with self.assertRaises(ValueError):
ctx.minimum_version = 42
- ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1)
-
- self.assertIn(
- ctx.minimum_version, minimum_range
- )
- self.assertEqual(
- ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED
- )
- with self.assertRaises(ValueError):
- ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED
- with self.assertRaises(ValueError):
- ctx.maximum_version = ssl.TLSVersion.TLSv1
+ if has_tls_protocol(ssl.PROTOCOL_TLSv1_1):
+ ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1)
+ self.assertIn(
+ ctx.minimum_version, minimum_range
+ )
+ self.assertEqual(
+ ctx.maximum_version, ssl.TLSVersion.MAXIMUM_SUPPORTED
+ )
+ with self.assertRaises(ValueError):
+ ctx.minimum_version = ssl.TLSVersion.MINIMUM_SUPPORTED
+ with self.assertRaises(ValueError):
+ ctx.maximum_version = ssl.TLSVersion.TLSv1
@unittest.skipUnless(
hasattr(ssl.SSLContext, 'security_level'),
self.assertEqual(ctx.verify_mode, ssl.CERT_NONE)
self._assert_context_options(ctx)
-
-
def test__create_stdlib_context(self):
ctx = ssl._create_stdlib_context()
self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLS_CLIENT)
self.assertFalse(ctx.check_hostname)
self._assert_context_options(ctx)
- with warnings_helper.check_warnings():
- ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1)
- self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1)
- self.assertEqual(ctx.verify_mode, ssl.CERT_NONE)
- self._assert_context_options(ctx)
+ if has_tls_protocol(ssl.PROTOCOL_TLSv1):
+ with warnings_helper.check_warnings():
+ ctx = ssl._create_stdlib_context(ssl.PROTOCOL_TLSv1)
+ self.assertEqual(ctx.protocol, ssl.PROTOCOL_TLSv1)
+ self.assertEqual(ctx.verify_mode, ssl.CERT_NONE)
+ self._assert_context_options(ctx)
with warnings_helper.check_warnings():
ctx = ssl._create_stdlib_context(
client_options=ssl.OP_NO_TLSv1_2)
try_protocol_combo(ssl.PROTOCOL_TLS, ssl.PROTOCOL_TLSv1_2, 'TLSv1.2')
- try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1, False)
- try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_2, False)
- try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False)
- try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False)
+ if has_tls_protocol(ssl.PROTOCOL_TLSv1):
+ try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1, False)
+ try_protocol_combo(ssl.PROTOCOL_TLSv1, ssl.PROTOCOL_TLSv1_2, False)
+ if has_tls_protocol(ssl.PROTOCOL_TLSv1_1):
+ try_protocol_combo(ssl.PROTOCOL_TLSv1_2, ssl.PROTOCOL_TLSv1_1, False)
+ try_protocol_combo(ssl.PROTOCOL_TLSv1_1, ssl.PROTOCOL_TLSv1_2, False)
def test_starttls(self):
"""Switching from clear text to encrypted and back again."""
from collections import abc
import array
+import gc
import math
import operator
import unittest
import struct
import sys
+import weakref
from test import support
+from test.support import import_helper
from test.support.script_helper import assert_python_ok
ISBIGENDIAN = sys.byteorder == "big"
self.assertIn(b"Exception ignored in:", stderr)
self.assertIn(b"C.__del__", stderr)
+ def test__struct_reference_cycle_cleaned_up(self):
+ # Regression test for python/cpython#94207.
+
+ # When we create a new struct module, trigger use of its cache,
+ # and then delete it ...
+ _struct_module = import_helper.import_fresh_module("_struct")
+ module_ref = weakref.ref(_struct_module)
+ _struct_module.calcsize("b")
+ del _struct_module
+
+ # Then the module should have been garbage collected.
+ gc.collect()
+ self.assertIsNone(
+ module_ref(), "_struct module was not garbage collected")
+
+ @support.cpython_only
+ def test__struct_types_immutable(self):
+ # See https://github.com/python/cpython/issues/94254
+
+ Struct = struct.Struct
+ unpack_iterator = type(struct.iter_unpack("b", b'x'))
+ for cls in (Struct, unpack_iterator):
+ with self.subTest(cls=cls):
+ with self.assertRaises(TypeError):
+ cls.x = 1
+
+
def test_issue35714(self):
# Embedded null characters should not be allowed in format strings.
for s in '\0', '2\0i', b'\0':
>>> class C(x for x in L):
... pass
Traceback (most recent call last):
-SyntaxError: expected ':'
+SyntaxError: invalid syntax
>>> def g(*args, **kwargs):
... print(args, sorted(kwargs.items()))
...
SyntaxError: cannot assign to function call here. Maybe you meant '==' instead of '='?
- Missing ':' before suites:
+Missing ':' before suites:
- >>> def f()
- ... pass
- Traceback (most recent call last):
- SyntaxError: expected ':'
+ >>> def f()
+ ... pass
+ Traceback (most recent call last):
+ SyntaxError: expected ':'
- >>> class A
- ... pass
- Traceback (most recent call last):
- SyntaxError: expected ':'
+ >>> class A
+ ... pass
+ Traceback (most recent call last):
+ SyntaxError: expected ':'
+
+ >>> class R&D:
+ ... pass
+ Traceback (most recent call last):
+ SyntaxError: invalid syntax
>>> if 1
... pass
Traceback (most recent call last):
SyntaxError: expected ':'
+ >>> for x in range 10:
+ ... pass
+ Traceback (most recent call last):
+ SyntaxError: invalid syntax
+
>>> while True
... pass
Traceback (most recent call last):
Traceback (most recent call last):
SyntaxError: expected ':'
+ >>> with block ad something:
+ ... pass
+ Traceback (most recent call last):
+ SyntaxError: invalid syntax
+
>>> try
... pass
Traceback (most recent call last):
Traceback (most recent call last):
SyntaxError: expected ':'
+ >>> match x x:
+ ... case list():
+ ... pass
+ Traceback (most recent call last):
+ SyntaxError: invalid syntax
+
>>> match x:
... case list()
... pass
Traceback (most recent call last):
SyntaxError: expression expected after dictionary key and ':'
- # Ensure that the error is not raise for syntax errors that happen after sets
+ # Ensure that the error is not raised for syntax errors that happen after sets
>>> {1} $
Traceback (most recent call last):
SyntaxError: invalid syntax
+ # Ensure that the error is not raised for invalid expressions
+
+ >>> {1: 2, 3: foo(,), 4: 5}
+ Traceback (most recent call last):
+ SyntaxError: invalid syntax
+
+ >>> {1: $, 2: 3}
+ Traceback (most recent call last):
+ SyntaxError: invalid syntax
+
Specialized indentation errors:
>>> while condition:
self.assertEqual(compile(s1, '<string>', 'exec'), compile(s2, '<string>', 'exec'))
except SyntaxError:
self.fail("Indented statement over multiple lines is valid")
-
- def test_continuation_bad_indentation(self):
+
+ def test_continuation_bad_indentation(self):
# Check that code that breaks indentation across multiple lines raises a syntax error
code = r"""\
import re
check(re.finditer('',''), size('2P'))
# list
- samples = [[], [1,2,3], ['1', '2', '3']]
- for sample in samples:
- check(list(sample), vsize('Pn') + len(sample)*self.P)
+ check(list([]), vsize('Pn'))
+ check(list([1]), vsize('Pn') + 2*self.P)
+ check(list([1, 2]), vsize('Pn') + 2*self.P)
+ check(list([1, 2, 3]), vsize('Pn') + 4*self.P)
# sortwrapper (list)
# XXX
# cmpwrapper (list)
import pprint
import sys
import unittest
+from test import support
class TestGetProfile(unittest.TestCase):
pprint.pprint(capture_events(callable))
+class TestEdgeCases(unittest.TestCase):
+
+ def setUp(self):
+ self.addCleanup(sys.setprofile, sys.getprofile())
+ sys.setprofile(None)
+
+ def test_reentrancy(self):
+ def foo(*args):
+ ...
+
+ def bar(*args):
+ ...
+
+ class A:
+ def __call__(self, *args):
+ pass
+
+ def __del__(self):
+ sys.setprofile(bar)
+
+ sys.setprofile(A())
+ with support.catch_unraisable_exception() as cm:
+ sys.setprofile(foo)
+ self.assertEqual(cm.unraisable.object, A.__del__)
+ self.assertIsInstance(cm.unraisable.exc_value, RuntimeError)
+
+ self.assertEqual(sys.getprofile(), foo)
+
+
+ def test_same_object(self):
+ def foo(*args):
+ ...
+
+ sys.setprofile(foo)
+ del foo
+ sys.setprofile(sys.getprofile())
+
+
if __name__ == "__main__":
unittest.main()
from test import support
import unittest
+from unittest.mock import MagicMock
import sys
import difflib
import gc
output.append(8)
+class TestEdgeCases(unittest.TestCase):
+
+ def setUp(self):
+ self.addCleanup(sys.settrace, sys.gettrace())
+ sys.settrace(None)
+
+ def test_reentrancy(self):
+ def foo(*args):
+ ...
+
+ def bar(*args):
+ ...
+
+ class A:
+ def __call__(self, *args):
+ pass
+
+ def __del__(self):
+ sys.settrace(bar)
+
+ sys.settrace(A())
+ with support.catch_unraisable_exception() as cm:
+ sys.settrace(foo)
+ self.assertEqual(cm.unraisable.object, A.__del__)
+ self.assertIsInstance(cm.unraisable.exc_value, RuntimeError)
+
+ self.assertEqual(sys.gettrace(), foo)
+
+
+ def test_same_object(self):
+ def foo(*args):
+ ...
+
+ sys.settrace(foo)
+ del foo
+ sys.settrace(sys.gettrace())
+
+
if __name__ == "__main__":
unittest.main()
base = base.replace(sys.base_prefix, sys.prefix)
if HAS_USER_BASE:
user_path = get_path(name, 'posix_user')
- expected = global_path.replace(base, user, 1)
+ expected = os.path.normpath(global_path.replace(base, user, 1))
# bpo-44860: platlib of posix_user doesn't use sys.platlibdir,
# whereas posix_prefix does.
if name == 'platlib':
def add_dir_and_getmember(self, name):
with os_helper.temp_cwd():
with tarfile.open(tmpname, 'w') as tar:
+ tar.format = tarfile.USTAR_FORMAT
try:
os.mkdir(name)
tar.add(name)
"iso8859-1", "strict")
self.assertEqual(tarinfo.type, self.longnametype)
+ def test_longname_directory(self):
+ # Test reading a longlink directory. Issue #47231.
+ longdir = ('a' * 101) + '/'
+ with os_helper.temp_cwd():
+ with tarfile.open(tmpname, 'w') as tar:
+ tar.format = self.format
+ try:
+ os.mkdir(longdir)
+ tar.add(longdir)
+ finally:
+ os.rmdir(longdir)
+ with tarfile.open(tmpname) as tar:
+ self.assertIsNotNone(tar.getmember(longdir))
+ self.assertIsNotNone(tar.getmember(longdir.removesuffix('/')))
class GNUReadTest(LongnameTest, ReadTest, unittest.TestCase):
subdir = "gnu"
longnametype = tarfile.GNUTYPE_LONGNAME
+ format = tarfile.GNU_FORMAT
# Since 3.2 tarfile is supposed to accurately restore sparse members and
# produce files with holes. This is what we actually want to test here.
subdir = "pax"
longnametype = tarfile.XHDTYPE
+ format = tarfile.PAX_FORMAT
def test_pax_global_headers(self):
tar = tarfile.open(tarname, encoding="iso8859-1")
def raise_OSError(*args, **kwargs):
raise OSError()
- with support.swap_attr(io, "open", raise_OSError):
- # test again with failing io.open()
+ with support.swap_attr(os, "open", raise_OSError):
+ # test again with failing os.open()
with self.assertRaises(FileNotFoundError):
tempfile._get_default_tempdir()
self.assertEqual(os.listdir(our_temp_directory), [])
- def bad_writer(*args, **kwargs):
- fp = orig_open(*args, **kwargs)
- fp.write = raise_OSError
- return fp
-
- with support.swap_attr(io, "open", bad_writer) as orig_open:
- # test again with failing write()
+ with support.swap_attr(os, "write", raise_OSError):
+ # test again with failing os.write()
with self.assertRaises(FileNotFoundError):
tempfile._get_default_tempdir()
self.assertEqual(os.listdir(our_temp_directory), [])
try:
with tempfile.NamedTemporaryFile(dir=dir) as f:
f.write(b'blat')
+ self.assertEqual(os.listdir(dir), [])
self.assertFalse(os.path.exists(f.name),
"NamedTemporaryFile %s exists after close" % f.name)
finally:
pass
self.assertRaises(ValueError, use_closed)
- def test_no_leak_fd(self):
- # Issue #21058: don't leak file descriptor when io.open() fails
- closed = []
- os_close = os.close
- def close(fd):
- closed.append(fd)
- os_close(fd)
-
- with mock.patch('os.close', side_effect=close):
- with mock.patch('io.open', side_effect=ValueError):
- self.assertRaises(ValueError, tempfile.NamedTemporaryFile)
- self.assertEqual(len(closed), 1)
-
def test_bad_mode(self):
dir = tempfile.mkdtemp()
self.addCleanup(os_helper.rmtree, dir)
tempfile.NamedTemporaryFile(mode=2, dir=dir)
self.assertEqual(os.listdir(dir), [])
+ def test_bad_encoding(self):
+ dir = tempfile.mkdtemp()
+ self.addCleanup(os_helper.rmtree, dir)
+ with self.assertRaises(LookupError):
+ tempfile.NamedTemporaryFile('w', encoding='bad-encoding', dir=dir)
+ self.assertEqual(os.listdir(dir), [])
+
+ def test_unexpected_error(self):
+ dir = tempfile.mkdtemp()
+ self.addCleanup(os_helper.rmtree, dir)
+ with mock.patch('tempfile._TemporaryFileWrapper') as mock_ntf, \
+ mock.patch('io.open', mock.mock_open()) as mock_open:
+ mock_ntf.side_effect = KeyboardInterrupt()
+ with self.assertRaises(KeyboardInterrupt):
+ tempfile.NamedTemporaryFile(dir=dir)
+ mock_open().close.assert_called()
+ self.assertEqual(os.listdir(dir), [])
+
# How to test the mode and bufsize parameters?
class TestSpooledTemporaryFile(BaseTestCase):
self.assertTrue(f._rolled)
filename = f.name
f.close()
- self.assertFalse(isinstance(filename, str) and os.path.exists(filename),
- "SpooledTemporaryFile %s exists after close" % filename)
+ self.assertEqual(os.listdir(dir), [])
+ if not isinstance(filename, int):
+ self.assertFalse(os.path.exists(filename),
+ "SpooledTemporaryFile %s exists after close" % filename)
finally:
os.rmdir(dir)
roundtrip("\u039B", "w+", encoding="utf-16")
roundtrip("foo\r\n", "w+", newline="")
- def test_no_leak_fd(self):
- # Issue #21058: don't leak file descriptor when io.open() fails
- closed = []
- os_close = os.close
- def close(fd):
- closed.append(fd)
- os_close(fd)
-
- with mock.patch('os.close', side_effect=close):
- with mock.patch('io.open', side_effect=ValueError):
- self.assertRaises(ValueError, tempfile.TemporaryFile)
- self.assertEqual(len(closed), 1)
+ def test_bad_mode(self):
+ dir = tempfile.mkdtemp()
+ self.addCleanup(os_helper.rmtree, dir)
+ with self.assertRaises(ValueError):
+ tempfile.TemporaryFile(mode='wr', dir=dir)
+ with self.assertRaises(TypeError):
+ tempfile.TemporaryFile(mode=2, dir=dir)
+ self.assertEqual(os.listdir(dir), [])
+
+ def test_bad_encoding(self):
+ dir = tempfile.mkdtemp()
+ self.addCleanup(os_helper.rmtree, dir)
+ with self.assertRaises(LookupError):
+ tempfile.TemporaryFile('w', encoding='bad-encoding', dir=dir)
+ self.assertEqual(os.listdir(dir), [])
+ def test_unexpected_error(self):
+ dir = tempfile.mkdtemp()
+ self.addCleanup(os_helper.rmtree, dir)
+ with mock.patch('tempfile._O_TMPFILE_WORKS', False), \
+ mock.patch('os.unlink') as mock_unlink, \
+ mock.patch('os.open') as mock_open, \
+ mock.patch('os.close') as mock_close:
+ mock_unlink.side_effect = KeyboardInterrupt()
+ with self.assertRaises(KeyboardInterrupt):
+ tempfile.TemporaryFile(dir=dir)
+ mock_close.assert_called()
+ self.assertEqual(os.listdir(dir), [])
# Helper for test_del_on_shutdown
self.assertIsInstance(object.__lt__, types.WrapperDescriptorType)
self.assertIsInstance(int.__lt__, types.WrapperDescriptorType)
+ def test_dunder_get_signature(self):
+ sig = inspect.signature(object.__init__.__get__)
+ self.assertEqual(list(sig.parameters), ["instance", "owner"])
+ # gh-93021: Second parameter is optional
+ self.assertIs(sig.parameters["owner"].default, None)
+
def test_method_wrapper_types(self):
self.assertIsInstance(object().__init__, types.MethodWrapperType)
self.assertIsInstance(object().__str__, types.MethodWrapperType)
self.assertEqual(x.bar, 'abc')
self.assertEqual(x.__dict__, {'foo': 42, 'bar': 'abc'})
samples = [Any, Union, Tuple, Callable, ClassVar,
- Union[int, str], ClassVar[List], Tuple[int, ...], Callable[[str], bytes],
+ Union[int, str], ClassVar[List], Tuple[int, ...], Tuple[()],
+ Callable[[str], bytes],
typing.DefaultDict, typing.FrozenSet[int]]
for s in samples:
for proto in range(pickle.HIGHEST_PROTOCOL + 1):
def test_copy_and_deepcopy(self):
T = TypeVar('T')
class Node(Generic[T]): ...
- things = [Union[T, int], Tuple[T, int], Callable[..., T], Callable[[int], int],
+ things = [Union[T, int], Tuple[T, int], Tuple[()],
+ Callable[..., T], Callable[[int], int],
Tuple[Any, Any], Node[T], Node[int], Node[Any], typing.Iterable[T],
typing.Iterable[Any], typing.Iterable[int], typing.Dict[int, str],
typing.Dict[T, Any], ClassVar[int], ClassVar[List[T]], Tuple['T', 'T'],
Licensed to the PSF under a contributor agreement.
"""
+import contextlib
import ensurepip
import os
import os.path
# Actually run the create command with all that unhelpful
# config in place to ensure we ignore it
- try:
+ with self.nicer_error():
self.run_with_capture(venv.create, self.env_dir,
system_site_packages=system_site_packages,
with_pip=True)
- except subprocess.CalledProcessError as exc:
- # The output this produces can be a little hard to read,
- # but at least it has all the details
- details = exc.output.decode(errors="replace")
- msg = "{}\n\n**Subprocess Output**\n{}"
- self.fail(msg.format(exc, details))
# Ensure pip is available in the virtual environment
envpy = os.path.join(os.path.realpath(self.env_dir), self.bindir, self.exe)
# Ignore DeprecationWarning since pip code is not part of Python
# Check the private uninstall command provided for the Windows
# installers works (at least in a virtual environment)
with EnvironmentVarGuard() as envvars:
- # It seems ensurepip._uninstall calls subprocesses which do not
- # inherit the interpreter settings.
- envvars["PYTHONWARNINGS"] = "ignore"
- out, err = check_output([envpy,
- '-W', 'ignore::DeprecationWarning',
- '-W', 'ignore::ImportWarning', '-I',
- '-m', 'ensurepip._uninstall'])
+ with self.nicer_error():
+ # It seems ensurepip._uninstall calls subprocesses which do not
+ # inherit the interpreter settings.
+ envvars["PYTHONWARNINGS"] = "ignore"
+ out, err = check_output([envpy,
+ '-W', 'ignore::DeprecationWarning',
+ '-W', 'ignore::ImportWarning', '-I',
+ '-m', 'ensurepip._uninstall'])
# We force everything to text, so unittest gives the detailed diff
# if we get unexpected results
err = err.decode("latin-1") # Force to text, prevent decoding errors
if not system_site_packages:
self.assert_pip_not_installed()
+ @contextlib.contextmanager
+ def nicer_error(self):
+ """
+ Capture output from a failed subprocess for easier debugging.
+
+ The output this handler produces can be a little hard to read,
+ but at least it has all the details.
+ """
+ try:
+ yield
+ except subprocess.CalledProcessError as exc:
+ out = exc.output.decode(errors="replace")
+ err = exc.stderr.decode(errors="replace")
+ self.fail(
+ f"{exc}\n\n"
+ f"**Subprocess Output**\n{out}\n\n"
+ f"**Subprocess Error**\n{err}"
+ )
+
# Issue #26610: pip/pep425tags.py requires ctypes
@unittest.skipUnless(ctypes, 'pip requires ctypes')
@requires_zlib()
self.do_test_with_pip(False)
self.do_test_with_pip(True)
+
if __name__ == "__main__":
unittest.main()
self.assertTrue(b'ZeroDivisionError' in err)
+class ModuleTestCase(unittest.TestCase):
+ def test_names(self):
+ for name in ('ReferenceType', 'ProxyType', 'CallableProxyType',
+ 'WeakMethod', 'WeakSet', 'WeakKeyDictionary', 'WeakValueDictionary'):
+ obj = getattr(weakref, name)
+ if name != 'WeakSet':
+ self.assertEqual(obj.__module__, 'weakref')
+ self.assertEqual(obj.__name__, name)
+ self.assertEqual(obj.__qualname__, name)
+
+
libreftest = """ Doctest for examples in the library reference: weakref.rst
>>> from test.support import gc_collect
tree = ET.ElementTree(ET.XML('''<site>\xf8</site>'''))
tree.write(TESTFN, encoding='unicode')
with open(TESTFN, 'rb') as f:
- data = f.read()
- expected = "<site>\xf8</site>".encode(encoding, 'xmlcharrefreplace')
- if encoding.lower() in ('utf-8', 'ascii'):
- self.assertEqual(data, expected)
- else:
- self.assertIn(b"<?xml version='1.0' encoding=", data)
- self.assertIn(expected, data)
+ self.assertEqual(f.read(), b"<site>\xc3\xb8</site>")
def test_write_to_text_file(self):
self.addCleanup(os_helper.unlink, TESTFN)
tree.write(f, encoding='unicode')
self.assertFalse(f.closed)
with open(TESTFN, 'rb') as f:
- self.assertEqual(f.read(), convlinesep(
- b'''<?xml version='1.0' encoding='ascii'?>\n'''
- b'''<site>ø</site>'''))
+ self.assertEqual(f.read(), b'''<site>ø</site>''')
with open(TESTFN, 'w', encoding='ISO-8859-1') as f:
tree.write(f, encoding='unicode')
self.assertFalse(f.closed)
with open(TESTFN, 'rb') as f:
- self.assertEqual(f.read(), convlinesep(
- b'''<?xml version='1.0' encoding='ISO-8859-1'?>\n'''
- b'''<site>\xf8</site>'''))
+ self.assertEqual(f.read(), b'''<site>\xf8</site>''')
def test_write_to_binary_file(self):
self.addCleanup(os_helper.unlink, TESTFN)
sys.path.insert(0, TEMP_ZIP)
mod = importlib.import_module(TESTMOD)
self.assertEqual(mod.test(1), 1)
- self.assertRaises(AssertionError, mod.test, False)
+ if __debug__:
+ self.assertRaises(AssertionError, mod.test, False)
+ else:
+ self.assertEqual(mod.test(0), 0)
def testImport_WithStuff(self):
# try importing from a zipfile which contains additional
else:
origin = self.__origin__
args = tuple(self.__args__)
- if len(args) == 1 and not isinstance(args[0], tuple):
+ if len(args) == 1 and (not isinstance(args[0], tuple) or
+ origin is Tuple and not args[0]):
args, = args
return operator.getitem, (origin, args)
code_mock = NonCallableMock(spec_set=CodeType)
code_mock.co_flags = inspect.CO_COROUTINE
self.__dict__['__code__'] = code_mock
+ self.__dict__['__name__'] = 'AsyncMock'
+ self.__dict__['__defaults__'] = tuple()
+ self.__dict__['__kwdefaults__'] = {}
+ self.__dict__['__annotations__'] = None
async def _execute_mock_call(self, /, *args, **kwargs):
# This is nearly just like super(), except for special handling
with _get_writer(file_or_filename, encoding) as (write, declared_encoding):
if method == "xml" and (xml_declaration or
(xml_declaration is None and
+ encoding.lower() != "unicode" and
declared_encoding.lower() not in ("utf-8", "us-ascii"))):
write("<?xml version='1.0' encoding='%s'?>\n" % (
declared_encoding,))
except AttributeError:
# file_or_filename is a file name
if encoding.lower() == "unicode":
- file = open(file_or_filename, "w",
- errors="xmlcharrefreplace")
- else:
- file = open(file_or_filename, "w", encoding=encoding,
- errors="xmlcharrefreplace")
- with file:
- yield file.write, file.encoding
+ encoding="utf-8"
+ with open(file_or_filename, "w", encoding=encoding,
+ errors="xmlcharrefreplace") as file:
+ yield file.write, encoding
else:
# file_or_filename is a file-like object
# encoding determines if it is a text or binary writer
<key>CFBundleTypeExtensions</key>
<array>
<string>py</string>
+ <string>pyi</string>
<string>pyw</string>
</array>
<key>CFBundleTypeIconFile</key>
Frederik Fix
Tom Flanagan
Matt Fleming
+Sean Fleming
Hernán Martínez Foffani
Benjamin Fogle
Artem Fokin
Reid Kleckner
Carsten Klein
Bastian Kleineidam
+Joel Klimont
Bob Kline
Matthias Klose
Jeremy Kloth
Python News
+++++++++++
+What's New in Python 3.10.6 final?
+==================================
+
+*Release date: 2022-08-01*
+
+Security
+--------
+
+- gh-issue-87389: :mod:`http.server`: Fix an open redirection vulnerability
+  in the HTTP server when a URI path starts with ``//``. Vulnerability
+ discovered, and initial fix proposed, by Hamza Avvan.
+
+- gh-issue-92888: Fix ``memoryview`` use after free when accessing the
+ backing buffer in certain cases.
+
+Core and Builtins
+-----------------
+
+- gh-issue-95355: ``_PyPegen_Parser_New`` now properly detects token memory
+ allocation errors. Patch by Honglin Zhu.
+
+- gh-issue-94938: Fix error detection in some builtin functions when a
+  keyword argument name is an instance of a str subclass with overloaded
+  ``__eq__`` and ``__hash__``. Previously it could cause SystemError or
+  other undesired behavior.
+
+- gh-issue-94949: :func:`ast.parse` will no longer parse parenthesized
+ context managers when passed ``feature_version`` less than ``(3, 9)``.
+ Patch by Shantanu Jain.
+
+- gh-issue-94947: :func:`ast.parse` will no longer parse assignment
+ expressions when passed ``feature_version`` less than ``(3, 8)``. Patch by
+ Shantanu Jain.
+
+- gh-issue-94869: Fix the column offsets for some expressions in multi-line
+ f-strings :mod:`ast` nodes. Patch by Pablo Galindo.
+
+- gh-issue-91153: Fix an issue where a :class:`bytearray` item assignment
+ could crash if it's resized by the new value's :meth:`__index__` method.
+
+- gh-issue-94329: Compile and run code with unpacking of extremely large
+ sequences (1000s of elements). Such code failed to compile. It now
+ compiles and runs correctly.
+
+- gh-issue-94360: Fixed a tokenizer crash when reading encoded files with
+  syntax errors from ``stdin`` with non-UTF-8 encoded text. Patch by Pablo
+  Galindo.
+
+- gh-issue-94192: Fix error for dictionary literals with invalid expression
+ as value.
+
+- gh-issue-93964: Strengthened compiler overflow checks to prevent crashes
+ when compiling very large source files.
+
+- gh-issue-93671: Fix some exponential backtracking cases happening with
+  deeply nested sequence patterns in match statements. Patch by Pablo Galindo.
+
+- gh-issue-93021: Fix the :attr:`__text_signature__` for :meth:`__get__`
+ methods implemented in C. Patch by Jelle Zijlstra.
+
+- gh-issue-92930: Fixed a crash in ``_pickle.c`` from mutating collections
+ during ``__reduce__`` or ``persistent_id``.
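The crash scenario behind this fix can be sketched with a hypothetical class whose ``__reduce__`` mutates the very list being pickled, dropping the only other reference to the object mid-pickle:

```python
import pickle

class Mutator:
    """Hypothetical class: __reduce__ empties the list that holds it."""
    def __init__(self, container):
        self.container = container
    def __reduce__(self):
        self.container.clear()  # removes the only other reference to self
        return (Mutator, ([],))

lst = []
lst.append(Mutator(lst))
data = pickle.dumps(lst)      # must not crash or touch freed memory
restored = pickle.loads(data)
assert len(restored) == 1
```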
+
+- gh-issue-92914: Always round the allocated size for lists up to the
+ nearest even number.
+
+- gh-issue-92858: Improve error message for some suites with syntax error
+  before ':'.
+
+Library
+-------
+
+- gh-issue-95339: Update bundled pip to 22.2.1.
+
+- gh-issue-95045: Fix GC crash when deallocating ``_lsprof.Profiler`` by
+ untracking it before calling any callbacks. Patch by Kumar Aditya.
+
+- gh-issue-95087: Fix IndexError in parsing invalid date in the :mod:`email`
+ module.
+
+- gh-issue-95199: Upgrade bundled setuptools to 63.2.0.
+
+- gh-issue-95194: Upgrade bundled pip to 22.2.
+
+- gh-issue-93899: Fix check for existence of :data:`os.EFD_CLOEXEC`,
+ :data:`os.EFD_NONBLOCK` and :data:`os.EFD_SEMAPHORE` flags on older kernel
+ versions where these flags are not present. Patch by Kumar Aditya.
+
+- gh-issue-95166: Fix :meth:`concurrent.futures.Executor.map` to cancel the
+  future it is currently waiting on when an error occurs - e.g. TimeoutError
+  or KeyboardInterrupt.
+
+- gh-issue-93157: Fix the :mod:`fileinput` module not supporting the
+  ``errors`` option when ``inplace`` is true.
+
+- gh-issue-94821: Fix binding of unix socket to empty address on Linux to
+ use an available address from the abstract namespace, instead of "\0".
+
+- gh-issue-94736: Fix crash when deallocating an instance of a subclass of
+ ``_multiprocessing.SemLock``. Patch by Kumar Aditya.
+
+- gh-issue-94637: :meth:`SSLContext.set_default_verify_paths` now releases
+ the GIL around ``SSL_CTX_set_default_verify_paths`` call. The function
+ call performs I/O and CPU intensive work.
+
+- gh-issue-94510: Re-entrant calls to :func:`sys.setprofile` and
+ :func:`sys.settrace` now raise :exc:`RuntimeError`. Patch by Pablo
+ Galindo.
+
+- gh-issue-92336: Fix bug where :meth:`linecache.getline` fails on bad files
+ with :exc:`UnicodeDecodeError` or :exc:`SyntaxError`. It now returns an
+ empty string as per the documentation.
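The documented behavior is easy to check; the path below is deliberately nonexistent:

```python
import linecache

# A missing or unreadable file yields '' rather than raising.
line = linecache.getline("/nonexistent/dir/nonexistent_file.py", 1)
assert line == ""
```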
+
+- gh-issue-89988: Fix memory leak in :class:`pickle.Pickler` when looking up
+ :attr:`dispatch_table`. Patch by Kumar Aditya.
+
+- gh-issue-94254: Fixed types of :mod:`struct` module to be immutable. Patch
+ by Kumar Aditya.
+
+- gh-issue-94245: Fix pickling and copying of ``typing.Tuple[()]``.
+
+- gh-issue-94207: Made :class:`_struct.Struct` GC-tracked in order to fix a
+ reference leak in the :mod:`_struct` module.
+
+- gh-issue-94101: Manual instantiation of :class:`ssl.SSLSession` objects is
+  no longer allowed as it led to misconfigured instances that crashed the
+  interpreter when attributes were accessed on them.
+
+- gh-issue-84753: :func:`inspect.iscoroutinefunction`,
+ :func:`inspect.isgeneratorfunction`, and
+ :func:`inspect.isasyncgenfunction` now properly return ``True`` for
+ duck-typed function-like objects like instances of
+ :class:`unittest.mock.AsyncMock`.
+
+ This makes :func:`inspect.iscoroutinefunction` consistent with the
+ behavior of :func:`asyncio.iscoroutinefunction`. Patch by Mehdi ABAAKOUK.
+
+- gh-issue-83499: Fix double closing of file descriptor in :mod:`tempfile`.
+
+- gh-issue-79512: Fixed names and ``__module__`` value of :mod:`weakref`
+ classes :class:`~weakref.ReferenceType`, :class:`~weakref.ProxyType`,
+ :class:`~weakref.CallableProxyType`. It makes them pickleable.
+
+- gh-issue-90494: :func:`copy.copy` and :func:`copy.deepcopy` now always
+  raise a TypeError if ``__reduce__()`` returns a tuple with length 6
+  instead of silently ignoring the 6th item or producing an incorrect result.
+
+- gh-issue-90549: Fix a multiprocessing bug where a global named resource
+ (such as a semaphore) could leak when a child process is spawned (as
+ opposed to forked).
+
+- gh-issue-79579: :mod:`sqlite3` now correctly detects DML queries with
+ leading comments. Patch by Erlend E. Aasland.
+
+- gh-issue-93421: Update :data:`sqlite3.Cursor.rowcount` when a DML
+ statement has run to completion. This fixes the row count for SQL queries
+ like ``UPDATE ... RETURNING``. Patch by Erlend E. Aasland.
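Since ``RETURNING`` requires SQLite 3.35+, the underlying rowcount semantics can be illustrated with a plain ``UPDATE`` on an in-memory database (illustrative sketch):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (v INTEGER)")
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

# rowcount reflects the rows changed once the DML statement runs to
# completion.
cur = con.execute("UPDATE t SET v = v + 10 WHERE v >= 2")
assert cur.rowcount == 2
con.close()
```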
+
+- gh-issue-91810: Suppress writing an XML declaration in open files in
+ ``ElementTree.write()`` with ``encoding='unicode'`` and
+ ``xml_declaration=None``.
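The suppressed-declaration behavior can be demonstrated with a ``StringIO`` standing in for an open text file (illustrative sketch):

```python
import io
import xml.etree.ElementTree as ET

tree = ET.ElementTree(ET.fromstring("<site>data</site>"))

# With encoding='unicode' and the default xml_declaration=None,
# no declaration is written.
buf = io.StringIO()
tree.write(buf, encoding="unicode")
assert buf.getvalue() == "<site>data</site>"

# An explicit xml_declaration=True still forces one.
buf = io.StringIO()
tree.write(buf, encoding="unicode", xml_declaration=True)
assert buf.getvalue().startswith("<?xml")
```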
+
+- gh-issue-93353: Fix the :func:`importlib.resources.as_file` context
+ manager to remove the temporary file if destroyed late during Python
+ finalization: keep a local reference to the :func:`os.remove` function.
+ Patch by Victor Stinner.
+
+- gh-issue-83658: Make :class:`multiprocessing.Pool` raise an exception if
+  ``maxtasksperchild`` is neither ``None`` nor a positive int.
+
+- gh-issue-74696: :func:`shutil.make_archive` no longer temporarily changes
+ the current working directory during creation of standard ``.zip`` or tar
+ archives.
+
+- gh-issue-91577: Move imports in :class:`~multiprocessing.SharedMemory`
+  methods to module level so that they can be executed late in Python
+  finalization.
+
+- bpo-47231: Fixed an issue with inconsistent trailing slashes in tarfile
+ longname directories.
+
+- bpo-46755: In :class:`QueueHandler`, clear ``stack_info`` from
+ :class:`LogRecord` to prevent stack trace from being written twice.
+
+- bpo-46053: Fix OSS audio support on NetBSD.
+
+- bpo-46197: Fix :mod:`ensurepip` environment isolation for subprocess
+ running ``pip``.
+
+- bpo-45924: Fix :mod:`asyncio` incorrect traceback when future's exception
+ is raised multiple times. Patch by Kumar Aditya.
+
+- bpo-34828: :meth:`sqlite3.Connection.iterdump` now handles databases that
+ use ``AUTOINCREMENT`` in one or more tables.
+
+Documentation
+-------------
+
+- gh-issue-94321: Document the :pep:`246` style protocol type
+ :class:`sqlite3.PrepareProtocol`.
+
+- gh-issue-86128: Document a limitation in ThreadPoolExecutor where its exit
+ handler is executed before any handlers in atexit.
+
+- gh-issue-61162: Clarify :mod:`sqlite3` behavior when using a connection as
+  a :ref:`context manager <sqlite3-connection-context-manager>`.
+
+- gh-issue-87260: Align :mod:`sqlite3` argument specs with the actual
+ implementation.
+
+- gh-issue-86986: The minimum Sphinx version required to build the
+ documentation is now 3.2.
+
+- gh-issue-88831: Augmented documentation of asyncio.create_task().
+ Clarified the need to keep strong references to tasks and added a code
+  snippet detailing how to do this.
+
+- bpo-47161: Document that :class:`pathlib.PurePath` does not collapse
+ initial double slashes because they denote UNC paths.
+
+Tests
+-----
+
+- gh-issue-95280: Fix problem with ``test_ssl`` ``test_get_ciphers`` on
+ systems that require perfect forward secrecy (PFS) ciphers.
+
+- gh-issue-95212: Make multiprocessing test case
+ ``test_shared_memory_recreate`` parallel-safe.
+
+- gh-issue-91330: Added more tests for :mod:`dataclasses` to cover behavior
+ with data descriptor-based fields.
+
+- gh-issue-94208: ``test_ssl`` is now checking for supported TLS version and
+ protocols in more tests.
+
+- gh-issue-93951: In test_bdb.StateTestCase.test_skip, avoid including
+ auxiliary importers.
+
+- gh-issue-93957: Provide nicer error reporting from subprocesses in
+ test_venv.EnsurePipTest.test_with_pip.
+
+- gh-issue-57539: Increase calendar test coverage for
+ :meth:`calendar.LocaleTextCalendar.formatweekday`.
+
+- gh-issue-92886: Fix tests in ``test_zipimport.py`` that fail when running
+  with optimizations (``-O``).
+
+- bpo-47016: Create a GitHub Actions workflow for verifying bundled pip and
+ setuptools. Patch by Illia Volochii and Adam Turner.
+
+Build
+-----
+
+- gh-issue-94841: Fix the possible performance regression of
+ :c:func:`PyObject_Free` compiled with MSVC version 1932.
+
+- bpo-45816: Python now supports building with Visual Studio 2022 (MSVC
+ v143, VS Version 17.0). Patch by Jeremiah Vivian.
+
+Windows
+-------
+
+- gh-issue-90844: Allow virtual environments to correctly launch when they
+ have spaces in the path.
+
+- gh-issue-92841: :mod:`asyncio` no longer throws ``RuntimeError: Event loop
+ is closed`` on interpreter exit after asynchronous socket activity. Patch
+ by Oleg Iarygin.
+
+- bpo-42658: Support native Windows case-insensitive path comparisons by
+ using ``LCMapStringEx`` instead of :func:`str.lower` in
+ :func:`ntpath.normcase`. Add ``LCMapStringEx`` to the :mod:`_winapi`
+ module.
+
+IDLE
+----
+
+- gh-issue-95511: Fix the Shell context menu copy-with-prompts bug of
+ copying an extra line when one selects whole lines.
+
+- gh-issue-95471: In the Edit menu, move ``Select All`` and add a new
+ separator.
+
+- gh-issue-95411: Enable using IDLE's module browser with .pyw files.
+
+- gh-issue-89610: Add .pyi as a recognized extension for IDLE on macOS.
+ This allows opening stub files by double clicking on them in the Finder.
+
+Tools/Demos
+-----------
+
+- gh-issue-94538: Fix Argument Clinic output to custom file destinations.
+ Patch by Erlend E. Aasland.
+
+- gh-issue-94430: Allow parameters named ``module`` and ``self`` with custom
+  C names in Argument Clinic. Patch by Erlend E. Aasland.
+
+C API
+-----
+
+- gh-issue-94930: Fix ``SystemError`` raised when
+ :c:func:`PyArg_ParseTupleAndKeywords` is used with ``#`` in ``(...)`` but
+ without ``PY_SSIZE_T_CLEAN`` defined.
+
+- gh-issue-94864: Fix ``PyArg_Parse*`` with deprecated format units "u" and
+ "Z". It returned 1 (success) when warnings are turned into exceptions.
+
+
What's New in Python 3.10.5 final?
==================================
a ``.dist`` object referencing the ``Distribution`` when constructed from
a ``Distribution``. - Add support for package discovery under package
normalization rules. - The object returned by ``metadata()`` now has a
- formally-defined protocol called ``PackageMetadata`` with declared support
+ formally defined protocol called ``PackageMetadata`` with declared support
for the ``.get_all()`` method. - Synced with importlib_metadata 3.3.
- bpo-41877: A check is added against misspellings of autospect, auto_spec
:func:`traceback.print_exception` functions can now take an exception
object as a positional-only argument.
-- bpo-41889: Enum: fix regression involving inheriting a multiply-inherited
+- bpo-41889: Enum: fix regression involving inheriting a multiply inherited
enum
- bpo-41861: Convert :mod:`sqlite3` to use heap types (PEP 384). Patch by
:class:`xmlrpc.client.SafeTransport`. Patch by Cédric Krier.
- bpo-34572: Fix C implementation of pickle.loads to use importlib's locking
- mechanisms, and thereby avoid using partially-loaded modules. Patch by Tim
+ mechanisms, and thereby avoid using partially loaded modules. Patch by Tim
Burgess.
Documentation
:meth:`complex.__format__` methods for non-ASCII decimal point when using
the "n" formatter.
-- bpo-35269: Fix a possible segfault involving a newly-created coroutine.
+- bpo-35269: Fix a possible segfault involving a newly created coroutine.
Patch by Zackery Spytz.
- bpo-35224: Implement :pep:`572` (assignment expressions). Patch by Emily
as soon as they are finished in multiprocessing.Pool.
- bpo-19930: The mode argument of os.makedirs() no longer affects the file
- permission bits of newly-created intermediate-level directories.
+ permission bits of newly created intermediate-level directories.
- bpo-29884: faulthandler: Restore the old sigaltstack during teardown.
Patch by Christophe Zeitouny.
PyObject *prefix##_context0; \
PyObject *prefix##_callbacks; \
PyObject *prefix##_exception; \
+ PyObject *prefix##_exception_tb; \
PyObject *prefix##_result; \
PyObject *prefix##_source_tb; \
PyObject *prefix##_cancel_msg; \
Py_CLEAR(fut->fut_callbacks);
Py_CLEAR(fut->fut_result);
Py_CLEAR(fut->fut_exception);
+ Py_CLEAR(fut->fut_exception_tb);
Py_CLEAR(fut->fut_source_tb);
Py_CLEAR(fut->fut_cancel_msg);
_PyErr_ClearExcState(&fut->fut_cancelled_exc_state);
}
assert(!fut->fut_exception);
+ assert(!fut->fut_exception_tb);
fut->fut_exception = exc_val;
+ fut->fut_exception_tb = PyException_GetTraceback(exc_val);
fut->fut_state = STATE_FINISHED;
if (future_schedule_callbacks(fut) == -1) {
fut->fut_log_tb = 0;
if (fut->fut_exception != NULL) {
+ PyObject *tb = fut->fut_exception_tb;
+ if (tb == NULL) {
+ tb = Py_None;
+ }
+ if (PyException_SetTraceback(fut->fut_exception, tb) < 0) {
+ return -1;
+ }
Py_INCREF(fut->fut_exception);
*result = fut->fut_exception;
+ Py_CLEAR(fut->fut_exception_tb);
return 1;
}
Py_CLEAR(fut->fut_callbacks);
Py_CLEAR(fut->fut_result);
Py_CLEAR(fut->fut_exception);
+ Py_CLEAR(fut->fut_exception_tb);
Py_CLEAR(fut->fut_source_tb);
Py_CLEAR(fut->fut_cancel_msg);
_PyErr_ClearExcState(&fut->fut_cancelled_exc_state);
Py_VISIT(fut->fut_callbacks);
Py_VISIT(fut->fut_result);
Py_VISIT(fut->fut_exception);
+ Py_VISIT(fut->fut_exception_tb);
Py_VISIT(fut->fut_source_tb);
Py_VISIT(fut->fut_cancel_msg);
Py_VISIT(fut->dict);
return 0;
/* check for embedded null bytes */
if (PyBytes_AsStringAndSize(*bytes, &str, NULL) < 0) {
+ Py_CLEAR(*bytes);
return 0;
}
return 1;
PyUnicode_GET_LENGTH(a),
PyUnicode_GET_LENGTH(b));
}
- /* fallback to buffer interface for bytes, bytesarray and other */
+ /* fallback to buffer interface for bytes, bytearray and other */
else {
Py_buffer view_a;
Py_buffer view_b;
: buffered_closed(self)))
#define CHECK_CLOSED(self, error_msg) \
- if (IS_CLOSED(self) & (Py_SAFE_DOWNCAST(READAHEAD(self), Py_off_t, Py_ssize_t) == 0)) { \
+ if (IS_CLOSED(self) && (Py_SAFE_DOWNCAST(READAHEAD(self), Py_off_t, Py_ssize_t) == 0)) { \
PyErr_SetString(PyExc_ValueError, error_msg); \
return NULL; \
} \
/*[clinic input]
module _io
class _io.IncrementalNewlineDecoder "nldecoder_object *" "&PyIncrementalNewlineDecoder_Type"
-class _io.TextIOWrapper "textio *" "&TextIOWrapper_TYpe"
+class _io.TextIOWrapper "textio *" "&TextIOWrapper_Type"
[clinic start generated code]*/
-/*[clinic end generated code: output=da39a3ee5e6b4b0d input=2097a4fc85670c26]*/
+/*[clinic end generated code: output=da39a3ee5e6b4b0d input=ed072384f8aada2c]*/
_Py_IDENTIFIER(close);
_Py_IDENTIFIER(_dealloc_warn);
static void
profiler_dealloc(ProfilerObject *op)
{
+ PyObject_GC_UnTrack(op);
if (op->flags & POF_ENABLED) {
PyThreadState *tstate = PyThreadState_GET();
if (_PyEval_SetProfile(tstate, NULL, NULL) < 0) {
- PyErr_WriteUnraisable((PyObject *)op);
+ _PyErr_WriteUnraisableMsg("When destroying _lsprof profiler", NULL);
}
}
newsemlockobject(PyTypeObject *type, SEM_HANDLE handle, int kind, int maxvalue,
char *name)
{
- SemLockObject *self;
-
- self = PyObject_New(SemLockObject, type);
+ SemLockObject *self = (SemLockObject *)type->tp_alloc(type, 0);
if (!self)
return NULL;
self->handle = handle;
if (self->handle != SEM_FAILED)
SEM_CLOSE(self->handle);
PyMem_Free(self->name);
- PyObject_Free(self);
+ Py_TYPE(self)->tp_free((PyObject*)self);
}
/*[clinic input]
PyUnicode_GET_LENGTH(a),
PyUnicode_GET_LENGTH(b));
}
- /* fallback to buffer interface for bytes, bytesarray and other */
+ /* fallback to buffer interface for bytes, bytearray and other */
else {
Py_buffer view_a;
Py_buffer view_b;
if (PyList_GET_SIZE(obj) == 1) {
item = PyList_GET_ITEM(obj, 0);
- if (save(self, item, 0) < 0)
+ Py_INCREF(item);
+ int err = save(self, item, 0);
+ Py_DECREF(item);
+ if (err < 0)
return -1;
if (_Pickler_Write(self, &append_op, 1) < 0)
return -1;
return -1;
while (total < PyList_GET_SIZE(obj)) {
item = PyList_GET_ITEM(obj, total);
- if (save(self, item, 0) < 0)
+ Py_INCREF(item);
+ int err = save(self, item, 0);
+ Py_DECREF(item);
+ if (err < 0)
return -1;
total++;
if (++this_batch == BATCHSIZE)
/* Special-case len(d) == 1 to save space. */
if (dict_size == 1) {
PyDict_Next(obj, &ppos, &key, &value);
- if (save(self, key, 0) < 0)
- return -1;
- if (save(self, value, 0) < 0)
- return -1;
+ Py_INCREF(key);
+ Py_INCREF(value);
+ if (save(self, key, 0) < 0) {
+ goto error;
+ }
+ if (save(self, value, 0) < 0) {
+ goto error;
+ }
+ Py_CLEAR(key);
+ Py_CLEAR(value);
if (_Pickler_Write(self, &setitem_op, 1) < 0)
return -1;
return 0;
if (_Pickler_Write(self, &mark_op, 1) < 0)
return -1;
while (PyDict_Next(obj, &ppos, &key, &value)) {
- if (save(self, key, 0) < 0)
- return -1;
- if (save(self, value, 0) < 0)
- return -1;
+ Py_INCREF(key);
+ Py_INCREF(value);
+ if (save(self, key, 0) < 0) {
+ goto error;
+ }
+ if (save(self, value, 0) < 0) {
+ goto error;
+ }
+ Py_CLEAR(key);
+ Py_CLEAR(value);
if (++i == BATCHSIZE)
break;
}
} while (i == BATCHSIZE);
return 0;
+error:
+ Py_XDECREF(key);
+ Py_XDECREF(value);
+ return -1;
}
static int
if (_Pickler_Write(self, &mark_op, 1) < 0)
return -1;
while (_PySet_NextEntry(obj, &ppos, &item, &hash)) {
- if (save(self, item, 0) < 0)
+ Py_INCREF(item);
+ int err = save(self, item, 0);
+ Py_CLEAR(item);
+ if (err < 0)
return -1;
if (++i == BATCHSIZE)
break;
return -1;
}
+
+ if (self->dispatch_table != NULL) {
+ return 0;
+ }
if (_PyObject_LookupAttrId((PyObject *)self,
- &PyId_dispatch_table, &self->dispatch_table) < 0) {
+ &PyId_dispatch_table, &self->dispatch_table) < 0) {
return -1;
}
"close($self, /)\n"
"--\n"
"\n"
-"Closes the connection.");
+"Close the database connection.\n"
+"\n"
+"Any pending transaction is not committed implicitly.");
#define PYSQLITE_CONNECTION_CLOSE_METHODDEF \
{"close", (PyCFunction)pysqlite_connection_close, METH_NOARGS, pysqlite_connection_close__doc__},
"commit($self, /)\n"
"--\n"
"\n"
-"Commit the current transaction.");
+"Commit any pending transaction to the database.\n"
+"\n"
+"If there is no open transaction, this method is a no-op.");
#define PYSQLITE_CONNECTION_COMMIT_METHODDEF \
{"commit", (PyCFunction)pysqlite_connection_commit, METH_NOARGS, pysqlite_connection_commit__doc__},
"rollback($self, /)\n"
"--\n"
"\n"
-"Roll back the current transaction.");
+"Roll back to the start of any pending transaction.\n"
+"\n"
+"If there is no open transaction, this method is a no-op.");
#define PYSQLITE_CONNECTION_ROLLBACK_METHODDEF \
{"rollback", (PyCFunction)pysqlite_connection_rollback, METH_NOARGS, pysqlite_connection_rollback__doc__},
#ifndef PYSQLITE_CONNECTION_LOAD_EXTENSION_METHODDEF
#define PYSQLITE_CONNECTION_LOAD_EXTENSION_METHODDEF
#endif /* !defined(PYSQLITE_CONNECTION_LOAD_EXTENSION_METHODDEF) */
-/*[clinic end generated code: output=2f3f3406ba6b4d2e input=a9049054013a1b77]*/
+/*[clinic end generated code: output=5f75df72ee4abdca input=a9049054013a1b77]*/
}
PyDoc_STRVAR(pysqlite_register_adapter__doc__,
-"register_adapter($module, type, caster, /)\n"
+"register_adapter($module, type, adapter, /)\n"
"--\n"
"\n"
-"Registers an adapter with sqlite3\'s adapter registry.");
+"Register a function to adapt Python objects to SQLite values.");
#define PYSQLITE_REGISTER_ADAPTER_METHODDEF \
{"register_adapter", (PyCFunction)(void(*)(void))pysqlite_register_adapter, METH_FASTCALL, pysqlite_register_adapter__doc__},
}
PyDoc_STRVAR(pysqlite_register_converter__doc__,
-"register_converter($module, name, converter, /)\n"
+"register_converter($module, typename, converter, /)\n"
"--\n"
"\n"
-"Registers a converter with sqlite3.");
+"Register a function to convert SQLite values to Python objects.");
#define PYSQLITE_REGISTER_CONVERTER_METHODDEF \
{"register_converter", (PyCFunction)(void(*)(void))pysqlite_register_converter, METH_FASTCALL, pysqlite_register_converter__doc__},
exit:
return return_value;
}
-/*[clinic end generated code: output=6939849a4371122d input=a9049054013a1b77]*/
+/*[clinic end generated code: output=ad3685282fedde73 input=a9049054013a1b77]*/
/*[clinic input]
_sqlite3.Connection.close as pysqlite_connection_close
-Closes the connection.
+Close the database connection.
+
+Any pending transaction is not committed implicitly.
[clinic start generated code]*/
static PyObject *
pysqlite_connection_close_impl(pysqlite_Connection *self)
-/*[clinic end generated code: output=a546a0da212c9b97 input=3d58064bbffaa3d3]*/
+/*[clinic end generated code: output=a546a0da212c9b97 input=b3ed5b74f6fefc06]*/
{
int rc;
/*[clinic input]
_sqlite3.Connection.commit as pysqlite_connection_commit
-Commit the current transaction.
+Commit any pending transaction to the database.
+
+If there is no open transaction, this method is a no-op.
[clinic start generated code]*/
static PyObject *
pysqlite_connection_commit_impl(pysqlite_Connection *self)
-/*[clinic end generated code: output=3da45579e89407f2 input=39c12c04dda276a8]*/
+/*[clinic end generated code: output=3da45579e89407f2 input=c8793c97c3446065]*/
{
int rc;
sqlite3_stmt* statement;
/*[clinic input]
_sqlite3.Connection.rollback as pysqlite_connection_rollback
-Roll back the current transaction.
+Roll back to the start of any pending transaction.
+
+If there is no open transaction, this method is a no-op.
[clinic start generated code]*/
static PyObject *
pysqlite_connection_rollback_impl(pysqlite_Connection *self)
-/*[clinic end generated code: output=b66fa0d43e7ef305 input=12d4e8d068942830]*/
+/*[clinic end generated code: output=b66fa0d43e7ef305 input=7f60a2f1076f16b3]*/
{
int rc;
sqlite3_stmt* statement;
pysqlite_statement_reset(self->statement);
}
- /* reset description and rowcount */
+ /* reset description */
Py_INCREF(Py_None);
Py_SETREF(self->description, Py_None);
- self->rowcount = 0L;
func_args = PyTuple_New(1);
if (!func_args) {
pysqlite_statement_reset(self->statement);
pysqlite_statement_mark_dirty(self->statement);
+ self->rowcount = self->statement->is_dml ? 0L : -1L;
/* We start a transaction implicitly before a DML statement.
SELECT is the only exception. See #9924. */
}
}
- if (self->statement->is_dml) {
- self->rowcount += (long)sqlite3_changes(self->connection->db);
- } else {
- self->rowcount= -1L;
- }
-
if (!multiple) {
Py_BEGIN_ALLOW_THREADS
lastrowid = sqlite3_last_insert_rowid(self->connection->db);
if (self->next_row == NULL)
goto error;
} else if (rc == SQLITE_DONE && !multiple) {
+ if (self->statement->is_dml) {
+ self->rowcount = (long)sqlite3_changes(self->connection->db);
+ }
pysqlite_statement_reset(self->statement);
Py_CLEAR(self->statement);
}
if (multiple) {
+ if (self->statement->is_dml && rc == SQLITE_DONE) {
+ self->rowcount += (long)sqlite3_changes(self->connection->db);
+ }
pysqlite_statement_reset(self->statement);
}
Py_XDECREF(parameters);
if (PyErr_Occurred()) {
goto error;
}
- if (rc != SQLITE_DONE && rc != SQLITE_ROW) {
+ if (rc == SQLITE_DONE) {
+ if (self->statement->is_dml) {
+ self->rowcount = (long)sqlite3_changes(self->connection->db);
+ }
+ }
+ else if (rc != SQLITE_ROW) {
_pysqlite_seterror(self->connection->db, NULL);
goto error;
}
_sqlite3.register_adapter as pysqlite_register_adapter
type: object(type='PyTypeObject *')
- caster: object
+ adapter as caster: object
/
-Registers an adapter with sqlite3's adapter registry.
+Register a function to adapt Python objects to SQLite values.
[clinic start generated code]*/
static PyObject *
pysqlite_register_adapter_impl(PyObject *module, PyTypeObject *type,
PyObject *caster)
-/*[clinic end generated code: output=a287e8db18e8af23 input=b4bd87afcadc535d]*/
+/*[clinic end generated code: output=a287e8db18e8af23 input=29a5e0f213030242]*/
{
int rc;
/*[clinic input]
_sqlite3.register_converter as pysqlite_register_converter
- name as orig_name: unicode
+ typename as orig_name: unicode
converter as callable: object
/
-Registers a converter with sqlite3.
+Register a function to convert SQLite values to Python objects.
[clinic start generated code]*/
static PyObject *
pysqlite_register_converter_impl(PyObject *module, PyObject *orig_name,
PyObject *callable)
-/*[clinic end generated code: output=a2f2bfeed7230062 input=90f645419425d6c4]*/
+/*[clinic end generated code: output=a2f2bfeed7230062 input=159a444971b40378]*/
{
PyObject* name = NULL;
PyObject* retval = NULL;
Py_DECREF(tp);
}
+PyDoc_STRVAR(doc, "PEP 246 style object adaptation protocol type.");
+
static PyType_Slot type_slots[] = {
{Py_tp_dealloc, pysqlite_prepare_protocol_dealloc},
{Py_tp_init, pysqlite_prepare_protocol_init},
{Py_tp_traverse, pysqlite_prepare_protocol_traverse},
+ {Py_tp_doc, (void *)doc},
{0, NULL},
};
#include "util.h"
/* prototypes */
-static int pysqlite_check_remaining_sql(const char* tail);
-
-typedef enum {
- LINECOMMENT_1,
- IN_LINECOMMENT,
- COMMENTSTART_1,
- IN_COMMENT,
- COMMENTEND_1,
- NORMAL
-} parse_remaining_sql_state;
+static const char *lstrip_sql(const char *sql);
typedef enum {
TYPE_LONG,
int rc;
const char* sql_cstr;
Py_ssize_t sql_cstr_len;
- const char* p;
assert(PyUnicode_Check(sql));
/* Determine if the statement is a DML statement.
SELECT is the only exception. See #9924. */
- for (p = sql_cstr; *p != 0; p++) {
- switch (*p) {
- case ' ':
- case '\r':
- case '\n':
- case '\t':
- continue;
- }
-
+ const char *p = lstrip_sql(sql_cstr);
+ if (p != NULL) {
self->is_dml = (PyOS_strnicmp(p, "insert", 6) == 0)
|| (PyOS_strnicmp(p, "update", 6) == 0)
|| (PyOS_strnicmp(p, "delete", 6) == 0)
|| (PyOS_strnicmp(p, "replace", 7) == 0);
- break;
}
Py_BEGIN_ALLOW_THREADS
goto error;
}
- if (rc == SQLITE_OK && pysqlite_check_remaining_sql(tail)) {
+ if (rc == SQLITE_OK && lstrip_sql(tail)) {
(void)sqlite3_finalize(self->st);
self->st = NULL;
PyErr_SetString(pysqlite_Warning,
}
/*
- * Checks if there is anything left in an SQL string after SQLite compiled it.
- * This is used to check if somebody tried to execute more than one SQL command
- * with one execute()/executemany() command, which the DB-API and we don't
- * allow.
+ * Strip leading whitespace and comments from incoming SQL (null terminated C
+ * string) and return a pointer to the first non-whitespace, non-comment
+ * character.
+ *
+ * This is used to check if somebody tries to execute more than one SQL query
+ * with one execute()/executemany() command, which the DB-API doesn't allow.
*
- * Returns 1 if there is more left than should be. 0 if ok.
+ * It is also used to harden DML query detection.
*/
-static int pysqlite_check_remaining_sql(const char* tail)
+static inline const char *
+lstrip_sql(const char *sql)
{
- const char* pos = tail;
-
- parse_remaining_sql_state state = NORMAL;
-
- for (;;) {
+ // This loop is borrowed from the SQLite source code.
+ for (const char *pos = sql; *pos; pos++) {
switch (*pos) {
- case 0:
- return 0;
- case '-':
- if (state == NORMAL) {
- state = LINECOMMENT_1;
- } else if (state == LINECOMMENT_1) {
- state = IN_LINECOMMENT;
- }
- break;
case ' ':
case '\t':
- break;
+ case '\f':
case '\n':
- case 13:
- if (state == IN_LINECOMMENT) {
- state = NORMAL;
- }
+ case '\r':
+ // Skip whitespace.
break;
- case '/':
- if (state == NORMAL) {
- state = COMMENTSTART_1;
- } else if (state == COMMENTEND_1) {
- state = NORMAL;
- } else if (state == COMMENTSTART_1) {
- return 1;
+ case '-':
+ // Skip line comments.
+ if (pos[1] == '-') {
+ pos += 2;
+ while (pos[0] && pos[0] != '\n') {
+ pos++;
+ }
+ if (pos[0] == '\0') {
+ return NULL;
+ }
+ continue;
}
- break;
- case '*':
- if (state == NORMAL) {
- return 1;
- } else if (state == LINECOMMENT_1) {
- return 1;
- } else if (state == COMMENTSTART_1) {
- state = IN_COMMENT;
- } else if (state == IN_COMMENT) {
- state = COMMENTEND_1;
+ return pos;
+ case '/':
+ // Skip C style comments.
+ if (pos[1] == '*') {
+ pos += 2;
+ while (pos[0] && (pos[0] != '*' || pos[1] != '/')) {
+ pos++;
+ }
+ if (pos[0] == '\0') {
+ return NULL;
+ }
+ pos++;
+ continue;
}
- break;
+ return pos;
default:
- if (state == COMMENTEND_1) {
- state = IN_COMMENT;
- } else if (state == IN_LINECOMMENT) {
- } else if (state == IN_COMMENT) {
- } else {
- return 1;
- }
+ return pos;
}
-
- pos++;
}
- return 0;
+ return NULL;
}
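At the Python level, the behavior this helper enforces can be exercised directly: trailing whitespace and comments after a single statement are accepted (they are stripped, so nothing "remains" after the compiled query), while a second statement is rejected. A sketch; the exact exception type raised for multiple statements has varied across CPython versions, so both candidates are caught here.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# A trailing line comment or C-style comment after one statement is fine.
cur.execute("SELECT 1 -- a line comment")
print(cur.fetchone())  # (1,)

cur.execute("SELECT 2 /* a C-style comment */   ")
print(cur.fetchone())  # (2,)

# Two statements in one execute() call are rejected.
try:
    cur.execute("SELECT 1; SELECT 2")
except (sqlite3.Warning, sqlite3.ProgrammingError) as exc:
    print("rejected:", exc)
```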
static PyMemberDef stmt_members[] = {
static PyObject *
get_minimum_version(PySSLContext *self, void *c)
{
- int v = SSL_CTX_ctrl(self->ctx, SSL_CTRL_GET_MIN_PROTO_VERSION, 0, NULL);
+ int v = SSL_CTX_get_min_proto_version(self->ctx);
if (v == 0) {
v = PY_PROTO_MINIMUM_SUPPORTED;
}
static PyObject *
get_maximum_version(PySSLContext *self, void *c)
{
- int v = SSL_CTX_ctrl(self->ctx, SSL_CTRL_GET_MAX_PROTO_VERSION, 0, NULL);
+ int v = SSL_CTX_get_max_proto_version(self->ctx);
if (v == 0) {
v = PY_PROTO_MAXIMUM_SUPPORTED;
}
_ssl__SSLContext_set_default_verify_paths_impl(PySSLContext *self)
/*[clinic end generated code: output=0bee74e6e09deaaa input=35f3408021463d74]*/
{
- if (!SSL_CTX_set_default_verify_paths(self->ctx)) {
+ int rc;
+ Py_BEGIN_ALLOW_THREADS
+ rc = SSL_CTX_set_default_verify_paths(self->ctx);
+ Py_END_ALLOW_THREADS
+ if (!rc) {
_setSSLError(get_state_ctx(self), NULL, 0, __FILE__, __LINE__);
return NULL;
}
.name = "_ssl.SSLSession",
.basicsize = sizeof(PySSLSession),
.flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
- Py_TPFLAGS_IMMUTABLETYPE),
+ Py_TPFLAGS_IMMUTABLETYPE |
+ Py_TPFLAGS_DISALLOW_INSTANTIATION),
.slots = PySSLSession_slots,
};
return ret;
}
+static int
+s_clear(PyStructObject *s)
+{
+ Py_CLEAR(s->s_format);
+ return 0;
+}
+
+static int
+s_traverse(PyStructObject *s, visitproc visit, void *arg)
+{
+ Py_VISIT(Py_TYPE(s));
+ Py_VISIT(s->s_format);
+ return 0;
+}
+
static void
s_dealloc(PyStructObject *s)
{
PyTypeObject *tp = Py_TYPE(s);
+ PyObject_GC_UnTrack(s);
if (s->weakreflist != NULL)
PyObject_ClearWeakRefs((PyObject *)s);
if (s->s_codes != NULL) {
"_struct.unpack_iterator",
sizeof(unpackiterobject),
0,
- Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,
+ (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
+ Py_TPFLAGS_IMMUTABLETYPE),
unpackiter_type_slots
};
{Py_tp_getattro, PyObject_GenericGetAttr},
{Py_tp_setattro, PyObject_GenericSetAttr},
{Py_tp_doc, (void*)s__doc__},
+ {Py_tp_traverse, s_traverse},
+ {Py_tp_clear, s_clear},
{Py_tp_methods, s_methods},
{Py_tp_members, s_members},
{Py_tp_getset, s_getsetlist},
{Py_tp_init, Struct___init__},
{Py_tp_alloc, PyType_GenericAlloc},
{Py_tp_new, s_new},
- {Py_tp_free, PyObject_Del},
+ {Py_tp_free, PyObject_GC_Del},
{0, 0},
};
"_struct.Struct",
sizeof(PyStructObject),
0,
- Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
+ (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC |
+ Py_TPFLAGS_BASETYPE | Py_TPFLAGS_IMMUTABLETYPE),
PyStructType_slots
};
}
+static PyObject *
+sequence_setitem(PyObject *self, PyObject *args)
+{
+ Py_ssize_t i;
+ PyObject *seq, *val;
+ if (!PyArg_ParseTuple(args, "OnO", &seq, &i, &val)) {
+ return NULL;
+ }
+ if (PySequence_SetItem(seq, i, val)) {
+ return NULL;
+ }
+ Py_RETURN_NONE;
+}
+
+
/* Functions for testing C calling conventions (METH_*) are named meth_*,
* e.g. "meth_varargs" for METH_VARARGS.
*
static PyObject *test_buildvalue_issue38913(PyObject *, PyObject *);
static PyObject *getargs_s_hash_int(PyObject *, PyObject *, PyObject*);
+static PyObject *getargs_s_hash_int2(PyObject *, PyObject *, PyObject*);
static PyMethodDef TestMethods[] = {
{"raise_exception", raise_exception, METH_VARARGS},
{"getargs_s_hash", getargs_s_hash, METH_VARARGS},
{"getargs_s_hash_int", (PyCFunction)(void(*)(void))getargs_s_hash_int,
METH_VARARGS|METH_KEYWORDS},
+ {"getargs_s_hash_int2", (PyCFunction)(void(*)(void))getargs_s_hash_int2,
+ METH_VARARGS|METH_KEYWORDS},
{"getargs_z", getargs_z, METH_VARARGS},
{"getargs_z_star", getargs_z_star, METH_VARARGS},
{"getargs_z_hash", getargs_z_hash, METH_VARARGS},
#endif
{"write_unraisable_exc", test_write_unraisable_exc, METH_VARARGS},
{"sequence_getitem", sequence_getitem, METH_VARARGS},
+ {"sequence_setitem", sequence_setitem, METH_VARARGS},
{"meth_varargs", meth_varargs, METH_VARARGS},
{"meth_varargs_keywords", (PyCFunction)(void(*)(void))meth_varargs_keywords, METH_VARARGS|METH_KEYWORDS},
{"meth_o", meth_o, METH_O},
static PyObject *
getargs_s_hash_int(PyObject *self, PyObject *args, PyObject *kwargs)
{
- static char *keywords[] = {"", "x", NULL};
+ static char *keywords[] = {"", "", "x", NULL};
+ Py_buffer buf = {NULL};
+ const char *s;
+ int len;
+ int i = 0;
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "w*|s#i", keywords, &buf, &s, &len, &i))
+ return NULL;
+ PyBuffer_Release(&buf);
+ Py_RETURN_NONE;
+}
+
+static PyObject *
+getargs_s_hash_int2(PyObject *self, PyObject *args, PyObject *kwargs)
+{
+ static char *keywords[] = {"", "", "x", NULL};
+ Py_buffer buf = {NULL};
const char *s;
int len;
int i = 0;
- if (!PyArg_ParseTupleAndKeywords(args, kwargs, "|s#i", keywords, &s, &len, &i))
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "w*|(s#)i", keywords, &buf, &s, &len, &i))
return NULL;
+ PyBuffer_Release(&buf);
Py_RETURN_NONE;
}
}
/*[clinic input]
+_winapi.LCMapStringEx
+
+ locale: unicode
+ flags: DWORD
+ src: unicode
+
+[clinic start generated code]*/
+
+static PyObject *
+_winapi_LCMapStringEx_impl(PyObject *module, PyObject *locale, DWORD flags,
+ PyObject *src)
+/*[clinic end generated code: output=8ea4c9d85a4a1f23 input=2fa6ebc92591731b]*/
+{
+ if (flags & (LCMAP_SORTHANDLE | LCMAP_HASH | LCMAP_BYTEREV |
+ LCMAP_SORTKEY)) {
+ return PyErr_Format(PyExc_ValueError, "unsupported flags");
+ }
+
+ wchar_t *locale_ = PyUnicode_AsWideCharString(locale, NULL);
+ if (!locale_) {
+ return NULL;
+ }
+ Py_ssize_t srcLenAsSsize;
+ int srcLen;
+ wchar_t *src_ = PyUnicode_AsWideCharString(src, &srcLenAsSsize);
+ if (!src_) {
+ PyMem_Free(locale_);
+ return NULL;
+ }
+ srcLen = (int)srcLenAsSsize;
+ if (srcLen != srcLenAsSsize) {
+ srcLen = -1;
+ }
+
+ int dest_size = LCMapStringEx(locale_, flags, src_, srcLen, NULL, 0,
+ NULL, NULL, 0);
+ if (dest_size == 0) {
+ PyMem_Free(locale_);
+ PyMem_Free(src_);
+ return PyErr_SetFromWindowsErr(0);
+ }
+
+ wchar_t* dest = PyMem_NEW(wchar_t, dest_size);
+ if (dest == NULL) {
+ PyMem_Free(locale_);
+ PyMem_Free(src_);
+ return PyErr_NoMemory();
+ }
+
+ int nmapped = LCMapStringEx(locale_, flags, src_, srcLen, dest, dest_size,
+ NULL, NULL, 0);
+ if (nmapped == 0) {
+ DWORD error = GetLastError();
+ PyMem_Free(locale_);
+ PyMem_Free(src_);
+ PyMem_DEL(dest);
+ return PyErr_SetFromWindowsErr(error);
+ }
+
+ PyObject *ret = PyUnicode_FromWideChar(dest, dest_size);
+ PyMem_Free(locale_);
+ PyMem_Free(src_);
+ PyMem_DEL(dest);
+
+ return ret;
+}
+
+/*[clinic input]
_winapi.ReadFile
handle: HANDLE
_WINAPI_OPENFILEMAPPING_METHODDEF
_WINAPI_OPENPROCESS_METHODDEF
_WINAPI_PEEKNAMEDPIPE_METHODDEF
+ _WINAPI_LCMAPSTRINGEX_METHODDEF
_WINAPI_READFILE_METHODDEF
_WINAPI_SETNAMEDPIPEHANDLESTATE_METHODDEF
_WINAPI_TERMINATEPROCESS_METHODDEF
WINAPI_CONSTANT(F_DWORD, FILE_TYPE_PIPE);
WINAPI_CONSTANT(F_DWORD, FILE_TYPE_REMOTE);
+ WINAPI_CONSTANT("u", LOCALE_NAME_INVARIANT);
+ WINAPI_CONSTANT(F_DWORD, LOCALE_NAME_MAX_LENGTH);
+ WINAPI_CONSTANT("u", LOCALE_NAME_SYSTEM_DEFAULT);
+ WINAPI_CONSTANT("u", LOCALE_NAME_USER_DEFAULT);
+
+ WINAPI_CONSTANT(F_DWORD, LCMAP_FULLWIDTH);
+ WINAPI_CONSTANT(F_DWORD, LCMAP_HALFWIDTH);
+ WINAPI_CONSTANT(F_DWORD, LCMAP_HIRAGANA);
+ WINAPI_CONSTANT(F_DWORD, LCMAP_KATAKANA);
+ WINAPI_CONSTANT(F_DWORD, LCMAP_LINGUISTIC_CASING);
+ WINAPI_CONSTANT(F_DWORD, LCMAP_LOWERCASE);
+ WINAPI_CONSTANT(F_DWORD, LCMAP_SIMPLIFIED_CHINESE);
+ WINAPI_CONSTANT(F_DWORD, LCMAP_TITLECASE);
+ WINAPI_CONSTANT(F_DWORD, LCMAP_TRADITIONAL_CHINESE);
+ WINAPI_CONSTANT(F_DWORD, LCMAP_UPPERCASE);
+
WINAPI_CONSTANT("i", NULL);
return 0;
return return_value;
}
+PyDoc_STRVAR(_winapi_LCMapStringEx__doc__,
+"LCMapStringEx($module, /, locale, flags, src)\n"
+"--\n"
+"\n");
+
+#define _WINAPI_LCMAPSTRINGEX_METHODDEF \
+ {"LCMapStringEx", (PyCFunction)(void(*)(void))_winapi_LCMapStringEx, METH_FASTCALL|METH_KEYWORDS, _winapi_LCMapStringEx__doc__},
+
+static PyObject *
+_winapi_LCMapStringEx_impl(PyObject *module, PyObject *locale, DWORD flags,
+ PyObject *src);
+
+static PyObject *
+_winapi_LCMapStringEx(PyObject *module, PyObject *const *args, Py_ssize_t nargs, PyObject *kwnames)
+{
+ PyObject *return_value = NULL;
+ static const char * const _keywords[] = {"locale", "flags", "src", NULL};
+ static _PyArg_Parser _parser = {"UkU:LCMapStringEx", _keywords, 0};
+ PyObject *locale;
+ DWORD flags;
+ PyObject *src;
+
+ if (!_PyArg_ParseStackAndKeywords(args, nargs, kwnames, &_parser,
+ &locale, &flags, &src)) {
+ goto exit;
+ }
+ return_value = _winapi_LCMapStringEx_impl(module, locale, flags, src);
+
+exit:
+ return return_value;
+}
+
PyDoc_STRVAR(_winapi_ReadFile__doc__,
"ReadFile($module, /, handle, size, overlapped=False)\n"
"--\n"
exit:
return return_value;
}
-/*[clinic end generated code: output=ac3623be6e42017c input=a9049054013a1b77]*/
+/*[clinic end generated code: output=8e13179bf25bdea5 input=a9049054013a1b77]*/
#endif /* defined(HAVE_MEMFD_CREATE) */
-#if defined(HAVE_EVENTFD)
+#if (defined(HAVE_EVENTFD) && defined(EFD_CLOEXEC))
PyDoc_STRVAR(os_eventfd__doc__,
"eventfd($module, /, initval, flags=EFD_CLOEXEC)\n"
return return_value;
}
-#endif /* defined(HAVE_EVENTFD) */
+#endif /* (defined(HAVE_EVENTFD) && defined(EFD_CLOEXEC)) */
-#if defined(HAVE_EVENTFD)
+#if (defined(HAVE_EVENTFD) && defined(EFD_CLOEXEC))
PyDoc_STRVAR(os_eventfd_read__doc__,
"eventfd_read($module, /, fd)\n"
return return_value;
}
-#endif /* defined(HAVE_EVENTFD) */
+#endif /* (defined(HAVE_EVENTFD) && defined(EFD_CLOEXEC)) */
-#if defined(HAVE_EVENTFD)
+#if (defined(HAVE_EVENTFD) && defined(EFD_CLOEXEC))
PyDoc_STRVAR(os_eventfd_write__doc__,
"eventfd_write($module, /, fd, value)\n"
return return_value;
}
-#endif /* defined(HAVE_EVENTFD) */
+#endif /* (defined(HAVE_EVENTFD) && defined(EFD_CLOEXEC)) */
#if (defined(TERMSIZE_USE_CONIO) || defined(TERMSIZE_USE_IOCTL))
#ifndef OS_WAITSTATUS_TO_EXITCODE_METHODDEF
#define OS_WAITSTATUS_TO_EXITCODE_METHODDEF
#endif /* !defined(OS_WAITSTATUS_TO_EXITCODE_METHODDEF) */
-/*[clinic end generated code: output=debefcf43738ec66 input=a9049054013a1b77]*/
+/*[clinic end generated code: output=73be33fa0000a6d1 input=a9049054013a1b77]*/
u_long v4a;
#ifdef ENABLE_IPV6
u_char pfx;
-#endif
int h_error;
+#endif
char numserv[512];
char numaddr[512];
hp = getipnodebyaddr(addr, gni_afd->a_addrlen, gni_afd->a_af, &h_error);
#else
hp = gethostbyaddr(addr, gni_afd->a_addrlen, gni_afd->a_af);
- h_error = h_errno;
#endif
if (hp) {
/* Expose all the ioctl numbers for masochists who like to do this
stuff directly. */
+#ifdef SNDCTL_COPR_HALT
_EXPORT_INT(m, SNDCTL_COPR_HALT);
+#endif
+#ifdef SNDCTL_COPR_LOAD
_EXPORT_INT(m, SNDCTL_COPR_LOAD);
+#endif
+#ifdef SNDCTL_COPR_RCODE
_EXPORT_INT(m, SNDCTL_COPR_RCODE);
+#endif
+#ifdef SNDCTL_COPR_RCVMSG
_EXPORT_INT(m, SNDCTL_COPR_RCVMSG);
+#endif
+#ifdef SNDCTL_COPR_RDATA
_EXPORT_INT(m, SNDCTL_COPR_RDATA);
+#endif
+#ifdef SNDCTL_COPR_RESET
_EXPORT_INT(m, SNDCTL_COPR_RESET);
+#endif
+#ifdef SNDCTL_COPR_RUN
_EXPORT_INT(m, SNDCTL_COPR_RUN);
+#endif
+#ifdef SNDCTL_COPR_SENDMSG
_EXPORT_INT(m, SNDCTL_COPR_SENDMSG);
+#endif
+#ifdef SNDCTL_COPR_WCODE
_EXPORT_INT(m, SNDCTL_COPR_WCODE);
+#endif
+#ifdef SNDCTL_COPR_WDATA
_EXPORT_INT(m, SNDCTL_COPR_WDATA);
+#endif
#ifdef SNDCTL_DSP_BIND_CHANNEL
_EXPORT_INT(m, SNDCTL_DSP_BIND_CHANNEL);
#endif
_EXPORT_INT(m, SNDCTL_DSP_STEREO);
_EXPORT_INT(m, SNDCTL_DSP_SUBDIVIDE);
_EXPORT_INT(m, SNDCTL_DSP_SYNC);
+#ifdef SNDCTL_FM_4OP_ENABLE
_EXPORT_INT(m, SNDCTL_FM_4OP_ENABLE);
+#endif
+#ifdef SNDCTL_FM_LOAD_INSTR
_EXPORT_INT(m, SNDCTL_FM_LOAD_INSTR);
+#endif
+#ifdef SNDCTL_MIDI_INFO
_EXPORT_INT(m, SNDCTL_MIDI_INFO);
+#endif
+#ifdef SNDCTL_MIDI_MPUCMD
_EXPORT_INT(m, SNDCTL_MIDI_MPUCMD);
+#endif
+#ifdef SNDCTL_MIDI_MPUMODE
_EXPORT_INT(m, SNDCTL_MIDI_MPUMODE);
+#endif
+#ifdef SNDCTL_MIDI_PRETIME
_EXPORT_INT(m, SNDCTL_MIDI_PRETIME);
+#endif
+#ifdef SNDCTL_SEQ_CTRLRATE
_EXPORT_INT(m, SNDCTL_SEQ_CTRLRATE);
+#endif
+#ifdef SNDCTL_SEQ_GETINCOUNT
_EXPORT_INT(m, SNDCTL_SEQ_GETINCOUNT);
+#endif
+#ifdef SNDCTL_SEQ_GETOUTCOUNT
_EXPORT_INT(m, SNDCTL_SEQ_GETOUTCOUNT);
+#endif
#ifdef SNDCTL_SEQ_GETTIME
_EXPORT_INT(m, SNDCTL_SEQ_GETTIME);
#endif
+#ifdef SNDCTL_SEQ_NRMIDIS
_EXPORT_INT(m, SNDCTL_SEQ_NRMIDIS);
+#endif
+#ifdef SNDCTL_SEQ_NRSYNTHS
_EXPORT_INT(m, SNDCTL_SEQ_NRSYNTHS);
+#endif
+#ifdef SNDCTL_SEQ_OUTOFBAND
_EXPORT_INT(m, SNDCTL_SEQ_OUTOFBAND);
+#endif
+#ifdef SNDCTL_SEQ_PANIC
_EXPORT_INT(m, SNDCTL_SEQ_PANIC);
+#endif
+#ifdef SNDCTL_SEQ_PERCMODE
_EXPORT_INT(m, SNDCTL_SEQ_PERCMODE);
+#endif
+#ifdef SNDCTL_SEQ_RESET
_EXPORT_INT(m, SNDCTL_SEQ_RESET);
+#endif
+#ifdef SNDCTL_SEQ_RESETSAMPLES
_EXPORT_INT(m, SNDCTL_SEQ_RESETSAMPLES);
+#endif
+#ifdef SNDCTL_SEQ_SYNC
_EXPORT_INT(m, SNDCTL_SEQ_SYNC);
+#endif
+#ifdef SNDCTL_SEQ_TESTMIDI
_EXPORT_INT(m, SNDCTL_SEQ_TESTMIDI);
+#endif
+#ifdef SNDCTL_SEQ_THRESHOLD
_EXPORT_INT(m, SNDCTL_SEQ_THRESHOLD);
+#endif
#ifdef SNDCTL_SYNTH_CONTROL
_EXPORT_INT(m, SNDCTL_SYNTH_CONTROL);
#endif
#ifdef SNDCTL_SYNTH_ID
_EXPORT_INT(m, SNDCTL_SYNTH_ID);
#endif
+#ifdef SNDCTL_SYNTH_INFO
_EXPORT_INT(m, SNDCTL_SYNTH_INFO);
+#endif
+#ifdef SNDCTL_SYNTH_MEMAVL
_EXPORT_INT(m, SNDCTL_SYNTH_MEMAVL);
+#endif
#ifdef SNDCTL_SYNTH_REMOVESAMPLE
_EXPORT_INT(m, SNDCTL_SYNTH_REMOVESAMPLE);
#endif
+#ifdef SNDCTL_TMR_CONTINUE
_EXPORT_INT(m, SNDCTL_TMR_CONTINUE);
+#endif
+#ifdef SNDCTL_TMR_METRONOME
_EXPORT_INT(m, SNDCTL_TMR_METRONOME);
+#endif
+#ifdef SNDCTL_TMR_SELECT
_EXPORT_INT(m, SNDCTL_TMR_SELECT);
+#endif
+#ifdef SNDCTL_TMR_SOURCE
_EXPORT_INT(m, SNDCTL_TMR_SOURCE);
+#endif
+#ifdef SNDCTL_TMR_START
_EXPORT_INT(m, SNDCTL_TMR_START);
+#endif
+#ifdef SNDCTL_TMR_STOP
_EXPORT_INT(m, SNDCTL_TMR_STOP);
+#endif
+#ifdef SNDCTL_TMR_TEMPO
_EXPORT_INT(m, SNDCTL_TMR_TEMPO);
+#endif
+#ifdef SNDCTL_TMR_TIMEBASE
_EXPORT_INT(m, SNDCTL_TMR_TIMEBASE);
+#endif
return m;
}
}
#endif
-#ifdef HAVE_EVENTFD
+#if defined(HAVE_EVENTFD) && defined(EFD_CLOEXEC)
/*[clinic input]
os.eventfd
}
Py_RETURN_NONE;
}
-#endif /* HAVE_EVENTFD */
+#endif /* HAVE_EVENTFD && EFD_CLOEXEC */
/* Terminal size querying */
#endif
#endif /* HAVE_MEMFD_CREATE */
-#ifdef HAVE_EVENTFD
+#if defined(HAVE_EVENTFD) && defined(EFD_CLOEXEC)
if (PyModule_AddIntMacro(m, EFD_CLOEXEC)) return -1;
+#ifdef EFD_NONBLOCK
if (PyModule_AddIntMacro(m, EFD_NONBLOCK)) return -1;
+#endif
+#ifdef EFD_SEMAPHORE
if (PyModule_AddIntMacro(m, EFD_SEMAPHORE)) return -1;
#endif
+#endif /* HAVE_EVENTFD && EFD_CLOEXEC */
#if defined(__APPLE__)
if (PyModule_AddIntConstant(m, "_COPYFILE_DATA", COPYFILE_DATA)) return -1;
return PyObject_RichCompareBool(func, dfl_ign_handler, Py_EQ) == 1;
}
-#ifdef HAVE_GETITIMER
-/* auxiliary functions for setitimer */
+#ifdef HAVE_SETITIMER
+/* auxiliary function for setitimer */
static int
timeval_from_double(PyObject *obj, struct timeval *tv)
{
}
return _PyTime_AsTimeval(t, tv, _PyTime_ROUND_CEILING);
}
+#endif
+#if defined(HAVE_SETITIMER) || defined(HAVE_GETITIMER)
+/* auxiliary functions for get/setitimer */
Py_LOCAL_INLINE(double)
double_from_timeval(struct timeval *tv)
{
}
+#ifdef HAVE_SOCKETPAIR
/* Create a new socket object.
This just creates the object and initializes it.
If the creation fails, return NULL and set an exception (implicit
}
return s;
}
+#endif
/* Lock to allow python interpreter to continue, but only allow one
struct sockaddr_un* addr = &addrbuf->un;
#ifdef __linux__
- if (path.len > 0 && *(const char *)path.buf == 0) {
- /* Linux abstract namespace extension */
+ if (path.len == 0 || *(const char *)path.buf == 0) {
+ /* Linux abstract namespace extension:
+ - Empty address auto-binding to an abstract address
+ - Address that starts with null byte */
if ((size_t)path.len > sizeof addr->sun_path) {
PyErr_SetString(PyExc_OSError,
"AF_UNIX path too long");
}
scriptobj = PyList_GetItem(argv, 0);
+ if (scriptobj == NULL) {
+ PyErr_Clear();
+ return NULL;
+ }
if (!PyUnicode_Check(scriptobj)) {
return(NULL);
}
}
slash = PyUnicode_FindChar(scriptobj, SEP, 0, scriptlen, -1);
- if (slash == -2)
+ if (slash == -2) {
+ PyErr_Clear();
return NULL;
+ }
if (slash != -1) {
return PyUnicode_Substring(scriptobj, slash + 1, scriptlen);
} else {
Py_INCREF(scriptobj);
return(scriptobj);
}
-
- return(NULL);
}
{
long logopt = 0;
long facility = LOG_USER;
- PyObject *new_S_ident_o = NULL;
+ PyObject *ident = NULL;
static char *keywords[] = {"ident", "logoption", "facility", 0};
- const char *ident = NULL;
+ const char *ident_str = NULL;
if (!PyArg_ParseTupleAndKeywords(args, kwds,
- "|Ull:openlog", keywords, &new_S_ident_o, &logopt, &facility))
+ "|Ull:openlog", keywords, &ident, &logopt, &facility))
return NULL;
- if (new_S_ident_o) {
- Py_INCREF(new_S_ident_o);
+ if (ident) {
+ Py_INCREF(ident);
}
-
- /* get sys.argv[0] or NULL if we can't for some reason */
- if (!new_S_ident_o) {
- new_S_ident_o = syslog_get_argv();
+ else {
+ /* get sys.argv[0] or NULL if we can't for some reason */
+ ident = syslog_get_argv();
}
- Py_XDECREF(S_ident_o);
- S_ident_o = new_S_ident_o;
-
- /* At this point, S_ident_o should be INCREF()ed. openlog(3) does not
- * make a copy, and syslog(3) later uses it. We can't garbagecollect it
+ /* At this point, ident should be INCREF()ed. openlog(3) does not
+ * make a copy, and syslog(3) later uses it. We can't garbage-collect it.
* If NULL, just let openlog figure it out (probably using C argv[0]).
*/
- if (S_ident_o) {
- ident = PyUnicode_AsUTF8(S_ident_o);
- if (ident == NULL)
+ if (ident) {
+ ident_str = PyUnicode_AsUTF8(ident);
+ if (ident_str == NULL) {
+ Py_DECREF(ident);
return NULL;
+ }
}
-
- if (PySys_Audit("syslog.openlog", "sll", ident, logopt, facility) < 0) {
+ if (PySys_Audit("syslog.openlog", "Oll", ident ? ident : Py_None, logopt, facility) < 0) {
+ Py_XDECREF(ident);
return NULL;
}
- openlog(ident, logopt, facility);
+ openlog(ident_str, logopt, facility);
S_log_open = 1;
+ Py_XSETREF(S_ident_o, ident);
Py_RETURN_NONE;
}
*/
if ((openargs = PyTuple_New(0))) {
PyObject *openlog_ret = syslog_openlog(self, openargs, NULL);
- Py_XDECREF(openlog_ret);
Py_DECREF(openargs);
+ if (openlog_ret == NULL) {
+ return NULL;
+ }
+ Py_DECREF(openlog_ret);
+ }
+ else {
+ return NULL;
}
}
+ /* Incref ident, because it can be decrefed if syslog.openlog() is
+ * called when the GIL is released.
+ */
+ PyObject *ident = S_ident_o;
+ Py_XINCREF(ident);
Py_BEGIN_ALLOW_THREADS;
syslog(priority, "%s", message);
Py_END_ALLOW_THREADS;
+ Py_XDECREF(ident);
Py_RETURN_NONE;
}
PyInit_syslog(void)
{
return PyModuleDef_Init(&syslogmodule);
-}
\ No newline at end of file
+}
};
static PyType_Spec Xxo_Type_spec = {
- "xxlimited.Xxo",
+ "xxlimited_35.Xxo",
sizeof(XxoObject),
0,
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_HAVE_GC,
};
static PyType_Spec Str_Type_spec = {
- "xxlimited.Str",
+ "xxlimited_35.Str",
0,
0,
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
};
static PyType_Spec Null_Type_spec = {
- "xxlimited.Null",
+ "xxlimited_35.Null",
0, /* basicsize */
0, /* itemsize */
Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE,
Null_Type_slots[1].pfunc = PyType_GenericNew;
Str_Type_slots[0].pfunc = &PyUnicode_Type;
- Xxo_Type = PyType_FromSpec(&Xxo_Type_spec);
- if (Xxo_Type == NULL)
- goto fail;
-
/* Add some symbolic constants to the module */
if (ErrorObject == NULL) {
- ErrorObject = PyErr_NewException("xxlimited.error", NULL, NULL);
- if (ErrorObject == NULL)
- goto fail;
+ ErrorObject = PyErr_NewException("xxlimited_35.error", NULL, NULL);
+ if (ErrorObject == NULL) {
+ return -1;
+ }
}
Py_INCREF(ErrorObject);
- PyModule_AddObject(m, "error", ErrorObject);
+ if (PyModule_AddObject(m, "error", ErrorObject) < 0) {
+ Py_DECREF(ErrorObject);
+ return -1;
+ }
/* Add Xxo */
- o = PyType_FromSpec(&Xxo_Type_spec);
- if (o == NULL)
- goto fail;
- PyModule_AddObject(m, "Xxo", o);
+ Xxo_Type = PyType_FromSpec(&Xxo_Type_spec);
+ if (Xxo_Type == NULL) {
+ return -1;
+ }
+ if (PyModule_AddObject(m, "Xxo", Xxo_Type) < 0) {
+ Py_DECREF(Xxo_Type);
+ return -1;
+ }
/* Add Str */
o = PyType_FromSpec(&Str_Type_spec);
- if (o == NULL)
- goto fail;
- PyModule_AddObject(m, "Str", o);
+ if (o == NULL) {
+ return -1;
+ }
+ if (PyModule_AddObject(m, "Str", o) < 0) {
+ Py_DECREF(o);
+ return -1;
+ }
/* Add Null */
o = PyType_FromSpec(&Null_Type_spec);
- if (o == NULL)
- goto fail;
- PyModule_AddObject(m, "Null", o);
+ if (o == NULL) {
+ return -1;
+ }
+ if (PyModule_AddObject(m, "Null", o) < 0) {
+ Py_DECREF(o);
+ return -1;
+ }
+
return 0;
- fail:
- Py_XDECREF(m);
- return -1;
}
/* Finalize the type object including setting type of the new type
* object; doing it here is required for portability, too. */
- if (PyType_Ready(&Xxo_Type) < 0)
- goto fail;
+ if (PyType_Ready(&Xxo_Type) < 0) {
+ return -1;
+ }
/* Add some symbolic constants to the module */
if (ErrorObject == NULL) {
ErrorObject = PyErr_NewException("xx.error", NULL, NULL);
- if (ErrorObject == NULL)
- goto fail;
+ if (ErrorObject == NULL) {
+ return -1;
+ }
+ }
+ int rc = PyModule_AddType(m, (PyTypeObject *)ErrorObject);
+ Py_DECREF(ErrorObject);
+ if (rc < 0) {
+ return -1;
}
- Py_INCREF(ErrorObject);
- PyModule_AddObject(m, "error", ErrorObject);
-
- /* Add Str */
- if (PyType_Ready(&Str_Type) < 0)
- goto fail;
- PyModule_AddObject(m, "Str", (PyObject *)&Str_Type);
-
- /* Add Null */
- if (PyType_Ready(&Null_Type) < 0)
- goto fail;
- PyModule_AddObject(m, "Null", (PyObject *)&Null_Type);
+
+ /* Add Str and Null types */
+ if (PyModule_AddType(m, &Str_Type) < 0) {
+ return -1;
+ }
+ if (PyModule_AddType(m, &Null_Type) < 0) {
+ return -1;
+ }
+
return 0;
- fail:
- Py_XDECREF(m);
- return -1;
}
static struct PyModuleDef_Slot xx_slots[] = {
static int
bytearray_setitem(PyByteArrayObject *self, Py_ssize_t i, PyObject *value)
{
- int ival;
+ int ival = -1;
- if (i < 0)
+ // GH-91153: We need to do this *before* the size check, in case value has a
+ // nasty __index__ method that changes the size of the bytearray:
+ if (value && !_getbytevalue(value, &ival)) {
+ return -1;
+ }
+
+ if (i < 0) {
i += Py_SIZE(self);
+ }
if (i < 0 || i >= Py_SIZE(self)) {
PyErr_SetString(PyExc_IndexError, "bytearray index out of range");
return -1;
}
- if (value == NULL)
+ if (value == NULL) {
return bytearray_setslice(self, i, i+1, NULL);
+ }
- if (!_getbytevalue(value, &ival))
- return -1;
-
+ assert(0 <= ival && ival < 256);
PyByteArray_AS_STRING(self)[i] = ival;
return 0;
}
if (_PyIndex_Check(index)) {
Py_ssize_t i = PyNumber_AsSsize_t(index, PyExc_IndexError);
- if (i == -1 && PyErr_Occurred())
+ if (i == -1 && PyErr_Occurred()) {
return -1;
+ }
- if (i < 0)
+ int ival = -1;
+
+ // GH-91153: We need to do this *before* the size check, in case values
+ // has a nasty __index__ method that changes the size of the bytearray:
+ if (values && !_getbytevalue(values, &ival)) {
+ return -1;
+ }
+
+ if (i < 0) {
i += PyByteArray_GET_SIZE(self);
+ }
if (i < 0 || i >= Py_SIZE(self)) {
PyErr_SetString(PyExc_IndexError, "bytearray index out of range");
slicelen = 1;
}
else {
- int ival;
- if (!_getbytevalue(values, &ival))
- return -1;
+ assert(0 <= ival && ival < 256);
buf[i] = (char)ival;
return 0;
}
assert(self->ob_item == NULL);
assert(size > 0);
+ /* Since the Python memory allocator has a granularity of 16 bytes on 64-bit
+ * platforms (8 on 32-bit), there is no benefit in allocating space for an
+ * odd number of items, and no drawback in rounding the allocated size up
+ * to the nearest even number.
+ */
+ size = (size + 1) & ~(size_t)1;
PyObject **items = PyMem_New(PyObject*, size);
if (items == NULL) {
PyErr_NoMemory();
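The rounding expression can be checked in isolation: clearing the low bit of `size + 1` rounds odd sizes up to the next even number and leaves even sizes unchanged. A quick sketch of the bit trick only, not CPython's allocator.

```python
def round_up_to_even(size: int) -> int:
    # (size + 1) & ~1 clears the low bit of size + 1:
    # odd sizes round up, even sizes are unchanged.
    return (size + 1) & ~1

for n in range(1, 5):
    print(n, "->", round_up_to_even(n))
# 1 -> 2, 2 -> 2, 3 -> 4, 4 -> 4
```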
return -1; \
}
+/* See gh-92888. These macros signal that we need to check the memoryview
+ again due to possible read after frees. */
+#define CHECK_RELEASED_AGAIN(mv) CHECK_RELEASED(mv)
+#define CHECK_RELEASED_INT_AGAIN(mv) CHECK_RELEASED_INT(mv)
+
#define CHECK_LIST_OR_TUPLE(v) \
if (!PyList_Check(v) && !PyTuple_Check(v)) { \
PyErr_SetString(PyExc_TypeError, \
/* Faster copying of one-dimensional arrays. */
static int
-copy_single(Py_buffer *dest, Py_buffer *src)
+copy_single(PyMemoryViewObject *self, const Py_buffer *dest, const Py_buffer *src)
{
+ CHECK_RELEASED_INT_AGAIN(self);
char *mem = NULL;
assert(dest->ndim == 1);
module syntax. This function is very sensitive to small changes. With this
layout gcc automatically generates a fast jump table. */
static inline PyObject *
-unpack_single(const char *ptr, const char *fmt)
+unpack_single(PyMemoryViewObject *self, const char *ptr, const char *fmt)
{
unsigned long long llu;
unsigned long lu;
unsigned char uc;
void *p;
+ CHECK_RELEASED_AGAIN(self);
+
switch (fmt[0]) {
/* signed integers and fast path for 'B' */
/* Pack a single item. 'fmt' can be any native format character in
struct module syntax. */
static int
-pack_single(char *ptr, PyObject *item, const char *fmt)
+pack_single(PyMemoryViewObject *self, char *ptr, PyObject *item, const char *fmt)
{
unsigned long long llu;
unsigned long lu;
ld = pylong_as_ld(item);
if (ld == -1 && PyErr_Occurred())
goto err_occurred;
+ CHECK_RELEASED_INT_AGAIN(self);
switch (fmt[0]) {
case 'b':
if (ld < SCHAR_MIN || ld > SCHAR_MAX) goto err_range;
lu = pylong_as_lu(item);
if (lu == (unsigned long)-1 && PyErr_Occurred())
goto err_occurred;
+ CHECK_RELEASED_INT_AGAIN(self);
switch (fmt[0]) {
case 'B':
if (lu > UCHAR_MAX) goto err_range;
lld = pylong_as_lld(item);
if (lld == -1 && PyErr_Occurred())
goto err_occurred;
+ CHECK_RELEASED_INT_AGAIN(self);
PACK_SINGLE(ptr, lld, long long);
break;
case 'Q':
llu = pylong_as_llu(item);
if (llu == (unsigned long long)-1 && PyErr_Occurred())
goto err_occurred;
+ CHECK_RELEASED_INT_AGAIN(self);
PACK_SINGLE(ptr, llu, unsigned long long);
break;
zd = pylong_as_zd(item);
if (zd == -1 && PyErr_Occurred())
goto err_occurred;
+ CHECK_RELEASED_INT_AGAIN(self);
PACK_SINGLE(ptr, zd, Py_ssize_t);
break;
case 'N':
zu = pylong_as_zu(item);
if (zu == (size_t)-1 && PyErr_Occurred())
goto err_occurred;
+ CHECK_RELEASED_INT_AGAIN(self);
PACK_SINGLE(ptr, zu, size_t);
break;
d = PyFloat_AsDouble(item);
if (d == -1.0 && PyErr_Occurred())
goto err_occurred;
+ CHECK_RELEASED_INT_AGAIN(self);
if (fmt[0] == 'f') {
PACK_SINGLE(ptr, d, float);
}
ld = PyObject_IsTrue(item);
if (ld < 0)
return -1; /* preserve original error */
+ CHECK_RELEASED_INT_AGAIN(self);
PACK_SINGLE(ptr, ld, _Bool);
break;
p = PyLong_AsVoidPtr(item);
if (p == NULL && PyErr_Occurred())
goto err_occurred;
+ CHECK_RELEASED_INT_AGAIN(self);
PACK_SINGLE(ptr, p, void *);
break;
/* Base case for multi-dimensional unpacking. Assumption: ndim == 1. */
static PyObject *
-tolist_base(const char *ptr, const Py_ssize_t *shape,
+tolist_base(PyMemoryViewObject *self, const char *ptr, const Py_ssize_t *shape,
const Py_ssize_t *strides, const Py_ssize_t *suboffsets,
const char *fmt)
{
for (i = 0; i < shape[0]; ptr+=strides[0], i++) {
const char *xptr = ADJUST_PTR(ptr, suboffsets, 0);
- item = unpack_single(xptr, fmt);
+ item = unpack_single(self, xptr, fmt);
if (item == NULL) {
Py_DECREF(lst);
return NULL;
/* Unpack a multi-dimensional array into a nested list.
Assumption: ndim >= 1. */
static PyObject *
-tolist_rec(const char *ptr, Py_ssize_t ndim, const Py_ssize_t *shape,
+tolist_rec(PyMemoryViewObject *self, const char *ptr, Py_ssize_t ndim, const Py_ssize_t *shape,
const Py_ssize_t *strides, const Py_ssize_t *suboffsets,
const char *fmt)
{
assert(strides != NULL);
if (ndim == 1)
- return tolist_base(ptr, shape, strides, suboffsets, fmt);
+ return tolist_base(self, ptr, shape, strides, suboffsets, fmt);
lst = PyList_New(shape[0]);
if (lst == NULL)
for (i = 0; i < shape[0]; ptr+=strides[0], i++) {
const char *xptr = ADJUST_PTR(ptr, suboffsets, 0);
- item = tolist_rec(xptr, ndim-1, shape+1,
+ item = tolist_rec(self, xptr, ndim-1, shape+1,
strides+1, suboffsets ? suboffsets+1 : NULL,
fmt);
if (item == NULL) {
if (fmt == NULL)
return NULL;
if (view->ndim == 0) {
- return unpack_single(view->buf, fmt);
+ return unpack_single(self, view->buf, fmt);
}
else if (view->ndim == 1) {
- return tolist_base(view->buf, view->shape,
+ return tolist_base(self, view->buf, view->shape,
view->strides, view->suboffsets,
fmt);
}
else {
- return tolist_rec(view->buf, view->ndim, view->shape,
+ return tolist_rec(self, view->buf, view->ndim, view->shape,
view->strides, view->suboffsets,
fmt);
}
char *ptr = ptr_from_index(view, index);
if (ptr == NULL)
return NULL;
- return unpack_single(ptr, fmt);
+ return unpack_single(self, ptr, fmt);
}
PyErr_SetString(PyExc_NotImplementedError,
ptr = ptr_from_tuple(view, tup);
if (ptr == NULL)
return NULL;
- return unpack_single(ptr, fmt);
+ return unpack_single(self, ptr, fmt);
}
static inline int
const char *fmt = adjust_fmt(view);
if (fmt == NULL)
return NULL;
- return unpack_single(view->buf, fmt);
+ return unpack_single(self, view->buf, fmt);
}
else if (key == Py_Ellipsis) {
Py_INCREF(self);
if (key == Py_Ellipsis ||
(PyTuple_Check(key) && PyTuple_GET_SIZE(key)==0)) {
ptr = (char *)view->buf;
- return pack_single(ptr, value, fmt);
+ return pack_single(self, ptr, value, fmt);
}
else {
PyErr_SetString(PyExc_TypeError,
ptr = ptr_from_index(view, index);
if (ptr == NULL)
return -1;
- return pack_single(ptr, value, fmt);
+ return pack_single(self, ptr, value, fmt);
}
/* one-dimensional: fast path */
if (PySlice_Check(key) && view->ndim == 1) {
goto end_block;
dest.len = dest.shape[0] * dest.itemsize;
- ret = copy_single(&dest, &src);
+ ret = copy_single(self, &dest, &src);
end_block:
PyBuffer_Release(&src);
ptr = ptr_from_tuple(view, key);
if (ptr == NULL)
return -1;
- return pack_single(ptr, value, fmt);
+ return pack_single(self, ptr, value, fmt);
}
if (PySlice_Check(key) || is_multislice(key)) {
/* Call memory_subscript() to produce a sliced lvalue, then copy
if (ptr == NULL) {
return NULL;
}
- return unpack_single(ptr, it->it_fmt);
+ return unpack_single(seq, ptr, it->it_fmt);
}
it->it_seq = NULL;
static arena_map_bot_t arena_map_root;
#endif
+#if defined(Py_DEBUG)
+# define ALWAYS_INLINE
+#elif defined(__GNUC__) || defined(__clang__) || defined(__INTEL_COMPILER)
+# define ALWAYS_INLINE __attribute__((always_inline))
+#elif defined(_MSC_VER)
+# define ALWAYS_INLINE __forceinline
+#else
+# define ALWAYS_INLINE
+#endif
+
/* Return a pointer to a bottom tree node, return NULL if it doesn't exist or
* it cannot be created */
-static arena_map_bot_t *
+static ALWAYS_INLINE arena_map_bot_t *
arena_map_get(block *p, int create)
{
#ifdef USE_INTERIOR_NODES
PyObject *dict = base->tp_subclasses;
if (dict == NULL) {
base->tp_subclasses = dict = PyDict_New();
- if (dict == NULL)
+ if (dict == NULL) {
+ Py_DECREF(key);
+ Py_DECREF(ref);
return -1;
+ }
}
assert(PyDict_CheckExact(dict));
obj = NULL;
if (type == Py_None)
type = NULL;
- if (type == NULL &&obj == NULL) {
+ if (type == NULL && obj == NULL) {
PyErr_SetString(PyExc_TypeError,
"__get__(None, None) is invalid");
return NULL;
TPSLOT("__next__", tp_iternext, slot_tp_iternext, wrap_next,
"__next__($self, /)\n--\n\nImplement next(self)."),
TPSLOT("__get__", tp_descr_get, slot_tp_descr_get, wrap_descr_get,
- "__get__($self, instance, owner, /)\n--\n\nReturn an attribute of instance, which is of type owner."),
+ "__get__($self, instance, owner=None, /)\n--\n\nReturn an attribute of instance, which is of type owner."),
TPSLOT("__set__", tp_descr_set, slot_tp_descr_set, wrap_descr_set,
"__set__($self, instance, value, /)\n--\n\nSet an attribute of instance to value."),
TPSLOT("__delete__", tp_descr_set, slot_tp_descr_set,
PyTypeObject
_PyWeakref_RefType = {
PyVarObject_HEAD_INIT(&PyType_Type, 0)
- "weakref",
+ "weakref.ReferenceType",
sizeof(PyWeakReference),
0,
weakref_dealloc, /*tp_dealloc*/
PyTypeObject
_PyWeakref_ProxyType = {
PyVarObject_HEAD_INIT(&PyType_Type, 0)
- "weakproxy",
+ "weakref.ProxyType",
sizeof(PyWeakReference),
0,
/* methods */
PyTypeObject
_PyWeakref_CallableProxyType = {
PyVarObject_HEAD_INIT(&PyType_Type, 0)
- "weakcallableproxy",
+ "weakref.CallableProxyType",
sizeof(PyWeakReference),
0,
/* methods */
if (!cch) {
error(0, L"Cannot determine memory for home path");
}
- cch += (DWORD)wcslen(PYTHON_EXECUTABLE) + 1 + 1; /* include sep and null */
+ cch += (DWORD)wcslen(PYTHON_EXECUTABLE) + 4; /* include sep, null and quotes */
executable = (wchar_t *)malloc(cch * sizeof(wchar_t));
if (executable == NULL) {
error(RC_NO_MEMORY, L"A memory allocation failed");
}
- cch_actual = MultiByteToWideChar(CP_UTF8, 0, start, len, executable, cch);
+ /* start with a quote - we'll skip past it below, but want it in the final string */
+ executable[0] = L'"';
+ cch_actual = MultiByteToWideChar(CP_UTF8, 0, start, len, &executable[1], cch - 1);
if (!cch_actual) {
error(RC_BAD_VENV_CFG, L"Cannot decode home path in '%ls'",
venv_cfg_path);
}
+ cch_actual += 1; /* account for the first quote */
+ executable[cch_actual] = L'\0';
if (executable[cch_actual - 1] != L'\\') {
executable[cch_actual++] = L'\\';
executable[cch_actual] = L'\0';
}
- if (wcscat_s(executable, cch, PYTHON_EXECUTABLE)) {
+ if (wcscat_s(&executable[1], cch - 1, PYTHON_EXECUTABLE)) {
error(RC_BAD_VENV_CFG, L"Cannot create executable path from '%ls'",
venv_cfg_path);
}
- if (GetFileAttributesW(executable) == INVALID_FILE_ATTRIBUTES) {
+ /* there's no trailing quote, so we only have to skip one character for the test */
+ if (GetFileAttributesW(&executable[1]) == INVALID_FILE_ATTRIBUTES) {
error(RC_NO_PYTHON, L"No Python at '%ls'", executable);
}
+ /* now append the final quote */
+ wcscat_s(executable, cch, L"\"");
+ /* smuggle our original path through */
if (!SetEnvironmentVariableW(L"__PYVENV_LAUNCHER__", argv0)) {
error(0, L"Failed to set launcher environment");
}
<LinkTimeCodeGeneration Condition="$(SupportPGO) and $(Configuration) == 'PGInstrument'">PGInstrument</LinkTimeCodeGeneration>\r
<LinkTimeCodeGeneration Condition="$(SupportPGO) and $(Configuration) == 'PGUpdate'">PGUpdate</LinkTimeCodeGeneration>\r
<AdditionalDependencies>advapi32.lib;shell32.lib;ole32.lib;oleaut32.lib;%(AdditionalDependencies)</AdditionalDependencies>\r
- <AdditionalOptions Condition="$(Configuration) != 'Debug'">/OPT:REF,NOICF %(AdditionalOptions)</AdditionalOptions>\r
+ <AdditionalOptions Condition="$(Configuration) != 'Debug'">/OPT:REF,NOICF /CGTHREADS:1 /PDBTHREADS:1 %(AdditionalOptions)</AdditionalOptions>\r
</Link>\r
<Lib>\r
<LinkTimeCodeGeneration Condition="$(Configuration) == 'Release'">true</LinkTimeCodeGeneration>\r
\r
We set BasePlatformToolset for ICC's benefit; it's otherwise ignored.\r
-->\r
- <BasePlatformToolset Condition="'$(BasePlatformToolset)' == '' and '$(VisualStudioVersion)' == '17.0'">v142</BasePlatformToolset>\r
+ <BasePlatformToolset Condition="'$(BasePlatformToolset)' == '' and '$(VisualStudioVersion)' == '17.0'">v143</BasePlatformToolset>\r
<BasePlatformToolset Condition="'$(BasePlatformToolset)' == '' and '$(VisualStudioVersion)' == '16.0'">v142</BasePlatformToolset>\r
<BasePlatformToolset Condition="'$(BasePlatformToolset)' == '' and ('$(MSBuildToolsVersion)' == '15.0' or '$(VisualStudioVersion)' == '15.0')">v141</BasePlatformToolset>\r
<BasePlatformToolset Condition="'$(BasePlatformToolset)' == '' and '$(VCTargetsPath14)' != ''">v140</BasePlatformToolset>\r
</ClCompile>\r
</ItemGroup>\r
</Target>\r
- <Target Name="_WarnAboutToolset" BeforeTargets="PrepareForBuild" Condition="$(PlatformToolset) != 'v140' and $(PlatformToolset) != 'v141' and $(PlatformToolset) != 'v142'">\r
+ <Target Name="_WarnAboutToolset" BeforeTargets="PrepareForBuild" Condition="$(PlatformToolset) != 'v140' and $(PlatformToolset) != 'v141' and $(PlatformToolset) != 'v142' and $(PlatformToolset) != 'v143'">\r
<Warning Text="Toolset $(PlatformToolset) is not used for official builds. Your build may have errors or incompatibilities." />\r
</Target>\r
<Target Name="_WarnAboutZlib" BeforeTargets="PrepareForBuild" Condition="!$(IncludeExternals)">\r
#define _tmp_175_type 1395
#define _tmp_176_type 1396
#define _tmp_177_type 1397
-#define _loop0_179_type 1398
-#define _gather_178_type 1399
-#define _tmp_180_type 1400
+#define _tmp_178_type 1398
+#define _loop0_180_type 1399
+#define _gather_179_type 1400
#define _tmp_181_type 1401
#define _tmp_182_type 1402
#define _tmp_183_type 1403
#define _tmp_204_type 1424
#define _tmp_205_type 1425
#define _tmp_206_type 1426
+#define _tmp_207_type 1427
+#define _tmp_208_type 1428
static mod_ty file_rule(Parser *p);
static mod_ty interactive_rule(Parser *p);
static void *_tmp_175_rule(Parser *p);
static void *_tmp_176_rule(Parser *p);
static void *_tmp_177_rule(Parser *p);
-static asdl_seq *_loop0_179_rule(Parser *p);
-static asdl_seq *_gather_178_rule(Parser *p);
-static void *_tmp_180_rule(Parser *p);
+static void *_tmp_178_rule(Parser *p);
+static asdl_seq *_loop0_180_rule(Parser *p);
+static asdl_seq *_gather_179_rule(Parser *p);
static void *_tmp_181_rule(Parser *p);
static void *_tmp_182_rule(Parser *p);
static void *_tmp_183_rule(Parser *p);
static void *_tmp_204_rule(Parser *p);
static void *_tmp_205_rule(Parser *p);
static void *_tmp_206_rule(Parser *p);
+static void *_tmp_207_rule(Parser *p);
+static void *_tmp_208_rule(Parser *p);
// file: statements? $
// for_stmt:
// | invalid_for_stmt
-// | 'for' star_targets 'in' ~ star_expressions &&':' TYPE_COMMENT? block else_block?
-// | ASYNC 'for' star_targets 'in' ~ star_expressions &&':' TYPE_COMMENT? block else_block?
+// | 'for' star_targets 'in' ~ star_expressions ':' TYPE_COMMENT? block else_block?
+// | ASYNC 'for' star_targets 'in' ~ star_expressions ':' TYPE_COMMENT? block else_block?
// | invalid_for_target
static stmt_ty
for_stmt_rule(Parser *p)
D(fprintf(stderr, "%*c%s for_stmt[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "invalid_for_stmt"));
}
- { // 'for' star_targets 'in' ~ star_expressions &&':' TYPE_COMMENT? block else_block?
+ { // 'for' star_targets 'in' ~ star_expressions ':' TYPE_COMMENT? block else_block?
if (p->error_indicator) {
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> for_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'for' star_targets 'in' ~ star_expressions &&':' TYPE_COMMENT? block else_block?"));
+ D(fprintf(stderr, "%*c> for_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'for' star_targets 'in' ~ star_expressions ':' TYPE_COMMENT? block else_block?"));
int _cut_var = 0;
Token * _keyword;
Token * _keyword_1;
&&
(ex = star_expressions_rule(p)) // star_expressions
&&
- (_literal = _PyPegen_expect_forced_token(p, 11, ":")) // forced_token=':'
+ (_literal = _PyPegen_expect_token(p, 11)) // token=':'
&&
(tc = _PyPegen_expect_token(p, TYPE_COMMENT), !p->error_indicator) // TYPE_COMMENT?
&&
(el = else_block_rule(p), !p->error_indicator) // else_block?
)
{
- D(fprintf(stderr, "%*c+ for_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'for' star_targets 'in' ~ star_expressions &&':' TYPE_COMMENT? block else_block?"));
+ D(fprintf(stderr, "%*c+ for_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'for' star_targets 'in' ~ star_expressions ':' TYPE_COMMENT? block else_block?"));
Token *_token = _PyPegen_get_last_nonnwhitespace_token(p);
if (_token == NULL) {
p->level--;
}
p->mark = _mark;
D(fprintf(stderr, "%*c%s for_stmt[%d-%d]: %s failed!\n", p->level, ' ',
- p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'for' star_targets 'in' ~ star_expressions &&':' TYPE_COMMENT? block else_block?"));
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'for' star_targets 'in' ~ star_expressions ':' TYPE_COMMENT? block else_block?"));
if (_cut_var) {
p->level--;
return NULL;
}
}
- { // ASYNC 'for' star_targets 'in' ~ star_expressions &&':' TYPE_COMMENT? block else_block?
+ { // ASYNC 'for' star_targets 'in' ~ star_expressions ':' TYPE_COMMENT? block else_block?
if (p->error_indicator) {
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> for_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "ASYNC 'for' star_targets 'in' ~ star_expressions &&':' TYPE_COMMENT? block else_block?"));
+ D(fprintf(stderr, "%*c> for_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "ASYNC 'for' star_targets 'in' ~ star_expressions ':' TYPE_COMMENT? block else_block?"));
int _cut_var = 0;
Token * _keyword;
Token * _keyword_1;
&&
(ex = star_expressions_rule(p)) // star_expressions
&&
- (_literal = _PyPegen_expect_forced_token(p, 11, ":")) // forced_token=':'
+ (_literal = _PyPegen_expect_token(p, 11)) // token=':'
&&
(tc = _PyPegen_expect_token(p, TYPE_COMMENT), !p->error_indicator) // TYPE_COMMENT?
&&
(el = else_block_rule(p), !p->error_indicator) // else_block?
)
{
- D(fprintf(stderr, "%*c+ for_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "ASYNC 'for' star_targets 'in' ~ star_expressions &&':' TYPE_COMMENT? block else_block?"));
+ D(fprintf(stderr, "%*c+ for_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "ASYNC 'for' star_targets 'in' ~ star_expressions ':' TYPE_COMMENT? block else_block?"));
Token *_token = _PyPegen_get_last_nonnwhitespace_token(p);
if (_token == NULL) {
p->level--;
}
p->mark = _mark;
D(fprintf(stderr, "%*c%s for_stmt[%d-%d]: %s failed!\n", p->level, ' ',
- p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "ASYNC 'for' star_targets 'in' ~ star_expressions &&':' TYPE_COMMENT? block else_block?"));
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "ASYNC 'for' star_targets 'in' ~ star_expressions ':' TYPE_COMMENT? block else_block?"));
if (_cut_var) {
p->level--;
return NULL;
UNUSED(_end_lineno); // Only used by EXTRA macro
int _end_col_offset = _token->end_col_offset;
UNUSED(_end_col_offset); // Only used by EXTRA macro
- _res = _PyAST_With ( a , b , NULL , EXTRA );
+ _res = CHECK_VERSION ( stmt_ty , 9 , "Parenthesized context managers are" , _PyAST_With ( a , b , NULL , EXTRA ) );
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
p->level--;
return NULL;
}
pattern_ty _res = NULL;
+ if (_PyPegen_is_memoized(p, closed_pattern_type, &_res)) {
+ p->level--;
+ return _res;
+ }
int _mark = p->mark;
{ // literal_pattern
if (p->error_indicator) {
}
_res = NULL;
done:
+ _PyPegen_insert_memo(p, _mark, closed_pattern_type, _res);
p->level--;
return _res;
}
return NULL;
}
pattern_ty _res = NULL;
+ if (_PyPegen_is_memoized(p, star_pattern_type, &_res)) {
+ p->level--;
+ return _res;
+ }
int _mark = p->mark;
if (p->mark == p->fill && _PyPegen_fill_token(p) < 0) {
p->error_indicator = 1;
}
_res = NULL;
done:
+ _PyPegen_insert_memo(p, _mark, star_pattern_type, _res);
p->level--;
return _res;
}
return _res;
}
-// class_def_raw: invalid_class_def_raw | 'class' NAME ['(' arguments? ')'] &&':' block
+// class_def_raw: invalid_class_def_raw | 'class' NAME ['(' arguments? ')'] ':' block
static stmt_ty
class_def_raw_rule(Parser *p)
{
D(fprintf(stderr, "%*c%s class_def_raw[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "invalid_class_def_raw"));
}
- { // 'class' NAME ['(' arguments? ')'] &&':' block
+ { // 'class' NAME ['(' arguments? ')'] ':' block
if (p->error_indicator) {
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> class_def_raw[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'class' NAME ['(' arguments? ')'] &&':' block"));
+ D(fprintf(stderr, "%*c> class_def_raw[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'class' NAME ['(' arguments? ')'] ':' block"));
Token * _keyword;
Token * _literal;
expr_ty a;
&&
(b = _tmp_85_rule(p), !p->error_indicator) // ['(' arguments? ')']
&&
- (_literal = _PyPegen_expect_forced_token(p, 11, ":")) // forced_token=':'
+ (_literal = _PyPegen_expect_token(p, 11)) // token=':'
&&
(c = block_rule(p)) // block
)
{
- D(fprintf(stderr, "%*c+ class_def_raw[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'class' NAME ['(' arguments? ')'] &&':' block"));
+ D(fprintf(stderr, "%*c+ class_def_raw[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'class' NAME ['(' arguments? ')'] ':' block"));
Token *_token = _PyPegen_get_last_nonnwhitespace_token(p);
if (_token == NULL) {
p->level--;
}
p->mark = _mark;
D(fprintf(stderr, "%*c%s class_def_raw[%d-%d]: %s failed!\n", p->level, ' ',
- p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'class' NAME ['(' arguments? ')'] &&':' block"));
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'class' NAME ['(' arguments? ')'] ':' block"));
}
_res = NULL;
done:
UNUSED(_end_lineno); // Only used by EXTRA macro
int _end_col_offset = _token->end_col_offset;
UNUSED(_end_col_offset); // Only used by EXTRA macro
- _res = _PyAST_NamedExpr ( CHECK ( expr_ty , _PyPegen_set_expr_context ( p , a , Store ) ) , b , EXTRA );
+ _res = CHECK_VERSION ( expr_ty , 8 , "Assignment expressions are" , _PyAST_NamedExpr ( CHECK ( expr_ty , _PyPegen_set_expr_context ( p , a , Store ) ) , b , EXTRA ) );
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
p->level--;
}
// invalid_with_stmt:
-// | ASYNC? 'with' ','.(expression ['as' star_target])+ &&':'
-// | ASYNC? 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' &&':'
+// | ASYNC? 'with' ','.(expression ['as' star_target])+ NEWLINE
+// | ASYNC? 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' NEWLINE
static void *
invalid_with_stmt_rule(Parser *p)
{
}
void * _res = NULL;
int _mark = p->mark;
- { // ASYNC? 'with' ','.(expression ['as' star_target])+ &&':'
+ { // ASYNC? 'with' ','.(expression ['as' star_target])+ NEWLINE
if (p->error_indicator) {
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> invalid_with_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "ASYNC? 'with' ','.(expression ['as' star_target])+ &&':'"));
+ D(fprintf(stderr, "%*c> invalid_with_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "ASYNC? 'with' ','.(expression ['as' star_target])+ NEWLINE"));
asdl_seq * _gather_163_var;
Token * _keyword;
- Token * _literal;
void *_opt_var;
UNUSED(_opt_var); // Silence compiler warnings
+ Token * newline_var;
if (
(_opt_var = _PyPegen_expect_token(p, ASYNC), !p->error_indicator) // ASYNC?
&&
&&
(_gather_163_var = _gather_163_rule(p)) // ','.(expression ['as' star_target])+
&&
- (_literal = _PyPegen_expect_forced_token(p, 11, ":")) // forced_token=':'
+ (newline_var = _PyPegen_expect_token(p, NEWLINE)) // token='NEWLINE'
)
{
- D(fprintf(stderr, "%*c+ invalid_with_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "ASYNC? 'with' ','.(expression ['as' star_target])+ &&':'"));
- _res = _PyPegen_dummy_name(p, _opt_var, _keyword, _gather_163_var, _literal);
+ D(fprintf(stderr, "%*c+ invalid_with_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "ASYNC? 'with' ','.(expression ['as' star_target])+ NEWLINE"));
+ _res = RAISE_SYNTAX_ERROR ( "expected ':'" );
+ if (_res == NULL && PyErr_Occurred()) {
+ p->error_indicator = 1;
+ p->level--;
+ return NULL;
+ }
goto done;
}
p->mark = _mark;
D(fprintf(stderr, "%*c%s invalid_with_stmt[%d-%d]: %s failed!\n", p->level, ' ',
- p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "ASYNC? 'with' ','.(expression ['as' star_target])+ &&':'"));
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "ASYNC? 'with' ','.(expression ['as' star_target])+ NEWLINE"));
}
- { // ASYNC? 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' &&':'
+ { // ASYNC? 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' NEWLINE
if (p->error_indicator) {
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> invalid_with_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "ASYNC? 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' &&':'"));
+ D(fprintf(stderr, "%*c> invalid_with_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "ASYNC? 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' NEWLINE"));
asdl_seq * _gather_165_var;
Token * _keyword;
Token * _literal;
Token * _literal_1;
- Token * _literal_2;
void *_opt_var;
UNUSED(_opt_var); // Silence compiler warnings
void *_opt_var_1;
UNUSED(_opt_var_1); // Silence compiler warnings
+ Token * newline_var;
if (
(_opt_var = _PyPegen_expect_token(p, ASYNC), !p->error_indicator) // ASYNC?
&&
&&
(_literal_1 = _PyPegen_expect_token(p, 8)) // token=')'
&&
- (_literal_2 = _PyPegen_expect_forced_token(p, 11, ":")) // forced_token=':'
+ (newline_var = _PyPegen_expect_token(p, NEWLINE)) // token='NEWLINE'
)
{
- D(fprintf(stderr, "%*c+ invalid_with_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "ASYNC? 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' &&':'"));
- _res = _PyPegen_dummy_name(p, _opt_var, _keyword, _literal, _gather_165_var, _opt_var_1, _literal_1, _literal_2);
+ D(fprintf(stderr, "%*c+ invalid_with_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "ASYNC? 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' NEWLINE"));
+ _res = RAISE_SYNTAX_ERROR ( "expected ':'" );
+ if (_res == NULL && PyErr_Occurred()) {
+ p->error_indicator = 1;
+ p->level--;
+ return NULL;
+ }
goto done;
}
p->mark = _mark;
D(fprintf(stderr, "%*c%s invalid_with_stmt[%d-%d]: %s failed!\n", p->level, ' ',
- p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "ASYNC? 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' &&':'"));
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "ASYNC? 'with' '(' ','.(expressions ['as' star_target])+ ','? ')' NEWLINE"));
}
_res = NULL;
done:
}
// invalid_match_stmt:
-// | "match" subject_expr !':'
+// | "match" subject_expr NEWLINE
// | "match" subject_expr ':' NEWLINE !INDENT
static void *
invalid_match_stmt_rule(Parser *p)
}
void * _res = NULL;
int _mark = p->mark;
- { // "match" subject_expr !':'
+ { // "match" subject_expr NEWLINE
if (p->error_indicator) {
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> invalid_match_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "\"match\" subject_expr !':'"));
+ D(fprintf(stderr, "%*c> invalid_match_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "\"match\" subject_expr NEWLINE"));
expr_ty _keyword;
+ Token * newline_var;
expr_ty subject_expr_var;
if (
(_keyword = _PyPegen_expect_soft_keyword(p, "match")) // soft_keyword='"match"'
&&
(subject_expr_var = subject_expr_rule(p)) // subject_expr
&&
- _PyPegen_lookahead_with_int(0, _PyPegen_expect_token, p, 11) // token=':'
+ (newline_var = _PyPegen_expect_token(p, NEWLINE)) // token='NEWLINE'
)
{
- D(fprintf(stderr, "%*c+ invalid_match_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "\"match\" subject_expr !':'"));
+ D(fprintf(stderr, "%*c+ invalid_match_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "\"match\" subject_expr NEWLINE"));
_res = CHECK_VERSION ( void * , 10 , "Pattern matching is" , RAISE_SYNTAX_ERROR ( "expected ':'" ) );
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
}
p->mark = _mark;
D(fprintf(stderr, "%*c%s invalid_match_stmt[%d-%d]: %s failed!\n", p->level, ' ',
- p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "\"match\" subject_expr !':'"));
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "\"match\" subject_expr NEWLINE"));
}
{ // "match" subject_expr ':' NEWLINE !INDENT
if (p->error_indicator) {
}
// invalid_case_block:
-// | "case" patterns guard? !':'
+// | "case" patterns guard? NEWLINE
// | "case" patterns guard? ':' NEWLINE !INDENT
static void *
invalid_case_block_rule(Parser *p)
}
void * _res = NULL;
int _mark = p->mark;
- { // "case" patterns guard? !':'
+ { // "case" patterns guard? NEWLINE
if (p->error_indicator) {
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> invalid_case_block[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "\"case\" patterns guard? !':'"));
+ D(fprintf(stderr, "%*c> invalid_case_block[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "\"case\" patterns guard? NEWLINE"));
expr_ty _keyword;
void *_opt_var;
UNUSED(_opt_var); // Silence compiler warnings
+ Token * newline_var;
pattern_ty patterns_var;
if (
(_keyword = _PyPegen_expect_soft_keyword(p, "case")) // soft_keyword='"case"'
&&
(_opt_var = guard_rule(p), !p->error_indicator) // guard?
&&
- _PyPegen_lookahead_with_int(0, _PyPegen_expect_token, p, 11) // token=':'
+ (newline_var = _PyPegen_expect_token(p, NEWLINE)) // token='NEWLINE'
)
{
- D(fprintf(stderr, "%*c+ invalid_case_block[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "\"case\" patterns guard? !':'"));
+ D(fprintf(stderr, "%*c+ invalid_case_block[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "\"case\" patterns guard? NEWLINE"));
_res = RAISE_SYNTAX_ERROR ( "expected ':'" );
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
}
p->mark = _mark;
D(fprintf(stderr, "%*c%s invalid_case_block[%d-%d]: %s failed!\n", p->level, ' ',
- p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "\"case\" patterns guard? !':'"));
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "\"case\" patterns guard? NEWLINE"));
}
{ // "case" patterns guard? ':' NEWLINE !INDENT
if (p->error_indicator) {
return _res;
}
-// invalid_for_stmt: ASYNC? 'for' star_targets 'in' star_expressions ':' NEWLINE !INDENT
+// invalid_for_stmt:
+// | ASYNC? 'for' star_targets 'in' star_expressions NEWLINE
+// | ASYNC? 'for' star_targets 'in' star_expressions ':' NEWLINE !INDENT
static void *
invalid_for_stmt_rule(Parser *p)
{
}
void * _res = NULL;
int _mark = p->mark;
+ { // ASYNC? 'for' star_targets 'in' star_expressions NEWLINE
+ if (p->error_indicator) {
+ p->level--;
+ return NULL;
+ }
+ D(fprintf(stderr, "%*c> invalid_for_stmt[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "ASYNC? 'for' star_targets 'in' star_expressions NEWLINE"));
+ Token * _keyword;
+ Token * _keyword_1;
+ void *_opt_var;
+ UNUSED(_opt_var); // Silence compiler warnings
+ Token * newline_var;
+ expr_ty star_expressions_var;
+ expr_ty star_targets_var;
+ if (
+ (_opt_var = _PyPegen_expect_token(p, ASYNC), !p->error_indicator) // ASYNC?
+ &&
+ (_keyword = _PyPegen_expect_token(p, 517)) // token='for'
+ &&
+ (star_targets_var = star_targets_rule(p)) // star_targets
+ &&
+ (_keyword_1 = _PyPegen_expect_token(p, 518)) // token='in'
+ &&
+ (star_expressions_var = star_expressions_rule(p)) // star_expressions
+ &&
+ (newline_var = _PyPegen_expect_token(p, NEWLINE)) // token='NEWLINE'
+ )
+ {
+ D(fprintf(stderr, "%*c+ invalid_for_stmt[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "ASYNC? 'for' star_targets 'in' star_expressions NEWLINE"));
+ _res = RAISE_SYNTAX_ERROR ( "expected ':'" );
+ if (_res == NULL && PyErr_Occurred()) {
+ p->error_indicator = 1;
+ p->level--;
+ return NULL;
+ }
+ goto done;
+ }
+ p->mark = _mark;
+ D(fprintf(stderr, "%*c%s invalid_for_stmt[%d-%d]: %s failed!\n", p->level, ' ',
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "ASYNC? 'for' star_targets 'in' star_expressions NEWLINE"));
+ }
{ // ASYNC? 'for' star_targets 'in' star_expressions ':' NEWLINE !INDENT
if (p->error_indicator) {
p->level--;
return _res;
}
-// invalid_class_def_raw: 'class' NAME ['(' arguments? ')'] ':' NEWLINE !INDENT
+// invalid_class_def_raw:
+// | 'class' NAME ['(' arguments? ')'] NEWLINE
+// | 'class' NAME ['(' arguments? ')'] ':' NEWLINE !INDENT
static void *
invalid_class_def_raw_rule(Parser *p)
{
}
void * _res = NULL;
int _mark = p->mark;
+ { // 'class' NAME ['(' arguments? ')'] NEWLINE
+ if (p->error_indicator) {
+ p->level--;
+ return NULL;
+ }
+ D(fprintf(stderr, "%*c> invalid_class_def_raw[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'class' NAME ['(' arguments? ')'] NEWLINE"));
+ Token * _keyword;
+ void *_opt_var;
+ UNUSED(_opt_var); // Silence compiler warnings
+ expr_ty name_var;
+ Token * newline_var;
+ if (
+ (_keyword = _PyPegen_expect_token(p, 527)) // token='class'
+ &&
+ (name_var = _PyPegen_name_token(p)) // NAME
+ &&
+ (_opt_var = _tmp_177_rule(p), !p->error_indicator) // ['(' arguments? ')']
+ &&
+ (newline_var = _PyPegen_expect_token(p, NEWLINE)) // token='NEWLINE'
+ )
+ {
+ D(fprintf(stderr, "%*c+ invalid_class_def_raw[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'class' NAME ['(' arguments? ')'] NEWLINE"));
+ _res = RAISE_SYNTAX_ERROR ( "expected ':'" );
+ if (_res == NULL && PyErr_Occurred()) {
+ p->error_indicator = 1;
+ p->level--;
+ return NULL;
+ }
+ goto done;
+ }
+ p->mark = _mark;
+ D(fprintf(stderr, "%*c%s invalid_class_def_raw[%d-%d]: %s failed!\n", p->level, ' ',
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'class' NAME ['(' arguments? ')'] NEWLINE"));
+ }
{ // 'class' NAME ['(' arguments? ')'] ':' NEWLINE !INDENT
if (p->error_indicator) {
p->level--;
&&
(name_var = _PyPegen_name_token(p)) // NAME
&&
- (_opt_var = _tmp_177_rule(p), !p->error_indicator) // ['(' arguments? ')']
+ (_opt_var = _tmp_178_rule(p), !p->error_indicator) // ['(' arguments? ')']
&&
(_literal = _PyPegen_expect_token(p, 11)) // token=':'
&&
return NULL;
}
D(fprintf(stderr, "%*c> invalid_double_starred_kvpairs[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "','.double_starred_kvpair+ ',' invalid_kvpair"));
- asdl_seq * _gather_178_var;
+ asdl_seq * _gather_179_var;
Token * _literal;
void *invalid_kvpair_var;
if (
- (_gather_178_var = _gather_178_rule(p)) // ','.double_starred_kvpair+
+ (_gather_179_var = _gather_179_rule(p)) // ','.double_starred_kvpair+
&&
(_literal = _PyPegen_expect_token(p, 12)) // token=','
&&
)
{
D(fprintf(stderr, "%*c+ invalid_double_starred_kvpairs[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "','.double_starred_kvpair+ ',' invalid_kvpair"));
- _res = _PyPegen_dummy_name(p, _gather_178_var, _literal, invalid_kvpair_var);
+ _res = _PyPegen_dummy_name(p, _gather_179_var, _literal, invalid_kvpair_var);
goto done;
}
p->mark = _mark;
&&
(a = _PyPegen_expect_token(p, 11)) // token=':'
&&
- _PyPegen_lookahead(1, _tmp_180_rule, p)
+ _PyPegen_lookahead(1, _tmp_181_rule, p)
)
{
D(fprintf(stderr, "%*c+ invalid_double_starred_kvpairs[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expression ':' &('}' | ',')"));
return _res;
}
-// invalid_kvpair: expression !(':') | expression ':' '*' bitwise_or | expression ':'
+// invalid_kvpair:
+// | expression !(':')
+// | expression ':' '*' bitwise_or
+// | expression ':' &('}' | ',')
static void *
invalid_kvpair_rule(Parser *p)
{
D(fprintf(stderr, "%*c%s invalid_kvpair[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "expression ':' '*' bitwise_or"));
}
- { // expression ':'
+ { // expression ':' &('}' | ',')
if (p->error_indicator) {
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> invalid_kvpair[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expression ':'"));
+ D(fprintf(stderr, "%*c> invalid_kvpair[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expression ':' &('}' | ',')"));
Token * a;
expr_ty expression_var;
if (
(expression_var = expression_rule(p)) // expression
&&
(a = _PyPegen_expect_token(p, 11)) // token=':'
+ &&
+ _PyPegen_lookahead(1, _tmp_182_rule, p)
)
{
- D(fprintf(stderr, "%*c+ invalid_kvpair[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expression ':'"));
+ D(fprintf(stderr, "%*c+ invalid_kvpair[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expression ':' &('}' | ',')"));
_res = RAISE_SYNTAX_ERROR_KNOWN_LOCATION ( a , "expression expected after dictionary key and ':'" );
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
}
p->mark = _mark;
D(fprintf(stderr, "%*c%s invalid_kvpair[%d-%d]: %s failed!\n", p->level, ' ',
- p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "expression ':'"));
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "expression ':' &('}' | ',')"));
}
_res = NULL;
done:
return NULL;
}
D(fprintf(stderr, "%*c> _loop1_22[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "(star_targets '=')"));
- void *_tmp_181_var;
+ void *_tmp_183_var;
while (
- (_tmp_181_var = _tmp_181_rule(p)) // star_targets '='
+ (_tmp_183_var = _tmp_183_rule(p)) // star_targets '='
)
{
- _res = _tmp_181_var;
+ _res = _tmp_183_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop0_31[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "('.' | '...')"));
- void *_tmp_182_var;
+ void *_tmp_184_var;
while (
- (_tmp_182_var = _tmp_182_rule(p)) // '.' | '...'
+ (_tmp_184_var = _tmp_184_rule(p)) // '.' | '...'
)
{
- _res = _tmp_182_var;
+ _res = _tmp_184_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop1_32[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "('.' | '...')"));
- void *_tmp_183_var;
+ void *_tmp_185_var;
while (
- (_tmp_183_var = _tmp_183_rule(p)) // '.' | '...'
+ (_tmp_185_var = _tmp_185_rule(p)) // '.' | '...'
)
{
- _res = _tmp_183_var;
+ _res = _tmp_185_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop1_84[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "('@' named_expression NEWLINE)"));
- void *_tmp_184_var;
+ void *_tmp_186_var;
while (
- (_tmp_184_var = _tmp_184_rule(p)) // '@' named_expression NEWLINE
+ (_tmp_186_var = _tmp_186_rule(p)) // '@' named_expression NEWLINE
)
{
- _res = _tmp_184_var;
+ _res = _tmp_186_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop1_86[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "(',' star_expression)"));
- void *_tmp_185_var;
+ void *_tmp_187_var;
while (
- (_tmp_185_var = _tmp_185_rule(p)) // ',' star_expression
+ (_tmp_187_var = _tmp_187_rule(p)) // ',' star_expression
)
{
- _res = _tmp_185_var;
+ _res = _tmp_187_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop1_89[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "(',' expression)"));
- void *_tmp_186_var;
+ void *_tmp_188_var;
while (
- (_tmp_186_var = _tmp_186_rule(p)) // ',' expression
+ (_tmp_188_var = _tmp_188_rule(p)) // ',' expression
)
{
- _res = _tmp_186_var;
+ _res = _tmp_188_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop1_104[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "('or' conjunction)"));
- void *_tmp_187_var;
+ void *_tmp_189_var;
while (
- (_tmp_187_var = _tmp_187_rule(p)) // 'or' conjunction
+ (_tmp_189_var = _tmp_189_rule(p)) // 'or' conjunction
)
{
- _res = _tmp_187_var;
+ _res = _tmp_189_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop1_105[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "('and' inversion)"));
- void *_tmp_188_var;
+ void *_tmp_190_var;
while (
- (_tmp_188_var = _tmp_188_rule(p)) // 'and' inversion
+ (_tmp_190_var = _tmp_190_rule(p)) // 'and' inversion
)
{
- _res = _tmp_188_var;
+ _res = _tmp_190_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop0_121[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "('if' disjunction)"));
- void *_tmp_189_var;
+ void *_tmp_191_var;
while (
- (_tmp_189_var = _tmp_189_rule(p)) // 'if' disjunction
+ (_tmp_191_var = _tmp_191_rule(p)) // 'if' disjunction
)
{
- _res = _tmp_189_var;
+ _res = _tmp_191_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop0_122[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "('if' disjunction)"));
- void *_tmp_190_var;
+ void *_tmp_192_var;
while (
- (_tmp_190_var = _tmp_190_rule(p)) // 'if' disjunction
+ (_tmp_192_var = _tmp_192_rule(p)) // 'if' disjunction
)
{
- _res = _tmp_190_var;
+ _res = _tmp_192_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
while (
(_literal = _PyPegen_expect_token(p, 12)) // token=','
&&
- (elem = _tmp_191_rule(p)) // starred_expression | (assignment_expression | expression !':=') !'='
+ (elem = _tmp_193_rule(p)) // starred_expression | (assignment_expression | expression !':=') !'='
)
{
_res = elem;
void *elem;
asdl_seq * seq;
if (
- (elem = _tmp_191_rule(p)) // starred_expression | (assignment_expression | expression !':=') !'='
+ (elem = _tmp_193_rule(p)) // starred_expression | (assignment_expression | expression !':=') !'='
&&
(seq = _loop0_124_rule(p)) // _loop0_124
)
return NULL;
}
D(fprintf(stderr, "%*c> _loop0_134[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "(',' star_target)"));
- void *_tmp_192_var;
+ void *_tmp_194_var;
while (
- (_tmp_192_var = _tmp_192_rule(p)) // ',' star_target
+ (_tmp_194_var = _tmp_194_rule(p)) // ',' star_target
)
{
- _res = _tmp_192_var;
+ _res = _tmp_194_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop1_137[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "(',' star_target)"));
- void *_tmp_193_var;
+ void *_tmp_195_var;
while (
- (_tmp_193_var = _tmp_193_rule(p)) // ',' star_target
+ (_tmp_195_var = _tmp_195_rule(p)) // ',' star_target
)
{
- _res = _tmp_193_var;
+ _res = _tmp_195_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop0_150[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "(star_targets '=')"));
- void *_tmp_194_var;
+ void *_tmp_196_var;
while (
- (_tmp_194_var = _tmp_194_rule(p)) // star_targets '='
+ (_tmp_196_var = _tmp_196_rule(p)) // star_targets '='
)
{
- _res = _tmp_194_var;
+ _res = _tmp_196_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
return NULL;
}
D(fprintf(stderr, "%*c> _loop0_151[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "(star_targets '=')"));
- void *_tmp_195_var;
+ void *_tmp_197_var;
while (
- (_tmp_195_var = _tmp_195_rule(p)) // star_targets '='
+ (_tmp_197_var = _tmp_197_rule(p)) // star_targets '='
)
{
- _res = _tmp_195_var;
+ _res = _tmp_197_var;
if (_n == _children_capacity) {
_children_capacity *= 2;
void **_new_children = PyMem_Realloc(_children, _children_capacity*sizeof(void *));
}
D(fprintf(stderr, "%*c> _tmp_160[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' (')' | '**')"));
Token * _literal;
- void *_tmp_196_var;
+ void *_tmp_198_var;
if (
(_literal = _PyPegen_expect_token(p, 12)) // token=','
&&
- (_tmp_196_var = _tmp_196_rule(p)) // ')' | '**'
+ (_tmp_198_var = _tmp_198_rule(p)) // ')' | '**'
)
{
D(fprintf(stderr, "%*c+ _tmp_160[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "',' (')' | '**')"));
- _res = _PyPegen_dummy_name(p, _literal, _tmp_196_var);
+ _res = _PyPegen_dummy_name(p, _literal, _tmp_198_var);
goto done;
}
p->mark = _mark;
}
D(fprintf(stderr, "%*c> _tmp_161[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' (':' | '**')"));
Token * _literal;
- void *_tmp_197_var;
+ void *_tmp_199_var;
if (
(_literal = _PyPegen_expect_token(p, 12)) // token=','
&&
- (_tmp_197_var = _tmp_197_rule(p)) // ':' | '**'
+ (_tmp_199_var = _tmp_199_rule(p)) // ':' | '**'
)
{
D(fprintf(stderr, "%*c+ _tmp_161[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "',' (':' | '**')"));
- _res = _PyPegen_dummy_name(p, _literal, _tmp_197_var);
+ _res = _PyPegen_dummy_name(p, _literal, _tmp_199_var);
goto done;
}
p->mark = _mark;
while (
(_literal = _PyPegen_expect_token(p, 12)) // token=','
&&
- (elem = _tmp_198_rule(p)) // expression ['as' star_target]
+ (elem = _tmp_200_rule(p)) // expression ['as' star_target]
)
{
_res = elem;
void *elem;
asdl_seq * seq;
if (
- (elem = _tmp_198_rule(p)) // expression ['as' star_target]
+ (elem = _tmp_200_rule(p)) // expression ['as' star_target]
&&
(seq = _loop0_164_rule(p)) // _loop0_164
)
while (
(_literal = _PyPegen_expect_token(p, 12)) // token=','
&&
- (elem = _tmp_199_rule(p)) // expressions ['as' star_target]
+ (elem = _tmp_201_rule(p)) // expressions ['as' star_target]
)
{
_res = elem;
void *elem;
asdl_seq * seq;
if (
- (elem = _tmp_199_rule(p)) // expressions ['as' star_target]
+ (elem = _tmp_201_rule(p)) // expressions ['as' star_target]
&&
(seq = _loop0_166_rule(p)) // _loop0_166
)
while (
(_literal = _PyPegen_expect_token(p, 12)) // token=','
&&
- (elem = _tmp_200_rule(p)) // expression ['as' star_target]
+ (elem = _tmp_202_rule(p)) // expression ['as' star_target]
)
{
_res = elem;
void *elem;
asdl_seq * seq;
if (
- (elem = _tmp_200_rule(p)) // expression ['as' star_target]
+ (elem = _tmp_202_rule(p)) // expression ['as' star_target]
&&
(seq = _loop0_168_rule(p)) // _loop0_168
)
while (
(_literal = _PyPegen_expect_token(p, 12)) // token=','
&&
- (elem = _tmp_201_rule(p)) // expressions ['as' star_target]
+ (elem = _tmp_203_rule(p)) // expressions ['as' star_target]
)
{
_res = elem;
void *elem;
asdl_seq * seq;
if (
- (elem = _tmp_201_rule(p)) // expressions ['as' star_target]
+ (elem = _tmp_203_rule(p)) // expressions ['as' star_target]
&&
(seq = _loop0_170_rule(p)) // _loop0_170
)
return _res;
}
-// _loop0_179: ',' double_starred_kvpair
+// _tmp_178: '(' arguments? ')'
+static void *
+_tmp_178_rule(Parser *p)
+{
+ if (p->level++ == MAXSTACK) {
+ p->error_indicator = 1;
+ PyErr_NoMemory();
+ }
+ if (p->error_indicator) {
+ p->level--;
+ return NULL;
+ }
+ void * _res = NULL;
+ int _mark = p->mark;
+ { // '(' arguments? ')'
+ if (p->error_indicator) {
+ p->level--;
+ return NULL;
+ }
+ D(fprintf(stderr, "%*c> _tmp_178[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'(' arguments? ')'"));
+ Token * _literal;
+ Token * _literal_1;
+ void *_opt_var;
+ UNUSED(_opt_var); // Silence compiler warnings
+ if (
+ (_literal = _PyPegen_expect_token(p, 7)) // token='('
+ &&
+ (_opt_var = arguments_rule(p), !p->error_indicator) // arguments?
+ &&
+ (_literal_1 = _PyPegen_expect_token(p, 8)) // token=')'
+ )
+ {
+ D(fprintf(stderr, "%*c+ _tmp_178[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'(' arguments? ')'"));
+ _res = _PyPegen_dummy_name(p, _literal, _opt_var, _literal_1);
+ goto done;
+ }
+ p->mark = _mark;
+ D(fprintf(stderr, "%*c%s _tmp_178[%d-%d]: %s failed!\n", p->level, ' ',
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'(' arguments? ')'"));
+ }
+ _res = NULL;
+ done:
+ p->level--;
+ return _res;
+}
+
+// _loop0_180: ',' double_starred_kvpair
static asdl_seq *
-_loop0_179_rule(Parser *p)
+_loop0_180_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _loop0_179[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' double_starred_kvpair"));
+ D(fprintf(stderr, "%*c> _loop0_180[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' double_starred_kvpair"));
Token * _literal;
KeyValuePair* elem;
while (
_mark = p->mark;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _loop0_179[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _loop0_180[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "',' double_starred_kvpair"));
}
asdl_seq *_seq = (asdl_seq*)_Py_asdl_generic_seq_new(_n, p->arena);
}
for (int i = 0; i < _n; i++) asdl_seq_SET_UNTYPED(_seq, i, _children[i]);
PyMem_Free(_children);
- _PyPegen_insert_memo(p, _start_mark, _loop0_179_type, _seq);
+ _PyPegen_insert_memo(p, _start_mark, _loop0_180_type, _seq);
p->level--;
return _seq;
}
-// _gather_178: double_starred_kvpair _loop0_179
+// _gather_179: double_starred_kvpair _loop0_180
static asdl_seq *
-_gather_178_rule(Parser *p)
+_gather_179_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
}
asdl_seq * _res = NULL;
int _mark = p->mark;
- { // double_starred_kvpair _loop0_179
+ { // double_starred_kvpair _loop0_180
if (p->error_indicator) {
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _gather_178[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "double_starred_kvpair _loop0_179"));
+ D(fprintf(stderr, "%*c> _gather_179[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "double_starred_kvpair _loop0_180"));
KeyValuePair* elem;
asdl_seq * seq;
if (
(elem = double_starred_kvpair_rule(p)) // double_starred_kvpair
&&
- (seq = _loop0_179_rule(p)) // _loop0_179
+ (seq = _loop0_180_rule(p)) // _loop0_180
)
{
- D(fprintf(stderr, "%*c+ _gather_178[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "double_starred_kvpair _loop0_179"));
+ D(fprintf(stderr, "%*c+ _gather_179[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "double_starred_kvpair _loop0_180"));
_res = _PyPegen_seq_insert_in_front(p, elem, seq);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _gather_178[%d-%d]: %s failed!\n", p->level, ' ',
- p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "double_starred_kvpair _loop0_179"));
+ D(fprintf(stderr, "%*c%s _gather_179[%d-%d]: %s failed!\n", p->level, ' ',
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "double_starred_kvpair _loop0_180"));
}
_res = NULL;
done:
return _res;
}
-// _tmp_180: '}' | ','
+// _tmp_181: '}' | ','
static void *
-_tmp_180_rule(Parser *p)
+_tmp_181_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_180[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'}'"));
+ D(fprintf(stderr, "%*c> _tmp_181[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'}'"));
Token * _literal;
if (
(_literal = _PyPegen_expect_token(p, 26)) // token='}'
)
{
- D(fprintf(stderr, "%*c+ _tmp_180[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'}'"));
+ D(fprintf(stderr, "%*c+ _tmp_181[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'}'"));
_res = _literal;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_180[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_181[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'}'"));
}
{ // ','
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_180[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "','"));
+ D(fprintf(stderr, "%*c> _tmp_181[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "','"));
Token * _literal;
if (
(_literal = _PyPegen_expect_token(p, 12)) // token=','
)
{
- D(fprintf(stderr, "%*c+ _tmp_180[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "','"));
+ D(fprintf(stderr, "%*c+ _tmp_181[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "','"));
_res = _literal;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_180[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_181[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "','"));
}
_res = NULL;
return _res;
}
-// _tmp_181: star_targets '='
+// _tmp_182: '}' | ','
static void *
-_tmp_181_rule(Parser *p)
+_tmp_182_rule(Parser *p)
+{
+ if (p->level++ == MAXSTACK) {
+ p->error_indicator = 1;
+ PyErr_NoMemory();
+ }
+ if (p->error_indicator) {
+ p->level--;
+ return NULL;
+ }
+ void * _res = NULL;
+ int _mark = p->mark;
+ { // '}'
+ if (p->error_indicator) {
+ p->level--;
+ return NULL;
+ }
+ D(fprintf(stderr, "%*c> _tmp_182[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'}'"));
+ Token * _literal;
+ if (
+ (_literal = _PyPegen_expect_token(p, 26)) // token='}'
+ )
+ {
+ D(fprintf(stderr, "%*c+ _tmp_182[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'}'"));
+ _res = _literal;
+ goto done;
+ }
+ p->mark = _mark;
+ D(fprintf(stderr, "%*c%s _tmp_182[%d-%d]: %s failed!\n", p->level, ' ',
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'}'"));
+ }
+ { // ','
+ if (p->error_indicator) {
+ p->level--;
+ return NULL;
+ }
+ D(fprintf(stderr, "%*c> _tmp_182[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "','"));
+ Token * _literal;
+ if (
+ (_literal = _PyPegen_expect_token(p, 12)) // token=','
+ )
+ {
+ D(fprintf(stderr, "%*c+ _tmp_182[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "','"));
+ _res = _literal;
+ goto done;
+ }
+ p->mark = _mark;
+ D(fprintf(stderr, "%*c%s _tmp_182[%d-%d]: %s failed!\n", p->level, ' ',
+ p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "','"));
+ }
+ _res = NULL;
+ done:
+ p->level--;
+ return _res;
+}
+
+// _tmp_183: star_targets '='
+static void *
+_tmp_183_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_181[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
+ D(fprintf(stderr, "%*c> _tmp_183[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
Token * _literal;
expr_ty z;
if (
(_literal = _PyPegen_expect_token(p, 22)) // token='='
)
{
- D(fprintf(stderr, "%*c+ _tmp_181[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
+ D(fprintf(stderr, "%*c+ _tmp_183[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
_res = z;
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_181[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_183[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "star_targets '='"));
}
_res = NULL;
return _res;
}
-// _tmp_182: '.' | '...'
+// _tmp_184: '.' | '...'
static void *
-_tmp_182_rule(Parser *p)
+_tmp_184_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_182[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'.'"));
+ D(fprintf(stderr, "%*c> _tmp_184[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'.'"));
Token * _literal;
if (
(_literal = _PyPegen_expect_token(p, 23)) // token='.'
)
{
- D(fprintf(stderr, "%*c+ _tmp_182[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'.'"));
+ D(fprintf(stderr, "%*c+ _tmp_184[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'.'"));
_res = _literal;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_182[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_184[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'.'"));
}
{ // '...'
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_182[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'...'"));
+ D(fprintf(stderr, "%*c> _tmp_184[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'...'"));
Token * _literal;
if (
(_literal = _PyPegen_expect_token(p, 52)) // token='...'
)
{
- D(fprintf(stderr, "%*c+ _tmp_182[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'...'"));
+ D(fprintf(stderr, "%*c+ _tmp_184[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'...'"));
_res = _literal;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_182[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_184[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'...'"));
}
_res = NULL;
return _res;
}
-// _tmp_183: '.' | '...'
+// _tmp_185: '.' | '...'
static void *
-_tmp_183_rule(Parser *p)
+_tmp_185_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_183[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'.'"));
+ D(fprintf(stderr, "%*c> _tmp_185[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'.'"));
Token * _literal;
if (
(_literal = _PyPegen_expect_token(p, 23)) // token='.'
)
{
- D(fprintf(stderr, "%*c+ _tmp_183[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'.'"));
+ D(fprintf(stderr, "%*c+ _tmp_185[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'.'"));
_res = _literal;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_183[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_185[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'.'"));
}
{ // '...'
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_183[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'...'"));
+ D(fprintf(stderr, "%*c> _tmp_185[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'...'"));
Token * _literal;
if (
(_literal = _PyPegen_expect_token(p, 52)) // token='...'
)
{
- D(fprintf(stderr, "%*c+ _tmp_183[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'...'"));
+ D(fprintf(stderr, "%*c+ _tmp_185[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'...'"));
_res = _literal;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_183[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_185[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'...'"));
}
_res = NULL;
return _res;
}
-// _tmp_184: '@' named_expression NEWLINE
+// _tmp_186: '@' named_expression NEWLINE
static void *
-_tmp_184_rule(Parser *p)
+_tmp_186_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_184[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'@' named_expression NEWLINE"));
+ D(fprintf(stderr, "%*c> _tmp_186[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'@' named_expression NEWLINE"));
Token * _literal;
expr_ty f;
Token * newline_var;
(newline_var = _PyPegen_expect_token(p, NEWLINE)) // token='NEWLINE'
)
{
- D(fprintf(stderr, "%*c+ _tmp_184[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'@' named_expression NEWLINE"));
+ D(fprintf(stderr, "%*c+ _tmp_186[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'@' named_expression NEWLINE"));
_res = f;
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_184[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_186[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'@' named_expression NEWLINE"));
}
_res = NULL;
return _res;
}
-// _tmp_185: ',' star_expression
+// _tmp_187: ',' star_expression
static void *
-_tmp_185_rule(Parser *p)
+_tmp_187_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_185[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' star_expression"));
+ D(fprintf(stderr, "%*c> _tmp_187[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' star_expression"));
Token * _literal;
expr_ty c;
if (
(c = star_expression_rule(p)) // star_expression
)
{
- D(fprintf(stderr, "%*c+ _tmp_185[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "',' star_expression"));
+ D(fprintf(stderr, "%*c+ _tmp_187[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "',' star_expression"));
_res = c;
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_185[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_187[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "',' star_expression"));
}
_res = NULL;
return _res;
}
-// _tmp_186: ',' expression
+// _tmp_188: ',' expression
static void *
-_tmp_186_rule(Parser *p)
+_tmp_188_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_186[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' expression"));
+ D(fprintf(stderr, "%*c> _tmp_188[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' expression"));
Token * _literal;
expr_ty c;
if (
(c = expression_rule(p)) // expression
)
{
- D(fprintf(stderr, "%*c+ _tmp_186[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "',' expression"));
+ D(fprintf(stderr, "%*c+ _tmp_188[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "',' expression"));
_res = c;
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_186[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_188[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "',' expression"));
}
_res = NULL;
return _res;
}
-// _tmp_187: 'or' conjunction
+// _tmp_189: 'or' conjunction
static void *
-_tmp_187_rule(Parser *p)
+_tmp_189_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_187[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'or' conjunction"));
+ D(fprintf(stderr, "%*c> _tmp_189[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'or' conjunction"));
Token * _keyword;
expr_ty c;
if (
(c = conjunction_rule(p)) // conjunction
)
{
- D(fprintf(stderr, "%*c+ _tmp_187[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'or' conjunction"));
+ D(fprintf(stderr, "%*c+ _tmp_189[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'or' conjunction"));
_res = c;
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_187[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_189[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'or' conjunction"));
}
_res = NULL;
return _res;
}
-// _tmp_188: 'and' inversion
+// _tmp_190: 'and' inversion
static void *
-_tmp_188_rule(Parser *p)
+_tmp_190_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_188[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'and' inversion"));
+ D(fprintf(stderr, "%*c> _tmp_190[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'and' inversion"));
Token * _keyword;
expr_ty c;
if (
(c = inversion_rule(p)) // inversion
)
{
- D(fprintf(stderr, "%*c+ _tmp_188[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'and' inversion"));
+ D(fprintf(stderr, "%*c+ _tmp_190[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'and' inversion"));
_res = c;
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_188[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_190[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'and' inversion"));
}
_res = NULL;
return _res;
}
-// _tmp_189: 'if' disjunction
+// _tmp_191: 'if' disjunction
static void *
-_tmp_189_rule(Parser *p)
+_tmp_191_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_189[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'if' disjunction"));
+ D(fprintf(stderr, "%*c> _tmp_191[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'if' disjunction"));
Token * _keyword;
expr_ty z;
if (
(z = disjunction_rule(p)) // disjunction
)
{
- D(fprintf(stderr, "%*c+ _tmp_189[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'if' disjunction"));
+ D(fprintf(stderr, "%*c+ _tmp_191[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'if' disjunction"));
_res = z;
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_189[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_191[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'if' disjunction"));
}
_res = NULL;
return _res;
}
-// _tmp_190: 'if' disjunction
+// _tmp_192: 'if' disjunction
static void *
-_tmp_190_rule(Parser *p)
+_tmp_192_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_190[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'if' disjunction"));
+ D(fprintf(stderr, "%*c> _tmp_192[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'if' disjunction"));
Token * _keyword;
expr_ty z;
if (
(z = disjunction_rule(p)) // disjunction
)
{
- D(fprintf(stderr, "%*c+ _tmp_190[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'if' disjunction"));
+ D(fprintf(stderr, "%*c+ _tmp_192[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'if' disjunction"));
_res = z;
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_190[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_192[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'if' disjunction"));
}
_res = NULL;
return _res;
}
-// _tmp_191: starred_expression | (assignment_expression | expression !':=') !'='
+// _tmp_193: starred_expression | (assignment_expression | expression !':=') !'='
static void *
-_tmp_191_rule(Parser *p)
+_tmp_193_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_191[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "starred_expression"));
+ D(fprintf(stderr, "%*c> _tmp_193[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "starred_expression"));
expr_ty starred_expression_var;
if (
(starred_expression_var = starred_expression_rule(p)) // starred_expression
)
{
- D(fprintf(stderr, "%*c+ _tmp_191[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "starred_expression"));
+ D(fprintf(stderr, "%*c+ _tmp_193[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "starred_expression"));
_res = starred_expression_var;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_191[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_193[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "starred_expression"));
}
{ // (assignment_expression | expression !':=') !'='
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_191[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "(assignment_expression | expression !':=') !'='"));
- void *_tmp_202_var;
+ D(fprintf(stderr, "%*c> _tmp_193[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "(assignment_expression | expression !':=') !'='"));
+ void *_tmp_204_var;
if (
- (_tmp_202_var = _tmp_202_rule(p)) // assignment_expression | expression !':='
+ (_tmp_204_var = _tmp_204_rule(p)) // assignment_expression | expression !':='
&&
_PyPegen_lookahead_with_int(0, _PyPegen_expect_token, p, 22) // token='='
)
{
- D(fprintf(stderr, "%*c+ _tmp_191[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "(assignment_expression | expression !':=') !'='"));
- _res = _tmp_202_var;
+ D(fprintf(stderr, "%*c+ _tmp_193[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "(assignment_expression | expression !':=') !'='"));
+ _res = _tmp_204_var;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_191[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_193[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "(assignment_expression | expression !':=') !'='"));
}
_res = NULL;
return _res;
}
-// _tmp_192: ',' star_target
+// _tmp_194: ',' star_target
static void *
-_tmp_192_rule(Parser *p)
+_tmp_194_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_192[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' star_target"));
+ D(fprintf(stderr, "%*c> _tmp_194[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' star_target"));
Token * _literal;
expr_ty c;
if (
(c = star_target_rule(p)) // star_target
)
{
- D(fprintf(stderr, "%*c+ _tmp_192[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "',' star_target"));
+ D(fprintf(stderr, "%*c+ _tmp_194[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "',' star_target"));
_res = c;
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_192[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_194[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "',' star_target"));
}
_res = NULL;
return _res;
}
-// _tmp_193: ',' star_target
+// _tmp_195: ',' star_target
static void *
-_tmp_193_rule(Parser *p)
+_tmp_195_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_193[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' star_target"));
+ D(fprintf(stderr, "%*c> _tmp_195[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "',' star_target"));
Token * _literal;
expr_ty c;
if (
(c = star_target_rule(p)) // star_target
)
{
- D(fprintf(stderr, "%*c+ _tmp_193[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "',' star_target"));
+ D(fprintf(stderr, "%*c+ _tmp_195[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "',' star_target"));
_res = c;
if (_res == NULL && PyErr_Occurred()) {
p->error_indicator = 1;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_193[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_195[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "',' star_target"));
}
_res = NULL;
return _res;
}
-// _tmp_194: star_targets '='
+// _tmp_196: star_targets '='
static void *
-_tmp_194_rule(Parser *p)
+_tmp_196_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_194[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
+ D(fprintf(stderr, "%*c> _tmp_196[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
Token * _literal;
expr_ty star_targets_var;
if (
(_literal = _PyPegen_expect_token(p, 22)) // token='='
)
{
- D(fprintf(stderr, "%*c+ _tmp_194[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
+ D(fprintf(stderr, "%*c+ _tmp_196[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
_res = _PyPegen_dummy_name(p, star_targets_var, _literal);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_194[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_196[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "star_targets '='"));
}
_res = NULL;
return _res;
}
-// _tmp_195: star_targets '='
+// _tmp_197: star_targets '='
static void *
-_tmp_195_rule(Parser *p)
+_tmp_197_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_195[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
+ D(fprintf(stderr, "%*c> _tmp_197[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
Token * _literal;
expr_ty star_targets_var;
if (
(_literal = _PyPegen_expect_token(p, 22)) // token='='
)
{
- D(fprintf(stderr, "%*c+ _tmp_195[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
+ D(fprintf(stderr, "%*c+ _tmp_197[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "star_targets '='"));
_res = _PyPegen_dummy_name(p, star_targets_var, _literal);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_195[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_197[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "star_targets '='"));
}
_res = NULL;
return _res;
}
-// _tmp_196: ')' | '**'
+// _tmp_198: ')' | '**'
static void *
-_tmp_196_rule(Parser *p)
+_tmp_198_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_196[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "')'"));
+ D(fprintf(stderr, "%*c> _tmp_198[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "')'"));
Token * _literal;
if (
(_literal = _PyPegen_expect_token(p, 8)) // token=')'
)
{
- D(fprintf(stderr, "%*c+ _tmp_196[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "')'"));
+ D(fprintf(stderr, "%*c+ _tmp_198[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "')'"));
_res = _literal;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_196[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_198[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "')'"));
}
{ // '**'
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_196[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'**'"));
+ D(fprintf(stderr, "%*c> _tmp_198[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'**'"));
Token * _literal;
if (
(_literal = _PyPegen_expect_token(p, 35)) // token='**'
)
{
- D(fprintf(stderr, "%*c+ _tmp_196[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'**'"));
+ D(fprintf(stderr, "%*c+ _tmp_198[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'**'"));
_res = _literal;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_196[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_198[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'**'"));
}
_res = NULL;
return _res;
}
-// _tmp_197: ':' | '**'
+// _tmp_199: ':' | '**'
static void *
-_tmp_197_rule(Parser *p)
+_tmp_199_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_197[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "':'"));
+ D(fprintf(stderr, "%*c> _tmp_199[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "':'"));
Token * _literal;
if (
(_literal = _PyPegen_expect_token(p, 11)) // token=':'
)
{
- D(fprintf(stderr, "%*c+ _tmp_197[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "':'"));
+ D(fprintf(stderr, "%*c+ _tmp_199[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "':'"));
_res = _literal;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_197[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_199[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "':'"));
}
{ // '**'
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_197[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'**'"));
+ D(fprintf(stderr, "%*c> _tmp_199[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'**'"));
Token * _literal;
if (
(_literal = _PyPegen_expect_token(p, 35)) // token='**'
)
{
- D(fprintf(stderr, "%*c+ _tmp_197[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'**'"));
+ D(fprintf(stderr, "%*c+ _tmp_199[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'**'"));
_res = _literal;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_197[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_199[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'**'"));
}
_res = NULL;
return _res;
}
-// _tmp_198: expression ['as' star_target]
+// _tmp_200: expression ['as' star_target]
static void *
-_tmp_198_rule(Parser *p)
+_tmp_200_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_198[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expression ['as' star_target]"));
+ D(fprintf(stderr, "%*c> _tmp_200[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expression ['as' star_target]"));
void *_opt_var;
UNUSED(_opt_var); // Silence compiler warnings
expr_ty expression_var;
if (
(expression_var = expression_rule(p)) // expression
&&
- (_opt_var = _tmp_203_rule(p), !p->error_indicator) // ['as' star_target]
+ (_opt_var = _tmp_205_rule(p), !p->error_indicator) // ['as' star_target]
)
{
- D(fprintf(stderr, "%*c+ _tmp_198[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expression ['as' star_target]"));
+ D(fprintf(stderr, "%*c+ _tmp_200[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expression ['as' star_target]"));
_res = _PyPegen_dummy_name(p, expression_var, _opt_var);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_198[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_200[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "expression ['as' star_target]"));
}
_res = NULL;
return _res;
}
-// _tmp_199: expressions ['as' star_target]
+// _tmp_201: expressions ['as' star_target]
static void *
-_tmp_199_rule(Parser *p)
+_tmp_201_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_199[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expressions ['as' star_target]"));
+ D(fprintf(stderr, "%*c> _tmp_201[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expressions ['as' star_target]"));
void *_opt_var;
UNUSED(_opt_var); // Silence compiler warnings
expr_ty expressions_var;
if (
(expressions_var = expressions_rule(p)) // expressions
&&
- (_opt_var = _tmp_204_rule(p), !p->error_indicator) // ['as' star_target]
+ (_opt_var = _tmp_206_rule(p), !p->error_indicator) // ['as' star_target]
)
{
- D(fprintf(stderr, "%*c+ _tmp_199[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expressions ['as' star_target]"));
+ D(fprintf(stderr, "%*c+ _tmp_201[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expressions ['as' star_target]"));
_res = _PyPegen_dummy_name(p, expressions_var, _opt_var);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_199[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_201[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "expressions ['as' star_target]"));
}
_res = NULL;
return _res;
}
-// _tmp_200: expression ['as' star_target]
+// _tmp_202: expression ['as' star_target]
static void *
-_tmp_200_rule(Parser *p)
+_tmp_202_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_200[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expression ['as' star_target]"));
+ D(fprintf(stderr, "%*c> _tmp_202[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expression ['as' star_target]"));
void *_opt_var;
UNUSED(_opt_var); // Silence compiler warnings
expr_ty expression_var;
if (
(expression_var = expression_rule(p)) // expression
&&
- (_opt_var = _tmp_205_rule(p), !p->error_indicator) // ['as' star_target]
+ (_opt_var = _tmp_207_rule(p), !p->error_indicator) // ['as' star_target]
)
{
- D(fprintf(stderr, "%*c+ _tmp_200[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expression ['as' star_target]"));
+ D(fprintf(stderr, "%*c+ _tmp_202[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expression ['as' star_target]"));
_res = _PyPegen_dummy_name(p, expression_var, _opt_var);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_200[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_202[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "expression ['as' star_target]"));
}
_res = NULL;
return _res;
}
-// _tmp_201: expressions ['as' star_target]
+// _tmp_203: expressions ['as' star_target]
static void *
-_tmp_201_rule(Parser *p)
+_tmp_203_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_201[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expressions ['as' star_target]"));
+ D(fprintf(stderr, "%*c> _tmp_203[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expressions ['as' star_target]"));
void *_opt_var;
UNUSED(_opt_var); // Silence compiler warnings
expr_ty expressions_var;
if (
(expressions_var = expressions_rule(p)) // expressions
&&
- (_opt_var = _tmp_206_rule(p), !p->error_indicator) // ['as' star_target]
+ (_opt_var = _tmp_208_rule(p), !p->error_indicator) // ['as' star_target]
)
{
- D(fprintf(stderr, "%*c+ _tmp_201[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expressions ['as' star_target]"));
+ D(fprintf(stderr, "%*c+ _tmp_203[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expressions ['as' star_target]"));
_res = _PyPegen_dummy_name(p, expressions_var, _opt_var);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_201[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_203[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "expressions ['as' star_target]"));
}
_res = NULL;
return _res;
}
-// _tmp_202: assignment_expression | expression !':='
+// _tmp_204: assignment_expression | expression !':='
static void *
-_tmp_202_rule(Parser *p)
+_tmp_204_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_202[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "assignment_expression"));
+ D(fprintf(stderr, "%*c> _tmp_204[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "assignment_expression"));
expr_ty assignment_expression_var;
if (
(assignment_expression_var = assignment_expression_rule(p)) // assignment_expression
)
{
- D(fprintf(stderr, "%*c+ _tmp_202[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "assignment_expression"));
+ D(fprintf(stderr, "%*c+ _tmp_204[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "assignment_expression"));
_res = assignment_expression_var;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_202[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_204[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "assignment_expression"));
}
{ // expression !':='
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_202[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expression !':='"));
+ D(fprintf(stderr, "%*c> _tmp_204[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "expression !':='"));
expr_ty expression_var;
if (
(expression_var = expression_rule(p)) // expression
&&
_PyPegen_lookahead_with_int(0, _PyPegen_expect_token, p, 53) // token=':='
)
{
- D(fprintf(stderr, "%*c+ _tmp_202[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expression !':='"));
+ D(fprintf(stderr, "%*c+ _tmp_204[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "expression !':='"));
_res = expression_var;
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_202[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_204[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "expression !':='"));
}
_res = NULL;
return _res;
}
-// _tmp_203: 'as' star_target
+// _tmp_205: 'as' star_target
static void *
-_tmp_203_rule(Parser *p)
+_tmp_205_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_203[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
+ D(fprintf(stderr, "%*c> _tmp_205[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
Token * _keyword;
expr_ty star_target_var;
if (
(star_target_var = star_target_rule(p)) // star_target
)
{
- D(fprintf(stderr, "%*c+ _tmp_203[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
+ D(fprintf(stderr, "%*c+ _tmp_205[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
_res = _PyPegen_dummy_name(p, _keyword, star_target_var);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_203[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_205[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'as' star_target"));
}
_res = NULL;
return _res;
}
-// _tmp_204: 'as' star_target
+// _tmp_206: 'as' star_target
static void *
-_tmp_204_rule(Parser *p)
+_tmp_206_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_204[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
+ D(fprintf(stderr, "%*c> _tmp_206[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
Token * _keyword;
expr_ty star_target_var;
if (
(star_target_var = star_target_rule(p)) // star_target
)
{
- D(fprintf(stderr, "%*c+ _tmp_204[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
+ D(fprintf(stderr, "%*c+ _tmp_206[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
_res = _PyPegen_dummy_name(p, _keyword, star_target_var);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_204[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_206[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'as' star_target"));
}
_res = NULL;
return _res;
}
-// _tmp_205: 'as' star_target
+// _tmp_207: 'as' star_target
static void *
-_tmp_205_rule(Parser *p)
+_tmp_207_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_205[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
+ D(fprintf(stderr, "%*c> _tmp_207[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
Token * _keyword;
expr_ty star_target_var;
if (
(star_target_var = star_target_rule(p)) // star_target
)
{
- D(fprintf(stderr, "%*c+ _tmp_205[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
+ D(fprintf(stderr, "%*c+ _tmp_207[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
_res = _PyPegen_dummy_name(p, _keyword, star_target_var);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_205[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_207[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'as' star_target"));
}
_res = NULL;
return _res;
}
-// _tmp_206: 'as' star_target
+// _tmp_208: 'as' star_target
static void *
-_tmp_206_rule(Parser *p)
+_tmp_208_rule(Parser *p)
{
if (p->level++ == MAXSTACK) {
p->error_indicator = 1;
p->level--;
return NULL;
}
- D(fprintf(stderr, "%*c> _tmp_206[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
+ D(fprintf(stderr, "%*c> _tmp_208[%d-%d]: %s\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
Token * _keyword;
expr_ty star_target_var;
if (
(star_target_var = star_target_rule(p)) // star_target
)
{
- D(fprintf(stderr, "%*c+ _tmp_206[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
+ D(fprintf(stderr, "%*c+ _tmp_208[%d-%d]: %s succeeded!\n", p->level, ' ', _mark, p->mark, "'as' star_target"));
_res = _PyPegen_dummy_name(p, _keyword, star_target_var);
goto done;
}
p->mark = _mark;
- D(fprintf(stderr, "%*c%s _tmp_206[%d-%d]: %s failed!\n", p->level, ' ',
+ D(fprintf(stderr, "%*c%s _tmp_208[%d-%d]: %s failed!\n", p->level, ' ',
p->error_indicator ? "ERROR!" : "-", _mark, p->mark, "'as' star_target"));
}
_res = NULL;
Py_ssize_t relative_lineno = p->starting_lineno ? lineno - p->starting_lineno + 1 : lineno;
for (int i = 0; i < relative_lineno - 1; i++) {
- char *new_line = strchr(cur_line, '\n') + 1;
- assert(new_line != NULL && new_line <= buf_end);
- if (new_line == NULL || new_line > buf_end) {
+ char *new_line = strchr(cur_line, '\n');
+ assert(new_line != NULL && new_line + 1 < buf_end);
+ if (new_line == NULL || new_line + 1 > buf_end) {
break;
}
- cur_line = new_line;
+ cur_line = new_line + 1;
}
char *next_newline;
return (Parser *) PyErr_NoMemory();
}
p->tokens[0] = PyMem_Calloc(1, sizeof(Token));
- if (!p->tokens) {
+ if (!p->tokens[0]) {
PyMem_Free(p->tokens);
PyMem_Free(p);
return (Parser *) PyErr_NoMemory();
start--;
}
*p_cols += (int)(expr_start - start);
+ if (*start == '\n') {
+ *p_cols -= 1;
+ }
}
/* adjust the start based on the number of newlines encountered
before the f-string expression */
NULL, p->arena);
p2->starting_lineno = t->lineno + lines;
- p2->starting_col_offset = t->col_offset + cols;
+ p2->starting_col_offset = lines != 0 ? cols : t->col_offset + cols;
expr = _PyPegen_run_parser(p2);
Py_ssize_t current_size = tok->interactive_src_end - tok->interactive_src_start;
Py_ssize_t line_size = strlen(line);
+ char last_char = line[line_size > 0 ? line_size - 1 : line_size];
+ if (last_char != '\n') {
+ line_size += 1;
+ }
char* new_str = tok->interactive_src_start;
new_str = PyMem_Realloc(new_str, current_size + line_size + 1);
return -1;
}
strcpy(new_str + current_size, line);
-
+ if (last_char != '\n') {
+ /* Last line does not end in \n, fake one */
+ new_str[current_size + line_size - 1] = '\n';
+ new_str[current_size + line_size] = '\0';
+ }
tok->interactive_src_start = new_str;
tok->interactive_src_end = new_str + current_size + line_size;
return 0;
PyThread_free_lock(lock);
+ Py_Finalize();
+
return 0;
}
/* The caller must hold the GIL */
assert(PyGILState_Check());
+ static int reentrant = 0;
+ if (reentrant) {
+ _PyErr_SetString(tstate, PyExc_RuntimeError, "Cannot install a profile function "
+ "while another profile function is being installed");
+ reentrant = 0;
+ return -1;
+ }
+ reentrant = 1;
+
/* Call _PySys_Audit() in the context of the current thread state,
even if tstate is not the current thread state. */
PyThreadState *current_tstate = _PyThreadState_GET();
if (_PySys_Audit(current_tstate, "sys.setprofile", NULL) < 0) {
+ reentrant = 0;
return -1;
}
/* Flag that tracing or profiling is turned on */
tstate->cframe->use_tracing = (func != NULL) || (tstate->c_tracefunc != NULL);
+ reentrant = 0;
return 0;
}
/* The caller must hold the GIL */
assert(PyGILState_Check());
+ static int reentrant = 0;
+
+ if (reentrant) {
+ _PyErr_SetString(tstate, PyExc_RuntimeError, "Cannot install a trace function "
+ "while another trace function is being installed");
+ reentrant = 0;
+ return -1;
+ }
+ reentrant = 1;
+
/* Call _PySys_Audit() in the context of the current thread state,
even if tstate is not the current thread state. */
PyThreadState *current_tstate = _PyThreadState_GET();
if (_PySys_Audit(current_tstate, "sys.settrace", NULL) < 0) {
+ reentrant = 0;
return -1;
}
tstate->c_traceobj = NULL;
/* Must make sure that profiling is not ignored if 'traceobj' is freed */
tstate->cframe->use_tracing = (tstate->c_profilefunc != NULL);
- Py_XDECREF(traceobj);
-
Py_XINCREF(arg);
+ Py_XDECREF(traceobj);
tstate->c_traceobj = arg;
tstate->c_tracefunc = func;
tstate->cframe->use_tracing = ((func != NULL)
|| (tstate->c_profilefunc != NULL));
+ reentrant = 0;
return 0;
}
assemble_emit_linetable_pair(struct assembler *a, int bdelta, int ldelta)
{
Py_ssize_t len = PyBytes_GET_SIZE(a->a_lnotab);
- if (a->a_lnotab_off + 2 >= len) {
- if (_PyBytes_Resize(&a->a_lnotab, len * 2) < 0)
+ if (a->a_lnotab_off > INT_MAX - 2) {
+ goto overflow;
+ }
+ if (a->a_lnotab_off >= len - 2) {
+ if (len > INT_MAX / 2) {
+ goto overflow;
+ }
+ if (_PyBytes_Resize(&a->a_lnotab, len * 2) < 0) {
return 0;
+ }
}
unsigned char *lnotab = (unsigned char *) PyBytes_AS_STRING(a->a_lnotab);
lnotab += a->a_lnotab_off;
*lnotab++ = bdelta;
*lnotab++ = ldelta;
return 1;
+overflow:
+ PyErr_SetString(PyExc_OverflowError, "line number table is too long");
+ return 0;
}
/* Appends a range to the end of the line number table. See
int size, arg = 0;
Py_ssize_t len = PyBytes_GET_SIZE(a->a_bytecode);
_Py_CODEUNIT *code;
-
arg = i->i_oparg;
size = instrsize(arg);
if (i->i_lineno && !assemble_lnotab(a, i))
return 0;
+ if (a->a_offset > INT_MAX - size) {
+ goto overflow;
+ }
if (a->a_offset + size >= len / (int)sizeof(_Py_CODEUNIT)) {
- if (len > PY_SSIZE_T_MAX / 2)
- return 0;
+ if (len > INT_MAX / 2) {
+ goto overflow;
+ }
if (_PyBytes_Resize(&a->a_bytecode, len * 2) < 0)
return 0;
}
a->a_offset += size;
write_op_arg(code, i->i_opcode, arg, size);
return 1;
+overflow:
+ PyErr_SetString(PyExc_OverflowError, "bytecode is too long");
+ return 0;
}
static void
Py_DECREF(consts);
goto error;
}
- if (maxdepth > MAX_ALLOWED_STACK_USE) {
- PyErr_Format(PyExc_SystemError,
- "excessive stack use: stack is %d deep",
- maxdepth);
- Py_DECREF(consts);
- goto error;
- }
co = PyCode_NewWithPosOnlyArgs(posonlyargcount+posorkeywordargcount,
posonlyargcount, kwonlyargcount, nlocals_int,
maxdepth, flags, a->a_bytecode, consts, names,
int j, nblocks;
PyCodeObject *co = NULL;
PyObject *consts = NULL;
+ memset(&a, 0, sizeof(struct assembler));
/* Make sure every block that falls off the end returns None.
XXX NEXT_BLOCK() isn't quite right, because if the last
{
if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
"getargs: The '%c' format is deprecated. Use 'U' instead.", c)) {
- return NULL;
+ RETURN_ERR_OCCURRED;
}
_Py_COMP_DIAG_PUSH
_Py_COMP_DIAG_IGNORE_DEPR_DECLS
return retval;
}
+static void
+error_unexpected_keyword_arg(PyObject *kwargs, PyObject *kwnames, PyObject *kwtuple, const char *fname)
+{
+ /* make sure there are no extraneous keyword arguments */
+ Py_ssize_t j = 0;
+ while (1) {
+ PyObject *keyword;
+ if (kwargs != NULL) {
+ if (!PyDict_Next(kwargs, &j, &keyword, NULL))
+ break;
+ }
+ else {
+ if (j >= PyTuple_GET_SIZE(kwnames))
+ break;
+ keyword = PyTuple_GET_ITEM(kwnames, j);
+ j++;
+ }
+ if (!PyUnicode_Check(keyword)) {
+ PyErr_SetString(PyExc_TypeError,
+ "keywords must be strings");
+ return;
+ }
+
+ int match = PySequence_Contains(kwtuple, keyword);
+ if (match <= 0) {
+ if (!match) {
+ PyErr_Format(PyExc_TypeError,
+ "'%S' is an invalid keyword "
+ "argument for %.200s%s",
+ keyword,
+ (fname == NULL) ? "this function" : fname,
+ (fname == NULL) ? "" : "()");
+ }
+ return;
+ }
+ }
+ /* Something wrong happened. There are extraneous keyword arguments,
+ * but we don't know what. And we don't bother. */
+ PyErr_Format(PyExc_TypeError,
+ "invalid keyword argument for %.200s%s",
+ (fname == NULL) ? "this function" : fname,
+ (fname == NULL) ? "" : "()");
+}
+
int
PyArg_ValidateKeywordArguments(PyObject *kwargs)
{
return cleanreturn(0, &freelist);
}
}
+ /* Something wrong happened. There are extraneous keyword arguments,
+ * but we don't know what. And we don't bother. */
+ PyErr_Format(PyExc_TypeError,
+ "invalid keyword argument for %.200s%s",
+ (fname == NULL) ? "this function" : fname,
+ (fname == NULL) ? "" : "()");
+ return cleanreturn(0, &freelist);
}
return cleanreturn(1, &freelist);
assert(IS_END_OF_FORMAT(*format) || (*format == '|') || (*format == '$'));
if (nkwargs > 0) {
- Py_ssize_t j;
/* make sure there are no arguments given by name and position */
for (i = pos; i < nargs; i++) {
keyword = PyTuple_GET_ITEM(kwtuple, i - pos);
return cleanreturn(0, &freelist);
}
}
- /* make sure there are no extraneous keyword arguments */
- j = 0;
- while (1) {
- int match;
- if (kwargs != NULL) {
- if (!PyDict_Next(kwargs, &j, &keyword, NULL))
- break;
- }
- else {
- if (j >= PyTuple_GET_SIZE(kwnames))
- break;
- keyword = PyTuple_GET_ITEM(kwnames, j);
- j++;
- }
- match = PySequence_Contains(kwtuple, keyword);
- if (match <= 0) {
- if (!match) {
- PyErr_Format(PyExc_TypeError,
- "'%S' is an invalid keyword "
- "argument for %.200s%s",
- keyword,
- (parser->fname == NULL) ? "this function" : parser->fname,
- (parser->fname == NULL) ? "" : "()");
- }
- return cleanreturn(0, &freelist);
- }
- }
+ error_unexpected_keyword_arg(kwargs, kwnames, kwtuple, parser->fname);
+ return cleanreturn(0, &freelist);
}
return cleanreturn(1, &freelist);
}
if (nkwargs > 0) {
- Py_ssize_t j;
/* make sure there are no arguments given by name and position */
for (i = posonly; i < nargs; i++) {
keyword = PyTuple_GET_ITEM(kwtuple, i - posonly);
return NULL;
}
}
- /* make sure there are no extraneous keyword arguments */
- j = 0;
- while (1) {
- int match;
- if (kwargs != NULL) {
- if (!PyDict_Next(kwargs, &j, &keyword, NULL))
- break;
- }
- else {
- if (j >= PyTuple_GET_SIZE(kwnames))
- break;
- keyword = PyTuple_GET_ITEM(kwnames, j);
- j++;
- }
- match = PySequence_Contains(kwtuple, keyword);
- if (match <= 0) {
- if (!match) {
- PyErr_Format(PyExc_TypeError,
- "'%S' is an invalid keyword "
- "argument for %.200s%s",
- keyword,
- (parser->fname == NULL) ? "this function" : parser->fname,
- (parser->fname == NULL) ? "" : "()");
- }
- return NULL;
- }
- }
+ error_unexpected_keyword_arg(kwargs, kwnames, kwtuple, parser->fname);
+ return NULL;
}
return buf;
if (*format == '#') {
if (p_va != NULL) {
if (!(flags & FLAG_SIZE_T)) {
- PyErr_SetString(PyExc_SystemError,
- "PY_SSIZE_T_CLEAN macro must be defined for '#' formats");
- return NULL;
+ return "PY_SSIZE_T_CLEAN macro must be defined for '#' formats";
}
(void) va_arg(*p_va, Py_ssize_t *);
}
return PyImport_ExtendInittab(newtab);
}
+
+PyObject *
+_PyImport_GetModuleAttr(PyObject *modname, PyObject *attrname)
+{
+ PyObject *mod = PyImport_Import(modname);
+ if (mod == NULL) {
+ return NULL;
+ }
+ PyObject *result = PyObject_GetAttr(mod, attrname);
+ Py_DECREF(mod);
+ return result;
+}
+
+PyObject *
+_PyImport_GetModuleAttrString(const char *modname, const char *attrname)
+{
+ PyObject *pmodname = PyUnicode_FromString(modname);
+ if (pmodname == NULL) {
+ return NULL;
+ }
+ PyObject *pattrname = PyUnicode_FromString(attrname);
+ if (pattrname == NULL) {
+ Py_DECREF(pmodname);
+ return NULL;
+ }
+ PyObject *result = _PyImport_GetModuleAttr(pmodname, pattrname);
+ Py_DECREF(pattrname);
+ Py_DECREF(pmodname);
+ return result;
+}
+
#ifdef __cplusplus
}
#endif
-This is Python version 3.10.5
+This is Python version 3.10.6
=============================
.. image:: https://travis-ci.com/python/cpython.svg?branch=master
overwritten by the installation of a different version. All files and
directories installed using ``make altinstall`` contain the major and minor
version and can thus live side-by-side. ``make install`` also creates
-``${prefix}/bin/python3`` which refers to ``${prefix}/bin/pythonX.Y``. If you
+``${prefix}/bin/python3`` which refers to ``${prefix}/bin/python3.X``. If you
intend to install multiple versions using the same prefix you must decide which
version (if any) is your "primary" version. Install that version using ``make
install``. Install all other versions using ``make altinstall``.
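For instance, a side-by-side setup along these lines (the prefix, directory names, and version numbers below are illustrative, not taken from this document) might look like::

```shell
# Illustrative sketch only: build trees, prefix, and versions are assumptions.
# Primary version: "make install" creates python3 alongside python3.10.
cd cpython-3.10
./configure --prefix=/usr/local
make
sudo make install

# Secondary version: "make altinstall" installs only the fully versioned
# python3.9 binaries and never touches the python3 symlink.
cd ../cpython-3.9
./configure --prefix=/usr/local
make
sudo make altinstall
```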
self.block.output.append('\n')
return
- d = self.clinic.get_destination(destination)
+ d = self.clinic.get_destination_buffer(destination)
if command_or_name == "everything":
for name in list(fd):
p = Parameter(parameter_name, kind, function=self.function, converter=converter, default=value, group=self.group)
- if parameter_name in self.function.parameters:
+ names = [k.name for k in self.function.parameters.values()]
+ if parameter_name in names[1:]:
fail("You can't have two parameters named " + repr(parameter_name) + "!")
- self.function.parameters[parameter_name] = p
+ elif names and parameter_name == names[0] and c_name is None:
+ fail(f"Parameter '{parameter_name}' requires a custom C name")
+
+ key = f"{parameter_name}_as_{c_name}" if c_name else parameter_name
+ self.function.parameters[key] = p
def parse_converter(self, annotation):
if (hasattr(ast, 'Constant') and
<None Include="pythonba.def" />\r
</ItemGroup>\r
<Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />\r
-</Project>
\ No newline at end of file
+</Project>\r
--- /dev/null
+#! /usr/bin/env python3
+
+"""
+Compare checksums for wheels in :mod:`ensurepip` against the Cheeseshop.
+
+When GitHub Actions executes the script, output is formatted accordingly.
+https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-a-notice-message
+"""
+
+import hashlib
+import json
+import os
+import re
+from pathlib import Path
+from urllib.request import urlopen
+
+PACKAGE_NAMES = ("pip", "setuptools")
+ENSURE_PIP_ROOT = Path(__file__).parent.parent.parent / "Lib/ensurepip"
+WHEEL_DIR = ENSURE_PIP_ROOT / "_bundled"
+ENSURE_PIP_INIT_PY_TEXT = (ENSURE_PIP_ROOT / "__init__.py").read_text(encoding="utf-8")
+GITHUB_ACTIONS = os.getenv("GITHUB_ACTIONS") == "true"
+
+
+def print_notice(file_path: str, message: str) -> None:
+ if GITHUB_ACTIONS:
+ message = f"::notice file={file_path}::{message}"
+ print(message, end="\n\n")
+
+
+def print_error(file_path: str, message: str) -> None:
+ if GITHUB_ACTIONS:
+ message = f"::error file={file_path}::{message}"
+ print(message, end="\n\n")
+
+
+def verify_wheel(package_name: str) -> bool:
+ # Find the package on disk
+ package_path = next(WHEEL_DIR.glob(f"{package_name}*.whl"), None)
+ if not package_path:
+ print_error("", f"Could not find a {package_name} wheel on disk.")
+ return False
+
+ print(f"Verifying checksum for {package_path}.")
+
+ # Find the version of the package used by ensurepip
+ package_version_match = re.search(
+ f'_{package_name.upper()}_VERSION = "([^"]+)', ENSURE_PIP_INIT_PY_TEXT
+ )
+ if not package_version_match:
+ print_error(
+ package_path,
+ f"No {package_name} version found in Lib/ensurepip/__init__.py.",
+ )
+ return False
+ package_version = package_version_match[1]
+
+ # Get the SHA 256 digest from the Cheeseshop
+ try:
+ raw_text = urlopen(f"https://pypi.org/pypi/{package_name}/json").read()
+ except (OSError, ValueError):
+ print_error(package_path, f"Could not fetch JSON metadata for {package_name}.")
+ return False
+
+ release_files = json.loads(raw_text)["releases"][package_version]
+ for release_info in release_files:
+ if package_path.name != release_info["filename"]:
+ continue
+ expected_digest = release_info["digests"].get("sha256", "")
+ break
+ else:
+ print_error(package_path, f"No digest for {package_name} found from PyPI.")
+ return False
+
+ # Compute the SHA 256 digest of the wheel on disk
+ actual_digest = hashlib.sha256(package_path.read_bytes()).hexdigest()
+
+ print(f"Expected digest: {expected_digest}")
+ print(f"Actual digest: {actual_digest}")
+
+ if actual_digest != expected_digest:
+ print_error(
+ package_path, f"Failed to verify the checksum of the {package_name} wheel."
+ )
+ return False
+
+ print_notice(
+ package_path,
+ f"Successfully verified the checksum of the {package_name} wheel.",
+ )
+ return True
+
+
+if __name__ == "__main__":
+ exit_status = 0
+ for package_name in PACKAGE_NAMES:
+ if not verify_wheel(package_name):
+ exit_status = 1
+ raise SystemExit(exit_status)
if self.missing:
print()
- print("Python build finished successfully!")
print("The necessary bits to build these optional modules were not "
"found:")
print_three_column(self.missing)
# Platform-specific libraries
if HOST_PLATFORM.startswith(('linux', 'freebsd', 'gnukfreebsd')):
self.add(Extension('ossaudiodev', ['ossaudiodev.c']))
+ elif HOST_PLATFORM.startswith(('netbsd')):
+ self.add(Extension('ossaudiodev', ['ossaudiodev.c'],
+ libraries=["ossaudio"]))
elif not AIX:
self.missing.append('ossaudiodev')