Previously, :exc:`TypeError` was raised when embedded null code points
were encountered in the Python string.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.12
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsWideCharString`.
Unicode data buffer, the second one its length. This variant allows
null code points.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.12
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsWideCharString`.
Like ``u``, but the Python object may also be ``None``, in which case the
:c:type:`Py_UNICODE` pointer is set to ``NULL``.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.12
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsWideCharString`.
Like ``u#``, but the Python object may also be ``None``, in which case the
:c:type:`Py_UNICODE` pointer is set to ``NULL``.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.12
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsWideCharString`.
guarantee consistent behavior in corner cases, which the Standard C functions do
not.
-The wrappers ensure that *str*[*size*-1] is always ``'\0'`` upon return. They
+The wrappers ensure that ``str[size-1]`` is always ``'\0'`` upon return. They
never write more than *size* bytes (including the trailing ``'\0'``) into str.
Both functions require that ``str != NULL``, ``size > 0`` and ``format !=
NULL``.
* When ``0 <= rv < size``, the output conversion was successful and *rv*
characters were written to *str* (excluding the trailing ``'\0'`` byte at
- *str*[*rv*]).
+ ``str[rv]``).
* When ``rv >= size``, the output conversion was truncated and a buffer with
- ``rv + 1`` bytes would have been needed to succeed. *str*[*size*-1] is ``'\0'``
+ ``rv + 1`` bytes would have been needed to succeed. ``str[size-1]`` is ``'\0'``
in this case.
-* When ``rv < 0``, "something bad happened." *str*[*size*-1] is ``'\0'`` in
+* When ``rv < 0``, "something bad happened." ``str[size-1]`` is ``'\0'`` in
this case too, but the rest of *str* is undefined. The exact cause of the error
depends on the underlying platform.
+--------------------------------------------------+---------------------------------------+
| ``void* alloc(void *ctx, size_t size)`` | allocate an arena of size bytes |
+--------------------------------------------------+---------------------------------------+
- | ``void free(void *ctx, size_t size, void *ptr)`` | free an arena |
+ | ``void free(void *ctx, void *ptr, size_t size)`` | free an arena |
+--------------------------------------------------+---------------------------------------+
.. c:function:: void PyObject_GetArenaAllocator(PyObjectArenaAllocator *allocator)
:c:type:`Py_UNICODE*` and UTF-8 representations are created on demand and cached
in the Unicode object. The :c:type:`Py_UNICODE*` representation is deprecated
-and inefficient; it should be avoided in performance- or memory-sensitive
-situations.
+and inefficient.
Due to the transition between the old APIs and the new APIs, Unicode objects
can internally be in two states depending on how they were created:
If *u* is ``NULL``, this function behaves like :c:func:`PyUnicode_FromUnicode`
with the buffer set to ``NULL``. This usage is deprecated in favor of
- :c:func:`PyUnicode_New`.
+ :c:func:`PyUnicode_New`, and will be removed in Python 3.12.
.. c:function:: PyObject *PyUnicode_FromString(const char *u)
Deprecated Py_UNICODE APIs
""""""""""""""""""""""""""
-.. deprecated-removed:: 3.3 4.0
+.. deprecated-removed:: 3.3 3.12
These API functions are deprecated with the implementation of :pep:`393`.
Extension modules can continue using them, as they will not be removed in Python
:c:type:`Py_UNICODE` buffer of the given *size* by ASCII digits 0--9
according to their decimal value. Return ``NULL`` if an exception occurs.
+ .. deprecated-removed:: 3.3 3.11
+ Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
+ :c:func:`Py_UNICODE_TODECIMAL`.
+
.. c:function:: Py_UNICODE* PyUnicode_AsUnicodeAndSize(PyObject *unicode, Py_ssize_t *size)
to be used is looked up using the Python codec registry. Return ``NULL`` if an
exception was raised by the codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsEncodedString`.
return a Python bytes object. Return ``NULL`` if an exception was raised by
the codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsUTF8String`, :c:func:`PyUnicode_AsUTF8AndSize` or
:c:func:`PyUnicode_AsEncodedString`.
Return ``NULL`` if an exception was raised by the codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsUTF32String` or :c:func:`PyUnicode_AsEncodedString`.
Return ``NULL`` if an exception was raised by the codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsUTF16String` or :c:func:`PyUnicode_AsEncodedString`.
nonzero, whitespace will be encoded in base-64. Both are set to zero for the
Python "utf-7" codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsEncodedString`.
Encode the :c:type:`Py_UNICODE` buffer of the given *size* using Unicode-Escape and
return a bytes object. Return ``NULL`` if an exception was raised by the codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsUnicodeEscapeString`.
Encode the :c:type:`Py_UNICODE` buffer of the given *size* using Raw-Unicode-Escape
and return a bytes object. Return ``NULL`` if an exception was raised by the codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsRawUnicodeEscapeString` or
:c:func:`PyUnicode_AsEncodedString`.
return a Python bytes object. Return ``NULL`` if an exception was raised by
the codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsLatin1String` or
:c:func:`PyUnicode_AsEncodedString`.
return a Python bytes object. Return ``NULL`` if an exception was raised by
the codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsASCIIString` or
:c:func:`PyUnicode_AsEncodedString`.
*mapping* object and return the result as a bytes object. Return ``NULL`` if
an exception was raised by the codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_AsCharmapString` or
:c:func:`PyUnicode_AsEncodedString`.
character *mapping* table to it and return the resulting Unicode object.
Return ``NULL`` when an exception was raised by the codec.
- .. deprecated-removed:: 3.3 4.0
+ .. deprecated-removed:: 3.3 3.11
Part of the old-style :c:type:`Py_UNICODE` API; please migrate to using
:c:func:`PyUnicode_Translate` or :ref:`generic codec based API
<codec-registry>`
Why is there no goto?
---------------------
-You can use exceptions to provide a "structured goto" that even works across
+In the 1970s people realized that unrestricted goto could lead
+to messy "spaghetti" code that was hard to understand and revise.
+In a high-level language, it is also unneeded as long as there
+are ways to branch (in Python, with ``if`` statements and ``or``,
+``and``, and ``if-else`` expressions) and loop (with ``while``
+and ``for`` statements, possibly containing ``continue`` and ``break``).
+
+One can also use exceptions to provide a "structured goto"
+that works even across
function calls. Many feel that exceptions can conveniently emulate all
reasonable uses of the "go" or "goto" constructs of C, Fortran, and other
languages. For example::
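A minimal sketch of the pattern (with a hypothetical ``Found`` exception class) uses a raise to jump out of nested loops in one step:

```python
class Found(Exception):
    """Hypothetical label class; raising it acts as a structured goto."""

grid = [[1, 2], [3, 4]]
location = None
try:
    for i, row in enumerate(grid):
        for j, value in enumerate(row):
            if value == 3:
                location = (i, j)
                raise Found  # jump out of both loops at once
except Found:
    pass
assert location == (1, 0)
```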
.. testcode::
class MethodType:
- "Emulate Py_MethodType in Objects/classobject.c"
+ "Emulate PyMethod_Type in Objects/classobject.c"
def __init__(self, func, obj):
self.__func__ = func
*cls* comes from in class methods, this is it!
-Static methods
---------------
+Other kinds of methods
+----------------------
Non-data descriptors provide a simple mechanism for variations on the usual
patterns of binding functions into methods.
| classmethod | f(type(obj), \*args) | f(cls, \*args) |
+-----------------+----------------------+------------------+
+
+Static methods
+--------------
+
Static methods return the underlying function without changes. Calling either
``c.f`` or ``C.f`` is the equivalent of a direct lookup into
``object.__getattribute__(c, "f")`` or ``object.__getattribute__(C, "f")``. As a
In the meantime, instantiating them will return an instance of
a different class.
+.. note::
+ The descriptions of the specific node classes displayed here
+ were initially adapted from the fantastic `Green Tree
+ Snakes <https://greentreesnakes.readthedocs.io/en/latest/>`__ project and
+ all its contributors.
Literals
^^^^^^^^
writing to any mapping in the chain.
* Django's `Context class
- <https://github.com/django/django/blob/master/django/template/context.py>`_
+ <https://github.com/django/django/blob/main/django/template/context.py>`_
for templating is a read-only chain of mappings. It also features
pushing and popping of contexts similar to the
:meth:`~collections.ChainMap.new_child` method and the
subprocess.rst
sched.rst
queue.rst
+ contextvars.rst
The following are support modules for some of the above services:
attribute ``__hash__ = None`` has a specific meaning to Python, as
described in the :meth:`__hash__` documentation.
- If :meth:`__hash__` is not explicit defined, or if it is set to ``None``,
+ If :meth:`__hash__` is not explicitly defined, or if it is set to ``None``,
then :func:`dataclass` *may* add an implicit :meth:`__hash__` method.
Although not recommended, you can force :func:`dataclass` to create a
:meth:`__hash__` method with ``unsafe_hash=True``. This might be the case
.. opcode:: ROT_FOUR
- Lifts second, third and forth stack items one position up, moves top down
+ Lifts second, third and fourth stack items one position up, moves top down
to position four.
.. versionadded:: 3.8
.. note::
- The goal of the default :meth:`_generate_next_value_` methods is to provide
+ The goal of the default :meth:`_generate_next_value_` method is to provide
the next :class:`int` in sequence with the last :class:`int` provided, but
the way it does this is an implementation detail and may change.
.. class:: type(object)
- type(name, bases, dict)
+ type(name, bases, dict, **kwds)
.. index:: object: type
See also :ref:`bltin-type-objects`.
+ Keyword arguments provided to the three argument form are passed to the
+ appropriate metaclass machinery (usually :meth:`~object.__init_subclass__`)
+ in the same way that keywords in a class
+ definition (besides *metaclass*) would.
+
+ See also :ref:`class-customization`.
+
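A short sketch of this behaviour (class and keyword names are illustrative): keywords given to the three-argument form of :func:`type` reach ``__init_subclass__`` exactly as class-statement keywords would.

```python
# Keywords passed to type() are forwarded to __init_subclass__,
# just like "class Sub(Base, label='demo'):" would forward them.
class Base:
    def __init_subclass__(cls, /, label=None, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.label = label

Sub = type("Sub", (Base,), {}, label="demo")
assert Sub.label == "demo"
```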
.. versionchanged:: 3.6
Subclasses of :class:`type` which don't override ``type.__new__`` may no
longer use the one-argument form to get the type of an object.
.. versionchanged:: 3.8
New *generation* parameter.
+ .. audit-event:: gc.get_objects generation gc.get_objects
+
.. function:: get_stats()
Return a list of three per-generation dictionaries containing collection
resulting referrers. To get only currently live objects, call :func:`collect`
before calling :func:`get_referrers`.
- Care must be taken when using objects returned by :func:`get_referrers` because
- some of them could still be under construction and hence in a temporarily
- invalid state. Avoid using :func:`get_referrers` for any purpose other than
- debugging.
+ .. warning::
+ Care must be taken when using objects returned by :func:`get_referrers` because
+ some of them could still be under construction and hence in a temporarily
+ invalid state. Avoid using :func:`get_referrers` for any purpose other than
+ debugging.
+
+ .. audit-event:: gc.get_referrers objs gc.get_referrers
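A minimal sketch of the debugging use case (variable names are illustrative): a container that holds a reference to an object appears among that object's referrers.

```python
import gc

obj = {"key": "value"}
holder = [obj]  # holder directly refers to obj
referrers = gc.get_referrers(obj)
assert any(r is holder for r in referrers)
```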
.. function:: get_referents(*objs)
be involved in a cycle. So, for example, if an integer is directly reachable
from an argument, that integer object may or may not appear in the result list.
+ .. audit-event:: gc.get_referents objs gc.get_referents
.. function:: is_tracked(obj)
code execution process. A connection must be established whenever the Shell
starts or restarts. (The latter is indicated by a divider line that says
'RESTART'). If the user process fails to connect to the GUI process, it
-displays a ``Tk`` error box with a 'cannot connect' message that directs the
-user here. It then exits.
+usually displays a ``Tk`` error box with a 'cannot connect' message
+that directs the user here. It then exits.
+
+One specific connection failure on Unix systems results from
+misconfigured masquerading rules somewhere in a system's network setup.
+When IDLE is started from a terminal, one will see a message starting
+with ``** Invalid host:``.
+The valid value is ``127.0.0.1 (idlelib.rpc.LOCALHOST)``.
+One can diagnose with ``tcpconnect -irv 127.0.0.1 6543`` in one
+terminal window and ``tcplisten <same args>`` in another.
A common cause of failure is a user-written file with the same name as a
standard library module, such as *random.py* and *tkinter.py*. When such a
starting it from a console or terminal (``python -m idlelib``) and see if
this results in an error message.
+On Unix-based systems with tcl/tk older than ``8.6.11`` (see
+``About IDLE``) certain characters of certain fonts can cause
+a tk failure with a message to the terminal. This can happen either
+if one starts IDLE to edit a file with such a character or later
+when entering such a character. If one cannot upgrade tcl/tk,
+then re-configure IDLE to use a font that works better.
+
Running user code
^^^^^^^^^^^^^^^^^
The original values stored in ``sys.__stdin__``, ``sys.__stdout__``, and
``sys.__stderr__`` are not touched, but may be ``None``.
-When Shell has the focus, it controls the keyboard and screen. This is
-normally transparent, but functions that directly access the keyboard
-and screen will not work. These include system-specific functions that
-determine whether a key has been pressed and if so, which.
+Sending print output from one process to a text widget in another is
+slower than printing to a system terminal in the same process.
+This has the most effect when printing multiple arguments, as the string
+for each argument, each separator, and the newline are sent separately.
+For development, this is usually not a problem, but if one wants to
+print faster in IDLE, format and join together everything one wants
+displayed together and then print a single string. Both format strings
+and :meth:`str.join` can help combine fields and lines.
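A small sketch of the suggestion above (field values are illustrative): joining first means one string reaches the Shell instead of several pieces.

```python
# Combine fields into one string so a single write reaches the Shell.
fields = ["spam", 42, 3.14]
line = " ".join(map(str, fields))
assert line == "spam 42 3.14"
print(line)  # one write instead of several
```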
IDLE's standard stream replacements are not inherited by subprocesses
-created in the execution process, whether directly by user code or by modules
-such as multiprocessing. If such subprocess use ``input`` from sys.stdin
-or ``print`` or ``write`` to sys.stdout or sys.stderr,
+created in the execution process, whether directly by user code or by
+modules such as multiprocessing. If such subprocesses use ``input`` from
+sys.stdin or ``print`` or ``write`` to sys.stdout or sys.stderr,
IDLE should be started in a command line window. The secondary subprocess
will then be attached to that window for input and output.
-The IDLE code running in the execution process adds frames to the call stack
-that would not be there otherwise. IDLE wraps ``sys.getrecursionlimit`` and
-``sys.setrecursionlimit`` to reduce the effect of the additional stack frames.
-
If ``sys`` is reset by user code, such as with ``importlib.reload(sys)``,
IDLE's changes are lost and input from the keyboard and output to the screen
will not work correctly.
-When user code raises SystemExit either directly or by calling sys.exit, IDLE
-returns to a Shell prompt instead of exiting.
+When Shell has the focus, it controls the keyboard and screen. This is
+normally transparent, but functions that directly access the keyboard
+and screen will not work. These include system-specific functions that
+determine whether a key has been pressed and if so, which.
+
+The IDLE code running in the execution process adds frames to the call stack
+that would not be there otherwise. IDLE wraps ``sys.getrecursionlimit`` and
+``sys.setrecursionlimit`` to reduce the effect of the additional stack
+frames.
+
+When user code raises SystemExit either directly or by calling sys.exit,
+IDLE returns to a Shell prompt instead of exiting.
User output in Shell
^^^^^^^^^^^^^^^^^^^^
.. versionadded:: 3.9
+.. function:: as_file(traversable)
+
+ Given a :class:`importlib.resources.abc.Traversable` object representing
+ a file, typically from :func:`importlib.resources.files`, return
+ a context manager for use in a :keyword:`with` statement.
+ The context manager provides a :class:`pathlib.Path` object.
+
+ Exiting the context manager cleans up any temporary file created when the
+ resource was extracted from e.g. a zip file.
+
+ Use ``as_file`` when the Traversable methods
+ (``read_text``, etc) are insufficient and an actual file on
+ the file system is required.
+
+ .. versionadded:: 3.9
+
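A minimal sketch of ``as_file`` (the stdlib ``email`` package is used only as a convenient resource container): the context manager yields a concrete :class:`pathlib.Path`.

```python
from importlib import resources

# Materialize a packaged resource as a real path on the file system.
source = resources.files("email") / "__init__.py"
with resources.as_file(source) as path:
    assert path.is_file()
```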
.. function:: open_binary(package, resource)
Open for binary reading the *resource* within *package*.
crypto.rst
allos.rst
concurrency.rst
- contextvars.rst
ipc.rst
netdata.rst
markup.rst
when an unsupported operation is called on a stream.
-In-memory streams
-^^^^^^^^^^^^^^^^^
-
-It is also possible to use a :class:`str` or :term:`bytes-like object` as a
-file for both reading and writing. For strings :class:`StringIO` can be used
-like a file opened in text mode. :class:`BytesIO` can be used like a file
-opened in binary mode. Both provide full read-write capabilities with random
-access.
-
-
.. seealso::
:mod:`sys`
*object_hook*, if specified, will be called with the result of every JSON
object decoded and its return value will be used in place of the given
:class:`dict`. This can be used to provide custom deserializations (e.g. to
- support JSON-RPC class hinting).
+ support `JSON-RPC <http://www.jsonrpc.org>`_ class hinting).
*object_pairs_hook*, if specified will be called with the result of every
JSON object decoded with an ordered list of pairs. The return value of
for ``o`` if possible, otherwise it should call the superclass implementation
(to raise :exc:`TypeError`).
- If *skipkeys* is false (the default), then it is a :exc:`TypeError` to
- attempt encoding of keys that are not :class:`str`, :class:`int`,
- :class:`float` or ``None``. If *skipkeys* is true, such items are simply
- skipped.
+ If *skipkeys* is false (the default), a :exc:`TypeError` will be raised when
+ trying to encode keys that are not :class:`str`, :class:`int`, :class:`float`
+ or ``None``. If *skipkeys* is true, such items are simply skipped.
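A short sketch of the two behaviours (the sample dict is illustrative): a tuple key cannot be encoded, so the default raises while ``skipkeys=True`` drops it.

```python
import json

data = {("not", "encodable"): 1, "ok": 2}
try:
    json.dumps(data)          # skipkeys=False: tuple key raises
except TypeError:
    pass
# skipkeys=True: the offending key is silently skipped
assert json.dumps(data, skipkeys=True) == '{"ok": 2}'
```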
If *ensure_ascii* is true (the default), the output is guaranteed to
have all incoming non-ASCII characters escaped. If *ensure_ascii* is
object for *o*, or calls the base implementation (to raise a
:exc:`TypeError`).
- For example, to support arbitrary iterators, you could implement default
- like this::
+ For example, to support arbitrary iterators, you could implement
+ :meth:`default` like this::
def default(self, o):
try:
.. function:: getLevelName(level)
- Returns the textual representation of logging level *level*. If the level is one
- of the predefined levels :const:`CRITICAL`, :const:`ERROR`, :const:`WARNING`,
- :const:`INFO` or :const:`DEBUG` then you get the corresponding string. If you
- have associated levels with names using :func:`addLevelName` then the name you
- have associated with *level* is returned. If a numeric value corresponding to one
- of the defined levels is passed in, the corresponding string representation is
- returned. Otherwise, the string 'Level %s' % level is returned.
+ Returns the textual or numeric representation of logging level *level*.
+
+ If *level* is one of the predefined levels :const:`CRITICAL`, :const:`ERROR`,
+ :const:`WARNING`, :const:`INFO` or :const:`DEBUG` then you get the
+ corresponding string. If you have associated levels with names using
+ :func:`addLevelName` then the name you have associated with *level* is
+ returned. If a numeric value corresponding to one of the defined levels is
+ passed in, the corresponding string representation is returned.
+
+ The *level* parameter also accepts a string representation of the level such
+ as 'INFO'. In such cases, this function returns the corresponding numeric
+ value of the level.
+
+ If no matching numeric or string value is passed in, the string
+ 'Level %s' % level is returned.
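The three cases above can be checked directly; a short sketch:

```python
import logging

assert logging.getLevelName(logging.WARNING) == "WARNING"   # number -> name
assert logging.getLevelName("WARNING") == logging.WARNING   # name -> number
assert logging.getLevelName(99) == "Level 99"               # no match
```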
.. note:: Levels are internally integers (as they need to be compared in the
logging logic). This function is used to convert between an integer level
and the level name displayed in the formatted log output by means of the
- ``%(levelname)s`` format specifier (see :ref:`logrecord-attributes`).
+ ``%(levelname)s`` format specifier (see :ref:`logrecord-attributes`), and
+ vice versa.
.. versionchanged:: 3.4
In Python versions earlier than 3.4, this function could also be passed a
| | to ``'a'``. |
+--------------+---------------------------------------------+
| *format* | Use the specified format string for the |
- | | handler. |
+ | | handler. Defaults to attributes |
+ | | ``levelname``, ``name`` and ``message`` |
+ | | separated by colons. |
+--------------+---------------------------------------------+
| *datefmt* | Use the specified date/time format, as |
| | accepted by :func:`time.strftime`. |
Join one or more path components intelligently. The return value is the
concatenation of *path* and any members of *\*paths* with exactly one
- directory separator (``os.sep``) following each non-empty part except the
- last, meaning that the result will only end in a separator if the last
- part is empty. If a component is an absolute path, all previous
- components are thrown away and joining continues from the absolute path
- component.
+ directory separator following each non-empty part except the last, meaning
+ that the result will only end in a separator if the last part is empty. If
+ a component is an absolute path, all previous components are thrown away
+ and joining continues from the absolute path component.
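A minimal sketch of these rules (``posixpath`` is used here so the separator is deterministic; ``os.path.join`` behaves the same way with the platform's separator):

```python
import posixpath

assert posixpath.join("a", "b", "c") == "a/b/c"
assert posixpath.join("a", "b", "") == "a/b/"    # empty last part: trailing separator
assert posixpath.join("a", "/b", "c") == "/b/c"  # absolute part discards what precedes it
```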
On Windows, the drive letter is not reset when an absolute path component
(e.g., ``r'\foo'``) is encountered. If a component contains a drive
:file:`example.db` file::
import sqlite3
- conn = sqlite3.connect('example.db')
+ con = sqlite3.connect('example.db')
You can also supply the special name ``:memory:`` to create a database in RAM.
Once you have a :class:`Connection`, you can create a :class:`Cursor` object
and call its :meth:`~Cursor.execute` method to perform SQL commands::
- c = conn.cursor()
+ cur = con.cursor()
# Create table
- c.execute('''CREATE TABLE stocks
- (date text, trans text, symbol text, qty real, price real)''')
+ cur.execute('''CREATE TABLE stocks
+ (date text, trans text, symbol text, qty real, price real)''')
# Insert a row of data
- c.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)")
+ cur.execute("INSERT INTO stocks VALUES ('2006-01-05','BUY','RHAT',100,35.14)")
# Save (commit) the changes
- conn.commit()
+ con.commit()
# We can also close the connection if we are done with it.
# Just be sure any changes have been committed or they will be lost.
- conn.close()
+ con.close()
The data you've saved is persistent and is available in subsequent sessions::
import sqlite3
- conn = sqlite3.connect('example.db')
- c = conn.cursor()
+ con = sqlite3.connect('example.db')
+ cur = con.cursor()
Usually your SQL operations will need to use values from Python variables. You
shouldn't assemble your query using Python's string operations because doing so
# Never do this -- insecure!
symbol = 'RHAT'
- c.execute("SELECT * FROM stocks WHERE symbol = '%s'" % symbol)
+ cur.execute("SELECT * FROM stocks WHERE symbol = '%s'" % symbol)
# Do this instead
t = ('RHAT',)
- c.execute('SELECT * FROM stocks WHERE symbol=?', t)
- print(c.fetchone())
+ cur.execute('SELECT * FROM stocks WHERE symbol=?', t)
+ print(cur.fetchone())
# Larger example that inserts many records at a time
purchases = [('2006-03-28', 'BUY', 'IBM', 1000, 45.00),
('2006-04-05', 'BUY', 'MSFT', 1000, 72.00),
('2006-04-06', 'SELL', 'IBM', 500, 53.00),
]
- c.executemany('INSERT INTO stocks VALUES (?,?,?,?,?)', purchases)
+ cur.executemany('INSERT INTO stocks VALUES (?,?,?,?,?)', purchases)
To retrieve data after executing a SELECT statement, you can either treat the
cursor as an :term:`iterator`, call the cursor's :meth:`~Cursor.fetchone` method to
This example uses the iterator form::
- >>> for row in c.execute('SELECT * FROM stocks ORDER BY price'):
+ >>> for row in cur.execute('SELECT * FROM stocks ORDER BY price'):
print(row)
('2006-01-05', 'BUY', 'RHAT', 100, 35.14)
Let's assume we initialize a table as in the example given above::
- conn = sqlite3.connect(":memory:")
- c = conn.cursor()
- c.execute('''create table stocks
+ con = sqlite3.connect(":memory:")
+ cur = con.cursor()
+ cur.execute('''create table stocks
(date text, trans text, symbol text,
qty real, price real)''')
- c.execute("""insert into stocks
- values ('2006-01-05','BUY','RHAT',100,35.14)""")
- conn.commit()
- c.close()
+ cur.execute("""insert into stocks
+ values ('2006-01-05','BUY','RHAT',100,35.14)""")
+ con.commit()
+ cur.close()
Now we plug :class:`Row` in::
- >>> conn.row_factory = sqlite3.Row
- >>> c = conn.cursor()
- >>> c.execute('select * from stocks')
+ >>> con.row_factory = sqlite3.Row
+ >>> cur = con.cursor()
+ >>> cur.execute('select * from stocks')
<sqlite3.Cursor object at 0x7f4e7dd8fa80>
- >>> r = c.fetchone()
+ >>> r = cur.fetchone()
>>> type(r)
<class 'sqlite3.Row'>
>>> tuple(r)
.. [#f1] The sqlite3 module is not built with loadable extension support by
default, because some platforms (notably Mac OS X) have SQLite
libraries which are compiled without this feature. To get loadable
- extension support, you must pass --enable-loadable-sqlite-extensions to
+ extension support, you must pass ``--enable-loadable-sqlite-extensions`` to
configure.
.. attribute:: SSLContext.check_hostname
- Whether to match the peer cert's hostname with :func:`match_hostname` in
+ Whether to match the peer cert's hostname in
:meth:`SSLSocket.do_handshake`. The context's
:attr:`~SSLContext.verify_mode` must be set to :data:`CERT_OPTIONAL` or
:data:`CERT_REQUIRED`, and you must pass *server_hostname* to
The :meth:`str.format` method and the :class:`Formatter` class share the same
syntax for format strings (although in the case of :class:`Formatter`,
subclasses can define their own format string syntax). The syntax is
-related to that of :ref:`formatted string literals <f-strings>`, but
-there are differences.
+related to that of :ref:`formatted string literals <f-strings>`, but it is
+less sophisticated and, in particular, does not support arbitrary expressions.
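A short sketch of the difference: :meth:`str.format` fields name or index arguments, while an f-string may embed any expression.

```python
# str.format supports indexed and named fields, not arbitrary expressions:
assert "{0}-{name}".format(7, name="x") == "7-x"

# An f-string can evaluate an expression inline:
n = 7
assert f"{n * 2}" == "14"
```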
.. index::
single: {} (curly brackets); in string formatting
supported by this module.
+.. impl-detail::
+
+ In CPython, due to the :term:`Global Interpreter Lock
+ <global interpreter lock>`, only one thread
+ can execute Python code at once (even though certain performance-oriented
+ libraries might overcome this limitation).
+ If you want your application to make better use of the computational
+ resources of multi-core machines, you are advised to use
+ :mod:`multiprocessing` or :class:`concurrent.futures.ProcessPoolExecutor`.
+ However, threading is still an appropriate model if you want to run
+ multiple I/O-bound tasks simultaneously.
+
+
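A minimal sketch of the I/O-bound case (the worker function is a stand-in for a blocking call): threads overlap waiting even under the GIL.

```python
from concurrent.futures import ThreadPoolExecutor

def simulated_io(n):
    return n * 2  # stand-in for a blocking network or disk call

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulated_io, range(5)))
assert results == [0, 2, 4, 6, 8]
```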
This module defines the following functions:
property instead.
-.. impl-detail::
-
- In CPython, due to the :term:`Global Interpreter Lock
- <global interpreter lock>`, only one thread
- can execute Python code at once (even though certain performance-oriented
- libraries might overcome this limitation).
- If you want your application to make better use of the computational
- resources of multi-core machines, you are advised to use
- :mod:`multiprocessing` or :class:`concurrent.futures.ProcessPoolExecutor`.
- However, threading is still an appropriate model if you want to run
- multiple I/O-bound tasks simultaneously.
-
-
.. _lock-objects:
Lock Objects
Return the value (in fractional seconds) of a monotonic clock, i.e. a clock
that cannot go backwards. The clock is not affected by system clock updates.
The reference point of the returned value is undefined, so that only the
- difference between the results of consecutive calls is valid.
+ difference between the results of two calls is valid.
.. versionadded:: 3.3
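A short sketch of the intended usage: compute a duration from two readings, never interpret a single reading.

```python
import time

t0 = time.monotonic()
t1 = time.monotonic()
elapsed = t1 - t0
assert elapsed >= 0  # only the difference is meaningful, never the absolute value
```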
.. versionchanged:: 3.5
clock with the highest available resolution to measure a short duration. It
does include time elapsed during sleep and is system-wide. The reference
point of the returned value is undefined, so that only the difference between
- the results of consecutive calls is valid.
+ the results of two calls is valid.
.. versionadded:: 3.3
CPU time of the current process. It does not include time elapsed during
sleep. It is process-wide by definition. The reference point of the
returned value is undefined, so that only the difference between the results
- of consecutive calls is valid.
+ of two calls is valid.
.. versionadded:: 3.3
CPU time of the current thread. It does not include time elapsed during
sleep. It is thread-specific by definition. The reference point of the
returned value is undefined, so that only the difference between the results
- of consecutive calls in the same thread is valid.
+ of two calls in the same thread is valid.
.. availability:: Windows, Linux, Unix systems supporting
``CLOCK_THREAD_CPUTIME_ID``.
.. class:: ModuleType(name, doc=None)
- The type of :term:`modules <module>`. Constructor takes the name of the
+ The type of :term:`modules <module>`. The constructor takes the name of the
module to be created and optionally its :term:`docstring`.
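A minimal sketch of the constructor (names are illustrative):

```python
import types

mod = types.ModuleType("demo", "An example docstring.")
assert mod.__name__ == "demo"
assert mod.__doc__ == "An example docstring."
```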
.. note::
The :term:`loader` which loaded the module. Defaults to ``None``.
+ This attribute is to match :attr:`importlib.machinery.ModuleSpec.loader`
+ as stored in the :attr:`__spec__` object.
+
+ .. note::
+ A future version of Python may stop setting this attribute by default.
+ To guard against this potential change, preferably read from the
+ :attr:`__spec__` attribute instead or use
+ ``getattr(module, "__loader__", None)`` if you explicitly need to use
+ this attribute.
+
.. versionchanged:: 3.4
Defaults to ``None``. Previously the attribute was optional.
.. attribute:: __name__
- The name of the module.
+ The name of the module. Expected to match
+ :attr:`importlib.machinery.ModuleSpec.name`.
.. attribute:: __package__
to ``''``, else it should be set to the name of the package (which can be
:attr:`__name__` if the module is a package itself). Defaults to ``None``.
+ This attribute is to match :attr:`importlib.machinery.ModuleSpec.parent`
+ as stored in the :attr:`__spec__` object.
+
+ .. note::
+ A future version of Python may stop setting this attribute by default.
+ To guard against this potential change, preferably read from the
+ :attr:`__spec__` attribute instead or use
+ ``getattr(module, "__package__", None)`` if you explicitly need to use
+ this attribute.
+
.. versionchanged:: 3.4
Defaults to ``None``. Previously the attribute was optional.
+ .. attribute:: __spec__
+
+ A record of the module's import-system-related state. Expected to be
+ an instance of :class:`importlib.machinery.ModuleSpec`.
+
+ .. versionadded:: 3.4
+
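The guidance above can be sketched as follows; a minimal, hedged example (the module name ``json`` is arbitrary) that prefers :attr:`__spec__` for import metadata and reads the dunder attributes defensively, since a future Python may stop setting them by default:

```python
import importlib

# Prefer __spec__ for import metadata; fall back defensively for the
# dunder attributes that may stop being set by default in the future.
mod = importlib.import_module("json")
spec = mod.__spec__

name = spec.name                                   # matches mod.__name__
loader = getattr(mod, "__loader__", None) or spec.loader
parent = getattr(mod, "__package__", None)
if parent is None:
    parent = spec.parent
```

For a package such as ``json``, ``spec.parent`` equals ``spec.name``, matching ``__package__``.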
.. class:: GenericAlias(t_origin, t_args)
some examples of how to use :class:`Mock`, :class:`MagicMock` and
:func:`patch`.
-Mock is very easy to use and is designed for use with :mod:`unittest`. Mock
+Mock is designed for use with :mod:`unittest` and
is based on the 'action -> assertion' pattern instead of 'record -> replay'
used by many mocking frameworks.
the `load_tests protocol`_.
.. versionchanged:: 3.4
- Test discovery supports :term:`namespace packages <namespace package>`.
+ Test discovery supports :term:`namespace packages <namespace package>`
+ for the start directory. Note that you need to specify the top level directory too
+ (e.g. ``python -m unittest discover -s root/namespace -t root``).
.. _organizing-tests:
after :meth:`setUpClass` if :meth:`setUpClass` raises an exception.
It is responsible for calling all the cleanup functions added by
- :meth:`addCleanupClass`. If you need cleanup functions to be called
+ :meth:`addClassCleanup`. If you need cleanup functions to be called
*prior* to :meth:`tearDownClass` then you can call
- :meth:`doCleanupsClass` yourself.
+ :meth:`doClassCleanups` yourself.
- :meth:`doCleanupsClass` pops methods off the stack of cleanup
+ :meth:`doClassCleanups` pops methods off the stack of cleanup
functions one at a time, so it can be called at any time.
.. versionadded:: 3.8
.. versionchanged:: 3.4
Modules that raise :exc:`SkipTest` on import are recorded as skips,
- not errors.
- Discovery works for :term:`namespace packages <namespace package>`.
- Paths are sorted before being imported so that execution order is
- the same even if the underlying file system's ordering is not
- dependent on file name.
+ not errors.
+
+ .. versionchanged:: 3.4
+ *start_dir* can be a :term:`namespace package <namespace package>`.
+
+ .. versionchanged:: 3.4
+ Paths are sorted before being imported so that execution order is the
+ same even if the underlying file system's ordering is not dependent
+ on file name.
.. versionchanged:: 3.5
Found packages are now checked for ``load_tests`` regardless of
:param context: The information for the virtual environment
creation request being processed.
"""
- url = 'https://raw.github.com/pypa/pip/master/contrib/get-pip.py'
+ url = 'https://bootstrap.pypa.io/get-pip.py'
self.install_script(context, 'pip', url)
def main(args=None):
.. index::
single: from; yield from expression
-When ``yield from <expr>`` is used, it treats the supplied expression as
-a subiterator. All values produced by that subiterator are passed directly
+When ``yield from <expr>`` is used, the supplied expression must be an
+iterable. The values produced by iterating that iterable are passed directly
to the caller of the current generator's methods. Any values passed in with
:meth:`~generator.send` and any exceptions passed in with
:meth:`~generator.throw` are passed to the underlying iterator if it has the
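The delegation described above can be illustrated with a short sketch: values produced by the subiterator flow out to the caller, and values passed in with ``send()`` flow back into it.

```python
def inner():
    # The value sent into the delegating generator arrives here.
    received = yield "from inner"
    return received

def outer():
    # yield from forwards produced values out and sent values in;
    # the subiterator's return value becomes the expression's value.
    result = yield from inner()
    yield f"inner returned {result!r}"

gen = outer()
print(next(gen))        # "from inner" (produced by the subiterator)
print(gen.send("hi"))   # "inner returned 'hi'"
```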
info['source'].append((env.docname, target))
pnode = nodes.paragraph(text, classes=["audit-hook"], ids=ids)
+ pnode.line = self.lineno
if self.content:
self.state.nested_parse(self.content, self.content_offset, pnode)
else:
The actual parameters (arguments) to a function call are introduced in the local
symbol table of the called function when it is called; thus, arguments are
passed using *call by value* (where the *value* is always an object *reference*,
-not the value of the object). [#]_ When a function calls another function, a new
+not the value of the object). [#]_ When a function calls another function,
+or calls itself recursively, a new
local symbol table is created for that call.
A function definition associates the function name with the function object in
by an expression evaluating to the value of the annotation. Return annotations are
defined by a literal ``->``, followed by an expression, between the parameter
list and the colon denoting the end of the :keyword:`def` statement. The
-following example has a positional argument, a keyword argument, and the return
+following example has a required argument, an optional argument, and the return
value annotated::
>>> def f(ham: str, eggs: str = 'eggs') -> str:
functions internally. For more details, please see their respective
documentation.
(Contributed by Adam Goldschmidt, Senthil Kumaran and Ken Jin in :issue:`42967`.)
+
+Notable changes in Python 3.9.3
+===============================
+
+A security fix alters the :class:`ftplib.FTP` behavior to not trust the
+IPv4 address sent from the remote server when setting up a passive data
+channel. The IP address of the control connection is reused instead. For
+unusual code requiring the old behavior, set a
+``trust_server_pasv_ipv4_address`` attribute on your FTP instance to
+``True``. (See :issue:`43285`.)
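Opting back into the old behavior is a per-instance switch; a minimal sketch (no network I/O is performed here):

```python
from ftplib import FTP

# Per the 3.9.3 security fix, passive-mode transfers ignore the IPv4
# address in the server's PASV reply and reuse the control-connection
# address. Only opt back in if the PASV reply genuinely must be trusted.
ftp = FTP()
ftp.trust_server_pasv_ipv4_address = True
```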
/* Borrowed reference to the current frame (it can be NULL) */
PyFrameObject *frame;
int recursion_depth;
- char overflowed; /* The stack has overflowed. Allow 50 more calls
- to handle the runtime error. */
+ int recursion_headroom; /* Allow 50 more calls to handle any errors. */
char recursion_critical; /* The current calls must not cause
a stack overflow. */
int stackcheck_counter;
#define Py_EnterRecursiveCall(where) _Py_EnterRecursiveCall_inline(where)
-/* Compute the "lower-water mark" for a recursion limit. When
- * Py_LeaveRecursiveCall() is called with a recursion depth below this mark,
- * the overflowed flag is reset to 0. */
-static inline int _Py_RecursionLimitLowerWaterMark(int limit) {
- if (limit > 200) {
- return (limit - 50);
- }
- else {
- return (3 * (limit >> 2));
- }
-}
-
static inline void _Py_LeaveRecursiveCall(PyThreadState *tstate) {
tstate->recursion_depth--;
- int limit = tstate->interp->ceval.recursion_limit;
- if (tstate->recursion_depth < _Py_RecursionLimitLowerWaterMark(limit)) {
- tstate->overflowed = 0;
- }
}
static inline void _Py_LeaveRecursiveCall_inline(void) {
/*--start constants--*/
#define PY_MAJOR_VERSION 3
#define PY_MINOR_VERSION 9
-#define PY_MICRO_VERSION 2
+#define PY_MICRO_VERSION 3
#define PY_RELEASE_LEVEL PY_RELEASE_LEVEL_FINAL
#define PY_RELEASE_SERIAL 0
/* Version as a string */
-#define PY_VERSION "3.9.2"
+#define PY_VERSION "3.9.3"
/*--end constants--*/
/* Version as a single 4-byte hex number, e.g. 0x010502B2 == 1.5.2b2.
def _write_constant(self, value):
if isinstance(value, (float, complex)):
- # Substitute overflowing decimal literal for AST infinities.
- self.write(repr(value).replace("inf", _INFSTR))
+ # Substitute overflowing decimal literal for AST infinities,
+ # and inf - inf for NaNs.
+ self.write(
+ repr(value)
+ .replace("inf", _INFSTR)
+ .replace("nan", f"({_INFSTR}-{_INFSTR})")
+ )
elif self._avoid_backslashes and isinstance(value, str):
self._write_str_avoiding_backslashes(value)
else:
self.traverse(node.orelse)
def visit_Set(self, node):
- if not node.elts:
- raise ValueError("Set node should have at least one item")
- with self.delimit("{", "}"):
- self.interleave(lambda: self.write(", "), self.traverse, node.elts)
+ if node.elts:
+ with self.delimit("{", "}"):
+ self.interleave(lambda: self.write(", "), self.traverse, node.elts)
+ else:
+ # `{}` would be interpreted as a dictionary literal, and
+ # `set` might be shadowed. Thus:
+ self.write('{*()}')
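Both unparser fixes above can be checked round-trip, assuming a Python with these changes applied: an empty ``Set`` node must not become ``{}`` (a dict literal), and a NaN constant is emitted as an inf-minus-inf expression.

```python
import ast
import math

# Empty set: {} would parse as a dict, so the unparser emits {*()}.
empty_set_src = ast.unparse(ast.Set(elts=[]))
assert eval(empty_set_src) == set()

# NaN constant: unparsed via _INFSTR substitution, evals back to NaN.
nan_src = ast.unparse(ast.Constant(value=float("nan"), kind=None))
assert math.isnan(eval(nan_src))
```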
def visit_Dict(self, node):
def write_key_value_pair(k, v):
def __get_result(self):
if self._exception:
- raise self._exception
+ try:
+ raise self._exception
+ finally:
+ # Break a reference cycle with the exception in self._exception
+ self = None
else:
return self._result
timeout.
Exception: If the call raised then that exception will be raised.
"""
- with self._condition:
- if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
- raise CancelledError()
- elif self._state == FINISHED:
- return self.__get_result()
-
- self._condition.wait(timeout)
-
- if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
- raise CancelledError()
- elif self._state == FINISHED:
- return self.__get_result()
- else:
- raise TimeoutError()
+ try:
+ with self._condition:
+ if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
+ raise CancelledError()
+ elif self._state == FINISHED:
+ return self.__get_result()
+
+ self._condition.wait(timeout)
+
+ if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
+ raise CancelledError()
+ elif self._state == FINISHED:
+ return self.__get_result()
+ else:
+ raise TimeoutError()
+ finally:
+ # Break a reference cycle with the exception in self._exception
+ self = None
def exception(self, timeout=None):
"""Return the exception raised by the call that the future represents.
sock = None
file = None
welcome = None
- passiveserver = 1
+ passiveserver = True
+ # Disables https://bugs.python.org/issue43285 security if set to True.
+ trust_server_pasv_ipv4_address = False
def __init__(self, host='', user='', passwd='', acct='',
timeout=_GLOBAL_DEFAULT_TIMEOUT, source_address=None, *,
return sock
def makepasv(self):
+ """Internal: Does the PASV or EPSV handshake -> (address, port)"""
if self.af == socket.AF_INET:
- host, port = parse227(self.sendcmd('PASV'))
+ untrusted_host, port = parse227(self.sendcmd('PASV'))
+ if self.trust_server_pasv_ipv4_address:
+ host = untrusted_host
+ else:
+ host = self.sock.getpeername()[0]
else:
host, port = parse229(self.sendcmd('EPSV'), self.sock.getpeername())
return host, port
g = sys.stdout.buffer
else:
if arg[-3:] != ".gz":
- print("filename doesn't end in .gz:", repr(arg))
- continue
+ sys.exit(f"filename doesn't end in .gz: {arg!r}")
f = open(arg, "rb")
g = builtins.open(arg[:-3], "wb")
else:
# NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked"
self.length = None
length = self.headers.get("content-length")
-
- # are we using the chunked-style of transfer encoding?
- tr_enc = self.headers.get("transfer-encoding")
if length and not self.chunked:
try:
self.length = int(length)
self.debuglevel = level
def _tunnel(self):
- connect_str = "CONNECT %s:%d HTTP/1.0\r\n" % (self._tunnel_host,
- self._tunnel_port)
- connect_bytes = connect_str.encode("ascii")
- self.send(connect_bytes)
+ connect = b"CONNECT %s:%d HTTP/1.0\r\n" % (
+ self._tunnel_host.encode("ascii"), self._tunnel_port)
+ headers = [connect]
for header, value in self._tunnel_headers.items():
- header_str = "%s: %s\r\n" % (header, value)
- header_bytes = header_str.encode("latin-1")
- self.send(header_bytes)
- self.send(b'\r\n')
+ headers.append(f"{header}: {value}\r\n".encode("latin-1"))
+ headers.append(b"\r\n")
+ # Making a single send() call instead of one per line encourages
+ # the host OS to use a more optimal packet size instead of
+ # potentially emitting a series of small packets.
+ self.send(b"".join(headers))
+ del headers
response = self.response_class(self.sock, method=self._method)
(version, code, message) = response._read_status()
if code != http.HTTPStatus.OK:
self.close()
- raise OSError("Tunnel connection failed: %d %s" % (code,
- message.strip()))
+ raise OSError(f"Tunnel connection failed: {code} {message.strip()}")
while True:
line = response.fp.readline(_MAXLINE + 1)
if len(line) > _MAXLINE:
-What's New in IDLE 3.9.2
-Released on 2021-02-15?
-======================================
+What's New in IDLE 3.9.z
+(since 3.9.0)
+=========================
+
+bpo-43283: Document why printing to IDLE's Shell is often slower than
+printing to a system terminal and that it can be made faster by
+pre-formatting a single string before printing.
bpo-23544: Disable Debug=>Stack Viewer when user code is running or
Debugger is active, to prevent hang or crash. Patch by Zackery Spytz.
bpo-32631: Finish zzdummy example extension module: make menu entries
work; add docstrings and tests with 100% coverage.
-
-What's New in IDLE 3.9.1
-Released on 2020-12-07
-======================================
-
bpo-42508: Keep IDLE running on macOS. Remove obsolete workaround
that prevented running files with shortcuts when using new universal2
installers built on macOS 11.
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
- <title>IDLE — Python 3.10.0a1 documentation</title>
+ <title>IDLE — Python 3.10.0a6 documentation</title>
<link rel="stylesheet" href="../_static/pydoctheme.css" type="text/css" />
<link rel="stylesheet" href="../_static/pygments.css" type="text/css" />
<script src="../_static/sidebar.js"></script>
<link rel="search" type="application/opensearchdescription+xml"
- title="Search within Python 3.10.0a1 documentation"
+ title="Search within Python 3.10.0a6 documentation"
href="../_static/opensearch.xml"/>
<link rel="author" title="About these documents" href="../about.html" />
<link rel="index" title="Index" href="../genindex.html" />
-
<style>
@media only screen {
table.full-width-table {
<li><a href="https://www.python.org/">Python</a> »</li>
- <li>
- <a href="../index.html">3.10.0a1 Documentation</a> »
+ <li id="cpython-language-and-version">
+ <a href="../index.html">3.10.0a6 Documentation</a> »
</li>
<li class="nav-item nav-item-1"><a href="index.html" >The Python Standard Library</a> »</li>
<dl class="simple">
<dt>View Last Restart</dt><dd><p>Scroll the shell window to the last Shell restart.</p>
</dd>
-<dt>Restart Shell</dt><dd><p>Restart the shell to clean the environment.</p>
+<dt>Restart Shell</dt><dd><p>Restart the shell to clean the environment and reset display and exception handling.</p>
</dd>
<dt>Previous History</dt><dd><p>Cycle through earlier commands in history which match the current entry.</p>
</dd>
code execution process. A connection must be established whenever the Shell
starts or restarts. (The latter is indicated by a divider line that says
‘RESTART’). If the user process fails to connect to the GUI process, it
-displays a <code class="docutils literal notranslate"><span class="pre">Tk</span></code> error box with a ‘cannot connect’ message that directs the
-user here. It then exits.</p>
+usually displays a <code class="docutils literal notranslate"><span class="pre">Tk</span></code> error box with a ‘cannot connect’ message
+that directs the user here. It then exits.</p>
+<p>One specific connection failure on Unix systems results from
+misconfigured masquerading rules somewhere in a system’s network setup.
+When IDLE is started from a terminal, one will see a message starting
+with <code class="docutils literal notranslate"><span class="pre">**</span> <span class="pre">Invalid</span> <span class="pre">host:</span></code>.
+The valid value is <code class="docutils literal notranslate"><span class="pre">127.0.0.1</span> <span class="pre">(idlelib.rpc.LOCALHOST)</span></code>.
+One can diagnose with <code class="docutils literal notranslate"><span class="pre">tcpconnect</span> <span class="pre">-irv</span> <span class="pre">127.0.0.1</span> <span class="pre">6543</span></code> in one
+terminal window and <code class="docutils literal notranslate"><span class="pre">tcplisten</span> <span class="pre"><same</span> <span class="pre">args></span></code> in another.</p>
<p>A common cause of failure is a user-written file with the same name as a
standard library module, such as <em>random.py</em> and <em>tkinter.py</em>. When such a
file is located in the same directory as a file that is about to be run,
<p>If IDLE quits with no message, and it was not started from a console, try
starting it from a console or terminal (<code class="docutils literal notranslate"><span class="pre">python</span> <span class="pre">-m</span> <span class="pre">idlelib</span></code>) and see if
this results in an error message.</p>
+<p>On Unix-based systems with tcl/tk older than <code class="docutils literal notranslate"><span class="pre">8.6.11</span></code> (see
+<code class="docutils literal notranslate"><span class="pre">About</span> <span class="pre">IDLE</span></code>) certain characters of certain fonts can cause
+a tk failure with a message to the terminal. This can happen either
+if one starts IDLE to edit a file with such a character or later
+when entering such a character. If one cannot upgrade tcl/tk,
+then re-configure IDLE to use a font that works better.</p>
</div>
<div class="section" id="running-user-code">
<h3>Running user code<a class="headerlink" href="#running-user-code" title="Permalink to this headline">¶</a></h3>
with objects that get input from and send output to the Shell window.
The original values stored in <code class="docutils literal notranslate"><span class="pre">sys.__stdin__</span></code>, <code class="docutils literal notranslate"><span class="pre">sys.__stdout__</span></code>, and
<code class="docutils literal notranslate"><span class="pre">sys.__stderr__</span></code> are not touched, but may be <code class="docutils literal notranslate"><span class="pre">None</span></code>.</p>
-<p>When Shell has the focus, it controls the keyboard and screen. This is
-normally transparent, but functions that directly access the keyboard
-and screen will not work. These include system-specific functions that
-determine whether a key has been pressed and if so, which.</p>
+<p>Sending print output from one process to a text widget in another is
+slower than printing to a system terminal in the same process.
+This has the most effect when printing multiple arguments, as the string
for each argument, each separator, and the newline are sent separately.
+For development, this is usually not a problem, but if one wants to
+print faster in IDLE, format and join together everything one wants
+displayed together and then print a single string. Both format strings
+and <a class="reference internal" href="stdtypes.html#str.join" title="str.join"><code class="xref py py-meth docutils literal notranslate"><span class="pre">str.join()</span></code></a> can help combine fields and lines.</p>
<p>IDLE’s standard stream replacements are not inherited by subprocesses
-created in the execution process, whether directly by user code or by modules
-such as multiprocessing. If such subprocess use <code class="docutils literal notranslate"><span class="pre">input</span></code> from sys.stdin
-or <code class="docutils literal notranslate"><span class="pre">print</span></code> or <code class="docutils literal notranslate"><span class="pre">write</span></code> to sys.stdout or sys.stderr,
+created in the execution process, whether directly by user code or by
+modules such as multiprocessing. If such a subprocess uses <code class="docutils literal notranslate"><span class="pre">input</span></code> from
+sys.stdin or <code class="docutils literal notranslate"><span class="pre">print</span></code> or <code class="docutils literal notranslate"><span class="pre">write</span></code> to sys.stdout or sys.stderr,
IDLE should be started in a command line window. The secondary subprocess
will then be attached to that window for input and output.</p>
-<p>The IDLE code running in the execution process adds frames to the call stack
-that would not be there otherwise. IDLE wraps <code class="docutils literal notranslate"><span class="pre">sys.getrecursionlimit</span></code> and
-<code class="docutils literal notranslate"><span class="pre">sys.setrecursionlimit</span></code> to reduce the effect of the additional stack frames.</p>
<p>If <code class="docutils literal notranslate"><span class="pre">sys</span></code> is reset by user code, such as with <code class="docutils literal notranslate"><span class="pre">importlib.reload(sys)</span></code>,
IDLE’s changes are lost and input from the keyboard and output to the screen
will not work correctly.</p>
-<p>When user code raises SystemExit either directly or by calling sys.exit, IDLE
-returns to a Shell prompt instead of exiting.</p>
+<p>When Shell has the focus, it controls the keyboard and screen. This is
+normally transparent, but functions that directly access the keyboard
+and screen will not work. These include system-specific functions that
+determine whether a key has been pressed and if so, which.</p>
+<p>The IDLE code running in the execution process adds frames to the call stack
+that would not be there otherwise. IDLE wraps <code class="docutils literal notranslate"><span class="pre">sys.getrecursionlimit</span></code> and
+<code class="docutils literal notranslate"><span class="pre">sys.setrecursionlimit</span></code> to reduce the effect of the additional stack
+frames.</p>
+<p>When user code raises SystemExit either directly or by calling sys.exit,
+IDLE returns to a Shell prompt instead of exiting.</p>
</div>
<div class="section" id="user-output-in-shell">
<h3>User output in Shell<a class="headerlink" href="#user-output-in-shell" title="Permalink to this headline">¶</a></h3>
<li><a href="https://www.python.org/">Python</a> »</li>
- <li>
- <a href="../index.html">3.10.0a1 Documentation</a> »
+ <li id="cpython-language-and-version">
+ <a href="../index.html">3.10.0a6 Documentation</a> »
</li>
<li class="nav-item nav-item-1"><a href="index.html" >The Python Standard Library</a> »</li>
</ul>
</div>
<div class="footer">
- © <a href="../copyright.html">Copyright</a> 2001-2020, Python Software Foundation.
+ © <a href="../copyright.html">Copyright</a> 2001-2021, Python Software Foundation.
<br />
The Python Software Foundation is a non-profit corporation.
<br />
<br />
- Last updated on Oct 20, 2020.
+ Last updated on Mar 29, 2021.
<a href="https://docs.python.org/3/bugs.html">Found a bug</a>?
<br />
def getLevelName(level):
"""
- Return the textual representation of logging level 'level'.
+ Return the textual or numeric representation of logging level 'level'.
If the level is one of the predefined levels (CRITICAL, ERROR, WARNING,
INFO, DEBUG) then you get the corresponding string. If you have
If a numeric value corresponding to one of the defined levels is passed
in, the corresponding string representation is returned.
- Otherwise, the string "Level %s" % level is returned.
+ If a string representation of the level is passed in, the corresponding
+ numeric value is returned.
+
+ If no matching numeric or string value is passed in, the string
+ 'Level %s' % level is returned.
"""
# See Issues #22386, #27937 and #29220 for why it's this way
result = _levelToName.get(level)
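The bidirectional mapping documented above is easy to verify:

```python
import logging

# Numeric level -> name, name -> numeric level, and the fallback string.
assert logging.getLevelName(logging.DEBUG) == "DEBUG"
assert logging.getLevelName("DEBUG") == logging.DEBUG
assert logging.getLevelName(99) == "Level 99"
```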
%s</head><body bgcolor="#f0f0f8">%s<div style="clear:both;padding-top:.5em;">%s</div>
</body></html>''' % (title, css_link, html_navbar(), contents)
- def filelink(self, url, path):
- return '<a href="getfile?key=%s">%s</a>' % (url, path)
-
html = _HTMLDoc()
'key = %s' % key, '#ffffff', '#ee77aa', '<br>'.join(results))
return 'Search Results', contents
- def html_getfile(path):
- """Get and display a source file listing safely."""
- path = urllib.parse.unquote(path)
- with tokenize.open(path) as fp:
- lines = html.escape(fp.read())
- body = '<pre>%s</pre>' % lines
- heading = html.heading(
- '<big><big><strong>File Listing</strong></big></big>',
- '#ffffff', '#7799ee')
- contents = heading + html.bigsection(
- 'File: %s' % path, '#ffffff', '#ee77aa', body)
- return 'getfile %s' % path, contents
-
def html_topics():
"""Index of topic texts available."""
op, _, url = url.partition('=')
if op == "search?key":
title, content = html_search(url)
- elif op == "getfile?key":
- title, content = html_getfile(url)
elif op == "topic?key":
# try topics first, then objects.
try:
# -*- coding: utf-8 -*-
-# Autogenerated by Sphinx on Fri Feb 19 13:29:38 2021
+# Autogenerated by Sphinx on Fri Apr 2 11:48:03 2021
topics = {'assert': 'The "assert" statement\n'
'**********************\n'
'\n'
'"Formatter",\n'
'subclasses can define their own format string syntax). The '
'syntax is\n'
- 'related to that of formatted string literals, but there '
- 'are\n'
- 'differences.\n'
+ 'related to that of formatted string literals, but it is '
+ 'less\n'
+ 'sophisticated and, in particular, does not support '
+ 'arbitrary\n'
+ 'expressions.\n'
'\n'
'Format strings contain “replacement fields” surrounded by '
'curly braces\n'
if _destinsrc(src, dst):
raise Error("Cannot move a directory '%s' into itself"
" '%s'." % (src, dst))
+ if (_is_immutable(src)
+ or (not os.access(src, os.W_OK) and os.listdir(src)
+ and sys.platform == 'darwin')):
+ raise PermissionError("Cannot move the non-empty directory "
+ "'%s': Lacking write permission to '%s'."
+ % (src, src))
copytree(src, real_dst, copy_function=copy_function,
symlinks=True)
rmtree(src)
dst += os.path.sep
return dst.startswith(src)
+def _is_immutable(src):
+ st = _stat(src)
+ immutable_states = [stat.UF_IMMUTABLE, stat.SF_IMMUTABLE]
+ return hasattr(st, 'st_flags') and st.st_flags in immutable_states
+
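The new helper's platform behavior can be sketched with a standalone version (``is_immutable`` here mirrors the private helper above): ``st_flags`` only exists on BSD-derived stat results, so platforms without it report ``False``, and a freshly created file is never immutable.

```python
import os
import stat
import tempfile

def is_immutable(path):
    # Mirrors shutil's private helper: BSD/macOS expose st_flags on
    # stat results; platforms without it (e.g. Linux) report False.
    st = os.stat(path)
    immutable_states = [stat.UF_IMMUTABLE, stat.SF_IMMUTABLE]
    return hasattr(st, "st_flags") and st.st_flags in immutable_states

with tempfile.NamedTemporaryFile() as f:
    print(is_immutable(f.name))  # a fresh temporary file is not immutable
```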
def _get_gid(name):
"""Returns a gid, given a group name."""
if getgrnam is None or name is None:
CRLF = "\r\n"
bCRLF = b"\r\n"
_MAXLINE = 8192 # more than 8 times larger than RFC 821, 4.5.3
+_MAXCHALLENGE = 5 # Maximum number of AUTH challenges sent
OLDSTYLE_AUTH = re.compile(r"auth=(.*)", re.I)
self.esmtp_features = {}
self.command_encoding = 'ascii'
self.source_address = source_address
+ self._auth_challenge_count = 0
if host:
(code, msg) = self.connect(host, port)
if initial_response is not None:
response = encode_base64(initial_response.encode('ascii'), eol='')
(code, resp) = self.docmd("AUTH", mechanism + " " + response)
+ self._auth_challenge_count = 1
else:
(code, resp) = self.docmd("AUTH", mechanism)
+ self._auth_challenge_count = 0
# If server responds with a challenge, send the response.
- if code == 334:
+ while code == 334:
+ self._auth_challenge_count += 1
challenge = base64.decodebytes(resp)
response = encode_base64(
authobject(challenge).encode('ascii'), eol='')
(code, resp) = self.docmd(response)
+ # If server keeps sending challenges, something is wrong.
+ if self._auth_challenge_count > _MAXCHALLENGE:
+ raise SMTPException(
+ "Server AUTH mechanism infinite loop. Last response: "
+ + repr((code, resp))
+ )
if code in (235, 503):
return (code, resp)
raise SMTPAuthenticationError(code, resp)
def auth_login(self, challenge=None):
""" Authobject to use with LOGIN authentication. Requires self.user and
self.password to be set."""
- if challenge is None:
+ if challenge is None or self._auth_challenge_count < 2:
return self.user
else:
return self.password
self.collect_children(blocking=self.block_on_close)
+class _Threads(list):
+ """
+ Joinable list of all non-daemon threads.
+ """
+ def append(self, thread):
+ self.reap()
+ if thread.daemon:
+ return
+ super().append(thread)
+
+ def pop_all(self):
+ self[:], result = [], self[:]
+ return result
+
+ def join(self):
+ for thread in self.pop_all():
+ thread.join()
+
+ def reap(self):
+ self[:] = (thread for thread in self if thread.is_alive())
+
+
+class _NoThreads:
+ """
+ Degenerate version of _Threads.
+ """
+ def append(self, thread):
+ pass
+
+ def join(self):
+ pass
+
+
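The ``pop_all()`` method above empties the list and returns its prior contents in a single statement; tuple assignment evaluates the right-hand side first, so nothing appended before the swap is lost. A minimal illustration of the idiom:

```python
# Right-hand side is evaluated first: ([], [1, 2, 3]).
# Then items[:] = [] empties the list, and snapshot keeps the copy.
items = [1, 2, 3]
items[:], snapshot = [], items[:]
print(items, snapshot)  # [] [1, 2, 3]
```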
class ThreadingMixIn:
"""Mix-in class to handle each request in a new thread."""
daemon_threads = False
# If true, server_close() waits until all non-daemonic threads terminate.
block_on_close = True
- # For non-daemonic threads, list of threading.Threading objects
+ # A _Threads instance (or the _NoThreads placeholder)
# used by server_close() to wait for all threads completion.
- _threads = None
+ _threads = _NoThreads()
def process_request_thread(self, request, client_address):
"""Same as in BaseServer but as a thread.
def process_request(self, request, client_address):
"""Start a new thread to process the request."""
+ if self.block_on_close:
+ vars(self).setdefault('_threads', _Threads())
t = threading.Thread(target = self.process_request_thread,
args = (request, client_address))
t.daemon = self.daemon_threads
- if not t.daemon and self.block_on_close:
- if self._threads is None:
- self._threads = []
- self._threads.append(t)
+ self._threads.append(t)
t.start()
def server_close(self):
super().server_close()
- if self.block_on_close:
- threads = self._threads
- self._threads = None
- if threads:
- for thread in threads:
- thread.join()
+ self._threads.join()
if hasattr(os, "fork"):
self.stderr.close()
# All data exchanged. Translate lists into strings.
- if stdout is not None:
- stdout = stdout[0]
- if stderr is not None:
- stderr = stderr[0]
+ stdout = stdout[0] if stdout else None
+ stderr = stderr[0] if stderr else None
return (stdout, stderr)
sock.close()
+def test_gc():
+ import gc
+
+ def hook(event, args):
+ if event.startswith("gc."):
+ print(event, *args)
+
+ sys.addaudithook(hook)
+
+ gc.get_objects(generation=1)
+
+ x = object()
+ y = [x]
+
+ gc.get_referrers(x)
+ gc.get_referents(y)
+
+
if __name__ == "__main__":
from test.support import suppress_msvcrt_asserts
self.assertEqual(events[2][0], "socket.bind")
self.assertTrue(events[2][2].endswith("('127.0.0.1', 8080)"))
+ def test_gc(self):
+ returncode, events, stderr = self.run_python("test_gc")
+ if returncode:
+ self.fail(stderr)
+
+ if support.verbose:
+ print(*events, sep='\n')
+ self.assertEqual(
+ [event[0] for event in events],
+ ["gc.get_objects", "gc.get_referrers", "gc.get_referents"]
+ )
+
+
if __name__ == "__main__":
unittest.main()
if not stdout.startswith(pattern):
raise AssertionError("%a doesn't start with %a" % (stdout, pattern))
+ @unittest.skipIf(sys.platform == 'win32',
+ 'Windows has a native unicode API')
+ def test_invalid_utf8_arg(self):
+ # bpo-35883: Py_DecodeLocale() must escape b'\xfd\xbf\xbf\xbb\xba\xba'
+ # byte sequence with surrogateescape rather than decoding it as the
+ # U+7fffbeba character which is outside the [U+0000; U+10ffff] range of
+ # Python Unicode characters.
+ #
+ # Test with default config, in the C locale, in the Python UTF-8 Mode.
+ code = 'import sys, os; s=os.fsencode(sys.argv[1]); print(ascii(s))'
+ base_cmd = [sys.executable, '-c', code]
+
+ def run_default(arg):
+ cmd = [sys.executable, '-c', code, arg]
+ return subprocess.run(cmd, stdout=subprocess.PIPE, text=True)
+
+ def run_c_locale(arg):
+ cmd = [sys.executable, '-c', code, arg]
+ env = dict(os.environ)
+ env['LC_ALL'] = 'C'
+ return subprocess.run(cmd, stdout=subprocess.PIPE,
+ text=True, env=env)
+
+ def run_utf8_mode(arg):
+ cmd = [sys.executable, '-X', 'utf8', '-c', code, arg]
+ return subprocess.run(cmd, stdout=subprocess.PIPE, text=True)
+
+ valid_utf8 = 'e:\xe9, euro:\u20ac, non-bmp:\U0010ffff'.encode('utf-8')
+ # invalid UTF-8 byte sequences with a valid UTF-8 sequence
+ # in the middle.
+ invalid_utf8 = (
+ b'\xff' # invalid byte
+ b'\xc3\xff' # invalid byte sequence
+ b'\xc3\xa9' # valid utf-8: U+00E9 character
+ b'\xed\xa0\x80' # lone surrogate character (invalid)
+ b'\xfd\xbf\xbf\xbb\xba\xba' # character outside [U+0000; U+10ffff]
+ )
+ test_args = [valid_utf8, invalid_utf8]
+
+ for run_cmd in (run_default, run_c_locale, run_utf8_mode):
+ with self.subTest(run_cmd=run_cmd):
+ for arg in test_args:
+ proc = run_cmd(arg)
+ self.assertEqual(proc.stdout.rstrip(), ascii(arg))
+
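The ``surrogateescape`` behavior the test above relies on can be shown directly: undecodable bytes survive as lone surrogates and round-trip back to the original bytes.

```python
# An invalid byte followed by valid UTF-8 for e-acute.
raw = b"\xff\xc3\xa9"
text = raw.decode("utf-8", "surrogateescape")
assert text == "\udcff\xe9"                          # lone surrogate kept
assert text.encode("utf-8", "surrogateescape") == raw  # lossless round-trip
```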
@unittest.skipUnless((sys.platform == 'darwin' or
support.is_android), 'test specific to Mac OS X and Android')
def test_osx_android_utf8(self):
- def check_output(text):
- decoded = text.decode('utf-8', 'surrogateescape')
- expected = ascii(decoded).encode('ascii') + b'\n'
+ text = 'e:\xe9, euro:\u20ac, non-bmp:\U0010ffff'.encode('utf-8')
+ code = "import sys; print(ascii(sys.argv[1]))"
- env = os.environ.copy()
- # C locale gives ASCII locale encoding, but Python uses UTF-8
- # to parse the command line arguments on Mac OS X and Android.
- env['LC_ALL'] = 'C'
+ decoded = text.decode('utf-8', 'surrogateescape')
+ expected = ascii(decoded).encode('ascii') + b'\n'
- p = subprocess.Popen(
- (sys.executable, "-c", "import sys; print(ascii(sys.argv[1]))", text),
- stdout=subprocess.PIPE,
- env=env)
- stdout, stderr = p.communicate()
- self.assertEqual(stdout, expected)
- self.assertEqual(p.returncode, 0)
+ env = os.environ.copy()
+ # C locale gives ASCII locale encoding, but Python uses UTF-8
+ # to parse the command line arguments on Mac OS X and Android.
+ env['LC_ALL'] = 'C'
- # test valid utf-8
- text = 'e:\xe9, euro:\u20ac, non-bmp:\U0010ffff'.encode('utf-8')
- check_output(text)
-
- # test invalid utf-8
- text = (
- b'\xff' # invalid byte
- b'\xc3\xa9' # valid utf-8 character
- b'\xc3\xff' # invalid byte sequence
- b'\xed\xa0\x80' # lone surrogate character (invalid)
- )
- check_output(text)
+ p = subprocess.Popen(
+ (sys.executable, "-c", code, text),
+ stdout=subprocess.PIPE,
+ env=env)
+ stdout, stderr = p.communicate()
+ self.assertEqual(stdout, expected)
+ self.assertEqual(p.returncode, 0)
def test_non_interactive_output_buffering(self):
code = textwrap.dedent("""
class MiscTests(unittest.TestCase):
+ @requires_curses_func('update_lines_cols')
def test_update_lines_cols(self):
curses.update_lines_cols()
lines, cols = curses.LINES, curses.COLS
# tstate->recursion_depth is equal to (recursion_limit - 1)
# and is equal to recursion_limit when _gen_throw() calls
# PyErr_NormalizeException().
- recurse(setrecursionlimit(depth + 2) - depth - 1)
+ recurse(setrecursionlimit(depth + 2) - depth)
finally:
sys.setrecursionlimit(recursionlimit)
print('Done.')
b'while normalizing an exception', err)
self.assertIn(b'Done.', out)
+
+ def test_recursion_in_except_handler(self):
+
+ def set_relative_recursion_limit(n):
+ depth = 1
+ while True:
+ try:
+ sys.setrecursionlimit(depth)
+ except RecursionError:
+ depth += 1
+ else:
+ break
+ sys.setrecursionlimit(depth+n)
+
+ def recurse_in_except():
+ try:
+ 1/0
+ except:
+ recurse_in_except()
+
+ def recurse_after_except():
+ try:
+ 1/0
+ except:
+ pass
+ recurse_after_except()
+
+ def recurse_in_body_and_except():
+ try:
+ recurse_in_body_and_except()
+ except:
+ recurse_in_body_and_except()
+
+ recursionlimit = sys.getrecursionlimit()
+ try:
+ set_relative_recursion_limit(10)
+ for func in (recurse_in_except, recurse_after_except, recurse_in_body_and_except):
+ with self.subTest(func=func):
+ try:
+ func()
+ except RecursionError:
+ pass
+ else:
+ self.fail("Should have raised a RecursionError")
+ finally:
+ sys.setrecursionlimit(recursionlimit)
+
+
@cpython_only
def test_recursion_normalizing_with_no_memory(self):
# Issue #30697. Test that in the abort that occurs when there is no
except MemoryError as e:
tb = e.__traceback__
else:
- self.fail("Should have raises a MemoryError")
+ self.fail("Should have raised a MemoryError")
return traceback.format_tb(tb)
tb1 = raiseMemError()
# check the call
call = middle.value
self.assertEqual(type(call), ast.Call)
- self.assertEqual(call.lineno, 5)
- self.assertEqual(call.end_lineno, 5)
- self.assertEqual(call.col_offset, 27)
- self.assertEqual(call.end_col_offset, 31)
+ self.assertEqual(call.lineno, 4 if use_old_parser() else 5)
+ self.assertEqual(call.end_lineno, 4 if use_old_parser() else 5)
+ self.assertEqual(call.col_offset, 13 if use_old_parser() else 27)
+ self.assertEqual(call.end_col_offset, 17 if use_old_parser() else 31)
# check the second wat
self.assertEqual(type(wat2), ast.Constant)
self.assertEqual(wat2.lineno, 4)
self.next_retr_data = RETR_DATA
self.push('220 welcome')
self.encoding = encoding
+ # We use this as the string IPv4 address to direct the client
+ # to in response to a PASV command, to test security behavior.
+ # https://bugs.python.org/issue43285/.
+ self.fake_pasv_server_ip = '252.253.254.255'
def collect_incoming_data(self, data):
self.in_buffer.append(data)
def cmd_pasv(self, arg):
with socket.create_server((self.socket.getsockname()[0], 0)) as sock:
sock.settimeout(TIMEOUT)
- ip, port = sock.getsockname()[:2]
+ port = sock.getsockname()[1]
+ ip = self.fake_pasv_server_ip
ip = ip.replace('.', ','); p1 = port / 256; p2 = port % 256
self.push('227 entering passive mode (%s,%d,%d)' %(ip, p1, p2))
conn, addr = sock.accept()
# IPv4 is in use, just make sure send_epsv has not been used
self.assertEqual(self.server.handler_instance.last_received_cmd, 'pasv')
+ def test_makepasv_issue43285_security_disabled(self):
+ """Test the opt-in to the old vulnerable behavior."""
+ self.client.trust_server_pasv_ipv4_address = True
+ bad_host, port = self.client.makepasv()
+ self.assertEqual(
+ bad_host, self.server.handler_instance.fake_pasv_server_ip)
+ # Opening and closing a connection keeps the dummy server happy
+ # instead of timing out on accept.
+ socket.create_connection((self.client.sock.getpeername()[0], port),
+ timeout=TIMEOUT).close()
+
+ def test_makepasv_issue43285_security_enabled_default(self):
+ self.assertFalse(self.client.trust_server_pasv_ipv4_address)
+ trusted_host, port = self.client.makepasv()
+ self.assertNotEqual(
+ trusted_host, self.server.handler_instance.fake_pasv_server_ip)
+ # Opening and closing a connection keeps the dummy server happy
+ # instead of timing out on accept.
+ socket.create_connection((trusted_host, port), timeout=TIMEOUT).close()
+
def test_with_statement(self):
self.client.quit()
self.assertEqual(err, b'')
def test_decompress_infile_outfile_error(self):
- rc, out, err = assert_python_ok('-m', 'gzip', '-d', 'thisisatest.out')
- self.assertIn(b"filename doesn't end in .gz:", out)
- self.assertEqual(rc, 0)
- self.assertEqual(err, b'')
+ rc, out, err = assert_python_failure('-m', 'gzip', '-d', 'thisisatest.out')
+ self.assertEqual(b"filename doesn't end in .gz: 'thisisatest.out'", err.strip())
+ self.assertEqual(rc, 1)
+ self.assertEqual(out, b'')
@create_and_remove_directory(TEMPDIR)
def test_compress_stdin_outfile(self):
import warnings
import unittest
+from unittest import mock
TestCase = unittest.TestCase
from test import support
# This test should be removed when CONNECT gets the HTTP/1.1 blessing
self.assertNotIn(b'Host: proxy.com', self.conn.sock.data)
+ def test_tunnel_connect_single_send_connection_setup(self):
+ """Regression test for https://bugs.python.org/issue43332."""
+ with mock.patch.object(self.conn, 'send') as mock_send:
+ self.conn.set_tunnel('destination.com')
+ self.conn.connect()
+ self.conn.request('GET', '/')
+ mock_send.assert_called()
+ # Likely 2, but this test only cares about the first.
+ self.assertGreater(
+ len(mock_send.mock_calls), 1,
+ msg=f'unexpected number of send calls: {mock_send.mock_calls}')
+ proxy_setup_data_sent = mock_send.mock_calls[0][1][0]
+ self.assertIn(b'CONNECT destination.com', proxy_setup_data_sent)
+ self.assertTrue(
+ proxy_setup_data_sent.endswith(b'\r\n\r\n'),
+ msg=f'unexpected proxy data sent {proxy_setup_data_sent!r}')
+
def test_connect_put_request(self):
self.conn.set_tunnel('destination.com')
self.conn.request('PUT', '/', '')
import tempfile
import textwrap
import contextlib
+import unittest
@contextlib.contextmanager
return test.support.FS_NONASCII or \
self.skip("File system does not support non-ascii.")
+ def skip(self, reason):
+ raise unittest.SkipTest(reason)
+
def DALS(str):
"Dedent and left-strip"
--- /dev/null
+import os
+import sys
+import threading
+import traceback
+
+
+NLOOPS = 50
+NTHREADS = 30
+
+
+def t1():
+ try:
+ from concurrent.futures import ThreadPoolExecutor
+ except Exception:
+ traceback.print_exc()
+ os._exit(1)
+
+def t2():
+ try:
+ from concurrent.futures.thread import ThreadPoolExecutor
+ except Exception:
+ traceback.print_exc()
+ os._exit(1)
+
+def main():
+ for j in range(NLOOPS):
+ threads = []
+ for i in range(NTHREADS):
+ threads.append(threading.Thread(target=t2 if i % 2 else t1))
+ for thread in threads:
+ thread.start()
+ for thread in threads:
+ thread.join()
+ sys.modules.pop('concurrent.futures', None)
+ sys.modules.pop('concurrent.futures.thread', None)
+
+if __name__ == "__main__":
+ main()
--- /dev/null
+import multiprocessing
+import os
+import threading
+import traceback
+
+
+def t():
+ try:
+ with multiprocessing.Pool(1):
+ pass
+ except Exception:
+ traceback.print_exc()
+ os._exit(1)
+
+
+def main():
+ threads = []
+ for i in range(20):
+ threads.append(threading.Thread(target=t))
+ for thread in threads:
+ thread.start()
+ for thread in threads:
+ thread.join()
+
+
+if __name__ == "__main__":
+ main()
from unittest import mock
from test.support import (
verbose, run_unittest, TESTFN, reap_threads,
- forget, unlink, rmtree, start_threads)
+ forget, unlink, rmtree, start_threads, script_helper)
def task(N, done, done_tasks, errors):
try:
__import__(TESTFN)
del sys.modules[TESTFN]
+ def test_concurrent_futures_circular_import(self):
+ # Regression test for bpo-43515
+ fn = os.path.join(os.path.dirname(__file__),
+ 'partial', 'cfimport.py')
+ script_helper.assert_python_ok(fn)
+
+ def test_multiprocessing_pool_circular_import(self):
+ # Regression test for bpo-41567
+ fn = os.path.join(os.path.dirname(__file__),
+ 'partial', 'pool_in_threads.py')
+ script_helper.assert_python_ok(fn)
+
@reap_threads
def test_main():
with self.assertRaises(AttributeError):
del t._CHUNK_SIZE
+ def test_internal_buffer_size(self):
+ # bpo-43260: TextIOWrapper's internal buffer should not store
+ # data larger than chunk size.
+ chunk_size = 8192 # default chunk size, updated later
+
+ class MockIO(self.MockRawIO):
+ def write(self, data):
+ if len(data) > chunk_size:
+ raise RuntimeError
+ return super().write(data)
+
+ buf = MockIO()
+ t = self.TextIOWrapper(buf, encoding="ascii")
+ chunk_size = t._CHUNK_SIZE
+ t.write("abc")
+ t.write("def")
+ # The default chunk size is 8192 bytes, so t doesn't write data to buf.
+ self.assertEqual([], buf._write_stack)
+
+ with self.assertRaises(RuntimeError):
+ t.write("x"*(chunk_size+1))
+
+ self.assertEqual([b"abcdef"], buf._write_stack)
+ t.write("ghi")
+ t.write("x"*chunk_size)
+ self.assertEqual([b"abcdef", b"ghi", b"x"*chunk_size], buf._write_stack)
+
class PyTextIOWrapperTest(TextIOWrapperTest):
io = pyio
loc = locale.getlocale(locale.LC_CTYPE)
if verbose:
print('testing with %a' % (loc,), end=' ', flush=True)
- locale.setlocale(locale.LC_CTYPE, loc)
+ try:
+ locale.setlocale(locale.LC_CTYPE, loc)
+ except locale.Error as exc:
+ # bpo-37945: setlocale(LC_CTYPE) fails with getlocale(LC_CTYPE)
+ # and the tr_TR locale on Windows. getlocale() builds a locale
+ # which is not recognized by setlocale().
+ self.skipTest(f"setlocale(LC_CTYPE, {loc!r}) failed: {exc!r}")
self.assertEqual(loc, locale.getlocale(locale.LC_CTYPE))
def test_invalid_locale_format_in_localetuple(self):
import unittest
+from test.support import use_old_parser
GLOBAL_VAR = None
with self.assertRaisesRegex(SyntaxError, msg):
exec(f"lambda: {code}", {}) # Function scope
+ @unittest.skipIf(use_old_parser(), "Old parser does not support walruses in set comprehensions")
def test_named_expression_invalid_rebinding_set_comprehension_iteration_variable(self):
cases = [
("Local reuse", 'i', "{i := 0 for i in range(5)}"),
with self.assertRaisesRegex(SyntaxError, msg):
exec(f"lambda: {code}", {}) # Function scope
+ @unittest.skipIf(use_old_parser(), "Old parser does not support walruses in set comprehensions")
def test_named_expression_invalid_set_comprehension_iterable_expression(self):
cases = [
("Top level", "{i for i in (i := range(5))}"),
("topic?key=def", "Pydoc: KEYWORD def"),
("topic?key=STRINGS", "Pydoc: TOPIC STRINGS"),
("foobar", "Pydoc: Error - foobar"),
- ("getfile?key=foobar", "Pydoc: Error - getfile?key=foobar"),
]
with self.restrict_walk_packages():
for url, title in requests:
self.call_url_handler(url, title)
- path = string.__file__
- title = "Pydoc: getfile " + path
- url = "getfile?key=" + path
- self.call_url_handler(url, title)
-
class TestHelper(unittest.TestCase):
def test_keywords(self):
from test.support import TESTFN, FakePath
TESTFN2 = TESTFN + "2"
+TESTFN_SRC = TESTFN + "_SRC"
+TESTFN_DST = TESTFN + "_DST"
MACOS = sys.platform.startswith("darwin")
AIX = sys.platform[:3] == 'aix'
try:
os.rmdir(dst_dir)
+ @unittest.skipUnless(hasattr(os, 'geteuid') and os.geteuid() == 0
+ and hasattr(os, 'lchflags')
+ and hasattr(stat, 'SF_IMMUTABLE')
+ and hasattr(stat, 'UF_OPAQUE'),
+ 'root privileges required')
+ def test_move_dir_permission_denied(self):
+ # bpo-42782: shutil.move should not create destination directories
+ # if the source directory cannot be removed.
+ try:
+ os.mkdir(TESTFN_SRC)
+ os.lchflags(TESTFN_SRC, stat.SF_IMMUTABLE)
+
+ # Testing on an empty immutable directory
+ # TESTFN_DST should not exist if shutil.move failed
+ self.assertRaises(PermissionError, shutil.move, TESTFN_SRC, TESTFN_DST)
+ self.assertFalse(TESTFN_DST in os.listdir())
+
+ # Create a file and keep the directory immutable
+ os.lchflags(TESTFN_SRC, stat.UF_OPAQUE)
+ os_helper.create_empty_file(os.path.join(TESTFN_SRC, 'child'))
+ os.lchflags(TESTFN_SRC, stat.SF_IMMUTABLE)
+
+ # Testing on a non-empty immutable directory
+ # TESTFN_DST should not exist if shutil.move failed
+ self.assertRaises(PermissionError, shutil.move, TESTFN_SRC, TESTFN_DST)
+ self.assertFalse(TESTFN_DST in os.listdir())
+ finally:
+ if os.path.exists(TESTFN_SRC):
+ os.lchflags(TESTFN_SRC, stat.UF_OPAQUE)
+ os_helper.rmtree(TESTFN_SRC)
+ if os.path.exists(TESTFN_DST):
+ os.lchflags(TESTFN_DST, stat.UF_OPAQUE)
+ os_helper.rmtree(TESTFN_DST)
+
+
class TestCopyFile(unittest.TestCase):
class Faux(object):
import statistics
import subprocess
import sys
+import threading
import time
import unittest
from test import support
# Python handler
self.assertEqual(len(sigs), N, "Some signals were lost")
+ @unittest.skipUnless(hasattr(signal, "SIGUSR1"),
+ "test needs SIGUSR1")
+ def test_stress_modifying_handlers(self):
+ # bpo-43406: race condition between trip_signal() and signal.signal
+ signum = signal.SIGUSR1
+ num_sent_signals = 0
+ num_received_signals = 0
+ do_stop = False
+
+ def custom_handler(signum, frame):
+ nonlocal num_received_signals
+ num_received_signals += 1
+
+ def set_interrupts():
+ nonlocal num_sent_signals
+ while not do_stop:
+ signal.raise_signal(signum)
+ num_sent_signals += 1
+
+ def cycle_handlers():
+ while num_sent_signals < 100:
+ for i in range(20000):
+ # Cycle between a Python-defined and a non-Python handler
+ for handler in [custom_handler, signal.SIG_IGN]:
+ signal.signal(signum, handler)
+
+ old_handler = signal.signal(signum, custom_handler)
+ self.addCleanup(signal.signal, signum, old_handler)
+
+ t = threading.Thread(target=set_interrupts)
+ try:
+ ignored = False
+ with support.catch_unraisable_exception() as cm:
+ t.start()
+ cycle_handlers()
+ do_stop = True
+ t.join()
+
+ if cm.unraisable is not None:
+ # An unraisable exception may be printed out when
+ # a signal is ignored due to the aforementioned
+ # race condition, check it.
+ self.assertIsInstance(cm.unraisable.exc_value, OSError)
+ self.assertIn(
+ f"Signal {signum} ignored due to race condition",
+ str(cm.unraisable.exc_value))
+ ignored = True
+
+ # bpo-43406: Even if it is unlikely, it's technically possible that
+ # all signals were ignored because of race conditions.
+ if not ignored:
+ # Sanity check that some signals were received, but not all
+ self.assertGreater(num_received_signals, 0)
+ self.assertLess(num_received_signals, num_sent_signals)
+ finally:
+ do_stop = True
+ t.join()
+
+
class RaiseSignalTest(unittest.TestCase):
def test_sigint(self):
except ResponseException as e:
self.smtp_state = self.COMMAND
self.push('%s %s' % (e.smtp_code, e.smtp_error))
- return
+ return
super().found_terminator()
self._authenticated(self._auth_login_user, password == sim_auth[1])
del self._auth_login_user
+ def _auth_buggy(self, arg=None):
+ # This AUTH mechanism will 'trap' the client in a never-ending
+ # series of 334 replies carrying the base64-encoded 'BuGgYbUgGy'.
+ self.push('334 QnVHZ1liVWdHeQ==')
+
def _auth_cram_md5(self, arg=None):
if arg is None:
self.push('334 {}'.format(sim_cram_md5_challenge))
self.assertEqual(resp, (235, b'Authentication Succeeded'))
smtp.close()
+ def testAUTH_LOGIN_initial_response_ok(self):
+ self.serv.add_feature("AUTH LOGIN")
+ with smtplib.SMTP(HOST, self.port, local_hostname='localhost',
+ timeout=support.LOOPBACK_TIMEOUT) as smtp:
+ smtp.user, smtp.password = sim_auth
+ smtp.ehlo("test_auth_login")
+ resp = smtp.auth("LOGIN", smtp.auth_login, initial_response_ok=True)
+ self.assertEqual(resp, (235, b'Authentication Succeeded'))
+
+ def testAUTH_LOGIN_initial_response_notok(self):
+ self.serv.add_feature("AUTH LOGIN")
+ with smtplib.SMTP(HOST, self.port, local_hostname='localhost',
+ timeout=support.LOOPBACK_TIMEOUT) as smtp:
+ smtp.user, smtp.password = sim_auth
+ smtp.ehlo("test_auth_login")
+ resp = smtp.auth("LOGIN", smtp.auth_login, initial_response_ok=False)
+ self.assertEqual(resp, (235, b'Authentication Succeeded'))
+
+ def testAUTH_BUGGY(self):
+ self.serv.add_feature("AUTH BUGGY")
+
+ def auth_buggy(challenge=None):
+ self.assertEqual(b"BuGgYbUgGy", challenge)
+ return "\0"
+
+ smtp = smtplib.SMTP(
+ HOST, self.port, local_hostname='localhost',
+ timeout=support.LOOPBACK_TIMEOUT
+ )
+ try:
+ smtp.user, smtp.password = sim_auth
+ smtp.ehlo("test_auth_buggy")
+ expect = r"^Server AUTH mechanism infinite loop.*"
+ with self.assertRaisesRegex(smtplib.SMTPException, expect) as cm:
+ smtp.auth("BUGGY", auth_buggy, initial_response_ok=False)
+ finally:
+ smtp.close()
+
@hashlib_helper.requires_hashdigest('md5')
def testAUTH_CRAM_MD5(self):
self.serv.add_feature("AUTH CRAM-MD5")
t.join()
s.server_close()
+ def test_close_immediately(self):
+ class MyServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
+ pass
+
+ server = MyServer((HOST, 0), lambda: None)
+ server.server_close()
+
def test_tcpserver_bind_leak(self):
# Issue #22435: the server socket wouldn't be closed if bind()/listen()
# failed.
self.assertEqual(server.shutdown_called, 1)
server.server_close()
+ def test_threads_reaped(self):
+ """
+ In #37193, users reported a memory leak
+ due to the saving of every request thread. Ensure that
+ not all threads are kept forever.
+ """
+ class MyServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
+ pass
+
+ server = MyServer((HOST, 0), socketserver.StreamRequestHandler)
+ for n in range(10):
+ with socket.create_connection(server.server_address):
+ server.handle_request()
+ self.assertLess(len(server._threads), 10)
+ server.server_close()
+
if __name__ == "__main__":
unittest.main()
OP_CIPHER_SERVER_PREFERENCE = getattr(ssl, "OP_CIPHER_SERVER_PREFERENCE", 0)
OP_ENABLE_MIDDLEBOX_COMPAT = getattr(ssl, "OP_ENABLE_MIDDLEBOX_COMPAT", 0)
+# Ubuntu has patched OpenSSL and changed behavior of security level 2
+# see https://bugs.python.org/issue41561#msg389003
+def is_ubuntu():
+ try:
+ # Assume that any reference to "ubuntu" implies an Ubuntu-like distro.
+ # The workaround is not required for 18.04, but doesn't hurt either.
+ with open("/etc/os-release", encoding="utf-8") as f:
+ return "ubuntu" in f.read()
+ except FileNotFoundError:
+ return False
+
+if is_ubuntu():
+ def seclevel_workaround(*ctxs):
+ """Lower the security level to '1' and allow all ciphers for TLS 1.0/1.1."""
+ for ctx in ctxs:
+ if ctx.minimum_version <= ssl.TLSVersion.TLSv1_1:
+ ctx.set_ciphers("@SECLEVEL=1:ALL")
+else:
+ def seclevel_workaround(*ctxs):
+ pass
+
def has_tls_protocol(protocol):
"""Check if a TLS protocol is available and enabled
rc = s.connect_ex((REMOTE_HOST, 443))
if rc == 0:
self.skipTest("REMOTE_HOST responded too quickly")
+ elif rc == errno.ENETUNREACH:
+ self.skipTest("Network unreachable.")
self.assertIn(rc, (errno.EAGAIN, errno.EWOULDBLOCK))
@unittest.skipUnless(socket_helper.IPV6_ENABLED, 'Needs IPv6')
if client_context.protocol == ssl.PROTOCOL_TLS:
client_context.set_ciphers("ALL")
+ seclevel_workaround(server_context, client_context)
+
for ctx in (client_context, server_context):
ctx.verify_mode = certsreqs
ctx.load_cert_chain(SIGNED_CERTFILE)
with self.subTest(protocol=ssl._PROTOCOL_NAMES[protocol]):
context = ssl.SSLContext(protocol)
context.load_cert_chain(CERTFILE)
+ seclevel_workaround(context)
server_params_test(context, context,
chatty=True, connectionchatty=True)
client_context.maximum_version = ssl.TLSVersion.TLSv1_2
server_context.minimum_version = ssl.TLSVersion.TLSv1
server_context.maximum_version = ssl.TLSVersion.TLSv1_1
+ seclevel_workaround(client_context, server_context)
with ThreadedEchoServer(context=server_context) as server:
with client_context.wrap_socket(socket.socket(),
server_context.minimum_version = ssl.TLSVersion.TLSv1_2
client_context.maximum_version = ssl.TLSVersion.TLSv1
client_context.minimum_version = ssl.TLSVersion.TLSv1
+ seclevel_workaround(client_context, server_context)
+
with ThreadedEchoServer(context=server_context) as server:
with client_context.wrap_socket(socket.socket(),
server_hostname=hostname) as s:
server_context.minimum_version = ssl.TLSVersion.SSLv3
client_context.minimum_version = ssl.TLSVersion.SSLv3
client_context.maximum_version = ssl.TLSVersion.SSLv3
+ seclevel_workaround(client_context, server_context)
+
with ThreadedEchoServer(context=server_context) as server:
with client_context.wrap_socket(socket.socket(),
server_hostname=hostname) as s:
msg
)
+ def test_msg_callback_deadlock_bpo43577(self):
+ client_context, server_context, hostname = testing_context()
+ server_context2 = testing_context()[1]
+
+ def msg_cb(conn, direction, version, content_type, msg_type, data):
+ pass
+
+ def sni_cb(sock, servername, ctx):
+ sock.context = server_context2
+
+ server_context._msg_callback = msg_cb
+ server_context.sni_callback = sni_cb
+
+ server = ThreadedEchoServer(context=server_context, chatty=False)
+ with server:
+ with client_context.wrap_socket(socket.socket(),
+ server_hostname=hostname) as s:
+ s.connect((HOST, server.port))
+ with client_context.wrap_socket(socket.socket(),
+ server_hostname=hostname) as s:
+ s.connect((HOST, server.port))
+
def test_main(verbose=False):
if support.verbose:
"""
self._check_error(code, "invalid syntax")
+ def test_invalid_line_continuation_error_position(self):
+ self._check_error(r"a = 3 \ 4",
+ "unexpected character after line continuation character",
+ lineno=1, offset=9)
+
def test_invalid_line_continuation_left_recursive(self):
# Check bpo-42218: SyntaxErrors following left-recursive rules
# (t_primary_raw in this case) need to be tested explicitly
def f():
f()
try:
- for depth in (10, 25, 50, 75, 100, 250, 1000):
+ for depth in (50, 75, 100, 250, 1000):
try:
sys.setrecursionlimit(depth)
except RecursionError:
# Issue #5392: test stack overflow after hitting recursion
# limit twice
- self.assertRaises(RecursionError, f)
- self.assertRaises(RecursionError, f)
+ with self.assertRaises(RecursionError):
+ f()
+ with self.assertRaises(RecursionError):
+ f()
finally:
sys.setrecursionlimit(oldlimit)
@test.support.cpython_only
def test_setrecursionlimit_recursion_depth(self):
# Issue #25274: Setting a low recursion limit must be blocked if the
- # current recursion depth is already higher than the "lower-water
- # mark". Otherwise, it may not be possible anymore to
- # reset the overflowed flag to 0.
+ # current recursion depth is already higher than the new limit.
from _testinternalcapi import get_recursion_depth
sys.setrecursionlimit(1000)
for limit in (10, 25, 50, 75, 100, 150, 200):
- # formula extracted from _Py_RecursionLimitLowerWaterMark()
- if limit > 200:
- depth = limit - 50
- else:
- depth = limit * 3 // 4
- set_recursion_limit_at_depth(depth, limit)
+ set_recursion_limit_at_depth(limit, limit)
finally:
sys.setrecursionlimit(oldlimit)
- # The error message is specific to CPython
- @test.support.cpython_only
- def test_recursionlimit_fatalerror(self):
- # A fatal error occurs if a second recursion limit is hit when recovering
- # from a first one.
- code = textwrap.dedent("""
- import sys
-
- def f():
- try:
- f()
- except RecursionError:
- f()
-
- sys.setrecursionlimit(%d)
- f()""")
- with test.support.SuppressCrashReport():
- for i in (50, 1000):
- sub = subprocess.Popen([sys.executable, '-c', code % i],
- stderr=subprocess.PIPE)
- err = sub.communicate()[1]
- self.assertTrue(sub.returncode, sub.returncode)
- self.assertIn(
- b"Fatal Python error: _Py_CheckRecursiveCall: "
- b"Cannot recover from stack overflow",
- err)
-
def test_getwindowsversion(self):
# Raise SkipTest if sys doesn't have getwindowsversion attribute
test.support.get_attribute(sys, "getwindowsversion")
self.assertIsNone(cur.firstiter)
self.assertIsNone(cur.finalizer)
+ def test_changing_sys_stderr_and_removing_reference(self):
+ # If the default displayhook doesn't take a strong reference
+ # to sys.stderr the following code can crash. See bpo-43660
+ # for more details.
+ code = textwrap.dedent('''
+ import sys
+ class MyStderr:
+ def write(self, s):
+ sys.stderr = None
+ sys.stderr = MyStderr()
+ 1/0
+ ''')
+ rc, out, err = assert_python_failure('-c', code)
+ self.assertEqual(out, b"")
+ self.assertEqual(err, b"")
if __name__ == "__main__":
unittest.main()
data = [int(x, 16) for x in data.split(" ")]
return "".join([chr(x) for x in data])
+ @requires_resource('network')
def test_normalization(self):
TESTDATAFILE = "NormalizationTest.txt"
TESTDATAURL = f"http://www.pythontest.net/unicode/{unicodedata.unidata_version}/{TESTDATAFILE}"
# Tests for extended unpacking, starred expressions.
+from test.support import use_old_parser
+
doctests = """
Unpack tuple
...
SyntaxError: can't use starred expression here
+Some size constraints (all fail.)
+
+ >>> s = ", ".join("a%d" % i for i in range(1<<8)) + ", *rest = range(1<<8 + 1)"
+ >>> compile(s, 'test', 'exec') # doctest:+ELLIPSIS
+ Traceback (most recent call last):
+ ...
+ SyntaxError: too many expressions in star-unpacking assignment
+
+ >>> s = ", ".join("a%d" % i for i in range(1<<8 + 1)) + ", *rest = range(1<<8 + 2)"
+ >>> compile(s, 'test', 'exec') # doctest:+ELLIPSIS
+ Traceback (most recent call last):
+ ...
+ SyntaxError: too many expressions in star-unpacking assignment
+
+(there is an additional limit, on the number of expressions after the
+'*rest', but it's 1<<24 and testing it takes too much memory.)
+
+"""
+
+new_parser_doctests = """\
>>> (*x),y = 1, 2 # doctest:+ELLIPSIS
Traceback (most recent call last):
...
Traceback (most recent call last):
...
SyntaxError: can't use starred expression here
-
-Some size constraints (all fail.)
-
- >>> s = ", ".join("a%d" % i for i in range(1<<8)) + ", *rest = range(1<<8 + 1)"
- >>> compile(s, 'test', 'exec') # doctest:+ELLIPSIS
- Traceback (most recent call last):
- ...
- SyntaxError: too many expressions in star-unpacking assignment
-
- >>> s = ", ".join("a%d" % i for i in range(1<<8 + 1)) + ", *rest = range(1<<8 + 2)"
- >>> compile(s, 'test', 'exec') # doctest:+ELLIPSIS
- Traceback (most recent call last):
- ...
- SyntaxError: too many expressions in star-unpacking assignment
-
-(there is an additional limit, on the number of expressions after the
-'*rest', but it's 1<<24 and testing it takes too much memory.)
-
"""
-__test__ = {'doctests' : doctests}
+if use_old_parser():
+ __test__ = {'doctests' : doctests}
+else:
+ __test__ = {'doctests' : doctests + new_parser_doctests}
def test_main(verbose=False):
from test import support
self.check_ast_roundtrip("1e1000j")
self.check_ast_roundtrip("-1e1000j")
+ def test_nan(self):
+ self.assertASTEqual(
+ ast.parse(ast.unparse(ast.Constant(value=float('nan')))),
+ ast.parse('1e1000 - 1e1000')
+ )
+
def test_min_int(self):
self.check_ast_roundtrip(str(-(2 ** 31)))
self.check_ast_roundtrip(str(-(2 ** 63)))
def test_set_literal(self):
self.check_ast_roundtrip("{'a', 'b', 'c'}")
+ def test_empty_set(self):
+ self.assertASTEqual(
+ ast.parse(ast.unparse(ast.Set(elts=[]))),
+ ast.parse('{*()}')
+ )
+
def test_set_comprehension(self):
self.check_ast_roundtrip("{x for x in range(5)}")
def test_invalid_fstring_backslash(self):
self.check_invalid(ast.FormattedValue(value=ast.Constant(value="\\\\")))
- def test_invalid_set(self):
- self.check_invalid(ast.Set(elts=[]))
-
def test_invalid_yield_from(self):
self.check_invalid(ast.YieldFrom(value=None))
elem.extend([e])
self.serialize_check(elem, '<body><tag /><tag2 /></body>')
elem.remove(e)
+ elem.extend(iter([e]))
+ self.serialize_check(elem, '<body><tag /><tag2 /></body>')
+ elem.remove(e)
element = ET.Element("tag", key="value")
self.serialize_check(element, '<tag key="value" />') # 1
#on AF_INET only.
URL = "http://%s:%d"%(ADDR, PORT)
serv.server_activate()
- paths = ["/foo", "/foo/bar"]
+ paths = [
+ "/foo", "/foo/bar",
+ "/foo?k=v", "/foo#frag", "/foo?k=v#frag",
+ "", "/", "/RPC2", "?k=v", "#frag",
+ ]
for path in paths:
d = serv.add_dispatcher(path, xmlrpc.server.SimpleXMLRPCDispatcher())
d.register_introspection_functions()
d.register_multicall_functions()
+ d.register_function(lambda p=path: p, 'test')
serv.get_dispatcher(paths[0]).register_function(pow)
serv.get_dispatcher(paths[1]).register_function(lambda x,y: x+y, 'add')
serv.add_dispatcher("/is/broken", BrokenDispatcher())
p = xmlrpclib.ServerProxy(URL+"/is/broken")
self.assertRaises(xmlrpclib.Fault, p.add, 6, 8)
+ def test_invalid_path(self):
+ p = xmlrpclib.ServerProxy(URL+"/invalid")
+ self.assertRaises(xmlrpclib.Fault, p.add, 6, 8)
+
+ def test_path_query_fragment(self):
+ p = xmlrpclib.ServerProxy(URL+"/foo?k=v#frag")
+ self.assertEqual(p.test(), "/foo?k=v#frag")
+
+ def test_path_fragment(self):
+ p = xmlrpclib.ServerProxy(URL+"/foo#frag")
+ self.assertEqual(p.test(), "/foo#frag")
+
+ def test_path_query(self):
+ p = xmlrpclib.ServerProxy(URL+"/foo?k=v")
+ self.assertEqual(p.test(), "/foo?k=v")
+
+ def test_empty_path(self):
+ p = xmlrpclib.ServerProxy(URL)
+ self.assertEqual(p.test(), "/RPC2")
+
+ def test_root_path(self):
+ p = xmlrpclib.ServerProxy(URL + "/")
+ self.assertEqual(p.test(), "/")
+
+ def test_empty_path_query(self):
+ p = xmlrpclib.ServerProxy(URL + "?k=v")
+ self.assertEqual(p.test(), "?k=v")
+
+ def test_empty_path_fragment(self):
+ p = xmlrpclib.ServerProxy(URL + "#frag")
+ self.assertEqual(p.test(), "#frag")
+
+
#A test case that verifies that a server using the HTTP/1.1 keep-alive mechanism
#does indeed serve subsequent requests on the same connection
class BaseKeepaliveServerTestCase(BaseServerTestCase):
"""
for element in elements:
self._assert_is_element(element)
- self._children.extend(elements)
+ self._children.append(element)
def insert(self, index, subelement):
"""Insert *subelement* at position *index*."""
# establish a "logical" server connection
# get the url
- p = urllib.parse.urlparse(uri)
+ p = urllib.parse.urlsplit(uri)
if p.scheme not in ("http", "https"):
raise OSError("unsupported XML-RPC protocol")
self.__host = p.netloc
- self.__handler = p.path or "/RPC2"
+ self.__handler = urllib.parse.urlunsplit(["", "", *p[2:]])
+ if not self.__handler:
+ self.__handler = "/RPC2"
if transport is None:
if p.scheme == "https":
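The hunk above switches from ``urlparse`` to ``urlsplit`` and rebuilds the handler from everything after the netloc, so query and fragment survive. A minimal sketch of the same round trip, using a hypothetical ``handler_for`` helper that mirrors the ServerProxy logic:

```python
from urllib import parse

def handler_for(uri):
    # Keep path, query, and fragment; fall back to /RPC2 for an empty
    # path, as the ServerProxy constructor does.
    p = parse.urlsplit(uri)
    return parse.urlunsplit(["", "", *p[2:]]) or "/RPC2"

print(handler_for("http://host:8000/foo?k=v#frag"))  # /foo?k=v#frag
print(handler_for("http://host:8000"))               # /RPC2
```

This matches the new test cases above (``test_path_query_fragment``, ``test_empty_path``, ``test_root_path``).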
result.extend([
dict(
- name="OpenSSL 1.1.1i",
- url="https://www.openssl.org/source/openssl-1.1.1i.tar.gz",
- checksum='08987c3cf125202e2b0840035efb392c',
+ name="OpenSSL 1.1.1k",
+ url="https://www.openssl.org/source/openssl-1.1.1k.tar.gz",
+ checksum='c4e7d95f782b08116afa27b30393dd27',
buildrecipe=build_universal_openssl,
configure=None,
install=None,
test/test_importlib/namespace_pkgs/project3 \
test/test_importlib/namespace_pkgs/project3/parent \
test/test_importlib/namespace_pkgs/project3/parent/child \
+ test/test_importlib/partial \
test/test_importlib/source \
test/test_importlib/zipdata01 \
test/test_importlib/zipdata02 \
Davin Potts
Guillaume Pratte
Florian Preinstorfer
+Alex Prengère
Amrit Prem
Paul Prescod
Donovan Preston
Python News
+++++++++++
+What's New in Python 3.9.3 final?
+=================================
+
+*Release date: 2021-04-02*
+
+Security
+--------
+
+- bpo-42988: CVE-2021-3426: Remove the ``getfile`` feature of the
+ :mod:`pydoc` module which could be abused to read arbitrary files on the
+ disk (directory traversal vulnerability). Moreover, even source code of
+ Python modules can contain sensitive data like passwords. Vulnerability
+ reported by David Schwörer.
+
+- bpo-43285: :mod:`ftplib` no longer trusts the IP address value returned
+ from the server in response to the PASV command by default. This prevents
+ a malicious FTP server from using the response to probe IPv4 address and
+ port combinations on the client network.
+
+ Code that requires the former vulnerable behavior may set a
+ ``trust_server_pasv_ipv4_address`` attribute on their :class:`ftplib.FTP`
+ instances to ``True`` to re-enable it.
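A minimal sketch of the opt-in described above; the attribute defaults to ``False`` on every ``ftplib.FTP`` instance (no connection is made here, we only inspect the attribute):

```python
import ftplib

ftp = ftplib.FTP()  # not connected; just inspecting the new attribute
# Secure default: the IPv4 address in the PASV reply is ignored in favor
# of the address the control connection already uses.
assert ftp.trust_server_pasv_ipv4_address is False

# Explicit opt-in to the old behavior, only for fully trusted servers:
ftp.trust_server_pasv_ipv4_address = True
```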
+
+- bpo-43439: Add audit hooks for :func:`gc.get_objects`,
+ :func:`gc.get_referrers` and :func:`gc.get_referents`. Patch by Pablo
+ Galindo.
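The three audit events named above can be observed with ``sys.addaudithook``; a small sketch (note that audit hooks cannot be removed once added):

```python
import gc
import sys

seen = []

def hook(event, args):
    # Record only the gc audit events added by bpo-43439.
    if event.startswith("gc."):
        seen.append(event)

sys.addaudithook(hook)
gc.get_objects()
gc.get_referrers(seen)
gc.get_referents(seen)
print(seen)  # ['gc.get_objects', 'gc.get_referrers', 'gc.get_referents']
```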
+
+Core and Builtins
+-----------------
+
+- bpo-43660: Fix crash that happens when replacing ``sys.stderr`` with a
+ callable that can remove the object while an exception is being printed.
+ Patch by Pablo Galindo.
+
+- bpo-43555: Report the column offset for :exc:`SyntaxError` for invalid
+ line continuation characters. Patch by Pablo Galindo.
+
+- bpo-43517: Fix misdetection of circular imports when using ``from pkg.mod
+ import attr``, which caused false positives in non-trivial multi-threaded
+ code.
+
+- bpo-35883: Python no longer fails at startup with a fatal error if a
+ command line argument contains an invalid Unicode character. The
+ :c:func:`Py_DecodeLocale` function now escapes byte sequences which would
+ be decoded as Unicode characters outside the [U+0000; U+10ffff] range.
+
+- bpo-43406: Fix a possible race condition where ``PyErr_CheckSignals``
+ tries to execute a non-Python signal handler.
+
+- bpo-42500: Improve handling of exceptions near the recursion limit.
+  Converts a number of fatal errors into RecursionErrors.
+
+Library
+-------
+
+- bpo-43433: :class:`xmlrpc.client.ServerProxy` no longer ignores query and
+ fragment in the URL of the server.
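The fix rebuilds the handler from the path, query, and fragment instead of the path alone; a quick sketch of that computation (the URL here is a made-up example):

```python
from urllib.parse import urlsplit, urlunsplit

p = urlsplit("http://example.com/RPC2?token=abc#frag")
# Keep path + query + fragment, falling back to /RPC2 for an empty path.
handler = urlunsplit(["", "", *p[2:]]) or "/RPC2"
print(handler)  # /RPC2?token=abc#frag
```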
+
+- bpo-35930: Raising an exception raised in a "future" instance will no
+  longer create reference cycles.
+
+- bpo-43577: Fix deadlock when using :class:`ssl.SSLContext` debug callback
+ with :meth:`ssl.SSLContext.sni_callback`.
+
+- bpo-43521: ``ast.unparse`` can now render NaNs and empty sets.
+
+- bpo-43423: :func:`subprocess.communicate` no longer raises an IndexError
+ when there is an empty stdout or stderr IO buffer during a timeout on
+ Windows.
+
+- bpo-27820: Fixed a long-standing bug in smtplib.SMTP where doing AUTH LOGIN
+  with initial_response_ok=False would fail.
+
+ The cause is that SMTP.auth_login *always* returns a password if provided
+ with a challenge string, which does not comply with the standard for AUTH
+ LOGIN.
+
+ Also fixes a bug with the test for smtpd.
+
+- bpo-43332: Improves the networking efficiency of :mod:`http.client` when
+ using a proxy via :meth:`~HTTPConnection.set_tunnel`. Fewer small send
+ calls are made during connection setup.
+
+- bpo-43399: Fix ``ElementTree.extend`` not working on iterators when using
+  the Python implementation.
+
+- bpo-43316: The ``python -m gzip`` command line application now properly
+ fails when detecting an unsupported extension. It exits with a non-zero
+ exit code and prints an error message to stderr.
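The new failure mode can be checked from a script; this sketch runs the CLI in a subprocess against a temporary file whose name lacks a ``.gz`` suffix:

```python
import os
import subprocess
import sys
import tempfile

# A file that does not end in .gz: decompression must now fail cleanly.
with tempfile.NamedTemporaryFile(suffix=".txt", delete=False) as f:
    f.write(b"not gzip data")

proc = subprocess.run(
    [sys.executable, "-m", "gzip", "-d", f.name],
    capture_output=True,
    text=True,
)
os.unlink(f.name)

# Non-zero exit code, with an error message sent to stderr.
print(proc.returncode)
```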
+
+- bpo-43260: Fix :class:`io.TextIOWrapper` being unable to flush its internal
+  buffer after very large text is written.
+
+- bpo-42782: Fail fast in :func:`shutil.move()` to avoid creating
+ destination directories on failure.
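The fail-fast invariant can be sketched as: a failed move must not leave newly created destination directories behind (the paths here are temporary placeholders, and this only illustrates the invariant rather than isolating the exact internal change):

```python
import os
import shutil
import tempfile

dst_root = tempfile.mkdtemp()
dst = os.path.join(dst_root, "newdir", "target")

try:
    shutil.move("no-such-source-path", dst)
except FileNotFoundError:
    pass

# The intermediate destination directory was never created.
print(os.path.exists(os.path.join(dst_root, "newdir")))
```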
+
+- bpo-37193: Fixed memory leak in ``socketserver.ThreadingMixIn`` introduced
+ in Python 3.7.
+
+Documentation
+-------------
+
+- bpo-43199: Answer "Why is there no goto?" in the Design and History FAQ.
+
+- bpo-43407: Clarified that a result from :func:`time.monotonic`,
+ :func:`time.perf_counter`, :func:`time.process_time`, or
+ :func:`time.thread_time` can be compared with the result from any
+ following call to the same function - not just the next immediate call.
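A small sketch of the clarified guarantee: any two results from the same clock compare meaningfully, not only results from adjacent calls:

```python
import time

t0 = time.monotonic()
time.sleep(0.01)
t1 = time.monotonic()
time.sleep(0.01)
t2 = time.monotonic()

# t2 can be compared directly with t0, skipping t1 entirely.
assert t2 >= t1 >= t0
print(t2 - t0 >= t1 - t0 >= 0.0)
```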
+
+- bpo-27646: Clarify that 'yield from <expr>' works with any iterable, not
+ just iterators.
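A short illustration that ``yield from`` accepts any iterable, not only iterators:

```python
def merged():
    yield from [1, 2]                 # list: iterable, not an iterator
    yield from (x for x in (3, 4))    # generator: an iterator
    yield from "ab"                   # string: iterable of characters

print(list(merged()))  # [1, 2, 3, 4, 'a', 'b']
```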
+
+- bpo-36346: Update some deprecated Unicode APIs which are documented as
+  "will be removed in 4.0" to "3.12". See :pep:`623` for details.
+
+Tests
+-----
+
+- bpo-37945: Fix test_getsetlocale_issue1813() of test_locale: skip the test
+ if ``setlocale()`` fails. Patch by Victor Stinner.
+
+- bpo-41561: Add workaround for Ubuntu's custom OpenSSL security level
+ policy.
+
+- bpo-43288: Fix test_importlib to correctly skip Unicode file tests if the
+  filesystem does not support them.
+
+Build
+-----
+
+- bpo-43631: Update macOS, Windows, and CI to OpenSSL 1.1.1k.
+
+- bpo-43617: Improve configure.ac: Check for presence of autoconf-archive
+ package and remove our copies of M4 macros.
+
+macOS
+-----
+
+- bpo-41837: Update macOS installer build to use OpenSSL 1.1.1k.
+
+IDLE
+----
+
+- bpo-42225: Document that IDLE can fail on Unix either from misconfigured
+  IP masquerade rules or from failure displaying complex colored (non-ASCII)
+  characters.
+
+- bpo-43283: Document why printing to IDLE's Shell is often slower than
+ printing to a system terminal and that it can be made faster by
+ pre-formatting a single string before printing.
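The pre-formatting advice can be sketched as follows; the speedup applies when running under IDLE's Shell, where each write triggers a screen update:

```python
lines = [f"row {i}" for i in range(5)]

# Slower in IDLE's Shell: one print call (one screen update) per line.
# for line in lines:
#     print(line)

# Faster: build a single string first, then print once.
print("\n".join(lines))
```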
+
+
What's New in Python 3.9.2 final?
=================================
blake2b_increment_counter( S, BLAKE2B_BLOCKBYTES );
blake2b_compress( S, S->buf );
S->buflen -= BLAKE2B_BLOCKBYTES;
- memcpy( S->buf, S->buf + BLAKE2B_BLOCKBYTES, S->buflen );
+ memmove( S->buf, S->buf + BLAKE2B_BLOCKBYTES, S->buflen );
}
blake2b_increment_counter( S, S->buflen );
blake2b_increment_counter( S, BLAKE2B_BLOCKBYTES );
blake2b_compress( S, S->buf );
S->buflen -= BLAKE2B_BLOCKBYTES;
- memcpy( S->buf, S->buf + BLAKE2B_BLOCKBYTES, S->buflen );
+ memmove( S->buf, S->buf + BLAKE2B_BLOCKBYTES, S->buflen );
}
blake2b_increment_counter( S, S->buflen );
blake2s_increment_counter( S, BLAKE2S_BLOCKBYTES );
blake2s_compress( S, S->buf );
S->buflen -= BLAKE2S_BLOCKBYTES;
- memcpy( S->buf, S->buf + BLAKE2S_BLOCKBYTES, S->buflen );
+ memmove( S->buf, S->buf + BLAKE2S_BLOCKBYTES, S->buflen );
}
blake2s_increment_counter( S, ( uint32_t )S->buflen );
blake2s_increment_counter( S, BLAKE2S_BLOCKBYTES );
blake2s_compress( S, S->buf );
S->buflen -= BLAKE2S_BLOCKBYTES;
- memcpy( S->buf, S->buf + BLAKE2S_BLOCKBYTES, S->buflen );
+ memmove( S->buf, S->buf + BLAKE2S_BLOCKBYTES, S->buflen );
}
blake2s_increment_counter( S, ( uint32_t )S->buflen );
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
#endif
-#if defined(__GNUC__)
+#if defined(__GNUC__) && ((__GNUC__ > 4) || ((__GNUC__ == 4) && (__GNUC_MINOR__ > 5)))
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
#endif
#if defined(__clang__) || defined(MACOSX)
#pragma clang diagnostic pop
#endif
-#if defined(__GNUC__)
+#if defined(__GNUC__) && ((__GNUC__ > 4) || ((__GNUC__ == 4) && (__GNUC_MINOR__ > 5)))
#pragma GCC diagnostic pop
#endif
if (!PyArg_ParseTuple(args, "U|i:LoadLibrary", &nameobj, &load_flags))
return NULL;
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
name = _PyUnicode_AsUnicode(nameobj);
+_Py_COMP_DIAG_POP
if (!name)
return NULL;
if (!PyUnicode_FSDecoder(nameobj, &stringobj)) {
return -1;
}
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
widename = PyUnicode_AsUnicode(stringobj);
+_Py_COMP_DIAG_POP
if (widename == NULL)
return -1;
#else
ret = PyObject_CallMethodOneArg(self->buffer, _PyIO_str_write, b);
} while (ret == NULL && _PyIO_trap_eintr());
Py_DECREF(b);
+ // NOTE: We cleared the buffer, but we don't know how many bytes were
+ // actually written when the error occurred.
if (ret == NULL)
return -1;
Py_DECREF(ret);
/* XXX What if we were just reading? */
if (self->encodefunc != NULL) {
- if (PyUnicode_IS_ASCII(text) && is_asciicompat_encoding(self->encodefunc)) {
+ if (PyUnicode_IS_ASCII(text) &&
+ // See bpo-43260
+ PyUnicode_GET_LENGTH(text) <= self->chunk_size &&
+ is_asciicompat_encoding(self->encodefunc)) {
b = text;
Py_INCREF(b);
}
}
self->encoding_start_of_stream = 0;
}
- else
+ else {
b = PyObject_CallMethodOneArg(self->encoder, _PyIO_str_encode, text);
+ }
Py_DECREF(text);
if (b == NULL)
self->pending_bytes_count = 0;
self->pending_bytes = b;
}
+ else if (self->pending_bytes_count + bytes_len > self->chunk_size) {
+ // Avoid concatenating more than chunk_size bytes of data.
+ if (_textiowrapper_writeflush(self) < 0) {
+ Py_DECREF(b);
+ return NULL;
+ }
+ self->pending_bytes = b;
+ }
else if (!PyList_CheckExact(self->pending_bytes)) {
PyObject *list = PyList_New(2);
if (list == NULL) {
}
self->pending_bytes_count += bytes_len;
- if (self->pending_bytes_count > self->chunk_size || needflush ||
+ if (self->pending_bytes_count >= self->chunk_size || needflush ||
text_needflush) {
if (_textiowrapper_writeflush(self) < 0)
return NULL;
return string;
err:
- PyMem_Del(state->mark);
+ /* We add an explicit cast here because MSVC has a bug when
+ compiling C code where it believes that `const void**` cannot be
+ safely cast to `void*`, see bpo-39943 for details. */
+ PyMem_Del((void*) state->mark);
state->mark = NULL;
if (state->buffer.buf)
PyBuffer_Release(&state->buffer);
PyBuffer_Release(&state->buffer);
Py_XDECREF(state->string);
data_stack_dealloc(state);
- PyMem_Del(state->mark);
+ /* See above PyMem_Del for why we explicitly cast here. */
+ PyMem_Del((void*) state->mark);
state->mark = NULL;
}
Py_INCREF(value);
Py_SETREF(self->ctx, (PySSLContext *)value);
SSL_set_SSL_CTX(self->ssl, self->ctx->ctx);
+ /* Set SSL* internal msg_callback to state of new context's state */
+ SSL_set_msg_callback(
+ self->ssl,
+ self->ctx->msg_cb ? _PySSL_msg_callback : NULL
+ );
#endif
} else {
PyErr_SetString(PyExc_TypeError, "The value must be a SSLContext");
ssl_obj = (PySSLSocket *)SSL_get_app_data(ssl);
assert(PySSLSocket_Check(ssl_obj));
if (ssl_obj->ctx->msg_cb == NULL) {
+ PyGILState_Release(threadstate);
return;
}
{
PyThreadState *tstate = _PyThreadState_GET();
int i;
+
+ if (PySys_Audit("gc.get_referrers", "(O)", args) < 0) {
+ return NULL;
+ }
+
PyObject *result = PyList_New(0);
if (!result) {
return NULL;
gc_get_referents(PyObject *self, PyObject *args)
{
Py_ssize_t i;
+ if (PySys_Audit("gc.get_referents", "(O)", args) < 0) {
+ return NULL;
+ }
PyObject *result = PyList_New(0);
if (result == NULL)
PyObject* result;
GCState *gcstate = &tstate->interp->gc;
+ if (PySys_Audit("gc.get_objects", "n", generation) < 0) {
+ return NULL;
+ }
+
result = PyList_New(0);
if (result == NULL) {
return NULL;
if (is_unicode) {
#ifdef MS_WINDOWS
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
wide = PyUnicode_AsUnicodeAndSize(o, &length);
+_Py_COMP_DIAG_POP
if (!wide) {
goto error_exit;
}
goto error_exit;
}
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
wide = PyUnicode_AsUnicodeAndSize(wo, &length);
+_Py_COMP_DIAG_POP
if (!wide) {
goto error_exit;
}
#ifdef MS_WINDOWS
if (!PyUnicode_FSDecoder(self->path, &ub))
return NULL;
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
const wchar_t *path = PyUnicode_AsUnicode(ub);
+_Py_COMP_DIAG_POP
#else /* POSIX */
if (!PyUnicode_FSConverter(self->path, &ub))
return NULL;
if (!PyUnicode_FSDecoder(self->path, &unicode))
return NULL;
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
path = PyUnicode_AsUnicode(unicode);
+_Py_COMP_DIAG_POP
result = LSTAT(path, &stat);
Py_DECREF(unicode);
}
_Py_atomic_store_relaxed(&Handlers[i].tripped, 0);
+ /* Signal handlers can be modified while a signal is received,
+ * and therefore the fact that trip_signal() or PyErr_SetInterrupt()
+ * was called doesn't guarantee that there is still a Python
+ * signal handler for it by the time PyErr_CheckSignals() is called
+ * (see bpo-43406).
+ */
+ PyObject *func = Handlers[i].func;
+ if (func == NULL || func == Py_None || func == IgnoreHandler ||
+ func == DefaultHandler) {
+ /* No Python signal handler due to aforementioned race condition.
+ * We can't call raise() as it would break the assumption
+ * that PyErr_SetInterrupt() only *simulates* an incoming
+ * signal (i.e. it will never kill the process).
+ * We also don't want to interrupt user code with a cryptic
+ * asynchronous exception, so instead just write out an
+ * unraisable error.
+ */
+ PyErr_Format(PyExc_OSError,
+ "Signal %i ignored due to race condition",
+ i);
+ PyErr_WriteUnraisable(Py_None);
+ continue;
+ }
+
PyObject *arglist = Py_BuildValue("(iO)", i, frame);
PyObject *result;
if (arglist) {
- result = _PyObject_Call(tstate, Handlers[i].func, arglist, NULL);
+ result = _PyObject_Call(tstate, func, arglist, NULL);
Py_DECREF(arglist);
}
else {
state->data_stack_base += size; \
} while (0)
+/* We add an explicit cast to memcpy here because MSVC has a bug when
+ compiling C code where it believes that `const void**` cannot be
+ safely cast to `void*`, see bpo-39943 for details. */
#define DATA_STACK_POP(state, data, size, discard) \
do { \
TRACE(("copy data to %p from %" PY_FORMAT_SIZE_T "d " \
"(%" PY_FORMAT_SIZE_T "d)\n", \
data, state->data_stack_base-size, size)); \
- memcpy(data, state->data_stack+state->data_stack_base-size, size); \
+ memcpy((void*) data, state->data_stack+state->data_stack_base-size, size); \
if (discard) \
state->data_stack_base -= size; \
} while (0)
return NULL;
}
+ /* Make sure that code is indexable with an int; this is
+ a long-standing assumption in ceval.c and many parts of
+ the interpreter. */
+ if (PyBytes_GET_SIZE(code) > INT_MAX) {
+ PyErr_SetString(PyExc_OverflowError, "co_code larger than INT_MAX");
+ return NULL;
+ }
+
/* Check for any inner or outer closure references */
n_cellvars = PyTuple_GET_SIZE(cellvars);
if (!n_cellvars && !PyTuple_GET_SIZE(freevars)) {
BaseException_clear(self);
if (!Py_IS_TYPE(self, (PyTypeObject *) PyExc_MemoryError)) {
- return Py_TYPE(self)->tp_free((PyObject *)self);
+ Py_TYPE(self)->tp_free((PyObject *)self);
+ return;
}
_PyObject_GC_UNTRACK(self);
return -1;
}
- int len = PyBytes_GET_SIZE(f->f_code->co_code)/sizeof(_Py_CODEUNIT);
+ /* PyCode_NewWithPosOnlyArgs limits co_code to be under INT_MAX so this
+ * should never overflow. */
+ int len = (int)(PyBytes_GET_SIZE(f->f_code->co_code) / sizeof(_Py_CODEUNIT));
int *lines = marklines(f->f_code, len);
if (lines == NULL) {
return -1;
extern "C" {
#endif
-/* Maximum code point of Unicode 6.0: 0x10ffff (1,114,111) */
+// Maximum code point of Unicode 6.0: 0x10ffff (1,114,111).
+// The value must be the same in fileutils.c.
#define MAX_UNICODE 0x10ffff
#ifdef Py_DEBUG
*maxchar = ch;
if (*maxchar > MAX_UNICODE) {
PyErr_Format(PyExc_ValueError,
- "character U+%x is not in range [U+0000; U+10ffff]",
- ch);
+ "character U+%x is not in range [U+0000; U+%x]",
+ ch, MAX_UNICODE);
return -1;
}
}
goto onError;
}
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
repwstr = PyUnicode_AsUnicodeAndSize(repunicode, &repwlen);
+_Py_COMP_DIAG_POP
if (repwstr == NULL)
goto onError;
/* need more space? (at least enough for what we
substring = PyUnicode_Substring(unicode, offset, offset+len);
if (substring == NULL)
return -1;
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
p = PyUnicode_AsUnicodeAndSize(substring, &size);
+_Py_COMP_DIAG_POP
if (p == NULL) {
Py_DECREF(substring);
return -1;
{
case PyUnicode_1BYTE_KIND: maxchar = 0xff; break;
case PyUnicode_2BYTE_KIND: maxchar = 0xffff; break;
- case PyUnicode_4BYTE_KIND: maxchar = 0x10ffff; break;
+ case PyUnicode_4BYTE_KIND: maxchar = MAX_UNICODE; break;
default:
Py_UNREACHABLE();
}
return NULL;
if (PyUnicode_Check(data)) {
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
const WCHAR *value = _PyUnicode_AsUnicode(data);
+_Py_COMP_DIAG_POP
if (value == NULL) {
return NULL;
}
t = PyList_GET_ITEM(value, j);
if (!PyUnicode_Check(t))
return FALSE;
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
wstr = PyUnicode_AsUnicodeAndSize(t, &len);
+_Py_COMP_DIAG_POP
if (wstr == NULL)
return FALSE;
size += Py_SAFE_DOWNCAST((len + 1) * sizeof(wchar_t),
Py_ssize_t len;
t = PyList_GET_ITEM(value, j);
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
wstr = PyUnicode_AsUnicodeAndSize(t, &len);
+_Py_COMP_DIAG_POP
assert(wstr);
wcscpy(P, wstr);
P += (len + 1);
if (PySys_Audit("winreg.SetValue", "nunO",
(Py_ssize_t)key, value_name, (Py_ssize_t)type,
value) < 0) {
+ PyMem_Free(data);
return NULL;
}
Py_BEGIN_ALLOW_THREADS
set libraries=\r
set libraries=%libraries% bzip2-1.0.6\r
if NOT "%IncludeLibffiSrc%"=="false" set libraries=%libraries% libffi\r
-if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1i\r
+if NOT "%IncludeSSLSrc%"=="false" set libraries=%libraries% openssl-1.1.1k\r
set libraries=%libraries% sqlite-3.34.0.0\r
if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tcl-core-8.6.9.0\r
if NOT "%IncludeTkinterSrc%"=="false" set libraries=%libraries% tk-8.6.9.0\r
\r
set binaries=\r
if NOT "%IncludeLibffi%"=="false" set binaries=%binaries% libffi\r
-if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1i\r
+if NOT "%IncludeSSL%"=="false" set binaries=%binaries% openssl-bin-1.1.1k\r
if NOT "%IncludeTkinter%"=="false" set binaries=%binaries% tcltk-8.6.9.0\r
if NOT "%IncludeSSLSrc%"=="false" set binaries=%binaries% nasm-2.11.06\r
\r
<libffiDir>$(ExternalsDir)libffi\</libffiDir>\r
<libffiOutDir>$(ExternalsDir)libffi\$(ArchName)\</libffiOutDir>\r
<libffiIncludeDir>$(libffiOutDir)include</libffiIncludeDir>\r
- <opensslDir>$(ExternalsDir)openssl-1.1.1i\</opensslDir>\r
- <opensslOutDir>$(ExternalsDir)openssl-bin-1.1.1i\$(ArchName)\</opensslOutDir>\r
+ <opensslDir>$(ExternalsDir)openssl-1.1.1k\</opensslDir>\r
+ <opensslOutDir>$(ExternalsDir)openssl-bin-1.1.1k\$(ArchName)\</opensslOutDir>\r
<opensslIncludeDir>$(opensslOutDir)include</opensslIncludeDir>\r
<nasmDir>$(ExternalsDir)\nasm-2.11.06\</nasmDir>\r
<zlibDir>$(ExternalsDir)\zlib-1.2.11\</zlibDir>\r
Homepage:\r
http://tukaani.org/xz/\r
_ssl\r
- Python wrapper for version 1.1.1i of the OpenSSL secure sockets\r
+ Python wrapper for version 1.1.1k of the OpenSSL secure sockets\r
library, which is downloaded from our binaries repository at\r
https://github.com/python/cpython-bin-deps.\r
\r
const char *msg = NULL;
PyObject* errtype = PyExc_SyntaxError;
+ Py_ssize_t col_offset = -1;
switch (p->tok->done) {
case E_TOKEN:
msg = "invalid token";
msg = "too many levels of indentation";
break;
case E_LINECONT:
+ col_offset = strlen(strtok(p->tok->buf, "\n")) - 1;
msg = "unexpected character after line continuation character";
break;
default:
msg = "unknown parsing error";
}
- PyErr_Format(errtype, msg);
- // There is no reliable column information for this error
- PyErr_SyntaxLocationObject(p->tok->filename, p->tok->lineno, 0);
-
+ RAISE_ERROR_KNOWN_LOCATION(p, errtype, p->tok->lineno, col_offset, msg);
return -1;
}
void *_PyPegen_dummy_name(Parser *p, ...);
Py_LOCAL_INLINE(void *)
-RAISE_ERROR_KNOWN_LOCATION(Parser *p, PyObject *errtype, int lineno,
- int col_offset, const char *errmsg, ...)
+RAISE_ERROR_KNOWN_LOCATION(Parser *p, PyObject *errtype,
+ Py_ssize_t lineno, Py_ssize_t col_offset,
+ const char *errmsg, ...)
{
va_list va;
va_start(va, errmsg);
_Py_CheckRecursionLimit = recursion_limit;
}
#endif
- if (tstate->recursion_critical)
- /* Somebody asked that we don't check for recursion. */
- return 0;
- if (tstate->overflowed) {
+ if (tstate->recursion_headroom) {
if (tstate->recursion_depth > recursion_limit + 50) {
/* Overflowing while handling an overflow. Give up. */
Py_FatalError("Cannot recover from stack overflow.");
}
- return 0;
}
- if (tstate->recursion_depth > recursion_limit) {
- --tstate->recursion_depth;
- tstate->overflowed = 1;
- _PyErr_Format(tstate, PyExc_RecursionError,
- "maximum recursion depth exceeded%s",
- where);
- return -1;
+ else {
+ if (tstate->recursion_depth > recursion_limit) {
+ tstate->recursion_headroom++;
+ _PyErr_Format(tstate, PyExc_RecursionError,
+ "maximum recursion depth exceeded%s",
+ where);
+ tstate->recursion_headroom--;
+ --tstate->recursion_depth;
+ return -1;
+ }
}
return 0;
}
_Py_CheckPython3();
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
wpathname = _PyUnicode_AsUnicode(pathname);
+_Py_COMP_DIAG_POP
if (wpathname == NULL)
return NULL;
PyObject **val, PyObject **tb)
{
int recursion_depth = 0;
+ tstate->recursion_headroom++;
PyObject *type, *value, *initial_tb;
restart:
type = *exc;
if (type == NULL) {
/* There was no exception, so nothing to do. */
+ tstate->recursion_headroom--;
return;
}
}
*exc = type;
*val = value;
+ tstate->recursion_headroom--;
return;
error:
int _Py_open_cloexec_works = -1;
#endif
+// The value must be the same in unicodeobject.c.
+#define MAX_UNICODE 0x10ffff
+
+// mbstowcs() and mbrtowc() errors
+static const size_t DECODE_ERROR = ((size_t)-1);
+static const size_t INCOMPLETE_CHARACTER = (size_t)-2;
+
static int
get_surrogateescape(_Py_error_handler errors, int *surrogateescape)
Py_RETURN_NONE;
}
+
+static size_t
+is_valid_wide_char(wchar_t ch)
+{
+ if (Py_UNICODE_IS_SURROGATE(ch)) {
+ // Reject lone surrogate characters
+ return 0;
+ }
+ if (ch > MAX_UNICODE) {
+ // bpo-35883: Reject characters outside [U+0000; U+10ffff] range.
+ // The glibc mbstowcs() UTF-8 decoder does not respect RFC 3629,
+ // it creates characters outside the [U+0000; U+10ffff] range:
+ // https://sourceware.org/bugzilla/show_bug.cgi?id=2373
+ return 0;
+ }
+ return 1;
+}
+
+
+static size_t
+_Py_mbstowcs(wchar_t *dest, const char *src, size_t n)
+{
+ size_t count = mbstowcs(dest, src, n);
+ if (dest != NULL && count != DECODE_ERROR) {
+ for (size_t i=0; i < count; i++) {
+ wchar_t ch = dest[i];
+ if (!is_valid_wide_char(ch)) {
+ return DECODE_ERROR;
+ }
+ }
+ }
+ return count;
+}
+
+
+#ifdef HAVE_MBRTOWC
+static size_t
+_Py_mbrtowc(wchar_t *pwc, const char *str, size_t len, mbstate_t *pmbs)
+{
+ assert(pwc != NULL);
+ size_t count = mbrtowc(pwc, str, len, pmbs);
+ if (count != 0 && count != DECODE_ERROR && count != INCOMPLETE_CHARACTER) {
+ if (!is_valid_wide_char(*pwc)) {
+ return DECODE_ERROR;
+ }
+ }
+ return count;
+}
+#endif
+
+
#if !defined(_Py_FORCE_UTF8_FS_ENCODING) && !defined(MS_WINDOWS)
#define USE_FORCE_ASCII
size_t res;
ch = (unsigned char)0xA7;
- res = mbstowcs(&wch, (char*)&ch, 1);
- if (res != (size_t)-1 && wch == L'\xA7') {
+ res = _Py_mbstowcs(&wch, (char*)&ch, 1);
+ if (res != DECODE_ERROR && wch == L'\xA7') {
/* On HP-UX withe C locale or the POSIX locale,
nl_langinfo(CODESET) announces "roman8", whereas mbstowcs() uses
Latin1 encoding in practice. Force ASCII in this case.
unsigned uch = (unsigned char)i;
ch[0] = (char)uch;
- res = mbstowcs(wch, ch, 1);
- if (res != (size_t)-1) {
+ res = _Py_mbstowcs(wch, ch, 1);
+ if (res != DECODE_ERROR) {
/* decoding a non-ASCII character from the locale encoding succeed:
the locale encoding is not ASCII, force ASCII */
return 1;
*/
argsize = strlen(arg);
#else
- argsize = mbstowcs(NULL, arg, 0);
+ argsize = _Py_mbstowcs(NULL, arg, 0);
#endif
- if (argsize != (size_t)-1) {
+ if (argsize != DECODE_ERROR) {
if (argsize > PY_SSIZE_T_MAX / sizeof(wchar_t) - 1) {
return -1;
}
return -1;
}
- count = mbstowcs(res, arg, argsize + 1);
- if (count != (size_t)-1) {
- wchar_t *tmp;
- /* Only use the result if it contains no
- surrogate characters. */
- for (tmp = res; *tmp != 0 &&
- !Py_UNICODE_IS_SURROGATE(*tmp); tmp++)
- ;
- if (*tmp == 0) {
- if (wlen != NULL) {
- *wlen = count;
- }
- *wstr = res;
- return 0;
+ count = _Py_mbstowcs(res, arg, argsize + 1);
+ if (count != DECODE_ERROR) {
+ *wstr = res;
+ if (wlen != NULL) {
+ *wlen = count;
}
+ return 0;
}
PyMem_RawFree(res);
}
out = res;
memset(&mbs, 0, sizeof mbs);
while (argsize) {
- size_t converted = mbrtowc(out, (char*)in, argsize, &mbs);
+ size_t converted = _Py_mbrtowc(out, (char*)in, argsize, &mbs);
if (converted == 0) {
/* Reached end of string; null char stored. */
break;
}
- if (converted == (size_t)-2) {
+ if (converted == INCOMPLETE_CHARACTER) {
/* Incomplete character. This should never happen,
since we provide everything that we have -
unless there is a bug in the C library, or I
goto decode_error;
}
- if (converted == (size_t)-1) {
+ if (converted == DECODE_ERROR) {
if (!surrogateescape) {
goto decode_error;
}
- /* Conversion error. Escape as UTF-8b, and start over
- in the initial shift state. */
+ /* Decoding error. Escape as UTF-8b, and start over in the initial
+ shift state. */
*out++ = 0xdc00 + *in++;
argsize--;
memset(&mbs, 0, sizeof mbs);
continue;
}
- if (Py_UNICODE_IS_SURROGATE(*out)) {
- if (!surrogateescape) {
- goto decode_error;
- }
+ // _Py_mbrtowc() rejects lone surrogate characters
+ assert(!Py_UNICODE_IS_SURROGATE(*out));
- /* Surrogate character. Escape the original
- byte sequence with surrogateescape. */
- argsize -= converted;
- while (converted--) {
- *out++ = 0xdc00 + *in++;
- }
- continue;
- }
/* successfully converted some bytes */
in += converted;
argsize -= converted;
else {
converted = wcstombs(NULL, buf, 0);
}
- if (converted == (size_t)-1) {
+ if (converted == DECODE_ERROR) {
goto encode_error;
}
if (bytes != NULL) {
struct _stat wstatbuf;
const wchar_t *wpath;
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
wpath = _PyUnicode_AsUnicode(path);
+_Py_COMP_DIAG_POP
if (wpath == NULL)
return -2;
char cmode[10];
size_t r;
r = wcstombs(cmode, mode, 10);
- if (r == (size_t)-1 || r >= 10) {
+ if (r == DECODE_ERROR || r >= 10) {
errno = EINVAL;
return NULL;
}
Py_TYPE(path));
return NULL;
}
+_Py_COMP_DIAG_PUSH
+_Py_COMP_DIAG_IGNORE_DEPR_DECLS
wpath = _PyUnicode_AsUnicode(path);
+_Py_COMP_DIAG_POP
if (wpath == NULL)
return NULL;
}
if (mod != NULL && mod != Py_None) {
- if (import_ensure_initialized(tstate, mod, name) < 0) {
+ if (import_ensure_initialized(tstate, mod, abs_name) < 0) {
goto error;
}
}
tstate->frame = NULL;
tstate->recursion_depth = 0;
- tstate->overflowed = 0;
+ tstate->recursion_headroom = 0;
tstate->recursion_critical = 0;
tstate->stackcheck_counter = 0;
tstate->tracing = 0;
if (file == Py_None) {
return;
}
-
+ Py_INCREF(file);
_PyErr_Display(file, exception, value, tb);
+ Py_DECREF(file);
}
PyObject *
sys_setrecursionlimit_impl(PyObject *module, int new_limit)
/*[clinic end generated code: output=35e1c64754800ace input=b0f7a23393924af3]*/
{
- int mark;
PyThreadState *tstate = _PyThreadState_GET();
if (new_limit < 1) {
Reject too low new limit if the current recursion depth is higher than
the new low-water mark. Otherwise it may not be possible anymore to
reset the overflowed flag to 0. */
- mark = _Py_RecursionLimitLowerWaterMark(new_limit);
- if (tstate->recursion_depth >= mark) {
+ if (tstate->recursion_depth >= new_limit) {
_PyErr_Format(tstate, PyExc_RecursionError,
"cannot set the recursion limit to %i at "
"the recursion depth %i: the limit is too low",
-This is Python version 3.9.2
+This is Python version 3.9.3
============================
.. image:: https://travis-ci.org/python/cpython.svg?branch=3.9
:alt: CPython code coverage on Codecov
:target: https://codecov.io/gh/python/cpython
-.. image:: https://img.shields.io/badge/zulip-join_chat-brightgreen.svg
- :alt: Python Zulip chat
- :target: https://python.zulipchat.com
+.. image:: https://img.shields.io/badge/discourse-join_chat-brightgreen.svg
+ :alt: Python Discourse chat
+ :target: https://discuss.python.org/
Copyright (c) 2001-2021 Python Software Foundation. All rights reserved.
]
OPENSSL_RECENT_VERSIONS = [
- "1.1.1g",
- # "3.0.0-alpha2"
+ "1.1.1k",
+ # "3.0.0-alpha12"
]
LIBRESSL_OLD_VERSIONS = [
# PARTICULAR PURPOSE.
m4_ifndef([AC_CONFIG_MACRO_DIRS], [m4_defun([_AM_CONFIG_MACRO_DIRS], [])m4_defun([AC_CONFIG_MACRO_DIRS], [_AM_CONFIG_MACRO_DIRS($@)])])
+# ===============================================================================
+# https://www.gnu.org/software/autoconf-archive/ax_c_float_words_bigendian.html
+# ===============================================================================
+#
+# SYNOPSIS
+#
+# AX_C_FLOAT_WORDS_BIGENDIAN([ACTION-IF-TRUE], [ACTION-IF-FALSE], [ACTION-IF-UNKNOWN])
+#
+# DESCRIPTION
+#
+# Checks the ordering of words within a multi-word float. This check is
+# necessary because on some systems (e.g. certain ARM systems), the float
+# word ordering can be different from the byte ordering. In a multi-word
+# float context, "big-endian" implies that the word containing the sign
+# bit is found in the memory location with the lowest address. This
+# implementation was inspired by the AC_C_BIGENDIAN macro in autoconf.
+#
+# The endianness is detected by first compiling C code that contains a
+# special double float value, then grepping the resulting object file for
+# certain strings of ASCII values. The double is specially crafted to have
+# a binary representation that corresponds with a simple string. In this
+# implementation, the string "noonsees" was selected because the
+# individual word values ("noon" and "sees") are palindromes, thus making
+# this test byte-order agnostic. If grep finds the string "noonsees" in
+# the object file, the target platform stores float words in big-endian
+# order. If grep finds "seesnoon", float words are in little-endian order.
+# If neither value is found, the user is instructed to specify the
+# ordering.
+#
+# LICENSE
+#
+# Copyright (c) 2008 Daniel Amelang <dan@amelang.net>
+#
+# Copying and distribution of this file, with or without modification, are
+# permitted in any medium without royalty provided the copyright notice
+# and this notice are preserved. This file is offered as-is, without any
+# warranty.
+
+#serial 11
+
+AC_DEFUN([AX_C_FLOAT_WORDS_BIGENDIAN],
+ [AC_CACHE_CHECK(whether float word ordering is bigendian,
+ ax_cv_c_float_words_bigendian, [
+
+ax_cv_c_float_words_bigendian=unknown
+AC_COMPILE_IFELSE([AC_LANG_SOURCE([[
+
+double d = 90904234967036810337470478905505011476211692735615632014797120844053488865816695273723469097858056257517020191247487429516932130503560650002327564517570778480236724525140520121371739201496540132640109977779420565776568942592.0;
+
+]])], [
+
+if grep noonsees conftest.$ac_objext >/dev/null ; then
+ ax_cv_c_float_words_bigendian=yes
+fi
+if grep seesnoon conftest.$ac_objext >/dev/null ; then
+ if test "$ax_cv_c_float_words_bigendian" = unknown; then
+ ax_cv_c_float_words_bigendian=no
+ else
+ ax_cv_c_float_words_bigendian=unknown
+ fi
+fi
+
+])])
+
+case $ax_cv_c_float_words_bigendian in
+ yes)
+ m4_default([$1],
+ [AC_DEFINE([FLOAT_WORDS_BIGENDIAN], 1,
+ [Define to 1 if your system stores words within floats
+ with the most significant word first])]) ;;
+ no)
+ $2 ;;
+ *)
+ m4_default([$3],
+ [AC_MSG_ERROR([
+
+Unknown float word ordering. You need to manually preset
+ax_cv_c_float_words_bigendian=no (or yes) according to your system.
+
+ ])]) ;;
+esac
+
+])# AX_C_FLOAT_WORDS_BIGENDIAN
+
+# ===========================================================================
+# https://www.gnu.org/software/autoconf-archive/ax_check_openssl.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+# AX_CHECK_OPENSSL([action-if-found[, action-if-not-found]])
+#
+# DESCRIPTION
+#
+# Look for OpenSSL in a number of default spots, or in a user-selected
+# spot (via --with-openssl). Sets
+#
+# OPENSSL_INCLUDES to the include directives required
+# OPENSSL_LIBS to the -l directives required
+# OPENSSL_LDFLAGS to the -L or -R flags required
+#
+# and calls ACTION-IF-FOUND or ACTION-IF-NOT-FOUND appropriately
+#
+# This macro sets OPENSSL_INCLUDES such that source files should use the
+# openssl/ directory in include directives:
+#
+# #include <openssl/hmac.h>
+#
+# LICENSE
+#
+# Copyright (c) 2009,2010 Zmanda Inc. <http://www.zmanda.com/>
+# Copyright (c) 2009,2010 Dustin J. Mitchell <dustin@zmanda.com>
+#
+# Copying and distribution of this file, with or without modification, are
+# permitted in any medium without royalty provided the copyright notice
+# and this notice are preserved. This file is offered as-is, without any
+# warranty.
+
+#serial 10
+
+AU_ALIAS([CHECK_SSL], [AX_CHECK_OPENSSL])
+AC_DEFUN([AX_CHECK_OPENSSL], [
+ found=false
+ AC_ARG_WITH([openssl],
+ [AS_HELP_STRING([--with-openssl=DIR],
+ [root of the OpenSSL directory])],
+ [
+ case "$withval" in
+ "" | y | ye | yes | n | no)
+ AC_MSG_ERROR([Invalid --with-openssl value])
+ ;;
+ *) ssldirs="$withval"
+ ;;
+ esac
+ ], [
+ # if pkg-config is installed and openssl has installed a .pc file,
+ # then use that information and don't search ssldirs
+ AC_CHECK_TOOL([PKG_CONFIG], [pkg-config])
+ if test x"$PKG_CONFIG" != x""; then
+ OPENSSL_LDFLAGS=`$PKG_CONFIG openssl --libs-only-L 2>/dev/null`
+ if test $? = 0; then
+ OPENSSL_LIBS=`$PKG_CONFIG openssl --libs-only-l 2>/dev/null`
+ OPENSSL_INCLUDES=`$PKG_CONFIG openssl --cflags-only-I 2>/dev/null`
+ found=true
+ fi
+ fi
+
+ # no such luck; use some default ssldirs
+ if ! $found; then
+ ssldirs="/usr/local/ssl /usr/lib/ssl /usr/ssl /usr/pkg /usr/local /usr"
+ fi
+ ]
+ )
+
+
+ # note that we #include <openssl/foo.h>, so the OpenSSL headers have to be in
+ # an 'openssl' subdirectory
+
+ if ! $found; then
+ OPENSSL_INCLUDES=
+ for ssldir in $ssldirs; do
+ AC_MSG_CHECKING([for openssl/ssl.h in $ssldir])
+ if test -f "$ssldir/include/openssl/ssl.h"; then
+ OPENSSL_INCLUDES="-I$ssldir/include"
+ OPENSSL_LDFLAGS="-L$ssldir/lib"
+ OPENSSL_LIBS="-lssl -lcrypto"
+ found=true
+ AC_MSG_RESULT([yes])
+ break
+ else
+ AC_MSG_RESULT([no])
+ fi
+ done
+
+ # if the file wasn't found, well, go ahead and try the link anyway -- maybe
+ # it will just work!
+ fi
+
+ # try the preprocessor and linker with our new flags,
+ # being careful not to pollute the global LIBS, LDFLAGS, and CPPFLAGS
+
+ AC_MSG_CHECKING([whether compiling and linking against OpenSSL works])
+ echo "Trying link with OPENSSL_LDFLAGS=$OPENSSL_LDFLAGS;" \
+ "OPENSSL_LIBS=$OPENSSL_LIBS; OPENSSL_INCLUDES=$OPENSSL_INCLUDES" >&AS_MESSAGE_LOG_FD
+
+ save_LIBS="$LIBS"
+ save_LDFLAGS="$LDFLAGS"
+ save_CPPFLAGS="$CPPFLAGS"
+ LDFLAGS="$LDFLAGS $OPENSSL_LDFLAGS"
+ LIBS="$OPENSSL_LIBS $LIBS"
+ CPPFLAGS="$OPENSSL_INCLUDES $CPPFLAGS"
+ AC_LINK_IFELSE(
+ [AC_LANG_PROGRAM([#include <openssl/ssl.h>], [SSL_new(NULL)])],
+ [
+ AC_MSG_RESULT([yes])
+ $1
+ ], [
+ AC_MSG_RESULT([no])
+ $2
+ ])
+ CPPFLAGS="$save_CPPFLAGS"
+ LDFLAGS="$save_LDFLAGS"
+ LIBS="$save_LIBS"
+
+ AC_SUBST([OPENSSL_INCLUDES])
+ AC_SUBST([OPENSSL_LIBS])
+ AC_SUBST([OPENSSL_LDFLAGS])
+])
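For context (not part of the patch), a `configure.ac` typically invokes the macro added above via its two action arguments. The `HAVE_OPENSSL` symbol and the error message below are illustrative, not taken from this patch:

```m4
dnl Hypothetical configure.ac fragment showing a typical AX_CHECK_OPENSSL
dnl call; the action bodies here are examples, not part of this patch.
AX_CHECK_OPENSSL(
    [AC_DEFINE([HAVE_OPENSSL], [1], [Define if OpenSSL is available])],
    [AC_MSG_ERROR([OpenSSL is required but was not found])]
)
```

On success the macro substitutes `OPENSSL_INCLUDES`, `OPENSSL_LIBS`, and `OPENSSL_LDFLAGS` for use in Makefiles via `@OPENSSL_INCLUDES@` etc.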
+
# pkg.m4 - Macros to locate and utilise pkg-config. -*- Autoconf -*-
# serial 11 (pkg-config-0.29.1)
[AC_DEFINE([HAVE_][$1], 1, [Enable ]m4_tolower([$1])[ support])])
])dnl PKG_HAVE_DEFINE_WITH_MODULES
-m4_include([m4/ax_c_float_words_bigendian.m4])
-m4_include([m4/ax_check_openssl.m4])
--with-ensurepip[=install|upgrade|no]
"install" or "upgrade" using bundled pip (default is
upgrade)
- --with-openssl=DIR override root of the OpenSSL directory to DIR
+ --with-openssl=DIR root of the OpenSSL directory
--with-ssl-default-suites=[python|openssl|STRING]
override default cipher suites string, python: use
Python's preferred selection (default), openssl:
if ac_fn_c_try_compile "$LINENO"; then :
-if $GREP noonsees conftest.$ac_objext >/dev/null ; then
+if grep noonsees conftest.$ac_objext >/dev/null ; then
ax_cv_c_float_words_bigendian=yes
fi
-if $GREP seesnoon conftest.$ac_objext >/dev/null ; then
+if grep seesnoon conftest.$ac_objext >/dev/null ; then
if test "$ax_cv_c_float_words_bigendian" = unknown; then
ax_cv_c_float_words_bigendian=no
else
fi
+
EXT_SUFFIX=.${SOABI}${SHLIB_SUFFIX}
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking LDVERSION" >&5
-dnl ***********************************************
-dnl * Please run autoreconf to test your changes! *
-dnl ***********************************************
+dnl ***************************************************
+dnl * Please run autoreconf -if to test your changes! *
+dnl ***************************************************
+dnl
+dnl Python's configure script requires autoconf 2.69 and autoconf-archive.
+dnl
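The regeneration step the comment above refers to can be sketched as a shell pre-flight check. This is an illustrative helper, not part of the patch; it only reports whether `autoreconf` is on `PATH` before you run the regeneration:

```shell
# Illustrative pre-flight check before regenerating configure.
# Per the patch, Python's configure.ac needs autoconf >= 2.69 plus the
# autoconf-archive macros (AX_C_FLOAT_WORDS_BIGENDIAN, AX_CHECK_OPENSSL).
if command -v autoreconf >/dev/null 2>&1; then
    msg="autoreconf found: run 'autoreconf -if' to regenerate configure"
else
    msg="autoreconf missing: install autoconf and autoconf-archive first"
fi
echo "$msg"
```

If autoconf-archive is absent, the `m4_ifdef` guard added above makes `autoreconf` fail with an explicit error instead of silently emitting a broken `configure`.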
# Set VERSION so we only need to edit in one place (i.e., here)
m4_define(PYTHON_VERSION, 3.9)
AC_INIT([python],[PYTHON_VERSION],[https://bugs.python.org/])
-AC_CONFIG_MACRO_DIR(m4)
+m4_ifdef(
+ [AX_C_FLOAT_WORDS_BIGENDIAN],
+ [],
+ [AC_MSG_ERROR([Please install autoconf-archive package and re-run autoreconf])]
+)
AC_SUBST(BASECPPFLAGS)
if test "$srcdir" != . -a "$srcdir" != "$(pwd)"; then
+++ /dev/null
-# ===============================================================================
-# https://www.gnu.org/software/autoconf-archive/ax_c_float_words_bigendian.html
-# ===============================================================================
-#
-# SYNOPSIS
-#
-# AX_C_FLOAT_WORDS_BIGENDIAN([ACTION-IF-TRUE], [ACTION-IF-FALSE], [ACTION-IF-UNKNOWN])
-#
-# DESCRIPTION
-#
-# Checks the ordering of words within a multi-word float. This check is
-# necessary because on some systems (e.g. certain ARM systems), the float
-# word ordering can be different from the byte ordering. In a multi-word
-# float context, "big-endian" implies that the word containing the sign
-# bit is found in the memory location with the lowest address. This
-# implementation was inspired by the AC_C_BIGENDIAN macro in autoconf.
-#
-# The endianness is detected by first compiling C code that contains a
-# special double float value, then grepping the resulting object file for
-# certain strings of ASCII values. The double is specially crafted to have
-# a binary representation that corresponds with a simple string. In this
-# implementation, the string "noonsees" was selected because the
-# individual word values ("noon" and "sees") are palindromes, thus making
-# this test byte-order agnostic. If grep finds the string "noonsees" in
-# the object file, the target platform stores float words in big-endian
-# order. If grep finds "seesnoon", float words are in little-endian order.
-# If neither value is found, the user is instructed to specify the
-# ordering.
-#
-# LICENSE
-#
-# Copyright (c) 2008 Daniel Amelang <dan@amelang.net>
-#
-# Copying and distribution of this file, with or without modification, are
-# permitted in any medium without royalty provided the copyright notice
-# and this notice are preserved. This file is offered as-is, without any
-# warranty.
-
-#serial 11
-
-AC_DEFUN([AX_C_FLOAT_WORDS_BIGENDIAN],
- [AC_CACHE_CHECK(whether float word ordering is bigendian,
- ax_cv_c_float_words_bigendian, [
-
-ax_cv_c_float_words_bigendian=unknown
-AC_COMPILE_IFELSE([AC_LANG_SOURCE([[
-
-double d = 90904234967036810337470478905505011476211692735615632014797120844053488865816695273723469097858056257517020191247487429516932130503560650002327564517570778480236724525140520121371739201496540132640109977779420565776568942592.0;
-
-]])], [
-
-if $GREP noonsees conftest.$ac_objext >/dev/null ; then
- ax_cv_c_float_words_bigendian=yes
-fi
-if $GREP seesnoon conftest.$ac_objext >/dev/null ; then
- if test "$ax_cv_c_float_words_bigendian" = unknown; then
- ax_cv_c_float_words_bigendian=no
- else
- ax_cv_c_float_words_bigendian=unknown
- fi
-fi
-
-])])
-
-case $ax_cv_c_float_words_bigendian in
- yes)
- m4_default([$1],
- [AC_DEFINE([FLOAT_WORDS_BIGENDIAN], 1,
- [Define to 1 if your system stores words within floats
- with the most significant word first])]) ;;
- no)
- $2 ;;
- *)
- m4_default([$3],
- [AC_MSG_ERROR([
-
-Unknown float word ordering. You need to manually preset
-ax_cv_c_float_words_bigendian=no (or yes) according to your system.
-
- ])]) ;;
-esac
-
-])# AX_C_FLOAT_WORDS_BIGENDIAN
+++ /dev/null
-# ===========================================================================
-# https://www.gnu.org/software/autoconf-archive/ax_check_openssl.html
-# ===========================================================================
-#
-# SYNOPSIS
-#
-# AX_CHECK_OPENSSL([action-if-found[, action-if-not-found]])
-#
-# DESCRIPTION
-#
-# Look for OpenSSL in a number of default spots, or in a user-selected
-# spot (via --with-openssl). Sets
-#
-# OPENSSL_INCLUDES to the include directives required
-# OPENSSL_LIBS to the -l directives required
-# OPENSSL_LDFLAGS to the -L or -R flags required
-#
-# and calls ACTION-IF-FOUND or ACTION-IF-NOT-FOUND appropriately
-#
-# This macro sets OPENSSL_INCLUDES such that source files should use the
-# openssl/ directory in include directives:
-#
-# #include <openssl/hmac.h>
-#
-# LICENSE
-#
-# Copyright (c) 2009,2010 Zmanda Inc. <http://www.zmanda.com/>
-# Copyright (c) 2009,2010 Dustin J. Mitchell <dustin@zmanda.com>
-#
-# Copying and distribution of this file, with or without modification, are
-# permitted in any medium without royalty provided the copyright notice
-# and this notice are preserved. This file is offered as-is, without any
-# warranty.
-
-#serial 10
-
-AU_ALIAS([CHECK_SSL], [AX_CHECK_OPENSSL])
-AC_DEFUN([AX_CHECK_OPENSSL], [
- found=false
- AC_ARG_WITH([openssl],
- [AS_HELP_STRING([--with-openssl=DIR],
- [override root of the OpenSSL directory to DIR])],
- [
- case "$withval" in
- "" | y | ye | yes | n | no)
- AC_MSG_ERROR([Invalid --with-openssl value])
- ;;
- *) ssldirs="$withval"
- ;;
- esac
- ], [
- # if pkg-config is installed and openssl has installed a .pc file,
- # then use that information and don't search ssldirs
- AC_CHECK_TOOL([PKG_CONFIG], [pkg-config])
- if test x"$PKG_CONFIG" != x""; then
- OPENSSL_LDFLAGS=`$PKG_CONFIG openssl --libs-only-L 2>/dev/null`
- if test $? = 0; then
- OPENSSL_LIBS=`$PKG_CONFIG openssl --libs-only-l 2>/dev/null`
- OPENSSL_INCLUDES=`$PKG_CONFIG openssl --cflags-only-I 2>/dev/null`
- found=true
- fi
- fi
-
- # no such luck; use some default ssldirs
- if ! $found; then
- ssldirs="/usr/local/ssl /usr/lib/ssl /usr/ssl /usr/pkg /usr/local /usr"
- fi
- ]
- )
-
-
- # note that we #include <openssl/foo.h>, so the OpenSSL headers have to be in
- # an 'openssl' subdirectory
-
- if ! $found; then
- OPENSSL_INCLUDES=
- for ssldir in $ssldirs; do
- AC_MSG_CHECKING([for openssl/ssl.h in $ssldir])
- if test -f "$ssldir/include/openssl/ssl.h"; then
- OPENSSL_INCLUDES="-I$ssldir/include"
- OPENSSL_LDFLAGS="-L$ssldir/lib"
- OPENSSL_LIBS="-lssl -lcrypto"
- found=true
- AC_MSG_RESULT([yes])
- break
- else
- AC_MSG_RESULT([no])
- fi
- done
-
- # if the file wasn't found, well, go ahead and try the link anyway -- maybe
- # it will just work!
- fi
-
- # try the preprocessor and linker with our new flags,
- # being careful not to pollute the global LIBS, LDFLAGS, and CPPFLAGS
-
- AC_MSG_CHECKING([whether compiling and linking against OpenSSL works])
- echo "Trying link with OPENSSL_LDFLAGS=$OPENSSL_LDFLAGS;" \
- "OPENSSL_LIBS=$OPENSSL_LIBS; OPENSSL_INCLUDES=$OPENSSL_INCLUDES" >&AS_MESSAGE_LOG_FD
-
- save_LIBS="$LIBS"
- save_LDFLAGS="$LDFLAGS"
- save_CPPFLAGS="$CPPFLAGS"
- LDFLAGS="$LDFLAGS $OPENSSL_LDFLAGS"
- LIBS="$OPENSSL_LIBS $LIBS"
- CPPFLAGS="$OPENSSL_INCLUDES $CPPFLAGS"
- AC_LINK_IFELSE(
- [AC_LANG_PROGRAM([#include <openssl/ssl.h>], [SSL_new(NULL)])],
- [
- AC_MSG_RESULT([yes])
- $1
- ], [
- AC_MSG_RESULT([no])
- $2
- ])
- CPPFLAGS="$save_CPPFLAGS"
- LDFLAGS="$save_LDFLAGS"
- LIBS="$save_LIBS"
-
- AC_SUBST([OPENSSL_INCLUDES])
- AC_SUBST([OPENSSL_LIBS])
- AC_SUBST([OPENSSL_LDFLAGS])
-])
/* Define to 1 if you have the `dup3' function. */
#undef HAVE_DUP3
+/* Define if you have the '_dyld_shared_cache_contains_path' function. */
+#undef HAVE_DYLD_SHARED_CACHE_CONTAINS_PATH
+
/* Defined when any dynamic module loading is enabled. */
#undef HAVE_DYNAMIC_LOADING
/* Define if you have the 'prlimit' functions. */
#undef HAVE_PRLIMIT
-/* Define if you have the '_dyld_shared_cache_contains_path' function. */
-#undef HAVE_DYLD_SHARED_CACHE_CONTAINS_PATH
-
/* Define to 1 if you have the <process.h> header file. */
#undef HAVE_PROCESS_H