Metadata-Version: 1.2
Name: numpy
-Version: 1.21.0
+Version: 1.21.1
Summary: NumPy is the fundamental package for array computing with Python.
Home-page: https://www.numpy.org
Author: Travis E. Oliphant et al.
--- /dev/null
+
+Contributors
+============
+
+A total of 11 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Bas van Beek
+* Charles Harris
+* Ganesh Kathiresan
+* Gregory R. Lee
+* Hugo Defois +
+* Kevin Sheppard
+* Matti Picus
+* Ralf Gommers
+* Sayed Adel
+* Sebastian Berg
+* Thomas J. Fan
+
+Pull requests merged
+====================
+
+A total of 26 pull requests were merged for this release.
+
+* `#19311 <https://github.com/numpy/numpy/pull/19311>`__: REV,BUG: Replace ``NotImplemented`` with ``typing.Any``
+* `#19324 <https://github.com/numpy/numpy/pull/19324>`__: MAINT: Fixed the return-dtype of ``ndarray.real`` and ``imag``
+* `#19330 <https://github.com/numpy/numpy/pull/19330>`__: MAINT: Replace ``"dtype[Any]"`` with ``dtype`` in the definition of...
+* `#19342 <https://github.com/numpy/numpy/pull/19342>`__: DOC: Fix some docstrings that crash pdf generation.
+* `#19343 <https://github.com/numpy/numpy/pull/19343>`__: MAINT: bump scipy-mathjax
+* `#19347 <https://github.com/numpy/numpy/pull/19347>`__: BUG: Fix arr.flat.index for large arrays and big-endian machines
+* `#19348 <https://github.com/numpy/numpy/pull/19348>`__: ENH: add ``numpy.f2py.get_include`` function
+* `#19349 <https://github.com/numpy/numpy/pull/19349>`__: BUG: Fix reference count leak in ufunc dtype handling
+* `#19350 <https://github.com/numpy/numpy/pull/19350>`__: MAINT: Annotate missing attributes of ``np.number`` subclasses
+* `#19351 <https://github.com/numpy/numpy/pull/19351>`__: BUG: Fix cast safety and comparisons for zero sized voids
+* `#19352 <https://github.com/numpy/numpy/pull/19352>`__: BUG: Correct Cython declaration in random
+* `#19353 <https://github.com/numpy/numpy/pull/19353>`__: BUG: protect against accessing base attribute of a NULL subarray
+* `#19365 <https://github.com/numpy/numpy/pull/19365>`__: BUG, SIMD: Fix detecting AVX512 features on Darwin
+* `#19366 <https://github.com/numpy/numpy/pull/19366>`__: MAINT: remove ``print()``'s in distutils template handling
+* `#19390 <https://github.com/numpy/numpy/pull/19390>`__: ENH: SIMD architectures to show_config
+* `#19391 <https://github.com/numpy/numpy/pull/19391>`__: BUG: Do not raise deprecation warning for all nans in unique...
+* `#19392 <https://github.com/numpy/numpy/pull/19392>`__: BUG: Fix NULL special case in object-to-any cast code
+* `#19430 <https://github.com/numpy/numpy/pull/19430>`__: MAINT: Use arm64-graviton2 for testing on travis
+* `#19495 <https://github.com/numpy/numpy/pull/19495>`__: BUILD: update OpenBLAS to v0.3.17
+* `#19496 <https://github.com/numpy/numpy/pull/19496>`__: MAINT: Avoid unicode characters in division SIMD code comments
+* `#19499 <https://github.com/numpy/numpy/pull/19499>`__: BUG, SIMD: Fix infinite loop during count non-zero on GCC-11
+* `#19500 <https://github.com/numpy/numpy/pull/19500>`__: BUG: fix a numpy.npiter leak in npyiter_multi_index_set
+* `#19501 <https://github.com/numpy/numpy/pull/19501>`__: TST: Fix a ``GenericAlias`` test failure for python 3.9.0
+* `#19502 <https://github.com/numpy/numpy/pull/19502>`__: MAINT: Start testing with Python 3.10.0b3.
+* `#19503 <https://github.com/numpy/numpy/pull/19503>`__: MAINT: Add missing dtype overloads for object- and ctypes-based...
+* `#19510 <https://github.com/numpy/numpy/pull/19510>`__: REL: Prepare for NumPy 1.21.1 release.
+
"author": "MathJax Consortium",
"private": true,
"devDependencies": {
- "grunt": "^0.4.5",
- "grunt-cli": "^1.2.0",
- "grunt-contrib-clean": "^0.6.0",
- "grunt-regex-replace": "^0.2.6",
+ "grunt": ">=1.3.0",
+ "grunt-cli": ">=1.2.0",
+ "grunt-contrib-clean": ">=0.6.0",
+ "grunt-regex-replace": ">=0.2.6",
"matchdep": "*"
}
}
.. toctree::
:maxdepth: 3
+ 1.21.1 <release/1.21.1-notes>
1.21.0 <release/1.21.0-notes>
1.20.3 <release/1.20.3-notes>
1.20.2 <release/1.20.2-notes>
--- /dev/null
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.21.1 Release Notes
+==========================
+
+NumPy 1.21.1 is a maintenance release that fixes bugs discovered after the
+1.21.0 release and updates OpenBLAS to v0.3.17 to deal with problems on arm64.
+
+The Python versions supported for this release are 3.7-3.9. The 1.21.x series
+is compatible with development Python 3.10. Python 3.10 will be officially
+supported after it is released.
+
+.. warning::
+ There are unresolved problems compiling NumPy 1.21.1 with gcc-11.1.
+
+ * Optimization level ``-O3`` results in many incorrect warnings when
+ running the tests.
+ * On some hardware NumPy will hang in an infinite loop.
+
+Contributors
+============
+
+A total of 11 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Bas van Beek
+* Charles Harris
+* Ganesh Kathiresan
+* Gregory R. Lee
+* Hugo Defois +
+* Kevin Sheppard
+* Matti Picus
+* Ralf Gommers
+* Sayed Adel
+* Sebastian Berg
+* Thomas J. Fan
+
+Pull requests merged
+====================
+
+A total of 26 pull requests were merged for this release.
+
+* `#19311 <https://github.com/numpy/numpy/pull/19311>`__: REV,BUG: Replace ``NotImplemented`` with ``typing.Any``
+* `#19324 <https://github.com/numpy/numpy/pull/19324>`__: MAINT: Fixed the return-dtype of ``ndarray.real`` and ``imag``
+* `#19330 <https://github.com/numpy/numpy/pull/19330>`__: MAINT: Replace ``"dtype[Any]"`` with ``dtype`` in the definition of...
+* `#19342 <https://github.com/numpy/numpy/pull/19342>`__: DOC: Fix some docstrings that crash pdf generation.
+* `#19343 <https://github.com/numpy/numpy/pull/19343>`__: MAINT: bump scipy-mathjax
+* `#19347 <https://github.com/numpy/numpy/pull/19347>`__: BUG: Fix arr.flat.index for large arrays and big-endian machines
+* `#19348 <https://github.com/numpy/numpy/pull/19348>`__: ENH: add ``numpy.f2py.get_include`` function
+* `#19349 <https://github.com/numpy/numpy/pull/19349>`__: BUG: Fix reference count leak in ufunc dtype handling
+* `#19350 <https://github.com/numpy/numpy/pull/19350>`__: MAINT: Annotate missing attributes of ``np.number`` subclasses
+* `#19351 <https://github.com/numpy/numpy/pull/19351>`__: BUG: Fix cast safety and comparisons for zero sized voids
+* `#19352 <https://github.com/numpy/numpy/pull/19352>`__: BUG: Correct Cython declaration in random
+* `#19353 <https://github.com/numpy/numpy/pull/19353>`__: BUG: protect against accessing base attribute of a NULL subarray
+* `#19365 <https://github.com/numpy/numpy/pull/19365>`__: BUG, SIMD: Fix detecting AVX512 features on Darwin
+* `#19366 <https://github.com/numpy/numpy/pull/19366>`__: MAINT: remove ``print()``'s in distutils template handling
+* `#19390 <https://github.com/numpy/numpy/pull/19390>`__: ENH: SIMD architectures to show_config
+* `#19391 <https://github.com/numpy/numpy/pull/19391>`__: BUG: Do not raise deprecation warning for all nans in unique...
+* `#19392 <https://github.com/numpy/numpy/pull/19392>`__: BUG: Fix NULL special case in object-to-any cast code
+* `#19430 <https://github.com/numpy/numpy/pull/19430>`__: MAINT: Use arm64-graviton2 for testing on travis
+* `#19495 <https://github.com/numpy/numpy/pull/19495>`__: BUILD: update OpenBLAS to v0.3.17
+* `#19496 <https://github.com/numpy/numpy/pull/19496>`__: MAINT: Avoid unicode characters in division SIMD code comments
+* `#19499 <https://github.com/numpy/numpy/pull/19499>`__: BUG, SIMD: Fix infinite loop during count non-zero on GCC-11
+* `#19500 <https://github.com/numpy/numpy/pull/19500>`__: BUG: fix a numpy.npiter leak in npyiter_multi_index_set
+* `#19501 <https://github.com/numpy/numpy/pull/19501>`__: TST: Fix a ``GenericAlias`` test failure for python 3.9.0
+* `#19502 <https://github.com/numpy/numpy/pull/19502>`__: MAINT: Start testing with Python 3.10.0b3.
+* `#19503 <https://github.com/numpy/numpy/pull/19503>`__: MAINT: Add missing dtype overloads for object- and ctypes-based...
+* `#19510 <https://github.com/numpy/numpy/pull/19510>`__: REL: Prepare for NumPy 1.21.1 release.
+
import builtins
import os
import sys
+import mmap
+import ctypes as ct
+import array as _array
import datetime as dt
from abc import abstractmethod
from types import TracebackType
# other special cases. Order is sometimes important because of the
# subtype relationships
#
- # bool < int < float < complex
+ # bool < int < float < complex < object
#
# so we have to make sure the overload for the narrowest type comes
# first.
@overload
def __new__(cls, dtype: Type[bytes], align: bool = ..., copy: bool = ...) -> dtype[bytes_]: ...
- # `unsignedinteger` string-based representations
+ # `unsignedinteger` string-based representations and ctypes
@overload
- def __new__(cls, dtype: _UInt8Codes, align: bool = ..., copy: bool = ...) -> dtype[uint8]: ...
+ def __new__(cls, dtype: _UInt8Codes | Type[ct.c_uint8], align: bool = ..., copy: bool = ...) -> dtype[uint8]: ...
@overload
- def __new__(cls, dtype: _UInt16Codes, align: bool = ..., copy: bool = ...) -> dtype[uint16]: ...
+ def __new__(cls, dtype: _UInt16Codes | Type[ct.c_uint16], align: bool = ..., copy: bool = ...) -> dtype[uint16]: ...
@overload
- def __new__(cls, dtype: _UInt32Codes, align: bool = ..., copy: bool = ...) -> dtype[uint32]: ...
+ def __new__(cls, dtype: _UInt32Codes | Type[ct.c_uint32], align: bool = ..., copy: bool = ...) -> dtype[uint32]: ...
@overload
- def __new__(cls, dtype: _UInt64Codes, align: bool = ..., copy: bool = ...) -> dtype[uint64]: ...
+ def __new__(cls, dtype: _UInt64Codes | Type[ct.c_uint64], align: bool = ..., copy: bool = ...) -> dtype[uint64]: ...
@overload
- def __new__(cls, dtype: _UByteCodes, align: bool = ..., copy: bool = ...) -> dtype[ubyte]: ...
+ def __new__(cls, dtype: _UByteCodes | Type[ct.c_ubyte], align: bool = ..., copy: bool = ...) -> dtype[ubyte]: ...
@overload
- def __new__(cls, dtype: _UShortCodes, align: bool = ..., copy: bool = ...) -> dtype[ushort]: ...
+ def __new__(cls, dtype: _UShortCodes | Type[ct.c_ushort], align: bool = ..., copy: bool = ...) -> dtype[ushort]: ...
@overload
- def __new__(cls, dtype: _UIntCCodes, align: bool = ..., copy: bool = ...) -> dtype[uintc]: ...
+ def __new__(cls, dtype: _UIntCCodes | Type[ct.c_uint], align: bool = ..., copy: bool = ...) -> dtype[uintc]: ...
+
+ # NOTE: We're assuming here that `uintptr_t == size_t`,
+ # an assumption that does not hold in rare cases (same for `ssize_t`)
@overload
- def __new__(cls, dtype: _UIntPCodes, align: bool = ..., copy: bool = ...) -> dtype[uintp]: ...
+ def __new__(cls, dtype: _UIntPCodes | Type[ct.c_void_p] | Type[ct.c_size_t], align: bool = ..., copy: bool = ...) -> dtype[uintp]: ...
@overload
- def __new__(cls, dtype: _UIntCodes, align: bool = ..., copy: bool = ...) -> dtype[uint]: ...
+ def __new__(cls, dtype: _UIntCodes | Type[ct.c_ulong], align: bool = ..., copy: bool = ...) -> dtype[uint]: ...
@overload
- def __new__(cls, dtype: _ULongLongCodes, align: bool = ..., copy: bool = ...) -> dtype[ulonglong]: ...
+ def __new__(cls, dtype: _ULongLongCodes | Type[ct.c_ulonglong], align: bool = ..., copy: bool = ...) -> dtype[ulonglong]: ...
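The ctypes-based overloads above mirror runtime behavior: ``np.dtype`` accepts a ctypes type and resolves it to the NumPy scalar type of the same width. A quick sketch of that mapping (the ``c_size_t``/``uintp`` line relies on the platform assumption in the NOTE above):

```python
import ctypes as ct
import numpy as np

# Each ctypes integer type maps to the NumPy dtype of the same width.
assert np.dtype(ct.c_uint8) == np.dtype(np.uint8)
assert np.dtype(ct.c_uint16) == np.dtype(np.uint16)
assert np.dtype(ct.c_uint64) == np.dtype(np.uint64)

# Assumes size_t and uintp have the same size, which holds on
# common platforms but not universally.
assert np.dtype(ct.c_size_t) == np.dtype(np.uintp)
```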
- # `signedinteger` string-based representations
+ # `signedinteger` string-based representations and ctypes
@overload
- def __new__(cls, dtype: _Int8Codes, align: bool = ..., copy: bool = ...) -> dtype[int8]: ...
+ def __new__(cls, dtype: _Int8Codes | Type[ct.c_int8], align: bool = ..., copy: bool = ...) -> dtype[int8]: ...
@overload
- def __new__(cls, dtype: _Int16Codes, align: bool = ..., copy: bool = ...) -> dtype[int16]: ...
+ def __new__(cls, dtype: _Int16Codes | Type[ct.c_int16], align: bool = ..., copy: bool = ...) -> dtype[int16]: ...
@overload
- def __new__(cls, dtype: _Int32Codes, align: bool = ..., copy: bool = ...) -> dtype[int32]: ...
+ def __new__(cls, dtype: _Int32Codes | Type[ct.c_int32], align: bool = ..., copy: bool = ...) -> dtype[int32]: ...
@overload
- def __new__(cls, dtype: _Int64Codes, align: bool = ..., copy: bool = ...) -> dtype[int64]: ...
+ def __new__(cls, dtype: _Int64Codes | Type[ct.c_int64], align: bool = ..., copy: bool = ...) -> dtype[int64]: ...
@overload
- def __new__(cls, dtype: _ByteCodes, align: bool = ..., copy: bool = ...) -> dtype[byte]: ...
+ def __new__(cls, dtype: _ByteCodes | Type[ct.c_byte], align: bool = ..., copy: bool = ...) -> dtype[byte]: ...
@overload
- def __new__(cls, dtype: _ShortCodes, align: bool = ..., copy: bool = ...) -> dtype[short]: ...
+ def __new__(cls, dtype: _ShortCodes | Type[ct.c_short], align: bool = ..., copy: bool = ...) -> dtype[short]: ...
@overload
- def __new__(cls, dtype: _IntCCodes, align: bool = ..., copy: bool = ...) -> dtype[intc]: ...
+ def __new__(cls, dtype: _IntCCodes | Type[ct.c_int], align: bool = ..., copy: bool = ...) -> dtype[intc]: ...
@overload
- def __new__(cls, dtype: _IntPCodes, align: bool = ..., copy: bool = ...) -> dtype[intp]: ...
+ def __new__(cls, dtype: _IntPCodes | Type[ct.c_ssize_t], align: bool = ..., copy: bool = ...) -> dtype[intp]: ...
@overload
- def __new__(cls, dtype: _IntCodes, align: bool = ..., copy: bool = ...) -> dtype[int_]: ...
+ def __new__(cls, dtype: _IntCodes | Type[ct.c_long], align: bool = ..., copy: bool = ...) -> dtype[int_]: ...
@overload
- def __new__(cls, dtype: _LongLongCodes, align: bool = ..., copy: bool = ...) -> dtype[longlong]: ...
+ def __new__(cls, dtype: _LongLongCodes | Type[ct.c_longlong], align: bool = ..., copy: bool = ...) -> dtype[longlong]: ...
- # `floating` string-based representations
+ # `floating` string-based representations and ctypes
@overload
def __new__(cls, dtype: _Float16Codes, align: bool = ..., copy: bool = ...) -> dtype[float16]: ...
@overload
@overload
def __new__(cls, dtype: _HalfCodes, align: bool = ..., copy: bool = ...) -> dtype[half]: ...
@overload
- def __new__(cls, dtype: _SingleCodes, align: bool = ..., copy: bool = ...) -> dtype[single]: ...
+ def __new__(cls, dtype: _SingleCodes | Type[ct.c_float], align: bool = ..., copy: bool = ...) -> dtype[single]: ...
@overload
- def __new__(cls, dtype: _DoubleCodes, align: bool = ..., copy: bool = ...) -> dtype[double]: ...
+ def __new__(cls, dtype: _DoubleCodes | Type[ct.c_double], align: bool = ..., copy: bool = ...) -> dtype[double]: ...
@overload
- def __new__(cls, dtype: _LongDoubleCodes, align: bool = ..., copy: bool = ...) -> dtype[longdouble]: ...
+ def __new__(cls, dtype: _LongDoubleCodes | Type[ct.c_longdouble], align: bool = ..., copy: bool = ...) -> dtype[longdouble]: ...
# `complexfloating` string-based representations
@overload
@overload
def __new__(cls, dtype: _CLongDoubleCodes, align: bool = ..., copy: bool = ...) -> dtype[clongdouble]: ...
- # Miscellaneous string-based representations
+ # Miscellaneous string-based representations and ctypes
@overload
- def __new__(cls, dtype: _BoolCodes, align: bool = ..., copy: bool = ...) -> dtype[bool_]: ...
+ def __new__(cls, dtype: _BoolCodes | Type[ct.c_bool], align: bool = ..., copy: bool = ...) -> dtype[bool_]: ...
@overload
def __new__(cls, dtype: _TD64Codes, align: bool = ..., copy: bool = ...) -> dtype[timedelta64]: ...
@overload
@overload
def __new__(cls, dtype: _StrCodes, align: bool = ..., copy: bool = ...) -> dtype[str_]: ...
@overload
- def __new__(cls, dtype: _BytesCodes, align: bool = ..., copy: bool = ...) -> dtype[bytes_]: ...
+ def __new__(cls, dtype: _BytesCodes | Type[ct.c_char], align: bool = ..., copy: bool = ...) -> dtype[bytes_]: ...
@overload
def __new__(cls, dtype: _VoidCodes, align: bool = ..., copy: bool = ...) -> dtype[void]: ...
@overload
- def __new__(cls, dtype: _ObjectCodes, align: bool = ..., copy: bool = ...) -> dtype[object_]: ...
+ def __new__(cls, dtype: _ObjectCodes | Type[ct.py_object], align: bool = ..., copy: bool = ...) -> dtype[object_]: ...
# dtype of a dtype is the same dtype
@overload
align: bool = ...,
copy: bool = ...,
) -> dtype[Any]: ...
- # Catchall overload
+ # Catchall overload for void-likes
@overload
def __new__(
cls,
align: bool = ...,
copy: bool = ...,
) -> dtype[void]: ...
+ # Catchall overload for object-likes
+ @overload
+ def __new__(
+ cls,
+ dtype: Type[object],
+ align: bool = ...,
+ copy: bool = ...,
+ ) -> dtype[object_]: ...
@overload
def __getitem__(self: dtype[void], key: List[str]) -> dtype[void]: ...
class _SupportsItem(Protocol[_T_co]):
def item(self, __args: Any) -> _T_co: ...
+class _SupportsReal(Protocol[_T_co]):
+ @property
+ def real(self) -> _T_co: ...
+
+class _SupportsImag(Protocol[_T_co]):
+ @property
+ def imag(self) -> _T_co: ...
+
class ndarray(_ArrayOrScalarCommon, Generic[_ShapeType, _DType_co]):
@property
def base(self) -> Optional[ndarray]: ...
@property
def size(self) -> int: ...
@property
- def real(self: _ArraySelf) -> _ArraySelf: ...
+ def real(
+ self: NDArray[_SupportsReal[_ScalarType]], # type: ignore[type-var]
+ ) -> ndarray[_ShapeType, dtype[_ScalarType]]: ...
@real.setter
def real(self, value: ArrayLike) -> None: ...
@property
- def imag(self: _ArraySelf) -> _ArraySelf: ...
+ def imag(
+ self: NDArray[_SupportsImag[_ScalarType]], # type: ignore[type-var]
+ ) -> ndarray[_ShapeType, dtype[_ScalarType]]: ...
@imag.setter
def imag(self, value: ArrayLike) -> None: ...
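The annotations above encode how the ``.real``/``.imag`` dtype relates to the array dtype: for ``complex128`` arrays both views are ``float64``, while for real-valued arrays they keep the original dtype. A small check of that runtime behavior:

```python
import numpy as np

z = np.array([1 + 2j, 3 - 4j])         # complex128
assert z.real.dtype == np.float64
assert z.imag.dtype == np.float64

x = np.array([1.0, 2.0], dtype=np.float32)
assert x.real.dtype == np.float32      # real part of a real array is itself
assert x.imag.dtype == np.float32      # imag part is zeros of the same dtype
```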
def __new__(
]
class integer(number[_NBit1]): # type: ignore
+ @property
+ def numerator(self: _ScalarType) -> _ScalarType: ...
+ @property
+ def denominator(self) -> L[1]: ...
+ @overload
+ def __round__(self, ndigits: None = ...) -> int: ...
+ @overload
+ def __round__(self: _ScalarType, ndigits: SupportsIndex) -> _ScalarType: ...
+
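These stubs annotate attributes that integer scalars already expose at runtime, where ``numerator``/``denominator`` make them behave like ``numbers.Integral``. A minimal sanity check:

```python
import numpy as np

i = np.int16(6)
# Integer scalars act as exact rationals with denominator 1.
assert i.numerator == 6
assert i.denominator == 1
# round() is supported with and without an ndigits argument.
assert round(i) == 6
assert round(i, 2) == 6
```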
# NOTE: `__index__` is technically defined in the bottom-most
# sub-classes (`int64`, `uint32`, etc)
def item(
__value: Union[None, int, _CharLike_co, dt.timedelta, timedelta64] = ...,
__format: Union[_CharLike_co, Tuple[_CharLike_co, _IntLike_co]] = ...,
) -> None: ...
+ @property
+ def numerator(self: _ScalarType) -> _ScalarType: ...
+ @property
+ def denominator(self) -> L[1]: ...
# NOTE: Only a limited number of units support conversion
# to builtin scalar types: `Y`, `M`, `ns`, `ps`, `fs`, `as`
uint = unsignedinteger[_NBitInt]
ulonglong = unsignedinteger[_NBitLongLong]
-class inexact(number[_NBit1]): ... # type: ignore
+class inexact(number[_NBit1]): # type: ignore
+ def __getnewargs__(self: inexact[_64Bit]) -> Tuple[float, ...]: ...
_IntType = TypeVar("_IntType", bound=integer)
_FloatType = TypeVar('_FloatType', bound=floating)
__args: Union[L[0], Tuple[()], Tuple[L[0]]] = ...,
) -> float: ...
def tolist(self) -> float: ...
+ def is_integer(self: float64) -> bool: ...
+ def hex(self: float64) -> str: ...
+ @classmethod
+ def fromhex(cls: Type[float64], __string: str) -> float64: ...
+ def as_integer_ratio(self) -> Tuple[int, int]: ...
+ if sys.version_info >= (3, 9):
+ def __ceil__(self: float64) -> int: ...
+ def __floor__(self: float64) -> int: ...
+ def __trunc__(self: float64) -> int: ...
+ def __getnewargs__(self: float64) -> Tuple[float]: ...
+ def __getformat__(self: float64, __typestr: L["double", "float"]) -> str: ...
+ @overload
+ def __round__(self, ndigits: None = ...) -> int: ...
+ @overload
+ def __round__(self: _ScalarType, ndigits: SupportsIndex) -> _ScalarType: ...
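The ``float64``-only methods annotated above come from its inheritance of the builtin ``float`` API; they are real at runtime, not typing fiction. For example:

```python
import numpy as np

x = np.float64(2.5)
assert x.is_integer() is False
assert np.float64(3.0).is_integer() is True
assert x.as_integer_ratio() == (5, 2)
# hex()/fromhex() round-trip exactly, as for builtin floats.
assert np.float64.fromhex(x.hex()) == x
```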
__add__: _FloatOp[_NBit1]
__radd__: _FloatOp[_NBit1]
__sub__: _FloatOp[_NBit1]
@property
def imag(self) -> floating[_NBit2]: ... # type: ignore[override]
def __abs__(self) -> floating[_NBit1]: ... # type: ignore[override]
+ def __getnewargs__(self: complex128) -> Tuple[float, float]: ...
+ # NOTE: Deprecated
+ # def __round__(self, ndigits=...): ...
__add__: _ComplexOp[_NBit1]
__radd__: _ComplexOp[_NBit1]
__sub__: _ComplexOp[_NBit1]
version_json = '''
{
- "date": "2021-06-19T12:50:55-0600",
+ "date": "2021-07-18T11:34:41-0600",
"dirty": false,
"error": null,
- "full-revisionid": "b235f9e701e14ed6f6f6dcba885f7986a833743f",
- "version": "1.21.0"
+ "full-revisionid": "df6d2600c51502e1877aac563658d0616a75c5e5",
+ "version": "1.21.1"
}
''' # END VERSION_JSON
npy__cpu_have[NPY_CPU_FEATURE_FMA] = npy__cpu_have[NPY_CPU_FEATURE_FMA3];
// check AVX512 OS support
- if ((xcr & 0xe6) != 0xe6)
+ int avx512_os = (xcr & 0xe6) == 0xe6;
+#if defined(__APPLE__) && defined(__x86_64__)
+ /**
+ * On Darwin, even on machines with AVX512 support, threads are created by
+ * default with AVX512 masked off in XCR0, and an AVX-sized save area is used.
+ * However, AVX512 capabilities are advertised in the commpage and via sysctl.
+ * For more information, see:
+ * - https://github.com/apple/darwin-xnu/blob/0a798f6738bc1db01281fc08ae024145e84df927/osfmk/i386/fpu.c#L175-L201
+ * - https://github.com/golang/go/issues/43089
+ * - https://github.com/numpy/numpy/issues/19319
+ */
+ if (!avx512_os) {
+ npy_uintp commpage64_addr = 0x00007fffffe00000ULL;
+ npy_uint16 commpage64_ver = *((npy_uint16*)(commpage64_addr + 0x01E));
+ // cpu_capabilities64 undefined in versions < 13
+ if (commpage64_ver > 12) {
+ npy_uint64 commpage64_cap = *((npy_uint64*)(commpage64_addr + 0x010));
+ avx512_os = (commpage64_cap & 0x0000004000000000ULL) != 0;
+ }
+ }
+#endif
+ if (!avx512_os) {
return;
+ }
npy__cpu_have[NPY_CPU_FEATURE_AVX512F] = (reg[1] & (1 << 16)) != 0;
npy__cpu_have[NPY_CPU_FEATURE_AVX512CD] = (reg[1] & (1 << 28)) != 0;
if (npy__cpu_have[NPY_CPU_FEATURE_AVX512F] && npy__cpu_have[NPY_CPU_FEATURE_AVX512CD]) {
#include "arraytypes.h"
#include "scalartypes.h"
#include "arrayobject.h"
+#include "convert_datatype.h"
#include "conversion_utils.h"
#include "ctors.h"
#include "dtypemeta.h"
return Py_NotImplemented;
}
- _res = PyArray_CanCastTypeTo(PyArray_DESCR(self),
- PyArray_DESCR(array_other),
- NPY_EQUIV_CASTING);
+ _res = PyArray_CheckCastSafety(
+ NPY_EQUIV_CASTING,
+ PyArray_DESCR(self), PyArray_DESCR(array_other), NULL);
+ if (_res < 0) {
+ PyErr_Clear();
+ _res = 0;
+ }
if (_res == 0) {
/* 2015-05-07, 1.10 */
Py_DECREF(array_other);
return Py_NotImplemented;
}
- _res = PyArray_CanCastTypeTo(PyArray_DESCR(self),
- PyArray_DESCR(array_other),
- NPY_EQUIV_CASTING);
+ _res = PyArray_CheckCastSafety(
+ NPY_EQUIV_CASTING,
+ PyArray_DESCR(self), PyArray_DESCR(array_other), NULL);
+ if (_res < 0) {
+ PyErr_Clear();
+ _res = 0;
+ }
if (_res == 0) {
/* 2015-05-07, 1.10 */
Py_DECREF(array_other);
goto fail;
}
for (i = 0; i < len; i++) {
-#if NPY_SIZEOF_INTP <= NPY_SIZEOF_LONG
- PyObject *o = PyLong_FromLong((long) vals[i]);
-#else
- PyObject *o = PyLong_FromLongLong((npy_longlong) vals[i]);
-#endif
+ PyObject *o = PyArray_PyIntFromIntp(vals[i]);
if (!o) {
Py_DECREF(intTuple);
intTuple = NULL;
NPY_NO_EXPORT int
PyArray_TypestrConvert(int itemsize, int gentype);
+
+static NPY_INLINE PyObject *
+PyArray_PyIntFromIntp(npy_intp const value)
+{
+#if NPY_SIZEOF_INTP <= NPY_SIZEOF_LONG
+ return PyLong_FromLong((long)value);
+#else
+ return PyLong_FromLongLong((npy_longlong)value);
+#endif
+}
+
NPY_NO_EXPORT PyObject *
PyArray_IntTupleFromIntp(int len, npy_intp const *vals);
* is ignored).
* @return 0 for an invalid cast, 1 for a valid cast, and -1 for an error.
*/
-static int
+NPY_NO_EXPORT int
PyArray_CheckCastSafety(NPY_CASTING casting,
PyArray_Descr *from, PyArray_Descr *to, PyArray_DTypeMeta *to_dtype)
{
loop_descrs[1]->elsize = given_descrs[0]->elsize;
Py_INCREF(given_descrs[0]);
loop_descrs[0] = given_descrs[0];
+ if (loop_descrs[0]->type_num == NPY_VOID &&
+ loop_descrs[0]->subarray == NULL && loop_descrs[1]->names == NULL) {
+ return NPY_NO_CASTING | _NPY_CAST_IS_VIEW;
+ }
return NPY_SAFE_CASTING | _NPY_CAST_IS_VIEW;
}
casting = NPY_NO_CASTING | _NPY_CAST_IS_VIEW;
}
}
- NPY_CASTING field_casting = PyArray_GetCastSafety(
- given_descrs[0]->subarray->base, given_descrs[1]->subarray->base, NULL);
+
+ PyArray_Descr *from_base = (from_sub == NULL) ? given_descrs[0] : from_sub->base;
+ PyArray_Descr *to_base = (to_sub == NULL) ? given_descrs[1] : to_sub->base;
+ NPY_CASTING field_casting = PyArray_GetCastSafety(from_base, to_base, NULL);
if (field_casting < 0) {
return -1;
}
PyArray_GetCastSafety(
PyArray_Descr *from, PyArray_Descr *to, PyArray_DTypeMeta *to_dtype);
+NPY_NO_EXPORT int
+PyArray_CheckCastSafety(NPY_CASTING casting,
+ PyArray_Descr *from, PyArray_Descr *to, PyArray_DTypeMeta *to_dtype);
+
NPY_NO_EXPORT NPY_CASTING
legacy_same_dtype_resolve_descriptors(
PyArrayMethodObject *self,
while (N > 0) {
memcpy(&src_ref, src, sizeof(src_ref));
- if (PyArray_Pack(data->descr, dst, src_ref) < 0) {
+ if (PyArray_Pack(data->descr, dst, src_ref ? src_ref : Py_None) < 0) {
return -1;
}
- if (data->move_references) {
+ if (data->move_references && src_ref != NULL) {
Py_DECREF(src_ref);
memset(src, 0, sizeof(src_ref));
}
static PyObject *
array_size_get(PyArrayObject *self)
{
- npy_intp size=PyArray_SIZE(self);
-#if NPY_SIZEOF_INTP <= NPY_SIZEOF_LONG
- return PyLong_FromLong((long) size);
-#else
- if (size > NPY_MAX_LONG || size < NPY_MIN_LONG) {
- return PyLong_FromLongLong(size);
- }
- else {
- return PyLong_FromLong((long) size);
- }
-#endif
+ return PyArray_PyIntFromIntp(PyArray_SIZE(self));
}
static PyObject *
array_nbytes_get(PyArrayObject *self)
{
- npy_intp nbytes = PyArray_NBYTES(self);
-#if NPY_SIZEOF_INTP <= NPY_SIZEOF_LONG
- return PyLong_FromLong((long) nbytes);
-#else
- if (nbytes > NPY_MAX_LONG || nbytes < NPY_MIN_LONG) {
- return PyLong_FromLongLong(nbytes);
- }
- else {
- return PyLong_FromLong((long) nbytes);
- }
-#endif
+ return PyArray_PyIntFromIntp(PyArray_NBYTES(self));
}
#if NPY_SIMD
/* Count the zero bytes between `*d` and `end`, updating `*d` to point to where to keep counting from. */
-static NPY_INLINE NPY_GCC_OPT_3 npyv_u8
+NPY_FINLINE NPY_GCC_OPT_3 npyv_u8
count_zero_bytes_u8(const npy_uint8 **d, const npy_uint8 *end, npy_uint8 max_count)
{
const npyv_u8 vone = npyv_setall_u8(1);
return vsum8;
}
-static NPY_INLINE NPY_GCC_OPT_3 npyv_u16x2
+NPY_FINLINE NPY_GCC_OPT_3 npyv_u16x2
count_zero_bytes_u16(const npy_uint8 **d, const npy_uint8 *end, npy_uint16 max_count)
{
npyv_u16x2 vsum16;
#include "iterators.h"
#include "ctors.h"
#include "common.h"
+#include "conversion_utils.h"
#include "array_coercion.h"
#define NEWAXIS_INDEX -1
T_OBJECT,
offsetof(PyArrayIterObject, ao),
READONLY, NULL},
- {"index",
- T_INT,
- offsetof(PyArrayIterObject, index),
- READONLY, NULL},
{NULL, 0, 0, 0, NULL},
};
+static PyObject *
+iter_index_get(PyArrayIterObject *self)
+{
+ return PyArray_PyIntFromIntp(self->index);
+}
+
static PyObject *
iter_coords_get(PyArrayIterObject *self)
{
}
static PyGetSetDef iter_getsets[] = {
+ {"index",
+ (getter)iter_index_get,
+ NULL, NULL, NULL},
{"coords",
(getter)iter_coords_get,
- NULL,
- NULL, NULL},
+ NULL, NULL, NULL},
{NULL, NULL, NULL, NULL, NULL},
};
static PyObject *
arraymultiter_size_get(PyArrayMultiIterObject *self)
{
-#if NPY_SIZEOF_INTP <= NPY_SIZEOF_LONG
- return PyLong_FromLong((long) self->size);
-#else
- if (self->size < NPY_MAX_LONG) {
- return PyLong_FromLong((long) self->size);
- }
- else {
- return PyLong_FromLongLong((npy_longlong) self->size);
- }
-#endif
+ return PyArray_PyIntFromIntp(self->size);
}
static PyObject *
arraymultiter_index_get(PyArrayMultiIterObject *self)
{
-#if NPY_SIZEOF_INTP <= NPY_SIZEOF_LONG
- return PyLong_FromLong((long) self->index);
-#else
- if (self->size < NPY_MAX_LONG) {
- return PyLong_FromLong((long) self->index);
- }
- else {
- return PyLong_FromLongLong((npy_longlong) self->index);
- }
-#endif
+ return PyArray_PyIntFromIntp(self->index);
}
static PyObject *
for (idim = 0; idim < ndim; ++idim) {
PyObject *v = PySequence_GetItem(value, idim);
multi_index[idim] = PyLong_AsLong(v);
+ Py_DECREF(v);
if (error_converting(multi_index[idim])) {
- Py_XDECREF(v);
return -1;
}
}
** Defining the SIMD kernels
*
* Floor division of signed integers is based on T. Granlund and P. L. Montgomery,
- * “Division by invariant integers using multiplication(see [Figure 6.1]
+ * "Division by invariant integers using multiplication" (see [Figure 6.1],
* http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.1.2556)
* For details on TRUNC division, see simd/intdiv.h
***********************************************************************************
- ** Figure 6.1: Signed division by run–time invariant divisor, rounded towards -INF
+ ** Figure 6.1: Signed division by run-time invariant divisor, rounded towards -INF
***********************************************************************************
* For q = FLOOR(a/d), all sword:
- * sword −dsign = SRL(d, N − 1);
- * uword −nsign = (n < −dsign);
- * uword −qsign = EOR(−nsign, −dsign);
- * q = TRUNC((n − (−dsign ) + (−nsign))/d) − (−qsign);
+ * sword -dsign = SRL(d, N - 1);
+ * uword -nsign = (n < -dsign);
+ * uword -qsign = EOR(-nsign, -dsign);
+ * q = TRUNC((n - (-dsign ) + (-nsign))/d) - (-qsign);
********************************************************************************/
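The Figure 6.1 identity boils down to: take the truncating quotient, then subtract one when the operands' signs differ and the division is inexact. A plain-Python sketch of that fix-up (the scalar logic only, not the SIMD multiply-by-invariant kernel):

```python
def floor_div(n, d):
    # Truncating quotient, as C integer division would produce.
    q = abs(n) // abs(d)
    if (n < 0) != (d < 0):
        q = -q
    # Fix-up: round toward -inf when signs differ and the remainder is nonzero.
    if (n < 0) != (d < 0) and q * d != n:
        q -= 1
    return q

# Cross-check against Python's floor division for mixed signs.
for n in range(-9, 10):
    for d in (-4, -3, 3, 4):
        assert floor_div(n, d) == n // d
```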
#if NPY_SIMD
if (dtype == NULL) {
goto fail;
}
- Py_INCREF(dtype->singleton);
otype = dtype->singleton;
+ Py_INCREF(otype);
+ Py_DECREF(dtype);
}
if (out_obj && !PyArray_OutputConverter(out_obj, &out)) {
goto fail;
operands, type_tup, out_dtypes);
}
- Py_INCREF(descr);
out_dtypes[0] = ensure_dtype_nbo(descr);
if (out_dtypes[0] == NULL) {
return -1;
assert not np.can_cast("U1", "V1")
# Structured to unstructured is just like any other:
assert np.can_cast("d,i", "V", casting="same_kind")
+ # Unstructured void to unstructured is actually no cast at all:
+ assert np.can_cast("V3", "V", casting="no")
+ assert np.can_cast("V0", "V", casting="no")
class TestCasting:
with pytest.raises(TypeError,
match="casting from object to the parametric DType"):
cast._resolve_descriptors((np.dtype("O"), None))
+
+ @pytest.mark.parametrize("casting", ["no", "unsafe"])
+ def test_void_and_structured_with_subarray(self, casting):
+ # test case corresponding to gh-19325
+ dtype = np.dtype([("foo", "<f4", (3, 2))])
+ expected = casting == "unsafe"
+ assert np.can_cast("V4", dtype, casting=casting) == expected
+ assert np.can_cast(dtype, "V4", casting=casting) == expected
+
+ @pytest.mark.parametrize("dtype", np.typecodes["All"])
+ def test_object_casts_NULL_None_equivalence(self, dtype):
+ # None to <other> casts may succeed or fail, but a NULL'ed array must
+ # behave the same as one filled with None's.
+ arr_normal = np.array([None] * 5)
+ arr_NULLs = np.empty_like([None] * 5)
+ # If the check fails (maybe it should) the test would lose its purpose:
+ assert arr_NULLs.tobytes() == b"\x00" * arr_NULLs.nbytes
+
+ try:
+ expected = arr_normal.astype(dtype)
+ except TypeError:
+ with pytest.raises(TypeError):
+ arr_NULLs.astype(dtype)
+ else:
+ assert_array_equal(expected, arr_NULLs.astype(dtype))
assert_(abs(sys.getrefcount(ind) - rc_ind) < 50)
assert_(abs(sys.getrefcount(indtype) - rc_indtype) < 50)
+ def test_index_getset(self):
+ it = np.arange(10).reshape(2, 1, 5).flat
+ with pytest.raises(AttributeError):
+ it.index = 10
+
+ for _ in it:
+ pass
+ # Check the value of `.index` is updated correctly (see also gh-19153)
+ # If the type was incorrect, this would show up on big-endian machines
+ assert it.index == it.base.size
+
class TestResize:
assert_equal([x for x in i],
aview.swapaxes(0, 1).ravel(order='A'))
+def test_nditer_multi_index_set():
+ # Test the multi_index set
+ a = np.arange(6).reshape(2, 3)
+ it = np.nditer(a, flags=['multi_index'])
+
+ # Skip iteration over the first two elements of a[0]
+ it.multi_index = (0, 2,)
+
+ assert_equal([i for i in it], [2, 3, 4, 5])
+
+@pytest.mark.skipif(not HAS_REFCOUNT, reason="Python lacks refcounts")
+def test_nditer_multi_index_set_refcount():
+ # Test if the reference count on index variable is decreased
+
+ index = 0
+ i = np.nditer(np.array([111, 222, 333, 444]), flags=['multi_index'])
+
+ start_count = sys.getrefcount(index)
+ i.multi_index = (index,)
+ end_count = sys.getrefcount(index)
+
+ assert_equal(start_count, end_count)
+
def test_iter_best_order_multi_index_1d():
# The multi-indices should be correct with any reordering
assert_(not res)
assert_(type(res) is bool)
+ @pytest.mark.parametrize("dtype", ["V0", "V3", "V10"])
+ def test_compare_unstructured_voids(self, dtype):
+ zeros = np.zeros(3, dtype=dtype)
+
+ assert_array_equal(zeros, zeros)
+ assert not (zeros != zeros).any()
+
+ if dtype == "V0":
+ # Can't test != of actually different data
+ return
+
+ nonzeros = np.array([b"1", b"2", b"3"], dtype=dtype)
+
+ assert not (zeros == nonzeros).any()
+ assert (zeros != nonzeros).all()
+
def assert_array_strict_equal(x, y):
assert_array_equal(x, y)
if not os.path.isabs(fn):
fn = os.path.join(d, fn)
if os.path.isfile(fn):
- print('Including file', fn)
lines.extend(resolve_includes(fn))
else:
lines.append(line)
if not os.path.isabs(fn):
fn = os.path.join(d, fn)
if os.path.isfile(fn):
- print('Including file', fn)
lines.extend(resolve_includes(fn))
else:
lines.append(line)
* ``src_dirs``: directories containing library source files
* ``define_macros``: preprocessor macros used by
``distutils.setup``
+ * ``baseline``: minimum CPU features required
+ * ``found``: dispatched features supported in the system
+ * ``not found``: dispatched features that are not supported
+ in the system
Examples
--------
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
"""
+ from numpy.core._multiarray_umath import (
+ __cpu_features__, __cpu_baseline__, __cpu_dispatch__
+ )
for name,info_dict in globals().items():
if name[0] == "_" or type(info_dict) is not type({}): continue
print(name + ":")
if k == "sources" and len(v) > 200:
v = v[:60] + " ...\n... " + v[-60:]
print(" %s = %s" % (k,v))
+
+ features_found, features_not_found = [], []
+ for feature in __cpu_dispatch__:
+ if __cpu_features__[feature]:
+ features_found.append(feature)
+ else:
+ features_not_found.append(feature)
+
+ print("Supported SIMD extensions in this NumPy install:")
+ print(" baseline = %s" % (','.join(__cpu_baseline__)))
+ print(" found = %s" % (','.join(features_found)))
+ print(" not found = %s" % (','.join(features_not_found)))
+
'''))
return target
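The SIMD section added to ``show_config`` above partitions ``__cpu_dispatch__`` by availability. The same logic, sketched with hypothetical stand-in feature tables (the real ones come from ``numpy.core._multiarray_umath``):

```python
# Hypothetical stand-ins for __cpu_features__, __cpu_dispatch__ and
# __cpu_baseline__; values here are made up for illustration.
cpu_features = {"SSSE3": True, "AVX2": True, "AVX512F": False}
cpu_dispatch = ["SSSE3", "AVX2", "AVX512F"]
cpu_baseline = ["SSE", "SSE2"]

# Split dispatched features into those the host CPU supports and not.
features_found = [f for f in cpu_dispatch if cpu_features[f]]
features_not_found = [f for f in cpu_dispatch if not cpu_features[f]]

print("baseline =", ",".join(cpu_baseline))         # baseline = SSE,SSE2
print("found =", ",".join(features_found))          # found = SSSE3,AVX2
print("not found =", ",".join(features_not_found))  # not found = AVX512F
```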
"""Fortran to Python Interface Generator.
"""
-__all__ = ['run_main', 'compile', 'f2py_testing']
+__all__ = ['run_main', 'compile', 'get_include', 'f2py_testing']
import sys
import subprocess
return cp.returncode
+def get_include():
+ """
+ Return the directory that contains the fortranobject.c and .h files.
+
+ .. note::
+
+ This function is not needed when building an extension with
+ `numpy.distutils` directly from ``.f`` and/or ``.pyf`` files
+ in one go.
+
+ Python extension modules built with f2py-generated code need to use
+ ``fortranobject.c`` as a source file, and include the ``fortranobject.h``
+ header. This function can be used to obtain the directory containing
+ both of these files.
+
+ Returns
+ -------
+ include_path : str
+ Absolute path to the directory containing ``fortranobject.c`` and
+ ``fortranobject.h``.
+
+ Notes
+ -----
+ .. versionadded:: 1.22.0
+
+ Unless the build system you are using has specific support for f2py,
+ building a Python extension using a ``.pyf`` signature file is a two-step
+ process. For a module ``mymod``:
+
+ - Step 1: run ``python -m numpy.f2py mymod.pyf --quiet``. This
+ generates ``_mymodmodule.c`` and (if needed)
+ ``_mymod-f2pywrappers.f`` files next to ``mymod.pyf``.
+ - Step 2: build your Python extension module. This requires the
+ following source files:
+
+ - ``_mymodmodule.c``
+ - ``_mymod-f2pywrappers.f`` (if it was generated in step 1)
+ - ``fortranobject.c``
+
+ See Also
+ --------
+ numpy.get_include : function that returns the numpy include directory
+
+ """
+ return os.path.join(os.path.dirname(__file__), 'src')
+
+
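The two-step build described in the docstring could be wired up, for example, with a setuptools ``Extension``; this is a sketch only, where ``mymod`` and ``_mymodmodule.c`` are the hypothetical names from the docstring, not real files.

```python
import os
from setuptools import Extension
import numpy
import numpy.f2py

# "mymod" / "_mymodmodule.c" are the hypothetical names from step 1.
ext = Extension(
    "mymod",
    sources=[
        "_mymodmodule.c",
        # fortranobject.c ships in the f2py include directory.
        os.path.join(numpy.f2py.get_include(), "fortranobject.c"),
    ],
    include_dirs=[numpy.get_include(), numpy.f2py.get_include()],
)

# The header lives next to the C file.
print(os.path.isfile(
    os.path.join(numpy.f2py.get_include(), "fortranobject.h")))  # True
```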
if sys.version_info[:2] >= (3, 7):
# module level getattr is only supported in 3.7 onwards
# https://www.python.org/dev/peps/pep-0562/
x = np.arange(3, dtype=np.float32)
self.module.foo(x)
assert_equal(x, [3, 1, 2])
-
+
class TestNumpyVersionAttribute(util.F2PyTest):
# Check that the attribute __f2py_numpy_version__ is present
# in the compiled module and that it has the value np.__version__.
sources = [_path('src', 'regression', 'inout.f90')]
-
+
@pytest.mark.slow
def test_numpy_version_attribute(self):
-
+
# Check that self.module has an attribute named "__f2py_numpy_version__"
- assert_(hasattr(self.module, "__f2py_numpy_version__"),
+ assert_(hasattr(self.module, "__f2py_numpy_version__"),
msg="Fortran module does not have __f2py_numpy_version__")
-
+
# Check that the attribute __f2py_numpy_version__ is a string
assert_(isinstance(self.module.__f2py_numpy_version__, str),
msg="__f2py_numpy_version__ is not a string")
-
+
# Check that __f2py_numpy_version__ has the value numpy.__version__
assert_string_equal(np.__version__, self.module.__f2py_numpy_version__)
+
+
+def test_include_path():
+ incdir = np.f2py.get_include()
+ fnames_in_dir = os.listdir(incdir)
+ for fname in ('fortranobject.c', 'fortranobject.h'):
+ assert fname in fnames_in_dir
+
aux_firstnan = np.searchsorted(np.isnan(aux), True, side='left')
else:
aux_firstnan = np.searchsorted(aux, aux[-1], side='left')
- mask[1:aux_firstnan] = (aux[1:aux_firstnan] != aux[:aux_firstnan - 1])
+ if aux_firstnan > 0:
+ mask[1:aux_firstnan] = (
+ aux[1:aux_firstnan] != aux[:aux_firstnan - 1])
mask[aux_firstnan] = True
mask[aux_firstnan + 1:] = False
else:
assert_equal(np.unique(a, return_inverse=True), (ua, ua_inv))
assert_equal(np.unique(a, return_counts=True), (ua, ua_cnt))
+ # test for gh-19300
+ all_nans = [np.nan] * 4
+ ua = [np.nan]
+ ua_idx = [0]
+ ua_inv = [0, 0, 0, 0]
+ ua_cnt = [4]
+ assert_equal(np.unique(all_nans), ua)
+ assert_equal(np.unique(all_nans, return_index=True), (ua, ua_idx))
+ assert_equal(np.unique(all_nans, return_inverse=True), (ua, ua_inv))
+ assert_equal(np.unique(all_nans, return_counts=True), (ua, ua_cnt))
+
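The gh-19300 case tested above can be demonstrated directly: an all-NaN input collapses to a single NaN entry, even when counts are requested.

```python
import numpy as np

# All four NaNs are treated as one unique value.
vals, counts = np.unique([np.nan] * 4, return_counts=True)
print(len(vals), counts[0])  # 1 4
```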
def test_unique_axis_errors(self):
assert_raises(TypeError, self._run_axis_tests, object)
assert_raises(TypeError, self._run_axis_tests,
cdef void *PyArray_calloc_aligned(size_t n, size_t s)
cdef void PyArray_free_aligned(void *p)
-ctypedef double (*random_double_fill)(bitgen_t *state, np.npy_intp count, double* out) nogil
+ctypedef void (*random_double_fill)(bitgen_t *state, np.npy_intp count, double* out) nogil
ctypedef double (*random_double_0)(void *state) nogil
ctypedef double (*random_double_1)(void *state, double a) nogil
ctypedef double (*random_double_2)(void *state, double a, double b) nogil
Examples
--------
>>> np.random.default_rng().bytes(10)
- b'\xfeC\x9b\x86\x17\xf2\xa1\xafcp' # random
+ b'\\xfeC\\x9b\\x86\\x17\\xf2\\xa1\\xafcp' # random
"""
cdef Py_ssize_t n_uint32 = ((length - 1) // 4 + 1)
Examples
--------
>>> np.random.bytes(10)
- ' eh\\x85\\x022SZ\\xbf\\xa4' #random
+ b' eh\\x85\\x022SZ\\xbf\\xa4' #random
"""
cdef Py_ssize_t n_uint32 = ((length - 1) // 4 + 1)
# Interpret the uint32s as little-endian to convert them to bytes
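Both docstring corrections above reflect that these methods return ``bytes`` objects; a quick runtime check (the content is random, so only the type and length are pinned):

```python
import numpy as np

rng = np.random.default_rng()
buf = rng.bytes(10)
print(type(buf).__name__, len(buf))  # bytes 10
```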
# NOTE: The API section will be appended with additional entries
# further down in this file
-from typing import TYPE_CHECKING, List
+from typing import TYPE_CHECKING, List, Any
if TYPE_CHECKING:
import sys
_GUFunc_Nin2_Nout1,
)
else:
- _UFunc_Nin1_Nout1 = NotImplemented
- _UFunc_Nin2_Nout1 = NotImplemented
- _UFunc_Nin1_Nout2 = NotImplemented
- _UFunc_Nin2_Nout2 = NotImplemented
- _GUFunc_Nin2_Nout1 = NotImplemented
+ _UFunc_Nin1_Nout1 = Any
+ _UFunc_Nin2_Nout1 = Any
+ _UFunc_Nin1_Nout2 = Any
+ _UFunc_Nin2_Nout2 = Any
+ _GUFunc_Nin2_Nout1 = Any
# Clean up the namespace
-del TYPE_CHECKING, final, List
+del TYPE_CHECKING, final, List, Any
if __doc__ is not None:
from ._add_docstring import _docstrings
ArrayLike = Union[
_RecursiveSequence,
_ArrayLike[
- "dtype[Any]",
+ dtype,
Union[bool, int, float, complex, str, bytes]
],
]
import sys
-from typing import Any, List, Sequence, Tuple, Union, Type, TypeVar, TYPE_CHECKING
+from typing import (
+ Any,
+ List,
+ Sequence,
+ Tuple,
+ Union,
+ Type,
+ TypeVar,
+ Generic,
+ TYPE_CHECKING,
+)
import numpy as np
from ._shape import _ShapeLike
else:
_DTypeDict = Any
- _SupportsDType = Any
+
+ class _SupportsDType(Generic[_DType_co]):
+ pass
# Would create a dtype[np.void]
# array-scalar types and generic types
type, # TODO: enumerate these when we add type hints for numpy scalars
# anything with a dtype attribute
- "_SupportsDType[np.dtype[Any]]",
+ _SupportsDType[np.dtype],
# character codes, type strings or comma-separated fields, e.g., 'float64'
str,
_VoidDTypeLike,
that they can be imported conditionally via the numpy's mypy plugin.
"""
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Any
import numpy as np
from . import (
complex256 = np.complexfloating[_128Bit, _128Bit]
complex512 = np.complexfloating[_256Bit, _256Bit]
else:
- uint128 = NotImplemented
- uint256 = NotImplemented
- int128 = NotImplemented
- int256 = NotImplemented
- float80 = NotImplemented
- float96 = NotImplemented
- float128 = NotImplemented
- float256 = NotImplemented
- complex160 = NotImplemented
- complex192 = NotImplemented
- complex256 = NotImplemented
- complex512 = NotImplemented
+ uint128 = Any
+ uint256 = Any
+ int128 = Any
+ int256 = Any
+ float80 = Any
+ float96 = Any
+ float128 = Any
+ float256 = Any
+ complex160 = Any
+ complex192 = Any
+ complex256 = Any
+ complex512 = Any
import sys
-from typing import Sequence, Tuple, Union
+from typing import Sequence, Tuple, Union, Any
if sys.version_info >= (3, 8):
from typing import SupportsIndex
try:
from typing_extensions import SupportsIndex
except ImportError:
- SupportsIndex = NotImplemented
+ SupportsIndex = Any
_Shape = Tuple[int, ...]
"field2": (int, 3),
}
)
-
-np.dtype[np.float64](np.int64) # E: Argument 1 to "dtype" has incompatible type
+import sys
import numpy as np
f2: np.float16
f8: np.float64
+c8: np.complex64
# Construction
func(f2) # E: incompatible type
func(f8) # E: incompatible type
+
+round(c8) # E: No overload variant
+
+c8.__getnewargs__() # E: Invalid self argument
+f2.__getnewargs__() # E: Invalid self argument
+f2.is_integer() # E: Invalid self argument
+f2.hex() # E: Invalid self argument
+np.float16.fromhex("0x0.0p+0") # E: Invalid self argument
+f2.__trunc__() # E: Invalid self argument
+f2.__getformat__("float") # E: Invalid self argument
+import ctypes as ct
import numpy as np
dtype_obj: np.dtype[np.str_]
reveal_type(np.dtype(bool)) # E: numpy.dtype[numpy.bool_]
reveal_type(np.dtype(str)) # E: numpy.dtype[numpy.str_]
reveal_type(np.dtype(bytes)) # E: numpy.dtype[numpy.bytes_]
+reveal_type(np.dtype(object)) # E: numpy.dtype[numpy.object_]
+
+# ctypes
+reveal_type(np.dtype(ct.c_double)) # E: numpy.dtype[{double}]
+reveal_type(np.dtype(ct.c_longlong)) # E: numpy.dtype[{longlong}]
+reveal_type(np.dtype(ct.c_uint32)) # E: numpy.dtype[{uint32}]
+reveal_type(np.dtype(ct.c_bool)) # E: numpy.dtype[numpy.bool_]
+reveal_type(np.dtype(ct.c_char)) # E: numpy.dtype[numpy.bytes_]
+reveal_type(np.dtype(ct.py_object)) # E: numpy.dtype[numpy.object_]
# Special case for None
reveal_type(np.dtype(None)) # E: numpy.dtype[{double}]
+import sys
import numpy as np
b: np.bool_
f8: np.float64
c8: np.complex64
c16: np.complex128
+m: np.timedelta64
U: np.str_
S: np.bytes_
reveal_type(c16.reshape(1)) # E: numpy.ndarray[Any, numpy.dtype[{complex128}]]
reveal_type(U.reshape(1)) # E: numpy.ndarray[Any, numpy.dtype[numpy.str_]]
reveal_type(S.reshape(1)) # E: numpy.ndarray[Any, numpy.dtype[numpy.bytes_]]
+
+reveal_type(f8.as_integer_ratio()) # E: Tuple[builtins.int, builtins.int]
+reveal_type(f8.is_integer()) # E: bool
+reveal_type(f8.__trunc__()) # E: int
+reveal_type(f8.__getformat__("float")) # E: str
+reveal_type(f8.hex()) # E: str
+reveal_type(np.float64.fromhex("0x0.0p+0")) # E: {float64}
+
+reveal_type(f8.__getnewargs__()) # E: Tuple[builtins.float]
+reveal_type(c16.__getnewargs__()) # E: Tuple[builtins.float, builtins.float]
+
+reveal_type(i8.numerator) # E: {int64}
+reveal_type(i8.denominator) # E: Literal[1]
+reveal_type(u8.numerator) # E: {uint64}
+reveal_type(u8.denominator) # E: Literal[1]
+reveal_type(m.numerator) # E: numpy.timedelta64
+reveal_type(m.denominator) # E: Literal[1]
+
+reveal_type(round(i8)) # E: int
+reveal_type(round(i8, 3)) # E: {int64}
+reveal_type(round(u8)) # E: int
+reveal_type(round(u8, 3)) # E: {uint64}
+reveal_type(round(f8)) # E: int
+reveal_type(round(f8, 3)) # E: {float64}
+
+if sys.version_info >= (3, 9):
+ reveal_type(f8.__ceil__()) # E: int
+ reveal_type(f8.__floor__()) # E: int
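The ``reveal_type`` expectations above mirror runtime behavior that ``np.float64`` inherits from the builtin ``float``; a few spot checks:

```python
import numpy as np

f8 = np.float64(3.75)

# round() without ndigits returns a builtin int.
print(round(f8), type(round(f8)).__name__)  # 4 int

# is_integer / as_integer_ratio come from the float base class.
print(f8.is_integer())                      # False
print(np.float64(0.5).as_integer_ratio())   # (1, 2)

# fromhex is a classmethod inherited from float.
print(np.float64.fromhex("0x0.0p+0"))       # 0.0
```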
NDArray_ref = types.GenericAlias(np.ndarray, (Any, DType_ref))
FuncType = Callable[[Union[_GenericAlias, types.GenericAlias]], Any]
else:
- DType_ref = NotImplemented
- NDArray_ref = NotImplemented
+ DType_ref = Any
+ NDArray_ref = Any
FuncType = Callable[[_GenericAlias], Any]
GETATTR_NAMES = sorted(set(dir(np.ndarray)) - _GenericAlias._ATTR_EXCEPTIONS)
("__call__", lambda n: n(shape=(1,), dtype=np.int64, buffer=BUFFER)),
("subclassing", lambda n: _get_subclass_mro(n)),
("pickle", lambda n: n == pickle.loads(pickle.dumps(n))),
- ("__weakref__", lambda n: n == weakref.ref(n)()),
])
def test_pass(self, name: str, func: FuncType) -> None:
"""Compare `types.GenericAlias` with its numpy-based backport.
value_ref = func(NDArray_ref)
assert value == value_ref
+ def test_weakref(self) -> None:
+ """Test ``__weakref__``."""
+ value = weakref.ref(NDArray)()
+
+ if sys.version_info >= (3, 9, 1): # xref bpo-42332
+ value_ref = weakref.ref(NDArray_ref)()
+ assert value == value_ref
+
@pytest.mark.parametrize("name", GETATTR_NAMES)
def test_getattr(self, name: str) -> None:
"""Test that `getattr` wraps around the underlying type,
--- /dev/null
+"""Test the runtime usage of `numpy.typing`."""
+
+from __future__ import annotations
+
+import sys
+from typing import get_type_hints, Union, Tuple, NamedTuple
+
+import pytest
+import numpy as np
+import numpy.typing as npt
+
+try:
+ from typing_extensions import get_args, get_origin
+ SKIP = False
+except ImportError:
+ SKIP = True
+
+
+class TypeTup(NamedTuple):
+ typ: type
+ args: Tuple[type, ...]
+ origin: None | type
+
+
+if sys.version_info >= (3, 9):
+ NDArrayTup = TypeTup(npt.NDArray, npt.NDArray.__args__, np.ndarray)
+else:
+ NDArrayTup = TypeTup(npt.NDArray, (), None)
+
+TYPES = {
+ "ArrayLike": TypeTup(npt.ArrayLike, npt.ArrayLike.__args__, Union),
+ "DTypeLike": TypeTup(npt.DTypeLike, npt.DTypeLike.__args__, Union),
+ "NBitBase": TypeTup(npt.NBitBase, (), None),
+ "NDArray": NDArrayTup,
+}
+
+
+@pytest.mark.parametrize("name,tup", TYPES.items(), ids=TYPES.keys())
+@pytest.mark.skipif(SKIP, reason="requires typing-extensions")
+def test_get_args(name: str, tup: TypeTup) -> None:
+ """Test `typing.get_args`."""
+ typ, ref = tup.typ, tup.args
+ out = get_args(typ)
+ assert out == ref
+
+
+@pytest.mark.parametrize("name,tup", TYPES.items(), ids=TYPES.keys())
+@pytest.mark.skipif(SKIP, reason="requires typing-extensions")
+def test_get_origin(name: str, tup: TypeTup) -> None:
+ """Test `typing.get_origin`."""
+ typ, ref = tup.typ, tup.origin
+ out = get_origin(typ)
+ assert out == ref
+
+
+@pytest.mark.parametrize("name,tup", TYPES.items(), ids=TYPES.keys())
+def test_get_type_hints(name: str, tup: TypeTup) -> None:
+ """Test `typing.get_type_hints`."""
+ typ = tup.typ
+
+ # Explicitly set `__annotations__` in order to circumvent the
+ # stringification performed by `from __future__ import annotations`
+ def func(a): pass
+ func.__annotations__ = {"a": typ, "return": None}
+
+ out = get_type_hints(func)
+ ref = {"a": typ, "return": type(None)}
+ assert out == ref
+
+
+@pytest.mark.parametrize("name,tup", TYPES.items(), ids=TYPES.keys())
+def test_get_type_hints_str(name: str, tup: TypeTup) -> None:
+ """Test `typing.get_type_hints` with string-representation of types."""
+ typ_str, typ = f"npt.{name}", tup.typ
+
+ # Explicitly set `__annotations__` in order to circumvent the
+ # stringification performed by `from __future__ import annotations`
+ def func(a): pass
+ func.__annotations__ = {"a": typ_str, "return": None}
+
+ out = get_type_hints(func)
+ ref = {"a": typ, "return": type(None)}
+ assert out == ref
+
+
+def test_keys() -> None:
+ """Test that ``TYPES.keys()`` and ``numpy.typing.__all__`` are synced."""
+ keys = TYPES.keys()
+ ref = set(npt.__all__)
+ assert keys == ref
#-----------------------------------
# Path to the release notes
-RELEASE_NOTES = 'doc/source/release/1.21.0-notes.rst'
+RELEASE_NOTES = 'doc/source/release/1.21.1-notes.rst'
#-------------------------------------------------------
from urllib.request import urlopen, Request
from urllib.error import HTTPError
-OPENBLAS_V = '0.3.13'
-OPENBLAS_LONG = 'v0.3.13-62-gaf2b0d02'
+OPENBLAS_V = '0.3.17'
+OPENBLAS_LONG = 'v0.3.17'
BASE_LOC = 'https://anaconda.org/multibuild-wheels-staging/openblas-libs'
BASEURL = f'{BASE_LOC}/{OPENBLAS_LONG}/download'
SUPPORTED_PLATFORMS = [