Metadata-Version: 2.1
Name: numpy
-Version: 1.22.1
+Version: 1.22.2
Summary: NumPy is the fundamental package for array computing with Python.
Home-page: https://www.numpy.org
Author: Travis E. Oliphant et al.
+++ /dev/null
-
-/* This file is generated from numpy/distutils/system_info.py */
-void ATL_buildinfo(void);
-int main(void) {
- ATL_buildinfo();
- return 0;
-}
--- /dev/null
+
+Contributors
+============
+
+A total of 14 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Andrew J. Hesford +
+* Bas van Beek
+* Brénainn Woodsend +
+* Charles Harris
+* Hood Chatham
+* Janus Heide +
+* Leo Singer
+* Matti Picus
+* Mukulika Pahari
+* Niyas Sait
+* Pearu Peterson
+* Ralf Gommers
+* Sebastian Berg
+* Serge Guelton
+
+Pull requests merged
+====================
+
+A total of 21 pull requests were merged for this release.
+
+* `#20842 <https://github.com/numpy/numpy/pull/20842>`__: BLD: Add NPY_DISABLE_SVML env var to opt out of SVML
+* `#20843 <https://github.com/numpy/numpy/pull/20843>`__: BUG: Fix build of third party extensions with Py_LIMITED_API
+* `#20844 <https://github.com/numpy/numpy/pull/20844>`__: TYP: Fix pyright being unable to infer the ``real`` and ``imag``...
+* `#20845 <https://github.com/numpy/numpy/pull/20845>`__: BUG: Fix comparator function signatures
+* `#20906 <https://github.com/numpy/numpy/pull/20906>`__: BUG: Avoid importing ``numpy.distutils`` on import numpy.testing
+* `#20907 <https://github.com/numpy/numpy/pull/20907>`__: MAINT: remove outdated mingw32 fseek support
+* `#20908 <https://github.com/numpy/numpy/pull/20908>`__: TYP: Relax the return type of ``np.vectorize``
+* `#20909 <https://github.com/numpy/numpy/pull/20909>`__: BUG: fix f2py's define for threading when building with Mingw
+* `#20910 <https://github.com/numpy/numpy/pull/20910>`__: BUG: distutils: fix building mixed C/Fortran extensions
+* `#20912 <https://github.com/numpy/numpy/pull/20912>`__: DOC,TST: Fix Pandas code example as per new release
+* `#20935 <https://github.com/numpy/numpy/pull/20935>`__: TYP, MAINT: Add annotations for ``flatiter.__setitem__``
+* `#20936 <https://github.com/numpy/numpy/pull/20936>`__: MAINT, TYP: Added missing where typehints in ``fromnumeric.pyi``
+* `#20937 <https://github.com/numpy/numpy/pull/20937>`__: BUG: Fix build_ext interaction with non numpy extensions
+* `#20938 <https://github.com/numpy/numpy/pull/20938>`__: BUG: Fix missing intrinsics for windows/arm64 target
+* `#20945 <https://github.com/numpy/numpy/pull/20945>`__: REL: Prepare for the NumPy 1.22.2 release.
+* `#20982 <https://github.com/numpy/numpy/pull/20982>`__: MAINT: f2py: don't generate code that triggers ``-Wsometimes-uninitialized``.
+* `#20983 <https://github.com/numpy/numpy/pull/20983>`__: BUG: Fix incorrect return type in reduce without initial value
+* `#20984 <https://github.com/numpy/numpy/pull/20984>`__: ENH: review return values for PyArray_DescrNew
+* `#20985 <https://github.com/numpy/numpy/pull/20985>`__: MAINT: be more tolerant of setuptools >= 60
+* `#20986 <https://github.com/numpy/numpy/pull/20986>`__: BUG: Fix misplaced return.
+* `#20992 <https://github.com/numpy/numpy/pull/20992>`__: MAINT: Further small return value validation fixes
.. toctree::
:maxdepth: 3
+ 1.22.2 <release/1.22.2-notes>
1.22.1 <release/1.22.1-notes>
1.22.0 <release/1.22.0-notes>
1.21.4 <release/1.21.4-notes>
--- /dev/null
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.22.2 Release Notes
+==========================
+
+NumPy 1.22.2 is a maintenance release that fixes bugs discovered after the
+1.22.1 release. Notable fixes are:
+
+- Several build-related fixes for downstream projects and other platforms.
+- Various annotation fixes and additions.
+- NumPy wheels for Windows will use the 1.41 tool chain, fixing downstream link
+  problems for projects using NumPy provided libraries on Windows.
+- Addressed the CVE-2021-41495 complaint.
+
+The Python versions supported for this release are 3.8-3.10.
+
+Contributors
+============
+
+A total of 14 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Andrew J. Hesford +
+* Bas van Beek
+* Brénainn Woodsend +
+* Charles Harris
+* Hood Chatham
+* Janus Heide +
+* Leo Singer
+* Matti Picus
+* Mukulika Pahari
+* Niyas Sait
+* Pearu Peterson
+* Ralf Gommers
+* Sebastian Berg
+* Serge Guelton
+
+Pull requests merged
+====================
+
+A total of 21 pull requests were merged for this release.
+
+* `#20842 <https://github.com/numpy/numpy/pull/20842>`__: BLD: Add NPY_DISABLE_SVML env var to opt out of SVML
+* `#20843 <https://github.com/numpy/numpy/pull/20843>`__: BUG: Fix build of third party extensions with Py_LIMITED_API
+* `#20844 <https://github.com/numpy/numpy/pull/20844>`__: TYP: Fix pyright being unable to infer the ``real`` and ``imag``...
+* `#20845 <https://github.com/numpy/numpy/pull/20845>`__: BUG: Fix comparator function signatures
+* `#20906 <https://github.com/numpy/numpy/pull/20906>`__: BUG: Avoid importing ``numpy.distutils`` on import numpy.testing
+* `#20907 <https://github.com/numpy/numpy/pull/20907>`__: MAINT: remove outdated mingw32 fseek support
+* `#20908 <https://github.com/numpy/numpy/pull/20908>`__: TYP: Relax the return type of ``np.vectorize``
+* `#20909 <https://github.com/numpy/numpy/pull/20909>`__: BUG: fix f2py's define for threading when building with Mingw
+* `#20910 <https://github.com/numpy/numpy/pull/20910>`__: BUG: distutils: fix building mixed C/Fortran extensions
+* `#20912 <https://github.com/numpy/numpy/pull/20912>`__: DOC,TST: Fix Pandas code example as per new release
+* `#20935 <https://github.com/numpy/numpy/pull/20935>`__: TYP, MAINT: Add annotations for ``flatiter.__setitem__``
+* `#20936 <https://github.com/numpy/numpy/pull/20936>`__: MAINT, TYP: Added missing where typehints in ``fromnumeric.pyi``
+* `#20937 <https://github.com/numpy/numpy/pull/20937>`__: BUG: Fix build_ext interaction with non numpy extensions
+* `#20938 <https://github.com/numpy/numpy/pull/20938>`__: BUG: Fix missing intrinsics for windows/arm64 target
+* `#20945 <https://github.com/numpy/numpy/pull/20945>`__: REL: Prepare for the NumPy 1.22.2 release.
+* `#20982 <https://github.com/numpy/numpy/pull/20982>`__: MAINT: f2py: don't generate code that triggers ``-Wsometimes-uninitialized``.
+* `#20983 <https://github.com/numpy/numpy/pull/20983>`__: BUG: Fix incorrect return type in reduce without initial value
+* `#20984 <https://github.com/numpy/numpy/pull/20984>`__: ENH: review return values for PyArray_DescrNew
+* `#20985 <https://github.com/numpy/numpy/pull/20985>`__: MAINT: be more tolerant of setuptools >= 60
+* `#20986 <https://github.com/numpy/numpy/pull/20986>`__: BUG: Fix misplaced return.
+* `#20992 <https://github.com/numpy/numpy/pull/20992>`__: MAINT: Further small return value validation fixes
.. for doctests
The continuous integration truncates dataframe display without this setting.
- >>> pd.set_option('max_columns', 10)
+ >>> pd.set_option('display.max_columns', 10)
You could create a Pandas dataframe ::
@overload
def __getitem__(
self: flatiter[ndarray[Any, dtype[_ScalarType]]],
- key: Union[int, integer],
+ key: int | integer | tuple[int | integer],
) -> _ScalarType: ...
@overload
def __getitem__(
- self, key: Union[_ArrayLikeInt, slice, ellipsis],
+ self,
+ key: _ArrayLikeInt | slice | ellipsis | tuple[_ArrayLikeInt | slice | ellipsis],
) -> _NdArraySubClass: ...
+ # TODO: `__setitem__` operates via `unsafe` casting rules, and can
+ # thus accept any type accepted by the relevant underlying `np.generic`
+ # constructor.
+ # This means that `value` must in reality be a supertype of `npt.ArrayLike`.
+ def __setitem__(
+ self,
+ key: _ArrayLikeInt | slice | ellipsis | tuple[_ArrayLikeInt | slice | ellipsis],
+ value: Any,
+ ) -> None: ...
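The widened ``flatiter`` key types above can be exercised at runtime. A minimal sketch using only public NumPy API (the behavior shown, including the `unsafe` casting on assignment, is what the TODO comment above describes):

```python
import numpy as np

a = np.arange(6).reshape(2, 3)
fl = a.flat

# An integer key yields a scalar of the array's dtype.
assert fl[4] == 4

# Slice and array-like keys yield a new ndarray.
assert fl[1:3].tolist() == [1, 2]
assert fl[[0, 5]].tolist() == [0, 5]

# __setitem__ casts via `unsafe` rules, so a float is accepted
# for an integer array and truncated on assignment.
fl[0] = 9.7
assert a[0, 0] == 9
```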
@overload
def __array__(self: flatiter[ndarray[Any, _DType]], dtype: None = ..., /) -> ndarray[Any, _DType]: ...
@overload
axis: None = ...,
out: None = ...,
keepdims: L[False] = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> bool_: ...
@overload
def all(
axis: Optional[_ShapeLike] = ...,
out: None = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> Any: ...
@overload
def all(
axis: Optional[_ShapeLike] = ...,
out: _NdArraySubClass = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> _NdArraySubClass: ...
@overload
axis: None = ...,
out: None = ...,
keepdims: L[False] = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> bool_: ...
@overload
def any(
axis: Optional[_ShapeLike] = ...,
out: None = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> Any: ...
@overload
def any(
axis: Optional[_ShapeLike] = ...,
out: _NdArraySubClass = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> _NdArraySubClass: ...
@overload
dtype: DTypeLike = ...,
out: None = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> Any: ...
@overload
def mean(
dtype: DTypeLike = ...,
out: _NdArraySubClass = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> _NdArraySubClass: ...
@overload
out: None = ...,
ddof: int = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> Any: ...
@overload
def std(
out: _NdArraySubClass = ...,
ddof: int = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> _NdArraySubClass: ...
@overload
out: None = ...,
ddof: int = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> Any: ...
@overload
def var(
out: _NdArraySubClass = ...,
ddof: int = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> _NdArraySubClass: ...
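The ``where`` keyword annotated above has been supported by these reductions at runtime since NumPy 1.20; the stubs are only catching up. A small usage sketch:

```python
import numpy as np

x = np.array([[1.0, np.nan], [3.0, 4.0]])
valid = ~np.isnan(x)

# Reduce over only the entries selected by `where`.
assert np.all(np.isfinite(x), where=valid)
assert np.isclose(np.mean(x, where=valid), (1.0 + 3.0 + 4.0) / 3)

# std/var accept the same keyword.
assert np.isclose(np.std(x, where=valid), np.std([1.0, 3.0, 4.0]))
```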
_DType = TypeVar("_DType", bound=dtype[Any])
def size(self) -> int: ...
@property
def real(
- self: NDArray[_SupportsReal[_ScalarType]], # type: ignore[type-var]
+ self: ndarray[_ShapeType, dtype[_SupportsReal[_ScalarType]]], # type: ignore[type-var]
) -> ndarray[_ShapeType, _dtype[_ScalarType]]: ...
@real.setter
def real(self, value: ArrayLike) -> None: ...
@property
def imag(
- self: NDArray[_SupportsImag[_ScalarType]], # type: ignore[type-var]
+ self: ndarray[_ShapeType, dtype[_SupportsImag[_ScalarType]]], # type: ignore[type-var]
) -> ndarray[_ShapeType, _dtype[_ScalarType]]: ...
@imag.setter
def imag(self, value: ArrayLike) -> None: ...
) -> Any: ...
def flush(self) -> None: ...
+# TODO: Add a mypy plugin for managing functions whose output type is dependent
+# on the literal value of some sort of signature (e.g. `einsum` and `vectorize`)
class vectorize:
pyfunc: Callable[..., Any]
cache: bool
cache: bool = ...,
signature: None | str = ...,
) -> None: ...
- def __call__(self, *args: Any, **kwargs: Any) -> NDArray[Any]: ...
+ def __call__(self, *args: Any, **kwargs: Any) -> Any: ...
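The relaxed return type above reflects that ``np.vectorize`` does not always return a single ndarray: with a multi-output pyfunc the call returns a tuple of arrays, for example:

```python
import numpy as np

# divmod returns a 2-tuple, so the vectorized call yields a tuple of
# two ndarrays rather than a single NDArray[Any].
vdivmod = np.vectorize(divmod)
q, r = vdivmod([7, 8, 9], 3)
assert q.tolist() == [2, 2, 3]
assert r.tolist() == [1, 2, 0]
```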
class poly1d:
@property
version_json = '''
{
- "date": "2022-01-13T16:11:04-0700",
+ "date": "2022-02-03T14:24:02-0700",
"dirty": false,
"error": null,
- "full-revisionid": "7ce4118531b585b5d8f0380c6b896ae22d93bd96",
- "version": "1.22.1"
+ "full-revisionid": "f6dddcb2e5ea5ed39675f14429af3585c585a666",
+ "version": "1.22.2"
}
''' # END VERSION_JSON
axis: None = ...,
out: None = ...,
keepdims: Literal[False] = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> bool_: ...
@overload
def all(
axis: Optional[_ShapeLike] = ...,
out: Optional[ndarray] = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> Any: ...
@overload
axis: None = ...,
out: None = ...,
keepdims: Literal[False] = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> bool_: ...
@overload
def any(
axis: Optional[_ShapeLike] = ...,
out: Optional[ndarray] = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> Any: ...
def cumsum(
dtype: DTypeLike = ...,
out: Optional[ndarray] = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> Any: ...
def std(
out: Optional[ndarray] = ...,
ddof: int = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> Any: ...
def var(
out: Optional[ndarray] = ...,
ddof: int = ...,
keepdims: bool = ...,
+ *,
+ where: _ArrayLikeBool_co = ...,
) -> Any: ...
defined(__MINGW32__) || defined(__MINGW64__)
#include <io.h>
-/* mingw based on 3.4.5 has lseek but not ftell/fseek */
-#if defined(__MINGW32__) || defined(__MINGW64__)
-extern int __cdecl _fseeki64(FILE *, long long, int);
-extern long long __cdecl _ftelli64(FILE *);
-#endif
-
#define npy_fseek _fseeki64
#define npy_ftell _ftelli64
#define npy_lseek _lseeki64
* but this was never implemented. (This is also why the above
* selector is called the "legacy" selector.)
*/
- vectorcallfunc vectorcall;
+ #ifndef Py_LIMITED_API
+ vectorcallfunc vectorcall;
+ #else
+ void *vectorcall;
+ #endif
/* Was previously the `PyUFunc_MaskedInnerLoopSelectionFunc` */
void *_always_null_previously_masked_innerloop_selector;
NPY_RELAXED_STRIDES_DEBUG = (os.environ.get('NPY_RELAXED_STRIDES_DEBUG', "0") != "0")
NPY_RELAXED_STRIDES_DEBUG = NPY_RELAXED_STRIDES_DEBUG and NPY_RELAXED_STRIDES_CHECKING
+# Set NPY_DISABLE_SVML=1 in the environment to disable the vendored SVML
+# library. This option only has significance on a Linux x86_64 host and is most
+# useful to avoid improperly requiring SVML when cross compiling.
+NPY_DISABLE_SVML = (os.environ.get('NPY_DISABLE_SVML', "0") == "1")
+
# XXX: ugly, we use a class to avoid calling twice some expensive functions in
# config.h/numpyconfig.h. I don't see a better way because distutils force
# config.h generation inside an Extension class, and as such sharing
"""SVML library is supported only on x86_64 architecture and currently
only on linux
"""
+ if NPY_DISABLE_SVML:
+ return False
machine = platform.machine()
system = platform.system()
return "x86_64" in machine and system == "Linux"
fromstring_null_term_c_api(PyObject *dummy, PyObject *byte_obj)
{
char *string;
- PyArray_Descr *descr;
string = PyBytes_AsString(byte_obj);
if (string == NULL) {
return NULL;
}
- descr = PyArray_DescrNewFromType(NPY_FLOAT64);
- return PyArray_FromString(string, -1, descr, -1, " ");
+ return PyArray_FromString(string, -1, NULL, -1, " ");
}
if (PyArray_ISNOTSWAPPED(self) != PyArray_ISNOTSWAPPED(other)) {
/* Cast `other` to the same byte order as `self` (both unicode here) */
PyArray_Descr* unicode = PyArray_DescrNew(PyArray_DESCR(self));
+ if (unicode == NULL) {
+ return NULL;
+ }
unicode->elsize = PyArray_DESCR(other)->elsize;
PyObject *new = PyArray_FromAny((PyObject *)other,
unicode, 0, 0, 0, NULL);
#define LT(a,b) ((a) < (b) || ((b) != (b) && (a) ==(a)))
static int
-@TYPE@_compare(@type@ *pa, @type@ *pb)
+@TYPE@_compare(@type@ *pa, @type@ *pb, PyArrayObject *NPY_UNUSED(ap))
{
const @type@ a = *pa;
const @type@ b = *pb;
static int
-C@TYPE@_compare(@type@ *pa, @type@ *pb)
+C@TYPE@_compare(@type@ *pa, @type@ *pb, PyArrayObject *NPY_UNUSED(ap))
{
const @type@ ar = pa[0];
const @type@ ai = pa[1];
*/
static int
-@TYPE@_compare(@type@ *pa, @type@ *pb)
+@TYPE@_compare(@type@ *pa, @type@ *pb, PyArrayObject *NPY_UNUSED(ap))
{
const @type@ a = *pa;
const @type@ b = *pb;
}
descr = PyArray_DescrFromType(type_num);
+ if (descr == NULL) {
+ return 0;
+ }
if (byte_order == '=') {
*result = (PyObject*)descr;
}
else {
*result = (PyObject*)PyArray_DescrNewByteorder(descr, byte_order);
Py_DECREF(descr);
+ if (*result == NULL) {
+ return 0;
+ }
}
return 1;
PyArrayObject_fields *fa;
npy_intp nbytes;
+ if (descr == NULL) {
+ return NULL;
+ }
if (nd > NPY_MAXDIMS || nd < 0) {
PyErr_Format(PyExc_ValueError,
"number of dimensions must be within [0, %d]", NPY_MAXDIMS);
return NULL;
}
PyArray_DESCR_REPLACE(descr);
+ if (descr == NULL) {
+ return NULL;
+ }
descr->elsize = itemsize;
}
new = PyArray_NewFromDescr(subtype, descr, nd, dims, strides,
* terminate.
*/
descr = PyArray_DescrNewFromType(NPY_STRING);
+ if (descr == NULL) {
+ return NULL;
+ }
descr->elsize = view->itemsize;
}
return descr;
if (!descr && PyArray_Check(op) &&
PyArray_ISBYTESWAPPED((PyArrayObject* )op)) {
descr = PyArray_DescrNew(PyArray_DESCR((PyArrayObject *)op));
+ if (descr == NULL) {
+ return NULL;
+ }
}
else if (descr && !PyArray_ISNBO(descr->byteorder)) {
PyArray_DESCR_REPLACE(descr);
PyArrayObject *ret;
size_t nread = 0;
+ if (dtype == NULL) {
+ return NULL;
+ }
+
if (PyDataType_REFCHK(dtype)) {
PyErr_SetString(PyExc_ValueError,
"Cannot read into object array");
int itemsize;
int writeable = 1;
+ if (type == NULL) {
+ return NULL;
+ }
if (PyDataType_REFCHK(type)) {
PyErr_SetString(PyExc_ValueError,
PyArray_FromIter(PyObject *obj, PyArray_Descr *dtype, npy_intp count)
{
PyObject *value;
- PyObject *iter = PyObject_GetIter(obj);
+ PyObject *iter = NULL;
PyArrayObject *ret = NULL;
npy_intp i, elsize, elcount;
char *item, *new_data;
+ if (dtype == NULL) {
+ return NULL;
+ }
+
+ iter = PyObject_GetIter(obj);
if (iter == NULL) {
goto done;
}
+
if (PyDataType_ISUNSIZED(dtype)) {
PyErr_SetString(PyExc_ValueError,
"Must specify length when using variable-size data-type.");
PyArray_Descr *new = PyArray_DescrNewFromType(NPY_VOID);
if (new == NULL) {
- Py_XDECREF(fields);
- Py_XDECREF(nameslist);
- return NULL;
+ goto fail;
}
new->fields = fields;
new->names = nameslist;
totalsize += conv->elsize;
}
PyArray_Descr *new = PyArray_DescrNewFromType(NPY_VOID);
+ if (new == NULL) {
+ goto fail;
+ }
new->fields = fields;
new->names = nameslist;
new->flags = dtypeflags;
PyArray_Descr *new;
old = PyArray_DescrFromType(type_num);
+ if (old == NULL) {
+ return NULL;
+ }
new = PyArray_DescrNew(old);
Py_DECREF(old);
return new;
}
PyObject *odescr, *metadata=NULL;
- PyArray_Descr *descr, *conv;
+ PyArray_Descr *conv;
npy_bool align = NPY_FALSE;
npy_bool copy = NPY_FALSE;
npy_bool copied = NPY_FALSE;
/* Get a new copy of it unless it's already a copy */
if (copy && conv->fields == Py_None) {
- descr = PyArray_DescrNew(conv);
- Py_DECREF(conv);
- conv = descr;
+ PyArray_DESCR_REPLACE(conv);
+ if (conv == NULL) {
+ return NULL;
+ }
copied = NPY_TRUE;
}
* underlying dictionary
*/
if (!copied) {
+ PyArray_DESCR_REPLACE(conv);
+ if (conv == NULL) {
+ return NULL;
+ }
copied = NPY_TRUE;
- descr = PyArray_DescrNew(conv);
- Py_DECREF(conv);
- conv = descr;
}
if ((conv->metadata != NULL)) {
/*
char endian;
new = PyArray_DescrNew(self);
+ if (new == NULL) {
+ return NULL;
+ }
endian = new->byteorder;
if (endian != NPY_IGNORE) {
if (newendian == NPY_SWAP) {
int len, i;
newfields = PyDict_New();
+ if (newfields == NULL) {
+ Py_DECREF(new);
+ return NULL;
+ }
/* make new dictionary with replaced PyArray_Descr Objects */
while (PyDict_Next(self->fields, &pos, &key, &value)) {
if (NPY_TITLE_KEY(key, value)) {
Py_DECREF(new->subarray->base);
new->subarray->base = PyArray_DescrNewByteorder(
self->subarray->base, newendian);
+ if (new->subarray->base == NULL) {
+ Py_DECREF(new);
+ return NULL;
+ }
}
return new;
}
if (!PyErr_Occurred()) {
PyErr_Format(PyExc_TypeError,
"Cannot cast array data from %R to %R.", src_dtype, dst_dtype);
- Py_DECREF(meth);
- return -1;
}
+ Py_DECREF(meth);
+ return -1;
}
assert(PyArray_DescrCheck(cast_info->descriptors[0]));
assert(PyArray_DescrCheck(cast_info->descriptors[1]));
"string to large to store inside array.");
}
PyArray_Descr *res = PyArray_DescrNewFromType(cls->type_num);
+ if (res == NULL) {
+ return NULL;
+ }
res->elsize = (int)itemsize;
return res;
}
}
if (PyBytes_Check(obj)) {
PyArray_Descr *descr = PyArray_DescrNewFromType(NPY_VOID);
+ if (descr == NULL) {
+ return NULL;
+ }
Py_ssize_t itemsize = PyBytes_Size(obj);
if (itemsize > NPY_MAX_INT) {
PyErr_SetString(PyExc_TypeError,
"byte-like to large to store inside array.");
+ Py_DECREF(descr);
+ return NULL;
}
descr->elsize = (int)itemsize;
return descr;
}
type = PyArray_DescrFromType(float_type_num);
+ if (type == NULL) {
+ return NULL;
+ }
offset = (imag ? type->elsize : 0);
if (!PyArray_ISNBO(PyArray_DESCR(self)->byteorder)) {
- PyArray_Descr *new;
- new = PyArray_DescrNew(type);
- new->byteorder = PyArray_DESCR(self)->byteorder;
- Py_DECREF(type);
- type = new;
+ Py_SETREF(type, PyArray_DescrNew(type));
+ if (type == NULL) {
+ return NULL;
+ }
+ type->byteorder = PyArray_DESCR(self)->byteorder;
}
ret = (PyArrayObject *)PyArray_NewFromDescrAndBase(
Py_TYPE(self),
return NULL;
}
newd = PyArray_DescrNew(saved);
+ if (newd == NULL) {
+ Py_DECREF(new_name);
+ return NULL;
+ }
Py_DECREF(newd->names);
newd->names = new_name;
((PyArrayObject_fields *)self)->descr = newd;
return NULL;
}
newd = PyArray_DescrNew(saved);
+ if (newd == NULL) {
+ Py_DECREF(new_name);
+ return NULL;
+ }
Py_DECREF(newd->names);
newd->names = new_name;
((PyArrayObject_fields *)self)->descr = newd;
return NULL;
}
newd = PyArray_DescrNew(saved);
+ if (newd == NULL) {
+ Py_DECREF(new_name);
+ return NULL;
+ }
Py_DECREF(newd->names);
newd->names = new_name;
((PyArrayObject_fields *)self)->descr = newd;
return NULL;
}
newd = PyArray_DescrNew(saved);
+ if (newd == NULL) {
+ Py_DECREF(new_name);
+ return NULL;
+ }
Py_DECREF(newd->names);
newd->names = new_name;
((PyArrayObject_fields *)self)->descr = newd;
}
else {
fa->descr = PyArray_DescrNew(typecode);
+ if (fa->descr == NULL) {
+ Py_CLEAR(fa->mem_handler);
+ Py_DECREF(rawdata);
+ return NULL;
+ }
if (PyArray_DESCR(self)->byteorder == NPY_BIG) {
PyArray_DESCR(self)->byteorder = NPY_LITTLE;
}
if (op_flags & NPY_ITER_NBO) {
/* Check byte order */
if (!PyArray_ISNBO((*op_dtype)->byteorder)) {
- PyArray_Descr *nbo_dtype;
-
/* Replace with a new descr which is in native byte order */
- nbo_dtype = PyArray_DescrNewByteorder(*op_dtype, NPY_NATIVE);
- Py_DECREF(*op_dtype);
- *op_dtype = nbo_dtype;
-
+ Py_SETREF(*op_dtype,
+ PyArray_DescrNewByteorder(*op_dtype, NPY_NATIVE));
+ if (*op_dtype == NULL) {
+ return 0;
+ }
NPY_IT_DBG_PRINT("Iterator: Setting NPY_OP_ITFLAG_CAST "
"because of NPY_ITER_NBO\n");
/* Indicate that byte order or alignment needs fixing */
}
if (PyDataType_ISUNSIZED(descr)) {
PyArray_DESCR_REPLACE(descr);
+ if (descr == NULL) {
+ return NULL;
+ }
type_num = descr->type_num;
if (type_num == NPY_STRING) {
descr->elsize = PyBytes_GET_SIZE(sc);
}
((PyVoidScalarObject *)ret)->obval = destptr;
Py_SET_SIZE((PyVoidScalarObject *)ret, (int) memu);
- ((PyVoidScalarObject *)ret)->descr =
- PyArray_DescrNewFromType(NPY_VOID);
- ((PyVoidScalarObject *)ret)->descr->elsize = (int) memu;
((PyVoidScalarObject *)ret)->flags = NPY_ARRAY_BEHAVED |
NPY_ARRAY_OWNDATA;
((PyVoidScalarObject *)ret)->base = NULL;
+ ((PyVoidScalarObject *)ret)->descr =
+ PyArray_DescrNewFromType(NPY_VOID);
+ if (((PyVoidScalarObject *)ret)->descr == NULL) {
+ Py_DECREF(ret);
+ return NULL;
+ }
+ ((PyVoidScalarObject *)ret)->descr->elsize = (int) memu;
return ret;
}
/* use built-in popcount if present, else use our implementation */
#if (defined(__clang__) || defined(__GNUC__)) && NPY_BITSOF_@STYPE@ >= 32
return __builtin_popcount@c@(a);
-#elif defined(_MSC_VER) && NPY_BITSOF_@STYPE@ >= 16
+#elif defined(_MSC_VER) && NPY_BITSOF_@STYPE@ >= 16 && !defined(_M_ARM64) && !defined(_M_ARM)
/* no builtin __popcnt64 for 32 bits */
#if defined(_WIN64) || (defined(_WIN32) && NPY_BITSOF_@STYPE@ != 64)
return TO_BITS_LEN(__popcnt)(a);
* Returns -1 if an error occurred, and otherwise the reduce arrays size,
* which is the number of elements already initialized.
*/
-NPY_NO_EXPORT int
+static npy_intp
PyArray_CopyInitialReduceValues(
PyArrayObject *result, PyArrayObject *operand,
const npy_bool *axis_flags, const char *funcname,
+++ /dev/null
-"""
-Functions in this module give python-space wrappers for cython functions
-exposed in numpy/__init__.pxd, so they can be tested in test_cython.py
-"""
-cimport numpy as cnp
-cnp.import_array()
-
-
-def is_td64(obj):
- return cnp.is_timedelta64_object(obj)
-
-
-def is_dt64(obj):
- return cnp.is_datetime64_object(obj)
-
-
-def get_dt64_value(obj):
- return cnp.get_datetime64_value(obj)
-
-
-def get_td64_value(obj):
- return cnp.get_timedelta64_value(obj)
-
-
-def get_dt64_unit(obj):
- return cnp.get_datetime64_unit(obj)
-
-
-def is_integer(obj):
- return isinstance(obj, (cnp.integer, int))
--- /dev/null
+"""
+Functions in this module give python-space wrappers for cython functions
+exposed in numpy/__init__.pxd, so they can be tested in test_cython.py
+"""
+cimport numpy as cnp
+cnp.import_array()
+
+
+def is_td64(obj):
+ return cnp.is_timedelta64_object(obj)
+
+
+def is_dt64(obj):
+ return cnp.is_datetime64_object(obj)
+
+
+def get_dt64_value(obj):
+ return cnp.get_datetime64_value(obj)
+
+
+def get_td64_value(obj):
+ return cnp.get_timedelta64_value(obj)
+
+
+def get_dt64_unit(obj):
+ return cnp.get_datetime64_unit(obj)
+
+
+def is_integer(obj):
+ return isinstance(obj, (cnp.integer, int))
--- /dev/null
+"""
+Provide python-space access to the functions exposed in numpy/__init__.pxd
+for testing.
+"""
+
+import numpy as np
+from distutils.core import setup
+from Cython.Build import cythonize
+from setuptools.extension import Extension
+import os
+
+macros = [("NPY_NO_DEPRECATED_API", 0)]
+
+checks = Extension(
+ "checks",
+ sources=[os.path.join('.', "checks.pyx")],
+ include_dirs=[np.get_include()],
+ define_macros=macros,
+)
+
+extensions = [checks]
+
+setup(
+ ext_modules=cythonize(extensions)
+)
--- /dev/null
+#define Py_LIMITED_API 0x03060000
+
+#include <Python.h>
+#include <numpy/arrayobject.h>
+#include <numpy/ufuncobject.h>
+
+static PyModuleDef moduledef = {
+ .m_base = PyModuleDef_HEAD_INIT,
+ .m_name = "limited_api"
+};
+
+PyMODINIT_FUNC PyInit_limited_api(void)
+{
+ import_array();
+ import_umath();
+ return PyModule_Create(&moduledef);
+}
--- /dev/null
+"""
+Build an example package using the limited Python C API.
+"""
+
+import numpy as np
+from setuptools import setup, Extension
+import os
+
+macros = [("NPY_NO_DEPRECATED_API", 0), ("Py_LIMITED_API", "0x03060000")]
+
+limited_api = Extension(
+ "limited_api",
+ sources=[os.path.join('.', "limited_api.c")],
+ include_dirs=[np.get_include()],
+ define_macros=macros,
+)
+
+extensions = [limited_api]
+
+setup(
+ ext_modules=extensions
+)
+++ /dev/null
-"""
-Provide python-space access to the functions exposed in numpy/__init__.pxd
-for testing.
-"""
-
-import numpy as np
-from distutils.core import setup
-from Cython.Build import cythonize
-from setuptools.extension import Extension
-import os
-
-macros = [("NPY_NO_DEPRECATED_API", 0)]
-
-checks = Extension(
- "checks",
- sources=[os.path.join('.', "checks.pyx")],
- include_dirs=[np.get_include()],
- define_macros=macros,
-)
-
-extensions = [checks]
-
-setup(
- ext_modules=cythonize(extensions)
-)
# Based in part on test_cython from random.tests.test_extending
here = os.path.dirname(__file__)
- ext_dir = os.path.join(here, "examples")
+ ext_dir = os.path.join(here, "examples", "cython")
cytest = str(tmp_path / "cytest")
--- /dev/null
+import os
+import shutil
+import subprocess
+import sys
+import sysconfig
+import pytest
+
+
+@pytest.mark.xfail(
+ sysconfig.get_config_var("Py_DEBUG"),
+ reason=(
+ "Py_LIMITED_API is incompatible with Py_DEBUG, Py_TRACE_REFS, "
+ "and Py_REF_DEBUG"
+ ),
+)
+def test_limited_api(tmp_path):
+ """Test building a third-party C extension with the limited API."""
+ # Based in part on test_cython from random.tests.test_extending
+
+ here = os.path.dirname(__file__)
+ ext_dir = os.path.join(here, "examples", "limited_api")
+
+ cytest = str(tmp_path / "limited_api")
+
+ shutil.copytree(ext_dir, cytest)
+ # build the examples and "install" them into a temporary directory
+
+ install_log = str(tmp_path / "tmp_install_log.txt")
+ subprocess.check_call(
+ [
+ sys.executable,
+ "setup.py",
+ "build",
+ "install",
+ "--prefix", str(tmp_path / "installdir"),
+ "--single-version-externally-managed",
+ "--record",
+ install_log,
+ ],
+ cwd=cytest,
+ )
assert_almost_equal, assert_array_almost_equal, assert_no_warnings,
assert_allclose, HAS_REFCOUNT, suppress_warnings
)
+from numpy.testing._private.utils import requires_memory
from numpy.compat import pickle
[[0, 1, 1], [1, 1, 1]])
assert_equal(np.minimum.reduce(a, axis=()), a)
+ @requires_memory(6 * 1024**3)
+ def test_identityless_reduction_huge_array(self):
+ # Regression test for gh-20921 (copying identity incorrectly failed)
+ arr = np.zeros((2, 2**31), 'uint8')
+ arr[:, 0] = [1, 3]
+ arr[:, -1] = [4, 1]
+ res = np.maximum.reduce(arr, axis=0)
+ del arr
+ assert res[0] == 3
+ assert res[-1] == 4
+
def test_identityless_reduction_corder(self):
a = np.empty((2, 3, 4), order='C')
self.check_identityless_reduction(a)
l = ext.language or self.compiler.detect_language(ext.sources)
if l:
ext_languages.add(l)
+
# reset language attribute for choosing proper linker
+ #
+ # When we build extensions with multiple languages, we have to
+ # choose a linker. The rules here are:
+ # 1. if there is Fortran code, always prefer the Fortran linker,
+ # 2. otherwise prefer C++ over C,
+ # 3. Users can force a particular linker by using
+ # `language='c'` (or 'c++', 'f90', 'f77')
+ # in their config.add_extension() calls.
if 'c++' in ext_languages:
ext_language = 'c++'
- elif 'f90' in ext_languages:
+ else:
+ ext_language = 'c' # default
+
+ has_fortran = False
+ if 'f90' in ext_languages:
ext_language = 'f90'
+ has_fortran = True
elif 'f77' in ext_languages:
ext_language = 'f77'
- else:
- ext_language = 'c' # default
- if l and l != ext_language and ext.language:
- log.warn('resetting extension %r language from %r to %r.' %
- (ext.name, l, ext_language))
- if not ext.language:
+ has_fortran = True
+
+ if not ext.language or has_fortran:
+ if l and l != ext_language and ext.language:
+ log.warn('resetting extension %r language from %r to %r.' %
+ (ext.name, l, ext_language))
+
ext.language = ext_language
+
# global language
all_languages.update(ext_languages)
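The linker-selection rules in the comment above can be summarized in a tiny standalone helper (a hypothetical illustration, not part of numpy.distutils):

```python
def pick_linker(ext_languages):
    """Pick the linker language per the rules above: Fortran wins if
    present (f90 preferred over f77), then C++ over C, with C as the
    default."""
    if 'f90' in ext_languages:
        return 'f90'
    if 'f77' in ext_languages:
        return 'f77'
    if 'c++' in ext_languages:
        return 'c++'
    return 'c'

assert pick_linker({'c'}) == 'c'
assert pick_linker({'c', 'c++'}) == 'c++'
assert pick_linker({'c++', 'f77'}) == 'f77'
```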
log.info("building '%s' extension", ext.name)
extra_args = ext.extra_compile_args or []
- extra_cflags = ext.extra_c_compile_args or []
- extra_cxxflags = ext.extra_cxx_compile_args or []
+ extra_cflags = getattr(ext, 'extra_c_compile_args', None) or []
+ extra_cxxflags = getattr(ext, 'extra_cxx_compile_args', None) or []
macros = ext.define_macros[:]
for undef in ext.undef_macros:
'latexdocstrcbs': '\\noindent Call-back functions:',
'routnote': {hasnote: '--- #note#', l_not(hasnote): ''},
}, { # Function
- 'decl': ' #ctype# return_value;',
- 'frompyobj': [{debugcapi: ' CFUNCSMESS("cb:Getting return_value->");'},
- ' if (capi_j>capi_i)\n GETSCALARFROMPYTUPLE(capi_return,capi_i++,&return_value,#ctype#,"#ctype#_from_pyobj failed in converting return_value of call-back function #name# to C #ctype#\\n");',
- {debugcapi:
- ' fprintf(stderr,"#showvalueformat#.\\n",return_value);'}
- ],
+ 'decl': ' #ctype# return_value = 0;',
+ 'frompyobj': [
+ {debugcapi: ' CFUNCSMESS("cb:Getting return_value->");'},
+ '''\
+ if (capi_j>capi_i) {
+ GETSCALARFROMPYTUPLE(capi_return,capi_i++,&return_value,#ctype#,
+ "#ctype#_from_pyobj failed in converting return_value of"
+ " call-back function #name# to C #ctype#\\n");
+ } else {
+ fprintf(stderr,"Warning: call-back function #name# did not provide"
+ " return value (index=%d, type=#ctype#)\\n",capi_i);
+ }''',
+ {debugcapi:
+ ' fprintf(stderr,"#showvalueformat#.\\n",return_value);'}
+ ],
'need': ['#ctype#_from_pyobj', {debugcapi: 'CFUNCSMESS'}, 'GETSCALARFROMPYTUPLE'],
'return': ' return return_value;',
'_check': l_and(isfunction, l_not(isstringfunction), l_not(iscomplexfunction))
'args': '#ctype# return_value,int return_value_len',
'args_nm': 'return_value,&return_value_len',
'args_td': '#ctype# ,int',
- 'frompyobj': [{debugcapi: ' CFUNCSMESS("cb:Getting return_value->\\"");'},
- """ if (capi_j>capi_i)
- GETSTRFROMPYTUPLE(capi_return,capi_i++,return_value,return_value_len);""",
- {debugcapi:
- ' fprintf(stderr,"#showvalueformat#\\".\\n",return_value);'}
- ],
+ 'frompyobj': [
+ {debugcapi: ' CFUNCSMESS("cb:Getting return_value->\\"");'},
+ """\
+ if (capi_j>capi_i) {
+ GETSTRFROMPYTUPLE(capi_return,capi_i++,return_value,return_value_len);
+ } else {
+ fprintf(stderr,"Warning: call-back function #name# did not provide"
+ " return value (index=%d, type=#ctype#)\\n",capi_i);
+ }""",
+ {debugcapi:
+ ' fprintf(stderr,"#showvalueformat#\\".\\n",return_value);'}
+ ],
'need': ['#ctype#_from_pyobj', {debugcapi: 'CFUNCSMESS'},
'string.h', 'GETSTRFROMPYTUPLE'],
'return': 'return;',
""",
'decl': """
#ifdef F2PY_CB_RETURNCOMPLEX
- #ctype# return_value;
+ #ctype# return_value = {0, 0};
#endif
""",
- 'frompyobj': [{debugcapi: ' CFUNCSMESS("cb:Getting return_value->");'},
- """\
- if (capi_j>capi_i)
+ 'frompyobj': [
+ {debugcapi: ' CFUNCSMESS("cb:Getting return_value->");'},
+ """\
+ if (capi_j>capi_i) {
#ifdef F2PY_CB_RETURNCOMPLEX
- GETSCALARFROMPYTUPLE(capi_return,capi_i++,&return_value,#ctype#,\"#ctype#_from_pyobj failed in converting return_value of call-back function #name# to C #ctype#\\n\");
+ GETSCALARFROMPYTUPLE(capi_return,capi_i++,&return_value,#ctype#,
+ \"#ctype#_from_pyobj failed in converting return_value of call-back\"
+ \" function #name# to C #ctype#\\n\");
#else
- GETSCALARFROMPYTUPLE(capi_return,capi_i++,return_value,#ctype#,\"#ctype#_from_pyobj failed in converting return_value of call-back function #name# to C #ctype#\\n\");
+ GETSCALARFROMPYTUPLE(capi_return,capi_i++,return_value,#ctype#,
+ \"#ctype#_from_pyobj failed in converting return_value of call-back\"
+ \" function #name# to C #ctype#\\n\");
#endif
-""",
- {debugcapi: """
+ } else {
+ fprintf(stderr,
+ \"Warning: call-back function #name# did not provide\"
+ \" return value (index=%d, type=#ctype#)\\n\",capi_i);
+ }""",
+ {debugcapi: """\
#ifdef F2PY_CB_RETURNCOMPLEX
fprintf(stderr,\"#showvalueformat#.\\n\",(return_value).r,(return_value).i);
#else
fprintf(stderr,\"#showvalueformat#.\\n\",(*return_value).r,(*return_value).i);
#endif
-
"""}
- ],
+ ],
'return': """
#ifdef F2PY_CB_RETURNCOMPLEX
return return_value;
"""
cppmacros["F2PY_THREAD_LOCAL_DECL"] = """\
#ifndef F2PY_THREAD_LOCAL_DECL
-#if defined(_MSC_VER) \\
- || defined(_WIN32) || defined(_WIN64) \\
- || defined(__MINGW32__) || defined(__MINGW64__)
+#if defined(_MSC_VER)
#define F2PY_THREAD_LOCAL_DECL __declspec(thread)
+#elif defined(__MINGW32__) || defined(__MINGW64__)
+#define F2PY_THREAD_LOCAL_DECL __thread
#elif defined(__STDC_VERSION__) \\
&& (__STDC_VERSION__ >= 201112L) \\
&& !defined(__STDC_NO_THREADS__) \\
- && (!defined(__GLIBC__) || __GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ > 12))
+ && (!defined(__GLIBC__) || __GLIBC__ > 2 || (__GLIBC__ == 2 && __GLIBC_MINOR__ > 12)) \\
+ && !defined(__OpenBSD__)
/* __STDC_NO_THREADS__ was first defined in a maintenance release of glibc 2.12,
see https://lists.gnu.org/archive/html/commit-hurd/2012-07/msg00180.html,
so `!defined(__STDC_NO_THREADS__)` may give a false positive for the existence
- of `threads.h` when using an older release of glibc 2.12 */
+ of `threads.h` when using an older release of glibc 2.12
+ See gh-19437 for details on OpenBSD */
#include <threads.h>
#define F2PY_THREAD_LOCAL_DECL thread_local
#elif defined(__GNUC__) \\
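`F2PY_THREAD_LOCAL_DECL` selects a per-toolchain storage-class keyword so that a C global gets one copy per thread. The semantics the macro buys can be illustrated with Python's `threading.local` (a rough analogue for exposition only, not part of f2py):

```python
import threading

# Each thread writes to its own private copy of `tls.value`, just as a C
# global declared with F2PY_THREAD_LOCAL_DECL is per-thread storage.
tls = threading.local()
results = {}

def worker(idx):
    tls.value = idx
    results[idx] = tls.value  # only ever sees this thread's own write

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results.items()))
```

With ordinary (shared) storage the threads would race on a single `value`; with thread-local storage each thread reads back exactly what it wrote.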
import pathlib
import sys
import sysconfig
-from numpy.distutils.ccompiler import new_compiler
-from distutils.errors import CompileError
__all__ = ['build_and_import_extension', 'compile_extension_module']
>>> assert not mod.test_bytes(u'abc')
>>> assert mod.test_bytes(b'abc')
"""
+ from distutils.errors import CompileError
body = prologue + _make_methods(functions, modname)
init = """PyObject *mod = PyModule_Create(&moduledef);
def build(cfile, outputfilename, compile_extra, link_extra,
include_dirs, libraries, library_dirs):
"cd into the directory where the cfile is, use distutils to build"
+ from numpy.distutils.ccompiler import new_compiler
compiler = new_compiler(force=1, verbose=2)
compiler.customize('')
reveal_type(a[[0, 1, 2]]) # E: ndarray[Any, dtype[str_]]
reveal_type(a[...]) # E: ndarray[Any, dtype[str_]]
reveal_type(a[:]) # E: ndarray[Any, dtype[str_]]
+reveal_type(a[(...,)]) # E: ndarray[Any, dtype[str_]]
+reveal_type(a[(0,)]) # E: str_
reveal_type(a.__array__()) # E: ndarray[Any, dtype[str_]]
reveal_type(a.__array__(np.dtype(np.float64))) # E: ndarray[Any, dtype[{float64}]]
+a[0] = "a"
+a[:5] = "a"
+a[...] = "a"
+a[(...,)] = "a"
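The new stub cases above mirror runtime behavior: scalar indexing of a string array yields `np.str_`, while slice, ellipsis, and fancy indexing preserve the array type. A quick runtime check (array contents are arbitrary examples; assumes NumPy is installed):

```python
import numpy as np

a = np.array(["one", "two", "three"])

# Scalar indexing, including the single-element tuple form, yields np.str_ ...
assert isinstance(a[0], np.str_)
assert isinstance(a[(0,)], np.str_)

# ... while slices, ellipsis, and fancy indexing keep the ndarray type.
assert isinstance(a[...], np.ndarray)
assert isinstance(a[(...,)], np.ndarray)
assert isinstance(a[[0, 1]], np.ndarray)

# Broadcast string assignment, as exercised by the new stub tests.
a[0] = "x"
a[:2] = "y"
a[...] = "z"
print(a.tolist())
```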
reveal_type(vectorized_func.otypes) # E: Union[None, builtins.str]
reveal_type(vectorized_func.excluded) # E: set[Union[builtins.int, builtins.str]]
reveal_type(vectorized_func.__doc__) # E: Union[None, builtins.str]
-reveal_type(vectorized_func([1])) # E: ndarray[Any, dtype[Any]]
+reveal_type(vectorized_func([1])) # E: Any
reveal_type(np.vectorize(int)) # E: vectorize
reveal_type(np.vectorize( # E: vectorize
int, otypes="i", doc="doc", excluded=(), cache=True, signature=None
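The relaxation to `Any` reflects that `np.vectorize.__call__` can return scalars, arrays, or tuples depending on the wrapped function and its signature; a simple call still produces an ndarray at runtime (a minimal sketch, assuming NumPy is installed):

```python
import numpy as np

# The stub now types this call as Any, but a plain elementwise call
# still returns an ndarray at runtime.
vf = np.vectorize(lambda x: x + 1)
out = vf([1, 2, 3])
assert isinstance(out, np.ndarray)
print(out.tolist())
```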
#-----------------------------------
# Path to the release notes
-RELEASE_NOTES = 'doc/source/release/1.22.1-notes.rst'
+RELEASE_NOTES = 'doc/source/release/1.22.2-notes.rst'
#-------------------------------------------------------
import numpy.distutils.command.sdist
import setuptools
if int(setuptools.__version__.split('.')[0]) >= 60:
- raise RuntimeError(
- "Setuptools version is '{}', version < '60.0.0' is required. "
- "See pyproject.toml".format(setuptools.__version__))
+ # setuptools >= 60 switches to vendored distutils by default; this
+ # may break the numpy build, so make sure the stdlib version is used
+ try:
+ setuptools_use_distutils = os.environ['SETUPTOOLS_USE_DISTUTILS']
+ except KeyError:
+ os.environ['SETUPTOOLS_USE_DISTUTILS'] = "stdlib"
+ else:
+ if setuptools_use_distutils != "stdlib":
+ raise RuntimeError("setuptools versions >= '60.0.0' require "
+ "SETUPTOOLS_USE_DISTUTILS=stdlib in the environment")
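The guard above can be exercised standalone. A minimal sketch of the same logic (the function name and the injectable `environ` parameter are illustrative, not part of numpy's `setup.py`):

```python
import os

def ensure_stdlib_distutils(environ=os.environ):
    """Force the stdlib distutils when setuptools >= 60 would otherwise
    inject its vendored copy (mirrors the guard in setup.py)."""
    value = environ.get('SETUPTOOLS_USE_DISTUTILS')
    if value is None:
        # Not set: choose the stdlib version before setuptools is imported.
        environ['SETUPTOOLS_USE_DISTUTILS'] = 'stdlib'
    elif value != 'stdlib':
        raise RuntimeError("setuptools versions >= '60.0.0' require "
                           "SETUPTOOLS_USE_DISTUTILS=stdlib in the environment")
    return environ['SETUPTOOLS_USE_DISTUTILS']

env = {}
print(ensure_stdlib_distutils(env))  # unset -> forced to 'stdlib'
```

The variable must be set before `setuptools` is imported, which is why the real check sits at the top of `setup.py` rather than inside a build command.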
# Initialize cmdclass from versioneer
from numpy.distutils.core import numpy_cmdclass
}
%#endif
if (!PyArray_IsScalar(obj,Integer)) return SWIG_TypeError;
- PyArray_Descr * longDescr = PyArray_DescrNewFromType(NPY_LONG);
+ PyArray_Descr * longDescr = PyArray_DescrFromType(NPY_LONG);
PyArray_CastScalarToCtype(obj, (void*)val, longDescr);
Py_DECREF(longDescr);
return SWIG_OK;
}
%#endif
if (!PyArray_IsScalar(obj,Integer)) return SWIG_TypeError;
- PyArray_Descr * ulongDescr = PyArray_DescrNewFromType(NPY_ULONG);
+ PyArray_Descr * ulongDescr = PyArray_DescrFromType(NPY_ULONG);
PyArray_CastScalarToCtype(obj, (void*)val, ulongDescr);
Py_DECREF(ulongDescr);
return SWIG_OK;
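The numpy.i change swaps `PyArray_DescrNewFromType`, which allocates a fresh descriptor, for `PyArray_DescrFromType`, which returns a new reference to the cached builtin descriptor that suffices for `PyArray_CastScalarToCtype` here. That caching is visible from Python (a quick sketch, assuming NumPy is installed):

```python
import numpy as np

# Builtin descriptors are cached singletons: repeated lookups return the
# very same object. PyArray_DescrFromType increfs this cached descriptor,
# while PyArray_DescrNewFromType would allocate a fresh copy each call.
a = np.dtype(np.int_)
b = np.dtype(np.int_)
print(a is b)
```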