Metadata-Version: 1.2
Name: numpy
-Version: 1.20.1
+Version: 1.20.2
Summary: NumPy is the fundamental package for array computing with Python.
Home-page: https://www.numpy.org
Author: Travis E. Oliphant et al.
--- /dev/null
+
+Contributors
+============
+
+A total of 7 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Allan Haldane
+* Bas van Beek
+* Charles Harris
+* Christoph Gohlke
+* Mateusz Sokół +
+* Michael Lamparski
+* Sebastian Berg
+
+Pull requests merged
+====================
+
+A total of 20 pull requests were merged for this release.
+
+* `#18382 <https://github.com/numpy/numpy/pull/18382>`__: MAINT: Update f2py from master.
+* `#18459 <https://github.com/numpy/numpy/pull/18459>`__: BUG: ``diagflat`` could overflow on windows or 32-bit platforms
+* `#18460 <https://github.com/numpy/numpy/pull/18460>`__: BUG: Fix refcount leak in f2py ``complex_double_from_pyobj``.
+* `#18461 <https://github.com/numpy/numpy/pull/18461>`__: BUG: Fix tiny memory leaks when ``like=`` overrides are used
+* `#18462 <https://github.com/numpy/numpy/pull/18462>`__: BUG: Remove temporary change of descr/flags in VOID functions
+* `#18469 <https://github.com/numpy/numpy/pull/18469>`__: BUG: Segfault in nditer buffer dealloc for Object arrays
+* `#18485 <https://github.com/numpy/numpy/pull/18485>`__: BUG: Remove suspicious type casting
+* `#18486 <https://github.com/numpy/numpy/pull/18486>`__: BUG: remove nonsensical comparison of pointer < 0
+* `#18487 <https://github.com/numpy/numpy/pull/18487>`__: BUG: verify pointer against NULL before using it
+* `#18488 <https://github.com/numpy/numpy/pull/18488>`__: BUG: check if PyArray_malloc succeeded
+* `#18546 <https://github.com/numpy/numpy/pull/18546>`__: BUG: incorrect error fallthrough in nditer
+* `#18559 <https://github.com/numpy/numpy/pull/18559>`__: CI: Backport CI fixes from main.
+* `#18599 <https://github.com/numpy/numpy/pull/18599>`__: MAINT: Add annotations for ``dtype.__getitem__``, ``__mul__`` and...
+* `#18611 <https://github.com/numpy/numpy/pull/18611>`__: BUG: NameError in numpy.distutils.fcompiler.compaq
+* `#18612 <https://github.com/numpy/numpy/pull/18612>`__: BUG: Fixed ``where`` keyword for ``np.mean`` & ``np.var`` methods
+* `#18617 <https://github.com/numpy/numpy/pull/18617>`__: CI: Update apt package list before Python install
+* `#18636 <https://github.com/numpy/numpy/pull/18636>`__: MAINT: Ensure that re-exported sub-modules are properly annotated
+* `#18638 <https://github.com/numpy/numpy/pull/18638>`__: BUG: Fix ma coercion list-of-ma-arrays if they do not cast to...
+* `#18661 <https://github.com/numpy/numpy/pull/18661>`__: BUG: Fix small valgrind-found issues
+* `#18671 <https://github.com/numpy/numpy/pull/18671>`__: BUG: Fix small issues found with pytest-leaks
.. toctree::
:maxdepth: 3
+ 1.20.2 <release/1.20.2-notes>
1.20.1 <release/1.20.1-notes>
1.20.0 <release/1.20.0-notes>
1.19.5 <release/1.19.5-notes>
--- /dev/null
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.20.2 Release Notes
+==========================
+
+NumPy 1.20.2 is a bugfix release containing several fixes merged to the main
+branch after the NumPy 1.20.1 release.
+
+
+Contributors
+============
+
+A total of 7 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Allan Haldane
+* Bas van Beek
+* Charles Harris
+* Christoph Gohlke
+* Mateusz Sokół +
+* Michael Lamparski
+* Sebastian Berg
+
+Pull requests merged
+====================
+
+A total of 20 pull requests were merged for this release.
+
+* `#18382 <https://github.com/numpy/numpy/pull/18382>`__: MAINT: Update f2py from master.
+* `#18459 <https://github.com/numpy/numpy/pull/18459>`__: BUG: ``diagflat`` could overflow on windows or 32-bit platforms
+* `#18460 <https://github.com/numpy/numpy/pull/18460>`__: BUG: Fix refcount leak in f2py ``complex_double_from_pyobj``.
+* `#18461 <https://github.com/numpy/numpy/pull/18461>`__: BUG: Fix tiny memory leaks when ``like=`` overrides are used
+* `#18462 <https://github.com/numpy/numpy/pull/18462>`__: BUG: Remove temporary change of descr/flags in VOID functions
+* `#18469 <https://github.com/numpy/numpy/pull/18469>`__: BUG: Segfault in nditer buffer dealloc for Object arrays
+* `#18485 <https://github.com/numpy/numpy/pull/18485>`__: BUG: Remove suspicious type casting
+* `#18486 <https://github.com/numpy/numpy/pull/18486>`__: BUG: remove nonsensical comparison of pointer < 0
+* `#18487 <https://github.com/numpy/numpy/pull/18487>`__: BUG: verify pointer against NULL before using it
+* `#18488 <https://github.com/numpy/numpy/pull/18488>`__: BUG: check if PyArray_malloc succeeded
+* `#18546 <https://github.com/numpy/numpy/pull/18546>`__: BUG: incorrect error fallthrough in nditer
+* `#18559 <https://github.com/numpy/numpy/pull/18559>`__: CI: Backport CI fixes from main.
+* `#18599 <https://github.com/numpy/numpy/pull/18599>`__: MAINT: Add annotations for ``dtype.__getitem__``, ``__mul__`` and...
+* `#18611 <https://github.com/numpy/numpy/pull/18611>`__: BUG: NameError in numpy.distutils.fcompiler.compaq
+* `#18612 <https://github.com/numpy/numpy/pull/18612>`__: BUG: Fixed ``where`` keyword for ``np.mean`` & ``np.var`` methods
+* `#18617 <https://github.com/numpy/numpy/pull/18617>`__: CI: Update apt package list before Python install
+* `#18636 <https://github.com/numpy/numpy/pull/18636>`__: MAINT: Ensure that re-exported sub-modules are properly annotated
+* `#18638 <https://github.com/numpy/numpy/pull/18638>`__: BUG: Fix ma coercion list-of-ma-arrays if they do not cast to...
+* `#18661 <https://github.com/numpy/numpy/pull/18661>`__: BUG: Fix small valgrind-found issues
+* `#18671 <https://github.com/numpy/numpy/pull/18671>`__: BUG: Fix small issues found with pytest-leaks
align: bool = ...,
copy: bool = ...,
) -> dtype[void]: ...
+
+ @overload
+ def __getitem__(self: dtype[void], key: List[str]) -> dtype[void]: ...
+ @overload
+ def __getitem__(self: dtype[void], key: Union[str, int]) -> dtype[Any]: ...
+
+ # NOTE: In the future 1-based multiplications will also yield `void` dtypes
+ @overload
+ def __mul__(self, value: Literal[0]) -> None: ... # type: ignore[misc]
+ @overload
+ def __mul__(self, value: Literal[1]) -> dtype[_DTypeScalar]: ...
+ @overload
+ def __mul__(self, value: int) -> dtype[void]: ...
+
+ # NOTE: `__rmul__` seems to be broken when used in combination with
+ # literals as of mypy 0.800. Set the return-type to `Any` for now.
+ def __rmul__(self, value: int) -> Any: ...
+
def __eq__(self, other: DTypeLike) -> bool: ...
def __ne__(self, other: DTypeLike) -> bool: ...
def __gt__(self, other: DTypeLike) -> bool: ...
@property
def name(self) -> str: ...
@property
+ def names(self) -> Optional[Tuple[str, ...]]: ...
+ @property
def num(self) -> int: ...
@property
def shape(self) -> _Shape: ...
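As a rough illustration of the runtime behaviour that the new ``__getitem__``,
``__mul__`` and ``names`` annotations describe (a sketch for orientation, not
taken from the patch itself)::

    import numpy as np

    dt = np.dtype([("a", "f8"), ("b", "i4")])

    dt["a"]      # dtype('float64') -- str and int keys yield dtype[Any]
    dt[["a"]]    # a structured (void) dtype containing only field 'a'
    dt.names     # ('a', 'b'); None for dtypes without fields
    dt * 2       # a subarray (void) dtype wrapping dt with shape (2,)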
is_float16_result = False
rcount = _count_reduce_items(arr, axis, keepdims=keepdims, where=where)
- if rcount == 0 if where is True else umr_any(rcount == 0):
+ if rcount == 0 if where is True else umr_any(rcount == 0, axis=None):
warnings.warn("Mean of empty slice.", RuntimeWarning, stacklevel=2)
# Cast bool, unsigned int, and int to float64 by default
rcount = _count_reduce_items(arr, axis, keepdims=keepdims, where=where)
# Make this warning show up on top.
- if ddof >= rcount if where is True else umr_any(ddof >= rcount):
+ if ddof >= rcount if where is True else umr_any(ddof >= rcount, axis=None):
warnings.warn("Degrees of freedom <= 0 for slice", RuntimeWarning,
stacklevel=2)
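A quick illustration of the ``where=`` reductions these checks guard (a
sketch, not part of the patch)::

    import numpy as np

    a = np.arange(8.0).reshape(2, 4)
    mask = np.array([False, True, True, False])

    np.mean(a, axis=1, where=mask)   # array([1.5, 5.5])
    np.var(a, axis=1, where=mask)    # array([0.25, 0.25])

    # An all-False selection leaves rcount == 0 along the reduced axis and
    # triggers the "Mean of empty slice." RuntimeWarning; the result is nan.
    np.mean(a, axis=1, where=np.zeros(4, dtype=bool))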
NULL, DType, &flags, item_DType) < 0) {
Py_DECREF(iter);
Py_DECREF(elem);
+ Py_XDECREF(*out_descr);
Py_XDECREF(item_DType);
return -1;
}
return NULL;
}
- /* Remove `like=` kwarg, which is NumPy-exclusive and thus not present
+ /*
+ * Remove `like=` kwarg, which is NumPy-exclusive and thus not present
* in downstream libraries. If `like=` is specified but doesn't
* implement `__array_function__`, raise a `TypeError`.
*/
if (kwargs != NULL && PyDict_Contains(kwargs, npy_ma_str_like)) {
PyObject *like_arg = PyDict_GetItem(kwargs, npy_ma_str_like);
- if (like_arg && !get_array_function(like_arg)) {
- return PyErr_Format(PyExc_TypeError,
- "The `like` argument must be an array-like that implements "
- "the `__array_function__` protocol.");
+ if (like_arg != NULL) {
+ PyObject *tmp_has_override = get_array_function(like_arg);
+ if (tmp_has_override == NULL) {
+ return PyErr_Format(PyExc_TypeError,
+ "The `like` argument must be an array-like that "
+ "implements the `__array_function__` protocol.");
+ }
+ Py_DECREF(tmp_has_override);
+ PyDict_DelItem(kwargs, npy_ma_str_like);
}
- PyDict_DelItem(kwargs, npy_ma_str_like);
}
PyObject *res = array_implement_array_function_internal(
return Py_NotImplemented;
}
- PyObject *like_arg = PyDict_GetItem(kwargs, npy_ma_str_like);
+ PyObject *like_arg = PyDict_GetItemWithError(kwargs, npy_ma_str_like);
if (like_arg == NULL) {
return NULL;
}
- else if (!get_array_function(like_arg)) {
- return PyErr_Format(PyExc_TypeError,
- "The `like` argument must be an array-like that implements "
- "the `__array_function__` protocol.");
+ else {
+ PyObject *tmp_has_override = get_array_function(like_arg);
+ if (tmp_has_override == NULL) {
+ return PyErr_Format(PyExc_TypeError,
+ "The `like` argument must be an array-like that "
+ "implements the `__array_function__` protocol.");
+ }
+ Py_DECREF(tmp_has_override);
}
PyObject *relevant_args = PyTuple_Pack(1, like_arg);
PyDict_DelItem(kwargs, npy_ma_str_like);
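A short sketch of the behaviour enforced here (``MyArray`` is a hypothetical
array-like used only for illustration): whatever is passed as ``like=`` must
implement ``__array_function__``, otherwise a ``TypeError`` is raised::

    import numpy as np

    class MyArray:
        # hypothetical array-like; creation functions called with
        # like=MyArray() dispatch to this method
        def __array_function__(self, func, types, args, kwargs):
            return "dispatched"

    np.asarray([1, 2, 3], like=MyArray())   # -> "dispatched"
    np.asarray([1, 2, 3], like=[1, 2, 3])   # TypeError: `like` must implement
                                            # the __array_function__ protocol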
#include "npy_cblas.h"
#include "npy_buffer.h"
+
+/*
+ * Define a stack allocated dummy array with only the minimum information set:
+ * 1. The descr, the main field of interest here.
+ * 2. The flags, which are needed for alignment.
+ * 3. The type is set to NULL and the base is the original array; if this
+ *    is used within a subarray getitem to create a new view, the base
+ *    must be walked until the type is not NULL.
+ *
+ * Deallocating such a dummy incorrectly should create errors in debug mode,
+ * since its base would be incorrectly decref'd as well.
+ * This is especially important for nonzero and copyswap, which may run with
+ * the GIL released.
+ */
+static NPY_INLINE PyArrayObject_fields
+get_dummy_stack_array(PyArrayObject *orig)
+{
+ PyArrayObject_fields new_fields;
+ new_fields.flags = PyArray_FLAGS(orig);
+ /* Set to NULL so the dummy object can be distinguished from the real one */
+ Py_TYPE(&new_fields) = NULL;
+ new_fields.base = (PyObject *)orig;
+ return new_fields;
+}
+
+
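The user-visible problem this helper addresses (gh-15387) is that getitem,
nonzero and copyswap used to temporarily swap the descr and flags of the
shared array object, which is a data race under concurrent access. A rough
sketch of such a workload, mirroring the regression tests added later in this
patch::

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    dt = np.dtype([("", [("", "f8")])] * 2)
    arr = np.random.uniform(size=(5000, 4)).view(dt)[:, 0]

    def worker(a):
        # nonzero on a structured dtype previously mutated a's descr/flags
        a.nonzero()

    with ThreadPoolExecutor(max_workers=8) as tpe:
        for f in [tpe.submit(worker, arr) for _ in range(10)]:
            f.result()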
/* check for sequences, but ignore the types numpy considers scalars */
static NPY_INLINE npy_bool
PySequence_NoString_Check(PyObject *op) {
return PyErr_Occurred() ? -1 : 0;
}
+
/* VOID */
static PyObject *
{
PyArrayObject *ap = vap;
char *ip = input;
- PyArray_Descr* descr;
+ PyArray_Descr* descr = PyArray_DESCR(vap);
- descr = PyArray_DESCR(ap);
if (PyDataType_HASFIELDS(descr)) {
PyObject *key;
PyObject *names;
int i, n;
PyObject *ret;
PyObject *tup;
- int savedflags;
+ PyArrayObject_fields dummy_fields = get_dummy_stack_array(ap);
+ PyArrayObject *dummy_arr = (PyArrayObject *)&dummy_fields;
/* get the names from the fields dictionary*/
names = descr->names;
n = PyTuple_GET_SIZE(names);
ret = PyTuple_New(n);
- savedflags = PyArray_FLAGS(ap);
for (i = 0; i < n; i++) {
npy_intp offset;
PyArray_Descr *new;
tup = PyDict_GetItem(descr->fields, key);
if (_unpack_field(tup, &new, &offset) < 0) {
Py_DECREF(ret);
- ((PyArrayObject_fields *)ap)->descr = descr;
return NULL;
}
- /*
- * TODO: temporarily modifying the array like this
- * is bad coding style, should be changed.
- */
- ((PyArrayObject_fields *)ap)->descr = new;
+ dummy_fields.descr = new;
/* update alignment based on offset */
if ((new->alignment > 1)
&& ((((npy_intp)(ip+offset)) % new->alignment) != 0)) {
- PyArray_CLEARFLAGS(ap, NPY_ARRAY_ALIGNED);
+ PyArray_CLEARFLAGS(dummy_arr, NPY_ARRAY_ALIGNED);
}
else {
- PyArray_ENABLEFLAGS(ap, NPY_ARRAY_ALIGNED);
+ PyArray_ENABLEFLAGS(dummy_arr, NPY_ARRAY_ALIGNED);
}
- PyTuple_SET_ITEM(ret, i, PyArray_GETITEM(ap, ip+offset));
- ((PyArrayObject_fields *)ap)->flags = savedflags;
+ PyTuple_SET_ITEM(ret, i, PyArray_GETITEM(dummy_arr, ip+offset));
}
- ((PyArrayObject_fields *)ap)->descr = descr;
return ret;
}
return NULL;
}
Py_INCREF(descr->subarray->base);
+
+ /*
+ * NOTE: There is the possibility of recursive calls from the above
+ * field branch. These calls use a dummy arr for thread
+ * (and general) safety. However, we must set the base array,
+ * so if such a dummy array was passed (its type is NULL),
+     *       we have to walk its base until the initial array is found.
+ *
+ * TODO: This should be fixed, the next "generation" of GETITEM will
+ * probably need to pass in the original array (in addition
+ * to the dtype as a method). Alternatively, VOID dtypes
+ * could have special handling.
+ */
+ PyObject *base = (PyObject *)ap;
+ while (Py_TYPE(base) == NULL) {
+ base = PyArray_BASE((PyArrayObject *)base);
+ }
ret = (PyArrayObject *)PyArray_NewFromDescrAndBase(
&PyArray_Type, descr->subarray->base,
shape.len, shape.ptr, NULL, ip,
PyArray_FLAGS(ap) & ~NPY_ARRAY_F_CONTIGUOUS,
- NULL, (PyObject *)ap);
+ NULL, base);
npy_free_cache_dim_obj(shape);
return (PyObject *)ret;
}
* individual fields of a numpy structure, in VOID_setitem. Compare to inner
* loops in VOID_getitem and VOID_nonzero.
*
- * WARNING: Clobbers arr's dtype and alignment flag.
+ * WARNING: Clobbers arr's dtype and alignment flag, should not be used
+ * on the original array!
*/
NPY_NO_EXPORT int
_setup_field(int i, PyArray_Descr *descr, PyArrayObject *arr,
_copy_and_return_void_setitem(PyArray_Descr *dstdescr, char *dstdata,
PyArray_Descr *srcdescr, char *srcdata){
PyArrayObject_fields dummy_struct;
- PyArrayObject *dummy = (PyArrayObject *)&dummy_struct;
+ PyArrayObject *dummy_arr = (PyArrayObject *)&dummy_struct;
npy_int names_size = PyTuple_GET_SIZE(dstdescr->names);
npy_intp offset;
npy_int i;
if (PyArray_EquivTypes(srcdescr, dstdescr)) {
for (i = 0; i < names_size; i++) {
/* neither line can ever fail, in principle */
- if (_setup_field(i, dstdescr, dummy, &offset, dstdata)) {
+ if (_setup_field(i, dstdescr, dummy_arr, &offset, dstdata)) {
return -1;
}
- PyArray_DESCR(dummy)->f->copyswap(dstdata + offset,
- srcdata + offset, 0, dummy);
+ PyArray_DESCR(dummy_arr)->f->copyswap(dstdata + offset,
+ srcdata + offset, 0, dummy_arr);
}
return 0;
}
{
char *ip = input;
PyArrayObject *ap = vap;
- PyArray_Descr *descr;
- int flags;
- int itemsize=PyArray_DESCR(ap)->elsize;
+ int itemsize = PyArray_DESCR(ap)->elsize;
int res;
+ PyArray_Descr *descr = PyArray_DESCR(ap);
- descr = PyArray_DESCR(ap);
- flags = PyArray_FLAGS(ap);
if (PyDataType_HASFIELDS(descr)) {
PyObject *errmsg;
npy_int i;
return -1;
}
+ PyArrayObject_fields dummy_fields = get_dummy_stack_array(ap);
+ PyArrayObject *dummy_arr = (PyArrayObject *)&dummy_fields;
+
for (i = 0; i < names_size; i++) {
PyObject *item;
- /* temporarily make ap have only this field */
- if (_setup_field(i, descr, ap, &offset, ip) == -1) {
+ if (_setup_field(i, descr, dummy_arr, &offset, ip) == -1) {
failed = 1;
break;
}
break;
}
/* use setitem to set this field */
- if (PyArray_SETITEM(ap, ip + offset, item) < 0) {
+ if (PyArray_SETITEM(dummy_arr, ip + offset, item) < 0) {
failed = 1;
break;
}
/* Otherwise must be non-void scalar. Try to assign to each field */
npy_intp names_size = PyTuple_GET_SIZE(descr->names);
+ PyArrayObject_fields dummy_fields = get_dummy_stack_array(ap);
+ PyArrayObject *dummy_arr = (PyArrayObject *)&dummy_fields;
+
for (i = 0; i < names_size; i++) {
/* temporarily make ap have only this field */
- if (_setup_field(i, descr, ap, &offset, ip) == -1) {
+ if (_setup_field(i, descr, dummy_arr, &offset, ip) == -1) {
failed = 1;
break;
}
/* use setitem to set this field */
- if (PyArray_SETITEM(ap, ip + offset, op) < 0) {
+ if (PyArray_SETITEM(dummy_arr, ip + offset, op) < 0) {
failed = 1;
break;
}
}
}
- /* reset clobbered attributes */
- ((PyArrayObject_fields *)(ap))->descr = descr;
- ((PyArrayObject_fields *)(ap))->flags = flags;
-
if (failed) {
return -1;
}
else if (PyDataType_HASSUBARRAY(descr)) {
/* copy into an array of the same basic type */
PyArray_Dims shape = {NULL, -1};
- PyArrayObject *ret;
if (!(PyArray_IntpConverter(descr->subarray->shape, &shape))) {
npy_free_cache_dim_obj(shape);
PyErr_SetString(PyExc_ValueError,
return -1;
}
Py_INCREF(descr->subarray->base);
- ret = (PyArrayObject *)PyArray_NewFromDescrAndBase(
+ /*
+     * Note that we set no base object here, so as not to rely on the input
+     * being a valid object for base setting. `ret` nevertheless does not
+     * own its data; this is generally not good, but it is localized here.
+ */
+ PyArrayObject *ret = (PyArrayObject *)PyArray_NewFromDescrAndBase(
&PyArray_Type, descr->subarray->base,
shape.len, shape.ptr, NULL, ip,
- PyArray_FLAGS(ap), NULL, (PyObject *)ap);
+ PyArray_FLAGS(ap), NULL, NULL);
npy_free_cache_dim_obj(shape);
if (!ret) {
return -1;
return;
}
+
/* */
static void
VOID_copyswapn (char *dst, npy_intp dstride, char *src, npy_intp sstride,
if (PyArray_HASFIELDS(arr)) {
PyObject *key, *value;
-
Py_ssize_t pos = 0;
+ PyArrayObject_fields dummy_fields = get_dummy_stack_array(arr);
+ PyArrayObject *dummy_arr = (PyArrayObject *)&dummy_fields;
+
while (PyDict_Next(descr->fields, &pos, &key, &value)) {
npy_intp offset;
- PyArray_Descr * new;
+ PyArray_Descr *new;
if (NPY_TITLE_KEY(key, value)) {
continue;
}
if (_unpack_field(value, &new, &offset) < 0) {
- ((PyArrayObject_fields *)arr)->descr = descr;
return;
}
- /*
- * TODO: temporarily modifying the array like this
- * is bad coding style, should be changed.
- */
- ((PyArrayObject_fields *)arr)->descr = new;
+
+ dummy_fields.descr = new;
new->f->copyswapn(dst+offset, dstride,
(src != NULL ? src+offset : NULL),
- sstride, n, swap, arr);
+ sstride, n, swap, dummy_arr);
}
- ((PyArrayObject_fields *)arr)->descr = descr;
return;
}
if (PyDataType_HASSUBARRAY(descr)) {
}
new = descr->subarray->base;
- /*
- * TODO: temporarily modifying the array like this
- * is bad coding style, should be changed.
- */
- ((PyArrayObject_fields *)arr)->descr = new;
dstptr = dst;
srcptr = src;
subitemsize = new->elsize;
/* There cannot be any elements, so return */
return;
}
+
+ PyArrayObject_fields dummy_fields = get_dummy_stack_array(arr);
+ PyArrayObject *dummy_arr = (PyArrayObject *)&dummy_fields;
+ ((PyArrayObject_fields *)dummy_arr)->descr = new;
+
num = descr->elsize / subitemsize;
for (i = 0; i < n; i++) {
new->f->copyswapn(dstptr, subitemsize, srcptr,
- subitemsize, num, swap, arr);
+ subitemsize, num, swap, dummy_arr);
dstptr += dstride;
if (srcptr) {
srcptr += sstride;
}
}
- ((PyArrayObject_fields *)arr)->descr = descr;
return;
}
/* Must be a naive Void type (e.g. a "V8") so simple copy is sufficient. */
PyObject *key, *value;
Py_ssize_t pos = 0;
+ PyArrayObject_fields dummy_fields = get_dummy_stack_array(arr);
+ PyArrayObject *dummy_arr = (PyArrayObject *)&dummy_fields;
+
while (PyDict_Next(descr->fields, &pos, &key, &value)) {
npy_intp offset;
+
PyArray_Descr * new;
if (NPY_TITLE_KEY(key, value)) {
continue;
}
if (_unpack_field(value, &new, &offset) < 0) {
- ((PyArrayObject_fields *)arr)->descr = descr;
return;
}
- /*
- * TODO: temporarily modifying the array like this
- * is bad coding style, should be changed.
- */
- ((PyArrayObject_fields *)arr)->descr = new;
+ dummy_fields.descr = new;
new->f->copyswap(dst+offset,
(src != NULL ? src+offset : NULL),
- swap, arr);
+ swap, dummy_arr);
}
- ((PyArrayObject_fields *)arr)->descr = descr;
return;
}
if (PyDataType_HASSUBARRAY(descr)) {
}
new = descr->subarray->base;
- /*
- * TODO: temporarily modifying the array like this
- * is bad coding style, should be changed.
- */
- ((PyArrayObject_fields *)arr)->descr = new;
subitemsize = new->elsize;
if (subitemsize == 0) {
/* There cannot be any elements, so return */
return;
}
+
+ PyArrayObject_fields dummy_fields = get_dummy_stack_array(arr);
+ PyArrayObject *dummy_arr = (PyArrayObject *)&dummy_fields;
+ dummy_fields.descr = new;
+
num = descr->elsize / subitemsize;
new->f->copyswapn(dst, subitemsize, src,
- subitemsize, num, swap, arr);
- ((PyArrayObject_fields *)arr)->descr = descr;
+ subitemsize, num, swap, dummy_arr);
return;
}
/* Must be a naive Void type (e.g. a "V8") so simple copy is sufficient. */
if (PyArray_HASFIELDS(ap)) {
PyArray_Descr *descr;
PyObject *key, *value;
- int savedflags;
Py_ssize_t pos = 0;
+ PyArrayObject_fields dummy_fields = get_dummy_stack_array(ap);
+ PyArrayObject *dummy_arr = (PyArrayObject *)&dummy_fields;
descr = PyArray_DESCR(ap);
- savedflags = PyArray_FLAGS(ap);
while (PyDict_Next(descr->fields, &pos, &key, &value)) {
PyArray_Descr * new;
npy_intp offset;
PyErr_Clear();
continue;
}
- /*
- * TODO: temporarily modifying the array like this
- * is bad coding style, should be changed.
- */
- ((PyArrayObject_fields *)ap)->descr = new;
- ((PyArrayObject_fields *)ap)->flags = savedflags;
+
+ dummy_fields.descr = new;
if ((new->alignment > 1) && !__ALIGNED(ip + offset,
new->alignment)) {
-                PyArray_CLEARFLAGS(ap, NPY_ARRAY_ALIGNED);
+                PyArray_CLEARFLAGS(dummy_arr, NPY_ARRAY_ALIGNED);
            }
            else {
-                PyArray_ENABLEFLAGS(ap, NPY_ARRAY_ALIGNED);
+                PyArray_ENABLEFLAGS(dummy_arr, NPY_ARRAY_ALIGNED);
            }
- if (new->f->nonzero(ip+offset, ap)) {
+ if (new->f->nonzero(ip+offset, dummy_arr)) {
nonz = NPY_TRUE;
break;
}
}
- ((PyArrayObject_fields *)ap)->descr = descr;
- ((PyArrayObject_fields *)ap)->flags = savedflags;
return nonz;
}
len = PyArray_DESCR(ap)->elsize;
*/
_buffer_info_t *info = _buffer_get_info(&scalar->_buffer_info, self, flags);
if (info == NULL) {
+ Py_DECREF(self);
return -1;
}
view->format = info->format;
"The dtype `%R` is not a valid dtype for concatenation "
"since it is a subarray dtype (the subarray dimensions "
"would be added as array dimensions).", result);
- Py_DECREF(result);
- return NULL;
+ Py_SETREF(result, NULL);
}
goto finish;
}
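For context, the rejected case looks roughly like this (a sketch, not part of
the patch); a subarray dtype passed via ``dtype=`` would add its subarray
dimensions to the result, so it is refused::

    import numpy as np

    sub_dt = np.dtype(("f8", (2,)))   # a subarray dtype
    np.concatenate([np.zeros(3), np.ones(3)], dtype=sub_dt)
    # TypeError: ... not a valid dtype for concatenation since it is a
    # subarray dtype ...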
else {
PyArrayObject *view;
view = (PyArrayObject *)array_item_asarray(self, i);
- if (view < 0) {
+ if (view == NULL) {
goto fail;
}
if (PyArray_AssignFromCache_Recursive(view, ndim, cache) < 0) {
/* Get an ASCII string data type, adapted to match the UNICODE one */
str_dtype = PyArray_DescrNewFromType(NPY_STRING);
- str_dtype->elsize = dst_dtype->elsize / 4;
if (str_dtype == NULL) {
return NPY_FAIL;
}
+ str_dtype->elsize = dst_dtype->elsize / 4;
/* Get the copy/swap operation to dst */
if (PyArray_GetDTypeCopySwapFn(aligned,
}
if (PyBytes_Check(obj)) {
PyArray_Descr *descr = PyArray_DescrNewFromType(NPY_VOID);
- Py_ssize_t itemsize = (int)PyBytes_Size(obj);
+ Py_ssize_t itemsize = PyBytes_Size(obj);
if (itemsize > NPY_MAX_INT) {
            PyErr_SetString(PyExc_TypeError,
                    "byte-like too large to store inside array.");
}
- descr->elsize = itemsize;
+ descr->elsize = (int)itemsize;
return descr;
}
PyErr_Format(PyExc_TypeError,
char **dataptr;
npy_intp *stride;
npy_intp *countptr;
+ int needs_api;
NPY_BEGIN_THREADS_DEF;
iternext = NpyIter_GetIterNext(iter, NULL);
dataptr = NpyIter_GetDataPtrArray(iter);
stride = NpyIter_GetInnerStrideArray(iter);
countptr = NpyIter_GetInnerLoopSizePtr(iter);
+ needs_api = NpyIter_IterationNeedsAPI(iter);
NPY_BEGIN_THREADS_NDITER(iter);
NPY_EINSUM_DBG_PRINT("Einsum loop\n");
do {
sop(nop, dataptr, stride, *countptr);
- } while(iternext(iter));
+ } while (!(needs_api && PyErr_Occurred()) && iternext(iter));
NPY_END_THREADS;
/* If the API was needed, it may have thrown an error */
while (*curtype != NPY_NOTYPE) {
if (*curtype++ == totype) {
+ Py_DECREF(from);
return 1;
}
}
}
+ Py_DECREF(from);
return 0;
}
array_function_result = array_implement_c_array_function_creation(
"empty", args, kwds);
if (array_function_result != Py_NotImplemented) {
+ Py_XDECREF(typecode);
+ npy_free_cache_dim_obj(shape);
return array_function_result;
}
array_function_result = array_implement_c_array_function_creation(
"zeros", args, kwds);
if (array_function_result != Py_NotImplemented) {
+ Py_XDECREF(typecode);
+ npy_free_cache_dim_obj(shape);
return array_function_result;
}
array_function_result = array_implement_c_array_function_creation(
"fromstring", args, keywds);
if (array_function_result != Py_NotImplemented) {
+ Py_XDECREF(descr);
return array_function_result;
}
array_function_result = array_implement_c_array_function_creation(
"fromfile", args, keywds);
if (array_function_result != Py_NotImplemented) {
+ Py_XDECREF(type);
return array_function_result;
}
file = NpyPath_PathlikeToFspath(file);
if (file == NULL) {
+ Py_XDECREF(type);
return NULL;
}
array_function_result = array_implement_c_array_function_creation(
"frombuffer", args, keywds);
if (array_function_result != Py_NotImplemented) {
+ Py_XDECREF(type);
return array_function_result;
}
/* Cleanup any buffers with references */
char **buffers = NBF_BUFFERS(bufferdata);
PyArray_Descr **dtypes = NIT_DTYPES(iter);
+ npyiter_opitflags *op_itflags = NIT_OPITFLAGS(iter);
for (int iop = 0; iop < nop; ++iop, ++buffers) {
/*
* We may want to find a better way to do this, on the other hand,
* a well defined state (either NULL or owning the reference).
* Only we implement cleanup
*/
- if (!PyDataType_REFCHK(dtypes[iop])) {
+ if (!PyDataType_REFCHK(dtypes[iop]) ||
+ !(op_itflags[iop]&NPY_OP_ITFLAG_USINGBUFFER)) {
continue;
}
if (*buffers == 0) {
if (item == NULL || PyDict_SetItemString(dict, "@str@", item) < 0) {
goto err;
}
+ Py_DECREF(item);
/**end repeat**/
item = PyList_New(0);
if (item == NULL || PyDict_SetItemString(dict, "all", item) < 0) {
goto err;
}
NPY_CPU_DISPATCH_CALL_ALL(_umath_tests_dispatch_attach, (item));
+ Py_SETREF(item, NULL);
if (PyErr_Occurred()) {
goto err;
}
PyObject *item = PyUnicode_FromString(NPY_TOSTRING(NPY_CPU_DISPATCH_CURFX(func)));
if (item) {
PyList_Append(list, item);
+ Py_DECREF(item);
}
}
char **dataptr;
npy_intp *stride;
npy_intp *count_ptr;
+ int needs_api;
PyArrayObject **op_it;
npy_uint32 iter_flags;
dataptr = NpyIter_GetDataPtrArray(iter);
stride = NpyIter_GetInnerStrideArray(iter);
count_ptr = NpyIter_GetInnerLoopSizePtr(iter);
+ needs_api = NpyIter_IterationNeedsAPI(iter);
NPY_BEGIN_THREADS_NDITER(iter);
do {
NPY_UF_DBG_PRINT1("iterator loop count %d\n", (int)*count_ptr);
innerloop(dataptr, count_ptr, stride, innerloopdata);
- } while (iternext(iter));
+ } while (!(needs_api && PyErr_Occurred()) && iternext(iter));
NPY_END_THREADS;
}
dataptr = NpyIter_GetDataPtrArray(iter);
strides = NpyIter_GetInnerStrideArray(iter);
countptr = NpyIter_GetInnerLoopSizePtr(iter);
+ needs_api = NpyIter_IterationNeedsAPI(iter);
NPY_BEGIN_THREADS_NDITER(iter);
innerloop(dataptr, strides,
dataptr[nop], strides[nop],
*countptr, innerloopdata);
- } while (iternext(iter));
+ } while (!(needs_api && PyErr_Occurred()) && iternext(iter));
NPY_END_THREADS;
}
dataptr = NpyIter_GetDataPtrArray(iter);
count_ptr = NpyIter_GetInnerLoopSizePtr(iter);
+ needs_api = NpyIter_IterationNeedsAPI(iter);
if (!needs_api && !NpyIter_IterationNeedsAPI(iter)) {
NPY_BEGIN_THREADS_THRESHOLDED(total_problem_size);
do {
inner_dimensions[0] = *count_ptr;
innerloop(dataptr, inner_dimensions, inner_strides, innerloopdata);
- } while (iternext(iter));
+ } while (!(needs_api && PyErr_Occurred()) && iternext(iter));
if (!needs_api && !NpyIter_IterationNeedsAPI(iter)) {
NPY_END_THREADS;
innerloop(dataptrs_copy, &count,
strides_copy, innerloopdata);
+ if (needs_api && PyErr_Occurred()) {
+ goto finish_loop;
+ }
+
/* Jump to the faster loop when skipping is done */
if (skip_first_count == 0) {
if (iternext(iter)) {
}
} while (iternext(iter));
}
+
+ if (needs_api && PyErr_Occurred()) {
+ goto finish_loop;
+ }
+
do {
/* Turn the two items into three for the inner loop */
dataptrs_copy[0] = dataptrs[0];
n = 1;
}
}
- } while (iternext(iter));
+ } while (!(needs_api && PyErr_Occurred()) && iternext(iter));
finish_loop:
NPY_END_THREADS;
goto fail;
}
dataptr = NpyIter_GetDataPtrArray(iter);
+ needs_api = NpyIter_IterationNeedsAPI(iter);
/* Execute the loop with just the outer iterator */
innerloop(dataptr_copy, &count_m1,
stride_copy, innerloopdata);
}
- } while (iternext(iter));
+ } while (!(needs_api && PyErr_Occurred()) && iternext(iter));
NPY_END_THREADS;
}
npy_intp stride0_ind = PyArray_STRIDE(op[0], axis);
int itemsize = op_dtypes[0]->elsize;
+ int needs_api = NpyIter_IterationNeedsAPI(iter);
/* Get the variables needed for the loop */
iternext = NpyIter_GetIterNext(iter, NULL);
stride_copy, innerloopdata);
}
}
- } while (iternext(iter));
+ } while (!(needs_api && PyErr_Occurred()) && iternext(iter));
NPY_END_THREADS;
}
if (cmp == 0 && current != NULL && current->arg_dtypes == NULL) {
current->arg_dtypes = PyArray_malloc(ufunc->nargs *
sizeof(PyArray_Descr*));
- if (arg_dtypes != NULL) {
+ if (current->arg_dtypes == NULL) {
+ PyErr_NoMemory();
+ result = -1;
+ goto done;
+ }
+ else if (arg_dtypes != NULL) {
for (i = 0; i < ufunc->nargs; i++) {
current->arg_dtypes[i] = arg_dtypes[i];
Py_INCREF(current->arg_dtypes[i]);
/* DEPRECATED 2020-05-13, NumPy 1.20 */
if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
matrix_deprecation_msg, ufunc->name, "first") < 0) {
+ Py_DECREF(tmp);
return NULL;
}
ap1 = (PyArrayObject *) PyArray_FromObject(tmp, NPY_NOTYPE, 0, 0);
/* DEPRECATED 2020-05-13, NumPy 1.20 */
if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
matrix_deprecation_msg, ufunc->name, "second") < 0) {
+ Py_DECREF(tmp);
Py_DECREF(ap1);
return NULL;
}
"maximum supported dimension for an ndarray is %d, but "
"`%s.outer()` result would have %d.",
NPY_MAXDIMS, ufunc->name, newdims.len);
- return NPY_FAIL;
+ goto fail;
}
if (newdims.ptr == NULL) {
goto fail;
with pytest.raises(IndexError):
arr[(index,) * num] = 1.
+ def test_structured_advanced_indexing(self):
+ # Test that copyswap(n) used by integer array indexing is threadsafe
+        # for structured datatypes, see gh-15387. The failure is timing
+        # dependent, so this test may pass even when the bug is present.
+ from concurrent.futures import ThreadPoolExecutor
+
+ # Create a deeply nested dtype to make a failure more likely:
+ dt = np.dtype([("", "f8")])
+ dt = np.dtype([("", dt)] * 2)
+ dt = np.dtype([("", dt)] * 2)
+ # The array should be large enough to likely run into threading issues
+ arr = np.random.uniform(size=(6000, 8)).view(dt)[:, 0]
+
+ rng = np.random.default_rng()
+ def func(arr):
+ indx = rng.integers(0, len(arr), size=6000, dtype=np.intp)
+ arr[indx]
+
+ tpe = ThreadPoolExecutor(max_workers=8)
+ futures = [tpe.submit(func, arr) for _ in range(10)]
+ for f in futures:
+ f.result()
+
+ assert arr.dtype is dt
+
class TestFieldIndexing:
def test_scalar_return_type(self):
np.array(_res))
assert_allclose(np.mean(a, axis=_ax, where=_wh),
np.array(_res))
+
+ a3d = np.arange(16).reshape((2, 2, 4))
+ _wh_partial = np.array([False, True, True, False])
+ _res = [[1.5, 5.5], [9.5, 13.5]]
+ assert_allclose(a3d.mean(axis=2, where=_wh_partial),
+ np.array(_res))
+ assert_allclose(np.mean(a3d, axis=2, where=_wh_partial),
+ np.array(_res))
+
with pytest.warns(RuntimeWarning) as w:
assert_allclose(a.mean(axis=1, where=wh_partial),
np.array([np.nan, 5.5, 9.5, np.nan]))
np.array(_res))
assert_allclose(np.var(a, axis=_ax, where=_wh),
np.array(_res))
+
+ a3d = np.arange(16).reshape((2, 2, 4))
+ _wh_partial = np.array([False, True, True, False])
+ _res = [[0.25, 0.25], [0.25, 0.25]]
+ assert_allclose(a3d.var(axis=2, where=_wh_partial),
+ np.array(_res))
+ assert_allclose(np.var(a3d, axis=2, where=_wh_partial),
+ np.array(_res))
+
assert_allclose(np.var(a, axis=1, where=wh_full),
np.var(a[wh_full].reshape((5, 3)), axis=1))
assert_allclose(np.var(a, axis=0, where=wh_partial),
assert_allclose(a.std(axis=_ax, where=_wh), _res)
assert_allclose(np.std(a, axis=_ax, where=_wh), _res)
+ a3d = np.arange(16).reshape((2, 2, 4))
+ _wh_partial = np.array([False, True, True, False])
+ _res = [[0.5, 0.5], [0.5, 0.5]]
+ assert_allclose(a3d.std(axis=2, where=_wh_partial),
+ np.array(_res))
+ assert_allclose(np.std(a3d, axis=2, where=_wh_partial),
+ np.array(_res))
+
assert_allclose(a.std(axis=1, where=whf),
np.std(a[whf].reshape((5,3)), axis=1))
assert_allclose(np.std(a, axis=1, where=whf),
memoryview(arr)
def test_max_dims(self):
- a = np.empty((1,) * 32)
+ a = np.ones((1,) * 32)
self._check_roundtrip(a)
@pytest.mark.slow
assert_equal(vals['c'], [[(0.5)]*3]*2)
assert_equal(vals['d'], 0.5)
+def test_object_iter_cleanup():
+ # see gh-18450
+    # Object arrays can raise a Python exception in ufunc inner loops using
+    # nditer, which should cause iteration to stop and clean up. There were
+    # bugs in the nditer cleanup when decref'ing object arrays.
+ # This test would trigger valgrind "uninitialized read" before the bugfix.
+ assert_raises(TypeError, lambda: np.zeros((17000, 2), dtype='f4') * None)
+
+ # this more explicit code also triggers the invalid access
+ arr = np.arange(np.BUFSIZE * 10).reshape(10, -1).astype(str)
+ oarr = arr.astype(object)
+ oarr[:, -1] = None
+ assert_raises(TypeError, lambda: np.add(oarr[:, ::-1], arr[:, ::-1]))
+
+ # followup: this tests for a bug introduced in the first pass of gh-18450,
+ # caused by an incorrect fallthrough of the TypeError
+ class T:
+ def __bool__(self):
+ raise TypeError("Ambiguous")
+ assert_raises(TypeError, np.logical_or.reduce,
+ np.array([T(), T()], dtype='O'))
def test_iter_too_large():
# The total size of the iterator must not exceed the maximum intp due
a = np.array([[ThrowsAfter(15)]]*10)
assert_raises(ValueError, np.nonzero, a)
+ def test_structured_threadsafety(self):
+ # Nonzero (and some other functions) should be threadsafe for
+        # structured datatypes, see gh-15387. The failure is timing dependent,
+        # so this test may pass even when the bug is present.
+ from concurrent.futures import ThreadPoolExecutor
+
+ # Create a deeply nested dtype to make a failure more likely:
+ dt = np.dtype([("", "f8")])
+ dt = np.dtype([("", dt)])
+ dt = np.dtype([("", dt)] * 2)
+ # The array should be large enough to likely run into threading issues
+ arr = np.random.uniform(size=(5000, 4)).view(dt)[:, 0]
+ def func(arr):
+ arr.nonzero()
+
+ tpe = ThreadPoolExecutor(max_workers=8)
+ futures = [tpe.submit(func, arr) for _ in range(10)]
+ for f in futures:
+ f.result()
+
+ assert arr.dtype is dt
+
class TestIndex:
def test_boolean(self):
import inspect
import sys
+import os
import tempfile
from io import StringIO
from unittest import mock
data = np.random.random(5)
- fname = tempfile.mkstemp()[1]
- data.tofile(fname)
-
- array_like = np.fromfile(fname, like=ref)
- if numpy_ref is True:
- assert type(array_like) is np.ndarray
- np_res = np.fromfile(fname, like=ref)
- assert_equal(np_res, data)
- assert_equal(array_like, np_res)
- else:
- assert type(array_like) is self.MyArray
- assert array_like.function is self.MyArray.fromfile
+ with tempfile.TemporaryDirectory() as tmpdir:
+ fname = os.path.join(tmpdir, "testfile")
+ data.tofile(fname)
+
+ array_like = np.fromfile(fname, like=ref)
+ if numpy_ref is True:
+ assert type(array_like) is np.ndarray
+ np_res = np.fromfile(fname, like=ref)
+ assert_equal(np_res, data)
+ assert_equal(array_like, np_res)
+ else:
+ assert type(array_like) is self.MyArray
+ assert array_like.function is self.MyArray.fromfile
@requires_array_function
def test_exception_handling(self):
except DistutilsPlatformError:
pass
except AttributeError as e:
- if '_MSVCCompiler__root' in str(msg):
- print('Ignoring "%s" (I think it is msvccompiler.py bug)' % (msg))
+ if '_MSVCCompiler__root' in str(e):
+ print('Ignoring "%s" (I think it is msvccompiler.py bug)' % (e))
else:
raise
except IOError as e:
-major = 2
-
-try:
- from __svn_version__ import version
- version_info = (major, version)
- version = '%s_%s' % version_info
-except (ImportError, ValueError):
- version = str(major)
+from numpy.version import version
Pearu Peterson
"""
-__version__ = "$Revision: 1.60 $"[10:-1]
-
from . import __version__
f2py_version = __version__.version
len = a['*']
elif 'len' in a:
len = a['len']
- if re.match(r'\(\s*([*]|[:])\s*\)', len) or re.match(r'([*]|[:])', len):
+ if re.match(r'\(\s*(\*|:)\s*\)', len) or re.match(r'(\*|:)', len):
if isintent_hide(var):
errmess('getstrlength:intent(hide): expected a string with defined length but got: %s\n' % (
repr(var)))
/*typedef #rctype#(*#name#_typedef)(#optargs_td##args_td##strarglens_td##noargs#);*/
#static# #rctype# #callbackname# (#optargs##args##strarglens##noargs#) {
- #name#_t *cb;
+ #name#_t cb_local = { NULL, NULL, 0 };
+ #name#_t *cb = NULL;
PyTupleObject *capi_arglist = NULL;
PyObject *capi_return = NULL;
PyObject *capi_tmp = NULL;
f2py_cb_start_clock();
#endif
cb = get_active_#name#();
+ if (cb == NULL) {
+ capi_longjmp_ok = 0;
+ cb = &cb_local;
+ }
capi_arglist = cb->args_capi;
CFUNCSMESS(\"cb:Call-back function #name# (maxnofargs=#maxnofargs#(-#nofoptargs#))\\n\");
CFUNCSMESSPY(\"cb:#name#_capi=\",cb->capi);
if (cb->capi==NULL) {
capi_longjmp_ok = 0;
cb->capi = PyObject_GetAttrString(#modulename#_module,\"#argname#\");
+ CFUNCSMESSPY(\"cb:#name#_capi=\",cb->capi);
}
if (cb->capi==NULL) {
PyErr_SetString(#modulename#_error,\"cb: Callback #argname# not defined (as an argument or module #modulename# attribute).\\n\");
}
(*v).r = ((npy_cdouble *)PyArray_DATA(arr))->real;
(*v).i = ((npy_cdouble *)PyArray_DATA(arr))->imag;
+ Py_DECREF(arr);
return 1;
}
/* Python does not provide PyNumber_Complex function :-( */
Pearu Peterson
"""
-__version__ = "$Revision: 1.19 $"[10:-1]
-
from . import __version__
f2py_version = __version__.version
return ''
return name[i + 1:]
-is_f_file = re.compile(r'.*[.](for|ftn|f77|f)\Z', re.I).match
-_has_f_header = re.compile(r'-[*]-\s*fortran\s*-[*]-', re.I).search
-_has_f90_header = re.compile(r'-[*]-\s*f90\s*-[*]-', re.I).search
-_has_fix_header = re.compile(r'-[*]-\s*fix\s*-[*]-', re.I).search
+is_f_file = re.compile(r'.*\.(for|ftn|f77|f)\Z', re.I).match
+_has_f_header = re.compile(r'-\*-\s*fortran\s*-\*-', re.I).search
+_has_f90_header = re.compile(r'-\*-\s*f90\s*-\*-', re.I).search
+_has_fix_header = re.compile(r'-\*-\s*fix\s*-\*-', re.I).search
_free_f90_start = re.compile(r'[^c*]\s*[^\s\d\t]', re.I).match
return decl
selectpattern = re.compile(
- r'\s*(?P<this>(@\(@.*?@\)@|[*][\d*]+|[*]\s*@\(@.*?@\)@|))(?P<after>.*)\Z', re.I)
+ r'\s*(?P<this>(@\(@.*?@\)@|\*[\d*]+|\*\s*@\(@.*?@\)@|))(?P<after>.*)\Z', re.I)
nameargspattern = re.compile(
r'\s*(?P<name>\b[\w$]+\b)\s*(@\(@\s*(?P<args>[\w\s,]*)\s*@\)@|)\s*((result(\s*@\(@\s*(?P<result>\b[\w$]+\b)\s*@\)@|))|(bind\s*@\(@\s*(?P<bind>.*)\s*@\)@))*\s*\Z', re.I)
callnameargspattern = re.compile(
previous_context = ('common', bn, groupcounter)
elif case == 'use':
m1 = re.match(
- r'\A\s*(?P<name>\b[\w]+\b)\s*((,(\s*\bonly\b\s*:|(?P<notonly>))\s*(?P<list>.*))|)\s*\Z', m.group('after'), re.I)
+ r'\A\s*(?P<name>\b\w+\b)\s*((,(\s*\bonly\b\s*:|(?P<notonly>))\s*(?P<list>.*))|)\s*\Z', m.group('after'), re.I)
if m1:
mm = m1.groupdict()
if 'use' not in groupcache[groupcounter]:
for l in ll:
if '=' in l:
m2 = re.match(
- r'\A\s*(?P<local>\b[\w]+\b)\s*=\s*>\s*(?P<use>\b[\w]+\b)\s*\Z', l, re.I)
+ r'\A\s*(?P<local>\b\w+\b)\s*=\s*>\s*(?P<use>\b\w+\b)\s*\Z', l, re.I)
if m2:
rl[m2.group('local').strip()] = m2.group(
'use').strip()
ll = ll[i + 2:]
return typespec, selector, attr, ll
#####
-namepattern = re.compile(r'\s*(?P<name>\b[\w]+\b)\s*(?P<after>.*)\s*\Z', re.I)
+namepattern = re.compile(r'\s*(?P<name>\b\w+\b)\s*(?P<after>.*)\s*\Z', re.I)
kindselector = re.compile(
- r'\s*(\(\s*(kind\s*=)?\s*(?P<kind>.*)\s*\)|[*]\s*(?P<kind2>.*?))\s*\Z', re.I)
+ r'\s*(\(\s*(kind\s*=)?\s*(?P<kind>.*)\s*\)|\*\s*(?P<kind2>.*?))\s*\Z', re.I)
charselector = re.compile(
- r'\s*(\((?P<lenkind>.*)\)|[*]\s*(?P<charlen>.*))\s*\Z', re.I)
+ r'\s*(\((?P<lenkind>.*)\)|\*\s*(?P<charlen>.*))\s*\Z', re.I)
lenkindpattern = re.compile(
r'\s*(kind\s*=\s*(?P<kind>.*?)\s*(@,@\s*len\s*=\s*(?P<len>.*)|)|(len\s*=\s*|)(?P<len2>.*?)\s*(@,@\s*(kind\s*=\s*|)(?P<kind2>.*)|))\s*\Z', re.I)
lenarraypattern = re.compile(
- r'\s*(@\(@\s*(?!/)\s*(?P<array>.*?)\s*@\)@\s*[*]\s*(?P<len>.*?)|([*]\s*(?P<len2>.*?)|)\s*(@\(@\s*(?!/)\s*(?P<array2>.*?)\s*@\)@|))\s*(=\s*(?P<init>.*?)|(@\(@|)/\s*(?P<init2>.*?)\s*/(@\)@|)|)\s*\Z', re.I)
+ r'\s*(@\(@\s*(?!/)\s*(?P<array>.*?)\s*@\)@\s*\*\s*(?P<len>.*?)|(\*\s*(?P<len2>.*?)|)\s*(@\(@\s*(?!/)\s*(?P<array2>.*?)\s*@\)@|))\s*(=\s*(?P<init>.*?)|(@\(@|)/\s*(?P<init2>.*?)\s*/(@\)@|)|)\s*\Z', re.I)
def removespaces(expr):
edecl['charselector'] = copy.copy(charselect)
edecl['typename'] = typename
edecl['attrspec'] = copy.copy(attrspec)
+ if 'external' in (edecl.get('attrspec') or []) and e in groupcache[groupcounter]['args']:
+ if 'externals' not in groupcache[groupcounter]:
+ groupcache[groupcounter]['externals'] = []
+ groupcache[groupcounter]['externals'].append(e)
if m.group('after'):
m1 = lenarraypattern.match(markouterparen(m.group('after')))
if m1:
block['vars'][block['result']] = {}
return block
-determineexprtype_re_1 = re.compile(r'\A\(.+?[,].+?\)\Z', re.I)
-determineexprtype_re_2 = re.compile(r'\A[+-]?\d+(_(?P<name>[\w]+)|)\Z', re.I)
+determineexprtype_re_1 = re.compile(r'\A\(.+?,.+?\)\Z', re.I)
+determineexprtype_re_2 = re.compile(r'\A[+-]?\d+(_(?P<name>\w+)|)\Z', re.I)
determineexprtype_re_3 = re.compile(
- r'\A[+-]?[\d.]+[\d+\-de.]*(_(?P<name>[\w]+)|)\Z', re.I)
+ r'\A[+-]?[\d.]+[-\d+de.]*(_(?P<name>\w+)|)\Z', re.I)
determineexprtype_re_4 = re.compile(r'\A\(.*\)\Z', re.I)
determineexprtype_re_5 = re.compile(r'\A(?P<name>\w+)\s*\(.*?\)\s*\Z', re.I)
from . import capi_maps
f2py_version = __version__.version
+numpy_version = __version__.version
errmess = sys.stderr.write
# outmess=sys.stdout.write
show = pprint.pprint
outmess = auxfuncs.outmess
-try:
- from numpy import __version__ as numpy_version
-except ImportError:
- numpy_version = 'N/A'
-
-__usage__ = """\
-Usage:
+__usage__ =\
+f"""Usage:
1) To construct extension module sources:
--[no-]latex-doc Create (or not) <modulename>module.tex.
Default is --no-latex-doc.
--short-latex Create 'incomplete' LaTeX document (without commands
- \\documentclass, \\tableofcontents, and \\begin{document},
- \\end{document}).
+ \\documentclass, \\tableofcontents, and \\begin{{document}},
+ \\end{{document}}).
--[no-]rest-doc Create (or not) <modulename>module.rst.
Default is --no-rest-doc.
array. Integer <int> sets the threshold for array sizes when
a message should be shown.
-Version: %s
-numpy Version: %s
+Version: {f2py_version}
+numpy Version: {numpy_version}
Requires: Python 3.5 or higher.
License: NumPy license (see LICENSE.txt in the NumPy source code)
Copyright 1999 - 2011 Pearu Peterson all rights reserved.
-http://cens.ioc.ee/projects/f2py2e/""" % (f2py_version, numpy_version)
+http://cens.ioc.ee/projects/f2py2e/"""
def scaninputline(inputline):
remove_build_dir = 1
build_dir = tempfile.mkdtemp()
- _reg1 = re.compile(r'[-][-]link[-]')
+ _reg1 = re.compile(r'--link-')
sysinfo_flags = [_m for _m in sys.argv[1:] if _reg1.match(_m)]
sys.argv = [_m for _m in sys.argv if _m not in sysinfo_flags]
if sysinfo_flags:
sysinfo_flags = [f[7:] for f in sysinfo_flags]
_reg2 = re.compile(
- r'[-][-]((no[-]|)(wrap[-]functions|lower)|debug[-]capi|quiet)|[-]include')
+ r'--((no-|)(wrap-functions|lower)|debug-capi|quiet)|-include')
f2py_flags = [_m for _m in sys.argv[1:] if _reg2.match(_m)]
sys.argv = [_m for _m in sys.argv if _m not in f2py_flags]
f2py_flags2 = []
sys.argv = [_m for _m in sys.argv if _m not in f2py_flags2]
_reg3 = re.compile(
- r'[-][-]((f(90)?compiler([-]exec|)|compiler)=|help[-]compiler)')
+ r'--((f(90)?compiler(-exec|)|compiler)=|help-compiler)')
flib_flags = [_m for _m in sys.argv[1:] if _reg3.match(_m)]
sys.argv = [_m for _m in sys.argv if _m not in flib_flags]
_reg4 = re.compile(
- r'[-][-]((f(77|90)(flags|exec)|opt|arch)=|(debug|noopt|noarch|help[-]fcompiler))')
+ r'--((f(77|90)(flags|exec)|opt|arch)=|(debug|noopt|noarch|help-fcompiler))')
fc_flags = [_m for _m in sys.argv[1:] if _reg4.match(_m)]
sys.argv = [_m for _m in sys.argv if _m not in fc_flags]
del flib_flags[i]
assert len(flib_flags) <= 2, repr(flib_flags)
- _reg5 = re.compile(r'[-][-](verbose)')
+ _reg5 = re.compile(r'--(verbose)')
setup_flags = [_m for _m in sys.argv[1:] if _reg5.match(_m)]
sys.argv = [_m for _m in sys.argv if _m not in setup_flags]
Pearu Peterson
"""
-__version__ = "$Revision: 1.129 $"[10:-1]
-
-from . import __version__
-f2py_version = __version__.version
-
-from .. import version as _numpy_version
-numpy_version = _numpy_version.version
-
import os
import time
import copy
+# __version__.version is now the same as the NumPy version
+from . import __version__
+f2py_version = __version__.version
+numpy_version = __version__.version
+
from .auxfuncs import (
applyrules, debugcapi, dictappend, errmess, gentitle, getargs2,
hascallstatement, hasexternals, hasinitvalue, hasnote, hasresultnote,
\tif (PyErr_Occurred())
\t\t{PyErr_SetString(PyExc_ImportError, \"can't initialize module #modulename# (failed to import numpy)\"); return m;}
\td = PyModule_GetDict(m);
-\ts = PyUnicode_FromString(\"$R""" + """evision: $\");
+\ts = PyUnicode_FromString(\"#f2py_version#\");
\tPyDict_SetItemString(d, \"__version__\", s);
\tPy_DECREF(s);
\ts = PyUnicode_FromString(
""",
{debugcapi: ["""\
fprintf(stderr,\"debug-capi:Assuming %d arguments; at most #maxnofargs#(-#nofoptargs#) is expected.\\n\",#varname#_cb.nofargs);
- CFUNCSMESSPY(\"for #varname#=\",#cbname#_capi);""",
+ CFUNCSMESSPY(\"for #varname#=\",#varname#_cb.capi);""",
{l_not(isintent_callback): """ fprintf(stderr,\"#vardebugshowvalue# (call-back in C).\\n\",#cbname#);"""}]},
"""\
CFUNCSMESS(\"Saving callback variables for `#varname#`.\\n\");
&& ARRAY_ISCOMPATIBLE(arr,type_num)
&& F2PY_CHECK_ALIGNMENT(arr, intent)
) {
- if ((intent & F2PY_INTENT_C)?PyArray_ISCARRAY(arr):PyArray_ISFARRAY(arr)) {
+ if ((intent & F2PY_INTENT_C)?PyArray_ISCARRAY_RO(arr):PyArray_ISFARRAY_RO(arr)) {
if ((intent & F2PY_INTENT_OUT)) {
Py_INCREF(arr);
}
return arr;
}
}
-
if (intent & F2PY_INTENT_INOUT) {
strcpy(mess, "failed to initialize intent(inout) array");
+ /* Must use PyArray_IS*ARRAY because intent(inout) requires writable input */
if ((intent & F2PY_INTENT_C) && !PyArray_ISCARRAY(arr))
strcat(mess, " -- input not contiguous");
if (!(intent & F2PY_INTENT_C) && !PyArray_ISFARRAY(arr))
import copy
import pytest
-from numpy import (
- array, alltrue, ndarray, zeros, dtype, intp, clongdouble
- )
+import numpy as np
+
from numpy.testing import assert_, assert_equal
from numpy.core.multiarray import typeinfo
from . import util
# 16 byte long double types this means the inout intent cannot be satisfied
# and several tests fail as the alignment flag can be randomly true or false
# when numpy gains an aligned allocator the tests could be enabled again
-if ((intp().dtype.itemsize != 4 or clongdouble().dtype.alignment <= 8) and
+if ((np.intp().dtype.itemsize != 4 or np.clongdouble().dtype.alignment <= 8) and
sys.platform != 'win32'):
_type_names.extend(['LONGDOUBLE', 'CDOUBLE', 'CLONGDOUBLE'])
_cast_dict['LONGDOUBLE'] = _cast_dict['LONG'] + \
_type_cache = {}
def __new__(cls, name):
- if isinstance(name, dtype):
+ if isinstance(name, np.dtype):
dtype0 = name
name = None
for n, i in typeinfo.items():
info = typeinfo[self.NAME]
self.type_num = getattr(wrap, 'NPY_' + self.NAME)
assert_equal(self.type_num, info.num)
- self.dtype = info.type
+ self.dtype = np.dtype(info.type)
+ self.type = info.type
self.elsize = info.bits / 8
self.dtypechar = info.char
# arr.dtypechar may be different from typ.dtypechar
self.arr = wrap.call(typ.type_num, dims, intent.flags, obj)
- assert_(isinstance(self.arr, ndarray), repr(type(self.arr)))
+ assert_(isinstance(self.arr, np.ndarray), repr(type(self.arr)))
self.arr_attr = wrap.array_attrs(self.arr)
return
if intent.is_intent('cache'):
- assert_(isinstance(obj, ndarray), repr(type(obj)))
- self.pyarr = array(obj).reshape(*dims).copy()
+ assert_(isinstance(obj, np.ndarray), repr(type(obj)))
+ self.pyarr = np.array(obj).reshape(*dims).copy()
else:
- self.pyarr = array(array(obj, dtype=typ.dtypechar).reshape(*dims),
- order=self.intent.is_intent('c') and 'C' or 'F')
+ self.pyarr = np.array(
+ np.array(obj, dtype=typ.dtypechar).reshape(*dims),
+ order=self.intent.is_intent('c') and 'C' or 'F')
assert_(self.pyarr.dtype == typ,
repr((self.pyarr.dtype, typ)))
+ self.pyarr.setflags(write=self.arr.flags['WRITEABLE'])
assert_(self.pyarr.flags['OWNDATA'], (obj, intent))
self.pyarr_attr = wrap.array_attrs(self.pyarr)
repr((self.arr_attr[5][3], self.type.elsize)))
assert_(self.arr_equal(self.pyarr, self.arr))
- if isinstance(self.obj, ndarray):
+ if isinstance(self.obj, np.ndarray):
if typ.elsize == Type(obj.dtype).elsize:
if not intent.is_intent('copy') and self.arr_attr[1] <= 1:
assert_(self.has_shared_memory())
def arr_equal(self, arr1, arr2):
if arr1.shape != arr2.shape:
return False
- s = arr1 == arr2
- return alltrue(s.flatten())
+ return (arr1 == arr2).all()
def __str__(self):
return str(self.arr)
"""
if self.obj is self.arr:
return True
- if not isinstance(self.obj, ndarray):
+ if not isinstance(self.obj, np.ndarray):
return False
obj_attr = wrap.array_attrs(self.obj)
return obj_attr[0] == self.arr_attr[0]
def test_in_from_2casttype(self):
for t in self.type.cast_types():
- obj = array(self.num2seq, dtype=t.dtype)
+ obj = np.array(self.num2seq, dtype=t.dtype)
a = self.array([len(self.num2seq)], intent.in_, obj)
if t.elsize == self.type.elsize:
assert_(
else:
assert_(not a.has_shared_memory(), repr(t.dtype))
+ @pytest.mark.parametrize('write', ['w', 'ro'])
+ @pytest.mark.parametrize('order', ['C', 'F'])
+ @pytest.mark.parametrize('inp', ['2seq', '23seq'])
+ def test_in_nocopy(self, write, order, inp):
+ """Test if intent(in) array can be passed without copies
+ """
+ seq = getattr(self, 'num' + inp)
+ obj = np.array(seq, dtype=self.type.dtype, order=order)
+ obj.setflags(write=(write == 'w'))
+ a = self.array(obj.shape, ((order=='C' and intent.in_.c) or intent.in_), obj)
+ assert a.has_shared_memory()
+
def test_inout_2seq(self):
- obj = array(self.num2seq, dtype=self.type.dtype)
+ obj = np.array(self.num2seq, dtype=self.type.dtype)
a = self.array([len(self.num2seq)], intent.inout, obj)
assert_(a.has_shared_memory())
raise SystemError('intent(inout) should have failed on sequence')
def test_f_inout_23seq(self):
- obj = array(self.num23seq, dtype=self.type.dtype, order='F')
+ obj = np.array(self.num23seq, dtype=self.type.dtype, order='F')
shape = (len(self.num23seq), len(self.num23seq[0]))
a = self.array(shape, intent.in_.inout, obj)
assert_(a.has_shared_memory())
- obj = array(self.num23seq, dtype=self.type.dtype, order='C')
+ obj = np.array(self.num23seq, dtype=self.type.dtype, order='C')
shape = (len(self.num23seq), len(self.num23seq[0]))
try:
a = self.array(shape, intent.in_.inout, obj)
'intent(inout) should have failed on improper array')
def test_c_inout_23seq(self):
- obj = array(self.num23seq, dtype=self.type.dtype)
+ obj = np.array(self.num23seq, dtype=self.type.dtype)
shape = (len(self.num23seq), len(self.num23seq[0]))
a = self.array(shape, intent.in_.c.inout, obj)
assert_(a.has_shared_memory())
def test_in_copy_from_2casttype(self):
for t in self.type.cast_types():
- obj = array(self.num2seq, dtype=t.dtype)
+ obj = np.array(self.num2seq, dtype=t.dtype)
a = self.array([len(self.num2seq)], intent.in_.copy, obj)
assert_(not a.has_shared_memory(), repr(t.dtype))
def test_in_from_23casttype(self):
for t in self.type.cast_types():
- obj = array(self.num23seq, dtype=t.dtype)
+ obj = np.array(self.num23seq, dtype=t.dtype)
a = self.array([len(self.num23seq), len(self.num23seq[0])],
intent.in_, obj)
assert_(not a.has_shared_memory(), repr(t.dtype))
def test_f_in_from_23casttype(self):
for t in self.type.cast_types():
- obj = array(self.num23seq, dtype=t.dtype, order='F')
+ obj = np.array(self.num23seq, dtype=t.dtype, order='F')
a = self.array([len(self.num23seq), len(self.num23seq[0])],
intent.in_, obj)
if t.elsize == self.type.elsize:
def test_c_in_from_23casttype(self):
for t in self.type.cast_types():
- obj = array(self.num23seq, dtype=t.dtype)
+ obj = np.array(self.num23seq, dtype=t.dtype)
a = self.array([len(self.num23seq), len(self.num23seq[0])],
intent.in_.c, obj)
if t.elsize == self.type.elsize:
def test_f_copy_in_from_23casttype(self):
for t in self.type.cast_types():
- obj = array(self.num23seq, dtype=t.dtype, order='F')
+ obj = np.array(self.num23seq, dtype=t.dtype, order='F')
a = self.array([len(self.num23seq), len(self.num23seq[0])],
intent.in_.copy, obj)
assert_(not a.has_shared_memory(), repr(t.dtype))
def test_c_copy_in_from_23casttype(self):
for t in self.type.cast_types():
- obj = array(self.num23seq, dtype=t.dtype)
+ obj = np.array(self.num23seq, dtype=t.dtype)
a = self.array([len(self.num23seq), len(self.num23seq[0])],
intent.in_.c.copy, obj)
assert_(not a.has_shared_memory(), repr(t.dtype))
for t in self.type.all_types():
if t.elsize != self.type.elsize:
continue
- obj = array(self.num2seq, dtype=t.dtype)
+ obj = np.array(self.num2seq, dtype=t.dtype)
shape = (len(self.num2seq),)
a = self.array(shape, intent.in_.c.cache, obj)
assert_(a.has_shared_memory(), repr(t.dtype))
a = self.array(shape, intent.in_.cache, obj)
assert_(a.has_shared_memory(), repr(t.dtype))
- obj = array(self.num2seq, dtype=t.dtype, order='F')
+ obj = np.array(self.num2seq, dtype=t.dtype, order='F')
a = self.array(shape, intent.in_.c.cache, obj)
assert_(a.has_shared_memory(), repr(t.dtype))
for t in self.type.all_types():
if t.elsize >= self.type.elsize:
continue
- obj = array(self.num2seq, dtype=t.dtype)
+ obj = np.array(self.num2seq, dtype=t.dtype)
shape = (len(self.num2seq),)
try:
self.array(shape, intent.in_.cache, obj) # Should succeed
shape = (2,)
a = self.array(shape, intent.hide, None)
assert_(a.arr.shape == shape)
- assert_(a.arr_equal(a.arr, zeros(shape, dtype=self.type.dtype)))
+ assert_(a.arr_equal(a.arr, np.zeros(shape, dtype=self.type.dtype)))
shape = (2, 3)
a = self.array(shape, intent.hide, None)
assert_(a.arr.shape == shape)
- assert_(a.arr_equal(a.arr, zeros(shape, dtype=self.type.dtype)))
+ assert_(a.arr_equal(a.arr, np.zeros(shape, dtype=self.type.dtype)))
assert_(a.arr.flags['FORTRAN'] and not a.arr.flags['CONTIGUOUS'])
shape = (2, 3)
a = self.array(shape, intent.c.hide, None)
assert_(a.arr.shape == shape)
- assert_(a.arr_equal(a.arr, zeros(shape, dtype=self.type.dtype)))
+ assert_(a.arr_equal(a.arr, np.zeros(shape, dtype=self.type.dtype)))
assert_(not a.arr.flags['FORTRAN'] and a.arr.flags['CONTIGUOUS'])
shape = (-1, 3)
shape = (2,)
a = self.array(shape, intent.optional, None)
assert_(a.arr.shape == shape)
- assert_(a.arr_equal(a.arr, zeros(shape, dtype=self.type.dtype)))
+ assert_(a.arr_equal(a.arr, np.zeros(shape, dtype=self.type.dtype)))
shape = (2, 3)
a = self.array(shape, intent.optional, None)
assert_(a.arr.shape == shape)
- assert_(a.arr_equal(a.arr, zeros(shape, dtype=self.type.dtype)))
+ assert_(a.arr_equal(a.arr, np.zeros(shape, dtype=self.type.dtype)))
assert_(a.arr.flags['FORTRAN'] and not a.arr.flags['CONTIGUOUS'])
shape = (2, 3)
a = self.array(shape, intent.c.optional, None)
assert_(a.arr.shape == shape)
- assert_(a.arr_equal(a.arr, zeros(shape, dtype=self.type.dtype)))
+ assert_(a.arr_equal(a.arr, np.zeros(shape, dtype=self.type.dtype)))
assert_(not a.arr.flags['FORTRAN'] and a.arr.flags['CONTIGUOUS'])
def test_optional_from_2seq(self):
assert_(not a.has_shared_memory())
def test_inplace(self):
- obj = array(self.num23seq, dtype=self.type.dtype)
+ obj = np.array(self.num23seq, dtype=self.type.dtype)
assert_(not obj.flags['FORTRAN'] and obj.flags['CONTIGUOUS'])
shape = obj.shape
a = self.array(shape, intent.inplace, obj)
assert_(obj[1][2] == a.arr[1][2], repr((obj, a.arr)))
a.arr[1][2] = 54
assert_(obj[1][2] == a.arr[1][2] ==
- array(54, dtype=self.type.dtype), repr((obj, a.arr)))
+ np.array(54, dtype=self.type.dtype), repr((obj, a.arr)))
assert_(a.arr is obj)
assert_(obj.flags['FORTRAN']) # obj attributes are changed inplace!
assert_(not obj.flags['CONTIGUOUS'])
for t in self.type.cast_types():
if t is self.type:
continue
- obj = array(self.num23seq, dtype=t.dtype)
- assert_(obj.dtype.type == t.dtype)
- assert_(obj.dtype.type is not self.type.dtype)
+ obj = np.array(self.num23seq, dtype=t.dtype)
+ assert_(obj.dtype.type == t.type)
+ assert_(obj.dtype.type is not self.type.type)
assert_(not obj.flags['FORTRAN'] and obj.flags['CONTIGUOUS'])
shape = obj.shape
a = self.array(shape, intent.inplace, obj)
assert_(obj[1][2] == a.arr[1][2], repr((obj, a.arr)))
a.arr[1][2] = 54
assert_(obj[1][2] == a.arr[1][2] ==
- array(54, dtype=self.type.dtype), repr((obj, a.arr)))
+ np.array(54, dtype=self.type.dtype), repr((obj, a.arr)))
assert_(a.arr is obj)
assert_(obj.flags['FORTRAN']) # obj attributes changed inplace!
assert_(not obj.flags['CONTIGUOUS'])
- assert_(obj.dtype.type is self.type.dtype) # obj changed inplace!
+ assert_(obj.dtype.type is self.type.type) # obj changed inplace!
a = callback(cu, lencu)
end
+
+ subroutine hidden_callback(a, r)
+ external global_f
+cf2py intent(callback, hide) global_f
+ integer a, r, global_f
+cf2py intent(out) r
+ r = global_f(a)
+ end
+
+ subroutine hidden_callback2(a, r)
+ external global_f
+ integer a, r, global_f
+cf2py intent(out) r
+ r = global_f(a)
+ end
"""
@pytest.mark.parametrize('name', 't,t2'.split(','))
if errors:
raise AssertionError(errors)
+ def test_hidden_callback(self):
+ try:
+ self.module.hidden_callback(2)
+ except Exception as msg:
+ assert_(str(msg).startswith('Callback global_f not defined'))
+
+ try:
+ self.module.hidden_callback2(2)
+ except Exception as msg:
+ assert_(str(msg).startswith('cb: Callback global_f not defined'))
+
+ self.module.global_f = lambda x: x + 1
+ r = self.module.hidden_callback(2)
+ assert_(r == 3)
+
+ self.module.global_f = lambda x: x + 2
+ r = self.module.hidden_callback(2)
+ assert_(r == 4)
+
+ del self.module.global_f
+ try:
+ self.module.hidden_callback(2)
+ except Exception as msg:
+ assert_(str(msg).startswith('Callback global_f not defined'))
+
+ self.module.global_f = lambda x=0: x + 3
+ r = self.module.hidden_callback(2)
+ assert_(r == 5)
+
+        # reproducer of gh-18341
+ r = self.module.hidden_callback2(2)
+ assert_(r == 3)
+
class TestF77CallbackPythonTLS(TestF77Callback):
"""
assert 'public' not in mod['vars']['a']['attrspec']
assert 'private' not in mod['vars']['seta']['attrspec']
assert 'public' in mod['vars']['seta']['attrspec']
+
+class TestExternal(util.F2PyTest):
+ # issue gh-17859: add external attribute support
+ code = """
+ integer(8) function external_as_statement(fcn)
+ implicit none
+ external fcn
+ integer(8) :: fcn
+ external_as_statement = fcn(0)
+ end
+
+ integer(8) function external_as_attribute(fcn)
+ implicit none
+ integer(8), external :: fcn
+ external_as_attribute = fcn(0)
+ end
+ """
+
+ def test_external_as_statement(self):
+ def incr(x):
+ return x + 123
+ r = self.module.external_as_statement(incr)
+ assert r == 123
+
+ def test_external_as_attribute(self):
+ def incr(x):
+ return x + 123
+ r = self.module.external_as_attribute(incr)
+ assert r == 123
from typing import Any, List
+from numpy.lib import (
+ format as format,
+ mixins as mixins,
+ scimath as scimath,
+    stride_tricks as stride_tricks,
+)
+
__all__: List[str]
emath: Any
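
The repeated ``name as name`` aliasing in the stub additions above is the convention type
checkers use to mark a name as an intentionally public re-export; a plain import in a
``.pyi`` file is otherwise treated as private to that stub. A minimal, illustrative
fragment of the idiom (not the actual stub contents)::

    # stub-file fragment (.pyi), illustrative only
    from numpy.lib import scimath as scimath   # ``as`` form: explicit public re-export
    from numpy.lib import utils                # plain import: private to the stub
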
from numpy.core.numeric import (
asanyarray, arange, zeros, greater_equal, multiply, ones,
- asarray, where, int8, int16, int32, int64, empty, promote_types, diagonal,
- nonzero
+ asarray, where, int8, int16, int32, int64, intp, empty, promote_types,
+ diagonal, nonzero, indices
)
from numpy.core.overrides import set_array_function_like_doc, set_module
from numpy.core import overrides
n = s + abs(k)
res = zeros((n, n), v.dtype)
if (k >= 0):
- i = arange(0, n-k)
+ i = arange(0, n-k, dtype=intp)
fi = i+k+i*n
else:
- i = arange(0, n+k)
+ i = arange(0, n+k, dtype=intp)
fi = i+(i-k)*n
res.flat[fi] = v
if not wrap:
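
The ``dtype=intp`` additions keep the flat-index arithmetic in ``diagflat`` from
overflowing the default 32-bit integer on Windows and other 32-bit platforms when the
output is large. A minimal sketch of that index construction for the ``k >= 0`` branch,
using a deliberately small example::

    import numpy as np

    v = np.array([1, 2, 3])
    k = 1
    n = v.size + abs(k)
    i = np.arange(0, n - k, dtype=np.intp)   # intp keeps i*n from overflowing int32
    fi = i + k + i * n                       # flat positions of elements (i, i + k)
    res = np.zeros((n, n), dtype=v.dtype)
    res.flat[fi] = v                         # same result as np.diagflat(v, k)
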
from typing import Any, List
+from numpy.ma import extras as extras
+
__all__: List[str]
core: Any
elif isinstance(data, (tuple, list)):
try:
# If data is a sequence of masked array
- mask = np.array([getmaskarray(np.asanyarray(m, dtype=mdtype))
- for m in data], dtype=mdtype)
+ mask = np.array(
+ [getmaskarray(np.asanyarray(m, dtype=_data.dtype))
+ for m in data], dtype=mdtype)
except ValueError:
# If data is nested
mask = nomask
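
The key point of the change above is that each sequence element is now cast to the
*data* dtype before its mask is read, instead of to the mask dtype (typically ``bool``),
so values such as strings are never pushed through a bool cast. A rough illustration of
the per-element mask collection, assuming a plain boolean mask dtype and an illustrative
``'U21'`` target dtype::

    import numpy as np
    from numpy.ma import getmaskarray, masked_array

    data = [masked_array(['a', 'b'], mask=[True, False]), np.arange(2)]
    mask = np.array([getmaskarray(np.asanyarray(m, dtype='U21')) for m in data],
                    dtype=bool)
    # mask -> [[ True, False], [False, False]]
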
assert_equal(data, [[0, 1, 2, 3, 4], [4, 3, 2, 1, 0]])
assert_(data.mask is nomask)
+ def test_creation_with_list_of_maskedarrays_no_bool_cast(self):
+ # Tests the regression in gh-18551
+ masked_str = np.ma.masked_array(['a', 'b'], mask=[True, False])
+ normal_int = np.arange(2)
+ res = np.ma.asarray([masked_str, normal_int], dtype="U21")
+ assert_array_equal(res.mask, [[True, False], [False, False]])
+
+        # The above only failed due to a long chain of oddities; try also with
+        # an object array that cannot always be converted to bool:
+ class NotBool():
+ def __bool__(self):
+ raise ValueError("not a bool!")
+ masked_obj = np.ma.masked_array([NotBool(), 'b'], mask=[True, False])
+ # Check that the NotBool actually fails like we would expect:
+ with pytest.raises(ValueError, match="not a bool!"):
+ np.asarray([masked_obj], dtype=bool)
+
+ res = np.ma.asarray([masked_obj, normal_int])
+ assert_array_equal(res.mask, [[True, False], [False, False]])
+
def test_creation_from_ndarray_with_padding(self):
x = np.array([('A', 0)], dtype={'names':['f0','f1'],
'formats':['S4','i8'],
from typing import Any
+from numpy.polynomial import (
+ chebyshev as chebyshev,
+ hermite as hermite,
+ hermite_e as hermite_e,
+ laguerre as laguerre,
+ legendre as legendre,
+ polynomial as polynomial,
+)
+
Polynomial: Any
Chebyshev: Any
Legendre: Any
"lib.mixins",
"lib.recfunctions",
"lib.scimath",
+ "lib.stride_tricks",
"linalg",
"ma",
"ma.extras",
"polynomial.laguerre",
"polynomial.legendre",
"polynomial.polynomial",
- "polynomial.polyutils",
"random",
"testing",
"typing",
"lib.npyio",
"lib.polynomial",
"lib.shape_base",
- "lib.stride_tricks",
"lib.twodim_base",
"lib.type_check",
"lib.ufunclike",
"ma.timer_comparison",
"matrixlib",
"matrixlib.defmatrix",
+ "polynomial.polyutils",
"random.mtrand",
"random.bit_generator",
"testing.print_coercion_tables",
def test_f2py(f2py_cmd):
# test that we can run f2py script
stdout = subprocess.check_output([f2py_cmd, '-v'])
- assert_equal(stdout.strip(), b'2')
+ assert_equal(stdout.strip(), np.__version__.encode('ascii'))
def test_pep338():
stdout = subprocess.check_output([sys.executable, '-mnumpy.f2py', '-v'])
- assert_equal(stdout.strip(), b'2')
+ assert_equal(stdout.strip(), np.__version__.encode('ascii'))
np.sys # E: Module has no attribute
np.os # E: Module has no attribute
np.math # E: Module has no attribute
+
+# Public sub-modules that are not imported into their parent module by default;
+# e.g. one must first execute `import numpy.lib.recfunctions`
+np.lib.recfunctions # E: Module has no attribute
+np.ma.mrecords # E: Module has no attribute
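
The two failure cases above encode the runtime behaviour that these sub-modules only
become attributes of their parent package once they have been imported explicitly. A
quick sketch of that behaviour, assuming a fresh interpreter session::

    import numpy as np

    try:
        np.lib.recfunctions           # AttributeError until explicitly imported
    except AttributeError:
        pass

    import numpy.lib.recfunctions     # the explicit import registers the sub-module
    np.lib.recfunctions               # now reachable as an attribute
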
import numpy as np
+dtype_obj = np.dtype(np.str_)
+void_dtype_obj = np.dtype([("f0", np.float64), ("f1", np.float32)])
+
np.dtype(dtype=np.int64)
np.dtype(int)
np.dtype("int")
np.dtype(Test())
+
+dtype_obj.names
+
+dtype_obj * 0
+dtype_obj * 2
+
+0 * dtype_obj
+2 * dtype_obj
+
+void_dtype_obj["f0"]
+void_dtype_obj[0]
+void_dtype_obj[["f0", "f1"]]
+void_dtype_obj[["f0"]]
np.testing
np.version
+np.lib.format
+np.lib.mixins
+np.lib.scimath
+np.lib.stride_tricks
+np.ma.extras
+np.polynomial.chebyshev
+np.polynomial.hermite
+np.polynomial.hermite_e
+np.polynomial.laguerre
+np.polynomial.legendre
+np.polynomial.polynomial
+
np.__path__
np.__version__
import numpy as np
+dtype_obj: np.dtype[np.str_]
+void_dtype_obj: np.dtype[np.void]
+
reveal_type(np.dtype(np.float64)) # E: numpy.dtype[numpy.floating[numpy.typing._64Bit]]
reveal_type(np.dtype(np.int64)) # E: numpy.dtype[numpy.signedinteger[numpy.typing._64Bit]]
# Void
reveal_type(np.dtype(("U", 10))) # E: numpy.dtype[numpy.void]
+
+reveal_type(dtype_obj.name) # E: str
+reveal_type(dtype_obj.names) # E: Union[builtins.tuple[builtins.str], None]
+
+reveal_type(dtype_obj * 0) # E: None
+reveal_type(dtype_obj * 1) # E: numpy.dtype[numpy.str_]
+reveal_type(dtype_obj * 2) # E: numpy.dtype[numpy.void]
+
+reveal_type(0 * dtype_obj) # E: Any
+reveal_type(1 * dtype_obj) # E: Any
+reveal_type(2 * dtype_obj) # E: Any
+
+reveal_type(void_dtype_obj["f0"]) # E: numpy.dtype[Any]
+reveal_type(void_dtype_obj[0]) # E: numpy.dtype[Any]
+reveal_type(void_dtype_obj[["f0", "f1"]]) # E: numpy.dtype[numpy.void]
+reveal_type(void_dtype_obj[["f0"]]) # E: numpy.dtype[numpy.void]
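
The new reveals for ``void_dtype_obj`` mirror what structured-dtype indexing does at run
time: a field name or position returns that field's dtype, while a list of names returns
another structured (void) dtype restricted to those fields. A small runtime sketch::

    import numpy as np

    vt = np.dtype([("f0", np.float64), ("f1", np.float32)])
    vt["f0"]           # dtype('float64'), selected by name
    vt[0]              # dtype('float64'), selected by position
    vt[["f0", "f1"]]   # structured dtype with both fields
    vt[["f0"]]         # structured dtype with only 'f0'
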
reveal_type(np.testing) # E: ModuleType
reveal_type(np.version) # E: ModuleType
+reveal_type(np.lib.format) # E: ModuleType
+reveal_type(np.lib.mixins) # E: ModuleType
+reveal_type(np.lib.scimath) # E: ModuleType
+reveal_type(np.lib.stride_tricks) # E: ModuleType
+reveal_type(np.ma.extras) # E: ModuleType
+reveal_type(np.polynomial.chebyshev) # E: ModuleType
+reveal_type(np.polynomial.hermite) # E: ModuleType
+reveal_type(np.polynomial.hermite_e) # E: ModuleType
+reveal_type(np.polynomial.laguerre) # E: ModuleType
+reveal_type(np.polynomial.legendre) # E: ModuleType
+reveal_type(np.polynomial.polynomial) # E: ModuleType
+
# TODO: Remove when annotations have been added to `np.testing.assert_equal`
reveal_type(np.testing.assert_equal) # E: Any
# THIS FILE IS GENERATED FROM NUMPY SETUP.PY
#
# To compare versions robustly, use `numpy.lib.NumpyVersion`
-short_version: str = '1.20.1'
-version: str = '1.20.1'
-full_version: str = '1.20.1'
-git_revision: str = 'd7aa4085623b222058edb0ff38392c38c5e00c54'
+short_version: str = '1.20.2'
+version: str = '1.20.2'
+full_version: str = '1.20.2'
+git_revision: str = 'b19ad5bfa396a4600a52a598a30a65d4e993f831'
release: bool = True
if not release:
MAJOR = 1
MINOR = 20
-MICRO = 1
+MICRO = 2
ISRELEASED = True
VERSION = '%d.%d.%d' % (MAJOR, MINOR, MICRO)