-Metadata-Version: 1.2
+Metadata-Version: 2.1
Name: numpy
-Version: 1.21.4
+Version: 1.21.5
Summary: NumPy is the fundamental package for array computing with Python.
Home-page: https://www.numpy.org
Author: Travis E. Oliphant et al.
Project-URL: Bug Tracker, https://github.com/numpy/numpy/issues
Project-URL: Documentation, https://numpy.org/doc/1.21
Project-URL: Source Code, https://github.com/numpy/numpy
-Description: It provides:
-
- - a powerful N-dimensional array object
- - sophisticated (broadcasting) functions
- - tools for integrating C/C++ and Fortran code
- - useful linear algebra, Fourier transform, and random number capabilities
- - and much more
-
- Besides its obvious scientific uses, NumPy can also be used as an efficient
- multi-dimensional container of generic data. Arbitrary data-types can be
- defined. This allows NumPy to seamlessly and speedily integrate with a wide
- variety of databases.
-
- All NumPy wheels distributed on PyPI are BSD licensed.
-
-
Platform: Windows
Platform: Linux
Platform: Solaris
Classifier: Operating System :: Unix
Classifier: Operating System :: MacOS
Requires-Python: >=3.7,<3.11
+License-File: LICENSE.txt
+License-File: LICENSES_bundled.txt
+
+It provides:
+
+- a powerful N-dimensional array object
+- sophisticated (broadcasting) functions
+- tools for integrating C/C++ and Fortran code
+- useful linear algebra, Fourier transform, and random number capabilities
+- and much more
+
+Besides its obvious scientific uses, NumPy can also be used as an efficient
+multi-dimensional container of generic data. Arbitrary data-types can be
+defined. This allows NumPy to seamlessly and speedily integrate with a wide
+variety of databases.
+
+All NumPy wheels distributed on PyPI are BSD licensed.
+
+
+
--- /dev/null
+
+Contributors
+============
+
+A total of 7 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Bas van Beek
+* Charles Harris
+* Matti Picus
+* Rohit Goswami +
+* Ross Barnowski
+* Sayed Adel
+* Sebastian Berg
+
+Pull requests merged
+====================
+
+A total of 11 pull requests were merged for this release.
+
+* `#20357 <https://github.com/numpy/numpy/pull/20357>`__: MAINT: Do not forward ``__(deep)copy__`` calls of ``_GenericAlias``...
+* `#20462 <https://github.com/numpy/numpy/pull/20462>`__: BUG: Fix float16 einsum fastpaths using wrong tempvar
+* `#20463 <https://github.com/numpy/numpy/pull/20463>`__: BUG, DIST: Print OS error message when the executable does not exist
+* `#20464 <https://github.com/numpy/numpy/pull/20464>`__: BLD: Verify the ability to compile C++ sources before initiating...
+* `#20465 <https://github.com/numpy/numpy/pull/20465>`__: BUG: Force ``npymath`` to respect ``npy_longdouble``
+* `#20466 <https://github.com/numpy/numpy/pull/20466>`__: BUG: Fix failure to create aligned, empty structured dtype
+* `#20467 <https://github.com/numpy/numpy/pull/20467>`__: ENH: provide a convenience function to replace npy_load_module
+* `#20495 <https://github.com/numpy/numpy/pull/20495>`__: MAINT: update wheel to version that supports python3.10
+* `#20497 <https://github.com/numpy/numpy/pull/20497>`__: BUG: Clear errors correctly in F2PY conversions
+* `#20613 <https://github.com/numpy/numpy/pull/20613>`__: DEV: add a warningfilter to fix pytest workflow.
+* `#20618 <https://github.com/numpy/numpy/pull/20618>`__: MAINT: Help boost::python libraries at least not crash
.. toctree::
:maxdepth: 3
+ 1.21.5 <release/1.21.5-notes>
1.21.4 <release/1.21.4-notes>
1.21.3 <release/1.21.3-notes>
1.21.2 <release/1.21.2-notes>
--- /dev/null
+.. currentmodule:: numpy
+
+==========================
+NumPy 1.21.5 Release Notes
+==========================
+
+NumPy 1.21.5 is a maintenance release that fixes a few bugs discovered after
+the 1.21.4 release and does some maintenance to extend the 1.21.x lifetime.
+The Python versions supported in this release are 3.7-3.10. If you want to
+compile your own version using gcc-11, you will need to use gcc-11.2+ to avoid
+problems.
+
+Contributors
+============
+
+A total of 7 people contributed to this release. People with a "+" by their
+names contributed a patch for the first time.
+
+* Bas van Beek
+* Charles Harris
+* Matti Picus
+* Rohit Goswami +
+* Ross Barnowski
+* Sayed Adel
+* Sebastian Berg
+
+Pull requests merged
+====================
+
+A total of 11 pull requests were merged for this release.
+
+* `#20357 <https://github.com/numpy/numpy/pull/20357>`__: MAINT: Do not forward ``__(deep)copy__`` calls of ``_GenericAlias``...
+* `#20462 <https://github.com/numpy/numpy/pull/20462>`__: BUG: Fix float16 einsum fastpaths using wrong tempvar
+* `#20463 <https://github.com/numpy/numpy/pull/20463>`__: BUG, DIST: Print OS error message when the executable does not exist
+* `#20464 <https://github.com/numpy/numpy/pull/20464>`__: BLD: Verify the ability to compile C++ sources before initiating...
+* `#20465 <https://github.com/numpy/numpy/pull/20465>`__: BUG: Force ``npymath`` to respect ``npy_longdouble``
+* `#20466 <https://github.com/numpy/numpy/pull/20466>`__: BUG: Fix failure to create aligned, empty structured dtype
+* `#20467 <https://github.com/numpy/numpy/pull/20467>`__: ENH: provide a convenience function to replace npy_load_module
+* `#20495 <https://github.com/numpy/numpy/pull/20495>`__: MAINT: update wheel to version that supports python3.10
+* `#20497 <https://github.com/numpy/numpy/pull/20497>`__: BUG: Clear errors correctly in F2PY conversions
+* `#20613 <https://github.com/numpy/numpy/pull/20613>`__: DEV: add a warningfilter to fix pytest workflow.
+* `#20618 <https://github.com/numpy/numpy/pull/20618>`__: MAINT: Help boost::python libraries at least not crash
version_json = '''
{
- "date": "2021-11-04T15:06:03-0600",
+ "date": "2021-12-19T13:38:14-0700",
"dirty": false,
"error": null,
- "full-revisionid": "c0b003e9c787ccab27f6fe57c154d7b881da5795",
- "version": "1.21.4"
+ "full-revisionid": "c3d0a09342c08c466984654bc4738af595fba896",
+ "version": "1.21.5"
}
''' # END VERSION_JSON
def npy_load_module(name, fn, info=None):
"""
- Load a module.
+ Load a module. Uses ``load_module``, which will be deprecated in Python
+ 3.12. An alternative that uses ``exec_module`` is provided in
+ ``numpy.distutils.misc_util.exec_mod_from_location``.
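+
+ A minimal sketch of the ``exec_module``-based alternative (illustrative
+ only; ``exec_mod_from_location`` is the actual helper)::
+
+     import importlib.util
+     spec = importlib.util.spec_from_file_location(name, fn)
+     mod = importlib.util.module_from_spec(spec)
+     spec.loader.exec_module(mod)
+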
.. versionadded:: 1.11.2
typedef unsigned char npy_bool;
#define NPY_FALSE 0
#define NPY_TRUE 1
-
-
+/*
+ * `NPY_SIZEOF_LONGDOUBLE` isn't always equal to sizeof(long double).
+ * In certain cases it may be forced to equal sizeof(double), even
+ * against the compiler's implementation, and the same goes for
+ * `complex long double`.
+ *
+ * Therefore, avoid `long double`; use `npy_longdouble` instead, and
+ * for the standard math functions make sure to use the double version
+ * when `NPY_SIZEOF_LONGDOUBLE` == `NPY_SIZEOF_DOUBLE`.
+ * For example:
+ * npy_longdouble *ptr, x;
+ * #if NPY_SIZEOF_LONGDOUBLE == NPY_SIZEOF_DOUBLE
+ * npy_longdouble r = modf(x, ptr);
+ * #else
+ * npy_longdouble r = modfl(x, ptr);
+ * #endif
+ *
+ * See https://github.com/numpy/numpy/issues/20348
+ */
#if NPY_SIZEOF_LONGDOUBLE == NPY_SIZEOF_DOUBLE
- typedef double npy_longdouble;
- #define NPY_LONGDOUBLE_FMT "g"
+ #define NPY_LONGDOUBLE_FMT "g"
+ typedef double npy_longdouble;
#else
- typedef long double npy_longdouble;
- #define NPY_LONGDOUBLE_FMT "Lg"
+ #define NPY_LONGDOUBLE_FMT "Lg"
+ typedef long double npy_longdouble;
#endif
#ifndef Py_USING_UNICODE
# but we cannot use add_installed_pkg_config here either, so we only
# update the substitution dictionary during npymath build
config_cmd = config.get_config_cmd()
-
# Check that the toolchain works, to fail early if it doesn't
# (avoid late errors with MATHLIB which are confusing if the
# compiler does not work).
- st = config_cmd.try_link('int main(void) { return 0;}')
- if not st:
- # rerun the failing command in verbose mode
- config_cmd.compiler.verbose = True
- config_cmd.try_link('int main(void) { return 0;}')
- raise RuntimeError("Broken toolchain: cannot link a simple C program")
+ for lang, test_code, note in (
+ ('c', 'int main(void) { return 0;}', ''),
+ ('c++', (
+ 'int main(void)'
+ '{ auto x = 0.0; return static_cast<int>(x); }'
+ ), (
+ 'note: A compiler with support for C++11 language '
+ 'features is required.'
+ )
+ ),
+ ):
+ is_cpp = lang == 'c++'
+ if is_cpp:
+ # This is a workaround to get rid of invalid C++ flags without
+ # making big changes to config. C is tested first, so the C
+ # compiler is already set up at this point.
+ bk_c = config_cmd.compiler
+ config_cmd.compiler = bk_c.cxx_compiler()
+ st = config_cmd.try_link(test_code, lang=lang)
+ if not st:
+ # rerun the failing command in verbose mode
+ config_cmd.compiler.verbose = True
+ config_cmd.try_link(test_code, lang=lang)
+ raise RuntimeError(
+ f"Broken toolchain: cannot link a simple {lang.upper()} "
+ f"program. {note}"
+ )
+ if is_cpp:
+ config_cmd.compiler = bk_c
mlibs = check_mathlib(config_cmd)
posix_mlib = ' '.join(['-l%s' % l for l in mlibs])
}
else if (PyArray_IsScalar(obj, LongDouble)) {
npy_longdouble x = PyArrayScalar_VAL(obj, LongDouble);
- PyOS_snprintf(str, sizeof(str), "%.*Lg", precision, x);
+ PyOS_snprintf(str, sizeof(str), "%.*" NPY_LONGDOUBLE_FMT, precision, x);
}
else{
double val = PyFloat_AsDouble(obj);
goto fail;
}
/* If align is set, make sure the alignment divides into the size */
- if (align && itemsize % new->alignment != 0) {
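+    /* an empty structured dtype can end up with alignment 0; guard the modulo */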
+ if (align && new->alignment > 0 && itemsize % new->alignment != 0) {
PyErr_Format(PyExc_ValueError,
"NumPy dtype descriptor requires alignment of %d bytes, "
"which is not divisible into the specified itemsize %d",
/**begin repeat2
* #i = 0, 1, 2, 3#
*/
- const @type@ b@i@ = @from@(data[@i@]);
- const @type@ c@i@ = @from@(data_out[@i@]);
+ const @temptype@ b@i@ = @from@(data[@i@]);
+ const @temptype@ c@i@ = @from@(data_out[@i@]);
/**end repeat2**/
/**begin repeat2
* #i = 0, 1, 2, 3#
*/
- const @type@ abc@i@ = scalar * b@i@ + c@i@;
+ const @temptype@ abc@i@ = scalar * b@i@ + c@i@;
/**end repeat2**/
/**begin repeat2
* #i = 0, 1, 2, 3#
}
#endif // !NPY_DISABLE_OPTIMIZATION
for (; count > 0; --count, ++data, ++data_out) {
- const @type@ b = @from@(*data);
- const @type@ c = @from@(*data_out);
+ const @temptype@ b = @from@(*data);
+ const @temptype@ c = @from@(*data_out);
*data_out = @to@(scalar * b + c);
}
#endif // NPYV check for @type@
/**begin repeat2
* #i = 0, 1, 2, 3#
*/
- const @type@ a@i@ = @from@(data0[@i@]);
- const @type@ b@i@ = @from@(data1[@i@]);
- const @type@ c@i@ = @from@(data_out[@i@]);
+ const @temptype@ a@i@ = @from@(data0[@i@]);
+ const @temptype@ b@i@ = @from@(data1[@i@]);
+ const @temptype@ c@i@ = @from@(data_out[@i@]);
/**end repeat2**/
/**begin repeat2
* #i = 0, 1, 2, 3#
*/
- const @type@ abc@i@ = a@i@ * b@i@ + c@i@;
+ const @temptype@ abc@i@ = a@i@ * b@i@ + c@i@;
/**end repeat2**/
/**begin repeat2
* #i = 0, 1, 2, 3#
}
#endif // !NPY_DISABLE_OPTIMIZATION
for (; count > 0; --count, ++data0, ++data1, ++data_out) {
- const @type@ a = @from@(*data0);
- const @type@ b = @from@(*data1);
- const @type@ c = @from@(*data_out);
+ const @temptype@ a = @from@(*data0);
+ const @temptype@ b = @from@(*data1);
+ const @temptype@ c = @from@(*data_out);
*data_out = @to@(a * b + c);
}
#endif // NPYV check for @type@
/**begin repeat2
* #i = 0, 1, 2, 3#
*/
- const @type@ ab@i@ = @from@(data0[@i@]) * @from@(data1[@i@]);
+ const @temptype@ ab@i@ = @from@(data0[@i@]) * @from@(data1[@i@]);
/**end repeat2**/
accum += ab0 + ab1 + ab2 + ab3;
}
#endif // !NPY_DISABLE_OPTIMIZATION
for (; count > 0; --count, ++data0, ++data1) {
- const @type@ a = @from@(*data0);
- const @type@ b = @from@(*data1);
+ const @temptype@ a = @from@(*data0);
+ const @temptype@ b = @from@(*data1);
accum += a * b;
}
#endif // NPYV check for @type@
if (type1 == type2) {
return 1;
}
+
+ if (Py_TYPE(Py_TYPE(type1)) == &PyType_Type) {
+ /*
+ * 2021-12-17: This case is nonsense and should be removed eventually!
+ *
+ * boost::python has/had a bug effectively using EquivTypes with
+ * `type(arbitrary_obj)`. That is clearly wrong as that cannot be a
+ * `PyArray_Descr *`. We assume that `type(type(type(arbitrary_obj)))`
+ * is always in practice `type` (this is the type of the metaclass),
+ * but for our descriptors, `type(type(descr))` is DTypeMeta.
+ *
+ * In that case, we just return False. There is a possibility that
+ * this actually _worked_ effectively (returning 1 sometimes).
+ * We ignore that possibility for simplicity; it really is not our bug.
+ */
+ return 0;
+ }
+
/*
* Do not use PyArray_CanCastTypeTo because it supports legacy flexible
* dtypes as input.
/**begin repeat
* #type = npy_longdouble, npy_double, npy_float#
+ * #TYPE = LONGDOUBLE, DOUBLE, FLOAT#
* #c = l,,f#
* #C = L,,F#
*/
-
+#undef NPY__FP_SFX
+#if NPY_SIZEOF_@TYPE@ == NPY_SIZEOF_DOUBLE
+ #define NPY__FP_SFX(X) X
+#else
+ #define NPY__FP_SFX(X) NPY_CAT(X, @c@)
+#endif
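+/*
+ * Illustration: NPY__FP_SFX maps a bare libm name to the variant for this
+ * precision, e.g. NPY__FP_SFX(cos) expands to cosl when long double is a
+ * distinct type, but stays cos when long double is forced to the size of
+ * double, so the double implementation is used (see npy_common.h).
+ */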
/*
* On arm64 macOS, there's a bug with sin, cos, and tan where they don't
* raise "invalid" when given INFINITY as input.
return (x - x);
}
#endif
- return @kind@@c@(x);
+ return NPY__FP_SFX(@kind@)(x);
}
#endif
#ifdef HAVE_@KIND@@C@
NPY_INPLACE @type@ npy_@kind@@c@(@type@ x, @type@ y)
{
- return @kind@@c@(x, y);
+ return NPY__FP_SFX(@kind@)(x, y);
}
#endif
/**end repeat1**/
#ifdef HAVE_MODF@C@
NPY_INPLACE @type@ npy_modf@c@(@type@ x, @type@ *iptr)
{
- return modf@c@(x, iptr);
+ return NPY__FP_SFX(modf)(x, iptr);
}
#endif
#ifdef HAVE_LDEXP@C@
NPY_INPLACE @type@ npy_ldexp@c@(@type@ x, int exp)
{
- return ldexp@c@(x, exp);
+ return NPY__FP_SFX(ldexp)(x, exp);
}
#endif
#ifdef HAVE_FREXP@C@
NPY_INPLACE @type@ npy_frexp@c@(@type@ x, int* exp)
{
- return frexp@c@(x, exp);
+ return NPY__FP_SFX(frexp)(x, exp);
}
#endif
#else
NPY_INPLACE @type@ npy_cbrt@c@(@type@ x)
{
- return cbrt@c@(x);
+ return NPY__FP_SFX(cbrt)(x);
}
#endif
-
+#undef NPY__FP_SFX
/**end repeat**/
/**begin repeat
* #type = npy_float, npy_double, npy_longdouble#
+ * #TYPE = FLOAT, DOUBLE, LONGDOUBLE#
* #c = f, ,l#
* #C = F, ,L#
*/
-
+#undef NPY__FP_SFX
+#if NPY_SIZEOF_@TYPE@ == NPY_SIZEOF_DOUBLE
+ #define NPY__FP_SFX(X) X
+#else
+ #define NPY__FP_SFX(X) NPY_CAT(X, @c@)
+#endif
@type@ npy_heaviside@c@(@type@ x, @type@ h0)
{
if (npy_isnan(x)) {
}
}
-#define LOGE2 NPY_LOGE2@c@
-#define LOG2E NPY_LOG2E@c@
-#define RAD2DEG (180.0@c@/NPY_PI@c@)
-#define DEG2RAD (NPY_PI@c@/180.0@c@)
+#define LOGE2 NPY__FP_SFX(NPY_LOGE2)
+#define LOG2E NPY__FP_SFX(NPY_LOG2E)
+#define RAD2DEG (NPY__FP_SFX(180.0)/NPY__FP_SFX(NPY_PI))
+#define DEG2RAD (NPY__FP_SFX(NPY_PI)/NPY__FP_SFX(180.0))
NPY_INPLACE @type@ npy_rad2deg@c@(@type@ x)
{
#undef LOG2E
#undef RAD2DEG
#undef DEG2RAD
-
+#undef NPY__FP_SFX
/**end repeat**/
/**begin repeat
.long 0x00000000,0x80000000,0x00000000,0x00000000
.type .L_2il0floatpacket.197,@object
.size .L_2il0floatpacket.197,16
+
+ .section .note.GNU-stack,"",@progbits
.long 0x00000000,0x80000000,0x00000000,0x00000000
.type .L_2il0floatpacket.199,@object
.size .L_2il0floatpacket.199,16
+
+ .section .note.GNU-stack,"",@progbits
.long 4293918720
.type __dacosh_la_CoutTab,@object
.size __dacosh_la_CoutTab,32
+
+ .section .note.GNU-stack,"",@progbits
.long 2139095040
.type __sacosh_la__iml_sacosh_cout_tab,@object
.size __sacosh_la__iml_sacosh_cout_tab,12
+
+ .section .note.GNU-stack,"",@progbits
.long 3217739776
.type _vmldASinHATab,@object
.size _vmldASinHATab,4504
+
+ .section .note.GNU-stack,"",@progbits
.long 3217739776
.type _vmldASinHATab,@object
.size _vmldASinHATab,4504
+
+ .section .note.GNU-stack,"",@progbits
.long 1031600026
.type __svml_dasinh_data_internal_avx512,@object
.size __svml_dasinh_data_internal_avx512,2048
+
+ .section .note.GNU-stack,"",@progbits
.long 939916788
.type __svml_sasinh_data_internal_avx512,@object
.size __svml_sasinh_data_internal_avx512,1344
+
+ .section .note.GNU-stack,"",@progbits
.long 0xffffffff,0xffffffff
.type .L_2il0floatpacket.31,@object
.size .L_2il0floatpacket.31,8
+
+ .section .note.GNU-stack,"",@progbits
.long 1101004800
.type __satan2_la_CoutTab,@object
.size __satan2_la_CoutTab,2008
+
+ .section .note.GNU-stack,"",@progbits
.long 0x00000000,0x3ff00000
.type .L_2il0floatpacket.14,@object
.size .L_2il0floatpacket.14,8
+
+ .section .note.GNU-stack,"",@progbits
.long 3198855850
.type __svml_satan_data_internal_avx512,@object
.size __svml_satan_data_internal_avx512,1024
+
+ .section .note.GNU-stack,"",@progbits
.long 4293918720
.type __datanh_la_CoutTab,@object
.size __datanh_la_CoutTab,32
+
+ .section .note.GNU-stack,"",@progbits
.long 2139095040
.type __satanh_la__imlsAtanhTab,@object
.size __satanh_la__imlsAtanhTab,12
+
+ .section .note.GNU-stack,"",@progbits
.long 0x00000000,0x80000000,0x00000000,0x00000000
.type .L_2il0floatpacket.81,@object
.size .L_2il0floatpacket.81,16
+
+ .section .note.GNU-stack,"",@progbits
.long 0xbd288f47
.type .L_2il0floatpacket.35,@object
.size .L_2il0floatpacket.35,4
+
+ .section .note.GNU-stack,"",@progbits
.long 2146435072
.type __dcos_la_CoutTab,@object
.size __dcos_la_CoutTab,16
+
+ .section .note.GNU-stack,"",@progbits
.long 2139095040
.type __scos_la__vmlsCosCoutTab,@object
.size __scos_la__vmlsCosCoutTab,8
+
+ .section .note.GNU-stack,"",@progbits
.long 1077247184
.type __dcosh_la_CoutTab,@object
.size __dcosh_la_CoutTab,1152
+
+ .section .note.GNU-stack,"",@progbits
.long 1077247184
.type __scosh_la_CoutTab,@object
.size __scosh_la_CoutTab,1152
+
+ .section .note.GNU-stack,"",@progbits
.long 0
.type __dexp2_la__imldExp2HATab,@object
.size __dexp2_la__imldExp2HATab,1152
+
+ .section .note.GNU-stack,"",@progbits
.long 0xc2fc0000
.type .L_2il0floatpacket.56,@object
.size .L_2il0floatpacket.56,4
+
+ .section .note.GNU-stack,"",@progbits
.long 1106771968
.type _imldExpHATab,@object
.size _imldExpHATab,1176
+
+ .section .note.GNU-stack,"",@progbits
.long 0x40000000
.type .L_2il0floatpacket.67,@object
.size .L_2il0floatpacket.67,4
+
+ .section .note.GNU-stack,"",@progbits
.long 0x00000000,0xbff00000
.type .L_2il0floatpacket.77,@object
.size .L_2il0floatpacket.77,8
+
+ .section .note.GNU-stack,"",@progbits
.long 0x3c0950ef
.type .L_2il0floatpacket.56,@object
.size .L_2il0floatpacket.56,4
+
+ .section .note.GNU-stack,"",@progbits
.long 0x00000000,0x3ff00000
.type .L_2il0floatpacket.89,@object
.size .L_2il0floatpacket.89,8
+
+ .section .note.GNU-stack,"",@progbits
.long 0x3f800000
.type .L_2il0floatpacket.93,@object
.size .L_2il0floatpacket.93,4
+
+ .section .note.GNU-stack,"",@progbits
.long 0x00000000,0x3ff00000
.type .L_2il0floatpacket.81,@object
.size .L_2il0floatpacket.81,8
+
+ .section .note.GNU-stack,"",@progbits
.long 0x3f800000
.type .L_2il0floatpacket.90,@object
.size .L_2il0floatpacket.90,4
+
+ .section .note.GNU-stack,"",@progbits
.long 1073157447
.type __dlog2_la__P,@object
.size __dlog2_la__P,56
+
+ .section .note.GNU-stack,"",@progbits
.long 0x3f800000
.type .L_2il0floatpacket.90,@object
.size .L_2il0floatpacket.90,4
+
+ .section .note.GNU-stack,"",@progbits
.long 0x00000000,0x3ff00000
.type .L_2il0floatpacket.80,@object
.size .L_2il0floatpacket.80,8
+
+ .section .note.GNU-stack,"",@progbits
.long 0x00000000,0x3ff00000
.type .L_2il0floatpacket.73,@object
.size .L_2il0floatpacket.73,8
+
+ .section .note.GNU-stack,"",@progbits
.long 862978048
.type __dpow_la_CoutTab,@object
.size __dpow_la_CoutTab,6880
+
+ .section .note.GNU-stack,"",@progbits
.long 0x3f800000
.type .L_2il0floatpacket.136,@object
.size .L_2il0floatpacket.136,4
+
+ .section .note.GNU-stack,"",@progbits
.long 2146435072
.type __dsin_la_CoutTab,@object
.size __dsin_la_CoutTab,16
+
+ .section .note.GNU-stack,"",@progbits
.long 2139095040
.type __ssin_la__vmlsSinHATab,@object
.size __ssin_la__vmlsSinHATab,8
+
+ .section .note.GNU-stack,"",@progbits
.long 0x00000000,0x80000000,0x00000000,0x00000000
.type .L_2il0floatpacket.97,@object
.size .L_2il0floatpacket.97,16
+
+ .section .note.GNU-stack,"",@progbits
.long 0x00000000,0x80000000,0x00000000,0x00000000
.type .L_2il0floatpacket.98,@object
.size .L_2il0floatpacket.98,16
+
+ .section .note.GNU-stack,"",@progbits
.long 2146435072
.type __dtan_la_Tab,@object
.size __dtan_la_Tab,16
+
+ .section .note.GNU-stack,"",@progbits
.long 2139095040
.type __stan_la__vmlsTanTab,@object
.size __stan_la__vmlsTanTab,8
+
+ .section .note.GNU-stack,"",@progbits
.long 3220176896
.type __dtanh_la__imldTanhTab,@object
.size __dtanh_la__imldTanhTab,16
+
+ .section .note.GNU-stack,"",@progbits
.long 3212836864
.type __stanh_la__imlsTanhTab,@object
.size __stanh_la__imlsTanhTab,8
+
+ .section .note.GNU-stack,"",@progbits
t2 = np.dtype('2i4', align=True)
assert_equal(t1.alignment, t2.alignment)
+ def test_aligned_empty(self):
+ # Mainly regression test for gh-19696: construction failed completely
+ dt = np.dtype([], align=True)
+ assert dt == np.dtype([])
+ dt = np.dtype({"names": [], "formats": [], "itemsize": 0}, align=True)
+ assert dt == np.dtype([])
def iter_struct_object_dtypes():
"""
import itertools
+import pytest
+
import numpy as np
from numpy.testing import (
assert_, assert_equal, assert_array_equal, assert_almost_equal,
np.einsum('ij,jk->ik', x, x, out=out)
assert_array_equal(out.base, correct_base)
+ @pytest.mark.parametrize("dtype",
+ np.typecodes["AllFloat"] + np.typecodes["AllInteger"])
+ def test_different_paths(self, dtype):
+ # Test originally added to cover broken float16 path: gh-20305
+ # Likely most are covered elsewhere, at least partially.
+ dtype = np.dtype(dtype)
+ # Simple test, designed to exercise most of the specialized code paths;
+ # note the +0.5 for floats, which makes sure we use a float value
+ # where the results must be exact.
+ arr = (np.arange(7) + 0.5).astype(dtype)
+ scalar = np.array(2, dtype=dtype)
+
+ # contig -> scalar:
+ res = np.einsum('i->', arr)
+ assert res == arr.sum()
+ # contig, contig -> contig:
+ res = np.einsum('i,i->i', arr, arr)
+ assert_array_equal(res, arr * arr)
+ # noncontig, noncontig -> contig:
+ res = np.einsum('i,i->i', arr.repeat(2)[::2], arr.repeat(2)[::2])
+ assert_array_equal(res, arr * arr)
+ # contig + contig -> scalar
+ assert np.einsum('i,i->', arr, arr) == (arr * arr).sum()
+ # contig + scalar -> contig (with out)
+ out = np.ones(7, dtype=dtype)
+ res = np.einsum('i,->i', arr, dtype.type(2), out=out)
+ assert_array_equal(res, arr * dtype.type(2))
+ # scalar + contig -> contig (with out)
+ res = np.einsum(',i->i', scalar, arr)
+ assert_array_equal(res, arr * dtype.type(2))
+ # scalar + contig -> scalar
+ res = np.einsum(',i->', scalar, arr)
+ # Use einsum to compare to not have difference due to sum round-offs:
+ assert res == np.einsum('i->', scalar * arr)
+ # contig + scalar -> scalar
+ res = np.einsum('i,->', arr, scalar)
+ # Use einsum to compare to not have difference due to sum round-offs:
+ assert res == np.einsum('i->', scalar * arr)
+ # contig + contig + contig -> scalar
+ arr = np.array([0.5, 0.5, 0.25, 4.5, 3.], dtype=dtype)
+ res = np.einsum('i,i,i->', arr, arr, arr)
+ assert_array_equal(res, (arr * arr * arr).sum())
+ # four arrays:
+ res = np.einsum('i,i,i,i->', arr, arr, arr, arr)
+ assert_array_equal(res, (arr * arr * arr * arr).sum())
+
def test_small_boolean_arrays(self):
# See gh-5946.
# Use array of True embedded in False.
from numpy.distutils.exec_command import (
filepath_from_subprocess_output, forward_bytes_to_stdout
)
-from numpy.distutils.misc_util import cyg2win32, is_sequence, mingw32, \
- get_num_build_jobs, \
- _commandline_dep_string
+from numpy.distutils.misc_util import (
+ cyg2win32, is_sequence, mingw32, get_num_build_jobs,
+ _commandline_dep_string, sanitize_cxx_flags
+)
# globals for parallel build management
import threading
except subprocess.CalledProcessError as exc:
o = exc.output
s = exc.returncode
- except OSError:
+ except OSError as e:
# OSError doesn't have the same hooks for the exception
# output, but exec_command() historically would use an
# empty string for EnvironmentError (base class for
# OSError)
- o = b''
+ # An empty string would leave the end user with no diagnostic, so
+ # include the error message instead.
+ o = f"\n\n{e}\n\n\n"
+ try:
+ o = o.encode(sys.stdout.encoding)
+ except AttributeError:
+ o = o.encode('utf8')
# status previously used by exec_command() for parent
# of OSError
s = 127
return self
cxx = copy(self)
- cxx.compiler_so = [cxx.compiler_cxx[0]] + cxx.compiler_so[1:]
+ # Use the C++ driver and drop C-only flags (such as -std=c99) that the
+ # C++ compiler rejects.
+ cxx.compiler_so = [cxx.compiler_cxx[0]] + \
+ sanitize_cxx_flags(cxx.compiler_so[1:])
if sys.platform.startswith('aix') and 'ld_so_aix' in cxx.linker_so[0]:
# AIX needs the ld_so_aix script included with Python
cxx.linker_so = [cxx.linker_so[0], cxx.compiler_cxx[0]] \
@staticmethod
def dist_load_module(name, path):
"""Load a module from file, required by the abstract class '_Cache'."""
- from numpy.compat import npy_load_module
+ from .misc_util import exec_mod_from_location
try:
- return npy_load_module(name, path)
+ return exec_mod_from_location(name, path)
except Exception as e:
_Distutils.dist_log(e, stderr=True)
return None
except subprocess.CalledProcessError as exc:
o = exc.output
s = exc.returncode
- except OSError:
- o = b''
+ except OSError as e:
+ o = e
s = 127
else:
return None
atexit.register(clean_up_temporary_directory)
-from numpy.compat import npy_load_module
-
__all__ = ['Configuration', 'get_numpy_include_dirs', 'default_config_dict',
'dict_append', 'appendpath', 'generate_config_py',
'get_cmd', 'allpath', 'get_mathlibs',
'dot_join', 'get_frame', 'minrelpath', 'njoin',
'is_sequence', 'is_string', 'as_list', 'gpaths', 'get_language',
'quote_args', 'get_build_architecture', 'get_info', 'get_pkg_info',
- 'get_num_build_jobs']
+ 'get_num_build_jobs', 'sanitize_cxx_flags',
+ 'exec_mod_from_location']
class InstallableLib:
"""
try:
setup_name = os.path.splitext(os.path.basename(setup_py))[0]
n = dot_join(self.name, subpackage_name, setup_name)
- setup_module = npy_load_module('_'.join(n.split('.')),
- setup_py,
- ('.py', 'U', 1))
+ setup_module = exec_mod_from_location(
+ '_'.join(n.split('.')), setup_py)
if not hasattr(setup_module, 'configuration'):
if not self.options['assume_default_configuration']:
self.warn('Assuming default configuration '\
name = os.path.splitext(os.path.basename(fn))[0]
n = dot_join(self.name, name)
try:
- version_module = npy_load_module('_'.join(n.split('.')),
- fn, info)
+ version_module = exec_mod_from_location(
+ '_'.join(n.split('.')), fn)
except ImportError as e:
self.warn(str(e))
version_module = None
# systems, so delay the import to here.
from distutils.msvccompiler import get_build_architecture
return get_build_architecture()
+
+
+_cxx_ignore_flags = {'-Werror=implicit-function-declaration', '-std=c99'}
+
+
+def sanitize_cxx_flags(cxxflags):
+ '''
+ Some flags are valid for C but not C++. Prune them.
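+
+ For example (illustrative only)::
+
+     sanitize_cxx_flags(['-O2', '-std=c99'])   # -> ['-O2']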
+ '''
+ return [flag for flag in cxxflags if flag not in _cxx_ignore_flags]
+
+
+def exec_mod_from_location(modname, modfile):
+ '''
+ Use importlib machinery to import a module `modname` from the file
+ `modfile`. Depending on the `spec.loader`, the module may not be
+ registered in sys.modules.
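+
+ Example (illustrative; the module name and path are hypothetical)::
+
+     mod = exec_mod_from_location('my_setup', '/path/to/setup.py')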
+ '''
+ spec = importlib.util.spec_from_file_location(modname, modfile)
+ mod = importlib.util.module_from_spec(spec)
+ spec.loader.exec_module(mod)
+ return mod
+
return !(*v == -1 && PyErr_Occurred());
}
- if (PyComplex_Check(obj))
+ if (PyComplex_Check(obj)) {
+ PyErr_Clear();
tmp = PyObject_GetAttrString(obj,\"real\");
- else if (PyBytes_Check(obj) || PyUnicode_Check(obj))
+ }
+ else if (PyBytes_Check(obj) || PyUnicode_Check(obj)) {
/*pass*/;
- else if (PySequence_Check(obj))
+ }
+ else if (PySequence_Check(obj)) {
+ PyErr_Clear();
tmp = PySequence_GetItem(obj, 0);
+ }
+
if (tmp) {
- PyErr_Clear();
if (int_from_pyobj(v, tmp, errmess)) {
Py_DECREF(tmp);
return 1;
}
Py_DECREF(tmp);
}
+
{
PyObject* err = PyErr_Occurred();
if (err == NULL) {
return !(*v == -1 && PyErr_Occurred());
}
- if (PyComplex_Check(obj))
+ if (PyComplex_Check(obj)) {
+ PyErr_Clear();
tmp = PyObject_GetAttrString(obj,\"real\");
- else if (PyBytes_Check(obj) || PyUnicode_Check(obj))
+ }
+ else if (PyBytes_Check(obj) || PyUnicode_Check(obj)) {
/*pass*/;
- else if (PySequence_Check(obj))
- tmp = PySequence_GetItem(obj,0);
+ }
+ else if (PySequence_Check(obj)) {
+ PyErr_Clear();
+ tmp = PySequence_GetItem(obj, 0);
+ }
if (tmp) {
- PyErr_Clear();
if (long_from_pyobj(v, tmp, errmess)) {
Py_DECREF(tmp);
return 1;
return !(*v == -1 && PyErr_Occurred());
}
- if (PyComplex_Check(obj))
+ if (PyComplex_Check(obj)) {
+ PyErr_Clear();
tmp = PyObject_GetAttrString(obj,\"real\");
- else if (PyBytes_Check(obj) || PyUnicode_Check(obj))
+ }
+ else if (PyBytes_Check(obj) || PyUnicode_Check(obj)) {
/*pass*/;
- else if (PySequence_Check(obj))
- tmp = PySequence_GetItem(obj,0);
- if (tmp) {
+ }
+ else if (PySequence_Check(obj)) {
PyErr_Clear();
+ tmp = PySequence_GetItem(obj, 0);
+ }
+
+ if (tmp) {
if (long_long_from_pyobj(v, tmp, errmess)) {
Py_DECREF(tmp);
return 1;
Py_DECREF(tmp);
return !(*v == -1.0 && PyErr_Occurred());
}
- if (PyComplex_Check(obj))
+
+ if (PyComplex_Check(obj)) {
+ PyErr_Clear();
tmp = PyObject_GetAttrString(obj,\"real\");
- else if (PyBytes_Check(obj) || PyUnicode_Check(obj))
+ }
+ else if (PyBytes_Check(obj) || PyUnicode_Check(obj)) {
/*pass*/;
- else if (PySequence_Check(obj))
- tmp = PySequence_GetItem(obj,0);
- if (tmp) {
+ }
+ else if (PySequence_Check(obj)) {
PyErr_Clear();
+ tmp = PySequence_GetItem(obj, 0);
+ }
+
+ if (tmp) {
if (double_from_pyobj(v,tmp,errmess)) {Py_DECREF(tmp); return 1;}
Py_DECREF(tmp);
}
import sys
import warnings
import numpy as np
+from numpy.distutils.misc_util import exec_mod_from_location
try:
import cffi
assert so1 is not None
assert so2 is not None
# import the so's without adding the directory to sys.path
- from importlib.machinery import ExtensionFileLoader
- extending = ExtensionFileLoader('extending', so1).load_module()
- extending_distributions = ExtensionFileLoader('extending_distributions', so2).load_module()
-
+ exec_mod_from_location('extending', so1)
+ extending_distributions = exec_mod_from_location(
+ 'extending_distributions', so2)
# actually test the cython c-extension
from numpy.random import PCG64
values = extending_distributions.uniforms_ex(PCG64(0), 10, 'd')
>>> np.lib.test(doctests=True) # doctest: +SKIP
"""
- from numpy.compat import npy_load_module
+ from numpy.distutils.misc_util import exec_mod_from_location
import doctest
if filename is None:
f = sys._getframe(1)
filename = f.f_globals['__file__']
name = os.path.splitext(os.path.basename(filename))[0]
- m = npy_load_module(name, filename)
+ m = exec_mod_from_location(name, filename)
tests = doctest.DocTestFinder().find(m)
runner = doctest.DocTestRunner(verbose=False)
"__mro_entries__",
"__reduce__",
"__reduce_ex__",
+ "__copy__",
+ "__deepcopy__",
})
def __getattribute__(self, name: str) -> Any:
from __future__ import annotations
import sys
+import copy
import types
import pickle
import weakref
value_ref = func(NDArray_ref)
assert value == value_ref
+ @pytest.mark.parametrize("name,func", [
+ ("__copy__", lambda n: n == copy.copy(n)),
+ ("__deepcopy__", lambda n: n == copy.deepcopy(n)),
+ ])
+ def test_copy(self, name: str, func: FuncType) -> None:
+ value = func(NDArray)
+
+ # xref bpo-45167
+ GE_398 = (
+ sys.version_info[:2] == (3, 9) and sys.version_info >= (3, 9, 8)
+ )
+ if GE_398 or sys.version_info >= (3, 10, 1):
+ value_ref = func(NDArray_ref)
+ assert value == value_ref
+
def test_weakref(self) -> None:
"""Test ``__weakref__``."""
value = weakref.ref(NDArray)()
#-----------------------------------
# Path to the release notes
-RELEASE_NOTES = 'doc/source/release/1.21.4-notes.rst'
+RELEASE_NOTES = 'doc/source/release/1.21.5-notes.rst'
#-------------------------------------------------------
# Minimum requirements for the build system to execute.
requires = [
"packaging==20.5; platform_machine=='arm64'", # macos M1
- "setuptools<49.2.0",
- "wheel==0.36.2",
+ "setuptools==59.2.0",
+ "wheel==0.37.0",
"Cython>=0.29.24,<3.0", # Note: keep in sync with tools/cythonize.py
]
ignore:Importing from numpy.matlib is
# pytest warning when using PYTHONOPTIMIZE
ignore:assertions not in test modules or plugins:pytest.PytestConfigWarning
+# TODO: remove below when array_api user warning is removed
+ ignore:The numpy.array_api submodule is still experimental. See NEP 47.
+# Ignore DeprecationWarnings from distutils
+ ignore::DeprecationWarning:.*distutils
cython==0.29.24
-wheel<0.36.3
-setuptools<49.2.0
-hypothesis==6.12.0
-pytest==6.2.4
-pytz==2021.1
-pytest-cov==2.12.0
+wheel==0.37.0
+setuptools==59.2.0
+hypothesis==6.24.1
+pytest==6.2.5
+pytz==2021.3
+pytest-cov==3.0.0
pickle5; python_version == '3.7' and platform_python_implementation != 'PyPy'
# for numpy.random.test.test_extending
-cffi
+cffi; python_version < '3.10'
# For testing types. Notes on the restrictions:
# - Mypy relies on C API features not present in PyPy
# - There is no point in installing typing_extensions without mypy
-mypy==0.812; platform_python_implementation != "PyPy"
-typing_extensions==3.10.0.0; platform_python_implementation != "PyPy"
+mypy==0.910; platform_python_implementation != "PyPy"
+#typing_extensions==3.10.0.0; platform_python_implementation != "PyPy"