+1 as well.
--
Olivier
. I believe so but I have not tried myself yet.
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
2016-04-20 16:57 GMT+02:00 Matthew Brett :
> On Wed, Apr 20, 2016 at 1:59 AM, Olivier Grisel
> wrote:
>> Thanks,
>>
>> I think next we could upgrade the travis configuration of numpy and
>> scipy to build and upload manylinux1 wheels to
>> http://travis-
would require publishing an official pre-built
libopenblas.so (+headers) archive or RPM package. That archive would
serve as the reference library to build scipy stack manylinux1 wheels.
--
Olivier
I think that would be very useful, e.g. for downstream projects to
check that they work properly with old versions using a simple pip
install command on their CI workers.
--
Olivier
https
/5107
--
Olivier
' object has no attribute 'get_abi_tag'
But we don't really care because manylinux1 wheels can only be
installed by pip 8.1 and later. Previous versions of pip should just
ignore those wheels and try to install from the source tarball
instead.
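As an illustration (a sketch added here, not from the original message), a
quick way to check which behavior you will get is to look at the pip version
from Python:
```
# Manylinux1 wheels require pip >= 8.1; older pips silently skip them and
# fall back to building from the source tarball.
import pip

major, minor = (int(part) for part in pip.__version__.split(".")[:2])
if (major, minor) >= (8, 1):
    print("pip %s can install manylinux1 wheels" % pip.__version__)
else:
    print("pip %s will ignore manylinux1 wheels" % pip.__version__)
```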
--
Olivier
\o/
Thank you very much Matthew. I will upload the scikit-learn wheels soon.
--
Olivier
I updated the issue:
https://github.com/xianyi/OpenBLAS-CI/issues/10#issuecomment-206195714
The random test_nanmedian_all_axis failure is unrelated to openblas
and should be ignored.
--
Olivier
Yes, sorry, I forgot to update the thread. Actually I am no longer sure
how I got this error. I am re-running the full test suite because I
cannot reproduce it when running the test_stats.py module alone.
--
Olivier
or(msg)
AssertionError:
Items are not equal:
ACTUAL: 1
DESIRED: 4
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
a few BLAS calls
that would help.
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
while we
could not achieve similar results with atlas 3.10.
--
Olivier Grisel
typo:
python -m install --upgrade pip
should read:
python -m pip install --upgrade pip
--
Olivier
The problem with the gfortran failures will be tackled by renaming the
vendored libgfortran.so library, see:
https://github.com/pypa/auditwheel/issues/24
This is orthogonal to the ATLAS vs OpenBLAS decision though.
--
Olivier
has set up a
buildbot-based CI to test OpenBLAS on many CPU architectures and is
running the scipy tests continuously to detect regressions early on:
https://github.com/xianyi/OpenBLAS/issues/785
http://build.openblas.net/waterfall
https://github.com/xianyi/OpenBLAS-CI/
--
Olivier Grisel
Thanks Matthew! I just installed it and ran the tests and it all works
(except for test_system_info.py, which fails because I am missing
vcvarsall.bat on that system, but this is expected).
--
Olivier
f OpenBLAS.
--
Olivier
0))"
Also note that all scipy tests pass:
Ran 20180 tests in 366.163s
OK (KNOWNFAIL=97, SKIP=1657)
--
Olivier Grisel
I used docker to run the numpy tests on base/archlinux. I had to
pacman -Sy python-pip openssl and gcc (required by one of the numpy
tests):
```
Ran 5621 tests in 34.482s
OK (KNOWNFAIL=4, SKIP=9)
```
Everything looks fine.
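For reference, a minimal way to reproduce this kind of run inside the
container (assuming numpy and its test runner, nose at the time, are
installed) is just:
```
# Runs the full numpy test suite and prints a summary like the one above,
# e.g. "Ran 5621 tests ... OK (KNOWNFAIL=..., SKIP=...)".
import numpy

numpy.test()
```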
--
Olivier
op of mingw-w64 w.r.t. VS2015 but as
far as I know it's not supported yet either. Once the issue is fixed at the
upstream level, I think mingwpy could be rebuilt to benefit from the fix.
--
Olivier Grisel
2015-07-10 22:13 GMT+02:00 Carl Kleffner :
>
>
> 2015-07-10 19:06 GMT+02:00 Olivier Grisel :
>>
>> 2015-07-10 16:47 GMT+02:00 Carl Kleffner :
>> > Hi Olivier,
>> >
>> > yes, this is all explained in
>> > https://github.com/xianyi/OpenBL
2015-07-11 18:30 GMT+02:00 Olivier Grisel :
> 2015-07-10 20:20 GMT+02:00 Carl Kleffner :
>> I could provide you with a debug build of libopenblaspy.dll. The segfault -
>> if ithrown from openblas - could be detected with gdb or with the help of
>> backtrace.dll.
>
>
tions you used to build openblas with mingwpy? Also which
version of openblas did you use? The last stable or some snapshot from
the develop branch?
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
2015-07-10 16:47 GMT+02:00 Carl Kleffner :
> Hi Olivier,
>
> yes, this is all explained in
> https://github.com/xianyi/OpenBLAS/wiki/Faq#choose_target_dynamic as well.
> This seems to be necessary for CI systems, right?
The auto detection should work. If not, it's a bug
2015-07-10 18:42 GMT+02:00 Olivier Grisel :
>
>> I assume you've already checked that this is a Windows specific issue?
>
> I am starting a rackspace VM with linux to check. Hopefully it will
> also be detected as Barcelona by openblas.
I just built OpenBLAS 0.2.14 and num
2015-07-10 18:31 GMT+02:00 Nathaniel Smith :
> On Jul 10, 2015 10:51 AM, "Olivier Grisel" wrote:
>>
>> I narrowed down the segfault from the scipy tests on my machine to:
>>
>> OPENBLAS_CORETYPE='Barcelona' /c/Python34_x64/python -c"import n
't know if it's a bug in OpenBLAS' coretype detection or in the
SGESDD kernel for the Barcelona architecture.
--
Olivier
t.github.com/ogrisel/ad4e547a32d0eb18b4ff
--
Olivier
e new test results for scipy.
--
Olivier
Hi Carl,
Sorry for the slow reply.
I ran some tests with your binstar packages:
I installed numpy, scipy and mingwpy for Python 2.7 32 bit and Python
3.4 64 bit (downloaded from python.org) on a freshly provisioned
Windows VM on rackspace.
I then used the mingwpy C & C++ compilers to build the
e do you put the BLAS / LAPACK header
files?
I would like to help automate that build in some CI environment
(with either Windows or Linux + wine) but I am afraid that I am not
familiar enough with the Windows build of numpy & scipy to get it
working all by myself.
--
+1 for bundling OpenBLAS both in scipy and numpy in the short term.
Introducing a new dependency project for OpenBLAS sounds like a good
idea but this is probably more work.
--
Olivier
2015-01-23 9:25 GMT+01:00 Carl Kleffner :
> All tests for the 64bit builds passed.
Thanks very much Carl. Did you have to patch the numpy / distutils
source to build those wheels, or is this using the source code from
the official releases?
--
Olivier
http://twitter.com/ogrisel - h
2014-07-31 22:40 GMT+02:00 Matthew Brett :
>
> Sure, I built and uploaded:
>
> scipy-0.12.0 py27
> scipy-0.13.0 py27, 33, 34
>
> Are there any others you need?
Thanks, this is already great.
--
Olivier
http://twitter.com/ogrisel - http
? As scipy is even slower to build
that would be even more helpful.
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
.microsoft.com/?linkid=7729279
FYI, I recently updated the scikit-learn documentation for building
under Windows, both for Python 2 and Python 3 as well as 32-bit and
64-bit architectures:
http://scikit-learn.org/stable/install.html#building-on-windows
The same build environment should work for nump
alled
> specs file - see readme.txt in the archive.
Have the patches to build numpy and scipy with mingw-w64 been merged
in the master branches of those projects?
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
The dtype returned by np.where looks right (int64):
>>> import platform
>>> platform.architecture()
('64bit', 'WindowsPE')
>>> import numpy as np
>>> np.__version__
'1.9.0b1'
>>> a = np.zeros(10)
>>>
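The transcript above is cut off by the archive; a minimal sketch of the kind
of check being discussed (illustrative values, not the original session) is:
```
# On a 64-bit build np.where returns index arrays of the platform integer
# type (np.intp), which is int64 here -- the point made above.
import numpy as np

a = np.zeros(10)
indices = np.where(a == 0)[0]
print(indices.dtype)  # int64 on a 64-bit Python / numpy
```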
antics is
good for that case. Having an explicit dtype might be even better.
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
appveyor-demo in the sense that for sklearn I decided to
actually install the generated wheel package and run the tests on the
resulting installed library rather than on the project source folder.
--
Olivier
Best,
--
Olivier
ike a very
promising solution (and could probably be run on the appveyor infra as
well).
Best,
--
Olivier
ource left to help with
upstream projects such as numpy and scipy.
Best,
--
Olivier
Hi Matthew and Ralf,
Has anyone managed to build working whl packages for numpy and scipy
on win32 using the static mingw-w64 toolchain?
--
Olivier
Just successfully tested on Python 3.4 from python.org / OSX 10.9 and
all sklearn tests pass, including a test that involves
multiprocessing and that used to crash with Accelerate.
Thanks very much!
--
Olivier
hat would make sense, indeed.
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
e limited evaluation package though.
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
evelopers would rather avoid if possible. If this feature is
important to you, please speak up on the blis-devel mailing list.
"""
Also Windows support is still considered experimental according to the same FAQ.
--
Olivier
NO_AFFINITY=1 flag to avoid the
issue.
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
2014-03-31 13:53 GMT+02:00 Olivier Grisel :
> 2014-03-28 23:13 GMT+01:00 Matthew Brett :
>> Hi,
>>
>> On Fri, Mar 28, 2014 at 3:09 PM, Olivier Grisel
>> wrote:
>>> This is great! Has anyone started to work on OSX whl packages for
>>> scipy? I assume t
2014-03-28 23:13 GMT+01:00 Matthew Brett :
> Hi,
>
> On Fri, Mar 28, 2014 at 3:09 PM, Olivier Grisel
> wrote:
>> This is great! Has anyone started to work on OSX whl packages for
>> scipy? I assume the libgfortran, libquadmath & libgcc_s dylibs will
>> not make
ages?
--
Olivier
2014-03-28 22:55 GMT+01:00 Julian Taylor :
> On 28.03.2014 22:38, Olivier Grisel wrote:
>> 2014-03-28 22:18 GMT+01:00 Nathaniel Smith :
>>> I thought OpenBLAS is usually used with reference lapack?
>>
>> I am no longer sure myself. Debian & thus Ubuntu seem to be
my
two setups.
--
Olivier
2014-03-27 14:55 GMT+01:00 :
> On Wed, Mar 26, 2014 at 5:17 PM, Olivier Grisel
> wrote:
>> My understanding of Carl's effort is that the long term goal is to
>> have official windows whl packages for both numpy and scipy published
>> on PyPI with a builtin BLAS /
2014-03-26 16:27 GMT+01:00 Olivier Grisel :
> Hi Carl,
>
> I installed Python 2.7.6 64 bits on a windows server instance from
> rackspace cloud and then ran get-pip.py and then could successfully
> install the numpy and scipy wheel packages from your google drive
> folder. I t
2014-03-26 22:31 GMT+01:00 Julian Taylor :
> On 26.03.2014 22:17, Olivier Grisel wrote:
>>
>> The problem with ATLAS is that you need to select the number of thread
>> at build time AFAIK. But we could set it to a reasonable default (e.g.
>> 4 threads) for the default
Just something that works.
The problem with ATLAS is that you need to select the number of threads
at build time AFAIK. But we could set it to a reasonable default (e.g.
4 threads) for the default Windows package.
--
Olivier
s_src]) or by setting
the BLAS_SRC environment variable.
warnings.warn(BlasSrcNotFoundError.__doc__)
Out[4]: {}
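For context, that empty dict is the result of a numpy.distutils.system_info
query; a minimal sketch of such a query (not the exact command from the
original session, and only meaningful for the numpy versions discussed here,
since numpy.distutils has since been deprecated) is:
```
# Ask numpy.distutils what it knows about BLAS; an empty dict means nothing
# was found (hence the BlasSrcNotFoundError warning above).
from numpy.distutils.system_info import get_info

print(get_info('blas_src'))  # {} when no BLAS sources are detected
print(get_info('blas_opt'))  # optimized BLAS libraries, if any were found
```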
Would it make sense to embed the blas and lapack header files as part
of this numpy wheel and make numpy.distutils.system_info return the
lib
>>
>>
> I don't have a strong feeling either way on '@@' . Matrix inverses are
> pretty common in matrix expressions, but I don't know that the new operator
> offers much advantage over a function call. The positive integer powers
> might be useful in some domains, as others have pointed out, but
> computational practice one would tend to factor the evaluation.
>
> Chuck
>
Personally I think it should go in, because:
- it's useful (although marginally), as in the examples previously mentioned
- it's what people will expect
- it's the only reasonable use of @@ once @ makes it in
As far as the details about precedence rules and what not... Yes, someone
should think about them and come up with rules that make sense, but since
it will pretty much only be used in unambiguous situations, this
shouldn't be a blocker.
-=- Olivier
2014-02-20 23:56 GMT+01:00 Carl Kleffner :
> Hi,
>
> 2014-02-20 23:17 GMT+01:00 Olivier Grisel :
>
>> I had a quick look (without running the procedure) but I don't
>> understand some elements:
>>
>> - apparently you never tell in the numpy's site.cf
Indeed I just ran the bench on my Mac and OSX Veclib is more than 2x
faster than OpenBLAS on such square matrix multiplication (I just
have 2 physical cores on this box).
MKL from Canopy Express is slightly slower than OpenBLAS for this GEMM
bench on that box.
I really wonder why Veclib is faster in
have MinGW installed?
Best,
--
Olivier
']
    language = f77
openblas_info:
    libraries = ['openblas', 'openblas']
    library_dirs = ['/opt/OpenBLAS/lib']
    language = f77
blas_mkl_info:
    NOT AVAILABLE
--
Olivier
4352
>>> %time import numpy
CPU times: user 84 ms, sys: 464 ms, total: 548 ms
Wall time: 59.3 ms
>>> psutil.Process(os.getpid()).get_memory_info().rss / 1e6
27.906048
Thanks for the tip.
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
Thanks for sharing, this is all very interesting.
Have you tried to have a look at the memory usage and import time of
numpy when linked against libopenblas.dll?
--
Olivier
= openblas
But this is unrelated to the previous numpy memory pattern as it
occurs independently of scipy.
--
Olivier
is just ~15MB.
I would be very interested in any help on this:
- can you reproduce this behavior?
- do you have an idea of a possible cause?
- how to investigate?
--
Olivier
http://twitter.com/ogrisel - http://github.com/ogrisel
2014-02-20 11:32 GMT+01:00 Julian Taylor :
> On Thu, Feb 20, 2014 at 1:25 AM, Nathaniel Smith wrote:
>> Hey all,
>>
>> Just a heads up: thanks to the tireless work of Olivier Grisel, the OpenBLAS
>> development branch is now fork-safe when built with its default threadi
99.38460995],
[ 100., 100., 100., 100. ,
100.]])
-=- Olivier
2013/5/26 Sudheer Joseph
> Thank you Aronne for the helping hand,
> I tried the transpose as a check
> when I could not get it correct other
... ;)
>
> However, pip is really awful on Windows.
>
> If you have a virtualenv and you use --upgrade, it wants to upgrade all
> package dependencies (!), but it doesn't know how (with numpy and scipy).
>
> (easy_install was so much nicer.)
>
> Josef
>
You can use --no-deps to prevent pip from trying to upgrade dependencies.
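For completeness, a hedged sketch of the equivalent invocation driven from
Python (the plain command-line form is simply `pip install --upgrade
--no-deps <package>`):
```
# Upgrade a single package without touching its dependencies.
import subprocess
import sys

subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "--upgrade", "--no-deps", "numpy"]
)
```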
-=- Olivier
The Python Package Index (https://pypi.python.org/pypi) is to my knowledge
the largest centralized source of Python packages. That's where
easy_install and pip typically fetch them so that you can install from the
command line without manual download.
-=- Olivier
2013/4/7 Happyman
>
Congrats and thanks to Andreas and everyone involved in the release,
the website fixes and the online survey setup.
I posted Andreas' blog post on HN and reddit:
- http://news.ycombinator.com/item?id=5094319
-
http://www.reddit.com/r/programming/comments/170oty/scikitlearn_013_is_out_machine_lear
2013/1/18 Matthew Brett :
> Hi,
>
> On Fri, Jan 18, 2013 at 7:58 PM, Chris Barker - NOAA Federal
> wrote:
>> On Fri, Jan 18, 2013 at 4:39 AM, Olivier Delalleau wrote:
>>> Le vendredi 18 janvier 2013, Chris Barker - NOAA Federal a écrit :
>>
>>> If y
Le vendredi 18 janvier 2013, Chris Barker - NOAA Federal a écrit :
> On Thu, Jan 17, 2013 at 5:34 PM, Olivier Delalleau
> >
> wrote:
> >> Yes, I do understand that. The difference - as I understand it - is
> >> that back in the day, numeric did not have the the fl
Le vendredi 18 janvier 2013, Chris Barker - NOAA Federal a écrit :
> On Thu, Jan 17, 2013 at 5:19 PM, Olivier Delalleau
> >
> wrote:
> > 2013/1/16 >:
> >> On Wed, Jan 16, 2013 at 10:43 PM, Patrick Marsh
> >> > wrote:
>
> >> I could live
ng my head against this
problem for a while now, because it's simple and consistent.
Since most of the related issues seem to come from integer arrays, a
middle-ground may be the following:
- Integer-type arrays get upcasted by scalars as in usual array /
array operations.
- Float/Compl
output. And
although you may blame the programmer for not being careful enough
about types, he couldn't expect it might crash the application back
when this code was written.
Long story short, +1 for warning, -1 for exception, and +1 for a
config flag that al
>>>
>>> Can we at least have a np.nans() and np.infs() functions? This should
>>> cover an additional 4% of use-cases.
>>>
>>> Ben Root
>>>
>>> P.S. - I know they aren't verbs...
>>
>>
>> Would it be too weird or clumsy to extend the empty and empty_like functions
>> to do the filling?
>>
>> np.empty((10, 10), fill=np.nan)
>> np.empty_like(my_arr, fill=np.nan)
>
> That sounds like a good idea to me. Someone wanting a fast way to
> fill an array will probably check out the 'empty' docstring first.
>
> See you,
>
> Matthew
+1 from me. Even though it *is* weird to have both "empty" and "fill" ;)
-=- Olivier
may not find this pattern.
>
> => no new API
> For : easy maintenance
> Con : harder for users to discover fill pattern, filling a new array
> requires two lines instead of one.
>
> So maybe the decision rests on:
>
> How important is it that users see these function names in the
> namespace in order to discover the pattern "a = ones(shape) ;
> a.fill(val)"?
>
> How important is it to obey guidelines for no-return-from-in-place?
>
> How important is it to avoid expanding the namespace?
>
> How common is this pattern?
>
> On the last, I'd say that the only common use I have for this pattern
> is to fill an array with NaN.
My 2 cts from a user perspective:
- +1 to have such a function. I usually use numpy.ones * scalar
because honestly, spending two lines of code for such a basic
operation seems like a waste, even if it's slower and potentially
dangerous due to casting rules (see the sketch after this list).
- I think having a noun rather than a verb makes more sense since we
have numpy.ones and numpy.zeros (and I always read "numpy.empty" as
"give me an empty array", not "empty an array").
- I agree the name collision with np.ma.filled is a problem. I have no
better suggestion though at this point.
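As a concrete illustration of the two idioms discussed above (a sketch added
here, not part of the original message):
```
import numpy as np

# One-liner, but it does a full multiply and follows the usual casting rules.
a = np.ones((3, 3)) * np.nan

# The two-line pattern that the proposed helper would wrap.
b = np.empty((3, 3))
b.fill(np.nan)
```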
-=- Olivier
, then the result will only
contain positive values (upcast to int32). Do you believe this is good
behavior?
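A small sketch of the behavior being questioned (added for illustration; the
exact rules were those of numpy 1.6/1.7 and have been revised in later
releases):
```
import numpy as np

x = np.array([30000, 100, 7], dtype=np.int16)  # only positive values

# A small Python scalar keeps the int16 dtype, so the addition can silently
# wrap around to negative values.
print(x + 5000)             # int16, first element wraps to a negative number

# An explicit int32 scalar (or, under the value-based rules, any scalar too
# large for int16) upcasts the result to int32, so it stays positive -- the
# inconsistency pointed out above.
print(x + np.int32(70000))  # int32, all values positive
```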
-=- Olivier
mber, but not necessarily to hold the
result. So for instance if x is an int16 array with only positive values,
the result of this addition may contain negative values (or not, depending
on the number being drawn). That's the part I feel is flawed with this
behavior; it is quite unpredictab
om a few tests I tried), and to me this is an argument in
favor of the upcast behavior for non-inplace operations.
-=- Olivier
gt;> and the other has ndim>0, downcast the ndim==0 item to the smallest
>> width that is consistent with its value and the other operand's type.
>>
>
> Well, that leaves the maybe not quite implausible proposal of saying
> that numpy scal
hat it would lead to a long and painful deprecation
process. I may be wrong though, I really haven't thought about it
much.
-=- Olivier
2013/1/6 Nathaniel Smith :
> On Mon, Jan 7, 2013 at 1:43 AM, Olivier Delalleau wrote:
>> 2013/1/5 Nathaniel Smith :
>>> On Fri, Jan 4, 2013 at 5:25 PM, Andrew Collette
>>> wrote:
>>>> I agree the current behavior is confusing. Regardless of the detail
this
very specific scenario (an "unsafe" cast in a mixed scalar/array
operation), would that be possible?
Also, do we all agree that "float32 array + float64 scalar" should
cast the scalar to float32 (thus resulting in a float32 array as
output) without warning, eve
a fixed behavior (either rollover or upcast) regardless of the
numeric value being provided.
Looking at what other numeric libraries are doing is definitely a good
suggestion.
-=- Olivier
s, because it's a subtle
difference that users could have trouble understanding & foreseeing.
The expected behavior of numpy functions when providing them with
non-numpy objects is that they should behave the same as if we had called
numpy.asarray() on these objec
ted.
If we always do the silent cast, it will significantly break existing
code relying on the 1.6 behavior and increase the risk of doing
something unexpected (bad on #2 & #4).
If we always upcast, we may break existing code and lose efficiency
(bad on #3 and #4).
If we keep current behavior,
lution is to forget about trying to be smart and always
upcast the operation. That would be my 2nd preferred solution, but it
would make it very annoying to deal with Python scalars (typically
int64 / float64) that would be upcasting lots of things, potentially
breaking a significant amount of exi
upgraded... except my little module that may take
advantage of Numpy improvements.
-=- Olivier
I'd say it's a good idea, although I hope 1.7.x will still be maintained
for a while for those who are still stuck with Python 2.4-5 (sometimes you
don't have a choice).
-=- Olivier
2012/12/13 Charles R Harris
> The previous proposal to drop python 2.4 support garnered no
r/lib/atlas-base/libatlas.so;
> export
> LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/lapack:/usr/lib/atlas-base;
>
Is the file libatlas.so.3 present in /usr/lib/lapack:/usr/lib/atlas-base?
-=- Olivier
> On Mon, Dec 10, 2012 at 2:54 PM, Alexander Eberspächer <
> alex.eberspaec...@gm
Current behavior looks sensible to me. I personally would prefer no warning
but I think it makes sense to have one as it can be helpful to detect
issues faster.
-=- Olivier
2012/11/21 Charles R Harris
> What should be the value of the mean, var, and std of empty arrays?
> Currently
>
loat32(5e38))
> O3 inf
>
> However, these two still surprises me:
>
> I5 (np.float32()*1).dtype
> O5 dtype('float64')
>
> I6 (np.float32()*np.int32(1)).dtype
> O6 dtype('float64')
>
That's because the current way of finding out the result's dtype is based
on input dtypes only (not on numeric values), and numpy.can_cast('int32',
'float32') is False, while numpy.can_cast('int32', 'float64') is True (and
same for int64).
Thus it decides to cast to float64.
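A quick way to see those casting rules directly (sketch added for
illustration):
```
import numpy as np

# int32 cannot be cast "safely" to float32 (not all values are representable
# exactly), but it can be cast to float64 -- hence the float64 result.
print(np.can_cast('int32', 'float32'))  # False
print(np.can_cast('int32', 'float64'))  # True
print(np.can_cast('int64', 'float64'))  # True as well, as noted above
```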
-=- Olivier
2012/11/16 Olivier Delalleau
> 2012/11/16 Charles R Harris
>
>>
>>
>> On Thu, Nov 15, 2012 at 11:37 PM, Charles R Harris <
>> charlesr.har...@gmail.com> wrote:
>>
>>>
>>>
>>> On Thu, Nov 15, 2012 at 8:24 PM, Gökhan Sever w
ions (scalar/scalar or
array/array) use casting rules that don't depend on magnitude, and the
upcast of int{32,64} mixed with float32 has always been float64 (probably
because the result has to be a kind of float, and float64 makes it possible
to represent exactly a larger integer rang
ed memory. For more info:
http://deeplearning.net/software/theano/tutorial/python-memory-management.html#internal-memory-management
-=- Olivier
2012/11/13 Austin Bingham
> OK, if numpy is just subject to Python's behavior then what I'm seeing
> must be due to the vagaries of Pyth