On Fri, Dec 11, 2015, 18:04 David Cournapeau wrote:
On Fri, Dec 11, 2015 at 4:22 PM, Anne Archibald wrote:
Actually, GCC implements 128-bit floats in software and provides them as
__float128; there are also quad-precision versions of the usual functions.
The Intel compiler provides this as well
> What does "true vectorization" mean anyway?
Calling python functions on python objects in a for loop is not really
vectorized. It's much slower than people intend when they use numpy.
Elliot
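Elliot's point is easy to demonstrate: with dtype=object, numpy dispatches to Python-level arithmetic once per element, so the C-loop speedup people expect from "vectorized" code is lost. A minimal sketch, using the stdlib `Fraction` as a stand-in for an arbitrary-precision type:

```python
import numpy as np
from fractions import Fraction

# An object array stores Python objects; ufunc-style operations still
# work, but each element goes through a Python-level __mul__ call,
# so this scales like a plain Python loop, not a C loop.
nums = np.array([Fraction(i, 7) for i in range(5)], dtype=object)
doubled = nums * 2
print(doubled[3])  # -> 6/7
```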
_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
Hi All,
astropy `Time` does indeed use two doubles internally, but it is very limited in
the operations it allows: essentially only addition/subtraction, and
multiplication with/division by a normal double.
It would be great to have better support within numpy; it is a pity to have
a float128 type that
"Thomas Baruchel" wrote:
> While this is obviously the most relevant answer for many users because
> it will allow them to use Numpy arrays exactly
> as they would have used them with native types, the wrong thing is that
> from some point of view "true" vectorization
> will be lost.
What does "true vectorization" mean anyway?
I have a mostly complete wrapping of the double-double type from the QD
library (http://crd-legacy.lbl.gov/~dhbailey/mpdist/) into a numpy dtype.
The real problem is, as David pointed out, that user dtypes aren't quite full
equivalents of the builtin dtypes. I can post the code if there is
interest.
S
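For anyone unfamiliar with how the double-double representation QD builds on works, its core building block can be sketched in pure Python. `two_sum` below is Knuth's error-free transformation; the name and code are illustrative, not QD's actual API:

```python
def two_sum(a, b):
    """Error-free transformation: s + err equals a + b exactly,
    where s is the correctly rounded double sum and err is the
    rounding error that the double format could not hold."""
    s = a + b
    bv = s - a          # the part of b actually absorbed into s
    av = s - bv         # the part of a actually absorbed into s
    return s, (a - av) + (b - bv)

s, err = two_sum(1.0, 1e-17)
print(s, err)  # the tiny addend survives in err instead of vanishing
```

Chaining transformations like this over (hi, lo) pairs is what gives double-double its roughly 32 significant decimal digits.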
On Fri, Dec 11, 2015 at 10:45 AM, Nathaniel Smith wrote:
> On Dec 11, 2015 7:46 AM, "Charles R Harris"
> wrote:
> > On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel
> wrote:
> >>
> >> From time to time it is asked on forums how to extend precision of
> computation on Numpy array. The most common answer
On Dec 11, 2015 7:46 AM, "Charles R Harris"
wrote:
> On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel wrote:
>>
>> From time to time it is asked on forums how to extend precision of
computation on Numpy array. The most common answer
>> given to this question is: use the dtype=object with some arbitrary
>> precision module like mpmath or gmpy.
Actually, GCC implements 128-bit floats in software and provides them as
__float128; there are also quad-precision versions of the usual functions.
The Intel compiler provides this as well, I think, but I don't think
Microsoft compilers do. A portable quad-precision library might be less
painful.
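For context on what numpy already ships: `np.longdouble` (exposed as `float128` on some platforms) wraps the C `long double`, which is typically 80-bit extended precision on x86 Linux and a plain 64-bit double under MSVC, so it is not IEEE binary128 anywhere common. A quick way to check what a given platform provides:

```python
import numpy as np

# eps shrinks as precision grows; on x86 Linux longdouble usually
# reports ~18-19 significant decimal digits (80-bit extended), while
# under MSVC it matches float64 because long double == double there.
print(np.finfo(np.longdouble).eps)
print(np.finfo(np.float64).eps)
```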
> There has also been some talk of adding a user type for IEEE 128-bit doubles.
> I've looked once for relevant code for the latter and, IIRC, the available
> packages were GPL :(.
This looks like it's BSD-ish:
http://www.jhauser.us/arithmetic/SoftFloat.html
Don't know if it's any good.
C
On Fri, Dec 11, 2015 at 6:25 AM, Thomas Baruchel wrote:
> From time to time it is asked on forums how to extend precision of
> computation on Numpy array. The most common answer
> given to this question is: use the dtype=object with some arbitrary
> precision module like mpmath or gmpy.
> See
> h
From time to time it is asked on forums how to extend precision of computation
on Numpy array. The most common answer
given to this question is: use the dtype=object with some arbitrary precision
module like mpmath or gmpy.
See
http://stackoverflow.com/questions/6876377/numpy-arbitrary-precisi
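The dtype=object approach Thomas describes looks like the following. The stdlib `decimal` module is used here only so the sketch is runnable without third-party packages; `mpmath.mpf` or `gmpy2.mpfr` objects would slot into the array the same way:

```python
import numpy as np
from decimal import Decimal, getcontext

# 50 significant digits, far beyond float64's ~16.
getcontext().prec = 50

# The object array just holds references to Decimal instances;
# sum() delegates to Decimal.__add__ element by element.
a = np.array([Decimal(1) / Decimal(3)] * 3, dtype=object)
total = a.sum()
print(total)  # 49-50 nines rather than float64's 0.9999999999999998...
```

This is exactly the trade-off discussed above: full precision is preserved, but every operation goes through Python-level dispatch, so "true" vectorization is lost.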