2016-02-13 17:42 GMT+01:00 Charles R Harris :
>> The Fortran modulo function, which is the same basic function as in my
>> branch, does not specify any bounds on the result for floating-point
>> numbers, but gives only the formula, modulus(a, b) = a - b*floor(a/b),
>> which has the advantage of being si
2016-02-09 18:02 GMT+01:00 Gregor Thalhammer :
>> It is not suitable as a standard for numpy.
>
> Why should numpy not provide fast transcendental math functions? For
> linear algebra it supports fast implementations, even non-free ones (MKL).
> Wouldn't it be nice if numpy outperformed C?
Floating point op
2016-02-08 18:54 GMT+01:00 Julian Taylor :
> which version of glibm was used here? There are significant differences
> in performance between versions.
> Also the input ranges are very important for these functions, depending
> on input the speed of these functions can vary by factors of 1000.
>
> g
> The npy_math functions are used if otherwise unavailable OR if someone
> has at some point noticed that say glibc 2.4-2.10 has a bad quality
> tan (or whatever) and added a special case hack that checks for those
> particular library versions and uses our built-in version instead.
> It's not the
Hi all,
I wanted to know if there is any sane way to build numpy while linking to a
different implementation of libm?
A drop-in replacement for libm (e.g. openlibm) should in principle work, I
guess, but I did not manage to actually make it work. As far as I
understand the build code, setting MATH
Hey,
I use complex numbers a lot and obviously need the modulus a lot. However,
I am not sure if we need a special function for _performance_ reasons.
At 10:01 AM 9/20/2015, you wrote:
It is, but since that involves taking sqrt, it is *much* slower. Even now,
```
In [32]: r = np.arange(1)*(1
```
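The point about sqrt cost suggests a common workaround (a sketch, not from the original message): when magnitudes are only being compared or ordered, the squared modulus avoids the sqrt entirely.

```python
import numpy as np

z = np.arange(4) * (1 + 1j)   # a small complex sample

# np.abs computes sqrt(re**2 + im**2); the squared modulus
# skips the sqrt, which is enough for comparisons by magnitude.
abs2 = z.real**2 + z.imag**2

assert np.allclose(np.sqrt(abs2), np.abs(z))
```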
> I think that numpy.fft should be left there in its current state
> (although perhaps as deprecated). Now scipy.fft should have a good generic
> algorithm as default, and easily allow for different implementations to be
> accessed through the same interface.
I also agree with the above. But I want to a
> I think for Scipy homebrew uses the Gfortran ABI:
> https://trac.macports.org/browser/trunk/dports/python/py-scipy/Portfile
FWIW, Homebrew is not MacPorts; it's a more recent replacement that
seems to be taking over gradually.
_______________________________________________
NumPy-Discussion mailing list
> is this intended?
>
> np.histogramdd([[1,2],[3,4]],bins=2)
>
> (array([[ 1., 0.],
>         [ 0., 1.]]),
>  [array([ 1. , 1.5, 2. ]), array([ 3. , 3.5, 4. ])])
>
> np.histogram2d([1,2],[3,4],bins=2)
>
> (array([[ 1., 0.],
>         [ 0., 1.]]),
>  array([ 1. , 1.5, 2. ]),
>  array([ 3
hi,
is this intended?
np.histogramdd([[1,2],[3,4]],bins=2)
(array([[ 1., 0.],
        [ 0., 1.]]),
 [array([ 1. , 1.5, 2. ]), array([ 3. , 3.5, 4. ])])

np.histogram2d([1,2],[3,4],bins=2)

(array([[ 1., 0.],
        [ 0., 1.]]),
 array([ 1. , 1.5, 2. ]),
 array([ 3. , 3.5, 4. ]))
np.h
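The outputs above can be reproduced and compared directly — a minimal sketch of the return-structure difference the message asks about:

```python
import numpy as np

# histogramdd returns (H, [edges_x, edges_y]) with the edges in a list,
# while histogram2d returns them flattened as (H, edges_x, edges_y).
H_dd, edges_dd = np.histogramdd([[1, 2], [3, 4]], bins=2)
H_2d, xedges, yedges = np.histogram2d([1, 2], [3, 4], bins=2)

assert np.array_equal(H_dd, H_2d)
assert np.array_equal(edges_dd[0], xedges)
assert np.array_equal(edges_dd[1], yedges)
```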
Hi,
>> Finally, the former Scientific.IO NetCDF interface is now part of
>> scipy.io, but I assume it only supports netCDF 3 (the documentation
>> is not specific about that). This might be the easiest option for a
>> portable data format (if Matlab supports it).
> Yes, it is NetCDF 3.
In recent
Hi Ralf,
I cloned numpy/master and played around a little.
When giving the bins explicitly, histogram2d and histogramdd now work
as expected in all the tests I tried.
However, some of the cases with a missing bin specification appear
somewhat inconsistent.
The first question is if creating arbitrar
Hi,
I was wondering why histogram2d and histogramdd raise a ValueError when
fed with empty data of the correct dimensions. I came across this as a
corner case when calling histogram2d from my own specialized histogram
function.
In comparison, histogram does handle this case correctly when bins ar
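A sketch of the corner case being described — the histogram2d call raised a ValueError in the release discussed here, but works in later numpy versions when an explicit range is given:

```python
import numpy as np

# histogram copes with empty input when the bins/range are explicit:
counts, edges = np.histogram([], bins=2, range=(0, 1))

# histogram2d with empty data of the correct dimensionality
# (the case the report says used to raise a ValueError):
H, ex, ey = np.histogram2d([], [], bins=2, range=[(0, 1), (0, 1)])
```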
Robert,
your answer does work: after indexing with () I can then further index
into the datatype.
In [115]: a_rank_0[()][0]
Out[115]: 0.0
I guess I just found the fact confusing that a_rank_1[0] and a_rank_0
compare and print equal but behave differently under indexing.
More precisely if I do
I
Hi,
I noticed that I can index into a dtype when I take an element
of a rank-1 array but not if I make a rank-0 array directly. This seems
inconsistent. A bug?
Nils
In [76]: np.version.version
Out[76]: '1.5.1'
In [78]: dt = np.dtype([('x', ' in ()
IndexError: 0-d arrays can't be indexed
In [
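A minimal reproduction of the inconsistency (the exact dtype in the report was garbled in the archive; the '<f8' fields below are assumptions):

```python
import numpy as np

# Assumed stand-in for the report's dtype.
dt = np.dtype([('x', '<f8'), ('y', '<f8')])

a_rank_1 = np.zeros(1, dtype=dt)   # shape (1,)
a_rank_0 = np.zeros((), dtype=dt)  # shape ()

# An element of the rank-1 array is a void scalar and can be indexed:
assert a_rank_1[0][0] == 0.0

# The rank-0 array itself cannot be indexed with an integer...
try:
    a_rank_0[0]
except IndexError:
    pass

# ...but indexing with the empty tuple extracts the same void scalar:
assert a_rank_0[()][0] == 0.0
```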
Hi,
why is
>>> bool(np.dtype(np.float))
False
?
I came across this when using this python idiom:
def f(dtype=None):
    if not dtype:
        print 'using default dtype'
If there is no good reason to have a False truth value, I would vote for
making it True since that is what one would expect
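A sketch of the safer idiom, sidestepping the truthiness question by testing identity against None (np.float64 stands in for the report's np.float, which modern numpy has removed):

```python
import numpy as np

def f(dtype=None):
    # Explicit None check: does not depend on bool(np.dtype(...)).
    if dtype is None:
        dtype = np.float64
    return np.zeros(3, dtype=dtype)

assert f().dtype == np.float64
assert f(dtype=np.int32).dtype == np.int32
```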
Hi,
What about the normed=True bug in numpy.histogram? It was discussed here
a while ago and fixed (although I did not find it on the tracker), but
the message suggests it just missed 1.5.0? I don't have 1.5 installed,
so I can't check right now.
Sorry to mention my personal pet bug, but mayb
> I think (a corrected) density histogram is core functionality for
> unequal bin lengths.
>
> The graph with raw count in the case of unequal bin sizes would be
> quite misleading when plotted and interpreted on the real line and not
> on discrete points (shaded areas instead of vertical lines).
> On Sat, Aug 28, 2010 at 04:12, Zbyszek Szmek wrote:
>> Hi,
>>
>> On Fri, Aug 27, 2010 at 06:43:26PM -0600, Charles R Harris wrote:
>>>   On Fri, Aug 27, 2010 at 2:47 PM, Robert Kern
>>>   wrote:
>>>
>>>     On Fri, Aug 27, 2010 at 15:32, David Huard
>>>     wrote:
>>>     > Nils and Josep
Hi again,
first a correction: I posted
> I believe np.histogram(data, bins, normed=True) effectively does:
> np.histogram(data, bins, normed=False) / (bins[-1] - bins[0])
>
> However, it _should_ do
> np.histogram(data, bins, normed=False) / bin_widths

but there is a normalization mis
Hi,
I found what looks like a bug in histogram, when the option normed=True
is used together with non-uniform bins.
Consider this example:
import numpy as np
data = np.array([1, 2, 3, 4])
bins = np.array([.5, 1.5, 4.5])
bin_widths = np.diff(bins)
(counts, dummy) = np.histogram(data, bins)
(densi
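The report's example can be completed to show the correct per-bin-width normalization (the density=True keyword below is the modern spelling, available since numpy 1.6, and is an addition to the original snippet):

```python
import numpy as np

data = np.array([1, 2, 3, 4])
bins = np.array([.5, 1.5, 4.5])       # non-uniform bin widths: 1 and 3
bin_widths = np.diff(bins)

counts, _ = np.histogram(data, bins)  # counts = [1, 3]

# A correct density divides each count by its own bin width,
# not by the total range; the result integrates to 1.
density = counts / bin_widths / counts.sum()

# Modern numpy exposes the same normalization via density=True:
density2, _ = np.histogram(data, bins, density=True)

assert np.allclose(density, density2)
assert np.isclose((density * bin_widths).sum(), 1.0)
```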