https://github.com/numpy/numpy/pull/5822 is a year-old PR which allows many
random distributions to have a scale of exactly 0 (in which case a stream
of the appropriate constant value, typically zeros, is returned).
It passes all tests and has been sitting there for a while. Would a core
dev be kind enough to take a look?
https://github.com/numpy/numpy/issues/3511 proposed (nearly three years
ago) to return an integer when `builtins.round` (which calls the
`__round__` dunder method; hereafter called `round`, not to be confused
with `np.round`) is called with a single argument. Currently, `round`
returns a float, even though e.g. round(np.uint64(2**62-1)) would
actually fit in an int64 (or a uint64), so arguably the conversion to
float makes things worse.
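For context, a minimal demonstration of the promotion rule being discussed (neither int64 nor uint64 can represent all values of the other, so NumPy falls back to float64):

```python
import numpy as np

# Mixing uint64 and int64 promotes to float64.
result = np.uint64(2**62 - 1) + np.int64(1)
print(result.dtype)  # float64

# The round-trip through float64 is lossy for values this large
# (float64 has only a 53-bit mantissa):
large = np.uint64(2**62 - 1)
print(int(np.float64(large)) == 2**62 - 1)
```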
Antony
2016-04-12 19:56 GMT-07:00 Nathaniel Smith :
> So what type should uint64 + int64 return?
> On Apr 12, 2016 7:17 PM, "Antony Lee" wrote:
This kind of issue (see also https://github.com/numpy/numpy/issues/3511)
has become more annoying now that indexing requires integers (indexing with
a float raises a VisibleDeprecationWarning). The argument "dividing a
uint by an int may give a result that fits in neither a uint nor an
int" d
In a sense this discussion is really about making np.array(iterable) more
efficient, so I restarted the discussion at
https://mail.scipy.org/pipermail/numpy-discussion/2016-February/075059.html
Antony
2016-02-18 14:21 GMT-08:00 Chris Barker :
> On Thu, Feb 18, 2016 at 10:15 AM, Antony Lee wrote:
>
>> So how can np.array(range(...)) even work?
>>
>
> range() (in py3) is not a generator, nor is it an iterator. It is a range
> object, which is lazily evaluated, and satisfies both the iterator protocol
> and the sequence protocol.
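To illustrate the distinction (plain Python, nothing numpy-specific assumed):

```python
r = range(5)

# Sequence protocol: a range has a length and supports indexing.
assert len(r) == 5
assert r[2] == 2

# It is iterable: iter() yields a fresh iterator each time ...
assert list(iter(r)) == [0, 1, 2, 3, 4]

# ... but it is not itself an iterator: it has no __next__.
assert not hasattr(r, "__next__")
```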
Actually, while working on https://github.com/numpy/numpy/issues/7264 I
realized that the memory efficiency (one-pass) argument is simply incorrect:
import numpy as np

class A:
    def __getitem__(self, i):
        print("A get item", i)
        return [np.int8(1), np.int8(2)][i]
    def __len__(self):
        return 2

np.array(A())  # each index is requested more than once, i.e. not a single pass
See earlier discussion here: https://github.com/numpy/numpy/issues/6326
Basically, naïvely sorting may be faster than a not-so-optimized version of
quickselect.
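For reference, the selection guarantee np.partition provides (a sketch; whether it actually beats a full sort is the implementation question being discussed):

```python
import numpy as np

rng = np.random.RandomState(0)
a = rng.rand(101)
k = 50

# After partitioning, the element at position k is exactly the one a full
# sort would place there (here, the median); everything before it is <=,
# everything after is >=.
p = np.partition(a, k)
assert p[k] == np.sort(a)[k]
assert (p[:k] <= p[k]).all() and (p[k:] >= p[k]).all()
```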
Antony
2016-02-15 21:49 GMT-08:00 Joseph Fox-Rabinovitz :
> I would like to add a `weights` keyword to `np.partition`,
> `np.percentile
np.array(C())
Fatal Python error: Segmentation fault
2016-02-15 0:10 GMT-08:00 Nathaniel Smith :
> On Sun, Feb 14, 2016 at 11:41 PM, Antony Lee
> wrote:
> > I wonder whether numpy is using the "old" iteration protocol (repeatedly
> > calling x[i] for increasing i until S
> > So how can np.array(range(...)) even work?
2016-02-14 22:21 GMT-08:00 Ralf Gommers :
>
> On Sun, Feb 14, 2016 at 10:36 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>> On Sun, Feb 14, 2016 at 7:36 AM, Ralf Gommers wrote:
I was thinking (1) (special-case range()); however (2) may be more
generally applicable and useful.
Antony
2016-02-14 6:36 GMT-08:00 Ralf Gommers :
>
>
> On Sun, Feb 14, 2016 at 9:21 AM, Antony Lee
> wrote:
>
>> re: no reason why...
>> This has nothing to do with Py
reasons).
re: iterable vs iterator: check for the presence of the __next__ special
method; or use isinstance(x, Iterator) for an iterator, and
isinstance(x, Iterable) and not isinstance(x, Iterator) for an iterable
that is not an iterator.
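A concrete version of that check using collections.abc (note every iterator is itself iterable, so the order of the tests matters):

```python
from collections.abc import Iterable, Iterator

r = range(3)
it = iter(r)

# A range is iterable but not an iterator (it has no __next__) ...
assert isinstance(r, Iterable) and not isinstance(r, Iterator)

# ... while iter(range) returns a proper iterator, which is also Iterable.
assert isinstance(it, Iterator) and isinstance(it, Iterable)
assert hasattr(it, "__next__") and not hasattr(r, "__next__")
```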
Antony
2016-02-13 18:48 GMT-08:00 :
>
> On Sat, Feb 13, 2016 at 9:43 PM, wrote:
Compare (on Python3 -- for Python2, read "xrange" instead of "range"):
In [2]: %timeit np.array(range(100), np.int64)
10 loops, best of 3: 156 ms per loop
In [3]: %timeit np.arange(100, dtype=np.int64)
1000 loops, best of 3: 853 µs per loop
Note that while iterating over a range is not
Hi all,
The docstring of np.full indicates that the result of the dtype is
`np.array(fill_value).dtype`, as long as the keyword argument `dtype`
itself is not set. This is actually not the case: the current
implementation always returns a float array when `dtype` is not set, see
e.g.
In [1]: np.
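The discrepancy can be checked directly (a sketch; behavior may differ across NumPy versions, and later releases changed np.full to infer the dtype from fill_value as documented):

```python
import numpy as np

fill_value = 1  # an int, so np.array(fill_value).dtype is an integer dtype

documented = np.array(fill_value).dtype   # what the docstring promises
actual = np.full(3, fill_value).dtype     # what the implementation returns
print("documented:", documented, "actual:", actual)
```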
2015-05-29 14:06 GMT-07:00 Antony Lee :
>
>> A proof-of-concept implementation, still missing tests, is tracked as
>> #5911. It includes the patch proposed in #5158 as an example of how to
>> include an improved version of random.choice.
>>
Tests are in now (whether we should bundle in pickles of old versions to
make sure they are still un
2015-05-24 13:30 GMT-07:00 Sturla Molden :
> On 24/05/15 10:22, Antony Lee wrote:
>
> > Comments, and help for writing tests (in particular to make sure
> > backwards compatibility is maintained) are welcome.
>
> I have one comment, and that is what makes random numbers so
Thanks to Nathaniel who has indeed clarified my intent, i.e. "the global
RandomState should use the latest implementation, unless explicitly
seeded". More generally, the `RandomState` constructor is just a thin
wrapper around `seed` with the same signature, so one can swap the version
of the globa
ts (in particular to make sure backwards
compatibility is maintained) are welcome.
Antony Lee
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Another improvement would be to make sure, for integer-valued datasets,
that all bins cover the same number of integers, as it is otherwise easy
to end up with bins "effectively" wider than others:
hist(np.random.randint(11, size=1))
shows a peak in the last bin, as it covers both 9 and 10.
Antony
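A sketch of integer-aligned binning (the sample size here is an illustrative assumption): placing bin edges at half-integers makes each bin cover exactly one integer.

```python
import numpy as np

data = np.random.randint(11, size=10000)  # integers 0..10; size is illustrative

# Half-integer edges: each bin covers exactly one integer, so no bin
# "effectively" spans two values the way a final [9, 10] bin does.
edges = np.arange(data.min() - 0.5, data.max() + 1)
counts, edges = np.histogram(data, bins=edges)
```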
I see, thanks!
2015-01-05 2:14 GMT-07:00 Sebastian Berg :
> On Mo, 2015-01-05 at 14:13 +0530, Maniteja Nandana wrote:
> > Hi Anthony,
> >
> >
> > I am not sure whether the following section in documentation is
> > relevant to the behavior you were referring to.
> >
> >
> > When an ellipsis (...)
While trying to reproduce various fancy indexings for astropy's FITS
sections (a loaded-on-demand array), I found the following interesting
behavior:
>>> np.array([1])[..., 0]
array(1)
>>> np.array([1])[0]
1
>>> np.array([1])[(0,)]
1
The docs say "Ellipsis expands to the number of : objects needed for the
selection tuple to index all dimensions."
File "<stdin>", line 1, in <module>
AttributeError: 'numpy.ndarray' object has no attribute '__nonzero__'
2014-11-13 10:05 GMT-08:00 Alan G Isaac :
> On 11/13/2014 12:37 PM, Antony Lee wrote:
> > On Python3, __nonzero__ is never defined (always raises an
> AttributeError), eve
On Python3, __nonzero__ is never defined (always raises an AttributeError),
even after calling __bool__.
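This is easy to confirm: Python 3 only uses __bool__ for truth testing, and the Python 2 name __nonzero__ is never defined on the object.

```python
import numpy as np

t = np.array(1)

# Truth testing goes through __bool__ on Python 3 ...
assert bool(t) is True
assert hasattr(t, "__bool__")

# ... and __nonzero__ simply does not exist, even after calling __bool__.
t.__bool__()
assert not hasattr(t, "__nonzero__")
```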
2014-11-13 5:24 GMT-08:00 Alan G Isaac :
> On 11/13/2014 1:19 AM, Antony Lee wrote:
> > "t.__bool__()" also returns True
>
>
> But t.__nonzero__() is being ca
I am puzzled by the following (numpy 1.9.0, python 3.4.2):
In [1]: t = array(None); t[()] = array([None, None]) # Construct a 0d
array of dtype object, containing a single numpy array with 2 elements
In [2]: bool(t)
Out[2]: True
In [3]: if t: pass
---
2014-09-12 10:46 GMT-07:00 Robert Kern :
> On Fri, Sep 12, 2014 at 5:46 PM, Robert Kern
> wrote:
> > On Fri, Sep 12, 2014 at 5:44 PM, Antony Lee
> wrote:
> >> I see. I went back to the documentation of ufunc.reduce and this is not
> >> explicitly mentioned
I see. I went back to the documentation of ufunc.reduce and this is not
explicitly mentioned although a posteriori it makes sense; perhaps this can
be made clearer there?
Antony
2014-09-12 2:22 GMT-07:00 Robert Kern :
> On Fri, Sep 12, 2014 at 10:04 AM, Antony Lee
> wrote:
> > I
2014-09-12 0:48 GMT-07:00 Sebastian Berg :
> On Do, 2014-09-11 at 22:54 -0700, Antony Lee wrote:
> > Hi,
> > I thought that ufunc.reduce performs broadcasting, but it seems a bit
> > confused by boolean arrays:
> >
> >
> > In [1]: add.reduce([array([1, 2]), array([1])])
Hi,
I thought that ufunc.reduce performs broadcasting, but it seems a bit
confused by boolean arrays:
In [1]: add.reduce([array([1, 2]), array([1])])
Out[1]: array([2, 3])
In [2]: logical_and.reduce([array([True, False], dtype=bool), array([True],
dtype=bool)])
---
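A sketch of the computation presumably intended here: broadcast the operands explicitly first (np.broadcast_arrays is one way to make the shapes agree up front), then reduce over the stacked axis.

```python
import numpy as np

a = np.array([True, False])
b = np.array([True])

# Broadcasting both operands to a common shape before reducing sidesteps
# reduce() having to work out the broadcasting itself.
a2, b2 = np.broadcast_arrays(a, b)
result = np.logical_and.reduce(np.stack([a2, b2]), axis=0)
```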
Thanks a lot!
Antony
2013/9/20 Henry Gomersall
> On 18/09/13 01:51, Antony Lee wrote:
> > While I realize that this is certainly tweaking multiprocessing beyond
> > its specifications, I would like to use it on Windows to start a
> > 32-bit Python process from a 64-bit Pyt
Henry: thanks a lot, that would be very appreciated regardless of whether I
end up using it in this specific project or not.
Other replies below.
Antony
2013/9/19 Robert Kern
> On Thu, Sep 19, 2013 at 2:40 AM, Antony Lee
> wrote:
> >
> > Thanks, I didn't know that m
2013/9/19 Robert Kern
> On Thu, Sep 19, 2013 at 5:58 PM, Antony Lee
> wrote:
> >
> > Henry: thanks a lot, that would be very appreciated regardless of
> whether I end up using it in this specific project or not.
> > Other replies below.
> >
> > Antony
>
yet if the
FFTs are going to be the limiting step but I thought I may as well give
pyFFTW a try and ran into that issue...
Antony
2013/9/18 Robert Kern
> On Wed, Sep 18, 2013 at 1:51 AM, Antony Lee
> wrote:
> >
> > Hi all,
> >
> > While I realize that this is certain
Hi all,
While I realize that this is certainly tweaking multiprocessing beyond its
specifications, I would like to use it on Windows to start a 32-bit Python
process from a 64-bit Python process (use case: I need to interface with a
64-bit DLL and use an extension (pyFFTW) for which I can only find a
32-bit build.
Sure, I will. Right now my solution is to use genfromtxt once with bytes
and auto-dtype detection, then modify the resulting dtype, replacing bytes
with unicodes, and use that new dtypes for a second round of genfromtxt. A
bit awkward but that gets the job done.
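A sketch of that two-pass workaround (the f0/f1 field names are genfromtxt's auto-generated defaults, the bytes-to-str width mapping is an assumption, and the `encoding` keyword on the second pass is a later NumPy addition):

```python
import io
import numpy as np

data = b"abc 1\ndef 2"

# First pass: let genfromtxt auto-detect the dtype (text columns come
# back as fixed-width byte strings).
first = np.genfromtxt(io.BytesIO(data), dtype=None)

# Rebuild the dtype, swapping each bytes field for a unicode field of the
# same character width.
new_dtype = np.dtype([
    (name, "U%d" % first.dtype[name].itemsize)
    if first.dtype[name].kind == "S"
    else (name, first.dtype[name])
    for name in first.dtype.names
])

# Second pass with the fixed dtype.
second = np.genfromtxt(io.BytesIO(data), dtype=new_dtype, encoding="utf-8")
```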
Antony Lee
2012/5/1 Charles R
from my limited understanding the problem comes earlier, in the way
StringBuilder is defined(?).
Antony Lee
import io, numpy as np
s = io.BytesIO()
s.write(b"abc 1\ndef 2")
s.seek(0)
t = np.genfromtxt(s, dtype=None) # (or converters={0: bytes})
print(t, t.dtype) # -> [(b'a', 1
I just ran into the following:
>>> np.dtype(u"f4")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: data type not understood
Is that the expected behaviour?
Thanks in advance,
Antony Lee