Yes, "clearing" is not the proper word, but the trick only works for 0
(I'll get the same result in both cases).
Nicolas
> On 27 Dec 2016, at 20:52, Chris Barker wrote:
>
> On Mon, Dec 26, 2016 at 1:34 AM, Nicolas P. Rougier
> wrote:
>
> I'm trying to understand why viewing an array as bytes before clearing
> makes the whole operation faster.
On Mon, Dec 26, 2016 at 1:34 AM, Nicolas P. Rougier <
nicolas.roug...@inria.fr> wrote:
>
> I'm trying to understand why viewing an array as bytes before clearing
> makes the whole operation faster.
> I imagine there is some kind of special treatment for byte arrays but I've
> no clue.
>
I notice
Might be os-specific, too. Some virtual memory management systems might
special case the zeroing out of memory. Try doing the same thing with a
different value than zero.
On Dec 26, 2016 6:15 AM, "Nicolas P. Rougier"
wrote:
Thanks for the explanation Sebastian, makes sense.
Nicolas
> On 26 Dec 2016, at 11:48, Sebastian Berg wrote:
Thanks for the explanation Sebastian, makes sense.
Nicolas
> On 26 Dec 2016, at 11:48, Sebastian Berg wrote:
>
> On Mo, 2016-12-26 at 10:34 +0100, Nicolas P. Rougier wrote:
>> Hi all,
>>
>>
>> I'm trying to understand why viewing an array as bytes before
>> clearing makes the whole operation faster.
On Mo, 2016-12-26 at 10:34 +0100, Nicolas P. Rougier wrote:
> Hi all,
>
>
> I'm trying to understand why viewing an array as bytes before
> clearing makes the whole operation faster.
> I imagine there is some kind of special treatment for byte arrays but
> I've no clue.
>
Sure, if it's a 1-byte
Hi all,
I'm trying to understand why viewing an array as bytes before clearing makes
the whole operation faster.
I imagine there is some kind of special treatment for byte arrays but I've no
clue.
# Native float
Z_float = np.ones(100, float)
Z_int = np.ones(100, int)
%timeit Z_fl
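A self-contained version of the comparison being discussed (the array size and repeat count below are my own choices, not the original figures):

```python
import numpy as np
import timeit

Z_float = np.ones(1_000_000, float)
Z_bytes = Z_float.view(np.int8)   # same buffer, reinterpreted as 1-byte items

def clear_float():
    Z_float[...] = 0              # elementwise fill of 8-byte doubles

def clear_bytes():
    Z_bytes[...] = 0              # contiguous byte fill, eligible for a plain memset

t_float = timeit.timeit(clear_float, number=100)
t_bytes = timeit.timeit(clear_bytes, number=100)
print(f"float fill: {t_float:.4f}s  byte fill: {t_bytes:.4f}s")
```

Both calls zero the same memory; the byte view just gives NumPy a chance to clear it with a single memset rather than a typed fill loop.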
Hello Numpy Discussion List,
So I'm trying to get numpy working on an AIX 6.1 system. Initially I had a lot
of problems trying to compile the package because the xlc compilers weren't
installed on this machine, but apparently the Python package we installed had
been built with them. Once we go
Thanks for the explanation.
Chris Barker - NOAA Federal noaa.gov> writes:
> There has been a lot of discussion about casting on this list in the
> last couple months -- I suggest you peruse that discussion and see
> what conclusions it has led to.
I'll look at it. My message to the ml followed
On Fri, Mar 8, 2013 at 8:23 AM, Sergio Callegari
wrote:
> I have noticed that numpy introduces some unexpected type casts, that are
> in some cases problematic.
There has been a lot of discussion about casting on this list in the
last couple months -- I suggest you peruse that discussion and see what
conclusions it has led to.
Hi,
I have noticed that numpy introduces some unexpected type casts, that are
in some cases problematic.
A very weird cast is
int + uint64 -> float
for instance, consider the following snippet:
import numpy as np
a=np.uint64(1)
a+1
-> 2.0
this cast is quite different from what other program
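For what it's worth, the promotion can be checked directly: no integer dtype covers both the full int64 and uint64 ranges, so NumPy falls back to float64. A small illustration using arrays, where this rule is stable across NumPy versions:

```python
import numpy as np

# No integer type contains both int64 and uint64, so the pair promotes to float64.
print(np.promote_types(np.uint64, np.int64))   # float64

a = np.array([1], dtype=np.uint64)
b = np.array([1], dtype=np.int64)
print((a + b).dtype)                           # float64
```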
On Friday 18 January 2013, Chris Barker - NOAA Federal wrote:
> On Thu, Jan 17, 2013 at 5:19 PM, Olivier Delalleau
> >
> wrote:
> > 2013/1/16 >:
> >> On Wed, Jan 16, 2013 at 10:43 PM, Patrick Marsh
> >> > wrote:
>
> >> I could live with an exception for lossy down casting in this case.
>
>
On 17.01.2013 17:21, Chris Barker - NOAA Federal wrote:
> On Wed, Jan 16, 2013 at 11:34 PM, Matthieu Brucher
>
>> Of course a += b is not the same as a = a + b. The first one modifies the
>> object a, the second one creates a new object and puts it inside a. The
>> behavior IS consistent.
>
> Exactly -- if you ask me, the bug is that Python allows "in_place"
On Thu, Jan 17, 2013 at 5:19 PM, Olivier Delalleau wrote:
> 2013/1/16 :
>> On Wed, Jan 16, 2013 at 10:43 PM, Patrick Marsh
>> wrote:
>> I could live with an exception for lossy down casting in this case.
I'm not sure what the idea here is -- would you only get an exception
if the value was such
2013/1/16 :
> On Wed, Jan 16, 2013 at 10:43 PM, Patrick Marsh
> wrote:
>> Thanks, everyone for chiming in. Now that I know this behavior exists, I
>> can explicitly prevent it in my code. However, it would be nice if a warning
>> or something was generated to alert users about the inconsistency
On Wed, Jan 16, 2013 at 11:34 PM, Matthieu Brucher
> Of course a += b is not the same as a = a + b. The first one modifies the
> object a, the second one creates a new object and puts it inside a. The
> behavior IS consistent.
Exactly -- if you ask me, the bug is that Python allows "in_place"
operations
On 01/17/2013 01:27 PM, josef.p...@gmail.com wrote:
> On Thu, Jan 17, 2013 at 2:34 AM, Matthieu Brucher
> wrote:
>> Hi,
>>
>> Actually, this behavior is already present in other languages, so I'm -1 on
>> additional verbosity.
>> Of course a += b is not the same as a = a + b. The first one modifies the
>> object a, the second one creates a new object and puts it inside a.
On Thu, Jan 17, 2013 at 2:34 AM, Matthieu Brucher
wrote:
> Hi,
>
> Actually, this behavior is already present in other languages, so I'm -1 on
> additional verbosity.
> Of course a += b is not the same as a = a + b. The first one modifies the
> object a, the second one creates a new object and puts it inside a.
Hi,
Actually, this behavior is already present in other languages, so I'm -1 on
additional verbosity.
Of course a += b is not the same as a = a + b. The first one modifies the
object a, the second one creates a new object and puts it inside a. The
behavior IS consistent.
Cheers,
Matthieu
2013/
On 17.01.2013 04:43, Patrick Marsh wrote:
> Thanks, everyone for chiming in. Now that I know this behavior
> exists, I can explicitly prevent it in my code. However, it would be
> nice if a warning or something was generated to alert users about the
> inconsistency between var += ... and var = var + ...
On Wed, Jan 16, 2013 at 10:43 PM, Patrick Marsh
wrote:
> Thanks, everyone for chiming in. Now that I know this behavior exists, I
> can explicitly prevent it in my code. However, it would be nice if a warning
> or something was generated to alert users about the inconsistency between
> var += ...
Thanks, everyone for chiming in. Now that I know this behavior exists, I
can explicitly prevent it in my code. However, it would be nice if a
warning or something was generated to alert users about the inconsistency
between var += ... and var = var + ...
Patrick
---
Patrick Marsh
Ph.D. Candidate
This is separate from the scalar casting thing. This is a disguised version
of the discussion about what we should do with implicit casts caused by
assignment:
into_array[i] = 0.5
Traditionally numpy just happily casts this stuff, possibly mangling data
in the process, and this has caused many a
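The silent-mangling case under discussion can be reproduced in a couple of lines (a sketch; the variable name is taken from the snippet above, the rest is mine):

```python
import numpy as np

into_array = np.zeros(3, dtype=np.int64)
into_array[0] = 0.5          # happily cast: the 0.5 is silently truncated to 0
print(into_array)            # [0 0 0]

# The same copy can be made to refuse the lossy cast instead:
try:
    np.copyto(into_array, 0.5, casting='safe')
except TypeError as e:
    print("refused:", e)
```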
Patrick,
Not a bug but is it a mis-feature?
See the recent thread: "Do we want scalar casting to behave as it does
at the moment"
In short, this is a complex issue with no easy answer...
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R
Hi Patrick:
I think it is the behavior I have come to expect. The only "gotcha" here
might be the difference between "var = var + 0.5" and "var += 0.5"
For example:
>>> import numpy as np
>>> x = np.arange(5); x += 0.5; x
array([0, 1, 2, 3, 4])
>>> x = np.arange(5); x = x + 0.5; x
array([ 0.5,  1.5,  2.5,  3.5,  4.5])
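The mechanism behind the gotcha is the ufunc casting rule: `x += 0.5` computes in float64 and then casts the result back into the integer output array. With an explicit `out=` you can choose whether that lossy cast is allowed (a sketch of my own, not from the original mails):

```python
import numpy as np

x = np.arange(5)

# Explicitly allow the lossy cast: the result is truncated back to int.
np.add(x, 0.5, out=x, casting='unsafe')
print(x)            # [0 1 2 3 4] -- the .5 was silently dropped

# Ask for 'same_kind' casting and the lossy float->int step is an error.
try:
    np.add(x, 0.5, out=x, casting='same_kind')
except TypeError as e:
    print("rejected:", e)
```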
Greetings,
I spent a couple hours today tracking down a bug in one of my programs. I
was getting different answers depending on whether I passed in a numpy
array or a single number. Ultimately, I tracked it down to something I
would consider a bug, but I'm not sure if others do. The case comes from
Oops, sorry, Keith Goodman kindly pointed out that I had missed out:
On Wed, Apr 18, 2012 at 11:03 AM, Matthew Brett wrote:
> Hi,
>
> I just wanted to point out a situation where the scalar casting rules
> can be a little confusing:
In [110]: a = np.array([-128, 127], dtype=np.int8)
> In [113]: a - np.int16(128)
Hi,
I just wanted to point out a situation where the scalar casting rules
can be a little confusing:
In [113]: a - np.int16(128)
Out[113]: array([-256, -1], dtype=int16)
In [114]: a + np.int16(-128)
Out[114]: array([ 0, -1], dtype=int8)
This is predictable from the nice docs here:
http://doc
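The two outputs follow from doing the arithmetic at two different widths. Spelling the widths out with explicit `astype` calls makes the same numbers appear without relying on the scalar-casting rules (which have since changed between NumPy versions):

```python
import numpy as np

a = np.array([-128, 127], dtype=np.int8)

# Computed at int16 width: no surprises.
wide = a.astype(np.int16) - np.int16(128)
print(wide)                    # [-256   -1]

# The same values squeezed back into int8 wrap around modulo 256.
print(wide.astype(np.int8))    # [ 0 -1]
```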
Hi,
On Thu, Mar 8, 2012 at 3:14 PM, Matthew Brett wrote:
> Hi,
>
> On Wed, Mar 7, 2012 at 4:08 PM, Matthew Brett wrote:
>> Hi,
>>
>> I noticed a casting change running the test suite on our image reader,
>> nibabel:
>> https://github.com/nipy/nibabel/blob/master/nibabel/tests/test_casting.py
>>
Hi,
On Wed, Mar 7, 2012 at 4:08 PM, Matthew Brett wrote:
> Hi,
>
> I noticed a casting change running the test suite on our image reader,
> nibabel:
> https://github.com/nipy/nibabel/blob/master/nibabel/tests/test_casting.py
>
> For this script:
>
>
> import numpy as np
>
> Adata = np.zeros((2,), dtype=np.uint8)
Hi,
I noticed a casting change running the test suite on our image reader,
nibabel:
https://github.com/nipy/nibabel/blob/master/nibabel/tests/test_casting.py
For this script:
import numpy as np
Adata = np.zeros((2,), dtype=np.uint8)
Bdata = np.zeros((2,), dtype=np.int16)
Bzero = np.int16(0)
B
Thanks Robert,
I thought it was something like that but couldn't figure it out.
C.
On May 26, 2009, at 4:50 PM, Robert Kern wrote:
> 2009/5/26 Charles سمير Doutriaux :
>> Hi there,
>>
>> One of our users just found a bug in numpy that has to do with
>> casting.
>>
>> Consider the attached example.
2009/5/26 Charles سمير Doutriaux :
> Hi there,
>
> One of our users just found a bug in numpy that has to do with casting.
>
> Consider the attached example.
>
> The difference at the end should be 0 (zero) everywhere.
>
> But it's not by default.
>
> Casting the data to 'float64' at reading and assigning to the arrays works.
Hi there,
One of our users just found a bug in numpy that has to do with casting.
Consider the attached example.
The difference at the end should be 0 (zero) everywhere.
But it's not by default.
Casting the data to 'float64' at reading and assigning to the arrays
works
Defining the arrays
Gideon Simpson wrote:
> I want to do:
>
> numpy.float(numpy.arange(0, 10))
>
> but get the error:
>
> Traceback (most recent call last):
>File "", line 1, in
> TypeError: only length-1 arrays can be converted to Python scalars
>
> How should I do this?
numpy.arange(0,10, dtype=float)
or
n
Gael Varoquaux wrote:
> nump.arange(0, 10.astype(numpy.float)
I think you meant:
np.arange(0, 10).astype(np.float)
but:
np.arange(0, 10, dtype=np.float)
is a better bet.
> but in this special case you can do:
>
> numpy.arange(0., 10.)
yup -- however, beware, using arange() with floating point
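The caution being cut off at the end is presumably the usual one: with a floating-point step, rounding decides how many elements `arange()` produces, so `linspace()` is the safer tool when the count or the endpoint matters (my example, not from the original mail):

```python
import numpy as np

# arange with floats: the length depends on rounding of (stop - start) / step.
print(np.arange(0.0, 10.0))         # [0. 1. ... 9.] -- 10 elements, endpoint excluded

# linspace asks for the count directly and includes the endpoint.
print(np.linspace(0.0, 10.0, 11))   # [ 0.  1. ... 10.] -- 11 elements
```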
On Thu, Feb 26, 2009 at 05:53:28PM -0500, Gideon Simpson wrote:
> I want to do:
> numpy.float(numpy.arange(0, 10))
> but get the error:
> Traceback (most recent call last):
>File "", line 1, in
> TypeError: only length-1 arrays can be converted to Python scalars
> How should I do this?
nu
I want to do:
numpy.float(numpy.arange(0, 10))
but get the error:
Traceback (most recent call last):
File "", line 1, in
TypeError: only length-1 arrays can be converted to Python scalars
How should I do this?
-gideon
Charles R Harris wrote:
> On Jan 16, 2008 11:30 AM, Neal Becker <[EMAIL PROTECTED]> wrote:
>
>> Hans Meine wrote:
>>
>> > On Monday, 14 January 2008 at 19:59:15, Neal Becker wrote:
>> >> I don't want to use FROM_O here, because I really can only handle
>> certain
>> >> types. If I used FROM_O, the
On Jan 16, 2008 11:30 AM, Neal Becker <[EMAIL PROTECTED]> wrote:
> Hans Meine wrote:
>
> > On Monday, 14 January 2008 at 19:59:15, Neal Becker wrote:
> >> I don't want to use FROM_O here, because I really can only handle
> certain
> >> types. If I used FROM_O, then after calling FROM_O, if the type
Hans Meine wrote:
> On Monday, 14 January 2008 at 19:59:15, Neal Becker wrote:
>> I don't want to use FROM_O here, because I really can only handle certain
>> types. If I used FROM_O, then after calling FROM_O, if the type was not
>> one I could handle, I'd have to call FromAny and convert it.
> What is the problem with that?
On Monday, 14 January 2008 at 19:59:15, Neal Becker wrote:
> I don't want to use FROM_O here, because I really can only handle certain
> types. If I used FROM_O, then after calling FROM_O, if the type was not
> one I could handle, I'd have to call FromAny and convert it.
What is the problem with that?
Neal Becker wrote:
> Jon Wright wrote:
>
>>> I'm sorry, I still think we're talking past each other. What do you mean
>>> by "native data type"? If you just want to get an ndarray without
>>> specifying a type, use PyArray_FROM_O(). That's what it's for. You don't
>>> need to know the data type beforehand.
Jon Wright wrote:
>> I'm sorry, I still think we're talking past each other. What do you mean
>> by "native data type"? If you just want to get an ndarray without
>> specifying a type, use PyArray_FROM_O(). That's what it's for. You don't
>> need to know the data type beforehand.
>
> What I have
> I'm sorry, I still think we're talking past each other. What do you mean by
> "native data type"? If you just want to get an ndarray without specifying a
> type, use PyArray_FROM_O(). That's what it's for. You don't need to know the
> data type beforehand.
What I have wanted in the past (and
Neal Becker wrote:
> Robert Kern wrote:
>
>> Neal Becker wrote:
>>> numpy frequently refers to 'casting'. I'm not sure if that term is ever
>>> defined. I believe it has the same meaning as in C. In that case, it is
>>> unfortunately used to mean 2 different things. There are casts that do
>>>
Robert Kern wrote:
> Neal Becker wrote:
>> numpy frequently refers to 'casting'. I'm not sure if that term is ever
>> defined. I believe it has the same meaning as in C. In that case, it is
>> unfortunately used to mean 2 different things. There are casts that do
>> not change the underlying bits
Neal Becker wrote:
> numpy frequently refers to 'casting'. I'm not sure if that term is ever
> defined. I believe it has the same meaning as in C. In that case, it is
> unfortunately used to mean 2 different things. There are casts that do not
> change the underlying bits (such as a pointer cast), and there are casts
numpy frequently refers to 'casting'. I'm not sure if that term is ever
defined. I believe it has the same meaning as in C. In that case, it is
unfortunately used to mean 2 different things. There are casts that do not
change the underlying bits (such as a pointer cast), and there are casts
tha
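The two meanings can be made concrete in NumPy itself: `astype()` is the value-converting cast, while `view()` is the C-style reinterpretation of the same bits (a small illustration of my own):

```python
import numpy as np

a = np.array([1.0])

# Value-converting cast: 1.0 -> 1, new buffer, different bits.
print(a.astype(np.int64))   # [1]

# Bit-reinterpreting cast: the same 8 bytes read as an integer.
print(a.view(np.int64))     # [4607182418800017408] == 0x3ff0000000000000, the IEEE-754 bits of 1.0
```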
Matthieu Brucher wrote:
> And if there is a way to add a
> formatting option ('1.1f' for instance), it would be
> even better.
For full control of the formatting, you can use python's string
formatting, and a nested list comprehension:
[ ["%.3f" % i for i in row] for row in x ]
Thank you for the clarification, I didn't think of using 'Sxx' directly :(
Matthieu
2007/10/5, lorenzo bolla <[EMAIL PROTECTED]>:
>
> gotcha. specify the number of bytes, then.
>
> In [20]: x
> Out[20]:
> array([[-2., 3.],
>[ 4., 5.]])
>
> In [21]: x.astype(numpy.dtype('S10'))
> Out[21]:
gotcha. specify the number of bytes, then.
In [20]: x
Out[20]:
array([[-2., 3.],
[ 4., 5.]])
In [21]: x.astype(numpy.dtype('S10'))
Out[21]:
array([['-2.0', '3.0'],
['4.0', '5.0']],
dtype='|S10')
L.
On 10/5/07, Matthieu Brucher <[EMAIL PROTECTED]> wrote:
>
> I'd like to have the '2.', because if the number is negative, only '-' is
> returned, not the real value.
On 05/10/2007, Matthieu Brucher <[EMAIL PROTECTED]> wrote:
> I'd like to have the '2.', because if the number is negative, only '-' is
> returned, not the real value.
For string arrays you need to specify the length of the string as part
of the data type (and it defaults to length 1):
In [11]: ra
I'd like to have the '2.', because if the number is negative, only '-' is
returned, not the real value.
Matthieu
2007/10/5, lorenzo bolla <[EMAIL PROTECTED]>:
>
> what's wrong with astype?
>
> In [3]: x = numpy.array([[2.,3.],[4.,5.]])
>
> In [4]: x.astype(str)
> Out[4]:
> array([['2', '3'],
>
what's wrong with astype?
In [3]: x = numpy.array([[2.,3.],[4.,5.]])
In [4]: x.astype(str)
Out[4]:
array([['2', '3'],
['4', '5']],
dtype='|S1')
and if you want a list:
In [5]: x.astype(str).tolist()
Out[5]: [['2', '3'], ['4', '5']]
L.
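The `'|S1'` in that output is the culprit: each element is a fixed one-byte string, so everything past the first character is cut off. Giving the width explicitly keeps the full value (byte strings shown as they print under Python 3):

```python
import numpy as np

x = np.array([[2., 3.], [4., 5.]])

# One byte per element: values are chopped to their first character.
print(x.astype('S1'))    # elements become b'2', b'3', b'4', b'5'

# Ten bytes per element: the full text survives.
print(x.astype('S10'))   # elements become b'2.0', b'3.0', b'4.0', b'5.0'
```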
On 10/5/07, Matthieu Brucher <[EMAIL PROTECTED]> wrote:
Hi,
I'm trying to cast a float array into a string array (for instance
transforming [[2., 3.], [4., 5.]] into [['2.', '3.'], ['4.', '5.']]), I
tried with astype(str) and every variation (str_, string, string_, string0),
but no luck.
Is there a function or a method of the array class that can fulfill this?