>
> The lack of commutativity wasn't in precision, it was in the typecodes, and
> was there from the beginning. That caused confusion. A current cause of
> confusion is the many-to-one relation of, say, int32 and long, longlong, which
> varies from platform to platform. I think that confusion is a mo
2012/2/13 Andrea Gavana
> -- Forwarded message --
> From: "Andrea Gavana"
> Date: Feb 13, 2012 11:31 PM
> Subject: Re: [Numpy-discussion] Creating parallel curves
> To: "Jonathan Hilmer"
>
> Thank you Jonathan for this, it's exactly what I was looking for. I'll
> try it tomorro
>
> You might be right, Chuck. I would like to investigate more, however.
>
> What I fear is that there are *a lot* of users still on NumPy 1.3 and NumPy
> 1.5. The fact that we haven't heard any complaints yet does not mean to
> me that we aren't creating headaches for people later who h
On Mon, Feb 13, 2012 at 10:48 PM, Mark Wiebe wrote:
> On Mon, Feb 13, 2012 at 10:38 PM, Charles R Harris <
> charlesr.har...@gmail.com> wrote:
>
>>
>>
>> On Mon, Feb 13, 2012 at 11:07 PM, Travis Oliphant wrote:
>>
>>>
>>> >
> No argument on any of this. It's just that this needs to happen
I think the problem is quite easy to solve, without changing the
"documentation" behaviour.
The doc says:
Help on built-in function arange in module numpy.core.multiarray:
arange(...)
    arange([start,] stop[, step,], dtype=None)
    Return evenly spaced values within a given interval.
On Mon, Feb 13, 2012 at 10:38 PM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
>
>
> On Mon, Feb 13, 2012 at 11:07 PM, Travis Oliphant wrote:
>
>>
>> >
>>> > No argument on any of this. It's just that this needs to happen at
>>> NumPy 2.0, not in the NumPy 1.X series. I think requiring
On Mon, Feb 13, 2012 at 11:07 PM, Travis Oliphant wrote:
>
> >
>> > No argument on any of this. It's just that this needs to happen at
>> NumPy 2.0, not in the NumPy 1.X series. I think requiring a re-compile is
>> far-less onerous than changing the type-coercion subtly in a 1.5 to 1.6
>> rele
On Mon, Feb 13, 2012 at 11:00 PM, Travis Oliphant wrote:
>
> > >
> > > No argument on any of this. It's just that this needs to happen at
> NumPy 2.0, not in the NumPy 1.X series. I think requiring a re-compile is
> far-less onerous than changing the type-coercion subtly in a 1.5 to 1.6
> rele
> >
> > No argument on any of this. It's just that this needs to happen at NumPy
> > 2.0, not in the NumPy 1.X series. I think requiring a re-compile is
> > far-less onerous than changing the type-coercion subtly in a 1.5 to 1.6
> > release. That's my major point, and I'm surprised oth
On Mon, Feb 13, 2012 at 10:25 PM, Benjamin Root wrote:
>
>
> On Monday, February 13, 2012, Travis Oliphant wrote:
> >
> > On Feb 13, 2012, at 10:14 PM, Charles R Harris wrote:
> >
> >
> > On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant
> wrote:
> >>
> >> I disagree with your assessment of the
On Mon, Feb 13, 2012 at 10:01 PM, Travis Oliphant wrote:
>
> On Feb 13, 2012, at 10:14 PM, Charles R Harris wrote:
>
>
>
> On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant wrote:
>
>> I disagree with your assessment of the subscript operator, but I'm sure
>> we will have plenty of time to discuss
On Monday, February 13, 2012, Travis Oliphant wrote:
>
> On Feb 13, 2012, at 10:14 PM, Charles R Harris wrote:
>
>
> On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant
wrote:
>>
>> I disagree with your assessment of the subscript operator, but I'm sure
we will have plenty of time to discuss that.
On Feb 13, 2012, at 10:14 PM, Charles R Harris wrote:
>
>
> On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant wrote:
> I disagree with your assessment of the subscript operator, but I'm sure we
> will have plenty of time to discuss that. I don't think it's correct to
> compare the corner ca
These are great suggestions. I am happy to start digging into the code. I'm
also happy to revisit any and all design decisions for NumPy 2.0 (with a
strong eye toward helping people migrate and documenting the results). Mark,
I think you have done an excellent job of working with a stodgy
On Mon, Feb 13, 2012 at 8:04 PM, Travis Oliphant wrote:
> I disagree with your assessment of the subscript operator, but I'm sure we
> will have plenty of time to discuss that. I don't think it's correct to
> compare the corner cases of the fancy indexing and regular indexing to the
> corner cas
On Monday, February 13, 2012, Charles R Harris
wrote:
>
>
> On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant
wrote:
>>
>> I disagree with your assessment of the subscript operator, but I'm sure
we will have plenty of time to discuss that. I don't think it's correct to
compare the corner cases o
It hasn't changed: since float is of "a fundamentally different kind of
data", it's expected to upcast the result.
However, if I may add a personal comment on numpy's casting rules: until
now, I've found them confusing and somewhat inconsistent. Some of the
inconsistencies I've found were bugs, wh
On Mon, Feb 13, 2012 at 9:04 PM, Travis Oliphant wrote:
> I disagree with your assessment of the subscript operator, but I'm sure we
> will have plenty of time to discuss that. I don't think it's correct to
> compare the corner cases of the fancy indexing and regular indexing to the
> corner cas
I can also confirm that at least on NumPy 1.5.1:
integer array * (literal Python float scalar) --- creates a double result.
So, my memory was incorrect on that (unless it changed at an earlier release,
but I don't think so).
-Travis
On Feb 13, 2012, at 9:40 PM, Mark Wiebe wrote:
> I
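For reference, the upcasting Travis confirms here is easy to check; a minimal sketch (any reasonably recent NumPy shows the same result):

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int32)  # integer array
r = a * 1.5                              # literal Python float scalar

# The float scalar is "a fundamentally different kind of data",
# so the result is upcast to double precision.
print(r.dtype)  # float64
```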
I disagree with your assessment of the subscript operator, but I'm sure we will
have plenty of time to discuss that. I don't think it's correct to compare
the corner cases of the fancy indexing and regular indexing to the corner cases
of the type-coercion system. If you recall, I was quite nerv
On Monday, February 13, 2012, Aaron Meurer wrote:
> I'd like the ability to make "in" (i.e., __contains__) return
> something other than a bool.
>
> Also, the ability to make the x < y < z syntax would be useful. It's
> been suggested that the ability to override the boolean operators
> (and, or,
On 02/13/2012 06:19 PM, Mark Wiebe wrote:
> It might be nice to turn the matrix class into a short class hierarchy,
> something like this:
>
> class MatrixBase
> class DenseMatrix(MatrixBase)
> class TriangularMatrix(MatrixBase) # Maybe a few variations of
> upper/lower triangular and whether the d
I believe the main lessons to draw from this are just how incredibly
important a complete test suite and staying on top of code reviews are. I'm
of the opinion that any explicit design choice of this nature should be
reflected in the test suite, so that if someone changes it years later,
they get i
The problem is that these sorts of things take a while to emerge. The original
system was more consistent than I think you give it credit for. What you are
seeing is that most people get NumPy from distributions and are relying on us
to keep things consistent.
The scalar coercion rules were dete
I took a look into the code to see what is causing this, and the reason is
that nothing has ever been implemented to deal with the fields. This means
it falls back to treating all struct dtypes as if they were a plain "void"
dtype, which allows anything to be cast to it.
While I was redoing the ca
It might be nice to turn the matrix class into a short class hierarchy,
something like this:
class MatrixBase
class DenseMatrix(MatrixBase)
class TriangularMatrix(MatrixBase) # Maybe a few variations of upper/lower
triangular and whether the diagonal is stored
class SymmetricMatrix(MatrixBase)
Th
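A skeletal Python sketch of the hierarchy Mark describes (the class names are from the email; the constructor arguments are invented here purely for illustration):

```python
import numpy as np

class MatrixBase:
    """Shared 2-D interface; storage details live in subclasses."""
    def __init__(self, data):
        self.data = np.atleast_2d(np.asarray(data))

class DenseMatrix(MatrixBase):
    """Plain dense storage, like today's np.matrix."""

class TriangularMatrix(MatrixBase):
    """Upper/lower variants; whether the diagonal is stored is a flag."""
    def __init__(self, data, lower=True, unit_diagonal=False):
        super().__init__(data)
        self.lower = lower
        self.unit_diagonal = unit_diagonal

class SymmetricMatrix(MatrixBase):
    """Only one triangle would actually need to be stored."""
```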
>I'm wondering about using one of these commercial issue tracking plans for
>NumPy and would like thoughts and comments. Both of these plans allow Open
>Source projects to have unlimited plans for free.
>
>JIRA:
>
>http://www.atlassian.com/software/jira/overview/tour/code-integration
>
>
On Mon, Feb 13, 2012 at 5:00 PM, Travis Oliphant wrote:
> Hmmm. This seems like a regression. The scalar casting API was fairly
> intentional.
>
> What is the reason for the change?
>
In order to make 1.6 ABI-compatible with 1.5, I basically had to rewrite
this subsystem. There were virtually
Hi,
I've also just noticed this oddity:
In [17]: np.can_cast('c', 'u1')
Out[17]: False
OK so far, but...
In [18]: np.can_cast('c', [('f1', 'u1')])
Out[18]: True
In [19]: np.can_cast('c', [('f1', 'u1')], 'safe')
Out[19]: True
In [20]: np.can_cast(np.ones(10, dtype='c'), [('f1', 'u1')])
Out[20]
Hmmm. This seems like a regression. The scalar casting API was fairly
intentional.
What is the reason for the change?
--
Travis Oliphant
(on a mobile)
512-826-7480
On Feb 13, 2012, at 6:25 PM, Matthew Brett wrote:
> Hi,
>
> I recently noticed a change in the upcasting rules in numpy 1.
I'd like the ability to make "in" (i.e., __contains__) return
something other than a bool.
Also, the ability to make the x < y < z syntax would be useful. It's
been suggested that the ability to override the boolean operators
(and, or, not) would be the way to do this (PEP 335), though I'm not 10
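For context, CPython already coerces whatever `__contains__` returns down to a plain bool, which is exactly the limitation being discussed; a small sketch:

```python
class Box:
    def __contains__(self, item):
        return 42  # truthy, but deliberately not a bool

# The interpreter wraps the __contains__ result in bool(),
# so "in" can never propagate a richer object today.
print(42 in Box())         # True
print(type(42 in Box()))   # <class 'bool'>
```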
On Mon, Feb 13, 2012 at 7:48 PM, Wes McKinney wrote:
> On Mon, Feb 13, 2012 at 7:46 PM, Wes McKinney wrote:
>> On Mon, Feb 13, 2012 at 7:30 PM, Nathaniel Smith wrote:
>>> How would you fix it? I shouldn't speculate without profiling, but I'll be
>>> naughty. Presumably the problem is that python
On Mon, Feb 13, 2012 at 7:46 PM, Wes McKinney wrote:
> On Mon, Feb 13, 2012 at 7:30 PM, Nathaniel Smith wrote:
>> How would you fix it? I shouldn't speculate without profiling, but I'll be
>> naughty. Presumably the problem is that python turns that into something
>> like
>>
>> hist[i,j] = hist[i
On Mon, Feb 13, 2012 at 7:30 PM, Nathaniel Smith wrote:
> How would you fix it? I shouldn't speculate without profiling, but I'll be
> naughty. Presumably the problem is that python turns that into something
> like
>
> hist[i,j] = hist[i,j] + 1
>
> which means there's no way for numpy to avoid cre
How would you fix it? I shouldn't speculate without profiling, but I'll be
naughty. Presumably the problem is that python turns that into something
like
hist[i,j] = hist[i,j] + 1
which means there's no way for numpy to avoid creating a temporary array.
So maybe this could be fixed by adding a fus
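To make the pitfall concrete: with repeated indices, the buffered fancy-index increment loses counts, while np.bincount accumulates every occurrence. A sketch with made-up bin indices:

```python
import numpy as np

idx = np.array([0, 1, 1, 2, 1, 0])   # hypothetical bin indices, with repeats

hist = np.zeros(3, dtype=np.int64)
hist[idx] += 1                       # expands to hist[idx] = hist[idx] + 1:
print(hist)                          # [1 1 1] -- each repeated bin counted once

counts = np.bincount(idx, minlength=3)  # accumulates all occurrences
print(counts)                           # [2 3 1]
```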
Hi,
I recently noticed a change in the upcasting rules in numpy 1.6.0 /
1.6.1 and I just wanted to check it was intentional.
For all versions of numpy I've tested, we have:
>>> import numpy as np
>>> Adata = np.array([127], dtype=np.int8)
>>> Bdata = np.int16(127)
>>> (Adata + Bdata).dtype
dtype
On Mon, Feb 13, 2012 at 6:23 PM, Marcel Oliver
wrote:
> Hi,
>
> I have a short piece of code where the use of an index array "feels
> right", but incurs a severe performance penalty: It's about an order
> of magnitude slower than all other operations with arrays of that
> size.
>
> It comes up in
On Mon, Feb 13, 2012 at 3:46 PM, Travis Vaught wrote:
>
> - Extra operators/PEP 225. Here's a summary from the last time we
> went over this, years ago at Scipy 2008:
> http://mail.scipy.org/pipermail/numpy-discussion/2008-October/038234.html,
> and the current status of the document we wrote abo
On Feb 13, 2012, at 3:55 PM, Fernando Perez wrote:
> ...
> - Extra operators/PEP 225. Here's a summary from the last time we
> went over this, years ago at Scipy 2008:
> http://mail.scipy.org/pipermail/numpy-discussion/2008-October/038234.html,
> and the current status of the document we wrote a
Hi,
I have a short piece of code where the use of an index array "feels
right", but incurs a severe performance penalty: It's about an order
of magnitude slower than all other operations with arrays of that
size.
It comes up in a piece of code which is doing a large number of "on
the fly" histogr
On Mon, Feb 13, 2012 at 12:56 PM, Matthew Brett wrote:
> I have the impression that the Cython / SAGE team are happy with their
> Jenkins configuration.
So are we in IPython; thanks to Thomas Kluyver's recent leadership on
this front, it's now running quite smoothly:
https://jenkins.shiningpanda.
Hi,
On Mon, Feb 13, 2012 at 2:33 PM, wrote:
> On 2/13/12 2:56 PM, Matthew Brett wrote:
>> I have the impression that the Cython / SAGE team are happy with their
>> Jenkins configuration.
>
> I'm not aware of a Jenkins buildbot system for Sage, though I think
> Cython uses such a system: https://
On 2/13/12 2:56 PM, Matthew Brett wrote:
> I have the impression that the Cython / SAGE team are happy with their
> Jenkins configuration.
I'm not aware of a Jenkins buildbot system for Sage, though I think
Cython uses such a system: https://sage.math.washington.edu:8091/hudson/
We do have a num
-- Forwarded message --
From: "Andrea Gavana"
Date: Feb 13, 2012 11:31 PM
Subject: Re: [Numpy-discussion] Creating parallel curves
To: "Jonathan Hilmer"
Thank you Jonathan for this, it's exactly what I was looking for. I'll try
it tomorrow on the 768 well trajectories I have and
On 13/02/2012 19:17, eat wrote:
> wouldn't it be nice if you could just write:
> a = np.empty(shape).fill(A)
> this would be possible if .fill(.) just returned self.
Thanks for the tip. I noticed several times this was not working
(because, of course, in the meantime I had forgotten it...)
but I had to
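A sketch of the gotcha and two working alternatives (note that np.full did not exist at the time of this thread; it arrived in a later NumPy release):

```python
import numpy as np

b = np.empty(3).fill(7.0)  # tempting one-liner, but fill() returns None
print(b)                   # None

a = np.empty(3)
a.fill(7.0)                # mutate in place, on a separate line
print(a)                   # [7. 7. 7.]

c = np.full(3, 7.0)        # later NumPy: allocate-and-fill in one call
print(c)                   # [7. 7. 7.]
```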
Hi,
On Mon, Feb 13, 2012 at 12:44 PM, Travis Oliphant wrote:
>
> On Mon, Feb 13, 2012 at 12:12 AM, Travis Oliphant
> wrote:
>>
>> I'm wondering about using one of these commercial issue tracking plans for
>> NumPy and would like thoughts and comments. Both of these plans allow
>> Open Source
>
> On Mon, Feb 13, 2012 at 12:12 AM, Travis Oliphant wrote:
> I'm wondering about using one of these commercial issue tracking plans for
> NumPy and would like thoughts and comments. Both of these plans allow Open
> Source projects to have unlimited plans for free.
>
> Free usage of a t
On Mon, Feb 13, 2012 at 12:12 AM, Travis Oliphant wrote:
> I'm wondering about using one of these commercial issue tracking plans for
> NumPy and would like thoughts and comments. Both of these plans allow
> Open Source projects to have unlimited plans for free.
>
> Free usage of a tool that's
Thank you, that does the trick.
Regards,
Will
On 13 February 2012 19:39, Travis Oliphant wrote:
> I think the following is what you want:
>
> neighbourhoods[range(9), perf[neighbourhoods].argmax(axis=1)]
>
> -Travis
>
>
> On Feb 13, 2012, at 1:26 PM, William Furnass wrote:
>
>> np.array( [neighbo
On Mon, Feb 13, 2012 at 1:01 AM, Niki Spahiev wrote:
> You can get polygon buffer from http://angusj.com/delphi/clipper.php and
> make cython interface to it.
This should be built into GEOS as well, and the shapely package
provides a python wrapper already.
-Chris
> HTH
>
> Niki
>
> _
I think the following is what you want:
neighbourhoods[range(9), perf[neighbourhoods].argmax(axis=1)]
-Travis
On Feb 13, 2012, at 1:26 PM, William Furnass wrote:
> np.array( [neighbourhoods[i][perf[neighbourhoods].argmax(axis=1)[i]]
> for i in xrange(neighbourhoods.shape[0])] )
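The one-liner works by pairing each row index with that row's argmax column. A self-contained sketch (the perf values and neighbourhood layout here are invented, since the original arrays aren't shown in full):

```python
import numpy as np

perf = np.array([0.1, 0.9, 0.3, 0.7, 0.2, 0.8, 0.4, 0.6, 0.5])  # hypothetical
neighbourhoods = np.array([[(i - 1) % 9, i, (i + 1) % 9] for i in range(9)])

# For each row, pick the column whose perf value is largest
best = neighbourhoods[np.arange(9), perf[neighbourhoods].argmax(axis=1)]

# Matches the explicit loop from the original question
loop = np.array([neighbourhoods[i][perf[neighbourhoods].argmax(axis=1)[i]]
                 for i in range(9)])
assert (best == loop).all()
```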
Hi,
Apologies if the following is a trivial question. I wish to index the
columns of the following 2D array
In [78]: neighbourhoods
Out[78]:
array([[8, 0, 1],
[0, 1, 2],
[1, 2, 3],
[2, 3, 4],
[3, 4, 5],
[4, 5, 6],
[5, 6, 7],
[6, 7, 8],
[7,
Hi,
A slightly OT (and not directly answering to your question), but
On Mon, Feb 13, 2012 at 3:30 PM, Pierre Haessig wrote:
> I have a pretty silly question about initializing an array a to a given
> scalar value, say A.
>
> Most of the time I use a=np.ones(shape)*A which seems the most
> widesp
I have a pretty silly question about initializing an array a to a given
scalar value, say A.
Most of the time I use a=np.ones(shape)*A which seems the most
widespread idiom, but I got recently interested in getting some
performance improvement.
I tried a=np.zeros(shape)+A, based on broadcastin
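For what it's worth, the idioms under discussion can be put side by side; all three produce the same array, but empty-then-fill touches the memory only once (a sketch, not a benchmark):

```python
import numpy as np

shape = (1000, 1000)
A = 3.14

a1 = np.ones(shape) * A    # widespread idiom: fill with ones, then multiply
a2 = np.zeros(shape) + A   # broadcasting add over a zeroed array
a3 = np.empty(shape)       # uninitialized allocation...
a3.fill(A)                 # ...then a single fill pass

assert (a1 == a2).all() and (a2 == a3).all()
```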
You can get polygon buffer from http://angusj.com/delphi/clipper.php and
make cython interface to it.
HTH
Niki
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion