Thanks Julian. I mistakenly have an (a[:1:] + a[:1,:])/2 type of construct
somewhere in my code. It works fine, but I wasn't sure whether it could be
leading to a wrong calculation. Your explanation makes it clearer now.
On Fri, Feb 28, 2014 at 6:48 PM, Julian Taylor <jtaylor.deb...@googlemai
On 01.03.2014 00:32, Gökhan Sever wrote:
>
> Hello,
>
> Given this simple 2D array:
>
> In [1]: np.arange(9).reshape((3,3))
> Out[1]:
> array([[0, 1, 2],
>        [3, 4, 5],
>        [6, 7, 8]])
>
> In [2]: a = np.arange(9).reshape((3,3))
>
> In [3]: a[:1:]
> Out[3]: array([[0, 1, 2]])
>
> In [4]: a[:1,:]
> Out[4]: array([[0, 1, 2]])
Hello,
Given this simple 2D array:
In [1]: np.arange(9).reshape((3,3))
Out[1]:
array([[0, 1, 2],
       [3, 4, 5],
       [6, 7, 8]])
In [2]: a = np.arange(9).reshape((3,3))
In [3]: a[:1:]
Out[3]: array([[0, 1, 2]])
In [4]: a[:1,:]
Out[4]: array([[0, 1, 2]])
Could you tell me why the last two expressions give the same result?
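[A minimal sketch, not from the original thread, showing why the two
spellings agree: in a[:1:] the second colon only introduces an (empty)
step field, so the expression parses as slice(None, 1, None) on the
first axis, exactly like a[:1] and a[:1,:].]

import numpy as np

a = np.arange(9).reshape((3, 3))

# a[:1:] is the single slice slice(None, 1, None) applied to axis 0;
# the trailing colon merely adds an empty step field.
# a[:1,:] is the same slice on axis 0 plus a full slice on axis 1,
# which changes nothing for a 2D array.
assert a[:1:].shape == a[:1, :].shape == (1, 3)
assert (a[:1:] == a[:1, :]).all()

# The construct from the first message, (a[:1:] + a[:1,:])/2, therefore
# averages two identical views and just reproduces a[:1] as floats.
assert ((a[:1:] + a[:1, :]) / 2 == a[:1]).all()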
On Wed, Feb 26, 2014 at 3:10 PM, Tom Augspurger wrote:
> Chris Barker - NOAA Federal wrote:
> > What python are you using? Apparently not a Universal 32+64 bit build, the
> > one Apple delivers?
>
> I'm using homebrew python, so the platform difference seems to have come
> from there.
On Wed, Feb 26, 2014 at 2:48 PM, Matthew Brett wrote:
> > Agreed. The trick is that it's reasonable for users of Apple's python build
> > to want this too -- but I don't know how we can hope to provide that.
>
> We don't support system python for the mpkg, so I think it's
> reasonable to leave t
On 2/28/14, 3:00 PM, Charles R Harris wrote:
>
> On Fri, Feb 28, 2014 at 5:52 AM, Julian Taylor
> <jtaylor.deb...@googlemail.com> wrote:
>
> > performance should not be impacted as long as we stay on the stack; it
> > just increases the offset of the stack pointer a bit more.
> > E.g. nditer and einsum use temporary stack arrays of this type for their
> > initialization.
On Feb 28, 2014, at 1:04 AM, Sebastian Berg wrote:
>
> because the sequence check like that seems standard in python 3.
Whatever happened to duck typing?
Sigh.
-Chris
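[An aside, not from the thread: the duck-typing alternative Chris is
alluding to would probe for the behavior actually needed instead of
asking isinstance() against an ABC; is_sequence_like below is a
hypothetical helper, a sketch only.]

import numpy as np

def is_sequence_like(obj):
    # Duck typing: test for len() and slice-based indexing directly,
    # rather than relying on the type having registered with the
    # Sequence ABC (which np.ndarray never does).
    try:
        len(obj)
        obj[:0]
        return True
    except (TypeError, KeyError):
        return False

print(is_sequence_like([1, 2, 3]))     # True
print(is_sequence_like(np.arange(5)))  # True
print(is_sequence_like(42))            # False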
On Fri, Feb 28, 2014 at 3:10 PM, Anthony Scopatz wrote:
> Thanks All,
>
> I am sorry I missed the issue. (I still can't seem to find it, actually.)
https://github.com/numpy/numpy/issues/2776
> I agree that there would be minimal overhead here and I bet that would be
> easy to show. I really look forward to seeing this get in!
Thanks All,
I am sorry I missed the issue. (I still can't seem to find it, actually.)
I agree that there would be minimal overhead here and I bet that would be
easy to show. I really look forward to seeing this get in!
Be Well
Anthony
On Fri, Feb 28, 2014 at 4:59 AM, Robert Kern wrote:
> O
Hi everyone,
I have code for some Python wrappers of a scientific library which
needs to support both NumPy 1.6 and later versions.
The build of the wrappers (using SWIG) stopped working because of the
deprecated API introduced in v1.7. The error only concerns the renaming of
some macros from
On Fri, Feb 28, 2014 at 5:52 AM, Julian Taylor
<jtaylor.deb...@googlemail.com> wrote:
> performance should not be impacted as long as we stay on the stack; it
> just increases the offset of the stack pointer a bit more.
> E.g. nditer and einsum use temporary stack arrays of this type for their
> initialization:
>
>     op_axes_arrays[NPY_MAXARGS][NPY_MAXDIMS]; // both 32 currently
performance should not be impacted as long as we stay on the stack; it
just increases the offset of the stack pointer a bit more.
E.g. nditer and einsum use temporary stack arrays of this type for their
initialization:

    op_axes_arrays[NPY_MAXARGS][NPY_MAXDIMS]; // both 32 currently

The resulting nditer stru
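[An illustrative sketch, not from the thread: the NPY_MAXARGS limit
being discussed is visible from pure Python, since nditer refuses more
than 32 operands in the releases of this era; later releases may raise
the limit.]

import numpy as np

# NPY_MAXARGS is 32 in the builds under discussion, so a 33rd operand
# pushes nditer over the limit and it raises ValueError.
ops = [np.arange(3) for _ in range(33)]
try:
    np.nditer(ops)
    print("accepted 33 operands (NPY_MAXARGS was raised in this build)")
except ValueError as exc:
    print("nditer refused 33 operands:", exc)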
Well, what numexpr is using is basically NpyIter_AdvancedNew:
https://github.com/pydata/numexpr/blob/master/numexpr/interpreter.cpp#L1178
and nothing else. If NPY_MAXARGS could be increased just for that, and
without breaking the ABI, then fine. If not, we will have to wait until
1.9, I am afraid.
hm, increasing it for PyArrayMapIterObject would break the public ABI.
While nobody should be using this part of the ABI, it's not appropriate
for a bugfix release.
Note that as it currently stands in numpy 1.9.dev we will break this ABI
for the indexing improvements.
Though for nditer and some other
Hi Julian,
Any chance that NPY_MAXARGS could be increased to something more than
the current value of 32? There is a discussion about this in:
https://github.com/numpy/numpy/pull/226
but I think that, as Charles was suggesting, just increasing NPY_MAXARGS
to something more reasonable (say 256
On Fri, Feb 28, 2014 at 9:03 AM, Sebastian Berg wrote:
> On Fri, 2014-02-28 at 08:47, Sturla Molden wrote:
>> Anthony Scopatz wrote:
>> > Hello All,
>> >
>> > The semantics of this seem quite insane to me:
>> >
>> > In [1]: import numpy as np
>> >
>> > In [2]: import collections
>> >
>> > In [4]: isinstance(np.arange(5), collections.Sequence)
>> > Out[4]: False
On Fri, 2014-02-28 at 08:47, Sturla Molden wrote:
> Anthony Scopatz wrote:
> > Hello All,
> >
> > The semantics of this seem quite insane to me:
> >
> > In [1]: import numpy as np
> >
> > In [2]: import collections
> >
> > In [4]: isinstance(np.arange(5), collections.Sequence)
> > Out[4]: False
Anthony Scopatz wrote:
> Hello All,
>
> The semantics of this seem quite insane to me:
>
> In [1]: import numpy as np
>
> In [2]: import collections
>
> In [4]: isinstance(np.arange(5), collections.Sequence)
> Out[4]: False
>
> In [6]: np.version.full_version
> Out[6]: '1.9.0.dev-eb40f65'
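[For completeness, a small reproduction, not part of the quoted message:
the isinstance() check fails simply because np.ndarray is never
registered with the Sequence ABC, even though it quacks like one. The
ABC lives in collections.abc on newer Pythons; the 2014 thread used
collections.Sequence.]

import collections.abc
import numpy as np

a = np.arange(5)

# ndarray behaves like a sequence...
print(len(a), a[0], a[-1])                      # 5 0 4

# ...but isinstance() only consults ABC registration, so it says no.
print(isinstance(a, collections.abc.Sequence))  # False

# Registration is all the check tests for; running
# collections.abc.Sequence.register(np.ndarray) would flip it to True.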