-Original Message-
From: [EMAIL PROTECTED] on behalf of Andreas Klöckner
Sent: Tue 25-Mar-08 06:42
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] __iadd__(ndarray, ndarray)
On Monday 24 March 2008, Stéfan van der Walt wrote:
> > I think this is highly undesirable and should be fixed,
On Tuesday 25 March 2008, Travis E. Oliphant wrote:
> > Question: If it's a known trap, why not change it?
>
> It also has useful applications. Also, it can only happen with a
> bump in the version number to 1.1.
I'm not trying to make the functionality go away. I'm arguing that
int_array += downc…
Andreas Klöckner wrote:
> On Monday 24 March 2008, Stéfan van der Walt wrote:
>
>>> I think this is highly undesirable and should be fixed, or at least
>>> warned about. Opinions?
>>>
>> I know the result is surprising, but it follows logically. You have
>> created two integers in memory…
On Monday 24 March 2008, Stéfan van der Walt wrote:
> > I think this is highly undesirable and should be fixed, or at least
> > warned about. Opinions?
>
> I know the result is surprising, but it follows logically. You have
> created two integers in memory, and now you add 0.2 and 0.1 to both…
>
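A concrete illustration of the behaviour under discussion -- a minimal
sketch assuming the NumPy 1.0.4 semantics described in this thread
(recent NumPy versions refuse the implicit downcast and raise a
TypeError instead of truncating):

    import numpy

    u = numpy.array([1, 3])      # integer dtype; in-place ops keep it
    v = numpy.array([0.2, 0.1])  # float dtype

    # __iadd__ writes the result back into u's own buffer, so the float
    # sum is cast back down to int.  Under NumPy 1.0.4 this truncates
    # silently; modern NumPy raises a TypeError here instead.
    u += v
    print(u)                     # 1.0.4: array([1, 3])

    # The out-of-place form upcasts to float, as most people expect:
    print(numpy.array([1, 3]) + v)   # array([ 1.2,  3.1])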
Hi Joris
Also take a look at the work done by Neal Becker, posted on this
list earlier this year or at the end of last. Please go ahead and create a
cookbook entry on the wiki -- that way we have a central place for
writing up further explorations of this kind (also, let us know on the
list if you do).
Hi Martin
Please file a bug on the trac page: http://projects.scipy.org/scipy/numpy
You may mark memory errors as blockers for the next release.
Confirmed under latest SVN.
Thanks
Stéfan
On Mon, Mar 24, 2008 at 2:05 PM, Martin Manns <[EMAIL PROTECTED]> wrote:
> Hello,
>
> I am encountering a problem (a bug?) with the numpy any function…
Hi Andreas
On Mon, Mar 24, 2008 at 7:28 PM, Andreas Klöckner
<[EMAIL PROTECTED]> wrote:
> I just got tripped up by this behavior in Numpy 1.0.4:
>
> >>> u = numpy.array([1,3])
> >>> v = numpy.array([0.2,0.1])
> >>> u+=v
> >>> u
> array([1, 3])
> >>>
>
> I think this is highly undesirable and should be fixed, or at least…
On Mon, Mar 24, 2008 at 6:37 PM, David Cournapeau
<[EMAIL PROTECTED]> wrote:
> That's one of the reasons why I was thinking about a gradual move of most
> "core functionalities of the core" toward a separate C library, with a
> simple and crystal clear interface, without any reference to any Python…
Robert Kern wrote:
> On Mon, Mar 24, 2008 at 12:12 PM, Gnata Xavier <[EMAIL PROTECTED]> wrote:
>
>
>> Well, it is not that easy. We have a lot of numpy code that follows this pattern:
>> 1) open a large data file to get a numpy array
>> 2) perform computations on this array (I'm only talking of the…
Stéfan van der Walt wrote:
> On Mon, Mar 24, 2008 at 6:04 PM, Lou Pecora <[EMAIL PROTECTED]> wrote:
>
>> --- Matthieu Brucher <[EMAIL PROTECTED]>
>> wrote:
>>
>>
>> > It was added as a compile-time #define on the SVN
>> > some days ago ;)
>> >
>> > Matthieu
>>
>> Thanks, Matthieu, that's a good step. But when the…
On Mon, Mar 24, 2008 at 6:04 PM, Lou Pecora <[EMAIL PROTECTED]> wrote:
>
> --- Matthieu Brucher <[EMAIL PROTECTED]>
> wrote:
>
>
> > It was added as a compile-time #define on the SVN
> > some days ago ;)
> >
> > Matthieu
>
> Thanks, Matthieu, that's a good step. But when the
> SVD function throws an exception, is it clear that the…
On 24 Mar 2008, at 18:27, Martin Manns wrote:
>> I cannot confirm the problem on my intel macbook pro using the same
>> Python and Numpy versions. Although any(numpy.array(large_none))
>> takes
>> a significantly longer time than any(numpy.array(large_zero)), the
>> former does not segfault on my machine…
On Mon, Mar 24, 2008 at 2:00 PM, Bruce Southey <[EMAIL PROTECTED]> wrote:
> Hi,
> True, I noticed on my system (with 8 GB of memory) that using
> … works but not 1.
> Also, use of a two-dimensional array also crashes if the size is large
> enough:
> large_m = numpy.vstack((large_none, large_none))
Hi,
True, I noticed on my system (with 8 GB of memory) that using
… works but not 1.
Also, use of a two-dimensional array also crashes if the size is large enough:
large_m = numpy.vstack((large_none, large_none))
Bruce
Martin Manns wrote:
> Bruce Southey <[EMAIL PROTECTED]> wrote: > Hi,
>
Hi all,
I just got tripped up by this behavior in Numpy 1.0.4:
>>> u = numpy.array([1,3])
>>> v = numpy.array([0.2,0.1])
>>> u+=v
>>> u
array([1, 3])
>>>
I think this is highly undesirable and should be fixed, or at least warned
about. Opinions?
Andreas
On Mon, Mar 24, 2008 at 12:12 PM, Gnata Xavier <[EMAIL PROTECTED]> wrote:
> Well, it is not that easy. We have a lot of numpy code that follows this pattern:
> 1) open a large data file to get a numpy array
> 2) perform computations on this array (I'm only talking of the numpy
> part here; scipy is…
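For what it's worth, a sketch of the pattern Xavier describes, with
numpy.memmap standing in for "open a large data file" (the file name,
dtype, and computation below are placeholders, not from the original
post):

    import numpy

    # 1) open a large data file as a numpy array, without pulling the
    #    whole file into memory at once
    data = numpy.memmap("big_data.bin", dtype=numpy.float64, mode="r")

    # 2) perform computations on this array (stand-in computation)
    result = numpy.sqrt(data ** 2).sum()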
Bruce Southey <[EMAIL PROTECTED]> wrote: > Hi,
> This also crashes with numpy 1.0.4 under Python 2.5.1. I am guessing it
> may be due to numpy.any() not understanding the 'None'.
I doubt that because I get the segfault for all kinds of object arrays that I
try out:
~$ python
Python 2.4.5…
Hi,
This also crashes with numpy 1.0.4 under Python 2.5.1. I am guessing it
may be due to numpy.any() not understanding the 'None'.
Bruce
Martin Manns wrote:
>> On 24 Mar 2008, at 14:05, Martin Manns wrote:
>>
>>
>>> Hello,
>>>
>>> I am encountering a problem (a bug?) with the numpy any function…
Matthew Brett wrote:
>
> I'm +3 for the plugin idea - it would have huge benefits for
> installation and automatic optimization. What needs to be done? Who
> could do it?
The main issues are portability and reliability, I think. All OSes
supported by numpy have more or less a dynamic library loading…
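As a rough illustration of the dynamic-load half of the plugin idea
(the library names below are invented for the example -- this is not an
existing numpy mechanism):

    import ctypes

    # Try progressively less optimised builds of a hypothetical core
    # library until one loads on this machine.
    candidates = ["numpy_core_sse2.so", "numpy_core_generic.so"]

    core = None
    for name in candidates:
        try:
            core = ctypes.CDLL(name)   # dlopen() under the hood
            break
        except OSError:
            continue

    if core is None:
        raise ImportError("no loadable numpy core plugin found")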
> On 24 Mar 2008, at 14:05, Martin Manns wrote:
>
> > Hello,
> >
> > I am encountering a problem (a bug?) with the numpy any function.
> > Since the python any function behaves in a slightly different way,
> > I would like to keep using numpy's.
> >
> I cannot confirm the problem on my intel macbook pro…
On Mon, Mar 24, 2008 at 10:35 AM, Robert Kern <[EMAIL PROTECTED]> wrote:
> On Sat, Mar 22, 2008 at 4:25 PM, Charles R Harris
> <[EMAIL PROTECTED]> wrote:
> >
> > On Sat, Mar 22, 2008 at 2:59 PM, Robert Kern <[EMAIL PROTECTED]>
> wrote:
> > >
> > > On Sat, Mar 22, 2008 at 2:04 PM, Charles R Harris…
Matthieu Brucher wrote:
>
> It is a real problem in some communities, like astronomers and
> image-processing people, but the lack of documentation is the first
> one; that is true.
>
>
> Even in those communities, I think that a lot could be done at a
> higher level, such as what IPython1 does…
--- Matthieu Brucher <[EMAIL PROTECTED]>
wrote:
> It was added as a compile-time #define on the SVN
> some days ago ;)
>
> Matthieu
Thanks, Matthieu, that's a good step. But when the
SVD function throws an exception, is it clear that the
user can redefine niter and recompile? Otherwise, the
fi…
On Sat, Mar 22, 2008 at 4:25 PM, Charles R Harris
<[EMAIL PROTECTED]> wrote:
>
> On Sat, Mar 22, 2008 at 2:59 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> >
> > On Sat, Mar 22, 2008 at 2:04 PM, Charles R Harris
> > <[EMAIL PROTECTED]> wrote:
> >
> > > Maybe it's time to revisit the template subsystem…
>
> It is a real problem in some communities, like astronomers and
> image-processing people, but the lack of documentation is the first
> one; that is true.
>
Even in those communities, I think that a lot could be done at a higher
level, such as what IPython1 does (task parallelism).
Matthieu
It was added as a compile-time #define on the SVN some days ago ;)
Matthieu
2008/3/24, Zachary Pincus <[EMAIL PROTECTED]>:
>
> Hi all,
>
>
> > I looked at line 21902 of dlapack_lite.c; it is:
> >
> >     for (niter = iter; niter <= 20; ++niter) {
> >
> > Indeed, the upper limit for iterations…
> A couple of thoughts on parallelism:
>
> 1. Can someone come up with a small set of cases and time them on
> numpy, IDL, Matlab, and C, using various parallel schemes, for each of
> a representative set of architectures? We're comparing a benchmark to
> itself on different architectures, rather than seeing…
I cannot confirm the problem on my intel macbook pro using the same
Python and Numpy versions. Although any(numpy.array(large_none)) takes
a significantly longer time than any(numpy.array(large_zero)), the
former does not segfault on my machine.
J.
On 24 Mar 2008, at 14:05, Martin Manns…
Hi all,
> I looked at line 21902 of dlapack_lite.c; it is:
>
>     for (niter = iter; niter <= 20; ++niter) {
>
> Indeed, the upper limit for iterations in the
> linalg.svd code is set to 20. For now I will go with
> my method (in an earlier post) of squaring the matrix and
> then doing svd when…
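A rough sketch of the kind of fallback Lou describes, assuming the idea
is to decompose the squared matrix a^T a when linalg.svd fails to
converge (the exact method from the earlier post may differ, and note
that squaring the matrix also squares its condition number):

    import numpy

    def svd_with_squaring_fallback(a):
        try:
            return numpy.linalg.svd(a)
        except numpy.linalg.LinAlgError:
            # The eigenvalues of a^T a are the squared singular values
            # of a; its eigenvectors are a's right singular vectors.
            w, v = numpy.linalg.eigh(numpy.dot(a.T, a))
            order = numpy.argsort(w)[::-1]          # descending
            s = numpy.sqrt(numpy.clip(w[order], 0, None))
            vt = v[:, order].T
            # Recover the left singular vectors, guarding against
            # division by zero for vanishing singular values.
            u = numpy.dot(a, vt.T) / numpy.where(s > 0, s, 1)
            return u, s, vt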
A couple of thoughts on parallelism:
1. Can someone come up with a small set of cases and time them on
numpy, IDL, Matlab, and C, using various parallel schemes, for each of
a representative set of architectures? We're comparing a benchmark to
itself on different architectures, rather than seeing…
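A minimal harness for the numpy side of such a comparison (the kernel
and size below are placeholders; the agreed-upon benchmark set would go
here, with equivalent IDL/Matlab/C versions timed the same way):

    import timeit
    import numpy

    a = numpy.random.rand(1000, 1000)

    # Time one representative kernel, averaged over several runs.
    t = timeit.timeit(lambda: numpy.dot(a, a), number=10) / 10
    print("1000x1000 matmul: %.4f s" % t)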
David Cournapeau wrote:
> Gnata Xavier wrote:
>
>> OK, I will try to see what I can do, but it is sure that we do need the
>> plug-in system first (read: "before the threads in the numpy release").
>> During the development of 1.1, I will try to find some time to understand
>> where I should put some…
Hello,
I am encountering a problem (a bug?) with the numpy any function.
Since the python any function behaves in a slightly different way,
I would like to keep using numpy's.
Here is the problem:
$ python
Python 2.5.1 (r251:54863, Jan 26 2008, 01:34:00)
[GCC 4.1.2 (Gentoo 4.1.2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
…
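The preview cuts off before the actual session; judging from the
follow-ups in this thread, the reproduction was presumably along these
lines (the array size is a guess, and whether it crashes depends on
platform and available memory):

    import numpy

    n = 10 ** 6                             # size guess; must be large
    large_none = numpy.array([None] * n)    # object array
    large_zero = numpy.array([0] * n)

    numpy.any(large_zero)   # fine
    numpy.any(large_none)   # reported to segfault with numpy 1.0.4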
Hi,
> Note that the plug-in idea is just my own idea; it is not something
> agreed on by anyone else. So maybe it won't be done for numpy 1.1, or at
> all. It depends on the main maintainers of numpy.
I'm +3 for the plugin idea - it would have huge benefits for
installation and automatic optimization…
Hi,
I am eager to implement the C version of the set_where function, but
would like to do so in a numpy-esque way. Having implemented several
internal and released Python/C packages, I am familiar with the PyArray
object, the PyArrayIterObject, and the like. After looking through the
code I n…
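For reference, a pure-Python sketch of the semantics a set_where-style
function presumably has (the name comes from the post above, not from an
existing numpy API, and the signature is an assumption; the C version
would walk the arrays with a PyArrayIterObject instead of building
temporaries):

    import numpy

    def set_where(a, mask, value):
        # In-place assignment at every position where mask is true.
        a[numpy.asarray(mask, dtype=bool)] = value
        return a

    arr = numpy.arange(5, dtype=float)
    set_where(arr, arr > 2, -1.0)   # -> array([ 0.,  1.,  2., -1., -1.])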