On 4 Dec 2012 02:27, "Ondřej Čertík" wrote:
>
> Hi,
>
> I started to work on the release again and noticed weird failures at
> Travis-CI:
[…]
> File "/home/travis/virtualenv/python2.5/lib/python2.5/site-packages/numpy/core/tests/test_iterator.py",
The problem is that Travis started installing num

Hi,
I started to work on the release again and noticed weird failures at Travis-CI:
https://github.com/numpy/numpy/pull/2782
The first commit (8a18fc7) should not trigger this failure:
==
FAIL: test_iterator.test_iter_array_cas

Chris,

Thanks for the feedback. FYI, the minor changes I talked about give
different performance improvements depending on the scenario, e.g.:
1) Array * Array

point = array([2.0, 3.0])
scale = array([2.4, 0.9])
retVal = point * scale
# The line above runs 1.1 times faster with my new code (but
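[A timing sketch of my own, not from the thread, for anyone who wants to reproduce this kind of small-array measurement; the `point`/`scale` names mirror the example above, and absolute numbers will of course vary by machine and numpy version.]

```python
import timeit

import numpy as np

point = np.array([2.0, 3.0])
scale = np.array([2.4, 0.9])

# For 2-element arrays, per-call overhead (argument parsing, dtype
# resolution) dominates; the multiply itself is nearly free.
t_small = timeit.timeit(lambda: point * scale, number=100_000)

# For large arrays the arithmetic dominates and overhead vanishes
# into the noise, so speedups to the call path barely register here.
big_point = np.random.rand(1_000_000)
big_scale = np.random.rand(1_000_000)
t_big = timeit.timeit(lambda: big_point * big_scale, number=100)

print("2-element multiply:  %.3e s/op" % (t_small / 100_000))
print("1e6-element multiply: %.3e s/op" % (t_big / 100))
```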

03.12.2012 22:10, Karl Kappler wrote:
[clip]
> I.e., the imaginary part is initialized to a different value. From
> reading up on forums I think I understand that when an array is
> allocated without specific values, it will be given arbitrary
> (uninitialized) values which are often very small, i.e. ~1e-316 or so. But
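[A short illustration, mine rather than from the thread, of the uninitialized-memory behaviour described above: np.empty hands back whatever bytes the allocator returns, which often decode to tiny denormal floats like ~1e-316 but are not guaranteed to be anything in particular, while np.zeros guarantees zeroed storage.]

```python
import numpy as np

# np.empty allocates without initializing: the contents are whatever
# happened to be in the returned memory block.
a = np.empty(4, dtype=np.complex128)
print(a)  # unpredictable values; may look like ~1e-316, may not

# np.zeros (or an explicit assignment) gives defined contents.
b = np.zeros(4, dtype=np.complex128)
print(b)
```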

Hello,
This is a continuation of a problem I had last year,
http://old.nabble.com/Apparently-non-deterministic-behaviour-of-complex-array-multiplication-tt32893004.html#a32931369
at least it seems to have similar symptoms. I am working again with complex
valued arrays in numpy (python version 2.

Raul,

Thanks for doing this work -- both the profiling and the actual
suggestions for how to improve the code -- whoo hoo!

In general, it seems that numpy performance for scalars and very small
arrays (i.e. (2,), (3,), maybe (3,3)), the kind of thing that you'd use
to hold a coordinate point or the like,

On 03/12/2012 4:14 AM, Nathaniel Smith wrote:
> On Mon, Dec 3, 2012 at 1:28 AM, Raul Cota wrote:
>> I finally decided to track down the problem and I started by getting
>> Python 2.6 from source and profiling it in one of my cases. By far the
>> biggest bottleneck came out to be PyString_FromForma

On 02/12/2012 8:31 PM, Travis Oliphant wrote:
> Raul,
>
> This is *fantastic work*. While many optimizations were done 6 years ago
> as people started to convert their code, that kind of report has trailed off
> in the last few years. I have not seen this kind of speed-comparison for
> som

Thanks Christoph.

It seemed to work. Will do profile runs today/tomorrow and see what comes
out.
Raul
On 02/12/2012 7:33 PM, Christoph Gohlke wrote:
> On 12/2/2012 5:28 PM, Raul Cota wrote:
>> Hello,
>>
>> First a quick summary of my problem and at the end I include the basic
>> changes I am

A followup on the previous thread on scalar speed: operations with
numpy scalars.

I can *maybe* understand this

>>> np.array(2)[()] * [0.5, 1]
[0.5, 1, 0.5, 1]

but don't understand this

>>> np.array(2.+0.1j)[()] * [0.5, 1]
__main__:1: ComplexWarning: Casting complex values to real discards
the
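[A note of mine on reproducing the session above: it is from Python 2, where numpy integer scalars subclassed the built-in int, so `scalar * list` fell back to Python's list repetition. On Python 3 the integer scalar no longer subclasses int, and both cases broadcast the list as an array — this is my reading of the quirk; the thread itself predates that change.]

```python
import numpy as np

# Indexing a 0-d array with an empty tuple yields a numpy scalar.
x = np.array(2)[()]
print(type(x))        # typically numpy.int64 (platform-dependent)
print(x * [0.5, 1])   # elementwise on Python 3, not list repetition

z = np.array(2.0 + 0.1j)[()]
print(z * [0.5, 1])   # complex elementwise product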

On Mon, Dec 3, 2012 at 6:14 AM, Nathaniel Smith wrote:
> On Mon, Dec 3, 2012 at 1:28 AM, Raul Cota wrote:
>> I finally decided to track down the problem and I started by getting
>> Python 2.6 from source and profiling it in one of my cases. By far the
>> biggest bottleneck came out to be PyString

On Mon, Dec 3, 2012 at 1:28 AM, Raul Cota wrote:
> I finally decided to track down the problem and I started by getting
> Python 2.6 from source and profiling it in one of my cases. By far the
> biggest bottleneck came out to be PyString_FromFormatV which is a
> function to assemble a string for a
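[For readers who want to reproduce this kind of measurement: a C-level bottleneck such as PyString_FromFormatV only shows up under a native profiler (gprof, perf, and the like), since Python's cProfile reports Python-level calls only. Below is a minimal cProfile sketch of the Python-level side; the `workload` function is an invented stand-in for the scalar-heavy code discussed in the thread.]

```python
import cProfile
import pstats

import numpy as np

def workload():
    # Invented stand-in: many tiny scalar operations, where per-call
    # overhead dominates total runtime.
    total = 0.0
    for _ in range(10_000):
        total += float(np.float64(1.5) * np.float64(2.0))
    return total

profiler = cProfile.Profile()
result = profiler.runcall(workload)
stats = pstats.Stats(profiler).sort_stats("cumulative")
stats.print_stats(5)  # top 5 entries by cumulative time
```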