On Fri, 26 Nov 2010 20:57:30 +0100, Gerrit Holl wrote:
[clip]
> I wonder, am I missing something or have I really written a significant
> improvement in less than 10 LOC? Should I file a patch for this?
The implementation of merge_arrays doesn't look optimal -- it seems to
actually iterate over the data in Python rather than copying whole fields at once.
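If that's the bottleneck, one possible sketch of a faster merge (hypothetical field names, plain two-array case only) is to allocate an output array with the combined dtype and assign each field as a whole column, so the per-row work happens in compiled code:

```python
import numpy as np

# Two example structured arrays standing in for the real data.
a = np.zeros(5, dtype=[('x', 'f8'), ('y', 'i4')])
b = np.zeros(5, dtype=[('z', 'f8')])

# Build the combined dtype and copy field by field, not row by row.
dtype = a.dtype.descr + b.dtype.descr
merged = np.empty(len(a), dtype=dtype)
for name in a.dtype.names:
    merged[name] = a[name]
for name in b.dtype.names:
    merged[name] = b[name]
```

This does one vectorized copy per field; note that merge_arrays also handles masking, flattening, and mismatched lengths, which this sketch deliberately ignores.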
On 26 November 2010 20:16, Gerrit Holl wrote:
> Hi,
>
> upon profiling my code, I found that
> numpy.lib.recfunctions.merge_arrays is extremely slow; it does some
> 7000 rows/second. This is not acceptable for me.
...
> How can I do this in a faster way?
Replying to my own code here. Either I hav
Hi,
upon profiling my code, I found that
numpy.lib.recfunctions.merge_arrays is extremely slow; it does some
7000 rows/second. This is not acceptable for me.
I have two large record arrays, or arrays with a complicated dtype.
All I want to do is to merge them into one. I don't think that should
have to take this long.
On Thursday 25 November 2010 11:13:49, Jean-Luc Menut wrote:
> Hello all,
>
> I have a little question about the speed of numpy vs IDL 7.0. I did a
> very simple little check by computing just a cosine in a loop. I was
> quite surprised to see an order of magnitude of difference between
> numpy and IDL.
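For what it's worth, the usual culprit in such a benchmark is calling the function once per element from a Python loop instead of once on the whole array. A rough sketch of the two styles (the array values here are assumptions, not from the original post):

```python
import math
import numpy as np

x = np.linspace(0.0, 2.0 * math.pi, 100_000)

# Slow style: a Python-level loop pays interpreter overhead per element.
loop_result = np.array([math.cos(v) for v in x])

# Fast style: one vectorized call, the loop runs in compiled code.
vec_result = np.cos(x)
```

Both give the same values; only the vectorized form is a fair comparison against IDL's array operations.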
Hello,
After careful Google searches, I was not successful in finding any
project dealing with Weibull analysis in Python, numpy, or scipy.
So before reinventing the wheel, I ask here whether any of you
have already started such a project and is eager to share.
Thanks,
David
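Not a full Weibull-analysis package, but for the basic fitting step scipy.stats may already cover a lot of the ground; a minimal sketch, using synthetic failure-time data as a stand-in:

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic "failure times" drawn from a known Weibull for illustration.
rng = np.random.default_rng(0)
data = weibull_min.rvs(1.5, scale=100.0, size=1000, random_state=rng)

# Fix the location at 0 (two-parameter Weibull) and fit shape and scale
# by maximum likelihood.
shape, loc, scale = weibull_min.fit(data, floc=0)
```

Anything beyond parameter estimation (probability plots, censored data, confidence bounds) would still need to be built on top of this.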
Although this was mentioned earlier, it's worth emphasizing that if
you need to use functions such as cosine with scalar arguments, you
should use math.cos(), not numpy.cos(). The numpy versions of these
functions are optimized for handling array arguments and are much slower
than the math versions for scalar inputs.
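A quick way to see the gap on any machine (exact ratios will vary; this is just a sketch):

```python
import math
import timeit

import numpy as np

# Time 100k scalar cosine calls through each implementation.
t_math = timeit.timeit('math.cos(1.0)', globals={'math': math}, number=100_000)
t_numpy = timeit.timeit('np.cos(1.0)', globals={'np': np}, number=100_000)

# The results agree; only the per-call overhead differs, because
# numpy.cos goes through the ufunc machinery even for a single scalar.
```

So the rule of thumb is: math.cos for scalars, numpy.cos for arrays.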