Hello,
Almost two years ago a discussion on this list took place under the
subject "speeding up operations on small vectors". The issue was the
rather high constant time and space overhead of small NumPy arrays. I'm
aware that in general one should combine small arrays into large ones,
but there
Dear numpy historians,
When multiplying two arrays with numpy.dot, the summation is made over
the last index of the first argument, and over the *second-to-last*
index of the second argument. I wonder why the convention has been
chosen like this.
The only reason I can think of is that this allow
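
A minimal sketch of the convention described above, assuming a recent
NumPy; the shapes are chosen only for illustration, and the einsum call
spells out the same contraction explicitly:

import numpy as np

# np.dot sums over the last axis of the first array and the
# second-to-last axis of the second array.
a = np.arange(2 * 3 * 4).reshape(2, 3, 4)
b = np.arange(5 * 4 * 6).reshape(5, 4, 6)

c = np.dot(a, b)                          # shape (2, 3, 5, 6)
c_ref = np.einsum('ijk,lkm->ijlm', a, b)  # same contraction, written out

assert c.shape == (2, 3, 5, 6)
assert np.array_equal(c, c_ref)
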
I noticed that numpy.linalg.eigvalsh returns a complex array, even
though mathematically the resulting eigenvalues are guaranteed to be
real.
Looking at the source code, the underlying zheevd routine of LAPACK
indeed returns an array of real numbers, which is then converted to
complex in the numpy
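
A small sketch of the point made above; the example matrix is arbitrary,
and on recent NumPy releases eigvalsh already hands back a real-valued
array, so the final cast is only needed if a complex dtype comes back:

import numpy as np

# Eigenvalues of a Hermitian matrix are mathematically real.
h = np.array([[2.0, 1.0 + 1.0j],
              [1.0 - 1.0j, 3.0]])

w = np.linalg.eigvalsh(h)
print(w.dtype)              # float64 on recent releases

# If an older release returns a complex dtype, the imaginary parts
# are zero and can simply be dropped:
w = np.real_if_close(w)
assert np.isrealobj(w)
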
Hello,
Is it just me who thinks that matplotlib is ugly and a pain to use? So
far I haven't found a decent alternative usable from within python. (I
haven't tried all the packages out there.) I'm mostly interested in 2d
plots. Who is happy enough with a numpy-compatible plotting package to
recommend it?
Skipper Seabold writes:
> So it's the dot function being called repeatedly on smallish arrays
> that's the bottleneck? I've run into this as well. See this thread
> [1].
> (...)
Thanks for the links. "tokyo" is interesting, though I fear the
intermediate matrix size regime where it really makes
>> My question was about ways to achieve a speedup without modifying the
>> algorithm. I was hoping that there is some numpy-like library for
>> python which for small arrays achieves a performance at least on par
>> with the implementation using tuples. This should be possible
>> technically.
>
Pauli Virtanen writes:
>> Thank you for your suggestion. It doesn't help me however, because
>> the algorithm I'm _really_ trying to speed up cannot be vectorized
>> with numpy in the way you vectorized my toy example.
>>
>> Any other ideas?
>
> Reformulate the problem so that it can be vectorized.
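
The toy example referred to in this thread is not shown here, but a
generic sketch of the kind of reformulation meant by "vectorize" could
look as follows (the shapes and data are invented for illustration):

import numpy as np

# Instead of one np.dot call per small matrix pair in a Python loop,
# stack the operands into a single 3-d array and contract them once.
a = np.random.random((10000, 2, 2))
b = np.random.random((10000, 2, 2))

looped = np.array([np.dot(x, y) for x, y in zip(a, b)])   # per-call overhead
batched = np.einsum('nij,njk->nik', a, b)                 # one vectorized call

assert np.allclose(looped, batched)
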
Olivier Delalleau writes:
> Here's a version that uses fewer Python loops and thus is faster. What
> still takes time is the array creation (np.array(...)), I'm not sure
> exactly why. It may be possible to speed it up.
Thank you for your suggestion. It doesn't help me however, because the
algorithm I'm _really_ trying to speed up cannot be vectorized with
numpy in the way you vectorized my toy example.
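
The point above about np.array(...) being the expensive part can be
checked with a rough timing sketch; the numbers are only illustrative
and will vary by machine and NumPy version:

import timeit

setup = "import numpy as np; x, y, z = 1.0, 2.0, 3.0"

# Building a tiny ndarray versus building the equivalent tuple.
t_array = timeit.timeit("np.array([x, y, z])", setup=setup, number=100000)
t_tuple = timeit.timeit("(x, y, z)", setup=setup, number=100000)

print("np.array:", t_array)
print("tuple:   ", t_tuple)
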
Dear numpy experts,
I could not find a satisfying solution to the following problem, so I
thought I would ask:
In one part of a large program I have to deal a lot with small (2d or
3d) vectors and matrices, performing simple linear algebra operations
with them (dot products and matrix multiplications).
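
A sketch of the kind of tuple-based implementation mentioned earlier on
this page, for fixed 2-d operands; the helper names are made up here,
and the numpy calls only serve as a cross-check:

import numpy as np

def dot2(a, b):
    # Dot product of two 2-vectors given as tuples.
    return a[0] * b[0] + a[1] * b[1]

def matvec2(m, v):
    # Product of a 2x2 matrix (tuple of row tuples) with a 2-vector.
    return (m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1])

v = (1.0, 2.0)
m = ((0.0, 1.0),
     (-1.0, 0.0))

assert dot2(v, v) == np.dot(np.array(v), np.array(v))
assert matvec2(m, v) == tuple(np.dot(np.array(m), np.array(v)))
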
> On Wed, May 4, 2011 at 6:19 AM, Christoph Groth wrote:
>>
>> Dear numpy experts,
>>
>> I have noticed that with Numpy 1.5.1 the operation
>>
>> m[::2] += 1.0
>>
>> takes twice as long as
>>
>> t = m[::2]
>> t += 1.0
Dear numpy experts,
I have noticed that with Numpy 1.5.1 the operation
m[::2] += 1.0
takes twice as long as
t = m[::2]
t += 1.0
where "m" is some large matrix. This is of course because the first
snippet is equivalent to
t = m[::2]
t += 1.0
m[::2] = t
I wonder whether it would not be a good
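
A quick way to check this observation on a given NumPy version; the
matrix size and repeat count below are arbitrary:

import timeit

setup = "import numpy as np; m = np.zeros((2000, 2000))"

# In-place augmented assignment on a slice versus the explicit
# "take a view, add in place" variant.
t_direct = timeit.timeit("m[::2] += 1.0", setup=setup, number=100)
t_view = timeit.timeit("t = m[::2]; t += 1.0", setup=setup, number=100)

print("m[::2] += 1.0        :", t_direct)
print("t = m[::2]; t += 1.0 :", t_view)
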