Sebastian Haase wrote:
> Does ATLAS/BLAS do anything special for element wise multiplication
> and alike - if for example the data is not aligned or not contiguous?
Nothing that ATLAS optimizes, no. They focus (rightly) on the more complicated
matrix operations (BLAS Level 3, if you are familiar with the BLAS levels).
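The distinction can be made concrete: an element-wise product does O(n) work on O(n) data and is bound by memory bandwidth, while the matrix products ATLAS tunes (BLAS Level 3) do O(n^3) work on O(n^2) data, which is where cache blocking pays off. A minimal sketch (array sizes are arbitrary):

```python
import numpy as np

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# Element-wise product: one pass over memory; there is little
# for a tuned library to improve here.
c = a * b

# Matrix product: n**3 multiply-adds on n**2 data. This maps onto the
# BLAS Level 3 routine (gemm) that ATLAS spends its tuning effort on.
d = np.dot(a, b)

assert c.shape == d.shape == (n, n)
```
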
Sebastian Haase wrote:
> On 4/17/07, Anne Archibald <[EMAIL PROTECTED]> wrote:
>> On 18/04/07, Robert Kern <[EMAIL PROTECTED]> wrote:
>>> Sebastian Haase wrote:
>>>
Hi,
I don't know much about ATLAS -- would there be other numpy functions
that *could* or *should* be implemented using ATLAS !?
Any ?
--- Anne Archibald <[EMAIL PROTECTED]> wrote:
> I just took another look at that code and added a parallel_map I hadn't
> got around to writing before, too. I'd be happy to stick it (and test
> file) on the wiki under some open license or other ("do what thou wilt
> shall be the whole of the Law")
Anne Archibald wrote:
>
> It would be perfectly possible, in principle, to implement an
> ATLAS-like library that handled a variety (perhaps all) of numpy's
> basic operations in platform-optimized fashion. But implementing ATLAS
> is not a simple pro
Anne Archibald wrote:
>
> And the scope of improvement would be very limited; an
> expression like A*B+C*D would be much more efficient, probably, if the
> whole expression were evaluated at once for each element (due to
> memory locality and temporary allocation). But it is impossible for
> numpy
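The whole-expression evaluation Anne describes is the kind of single-pass, blockwise evaluation that the numexpr package provides; a hand-rolled sketch of the idea in plain numpy (the `fused_mul_add` helper and its block size are hypothetical, for illustration only):

```python
import numpy as np

def fused_mul_add(a, b, c, d, blocksize=4096):
    """Evaluate a*b + c*d block by block, so temporaries stay cache-sized
    instead of allocating full-length intermediate arrays."""
    out = np.empty_like(a)
    for i in range(0, a.size, blocksize):
        s = slice(i, i + blocksize)
        np.multiply(a[s], b[s], out=out[s])  # out[s] = a*b, written in place
        out[s] += c[s] * d[s]                # one small, block-sized temporary
    return out

a, b, c, d = (np.random.rand(10_000) for _ in range(4))
assert np.allclose(fused_mul_add(a, b, c, d), a * b + c * d)
```
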
On 18/04/07, Sebastian Haase <[EMAIL PROTECTED]> wrote:
> Hi Anne,
> I'm just starting to look into your code (sounds very interesting -
> should probably be put onto the wiki)
> -- quick note:
> you are mixing tabs and spaces :-(
> what editor are you using !?
Agh. vim is misbehaving. Sorry about that.
On 18/04/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Sebastian Haase wrote:
>
> > Hi,
> > I don't know much about ATLAS -- would there be other numpy functions
> > that *could* or *should* be implemented using ATLAS !?
> > Any ?
>
> Not really, no.
ATLAS is a library designed to implement linear algebra operations.
Sebastian Haase wrote:
> Hi,
> I don't know much about ATLAS -- would there be other numpy functions
> that *could* or *should* be implemented using ATLAS !?
> Any ?
Not really, no.
--
Robert Kern
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
On 4/17/07, Robert Kern <[EMAIL PROTECTED]> wrote:
> Matthieu Brucher wrote:
> > I would say that if the underlying atlas library is multithreaded, numpy
> > operations will be as well. Then, at the Python level, even if the
> > operations take a lot of time, the interpreter will be able to process
Hi Anne,
I'm just starting to look into your code (sounds very interesting -
should probably be put onto the wiki)
-- quick note:
you are mixing tabs and spaces :-(
what editor are you using !?
-Sebastian
On 4/17/07, Anne Archibald <[EMAIL PROTECTED]> wrote:
> On 17/04/07, Lou Pecora <[EMAIL P
Very nice. Thanks. Examples are welcome since they are usually the best
way to get up to speed with programming concepts.
--- Anne Archibald <[EMAIL PROTECTED]> wrote:
> On 17/04/07, Lou Pecora <[EMAIL PROTECTED]> wrote:
> > I get what you are saying, but I'm not even at the
> > Stupidly Easy Parallel level, yet. Eventually.
I would say that if the underlying atlas library is multithreaded, numpy
operations will be as well. Then, at the Python level, even if the
operations take a lot of time, the interpreter will be able to process
threads, as the lock is freed during the numpy operations - as I understood
for the las
On 17/04/07, James Turner <[EMAIL PROTECTED]> wrote:
> Hi Anne,
>
> Your reply to Lou raises a naive follow-up question of my own...
>
> > Normally, python's multithreading is effectively cooperative, because
> > the interpreter's data structures are all stored under the same lock,
> > so only one
On 17/04/07, Lou Pecora <[EMAIL PROTECTED]> wrote:
I get what you are saying, but I'm not even at the
Stupidly Easy Parallel level, yet. Eventually.
Well, it's hardly wonderful, but I wrote a little package to make idioms like:
d = {}
def work(f):
    d[f] = sum(exp(2.j*pi*f*times))
foreach(wo
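Anne's package itself isn't reproduced in the archive, but a `foreach` with roughly the shape she describes can be sketched with the standard library `threading` module (the interface and thread count below are guesses for illustration, not her actual code):

```python
import threading

def foreach(fn, items, nthreads=2):
    # Hypothetical sketch: call fn on every item, spreading the calls
    # over a few worker threads that pull from a shared iterator.
    it = iter(items)
    lock = threading.Lock()

    def worker():
        while True:
            with lock:                 # hand out one item at a time
                try:
                    item = next(it)
                except StopIteration:
                    return
            fn(item)

    threads = [threading.Thread(target=worker) for _ in range(nthreads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

seen = []
foreach(seen.append, range(10))
assert sorted(seen) == list(range(10))
```
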
Hi Anne,
Your reply to Lou raises a naive follow-up question of my own...
> Normally, python's multithreading is effectively cooperative, because
> the interpreter's data structures are all stored under the same lock,
> so only one thread can be executing python bytecode at a time.
> However, man
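The quoted point is the key fact for numpy users: numpy releases the GIL around many of its long-running C loops, so threads doing array work can genuinely overlap. A small sketch (the matrix size is arbitrary):

```python
import threading
import numpy as np

a = np.random.rand(300, 300)
results = [None, None]

def work(i):
    # numpy releases the GIL while the underlying BLAS routine runs,
    # so on a multi-core machine these two calls can execute concurrently.
    results[i] = np.dot(a, a)

threads = [threading.Thread(target=work, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert np.allclose(results[0], results[1])
```
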
I get what you are saying, but I'm not even at the
Stupidly Easy Parallel level, yet. Eventually.
Thanks.
--- Anne Archibald <[EMAIL PROTECTED]> wrote:
> On 17/04/07, Lou Pecora <[EMAIL PROTECTED]> wrote:
> > Now, I didn't know that. That's cool because I have a
> > new dual core Intel Mac Pro.
On 17/04/07, Lou Pecora <[EMAIL PROTECTED]> wrote:
> Now, I didn't know that. That's cool because I have a
> new dual core Intel Mac Pro. I see I have some
> learning to do with multithreading. Thanks.
No problem. I had completely forgotten about the global interpreter
lock, wrote a little mult
On 17/04/07, Francesc Altet <[EMAIL PROTECTED]> wrote:
> Finally, don't let benchmarks fool you. If you can, it is always better
> to run your own benchmarks made of your own problems. A tool that can be
> killer for one application can be just mediocre for another (that's
> somewhat extreme, but
Now, I didn't know that. That's cool because I have a
new dual core Intel Mac Pro. I see I have some
learning to do with multithreading. Thanks.
--- Anne Archibald <[EMAIL PROTECTED]> wrote:
> On 17/04/07, Lou Pecora <[EMAIL PROTECTED]> wrote:
> > You should probably look over your code and
On Tue, 17 Apr 2007 at 16:43, Simon Berube wrote:
> I recently made the switch from Matlab to Python and am very
> interested in optimizing certain routines that I find too slow in
> python/numpy (long loops).
>
> I have looked and learned about the different methods used
On 17/04/07, Lou Pecora <[EMAIL PROTECTED]> wrote:
> You should probably look over your code and see if you
> can eliminate loops by using the built in
> vectorization of NumPy. I've found this can really
> speed things up. E.g. given element by element
> multiplication of two n-dimensional arrays x and y
You should probably look over your code and see if you
can eliminate loops by using the built in
vectorization of NumPy. I've found this can really
speed things up. E.g. given element by element
multiplication of two n-dimensional arrays x and y,
replace
z = zeros(n)
for i in xrange(n):
    z[i] = x[i]*y[i]
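The vectorized replacement Lou describes is a single array expression:

```python
import numpy as np

n = 1000
x = np.random.rand(n)
y = np.random.rand(n)

# Loop version from the message above: one interpreted iteration per element.
z_loop = np.zeros(n)
for i in range(n):
    z_loop[i] = x[i] * y[i]

# Vectorized version: one call; the loop runs in compiled C code.
z_vec = x * y

assert np.allclose(z_loop, z_vec)
```
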
Hi,
You can find various suggestions to improve performance like Tim
Hochberg's list:
"""
0. Think about your algorithm.
1. Vectorize your inner loop.
2. Eliminate temporaries.
3. Ask for help.
4. Recode in C.
5. Accept that your code will never be fast.
Step zero should probably be repeated after
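Step 2 of the list, "eliminate temporaries", can be illustrated with in-place ufunc calls (a sketch; the example expression is made up):

```python
import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# Naive spelling: a*b allocates a full-size temporary before the add.
naive = a * b + 1.0

# Writing into a preallocated buffer with in-place ufuncs avoids it:
out = np.empty_like(a)
np.multiply(a, b, out=out)  # out = a*b, written directly into out
out += 1.0                  # in-place add, no new allocation

assert np.allclose(naive, out)
```
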
I recently made the switch from Matlab to Python and am very
interested in optimizing certain routines that I find too slow in
python/numpy (long loops).
I have looked and learned about the different methods used for such
problems such as blitz, weave and pyrex but had a question for more
experien