On Fri, Feb 25, 2011 at 12:52 PM, Joe Kington wrote:
> Do you expect to have very large integer values, or only values over a
> limited range?
>
> If your integer values will fit into the 16-bit range (or even 32-bit, if
> you're on a 64-bit machine, the default dtype is float64...) you can
> pote
It certainly does. Here is mine, showing that numpy is linked against mkl:
In [2]: np.show_config()
lapack_opt_info:
libraries = ['mkl_lapack95', 'mkl_intel', 'mkl_intel_thread',
'mkl_core', 'mkl_p4m', 'mkl_p4p', 'pthread']
library_dirs =
['/Library/Frameworks/Python.framework/Versions/1.3
The `cdist` function in scipy.spatial does what you want, and takes ~1 ms on
my machine.
In [1]: import numpy as np
In [2]: from scipy.spatial.distance import cdist
In [3]: a = np.random.random((340, 2))
In [4]: b = np.random.random((329, 2))
In [5]: c = cdist(a, b)
In [6]: c.shape
Out[6]: (340, 329)
Yes, concatenate is doing other work under the covers. In short, it supports
concatenating a list of arbitrary python sequences into an array and does
checking on each element of the tuple to ensure it is valid to concatenate.
On Tue, Aug 17, 2010 at 9:03 AM, Zbyszek Szmek wrote:
> Hi,
> this is
On Wed, 2010-05-12 at 23:06 -0400, Chris Colbert wrote:
> I had this problem back in 2009 when building Enthought Enable, and
> was happy with a work around. It just bit me again, and I finally got
> around to drilling down to the problem.
>
>
> On linux, if one uses the numpy
On Wed, May 12, 2010 at 11:06 PM, Chris Colbert wrote:
> I had this problem back in 2009 when building Enthought Enable, and was
> happy with a work around. It just bit me again, and I finally got around to
> drilling down to the problem.
>
> On linux, if one uses the numpy/si
I had this problem back in 2009 when building Enthought Enable, and was
happy with a work around. It just bit me again, and I finally got around to
drilling down to the problem.
On linux, if one uses the numpy/site.cfg [default] section when building
from source to specify local library directori
On Tue, May 4, 2010 at 12:20 PM, S. Chris Colbert wrote:
> On Thu, 2009-03-12 at 19:59 +0100, Dag Sverre Seljebotn wrote:
> > (First off, is it OK to continue polling the NumPy list now and then on
> > Cython language decisions? Or should I expect that any interested Cython
>
On Thu, 2009-03-12 at 19:59 +0100, Dag Sverre Seljebotn wrote:
> (First off, is it OK to continue polling the NumPy list now and then on
> Cython language decisions? Or should I expect that any interested Cython
> users follow the Cython list?)
>
> In Python, if I write "-1 % 5", I get 4. Howeve
On Sat, Apr 3, 2010 at 5:50 PM, Chris Colbert wrote:
> On Sat, Apr 3, 2010 at 12:52 PM, Antoine Pairet wrote:
>> On Sat, 2010-04-03 at 11:04 -0500, Warren Weckesser wrote:
>>> Don't include that last "numpy" in the path. E.g.
>>>
>>> e
On Sat, Apr 3, 2010 at 12:52 PM, Antoine Pairet wrote:
> On Sat, 2010-04-03 at 11:04 -0500, Warren Weckesser wrote:
>> Don't include that last "numpy" in the path. E.g.
>>
>> export
>> PYTHONPATH=$PYTHONPATH:/home/pcpm/pairet/pythonModules/numpy/lib64/python2.4/site-packages
>> numpy is installe
On Sat, Apr 3, 2010 at 12:17 AM, wrote:
>
> On Fri, Apr 2, 2010 at 11:45 PM, Chris Colbert wrote:
> >
> >
> > On Fri, Apr 2, 2010 at 3:03 PM, Erik Tollerud
> > wrote:
> > you could try something like this (untested):
> > if __name__ == '_
On Fri, Apr 2, 2010 at 3:03 PM, Erik Tollerud wrote:
you could try something like this (untested):
if __name__ == '__main__':
    try:
        import numpy
    except ImportError:
        import subprocess
        # check_call raises CalledProcessError if the call fails
        subprocess.check_call(['easy_install', 'numpy'])
This is how I always do it:
In [1]: import numpy as np
In [3]: tmat = np.array([[0., 1., 0., 5.],[0., 0., 1., 3.],[1., 0., 0.,
2.]])
In [4]: tmat
Out[4]:
array([[ 0., 1., 0., 5.],
[ 0., 0., 1., 3.],
[ 1., 0., 0., 2.]])
In [5]: points = np.random.random((5, 3))
In
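The preview cuts off here; a minimal sketch of how the rest of that session presumably continues (padding the points with a column of ones so the 3x4 matrix applies in a single np.dot; everything beyond tmat and points is my assumption):

```python
import numpy as np

tmat = np.array([[0., 1., 0., 5.],
                 [0., 0., 1., 3.],
                 [1., 0., 0., 2.]])
points = np.random.random((5, 3))

# Append a ones column so each point is homogeneous: shape (5, 4)
hom = np.column_stack((points, np.ones(len(points))))

# One matrix product transforms every point at once: (5, 4) . (4, 3) -> (5, 3)
transformed = np.dot(hom, tmat.T)
```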
In [4]: %timeit a = np.random.randint(0, 20, 100)
10 loops, best of 3: 4.32 us per loop
In [5]: %timeit (a>=10).sum()
10 loops, best of 3: 7.32 us per loop
In [8]: %timeit np.where(a>=10)
10 loops, best of 3: 5.36 us per loop
Am I missing something?
On Wed, Feb 24, 2010 at 12:50 PM
brucewa...@broo:~/Downloads$ python isinf.py
True
Kubuntu 9.10
NumPy 1.3.0
Python 2.6.4 (r264:75706, Dec 7 2009, 18:43:55)
[GCC 4.4.1] on linux2
On Sun, Feb 21, 2010 at 12:42 PM, Nadav Horesh wrote:
>
> $ python isinf.py
> Warning: invalid value encountered in isinf
> True
>
> machine: gentoo
Perhaps it's my inability to properly use openmp, but when working on
scikits.image on algorithms doing per-pixel manipulation with numpy arrays
(using Cython), I saw better performance using Python threads and releasing
the GIL than I did with openmp. I found the openmp overhead to be quite
large,
On Sat, Dec 19, 2009 at 6:43 AM, Charles R Harris wrote:
>
>
> On Fri, Dec 18, 2009 at 10:20 PM, Wayne Watson <
> sierra_mtnv...@sbcglobal.net> wrote:
>
>> This program gives me the message following it:
>> Program==
>> import numpy as np
>> from numpy import matrix
>> imp
not to mention that that idea probably isn't going to work if his
problem is non-linear ;)
On Thu, Dec 10, 2009 at 7:36 PM, Norbert Nemec
wrote:
>
>
> Dag Sverre Seljebotn wrote:
>> I haven't heard of anything, but here's what I'd do:
>> - Use np.int64
>> - Multiply all inputs to my code with
Why can't the divisor constant just be made an optional kwarg that
defaults to zero?
It won't break any existing code, and will let everybody who wants the
other behavior have it.
On Thu, Dec 3, 2009 at 1:49 PM, Colin J. Williams wrote:
> Yogesh,
>
> Could you explain the rationale for this ch
Cool. Thanks!
I will take a look at this. We have some code in scikits.image that
creates a QImage from the numpy data buffer for display. But I have
only implemented it for RGB888 so far. So you may have saved me some
time :)
Cheers!
Chris
2009/12/2 Hans Meine :
> Hi,
>
> I have just uploaded
This problem is solved. Lisandro spent a bunch of time with me helping
to track it down. Thanks Lisandro!
On Mon, Nov 9, 2009 at 6:49 PM, Chris Colbert wrote:
> I've got an issue where trying to pass a numpy array to one of my
> cython functions fails, with the exception sayin
I've got an issue where trying to pass a numpy array to one of my
cython functions fails, with the exception saying than 'int objects
are not iterable'.
So somehow, my array is going from being perfectly ok (i can display
the image and print its shape and size), to going bad right before the
funct
Great!
Thanks for the help David!
On Sat, Oct 31, 2009 at 1:58 PM, David Cournapeau wrote:
> On Sat, Oct 31, 2009 at 9:45 PM, Chris Colbert wrote:
>
>> Graphically can this ever occur in hardware memory:
>>
>> |--- a portion of array A ---|--- python object foo ---|
need to put in place to ensure that it doesn't trample on
memory. In the best case scenario, it would just trample on the parent
array; in the worst case scenario it would segfault.
Cheers,
Chris
On Sat, Oct 31, 2009 at 1:32 PM, David Cournapeau wrote:
> On Sat, Oct 31, 2009 at 9:22 PM, Chris Colb
For example, say we have an original array a = np.random.random((512, 512, 3))
and we take a slice of that array b = a[:100, :100, :].
Now, b is discontiguous, but all of its memory is owned by a.
Will there ever be a situation where a discontiguous array owns its
own data? Or more generally, will dis
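A quick way to check the view relationship described above (a sketch; NumPy reports ownership through the base attribute and the flags structure):

```python
import numpy as np

a = np.random.random((512, 512, 3))
b = a[:100, :100, :]        # basic slicing returns a view, not a copy

# b is discontiguous, yet it does not own its memory:
# it points into a's buffer
assert not b.flags['C_CONTIGUOUS']
assert not b.flags['OWNDATA']
assert b.base is a
```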
that code works fine for me:
ubuntu 9.04 x64
python 2.6.2
scipy 0.7.1
numpy 1.3.0
ipython 0.9.1
On Wed, Oct 28, 2009 at 2:21 PM, Ole Streicher wrote:
> Hi,
>
> Is there something wrong with scipy.special.hermite? The following code
> produces glibc errors:
>
> 8<-
Let's say I have a function that applies a homogeneous transformation
matrix to an Nx3 array of points using np.dot.
Since the matrix is 4x4, I have to add a column of ones to the array,
so the function looks something like this:
def foo():
<--snip-->
pts = np.column_stack((Xquad, Yquad, Z
my powers are typically doubles
I traced the problem down to the pow function in math.h just being slow...
Thanks!
On Tue, Sep 29, 2009 at 7:53 PM, Charles R Harris
wrote:
>
>
> On Tue, Sep 29, 2009 at 11:01 AM, Chris Colbert wrote:
>>
>> are there any particular opti
are there any particular optimization flags issued when building numpy
aside from the following?
-fwrapv -O2
On Tue, Sep 29, 2009 at 6:54 PM, Robert Kern wrote:
> On Tue, Sep 29, 2009 at 11:47, Chris Colbert wrote:
>> Does numpy use pow from math.h or something else
Does numpy use pow from math.h or something else?
I seem to be having a problem with slow pow under gcc when building an
extension, but it's not affecting numpy. So if numpy uses that, then
there is something else i'm missing.
Cheers!
Chris
you sir, have a very good point :)
On Fri, Sep 25, 2009 at 12:29 PM, Robert Kern wrote:
> On Fri, Sep 25, 2009 at 11:23, Chris Colbert wrote:
>> Oh, and sorry if calling you Chuck was offensive, that's out of habit
>> from a friend of mine named Charles.
>
> Since he
Oh, and sorry if calling you Chuck was offensive, that's out of habit
from a friend of mine named Charles.
My apologies...
On Fri, Sep 25, 2009 at 12:21 PM, Chris Colbert wrote:
> here's an example from numpy/core/tests/
>
> from the source directory:
>
> brucewa...@
so, something went amok probably a few installs back. This seems to
have cleared it up.
Thanks Chuck!
Chris
On Fri, Sep 25, 2009 at 12:10 PM, Charles R Harris
wrote:
>
>
> On Fri, Sep 25, 2009 at 10:09 AM, Charles R Harris
> wrote:
>>
>>
>> On Fri, Sep
for numpy and scipy, only the tests have executable permissions. It's
as if the tests were specifically targeted and had their permissions
changed.
And these are the only two python packages I've built from source and
installed in this manner; others I've gotten via easy_install, or in
the case of
if other permissions are being changed behind the scenes.
On Fri, Sep 25, 2009 at 10:10 AM, Skipper Seabold wrote:
> On Fri, Sep 25, 2009 at 10:01 AM, Chris Colbert wrote:
>> Sorry to bring up an old topic again, but I still haven't managed to
>> resolve this issue concerning
Sorry to bring up an old topic again, but I still haven't managed to
resolve this issue concerning numpy and nose tests...
On a fresh numpy 1.3.0 build from source tarball on sourceforge:
On ubuntu 9.04 x64 I issue the following commands:
cd numpy-1.3.0
python setup.py
sudo python setup.py inst
I give my vote to cython as well. I have a program which uses cython
for a portion simply because it was easier using a simple C for-loop
to do what I wanted rather than beating numpy into submission. It was
an order of magnitude faster as well.
Cheers,
Chris
On Mon, Sep 21, 2009 at 9:12 PM, Dav
Just because I have a ruler handy :)
On my laptop with qx9300, I invert that 5000, 5000 double (float64)
matrix in 14.67s.
Granted my cpu cores were all at about 75 degrees during that process..
Cheers!
Chris
On Mon, Sep 21, 2009 at 4:53 PM, David Cournapeau wrote:
> On Mon, Sep 21, 2009
the way I do my rotations is this:
tmat = rotation matrix
vec = stack of row vectors
rotated_vecs = np.dot(tmat, vec.T).T
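A quick sanity check of that recipe with a 2-D rotation (angle and vectors chosen purely for illustration):

```python
import numpy as np

theta = np.pi / 2                       # 90-degree rotation
tmat = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])
vec = np.array([[1.0, 0.0],             # stack of row vectors
                [0.0, 1.0]])

rotated_vecs = np.dot(tmat, vec.T).T
# (1, 0) rotates to (0, 1); (0, 1) rotates to (-1, 0)
```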
On Mon, Sep 7, 2009 at 6:53 PM, T J wrote:
> On Mon, Sep 7, 2009 at 3:43 PM, T J wrote:
>> Or perhaps I am just being dense.
>>
>
> Yes. I just tried to reinvent standard
tarball from sourceforge.
On Thu, Aug 20, 2009 at 6:33 PM, David Cournapeau wrote:
> On Thu, Aug 20, 2009 at 2:06 PM, Chris Colbert wrote:
>> the issue is that the files are executable. I have no idea why they
>> are set that way either. This is numpy 1.3.0 built from source.
&g
nope.
I built Atlas, and modified site.cfg to find those libs in /usr/local/lib/atlas/
then I did:
python setup.py build
sudo python setup.py install
that's it.
On Thu, Aug 20, 2009 at 5:09 PM, Robert Kern wrote:
> On Thu, Aug 20, 2009 at 14:06, Chris Colbert wrote:
>> the iss
this happens with scipy too...
On Thu, Aug 20, 2009 at 5:06 PM, Chris Colbert wrote:
> the issue is that the files are executable. I have no idea why they
> are set that way either. This is numpy 1.3.0 built from source.
>
> the default install location for setup.py install is the
> On Thu, Aug 20, 2009 at 1:52 PM, Chris Colbert wrote:
>> when I build numpy from source via:
>>
>> python setup.py build
>> sudo python setup.py install
>>
>>
>> the nosetests fail because of permissions:
>>
>> In [5]: np.test()
>> Runni
when I build numpy from source via:
python setup.py build
sudo python setup.py install
the nosetests fail because of permissions:
In [5]: np.test()
Running unit tests for numpy
NumPy version 1.3.0
NumPy is installed in /usr/local/lib/python2.6/dist-packages/numpy
Python version 2.6.2 (release26
That's exactly it. Thanks!
On Mon, Aug 17, 2009 at 5:24 AM, Citi, Luca wrote:
> As you stress on "repeat the array ... rather than repeat each element",
> you may want to consider tile as well:
>
> >>> np.tile(a, [10,1])
> array([[0, 1, 2],
> [0, 1, 2],
> [0, 1, 2],
> [0, 1, 2],
>
great, thanks!
by order I meant repeat the array in order rather than repeat each element.
On 8/16/09, Stéfan van der Walt wrote:
> 2009/8/16 Chris Colbert :
>> I have a 1x3 array that I want to repeat n times and form an nx3 array
>> where each row is a copy of the origin
I don't think np.repeat will do what I want because the order needs to
be preserved.
I have a 1x3 array that I want to repeat n times and form an nx3 array
where each row is a copy of the original array.
So far I have this:
>>> import numpy as np
>>> a = np.arange(3)
>>> b = np.asarray([a]*10)
>
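For what it's worth, np.tile produces the same ordered result as the list-multiplication trick above (a quick equivalence check):

```python
import numpy as np

a = np.arange(3)
b = np.asarray([a] * 10)        # the approach from the message above
c = np.tile(a, (10, 1))         # repeats the whole array, row by row

assert b.shape == (10, 3)
assert (b == c).all()
assert (b[4] == a).all()        # every row is a copy of the original
```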
Thanks to everyone supporting this. I wish I could attend this year,
and I will be making it a point to attend next year. I am very
grateful to be able to catch the talks at this years conference.
Thanks!
Chris
On Wed, Aug 12, 2009 at 6:27 PM, Fernando Perez wrote:
> Hi all,
>
> as you may recal
Someone posts on offtopic.com
>
> (1) Extend the work of others (in this case Luca Citi and Robert Kern)
> (2) File a ticket
> (3) ???
> (4) Profit
> ___
> NumPy-Discussion mailing list
> NumPy-Discussion@scipy.org
> http://mail.scipy.org/mailman/listinf
when you use slice notation, [0:4] returns everything up to but not
including index 4. That is, a[4] is actually the 5th element of the
array (which doesn't exist) because arrays are zero-based in python.
http://docs.scipy.org/doc/numpy-1.3.x/user/basics.indexing.html
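A short illustration of the half-open slice convention described above:

```python
import numpy as np

a = np.arange(4)                      # indices 0..3; a[4] would raise IndexError
assert list(a[0:4]) == [0, 1, 2, 3]   # up to, but not including, index 4
assert a[3] == 3                      # the 4th element is the last one
```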
On Mon, Aug 10, 2009 at 11:0
I get similar results as the OP:
In [1]: import numpy as np
In [2]: a = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float32)
In [3]: b = np.arange(0.0, 1000, (2*3.14159) / 1000, dtype=np.float64)
In [4]: %timeit -n 10 np.sin(a)
10 loops, best of 3: 63.8 ms per loop
In [5]: %timeit -n 10
ahh, yeah I see now. Thanks!
nothing like making myself look the fool on a friday!
Cheers!
On Fri, Jul 31, 2009 at 6:20 PM, Robert Kern wrote:
> On Fri, Jul 31, 2009 at 17:15, Chris Colbert wrote:
>> Numpy 1.3
>>
>> In [1]: import numpy as np
>>
>> In [2]: a
Numpy 1.3
In [1]: import numpy as np
In [2]: a = np.zeros(5).fill(5)
In [3]: a
In [4]: type(a)
Out[4]: <type 'NoneType'>
In [5]: a = np.zeros(5)
In [6]: a.fill(5)
In [7]: a
Out[7]: array([ 5., 5., 5., 5., 5.])
What I'm trying to do may not be the best way, but I think it should
still have worked.
Thou
true,
I saw Intel Visual Fortran and made the wrong association, thinking it
was some sort of Visual Studio plugin. (stupid, I know...)
On Thu, Jul 30, 2009 at 2:17 AM, David
Cournapeau wrote:
> Chris Colbert wrote:
>> unless you have a visual studio 2003 compiler, you may have to u
unless you have a visual studio 2003 compiler, you may have to use python 2.6.
2009/7/29 ian :
> Hi,everyone!
>
>I installed MSVS2005 + Intel visual fortran 9.1 + Python 2.5 +
> numpy-1.3.0-win32-superpack-python2.5.exe on my computer. The numpy works
> well:
>
> Python 2.5 (r25:51908,
what machine spec are you using?
Using your last function line2array5 WITH float conversion, i get the
following timing on a mobile quad core extreme:
In [24]: a = np.arange(100).astype(str).tostring()
In [25]: a
Out[25]:
'012345678911223344556677
for your particular case:
>>> a = np.array([1, 5, 4, 99], 'f')
>>> b = np.array([3, 7, 2, 8], 'f')
>>> c = b.copy()
>>> d = a!=99
>>> c[d] = (a[d] + b[d])/2.
>>> c
array([ 2., 6., 3., 8.], dtype=float32)
>>>
index with a boolean array?
>>> import numpy as np
>>> a = np.array([3, 3, 3, 4, 4, 4])
>>> a
array([3, 3, 3, 4, 4, 4])
>>> np.average(a)
3.5
>>> b = a != 3
>>> b
array([False, False, False, True, True, True], dtype=bool)
>>> np.average(a[b])
4.0
>>>
On Tue, Jul 14, 2009 at 3:33 PM, Greg Fisk
5.2, 17.1, 19. ]])
On Fri, Jul 10, 2009 at 4:49 AM, David Warde-Farley wrote:
>
> On 10-Jul-09, at 1:25 AM, Chris Colbert wrote:
>
>> actually what would be better is if i can pass two 1d arrays X and Y
>> both size Nx1
>> and get back a 2d array of size NxM where t
actually what would be better is if i can pass two 1d arrays X and Y
both size Nx1
and get back a 2d array of size NxM where the [n,:] row is the linear
interpolation of X[n] to Y[n]
On Fri, Jul 10, 2009 at 1:16 AM, Chris Colbert wrote:
> If i have two arrays representing start points and
If i have two arrays representing start points and end points, is
there a function that will return a 2d array where each row is the
range(start, end, n) where n is a fixed number of steps and is the
same for all rows?
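One way to sketch that with broadcasting (np.linspace over a unit parameter, then scaled per row; the function name and shapes are my own choices):

```python
import numpy as np

def interp_rows(start, end, n):
    # Row k of the result goes linearly from start[k] to end[k] in n steps.
    t = np.linspace(0.0, 1.0, n)                     # shape (n,)
    return start[:, None] + (end - start)[:, None] * t

rows = interp_rows(np.array([0.0, 10.0]), np.array([1.0, 20.0]), 5)
```

Newer NumPy versions also accept array endpoints in np.linspace directly (with an axis argument), which would collapse this to one call.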
hey,
great man! thanks!
I had thought that it may have been possible with a single dot, but
how to do it escaped me.
Thanks again!
Chris
On Thu, Jul 9, 2009 at 11:45 PM, Kurt Smith wrote:
> On Thu, Jul 9, 2009 at 9:36 PM, Chris Colbert wrote:
>> no, because dot(x,y) != dot(y,x)
&
>>>
hence I need xnew = [Transform]*[xold]
and not [xold]*[Transform]
On Thu, Jul 9, 2009 at 10:22 PM, Keith Goodman wrote:
> On Thu, Jul 9, 2009 at 7:08 PM, Chris Colbert wrote:
>> say i have an Nx4 array of points and I want to dot every [n, :] 1x4
>> slice with a 4
say I have an Nx4 array of points and I want to dot every [n, :] 1x4
slice with a 4x4 matrix.
Currently I am using apply_along_axis in the following manner:
def func(row, mat):
    return np.dot(mat, row)
np.apply_along_axis(func, 1, arr, mat)
Is there a more efficient way of doing this th
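The per-row loop can be collapsed into one matrix product, since dot(mat, row) for every row of arr is just dot(arr, mat.T) (a quick equivalence check; data is random for illustration):

```python
import numpy as np

np.random.seed(0)
arr = np.random.random((6, 4))
mat = np.random.random((4, 4))

slow = np.array([np.dot(mat, row) for row in arr])  # row at a time
fast = np.dot(arr, mat.T)                           # one call, same result

assert np.allclose(slow, fast)
```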
there's a wrapper called PyCUDA, but you actually write CUDA code as a
docstring and it's compiled and executed at run time.
I think it can be done more pythonically.
On Thu, Jul 9, 2009 at 12:31 PM, Gökhan SEVER wrote:
> Still speaking of particles, has anyone seen Nvidia's Cuda powered particle
>
I figured I'd contribute something similar: the "even" distribution of
points on a sphere.
I can't take credit for the algorithm though; I got it from the following
page and just vectorized it, and tweaked it so it uses the golden ratio
properly:
http://www.xsi-blog.com/archives/115
It makes us
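Since the linked page may rot, here is a sketch of the usual vectorized golden-angle (Fibonacci) spiral on the unit sphere; the exact variant on that page may differ:

```python
import numpy as np

def sphere_points(n):
    i = np.arange(n)
    phi = (1 + np.sqrt(5)) / 2          # the golden ratio
    theta = 2 * np.pi * i / phi         # longitude advances by the golden angle
    z = 1 - (2 * i + 1) / n             # evenly spaced heights in (-1, 1)
    r = np.sqrt(1 - z * z)              # radius of each latitude circle
    return np.column_stack((r * np.cos(theta), r * np.sin(theta), z))

pts = sphere_points(1000)               # every row lies on the unit sphere
```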
and my grammar just sucks tonight...
On Tue, Jul 7, 2009 at 1:46 AM, Chris Colbert wrote:
> I should clarify: everything in python is an object, even methods of
> classes. The syntax to invoke a method is the method name followed by the
> parentheses (). If you leave off the parentheses
off.
Chris
On Tue, Jul 7, 2009 at 1:42 AM, Chris Colbert wrote:
> you actually have to call the method as transpose(). What you requested was
> the actual method.
>
> >>> import numpy as np
> >>> a = np.matrix([[1,2,3],[4,5,6],[7,8,9]])
> >>> a
you actually have to call the method as transpose(). What you got back
was the method object itself, not its result.
>>> import numpy as np
>>> a = np.matrix([[1,2,3],[4,5,6],[7,8,9]])
>>> a
matrix([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
>>> b = a.transpose()
>>> b
matrix([[1, 4, 7],
[2, 5, 8],
[3, 6, 9]])
ion, however, any reason not to say replace the apt
> dist-packages/numpy with the v1.3?
>
>
> Chris Colbert wrote:
> >
> > I ran into this same problem a few days ago. The issue is that Python
> > imports from /usr/python2.6/dist-packages before
> > /usr/local/pyt
I ran into this same problem a few days ago. The issue is that Python
imports from /usr/lib/python2.6/dist-packages before
/usr/local/lib/python2.6/dist-packages, causing your numpy 1.3 (assuming it's
installed there) to be hidden by the synaptic numpy.
To solve the problem, I added this line to my ~/.bashrc
I'm relatively certain it's possible, but then you have to deal with
locks, semaphores, synchronization, etc...
On Thu, Jul 2, 2009 at 12:04 PM, Sebastian Haase wrote:
> On Thu, Jul 2, 2009 at 5:38 PM, Chris Colbert wrote:
>> Who are quoting Sebastian?
>>
>> Multiproc
t 5:14 PM, Chris Colbert wrote:
>> can you hold the entire file in memory as single array with room to spare?
>> If so, you could use multiprocessing and load a bunch of smaller
>> arrays, then join them all together.
>>
>> It wont be super fast, because serializing
can you hold the entire file in memory as a single array with room to spare?
If so, you could use multiprocessing and load a bunch of smaller
arrays, then join them all together.
It won't be super fast, because serializing a numpy array is somewhat
slow when using multiprocessing. That said, it's stil
do you mean that the values in the kernel depend on the kernel's
position relative to the data to be convolved, or that the kernel is
not composed of homogeneous values but otherwise does not change as it
is slid around the source data?
If the latter, you may be better off doing the co
make check" and "make ptcheck" reported no errors.
>
> Gabriel
>
> On Sun, 2009-06-07 at 10:20 +0200, Gabriel Beckers wrote:
>> On Sat, 2009-06-06 at 12:59 -0400, Chris Colbert wrote:
>> > ../configure -b 64 -D c -DPentiumCPS=2400 -Fa -alg -fPIC
>> > --
thanks for catching the typos!
Chris
On Sun, Jun 7, 2009 at 4:20 AM, Gabriel Beckers wrote:
> On Sat, 2009-06-06 at 12:59 -0400, Chris Colbert wrote:
>> ../configure -b 64 -D c -DPentiumCPS=2400 -Fa -alg -fPIC
>> --with-netlib-lapack=/home/your-user-name/build/lapack/lapack-3.2.1
to do in site.cfg when you built it.
Change your site.cfg rebuild & reinstall and you should be fine
Chris
On Sun, Jun 7, 2009 at 12:11 AM, wrote:
> Hi,
>
> On Jun 6, 2009 3:11pm, Chris Colbert wrote:
>> it definitely found your threaded atlas libraries. How do you know
&
don't think I can use ldconfig without root, but have set LD_LIBRARY_PATH
> to point to the scipy_build/lib until I put them somewhere else.
>
> importing numpy works, though lapack_lite is also imported. I wonder if this
> is normal even if my ATLAS was used.
>
> Thanks,
>
PM, Chris Colbert wrote:
> can you run this and post the build.log to pastebin.com:
>
> assuming your numpy build directory is /home/numpy-1.3.0:
>
> cd /home/numpy-1.3.0
> rm -rf build
> python setup.py build &> build.log
>
>
> Chris
>
>
> On Sa
raries = f77blas, cblas, atlas
>
> [lapack_opt]
> libraries = lapack, f77blas, cblas, atlas
>
> [amd]
> amd_libs = amd
>
> [umfpack]
> umfpack_libs = umfpack, gfortran
>
> [fftw]
> libraries = fftw3
>
>
> Rich
>
>
>
>
> On Sat, Jun 6, 2009
when you built numpy, did you use site.cfg to tell it where to find
your atlas libs?
On Sat, Jun 6, 2009 at 1:02 PM, Richard Llewellyn wrote:
> Hello,
>
> I've managed a build of lapack and atlas on Fedora 10 on a quad core, 64,
> and now (...) have a numpy I can import that runs tests ok. :] I
andom.randn(6000, 6000)
>>numpy.dot(a, a) # look at your cpu monitor and verify all cpu cores are
>>at 100% if you built with threads
Celebrate with a beer!
Cheers!
Chris
On Sat, Jun 6, 2009 at 10:42 AM, Keith Goodman wrote:
> On Fri, Jun 5, 2009 at 2:37 PM, Chris Colbe
well, it sounded like a good idea.
Oh, well.
On Fri, Jun 5, 2009 at 5:28 PM, Robert Kern wrote:
> On Fri, Jun 5, 2009 at 16:24, Chris Colbert wrote:
> > How about just introducing a slightly different syntax which tells numpy
> to
> > handle the array like a matrix:
I'll caution anyone against using Atlas from the repos in Ubuntu 9.04, as the
package is broken:
https://bugs.launchpad.net/ubuntu/+source/atlas/+bug/363510
just build Atlas yourself; you get better performance AND threading.
Building it is not the nightmare it sounds like. I think I've done it a
t
How about just introducing a slightly different syntax which tells numpy to
handle the array like a matrix:
Something along these lines:
A = array[[..]]
B = array[[..]]
elementwise multipication (as it currently is):
C = A * B
matrix multiplication:
C = {A} * {B}
or
C = [A] * [B]
or
dnt be
any different on windows AFAIK.
chris
On Thu, Jun 4, 2009 at 4:54 PM, Chris Colbert wrote:
> Sebastian is right.
>
> Since Matlab r2007 (i think that's the version) it has included support for
> multi-core architecture. On my core2 Quad here at the office, r2008b has no
>
Sebastian is right.
Since Matlab r2007 (I think that's the version) it has included support for
multi-core architecture. On my core2 Quad here at the office, r2008b has no
problem utilizing 100% cpu for large matrix multiplications.
If you download and build atlas and lapack from source and enab
the directory wasn't on the python path either. I added a site-packages.pth
file to /usr/local/lib/python2.6/dist-packages with the line
"/usr/local/lib/python2.6/site-packages"
Not elegant, but it worked.
Chris
On Mon, Jun 1, 2009 at 5:44 PM, Chris Colbert wrote:
> yeah
yeah, I came back here just now to call myself an idiot, but I'm too late :)
Chris
On Mon, Jun 1, 2009 at 5:37 PM, Robert Kern wrote:
> On Mon, Jun 1, 2009 at 16:35, Chris Colbert wrote:
> > thanks Robert,
> >
> > the directory indeed wasn't in the $PATH variable.
&
thanks Robert,
the directory indeed wasn't in the $PATH variable.
Cheers,
Chris
On Mon, Jun 1, 2009 at 5:12 PM, Robert Kern wrote:
> On Mon, Jun 1, 2009 at 15:37, Chris Colbert wrote:
> > On 64-bit ubuntu 9.04 and Python 2.6, I built numpy from source against
> > atlas and lap
building without the prefix flag works for me as well; just wondering why
this doesn't...
Chris
On Mon, Jun 1, 2009 at 4:47 PM, Skipper Seabold wrote:
> On Mon, Jun 1, 2009 at 4:37 PM, Chris Colbert wrote:
> > On 64-bit ubuntu 9.04 and Python 2.6, I built numpy from source against
On 64-bit ubuntu 9.04 and Python 2.6, I built numpy from source against
atlas and lapack (everything 64bit).
To install, I used: sudo python setup.py install --prefix /usr/local
but then python doesn't find the numpy module, even though it exists in
/usr/local/lib/python2.6/site-packages
Do I
the reason for all this is that the bitmap image format specifies the image
origin as the lower left corner. This is the convention used by PIL. The
origin of a numpy array is the upper left corner. Matplotlib does not
handle this discrepancy in the function pil_to_array, which is called
internal
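Flipping the row order converts between the two origin conventions (a minimal illustration):

```python
import numpy as np

img = np.arange(12).reshape(4, 3)   # row 0 on top, NumPy-style
flipped = np.flipud(img)            # now row 0 is the bottom scanline

assert (flipped[0] == img[-1]).all()
assert (np.flipud(flipped) == img).all()   # the flip is its own inverse
```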
This is interesting.
I have always done RGB imaging with numpy using arrays of shape (height,
width, 3). In fact, this is the form that PIL gives when calling
np.asarray() on a PIL image.
It does seem more efficient to be able to do a[0],a[1],a[2] to get the R, G,
and B channels respectively. Thi
Thanks Stefan.
2009/5/11 Stéfan van der Walt
> 2009/5/11 Chris Colbert :
> > Does the scipy implementation do this differently? I thought that since
> FFTW
> > support has been dropped, that scipy and numpy use the same routines...
>
> Just to be c
der Walt
> Hi Chris
>
> 2009/5/11 Chris Colbert :
> > When convolving an image with a large kernel, it's known that it's faster to
> > perform the operation as multiplication in the frequency domain. The
> below
> > code example shows that the results of my 2d fi
Ok, that makes sense.
Thanks Chuck.
On Mon, May 11, 2009 at 2:41 PM, Charles R Harris wrote:
>
>
> On Mon, May 11, 2009 at 9:40 AM, Chris Colbert wrote:
>
>> at least I think this is strange behavior.
>>
>> When convolving an image with a large kernel, it's known
at least I think this is strange behavior.
When convolving an image with a large kernel, it's known that it's faster to
perform the operation as multiplication in the frequency domain. The below
code example shows that the results of my 2d filtering are shifted from the
expected value a distance 1/2