On Wed, Dec 3, 2008 at 9:19 AM, Sébastien Barthélemy
<[EMAIL PROTECTED]>wrote:
> def inv_v1(self):
>     self[0:4,0:4] = htr.inv(self)
> def inv_v2(self):
>     data = htr.inv(self)
>     self = HomogeneousMatrix(data)
> def inv_v3(self):
>     self
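As an aside, a minimal sketch of why these variants differ, using np.linalg.inv in place of the htr helper (an assumption) and a bare ndarray subclass: assigning into a slice of self mutates the object the caller holds, while rebinding the name self only changes a local variable inside the method.

import numpy as np

class HomogeneousMatrix(np.ndarray):
    def inv_inplace(self):
        # writes the inverse back into this object's own memory (like inv_v1)
        self[0:4, 0:4] = np.linalg.inv(self)

    def inv_rebind(self):
        # only rebinds the local name 'self'; the caller's object is untouched (like inv_v2)
        self = np.linalg.inv(self)

m = np.eye(4).view(HomogeneousMatrix)
m[0, 3] = 2.0
m.inv_inplace()    # m now holds its inverse
m.inv_rebind()     # no visible effect outside the method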
On Tue, Oct 21, 2008 at 5:01 PM, Bruce Southey <[EMAIL PROTECTED]> wrote:
> I think you are on your own here as it is a huge chunk to chew!
> Depending on what you really mean by linear models is also part of that
> (the Wikipedia entry is amusing). Most people probably do stats
On Wed, Aug 13, 2008 at 4:01 PM, Robert Kern <[EMAIL PROTECTED]> wrote:
> On Wed, Aug 13, 2008 at 14:37, Joe Harrington <[EMAIL PROTECTED]> wrote:
> >>On Tue, Aug 12, 2008 at 19:28, Charles R Harris
> >><[EMAIL PROTECTED]> wrote:
> >>>
On Tue, Aug 12, 2008 at 1:46 AM, Andrew Dalke <[EMAIL PROTECTED]>wrote:
> Here's the implementation, from lib/function_base.py
>
> def nanmin(a, axis=None):
>     """Find the minimum over the given axis, ignoring NaNs.
>     """
>     y
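For readers without the source at hand, here is a hedged sketch of the masking idea (illustrative only, not the actual numpy implementation): replace NaNs with +inf so the ordinary minimum skips them.

import numpy as np

def nanmin_sketch(a, axis=None):
    """Minimum over the given axis, ignoring NaNs (illustrative sketch)."""
    a = np.asarray(a, dtype=float)
    return np.where(np.isnan(a), np.inf, a).min(axis=axis)

print(nanmin_sketch([1.0, np.nan, -3.0]))   # -3.0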
On Thu, Jul 31, 2008 at 7:08 PM, Christopher Burns <[EMAIL PROTECTED]>wrote:
> Do you mean add a file on the Wiki or in the source tree somewhere?
>
Either or both-- so long as there is a convenient place to find them. I
suppose a Wiki page would be most flexible, since it could be
> self._destpath = tempfile.mkdtemp()
> self._istmpdest = True
>
>
>Andrew
>[EMAIL PROTECTED]
On Wed, Jul 30, 2008 at 9:25 PM, Dave Peterson <[EMAIL PROTECTED]>wrote:
> Hello,
>
> I am very pleased to announce that Traits 3.0 has just been released!
>
>
All of the URLs on PyPI to Enthought seem to be broken (e.g.,
http://code.enthought.com/traits). Can you give a
On Thu, Jul 31, 2008 at 10:14 AM, Gael Varoquaux <
[EMAIL PROTECTED]> wrote:
> On Thu, Jul 31, 2008 at 12:43:17PM +0200, Andrew Dalke wrote:
> > Startup performance has not been a numpy concern. It is a concern for
> > me, and it has been (for other packages) a concern for s
On Sun, Jun 22, 2008 at 3:58 PM, Andreas Klöckner <[EMAIL PROTECTED]> wrote:
> PyCuda is based on the driver API. CUBLAS uses the high-level API. One
> *can*
> violate this rule without crashing immediately. But sketchy stuff does
> happen. Instead, for BLAS-1 operations, P
I have a speed problem with the approach I'm using to detect phase wrappings in
a 3D data set. In my application, phaseField is a 3D array containing the phase
values of a field. In order to detect the vortices/phase windings at each
point, I check for windings on each of 3 faces of a 2x2 cube w
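For comparison, here is a hedged, vectorized sketch of the usual plaquette test on one 2D slice: sum the wrapped phase differences around every 2x2 square; a total near +-2*pi marks a winding. The names and sign convention are assumptions, not the poster's code.

import numpy as np

def wrap(dphi):
    # map phase differences into [-pi, pi)
    return (dphi + np.pi) % (2 * np.pi) - np.pi

def windings_2d(phase2d):
    # circulation around each 2x2 plaquette of a 2D phase slice
    d1 = wrap(phase2d[:-1, 1:] - phase2d[:-1, :-1])
    d2 = wrap(phase2d[1:, 1:] - phase2d[:-1, 1:])
    d3 = wrap(phase2d[1:, :-1] - phase2d[1:, 1:])
    d4 = wrap(phase2d[:-1, :-1] - phase2d[1:, :-1])
    return d1 + d2 + d3 + d4   # ~ +-2*pi where a vortex pierces the plaquette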
On Thu, May 22, 2008 at 12:08 PM, Keith Goodman <[EMAIL PROTECTED]> wrote:
> How big is n? If it is much smaller than a million then loop over that
> instead.
>
n is always relatively small, but I'd rather not do:
for i in range(n):
    counts[i] = (items==i).sum()
After poking around for a bit, I was wondering if there was a faster method
for the following:
# Array of index values 0..n
items = numpy.array([0,3,2,1,4,2],dtype=int)
# Count the number of occurrences of each index
counts = numpy.zeros(5, dtype=int)
for i in items:
    counts[i] += 1
In my real
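For what it's worth, numpy.bincount does exactly this counting in C, assuming the items are small non-negative integers:

import numpy

items = numpy.array([0, 3, 2, 1, 4, 2], dtype=int)
counts = numpy.bincount(items)   # length is max(items) + 1
print(counts)                    # [1 1 2 1 1]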
rather I'm merely pointing out
that var(X, vardef='sample') is an option (using SAS's PROC MEANS parameter
name as an arbitrary example).
In the extremely rare cases I need any other denominator, I'm fine with
multiplying by var(x)*n/(n-adjust).
-Kevin
On Mon, Apr 7, 200
hello
while trying to write a function that processes some numpy arrays and
calculates euclidean distance, I ended up with this code
# some sample values
totalimgs=17
selectedfacespaces=6
imgpixels=18750 (ie for an image of 125X150 )
...
# i am using these arrays to do the calculation
facespace #num
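The actual code is cut off, so as a hedged sketch only: the distance step itself vectorizes in one line, with array names and shapes made up from the sample values above.

import numpy as np

totalimgs, selectedfacespaces = 17, 6
weights = np.random.rand(totalimgs, selectedfacespaces)   # one projected image per row (stand-in data)
probe = np.random.rand(selectedfacespaces)                 # projection of the image to match (stand-in)

dists = np.sqrt(((weights - probe) ** 2).sum(axis=1))      # Euclidean distance to every row at once
best_match = dists.argmin()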
ok.. I coded everything again from scratch.. looks like I was having a
problem with the matrix class
when I used a matrix for facespace
facespace=sortedeigenvectorsmatrix * adjustedfacematrix
and tried to convert a row to an image (eigenface)
by
make_simple_image(facespace[x],"eigenimage_x.jpg",(im
>Arnar wrote
> I dont know if this made anything any clearer. However, a simple
> example may be clearer:
> # X is (a ndarray, *not* matrix) column centered with vectorized images in
> rows
> # method 1:
> XX = dot(X, X.T)
> s, u = linalg.eigh(XX)
> reorder = s.argsort()[::-1]
> facespace = dot(X.
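The quoted snippet is cut off at the last line; as a self-contained sketch of the same "method 1" (snapshot) idea with made-up data, where the final dot is assumed to lift the small eigenvectors back to pixel space:

import numpy as np

X = np.random.rand(10, 12)            # stand-in: one flattened image per row
X -= X.mean(axis=0)                   # column-centre
XX = np.dot(X, X.T)                   # small (n_images x n_images) matrix
s, u = np.linalg.eigh(XX)
reorder = s.argsort()[::-1]           # eigh returns ascending eigenvalues
s, u = s[reorder], u[:, reorder]
facespace = np.dot(X.T, u)            # assumed completion: eigenfaces, one per column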
> I dont know if this made anything any clearer. However, a simple
> example may be clearer:
thanks Arnar for the kind response, now things are a lot clearer... will
try it out in code..
D
hi
I have a set of face images which I make into a 2D array using
numpy.ndarray;
each row represents a face image
faces=
[[ 173. 87. ... 88. 165.]
[ 158. 103. ... 73. 143.]
[ 180. 87. ... 55. 143.]
[ 155. 117. ... 93. 155.]]
from which I can get the mean image =>
avgface=a
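The mean image and the mean-centred faces each come from one call; a small sketch using only the visible sample values (the elided middle columns are dropped):

import numpy as np

faces = np.array([[173.,  87.,  88., 165.],
                  [158., 103.,  73., 143.],
                  [180.,  87.,  55., 143.],
                  [155., 117.,  93., 155.]])
avgface = faces.mean(axis=0)    # mean image, one value per pixel
adjfaces = faces - avgface      # mean-centred faces, broadcast over rows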
> This example assumes that facearray is an ndarray (like you described
> in the original post ;-) ). It looks like you are using a matrix.
hi Arnar
thanks ..
a few doubts however:
1. when I use, say, 10 images of 4x3 each
u, s, vt = linalg.svd(facearray, 0)
I will get vt of shape (10, 12)
can't I take th
On Mar 1, 12:57 am, "Peter Skomoroch" wrote:
> > I think the matlab example should be easy to translate to scipy/matplotlib
> > using the montage function:
>
> > load faces.mat
> > %Form covariance matrix
> > C=cov(faces');
> > %build eigenvectors and eigenvalues
> > [E,D] = eig(C);
hi Peter,
nice
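A rough numpy equivalent of the quoted MATLAB lines, assuming faces holds one flattened image per column (as the transpose in cov(faces') implies):

import numpy as np

faces = np.random.rand(12, 4)    # stand-in: 12 pixels, 4 images, one image per column
C = np.cov(faces)                # rows are variables by default, matching cov(faces')
D, E = np.linalg.eigh(C)         # eigenvalues D (ascending) and eigenvectors E of the symmetric C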
hi guys
I have a set of face images with which I want to do face recognition
using Pentland's PCA method. I gathered these steps from their docs:
1. represent the matrix of face image data
2. find the adjusted matrix by subtracting the mean face
3. calculate the covariance matrix (cov = A * A_transpose) where
> Robin wrote
> I'm not sure why they would be doing this - to me it looks like they might
> be using Image as a convenient way to store some other kind of data...
thanks Robin,
I am wondering if there is a more straightforward way to do these..
especially the vector to image function
D
hi
I came across a codebase by Rice Univ people.. in it there are some
functions for conversion between images and vectors
1.
def image_to_vector(self, filename):
    try:
        im = Image.open(filename)
    except IOError:
        print 'couldn\'t load ' + filename
        sys.e
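The companion direction asked about earlier (vector back to image) might look like the sketch below; the function name, the rescaling, and the use of PIL's Image.fromarray are assumptions, not the Rice code.

import numpy as np
from PIL import Image

def vector_to_image(vec, size, filename):
    # reshape a flat float vector to 2D, rescale to 0-255, and save it
    arr = np.asarray(vec, dtype=float).reshape(size)
    span = arr.max() - arr.min()
    arr = 255.0 * (arr - arr.min()) / (span if span else 1.0)
    Image.fromarray(arr.astype(np.uint8)).save(filename)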
> Arnar wrote
> from scipy import linalg
> facearray-=facearray.mean(0) #mean centering
> u, s, vt = linalg.svd(facearray, 0)
> scores = u*s
> facespace = vt.T
hi Arnar
when I do this I get these:
u = <class 'numpy.core.defmatrix.matrix'> (4, 4)
that matches the eigenvectors matrix in my previous data
s=
On Feb 28, 1:27 pm, "Matthieu Brucher" wrote
> If your images are 4x3, your eigenvector must be 12 long.
hi
thanks for the reply
I am using 4 images, each of size 4x3
the covariance matrix obtained from adjfaces*faces_trans is 4x4 in
size and that produces the evalues and eigenvectors given here
eva
hi all
I am learning the PCA method by reading up Turk & Pentland papers etc.
While trying out PCA on a set of greyscale images using python and
numpy, I tried to create eigenvectors and facespace.
I have
facesarray --- an NxP numpy.ndarray that contains the data of the images
N = num of images, P = pixels in an
> How are you using the values? How significant are the differences?
>
I am using these eigenvectors to do PCA on a set of images (of faces). I
sort the eigenvectors in descending order of their eigenvalues and
this is multiplied with the (orig data of some images, viz. a matrix) to
obtain a facespac
> Different implementations follow different conventions as to which
> is which.
thank you for the replies.. the reason why I asked was that the most
significant eigenvectors (sorted according to eigenvalues) are later
used in calculations and then the results obtained differ in java and
python
hi
I was calculating eigenvalues and eigenvectors for a covariance matrix
using numpy
adjfaces=matrix(adjarr)
faces_trans=adjfaces.transpose()
covarmat=adjfaces*faces_trans
evalues,evect=eigh(covarmat)
for a sample covarmat like
[[ 1.69365981e+13 , -5.44960784e+12, -9.00346400e+12 , -2.48352625e
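One detail worth noting, shown here as a sketch with stand-in data: eigh returns the eigenvalues in ascending order, so for PCA they usually need to be re-sorted, together with the matching eigenvector columns, into descending order.

import numpy as np

a = np.random.rand(4, 12)                  # stand-in for the adjusted faces
covarmat = np.dot(a, a.T)                  # small symmetric matrix, like adjfaces*faces_trans
evalues, evect = np.linalg.eigh(covarmat)
order = evalues.argsort()[::-1]            # descending eigenvalues
evalues, evect = evalues[order], evect[:, order]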
On 1/8/08, Matthieu Brucher <[EMAIL PROTECTED]> wrote:
>
> I have an AMD processor so I guess I should use ACML somehow instead.
> > However, at first I would prefer my code to be platform-independent, and
> > second, unfortunately I haven't encountered in the numpy documentat
> try starting with the tutorial:
> http://www.scipy.org/Tentative_NumPy_Tutorial
>
> For example, to extract an array containing the maxima of each row of
> mymatrix, you can use the amax() function:
>
> temp = numpy.amax(mymatrix, axis=1)
>
>
thanks.. had a tough time finding the functions.. will
in my code i am trying to normalise a matrix as below
mymatrix = matrix(...)  # items are of double type, can be negative values
numrows,numcols=mymatrix.shape
for i in range(numrows):
    temp=mymatrix[i].max()
    for j in range(numcols):
        mymatrix[i,j]=abs(mymatrix[i,j]/t
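The double loop can be collapsed into one broadcast division; since the last line is cut off, this is only a sketch of what the loop appears to do (scale each row by its maximum, then take the absolute value):

import numpy as np

mymatrix = np.random.randn(4, 6)                 # stand-in data; values may be negative
rowmax = mymatrix.max(axis=1).reshape(-1, 1)     # per-row maximum, kept as a column
normalised = np.abs(mymatrix / rowmax)           # same arithmetic as the loops, no Python-level iteration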
My intuition is that the first problem you
need to solve is getting Boot to generate the appropriate __rmul__ method.
The second problem, if it even exists, is ensuring that __mul__ returns
NotImplemented.
Best of luck,
-Kevin
On Dec 27, 2007 10:15 AM, Bruce Sherwood <[EMAIL PROTECTED]> wrot
> on my computer I have:
> 1. 15.6 sec with your code
> 2. 0.072 sec with resultmatrix2
> 3. 0.040 sec with tensordot (resultmatrix3) (which is a ~400x speedup)
wow, thanks!
the tensordot fn is blindingly fast..
I added/modified
resultndarray = tensordot(matrixone[:sample,:], matrixtwo.
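For readers following along, a self-contained sketch of the tensordot call with the sizes from the original post (stand-in random data; the slicing to the first `sample` rows is taken from the quoted line):

import numpy as np

items, sample, totalcols = 25, 5, 8100
matrixone = np.random.rand(items, totalcols)
matrixtwo = np.random.rand(items, totalcols)

# contract the long axis of both arrays; equivalent to dot(matrixone[:sample], matrixtwo.T)
resultndarray = np.tensordot(matrixone[:sample, :], matrixtwo, axes=([1], [1]))
print(resultndarray.shape)    # (5, 25)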
hi
I am doing some maths calculations involving matrices of double values
using numpy.matrix;
the Java code for this is something like
int items=25;
int sample=5;
int totalcols=8100;
double[][]dblarrayone=new double[items][totalcols];
double[][]dblarraytwo=new double[items][totalcols];
//their eleme
hi
I am a beginner with numpy and python, so pardon me if this doubt seems
silly
I want to create a matrix with, say, 3 rows and 5 columns, and then set
the values of each item in it. For this I did something like below
myarray=zeros((3,5))
# then set the items
for row in range(3):
    for col in rang
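A sketch of the usual pattern, with a made-up fill value because the inner loop is cut off:

import numpy as np

myarray = np.zeros((3, 5))
for row in range(3):
    for col in range(5):
        myarray[row, col] = row * 5 + col   # placeholder value, not the poster's formula

# the same grid without explicit Python loops
myarray = np.arange(15, dtype=float).reshape(3, 5)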
> > In particular for the simulation yes, depending on the level of detail
> > of course. But only parts, eg. random number generation for certain
> > distributions had to be coded in C/C++.
>
> Are you saying you extended the scipy/numpy tools for this?
> Do you think it would make sense to put so
> a) Can you guys tell me briefly about the kind of problems you are
> tackling with numpy and scipy?
I'm using python with numpy, scipy, pytables and matplotlib for data
analysis in the field of high energy particle physics. Most of the
work is histogramming millions of events, fitting functions t
hi
I wish to convert an RGB image into an array of double values.. is
there a method for that in numpy?
Also, I want to convert an array of doubles into the corresponding RGB
tuples of an image.
Can anyone guide me?
dn
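One common route goes through PIL; a hedged sketch (the filename is a placeholder, and values are assumed to fit 0-255 on the way back):

import numpy as np
from PIL import Image

im = Image.open('face.jpg').convert('RGB')       # placeholder filename
arr = np.asarray(im, dtype=float)                # shape (height, width, 3), double precision

im2 = Image.fromarray(arr.astype(np.uint8), mode='RGB')   # doubles back to an RGB image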
It's just the other way around:
mymat[:,0] # first column
mymat[:,1] # second column
Take a look at the tutorial:
http://scipy.org/Tentative_NumPy_Tutorial#head-864862d3f2bb4c32f04260fac61eb4ef34788c4c
best! bernhard
On Nov 1, 7:22 am, "dev new" <[EMAIL PROTECTED]> wrote
Take a look at numpy.ix_
http://www.scipy.org/Numpy_Example_List_With_Doc#head-603de8bdb62d0412798c45fe1db0648d913c8a9c
This method creates the index array for you. You only have to specify
the coordinates in each dimension.
Bernhard
On Oct 29, 8:46 am, "Matthieu Brucher" <[EM
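A small self-contained example of the same idea (made-up coordinates):

import numpy as np

a = np.arange(25).reshape(5, 5)
rows, cols = [0, 2, 3], [1, 4]
sub = a[np.ix_(rows, cols)]    # 3x2 block at the intersections of those rows and columns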
David Cournapeau wrote:
>> Python 2.4.4 (#2, May 17 2004, 22:47:37) [C] on irix6
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> from numpy import *
>> Running from numpy source directory.
> Here is the problem: this means numpy will NOT work. Two possibilities:
Hi David,
David Cournapeau wrote:
[EMAIL PROTECTED] wrote:
Jarrod Millman wrote:
Hello,
I am hoping to close a few of the remaining tickets for the upcoming
NumPy 1.0.4 release. Is anyone using NumPy on IRIX? Or have access
to IRIX? If so, could you please take a look at this ticket
Jarrod Millman wrote:
> Hello,
>
> I am hoping to close a few of the remaining tickets for the upcoming
> NumPy 1.0.4 release. Is anyone using NumPy on IRIX? Or have access
> to IRIX? If so, could you please take a look at this ticket:
> http://projects.scipy.org/scipy/numpy/ticket/417
>
> Tha
On 7/20/07, Kevin Jacobs <[EMAIL PROTECTED]> <[EMAIL PROTECTED]>
wrote:
On 7/20/07, Charles R Harris <[EMAIL PROTECTED]> wrote:
>
> I expect using sqrt(x) will be faster than x**.5.
>
I did test this at one point and was also surprised that sqrt(x) seemed
slower than
On 7/20/07, Charles R Harris <[EMAIL PROTECTED]> wrote:
I expect using sqrt(x) will be faster than x**.5.
I did test this at one point and was also surprised that sqrt(x) seemed
slower than **.5. However I found out otherwise while preparing a timeit
script to demonstrate this obser
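A minimal timeit comparison along those lines (array size and repeat count are arbitrary choices):

import timeit

setup = "import numpy as np; x = np.random.rand(100000)"
print(timeit.Timer("np.sqrt(x)", setup).timeit(number=1000))
print(timeit.Timer("x ** 0.5", setup).timeit(number=1000))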
On 7/20/07, Nils Wagner <[EMAIL PROTECTED]> wrote:
Your sqrtm_eig(x) function won't work if x is defective.
See test_defective.py for details.
I've added several defective matrices to my test cases and the SVD method
doesn't work as well as I'd thought (which is ob
On 7/20/07, Nils Wagner <[EMAIL PROTECTED]> wrote:
Your sqrtm_eig(x) function won't work if x is defective.
See test_defective.py for details.
I am aware, though at least on my system, the SVD-based method is by far the
fastest and most robust (and can be made more robust by the ad
On 7/20/07, Anne Archibald <[EMAIL PROTECTED]> wrote:
On 20/07/07, Nils Wagner <[EMAIL PROTECTED]> wrote:
> lorenzo bolla wrote:
> > hi all.
> > is there a function in numpy to compute the exp of a matrix, similar
> > to expm in matlab?
> > for example:
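For reference, scipy exposes this directly as scipy.linalg.expm; a minimal check (note np.exp would only exponentiate element-wise):

import numpy as np
from scipy.linalg import expm

a = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(expm(a))        # true matrix exponential, analogous to MATLAB's expm
print(np.exp(a))      # element-wise exp, a different thing entirely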
On 7/16/07, Robert Kern <[EMAIL PROTECTED]> wrote:
And we'd certainly appreciate the contribution. I'm tentatively going to
say
yes, we should start requiring LAPACK 3.0 unless there is some very
important
platform that only comes with an older LAPACK.
Great! The added
On 7/16/07, Charles R Harris <[EMAIL PROTECTED]> wrote:
Hmm,
I get a real result for this, although the result is wildly incorrect.
Sqrtm isn't part of numpy, where are you getting it from? Mine is coming
from pylab and looks remarkably buggy.
from scipy.linalg import sqrtm
I
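A quick sanity check of scipy's sqrtm on a matrix with an obvious real square root (a minimal sketch, not a comment on the pylab import mentioned above):

import numpy as np
from scipy.linalg import sqrtm

a = np.array([[4.0, 0.0], [0.0, 9.0]])
s = sqrtm(a)
print(np.allclose(np.dot(s, s), a))   # True: s really is a square root of a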
On 7/16/07, Kevin Jacobs <[EMAIL PROTECTED]> <[EMAIL PROTECTED]>
wrote:
This is a bit of a SciPy question, but I thought I'd ask here since I'm
already subscribed. I'd like to add some new LAPACK bindings to SciPy and
was wondering if there was a minimum version re
On 7/16/07, Charles R Harris <[EMAIL PROTECTED]> wrote:
On 7/16/07, Robert Kern <[EMAIL PROTECTED]> wrote:
>
> Kevin Jacobs <[EMAIL PROTECTED]> wrote:
> > Mea culpa on the msqrt example, however I still think it is wrong to get
> > a complex square
Mea culpa on the msqrt example, however I still think it is wrong to get a
complex square-root back when a real valued result is expected and exists.
-Kevin
On 7/16/07, Hanno Klemm <[EMAIL PROTECTED]> wrote:
Kevin,
the problem appears to be that sqrtm() gives back an array, rather
Hi all,
This is a bit of a SciPy question, but I thought I'd ask here since I'm
already subscribed. I'd like to add some new LAPACK bindings to SciPy and
was wondering if there was a minimum version requirement for LAPACK, since
it would be ideal if I could use some of the newer 3.0 features. I
Hi, I just installed numpy (1.0.3) and scipy (0.5.2) on a Windows
machine running Python 2.5.1. They both complete installation, and
numpy.test() reports no errors. scipy.test() produces a huge stream
(see below) of warnings, errors (19), and failures (2), however. Also,
there's a deprecation warni
On 6/18/07, Stephen Simmons <[EMAIL PROTECTED]> wrote:
Hi,
Has anyone written a parser for SQL-like queries against PyTables HDF
tables or numpy recarrays?
I'm asking because I have written code for grouping then summing rows of
source data, where the groups are defined by fu
Call randint until you get enough bits of entropy to form a long with the
appropriate number of bits.
def randwords(n):
    result = 0L
    for i in range(n):
        # shift in 32 more random bits each pass
        result = (result << 32) | randint(0, 2 << 32 - 1)
    return result
-Kevin
On 6/14/07, Will Woods <[EMAIL PROTECTED]> wrote:
> Is the genutils module not included in the standard CPython edition?
It's not. It's a sub-module of IPython.
It's based on the resource module, though, and that comes with Python on
Linux. Just define the function Fernando posted:
> > def clock():
> > """clock() -> floating point number
time.time (take the difference, gives the elapsed time) and time.clock
(under linux the cpu clock time used by the process) are the methods
you want.
Cheers! Bernhard
On May 12, 12:22 am, dmitrey <[EMAIL PROTECTED]> wrote:
> hi all,
> please inform me which way for counting elapsed time and cputime in
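In short, the stdlib covers both measurements; a small sketch of the pattern being described (time.clock gives CPU time on Linux, wall-clock time on Windows):

import time

start_wall = time.time()
start_cpu = time.clock()
# ... code to be measured ...
elapsed = time.time() - start_wall    # wall-clock seconds
cputime = time.clock() - start_cpu    # CPU seconds on Linux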
On 4/29/07, Andrew Straw <[EMAIL PROTECTED]> wrote:
No, the nth index of a Python sequence is a[n], where n starts from
zero. Thus, if I want the nth dimension of array a, I want a.shape[n].
I reverted the page to its original form and added a couple explanatory
comments about zero
I had to poke around before finding it too:
bmat( [[K,G],[G.T, zeros(nc)]] )
On 4/1/07, Bill Baxter <[EMAIL PROTECTED]> wrote:
What's the best way of assembling a big matrix from parts?
I'm using lagrange multipliers to enforce constraints and this kind of
matrix comes up a
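For reference, a self-contained bmat example with stand-in blocks (shapes chosen arbitrarily; note the zero block must be 2-D for the shapes to line up):

import numpy as np

K = np.asmatrix(np.eye(3))            # stand-in stiffness block
G = np.asmatrix(np.ones((3, 2)))      # stand-in constraint block
nc = 2
A = np.bmat([[K, G], [G.T, np.zeros((nc, nc))]])
print(A.shape)                        # (5, 5)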
method 2 took 0.163641 seconds
method 1a took 0.006665 seconds
method 1b took 0.004070 seconds
-Kevin
On 3/11/07, Dan Becker <[EMAIL PROTECTED]> wrote:
As soon as I posted that I realized it's due to the type conversions from
True
to 1. For some reason, this
---
myMat=scipy.randn(500,500)