I saw some questions on the web about how to create non-uniform random
integers in Python.
I don't know what the best way is, but here is another way that looks
reasonably fast:
>>> rvs = np.dot(np.random.multinomial(1, [0.1, 0.2, 0.5, 0.2], size=100),
...              np.arange(4))
>>> np.bincount(rvs)/100.
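
A hedged alternative sketch (not from the thread): invert the CDF with
np.searchsorted instead of building the one-hot matrix that multinomial
produces.

import numpy as np

p = [0.1, 0.2, 0.5, 0.2]
cdf = np.cumsum(p)                          # [0.1, 0.3, 0.8, 1.0]
rvs = np.searchsorted(cdf, np.random.random(100))
print(np.bincount(rvs) / 100.)              # empirical frequencies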
> I don't see a difference on my computer. Could you post an example?
I'm attaching a small benchmark script. It multiplies two example arrays
loaded from the file "data" (which you can download from
http://zhuliguan.net/data (157K)), and compares this with multiplying
two random arrays. On two di
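
The attached script itself is not reproduced in this listing. A hedged
sketch of such a comparison (shapes taken from the question; the "data"
file is stood in for by a mostly-zero array):

import time
import numpy as np

def bench(a, b, label):
    t0 = time.time()
    np.dot(a, b)
    print('%s: %.3f s' % (label, time.time() - t0))

a = np.random.rand(1000, 1000)
bench(a, a, 'random')

z = np.zeros((1000, 1000))                  # many zeros, like the data file
z[::7, ::7] = np.random.rand(143, 143)
bench(z, z, 'mostly zero')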
Thanks for your suggestion, Chuck. The equation arises in the subtraction
of two harmonic potentials V and V':

V' = 1/2 x^t * A^(-1) * x
V  = 1/2 x^t * B^(-1) * x
V' - V = 1/2 x^t * (A^(-1) - B^(-1)) * x = 1/2 x^t * Z^(-1) * x

A is the covariance matrix of the coordinates x in a molecular dynamics
Dear Numpy users,
I have to solve for Z in the equation Z^(-1) = A^(-1) - B^(-1), where A
and B are covariance matrices with zero determinant.
I have never used pseudoinverse matrices; could anybody please point out
any cautions I have to take when solving this equation for Z? The brute
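
A hedged sketch of the brute-force route (np.linalg.pinv is one option,
not a recommendation from the thread; whether pseudoinverses are
appropriate here is a modeling decision):

import numpy as np

A = np.cov(np.random.rand(5, 20))            # toy stand-ins; the real A, B
B = np.cov(np.random.rand(5, 20))            # are singular covariances
Zinv = np.linalg.pinv(A) - np.linalg.pinv(B)   # A^(-1) - B^(-1), pinv form
Z = np.linalg.pinv(Zinv)                       # recover Z itself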
On Fri, Sep 10, 2010 at 1:58 PM, Christopher Barrington-Leigh
wrote:
> Interesting. Thanks Erin, Josef and Keith.

Thanks to the stata page, at least I figured out that WLS is aweights
with the assumption mu_i = mu.

import numpy as np
from scikits.statsmodels import WLS
w0 = np.arange(20) % 4
w = 1.*w0/
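
A hypothetical continuation of the truncated snippet above (y, the
constant regressor, and the weight normalization are assumptions, not
part of the original message): estimate the common mean mu by regressing
on a constant with WLS.

import numpy as np
from scikits.statsmodels import WLS

y = 5. + np.random.randn(20)                # observations with common mean
w0 = np.arange(20) % 4
w = 1. * w0 / w0.sum() * len(w0)            # weights normalized to sum to n
res = WLS(y, np.ones(20), weights=w).fit()
print(res.params)                           # weighted estimate of mu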
I'm keeping a large number of data points in multiple 2d arrays, for
example:

class c(object):
    def __init__(self):
        self.a = np.zeros((24, 60))
        self.b = np.zeros((24, 60))
        ...

After processing the data, I'm serializing these to disk for future
reference/post-processing
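
A minimal sketch of one common approach (np.savez is an assumption here,
not the poster's method): store the arrays by name in a single .npz file.

import numpy as np

a = np.zeros((24, 60))
b = np.zeros((24, 60))
np.savez('data.npz', a=a, b=b)              # serialize both arrays to disk

npz = np.load('data.npz')                   # later, for post-processing
a2, b2 = npz['a'], npz['b']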
Interesting. Thanks Erin, Josef and Keith.
There is a nice article on this at
http://www.stata.com/support/faqs/stat/supweight.html. In my case, the
model I have in mind is to assume that the expected value (mean) is the
same for each sample, and that the weights are/should be normalised,
whence a c
Perhaps the ndarrays have different ordering (C vs Fortran order)?
Hi,
I'm multiplying two 1000x1000 arrays with numpy.dot() and seeing
significant performance differences depending on the data. It seems to
take much longer on matrices with many zeros than on random ones. I
don't know much about optimized MM implementations, but is this normal
behavior for some reason?
Hi all,
As a test case before writing something bigger, I'm trying to write a
little Fortran module to compute the average of an array in these 4
cases:
avg2d_float, avg2d_double
avg3d_float, avg3d_double
I want this module to be callable from both Fortran and Python, using
f2py.
4 Fortran functions
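
A hypothetical sketch of the Python side only (the module name "avgmod"
and the build command are assumptions):

# build step:  f2py -c -m avgmod avg.f90
import numpy as np
import avgmod                               # the f2py-built extension

x = np.asfortranarray(np.random.rand(24, 60).astype(np.float32))
print(avgmod.avg2d_float(x))                # single-precision 2-D average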
Fri, 10 Sep 2010 14:35:46 +0200, Radek Machulka wrote:
> Thanks, but...
>
> >>> x = array([[0,0,0,0],[0,1,0,0],[0,0,1,1],[0,0,0,0]])
> >>> x
> array([[0, 0, 0, 0],
>        [0, 1, 0, 0],
>        [0, 0, 1, 1],
>        [0, 0, 0, 0]])
> >>> i, j = x.any(0).nonzero()[0], x.any(1).nonzero()[0]

Should be

i, j = x.any(1).nonzero()[0], x.any(0).nonzero()[0]
Thanks, but...
>>> x = array([[0,0,0,0],[0,1,0,0],[0,0,1,1],[0,0,0,0]])
>>> x
array([[0, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 1],
[0, 0, 0, 0]])
>>> i, j = x.any(0).nonzero()[0], x.any(1).nonzero()[0]
>>> x[i[:,None], j[None,:]]
array([[1, 0],
[0, 1],
[0, 0]])
R.
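
A minimal sketch of the corrected indexing (the two index arrays swapped
back into place), using the example array from the original question:

import numpy as np

x = np.array([[0,0,0,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]])
rows = x.any(1).nonzero()[0]                # rows containing a nonzero
cols = x.any(0).nonzero()[0]                # columns containing a nonzero
print(x[rows[:, None], cols[None, :]])      # array([[1, 1], [0, 1]])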
David Cournapeau wrote:
>
>> (I'm running ancient numpy and python at work, so if this is already
>> supported in later versions, my apologies)
>
> What does ancient mean? Could you give us the version (numpy.__version__)?
>
>
1.0.4
Python 2.4.2
IPython 0.9.1
OK, I know it's my problem if I try to form a 15000x15000 array and take
the cosine of each element, but the result is that my python session
completely hangs - that is, the operation is not interruptible.

t = np.arange(15360)/15.36e6
t.shape = (-1,1)
X = np.cos(2*np.pi*750*(t-t.T))

I'd like to hit "Ctrl-C" and have the operation abort.
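
A hedged workaround sketch (not from the thread): evaluate X in row
blocks so each numpy call stays short and Ctrl-C can be honored between
blocks; note the full 15360x15360 double array still needs about 1.9 GB.

import numpy as np

t = np.arange(15360) / 15.36e6
t.shape = (-1, 1)
X = np.empty((t.size, t.size))
for start in range(0, t.size, 512):         # 512 rows per chunk
    rows = slice(start, start + 512)
    X[rows] = np.cos(2 * np.pi * 750 * (t[rows] - t.T))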
Hi Folks,
I have an array (numpy.ndarray object) whose non-zero elements are
clustered 'somewhere' (like array([[0,0,0,0],[0,1,1,0],[0,0,1,0],[0,0,0,0]]))
and I need the sub-array with just the non-zero elements
(array([[1,1],[0,1]])).
I can do this by iterating through the array, but I also found some
magic
Hi Luis,
thanks for the announcement. How would you compare mahotas to scipy's
ndimage? Are you using ndimage in mahotas at all?
Thanks,
Sebastian Haase
On Fri, Sep 10, 2010 at 4:50 AM, Luis Pedro Coelho wrote:
> Hello everyone,
>
> My numpy-based image processing toolbox has just had a new release