On 12/03/2014 12:17 PM, Jaime Fernández del Río wrote:
>
>
> The safe way to create 1D object arrays from a list is by preallocating them,
> something like this:
>
> >>> a = [np.random.rand(2, 3), np.random.rand(2, 3)]
> >>> b = np.empty(len(a), dtype=object)
> >>> b[:] = a
> >>> b
> array([ array
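For reference, a complete and runnable version of that recipe (a minimal
sketch; the printed values will differ, since the inputs are random):
---
import numpy as np

a = [np.random.rand(2, 3), np.random.rand(2, 3)]
b = np.empty(len(a), dtype=object)  # preallocate the 1D object array
b[:] = a                            # slice-assign; np.array(a) would try to stack
print b.shape, b.dtype              # -> (2,) object
---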
On 12/03/2014 04:32 AM, Ryan Nelson wrote:
> Emanuele,
>
> This doesn't address your question directly. However, I wonder if you
> could approach this problem in a different way to get what you want.
>
> First of all, create an "index" array and then just vstack all of your
> arrays at once, as sketched below.
>
>
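A sketch of that index-plus-vstack idea (the names `arrays` and `idx` are
mine, using the example arrays from the question below):
---
import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])  # 2 x 3
b = np.array([[9, 8, 7]])             # 1 x 3
arrays = [a, b]
stacked = np.vstack(arrays)           # one (3, 3) array
# idx records which original array each row came from: [0 0 1]
idx = np.concatenate([[i] * len(x) for i, x in enumerate(arrays)])
rows_of_b = stacked[idx == 1]         # recover any piece by its index
---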
Hi,
I am using 2D arrays where only one dimension remains constant, e.g.:
---
import numpy as np
a = np.array([[1, 2, 3], [4, 5, 6]]) # 2 x 3
b = np.array([[9, 8, 7]]) # 1 x 3
c = np.array([[1, 3, 5], [7, 9, 8], [6, 4, 2]]) # 3 x 3
d = np.array([[5, 5, 4], [4, 3, 3]]) # 2 x 3
---
I have a large nu
Hi,
I just came across this unexpected behaviour when creating
a np.array() from two other np.arrays of different shape.
Have a look at this example:
import numpy as np
a = np.zeros(3)
b = np.zeros((2,3))
c = np.zeros((3,2))
ab = np.array([a, b])
print ab.shape, ab.dtype
ac = np.array([a, c])
>>>> print np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2),
>>>> size=1)
> [[-0.55854737 -1.82631485]]
>>>> print np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2),
>>>> size=np.int64(1))
> [[ 0.40274243 -0.33922682]]
>
>
>
> Nicolas
>
> On May 24, 2013, at 2:02 PM, Emanuele Olivetti
Hi,
I'm using NumPy v1.6.1 shipped with Ubuntu 12.04 (Python 2.7.3). I observed an
odd behavior of the multivariate_normal function, which does not like int64 for
the 'size' argument.
Short example:
"""
import numpy as np
print np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2), size=1)
print np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2),
                                    size=np.int64(1))
"""
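A possible workaround on the affected version, assuming the failure is only
the type check on 'size', is to cast the numpy integer to a builtin int
before the call (a sketch, not a fix):
---
import numpy as np

n = np.int64(1)
# a plain int is accepted where the numpy scalar is rejected
print np.random.multivariate_normal(mean=np.zeros(2), cov=np.eye(2),
                                    size=int(n))
---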
Maybe of interest.
E.
Original Message
-- Forwarded message --
From: mikiobraun <[EMAIL PROTECTED]>
Date: 2008/9/8
Subject: [ML-news] Call for Submissions: Workshop on Machine Learning
Open Source Software (MLOSS), NIPS*08
To: Machine Learning News <[EMAIL PROTECTED]>
Damian Eads wrote:
> Emanuele Olivetti wrote:
>> ...
>> [*] : ||x - x'||_w = (\sum_{i=1...N} (w_i*|x_i - x'_i|)**p)**(1/p)
>
> This feature could be implemented easily. However, I must admit I'm not
> very familiar with weighted p-norms. What is the reas
ng with documentation
> and about two dozen tests.
>
> Cheers,
>
> Damian
>
> Emanuele Olivetti wrote:
>
>> David Cournapeau wrote:
>>
>>> FWIW, distance is deemed to move to a separate package, because distance
>>> computation is useful in oth
David Cournapeau wrote:
> FWIW, distance is deemed to move to a separate package, because distance
> computation is useful in other contexts than clustering.
>
>
Excellent. I was thinking about something similar. I'll have a look
at the separate package. Please drop an email to this list when
d
David Cournapeau wrote:
> Emanuele Olivetti wrote:
>> Hi,
>>
>> I'm trying to compute the distance matrix (weighted p-norm [*])
>> between two sets of vectors (data1 and data2). Example:
>>
>
> You may want to look at scipy.cluster.distance, which has
Hi,
I'm trying to compute the distance matrix (weighted p-norm [*])
between two sets of vectors (data1 and data2). Example:
import numpy as N
p = 3.0
data1 = N.random.randn(100,20)
data2 = N.random.randn(80,20)
weight = N.random.rand(20)
distance_matrix = N.zeros((data1.shape[0],data2.shape[0]))
Rob Hetland wrote:
> I think you want something like this:
>
> x1 = x1 * weights[np.newaxis,:]
> x2 = x2 * weights[np.newaxis,:]
>
> x1 = x1[np.newaxis, :, :]
> x2 = x2[:, np.newaxis, :]
> distance = np.sqrt( ((x1 - x2)**2).sum(axis=-1) )
>
> x1 and x2 are arrays with size of (npoints, ndimensions)
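Rob's version is the p = 2 case. Under the weighted p-norm defined earlier,
the same broadcasting trick would look like this (a sketch with names of my
own choosing; note the intermediate array has shape (n2, n1, ndim), so
memory use grows with both point counts):
---
import numpy as np

p = 3.0
x1 = np.random.randn(100, 20)
x2 = np.random.randn(80, 20)
weights = np.random.rand(20)

# (w_i * |x_i - x'_i|) ** p, summed over dimensions, then the 1/p root
d = np.abs(x1[np.newaxis, :, :] - x2[:, np.newaxis, :]) * weights
distance = (d ** p).sum(axis=-1) ** (1.0 / p)   # shape (80, 100)
---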
Matthieu Brucher wrote:
> Hi,
>
> Bill Baxter proposed a version of this problem some months ago on this
> ML. I use it regularly and it is fast enough for me.
>
Excellent. Exactly what I was looking for.
Thanks,
Emanuele
Dear all,
I need to speed up this function (a little example follows):
--
import numpy as N
def distance_matrix(data1, data2, weights):
    rows = data1.shape[0]
    columns = data2.shape[0]
    dm = N.zeros((rows, columns))
    for i in range(rows):
        for j in range(columns):
            # weighted p-norm of the earlier posts; p is assumed to be a
            # global (e.g. p = 3.0), as in the previous snippet
            dm[i, j] = ((weights * N.abs(data1[i] - data2[j])) ** p).sum() ** (1.0 / p)
    return dm
--
James Philbin wrote:
> OK, i've written a simple benchmark which implements an elementwise
> multiply (A=B*C) in three different ways (standard C, intrinsics, hand
> coded assembly). On the face of things the results seem to indicate
> that the vectorization works best on medium sized inputs. If pe
Dear all,
Look at this little example:
import numpy
a = numpy.array([1])
b = numpy.array([1,2,a])
c = numpy.array([a,1,2])
Which has the following output:
Traceback (most recent call last):
File "b.py", line 4, in
c = numpy.array([a,1,2])
ValueError: setting an array element with a sequence.
b=struct.pack("<10H",*a)
File "/usr/lib/python2.5/struct.py", line 63, in pack
return o.pack(*args)
SystemError: ../Objects/longobject.c:322: bad argument to internal function
No error with python2.4, so I believe it is a 32-bit issue.
HTH,
Emanuele
Emanuele Olivetti
Hi,
this snippet is causing troubles:
---
import struct
import numpy
a=numpy.arange(10).astype('H')
b=struct.pack("<10H",*a)
---
(The module struct simply packs and unpacks data in byte-blobs).
It works OK with python2.4, but gives problems with python2.5.
On my laptop (linux x86_64 on intel cor
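If the cause is struct rejecting numpy integer scalars (an assumption on my
part), two possible workarounds (a sketch):
---
import struct
import numpy

a = numpy.arange(10).astype('H')
b = struct.pack("<10H", *[int(x) for x in a])  # hand struct builtin ints
b2 = a.astype('<u2').tostring()                # or let numpy emit the bytes
assert b == b2
---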
Simone Marras wrote:
> Hello everyone,
>
> I am trying to install numpy on my Suse 10.2 using Python 2.5
> Python is correctly installed and when I launch > python setup.py
> install, I get the following error:
>
> numpy/core/src/multiarraymodule.c:7604: fatal error: error writing
> to /tmp/ccN
Hi,
I'm working with 4D integer matrices and need to compute std() on a
given axis but I experience problems with excessive memory consumption.
Example:
---
import numpy
a = numpy.random.randint(100,size=(50,50,50,200)) # 4D randint matrix
b = a.std(3)
---
It seems that this code requires 100-200 MB of memory.
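One way to keep the peak usage bounded is to compute the std one slab at a
time (a minimal sketch; `b` matches a.std(3) but each iteration only
materializes temporaries for a (50, 50, 200) slab):
---
import numpy

a = numpy.random.randint(100, size=(50, 50, 50, 200))
b = numpy.empty(a.shape[:3])
for i in range(a.shape[0]):
    b[i] = a[i].std(axis=-1)  # same result as a.std(3), slab by slab
---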
David Huard wrote:
> Hi Emanuele,
>
> The bug is due to a part of the code that shifts the last bin's
> position to make sure the array's maximum value is counted in the last
> bin, and not as an outlier. To do so, the code computes an approximate
> precision used to shift the bin edge by an amount s
An even simpler example generating the same error:
import numpy
x = numpy.array([0,0])
numpy.histogram2d(x,x)
HTH,
Emanuele
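Until the bug is fixed, passing explicit bin edges should sidestep it, since
histogram2d then never has to derive a range from the degenerate all-equal
data; a sketch under that assumption:
---
import numpy

x = numpy.array([0, 0])
edges = numpy.array([-0.5, 0.0, 0.5])  # explicit, nonzero-width bins
H, xedges, yedges = numpy.histogram2d(x, x, bins=[edges, edges])
---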
Emanuele Olivetti wrote:
> While using histogram2d on simple examples I got these errors:
>
> import numpy
> x = numpy.array([0,0])
> y = n
While using histogram2d on simple examples I got these errors:
import numpy
x = numpy.array([0,0])
y = numpy.array([0,1])
numpy.histogram2d(x,y,bins=[2,2])
---
Warning: divide by zero encountered in log10
---
Robert Kern wrote:
> Emanuele Olivetti wrote:
>
>
>> permutation() likes 'int' and dislikes 'numpy.int32' integers :(
>> Seems a bug.
>>
>
> Yup. I should get around to fixing it later tonight.
>
>
Wow. Superfast! :)
Emanuele
Look at this:
--bug.py---
import numpy
a=numpy.array([1,2])
b=a.sum()
print type(b)
c=numpy.random.permutation(b)
---
If I run it (Python 2.5, numpy 1.0.1 on a Linux box) I get:
---
#> python /tmp/bug.py
Traceback (most recent call last):
File "/tmp/bug.
Travis E. Oliphant wrote:
> I correct my previous statement. Yes, this is true. Pickles generated
> with 1.0.1 cannot be read by version 1.0
>
> However, pickles generated with 1.0 can be read by 1.0.1. It is
> typically not the case that pickles created with newer versions of the
> code wil
I'm running numpy 1.0 and 1.0.1 on several hosts and
today I've found that pickling arrays in 1.0.1 generates
problems to 1.0. An example:
--- numpy 1.0.1 ---
import numpy
import pickle
a = numpy.array([1,2,3])
f=open('test1.pickle','w')
pickle.dump(a,f)
f.close()
---
If I unpickle test1.pickle in
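If arrays must travel between hosts running 1.0 and 1.0.1, one
version-agnostic route is raw bytes instead of pickles (a sketch; the raw
file carries no shape or dtype metadata, so the reader must know both):
---
import numpy

a = numpy.array([1, 2, 3])
a.tofile('test1.raw')                            # plain buffer, no pickle
b = numpy.fromfile('test1.raw', dtype=a.dtype)   # dtype supplied by hand
---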