However, with 1.10.4, no matter how I tried to define those variables
(even by putting them in the site.cfg file), I could not make it work.
Davide
On Thu, 2016-01-28 at 14:23 -0800, Nathaniel Smith wrote:
> What does
> ldd /usr/local/python2/2.7.8/x86_64/gcc46/New_build/lib
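A quick way to check what the build machinery actually finds, a diagnostic sketch rather than something from the thread, is to ask numpy.distutils directly; it honours both site.cfg and the BLAS/LAPACK/ATLAS environment variables:

import numpy.distutils.system_info as si

# Prints the library_dirs / include_dirs / libraries that the numpy
# build would pick up for each component.
print(si.get_info('atlas'))
print(si.get_info('lapack_opt'))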
Hi all,
I recently upgraded NumPy from 1.9.1 to 1.10.4 on Python 2.7.8 using
pip. As always, I specified the paths to BLAS, LAPACK and ATLAS in the
respective environment variables, and I used the same compiler (GCC
4.6.1) that I used to compile both Python and the libraries. The
problem is that it always
ion 1.2.1
So nobody knows why the number of tests run differs between runs of
the same binary/library on different nodes?
https://github.com/numpy/numpy/blob/master/doc/TESTS.rst.txt implies
it shouldn't...
Regards,
Davide Del Vento,
On 02/11/2013 08:54 PM, Davide Del Vento wrote:
Where did the 358 "missing" tests go in the batch run?
The handful of differences in SKIPped and FAILed tests (which I am
investigating) cannot be the reason.
What is happening?
PS: a similar thing happened with scipy, which I'm asking about on the
scipy mailing list.
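One way to localize the difference, a suggestion rather than something tried in the thread, is to run the suite with maximum verbosity on two nodes and diff the printed test names; with the nose-based runner NumPy used at the time:

import numpy

# verbose=2 makes the runner print each test name as it executes, so
# the output from two nodes can be diffed to find the missing tests.
numpy.test(verbose=2)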
this code?
Regards,
Davide Lasagna
--
PhD Student
Dipartimento di Ingegneria Aeronautica e Spaziale
Politecnico di Torino, Italy
tel: 011/0906871
e-mail: davide.lasa...@polito.it; lasagnadav...@gmail.com
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Thanks for your good work!
Cheers,
Davide
On 09/19/2011 02:15 AM, Hoyt Koepke wrote:
> Hello,
>
> I'm pleased to announce the first release of a wrapper I wrote for
> IBM's CPlex Optimizer Suite. It focuses on ease of use and seamless
> integration with numpy, and allows
What tools are you going to use for the CWT? I
may be interested in providing some help.
Cheers,
Davide Lasagna
unpack=True ) )
returns 10, where I would expect it to return 1, to be consistent with
the behaviour when there are multiple columns.
Is there a reason why it is not like that?
Cheers
Davide Lasagna
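A minimal reproduction sketch of that behaviour (the file contents are invented): with a single column, np.loadtxt returns a 1-D array, so unpack=True has nothing to transpose and len() counts rows instead of columns.

import io
import numpy as np

one_col = io.StringIO(u"1\n2\n3\n")
two_col = io.StringIO(u"1 10\n2 20\n3 30\n")

a = np.loadtxt(one_col, unpack=True)  # 1-D, shape (3,)
b = np.loadtxt(two_col, unpack=True)  # 2-D, shape (2, 3)
print(len(a), len(b))                 # 3 2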
publish such code somewhere, e.g. GitHub, so anyone
interested can pick it up.
Cheers,
Davide
stion,
Davide
with code and
documentation. If you can provide some, and if you are interested in
becoming a core developer, please drop us an email at openpiv-develop at
lists dot sourceforge dot net. A draft website can be found at
www.openpiv.net/python.
Thanks for your attention,
Davide Lasagna
Here is some *working* code I wrote once. It uses strides; look at the
docs for what they are.
from numpy.lib import stride_tricks

def overlap_array(y, len_blocks, overlap=0):
    """Use strides to return a 2-D view whose rows are overlapping
    blocks of the 1-D array y; no data is copied."""
    step = len_blocks - overlap
    n_rows = 1 + (y.size - len_blocks) // step
    s = y.strides[0]
    return stride_tricks.as_strided(y, shape=(n_rows, len_blocks),
                                    strides=(step * s, s))
the nd.array to get function evaluation
at the point(s) specified.
Maybe this can help,
Davide
>>> I have an n-dimensional grid. The grid's axes are linear but not
>>> integers. Let's say I want the value in gridcell [3.2,-5.6,0.01]. Is
>>> there an easy way to transfo
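Since the axes are linear, the index is an affine function of the coordinate. A sketch under that assumption (the axes and data here are invented for illustration):

import numpy as np

x = np.linspace(0.0, 10.0, 101)    # linear axis with step 0.1
y = np.linspace(-10.0, 10.0, 201)  # linear axis with step 0.1
data = np.random.rand(x.size, y.size)

def index_on(axis, value):
    # On a linear axis the index is just (value - start) / step.
    step = axis[1] - axis[0]
    return int(round((value - axis[0]) / step))

print(data[index_on(x, 3.2), index_on(y, -5.6)])

For points that fall between gridcells, scipy.ndimage.map_coordinates evaluates the array at fractional indices by interpolation.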
On 16/feb/2011, at 00:04, numpy-discussion-requ...@scipy.org wrote:
>
> I'm sorry that I don't have any example code for you, but you
> probably need to break down the problem if you can't fit it into
> memory: http://en.wikipedia.org/wiki/Overlap-add_method
>
> Jonathan
Thanks! You saved my day!
Hi all,
I have to work with huge numpy arrays (up to 250M elements each) and I
have to perform either np.correlate or np.convolve between them.
The process only works on big-memory machines, and it takes ages. I'm
writing to get some hints on how to speed things up (at the cost of
some precision, maybe...)
I'm starting to love numpy array facilities
Ciao
Davide
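A speed-up sketch along the lines Jonathan suggests above: FFT-based convolution, here via scipy.signal.fftconvolve (an assumption; the thread itself only links to the overlap-add article), turns the quadratic direct computation into O(n log n) at the cost of floating-point round-off.

import numpy as np
from scipy.signal import fftconvolve

# Hypothetical inputs, scaled down from the 250M-element case.
a = np.random.rand(2**20)
b = np.random.rand(2**16)

c = fftconvolve(a, b, mode='full')      # convolution via FFT

# Correlation is convolution with one input reversed (and conjugated
# for complex data).
corr = fftconvolve(a, b[::-1], mode='full')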
Hi,
I want to compute the following dot product:
P = np.array( [[ p11, p12 ], [p21, p22]] )
C = np.array( [c1, c2] )
where c1 and c2 are m*m matrices, so that
C.shape = (2,m,m)
I want to compute:
A = np.array([a1, a2])
where a1 and a2 are m*m matrices, obtained from the dot product of P and C.
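In index notation this is a_i = sum_j P[i,j] * c_j, i.e. a contraction of P's second axis against C's first. A sketch (m and the entries are arbitrary):

import numpy as np

m = 4
P = np.random.rand(2, 2)
C = np.random.rand(2, m, m)            # C[0] = c1, C[1] = c2

A = np.tensordot(P, C, axes=(1, 0))    # contracts axis 1 of P with axis 0 of C
A2 = np.einsum('ij,jkl->ikl', P, C)    # the same contraction, spelled out
assert np.allclose(A, A2)              # both have shape (2, m, m)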
You may have a look at the nice h5py module, which gives an OO
interface to the underlying HDF5 file format. I'm using it for storing
large amounts (~10 GB) of experimental data. Very fast, very convenient.
Ciao
Davide
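A minimal h5py sketch (the file and dataset names are invented):

import h5py
import numpy as np

data = np.random.rand(1000, 1000)
with h5py.File('experiment.h5', 'w') as f:
    f.create_dataset('run1/velocity', data=data, compression='gzip')

with h5py.File('experiment.h5', 'r') as f:
    v = f['run1/velocity'][:100]   # reads a slice; the rest stays on disk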
On Thu, 2010-06-17 at 08:33 -0400, greg whittier wrote:
> On
Well, actually np.arange(2**24) was just to test the following line ;). I'm
particularly concerned about memory consumption rather than speed.
On 16 May 2010 22:53, Brent Pedersen wrote:
> On Sun, May 16, 2010 at 12:14 PM, Davide Lasagna
> wrote:
> > Hi all,
> > What is
Hi all,
What is the fastest way to compute this, with the lowest memory consumption?
y = np.arange(2**24)
bases = y[1:] + y[:-1]
Actually it is already quite fast, but I'm not sure whether it is
occupying some temporary memory
is the summation. Any help is appreciated.
Cheers
D
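One way to avoid the temporary, sketched here with the caveat that the two slices are views and cost nothing themselves, is to give np.add an explicit output buffer:

import numpy as np

y = np.arange(2**24)

bases = y[1:] + y[:-1]            # allocates a fresh array for the result

out = np.empty(y.size - 1, dtype=y.dtype)
np.add(y[1:], y[:-1], out=out)    # writes into a preallocated buffer instead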
st: on my machine (1.7 GHz) I have this.
>>> x = np.linspace(0, 2*np.pi, 1e6)
>>> func = lambda x: np.sin(x)
>>> timeit derive(func, x)
10 loops, best of 3: 177 ms per loop
I'm curious if someone comes up with something faster.
Regards,
Davide
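For comparison, a suggestion rather than something from the thread (`derive` above is the poster's own function), numpy's built-in finite-difference gradient does the same job:

import numpy as np

x = np.linspace(0, 2 * np.pi, 10**6)
f = np.sin(x)

# Central differences in the interior, one-sided at the two ends.
dfdx = np.gradient(f, x[1] - x[0])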
On 4 May 2010 22:17,
d np.std are so slow?
I'm sure I'm missing something.
Cheers
Davide
Hi all,
Is there a fast numpy way to find the peak boundaries in a (looong, millions of
points) smoothed signal? I've found some approaches, like this:
z = data[1:-1]           # every interior point
l = data[:-2]            # its left neighbour
r = data[2:]             # its right neighbour
f = np.greater(z, l)
f *= np.greater(z, r)    # True where a point beats both neighbours
boundaries = np.nonzero(f)
but it is too sensitive... it d
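One way to tame that sensitivity, sketched with scipy.signal.find_peaks (not mentioned in the thread) and invented thresholds, is to require a minimum prominence and spacing:

import numpy as np
from scipy.signal import find_peaks

x = np.linspace(0, 100, 10**6)
data = np.sin(x) + 0.1 * np.random.randn(x.size)

# prominence and distance suppress the tiny local maxima that make the
# plain three-point comparison fire on noise.
peaks, properties = find_peaks(data, prominence=0.5, distance=1000)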
_something()
#
Is there any way to do what I have in mind? Can I obtain,
pythonically, a list of column arrays?
Any help is appreciated.
Cheers..
Davide Lasagna
Dip. Ingegneria Aerospaziale
Politecnico di Torino
Italia
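A sketch of one pythonic answer, assuming the goal is a list holding one 1-D array per column of a 2-D array:

import numpy as np

a = np.arange(12).reshape(3, 4)

# Iterating over a.T yields the rows of the transpose, i.e. one view
# per column of a; list() collects them.
columns = list(a.T)
print(len(columns))   # 4
print(columns[0])     # [0 4 8]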
> between the two classes.
>
> Matthieu
>
> 2008/2/15, Davide Albanese <[EMAIL PROTECTED]>:
>
> Yes: https://mlpy.fbk.eu/wiki/MlpyExamplesWithDoc
>
> * svm()
> Initialize the svm class.
>
> Inputs:
>
With "standard" Python:
>>> who()
Robin wrote:
> On Thu, Feb 14, 2008 at 8:43 PM, Alexander Michael <[EMAIL PROTECTED]> wrote:
>
>> Is there a way to list all of the arrays that are referencing a given
>> array? Similarly, is there a way to get a list of all arrays that are
>> currently
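A partial answer, sketched here as an illustration rather than taken from the thread: NumPy records only the reverse link, an array's .base attribute, so one can scan a namespace for views of a given array:

import numpy as np

a = np.arange(10)
b = a[2:5]       # a slice is a view; its .base is a
c = a.copy()     # an independent copy; its .base is None

views = [name for name, val in list(globals().items())
         if isinstance(val, np.ndarray) and val.base is a]
print(views)     # ['b']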
> underlying cost function? (this is mainly
> what I need)
>
> Matthieu
>
> 2008/2/15, Davide Albanese <[EMAIL PROTECTED]>:
>
> I don't know libsvm very well either; the core of svm-mlpy is
> written in C
> and was developed
in other terms: how does it compare to libsvm,
> which is one of the best-known packages for SVMs?
>
> Matthieu
>
> 2008/2/15, Davide Albanese <[EMAIL PROTECTED]>:
>
> Dear Matthieu,
> I don't know very well sci
19(10), 1597-1611,
2006.
/* da */
Matthieu Brucher wrote:
> Hi,
>
> How does it compare to the learn scikit, especially for the SVM part?
> How was it implemented ?
>
> Matthieu
>
> 2008/2/14, Davide Albanese <[EMAIL PROTECTED]>
> mloss.org/software/
> ("mloss" is "machine learning open source software")
> Regards, D.
>
> Davide Albanese wrote:
>
>> *Machine Learning Py* (MLPY) is a *Python/NumPy* based package for
>> machine learning.
>> The package now includes:
>>
>
*Machine Learning Py* (MLPY) is a *Python/NumPy* based package for
machine learning.
The package now includes:
* *Support Vector Machines* (linear, Gaussian, polynomial,
terminated ramps) for 2-class problems
* *Fisher Discriminant Analysis* for 2-class problems
* *Iterative Rel