On Mon, Dec 1, 2008 at 11:14 PM, frank wang <[EMAIL PROTECTED]> wrote:
> This is what I thought to do. However, I am not sure whether this is a
> fast way to do it, and I also want to find a more general way to do it. I
> thought there might be a more elegant way to do it.
>
> Thanks
>
> Frank
>
Pierre,
Your change fixed masked_all for the example I gave, but I think it
introduced a new failure in zeros:
dt = np.dtype([((' Pressure, Digiquartz [db]', 'P'), ' in ()
/usr/local/lib/python2.5/site-packages/numpy/ma/core.pyc in __call__(self, a, *args, **params)
   4533 #
   4534
Hi Frank
2008/12/2 frank wang <[EMAIL PROTECTED]>:
> I need to convolve a 1d filter with 8 coefficients with a 2d array of the
> shape (6,7). I can use convolve to perform the operation for each row. This
> will involve a for loop with a counter of 6. I wonder if there is
> a fast way to do this in numpy
On 28-Nov-08, at 5:38 PM, Gideon Simpson wrote:
> Has anyone gotten the combination of OS X with a fink python
> distribution to successfully build numpy/scipy with the intel
> compilers and the mkl? If so, how'd you do it?
IIRC David Cournapeau has had some success building numpy with MKL on
> Requires
>
> * UNIX-like platform (Linux or Mac OS-X);
>   Windows version is in progress
I installed version 0.3.0 back in August on Windows XP, and as far as I
remember there were no problems at all with the install, and all tests
pass.
I thought the interface was really easy to use.
But
On Dec 1, 2008, at 6:09 PM, Eric Firing wrote:
> Pierre,
>
> ma.masked_all does not seem to work with fancy dtypes and more than
> one dimension:
Eric,
Should be fixed in SVN (r6130). There were indeed problems with nested
dtypes. Tricky beasts they are.
Thanks for reporting!
=====================================
Announcing HDF5 for Python (h5py) 1.0
=====================================
What is h5py?
-------------
HDF5 for Python (h5py) is a general-purpose Python interface to the
Hierarchical Data Format library, version 5. HDF5 is a versatile,
mature scientific so
Hi,
I need to convolve a 1d filter with 8 coefficients with a 2d array of the shape
(6,7). I can use convolve to perform the operation for each row. This will
involve a for loop with a counter of 6. I wonder if there is
a fast way to do this in numpy without using a for loop. Does anyone know how to
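One way to avoid writing the loop yourself is `np.apply_along_axis` (a sketch with made-up data; note it still loops in Python internally, so it tidies the code more than it speeds it up):

```python
import numpy as np

a = np.arange(42, dtype=float).reshape(6, 7)  # hypothetical (6, 7) data
h = np.ones(8) / 8.0                          # hypothetical 8-tap filter

# 'full' convolution of every row without an explicit for loop;
# each output row has length 7 + 8 - 1 = 14
out = np.apply_along_axis(np.convolve, 1, a, h)
```

For a genuinely vectorized version, `scipy.ndimage.convolve1d(a, h, axis=1)` or `scipy.signal.fftconvolve` would handle the whole 2d array in one call.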
On Dec 1, 2008, at 6:21 PM, Christopher Barker wrote:
> Pierre GM wrote:
>> Another issue comes from the possibility to define the dtype
>> automatically:
>
> Does all that get bypassed if the dtype(s) is specified? Is it still
> slow in that case?
Good question. Having a dtype != None does skip
Pierre GM wrote:
> Another issue comes from the possibility to define the dtype
> automatically:
Does all that get bypassed if the dtype(s) is specified? Is it still
slow in that case?
-Chris
--
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R
Stéfan van der Walt wrote:
>> important to you, why are you using ascii I/O?
ascii I/O is slow, so that's a reason in itself to want it not to be slower!
> More "I" than "O"! But I think numpy.fromfile, once fixed up, could
> fill this niche nicely.
I agree -- for the simple cases, fromfile() c
Pierre,
ma.masked_all does not seem to work with fancy dtypes and more than one
dimension:
In [1]: import numpy as np
In [2]: dt = np.dtype({'names': ['a', 'b'], 'formats': ['f', 'f']})
In [3]: x = np.ma.masked_all((2,), dtype=dt)
In [4]: x
Out[4]:
masked_array(data = [(--, --) (--, --)],
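For reference, the failing case was a nested ("fancy") dtype combined with more than one dimension; once fixed, a session along these lines should work (field names here are hypothetical):

```python
import numpy as np

# nested ("fancy") dtype with more than one dimension -- the reported case
dt = np.dtype([('pos', [('x', float), ('y', float)]), ('val', float)])
arr = np.ma.masked_all((2, 3), dtype=dt)

# masked_all masks every field of every element
all_masked = arr.mask['val'].all() and arr.mask['pos']['x'].all()
```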
I agree, genloadtxt is a bit bloated, and it's no surprise it's
slower than the initial one. I think that in order to be fair,
comparisons must be performed with matplotlib.mlab.csv2rec, that
implements as well the autodetection of the dtype. I'm quite in favor
of keeping a lite version
2008/12/1 Ryan May <[EMAIL PROTECTED]>:
> I've wondered about this being an issue. On one hand, you hate to make
> existing code noticeably slower. On the other hand, if speed is
> important to you, why are you using ascii I/O?
More "I" than "O"! But I think numpy.fromfile, once fixed up, could
Stéfan van der Walt wrote:
> Hi Pierre
>
> 2008/12/1 Pierre GM <[EMAIL PROTECTED]>:
>> * `genloadtxt` is the base function that does all the work. It
>> outputs 2 arrays, one for the data (missing values being substituted
>> by the appropriate default) and one for the mask. It would go in
>> np.l
Hi Pierre
2008/12/1 Pierre GM <[EMAIL PROTECTED]>:
> * `genloadtxt` is the base function that does all the work. It
> outputs 2 arrays, one for the data (missing values being substituted
> by the appropriate default) and one for the mask. It would go in
> np.lib.io
I see the code length increase
Mon, 01 Dec 2008 14:43:11 -0500, Neal Becker wrote:
> Says it takes a default dtype arg, but doesn't act like it's an optional
> arg:
>
> fromiter (iterator or generator, dtype=None) Construct an array from an
> iterator or a generator. Only handles 1-dimensional cases. By default
> the data-type
Says it takes a default dtype arg, but doesn't act like it's an optional arg:
fromiter (iterator or generator, dtype=None)
Construct an array from an iterator or a generator. Only handles 1-dimensional
cases. By default the data-type is determined from the objects returned from
the iterator.
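In practice the dtype has to be supplied explicitly; a minimal sketch:

```python
import numpy as np

# despite the docstring's "dtype=None", fromiter needs an explicit dtype
a = np.fromiter((i * i for i in range(5)), dtype=float)
```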
On Dec 1, 2008, at 2:26 PM, John Hunter wrote
>
> OK, that worked great. I do think a default impl in np.rec which
> returned a recarray would be nice. It might also be nice to have a
> method like np.rec.fromcsv which defaults to a delimiter=',',
> names=True and dtype=None. Since csv is
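Something close to the suggested `np.rec.fromcsv` defaults can be had from `np.genfromtxt` directly (a sketch with made-up data; the `rec*` convenience functions were thin wrappers over the same machinery):

```python
import numpy as np
from io import StringIO

# hypothetical CSV: names taken from the header row, dtype inferred per column
s = StringIO("name,value\nfoo,1.5\nbar,2.5\n")
r = np.genfromtxt(s, delimiter=',', names=True, dtype=None, encoding='utf-8')
```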
On Mon, Dec 1, 2008 at 1:14 PM, Pierre GM <[EMAIL PROTECTED]> wrote:
>> The problem you have is that the default dtype is 'float' (for
>> backwards compatibility w/ the original np.loadtxt). What you want
>> is to automatically change the dtype according to the content of
>> your file: you should
(Sorry about that, I pressed "Reply" instead of "Reply all". Not my
day for emails...)
> On Dec 1, 2008, at 1:54 PM, John Hunter wrote:
>>
>> It looks like I am doing something wrong -- trying to parse a CSV
>> file
>> with dates formatted like '2008-10-14', with::
>>
>> import datetime, sys
On Mon, Dec 1, 2008 at 12:21 PM, Pierre GM <[EMAIL PROTECTED]> wrote:
> Well, looks like the attachment is too big, so here's the implementation.
The tests will come in another message.
It looks like I am doing something wrong -- trying to parse a CSV file
with dates formatted like '2008-10-14
Well, looks like the attachment is too big, so here's the
implementation. The tests will come in another message.
"""
Proposal :
Here's an extension to np.loadtxt, designed to take missing values into account.
"""
import itertools
import numpy as np
import numpy.ma as ma
def _is_string_l
2008/12/1 Pierre GM <[EMAIL PROTECTED]>:
> Please find attached to this message another implementation of
Struggling to comply!
Cheers
Stéfan
___
Numpy-discussion mailing list
Numpy-discussion@scipy.org
http://projects.scipy.org/mailman/listinfo/numpy-d
And now for the tests:
"""
Proposal:
Here's an extension to np.loadtxt, designed to take missing values into account.
"""
from genload_proposal import *
from numpy.ma.testutils import *
import StringIO
class TestLineSplitter(TestCase):
    #
    def test_nodelimiter(self):
        "Test Line
All,
Please find attached to this message another implementation of
np.loadtxt, which focuses on missing values. It's basically a
combination of John Hunter et al.'s mlab.csv2rec, Ryan May's patches,
and pieces of code I'd been working on over the last few weeks.
Besides some helper classes (S
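The behavior described above -- one array of data with defaults substituted, plus a mask -- survives in what eventually became `np.genfromtxt`; a minimal sketch with made-up input:

```python
import numpy as np
from io import StringIO

# the empty field in the second row is a missing value
data = StringIO("1,2,3\n4,,6\n")
arr = np.genfromtxt(data, delimiter=',', usemask=True)
```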
On Mon, Dec 1, 2008 at 3:12 AM, Gael Varoquaux <
[EMAIL PROTECTED]> wrote:
> On Mon, Dec 01, 2008 at 12:44:10PM +0900, David Cournapeau wrote:
> > On Mon, Dec 1, 2008 at 7:00 AM, Darren Dale <[EMAIL PROTECTED]> wrote:
> > > I tried installing 4.0.300x on a machine running 64-bit windows vista
> ho
Wim Bakker wrote:
> For a long time now, numpy's memmap has me puzzled by its behavior. When I use
> memmap straightforward on a file it seems to work fine, but whenever I try to
> do a memmap using a dtype it seems to gobble up the whole file into memory.
>
I don't understand your question.
Hi,
>
thanks for all your answers. I will certainly test it.
> numpy.vectorize(myfunc) should do what you want.
Just to add a better example based on a recent
discussion here on this list [1]:
def myfunc(x):
    res = math.sin(x)
    return res

a = numpy.arange(1, 20)
=> myfunc(a) will not work
2008/12/1 Nadav Horesh <[EMAIL PROTECTED]>:
> It does not solve the slowness problem. I think I read on the list about
> some experimental code for fast vectorization.
The choices are basically weave, fast_vectorize
(http://projects.scipy.org/scipy/scipy/ticket/727), ctypes, cython or
f2py. Any I le
For a long time now, numpy's memmap has me puzzled by its behavior. When I use
memmap straightforward on a file it seems to work fine, but whenever I try to
do a memmap using a dtype it seems to gobble up the whole file into memory.
This, of course, makes the use of memmap futile. I would expect
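A quick way to check that expectation (a sketch with a hypothetical record dtype; a structured-dtype memmap should map the file lazily just like a plain one):

```python
import os
import tempfile
import numpy as np

dt = np.dtype([('x', '<f8'), ('y', '<i4')])            # hypothetical records
path = os.path.join(tempfile.mkdtemp(), 'records.bin')
np.zeros(1000, dtype=dt).tofile(path)                  # 1000 * 12 bytes on disk

# shape is inferred from file size / itemsize; pages are read on demand
mm = np.memmap(path, dtype=dt, mode='r')
```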
It does not solve the slowness problem. I think I read on the list about
some experimental code for fast vectorization.
Nadav.
-----Original Message-----
From: [EMAIL PROTECTED] on behalf of Emmanuelle Gouillart
Sent: Mon 01-Dec-08 12:28
To: Discussion of Numerical Python
Subject: Re: [Numpy-discussion] optimisi
2008/12/1 Timmie <[EMAIL PROTECTED]>:
> Hello,
> I am developing a module which bases its calculations
> on another specialised module.
> My module uses numpy arrays a lot.
> The problem is that the other module I am building
> upon, does not work with (whole) arrays but with
> single values.
> The
Hello Timmie,
numpy.vectorize(myfunc) should do what you want.
Cheers,
Emmanuelle
> Hello,
> I am developing a module which bases its calculations
> on another specialised module.
> My module uses numpy arrays a lot.
> The problem is that the other module I am building
> upon, does not work wit
Hello,
I am developing a module which bases its calculations
on another specialised module.
My module uses numpy arrays a lot.
The problem is that the other module I am building
upon, does not work with (whole) arrays but with
single values.
Therefore, I am currently forced to loop over the
array:
On Mon, Dec 01, 2008 at 12:44:10PM +0900, David Cournapeau wrote:
> On Mon, Dec 1, 2008 at 7:00 AM, Darren Dale <[EMAIL PROTECTED]> wrote:
> > I tried installing 4.0.300x on a machine running 64-bit windows vista home
> > edition and ran into problems with PyQt and some related packages. So I
> > u