On Wed, Sep 14, 2011 at 5:30 PM, Christopher Barker
wrote:
> On 9/14/11 2:41 PM, Benjamin Root wrote:
>> Are you sure the f2 code works? a.resize() takes only a shape tuple. As
>> coded, you should get an exception.
>
> wow, what an idiot!
>
> I think I just timed how long it takes to raise that exception...
On 9/14/11 2:41 PM, Benjamin Root wrote:
> Are you sure the f2 code works? a.resize() takes only a shape tuple. As
> coded, you should get an exception.
wow, what an idiot!
I think I just timed how long it takes to raise that exception...
And when I fix that, I get a memory error.
When I fix
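The growing-buffer pattern under discussion can be sketched as follows. This is not the thread's actual f2 code (which is only attached to the original mail), just a minimal, illustrative version of growing a 1-D buffer in place with ndarray.resize; the names and the doubling factor are assumptions:

```python
import numpy as np

# Hypothetical sketch of a growing buffer via ndarray.resize; NOT the
# thread's f2 code, just the pattern being discussed.
buf = np.empty(4, dtype=np.float64)
n = 0
for value in range(10):
    if n >= buf.shape[0]:
        # ndarray.resize takes a shape tuple and works in place,
        # zero-filling new slots; refcheck=False avoids spurious
        # failures in interactive shells that hold extra references
        buf.resize((2 * buf.shape[0],), refcheck=False)
    buf[n] = value
    n += 1
buf = buf[:n]   # trim the unused tail
assert buf.tolist() == [float(v) for v in range(10)]
```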
On Wed, Sep 14, 2011 at 4:25 PM, Christopher Barker
wrote:
> On 9/14/11 1:01 PM, Christopher Barker wrote:
> > numpy.ndarray.resize is a different method, and I'm pretty sure it
> > should be as fast or faster than np.empty + np.append.
>
> My profile:
>
> In [25]: %timeit f1 # numpy.resize()
> 1000 loops, best of 3: 163 ns per loop
On 9/14/11 1:01 PM, Christopher Barker wrote:
> numpy.ndarray.resize is a different method, and I'm pretty sure it
> should be as fast or faster than np.empty + np.append.
My profile:
In [25]: %timeit f1 # numpy.resize()
1000 loops, best of 3: 163 ns per loop
In [26]: %timeit f2 # numpy.ndarray.resize()
On 9/13/11 1:01 PM, Christopher Jordan-Squire wrote:
> Sorry, I cheated by reading the docs. :-)
me too...
> """
> numpy.resize(a, new_shape)
>
> Return a new array with the specified shape.
>
> If the new array is larger than the original array, then the new array
> is filled with repeated copies of a.
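The docstring behavior quoted above can be shown concretely; note the contrast with the in-place method, which zero-fills instead of repeating (a small illustrative example, not code from the thread):

```python
import numpy as np

# np.resize returns a NEW array and tiles the original data
a = np.arange(4)            # [0, 1, 2, 3]
b = np.resize(a, (7,))      # repeats: [0, 1, 2, 3, 0, 1, 2]
assert b.tolist() == [0, 1, 2, 3, 0, 1, 2]

# ndarray.resize is different: in place, and zero-fills new slots
a.resize((7,), refcheck=False)
assert a.tolist() == [0, 1, 2, 3, 0, 0, 0]
```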
On Tue, Sep 13, 2011 at 3:43 AM, Pierre GM wrote:
>
> On Sep 13, 2011, at 01:38 , Christopher Jordan-Squire wrote:
>
>> I did some timings to see what the advantage would be, in the simplest
>> case possible, of taking multiple lines from the file to process at a
>> time. Assuming the dtype is already known.
On Tue, Sep 13, 2011 at 2:41 PM, Chris.Barker wrote:
> On 9/12/11 4:38 PM, Christopher Jordan-Squire wrote:
>> I did some timings to see what the advantage would be, in the simplest
>> case possible, of taking multiple lines from the file to process at a
>> time.
>
> Nice work, only a minor comment:
On 9/12/11 4:38 PM, Christopher Jordan-Squire wrote:
> I did some timings to see what the advantage would be, in the simplest
> case possible, of taking multiple lines from the file to process at a
> time.
Nice work, only a minor comment:
> f6 and f7 use stripped down versions of Chris
> Barker's
On Sep 13, 2011, at 01:38 , Christopher Jordan-Squire wrote:
> I did some timings to see what the advantage would be, in the simplest
> case possible, of taking multiple lines from the file to process at a
> time. Assuming the dtype is already known. The code is attached. What
> I found was I can't use generators to avoid constructing a list and
> then making a tuple
I did some timings to see what the advantage would be, in the simplest
case possible, of taking multiple lines from the file to process at a
time. Assuming the dtype is already known. The code is attached. What
I found was I can't use generators to avoid constructing a list and
then making a tuple
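The list-then-tuple constraint mentioned above can be illustrated briefly: for a structured dtype, np.array wants a sequence of tuples, so parsed rows (often lists) must be materialized as tuples first. The dtype and values here are assumptions for the example, not the attached benchmark code:

```python
import numpy as np

# For a structured dtype, np.array needs a sequence of tuples;
# a generator or list-of-lists won't do.
dt = np.dtype([('x', np.float64), ('y', np.int64)])

parsed = ([1.0, 2], [3.0, 4])            # e.g. what a line parser yields
rows = [tuple(r) for r in parsed]        # materialize the tuples
arr = np.array(rows, dtype=dt)
assert arr['y'].tolist() == [2, 4]
```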
On 9/8/11 1:43 PM, Christopher Jordan-Squire wrote:
> I just ran a quick test on my machine of this idea. With
>
> dt = np.dtype([('x',np.float32),('y', np.int32),('z', np.float64)])
> temp = np.empty((), dtype=dt)
> temp2 = np.zeros(1,dtype=dt)
>
> In [96]: def f():
> ...: l=[0]*3
>
On Wed, Sep 7, 2011 at 2:52 PM, Chris.Barker wrote:
> On 9/2/11 2:45 PM, Christopher Jordan-Squire wrote:
>> It doesn't have to parse the entire file to determine the dtypes. It
>> builds up a regular expression for what it expects to see, in terms of
>> dtypes. Then it just loops over the lines,
Wed, 07 Sep 2011 12:52:44 -0700, Chris.Barker wrote:
[clip]
> In [9]: temp['x'] = 3
>
> In [10]: temp['y'] = 4
>
> In [11]: temp['z'] = 5
[clip]
> maybe it wouldn't be any faster, but with re-using temp, and one less
> list-tuple conversion, and fewer python type to numpy type conversions,
> maybe
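The "re-use temp" idea being quoted can be sketched end to end. The dt/temp names follow the snippets above; the fill loop and values are illustrative assumptions:

```python
import numpy as np

# Re-use a single 0-d record as a scratch buffer, filling fields by
# name and copying it into the output array row by row.
dt = np.dtype([('x', np.float32), ('y', np.int32), ('z', np.float64)])
out = np.empty(3, dtype=dt)
temp = np.zeros((), dtype=dt)   # one 0-d record, re-used per row

data = [(1.0, 2, 3.0), (4.0, 5, 6.0), (7.0, 8, 9.0)]
for i, (x, y, z) in enumerate(data):
    temp['x'] = x               # field assignment, no tuple built
    temp['y'] = y
    temp['z'] = z
    out[i] = temp[()]           # copy the record (as a void scalar) in

assert out['y'].tolist() == [2, 5, 8]
```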
On 9/2/11 2:45 PM, Christopher Jordan-Squire wrote:
> It doesn't have to parse the entire file to determine the dtypes. It
> builds up a regular expression for what it expects to see, in terms of
> dtypes. Then it just loops over the lines, only parsing if the regular
> expression doesn't match. It
On Tue, Sep 6, 2011 at 9:32 AM, Derek Homeier
wrote:
> On 02.09.2011, at 11:45PM, Christopher Jordan-Squire wrote:
>>>> and unfortunately it's for 1D-arrays only).
>>>
>>> That's not bad for this use -- make a row a struct dtype, and you've got
>>> a 1-d array anyway -- you can optionally convert to a 2-d array after
>>> the fact.
On 02.09.2011, at 11:45PM, Christopher Jordan-Squire wrote:
>>>
>>> and unfortunately it's for 1D-arrays only).
>>
>> That's not bad for this use -- make a row a struct dtype, and you've got
>> a 1-d array anyway -- you can optionally convert to a 2-d array after
>> the fact.
>>
>> I don't know
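The "row as a struct dtype, convert to 2-d afterwards" idea quoted above can be sketched like this. When all fields share one base type, a view does the conversion without copying; the field names and values are illustrative:

```python
import numpy as np

# A 1-d structured array whose fields are all float64 can be viewed
# as a plain 2-d array after the fact, with no copy.
dt = np.dtype([('a', np.float64), ('b', np.float64), ('c', np.float64)])
rows = np.array([(1., 2., 3.), (4., 5., 6.)], dtype=dt)

as_2d = rows.view(np.float64).reshape(len(rows), 3)
assert as_2d.shape == (2, 3)
assert as_2d[1, 2] == 6.0
```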
On Fri, Sep 2, 2011 at 3:54 PM, Chris.Barker wrote:
> On 9/2/11 9:16 AM, Christopher Jordan-Squire wrote:
I agree it would make a very nice addition, and could complement my
pre-allocation option for loadtxt - however there I've also been made
aware that this approach breaks streamed input etc.
On 9/2/11 9:16 AM, Christopher Jordan-Squire wrote:
>>> I agree it would make a very nice addition, and could complement my
>>> pre-allocation option for loadtxt - however there I've also been made
>>> aware that this approach breaks streamed input etc., so the buffer.resize(…)
>>> methods in accumulator would be the
On 02.09.2011, at 6:16PM, Christopher Jordan-Squire wrote:
> I hadn't thought of that. Interesting idea. I'm surprised that
> completely resetting the array could be faster.
>
I had experimented a bit with the fromiter function, which also increases
the output array as needed, and this creates n
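The fromiter approach mentioned above can be sketched briefly: it grows its output buffer as it consumes the iterator, and pre-allocates exactly when count= is given. The toy parser and dtype here are assumptions, not the thread's experiment:

```python
import numpy as np

# np.fromiter builds the output array incrementally from an iterator;
# for a structured dtype each yielded item must be a tuple.
dt = np.dtype([('x', np.float32), ('y', np.int32)])
lines = ["1.5 2", "3.5 4", "5.5 6"]

def parse(lines):
    for line in lines:
        x, y = line.split()
        yield (float(x), int(y))

# count= lets fromiter allocate the exact size up front
arr = np.fromiter(parse(lines), dtype=dt, count=len(lines))
assert arr['y'].tolist() == [2, 4, 6]
```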
On 02.09.2011, at 5:50PM, Chris.Barker wrote:
> hmmm -- it seems you could just as well be building the array as you go,
> and if you hit a change in the input, re-set and start again.
>
> In my tests, I'm pretty sure that the time spent file io and string
> parsing swamp the time it takes to a
Sorry I'm only now getting around to thinking more about this. Been
side-tracked by stats stuff.
On Fri, Sep 2, 2011 at 10:50 AM, Chris.Barker wrote:
> On 9/2/11 8:22 AM, Derek Homeier wrote:
>> I agree it would make a very nice addition, and could complement my
>> pre-allocation option for loadtxt
On 9/2/11 8:22 AM, Derek Homeier wrote:
> I agree it would make a very nice addition, and could complement my
> pre-allocation option for loadtxt - however there I've also been made
> aware that this approach breaks streamed input etc., so the buffer.resize(…)
> methods in accumulator would be the
On 30.08.2011, at 6:21PM, Chris.Barker wrote:
>> I've submitted a pull request for a new method for loading data from
>> text files into a record array/masked record array.
>
>> Click on the link for more info, but the general idea is to create a
>> regular expression for what entries should look
On 8/27/11 11:08 AM, Christopher Jordan-Squire wrote:
> I've submitted a pull request for a new method for loading data from
> text files into a record array/masked record array.
> Click on the link for more info, but the general idea is to create a
> regular expression for what entries should loo
Hi--
I've submitted a pull request for a new method for loading data from
text files into a record array/masked record array.
https://github.com/numpy/numpy/pull/143
Click on the link for more info, but the general idea is to create a
regular expression for what entries should look like and loop
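The general idea described above can be sketched as follows. This is not the code from the pull request (see the link for that), just an illustrative toy: build a regex describing one line of the expected dtypes, loop over lines, and only fall back to re-inference when a line fails to match. The pattern and field names are assumptions:

```python
import re
import numpy as np

# One float column and one int column per line (illustrative pattern)
line_re = re.compile(r'\s*([-+]?\d*\.?\d+)\s+([-+]?\d+)\s*$')

rows = []
for line in ["1.5 2", "3.25 7"]:
    m = line_re.match(line)
    if m is None:
        # the real method would re-infer dtypes here instead of raising
        raise ValueError("line does not match expected dtypes: %r" % line)
    rows.append((float(m.group(1)), int(m.group(2))))

arr = np.array(rows, dtype=[('x', np.float64), ('y', np.int64)])
assert arr['x'].tolist() == [1.5, 3.25]
```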