I think you want np.meshgrid
-paul
On Sun, Oct 26, 2014 at 2:09 AM, Artur Bercik wrote:
> I have a rectangle with the following coordinates:
>
> import numpy as np
>
> ulx,uly = (110, 60) ##upper left lon, upper left lat
> urx,ury = (120, 60) ##upper right lon, upper right lat
> lrx, lry = (120, 50) ##lower right lon, lower right lat
At 06:32 AM 10/26/2014, you wrote:
On Sun, Oct 26, 2014 at 1:21 PM, Eelco Hoogendoorn
wrote:
> I'm not sure why the memory doubling is necessary. Isn't it possible to
> preallocate the arrays and write to them?
Not without reading the whole file first to know how many rows to preallocate.
Seems
I agree with @Daniele's point: storing huge arrays in text files might
indicate a bad process, but if these functions can be improved, why
not? Unless this turns out to be a burden to change.
Regarding the estimation of the array size, I don't see a big performance
loss when the file iterator is exhausted
Hi,
We have finally finished the first release candidate of NumPy 1.9.1;
sorry for the one-week delay.
The 1.9.1 release will, as usual, be a bugfix-only release in the 1.9.x
series.
The tarballs and win32 binaries are available on sourceforge:
https://sourceforge.net/projects/numpy/files/NumPy/1.9.1rc1
On 26/10/14 09:46, Saullo Castro wrote:
> I would like to start working on a memory-efficient alternative for
> np.loadtxt and np.genfromtxt that uses arrays instead of lists to store
> the data while the file iterator is exhausted.
...
> I would be glad if you could share your experience on this
On 26 Oct 2014 11:54, "Jeff Reback" wrote:
>
> you should have a read here:
> http://wesmckinney.com/blog/?p=543
>
> going below 2x memory usage on read-in is non-trivial and costly in
> terms of performance
On Linux you can probably go below 2x overhead easily, by exploiting the
fact that realloc
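Presumably the trick alluded to is growing the buffer with realloc, which on Linux can often remap a large allocation instead of copying it. A minimal sketch under that assumption, using ndarray.resize to grow in place (the chunk size is arbitrary):

    import numpy as np

    def read_column(lines, dtype=np.float64, chunk=65536):
        # grow a compact 1-D array directly instead of accumulating a
        # Python list first; resize() uses realloc under the hood
        out = np.empty(chunk, dtype=dtype)
        n = 0
        for line in lines:
            if n == out.size:
                out.resize(out.size * 2, refcheck=False)
            out[n] = float(line)
            n += 1
        out.resize(n, refcheck=False)  # trim to the rows actually read
        return out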
you are describing a special case where you know the data size a priori (e.g.
not streaming), the dtypes are readily apparent from a small sample,
and in general your data is not messy
I would agree that if these can be satisfied then you can achieve closer to a 1x
memory overhead
using bcolz is great
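For context, a rough sketch of how bcolz might fit in: a carray stores its data in compressed chunks, so resident memory can stay well below a plain ndarray while the file is being read. The file name and block size below are placeholders:

    import numpy as np
    import bcolz

    out = bcolz.carray(np.empty(0, dtype=np.float64), expectedlen=1000000)
    buf = []
    with open('data.txt') as f:      # placeholder path
        for line in f:
            buf.append(float(line))
            if len(buf) == 65536:    # flush in blocks to keep appends cheap
                out.append(buf)
                buf = []
    if buf:
        out.append(buf)
    arr = out[:]  # materialize as a plain ndarray only if really needed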
On 26 Oct 2014, at 02:21 pm, Eelco Hoogendoorn
wrote:
> I'm not sure why the memory doubling is necessary. Isn't it possible to
> preallocate the arrays and write to them? I suppose this might be inefficient
> though, in case you end up reading only a small subset of rows out of a
> mostly corrupt file? But that seems to be a rather uncommon corner case.
On 26 October 2014 12:54, Jeff Reback wrote:
> you should have a read here:
> http://wesmckinney.com/blog/?p=543
>
> going below 2x memory usage on read-in is non-trivial and costly in
> terms of performance
>
If you know in advance the number of rows (because it is in the header,
counted w
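A rough sketch of that first case, assuming (hypothetically) a format where the first line holds the row count, so the result can be preallocated exactly:

    import numpy as np

    def load_with_count(path, ncols, dtype=np.float64):
        # hypothetical format: first line is the number of data rows
        with open(path) as f:
            nrows = int(f.readline())
            out = np.empty((nrows, ncols), dtype=dtype)
            for i, line in enumerate(f):
                out[i] = [float(x) for x in line.split()]
        return out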
On Sun, Oct 26, 2014 at 1:21 PM, Eelco Hoogendoorn
wrote:
> I'm not sure why the memory doubling is necessary. Isn't it possible to
> preallocate the arrays and write to them?
Not without reading the whole file first to know how many rows to preallocate.
--
Robert Kern
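The usual workaround is to pay for a second pass over the file: count the rows first, then fill a preallocated array. A rough sketch of the idea (not what np.loadtxt actually does):

    import numpy as np

    def loadtxt_two_pass(path, dtype=np.float64):
        # pass 1: count the rows and sniff the number of columns
        with open(path) as f:
            first = f.readline()
            nrows = 1 + sum(1 for _ in f)
        ncols = len(first.split())
        # pass 2: fill a preallocated array row by row
        out = np.empty((nrows, ncols), dtype=dtype)
        with open(path) as f:
            for i, line in enumerate(f):
                out[i] = [float(x) for x in line.split()]
        return out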
I'm not sure why the memory doubling is necessary. Isn't it possible to
preallocate the arrays and write to them? I suppose this might be
inefficient though, in case you end up reading only a small subset of rows
out of a mostly corrupt file? But that seems to be a rather uncommon corner
case.
Either
you should have a read here:
http://wesmckinney.com/blog/?p=543
going below 2x memory usage on read-in is non-trivial and costly in terms
of performance
> On Oct 26, 2014, at 4:46 AM, Saullo Castro wrote:
>
> I would like to start working on a memory-efficient alternative for
> np.loadtxt and np.genfromtxt
I have a rectangle with the following coordinates:
import numpy as np
ulx,uly = (110, 60) ##upper left lon, upper left lat
urx,ury = (120, 60) ##upper right lon, upper right lat
lrx, lry = (120, 50) ##lower right lon, lower right lat
llx, lly = (110, 50) ##lower left lon, lower left lat
I want
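A minimal sketch of what np.meshgrid gives here, assuming the goal is a regular grid spanning the rectangle (the 1-degree spacing is an assumption, not stated in the question):

    import numpy as np

    lons = np.arange(110, 121)  # 110 .. 120, west to east
    lats = np.arange(50, 61)    # 50 .. 60, south to north

    # lon2d[i, j] and lat2d[i, j] are the coordinates of grid point (i, j)
    lon2d, lat2d = np.meshgrid(lons, lats)
    print(lon2d.shape)  # (11, 11)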
Hi,
On Sat, Oct 25, 2014 at 11:26 PM, David Cournapeau wrote:
> Not exactly: if you build numpy with mingw (as is the official binary), you
> need to build everything that uses the numpy C API with it.
Some of the interwebs appear to believe that the mingw .a file is
compatible with Visual Studio:
h
I would like to start working on a memory-efficient alternative for
np.loadtxt and np.genfromtxt that uses arrays instead of lists to store the
data while the file iterator is exhausted.
The motivation came from this SO question:
http://stackoverflow.com/q/26569852/832621
where for huge arrays t
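A rough sketch of what the proposal could look like, under the simplifying assumptions of a fixed column count and an arbitrary chunk size: rows are parsed straight into fixed-size array blocks instead of a growing list of Python tuples, and concatenated once at the end.

    import numpy as np

    def genfromtxt_chunked(path, ncols, dtype=np.float64, chunk=4096):
        chunks = []
        block = np.empty((chunk, ncols), dtype=dtype)
        n = 0
        with open(path) as f:
            for line in f:
                block[n] = [float(x) for x in line.split()]
                n += 1
                if n == chunk:   # block full: stash it and start a new one
                    chunks.append(block)
                    block = np.empty((chunk, ncols), dtype=dtype)
                    n = 0
        chunks.append(block[:n])
        # a single concatenate at the end; the per-row overhead of a list
        # of tuples never builds up
        return np.concatenate(chunks)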