On Friday, 21 December 2007 at 13:23:49, David Cournapeau wrote:
> > Instead of saying "memmap is ALL about disc access" I would rather
> > like to say that "memmap is all about SMART disk access" -- what I mean
> > is that memmap should run as fast as a normal ndarray if it works on
> > the cached [...]

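To make the "as fast as a normal ndarray on the cached part" point concrete, here is a minimal sketch (file name and frame geometry are hypothetical, not from the thread): the first pass over a numpy.memmap pulls the file into the OS page cache, and a later slice of the same data is then served from RAM.

    import numpy as np
    import time

    nframes, height, width = 1000, 480, 640          # hypothetical movie geometry
    data = np.memmap('movie.bin', dtype=np.uint8, mode='r',
                     shape=(nframes, height, width))

    data.sum()                     # first pass: reads from disk, fills the page cache

    t0 = time.perf_counter()
    frame = np.array(data[500])    # second pass: pages are cached, so this is ~RAM speed
    print('cached slice took %.3f ms' % ((time.perf_counter() - t0) * 1e3))
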
Sebastian Haase wrote:
> On Dec 21, 2007 12:11 AM, Martin Spacek <[EMAIL PROTECTED]> wrote:
> > By the way, I installed 64-bit linux (ubuntu 7.10) on the same machine,
> > and now numpy.memmap works like a charm. Slicing around a 15 GB file is fun!
> Thanks for the feedback!
> Did you get the kind of speed you need and/or the speed you were hoping for?

Nope. Like I wrote earlier, [...]

Sebastian Haase wrote:
> b) To my knowledge, any OS (Linux, Windows or OSX) can allocate at most
> about 1GB of data in one contiguous block, assuming you have a 32 bit machine.
> The actual numbers I measured varied from about 700MB to maybe 1.3GB.
> In other words, you would be right at the limit.
> (For 64bit, you would [...]

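One way to see where that ceiling actually sits on a given 32-bit machine is to probe for the largest single allocation that succeeds. This is only an illustrative sketch; the number it prints depends on how fragmented the process's address space already is.

    import numpy as np

    # Probe the largest contiguous uint8 array this process can get.
    # On a 32-bit build this typically fails somewhere around 700MB-1.3GB,
    # even though the total of several smaller allocations can be higher.
    size_mb = 100
    while True:
        try:
            block = np.empty(size_mb * 2**20, dtype=np.uint8)
            del block
            size_mb += 100
        except MemoryError:
            print('largest contiguous block: roughly %d MB' % (size_mb - 100))
            break
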
Andrew Straw wrote:

Hi all,

I haven't done any serious testing in the past couple of years, but for
this particular task -- drawing frames using OpenGL without ever
skipping a video update -- it is my impression that, as of a few Ubuntu
releases ago (Edgy?), Windows still beat Linux.

Just now, I have investigated on 2 [...]

Gael Varoquaux wrote:

On Tue, Dec 04, 2007 at 02:13:53PM +0900, David Cournapeau wrote:
> With recent kernels, you can get really good latency if you do it right
> (around 1-2 ms worst case under high load, including high IO pressure).

As you can see on my page, I indeed measured less than 1ms latency on
Linux under [...]

Gael Varoquaux wrote:
> Very interesting. Have you made measurements to see how many times you
> lost one of your cycles? I made this kind of measurement on Linux using
> the real-time clock with C and it was very interesting
> ( http://www.gael-varoquaux.info/computers/real-time ). I want to [...]

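Gael's numbers were measured in C against the real-time clock; a much cruder Python sketch of the same idea is to time a loop against the frame deadline and count overruns. The 5 ms deadline matches the 200Hz refresh discussed in this thread; everything else here is hypothetical.

    import time

    deadline = 1.0 / 200                     # 5 ms per frame at 200 Hz
    n, missed = 10000, 0

    t_prev = time.perf_counter()
    for _ in range(n):
        # Stand-in for "draw one frame": just wait out the deadline.
        while time.perf_counter() - t_prev < deadline:
            pass
        t_now = time.perf_counter()
        if t_now - t_prev > 1.5 * deadline:  # arrived more than half a frame late
            missed += 1
        t_prev = t_now

    print('missed %d of %d frame deadlines' % (missed, n))
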
Francesc Altet wrote:
> Perhaps something that can surely improve your timings is to first
> perform a read of your data file(s), throwing away the data as you
> read it. This serves only to load the file entirely into the OS page
> cache (if you have enough memory, which seems to be your case). [...]

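A minimal sketch of that warm-up pass, with a hypothetical file name: stream through the file once in large chunks and discard the data, so that later memmap or fromfile access is served from the page cache rather than the disk.

    # Warm the OS page cache by streaming through the file once.
    CHUNK = 64 * 2**20                       # 64 MB at a time

    with open('movie.bin', 'rb') as f:       # hypothetical file name
        while f.read(CHUNK):
            pass                             # discard; the kernel keeps the pages cached
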
On Sun, Dec 02, 2007 at 05:22:49PM -0800, Martin Spacek wrote:
> ... so I run python (with Andrew Straw's
> package VisionEgg) as a "realtime" priority process in windows on a dual
> core computer, which lets me reliably update the video frame buffer in
> time for the next refresh, without having to worry [...]

Sebastian Haase wrote:
> reading this thread I have two comments.
> a) *Displaying* at 200Hz probably makes little sense, since humans
> would only see a max. of about 30Hz (aka video frame rate).
> Consequently you would want to separate your data frame rate, which (as
> I understand it) you want to save [...]

Martin Spacek wrote:
> Would it be better to load the file one
> frame at a time, generating nframes arrays of shape (height, width),
> and sticking them consecutively in a python list?

I just tried this, and it works. Looks like it's all in physical RAM (no
disk thrashing on the 2GB machine), [...]

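A sketch of that frame-at-a-time approach, with a hypothetical file name and frame geometry: each numpy.fromfile call only has to allocate one small (height, width) array, so no single allocation needs ~1.3GB of contiguous address space.

    import numpy as np

    height, width = 480, 640                 # hypothetical frame size
    framesize = height * width               # bytes per uint8 frame

    frames = []
    with open('movie.bin', 'rb') as f:       # hypothetical file name
        while True:
            frame = np.fromfile(f, dtype=np.uint8, count=framesize)
            if frame.size < framesize:       # end of file
                break
            frames.append(frame.reshape(height, width))

    print('loaded %d frames' % len(frames))
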
Kurt Smith wrote:
> You might try numpy.memmap -- others have had success with it for
> large files (32 bit should be able to handle a 1.3 GB file, AFAIK).

Yeah, I looked into numpy.memmap. Two issues with that. I need to
eliminate as much disk access as possible while my app is running. I'm [...]

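For reference, a minimal numpy.memmap sketch for a file of this kind (name and shape hypothetical): opening the map costs almost nothing, and pages are only read from disk when a slice is actually touched, which is exactly the disk access being discussed here.

    import numpy as np

    nframes, height, width = 4000, 480, 640      # hypothetical geometry, ~1.2GB of uint8
    data = np.memmap('movie.bin', dtype=np.uint8, mode='r',
                     shape=(nframes, height, width))

    frame = np.array(data[1234])                 # copies one frame into an ordinary ndarray
    print(frame.shape, frame.dtype)
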
Ivan Vilata i Balaguer wrote:
> Well, one thing you could do is dump your data into a PyTables_
> ``CArray`` dataset, which you may afterwards access as if it were a
> NumPy array to get slices which are actually NumPy arrays. PyTables
> datasets have no problem working with datasets exceeding memory size.
> For instance [...]

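A short sketch of that suggestion, with hypothetical file name and shape, written against the current PyTables API (the 2007-era names were camelCase, e.g. openFile/createCArray): the frames are written once into a chunked on-disk CArray, and slicing it later returns plain NumPy arrays without pulling the whole dataset into memory.

    import numpy as np
    import tables

    nframes, height, width = 4000, 480, 640          # hypothetical geometry

    # Write the frames into a chunked, on-disk CArray.
    with tables.open_file('frames.h5', mode='w') as h5:
        ca = h5.create_carray(h5.root, 'frames', tables.UInt8Atom(),
                              shape=(nframes, height, width))
        for i in range(nframes):
            ca[i] = np.zeros((height, width), dtype=np.uint8)   # stand-in for a real frame

    # Reading a slice gives back an ordinary NumPy array.
    with tables.open_file('frames.h5', mode='r') as h5:
        chunk = h5.root.frames[100:110]
        print(type(chunk), chunk.shape)
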
Martin Spacek (on 2007-11-30 at 00:47:41 -0800) wrote:
> [...]
> I find that if I load the file in two pieces into two arrays, say 1GB
> and 0.3GB respectively, I can avoid the memory error. So it seems that
> it's not that windows can't allocate the memory, just that it can't
> allocate enough [...]

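That observation (the memory is there, just not as one contiguous block) can be turned into a two-piece read with numpy.fromfile; a rough sketch, with the file name and split sizes as hypothetical placeholders:

    import numpy as np

    with open('movie.bin', 'rb') as f:                            # hypothetical file name
        part1 = np.fromfile(f, dtype=np.uint8, count=1 * 2**30)   # first ~1GB
        part2 = np.fromfile(f, dtype=np.uint8, count=-1)          # the remaining ~0.3GB

    print(part1.nbytes, part2.nbytes)
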
Martin Spacek wrote:

I need to load a 1.3GB binary file entirely into a single numpy.uint8
array. I've been using numpy.fromfile(), but for files > 1.2GB on my
win32 machine, I get a memory error. Actually, since I have several
other python modules imported at the same time, including pygame, I get
a "pygame parachute" [...]