On 5/17/2011 10:20 AM Vikram K said...
I wish to read a large data file (file size is around 1.8 MB) and
manipulate the data in this file. Just reading and writing the first 500
lines of this file is causing a problem. I wrote:
fin = open('gene-GS00471-DNA_B01_1101_37-ASM.tsv')
count = 0
for i in fin.readlines():
    print i
    count += 1
    if count >= 500:
        break
and got this error msg:
Traceback (most recent call last):
  File "H:\genome_4_omics_study\GS000003696-DID\GS00471-DNA_B01_1101_37-ASM\GS00471-DNA_B01\ASM\gene-GS00471-DNA_B01_1101_37-ASM.tsv\test.py", line 3, in <module>
for i in fin.readlines():
MemoryError
-------
Is there a way to stop Python from slurping all the file contents at once?
Yes -- look at the optional parameters for open:
ActivePython 2.6.1.1 (ActiveState Software Inc.) based on
Python 2.6.1 (r261:67515, Dec 5 2008, 13:58:38) [MSC v.1500 32 bit
(Intel)] on
win32
Type "help", "copyright", "credits" or "license" for more information.
>>> help(open)
Help on built-in function open in module __builtin__:
open(...)
open(name[, mode[, buffering]]) -> file object
Open a file using the file() type, returns a file object.
This is the preferred way to open a file.
>>>
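For what it's worth, another option is to loop over the file object itself rather than calling readlines(): the file object yields one line at a time, so only a single line is ever held in memory. A minimal sketch -- the head() helper name and the sample path are illustrative, not from the original post:

```python
def head(path, limit=500):
    """Return the first `limit` lines of `path` without reading the
    whole file into memory (the file object yields one line at a time)."""
    lines = []
    with open(path) as fin:
        for count, line in enumerate(fin):
            if count >= limit:
                break
            lines.append(line.rstrip('\n'))
    return lines
```

Calling head('gene-GS00471-DNA_B01_1101_37-ASM.tsv') and printing each returned line would reproduce the original loop without ever building the full list that readlines() does.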
Emile
_______________________________________________
Tutor maillist - Tutor@python.org
To unsubscribe or change subscription options:
http://mail.python.org/mailman/listinfo/tutor