On 04/11/13 13:06, Amal Thomas wrote:

Present code:

f = open("output.txt")
content = f.read().split('\n')
f.close()

If your objective is to save time, then you should replace this with f.readlines(), which saves you reprocessing the entire file to remove the newlines (although note that readlines() keeps each line's trailing '\n', so strip it per line if you need to).
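
Something like this, say (a sketch, untested):

f = open("output.txt")
content = f.readlines()  # one pass over the file, no second split step
f.close()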

for line in content:
    <processing>
content.clear()

But if you are processing line by line, what makes you think that reading the entire file into RAM and then reprocessing it is faster than reading it line by line?

Have you tried that on another file and measured any significant improvement? There are times when reading into RAM is faster, but I'm not sure this will be one of them.
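
If you want to measure it, the timeit module is the usual tool. A rough sketch (untested, and assuming output.txt exists; the pass is just a stand-in for your real processing):

import timeit

def read_then_loop():
    # read the whole file, split on newlines, then loop
    f = open("output.txt")
    content = f.read().split('\n')
    f.close()
    for line in content:
        pass  # stand-in for the real per-line work

def loop_directly():
    # iterate over the file object line by line
    f = open("output.txt")
    for line in f:
        pass  # stand-in for the real per-line work
    f.close()

print(timeit.timeit(read_then_loop, number=10))
print(timeit.timeit(loop_directly, number=10))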

for line in f:
    <processing>

may be your best bet.
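
Or with a with block, so the file gets closed for you even if the processing raises an error (process() here is just a hypothetical name for whatever you do per line):

with open("output.txt") as f:
    for line in f:
        process(line)  # hypothetical stand-in for the per-line work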

import io

f = open("output.txt")
content = io.StringIO(f.read())  # still reads the whole file into RAM first
f.close()
for line in content:
    <processing>
content.close()

--
Alan G
Author of the Learn to Program web site
http://www.alan-g.me.uk/
http://www.flickr.com/photos/alangauldphotos
