Re: [Tutor] Equivalent exception of os.path.exists()
bibi midi dixit:
> Like I thought, there is no exception to catch if a file already exists.
> I've been browsing the many types of exceptions and can't find anything
> that's close. Thank you for clarifying.

Actually, you can. I guess it's not the book author's intent, but you can
try it for the sake of experiment. If you're working under a unix-like
system, just try to write into another user's home dir, or into any file
outside your own home. The filesystem will refuse permission, so you will
get an error from Python. In fact, in this case, you'll get an error both
for creation and for modification. I don't know the equivalent under other
OSes, but it surely exists.

Denis

la vita e estrany
http://spir.wikidot.com/
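[Editor's note: a minimal sketch of both cases, with made-up paths. Python
raises IOError/OSError with a matching errno, so each condition can be
caught precisely; os.open() with O_CREAT | O_EXCL directly answers the
original "error if the file already exists" question.]

    import os, errno

    # Case 1: writing somewhere the filesystem refuses (Denis's experiment).
    try:
        out = open('/root/forbidden.txt', 'w')   # a path we presumably cannot write to
    except IOError, e:
        if e.errno == errno.EACCES:
            print "permission denied:", e
    else:
        out.close()

    # Case 2: an explicit error when the target file already exists.
    try:
        fd = os.open('already_there.txt', os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError, e:
        if e.errno == errno.EEXIST:
            print "file already exists:", e
    else:
        os.close(fd)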
Re: [Tutor] python closures
Dave Angel dixit:
> Maybe a more complex example might show the various linkages.
>
>     glob = 42
>
>     def outer(parm1):
>         free = 12
>         free3 = 19
>         def inner(parm2, parm3=free3):
>             print "global", glob, ", free vars", parm1, free, free3, \
>                   ", locals", parm2, parm3
>         free = 49
>         free3 = 48
>         return inner
>
>     newfunc = outer(10)
>     newfunc(45)
>
> produces output:
>
>     global 42 , free vars 10 49 48 , locals 45 19
>
> So when the inner() function is actually called, glob is just a global.
> parm1, free, and free3 hold the values they ended up with when outer()
> returned, and local parm2 is passed by top-level code, while local parm3
> gets its default value assigned when "def inner(...)" was executed.
>
> Notice that the free variables free, free3, and parm1 refer to the
> function's ending state, not to the state when the function was defined.
> This has an impact when you've got inner being defined in a loop. And
> this example could be made more complex if outer() were a generator, in
> which case it may not have actually ended when inner gets called.
>
> HTH
> DaveA

Great example, thank you.

By the way, do you know the idiom:

    def makeInc(start):
        def inc():
            inc.n += 1
            print inc.n
        inc.n = start
        # 'start' may change now
        # ...
        return inc

    inc = makeInc(start=3)
    inc()

I find it much nicer than a pseudo default value, for it explicitly shows
that 'n' is, conceptually speaking, an attribute of the func (read: a
closure upvalue). Let's take advantage of the fact that Python funcs are
real objects!

Denis

la vita e estrany
http://spir.wikidot.com/
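[Editor's note: to make Dave's point about inner functions defined in a
loop concrete, here is a small sketch (my own example, not from the
thread). The closures all see the loop variable's final value unless you
freeze it with a default:]

    def make_printers():
        printers = []
        for i in range(3):
            def p():
                print i          # free variable: looked up when p() is called
            printers.append(p)
        return printers

    for p in make_printers():
        p()                      # prints 2, 2, 2 -- not 0, 1, 2

    def make_printers_fixed():
        printers = []
        for i in range(3):
            def p(i=i):          # default freezes i's value at definition time
                print i
            printers.append(p)
        return printers

    for p in make_printers_fixed():
        p()                      # prints 0, 1, 2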
Re: [Tutor] python closures
May I suggest that upvalues are analogous to parameters passed by name?
(Which is indeed not Python's paradigm.)

Denis

la vita e estrany
http://spir.wikidot.com/
Re: [Tutor] python closures
On Tue, Dec 1, 2009 at 8:42 AM, spir wrote:
> Great example, thank you.
>
> By the way, do you know the idiom:
>
>     def makeInc(start):
>         def inc():
>             inc.n += 1
>             print inc.n
>         inc.n = start
>         # 'start' may change now
>         # ...
>         return inc
>
>     inc = makeInc(start=3)
>     inc()
>
> I find it much nicer than a pseudo default value, for it explicitly shows
> that 'n' is, conceptually speaking, an attribute of the func (read: a
> closure upvalue). Let's take advantage of the fact that Python funcs are
> real objects!

Well, if you need an attribute maintained between calls like that, I think
a generator is much nicer to write:

    def inc(start):
        while True:
            yield start
            start += 1

    >>> i = inc(3)
    >>> i.next()
    3
    >>> i.next()
    4

There might be a use-case where function attributes fit better; I can't
think of one right now, though.

Hugo
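[Editor's note: worth adding (my addition, not from the thread) that for a
plain +1 counter like this, the standard library already provides
itertools.count, which behaves just like Hugo's generator:]

    import itertools

    i = itertools.count(3)   # yields 3, 4, 5, ... (Python 2.6's count
                             # takes only a start value; step is always 1)
    print i.next()           # 3
    print i.next()           # 4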
Re: [Tutor] [Errno 9] Bad file descriptor
Khalid Al-Ghamdi wrote:
> Hi everybody,
>
> I'm running Python 2.6.1 on Vista and I'm trying to use the csv module
> to write to a csv file and get the average of some numbers, but I keep
> getting the following error:
>
>     Traceback (most recent call last):
>       File "C:\Python31\MyCSVProjectFinal.py", line 83, in
>         writer.writerow(headings)
>     IOError: [Errno 9] Bad file descriptor
>
> Line 83 refers to the following code, specifically to the line in
> capitals (in the actual code it's not in capitals, by the way):
>
>     headings = linesInCSV[0]  # e.g. ['Measured1', 'Measured2']
>     csvOutFileName = easygui.filesavebox(title="Choose output file for averages")
>     if csvOutFileName is not None:
>         print "Saving using: " + csvOutFileName
>         csvOut = file(csvOutFileName, 'rb')

If this is an output file, why would you use 'rb' as the file mode?
Don't you mean 'wb'?

>         writer = csv.writer(csvOut)
>         *WRITER.WRITEROW(HEADINGS)*
>         for index in range(len(measured1)):
>             writer.writerow([measured1[index], measured2[index]])
>         writer.writerow([averaged1, averaged2])
>     else:
>         print "No filename for saving"
>
> So, my problem is I don't know why it keeps giving me this error. I've
> checked on the internet, but I haven't found anything to help resolve it.
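[Editor's note: a self-contained sketch of the corrected save block, with
made-up data in place of the poster's variables. Changing 'rb' to 'wb' is
the whole repair; writing to a file opened for reading is what apparently
produces the Errno 9 on Windows:]

    import csv

    headings = ['Measured1', 'Measured2']
    measured1 = [1.0, 2.0, 3.0]
    measured2 = [4.0, 5.0, 6.0]
    averaged1 = sum(measured1) / len(measured1)
    averaged2 = sum(measured2) / len(measured2)

    csvOut = open('averages.csv', 'wb')      # 'wb': write mode, binary for csv
    writer = csv.writer(csvOut)
    writer.writerow(headings)
    for m1, m2 in zip(measured1, measured2):
        writer.writerow([m1, m2])
    writer.writerow([averaged1, averaged2])
    csvOut.close()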
Re: [Tutor] numerical simulation + SQLite
Thanks everyone for your responses!

Alan Gauld wrote:
> You may need to be realistic in your expectations. A database is writing
> to disk, which will be slower than working in memory. And a 3GB file
> takes a while to read/traverse, even with indexes. It depends a lot on
> exactly what you are doing. If it's mainly writing, it should not be
> much slower than writing to a flat file. If you are doing a lot of
> reading - and you have used indexes - then it should be a lot faster
> than a file.
>
> But RAM - if you have enough - will always be fastest, by about 100
> times. The problem is when you run out: you revert to using files, and
> that's usually slower than a database...
>
> But without details of your usage pattern and database schema and SQL
> code etc. it is, as you say, impossible to be specific.

I'm running a stochastic simulation of Brownian motion of a number of
particles, for which I'll present a simplified version here. At each time
step, I determine if some particles have left the system, determine the
next position of the remaining particles, and then introduce new particles
into the system at defined starting points.

I have two tables in my SQLite database: one for information on each
particle and one for all the x, y, z locations of each particle.

    sqlite> .schema Particles
    CREATE TABLE Particles (part_id INTEGER PRIMARY KEY, origin INTEGER,
        endpoint INTEGER, status TEXT, starttime REAL,
        x REAL, y REAL, z REAL);
    sqlite> .schema Locations
    CREATE TABLE Locations (id INTEGER PRIMARY KEY AUTOINCREMENT,
        timepoint REAL, part_id INTEGER, x REAL, y REAL, z REAL);

For particles that have left the system, I create a list of part_id values
whose status I'd like to update in the database, and issue a command
within my script (where db = sqlite3.connect('results.db')):

    db.executemany("UPDATE Particles SET status='left' WHERE part_id=?",
                   part_id)
    db.commit()

To update the positions, something like:

    db.executemany("UPDATE Particles SET x=?,y=?,z=? WHERE part_id=?",
                   Particle_entries)
    db.executemany("INSERT INTO Locations (timepoint,part_id,x,y,z) "
                   "VALUES (?,?,?,?,?)", Location_entries)
    db.commit()

That's about it. Just for many particles (i.e. 1e4 to 1e5). I'm
considering whether I need every location entry or if I could get away
with every 10th location entry, for example.

Eike Welk wrote:
> Just in case you don't know it, maybe PyTables is the right solution
> for you. It is a disk storage library specially for scientific
> applications:
> http://www.pytables.org/moin

Wow, that looks pretty good. I work with a lot of numpy arrays in this
simulation, so I'll definitely look into that.

bob gailer wrote:
> What do you do with the results after the simulation run?
>
> How precise do the numbers have to be?

I'm interested in the particles that have left the system (I actually have
a few ways they can leave) and I'm also interested in the ensemble average
of the trajectories. As far as precision is concerned, I'm working on the
scale of µm and each movement is on the order of 0.1 to 10 µm.

Faisal
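[Editor's note: one knob worth knowing about here (my suggestion, not from
the thread): sqlite3 commits are expensive, so batching a whole timestep's
writes into a single transaction, and optionally relaxing synchronous
writes, can speed this pattern up considerably. A runnable sketch with
made-up data, using the schema from the post:]

    import sqlite3

    db = sqlite3.connect(':memory:')   # use 'results.db' in the real run
    db.execute("""CREATE TABLE Particles (part_id INTEGER PRIMARY KEY,
        origin INTEGER, endpoint INTEGER, status TEXT, starttime REAL,
        x REAL, y REAL, z REAL)""")
    db.execute("""CREATE TABLE Locations (id INTEGER PRIMARY KEY AUTOINCREMENT,
        timepoint REAL, part_id INTEGER, x REAL, y REAL, z REAL)""")
    db.execute("PRAGMA synchronous = OFF")   # faster, but less crash-safe

    db.execute("INSERT INTO Particles (part_id, status) VALUES (1, 'active')")

    # One timestep's worth of (made-up) changes, written as one batch and
    # committed once:
    left = [(1,)]
    moves = [(1.0, 2.0, 3.0, 1)]             # (x, y, z, part_id)
    locations = [(0.5, 1, 1.0, 2.0, 3.0)]    # (timepoint, part_id, x, y, z)

    db.executemany("UPDATE Particles SET status='left' WHERE part_id=?", left)
    db.executemany("UPDATE Particles SET x=?, y=?, z=? WHERE part_id=?", moves)
    db.executemany("INSERT INTO Locations (timepoint, part_id, x, y, z) "
                   "VALUES (?,?,?,?,?)", locations)
    db.commit()   # one commit per timestep, not one per statement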
Re: [Tutor] read in ascii and plot
Excellent, thank you!!

On Mon, Nov 30, 2009 at 7:03 PM, Kent Johnson wrote:
> On Mon, Nov 30, 2009 at 8:26 PM, Wayne Werner wrote:
>
>> A sample of the data is always helpful, but I'll take a shot in the
>> dark. If you have data like this:
>>
>>     2.31 72
>>     9823
>>     ...
>>     347.32
>>
>> And those are x y pairs, you could do something like this:
>>
>>     f = open('input.txt')
>>     # List comprehension to read all the lines as
>>     # [[x1, y1], [x2, y2], ... [xn, yn]]
>>     data = [line.split() for line in f]
>
> You have to convert the text strings to float somewhere, for example
>
>     data = [map(float, line.split()) for line in f]
>
>>     # Reorient values as [(x1, x2, ... xn), (y1, y2, ... yn)]
>>     data = zip(*data)
>>     # plot the xy vals
>>     pylab.scatter(data[0], data[1])
>
> Or, IMO a little clearer,
>
>     x, y = zip(*data)
>     pylab.scatter(x, y)
>
> Kent
Re: [Tutor] read in ascii and plot
I would now like to add a line of best fit. I think the command is
polyfit()?? But I can't seem to get it to work:

    f = open('e:/testscatter.txt')
    data = [map(float, line.split()) for line in f]
    x, y = zip(*data)
    pylab.polyfit(x, y, 1)
    pylab.scatter(x, y)
    pylab.show()

Any feedback will be greatly appreciated.
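[Editor's note: a sketch of the missing step (my addition; the thread
shows no reply). polyfit() only returns the fit coefficients, so you have
to capture them and plot the fitted line yourself. With made-up data:]

    import numpy
    import pylab

    x = numpy.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = numpy.array([1.1, 1.9, 3.2, 3.9, 5.1])

    m, b = pylab.polyfit(x, y, 1)    # degree-1 fit: slope and intercept
    pylab.scatter(x, y)
    pylab.plot(x, m * x + b, 'r-')   # overlay the best-fit line
    pylab.show()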
[Tutor] Monitoring a logfile
Varnish has a dedicated (but not always reliable) logger service. I'd like
to monitor the logs - specifically, I want to check that a known entry
appears in there every minute (it should be there about 10 times a
minute).

What's going to be the best way to carry out this kind of check? I had a
look at SEC, but it looks horrifically complicated. Could someone point me
in the right direction?

I think I basically want to be able to check the logfile every minute and
verify that an entry has appeared since the last time I checked. I just
can't see the right way to get started.

S.
Re: [Tutor] Monitoring a logfile
> Varnish has a dedicated (but not always reliable) logger service. I'd
> like to monitor the logs - specifically, I want to check that a known
> entry appears in there every minute (it should be there about 10 times
> a minute).
>
> What's going to be the best way to carry out this kind of check? I had
> a look at SEC, but it looks horrifically complicated.

Have you ever used the seek(), tell() and readline() methods of a file
object? You could probably hack something together pretty quickly with
those.

Alan
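[Editor's note: a minimal sketch of the approach Alan describes (my own
illustration: the filename and pattern are made up, and log rotation is
ignored). It remembers the offset with tell(), seeks back to it a minute
later, and counts the matching lines that arrived in between:]

    import time

    LOGFILE = '/var/log/varnish.log'    # hypothetical path
    PATTERN = 'known entry'             # hypothetical string to look for

    def count_new_matches(path, offset, pattern):
        """Read from `offset` to EOF; return (match_count, new_offset)."""
        f = open(path)
        f.seek(offset)
        hits = 0
        for line in f:
            if pattern in line:
                hits += 1
        offset = f.tell()   # we are at EOF, so this is the current file size
        f.close()
        return hits, offset

    offset = 0
    while True:
        hits, offset = count_new_matches(LOGFILE, offset, PATTERN)
        if hits == 0:
            print "WARNING: no matching log entries in the last minute"
        time.sleep(60)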