Re: pickle.dump (obj, conn)
Pedro Izecksohn writes:
> Shouldn't pickle.dump (obj, conn) raise an Exception if conn is a TCP
> connection that was closed by the remote host?

It is quite difficult to detect the closing of a TCP channel in a
reliable way. At least, you should not trust that you are reliably
informed about it. The behavior is related to output to TCP
connections, not directly to "pickle".
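A minimal sketch of why this happens (the host, port, and payload are
invented for illustration): after the peer closes, the first write
usually "succeeds" because the bytes only reach the local socket
buffer; the failure surfaces on a later write, typically as
BrokenPipeError or ConnectionResetError.

import pickle
import socket

obj = {"answer": 42}
sock = socket.create_connection(("example.com", 9999))  # assumed peer
conn = sock.makefile("wb")
try:
    pickle.dump(obj, conn)  # may not raise even after the remote close
    conn.flush()
    pickle.dump(obj, conn)  # a second write is what usually raises
    conn.flush()
except (BrokenPipeError, ConnectionResetError) as exc:
    print("write failed:", exc)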
Re: script uses up all memory
Larry Martell writes:
> I figured out what is causing this. Each pass through the loop it does:
>
> self.tools = Tool.objects.filter(ip__isnull=False)
>
> And that is what is causing the memory consumption. If I move that
> outside the loop and just do that once the memory issue goes away. Now
> I need to figure out why this is happening and how to prevent it as
> they do want to query the db each pass through the loop in case it has
> been updated.

Django saves a copy of every executed SQL query if it is in debug mode
(if the DEBUG setting is true). See
https://docs.djangoproject.com/en/dev/faq/models/#why-is-django-leaking-memory

Regards,
/ Kent Engström, Lysator
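A hedged sketch of the workaround that FAQ entry implies for code which
must keep querying inside the loop: either run with DEBUG=False, or
clear the stored query log each pass with django.db.reset_queries().
Tool is the poster's model and process() is a hypothetical stand-in for
the rest of the loop body.

from django import db

while True:
    tools = Tool.objects.filter(ip__isnull=False)  # fresh query each pass
    process(tools)        # hypothetical per-pass work
    db.reset_queries()    # drop the per-connection SQL log kept under DEBUG=True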
imapclient Gmail search() times
I'm working on a small app to help sort/organize my mail via Gmail's
IMAP server, and I'm using imapclient (BTW it's a _huge_ improvement
over imaplib).
The odd thing I'm seeing is that when searching a large "folder" (All
Mail), I get wildly different times depending on what header I search.
allmail = IMAPClient(HOST, use_uid=True, ssl=True, port=993)
allmail.login(USERNAME, PASSWORD)
allmail.select_folder('[Gmail]/All Mail')
Searching on "Message-Id:" is fast (sub-second):
irt = allmail.search('HEADER Message-ID %s' % msgid)
Searching on "In-Reply-To:" takes 10-15 seconds:
rt = allmail.search('HEADER In-Reply-To %s' % msgid)
[IIRC, I've got about 22000 messages in the 'All Mail' "folder".]
I'm assuming this is just due to the way that Google implemented their
IMAP server code, but I thought I'd ask if anybody else had noticed
this. Perhaps I'm doing something stupid, but I can't imagine what it
would be.
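A small timing sketch (reusing the allmail connection and msgid from
the snippet above) to put numbers on the difference; the header name is
the only thing that varies between the two searches:

import time

for header in ('Message-ID', 'In-Reply-To'):
    start = time.monotonic()
    allmail.search('HEADER %s %s' % (header, msgid))
    print('%-12s %.2f seconds' % (header, time.monotonic() - start))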
--
Grant Edwards               grant.b.edwards        Yow! I have a TINY BOWL in
                                  at                    my HEAD
                              gmail.com
PEP/GSoC idea: built-in parser generator module for Python?
First of all, hi everyone, I'm new to this list. I'm a grad student
who's worked on and off with Python on various projects for 8ish years
now. I recently wanted to construct a parser for another programming
language in Python and was disappointed that Python doesn't have a
built-in module for building parsers, which seems like a common-enough
task. There are plenty of different 3rd-party parsing libraries
available, specialized in lots of different ways (see e.g., [1]). I
happened to pick one that seemed suitable for my needs but didn't turn
out to support the recursive structures that I needed to parse. Rather
than pick a different one I just built my own parser generator module,
and used that to build my parser: problem solved.

It would have been much nicer if there were a fully-featured built-in
parser generator module in Python, however, and the purpose of this
email is to test the waters a bit: is this something that other people
in the Python community would be interested in?

I imagine the route to providing a built-in parser generator module
would be to first canvass the community to figure out what third-party
libraries they use, and then contact the developers of some of the top
libraries to see if they'd be happy integrating as a built-in module.
At that point someone would need to work to integrate the chosen
third-party library as a built-in module (ideally with its developers).

From what I've looked at, PyParsing and PLY seem to be standout parser
generators for Python; PyParsing has a bit more Pythonic syntax from
what I've seen. One important issue would be speed, though: an
implementation mostly written in C for low-level parsing tasks would
probably be much preferable to one written in pure Python, since a
built-in module should be geared towards efficiency, but I don't
actually know exactly how that would work (I've both extended and
embedded Python with/in C before, but I'm not sure how that kind of
project relates to writing a built-in module in C).

Sorry if this is a bit rambly, but I'm interested in feedback from the
community on this idea: is a built-in parser generator module
desirable? If so, would integrating PyParsing as a built-in module be
a good solution? What 3rd-party parsing module do you think would
serve best for this purpose?

-Peter Mawhorter

[1] http://nedbatchelder.com/text/python-parsers.html
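For context, a minimal pyparsing sketch (third-party; the grammar here
is invented purely for illustration) of the "Pythonic syntax" point: a
grammar is composed from ordinary Python objects and operators rather
than written in a separate specification file.

from pyparsing import Word, Suppress, alphas, delimitedList, nums

ident = Word(alphas + '_')    # e.g. a function name
number = Word(nums)           # an integer literal
call = ident + Suppress('(') + delimitedList(number) + Suppress(')')

print(call.parseString('f(1, 22, 333)').asList())
# -> ['f', '1', '22', '333']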
Re: PEP/GSoC idea: built-in parser generator module for Python?
On 3/14/2014 2:51 PM, Peter Mawhorter wrote:
> First of all, hi everyone, I'm new to this list.

Welcome.

> I'm a grad student who's worked on and off with Python on various
> projects for 8ish years now. I recently wanted to construct a parser
> for another programming language in Python and was disappointed that
> Python doesn't have a built-in module for building parsers, which
> seems like a common-enough task. There are plenty of different
> 3rd-party parsing libraries available, specialized in lots of
> different ways (see e.g., [1]). I happened to pick one that seemed
> suitable for my needs but didn't turn out to support the recursive
> structures that I needed to parse. Rather than pick a different one I
> just built my own parser generator module, and used that to build my
> parser: problem solved.
>
> It would have been much nicer if there were a fully-featured built-in
> parser generator module in Python, however, and the purpose of this
> email is to test the waters a bit: is this something that other
> people in the Python community would be interested in?
>
> I imagine the route to providing a built-in parser generator module
> would be to first canvass the community to figure out what
> third-party libraries they use, and then contact the developers of
> some of the top libraries to see if they'd be happy integrating as a
> built-in module. At that point someone would need to work to
> integrate the chosen third-party library as a built-in module
> (ideally with its developers).

I think the idea has been raised before, but I am not sure which list
(this one, pydev, or python-ideas). My first reaction, as a core
developer, is that the stdlib is, if anything, too large. It is
already not as well-maintained as we would like. My second is that
parser generation is an application, not a library. A parser generator
is used by running it with an input specification, not by importing it
and using specific functions and classes.

> From what I've looked at, PyParsing and PLY seem to be standout
> parser generators for Python; PyParsing has a bit more Pythonic
> syntax from what I've seen. One important issue would be speed,
> though: an implementation mostly written in C for low-level parsing
> tasks would probably be much preferable to one written in pure
> Python, since a built-in module should be geared towards efficiency,
> but I don't actually know exactly how that would work (I've both
> extended and embedded Python with/in C before, but I'm not sure how
> that kind of project relates to writing a built-in module in C).

Something written in Python can be run with any implementation of
Python. Something written in C tends to be tied to CPython.

> Sorry if this is a bit rambly, but I'm interested in feedback from
> the community on this idea: is a built-in parser generator module
> desirable? If so, would integrating PyParsing as a built-in module be
> a good solution? What 3rd-party parsing module do you think would
> serve best for this purpose?
>
> [1] http://nedbatchelder.com/text/python-parsers.html

Perhaps something like this should be in the wiki, if not already.

--
Terry Jan Reedy
Re: Balanced trees
On 8 March 2014 20:37, Mark Lawrence wrote:
> I've found this link useful http://kmike.ru/python-data-structures/
>
> I also don't want all sorts of data structures added to the Python
> library. I believe that there are advantages to leaving specialist
> data structures on pypi or other sites, plus it means Python in a
> Nutshell can still fit in your pocket and not a 40 ton articulated
> lorry, unlike the Java equivalent.

The thing we really need is for the blist containers to become stdlib
(but not to replace the current list implementation).

The rejected PEP (http://legacy.python.org/dev/peps/pep-3128/) misses
a few important points, largely in how the "log(n)" has a really large
base: random.choice went from 1.2µs to 1.6µs from n=1 to n=10⁸, vs
1.2µs for a standard list.

Further, it's worth considering a few advantages:

* copy is O(1), allowing code to avoid mutation by just copying its
  input, which is good practice.

* FIFO is effectively O(1), as the time just about doubles from n=1 to
  n=10⁸ so will never actually branch that much. There is still a
  speed benefit of collections.deque, but it's much, much less
  significant. This is very useful when considering usage as a
  multi-purpose data structure, and removes demand for explicit linked
  lists (which have foolishly been reimplemented loads of times).

* It reduces demand for trees:

  * There are efficient implementations of sortedlist, sortedset and
    sorteddict.

  * Slicing, slice assignment and slice deletion are really fast.

  * Addition of lists is sublinear. Instead of
    "list(itertools.chain(...))", one can add in a loop and end up
    *faster*.

I think blist isn't very popular not because it isn't really good, but
because it isn't a specialised structure. It is, however, almost there
for almost every circumstance. This can help keep the standard library
clean, especially of tree data structures. Here's what we kill:

* Linked lists and doubly-linked lists, which are scarily popular for
  whatever reason. Sometimes people claim that collections.deque isn't
  powerful enough for whatever they want, and blist will almost
  definitely sate those cases.

* Balanced trees, with blist.sortedlist. This is actually needed right
  now.

* Poor performance in the cases where a lot of list merging and
  pruning happens.

* Most uses of bisect.

* Some instances where two data structures are used in parallel in
  order to keep performance fast on disparate operations (like
  `x in y` and `y[i]`).

Now, I understand there are downsides to blist. Particularly, I've
looked through the "benchmarks" and they seem untruthful. Further,
we'd need a maintainer. Finally, nobody jumps at blists because
they're rarely the obvious solution. Rather, they attempt to be a
different general solution. Hopefully, though, a stdlib inclusion
could make them a lot more obvious, and support in some current
libraries could make them feel more at home.

I don't know whether this is a good idea, but I do feel that it is
more promising and general than having a graph in the standard
library.
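To make the bisect point concrete, a hedged sketch (blist is
third-party: pip install blist; the data is invented) of
blist.sortedlist standing in for manual bisect.insort bookkeeping over
a plain list:

from blist import sortedlist

events = sortedlist([5, 1, 3])   # kept sorted: [1, 3, 5]
events.add(2)                    # O(log n) insert -> [1, 2, 3, 5]
print(events[0], events[-1])     # cheap min and max
print(4 in events)               # binary-search membership -> False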
