[Python-Dev] Windows compiler for Python 2.6+
I just got bitten by the runtime library incompatibility problem on Windows when I tried to load a C extension compiled with MSVC 2005 (64-bit) into Python 2.5. I realize that Python 2.5 will continue to use MSVC 2003 for compatibility reasons, but I was curious whether any thought has been given to the future of the 2.x series.

Cheers,
Chris
Re: [Python-Dev] Windows compiler for Python 2.6+
On 2/28/07, Josiah Carlson <[EMAIL PROTECTED]> wrote:
> "Chris AtLee" <[EMAIL PROTECTED]> wrote:
> > I just got bitten by the runtime library incompatibility problem on
> > Windows when I tried to load a C extension compiled with MSVC 2005
> > (64-bit) into Python 2.5.
>
> I would guess it is more an issue of 32bit + 64bit dynamic linking
> having issues, but I could certainly be wrong.

I don't think so; this was the 64-bit version of Python 2.5. When I recompiled with the 2003 compiler it worked fine.

> > I realize that Python 2.5 will continue to use MSVC 2003 for
> > compatibility reasons, but I was curious if any thought had been given
> > to the future of the 2.x series.
>
> IIUC, there exists a project file in PCBUILD8 for compiling with MSVC
> 2005. You should be able to recompile Python 2.5 with that compiler,
> though you may need to change some things (I've never tried myself).

That is something of a last resort for me. I'd like my code to work with all the other Python extensions out there, which is why I switched to the 2003 compiler for now.

Cheers,
Chris
[Python-Dev] urllib, multipart/form-data encoding and file uploads
Hello,

I notice that there is some work being done on urllib / urllib2 for Python 2.6/3.0. One thing I've always missed in urllib/urllib2 is the facility to encode POST data as multipart/form-data. I think it would also be useful to be able to stream a POST request to the remote server rather than requiring the user to create the entire POST body in memory before starting the request. This would be extremely useful when writing any kind of code that does file uploads.

I didn't see any recent discussion about this, so I thought I'd ask here: do you think this would make a good addition to the new urllib package?

Cheers,
Chris
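For context, this is roughly the kind of hand-rolled, build-it-all-in-memory encoder that callers have to write today without such a facility. It is an illustrative sketch only (the helper name and boundary string are made up, and file values and boundary-collision checks are omitted); the point is that the whole body ends up in one string before the request can start.

    def encode_multipart(fields, boundary="illustrativeboundary1234"):
        """Build a complete multipart/form-data body in memory.

        ``fields`` maps field names to string values.
        """
        lines = []
        for name, value in fields.items():
            lines.append("--" + boundary)
            lines.append('Content-Disposition: form-data; name="%s"' % name)
            lines.append("")
            lines.append(value)
        lines.append("--" + boundary + "--")
        lines.append("")
        body = "\r\n".join(lines)
        content_type = "multipart/form-data; boundary=" + boundary
        return body, content_type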
Re: [Python-Dev] urllib, multipart/form-data encoding and file uploads
On Fri, Jun 27, 2008 at 11:40 AM, Bill Janssen <[EMAIL PROTECTED]> wrote:
>> I notice that there is some work being done on urllib / urllib2 for
>> python 2.6/3.0. One thing I've always missed in urllib/urllib2 is the
>> facility to encode POST data as multipart/form-data. I think it would
>> also be useful to be able to stream a POST request to the remote
>> server rather than requiring the user to create the entire POST
>> body in memory before starting the request. This would be extremely
>> useful when writing any kind of code that does file uploads.
>>
>> I didn't see any recent discussion about this so I thought I'd ask
>> here: do you think this would make a good addition to the new urllib
>> package?
>
> I think it would be very helpful. I'd separate the two things,
> though; you want to be able to format a set of values as
> "multipart/form-data", and do various things with that resulting
> "document", and you want to be able to stream a POST (or PUT) request.

How about if the function that encoded the values as "multipart/form-data" were able to stream data to a POST (or PUT) request via an iterator that yielded chunks of data?

    def multipart_encode(params, boundary=None):
        """Encode ``params`` as multipart/form-data.

        ``params`` should be a dictionary where the keys represent
        parameter names, and the values are either parameter values, or
        file-like objects to use as the parameter value.  The file-like
        object must support the .read(), .seek(), and .tell() methods.

        If ``boundary`` is set, then it is used as the MIME boundary.
        Otherwise a randomly generated boundary will be used.  In either
        case, if the boundary string appears in the parameter values a
        ValueError will be raised.

        Returns an iterable object that will yield blocks of data
        representing the encoded parameters."""

The file objects need to support .seek() and .tell() so we can determine how large they are before including them in the output. I've been trying to come up with a good way to specify the size separately so you could use unseekable objects, but no good ideas have come to mind. Maybe it could look for a 'size' attribute or callable on the object? That seems a bit hacky...

A couple of helper functions would be necessary as well: one to generate random boundary strings that are guaranteed not to collide with file data, and another to calculate the total size of the encoding, for use in the 'Content-Length' header of the main HTTP request.

Then we'd need to change either urllib or httplib to support iterable objects in addition to the regular strings it currently uses.

Cheers,
Chris
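A rough sketch of the two helpers described above — a random boundary generator and a Content-Length calculator for a mix of string and seekable file-like values. This is my illustration, not code from the poster library: the function names and the per-part header layout are assumptions and would have to match whatever multipart_encode actually emits.

    import os
    import uuid

    def gen_boundary():
        # A random, practically collision-free MIME boundary.
        return uuid.uuid4().hex

    def encoded_size(params, boundary):
        """Total byte size of the multipart/form-data encoding of ``params``."""
        total = 0
        for name, value in params.items():
            header = ('--%s\r\nContent-Disposition: form-data; name="%s"\r\n\r\n'
                      % (boundary, name))
            total += len(header)
            if hasattr(value, 'seek'):
                value.seek(0, os.SEEK_END)   # seek to the end to learn the size
                total += value.tell()
                value.seek(0)
            else:
                total += len(value)
            total += 2                       # trailing CRLF after each part
        total += len('--%s--\r\n' % boundary)
        return total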
Re: [Python-Dev] urllib, multipart/form-data encoding and file uploads
On Sat, Jun 28, 2008 at 4:14 AM, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
>> I didn't see any recent discussion about this so I thought I'd ask
>> here: do you think this would make a good addition to the new urllib
>> package?
>
> Just in case that isn't clear: any such change must be delayed for
> 2.7/3.1. That is not to say that you couldn't start implementing it
> now, of course.

I like a challenge :)

As discussed previously, there are two parts to this: handling streaming HTTP requests, and multipart/form-data encoding.

I notice that support for file objects has already been added to 2.6's httplib. The changes required to support iterable objects are very minimal:

Index: Lib/httplib.py
===================================================================
--- Lib/httplib.py      (revision 64600)
+++ Lib/httplib.py      (working copy)
@@ -688,7 +688,12 @@
         self.__state = _CS_IDLE
 
     def send(self, str):
-        """Send `str' to the server."""
+        """Send `str` to the server.
+
+        ``str`` can be a string object, a file-like object that supports
+        a .read() method, or an iterable object that supports a .next()
+        method.
+        """
         if self.sock is None:
             if self.auto_open:
                 self.connect()
@@ -710,6 +715,10 @@
                 while data:
                     self.sock.sendall(data)
                     data=str.read(blocksize)
+            elif hasattr(str, 'next'):
+                if self.debuglevel > 0: print "sendIng an iterable"
+                for data in str:
+                    self.sock.sendall(data)
             else:
                 self.sock.sendall(str)
         except socket.error, v:

(Small aside: should the parameter called 'str' be renamed to something else to avoid conflicts with the 'str' builtin?)

All regression tests continue to pass with this change applied.

If this change is not applied, then we have to jump through a couple of hoops to support iterable HTTP request bodies:
- Provide our own httplib.HTTP(S)Connection classes that override send() to do exactly what the patch above accomplishes
- Provide our own urllib2.HTTP(S)Handler classes that will use the new HTTP(S)Connection classes
- Register the new HTTP(S)Handler classes with urllib2 so they take priority over the standard handlers

I've created the necessary sub-classes, as well as several classes and functions to do multipart/form-data encoding of strings and files. My current work is available online here: http://atlee.ca/software/poster (tarball here: http://atlee.ca/software/poster/dist/0.1/poster-0.1dev.tar.gz)

Cheers,
Chris
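To make the workaround described in the list above concrete, here is a minimal sketch of an HTTPConnection subclass whose send() accepts iterables, plus a matching urllib2 handler. This is my illustration, not code taken from the poster package: the class names are made up, only the plain-HTTP case is shown, and handler precedence is left to build_opener's default behavior.

    import httplib
    import urllib2

    class IterableHTTPConnection(httplib.HTTPConnection):
        """HTTPConnection whose send() also accepts iterables of strings."""

        def send(self, data):
            if hasattr(data, 'next'):
                # Stream each chunk the iterator produces.
                for chunk in data:
                    httplib.HTTPConnection.send(self, chunk)
            else:
                # Strings and file-like objects are handled by the base class.
                httplib.HTTPConnection.send(self, data)

    class IterableHTTPHandler(urllib2.HTTPHandler):
        """urllib2 handler that opens requests with IterableHTTPConnection."""

        def http_open(self, req):
            return self.do_open(IterableHTTPConnection, req)

    # Install the custom handler in place of the standard HTTPHandler.
    opener = urllib2.build_opener(IterableHTTPHandler)
    urllib2.install_opener(opener)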
Re: [Python-Dev] urllib, multipart/form-data encoding and file uploads
On Fri, Jun 27, 2008 at 9:06 PM, Nick Coghlan <[EMAIL PROTECTED]> wrote:
> Chris,
>
> To avoid losing these ideas, could you add them to the issue tracker as
> feature requests? It's too late to get them into 2.6/3.0 but they may make
> good additions for the next release cycle.
>
> Cheers,
> Nick.

Issues #3243 and #3244 created.

Cheers,
Chris
[Python-Dev] Copying zlib compression objects
I'm writing a program in Python that creates tar files of a certain maximum size (to fit onto CD/DVD). One of the problems I'm running into is that when using compression, it's pretty much impossible to determine whether a file, once added to an archive, will cause the archive size to exceed the maximum.

I believe that to do this properly, you need to copy the state of the tar file (basically the current file offset as well as the state of the compression object), then add the file. If the new size of the archive exceeds the maximum, you need to restore the original state. The critical part is being able to copy the compression object.

Without compression it is trivial to determine if a given file will "fit" inside the archive. When using compression, the compression ratio of a file depends partially on all the data that has been compressed prior to it.

The current implementation in the standard library does not allow you to copy these compression objects in a useful way, so I've made some minor modifications (patch attached) to the standard 2.4.2 library:

- Add a copy() method to the zlib compression object. This returns a new compression object with the same internal state. I named it copy() to keep it consistent with things like sha.copy().
- Add snapshot() / restore() methods to GzipFile and TarFile. These work only in write mode. snapshot() returns a state object. Passing this state object to restore() will restore the GzipFile / TarFile to the state represented by that object.

Future work:
- Decompression objects could use a copy() method too
- Add support for copying bzip2 compression objects

Although this patch isn't complete, does this seem like a good approach?

Cheers,
Chris

diff -ur Python-2.4.2.orig/Lib/gzip.py Python-2.4.2/Lib/gzip.py
--- Python-2.4.2.orig/Lib/gzip.py       2005-06-09 10:22:07.0 -0400
+++ Python-2.4.2/Lib/gzip.py    2006-02-14 13:12:29.0 -0500
@@ -433,6 +433,17 @@
         else:
             raise StopIteration
 
+    def snapshot(self):
+        if self.mode == READ:
+            raise IOError("Can't create a snapshot in READ mode")
+        return (self.size, self.crc, self.fileobj.tell(), self.offset, self.compress.copy())
+
+    def restore(self, s):
+        if self.mode == READ:
+            raise IOError("Can't restore a snapshot in READ mode")
+        self.size, self.crc, offset, self.offset, self.compress = s
+        self.fileobj.seek(offset)
+        self.fileobj.truncate()
 
 def _test():
     # Act like gzip; with -d, act like gunzip.
diff -ur Python-2.4.2.orig/Lib/tarfile.py Python-2.4.2/Lib/tarfile.py
--- Python-2.4.2.orig/Lib/tarfile.py    2005-08-27 06:08:21.0 -0400
+++ Python-2.4.2/Lib/tarfile.py 2006-02-15 12:05:48.0 -0500
@@ -1825,6 +1825,28 @@
         """
         if level <= self.debug:
             print >> sys.stderr, msg
+
+    def snapshot(self):
+        """Save the current state of the tarfile
+        """
+        self._check("_aw")
+        if hasattr(self.fileobj, "snapshot"):
+            return self.fileobj.snapshot(), self.offset, self.members[:], self.inodes.copy()
+        else:
+            return self.fileobj.tell(), self.offset, self.members[:], self.inodes.copy()
+
+    def restore(self, s):
+        """Restore the state of the tarfile from a previous snapshot
+        """
+        self._check("_aw")
+        if hasattr(self.fileobj, "restore"):
+            snapshot, self.offset, self.members, self.inodes = s
+            self.fileobj.restore(snapshot)
+        else:
+            offset, self.offset, self.members, self.inodes = s
+            self.fileobj.seek(offset)
+            self.fileobj.truncate()
+
 # class TarFile
 
 class TarIter:

diff -ur Python-2.4.2.orig/Modules/zlibmodule.c Python-2.4.2/Modules/zlibmodule.c
--- Python-2.4.2.orig/Modules/zlibmodule.c      2004-12-28 15:12:31.0 -0500
+++ Python-2.4.2/Modules/zlibmodule.c   2006-02-14 14:05:35.0 -0500
@@ -653,6 +653,36 @@
     return RetVal;
 }
 
+PyDoc_STRVAR(comp_copy__doc__,
+"copy() -- Return a copy of the compression object.");
+
+static PyObject *
+PyZlib_copy(compobject *self, PyObject *args)
+{
+    compobject *retval;
+
+    retval = newcompobject(&Comptype);
+
+    /* Copy the zstream state */
+    /* TODO: Are the ENTER / LEAVE needed? */
+    ENTER_ZLIB
+    deflateCopy(&retval->zst, &self->zst);
+    LEAVE_ZLIB
+
+    /* Make references to the original unused_data and unconsumed_tail.
+     * They're not used by compression objects so we don't have to do
+     * anything special here */
+    retval->unused_data = self->unused_data;
+    retval->unconsumed_tail = self->unconsumed_tail;
+    Py_INCREF(retval->unused_data);
+    Py_INCREF(retval->unconsumed_tail);
+
+    /* Mark it as being initialized */
+    retval->is_initialised = 1;
+
+    return (PyObject*)retval;
+}
+
 PyDoc_STRVAR(decomp_flush__doc__,
 "flush() -- Return a string containing any remaining decompressed dat
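As a usage illustration (my sketch, not part of the patch), the copy() method added by the zlibmodule.c change above is what makes a "would this file still fit?" check possible: you clone the compressor, trial-compress the candidate data on the clone, and leave the real stream untouched. Python 2-style strings are used to match the era of the thread.

    import zlib

    def extra_compressed_bytes(compressor, data):
        """How many more compressed bytes would be produced if ``data`` were
        appended and the stream finished, without disturbing ``compressor``."""
        trial = compressor.copy()          # clone the internal zlib state
        return len(trial.compress(data)) + len(trial.flush())

    compressor = zlib.compressobj()
    archive_so_far = compressor.compress("data already written to the archive")
    would_add = extra_compressed_bytes(compressor, "contents of the next file")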
Re: [Python-Dev] Copying zlib compression objects
On 2/17/06, Guido van Rossum <[EMAIL PROTECTED]> wrote:
> Please submit your patch to SourceForge.

I've submitted the zlib patch as patch #1435422. I added some test cases to test_zlib.py and documented the new methods.

I'd like to test my gzip / tarfile changes more before creating a patch for them, but I'm interested in any feedback about the idea of adding snapshot() / restore() methods to the GzipFile and TarFile classes. It doesn't look like the underlying bz2 library supports copying compression / decompression streams, so for now it's impossible to make corresponding changes to the bz2 module.

I also noticed that tarfile reimplements the gzip file format when dealing with streams. Would it make sense to refactor some of the gzip.py code to expose the methods that read/write the gzip file header, and have the tarfile module use those methods?

Cheers,
Chris
Re: [Python-Dev] Path PEP: some comments (equality)
On 2/20/06, Mark Mc Mahon <[EMAIL PROTECTED]> wrote:
> Hi,
>
> It seems that the Path module as currently defined leaves equality
> testing up to the underlying string comparison. My guess is that this
> is fine for Unix (maybe not even) but it is a bit lacking for Windows.
>
> Should the path class implement an __eq__ method that might do some of
> the following things:
> - Get the absolute path of both self and the other path
> - normcase both
> - now see if they are equal
>
> This would make working with paths much easier for keys of a
> dictionary on windows. (I frequently use a case insensitive string
> class for paths if I need them to be keys of a dict.)

The PEP specifies path.samefile(), which is useful in the case of files that actually exist, but pretty much useless for comparing paths that don't exist on the local machine.

I think leaving __eq__ as the default string comparison is best. But what about providing an alternate platform-specific equality test?

    def isequal(self, other, platform="native"):
        """Return True if self is equivalent to other using platform's
        path comparison rules.

        platform can be one of "native", "posix", "windows", "mac"."""

This could do some combination of os.path.normpath() and os.path.normcase(), depending on the platform. The docs for os.path.normpath() say that it may change the meaning of the path if it contains symlinks; it's not clear to me how, though.

Cheers,
Chris
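A minimal sketch of what such a platform-aware comparison might do, using the posixpath and ntpath modules to borrow each platform's normpath/normcase rules. This is my illustration, not part of the PEP: the function name and the (abbreviated) platform table are assumptions, and the comparison is purely lexical, so symlinks can still make lexically different paths point at the same file.

    import ntpath
    import os.path
    import posixpath

    _PLATFORM_MODULES = {
        "native": os.path,
        "posix": posixpath,
        "windows": ntpath,
    }

    def paths_equal(a, b, platform="native"):
        """Compare two path strings using the named platform's rules."""
        mod = _PLATFORM_MODULES[platform]
        norm = lambda p: mod.normcase(mod.normpath(p))
        return norm(a) == norm(b)

    # e.g. paths_equal(r"C:\Temp\..\FOO", "c:/foo", platform="windows") -> True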
[Python-Dev] Patch to add timestamp() method to datetime objects
Hi,

I've just submitted patch #1457227, which adds a convenience method to datetime objects to get the timestamp. It's equivalent to time.mktime(d.timetuple()); I just wanted to save myself some typing and be able to write d.timestamp() instead. I hope I have the DST code right.

Would d.utctimestamp() be useful as well?

Cheers,
Chris
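For reference, a sketch of the equivalence described above, written as a module-level helper rather than the method the patch adds (so this is an illustration, not the patch itself):

    import time
    from datetime import datetime

    def timestamp(d):
        """Seconds since the epoch for a naive, local-time datetime ``d``.

        Equivalent to time.mktime(d.timetuple()); for a naive datetime the
        time tuple carries tm_isdst = -1, so mktime applies the platform's
        DST rules.
        """
        return time.mktime(d.timetuple())

    # e.g. timestamp(datetime(2006, 3, 26, 12, 0, 0))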
Re: [Python-Dev] improving quality
On 3/28/06, Neal Norwitz <[EMAIL PROTECTED]> wrote:
> We've made a lot of improvement with testing over the years.
> Recently, we've gotten even more serious with the buildbot, Coverity,
> and coverage (http://coverage.livinglogic.de). However, in order to
> improve quality even further, we need to do a little more work. This
> is especially important with the upcoming 2.5. Python 2.5 is the most
> fundamental set of changes to Python since 2.2. If we're to make this
> release work, we need to be very careful about it.

This reminds me of something I've been wanting to ask for a while: does anybody run Python through Valgrind on a regular basis? I've noticed that Valgrind complains a lot about invalid reads in PyObject_Free. I know that Valgrind can warn about things that turn out not to be problems, but would generating a suppression file and running all or part of the test suite through Valgrind on the buildbots be useful?

Cheers,
Chris
[Python-Dev] xmlrpclib.binary missing?
The current 2.4 and 2.5 docs mention that xmlrpclib has a function called binary() which converts any Python value to a Binary object. However, this function does not exist in either 2.4.3 or 2.5.

The Binary constructor accepts a data parameter, so I would say just remove the mention of the binary function from the docs and leave the implementation as-is.

Cheers,
Chris
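For example, instead of the documented-but-missing xmlrpclib.binary() helper, callers can construct the object directly. A small sketch of the workaround (the filename is a placeholder for this illustration):

    import xmlrpclib

    data = open("image.png", "rb").read()
    payload = xmlrpclib.Binary(data)   # base64-encoded on the wire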
Re: [Python-Dev] xmlrpclib.binary missing?
On 3/30/06, Aahz <[EMAIL PROTECTED]> wrote:
> On Thu, Mar 30, 2006, Chris AtLee wrote:
> >
> > The current 2.4 and 2.5 docs mention that the xmlrpclib has a function
> > called binary which converts any python value to a Binary object.
> >
> > However, this function does not exist in either 2.4.3 or 2.5.
> >
> > The Binary constructor accepts a data parameter, so I would say just
> > remove mention of the binary function from the docs and leave the
> > implementation as-is.
>
> Please file a bug on SF and report it back here so that we don't lose
> track of this.

Sure, this is filed as bug #1461610.

Cheers,
Chris
Re: [Python-Dev] zlib module doesn't build - inflateCopy() not found
On 5/21/06, Guido van Rossum <[EMAIL PROTECTED]> wrote:
> Then options 2 and 3 are both fine.
>
> Not compiling at all is *not*, so if nobody has time to implement 2 or
> 3, we'll have to do 4.
>
> --Guido

Is this thread still alive? I've posted patch #1503046 to SourceForge, which implements option #2 by checking for inflateCopy() in the system zlib during the configure step. I'm not sure whether this works when using the zlib library included with Python or not.

Cheers,
Chris