Re: [Python-Dev] Issues with Py3.1's new ipaddr
On Tue, Jun 2, 2009 at 10:34 AM, "Martin v. Löwis" wrote:
>>> We could remove it, but then what we have wouldn't really be a release
>>> candidate anymore, so the release would get delayed.
>>
>> How long do release candidates soak in the field before being accepted?
>
> For this release, the release schedule is defined in PEP 375
>
>> I'm curious if an exception could be made in this case, given that you
>> have admitted that ipaddr is not an important library.
>
> This would need to be decided by the release manager. However, given
> that Guido has already pronounced on this issue, Benjamin is unlikely to
> change anything.
>
>> You seem comfortable with these quirks, but then you're not planning
>> to write software with this library. Developers who do intend to write
>> meaningful network applications do seem concerned, yet we're ignored.
>
> I don't hear a public outcry - only a single complainer.

I normally just lurk on python-dev, but I will comment on this thread. I
manage several large IP address spaces and I've written my own IP address
tools in the past. The comments on this thread motivated me to look at the
ipaddr module. I fully agree with Clay's comments. I would not use the
module as it stands. I apologize for lurking too much and not commenting
earlier.

casevh

> Regards,
> Martin

___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 3144: IP Address Manipulation Library for the Python Standard Library
On Thu, Aug 20, 2009 at 2:00 PM, Peter Moody wrote:
> The pep has been updated with the excellent suggestions thus far.
>
> Are there any more?

Thanks for writing the PEP. I tried a few of the common scenarios that I
use at work. Disclaimer: my comments are based on my work environment.

I was surprised that IP('172.16.1.1') returned IPv4Address('172.16.1.1/32')
instead of IPv4Address('172.16.1.1'). I know I can change the behavior by
using host=True, but then IP('172.16.1.1/24', host=True) will raise an
error. It makes more sense, at least to me, that if I input just an IP
address, I get an IP address back. I would prefer that IP('172.16.1.1/32')
return an IPv4Network and IP('172.16.1.1') return an IPv4Address.

Would it be possible to provide an iterator that returns just the valid
host IP addresses (minus the network and broadcast addresses)? Currently,
"for i in IPv4Network('172.16.1.0/28')" and
"for i in IPv4Network('172.16.1.0/28').iterhosts()" both return all 16 IP
addresses. I normally describe 172.16.1.0/28 as consisting of one network
address, 14 host addresses, and one broadcast address. I would prefer that
"for i in IPv4Network('172.16.1.0/28')" return all 16 IP addresses and that
"for i in IPv4Network('172.16.1.0/28').iterhosts()" exclude the network and
broadcast addresses. I think creating a list of IP addresses that can be
assigned to devices on a network is a common task.

Can .subnet() be enhanced to accept masks? For example,
IPv4Network('172.16.0.0/16').subnet('/19') would return the eight /19
subnets. What about supporting multiple parameters to subnet? I frequently
need to create complex subnet layouts. The following subnet layout is NOT
made up!
172.16.0.0/22
    172.16.0.0/23
    172.16.2.0/25
    172.16.2.128/26
    172.16.2.192/26
    172.16.3.0/28
    172.16.3.16/28
    172.16.3.32/30
    172.16.3.36/30
    172.16.3.40/30
    172.16.3.44/30
    172.16.3.48/30
    172.16.3.52/30
    172.16.3.56/30
    172.16.3.60/30
    172.16.3.64/32
    172.16.3.79/32
    172.16.3.80/28
    172.16.3.96/28
    172.16.3.112/28
    172.16.3.128/27
    172.16.3.160/27
    172.16.3.192/26

A possible syntax would be:

.subnet((1,'/23'),(1,'/25'),(2,'/26'),(2,'/28'),(8,'/30'),(16,'/32'),(3,'/28'),(2,'/27'),(1,'/26'))

Note: I am willing to provide patches to implement my suggestions. I just
won't have much time over the next couple of weeks.

casevh

> Cheers,
> /peter
>
> On Tue, Aug 18, 2009 at 1:00 PM, Peter Moody wrote:
>> Howdy folks,
>>
>> I have a first draft of a PEP for including an IP address manipulation
>> library in the python stdlib. It seems like there are a lot of really
>> smart folks with some, ahem, strong ideas about what an IP address
>> module should and shouldn't be, so I wanted to solicit your input on
>> this pep.
>>
>> the pep can be found here:
>>
>> http://www.python.org/dev/peps/pep-3144/
>>
>> the code can be found here:
>>
>> http://ipaddr-py.googlecode.com/svn/branches/2.0.x/
>>
>> Please let me know if you have any comments (some already coming :)
>>
>> Cheers,
>> /peter
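For readers coming to this thread later: the stdlib ipaddress module that
eventually grew out of ipaddr (Python 3.3+) adopted essentially the behavior
requested above. A short sketch, for comparison with the suggestions in this
message:

```python
import ipaddress

# A bare address parses as an address; a prefixed string parses as a network.
addr = ipaddress.ip_address('172.16.1.1')
net = ipaddress.ip_network('172.16.1.0/28')

# hosts() skips the network and broadcast addresses: 14 usable hosts in a /28.
hosts = list(net.hosts())
print(len(hosts))            # 14
print(hosts[0], hosts[-1])   # 172.16.1.1 172.16.1.14

# subnets(new_prefix=...) splits a network on an arbitrary mask:
# a /16 yields eight /19 subnets.
subs = list(ipaddress.ip_network('172.16.0.0/16').subnets(new_prefix=19))
print(len(subs))             # 8
```

The multi-parameter subnet() syntax proposed above was never adopted, but the
same layouts can be built by chaining subnets() calls.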
Re: [Python-Dev] Removal of intobject.h in 3.1
On Sat, Nov 21, 2009 at 11:05 AM, "Martin v. Löwis" wrote:
>> IMHO, that's not really a good way to encourage people to try to provide
>> a smooth upgrade to the 3.x branch. Much to the contrary. 3.x should make
>> it easier for developers by providing more standard helpers like
>> the removed intobject.h header file.
>
> I think it's better than it sounds. The macros (unfortunately) allowed
> people to make non-obvious mistakes. Now that they are gone, people need
> to really think about what precisely they want to do.
>
> For example, consider
>
>     if (PyInt_Check(o)) {
>         long val = PyInt_AsLong(o);
>         // process
>     } else if (PyLong_Check(o)) {
>         long long val = PyLong_AsLongLong(o);
>         // check for overflow
>         // process
>     }
>
> With intobject.h, this code would continue to compile, but work
> incorrectly, as the second case would never be executed. It would
> be better to port this as
>
>     #if Py2.x
>     if (PyInt_Check(o)) {
>         long val = PyInt_AsLong(o);
>         // process
>     } else
>     #endif
>     if (PyLong_Check(o)) {
>
> i.e. eliminating the int case altogether. For another example,
>
>     long foo = PyInt_AsLong(Foo);
>
> has a hidden error in 3.x, with intobject: PyLong_AsLong might
> overflow, which the 2.x case doesn't.
>
> So eliminating intobject.h likely helps avoid subtle errors.

FWIW, I ported gmpy to Python 3.x without using intobject.h. I'm now using
the #if Py2.x ... #endif approach (almost) everywhere. The same source
compiles successfully with Python 2.4 through 3.2.

Case

> Regards,
> Martin
[Python-Dev] Providing support files to assist 3.x extension authors
Hello,

When I ported gmpy (the Python interface to the GMP multiple-precision
library) to Python 3.x, I began to use PyLong_AsLongAndOverflow frequently.
I found the code to be slightly faster and cleaner than using PyLong_AsLong
and checking for overflow. I looked at making PyLong_AsLongAndOverflow
available to Python 2.x.

http://bugs.python.org/issue7528 includes a patch that adds
PyLong_AsLongAndOverflow to Python 2.7. I also included a file
(py3intcompat.c) that can be included with an extension's source code and
provides PyLong_AsLongAndOverflow to earlier versions of Python 2.x.

In the bug report, I suggested that py3intcompat.c could be included in the
Misc directory and be made available to extension authors. This follows the
precedent of pymemcompat.h. But there may be more "compatibility" files
that could benefit extension authors. Mark Dickinson suggested that I bring
the topic up on python-dev.

Several questions come to mind:

1) Is it reasonable to provide backward compatibility files (either as .h
or .c) to provide support for new API calls to extension authors?

2) If yes, should they be included with the Python source or distributed
as a separate entity? (2to3 and/or 3to2 projects, a Wiki page)

3) If not, and extension authors can create their own compatibility files,
are there any specific attribution or copyright messages that must be
included? (I'm assuming the compatibility file was created by extracting
the code for the new API and tweaking it to run on older versions of
Python.)

Thanks in advance for your attention,
Case Van Horsen
Re: [Python-Dev] Providing support files to assist 3.x extension authors
On Sun, Dec 20, 2009 at 12:49 AM, "Martin v. Löwis" wrote:
>> Several questions come to mind:
>>
>> 1) Is it reasonable to provide backward compatibility files (either as
>> .h or .c) to provide support for new API calls to extension authors?
>
> I'm skeptical. In my experience, each extension has different needs, and
> will also use different strategies to provide compatibility. So
> publishing one way as the "official" approach might be difficult. In one
> case, the proposed approach for compatibility actually led to incorrect
> code (by ignoring exceptions when extracting a long), so we would need
> to be fairly careful about which compatibility layers we bless as
> official.
>
>> 2) If yes, should they be included with the Python source or
>> distributed as a separate entity? (2to3 and/or 3to2 projects, a Wiki
>> page)
>
> In the way you propose it (i.e. as forward compatibility files) I fail
> to see the point of including them with Python 2.7. Extension authors
> likely want to support versions of Python before that, which now cannot
> be changed. So those authors would still have to include the file
> on their own.
>
> So a file distributed in Include of 2.7 actually hurts, as it would
> conflict with the copy included in packages.
>
>> 3) If not, and extension authors can create their own compatibility
>> files, are there any specific attribution or copyright messages that
>> must be included?
>
> If you write a compatibility file from scratch, without copying existing
> code, you don't need to worry at all. If you do copy parts of Python,
> you must follow the license, in particular clause 2.
>
> Regards,
> Martin

Thanks for the comments. I will just maintain my own version.

Case
Re: [Python-Dev] Decimal <-> float comparisons in py3k.
On Fri, Mar 19, 2010 at 3:07 AM, Mark Dickinson wrote:
> On Fri, Mar 19, 2010 at 9:37 AM, Mark Dickinson wrote:
>> Making hashes of int, float, Decimal *and* Fraction all compatible with
>> one another, efficient for ints and floats, and not grossly inefficient
>> for Fractions and Decimals, is kinda hairy, though I have a couple of
>> ideas of how it could be done.
>
> To elaborate, here's a cute scheme for the above, which is actually
> remarkably close to what Python already does for ints and floats, and
> which doesn't require any of the numeric types to figure out whether
> it's exactly equal to an instance of some other numeric type.
>
> After throwing out infinities and nans (which can easily be dealt with
> separately), everything we care about is a rational number, so it's
> enough to come up with some mapping from the set of all rationals to
> the set of possible hashes, subject to the requirement that the
> mapping can be computed efficiently for the types we care about.
>
> For any prime P there's a natural 'reduce modulo P' map
>
>     reduce : {rational numbers} --> {0, 1, ..., P-1, infinity}
>
> defined in pseudocode by:
>
>     reduce(m/n) = ((m % P) * inv(n, P)) % P if n % P != 0 else infinity
>
> where inv(n, P) represents the modular inverse to n modulo P.
>
> Now let P be the handy Mersenne prime P = 2**31-1 (on a 64-bit
> machine, the almost equally handy prime 2**61-1 could be used
> instead), and define a hash function from the set of rationals to the
> set [-2**31, 2**31) by:
>
>     hash(0) = 0
>     hash(m/n) = 1 + reduce(m/n - 1) if m/n > 0  # i.e., reduce m/n modulo P,
>                                                 # but to [1..P] rather than
>                                                 # [0..P-1].
>     hash(m/n) = -hash(-m/n) if m/n < 0
>
> and in the last two cases, map a result of infinity to the unused hash
> value -2**31.
>
> For ints, this hash function is almost identical to what Python
> already has, except that the current int hash does a reduction modulo
> 2**32-1 or 2**64-1 rather than 2**31-1.
> For all small ints, hash(n) == n, as currently. Either way, the hash can
> be computed digit-by-digit in exactly the same manner. For floats, it's
> also easy to compute: express the float as m * 2**e for some integers m
> and e, compute hash(m), and rotate e bits in the appropriate direction.
> And it's straightforward to implement for the Decimal and Fraction
> types, too.

Will this change the result of hashing a long? I know that both gmpy and
SAGE use their own hash implementations for performance reasons. I
understand that CPython's hash function is an implementation detail, but
there are external modules that rely on the existing hash behavior.

FWIW, I'd prefer that 2.7 and 3.2 have the same behavior. I don't mind the
existing 3.1 behavior and I'd rather not have a difference between 3.1 and
3.2.

casevh

> (One minor detail: as usual, some postprocessing would be required to
> replace a hash of -1 with something else, since a hash value of -1 is
> invalid.)
>
> Mark
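Mark's scheme is compact enough to prototype in a few lines. The sketch below
is only an illustration of the pseudocode quoted above, not CPython's
implementation; it uses pow(n, P-2, P) for the modular inverse (valid because
P is prime, by Fermat's little theorem) and omits the -1 postprocessing
detail mentioned at the end of the message:

```python
P = 2**31 - 1      # the Mersenne prime used as the modulus in the scheme
INF_HASH = -2**31  # hash value reserved for a reduce() result of 'infinity'

def reduce_mod(m, n):
    """Map the rational m/n into {0, ..., P-1}, or None when P divides n."""
    if n % P == 0:
        return None  # 'infinity' in the pseudocode
    # pow(n, P-2, P) is the modular inverse of n mod P (Fermat, P prime).
    return (m % P) * pow(n, P - 2, P) % P

def rational_hash(m, n):
    """Hash of the rational m/n (with n > 0) under the proposed scheme."""
    if m == 0:
        return 0
    if m < 0:
        return -rational_hash(-m, n)
    # hash(m/n) = 1 + reduce(m/n - 1); note m/n - 1 == (m - n)/n.
    r = reduce_mod(m - n, n)
    return INF_HASH if r is None else 1 + r

# Equal rationals hash equal, and small integers hash to themselves:
print(rational_hash(10, 2))                        # 5  (10/2 == 5)
print(rational_hash(1, 3) == rational_hash(2, 6))  # True
```

The key property is that any two representations of the same rational, from
any numeric type, reduce to the same value, so int, float, Fraction, and
Decimal can all hash consistently without comparing against each other.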
Re: [Python-Dev] Decimal <-> float comparisons in py3k.
On Sat, Mar 20, 2010 at 4:06 AM, Mark Dickinson wrote:
> On Fri, Mar 19, 2010 at 1:17 PM, Case Vanhorsen wrote:
>> On Fri, Mar 19, 2010 at 3:07 AM, Mark Dickinson wrote:
>>> On Fri, Mar 19, 2010 at 9:37 AM, Mark Dickinson wrote:
>>>> Making hashes of int, float, Decimal *and* Fraction all compatible
>>>> with one another, efficient for ints and floats, and not grossly
>>>> inefficient for Fractions and Decimals, is kinda hairy, though I have
>>>> a couple of ideas of how it could be done.
>>>
>>> To elaborate, here's a cute scheme for the above, which is actually
>>> remarkably close to what Python already does for ints and floats, and
>>> which doesn't require any of the numeric types to figure out whether
>>> it's exactly equal to an instance of some other numeric type.
>
>> Will this change the result of hashing a long? I know that both gmpy
>> and SAGE use their own hash implementations for performance reasons. I
>> understand that CPython's hash function is an implementation detail,
>> but there are external modules that rely on the existing hash
>> behavior.
>
> Yes, it would change the hash of a long.
>
> What external modules are there that rely on existing hash behaviour?

I'm only aware of gmpy and SAGE.

> And exactly what behaviour do they rely on?

Instead of calculating hash(long(mpz)), they calculate hash(mpz) directly.
It avoids the creation of a temporary object that could be quite large and
is faster than the two-step process. I would need to modify the code so
that it continues to produce the same result.

casevh

> Mark
Re: [Python-Dev] Decimal <-> float comparisons in py3k.
On Sat, Mar 20, 2010 at 10:05 AM, Mark Dickinson wrote:
> On Sat, Mar 20, 2010 at 3:17 PM, Case Vanhorsen wrote:
>> On Sat, Mar 20, 2010 at 4:06 AM, Mark Dickinson wrote:
>>> What external modules are there that rely on existing hash behaviour?
>>
>> I'm only aware of gmpy and SAGE.
>>
>>> And exactly what behaviour do they rely on?
>>
>> Instead of calculating hash(long(mpz)), they calculate hash(mpz)
>> directly. It avoids the creation of a temporary object that could be
>> quite large and is faster than the two-step process. I would need to
>> modify the code so that it continues to produce the same result.
>
> Does gmpy only do this for Python 2.6? Or does it use different
> algorithms for 2.4/2.5 and 2.6? As far as I can tell, there was no
> reasonable way to compute long_hash directly at all before the
> algorithm was changed for 2.6, unless you imitate exactly what Python
> was doing (break up into 15-bit pieces, and do all the rotation and
> addition exactly the same way), in which case you might as well be
> calling long_hash directly.
>
> Mark

It does the latter: it converts from GMP's internal format to CPython's
long format and calculates the hash along the way. I may (should :) )
revert to converting to a long and then calling long_hash. The majority of
the speed increase came from the conversion improvements, not the hash
calculation.

I am in favor of any change that makes 2.7 and 3.2 behave the same.

casevh
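For context on what "computing the hash directly" buys an extension like
gmpy: in modern CPython 3.x the int hash is a simple reduction modulo the
prime exposed as sys.hash_info.modulus (2**61 - 1 on typical 64-bit builds),
so a third-party type can match hash(int) without building a temporary int at
all. A sketch of that implementation detail, in Python rather than C:

```python
import sys

M = sys.hash_info.modulus  # 2**61 - 1 on typical 64-bit CPython builds

def int_hash(n):
    """Reproduce CPython's hash() for an int without calling hash().

    This mirrors an implementation detail of modern CPython 3.x; it is a
    sketch for illustration, not a stable API.
    """
    h = abs(n) % M
    if n < 0:
        h = -h
    if h == -1:   # -1 is reserved as an error marker in CPython's C API
        h = -2
    return h

# Matches the built-in hash for small and arbitrarily large ints:
for n in (0, 1, -1, 12345, 10**50, -(10**50)):
    assert int_hash(n) == hash(n)
```

Because the modulus is prime, a bignum library can fold its own internal
limbs modulo M digit-by-digit and never materialize a Python int, which is
exactly the optimization discussed in this thread.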
Re: [Python-Dev] mingw support?
On Mon, Aug 9, 2010 at 11:47 AM, Sturla Molden wrote:
>> Terry Reedy:
>>
>> MingW has become less attractive in recent years by the difficulty in
>> downloading and installing a current version and finding out how to do
>> so. Some projects have moved on to the TDM packaging of MingW.
>>
>> http://tdm-gcc.tdragon.net/
>
> MinGW has become a mess. Equation.com used to have a decent installer,
> but at some point they started to ship mingw builds with a Trojan. TDM
> looks OK for now.
>
> Building 32-bit Python extensions works with MinGW. 64-bit extensions
> are not possible due to lacking import libraries (no libmsvcr90.a and
> libpython26.a for amd64). It is not possible to build Python with mingw,
> only extensions.
>
> I think it is possible to build Python with Microsoft's SDK compiler, as
> it has nmake. The latest is the Windows 7 SDK for .NET 4, but we need
> the version for .NET 3.5 to maintain CRT compatibility with current
> Python releases.
>
> Python's distutils do not work with the SDK compiler, only Visual
> Studio. Building Python extensions with the SDK compiler is not as easy
> as it could (or should) be.

Based on hints here:

http://docs.python.org/distutils/apiref.html?highlight=sdk#module-distutils.msvccompiler

I've been able to build GMPY and MPIR using just the SDK compiler. For an
example, see

http://code.google.com/p/gmpy/source/browse/trunk/win_x64_sdk_build.txt

I agree that it should be easier, but it is possible.

casevh

> One advantage of mingw for scientific programmers (who are frequent
> users of Python) is the gfortran compiler. Although it is not as capable
> as Absoft or Intel Fortran, it is still decent and can be used with
> f2py. This makes the lack of 64-bit support for Python extensions with
> mingw particularly annoying. Microsoft's SDK does not have a Fortran
> compiler, and commercial versions are very expensive (though I prefer to
> pay for Absoft anyway).
>
> I do not wish for a complete build process for mingw.
> But support for 64-bit extensions with mingw and distutils support for
> Microsoft's SDK compiler would be nice.
>
> Sturla
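For the archives, the distutils hook referenced above boils down to a pair of
environment variables: with DISTUTILS_USE_SDK and MSSdk set, distutils trusts
the compiler already configured in the environment instead of searching for a
Visual Studio installation. A rough sketch of an SDK build session (the
project name is illustrative; run inside an SDK command prompt that targets
the CRT matching your Python release):

```
:: From a Windows SDK command prompt with the right CRT configured:
set DISTUTILS_USE_SDK=1
set MSSdk=1
python setup.py build_ext
```

The linked win_x64_sdk_build.txt in the gmpy repository walks through a
complete example of this approach for a 64-bit extension build.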