Re: [Python-Dev] Status of C compilers for Python on Windows
On Wed, Oct 29, 2014 at 3:25 PM, Antoine Pitrou wrote:

> On Thu, 30 Oct 2014 01:09:45 +1000, Nick Coghlan wrote:
>>
>> Lots of folks are happy with POSIX emulation layers on Windows, as
>> they're OK with "basically works" rather than "works like any other
>> native application". "Basically works" isn't sufficient for many
>> Python-on-Windows use cases though, so the core ABI is a platform
>> native one, rather than a POSIX emulation.
>>
>> This makes Python fit in more cleanly with other Windows applications,
>> but makes it harder to write Python applications that span both POSIX
>> and Windows.
>
> I don't really understand why that's the case. Only the building and
> packaging may be more difficult, and that assumes you're familiar with
> mingw32. But mingw32, AFAIK, doesn't make the Windows runtime magically
> POSIX-compatible (Cygwin does, to some extent).

mingw32 is a more compliant C compiler (VS2008 does not implement much from C89), and it does implement quite a few things not implemented in the C runtime, especially for math. But TBH, those are not compelling reasons to build Python itself with mingw, only to better support C extensions built with mingw.

David
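For readers checking their own builds, the stdlib can report which compiler produced the running interpreter, which is relevant when matching extension builds to an MSVC-built Python; a minimal sketch (the printed values are examples, not guaranteed output):

    # Inspect the compiler and platform of the running interpreter; handy
    # when deciding whether an extension build matches the interpreter.
    import platform
    import sysconfig

    print(platform.python_compiler())  # e.g. 'MSC v.1500 64 bit (AMD64)' for VS2008
    print(sysconfig.get_platform())    # e.g. 'win-amd64'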
Re: [Python-Dev] Status of C compilers for Python on Windows
On Wed, Oct 29, 2014 at 5:17 PM, David Cournapeau wrote:

> On Wed, Oct 29, 2014 at 3:25 PM, Antoine Pitrou wrote:
>> [...]
>> But mingw32, AFAIK, doesn't make the Windows runtime magically
>> POSIX-compatible (Cygwin does, to some extent).
>
> mingw32 is a more compliant C compiler (VS2008 does not implement much
> from C89)

That should read "much from C99", of course, otherwise VS2008 would have been a completely useless C compiler!

David
[Python-Dev] Why does python use relative instead of absolute path when calling LoadLibrary*
Hi,

While looking at the import code of Python for C extensions, I was wondering why we pass a relative path instead of an absolute path to LoadLibraryEx (see bottom for some context).

In Python 2.7, the full path's existence was even checked before calling into LoadLibraryEx (https://github.com/python/cpython/blob/2.7/Python/dynload_win.c#L189), but it looks like this check was removed in the Python 3.x branch.

Is there any defined behaviour that depends on this path being relative?

Context
-------

The reason I am interested in this is the potential use of SetDllDirectory to share DLLs between multiple Python extensions. Currently, the only solutions I am aware of are:

1. putting the DLLs in the PATH
2. bundling the DLLs side by side with the .pyd
3. patching packages to use preloading (using e.g. ctypes)

I am investigating a solution 4, where the DLLs would be put in a separate "private" directory known only to Python itself, without the need to modify PATH. Patching Python to use SetDllDirectory("some private path specific to a Python interpreter") works perfectly, except that it slightly changes the semantics of LoadLibraryEx so that it no longer looks for DLLs in the current directory. This breaks importing extensions built in place, unless I modify the call in https://github.com/python/cpython/blob/2.7/Python/dynload_win.c#L195 from:

    hDLL = LoadLibraryEx(pathname, NULL, LOAD_WITH_ALTERED_SEARCH_PATH)

to:

    hDLL = LoadLibraryEx(pathbuf, NULL, LOAD_WITH_ALTERED_SEARCH_PATH)

That seems to work, but I am quite worried about accidentally changing import semantics.

David
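For context, a minimal sketch of the preloading workaround (solution 3 above) from Python, using ctypes on Windows; the directory layout and DLL name here are hypothetical:

    # Preload a shared DLL from a "private" directory before importing the
    # extension that needs it, so the loader finds it already mapped.
    import ctypes
    import os
    import sys

    private_dir = os.path.join(sys.prefix, "DLLs", "private")  # hypothetical layout
    ctypes.WinDLL(os.path.join(private_dir, "libfoo.dll"))     # hypothetical DLL

    # Alternatively, redirect the process-wide DLL search path; as noted
    # above, this also removes the current directory from the search order.
    ctypes.windll.kernel32.SetDllDirectoryW(private_dir)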
Re: [Python-Dev] Why does python use relative instead of absolute path when calling LoadLibrary*
Thank you both for your answers. I will go ahead with this modification and see how it goes.

David

On Thu, Mar 12, 2015 at 2:41 AM, Wes Turner wrote:

> On Mar 11, 2015 3:36 PM, "David Cournapeau" wrote:
>> While looking at the import code of Python for C extensions, I was
>> wondering why we pass a relative path instead of an absolute path to
>> LoadLibraryEx (see bottom for some context).
>> [...]
>> Is there any defined behaviour that depends on this path being
>> relative?
>
> Just a guess: does it have to do with resolving symlinks (w/ POSIX
> filesystems)?
Re: [Python-Dev] Investigating time for `import requests`
On Mon, Oct 2, 2017 at 6:42 PM, Raymond Hettinger wrote:

>> On Oct 2, 2017, at 12:39 AM, Nick Coghlan wrote:
>>
>> "What requests uses" can identify a useful set of avoidable imports. A
>> Flask "Hello world" app could likely provide another such sample, as
>> could some example data analysis notebooks.
>
> Right. It is probably worthwhile to identify which parts of the library
> are typically imported but are not ever used. And likewise, identify a
> core set of commonly used tools that are going to be almost unavoidable
> in sufficiently interesting applications (like using requests to access
> a REST API, running a micro-webframework, or invoking mercurial).
>
> Presumably, if any of this is going to make a difference to end users,
> we need to see if there is any avoidable work that takes a significant
> fraction of the total time from invocation through the point where the
> user first sees meaningful output. That would include loading from
> nonvolatile storage, executing the various imports, and doing the
> actual application.
>
> I don't expect to find anything that would help users of Django, Flask,
> and Bottle since those are typically long-running apps where we value
> response time more than startup time.
>
> For scripts using the requests module, there will be some fruit because
> not everything that is imported is used. However, that may not be
> significant because scripts using requests tend to be I/O bound. In the
> timings below, 6% of the running time is used to load and run
> python.exe, another 16% is used to import requests, and the remaining
> 78% is devoted to the actual task of running a simple REST API query.
> It would be interesting to see how much of the 16% could be avoided
> without major alterations to requests, to urllib3, and to the standard
> library.

It is certainly true that for a CLI tool that actually does any network I/O, especially SSL, import times will quickly be negligible.

It becomes tricky for complex tools, because of error management. For example, a common pattern I have used in the past is to have a high-level "catch all exceptions" function that dispatches the CLI command:

    try:
        main_function(...)
    except ErrorKind1:
        ...
    except requests.exceptions.SSLError:
        # gives a complete message about options when receiving SSL
        # errors, e.g. invalid certificate
        ...

This pattern requires importing requests every time the command is run, even if no network I/O is actually done. For a complex CLI tool, maybe most commands don't use network I/O (the tool in question was a complete package manager), but you pay ~100 ms for the requests import on every command. It is particularly visible because command latency starts to be felt around 100-150 ms, and while you can do a lot in Python in 100-150 ms, you can't do much in 0-50 ms.

David

> For mercurial, "hg log" or "hg commit" will likely be instructive about
> what portion of the imports actually get used. A push or pull will
> likely be I/O bound so those commands are less informative.
>
> Raymond
>
> ----- Quick timing for a minimal script using the requests module -----
>
> $ cat > demo_github_rest_api.py
> import requests
> info = requests.get('https://api.github.com/users/raymondh').json()
> print('%(name)s works at %(company)s. Contact at %(email)s' % info)
>
> $ time python3.6 demo_github_rest_api.py
> Raymond Hettinger works at SauceLabs. Contact at None
>
> real    0m0.561s
> user    0m0.134s
> sys     0m0.018s
>
> $ time python3.6 -c "import requests"
>
> real    0m0.125s
> user    0m0.104s
> sys     0m0.014s
>
> $ time python3.6 -c ""
>
> real    0m0.036s
> user    0m0.024s
> sys     0m0.005s
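One mitigation for the pattern described above is to defer the requests import and translate its exceptions only when the module has actually been loaded; a sketch, with a hypothetical dispatch() standing in for the real command dispatcher:

    # Avoid paying the requests import cost for commands that never touch
    # the network: look the module up in sys.modules instead of importing.
    import sys

    def dispatch(argv):
        # hypothetical dispatcher; a real tool would route to subcommands
        raise NotImplementedError

    def main(argv):
        try:
            return dispatch(argv)
        except Exception as exc:
            requests = sys.modules.get("requests")  # set only if a command imported it
            if requests is not None and isinstance(exc, requests.exceptions.SSLError):
                sys.stderr.write("SSL error: check certificate options\n")
                return 1
            raise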
Re: [Python-Dev] Single-file Python executables (was: Computed Goto dispatch for Python 2)
On Fri, May 29, 2015 at 1:28 AM, Chris Barker wrote:

> On Thu, May 28, 2015 at 9:23 AM, Chris Barker wrote:
>
>> Barry Warsaw wrote:
>>> I do think single-file executables are an important piece to
>>> Python's long-term competitiveness.
>>
>> Really? It seems to me that desktop development is dying. What are
>> the critical use-cases for a single file executable?
>
> oops, sorry -- I see this was addressed in another thread. Though I
> guess I still don't see why "single file" is critical, over "single
> thing to install" -- like an OS-X app bundle that can just be dragged
> into the Applications folder.

It is much simpler to deploy in an automated, recoverable way (and also much faster), because you can't have one part of the artefact "unsynchronized" with another part of the program. Note also that moving a Python installation around in your filesystem is actually quite unlikely to work for interesting use cases on Unix, because of the relocatability issue.

Another advantage: it makes it impossible for users to tamper with an application's content and be surprised that things don't work anymore (a very common source of issues, familiar to anybody deploying complex Python applications in the "enterprise world").

I recently started using some services written in Go, and the single-file approach is definitely a big plus. It makes *using* applications written in it so much easier than Python, even though I am a complete newbie in Go and relatively comfortable with Python.

One should keep in mind that Go has some inherent advantages over Python in those contexts, even if Python were to gain single-file distribution tomorrow. Most of the Go stdlib is written in Go now, I believe, and it is much more portable across Linux systems on a given CPU arch compared to Python. IOW, it is more robust against ABI variability.

David
Re: [Python-Dev] PEP 514: Python environment registration in the Windows Registry
Hi Steve,

I have looked into this PEP to see what we need to do on the Enthought side of things. I have a few questions:

1. Is it recommended to follow this for any Python version we may provide, or just new versions (3.6 and above)? Most of our customers still heavily use 2.7, and I wonder whether backporting this to 2.7 would cause more trouble than it is worth.
2. The main issue for us in practice has been the `PythonPath` entry as used to build `sys.path`. I understand this is not the point of the PEP, but would it make sense to give more precise recommendations for third-party providers there?

IIUC, PEP 514 would recommend that we do the following:

1. Use HKLM for a "system install" or HKCU for a "user install" as the root key.
2. Register under "\Software\Python\Enthought".
3. Patch our Pythons to look in 2. and not in "\Software\Python\PythonCore", especially for `sys.path` construction.
4. When a Python from Enthought is installed, it should never register anything in the key defined in 2.

Is this correct? I am not clear about 3., especially on what should be changed. I know that for 2.7 we need to change PC\getpathp.c for sys.path, but are there any other places where the registry is used by Python itself?

Thanks for working on this,

David

On Sat, Feb 6, 2016 at 9:01 PM, Steve Dower wrote:

> I've posted an updated version of this PEP that should soon be visible
> at https://www.python.org/dev/peps/pep-0514.
>
> Leaving aside the fact that the current implementation of Python
> relies on *other* information in the registry (that is not specified
> in this PEP), I'm still looking for feedback or concerns from
> developers who are likely to create or use the keys that are described
> here.
>
> PEP: 514
> Title: Python registration in the Windows registry
> Version: $Revision$
> Last-Modified: $Date$
> Author: Steve Dower
> Status: Draft
> Type: Informational
> Content-Type: text/x-rst
> Created: 02-Feb-2016
> Post-History: 02-Feb-2016
>
> Abstract
> ========
>
> This PEP defines a schema for the Python registry key to allow
> third-party installers to register their installation, and to allow
> applications to detect and correctly display all Python environments
> on a user's machine. No implementation changes to Python are proposed
> with this PEP.
>
> Python environments are not required to be registered unless they want
> to be automatically discoverable by external tools.
>
> The schema matches the registry values that have been used by the
> official installer since at least Python 2.5, and the resolution
> behaviour matches the behaviour of the official Python releases.
>
> Motivation
> ==========
>
> When installed on Windows, the official Python installer creates a
> registry key for discovery and detection by other applications. This
> allows tools such as installers or IDEs to automatically detect and
> display a user's Python installations.
>
> Third-party installers, such as those used by distributions, typically
> create identical keys for the same purpose. Most tools that use the
> registry to detect Python installations only inspect the keys used by
> the official installer. As a result, third-party installations that
> wish to be discoverable will overwrite these values, resulting in
> users "losing" their Python installation.
>
> By describing a layout for registry keys that allows third-party
> installations to register themselves uniquely, as well as providing
> tool developers guidance for discovering all available Python
> installations, these collisions should be prevented.
>
> Definitions
> ===========
>
> A "registry key" is the equivalent of a file-system path into the
> registry. Each key may contain "subkeys" (keys nested within keys) and
> "values" (named and typed attributes attached to a key).
>
> ``HKEY_CURRENT_USER`` is the root of settings for the currently
> logged-in user, and this user can generally read and write all
> settings under this root.
>
> ``HKEY_LOCAL_MACHINE`` is the root of settings for all users.
> Generally, any user can read these settings but only administrators
> can modify them. It is typical for values under ``HKEY_CURRENT_USER``
> to take precedence over those in ``HKEY_LOCAL_MACHINE``.
>
> On 64-bit Windows, ``HKEY_LOCAL_MACHINE\Software\Wow6432Node`` is a
> special key that 32-bit processes transparently read and write to
> rather than accessing the ``Software`` key directly.
>
> Structure
> =========
>
> We consider there to be a single collection of Python environments on
> a machine, where the collection may be different for each user of the
> machine. There are three potential registry locations where the
> collection may be stored based on the installation options of each
> environment::
>
>     HKEY_CURRENT_USER\Software\Python\<Company>\<Tag>
>     HKEY_LOCAL_MACHINE\Software\Python\<Company>\<Tag>
>     HKEY_LOCAL_MACHINE\Software\Wow6432Node\Python\<Company>\<Tag>
>
> Environments are uniquely identifie
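A minimal sketch of discovering registered environments following the layout quoted above (company keys under Software\Python in each root; the Wow6432Node key is reached automatically by 32-bit processes):

    # Enumerate company keys under Software\Python for one registry root.
    import winreg

    def companies(root):
        found = []
        try:
            with winreg.OpenKey(root, r"Software\Python") as key:
                i = 0
                while True:
                    try:
                        found.append(winreg.EnumKey(key, i))
                        i += 1
                    except OSError:      # no more subkeys
                        break
        except FileNotFoundError:        # no Python key under this root
            pass
        return found

    print(companies(winreg.HKEY_CURRENT_USER))
    print(companies(winreg.HKEY_LOCAL_MACHINE))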
Re: [Python-Dev] PEP 514: Python environment registration in the Windows Registry
On Tue, Mar 1, 2016 at 5:46 PM, Steve Dower wrote:

> On 01Mar2016 0524, Paul Moore wrote:
>
>> On 1 March 2016 at 11:37, David Cournapeau wrote:
>>
>>> I am not clear about 3., especially on what should be changed. I
>>> know that for 2.7 we need to change PC\getpathp.c for sys.path, but
>>> are there any other places where the registry is used by Python
>>> itself?
>>
>> My understanding from the earlier discussion was that you should not
>> patch Python at all. The sys.path building via PythonPath is not
>> covered by the PEP and you should continue as at present. The new
>> keys are all for informational purposes - your installer should write
>> to them, and read them if looking for your installations. But the
>> Python interpreter itself should not know or care about your new
>> keys.
>>
>> Steve can probably clarify better than I can, but that's how I recall
>> it being intended to work.
>> Paul
>
> Yes, the intention was to not move sys.path building out of the
> PythonCore key. It's solely about discovery by external tools.

Right. For us, continuing to populate sys.path from the registry keys "owned" by the python.org official installers is more and more untenable, because every distribution writes there, and this is especially problematic when you have both 32-bit and 64-bit distributions on the same machine.

> If you want to patch your own distribution to move the paths you are
> welcome to do that - there is only one string literal in getpathp.c
> that needs to be updated - but it's not a requirement and I
> deliberately avoided making a recommendation either way. (Though as
> discussed earlier in the thread, I'm very much in favour of
> deprecating and removing any use of the registry by the runtime itself
> in 3.6+, but still working out the implications of that.)

Great, I just wanted to make sure that removing it ourselves does not put us in a corner, or further away from where Python itself is going. Would it make sense to indicate in the PEP that doing so is allowed (neither recommended nor frowned upon)?

David
Re: [Python-Dev] Software integrators vs end users (was Re: Language Summit notes)
On Fri, Apr 18, 2014 at 11:28 PM, Donald Stufft wrote:

> On Apr 18, 2014, at 6:24 PM, Nick Coghlan wrote:
>
>> On 18 April 2014 18:17, Paul Moore wrote:
>>> On 18 April 2014 22:57, Donald Stufft wrote:
>>>> Maybe Nick meant ``pip install ipython[all]`` but I don't actually
>>>> know what that includes. I've never used ipython except for the
>>>> console.
>>>
>>> The hard bit is the QT Console, but that's because there aren't
>>> wheels for PySide AFAICT.
>>
>> IPython, matplotlib, scikit-learn, NumPy, nltk, etc. The things that
>> let you break programming out of the low level box of controlling the
>> computer, and connect it directly to the more universal high level
>> task of understanding and visualising the world.
>>
>> Regards,
>> Nick.
>
> FWIW it's been David Cournapeau's opinion (on Twitter at least) that
> some/all/most (I'm not sure exactly which) of these can be handled by
> wheels (they just aren't right now!).

Indeed, and the scipy community has been working on making wheels for new releases. The details of the format do not matter as much as having one format: at Enthought, we have been using the egg format for years to deploy Python, C/C++ libraries and other assets, but we would have been using wheels if they had existed at the time. Adding features like pre-remove/post-install to wheels would be great, but that's a relatively simpler discussion.

I agree with your sentiment that the main value of sumo distributions like Anaconda, ActivePython or our own Canopy is the binary packaging plus making sure it all works together. There will always be some limitations in making those sumo distributions work seamlessly with 'standard' Python, but those are pretty much the same issues as e.g. Linux integrators have. If the Python packaging efforts help Linux distribution integration, they are very likely to help us (us == sumo distribution builders) too.

David
Re: [Python-Dev] Python Language Summit at PyCon: Agenda
On Mon, Mar 4, 2013 at 4:34 PM, Brett Cannon wrote:

> On Mon, Mar 4, 2013 at 11:29 AM, Barry Warsaw wrote:
>
>> On Mar 04, 2013, at 07:26 PM, Robert Collins wrote:
>>
>>> It is of course possible for subunit and related tools to run their
>>> own implementation, but it seems ideal to me to have a common API
>>> which regular unittest, nose, py.test and others can all agree on
>>> and use: better reuse for pretty printers, GUI displays and the like
>>> depend on some common API.
>>
>> And One True Way of invoking, and/or discovering how to invoke, a
>> package's test suite.
>
> How does unittest's test discovery not solve that?

It is not always obvious how to test a package when one is not familiar with it. Are the tests in pkgname/tests, or tests, or ...? In the scientific community, we have used the convention of making the test suite available at runtime with pkgname.tests().

David
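The runtime convention mentioned above can be a thin wrapper around unittest's own discovery; a sketch, with pkgname standing in for a real package:

    # Expose the package's test suite at runtime, built on unittest
    # discovery; "pkgname.tests" is a hypothetical tests subpackage.
    import unittest

    def test(verbosity=1):
        loader = unittest.TestLoader()
        suite = loader.discover("pkgname.tests")
        return unittest.TextTestRunner(verbosity=verbosity).run(suite)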
[Python-Dev] Making the new dtrace support work on OS X
Hi,

I was excited to see official dtrace support for Python 3.6.0 on OS X, but I have not been able to make it work:

1. I built my own Python from sources on OS X 10.9, with --with-dtrace support.

2. If I launch `python3.6 -q &` and then `sudo dtrace -l -P python$!`, I get the following output:

      ID   PROVIDER       MODULE      FUNCTION                   NAME
    2774   python48084    python3.6   _PyEval_EvalFrameDefault   function-entry
    2775   python48084    python3.6   _PyEval_EvalFrameDefault   function-return
    2776   python48084    python3.6   collect                    gc-done
    2777   python48084    python3.6   collect                    gc-start
    2778   python48084    python3.6   _PyEval_EvalFrameDefault   line

   Which looks similar to, but not the same as, the example given in the doc at https://docs.python.org/dev/howto/instrumentation.html#enabling-the-static-markers

3. When I try to test anything with the given call_stack.d example, I can't make it work at all:

    # script.py
    def start():
        foo()

    def foo():
        pass

    start()

I am not very familiar with dtrace, so maybe I am missing a step, there is a documentation bug, or it depends on which OS X version you are using?

Thanks,
David
Re: [Python-Dev] Making the new dtrace support work on OS X
On Fri, Jan 13, 2017 at 9:12 PM, Lukasz Langa wrote:

> Looks like function-entry and function-return give you the C-level
> frame names for some reason. This was implemented on OS X 10.11 if
> that makes any difference. I will look at this in the evening, the
> laptop I'm on now is macOS Sierra with SIP which cripples dtrace.

On that hint, I tried on OS X 10.11. sw_vers says:

    ProductName:    Mac OS X
    ProductVersion: 10.11.6
    BuildVersion:   15G1108

And there, the example worked as advertised with my build of 3.6.0. I will try more versions of OS X in our test lab.

David

> On Jan 12, 2017, at 5:08 AM, David Cournapeau wrote:
>
>> I was excited to see official dtrace support for Python 3.6.0 on
>> OS X, but I have not been able to make it work:
>> [...]
>> I am not very familiar with dtrace, so maybe I am missing a step,
>> there is a documentation bug, or it depends on which OS X version you
>> are using?
[Python-Dev] SSL certificates recommendations for downstream python packagers
Hi,

I am managing the team responsible for providing Python packaging at Enthought, and I would like to make sure we are providing a good (and secure) out-of-the-box experience for SSL.

My understanding is that PEP 476 is the latest PEP that concerns this issue, and that PEP recommends using the system store: https://www.python.org/dev/peps/pep-0476/#trust-database. But looking at binary Python distributions from python.org, that does not seem to always be the case. I looked at the following:

* 3.5.3 from python.org for OS X (64 bits): this uses the old, system OpenSSL.
* 3.6.0 from python.org for OS X: this embeds a recent OpenSSL, but ssl seems to be configured to use non-existing paths (ssl.get_default_verify_paths()), and indeed, cert validation seems to fail by default with those installers.
* 3.6.0 from python.org for Windows: I have not found how the ssl module finds the certificates, but certificate validation seems to work.

Are there any official recommendations for downstream packagers beyond PEP 476? Is it "acceptable" for downstream packagers to patch Python's default cert locations?

David
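For anyone comparing builds, the paths in question can be inspected directly from the interpreter under test:

    # Show which OpenSSL this build uses and where it looks for certificates.
    import ssl

    print(ssl.OPENSSL_VERSION)
    print(ssl.get_default_verify_paths())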
Re: [Python-Dev] SSL certificates recommendations for downstream python packagers
On Mon, Jan 30, 2017 at 8:50 PM, Cory Benfield wrote:

> On 30 Jan 2017, at 13:53, David Cournapeau wrote:
>
>> Are there any official recommendations for downstream packagers
>> beyond PEP 476? Is it "acceptable" for downstream packagers to patch
>> Python's default cert locations?
>
> There *are* no default cert locations on Windows or macOS that can be
> accessed by OpenSSL.
>
> I cannot stress this strongly enough: you cannot provide
> platform-native certificate validation logic for Python *and* use
> OpenSSL for certificate validation on Windows or macOS. (macOS can
> technically do this when you link against the system OpenSSL, at the
> cost of using a catastrophically insecure version of OpenSSL.)

Ah, thanks, that's already useful information.

Just making sure I understand: this means there is no way to use Python's ssl library with the system store on Windows, in particular the private certificates that are often deployed by internal IT in large orgs?

> The only program I am aware of that does platform-native certificate
> validation on all three major desktop OS platforms is Chrome. It does
> this using a fork of OpenSSL to do the actual TLS, but the
> platform-native crypto library to do the certificate validation. This
> is the only acceptable way to do this, and Python does not expose the
> appropriate hooks to do it from within Python code. This would require
> that you carry substantial patches to the standard library to achieve
> this, all of which would be custom code. I strongly recommend you
> don't undertake to do this unless you are very confident of your
> ability to write this code correctly.

That's exactly what I was afraid of, and why I asked before attempting anything.

> The best long term solution to this is to stop using OpenSSL on
> platforms that don't consider it the 'blessed' approach. If you're
> interested in following that work, we're currently discussing it on
> the security-SIG, and you'd be welcome to join.

Thanks, I will see if it looks like I have anything to contribute.

David
Re: [Python-Dev] SSL certificates recommendations for downstream python packagers
On Mon, Jan 30, 2017 at 8:50 PM, Cory Benfield wrote:

> On 30 Jan 2017, at 13:53, David Cournapeau wrote:
>
>> Are there any official recommendations for downstream packagers
>> beyond PEP 476? Is it "acceptable" for downstream packagers to patch
>> Python's default cert locations?
>
> There *are* no default cert locations on Windows or macOS that can be
> accessed by OpenSSL.

Also, doesn't that contradict the wording of PEP 476, specifically: "Python would use the system provided certificate database on all platforms. Failure to locate such a database would be an error, and users would need to explicitly specify a location to fix it."?

Or is that PEP a long-term goal, and not a description of the current status?

David
Re: [Python-Dev] SSL certificates recommendations for downstream python packagers
On Mon, Jan 30, 2017 at 9:14 PM, Christian Heimes wrote:

> On 2017-01-30 22:00, David Cournapeau wrote:
>
>> Just making sure I understand: this means there is no way to use
>> Python's ssl library with the system store on Windows, in particular
>> the private certificates that are often deployed by internal IT in
>> large orgs?
>
> That works with CPython because we get all trust anchors from the cert
> store. However, Python is not able to retrieve *additional*
> certificates. A new installation of Windows starts off with a minimal
> set of trust anchors. Chrome, IE and Edge use the proper APIs.

Hm. Is this documented anywhere? We have customers needing "private/custom" certificates, and I am unsure where to look.

David
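For reference, the stdlib does expose the Windows system stores that load_default_certs() reads from; a sketch of inspecting them (Windows-only API):

    # Enumerate trust anchors from the Windows "ROOT" system store; "CA"
    # and "MY" are other common store names.
    import ssl

    roots = ssl.enum_certificates("ROOT")
    print(len(roots), "certificates in the ROOT store")
    for cert, encoding, trust in roots[:3]:
        print(encoding, trust)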
Re: [Python-Dev] SSL certificates recommendations for downstream python packagers
On Tue, Jan 31, 2017 at 9:19 AM, Cory Benfield wrote:

> On 30 Jan 2017, at 21:00, David Cournapeau wrote:
>
>> Just making sure I understand: this means there is no way to use
>> Python's ssl library with the system store on Windows, in particular
>> the private certificates that are often deployed by internal IT in
>> large orgs?
>
> If only it were that simple!
>
> No, you absolutely *can* do that. You can extract the trust roots from
> the system trust store, convert them into PEM/DER-encoded files, and
> load them into OpenSSL. That will work.

Right, I guess it depends on what one means by "can". In my context, it was to be taken as "can it work without the end user having to do anything". We provide them a Python-based tool, and it has to work with the system store out of the box. If the system store is updated through e.g. group policy, our Python tool automatically gets that update.

From the sound of it, this is simply not possible at the moment with Python, at least not without third-party libraries.

David

> The problem is that both SecureTransport and SChannel have a number of
> differences from OpenSSL. In no particular order:
>
> 1. Their chain building logic is different. This means that, given a
>    collection of certificates presented by a server and a bundle of
>    already-trusted certs, each implementation may build a different
>    trust chain. This may cause one implementation to refuse to
>    validate where the others do, or vice versa. This is very common
>    with older OpenSSLs.
> 2. SecureTransport and SChannel both use the system trust DB, which on
>    both Windows and mac allows the setting of custom policies. OpenSSL
>    won't respect these policies, which means you can fail-open (that
>    is, export and use a root certificate that the OS believes should
>    not be trusted for a given use case). There is no way to export
>    these trust policies into OpenSSL.
> 3. SecureTransport, SChannel, and OpenSSL all support different X.509
>    extensions and understand them differently. This means that some
>    certs may be untrusted for certain uses by Windows but trusted for
>    those uses by OpenSSL, for example.
>
> In general, it is unwise to mix trust stores. If you want to use your
> OS's trust store, the best approach is to use the OS's TLS stack as
> well. At least that way when a user says "It works in my browser", you
> know it should work for you too.
>
> Cory
Re: [Python-Dev] Attention Bazaar mirror users
On Sat, Feb 21, 2009 at 6:21 AM, Barry Warsaw wrote:

> Adam Olsen reminds me that bzr 1.9 won't be supported by default in
> Ubuntu until Jaunty in April, and Thomas reminds me that Debian still
> just has 1.5.
>
> In both those cases, you can use the PPA:
>
> https://launchpad.net/~bzr/+archive/ppa

Please note that for many people in a corporate/university environment, this is not an option. Granted, you can install it by yourself at this point.

David
Re: [Python-Dev] Attention Bazaar mirror users
On Sat, Feb 21, 2009 at 3:52 PM, Stephen J. Turnbull wrote:

> David Cournapeau writes:
>
>> On Sat, Feb 21, 2009 at 6:21 AM, Barry Warsaw wrote:
>>> In both those cases, you can use the PPA:
>>
>> Please note that for many people in a corporate/university
>> environment, this is not an option. Granted, you can install it by
>> yourself at this point.
>
> Er, what are people without access to PPAs doing building Python from
> a VCS checkout?

I don't see the link between access to PPAs and building Python from sources. I don't have administration privileges on any of my machines at work. Adding PPAs is simply not allowed in some places (PPAs or anything else which is not considered 'safe'), or is too much of a (bureaucratic) burden.

cheers,

David
Re: [Python-Dev] Attention Bazaar mirror users
On Sat, Feb 21, 2009 at 9:15 PM, Stephen J. Turnbull wrote:

> "Martin v. Löwis" writes:
>
>> sjt sez:
>>
>>> I didn't say "from source", I said "from a VCS checkout". If using
>>> a *specific* recent official release of a core tool is
>>> bureaucratically infeasible, it would IMO be very unusual if you're
>>> allowed to check out and build arbitrary versions of Python, rather
>>> than using a version provided by your bureaucrats.
>>>
>>> The number of people whose job is *specifically* developing Python,
>>> or developing code that depends on bleeding-edge Python, in such an
>>> environment is surely very small.
>>
>> This completely contradicts my experience. In a university
>> environment, students regularly check out software from the source
>> repository, modify it, and build it, just to learn something by
>> doing so.
>
> You're ignoring the second paragraph quoted above. I'm *not* denying
> that such environments are common. The question is "Do developers
> *restricted to such environments* really have an impact on Python
> development to outweigh the real cost of standardizing on an older
> implementation of Bazaar to developers who would be able to use a more
> capable version?"

That was not the original question. I just meant to say that not being able to install from a PPA is not hypothetical in some of my work environments, not that it would be significant for Python's future :)

David
Re: [Python-Dev] Integrate BeautifulSoup into stdlib?
On Tue, Mar 24, 2009 at 6:47 AM, "Martin v. Löwis" wrote:

> already the introduction of eggs made life worse for Debian package
> maintainers, at least initially - i.e. for a few years.

It still is, FWIW.

David
Re: [Python-Dev] Integrate BeautifulSoup into stdlib?
On Tue, Mar 24, 2009 at 8:53 PM, Steve Holden wrote:

> I'm not convinced we do need a cross-platform packaging solution, so I
> may have explained my views badly. I regard application developers as
> Python users, so I did not intend to suggest that the requirement for
> stand-alone installation came from them.
>
> My main concern is that if Linux and Unix (Lunix) application
> installation results, as is the case with setuptools, in the download
> and/or installation of arbitrary support packages then we may end up
> condemning Python app users to our own version of DLL hell (package
> purgatory?).

There already is a bit of a DLL hell in Python. The whole idea of solving dependency problems by installing multiple versions of the same software is fundamentally flawed; it just does not work for general deployment on multiple machines. Many systems outside Python, with more resources, have tried - and failed. By enabling general, system-wide installation of multiple versions of the same package, setuptools has made the situation worse. I am quite puzzled that many people don't realize this fundamental issue: it is a simple combinatorial problem.

If the problem is getting a recent enough version of a library, then the library would better be installed "locally", for the application. If that is too much of a problem because the application depends on billions of libraries which are 6 months old, the problem is allowing such a dependency in the first place. What kind of nightmare would it be if programs developed in C required a C library which is 6 months old? That's exactly what multiple-version installations inflict on us. That's great for testing and development, but for deployment on end-user machines, the whole thing is a failure IMO.

> I am afraid that distutils, and setuptools, are not really the answer
> to the problem, since while they may (as intended) guarantee that
> Python applications can be installed uniformly across different
> platforms they also more or less guarantee that Python applications
> are installed differently from all other applications on the
> platform.

I think they should be part of the solution, in the sense that they should allow easier packaging for the different platforms (Linux, Windows, Mac OS X and so on). For now, they make things much harder than they should be (difficult to follow the FHS, etc.). But otherwise, I agree. Python applications which care about non-savvy users should be distributed as .dmg, .exe, .rpm, .deb. There will always be some work needed to do that correctly: there is no way to provide a general, automatic environment that builds installers giving a good experience on all platforms. AFAIK, it does not even exist in the commercial landscape, so I see little chance of seeing it in the Python ecosystem.

cheers, David
Re: [Python-Dev] Integrate BeautifulSoup into stdlib?
2009/3/24 Toshio Kuratomi:

> Steve Holden wrote:
>
>> Seems to me that while all this is fine for developers and Python
>> users it's completely unsatisfactory for people who just want to use
>> Python applications. For them it's much easier if each application
>> comes with all dependencies including the interpreter.
>>
>> This may seem wasteful, but it removes many of the version
>> compatibility issues that otherwise bog things down.
>
> The upfront cost of bundling is lower but the maintenance cost is
> higher. For instance, OS vendors have developed many ways of being
> notified of and dealing with security issues. If there's a security
> issue with gtkmozdev and the python bindings to it have to be
> recompiled, OS vendors will be alerted to it and have the opportunity
> to release updates on zero day, the day that the security announcement
> goes out.

I don't think bundling should be compared to depending on the system libraries, but rather seen as a lesser evil compared to requiring multiple versions of libraries installed system-wide.

> 3) Over time, bundled libraries tend to become forked versions. And
> worse, privately forked versions. If three python apps all use
> slightly different older versions of libfoo-python and have backported
> fixes, added new features, etc it is a nightmare for a system
> administrator or packager to get them running with a single version
> from the system library or forward port them. And because they're
> private forks the developers lose out on collaborating on security,
> bugfixes, etc because they are doing their work in isolation from the
> other forks.

This is a purely technical problem, and can be handled by good source control systems, no?

cheers, David
Re: [Python-Dev] Integrate BeautifulSoup into stdlib?
On Wed, Mar 25, 2009 at 1:45 AM, Toshio Kuratomi wrote:

> David Cournapeau wrote:
>
>> I don't think bundling should be compared to depending on the system
>> libraries, but rather seen as a lesser evil compared to requiring
>> multiple versions of libraries installed system-wide.
>
> Well.. I'm not so sure it's even a win there. If the libraries are
> installed system-wide, at least the consumer of the application knows:
>
> 1) Where to find all the libraries to audit the versions when a
>    security issue is announced.
> 2) That the library is unforked from upstream.
> 3) That all the consumers of the library version have a central
>    location to collaborate on announcing fixes to the library.

Yes, those are problems, but installing multiple versions has a lot of problems too:

- quickly, by enabling multiple installed versions, people become very sloppy about handling the versions of their dependencies, and this greatly increases the number of installed libraries - so the advantages above for system-wide installation become intractable quite quickly
- bundling also supports a real use case which cannot be solved by rpm/deb AFAIK: installation without administration privileges
- multi-version installation gives very fragile systems. That's actually my number one complaint about Python: setuptools has caused me numerous headaches, and I got many bug reports because you often do not know why one version was loaded instead of another. So I am not so convinced that multiple versions are better than bundling - I can see how they sometimes can be, but I am not sure those advantages matter much in practice.

> No. This is a social problem. Good source control only helps if I am
> tracking upstream's trunk so I'm aware of the direction that their
> changes are headed. But there's a wide range of reasons that
> application developers that bundle libraries don't do that:
>
> 1) not enough time in a day. I'm working full-time on making my
>    application better. Plus I have to update all these bundled
>    libraries from time to time, testing that the updates don't break
>    anything. I don't have time to track trunk for all these libraries
>    -- I barely have time to track releases.

Yes, but in that case, there is nothing you can do. Putting everything in one project is always easier than splitting it into modules, coding- and deployment-wise. That's just one side of the speed-of-development vs maintenance trade-off IMHO.

> 3) This doesn't help with the fact that my bundled version of the
>    library and your bundled version of the library are being developed
>    in isolation from each other. This needs central coordination which
>    people who believe in bundling libraries are very unlikely to
>    pursue.

As above, I think that in that case it will happen whatever tools are available, so it is not a case worth pursuing.

cheers, David
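For reference, the setuptools multi-version mechanism under discussion keeps eggs off sys.path until a version is requested explicitly; a sketch with a hypothetical package (assuming eggs installed with easy_install's multi-version flag):

    # Nothing from a multi-version egg is importable until a requirement
    # pins a version onto sys.path.
    import pkg_resources

    pkg_resources.require("somelib==1.2")  # hypothetical package and version
    import somelib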
Re: [Python-Dev] Integrate BeautifulSoup into stdlib?
On Wed, Mar 25, 2009 at 2:20 AM, Tres Seaver wrote:

> Many of us using setuptools extensively tend to adopt an "isolated
> environment" strategy (e.g., pip, virtualenv, zc.buildout). We don't
> install the packages used by different applications into shared
> directories at all. Instead, each environment uses a restricted subset
> of packages known to work together.

Is that a working solution when you want to enable easy installation for a large number of "customers"?

In these discussions, I often see different solutions depending on the kind of projects people do. I don't know anything about Plone, but I can imagine the deployment issues are quite different from the projects I am involved in (numpy and co). Every time I tried to understand what buildout was about, I was not even sure it could help with my own problems at all. It seems very specific to web development - maybe I completely miss the point?

virtualenv, pip, yolk - those are useful tools for development and testing, but I don't see how they can help me make the installation of a numpy environment easier on many different kinds of platforms.

>> If the problem is getting a recent enough version of a library, then
>> the library would better be installed "locally", for the application.
>> If that is too much of a problem because the application depends on
>> billions of libraries which are 6 months old, the problem is allowing
>> such a dependency in the first place. What kind of nightmare would it
>> be if programs developed in C required a C library which is 6 months
>> old? That's exactly what multiple-version installations inflict on
>> us. That's great for testing and development, but for deployment on
>> end-user machines, the whole thing is a failure IMO.
>
> It is wildly successful, even on platforms such as Windows, when you
> abandon the notion that separate applications should be sharing the
> libraries they need.

Well, I may not have been clear: I meant that in my experience, deploying something with several dependencies was easier with bundling than with a mechanism à la setuptools involving *system-wide* installation of multiple versions of the same library. So I think we agree here: depending on something stable (the Python stdlib plus a few well-known things) system-wide is OK; for anything else, not sharing is easier and more robust in the current state of things, at least when one needs to stay cross-platform.

Almost every deployment problem I got from people using my own software was related to setuptools, and in particular the multiple-version thing. For end users who know nothing about Python's package mechanism, and do not care about it, that's really a PITA to debug, and it leaves a bad taste. The fact that those problems happened when my software was not even *using* setuptools was a real deal breaker for me, and I have been strongly biased against setuptools ever since.

> FHS is something which packagers / distributors care about: I strongly
> doubt that the "end users" will ever notice, particularly for
> silliness like 'bin' vs. 'sbin', or architecture-specific vs. 'noarch'
> rules.

That's not silly, and that's a bit of a fallacy. Of course end users do not care about the FHS in itself, but following the FHS enables good integration in the system, which end users do care about. I like finding my doc in /usr/share/doc whatever I install - just as I am sure every Windows user likes to find installed software in the control panel.

> As a counter-example, I offer the current Plone installer[1], which
> lays down a bunch of egg-based packages in a cross-platform way
> (Windows, MacOSX, Linux, BSDs). It uses zc.buildout, which makes
> configuration-driven (repeatable) installation of add-ons easy.

But zc.buildout is not a solution for deploying applications, right? In my understanding, it is a tool to deploy Plone instances on server/test machines, but that's quite a different problem from installing "applications" for users who may not even know what Python is.

cheers, David
Re: [Python-Dev] Integrate BeautifulSoup into stdlib?
On Wed, Mar 25, 2009 at 3:15 AM, Tres Seaver wrote:

>> Every time I tried to understand what buildout was about, I was not
>> even sure it could help with my own problems at all. It seems very
>> specific to web development - maybe I completely miss the point?
>
> I think so: it is largely a way to get repeatable / scripted
> deployment of software to disk. It uses setuptools to install Python
> package distributions, but also can use other means (e.g.,
> configure-make-make-install to install a C library such as libxml2).
> The end result is a self-contained directory tree:
>
> - Scripts in the 'bin' directory are configured to have the specific
>   Python packages (and versions) they require on the PYTHONPATH.
>
> - By convention, released package distributions are installed into the
>   'eggs' subdirectory, which is *not* on the PYTHONPATH, nor is it a
>   'site' directory for Python.
>
> - Other bits are typically in their own subdirectories, often under
>   'parts'.

Ok - but I don't think it helps much, see below.

> When not doing Plone / Zope-specific work (where zc.buildout is a de
> facto standard), I use 'virtualenv' to create isolated environments
> into which I install the libraries for a given application. If your
> application ships as Python package distributions, then documenting
> the use of 'virtualenv' as a "supported" way to install it might
> reduce your support burden.

I now realize why we don't understand each other - in my case, the one doing the installation is the user, who cannot be assumed to know much about Python. That's what I mean by "application deployment vs webapp deployment". Ideally, the user has to enter one command or click one button, and he can use the application - he may not even be a programmer (I deploy things based on numpy/scipy to scientific people, who often have little patience for things which take more than a minute to set up, software-wise). That's the use case I am mostly interested in - and I think it is quite general, and quite different from Plone-style deployment.

> You can think of zc.buildout or the virtualenv-based script as a form
> of bundling, which bootstraps from another already-installed Python,
> but remains isolated from anything in its 'site-packages'.

Yes, I know what virtualenv is, I use it sometimes - but it is definitely too complicated for the people I want to distribute my software to.

> I never even use that switch manually. zc.buildout does, but that is
> because it wants to control the PYTHONPATH by generating the script
> code: it doesn't ask users to tweak that.

Well, that's the thing: neither do I :) But if my software is a dependency of another piece of software, then I get bothered about problems with software that is not used at all by my own package (because setuptools makes an egg of my software automatically, screws things up, and I am the one blamed for it).

> I don't know why anybody who was not writing a packaging tool, or
> packaging a library for something like .deb / .rpm, would even use the
> multi-version option for setuptools: I don't see any sane way to
> install conflicting requirements into a shared 'site-packages'.

But that's the problem: it often happens *even if you don't use setuptools yourself*. I would not be surprised if that's one reason why so many people have a seemingly unfair bias against setuptools. That's certainly the reason in my case.

cheers, David
Re: [Python-Dev] Integrate BeautifulSoup into stdlib?
> > This is only sort of true. You can install rpms into a local directory > without root privileges with a commandline switch. But rpm/deb are > optimized for system administrators so the documentation on doing this > is not well done. There can also be code issues with doing things this > way but those issues can affect bundled apps as well. And finally, since > rpm's primary use is installing systems, the toolset around it builds > systems. So it's a lot easier to build a private root filesystem than > it is to cherrypick a single package. It should be possible to create a > tool that merges a system rpmdb and a user's local rpmdb using the > existing API but I'm not aware of any applications built to do that yet. Building a private root kind of defeats the purpose :) Deploying linux packages in a reliable way without requiring admin privileges is an "interesting" experience. The tools just don't support this well - or there exist some magical tools that I've never seen mentioned. > I won't argue for setuptools' implementation of multi-version. It > sucks. But multi-version can be done well. Sonames in C libraries are > a simple system that does this better. I would say simplistic instead of simple :) what works for C won't necessarily work for python - and even in C, library versioning is not used that often except for a few core libraries. Library versioning works in C because the C model is very simple. It already breaks for C++. Higher-level languages like C# already have a more complicated scheme (the GAC) - and my impression is that it did not work that well. SxS for DLLs on recent Windows, which handles multiple versions, is a nightmare too in my (limited) experience. >> > I'm confused -- if it will happen whatever tools are available, how does > "good source control" solve the issue? I'm saying that this is not an > issue that can be solved by having good source control... it's a social > issue that has to be solved by people learning to avoid bad practices. I meant that whatever technology is available, bundling everything will always be easier. And sometimes, given the time/resource constraints, that's even the only realistic option. I could tell you many stories about wasted hours related to some fortran libraries which were hopelessly broken (missing symbols) on most distributions, or about ABI conflicts - cases where bundling is sometimes the only path to keeping one's sanity (short of giving up support for the platform). When you need to solve the problem now because you want to demo things tomorrow, not in 6 months, that's the only solution. It is not always bad practice. IMHO, one should focus on making it easier to avoid bundling everything - a robust and simple dependency scheme, adapting the distutils installation scheme to the various OS conventions, be it FHS, windows, etc... But we can't and shouldn't prevent it totally, and tools are already there to help minimize the problems of bundling. As for multiple system-wide versions of libraries, I have yet to encounter anything which makes them somewhat reliable - they have only caused problems for me, and not solved any single problem. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] "setuptools has divided the Python community"
On Thu, Mar 26, 2009 at 12:37 AM, Tres Seaver wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Barry Warsaw wrote: > >> Maybe there's a difference between being a Zope user and using zope >> packages? I think it's great that I can pick and choose >> zope.interfaces and other packages in my not-Zope project. But if I'm >> deploying actual Zope-the-web-thing I just want to install it from my >> distro and be done with it. It's up to my distro to determine >> compatibility, handle bug and security fixing, etc. > > Historically, the distros have done a less than stellar job here, too. I don't think that's entirely accurate. For software I don't care about directly as a developer, because I only use it and don't intend to change anything in it, the linux distribution is a godsend IMO. Being able to update and upgrade everything in a centralized, predictable manner works very well. It fails for software I am directly involved in, or maybe the layer just below: for example, there is no way for me to get a python 2.6 on my distribution (Ubuntu), so I cannot easily test the python projects I am involved in for python 2.6. But for the layers below, it is almost perfect. If python, as a platform, could be as reliable as debian, I would be extremely happy. I just don't think it is possible, because of the huge amount of work this requires. Tools are just a small part of it - you need a lot of discipline and work to make sure everything fits together, and that can't happen for every python lib/application out there. I already mentioned this on the distutils ML, but IMO, the only workable solution is to have a system which makes it *possible* for OS distributors to package in whatever format they see fit (.deb, .rpm, .dmg, .exe, .msi). Distutils, as of today, makes this much harder than it should be for non-trivial software (documentation, controlling what goes where, etc...). That's something we can hope to improve. Again, I will take the autoconf example: it has no policy, and can be used in different kinds of situations, because you can (if you want) control things in a very fine-grained manner. Automatic, 'native' installers well integrated into every system seem so far out of reach that I don't see how this can even be considered. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] "setuptools has divided the Python community"
On Thu, Mar 26, 2009 at 12:02 PM, Nick Coghlan wrote: > If that perception is accurate, then any changes likely need to focus on > the *opposite* end of the toolchain: the part between the "<packaging spec>" and the end users. Yes - but is this part the job of python ? > In other words: Given an egg, how easy is it for a packager/distributor > to create a platform specific package that places the files in the > correct locations for that particular platform (regardless of how > arbitrary those rules may appear to the original developers)? Why start from eggs and not from the build tool provided by python itself (distutils) ? I don't see what eggs bring - especially since the format is not even standardized. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Integrate BeautifulSoup into stdlib?
On Thu, Mar 26, 2009 at 12:26 PM, "Martin v. Löwis" wrote: >> Tools like setuptools, zc.buildout, etc. seem great for developers but not >> very good for distributions. At last year's Pycon I think there was >> agreement from the Linux distributors that distutils, etc. just wasn't very >> useful for them. > > I think distutils is different here - it not only helps creating > distributions (i.e. deployable package files), but also allows > direct installation. This, in turn, is used to build the packages > for Linux distributions. E.g. debian/rules often contains a > "setup.py install" call in its build step (and there is even a > CDBS python-distutils.mk fragment) > > In that sense, distutils is for Python what make is for C. It is more like the whole autotools suite (at least for unix), and that's the problem: distutils does everything quite poorly, and once you need to do something that distutils can't do out of the box, you are in a no man's land because distutils is almost impossible to extend (everything is done internally, with no way to recover data short of rewriting everything or monkey patching). To take a recent example, I wanted to add the ability to install a clib extension (pure C, no python API), so that it can be used by other projects: that would take 2 minutes with any build tool out there, but is almost impossible in distutils, unless you rewrite your own build_clib and/or install commands. Even autotools is more enjoyable, which is quite an achievement :) If distutils was split into different modules (one for the build, one for the compiler/platform configuration, one for the installation), which could be extended, tweaked, it would be much better. But the distutils design makes this inherently very difficult (commands). cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
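[A minimal sketch of the build_clib pain point described above, assuming a hypothetical pure-C library "foo"; this is illustrative boilerplate, not the actual numpy code. build_clib only builds static libraries for linking into extensions, so installing one means hand-rolling two command subclasses:

    from distutils.command.build_clib import build_clib
    from distutils.command.install import install
    from distutils.core import setup

    class build_installable_clib(build_clib):
        def run(self):
            build_clib.run(self)
            # would need to record self.build_clib (the output directory)
            # somewhere the install command can find it

    class install_with_clib(install):
        def run(self):
            install.run(self)
            # would need to manually copy the built library and its
            # headers into the target prefix here

    setup(name="example",
          libraries=[("foo", {"sources": ["foo.c"]})],
          cmdclass={"build_clib": build_installable_clib,
                    "install": install_with_clib})

]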
Re: [Python-Dev] Integrate BeautifulSoup into stdlib?
On Thu, Mar 26, 2009 at 2:01 PM, Tarek Ziadé wrote: > On Thu, Mar 26, 2009 at 5:32 AM, David Cournapeau wrote: >> If distutils was split into different modules (one for the build, one >> for the compiler/platform configuration, one for the installation), >> which could be extended, tweaked, it would be much better. But the >> distutils design makes this inherently very difficult (commands). > > I am not sure why the command design is a problem here. For several reasons: - options handling cannot be done correctly. If you need to pass some options specific to the build, you have to pass them to build_clib and build_ext, etc... Example: I would really like to add options like --with-libfoo ala autoconf so that the packager can simply say where to look for a library (headers, .so, etc). This cannot be done easily in distutils (no persistence, no easy way to communicate between commands) - the whole concept of commands is bogus for a build tool. The correct way to do builds is with a DAG, to handle dependencies. > And I think > Distutils features are not far from > what you need, if you look at things like customize_compiler, which is > called by build_clib. The whole customize_compiler is awful. You cannot call it when you want, but only at some arbitrary, undocumented point in the execution. You have to create your own command, because you can't call it in setup.py directly. You may have to call initialize_something_which_has_nothing_to_do_with_compiler(), which may break on windows because the MS compiler abstraction is totally different from the unix one. It is actually hard to believe if you have never had to deal with it: so many trivial things which are one line of code in every other tool are difficult, obscure, magic or damn impossible in distutils. Modifying compiler flags? You have to create a new compiler class. Installing docs? You have to create your own install class. etc... > > I'm ready to discuss your use case in Distutils-SIG, if you have a > complete example etc. Ok, I will give you the example on the distutils ML, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
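[To make the compiler-flags point above concrete, here is roughly what appending a single flag looks like; the flag and module names are made up, and this is a sketch of the required boilerplate, not a recommendation:

    from distutils.command.build_ext import build_ext
    from distutils.core import setup, Extension

    class build_ext_with_flag(build_ext):
        def build_extensions(self):
            # self.compiler only exists once the command runs; there is
            # no supported hook to adjust flags earlier, and nothing
            # persists the choice for other commands to see
            for ext in self.extensions:
                ext.extra_compile_args = list(ext.extra_compile_args or []) + ["-ffast-math"]
            build_ext.build_extensions(self)

    setup(name="example",
          ext_modules=[Extension("spam", ["spam.c"])],
          cmdclass={"build_ext": build_ext_with_flag})

]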
Re: [Python-Dev] "setuptools has divided the Python community"
On Thu, Mar 26, 2009 at 2:01 PM, Nick Coghlan wrote: > Yes, that metadata is what I meant to refer to rather than zipped .egg > files specifically. An egg is just one example of something which > includes that metadata. Ok, my bad. Being able to describe meta-data for installed files is indeed sorely needed, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] "setuptools has divided the Python community"
On Thu, Mar 26, 2009 at 2:42 PM, Stephen J. Turnbull wrote:
>
>                            +-> E --> downstream developer -+
>                            |                               |
>                            |             +----------+      V
> source -> build -> A -> B -+-> C -> D -> | END USER | <----+
>                            |             +----------+      A
>                            |                               |
>                            +-> F -> OS distro -------------+
>
According to your diagram, build->A is the only part where describing meta-data is possible so that everyone benefits from it - which is what I believe Tarek is working on, cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] "setuptools has divided the Python community"
On Fri, Mar 27, 2009 at 8:28 PM, Ben Finney wrote: > > I would argue that the Python community has a wealth of people quite > capable of taking on this particular task, and if it makes the core > architecture and maintenance of ‘distutils’ simpler to remove special > cases for binary installers, I think that's a pearl of great price. I think there are two points here: making binary installers pluggable, so that they are independent of a core distutils, and including such plugins in the stdlib. Nobody argues against the first one: it is certainly a common complaint that distutils is a big ball of code where everything is intertwined. Concerning contributions for windows binaries: having been the numpy developer in charge of windows binaries and windows support for a while, my experience is that the windows situation for contribution is very different from the other platforms. The mentality is just different. At the risk of an overly broad and unfair generalization, my experience is that on windows, people just want things to work, complain when they do not, and almost never contribute back to make it work, or when they do, they are almost never familiar with how things work on other platforms, so they suggest broken fixes. To put it differently: I mostly use Linux, and the less time I am on windows, the happier I am, but bdist_wininst is the only distutils bdist_* command I care about. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] "setuptools has divided the Python community"
On Fri, Mar 27, 2009 at 9:49 PM, M.-A. Lemburg wrote: > I think that esp. the bdist_* commands help developers a lot by > removing the need to know how to build e.g. RPMs or Windows > installers and let distutils deal with it. I think it is a bit dangerous to build rpm/deb without knowing how to build them, because contrary to windows .exe, rpm/deb install things system-wide, and you could easily break something. I don't think you can build deb/rpm without knowing quite a lot about them. > (*) I've had a go at this a few months ago and then found out > that the egg format itself is not documented anywhere. As a result > you have to dig deep into setuptools to find out which files > are needed and where. That's something that needs to change > (Tarek is already working on a PEP for this, AFAIK). It is "documented" here: http://peak.telecommunity.com/DevCenter/EggFormats But as said in the preamble, people are not supposed to rely on this. I for one would be really happy if I could build eggs without setuptools - for example to build eggs from scons, scripts, etc... cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] splitting out bdist_*
2009/3/28 Stephen J. Turnbull : > > Sure, but use for internal distribution is very different from the > problem it's being asked to solve now, isn't it? IIUC, you're > basically using RPM as an installer for a standalone application, > where you set policy at both ends, packaging and installation. This > may be a group of modules which may have internal interdependencies, > but very likely the environment doesn't change much. Pretty much > anything will do in that kind of situation, and I don't think it would > matter to you if there are zero, one, or twelve such tools in stdlib, > as long as there's one you like in PyPI somewhere. I myself would never use such a tool, unless sanctioned by my OS vendor, because I would not trust it not to break my system. But I think bdist_rpm and bdist_deb address a real deficiency: the lack of an uninstallation feature. Come to think of it, that's exactly why I like bdist_wininst so much when I am on windows (and because the consequences of a bad installer from bdist_wininst seem minimal on windows, since everything is in one directory). David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] splitting out bdist_*
2009/3/29 Stephen J. Turnbull : > I really don't see how that kind of thing can be easily supported by a > Python module maintainer, unless they're also the downstream packager. Almost none of it can. But in my understanding, that's not what most linux packagers/vendors ask for - they will handle the dependencies themselves anyway, because naming conventions and the like are different. What is a pain right now with distutils for packagers is: how to control which files are installed where, and how to control the build (compilation flags, etc...). Packagers generally "like" autotools packages because they can be customized to each distribution's conventions. Autotools do not really handle dependencies either, but they can be customized for vastly different kinds of deployment (one dir for everything ala gobolinux, along the FHS for most major distributions, etc...) - and the upstream developer doesn't need to care much about it. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] bdist_linux (was: setuptools has divided the Python community)
2009/3/29 "Martin v. Löwis" : >> I think that each OS community should maintain its own tool, that complies >> to the OS standard (wich has its own evolution cycle) >> >> Of course this will be possible as long as Distutils let the system >> packager find/change the metadata in an easy way. > > In the specific case of RPMs, I still think that a distutils command > is the right solution. It may be that bdist_rpm is a bit too general, > and that bdist_fedora and bdist_suse might be more useful. > > It all comes down to whether the .spec file is written manually or not. > *If* it is written manually, there is IMO no need to have the packager's > metadata readily available. Whoever writes the spec file can also look > at setup.py. OTOH, if the spec file is automatically generated, I can't > see why a bdist_ command couldn't do that - and a bdist_ command can > easily get at all the package (meta) data it needs. > > So in this case, I think separate access to the meta-data > isn't needed/doesn't help. > > For bdist_deb, things might be different, as the packager will certainly > want to maintain parts of the debian/ directory manually; other parts > would be generated. However, I still believe that a bdist_ command would > be appropriate (e.g. bdist_dpkg). As I understand Matthias Klose, the > tricky part here is that the packaging sometimes wants to reorganize > the files in a different layout, and for that, it needs to know what > files have what function (documentation, regular package, test suite, > etc). If that file classification isn't provided by the package author, > then there would be still a manually-maintained step to reorganize the > files. Maybe I don't understand what is meant by metadata, but I don't understand why we can't provide the same metadata as autotools, so that distutils could be customized from the command line to put data where they belong (according to each OS convention). So that making a FHS compliant package would be as simple as python setup.py distutils --datadir=bla --htmldir=foo I spent some time looking at cabal this afternoon ("haskell distutils"), and from my current understanding, that's exactly what they are doing: http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/authors.html#pkg-descr This way, if some metadata are not provided by upstream, the downstream packager can fix it, and send patches upstream so that other packagers benefit from it. (FWIW, from the reading of cabal documentation, it looks like cabal provides everything I would like distutils to provide: static metadata, good documentation, sane handling of options, etc... Maybe that's something worth looking into as inspiration for improving/fixing distutils) cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] bdist_linux
On Sun, Mar 29, 2009 at 10:42 PM, "Martin v. Löwis" wrote: > >> Maybe I don't understand what is meant by metadata, but I don't >> understand why we can't provide the same metadata as autotools > > Likewise, *this* I do not understand. In what way does autotools > *provide* metadata? I can understand that it *uses* certain metadata, > but it doesn't *provide* them... It does not provide them to external tools, true. Let me rephrase this: why can't distutils use and provide metadata corresponding to the different categories available in autotools ? Autotools provides both customization from the command line and a relatively straightforward way to set which files go where. Last time this point was discussed on distutils-sig, the main worry came from people who do not care about tagging things appropriately. I don't think it is a big problem, because people already do it in setup.py, or distutils can do it semi-automatically (it already has different categories for .py, .pyc, data files, libraries). Also, since packagers have to do it anyway, I think they would prefer something which enables them to send those changes upstream instead of every OS packager having to redo it. >> python setup.py distutils --datadir=bla --htmldir=foo > > What's the meaning of the distutils command? Sorry, this should read python setup.py install ... cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
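[For illustration, the categories distutils already knows about are visible in any setup.py; the file names below are hypothetical. Anything finer-grained than packages/scripts/data_files - htmldir, mandir, and other autotools-style destinations - has no standard place to go:

    from distutils.core import setup

    setup(name="example",
          packages=["example"],                           # python code
          scripts=["bin/example"],                        # executables
          data_files=[
              ("share/example", ["data/table.csv"]),      # data, untagged
              ("share/doc/example", ["doc/index.html"]),  # docs, untagged
          ])

]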
Re: [Python-Dev] Evaluated cmake as an autoconf replacement
On Mon, Mar 30, 2009 at 2:59 AM, Antoine Pitrou wrote: > Jeffrey Yasskin <...@gmail.com> writes: >> >> The other popular configure+make replacement is scons. > > I can only give uninformed information (!) here, but in one company I worked > with, the main project decided to switch from scons to cmake due to some huge > performance problems in scons. This was in 2005-2006, though, and I don't know > whether things have changed. They haven't - scons is still slow. Python is not that big, though (from a build POV). I would think the bootstrap problem is much more significant: I don't find the argument "many desktops already have python" very convincing - what if you can't install it, for example ? AFAIK, scons does not run on jython or ironpython. > > If you want to investigate Python-based build systems, there is waf (*), which > apparently started out as a fork of scons (precisely due to the aforementioned > performance problems). Again, I have never tried it. Waf is definitely faster than scons - something like one order of magnitude. I am not yet very familiar with waf, but I like what I saw - the architecture is much nicer than scons (waf's core is almost ten times less code than scons's), but I would not call it a mature project yet. About cmake: I haven't looked at it recently, but I have a bit of a hard time believing python requires more from a build system than KDE. The claim that it lacks autoheader support is not accurate, if only because kde projects have it: http://www.cmake.org/Wiki/CMake_HowToDoPlatformChecks Whether using it would really be a win for python compared to the current system, I have no idea. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Evaluated cmake as an autoconf replacement
On Mon, Mar 30, 2009 at 3:18 AM, Antoine Pitrou wrote: > What are the compilation requirements for cmake itself? Does it only need a > standard C compiler and library, or are there other dependencies? CMake is written in C++. IIRC, that's the only dependency. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Evaluated cmake as an autoconf replacement
On Tue, Mar 31, 2009 at 2:37 AM, Alexander Neundorf wrote: > On Mon, Mar 30, 2009 at 12:09 AM, Neil Hodgson wrote: > ... >> while so I can't remember the details. The current Python project >> files are hierarchical, building several DLLs and an EXE and I think >> this was outside the scope of the tools I looked at. > > Not sure I understand. > Having a project which builds (shared) libraries and executables which > use them (and which maybe have to be executed later on during the > build) is no problem for CMake, also with the VisualStudio projects. > From what I remember when I wrote the CMake files for python it was > quite straightforward. I think Christian meant that since those are built with Visual Studio project files on windows, but with distutils everywhere else, you can't use a common system without first converting everything to cmake on all the other platforms. Also, when converting a project from one build system to another, doing the 80% takes 20% of the time in my experience. The most time-consuming part is all the small details on not-so-common platforms. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Evaluated cmake as an autoconf replacement
On Tue, Mar 31, 2009 at 3:16 AM, Alexander Neundorf wrote: > > Can you please explain ? What is "those" ? Everything in Lib. On windows, I believe this is done through project files, but on linux at least, and I guess on most other OS, those are handled by distutils. I guess the lack of autoconf on windows is one reason for this difference ? > >> Also, when converting a project from one build system to another, >> doing the 80 % takes 20 % in my experience. > > Getting it working took me like 2 days, if that's 20% it's not too bad ;-) So it means ten days of work to convert to a new system that maybe most python maintainers do not know. What does it bring ? I think supporting cross compilation would be more worthwhile, for example, in the build department. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Mercurial?
On Sun, Apr 5, 2009 at 6:06 PM, "Martin v. Löwis" wrote: >> Off the top of my head, the following is needed for a successful migration: >> >> - Verify that the repository at http://code.python.org/hg/ is >> properly converted. > > I see that this has four branches. What about all the other branches? > Will they be converted, or not? What about the stuff outside /python? > > In particular, the Stackless people have requested that they move along > with what core Python does, so their code should also be converted. I don't know the capabilities of hg w.r.t svn conversion, so this may well be overkill, but git has a really good tool for svn conversion (svn-all-fast-export, developed by KDE). You can handle almost any svn organization (e.g. outside the usual trunk/tags/branches), and convert email addresses of committers, split one big svn repo into subprojects, etc... Then, the git repo could be converted to hg relatively easily I believe. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Evaluated cmake as an autoconf replacement
On Tue, Apr 7, 2009 at 9:14 PM, wrote: > > Ondrej> ... while scons and other Python solutions imho encourage to > Ondrej> write full Python programs, which imho is a disadvantage for the > Ondrej> build system, as then every build system is nonstandard. > > Hmmm... Like distutils setup scripts? fortunately, waf and scons are much better than distutils, at least for the build part :) I think it is hard to overestimate the importance of a python solution for python software (python itself is different). Having a full-fledged language for complex builds is nice; I think most people familiar with complex makefiles would agree with this. > > I don't know thing one about cmake, but if it's good for the goose (building > Python proper) would it be good for the gander (building extensions)? For complex software, especially packages relying on a lot of C and platform idiosyncrasies, distutils is just too cumbersome and limited. Both Ondrej and I use python for scientific work, and I think it is no accident that we both look for something else. In those cases, scons - and cmake it seems - are very nice; build tools are incredibly hard to get right once you want to manage dependencies automatically. For simple python projects (pure python, a few .c source files without many dependencies), I think it is just overkill. cheers, David > > -- > Skip Montanaro - s...@pobox.com - http://www.smontanaro.net/ > "XML sucks, dictionaries rock" - Dave Beazley > ___ > Python-Dev mailing list > Python-Dev@python.org > http://mail.python.org/mailman/listinfo/python-dev > Unsubscribe: > http://mail.python.org/mailman/options/python-dev/cournape%40gmail.com > ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Evaluated cmake as an autoconf replacement
On Tue, Apr 7, 2009 at 10:08 PM, Alexander Neundorf wrote: > > What is involved in building python extensions ? Can you please explain ? Not much: at the core, a python extension is nothing more than a dynamically loaded library + a couple of options. One choice is whether to take the options from distutils or to set them up independently. In my own scons tool to build python extensions, both are possible. The hard (or rather time-consuming) work is everything else that distutils does related to packaging. That's where scons/waf are more interesting than cmake IMO, because you can "easily" hand this task back to distutils, whereas it is inherently more difficult with cmake. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
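[A sketch of where the "couple of options" come from when taking them from distutils; these sysconfig calls are real, and an external build tool can simply reuse their values:

    from distutils import sysconfig

    print(sysconfig.get_python_inc())          # directory containing Python.h
    print(sysconfig.get_config_var("SO"))      # extension suffix, e.g. ".so" or ".pyd"
    print(sysconfig.get_config_var("CFLAGS"))  # flags python was built with (None on windows)

]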
Re: [Python-Dev] PEP 382: Namespace Packages
On Tue, Apr 7, 2009 at 11:58 PM, M.-A. Lemburg wrote: >> >> This means your proposal actually doesn't add any benefit over the >> status quo, where you can have an __init__.py that does nothing but >> declare the package a namespace. We already have that now, and it >> doesn't need a new filename. Why would we expect OS vendors to start >> supporting it, just because we name it __pkg__.py instead of __init__.py? > > I lost you there. > > Since when do we support namespace packages in core Python without > the need to add some form of magic support code to __init__.py ? I think P. Eby refers to the problem that most packaging systems don't like several packages to have the same file - be it empty or not. That's my main personal gripe against namespace packages, and from this POV, I think it is fair to say the proposal does not solve anything. Not that I have a solution, of course :) cheers, David > > My suggestion basically builds on the same idea as Martin's PEP, > but uses a single __pkg__.py file as opposed to some non-Python > file yaddayadda.pkg. > > Here's a copy of the proposal, with some additional discussion > bullets added: > > """ > Alternative Approach: > - > > Wouldn't it be better to stick with a simpler approach and look for > "__pkg__.py" files to detect namespace packages using that O(1) check ? > > This would also avoid any issues you'd otherwise run into if you want > to maintain this scheme in an importer that doesn't have access to a list > of files in a package directory, but is well capable of checking > the existence of a file. > > Mechanism: > -- > > If the import mechanism finds a matching namespace package (a directory > with a __pkg__.py file), it then goes into namespace package scan mode and > scans the complete sys.path for more occurrences of the same namespace > package. > > The import loads all __pkg__.py files of matching namespace packages > having the same package name during the search. > > One of the namespace packages, the defining namespace package, will have > to include a __init__.py file. > > After having scanned all matching namespace packages and loading > the __pkg__.py files in the order of the search, the import mechanism > then sets the packages .__path__ attribute to include all namespace > package directories found on sys.path and finally executes the > __init__.py file. > > (Please let me know if the above is not clear, I will then try to > follow up on it.) > > Discussion: > --- > > The above mechanism allows the same kind of flexibility we already > have with the existing normal __init__.py mechanism. > > * It doesn't add yet another .pth-style sys.path extension (which are > difficult to manage in installations). > > * It always uses the same naive sys.path search strategy. The strategy > is not determined by some file contents. > > * The search is only done once - on the first import of the package. > > * It's possible to have a defining package dir and add-on package > dirs. > > * The search does not depend on the order of directories in sys.path. > There's no requirement for the defining package to appear first > on sys.path. > > * Namespace packages are easy to recognize by testing for a single > resource. > > * There's no conflict with existing files using the .pkg extension > such as Mac OS X installer files or Solaris packages. > > * Namespace __pkg__.py modules can provide extra meta-information, > logging, etc. to simplify debugging namespace package setups. 
> > * It's possible to freeze such setups, to put them into ZIP files, > or only have parts of it in a ZIP file and the other parts in the > file-system. > > * There's no need for a package directory scan, allowing the > mechanism to also work with resources that do not permit to > (easily and efficiently) scan the contents of a package "directory", > e.g. frozen packages or imports from web resources. > > Caveats: > > * Changes to sys.path will not result in an automatic rescan for > additional namespace packages, if the package was already loaded. > However, we could have a function to make such a rescan explicit. > """ > > -- > Marc-Andre Lemburg > eGenix.com > > Professional Python Services directly from the Source (#1, Apr 07 2009) Python/Zope Consulting and Support ... http://www.egenix.com/ mxODBC.Zope.Database.Adapter ... http://zope.egenix.com/ mxODBC, mxDateTime, mxTextTools ... http://python.egenix.com/ > > 2009-03-19: Released mxODBC.Connect 1.0.1 http://python.egenix.com/ > > ::: Try our new mxODBC.Connect Python Database Interface for free ! > > > eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48 > D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg > Registered at Amtsgericht Duesseldorf: HRB 46611 > http://www.egenix.com/company/contact/ >
Re: [Python-Dev] Evaluated cmake as an autoconf replacement
On Wed, Apr 8, 2009 at 2:24 AM, Heikki Toivonen wrote: > David Cournapeau wrote: >> The hard (or rather time-consuming) work is everything else that >> distutils does related to packaging. That's where scons/waf are >> more interesting than cmake IMO, because you can "easily" hand this >> task back to distutils, whereas it is inherently more difficult with >> cmake. > > I think this was the first I heard about using SCons this way. Do you > have any articles or examples of this? If not, could you perhaps write one? I developed numscons as an experiment to build numpy, scipy, and other complex python projects depending on many libraries/compilers: http://github.com/cournape/numscons/tree/master The general ideas are somewhat explained on my blog http://cournape.wordpress.com/?s=numscons And also the slides from the SciPy08 conference: http://conference.scipy.org/static/wiki/numscons.pdf It is plugged into distutils through a scons command (which bypasses all the compiled build_* ones, so that the whole build is done through scons for correct dependency handling). It is not really meant as a general replacement (it is too fragile, partly because of distutils, partly because of scons, partly because of me), but it shows it is possible, not just in theory. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
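[A stripped-down sketch of the mechanism described above - hypothetical, and much simpler than the real numscons code:

    import subprocess
    from distutils.cmd import Command
    from distutils.core import setup

    class scons_build(Command):
        description = "build compiled code through scons"
        user_options = []
        def initialize_options(self): pass
        def finalize_options(self): pass
        def run(self):
            # hand the whole compiled build to scons; dependency
            # handling happens entirely on the scons side
            subprocess.check_call(["scons"])

    setup(name="example", cmdclass={"build_ext": scons_build})

]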
Re: [Python-Dev] Evaluated cmake as an autoconf replacement
On Wed, Apr 8, 2009 at 6:42 AM, Alexander Neundorf wrote: > What options ? Compilation options. If you build an extension with distutils, the extension is built with the same flags as the ones used by python; the options are taken from distutils.sysconfig (except for MS compilers, which have their own options - one of the big pains in distutils). > > Can you please explain ? If you want to stay compatible with distutils, you have to do quite a lot of things. Cmake (and scons, and waf) only handle the build, but they can't handle all the packaging done by distutils (tarball generation, binary generation, in-place builds, the develop mode of setuptools, eggs, .pyc and .pyo generation, etc...), so you have two choices: add support for this in the build tool (a lot of work) or just use distutils once everything is built with your tool of choice. > It is easy to run external tools with cmake at cmake time and at build > time, and it is also possible to run them at install time. Sure, what kind of build tool could not do that :) But given the design of distutils, if you want to keep all its packaging features, you can't just launch a few commands, you have to integrate them somewhat. Every time you need something from distutils, you would need to launch python from cmake, whereas with scons/waf, you can just use it as you would use any python library. That's just inherent to the fact that waf/scons are in the same language as distutils; if we were doing ocaml builds, having a build tool in ocaml would have been easier, etc... David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Evaluated cmake as an autoconf replacement
On Wed, Apr 8, 2009 at 7:54 AM, Alexander Neundorf wrote: > On Wed, Apr 8, 2009 at 12:43 AM, Greg Ewing > wrote: >> David Cournapeau wrote: >>> >>> Having a full-fledged language for complex builds is nice; I think >>> most people familiar with complex makefiles would agree with this. >> >> Yes, people will still need general computation in their >> build process from time to time whether the build tool >> they're using supports it or not. > > I'm maintaining the CMake-based buildsystem for KDE4 since 3 years now > in my spare time, millions of lines of code, multiple code generators, all > major operating systems. My experience is that people don't need > general computation in their build process. > CMake supports now more general purpose programming features than it > did 2 years ago, e.g. it has now functions with local variables, it > can do simple math, regexps and other things. > If we get to the point where this is not enough, it usually means a > real program which does real work is required. > In this case it's actually a good thing to have this as a separate > tool, and not mixed into the buildsystem. > Having a not very powerful, but domain-specific language for > the buildsystem is really a feature :-) > (even if it sounds wrong in the first moment). Yes, there are some advantages to that. The point of python is to have the same language for the build specification and the extensions, in my mind. For extensions, you really need a full language - for example, if you want to add support for tools which generate files not known in advance, and handle this correctly from a build POV, a macro-like language is not sufficient. > > From what I saw when I was building Python I didn't actually see too > complicated things. In KDE4 we are not only building and installing > programs, but we are also installing and shipping a development > platform. This includes CMake files which contain functionality which > helps in developing KDE software, i.e. variables and a bunch of > KDE-specific macros. They are documented here: > http://api.kde.org/cmake/modules.html#module_FindKDE4Internal > (this is generated automatically from the cmake file we ship). > I guess something similar could be useful for Python, maybe this is > what distutils actually do ? distutils does roughly everything that autotools does, and more: - configuration: not often used in extensions, we (numpy) are the exception I would guess - build - installation - tarball generation - bdist_ installers (msi, .exe on windows, .pkg/.mpkg on mac os x, rpm/deb on Linux) - registration to pypi - more things which just elude me at the moment cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
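[For reference, most of the list above maps onto stock distutils commands, all of which exist out of the box:

    python setup.py config         # configuration (rarely used)
    python setup.py build          # build
    python setup.py install        # installation
    python setup.py sdist          # tarball generation
    python setup.py bdist_wininst  # windows installer (also bdist_msi, bdist_rpm)
    python setup.py register       # registration to pypi

]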
Re: [Python-Dev] Evaluated cmake as an autoconf replacement
On Thu, Apr 9, 2009 at 4:45 AM, Alexander Neundorf wrote: > I think cmake can do all of the above (cpack supports creating packages). I am sure it can - it is just a lot of work, especially if you want to stay compatible with distutils-built extensions :) cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Help on issue 5941
On Wed, May 6, 2009 at 6:01 PM, Tarek Ziadé wrote: > Hello, > > I need some help on http://bugs.python.org/issue5941 > > The bug is quite simple: the Distutils unixcompiler used to set the > archiver command to "ar -cr". > > For quite a while now, this behavior has changed in order to be able > to customize the compiler behavior from > the environment. That introduced a regression because the mechanism in > Distutils that looks for the > AR variable in the environment also looks into the Makefile of Python > (in the Makefile, then in os.environ). > > And as a matter of fact, AR is set to "ar" in there, so the -cr option > is not set anymore. > > So my question is: should I make a change in the Makefile by adding > for example a variable called AR_OPTIONS, > then build the ar command with AR + AR_OPTIONS I think for consistency, it should be named ARFLAGS (the name usually used by configure scripts), and both should be overridable like the other variables in distutils.sysconfig.customize_compiler. Those flags should be used in Makefile.pre as well, instead of the hardcoded cr used currently. Here is what I would try: - check for AR (already done in the configure script AFAICT) - if ARFLAGS is defined in the environment, use it, otherwise set ARFLAGS to cr - use ARFLAGS in the makefile Then, in the customize_compiler function, set archiver to $AR + $ARFLAGS. IOW, just copy the logic used for e.g. LDSHARED. I can prepare a patch if you want, cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
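[A sketch of the logic proposed above, assuming an ARFLAGS variable is added to Python's Makefile as suggested (so that distutils.sysconfig can read it back); not the actual patch:

    import os
    from distutils.sysconfig import get_config_vars

    ar, ar_flags = get_config_vars("AR", "ARFLAGS")
    # the environment overrides the Makefile values, as for CC, LDSHARED, etc.
    if "AR" in os.environ:
        ar = os.environ["AR"]
    if "ARFLAGS" in os.environ:
        ar_flags = os.environ["ARFLAGS"]
    archiver = "%s %s" % (ar, ar_flags or "cr")

]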
Re: [Python-Dev] Help on issue 5941
On Thu, May 7, 2009 at 8:49 PM, Tarek Ziadé wrote: > > Notice that from the beginning, the unixcompiler class options are > never used if the option has been customized > in distutils.sysconfig and present in the Makefile, so we need to > clean this behavior as well at some point, and document > the customization features. Indeed, I have never bothered much with this part, though. Flags customization with distutils is too awkward to be useful in general for something like numpy IMHO; I just use scons instead when I need fine-grained control. > By the way, do you happen to have a buildbot or something that builds numpy ? We have a buildbot: http://buildbot.scipy.org/ But I don't know how easy it is to set it up so that both python and numpy are built from source. > If not it'll be very interesting: I wouldn't mind having one numpy > track running on the Python trunk and receiving > mails if something is broken. Well, I would not mind either :) David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Adding a "sysconfig" module in the stdlib
On Fri, May 8, 2009 at 9:36 AM, Tarek Ziadé wrote: > Hello, > > I am trying to refactor distutils.log in order to use logging but I > have been bugged by the fact that site.py uses > distutils.util.get_platform() in "addbuilddir". > The problem is the order of imports at initialization time: importing > "logging" into distutils will make the initialization/build fail > because site.py will break when > trying to import "logging", then "time". > > Anyways, > So why does site.py look into distutils ? because distutils has a few > functions to get some info about the platform and about the Makefile > and some > other header files like pyconfig.h etc. > > But I don't think it's the best place for this, and I have a proposal: > > let's create a dedicated "sysconfig" module in the standard library > that will provide all the (refactored) functions located in > distutils.sysconfig (but not customize_compiler) > and distutils.util.get_platform. If we are talking about putting this into the stdlib proper, I would suggest thinking about putting information for every platform in sysconfig, instead of just Unix. I understand it is not an easy problem (because windows builds are totally different from every other platform), but it would really help for interoperability with other build tools. If sysconfig is to become independent of distutils, it should be cross-platform and not unix-specific. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
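[The coupling, and the unix bias, are easy to see from the interpreter; a quick check, not a proposal:

    from distutils.util import get_platform
    from distutils import sysconfig

    print(get_platform())                  # what site.py wants, e.g. "linux-i686"
    print(sysconfig.get_config_var("SO"))  # works everywhere: ".so", ".pyd"
    print(sysconfig.get_config_var("CC"))  # parsed from the Makefile: None on windows

]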
Re: [Python-Dev] py3k build broken
On Fri, May 8, 2009 at 7:23 AM, Tarek Ziadé wrote: > I have fixed configure by runing autoconf, everything should be fine now > > Sorry for the inconvenience. I am the one responsible for this - I did not realize that the generated configure/Makefile were also in the trunk, and my patch did not include the generated files. My apologies, cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 376 - Open questions
On Thu, Jul 9, 2009 at 7:07 AM, Eric Smith wrote: > Paul Moore wrote: >> >> 2009/7/8 P.J. Eby : >>> >>> If it were being driven by setuptools, I'd have just implemented it >>> myself >>> and presented it as a fait accompli. I can't speak to Tarek's motives, >>> but >>> I assume that, as stated in the PEP, the primary driver is supporting the >>> distutils being able to uninstall things, and secondarily to allow other >>> tools to be built on top of the API. >> >> My understanding is that all of the various distutils PEPs were driven >> by the "packaging summit" ay PyCon. The struggle here seems to be to >> find *anyone* from that summit who will now comment on the discussion >> :-( > > I was there, and I've been commenting! > > There might have been more discussion after the language summit and the one > open space event I went to. But the focus as I recall was static metadata > and version specification. When I originally brought up static metadata at > the summit, I meant metadata describing the sources in the distribution, so > that we can get rid of setup.py's. From that metadata, I want to be able to > generate .debs, .rpms, .eggs, etc. I agree wholeheartedly. Getting rid of setup.py for most packages should be a goal IMHO. Most packages don't need anything fancy, and static metadata are so much easier to use compared to setup.py/distutils for 3rd party interop. There was a discussion about how to describe/find the list of files to form a distribution (for the different sdist/bdist_* commands), but no agreement was reached. Some people strongly defend the setuptools feature to get the list of files from the source control system, in particular. http://mail.python.org/pipermail/distutils-sig/2009-April/011226.html David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 376 - Open questions
On Thu, Jul 9, 2009 at 4:18 PM, Paul Moore wrote: >> >> There might be a library (and I call dibs on the name "distlib" :) that >> provides support routines to parse setup.info, but there's no framework >> involved. And no need for a plugin system. > > +1. Now who's going to design & write it? I started a new thread on distutils-sig ("setup.py needs to go away") to avoid jeopardizing this thread. I added the context as well as my own suggestions for such a design. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] [Distutils] PEP 376 - from PyPM's point of view
On Wed, Jul 15, 2009 at 11:00 PM, Tarek Ziadé wrote: > On Wed, Jul 15, 2009 at 12:10 PM, Paul Moore wrote: >> >> Disclaimer: I've only read the short version, so if some of this is >> covered in the longer explanation, I apologise now. > > Next time I won't put a short version ;) > > >> >> PEP 376 support has added a requirement for 3 additional methods to >> the existing 1 finder method in PEP 302. That's already a 300% >> increase in complexity. I'm against adding any further complexity to >> PEP 302 - in all honesty, I'd rather move towards PEP 376 defining its >> *own* locator protocol and avoid any extra burden on PEP 302. I'm not >> sure implementers of PEP 302 importers will even provide the current >> PEP 376 extensions. >> >> I propose that before the current prototype is turned into a final >> (spec and) implementation, the PEP 302 extensions are extracted and >> documented as an independent protocol, purely part of PEP 376. (This >> *helps* implementers, as they can write support for, for example, >> eggs, without needing to modify the existing egg importer). I'll >> volunteer to do that work - but I won't start until the latest >> iteration of questions and discussions has settled down and PEP 376 >> has achieved a stable form with the known issues addressed. > > Sure that makes sense. I am all for having these 302 extensions > flipped on PEP 376 > side, then think about the "locator" protocol. > > I am lagging a bit in the discussions, I have 10 messages left to read or so, > but the known issues I've listed so far are about the RECORD file and > absolute paths, > I am waiting for PJE example on the syntax he proposed for prefixes, > on the docutils example. > >> Of course, this is moving more and more towards saying that the design >> of setuptools, with its generic means for locating distributions, etc >> etc, is the right approach. >> We're reinventing the wheel here. But the >> problem is that too many people dislike setuptools as it stands for it >> to gain support. > > I don't think it's about setuptools design. I think it's more likely > to be about the fact > that there's no way in Python to install two different versions of the > same distribution > without "hiding" one from each other, using setuptools, virtualenv or > zc.buildout. > > "installing" a distribution in Python means that its activated > globally, whereas people > need it locally at the application level. > >> My understanding is that the current set of PEPs were >> intended to be a stripped down, more generally acceptable subset of >> setuptools. Let's keep them that way (and omit the complexities of >> multi-version support). >> >> If you want setuptools, you know where to get it... > > Sure, but let's not forget that the multiple-version issue is a global > issue OS packagers > also meet. (setuptools is not the problem) : > > - application Foo uses docutils 0.4 and doesn't work with docutils 0.5 > - application Bar uses docutils 0.5 > > if docutils 0.5 is installed, Foo is broken, unless docutils 0.4 is > shipped with it. As was stated by Debian packagers on the distutils ML, the problem is that docutils 0.5 breaks packages which work with docutils 0.4 in the first place. 
http://www.mail-archive.com/distutils-...@python.org/msg05775.html And current hacks to work around lack of explicit version handling for module import is a maintenance burden: http://www.mail-archive.com/distutils-...@python.org/msg05742.html setuptools has given the incentive to use versioning as a workaround for API/ABI compatibility. That's the core problem, and most problems brought by setuptools (sys.path and .pth hacks with the unreliability which ensued) are consequences of this. I don't see how virtualenv solves anything in that regard for deployment issues. I doubt using things like virtualenv will make OS packagers happy. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] mingw32 and gc-header weirdness
On Thu, Jul 23, 2009 at 4:40 AM, Antoine Pitrou wrote: > > The size of long double is also 12 under 32-bit Linux. Perhaps mingw disagrees > with Visual Studio Yes, mingw and VS do not have the same long double type. This has been the source of some problems in numpy as well, since mingw uses the MS runtime, and everything involving long double and the runtime is broken (printf, math library calls). I wish there was a way to disable this in mingw, but there isn't AFAIK. > on some ABI subtleties (is it expected? is mingw supposed to > be ABI-compatible with Visual Studio? if yes, you may report a bug to them > :-)). I think mostly ABI compatible is the best description :) David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
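[A quick way to observe the mismatch from Python; the sizes below assume 32-bit x86, and ctypes reports the type of the compiler it was built with:

    import ctypes

    # 12 with gcc/mingw on 32-bit x86; 8 with MSVC, where long double
    # is just an alias for double
    print(ctypes.sizeof(ctypes.c_longdouble))

]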
Re: [Python-Dev] mingw32 and gc-header weirdness
On Thu, Jul 23, 2009 at 6:49 PM, Paul Moore wrote: > 2009/7/22 Christian Tismer : >> Maybe the simple solution is to prevent building extensions >> with mingw, if the python executable was not also built with it? >> Then, all would be fine I guess. > > I have never had problems in practice with extensions built with mingw > rather than MSVC - so while I'm not saying that the issue doesn't > exist, it certainly doesn't affect all extensions, so disabling mingw > support seems a bit of an extreme measure. I am strongly against this as well. We build numpy with mingw on windows, and disabling it would make my life even more miserable on windows. One constant source of pain with MS compilers is supporting different versions of python - 2.4, 2.5 and 2.6 require different VS versions (and free versions are usually available only for the latest VS release). I am far from a windows specialist, but I understand that quite a few problems with mingw-built extensions are caused by some Python decisions as well (the C API with runtime-dependent structures like FILE, etc...). So mingw is not the only one to blame :) David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Update to Python Documentation Website Request
On Mon, Jul 27, 2009 at 7:20 PM, David Lyon wrote: > My only point is that Windows ain't no embedded system. It's not > short on memory or disk space. If a package manager is 5 megabytes > extra say, with its libraries.. what's the extra download time on > that ? compared to three days+ stuffing around trying to find out > how to install packages for a new user. The problem is not so much the size by itself; it is that more code means more maintenance burden for python developers. Including new code means it has to work everywhere python currently works, and that other people can understand/support the related code. Adding code to a project is far from free from the python maintainers' POV. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] VC++ versions to match python versions?
On Mon, Aug 17, 2009 at 2:01 PM, David Bolen wrote: > Chris Withers writes: > >> Is the Express Edition of Visual C++ 2008 suitable for compiling >> packages for Python 2.6 on Windows? >> (And Python 2.6 itself for that matter...) > > Yes - it's currently being used on my buildbot, for example, to build > Python itself. Works for 2.6 and later. > >> Ditto for 2.5, 3.1 and the trunk (which I guess becomes 3.2?) > > 2.5 needs VS 2003. The 64-bit version of 2.5 is built with VS 2005, though. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
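A quick way to check which MSVC a given Python was built with is the compiler string (real API; MSC v.1310 corresponds to VS 2003, v.1400 to VS 2005, v.1500 to VS 2008):

    import platform
    print platform.python_compiler()   # e.g. 'MSC v.1500 32 bit (Intel)' on a VS 2008 build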
Re: [Python-Dev] Package install failures in 2.6.3
2009/10/6 P.J. Eby : > At 02:22 PM 10/5/2009 +0200, Tarek Ziadé wrote: >> >> Setuptools development has been discontinued for a year, and it does >> patches on Distutils code. Some of these patches are sensitive to any >> change >> made on Distutils, whether those changes are internal or not. > > Setuptools is also not the only thing out there that subclasses distutils > commands in general, or the build_ext command in particular. Numpy, > Twisted, the mx extensions... there are plenty of things out there that > subclass distutils commands, quite in adherence to the rules. (Note too > that subclassing != patching, and the ability to subclass and substitute > distutils commands is documented.) > > It's therefore not appropriate to treat the issue as if it were > setuptools-specific; it could have broken any other major (or minor) > package's subclassing of the build_ext command. The internal vs published API distinction does not make much sense in distutils' case anyway, since a lot of implementation details are necessary to make non-trivial extensions work. When working on numpy.distutils, I almost always have to look at the distutils sources since the docs are vastly insufficient, and even then, the code is so bad that quite often the only way to interact with distutils is to "reverse engineer" its behavior by trial and error. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
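For reference, the documented subclassing pattern under discussion looks roughly like this (a minimal sketch; the compile flag is made up):

    from distutils.command.build_ext import build_ext

    class my_build_ext(build_ext):
        def build_extension(self, ext):
            # tweak the extension, then delegate to the stock implementation
            ext.extra_compile_args = (ext.extra_compile_args or []) + ["-DMY_FLAG"]
            build_ext.build_extension(self, ext)

    # in setup.py: setup(..., cmdclass={'build_ext': my_build_ext})

Any change to what build_extension relies on internally can silently break such subclasses, which is the point being made about internal vs published APIs.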
Re: [Python-Dev] Distutils and Distribute roadmap (and some words on Virtualenv, Pip)
On Thu, Oct 8, 2009 at 5:31 PM, Tarek Ziadé wrote: > = Virtualenv and the multiple version support in Distribute = > > (I am not saying "We" here because this part was not discussed yet > with everyone) > > Virtualenv allows you to create an isolated environment to install > some distribution without polluting the > main site-packages, a bit like a user site-packages. > > My opinion is that this tool exists only because Python doesn't > support the installation of multiple versions of the same > distribution. I am really worried about this, because it may encourage people to use multiple versions as a band-aid instead of maintaining backward compatibility. At least with virtualenv, the problem is restricted to the user. Generalized multiple, side-by-side installation has been tried in many different contexts, and I have never seen a single one working without bringing more problems than it solved. One core problem is the exponential number of combinations (package A depends on B and C, B depends on one version of D, C on another version of D). Being able to install *some* libraries in multiple versions is OK, but generalizing is very dangerous IMHO. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Distutils and Distribute roadmap (and some words on Virtualenv, Pip)
On Fri, Oct 9, 2009 at 1:35 AM, Masklinn wrote: > On 8 Oct 2009, at 18:17 , Toshio Kuratomi wrote: >> >>> This is not at all how I use virtualenv. For me virtualenv is a >>> sandbox so that I don't have to become root whenever I need to install >>> a Python package for testing purposes >> >> This is needing to install multiple versions and use the newly installed >> version for testing. >> > No it's not. It's keeping the python package *being tested* out of the > system's or user's site-packages because it's potentially unstable or > unneeded. It provides a trivial way of *removing* the package to get rid of > it: delete the virtualenv. No trace anywhere that the package was ever > installed, no impact on the system (apart from the potential side-effects > of executing the system). > > The issue here isn't "multiple installed packages", it will more than likely > be the only version of itself: note that it's a package being tested, not an > *upgrade* being tested. > > The issues solved are: > * not having to become root (solved by PEP 370 if it ever lands) > * minimizing as much as possible the impact of testing the package on the > system (not solved by any other solution) This is not true - stow solves the problem in a more general way (in the sense that it is not restricted to python), at least on platforms which support softlinks. The only inconvenience of stow compared to virtualenv is namespace packages, but that's because of a design flaw in namespace packages (as implemented in setuptools, and hopefully fixed in the upcoming namespace package PEP). Virtualenv provides a possible solution to some deployment problems, and is useful in those cases, but it is too specialized to be included in python itself IMO. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Distutils and Distribute roadmap (and some words on Virtualenv, Pip)
On Wed, Oct 21, 2009 at 5:49 AM, Paul Moore wrote: > 2009/10/20 Chris Withers : >> I wouldn't have a problem if integrating with the windows package manager >> was an optional extra, but I think it's one of many types of package >> management that need to be worried about, so might be easier to get the >> others working and let anyone who wants anything beyond a pure-python >> packaging system that works across platforms, regardless of whether binary >> extensions are needed, do the work themselves... > > There are many (I believe) Windows users for whom bdist_wininst is > just what they want. For those people, where's the incentive to switch > in what you propose? You're not providing the features they currently > have, and frankly "do the work yourself" is no answer (not everyone > can, often for entirely legitimate reasons). I am not so familiar with msi or wininst internals, but isn't it possible to install w.r.t. a given prefix ? Basically, making it possible to use a wininst in a virtualenv if required (in which case I guess it would not register with the windows db - at least it should be possible to disable it). The main problem with bdist_wininst installers is that they don't work with setuptools dependency stuff (at least, that's the reason windows users gave for wanting a numpy egg on windows, whereas we used to only provide an exe). But you could argue it is a setuptools problem as much as a wininst problem, I guess. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] 2.7 Release? 2.7 == last of the 2.x line?
On Tue, Nov 3, 2009 at 6:13 PM, Michael Foord wrote: > Sturla Molden wrote: >> >> I'd just like to mention that the scientific community is highly dependent >> on NumPy. As long as NumPy is not ported to Py3k, migration is out of the >> question. Porting NumPy is not a trivial issue. It might take a complete >> rewrite of the whole C base using Cython. NumPy's ABI is not even PEP 3118 >> compliant. Changing the ABI for Py3k might break extension code written for >> NumPy using C. And scientists tend to write CPU-bound routines in languages >> like C and Fortran, not Python, so that is a major issue as well. If we port >> NumPy to Py3k, everyone using NumPy will have to port their C code to the >> new ABI. There are a lot of people stuck with Python 2.x for this reason. It >> does not just affect individual scientists, but also large projects like IBM >> and CERN's blue brain and NASA's space telescope. So please, do not cancel >> 2.x support before we have ported NumPy, Matplotlib and most of their >> dependent extensions to Py3k. > > What will it take to *start* the port? (Or is it already underway?) For many > projects I fear that it is only the impending obsolescence (real rather than > theoretical) of Python 2 that will convince projects to port. I feel the same way. Given how many resources it will take to port to py3k, I doubt the port will happen soon. I don't know what other numpy developers think, but I consider py3k simply not worth the hassle - I know we will have to port eventually, though. To answer your question, the main issues are: - are two branches necessary or not ? If two branches are necessary, I think we simply do not have the resources at the moment. - how to maintain a compatible C API across 2.x and 3.x - is it practically possible to support and maintain numpy from 2.4 to 3.x ? For example, I don't think the python 2.6 py3k warnings are very useful when you need to maintain compatibility with 2.4 and 2.5. There is also little documentation on how to port a significant C codebase to py3k. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] 2.7 Release? 2.7 == last of the 2.x line?
On Tue, Nov 3, 2009 at 8:40 PM, Antoine Pitrou wrote: > Sturla Molden molden.no> writes: >> >> Porting NumPy is not a trivial issue. It might take >> a complete rewrite of the whole C base using Cython. > > I don't see why they would need a rewrite. (let me know if those numpy-specific discussions are considered OT) There is certainly no need for a full rewrite, no. I am still unclear on the range of things to change for 3.x, but the C changes are not small, especially since numpy uses "dark" areas of the python C extension API. The long vs int and str vs bytes changes will take some time. AFAIK, the only thing which has been attempted so far is porting our own distutils extension to python 3.x, but I have not integrated those changes yet. > between 2.x and 3.x. Cython itself is supposed to support both 2.x and 3.x, > isn't it? Yes - but no numpy code uses cython ATM, except for the random generators, which would almost certainly be trivial to convert. The idea which has been discussed so far is that for *some* code which needs significant changes or a rewrite, using cython instead of C may be beneficial, as it would give the 3.x code "for free". Having more cython and less C could also bring more contributors - that would actually be the biggest incentive, as the number of people who know the core C code of numpy is too small. > That's interesting, because PEP 3118 was pushed mainly by a prominent member > of > the NumPy community and some of its features are almost dedicated to NumPy. I have not been involved with the PEP 3118 discussion, so cannot comment on the reason why it is not fully supported yet by numpy. But I think that's a different issue altogether - PEP 3118's goal is interoperation with other packages. We can port to PEP 3118 without porting to 3.x, and we can port to 3.x without taking care of PEP 3118. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] 2.7 Release? 2.7 == last of the 2.x line?
On Tue, Nov 3, 2009 at 9:55 PM, Barry Warsaw wrote: > > Then clearly we can't back port features. > > I'd like to read some case studies of people who have migrated applications > from 2.6 to 3.0. +1, especially for packages which have a lot of C code: the current documentation is sparse :) The only helpful reference I have found so far is an email by MvL concerning the psycopg2 port. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] 2.7 Release? 2.7 == last of the 2.x line?
On Wed, Nov 4, 2009 at 3:25 AM, "Martin v. Löwis" wrote: > But only if NumPy would drop support for 2.x, for x < 7, right? > That would probably be many years in the future. Yes. Given the choice between supporting py 3.x while dropping python < 2.7, and continuing support for 2.4, the latter is by far my preferred choice today (RHEL still requires 2.4, for example). cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] 2.7 Release? 2.7 == last of the 2.x line?
On Thu, Nov 5, 2009 at 4:02 AM, "Martin v. Löwis" wrote: > > That's not my experience. I see a change in source (say, on Django) > available for 3.x within 5 seconds. Which version of 2to3 is this for ? I have had a similar experience (several minutes), but maybe I am using 2to3 the wrong way. On my machine, with 2to3 from 3.1.1, it takes ~ 1s to convert a single 200-line file, and converting a tiny subset of numpy takes > one minute. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] buildtime vs runtime in Distutils
On Sun, Nov 15, 2009 at 10:32 PM, Tarek Ziadé wrote: > > Ok. Fair enough, I'll work with them this way. Although packagers should certainly fix the problems they introduce in the first place, the second suggestion in the bug report would be useful, independently of how linux distributions package things. Especially if the data can be obtained for every build (autoconf- and VS-based), this would help packages which use something other than distutils for their build. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] IO module improvements
On Sat, Feb 6, 2010 at 4:31 PM, Tres Seaver wrote: > -BEGIN PGP SIGNED MESSAGE- > Hash: SHA1 > > Antoine Pitrou wrote: >> Pascal Chambon gmail.com> writes: >>> By the way, I'm having trouble with the "name" attribute of raw files, >>> which can be string or integer (confusing), ambiguous if containing a >>> relative path, and which isn't able to handle the new case of my >>> library, i.e opening a file from an existing file handle (which is ALSO >>> an integer, like C file descriptors...) >> >> What is the difference between "file handle" and a regular C file descriptor? >> Is it some Windows-specific thing? >> If so, then perhaps it deserves some Windows-specific attribute ("handle"?). > > File descriptors are integer indexes into a process-specific table. AFAIK, they aren't simple indexes in windows, and that's partly why even file descriptors cannot be safely passed between C runtimes on windows (whereas they can in most unices). David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] IO module improvements
On Fri, Feb 5, 2010 at 10:28 PM, Antoine Pitrou wrote: > Pascal Chambon gmail.com> writes: >> >> By the way, I'm having trouble with the "name" attribute of raw files, >> which can be string or integer (confusing), ambiguous if containing a >> relative path, and which isn't able to handle the new case of my >> library, i.e opening a file from an existing file handle (which is ALSO >> an integer, like C file descriptors...) > > What is the difference between "file handle" and a regular C file descriptor? > Is it some Windows-specific thing? > If so, then perhaps it deserves some Windows-specific attribute ("handle"?). When wondering about the same issue, I found the following useful: http://www.codeproject.com/KB/files/handles.aspx The C library file descriptor as returned by C open is emulated by win32. Only HANDLE is considered "native" (can be passed freely however you want within one process). cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
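The translation layer is visible from Python on Windows through the msvcrt module (real functions; the file name is made up):

    import os, msvcrt

    fd = os.open("spam.txt", os.O_RDONLY)             # CRT file descriptor (an int)
    handle = msvcrt.get_osfhandle(fd)                 # the underlying native HANDLE
    fd2 = msvcrt.open_osfhandle(handle, os.O_RDONLY)  # a second fd for the same HANDLE

Only the HANDLE is meaningful process-wide; each C runtime keeps its own descriptor table, which is why fds cannot safely cross runtime boundaries.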
Re: [Python-Dev] PEP 3118: Implementation Questions
On Fri, Feb 26, 2010 at 1:51 PM, Meador Inge wrote: > Hi All, > > Recently some discussion began in the issue 3132 thread > (http://bugs.python.org/issue3132) regarding > implementation of the new struct string syntax for PEP 3118. Mark Dickinson > suggested that I bring the discussion on over to Python Dev. Below is a > summary > of the questions/comments from the thread. > > Unpacking a long-double > === > > 1. Should this return a Decimal object or a ctypes 'long double'? > 2. Using ctypes 'long double' is easier to implement, but precision is > lost when needing to do arithmetic, since the value for ctypes 'long > double' > is converted to a Python float. > 3. Using Decimal keeps the desired precision, but the implementation would > be non-trivial and architecture specific (unless we just picked a > fixed number of bytes regardless of the architecture). > 4. What representation should be used for standard size and alignment? > IEEE 754 extended double precision? I think supporting even basic arithmetic correctly for long double would be a tremendous amount of work in python. First, as you know, there are many different formats which depend not only on the CPU but also on the OS and the compiler, and there are quite a few issues which are specific to long double (like converting to an integer which cannot fit in any C integer type on most implementations). Also, IEEE 754 does not define any alignment as far as I know - that's up to the CPU implementer. In Numpy, long double usually maps to either 12 bytes (np.float96) or 16 bytes (np.float128). I would expect the long double to be mostly useful for data exchange - if you want to do arithmetic on long double, then the user of the buffer protocol would have to implement it by himself (like NumPy does ATM). So the important thing is to have enough information to use the long double: alignment and size are not enough. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
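NumPy already exposes the two pieces of information mentioned at the end (real attributes; the printed values are examples):

    import numpy as np
    ld = np.dtype(np.longdouble)
    print ld.itemsize, ld.alignment   # e.g. 12, 4 on 32-bit Linux; 16, 16 on x86-64

Size and alignment alone still do not say whether the bytes are 80-bit x87, IEEE quad, or double-double, which is the extra information a consumer needs to actually use the values.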
Re: [Python-Dev] Why is nan != nan?
On Thu, Mar 25, 2010 at 9:39 PM, Jesus Cea wrote: > On 03/25/2010 12:22 PM, Nick Coghlan wrote: >> "Not a Number" is not a single floating point value. Instead each >> instance is a distinct value representing the precise conditions that >> created it. Thus, two "NaN" values x and y will compare equal iff they >> are the exact same NaN object (i.e. "if isnan(x) then x == y iff >> x is y". >> >> As stated above, such a change would allow us to restore reflexivity >> (eliminating a bunch of weirdness) while still honouring the idea of NaN >> being a set of values rather than a single value. > > Sounds good. > > But IEEE 754 was created by pretty clever guys and surely they had a > reason to define things the way they are. Probably we are missing > something. Yes, indeed. I don't claim to have a deep understanding myself, but up to now, every time I thought something in IEEE 754 was weird, it ended up being that way for good reasons. I think the fundamental missing point in this discussion about NaN is exception handling: a lot of NaN's quirky behavior becomes much more natural once you take into account which operations are invalid under which conditions. Unless I am mistaken, python itself does not support FPU exception handling. For example, the reason why x != x for x a NaN is that != (and ==) are about the only operations where you can have NaN as an operand without risking raising an exception, and support for creating and detecting NaN in languages has come only quite lately (e.g. C99). Concerning the lack of rationale: a relatively short reference concerned with FPU exceptions and NaN handling is from Kahan himself: http://www.eecs.berkeley.edu/~wkahan/ieee754status/ieee754.ps David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
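To make the quiet-comparison point concrete (Python 2.6+):

    nan = float("nan")
    print nan == nan       # False: == and != never signal, even with NaN operands
    print nan != nan       # True
    import math
    print math.isnan(nan)  # True: the supported way to detect a NaN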
Re: [Python-Dev] Why is nan != nan?
On Fri, Mar 26, 2010 at 10:19 AM, P.J. Eby wrote: > At 11:57 AM 3/26/2010 +1100, Steven D'Aprano wrote: >> >> But they're not -- they're *signals* for "your calculation has gone screwy >> and the result you get is garbage", so to speak. You shouldn't even think of >> a specific NAN as a piece of specific garbage, but merely a label on the >> *kind* of garbage you've got (the payload): INF-INF is, in some sense, a >> different kind of error to log(-1). In the same way you might say "INF-INF >> could be any number at all, therefore we return NAN", you might say "since >> INF-INF could be anything, there's no reason to think that INF-INF == >> INF-INF." > > So, are you suggesting that maybe the Pythonic thing to do in that case > would be to cause any operation on a NAN (including perhaps comparison) to > fail, rather than allowing garbage to silently propagate? NaN behavior being tightly linked to FPU exception handling, I think this is a good idea. One of the goals of NaN is to avoid many tests in intermediate computations (for efficiency reasons), which may not really apply to python. Generally, you want to detect errors/exceptional situations as early as possible, and if you use python, you don't care about the potential slowdown caused by those checks. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Why is nan != nan?
On Sat, Mar 27, 2010 at 8:16 AM, Raymond Hettinger wrote: > > On Mar 26, 2010, at 2:16 PM, Xavier Morel wrote: > > How about raising an exception instead of creating nans in the first place, > except maybe within specific contexts (so that the IEEE-754 minded can get > their nans working as they currently do)? > > -1 > The numeric community uses NaNs as placeholders in vectorized calculations. But is this relevant to python itself ? In Numpy, we indeed do use and support NaN, but we have much more control over what happens compared to python float objects. We can control whether invalid operations raise an exception or not, we have had isnan/isfinite for a long time, and the fact that nan != nan has never been a real problem AFAIK. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Why is nan != nan?
On Sun, Mar 28, 2010 at 9:28 AM, Robert Kern wrote: > On 2010-03-27 00:32 , David Cournapeau wrote: >> >> On Sat, Mar 27, 2010 at 8:16 AM, Raymond Hettinger >> wrote: >>> >>> On Mar 26, 2010, at 2:16 PM, Xavier Morel wrote: >>> >>> How about raising an exception instead of creating nans in the first >>> place, >>> except maybe within specific contexts (so that the IEEE-754 minded can >>> get >>> their nans working as they currently do)? >>> >>> -1 >>> The numeric community uses NaNs as placeholders in vectorized >>> calculations. >> >> But is this relevant to python itself ? In Numpy, we indeed do use and >> support NaN, but we have much more control over what happens compared to >> python float objects. We can control whether invalid operations raise >> an exception or not, we have had isnan/isfinite for a long time, and the >> fact that nan != nan has never been a real problem AFAIK. > > Nonetheless, the closer our float arrays are to Python's float type, the > happier I will be. Me too, but I don't see how to reconcile this with the goal of this discussion, which seems to be simplifying nan handling on the grounds that NaNs are not intuitive. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
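The control referred to above looks like this in NumPy (real API):

    import numpy as np

    with np.errstate(invalid="raise"):
        np.sqrt(np.array([-1.0]))        # raises FloatingPointError
    with np.errstate(invalid="ignore"):
        print np.sqrt(np.array([-1.0]))  # [ nan], silently

Python floats offer no equivalent switch, which is part of why the two cannot simply behave identically.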
Re: [Python-Dev] [Distutils] Bootstrap script for package management tool in Python 2.7 (Was: Re: At least one package management tool for 2.7)
On Mon, Mar 29, 2010 at 10:45 PM, anatoly techtonik wrote: > On Mon, Mar 29, 2010 at 12:15 PM, Tarek Ziadé wrote: >> [..] >>> distutils is not a `package management` tool, because it doesn't know >>> anything even about installed packages, not saying anything about >>> dependencies. >> >> At this point, no one knows anything about installed packages at the >> Python level. > > Users do not care about this, and `distutils` doesn't know this even > at the package level. > >> Keeping track of installed projects is a feature done within each >> package management system. >> >> And the whole purpose of PEP 376 is to define a database of what's >> installed, for the sake of interoperability. > > That's great. When it is ready everybody will be happy to make > their package management tool compliant. > >>> >>> `pip` and `distribute` are unknown to a vast majority of Python >>> users, so if you have a prospective replacement for `easy_install` - >> >> Depending on what you call a Python user, I disagree here. Many people >> use pip and distribute. >> >> The first one because it has an uninstall feature among other things. >> The second one because it fixes some bugs of setuptools and provides >> Python 3 support > > I do not mind if we can distribute three stubs, they will also serve > as pointers for a better way of packaging when an ultimate tool is > finally born. I personally am willing to work on the `easy_install` > stub in 2.7. > >>> >>> For now there are two questions: >>> 1. Are they stable enough for the replacement of the user command line >>> `easy_install` tool? >>> 2. Which one is the recommended? >>> >>> P.S. Should there be an accessible FAQ in addition to ML? >> >> This FAQ work has been started in the "Hitchhiker's Guide to >> Packaging" you can find here: >> >> http://guide.python-distribute.org > > I can't see any FAQ. To me the FAQ is something that could be posted to > the distutils ML once a month to reflect the current state of packaging. It > should also carry a version number. So anybody can comment on the FAQ, > ask another question or ask to make a change. > >> Again, any new code work will not happen because 2.7 is due in less >> than a week. Things are happening in Distutils2. > > That doesn't solve the problem. A bootstrap script can be written in one > day. What we need is a consensus on whether this script is welcome in > 2.7 or not. Who is the person to make the decision? > >> Now, for the "best practice" documentation, I think the guide is the >> best place to look at. > > Let's refer to the original user story: > "I installed Python and need a quick way to install my packages on top of it." python setup.py install works well, and has for almost a decade. If you need setuptools, you can include ez_setup.py, which does exactly what you want, without adding a hugely controversial feature to python proper. You do something like:

    try:
        import setuptools
    except ImportError:
        print "Run ez_setup.py first"

And you're done, cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] python compiler
On Mon, Apr 5, 2010 at 11:54 PM, wrote: > for a college project, I proposed to create a compiler for python. I've > read something about it and maybe I see now that I made a bad choice. I > would like to hear everyone's opinion. Depending on your taste, you may want to tackle something like a static analyser for python. This is not a compiler proper, but it could potentially be more useful than yet another compiler compiling 50 % of python, and you would get some results more quickly (no need to generate code, etc...). See e.g. http://bugs.jython.org/issue1541 for an actual implementation of a similar idea (but for jython), cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Automatic installer builds (was Re: Fwd: Broken link to download (Mac OS X))
On Thu, Apr 15, 2010 at 3:54 AM, wrote: > > Bill> In any case, they shouldn't be needed on buildbots maintained by > Bill> the PSF. > > Sure. My question was related to humans building binary distributions > though. Unless that becomes fully automated so the release manager can just > push a button and have it built on an as-yet-nonexistent Mac OSX buildbot > machine, somebody will have to generate that installer. Ronald says Fink, > MacPorts and /usr/local are poison. If that's truly the case that's fine. > It's just that it reduces the pool of potential binary installer build > machines. Actually, you can just use a chroot "jail" to build the binary - I use this process to build the official numpy/scipy binaries, and it works very well whatever crap there is on my laptop otherwise. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Binary Compatibility Issue with Python v2.6.5 and v3.1.2
On Tue, Apr 20, 2010 at 9:19 PM, Phil Thompson wrote: > When I build my C++ extension on Windows (specifically PyQt with MinGW) > against Python v2.6.5 it fails to run under v2.6.4. The same problem exists > when building against v3.1.2 and running under v3.1.1. > > The error message is... > > ImportError: DLL load failed: The specified procedure could not be found. > > ...though I don't know what the procedure is. > > When built against v2.6.4 it runs fine under all v2.6.x. When built under > v3.1.1 it runs fine under all v3.1.x. > > I had always assumed that an extension built with vX.Y.Z would always run > under vX.Y.Z-1. I don't know how well it is handled in python, but this is extremely hard to do in general - you are asking about forward compatibility, not backward compatibility. Is there a reason why you need to do this ? The usual practice is to build against the *oldest* compatible version you can, so that it remains compatible with everything afterwards, cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] what to do if you don't want your module in Debian
On Tue, Apr 27, 2010 at 5:10 AM, Piotr Ożarowski wrote: > if there's no other way (--install-data is ignored right now, and I know > you're doing great work to change that, thanks BTW), one could always > use it in *one* place and later import the result in other parts of > the code (instead of using __file__ again) May I ask why this is not actually the solution to resource location ? For example, let's say we have (a hypothetical version of distutils supporting autoconf paths):

    python setup.py install --prefix=/usr --datadir=/var/lib/foo --manpath=/somefunkypath

Then the install step would generate a file __install_path.py such as:

    PREFIX = "/usr"
    DATADIR = "/var/lib/foo"
    MANPATH = "/somefunkypath"

There remains then the problem of relocatable packages, but solving this would be easy through a conditional in this generated file:

    if RELOCATABLE:
        PREFIX = "$prefix"
        ...
    else:
        ...

and define $prefix and co from __file__ if necessary. All this would be an implementation detail, so that the package developer effectively does:

    from mypkg.file_paths import PREFIX, DATADIR, ...

This is both simple and flexible: it is not mandatory, and it does not make life more complicated for python developers who don't care about platform X. FWIW, that's the scheme I intend to support in my own packaging solution, cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
[Python-Dev] How are the bdist_wininst binaries built ?
Hi, I would like to modify the code of the bdist installers, but I don't see any VS project for VS 9.0. How are the wininst-9.0*exe built ? thanks, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] How are the bdist_wininst binaries built ?
On Thu, Jul 1, 2010 at 1:22 PM, "Martin v. Löwis" wrote: >> I would like to modify the code of the bdist installers, but I don't >> see any VS project for VS 9.0. How are the wininst-9.0*exe built ? > > See PC/bdist_wininst. Hm, my question may not have been clear: *how* is the wininst-9.0 built from the bdist_wininst sources ? I see 6, 7.0, 7.1 and 8.0 versions of the visual studio build scripts, but nothing for VS 9.0. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] How are the bdist_wininst binaries built ?
On Thu, Jul 1, 2010 at 2:00 PM, "Martin v. Löwis" wrote: >>> See PC/bdist_wininst. >> >> Hm, my question may not have been clear: *how* is the wininst-9.0 >> built from the bdist_wininst sources ? I see 6, 7.0, 7.1 and 8.0 >> versions of the visual studio build scripts, but nothing for VS 9.0. > > Ah. See PCbuild/bdist_wininst.vcproj. I thought I checked there, but I obviously missed it. thanks, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] SVN <-> HG workflow to split Python Library by Module
On Sat, Jul 3, 2010 at 6:37 AM, Brett Cannon wrote: > On Fri, Jul 2, 2010 at 12:25, anatoly techtonik wrote: >> I planned to publish this proposal when it is finally ready and tested >> with an assumption that Subversion repository will be online and >> up-to-date after Mercurial migration. But recent threads showed that >> currently there is no tested mechanism to sync Subversion repository >> back with Mercurial, so it will probably quickly outdate, and the >> proposal won't have a chance to be evaluated. So now is better than >> never. >> >> So, this is a way to split modules from monolithic Subversion >> repository into several Mercurial mirrors - one mirror for each module >> (or whatever directory structure you like). This will allow to >> concentrate your work on only one module at a time ("distutils", >> "CGIHTTPServer" etc.) without caring much about anything else. >> Exceptionally useful for occasional external "contributors" like me, >> and folks on Windows, who don't possess Visual Studio to compile >> Python and are forced to use whatever version they have installed to >> create and test patches. > > But modules do not live in an isolated world; they are dependent on > changes made to other modules. Isolating them from other modules whose > semantics change during development will lead to skew and improper > patches. I cannot comment on the original proposal, but this issue has known solutions in git, in the form of submodules. I believe hg has something similar with the forest extension http://mercurial.selenic.com/wiki/ForestExtension David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] SVN <-> HG workflow to split Python Library by Module
On Sat, Jul 3, 2010 at 9:34 AM, Brett Cannon wrote: > On Fri, Jul 2, 2010 at 17:17, David Cournapeau wrote: >> On Sat, Jul 3, 2010 at 6:37 AM, Brett Cannon wrote: >>> On Fri, Jul 2, 2010 at 12:25, anatoly techtonik wrote: >>>> I planned to publish this proposal when it is finally ready and tested >>>> with an assumption that Subversion repository will be online and >>>> up-to-date after Mercurial migration. But recent threads showed that >>>> currently there is no tested mechanism to sync Subversion repository >>>> back with Mercurial, so it will probably quickly outdate, and the >>>> proposal won't have a chance to be evaluated. So now is better than >>>> never. >>>> >>>> So, this is a way to split modules from monolithic Subversion >>>> repository into several Mercurial mirrors - one mirror for each module >>>> (or whatever directory structure you like). This will allow to >>>> concentrate your work on only one module at a time ("distutils", >>>> "CGIHTTPServer" etc.) without caring much about anything else. >>>> Exceptionally useful for occasional external "contributors" like me, >>>> and folks on Windows, who don't possess Visual Studio to compile >>>> Python and are forced to use whatever version they have installed to >>>> create and test patches. >>> >>> But modules do not live in an isolated world; they are dependent on >>> changes made to other modules. Isolating them from other modules whose >>> semantics change during development will lead to skew and improper >>> patches. >> >> I cannot comment on the original proposal, but this issue has known >> solutions in git, in the form of submodules. I believe hg has >> something similar with the forest extension >> >> http://mercurial.selenic.com/wiki/ForestExtension > > Mercurial has subrepo support, but that doesn't justify the need to > have every module in its own repository so they can be checked out > individually. It does not justify it, but it makes it possible to keep several repositories in sync, and ensures that you get a consistent state when cloning the top repo. If there is a need to often move code from one repo to the other, or if a change in one repo often causes a change in another one, then certainly that's a sign that they should be in the same repo. But for the windows issue, using subrepos so that when you clone the python repo, you get the exact same versions of the C libraries as used for the official msi (tk, tcl, openssl, bzip2, etc...), that would be very useful. At least I would have preferred this to the current situation whenever I need to build python myself on windows. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
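A sketch of what this could look like with Mercurial subrepositories (paths and URLs hypothetical): a .hgsub file at the root of the main clone maps checkout paths to the pinned external repositories, and the revision of each is recorded per changeset:

    externals/tcltk   = http://hg.example.org/tcltk
    externals/openssl = http://hg.example.org/openssl

Cloning the top repository then pulls exactly the dependency revisions recorded for the changeset being checked out.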
Re: [Python-Dev] More detailed build instructions for Windows
On Sat, Jul 3, 2010 at 2:26 PM, Reid Kleckner wrote: > Hey folks, > > I'm trying to test out a patch to add a timeout in subprocess.py on > Windows, so I need to build Python with Visual Studio. The docs say > the files in PCBuild/ work with VC 9 and newer. I downloaded Visual > C++ 2010 Express, and it needs to convert the .vcproj files into > .vcxproj files, but it fails. > > I can't figure out where to get VC 9, all I see is 2008 and 2010. VS 2008 == VC 9 == MSVC 15 David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Python 3 optimizations...
On Thu, Jul 22, 2010 at 10:08 PM, stefan brunthaler wrote: >> Is the source code under an open source non-copyleft license? >> > I am (unfortunately) not employed or funded by anybody, so I think > that I can license/release the code as I see fit. If you did this work under your PhD program, you may be more restricted than you think. You may want to check with your adviser first, cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] proto-pep: plugin proposal (for unittest)
On Fri, Jul 30, 2010 at 10:23 PM, Michael Foord wrote: > For those of you who found this document perhaps just a little bit too long, > I've written up a *much* shorter intro to the plugin system (including how > to get the prototype) on my blog: > > http://www.voidspace.org.uk/python/weblog/arch_d7_2010_07_24.shtml#e1186 This looks nice and simple, but I am a bit worried about the configuration file for registration. My experience is that end users don't like editing files much. I understand that may be considered bikeshedding, but have you considered a system analogous to bzr's instead ? A plugin is a directory somewhere, which means that disabling it is just removing a directory. In my experience, it is more reliable from a user POV than e.g. the hg way of doing things. The plugin system of bzr is one of the things that I still consider the best in its category, even though I stopped using bzr quite some time ago. The registration was incredibly robust and easy to use from a user and developer POV, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
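A minimal sketch of that directory-based scheme (Python 2 syntax; error handling elided):

    import imp, os

    def load_plugins(plugin_dir):
        # every foo.py dropped into plugin_dir becomes a plugin;
        # deleting the file disables it, with no registry to edit
        plugins = {}
        for fname in os.listdir(plugin_dir):
            if not fname.endswith(".py") or fname.startswith("_"):
                continue
            name = fname[:-3]
            fobj, path, desc = imp.find_module(name, [plugin_dir])
            try:
                plugins[name] = imp.load_module(name, fobj, path, desc)
            finally:
                fobj.close()
        return plugins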
Re: [Python-Dev] PEP 376 proposed changes for basic plugins support
On Tue, Aug 3, 2010 at 8:48 PM, Antoine Pitrou wrote: > On Tue, 03 Aug 2010 10:28:07 +0200 > "M.-A. Lemburg" wrote: >> > >> > Don't forget system packaging tools like .deb, .rpm, etc., which do not >> > generally take kindly to updating such things. For better or worse, the >> > filesystem *is* our "central database" these days. >> >> I don't think that's a problem: the SQLite database would be a cache >> like e.g. a font cache or TCSH command cache, not a replacement of >> the meta files stored in directories. >> >> Such a database would solve many things at once: faster access to >> the meta-data of installed packages, fewer I/O calls during startup, >> more flexible ways of doing queries on the meta-data, needed for >> introspection and discovery, etc. > > If the cache can become stale because of system package management > tools, how do you avoid I/O calls while checking that the database is > fresh enough at startup? There is a tension between the two approaches: either you want "auto-discovery", or you want a system with explicit registration, where only the registered plugins are visible to the system. System-wise, I much prefer the latter, and auto-discovery should be left at the application's discretion IMO. A library to deal with this at the *app* level may be fine. But the current system of loading packages and co is already complex enough in python that anything which adds complexity at the system (interpreter) level sounds like a bad idea. David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 376 proposed changes for basic plugins support
On Tue, Aug 3, 2010 at 11:35 PM, Michael Foord wrote: > On 03/08/2010 15:19, David Cournapeau wrote: >> >> On Tue, Aug 3, 2010 at 8:48 PM, Antoine Pitrou >> wrote: >> >>> >>> On Tue, 03 Aug 2010 10:28:07 +0200 >>> "M.-A. Lemburg" wrote: >>> >>>>> >>>>> Don't forget system packaging tools like .deb, .rpm, etc., which do not >>>>> generally take kindly to updating such things. For better or worse, >>>>> the >>>>> filesystem *is* our "central database" these days. >>>>> >>>> >>>> I don't think that's a problem: the SQLite database would be a cache >>>> like e.g. a font cache or TCSH command cache, not a replacement of >>>> the meta files stored in directories. >>>> >>>> Such a database would solve many things at once: faster access to >>>> the meta-data of installed packages, fewer I/O calls during startup, >>>> more flexible ways of doing queries on the meta-data, needed for >>>> introspection and discovery, etc. >>>> >>> >>> If the cache can become stale because of system package management >>> tools, how do you avoid I/O calls while checking that the database is >>> fresh enough at startup? >>> >> >> There is a tension between the two approaches: either you want >> "auto-discovery", or you want a system with explicit registration, where >> only the registered plugins are visible to the system. >> >> > > Not true. Auto-discovery provides an API for applications to tell users > which plugins are *available* whilst still allowing the app to decide which > are active / enabled. It still leaves full control in the hands of the > application. Maybe I was not clear, but I don't understand how your statement contradicts mine. The issue is how to determine which plugins are available: if you don't have explicit registration, you need to constantly re-stat every potential location (short of using OS-specific facilities to get notifications of fs changes). The current python solutions that I am familiar with are prohibitively expensive for this reason (think about what happens when you stat locations on NFS shares). David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
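By contrast, a sketch of the explicit-registration side of the argument (registry file name hypothetical): startup cost is one small file read instead of stat calls over every potential plugin location:

    def load_registered(registry="plugins.cfg"):
        # one importable module name per line; '#' starts a comment
        plugins = []
        for line in open(registry):
            name = line.strip()
            if name and not name.startswith("#"):
                plugins.append(__import__(name))
        return plugins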
Re: [Python-Dev] mingw support?
On Tue, Aug 10, 2010 at 11:06 PM, wrote: > On Mon, Aug 09, 2010 at 06:55:29PM -0400, Terry Reedy wrote: >> On 8/9/2010 2:47 PM, Sturla Molden wrote: >> >> Terry Reedy: >> > >> >> MingW has become less attractive in recent years by the difficulty >> >> in downloading and installing a current version and finding out how to >> >> do so. Some projects have moved on to the TDM packaging of MingW. >> >> >> >> http://tdm-gcc.tdragon.net/ >> >> Someone else deserves credit for writing that and giving that link ;-) > > Yes, that was a great link, thanks. It works fine for me. > > The reason I was bringing up this topic again was that I think the gnu > autotools have been made for exactly this purpose, to port software to > different platforms, Autotools only help for posix-like platforms. They are certainly a big hindrance on the windows platform in general, cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] mingw support?
On Wed, Aug 11, 2010 at 10:21 PM, Sturla Molden wrote: > > "David Cournapeau": >> Autotools only help for posix-like platforms. They are certainly a big >> hindrance on the windows platform in general, > > That is why mingw has MSYS. I know of MSYS, but it is not very pleasant to use, if only because it is extremely slow. When I need to build things for windows, I much prefer cross compiling to using MSYS. I also think that cross compilation is more useful than a native mingw build alone - there are patches for cross compilation, but I don't know their current status, cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Fixing #7175: a standard location for Python config files
On Fri, Aug 13, 2010 at 7:29 AM, Antoine Pitrou wrote: > On Thu, 12 Aug 2010 18:14:44 -0400 > Glyph Lefkowitz wrote: >> >> On Aug 12, 2010, at 6:30 AM, Tim Golden wrote: >> >> > I don't care how many stats we're doing >> >> You might not, but I certainly do. And I can guarantee you that the >> authors of command-line tools that have to start up in under ten >> seconds, for example 'bzr', care too. > > The idea that import time is dominated by stat() calls sounds rather > undemonstrated (and unlikely) to me. It may be, depending on what you import. I certainly have seen (and profiled) it. In my experience, stat calls and regex compilation often come at the top of the culprits for slow imports. In the case of setuptools namespace packages, there was a thread on April 23rd on distutils-sig about this issue: most of the slowdown came from unneeded stats (and symlink translations). cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
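One way to check the claim on a concrete import (real API; pkg_resources picked because of the thread mentioned above):

    import cProfile
    cProfile.run("import pkg_resources", sort="time")

On a sys.path containing many entries or network mounts, posix.stat calls tend to show up near the top of the listing.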
Re: [Python-Dev] Possible bug in randint when importing pylab?
On Fri, Aug 20, 2010 at 1:02 AM, Amaury Forgeot d'Arc wrote: > Hi, > > 2010/8/19 Timothy Kinney : >> I am getting some unexpected behavior in Python 2.6.4 on a WinXP SP3 box. > > This mailing list is for development *of* python, not about > development *with* python. > Your question should be directed to the comp.lang.python newsgroup, or > the python-list mailing list. actually, the numpy and/or matplotlib ML would be even better in that case :) David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 384 status
On Mon, Aug 30, 2010 at 6:43 AM, Antoine Pitrou wrote: > On Mon, 30 Aug 2010 07:31:34 +1000 > Nick Coghlan wrote: >> Since part of the point of >> PEP 384 is to support multiple versions of the C runtime in a single >> process, [...] > > I think that's quite a maximalist goal. The point of PEP 384 should be > to define a standard API for Python, (hopefully) spanning multiple > versions. Whether the API effectively guarantees a standard ABI can > only depend on whether the system itself hasn't changed its own > conventions (including, for example, function call conventions, or the > binary representation of standard C types). > > In other words, PEP 384 should only care to stabilize the ABI as > long as the underlying system doesn't change. It sounds a bit foolish > for us to try to hide potential unstabilities in the underlying > platform. And it's equally foolish to try to forbid people from using > well-known system facilities such as FILE* or (worse) errno. > > So, perhaps the C API docs can simply mention the caveat of using FILE* > (and perhaps errno, if there's a problem there as well) for C extensions > under Windows. C extension writers are (usually) consenting adults, after > all. This significantly decreases the value of such an API, to the point of making it useless on windows, since historically different python versions have been built with different runtimes. And I would think that windows is the platform where PEP 384 would be the most useful - at least it would be for numpy/scipy, where those runtime issues have bitten us several times (and are painful to debug, especially when you don't know windows so well). cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] PEP 384 status
On Tue, Aug 31, 2010 at 6:54 AM, Nick Coghlan wrote: > Hmm... that last point is a bit of an issue actually, since it also > flows the other way (changes made via the locale module won't be > visible to any extension modules using a different C runtime). So I > suspect mixing C runtimes is still going to come with the caveat of > potential locale related glitches. As far as IO is concerned, FILE* is just a special case of a more generic issue, though, so maybe this could be a bit reworded. For example, file descriptors cannot be shared between runtimes either. cheers, David ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com