[issue26078] Python launcher options enhancement
New submission from Edward: The Python launcher in Windows is a neat tool for running multiple versions of Python 2 and Python 3 at different times. Its options allow specifying the latest version of either Python 2 or Python 3 (defaulting to the 64-bit version if both exist), or a specific 32-bit or 64-bit version of Python 2 or Python 3. What is missing is the ability to specify the latest 32-bit version of Python 2 or Python 3. The equivalent syntax would be '-2-32' or '-3-32'. Is there some reason why this option has been disallowed? If not, I would like to see this logical enhancement to the Python launcher in Windows added to its functionality. -- components: Windows messages: 257940 nosy: edienerlee, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python launcher options enhancement type: enhancement versions: Python 3.5 ___ Python tracker <http://bugs.python.org/issue26078> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue23687] Stacktrace identifies wrong line in multiline list comprehension
New submission from Edward: This code: z = [ ["Y" for y in None ] for x in range(4) ] produces this stacktrace in both Python 2.7 and 3.4: Traceback (most recent call last): File "/Users/edwsmith/dev/untitled4/test.py", line 7, in ] for x in range(4) File "/Users/edwsmith/dev/untitled4/test.py", line 7, in ] for x in range(4) TypeError: 'NoneType' object is not iterable Of course my code was slightly more complex, but I lost a fair amount of time troubleshooting how the 'for x in range(4)' was evaluating to None, when really, it was the inner comprehension that was failing. Ideally the stack trace would say: Traceback (most recent call last): File "/Users/edwsmith/dev/untitled4/test.py", line 6, in ["Y" for y in None File "/Users/edwsmith/dev/untitled4/test.py", line 6, in ["Y" for y in None TypeError: 'NoneType' object is not iterable -- components: Interpreter Core messages: 238290 nosy: ers81239 priority: normal severity: normal status: open title: Stacktrace identifies wrong line in multiline list comprehension type: behavior versions: Python 2.7, Python 3.4 ___ Python tracker <http://bugs.python.org/issue23687> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
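A minimal runnable sketch of the report (file name and exact line numbers are whatever your script uses); on the versions cited (2.7/3.4) the printed traceback points at the outer `for x in range(4)` line rather than the failing inner iterable:

```python
import traceback

try:
    z = [
        ["Y" for y in None   # the real failure: None is not iterable
         ] for x in range(4)  # ...but 2.7/3.4 tracebacks point at this line
    ]
except TypeError:
    traceback.print_exc()
```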
[issue1124] Webchecker not parsing css "@import url"
New submission from Edward Abraham: webchecker and its dependent, websucker, which are distributed with the python tools, are not following references to stylesheets given with the @import url(mystyle.css); declaration ... This means that the websucker isn't copying across stylesheets ... -- components: Demos and Tools messages: 55722 nosy: ready.eddy severity: normal status: open title: Webchecker not parsing css "@import url" versions: Python 2.5 __ Tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue1124> __ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3245] Memory leak on OS X
New submission from Edward Langley <[EMAIL PROTECTED]>: On OS X 10.5.3 the default python has a mild memory leak. sample session: % python -S Python 2.5.1 (r251:54863, Apr 15 2008, 22:57:26) [GCC 4.0.1 (Apple Inc. build 5465)] on darwin >>> % leaks Python Process 2357: 572 nodes malloced for 1031 KB Process 2357: 1 leak for 16 total leaked bytes. ... -- components: Macintosh messages: 69016 nosy: fiddlerwoaroof severity: normal status: open title: Memory leak on OS X type: performance versions: Python 2.5 ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3245> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3245] Memory leak on OS X
Edward Langley <[EMAIL PROTECTED]> added the comment: I think it may be a result of the framework build, I don't have the problem with either 2.5.2 (debug build) or 2.6b1, both non-framework builds. ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3245> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue7926] Stray parentheses() in context manager "what's new" doc
New submission from Edward Welbourne : http://docs.python.org/whatsnew/2.6.html#writing-context-managers penultimate item in "A high-level explanation": If BLOCK raises an exception, the __exit__(type, value, traceback)() is called has extra () after the argument list - this appears to say that __exit__ should return a callable, that shall be called with no parameters. Fortunately, later example code reveals that __exit__ simply returns true or false. http://docs.python.org/whatsnew/2.6.html#the-contextlib-module after the first code block: The contextlib module also has a nested(mgr1, mgr2, ...)() function again, stray () after parameter list. After the next short code snippet: Finally, the closing(object)() function returns ... -- assignee: georg.brandl components: Documentation messages: 99336 nosy: eddy, georg.brandl severity: normal status: open title: Stray parentheses() in context manager "what's new" doc versions: Python 2.6 ___ Python tracker <http://bugs.python.org/issue7926> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
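For context, a small sketch (hypothetical class name) of what the corrected sentence means: `__exit__` is called with three arguments and returns a true/false value, not another callable to be invoked:

```python
class Managed:
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        # Returning False lets any exception propagate; returning True
        # would suppress it. Nothing calls the return value afterwards.
        return False

with Managed() as m:
    print(m)
```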
[issue7928] String formatting: grammar wrongly limits [index] to integer
New submission from Edward Welbourne : http://docs.python.org/library/string.html#formatstrings field_name::= (identifier | integer) ("." attribute_name | "[" element_index "]")* element_index ::= integer Subsequent text indicates __getitem__() is used but does not overtly say that a string can be used; but http://docs.python.org/whatsnew/2.6.html#pep-3101-advanced-string-formatting gives the example >>> 'Content-type: {0[.mp4]}'.format(mimetypes.types_map) and clearly '.mp4' is passed to __getitem__(); a string, not an integer. Clearly one of these is wrong ! Given that the "what's new" doc goes into some detail about how the content of [...] gets parsed, I'm guessing it's right and the grammar is wrong. -- assignee: georg.brandl components: Documentation messages: 99340 nosy: eddy, georg.brandl severity: normal status: open title: String formatting: grammar wrongly limits [index] to integer versions: Python 2.6 ___ Python tracker <http://bugs.python.org/issue7928> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
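A quick check of the behaviour the report describes, reusing the "what's new" example plus an ordinary dict (assuming your mimetypes table knows ".mp4"):

```python
import mimetypes

# The element_index inside [...] is passed to __getitem__ as a string here,
# so the grammar's "element_index ::= integer" is too restrictive.
print('Content-type: {0[.mp4]}'.format(mimetypes.types_map))
print('{0[x]}'.format({'x': 42}))   # plain string key on an ordinary dict
```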
[issue7926] Stray parentheses() in context manager "what's new" doc
Edward Welbourne added the comment: The third change removes the early uses of "object" from: Finally, the closing(object)() function returns object so that it can be bound to a variable, and calls object.close at the end of the block. leaving the last use (object.close) as a dangling reference. So either revert this part of the fix and change :func:`closing(object)` to just ``closing(object)`` or follow up the present change by changing and calls :meth `object.close` at the end to and calls the argument's :meth:`close` method at the end -- status: closed -> open ___ Python tracker <http://bugs.python.org/issue7926> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
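For reference, a tiny sketch (hypothetical Resource class) of the behaviour the sentence is trying to describe: `closing(obj)` hands back `obj` itself for the `as` clause and calls its `close()` method when the block exits:

```python
from contextlib import closing

class Resource:
    def close(self):
        print('closed')

with closing(Resource()) as r:
    print(r is not None)   # r is the Resource instance itself
# prints True, then 'closed'
```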
[issue7926] Stray parentheses() in context manager "what's new" doc
Edward Welbourne added the comment: Nice :-) -- ___ Python tracker <http://bugs.python.org/issue7926> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12458] Tracebacks should contain the first line of continuation lines
Edward Yang added the comment: Supposing I like the old behavior (line number of the end of the statement), is there any way to recover that line number from the traceback after this change? -- nosy: +ezyang ___ Python tracker <https://bugs.python.org/issue12458> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue45210] tp_dealloc docs should mention error indicator may be set
New submission from Edward Yang : The fact that the error indicator may be set during tp_dealloc is somewhat well known (https://github.com/posborne/dbus-python/blob/fef4bccfc535c6c2819e3f15384600d7bc198bc5/_dbus_bindings/conn.c#L387) but it's not documented in the official manual. We should document it. A simple suggested patch: diff --git a/Doc/c-api/typeobj.rst b/Doc/c-api/typeobj.rst index b17fb22b69..e7c9b13646 100644 --- a/Doc/c-api/typeobj.rst +++ b/Doc/c-api/typeobj.rst @@ -668,6 +668,20 @@ and :c:type:`PyType_Type` effectively act as defaults.) :c:func:`PyObject_GC_Del` if the instance was allocated using :c:func:`PyObject_GC_New` or :c:func:`PyObject_GC_NewVar`. + If you may call functions that may set the error indicator, you must + use :c:func:`PyErr_Fetch` and :c:func:`PyErr_Restore` to ensure you + don't clobber a preexisting error indicator (the deallocation could + have occurred while processing a different error): + + .. code-block:: c + + static void foo_dealloc(foo_object *self) { + PyObject *et, *ev, *etb; + PyErr_Fetch(&et, &ev, &etb); + ... + PyErr_Restore(et, ev, etb); + } + Finally, if the type is heap allocated (:const:`Py_TPFLAGS_HEAPTYPE`), the deallocator should decrement the reference count for its type object after calling the type deallocator. In order to avoid dangling pointers, the -- assignee: docs@python components: Documentation messages: 401854 nosy: docs@python, ezyang priority: normal severity: normal status: open title: tp_dealloc docs should mention error indicator may be set type: enhancement versions: Python 3.11 ___ Python tracker <https://bugs.python.org/issue45210> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue40897] Inheriting from Generic causes inspect.signature to always return (*args, **kwargs) for constructor (and all subclasses)
New submission from Edward Yang : Consider the following program: ``` import inspect from typing import Generic, TypeVar T = TypeVar('T') class A(Generic[T]): def __init__(self) -> None: pass print(inspect.signature(A)) ``` I expect inspect.signature to return () as the signature of the constructor of this function. However, I get this: ``` $ python3 foo.py (*args, **kwds) ``` Although it is true that one cannot generally rely on inspect.signature to always give the most accurate signature (because there may always be decorator or metaclass shenanigans getting in the way), in this particular case it seems especially undesirable because Python type annotations are supposed to be erased at runtime, and yet here inheriting from Generic (simply to add type annotations) causes a very clear change in runtime behavior. -- components: Library (Lib) messages: 370870 nosy: ezyang priority: normal severity: normal status: open title: Inheriting from Generic causes inspect.signature to always return (*args, **kwargs) for constructor (and all subclasses) type: behavior versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue40897> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
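A hedged workaround sketch for the report above: asking for the signature of `__init__` directly sidesteps whatever the typing machinery contributes to the class-level lookup (the result of `inspect.signature(A)` itself varies by version, as reported):

```python
import inspect
from typing import Generic, TypeVar

T = TypeVar('T')

class A(Generic[T]):
    def __init__(self) -> None:
        pass

print(inspect.signature(A))           # (*args, **kwds) on the affected versions
print(inspect.signature(A.__init__))  # (self) -> None
```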
[issue14416] syslog missing constants
New submission from Edward Yang : The syslog module is missing constants for a number of logging priorities available on modern Linuxen. In particular, the following options are missing: LOG_ODELAY, LOG_AUTHPRIV, LOG_SYSLOG, LOG_UUCP. -- components: Library (Lib) messages: 156842 nosy: ezyang priority: normal severity: normal status: open title: syslog missing constants type: enhancement versions: Python 2.7 ___ Python tracker <http://bugs.python.org/issue14416> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue14416] syslog missing constants
Edward Yang added the comment: I misspoke about UUCP. SYSLOG appears to be missing from the documentation. Arguably they should be present if Linux supports them, and missing if they don't (same as LOG_PERROR, and some of the other constants.) Then you can do feature detection Python-side. -- ___ Python tracker <http://bugs.python.org/issue14416> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
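A feature-detection sketch along those lines (the constant names are the ones mentioned in this report; the syslog module is Unix-only):

```python
import syslog

for name in ('LOG_ODELAY', 'LOG_AUTHPRIV', 'LOG_SYSLOG', 'LOG_UUCP', 'LOG_PERROR'):
    if hasattr(syslog, name):
        print(name, '=', getattr(syslog, name))
    else:
        print(name, 'is not provided by this build/platform')
```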
[issue14531] Backtrace should not attempt to open file
New submission from Edward Yang : When generating a backtrace from an interactive Python session (e.g. the input is from <stdin>), Python attempts to actually find a file named <stdin>, to somewhat hilarious consequences. See the strace'd Python session below: >>> foo open("/etc/default/apport", O_RDONLY|O_LARGEFILE) = 3 Traceback (most recent call last): File "<stdin>", line 1, in <module> open("", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/pylint-0.24.0-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/logilab_astng-0.22.0-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/logilab_common-0.56.1-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/unittest2-0.5.1-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/GitPython-0.3.2.RC1-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/gitdb-0.5.4-py2.7-linux-i686.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/smmap-0.8.1-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/async-0.6.1-py2.7-linux-i686.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/decorator-3.3.1-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/SQLAlchemy-0.7.2-py2.7-linux-i686.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/Sphinx-1.0.7-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/docutils-0.8.1-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/Jinja2-2.6-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/Pygments-1.4-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/nose-1.1.2-py2.7.egg/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/home/ezyang/Dev/6.02/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/home/ezyang/Dev/pyafs/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/home/ezyang/Dev/wizard/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/home/ezyang/Dev/twisted/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.6/site-packages/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/home/ezyang/Work/shared-python/build/lib.linux-i686-2.6/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/home/ezyang/Work/snarfs/python/coil/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/home/ezyang/Dev/py-github/src/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/lib/python2.7/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/lib/python2.7/plat-linux2/", O_RDONLY|O_LARGEFILE) = -1 ENOENT
(No such file or directory) open("/usr/lib/python2.7/lib-tk/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/lib/python2.7/lib-old/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/lib/python2.7/lib-dynload/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/local/lib/python2.7/dist-packages/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/lib/python2.7/dist-packages/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/lib/python2.7/dist-packages/Numeric/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/lib/python2.7/dist-packages/PIL/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/lib/python2.7/dist-packages/gst-0.10/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/usr/lib/python2.7/dist-packages/gtk-2.0/", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open("/
[issue14531] Backtrace should not attempt to open file
Edward Yang added the comment: "<stdin>" is a valid name of a file on Unix systems. So the fix is not so clear. ezyang@javelin:~$ python Python 2.7.2+ (default, Oct 4 2011, 20:03:08) [GCC 4.6.1] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> a Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'a' is not defined -- ___ Python tracker <http://bugs.python.org/issue14531> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue37407] Update imaplib.py to account for additional padding
New submission from Edward Smith : Regex for imaplib should account for IMAP servers which return one or two whitespaces after the * in responses, for example when using davmail as an intermediary between Exchange and IMAP. Acceptable response is as follows: ``` 58:24.54 > PJJD3 EXAMINE INBOX 58:24.77 < * 486 EXISTS 58:24.78 matched r'\* (?P<data>\d+) (?P<type>[A-Z-]+)( (?P<data2>.*))?' => ('486', 'EXISTS', None, None) 58:24.78 untagged_responses[EXISTS] 0 += ["486"] ``` Davmail response: ``` 57:50.86 > KPFE3 EXAMINE INBOX 57:51.10 < *  953 EXISTS 57:51.10 last 0 IMAP4 interactions: 57:51.10 > KPFE4 LOGOUT ``` See the additional whitespace after the * on line 2. To be fixed by allowing the regex to account for one or two whitespaces: ```python br'\*[ ]{1,2}(?P<data>\d+) (?P<type>[A-Z-]+)( (?P<data2>.*))?' ``` -- components: email messages: 346584 nosy: barry, edwardmbsmith, r.david.murray priority: normal pull_requests: 14202 severity: normal status: open title: Update imaplib.py to account for additional padding type: behavior versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue37407> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
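A quick sanity check of the proposed pattern against both forms of the untagged response (the group names here follow imaplib's existing Untagged_status regex; treat them as an assumption):

```python
import re

pattern = re.compile(br'\*[ ]{1,2}(?P<data>\d+) (?P<type>[A-Z-]+)( (?P<data2>.*))?')

for line in (b'* 486 EXISTS', b'*  953 EXISTS'):
    m = pattern.match(line)
    print(line, '->', m.group('data', 'type') if m else 'no match')
```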
[issue27777] cgi.FieldStorage can't parse simple body with Content-Length and no Content-Disposition
Edward Gow added the comment: This bug is triggered by xml-rpc calls from the xmlrpc.client in the Python 3.5 standard library to a mod_wsgi/Python 3.5 endpoint. -- nosy: +elgow ___ Python tracker <https://bugs.python.org/issue27777> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue32736] random.triangular yields unexpected distribution when args mixed
New submission from Edward Preble : The random.triangular function produces an unexpected distribution result (instead of an error or warning message) when the order of the 3 arguments are mixed up. Python notebook with demo of issue here: https://github.com/epreble/random_triangular_distribution_issue Cases 1-4 are OK 1. random.triangular(low, high, mode) (Docs specified usage) 2. random.triangular(high, low, mode) 3. random.triangular(low, high) 4. random.triangular(high, low) Incorrect argument input (e.g. numpy style) yields distributions that are NOT 3-value-triangular and the output is also from different ranges than expected: Incorrect arguments (first one is numpy.random.triangular style) 6. random.triangular(low, mode, high) or: 7. random.triangular(high, mode, low) Raising errors was discouraged in prior discussion (https://bugs.python.org/issue13355) due to back compatibility concerns. However, I would argue that output of an incorrect distribution without a warning is a problem that -should- be broken, even in old code. A possible solution, that might not break the old code (I haven't looked at all the edge cases): If 3 arguments provided, need to be sure the mode is arg3: If arg1 < arg2 < arg3, this is definitely wrong since the mode is definitely in the middle (wrong position). If arg1 > arg2 > arg3, this is definitely wrong since the mode is definitely in the middle (wrong position). Those tests would not break the old use cases, since the signs of the tests switch between arg1/arg2/arg3: low, high, mode (would be arg1 < arg2 > arg3) high, low, mode (would be arg1 > arg2 < arg3) Not positive how all the <=, >= combos work out with these tests. -- components: Library (Lib) messages: 311377 nosy: eddieprebs priority: normal severity: normal status: open title: random.triangular yields unexpected distribution when args mixed type: behavior versions: Python 2.7, Python 3.6 ___ Python tracker <https://bugs.python.org/issue32736> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
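A sketch of the sanity check proposed above (hypothetical helper, not the stdlib API): with three arguments the mode must come last, so a strictly monotonic triple means the mode landed in the middle and the call is almost certainly numpy-style:

```python
def check_triangular_args(a, b, mode=None):
    """Reject calls that look like triangular(low, mode, high)."""
    if mode is not None and (a < b < mode or a > b > mode):
        raise ValueError(
            "random.triangular takes (low, high, mode); this call looks "
            "like (low, mode, high)")

check_triangular_args(0, 10, 5)    # ok: (low, high, mode)
check_triangular_args(10, 0, 5)    # ok: (high, low, mode)
# check_triangular_args(0, 5, 10)  # would raise: the mode is in the middle
```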
[issue33976] Enums don't support nested classes
New submission from Edward Wang : Methods defined in Enums behave 'normally' but classes defined in Enums get mistaken for regular values and can't be used as classes out of the box.
```python
from enum import Enum

class Outer(Enum):
    a = 1
    b = 2

    class Inner(Enum):
        foo = 10
        bar = 11
```
-- messages: 320541 nosy: edwardw priority: normal severity: normal status: open title: Enums don't support nested classes type: behavior ___ Python tracker <https://bugs.python.org/issue33976> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue33976] Enums don't support nested classes
Change by Edward Wang : -- keywords: +patch pull_requests: +7557 stage: -> patch review ___ Python tracker <https://bugs.python.org/issue33976> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue33976] Enums don't support nested classes
Edward Wang added the comment: Ethan - thank you for your speedy response! For an example you can see https://github.com/ucb-bar/hammer/blob/97021bc7e1c819747f8b8b6d4b8c76cdc6a488e3/src/hammer-vlsi/hammer_vlsi_impl.py#L195 - the ObstructionType enum is really only used inside PlacementConstraintType, so it was thought to be logical to nest ObstructionType inside PlacementConstraintType and reference it as PlacementConstraintType.ObstructionType. It just seemed weird that inline class definitions inside an Enum get treated as an enum value while functions defined inside Enums are treated as methods - as a user it would make more sense to have them both treated the same way (not as enum values), I think. -- ___ Python tracker <https://bugs.python.org/issue33976> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue34213] Frozen dataclass __init__ fails for "object" property"
New submission from Edward Jones : When `__init__` is called for a class which 1) is annotated with `@dataclasses.dataclass(frozen=True)` and 2) has an attribute named `object`, a TypeError is raised because `object` is overridden for the local scope and as a result `__setattr__` is called on the passed-in argument value instead of the standard `object` base type. I was able to reproduce this in a Docker container running https://github.com/docker-library/python/blob/7a794688c7246e7eff898f5288716a3e7dc08484/3.7/stretch/Dockerfile with the attached .py file. Python 3.7.0 (default, Jul 17 2018, 11:04:33) [GCC 6.3.0 20170516] on linux -- components: Library (Lib) files: frozen_dataclass_init_typeerror.py messages: 322321 nosy: Omenien priority: normal severity: normal status: open title: Frozen dataclass __init__ fails for "object" property" type: behavior versions: Python 3.7 Added file: https://bugs.python.org/file47710/frozen_dataclass_init_typeerror.py ___ Python tracker <https://bugs.python.org/issue34213> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
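A hedged reproduction sketch of the report (the class name is hypothetical; on the affected 3.7.0 the constructor raised, while later versions avoid the shadowed name in the generated `__init__`):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Holder:
    object: int   # a field named "object" shadows the builtin inside __init__

# Per the report, 3.7.0's generated __init__ calls object.__setattr__(...),
# where "object" now refers to the int argument, hence the TypeError.
print(Holder(object=5))
```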
[issue35060] subprocess output seems to depend on size of terminal screen
New submission from Edward Pratt : I am looking for a string inside of a process, and it seems that the output of the check_output command depends on the screen size of the terminal I run the code in. Here I ran the code with a normal screen resolution: >>> result = subprocess.check_output(['ps', 'aux']).decode('ascii', errors='ignore') >>> 'app-id' in result False Then I zoom out to the point where I can barely read the text on the screen, and this is the output I get: >>> result = subprocess.check_output(['ps', 'aux']).decode('ascii', errors='ignore') >>> 'app-id' in result True -- components: Demos and Tools messages: 328371 nosy: epsolos priority: normal severity: normal status: open title: subprocess output seems to depend on size of terminal screen type: behavior versions: Python 3.5 ___ Python tracker <https://bugs.python.org/issue35060> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35060] subprocess output seems to depend on size of terminal screen
Edward Pratt added the comment: I don’t think that is true. I tried grepping for what I need in a very small terminal and still got the correct result. > On Oct 24, 2018, at 1:40 PM, Eryk Sun wrote: > > > Eryk Sun added the comment: > > This is due to the ps command itself. You'd have the same problem when piping > to grep or redirecting output to a file. I don't know how it determines > terminal size. I tried overriding stdin, stdout and stderr to pipes and > calling setsid() in the forked child process to detach from the controlling > terminal, but it still detected the terminal size. Anyway, the "ww" option of > ps overrides this behavior. > > -- > nosy: +eryksun > resolution: -> third party > stage: -> resolved > status: open -> closed > > ___ > Python tracker > <https://bugs.python.org/issue35060> > ___ -- ___ Python tracker <https://bugs.python.org/issue35060> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue35060] subprocess output seems to depend on size of terminal screen
Edward Pratt added the comment: You are correct. It works as expected outside of the REPL. > On Oct 24, 2018, at 4:34 PM, Stéphane Wirtel wrote: > > > Stéphane Wirtel added the comment: > > My script: > > #!/usr/bin/env python > import pathlib > import subprocess > > output = subprocess.check_output(['ps', 'aux']) > pathlib.Path('/tmp/ps_aux.txt').write_bytes(output) > > > When I execute the following script in the REPL, I get your issue but for me, > it's normal because the REPL is running in a terminal with a limited size. > > And when I execute the same script like as a simple python script, I don't > have your issue. > > -- > status: closed -> open > > ___ > Python tracker > <https://bugs.python.org/issue35060> > ___ -- ___ Python tracker <https://bugs.python.org/issue35060> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
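For reference, a sketch of the workaround mentioned in the quoted reply: ps truncates each line to the terminal width unless told otherwise, so adding "ww" removes the dependence on how the REPL's terminal happens to be sized ('app-id' is just the reporter's example search string):

```python
import subprocess

result = subprocess.check_output(['ps', 'auxww']).decode('ascii', errors='ignore')
print('app-id' in result)
```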
[issue18244] singledispatch: When virtual-inheriting ABCs at distinct points in MRO, composed MRO is dependent on haystack ordering
New submission from Edward Catmur: Suppose we have a class C with MRO (C, B, A, object). C virtual-inherits an ABC V, while B virtual-inherits an unrelated ABC W: object / | \ A W | | .` / B` V | .` C` Recalling that per PEP 443 singledispatch prefers concrete bases to virtual bases, we would expect the following composed MRO: C, B, V, A, W, object However what actually happens is the composed MRO depends on the order of the haystack; if W is processed first the result is correct, but if V is processed first then (because V does not subclass W) W is inserted in the MRO *before* V: C, B, A, object C, B, V, A, object C, B, W, V, A, object This results in ambiguity between V and W. Suggested fix is a slight change to the MRO composition algorithm, considering whether the items already placed in the MRO are concrete base classes. -- components: Extension Modules hgrepos: 200 messages: 191350 nosy: ecatmur priority: normal severity: normal status: open title: singledispatch: When virtual-inheriting ABCs at distinct points in MRO, composed MRO is dependent on haystack ordering versions: Python 3.4 ___ Python tracker <http://bugs.python.org/issue18244> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
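A sketch of the hierarchy described above (class names follow the report); whether the failure actually appears is intermittent, since it depends on the order in which the registry is traversed:

```python
from abc import ABC
from functools import singledispatch

class A: pass
class B(A): pass
class C(B): pass

class W(ABC): pass   # unrelated ABC, virtually inherited by B
class V(ABC): pass   # ABC virtually inherited by C

W.register(B)
V.register(C)

@singledispatch
def f(obj):
    return 'object'

@f.register(V)
def _(obj):
    return 'V'

@f.register(W)
def _(obj):
    return 'W'

# Expected composed MRO: C, B, V, A, W, object, so f(C()) should return 'V';
# on affected versions the call can instead raise "Ambiguous dispatch"
# depending on which of V, W is processed first from the registry.
print(f(C()))
```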
[issue18244] singledispatch: When virtual-inheriting ABCs at distinct points in MRO, composed MRO is dependent on haystack ordering
Edward Catmur added the comment: Apologies, the linked repository is for the 2.x backport of singledispatch. I'll replace it with a proper Python repo. -- ___ Python tracker <http://bugs.python.org/issue18244> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18244] singledispatch: When virtual-inheriting ABCs at distinct points in MRO, composed MRO is dependent on haystack ordering
Changes by Edward Catmur : -- keywords: +patch Added file: http://bugs.python.org/file30623/singledispatch-mro-18244.patch ___ Python tracker <http://bugs.python.org/issue18244> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18244] singledispatch: When virtual-inheriting ABCs at distinct points in MRO, composed MRO is dependent on haystack ordering
Changes by Edward Catmur : -- type: -> behavior ___ Python tracker <http://bugs.python.org/issue18244> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18244] singledispatch: When virtual-inheriting ABCs at distinct points in MRO, composed MRO is dependent on haystack ordering
Edward Catmur added the comment: See attachment for patch and test. Note that reproducing the issue without access to singledispatch internals depends on iteration order of a dict of types and is thus intermittent/environment dependent. -- ___ Python tracker <http://bugs.python.org/issue18244> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue18244] singledispatch: When virtual-inheriting ABCs at distinct points in MRO, composed MRO is dependent on haystack ordering
Edward Catmur added the comment: Łukasz, thanks. When the most-derived class virtual-inherits two related ABCs U, V: object / | \ A W V | .` .` B` U` | .` C` The secondary `for` loop is necessary to ensure U and V are ordered correctly. I'll upload a patch with an improved test that covers this case. -- Added file: http://bugs.python.org/file30646/singledispatch-mro-composition.patch ___ Python tracker <http://bugs.python.org/issue18244> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue16067] UAC prompt for installation shows temporary file name
New submission from Edward Brey: When installing on Windows, the UAC prompt shows a temporary random file name for the MSI file. To solve this, use the /d switch with signtool when signing the MSI file. Cf. http://stackoverflow.com/q/4315840 -- components: Installation messages: 171398 nosy: breyed priority: normal severity: normal status: open title: UAC prompt for installation shows temporary file name type: behavior versions: Python 3.4 ___ Python tracker <http://bugs.python.org/issue16067> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19096] multiprocessing.Pool._terminate_pool restarts workers during shutdown
New submission from Edward Catmur: There is a race condition in multiprocessing.Pool._terminate_pool that can result in workers being restarted during shutdown (process shutdown or pool.terminate()). worker_handler._state = TERMINATE# < race from here task_handler._state = TERMINATE debug('helping task handler/workers to finish') cls._help_stuff_finish(inqueue, task_handler, len(pool)) assert result_handler.is_alive() or len(cache) == 0 result_handler._state = TERMINATE outqueue.put(None) # sentinel # We must wait for the worker handler to exit before terminating # workers because we don't want workers to be restarted behind our back. debug('joining worker handler') worker_handler.join()# <~ race to here At any point between setting worker_handler._state = TERMINATE and joining the worker handler, if the intervening code causes a worker to exit then it is possible for the worker handler to fail to notice that it has been shutdown and so attempt to restart the worker: @staticmethod def _handle_workers(pool): thread = threading.current_thread() # Keep maintaining workers until the cache gets drained, unless the pool # is terminated. while thread._state == RUN or (pool._cache and thread._state != TERMINATE): # <~~ race here pool._maintain_pool() time.sleep(0.1) # send sentinel to stop workers pool._taskqueue.put(None) util.debug('worker handler exiting') We noticed this initially because in the absence of the fix to #14881 a ThreadPool trying to restart a worker fails and hangs the process. In the presence of the fix to #14881 there is no immediate issue, but trying to restart a worker process/thread on pool shutdown is clearly unwanted and could result in bad things happening e.g. at process shutdown. To trigger the race with ThreadPool, it is enough just to pause the _handle_workers thread after checking its state and before calling _maintain_pool: import multiprocessing.pool import time class ThreadPool(multiprocessing.pool.ThreadPool): def _maintain_pool(self): time.sleep(1) super(ThreadPool, self)._maintain_pool() def _repopulate_pool(self): assert self._state == multiprocessing.pool.RUN super(ThreadPool, self)._repopulate_pool() pool = ThreadPool(4) pool.map(lambda x: x, range(5)) pool.terminate() pool.join() Exception in thread Thread-5: Traceback (most recent call last): File ".../cpython/Lib/threading.py", line 657, in _bootstrap_inner self.run() File ".../cpython/Lib/threading.py", line 605, in run self._target(*self._args, **self._kwargs) File ".../cpython/Lib/multiprocessing/pool.py", line 358, in _handle_workers pool._maintain_pool() File ".../bug.py", line 6, in _maintain_pool super(ThreadPool, self)._maintain_pool() File ".../cpython/Lib/multiprocessing/pool.py", line 232, in _maintain_pool self._repopulate_pool() File ".../bug.py", line 8, in _repopulate_pool assert self._state == multiprocessing.pool.RUN AssertionError In this case, the race occurs when ThreadPool._help_stuff_finish puts sentinels on inqueue to make the workers finish. 
It is also possible to trigger the bug with multiprocessing.pool.Pool: import multiprocessing.pool import time class Pool(multiprocessing.pool.Pool): def _maintain_pool(self): time.sleep(2) super(Pool, self)._maintain_pool() def _repopulate_pool(self): assert self._state == multiprocessing.pool.RUN super(Pool, self)._repopulate_pool() @staticmethod def _handle_tasks(taskqueue, put, outqueue, pool): time.sleep(1) _real_handle_tasks(taskqueue, put, outqueue, pool) _real_handle_tasks = multiprocessing.pool.Pool._handle_tasks multiprocessing.pool.Pool._handle_tasks = Pool._handle_tasks pool = Pool(4) pool.map(str, range(10)) pool.map_async(str, range(10)) pool.terminate() pool.join() In this case, the race occurs when _handle_tasks checks thread._state, breaks out of its first loop, and sends sentinels to the workers. The terminate/join can be omitted, in which case the bug will occur at gc or process shutdown when the pool's atexit handler runs. The bug is avoided if terminate is replaced with close, and we are using this workaround. -- components: Library (Lib) messages: 198432 nosy: ecatmur priority: normal severity: normal status: open title: multiprocessing.Pool._terminate_pool restarts workers during shutdown type: behavior versions: Python 2.7, Python 3.5 ___ Python tracker <http://bugs.python.org/issue19096> ___ _
[issue19096] multiprocessing.Pool._terminate_pool restarts workers during shutdown
Edward Catmur added the comment: Suggested patch: https://bitbucket.org/ecatmur/cpython/compare/19096-multiprocessing-race..#diff Move the worker_handler.join() to immediately after setting the worker handler thread state to TERMINATE. This is a safe change as nothing in the moved-over code affects the worker handler thread, except by terminating workers which is precisely what we don't want to happen. In addition, this is near-equivalent behaviour to current close() + join(), which is well-tested. Also: write tests; and modify Pool.__init__ to refer to its static methods using self rather than class name, to make them overridable for testing purposes. -- hgrepos: +211 ___ Python tracker <http://bugs.python.org/issue19096> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19615] "ImportError: dynamic module does not define init function" when deleting and recreating .so files from different machines over NFS
New submission from Edward Catmur: foo.c: #include static PyMethodDef mth[] = { {NULL, NULL, 0, NULL} }; static struct PyModuleDef mod = { PyModuleDef_HEAD_INIT, "foo", NULL, -1, mth }; PyMODINIT_FUNC PyInit_foo(void) { return PyModule_Create(&mod); } bar.c: #include static PyMethodDef mth[] = { {NULL, NULL, 0, NULL} }; static struct PyModuleDef mod = { PyModuleDef_HEAD_INIT, "bar", NULL, -1, mth }; PyMODINIT_FUNC PyInit_bar(void) { return PyModule_Create(&mod); } setup.py: from distutils.core import setup, Extension setup(name='PackageName', ext_modules=[Extension('foo', sources=['foo.c']), Extension('bar', sources=['bar.c'])]) In an NFS mount: host1$ python setup.py build host1$ rm *.so; cp build/lib.*/foo*.so .; cp build/lib.*/bar*.so . host1$ python -c 'import foo; input(); import bar' While python is waiting for input, on another host in the same directory: host2$ rm *.so; cp build/lib.*/bar*.so .; cp build/lib.*/foo*.so . Back on host1: ImportError: dynamic module does not define init function (PyInit_bar) Attaching a debugger to Python after the ImportError and calling dlerror() shows the problem: (gdb) print (char *)dlerror() $1 = 0xe495210 "/<...>/foo.cpython-34dm.so: undefined symbol: PyInit_bar" This is because dynload_shlib.c[1] caches dlopen handles by (device and) inode number; but NFS will reuse inode numbers even if a process on a client host has the file open; running lsof on Python, before: python 16475 ecatmur memREG0,3614000 55321147 /<...>/foo.cpython-34dm.so (nfs:/export/user) and after: python 16475 ecatmur memREG0,36 55321147 /<...>/foo.cpython-34dm.so (nfs:/export/user) (path inode=55321161) Indeed, bar.cpython-34dm.so now has the inode number that Python originally opened foo.cpython-34dm.so under: host1$ stat -c '%n %i' *.so bar.cpython-34dm.so 55321147 foo.cpython-34dm.so 55321161 Obviously, this can only happen on a filesystem like NFS where inode numbers can be reused even while a process still has a file open (or mapped). We encountered this problem in a fairly pathological situation; multiple processes running in two virtualenvs with different copies of a zipped egg (of the same version!) were contending over the ~/.python-eggs directory created by pkg_resources[2] to cache .so files extracted from eggs. We are working around the situation by setting PYTHON_EGG_CACHE to a virtualenv-specific location, which also fixes the contention issue. (We should probably work out why the eggs are different, but fixing that is bound into our build/deployment system.) I'm not sure exactly how to solve or even detect this issue; perhaps looking at the mtime of the .so might work? If it is decided not to fix the issue it would be useful if _PyImport_GetDynLoadFunc could report the actual dlerror(); this would have saved us quite some time debugging it. I'll work on a patch to do that. 1. http://hg.python.org/cpython/file/tip/Python/dynload_shlib.c 2. https://bitbucket.org/pypa/setuptools/src/ac127a3f46be3037c79f2c4076c7ab221cde21b2/pkg_resources.py?at=default#cl-1040 -- components: Interpreter Core messages: 202963 nosy: ecatmur priority: normal severity: normal status: open title: "ImportError: dynamic module does not define init function" when deleting and recreating .so files from different machines over NFS type: behavior versions: Python 3.5 ___ Python tracker <http://bugs.python.org/issue19615> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue19615] "ImportError: dynamic module does not define init function" when deleting and recreating .so files from different machines over NFS
Edward Catmur added the comment: Report dlerror() if dlsym() fails. The error output is now something like: ImportError: /<...>/foo.cpython-34dm.so: undefined symbol: PyInit_bar -- keywords: +patch Added file: http://bugs.python.org/file32643/dynload_report_dlerror.patch ___ Python tracker <http://bugs.python.org/issue19615> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22005] datetime.__setstate__ fails decoding python2 pickle
New submission from Edward Oubrayrie: pickle.loads raises a TypeError when calling the datetime constructor, (then a UnicodeEncodeError in the load_reduce function). A short test program & the log, including dis output of both PY2 and PY3 pickles, are available in this gist; and extract on stackoverflow: https://gist.github.com/eddy-geek/191f15871c1b9f801b76 http://stackoverflow.com/questions/24805105/ I am using pickle.dumps(reply, protocol=2) in PY2 then pickle._loads(pickled, fix_imports=True, encoding='latin1') in PY3 (tried None and utf-8 without success) Native cPickle loads decoding fails too, I am only using pure python's _loads for debugging. Sorry if this is misguided (first time here) Regards, Edward -- components: Library (Lib) messages: 223408 nosy: eddygeek priority: normal severity: normal status: open title: datetime.__setstate__ fails decoding python2 pickle type: behavior versions: Python 3.4 ___ Python tracker <http://bugs.python.org/issue22005> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22005] datetime.__setstate__ fails decoding python2 pickle
Edward O added the comment: The code works when using encoding='bytes'. Thanks Tim for the suggestion. So this is not a bug, but is there any sense in having encoding='ASCII' by default in pickle ? -- ___ Python tracker <http://bugs.python.org/issue22005> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
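A small sketch of the working combination (here `data` is generated in-process as a stand-in for the bytes produced by the Python 2 side with protocol=2):

```python
import datetime
import pickle

data = pickle.dumps(datetime.datetime(2014, 7, 17, 12, 30), protocol=2)

# encoding='bytes' keeps the datetime's packed state as bytes instead of
# decoding it as text, which is what tripped datetime.__setstate__.
print(pickle.loads(data, encoding='bytes'))
```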
[issue22294] 2to3 consuming_calls: len, min, max, zip, map, reduce, filter, dict, xrange
New submission from Edward O: This is a patch for issues similar to #16573 With this patch, the following new tests are now unchanged: r = dict(zip(s, range(len(s))), **d) r = len(filter(attrgetter('t'), self.a)) r = min(map(methodcaller('f'), self.a)) max(map(node.id, self.nodes)) + 1 if self.nodes else 0 reduce(set.union, map(f, self.a)) Note that as part of the patch, the range transformation now calls the generic in_special_context in addition to the customized one (which. I guess, should be removed, but I didn't dare). All existing tests pass, but the patterns used may not be strict enough, though I tried to stick to how it was done for other consuming calls. M Lib/lib2to3/fixer_util.py M Lib/lib2to3/fixes/fix_xrange.py M Lib/lib2to3/tests/test_fixers.py -- components: 2to3 (2.x to 3.x conversion tool) files: 2to3_more_consuming_calls.diff hgrepos: 271 keywords: patch messages: 226019 nosy: eddygeek priority: normal severity: normal status: open title: 2to3 consuming_calls: len, min, max, zip, map, reduce, filter, dict, xrange type: enhancement versions: Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python 3.5 Added file: http://bugs.python.org/file36495/2to3_more_consuming_calls.diff ___ Python tracker <http://bugs.python.org/issue22294> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22294] 2to3 consuming_calls: len, min, max, zip, map, reduce, filter, dict, xrange
Changes by Edward O : -- nosy: +benjamin.peterson ___ Python tracker <http://bugs.python.org/issue22294> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22297] json encoding broken for
New submission from Edward O: _make_iterencode in python2.7/json/encoder.py encodes custom enum types incorrectly (the label will be printed without '"') because of these lines (line 320 in 2.7.6): elif isinstance(value, (int, long)): yield buf + str(value) In contrast, _make_iterencode in Python 3 explicitly supports the enum types: elif isinstance(value, int): # Subclasses of int/float may override __str__, but we still # want to encode them as integers/floats in JSON. One example # within the standard library is IntEnum. yield buf + str(int(value)) -- components: Library (Lib) messages: 226057 nosy: eddygeek priority: normal severity: normal status: open title: json encoding broken for type: behavior versions: Python 2.7 ___ Python tracker <http://bugs.python.org/issue22297> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
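A reproduction sketch of the 2.7 behaviour (the Level class is hypothetical; the pure-Python encoder quoted above is the one taken when indent is used or the C speedups are unavailable):

```python
import json

class Level(int):
    def __str__(self):
        return 'HIGH'          # a label instead of the numeric value

# Python 2.7's pure-Python encoder emits str(value) -> '{"level": HIGH}',
# which is not valid JSON; Python 3 emits str(int(value)) -> '{"level": 1}'.
print(json.dumps({'level': Level(1)}, indent=0))
```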
[issue22297] 2.7 json encoding broken for enums
Changes by Edward O : -- title: json encoding broken for -> 2.7 json encoding broken for enums ___ Python tracker <http://bugs.python.org/issue22297> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue7434] general pprint rewrite
Changes by Edward O : -- nosy: +eddygeek ___ Python tracker <http://bugs.python.org/issue7434> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22294] 2to3 consuming_calls: len, min, max, zip, map, reduce, filter, dict, xrange
Changes by Edward O : -- nosy: +BreamoreBoy ___ Python tracker <http://bugs.python.org/issue22294> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22297] 2.7 json encoding broken for enums
Edward O added the comment: The arguments for fixing: * int subclasses overriding str is a very common usecase for enums (so much so that it was added to stdlib in 3.4). * json supporting a standard type of a subsequent python version, though not mandatory, would be beneficial to Py2/Py3 compatibility. * fixing this cannot break existing code * the fix could theoretically be done to 3.0-3.3 if Ethan's argument is deemed important. -- status: pending -> open ___ Python tracker <http://bugs.python.org/issue22297> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8882] socketmodule.c`getsockaddrarg() should not check the length of sun_path
New submission from Edward Pilatowicz : recently i was writing some python code that attempted to bind a unix domain socket to a long filesystem path. this code was failing and telling me that the path name was too long. tracing python i saw that it wasn't event issuing a system call for my bind() request. eventually i tracked down the problem to socketmodule.c`getsockaddrarg(): http://svn.python.org/view/python/trunk/Modules/socketmodule.c?view=markup there we see that getsockaddrarg() checks to verify that the specified path is less than "sizeof addr->sun_path", where addr is a struct sockaddr_un. this seems incorrect to me. on most systems sockaddr_un.sun_path is defined as a small character array. this limit is an ancient bit of unix legacy and most modern systems do not actually limit domain socket names to a path as short as sun_path. on solaris the real limit is MAXPATHLEN, there by allowing unix domain sockets to be bound to any filesystem path. the posix specification also says that users of the sockaddr_un structure should not make any assumptions about the maximum supported length of sun_path. from: http://www.opengroup.org/onlinepubs/009695399/basedefs/sys/un.h.html we have: charsun_path[]socket pathname ... The size of sun_path has intentionally been left undefined. This is because different implementations use different sizes. For example, 4.3 BSD uses a size of 108, and 4.4 BSD uses a size of 104. Since most implementations originate from BSD versions, the size is typically in the range 92 to 108. Applications should not assume a particular length for sun_path or assume that it can hold {_POSIX_PATH_MAX} characters (255). hence, it seems to me that python should not actually be doing any size checks on the path passed to getsockaddrarg(). instead is should dynamically allocate a sockaddr_un large enough to hold whatever string was pass in. this structure can then be passed on to system calls which can they check if the specified path is of a supported length. (if you look at the posix definitions for bind() and connect() you'll see that they both can return ENAMETOOLONG if the passed in pathname is too large.) -- components: None messages: 106929 nosy: Edward.Pilatowicz priority: normal severity: normal status: open title: socketmodule.c`getsockaddrarg() should not check the length of sun_path type: behavior versions: Python 2.6 ___ Python tracker <http://bugs.python.org/issue8882> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
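A short sketch of the failure being described (Unix-only; the exact exception text varies by version, e.g. "AF_UNIX path too long"): the check fires inside getsockaddrarg(), before any bind(2) syscall is made:

```python
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
try:
    s.bind('/tmp/' + 'x' * 200)   # longer than sizeof(sun_path) (~104-108 bytes)
except OSError as exc:            # socket.error on the 2.x versions discussed
    print(exc)
finally:
    s.close()
```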
[issue8882] socketmodule.c`getsockaddrarg() should not check the length of sun_path
Edward Pilatowicz added the comment: so i wrote a simple test program that tells me the defined length of sun_path and then uses bind() with increasingly long paths to determine the actually supported length of sun_path. here's what i've found: Solaris: defined sun_path = 108 max sun_path = 1024 FreeBSD 8.0: defined sun_path = 104 max sun_path = 254 Fedora 11: defined sun_path = 108 max sun_path = 108 i have requested access to an AIX system to check what length of sun_path is defined and supported there. while i could request that this value be changed in the OS, that would likely cause problems with pre-existing compiled code. i'm guessing that most OS vendors would not be eager to update this value, which is probably why it's been the same small value for such a long time. -- ___ Python tracker <http://bugs.python.org/issue8882> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8882] socketmodule.c`getsockaddrarg() should not check the length of sun_path
Edward Pilatowicz added the comment: some additional data. AIX 6.1: defined sun_path = 1023 max sun_path = 1023 i'll also point out the existence of the SUN_LEN() macro, which is defined on all the previously mentioned operating systems, and which calculates the size of a sockaddr_un structure using strlen() of sun_path, not sizeof(). that said, as a counter argument, UNIX Network Programming by Richard Stevens explicitly mentions that the use of sizeof() is ok. still, personally, i think it's pretty risky for an OS to change this definition. (it seems that AIX is the only OS i've seen that has done so.) i say this because use of the sockaddr_un structure is so prevalent. it's commonly embedded into other structures and passed around via APIs that (unlike bind(), connect(), etc) don't take a size parameter which specifies the size of the structure. -- ___ Python tracker <http://bugs.python.org/issue8882> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue29201] Python 3.6 not working within VS 2015
New submission from Edward Laier: I installed Python 3.6 on my Microsoft server 2016 , running VS 2015 Community. I tried the path to Python Environment, then auto-detect, VS crashes and then when it restarts it will have the "+ Custom" grayed out. Where I then have to remove the key to ungray it as mentioned here [link](https://stackoverflow.com/questions/40430831/vs-2015-python-environments-greyed-out) Trying to put in all the variables for the environment doesn't work as 3.6 isn't even an option in the version drop-box for VS. The only option I see for now until this is fixed is removing 3.6 and installing 3.5 -- components: Windows messages: 284957 nosy: Edward Laier, paul.moore, steve.dower, tim.golden, zach.ware priority: normal severity: normal status: open title: Python 3.6 not working within VS 2015 type: behavior versions: Python 3.6 ___ Python tracker <http://bugs.python.org/issue29201> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22820] RESTART line with no output
New submission from Edward Alexander: Whenever I run my code in the Python IDLE editor, the output is as follows: == RESTART I am a newbie; it seems I cannot move from this point. This is my code: def convert_to_celsius(fahrenheit): return (fahrenheit - 32) * 5.0 / 9.0 convert_to_celsius(80) -- components: IDLE messages: 230845 nosy: sukari priority: normal severity: normal status: open title: RESTART line with no output type: crash versions: Python 3.3 ___ Python tracker <http://bugs.python.org/issue22820> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
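For what it's worth, the snippet as posted never prints anything: the function returns a value, but nothing displays it when the file is run (only the interactive >>> prompt echoes return values). A minimal fix is to print the result of the call:

```python
def convert_to_celsius(fahrenheit):
    return (fahrenheit - 32) * 5.0 / 9.0

print(convert_to_celsius(80))   # prints 26.666..., after the RESTART banner
```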
[issue20371] datetime.datetime.replace bypasses a subclass's __new__
Changes by Edward O : -- nosy: +eddygeek ___ Python tracker <http://bugs.python.org/issue20371> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue20371] datetime.datetime.replace bypasses a subclass's __new__
Edward O added the comment: Here is a workaround for subclasses (2&3-compatible): --- start code --- class MyDate(datetime): @classmethod def fromDT(cls, d): """ :type d: datetime """ return cls(d.year, d.month, d.day, d.hour, d.minute, d.second, d.microsecond, d.tzinfo) def replace(self, *args, **kw): """ :type other: datetime.timedelta :rtype: MyDate """ return self.fromDT(super(MyDate, self).replace(*args, **kw)) --- end code --- This is really a bug and not a misuse as datetime was specifically adapted to be subclassing-friendly. From a look at the (python) source, this seems to be the only bit that was forgotten. The real fix is as per yselivanov comment [and no, this has nothing to do with pickle or copy AFAIK] -- versions: +Python 3.4 ___ Python tracker <http://bugs.python.org/issue20371> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue26842] Python Tutorial 4.7.1: Need to explain default parameter lifetime
New submission from Edward Segall: I am using the tutorial to learn Python. I know many other languages, and I've taught programming language theory, but even so I found the warning in Section 4.7.1 about Default Argument Values to be confusing. After I spent some effort investigating what actually happens, I realized that the warning is incomplete. I'll suggest a fix below, after explaining what concerns me. Here is the warning in question: - Important warning: The default value is evaluated only once. This makes a difference when the default is a mutable object such as a list, dictionary, or instances of most classes. ... def f(a, L=[]): L.append(a) return L print(f(1)) print(f(2)) print(f(3)) This will print [1] [1, 2] [1, 2, 3] - It's clear from this example that values are carried from one function invocation to another. That's pretty unusual behavior for a "traditional" function, but it's certainly not unheard of -- in C/C++/Java, you can preserve state across invocations by declaring that a local variable has static lifetime. When using this capability, though, it's essential to understand exactly what's happening -- or at least well enough to anticipate its behavior under a range of conditions. I don't believe the warning and example are sufficient to convey such an understanding. After playing with it for a while, I've concluded the following: "regular" local variables have the usual behavior (called "automatic" lifetime in C/C++ jargon), as do the function's formal parameters, EXCEPT when a default value is defined. Each default value is stored in a location that has static lifetime, and THAT is the reason it matters that (per the warning) the expression defining the default value is evaluated only once. This is very unfamiliar behavior -- I don't think I have used another modern language with this feature. So I think it's important that the explanation be very clear. I would like to suggest revising the warning and example to something more like the following: - Important warning: When you define a function with a default argument value, the expression defining the default value is evaluated only once, but the resultant value persists as long as the function is defined. If this value is a mutable object such as a list, dictionary, or instance of most classes, it is possible to change that object after the function is defined, and if you do that, the new (mutated) value will subsequently be used as the default value. For example, the following function accepts two arguments: def f(a, L=[]): L.append(a) return L This function is defined with a default value for its second formal parameter, called L. The expression that defines the default value denotes an empty list. When the function is defined, this expression is evaluated once. The resultant list is saved as the default value for L. Each time the function is called, it appends the first argument to the second one by invoking the second argument's append method. If we call the function with two arguments, the default value is not used. Instead, the list that is passed in as the second argument is modified. However, if we call the function with one argument, the default value is modified. Consider the following sequence of calls. First, we define a list and pass it in each time as the second argument. 
This list accumulates the first arguments, as follows: myList=[] print(f(0, myList)) print(f(1, myList)) This will print: [0] [0, 1] As you can see, myList is being used to accumulate the values passed in as the first argument. If we then use the default value by passing in only one argument, as follows: print(f(2)) print(f(3)) we will see: [2] [2, 3] Here, the two invocations appended values to the default list. Let's continue, this time going back to myList: print(f(4,myList)) Now the result will be: [0, 1, 4] because myList still contains the earlier values. The default value still has its earlier values too, as we can see here: print(f(5)) [2, 3, 5] To summarize, there are two distinct cases: 1) When the function is invoked with an argument list that includes a value for L, that L (the one being passed in) is changed by the function. 2) When the function is invoked with an argument list that does not include a value for L, the default value for L is changed, and that change persists through future invocations. - I hope this is useful. I realize it is much longer than the original. I had hoped to make it shorter, but when I did I found I was glossing over important details.
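As an aside, the idiom the tutorial itself recommends for avoiding a shared mutable default is a None sentinel; a minimal sketch:

    def f(a, L=None):
        # A fresh list is created on every call that omits L,
        # so no state persists between invocations.
        if L is None:
            L = []
        L.append(a)
        return L

    print(f(1))  # [1]
    print(f(2))  # [2], not [1, 2]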
[issue26842] Python Tutorial 4.7.1: Need to explain default parameter lifetime
Edward Segall added the comment: I agree with most of your points: A tutorial should be brief and should not go down rabbit holes. Expanded discussion of default parameter behavior would probably fit better with the other facets of parameter specification and parameter passing, perhaps as a FAQ. But I also believe a change to the current presentation is needed. Perhaps it would be best to introduce default arguments using simple numerical types, and refer to a separate explanation (perhaps as a FAQ) of the complexities associated with using mutable objects as defaults. > Also, I don't really like the provided explanation, "there are two cases > ...". The actual execution model has one case (default arguments are > evaluated once when the function is defined) and there are many ways to use > it. The distinction between the two cases lies in storage of the result, not in argument evaluation. In the non-default case, the result is stored in a caller-provided object, while in the default case, the result is stored in a callee-provided object. This results in different behaviors (as the example makes clear); hence the two cases are not the same. This distinction is important to new users because it's necessary to think of them differently, and because (to me, at least) one of them is very non-intuitive. In both cases, the change made to the object is a side effect of the function. In the non-default case, this side effect is directly visible to the caller, but in the default case, it is only indirectly visible. Details like this are probably obvious to people who are very familiar with both call by object reference and with Python's persistent lifetime of default argument objects, but I don't think that group fits the target audience for a tutorial. -- ___ Python tracker <http://bugs.python.org/issue26842> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue23512] List of builtins is not alphabetical on https://docs.python.org/2/library/functions.html
New submission from Edward D'Souza: The list of built-in functions at the top of https://docs.python.org/2/library/functions.html is not alphabetical. Specifically, (apply, coerce, intern, buffer) all appear out of order at the end of the list, instead of where they should be alphabetically. -- assignee: docs@python components: Documentation messages: 236505 nosy: docs@python, edsouza priority: normal severity: normal status: open title: List of builtins is not alphabetical on https://docs.python.org/2/library/functions.html type: behavior versions: Python 2.7 ___ Python tracker <http://bugs.python.org/issue23512> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue23512] The list of built-in functions is not alphabetical on https://docs.python.org/2/library/functions.html
Edward D'Souza added the comment: Doesn't make sense to me. The page says "They are listed here in alphabetical order.", which isn't true. Furthermore, not putting them in order screws up people who assume it is in alphabetical order and try to search for a function with their eyes. If they are so special, put them in a separate table. This inconsistency is simply disrespectful to the reader, IMO. -- ___ Python tracker <http://bugs.python.org/issue23512> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue23512] The list of built-in functions is not alphabetical on https://docs.python.org/2/library/functions.html
Edward D'Souza added the comment: I think putting them in a separate table is good, but I think it makes more sense to appear right below the existing table at the top of the page. For better or worse, these "non-essential" functions are still builtins in Python 2. It would be disconcerting if you went to this page looking for a builtin (eg. coerce) and couldn't find it at the top of the page with the other builtins. Above all, the list of "builtins" should be accurate as to what builtins actually exist in Python 2. -- ___ Python tracker <http://bugs.python.org/issue23512> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3590] sax.parser hangs on byte streams
New submission from Edward K Ream <[EMAIL PROTECTED]>: While porting Leo to Python 3.0, I found that passing any byte stream to xml.sax.parser.parse will hang the parser. My quick fix was to change:

    while buffer != "":

to:

    while buffer != "" and buffer != b"":

at line 123 of xmlreader.py. Here is the entire function:

    def parse(self, source):
        from . import saxutils
        source = saxutils.prepare_input_source(source)
        self.prepareParser(source)
        file = source.getByteStream()
        buffer = file.read(self._bufsize)
        ### while buffer != "":
        while buffer != "" and buffer != b"":  ### EKR
            self.feed(buffer)
            buffer = file.read(self._bufsize)
        self.close()

For reference, here is the code in Leo that was hanging:

    parser = xml.sax.make_parser()
    parser.setFeature(xml.sax.handler.feature_external_ges,1)
    handler = saxContentHandler(c,inputFileName,silent,inClipboard)
    parser.setContentHandler(handler)
    parser.parse(theFile)

Looking at the test_expat_file function in test_sax.py, it appears that the essential difference between the code that hangs and the successful unit test is that Leo opens the file in 'rb' mode (code not shown). It's doubtful that 'rb' mode is correct--from the unit test I deduce that the default 'r' mode would be better. Anyway, it would be nice if parser.parse didn't hang on dubious streams. HTH. Edward -- components: Library (Lib) messages: 71339 nosy: edreamleo severity: normal status: open title: sax.parser hangs on byte streams type: behavior versions: Python 3.0 ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3590> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3590] sax.parser hangs on byte streams
Edward K Ream <[EMAIL PROTECTED]> added the comment: On Mon, Aug 18, 2008 at 10:09 AM, Benjamin Peterson <[EMAIL PROTECTED]>wrote: > > Benjamin Peterson <[EMAIL PROTECTED]> added the comment: > > It should probably be changed to just while buffer != b"" since it > requests a byte stream. That was my guess as well. I added the extra test so as not to remove a test that might, under some circumstance be important. Just to be clear, I am at present totally confused about io streams :-) Especially as used by the sax parsers. In particular, opening a file in 'r' mode, that is, passing a *non*-byte stream to parser.parse, works, while opening a file in 'rb' mode, that is, passing a *byte* stream to parser.parse, hangs. Anyway, opening the file passed to parser.parse with 'r' mode looks like the (only) way to go when using Python 3.0. In Python 2.5, opening files passed to parser.parse in 'rb' mode works. I don't recall whether I had any reason for 'rb' mode: it may have been an historical accident, or just a lucky accident :-) Edward Edward K. Ream email: [EMAIL PROTECTED] Leo: http://webpages.charter.net/edreamleo/front.html Added file: http://bugs.python.org/file11145/unnamed ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3590> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3590] sax.parser considers XML as text rather than bytes
Edward K Ream <[EMAIL PROTECTED]> added the comment: On Mon, Aug 18, 2008 at 1:51 PM, Antoine Pitrou <[EMAIL PROTECTED]>wrote: > > Antoine Pitrou <[EMAIL PROTECTED]> added the comment: > > From the discussion on the python-3000, it looks like it would be nice > if sax.parser handled both bytes and unicode streams. > > Edward, does your simple fix make sax.parser work entirely well with > byte streams? No. The sax.parser seems to have other problems. Here is what I *think* I know ;-) 1. A smallish .leo file (an xml file) containing a single non-ascii (utf-8) encoded character appears to have been read correctly with Python 3.0. 2. A larger .leo file fails as follows (it's possible that the duplicate error messages are a Leo problem): Traceback (most recent call last): Traceback (most recent call last): File "C:\leo.repo\leo-30\leo\core\leoFileCommands.py", line 1283, in parse_leo_file parser.parse(theFile) # expat does not support parseString File "C:\leo.repo\leo-30\leo\core\leoFileCommands.py", line 1283, in parse_leo_file parser.parse(theFile) # expat does not support parseString File "c:\python30\lib\xml\sax\expatreader.py", line 107, in parse xmlreader.IncrementalParser.parse(self, source) File "c:\python30\lib\xml\sax\expatreader.py", line 107, in parse xmlreader.IncrementalParser.parse(self, source) File "c:\python30\lib\xml\sax\xmlreader.py", line 121, in parse buffer = file.read(self._bufsize) File "c:\python30\lib\xml\sax\xmlreader.py", line 121, in parse buffer = file.read(self._bufsize) File "C:\Python30\lib\io.py", line 1670, in read eof = not self._read_chunk() File "C:\Python30\lib\io.py", line 1670, in read eof = not self._read_chunk() File "C:\Python30\lib\io.py", line 1499, in _read_chunk self._set_decoded_chars(self._decoder.decode(input_chunk, eof)) File "C:\Python30\lib\io.py", line 1499, in _read_chunk self._set_decoded_chars(self._decoder.decode(input_chunk, eof)) File "C:\Python30\lib\io.py", line 1236, in decode output = self.decoder.decode(input, final=final) File "C:\Python30\lib\io.py", line 1236, in decode output = self.decoder.decode(input, final=final) File "C:\Python30\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] File "C:\Python30\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 74: character maps to UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 74: character maps to The same calls to sax read the file correctly on Python 2.5. It would be nice to have a message pinpoint the line and character offset of the problem. My vote would be for the code to work on both kinds of input streams. This would save the users considerable confusion if sax does the (tricky) conversions automatically. Imo, now would be the most convenient time to attempt this--there is a certain freedom in having everything be partially broken :-) Edward Edward K. Ream email: [EMAIL PROTECTED] Leo: http://webpages.charter.net/edreamleo/front.html Added file: http://bugs.python.org/file11147/unnamed ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3590> ___
[issue3590] sax.parser considers XML as text rather than bytes
Edward K Ream <[EMAIL PROTECTED]> added the comment: On Mon, Aug 18, 2008 at 11:00 AM, Antoine Pitrou <[EMAIL PROTECTED]>wrote: > > Antoine Pitrou <[EMAIL PROTECTED]> added the comment: > > > Just to be clear, I am at present totally confused about io streams :-) > > Python 3.0 distincts more clearly between unicode strings (called "str" > in 3.0) and bytes strings (called "bytes" in 3.0). The most important > point being that there is no more any implicit conversion between the > two: you must explicitly use .encode() or .decode(). > > Files opened in binary ("rb") mode returns byte strings, but files > opened in text ("r") mode return unicode strings, which means you can't > give a text file to 3.0 library expecting a binary file, or vice-versa. > > What is more worrying is that XML, until decoded, should be considered a > byte stream, so sax.parser should accept binary files rather than text > files. I took a look at test_sax and indeed it considers XML as text > rather than bytes :-( Thanks for these remarks. They confirm what I suspected, but was unsure of, namely that it seems strange to be passing something other than a byte stream to parser.parse. > > Bumping this as critical because it needs a decision very soon (ideally > before beta3). Thanks for taking this seriously. Edward P.S. I love the new unicode plans. They are going to cause some pain at first for everyone (Python team and developers), but in the long run they are going to be a big plus for Python. EKR Edward K. Ream email: [EMAIL PROTECTED] Leo: http://webpages.charter.net/edreamleo/front.html Added file: http://bugs.python.org/file11148/unnamed ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3590> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue3590] sax.parser considers XML as text rather than bytes
Edward K Ream <[EMAIL PROTECTED]> added the comment: On Mon, Aug 18, 2008 at 4:15 PM, Antoine Pitrou <[EMAIL PROTECTED]>wrote: > > Antoine Pitrou <[EMAIL PROTECTED]> added the comment: > > > The same calls to sax read the file correctly on Python 2.5. > > What are those calls exactly? parser = xml.sax.make_parser() parser.setFeature(xml.sax.handler.feature_external_ges,1) handler = saxContentHandler(c,inputFileName,silent,inClipboard) parser.setContentHandler(handler) parser.parse(theFile) As discussed in http://bugs.python.org/issue3590 theFile is a file opened with 'rb' attributes Edward ---- Edward K. Ream email: [EMAIL PROTECTED] Leo: http://webpages.charter.net/edreamleo/front.html Added file: http://bugs.python.org/file11151/unnamed ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue3590> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue4531] Deprecation warnings in lib\compiler\ast.py
New submission from Edward K Ream <[EMAIL PROTECTED]>: Python 2.6 final on Windows XP gives the following warnings with the -3 option:

    c:\python26\lib\compiler\ast.py:54: SyntaxWarning: tuple parameter unpacking has been removed in 3.x
      def __init__(self, (left, right), lineno=None):
    c:\python26\lib\compiler\ast.py:434: SyntaxWarning: tuple parameter unpacking has been removed in 3.x
      def __init__(self, (left, right), lineno=None):
    c:\python26\lib\compiler\ast.py:488: SyntaxWarning: tuple parameter unpacking has been removed in 3.x
      def __init__(self, (left, right), lineno=None):
    c:\python26\lib\compiler\ast.py:806: SyntaxWarning: tuple parameter unpacking has been removed in 3.x
      def __init__(self, (left, right), lineno=None):
    c:\python26\lib\compiler\ast.py:896: SyntaxWarning: tuple parameter unpacking has been removed in 3.x
      def __init__(self, (left, right), lineno=None):
    c:\python26\lib\compiler\ast.py:926: SyntaxWarning: tuple parameter unpacking has been removed in 3.x
      def __init__(self, (left, right), lineno=None):
    c:\python26\lib\compiler\ast.py:998: SyntaxWarning: tuple parameter unpacking has been removed in 3.x
      def __init__(self, (left, right), lineno=None):
    c:\python26\lib\compiler\ast.py:1098: SyntaxWarning: tuple parameter unpacking has been removed in 3.x
      def __init__(self, (left, right), lineno=None):
    c:\python26\lib\compiler\ast.py:1173: SyntaxWarning: tuple parameter unpacking has been removed in 3.x
      def __init__(self, (left, right), lineno=None):
    c:\python26\lib\compiler\pycodegen.py:903: SyntaxWarning: tuple parameter unpacking has been removed in 3.x

Edward ---- Edward K. Ream email: [EMAIL PROTECTED] Leo: http://webpages.charter.net/edreamleo/front.html -- components: Library (Lib) messages: 76904 nosy: edreamleo severity: normal status: open title: Deprecation warnings in lib\compiler\ast.py type: compile error versions: Python 2.6 ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4531> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
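For context, the construct being warned about, and the usual way such a signature is rewritten so that it is also valid in Python 3, looks roughly like this (Node is a hypothetical stand-in, not the actual compiler.ast class):

    class Node(object):
        # Python 2 only -- tuple parameter unpacking in the signature:
        #   def __init__(self, (left, right), lineno=None):
        # Portable form: accept one argument and unpack it in the body.
        def __init__(self, leftright, lineno=None):
            left, right = leftright
            self.left = left
            self.right = right
            self.lineno = lineno

    n = Node((1, 2), lineno=10)
    print(n.left, n.right, n.lineno)  # 1 2 10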
[issue4531] Deprecation warnings in lib\compiler\ast.py
Edward K Ream <[EMAIL PROTECTED]> added the comment: On Thu, Dec 4, 2008 at 12:33 PM, Brett Cannon <[EMAIL PROTECTED]> wrote: > > Brett Cannon <[EMAIL PROTECTED]> added the comment: > > Considering the entire compiler package is not in 3.0 it is not worth > fixing this. Closing as wont fix. Thanks for this clarification. Edward ---- Edward K. Ream email: [EMAIL PROTECTED] Leo: http://webpages.charter.net/edreamleo/front.html ___ Python tracker <[EMAIL PROTECTED]> <http://bugs.python.org/issue4531> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38663] Untokenize does not round-trip ws before bs-nl
New submission from Edward K Ream : Tested on 3.6. tokenize.untokenize does not round-trip whitespace before backslash-newlines outside of strings:

from io import BytesIO
import tokenize

# Round tripping fails on the second string.
table = (
r'''
print\
("abc")
''',
r'''
print \
("abc")
''',
)
for s in table:
    tokens = list(tokenize.tokenize(
        BytesIO(s.encode('utf-8')).readline))
    result = g.toUnicode(tokenize.untokenize(tokens))
    print(result==s)

I have an important use case that would benefit from a proper untokenize. After considerable study, I have not found a proper fix for tokenize.add_whitespace. I would be happy to work with anyone to rewrite tokenize.untokenize so that unit tests pass without fudges in TestRoundtrip.check_roundtrip. -- messages: 355827 nosy: edreamleo priority: normal severity: normal status: open title: Untokenize does not round-trip ws before bs-nl type: behavior versions: Python 3.6 ___ Python tracker <https://bugs.python.org/issue38663> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38663] Untokenize does not round-trip ws before bs-nl
Edward K Ream added the comment: The original bug report used a Leo-only function, g.toUnicode. To fix this, replace:

    result = g.toUnicode(tokenize.untokenize(tokens))

by:

    result_b = tokenize.untokenize(tokens)
    result = result_b.decode('utf-8', 'strict')

-- ___ Python tracker <https://bugs.python.org/issue38663> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
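Putting the report and the correction together, a self-contained reproduction might look like this (the two test strings are those from the report, written with explicit escapes; whether the second one round-trips is exactly what the issue is about):

    from io import BytesIO
    import tokenize

    table = (
        '\nprint\\\n("abc")\n',    # no space before the backslash-newline
        '\nprint \\\n("abc")\n',   # one space before the backslash-newline
    )
    for s in table:
        tokens = list(tokenize.tokenize(BytesIO(s.encode('utf-8')).readline))
        result = tokenize.untokenize(tokens).decode('utf-8', 'strict')
        print(result == s)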
[issue38663] Untokenize does not round-trip ws before bs-nl
Edward K Ream added the comment: This post https://groups.google.com/d/msg/leo-editor/DpZ2cMS03WE/VPqtB9lTEAAJ discusses a complete rewrite of tokenizer.untokenize. To quote from the post: I have "discovered" a spectacular replacement for Untokenizer.untokenize in python's tokenize library module. The wretched, buggy, and impossible-to-fix add_whitespace method is gone. The new code has no significant 'if' statements, and knows almost nothing about tokens! This is the way untokenize is written in The Book. The new code should put an end to a long series of issues against untokenize code in python's tokenize library module. Some closed issues were blunders arising from dumbing-down the TestRoundtrip.check_roundtrip method in test_tokenize.py. Imo, the way is now clear for proper unit testing of python's Untokenize class. -- ___ Python tracker <https://bugs.python.org/issue38663> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue38663] Untokenize does not round-trip ws before bs-nl
Edward K Ream added the comment: This post: https://groups.google.com/d/msg/leo-editor/DpZ2cMS03WE/5X8IDzpgEAAJ discusses unit testing. The summary states: "I've done the heavy lifting on issue 38663. Python devs should handle the details of testing and packaging." I'll leave it at that. In some ways this issue is very minor, and of almost no interest to anyone :-) Do with it as you will. The ball is in python's court. -- ___ Python tracker <https://bugs.python.org/issue38663> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue33337] Provide a supported Concrete Syntax Tree implementation in the standard library
Edward K Ream added the comment: Hello all, This is a "sideways" response to this issue. I have been dithering about whether to give you a heads up. I hope you won't mind... I have just announced leoAst.py on python-announce-list. You can read the announcement here: https://github.com/leo-editor/leo-editor/issues/1565#issuecomment-654904747 Imo, leoAst.py solves many of the concerns mentioned in the first comment of this thread. leoAst.py is certainly a different approach. Also imo, the TOG and TOG in leoAst.py plug significant holes in python's ast and tokenize modules. These classes might be candidates for python's ast module. If you're interested, I will be willing to do further work. If not, I completely understand. As shown in the project's history, a significant amount of invention and discovery was required. The root of much of my initial confusion and difficulties was the notion that "real programmers don't use tokens". In fact, I discovered that the reverse is true. Tokens contain the ground truth. In many cases, the parse tree doesn't. I would be interested in your reactions. -- nosy: +edreamleo ___ Python tracker <https://bugs.python.org/issue33337> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41504] Add links to asttokens, leoAst, LibCST and Parso to ast.rst
New submission from Edward K Ream : These links are added with the provisional approval of GvR, pending approval of the PR. -- assignee: docs@python components: Documentation messages: 375019 nosy: docs@python, edreamleo priority: normal severity: normal status: open title: Add links to asttokens, leoAst, LibCST and Parso to ast.rst type: enhancement versions: Python 3.9 ___ Python tracker <https://bugs.python.org/issue41504> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue41504] Add links to asttokens, leoAst, LibCST and Parso to ast.rst
Edward K Ream added the comment: You're welcome. It was a pleasure working with you all on this issue. I enjoyed learning the PR workflow, and I enjoyed the discussion of the merits. One last comment. Like everything in life, links and their implied endorsements are provisional. If a link ever becomes problematic, I would expect the python devs to remove it. -- ___ Python tracker <https://bugs.python.org/issue41504> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22616] Allow connecting AST nodes with corresponding source ranges
Edward K Ream added the comment: On Mon, Jan 14, 2019 at 5:24 AM Ivan Levkivskyi wrote: Adding endline and endcolumn to every ast node will be a big improvement. Edward -- Edward K. Ream: edream...@gmail.com Leo: http://leoeditor.com/ -- -- nosy: +edreamleo ___ Python tracker <https://bugs.python.org/issue22616> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
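For reference, CPython 3.8 did add end positions to AST nodes as end_lineno and end_col_offset, so the information requested here can now be read directly:

    import ast

    tree = ast.parse('x = f(1,\n      2)')
    call = tree.body[0].value          # the Call node spanning both lines
    # Requires Python 3.8+, where end positions were added.
    print(call.lineno, call.col_offset, call.end_lineno, call.end_col_offset)
    # 1 4 2 8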
[issue25778] winreg.EnumValue does not truncate strings correctly
Edward K. Ream added the comment: The last message on this thread was in January, and this item is Open. According to Pep 478, 3.5.2 final was released Sunday, June 26, 2016. How is this issue not a release blocker? Why does there appear to be no urgency in fixing this bug? This bug bites for 64-bit versions of Python 3. When it bit, it caused Leo to crash during startup. When it bit, it was reason to recommend Python 2 over Python 3. I have just released an ugly workaround in Leo. So now Leo itself can start up, but there is no guarantee that user plugins and scripts will work. Imo, no future version of Python 3 should go out the door until this bug is fixed, for sure, and for all time. If you want people to use Python 3, it can NOT have this kind of bug in it. -- nosy: +Edward.K..Ream ___ Python tracker <http://bugs.python.org/issue25778> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue25778] winreg.EnumValue does not truncate strings correctly
Edward K. Ream added the comment: On Sat, Dec 3, 2016 at 1:37 PM, Steve Dower wrote: Thanks, Steve and David, for your replies. Getting this issue fixed eventually will do. Glad to hear it was a mistake, and not policy ;-) EKR -- ___ Python tracker <http://bugs.python.org/issue25778> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue25778] winreg.EnumValue does not truncate strings correctly
Edward K. Ream added the comment: Thank you, Steve, et al., for resolving this issue. -- ___ Python tracker <http://bugs.python.org/issue25778> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue29480] Mac OSX Installer SSL Roots
New submission from Edward Ned Harvey: I would like to suggest that the OSX installer automatically run "Install Certificates.command", or display a prompt to users saying "Run Now" during installation. Having the readme is helpful - but only after you google for 20 minutes, because of an exception you encountered. Of course nobody reads the readme during install. "I've installed python a thousand times before, I know what I'm doing." There are so many things that require SSL, and it's reasonably assumed to be functional by default. -- components: Installation messages: 287302 nosy: rahvee priority: normal severity: normal status: open title: Mac OSX Installer SSL Roots type: behavior versions: Python 3.6 ___ Python tracker <http://bugs.python.org/issue29480> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
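A quick way to check whether the interpreter can see any CA certificates (a diagnostic sketch; the counts vary by system, and a zero x509_ca count is the symptom that Install Certificates.command addresses):

    import ssl

    ctx = ssl.create_default_context()
    # On a healthy install the x509_ca count is non-zero.
    print(ctx.cert_store_stats())  # e.g. {'x509': 168, 'crl': 0, 'x509_ca': 168}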
[issue17850] unicode_escape encoding fails for '\\Upsilon'
New submission from Edward K. Ream: On both Windows and Linux the following fails on Python 2.7:

    s = '\\Upsilon'
    unicode(s, "unicode_escape")

    UnicodeDecodeError: 'unicodeescape' codec can't decode bytes in position 0-7: end of string in escape sequence

BTW, the six.py package uses this call. If this call doesn't work, six is broken. -- components: Library (Lib) messages: 187852 nosy: Edward.K..Ream priority: normal severity: normal status: open title: unicode_escape encoding fails for '\\Upsilon' type: crash versions: Python 2.7 ___ Python tracker <http://bugs.python.org/issue17850> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17850] unicode_escape encoding fails for '\\Upsilon'
Edward K. Ream added the comment: Thanks for your quick reply. If this is not a bug, why does six define six.u as unicode(s,"unicode_escape") for *all* u constants?? -- ___ Python tracker <http://bugs.python.org/issue17850> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue17850] unicode_escape encoding fails for '\\Upsilon'
Edward K Ream added the comment: On Fri, Apr 26, 2013 at 8:51 AM, Edward K. Ream wrote: > > If this is not a bug, why does six define six.u as > unicode(s,"unicode_escape") for *all* u constants?? > Oops. The following works:

    s = r'\\Upsilon'
    unicode(s, "unicode_escape")

My apologies for the noise. Edward -- nosy: +edreamleo ___ Python tracker <http://bugs.python.org/issue17850> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
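The likely reason, for readers of this thread: in a normal literal '\\Upsilon' contains a single backslash, so the codec sees \U and expects eight hex digits; the raw literal above contains two backslashes, which decode to one literal backslash. In Python 3 terms, a small sketch:

    import codecs

    # One backslash: \U starts a \UXXXXXXXX escape, and "psilon" is not
    # eight hex digits, so decoding fails.
    try:
        codecs.decode('\\Upsilon', 'unicode_escape')
    except UnicodeDecodeError as e:
        print(e)

    # Two backslashes decode to a single literal backslash.
    print(codecs.decode('\\\\Upsilon', 'unicode_escape'))  # \Upsilon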
[issue22616] Allow connecting AST nodes with corresponding source ranges
Edward K. Ream added the comment: I urge the Python development team to fix this and the related bugs given in the Post Script. The lack of an easy way of associating ast nodes with text ranges in the original sources is arguably the biggest hole in the Python api. These bugs have immediate, severe, practical consequences for any tool that attempts to regularize (pep 8) or beautify Python code. Consider the code for PythonTidy: http://lacusveris.com/PythonTidy/PythonTidy-1.23.python Every version has had bugs in this area arising from difficult workarounds to the hole in the API. The entire Comments class is a horror directly related to these issues. Consider Aivar's workaround to these bugs: https://bitbucket.org/plas/thonny/src/8cdaa41aca7a5cc0b31618b6f1631d360c488196/src/ast_utils.py?at=default See the docstring for def fix_ast_problems. This is an absurdly difficult solution to what should be a trivial problem. It's impossible to build reliable software using such heroic hacks. The additional bugs listed below further complicate a nightmarish task. In short, these bugs are *not* minor little nits. They are preventing the development of reliable source-code tools. Edward K. Ream P.S. Here are the related bugs: http://bugs.python.org/issue10769 Allow connecting AST nodes with corresponding source ranges http://bugs.python.org/issue21295 Python 3.4 gives wrong col_offset for Call nodes returned from ast.parse http://bugs.python.org/issue18374 ast.parse gives wrong position (col_offset) for some BinOp-s http://bugs.python.org/issue16806 col_offset is -1 and lineno is wrong for multiline string expressions EKR -- nosy: +Edward.K..Ream ___ Python tracker <http://bugs.python.org/issue22616> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue22819] Python3.4: xml.sax.saxutils.XMLGenerator.__init__ fails with pythonw.exe
New submission from Edward K. Ream: In Python3.2 xml.sax.saxutils.XMLGenerator.__init__ succeeds if the "out" keyword argument is not given and sys.stdout is None, which will typically be the case when using pythonw.exe. Alas, on Python3.4, the ctor throws an exception in this case. This is a major compatibility issue, and is completely unnecessary: the ctor should work as before. An easy fix: allocate a file-like object as the out stream, or just do what is done in Python 3.2 ;-) -- components: Library (Lib) messages: 230844 nosy: Edward.K..Ream priority: normal severity: normal status: open title: Python3.4: xml.sax.saxutils.XMLGenerator.__init__ fails with pythonw.exe type: crash versions: Python 3.4 ___ Python tracker <http://bugs.python.org/issue22819> ___ ___ Python-bugs-list mailing list Unsubscribe: https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
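A possible caller-side workaround, offered here as a suggestion rather than anything from the report: pass an explicit stream as the out argument so the generator never depends on sys.stdout (which is None under pythonw.exe):

    import io
    from xml.sax.saxutils import XMLGenerator

    out = io.StringIO()               # any text stream will do
    gen = XMLGenerator(out, encoding='utf-8')
    gen.startDocument()
    gen.startElement('root', {})
    gen.endElement('root')
    gen.endDocument()
    print(out.getvalue())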
[issue11691] sqlite3 Cursor.description doesn't set type_code
New submission from William Edward Stuart Clemens : The DB API Spec 2.0 (PEP 249) clearly requires that column name and type_code be set as the first two values in Cursor.description the other 5 attributes are optional. The sqlite3 module doesn't set type_code. -- components: None files: sqlite.patch keywords: patch messages: 132289 nosy: wesclemens priority: normal severity: normal status: open title: sqlite3 Cursor.description doesn't set type_code type: behavior versions: Python 2.7 Added file: http://bugs.python.org/file21421/sqlite.patch ___ Python tracker <http://bugs.python.org/issue11691> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
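The behavior being reported can be seen directly; as I understand the current module, only the column name is populated and the remaining six description fields are None:

    import sqlite3

    conn = sqlite3.connect(':memory:')
    conn.execute('CREATE TABLE t (id INTEGER, name TEXT)')
    cur = conn.execute('SELECT id, name FROM t')
    print(cur.description)
    # (('id', None, None, None, None, None, None),
    #  ('name', None, None, None, None, None, None))
    conn.close()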
[issue11691] sqlite3 Cursor.description doesn't set type_code
William Edward Stuart Clemens added the comment: The patch for version 3.3 has a one line difference. -- assignee: -> docs@python components: +Documentation, Library (Lib) -None nosy: +docs@python versions: +Python 3.3 Added file: http://bugs.python.org/file21422/sqlite3_type_code_py33.diff ___ Python tracker <http://bugs.python.org/issue11691> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com