[issue35107] untokenize() fails on tokenize output when a newline is missing

2022-01-19 Thread Terry J. Reedy
Terry J. Reedy added the comment:

This was fixed by #44667, "tokenize.py emits spurious NEWLINE if file ends on a comment without a newline", on 3.11, 3.10, and 3.9 in August 2021.


2022-01-19 Thread Terry J. Reedy
Change by Terry J. Reedy:

resolution: -> duplicate
stage: -> resolved
status: pending -> closed
superseder: -> tokenize.py emits spurious NEWLINE if file ends on a comment without a newline


2022-01-18 Thread Irit Katriel
Irit Katriel added the comment:

I am unable to reproduce this on 3.11:

>>> tokenize.untokenize(tokenize.generate_tokens(io.StringIO('#').readline))
'#'

nosy: +iritkatriel
status: open -> pending
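The reproduction attempt above can be packaged as a small helper (a minimal sketch; `roundtrip` is a hypothetical name, and the unchanged output assumes an interpreter carrying the bpo-44667 fix):

```python
import io
import tokenize

def roundtrip(source):
    """Tokenize `source` and reassemble it with untokenize()."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return tokenize.untokenize(tokens)

# On fixed interpreters, a comment with no trailing newline
# survives the round trip unchanged.
print(roundtrip('#'))
```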


2018-10-30 Thread Terry J. Reedy
Terry J. Reedy added the comment:

It seems to me a bug that if '\n' is not present, tokenize adds both NL and NEWLINE tokens, instead of just one of them. Moreover, both tuples of the doubled correction look wrong. If '\n' is present:

TokenInfo(type=56 (NL), string='\n', start=(1, 1), end=...
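This observation can be checked by comparing the token-type names emitted with and without the trailing newline (a sketch; `token_names` is a hypothetical helper, and the identical sequences assume a patched interpreter, where only the NL token's string differs):

```python
import io
import tokenize

def token_names(source):
    """Return the token-type names tokenize emits for `source`."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return [tokenize.tok_name[tok.type] for tok in tokens]

# After the fix, a missing trailing '\n' no longer changes the
# token types; the NL token merely has string '' instead of '\n'.
print(token_names('#\n'))
print(token_names('#'))
```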


2018-10-29 Thread Serhiy Storchaka
Serhiy Storchaka added the comment:

I am surprised that removing the newline character adds a token:

>>> pprint.pprint(list(tokenize.generate_tokens(io.StringIO('#\n').readline)))
[TokenInfo(type=55 (COMMENT), string='#', start=(1, 0), end=(1, 1), line='#\n'),
 TokenInfo(type=56 (NL), string=...


2018-10-29 Thread Ammar Askar
Ammar Askar added the comment:

Actually, never mind, disregard that; I was just testing it wrong. I think the simplest fix here is to add '#' to the list of characters here so we don't double-insert newlines for comments: https://github.com/python/cpython/blob/b83d917fafd87e4130f9c7d5209ad2deb...
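Until a fix lands, a caller-side workaround is to guarantee the terminating newline before tokenizing (a sketch under that assumption; `tokenize_padded` is a hypothetical helper, not part of the proposed patch):

```python
import io
import tokenize

def tokenize_padded(source):
    """Append a trailing newline if missing, then tokenize.

    This sidesteps the compensation code that synthesizes newline
    tokens for input ending in a comment without a trailing newline.
    """
    if not source.endswith('\n'):
        source += '\n'
    return list(tokenize.generate_tokens(io.StringIO(source).readline))

# The padded source round-trips cleanly, at the cost of the
# result gaining a trailing newline.
print(tokenize.untokenize(tokenize_padded('#')))
```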


2018-10-29 Thread Gregory P. Smith
Gregory P. Smith added the comment:

Interesting! I have a 3.6.2 sitting around and cannot reproduce that "x=1" behavior. I don't know what the behavior _should_ be. It just feels natural that untokenize should be able to round-trip anything tokenize or generate_tokens emits without raising...
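That round-trip expectation can be written down as a small property check (a sketch; `assert_roundtrip` is a hypothetical helper, and on interpreters without the fix it is the newline-less inputs that fail):

```python
import io
import tokenize

def assert_roundtrip(source):
    """Check that untokenize() rebuilds exactly what was tokenized."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    rebuilt = tokenize.untokenize(tokens)
    assert rebuilt == source, (source, rebuilt)

# With the fix applied, both newline-terminated and bare inputs pass.
for snippet in ('x=1\n', '#\n', 'x=1', '#'):
    assert_roundtrip(snippet)
```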


2018-10-29 Thread Ammar Askar
Ammar Askar added the comment:

FWIW I think there's more at play here than the newline change. This is the behavior I get on 3.6.5 (before the newline change was applied). '#' works as expected, but check out this input:

>>> t.untokenize(tokenize.generate_tokens(io.StringIO('#').readline))
'#'
>...


2018-10-29 Thread Pablo Galindo Salgado
Change by Pablo Galindo Salgado:

nosy: +pablogsal


2018-10-29 Thread Ammar Askar
Ammar Askar added the comment:

Looks like this is caused by this line here: https://github.com/python/cpython/blob/b83d917fafd87e4130f9c7d5209ad2debc7219cd/Lib/tokenize.py#L551-L558 which adds a newline token implicitly after comments. Since the input didn't terminate with a '\n', the code...
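The failure mode can be reproduced directly by feeding untokenize() tokens whose positions are inconsistent, which is what the synthesized tokens amount to (a sketch with hand-crafted tuples, not the exact tokens the compensation code emits):

```python
import tokenize

# Hand-crafted 5-tuples: the second token starts *before* the first
# one ends, so Untokenizer.add_whitespace() cannot lay them out.
bad_tokens = [
    (tokenize.COMMENT, '#', (1, 0), (1, 1), '#'),
    (tokenize.NL, '\n', (1, 0), (1, 1), '#'),
]
try:
    tokenize.untokenize(bad_tokens)
except ValueError as exc:
    print(exc)  # e.g. start (1,0) precedes previous end (1,1)
```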


2018-10-29 Thread Gregory P. Smith
New submission from Gregory P. Smith:

The behavior change introduced in 3.6.7 and 3.7.1 via https://bugs.python.org/issue33899 has further consequences:

```python
>>> tokenize.untokenize(tokenize.generate_tokens(io.StringIO('#').readline))
Traceback (most recent call last):
  File "", line 1, ...
```