New submission from Jared Grubb:
tokenize does not handle line joining properly: a backslash continuation
followed by a comment-only line is rejected by the CPython tokenizer but
accepted by the tokenize module, as the following string shows.
Example 1:
>>> import tokenize
>>> from StringIO import StringIO
>>> s = "if 1:\n \\\n #hey\n print 1"
>>> exec s
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<string>", line 3
#hey
^
SyntaxError: invalid syntax
>>> tokenize.tokenize(StringIO(s).readline)
1,0-1,2: NAME 'if'
1,3-1,4: NUMBER '1'
1,4-1,5: OP ':'
1,5-1,6: NEWLINE '\n'
2,0-2,2: INDENT ' '
3,2-3,6: COMMENT '#hey'
3,6-3,7: NEWLINE '\n'
4,2-4,7: NAME 'print'
4,8-4,9: NUMBER '1'
5,0-5,0: DEDENT ''
5,0-5,0: ENDMARKER ''
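
For comparison, a minimal sketch (not part of the original report; it
assumes Python 2 with the standard tokenize and StringIO modules) that
runs both front ends over the same string: compile() raises SyntaxError
for this input, while tokenize.generate_tokens() consumes it without
complaint.

import tokenize
from StringIO import StringIO

s = "if 1:\n \\\n #hey\n print 1"

# The real tokenizer (reached via compile) rejects the comment line that
# follows the backslash continuation, as in the traceback above.
try:
    compile(s, "<string>", "exec")
    print "compile: accepted"
except SyntaxError, e:
    print "compile: rejected (%s)" % e

# The tokenize module tokenizes the same string without raising.
try:
    for tok in tokenize.generate_tokens(StringIO(s).readline):
        pass
    print "tokenize: accepted"
except (tokenize.TokenError, SyntaxError), e:
    print "tokenize: rejected (%s)" % e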
__________________________________
Tracker
<http://bugs.python.org/issue2180>
__________________________________