On 29 Sep 2010, at 00:22, "Martin v. Löwis" wrote:
>> I certainly wouldn't be opposed to an API that accepts a string as well
>> though.
>
> Notice that this can't really work for Python 2 source code (but of
> course, it doesn't need to).
>
> In Python 2, if you have a string literal in the source code, you need
> to know the source encoding in order to get the string's actual value.
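A minimal sketch of Martin's point, illustrative only (the encodings and
the snippet are arbitrary choices, not from the thread): the text a string
literal denotes depends on the declared source encoding, so a tokenizer
handed an already-decoded str has lost information it may need.

    # The same source bytes mean different things under different encodings.
    SOURCE = b'# -*- coding: latin-1 -*-\ns = "caf\xe9"\n'

    print(SOURCE.decode("latin-1"))  # s = "café"  -- what the author meant
    print(SOURCE.decode("koi8-r"))   # s = "cafи"  -- same bytes, other text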
On 28.09.2010 05:45, Steve Holden wrote:
> On 9/27/2010 11:27 PM, Benjamin Peterson wrote:
>> 2010/9/27 Meador Inge :
>>> which, as seen in the trace, is because the 'detect_encoding' function in
>>> 'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in the string
>>> to tokenize 'first' (a 'str' object).
On Tue, Sep 28, 2010 at 7:09 AM, Nick Coghlan wrote:
> A feature request on the tracker is the best way to make that happen.
>
Done - http://bugs.python.org/issue9969. Thanks for the feedback everyone.
-- Meador
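For reference, and not part of the thread: later Python 3 releases expose
tokenize.generate_tokens(), which accepts a str-based readline and is
roughly the API the new issue asks for. A minimal sketch:

    import io
    import tokenize

    # generate_tokens() skips encoding detection, so str input is fine.
    for tok in tokenize.generate_tokens(io.StringIO("x = 1\n").readline):
        print(tok.type, tok.string)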
On Tue, Sep 28, 2010 at 9:29 PM, Michael Foord wrote:
> On 28/09/2010 12:19, Antoine Pitrou wrote:
>> On Mon, 27 Sep 2010 23:45:45 -0400
>> Steve Holden wrote:
>>> On 9/27/2010 11:27 PM, Benjamin Peterson wrote:
>>>> Tokenize only works on bytes. You can open a feature request if you
>>>> desire.
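The bytes-only usage Benjamin describes does work; a minimal sketch (the
source snippet here is an arbitrary example):

    import io
    import tokenize

    # tokenize.tokenize() wants a readline that yields bytes; it then
    # detects the encoding itself and emits an ENCODING token first.
    source = b"x = 1\n"
    for tok in tokenize.tokenize(io.BytesIO(source).readline):
        print(tok)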
On 28 September 2010 12:29, Michael Foord wrote:
> On 28/09/2010 12:19, Antoine Pitrou wrote:
>> On Mon, 27 Sep 2010 23:45:45 -0400
>> Steve Holden wrote:
>>> On 9/27/2010 11:27 PM, Benjamin Peterson wrote:
>>>> 2010/9/27 Meador Inge:
>>>>> which, as seen in the trace, is because the 'detect_encoding' function
>>>>> in 'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in the
>>>>> string to tokenize 'first' (a 'str' object).
On 28/09/2010 12:19, Antoine Pitrou wrote:
> On Mon, 27 Sep 2010 23:45:45 -0400
> Steve Holden wrote:
>> On 9/27/2010 11:27 PM, Benjamin Peterson wrote:
>>> 2010/9/27 Meador Inge:
>>>> which, as seen in the trace, is because the 'detect_encoding' function in
>>>> 'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in the
>>>> string to tokenize 'first' (a 'str' object).
On Mon, 27 Sep 2010 23:45:45 -0400
Steve Holden wrote:
> On 9/27/2010 11:27 PM, Benjamin Peterson wrote:
> > 2010/9/27 Meador Inge :
> >> which, as seen in the trace, is because the 'detect_encoding' function in
> >> 'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in the string
> >> to tokenize 'first' (a 'str' object).
On 9/27/2010 11:27 PM, Benjamin Peterson wrote:
> 2010/9/27 Meador Inge :
>> which, as seen in the trace, is because the 'detect_encoding' function in
>> 'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in the string
>> to tokenize 'first' (a 'str' object). It seems to me that strings should
>> still be able to be tokenized, but maybe I am missing something.
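For contrast, detect_encoding() behaves as designed when fed bytes: it
looks for a UTF-8 BOM and a coding cookie and reports what it finds. A
minimal sketch (the latin-1 cookie is an arbitrary example):

    import io
    import tokenize

    readline = io.BytesIO(b"# -*- coding: latin-1 -*-\nx = 1\n").readline
    encoding, consumed_lines = tokenize.detect_encoding(readline)
    print(encoding)  # 'iso-8859-1' -- tokenize normalizes the cookie name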
2010/9/27 Meador Inge :
> which, as seen in the trace, is because the 'detect_encoding' function in
> 'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in the string
> to tokenize 'first' (a 'str' object). It seems to me that strings should
> still be able to be tokenized, but maybe I am missing something.

Tokenize only works on bytes. You can open a feature request if you
desire.
Hi All,
I was going through some of the open issues related to 'tokenize' and ran
across 'issue2180'. The reproduction case for this issue is along the lines
of:
>>> tokenize.tokenize(io.StringIO("if 1:\n \\\n #hey\n print 1").readline)
but, with 'py3k' I get:
>>> tokenize.tokenize(io.StringIO("if 1:\n \\\n #hey\n print 1").readline)
[TypeError traceback snipped]
which, as seen in the trace, is because the 'detect_encoding' function in
'Lib/tokenize.py' searches for 'BOM_UTF8' (a 'bytes' object) in the string
to tokenize 'first' (a 'str' object). It seems to me that strings should
still be able to be tokenized, but maybe I am missing something.
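The mismatch described above is easy to reproduce; a minimal sketch (the
exact exception message varies by version):

    import io
    import tokenize

    try:
        list(tokenize.tokenize(io.StringIO("x = 1\n").readline))
    except TypeError as exc:
        # detect_encoding() compares the str line against BOM_UTF8 (bytes),
        # so a str-producing readline fails before tokenization starts.
        print(exc)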