[Python-Dev] Re: Speeding up CPython

2021-05-17 Thread Abdur-Rahmaan Janhangeer
On Thu, May 13, 2021 at 10:03 PM Stephen J. Turnbull <
turnbull.stephen...@u.tsukuba.ac.jp> wrote:

> *Creating* plausible issues is hard work, I assure you as a university
> professor.  Coming up with "exercises" that are not makework requires
> expertise in both the domain and in educational psychology.  (Some
> people are "just good at it", of course, but it's quite clear from
> popular textbooks that most are not.)  I think that would be a very
> unproductive use of developer time, especially since "git clone; git
> checkout some-tag-in-2017" is pretty much what you're asking for
> otherwise.


Maybe selecting already-solved issues would, in theory, take away the pain
of having to mimic real-world scenarios. It's great to have insights from
someone behind the scenes of creating exercises.

> The problem is not a lack of issues to practice on.  It's that (1) the
> PR process itself is a barrier, or at least an annoyance, and (2) many
> new contributors need mentoring.  (Or think they do.  Some just need
> encouragment, others need help on technique, but both groups are more
> or less blocked without the mentoring.)
>

I think setting up is not that hard. VStinner contributed a great resource
in https://cpython-core-tutorial.readthedocs.io/en/latest/, and if someone
gets stuck they can ping the list. Once you have the project running, what
you need is either to contribute or to explore and understand the code, and
both are, in theory, addressed by the educational repo. You need to find
something to do before the interest wanes. As Terry Reedy encourages,
getting more and more people to contribute ensures that at least a few of
them pass through the vital steps needed to become regular contributors.
This idea aims to make that process easier.

> And, of course, real contribution involves a lot of unfun work.
> Writing tests, writing documentation, explaining to other developers
> who start out -1 because they don't get it, overcoming your own mental
> blocks to changing your submission because *you* don't get it, and on
> and on.  A lot of newcomers think "I'm not good at that, if I have to
> do it I can't contribute" (and a few selfishly think they can just do
> the fun parts and achieve fame and fortune), but you know, "if not
> you, then who?  If you don't do it for Python, where are you going to
> be able to contribute?"
>

Having past, already-solved issues picked out and documented further, in
increasing order of difficulty, seems like it would iron out those issues.


> To be honest, although I'm not a specialist in organizational behavior
> and am operating with a small sample, I can say that from the point of
> view of identifying tasks, finding solutions, and implementing them,
> Python is the most effective non-hierarchical organization I've ever
> seen.  I can't say I've seen more than one or two hierarchical
> organizations that are significantly better at implementing solutions
> and don't burn up their workers in the process -- and the ones I'm
> thinking of are way smaller than Python.  (Yes, I know that there are
> people who have gotten burned up in Python, too.  We can do better on
> that, but Python does not deliberately sacrifice people to the
> organization.)
>

I agree that the Python community is awesome: the different WGs act like
great departments and people give a lot of their time. But being subscribed
here for some years has let me see some recurring patterns. Also, while
organising FlaskCon, we got some really valuable insight into the community.
The page where usergroups are listed paints a misleading picture, in the
sense that although it lists every usergroup ever initiated, the real
situation is quite different; we contacted a great many of them. Here and
there, there is room for improvement in the machinery.


> I have to point out that there's a whole crew over on corementorship
> doing this work, and at least one Very Senior Developer with their own
> private mentoring program.[1]  IMO, that is a big part of why Python
> is as successful as it is.  If more senior developers would take on
> these tasks it would have a big effect downstream.  But emotional work
> is hard, and it comes in big chunks.  In many situations you have to
> follow through, on the mentee's schedule, or the mentee will "slip the
> hook and swim away."  So it's a big ask.  I'm willing to make that ask
> in the abstract, but there's not even one senior developer I'm able to
> point to and say "definitely that person would do more for Python by
> mentoring than by hacking".  It's a very hard problem.
>

That's why what I am proposing might seem simple, but it fundamentally puts
CPython contribution mentoring on auto-pilot. As I said, I've seen VStinner's
initiative, and initiatives like these pay off far more than the docs alone,
though they could be folded into the docs; having some freedom in the format
lets you address issues on the fly. But not all people have time for that
while juggling work, life and open source.

[Python-Dev] Re: Speeding up CPython

2021-05-17 Thread Abdur-Rahmaan Janhangeer
A really awesome book; I was proposing something more like in-house training.
The community is awesome, just some more tweaks are needed: you always see
the lost beginner wanting mentorship, the contributors contributing, and the
core devs having no time to cater to a whole community of mentorship seekers.

Kind Regards,

Abdur-Rahmaan Janhangeer
about | blog | github
Mauritius
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MASJZFLZS7SHSYT27KMECPUMD7SSGZIH/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-17 Thread Mark Shannon

Hi everyone,

I fully agree with the rationale for the PEP; better error messages
always help.
However, I think the proposed implementation could be misleading in some
cases, and it wastes memory.



Use one position, not one-and-a-half positions.
---

The main problem with PEP 657, IMO, is that it wants to present the 
location of an error as a pair of positions, but without the line number 
of the second position.

Consequently it ends up with one and a half positions, not two.
This does not work well for a number of reasons.

1.  Errors spanning multiple lines.

Consider:

(i1 + i2 +
 s1
 )

Where i1, i2 are integers and s1 is a string.

With a single location, the error points to the second `+`. PEP 657 
would highlight the whole line, `i1 + i2 +`, making it unclear where the 
error occurred.



2. Repeated binary operations on the same line.

A single location can also be clearer when all the code is on one line.

i1 + i2 + s1

PEP 657:

i1 + i2 + s1
^^^^^^^^^^^^

Using a single location:

i1 + i2 + s1
        ^

3. Tracking locations in the compiler.

While nicer locations for errors are great, they won't be popular if they have
a negative impact on performance.
Locations need to be tracked through the compiler. The simpler the
location, the easier this is to do correctly without a negative 
performance impact.
It is already tricky to do this correctly with just line numbers because 
we have both source that compiles to no bytecodes (try, pass) and 
bytecodes that have no source (implicit return None and except cleanups).
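
As a rough illustration of that mismatch, here is a sketch using only the
standard `dis` module (the exact opcodes vary between CPython versions):

```
import dis

# `pass` emits no bytecode of its own; the only instructions in the
# resulting code object come from the implicit `return None`, which has
# no corresponding source text.
code = compile("pass", "<example>", "exec")
dis.dis(code)
# Typical output (details differ between versions):
#   1           0 LOAD_CONST               0 (None)
#                 2 RETURN_VALUE
```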



A single location can still be presented as a whole token, as tokenizing 
a single line of code is easy and fast. So when presenting an error, the 
whole token can be highlighted.
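
A rough sketch of how that could be computed, using only the standard
`tokenize` module (the helper name and inputs are illustrative, not anything
proposed by the PEP):

```
import io
import tokenize

def highlight_token(source_line, col):
    """Return `source_line` plus a caret line covering the whole token
    that contains column `col` (0-based)."""
    tokens = tokenize.generate_tokens(io.StringIO(source_line).readline)
    for tok in tokens:
        start_col, end_col = tok.start[1], tok.end[1]
        if start_col <= col < end_col:
            return source_line + "\n" + " " * start_col + "^" * (end_col - start_col)
    # Fall back to a single caret if no token contains the column.
    return source_line + "\n" + " " * col + "^"

print(highlight_token("i1 + i2 + s1", 5))   # highlights the `i2` token
```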


E.g.:

NameError
    name
    ^^^^

Compression
---

PEP 657 proposes that no compression be used, but also mandates the use 
of a lossy compression scheme (truncating all offsets over 255).
I think it would be better to provide an API like PEP 626 and not 
restrict the internal format used. In fact, extending or replacing the 
API of PEP 626 seems the best approach to me.


I wouldn't worry about memory consumption.
The code object layout and unmarshalling process need to be 
re-implemented for reduced memory use and faster startup: 
https://github.com/markshannon/faster-cpython/blob/master/tiers.md#tier-0

A few bytes more or less in the location table(s) is inconsequential.

Cheers,
Mark.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/KR7ACFCUNMHT4M7R4XNHGRFV27HZBDFD/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-17 Thread Pablo Galindo Salgado
Hi Mark,

Thanks for your useful feedback. Some comments:

> 1.  Errors spanning multiple lines.

That is addressed. Check the latest version of the PEP: we are proposing
storing the end lines as well:

https://www.python.org/dev/peps/pep-0657/#specification


> 2. Repeated binary operations on the same line.

It has been suggested to add this on top of the previous information, but
we think this is problematic, since these locations would need to be
manually calculated in the compiler, while the ranges are derived from the
AST ranges. Given how error-prone the old manual calculations of AST
locations were, we really want to avoid manual calculations here.
Additionally, many users have mentioned that a single caret is difficult to
read, which is what motivates the ranges in SyntaxErrors that we introduced.
> 3. Tracking locations in the compiler. PEP 657 proposes that no
compression be used

No, PEP 657 doesn't specify the compression, which is different (check the
latest version). We say:

"The internal storage, compression and encoding of the information is left
as an implementation detail and can be changed at any point as long as the
public API remains unchanged."

> I think it would be better to provide an API like PEP 626 and not
restrict the internal format used.

Indeed, that is what we are doing. Check
https://www.python.org/dev/peps/pep-0657/#id10
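
For reference, a minimal sketch of what consuming that public API could look
like, assuming the `co_positions()` iterator described in the specification
(one `(line, end_line, column, end_column)` tuple per instruction, with None
entries where no location is recorded):

```
def f(i1, i2, s1):
    return i1 + i2 + s1

# Iterate over the proposed per-instruction position table.
for position in f.__code__.co_positions():
    print(position)
```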

Cheers,
Pablo Galindo Salgado


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/TKY4YMPAQZDKCK7NV4AQ3IFAN5MF76DU/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-17 Thread Pablo Galindo Salgado
P.S. We will add "using a single caret" to the "rejected ideas section"
with some rationale.

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/

[Python-Dev] Re: Speeding up CPython

2021-05-17 Thread Diego Peres
Recently I found Cinder on GitHub, created by Instagram; it looks like they
have the same interest as you (speeding up CPython), and it might be useful
to team up with them.

```
We've made Cinder publicly available in order to facilitate conversation
about potentially upstreaming some of this work to CPython and to reduce
duplication of effort among people working on CPython performance.
```



-- 
Best regards,
Diego da Silva Péres.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/K6L2KXDANFJXDZXYEHWFKMCCILVIT7RM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-17 Thread Ammar Askar
> While nicer locations for errors are great, they won't be popular if they have
> a negative impact on performance.
> Locations need to be tracked through the compiler.

In performance-sensitive contexts, won't most code be pre-compiled into
.pyc files anyway? I feel like the performance cost of accurate column
tracking in the compiler isn't too big of a concern unless I'm missing
something.
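
(For what it's worth, a minimal sketch of forcing that ahead-of-time compile
with the standard `py_compile`/`compileall` modules; the paths here are just
placeholders:)

```
import compileall
import py_compile

# Compile a single module to a .pyc up front...
py_compile.compile("mymodule.py", doraise=True)

# ...or a whole tree, so any column-tracking cost in the compiler is paid
# once at build time rather than at first import.
compileall.compile_dir("src/", quiet=1)
```
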
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/JDRA26FG5TXD3D7VMLE2UNKQQ4WIHLEF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-17 Thread Mark Shannon

Hi,

On 17/05/2021 5:22 pm, Ammar Askar wrote:
>> While nicer locations for errors are great, they won't be popular if they have
>> a negative impact on performance.
>> Locations need to be tracked through the compiler.
>
> In performance sensitive contexts won't most code be pre-compiled into
> pyc files anyway? I feel like the performance cost of accurate column
> tracking in the compiler isn't too big of a concern unless I'm missing
> something.
>

The cost I'm concerned about is the runtime cost of worse code, because
the compiler can't perform some optimizations due to the constraints of
providing the extended debug information.


Cheers,
Mark.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/IXXOBTZJQEVF6EZP5ACQNKTN7RVDQ7SI/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-17 Thread Ammar Askar
> The cost I'm concerned about is the runtime cost of worse code, because
> the compiler can't perform some optimizations due to the constraints of
> providing the extended debug information.

Aah thanks for clarifying, I see what you mean now. In cases like this
where the compiler is making optimizations, I think it is perfectly
fine to just elide the column information. While it would be nice to
maintain accurate columns wherever possible, you shouldn't constrain
improvements and optimizations based on it. The traceback machinery
will simply not print out the carets in that case and everything
should just work smoothly.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/EB24LA7L5C35QHQTFLB6QZX26E77O6QM/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-17 Thread Christopher Barker
> > The cost I'm concerned about is the runtime cost of worse code, because
> > the compiler can't perform some optimizations due to the constraints of
> > providing the extended debug information.


Python does have an Optimized mode (-O). Granted, it’s not used very often,
but this would be a good use case for it.

-CHB



-- 
Christopher Barker, PhD (Chris)

Python Language Consulting
  - Teaching
  - Scientific Software Development
  - Desktop GUI and Web Development
  - wxPython, numpy, scipy, Cython
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/GBAJSME7P7D6FS4NDCFCJRSJXN6LIYZK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-17 Thread Ethan Furman

On 5/17/2021 6:13 AM, Mark Shannon wrote:

> Where i1, i2 are integers and s1 is a string.

> i1 + i2 + s1
> ^^^^^^^^^^^^

Wouldn't the carets just be under the i2 + s1 portion?

--
~Ethan~
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/YPWW3PIBJDUIETZQOJNHIYR5FNGMOJJ5/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-17 Thread Brandt Bucher
Ethan Furman wrote:
> On 5/17/2021 6:13 AM, Mark Shannon wrote:
> > Where i1, i2 are integers and s1 is a string.
> > > i1 + i2 + s1
> > > ^^^^^^^^^^^^
> Wouldn't the carets just be under the i2 + s1 portion?

I don't think so, since this is executed as `((i1 + i2) + s1)`.

Mark's carets look correct to me, since the second (outer) addition's LHS is 
the result of adding `i1` and `i2`:

```
Python 3.11.0a0 (heads/main:a42d98ed91, May 16 2021, 14:02:36) [GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ast
>>> op = ast.parse("i1 + i2 + s1", mode="eval").body
>>> op
<ast.BinOp object at 0x...>
>>> op.col_offset
0
>>> op.end_col_offset
12
```
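
Continuing the same session (a hypothetical follow-up, not in the original
message) makes the same point for the operands: the outer addition's
left-hand side spans `i1 + i2`, while its right-hand side is just `s1`:

```
>>> op.left.col_offset, op.left.end_col_offset
(0, 7)
>>> op.right.col_offset, op.right.end_col_offset
(10, 12)
```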

Brandt
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/YCMHFU4I7TEYDQP7OH4AX2YOD4KPLNFX/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Critique of PEP 657 -- Include Fine Grained Error Locations in Tracebacks

2021-05-17 Thread Nathaniel Smith
On Mon, May 17, 2021 at 6:18 AM Mark Shannon  wrote:
> 2. Repeated binary operations on the same line.
>
> A single location can also be clearer when all the code is on one line.
>
> i1 + i2 + s1
>
> PEP 657:
>
> i1 + i2 + s1
> ^^^^^^^^^^^^
>
> Using a single location:
>
> i1 + i2 + s1
>         ^

It's true this case is a bit confusing with the whole operation span
highlighted, but I'm not sure the single location version is much better. I
feel like a Really Good UI would highlight the two operands in different
colors or something, or at least underline the two separate items whose type
is incompatible separately:

TypeError: unsupported operand type(s) for +: 'int' + 'str':
i1 + i2 + s1
^^^^^^^   ~~

More generally, these error messages are the kind of thing where the UI can
always be tweaked to improve further, and those tweaks can make good use of
any rich source information that's available.

So, here's another option to consider:

- When parsing, assign each AST node a unique, deterministic id (e.g.
sequentially across the AST tree from top-to-bottom, left-to-right).
- For each bytecode offset, store the corresponding AST node id in an
lnotab-like table
- When displaying a traceback, we already need to go find and read the
original .py file to print source code at all. Re-parse it, and use the ids
to find the original AST node, in context with full structure. Let the
traceback formatter do whatever clever stuff it wants with this info.
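
A rough sketch of the node-numbering half of this idea, using plain `ast`
(the `node_id` attribute and helper are illustrative; any deterministic
traversal order works, as long as the compiler and the traceback formatter
agree on it):

```
import ast

def number_nodes(source):
    """Parse `source` and give every AST node a deterministic id based on
    ast.walk()'s traversal order; return the tree and an id -> node map."""
    tree = ast.parse(source)
    by_id = {}
    for node_id, node in enumerate(ast.walk(tree)):
        node.node_id = node_id      # attribute name is illustrative only
        by_id[node_id] = node
    return tree, by_id

# At traceback time: re-parse the original .py source and map the stored
# id back to a full AST node, with all of its context and location info.
tree, by_id = number_nodes("i1 + i2 + s1")
outer = tree.body[0].value          # the outer BinOp of `i1 + i2 + s1`
assert by_id[outer.node_id] is outer
print(outer.node_id, outer.col_offset, outer.end_col_offset)
```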

Of course if the .py and .pyc files don't match, this might produce
gibberish. We already have that problem with showing source lines, but it
might be even more confusing if we get some random unrelated AST node. This
could be avoided by storing some kind of hash in the code object, so that
we can validate the .py file we find hasn't changed (sha512 if we're
feeling fancy, crc32 if we want to save space, either way is probably fine).

This would make traceback printing more expensive, but only if you want the
fancy features, and traceback printing is already expensive (it does file
I/O!). Usually by the time you're rendering a traceback it's more important
to optimize for human time than CPU time. It would take less memory than
PEP 657, and the same as Mark's proposal (both only track one extra integer
per bytecode offset). And it would allow for arbitrarily rich traceback
display.

(I guess in theory you could make this even cheaper by using it to replace
lnotab, instead of extending it. But I think keeping lnotab around is a
good idea, as a fallback for cases where you can't find the original source
but still want some hint at location information.)

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/BUXFOSAEBXLIHH432PKBCXOGXUAHQIVP/
Code of Conduct: http://python.org/psf/codeofconduct/