[Python-Dev] Avoiding CPython performance regressions

2015-11-30 Thread Fabio Zadrozny
Hi python-dev,

I've seen that, on and off, CPython has had attempts to measure benchmarks over
time to avoid performance regressions (i.e.: https://speed.python.org), but
nothing concrete came of them so far, so I ended up creating a hosted service
for that (https://www.speedtin.com) and I'd like to help set up a structure to
run the benchmarks from https://hg.python.org/benchmarks/ and properly upload
them to SpeedTin (if CPython devs are OK with that) -- note that I don't really
have a server to run the benchmarks, only to host the data (but
https://speed.python.org seems to indicate that such a server is available...).

There's a sample report at:
https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time/ (it
has real data from runs of the PyPy benchmarks, as I only discovered the
benchmarks at https://hg.python.org/benchmarks/ later on -- also, that suite
doesn't seem to support Python 3 right now, so it's probably not that useful
for current Python development, but it does give some nice insight into
CPython 2.7.x performance over time).

Later on, the idea is to be able to compare across different Python
implementations which use the same benchmark set... (although that requires
other implementations to also post their data to SpeedTin).

Note that uploading the data to SpeedTin should be pretty straightforward
(by using https://github.com/fabioz/pyspeedtin), so the main issue would be
setting up a machine to run the benchmarks.

Best Regards,

Fabio
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Fabio Zadrozny
On Mon, Nov 30, 2015 at 3:33 PM, Stewart, David C  wrote:

>
> On 11/30/15, 5:52 AM, "Python-Dev on behalf of R. David Murray"
>  rdmur...@bitdance.com> wrote:
>
> >
> >There's also an Intel project posted about here recently that checks
> >individual benchmarks for performance regressions and posts the results
> >to python-checkins.
>
> The description of the project is at https://01.org/lp - Python results
> are indeed sent daily to python-checkins. (No results for Nov 30 and Dec 1
> due to Romania National Day holiday!)
>
> There is also a graphic dashboard at
> http://languagesperformance.intel.com/


Hi Dave,

Interesting, but I'm curious about which benchmark set you're running. From
the graphs it seems to have a really high standard deviation, so I'd like to
know whether that's really due to changes in the CPython codebase, issues in
the benchmark set, or how the benchmarks are run... (it doesn't seem to be
the benchmarks from https://hg.python.org/benchmarks/, right?).

--
Fabio


>
> Dave


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Fabio Zadrozny
On Tue, Dec 1, 2015 at 6:36 AM, Maciej Fijalkowski  wrote:

> Hi
>
> Thanks for doing the work! I'm one of the PyPy devs and I'm very
> interested in seeing this getting somewhere. I must say I struggle to
> read the graph - is red good or is red bad, for example?
>
> I'm keen to help you get anything you need to run it repeatedly.
>
> PS. The intel stuff runs one benchmark in a very questionable manner,
> so let's maybe not rely on it too much.
>

Hi Maciej,

Great, it'd be awesome to have data on multiple Python VMs (my latest target
is really having a way to compare across multiple VMs/versions easily and
help each implementation keep a focus on performance). Ideally, a single,
dedicated machine could be used just to run the benchmarks from multiple
VMs (one less variable to take into account for comparisons later on, as
I'm not sure it'd be reliable to normalize benchmark data from different
machines -- it seems Zach was the one to contact for that, but if there's
such a machine already being used to run PyPy, maybe it could be extended
to run other VMs too?).

As for the graph, it should be easy to customize (and I'm open to
suggestions). As it is, red is slower and blue is faster (so, for instance,
in https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time, the
fastest CPython version overall was 2.7.3 -- and 2.7.1 was the baseline).
I've updated the comments to make it clearer (and changed the second graph
to compare the latest against the fastest version (2.7.rc11 vs 2.7.3) for
the individual benchmarks).

Best Regards,

Fabio



>
> On Mon, Nov 30, 2015 at 3:52 PM, R. David Murray 
> wrote:
> > On Mon, 30 Nov 2015 09:02:12 -0200, Fabio Zadrozny 
> wrote:
> >> Note that uploading the data to SpeedTin should be pretty
> straightforward
> >> (by using https://github.com/fabioz/pyspeedtin, so, the main issue
> would be
> >> setting up o machine to run the benchmarks).
> >
> > Thanks, but Zach almost has this working using codespeed (he's still
> > waiting on a review from infrastructure, I think).  The server was not in
> > fact running; a large part of what Zach did was to get that server set
> up.
> > I don't know what it would take to export the data to another consumer,
> > but if you want to work on that I'm guessing there would be no objection.
> > And I'm sure there would be no objection if you want to get involved
> > in maintaining the benchmark server!
> >
> > There's also an Intel project posted about here recently that checks
> > individual benchmarks for performance regressions and posts the results
> > to python-checkins.
> >
> > --David


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Fabio Zadrozny
On Tue, Dec 1, 2015 at 9:35 AM, Victor Stinner 
wrote:

> 2015-12-01 10:49 GMT+01:00 Fabio Zadrozny :
> > As for the graph, it should be easy to customize (and I'm open to
> > suggestions). In the case, as it is, red is slower and blue is faster
> (so,
> > for instance in
> > https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time
>
> For me, -10% means "faster" in the context of a benchmark. On this
> graph, I see -21% but it's slower in fact. I'm confused.
>
> Victor
>

Humm, I understand your point, although I think the main reason for the
confusion is the lack of a real legend there...

I.e.: it's like that because the idea is a comparison between two versions,
not absolute benchmark times, so red/negative means one version is
'slower/worse' than the other and blue/positive means it's 'faster/better'
(as a reference, Eclipse also uses the same format for reporting -- e.g.:
http://download.eclipse.org/eclipse/downloads/drops4/R-4.5-201506032000/performance/performance.php?fp_type=0
).
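
To make the convention concrete (an illustrative sketch of the arithmetic only, not SpeedTin's actual code): with benchmark times, lower is better, so the comparison can be expressed as a signed percentage where positive means "faster than the baseline":

```python
def percent_change(baseline_time, compared_time):
    # Times: lower is better, so a positive result means the compared
    # version is faster than the baseline; negative means slower.
    return (baseline_time - compared_time) / baseline_time * 100.0

print(percent_change(2.0, 1.5))  # 25.0  (25% faster)
print(percent_change(2.0, 2.5))  # -25.0 (25% slower)
```

This is the reading Victor found confusing: a negative number here marks a slowdown, not a reduced (improved) time.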

I've added a legend now, so, hopefully it clears up the confusion ;)

--
Fabio


Re: [Python-Dev] Avoiding CPython performance regressions

2015-12-01 Thread Fabio Zadrozny
On Tue, Dec 1, 2015 at 8:14 AM, Maciej Fijalkowski  wrote:

> On Tue, Dec 1, 2015 at 11:49 AM, Fabio Zadrozny  wrote:
> >
> > On Tue, Dec 1, 2015 at 6:36 AM, Maciej Fijalkowski 
> wrote:
> >>
> >> Hi
> >>
> >> Thanks for doing the work! I'm one of the PyPy devs and I'm very
> >> interested in seeing this getting somewhere. I must say I struggle to
> >> read the graph - is red good or is red bad, for example?
> >>
> >> I'm keen to help you get anything you need to run it repeatedly.
> >>
> >> PS. The intel stuff runs one benchmark in a very questionable manner,
> >> so let's maybe not rely on it too much.
> >
> >
> > Hi Maciej,
> >
> > Great, it'd be awesome having data on multiple Python VMs (my latest
> target
> > is really having a way to compare across multiple VMs/versions easily and
> > help each implementation keep a focus on performance). Ideally, a single,
> > dedicated machine could be used just to run the benchmarks from multiple
> VMs
> > (one less variable to take into account for comparisons later on, as I'm
> not
> > sure it'd be reliable to normalize benchmark data from different
> machines --
> > it seems Zach was the one to contact from that, but if there's such a
> > machine already being used to run PyPy, maybe it could be extended to run
> > other VMs too?).
> >
> > As for the graph, it should be easy to customize (and I'm open to
> > suggestions). In the case, as it is, red is slower and blue is faster
> (so,
> > for instance in
> > https://www.speedtin.com/reports/1_CPython27x_Performance_Over_Time,
> the
> > fastest CPython version overall was 2.7.3 -- and 2.7.1 was the baseline).
> > I've updated the comments to make it clearer (and changed the second
> graph
> > to compare the latest against the fastest version (2.7.rc11 vs 2.7.3) for
> > the individual benchmarks.
> >
> > Best Regards,
> >
> > Fabio
>
> There is definitely a machine available. I suggest you ask
> python-infra list for access. It definitely can be used to run more
> than just pypy stuff. As for normalizing across multiple machines -
don't even bother. Different architectures make A LOT of difference,
> especially with cache sizes and whatnot, which seems to have a different
> impact on different loads.
>
> As for graph - I like the split on the benchmarks and a better
> description (higher is better) would be good.
>
> I have a lot of ideas about visualizations, pop in on IRC, I'm happy
> to discuss :-)
>
>
Ok, I've mailed infrastructure(at)python.org to see how to make it work.

I did add a legend now, so it should be much easier to read already ;)

As for ideas on visualizations, I definitely want to hear suggestions on
how to improve things, although I'll start by focusing on having the
servers that gather benchmark data running, and will move on to improving
the graphs right afterwards.

Cheers,

Fabio




> Cheers,
> fijal
>


[Python-Dev] Python advanced debug support (update frame code)

2014-01-12 Thread Fabio Zadrozny
Hi Python-dev.

I'm playing a bit with the concept of live-coding during a debug session,
and one of the most annoying things is that although I can reload the code
for a function (using something close to xreload), it seems it's not
possible to change the code for the current frame (i.e.: I need to get out
of the function call and then call the method from that frame again to see
the changes).

I took a look at the frame object and it seems it would be possible to set
frame.f_code to another code object -- and set the line number to the start
of the new code object, which would cover the most common situation,
restarting the current frame -- provided the arguments remain the same
(this is close to what the Java debugger in Eclipse does when it drops the
current frame -- in Python, provided I'm not in a try..except block, I can
do even better by setting frame.f_lineno, but without being able to change
the frame's f_code it loses a lot of its usefulness).
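
As a small illustration of the asymmetry described above (a minimal sketch, not the xreload mechanism itself): rebinding a function's __code__ already works for future calls, while the f_code of a frame that is currently executing cannot be swapped:

```python
def method():
    return 1

def method_v2():
    return 2

# Rebinding the function's code object affects *future* calls...
method.__code__ = method_v2.__code__
print(method())  # 2

# ...but a frame that is already executing the old code keeps its
# original f_code; there is no supported way to swap it mid-execution,
# which is exactly the limitation being discussed.
```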

So, I'd like to ask for feedback from people with more knowledge: would it
actually be feasible to change frame.f_code, and what would the
implications of doing that be?

Thanks,

Fabio


Re: [Python-Dev] Criticism of execfile() removal in Python3

2014-06-14 Thread Fabio Zadrozny
On Sat, Jun 14, 2014 at 6:00 PM, Markus Unterwaditzer <
mar...@unterwaditzer.net> wrote:

> On Tue, Jun 10, 2014 at 05:23:12AM +0300, Paul Sokolovsky wrote:
> > Hello,
> >
> > I was pleasantly surprised with the response to recent post about
> > MicroPython implementation details
> > (https://mail.python.org/pipermail/python-dev/2014-June/134718.html). I
> > hope that discussion means that posts about alternative implementations
> > are not unwelcome here, so I would like to bring up another (of many)
> > issues we faced while implementing MicroPython.
> >
> > execfile() builtin function was removed in 3.0. This brings few
> > problems:
> >
> > 1. It hampers interactive mode - instead of short and easy to type
> > execfile("file.py") one needs to use exec(open("file.py").read()). I'm
> > sure that's not going to bother a lot of people - after all, the
> > easiest way to execute a Python file is to drop back to shell and
> > restart python with file name, using all wonders of tab completion. But
> > now imagine that Python interpreter runs on bare hardware, and its REPL
> > is the only shell. That's exactly what we have with MicroPython's
> > Cortex-M port. But it's not really MicroPython-specific, there's
> > CPython port to baremetal either - http://www.pycorn.org/ .
>
> As far as i can see, minimizing the amount of characters to type was never
> a
> design goal of the Python language. And because that goal never mattered as
> much for the designers as it seems to do for you, the reason for it to get
> removed -- reducing the amount of builtins without reducing functionality
> --
> was the only one left.
>
> > 2. Ok, assuming that exec(open().read()) idiom is still a way to go,
> > there's a problem - it requires to load entire file to memory. But
> > there can be not enough memory. Consider 1Mb file with 900Kb comments
> > (autogenerated, for example). execfile() could easily parse it, using
> > small buffer. But exec() requires to slurp entire file into memory, and
> > 1Mb is much more than heap sizes that we target.
>
> That is a valid concern, but i believe violating the language
> specification and
> adding your own execfile implementation (either as a builtin or in a new
> stdlib
> module) here is justified, even if it means you will have to modify your
> existing Python 3 code to use it -- i don't think the majority of software
> written in Python will be able to run under such memory constraints without
> major modifications anyway.
>
> > Comments, suggestions? Just to set a productive direction, please
> > kindly don't consider the problems above as MicroPython's.
>
> A new (not MicroPython-specific) stdlib module containing functions such as
> execfile could be considered. Not really for Python-2-compatibility, but
> for
> performance-critical situations.
>
> I am not sure if this is a good solution. Not at all. Even though it's
> separated from the builtins, I think it would still sacrifice the purity
> of the language (by which I mean having a minimal composable API), because
> people
> are going to use it anyway. It reminds me of the situation in Python 2
> where
> developers are trying to use cStringIO with a fallback to StringIO as a
> matter
> of principle, not because they actually need that kind of performance.
>
> Another, IMO better idea which shifts the problem to the MicroPython devs
> is to
> "just" detect code using
>
> exec(open(...).read())
>
> and transparently rewrite it to something more memory-efficient. This is
> the
> idea i actually think is a good one.
>
>
> > I very much liked how last discussion went: I was pointed that
> > https://docs.python.org/3/reference/index.html is not really a CPython
> > reference, it's a *Python* reference, and there were even motion to
> > clarify in it some points which came out from MicroPython discussion.
> > So, what about https://docs.python.org/3/library/index.html - is it
> > CPython, or Python standard library specification? Assuming the latter,
> > what we have is that, by removal of previously available feature,
> > *Python* became less friendly for interactive usage and less scalable.
>
> "Less friendly for interactive usage" is a strong and vague statement. If
> you're going after the amount of characters required to type, yes,
> absolutely,
> but by that terms one could declare Bash and Perl to be superior languages.
> Look at it from a different perspective: There are fewer builtins to
> remember.
>
> >
>

Well, I must say that exec(open().read()) is not really a proper execfile
implementation, as it may fail due to encoding issues... (i.e.: one has to
check the file encoding and open the file with that encoding, otherwise
it's possible to end up with gibberish).
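
For reference, an encoding-aware execfile can be sketched with just the stdlib (this is not PyDev's implementation; tokenize.open reads the PEP 263 coding cookie or BOM for us):

```python
import tokenize

def execfile(filename, globs=None, locs=None):
    if globs is None:
        globs = {}
    # tokenize.open detects the source encoding from the PEP 263
    # coding cookie (or BOM) and returns a correctly-decoded stream,
    # avoiding the gibberish that plain open() can produce.
    with tokenize.open(filename) as f:
        source = f.read()
    code = compile(source, filename, 'exec')
    exec(code, globs, locs if locs is not None else globs)
```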

The PyDev debugger has an implementation (see:
https://github.com/fabioz/Pydev/blob/development/plugins/org.python.pydev/pysrc/_pydev_execfile.py)
which takes the encoding into account so that the result is OK (but it
still has a bug related to utf-8 w

Re: [Python-Dev] Store startup modules as C structures for 20%+ startup speed improvement?

2018-09-18 Thread Fabio Zadrozny
On Mon, Sep 17, 2018 at 9:23 PM, Carl Shapiro 
wrote:

> On Sun, Sep 16, 2018 at 1:24 PM, Antoine Pitrou 
> wrote:
>
>> I think it's of limited interest if it only helps with modules used
>> during the startup sequence, not arbitrary stdlib or third-party
>> modules.
>>
>
> This should help any use-case that is already using the freeze module
> bundled with CPython.  Third-party code, like py2exe, py2app,
> pyinstaller, and XAR could build upon this to create applications that
> start faster.
>

I think this seems like a great idea.

Some questions though:

During the import process, Python can already deal with folders and .zip
files in sys.path... now, instead of having special handling for a new
concept with a custom command line, etc., why not just say that this is a
special file (e.g.: files with a .pyfrozen extension) and make importlib
able to deal with it when it's on sys.path? (That way there could be
multiple of them, and there would be no need to turn it on/off, a custom
command line, etc.)
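
The importlib extension point alluded to here can be sketched with a custom finder/loader; in this illustration an in-memory dict of sources stands in for a hypothetical .pyfrozen bundle opened once at startup (all names here are made up):

```python
import importlib.abc
import importlib.util
import sys

class BundleFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Serves modules from an in-memory mapping, standing in for a
    hypothetical .pyfrozen bundle that would be opened once."""

    def __init__(self, modules):
        self._modules = modules  # module name -> source text

    def find_spec(self, name, path=None, target=None):
        if name in self._modules:
            return importlib.util.spec_from_loader(name, self)
        return None  # not ours; let the next finder try

    def create_module(self, spec):
        return None  # default module creation is fine

    def exec_module(self, module):
        source = self._modules[module.__name__]
        exec(compile(source, '<bundle>', 'exec'), module.__dict__)

# Register the finder; 'bundled_demo' now imports like any module.
sys.meta_path.insert(0, BundleFinder({'bundled_demo': 'ANSWER = 42'}))

import bundled_demo
print(bundled_demo.ANSWER)  # 42
```

A real implementation would of course hand back precompiled code objects rather than source text, which is where the start-up savings would come from.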

Another question: doesn't importlib already provide hooks for external
contributors which could address that use case? (So, this could initially
be available as a third-party library to mature outside of CPython, and
when it's deemed mature it could be integrated into CPython -- not that
this can't happen in the Python 3.8 timeframe, but it'd be useful to check
its use against the current Python version and measure benefits with
real-world code.)

>> To give an idea, on my machine the baseline Python startup is about 20ms
>> (`time python -c pass`), but if I import Numpy it grows to 100ms, and
>> with Pandas it's more than 200ms.  Saving 4ms on the baseline startup
>> would make no practical difference for concrete usage.
>>
>
> Do you have a feeling for how many of those milliseconds are spent loading
> bytecode from disk?  If so, standalone executables that contain numpy and
> pandas (and mercurial) would start faster.
>
>
>> I'm ready to think there are other use cases where it matters, though.
>>
>
> I think so.  I hope you will, too :-)
>


Re: [Python-Dev] Store startup modules as C structures for 20%+ startup speed improvement?

2018-09-18 Thread Fabio Zadrozny
On Tue, Sep 18, 2018 at 2:57 PM, Carl Shapiro 
wrote:

> On Tue, Sep 18, 2018 at 5:55 AM, Fabio Zadrozny  wrote:
>
>> During the import process, Python can already deal with folders and .zip
>> files in sys.path... now, instead of having special handling for a new
>> concept with a custom command line, etc, why not just say that this is a
>> special file (e.g.: files with a .pyfrozen extension) and make importlib be
>> able to deal with it when it's on sys.path (that way there could be
>> multiple of those and there should be no need to turn it on/off, custom
>> command line, etc)?
>>
>
> That is an interesting idea but it might not be easy to work into this
> design.  The improvement in start-up time comes from eliminating the
> overheads of filesystem I/O, memory allocation, and un-marshaling
> bytecode.  Having this data on the filesystem would reintroduce the cost of
> filesystem I/O and it would add a load-time relocation to the equation so
> the overall performance benefits would be greatly lessened.
>
>
>> Another question: doesn't importlib already provide hooks for external
>> contributors which could address that use case? (so, this could initially
>> be available as a third party library for maturing outside of CPython and
>> then when it's deemed to be mature it could be integrated into CPython --
>> not that this can't happen on Python 3.8 timeframe, but it'd be useful
>> checking its use against the current Python version and measuring benefits
>> with real world code).
>>
>
> This may be possible but, for the same reasons I outline above, it would
> certainly come at the expense of performance.
>
> I think many people are interested in a better .pyc format but our goals
> are much more modest.  We are actually trying to not introduce a whole new
> way to externalize .py data in CPython.  Rather, we think of this as just
> making the existing frozen module capability much faster so its use can be
> broadened to making start-up performance better.  The user visible part,
> the command line interface to bypass the frozen module, would be a
> nice-to-have for developers but is something we could live without.
>

Just to make sure we're on the same page: the approach I'm talking about
would still use a dll, not a better .pyc format, so during the import a
custom importer would open that dll once and provide modules from it -- do
you think this would have much more overhead than what's proposed now?

I guess it may be a bit slower because it'd have to obey the existing
import machinery, but that shouldn't mean more time is spent on IO, memory
allocation or un-marshaling bytecode (although it may be that I
misunderstood the approach, or the current import machinery doesn't provide
the proper API for that).


[Python-Dev] Improve CPython tracing performance

2020-10-29 Thread Fabio Zadrozny
Hi all,

Right now, when a debugger is active, the number of local variables can
affect the tracing speed quite a lot.

For instance, having tracing set up in a program such as the one below
takes 4.64 seconds to run, yet changing all the variables to have the same
name -- i.e.: changing all assignments to `a = 1` (so that there's only a
single variable in the namespace) -- makes it take 1.47 seconds (on my
machine)... the higher the number of variables, the slower the tracing
becomes.

```
import time
t = time.time()

def call():
a = 1
b = 1
c = 1
d = 1
e = 1
f = 1

def noop(frame, event, arg):
return noop

import sys
sys.settrace(noop)

for i in range(1_000_000):
call()

print('%.2fs' % (time.time() - t,))
```

This happens because `PyFrame_FastToLocalsWithError` and
`PyFrame_LocalsToFast` are called inside the `call_trampoline` (
https://github.com/python/cpython/blob/master/Python/sysmodule.c#L946).

So, I'd like to simply remove those calls.

Debuggers can call `PyFrame_LocalsToFast` when needed -- otherwise
mutating non-current frames doesn't work anyway. As a note, pydevd already
has such a call:
https://github.com/fabioz/PyDev.Debugger/blob/0d4d210f01a1c0a8647178b2e665b53ab113509d/_pydevd_bundle/pydevd_save_locals.py#L57
and PyPy also has a counterpart.
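
For context, the write-back pydevd performs can be reached from pure Python through ctypes -- a sketch of that pattern (note: on newer CPythons with PEP 667's write-through f_locals proxy, the call is effectively a no-op and the mutation takes effect directly):

```python
import ctypes
import sys

def save_locals(frame):
    # Ask CPython to copy the frame's f_locals dict back into the
    # fast-locals array, making the mutation visible to the frame.
    ctypes.pythonapi.PyFrame_LocalsToFast(
        ctypes.py_object(frame), ctypes.c_int(0))

def set_in_caller(name, value):
    frame = sys._getframe(1)
    frame.f_locals[name] = value
    save_locals(frame)

def demo():
    a = 1
    set_in_caller('a', 42)
    return a  # sees the mutated value thanks to the write-back

print(demo())
```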

As for `PyFrame_FastToLocalsWithError`, I don't really see any reason to
call it at all.

i.e.: code such as the snippet below prints the `a` variable from the
`main()` frame regardless of that call, and I checked all pydevd tests and
nothing seems to be affected (it seems that accessing f_locals already does
this:
https://github.com/python/cpython/blob/cb9879b948a19c9434316f8ab6aba9c4601a8173/Objects/frameobject.c#L35,
so the explicit call seems redundant).

```
def call():
import sys
frame = sys._getframe()
print(frame.f_back.f_locals)

def main():
a = 1
call()

if __name__ == '__main__':
main()
```

Does anyone see any issue with this?

If it's non controversial, is a PEP needed or just an issue to track it
would be enough to remove those 2 lines?

Thanks,

Fabio
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/62WK3THUDNWZCDOMXXDZFG3O4LIOIP4W/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Improve CPython tracing performance

2020-10-29 Thread Fabio Zadrozny
On Thu, Oct 29, 2020 at 9:45 AM Victor Stinner  wrote:

> Le jeu. 29 oct. 2020 à 13:02, Fabio Zadrozny  a écrit :
> > Debuggers can call  `PyFrame_LocalsToFast` when needed -- otherwise
> mutating non-current frames doesn't work anyways. As a note, pydevd already
> has such a call:
> https://github.com/fabioz/PyDev.Debugger/blob/0d4d210f01a1c0a8647178b2e665b53ab113509d/_pydevd_bundle/pydevd_save_locals.py#L57
> and PyPy also has a counterpart.
>
> Hum, if a trace or profile function is written in Python, reading
> frame.f_locals does call PyFrame_FastToLocalsWithError(). So a Python
> debugger/profiler would be ok with your code.
>
> For a debugger/profiler written in C, it would be a backward
> incompatible change. I agree that it would be reasonable to require it
> to call PyFrame_FastToLocalsWithError().
>
> > If it's non controversial, is a PEP needed or just an issue to track it
> would be enough to remove those 2 lines?
>
> Incompatible changes should be well documented in What's New in Python
> 3.10. In this case, I don't think that a deprecation period is needed.
>
> Just open an issue. Please post the URL to your issue in reply to your
> email. It's even better if you can write a PR to implement your idea
> ;-)
>


Ok, I've created https://bugs.python.org/issue42197 to track it.

--
Fabio


[Python-Dev] Re: Improve CPython tracing performance

2020-10-30 Thread Fabio Zadrozny
On Fri, Oct 30, 2020 at 7:02 AM Nick Coghlan  wrote:

> On Fri, 30 Oct 2020 at 00:19, Fabio Zadrozny  wrote:
> > On Thu, Oct 29, 2020 at 9:45 AM Victor Stinner 
> wrote:
> >>
> >> > If it's non controversial, is a PEP needed or just an issue to track
> it would be enough to remove those 2 lines?
> >>
> >> Incompatible changes should be well documented in What's New in Python
> >> 3.10. In this case, I don't think that a deprecation period is needed.
> >>
> >> Just open an issue. Please post the URL to your issue in reply to your
> >> email. It's even better if you can write a PR to implement your idea
> >> ;-)
>
> Removing those calls would require a PEP, as it would break all sorts
> of tools in cases that currently work correctly.
>
> > Ok, I've created https://bugs.python.org/issue42197 to track it.
>
> Please also have a look at PEP 558 and its draft reference
> implementation at https://github.com/python/cpython/pull/3640
>
> The way these trampoline calls currently work isn't just slow, it's
> actually broken in various ways, and changing them to use a
> write-through proxy instead of a dict-based snapshot means that the
> cost of producing those dict-based snapshots simply because tracing is
> turned on will go away.
>
> The PEP itself didn't seem to be particularly controversial (at least
> in its current form - earlier versions drew more objections), but
> there's a bunch of preparatory work that needs to be done before it
> could seriously be submitted for final review (specifically: the
> write-through proxy isn't actually implementing the full mutable
> mapping API. In order for it to do that without excessive code
> duplication, the helper functions already written for ordered
> dictionaries needed to moved out to a separate linkable module so that
> the new write-through proxy can reuse them without taking a separate
> copy of them)
>
>
Hi Nick!

As a note, the current implementation does allow debuggers to mutate frame
locals -- as long as they understand that they need to call
`PyFrame_LocalsToFast` when doing such a change -- potentially using ctypes
(I'm just mentioning this because PEP 558 seems to imply this isn't
possible).

i.e.: Debuggers already *must* call `PyFrame_LocalsToFast` if locals from
a frame which is not the current frame are being mutated, so, as far as I
can see, a debugger that isn't doing that is already broken -- some years
ago I even thought about exposing it in the frame API:
https://bugs.python.org/issue1654367, but in the end accessing it through
the C-API via ctypes does get the job done; debugger authors just need to
be aware of it -- PyPy also has a counterpart mentioned in that issue.

I agree that having f_locals be a proxy that does everything transparently
would be better, but unfortunately I don't currently have the time
available to help there... in your opinion, would just removing those calls
as I proposed (requiring that debuggers call `PyFrame_LocalsToFast`) be
acceptable? If you think it is, I'll proceed with creating the PEP;
otherwise I'll probably just drop it until f_locals becomes a proxy (in
which case I'd expect `PyFrame_FastToLocalsWithError` and
`PyFrame_LocalsToFast` to be removed in the same commit which converts
f_locals to a proxy).

Cheers,

Fabio


[Python-Dev] f_lasti not correct with PREDICT (is it fixable?)

2021-04-17 Thread Fabio Zadrozny
Hi all,

I currently have a use-case where I need to rely on `f_lasti`, but it
seems it's not really possible to rely on it, as it doesn't accurately
point to the last bytecode executed when PREDICT is used.

So, I'd like to know: is it possible to change that so that `f_lasti` is
properly updated even when PREDICT is used (to actually match what the docs
say about it: https://docs.python.org/3/library/inspect.html)?

As a note, my use case is doing a step into a function in a debugger.

This is needed in order to know whether a given function call matches the
target being stepped into.

For instance, given something as:

def method(c):
    return c + 1

a = method
a(a(a(1)))

The user can choose to step into the `a` at the middle. The only reliable
way I found to implement this was to check, when the user enters the
`method()` function, that the f_lasti on the parent frame points at the
instruction which would result in that function call.
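A rough sketch of that identification (the helper and names here are mine, not the actual debugger code): collect the offsets of the call instructions in the caller's code object, and on each "call" trace event check the parent frame's f_lasti against them:

```python
import dis
import sys

def call_offsets(code):
    # Offsets of call instructions in a code object; on a "call" trace
    # event, the parent frame's f_lasti should point at one of these.
    return {ins.offset for ins in dis.get_instructions(code)
            if "CALL" in ins.opname}

def method(c):
    return c + 1

def run():
    a = method
    return a(a(a(1)))

stepped_into = []

def tracer(frame, event, arg):
    if event == "call" and frame.f_code is method.__code__:
        # f_lasti of the caller identifies which call site we came from.
        stepped_into.append(frame.f_back.f_lasti)

sys.settrace(tracer)
run()
sys.settrace(None)
```

After running, `stepped_into` holds three offsets (one per nested call), each pointing at a call instruction in `run`'s bytecode -- modulo the PREDICT issue this thread is about.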

So, I have this working in a POC, but to have it working in all cases I'm
having to handle all existing bytecode instructions, go through ceval.c to
identify which ones have a PREDICT, and then adjust the `f_lasti`
accordingly when the function call is reached (which is a bit of a
herculean task to support across multiple versions of CPython, as that's
definitely an implementation detail), so I'd really be interested in having
f_lasti work reliably.

Thanks,

Fabio
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/HYTVMG4YTU3QLRGEE4LMLGFANQPO4PD3/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 659: Specializing Adaptive Interpreter

2021-05-13 Thread Fabio Zadrozny
On Wed, May 12, 2021 at 2:45 PM, Mark Shannon wrote:

> Hi everyone,
>
> I would like to present PEP 659.
>
> This is an informational PEP about a key part of our plan to improve
> CPython performance for 3.11 and beyond.
>
> For those of you aware of the recent releases of Cinder and Pyston,
> PEP 659 might look similar.
> It is similar, but I believe PEP 659 offers better interpreter
> performance and is more suitable to a collaborative, open-source
> development model.
>
> As always, comments and suggestions are welcome.
>
>
Hi Mark,

I think this seems like a nice proposal... I do have some questions related
to the PEP though (from the point of view of implementing a debugger on top
of it, some things are kind of vague to me):

1. When will the specialization happen? (i.e.: is bytecode expected to be
changed while the code is running inside a frame or must it all be done
prior to entering a frame on a subsequent call?)

2. When the adaptive specialization happens, will that be reflected in the
actual bytecode seen externally in the frame, or is that all internal? Will
clients be able to make sense of it? -- i.e.: in the debugger right now I
have a need on some occasions to detect the structure of the code from the
bytecode itself (for instance, to detect whether some exception would be
handled or unhandled at raise time, just given the bytecode).
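As a small illustration of that kind of bytecode-structure detection (a much-simplified sketch of mine, not the debugger's actual logic), even just telling whether a code object contains exception handling already differs across bytecode generations:

```python
import dis
import sys

def has_exception_handling(code):
    # 3.11+ moved try/except information out of the bytecode into the
    # exception table; before that, SETUP_FINALLY / SETUP_EXCEPT opcodes
    # marked the protected regions.
    if sys.version_info >= (3, 11):
        return bool(code.co_exceptiontable)
    return any(ins.opname in ("SETUP_FINALLY", "SETUP_EXCEPT")
               for ins in dis.get_instructions(code))

def guarded():
    try:
        return 1 / 0
    except ZeroDivisionError:
        return None

def plain():
    return 1
```

`has_exception_handling(guarded.__code__)` is true while `plain` has no handlers -- and a specializing interpreter would add yet another bytecode variant that tools like this have to keep track of.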

3. Another example: I'm working right now on a feature to step into a
method. To do that right now my approach is:
- Compute the function call names and bytecode offsets in a given frame.
- When a frame is called (during a frame.f_trace call), check the
parent frame bytecode offset (frame.f_lasti) to detect if the last thing
was the expected call (and if it was, break the execution).

This seems reasonable given the current implementation, where bytecodes are
all fixed and there's a mapping from the frame.f_lasti ... Will that still
work with the specializing adaptive interpreter?

4. Will it still be possible to change the frame.f_code prior to execution
from a callback set in `PyThreadState.interp.eval_frame` (which will change
the code to add a breakpoint to the bytecode and later call
`_PyEval_EvalFrameDefault`)? Note: this is done in the debugger so that
Python can run without any tracing until the breakpoint is hit (tracing is
set afterwards to actually pause the execution as well as doing step
operations).

Best regards,

Fabio
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/SP4LXKA2Y3LUCEGDUUX2RB3ZWM2JICKB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 659: Specializing Adaptive Interpreter

2021-05-14 Thread Fabio Zadrozny
>
> > 3. Another example: I'm working right now on a feature to step into a
> > method. To do that right now my approach is:
> >  - Compute the function call names and bytecode offsets in a given
> > frame.
> >  - When a frame is called (during a frame.f_trace call), check the
> > parent frame bytecode offset (frame.f_lasti) to detect if the last thing
> > was the expected call (and if it was, break the execution).
> >
> > This seems reasonable given the current implementation, where bytecodes
> > are all fixed and there's a mapping from the frame.f_lasti ... Will that
> > still work with the specializing adaptive interpreter?
>
> If you are implementing this in Python, then everything should work as
> it does now.
>

Ok... this part is all done in Python, so, if frame.f_lasti is still
updated properly according to the original bytecode while executing the
super instructions, then all seems to work properly on my side ;).

>
> OOI, would inserting a breakpoint at offset 0 in the callee function
> work?
>

Yes... if you're curious, for the breakpoint to actually work, what is done
is to generate bytecode which calls a function to set the tracing and later
generates a spurious line event (so that the tracing function is then able
to pause the execution). The related code that generates this bytecode is:
https://github.com/fabioz/PyDev.Debugger/blob/pydev_debugger_2_4_1/_pydevd_frame_eval/pydevd_modify_bytecode.py#L74


>
> >
> > 4. Will it still be possible to change the frame.f_code prior to
> > execution from a callback set in `PyThreadState.interp.eval_frame`
> > (which will change the code to add a breakpoint to the bytecode and
> > later call `_PyEval_EvalFrameDefault`)? Note: this is done in the
> > debugger so that Python can run without any tracing until the breakpoint
> > is hit (tracing is set afterwards to actually pause the execution as
> > well as doing step operations).
>
> Since frame.f_code is read-only in Python, I assume you mean in C.
>
> I can make no guarantees about the layout or meaning of fields in the C
> frame struct, I'm afraid.
> But I'm sure we can get something to work for you.
>

Yes, it's indeed done in C (Cython in this case... the related code, for
reference, is:
https://github.com/fabioz/PyDev.Debugger/blob/pydev_debugger_2_4_1/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L591
).

I'm fine with going through some other way or using some other API (this is
quite specific to a debugger after all); I just wanted to let you know of
the use case so that something along those lines can still be supported
(currently on CPython, this is as low-overhead for debugging as I can think
of, but since there'll be an adaptive bytecode specializer, using some other
custom API would also be reasonable).

Cheers,

Fabio
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/4GJ7W4PHK7XUPLHJPH5LEADRNIBZQXCS/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Fixing unbuffered output on Windows

2021-07-02 Thread Fabio Zadrozny
Hi all,

I've created a pull request some time ago to fix
https://bugs.python.org/issue42044 (
https://github.com/python/cpython/pull/26678).

I know Python devs are pretty busy, but I'd really appreciate it if someone
could take a look at it (as I *think* it's a simple fix).

Thanks,

Fabio
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/5HM65EF4NA5SR3NGJDMXUZMJFCL6FTJK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: API's to interface python with Eclipse to execute and debug C/C++ Source files.

2021-09-04 Thread Fabio Zadrozny
On Thu, Aug 19, 2021 at 1:57 AM, Chandrakant Shrimantrao <
chandrakant.shrimant...@ltts.com> wrote:

> Hi,
> We would like to know if there is any option available to execute and
> debug (setting breakpoints and deleting breakpoints) C/C++ source files
> via Python APIs inside the Eclipse IDE after interfacing PyDev with
> Eclipse. Please let us know.
>
> Thanks and regards.
> Chandrakant shrimantrao.
>
>
Hello Chandrakant,

The usual way I go about this is that one of the debuggers should be in
remote mode, so, either you start it with PyDev and then attach the C/C++
debugger to it afterwards or you start it with the C/C++ debugger and then
attach the PyDev debugger afterwards.

Attaching to the PyDev debugger is mostly a matter of calling
`pydevd.settrace()` appropriately (making sure you have the remote debugger
on the client listening).
See: https://www.pydev.org/manual_adv_remote_debugger.html for details.

Unfortunately I haven't really used Eclipse CDT enough to know how it works
on the C/C++ side, but if you can't find out by searching for it,
presumably someone from CDT can help you out:
https://www.eclipse.org/forums/index.php?t=thread&frm_id=80

Best regards,

Fabio
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/Q2UAOWRERMMH6TH5BKNFYM6SHUBMG2J4/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] What's the rule for issuing line tracing numbers when using a tracing function?

2021-09-16 Thread Fabio Zadrozny
Hi all,

I have a weird case where I thought line events would be issued, and yet
they aren't, even though they appear in the bytecode instructions (both
in 3.9 and 3.10).

i.e.:

Given the code:

def check_backtrack(x):  # line 1
    if not (x == 'a'  # line 2
            or x == 'c'):  # line 3
        pass  # line 4

its dis.dis output is as follows:

  2           0 LOAD_FAST                0 (x)
              2 LOAD_CONST               1 ('a')
              4 COMPARE_OP               2 (==)
              6 POP_JUMP_IF_TRUE        12 (to 24)

  3           8 LOAD_FAST                0 (x)
             10 LOAD_CONST               2 ('c')
             12 COMPARE_OP               2 (==)

  2          14 POP_JUMP_IF_TRUE        10 (to 20)

  4          16 LOAD_CONST               0 (None)
             18 RETURN_VALUE

  2     >>   20 LOAD_CONST               0 (None)
             22 RETURN_VALUE
        >>   24 LOAD_CONST               0 (None)
             26 RETURN_VALUE

So, by just following the instructions/line numbers, I'd say that when the
instruction:

  2          14 POP_JUMP_IF_TRUE        10 (to 20)

is executed, a line event would take place; yet this isn't true -- but if
that offset is changed to include more instructions, then such a line event
is issued.

i.e.: something as:

def tracer(frame, event, arg):
    print(frame.f_lineno, event)
    return tracer

import sys
sys.settrace(tracer)
check_backtrack('f')

prints:

1 call
2 line
3 line
4 line
4 return

when I expected it to print:

1 call
2 line
3 line
2 line |<-- this is not being issued
4 line
4 return

So, I have some questions related to this:

Does anyone know why this happens?
What's the rule to identify this?
Why is that line number assigned to that instruction (i.e.: it seems a bit
odd that this is set up like that in the first place)?

Thanks,

Fabio

p.s.: I'm asking because in a debugger which changes bytecode I want to
keep the same semantics, and it appears that if I add more bytecode at that
instruction offset, those semantics aren't kept (but I don't really know
what the semantics to keep are here, since it seems like that instruction
should issue a line event even though it doesn't).
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/CP2PTFCMTK57KM3M3DLJNWGO66R5RVPB/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: What's the rule for issuing line tracing numbers when using a tracing function?

2021-09-17 Thread Fabio Zadrozny
On Thu, Sep 16, 2021 at 11:49 AM, Fabio Zadrozny wrote:

> Hi all,
>
> I have a weird case where I thought line events would be issued and yet
> they aren't even though they're in the instructions in the bytecode (both
> in 3.9 and 3.10).
>
> i.e.:
>
> Given the code:
>
> def check_backtrack(x):  # line 1
>     if not (x == 'a'  # line 2
>             or x == 'c'):  # line 3
>         pass  # line 4
>
> it has dis.dis such as:
>
>   2           0 LOAD_FAST                0 (x)
>               2 LOAD_CONST               1 ('a')
>               4 COMPARE_OP               2 (==)
>               6 POP_JUMP_IF_TRUE        12 (to 24)
>
>   3           8 LOAD_FAST                0 (x)
>              10 LOAD_CONST               2 ('c')
>              12 COMPARE_OP               2 (==)
>
>   2          14 POP_JUMP_IF_TRUE        10 (to 20)
>
>   4          16 LOAD_CONST               0 (None)
>              18 RETURN_VALUE
>
>   2     >>   20 LOAD_CONST               0 (None)
>              22 RETURN_VALUE
>         >>   24 LOAD_CONST               0 (None)
>              26 RETURN_VALUE
>
> So, by just following the instructions/line numbers, I'd say that when the
> instruction:
>
>   2          14 POP_JUMP_IF_TRUE        10 (to 20)
>
> is executed, a line event would take place, yet, this isn't true, but if
> that offset is changed to include more instructions then such a line event
> is issued.
>
> i.e.: something as:
>
> def tracer(frame, event, arg):
>     print(frame.f_lineno, event)
>     return tracer
>
> import sys
> sys.settrace(tracer)
> check_backtrack('f')
>
> prints:
>
> 1 call
> 2 line
> 3 line
> 4 line
> 4 return
>
> when I expected it to print:
>
> 1 call
> 2 line
> 3 line
> 2 line |<-- this is not being issued
> 4 line
> 4 return
>
> So, I have some questions related to this:
>
> Does anyone know why this happens?
> What's the rule to identify this?
> Why is that line number assigned to that instruction (i.e.: it seems a bit
> odd that this is set up like that in the first place)?
>
> Thanks,
>
> Fabio
>
> p.s.: I'm asking because in a debugger which changes bytecode I want to
> keep the same semantics and it appears that if I add more bytecode at that
> instruction offset, those semantics aren't kept (but I don't really know
> what are the semantics to keep here since it seems like that instruction
> should issue a line event even though it doesn't).
>

Answering my own question after investigating:

It boils down to the way that ceval.c does prediction of bytecodes, which
makes it miss the line event.

i.e.: the compare is something as:

TARGET(COMPARE_OP): {
    assert(oparg <= Py_GE);
    PyObject *right = POP();
    PyObject *left = TOP();
    PyObject *res = PyObject_RichCompare(left, right, oparg);
    SET_TOP(res);
    Py_DECREF(left);
    Py_DECREF(right);
    if (res == NULL)
        goto error;
    PREDICT(POP_JUMP_IF_FALSE);
    PREDICT(POP_JUMP_IF_TRUE);
    DISPATCH();
}

Given that, PREDICT makes the "POP_JUMP_IF_FALSE" / "POP_JUMP_IF_TRUE"
line be ignored, and its line becomes merged with the "COMPARE_OP" line in
the tracing; when the bytecode is manipulated this is no longer true, so
those spurious line events start to be generated in the tracing.

So, in the end it boils down to the eval loop not respecting what's
written in the bytecode under some conditions -- in this particular case
that's probably good, as the jump line seems to be off: I'd say it's a bug
in the bytecode generation which is then fixed by a bug in the PREDICT
ignoring that off line, so the bugs nullify each other; but then my
rewriting of the bytecode makes the PREDICT fail, and the issue of having
the line off in the bytecode becomes apparent while tracing.

As a note, this is the 2nd time I've been bitten by PREDICT (
https://mail.python.org/archives/list/python-dev@python.org/message/HYTVMG4YTU3QLRGEE4LMLGFANQPO4PD3/)...


Does someone know if, with the introduction of the new
optimizations/quickening, PREDICT will still be used in 3.11? If there
are 2 interpretation modes now, PREDICT probably makes it hard to have
both do the same things, since the eval loop doesn't really match what the
bytecode says due to the PREDICT.

Thanks,

Fabio
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/W6YZXQAG4GMMXJ6GXLHAYFP4CCMMVEZF/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Parsing f-strings from PEP 498 -- Literal String Interpolation

2016-11-03 Thread Fabio Zadrozny
Hi Python-Dev,

I'm trying to get my head around what's accepted in f-strings --
https://www.python.org/dev/peps/pep-0498/ seems very light on the details
of what it accepts as an expression and how things should actually be
parsed (and the current implementation still doesn't seem to be in a state
for a final release, so I thought asking on python-dev would be a
reasonable option).

I was thinking there'd be some grammar for it (something like
https://docs.python.org/3.6/reference/grammar.html), but all I could find
related to this is a quote saying that f-strings should be something like:

f ' <text> { <expression> <optional !s, !r, or !a> <optional : format specifier> } <text> '

So, given that, is it safe to assume that <expression> would be equal to
the "test" node from the official grammar?

I initially thought it would obviously be, but the PEP says that using a
lambda inside the expression would conflict because of the colon (which
wouldn't happen if a proper grammar was actually used for this parsing, as
there'd be no conflict, as the lambda would properly consume the colon), so,
I guess some pre-parser step takes place to separate the expression to
then be parsed, so, I'm interested in knowing how exactly that should work
when the implementation is finished -- lots of plus points if there's
actually a grammar to back it up :)

Thanks,

Fabio
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Parsing f-strings from PEP 498 -- Literal String Interpolation

2016-11-04 Thread Fabio Zadrozny
Answers inline...

On Fri, Nov 4, 2016 at 5:56 AM, Eric V. Smith  wrote:

> On 11/3/2016 3:06 PM, Fabio Zadrozny wrote:
>
>> Hi Python-Dev,
>>
>> I'm trying to get my head around on what's accepted in f-strings --
>> https://www.python.org/dev/peps/pep-0498/ seems very light on the
>> details on what it does accept as an expression and how things should
>> actually be parsed (and the current implementation still doesn't seem to
>> be in a state for a final release, so, I thought asking on python-dev
>> would be a reasonable option).
>>
>
> In what way do you think the implementation isn't ready for a final
> release?


Well, the cases listed in the docs
(https://hg.python.org/cpython/file/default/Doc/reference/lexical_analysis.rst)
don't work in the latest release (with SyntaxErrors) -- and the bug I
created related to it: http://bugs.python.org/issue28597 was promptly
closed as a duplicate -- so, I assumed (maybe wrongly?) that the parsing
still needs work.


> I was thinking there'd be some grammar for it (something as
>> https://docs.python.org/3.6/reference/grammar.html), but all I could
>> find related to this is a quote saying that f-strings should be
>> something as:
>>
>> f ' <text> { <expression> <optional !s, !r, or !a> <optional : format
>> specifier> } <text> '
>>
>> So, given that, is it safe to assume that <expression> would be equal to
>> the "test" node from the official grammar?
>>
>
> No. There are really three phases here:
>
> 1. The f-string is tokenized as a regular STRING token, like all other
> strings (f-, b-, u-, r-, etc).
> 2. The parser sees that it's an f-string, and breaks it into expression
> and text parts.
> 3. For each expression found, the expression is compiled with
> PyParser_ASTFromString(..., Py_eval_input, ...).
>
> Step 2 is the part that limits what types of expressions are allowed.
> While scanning for the end of an expression, it stops at the first '!',
> ':', or '}' that isn't inside of a string and isn't nested inside of
> parens, braces, and brackets.
>

It'd be nice if at least this description could be added to the PEP (as
all other language implementations and IDEs will have to work the same way
and will probably reference it) -- a grammar example, even if not used,
would be helpful (personally, I think hand-crafted parsers are always worse
in the long run compared to having a proper grammar with a parser, although
I understand that if you're not really used to it, it may be more work to
set it up).
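For the record, the scan described above can be sketched in a few lines of Python (my approximation, not CPython's actual code -- it ignores details such as '!=', escape sequences and triple quotes):

```python
def find_expr_end(s, start):
    """Return the index of the character that ends an f-string
    replacement expression: the first '!', ':' or '}' that is not
    nested inside parens, brackets, braces, or a string literal.
    """
    depth = 0
    i = start
    while i < len(s):
        ch = s[i]
        if ch in "([{":
            depth += 1
        elif ch in ")]}":
            if ch == "}" and depth == 0:
                return i  # end of the replacement field itself
            depth -= 1
        elif ch in "'\"":
            quote = ch
            i += 1
            while i < len(s) and s[i] != quote:
                i += 1  # skip over the nested string literal
        elif ch in "!:" and depth == 0:
            return i  # start of a conversion / format spec
        i += 1
    raise SyntaxError("unterminated f-string expression")
```

E.g. for `"{(lambda x:3)}"` it returns the index of the final `'}'` (the lambda's colon is protected by the parens), while for `"{x!r:>10}"` it stops at the `'!'`.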

Also, I find it a bit troubling that PyParser_ASTFromString is used there
and not just the node which would be related to an expression; I understand
it's probably an easier approach, but in the end you probably have to
filter it and end up just accepting what's beneath the "test" node from the
grammar, no? (i.e.: that's what a lambda body accepts).


> The nesting-tracking is why these work:
> >>> f'{(lambda x:3)}'
> '<function <lambda> at 0x0296E560>'
> >>> f'{(lambda x:3)!s:.20}'
> '<function <lambda> a'
>
> But this doesn't:
> >>> f'{lambda x:3}'
>   File "<fstring>", line 1
> (lambda x)
>  ^
> SyntaxError: unexpected EOF while parsing
>
> Also, backslashes are not allowed anywhere inside of the expression. This
> was a late change right before beta 1 (I think), and differs from the PEP
> and docs. I have an open item to fix them.
>
> I initially thought it would obviously be, but the PEP says that using a
>> lambda inside the expression would conflict because of the colon (which
>> wouldn't happen if a proper grammar was actually used for this parsing
>> as there'd be no conflict as the lambda would properly consume the
>> colon), so, I guess some pre-parser steps takes place to separate the
>> expression to then be parsed, so, I'm interested on knowing how exactly
>> that should work when the implementation is finished -- lots of plus
>> points if there's actually a grammar to back it up :)
>>
>
> I've considered using the grammar and tokenizer to implement f-string
> parsing, but I doubt it will ever happen. It's a lot of work, and
> everything that produced or consumed tokens would have to be aware of it.
> As it stands, if you don't need to look inside of f-strings, you can just
> treat them as regular STRING tokens.
>

Well, I think all language implementations / IDEs (or at least those which
want to give syntax errors) will *have* to look inside f-strings.

Also, you could still have a separate grammar saying how to look inside
f-strings (this would make the lives of other implementors easier) even if
it was a post-processing step as you're doing now.


>
> I hope that helps.
>
> Eric.
>

It does, thank you very much.

Best Regards,

Fabio
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Parsing f-strings from PEP 498 -- Literal String Interpolation

2016-11-04 Thread Fabio Zadrozny
On Fri, Nov 4, 2016 at 3:15 PM, Eric V. Smith  wrote:

> On 11/4/2016 10:50 AM, Fabio Zadrozny wrote:
>
>> In what way do you think the implementation isn't ready for a final
>> release?
>>
>>
>> Well, the cases listed in the docs
>> (https://hg.python.org/cpython/file/default/Doc/reference/lexical_analysis.rst)
>> don't work in the latest release (with SyntaxErrors) -- and the bug I
>> created related to it: http://bugs.python.org/issue28597 was promptly
>> closed as duplicate -- so, I assumed (maybe wrongly?) that the parsing
>> still needs work.
>>
>
> It's not the parsing that needs work, it's the documentation. Those
> examples used to work, but the parser was deliberately changed to not
> support them. There's a long discussion on python-ideas about it, starting
> at https://mail.python.org/pipermail/python-ideas/2016-August/041727.html


Understood ;)


>
>
> ​It'd be nice if at least this description could be added to the PEP (as
>> all other language implementations and IDEs will have to work the same
>> way and will probably reference it) -- a grammar example, even if not
>> used would be helpful (personally, I think hand-crafted parsers are
>> always worse in the long run compared to having a proper grammar with a
>> parser, although I understand that if you're not really used to it, it
>> may be more work to set it up).
>>
>
> I've written a parser generator just to understand how they work, so I'm
> completely sympathetic to this. However, in this case, I don't think it
> would be any easier. I'm basically writing a tokenizer, not an expression
> parser. It's much simpler. The actual parsing is handled by
> PyParser_ASTFromString. And as I state below, you have to also consider the
> parser consumers.
>
> Also, I find it a bit troubling that
>> PyParser_ASTFromString is used
>> there and not just the node which would be related to an expression,
>> although I understand it's probably an easier approach, although in the
>> end you probably have to filter it and end up just accepting what's
>> beneath the "test" from the grammar, no? (i.e.: that's what a lambda
>> body accepts).
>>
>
> Using PyParser_ASTFromString is the easiest possible way to do this. Given
> a string, it returns an AST node. What could be simpler?


I think that for implementation purposes, given the Python infrastructure,
it's fine, but for specification purposes, it's probably incorrect... as I
don't think f-strings should accept:

 f"start {import sys; sys.version_info[0];} end" (i.e.:
PyParser_ASTFromString doesn't just return an expression, it accepts any
valid Python code, even code which can't be used in an f-string).


> ​Well, I think all language implementations / IDEs (or at least those
>> which want to give syntax errors) will *have* to look inside f-strings.
>>
>
> While it's probably true that IDEs (and definitely language
> implementations) will want to parse f-strings, I think there are many more
> code scanners that are not language implementations or IDEs. And by being
> "just" regular strings with a new prefix, it's trivial to get any parser
> that doesn't care about the internal structure to at least see f-strings as
> normal strings.
>
> Also, you could still have a separate grammar saying how to look inside
>> f-strings (this would make the lives of other implementors easier) even
>> if it was a post-processing step as you're doing now.
>>
>
> Yes. I've contemplated exposing the f-string scanner. That's the part that
> returns expressions (as strings) and literal strings. I realize that won't
> help 3.6.


Nice...

As a note, just for the record, my own interest in f-strings is knowing
exactly how they are parsed, in order to provide a preview of PyDev with
syntax highlighting and preliminary support for f-strings (which, at the
very minimum, besides syntax highlighting for the parts of f-strings,
should also show syntax errors inside them).
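For that check, validating a single replacement-field expression the way step 3 of the implementation does can be approximated with `ast.parse` in eval mode (a sketch of mine; `ast.parse(..., mode="eval")` is the Python-level equivalent of PyParser_ASTFromString with Py_eval_input):

```python
import ast

def check_fstring_expr(expr):
    # Validate one f-string replacement-field expression the way the
    # implementation's step 3 does: parse it in "eval" mode.
    # Returns None if valid, else the syntax error message.
    try:
        ast.parse(expr, mode="eval")
        return None
    except SyntaxError as exc:
        return exc.msg
```

So `(lambda x: 3)` parses fine, while statements such as `import sys` are rejected, matching what an IDE should flag inside a replacement field.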

Cheers,

Fabio
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Parsing f-strings from PEP 498 -- Literal String Interpolation

2016-11-09 Thread Fabio Zadrozny
On Sat, Nov 5, 2016 at 10:36 AM, Nick Coghlan  wrote:

> On 5 November 2016 at 04:03, Fabio Zadrozny  wrote:
> > On Fri, Nov 4, 2016 at 3:15 PM, Eric V. Smith 
> wrote:
> >> Using PyParser_ASTFromString is the easiest possible way to do this.
> Given
> >> a string, it returns an AST node. What could be simpler?
> >
> >
> > I think that for implementation purposes, given the python
> infrastructure,
> > it's fine, but for specification purposes, probably incorrect... As I
> don't
> > think f-strings should accept:
> >
> >  f"start {import sys; sys.version_info[0];} end" (i.e.:
> > PyParser_ASTFromString doesn't just return an expression, it accepts any
> > valid Python code, even code which can't be used in an f-string).
>
> f-strings use the "eval" parsing mode, which starts from the
> "eval_input" node in the grammar (which is only a couple of nodes
> higher than 'test', allowing tuples via 'testlist' as well as trailing
> newlines and EOF):
>
> >>> ast.parse("import sys; sys.version_info[0];", mode="eval")
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "/usr/lib64/python3.5/ast.py", line 35, in parse
>     return compile(source, filename, mode, PyCF_ONLY_AST)
>   File "<unknown>", line 1
>     import sys; sys.version_info[0];
> ^
> SyntaxError: invalid syntax
>
> You have to use "exec" mode to get the parser to allow statements,
> which is why f-strings don't do that:
>
> >>> ast.dump(ast.parse("import sys; sys.version_info[0];",
> mode="exec"))
> "Module(body=[Import(names=[alias(name='sys', asname=None)]),
> Expr(value=Subscript(value=Attribute(value=Name(id='sys', ctx=Load()),
> attr='version_info', ctx=Load()), slice=Index(value=Num(n=0)),
> ctx=Load()))])"
>
> The unique aspect for f-strings that means they don't permit some
> otherwise valid Python expressions is that it also does the initial
> pre-tokenisation based on:
>
> 1. Look for an opening '{'
> 2. Look for a closing '!', ':' or '}'  accounting for balanced string
> quotes, parentheses, brackets and braces
>
> Ignoring the surrounding quotes, and using the `atom` node from
> Python's grammar to represent the nesting tracking, and TEXT to stand
> in for arbitrary text, it's something akin to:
>
> fstring: (TEXT ['{' maybe_pyexpr ('!' | ':' | '}')])+
> maybe_pyexpr: (atom | TEXT)+
>
> That isn't quite right, since it doesn't properly account for brace
> nesting, but it gives the general idea - there's an initial really
> simple tokenising pass that picks out the potential Python
> expressions, and then those are run through the AST parser's
> equivalent of eval().
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>


Hi Nick and Eric,

Just wanted to say thanks for the feedback, and to point to a grammar I
ended up doing on my side (in JavaCC) -- in case someone else decides to do
a formal grammar later on, it can probably be used as a reference (it
shouldn't be hard to convert it to a BNF grammar):

https://github.com/fabioz/Pydev/blob/master/plugins/org.python.pydev.parser/src/org/python/pydev/parser/grammar_fstrings/grammar_fstrings.jjt

Also, as feedback, I found it a bit odd that there can't be any space or
newline between the last format specifier and the '}'

I.e.:

f'''{
dict(
  a = 10
)
!r
}
'''

is not valid, whereas
f'''{
dict(
  a = 10
)
!r}
'''​
is valid -- as a note, this means my grammar has a bug, as both versions
are accepted -- and I currently don't care enough about that divergence
from the implementation to fix it ;)

Cheers,

Fabio
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Deprecate `from __future__ import unicode_literals`?

2016-12-20 Thread Fabio Zadrozny
On Mon, Dec 19, 2016 at 11:50 PM, Chris Barker 
wrote:

> Please don't get rid of unicode_literals -- I don't even think we should
> deprecate it as a recommendation or discourage it.
>
> Maybe a note or two added as to where issues may arise would be good.
>
> I've found importing unicode_literals to be an excellent way to write
> py2/3 code. And I have never found a problem.
>
> I'm also hoping that my py2/3 compatible code will someday be py3 only --
> and then I'll be really glad that I don't have all those u" all over the
> place.
>
> Also it does "automagically" do the right thing with, for instance passing
> a literal to the file handling functions in the os module -- so that's
> pretty nice.
>
> The number of times you need to add a b"" is FAR fewer than "text" string
> literals. Let's keep it.
>
> -CHB
>
>
Same thing here... also, it helps to code with the same mindset as Python 3,
where everything is unicode by default -- and yes, there are problems if
you use unicode in an API that accepts bytes on Python 2, but then, you
can also have the same issues on Python 3 -- you need to know and keep
track of bytes vs unicode everywhere (although they're syntactically
similar to declare, they're not the same thing), and I find that there are
fewer places where you need to put b'' than u'' (if you code with unicode in
mind in Python 2)...

In an ideal world, Python 2 would actually be improved to accept unicode
in the places where Python 3 accepts unicode (such as subprocess.Popen,
etc.) to make it easier to port applications that actually do the "right"
thing on Python 2 to Python 3.
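As a small illustration of that mindset (a hedged Python 3 sketch -- on Python 2 the same behavior for literals requires the unicode_literals import):

```python
# With unicode_literals on Python 2 -- and always on Python 3 -- plain
# literals are text, and bytes have to be requested explicitly with b''.
text = "café"
data = text.encode("utf-8")
assert isinstance(text, str) and isinstance(data, bytes)

# Mixing the two fails loudly on Python 3, so the bytes vs unicode
# bookkeeping has to happen either way.
try:
    _ = text + data
except TypeError:
    print("text + bytes raises TypeError")
```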

Best Regards,

Fabio


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Fabio Zadrozny
Hi Barry,

I think it's a nice idea.

Related to the name: in the Windows C++ API there's "DebugBreak":
https://msdn.microsoft.com/en-us/library/windows/desktop/ms679297(v=vs.85).aspx,
which I think is a better name (so, it'd be debug_break for Python -- I
think it's better than plain breakpoint(), and it wouldn't clash the way
debug() does).

For the PyDev.Debugger (https://github.com/fabioz/PyDev.Debugger), which is
the one used by PyDev & PyCharm, I think it would also work.

For instance, for adding the debugger in PyDev, there's a template
completion that'll add the debugger to the PYTHONPATH and start the remote
debugger (same as pdb.set_trace()):

i.e.: the 'pydevd' template expands to something as:

import sys;sys.path.append(r'path/to/ide/shipped_debugger/pysrc')
import pydevd;pydevd.settrace()

I think I could change the hook on a custom sitecustomize (there's already
one in place in PyDev) so that the debug_break() would actually read some
env var to do that work (and provide some utility for users to pre-setup it
when not launching from inside the IDE).

Still, there may be other settings that the user needs to pass to
settrace() when doing a remote debug session -- i.e.: things such as the
host, port to connect, etc -- see:
https://github.com/fabioz/PyDev.Debugger/blob/master/pydevd.py#L1121, so,
maybe the debug_break() method should accept keyword arguments to pass
along to support other backends?
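As the accepted PEP 553 ended up specifying, breakpoint() does forward *args/**kwargs to sys.breakpointhook(), which is enough for the remote-debugging settings above. A minimal sketch (the host/port names are just illustrative, not an actual pydevd contract):

```python
import sys

def remote_debug_hook(*args, **kwargs):
    # Stand-in for something like pydevd.settrace(host=..., port=...);
    # a real backend would start the remote debugging session here.
    remote_debug_hook.received = (args, kwargs)

# Install the custom hook; breakpoint() (Python 3.7+) forwards its
# arguments to whatever sys.breakpointhook is installed.
sys.breakpointhook = remote_debug_hook
breakpoint(host="127.0.0.1", port=5678)
print(remote_debug_hook.received)
```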

Cheers,

Fabio

On Wed, Sep 6, 2017 at 1:44 PM, Barry Warsaw  wrote:

> On Sep 6, 2017, at 07:46, Guido van Rossum  wrote:
> >
> > IIRC they indeed insinuate debug() into the builtins. My suggestion is
> also breakpoint().
>
> breakpoint() it is then!  I’ll rename the sys hooks too, but keep the
> naming scheme of the existing sys hooks.
>
> Cheers,
> -Barry
>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe: https://mail.python.org/mailman/options/python-dev/
> fabiofz%40gmail.com
>
>


[Python-Dev] Compiling of ast.Module in Python 3.10 and co_firstlineno behavior

2022-02-17 Thread Fabio Zadrozny
Hi all,

I'm stumbling with an issue where the co_firstlineno behavior changed from
Python 3.9 to Python 3.10 and I was wondering if this was intentional or
not.

i.e.: whenever code is compiled in Python 3.10, `code.co_firstlineno` is now
always 1, whereas previously it was equal to the line of the first statement.

Also, does anyone know if there is any way to restore the old behavior in
Python 3.10? I tried setting the `module.lineno` but it didn't really make
any difference...

As an example, given the code below:

import dis
import sys
from ast import Module, PyCF_ONLY_AST

source = '''
print(1)

print(2)
'''

initial_module = compile(source, '', 'exec', PyCF_ONLY_AST, 1)

print(sys.version)

for i in range(2):
    module = Module([initial_module.body[i]], [])
    module_code = compile(module, '', 'exec')
    print(' --> First lineno:', module_code.co_firstlineno)
    print(' --> Line starts :', list(lineno for offset, lineno in
                                     dis.findlinestarts(module_code)))
    print(' dis ---')
    dis.dis(module_code)



I have the following outputs for Python 3.9/Python 3.10:

3.9.6 (default, Jul 30 2021, 11:42:22) [MSC v.1916 64 bit (AMD64)]
 --> First lineno: 2
 --> Line starts : [2]
 dis ---
  2   0 LOAD_NAME0 (print)
  2 LOAD_CONST   0 (1)
  4 CALL_FUNCTION1
  6 POP_TOP
  8 LOAD_CONST   1 (None)
 10 RETURN_VALUE
 --> First lineno: 4
 --> Line starts : [4]
 dis ---
  4   0 LOAD_NAME0 (print)
  2 LOAD_CONST   0 (2)
  4 CALL_FUNCTION1
  6 POP_TOP
  8 LOAD_CONST   1 (None)
 10 RETURN_VALUE



3.10.0 (tags/v3.10.0:b494f59, Oct  4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)]
 --> First lineno: 1
 --> Line starts : [2]
 dis ---
  2   0 LOAD_NAME0 (print)
  2 LOAD_CONST   0 (1)
  4 CALL_FUNCTION1
  6 POP_TOP
  8 LOAD_CONST   1 (None)
 10 RETURN_VALUE
 --> First lineno: 1
 --> Line starts : [4]
 dis ---
  4   0 LOAD_NAME0 (print)
  2 LOAD_CONST   0 (2)
  4 CALL_FUNCTION1
  6 POP_TOP
  8 LOAD_CONST   1 (None)
 10 RETURN_VALUE

Thanks,

Fabio
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/VXW3TVHVYOMXDQIQBJNZ4BTLXFT4EPQZ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Compiling of ast.Module in Python 3.10 and co_firstlineno behavior

2022-02-17 Thread Fabio Zadrozny
On Thu, Feb 17, 2022 at 4:05 PM Mark Shannon 
wrote:

> Hi Fabio,
>
> This happened as part of implementing PEP 626.
> The previous behavior isn't very robust w.r.t doc strings and
> compiler optimizations.
>
> OOI, why would you want to revert to the old behavior?
>
>
Hi Mark,

The issue I'm facing is that ipython uses an approach of obtaining the ast
for a function to be executed and then it goes on node by node executing it.

When running in the debugger, the debugger caches some information based on
(co_firstlineno, co_name, co_filename) to have information saved across
multiple calls to the same function, which works in general because each
function in a given python file would have its own co_firstlineno, but in
this specific case here it gets a single function and then recompiles it
statement by statement -- so, it'll have the same co_filename and the same
co_name, but the co_firstlineno would be different (because each statement
resides in a different line) -- with Python 3.10 this assumption fails as
even the co_firstlineno will be the same...

You can see the actual issues at:
https://github.com/microsoft/vscode-jupyter/issues/8803 /
https://github.com/ipython/ipykernel/issues/841 /
https://github.com/microsoft/debugpy/issues/844

After tinkering a bit, it seems it's possible to create a new code object
based on an existing code object with `code.replace` (re-assembling the
co_lnotab/co_firstlineno), so, I'm going to propose that as a fix to
ipython, but I found it really strange that this did change in Python 3.10
in the first place as the old behavior seemed reasonable for me (i.e.: with
the new behavior it's a bit strange that the user is compiling something
with a single statement on line 99 and yet the resulting code object will
have the co_firstlineno == 1).

-- note: I also couldn't find any mention of this in the changelog, so, I
thought this could've happened by mistake.
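A minimal sketch of the `code.replace` workaround described above (note the caveat: co_lnotab/co_linetable is stored relative to co_firstlineno, so a complete fix would also have to re-base the line table):

```python
import ast

source = "print(1)\n\nprint(2)\n"
tree = compile(source, "<string>", "exec", ast.PyCF_ONLY_AST)
stmt = tree.body[1]  # the second print, which lives on line 3
single = ast.Module(body=[stmt], type_ignores=[])
code = compile(single, "<string>", "exec")

# On 3.10+ this code object reports co_firstlineno == 1; rebuild a code
# object whose co_firstlineno matches the statement's real line.
fixed = code.replace(co_firstlineno=stmt.lineno)
print(fixed.co_firstlineno)  # → 3
```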

Best regards,

Fabio


[Python-Dev] Re: Compiling of ast.Module in Python 3.10 and co_firstlineno behavior

2022-02-18 Thread Fabio Zadrozny
On Thu, Feb 17, 2022 at 5:55 PM Gabriele 
wrote:

> Hi Fabio
>
> Does the actual function object get re-created as well during the
> recompilation process that you have described? Perhaps it might help
> to note that the __code__ attribute of a function object f can be
> mutated and that f is hashable?
>

Thank you for the reminder... Right now, the way it works in ipython, the
code object is really recreated and then directly executed (which kind of
makes sense, since it's expected that cells change for re-evaluation).

I had previously considered caching in the debugger using the code object,
but as code objects can be created during the regular execution, the
debugger could end up creating a huge leak.
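For completeness, the mutation Gabriele suggested can be sketched like this -- function objects are hashable by identity and their __code__ is writable, so a function could in principle serve as a stable cache key across recompilations:

```python
def cell():
    return "first version"

def recompiled():
    return "second version"

debugger_cache = {cell: "cached debugger state"}  # keyed by the function

# Swapping the code in place changes what runs but keeps the function's
# identity (and hash), so the cache entry stays valid.
cell.__code__ = recompiled.__code__
print(cell())                 # → second version
print(debugger_cache[cell])   # → cached debugger state
```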

Best regards,

Fabio


[Python-Dev] Re: C API: Move PEP 523 "Adding a frame evaluation API to CPython" private C API to the internal C API

2022-03-24 Thread Fabio Zadrozny
>
> PEP 523 API added more private functions for code objects:
>
> * _PyEval_RequestCodeExtraIndex()
> * _PyCode_GetExtra()
> * _PyCode_SetExtra()
>
> The _PyEval_RequestCodeExtraIndex() function seems to be used by the
> pydevd debugger. The two others seem to be unused in the wild. I'm not
> sure if these ones should be moved to the internal C API. They can be
> left unchanged, since they don't use a type only defined by the
> internal C API.
>
Just to note, the pydevd/debugpy debuggers actually use all of those APIs.

i.e.:

https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L187
https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L232
https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L311

The debugger already has workarounds because of changes to the evaluation
API over time (see:
https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L491)
and I know 3.11 won't be different.

I'm ok with changes as I understand that this is a special API -- as long
as there's still a way to use it and get the information needed (the
debugger already goes through many hoops because it needs to use many
internals of CPython -- in every new release it's a **really** big task to
update to the latest version, as almost everything that the debugger relies
on to make debugging fast changes across versions, and I never really know
if it'll be possible to support it until I really try to do the port -- I
appreciate having fewer things in a public API so it's easier to have
extensions work in other interpreters/not recompile on newer versions,
but please keep it possible to use private APIs which provide the same
access that CPython has internally, for special cases such
as the debugger).

Maybe later on that PEP from Mark which proposes a better debugger API could
alleviate that (but until then, if possible, I'd appreciate it if there's
some effort not to break things unless really needed -- ideally with
instructions on how to port).

Anyways, to wrap up, the debugger already needs to be built with
`Py_BUILD_CORE_MODULE=1`, so, I guess having it in a private API
(as long as it's still accessible in that case) is probably not a big issue
for the debugger, and having setters/getters to set it instead of relying on
`state.interp.eval_frame` seems good to me.

Cheers,

Fabio


[Python-Dev] Re: C API: Move PEP 523 "Adding a frame evaluation API to CPython" private C API to the internal C API

2022-03-24 Thread Fabio Zadrozny
On Thu, Mar 24, 2022 at 3:39 PM Fabio Zadrozny 
wrote:

> PEP 523 API added more private functions for code objects:
>>
>> * _PyEval_RequestCodeExtraIndex()
>> * _PyCode_GetExtra()
>> * _PyCode_SetExtra()
>>
>> The _PyEval_RequestCodeExtraIndex() function seems to be used by the
>> pydevd debugger. The two others seem to be unused in the wild. I'm not
>> sure if these ones should be moved to the internal C API. They can be
>> left unchanged, since they don't use a type only defined by the
>> internal C API.
>>
> Just to note, the pydevd/debugpy debuggers actually uses all of those APIs.
>
> i.e.:
>
>
> https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L187
>
> https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L232
>
> https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L311
>
> The debugger already has workarounds because of changes to evaluation api
> over time (see:
> https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L491)
> and I know 3.11 won't be different.
>
> I'm ok with changes as I understand that this is a special API -- as long
> as there's still a way to use it and get the information needed (the
> debugger already goes through many hops because it needs to use many
> internals of CPython -- in every new release it's a **really** big task to
> update to the latest version as almost everything that the debugger relies
> to make debugging fast changes across versions and I never really know if
> it'll be possible to support it until I really try to do the port -- I
> appreciate having less things in a public API so it's easier to have
> extensions work in other interpreters/not recompiling on newer versions,
> but please keep it possible to use private APIs which provides the same
> access that CPython has to access things internally for special cases such
> as the debugger).
>
> Maybe later on that PEP from mark which allows a better debugger API could
> alleviate that (but until then, if possible I appreciate it if there's some
> effort not to break things unless really needed -- ideally with
> instructions on how to port).
>
> Anyways, to wrap up, the debugger already needs to be built with
> `Py_BUILD_CORE_MODULE=1` anyways, so, I guess having it in a private API
> (as long as it's still accessible in that case) is probably not a big issue
> for the debugger and having setters/getters to set it instead of relying on
> `state.interp.eval_frame` seems good to me.
>
> Cheers,
>
> Fabio
>
>

I think the main issue here is the compatibility across the same version
though... is it possible to have some kind of guarantee on private APIs
that something won't change across micro-releases?

I.e.: having the frame evaluation function change across major releases and
having it be reworked seems reasonable, but having the frame evaluation
function change across micro-releases wouldn't be.

So, I'm ok with pushing things to the internal API, but then I still would
like guarantees about the compatibility of that API in the same major
release (otherwise those setters/getters/frame evaluation should probably
remain in the public API if the related structure was moved to the internal
API).

Cheers,

Fabio


[Python-Dev] Re: C API: Move PEP 523 "Adding a frame evaluation API to CPython" private C API to the internal C API

2022-04-22 Thread Fabio Zadrozny
On Fri, Apr 22, 2022 at 9:02 AM Petr Viktorin 
wrote:

> Hello Fabio,
> Let's talk a bit about which API should, exactly, be guaranteed to not
> change across minor releases.
> So far it looks like:
> - PyEval_RequestCodeExtraIndex
> - PyCode_GetExtra
> - PyCode_SetExtra
> - PyFrameEvalFunction
> - PyInterpreterState_GetEvalFrameFunc
> - PyInterpreterState_SetEvalFrameFunc
>
> Do any more come to mind?
>
> The issue with this set is that in 3.11, _PyFrameEvalFunction changes
> its signature to take _PyInterpreterFrame rather than PyFrameObject.
> Exposing _PyInterpreterFrame would be quite problematic. For example,
> since it's not a PyObject, it has its own lifetime management that's
> controlled by the interpreter itself. And it includes several
> pointers whose lifetime and semantics also isn't guaranteed (they
> might be borrowed, cached or filled on demand). I don't think we can
> make any guarantees on these, so the info needs to be accessed using
> getter functions.
>
> There is the function _PyFrame_GetFrameObject, which returns a
> PyFrameObject.
> I think it would be best to only expose _PyInterpreterFrame as an
> opaque structure, and expose PyFrame_GetFrameObject so debuggers can
> get a PyFrameObject from it.
> Does that sound reasonable?
>


Humm, now I'm a bit worried... the approach the debugger is using gets the
PyFrameObject that's about to be executed and changes the
PyFrameObject.f_code just before the execution so that the new code is
executed instead.

From what you're saying, the PyFrameObject isn't really used anymore
(apparently it's substituted by a _PyInterpreterFrame?)... in this case,
will this approach still let the debugger patch the code object in the
frame before it's actually executed?

-- i.e.: the debugger changes the state.interp.eval_frame to its own custom
evaluation function, but _PyEval_EvalFrameDefault is still what ends up
being called afterwards (it works more as a hook to change the
PyFrameObject.f_code prior to execution than as an alternate interpreter).



On Thu, Mar 24, 2022 at 8:13 PM Fabio Zadrozny  wrote:
> >
> >
> > On Thu, Mar 24, 2022 at 3:39 PM Fabio Zadrozny 
> wrote:
> >>>
> >>> PEP 523 API added more private functions for code objects:
> >>>
> >>> * _PyEval_RequestCodeExtraIndex()
> >>> * _PyCode_GetExtra()
> >>> * _PyCode_SetExtra()
> >>>
> >>> The _PyEval_RequestCodeExtraIndex() function seems to be used by the
> >>> pydevd debugger. The two others seem to be unused in the wild. I'm not
> >>> sure if these ones should be moved to the internal C API. They can be
> >>> left unchanged, since they don't use a type only defined by the
> >>> internal C API.
> >>
> >> Just to note, the pydevd/debugpy debuggers actually uses all of those
> APIs.
> >>
> >> i.e.:
> >>
> >>
> https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L187
> >>
> https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L232
> >>
> https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L311
> >>
> >> The debugger already has workarounds because of changes to evaluation
> api over time (see:
> https://github.com/fabioz/PyDev.Debugger/blob/main/_pydevd_frame_eval/pydevd_frame_evaluator.template.pyx#L491)
> and I know 3.11 won't be different.
> >>
> >> I'm ok with changes as I understand that this is a special API -- as
> long as there's still a way to use it and get the information needed (the
> debugger already goes through many hops because it needs to use many
> internals of CPython -- in every new release it's a **really** big task to
> update to the latest version as almost everything that the debugger relies
> to make debugging fast changes across versions and I never really know if
> it'll be possible to support it until I really try to do the port -- I
> appreciate having less things in a public API so it's easier to have
> extensions work in other interpreters/not recompiling on newer versions,
> but please keep it possible to use private APIs which provides the same
> access that CPython has to access things internally for special cases such
> as the debugger).
> >>
> >> Maybe later on that PEP from mark which allows a better debugger API
> could alleviate that (but until then, if possible I appreciate it if
> there's some effort not to break things unless really need

[Python-Dev] Setting frame evaluation function (i.e.:PyInterpreterState.frame_eval)

2019-10-16 Thread Fabio Zadrozny
Hi All,

I'm trying to upgrade the pydevd debugger to the latest version of CPython
(3.8), however I'm having some issues being able to access
`PyInterpreterState.eval_frame` when compiling, so, I'd like to ask if
someone can point me in the right direction.

What I'm trying to do is compile something like:

#include "pystate.h"
...
PyThreadState *ts = PyThreadState_Get();
PyInterpreterState *interp = ts->interp;
interp->eval_frame = my_frame_eval_func;

and the error I'm having is:

_pydevd_frame_eval/pydevd_frame_evaluator.c(7534): error C2037: left of
'eval_frame' specifies undefined struct/union '_is'

So, it seems that now "pystate.h" only has a forward reference to "_is" and
a typedef from "PyInterpreterState" to "_is", and "_is" is defined in
"include/internal/pycore_pystate.h", which it doesn't seem like I should be
including (in fact, if I try to include it I get an error saying that I
would need to define Py_BUILD_CORE)... so, can someone point me to the
proper way to set the frame evaluation function on CPython 3.8?

Thanks,

Fabio


[Python-Dev] Re: Setting frame evaluation function (i.e.:PyInterpreterState.frame_eval)

2019-10-16 Thread Fabio Zadrozny
On Wed, Oct 16, 2019 at 11:14 AM Victor Stinner  wrote:

> Hi Fabio,
>
> Right, the PyInterpreterState structure is now opaque. You need to
> include pycore_pystate.h which requires to define the
> Py_BUILD_CORE_MODULE macro. That's the internal C API which "should
> not be used", but it's ok to use it for very specific use cases, like
> debuggers.
>
> Maybe we should provide an interpreter method to set
> interp->eval_frame, to avoid to pull the annoying internal C API.
>
>
Hi Victor,

Thank you very much, that did the trick!

I agree it'd be nicer to have some method to set up the frame evaluation
function instead of pulling up the internal C API, but it's also *very*
specialized, so, I'm not sure how much it's worth it (I'm happy with just
being able to do it, even if it's not very straightforward).

Best Regards,

Fabio

On Wed, Oct 16, 2019 at 3:47 PM Fabio Zadrozny  wrote:
> >
> > Hi All,
> >
> > I'm trying to upgrade the pydevd debugger to the latest version of
> CPython (3.8), however I'm having some issues being able to access
> `PyInterpreterState.eval_frame` when compiling, so, I'd like to ask if
> someone can point me in the right direction.
> >
> > What I'm trying to do is compile something like:
> >
> > #include "pystate.h"
> > ...
> > PyThreadState *ts = PyThreadState_Get();
> > PyInterpreterState *interp = ts->interp;
> > interp->eval_frame = my_frame_eval_func;
> >
> > and the error I'm having is:
> >
> > _pydevd_frame_eval/pydevd_frame_evaluator.c(7534): error C2037: left of
> 'eval_frame' specifies undefined struct/union '_is'
> >
> > So, it seems that now "pystate.h" only has a forward reference to "_is"
> and a typedef from " PyInterpreterState" to "_is" and "_is" is defined in
> "include/internal/pycore_pystate.h", which doesn't seem like I should be
> including (in fact, if I try to include it I get an error saying that I
> would need to define Py_BUILD_CORE)... so, can someone point me to the
> proper way to set the frame evaluation function on CPython 3.8?
> >
> > Thanks,
> >
> > Fabio
> > ___
> > Python-Dev mailing list -- python-dev@python.org
> > To unsubscribe send an email to python-dev-le...@python.org
> > https://mail.python.org/mailman3/lists/python-dev.python.org/
> > Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/BJAO35DB4KDC2EWV553YEU3VLRNYJSQ6/
> > Code of Conduct: http://python.org/psf/codeofconduct/
>
>
>
> --
> Night gathers, and now my watch begins. It shall not end until my death.
>


[Python-Dev] Re: Setting frame evaluation function (i.e.:PyInterpreterState.frame_eval)

2019-10-17 Thread Fabio Zadrozny
On Wed, Oct 16, 2019 at 1:05 PM Victor Stinner  wrote:

> Would you mind to open an issue at bugs.python.org? You can put me
> ("vstinner") in the nosy list.
>
Done: https://bugs.python.org/issue38500

>
> Victor
>
> On Wed, Oct 16, 2019 at 5:07 PM Fabio Zadrozny  wrote:
> >
> >
> > On Wed, Oct 16, 2019 at 11:14 AM Victor Stinner 
> wrote:
> >>
> >> Hi Fabio,
> >>
> >> Right, the PyInterpreterState structure is now opaque. You need to
> >> include pycore_pystate.h which requires to define the
> >> Py_BUILD_CORE_MODULE macro. That's the internal C API which "should
> >> not be used", but it's ok to use it for very specific use cases, like
> >> debuggers.
> >>
> >> Maybe we should provide an interpreter method to set
> >> interp->eval_frame, to avoid to pull the annoying internal C API.
> >>
> >
> > Hi Victor,
> >
> > Thank you very much, that did the trick!
> >
> > I agree it'd be nicer to have some method to set up the frame evaluation
> function instead of pulling up the internal C API, but it's also *very*
> specialized, so, I'm not sure how much it's worth it (I'm happy with just
> being able to do it, even if it's not very straightforward).
> >
> > Best Regards,
> >
> > Fabio
> >
> >> On Wed, Oct 16, 2019 at 3:47 PM Fabio Zadrozny 
> wrote:
> >> >
> >> > Hi All,
> >> >
> >> > I'm trying to upgrade the pydevd debugger to the latest version of
> CPython (3.8), however I'm having some issues being able to access
> `PyInterpreterState.eval_frame` when compiling, so, I'd like to ask if
> someone can point me in the right direction.
> >> >
> >> > What I'm trying to do is compile something like:
> >> >
> >> > #include "pystate.h"
> >> > ...
> >> > PyThreadState *ts = PyThreadState_Get();
> >> > PyInterpreterState *interp = ts->interp;
> >> > interp->eval_frame = my_frame_eval_func;
> >> >
> >> > and the error I'm having is:
> >> >
> >> > _pydevd_frame_eval/pydevd_frame_evaluator.c(7534): error C2037: left
> of 'eval_frame' specifies undefined struct/union '_is'
> >> >
> >> > So, it seems that now "pystate.h" only has a forward reference to
> "_is" and a typedef from " PyInterpreterState" to "_is" and "_is" is
> defined in "include/internal/pycore_pystate.h", which doesn't seem like I
> should be including (in fact, if I try to include it I get an error saying
> that I would need to define Py_BUILD_CORE)... so, can someone point me to
> the proper way to set the frame evaluation function on CPython 3.8?
> >> >
> >> > Thanks,
> >> >
> >> > Fabio
> >> > ___
> >> > Python-Dev mailing list -- python-dev@python.org
> >> > To unsubscribe send an email to python-dev-le...@python.org
> >> > https://mail.python.org/mailman3/lists/python-dev.python.org/
> >> > Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/BJAO35DB4KDC2EWV553YEU3VLRNYJSQ6/
> >> > Code of Conduct: http://python.org/psf/codeofconduct/
> >>
> >>
> >>
> >> --
> >> Night gathers, and now my watch begins. It shall not end until my death.
>
>
>
> --
> Night gathers, and now my watch begins. It shall not end until my death.
>


[Python-Dev] Re: Mixed Python/C debugging

2019-12-02 Thread Fabio Zadrozny
Hi Skip,

I just wanted to note that what I usually do in this case is to have 2
debuggers attached.

i.e.: start one any way you want and then attach from the other
debugger -- in my case, as I'm usually on the Python side, I usually start
the Python debugger and then attach from the C++ IDE, but you can
probably do it the other way around too :)

On Sun, Dec 1, 2019 at 1:57 PM Skip Montanaro 
wrote:

> Having tried comp.lang.python with no response, I turn here...
>
> After at least ten years away from Python's run-time interpreter &
> byte code compiler, I'm getting set to familiarize myself with that
> again. This will, I think, entail debugging a mixed Python/C
> environment. I'm an Emacs user and am aware that GDB since 7.0 has
> support for debugging at the Python code level. Is Emacs+GDB my best
> bet? Are there any Python IDEs which support C-level breakpoints and
> debugging?
>
> Thanks,
>
> Skip
> ___
> Python-Dev mailing list -- python-dev@python.org
> To unsubscribe send an email to python-dev-le...@python.org
> https://mail.python.org/mailman3/lists/python-dev.python.org/
> Message archived at
> https://mail.python.org/archives/list/python-dev@python.org/message/L2KBZM64MYPXIITN4UU3X6L4PZS2YRTB/
> Code of Conduct: http://python.org/psf/codeofconduct/
>


[Python-Dev] Re: Debug C++

2020-03-31 Thread Fabio Zadrozny
If you're on Windows, another option would be doing a remote attach from
Visual Studio C++ -- i.e., you can start the program under the Python
debugger and then do an attach to process from Visual Studio C++.

On Mon, Mar 30, 2020 at 9:07 PM Rhodri James  wrote:

> On 30/03/2020 23:20, Leandro Müller wrote:
> > Hi
> > Are there any away to debug C module during python debug?
>
> In a word, gdb.
>
> I've been doing this quite a lot lately.  The trick is to start Python
> up under gdb, set a pending breakpoint in your C module, then carry on
> as normal.  For example:
>
>
> rhodri@Wildebeest:~/hub_module$ gdb python3
> GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
> Copyright (C) 2018 Free Software Foundation, Inc.
>
> [...large amounts of boilerplate omitted for brevity...]
>
> Reading symbols from python3...Reading symbols from
>
> /usr/lib/debug/.build-id/28/7763e881de67a59b31b452dd0161047f7c0135.debug...done.
> done.
> (gdb) b Hub_new
> Function "Hub_new" not defined.
> Make breakpoint pending on future shared library load? (y or [n]) y
> Breakpoint 1 (Hub_init) pending.
> (gdb) run
> Starting program: /home/rhodri/Work/Lego/hub_module/hat_env/bin/python3
> [Thread debugging using libthread_db enabled]
> Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
> Python 3.6.9 (default, Nov  7 2019, 10:44:02)
> [GCC 8.3.0] on linux
> Type "help", "copyright", "credits" or "license" for more information.
>  >>> from hub import hub
>
> Breakpoint 1, Hub_new (type=0x76236340 , args=(), kwds=0x0)
>  at src/hubmodule.c:164
> 164 HubObject *self = (HubObject *)type->tp_alloc(type, 0);
> (gdb)
>
>
> ...and off you go!
>
>
> --
> Rhodri James *-* Kynesim Ltd
>


[Python-Dev] Re: PEP 617: New PEG parser for CPython

2020-04-06 Thread Fabio Zadrozny
On Thu, Apr 2, 2020 at 3:16 PM Guido van Rossum  wrote:

> Since last fall's core sprint in London, Pablo Galindo Salgado, Lysandros
> Nikolaou and myself have been working on a new parser for CPython. We are
> now far enough along that we present a PEP we've written:
>
> https://www.python.org/dev/peps/pep-0617/
>
> Hopefully the PEP speaks for itself. We are hoping for a speedy resolution
> so we can land the code we've written before 3.9 beta 1.
>
> If people insist I can post a copy of the entire PEP here on the list, but
> since a lot of it is just background information on the old LL(1) and the
> new PEG parsing algorithms, I figure I'd spare everyone the need of reading
> through that. Below is a copy of the most relevant section from the PEP.
> I'd also like to point out the section on performance (which you can find
> through the above link) -- basically performance is on a par with that of
> the old parser.
>
>
Hi Guido,

I think using a PEG parser is interesting, but I do have some questions
about what other people who have to follow the Python grammar should expect
in the future, so, can you shed some light on this?

Does that mean that the grammar format currently available (specified in
https://docs.python.org/3.8/reference/grammar.html) will no longer be
updated/used?

Is it expected that other language implementations/parsers will also have to
move to a PEG parser in the future? -- which would probably be the case if
the language deviates strongly from LL(1).

Thanks,

Fabio


[Python-Dev] Re: PEP: Modify the C API to hide implementation details

2020-04-13 Thread Fabio Zadrozny
> * Hide implementation details from the C API to be able to `optimize
>   CPython`_ and make PyPy more efficient.
> * The expectation is that `most C extensions don't rely directly on
>   CPython internals`_ and so will remain compatible.
> * Continue to support old unmodified C extensions by continuing to
>   provide the fully compatible "regular" CPython runtime.
> * Provide a `new optimized CPython runtime`_ using the same CPython code
>   base: faster but can only import C extensions which don't use
>   implementation details. Since both CPython runtimes share the same
>   code base, features implemented in CPython will be available in both
>   runtimes.
>
>
Adding my 2 cents as someone who does use the CPython API (for a debugger).

I must say I'm -1 until alternative APIs for the needed use cases are
available in the optimized CPython runtime (I'd also say that this is a
really big incompatible change and would need a Python 4.0 to do)... I guess
that in order for this to work, the first step shouldn't be breaking
everyone but talking to extension authors (maybe checking for the users of
the APIs which will be deprecated) and finding alternatives before pushing
something which will break CPython extensions which rely on such APIs.

I also don't think that CPython should have 2 runtimes... if the idea is to
leverage extensions to other Python implementations, I think going just
for a more limited API is the way to go (but instead of just breaking
extensions that use the CPython internal API, try to come up with
alternative APIs for the users of the current CPython API -- for my use
case, I know the debugger could definitely do with just a few simple
additions: it uses the internal API mostly because there aren't real
alternatives for a couple of use cases). I.e.: if numpy/pandas/... don't
adopt the optimized runtime because they don't have the support they need,
it won't be useful to have it in the first place (you'd just be in the same
place where other Python implementations already are).

Also, this should probably follow the usual deprecation cycle: do a major
CPython release which warns about using the APIs that'll be deprecated and
only in the next CPython release should those APIs be actually removed (and
when that's done it probably deserves to be called Python 4).

Cheers,

Fabio


[Python-Dev] Python 3.0 grammar ambiguous?

2009-03-08 Thread Fabio Zadrozny
Hi All,

I'm trying to parse Python 3.0 following the Python 3.0 grammar from:
http://svn.python.org/projects/python/branches/py3k/Grammar/Grammar

Now, when getting to the arglist, it seems that the grammar is
ambiguous, and I was wondering how does Python disambiguate that (I'll
not put the whole grammar here, just the part that appears to be
ambiguous):

arglist: (argument ',')*
(argument [',']
 |'*' test (',' argument)* [',' '**' test]
 |'**' test
 )

argument: test [comp_for]
test: or_test
or_test: and_test
and_test: not_test
not_test: 'not' not_test | comparison
comparison: star_expr
star_expr: ['*'] expr


So, with that construct, given a call of the form call(*test), the grammar
would find the starred expression consumed by the argument construction
(because of star_expr) and not by the '*' test alternative in arglist
(because the construct is ambiguous and the argument construct comes
first), so, I was wondering how Python disambiguates that... anyone has
any pointers on it? It appears this happened while adding PEP 3132.

Am I missing something here?

Thanks,

Fabio
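For what it's worth, the modern `ast` module makes it easy to see which alternative wins in CPython today -- the starred argument is reported as part of the call's argument list (an `ast.Starred` node inside `Call.args`), while a lone starred expression is rejected (this uses today's ast API, not the 2009-era parser module):

```python
import ast

# Parse the ambiguous-looking call and inspect where '*x' ends up.
tree = ast.parse("f(*x)", mode="eval")
call = tree.body

print(type(call).__name__)          # Call
print(type(call.args[0]).__name__)  # Starred -- consumed as an argument

# As a plain expression, a lone starred name is a syntax error:
try:
    compile("(*x)", "<test>", "eval")
except SyntaxError as e:
    print("SyntaxError:", e.msg)
```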


Re: [Python-Dev] Python 3.0 grammar ambiguous?

2009-03-08 Thread Fabio Zadrozny
>> I was wondering how does Python
>> disambiguate that... anyone has any pointers on it?
>
> That is easy to answer:
>
> py> parser.expr("f(*x)").totuple()
> (258, (326, (303, (304, (305, (306, (307, (309, (310, (311, (312, (313,
> (314, (315, (316, (317, (1, 'f')), (321, (7, '('), (329, (16, '*'),
> (303, (304, (305, (306, (307, (309, (310, (311, (312, (313, (314, (315,
> (316, (317, (1, 'x', (8, ')', (4, ''),
> (0, ''))
> py> symbol.arglist
> 329
>
> So much for the "how", I don't know why it makes this choice; notice
> that this is the better choice, though:
>
> py> f((*x))
>  File "", line 1
> SyntaxError: can use starred expression only as assignment target


Yeap, very strange that it works... I can't get it to work in JavaCC.

I'm considering setting arglist as
((argument() [','])+  ['**' test])
| '**' test

Because it is able to handle the constructs removing the ambiguity,
and make things right semantically later on, but I don't like the idea
of being so different from the official grammar (although I'm running
out of choices).

Thanks,

Fabio


Re: [Python-Dev] Python 3.0 grammar ambiguous?

2009-03-08 Thread Fabio Zadrozny
> To be honest I wasn't aware of this ambiguity. It seems that whoever
> wrote the patch for argument unpacking (a, b, *c = ...) got "lucky"
> with an ambiguous grammar. This surprises me, because IIRC usually
> pgen doesn't like ambiguities. Other parser generators usually have
> some way to deal with ambiguous grammars, but they also usually have
> features that make it unnecessary to use the exact same grammar as
> pgen -- for example, most parser generators are able to backtrack or
> look ahead more than one token, so that they can distinguish between
> "a = b" and "a" once the '=' token is seen, rather than having to
> commit to parse an expression first.

JavaCC can actually do that, but in the current construct, the
ambiguity is not dependent on a lookahead, because both '*' test and
star_expr will match it equally well -- because they're actually the
same thing grammar-wise (so, there's no way to disambiguate without a
semantic handling later on)

After taking a 2nd look, I think that probably the best solution would
be creating a new testlist to be used from the expr_stmt -- something
like testlist_star_expr.

E.g.:

testlist: test (',' test)* [',']
testlist_star_expr:  [test | star_expr] (',' [test | star_expr])* [',']

And also, star_expr could probably be just
'*' NAME
(to make it faster to match -- or could it match something else?)

Note that I still haven't tested this -- I'll have time to do it
tomorrow on my JavaCC grammar (so, I can give you a better feedback if
this actually works).

Regards,

Fabio


Re: [Python-Dev] Python 3.0 grammar ambiguous?

2009-03-09 Thread Fabio Zadrozny
>> To be honest I wasn't aware of this ambiguity. It seems that whoever
>> wrote the patch for argument unpacking (a, b, *c = ...) got "lucky"
>> with an ambiguous grammar. This surprises me, because IIRC usually
>> pgen doesn't like ambiguities. Other parser generators usually have
>> some way to deal with ambiguous grammars, but they also usually have
>> features that make it unnecessary to use the exact same grammar as
>> pgen -- for example, most parser generators are able to backtrack or
>> look ahead more than one token, so that they can distinguish between
>> "a = b" and "a" once the '=' token is seen, rather than having to
>> commit to parse an expression first.
>
> JavaCC can actually do that, but in the current construct, the
> ambiguity is not dependent on a lookahead, because both '*' test and
> star_expr will match it equally well -- because they're actually the
> same thing grammar-wise (so, there's no way to disambiguate without a
> semantic handling later on)
>
> After taking a 2nd look, I think that probably the best solution would
> be creating a new testlist to be used from the expr_stmt -- something
> like testlist_star_expr.
>
> E.g.:
>
> testlist: test (',' test)* [',']
> testlist_star_expr:  [test | star_expr] (',' [test | star_expr])* [',']
>
> And also, star_expr could probably be just
> '*' NAME
> (to make it faster to match -- or could it match something else?)
>
> Note that I still haven't tested this -- I'll have time to do it
> tomorrow on my JavaCC grammar (so, I can give you a better feedback if
> this actually works).
>

Ok, I've created a bug for that at: http://bugs.python.org/issue5460
and commented on a solution (and just to note, star_expr could match
b, *a.a = [1,2,3], so, it cannot be changed for NAME)

The solution is working for me, and it should be straightforward to
apply it to Python.

Regards,

Fabio
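A quick check of that last observation -- the starred target in an unpacking assignment really can be more than a bare NAME, so star_expr cannot be reduced to '*' NAME:

```python
class Box:
    pass  # illustrative holder for an attribute target

box = Box()
first, *box.rest = [1, 2, 3]   # star target is an attribute, not a bare NAME

print(first)     # 1
print(box.rest)  # [2, 3]
```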


[Python-Dev] Cannot set PYTHONPATH with big paths with Python 3.0 and 3.1

2009-06-08 Thread Fabio Zadrozny
Hi all,

I've reported bug http://bugs.python.org/issue5924 some time ago and I
think it's a release blocker -- it seems easy to fix, but I don't have
time to actually submit a patch, so, I'd like to draw attention to it,
especially as a release candidate is already out.

Cheers,

Fabio


Re: [Python-Dev] nonlocal keyword in 2.x?

2009-10-22 Thread Fabio Zadrozny
On Wed, Oct 21, 2009 at 10:56 PM, Mike Krell  wrote:
> Is there any possibility of backporting support for the nonlocal keyword
> into a  2.x release?  I see it's not in 2.6, but I don't know if that was an
> intentional design choice or due to a lack of demand / round tuits.  I'm
> also not sure if this would fall under the scope of the proposed moratorium
> on new language features (although my first impression was that it could be
> allowed since it already exists in python 3.
>
> One of my motivations for asking is a recent blog post by Fernando Perez of
> IPython fame that describes an interesting decorator-based idiom inspired by
> Apple's Grand Central Dispatch which would allow many interesting
> possibilities for expressing parallelization and other manipulations of
> execution context for blocks of python code.  Unfortunately, using the
> technique to its fullest extent requires the nonlocal keyword.
>
> The blog post is here:
> https://cirl.berkeley.edu/fperez/py4science/decorators.html
>

Just as a note, the nonlocal there is not a requirement...

You can just create a mutable object there and change that object (so,
you don't need to actually rebind the object in the outer scope).

E.g.: instead of creating a float in the context, create a list with a
single float and change the float in the list (maybe the nonlocal
would be nicer, but it's certainly still usable)

Cheers,

Fabio
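A minimal sketch of that workaround (plain Python, no nonlocal needed): the closure mutates a one-element list instead of rebinding the outer name:

```python
def make_accumulator():
    total = [0.0]              # mutable "cell" standing in for the float
    def add(amount):
        total[0] += amount     # mutate the list; no rebinding, so no nonlocal
        return total[0]
    return add

acc = make_accumulator()
acc(1.5)
print(acc(2.5))  # 4.0
```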


[Python-Dev] Grammar change in classdef

2006-09-16 Thread Fabio Zadrozny
I've been porting the grammar for pydev to version 2.5 and I've seen
that you can now declare a class in the format: class B():pass
(without the testlist)

-- from the grammar: classdef: 'class' NAME ['(' [testlist] ')'] ':' suite

I think that this change should be presented at
http://docs.python.org/dev/whatsnew/whatsnew25.html

I'm saying that because I've only stumbled upon it by accident -- and
I wasn't able to find any explanation on the reason or semantics of
the change...

Thanks,

Fabio


Re: [Python-Dev] Grammar change in classdef

2006-09-16 Thread Fabio Zadrozny
On 9/16/06, Lawrence Oluyede <[EMAIL PROTECTED]> wrote:
> > I think that this change should be presented at
> > http://docs.python.org/dev/whatsnew/whatsnew25.html
>
> It's already listed there: http://docs.python.org/dev/whatsnew/other-lang.html
>

Thanks... also, I don't know if the empty yield statement is mentioned
too (I couldn't find it either).

Cheers,

Fabio


[Python-Dev] New relative import issue

2006-09-17 Thread Fabio Zadrozny
I've been playing with the new features and there's one thing about
the new relative import that I find a little strange and I'm not sure
this was intended...

When you do a from . import xxx, it will always fail if you're in a
top-level module, and when executing any module, the directory of the
module will automatically go into the pythonpath, thus making all the
relative imports in that structure fail.

E.g.:

/foo/bar/imp1.py <-- has a "from . import imp2"
/foo/bar/imp2.py

if I now put a test-case (or any other module I'd like as the main module) at:
/foo/bar/mytest.py

if it imports imp1, it will always fail.

The solutions I see would be:
- only use the pythonpath actually defined by the user (and don't put
the current directory in the pythonpath)
- make relative imports work even if they reach some directory in the
pythonpath (making it work as an absolute import that would only
search the current directory structure)

Or is this actually a bug? (I'm with python 2.5 rc2)

I took another look at http://docs.python.org/dev/whatsnew/pep-328.html
and the example shows:

pkg/
pkg/__init__.py
pkg/main.py
pkg/string.py

with the main.py doing a "from . import string", which is what I was
trying to accomplish...

Cheers,

Fabio


[Python-Dev] Changing a value in a frame (for a debugger)

2007-02-06 Thread Fabio Zadrozny

Hi All,

I'm currently trying to change the value for a variable in the debugger
using:

frame = findFrame(thread_id, frame_id)
exec '%s=%s' % (attr, expression) in frame.f_globals, frame.f_locals

it works well when the frame for the change is the topmost frame, but fails
otherwise...

I found some references speaking about some issue with PyFrame_FastToLocals
which made it appear like this was not possible to do... so, basically, is
there some way to make this work from python (changing a variable having a
given frame) or not?

Thanks,

Fabio


Re: [Python-Dev] Changing a value in a frame (for a debugger)

2007-02-07 Thread Fabio Zadrozny

On 2/7/07, Greg Ewing <[EMAIL PROTECTED]> wrote:


Fabio Zadrozny wrote:

> frame = findFrame(thread_id, frame_id)
> exec '%s=%s' % (attr, expression) in frame.f_globals, frame.f_locals

The locals of a function are actually stored in an array.
When you access them as a dict using locals(), all you
get is a dict containing a copy of their current values.
Modifying that dict doesn't affect the underlying array.

It seems that reading the f_locals of a frame does the
same thing. To modify the locals, you would need to poke
values into the original array -- but it doesn't seem
to be exposed to Python.

So it looks like you're out of luck.



Would it be ok to add a feature request for that? I initially thought it was
completely read-only, but I find it strange that it affects the topmost
frame correctly (so, it seems that even though I get a copy, when I alter
that copy on the topmost frame it affects it correctly)... anyone has a clue
why that happens?

It seems to affect pdb too...

Consider the code:
if __name__ == '__main__':

   def call1():
   v_on_frame1 = 10
   print 'v_on_frame1', v_on_frame1

   def call0():
   import pdb;pdb.set_trace()
   v_on_frame0 = 10
   call1()
   print 'v_on_frame0', v_on_frame0

   call0()


#when modifying in the current frame

x:\scbr15\source\python\tests_base\empty_test.py(9)call0()

-> v_on_frame0 = 10
(Pdb) n

x:\scbr15\source\python\tests_base\empty_test.py(10)call0()

-> call1()
(Pdb) v_on_frame0 = 40
(Pdb) c
v_on_frame1 10
v_on_frame0 40



#when modifying an upper frame it does not work

x:\scbr15\source\python\tests_base\empty_test.py(9)call0()

-> v_on_frame0 = 10
(Pdb) n

x:\scbr15\source\python\tests_base\empty_test.py(10)call0()

-> call1()
(Pdb) s
--Call--

x:\scbr15\source\python\tests_base\empty_test.py(3)call1()

-> def call1():
(Pdb) n

x:\scbr15\source\python\tests_base\empty_test.py(4)call1()

-> v_on_frame1 = 10
(Pdb) u

x:\scbr15\source\python\tests_base\empty_test.py(10)call0()

-> call1()
(Pdb) v_on_frame0 = 40
(Pdb) d

x:\scbr15\source\python\tests_base\empty_test.py(4)call1()

-> v_on_frame1 = 10
(Pdb) c
v_on_frame1 10
v_on_frame0 10
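For reference, the CPython-specific workaround that debuggers eventually settled on is to write into f_locals and then ask the interpreter to copy the dict back into the fast-locals array via the internal, undocumented PyFrame_LocalsToFast call. A sketch, assuming CPython (on 3.13+, f_locals is a write-through proxy per PEP 667 and the copy-back step is unnecessary):

```python
import ctypes
import sys

def set_frame_local(frame, name, value):
    """Assign `name` in `frame`'s local namespace (CPython-specific)."""
    frame.f_locals[name] = value
    if sys.version_info < (3, 13):
        # Push the modified locals dict back into the fast-locals array.
        ctypes.pythonapi.PyFrame_LocalsToFast(
            ctypes.py_object(frame), ctypes.c_int(0))

def inner():
    # Modify a local of the *caller's* frame, as a debugger would.
    set_frame_local(sys._getframe(1), "v_on_frame0", 40)

def outer():
    v_on_frame0 = 10
    inner()
    return v_on_frame0

print(outer())  # 40
```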


Re: [Python-Dev] Changing a value in a frame (for a debugger)

2007-02-07 Thread Fabio Zadrozny

On 2/7/07, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:



Seriously, a feature request is likely to sit there
forever. If you would come up with an actual patch,
that would be a different thing. You'll likely answer
your other question in the process of developing a
patch, too.



Ok... I've added it as a feature-request in
https://sourceforge.net/tracker/index.php?func=detail&aid=1654367&group_id=5470&atid=355470
(it explains the problem and proposes a patch), so, if someone could take
some time to go through it, it would be nice ;-)

Thanks,

Fabio


[Python-Dev] Test cases not garbage collected after run

2011-04-07 Thread Fabio Zadrozny
I actually created a bug entry for this
(http://bugs.python.org/issue11798), and only later it occurred to me that
I should've asked on the list first :)

So, here's the text for opinions:

Right now, when doing a test case, one must clear all the variables
created in the test class, and I believe this shouldn't be needed...

E.g.:

class Test(TestCase):
  def setUp(self):
self.obj1 = MyObject()

  ...

  def tearDown(self):
del self.obj1

Ideally (in my view), right after running the test, it should be
garbage-collected and the explicit tearDown just for deleting the
object wouldn't be needed (as the test would be garbage-collected,
that reference would automatically die), because this is currently
very error prone... (and probably a source of leaks for any
sufficiently big test suite).

If that's accepted, I can provide a patch.

Thanks,

Fabio
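The behaviour being asked for can be checked with a weakref -- in unittest versions where the suite drops its reference to each test after running it (the fix that eventually landed for this issue), the finished test instance becomes collectable without any tearDown bookkeeping (MyObject is a stand-in for the example above):

```python
import gc
import unittest
import weakref

class MyObject:
    pass

class Test(unittest.TestCase):
    def setUp(self):
        self.obj1 = MyObject()   # no matching `del` in tearDown

    def test_obj1(self):
        self.assertIsInstance(self.obj1, MyObject)

suite = unittest.TestLoader().loadTestsFromTestCase(Test)
ref = weakref.ref(list(suite)[0])       # weakly track the test instance
unittest.TextTestRunner(verbosity=0).run(suite)
del suite
gc.collect()
print(ref() is None)  # True once the suite releases the finished test
```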


[Python-Dev] Nonlocal shortcut

2008-12-07 Thread Fabio Zadrozny
Hi,

I'm currently implementing a parser to handle Python 3.0, and one of
the points I found conflicting with the grammar specification is
PEP 3104.

It says that a shortcut would be added to Python 3.0 so that "nonlocal
x = 0" can be written. However, the latest grammar specification
(http://docs.python.org/dev/3.0/reference/grammar.html?highlight=full%20grammar)
doesn't seem to take that into account... So, can someone enlighten me
on what should be the correct treatment for that on a grammar that
wants to support Python 3.0?

Thanks,

Fabio


Re: [Python-Dev] Nonlocal shortcut

2008-12-07 Thread Fabio Zadrozny
>> I'm currently implementing a parser to handle Python 3.0, and one of
>> the points I found conflicting with the grammar specification is the
>> PEP 3104.
>>
>> It says that a shortcut would be added to Python 3.0 so that "nonlocal
>> x = 0" can be written. However, the latest grammar specification
>> (http://docs.python.org/dev/3.0/reference/grammar.html?highlight=full%20grammar)
>> doesn't seem to take that into account... So, can someone enlighten me
>> on what should be the correct treatment for that on a grammar that
>> wants to support Python 3.0?
>
> An issue was already filed about this:
> http://bugs.python.org/issue4199
> It should be ready for inclusion in 3.0.1.
>

Thanks for pointing that out.

Fabio


[Python-Dev] Can't have unbuffered text I/O in Python 3.0?

2008-12-19 Thread Fabio Zadrozny
Hi,

I'm currently having problems to get the output of Python 3.0 into the
Eclipse console (integrating it into Pydev).

The problem appears to be that stdout and stderr are not running
unbuffered (even passing -u or trying to set PYTHONUNBUFFERED), and
the content only appears to me when a flush() is done or when the
process finishes.

So, in the search of a solution, I found a suggestion from
http://stackoverflow.com/questions/107705/python-output-buffering

to use the following construct:

sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)

But that gives the error below in Python 3.0:

sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
  File "D:\bin\Python30\lib\os.py", line 659, in fdopen
return io.open(fd, *args, **kwargs)
  File "D:\bin\Python30\lib\io.py", line 243, in open
raise ValueError("can't have unbuffered text I/O")
ValueError: can't have unbuffered text I/O

So, I'd like to know if there's some way I can make it run unbuffered
(to get the output contents without having to flush() after each
write).

Thanks,

Fabio
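The buffering behaviour itself can be reproduced (and worked around) with the io classes directly -- a sketch using an in-memory stream; the same rewrap applied to sys.stdout.buffer is the usual way to get line-buffered text output without reopening the fd:

```python
import io

raw = io.BytesIO()
buffered = io.BufferedWriter(raw)
out = io.TextIOWrapper(buffered, encoding="utf-8", line_buffering=True)

out.write("partial")          # no newline yet: stays in the buffers
assert raw.getvalue() == b""
out.write(" line\n")          # newline triggers a flush with line_buffering
assert raw.getvalue() == b"partial line\n"

# For a real interpreter, the equivalent rewrap would be:
#   sys.stdout = io.TextIOWrapper(sys.stdout.buffer, line_buffering=True)
# (or, on Python 3.7+, sys.stdout.reconfigure(line_buffering=True))
```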


Re: [Python-Dev] Can't have unbuffered text I/O in Python 3.0?

2008-12-19 Thread Fabio Zadrozny
You're right, thanks (guess I'll use that option then).

Now, is it a bug that Python 3.0 doesn't run unbuffered when
specifying -u or PYTHONUNBUFFERED, or was this support dropped?

Thanks,

Fabio

On Fri, Dec 19, 2008 at 8:03 PM, Brett Cannon  wrote:
> On Fri, Dec 19, 2008 at 13:43, Fabio Zadrozny  wrote:
>> Hi,
>>
>> I'm currently having problems to get the output of Python 3.0 into the
>> Eclipse console (integrating it into Pydev).
>>
>> The problem appears to be that stdout and stderr are not running
>> unbuffered (even passing -u or trying to set PYTHONUNBUFFERED), and
>> the content only appears to me when a flush() is done or when the
>> process finishes.
>>
>> So, in the search of a solution, I found a suggestion from
>> http://stackoverflow.com/questions/107705/python-output-buffering
>>
>> to use the following construct:
>>
>> sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
>>
>> But that gives the error below in Python 3.0:
>>
>>sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
>>  File "D:\bin\Python30\lib\os.py", line 659, in fdopen
>>return io.open(fd, *args, **kwargs)
>>  File "D:\bin\Python30\lib\io.py", line 243, in open
>>raise ValueError("can't have unbuffered text I/O")
>> ValueError: can't have unbuffered text I/O
>>
>> So, I'd like to know if there's some way I can make it run unbuffered
>> (to get the output contents without having to flush() after each
>> write).
>
> Notice how the exception specifies test I/O cannot be unbuffered. This
> restriction does not apply to bytes I/O. Simply open it as 'wb'
> instead of 'w' and it works.
>
> -Brett
>


Re: [Python-Dev] Can't have unbuffered text I/O in Python 3.0?

2008-12-20 Thread Fabio Zadrozny
It appears that this bug was already reported: http://bugs.python.org/issue4705

Any chance that it gets in the next 3.0.x bugfix release?

Just as a note, if I do: sys.stdout._line_buffering = True, it also
works, but doesn't seem right as it's accessing an internal attribute.

Note 2: the solution that said to pass 'wb' does not work, because I
need the output as text and not binary or text becomes garbled when
it's not ascii.

Thanks,

Fabio

On Fri, Dec 19, 2008 at 9:03 PM, Guido van Rossum  wrote:
> For truly unbuffered text output you'd have to make changes to the
> io.TextIOWrapper class to flush after each write() call. That's an API
> change -- the constructor currently has a line_buffering option but no
> option for completely unbuffered mode. It would also require some
> changes to io.open() which currently rejects buffering=0 in text mode.
> All that suggests that it should wait until 3.1.
>
> However it might make sense to at least turn on line buffering when -u
> or PYTHONUNBUFFERED is given; that doesn't require API changes and so
> can be considered a bug fix.
>
> --Guido van Rossum (home page: http://www.python.org/~guido/)
>
>
>
> On Fri, Dec 19, 2008 at 2:47 PM, Antoine Pitrou  wrote:
>>
>>> Well, ``python -h`` still lists it.
>>
>> Precisely, it says:
>>
>> -u : unbuffered binary stdout and stderr; also PYTHONUNBUFFERED=x
>> see man page for details on internal buffering relating to '-u'
>>
>> Note the "binary". And indeed:
>>
>> ./python -u
>> Python 3.1a0 (py3k:67839M, Dec 18 2008, 17:56:54)
>> [GCC 4.3.2] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> import sys
>> >>> sys.stdout.buffer.write(b"y")
>> y1
>
>>
>> I don't know what it would take to enable unbuffered text IO while keeping 
>> the
>> current TextIOWrapper implementation...
>>
>> Regards
>>
>> Antoine.
>>
>>


Re: [Python-Dev] Can't have unbuffered text I/O in Python 3.0?

2008-12-21 Thread Fabio Zadrozny
>> It appears that this bug was already reported: 
>> http://bugs.python.org/issue4705
>>
>> Any chance that it gets in the next 3.0.x bugfix release?
>>
>> Just as a note, if I do: sys.stdout._line_buffering = True, it also
>> works, but doesn't seem right as it's accessing an internal attribute.
>>
>> Note 2: the solution that said to pass 'wb' does not work, because I
>> need the output as text and not binary or text becomes garbled when
>> it's not ascii.
>>
>
> Can't you decode the bytes after you receive them?
>

Well, in short, no (long answer is that I probably could if I spent a
long time doing my own console instead of relying on what's already
done and working in Eclipse for all the current available languages it
supports, but that just doesn't seem right).

Also, it's seems easily solvable (enabling line buffering for the
python streams when -u is passed) in the Python side... My current
workaround is doing that on a custom site-initialization when a Python
3 interpreter is found, but I find that this is not the right way for
doing it, and it really feels like a Python bug.

-- Fabio


[Python-Dev] Import semantics

2006-06-11 Thread Fabio Zadrozny
Python and Jython import semantics differ on how sub-packages should be
accessed after importing some module:

Jython 2.1 on java1.5.0 (JIT: null)
Type "copyright", "credits" or "license" for more information.
>>> import xml
>>> xml.dom

Python 2.4.2 (#67, Sep 28 2005, 12:41:11) [MSC v.1310 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> import xml
>>> xml.dom
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
AttributeError: 'module' object has no attribute 'dom'
>>> from xml.dom import pulldom
>>> xml.dom

Note that in Jython importing a module makes all subpackages beneath it
available, whereas in Python only the tokens available in __init__.py are
accessible -- but if you do load the sub-module later, even without getting
it directly into the namespace, it becomes accessible too. This seems
unexpected to me; I would expect it to be available only if I did some
"import xml.dom" at some point.

My problem is that in Pydev, in static analysis, I would only get the
tokens available for actually imported modules, but that's not true for
Jython, and I'm not sure if the current behaviour in Python was expected.

So... which would be the right semantics for this?

Thanks,

Fabio
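The CPython side of this can be demonstrated directly -- importing the package alone does not bind the sub-module attribute, but importing the sub-module anywhere in the process does (this uses xml/xml.dom as in the message, and assumes a clean slate, which the demo forces by purging sys.modules first):

```python
import sys
import importlib

for mod in list(sys.modules):
    if mod == "xml" or mod.startswith("xml."):
        del sys.modules[mod]        # start from a clean slate for the demo

import xml
print(hasattr(xml, "dom"))          # False: __init__.py doesn't import it

importlib.import_module("xml.dom")  # load the sub-module; bind no local name
print(hasattr(xml, "dom"))          # True: import machinery set the attribute
```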