[Python-Dev] bpo-33257: seeking advice & approval on the course of action

2018-05-02 Thread Ivan Pozdeev via Python-Dev
The bottom line is: Tkinter is currently broken -- as in, it's not 
thread-safe (in both Py2 and Py3) despite being designed and advertising 
itself as such.
All the fix options require some redesign of either `_tkinter`, or some 
of the core as well.


So, I'd like to get some kind of core team's feedback and/or approval 
before pursuing any of them.


The options are outlined in https://bugs.python.org/issue33257#msg316087 .

If anyone of you is in Moscow, we can meet up and discuss this in a more 
time-efficient manner.


--
Regards,
Ivan



[Python-Dev] Drop/deprecate Tkinter?

2018-05-02 Thread Ivan Pozdeev via Python-Dev
As https://bugs.python.org/issue33257 and 
https://bugs.python.org/issue33316 showed, Tkinter is broken, for both 
Py2 and Py3, with both threaded and non-threaded Tcl, since 2002 at 
least, and no-one gives a damn.


This seems to be evidence that very few people are actually 
interested in it or using it.


If that's so, there's no use keeping it in the standard library -- if 
anything, because there's not enough incentive and/or resources to 
support it. And to avoid screwing people (=me) up when they have the 
foolishness to think they can rely on it in their projects -- nowhere in 
the docs is it said that the module is only partly functional.


--

Regards,
Ivan



Re: [Python-Dev] Drop/deprecate Tkinter?

2018-05-02 Thread Ivan Pozdeev via Python-Dev

On 03.05.2018 1:01, Antoine Pitrou wrote:

On Wed, 2 May 2018 22:54:04 +0100
Paul Moore  wrote:

On 2 May 2018 at 22:37, Antoine Pitrou  wrote:

To elaborate a bit: the OP, while angry, produced both a detailed
analysis *and* a PR.  It's normal to be angry when an advertised
feature doesn't work and it makes you lose hours of work (or, even,
forces you to a wholesale redesign). Producing a detailed analysis and a
PR is more than most people will ever do.

His *other* email seems reasonable, and warrants a response, yes. But
are we to take the suggestion made here (to drop tkinter) seriously,
based on the fact that there's a (rare - at least it appears that the
many IDLE users haven't hit it yet) race condition that causes a crash
in Python 2.7? (It appears that the problem doesn't happen in the
python.org 3.x builds, if I understand the description of the issue).
In 3.x, Tkinter+threads is broken too, albeit in a different way -- see 
https://bugs.python.org/issue33412 (this should've been the 2nd link in 
the initial message, sorry for the mix-up).
The 2.x bug also shows in 3.x if it's linked with a nonthreaded version 
of Tcl (dunno how rare that is, but the code still supports this setup).

I and others actually suggested it seriously in the past.  Now,
admittedly, at least IDLE seems better maintained than it used to
be -- not sure about Tkinter itself.


Nor do I think the tone of his message here is acceptable - regardless
of how annoyed he is, posting insults ("no-one gives a damn") about
volunteer contributors in a public mailing list isn't reasonable or
constructive. Call that "playing speech police" if you want, but I
think that being offended or annoyed and saying so is perfectly
reasonable.

With all due respect, it's sometimes unpredictable what kind of wording
Anglo-Saxons will take as an insult, as there's a lot of obsequiousness
there that doesn't exist in other cultures. To me, "not give a damn"
reads like a familiar version of "not care about something", but
apparently it can be offensive.

I confirm, I never meant this as an insult.

I had to use emotional language to drive the point home that it's not 
some nitpick, it really causes people serious trouble (I lost a source 
of income, for the record).
Without the emotional impact, my message could easily be ignored as some 
noise not worth attention. This time, it's just too damn important to 
allow this possibility.


The module being abandoned and unused is truly the only explanation I 
could think of when seeing that glaring bugs have stayed unfixed for 15 
years (an infinity in IT) in actively developed and highly used 
software.
This may be flattering for my ego, but if the module really is in any 
production use to speak of, then in all these years, with all this 
humongous user base, someone, somewhere in the world, at some point, 
should have looked into this. I don't even program in C professionally, 
yet was able to diagnose it and make a PR!


---

I'll make a PR with the doc warning as Guido suggested unless there are 
any better ideas.


Meanwhile, I'd really appreciate any response to my other message -- it 
is about actually fixing the issue, and I do need feedback to be able to 
proceed.
No need to delve all the way in and give an official authorization or 
something. I'm only looking for an opinion poll on which redesign option 
(if any) looks like the most reasonable way to proceed and/or in line 
with the big picture (the last one -- to provide a unifying vision -- is 
_the_ job of a BDFL IIRC).



Regards

Antoine.


--
Regards,
Ivan



Re: [Python-Dev] Drop/deprecate Tkinter?

2018-05-03 Thread Ivan Pozdeev via Python-Dev

On 03.05.2018 20:11, Ryan Gonzalez wrote:

On May 3, 2018 11:56:24 AM MRAB  wrote:


On 2018-05-03 13:24, Steve Holden wrote:

On Thu, May 3, 2018 at 12:12 AM, Ivan Pozdeev via Python-Dev
<python-dev@python.org> wrote:

    On 03.05.2018 1:01, Antoine Pitrou wrote:

    On Wed, 2 May 2018 22:54:04 +0100
    Paul Moore <p.f.mo...@gmail.com> wrote:

    On 2 May 2018 at 22:37, Antoine Pitrou <solip...@pitrou.net> wrote:

    To elaborate a bit: the OP, while angry, produced both a detailed
    analysis *and* a PR.  It's normal to be angry when an advertised
    feature doesn't work and it makes you lose hours of work (or, even,
    forces you to a wholesale redesign). Producing a detailed analysis
    and a PR is more than most people will ever do.

    His *other* email seems reasonable, and warrants a response, yes. But
    are we to take the suggestion made here (to drop tkinter) seriously,
    based on the fact that there's a (rare - at least it appears that the
    many IDLE users haven't hit it yet) race condition that causes a crash
    in Python 2.7? (It appears that the problem doesn't happen in the
    python.org 3.x builds, if I understand the description of the issue).

    In 3.x, Tkinter+threads is broken too, albeit in a different way --
    see https://bugs.python.org/issue33412 (this should've been the 2nd
    link in the initial message, sorry for the mix-up).


The observation in that issue that tkinter and threads should be
handled in specific ways is certainly a given for old hands, who have
long put the GUI code in one thread with one or more concurrent worker
threads typically communicating through queues. But I haven't built
anything like that recently, so I couldn't say how helpful the current
documentation might be.


Interacting with the GUI only in the main thread is something that I've
had to do in other languages (it is/was the recommended practice), so I
naturally do the same with Python and tkinter. It's also easier to
reason about because you don't get elements of the GUI changing
unexpectedly.


To add to this, most GUI frameworks disallow modifications outside the 
main thread altogether. IIRC both GTK+ and Qt require this, or else 
the behavior is undefined.


You still need some facility (*cough*SendMessage*cough*) to send update 
commands to the GUI (the model->view link in MVC, presenter->view in MVP).
Who carries out these commands, and how specifically, is unimportant -- 
an implementation detail.


Every GUI has an event/message queue exactly for that, which other 
threads can send work requests into: 
https://doc.qt.io/qt-5.10/qcoreapplication.html#postEvent , 
https://developer.gnome.org/gdk3/stable/gdk3-Threads.html#gdk3-Threads.description 
, 
https://en.wikipedia.org/wiki/Event_dispatching_thread#Submitting_user_code_to_the_EDT 
, and the aforementioned WinAPI SendMessage() and PostMessage(), just to 
name a few.


Tcl/Tk, being arguably the oldest usable GUI toolkit in existence, 
likewise has an event queue, but doesn't provide a complete event loop 
implementation, only the building blocks for it. Tkinter fills that gap 
with its `tk.mainloop()`.
It fails to provide a working means to send work into that loop, though. 
Having to use a second, duplicate queue and poll it (=busy loop) 
instead is an obvious crutch -- see the sketch below.
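
To make the point concrete, here is a minimal sketch of that workaround 
(names like `work_queue` are purely illustrative): a worker thread pushes 
results into a `queue.Queue`, and the GUI thread polls it with `after()`, 
because there is no supported way to post a callable into Tkinter's event 
loop from another thread.

import queue
import threading
import time
import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="waiting...")
label.pack()

work_queue = queue.Queue()  # the second, duplicate queue next to Tcl's own event queue

def worker():
    # Background work; only the queue is touched from this thread.
    for i in range(5):
        time.sleep(1)
        work_queue.put("result %d" % i)

def poll_queue():
    # Runs in the GUI thread; effectively a polling loop on top of Tcl's event loop.
    try:
        while True:
            label["text"] = work_queue.get_nowait()
    except queue.Empty:
        pass
    root.after(100, poll_queue)  # re-schedule the poll

threading.Thread(target=worker, daemon=True).start()
poll_queue()
root.mainloop()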


[snip]



--
Ryan (ライアン)
Yoko Shimomura, ryo (supercell/EGOIST), Hiroyuki Sawano >> everyone else
https://refi64.com/




--
Regards,
Ivan



Re: [Python-Dev] Dealing with tone in an email

2018-05-03 Thread Ivan Pozdeev via Python-Dev

On 03.05.2018 21:31, Brett Cannon wrote:



On Thu, 3 May 2018 at 01:27 Paul Moore wrote:


On 3 May 2018 at 03:26, Steven D'Aprano <st...@pearwood.info> wrote:

>> With all due respect, it's sometimes unpredictable what kind of wording
>> Anglo-Saxons will take as an insult, as there's a lot of obsequiousness
>> there that doesn't exist in other cultures. To me, "not give a damn"
>> reads like a familiar version of "not care about something", but
>> apparently it can be offensive.
>
> I'm Anglo-Saxon[1], and honestly I believe that it is thin-skinned to
> the point of ludicrousness to say that "no-one gives a damn" is an
> insult. This isn't 1939 when Clark Gable's famous line "Frankly my dear,
> I don't give a damn" was considered shocking. It's 2018, and to not give
> a damn is a more forceful way of saying that people don't care, that
> they are indifferent.

Sigh. That's not what I was saying at all. I was trying to point out
that Antoine's claim that people should ignore the rhetoric and that
complaining about the attitude was unreasonable, was in itself unfair.
People have a right to point out that a mail like the OP's was badly
worded.

> With respect to Paul, I literally cannot imagine why he thinks that
> *anyone*, not even the tkinter maintainers or developers themselves,
> ought to feel *offended* by Ivan's words.

Personally, they didn't offend me. I don't pretend to know how others
might take them. But they *did* annoy me. I'm frankly sick of people
(not on this list) complaining that people who work on projects in
their own time, free of charge, "don't care enough" or "are ignoring
my requirement". We all do it, to an extent, and it's natural to get
frustrated, but the onus is on the person asking for help to be polite
and fair. And maybe this response was the one where I finally let that
frustration show through. I may read less email for a week or two,
just to get a break.


I had the same response as Paul: annoyed. And while Ivan thought he 
was using "emotional language to drive the point home that it's not 
some nitpick", it actually had the reverse effect on me and caused me 
not to care, because I don't need to invite annoyance into my life when 
putting my personal time into something.


No one is saying people can't be upset and if you are ever upset 
there's something wrong; we're human beings after all. But those of us 
speaking up about the tone are saying that you can also wait until 
you're not so upset to write an email. This was never going to be 
resolved in an hour, so waiting an hour until you're in a better place 
to write an email that wasn't quite so inflammatory seems like a 
reasonable thing to ask.



Let me express things right from the horse's mouth.

The sole purpose of the tone was to not let the message be flat-out ignored.
I had my neutral-toned, to-the-point messages to mailing lists flat-out 
ignored one too many times for reasons that I can only guess about.

This time, the situation was too important to let that happen.

Whatever anyone may think of this, it worked. I got my message through, 
and got the feedback on the topic that I needed to proceed in resolving 
the problem that caused it.
I seriously doubt I could achieve that with a neutral-toned message just 
stating the facts: dry facts would not show people how this could be 
important ("ah, just another n00b struggling with Tkinter basics" or 
something).




--
Regards,
Ivan



Re: [Python-Dev] Dealing with tone in an email

2018-05-05 Thread Ivan Pozdeev via Python-Dev

On 04.05.2018 19:04, Guido van Rossum wrote:
Thank you Steven! I assume that Brian hadn't seen my response (such 
crossed messages due to delivery delays are very common in this 
mailing list).


I'd like to use your email (nearly) verbatim to start off the 
discussion about civility we're going to have at the Language Summit.


Since I won't be present at the summit to tell my side of the story, you 
can see it below.

It's up to you to judge it, but at least you need to know what to judge.

In a nutshell, this is an exceptional situation, and I saw no better way 
that was guaranteed to work.
I never meant or mean to use this as a standard tactic, this is only the 
second such case in my life.
On Fri, May 4, 2018 at 8:43 AM, Steven D'Aprano wrote:


On Thu, May 03, 2018 at 06:31:03PM +, Brett Cannon wrote:

> No one is saying people can't be upset and if you are ever upset there's
> something wrong; we're human beings after all. But those of us speaking
> up about the tone are saying that you can also wait until you're not so
> upset to write an email. This was never going to be resolved in an hour,
> so waiting an hour until you're in a better place to write an email that
> wasn't quite so inflammatory seems like a reasonable thing to ask.

Certainly!

I'm not defending Ivan's initial email. His tantrum *was* annoying,
unreasonable, and unfair to those who do care about tkinter. He could
have done better.

But *we* should be better too. Our response to Ivan has not been
welcoming, and as a community we haven't lived up to our own standards,
as we have piled onto him to express our righteous indignation:

1. Guido responded telling Ivan to calm down and work off his
   frustration elsewhere. And that's where things should have
   stopped, unless Ivan had persisted in his impoliteness.

2. Brian upped the ante by bringing the CoC into discussion.

3. Paul raised it again by describing Ivan's post as "offensive".

4. And now, Steve H has claimed that Ivan's initial post was
   bordering on "abusive".

We've gone from rightly treating Ivan's post as intemperate and
impolite, and telling him to chill, to calling his post "offensive",
to "abusive". (Next, I presume, someone will claim to be traumatised
by Ivan's email.)

Just as Ivan should have waited until he had calmed down before firing
off his rant, so we ought to resist the temptation to strike back with
hostility at trivial social transgressions, especially from newcomers.
This is what Ivan actually said:

- Tkinter is broken and partly functional (an opinion with only the
  most tenuous connection with fact, but hardly abusive);

- that nobody cares (factually wrong, but not abusive);

- that possibly nobody is using it (factually wrong, but not abusive);

- that if that's the case (it isn't), then it should be removed
  from the std lib (a reasonable suggestion if only the premise had
  been correct).

As I suspected. This is a classic scenario that can be seen anywhere 
from time to time: "everyone underestimates a problem until a disaster 
strikes".
The team's perception of Tkinter is basically: "well, there are slight 
issues, and the docs are lacking, but no big deal."


Well, this _is_ a big deal. As in, "with 15+ years of experience, 5+ 
with Python, I failed to produce a working GUI in a week; no-one on the 
Net, regardless of experience (including Terry), is ever sure how to do 
things right; every online tutorial says: 'all the industry-standard and 
expected ways are broken/barred, we have to resort to ugly workarounds 
to accomplish just about anything'" big deal. This is anything but 
normal, and all the more shocking in Python, where the opposite is the norm.


And now, a disaster has struck. Not knowing this, I relied on Tkinter 
with very much at stake (my income for the two following months, 
basically), and lost. If that's not a testament to just how much damage 
Tkinter's current state actually does, I dunno what is.


Of course, it's up to me to write fixes and all, since this is a 
volunteer project. But I can't do this alone; I must recruit the team's 
cooperation if I hope to ever be successful. Unless I shatter their 
current outlook on the matter first, any fixes I provide will likely be 
dismissed as unneeded or deferred indefinitely as unimportant. There are 
precedents of that, including ones with no response whatsoever, even 
though the messages were written neutrally, with a thorough explanation, 
patch/PR etc. (I do believe the maintainers are doing their best. Still, 
the mere fact that they chose to work on other tickets over mine shows 
that they considered those more important. So it does matter if they 
underestimate a topic.)


That's why I had to resort to shock value. First, it would guarantee 
that my message won't fall 

Re: [Python-Dev] Windows 10 build agent failures

2018-05-06 Thread Ivan Pozdeev via Python-Dev
For me, Tcl/Tk failed to build with SDK 10.0.16299.0; I had to 
explicitly fall back to 10.0.15063.0 ( 
https://stackoverflow.com/questions/48559337/error-when-building-tcltk-in-visual-studio-2017 
). May be related if VS was (auto)updated on the builders.


On 06.05.2018 11:05, Paul Goins wrote:

Hi,

Just kind of "looking around" at stuff I can help with, and I noticed 
a few days ago that Windows 10 AMD64 builds of Python 3.6/3.7/3.x are 
generally failing.


It seems like the failures started April 16th around 1am per BuildBot 
and went from consistently passing to consistently failing.  The 
errors appear to be timeouts; the test runs are going over 15 minutes.


I can run the tests locally and things seem fine.

The one thing which seems to have changed is, under the "10 slowest 
tests" header, test_io has shot up in terms of time to complete:


3.6: 2 min 13 sec to 9 min 25 sec
3.7: 2 min 10 sec to 9 min 26 sec
3.x: 3 min 39 sec to 9 min 17 sec



Locally this test suite runs in around 36 seconds.  I see no real 
change between running one of the last "good" changesets versus the 
current head of master.  I'm suspecting an issue on the build agent 
perhaps?  Thoughts?


Best Regards,
Paul Goins




--
Regards,
Ivan



Re: [Python-Dev] bpo-33257: seeking advice & approval on the course of action

2018-05-14 Thread Ivan Pozdeev via Python-Dev

On 14.05.2018 21:58, Terry Reedy wrote:

On 5/14/2018 12:20 PM, Chris Barker via Python-Dev wrote:
On Wed, May 2, 2018 at 8:21 PM, Terry Reedy <tjre...@udel.edu> wrote:


    On 5/2/2018 4:38 PM, Ivan Pozdeev via Python-Dev wrote:

    The bottom line is: Tkinter is currently broken


    This is way over-stated.  Many modules have bugs, sometimes in
    features more central to their main purpose.

I'll suggest a re-statement:

tkinter is not thread safe,


Still over-stated.  If one uses tcl/tk compiled with thread support, 
tkinter *is* thread-safe.  This is 'as far as I know' from running 
posted 'failing' examples (possibly with bug fixes) with 3.5+ on 
Windows, which is installed with tcl/tk 8.6, which defaults to 
thread-safe.


This means that you didn't (yet) read the letter that I attached to 
https://bugs.python.org/issue33479 .

Quoting the relevant section:

===
The reality is that with neither flavor of Tcl is Tkinter completely 
thread-safe, but with threaded flavor, it's more so:


* with nonthreaded Tcl, making concurrent Tcl calls leads to crashes due 
to incorrect management of the "Tcl lock" as per 
https://bugs.python.org/issue33257
* with threaded Tcl, the only issue that I found so far is that a few 
APIs must be called from the interpreter's thread 
(https://bugs.python.org/issue33412#msg316152; so far, I know 
`mainloop()` and `destroy()` to be this) -- while most can be called 
from anywhere. Whether the exceptions are justified is a matter of 
discussion (e.g. at first glance, `destroy()` can be fixed).

===
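
As a concrete illustration of the above (a minimal sketch of the kind of 
usage under discussion, not a guarantee of behavior): with a threaded 
Tcl/Tk build, a worker thread calls widget methods directly while 
`mainloop()` runs in the interpreter's thread; with non-threaded Tcl, the 
same code is prone to the bpo-33257 crashes.

import threading
import time
import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="waiting...")
label.pack()

def worker():
    # Direct widget calls from a non-interpreter thread: observed to work
    # with threaded Tcl (the 8.6 default), crash-prone with non-threaded Tcl.
    for i in range(5):
        time.sleep(1)
        label["text"] = "update %d" % i

threading.Thread(target=worker, daemon=True).start()

# mainloop() itself is one of the calls that must stay in the interpreter's thread.
root.mainloop()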
Tkinter was intended to also be thread-safe when using tcl/tk without 
thread support, which was the default for tcl/tk 8.5 and before. The 
posted examples can fail on 2.x on Windows, which comes with tcl/tk 
8.5 or before. _tkinter.c has some different #ifdefs for the two 
situations.



and yet it is documented as being thread safe


True in https://docs.python.org/3/library/tk.html
Unspecified in https://docs.python.org/3/library/tkinter.html


This is either a bug(s) in the implementation or the docs.


Both


So what are the solutions?

1) fix the docs -- unless tkInter is made thread safe really soon, 
and fixes are back-ported, this seems like a no brainer -- at least 
temporarily.


https://bugs.python.org/issue33479 'Document tkinter and threads'


2) fix the issues that make tkInter not thread safe


with non-thread tcl/tk.

https://bugs.python.org/issue33257 has a patch that might improve the 
situation for one type of call.  Fixing everything might not be 
possible.  AFAIK, there are currently no tests of thread safety.




--
Regards,
Ivan



Re: [Python-Dev] bpo-33257: seeking advice & approval on the course of action

2018-05-14 Thread Ivan Pozdeev via Python-Dev

On 14.05.2018 22:05, Ivan Pozdeev wrote:

On 14.05.2018 21:58, Terry Reedy wrote:

On 5/14/2018 12:20 PM, Chris Barker via Python-Dev wrote:
On Wed, May 2, 2018 at 8:21 PM, Terry Reedy <tjre...@udel.edu> wrote:


    On 5/2/2018 4:38 PM, Ivan Pozdeev via Python-Dev wrote:

    The bottom line is: Tkinter is currently broken


    This is way over-stated.  Many modules have bugs, sometimes in
    features more central to their main purpose.

I'll suggest a re-statement:

tkinter is not thread safe,


Still over-stated.  If one uses tcl/tk compiled with thread support, 
tkinter *is* thread-safe.  This is 'as far as I know' from running 
posted 'failing' examples (possibly with bug fixes) with 3.5+ on 
Windows, which is installed with tcl/tk 8.6, which defaults to 
thread-safe.


This means that you didn't (yet) read the letter that I attached to 
https://bugs.python.org/issue33479 .

Quoting the relevant section:

===
The reality is that with neither flavor of Tcl is Tkinter completely 
thread-safe, but with threaded flavor, it's more so:


* with nonthreaded Tcl, making concurrent Tcl calls leads to crashes 
due to incorrect management of the "Tcl lock" as per 
https://bugs.python.org/issue33257
* with threaded Tcl, the only issue that I found so far is that a few 
APIs must be called from the interpreter's thread 
(https://bugs.python.org/issue33412#msg316152; so far, I know 
`mainloop()` and `destroy()` to be this) -- while most can be called 
from anywhere. Whether the exceptions are justified is a matter of 
discussion (e.g. at first glance, `destroy()` can be fixed).
And another undocumented limitation for threaded Tcl: when calling 
anything from outside the interpreter thread, `mainloop()` must be 
running in the interpreter thread, or the call will either raise or 
hang (dunno any more details atm).

===


Tkinter was intended to also be thread-safe when using tcl/tk without 
thread support, which was the default for tcl/tk 8.5 and before. The 
posted examples can fail on 2.x on Windows, which comes with tcl/tk 
8.5 or before. _tkinter.c has some different #ifdefs for the two 
situations.



and yet it is documented as being thread safe


True in https://docs.python.org/3/library/tk.html
Unspecified in https://docs.python.org/3/library/tkinter.html


This is either a bug(s) in the implementation or the docs.


Both


So what are the solutions?

1) fix the docs -- unless tkInter is made thread safe really soon, 
and fixes are back-ported, this seems like a no brainer -- at least 
temporarily.


https://bugs.python.org/issue33479 'Document tkinter and threads'


2) fix the issues that make tkInter not thread safe


with non-thread tcl/tk.

https://bugs.python.org/issue33257 has a patch that might improve the 
situation for one type of call.  Fixing everything might not be 
possible.  AFAIK, there are currently no tests of thread safety.






--
Regards,
Ivan



[Python-Dev] Making Tcl/Tk more suitable for embedding (was: [issue33479] Document tkinter and threads)

2018-05-15 Thread Ivan Pozdeev via Python-Dev
The subject is off-topic for the ticket, so I guess this discussion is 
better continued here.


On 15.05.2018 18:20, Mark Roseman wrote:

Mark Roseman  added the comment:

Hi Ivan, thanks for your detailed response. The approach you're suggesting ("Since 
the sole offender is their threading model, the way is to show them how it's defective 
and work towards improving it.") is in the end not something I think is workable.

Some historical context. Ousterhout had some specific ideas about how Tcl/Tk should be used, and that was 
well-reflected in his early control of the code base. He was certainly outspoken against threads. The main 
argument is that they're complicated if you don't know what you're doing, which included the 
"non-professional programmers" he considered the core audience. Enumerating how threads were used 
at the time, most of the uses could be handled (more simply) in other ways, such as event-driven and 
non-blocking timers and I/O (so what people today would refer to as the "node.js event model"). 
Threads (or separate communicating processes) were for long-running computations, things he always envisioned 
happening in C code (written by more "professional programmers"), not Tcl. His idea of how Tcl and 
C development would be split didn't match reality given faster machines, more memory, etc.


Very enlightening. Many thanks.


The second thing is that Tcl had multiple interpreters baked in pretty much 
from the beginning at the C level and exposed fairly early on (1996?) at the 
Tcl level, akin to PEP 554. Code isolation and resource management were the key 
motivators, but of course others followed. Creating and using Tcl interpreters 
was quick, lightweight (fast startup, low memory overhead, etc.) and easy. So 
in other words, the notion of multiple interpreters in Tcl vs. Python is 
completely different. I had one large application I built around that time that 
often ended up with hundreds of interpreters running.


Not familiar with the concept so can't say atm if tkinter can make any 
use of this. All tkinter-using code I've seen so far only ever uses a 
single tkinter.Tk() -- thus a single interpreter.



Which brings me to threads and how they were added to the language. Your guess ("My guess for 
the decision is it was the easiest way to migrate the code base") is incorrect. The idea of 
"one thread/one interpreter" was just not seen as a restriction, and was a very natural 
extension of what had come before. It fit the use cases well (AOLserver was another good example) 
and was still very understandable from the user level. Contrast with Python's GIL, etc.


I'm not actually suggesting any changes to Tcl as a language, only to 
its C interface (details follow).
AFAIK Tcl also advertises itself as an embeddable language as its main 
selling point, having been developed primarily as an interface to Tk 
rather than a self-sufficient language (or, equivalently, that being its 
primary use case now). Having to do an elaborate setup with lots of 
custom logic to be able to embed it is a major roadblock. This can be 
the leverage.


From the C interface's standpoint, an interpreter is effectively a bunch 
of data that can be passed to APIs. Currently, all Tcl_* calls with a 
specific interpreter instance must be made from the same thread, and 
this fact enforces sequential access. I'm suggesting to wrap all these 
public APIs with an interpreter-specific lock -- so calls can be made 
from any OS thread, and the lock enforces sequential access. For Tcl's 
execution model and existing code, nothing will change.
The downside (that will definitely be brought up) is the overhead, of 
course. The question is thus whether the above-mentioned benefit 
outweighs it.
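
To make the suggestion more tangible, here is the same idea expressed as 
a Python analogue rather than the actual Tcl C code (the class and names 
are purely illustrative): every public entry point of an interpreter-like 
object acquires that instance's own lock, so calls may come from any OS 
thread while access stays sequential.

import functools
import threading

class LockedInterp:
    # Illustrative analogue of wrapping an interpreter's public API
    # with a per-instance lock instead of a per-thread ownership rule.
    def __init__(self, interp):
        self._interp = interp           # the wrapped, non-thread-safe object
        self._lock = threading.RLock()  # one lock per interpreter instance

    def __getattr__(self, name):
        attr = getattr(self._interp, name)
        if not callable(attr):
            return attr

        @functools.wraps(attr)
        def locked_call(*args, **kwargs):
            # Sequential access is enforced here, not by the calling thread's identity.
            with self._lock:
                return attr(*args, **kwargs)

        return locked_call

For existing single-threaded users nothing changes except the (uncontended) 
lock acquisition, which is exactly the overhead trade-off mentioned above.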



With that all said, there would be very little motivation to change the Tcl/Tk side to allow multiple threads 
to access one interpreter, because in terms of the API and programming model that Tcl/Tk advertises, it's 
simply not a problem. Keep in mind, the people working on the Tcl/Tk core are very smart programmers, know 
threads very well, etc., so it's not an issue of "they should know better" or "it's old." 
In other words, "show them how it's defective" is a non-starter.

The other, more practical matter in pushing for changes in the Tcl/Tk core, is 
that there are a fairly small number of people working on it, very part-time. 
Almost all of them are most interested in the Tcl side, not Tk. Changes made in 
Tk most often amount to bug fixes because someone's running into something in 
their own work. Expecting large-scale changes to happen to Tk without some way 
to get dedicated new resources put into it is not realistic.

A final matter on the practical side. As you've carefully noted, certain Tcl/Tk calls now 
happen to work when called from different threads. Consider those a side-effect of 
present implementation, not a guarantee. Future core changes could change what can be 
called from different threads, making the situa

Re: [Python-Dev] [python-committers] FINAL WEEK FOR 3.7.0 CHANGES!

2018-05-18 Thread Ivan Pozdeev via Python-Dev

On 18.05.2018 10:55, Serhiy Storchaka wrote:

17.05.18 21:39, Brett Cannon wrote:
Maybe we should start thinking about flagging PRs or issues as 
needing a What's New entry to help track when they need one, or 
always expect it in a PR and ignore that requirement when a 'skip 
whats new' label is applied. That would at least make it easier to 
keep track of what needs to be done.


The requirement of flagging PRs or issues as needing a What's New 
entry doesn't differ in principle from the requirement of creating a 
What's New entry for these changes. The latter is good, and I always 
try to create a What's New entry for a significant enhancement or 
potentially breaking change. And even I am sometimes unsure and don't 
document some important changes (like in issue30399). A second pair of 
eyes is needed.


As for requiring a What's New entry by default and introducing a 'skip 
whats new' label, I suppose this will add much nuisance. Most PRs 
(except docs and tests changes) need a news entry, but most PRs don't 
need a What's New entry because they are bug fixes. Therefore a 'skip 
whats new' label would be required much more often than the 'skip news' 
or 'skip issue' labels.


Since Python uses semantic versioning (https://semver.org), the 
criterion for "What's New-worthy" changes is simple: they are _public 
interface changes_ (which include visible changes to documented behavior).
(I maintain that changes to behavior that is not documented -- incl. 
issue30399 -- are _not_ public interface changes, and whoever relies on 
them does so at their own risk.)


Reading previous What's New documents, I see that they are structured like this:
* Entries for major changes:
    * General design decisions
    * Changes that fall into a category (more recent What's New's 
include about a dozen categories)

* "Other": the list of the rest

So, it makes sense to mark work items as "interface change" or 
something, and optionally with a category if that category is established.
You can't make a mistake here 'cuz a public interface change requires an 
edit to the related documentation.
A thing that can help is a tool that makes a structural diff between 
NEWS files for different versions and between different branches. It 
would filter out bugfix changes. A simple 'diff' is not well suited 
because entries can be in a different order, news entries are now 
scattered between several files, news entries for the previous version 
sometimes have to be searched for in different files, and sometimes on 
a different branch. The text of entries in different versions can also 
differ because the same issue can change the behavior on master and 
backport part of the changes as a bugfix.
Not all bugs apply to all branches, or even to multiple branches, so 
that wouldn't filter them out reliably.





--
Regards,
Ivan



Re: [Python-Dev] Normalisation of unicode and keywords

2018-05-18 Thread Ivan Pozdeev via Python-Dev

On 18.05.2018 14:46, Steven D'Aprano wrote:

Stephan Houben noticed that Python apparently allows identifiers to be
keywords, if you use Unicode "mathematical bold" letters. His
explanation is that the identifier is normalised, but not until after
keywords are checked for. So this works:

class Spam:
    locals()['if'] = 1


Spam.𝐢𝐟  # U+1D422 U+1D41F
# returns 1


Of course Spam.if fails with SyntaxError.

Should this work? Is this a bug, a feature, or an accident of
implementation we can ignore?

Voting for bug:
Either those identifiers should be considered equal, or they shouldn't. 
They can't be considered "partially" equal.
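
For reference, the mechanism behind this (an illustration, not a statement 
about where exactly the parser performs the keyword check): identifiers are 
NFKC-normalized per PEP 3131, and NFKC maps the mathematical-bold letters to 
plain ASCII, so the attribute access ends up using the string 'if'.

import unicodedata

ident = "\U0001D422\U0001D41F"  # the two characters from the example above
print(unicodedata.normalize("NFKC", ident))          # -> if
print(unicodedata.normalize("NFKC", ident) == "if")  # -> True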





--
Regards,
Ivan



Re: [Python-Dev] My fork lacks a 3.7 branch - can I create it somehow?

2018-05-22 Thread Ivan Pozdeev via Python-Dev

On 22.05.2018 3:07, Skip Montanaro wrote:

My GitHub fork of the cpython repo was made awhile ago, before a 3.7 branch
was created. I have no remotes/origin/3.7. Is there some way to create it
from remotes/upstream/3.7? I asked on GitHub's help forums. The only
recommendation was to to delete my fork and recreate it. That seemed kind
of drastic, and I will do it if that's really the only way, but this seems
like functionality Git and/or GitHub probably supports.

Thx,
You don't really need copies of official branches on your Github fork if 
you're not a maintainer for these branches.
(You'll have to keep master though AFAIK since Git needs some branch to 
be marked as "default".)


It's sufficient to just have topic branches for PRs there: you take 
official branches from python/cpython and topic branches from your fork, 
do the edits and manipulations locally, then upload the changed topic 
branches to your fork.
I found this easier than having everything in your fork 'cuz it saves 
you the hassle of keeping your copies up-to-date and having unexpected 
merge conflicts in your PRs if the copies get out of date.
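
For completeness, a rough sketch of both routes (assuming a remote named 
`upstream` pointing at python/cpython and `origin` pointing at the fork; 
the branch name is just an example):

# one-time setup, if the upstream remote isn't there yet
git remote add upstream https://github.com/python/cpython.git
git fetch upstream

# option 1: publish a copy of 3.7 to the fork (usually unnecessary)
git push origin upstream/3.7:refs/heads/3.7

# option 2 (the workflow described above): branch topic work off upstream/3.7
git checkout -b bpo-XXXXX-my-fix upstream/3.7
# ...edit, commit...
git push origin bpo-XXXXX-my-fix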



Skip


--
Regards,
Ivan



Re: [Python-Dev] PEP 574 (pickle 5) implementation and backport available

2018-05-25 Thread Ivan Pozdeev via Python-Dev

On 25.05.2018 20:36, Raymond Hettinger wrote:



On May 24, 2018, at 10:57 AM, Antoine Pitrou  wrote:

While PEP 574 (pickle protocol 5 with out-of-band data) is still in
draft status, I've made available an implementation in branch "pickle5"
in my GitHub fork of CPython:
https://github.com/pitrou/cpython/tree/pickle5

Also I've published an experimental backport on PyPI, for Python 3.6
and 3.7.  This should help people play with the new API and features
without having to compile Python:
https://pypi.org/project/pickle5/

Any feedback is welcome.

Thanks for doing this.

Hope it isn't too late, but I would like to suggest that protocol 5 support 
fast compression by default.  We normally pickle objects so that they can be 
transported (saved to a file or sent over a socket). Transport costs (reading 
and writing a file or socket) are generally proportional to size, so 
compression is likely to be a net win (much as it was for header compression in 
HTTP/2).

The PEP lists compression as a possible refinement only for large objects, 
but I expect it will be a win for most pickles to compress them in their 
entirety.


I would advise against that. The pickle format is unreadable as it is; 
compression will make it literally impossible to diagnose problems.

Python supports transparent compression, e.g. with the 'zlib' codec.
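
In other words, callers who want compression can opt in explicitly at the 
call site while plain pickles stay inspectable by default; a minimal sketch:

import pickle
import zlib

obj = {"numbers": list(range(1000))}

# Opt-in compression around an ordinary pickle round-trip.
blob = zlib.compress(pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL))
restored = pickle.loads(zlib.decompress(blob))
assert restored == obj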



Raymond


--
Regards,
Ivan



Re: [Python-Dev] How to watch buildbots?

2018-05-30 Thread Ivan Pozdeev via Python-Dev

On 30.05.2018 13:01, Victor Stinner wrote:

Hi,

I would like to delegate the maintenance task "watch buildbots", since
I'm already very busy with many other maintenance tasks. I'm looking
for volunteers to handle incoming emails on buildbot-status. I already
started to explain to Pablo Galindo Salgado how to do that, but it
would be great to have at least two people doing this task. Otherwise,
Pablo wouldn't be able to take holiday or just make a break for any
reason. Buildbots are evil beasts which require care every day.
Otherwise, they quickly turn red and become less useful :-(

It seems like the first blocker issue is that we have no explicit
documentation "how to deal with buildbots?" (the devguide
documentation is incomplete, it doesn't explain what I'm explaining
below). Let me start with a few notes of how I watch buildbots.

I'm getting buildbot notifications on IRC (#python-dev on Freenode)
and on the buildbot-status mailing list:
https://mail.python.org/mm3/mailman3/lists/buildbot-status.python.org/

When a buildbot fails, I look at tests logs and I try to check if an
issue has already been reported. For example, search for the test
method in title (ex: "test_complex" for test_complex() method). If no
result, search using the test filename (ex: "test_os" for
Lib/test/test_os.py). If there is no result, repeat with full-text
searches ("All Text"). If you cannot find any open bug, create a new
one:

* The title should contain the test name, test method and the buildbot
name. Example: " test_posix: TestPosixSpawn fails on PPC64 Fedora
3.x".
* The description should contain the link to the buildbot failure. Try
to identify useful parts of tests log and copy them in the
description.
* Fill the Python version field (ex: "3.8" for 3.x buildbots)
* Select at least the "Tests" Component. You may select additional
Components depending on the bug.

If a bug was already open, you may add a comment to mention that there
is a new failure: add at least a link to buildbot name and a link to
the failure.

And that's all! Simple, isn't it? At this stage, there is no need to
investigate the test failure.

To finish, reply to the failure notification on the mailing list with
a very short email: add a link to the existing or the freshly created
issue, maybe copy one line of the failure and/or the issue title.

Recent bug example: https://bugs.python.org/issue33630

--

Later, you may want to analyze these failures, but I consider that
it's a different job (different "maintenance task"). If you don't feel
able to analyze the bug, you may try to find someone who knows more
than you about the failure.

For better bug reports, you can look at the [Changes] tab of a build
failure, and try to identify which recent change introduced the
regression. This task requires to follow recent commits, since
sometimes the failure is old, it's just that the test fails randomly
depending on network issues, system load, or anything else. Sometimes,
previous tests have side effects. Or the buildbot owner made a change
on the system. There are many different explanations; it's hard to
write a complete list. It's really on a case by case basis.

Hopefully, it's now more common that a buildbot failure is obvious and
caused by a very specific recent changes which can be found in the
[Changes] tab.

--

If you are interested to help me on watching our CIs: please come on
the python-build...@python.org mailing list! Introduce yourself and
explain how do you plan to help. I may propose to mentor you to assist
you the first weeks.

As I wrote, maybe a first step would be to write down a documentation
how to deal with buildbots and/or update and complete existing
documentations.

https://devguide.python.org/buildbots/

Victor


What's the big idea of separate buildbots anyway? I thought the purpose 
of CI is to test everything _before_
it breaks the main codebase. Then it's the job of the contributor rather 
than the maintainer to fix any breakages.


So, maybe making them driven by GitHub checks would be a better time 
investment.
Especially since we've got VSTS checks just recently, so whoever was 
doing that still knows how to interface with this Github machinery.


If the bots cancel a previous build when a new one for the same PR 
arrives, this will not lead to a significant load difference 'cuz the 
number of 
actively developed PRs is stable and roughly equal to the number of 
merges according to the open/closed tickets dynamics.


--
Regards,
Ivan



Re: [Python-Dev] What is the command to upgrade python 3.6.5 to 3.7.5?

2018-05-31 Thread Ivan Pozdeev via Python-Dev

https://stackoverflow.com/questions/15102943/how-to-update-python/50616351#50616351

On 01.06.2018 5:22, Jonathan Tsang via Python-Dev wrote:

Hi Dev. Support,

 Is there a command that can help me to upgrade python 3.6.5 to 3.7.5 
without uninstall and reinstall please?


Thanks,
Jonathan




--
Regards,
Ivan



Re: [Python-Dev] How to watch buildbots?

2018-05-31 Thread Ivan Pozdeev via Python-Dev

On 30.05.2018 16:36, Nick Coghlan wrote:
On 30 May 2018 at 22:30, Ivan Pozdeev via Python-Dev 
<python-dev@python.org> wrote:


What's the big idea of separate buildbots anyway? I thought the
purpose of CI is to test everything _before_ it breaks the main
codebase. Then it's the job of the contributor rather than the
maintainer to fix any breakages.

So, maybe making them driven by GitHub checks would be a better time
investment. Especially since we've got VSTS checks just recently, so
whoever was doing that still knows how to interface with this GitHub
machinery.

If the bots cancel a previous build when a new one for the same PR
arrives, this will not lead to a significant load difference 'cuz the
number of actively developed PRs is stable and roughly equal to the
number of merges according to the open/closed tickets dynamics.


There are a few key details here:

1. We currently need to run post-merge CI anyway, as we're not doing 
linearised commits (where core devs just approve a change without 
merging it, and then a gating system like Zuul ensures that the tests 
are run against the latest combination of the target branch and the PR 
before merging the change)


This is the only point here that looks valid (the others can be refuted). 
This technique limits the achievable commit rate to 1/testing_time. Our 
average rate probably fits into this, though it still means delays.


2. Since the buildbots are running on donated dedicated machines 
(rather than throwaway instances from a dynamic CI provider), we need 
to review the code before we let it run on the contributed systems
3. The buildbot instances run *1* build at a time, which would lead to 
major PR merging bottlenecks during sprints if we made them a gating 
requirement
4. For the vast majority of PRs, the post-merge cross-platform testing 
is a formality, since the code being modified is using lower level 
cross-platform APIs elsewhere in the standard library, so if it works 
on Windows, Linux, and Mac OS X, it will work everywhere Python runs
5. We generally don't *want* to burden new contributors with the task 
of dealing with the less common (or harder to target) platforms 
outside the big 3 - when they do break, it often takes a non-trivial 
amount of platform knowledge to understand what's different about the 
platform in question


Cheers,
Nick.

P.S. That said, if VSTS or Travis were to offer FreeBSD as an option 
for pre-merge CI, I'd suggest we enable it, at least in an advisory 
capacity - it's a better check against Linux-specific assumptions 
creeping into the code base than Mac OS X, since the latter is 
regularly different enough from other *nix systems that we need to 
give it dedicated code paths.


--
Nick Coghlan   | ncogh...@gmail.com | Brisbane, Australia


--
Regards,
Ivan



Re: [Python-Dev] Keeping an eye on Travis CI, AppVeyor and buildbots: revert on regression

2018-06-04 Thread Ivan Pozdeev via Python-Dev

No, replying only to you wasn't intended.

https://docs.travis-ci.com/user/running-build-in-debug-mode/ is the 
official doc on how to debug a Travis CI build via ssh.



On 04.06.2018 22:31, Victor Stinner wrote:

FYI you only replied to me in private. Is it on purpose?

I'm interested if I can learn how to get a SSH access to Travis CI!

2018-06-04 21:05 GMT+02:00 Ivan Pozdeev :

On 04.06.2018 19:31, Victor Stinner wrote:

2018-05-30 11:33 GMT+02:00 Victor Stinner :

I fixed a few tests which failed randomly. There are still a few, but
the most annoying have been fixed.

Quick update a few days later.

For an unknown reason,
test_multiprocessing_forkserver.TestIgnoreEINTR.test_ignore() started
to fail very frequently but only on Travis CI. I have no explanation
why only Travis CI. I failed to reproduce the issue on a Ubuntu Trusty
container or in a Ubuntu Trusty VM. After hours of debug, I found the
bug and wrote a fix. But the fix didn't work in all cases. A second
fix and now it seems like the issue is gone!


FYI Travis claim they provide ssh access on request to debug particularly
pesky issues.
Last time I tried, got no response from them though.


https://bugs.python.org/issue33532 if you are curious about the
strange multiprocessing send() which must block but it doesn't :-)

Except Windows 7 which has issues with test_asyncio and
multiprocessing tests because this buildbot is slow, it seems like
most CIs are now stable.

Known issues:

* PPC64 Fedora 3.x, PPC64LE Fedora 3.x, s390x RHEL 3.x:
https://bugs.python.org/issue33630
* AIX: always red
* USBan: experimental buildbot
* Alpine: platform not supported yet (musl issues)

Victor


--
Regards,
Ivan



--
Regards,
Ivan



Re: [Python-Dev] Microsoft to acquire GitHub for $7.5 billion

2018-06-04 Thread Ivan Pozdeev via Python-Dev


On 04.06.2018 21:46, Ethan Furman wrote:

On 06/04/2018 10:49 AM, Mariatta Wijaya wrote:


I think we shouldn't be speculating or making guesses.


We should have contingency plans and be prepared.  More than one 
bought-out competitor has simply disappeared, or been hamstrung in its 
effectiveness.


Actually, since M$ has closely integrated Python into VS, I'm expecting 
Guido to receive an acquisition offer next!



--
~Ethan~



--
Regards,
Ivan



Re: [Python-Dev] Why not using "except: (...) raise" to cleanup on error?

2018-06-04 Thread Ivan Pozdeev via Python-Dev

On 04.06.2018 20:11, Chris Angelico wrote:

On Tue, Jun 5, 2018 at 2:57 AM, Yury Selivanov  wrote:

On Mon, Jun 4, 2018 at 12:50 PM Chris Angelico  wrote:

On Tue, Jun 5, 2018 at 2:11 AM, Victor Stinner  wrote:

[..]

For me, it's fine to catch any exception using "except:" if the block
contains "raise", typical pattern to cleanup a resource in case of
error. Otherwise, there is a risk of leaking open file or not flushing
data on disk, for example.

Pardon the dumb question, but why is try/finally unsuitable?

Because try..finally isn't equivalent to try..except?  Perhaps you
should look at the actual code:
https://github.com/python/cpython/blob/b609e687a076d77bdd687f5e4def85e29a044bfc/Lib/asyncio/base_events.py#L1117-L1123

Oh. Duh. Yep, it was a dumb question. Sorry! The transport should ONLY
be closed on error.


I smell a big, big design violation here.
The whole point of Exception vs BaseException is that anything not 
Exception is "not an error", has a completely different effect on the 
program than an error, and thus is to be dealt with completely 
differently. For example, warnings do not disrupt the control flow, and 
GeneratorExit is normally handled by the `for` loop machinery.

That's the whole point why except: is strongly discouraged.
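
To illustrate, a minimal sketch of the cleanup-on-error pattern being 
argued for here (the file-based resource is purely illustrative, standing 
in for the transport in the linked code): catch Exception rather than 
using a bare except:, clean up, and re-raise, leaving non-error 
BaseExceptions alone.

def open_and_write(path, data):
    f = open(path, "wb")
    try:
        f.write(data)
        f.flush()
    except Exception:
        # Cleanup on errors only, then re-raise for the caller.
        f.close()
        raise
    # On success, the caller keeps using (and eventually closes) the file.
    return f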

Be _very_ careful because when a system has matured, the risk of making 
bad to disastrous design decisions skyrockets (because "the big picture" 
grows ever larger, and it's ever more difficult to account for all of it).


The best solution I know of is an independent sanity-check against the 
project's core design principles: focus solely on them and say if the 
suggestion is in harmony with the existing big picture. This prevents 
the project from falling victim to 
https://en.wikipedia.org/wiki/Design_by_committee in the long run. This 
is easier to do for someone not intimately involved with the change and 
the affected area 'cuz they are less biased in favor of the change and 
less distracted by minute details.


Someone may take up this role to "provide a unified vision" (to reduce 
the load on a single http://meatballwiki.org/wiki/BenevolentDictator ); 
different projects have tried delegates (which can run afoul of 
https://en.wikipedia.org/wiki/Conway%27s_law ) and a round-robin 
approach (Apache).
The best way, however, would probably be for anyone dealing with a 
design change to remember to make this check.


This is even easier in Python, 'cuz the core values are officially 
formulated as Python Zen, and any module has one or two governing 
principles at its core, tops, that can be extracted by skimming through 
its docs.



ChrisA


--
Regards,
Ivan



Re: [Python-Dev] Why not using "except: (...) raise" to cleanup on error?

2018-06-04 Thread Ivan Pozdeev via Python-Dev

On 04.06.2018 23:52, Ivan Pozdeev wrote:

On 04.06.2018 20:11, Chris Angelico wrote:
On Tue, Jun 5, 2018 at 2:57 AM, Yury Selivanov wrote:
On Mon, Jun 4, 2018 at 12:50 PM Chris Angelico wrote:
On Tue, Jun 5, 2018 at 2:11 AM, Victor Stinner wrote:

[..]

For me, it's fine to catch any exception using "except:" if the block
contains "raise", typical pattern to cleanup a resource in case of
error. Otherwise, there is a risk of leaking open file or not flushing
data on disk, for example.

Pardon the dumb question, but why is try/finally unsuitable?

Because try..finally isn't equivalent to try..except?  Perhaps you
should look at the actual code:
https://github.com/python/cpython/blob/b609e687a076d77bdd687f5e4def85e29a044bfc/Lib/asyncio/base_events.py#L1117-L1123


In this particular code, it looks like just KeyboardInterrupt needs to 
be handled in addition to Exception -- and even that's not certain 'cuz 
KeyboardInterrupt is an abnormal termination and specifically designed 
to not be messed with by the code ("The exception inherits from 
BaseException so as to not be accidentally caught by code that catches 
Exception and thus prevent the interpreter from exiting."). It only 
makes sense to catch it in REPL interfaces where the user clearly wants 
to terminate the current command rather than the entire program.


If e.g. a warning is upgraded to exception, this means that some code is 
broken from user's POV, but not from Python team's POV, so we can't 
really be sure if we can handle this situation gracefully: our cleanup 
code can fail just as well!



Oh. Duh. Yep, it was a dumb question. Sorry! The transport should ONLY
be closed on error.


I smell a big, big design violation here.
The whole point of Exception vs BaseException is that anything not 
Exception is "not an error", has a completely different effect on the 
program than an error, and thus is to be dealt with completely 
differently. For example, warnings do not disrupt the control flow, 
and GeneratorExit is normally handled by the `for` loop machinery.

That's the whole point why except: is strongly discouraged.
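
For readers skimming the thread, here is a minimal, self-contained sketch 
of the cleanup-on-error pattern being discussed (the transport and the 
handshake below are made up for illustration; this is not asyncio's 
actual code):

    class FakeTransport:
        def close(self):
            print("transport closed")

    def do_handshake(transport):
        raise RuntimeError("handshake failed")   # simulate an error

    def start(transport):
        try:
            do_handshake(transport)
        except BaseException:
            # Cleanup on error only: close, then re-raise so the caller
            # still sees the original exception.  A try/finally here would
            # close the transport on the success path too, which is not
            # what is wanted.
            transport.close()
            raise

    try:
        start(FakeTransport())
    except RuntimeError:
        pass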

Be _very_ careful because when a system has matured, the risk of 
making bad to disastrous design decisions skyrockets (because "the big 
picture" grows ever larger, and it's ever more difficult to account 
for all of it).


The best solution I know of is an independent sanity-check against the 
project's core design principles: focus solely on them and say if the 
suggestion is in harmony with the existing big picture. This prevents 
the project from falling victim to 
https://en.wikipedia.org/wiki/Design_by_committee in the long run. 
This is easier to do for someone not intimately involved with the 
change and the affected area 'cuz they are less biased in favor of the 
change and less distracted by minute details.


Someone may take up this role to "provide a unified vision" (to reduce 
the load on a single http://meatballwiki.org/wiki/BenevolentDictator , 
different projects have tried delegates (this can run afoul of 
https://en.wikipedia.org/wiki/Conway%27s_law though) and a round-robin 
approach (Apache)).
The best way, however, would probably be for anyone dealing with a 
design change to remember to make this check.


This is even easier in Python, 'cuz the core values are officially 
formulated as Python Zen, and any module has one or two governing 
principles at its core, tops, that can be extracted by skimming 
through its docs.



ChrisA
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru




--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Why not using "except: (...) raise" to cleanup on error?

2018-06-04 Thread Ivan Pozdeev via Python-Dev

On 05.06.2018 0:54, Ivan Pozdeev wrote:

On 04.06.2018 23:52, Ivan Pozdeev wrote:

On 04.06.2018 20:11, Chris Angelico wrote:
On Tue, Jun 5, 2018 at 2:57 AM, Yury Selivanov 
 wrote:
On Mon, Jun 4, 2018 at 12:50 PM Chris Angelico  
wrote:
On Tue, Jun 5, 2018 at 2:11 AM, Victor Stinner 
 wrote:

[..]
For me, it's fine to catch any exception using "except:" if the block
contains "raise", typical pattern to cleanup a resource in case of
error. Otherwise, there is a risk of leaking open file or not flushing
data on disk, for example.

Pardon the dumb question, but why is try/finally unsuitable?

Because try..finally isn't equivalent to try..except? Perhaps you
should look at the actual code:
https://github.com/python/cpython/blob/b609e687a076d77bdd687f5e4def85e29a044bfc/Lib/asyncio/base_events.py#L1117-L1123 



In this particular code, it looks like just KeyboardInterrupt needs to 
be handled in addition to Exception -- and even that's not certain 
'cuz KeyboardInterrupt is an abnormal termination and specifically 
designed to not be messed with by the code ("The exception inherits 
from BaseException so as to not be accidentally caught by code that 
catches Exception and thus prevent the interpreter from exiting.").




It only makes sense to catch it in REPL interfaces where the user 
clearly wants to terminate the current command rather than the entire 
program.


`pip` comes to mind, too -- there, catching it is justified by pip 
working in transactions.


If e.g. a warning is upgraded to exception, this means that some code 
is broken from user's POV, but not from Python team's POV, so we can't 
really be sure if we can handle this situation gracefully: our cleanup 
code can fail just as well!



Oh. Duh. Yep, it was a dumb question. Sorry! The transport should ONLY
be closed on error.


I smell a big, big design violation here.
The whole point of Exception vs BaseException is that anything not 
Exception is "not an error", has a completely different effect on the 
program than an error, and thus is to be dealt with completely 
differently. For example, warnings do not disrupt the control flow, 
and GeneratorExit is normally handled by the `for` loop machinery.

That's the whole point why except: is strongly discouraged.

Be _very_ careful because when a system has matured, the risk of 
making bad to disastrous design decisions skyrockets (because "the 
big picture" grows ever larger, and it's ever more difficult to 
account for all of it).


The best solution I know of is an independent sanity-check against 
the project's core design principles: focus solely on them and say if 
the suggestion is in harmony with the existing big picture. This 
prevents the project from falling victim to 
https://en.wikipedia.org/wiki/Design_by_committee in the long run. 
This is easier to do for someone not intimately involved with the 
change and the affected area 'cuz they are less biased in favor of 
the change and less distracted by minute details.


Someone may take up this role to "provide a unified vision" (to 
reduce the load on a single 
http://meatballwiki.org/wiki/BenevolentDictator , different projects 
have tried delegates (this can run afoul of 
https://en.wikipedia.org/wiki/Conway%27s_law though) and a 
round-robin approach (Apache)).
The best way, however, would probably be for anyone dealing with a 
design change to remember to make this check.


This is even easier in Python, 'cuz the core values are officially 
formulated as Python Zen, and any module has one or two governing 
principles at its core, tops, that can be extracted by skimming 
through its docs.



ChrisA
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru




--
Regards,
Ivan


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Docstrings on builtins

2018-06-04 Thread Ivan Pozdeev via Python-Dev

On 05.06.2018 3:09, Matthias Bussonnier wrote:

This may even be a bug/feature of IPython,

I see that inspect.signature(timedelta) fails, so if timedelta? says
Init signature: timedelta(self, /, *args, **kwargs)
Then this may be some IPython internal logic. The timedelta class seem 
to use __new__ instead of __init__ (not sure why)


Because it's an immutable type.


and __new__ have a meaningful signature,
So maybe we should fallback on that during signature inspection.

According to 
https://stackoverflow.com/questions/4374006/check-for-mutability-in-python ,

there are no reliable tests for mutability.


Feel free to open an issue on the IPython repo.

Btw IPython is uppercase I, and we don't want any trouble with the 
fruit giant.

--
M

On Mon, 4 Jun 2018 at 16:30, Chris Barker via Python-Dev 
<python-dev@python.org> wrote:


On Mon, Jun 4, 2018 at 3:27 PM, Victor Stinner
<vstin...@redhat.com> wrote:

For Argument Clinic, have a look at
https://docs.python.org/dev/howto/clinic.html


Thanks Victor -- scanning that page, it is indeed where I needed
to look.

You can also try to copy/paste code from other files using
Argument
Clinic and then run "make clinic" to regenerate the generated
files.


good idea.

Now to find some time to actually work on this...

-CHB


Victor

2018-06-04 23:45 GMT+02:00 Chris Barker via Python-Dev
<python-dev@python.org>:
> Over on python-ideas, someone is/was proposing literals for
timedeltas.
>
> I don't expect that will come to anything, but it did make
me take a look at
> the docstring for datetime.timedelta. I use iPython's ? a
lot for a quick
> overview of how to use a class/function.
>
> This is what I get:
>
> In [8]: timedelta?
> Init signature: timedelta(self, /, *args, **kwargs)
> Docstring:      Difference between two datetime values.
> File:  ~/miniconda2/envs/py3/lib/python3.6/datetime.py
> Type:           type
>
>
> That is, well, not so useful. I'd like to see at least the
signature:
>
> datetime.timedelta(days=0, seconds=0, microseconds=0,
milliseconds=0,
> minutes=0, hours=0, weeks=0
>
> And ideally much of the text in the docs.
>
> I've noticed similarly minimal docstrings on a number of
builtin functions
> and methods.
>
> If I wanted to contribute a PR to enhance these docstrings,
where would they
> go?  I've seen mention of "argument clinic", but really
don't know quite
> what that is, or how it works, but it appears to be related.
>
> Anyway -- more comprehensive docstrings on buildins could
really help
> Python's usability for command line usage.
>
> Thanks,
> -  Chris
>
>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959  voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317  main reception
>
> chris.bar...@noaa.gov 
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org 
> https://mail.python.org/mailman/listinfo/python-dev
> Unsubscribe:
>
https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com
>




-- 


Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov 
___
Python-Dev mailing list
Python-Dev@python.org 
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:

https://mail.python.org/mailman/options/python-dev/bussonniermatthias%40gmail.com



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Docstrings on builtins

2018-06-05 Thread Ivan Pozdeev via Python-Dev

On 05.06.2018 17:56, Chris Barker wrote:

OK,

looking a bit deeper:

In [69]: timedelta.__new__.__doc__
Out[69]: 'Create and return a new object.  See help(type) for accurate 
signature.'


In [70]: timedelta.__init__.__doc__
Out[70]: 'Initialize self.  See help(type(self)) for accurate signature.'

In [71]: timedelta.__doc__
Out[71]: 'Difference between two datetime values.'

So the none of the docstrings have the proper information. And:

help(timedelta) returns:

Help on class timedelta in module datetime:

class timedelta(builtins.object)
 |  Difference between two datetime values.
 |
 |  Methods defined here:
 |
 |  __abs__(self, /)
 |  abs(self)
 |
 |  __add__(self, value, /)
 |  Return self+value.


So no signature either.

I'm guessing this is because Argument Clinic has not been properly 
applied -- so I have a PR to work on.


but where does help() get its info anyway?

I always thought docstrings were supposed to be used for the basic, 
well, docs. And between the class and __new__ and __init__, somewhere 
in there you should learn how to initialize an instance, yes?



In [5]: print(str.__doc__)
str(object='') -> str
str(bytes_or_buffer[, encoding[, errors]]) -> str

Create a new string object from the given object. If encoding or
errors is specified <...>

As you can see, the start of the type's docstring contains constructor 
signature(s).


Timedelta's one should probably do the same.


-CHB





On Mon, Jun 4, 2018 at 6:21 PM, Matthias Bussonnier 
<bussonniermatth...@gmail.com> wrote:




On Mon, 4 Jun 2018 at 17:29, Ivan Pozdeev via Python-Dev
<python-dev@python.org> wrote:

On 05.06.2018 3:09, Matthias Bussonnier wrote:

This may even be a bug/feature of IPython,

I see that inspect.signature(timedelta) fails, so if
timedelta? says
Init signature: timedelta(self, /, *args, **kwargs)
Then this may be some IPython internal logic. The timedelta
class seem to use __new__ instead of __init__ (not sure why)


Because it's an immutable type.

Ah, yes, thanks.


and __new__ have a meaningful signature,
So maybe we should fallback on that during signature inspection.


According to

https://stackoverflow.com/questions/4374006/check-for-mutability-in-python

<https://stackoverflow.com/questions/4374006/check-for-mutability-in-python>
,
there are no reliable tests for mutability.

Sure, but we can test if the signature of __init__ is (self, /,
*args, **kwargs), and if it is, it is useless and we can attempt to
get the signature from __new__ and show that instead.  We do
similar things for docstrings: if __init__ has no docstring we
look at the class-level docstring.
-- 
M
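
A rough sketch of that fallback (this is not IPython's actual 
implementation; the helper name is made up for illustration):

    import inspect
    from datetime import timedelta

    def guess_signature(cls):
        # If __init__ only exposes the generic (self, /, *args, **kwargs)
        # inherited from object, the class most likely does its work in
        # __new__, so try that instead.
        generic = "(self, /, *args, **kwargs)"
        try:
            sig = inspect.signature(cls.__init__)
            if str(sig) != generic:
                return sig
        except (ValueError, TypeError):
            pass
        try:
            return inspect.signature(cls.__new__)
        except (ValueError, TypeError):
            return None

    # For timedelta this may still return None, because its __new__ has no
    # text signature either -- which is the underlying problem here.
    print(guess_signature(timedelta))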



___
Python-Dev mailing list
Python-Dev@python.org <mailto:Python-Dev@python.org>
https://mail.python.org/mailman/listinfo/python-dev
<https://mail.python.org/mailman/listinfo/python-dev>
Unsubscribe:
https://mail.python.org/mailman/options/python-dev/chris.barker%40noaa.gov
<https://mail.python.org/mailman/options/python-dev/chris.barker%40noaa.gov>




--

Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959   voice
7600 Sand Point Way NE   (206) 526-6329   fax
Seattle, WA  98115       (206) 526-6317   main reception

chris.bar...@noaa.gov <mailto:chris.bar...@noaa.gov>


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Microsoft to acquire GitHub for $7.5 b

2018-06-05 Thread Ivan Pozdeev via Python-Dev

On 05.06.2018 17:28, Martin Gainty wrote:


who owns the Data hosted on Github?

Github Author?
Microsoft?


Martin
https://help.github.com/articles/github-terms-of-service/#d-user-generated-content 
:


"/You own content you create, but you allow us certain rights to it, so 
that we can display and share the content you post. You still have 
control over your content, and responsibility for it, and the rights you 
grant us are limited to those we need to provide the service. We have 
the right to remove content or close Accounts if we need to."/





*From:* Python-Dev  
on behalf of M.-A. Lemburg 

*Sent:* Tuesday, June 5, 2018 7:54 AM
*To:* Antoine Pitrou; python-dev@python.org
*Subject:* Re: [Python-Dev] Microsoft to acquire GitHub for $7.5 billion
Something that may change is the way they treat Github
accounts, after all, MS is very much a sales driven company.

But then there's always the possibility to move to Gitlab
as an alternative (hosted or run on PSF VMs), so I wouldn't
worry too much.

Do note, however, that the value in Github is not so much with
the products they have, but with the data. Their databases
know more about IT developers than anyone else, and the fact
that Github is put under the AI umbrella in MS should tell
us something :-)


On 04.06.2018 19:02, Antoine Pitrou wrote:
>
> That's true, but Microsoft has a lot of stakes in the ecosystem.
> For example, since it has its own CI service that it tries to promote
> (VSTS), is it in Microsoft's best interest to polish and improve
> integrations with other CI services?
>
> Regards
>
> Antoine.
>
>
> On Mon, 4 Jun 2018 09:06:28 -0700
> Guido van Rossum  wrote:
>> On Mon, Jun 4, 2018 at 8:40 AM, Antoine Pitrou 
 wrote:

>>
>>>
>>> On Mon, 4 Jun 2018 17:03:27 +0200
>>> Victor Stinner  wrote:

 At this point, I have no opinion about the event :-) I just guess 
that

 it should make GitHub more sustainable since Microsoft is a big
 company with money and interest in GitHub. I'm also confident that
 nothing will change soon. IMHO there is no need to worry about
 anything.
>>>
>>> It does spell uncertainty on the long term.  While there is no need to
>>> worry for now, I think it gives a different colour to the debate about
>>> moving issues to Github.
>>>
>>
>> I don't see how this *increases* the uncertainty. Surely if GitHub had
>> remained independent there would have been be similar concerns 
about how it

>> would make enough money to stay in business.
>>
>
> ___
> Python-Dev mailing list
> Python-Dev@python.org
> https://mail.python.org/mailman/listinfo/python-dev 






> Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/mal%40egenix.com

>

--
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Jun 05 2018)
>>> Python Projects, Coaching and Consulting ... 
http://www.egenix.com/ 
>>> Python Database Interfaces ... http://products.egenix.com/ 

>>> Plone/Zope Database Interfaces ... http://zope.egenix.com/ 




::: We implement business ideas - efficiently in both time and costs :::

   eGenix.com Software, Skills and Services GmbH Pastor-Loeh-Str.48
    D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
http://www.egenix.com/company/contact/ 


http://www.malemburg.com/ 

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/mgainty%40hotmail.com



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Keeping an eye on Travis CI, AppVeyor and buildbots: revert on regression

2018-06-06 Thread Ivan Pozdeev via Python-Dev

On 06.06.2018 18:10, Victor Stinner wrote:

2018-06-04 21:37 GMT+02:00 Ivan Pozdeev :

https://docs.travis-ci.com/user/running-build-in-debug-mode/ is the official
doc on how to debug a Travis CI build via ssh.

Did you already try it? The doc mentions a "[Debug]" button, but I
cannot see it whereas I'm logged in in the Python organization.
Last I checked, they wrote it's only available for paid accounts (on 
travis-ci.com) by default and only enabled for others on a case-by-case 
basis, but I cannot find this info now.
So I suggest you make a support ticket at 
https://github.com/travis-ci/travis-ci .

I also tried the curl API call but it fails with:

{
   "@type": "error",
   "error_type": "wrong_credentials",
   "error_message": "access denied"
}

curl -s -X POST \
   -H "Content-Type: application/json" \
   -H "Accept: application/json" \
   -H "Travis-API-Version: 3" \
   -H "Authorization: token X" \
   -d "{\"quiet\": true}" \
   https://api.travis-ci.org/job/388706591/debug

where X is my hidden token ;-)

If I use an invalid token ID, I get a different error: just the string
"access denied", instead of a JSON dictionary. First I was also
confused between travis-ci.com and travis-ci.org ... The documentation
shows an example with .com, but Python organization uses .org.

Victor


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] About [].append == [].append

2018-06-21 Thread Ivan Pozdeev via Python-Dev

First, tell us what problem you're solving.

Strictly speaking, bound methods don't have an unambiguous notion of 
equality:


are they equal if they do the same thing, or if they do the same thing 
_on the same object_?


The result that you're seeing is a consequence of that same dichotomy in 
the minds of the .__eq__ designers, and Python Zen advises "In the face 
of ambiguity, refuse the temptation to guess." -- which is what you're 
suggesting.



On 21.06.2018 14:25, Jeroen Demeyer wrote:

Currently, we have:

>>> [].append == [].append
False

However, with a Python class:

>>> class List(list):
... def append(self, x): super().append(x)
>>> List().append == List().append
True

In the former case, __self__ is compared using "is" and in the latter 
case, it is compared using "==".


I think that comparing using "==" is the right thing to do because 
"is" is really an implementation detail. Consider


>>> (1).bit_length == (1).bit_length
True
>>> (1).bit_length == (1+0).bit_length
False

I guess that's also the reason why CPython internally rarely uses "is" 
for comparisons.


See also:
- https://bugs.python.org/issue1617161
- https://bugs.python.org/issue33925

Any opinions?


Jeroen.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] About [].append == [].append

2018-06-21 Thread Ivan Pozdeev via Python-Dev

On 21.06.2018 16:39, Steven D'Aprano wrote:

On Thu, Jun 21, 2018 at 02:33:27PM +0300, Ivan Pozdeev via Python-Dev wrote:


First, tell us what problem you're solving.

You might not be aware of the context of Jereon's question. He is the
author of PEP 579 and 580, so I expect he's looking into implementation
details of the CPython builtin functions and methods.


I see.

`classobject.c:method_richcompare' compares .im_func and .im_self . 
Bound builtin methods should do the same, obviously -- preferably, even 
use the same code.
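
Roughly, in Python terms, the semantics referred to look like this 
(a sketch of the comparison logic, not the actual C code):

    def bound_methods_equal(m1, m2):
        # Approximation of method_richcompare: the wrapped function and the
        # bound object are both compared with == (for plain functions, ==
        # falls back to identity).
        return m1.__func__ == m2.__func__ and m1.__self__ == m2.__self__

    class List(list):
        def append(self, x):
            super().append(x)

    print(bound_methods_equal(List().append, List().append))  # True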




https://www.python.org/dev/peps/pep-0579/

https://www.python.org/dev/peps/pep-0580/



Strictly speaking, bound methods don't have an unambiguous notion of
equality:

are they equal if they do the same thing, or if they do the same thing
_on the same object_?

That's a red-herring, because CPython already defines an unambiguous
notion of method equality. The problem is that the notion depends on
whether the method is written in Python or not, and that seems like a
needless difference.



The result that you're seeing is a consequence of that same dichotomy in
the minds of the .__eq__ designers, and Python Zen advises "In the face
of ambiguity, refuse the temptation to guess." -- which is what you're
suggesting.

How do you come to that conclusion? If "refuse the temptation to guess"
applied here, we couldn't do this:

py> "".upper == "".upper
True

(by your reasoning, it should raise an exception).

Note the contrast in treatment of strings with:

py> [].append == [].append
False

(The reason is that "" is cached and reused, and the empty string is
not.)



On 21.06.2018 14:25, Jeroen Demeyer wrote:

[...]


I think that comparing using "==" is the right thing to do because
"is" is really an implementation detail.

+1




--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] About [].append == [].append

2018-06-21 Thread Ivan Pozdeev via Python-Dev

On 21.06.2018 23:40, Guido van Rossum wrote:
I'm with Serhiy here, for mutable values I don't think the methods 
should compare equal, even when the values do. For immutables I don't 
care either way, it's an implementation detail.


In this light, methods rather shouldn't have a rich comparison logic at 
all -- at the very least, until we have a realistic use case and can 
flesh out the requirements for it.


In my previous message, I meant that if they do have that logic, the 
right way is what `method_richcompare' does. And that was apparently 
what the method's author (that you might be familiar with) was thinking 
in 
https://github.com/python/cpython/commit/47b9ff6ba11fab4c90556357c437cb4feec1e853 
-- and even then and there, they were hesitant about the feature's 
usefulness.


But Serhiy has just disproven that this is the right way, which looks 
like the final nail in its coffin.


On Thu, Jun 21, 2018, 12:55 Serhiy Storchaka wrote:


21.06.18 14:25, Jeroen Demeyer пише:
> Currently, we have:
>
>  >>> [].append == [].append
> False
>
> However, with a Python class:
>
>  >>> class List(list):
>  def append(self, x): super().append(x)
>  >>> List().append == List().append
> True

I think this is a bug. These bound methods can't be equal because
they
have different side effect.

The use case for using "is" for __self__ is described by the OP of
issue1617161. I don't know use cases for using "==".

There is a related problem of hashing. Currently
bound methods are not hashable if __self__ is not hashable. This
makes
impossible using them as dict keys.

___
Python-Dev mailing list
Python-Dev@python.org 
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
https://mail.python.org/mailman/options/python-dev/guido%40python.org



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] About [].append == [].append

2018-06-22 Thread Ivan Pozdeev via Python-Dev

On 22.06.2018 19:41, Steven D'Aprano wrote:

On Fri, Jun 22, 2018 at 08:13:44AM -0700, Guido van Rossum wrote:


Honestly it looks to me like the status quo is perfect.

Does this example work for you?

py> (17.1).hex == (17.1).hex
True

But:

py> a = 17.1
py> b = 17.1
py> a.hex == b.hex
False

I know why it happens -- at the REPL, the interpreter uses the same
object for both 17.1 instances when they're part of the same statement,
but not when they're on separate lines. I just don't know whether this
is desirable or not.


Strictly speaking, I can't see anything in the docs about method 
equality semantics.
If that's true, it's an implementation detail, and users shouldn't rely 
on it.
Consequently, anything is "desirable" that is sufficient for the Python 
codebase.


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PySequence_Check but no __len__

2018-06-22 Thread Ivan Pozdeev via Python-Dev

On 22.06.2018 22:07, Terry Reedy wrote:

On 6/22/2018 7:17 AM, Christian Tismer wrote:



My problem is to find out how to deal with a class which has
__getitem__ but no __len__.

The documentation suggests that the length of a sequence can always
be obtained by len().
https://docs.python.org/3/reference/datamodel.html


It says that plainly: "The built-in function len() returns the number 
of items of a sequence. "


https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes 



says that a Sequence has both __getitem__ and __len__.

I am surprised that a C-API function calls something a 'sequence' 
without it having __len__.


A practical sequence check is checking for __iter__ . An iterator 
doesn't necessarily have a defined length -- e.g. a stream or a generator.
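
As a rough illustration of the distinction (a sketch using the stdlib 
abstract base classes):

    from collections.abc import Iterable, Sequence

    def gen():
        yield 1

    # A generator is iterable but is not a Sequence: it has neither
    # __len__ nor __getitem__.
    print(isinstance(gen(), Iterable))      # True
    print(isinstance(gen(), Sequence))      # False

    # A list is both, so len() is well-defined for it.
    print(isinstance([1, 2, 3], Sequence))  # True
    print(len([1, 2, 3]))                   # 3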


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PySequence_Check but no __len__

2018-06-22 Thread Ivan Pozdeev via Python-Dev

On 22.06.2018 22:17, Ivan Pozdeev wrote:

On 22.06.2018 22:07, Terry Reedy wrote:

On 6/22/2018 7:17 AM, Christian Tismer wrote:



My problem is to find out how to deal with a class which has
__getitem__ but no __len__.

The documentation suggests that the length of a sequence can always
be obtained by len().
https://docs.python.org/3/reference/datamodel.html


It says that plainly: "The built-in function len() returns the number 
of items of a sequence. "


https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes 



says that a Sequence has both __getitem__ and __len__.

I am surprised that a C-API function calls something a 'sequence' 
without it having __len__.


A practical sequence check is checking for __iter__ . An iterator 
doesn't necessarily have a defined length -- e.g. a stream or a 
generator.


Now, I know this isn't what 
https://docs.python.org/3/glossary.html#term-sequence says.
But practically, the documentation seems to use "sequence" in the sense 
"finite iterable". Functions that need to know the length of input in 
advance seem to be the minority.


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-23 Thread Ivan Pozdeev via Python-Dev

On 23.06.2018 5:46, Steven D'Aprano wrote:

On Fri, Jun 22, 2018 at 10:59:43AM -0700, Michael Selik wrote:


I've started testing the proposed syntax when I teach. I don't have a
large
sample yet, but most students either dislike it or don't appreciate the
benefits. They state a clear preference for shorter, simpler lines at the
consequence of more lines of code.

Of course they do -- they're less fluent at reading code. They don't
have the experience to judge good code from bad.

The question we should be asking is, do we only add features to Python
if they are easy for beginners? It's not that I especially want to add
features which *aren't* easy for beginners, but Python isn't Scratch and
"easy for beginners" should only be a peripheral concern.


Python's design principles are expressed in the Zen. They rather focus 
on being no more complex than absolutely necessary, without prioritizing 
either beginners or old-timers ("simple is better than complex", 
"complex is better than complicated").



This is partly because students, lacking the experience to instantly
recognize larger constructs, prefer a more concrete approach to
coding. "Good code" is code where the concrete behaviour is more
easily understood. As a programmer gains experience, s/he learns to
grok more complex expressions, and is then better able to make use of
the more expressive constructs such as list comprehensions.


I don't think that's the only dynamic going on here. List comprehensions
are more expressive, but also more declarative and in Python they have nice
parallels with SQL and speech patterns in natural language. The concept of
a comprehension is separate from its particular expression in Python. For
example, Mozilla's array comprehensions in Javascript are/were ugly [0].

Mozilla's array comprehensions are almost identical to Python's, aside
from a couple of trivial differences:

 evens = [for (i of numbers) if (i % 2 === 0) i];

compared to:

 evens = [i for i in numbers if (i % 2 == 0)]

- the inexplicable (to me) decision to say "for x of array" instead of
   "for x in array";

- moving the expression to the end, instead of the beginning.

The second one is (arguably, though not by me) an improvement, since it
preserves a perfect left-to-right execution order within the
comprehension.

  

Students who are completely new to programming can see the similarity of
list comprehensions to spoken language.

o_O

I've been using comprehensions for something like a decade, and I can't
:-)

The closest analogy to comprehensions I know of is set builder notation
in mathematics, which is hardly a surprise. That's where Haskell got the
inspiration from, and their syntax is essentially an ASCIIfied version
of set builder notation:

Haskell: [(i,j) | i <- [1,2], j <- [1..4]]

Maths:   {(i,j) : i ∈ {1, 2}, j ∈ {1...4}}

I teach secondary school children maths, and if there's a plain English
natural language equivalent to list builder notation, neither I nor any
of my students, nor any of the text books I've read, have noticed it.






--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] We now have C code coverage!

2018-06-23 Thread Ivan Pozdeev via Python-Dev

On 23.06.2018 13:52, Paul Moore wrote:

On 22 June 2018 at 23:21, Brett Cannon  wrote:

Thanks to a PR from Ammar Askar we now run Python under lcov as part of the
code coverage build. And thanks to codecov.io automatically merging code
coverage reports we get a complete report of our coverage (the first results
of which can now be seen at https://codecov.io/gh/python/cpython).

And funny enough the coverage average changed less than 1%. :)

Nice!

One thing I noticed, code that's Windows-specific isn't covered. I
assume that's because the coverage reports are based on runs of the
test suite on Linux. Is it possible to merge in data from the Windows
test runs? If not, what's the best way to address this? Should we be
mocking things to attempt to test Windows-specific code even on Linux,
or should we simply accept that we're not going to achieve 100%
coverage and not worry about it?


AFAICS lcov is based on gcov which is GCC-specific.


Paul
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-24 Thread Ivan Pozdeev via Python-Dev

On 24.06.2018 9:53, Chris Angelico wrote:

On Sun, Jun 24, 2018 at 4:33 PM, Nick Coghlan  wrote:

On 24 June 2018 at 15:56, Steven D'Aprano  wrote:

On Sun, Jun 24, 2018 at 02:33:59PM +1000, Nick Coghlan wrote:


Given that PEP 572 *is* proposing implicit comprehension state export,

"Implicit" and "explicit" are two terms which often get misused to mean
"I don't like it" and "I do like it".

Making the intentional choice to use an assignment expression is not
really "implicit" in any meaningful sense.


My 2c.
An expression is intuitively thought to be self-contained, i.e. without 
side effects:
if I write `a=b+1`, I'm not expecting it to do anything except assign 
`a'.


Expressions with side effects have long since proven to be problematic 
because of the implicit (thus hard to see and track) links they create 
(and because the result depends on the order of evaluation).
Moreover, Python's other design elements have been consistently 
discouraging expressions with side effects, too (e.g. mutator methods 
intentionally return None instead of the new value, making them useless 
in expressions), so the proposition is in direct conflict with the 
language's design.


Assignment expressions are a grey area: they carry the full implications 
of expressions with side effects described above, but their side effect 
is their only effect, i.e. they are explicit and prominent about the 
"evil" they do.



No, it's actually implicit: there's an extra "global NAME" or
"nonlocal NAME" in the equivalent code for a comprehension that isn't
there in the as-written source code, and doesn't get emitted for a
regular assignment expression or for the iteration variable in a
comprehension - it only shows up due to the defined interaction
between comprehensions and assignment expressions.

The implicit "nonlocal NAME" is only because there is an equally
implicit function boundary. Why is there a function boundary marked by
square brackets? It's not saying "def" or "lambda", which obviously
create functions. It's a 'for' loop wrapped inside a list display.
What part of that says "hey, I'm a nested function"?

So if there's an implicit function, with implicit declaration of a
magical parameter called ".0", why can't it have an equally implicit
declaration that "spam" is a nonlocal name?

ChrisA
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-25 Thread Ivan Pozdeev via Python-Dev

On 25.06.2018 2:30, Greg Ewing wrote:

Guido van Rossum wrote:

Greg seem to be +0 or better for (a)


Actually, I'm closer to -1 on (a) as well. I don't like := as a
way of getting assignment in an expression. The only thing I would
give a non-negative rating is some form of "where" or "given".

"as" was suggested even before is became a keyword in `with'. ( if 
(re.match(regex,line) as m) is not None:  )


The only objective objection I've heard is it's already used in `import' 
and `with' -- but that's perfectly refutable.




Brief summary of reasons for disliking ":=":

* Cryptic use of punctuation

* Too much overlap in functionality with "="

* Asymmetry between first and subsequent uses of the bound value

* Makes expressions cluttered and hard to read to my eyes



--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-25 Thread Ivan Pozdeev via Python-Dev

On 25.06.2018 14:44, Nick Coghlan wrote:

On 25 June 2018 at 02:24, Guido van Rossum  wrote:

A quick follow-up: PEP 572 currently has two ideas: (a) introduce := for
inline assignment, (b) when := is used in a comprehension, set the scope for
the target as if the assignment occurred outside any comprehensions. It
seems we have more support for (a) than for (b) -- at least Nick and Greg
seem to be +0 or better for (a)

Right, the proposed blunt solution to "Should I use 'NAME = EXPR' or
'NAME := EXPR'?" bothers me a bit, but it's the implementation
implications of parent local scoping that I fear will create a
semantic tar pit we can't get out of later.


but -1 for (b). IIRC (b) originated with
Tim. But his essay on the topic, included as Appendix A
(https://www.python.org/dev/peps/pep-0572/#appendix-a-tim-peters-s-findings)
does not even mention comprehensions. However, he did post his motivation
for (b) on python-ideas, IIRC a bit before PyCon; and the main text of the
PEP gives a strong motivation
(https://www.python.org/dev/peps/pep-0572/#scope-of-the-target).
Nevertheless, maybe we should compromise and drop (b)?

Unfortunately, I think the key rationale for (b) is that if you
*don't* do something along those lines, then there's a different
strange scoping discrepancy that arises between the non-comprehension
forms of container displays and the comprehension forms:

 (NAME := EXPR,) # Binds a local
 tuple(NAME := EXPR for __ in range(1)) # Doesn't bind a local

 [NAME := EXPR] # Binds a local
 [NAME := EXPR for __ in range(1)] # Doesn't bind a local
 list(NAME := EXPR for __ in range(1)) # Doesn't bind a local

 {NAME := EXPR} # Binds a local
 {NAME := EXPR for __ in range(1)} # Doesn't bind a local
 set(NAME := EXPR for __ in range(1)) # Doesn't bind a local

 {NAME := EXPR : EXPR2} # Binds a local
 {NAME := EXPR : EXPR2 for __ in range(1)} # Doesn't bind a local
 set((NAME := EXPR, EXPR2) for __ in range(1)) # Doesn't bind a local

Those scoping inconsistencies aren't *new*, but provoking them
currently involves either class scopes, or messing about with
locals().


I've got an idea about this.

The fact is, assignments don't make much sense in an arbitrary part of a 
comprehension:
`for' variables are assigned every iteration, so when the result is 
returned, only the final value will be seen.
(And if you need a value every iteration, just go the explicit way and 
add it to the returned tuple.)


Contrary to that, the "feeder" expression is only evaluated once at the 
start -- there, assignments do make sense.

Effectively, it's equivalent to an additional line:

seq = range(calculate_b() as bottom, calculate_t() as top)
results = [calculate_r(bottom,r,top) for r in seq]

So, I suggest to evaluate the "feeder" expression in a local scope but 
expressions that are evaluated every iteration in a private scope.
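
A sketch of what that amounts to in today's Python, using the hypothetical 
helpers from the example above (stubbed out here so the snippet runs as-is):

    def calculate_b(): return 0
    def calculate_t(): return 10
    def calculate_r(bottom, r, top): return (r - bottom) / (top - bottom)

    # "Feeder" part: evaluated once, in the surrounding scope.
    bottom, top = calculate_b(), calculate_t()
    # Per-iteration part: evaluated in the comprehension's own scope.
    results = [calculate_r(bottom, r, top) for r in range(bottom, top)]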


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-26 Thread Ivan Pozdeev via Python-Dev

On 26.06.2018 0:13, Steve Holden wrote:
On Mon, Jun 25, 2018 at 8:37 PM, Terry Reedy wrote:


On 6/24/2018 7:25 PM, Guido van Rossum wrote:

I'd wager that the people who might be most horrified about it


the (b) scoping rule change

would be people who feel strongly that the change to the
comprehension scope rules in Python 3 is a big improvement,


I might not be one of those 'most horrified' by (b), but I
increasingly don't like it, and I was at best -0 on the
comprehension scope change. To me, iteration variable assignment
in the current scope is a non-problem.  So to me the change was
mostly useless churn.  Little benefit, little harm.  And not worth
fighting when others saw a benefit.

However, having made the change to nested scopes, I think we
should stick with them.  Or repeal them.  (I believe there is
another way to isolate iteration names -- see below).  To me, (b)
amounts to half repealing the nested scope change, making
comprehensions half-fowl, half-fish chimeras.

​[...]​

-- 
Terry Jan Reedy


​I'd like to ask: how many readers of ​
​this email have ever deliberately taken advantage of the limited 
Python 3 scope in comprehensions and generator expressions to use what 
would otherwise be a conflicting local variable name?​


I did:

for l in (l.rstrip() for l in f):

The provisional unstripped line variable is totally unneeded in the 
following code.




I appreciate that the scope limitation can sidestep accidental naming 
errors, which is a good thing.


Unfortunately, unless we anticipate Python 4 (or whatever) also making 
for loops have an implicit scope, I am left wondering whether it's not 
too large a price to pay. After all, special cases aren't special 
enough to break the rules, and unless the language is headed towards 
implicit scope for all uses of "for" one could argue that the scope 
limitation is a special case too far. It certainly threatens to be yet 
another confusion for learners, and while that isn't the only 
consideration, it should be given due weight.



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-26 Thread Ivan Pozdeev via Python-Dev

On 26.06.2018 1:58, Greg Ewing wrote:

Chris Angelico wrote:


The wheel turns round and round, and the same spokes come up.




Unless there's a repository of prior discussion, no-one can be bothered 
to gather scraps from around the Net.
Wikis solve this by keeping all the discussion in one place, and even they 
struggle if there are multiple.



A discussion long past, and a discussion yet to come.

There are no beginnings or endings in the Wheel of Python...



--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Policy on refactoring/clean up

2018-06-26 Thread Ivan Pozdeev via Python-Dev

On 26.06.2018 12:00, Jeroen Demeyer wrote:

Hello,

On https://github.com/python/cpython/pull/7909 I encountered friction 
for a PR which I expected to be uncontroversial: it just moves some 
code without changing any functionality.


So basically my question is: is there some CPython policy *against* 
refactoring code to make it easier to read and write? (Note that I'm 
not talking about pure style issues here)


Background: cpython has a source file "call.c" (introduced in 
https://github.com/python/cpython/pull/12) but the corresponding 
declarations are split over several .h files. While working on PEP 
580, I found this slightly confusing. I decided that it would make 
more sense to group all these declarations in a new file "call.h". 
That's what PR 7909 does. In my opinion, the resulting code is easier 
to read. It also defines a clear place for declarations of future 
functionality added to "call.c" (for example, if we add a public API 
for FASTCALL). Finally, I added/clarified a few comments.


I expected the PR to be either ignored or accepted. However, I 
received a negative reaction from Inada Naoki on it.


I don't mind closing the PR and keeping the status quo if there is a 
general agreement. However, I'm afraid that a future reviewer of PEP 
580 might say "your includes are a mess" and he will be right.


AFAICS, your PR is not a strict improvement, that's the reason for the 
"friction".
You may suggest it as a supplemental PR to PEP 580. Or even a part of 
it, but since the changes are controversial, better make the 
refactorings into separate commits so they can be rolled back separately 
if needed.




Jeroen.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Policy on refactoring/clean up

2018-06-26 Thread Ivan Pozdeev via Python-Dev

On 26.06.2018 14:43, Jeroen Demeyer wrote:

On 2018-06-26 13:11, Ivan Pozdeev via Python-Dev wrote:

AFAICS, your PR is not a strict improvement


What does "strict improvement" even mean? Many changes are not strict 
improvements, but still useful to have.


Inada pointed me to YAGNI 
(https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it) but I 
disagree with that premise: there is a large gray zone between 
"completely useless" and "really needed". My PR falls in that gap of 
"nice to have but we can do without it".



You may suggest it as a supplemental PR to PEP 580. Or even a part of
it, but since the changes are controversial, better make the
refactorings into separate commits so they can be rolled back separately
if needed.


If those refactorings are rejected now, won't they be rejected as part 
of PEP 580 also?


This is exactly what the YAGNI principle is about, and Inada was 
right to point to it. Until you have an immediate practical need for 
something, you don't really know the shape and form for it that you will 
be the most comfortable with. Thus any "would be nice to have" 
tinkerings are essentially a waste of time and possibly a degradation, 
too: you'll very likely have to change them again when the real need 
arises -- while having to live with any drawbacks in the meantime.


So, if you suggest those changes together with the PEP 580 PR, they will 
be reviewed through the prism of the new codebase and its needs, which 
are different from the current codebase and its needs.



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Policy on refactoring/clean up

2018-06-26 Thread Ivan Pozdeev via Python-Dev

On 26.06.2018 14:54, INADA Naoki wrote:


On Tue, Jun 26, 2018 at 8:46 PM Jeroen Demeyer <j.deme...@ugent.be> wrote:


On 2018-06-26 13:11, Ivan Pozdeev via Python-Dev wrote:
> AFAICS, your PR is not a strict improvement

What does "strict improvement" even mean? Many changes are not strict
improvements, but still useful to have.

Inada pointed me to YAGNI


​No, YAGNI is posted by someone and they removed their comment.


Yes, that was me instead.
I posted it and then changed my mind. Apparently, notifications were 
sent nonetheless.

I didn't watch the thread and kinda assumed that you pointed that out, too.
(Just to put everything straight and not make anyone suspect I'm trying 
to pull the wool over anyone's eyes here.)




My point was:

Moving code around makes:

  * hard to track history.

  * hard to backport patches to old branches.

https://github.com/python/cpython/pull/7909#issuecomment-400219905

And I prefer keeping definitions relating to​ methods in methodobject.h to
move them to call.h only because they're used/implemented in call.c

(https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it) but I
disagree with that premise: there is a large gray zone between
"completely useless" and "really needed". My PR falls in that gap of
"nice to have but we can do without it".


​So I didn't think even it is "nice to have".​

> You may suggest it as a supplemental PR to PEP 580. Or even a
part of
> it, but since the changes are controversial, better make the
> refactorings into separate commits so they can be rolled back
separately
> if needed.

If those refactorings are rejected now, won't they be rejected as
part
of PEP 580 also?


Real need is important than my preference.  If it is needed PEP 580, 
I'm OK.

But I didn't know which part of the PR is required by PEP 580.

Regards,

--
INADA Naoki <songofaca...@gmail.com>


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Policy on refactoring/clean up

2018-06-26 Thread Ivan Pozdeev via Python-Dev

On 26.06.2018 14:54, Ivan Pozdeev via Python-Dev wrote:

On 26.06.2018 14:43, Jeroen Demeyer wrote:

On 2018-06-26 13:11, Ivan Pozdeev via Python-Dev wrote:

AFAICS, your PR is not a strict improvement


What does "strict improvement" even mean? Many changes are not strict 
improvements, but still useful to have.


Inada pointed me to YAGNI 
(https://en.wikipedia.org/wiki/You_aren%27t_gonna_need_it) but I 
disagree with that premise: there is a large gray zone between 
"completely useless" and "really needed". My PR falls in that gap of 
"nice to have but we can do without it".



You may suggest it as a supplemental PR to PEP 580. Or even a part of
it, but since the changes are controversial, better make the
refactorings into separate commits so they can be rolled back 
separately

if needed.


If those refactorings are rejected now, won't they be rejected as 
part of PEP 580 also?


This is exactly what the YAGNI principle is about, and Inada was 
right to point to it.


Strike this part out since he didn't actually say that as it turned out.

Until you have an immediate practical need for something, you don't 
really know the shape and form for it that you will be the most 
comfortable with. Thus any "would be nice to have" tinkerings are 
essentially a waste of time and possibly a degradation, too: you'll 
very likely have to change them again when the real need arises -- 
while having to live with any drawbacks in the meantime.


So, if you suggest those changes together with the PEP 580 PR, they 
will be reviewed through the prism of the new codebase and its needs, 
which are different from the current codebase and its needs.



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru




--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-27 Thread Ivan Pozdeev via Python-Dev

On 26.06.2018 1:34, Greg Ewing wrote:

Ivan Pozdeev via Python-Dev wrote:
"as" was suggested even before is became a keyword in `with'. ( if 
(re.match(regex,line) as m) is not None:  )


That's not equivalent where/given, though, since it still
has the asymmetry problem.

What do you mean by "asymmetry"? The fact that the first time around, 
it's the expression and after that, the variable?


If that, it's not a "problem". The whole idea is to assign the result of 
a subexpression to something.
If you force any assignments to be outside, it won't be a subexpression 
anymore, but effectively a separate statement -- if not syntactically, 
then visually at least -- both of which are the things the feature's 
purpose is to avoid.


If you seek to force assignments outside, you should've rather suggested 
inline code blocks e.g. like anonymous methods in C# ( { a=foo(); 
b=bar(); return a+b;} ).


Using this assigned result elsewhere in the same expression (akin to 
regex backreferences) is not a part of the basic idea actually.
It depends on the evaluation order (and whether something is evaluated 
at all), so I doubt it should even be allowed -- but even if it is, it's 
a side benefit at best.


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-27 Thread Ivan Pozdeev via Python-Dev

On 27.06.2018 5:36, Guido van Rossum wrote:

[This is my one response today]

On Mon, Jun 25, 2018 at 12:40 PM Terry Reedy wrote:


On 6/24/2018 7:25 PM, Guido van Rossum wrote:
> I'd wager that the people who might be most horrified about it

the (b) scoping rule change

> would be people who feel strongly that the change to the
> comprehension scope rules in Python 3 is a big improvement,

I might not be one of those 'most horrified' by (b), but I
increasingly
don't like it, and I was at best -0 on the comprehension scope
change.
To me, iteration variable assignment in the current scope is a
non-problem.  So to me the change was mostly useless churn. Little
benefit, little harm.  And not worth fighting when others saw a
benefit.


Fair enough, and by itself this might not have been enough reason to 
make the change. But see below.


However, having made the change to nested scopes, I think we should
stick with them.  Or repeal them.  (I believe there is another way to
isolate iteration names -- see  below).  To me, (b) amounts to half
repealing the nested scope change, making comprehensions half-fowl,
half-fish chimeras.


That depends on how you see it -- to me (b) just means that there's an 
implicit nonlocal[1] to make the assignment have the (desirable) 
side-effect.


The key thing to consider here is whether that side-effect is in fact 
desirable. For me, the side-effect of the comprehension's loop control 
variable was never desirable -- it was just an implementation detail 
leaking out. (And that's different from leaking a regular for-loop's 
control variable -- since we have 'break' (and 'else') there are some 
legitimate use cases. But comprehensions try to be expressions, and 
here the side effect is at best useless and at worst a nasty surprise.)


> and who are familiar with the difference in implementation
> of comprehensions (though not generator expressions) in Python 2
vs. 3.

That I pretty much am, I think.  In Python 2, comprehensions (the
fish)
were, at least in effect, expanded in-line to a normal for loop.
Generator expressions (the fowls) were different.  They were, and
still
are, expanded into a temporary generator function whose return
value is
dropped back into the original namespace.  Python 3 turned
comprehensions (with 2 news varieties thereof) into fowls also,
temporary functions whose return value is dropped back in the
original
namespace.  The result is that a list comprehension is equivalent to
list(generator_expression), even though, for efficiency, it is not
implemented that way.  (To me, this unification is more a benefit
than
name hiding.)


Right, and this consistency convinced me that the change was worth it. 
I just really like to be able to say "[... for ...]" is equivalent to 
"list(... for ...)", and similar for set and dict.


"A shorthand to list()/dict()/set()" is actually how I thought of 
comprehensions when I studied them. And I was actually using list() in 
my code for some time before I learned of their existence.
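
A minimal sketch of that equivalence, and of the Python 3 scoping that 
comes with it:

    squares_a = [x * x for x in range(5)]
    squares_b = list(x * x for x in range(5))
    assert squares_a == squares_b
    # in Python 3 the loop variable stays inside the implicit scope:
    # referencing `x' here would raise NameError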



(b) proposes to add extra hidden code in and around the temporary
function to partly undo the isolation.


But it just adds a nonlocal declaration. There's always some hidden 
code ('def' and 'return' at the very least).


list comprehensions would no
longer be equivalent to list(generator_expression), unless
generator_expressions got the same treatment, in which case they
would
no longer be equivalent to calling the obvious generator function.
Breaking either equivalence might break someone's code.


Ah, there's the rub! I should probably apologize for not clarifying my 
terminology more. In the context of PEP 572, when I say 
"comprehensions" I include generators! PEP 572 states this explicitly 
(https://github.com/python/peps/blame/master/pep-0572.rst#L201-L202).


Certainly PEP 572 intends to add that implicit nonlocal to both 
comprehensions and generator expressions. (I just got really tired of 
writing that phrase over and over, and at some point I forgot that 
this is only a parenthetical remark added in the PEP's latest 
revision, and not conventional terminology -- alas. :-)


Part (b) of PEP 572 does several things to *retain* consistency:

- The target of := lives in the same scope regardless of whether it 
occurs in a comprehension, a generator expression, or just in some 
other expression.


- When it occurs in a comprehension or generator expression, the scope 
is the same regardless of whether it occurs in the "outermost 
iterable" or not.


If we didn't have (b) the target would live in the 
comprehension/genexpr scope if it occurred in a comprehension/genexp 
but outside its "outermost iterable", and in the surrounding scope 
otherwise.


---

How loop variables might be isolated without a nested scope

Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-27 Thread Ivan Pozdeev via Python-Dev

On 27.06.2018 16:25, Greg Ewing wrote:

Ivan Pozdeev via Python-Dev wrote:
Using this assigned result elsewhere in the same expression (akin to 
regex backreferences) is not a part of the basic idea actually.


If that's true, then the proposal has mutated into something
that has *no* overlap whatsoever with the use case that started
this whole discussion,


I don't know what "started" it or where (AFAIK the idea has been around 
for years), but for me, the primary use case for an assignment expression 
is to be able to "catch" a value into a variable in places where I can't 
put an assignment statement in, like the infamous `if re.match() is not 
None'.
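
For illustration, a minimal sketch of that use case (the pattern and input 
are made up; the ":=" spelling is the one from PEP 572):

    import re

    regex = r"\d+"
    line = "8080 open"

    # today: the call's result has to be caught on its own line first
    m = re.match(regex, line)
    if m is not None:
        print(m.group(0))

    # with an assignment expression, the binding happens inside the test
    if (m := re.match(regex, line)) is not None:
        print(m.group(0))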



which was about binding a temporary
variable in a comprehension, for use *within* the comprehension.


Then I can't understand all the current fuss about scoping.
AFAICS, it's already like I described in 
https://mail.python.org/pipermail/python-dev/2018-June/154067.html :
the outermost iterable is evaluated in the local scope while others in 
the internal one:


In [13]: [(l,i) for l in list(locals())[:5] for i in locals()]
Out[13]:
[('__name__', 'l'),
 ('__name__', '.0'),
 ('__builtin__', 'l'),
 ('__builtin__', '.0'),
 ('__builtin__', 'i'),
 ('__builtins__', 'l'),
 ('__builtins__', '.0'),
 ('__builtins__', 'i'),
 ('_ih', 'l'),
 ('_ih', '.0'),
 ('_ih', 'i'),
 ('_oh', 'l'),
 ('_oh', '.0'),
 ('_oh', 'i')]

(note that `i' is bound after the first evaluation of internal 
`locals()' btw, as to be expected)


If the "temporary variables" are for use inside the comprehension only, 
the assignment expression needs to bind in the current scope like the 
regular assignment statement, no changes are needed!


It depends on the evaluation order (and whether something is 
evaluated at all),


Which to my mind is yet another reason not to like ":=".



--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-27 Thread Ivan Pozdeev via Python-Dev

On 27.06.2018 16:49, Steven D'Aprano wrote:

On Wed, Jun 27, 2018 at 08:00:20AM -0400, Eric V. Smith wrote:

On 6/27/2018 7:08 AM, Chris Angelico wrote:

It gets funnier with nested loops. Or scarier. I've lost the ability
to distinguish those two.

def test():
    spam = 1
    ham = 2
    vars = [key1+key2 for key1 in locals() for key2 in locals()]
    return vars

Wanna guess what that's gonna return?

I'm not singling out Chris here, but these discussions would be easier
to follow and more illuminating if the answers to such puzzles were
presented when they're posed.

You can just copy and paste the function into the interactive
interpreter and run it :-)

But where's the fun in that? The point of the exercise is to learn first
hand just how complicated it is to try to predict the *current* scope
behaviour of comprehensions. Without the ability to perform assignment
inside them, aside from the loop variable, we've managed to avoid
thinking too much about this until now.

It also demonstrates how unrealistic it is to treat comprehensions as a
separate scope -- they're a hybrid scope, with parts of the comprehension
running in the surrounding local scope, and parts running in a sublocal
scope.

Earlier in this thread, Nick tried to justify the idea that
comprehensions run in their own scope, no matter how people think of
them -- but that's an over-simplification, as Chris' example above
shows. Parts of the comprehension do in fact behave exactly as the naive
model would suggest (even if Nick is right that other parts don't).

As complicated and hairy as the above example is, (1) it is a pretty
weird thing to do, so most of us will almost never need to consider it;
and (2) backwards compatibility requires that we live with it now (at
least unless we introduce a __future__ import).

If we can't simplify the scope of comprehensions, we can at least
simplify the parts that actually matters. What matters are the loop
variables (already guaranteed to be sublocal and not "leak" out of the
comprehension) and the behaviour of assignment expressions (open to
discussion).

Broadly speaking, there are two positions we can take:

1. Let the current implementation of comprehensions as an implicit
hidden function drive the functionality; that means we duplicate the
hairiness of the locals() behaviour seen above, although it won't be
obvious at first glance.

What this means in practice is that assignments will go to different
scopes depending on *where* they are in the comprehension:

 [ expr   for x in iter1  for y in iter2  if cond   ...]
 [ BB for x in AA for y in BB if BB ...]

Assignments in the section marked "AA" will be in the local scope;
assignments in the BB sections will be in the sublocal scope. That's
not too bad, up to the point you try to assign to the same name in
AA and BB. And then you are likely to get confusing hard to
debug UnboundLocalErrors.


This isn't as messy as you make it sound if you remember that the 
outermost iterable is evaluated only once, at the start, and all the 
others on each iteration.

Anyone using comprehensions has to know this fact.
The very readable syntax also makes it rather straightforward (though 
admittedly requiring some hand-tracing) to figure out what is evaluated 
after what.
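
A quick way to see that evaluation order, with a made-up trace() helper:

    def trace(tag, value):
        print("evaluating", tag)
        return value

    pairs = [(x, y)
             for x in trace("outer iterable", range(2))
             for y in trace("inner iterable", range(2))]
    # prints "evaluating outer iterable" once, then
    # "evaluating inner iterable" once per outer iteration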




2. Or we can keep the current behaviour for locals and the loop
variables, but we can keep assignment expressions simple by ensuring
they always bind to the enclosing scope. Compared to the complexity of
the above, we have the relatively straight forward:

 [ AA for x in AA for y in AA if AA ...]

The loop variables continue to be hidden away in the invisible, implicit
comprehension function, where they can't leak out, while explicit
assignments to variables (using := or given or however it is spelled)
will always go into the surrounding local scope, like they do in every
other expression.

Does it matter that the implementation of this requires an implicit
nonlocal declaration for each assignment? No more than it matters that
comprehensions themselves require an implicit function.

And what we get out of this is simpler semantics at the Python level:

- Unless previous declared global, assignment expressions always bind to
the current scope, even if they're inside a comprehension;

- and we don't have to deal with the oddity that different bits of a
comprehension run in different scopes (unless we go out of our way to
use locals()); merely using assignment expressions will just work
consistently and simply, and loop variables will still be confined to
the comprehension as they are now.




--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-27 Thread Ivan Pozdeev via Python-Dev

On 28.06.2018 1:42, Steven D'Aprano wrote:

On Wed, Jun 27, 2018 at 05:52:16PM +0300, Ivan Pozdeev via Python-Dev wrote:


What this means in practice is that assignments will go to different
scopes depending on *where* they are in the comprehension:

 [ expr   for x in iter1  for y in iter2  if cond   ...]
 [ BB for x in AA for y in BB if BB ...]

Assignments in the section marked "AA" will be in the local scope;
assignments in the BB sections will be in the sublocal scope. That's
not too bad, up to the point you try to assign to the same name in
AA and BB. And then you are likely to get confusing hard to
debug UnboundLocalErrors.

This isn't as messy as you make it sound if you remember that the
outermost iterable is evaluated only once at the start and all the
others -- each iteration.

The question isn't *how often* they are evaluated, or how many loops you
have, but *what scope* they are evaluated in. Even in a single loop
comprehension, parts of it are evaluated in the local scope and parts
are evaluated in an implicit sublocal scope.


All expressions inside the comprehension other than the initial iterable 
have access to the loop variables generated by the previous parts. So 
they are necessarily evaluated in the internal scope for that to be 
possible.


Since this, too, is essential semantics that one has to know to use the 
construct sensibly, I kinda assumed you could make that connection...

E.g.:

[(x*y) for x in range(5) if x%2 for y in range(x,5) if not (x+y)%2]
   A               B         C               D                E

C and D have access to the current x; E and A to both x and y.
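
For reference, the example evaluates to:

    >>> [(x*y) for x in range(5) if x%2 for y in range(x,5) if not (x+y)%2]
    [1, 3, 9]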



The overlap between the two is the trap, if you try to assign to the
same variable in the loop header and then update it in the loop body.

Not to mention the inconsistency that some assignments are accessible
from the surrounding code:

 [expr for a in (x := func(), ...) ]
 print(x)  # works

while the most useful ones, those in the body, will be locked up in an
implicit sublocal scope where they are unreachable from outside of the
comprehension:

 [x := something ...  for a in sequence ]
 print(x)  # fails




--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-27 Thread Ivan Pozdeev via Python-Dev

On 28.06.2018 2:31, Ivan Pozdeev via Python-Dev wrote:

On 28.06.2018 1:42, Steven D'Aprano wrote:
On Wed, Jun 27, 2018 at 05:52:16PM +0300, Ivan Pozdeev via Python-Dev 
wrote:



What this means in practice is that assignments will go to different
scopes depending on *where* they are in the comprehension:

 [ expr   for x in iter1  for y in iter2  if cond   ...]
 [ BB for x in AA for y in BB if BB ...]

Assignments in the section marked "AA" will be in the local scope;
assignments in the BB sections will be in the sublocal scope. 
That's

not too bad, up to the point you try to assign to the same name in
AA and BB. And then you are likely to get confusing hard to
debug UnboundLocalErrors.

This isn't as messy as you make it sound if you remember that the
outermost iterable is evaluated only once at the start and all the
others -- each iteration.

The question isn't *how often* they are evaluated, or how many loops you
have, but *what scope* they are evaluated in. Even in a single loop
comprehension, parts of it are evaluated in the local scope and parts
are evaluated in an implicit sublocal scope.


All expressions inside the comprehension other than the initial 
iterable have access to the loop variables generated by the previous 
parts. So they are necessarily evaluated in the internal scope for 
that to be possible.


Since this is too an essential semantics that one has to know to use 
the construct sensibly, I kinda assumed you could make that connection...

E.g.:

[(x*y) for x in range(5) if x%2 for y in range(x,5) if not (x+y)%2]
   A               B         C               D                E

C and D have access to the current x; E and A to both x and y.

This means, btw, that users cannot rely on there being a single internal 
scope, or any scope at all.
The only public guarantee is access to the loop variables (and, with the 
PEP, to additional variables from assignments) of the current iteration, 
as generated by the previous parts.




The overlap between the two is the trap, if you try to assign to the
same variable in the loop header and then update it in the loop body.

Not to mention the inconsistency that some assignments are accessible
from the surrounding code:

 [expr for a in (x := func(), ...) ]
 print(x)  # works

while the most useful ones, those in the body, will be locked up in an
implicit sublocal scope where they are unreachable from outside of the
comprehension:

 [x := something ...  for a in sequence ]
 print(x)  # fails






--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-27 Thread Ivan Pozdeev via Python-Dev

On 28.06.2018 2:45, Ivan Pozdeev via Python-Dev wrote:

On 28.06.2018 2:31, Ivan Pozdeev via Python-Dev wrote:

On 28.06.2018 1:42, Steven D'Aprano wrote:
On Wed, Jun 27, 2018 at 05:52:16PM +0300, Ivan Pozdeev via 
Python-Dev wrote:



What this means in practice is that assignments will go to different
scopes depending on *where* they are in the comprehension:

 [ expr   for x in iter1  for y in iter2  if cond ...]
 [ BB for x in AA for y in BB if BB ...]

Assignments in the section marked "AA" will be in the local 
scope;
assignments in the BB sections will be in the sublocal scope. 
That's

not too bad, up to the point you try to assign to the same name in
AA and BB. And then you are likely to get confusing hard to
debug UnboundLocalErrors.

This isn't as messy as you make it sound if you remember that the
outermost iterable is evaluated only once at the start and all the
others -- each iteration.
The question isn't *how often* they are evaluated, or how many loops 
you

have, but *what scope* they are evaluated in. Even in a single loop
comprehension, parts of it are evaluated in the local scope and parts
are evaluated in an implicit sublocal scope.


All expressions inside the comprehension other than the initial 
iterable have access to the loop variables generated by the previous 
parts. So they are necessarily evaluated in the internal scope for 
that to be possible.


Since this is too an essential semantics that one has to know to use 
the construct sensibly, I kinda assumed you could make that 
connection...

E.g.:

[(x*y) for x in range(5) if x%2 for y in range(x,5) if not (x+y)%2]
   A               B         C               D                E

C and D have access to the current x; E and A to both x and y.

This means btw that users cannot rely on there being a single internal 
scope, or a scope at all.
The public guarantee is only the access to the loop variables (and, 
with the PEP, additional variables from assignments), of the current 
iteration, generated by the previous parts.


The expressions in the comprehension just somehow automagically 
determine which of the variables are internal and which are local. How 
they do that is an implementation detail.
And the PEP doesn't need to (and probably shouldn't) make guarantees 
here other than where the variables from expressions are promised to be 
accessible.




The overlap between the two is the trap, if you try to assign to the
same variable in the loop header and then update it in the loop body.

Not to mention the inconsistency that some assignments are accessible
from the surrounding code:

 [expr for a in (x := func(), ...) ]
 print(x)  # works

while the most useful ones, those in the body, will be locked up in an
implicit sublocal scope where they are unreachable from outside of the
comprehension:

 [x := something ...  for a in sequence ]
 print(x)  # fails








--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-27 Thread Ivan Pozdeev via Python-Dev

On 28.06.2018 2:31, Greg Ewing wrote:

Steven D'Aprano wrote:
The *very first* motivating example for this proposal came from a 
comprehension.


I think it is both unfortunate and inevitable that the discussion bogged
down in comprehension-hell.


I think the unfortunateness started when we crossed over from
talking about binding a temporary name for use *within* a
comprehension or expression, to binding a name for use *outside*
the comprehension or expression where it's bound.

I've shown in <05f368c2-3cd2-d7e0-9f91-27afb40d5...@mail.mipt.ru> (27 
Jun 2018 17:07:24 +0300) that assignment expressions are fine in most 
use cases without any changes to scoping whatsoever.


So, as Guido suggested in 
 (26 
Jun 2018 19:36:14 -0700), the scoping matter can be split into a 
separate PEP and discussion.



As long as it's for internal use, whether it's in a comprehension
or not isn't an issue.

Tim Peters has also given a couple of good examples of mathematical 
code that would benefit strongly from this feature.


Going back a few months now, they were the examples that tipped me over


Well, I remain profoundly unconvinced that writing comprehensions
with side effects is ever a good idea, and Tim's examples did
nothing to change that.



--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Informal educator feedback on PEP 572 (was Re: 2018 Python Language Summit coverage, last part)

2018-06-27 Thread Ivan Pozdeev via Python-Dev

On 28.06.2018 2:44, Greg Ewing wrote:

Ivan Pozdeev via Python-Dev wrote:
for me, the primary use case for an assignment expression is to be 
able to "catch" a value into a variable in places where I can't put 
an assignment statement in, like the infamous `if re.match() is not 
None'.


This seems to be one of only about two uses for assignment
expressions that gets regularly brought up. The other is
the loop-and-a-half, which is already adequately addressed
by iterators.

So maybe instead of introducing an out-of-control sledgehammer
in the form of ":=", we could think about addressing this
particular case.

Like maybe adding an "as" clause to if-statements:

   if pattern.match(s) as m:
  do_something_with(m)



I've skimmed for the origins of "as" (which I remember seeing maybe even 
before Py3 was a thing) and found this excellent analysis of modern 
languages, which is also part of the PEP 572 discussion:

https://mail.python.org/pipermail/python-ideas/2018-May/050920.html

It basically concludes that most recently-created languages do not have 
assignment expressions; they rather allow assignment statement(s?) 
before the tested expression in block statements (only if/while is 
mentioned. `for' is not applicable because its exit condition in Python 
is always the iterable's exhaustion, so there's nothing in it that could be 
used as a variable).


It, however, doesn't say anything about constructs that are not block 
statements but are equivalent to them, like the ternary operator. (In 
comprehensions, filter conditions are the bits equivalent to if/while 
statements.)


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572: Write vs Read, Understand and Control Flow

2018-07-04 Thread Ivan Pozdeev via Python-Dev

On 04.07.2018 15:29, Victor Stinner wrote:

The PEP 572 has been approved, it's no longer worth it to discuss it ;-)

Victor


As of now, https://www.python.org/dev/peps/pep-0572/ is marked as "draft".


2018-07-04 13:21 GMT+02:00 Abdur-Rahmaan Janhangeer :

was going to tell

instead of := maybe => better

:= too close to other langs

Abdur-Rahmaan Janhangeer
https://github.com/Abdur-rahmaanJ


Of the proposed syntaxes, I dislike identifer := expression less, but

I'd still rather not see it added.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:
https://mail.python.org/mailman/options/python-dev/arj.python%40gmail.com

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Examples for PEP 572

2018-07-04 Thread Ivan Pozdeev via Python-Dev

On 04.07.2018 11:54, Serhiy Storchaka wrote:

04.07.18 10:06, Tim Peters wrote:

[Tim]
 >> I really don't know what Guido likes best about this, but for me 
it's


 >> the large number of objectively small wins in `if` and `while`

 >> contexts.   They add up.  That conclusion surprised me.  That 
there are


 >> occasionally bigger wins to be had is pure gravy.


[Serhiy Storchaka]
 > Could you please show me several examples in real code? I
 > have not seen any win yet.

My PEP Appendix was derived entirely from looking at real code. If 
you don't believe the examples I showed there are wins (and I don't 
know whether you've seen them, because your original message in this 
thread only echoed examples from the body of the PEP), then what we 
each mean by "win" in this context has no intersection, so discussing 
it would be futile (for both of us).


Sorry, this PEP was rewritten so many times that I missed your Appendix.


while total != (total := total + term):
    term *= mx2 / (i*(i+1))
    i += 2
return total


This code looks cleverer than the original while loop with a break in the 
middle. I like clever code. But it needs more mental effort to 
understand.


I admit that this is a good example.

There is a tiny problem with it (and with rewriting a while loop as a 
for loop, as I like to do). Often the body contains more than a single 
break. In that case a large part of the cleverness disappears. :-(


It took me a few minutes to figure out that this construct actually 
checks term == 0.


So, this example abuses the construct to do something it's not designed 
to do: perform an unrelated operation before checking the condition.

(Cue attempts to squeeze ever more code in here.) I would fail it in review.

This "clever" code is exactly what Perl burned itself on and what 
Python, being its antithesis, was specifically designed to avoid.





if result := solution(xs, n):
    # use result


It looks just as clear as the original code. This is not enough to 
justify introducing new syntax.



if reductor := dispatch_table.get(cls):
    rv = reductor(x)
elif reductor := getattr(x, "__reduce_ex__", None):
    rv = reductor(4)
elif reductor := getattr(x, "__reduce__", None):
    rv = reductor()
else:
    raise Error("un(shallow)copyable object of type %s" % cls)


I was going to rewrite this code as

    reductor = dispatch_table.get(cls)
    if reductor:
        rv = reductor(x)
    else:
        rv = x.__reduce_ex__(4)

There were reasons for the current complex code in Python 2, but now 
classic classes are gone, and every class has the __reduce_ex__ method 
which by default calls __reduce__ which by default is inherited from 
object. With that simplification the benefit of using ":=" in this 
example looks less impressive.



if (diff := x - x_base) and (g := gcd(diff, n)) > 1:
    return g



while a > (d := x // a**(n-1)):
    a = ((n-1)*a + d) // n
return a


I would have fun writing such code, just as I believe you had 
fun writing it. But I suppose the original code is simpler for a 
casual reader, and I will refrain from using such code in a project 
maintained by other people (in particular in the Python stdlib).


Which is what I expect:  the treatment you gave to the examples from 
the body of the PEP suggests you're determined not to acknowledge any 
"win", however small.


I have to admit that *there are* examples that can have a small win. I am 
wondering why your examples are not used in the PEP body instead of 
examples which play *against* PEP 572.


Yet the win is too small for me to justify such a syntax change. I know 
that I cannot convince you or Guido.


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Don't assign to a variable used later in the expression

2018-07-04 Thread Ivan Pozdeev via Python-Dev

On 04.07.2018 10:10, Nathaniel Smith wrote:

On Tue, Jul 3, 2018 at 11:07 PM, Serhiy Storchaka  wrote:

04.07.18 00:51, Chris Angelico wrote:

On Wed, Jul 4, 2018 at 7:37 AM, Serhiy Storchaka
wrote:

I believe most Python users are not
professional programmers -- they are sysadmins, scientists, hobbyists and
kids --

[citation needed]

I don't understand what citation you need.


In particularly mutating and
non-mutating operations are separated. The assignment expression breaks
this.

[citation needed]

In Python the assignment (including the augmented assignment) is a
statement, del is a statement, function and class declarations are
statements, import is a statement. Mutating methods like list.sort() and
dict.update() return None to discourage using them in expressions. This is
common knowledge; I don't know whose citation you need.

Right, Python has a *very strong* convention that each line should
have at most one side-effect, and that if it does have a side-effect
it should be at the outermost level.

I think the most striking evidence for this is that during the
discussion of PEP 572 we discovered that literally none of us –
including Guido – even *know* what the order-of-evaluation is inside
expressions. In fact PEP 572 now has a whole section talking about the
oddities that have turned up here so far, and how to fix them. Which
just goes to show that even its proponents don't actually think that
anyone uses side-effects inside expressions, because if they did, then
they'd consider these changes to be compatibility-breaking changes. Of
course the whole point of PEP 572 is to encourage people to embed
side-effects inside expressions, so I hope they've caught all the
weird cases, because even if we can still change them now we won't be
able to after PEP 572 is implemented.


I may have a fix to this:

Do not recommend assigning to the variable that is used later in the 
expression.


And to facilitate that, do not make any strong guarantees about 
evaluation order -- making any such attempt a gamble.


I immediately saw those "total := total + item" as odd but couldn't 
quite point out why.
Now I see: it ignores the whole augmented assignment machinery thing, 
which will make people demand that next.
Making this a discouraged case will diminish valid use cases and lower 
the need for that.
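
A minimal sketch of why reusing the target inside the same expression makes 
evaluation order load-bearing (":=" spelling, made-up values):

    x = 10
    # the left operand is evaluated before the assignment on the right,
    # so this compares the *old* 10 with the *new* 3
    print(x != (x := 3))   # True -- but only if you know that order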



Some people make fun of Python's expression/statement dichotomy,
because hey don't you know that everything can be an expression,
functional languages are awesome hurhur, but I think Python's approach
is actually very elegant. Python is unapologetically an imperative
language, but even we dirty imperative programmers can agree with the
functional fanatics that reasoning about side-effects and sequencing
is hard. One-side-effect-per-line is a very elegant way to keep
sequencing visible on the page and as easy to reason about as
possible.

Or as Dijkstra put it: "our intellectual powers are rather geared to
master static relations and that our powers to visualize processes
evolving in time are relatively poorly developed. For that reason we
should do (as wise programmers aware of our limitations) our utmost to
shorten the conceptual gap between the static program and the dynamic
process, to make the correspondence between the program (spread out in
text space) and the process (spread out in time) as trivial as
possible."

It's very disheartening that not only is PEP 572 apparently going to
be accepted, but as far as I can tell neither the text nor its
proponents have even addressed this basic issue.

-n



--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] "as" postfix notation highlights

2018-07-04 Thread Ivan Pozdeev via Python-Dev

On 04.07.2018 10:10, Nathaniel Smith wrote:


I think the most striking evidence for this is that during the
discussion of PEP 572 we discovered that literally none of us –
including Guido – even *know* what the order-of-evaluation is inside
expressions.

It has struck me that this is a highlight for the "as" syntax right here!

* Since it's a postfix, it preserves the forward reading order and is 
thus more fitting for inline syntax:


    while inv == (make_step(state, inv) as new_inv) and is_valid(new_inv):
        inv = new_inv


Makes it clear that in the first two occurrences, the old value of `inv' is 
used, and only after that is it reassigned.


The prefix syntax of an assignment is instead read "from `=' on, then 
return to the start". This is okay for a standalone construct, but if 
embedded, the reading order becomes nontrivial:


    while inv == (new_inv := make_step(state, inv)) and is_valid(new_inv):
        inv = new_inv


* In the light of "Don't assign to a variable used later in the 
expression" , "as" looks completely different from assignment, which 
will deter folks from trying to do the problematic augmented assignments 
and demand expression syntax for them.



  In fact PEP 572 now has a whole section talking about the
oddities that have turned up here so far, and how to fix them. Which
just goes to show that even its proponents don't actually think that
anyone uses side-effects inside expressions, because if they did, then
they'd consider these changes to be compatibility-breaking changes. Of
course the whole point of PEP 572 is to encourage people to embed
side-effects inside expressions, so I hope they've caught all the
weird cases, because even if we can still change them now we won't be
able to after PEP 572 is implemented.

Some people make fun of Python's expression/statement dichotomy,
because hey don't you know that everything can be an expression,
functional languages are awesome hurhur, but I think Python's approach
is actually very elegant. Python is unapologetically an imperative
language, but even we dirty imperative programmers can agree with the
functional fanatics that reasoning about side-effects and sequencing
is hard. One-side-effect-per-line is a very elegant way to keep
sequencing visible on the page and as easy to reason about as
possible.

Or as Dijkstra put it: "our intellectual powers are rather geared to
master static relations and that our powers to visualize processes
evolving in time are relatively poorly developed. For that reason we
should do (as wise programmers aware of our limitations) our utmost to
shorten the conceptual gap between the static program and the dynamic
process, to make the correspondence between the program (spread out in
text space) and the process (spread out in time) as trivial as
possible."

It's very disheartening that not only is PEP 572 apparently going to
be accepted, but as far as I can tell neither the text nor its
proponents have even addressed this basic issue.

-n



--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Don't assign to a variable used later in the expression

2018-07-04 Thread Ivan Pozdeev via Python-Dev

On 05.07.2018 2:29, Nathaniel Smith wrote:

On Wed, Jul 4, 2018 at 4:10 PM, Ivan Pozdeev via Python-Dev
 wrote:

On 04.07.2018 10:10, Nathaniel Smith wrote:

Right, Python has a *very strong* convention that each line should
have at most one side-effect, and that if it does have a side-effect
it should be at the outermost level.
I think the most striking evidence for this is that during the
discussion of PEP 572 we discovered that literally none of us –
including Guido – even *know* what the order-of-evaluation is inside
expressions. In fact PEP 572 now has a whole section talking about the
oddities that have turned up here so far, and how to fix them. Which
just goes to show that even its proponents don't actually think that
anyone uses side-effects inside expressions, because if they did, then
they'd consider these changes to be compatibility-breaking changes. Of
course the whole point of PEP 572 is to encourage people to embed
side-effects inside expressions, so I hope they've caught all the
weird cases, because even if we can still change them now we won't be
able to after PEP 572 is implemented.

I may have a fix to this:

Do not recommend assigning to the variable that is used later in the
expression.

This would rule out all the comprehension use cases.

Only those outside of the outermost iterable.

I'd be fine with that personally. Reading through the PEP again I see
that there are more examples of them than I previously realized,
inside the semantics discussion and... well, this may be a personal
thing but for me they'd all be better described as "incomprehensions".
But, nonetheless, the comprehension use cases are supposed to be a
core motivation for the whole PEP.


Far from it. If/while, too. Any construct that accepts an expression and 
uses its result but doesn't allow inserting an additional line in the 
middle qualifies.
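
A few concrete instances of such constructs, in the ":=" spelling and with 
made-up values:

    data = [3, 1, 4, 1, 5]

    if (n := len(data)) > 3:                    # if-statement condition
        print(n)

    while (n := n - 1) > 0:                     # while-statement condition
        pass

    label = "big" if (m := max(data)) > 4 else "small"  # conditional expression
    halves = [h for x in data if (h := x // 2) > 0]     # comprehension filter
    print(label, halves)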

Also, some of the main arguments
for why a full-fledged := is better than the more limited alternative
proposals rely on using a variable on the same line where it's
assigned (e.g. Tim's gcd example). So I don't see this recommendation
getting any official traction within PEP 572 or PEP 8.

That's actually a valid use case!
In the aforementioned example,

if (diff := x - x_base) and (g := gcd(diff, n)) > 1:
    return g

 the variable `diff' doesn't exist before, so there's no confusion 
which value is used.



Okay, I stand corrected:

_Do not recommend *changing* a variable that is used later in the 
expression._


I.e. the variable should not exist before assignment (or effectively not 
exist -- i.e. the old value should not be used).



E.g., good:

    [ rem for x in range(10) if rem := x%5 ]

bad:

    [ sum_ for x in range(10) if (sum_ := sum_ + x) % 5 ]  # good luck figuring out what sum_ will hold




Of course you're free to use whatever style rules you prefer locally –
python-dev has nothing to do with that.

-n



--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Examples for PEP 572

2018-07-04 Thread Ivan Pozdeev via Python-Dev

On 04.07.2018 4:26, Tim Peters wrote:

[INADA Naoki]
> ...
> On the other hand, I understand PEP 572 allows clever code
> simplifies tedious code.  It may increase readability of non-dirty 
code.


The latter is the entire intent ,of course.  We can't force people to 
write readable code, but I don't understand the widespread assumption 
that other programmers are our enemies who have to be preemptively 
disarmed ;-)


Use code review to enforce readable code.  If you want a coding 
standard here, use mine:  "if using an assignment expression isn't 
obviously better (at least a little so), DON'T USE IT".  That's the 
same standard I use for lots of things (e.g., is such-&-such better as 
a listcomp or as nested loops?).  It only requires that you have 
excellent taste in what "better" means ;-)


As I noted in the PEP's Appendix A, I refuse to even write code like

i = j = count = nerrors = 0

because it squashes conceptually distinct things into a single 
statement.  I'll always write that as

i = j = 0
count = 0
nerrors = 0

instead - or even in 4 lines if `i` and `j` aren't conceptually related.

That's how annoyingly pedantic I can be ;-)   Yet after staring at 
lots of code, starting from a neutral position (why have an opinion 
about anything before examination?), I became a True Believer.


I really don't know what Guido likes best about this, but for me it's 
the large number of objectively small wins in `if` and `while` 
contexts.   They add up.  That conclusion surprised me.  That there 
are occasionally bigger wins to be had is pure gravy.


But in no case did I count "allows greater cleverness" as a win.  The 
Appendix contains a few examples of "bad" uses too, where cleverness 
in pursuit of brevity harms clarity.  In fact, to this day, I believe 
those examples derived from abusing assignment expressions in 
real-life code are more horrifying than any of the examples anyone 
else _contrived_ to "prove" how bad the feature is.


I apparently have more faith that people will use the feature as 
intended.  Not all people, just most.  The ones who don't can be 
beaten into compliance, same as with any other abused feature ;-)




It's not about whether a syntax can be used right or wrong. It's about how 
easy it is to use it right vs. wrong.


A syntax, any syntax, naturally nudges the user to use it in specific 
ways, by making these ways easy to write and read.
One of Python's highlights is that it strives to make the easiest 
solutions the right ones -- "make right things easy, make wrong things 
hard".


How many of the users are "professional" vs "amateur" programmers is 
irrelevant. (E.g. while newbies are ignorant, pros are instead 
constantly pressed for time.)
Python Zen rather focuses on making it easy to write correct code for 
everyone, beginners and pros alike.


(As Stéfane Fermigier rightly showed in his message from 4 Jul 2018 11:59:47 
+0200, there are always orders of magnitude more "amateurs" than 
"professionals", and even fewer competent ones.)



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Don't assign to a variable used later in the expression

2018-07-04 Thread Ivan Pozdeev via Python-Dev

On 05.07.2018 3:40, Ivan Pozdeev via Python-Dev wrote:

On 05.07.2018 2:29, Nathaniel Smith wrote:

On Wed, Jul 4, 2018 at 4:10 PM, Ivan Pozdeev via Python-Dev
  wrote:

On 04.07.2018 10:10, Nathaniel Smith wrote:

Right, Python has a *very strong* convention that each line should
have at most one side-effect, and that if it does have a side-effect
it should be at the outermost level.
I think the most striking evidence for this is that during the
discussion of PEP 572 we discovered that literally none of us –
including Guido – even *know* what the order-of-evaluation is inside
expressions. In fact PEP 572 now has a whole section talking about the
oddities that have turned up here so far, and how to fix them. Which
just goes to show that even its proponents don't actually think that
anyone uses side-effects inside expressions, because if they did, then
they'd consider these changes to be compatibility-breaking changes. Of
course the whole point of PEP 572 is to encourage people to embed
side-effects inside expressions, so I hope they've caught all the
weird cases, because even if we can still change them now we won't be
able to after PEP 572 is implemented.

I may have a fix to this:

Do not recommend assigning to the variable that is used later in the
expression.

This would rule out all the comprehension use cases.

Only those outside of the outermost iterable.
Scratch this line, it was from an earlier edit of the letter. I 
invalidate this myself further on.

I'd be fine with that personally. Reading through the PEP again I see
that there are more examples of them than I previously realized,
inside the semantics discussion and... well, this may be a personal
thing but for me they'd all be better described as "incomprehensions".
But, nonetheless, the comprehension use cases are supposed to be a
core motivation for the whole PEP.


Far from it. If/while, too. Any construct that accepts an expression 
and uses its result but doesn't allow to insert an additional line in 
the middle qualifies.

Also, some of the main arguments
for why a full-fledged := is better than the more limited alternative
proposals rely on using a variable on the same line where it's
assigned (e.g. Tim's gcd example). So I don't see this recommendation
getting any official traction within PEP 572 or PEP 8.

That's actually a valid use case!
In the aforementioned example,
if (diff := x - x_base) and (g := gcd(diff, n)) > 1:
 return g
 the variable `diff' doesn't exist before, so there's no confusion 
which value is used.



Okay, I stay corrected:

_Do not recommend *changing* a variable that is used later in the 
expression._


I.e. the variable should not exist before assignment (or effectively 
not exist -- i.e. the old value should not be used).



E.g., good:

    [ rem for x in range(10) if rem := x%5 ]

bad:

    [ sum_ for x in range(10) if (sum_ := sum_ + x) % 5 ] # good 
luck figuring out what sum_ will hold




Of course you're free to use whatever style rules you prefer locally –
python-dev has nothing to do with that.

-n



--
Regards,
Ivan


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Assignment expression and coding style: the while True case

2018-07-04 Thread Ivan Pozdeev via Python-Dev

On 05.07.2018 3:36, Chris Angelico wrote:

On Thu, Jul 5, 2018 at 10:33 AM, Victor Stinner  wrote:

2018-07-05 2:15 GMT+02:00 Chris Angelico :

On Thu, Jul 5, 2018 at 10:03 AM, Victor Stinner  wrote:

On the 3360 for loops of the stdlib (*), I only found 2 loops which
would benefit from assignment expressions.

It's not easy to find loops which:
- build a list,
- are simple enough to be expressed as list comprehension,
- use a condition (if),
- use an expression different than just a variable name as the list
value (value appended to the list).

Are you implying that the above conditions are essential for
assignment expressions to be useful, or that this defines one
particular way in which they can be of value?

Hum, maybe I wasn't specific enough. I'm looking for "for loops" and
list comprehensions in stdlib which *can be* written using assignment
expression, like:

[var for ... in ... if (var := expr)].

I'm not discussing if such change is worth it or not. I just counted
how many for loops/list comprehensions *can* be modified to use
assingment expressions in the stdlib.

Maybe I missed some loops/comprehensions, and I would be happy to see
more examples ;-)

Cool. So you're looking for ones that fit a particular pattern that
can benefit, but there are almost certainly other patterns out there.
Just making sure that you weren't trying to say "out of 3360 loops,
exactly 3358 of them absolutely definitely cannot be improved here".
Well, if no-one knows how to find something that can be improved, it 
can't be improved :)

ChrisA
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Assignment expression and coding style: the while True case

2018-07-04 Thread Ivan Pozdeev via Python-Dev

On 05.07.2018 1:51, Victor Stinner wrote:

Hi,

Let's say that the PEP 572 (assignment expression) is going to be
approved. Let's move on and see how it can be used in the Python
stdlib.

I propose to start the discussion about "coding style" (where are
assignment expressions appropriate or not?) with the "while True"
case.

I wrote a WIP pull request to use assignment expressions in "while True":
https://github.com/python/cpython/pull/8095/files

In short, replace:

while True:
    x = expr
    if not x:
        break
    ...
with:

while (x := expr):
    ...

My question is now: for which "while True" patterns is the assignment
expression appropriate? I identified different patterns.


== Pattern 1, straightforward ==

while True:
    line = input.readline()
    if not line:
        break
    ...

IMHO an assignment expression is appropriate here. The code remains
straightforward to read.

while (line := input.readline()):
    ...


== Pattern 2, condition ==

Condition more complex than just "not line":

while True:
    q = c//n
    if n <= q:
        break
    ...

replaced with:

while (q := c//n) < n:
    ...

IMHO it's still acceptable to use an assignment expression... Maybe only
for basic conditions? (see above)


== Pattern 3, double condition ==

while True:
    s = self.__read(1)
    if not s or s == NUL:
        break
 

replaced with:

while (s := self.__read(1)) and s != NUL:
    ...

Honestly, here, I don't know if it's appropriate...

At the first look, "s != NUL" is surprising, since "s" is not defined
before the while, it's only defined in the first *test* (defining a
variable inside a test is *currently* uncommon in Python).


== Pattern 4, while (...): pass ==

Sometimes, the loop body is replaced by "pass".

while True:
    tarinfo = self.next()
    if tarinfo is None:
        break

replaced with:

while (tarinfo := self.next()) is not None:
    pass

It reminds me of the *surprising* "while (func());" or "while (func())
{}" in C (sorry for theorical C example, I'm talking about C loops
with an empty body).

Maybe it's acceptable here, I'm not sure.

Would be more readable with a more descriptive variable name:

while (chunk := self.next()) is not None:
    pass
tarinfo = chunk



Note: such loop is rare (see my PR).


== Pattern 5, two variables ==

while True:
    m = match()
    if not m:
        break
    j = m.end()
    if i == j:
        break
    ...

replaced with:

while (m := match()) and (j := m.end()) == i:
    ...

Maybe we reached here the maximum acceptable complexity of a single
Python line? :-)

Would be more readable with additional parentheses:

while (m := match()) and ((j := m.end()) == i):




== Other cases ==

I chose to not use assignment expressions for the following while loops.

(A)

while True:
    name, token = _getname(g)
    if not name:
        break
    ...

"x, y := ..." is invalid. It can be tricked using "while (x_y :=
...)[0]: x, y = x_y; ...". IMHO it's not worth it.

(B)

while True:
    coeff = _dlog10(c, e, places)
    # assert len(str(abs(coeff)))-p >= 1
    if coeff % (5*10**(len(str(abs(coeff)))-p-1)):
        break
    places += 3

NOT replaced with:

while not (coeff := _dlog10(c, e, places)) % (5*10**(len(str(abs(coeff)))-p-1)):
    places += 3

^-- Tim Peters, I'm looking at you :-)

coeff is defined and then "immediately" used in "y" expression of
x%y... Yeah, it's valid code, but it looks too magic to me...

(C)

while True:
    chunk = self.raw.read()
    if chunk in empty_values:
        nodata_val = chunk
        break
    ...

"nodata_val = chunk" cannot be put into the "chunk := self.raw.read()"
assignment expression combined with a test. At least, I don't see how.
(D)

while 1:
    u1 = random()
    if not 1e-7 < u1 < .999:
        continue
    ...

Again, I don't see how to use assignment expression here.
Here, unlike the previous example, the assignment is a subexpression of 
the conditions, so it _can_ be inlined:


while (u1 := random()) < 1e-7 or u1 > .999:
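
As a runnable sketch (the original loop body is elided in the quoted code, 
so this just redraws until u1 lands inside the interval):

    from random import random

    while (u1 := random()) < 1e-7 or u1 > .999:
        pass
    print(u1)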


Victor
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572, VF/B, and "Shark Jumping"

2018-07-04 Thread Ivan Pozdeev via Python-Dev

On 05.07.2018 2:52, Mike Miller wrote:

Recently on Python-Dev:

On 2018-07-03 15:24, Chris Barker wrote:
> On Tue, Jul 3, 2018 at 2:51 PM, Chris Angelico > On Wed, Jul 4, 2018 at 7:37 AM, Serhiy Storchaka 


>
> > I believe most Python users are not
> > professional programmers -- they are sysadmins, scientists, 
hobbyists

> > and kids --
>
> [citation needed]
>
> fair enough, but I think we all agree that *many*, if not most, 
Python users
> are "not professional programmers". While on the other hand everyone 
involved

> in discussion on python-dev and python-ideas is a serious (If not
> "professional") programmer.


Python Audience - wants clarity:

Not sure I'd say that most users are not professionals, but one major 
strength of Python is its suitability as a teaching language, which 
enlarges the community every year.


Additionally, I have noticed a dichotomy between prolific "C 
programmers" who've supported this PEP and many Python programmers who 
don't want it.  While C-devs use this construct all the time, their 
stereotypical Python counterpart is often looking for simplicity and 
clarity instead.  That's why we're here, folks.



Value - good:

Several use cases are handled well by PEP 572.  However it has been 
noted that complexity must be capped voluntarily relatively early—or 
the cure soon becomes worse than the disease.



Frequency - not much:

The use cases for assignment-expressions are not exceedingly common, 
coming up here and there.  Their omission has been a very mild burden 
and we've done without for a quarter century.


Believe the authors agreed that it won't be used too often and won't 
typically be mis- or overused.



New Syntax - a high burden:

For years I've read on these lists that syntax changes must clear a 
high threshold of the (Value*Frequency)/Burden (or VF/B) ratio.


Likewise, a few folks have compared PEP 572 to 498 (f-strings) which 
some former detractors have come to appreciate.  Don't believe this 
comparison applies well, since string interpolation is useful a 
hundred times a day, more concise, clear, and runs faster than 
previous functionality.  Threshold was easily cleared there.



Conclusion:

An incongruous/partially redundant new syntax to perform existing 
functionality more concisely feels too low on the VF/B ratio IMHO.  
Value is good though mixed, frequency is low, and burden is higher 
than we'd like, resulting in "meh" and binary reactions.


Indeed many modern languages omit this feature specifically in an 
effort to reduce complexity, ironically citing the success of Python 
in support.  Less is more.



Compromise:

Fortunately there is a compromise design that is chosen often these 
days in new languages---restricting these assignments to if/while 
(potentially comp/gen) statements.


https://mail.python.org/pipermail/python-dev/2018-July/154343.html :
"Any construct that accepts an expression and uses its result but 
doesn't allow to insert an additional line in the middle qualifies."


If/while is not enough.

And https://mail.python.org/pipermail/python-dev/2018-June/154160.html 
disproves the "chosen often these days in new languages".


We can also reuse the existing "EXPR as NAME" syntax that already 
exists and is widely enjoyed.




For the record, with "as", Victor Stinner's examples from the 5 Jul 2018 
00:51:37 +0200 letter would look like:


while expr as x:

while input.readline() as line:

while (c//n as q) < n:

while (self.__read(1) as s) and s != NUL:

while (self.next() as tarinfo) is not None:
    pass

while (match() as m) and (m.end() as j) == i:



This compromise design:

    1  Handles the most common cases (of a group of infrequent cases)
    0  Doesn't handle more obscure cases.
    1  No new syntax (through reuse)
    1  Looks Pythonic as hell
    1  Difficult to misuse, complexity capped

    Score: 4/5

PEP 572:

    1  Handles the most common cases (of a group of infrequent cases)
    1  Handles even more obscure cases.
    0  New syntax
    0  Denser look: more colons, parens, expression last
    0  Some potential for misuse, complexity uncapped

    Score: 2/5


Thanks for reading, happy independence,
-Mike


Very fitting, given the recent mentions of "dictatorship" and all :-)



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] PEP 572 semantics: all capabilities of the assignment statement

2018-07-04 Thread Ivan Pozdeev via Python-Dev
Victor Stinner in "Assignment expression and coding style: the while 
True case" and others have brought to attention


that the AE as currently written doesn't support all the capabilities of 
the assignment statement, namely:


* tuple unpacking
* augmented assignment

(I titled the letter "all capabilities" 'cuz I may've missed something.)

Should it?

Personally, I'm for the unpacking but against augmentation 'cuz it has 
proven incomprehensible as per the 5 Jul 2018 04:22:36 +0300 letter.
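
To make the gap concrete, a minimal sketch (the expression forms in the 
comments are hypothetical -- they are *not* part of PEP 572 as written):

pairs = iter([(1, 2), (3, 4)])

# Tuple unpacking works as a statement...
x, y = next(pairs)

# ...but has no expression form; something like
#     while ((x, y) := get_pair()):      # hypothetical, not valid PEP 572
# is not allowed.

total = 0
# Augmented assignment works as a statement...
total += x

# ...and likewise has no expression form; something like
#     if (total += y) > 10:              # hypothetical, not valid syntax
# is not allowed either.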


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Assignment expression and coding style: the while True case

2018-07-05 Thread Ivan Pozdeev via Python-Dev

On 05.07.2018 9:23, Serhiy Storchaka wrote:

05.07.18 01:51, Victor Stinner wrote:

== Pattern 1, straightforward ==

while True:
 line = input.readline()
 if not line:
 break
 ...

IMHO an assignment expression is appropriate here. The code remains
straightforward to read.

while (line := input.readline()):
 ...


We already have an idiom for this:

for line in input:
    ...



This is not strictly equivalent: file iteration does internal read-ahead 
caching that is unaffected by -u, and you can't mix iteration and .read() 
on the same file object.


Though in this specific case (the example is from Lib\base64.py AFAICS), 
the change to `for' is fine.



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Assignment expression and coding style: the while True case

2018-07-05 Thread Ivan Pozdeev via Python-Dev

On 05.07.2018 9:47, Steven D'Aprano wrote:

On Thu, Jul 05, 2018 at 12:51:37AM +0200, Victor Stinner wrote:


I propose to start the discussion about "coding style" (where are
assignment expressions appropriate or not?) with the "while True"
case.

We don't even have an official implementation yet, and you already want
to start prescribing coding style? We probably have months before 3.8
alpha comes out.


This is an excellent way to look at the picture with a user's eyes. You 
immediately see what parts of the design lend themselves well to 
practice and what don't, which details are unclear, which additional 
features are missing (note how he revealed the lack of tuple unpacking) 
-- by all means, something to do at the design stage, before doing the 
implementation.


It's also a testament to how much of an improvement a feature will _really_ 
be -- something that contrived examples can't show.


I salute Victor for such an enlightened idea and for going to the trouble 
of carrying it out!



I appreciate your enthusiasm, but what's the rush? Give people a chance
to play with the syntax in the REPL before making Thou Shalt and Thou
Shalt Not rules for coding style and making wholesale changes to the std
lib.

This topic has been argued and argued and argued on two mailing lists
for over four months. Let's take a couple of weeks to catch our breath,
wait for the implementation to actually hit the 3.8 repo, before trying
to prescribe coding style or thinking about which parts of the std lib
should be refactored to use it and which shouldn't.

There is no need to rush into making changes. Let them happen naturally.





--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572, VF/B, and "Shark Jumping"

2018-07-05 Thread Ivan Pozdeev via Python-Dev

On 05.07.2018 9:20, Steven D'Aprano wrote:

On Thu, Jul 05, 2018 at 05:33:50AM +0300, Ivan Pozdeev via Python-Dev wrote:


And https://mail.python.org/pipermail/python-dev/2018-June/154160.html
disproves the "chosen often these days in new languages".

Ivan, I think you may have linked to the wrong page. That page was Chris
kindly referring you to my post here:

https://mail.python.org/pipermail/python-ideas/2018-May/050938.html


This is as intended.
I wanted to show my summary and Chris' rebuttal, with links to both 
original posts, because my letter is much shorter than the originals 
while carrying the same message. Also to show that I've made the same 
mistake, which puts things in perspective: it shows how an outsider could 
get the wrong idea.



which refutes Mike's original, biased selection of a handful of
languages. Which he then misrepresented as not including assignment
expressions when half of them actually do, at least in a limited form.

(3 out of the 5 of Mike's examples include *at least* some limited
assignment expression. My survey found 13 out of 18 modern languages
have at least some form of assignment expression. See link above for
details.)

It simply isn't true that modern languages are moving away from
assignment expressions. Some are. Some aren't. Even those that don't
support assignment expressions in general usually support special syntax
to allow it in a few contexts.

But even if we pretended that, let's say, Go for example has no
assignment expressions (it actually does, but limited only to the
special case of if statements), what conclusion should we draw?

That Rob Pike is ever so much a better language designer than Guido?
Maybe he is, maybe he isn't, but Go is just eight years old. Python is
27. When Python was 8, it lacked a lot of features we find indispensible
now:

https://www.python.org/download/releases/1.5/whatsnew/

Who is to say that when Go is 27, or even 10, it won't have added
assignment expressions?

Some of Go's choices seem a bit... idiosyncratic. Strings are still
ASCII byte-strings. Unicode text is relegated to a seperate type,
"runes", the naming of which is a tad patronising and contemptuous of
non-ASCII users. There are no exceptions or try...finally. The designers
bowed to public pressure and added a sort of poor-man's exception system,
panic/recover, but for most purposes, they still requiring the "check a
flag to test success" anti-pattern. The designers are actively opposed
to assertions.

I dare say a lot of Python's choices seem strange to Go programmers too.

Rather than saying "Go got it right", maybe we should be saying "Go got
it wrong".




We can also reuse the existing "EXPR as NAME" syntax that already
exists and is widely enjoyed.


For the record, with "as", Victor Stinner's examples from the 5 Jul 2018
00:51:37 +0200 letter would look like:


Enough with the "as" syntax. This discussion has been going on since
FEBRUARY, and "as" was eliminated as ambiguous months ago. Stop beating
that dead horse.






--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572: intended scope of assignment expression

2018-07-05 Thread Ivan Pozdeev via Python-Dev

On 05.07.2018 15:20, Victor Stinner wrote:

Hi,

My work (*) in the "Assignment expression and coding style: the while
True case" thread helped me to understand something about the
*intended* scope.

While technically, assignment expressions keep the same scoping rules
than assignment statements, writing "if (x := func()): ..." or "while
(x := func()): ..." shows the "intented" scope of the variable. Even
if, as explained properly in the PEP, the scope is wider (for good
reasons) as "for line in file: ..." keeps line alive after the loop
(nothing new under the sun). It's something subtle that I missed at
the first read (of the code and the PEP), the difference is not
obvious.

x = func()
if x:
 ... # obviously use x
# do we still plan to use x here?
# it's non obvious just by reading the if

versus

if (x := func()):
 ... # obviously use x
# ":=" in the if "announces" that usually x is no longer used
# here, even if technically x is still defined


The construct for temporary variables is `with'. `if' carries no such 
implications.



See my match/group PR for more concrete examples:
https://github.com/python/cpython/pull/8097/files

I understand the current PEP 572 rationale as: assignment expressions
reduces the number of lines and the indentation level... pure syntax
sugar.

IMHO this "intended" scope is a much better way to sell assignment
expressions than the current rationale. In fact, it's explained later
very quickly in the PEP:
https://www.python.org/dev/peps/pep-0572/#capturing-condition-values

But it could be better explained than just "good effect in the header
of an if or while statement".

The PEP contains a good example of the intended scope:

if pid := os.fork():
 # Parent code
 # pid is valid and is only intended to be used in this scope
 ... # use pid
else:
 # Child code
 # pid is "invalid" (equal to zero)
 ... # don't use pid
# since this code path is common to parent and child,
# the pid is considered invalid again here
# (since the child does also into this path)
... # don't use pid


(*) My work: my current 3 pull requests showing how assignment
expressions can be
used in the stdlib:

while True: https://github.com/python/cpython/pull/8095/files
match/group: https://github.com/python/cpython/pull/8097/files
list comp: https://github.com/python/cpython/pull/8098/files

Victor
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 572, VF/B, and "Shark Jumping"

2018-07-05 Thread Ivan Pozdeev via Python-Dev

On 05.07.2018 3:22, Chris Angelico wrote:

Python uses "as NAME" for things that
are quite different from this, so it's confusing
I wrote in 
https://mail.python.org/pipermail/python-dev/2018-June/154066.html that 
this is easily refutable.

Looks like not for everybody. Okay, here goes:

The constructs that currently use `as' are:

* import module as m
* except Exception as e:
* with expr as obj:

* In `with', there's no need to assign both `expr' and its __enter__() 
result -- because the whole idea of `with' is to put the object through 
`__enter__', and because a sane `__enter__()' implementation will return 
`self' anyway (or something with the same semantic -- i.e. _effectively_ 
`self'). But just in case, the double-assignment can be written as:


with (expr as obj) as ctx_obj:

by giving "as" lower priority than `with'. As I said, the need for this 
is nigh-nonexistent.


* `import' doesn't allow expressions (so no syntactic clash here), but 
the semantic of "as" here is equivalent to the AE, so no confusion here.

* Same goes for `except`: doesn't accept expressions, same semantic.

So, with "as" only `with' becomes the exception -- and an easily 
explainable one since its whole purpose is to implicitly call the 
context manager interface.
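
A tiny runnable sketch of why `with' is the odd one out here: the name 
after "as" is bound to whatever __enter__() returns, not to the 
expression itself:

class Probe:
    def __enter__(self):
        return "enter() result"   # deliberately not self
    def __exit__(self, *exc):
        return False

with Probe() as obj:
    print(obj)   # prints "enter() result", not the Probe instance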

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Symmetric vs asymmetric symbols (was PEP 572: Do we really need a ":" in ":="?)

2018-07-06 Thread Ivan Pozdeev via Python-Dev

On 06.07.2018 7:02, Chris Angelico wrote:

On Fri, Jul 6, 2018 at 12:48 PM, Alexander Belopolsky
 wrote:

Python really has a strong C legacy and this is the area where I agree that
C designers made a mistake by picking a symmetric symbol (=) for an
asymmetric operation. On top of that, they picked an asymmetric digraph (!=)
for a symmetric operation as well and Python (unfortunately) followed the
crowd and ditched a much better alternative (<>).  My only hope is that
Python 4.0 will allow ← to be used in place of either = or :=. :-)

Interesting. Looking over Python's binary operators, we have:

|, ^, &, +, *: symmetric (on ints)
-, /, //, **: asymmetric
<, >: mirrored operations
<=, >=: mirrored operations but not reflected
<<, >>: non-mirrored asymmetric
and, or: technically asymmetric but often treated as symmetric
in, not in: asymmetric
is, is not: symmetric

Which ones ought to have symmetric symbols, in an ideal world? Should
<= and >= be proper mirrors of each other? Are << and >> confusing? Is
it a problem that the ** operator is most decidedly asymmetric?

Personally, I'm very happy that the operators use the same symbols
that they do in other languages - U+002B PLUS SIGN means addition, for
instance - and everything else is secondary. But maybe this is one of
those "hidden elegances" that you're generally not *consciously* aware
of, but which makes things "feel right", like how Disney's "Moana" has
freedom to the right of the screen and duty to the left. Are there
languages where symmetric operations are always represented with
symmetric symbols and vice versa?


Nothing like that.
The connotations for the symbols rather draw from other fields that 
we're familiar with.


Primarily math (which everyone has studied at school -- this is also the 
reason why we use infix notation, even though postfix notation would let 
us do without parentheses);

in particular, % draws from ÷ (the division sign) and / from the fraction bar;
& is from English ("and");
!, | and ^ map "the closest unused symbols on the keyboard" onto 
symbols from logic algebra:

  ¬ (https://en.wikipedia.org/wiki/Negation),
  ∨ (https://en.wikipedia.org/wiki/Logical_disjunction),
  ∧ (https://en.wikipedia.org/wiki/Logical_conjunction),
  ↑ or | (https://en.wikipedia.org/wiki/Sheffer_stroke),
  ↓ (https://en.wikipedia.org/wiki/Peirce's_arrow);

"!=" reads literally "not equal" ( ! is "not", = is "equal" )
   (while "<>" reads "less or greater" which is mathematically not 
equivalent to that: not everything has a defined ordering relation. "<>" 
draws from BASIC AFAIK which was geared towards regular users who don't 
deal with advanced mathematics.)



ChrisA
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] A "day of silence" on PEP 572?

2018-07-06 Thread Ivan Pozdeev via Python-Dev

On 06.07.2018 1:40, Guido van Rossum wrote:

Thanks you for writing up a proposal. There have been many proposals 
made, including 'EXPR as NAME', similar to yours. It even has a small 
section in the PEP: 
https://www.python.org/dev/peps/pep-0572/#alternative-spellings. It's 
really hard to choose between alternatives, but all things considered 
I have decided in favor of `NAME := EXPR` instead. Your efforts are 
appreciated but you would just be wasting your time if you wrote a 
PEP. If you're interested in helping out, would you be interested in 
working on the implementation of PEP 572?


Maybe we should call for what the subject line says? Probably not a single 
day; rather, however much time is needed.


AFAICS, all the arguments have already been told and retold. So we 
should probably give Guido some peace of mind until he officially 
accepts the PEP or whatever he decides.


--
Regards,
Ivan
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Naming comprehension syntax [was Re: Informal educator feedback on PEP 572 ...]

2018-07-06 Thread Ivan Pozdeev via Python-Dev

On 07.07.2018 2:31, Guido van Rossum wrote:
On Fri, Jul 6, 2018 at 4:19 PM Terry Reedy wrote:


Since Guido, the first respondent, did not immediately shoot the idea
down, I intend to flesh it out and make it more concrete.


Maybe I should have shot it down. The term is entrenched in multiple 
languages by now (e.g. 
https://en.wikipedia.org/wiki/List_comprehension). Regarding "list 
builder" one could argue that it would just add more confusion, since 
there's already an unrelated Builder Pattern 
(https://en.wikipedia.org/wiki/Builder_pattern) commonly used in Java. 
(Though I worry about the presence of a Python example in that 
Wikipedia page. :-)


According to https://en.wikipedia.org/wiki/List_comprehension#History, 
the term's known from at least 1977 and comes from such influential 
languages as NPL, Miranda and Haskell. So you're not the one to blame for it :-)




Also, "generator builder" is not much more expressive than "generator 
expression",


"generator builder" is simply incorrect. The GE doesn't "build" 
generators, it's a generator itself. It's a generator _and_ an 
expression. What could be a more obvious name?
This suggestion looks like it comes from someone who hasn't quite grasped 
generators yet.


and the key observation that led to this idea was that it's such a 
mouthful to say "comprehensions and generator expressions".


Since "X comprehensions" are advertised as and intended to be 
functionally equivalent to `X(generator expression)', I use just 
"generator expressions" to refer to all.
That's accurate because the common part with the distinctive syntax -- 
which is the thing referred to when addressing them all -- effectively 
_is_ a generator expression (the syntax differences in the leading term 
are insignificant), what wraps it is of no concern.
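
As a quick sanity check of that functional equivalence (just a sketch):

squares_comp    = [x * x for x in range(5)]
squares_wrapped = list(x * x for x in range(5))
assert squares_comp == squares_wrapped == [0, 1, 4, 9, 16]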


So, no new terms are necessary, but someone who cares may add a note to 
the docs to this effect.


Maybe it's not too late to start calling the latter "generator 
comprehensions" so that maybe by the year 2025 we can say 
"comprehensions" and everyone will understand we mean all four types?


FWIW more people should start using "list display" etc. for things 
like [a, b, c].

--
--Guido van Rossum (python.org/~guido )


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Naming comprehension syntax [was Re: Informal educator feedback on PEP 572 ...]

2018-07-06 Thread Ivan Pozdeev via Python-Dev

On 07.07.2018 2:58, Ivan Pozdeev via Python-Dev wrote:

On 07.07.2018 2:31, Guido van Rossum wrote:
On Fri, Jul 6, 2018 at 4:19 PM Terry Reedy <tjre...@udel.edu> wrote:


Since Guido, the first respondent, did not immediately shoot the
idea
down, I intend to flesh it out and make it more concrete.


Maybe I should have shot it down. The term is entrenched in multiple 
languages by now (e.g. 
https://en.wikipedia.org/wiki/List_comprehension). Regarding "list 
builder" one could argue that it would just add more confusion, since 
there's already an unrelated Builder Pattern 
(https://en.wikipedia.org/wiki/Builder_pattern) commonly used in 
Java. (Though I worry about the presence of a Python example in that 
Wikipedia page. :-)


According to https://en.wikipedia.org/wiki/List_comprehension#History, 
the term's known from at least 1977 and comes from such influential 
languages as NPL, Miranda and Haskell. So you're not the one to blame for it :-)




Also, "generator builder" is not much more expressive than "generator 
expression",


"generator builder" is simply incorrect. The GE doesn't "build" 
generators, it's a generator itself. It's a generator _and_ an 
expression. What could be a more obvious name?
This suggestion looks like it comes from someone who hasn't quite 
grasped generators yet.


and the key observation that led to this idea was that it's such a 
mouthful to say "comprehensions and generator expressions".


Since "X comprehensions" are advertised as and intended to be 
functionally equivalent to `X(generator expression)', I use just 
"generator expressions" to refer to all.
That's accurate because the common part with the distinctive syntax -- 
which is the thing referred to when addressing them all -- effectively 
_is_ a generator expression (the syntax differences in the leading 
term are insignificant), what wraps it is of no concern.


So, no new terms are necessary, but someone who cares may add a note 
to the docs to this effect.




Maybe it's not too late to start calling the latter "generator 
comprehensions" so that maybe by the year 2025 we can say 
"comprehensions" and everyone will understand we mean all four types?



https://docs.python.org/3/reference/expressions.html?highlight=comprehension#displays-for-lists-sets-and-dictionaries

Oh, I see. So, "comprehension" is actually the official term for this 
"distinctive syntax", and the fact that "generator expressions" came to 
use it is but a coincidence.


In that case, we can make a Solomonic decision: mention _both_ that 
"comprehension" is the official term for the syntax in the GE's reference 
entry, _and_ the fact that "X comprehensions" are effectively wrapped 
GEs in their reference entries.

Then everyone will learn both terminologies and can choose whichever is 
more convenient to use.


FWIW more people should start using "list display" etc. for things 
like [a, b, c].

--
--Guido van Rossum (python.org/~guido <http://python.org/%7Eguido>)


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru




___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Naming comprehension syntax [was Re: Informal educator feedback on PEP 572 ...]

2018-07-06 Thread Ivan Pozdeev via Python-Dev

https://github.com/python/cpython/pull/8145

On 07.07.2018 3:33, Ivan Pozdeev via Python-Dev wrote:

On 07.07.2018 2:58, Ivan Pozdeev via Python-Dev wrote:

On 07.07.2018 2:31, Guido van Rossum wrote:
On Fri, Jul 6, 2018 at 4:19 PM Terry Reedy <tjre...@udel.edu> wrote:


Since Guido, the first respondent, did not immediately shoot the
idea
down, I intend to flesh it out and make it more concrete.


Maybe I should have shot it down. The term is entrenched in multiple 
languages by now (e.g. 
https://en.wikipedia.org/wiki/List_comprehension). Regarding "list 
builder" one could argue that it would just add more confusion, 
since there's already an unrelated Builder Pattern 
(https://en.wikipedia.org/wiki/Builder_pattern) commonly used in 
Java. (Though I worry about the presence of a Python example in that 
Wikipedia page. :-)


According to 
https://en.wikipedia.org/wiki/List_comprehension#History, the term's 
known from at least 1977 and comes from such influential languages as 
NPL, Miranda and Haskell. So you're not the one to blame for it :-)




Also, "generator builder" is not much more expressive than 
"generator expression",


"generator builder" is simply incorrect. The GE doesn't "build" 
generators, it's a generator itself. It's a generator _and_ an 
expression. What could be a more obvious name?
This suggestion looks like it comes from someone who hasn't quite 
grasped generators yet.


and the key observation that led to this idea was that it's such a 
mouthful to say "comprehensions and generator expressions".


Since "X comprehensions" are advertised as and intended to be 
functionally equivalent to `X(generator expression)', I use just 
"generator expressions" to refer to all.
That's accurate because the common part with the distinctive syntax 
-- which is the thing referred to when addressing them all -- 
effectively _is_ a generator expression (the syntax differences in 
the leading term are insignificant), what wraps it is of no concern.


So, no new terms are necessary, but someone who cares may add a note 
to the docs to this effect.




Maybe it's not too late to start calling the latter "generator 
comprehensions" so that maybe by the year 2025 we can say 
"comprehensions" and everyone will understand we mean all four types?



https://docs.python.org/3/reference/expressions.html?highlight=comprehension#displays-for-lists-sets-and-dictionaries

Oh, I see. So, "comprehension" is actually the official term for this 
"distinctive syntax", and the fact that "generator expressions" came 
to use it is but a coincidence.


In that case, we can make a Solomonic decision: mention _both_ that 
"comprehension" is the official term for the syntax in the GE's reference 
entry, _and_ the fact that "X comprehensions" are effectively wrapped 
GEs in their reference entries.

Then everyone will learn both terminologies and can choose whichever is 
more convenient to use.


FWIW more people should start using "list display" etc. for things 
like [a, b, c].

--
--Guido van Rossum (python.org/~guido <http://python.org/%7Eguido>)


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru




___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe:https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru




___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Time for 3.4.9 and 3.5.6

2018-07-08 Thread Ivan Pozdeev via Python-Dev
I'll use this opportunity to remind you that the 3.4 build is broken -- it 
can't be built from start to installer with the instructions given, because 
of outside factors (CPython has migrated from Hg to Git). 
https://bugs.python.org/issue31623 about this was ignored (see 
https://bugs.python.org/issue31623#msg303708 for supplemental fixes).

If this isn't considered something needing a fix, the claim that 3.4 is 
supported in any shape or form is but a pretense -- if something can't 
be built, it can't be used.



On 08.07.2018 10:45, Larry Hastings wrote:



My six-month cadence means it's time for the next releases of 3.4 and 
3.5.  There haven't been many changes since the last releases--two, to 
be exact.  These two security fixes were backported to both 3.4 and 3.5:


  * bpo-32981: Fix catastrophic backtracking vulns (GH-5955)
  * bpo-33001: Prevent buffer overrun in os.symlink (GH-5989)

3.5 also got some doc-only changes related to the online "version 
switcher" dropdown.  (They weren't backported to 3.4 because we don't 
list 3.4 in the version switcher dropdown anymore.)



There are currently no PRs open for either 3.4 or 3.5, and they also 
have no open "release blocker" or "deferred blocker" bugs. It seems 
things are pretty quiet in our two security-fixes-only branches--a 
good way to be!


I therefore propose to cut the RCs in a week and a half, and the 
finals two weeks later.  So:


Wednesday  July 18 2018 - 3.4.9rc1 and 3.5.6rc1
Wednesday August 1 2018 - 3.4.9 final and 3.5.6 final

If anybody needs more time I'm totally happy to accommodate them--you 
can probably have all the time you need.  I'm trying to keep to my 
rough six-month cadence, but honestly that's pretty arbitrary.


Thanks to all of you who keep making 3.4 and 3.5 better,


//arry/


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Time for 3.4.9 and 3.5.6

2018-07-08 Thread Ivan Pozdeev via Python-Dev

On 09.07.2018 1:32, Larry Hastings wrote:

On 07/08/2018 10:05 AM, Ivan Pozdeev via Python-Dev wrote:


I'll use this opportunity to remind you that the 3.4 build is broken -- 
it can't be built from start to installer with the instructions given, 
because of outside factors (CPython has migrated from Hg to Git). 
https://bugs.python.org/issue31623 about this was ignored (see 
https://bugs.python.org/issue31623#msg303708 for supplemental fixes).


If this isn't considered something needing a fix, the claim that 3.4 
is supported in any shape or form is but a pretense -- if something 
can't be built, it can't be used.




By "3.4 build is broken", you mean that building the installer is 
broken on Windows.  Sadly the maintainer of that installer is no 
longer part of the Python community, and as a Linux-only dev I have no 
way of testing any proposed change.




Not only that, building the binaries is also broken as per 
https://bugs.python.org/issue31645 (that's one of the aforementioned 
"supplemental fixes").


More importantly, 3.4 is in security-fixes-only mode, which means that 
changes that aren't security fixes won't be accepted.  Fixing this 
would not be a security fix.  So even if the patch was clean and 
well-reviewed and worked perfectly I'm simply not going to merge it 
into 3.4.  The 3.4 tree is only going to be in security-fixes mode for 
another eight months anyway, after which I will retire as 3.4 release 
manager, and 3.4 will no longer be supported by the Python core 
development community at all.


I kinda don't see the point of claiming any kind of support and doing any 
work if the codebase is unusable. All that achieves is confused users 
and wasted time for everyone involved.

If you are "a Linux-only dev" and no-one is going to look at the Windows 
part, why not just say clearly that this version line is not supported 
outside Linux?
I'm okay with that (what is and isn't supported is none of my business). 
At least there won't be a nasty surprise when I rely on the team's 
claim that the code is workable and it actually isn't -- and another 
one when I go to the trouble of providing a fix, only to be told that I'm a 
troublemaker and have just massively wasted my and everybody else's time 
as thanks.


Besides, that'll be a reason to officially close all still-open tickets 
for 3.4/3.5 (there are about 2000 that mention them) regardless 
of topic (I've checked that none are currently marked as security 
issues).


As pointed out in that bpo issue: if the problem is entirely due to 
switching from "git" to "hg", then you should have very little 
difficulty working around that.  You can use a git-to-hg bridge, or 
create a local-only hg repo from the 3.4 tree.  That should permit you 
to build your own installers.  I'm a little sad that the 3.4 Windows 
installers no longer build directly out-of-tree without such a 
workaround, but sometimes that's just what happens with a Python 
release three major releases out of date languishing in 
security-fixes-only mode.



//arry/


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] why is not 64-bit installer the default download link for Windows?

2018-07-09 Thread Ivan Pozdeev via Python-Dev

On 09.07.2018 19:01, Steve Dower wrote:

On 09Jul2018 0803, Cosimo Lupo wrote:
If one goes to https://www.python.org/downloads from a Windows browser, the 
default download URL is for the 32-bit installer instead of the 
64-bit one.

I wonder why is this still the case?
Shouldn't we encourage new Windows users (who may not even know the 
distinction between the two architectures) to use the 64-bit version 
of Python, since most likely they can?


The difficulty is that they *definitely* can use the 32-bit version, 
and those few who are on older machines or older installs of Windows 
may not understand why the link we provide didn't work for them.


From the various telemetry I've seen (and I work at Microsoft, so I 
have better access than most :) ), there is still enough 32-bit 
Windows out there that I'm not confident enough with "most likely". I 
haven't checked any location data (not even sure if we've got it), but 
I'd guess that there's higher 32-bit usage among less privileged 
countries and people.


I've thought a bit about making a single installer that can offer the 
option of 32-bit/64-bit at install time, but I don't actually think 
it's that big a problem to deserve that much effort as a solution.


Perhaps we should add non-button text below the button saying "Get the 
64-bit version"?




Maybe infer the bitness from the User-Agent instead. This seems to be the 
trend among official download sites in general.
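
For illustration, a rough sketch of such an inference (assuming simple 
substring checks on the UA string are good enough; "Win64"/"WOW64"/"x64" 
are the usual 64-bit Windows markers):

def wants_64bit_windows_installer(user_agent):
    """Guess whether the browser reports a 64-bit Windows OS."""
    ua = user_agent.lower()
    if "windows" not in ua:
        return False
    return any(tag in ua for tag in ("win64", "wow64", "x64", "amd64"))

# e.g. "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ..." -> True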



Cheers,
Steve
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] why is not 64-bit installer the default download link for Windows?

2018-07-10 Thread Ivan Pozdeev via Python-Dev

On 11.07.2018 1:41, Victor Stinner wrote:

2018-07-09 18:01 GMT+02:00 Steve Dower :

The difficulty is that they *definitely* can use the 32-bit version, and
those few who are on older machines or older installs of Windows may not
understand why the link we provide didn't work for them.

Let's say that only 10% of users still use 32-bit version. If they
download a default 64-bit binary, I'm quite sure that running the
binary will emit an error no? Such users should be used to such error,
and be able to get the 64-bit version, no?


Attached is an image of what happens. The message is:

"One or more issues caused the setup to fail. Please fix the issues and 
the retry setup. For more information see the log file .


0x80070661 - This installation package is not supported by this 
processor type. Contact your product vendor."


Pretty descriptive in my book.


Victor
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Can I make marshal.dumps() slower but stabler?

2018-07-13 Thread Ivan Pozdeev via Python-Dev
If the use case for stability is only .pyc compilation, I doubt it's 
even relevant 'cuz .pyc's are supposed to be compiled in isolation from 
other current objects (otherwise, they wouldn't be reusable or would be 
invalidated when dependent modules change, neither of which is the 
case), so relevant reference counts should always be the same.

I may be mistaken, though.

On 13.07.2018 16:57, Christian Tismer wrote:

Well, to my knowledge they did not modify the marshal code.
They are in fact heavily dependent from marshal speed since that
is used frequently to save and restore state of many actors.

But haven't looked further since 2010 ;-)

Btw., why are they considering to make the algorithm slower,
just because someone wants the algorithm stable?

An optional keyword argument would give the stability, and the
default behavior would not be changed at all.

Cheers - Chris


On 12.07.18 12:07, Steve Holden wrote:

Eve is indeed based on stackless 2, and are well capable of ignoring
changes they don't think they need (or were when I was working with
them). At one point I seem to remember they optimised their interpreter
to use singleton floating-point values, saving large quantities of
memory by having only one floating-point zero.

Steve Holden

On Thu, Jul 12, 2018 at 9:55 AM, Alex Walters <tritium-l...@sdamon.com> wrote:



 > -Original Message-
 > From: Python-Dev  list=sdamon@python.org > On Behalf Of
 Victor Stinner
 > Sent: Thursday, July 12, 2018 4:01 AM
 > To: Serhiy Storchaka mailto:storch...@gmail.com>>
 > Cc: python-dev mailto:python-dev@python.org>>
 > Subject: Re: [Python-Dev] Can I make marshal.dumps() slower but stabler?
 >
 > 2018-07-12 8:21 GMT+02:00 Serhiy Storchaka mailto:storch...@gmail.com>>:
 > >> Is there any real application which marshal.dumps() performance is
 > >> critical?
 > >
 > > EVE Online is a well known example.
 >
 > EVE Online has been created in 2003. I guess that it still uses Python
 2.7.
 >
 > I'm not sure that a video game would pick marshal in 2018.
 >

 EVE doesn't use stock CPython, IIRC.  They use a version of stackless 2,
 with their own patches.  If a company is willing to patch python
 itself, I
 don't think their practices should be cited without more context
 about what
 they actually modified.

 > Victor
 > ___
 > Python-Dev mailing list
 > Python-Dev@python.org 
 > https://mail.python.org/mailman/listinfo/python-dev
 
 > Unsubscribe:
 https://mail.python.org/mailman/options/python-dev/tritium-
 
 > list%40sdamon.com 

 ___
 Python-Dev mailing list
 Python-Dev@python.org 
 https://mail.python.org/mailman/listinfo/python-dev
 
 Unsubscribe:
 https://mail.python.org/mailman/options/python-dev/steve%40holdenweb.com
 




___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/tismer%40stackless.com





___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Const access to CPython objects outside of GIL?

2018-07-17 Thread Ivan Pozdeev via Python-Dev

On 17.07.2018 7:18, Radim Řehůřek wrote:

Hi all,

one of our Python projects calls for pretty heavy, low-level 
optimizations.


We went down the rabbit hole and determined that having access to 
PyList_GET_ITEM(list), PyInt_AS_LONG(int) and PyDict_GetItem(dict, 
unicode) on Python objects **outside of GIL** might be a good-enough 
solution. The Python objects in question are guaranteed to live and 
not be mutated externally in any way. They're "frozen" and read-only.




The standard practice if you need to access something outside of GIL is 
to get a private C object from it.
For immutable types, you can get a pointer to the underlying data if the 
internal representation is compatible with some C type (e.g. 
char* PyBytes_AsString(PyObject *o), and FILE* PyFile_AsFile(PyObject *p) 
(Py2 only -- in Py3, PyFile no longer wraps stdio FILE)); otherwise, 
the C API can produce a copy (e.g. 
wchar_t* PyUnicode_AsWideCharString(PyObject *unicode, Py_ssize_t *size)).


Though you can call whatever you want outside of the GIL (it's not like we 
can prevent you), anything that's not officially guaranteed you're doing 
at your own risk. Even if it happens to work now, it can break in 
unpredictable ways at any point in the future.



Under what conditions is it OK to call these 3 functions on such objects?

More generally, what is the CPython 2.7/3.5 contract regarding (lack 
of) object mutation, and the need for reference counting and 
synchronization via GIL?


Which C API functions are safe to call on "const" objects?

Obviously releasing GIL and then calling C API is hacky, but from 
initial experiments, it seems to work (see 
https://stackoverflow.com/questions/51351609/can-i-const-access-cpython-objects-without-gil). 
But I'm wondering if there's a more formal contract around this behaviour.


Cheers,
Radim



___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Fuzzing the Python standard library

2018-07-18 Thread Ivan Pozdeev via Python-Dev

On 17.07.2018 19:44, Jussi Judin wrote:

Hi,

I have been fuzzing[1] various parts of Python standard library for Python 3.7 
with python-afl[2] to find out internal implementation issues that exist in the 
library. What I have been looking for are mainly following:

* Exceptions that are something else than the documented ones. These usually 
indicate an internal implementation issue. For example one would not expect an 
UnicodeDecodeError from netrc.netrc() function when the documentation[3] 
promises netrc.NetrcParseError and there is no way to pass properly sanitized 
file object to the netrc.netrc().
* Differences between values returned by C and Python versions of some 
functions. quopri module may have these.
* Unexpected performance and memory allocation issues. These can be somewhat 
controversial to fix, if at all, but at least in some cases from end-user perspective it 
can be really nasty if for example fractions.Fraction("1.64E664644") 
results in hundreds of megabytes of memory allocated and takes very long to calculate. I 
gave up waiting for that function call to finish after 5 minutes.

As this is going to result in a decent amount of bug reports (currently I only filed 
one[4], although that audio processing area has much more issues to file), I would 
like to ask your opinion on filing these bug reports. Should I report all issues 
regarding some specific module in one bug report, or try to further split them into 
more fine grained reports that may be related? These different types of errors are 
specifically noticeable in zipfile module that includes a lot of different exception 
and behavioral types on invalid data.
And in case of sndhdr module, there are multiple modules with issues (aifc, sunau, 
wave) that then show up also in sndhdr when they are used. Or are some of you willing 
to go through the crashes that pop up and help with the report filing?


I'm not from the core team, so will recite best practices from my own 
experience.


Bugs should be reported "one per root cause", aka 1 bug report = 1 fix. It's 
permissible to report separately, especially if you're not sure whether they 
are the same bug (then add a prominent link), but since this is a 
volunteer project, you really should be doing any duplicate checks 
_before_ reporting. Since you'll be checking existing tickets before 
reporting each new one anyway, that'll automatically include _your own_ 
previous tickets ;-)
For the same bug showing up in multiple places, it's better to err on the 
side of fewer tickets -- that is both less work for everyone and gives a 
more complete picture. If something proves to warrant a separate ticket, 
it can be split off later.



The code and more verbose description for this is available from 
. It works by default on some 
GNU/Linux systems only (I use Debian testing), as it relies on /dev/shm/ being 
available and uses shell scripts as wrappers that rely on various tools that may not 
be installed on all systems by default.

As a bonus, as this uses coverage based fuzzing, it also opens up the possibility of 
automatically creating a regression test suite for each of the fuzzed modules to ensure 
that the existing functionality (input files under /corpus/ directory) 
does not suddenly result in additional exceptions and that it is more easy to test 
potential bug fixes (crash inducing files under /crashes/ directory).

As a downside, this uses two quite specific tools (afl, python-afl) that have 
further dependencies (Cython) inside them, I doubt the viability of integrating 
this type of testing as part of normal Python verification process. As a 
difference to libFuzzer based fuzzing that is already integrated in Python[5], 
this instruments the actual (and only the) Python code and not the actions that 
the interpreter does in the background. So this should result in better fuzzer 
coverage for Python code that is used with the downside that when C functions 
are called, they are complete black boxes to the fuzzer.

I have mainly run these fuzzer instances at most for several hours per module 
with 4 instances and stopped running no-issue modules after there have been no 
new coverage discovered after more than 10 minutes. Also I have not really 
created high quality initial input files, so I wouldn't be surprised if there 
are more issues lurking around that could be found with throwing more CPU and 
higher quality fuzzers at the problem.

[1]: https://en.wikipedia.org/wiki/Fuzzing
[2]: https://github.com/jwilk/python-afl
[3]: https://docs.python.org/3/library/netrc.html
[4]: https://bugs.python.org/issue34088
[5]: https://github.com/python/cpython/tree/3.7/Modules/_xxtestfuzz



--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://

Re: [Python-Dev] Benchmarks why we need PEP 576/579/580

2018-07-22 Thread Ivan Pozdeev via Python-Dev
I think it'll benefit all to keep the discussion objective, or at least 
"good subjective" 
(https://stackoverflow.blog/2010/09/29/good-subjective-bad-subjective/). 
Laments or mutual accusations are only wasting everyone's time, 
including the writers.
It's strange that even Guido jumped on the bandwagon -- he's supposed 
to have enough experience to tell right away when a discussion has become 
unproductive. (Or maybe he's testing us?)



All the material to discuss that we have in this thread is a single test 
result that's impossible to reproduce and impossible to run in Py3.


All that this shows is that the PEPs will _likely_ substantially improve 
performance in some scenarios. It's however impossible to say from this 
how frequent these scenarios are in practice, and how consistent the 
improvement is among them. Likewise, it's impossible to say anything 
about the complexity the changes will reduce/introduce without a 
proof-of-concept implementation. So while this is an argument in favor 
of the PEPs, it's too flimsy _on its own_ to accept anything. More and 
better tests and/or sample implementations are needed to say anything 
more conclusive.


All that was already pointed out, and that's where the thread should 
have ended IMO 'cuz there's nothing else to say on the matter.



On 23.07.2018 1:28, Guido van Rossum wrote:
On Sun, Jul 22, 2018 at 1:11 PM, Jeroen Demeyer wrote:


On 2018-07-22 14:52, Stefan Behnel wrote:

Someone has to maintain the *existing* code
base and help newcomers to get into it and understand it.


This is an important point that people seem to be overlooking. The
complexity and maintenance burden of PEP 580 should be compared to
the status-quo. The existing code is complicated, yet nobody seems
to find that a problem. But when PEP 580 makes changes to that
complicated code (and documents some of the existing complexity),
it's suddenly the fault of PEP 580 that the code is complicated.

I also wonder if there is a psychological difference simply
because this is a PEP and not an issue on bugs.python.org. That might
give the impression that it's a more serious thing somehow. Previous
optimizations (https://bugs.python.org/issue26110 for example) were not done in
a PEP and nobody ever mentioned that the extra complexity might be
a problem.

Finally, in some ways, my PEP would actually be a simplification
because it replaces several special cases by one general protocol.
Admittedly, the general protocol that I propose is more
complicated than each existing special case individually but the
overall complexity might actually decrease.


So does your implementation of the PEP result in a net increase or 
decrease of the total lines of code? I know that that's a poor proxy 
for complexity (we can all imagine example bits of code that would 
become less complex by rewriting them in more lines), but if your diff 
actually deleted more lines than it adds, that would cut short a lot 
of discussion. I have a feeling though that that's not the case, and 
now you're in the defense.


--
--Guido van Rossum (python.org/~guido )


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Finding Guido's replacement

2018-07-22 Thread Ivan Pozdeev via Python-Dev
Whatever you decide, please research existing practices and their 
results so as not to repeat the same mistakes as others made before you.
In particular, http://meatballwiki.org/wiki/BenevolentDictator and 
https://en.wikipedia.org/wiki/Anti-pattern .
It would be a waste if Python fell victim to the same traps as 
thousands of projects before it.



On 22.07.2018 23:12, Chris Angelico wrote:

Guido's term as Benevolent Dictator For Life has been a long one, but
in the wake of his resignation, we have an opportunity to correct some
fundamental flaws in the system. Among them:

* Guido lacks patience, as evidenced by the brevity of his acceptance
posts. See 
https://mail.python.org/pipermail/python-dev/2017-December/151038.html
and https://mail.python.org/pipermail/python-dev/2011-November/114545.html
and particularly
https://mail.python.org/pipermail/python-dev/2016-May/144646.html
where Guido specifically cites his own lack of patience.

* Lately, all Guido's actions have been to benefit his employer, not
the Common Pythonista. We have proof of this from reliable reporting
sources such as Twitter and social media.

* Finally, "For Life" is far too long. We need to change our rulers
periodically.

I propose a new way to appoint a project head. All candidates shall be
flown to an island owned by the Python Secret Underground (which
emphatically does NOT exist, but an island that would be owned by it
if it did), whereupon they parachute down, search for guns, and
proceed to fight each other until only one is left alive. The survivor
shall be treated to a chicken dinner and proclaimed Patient,
Understanding, Benevolent Governor, a title which shall be retained
for one fortnight, after which we repeat the exercise.

If this plan meets with broad approval, I shall write up PEP 3401, in
honour of the prior art in PEP 401.

ChrisA
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Tests for internal functionality

2019-03-16 Thread Ivan Pozdeev via Python-Dev

In https://github.com/python/cpython/pull/6541 , I was requested to add tests 
for an internal C function.

As I wrote in 
https://github.com/python/cpython/pull/6541#issuecomment-445514807 , it's not 
clear from the codebase

1) where tests for internal (as opposed to public) functionality should be 
located
    * I only ran across very few tests for private functionality and they were located alongside the public tests. See e.g. 
https://github.com/python/cpython/pull/12140 -- site._getuserbase is a private function.


2) what spec one should test against since there's no official docs for 
internal functionality

That time, they let it slide, but of course this is not something to make a 
habit of. It still bothers me that I left unfinished business there.
Any thoughts?

--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Best way to specify docstrings for member objects

2019-03-20 Thread Ivan Pozdeev via Python-Dev

On 19.03.2019 21:55, Raymond Hettinger wrote:

I'm working on ways to make improve help() by giving docstrings to member 
objects.

One way to do it is to wait until after the class definition and then make 
individual, direct assignments to __doc__ attributes. This way widely 
separates the docstrings from their initial __slots__ definition.   Working 
downstream from the class definition feels awkward and doesn't look pretty.

There's another way I would like to propose¹.  The __slots__ definition already 
works with any iterable including a dictionary (the dict values are ignored), 
so we could use the values for the  docstrings.

This keeps all the relevant information in one place (much like we already do 
with property() objects).  This way already works, we just need a few lines in 
pydoc to check to see if a dict is present.  This way also looks pretty and 
doesn't feel awkward.

I've included worked out examples below.  What do you all think about the 
proposal?


Raymond


¹ https://bugs.python.org/issue36326


== Desired help() output ==


help(NormalDist)

Help on class NormalDist in module __main__:

class NormalDist(builtins.object)
  |  NormalDist(mu=0.0, sigma=1.0)
  |
  |  Normal distribution of a random variable
  |
  |  Methods defined here:
  |
  |  __init__(self, mu=0.0, sigma=1.0)
  |  NormalDist where mu is the mean and sigma is the standard deviation.
  |
  |  cdf(self, x)
  |  Cumulative distribution function.  P(X <= x)
  |
  |  pdf(self, x)
  |  Probability density function.  P(x <= X < x+dx) / dx
  |
  |  --
  |  Data descriptors defined here:
  |
  |  mu
  |  Arithmetic mean.
  |
  |  sigma
  |  Standard deviation.
  |
  |  variance
  |  Square of the standard deviation.



== Example of assigning docstrings after the class definition ==

class NormalDist:
 'Normal distribution of a random variable'

 __slots__ = ('mu', 'sigma')

 def __init__(self, mu=0.0, sigma=1.0):
 'NormalDist where mu is the mean and sigma is the standard deviation.'
 self.mu = mu
 self.sigma = sigma

 @property
 def variance(self):
 'Square of the standard deviation.'
 return self.sigma ** 2.

 def pdf(self, x):
 'Probability density function.  P(x <= X < x+dx) / dx'
 variance = self.variance
 return exp((x - self.mu)**2.0 / (-2.0*variance)) / sqrt(tau * variance)

 def cdf(self, x):
 'Cumulative distribution function.  P(X <= x)'
 return 0.5 * (1.0 + erf((x - self.mu) / (self.sigma * sqrt(2.0

NormalDist.mu.__doc__ = 'Arithmetic mean'
NormalDist.sigma.__doc__ = 'Standard deviation'


IMO this is another manifestation of the problem that things in the class 
definition have no access to the class object.
Logically speaking, a definition item should be able to see everything that is 
defined before it.
For the same reason, we have to jump through hoops to use a class name in a class attribute definition -- see e.g. 
https://stackoverflow.com/questions/14513019/python-get-class-name


If that problem is resolved, you would be able to write something like:

class NormalDist:
'Normal distribution of a random variable'

__slots__ = ('mu', 'sigma')

    __self__.mu.__doc__ = 'Arithmetic mean'
    __self__.sigma.__doc__ = 'Standard deviation'





== Example of assigning docstrings with a dict ==

class NormalDist:
 'Normal distribution of a random variable'

 __slots__ = {'mu' : 'Arithmetic mean.', 'sigma': 'Standard deviation.'}

 def __init__(self, mu=0.0, sigma=1.0):
 'NormalDist where mu is the mean and sigma is the standard deviation.'
 self.mu = mu
 self.sigma = sigma

 @property
 def variance(self):
 'Square of the standard deviation.'
 return self.sigma ** 2.

 def pdf(self, x):
 'Probability density function.  P(x <= X < x+dx) / dx'
 variance = self.variance
 return exp((x - self.mu)**2.0 / (-2.0*variance)) / sqrt(tau * variance)

 def cdf(self, x):
 'Cumulative distribution function.  P(X <= x)'
 return 0.5 * (1.0 + erf((x - self.mu) / (self.sigma * sqrt(2.0))))

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Remove tempfile.mktemp()

2019-03-20 Thread Ivan Pozdeev via Python-Dev
Before we can say if something is "secure" or not, we need a threat model -- i.e. we need to agree which use cases we are protecting and from 
what threats.


So far, I've seen these use cases:

1. File for the current process' private use
2. File/file name generated by the current process; written by another process, 
read by current one
3. File name generated by the current process; written by the current process, 
read by another one

And the following threats, three axes:

a. Processes run as other users
b. Processes run as the same user (or a user that otherwise automatically has 
access to all your files)

1. Accidental collision from a process that uses CREATE_NEW or equivalent
2. Accidental collision from a process that doesn't use CREATE_NEW or equivalent
3. Malicious code creating files at random
4. Malicious code actively monitoring file creation

-1. read
-2. write

E.g. for threat b-4), it's not safe to use named files for IPC at all, only 
case 1 can be secured (with exclusive open).
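
To illustrate, a minimal sketch of securing case 1 with an exclusive open (the
fixed path in the second half is made up):

    import os
    import tempfile

    # mkstemp() opens with O_CREAT | O_EXCL and mode 0600, so no other process
    # can pre-create, or race us to create, the same name.
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, b"private scratch data")
    finally:
        os.close(fd)
        os.unlink(path)

    # The same guarantee by hand, for a caller-chosen (hypothetical) name:
    fd = os.open("/tmp/myapp-scratch", os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o600)
    os.close(fd)
    os.unlink("/tmp/myapp-scratch")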

On 19.03.2019 16:03, Stéphane Wirtel wrote:


Hi,

Context: raise a warning or remove tempfile.mktemp()
BPO: https://bugs.python.org/issue36309

Since 2.3, this function is deprecated in the documentation, just in the
documentation. In the code, there is a commented RuntimeWarning.
Commented by Guido in 2002, because the warning was too annoying (and I
understand ;-)).

So, in this BPO, we start to discuss about the future of this function
and Serhiy proposed to discuss on the Python-dev mailing list.

Question: Should we drop it or add a (Pending)DeprecationWarning?

Suggestion and timeline:

3.8, we raise a PendingDeprecationWarning
 * update the code
 * update the documentation
 * update the tests
   (check a PendingDeprecationWarning if sys.version_info == 3.8)

3.9, we change PendingDeprecationWarning to DeprecationWarning
   (check DeprecationWarning if sys.version_info == 3.9)

3.9+, we drop tempfile.mktemp()

What do you suggest?

Have a nice day and thank you for your feedback.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] New Python Initialization API

2019-03-28 Thread Ivan Pozdeev via Python-Dev



On 27.03.2019 20:48, Victor Stinner wrote:

Hi,

I would like to add a new C API to initialize Python. I would like
your opinion on the whole API before making it public. The code is
already implemented. Doc of the new API:

https://pythondev.readthedocs.io/init_config.html


To make the API public, _PyWstrList, _PyInitError, _PyPreConfig,
_PyCoreConfig and related functions should be made public.

By the way, I would suggest to rename "_PyCoreConfig" to just
"PyConfig" :-) I don't think that "core init" vs "main init" is really
relevant: more about that below.


Let's start with two examples using the new API.

Example of simple initialization to enable isolated mode:

 _PyCoreConfig config = _PyCoreConfig_INIT;
 config.isolated = 1;

 _PyInitError err = _Py_InitializeFromConfig(&config);
By my outsider observation, the `config' argument and return code are asking to be added to Py_Initialize instead; 
`_Py_InitializeFromConfig` and `_Py_PreInitialize` look redundant.

 if (_Py_INIT_FAILED(err)) {
 _Py_ExitInitError(err);
 }
 /* ... use Python API here ... */
 Py_Finalize();

Example using the pre-initialization to enable the UTF-8 Mode (and use the
"legacy" Py_Initialize() function):

 _PyPreConfig preconfig = _PyPreConfig_INIT;
 preconfig.utf8_mode = 1;

 _PyInitError err = _Py_PreInitialize(&preconfig);
 if (_Py_INIT_FAILED(err)) {
 _Py_ExitInitError(err);
 }

 /* at this point, Python will only speak UTF-8 */

 Py_Initialize();
 /* ... use Python API here ... */
 Py_Finalize();

Since November 2017, I'm refactoring the Python Initialization code to
cleanup the code and prepare a new ("better") API to configure Python
Initialization. I just fixed the last issues that Nick Coghlan asked
me to fix (add a pre-initialization step: done, fix mojibake: done).
My work is inspired by Nick Coghlan's PEP 432, but it is not
implementing it directly. I had other motivations than Nick even if we
are somehow going towards the same direction.

Nick wants to get a half-initialized Python ("core init"), configure
Python using the Python API and Python objects, and then finish the
implementation ("main init").

I chose a different approach: put *everything* into a single C
structure (_PyCoreConfig) using C types. Using the structure, you
should be able to do what Nick wanted to do, but with C rather than
Python. Nick: please tell me if I'm wrong :-)

This work is also connected to Eric Snow's work on sub-interpreters
(PEP 554) and moving global variables into structures. For example,
I'm using his _PyRuntime structure to store a new "preconfig" state
(pre-initialization configuration, more about that below).

In November 2017, when I started to work on the Python Initialization
(bpo-32030), I identified the following problems:

* Many parts of the code were interdependent
* Code executed early in Py_Main() used the Python API before the Python API
   was fully initialized. Like code parsing -W command line option which
   used PyUnicode_FromWideChar() and PyList_Append().
* Error handling used Py_FatalError() which didn't let the caller to decide
   how to handle the error. Moreover, exit() was used to exit Python,
whereas libpython shouldn't do that: a library should not exit the
whole process! (imagine when Python is embedded inside an application)

One year and a half later, I implemented the following solutions:

* Py_Main() and Py_Initialize() code has been reorganized to respect
   priorities between global configuration variables (ex:
   Py_IgnoreEnvironmentFlag), environment variables (ex: PYTHONPATH), command
   line arguments (ex: -X utf8), configuration files (ex: pyenv.cfg), and the
   new _PyPreConfig and _PyCoreConfig structures which store the whole
   configuration.
* Python Initialization no longer uses the Python API but only C types
   like wchar_t* strings, a new _PyWstrList structure and PyMem_RawMalloc()
   memory allocator (PyMem_Malloc() is no longer used during init).
* The code has been modified to use a new _PyInitError structure. The caller
   of the top function gets control to cleanup everything before handling the
   error (display a fatal error message or simply exit Python).

The new _PyCoreConfig structure has the top-priority and provides a single
structure for all configuration parameters.

It becomes possible to override the code computing the "path configuration"
like sys.path to fully control where Python looks to import modules. It
becomes possible to use an empty list of paths to only allow builtin modules.

A new "pre-initialization" steps is responsible to configure the bare minimum
before the Python initialization: memory allocators and encodings
(LC_CTYPE locale
and the UTF-8 mode). The LC_CTYPE is no longer coerced and the UTF-8 Mode is
no longer enabled automatically depending on the user configuration to prevent
mojibake. Previously, calling Py_DecodeLocale() to get a Unicode wchar_t*
st

Re: [Python-Dev] PEP 578: Python Runtime Audit Hooks

2019-03-29 Thread Ivan Pozdeev via Python-Dev
Like in the mktemp thread earlier, I would request a threat model (what use cases are supposed to be protected (in this case, by reporting 
rather than preventing) and from what threats) -- in the discussion, and eventually, in the PEP.
Without one, any claims and talks about whether something would be an effective security measure are pointless -- 'cuz you would never know 
if you accounted for everything and would not even have the definition of that "everything".


On 29.03.2019 1:35, Steve Dower wrote:

Hi all

Time is short, but I'm hoping to get PEP 578 (formerly PEP 551) into Python 3.8. Here's the current text for review and comment before I 
submit to the Steering Council.


The formatted text is at https://www.python.org/dev/peps/pep-0578/ (update just pushed, so give it an hour or so, but it's fundamentally 
the same as what's there)


No Discourse post, because we don't have a python-dev equivalent there yet, so 
please reply here for this one.

Implementation is at https://github.com/zooba/cpython/tree/pep-578/ and my backport to 3.7 
(https://github.com/zooba/cpython/tree/pep-578-3.7/) is already getting some real use (though this will not be added to 3.7, unless people 
*really* want it, so the backport is just for reference).


Cheers,
Steve

=

PEP: 578
Title: Python Runtime Audit Hooks
Version: $Revision$
Last-Modified: $Date$
Author: Steve Dower 
Status: Draft
Type: Standards Track
Content-Type: text/x-rst
Created: 16-Jun-2018
Python-Version: 3.8
Post-History:

Abstract


This PEP describes additions to the Python API and specific behaviors
for the CPython implementation that make actions taken by the Python
runtime visible to auditing tools. Visibility into these actions
provides opportunities for test frameworks, logging frameworks, and
security tools to monitor and optionally limit actions taken by the
runtime.

This PEP proposes adding two APIs to provide insights into a running
Python application: one for arbitrary events, and another specific to
the module import system. The APIs are intended to be available in all
Python implementations, though the specific messages and values used
are unspecified here to allow implementations the freedom to determine
how best to provide information to their users. Some examples likely
to be used in CPython are provided for explanatory purposes.
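
A minimal illustration of the proposed hook API (using the sys.addaudithook()
and sys.audit() names that the final version uses; they are not spelled out in
this excerpt):

    import sys

    def audit_hook(event, args):
        # Called for every audit event; `event` is a string, `args` an
        # event-specific tuple.
        if event == "open":
            print("audit: open", args, file=sys.stderr)

    sys.addaudithook(audit_hook)

    open("example.txt", "w").close()      # the builtin open() raises the "open" event
    sys.audit("myapp.custom-event", 42)   # applications can report their own events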

See PEP 551 for discussion and recommendations on enhancing the
security of a Python runtime making use of these auditing APIs.

Background
==

Python provides access to a wide range of low-level functionality on
many common operating systems. While this is incredibly useful for
"write-once, run-anywhere" scripting, it also makes monitoring of
software written in Python difficult. Because Python uses native system
APIs directly, existing monitoring tools either suffer from limited
context or auditing bypass.

Limited context occurs when system monitoring can report that an
action occurred, but cannot explain the sequence of events leading to
it. For example, network monitoring at the OS level may be able to
report "listening started on port 5678", but may not be able to
provide the process ID, command line, parent process, or the local
state in the program at the point that triggered the action. Firewall
controls to prevent such an action are similarly limited, typically
to process names or some global state such as the current user, and
in any case rarely provide a useful log file correlated with other
application messages.

Auditing bypass can occur when the typical system tool used for an
action would ordinarily report its use, but accessing the APIs via
Python does not trigger this. For example, invoking "curl" to make HTTP
requests may be specifically monitored in an audited system, but
Python's "urlretrieve" function is not.

Within a long-running Python application, particularly one that
processes user-provided information such as a web app, there is a risk
of unexpected behavior. This may be due to bugs in the code, or
deliberately induced by a malicious user. In both cases, normal
application logging may be bypassed resulting in no indication that
anything out of the ordinary has occurred.

Additionally, and somewhat unique to Python, it is very easy to affect
the code that is run in an application by manipulating either the
import system's search path or placing files earlier on the path than
intended. This is often seen when developers create a script with the
same name as the module they intend to use - for example, a
``random.py`` file that attempts to import the standard library
``random`` module.

This is not sandboxing, as this proposal does not attempt to prevent
malicious behavior (though it enables some new options to do so).
See the `Why Not A Sandbox`_ section below for further discussion.

Overview of Changes
===

The aim of these changes is to enable both application developers and
system administrators to integrate Python into their existing
monito

Re: [Python-Dev] Strange umask(?)/st_mode issue

2019-03-30 Thread Ivan Pozdeev via Python-Dev

On 30.03.2019 19:00, Steve Dower wrote:

On 29Mar.2019 1944, Steve Dower wrote:

On 29Mar.2019 1939, Cameron Simpson wrote:

Can you get a branch into your pipeline? Then you could just hack the
tarfile test with something quick and dirty like:

    pid = os.getpid()
    system("strace -p %d 2>/path/to/strace.out &" % pid)
    time.sleep(2)   # get strace heaps of time to start

just before the tarfile open. A ghastly hack but it would get you
debugging info. You could even decide to remove the strace.out file if
the umask issue doesn't show, if it is erratic (can't see why it would
be though).

Perfect, I'll give this a go. Thanks!

I set up a PR to collect this trace and the results are at:
https://github.com/python/cpython/pull/12625

However, I suspect it's a non-result:

umask(022)  = 022
open("/home/vsts/work/1/s/build/test_python_5154/@test_5154_tmp-tardir/tmp.tar",
O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = 3
write(3, "BZh91AY&SYY{\270\344\0\0\24P\0\300\0\4\0\0\10
\\314\5)\246"..., 46) = 46
close(3)= 0
stat("/home/vsts/work/1/s/build/test_python_5154/@test_5154_tmp-tardir/tmp.tar",
{st_mode=S_IFREG|0666, st_size=46, ...}) = 0

Happy to take more suggestions if anyone has them.


According to https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#use-a-microsoft-hosted-agent 
, MS uses Ubuntu 16.04


http://manpages.ubuntu.com/manpages/xenial/man2/umask.2.html suggests that umask is ignored if the parent directory has a default ACL (a 
newly-created dir inherits those from its parent).


As per https://linuxconfig.org/how-to-manage-acls-on-linux , the following commands should show if acls are enabled on the current FS and if 
any are active on the dir:


DEVICE=$(df <dir> | tail -n +2 | awk '{print $1}')
sudo tune2fs -l $DEVICE | grep -w "Default mount options"
mount | grep -w $DEVICE
getfacl <dir>

In `getfacl' output for a directory, entries that start with "default:" list 
the default ACL.

`setfacl -b <location>' removes all ACLs from the given location.


Thanks,
Steve
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Strange umask(?)/st_mode issue

2019-04-01 Thread Ivan Pozdeev via Python-Dev

On 02.04.2019 1:44, Steve Dower wrote:

On 01Apr2019 1535, Cameron Simpson wrote:

On 01Apr2019 09:12, Steve Dower  wrote:

On 30Mar2019 1130, Gregory P. Smith wrote:
I wouldn't expect it to be the case in a CI environment but I believe a umask can be overridden if the filesystem is mounted and 
configured with acls set? (oh, hah, Ivan just said the same thing)


Yep, it appears this is the case. The Pipelines team got back to me and it seems to be a known issue - the workaround they gave me was 
to run "sudo setfacl -Rb /home/vsts" at the start, so I've merged that in for now (to master and 3.7).


Could that be done _without_ sudo to just the local directory containing the test tar file? If that works then you don't need any nasty 
privileged sudo use (which will just break on platforms without sudo anyway).


I tried something similar to that and it didn't work. My guess is it's to do with the actual mount point? (I also tried without sudo at 
first, and when I didn't work, I tried it with sudo. I hear that's how to decide whether you need it or not ;) )


In any case, it only applies to the Azure Pipelines build definition, so there 
aren't any other platforms where it'll be used.


https://github.com/python/cpython/pull/12655

Cheers,
Steve
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Need help to fix HTTP Header Injection vulnerability

2019-04-10 Thread Ivan Pozdeev via Python-Dev


On 10.04.2019 7:30, Karthikeyan wrote:

Thanks Gregory. I think it's a good tradeoff to ensure this validation only for 
URLs of http scheme.

I also agree handling newline is little problematic over the years and the discussion over the level at which validation should occur also 
prolongs some of the patches. https://bugs.python.org/issue35906 is another similar case where splitlines is used but it's better to raise 
an error and the proposed fix could be used there too. Victor seemed to wrote a similar PR like linked one for other urllib functions only 
to fix similar attack in ftplib to reject newlines that was eventually fixed only in ftplib


* https://bugs.python.org/issue30713
* https://bugs.python.org/issue29606

Search also brings multiple issues with one duplicate over another that makes these attacks scattered over the tracker and some edge case 
missing. Slightly off topic, the last time I reported a cookie related issue where the policy can be overriden by third party library I 
was asked to fix it in stdlib itself since adding fixes to libraries causes maintenance burden to downstream libraries to keep up 
upstream. With urllib being a heavily used module across ecosystem it's good to have a fix landing in stdlib that secures downstream 
libraries encouraging users to upgrade Python too.


Validation should occur whenever user data crosses a trust boundary -- i.e. when the library starts to assume that an extracted chunk now 
contains something valid.


https://tools.ietf.org/html/rfc3986 defines valid syntax (incl. valid characters) for every part of a URL -- _of any scheme_ (FYI, \r\n are 
invalid everywhere and the test code for     `data:' that Karthikeyan referred to is raw data to compare to rather than a part of a URL). It 
also obsoletes all the RFCs that the current code is written against.


AFAICS, urllib.split* fns (obsoleted as public in 3.8) are used by both urllib and urllib2 to parse URLs. They can be made to each validate 
the chunk that they split off. urlparse can validate the entire URL altogether.


Also, all modules ought to use the same code (urlparse looks like the best 
candidate) to parse URLs -- this will minimize the attack surface.

I think I can look into this later this week.
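
As a rough illustration of the kind of check I have in mind (a sketch only, not
the actual urllib code; validated_url() is a made-up helper):

    from urllib.parse import urlsplit

    _FORBIDDEN = set("\r\n\t")   # control characters RFC 3986 never allows

    def validated_url(url):
        # Reject control characters anywhere in the URL before a request is built.
        if _FORBIDDEN & set(url):
            raise ValueError("invalid control character in URL: %r" % (url,))
        return urlsplit(url)     # per-component validation could be added here

    validated_url("http://example.com/index.html")        # fine
    # validated_url("http://example.com/\r\nHost: evil")  # raises ValueError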

Fixing this is going to break code that relies on the current code accepting invalid URLs. But the docs have never said that e.g. in 
urlopen, anything apart from a (valid) URL is accepted (in particular, this implies that the user is responsible for escaping stuff properly 
before passing it). So I would say that we are within our right here and whoever is relying on those quirks is and has always been on 
unsupported territory.
Determining which of those quirks are exploitable and which are not, in order to fix just the former, is an incomparably larger, more error-prone and 
avoidable amount of work. If anything, the history of the issue referenced by previous posters clearly shows that this is too much to ask from the 
Python team.


I also see other undocumented behavior like accepting '>' (also 
obsoleted as public in 3.8) which I would like to but it's of no harm.

--

Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Removing PID check from signal handler

2019-04-12 Thread Ivan Pozdeev via Python-Dev

On 12.04.2019 21:05, Steve Dower wrote:

On 12Apr.2019 0919, Jeroen Demeyer wrote:

The signal handler (that receives signals from the OS) in Python starts
with a check

     if (getpid() == main_pid)

Looking at the comments, the intent was to do a check for the main
*thread* but this is checking the *process* id. So this condition is
basically always true. Therefore, I suggest to remove it in
https://bugs.python.org/issue36601

If you have any objections or comments, I suggest to post them to that bpo.

To add a little more context, the check was added about 25 years ago as
a "hack" for some reason that we can't figure out anymore.

So if you are a historian of ancient operating systems and know of one
that might have raised signal handlers in a different process from the
one where it was registered, we'd love to hear from you.


According to 
https://www.linuxquestions.org/questions/programming-9/the-return-value-of-getpid-called-from-main-thread-and-new-thread-r-identical-624399/ ,

threads used to have different PIDs in the 2.4 Linux kernel.


Cheers,
Steve
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] bpo-36558: Change time.mktime() return type from float to int?

2019-04-16 Thread Ivan Pozdeev via Python-Dev

On 16.04.2019 17:24, Victor Stinner wrote:

Hi,

time.mktime() looks "inconsistent" to me and I would like to change
it, but I'm not sure how it impacts backward compatibility.
https://bugs.python.org/issue36558

time.mktime() returns a floating point number:


type(time.mktime(time.localtime()))



The documentation says:

"It returns a floating point number, for compatibility with :func:`.time`."

time.time() returns a float because it has sub-second resolution, but
the C function mktime() returns an integer number of seconds.

Would it make sense to change mktime() return type from float to int?

I would like to change mktime() return type to make the function more
consistent: all inputs are integers, it sounds wrong to me to return
float. The result should be integer as well.

How much code would it break? I guess that the main impact are unit
tests relying on repr(time.mktime(t)) exact value. But it's easy to
fix the tests: use int(time.mktime(t)) or "%.0f" % time.mktime(t) to
never get ".0", or use float(time.mktime(t))) to explicitly cast for a
float (that which be a bad but quick fix).
I envision it breaking code that relies on implicitly inferring the type of the result from the types of both operands (e.g. arithmetic 
operations).
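
A contrived illustration of what I mean, assuming the change were made:

    import json
    import time

    deadline = time.mktime(time.localtime()) + 300   # float today; int after the change

    # Anything that snapshots the textual form -- doctests, golden files,
    # JSON dumps -- would see e.g. '1555430700.0' turn into '1555430700':
    print(repr(deadline))
    print(json.dumps({"deadline": deadline}))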

But for mktime() specifically, I presume the amount of such code is very small.

Note: I wrote and implemented the PEP 564 to avoid any precision loss.
mktime() will not start loosing precision before year 285,422,891
(which is quite far in the future ;-)).

Victor


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Use C extensions compiled in release mode on a Python compiled in debug mode

2019-04-23 Thread Ivan Pozdeev via Python-Dev

On 24.04.2019 2:44, Victor Stinner wrote:

Hi,

Two weeks ago, I started a thread "No longer enable Py_TRACE_REFS by
default in debug build", but I lost myself in details, I forgot the
main purpose of my proposal...

Let me retry from scratch with a more explicit title: I would like to
be able to run C extensions compiled in release mode on a Python
compiled in debug mode ("pydebug").


This is going to be impossible because debug Python links against debug C runtime which is binary incompatible with the release one (at 
least, in Windows).



The use case is to debug bugs in C
extensions thanks to additional runtime checks of a Python debug
build, and more generally get a better debugging experiences on
Python. Even for pure Python, a debug build is useful (to get the
Python traceback in gdb using the "py-bt" command).
That said, debug vs release extension compilation is currently bugged. It's impossible to make a debug build of an extension against a 
release Python (linked against release runtime, so not fully debug, just without optimizations) and vice versa. pip fails to build 
extensions for a debug Python for the same reason. I've no idea how (and if at all) people manage to diagnose problems in extensions.

https://bugs.python.org/issue33637


Currently, using a Python compiled in debug mode means to have to
recompile C extensions in debug mode. Compile a C extension requires a
C compiler, header files, pull dependencies, etc. It can be very
complicated in practical (and pollute your system with all these
additional dependencies). On Linux, it's already hard, but on Windows
it can be even harder.

Just one concrete example: no debug build of numpy is provided at
https://pypi.org/project/numpy/ Good luck to build numpy in debug mode
manually (install OpenBLAS, ATLAS, Fortran compiler, Cython, etc.)
:-)

The above paragraph is probably the reason ;-)


--

The first requirement for the use case is that a Python debug build
supports the ABI of a release build. The current blocker issue is that
the Py_DEBUG define imply the Py_TRACE_REFS define: PyObject gets 2
extra fields (_ob_prev and _ob_next) which change the offset of all
attributes of all objects and makes the ABI completely incompatible. I
propose to no longer imply Py_TRACE_REFS *by default* (but keep the
code):

https://bugs.python.org/issue36465
https://github.com/python/cpython/pull/12615

(Py_TRACE_REFS would be a different ABI.)

The second issue is that library filenames are different for a debug
build: SOABI gets an additional "d" flag for Py_DEBUG. A debug build
should first look for "NAME.cpython-38dm.so" (flags: "dm"), but then
also look for "NAME.cpython-38m.so" (flags: "m"). The opposite is not
possible: a debug build contains many additional functions missing
from a release build.

For Windows, maybe we should provide a Python compiled in debug mode
with the same C Runtime than a Python compiled in release mode.
Otherwise, the debug C Runtime is causing another ABI issue.

Maybe pip could be enhanced to support installing C extensions
compiled in release mode when using a debug mode. But that's more for
convenience, it's not really required, since it is easy to switch the
Python runtime between release and debug build.

Apart of Py_TRACE_REFS, I'm not aware of other ABI differences in
structures. I know that the COUNT_ALLOCS define changes the ABI, but
it's not implied by Py_DEBUG: you have to opt-in for COUNT_ALLOCS. (I
propose to do the same for Py_TRACE_REFS ;-))

Note: Refleaks buildbots don't use Py_TRACE_REFS to track memory
leaks, only sys.gettotalrefcount().

--

Python debug build has many benefits. If you ignore C extensions, the
debug build is usually compiled with compiler optimization disabled
which makes debugging in gdb a much better experience. If you never
tried: on a release build, most (if not all) variables are "<optimized out>" and it's really painful to use basic debug functions like displaying
the current Python frame.

Assertions are removed in release modes, whereas they can detect a
wide range of bugs way earlier: integer overflow, buffer under- and
overflow, exceptions ignored silently, etc. Nobody likes to see a bug
for the first time in production. For example, I modified Python 3.8
to now logs I/O errors when a file is closed implicitly, but only in
debug or development mode. In release Python silently ignored EBADF
error on such case, whereas it can lead to very nasty bugs causing
Python to call abort() (which creates a coredump on Linux): see
https://bugs.python.org/issue18748 ...

DeprecationWarning and ResourceWarning are shown by default in debug mode :-)

There are too many different additional checks done at runtime: I
cannot list them all here.

--

Being able to switch between Python in release mode and Python in
debug mode is a first step. My long term plan would be to better
separate "Python" from its "runtime". CPython in release mode would be
one runtime, CPython in debug mode would be another runtime, PyPy can

Re: [Python-Dev] Use C extensions compiled in release mode on a Python compiled in debug mode

2019-04-24 Thread Ivan Pozdeev via Python-Dev

On 24.04.2019 3:50, Ivan Pozdeev via Python-Dev wrote:

On 24.04.2019 2:44, Victor Stinner wrote:

Hi,

Two weeks ago, I started a thread "No longer enable Py_TRACE_REFS by
default in debug build", but I lost myself in details, I forgot the
main purpose of my proposal...

Let me retry from scratch with a more explicit title: I would like to
be able to run C extensions compiled in release mode on a Python
compiled in debug mode ("pydebug").


This is going to be impossible because debug Python links against debug C runtime which is binary incompatible with the release one (at 
least, in Windows).


To elaborate:

As per 
https://stackoverflow.com/questions/37541210/whats-the-difference-in-usage-between-shared-libraries-built-in-debug-and-relea/37580323#37580323 ,
Problems will occur if you have two modules that 1. use different versions or binary representations of a type and 2. exchange objects of 
that type


Now, I trust Victor has ensured no discrepancies in explicitly exchanged types.
But I'm not sure if Python and the extension still rely on implicitly sharing some C runtime entities. (In Py2, that would at least be 
the file descriptor table that MSVCRT maintains privately, but Py3 doesn't rely on it AFAIK).



The use case is to debug bugs in C
extensions thanks to additional runtime checks of a Python debug
build, and more generally get a better debugging experiences on
Python. Even for pure Python, a debug build is useful (to get the
Python traceback in gdb using the "py-bt" command).
That said, debug vs release extension compilation is currently bugged. It's impossible to make a debug build of an extension against a 
release Python (linked against release runtime, so not fully debug, just without optimizations) and vice versa. pip fails to build 
extensions for a debug Python for the same reason. I've no idea how (and if at all) people manage to diagnose problems in extensions.

https://bugs.python.org/issue33637


Currently, using a Python compiled in debug mode means to have to
recompile C extensions in debug mode. Compile a C extension requires a
C compiler, header files, pull dependencies, etc. It can be very
complicated in practical (and pollute your system with all these
additional dependencies). On Linux, it's already hard, but on Windows
it can be even harder.

Just one concrete example: no debug build of numpy is provided at
https://pypi.org/project/numpy/ Good luck to build numpy in debug mode
manually (install OpenBLAS, ATLAS, Fortran compiler, Cython, etc.)
:-)

The above paragraph is probably the reason ;-)


--

The first requirement for the use case is that a Python debug build
supports the ABI of a release build. The current blocker issue is that
the Py_DEBUG define imply the Py_TRACE_REFS define: PyObject gets 2
extra fields (_ob_prev and _ob_next) which change the offset of all
attributes of all objects and makes the ABI completely incompatible. I
propose to no longer imply Py_TRACE_REFS *by default* (but keep the
code):

https://bugs.python.org/issue36465
https://github.com/python/cpython/pull/12615

(Py_TRACE_REFS would be a different ABI.)

The second issue is that library filenames are different for a debug
build: SOABI gets an additional "d" flag for Py_DEBUG. A debug build
should first look for "NAME.cpython-38dm.so" (flags: "dm"), but then
also look for "NAME.cpython-38m.so" (flags: "m"). The opposite is not
possible: a debug build contains many additional functions missing
from a release build.

For Windows, maybe we should provide a Python compiled in debug mode
with the same C Runtime than a Python compiled in release mode.
Otherwise, the debug C Runtime is causing another ABI issue.

Maybe pip could be enhanced to support installing C extensions
compiled in release mode when using a debug mode. But that's more for
convenience, it's not really required, since it is easy to switch the
Python runtime between release and debug build.

Apart of Py_TRACE_REFS, I'm not aware of other ABI differences in
structures. I know that the COUNT_ALLOCS define changes the ABI, but
it's not implied by Py_DEBUG: you have to opt-in for COUNT_ALLOCS. (I
propose to do the same for Py_TRACE_REFS ;-))

Note: Refleaks buildbots don't use Py_TRACE_REFS to track memory
leaks, only sys.gettotalrefcount().

--

Python debug build has many benefits. If you ignore C extensions, the
debug build is usually compiled with compiler optimization disabled
which makes debugging in gdb a much better experience. If you never
tried: on a release build, most (if not all) variables are "<optimized out>" and it's really painful to use basic debug functions like displaying
the current Python frame.

Assertions are removed in release modes, whereas they can detect a
wide range of bugs way earlier: integer overflow, buffer under- and
overflow, exceptions ignored silently, etc. Nobody likes to see a bug
f

Re: [Python-Dev] Use C extensions compiled in release mode on a Python compiled in debug mode

2019-04-24 Thread Ivan Pozdeev via Python-Dev

On 24.04.2019 17:03, Antoine Pitrou wrote:

On Wed, 24 Apr 2019 01:44:17 +0200
Victor Stinner  wrote:

The first requirement for the use case is that a Python debug build
supports the ABI of a release build. The current blocker issue is that
the Py_DEBUG define imply the Py_TRACE_REFS define: PyObject gets 2
extra fields (_ob_prev and _ob_next) which change the offset of all
attributes of all objects and makes the ABI completely incompatible. I
propose to no longer imply Py_TRACE_REFS *by default* (but keep the
code):

https://bugs.python.org/issue36465
https://github.com/python/cpython/pull/12615

+1 from me.


The second issue is that library filenames are different for a debug
build: SOABI gets an additional "d" flag for Py_DEBUG. A debug build
should first look for "NAME.cpython-38dm.so" (flags: "dm"), but then
also look for "NAME.cpython-38m.so" (flags: "m").

Sounds fair (but only on Unix, I guess).


Maybe pip could be enhanced to support installing C extensions
compiled in release mode when using a debug mode. But that's more for
convenience, it's not really required, since it is easy to switch the
Python runtime between release and debug build.

Not sure what you mean by "easy to switch the Python runtime".  As soon
as I want to use pip, I have to use a release build, right?

No, pip works with a debug Python just as well (python.bat -m ensurepip) and installs 
modules to `/site-packages` IIRC.
But building extensions is broken in this case as per 
https://mail.python.org/pipermail/python-dev/2019-April/157180.html .

Regards

Antoine.


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] datetime.fromisocalendar

2019-04-29 Thread Ivan Pozdeev via Python-Dev

On 29.04.2019 16:30, Victor Stinner wrote:

I reviewed and merged Paul's PR. I concur with Guido, the new
constructor perfectly makes sense and is useful.

About the implementation: date and time are crazy beasts. Extract of the code:

 if not 0 < week < 53:
 out_of_range = True

 if week == 53:
 # ISO years have 53 weeks in them on years starting with a
 # Thursday and leap years starting on a Wednesday
 first_weekday = _ymd2ord(year, 1, 1) % 7
 if (first_weekday == 4 or (first_weekday == 3 and
_is_leap(year))):
 out_of_range = False

 if out_of_range:
 raise ValueError(f"Invalid week: {week}")

"ISO years have 53 weeks in them on years starting with a Thursday and
leap years starting on a Wednesday" !?!
https://www.staff.science.uu.nl/~gent0113/calendar/isocalendar.htm , linked from 
https://docs.python.org/3/library/datetime.html?highlight=isocalendar#datetime.date.isocalendar
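
A quick way to see that rule in action (illustrative only):

    from datetime import date

    def iso_weeks(year):
        # Dec 28 always falls in the last ISO week of its ISO year.
        return date(year, 12, 28).isocalendar()[1]

    for year in (2015, 2020, 2019):
        print(year, date(year, 1, 1).strftime("%A"), iso_weeks(year))
    # 2015 Thursday 53   -- starts on a Thursday
    # 2020 Wednesday 53  -- leap year starting on a Wednesday
    # 2019 Tuesday 52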

Victor

Le sam. 27 avr. 2019 à 22:37, Guido van Rossum  a écrit :

I think it’s a good idea.

On Sat, Apr 27, 2019 at 11:43 AM Paul Ganssle  wrote:

Greetings,

Some time ago, I proposed adding a `.fromisocalendar` alternate constructor to 
`datetime` (bpo-36004), with a corresponding implementation (PR #11888). I 
advertised it on datetime-SIG some time ago but haven't seen much discussion 
there, so I'd like to bring it to python-dev's attention as we near the cut-off 
for new Python 3.8 features.

Other than the fact that I've needed this functionality in the past, I also think a good 
general principle for the datetime module is that when a class (time, date, datetime) has 
a "serialization" method (.strftime, .timestamp, .isoformat, .isocalendar, 
etc), there should be a corresponding deserialization method (.strptime, .fromtimestamp, 
.fromisoformat) that constructs a datetime from the output. Now that `fromisoformat` was 
introduced in Python 3.7, I think `isocalendar` is the only remaining method without an 
inverse. Do people agree with this principle? Should we add the `fromisocalendar` method?

Thanks,
Paul

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/guido%40python.org

--
--Guido (mobile)
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vstinner%40redhat.com




--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Redoing failed PR checks

2019-05-08 Thread Ivan Pozdeev via Python-Dev

On 08.05.2019 22:47, Terry Reedy wrote:

On 5/8/2019 10:23 AM, Eric V. Smith wrote:

I think you can close and reopen the PR. That’s what I’m trying on my blocked 
PR.


That works but reruns all the CI checks, including the ones already passed.  Some bots allow individual reruns, but it is not as clear as 
it should be.




I think rerunning PR checks is intentionally blocked to prevent submitters from 
silently smuggling unreliable code in.

Whatever the case, you can make an empty commit with `git commit --allow-empty`
(credit goes to https://coderwall.com/p/vkdekq/git-commit-allow-empty).

This will rerun things, but all the failures will be visible in the PR.

--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 581 (Using GitHub issues for CPython) is accepted

2019-05-15 Thread Ivan Pozdeev via Python-Dev

On 15.05.2019 11:48, Antoine Pitrou wrote:

On Tue, 14 May 2019 18:11:14 -0700
Barry Warsaw  wrote:


As the BDFL-Delegate for PEP 581, and with the unanimous backing of the rest of 
the Steering Council, I hereby Accept this PEP.

For future reference, is it possible to post the Steering Council's
reflection and rationale on the PEP?
+1. Specifically, I'd like to know if the risks and the potential for GitHub missing any needed features were estimated. The PEP says 
nothing about this.

Thank you

Regards

Antoine.


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python in next Windows 10 update

2019-05-22 Thread Ivan Pozdeev via Python-Dev



On 21.05.2019 23:30, Steve Dower wrote:

Hi all

Just sharing this here because I think it's important for us to be aware of it - I'm not trying to promote or sell anything here :) (Those 
who were at the language summit have seen this already.)


In the next Windows 10 update that starts rolling out today, we (Microsoft) have added "python.exe" and "python3.exe" commands that are 
installed on PATH *by default* and will open the Microsoft Store at the page where we (Python core team) publish our build.


This makes it a 1-2 click process to get from a clean machine to having a usable Python install ("python.exe" -> opens Store -> "Get it 
Free" -> "python.exe" now works!)


The associated blog post:

https://devblogs.microsoft.com/python/python-in-the-windows-10-may-2019-update/

Here are answers to a few questions that I assume will come up, at least from 
this audience that understands the issues better than most:

* if someone had installed Python and put it on PATH with our installer, this 
new command *does not* interfere
* if someone had manually modified their own PATH, they *may* see some 
interference (but we [Microsoft] decided this was an acceptable risk)
* the Python 3.7 installed from the store will not auto-update to 3.8, but when 3.8 is released we (Microsoft) will update the redirect to 
point at it
* if you pass arguments to the redirect command, it just exits with an error code - you only get the Store page if you run it without 
arguments
* once the Store package is installed, the redirect command is replaced (this required a new feature in the OS). If you install with the 
regular installer and update PATH, or active a venv, it will add it *before* the redirect. So these scenarios should be all good.


I'm happy to answer other questions here. The long-term contact for this integration is python (at) microsoft.com, which right now will 
come to me.



As someone whose job is to diagnose and fix problems with running software:
Are there patches in your release? Do you provide corresponding sources and 
debug symbols for it?


And on a personal note, I'm very excited that we (Microsoft) got the approval to do this. Getting *anything* added to Windows is a big 
task, so it's a reflection of the popularity and support for Python that's growing within Microsoft that we were able to make this happen. 
That's due to every contributor, both to the core runtime and the ecosystem. I hope this will only help us improve the availability of 
Python for users and make it an easier choice for dev tasks in the future.


Cheers,
Steve
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/vano%40mail.mipt.ru


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Python in next Windows 10 update

2019-05-22 Thread Ivan Pozdeev via Python-Dev

On 22.05.2019 23:52, Steve Dower wrote:

On 22May2019 1309, Ivan Pozdeev via Python-Dev wrote:

As someone whose job is to diagnose and fix problems with running software:
Are there patches in your release? Do you provide corresponding sources and 
debug symbols for it?


You can find the sources at https://github.com/python/cpython :)


For Anaconda, this is not so, they apply private patches. So I had to make sure.


I'm working on getting debug symbols packaged for the next release, but once I do that they'll be exactly the same binaries as in the 
traditional installer on python.org. (Right now they are two separate builds of the same source.)


The package on the Store is not a Microsoft build or release of Python - it's published by whoever the Windows build manager is at the 
time. Just to be confusing, it's me right now, but the actual install is not owned or managed by Microsoft - just endorsed and linked.


Cheers,
Steve


--
Regards,
Ivan

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com

