[Python-Dev] Re: Further PEP 615 Discussion: Equality and hash of ZoneInfo

2020-04-20 Thread Paul Ganssle
> In every use-case that I've ever had, and every one that I can
> imagine, I've not cared about the difference between "US/Eastern" and
> "America/New_York". In fact, if ZoneInfo("US/Eastern") returned
> something that had a name of "America/New_York", I would be fine with
> that. Similarly, Australia/Melbourne and Australia/Sydney are, to my
> knowledge, equivalent. (If I'm wrong on my own country's history of
> timezones, then I apologize and withdraw the example, but I'm talking
> about cases where you absolutely cannot tell the difference based on
> the displayed time.) Having those compare equal would be convenient.

I tend to agree, but there's a minor complication in that there is not,
as far as I can tell, an easy cross-platform way to determine the
"canonical" zone name, and normalizing America/New_York to the
deprecated US/Eastern would be bad, so we really don't want to do that
(in fact, this happens with the way that dateutil.zoneinfo stores its
time zones, and has been rather irksome to me). The key is exposed as
part of the public API because it's useful for serializing the zone
between languages; e.g. if you want to send an aware datetime as JSON,
you probably want something like: {"datetime":
"2020-05-01T03:04:01", "zone": "America/New_York"}.
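As a rough sketch of that serialization (assuming Python 3.9's zoneinfo module; the fallback branch just shows that the serialized form only needs the key string):

```python
import json
from datetime import datetime

try:
    from zoneinfo import ZoneInfo  # Python 3.9+; needs time zone data installed
    tz = ZoneInfo("America/New_York")
    key = tz.key  # the IANA key, exposed as public API
except Exception:
    # Fallback for this sketch: only the key string matters for serialization.
    tz, key = None, "America/New_York"

dt = datetime(2020, 5, 1, 3, 4, 1, tzinfo=tz)
payload = json.dumps(
    {"datetime": dt.replace(tzinfo=None).isoformat(), "zone": key}
)
print(payload)  # {"datetime": "2020-05-01T03:04:01", "zone": "America/New_York"}
```

The receiver reconstructs the zone from the key, which is exactly where normalizing the key on one side could silently change what the other side gets back.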

One reason this may be a problem is that something like Asia/Vientiane
is, at the moment, a symlink to Asia/Bangkok, but Vientiane is in Laos
and Bangkok is in Thailand - if time in Laos changes relative to
Asia/Bangkok, Asia/Vientiane will stop being a link, but if we normalize
"Asia/Vientiane" to "Asia/Bangkok" on systems with sufficiently old time
zone data, we may lose that information on deserialization.

Of course, I do not consider this to be a major problem (any more than
the whole idea of stable keys over time is a somewhat fragile
abstraction), because if, for example, Massachusetts were to go to
"permanent daylight saving time" (i.e. year-round Atlantic Standard
Time), a new America/Boston zone would be created, and all the
Bostonians who have been using America/New_York would be in much the
same situation, but it's just one thing that gives me pause about
efforts to normalize links.

> I don't think it's a problem to have equivalent objects unable to
> coexist in a set. That's just the way sets work - len({5, 5.0}) is 1,
> not 2.

I mostly agree with this; it's just that I don't have a good idea of why
you'd want to put a time zone in a set in the first place, and the
notion of equivalence is relative to what you're using the object for. In
some ways, two zones are not equivalent unless they are the same object,
e.g.:

dt0 = datetime(2020, 4, 1, tzinfo=zi0)
dt1 = datetime(2020, 1, 1, tzinfo=zi1)
dt1 - dt0

If we assume that zi0 and zi1 are both "America/New_York" zones, the
result depends on whether or not they are the same object. If both zi0
and zi1 are ZoneInfo("America/New_York"), then the result is one thing;
if one or more of them was constructed with
ZoneInfo.no_cache("America/New_York"), it's a different one. The results
of `.tzname()`, `.utcoffset()` and `.dst()` calls are the same no matter
what, though.
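Concretely, a sketch of that difference (assuming Python 3.9's zoneinfo and installed time zone data):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

zi0 = ZoneInfo("America/New_York")
zi1 = ZoneInfo("America/New_York")           # cache hit: zi1 is zi0
zi2 = ZoneInfo.no_cache("America/New_York")  # distinct object, same data

dt0 = datetime(2020, 4, 1, tzinfo=zi0)  # EDT (UTC-4)
dt1 = datetime(2020, 1, 1, tzinfo=zi1)  # EST (UTC-5)
dt2 = datetime(2020, 1, 1, tzinfo=zi2)  # same wall time, different tzinfo object

# Same tzinfo object: datetime does naive "same zone" wall-clock arithmetic.
# Different objects: it converts through UTC, picking up the DST hour.
assert zi0 is zi1
assert (dt2 - dt0) - (dt1 - dt0) == timedelta(hours=1)
```

This is datetime's documented intra-zone vs. inter-zone arithmetic; it keys on object identity, not on any notion of zone equality.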

> Since options 3 and 4 are the most expensive, I'm fine with the idea
> of a future method that would test for equivalence, rather than having
> them actually compare equal; but I'd also be fine with having
> ZoneInfo("US/Eastern") actually return the same object that
> ZoneInfo("America/New_York") returns. For the equality comparison, I
> would be happy with proposal 2.

Do you have any actual use cases for the equality comparison? I think
proposal 2 is /reasonable/, but to the extent that anyone ever notices
the difference between proposal 1 and proposal 2, it's more likely to
cause confusion - you can always do `zi0.key == zi1.key`, but most
people will naively look at `zi0 == zi1` to debug their issue, without
realizing that `zi0 == zi1` isn't actually the relevant comparison
when talking about inter-zone vs. same-zone comparisons.
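For illustration, the `{5, 5.0}` behavior quoted above carries over to any zone type that compares equal by key; `FakeZone` here is a hypothetical stand-in, not the actual ZoneInfo implementation:

```python
# Hypothetical stand-in for a zone type that compares equal (and hashes) by key.
class FakeZone:
    def __init__(self, key):
        self.key = key

    def __eq__(self, other):
        return isinstance(other, FakeZone) and self.key == other.key

    def __hash__(self):
        return hash(self.key)

# Just as {5, 5.0} collapses to a single element,
# two equal-by-key zones occupy one slot in a set.
assert len({5, 5.0}) == 1
assert len({FakeZone("America/New_York"), FakeZone("America/New_York")}) == 1
```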

> ChrisA


___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/AW3ZMWX6MNDU35L3AW5RIRYF7MAYFCZW/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] [RELEASE] Python 2.7.18, the end of an era

2020-04-20 Thread Benjamin Peterson
I'm eudaemonic to announce the immediate availability of Python 2.7.18.

Python 2.7.18 is a special release. I refer, of course, to the fact that 
"2.7.18" is the closest any Python version number will ever approximate e, 
Euler's number. Simply exquisite!

A less transcendent property of Python 2.7.18 is that it is the last Python 2.7 
release and therefore the last Python 2 release. It's time for the CPython 
community to say a fond but firm farewell to Python 2. Users still on Python 2 
can use e to compute the instantaneously compounding interest on their 
technical debt.

Download this unique, commemorative Python release on python.org:

   https://www.python.org/downloads/release/python-2718/

Python 2.7 has been under active development since the release of Python 2.6, 
more than 11 years ago. Over all those years, CPython's core developers and 
contributors sedulously applied bug fixes to the 2.7 branch, no small task as 
the Python 2 and 3 branches diverged. There were large changes midway through 
Python 2.7's life such as PEP 466's feature backports to the ssl module and 
hash randomization. Traditionally, these features would never have been added 
to a branch in maintenance mode, but exceptions were made to keep Python 2 
users secure. Thank you to CPython's community for such dedication.

Python 2.7 was lucky to have the services of two generations of binary builders 
and operating system experts, Martin von Löwis and Steve Dower for Windows, and 
Ronald Oussoren and Ned Deily for macOS. The reason we provided binary Python 
2.7 releases for macOS 10.9, an operating system obsoleted by Apple 4 years 
ago, or why the "Microsoft Visual C++ Compiler for Python 2.7" exists is the 
dedication of these individuals.

I thank the past and present Python release managers, Barry Warsaw, Ned Deily, 
Georg Brandl, Larry Hastings, and Łukasz Langa for their advice and support 
over the years. I've learned a lot from them—like don't be the sucker who 
volunteers to manage the release right before a big compatibility break!

Python 3 would be nowhere without the critical work of the wider community. 
Library maintainers followed CPython by maintaining Python 2 support for many 
years but also threw their weight behind the Python 3 statement 
(https://python3statement.org). Linux distributors chased Python 2 out of their 
archives. Users migrated hundreds of millions of lines of code, developed 
porting guides, and kept Python 2 in their brain while Python 3 gained 10 years 
of improvements.

Finally, thank you to GvR for creating Python 0.9, 1, 2, and 3.

Long live Python 3+!

Signing off,
Benjamin
2.7 release manager
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/OFCIETIXLX34X7FVK5B5WPZH22HXV342/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: [RELEASE] Python 2.7.18, the end of an era

2020-04-20 Thread Guido van Rossum
And thank you, Benjamin!

Where's the virtual party?

--Guido

On Mon, Apr 20, 2020 at 08:09 Benjamin Peterson  wrote:

> I'm eudaemonic to announce the immediate availability of Python 2.7.18.
>
> Python 2.7.18 is a special release. I refer, of course, to the fact that
> "2.7.18" is the closest any Python version number will ever approximate e,
> Euler's number. Simply exquisite!
>
> A less transcendent property of Python 2.7.18 is that it is the last
> Python 2.7 release and therefore the last Python 2 release. It's time for
> the CPython community to say a fond but firm farewell to Python 2. Users
> still on Python 2 can use e to compute the instantaneously compounding
> interest on their technical debt.
>
> Download this unique, commemorative Python release on python.org:
>
>https://www.python.org/downloads/release/python-2718/
>
> Python 2.7 has been under active development since the release of Python
> 2.6, more than 11 years ago. Over all those years, CPython's core
> developers and contributors sedulously applied bug fixes to the 2.7 branch,
> no small task as the Python 2 and 3 branches diverged. There were large
> changes midway through Python 2.7's life such as PEP 466's feature
> backports to the ssl module and hash randomization. Traditionally, these
> features would never have been added to a branch in maintenance mode, but
> exceptions were made to keep Python 2 users secure. Thank you to CPython's
> community for such dedication.
>
> Python 2.7 was lucky to have the services of two generations of binary
> builders and operating system experts, Martin von Löwis and Steve Dower for
> Windows, and Ronald Oussoren and Ned Deily for macOS. The reason we
> provided binary Python 2.7 releases for macOS 10.9, an operating system
> obsoleted by Apple 4 years ago, or why the "Microsoft Visual C++ Compiler
> for Python 2.7" exists is the dedication of these individuals.
>
> I thank the past and present Python release managers, Barry Warsaw, Ned
> Deily, Georg Brandl, Larry Hastings, and Łukasz Langa for their advice and
> support over the years. I've learned a lot from them—like don't be the
> sucker who volunteers to manage the release right before a big
> compatibility break!
>
> Python 3 would be nowhere without the critical work of the wider
> community. Library maintainers followed CPython by maintaining Python 2
> support for many years but also threw their weight behind the Python 3
> statement (https://python3statement.org). Linux distributors chased
> Python 2 out of their archives. Users migrated hundreds of millions of
> lines of code, developed porting guides, and kept Python 2 in their brain
> while Python 3 gained 10 years of improvements.
>
> Finally, thank you to GvR for creating Python 0.9, 1, 2, and 3.
>
> Long live Python 3+!
>
> Signing off,
> Benjamin
> 2.7 release manager
-- 
--Guido (mobile)
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/M5FB5DWORXQQIV4S4MHMPMU6JKYBY4WO/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Accepting PEP 617: New PEG parser for CPython

2020-04-20 Thread Brett Cannon
The steering council is happy to announce that we have accepted PEP 617! 
Thanks to the PEP authors for all their hard work (which includes sending a PR 
to update the acceptance of the PEP 😉).
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/FTXPZTEHX25QXEMY2QJP3M6KZVXEQNHK/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] PEP 616 "String methods to remove prefixes and suffixes" accepted

2020-04-20 Thread Victor Stinner
Hi,

The Python Steering Council accepts the PEP 616 "String methods to
remove prefixes and suffixes":
https://www.python.org/dev/peps/pep-0616/

Congrats Dennis Sweeney!

We just have one last request: we expect the documentation to clearly
explain the difference between removeprefix()/removesuffix() and
lstrip()/strip()/rstrip(), since that distinction is the rationale of the PEP ;-)
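The pitfall is that lstrip() treats its argument as a set of characters rather than a literal prefix; a quick sketch (removeprefix()/removesuffix() assume Python 3.9+):

```python
s = "mississippi"

# str.lstrip() strips *every* leading character drawn from the given set...
assert s.lstrip("mis") == "ppi"

# ...while str.removeprefix() removes at most one literal prefix.
assert s.removeprefix("mis") == "sissippi"
assert s.removeprefix("xyz") == "mississippi"  # unchanged when absent

# removesuffix() mirrors this at the other end of the string.
assert s.removesuffix("ppi") == "mississi"
```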

You can find the WIP implementation at:

* https://github.com/python/cpython/pull/18939
* https://bugs.python.org/issue39939

Victor
-- 
Night gathers, and now my watch begins. It shall not end until my death.
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/2VGR3BMACYLARB7SKGX3SDPPXIXZSJE2/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: Accepting PEP 617: New PEG parser for CPython

2020-04-20 Thread Batuhan Taskaya
Congratulations!

On Mon, Apr 20, 2020, 9:30 PM Brett Cannon  wrote:

> The steering council is happy to announce that we have accepted PEP 617!
> Thanks to the PEP authors for all their hard work (which includes sending a
> PR to update the acceptance of the PEP 😉).
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/E4CQN7ZAA3D6LSS56WDYDFYFRH2F4WRH/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 616 "String methods to remove prefixes and suffixes" accepted

2020-04-20 Thread Eric V. Smith

Congratulations, Dennis!

Not 10 minutes ago I was writing code that could have used this 
functionality. And I got it wrong on my first attempt! I'm looking 
forward to it in 3.9.


Eric

On 4/20/2020 2:26 PM, Victor Stinner wrote:
> Hi,
>
> The Python Steering Council accepts the PEP 616 "String methods to
> remove prefixes and suffixes":
> https://www.python.org/dev/peps/pep-0616/
>
> Congrats Dennis Sweeney!
>
> We just have one last request: we expect the documentation to explain
> well the difference between removeprefix()/removesuffix() and
> lstrip()/strip()/rstrip(), since it is the rationale of the PEP ;-)
>
> You can find the WIP implementation at:
>
> * https://github.com/python/cpython/pull/18939
> * https://bugs.python.org/issue39939
>
> Victor

___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/VV2CFGYTJXADLK5NJXECU55HS5PYNUK3/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 554 comments

2020-04-20 Thread Eric Snow
On Sat, Apr 18, 2020 at 11:16 AM Antoine Pitrou  wrote:
> * I do think a minimal synchronization primitive would be nice.
>   Either a Lock (in the Python sense) or a Semaphore: both should be
>   relatively easy to provide, by wrapping an OS-level synchronization
>   primitive.  Then you can recreate all high-level synchronization
>   primitives, like the threading and multiprocessing modules do (using
>   a Lock or a Semaphore, respectively).
>
>   (note you should be able to emulate a semaphore using blocking send()
>   and recv() calls, but that's probably not very efficient, and
>   efficiency is important)

You make a good point about efficiency.  The blocking is definitely
why I figured we could get away with avoiding a locking primitive.

One reason I wanted to avoid a shareable synchronization primitive is
that I've had many bad experiences with something similar in Go
(mixing locks, channels, and goroutines). I'll also admit that the
ideas in CSP had an impact on this. :)

Mixing channels and locks can be a serious pain point.  So if we do
end up supporting shared locks, I suppose I'd feel better about it if
we had an effective way to discourage folks using them normally.  Two
possible approaches:

* keep them in a separate module on PyPI that folks could use when experimenting
* add a shareable lock class (to the "interpreters" module) with a
name that made it clear you shouldn't use it normally.

If blocking send/recv were efficient enough, I'd rather not have a
shareable lock at all.  Or I suppose it could be re-implemented later
using a channel. :)
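For what it's worth, the emulation Antoine mentions might look roughly like this, with queue.Queue standing in for a PEP 554 channel's blocking send()/recv() (the channel API itself is not available, so every name here is a stand-in):

```python
import queue


class ChannelSemaphore:
    """Sketch: a counting semaphore emulated with blocking send/recv.

    queue.Queue plays the role of a channel: put() is send(), get() is a
    blocking recv().  acquire() blocks until a token can be received.
    """

    def __init__(self, value=1):
        self._ch = queue.Queue()
        for _ in range(value):
            self._ch.put(None)  # pre-load one token per unit of the semaphore

    def acquire(self):
        self._ch.get()  # blocks while no token is available

    def release(self):
        self._ch.put(None)  # hand a token back


sem = ChannelSemaphore(2)
sem.acquire()
sem.acquire()  # a third acquire before a release would block here
sem.release()
```

As Antoine notes, routing every acquire/release through channel machinery is likely less efficient than a native lock, which is the trade-off under discussion.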

On Sat, Apr 18, 2020 at 11:30 AM Antoine Pitrou  wrote:
> By the way, perhaps this could be even be implemented as making
> _threading.Lock shareable.  This would probably require some changes in
> the underlying C Lock structure (e.g. pointing to an
> atomically-refcounted shared control block), but nothing intractable,
> and reasonably efficient.

Making _threading.Lock shareable kind of seems like the best way to
go.  Honestly I was already looking into it relative to the
implementation for the low-level channel_send_wait(). [1]  However, I
got nervous about that as soon as I started looking at how to separate
the low-level mutex from the Lock object (so it could be shared). :)
So I'd probably want some help on the implementation work.

-eric

[1] https://www.python.org/dev/peps/pep-0554/#return-a-lock-from-send
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/JMZREXHKODJFQBH6RCHDQ6CVRA4YMNCP/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 554 comments

2020-04-20 Thread Eric Snow
Thanks for the feedback, Antoine.  I've responded inline below and
will be making appropriate changes to the PEP.  One point I'd like to
reinforce before my comments is the PEP's emphasis on minimalism.

From PEP 554:

This proposal is focused on enabling the fundamental capability of
multiple isolated interpreters in the same Python process.  This is a
new area for Python so there is relative uncertainty about the best
tools to provide as companions to subinterpreters.  Thus we minimize
the functionality we add in the proposal as much as possible.

I don't think anything you've mentioned really deviates much from
that, and making the module provisional helps.  I just want us to be
careful not to add stuff that we'll decide we want to remove later. :)

FYI, I'm already updating the PEP based on feedback from the other
email thread.  I'll let you know once all the updates are done.


On Sat, Apr 18, 2020 at 11:16 AM Antoine Pitrou  wrote:
> First, I would like to say that I have no fundamental problem with this
> PEP. While I agree with Nathaniel that the rationale given about the CSP
> concurrency model seems a bit weak, the author is obviously expressing
> his opinion there and I won't object to that.  However, I think the PEP
> is desirable for other reasons.  Mostly, I hope that by making the
> subinterpreters functionality available to pure Python programmers
> (while it was formally an advanced and arcane part of the C API), we
> will spur a bunch of interesting third-party experimentations,
> including possibilities that we on python-dev have not thought about.

The experimentation angle is one I didn't consider all that much, but
you make a good point.

> The appeal of the PEP for experimentations is multiple:
> 1) ability to concurrently run independent execution environments
>without spawning child processes (which on some platforms and in some
>situations may not be very desirable: for example on Windows where
>the cost of spawning is rather high; also, child processes may
>crash, and sometimes it is not easy for the parent to recover,
>especially if a synchronization primitive is left in an unexpected
>state)
> 2) the potential for parallelizing CPU-bound pure Python code
>in a single process, if a per-interpreter GIL is finally implemented
> 3) easier support for sharing large data between separate execution
>environments, without the hassle of setting up shared memory or the
>fragility of relying on fork() semantics
>
> (and as I said, I hope people find other applications)

These are covered in the PEP, though not together in the rationale,
etc.  Should I add explicit mention of experimentation as a motivation
in the abstract or rationale sections?  Would you like me to add a
dedicated paragraph/section covering experimentation?

> As for the argument that we already have asyncio and several other
> packages, I actually think that combining these different concurrency
> mechanisms would be interesting for complex applications (such as
> distributed systems).  For that, however, I think the PEP as currently
> written is a bit lacking, see below.

Yeah, that would be interesting.  What in particular will help make
subinterpreters and asyncio more cooperative?

> Now for the detailed comments.
>
> * I think the module should indeed be provisional.  Experimentation may
>   discover warts that call for a change in the API or semantics.  Let's
>   not prevent ourselves from fixing those issues.

Sounds good.

> * The "association" timing seems quirky and potentially annoying: an
>   interpreter only becomes associated with a channel the first time it
>   calls recv() or send().  How about, instead, associating an
>   interpreter with a channel as soon as that channel is given to it
>   through `Interpreter.run(..., channels=...)` (or received through
>   `recv()`)?

That seems fine to me.  I do not recall the exact reason for tying
association to recv() or send().  I only vaguely remember doing it
that way for a technical reason.  If I determine that reason then I'll
bring it up.  In the meantime I'll update the PEP to associate
interpreters when the channel end is sent.

FWIW, it may have been influenced by the automatic channel closing
when no interpreters are associated.  If interpreters are associated
when channel ends are sent (rather than when used) then interpreters
will have to be more careful about releasing channels.  That's just a
guess as to why I did it that way. :)

> * How hard would it be, in the current implementation, to add buffering
>   to channels?  It doesn't have to be infinite: you can choose a fixed
>   buffer size (or make it configurable in the create() function, which
>   allows passing 0 for unbuffered).  Like Nathaniel, I think unbuffered
>   channels will quickly be annoying to work with (yes, you can create a
>   helper thread... now you have one additional thread per channel,
>   which isn't pretty -- especially 

[Python-Dev] Accepted PEP 615: Support for the IANA Time Zone Database in the Standard Library

2020-04-20 Thread Barry Warsaw
The Python Steering Council accepts PEP 615 -  Support for the IANA Time Zone 
Database in the Standard Library:

https://www.python.org/dev/peps/pep-0615/

Congratulations Paul Ganssle!

This is a fantastic, well written PEP, and we appreciate Paul’s engagement with 
the SC to clear up our last questions and concerns.  We look forward to its 
availability in Python 3.9.  Thank you Paul, and everyone who helped contribute 
to this important new module.

Cheers,
-Barry (on behalf of the Python Steering Council)



___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/MDR2FL66R4T4VSLUI5XRFFUTKD43FMK4/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 554 comments

2020-04-20 Thread Eric Snow
On Sat, Apr 18, 2020 at 6:50 PM Greg Ewing  wrote:
> On 19/04/20 5:02 am, Antoine Pitrou wrote:
> > * How hard would it be, in the current implementation, to add buffering
> >to channels?
> >
> > * In the same vein, I think channels should allow adding readiness
> >callbacks
>
> Of these, I think the callbacks are more fundamental. If you
> have a non-buffered channel with readiness callbacks, you can
> implement a buffered channel on top of it.
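Greg's construction might be sketched like this; both classes and all method names are hypothetical stand-ins for the channel API under discussion:

```python
from collections import deque


class UnbufferedChannel:
    """Stand-in: delivers a value only when a receiver is ready."""

    def __init__(self):
        self._on_ready = None
        self._receiver_ready = False
        self.delivered = None

    def set_ready_callback(self, cb):
        self._on_ready = cb

    def recv_ready(self):
        # A receiver announces readiness; the callback may push a value.
        self._receiver_ready = True
        if self._on_ready is not None:
            self._on_ready(self)

    def send(self, value):
        assert self._receiver_ready, "unbuffered: a receiver must be waiting"
        self._receiver_ready = False
        self.delivered = value


class BufferedChannel:
    """A buffered send() built on top of the readiness callback."""

    def __init__(self, ch):
        self._ch = ch
        self._buf = deque()
        ch.set_ready_callback(self._drain)

    def _drain(self, ch):
        if self._buf:
            ch.send(self._buf.popleft())

    def send(self, value):
        self._buf.append(value)  # never blocks; delivered on readiness


ch = UnbufferedChannel()
buf = BufferedChannel(ch)
buf.send("a")    # does not block, even with no receiver waiting
ch.recv_ready()  # a receiver shows up; the callback drains the buffer
assert ch.delivered == "a"
```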

Some questions:

* Do you think it is worth adding readiness callbacks if we already
have channel buffering?
* Would a low-level channel implementation based on callbacks or locks
be better (simpler, faster, etc.) than one based on buffering?
* Would readiness callbacks in the high-level API be more or less
user-friendly than alternatives: optional blocking, a lock, etc.?

FWIW, I tend to find callbacks a greater source of complexity than alternatives.

Thanks!

-eric
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/DSGO3UX4QGS24W5WDP46NHOESI6UXSUJ/
Code of Conduct: http://python.org/psf/codeofconduct/


[Python-Dev] Re: PEP 616 "String methods to remove prefixes and suffixes" accepted

2020-04-20 Thread Guido van Rossum
Congrats Dennis! I hope your PR lands soon.

On Mon, Apr 20, 2020 at 12:40 PM Eric V. Smith  wrote:

> Congratulations, Dennis!
>
> Not 10 minutes ago I was writing code that could have used this
> functionality. And I got it wrong on my first attempt! I'm looking
> forward to it in 3.9.
>
> Eric
>
> On 4/20/2020 2:26 PM, Victor Stinner wrote:
> > Hi,
> >
> > The Python Steering Council accepts the PEP 616 "String methods to
> > remove prefixes and suffixes":
> > https://www.python.org/dev/peps/pep-0616/
> >
> > Congrats Dennis Sweeney!
> >
> > We just have one last request: we expect the documentation to explain
> > well the difference between removeprefix()/removesuffix() and
> > lstrip()/strip()/rstrip(), since it is the rationale of the PEP ;-)
> >
> > You can find the WIP implementation at:
> >
> > * https://github.com/python/cpython/pull/18939
> > * https://bugs.python.org/issue39939
> >
> > Victor


-- 
--Guido van Rossum (python.org/~guido)
*Pronouns: he/him **(why is my pronoun here?)*



[Python-Dev] Re: PEP 554 comments

2020-04-20 Thread Antoine Pitrou
On Mon, 20 Apr 2020 14:22:03 -0600
Eric Snow  wrote:
> 
> > The appeal of the PEP for experimentations is multiple:
> > 1) ability to concurrently run independent execution environments
> >without spawning child processes (which on some platforms and in some
> >situations may not be very desirable: for example on Windows where
> >the cost of spawning is rather high; also, child processes may
> >crash, and sometimes it is not easy for the parent to recover,
> >especially if a synchronization primitive is left in an unexpected
> >state)
> > 2) the potential for parallelizing CPU-bound pure Python code
> >in a single process, if a per-interpreter GIL is finally implemented
> > 3) easier support for sharing large data between separate execution
> >environments, without the hassle of setting up shared memory or the
> >fragility of relying on fork() semantics
> >
> > (and as I said, I hope people find other applications)  
> 
> These are covered in the PEP, though not together in the rationale,
> etc.  Should I add explicit mention of experimentation as a motivation
> in the abstract or rationale sections?  Would you like me to add a
> dedicated paragraph/section covering experimentation?

I was mostly exposing my thought process here :-)  IOW, you don't have
to do anything, except if you think that would be helpful.

> > As for the argument that we already have asyncio and several other
> > packages, I actually think that combining these different concurrency
> > mechanisms would be interesting for complex applications (such as
> > distributed systems).  For that, however, I think the PEP as currently
> > written is a bit lacking, see below.  
> 
> Yeah, that would be interesting.  What in particular will help make
> subinterpreters and asyncio more cooperative?

Readiness callbacks would help wrangle any kind of asynchronous /
event-driven framework around subinterpreters.
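
Concretely, adapting such a callback to asyncio might look roughly like this (a sketch only -- `set_ready_callback()` and `recv_nowait()` are hypothetical names, not part of PEP 554):

```python
import asyncio

async def recv_async(channel):
    """Await one object from a (hypothetical) callback-based channel."""
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # The low-level layer invokes the callback when data is ready, possibly
    # from another thread, so hop back onto the event loop's thread.
    channel.set_ready_callback(
        lambda: loop.call_soon_threadsafe(fut.set_result, channel.recv_nowait())
    )
    return await fut
```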

> > * In the same vein, I think channels should allow adding readiness
> >   callbacks (that are called whenever a channel becomes ready for
> >   sending or receiving, respectively).  This would make it easy to plug
> >   them into an event loop or other concurrency systems (such as
> >   Future-based concurrency).  Note that each interpreter "associated"
> >   with a channel should be able to set its own readiness callback: so
> >   one callback per Python object representing the channel, but
> >   potentially multiple callbacks for the underlying channel primitive.  
> 
> Would this be as useful if we have buffered channels?  It sounds like
> you wanted one or the other but not both.

Both are useful at somewhat different levels (though as Greg said, if
you have readiness callbacks, you can probably cook up a buffering layer
using them).  Especially, readiness callbacks (or some other form of
push notification) are desirable for reasonable interaction with an
event loop.
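
Greg's point -- that a buffering layer can be cooked up on top of readiness callbacks -- might look roughly like this (all names hypothetical; the low-level channel is assumed to offer only `set_ready_callback()` and `recv_nowait()`):

```python
import collections
import threading

class BufferedRecv:
    """Sketch of a buffered receive end built on a readiness callback."""

    def __init__(self, raw_channel):
        self._buf = collections.deque()
        self._cond = threading.Condition()
        self._raw = raw_channel
        # Drain the raw channel into our buffer whenever it signals readiness.
        raw_channel.set_ready_callback(self._on_ready)

    def _on_ready(self):
        with self._cond:
            self._buf.append(self._raw.recv_nowait())
            self._cond.notify()

    def recv(self, timeout=None):
        # Block until something is buffered, then hand it out FIFO.
        with self._cond:
            if not self._cond.wait_for(lambda: self._buf, timeout):
                raise TimeoutError("no object received in time")
            return self._buf.popleft()
```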

> > Of course, I hope these are all actionable before beta1 :-)  If not,
> > here is my preferential priority list:
> >
> > * High priority: fix association timing
> > * High priority: either buffering /or/ readiness callbacks
> > * Middle priority: get_main() /or/ is_main()  
> 
> These should be doable for beta1 since they're either trivial or
> already done. :)

Great :-)

Best regards

Antoine.


[Python-Dev] Re: Accepted PEP 615: Support for the IANA Time Zone Database in the Standard Library

2020-04-20 Thread Guido van Rossum
Congrats Paul! I am very happy that we'll get tz support built into the
stdlib.

On Mon, Apr 20, 2020 at 1:33 PM Barry Warsaw  wrote:

> The Python Steering Council accepts PEP 615 -  Support for the IANA Time
> Zone Database in the Standard Library:
>
> https://www.python.org/dev/peps/pep-0615/
>
> Congratulations Paul Ganssle!
>
> This is a fantastic, well written PEP, and we appreciate Paul’s engagement
> with the SC to clear up our last questions and concerns.  We look forward
> to its availability in Python 3.9.  Thank you Paul, and everyone who helped
> contribute to this important new module.
>
> Cheers,
> -Barry (on behalf of the Python Steering Council)
>


-- 
--Guido van Rossum (python.org/~guido)
*Pronouns: he/him **(why is my pronoun here?)*



[Python-Dev] Re: Accepted PEP 615: Support for the IANA Time Zone Database in the Standard Library

2020-04-20 Thread Victor Stinner
Congrats Paul! This one wasn't easy!

When Paul got promoted, I asked him if he could write a PEP about the
local timezone. Well, here we are ;-)

Paul: can I now also get nanosecond resolution? :-D
https://bugs.python.org/issue15443

Oh, and leap seconds?
https://bugs.python.org/issue23574

I commented there: "One option to explore is to add a "leap seconds"
field to datetime.datetime which can be negative (just in case someone
decides to add negative leap seconds in the future)."

OMG handling date and time is so hard!

Victor

Le lun. 20 avr. 2020 à 22:39, Barry Warsaw  a écrit :
>
> The Python Steering Council accepts PEP 615 -  Support for the IANA Time Zone 
> Database in the Standard Library:
>
> https://www.python.org/dev/peps/pep-0615/
>
> Congratulations Paul Ganssle!
>
> This is a fantastic, well written PEP, and we appreciate Paul’s engagement 
> with the SC to clear up our last questions and concerns.  We look forward to 
> its availability in Python 3.9.  Thank you Paul, and everyone who helped 
> contribute to this important new module.
>
> Cheers,
> -Barry (on behalf of the Python Steering Council)
>



-- 
Night gathers, and now my watch begins. It shall not end until my death.


[Python-Dev] Re: PEP 554 comments

2020-04-20 Thread Greg Ewing

On 21/04/20 8:29 am, Eric Snow wrote:
> * Would a low-level channel implementation based on callbacks or locks
> be better (simpler, faster, etc.) than one based on buffering?

Depends on what you mean by "better". Callbacks are more
versatile; a buffered channel just does buffering, but
with callbacks you can do other things, e.g. hooking
into an event loop.


> * Would readiness callbacks in the high-level API be more or less
> user-friendly than alternatives: optional blocking, a lock, etc.?

I would consider callbacks to be part of a low-level
layer that you wouldn't use directly most of the time.
Some user-friendly high-level things such as buffered
channels would be provided.

Efficiency is a secondary consideration. If it turns
out to be a problem, that can be addressed later.

--
Greg


[Python-Dev] Re: PEP 554 comments

2020-04-20 Thread Eric Snow
On Mon, Apr 20, 2020 at 2:22 PM Eric Snow  wrote:
> On Sat, Apr 18, 2020 at 11:16 AM Antoine Pitrou  wrote:
> > * The "association" timing seems quirky and potentially annoying: an
> >   interpreter only becomes associated with a channel the first time it
> >   calls recv() or send().  How about, instead, associating an
> >   interpreter with a channel as soon as that channel is given to it
> >   through `Interpreter.run(..., channels=...)` (or received through
> >   `recv()`)?
>
> That seems fine to me.  I do not recall the exact reason for tying
> association to recv() or send().  I only vaguely remember doing it
> that way for a technical reason.  If I determine that reason then I'll
> bring it up.  In the meantime I'll update the PEP to associate
> interpreters when the channel end is sent.
>
> FWIW, it may have been influenced by the automatic channel closing
> when no interpreters are associated.  If interpreters are associated
> when channel ends are sent (rather than when used) then interpreters
> will have to be more careful about releasing channels.  That's just a
> guess as to why I did it that way. :)

As I've gone to update the PEP for this I'm feeling less comfortable
with changing it.  There is a subtle difference which concretely
manifests in 2 ways.

Firstly, the programmatic exposure of "associated"
(SendChannel.interpreters and RecvChannel.interpreters) would be
different.  With the current specification, "associated" means "has
been used by".  With your recommendation it would mean "is accessible
by".  Is it more useful to think about them one way or the other?
Would there be value in making both meanings part of the API
separately ("associated" + "bound") somehow?

Secondly, with the current spec channels get automatically closed
sooner, effectively as soon as all wrapping objects *that were used*
are garbage collected (or released).  With your recommendation it only
happens as soon as all wrapping objects are garbage collected (or
released).  In the former case channels could get auto-closed before
you expect them to.  In the latter case they could leak if users
forget to release them when unused.  Is there a good way to address
both downsides?

-eric


[Python-Dev] Re: PEP 554 comments

2020-04-20 Thread Eric Snow
On Mon, Apr 20, 2020 at 4:19 PM Greg Ewing  wrote:
>
> On 21/04/20 8:29 am, Eric Snow wrote:
> > * Would a low-level channel implementation based on callbacks or locks
> > be better (simpler, faster, etc.) than one based on buffering?
>
> Depends on what you mean by "better". Callbacks are more
> versatile; a buffered channel just does buffering, but
> with callbacks you can do other things, e.g. hooking
> into an event loop.

Thanks for clarifying.  For the event loop case, what is the downside
to adapting to the API in the existing proposal?

> > * Would readiness callbacks in the high-level API be more or less
> > user-friendly than alternatives: optional blocking, a lock, etc.?
>
> I would consider callbacks to be part of a low-level
> layer that you wouldn't use directly most of the time.
> Some user-friendly high-level things such as buffered
> channels would be provided.

Ah, PEP 554 is just about the high-level API.  Currently in the
low-level API recv() doesn't ever block (instead raising
ChannelEmptyError if empty) and channel_send() returns a pre-acquired
lock that releases once the object is received.
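
A blocking send in a higher-level layer could then be a thin wrapper over that lock (a sketch; `low_send` stands in for the real `channel_send()`):

```python
import threading

def send_blocking(low_send, obj, timeout=None):
    """Send `obj`, then block until the receiver has picked it up.

    `low_send` models the low-level channel_send(): it delivers the object
    and returns an already-acquired lock that the receiving side releases.
    """
    lock = low_send(obj)
    if not lock.acquire(timeout=-1 if timeout is None else timeout):
        raise TimeoutError("object was not received in time")
```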

I'm not opposed to a different low-level API, but keep in mind that
we're short on time.

> Efficiency is a secondary consideration. If it turns
> out to be a problem, that can be addressed later.

+1

-eric


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-20 Thread Nathaniel Smith
On Fri, Apr 17, 2020 at 3:57 PM Eric Snow  wrote:
>
> On Fri, Apr 17, 2020 at 2:59 PM Nathaniel Smith  wrote:
> > I think some perspective might be useful here :-).
> >
> > The last time we merged a new concurrency model in the stdlib, it was 
> > asyncio.
> >
> > [snip]
> >
> > OTOH, AFAICT the new concurrency model in PEP 554 has never actually
> > been used, and it isn't even clear whether it's useful at all.
>
> Perhaps I didn't word things quite right.  PEP 554 doesn't provide a
> new concurrency model so much as it provides functionality that could
> probably be used as the foundation for one.

That makes it worse, right? If I wrote a PEP saying "here's some
features that could possibly someday be used to make a new concurrency
model", that wouldn't make it past the first review.

> Ultimately the module
> proposed in the PEP does the following:
>
> * exposes the existing subinterpreters functionality almost as-is

So I think this is a place where we see things really differently.

I guess your perspective is, subinterpreters are already a CPython
feature, so we're not adding anything, and we don't really need to
talk about whether CPython should support subinterpreters.

But this simply isn't true. Yes, there's some APIs for subinterpreters
added back in the 1.x days, but they were never really thought
through, and have never actually worked. There are exactly 3 users,
and all have serious issues, and a strategy for avoiding
subinterpreters because of the brokenness. In practice, the existing
ecosystem of C extensions has never supported subinterpreters.

This is clearly not a great state of affairs – we should either
support them or not support them. Shipping a broken feature doesn't
help anyone. But the current status isn't terribly harmful, because
the general consensus across the ecosystem is that they don't work and
aren't used.

If we start exposing them in the stdlib and encouraging people to use
them, though, that's a *huge* change. Our users trust us. If we tell
them that subinterpreters are a real thing now, then they'll spend
lots of effort on trying to support them.

Since subinterpreters are confusing, and break the C API/ABI, this
means that every C extension author will have to spend a substantial
amount of time figuring out what subinterpreters are, how they work,
squinting at PEP 489, asking questions, auditing their code, etc. This
will take years, and in the mean time, users will expect
subinterpreters to work, be confused at why they break, yell at random
third-party maintainers, spend days trying to track down mysterious
problems that turn out to be caused by subinterpreters, etc. There
will be many many blog posts trying to explain subinterpreters and
understand when they're useful (if ever), arguments about whether to
support them. Twitter threads. Production experiments. If you consider
that we have thousands of existing C extensions and millions of users,
accepting PEP 554 means forcing people you don't know to collectively
spend many person-years on subinterpreters.

Random story time: NumPy deprecated some C APIs some years ago, a
little bit before I got involved. Unfortunately, it wasn't fully
thought through; the new APIs were a bit nicer-looking, but didn't
enable any new features, didn't provide any path to getting rid of the
old APIs, and in fact it turned out that there were some critical use
cases that still required the old API. So in practice, the deprecation
was never going anywhere; the old APIs work just as well and are never
going to get removed, so spending time migrating to the new APIs was,
unfortunately, a completely pointless waste of time that provided zero
value to anyone.

Nonetheless, our users trusted us, so lots and lots of projects spend
substantial effort on migrating to the new API: figuring out how it
worked, making PRs, reviewing them, writing shims to work across the
old and new API, having big discussions about how to make the new API
work with Cython, debating what to do about the cases where the new
APIs were inadequate, etc. None of this served any purpose: they just
did it because they trusted us, and we misled them. It's pretty
shameful, honestly. Everyone meant well, but in retrospect it was a
terrible betrayal of our users' trust.

Now, that only affected projects that were using the NumPy C API, and
even then, only developers who were diligent and trying to follow the
latest updates; there were no runtime warnings, nothing visible to
end-users, etc. Your proposal has something like 100x-1000x more
impact, because you want to make all C extensions in Python get
updated or at least audited, and projects that aren't updated will
produce mysterious crashes, incorrect output, or loud error messages
that cause users to come after the developers and demand fixes.

Now maybe that's worth it. I think on net the Py3 transition was worth
it, and that was even more difficult. But Py3 had an incredible amount
of scrutiny and rationale. Here you're

[Python-Dev] Re: PEP 554 comments

2020-04-20 Thread Eric Snow
On Mon, Apr 20, 2020 at 4:23 PM Eric Snow  wrote:
> As I've gone to update the PEP for this I'm feeling less comfortable
> with changing it.

Also, the resulting text of the PEP makes it a little harder to follow
when an interpreter gets associated.  However, this is partly an
artifact of the structure of the PEP.  (The details of association
need to be moved to a separate section.)  The same situation would
apply to docs.  However, I'm not sure it would be a problem in
practice.

-eric


[Python-Dev] Re: PEP 616 "String methods to remove prefixes and suffixes" accepted

2020-04-20 Thread Raymond Hettinger
Please consider adding underscores to the names:  remove_prefix() and 
remove_suffix().

The latter method causes a mental hiccup when first read as removes-uffix, 
forcing mental backtracking to get to remove-suffix. We had a similar problem 
with addinfourl initially being read as add-in-four-l before mentally 
backtracking to add-info-url.

The PEP says this alternative was considered, but I disagree with the rationale 
given in the PEP.  The reason that "startswith" and "endswith" don't have 
underscores is that they aren't needed to disambiguate the text.  Our rules are 
to add underscores and to spell-out words when it improves readability, which 
in this case it does.   Like casing conventions, our rules and preferences for 
naming evolved after the early modules were created -- the older the module, 
the more likely that it doesn't follow modern conventions.

We only have one chance to get this right (bugs can be fixed, but API choices 
persist for very long time).  Take it from someone with experience with this 
particular problem.  I created imap() but later regretted the naming pattern 
when if came to ifilter() and islice() which sometimes cause mental hiccups 
initially being read as if-ilter and is-lice.


Raymond


[Python-Dev] Re: Accepting PEP 617: New PEG parser for CPython

2020-04-20 Thread Raymond Hettinger
This will be a nice improvement.


Raymond


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-20 Thread Nathaniel Smith
On Mon, Apr 20, 2020 at 4:26 PM Edwin Zimmerman  wrote:
>
> On 4/20/2020 6:30 PM, Nathaniel Smith wrote:
> > We already have robust support for threads for low-isolation and
> > subprocesses for high-isolation. Can you name some use cases where
> > neither of these are appropriate and you instead want an in-between
> > isolation – like subprocesses, but more fragile and with odd edge
> > cases where state leaks between them?
> I don't know if this has been mentioned before or not, but I'll bring it up 
> now: massively concurrent networking code on Windows.  Socket connections 
> could be passed off from the main interpreter to sub-interpreters for 
> concurrent processing that simply isn't possible with the global GIL 
> (provided the GIL actually becomes per-interpreter).  On *nix you can fork, 
> this would give CPython on Windows similar capabilities.

Both Windows and Unix have APIs for passing sockets between related or
unrelated processes -- no fork needed. On Windows, it's exposed as the
socket.share method:
https://docs.python.org/3/library/socket.html#socket.socket.share

The APIs for managing and communicating between processes are
definitely not the most obvious or simplest to use, but they're very
mature and powerful, and it's a lot easier to wrap them up in a
high-level API than it is to effectively reimplement process
separation from scratch inside CPython.
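
For the record, the Unix-side counterpart of socket.share() is SCM_RIGHTS fd passing, which the stdlib exposes directly since 3.9. A Unix-only sketch (the Windows spelling would instead be `sock.share(pid)` in the parent and `socket.fromshare(data)` in the child):

```python
import os
import socket

def handoff_demo():
    """Fork a child and hand it a live socket via send_fds/recv_fds."""
    ctrl_parent, ctrl_child = socket.socketpair()  # carries only the fd
    a, b = socket.socketpair()                     # the socket being handed off
    pid = os.fork()
    if pid == 0:
        # Child: drop inherited ends, receive the fd, rebuild the socket.
        ctrl_parent.close()
        a.close()
        b.close()
        _, fds, _, _ = socket.recv_fds(ctrl_child, 1, 1)
        sock = socket.socket(fileno=fds[0])
        sock.sendall(b"hello from the child")
        os._exit(0)
    # Parent: pass b's descriptor to the child, then talk to it over `a`.
    ctrl_child.close()
    socket.send_fds(ctrl_parent, [b"x"], [b.fileno()])
    b.close()
    data = a.recv(1024)
    os.waitpid(pid, 0)
    return data
```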

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-20 Thread Edwin Zimmerman
On 4/20/2020 6:30 PM, Nathaniel Smith wrote:
> We already have robust support for threads for low-isolation and
> subprocesses for high-isolation. Can you name some use cases where
> neither of these are appropriate and you instead want an in-between
> isolation – like subprocesses, but more fragile and with odd edge
> cases where state leaks between them?
I don't know if this has been mentioned before or not, but I'll bring it up 
now: massively concurrent networking code on Windows.  Socket connections could 
be passed off from the main interpreter to sub-interpreters for concurrent 
processing that simply isn't possible with the global GIL (provided the GIL 
actually becomes per-interpreter).  On *nix you can fork; this would give 
CPython on Windows similar capabilities.


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-20 Thread Edwin Zimmerman
On 4/20/2020 7:33 PM, Nathaniel Smith wrote:
> On Mon, Apr 20, 2020 at 4:26 PM Edwin Zimmerman  
> wrote:
>> On 4/20/2020 6:30 PM, Nathaniel Smith wrote:
>>> We already have robust support for threads for low-isolation and
>>> subprocesses for high-isolation. Can you name some use cases where
>>> neither of these are appropriate and you instead want an in-between
>>> isolation – like subprocesses, but more fragile and with odd edge
>>> cases where state leaks between them?
>> I don't know if this has been mentioned before or not, but I'll bring it up 
>> now: massively concurrent networking code on Windows.  Socket connections 
>> could be passed off from the main interpreter to sub-interpreters for 
>> concurrent processing that simply isn't possible with the global GIL 
>> (provided the GIL actually becomes per-interpreter).  On *nix you can fork, 
>> this would give CPython on Windows similar capabilities.
> Both Windows and Unix have APIs for passing sockets between related or
> unrelated processes -- no fork needed. On Windows, it's exposed as the
> socket.share method:
> https://docs.python.org/3/library/socket.html#socket.socket.share
>
> The APIs for managing and communicating between processes are
> definitely not the most obvious or simplest to use, but they're very
> mature and powerful, and it's a lot easier to wrap them up in a
> high-level API than it is to effectively reimplement process
> separation from scratch inside CPython.
>
> -n
+1 on not being most obvious or simplest to use.  Not only that, but to use it 
you have to write Windows-specific code.  PEP 554 would provide a uniform, 
cross-platform capability that I would choose any day over a random pile of 
os-specific hacks.
--Edwin


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-20 Thread Nathaniel Smith
On Mon, Apr 20, 2020 at 5:36 PM Edwin Zimmerman  wrote:
>
> On 4/20/2020 7:33 PM, Nathaniel Smith wrote:
> > On Mon, Apr 20, 2020 at 4:26 PM Edwin Zimmerman  
> > wrote:
> >> On 4/20/2020 6:30 PM, Nathaniel Smith wrote:
> >>> We already have robust support for threads for low-isolation and
> >>> subprocesses for high-isolation. Can you name some use cases where
> >>> neither of these are appropriate and you instead want an in-between
> >>> isolation – like subprocesses, but more fragile and with odd edge
> >>> cases where state leaks between them?
> >> I don't know if this has been mentioned before or not, but I'll bring it 
> >> up now: massively concurrent networking code on Windows.  Socket 
> >> connections could be passed off from the main interpreter to 
> >> sub-interpreters for concurrent processing that simply isn't possible with 
> >> the global GIL (provided the GIL actually becomes per-interpreter).  On 
> >> *nix you can fork, this would give CPython on Windows similar capabilities.
> > Both Windows and Unix have APIs for passing sockets between related or
> > unrelated processes -- no fork needed. On Windows, it's exposed as the
> > socket.share method:
> > https://docs.python.org/3/library/socket.html#socket.socket.share
> >
> > The APIs for managing and communicating between processes are
> > definitely not the most obvious or simplest to use, but they're very
> > mature and powerful, and it's a lot easier to wrap them up in a
> > high-level API than it is to effectively reimplement process
> > separation from scratch inside CPython.
> >
> > -n
> +1 on not being most obvious or simplest to use.  Not only that, but to use 
> it you have to write Windows-specific code.  PEP 554 would provide a uniform, 
> cross-platform capability that I would choose any day over a random pile of 
> os-specific hacks.

I mean, sure, if you've decided to build one piece of hypothetical
software well and another badly, then the good one will be better than
the bad one, but that doesn't really say much, does it?

In real life, I don't see how it's possible to get PEP 554's
implementation to the point where it works reliably and robustly –
i.e., I just don't think the promises the PEP makes can actually be
fulfilled. And even if you did, it would still be several orders of
magnitude easier to build a uniform, robust, cross-platform API on top
of tools like socket.share than it would be to force changes on every
C extension. PEP 554 is hugely expensive; you can afford a *lot* of
careful systems engineering while still coming in way under that
budget.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Python-Dev] Re: PEP 554 for 3.9 or 3.10?

2020-04-20 Thread Eric Snow
Nathaniel,

Your tone and approach to this conversation concern me.  I appreciate
that you have strong feelings here and readily recognize I have my own
biases, but it's becoming increasingly hard to draw any constructive
insight from what tend to be very longs posts from you.  It ends up
being a large commitment of time for small gains.  And honestly, it's
also becoming hard to not counter some of your more elaborate
statements with my own unhelpful prose.  In the interest of making
things better, please take it all down a notch or two.

I apologize if I sound frustrated.  I am frustrated, which is only
more frustrating because I respect you a lot and feel like your
feedback should be more helpful.  I'm trying to moderate my responses,
but I expect some of my emotion may slip through. :/

On Mon, Apr 20, 2020 at 4:30 PM Nathaniel Smith  wrote:
> On Fri, Apr 17, 2020 at 3:57 PM Eric Snow  wrote:
> That makes it worse, right? If I wrote a PEP saying "here's some
> features that could possibly someday be used to make a new concurrency
> model", that wouldn't make it past the first review.

Clearly, tying this to "concurrency models" is confusing here.  So
let's just say, as Paul Moore put it, the PEP allows us to "organize"
our code in a new way (effectively along the lines of isolated threads
with message passing).

> I guess your perspective is, subinterpreters are already a CPython
> feature, so we're not adding anything, and we don't really need to
> talk about whether CPython should support subinterpreters.
>
> But this simply isn't true. Yes, there's some APIs for subinterpreters
> added back in the 1.x days, but they were never really thought
> through, and have never actually worked.

The C-API was thought through more than sufficiently.  Subinterpreters
are conceptually and practically a very light wrapper around the
fundamental architecture of CPython's runtime.  The API exposes
exactly that, no more, no less.  What is missing or broken?

They also work fine in most cases.  Mostly they have problems with
extension modules that have unsafe process-global state and break in
some less common cases due to bugs in CPython (which have not been
fixed because no one cared enough).

> There are exactly 3 users,
> and all have serious issues, and a strategy for avoiding
> subinterpreters because of the brokenness. In practice, the existing
> ecosystem of C extensions has never supported subinterpreters.

Catch-22: why would they ever bother if no one is using them?

> This is clearly not a great state of affairs – we should either
> support them or not support them. Shipping a broken feature doesn't
> help anyone. But the current status isn't terribly harmful, because
> the general consensus across the ecosystem is that they don't work and
> aren't used.
>
> If we start exposing them in the stdlib and encouraging people to use
> them, though, that's a *huge* change.

You are arguing that this is effectively a new feature.  As you noted
earlier, I am saying it isn't.

> Our users trust us. If we tell
> them that subinterpreters are a real thing now, then they'll spend
> lots of effort on trying to support them.

What is "lots"?  We've yet to see clear evidence of possible severe
impact.  On the contrary, I've gotten feedback from folks highly
involved in the ecosystem that it will not be a big problem.  It won't
take care of itself, but it won't require a massive effort.

> Since subinterpreters are confusing, and break the C API/ABI

How are they confusing and how do they break either the C-API or
C-ABI?  This sort of misinformation (or perhaps just miscommunication)
is not helpful at all to your argument.

>, this
> means that every C extension author will have to spend a substantial
> amount of time figuring out what subinterpreters are, how they work,
> squinting at PEP 489, asking questions, auditing their code, etc.

You make it sound like tons of work, but I'm unconvinced, as noted
earlier.  Consider that we regularly have new features for which
extensions must provide support.  How is this different?

> This
> will take years, and in the mean time, users will expect
> subinterpreters to work, be confused at why they break, yell at random
> third-party maintainers, spend days trying to track down mysterious
> problems that turn out to be caused by subinterpreters, etc. There
> will be many many blog posts trying to explain subinterpreters and
> understand when they're useful (if ever), arguments about whether to
> support them. Twitter threads. Production experiments. If you consider
> that we have thousands of existing C extensions and millions of users,
> accepting PEP 554 means forcing people you don't know to collectively
> spend many person-years on subinterpreters.

Again you're painting a hopeless picture, but so far it is just that: a
picture, and one that contrasts with the less negative feedback I've
gotten from others.  So it comes off as unhelpful here.

> Random story time: NumPy deprecated some C APIs 

[Python-Dev] Re: PEP 554 comments

2020-04-20 Thread Greg Ewing

On 21/04/20 10:23 am, Eric Snow wrote:

> with the current spec channels get automatically closed
> sooner, effectively as soon as all wrapping objects *that were used*
> are garbage collected (or released).


Maybe I'm missing something, but just because an object
hasn't been used *yet* doesn't mean it isn't going to
be used in the future, so isn't this wildly wrong?
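To spell out the concern: here is a toy model (not PEP 554's real API, and the class names are invented for illustration) of the auto-close rule as quoted, where the channel closes once every wrapper that *used* it has been released. A wrapper that was handed out but not used yet doesn't count toward keeping the channel open, which is exactly the hole being pointed at.

```python
class Channel:
    def __init__(self):
        self.closed = False
        self._users = 0          # wrappers that have actually used us

    def _acquire(self):
        self._users += 1

    def _release(self):
        self._users -= 1
        if self._users == 0:
            # "all wrapping objects that were used" are gone
            self.closed = True

class ChannelEnd:
    def __init__(self, chan):
        self._chan = chan
        self._used = False

    def send(self, obj):
        if self._chan.closed:
            raise RuntimeError("channel is closed")
        if not self._used:
            self._used = True
            self._chan._acquire()
        # ... actually transmit obj ...

    def release(self):
        # stands in for garbage collection of this wrapper
        if self._used:
            self._chan._release()

chan = Channel()
a = ChannelEnd(chan)   # used right away
b = ChannelEnd(chan)   # held for later, never used yet

a.send("hello")
a.release()            # a's wrapper goes away

print(chan.closed)     # True: closed out from under b
try:
    b.send("too late")
except RuntimeError as e:
    print(e)           # channel is closed
```

Under this rule `b` loses its channel even though it was always going to use it, which is the "wildly wrong" outcome the question describes; any fix has to count unused-but-live wrappers too.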

--
Greg
___
Python-Dev mailing list -- python-dev@python.org
To unsubscribe send an email to python-dev-le...@python.org
https://mail.python.org/mailman3/lists/python-dev.python.org/
Message archived at 
https://mail.python.org/archives/list/python-dev@python.org/message/3LIEAT727R6GZBU3CUUJTIENK62SEZTB/
Code of Conduct: http://python.org/psf/codeofconduct/