Re: [Python-Dev] async/await behavior on multiple calls

2015-12-15 Thread Kevin Conway
I think there may be somewhat of a language barrier here. The OP appears to
be mixing the terms "coroutine" and "future". The behavior the OP describes
is that of promises or async tasks in other languages.

Consider a JS promise that has been resolved:

promise.then(function (value) {...});

promise.then(function (value) {...});

Both of the above will execute the callback function with the resolved
value regardless of how much earlier the promise was resolved. This is not
entirely different from how Futures work in Python when using
'add_done_callback'.
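
The asyncio equivalent looks roughly like this (a minimal sketch; the value
and the callback bodies are only illustrative):

import asyncio

async def main():
    fut = asyncio.Future()
    fut.add_done_callback(lambda f: print('first:', f.result()))
    fut.set_result(42)
    # A callback added after resolution still fires with the value; it is
    # simply scheduled on the next pass of the event loop.
    fut.add_done_callback(lambda f: print('second:', f.result()))
    await asyncio.sleep(0)  # let the scheduled callbacks run

loop = asyncio.get_event_loop()
loop.run_until_complete(main())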

The code example from the OP, however, shows the behaviour of awaiting a
coroutine twice rather than awaiting a Future twice. Both objects are
awaitable, but they behave differently when awaited multiple times.

A scenario I believe deserves a test is what happens in the asyncio
coroutine scheduler when a Future is awaited multiple times. The current
__await__ behaviour is to return self only when not done, and then to
return the value after resolution on each subsequent await. The Task,
however, requires that the object emitted from the coroutine be a Future
and not a primitive value. Awaiting an already resolved Future should
result in the resolved value being returned immediately.
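
For reference, asyncio.Future implements that with something roughly like
the following (a simplified sketch of the asyncio source; the exact flag
name varies between versions):

def __await__(self):                # method of asyncio.Future (excerpt)
    if not self.done():
        self._blocking = True       # flag that the Task step checks for
        yield self                  # suspend until the loop resolves us
    return self.result()            # already done: just hand back the value

A coroutine object, by contrast, runs its frame to exhaustion on the first
await, so a second await has nothing left to resume.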

On Tue, Dec 15, 2015, 14:44 Guido van Rossum  wrote:

> Agreed. (But let's hear from the OP first.)
>
> On Tue, Dec 15, 2015 at 12:27 PM, Andrew Svetlov wrote:
>
>> Both Yury's suggestions sound reasonable.
>>
>> On Tue, Dec 15, 2015 at 10:24 PM, Yury Selivanov wrote:
>> > Hi Roy and Guido,
>> >
>> > On 2015-12-15 3:08 PM, Guido van Rossum wrote:
>> > [..]
>> >>
>> >>
>> >> I don't know how long you have been using async/await, but I wonder if
>> >> it's possible that you just haven't gotten used to the typical usage
>> >> patterns? In particular, your claim "anything that takes an `awaitable`
>> >> has to know that it wasn't already awaited" makes me sound that you're
>> >> just using it in an atypical way (perhaps because your model is based on
>> >> other languages). In typical asyncio code, one does not usually take an
>> >> awaitable, wait for it, and then return it -- one either awaits it and
>> >> then extracts the result, or one returns it without awaiting it.
>> >
>> >
>> > I agree.  Holding a return value just so that coroutine can return it
>> > again seems wrong to me.
>> >
>> > However, since coroutines are now a separate type (although they share a
>> > lot of code with generators internally), maybe we can change them to
>> > throw an error when they are awaited on more than one time?
>> >
>> > That should be better than letting them return `None`:
>> >
>> > coro = coroutine()
>> > await coro
>> > await coro  # <- will raise RuntimeError
>> >
>> >
>> > I'd also add a check that the coroutine isn't being awaited by more than
>> > one coroutine simultaneously (another, completely different issue, more
>> > on which here: https://github.com/python/asyncio/issues/288).  This was
>> > fixed in asyncio in debug mode, but ideally, we should fix this in the
>> > interpreter core.
>> >
>> > Yury
>>
>> --
>> Thanks,
>> Andrew Svetlov
>>
>
>
>
> --
> --Guido van Rossum (python.org/~guido)


Re: [Python-Dev] async/await behavior on multiple calls

2015-12-15 Thread Kevin Conway
I agree with Barry. We need more material that introduces the community to
the new async/await syntax and the concepts it brings. We borrowed the
words from other languages but not all of their behaviours.

With coroutines in particular, we can do a better job of describing the
differences between them and the earlier generator-based coroutines, the
rules regarding what - if anything - is emitted from a '.send()', and how
await resolves to a value. If you read through the asyncio Task code enough
you'll figure it out, but we can't expect the community as a whole to learn
the language, or asyncio, that way.

Back to the OP's issue. The behaviour you are seeing, where None is the
value of an exhausted coroutine, is consistent with that of an exhausted
generator. Pushing the iterator with __next__() or .send() after completion
results in a StopIteration being raised with a value of None, regardless of
what the final yielded/returned value was. Futures can be awaited multiple
times because their __iter__/__await__ method raises StopIteration carrying
the resolved value each time they are awaited.
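
A quick demonstration of that generator behaviour (toy example):

def gen():
    yield 1
    return 'done'

g = gen()
print(next(g))            # 1
try:
    next(g)
except StopIteration as exc:
    print(exc.value)      # 'done' -- the return value rides on StopIteration
try:
    next(g)
except StopIteration as exc:
    print(exc.value)      # None -- the exhausted generator has nothing left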

I think the list is trying to tell you that awaiting a coro multiple times
is simply not a valid case in Python because coroutines are exhaustible
resources. In asyncio, they are primarily a helpful mechanism for shipping
Futures to the Task wrapper. In virtually all cases the pattern is:

> await some_async_def()

and almost never:

> coro = some_async_def()
> await coro
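
For completeness, if a result really does need to be awaited in more than
one place, wrapping the coroutine in a Task up front already gives the
promise-like behaviour (a sketch; the function names are illustrative):

import asyncio

async def some_async_def():
    return 42

async def main():
    fut = asyncio.ensure_future(some_async_def())  # scheduled exactly once
    print(await fut)   # 42
    print(await fut)   # 42 -- awaiting a finished Future returns its result

loop = asyncio.get_event_loop()
loop.run_until_complete(main())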


On Tue, Dec 15, 2015 at 9:34 PM Yury Selivanov 
wrote:

> Roy,
>
> On 2015-12-15 8:29 PM, Roy Williams wrote:
> [..]
> >
> > My proposal would be to automatically wrap the return value from an
> > `async` function or any object implementing `__await__` in a future
> > with `asyncio.ensure_future()`.  This would allow async/await code to
> > behave in a similar manner to other languages implementing async/await
> > and would remain compatible with existing code using asyncio.
> >
> > What's your thoughts?
>
> Other languages, such as JavaScript, have a notion of event loop
> integrated on a very deep level.  In Python, there is no centralized
> event loop, and asyncio is just one way of implementing one.
>
> In asyncio, Future objects are designed to inter-operate with an event
> loop (that's also true for JS Promises), which means that in order to
> automatically wrap Python coroutines in Futures, we'd have to define the
> event loop deep in Python core.  Otherwise it's impossible to implement
> 'Future.add_done_callback', since there would be nothing that calls the
> callbacks on completion.
>
> To avoid adding a built-in event loop, PEP 492 introduced coroutines as
> an abstract language concept.  David Beazley, for instance, doesn't like
> Futures, and his new framework 'curio' does not have them at all.
>
> I highly doubt that we want to add a generalized event loop in Python
> core, define a generalized Future interface, and make coroutines return
> it.  It's simply too much work with no clear wins.
>
> Now, your initial email highlights another problem:
>
> coro = coroutine()
> print(await coro)  # will print the result of coroutine
> await coro  # prints None
>
> This is a bug that needs to be fixed.  We have two options:
>
> 1. Cache the result when the coroutine object is awaited first time.
> Return the cached result when the coroutine object is awaited again.
>
> 2. Raise an error if the coroutine object is awaited more than once.
>
> The (1) option would solve your problem.  But it also introduces new
> complexity: the GC of result will be delayed; more importantly, some
> users will wonder if we cache the result or run the coroutine again.
> It's just not obvious.
>
> The (2) option is Pythonic and simple to understand/debug, IMHO.  In
> this case, the best way for you to solve your initial problem, would be
> to have a decorator around your tasks.  The decorator should wrap
> coroutines with Futures (with asyncio.ensure_future) and everything will
> work as you expect.
>
> Thanks,
> Yury
>


Re: [Python-Dev] asyncio: how to interrupt an async def w/ finally: ( e.g. Condition.wait() )

2015-12-19 Thread Kevin Conway
> An async procedure call whose refcount reaches zero without completing
> simply goes away; finally: blocks are *not* called and there is *no*
> warning.

I believe OP is looking at these two scenarios:

def generator():
    try:
        yield None
        yield None
    finally:
        print('finally')


gen = generator()
gen.send(None)   # run to the first yield
del gen
# prints 'finally' when the suspended generator is garbage collected


class Awaitable:
    def __await__(self):
        return self   # __await__ must return an iterator; reuse self
    def __next__(self):
        return self   # keep yielding so the coroutine stays suspended


async def coroutine():
    try:
        await Awaitable()
        await Awaitable()
    finally:
        print('finally')


coro = coroutine()
coro.send(None)   # run to the first await
del coro
# prints 'finally' when the suspended coroutine is garbage collected

I don't see any difference in the behaviour between the two. My guess is
that OP's code is not hitting a zero refcount.
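
One way to test that guess (a diagnostic sketch added here, not from the
original report):

import gc
import sys

coro = coroutine()             # the coroutine() defined above
coro.send(None)                # advance to the first await
# If the finally: block never fires, something is probably still holding a
# reference, e.g. a Task, a callback, or an exception traceback.
print(sys.getrefcount(coro))   # 2 here: `coro` plus the getrefcount argument
print(gc.get_referrers(coro))  # shows what else is keeping it alive
del coro                       # refcount reaches zero, 'finally' prints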

On Sat, Dec 19, 2015 at 5:00 PM Gustavo Carneiro 
wrote:

> I tried to reproduce the problem you describe, but failed.  Here's my test
> program (forgive the awful tab indentation, long story):
>
> --
> import asyncio
>
> async def foo():
>     print("resource acquire")
>     try:
>         await asyncio.sleep(100)
>     finally:
>         print("resource release")
>
>
> async def main():
>     task = asyncio.ensure_future(foo())
>     print("task created")
>     await asyncio.sleep(0)
>     print("about to cancel task")
>     task.cancel()
>     print("task cancelled, about to wait for it")
>     try:
>         await task
>     except asyncio.CancelledError:
>         pass
>     print("waited for cancelled task")
>
>
> if __name__ == '__main__':
>     loop = asyncio.get_event_loop()
>     loop.run_until_complete(main())
>     loop.close()
> ---
>
> I get this output:
>
> 
> 10:54:28 ~/Documents$ python3.5 foo.py
> task created
> resource acquire
> about to cancel task
> task cancelled, about to wait for it
> resource release
> waited for cancelled task
> 
>
> Which seems to indicate that the finally clause is correctly executed when
> the task is waited for, after being cancelled.
>
> But maybe I completely misunderstood your problem...
>
>
> On 19 December 2015 at 21:40, Matthias Urlichs wrote:
>
>> On 19.12.2015 20:25, Guido van Rossum wrote:
>> > Perhaps you can add a check for a simple boolean 'stop' flag to your
>> > condition check, and when you want to stop the loop you set that flag
>> > and then call notify() on the condition. Then you can follow the
>> > standard condition variable protocol instead of all this nonsense. :-)
>> Your example does not work.
>>
>> >    def stop_it(self):
>> >        self.stopped = True
>> >        self.uptodate.notify()
>>
>> self.uptodate needs to be locked before I can call .notify() on it.
>> Creating a new task just for that seems like overkill, and I'd have to
>> add a generation counter to prevent a race condition. Doable, but ugly.
>>
>> However, this doesn't fix the generic problem; Condition.wait() was just
>> what bit me today.
>> When a non-async generator goes out of scope, its finally: blocks will
>> execute. An async procedure call whose refcount reaches zero without
>> completing simply goes away; finally: blocks are *not* called and there
>> is *no* warning.
>> I consider that to be a bug.
>>
>> --
>> -- Matthias Urlichs
>>
>
>
>
> --
> Gustavo J. A. M. Carneiro
> Gambit Research
> "The universe is always one step beyond logic." -- Frank Herbert


Re: [Python-Dev] PEP 544: Protocols - second round

2017-05-28 Thread Kevin Conway
> Some of the possible options for the title are
It seems like you're talking about something most other languages would
refer to as "Interfaces". What is unique about this proposal that calls for
not using the industry-standard terminology?

> Type-hints should not have runtime semantics, beyond those that they have
as classes
>  lots of code uses isinstance(obj, collections.abc.Iterable) and similar
checks with other ABCs
Having interfaces defined as something extended from abc doesn't
necessitate their use at runtime, but it does open up a great many options
for those of us who want to do so. I've been leveraging abc for a few years
now to implement a lightweight version of what this PEP is attempting to
achieve (https://github.com/kevinconway/iface). Once you start getting into
dynamically loaded plugins you often lose the ability to strictly enforce
the shape of the input until runtime. In those cases, I've found it
exceedingly useful to add 'isinstance' and 'issubclass' assertions on
inputs of untrusted type for tests and non-production deployments. For a
perf boost in prod you can throw the -O flag and strip out the assertions
to remove the runtime checks. I've found that to be a valuable pattern.
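
For anyone unfamiliar with the pattern, here is a minimal sketch of the
abc-based approach described above (class names are illustrative, not taken
from the iface library):

import abc

class SupportsClose(abc.ABC):
    # Structural "interface": anything with a close() method qualifies.

    @abc.abstractmethod
    def close(self):
        ...

    @classmethod
    def __subclasshook__(cls, subclass):
        if cls is SupportsClose:
            # Accept any class that defines close(); no registration needed.
            return callable(getattr(subclass, 'close', None))
        return NotImplemented


class Resource:              # never subclasses or registers SupportsClose
    def close(self):
        print('closed')


assert issubclass(Resource, SupportsClose)
assert isinstance(Resource(), SupportsClose)

Because these are plain assert statements, running under -O strips them,
which is the production trade-off mentioned above.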

On Sun, May 28, 2017 at 8:21 AM Ivan Levkivskyi 
wrote:

> Thanks everyone for interesting suggestions!
>
> @Antoine @Guido:
> Some of the possible options for the title are:
> * Protocols (structural subtyping)
> * Protocols (static duck typing)
> * Structural subtyping (static duck typing)
> which one do you prefer?
>
> @Nick:
> Yes, explicit imports are not necessary for static type checkers (I will
> add a short comment about this).
>
> @Mark:
> I agree with Guido on all points here. For example,
> collections.abc.Iterable is already a class,
> and lots of code uses isinstance(obj, collections.abc.Iterable) and
> similar checks with other ABCs
> (also in a structural manner, i.e. via __subclasshook__). So that
> disabling this will cause many breakages.
> The question of whether typing.Iterable[int] should be a class is
> independent (orthogonal) and
> does not belong to this PEP.
>
> --
> Ivan
>
>


Re: [Python-Dev] PEP 544: Protocols - second round

2017-05-29 Thread Kevin Conway
From the PEP:
> The problem with them is that a class has to be explicitly marked to
support them, which is unpythonic and unlike what one would normally do in
idiomatic dynamically typed Python code.
> The same problem appears with user-defined ABCs: they must be explicitly
subclassed or registered.
Neither of these statements is entirely true. The semantics of `abc` allow
for exactly the kind of detached interfaces this PEP is attempting to
provide. `abc.ABCMeta` provides `__subclasshook__`, which allows a
developer to override the default check of the internal `abc` registry
state with virtually any logic that determines the relationship of a class
with the interface. The prior art I linked to earlier in the thread uses
this feature to generically support `issubclass` and `isinstance` in such a
way that the PEP's goal is achieved.

> The intention of this PEP is to solve all these problems by allowing
users to write the above code without explicit base classes in the class
definition
As I understand this goal, you want to take what some of us in the
community have been building ourselves and make it canonical via the
stdlib. What strikes me as odd is that the focus is on 3rd party type
checkers first rather than introducing this as a feature of the language
runtime and then updating the type checker contract to make use of it. I
see a mention of the `isinstance` check support in the postponed/rejected
ideas, but the only rationale given for it being in that category is,
generally, "there are edge cases". For example, the PEP lists this as an
edge case:
>The problem with this is instance checks could be unreliable, except for
situations where there is a common signature convention such as Iterable
However, the sample given demonstrates precisely the expected behavior of
checking whether a concrete class implements the protocol. It's unclear why this
sample is given as a negative. The other case given is:
> Another potentially problematic case is assignment of attributes after
instantiation
Can you elaborate on how type checkers would not encounter this same issue?
If there is a solution to this problem for type checkers, would that same
solution not work at runtime? Also, it seems odd to use a custom initialize
function rather than `__init__`. I don't think it was intentional, but this
makes it seem like a bit of a strawman that doesn't represent typical
Python code.
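
For context, the case in question looks roughly like this (hypothetical
names, with hasattr() standing in for a runtime structural check):

def looks_like_named(obj):
    # stand-in for a structural isinstance() check against a HasName protocol
    return hasattr(obj, 'name')

class Worker:
    def initialize(self):          # attribute appears only after this call
        self.name = 'worker-1'

w = Worker()
print(looks_like_named(w))   # False -- the attribute does not exist yet
w.initialize()
print(looks_like_named(w))   # True  -- the very same object now matches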

> Also, extensive use of ABCs might impose additional runtime costs.
I'd love to see some data around this. Given that it's a rationale for the
PEP I'd expect to see some numbers behind it. For example, is the memory cost
of directly registering implementations with abc linear or worse? What is the
runtime growth pattern of isinstance or issubclass when used with heavily
registered or deeply registered abc graphs and is it different than those
calls on concrete class hierarchies? Does the cost affect anything more
than the initial evaluation of the code or, in the absence of
isinstance/issubclass checks, does it continue to have an impact on the
runtime?
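
A rough starting point for gathering those numbers might look like this (an
illustrative micro-benchmark sketch, not a claim about the results):

import abc
import timeit

class Iface(abc.ABC):
    pass

class Registered:
    pass

Iface.register(Registered)         # virtual subclass via the abc registry

class Inherited(Iface):
    pass

reg, inh = Registered(), Inherited()
# Note: ABCMeta caches subclass checks, so repeated calls mostly measure the
# cached path rather than the first, uncached lookup.
print(timeit.timeit(lambda: isinstance(reg, Iface)))
print(timeit.timeit(lambda: isinstance(inh, Iface)))
print(timeit.timeit(lambda: isinstance(inh, Inherited)))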



On Mon, May 29, 2017 at 5:41 AM Ivan Levkivskyi 
wrote:

> On 28 May 2017 at 19:40, Guido van Rossum  wrote:
>
>> On Sun, May 28, 2017 at 8:27 AM, Ivan Levkivskyi 
>> wrote:
>>
> [...]
>
>> Regarding the title, I'd like to keep the word Protocol in the title too,
>> so I'd go with "Protocols: Structural subtyping (duck typing)" -- hope
>> that's not too long to fit in a PEP title field.
>>
>
> OK, this looks reasonable.
>
>
>>
>>>> > Type-hints should not have runtime semantics, beyond those that they
>>>> have as classes
>>>> >  lots of code uses isinstance(obj, collections.abc.Iterable) and
>>>> similar checks with other ABCs
>>>> Having interfaces defined as something extended from abc doesn't
>>>> necessitate their use at runtime, but it does open up a great deal of
>>>> options for those of us who want to do so. I've been leveraging abc for a
>>>> few years now to implement a lightweight version of what this PEP is
>>>> attempting to achieve
>>>>
>>>
>>> IIUC this is not the main goal of the PEP, the main goal is to provide
>>> support/standard for _static_ structural subtyping.
>>> Possibility to use protocols in runtime context is rather a minor bonus
>>> that exists mostly to provide a seamless transition
>>> for projects that already use ABCs.
>>>
>>
>> Is something like this already in the PEP? It deserves attention in one
>> of the earlier sections.
>>
>
> Yes, similar discussions appear in "Rationale and Goals", and "Existing
> approaches to structural subtyping". Maybe I need to polish the text there
> adding more focus on static typing.
>
> --
> Ivan
>
>
>


Re: [Python-Dev] PEP 550 v4

2017-08-30 Thread Kevin Conway
> Can Execution Context be implemented outside of CPython

I know I'm well late to the game and a bit dense, but where in the PEP is
the justification for this assertion? I ask because we built something to
solve the same problem in Twisted some time ago:
https://bitbucket.org/hipchat/txlocal . We were able to leverage
generator/coroutine decorators to preserve state without modifying the
runtime.

Given that this problem only exists in runtimes that multiplex coroutines
on a single thread, and given that coroutine execution engines only exist
in user space, why doesn't it make more sense to leave this to a library
that engines like asyncio and Twisted are responsible for standardising on?
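
To make the txlocal reference concrete, the technique is roughly the
following (a heavily simplified sketch, not the actual txlocal code; it
ignores throw()/close() and real scheduler integration):

import functools

_context = {}   # illustrative 'task-local' state managed purely in user space

def preserves_context(gen_func):
    # Re-yield from a generator-based coroutine, swapping in the context that
    # was active when it was created around every resume step.
    @functools.wraps(gen_func)
    def wrapper(*args, **kwargs):
        global _context
        captured = dict(_context)          # snapshot at creation time
        gen = gen_func(*args, **kwargs)
        value = None
        while True:
            saved, _context = _context, captured
            try:
                yielded = gen.send(value)  # run one step under our context
            except StopIteration as exc:
                return exc.value
            finally:
                captured, _context = _context, saved
            value = yield yielded          # hand control back to the scheduler
    return wrapper

The framework that drives the coroutine is the natural place to apply a
wrapper like this, which is why it seems feasible to leave the problem to
libraries like asyncio and Twisted.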

On Wed, Aug 30, 2017, 09:40 Yury Selivanov  wrote:

> On Wed, Aug 30, 2017 at 9:44 AM, Yury Selivanov 
> wrote:
> [..]
> >> FYI, I've been sketching an alternative solution that addresses these
> >> kinds of things. I've been hesitant to post about it, partly because of
> >> the PEP550-based workarounds that Nick, Nathaniel, Yury etc. have been
> >> describing, and partly because that might be a major distraction from
> >> other useful discussions, especially because I wasn't completely sure
> >> yet about whether my approach has some fatal flaw compared to PEP 550 ;).
> >
> > We'll never know until you post it. Go ahead.
>
> The only alternative design that I considered for PEP 550 and
> ultimately rejected was to have a the following thread-specific
> mapping:
>
>   {
>  var1: [stack of values for var1],
>  var2: [stack of values for var2]
>   }
>
> So the idea is that when we set a value for the variable in some
> frame, we push it to its stack.  When the frame is done, we pop it.
> This is a classic approach (called Shallow Binding) to implement
> dynamic scope.  The fatal flaw that made me reject this approach
> was the CM protocol (__enter__).  Specifically, context managers need
> to be able to control values in outer frames, and this is where this
> approach becomes super messy.
>
> Yury