Yury Selivanov added the comment:
Yes, I've experienced this bug. We need to fix this in 3.8.1.
--
nosy: +lukasz.langa
priority: normal -> release blocker
___
Python tracker
<https://bugs.python.org
Yury Selivanov added the comment:
> It would be nice if asyncio.run() uses the default loop if it's available
> (and not running), and that the default loop remains intact after the call
> returns.
Unfortunately it's not possible to implement that reliably and
Yury Selivanov added the comment:
> Well, I'm basically using a run method defined as a shorthand for
> self.loop.run_until_complete (without closing loop, reusing it throughout).
> It would be nice if asyncio.run could simply be used instead, but I
> understand you're
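The "reusable loop" helper the commenter describes might look like the sketch below; the `Runner` class and its method names are assumptions for illustration, not an asyncio API.

```python
import asyncio

class Runner:
    def __init__(self):
        self.loop = asyncio.new_event_loop()

    def run(self, coro):
        # Unlike asyncio.run(), this reuses one loop across calls
        # and leaves it open afterwards.
        return self.loop.run_until_complete(coro)

    def close(self):
        self.loop.close()

runner = Runner()
print(runner.run(asyncio.sleep(0, result="first")))   # first
print(runner.run(asyncio.sleep(0, result="second")))  # second
runner.close()
```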
Yury Selivanov added the comment:
Yeah, please do the change! Thanks!
--
___
Python tracker
<https://bugs.python.org/issue38652>
Yury Selivanov added the comment:
> The change is slightly not backward compatible but
Yeah, that's my main problem with converting `loop.run_in_executor()` to a
coroutine. When I attempted doing that I discovered that there's code that
expects the method to return a Future, a
Yury Selivanov added the comment:
> It is a more complex solution but definitely 100% backward compatible; plus
> the solution we can prepare people for removing the deprecated code
> eventually.
Yeah. Do you think it's worth bothering with this old low-level API instea
Yury Selivanov added the comment:
Few thoughts:
1. I like the idea of having a context manager to create a thread pool. It
should be initialized in a top-level coroutine and then passed down to other
code, as in:
async def main():
    async with asyncio.ThreadPool(concurrency=10) as pool:
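The proposed `asyncio.ThreadPool` was never added to the stdlib; the discussed API could be approximated today on top of `concurrent.futures.ThreadPoolExecutor` roughly like this (class name, `concurrency` parameter, and `run()` method are all assumptions from the discussion):

```python
import asyncio
import concurrent.futures

class ThreadPool:
    """Hypothetical async-context-manager thread pool."""
    def __init__(self, concurrency=10):
        self._executor = concurrent.futures.ThreadPoolExecutor(
            max_workers=concurrency)

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        self._executor.shutdown(wait=True)
        return False

    async def run(self, func, *args):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(self._executor, func, *args)

async def main():
    async with ThreadPool(concurrency=4) as pool:
        print(await pool.run(sum, range(10)))  # 45

asyncio.run(main())
```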
Yury Selivanov added the comment:
>> async with asyncio.ThreadPool(concurrency=10) as pool:
> I'm definitely on board with the usage of an async context manager and the
> functionality shown in the example, but I'm not sure that I entirely
> understand what t
Yury Selivanov added the comment:
> IMO, I think it would be a bit more clear to just explicitly call it
> "threads" or "max_threads", as that explains what it's effectively doing.
> While "concurrency" is still a perfectly correct
Yury Selivanov added the comment:
> From my understanding, the executor classes are designed around spawning the
> threads (or processes in the case of ProcessPoolExecutor) as needed up to
> max_workers, rather than spawning them upon startup. The asynchronous
> spawning o
Yury Selivanov added the comment:
> We should either remove the API (not realistic dream at least for many years)
> or fix it. There is no choice actually.
I don't understand. What happens if we don't await the future that
run_in_executor returns? Does it get GCed eventuall
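For context, the Future-returning behavior being discussed is that `run_in_executor` submits work immediately and returns an `asyncio.Future` wrapping the executor's `concurrent.futures.Future`:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    # run_in_executor returns a Future, not a coroutine -- the work
    # is submitted to the executor even if the future is never awaited.
    fut = loop.run_in_executor(None, sum, range(10))
    print(await fut)  # 45

asyncio.run(main())
```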
Yury Selivanov added the comment:
> I'm going to have to rescind the above statements. I was able to implement a
> new prototype of asyncio.ThreadPool (using ThreadPoolExecutor) that spawns
> its threads asynchronously on startup. Since this one is a bit more involved
>
Yury Selivanov added the comment:
> I think that I'm still not understanding something important though. Even if
> we initialize our ThreadPoolExecutor outside of __init__ (in a start()
> coroutine method, as you're proposing), it seems like the threads will be
> spawn
Yury Selivanov added the comment:
> MultiLoopChildWatcher must ensure that the event loop is awakened when it
> receives a signal by using signal.set_wakeup_fd(). This is done by
> _UnixSelectorEventLoop.add_signal_handler(). Maybe MultiLoopChildWatcher
> could reuse this function,
Yury Selivanov added the comment:
Yeah, I'm also not convinced this has to be part of asyncio, especially since
we'll likely have task groups in 3.11.
--
Yury Selivanov added the comment:
> nest-asyncio library (https://github.com/erdewit/nest_asyncio)
Seeing that the community actively wants to have support for nested loops I'm
slowly changing my opinion on this.
Guido, maybe we should allow nested asyncio loops disabled by
New submission from Yury Selivanov :
See this script:
https://gist.github.com/1st1/eccc32991dc2798f3fa0b4050ae2461d
Somehow an identity async function alters the behavior of manual iteration
through the wrapped nested generator.
This is a very subtle bug and I'm not even sure if this
Change by Yury Selivanov :
--
nosy: +gvanrossum
___
Python tracker
<https://bugs.python.org/issue45088>
New submission from Yury Selivanov :
Per discussion on python-dev (also see the linked email), PyAiter_Check should
only check for `__anext__` existence (and not for `__aiter__`) to be consistent
with `PyIter_Check`.
While there, I'd like to rename PyAiter_Check to PyAIter_Chec
Change by Yury Selivanov :
--
keywords: +patch
pull_requests: +26618
pull_request: https://github.com/python/cpython/pull/28194
___
Python tracker
<https://bugs.python.org/issue45
Yury Selivanov added the comment:
We can merge this PR as is (Benjamin, thanks for working on this!), but I think
that as soon as we merge it we should do some refactoring and deprecations.
The child watchers API has to go. It's confusing, painful to use, it's not
compatible
Yury Selivanov added the comment:
I'd be -1 on changing the default of an existing method, at least without
consulting with a wider audience. We can add a new method to the loop and
deprecate create_datagram_endpoint.
I suggest to post to python-dev and discuss this before makin
New submission from Yury Selivanov :
The exception should probably be just ignored. Andrew, thoughts?
Here's an example error traceback:
Traceback (most recent call last):
File "c:\projects\asyncpg\asyncpg\connection.py", line 1227, in _cancel
awai
Yury Selivanov added the comment:
> My preference for create_datagram_endpoint() would be:
> - make the "reuse_address" parameter a no-op, and raise an error when
> "reuse_address=True" is passed
> - do that in 3.8 as well
Yeah, I like this proposal; we can
Yury Selivanov added the comment:
> Oh in that case, would you like me to close or modify GH-17311? I didn't
> think you'd approve of making the more extensive changes all the way back to
> 3.5.
After reading the comments here I think Antoine's solution makes sense.
Yury Selivanov added the comment:
This seems like a useful idea. I recommend writing a test implementation and
play with it.
Andrew:
> I think the proposal makes the queues API more error-prone: concurrent put()
> and close() produces unpredictable result on get() side.
How? C
Yury Selivanov added the comment:
> 1. A CancelledError (or maybe`QueueCancelled`?) exception is raised in all
> producers and consumers ) - this gives a producer a chance to handle the
> error and do something with the waiting item that could not be `put()`
> 2. Items curr
Yury Selivanov added the comment:
> The issue is minor, I suspect nobody wants to derive from ContextVar class.
I don't think that's allowed, actually.
> The generic implementation for __class_getitem__ is returning unmodified self
> argument. Yuri, is there a reason to
Change by Yury Selivanov :
--
nosy: -yselivanov
___
Python tracker
<https://bugs.python.org/issue38973>
Yury Selivanov added the comment:
> This throws 'cannot pickle Context'
Yes, this is expected. contextvars are not compatible with multiprocessing.
> This hangs forever *
This hanging part is weird, and most likely hints at a bug in
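The non-picklability mentioned above is easy to observe directly (the exact error message may vary across versions, but the exception type is `TypeError`):

```python
import contextvars
import pickle

ctx = contextvars.copy_context()
try:
    pickle.dumps(ctx)
except TypeError as exc:
    # Context objects define no pickling support, so pickle rejects them.
    print(type(exc).__name__)  # TypeError
```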
Yury Selivanov added the comment:
I think we still use get_event_loop() in asyncio code, no? If we do, we should
start by raising deprecation warnings in those call sites, e.g. if a Future or
Lock is created outside of a coroutine and no explicit event loop is passed. We
should do this in
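The preferred pattern inside a coroutine is unambiguous and needs no `get_event_loop()` at all; a minimal sketch:

```python
import asyncio

async def main():
    # Inside a coroutine there is always a running loop, so
    # get_running_loop() never guesses or creates a loop implicitly.
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    fut.set_result(42)
    print(await fut)  # 42

asyncio.run(main())
```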
Yury Selivanov added the comment:
> There is not clear rationale to justify the addition of the function
Yeah, with the new threaded watcher being the default we don't need this
anymore.
> so I reject the feature
NP, here, but, hm, can you unilaterally reject f
Yury Selivanov added the comment:
> I'm not strongly against the feature. I first proposed to expose it, but make
> it private. Almost one year later, the PR was not updated. So I just closed
> the PR and the issue.
All clear, Victor. Let's keep this closed. The reaso
Yury Selivanov added the comment:
Андрей,
Here's how you can fix your example:
import asyncio

class Singleton:
    _LOCK = None
    _LOOP = None

    @classmethod
    async def create(cls):
        if cls._LOOP is None:
            cls._LOOP = asyncio.get_running_loop()
Yury Selivanov added the comment:
> Unfortunately, if Python is used as a frontend for a native libray (eg
> accessed via ctypes), and in case that the state of interest is managed in
> the native library, contextvar API is insufficient.
Please elaborate with a cleare
Yury Selivanov added the comment:
> The ship has sailed, this change breaks a lot of existing code without a
> strong reason.
Yes.
It's a common thing to compute asyncio.sleep delay and sometimes it goes
negative. The current behavior is part of our API now.
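A sketch of the deadline pattern being referred to, where the computed delay can legitimately come out negative (`wait_until` is a hypothetical helper name):

```python
import asyncio
import time

async def wait_until(deadline):
    # If the deadline has already passed, the delay is negative;
    # asyncio.sleep() treats that like zero and returns immediately.
    await asyncio.sleep(deadline - time.monotonic())

asyncio.run(wait_until(time.monotonic() - 5))  # returns immediately
```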
--
resoluti
Yury Selivanov added the comment:
> Is there any existing API that can be used to call `lib.set_state` on context
> changes?
No, but there's C API that you can use to get/set contextvars. If a C library
is hard coded to use threadlocals I'm afraid there's nothing we ca
Yury Selivanov added the comment:
> I agree, but wouldn't you agree that some information is better than no
> information?
We do agree with that. Making it work in the way that does not disturb people
when a 10mb bytes string is passed is challenging. We could just cut everything
Yury Selivanov added the comment:
> Would there be too much overhead if allowing specification of a python
> function that contextvars calls on context changes?
Potentially yes, especially if we allow more than one context change callback.
Allowing just one makes the API inflexible
Yury Selivanov added the comment:
> It does seem like a good solution.
Great. I'll close this issue then as the proposed solution is actually not as
straightforward as it seems. Task names exist specifically to solve this case.
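Task names (available since Python 3.8) work like this:

```python
import asyncio

async def main():
    # A name can be attached at creation time and read back later,
    # without subclassing Task.
    task = asyncio.create_task(asyncio.sleep(0), name="heartbeat")
    print(task.get_name())  # heartbeat
    await task

asyncio.run(main())
```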
--
resolution: -> rejected
stage: -> r
Yury Selivanov added the comment:
> I very doubt if any sane code is organizing like this test: start delayed
> reading, cancel it and read again.
Hm, cancellation should work correctly no matter how "sane" or "insane" the
user code is.
> The worse, neither p
Yury Selivanov added the comment:
> Source?
I could not find a good source, sorry. I remember I had a complaint in uvloop
to support negative timeouts, but I can't trace it.
That said, I also distinctly remember seeing code (and writing such code
myself) that performs comput
Yury Selivanov added the comment:
> For asyncio.Lock (plus other synchronization primitives) and asyncio.Queue,
> this would be added in https://github.com/python/cpython/pull/18195.
> Currently waiting on emanu (author of the PR) to finish up some changes, but
> it's mostl
Yury Selivanov added the comment:
What's the actual use case for exposing this functionality?
--
___
Python tracker
<https://bugs.python.org/issue37497>
Yury Selivanov added the comment:
Thank you so much, Stefan, for looking into this. Really appreciate the help.
--
___
Python tracker
<https://bugs.python.org/issue39
Yury Selivanov added the comment:
I'd be fine with `Signature.from_text()`, but not with `Signature` constructor
/ `signature()` function accepting both callable and string arguments. Overall,
I think that we ought to have a real need to add this new API, so unless
there's a go
Yury Selivanov added the comment:
Good catch & PR ;) Thanks
--
___
Python tracker
<https://bugs.python.org/issue39965>
New submission from Yury Norov :
Hi all,
In Python, I need a tool to reverse part of a list (tail) quickly.
I expected that
nums[start:end].reverse()
would do it inplace with the performance similar to nums.reverse().
However, it doesn't work at all. The fastest way to reverse a
Yury Selivanov added the comment:
> @Yury what do you think?
Yeah, the documentation needs to be fixed.
> Maybe "Returns an iterator of awaitables"?
I'd suggest changing it to: "Return an iterator of coroutines. Each coroutine
allows waiting for the earliest next
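The behavior under discussion, for reference: `asyncio.as_completed()` yields awaitables in completion order, not submission order.

```python
import asyncio

async def work(n):
    await asyncio.sleep(n * 0.05)
    return n

async def main():
    # Submitted as 3, 1, 2 -- but results arrive fastest-first.
    results = [await fut
               for fut in asyncio.as_completed([work(3), work(1), work(2)])]
    print(results)  # [1, 2, 3]

asyncio.run(main())
```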
Yury Selivanov added the comment:
IMO this is a 3.9 fix.
--
___
Python tracker
<https://bugs.python.org/issue29587>
Change by Yury Selivanov :
--
nosy: -yselivanov
___
Python tracker
<https://bugs.python.org/issue39562>
Yury Selivanov added the comment:
Good catch. The function should be fixed to:
_marker = object()

def run(coro, *, debug=_marker):
    if debug is not _marker:
        loop.set_debug(debug)
--
___
Python tracker
<https://bugs.python.
Yury Selivanov added the comment:
> If so, the main purpose of that example is just to demonstrate basic
> async/await syntax, and show asyncio.run() for a trivial case to clearly show
> how it's used at a fundamental level; it's intentional that the more involv
Change by Yury Selivanov :
--
nosy: -yselivanov
___
Python tracker
<https://bugs.python.org/issue40257>
Yury Selivanov added the comment:
> I think that in case inner task cancelation fails with some error,
> asyncio.wait_for should reraise it instead of silently losing it.
+1.
--
___
Python tracker
<https://bugs.python.org/i
Yury Selivanov added the comment:
New changeset de92769d473d1c0955d36da2fc71462621326f00 by jack1142 in branch
'master':
bpo-34790: add version of removal of explicit passing of coros to
`asyncio.wait`'s documentation (#20008)
https://github.com/python
Yury Selivanov added the comment:
New changeset 382a5635bd10c237c3e23e346b21cde27e48d7fa by romasku in branch
'master':
bpo-40607: Reraise exception during task cancelation in asyncio.wait_for()
(GH-20054)
https://github.com/python/cpython/commit/382a5635bd10c237c3e23e346b21cd
Yury Selivanov added the comment:
Elevating to release blocker to make sure it's included. The PR is good.
--
___
Python tracker
<https://bugs.python.org/is
Change by Yury Selivanov :
--
priority: normal -> release blocker
___
Python tracker
<https://bugs.python.org/issue31033>
Yury Selivanov added the comment:
> 2) Add some warning about the value is thrown away (in debug mode) and
> document it somewhere.
The documentation update is definitely something that needs to be done in 3.9.
Want to submit a PR?
We can also issue a warning in asyncio debug mode
Yury Selivanov added the comment:
> I think I am closing the PR as it seems that the gains are not good enough
> (and there is quite a lot of noise by comparing the benchmarks together).
IMO you need to implement LOAD_METHOD support for all kinds of calls, including
the ones that use
Yury Selivanov added the comment:
> I will try to do some prototyping around that to see how much can we gain in
> that route. In any case, adding LOAD_METHOD support for all kinds of calls
> should be an improvement by itself even without caching, no?
Exactly.
As one arg
Yury Selivanov added the comment:
Hey Hynek! :) Can you submit a PR?
--
___
Python tracker
<https://bugs.python.org/issue42600>
Yury Selivanov added the comment:
New changeset 17ef4319a34f5a2f95e7823dfb5f5b8cff11882d by Richard Kojedzinszky
in branch 'master':
bpo-41891: ensure asyncio.wait_for waits for task completion (#22461)
https://github.com/python/cpython/commit/17ef4319a34f5a2f95e7823dfb5f5b
Yury Selivanov added the comment:
Thanks for the PR, Richard!
--
nosy: -miss-islington
___
Python tracker
<https://bugs.python.org/issue41891>
Change by Yury Selivanov :
--
nosy: -miss-islington
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed
___
Python tracker
<https://bugs.python.or
Yury Selivanov added the comment:
This was actually first fixed in #41891 but somehow hadn't been merged yet.
Yurii, thanks so much for working on this and making a PR; there was just
another PR fixing the same issue that was opened first, so I had to merge that
one.
--
resol
Yury Selivanov added the comment:
New changeset 82dbfd5a04863d8b6363527e6a34a90c9aa5691b by Miss Islington (bot)
in branch '3.9':
bpo-41891: ensure asyncio.wait_for waits for task completion (GH-22461) (#23840)
https://github.com/python/cpython/commit/82dbfd5a04863d8b6363527e6a34a9
Yury Selivanov added the comment:
> The gist seems to be to have extra opcodes that only work for certain
> situations (e.g. INT_BINARY_ADD). In a hot function we can rewrite opcodes
> with their specialized counterpart. The new opcode contains a guard that
> rewrites itself
Yury Selivanov added the comment:
> Do we have good intuition or data about which operations need speeding up
> most? Everybody always assumes it's BINARY_ADD, but much Python code isn't
> actually numeric, and binary operations aren't all that common.
IMO, we sho
Yury Selivanov added the comment:
> So it seems that everything is in the noise range except the "float"
> benchmark that is 1.11x faster
Yeah, this is why.
https://github.com/python/pyperformance/blob/master/pyperformance/benchmarks/bm_float.py#L12
This is a great result
Yury Selivanov added the comment:
> Some microbenchmarks:
Can you add a new one `read_slots`?
--
___
Python tracker
<https://bugs.python.org/issu
Yury Selivanov added the comment:
> I tried to implement such optimization in my old
> https://faster-cpython.readthedocs.io/fat_python.html project. I implemented
> guards to de-optimize the code if a builtin is overridden.
FWIW the globals opcode cache handles all of this now. T
Yury Selivanov added the comment:
> So you think that even a dedicated "LEN" opcode would not be any faster?
> (This is getting in Paul Sokolovsky territory -- IIRC he has a variant of
> Python that doesn't allow overriding builtins.)
Yeah, a dedicated LEN opcode co
Yury Selivanov added the comment:
> Giving the ability to control the cache size, at least at Python startup, is
> one option.
I'd really prefer not to allow users to control cache sizes. There's basically
no point in that; the only practically useful thing is to enab
Yury Selivanov added the comment:
Just a note, __context__ cycles can theoretically be longer than 2 nodes. I've
encountered cycles like `exc.__context__.__context__.__context__ is exc` a few
times in my life, typically resulting from some weird third-party libraries.
The only soluti
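Such a cycle can be constructed directly, since `__context__` is a writable attribute; a minimal 3-node example matching the shape described above:

```python
# A cycle of any length can arise from buggy exception juggling:
a, b, c = ValueError("a"), TypeError("b"), KeyError("c")
a.__context__, b.__context__, c.__context__ = b, c, a
assert a.__context__.__context__.__context__ is a  # 3-node cycle
```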
Yury Selivanov added the comment:
> If someone else agrees, I can create a new issue.
I'd keep this one issue, but really up to you. I don't think I have time in the
next few days to work on what I proposed but would be happy to brainstorm
Yury Selivanov added the comment:
New changeset 210a137396979d747c2602eeef46c34fc4955448 by Fantix King in branch
'master':
bpo-30064: Fix asyncio loop.sock_* race condition issue (#20369)
https://github.com/python/cpython/commit/210a137396979d747c2602eeef46c3
Yury Selivanov added the comment:
New changeset dc4eee9e266267498a6b783a0abccc23c06f2b87 by Fantix King in branch
'master':
bpo-30064: Properly skip unstable loop.sock_connect() racing test (GH-20494)
https://github.com/python/cpython/commit/dc4eee9e266267498a6b783a0abccc
Yury Selivanov added the comment:
> I'm suggesting a method on coroutines that runs them without blocking, and
> will run a callback when it's complete.
And how would that method be implemented? Presumably the event loop would
execute the coroutine, but that API is already
Change by Yury Selivanov :
--
resolution: -> rejected
stage: -> resolved
status: open -> closed
___
Python tracker
<https://bugs.python.org/issue40844>
Yury Selivanov added the comment:
This has long been implemented:
https://docs.python.org/3/library/contextlib.html#contextlib.asynccontextmanager
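A minimal usage example of the linked API:

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def session():
    # setup runs on __aenter__, teardown on __aexit__
    print("setup")
    try:
        yield "resource"
    finally:
        print("teardown")

async def main():
    async with session() as res:
        print(res)  # resource

asyncio.run(main())
```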
--
nosy: +yselivanov
___
Python tracker
<https://bugs.python.org/issue40
Yury Selivanov added the comment:
> Since asyncio is no longer provisional, should it break backwards
> compatibility with just a What's New entry?
Adding new APIs to asyncio can't be classified as a backward compatibility
issue. Otherwise the development of it would stall
Yury Selivanov added the comment:
> Optimally, we want to do removals before the beta so that users can prepare
> accordingly rather than deal with breakage in the final release.
+1 to remove it now. Up to Lukasz to give us green or red light here,
Yury Selivanov added the comment:
Thanks for posting this, Mike.
> Vague claims of "framework X is faster because it's async" appear, impossible
> to confirm as it is unknown how much of their performance gains come from the
> "async" aspect and how m
Yury Selivanov added the comment:
The idiomatic way:
async def main():
    loop = asyncio.get_running_loop()
    loop.set_exception_handler(...)
    # other code

asyncio.run(main())
We don't want to add new arguments to asyncio.run as there would be too
Yury Selivanov added the comment:
> I think this is a really critical technique to have so that libraries that
> mediate between a user-facing facade and TCP based backends no longer have to
> make a hard choice about if they are to support sync vs. async (or async with
> an o
Yury Selivanov added the comment:
> Yeah, writing a trivial "event loop" to drive actually-synchronous code is
> easy. Try it out:
This is exactly the approach I used in edgedb-python.
> I guess there's technically some overhead, but it's tiny.
Correct, the o
Yury Selivanov added the comment:
> The community is hurting *A LOT* right now because asyncio is intentionally
> non-compatible with the traditional blocking approach that is not only still
> prevalent it's one that a lot of us think is *easier* to work with.
Mike, I'
Yury Selivanov added the comment:
> Changes in bpo-41242 were rejected not because the new code is worse, but
> because it is not obviously and significantly better that the existing code.
> Here status quo wins for the same reasons. This rule saves us from endless
> rewriting
Yury Selivanov added the comment:
> n python to know if there could be a context switch to get_running_loop while
> set_running_loop is running.
No, it's protected by the GIL.
Good catch, and merged.
--
nosy: +yselivanov
resolution: -> fixed
stage: patch review ->
Yury Selivanov added the comment:
Looks like https://github.com/python/cpython/pull/17975 was forgotten and was
never committed to 3.9. So it's 3.10 now.
Best bet for you is to use uvloop which should support the feature.
--
___
Python tr
Yury Selivanov added the comment:
> So how about maybe:
That wouldn't work. You still haven't explained what's wrong with calling `loop
= asyncio.get_running_loop()` inside `async def main()`. That literally solves
all problems without the need of us
Yury Selivanov added the comment:
New changeset 0b6169e391ce6468aad711f08ffb829362293ad5 by Tony Solomonik in
branch '3.8':
bpo-41247: asyncio.set_running_loop() cache running loop holder (#21406)
https://github.com/python/cpython/commit/0b6169e391ce6468aad711f08ffb82
Yury Selivanov added the comment:
> The aiohttp issue says they won't fix this until asyncio supports it. Kinda
> understand that.
I saw you opened an issue with aiohttp to allow this and they're open to it. I
hope that will get some movement. It also would be a big test f
Yury Selivanov added the comment:
New changeset 568fb0ff4aa641329261cdde20795b0aa9278175 by Tony Solomonik in
branch 'master':
bpo-41273: asyncio's proactor read transport's better performance by using
recv_into instead of recv (#21442)
https://github.com/p
Yury Selivanov added the comment:
We don't want to extend StreamReader with new APIs as ultimately we plan to
deprecate it. A new streams API is needed, perhaps inspired by Trio. Sadly, I'm
-1 on this one.
--
___
Python track
Yury Selivanov added the comment:
> Im interested in learning about the new api.
There are two problems with the current API:
1. Reader and writer are separate objects, while they should be one.
2. Writer.write is not a coroutine, although it should be one.
There are other minor nits,
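The two points above could be sketched as a thin wrapper over today's streams; this `Stream` class is purely hypothetical (no such API exists in asyncio), shown here against a throwaway local echo server:

```python
import asyncio

class Stream:
    """Hypothetical combined reader/writer; write() is a coroutine
    that includes the drain()."""
    def __init__(self, reader, writer):
        self._reader = reader
        self._writer = writer

    async def readline(self):
        return await self._reader.readline()

    async def write(self, data):
        self._writer.write(data)
        await self._writer.drain()  # awaiting write replaces drain()

    async def close(self):
        self._writer.close()
        await self._writer.wait_closed()

async def main():
    async def echo(reader, writer):
        writer.write(await reader.readline())
        await writer.drain()
        writer.close()

    server = await asyncio.start_server(echo, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]
    stream = Stream(*await asyncio.open_connection("127.0.0.1", port))
    await stream.write(b"hello\n")
    print(await stream.readline())  # b'hello\n'
    await stream.close()
    server.close()
    await server.wait_closed()

asyncio.run(main())
```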
Yury Selivanov added the comment:
> Is it simply combining stream reader and stream writer into a single object
> and changing the write() function to always wait the write (thus deprecating
> drain) and that's it?
Pretty much. We might also rename a few APIs here and ther
Yury Selivanov added the comment:
> By the way if we will eventually combine StreamReader and StreamWriter won't
> this function (readinto) be useful then?
Yes. But StreamReader and StreamWriter will stay for the foreseeable future
for backwards compatibility pretty much fro