[issue34790] Deprecate passing coroutine objects to asyncio.wait()

2019-10-28 Thread Kyle Stanley
Change by Kyle Stanley: pull_requests: +16503; stage: -> patch review; pull_request: https://github.com/python/cpython/pull/16977

[issue34790] Deprecate passing coroutine objects to asyncio.wait()

2019-10-28 Thread Kyle Stanley
Kyle Stanley added the comment: GH-16977 implements the deprecation warning, adds tests, and adds the 3.9 whatsnew entry. Once this PR is finalized, I think this issue can be closed.

[issue34790] Deprecate passing coroutine objects to asyncio.wait()

2019-10-28 Thread Kyle Stanley
Kyle Stanley added the comment: > Is this whatsnew/3.8.rst entry correct and complete? > The :func:`asyncio.coroutine` :term:`decorator` is deprecated and will be > removed in version 3.10. Instead of ``@asyncio.coroutine``, use > :keyword:`async def` instead. (Contributed by …)

[issue38652] Remove/update provisional note for asyncio.BufferedProtocol

2019-10-30 Thread Kyle Stanley
New submission from Kyle Stanley: In the documentation (https://docs.python.org/3.9/library/asyncio-protocol.html#buffered-streaming-protocols) for asyncio.BufferedProtocol there is a provisional API note. This should be updated/removed for 3.8 and 3.9 since it became a part of the stable API.

[issue38652] Remove/update provisional note for asyncio.BufferedProtocol

2019-10-30 Thread Kyle Stanley
Kyle Stanley added the comment: > I would propose to changing it to: propose changing it to* Also, I think this issue would be a good candidate for "newcomer friendly", since it's a simple and well-defined documentation update. It could provide a decent introduction for a new contributor.

[issue38652] Remove/update provisional note for asyncio.BufferedProtocol

2019-10-30 Thread Kyle Stanley
Change by Kyle Stanley: components: +asyncio

[issue38652] Remove/update provisional note for asyncio.BufferedProtocol

2019-10-31 Thread Kyle Stanley
Kyle Stanley added the comment: > I'm a newcomer, would it be ok for me to take on this proposal? It will need approval from Yury and/or Andrew before it can be merged, but I think this is relatively uncontroversial. You can definitely work on it.

[issue32309] Implement asyncio.run_in_executor shortcut

2019-10-31 Thread Kyle Stanley
Kyle Stanley added the comment: > I don't like the low-level API of run_in_executor. "executor" being the > first argument, the inability to pass **kwargs, etc. > I mean it's great that we can use 'concurrent.futures' in asyncio, but having …
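The ergonomics gap being discussed can be sketched with a hypothetical wrapper (`run_in_thread` is an invented name for illustration; Python 3.9 later shipped a similar helper as `asyncio.to_thread()`). `loop.run_in_executor()` takes the executor as its first argument and cannot forward keyword arguments, so a friendlier API has to go through `functools.partial`:

```python
import asyncio
import functools

async def run_in_thread(func, /, *args, **kwargs):
    # Hypothetical convenience wrapper: hides the executor argument and
    # forwards **kwargs, which loop.run_in_executor() cannot do directly.
    loop = asyncio.get_running_loop()
    call = functools.partial(func, *args, **kwargs)
    return await loop.run_in_executor(None, call)  # None = default executor

def work(x, *, scale=1):
    return x * scale

print(asyncio.run(run_in_thread(work, 21, scale=2)))  # prints 42
```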

[issue32309] Implement asyncio.run_in_executor shortcut

2019-10-31 Thread Kyle Stanley
Kyle Stanley added the comment: > end up adding two high-level functions Clarification: asyncio.run_in_executor() would be a function, but asyncio.ThreadPool would be a context manager class.

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: So, here's a prototype implementation of asyncio.ThreadPool that would function exactly as Yury described, but I'm not convinced about the design. Usually, it seems preferred to separate the context manager from the rest of the class (as was …
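For readers following along, a minimal sketch of the kind of prototype under discussion (names and defaults here are assumptions, not the API that was eventually merged) might look like:

```python
import asyncio
import concurrent.futures

class ThreadPool:
    # Sketch only: wraps ThreadPoolExecutor behind an async context manager.
    # A real implementation would shut the executor down without blocking
    # the event loop; shutdown(wait=True) here is a simplification.
    def __init__(self, concurrency=None):
        self._executor = concurrent.futures.ThreadPoolExecutor(
            max_workers=concurrency)
        self._loop = None

    async def __aenter__(self):
        self._loop = asyncio.get_running_loop()
        return self

    async def __aexit__(self, *exc_info):
        self._executor.shutdown(wait=True)

    async def run(self, func, /, *args):
        return await self._loop.run_in_executor(self._executor, func, *args)

async def main():
    async with ThreadPool(concurrency=4) as pool:
        return await pool.run(pow, 2, 10)

print(asyncio.run(main()))  # prints 1024
```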

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: > I don't think changing the default executor is a good approach. What happens, > if two or more thread pools are running at the same time? In that case they > will use the same default executor anyway, so creating a new executor each …

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: Also, I agree with Paul's idea of initializing the ThreadPoolExecutor in the __init__ instead of __aenter__, that makes more sense now that I think about it.

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: > thread = threading.Thread(target=loop._do_shutdown, args=(executor, future)) Correction: > thread = threading.Thread(target=_do_shutdown, args=(loop, executor, future)) Also, it might make more sense to rename _do_shutdown() to _do_executor_shutdown().
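The shutdown-in-a-thread idea can be sketched roughly as follows (`shutdown_executor` and `_do_executor_shutdown` are illustrative names; the real `loop.shutdown_default_executor()` does essentially this for the default executor):

```python
import asyncio
import concurrent.futures
import threading

def _do_executor_shutdown(loop, executor, future):
    # Join the executor's worker threads *off* the event loop thread,
    # then hand the result back via call_soon_threadsafe().
    executor.shutdown(wait=True)
    loop.call_soon_threadsafe(future.set_result, None)

async def shutdown_executor(executor):
    loop = asyncio.get_running_loop()
    future = loop.create_future()
    thread = threading.Thread(
        target=_do_executor_shutdown, args=(loop, executor, future))
    thread.start()
    try:
        await future  # the loop keeps running while the threads join
    finally:
        thread.join()

async def main():
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    result = await asyncio.get_running_loop().run_in_executor(
        executor, sum, range(10))
    await shutdown_executor(executor)
    return result

print(asyncio.run(main()))  # prints 45
```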

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: Actually, I think it would be better to move the functionality of loop.shutdown_default_executor() to a new private method loop._shutdown_executor() that takes an executor argument rather than shutting down the default one. This could be used in both places.

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: > Also in this case run awaits and returns the result. Yury suggested earlier > just to return the future and not await. Yeah that's roughly what my initial version was doing. I'm personally leaning a bit more towards returning the future.

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: > async with asyncio.ThreadPool(concurrency=10) as pool: I'm definitely on board with the usage of an async context manager and the functionality shown in the example, but I'm not sure that I entirely understand what the "concurrency" keyword argument does.

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: > I believe behavior occurs within shutdown_default_executor(), correct? > Specifically, within for ThreadPoolExecutor when executor.shutdown(wait=True) > is called and all of the threads are joined without a timeout, it simply > waits for each thread to join.

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: > Number of OS threads to spawn. Ah I see, so this would correspond with the "max_workers" argument of ThreadPoolExecutor then, correct? If so, we could pass this in the __init__ for ThreadPool: def __init__(self, concurrency): ...

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: > def __init__(self, concurrency=None): Minor clarification: the default should probably be None, which would effectively set the default maximum number of threads to min(32, os.cpu_count() + 4), once it's passed to ThreadPoolExecutor.
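This default can be checked directly against `ThreadPoolExecutor` (the `_max_workers` attribute is private and CPython-specific, so this is for illustration only):

```python
import concurrent.futures
import os

# With max_workers=None, ThreadPoolExecutor (3.8+) defaults to
# min(32, os.cpu_count() + 4); os.cpu_count() may return None.
expected = min(32, (os.cpu_count() or 1) + 4)
executor = concurrent.futures.ThreadPoolExecutor(max_workers=None)
assert executor._max_workers == expected  # private attribute, demo only
executor.shutdown()
```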

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-01 Thread Kyle Stanley
Kyle Stanley added the comment: > And that's why I like it. If we add ProcessPool it will have the same > argument: concurrency. > max_workers isn't correct, as we want to spawn all threads and all processes > when we start. Thus btw makes me think that initializing …

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-02 Thread Kyle Stanley
Kyle Stanley added the comment: > No, that would be too much work. Writing a thread pool or process pool from > scratch is an extremely tedious task and it will take us years to stabilize > the code. It's not simple. > We should design *our* API correctly though. And …

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-04 Thread Kyle Stanley
Kyle Stanley added the comment: > The asynchronous spawning of threads or processes would also not be > compatible with the executor subclasses as far as I can tell. > Thus, it seemed to make more sense to me to actually build up a new Pool > class from scratch that was largely …

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-04 Thread Kyle Stanley
Kyle Stanley added the comment: > Nice work! This is a great exercise, but we can really just use > concurrent.futures.ThreadPool as is. Spawning threads is fast. As I mentioned > before all we need to do is to design *our* API to NOT initialize pools in > __init__, that's …

[issue38652] Remove/update provisional note for asyncio.BufferedProtocol

2019-11-04 Thread Kyle Stanley
Kyle Stanley added the comment: > Hey, I've done the change and opened a pull request for it (I'm working with > Ben and I've let him know) Make sure to change the title of the PR to "bpo-<issue number>: <title>", this will automatically attach the PR to the associated bpo issue.

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-04 Thread Kyle Stanley
Kyle Stanley added the comment: > since the new threads are spawned in ThreadPoolExecutor *after* > executor.submit() is called It's also worth mentioning that ThreadPoolExecutor only spawns up to one additional thread at a time for each executor.submit() call.
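This lazy spawning behavior is easy to observe by peeking at the private `_threads` set (CPython-specific, illustration only):

```python
import concurrent.futures

executor = concurrent.futures.ThreadPoolExecutor(max_workers=8)
assert len(executor._threads) == 0   # no worker threads at construction
executor.submit(pow, 2, 8).result()
assert len(executor._threads) == 1   # at most one new thread per submit()
executor.shutdown()
```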

[issue38692] add a pidfd child process watcher

2019-11-05 Thread Kyle Stanley
Kyle Stanley added the comment: > My the main question is: how to detect if the new watcher can be used or > asyncio should fallback to threaded based solution? Perhaps in the __init__ we could do something like this: class PidfdChildWatcher(AbstractChildWatcher): def __init__ …
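One way such a runtime check could look (a sketch, not the code that was merged; `can_use_pidfd` is an invented helper name) is to probe `os.pidfd_open()` directly:

```python
import os

def can_use_pidfd():
    # Sketch: pidfd support needs both the Python-level function
    # (3.9+, Linux only) and a kernel/sandbox that actually permits it.
    if not hasattr(os, "pidfd_open"):
        return False
    try:
        fd = os.pidfd_open(os.getpid())
    except OSError:
        # e.g. ENOSYS on kernels older than 5.3, EPERM under seccomp
        return False
    os.close(fd)
    return True
```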

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2021-05-22 Thread Kyle Stanley
Kyle Stanley added the comment: > Since 3.5 has now reached end-of-life, this issue will not be fixed there so > it looks like it can be closed. Thanks, Ned <3

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2021-05-22 Thread Kyle Stanley
Kyle Stanley added the comment: > Thanks, Ned <3 (For following up and closing the issue)

[issue39995] test_concurrent_futures: ProcessPoolSpawnExecutorDeadlockTest.test_crash() fails with OSError: [Errno 9] Bad file descriptor

2021-05-28 Thread Kyle Stanley
Kyle Stanley added the comment: Thanks for closing up the issue, Victor :)

[issue39529] Deprecate get_event_loop()

2021-05-31 Thread Kyle Stanley
Kyle Stanley added the comment: > But why does `asyncio.run` unconditionally create a new event loop instead of > running on `asyncio.get_event_loop`? AFAIK, it does so for purposes of compatibility in programs that need multiple separate event loops and providing a degree of isolation between them.
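The isolation can be demonstrated directly: each `asyncio.run()` call builds its own event loop and closes it on exit, so no loop state leaks between runs:

```python
import asyncio

async def current_loop():
    return asyncio.get_running_loop()

# asyncio.run() creates a fresh event loop per call and closes it.
loop_a = asyncio.run(current_loop())
loop_b = asyncio.run(current_loop())
assert loop_a is not loop_b
assert loop_a.is_closed() and loop_b.is_closed()
```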

[issue44697] Memory leak when asyncio.open_connection raise

2021-07-26 Thread Kyle Stanley
Kyle Stanley added the comment: Thank you Arteem, that should help indicate where the memory leak is present.

[issue38692] add a pidfd child process watcher

2019-11-08 Thread Kyle Stanley
Kyle Stanley added the comment: > I got a failure in newly added test_pidfd_open: > I'm running kernel 5.3.7-x86_64-linode130 with Arch Linux. > I think you must still be experiencing some sort of sandboxing. I don't know > how else you would get an EPERM out of pidfd_open().

[issue38692] add a pidfd child process watcher

2019-11-08 Thread Kyle Stanley
Kyle Stanley added the comment: > [aeros:~/repos/benjaminp-cpython]$ ./python -m test test_pty -F > (asyncio-pidfd) ... 0:01:31 load avg: 1.57 [2506] test_pty 0:01:31 load avg: 1.57 [2507] test_pty Oops, looks like I copied the wrong results of a separate test I was running earlier.

[issue38692] add a pidfd child process watcher

2019-11-08 Thread Kyle Stanley
Kyle Stanley added the comment: [aeros:~/repos/benjaminp-cpython]$ ./python -m test test_posix -F (asyncio-pidfd) ... 0:08:52 load avg: 1.89 [1008] test_posix 0:08:52 load avg: 2.22 [1009] test_posix ... 1008 tests OK. Total duration: 8 min 52 sec Tests result: INTERRUPTED

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-09 Thread Kyle Stanley
Kyle Stanley added the comment: > (a) design the API correctly; (b) ship something that definitely works with a proven ThreadPoolExecutor; (c) write lots of tests; (d) write docs; (e) if (a-d) are OK, refine the implementation later by replacing ThreadPoolExecutor with a proper …

[issue28533] Replace asyncore

2019-11-11 Thread Kyle Stanley
Kyle Stanley added the comment: > I'm happy to work on replacing asyncore usage in one of the other test files. Sounds good, just let us know which one(s) you're working on. (:

[issue38692] add a pidfd child process watcher

2019-11-13 Thread Kyle Stanley
Kyle Stanley added the comment: > We can merge this PR as is (Benjamin, thanks for working on this!), but I > think that as soon as we merge it we should do some refactoring and > deprecations. > The child watchers API has to go. It's confusing, painful to use, it's …

[issue38692] add a pidfd child process watcher

2019-11-13 Thread Kyle Stanley
Kyle Stanley added the comment: > > The child watchers API has to go. It's confusing, painful to use, it's not > > compatible with third-party event loops. It increases the API surface > > without providing us with enough benefits. > +1 Also, adding to this: …

[issue38591] Deprecate Process Child Watchers

2019-11-14 Thread Kyle Stanley
Kyle Stanley added the comment: > I understand that there's *some* overhead associated with spawning a new > thread, but from my impression it's not substantial enough to make a > significant impact in most cases. Although I think this still stands to some degree, I will …

[issue38591] Deprecate Process Child Watchers

2019-11-14 Thread Kyle Stanley
Kyle Stanley added the comment: > You have to account also for the thread stack size. I suggest to look at RSS > memory instead. Ah, good point. I believe get_size() only accounts for the memory usage of the thread object, not the amount allocated in physical memory for the thread's stack.
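The distinction can be seen with a quick check: `sys.getsizeof()` reports only the size of the Python `Thread` object, while the per-thread OS stack allocation is controlled separately (a rough illustration):

```python
import sys
import threading

thread = threading.Thread(target=lambda: None)
# getsizeof() measures just the Python object, not the per-thread
# OS stack (often megabytes of virtual memory, platform dependent).
assert sys.getsizeof(thread) < 1024
# threading.stack_size() == 0 means "use the platform default".
assert threading.stack_size() == 0
```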

[issue38692] add a pidfd child process watcher

2019-11-14 Thread Kyle Stanley
Kyle Stanley added the comment: > PidfdChildWatcher should be enumerated in unix_events.py:__all__ to make the > class visible by asyncio import rules. > Kyle, would you make a post-fix PR? I actually just noticed that myself and was coming back to the bpo issue to mention that.

[issue38692] add a pidfd child process watcher

2019-11-14 Thread Kyle Stanley
Change by Kyle Stanley: pull_requests: +16671; stage: resolved -> patch review; pull_request: https://github.com/python/cpython/pull/17161

[issue38692] add a pidfd child process watcher

2019-11-14 Thread Kyle Stanley
Change by Kyle Stanley: stage: patch review -> resolved; status: open -> closed

[issue38591] Deprecate Process Child Watchers

2019-11-15 Thread Kyle Stanley
Kyle Stanley added the comment: > I think so. It will take a long before we remove it though. In that case, it could be a long term deprecation notice, where we start the deprecation process without having a definitive removal version. This will at least encourage users to look towards the alternatives.

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-16 Thread Kyle Stanley
Kyle Stanley added the comment: > (a) design the API correctly; > (b) ship something that definitely works with a proven ThreadPoolExecutor; Yury and Andrew, here's my latest API design for asyncio.ThreadPool: https://github.com/python/cpython/compare/master...aeros:asyncio

[issue32309] Implement asyncio.run_in_executor shortcut

2019-11-16 Thread Kyle Stanley
Kyle Stanley added the comment: So, I just had an interesting idea... what if ThreadPool.run() returned a Task instead of a coroutine object? With the current version of asyncio.ThreadPool, if a user wants to create a Task, they would have to do something like this: async with …
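A sketch of the idea (minimal and with invented names; only the `run()` signature matters here): since `loop.run_in_executor()` already returns a Future, `run()` could hand that back directly instead of forcing callers through `asyncio.create_task()`:

```python
import asyncio

class ThreadPool:
    # Minimal sketch: run() returns an already-scheduled Future,
    # so callers don't need to wrap it in create_task() themselves.
    async def __aenter__(self):
        self._loop = asyncio.get_running_loop()
        return self

    async def __aexit__(self, *exc_info):
        pass

    def run(self, func, /, *args):
        return self._loop.run_in_executor(None, func, *args)

async def main():
    async with ThreadPool() as pool:
        fut = pool.run(pow, 2, 5)   # already scheduled; no create_task()
        return await fut

print(asyncio.run(main()))  # prints 32
```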

[issue38276] test_asyncio: test_cancel_make_subprocess_transport_exec() failed on RHEL7 LTO + PGO 3.x

2019-11-17 Thread Kyle Stanley
Change by Kyle Stanley: nosy: +aeros

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-20 Thread Kyle Stanley
Kyle Stanley added the comment: > I think you can use SO_REUSEPORT instead, and for UDP sockets it's identical > to SO_REUSEADDR except with the same-UID restriction added? > If that's right then it might make sense to unconditionally switch > SO_REUSEADDR -> SO_REUSEPORT

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-20 Thread Kyle Stanley
Kyle Stanley added the comment: > There are some platforms (Linux pre-3.9 kernels) that don't have > SO_REUSEPORT. I wish I could say I don't care about such platforms; alas, I > just had to compile Python 3.7 on a system running a 2.6 kernel last month at > a client site.

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-20 Thread Kyle Stanley
Kyle Stanley added the comment: > I'd like to point out that it is also documented completely wrong up to this > point in time and thus people who chose True are most likely to be unaware of > the actual consequences. A user's explicit choice based on misinformation …

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-20 Thread Kyle Stanley
Kyle Stanley added the comment: > My preference for create_datagram_endpoint() would be: > - make the "reuse_address" parameter a no-op, and raise an error when > "reuse_address=True" is passed > - do that in 3.8 as well This solution would be more elegant …

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-20 Thread Kyle Stanley
Change by Kyle Stanley: keywords: +patch; pull_requests: +16799; stage: -> patch review; pull_request: https://github.com/python/cpython/pull/17311

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-20 Thread Kyle Stanley
Kyle Stanley added the comment: > Yeah, I like this proposal; we can apply this to all Pythons from 3.5 to 3.8. > With a proper documentation update this should be OK. Oh in that case, would you like me to close or modify GH-17311? I didn't think you'd approve of making …

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-20 Thread Kyle Stanley
Change by Kyle Stanley: stage: -> patch review

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-21 Thread Kyle Stanley
Kyle Stanley added the comment: So after trying a few different implementations, I don't think the proposal to simply change `SO_REUSEADDR` -> `SO_REUSEPORT` will work, due to Windows incompatibility (based on the results from Azure Pipelines). `SO_REUSEADDR` is supported on Windows, but `SO_REUSEPORT` is not.

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-21 Thread Kyle Stanley
Kyle Stanley added the comment: > I was assuming we'd only do this on Linux, since that's where the bug is... > though now that you mention it the Windows behavior is probably wonky too. Yeah, but I'm not confident that the bug is exclusive to Linux. From what I …

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-21 Thread Kyle Stanley
Kyle Stanley added the comment: > some platforms apparently do have SO_REUSEPORT defined but the option still > doesn't work, resulting in a ValueError exception from > create_datagram_endpoint(). Are you aware of what currently supported platforms have SO_REUSEPORT defined but not working?
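A runtime probe along these lines (similar in spirit to the private helper asyncio actually uses; `supports_reuseport` is an invented name) covers both the missing-constant case and the defined-but-rejected case:

```python
import socket

def supports_reuseport():
    # SO_REUSEPORT may be absent (Windows, pre-3.9 Linux kernels), or
    # present at compile time but rejected by the running kernel, so
    # the only reliable check is to actually try setting the option.
    if not hasattr(socket, "SO_REUSEPORT"):
        return False
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    except OSError:
        return False
    return True
```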

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-11-22 Thread Kyle Stanley
Kyle Stanley added the comment: > This was reported by a partner that was working porting our code to Android > but might be fixed current Android API levels. I cannot test this myself. Thanks, it's helpful to be aware of potential incompatibilities either way. I don't think …

[issue37224] test__xxsubinterpreters fails randomly

2019-11-22 Thread Kyle Stanley
Kyle Stanley added the comment: > Sorry I haven't gotten back to you sooner, Kyle. Thanks for working on this. > I'm looking at your PR right now. > BTW, Kyle, your problem-solving approach on this is on-track. Don't get > discouraged. This stuff is tricky.

[issue37224] test__xxsubinterpreters fails randomly

2019-11-22 Thread Kyle Stanley
Kyle Stanley added the comment: So, I was finally able to replicate a failure in test_still_running locally, it required using a rather ridiculous number of parallel workers: $ ./python -m test test__xxsubinterpreters -j200 -F ... Exception in thread Thread-7: Traceback (most recent call last): …

[issue37224] test__xxsubinterpreters fails randomly

2019-11-23 Thread Kyle Stanley
Kyle Stanley added the comment: > I was able to consistently reproduce the above failure using 200 parallel > workers, even without `-f`. Oops, I didn't mean without passing `-F`, as this would result in only a single test being ran. I meant without letting it repeat multiple times.

[issue37224] test__xxsubinterpreters fails randomly

2019-11-23 Thread Kyle Stanley
Kyle Stanley added the comment: > So, I was finally able to replicate a failure in test_still_running locally, > it required using a rather ridiculous number of parallel workers I forgot to mention that I was able to replicate the above failure on the latest commit to the 3.8 branch.

[issue37224] test__xxsubinterpreters fails randomly

2019-11-30 Thread Kyle Stanley
Kyle Stanley added the comment: > Regarding "is_running()", notice that it relies almost entirely on > "frame->f_executing". That might not be enough (or maybe the behavior there > changed). That would be worth checking out. Based on the above hint, I was …

[issue37224] test__xxsubinterpreters fails randomly

2019-11-30 Thread Kyle Stanley
Kyle Stanley added the comment: > so that operations (such as running scripts, destroying the interpreter, etc) > can't occur during finalization Clarification: by "destroying the interpreter" I am specifically referring to calling `interp_destroy()` after finalization.

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-07 Thread Kyle Stanley
Kyle Stanley added the comment: > Where are we with this? The deadline for 3.8.1 and 3.7.6 is coming up in a > few days. I believe we're just waiting on review and additional feedback on GH-17311, which implements Antoine's proposal. The only remaining component I can …

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-09 Thread Kyle Stanley
Kyle Stanley added the comment: Thanks for taking care of merging to 3.x (master) and 3.8, Ɓukasz! > Kyle, I'm releasing 3.8.1rc1 now. Please add the What's New entry before next > Monday (3.8.1). No problem, I'll definitely have time to do that before 3.8.1 final, likely …

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-10 Thread Kyle Stanley
Kyle Stanley added the comment: > The backport to 3.7 seems straightforward so I did it to unblock 3.7.6rc1. > The backport to 3.6 is a bit more complicated and 3.6.10rc1 can wait a bit > longer so I'll leave that for Kyle along with the various What's New entries.

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-10 Thread Kyle Stanley
Kyle Stanley added the comment: > since the release for 3.7.1 and 3.7.6 are coming up soon. Clarification: should be "since the release for 3.8.1 and 3.7.6 are coming up soon", that was a typo.

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-10 Thread Kyle Stanley
Kyle Stanley added the comment: Oh okay, I'll work on the 3.6 backport first then.

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-10 Thread Kyle Stanley
Change by Kyle Stanley: pull_requests: +17045; pull_request: https://github.com/python/cpython/pull/17571

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-10 Thread Kyle Stanley
Kyle Stanley added the comment: Now that the backports for 3.6-3.8 are merged, I'll work on the What's New entries next. Waiting on feedback from Larry Hastings regarding the potential 3.5 backport, I'll add him to the nosy list.

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-11 Thread Kyle Stanley
Kyle Stanley added the comment: > The fix seems to generate few DeprecationWarning while running test suite > with -Wall. Thanks Karthikeyan, I'm guessing that I missed an assertWarns() or left an outdated test somewhere that explicitly sets `reuse_address=False` (since `reuse_address` is now deprecated).

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-11 Thread Kyle Stanley
Kyle Stanley added the comment: > or left an outdated test somewhere that explicitly sets `reuse_address=False` Looks like this was the issue, I left a `reuse_address=False` in both test_create_datagram_endpoint_nosoreuseport and test_create_datagram_endpoint_ip_addr. I fixed it locally.

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-11 Thread Kyle Stanley
Kyle Stanley added the comment: > I'll fix them accordingly and open a new PR (as well as the backports). Nevermind. Upon further inspection, the other occurrences of `reuse_address=` were for create_server(), not create_datagram_endpoint(). The PR will only include the removal of …

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-11 Thread Kyle Stanley
Kyle Stanley added the comment: > One more resource warning about unclosed resource being garbage collected. As > per the other tests I think transport and protocol need to be closed as per > below patch but someone can verify if it's the right approach. Ah, good catch. It …

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-11 Thread Kyle Stanley
Change by Kyle Stanley: pull_requests: +17051; pull_request: https://github.com/python/cpython/pull/17577

[issue39027] run_coroutine_threadsafe uses wrong TimeoutError

2019-12-12 Thread Kyle Stanley
Kyle Stanley added the comment: Thanks for letting us know, janust. I confirmed that `asyncio.TimeoutError` no longer works for the code example in 3.8 and that replacing it with `concurrent.futures.TimeoutError` works correctly. Before moving forward with a PR to the docs, I think we …
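A minimal reproduction of the behavior in question. (Note: in 3.8–3.10 `asyncio.TimeoutError` and `concurrent.futures.TimeoutError` were distinct classes; since 3.11 both are aliases of the builtin `TimeoutError`, so catching `concurrent.futures.TimeoutError` works across versions.)

```python
import asyncio
import concurrent.futures
import threading

# Run a loop in a background thread, as run_coroutine_threadsafe() expects.
loop = asyncio.new_event_loop()
thread = threading.Thread(target=loop.run_forever, daemon=True)
thread.start()

future = asyncio.run_coroutine_threadsafe(asyncio.sleep(10), loop)
try:
    future.result(timeout=0.1)   # raises concurrent.futures.TimeoutError
    outcome = "completed"
except concurrent.futures.TimeoutError:
    future.cancel()
    outcome = "timed out"

loop.call_soon_threadsafe(loop.stop)
thread.join()
loop.close()
assert outcome == "timed out"
```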

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-13 Thread Kyle Stanley
Change by Kyle Stanley: pull_requests: +17067; pull_request: https://github.com/python/cpython/pull/17595

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-13 Thread Kyle Stanley
Kyle Stanley added the comment: Opened a PR that adds the whatsnew entries to master, 3.8, 3.7, and 3.6: GH-17595.

[issue37224] test__xxsubinterpreters fails randomly

2019-12-13 Thread Kyle Stanley
Kyle Stanley added the comment: > Yep, it has to use the public C-API just like any other module. The > function has a "_Py" prefix and be defined in Include/cpython, right? Yeah, I named it "_PyInterpreterIsFinalizing" and it's within Include/cpython. …

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-16 Thread Kyle Stanley
Change by Kyle Stanley: pull_requests: +17099; pull_request: https://github.com/python/cpython/pull/17630

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-16 Thread Kyle Stanley
Change by Kyle Stanley: pull_requests: +17100; pull_request: https://github.com/python/cpython/pull/17631

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-16 Thread Kyle Stanley
Change by Kyle Stanley: pull_requests: +17101; pull_request: https://github.com/python/cpython/pull/17632

[issue37228] UDP sockets created by create_datagram_endpoint() allow by default multiple processes to bind the same port

2019-12-16 Thread Kyle Stanley
Kyle Stanley added the comment: Thanks for taking care of merging the remaining backport PRs for 3.6-3.8, Ned. Now, the only branch left is (potentially) 3.5.

[issue39085] Improve docs for await expression

2019-12-17 Thread Kyle Stanley
New submission from Kyle Stanley : For context, I decided to open this issue after receiving a substantial volume of very similar questions and misconceptions from users of asyncio and trio about what `await` does, mostly within a dedicated "async" topical help chat (in the "

[issue39085] Improve docs for await expression

2019-12-18 Thread Kyle Stanley
Kyle Stanley added the comment: > Sorry, my English is bad; I cannot help with docs too much. No problem. Your feedback is still incredibly helpful and very much appreciated either way. (: > Anyway, technically an awaited coroutine *can* be suspended but the > suspension is not …

[issue37224] test__xxsubinterpreters fails randomly

2020-01-09 Thread Kyle Stanley
Kyle Stanley added the comment: > For a struct-specific getter we usually end the prefix with an underscore: _PyInterpreter_IsFinalizing. Otherwise, that's the same name I would have used. :) Good to know, thanks! > No worries (or hurries). Just request a review from me when you're ready.

[issue39207] concurrent.futures.ProcessPoolExecutor does not properly reap jobs and spawns too many workers

2020-01-10 Thread Kyle Stanley
Kyle Stanley added the comment: > What "ignores the max_workers argument" means? From my understanding, their argument was that the parameter name "max_workers" and documentation implies that it will spawn processes as needed up to *max_workers* based …

[issue38356] test_asyncio: SubprocessThreadedWatcherTests leaks threads

2020-01-12 Thread Kyle Stanley
Kyle Stanley added the comment: > I hope it is fixed now. > Thanks, Kyle! No problem, thanks for looking over it. Let me know if the warning comes up again. If it does, I'll be sure to look into it.

[issue39207] concurrent.futures.ProcessPoolExecutor does not properly reap jobs and spawns too many workers

2020-01-14 Thread Kyle Stanley
Kyle Stanley added the comment: > I think this should be fixed like ThreadPoolExecutor. Are there are any downsides or complications with changing this behavior for ProcessPoolExecutor to consider, such as what I mentioned above? From my understanding, there would be a performance penalty …

[issue39207] concurrent.futures.ProcessPoolExecutor does not properly reap jobs and spawns too many workers

2020-01-14 Thread Kyle Stanley
Kyle Stanley added the comment: > It would certainly be better to start the worker processes on demand. It > probably also requires careful thought about how to detect that more workers > are required. Alright. In that case, I'll do some experimentation when I get the chance

[issue37224] test__xxsubinterpreters fails randomly

2020-01-14 Thread Kyle Stanley
Kyle Stanley added the comment: Update: I have a bit of good news and not so great news. The good news is that I had some time to work on this again, specifically with isolating the failure in test__xxsubinterpreters.DestroyTests. Locally, I added a few temporary "@unittest

[issue37224] test__xxsubinterpreters fails randomly

2020-01-14 Thread Kyle Stanley
Kyle Stanley added the comment: I also just realized that I can run "test.test__xxsubinterpreters.DestroyTests" by itself with: ./python -m test test__xxsubinterpreters.DestroyTests -j200 -F -v For some reason, I hadn't thought of running that class of tests by itself

[issue37224] test__xxsubinterpreters fails randomly

2020-01-14 Thread Kyle Stanley
Kyle Stanley added the comment: > I also just realized that I can run > "test.test__xxsubinterpreters.DestroyTests" by itself with: > ./python -m test test__xxsubinterpreters.DestroyTests -j200 -F -v Oops, the correct syntax is: ./python -m test test__xxsubinterpreters -

[issue37224] test__xxsubinterpreters fails randomly

2020-01-14 Thread Kyle Stanley
Kyle Stanley added the comment: I just made a rather interesting discovery. Instead of specifically focusing my efforts on the logic with interp_destroy(), I decided to take a closer look at the failing unit test itself. The main test within DestroyTests that's failing is the foll

[issue39349] Add "cancel" parameter to concurrent.futures.Executor.shutdown()

2020-01-15 Thread Kyle Stanley
New submission from Kyle Stanley : This feature enhancement issue is based on the following python-ideas thread: https://mail.python.org/archives/list/python-id...@python.org/thread/ZSUIFN5GTDO56H6LLOPXOLQK7EQQZJHJ/ In summary, the main suggestion was to implement a new parameter called
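The parameter proposed here ultimately shipped in Python 3.9 as `Executor.shutdown(cancel_futures=...)`; on 3.9+ it can be exercised like this:

```python
from concurrent.futures import ThreadPoolExecutor
import time

executor = ThreadPoolExecutor(max_workers=1)
# Queue up more work than the single worker can start right away.
futures = [executor.submit(time.sleep, 0.2) for _ in range(5)]

# cancel_futures=True (Python 3.9+) cancels every pending future
# that has not started running, instead of draining the queue.
executor.shutdown(wait=True, cancel_futures=True)

cancelled = sum(f.cancelled() for f in futures)
print(cancelled)  # the queued-but-not-started jobs were cancelled
```

How many futures end up cancelled depends on timing (any job already running is allowed to finish), which is exactly the semantics the python-ideas thread converged on.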

[issue39349] Add "cancel" parameter to concurrent.futures.Executor.shutdown()

2020-01-16 Thread Kyle Stanley
Change by Kyle Stanley : -- components: +Library (Lib)

[issue39349] Add "cancel_futures" parameter to concurrent.futures.Executor.shutdown()

2020-01-16 Thread Kyle Stanley
Kyle Stanley added the comment: As of now, I have the implementation for ThreadPoolExecutor working correctly, and a unit test added to verify its behavior. It was a bit more involved than I originally anticipated, as I had to make a minor change in the _worker() function to allow the new

[issue39349] Add "cancel_futures" parameter to concurrent.futures.Executor.shutdown()

2020-01-16 Thread Kyle Stanley
Kyle Stanley added the comment: > It was a bit more involved than I originally anticipated, as I had to make a > minor change in the _worker() function to allow the new parameter to be > compatible with wait (which is important, as it prevents dangling threads). Never mind, I just

[issue39349] Add "cancel_futures" parameter to concurrent.futures.Executor.shutdown()

2020-01-18 Thread Kyle Stanley
Kyle Stanley added the comment: I now have a working implementation, for both ThreadPoolExecutor and ProcessPoolExecutor. I've also ensured that the tests I added are not vulnerable to race conditions with the following:

```
[aeros:~/repos/aeros-cpython]$ ./python -m
```
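A rough sketch of the core idea behind the ThreadPoolExecutor side of this change — drain the pending work queue and cancel anything that has not started — using an illustrative work-item shape rather than the actual concurrent.futures internals:

```python
import queue
from concurrent.futures import Future
from types import SimpleNamespace

def drain_and_cancel(work_queue):
    # Pull every pending work item off the queue; each one holds a
    # future that has not started running, so cancel() succeeds.
    while True:
        try:
            work_item = work_queue.get_nowait()
        except queue.Empty:
            break
        if work_item is not None:  # None is the worker wake-up sentinel
            work_item.future.cancel()

# Illustrative usage with a stand-in work item.
q = queue.SimpleQueue()
pending = Future()
q.put(SimpleNamespace(future=pending))
drain_and_cancel(q)
print(pending.cancelled())  # True
```

Draining before the workers can dequeue more items is what keeps the operation free of the race conditions mentioned above.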
