[issue38576] CVE-2019-18348: CRLF injection via the host part of the url passed to urlopen()

2019-11-20 Thread kim


Change by kim :


--
nosy: +kim




[issue17155] logging can raise UnicodeEncodeError

2013-02-20 Thread Kim

Kim added the comment:

I'm running into similar issues with 2.6.7 and logging 0.4.9.6, where unicode 
strings are fine in print statements and codecs writes, but the same string 
raises tracebacks when logged.  If it's an education issue, I'm not finding the 
education I need ... :-/

import logging
import codecs

# Unicode string
i = u'\u0433\u043e\u0432\u043e\u0440\u0438\u0442\u044a'

# Print statement is fine
print "hi, i'm the string in question in a print statement: %s" % i

# Codecs write is fine
with codecs.open('/tmp/utf8', 'w', 'utf-8') as f:
    f.write(i)

# Logging gives a Traceback
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)
handler = logging.FileHandler('/tmp/out', 'w', 'utf-8')
handler.setFormatter(logging.Formatter(u'[%(levelname)s] %(message)s'))
# I've also tried nixing setFormatter and going with the default
log.addHandler(handler)
log.debug(u"process_clusters: From CSV: %s", i)
# I've also tried a bare call to i, with and without the u in the message,
# and explicitly i.encode('utf8'); all raise Tracebacks.

--
nosy: +kiminoa




[issue17155] logging can raise UnicodeEncodeError

2013-02-20 Thread Kim

Kim added the comment:

p.s. Converting to a StreamHandler fixes my issue for now.

--




[issue46843] PersistentTaskGroup API

2022-02-23 Thread Joongi Kim


New submission from Joongi Kim :

I'm now tracking the recent addition and discussion of TaskGroup and 
cancellation scopes. It's interesting! :)

I would like to suggest a different mode of operation for asyncio.TaskGroup, 
which I have named "PersistentTaskGroup".

AFAIK, TaskGroup aims to replace asyncio.gather, ensuring completion or 
cancellation of all tasks within the context manager scope.

I believe that a "safe" asyncio application should consist of a nested tree of 
task groups, which allows us to state explicitly when tasks of different 
purposes and contexts terminate.  For example, a task group for database 
transactions should be shut down before a task group for HTTP handlers is 
shut down.

To this end, in server applications with many sporadically spawned tasks 
throughout the whole process lifetime, a task group that manages such task 
sets has different requirements.  The tasks should *not* be cancelled upon 
unhandled exceptions of sibling tasks in the task group, while we need an 
explicit "fallback" exception handler for those (just like 
"return_exceptions=True" in asyncio.gather).  The tasks belong to the task 
group, but their references should not be kept forever, to prevent memory 
leaks (I'd suggest using weakref.WeakSet).  When terminating the task group 
itself, the ongoing tasks should be cancelled.  The cancellation process upon 
termination may happen in two phases: a cancel request with an initial 
timeout, plus an additional limited wait for the cancellations to take effect.  
(This is what Guido mentioned in the discussion in bpo-46771.)
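
To illustrate, here is a toy version of these semantics (hypothetical class 
and method names; the two-phase cancellation and many details are simplified):

import asyncio
import weakref

class PersistentTaskGroup:
    def __init__(self, exception_handler=None):
        # WeakSet: the group keeps no strong references to its tasks,
        # so completed tasks can be garbage-collected (no memory leak).
        self._tasks = weakref.WeakSet()
        self._exc_handler = exception_handler or self._default_handler

    def create_task(self, coro):
        task = asyncio.get_running_loop().create_task(coro)
        self._tasks.add(task)
        task.add_done_callback(self._on_done)
        return task

    def _on_done(self, task):
        # Report the error via the fallback handler instead of
        # cancelling sibling tasks (unlike asyncio.TaskGroup).
        if not task.cancelled() and task.exception() is not None:
            self._exc_handler(task.exception())

    @staticmethod
    def _default_handler(exc):
        print("Unhandled exception in task:", exc)

    async def shutdown(self):
        # Only an explicit shutdown cancels the remaining tasks.
        tasks = list(self._tasks)  # snapshot; the WeakSet may mutate
        for task in tasks:
            task.cancel()
        if tasks:
            await asyncio.wait(tasks)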

An initial sketch of PersistentTaskGroup is on aiotools:
https://github.com/achimnol/aiotools/blob/main/src/aiotools/ptaskgroup.py
It currently has no two-phase cancellation because that would require Python 
3.11 with asyncio.Task.uncancel().

As Andrew has left a comment 
(https://github.com/achimnol/aiotools/issues/29#issuecomment-997437030), I 
think it is time to revisit the concrete API design and whether to include 
PersistentTaskGroup in the stdlib or not.

--
components: asyncio
messages: 413880
nosy: achimnol, asvetlov, gvanrossum, yselivanov
priority: normal
severity: normal
status: open
title: PersistentTaskGroup API
type: enhancement
versions: Python 3.11




[issue46844] Context-based TaskGroup for legacy libraries

2022-02-23 Thread Joongi Kim


New submission from Joongi Kim :

Along with bpo-46843 and the new asyncio.TaskGroup API, I would like to suggest 
adding a context-based TaskGroup feature.

Currently asyncio.create_task() just creates a new task directly attached to 
the event loop, while asyncio.TaskGroup.create_task() creates a new task 
managed by the TaskGroup instance.

It would be ideal for all existing asyncio code to migrate to TaskGroup, but 
this is impractical.

An alternative approach is to implicitly bind asyncio.create_task() calls made 
under a specific context to a specific task group, probably using contextvars 
(a sketch follows below).

I believe that this approach would allow more control over tasks implicitly 
spawned by 3rd-party libraries that we cannot modify.

What are your thoughts?
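
A rough sketch of such a contextvars-based binding (all names here are 
hypothetical, not an existing API):

import asyncio
import contextvars
from contextlib import contextmanager

# The task group currently bound to this context, if any.
_current_taskgroup = contextvars.ContextVar("current_taskgroup", default=None)

_orig_create_task = asyncio.create_task

def create_task(coro, **kwargs):
    # Route the task to the bound group when one is active;
    # otherwise fall back to the plain loop-attached task.
    tg = _current_taskgroup.get()
    if tg is not None:
        return tg.create_task(coro, **kwargs)
    return _orig_create_task(coro, **kwargs)

@contextmanager
def taskgroup_binder(tg):
    # Within this block, create_task() attaches tasks to tg, even
    # when called from library code that knows nothing about it.
    token = _current_taskgroup.set(tg)
    try:
        yield tg
    finally:
        _current_taskgroup.reset(token)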

--
components: asyncio
messages: 413881
nosy: achimnol, asvetlov, gvanrossum, yselivanov
priority: normal
severity: normal
status: open
title: Context-based TaskGroup for legacy libraries
type: enhancement
versions: Python 3.11




[issue46844] Context-based TaskGroup for legacy libraries

2022-02-23 Thread Joongi Kim


Joongi Kim  added the comment:

The main benefit is that any legacy code that I cannot modify can be upgraded 
to TaskGroup-based code, which offers better machinery for exception handling 
and propagation.

There may be a different way to approach this issue: allow replacing the task 
factory in asyncio at runtime.  Then I could just implement my own snippet to 
transfer the "ownership" of a task to a specific task group.

--




[issue46844] Context-based TaskGroup for legacy libraries

2022-02-23 Thread Joongi Kim


Joongi Kim  added the comment:

Conceptually it is similar to replacing malloc via LD_PRELOAD or 
LD_LIBRARY_PATH manipulation: when I cannot modify the executable or library 
binaries, this allows replacing the functionality of specific functions.

If we could assign a specific (persistent) task group to all asyncio tasks 
spawned by black-box code (when the black-box code itself does not use task 
groups), we could achieve full application-level transparency over the timing 
of task cancellation.

--




[issue46844] Context-based TaskGroup for legacy libraries

2022-02-23 Thread Joongi Kim


Joongi Kim  added the comment:

It is also useful for writing debugging/monitoring code for asyncio 
applications.  For instance, we could "group" tasks from different libraries 
and count them.

--




[issue46844] Context-based TaskGroup for legacy libraries

2022-02-23 Thread Joongi Kim


Joongi Kim  added the comment:

My proposal is to make the task group binding of asyncio.create_task() opt-in 
under a specific context, not changing the default behavior.

--




[issue46844] Context-based TaskGroup for legacy libraries

2022-02-23 Thread Joongi Kim


Joongi Kim  added the comment:

An example would be like:

tg = asyncio.TaskGroup()
...
async with tg:
    with asyncio.TaskGroupBinder(tg):  # just a hypothetical API
        asyncio.create_task(...)        # equivalent to tg.create_task(...)
        await some_library.some_work()  # all tasks are bound to tg
    asyncio.create_task(...)            # fire-and-forget (not bound to tg)

If TaskGroup supports enumeration/counting of its own tasks and asyncio allows 
enumeration of TaskGroups just like asyncio.Task.all_tasks(), we could extend 
aiomonitor to provide per-taskgroup statistics.

In my projects, there have been multiple cases where we found and fixed bugs 
at customer sites using aiomonitor, and I'm willing to improve aiomonitor to 
support task groups as well.

--




[issue46844] Context-based TaskGroup for legacy libraries

2022-02-23 Thread Joongi Kim


Joongi Kim  added the comment:

Ah, and this use case also requires that TaskGroup have an option like 
`return_exceptions=True`, which keeps it from cancelling sibling tasks upon 
unhandled exceptions, as I suggested in PersistentTaskGroup (bpo-46843).

--




[issue46844] Context-based TaskGroup for legacy libraries

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

Ok, let me be clear: patching asyncio.create_task() to support this opt-in 
contextual task group binding is not the ultimate goal of this issue.  If it 
becomes possible to override/extend the task factory at runtime with any event 
loop implementation, then it's fine to implement this feature request as a 
3rd-party library.  I also don't want to bloat the stdlib with 
version-specific branches if there are alternative ways to achieve the same 
goal.  I just wanted to hear your opinions and potential alternative 
approaches to implementing it.

--




[issue46843] PersistentTaskGroup API

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

So I have more things in mind.

Basically, PersistentTaskGroup resembles TaskGroup in that:
 - It has the same "create_task()" method.
 - It has an explicit "cancel()" or "shutdown()" method.
 - Exiting the context manager scope means that all of its tasks have either 
completed or been cancelled.

TaskGroup is intended to be used for a short-lived set of tasks, while 
PersistentTaskGroup is intended for a long-running set of tasks, though 
individual tasks may be short-lived.  Thus, adding a globally accessible 
monitoring facility to plain TaskGroup would not be that useful.  In contrast, 
it is super-useful to have a monitoring feature in PersistentTaskGroup!

In aiomonitor, we can enumerate the currently running asyncio tasks by reading 
asyncio.Task.all_tasks().  This has saved my life several times when debugging 
real-world server applications.  I think we can go further by having 
asyncio.PersistentTaskGroup.all_task_groups(), which works in the same way.  
If we make different modules and libraries use different persistent task 
groups, then we could keep track of their task statistics separately.
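
For example (asyncio.all_tasks() is the real, existing API; all_task_groups() 
is the proposed, hypothetical one):

import asyncio

async def dump_stats():
    # Today: enumerate every live task in the process, the mechanism
    # that aiomonitor builds on.
    for task in asyncio.all_tasks():
        print(task.get_name(), task.get_coro())
    # Proposed: enumerate persistent task groups the same way, e.g.
    #   for tg in asyncio.PersistentTaskGroup.all_task_groups():
    #       ...collect per-group task statistics...

asyncio.run(dump_stats())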

--




[issue46843] PersistentTaskGroup API

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

I think people may ask "why in stdlib?".

My reasons are:
 - We are adding new asyncio APIs in 3.11 such as TaskGroup, so I think it is a 
good time to add another one, as long as it does not break existing stuff.
 - I believe that long-running task sets are an equally representative use 
case for real-world asyncio applications, particularly servers.  Why not have 
intrinsic support for them?
 - PersistentTaskGroup is going to be universally adopted throughout my 70+K 
LoC asyncio codebase, for instance, in every aiohttp.Application context, 
plugin contexts and modules, etc.

Of course, the name "PersistentTaskGroup" may look quite long, and I'm 
completely open to alternative suggestions.  I also welcome suggestions on 
changes to its functional semantics based on your experience and knowledge.

--




[issue46843] PersistentTaskGroup API

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

Example use cases:

* Implement an event iteration loop to fetch events and dispatch the handlers 
depending on the event type (e.g., WebSocket connections, message queues, etc.)
  - https://github.com/aio-libs/aiohttp/pull/2885
  - https://github.com/lablup/backend.ai-manager/pull/533
  - https://github.com/lablup/backend.ai-agent/pull/341
  - https://github.com/lablup/backend.ai-agent/pull/331
* Separate monitoring of event handler tasks by the event sources.
  - aiomonitor extension to count currently ongoing tasks and extract the most 
frequent task stack frames
* Separate fallback exception handlers for each persistent task group, 
instead of using the single "global" event loop exception handler.

--




[issue46843] PersistentTaskGroup API

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

Some search results from cs.github.com with the input "asyncio task weakset", 
which may be replaced/simplified with PersistentTaskGroup:

- 
https://github.com/Textualize/textual/blob/38efc821737e3158a8c4c7ef8ecfa953dc7c0ba8/src/textual/message_pump.py#L43
- 
https://github.com/aiokitchen/aiomisc/blob/59abd4434e6d134537490db699f89a51df1e6bbc/aiomisc/entrypoint.py#L132
- 
https://github.com/anki/cozmo-python-sdk/blob/dd29edef18748fcd816550469195323842a7872e/src/cozmo/event.py#L102
- 
https://github.com/aio-libs/aiohttp-sse/blob/db7d49bfc8a4907d9a8e7696a85b9772e1c550eb/examples/graceful_shutdown.py#L50
- 
https://github.com/mosquito/aiormq/blob/9c6c0dfc771ea8f6e79b7532177640c2692c640f/aiormq/base.py#L18
- 
https://github.com/mars-project/mars/blob/d1a14cc4a1cb96e40e1d81eef38113b0c9221a84/mars/lib/aio/_runners.py#L57

--




[issue46843] PersistentTaskGroup API

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

@yselivanov @asvetlov
I think this API suggestion would require more refinement and in-depth 
discussion, and it may be better to undergo the PEP writing and review 
process.  Or I might need to have a separate discussion thread somewhere else 
(maybe discuss.python.org?).

Since I'm just a newbie in terms of Python core/stdlib development, could one 
of you guide me toward what you think is the right way?

--




[issue46622] Add an async variant of lru_cache for coroutines.

2022-02-24 Thread Joongi Kim


Change by Joongi Kim :


--
nosy: +achimnol




[issue46843] PersistentTaskGroup API

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

@gvanrossum As you mentioned, the event loop currently plays the role of the 
top-level task group already, even without introducing yet another top-level 
task.  For instance, asyncio.run() includes the necessary shutdown procedures 
to cancel all remaining unfinished tasks and async generators.

However, I think we should provide an abstraction to organize the shutdown 
procedures in a *hierarchical* manner.  For example, we could cancel all event 
handler tasks before cancelling all HTTP handler tasks upon a web server 
shutdown.  This prevents any potential races between these two different task 
sets.  I think you would agree with the necessity of an orderly release of 
underlying resources during shutdown in general.  Currently, 
asyncio.Task.all_tasks() is just a list created from a WeakSet, and we cannot 
guarantee which tasks will be cancelled first.

Yes, this can be done by manually writing code that declares multiple WeakSets 
and a for loop to cancel the contained tasks by enumerating over them, just 
like asyncio.run() does.  With the new additions of TaskGroup and 
ExceptionGroup, this code does not require core changes to Python.

But I believe that this hierarchical persistent task group abstraction should 
be an essential part of the API and of asyncio tutorials for writing server 
applications.  asyncio.run() could be written by users, too, but I think the 
core devs have agreed that it is an essential abstraction to be included in 
the stdlib.  I'd like to argue that hierarchical persistent task groups are 
the same case.

Though I named it "PersistentTaskGroup" because it looks similar to TaskGroup, 
this name may be misleading.  In PersistentTaskGroup, even when all tasks 
finish successfully, it does NOT terminate but keeps waiting for new tasks to 
be spawned.  It terminates only when the outer task is cancelled or its 
shutdown() method is called.  Note that the belonging tasks may be either 
short-running or long-running; this does not matter.  The point is to shut 
down any remaining tasks in an orderly manner.  If you don't like the naming, 
please suggest alternatives.

--




[issue46843] PersistentTaskGroup API

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

This particular experience, 
https://github.com/lablup/backend.ai-agent/pull/331, is what actually 
motivated me to suggest PersistentTaskGroup.

The program subscribes to the event stream of the Docker daemon using aiohttp 
as an asyncio task, and this task should be kept running throughout the whole 
application lifetime.  I first applied aiotools.TaskGroup to ensure shutdown 
of the spawned event handler tasks, but I missed that it cancels all sibling 
tasks if one of the spawned tasks bubbles up an unhandled exception.  This 
caused silent termination of the subscriber task and led to a bug.  We could 
debug this issue by inspecting aiomonitor and checking for the existence of 
this task.  After this issue, I began to think we need a proper abstraction 
for a long-running task group (NOTE: the task group is long-running; the 
lifetime of the internal tasks does not matter).

Another case is https://github.com/lablup/backend.ai/issues/330.

One of our customer sites has suffered from excessive CPU usage by our 
program.  We could identify the issue with aiomonitor, and the root cause was 
the indefinite accumulation of periodically created asyncio tasks that measure 
the disk usage of user directories when there are too many files in them.  
Since the number of tasks had exceeded 10K, it was very difficult to group and 
distinguish individual asyncio tasks in aiomonitor.  I thought that it would 
be nice if we could group such tasks into long-running groups and view their 
task statistics separately.

--




[issue46843] PersistentTaskGroup API

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

I ended up with the following conclusions:
- The new abstraction should not cancel sibling tasks and itself upon 
unhandled exceptions but should loudly report such errors (and the fallback 
error handler should be customizable).
- Nesting task groups gives additional benefits such as the orderly shutdown 
of different task groups: emptying message queues before shutting down network 
connections, etc.

You may take my suggestion as "let's have hierarchical nested virtual event 
loops to group tasks".  PersistentTaskGroup actually shares many 
characteristics with the event loop while not being an event loop itself.

So I came up with a WeakSet plus task decorators to handle exceptions on my 
own, and this is the current rudimentary implementation of PersistentTaskGroup 
in aiotools.

And I discovered from the additional search results that the same pattern 
(managing sporadic tasks using a WeakSet and writing a proper cancellation 
loop over them) appears quite commonly in many different asyncio applications 
and libraries.

So that's why I think this should be an intrinsic/essential abstraction.

--




[issue46843] PersistentTaskGroup API

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

Here is another story.

When handling message queues in distributed applications, I frequently use the 
following pattern for graceful shutdown (a sketch follows the list):
* Use a sentinel object to signal the end of the queue.
* Enqueue the sentinel object when:
  - The server is shutting down (i.e., cancelled explicitly).
  - The connection peer has sent an explicit termination message (e.g., EOF).
* Wait until all messages enqueued before the sentinel object are processed.
  - I'd like to impose a shutdown timeout here using a persistent task group, 
by spawning all handler tasks of this queue into it.
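
A minimal, self-contained sketch of the sentinel pattern (the shutdown timeout 
via a persistent task group is omitted here):

import asyncio

_SENTINEL = object()  # signals the end of the queue

async def producer(q):
    for i in range(3):
        await q.put(i)
    await q.put(_SENTINEL)  # e.g., on shutdown or on peer EOF

async def consumer(q):
    while True:
        msg = await q.get()
        if msg is _SENTINEL:
            break  # everything enqueued before the sentinel is processed
        print("handled", msg)

async def main():
    q = asyncio.Queue()
    await asyncio.gather(producer(q), consumer(q))

asyncio.run(main())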

--




[issue46844] Context-based TaskGroup for legacy libraries

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

I have added more of my stories in bpo-46843.

I think the suggestion of implicit task group binding has little point with 
the current asyncio.TaskGroup, but it would have more meaning with 
PersistentTaskGroup.

So, if we treat PersistentTaskGroup as a "nested, hierarchical virtual event 
loop" used to repeat and group the shutdown procedures for different task sets 
separately, the point may look a little bit clearer.

It is more like assigning a virtual event loop to different modules and 
libraries, while keeping the behavior of asyncio.create_task() the same.  The 
difference is that the caller controls when these virtual loops are terminated 
and in what order.

Does this make better sense?

--




[issue46844] Implicit binding of PersistentTaskGroup (or virtual event loops)

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

Updated the title to reduce confusion.

--
title: Context-based TaskGroup for legacy libraries -> Implicit binding of 
PersistentTaskGroup (or virtual event loops)




[issue46843] PersistentTaskGroup API

2022-02-24 Thread Joongi Kim


Joongi Kim  added the comment:

Another case:

https://github.com/lablup/backend.ai-manager/pull/533
https://github.com/lablup/backend.ai-agent/pull/341

When shutting down the application, I'd like to explicitly cancel the shielded 
tasks, while keeping them shielded before shutdown.

So I inserted `ptaskgroup.create_task()` inside `asyncio.shield()`, so that 
the tasks are not cancelled upon the cancellation of their callers, but they 
do get cancelled when the server shuts down.

This pattern is conveniently implemented with PersistentTaskGroup.
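
A sketch of the call pattern (`ptaskgroup` is a hypothetical 
PersistentTaskGroup instance and `do_work()` a stand-in coroutine):

# The caller may be cancelled without cancelling do_work(), but
# ptaskgroup.shutdown() at server exit still cancels the task.
result = await asyncio.shield(ptaskgroup.create_task(do_work()))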

--




[issue46843] PersistentTaskGroup API

2022-02-25 Thread Joongi Kim


Joongi Kim  added the comment:

Good to hear that TaskGroup already uses a WeakSet.

When all tasks finish, PersistentTaskGroup should not finish but should keep 
waiting for future tasks, unless explicitly cancelled or shut down.  Could 
this also be configured with asyncio.TaskGroup?

I'm also ok with adding a simple option for such behavior to asyncio.TaskGroup 
instead of adding a whole new API/class.
--




[issue46843] PersistentTaskGroup API

2022-02-25 Thread Joongi Kim


Joongi Kim  added the comment:

> As for errors in siblings aborting the TaskGroup, could you apply a wrapper 
> to the scheduled coroutines to swallow and log any errors yourself?

Yes, this could be the simplest way to implement PersistentTaskGroup, if 
TaskGroup supported a "persistent" option to keep it running.

And just a question: I'm curious what happens when the belonging tasks see a 
cancellation raised from their inner tasks.  Sibling tasks should not be 
cancelled, and the outer task group should not be cancelled, unless the task 
group itself has requested cancellation.  Could the new cancellation counter 
help with this?

--




[issue46843] PersistentTaskGroup API

2022-02-25 Thread Joongi Kim


Joongi Kim  added the comment:

> And just a question: I'm just curious about what happens if belonging tasks 
> see the cancellation raised from their inner tasks.  Sibling tasks should not 
> be cancelled, and the outer task group should not be cancelled, unless the 
> task group itself has requested cancellation.  Could the new cancellation 
> counter help this?

To achieve this by distinguishing cancellations from inner and outer tasks, 
TaskGroup._on_task_done() should be modified to skip setting _on_completed_fut 
because the group should keep running.  Swallowing exceptions in child tasks 
can be done without modifying TaskGroup, but this part requires changes to 
TaskGroup.

Another difference is the usage.  Instead of relying on the async context 
manager interface, we would call "TaskGroup.shutdown()" separately, either 
directly in signal handlers or from the cleanup methods of long-lived objects 
that have task groups as attributes.

And I also want to perform two-phase cancellation: instead of cancelling all 
tasks immediately as in the current _abort(), allow a configurable grace 
period so that tasks have a chance to complete, and then cancel with an 
additional timeout on the cancellation itself to prevent hangs.
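
A sketch of this two-phase shutdown as a standalone helper (hypothetical 
function name; the timeout values are arbitrary):

import asyncio

async def shutdown_two_phase(tasks, grace_period=5.0, cancel_timeout=1.0):
    if not tasks:
        return
    # Phase 1: give the tasks a grace period to complete on their own.
    done, pending = await asyncio.wait(tasks, timeout=grace_period)
    # Phase 2: cancel the stragglers, bounding how long we wait for
    # the cancellations themselves to prevent hangs.
    for task in pending:
        task.cancel()
    if pending:
        await asyncio.wait(pending, timeout=cancel_timeout)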

--




[issue46843] PersistentTaskGroup API

2022-02-25 Thread Joongi Kim


Joongi Kim  added the comment:

Short summary:

PersistentTaskGroup shares the following with TaskGroup:
- It uses a WeakSet to keep track of child tasks.
- After exiting the async context manager scope (or the shutdown procedure), 
it ensures that all tasks are complete or cancelled.

PersistentTaskGroup differs in that:
- It keeps running after all tasks successfully finish, unless it is 
explicitly shut down or the parent task is cancelled.
- Calling the shutdown() method separately is one of its main use cases; the 
shutdown procedure may be triggered from different task contexts.
- It provides two-phase cancellation with a configurable grace period.
- It does not propagate unhandled exceptions and cancellations from child 
tasks to the outside of the task group or to sibling tasks, but calls a 
customizable fallback exception handler. -> This could be done without 
modifying TaskGroup.

The API looks similar to TaskGroup with minor modifications.
The semantics of a PersistentTaskGroup more closely resemble a nested event 
loop, in that it has its own set of tasks, it keeps running until closed, and 
it has its own fallback exception handler.

Note that the current aiotools implementation lacks many details, such as 
two-phase cancellation.  I'm going to implement more soon.

--




[issue46843] PersistentTaskGroup API

2022-02-27 Thread Joongi Kim


Joongi Kim  added the comment:

I have updated the PersistentTaskGroup implementation, referring to 
asyncio.TaskGroup, and added more detailed test cases; it works with the 
latest Python 3.11 GitHub checkout.

https://github.com/achimnol/aiotools/pull/36/files

Please have a look at the class docstring.
There are two different usages: as an async context manager vs. as an 
attribute of a long-lived object.

One of the points is to "revive" asyncio.gather() with return_exceptions=True 
but let it handle/report exceptions immediately with a customizable exception 
handler.

Currently, two-phase shutdown is not implemented yet, as I'm still thinking 
about how to adapt the current implementation.

--




[issue46875] Missing name in TaskGroup.__repr__()

2022-02-27 Thread Joongi Kim


New submission from Joongi Kim :

The __repr__() method in asyncio.TaskGroup does not include self._name.
I think this is a simple oversight, because asyncio.Task includes the task 
name in __repr__(). :wink:

https://github.com/python/cpython/blob/345572a1a02/Lib/asyncio/taskgroups.py#L28-L42

I'll make a simple PR to fix it.

--
components: asyncio
messages: 414162
nosy: achimnol, asvetlov, gvanrossum, yselivanov
priority: normal
severity: normal
status: open
title: Missing name in TaskGroup.__repr__()
versions: Python 3.11




[issue46875] Missing name in TaskGroup.__repr__()

2022-02-27 Thread Joongi Kim


Joongi Kim  added the comment:

Ah, I confused the aiotools.TaskGroup code (originating from EdgeDB's 
TaskGroup) with the stdlib code while browsing both the aiotools and stdlib 
asyncio.TaskGroup sources.

The naming facility seems to have been intentionally removed when porting to 
the stdlib.  So I am closing this, and sorry for the noise.

Though, is there any particular reason to remove it?
My guess is that you think TaskGroup is more like a control-flow structure 
which does not need to be named, just like we don't name a "for" loop, for 
instance.

--
stage:  -> resolved
status: open -> closed




[issue46843] PersistentTaskGroup API

2022-03-06 Thread Joongi Kim


Joongi Kim  added the comment:

I have released a new version of aiotools with rewritten TaskGroup and 
PersistentTaskGroup.

https://aiotools.readthedocs.io/en/latest/aiotools.taskgroup.html

aiotools.TaskGroup has small additions to asyncio.TaskGroup: a naming API and 
a `current_taskgroup` context variable.

aiotools.PersistentTaskGroup is what I've described here, supporting both the 
async-with usage and the long-lived object usage, plus an `all_ptaskgroups()` 
classmethod for monitoring purposes; the two-phase graceful shutdown is not 
included yet (a future TODO).

--




[issue1241] subprocess.py stdout of childprocess always buffered.

2007-10-05 Thread Jason Kim

New submission from Jason Kim:

Hi. 

I am currently using subprocess.py (2.4.4 and above) to try to have a 
portable way of running a subtask on Linux and Windows.

I ran into a strange problem: a program runs and is "timed out", but the 
subprocess's stdout and stderr are not fully "grabbed".  So I've been trying 
various ways to force a "flush" of the stdout and stderr of the child process 
so that at least partial results can be saved.

The "problem" app being spawned off is very simple:
--
#include <stdio.h>
#include <unistd.h>  /* for sleep(); headers restored, they were eaten by the
                        mail archive's HTML stripping */

int main() {
  int i = 0;
  for (i = 0; i < 1000; ++i) {
    printf("STDOUT boo %d\n", i);
    fprintf(stdout, "STDOUT sleeping %d\n", i);
    fprintf(stderr, "STDERR sleeping %d\n", i);
    //fflush(stdout);
    //fflush(stderr);
    sleep(1);
  }
}

---

i.e., it just dumps its output to both stdout and stderr.  The issue that
I am seeing is that no matter what options I tried to pass to
subprocess(), the ONLY output I see from the executed process is the
"STDERR sleeping" lines, UNLESS I uncomment the fflush(stdout) line in
the application.

Executing the script with python -u doesn't seem to help either.
Now, if the task completes normally, then I am able to grab the entire
stdout and stderr produced by the subprocess.  The issue is that I can't
seem to grab the partial output for stdout, and there does not seem to
be a way to make the file descriptors returned by pipe() unbuffered.

So the question is: what is the preferred method of forcing the pipe()
file descriptors created by subprocess.__init__() to be fully unbuffered?

Second, is there a better way of doing this?
i.e. a portable way to spawn off a task, with an optional timeout, grab
any partial results from the task's stdout and stderr, and grab the
return code from the child task?

Any hints and advice will be greatly appreciated.

Thank you.

The relevant snippet of python code is:

import threading
from signal import *
from subprocess import *
import time
import string
import copy
import re
import sys
import os
from glob import glob
from os import path
import thread

class task_wrapper():
    def run(s):
        if s.timeout > 0:
            #print "starting timer for ", s.timeout
            s.task_timer = threading.Timer(s.timeout, task_wrapper.cleanup, [s])
            s.task_timer.start()
        s.task_start_time = time.time()
        s.task_end_time = s.task_start_time
        s.subtask = Popen(s.cmd, bufsize=0, env=s.env, stdout=PIPE, stderr=PIPE)
        s.task_out, s.task_err = s.subtask.communicate()

    def kill(s, subtask):
        """ attempts a portable way to kill things
        First, flush the buffer
        """
        print "killing", subtask.pid
        sys.stdout.flush()
        #s.subtask.stdin.flush()
        print "s.subtask.stdout.fileno()=", s.subtask.stdout.fileno()
        print "s.subtask.stderr.fileno()=", s.subtask.stderr.fileno()
        #os.fsync(s.subtask.stderr.fileno())
        #os.fsync(s.subtask.stdout.fileno())
        s.subtask.stdout.flush()
        s.subtask.stderr.flush()

        if os.name == "posix":
            os.kill(subtask.pid, SIGKILL)
        elif os.name == "nt":
            import win32api
            win32api.TerminateProcess(subtask._handle, 9)

    def cleanup(s, mode="TIMEOUT"):
        s.timer_lock.acquire()
        if s.task_result == None:
            if mode == "TIMEOUT":
                s.msg(""" Uhoh, subtask took too long""")
                s.kill(s.subtask)

--
messages: 56247
nosy: jason.w.kim
severity: normal
status: open
title: subprocess.py stdout of childprocess always buffered.
type: behavior
versions: Python 2.4, Python 2.5




[issue12806] argparse: Hybrid help text formatter

2011-09-24 Thread Graylin Kim

Graylin Kim  added the comment:

I fully support taking the blank-line-based line-wrapping approach and agree 
with Zbyszek's suggested indentation approach as well.  I am not sure why they 
didn't occur to me at the time, but they are certainly more effective and 
widely adopted approaches to the structured text problem.

I suppose here is where I should volunteer to update the patch file...


Re: Bike-shedding

>dash '-' has special meaning in brackets:

Good catch, I had intended '-' to be a valid list item character.  It clearly 
needs to be escaped.  Not that it would matter given your proposed alternative.

>>  if(list_match):
>Parenthesis unnecessary.

In my defense, I have the sadistic pleasure of coding in PHP, where they are 
necessary, for 8 hours a day at my day job.  I can only apologize profusely 
for my offense and beg for forgiveness :)

>> lines = list()
>Why not just 'lines = []'?

Not to get off topic, but I happen to like list() and dict() instead of [] and 
{} for empty collections. If there are non-religious reasons for avoiding this 
practice I'll consider it. I don't want to invoke a holy war here, just 
wondering if there are practical reasons.

>On a side note: due to #13041 the terminal width is normally stuck
>at 80 chars.

I don't think that's a good reason to remove the flexibility from the 
implementation.

--




[issue13104] urllib.request.thishost() returns a garbage value

2011-10-04 Thread Deokhwan Kim

New submission from Deokhwan Kim :

There is a minor typo in Lib/urllib/request.py:thishost(). Because of it, the 
thishost() function is returning a garbage value:

  >>> import urllib.request
  >>> urllib.request.thishost()
  ('XXX.X.XXX.com', ['X.X.XXX.com'], ['123.45.67.89'])

It is expected to return the IP addresses of the current host, so the correct 
return value would be like:

  >>> urllib.request.thishost.__doc__
  'Return the IP addresses of the current host.'
  >>> urllib.request.thishost()
  ('127.0.0.1', '127.0.1.1')

The attached patch will fix the mistake.
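
For reference, assuming the typo is that thishost() caches the whole 
gethostbyname_ex() triple instead of just its address list, the fix would look 
something like this (a sketch, not the attached patch itself):

def thishost():
    """Return the IP addresses of the current host."""
    global _thishost
    if _thishost is None:
        # Keep only index [2], the list of IP addresses, instead of the
        # whole (hostname, aliaslist, ipaddrlist) triple.
        _thishost = tuple(socket.gethostbyname_ex(socket.gethostname())[2])
    return _thishost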

--
components: Library (Lib)
files: thishost.patch
keywords: patch
messages: 144929
nosy: dkim
priority: normal
severity: normal
status: open
title: urllib.request.thishost() returns a garbage value
type: behavior
versions: Python 3.2
Added file: http://bugs.python.org/file23316/thishost.patch




[issue5715] listen socket close in SocketServer.ForkingMixIn.process_request()

2011-05-24 Thread Donghyun Kim

Donghyun Kim  added the comment:

On May 24, 2011, at 12:44 PM, Charles-François Natali wrote:

> I don't know how I could miss this: closing the server socket is perfectly 
> fine in TCP, since a new one is returned by accept(). But in UDP, it's 
> definitely wrong, since it's used by the handler.
> I don't know however how I missed this, since I remember having run 
> test_socketserver...

It's been a long time since the issue was submitted; anyway, I was cursed to 
look at only TCP too :-)

I agree that ForkingUDPServer should be supported in SocketServer.py.
(Although users should take care of socket locking for concurrent accesses.)

How about using BaseServer(TCPServer).server_close() instead of 
self.socket.close() in the patch?

As UDPServer does not override the server_close() method, unlike 
ForkingTCPServer, ForkingUDPServer seems to have no actual "server" in its 
design.  So, I think we could say that:
- closing the TCP listen socket in the child process = "server_close()" in 
the child process
- doing nothing to the UDP socket in the child process = "server_close(), but 
nothing will be done in the method" (b/c BaseServer.server_close() does 
nothing)

What do you think?

-
Donghyun Kim
http://www.uryan.net

--
Added file: http://bugs.python.org/file22098/unnamed




[issue12806] argparse: Hybrid help text formatter

2011-08-21 Thread Graylin Kim

New submission from Graylin Kim :

When using argparse I frequently run into situations where my help text is a 
mix of prose and bullets or options.  I need the RawTextHelpFormatter for the 
bullets, and I need the default formatter for the prose (so the lines wrap 
intelligently).

The current HelpFormatter classes are marked as public by name only, so 
sub-classing them with overrides to get the desired functionality isn't great 
unless it gets pushed upstream.  To that end, I've attached a subclass 
implementation that I've been using for the following effect:

Example:
>>> parser = argparse.ArgumentParser(formatter_class=FlexiFormatter)
>>> parser.add_argument('--example', help='''\
... This argument's help text will have this first long line\
... wrapped to fit the target window size so that your text\
... remains flexible.
...
... 1. This option list
... 2. is still persisted
... 3. and the option strings get wrapped like this with an\
...indent for readability.
...
... You must use backslashes at the end of lines to indicate that\
... you want the text to wrap instead of preserving the newline.
...
... As with docstrings, the leading space to the text block is\
... ignored.
... ''')
>>> parser.parse_args(['-h'])

usage: argparse_formatter.py [-h] [--example EXAMPLE]

optional arguments:
  -h, --help show this help message and exit
  --example EXAMPLE  This argument's help text will have this first
 long line wrapped to fit the target window size
 so that your text remains flexible.

 1. This option list
 2. is still persisted
 3. and the option strings get wrapped like
this with an indent for readability.

 You must use backslashes at the end of lines to
 indicate that you want the text to wrap instead
 of preserving the newline.

 As with docstrings, the leading space to the
 text block is ignored.


 1. This option list
 2. is still persisted
 3. and the option strings get wrapped like
this with an indent for readability.

 You must use backslashes at the end of lines to
 indicate that you want the text to wrap instead
 of preserving the newline.

 As with docstrings, the leading space to the
 text block is ignored.


If there is interest in this sort of thing I'd be happy to fix it up for 
inclusion.

--
components: Library (Lib)
files: argparse_formatter.py
messages: 142651
nosy: GraylinKim
priority: normal
severity: normal
status: open
title: argparse: Hybrid help text formatter
versions: Python 2.7
Added file: http://bugs.python.org/file22977/argparse_formatter.py




[issue12806] argparse: Hybrid help text formatter

2011-08-21 Thread Graylin Kim

Graylin Kim  added the comment:

I just noticed that the example output above repeats with a different indent. 
The attached formatter isn't broken, I just messed up the editing on my post. 
The repeated text isn't part of the output (and shouldn't be there).

While I'm certainly at fault here, a feature to preview your post before final 
submission would likely help people like me to catch these sorts of errors 
before spamming the world with them. :)

Apologies for the double post.

--




[issue2126] BaseHTTPServer.py

2008-02-16 Thread June Kim

Changes by June Kim:


--
components: Library (Lib)
nosy: juneaftn, rhettinger
severity: normal
status: open
title: BaseHTTPServer.py
type: behavior
versions: Python 2.5




[issue2126] BaseHTTPServer.py and long POST from IE

2008-02-16 Thread June Kim

New submission from June Kim:

http://bugs.python.org/issue430160
http://bugs.python.org/issue427345

These two issues refer to the same bug, which occurs when there is a
POST from Internet Explorer and the POST's content is long enough.
The issue was resolved by changing CGIHTTPServer.py to consume the
remaining garbage.  However, the bug potentially remains in
BaseHTTPServer.py (and hence its descendants like SimpleHTTPServer, and
3rd-party libraries like MoinMoin's stand-alone server).

People need to know about the IE POST bug and add code to handle it
every time they use BaseHTTPServer.

A simple way to solve this is to insert the garbage-consuming code in
the "finish" method (this requires `import select`):

while select.select([self.rfile], [], [], 0)[0]:
    if not self.rfile.read(1): break

--
title: BaseHTTPServer.py -> BaseHTTPServer.py and long POST from IE




[issue2126] BaseHTTPServer.py fails long POST from IE

2008-02-16 Thread June Kim

Changes by June Kim:


--
title: BaseHTTPServer.py and long POST from IE -> BaseHTTPServer.py fails long 
POST from IE




[issue2129] Link error of gethostbyaddr and gethostname in Python Manuals (the chm file)

2008-02-16 Thread June Kim

New submission from June Kim:

Finding the gethostname and gethostbyaddr entries on the index tab and 
clicking them in Python25.chm shows the wrong section, 14.1.1 Process 
Parameters, instead of the proper section, 17.2 socket.

--
components: Documentation
messages: 62459
nosy: juneaftn
severity: minor
status: open
title: Link error of gethostbyaddr and gethostname in Python Manuals (the chm 
file)
type: behavior
versions: Python 2.5




[issue2678] hmac performance optimization

2008-04-24 Thread Nikolay Kim

New submission from Nikolay Kim <[EMAIL PROTECTED]>:

I removed the lambda in the _strxor function.
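
For context, _strxor() in hmac.py XORs two equal-length byte strings; removing 
the per-call lambda means something along these lines (a sketch, not the 
exact attached diff):

# Before (Python 2.5's hmac.py, roughly):
#   return "".join(map(lambda x, y: chr(ord(x) ^ ord(y)), s1, s2))
def _strxor(s1, s2):
    """XOR each character pair of s1 and s2 without constructing a
    lambda on every call."""
    return "".join(chr(ord(x) ^ ord(y)) for x, y in zip(s1, s2))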

--
components: Library (Lib)
files: hmac.py.diff
keywords: patch
messages: 65720
nosy: fafhrd
severity: normal
status: open
title: hmac performance optimization
type: performance
versions: Python 2.5
Added file: http://bugs.python.org/file10083/hmac.py.diff




[issue36077] Inheritance dataclasses fields and default init statement

2019-08-13 Thread Kim Gustyr


Change by Kim Gustyr :


--
nosy: +kgustyr




[issue37862] Search doesn't find built-in functions

2019-08-14 Thread Kim Oldfield


New submission from Kim Oldfield :

The Python 3 documentation search
https://docs.python.org/3/search.html
doesn't always find built-in functions.

For example, searching for "zip" takes me to
https://docs.python.org/3/search.html?q=zip

I would expect the first match to be a link to
https://docs.python.org/3/library/functions.html#zip
but I can't see a link to this page anywhere in the 146 results.

--
assignee: docs@python
components: Documentation
messages: 349781
nosy: docs@python, kim.oldfield
priority: normal
severity: normal
status: open
title: Search doesn't find built-in functions
type: behavior
versions: Python 2.7, Python 3.5, Python 3.6, Python 3.7, Python 3.8, Python 3.9




[issue37862] Search doesn't find built-in functions

2019-08-21 Thread Kim Oldfield


Kim Oldfield  added the comment:

Usually the search page is the quickest way to find documentation about a 
module or function - quicker than navigating through a couple of levels of 
pages (documentation home, index, index by letter, scroll or search in page to 
find the desired name, click on the name).

Searching for built-in functions is inconsistent.  Some functions (e.g. 
getattr) are found as expected in a search, while other functions (e.g. zip 
and many others) aren't found in the search results.  This could easily lead 
someone to incorrectly conclude that the function they are searching for 
doesn't exist in Python.

I find the response of "The search page is the last thing one should use" 
strange.  Surely, as the option to search is there and it mostly works, we 
should be making incremental improvements as necessary to make it better, so 
that everyone can easily find the right parts of the Python documentation.

--




[issue38511] Multiprocessing does not work properly when using the trace module.

2019-10-17 Thread minjeong kim


New submission from minjeong kim <98...@naver.com>:

Normal result:
$ python test.py
{'p1': 1, 'p2': 1, 'p3': 1, 'p4': 1}

Run with tracing:
$ python -mtrace --trackcalls test.py
{}

It seems that the foo and save functions that multiprocessing should call are 
not being called.

--
components: Library (Lib)
files: test.py
messages: 354864
nosy: rls1004
priority: normal
severity: normal
status: open
title: Multiprocessing does not work properly when using the trace module.
type: behavior
versions: Python 2.7, Python 3.5, Python 3.6
Added file: https://bugs.python.org/file48667/test.py




[issue44286] venv activate script would be good to show failure.

2021-06-02 Thread john kim


New submission from john kim :

I moved a project that uses venv to a different path, and then it didn't work 
properly.

I thought it had worked because there was a (venv) marker on the terminal 
prompt.

However, the __VENV_DIR__ in

set "VIRTUAL_ENV=__VENV_DIR__"

in the activate script did not work properly, because it was still the path 
from before the move.

How about adding a procedure to the activate script that verifies 
__VENV_DIR__ is a valid path?

--
components: Library (Lib)
messages: 394909
nosy: idle947
priority: normal
severity: normal
status: open
title: venv activate script would be good to show failure.
type: enhancement
versions: Python 3.7




[issue44286] venv activate script would be good to show failure.

2021-06-03 Thread john kim


john kim  added the comment:

Thank you for your explanation of venv.

I understand that venv is not portable.

But I was confused because the (venv) marker was shown on the left side of the 
terminal prompt and it looked like it was working.

Therefore, I thought that if venv does not work, such as when __VENV_DIR__ is 
an invalid path, a warning message or some exception handling would be 
necessary, or the (venv) marker should not show up.

--




[issue44286] venv activate script would be good to show failure.

2021-06-04 Thread john kim


john kim  added the comment:

Okay. Thank you for the detailed explanation.

--
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed




[issue44343] Adding the "with" statement support to ContextVar

2021-06-07 Thread Joongi Kim


New submission from Joongi Kim :

This is just an idea: ContextVar.set() and ContextVar.reset() look naturally 
mappable to the "with" statement.

For example:

a = ContextVar('a')
token = a.set(1234)
...
a.reset(token)

could be naturally rewritten as:

a = ContextVar('a')
with a.set(1234):
    ...

Is there any particular reason *not* to do this?
If not, I'd like to make a PR to add this API.
Naming suggestions for this API are welcome, but it also seems possible to 
keep it "set()" if we retain a reference to the ContextVar instance in the 
Token instance.
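
For illustration, this can already be emulated with a small wrapper, since a 
Token exposes its ContextVar via the (real) Token.var attribute; `setting` is 
a hypothetical helper name, not an existing API:

import contextvars

class _TokenContext:
    def __init__(self, token):
        self._token = token
    def __enter__(self):
        return self._token
    def __exit__(self, *exc_info):
        # Restore the previous value on scope exit.
        self._token.var.reset(self._token)
        return False

def setting(var, value):
    return _TokenContext(var.set(value))

# Usage:
a = contextvars.ContextVar('a')
with setting(a, 1234):
    assert a.get() == 1234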

--
components: Library (Lib)
messages: 395302
nosy: achimnol
priority: normal
severity: normal
status: open
title: Adding the "with" statement support to ContextVar
type: enhancement
versions: Python 3.11




[issue44343] Adding the "with" statement support to ContextVar

2021-06-09 Thread Joongi Kim


Joongi Kim  added the comment:

After checking out PEP-567 (https://www.python.org/dev/peps/pep-0567/),
I'm adding njs to the nosy list.

--
nosy: +njs




[issue44738] io_uring as a new backend to selectors and asyncio

2021-07-26 Thread Joongi Kim


New submission from Joongi Kim :

This is a rough early idea suggestion on adding io_uring as an alternative I/O 
multiplexing mechanism in Python (maybe selectors and asyncio). io_uring is a 
relatively new I/O mechanism introduced in Linux kernel 5.1 or later.

https://lwn.net/Articles/776703/
https://lwn.net/Articles/810414/
https://blogs.oracle.com/linux/post/an-introduction-to-the-io_uring-asynchronous-io-framework

The advantages of io_uring over epoll:
 - completion-based
 - fewer syscalls
 - higher performance (https://twitter.com/hielkedv/status/1218891982636027905)
 - file I/O support including read/write/stat/open/close

I'm not yet sure whether io_uring would bring actual performance improvements 
to Python (and asyncio).
We need some exploration and prototyping, but technically it would still be a 
nice-to-have feature, considering Python's recent speed-up optimizations.
io_uring is also intended to support high-speed storage devices such as NVMe, 
so it may be a good addition to asyncio in terms of improved async file I/O 
support.

Here are existing attempts to incorporate uring in other languages:
 - liburing (C, https://github.com/axboe/liburing)
 - iou, tokio-uring (Rust, https://tokio.rs/blog/2021-07-tokio-uring)

I don't have any estimation on the efforts and time required to do the work,
but just want to spark the discussion. :)
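
For context, here is a minimal sketch of the readiness-based model that the 
selectors module exposes today; an io_uring backend would instead submit 
operations and receive their completions:

import selectors
import socket

sel = selectors.DefaultSelector()  # epoll(7) on current Linux
srv = socket.socket()
srv.bind(('127.0.0.1', 0))
srv.listen()
srv.setblocking(False)
sel.register(srv, selectors.EVENT_READ)

# Readiness-based: the kernel tells us the socket *can* be read,
# and we still have to issue the accept() syscall ourselves.
for key, _events in sel.select(timeout=1.0):
    conn, addr = key.fileobj.accept()
    conn.close()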

--
components: asyncio
messages: 398215
nosy: achimnol, asvetlov, corona10, njs, yselivanov
priority: normal
severity: normal
status: open
title: io_uring as a new backend to selectors and asyncio
type: enhancement
versions: Python 3.11

___
Python tracker 
<https://bugs.python.org/issue44738>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue44738] io_uring as a new backend to selectors and asyncio

2021-07-26 Thread Joongi Kim


Joongi Kim  added the comment:

Ah, yes, but one year has passed, so it may be a good chance to discuss its 
adoption again, now that new advances like tokio-uring have become available.

--

___
Python tracker 
<https://bugs.python.org/issue44738>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue44738] io_uring as a new backend to selectors and asyncio

2021-07-26 Thread Joongi Kim


Joongi Kim  added the comment:

As in the previous discussion, instead of tackling the stdlib right away, it 
would be nice to evaluate the approach using third-party libraries such as trio 
and/or async-tokio, or maybe a new library.

I have a strong feeling that we need to improve async file I/O.
AFAIK, aiofiles is the only choice we have, and it uses a thread pool, which 
involves many more context switches than necessary.
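
To illustrate, here is a minimal sketch of the thread-pool offloading that 
aiofiles-style libraries perform today; a completion-based backend could submit 
the read to the kernel directly instead:

import asyncio

async def read_bytes(path):
    loop = asyncio.get_running_loop()

    def _read():
        with open(path, 'rb') as f:
            return f.read()

    # Hop to a worker thread and back: two context switches per call.
    return await loop.run_in_executor(None, _read)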

--

___
Python tracker 
<https://bugs.python.org/issue44738>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue45416] "loop argument must agree with lock" instantiating asyncio.Condition

2021-10-10 Thread Joongi Kim


Change by Joongi Kim :


--
keywords: +patch
nosy: +Joongi Kim
nosy_count: 6.0 -> 7.0
pull_requests: +27160
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/28850

___
Python tracker 
<https://bugs.python.org/issue45416>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue38599] Deprecate creation of asyncio object when the loop is not running

2020-01-06 Thread Joongi Kim


Joongi Kim  added the comment:

It is also generating a deprecation warning:

> /opt/python/3.8.0/lib/python3.8/asyncio/queues.py:48: DeprecationWarning: The 
> loop argument is deprecated since Python 3.8, and scheduled for removal in 
> Python 3.10.
>   self._finished = locks.Event(loop=loop)

--
nosy: +achimnol

___
Python tracker 
<https://bugs.python.org/issue38599>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30064] BaseSelectorEventLoop.sock_{recv, sendall}() don't remove their callbacks when canceled

2020-05-11 Thread Joongi Kim


Joongi Kim  added the comment:

I just encountered this issue when doing "sys.exit(1)" in a Click-based CLI 
program that internally uses an asyncio event loop wrapped in a context 
manager, on Python 3.8.2.

Using uvloop or adding "time.sleep(0.1)" before "sys.exit(1)" removes the error.

--
nosy: +achimnol

___
Python tracker 
<https://bugs.python.org/issue30064>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue30064] BaseSelectorEventLoop.sock_{recv, sendall}() don't remove their callbacks when canceled

2020-05-11 Thread Joongi Kim


Joongi Kim  added the comment:

And I suspect that this issue is something similar to what I did in a recent 
janus PR:
https://github.com/aio-libs/janus/blob/ec8592b91254971473b508313fb91b01623f13d7/janus/__init__.py#L84
which gives specific callbacks a chance to execute via an extra context switch.
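
The usual way to grant that extra context switch in asyncio is a zero-delay 
sleep; a minimal illustration (the function name is hypothetical):

import asyncio

async def close_with_grace(transport):
    transport.close()
    # Yield to the event loop once so the pending removal
    # callbacks get a chance to run before anything exits.
    await asyncio.sleep(0)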

--

___
Python tracker 
<https://bugs.python.org/issue30064>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43159] pathlib with_suffix() should accept suffix not start with dot

2021-02-07 Thread JiKwon Kim


New submission from JiKwon Kim :

Currently, the pathlib with_suffix() function only accepts a suffix that starts 
with a dot (".").

Consider this code:

some_pathlib_path.with_suffix("jpg")

This should change the suffix to ".jpg" instead of raising ValueError.
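
A quick demonstration of the current behavior (results shown as comments):

from pathlib import Path

p = Path('image.png')
p.with_suffix('.jpg')  # Path('image.jpg')
p.with_suffix('jpg')   # raises ValueError: Invalid suffix 'jpg'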

--
components: Library (Lib)
messages: 386612
nosy: elbarkwon
priority: normal
severity: normal
status: open
title: pathlib with_suffix() should accept suffix not start with dot
versions: Python 3.10, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43159>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue43194] Add JFXX as jpeg marker in imghdr module

2021-02-10 Thread JiKwon Kim


New submission from JiKwon Kim :

Currently, the imghdr module only looks for "JFIF" or "Exif" at a specific 
position. However, some JPEG images carry a "JFXX" marker instead. I had an 
image with this marker, and imghdr.what() returned None.

Refer to:
https://www.ecma-international.org/wp-content/uploads/ECMA_TR-98_1st_edition_june_2009.pdf
(Section 10.1 JFIF Extension APP0 Marker Segment)
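
The relevant check in Lib/imghdr.py looks roughly like this; the proposal is to 
accept the JFXX marker at the same offset:

def test_jpeg(h, f):
    """JPEG data in JFIF or Exif format."""
    if h[6:10] in (b'JFIF', b'Exif'):  # proposal: also accept b'JFXX'
        return 'jpeg'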

--
components: Library (Lib)
messages: 386782
nosy: elbarkwon
priority: normal
pull_requests: 23291
severity: normal
status: open
title: Add JFXX as jpeg marker in imghdr module
type: enhancement
versions: Python 3.10, Python 3.9

___
Python tracker 
<https://bugs.python.org/issue43194>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41227] minor typo in asyncio transport protocol

2020-07-07 Thread Wansoo Kim


Change by Wansoo Kim :


--
keywords: +patch
nosy: +ys19991
nosy_count: 4.0 -> 5.0
pull_requests: +20529
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21384

___
Python tracker 
<https://bugs.python.org/issue41227>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41241] Unnecessary Type casting in 'if condition'

2020-07-08 Thread Wansoo Kim


New submission from Wansoo Kim :

Hello!

When using an 'if' statement, casting the condition to bool is unnecessary; it 
only adds overhead.

https://github.com/python/cpython/blob/b26a0db8ea2de3a8a8e4b40e69fc8642c7d7cb68/Lib/asyncio/futures.py#L118

If you look at the link above, `val` is cast to bool. The code works just as 
well without the cast.
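
A minimal illustration of the two spellings; the branch taken is identical:

```
val = []
if bool(val):  # extra name lookup and call on every evaluation
    print('truthy')
if val:        # equivalent: the if statement already applies the truth test
    print('truthy')
```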

This is my first issue, so if there is any problem, please tell me!

Thank you!

--
components: asyncio
messages: 373309
nosy: asvetlov, ys19991, yselivanov
priority: normal
severity: normal
status: open
title: Unnecessary Type casting in 'if condition'
type: enhancement
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue41241>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41241] Unnecessary Type casting in 'if condition'

2020-07-08 Thread Wansoo Kim


Change by Wansoo Kim :


--
keywords: +patch
pull_requests: +20544
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21396

___
Python tracker 
<https://bugs.python.org/issue41241>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41242] When concating strings, I think it is better to use += than join the list

2020-07-08 Thread Wansoo Kim


New submission from Wansoo Kim :

Hello,

I think it's better to use += than joining a list when concatenating strings.

It is more intuitive than the other method.

Also, I personally think it is not good for one variable to change to another 
type during runtime.

https://github.com/python/cpython/blob/b26a0db8ea2de3a8a8e4b40e69fc8642c7d7cb68/Lib/asyncio/base_events.py#L826

If you look at the link above, `msg` starts out as a list and ends up as a str.
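
An illustration of the two patterns being compared (the strings are made up, 
not the stdlib code):

```
# Pattern used in base_events.py: build a list, then join once.
parts = ['timeout while closing']
parts.append('handle cancelled')
msg = ', '.join(parts)  # `parts` was a list; `msg` is now a str

# Suggested alternative: keep a single str variable throughout.
msg = 'timeout while closing'
msg += ', handle cancelled'
```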

--
components: asyncio
messages: 373310
nosy: asvetlov, ys19991, yselivanov
priority: normal
severity: normal
status: open
title: When concating strings, I think it is better to use += than join the list
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue41242>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41242] When concating strings, I think it is better to use += than join the list

2020-07-08 Thread Wansoo Kim


Change by Wansoo Kim :


--
keywords: +patch
pull_requests: +20545
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21397

___
Python tracker 
<https://bugs.python.org/issue41242>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41244] Change to use str.join() instead of += when concatenating string

2020-07-08 Thread Wansoo Kim


New submission from Wansoo Kim :

https://bugs.python.org/issue41242

According to BPO-41242, it is better to use join than += when concatenating 
multiple strings.

https://github.com/python/cpython/blob/b26a0db8ea2de3a8a8e4b40e69fc8642c7d7cb68/Lib/asyncio/queues.py#L82

However, the link above uses += in the same pattern. I think we'd better change 
this to `str.join()`.

--
components: asyncio
messages: 373317
nosy: asvetlov, ys19991, yselivanov
priority: normal
severity: normal
status: open
title: Change to use str.join() instead of += when concatenating string
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue41244>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41244] Change to use str.join() instead of += when concatenating string

2020-07-08 Thread Wansoo Kim


Change by Wansoo Kim :


--
keywords: +patch
pull_requests: +20546
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21398

___
Python tracker 
<https://bugs.python.org/issue41244>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41242] When concating strings, I think it is better to use += than join the list

2020-07-08 Thread Wansoo Kim


Wansoo Kim  added the comment:

Well... to be honest, I'm a little confused. bpo-41244 and this issue are 
complete opposites. I'm not used to the Python community yet because it hasn't 
been long since I joined it.

You're saying that if a particular method is not dramatically better, we prefer 
to keep the existing one as it is, right?

Your comment was very helpful to me. Maybe I can learn step by step like this.

Thank you very much.

--

___
Python tracker 
<https://bugs.python.org/issue41242>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41199] Docstring convention not followed for dataclasses documentation page

2020-07-08 Thread Wansoo Kim


Wansoo Kim  added the comment:

May I solve this issue?

--
nosy: +ys19991

___
Python tracker 
<https://bugs.python.org/issue41199>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41199] Docstring convention not followed for dataclasses documentation page

2020-07-09 Thread Wansoo Kim


Change by Wansoo Kim :


--
keywords: +patch
pull_requests: +20562
pull_request: https://github.com/python/cpython/pull/21413

___
Python tracker 
<https://bugs.python.org/issue41199>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37894] [win] shutil.which can not find the path if 'cmd' include directory path and not include extension name

2020-07-09 Thread Wansoo Kim


Wansoo Kim  added the comment:

Can I solve this problem?

--
nosy: +ys19991

___
Python tracker 
<https://bugs.python.org/issue37894>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue37578] Change Glob: Allow Recursion for Hidden Files

2020-07-09 Thread Wansoo Kim


Wansoo Kim  added the comment:

Can you reproduce this bug? I was able to find hidden files with a recursive 
search by executing the code below.

```
from glob import glob

hidden = glob('**/.*', recursive=True)

print(hidden)
```

--
nosy: +ys19991

___
Python tracker 
<https://bugs.python.org/issue37578>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41264] Do not use the name of the built-in function as a variable.

2020-07-09 Thread Wansoo Kim


New submission from Wansoo Kim :

Using the name of a built-in function as a variable can cause unexpected 
problems.

```
# example

type = 'Hello'

...

type('Happy')

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'str' object is not callable

```

The code may return without any problems right now, but it may cause problems 
later when someone else makes a change.

In Lib/xml/etree, the _default() method assigns a value to `type`:

```
...

type = self._doctype[1]
if type == "PUBLIC" and n == 4:
    name, type, pubid, system = self._doctype

...
```

--
messages: 373442
nosy: ys19991
priority: normal
severity: normal
status: open
title: Do not use the name of the built-in function as a variable.
type: enhancement

___
Python tracker 
<https://bugs.python.org/issue41264>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41264] Do not use the name of the built-in function as a variable.

2020-07-10 Thread Wansoo Kim


Change by Wansoo Kim :


--
keywords: +patch
pull_requests: +20574
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21427

___
Python tracker 
<https://bugs.python.org/issue41264>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41264] Do not use the name of the built-in function as a variable.

2020-07-10 Thread Wansoo Kim


Change by Wansoo Kim :


--
pull_requests: +20575
pull_request: https://github.com/python/cpython/pull/21428

___
Python tracker 
<https://bugs.python.org/issue41264>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40982] copytree example in shutil

2020-07-10 Thread Wansoo Kim


Wansoo Kim  added the comment:

Can I solve this issue?

--
nosy: +ys19991

___
Python tracker 
<https://bugs.python.org/issue40982>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41284] High Level API for json file parsing

2020-07-12 Thread Wansoo Kim


New submission from Wansoo Kim :

Many Python users use the following snippet to read a JSON file.

```
import json

with open(filepath, 'r') as f:
    data = json.load(f)
```

I suggest providing this snippet as a function.

```
data = json.read(filepath)
```

Reading JSON is a very frequent task for Python users. I think it is worth 
providing this as a high-level API.
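
A minimal sketch of what the proposed helper could look like (json.read is the 
proposal, not an existing API):

```
import json

def read(filepath, **kwargs):
    # Hypothetical convenience wrapper around json.load().
    with open(filepath, 'r', encoding='utf-8') as f:
        return json.load(f, **kwargs)

data = read('config.json')
```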

--
components: Library (Lib)
messages: 373552
nosy: ys19991
priority: normal
severity: normal
status: open
title: High Level API for json file parsing
versions: Python 3.10

___
Python tracker 
<https://bugs.python.org/issue41284>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41284] High Level API for json file parsing

2020-07-12 Thread Wansoo Kim


Change by Wansoo Kim :


--
keywords: +patch
pull_requests: +20601
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/21453

___
Python tracker 
<https://bugs.python.org/issue41284>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue34624] -W option and PYTHONWARNINGS env variable does not accept module regexes

2020-07-14 Thread Yongjik Kim


Yongjik Kim  added the comment:

Hi, sorry if I'm interrupting, but while we're at this, could we also not 
escape the regex for the "message" part? (Or at least amend the documentation 
to clarify that the message part is a literal string match?)

The docs on -W just say "The meaning of each of these fields is as described in 
The Warnings Filter", and "The Warnings Filter" section says that the "message" 
field is a regex, but currently that is only true if you call 
warnings.filterwarnings() directly, not if you use the -W option.
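
For example, these two are not equivalent today; the filter set through the API 
treats the message as a regex, while the -W text is escaped and matched 
literally:

import warnings

# Via the API, `message` is a regular expression:
warnings.filterwarnings('ignore', message=r'unclosed .*')

# Via the command line, the same text is escaped with re.escape()
# and therefore only matches the literal characters:
#   python -W 'ignore:unclosed .*' script.py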

--
nosy: +Yongjik Kim

___
Python tracker 
<https://bugs.python.org/issue34624>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41320] async process closing after event loop closed

2020-07-19 Thread Joongi Kim


Change by Joongi Kim :


--
nosy: +achimnol

___
Python tracker 
<https://bugs.python.org/issue41320>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41229] Asynchronous generator memory leak

2020-07-19 Thread Joongi Kim


Joongi Kim  added the comment:

From the given example, if I add "await q.aclose()" after "await 
q.asend(123456)", it does not leak the memory.

This is a good example showing that we should always wrap async generators in 
an explicit "aclosing" context manager (which does not exist yet in the stdlib).
I'm already doing so in a custom library:
https://github.com/achimnol/aiotools/blob/ef7bf0ce/src/aiotools/context.py#L152

We may need to update the documentation to recommend explicit aclosing of async 
generators.
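
A minimal sketch of such an "aclosing" wrapper, assuming nothing beyond PEP 525 
asynchronous generators:

class aclosing:
    # Async context manager that guarantees agen.aclose() on exit.
    def __init__(self, agen):
        self._agen = agen

    async def __aenter__(self):
        return self._agen

    async def __aexit__(self, *exc_info):
        await self._agen.aclose()

# Usage:
#     async with aclosing(make_agen()) as g:
#         async for item in g:
#             ...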

--
nosy: +achimnol

___
Python tracker 
<https://bugs.python.org/issue41229>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41229] Asynchronous generator memory leak

2020-07-19 Thread Joongi Kim


Joongi Kim  added the comment:

I've searched the Python documentation, and the docs should be updated to 
explicitly state the necessity of aclose().

refs)
https://docs.python.org/3/reference/expressions.html#asynchronous-generator-functions
https://www.python.org/dev/peps/pep-0525/

I'm not sure what the original authors' intention was, but to me it looks like 
calling aclose() is optional and the responsibility to call aclose() on async 
generators is left to the asyncgen-shutdown handler of the event loop.

The example in this issue shows that we need to aclose async generators 
whenever we are done with them, even long before shutting down the event loop.

--

___
Python tracker 
<https://bugs.python.org/issue41229>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41229] Asynchronous generator memory leak

2020-07-19 Thread Joongi Kim


Change by Joongi Kim :


--
nosy: +njs

___
Python tracker 
<https://bugs.python.org/issue41229>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41229] Asynchronous generator memory leak

2020-07-19 Thread Joongi Kim


Change by Joongi Kim :


--
nosy: +Joongi Kim
nosy_count: 6.0 -> 7.0
pull_requests: +20687
pull_request: https://github.com/python/cpython/pull/21545

___
Python tracker 
<https://bugs.python.org/issue41229>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41532] Import precedence is broken in some library

2020-08-12 Thread Jinseo Kim


Jinseo Kim  added the comment:

Yes, I restarted and cleared the directory before each test.

--

___
Python tracker 
<https://bugs.python.org/issue41532>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41532] Import precedence is broken in some library

2020-08-12 Thread Jinseo Kim


Jinseo Kim  added the comment:

My environment is Ubuntu 18.04.4

Python version:
  Python 3.8.0 (default, Oct 28 2019, 16:14:01)
  [GCC 8.3.0] on linux

--

___
Python tracker 
<https://bugs.python.org/issue41532>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35061] Specify libffi.so soname for ctypes

2020-10-26 Thread Yongkwan Kim


Yongkwan Kim  added the comment:

My solution was creating a link of libffi.so.6 as .so.5; this is for anyone who 
has the same issue.
Thanks for your kind reply, though.

--

___
Python tracker 
<https://bugs.python.org/issue35061>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue41229] Asynchronous generator memory leak

2020-11-09 Thread Joongi Kim


Change by Joongi Kim :


--
pull_requests: +22115
pull_request: https://github.com/python/cpython/pull/23217

___
Python tracker 
<https://bugs.python.org/issue41229>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue12806] argparse: Hybrid help text formatter

2012-02-22 Thread Graylin Kim

Graylin Kim  added the comment:

I'd be willing to at some point, but I cannot see myself getting around to it 
in the near future.

If someone else wants to offer an implementation that would be great.

On Wed, Feb 22, 2012 at 10:42 AM, Zbyszek Szmek wrote:

>
> Zbyszek Szmek  added the comment:
>
> > I suppose here is where I should volunteer to update the patch file...
> @GraylinKim: do you still intend to work on this?
>
> --
>
> ___
> Python tracker 
> <http://bugs.python.org/issue12806>
> ___
>

--

___
Python tracker 
<http://bugs.python.org/issue12806>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29406] asyncio SSL contexts leak sockets after calling close with certain Apache servers

2017-06-18 Thread Nikolay Kim

Changes by Nikolay Kim :


--
pull_requests: +2319

___
Python tracker 
<http://bugs.python.org/issue29406>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29970] Severe open file leakage running asyncio SSL server

2017-06-18 Thread Nikolay Kim

Nikolay Kim added the comment:

The question is: should asyncio handle timeouts or leave them to the caller?

https://github.com/python/cpython/pull/480 fixes the leak during the handshake.

--
nosy: +fafhrd91

___
Python tracker 
<http://bugs.python.org/issue29970>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29970] Severe open file leakage running asyncio SSL server

2017-06-18 Thread Nikolay Kim

Nikolay Kim added the comment:

I see; this is a server-specific problem. As a temporary solution, I'd use a 
proxy for SSL termination.

--

___
Python tracker 
<http://bugs.python.org/issue29970>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29406] asyncio SSL contexts leak sockets after calling close with certain Apache servers

2017-06-19 Thread Nikolay Kim

Nikolay Kim added the comment:

Let’s close this issue then. I don’t like it anyway.

> On Jun 19, 2017, at 10:21 AM, Grzegorz Grzywacz  
> wrote:
> 
> 
> Grzegorz Grzywacz added the comment:
> 
> This is not problem with madis-data.ncep.noaa.gov not doing ssl shutdown, 
> this is problem with asyncio not doing it.
> 
> Patch from this #30698 issue fix this too.
> 
> --
> nosy: +grzgrzgrz3
> 
> ___
> Python tracker 
> <http://bugs.python.org/issue29406>
> ___

--
nosy: +fafhrd

___
Python tracker 
<http://bugs.python.org/issue29406>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35627] multiprocessing.queue in 3.7.2 doesn't behave as it was in 3.7.1

2018-12-31 Thread June Kim


Change by June Kim :


--
components: Library (Lib)
nosy: June Kim
priority: normal
severity: normal
status: open
title: multiprocessing.queue in 3.7.2 doesn't behave as it was in 3.7.1
type: behavior
versions: Python 3.7

___
Python tracker 
<https://bugs.python.org/issue35627>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35627] multiprocessing.queue in 3.7.2 doesn't behave as it was in 3.7.1

2018-12-31 Thread June Kim


New submission from June Kim :

## Test code ##
## Modified a bit from the original written by Doug Hellmann
## https://pymotw.com/3/multiprocessing/communication.html

import multiprocessing
import time


class Consumer(multiprocessing.Process):
    def __init__(self, task_queue, result_queue):
        multiprocessing.Process.__init__(self)
        self.task_queue = task_queue
        self.result_queue = result_queue

    def run(self):
        proc_name = self.name
        while True:
            print('Getting task')
            next_task = self.task_queue.get()
            print(f'task got: {next_task}')
            if next_task is None:
                print('{}: Exiting'.format(proc_name))
                self.task_queue.task_done()
                break
            print('{}: {}'.format(proc_name, next_task))
            answer = next_task()
            self.task_queue.task_done()
            self.result_queue.put(answer)


class Task:
    def __init__(self, a, b):
        self.a = a
        self.b = b

    def __call__(self):
        time.sleep(0.1)
        return '{self.a} * {self.b} = {product}'.format(
            self=self, product=self.a * self.b)

    def __str__(self):
        return '{self.a} * {self.b}'.format(self=self)


def test():
    tasks = multiprocessing.JoinableQueue()
    results = multiprocessing.Queue()
    num_consumers = multiprocessing.cpu_count() * 2
    print('Creating {} consumers'.format(num_consumers))
    consumers = [Consumer(tasks, results) for i in range(num_consumers)]
    [w.start() for w in consumers]
    num_jobs = 10
    print('Putting')
    [tasks.put(Task(i, i)) for i in range(num_jobs)]
    print('Poisoning')
    [tasks.put(None) for i in range(num_consumers)]
    print('Joining')
    tasks.join()
    while num_jobs:
        result = results.get()
        print('Result:', result)
        num_jobs -= 1


if __name__ == '__main__':
    # Required on Windows, where child processes are started by spawning;
    # without this guard the repro never runs.
    test()

###
1. This code works perfectly in 3.7.1 but halts the main process in 3.7.2.
2. It seems the JoinableQueue is empty when it is accessed by the worker 
processes.
3. IMHO, the resource-sharing mechanism in multiprocessing queues seems not to 
be working properly.

--

___
Python tracker 
<https://bugs.python.org/issue35627>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue35627] multiprocessing.queue in 3.7.2 doesn't behave as it was in 3.7.1

2018-12-31 Thread June Kim


June Kim  added the comment:

Here is my environment

---system
CPU: Intel i5 @2.67GHz
RAM: 8G
OS: Windows 10 Home (64bit)
OS version: 1803
OS build: 17134.472

---python
version1: 3.7.1 AMD64 on win32
version2: 3.7.2 AMD64 on win32
Python path: (venv)/Scripts/python.exe
IDE: VS Code(1.30.1)
Terminal: Git Bash

--

___
Python tracker 
<https://bugs.python.org/issue35627>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29406] asyncio SSL contexts leak sockets after calling close with certain Apache servers

2017-03-03 Thread Nikolay Kim

Changes by Nikolay Kim :


--
pull_requests: +371

___
Python tracker 
<http://bugs.python.org/issue29406>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue29742] asyncio get_extra_info() throws exception

2017-03-06 Thread Nikolay Kim

New submission from Nikolay Kim:

https://github.com/python/asyncio/issues/494

--
messages: 289138
nosy: fafhrd91
priority: normal
pull_requests: 435
severity: normal
status: open
title: asyncio get_extra_info() throws exception
versions: Python 3.5, Python 3.6, Python 3.7

___
Python tracker 
<http://bugs.python.org/issue29742>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com


