sbt added the comment:
The failures for test_multiprocessing and test_concurrent_futures seem to be
caused by a leak in _multiprocessing.win32.WaitForMultipleObjects().
The attached patch fixes those leaks for me (on a 32 bit build).
--
keywords: +patch
nosy: +sbt
Added file: http
sbt added the comment:
The attached patch fixes the time related refleaks.
--
___
Python tracker
<http://bugs.python.org/issue14125>
___
___
Python-bugs-list mailing list
sbt added the comment:
Ah. Forgot the patch.
--
Added file: http://bugs.python.org/file24662/time_strftime_leak.patch
New submission from sbt :
Currently the only documented way to have customised pickling for a type is to
register a reduction function with the global dispatch table managed by the
copyreg module. But such global changes are liable to disrupt other code which
uses pickling.
Multiprocessing
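A per-pickler table, as proposed here, can be sketched with the dispatch_table attribute that pickle.Pickler later grew; the Point class and reducer below are hypothetical, chosen only to illustrate the mechanism:

```python
import copyreg
import io
import pickle


class Point:
    """Hypothetical class with custom pickling requirements."""
    def __init__(self, x, y):
        self.x = x
        self.y = y


seen = []

def reduce_point(p):
    seen.append(p)            # record that the custom reducer ran
    # Recreate the object by calling Point(x, y) on unpickling.
    return (Point, (p.x, p.y))


# Give one pickler a private table instead of mutating the global
# copyreg.dispatch_table, so other code's pickling is unaffected.
buf = io.BytesIO()
pickler = pickle.Pickler(buf)
pickler.dispatch_table = copyreg.dispatch_table.copy()
pickler.dispatch_table[Point] = reduce_point
pickler.dump(Point(1, 2))

restored = pickle.loads(buf.getvalue())
```

Other picklers, and the module-level pickle.dumps(), keep using the untouched global table.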
sbt added the comment:
> I don't understand the following code:
> ...
> since self.dispatch_table is a property returning
> self._dispatch_table. Did you mean type(self).dispatch_table?
More or less. That code was a botched attempt to match the behaviour of the C
implementation.
sbt added the comment:
> Hmm, I tried to apply the latest patch to the default branch and it
> failed. It also seems the patch was done against a changeset
> (508bc675af63) which doesn't exist in the repo...
I will do an updated patch against a "public" changeset.
sbt added the comment:
Updated patch with docs.
--
Added file: http://bugs.python.org/file24729/pickle_dispatch.patch
sbt added the comment:
Updated patch against 2822765e48a7.
--
Added file: http://bugs.python.org/file24730/pipe_poll_fix.patch
sbt added the comment:
Updated patch addressing Antoine's comments.
--
Added file: http://bugs.python.org/file24737/pipe_poll_fix.patch
sbt added the comment:
What you were told on IRC was wrong. By default the queue *does* have infinite
size.
When a process puts an item on the queue for the first time, a background
thread is started which is responsible for writing items to the underlying
pipe. This does mean that, on
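The default can be seen directly: put() returns at once while the feeder thread drains the in-process buffer into the pipe in the background (the item count here is arbitrary):

```python
from multiprocessing import Queue

q = Queue()                 # no maxsize given, so puts never block on capacity
for i in range(100):
    q.put(i)                # returns immediately; a feeder thread, started
                            # on the first put(), writes items to the pipe

items = [q.get() for _ in range(100)]
```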
New submission from sbt :
According to Microsoft's documentation sockets created using socket() have the
overlapped attribute, but sockets created with WSASocket() do not unless you
pass the WSA_FLAG_OVERLAPPED flag. The documentation for WSADuplicateSocket()
says
If the source process
sbt added the comment:
pitrou wrote:
> Are you sure this is desired? Nowhere can I think of a place in the
> stdlib where we use overlapped I/O on sockets.
multiprocessing.connection.wait() does overlapped zero length reads on sockets.
Its documentation currently claims that it
sbt added the comment:
I think
PyAPI_FUNC(PyObject *) _PyIter_GetIter(const char *iter);
has a confusing name for a convenience function which retrieves an attribute
from the builtin module by name.
Not sure what would be better. Maybe _PyIter_GetBuiltin().
--
nosy: +sbt
sbt added the comment:
It appears that the 4th argument of the socket constructor is undocumented, so
presumably one is expected to use fromfd() instead.
Maybe you could have a frominfo(info) function (to match fromfd(fd,...)) and a
dupinfo(pid) method.
(It appears that multiprocessing uses
sbt added the comment:
_DummyThread.__init__() explicitly deletes self._Thread__block:
    def __init__(self):
        Thread.__init__(self, name=_newname("Dummy-%d"))
        # Thread.__block consumes an OS-level locking primitive, which
        # can never be used by a _DummyThread.
sbt added the comment:
Ignore my last message...
New submission from sbt :
The attached patch reimplements ForkingPickler using the new dispatch_table
attribute.
This allows ForkingPickler to subclass Pickler (implemented in C) instead of
_Pickler (implemented in Python).
--
components: Library (Lib)
files: mp_forking_pickler.patch
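The idea can be sketched roughly as follows, assuming dispatch_table is available on the C Pickler; the class and type names below are hypothetical, not the patch's actual code:

```python
import copyreg
import io
import pickle


class ForkingPicklerSketch(pickle.Pickler):
    """Hypothetical sketch: subclass the C-implemented pickle.Pickler and
    route custom reducers through a per-instance dispatch_table instead
    of overriding the pure-Python _Pickler's dispatch machinery."""

    _extra_reducers = {}

    def __init__(self, file, protocol=None):
        super().__init__(file, protocol)
        self.dispatch_table = copyreg.dispatch_table.copy()
        self.dispatch_table.update(self._extra_reducers)

    @classmethod
    def register(cls, type_, reduce_func):
        cls._extra_reducers[type_] = reduce_func


class Token:
    """Stand-in for a type that needs special treatment when forking."""
    def __init__(self, value):
        self.value = value


ForkingPicklerSketch.register(Token, lambda t: (Token, (t.value,)))

buf = io.BytesIO()
ForkingPicklerSketch(buf, pickle.HIGHEST_PROTOCOL).dump(Token(42))
clone = pickle.loads(buf.getvalue())
```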
sbt added the comment:
_eintr_retry is currently unused. The attached patch removes it.
If it is retained then we should at least add a warning that it does not
recalculate timeouts.
--
keywords: +patch
Added file: http://bugs.python.org/file24888/mp_remove_eintr_retry.patch
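For comparison, a retry helper that does recalculate timeouts might look like this sketch (the function name is made up; this is not the stdlib's _eintr_retry):

```python
import errno
import os
import select
import time


def eintr_retry_with_deadline(rlist, timeout):
    """Hypothetical replacement for _eintr_retry: retry select() after
    EINTR, but shrink the timeout so the overall deadline is honoured."""
    deadline = None if timeout is None else time.monotonic() + timeout
    while True:
        if deadline is None:
            remaining = None
        else:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                return ([], [], [])      # deadline already passed
        try:
            return select.select(rlist, [], [], remaining)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise
            # interrupted by a signal: loop and recompute remaining


# quick check: a readable pipe is reported well before the deadline
r, w = os.pipe()
os.write(w, b"x")
ready, _, _ = eintr_retry_with_deadline([r], 1.0)
```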
New submission from sbt :
When pickling a function object, if it cannot be saved as a global the C
implementation falls back to using copyreg/__reduce__/__reduce_ex__.
The comment for the changeset which added this fallback claims that it is for
compatibility with the Python implementation
sbt added the comment:
> I think this captures the functionality better than "duplicate" or
> duppid() since there is no actual duplication involved until the
> fromshare() function is called.
Are you saying the WSADuplicateSocket() call in share() doesn't duplicate the socket?
sbt added the comment:
> If duplication happened early, then there would have to be a way to
> "unduplicate" it in the source process if, say, IPC somehow failed.
> There is currently no api to undo the effects of WSADuplicateSocket().
If this were a normal handle the
sbt added the comment:
> ... and that pickling things like dict iterators entail running the
> iterator to completion and storing all of the results in a list.
The thing to emphasise here is that pickling an iterator is "destructive":
afterwards the original iterator will be exhausted.
sbt added the comment:
> If you look at the patch it isn't (or shouldn't be).
Sorry. I misunderstood when Raymond said "running the iterator to completion".
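In a modern CPython this is easy to check: pickling a dict iterator captures its remaining items without consuming the original (insertion-ordered dicts assumed, i.e. Python 3.7+):

```python
import pickle

d = {"a": 1, "b": 2, "c": 3}
it = iter(d)
first = next(it)              # consume one key before pickling

data = pickle.dumps(it)       # captures the remaining keys
rest_original = list(it)      # the original iterator is *not* exhausted

clone = pickle.loads(data)
rest_clone = list(clone)      # the copy yields the same remaining keys
```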
sbt added the comment:
Jimbofbx wrote:
> def main():
>     from multiprocessing import Pipe, reduction
>     i, o = Pipe()
>     print(i)
>     reduced = reduction.reduce_connection(i)
>     print(reduced)
>     newi = reduced[0](*reduced[1])
>     print(newi)
sbt added the comment:
ForkingPickler is only used when creating a child process. The
multiprocessing.reduction module is only really intended for sending stuff to
*pre-existing* processes.
As things stand, after importing multiprocessing.reduction you can do something
like
buf
sbt added the comment:
> But ForkingPickler could be used in multiprocessing.connection,
> couldn't it?
I suppose so.
Note that the way a connection handle is transferred between existing processes
is unnecessarily inefficient on Windows. A background server thread (one per
process)
New submission from sbt :
In multiprocessing.connection on Windows, socket handles are indirectly
duplicated using DuplicateHandle() instead of WSADuplicateSocket(). According
to Microsoft's documentation this is not supported.
This is easily avoided by using socket.detach() instead.
sbt added the comment:
> There is a simpler way to do this on Windows. The sending process
> duplicates the handle, and the receiving process duplicates that second
> handle using DuplicateHandle() and the DUPLICATE_CLOSE_SOURCE flag. That
> way no server thread is necessary.
Changes by sbt :
Removed file: http://bugs.python.org/file25153/mp_socket_dup.patch
sbt added the comment:
> What is the bug that this fixes? Can you provide a test case?
The bug is using an API in a way that the documentation says is
wrong/unreliable. There does not seem to be a classification for that.
I have never seen a problem caused by using DuplicateHandle() s
sbt added the comment:
Actually Issue 9753 was causing failures in test_socket.BasicTCPTest and
test_socket.BasicTCPTest2 on at least one Windows XP machine.
sbt added the comment:
> Is there a reason the patch changes close() to win32.CloseHandle()?
This is a Windows only code path so close() is just an alias for
win32.CloseHandle(). It allows removal of the lines
# Late import because of circular import
from multiprocessing.fork
sbt added the comment:
New patch skips tests if ctypes not available.
--
Added file: http://bugs.python.org/file25155/cond_wait_for.patch
sbt added the comment:
I only looked quickly at the web pages, so I may have misunderstood.
But it sounds like this applies when the attacker gets multiple chances to
guess the digest for a *fixed* message (which was presumably chosen by the
attacker).
That is not the case here because
sbt added the comment:
There is an undocumented function multiprocessing.allow_connection_pickling()
whose docstring claims it allows connection and socket objects to be pickled.
The attached patch fixes the multiprocessing.reduction module so that it works
correctly. This means that
sbt added the comment:
> I think a generic solution must be found for multiprocessing, so I'll
> create a separate issue.
I have submitted a patch for Issue 4892 which makes connection and socket
objects picklable. It uses socket.share() and socket.fromshare()
sbt added the comment:
Updated patch which uses ForkingPickler in Connection.send().
Note that connection sharing still has to be enabled using
allow_connection_pickling().
Support could be enabled automatically, but that would introduce more circular
imports which confuse me. It might be
sbt added the comment:
> But connection doesn't depend on reduction, neither does forking.
If registration of (Pipe)Connection is done in reduction then you can't make
(Pipe)Connection picklable *automatically* unless you make connection depend on
reduction (possibly indirectly)
sbt added the comment:
I think it would be reasonable to add a safe comparison function to hmac.
Its documentation could explain briefly when it would be preferable to "==".
New submission from sbt :
When running test_multiprocessing on Linux I occasionally see a stream of
errors caused by ignored weakref callbacks:
Exception AssertionError: AssertionError() in ignored
These do not cause the unittests to fail.
Finalizers from the parent process are supposed
sbt added the comment:
Patch to disable gc.
--
keywords: +patch
Added file: http://bugs.python.org/file25180/mp_disable_gc.patch
sbt added the comment:
The last patch did not work on Unix.
Here is a new version where the reduction functions are automatically
registered, so allow_connection_pickling() is redundant.
--
Added file: http://bugs.python.org/file25181/mp_pickle_conn.patch
sbt added the comment:
> That's a problem indeed. Perhaps we need a global "fork lock" shared
> between subprocess and multiprocessing?
I did an atfork patch which included a (recursive) fork lock. See
http://bugs.python.org/review/6721/show
The patch included chang
sbt added the comment:
Why not just
def time_independent_equals(a, b):
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 0
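For reference, a quick sanity check of this helper; a function along these lines now exists in the stdlib as hmac.compare_digest:

```python
import hmac


def time_independent_equals(a, b):
    # Examine every position even after a mismatch is found, so the
    # running time does not depend on where the first difference is.
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 0


same = time_independent_equals(b"digest-one", b"digest-one")
diff = time_independent_equals(b"digest-one", b"digest-two")
builtin = hmac.compare_digest(b"digest-one", b"digest-one")
```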
sbt added the comment:
Alternative patch which records pid when Finalize object is created. The
callback does nothing if recorded pid does not match os.getpid().
--
Added file: http://bugs.python.org/file25195/mp_finalize_pid.patch
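The idea can be sketched as follows; the class name is hypothetical (the real patch modifies multiprocessing.util.Finalize):

```python
import os


class PidAwareFinalizer:
    """Hypothetical sketch: record os.getpid() when the finalizer is
    registered; the callback becomes a no-op in any process (such as a
    fork child) whose pid differs from the recorded one."""

    def __init__(self, callback):
        self._callback = callback
        self._pid = os.getpid()

    def __call__(self):
        if self._pid != os.getpid():
            return None          # inherited from another process: skip
        return self._callback()


calls = []
finalizer = PidAwareFinalizer(lambda: calls.append("cleaned up"))
finalizer()                      # same process, so the callback fires
```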
sbt added the comment:
> But what if Finalize is used to cleanup a resource that gets
> duplicated in children, like a file descriptor?
> See e.g. forking.py, line 137 (in Popen.__init__())
> or heap.py, line 244 (BufferWrapper.__init__()).
This was how Finalize objects already ac
sbt added the comment:
I think there are some issues with the treatment of the DWORD type. (DWORD is
a typedef for unsigned long.)
_subprocess always treats them as signed, whereas _multiprocessing treats them
(correctly) as unsigned. _windows does a mixture: functions from _subprocess
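The distinction matters for any DWORD with the high bit set, as a quick struct round-trip shows:

```python
import struct

value = 0xFFFFFFFE                       # a DWORD with the high bit set
packed = struct.pack("<I", value)        # "I": unsigned 32-bit round-trips
unsigned = struct.unpack("<I", packed)[0]
signed = struct.unpack("<i", packed)[0]  # "i": a signed view goes negative
```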
sbt added the comment:
Attached is an up to date patch.
* code has been moved to Modules/_windows.c
* DWORD is uniformly treated as unsigned
* _subprocess's handle wrapper type has been removed (although
subprocess.py still uses a Python implemented handle wrapper type)
I'm no
sbt added the comment:
> I don't think we need the vcproj file, unless I missed something.
_multiprocessing.win32 currently wraps closesocket(), send() and recv() so it
needs to link against ws2_32.lib.
I don't know how to make _windows link against ws2_32.lib without adding a
sbt added the comment:
New patch. Compared to the previous one:
* socket functions have been moved from _windows to _multiprocessing
* _windows.vcproj has been removed (so _windows is part of pythoncore.vcproj)
* no changes to pcbuild.sln needed
* removed reference to 'win32_functions.
sbt added the comment:
> I think the module would be better named _win32, since that's the name
> of the API (like POSIX under Unix).
Changed in new patch.
> Also, it seems there are a couple of naming inconsistencies renaming
> (e.g. the overlapped wrapper is named "
sbt added the comment:
New patch which calculates endtime outside loop.
--
Added file: http://bugs.python.org/file25240/cond_wait_for.patch
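In outline, the fix looks like this (a simplified sketch of Condition.wait_for, not the patch's exact code):

```python
import threading
import time


def wait_for(cv, predicate, timeout):
    """Simplified sketch of Condition.wait_for(): the deadline is computed
    once, before the loop, so a wakeup that finds the predicate still
    false waits only for the *remaining* time, not the full timeout."""
    endtime = time.monotonic() + timeout
    result = predicate()
    while not result:
        remaining = endtime - time.monotonic()
        if remaining <= 0:
            break
        cv.wait(remaining)
        result = predicate()
    return result


cv = threading.Condition()
flag = []

def set_flag():
    time.sleep(0.05)
    with cv:
        flag.append(True)
        cv.notify()

threading.Thread(target=set_flag).start()
with cv:
    ok = wait_for(cv, lambda: bool(flag), timeout=5.0)
```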
sbt added the comment:
> How about _windowsapi or _winapi then, to ensure there are no clashes?
I don't have any strong feelings, but I would prefer _winapi.
sbt added the comment:
s/_win32/_winapi/g
--
Added file: http://bugs.python.org/file25241/winapi_module.patch
sbt added the comment:
> Overlapped's naming is still lagging behind :-)
Argh. And a string in winapi_module too.
Yet another patch.
--
Added file: http://bugs.python.org/file25252/winapi_module.patch
sbt added the comment:
Can this issue be reclosed now?
sbt added the comment:
Up to date patch.
--
Added file: http://bugs.python.org/file25270/mp_pickle_conn.patch
sbt added the comment:
A couple of minor changes based on Antoine's earlier review (which I did not
notice till now).
--
Added file: http://bugs.python.org/file25272/mp_pickle_conn.patch