Changes by Richard Oudkerk :
--
nosy: -sbt
Python tracker <http://bugs.python.org/issue19066>
Changes by Richard Oudkerk :
--
nosy: -sbt
Python tracker <http://bugs.python.org/issue19124>
Richard Oudkerk added the comment:
> Well, perhaps we can special-case builtins not to be "wiped" at shutdown.
> However, there is another problem here in that the Popen object survives
> until the builtins module is wiped. This should be investigated too.
Maybe it is becau
Richard Oudkerk added the comment:
Is BoundedSemaphore really supposed to be "robust" in the face of too many
releases, or does it just provide a sanity check?
I think that releasing a bounded semaphore too many times is a programmer
error, and the exception is just a debugging aid.
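For illustration, a minimal sketch of that sanity check using threading.BoundedSemaphore (multiprocessing.BoundedSemaphore behaves analogously):

    import threading

    sem = threading.BoundedSemaphore(1)
    sem.acquire()
    sem.release()          # fine: value is back at its initial maximum
    try:
        sem.release()      # one release too many
    except ValueError as exc:
        print('sanity check triggered:', exc)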
Richard Oudkerk added the comment:
> the previous initializers were not supposed to return any value
Previously, any returned value would have been ignored. But the documentation
does not say that the function has to return None. So I don't think we can
assume there is no compatibility problem.
Richard Oudkerk added the comment:
I think "misuse" is an exageration. Various functions change some state and
return a value that is usually ignored, e.g. os.umask(), signal.signal().
> Global variables usage is a pattern which might lead to code errors and many
> developers
Richard Oudkerk added the comment:
> These functions are compliant with POSIX standards and the return values
> are actually useful, they return the previously set masks and handlers,
> often are ignored but in complex cases it's good to know their previous
> state.
Yes.
Richard Oudkerk added the comment:
BTW, the context objects are singletons.
I could not see a sensible way to make ctx.Process be a picklable class (rather
than a method) if there can be multiple instances of a context type. This
means that the helper processes survive until the program exits.
Richard Oudkerk added the comment:
Attached is a patch which allows the use of separate contexts. For example
    try:
        ctx = multiprocessing.get_context('forkserver')
    except ValueError:
        ctx = multiprocessing.get_context('spawn')

    q = ctx.Queue()
Changes by Richard Oudkerk :
--
nosy: +sbt
Python tracker <http://bugs.python.org/issue12413>
Richard Oudkerk added the comment:
> I'm already confused by the fact that the test is named
> test_multiprocessing_spawn and the error is coming from a module named
> popen_fork...)
popen_spawn_posix.Popen is a subclass of popen_fork.Popen.
Richard Oudkerk added the comment:
> I haven't read all of your patch yet, but does this mean a forkserver
> will be started regardless of whether it is later used?
No, it is started on demand. But since it is started using
_posixsubprocess.fork_exec(), nothing is inherited from the parent process.
Richard Oudkerk added the comment:
After running ugly_hack(), trying to malloc a largeish block (1MB) fails:
    int main(void)
    {
        int first;
        void *ptr;

        ptr = malloc(1024*1024);
        assert(ptr != NULL);    /* succeeds */
        free(ptr);

        first = ugly_hack();
        ptr = malloc
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> pending
title: Robustness issues in multiprocessing.{get,set}_start_method -> Support
different contexts in multiprocessing
type: behavior ->
Changes by Richard Oudkerk :
--
status: open -> closed
Python tracker <http://bugs.python.org/issue18999>
Richard Oudkerk added the comment:
On 16/10/2013 8:14pm, Guido van Rossum wrote:
> (2) I get this message -- what does it mean and should I care?
> 2 tests altered the execution environment:
> test_asyncio.test_base_events test_asyncio.test_futures
Perhaps threads from the Threa
Richard Oudkerk added the comment:
I think at module level you can do
    if sys.platform != 'win32':
        raise unittest.SkipTest('Windows only')
Richard Oudkerk added the comment:
I can reproduce the problem on the Non-Debug Gentoo buildbot using only
os.fork() and os.kill(pid, signal.SIGTERM). See
http://hg.python.org/cpython/file/9853d3a20849/Lib/test/_test_multiprocessing.py#l339
To investigate further I think strace and/or
Richard Oudkerk added the comment:
> I fixed the out of space last night. (Someday I'll get around to figuring
> out which test it is that is leaving a bunch of data around when it fails,
> but I haven't yet).
It looks like on the Debug Gentoo buildbot configure an
Richard Oudkerk added the comment:
I finally have a gdb backtrace of a stuck child (started using os.fork() not
multiprocessing):
#1 0xb76194da in ?? () from /lib/libc.so.6
#2 0xb6d59755 in ?? ()
from
/var/lib/buildslave/custom.murray-gentoo/build/build/lib.linux-i686-3.4-pydebug
Richard Oudkerk added the comment:
Actually, according to strace the call which blocks is
futex(0xb7839454, FUTEX_WAIT_PRIVATE, 1, NULL
Changes by Richard Oudkerk :
--
nosy: +sbt
Python tracker <http://bugs.python.org/issue10015>
Richard Oudkerk added the comment:
I guess this should be clarified in the docs, but multiprocessing.pool.Pool is
a *class* whose constructor takes a context argument, whereas
multiprocessing.Pool() is a *bound method* of the default context. (In
previous versions multiprocessing.Pool was a function.)
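A minimal sketch of the distinction (assuming the Python 3.4 context API):

    import multiprocessing
    import multiprocessing.pool

    if __name__ == '__main__':
        # Bound method of the default context -- no context argument:
        p1 = multiprocessing.Pool(processes=2)
        p1.close(); p1.join()

        # The class itself takes an explicit context keyword argument:
        ctx = multiprocessing.get_context('spawn')
        p2 = multiprocessing.pool.Pool(processes=2, context=ctx)
        p2.close(); p2.join()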
Richard Oudkerk added the comment:
> I guess we'll have to write platform-dependent code and make this an
> optional feature. (Essentially, on platforms like AIX, for a
> write-pipe, connection_lost() won't be called unless you try to write
> some more bytes to it.)
I
Richard Oudkerk added the comment:
Would it make sense to use socketpair() instead of pipe() on AIX? We could
check for the "bug" directly rather than checking specifically for AIX.
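For reference, a minimal example of socket.socketpair() standing in for a pipe (not tested on AIX; it just shows the shape of the API):

    import socket

    rsock, wsock = socket.socketpair()   # connected pair of UNIX-domain sockets
    wsock.sendall(b'ping')
    print(rsock.recv(4))                 # b'ping'
    wsock.close()
    rsock.close()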
Richard Oudkerk added the comment:
> Is this patch still of relevance for asyncio?
No, the _overlapped extension contains the IOCP stuff.
Richard Oudkerk added the comment:
> Richard, do you have time to get your patch ready for 3.4?
Yes. But we don't seem to have consensus on how to handle exceptions. The
main question is whether a failed prepare callback should prevent the fork from
happening, or just be
Richard Oudkerk added the comment:
> - now that FDs are non-inheritable by default, fork locks around
> subprocess and multiprocessing shouldn't be necessary anymore? What
> other use cases does the fork-lock have?
CLOEXEC fds will still be inherited by forked children.
Richard Oudkerk added the comment:
The following uses socketpair() instead of pipe() for stdin, and works for me
on Linux:
diff -r 7d94e4a68b91 asyncio/unix_events.py
--- a/asyncio/unix_events.py    Sun Oct 20 20:25:04 2013 -0700
+++ b/asyncio/unix_events.py    Mon Oct 21 17:15:19 2013 +0100
Richard Oudkerk added the comment:
Won't using a prepare handler mean that the parent and child processes will use
the same seed until one or other of them forks again?
New submission from Richard Neill:
It would be really nice if python supported mathematical operations on
dictionaries. This is widely requested (eg lots of stackoverflow queries), but
there's currently no simple way to do it.
I propose that this should work in the "obvious"
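For comparison, collections.Counter already supports elementwise addition and subtraction on numeric values, which seems close to the behaviour being requested here:

    from collections import Counter

    a = Counter({'x': 1, 'y': 2})
    b = Counter({'x': 10})
    print(a + b)    # Counter({'x': 11, 'y': 2})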
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: needs patch -> committed/rejected
status: open -> closed
Richard Oudkerk added the comment:
This is a test of threading.Barrier rather than anything implemented directly
by multiprocessing.
Tests which involve timeouts tend to be a bit flaky. Increasing the length of
timeouts usually helps, but makes the tests take even longer.
How often have you
Richard Oudkerk added the comment:
Given PEP 446 (fds are now CLOEXEC by default) I prepared an updated patch
where the fork lock is undocumented and subprocess no longer uses the fork
lock. (I did not want to encourage the mixing of threads with fork() without
exec() by exposing the fork lock.)
Richard Oudkerk added the comment:
It is a recent kernel and does support pipe2().
After some debugging it appears that a pipe handle created in Popen.__init__()
was being leaked to a forked process, preventing Popen.__init__() from
completing before the forked process did.
Previously the
Richard Oudkerk added the comment:
Although it is undocumented, in python 3.4 you can control the prefix used by
doing
    multiprocessing.current_process()._config['semprefix'] = 'myprefix'
in the main process at the beginning of the program.
Unfortunately, this will
Richard Oudkerk added the comment:
This was fixed for 3.3 in #1692335.
The issue of backporting to 2.7 is discussed in #17296.
--
resolution: -> duplicate
status: open -> closed
superseder: -> Cannot unpickle classes derived from 'Exception'
type
Richard Oudkerk added the comment:
> So hopefully the bug should disappear entirely in future releases of tcl,
> but for now you can work around it by building tcl without threads,
> calling exec in between the fork and any use of tkinter in the child
> process, or not importing t
Richard Oudkerk added the comment:
Fixed by #11161.
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
superseder: -> futures.ProcessPoolExecutor hangs
New submission from Richard PALO:
I'd like this previous issue to be reopened, as it is still very much the case.
I believe as well that the common distros (I can easily verify OpenIndiana and
OmniOS) patch it out (patch file attached).
Upstream/oracle/userland-gate seems to as well.
Richard PALO added the comment:
I don't believe the problem is solely a question of building the Python
sources; it also affects certain dependent application sources...
I know of at least libreoffice building against python and this problem has
come up.
The workaround was to apply the
Richard PALO added the comment:
Sure, attached is a simple test found on the internet, compiled with the
following reproduces the problem:
richard@devzone:~/src$ /opt/local/gcc48/bin/g++ -o tp tp.cpp -DSOLARIS
-I/opt/local/include/python2.7 -L/opt/local/lib -lpython2.7
In file included from
Richard Oudkerk added the comment:
If you have a pending overlapped operation then the associated buffer should
not be deallocated until that operation is complete, or else you are liable to
get a crash or memory corruption.
Unfortunately WinXP provides no reliable way to cancel a pending
Richard Oudkerk added the comment:
> As close() on regular files, I would prefer to call explicitly cancel()
> to control exactly when the overlapped operation is cancelled.
If you use daemon threads then you have no guarantee that the thread will ever
get a chance to explicitly call
Richard Oudkerk added the comment:
I think the attached patch should fix it. Note that with the patch the
RuntimeError can probably only occur on Windows XP.
Shall I apply it?
--
keywords: +patch
Added file: http://bugs.python.org/file32597/dealloc-runtimeerror.patch
Richard Oudkerk added the comment:
On 13/11/2013 3:07pm, STINNER Victor wrote:
>> On Vista and later, yes, this is done in the deallocator using
>> CancelIoEx(), although there is still a warning.
>
> I don't understand. The warning is emitted because an operating is no
Richard Oudkerk added the comment:
Note that on Windows if you redirect the standard streams then *all*
inheritable handles are inherited by the child process.
Presumably the handle for f_w file object (and/or a duplicate of it) created in
one thread is accidentally "leaked" to
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
type: behavior ->
Richard Oudkerk added the comment:
Thanks for the patches.
Fixed in 7aabbe919f55, 11cafbe6519f.
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
Richard Oudkerk added the comment:
Hopefully the applied change will fix the failure (or at least make this much
less likely).
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
type: -> behavior
Richard Oudkerk added the comment:
I don't think the patch to the _test_multiprocessing will work. It defines
cls._Popen but I don't see how that would be used by cls.Pool to start the
processes.
I will have a think about a fix.
Richard Oudkerk added the comment:
> If the result of os.read() was stored in a Python daemon thread, the
> memory should be released since the following changeset. Can someone
> check if this issue still exist?
If a daemon thread is killed while it is blocking on os.read() then
Changes by Richard Oudkerk :
--
resolution: -> fixed
status: open -> closed
Python tracker <http://bugs.python.org/issue19599>
Richard Oudkerk added the comment:
It would be nice to try this on another Vista machine - the WinXP, Win7,
Windows Server 2003 and Windows Server 2008 buildbots don't seem to show this
failure.
It looks as though the TimerOrWaitFired argument passed to the callback
registered
Richard Oudkerk added the comment:
Could you try this patch?
--
keywords: +patch
Added file: http://bugs.python.org/file32822/wait-for-handle.patch
Richard Oudkerk added the comment:
> Possibly related: ...
That looks unrelated since it does not involve wait_for_handle().
Unfortunately test_utils.run_briefly() offers few guarantees when using the
IOCP event loop.
Richard Oudkerk added the comment:
> I've always had an implicit understanding that calls with timeouts may,
> for whatever reason, return sooner than requested (or later!), and the
> most careful approach is to re-check the clock again.
I've always had the implicit understa
Richard Oudkerk added the comment:
>From what I remember a proxy method will be thread/process-safe if the
>referent's corresponding method is thread safe.
It should certainly be documented that the exposed methods of a proxied object
should be
New submission from Richard Milne:
Reading the pkzip APPNOTE and the documentation for the zipfile module, I was
under the impression that I could set the DEFLATE compression level, on a
per-file basis, for each file added to an archive, by setting the appropriate
bits in zipinfo.flag_bits
Richard Kiss added the comment:
I reread more carefully, and I am in agreement now that I better understand
what's going on. Thanks for your patience.
--
nosy: +Richard.Kiss
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker <http://bugs.python.org/issue10850>
Richard Oudkerk added the comment:
Since there are no new features added to Python 2, this would be a Python 3
only feature.
I think for Python 3 it is better to concentrate on developing
concurrent.futures rather than multiprocessing.Pool
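A minimal example of the concurrent.futures API mentioned above:

    from concurrent.futures import ProcessPoolExecutor

    def square(x):
        return x * x

    if __name__ == '__main__':
        with ProcessPoolExecutor(max_workers=2) as executor:
            print(list(executor.map(square, range(10))))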
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker <http://bugs.python.org/issue21779>
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker <http://bugs.python.org/issue21664>
Richard Oudkerk added the comment:
Updated version of the patch. Still needs docs.
--
Added file: http://bugs.python.org/file35902/memoryview-array-value.patch
Richard Oudkerk added the comment:
I can't remember why I did not use fstat() -- probably it did not occur to me.
New submission from Martin Richard:
Hi,
Following the discussion on the python-tulip group, I'd like to propose a patch
for the documentation of StreamWriter.drain().
This patch aims to give a better description of what drain() is intended to do,
and when to use it. In particula
Changes by Martin Richard :
--
hgrepos: -273
Python tracker <http://bugs.python.org/issue22348>
Martin Richard added the comment:
Here is another patch which mentions high and low water limits. I think it's
better to talk about it, since it tells exactly what a "full buffer" and
"partially drained" mean.
On the other hand, StreamWriter wraps the transport but
Richard Oudkerk added the comment:
I guess this is a case where we should not be trying to import the main module.
The code for determining the path of the main module (if any) is rather crufty.
What is sys.modules['__main__'] and sys.modules['__main__'].__file__ if
Richard Oudkerk added the comment:
So there are really two situations:
1) The __main__ module *should not* be imported. This is the case if you use
__main__.py in a package or if you use nose to call test_main().
This should really be detected in get_preparation_data() in the parent process
Richard Oudkerk added the comment:
> I appear to be somehow getting child processes where __main__.__file__ is
> set, but __main__.__spec__ is not.
That seems to be true for the __main__ module even when multiprocessing is not
involved. Running a file /tmp/foo.py containing
impo
Richard Oudkerk added the comment:
Thanks for your hard work Nick!
Python tracker <http://bugs.python.org/issue19946>
Richard Oudkerk added the comment:
On 19/12/2013 10:00 pm, Nick Coghlan wrote:
> I think that needs to be fixed on the multiprocessing side rather than just
> in the tests - we shouldn't create a concrete context for a start method
> that isn't going to work on that platform
Richard Oudkerk added the comment:
How often has this happened?
If the machine was very loaded then maybe the timeout was not enough time for
the semaphore to be cleaned up by the tracker process. But I would expect 1
second to be more than ample
Richard Oudkerk added the comment:
It is probably harmless then.
I don't think increasing the timeout is necessary -- the multiprocessing tests
already take a long time.
Richard Oudkerk added the comment:
The following from the docs is wrong:
> ... module globals are no longer forced to None during interpreter
> shutdown.
Actually, in 3.4 module globals *sometimes* get forced to None during
interpreter shutdown, so the version the __del__ method can
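A common defensive pattern (a sketch, not taken from any patch here) is for __del__ to capture what it needs as default arguments, so it does not depend on module globals that may already have been set to None at shutdown:

    import os

    class FileHolder:
        def __init__(self, fd):
            self.fd = fd

        # os.close is captured when the function is defined, so the call still
        # works even if the module global "os" has been replaced by None
        # during interpreter shutdown.
        def __del__(self, _close=os.close):
            _close(self.fd)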
New submission from Richard Philips:
The reference to the pysqlite web page on:
http://docs.python.org/3.4/library/sqlite3.html
should be:
https://github.com/ghaering/pysqlite
--
assignee: docs@python
components: Documentation
messages: 208261
nosy: Richard.Philips, docs
Richard Oudkerk added the comment:
_overlapped is linked against the socket library, whereas _winapi is not, so
it can be bundled in with python3.dll.
I did intend to switch multiprocessing over to using _overlapped but I did
not get round to it.
Since this is a private module the names of
Richard Oudkerk added the comment:
This is expected. Killing processes which use shared locks is never going to
end well. Even without the lock deadlock, the data in the pipe would be liable
to be corrupted if a process is killed while putting or getting from a queue.
If you want to be
Richard Oudkerk added the comment:
LGTM
Python tracker <http://bugs.python.org/issue20540>
Richard Oudkerk added the comment:
BTW, I see little difference between 3.2 and the unpatched default branch on
MacOSX:
$ py-32/release/python.exe ~/Downloads/test_manager.py
0.0007331371307373047
8.20159912109375e-05
9.417533874511719e-05
8.082389831542969e-05
7.796287536621094e-05
Richard Oudkerk added the comment:
On Unix, using the fork start method (which was the only option till 3.4),
every sub process will incref every shared object for which its parent has a
reference.
This is deliberate because there is not really any way to know which shared
objects a
Richard Oudkerk added the comment:
> Thanks Richard. The set_start_method() call will affect any process
> started from that time on? Is it possible to change idea at some point in
> the future?
You can use different start methods in the same program by creating different contexts.
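A short sketch of what that looks like with the 3.4 API (assuming both start methods are available on the platform, which excludes 'fork' on Windows):

    import multiprocessing

    def work(q):
        q.put('done')

    if __name__ == '__main__':
        for method in ('spawn', 'fork'):
            ctx = multiprocessing.get_context(method)
            q = ctx.Queue()
            p = ctx.Process(target=work, args=(q,))
            p.start()
            p.join()
            print(method, q.get())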
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker <http://bugs.python.org/issue7503>
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker <http://bugs.python.org/issue20633>
Richard Oudkerk added the comment:
I am not sure method_to_typeid and create_method were really intended to be
public -- they are only used by Pool proxies.
You can maybe work around the problem by registering a second typeid without
specifying callable. That can be used in method_to_typeid
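A hedged sketch of that workaround (Foo and Bar are hypothetical classes, not from the original report):

    from multiprocessing.managers import BaseManager

    class Bar:                          # hypothetical result type
        def value(self):
            return 42

    class Foo:                          # hypothetical managed type
        def get_bar(self):
            return Bar()

    class MyManager(BaseManager):
        pass

    # Second typeid registered without a callable; it exists only so that
    # method_to_typeid can name it for the return value of get_bar().
    MyManager.register('Bar', create_method=False)
    MyManager.register('Foo', callable=Foo,
                       method_to_typeid={'get_bar': 'Bar'})

    if __name__ == '__main__':
        with MyManager() as manager:
            foo = manager.Foo()
            bar = foo.get_bar()         # a proxy to a Bar living in the manager
            print(bar.value())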
New submission from Richard Shapiro :
in Modules/timemodule.c, in the routine time_strftime, there is a range
check on the tm_isdst field:
    if (buf.tm_isdst < -1 || buf.tm_isdst > 1) {
        PyErr_SetString(PyExc_ValueError,
                        "daylight savings
Richard Shapiro added the comment:
Here's a patch to normalize the results of the various system calls
which return time information. This was against the source for Python 2.5.1.
*** timemodule.c    Tue Sep 8 10:28:31 2009
--- /home/rshapiro/216/redist/Python-2.5.1/Mo
New submission from Richard Jones :
I'm using python 2.6 maint SVN r75588 and get the attached build log
when I run:
configure --enable-framework
make
Failed to build these modules:
_curses _curses_panel _tkinter
readline
--
components: Build
files: p
New submission from Richard Hansen :
The description of the unicode_escape codec says that it produces "a
string that is suitable as Unicode literal in Python source code." [1]
Unfortunately, this is not true as it does not escape quotes. For example:
print u'a\'b&
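A short demonstration of the problem on Python 2 (the embedded quote is left alone, so wrapping the codec's output in matching quotes does not give back a valid literal):

    # Python 2
    >>> u"a'b".encode('unicode_escape')
    "a'b"
    >>> # so  u'a'b'  is not a valid Unicode literal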
Richard Hansen added the comment:
> If we change this, the encoder should quote both single and double
> quotes - simply because it is not known whether the literal
> will use single or double quotes.
Or document that single quotes are always escaped so that the user knows he/she
c
Richard Hansen added the comment:
I thought about raw_unicode_escape more, and there's a way to escape quotes:
use unicode escape sequences (e.g., ur'\u0027'). I've attached a new patch
that does the following:
* backslash-escapes single quotes when encoding with the
Richard Hansen added the comment:
Attached is a patch to the unicode unit tests. It adds tests for the following:
* unicode_escape escapes single quotes
* raw_unicode_escape escapes single quotes
* raw_unicode_escape escapes backslashes
--
Added file: http://bugs.python.org
Changes by Richard Hansen :
Removed file: http://bugs.python.org/file15742/unicode_escape_reorg.patch
Python tracker <http://bugs.python.org/issue7615>
Richard Hansen added the comment:
Attaching updated unicode_escape_reorg.patch. This fixes two additional issues:
* don't call _PyString_Resize() on an empty string because there is only one
empty string instance, and that instance is returned when creating an empty
string
* make
Changes by Richard Hansen :
Removed file: http://bugs.python.org/file15748/unicode_escape_reorg.patch
Python tracker <http://bugs.python.org/issue7615>
Richard Hansen added the comment:
Attaching updated unicode_escape_reorg.patch. This addresses two additional
issues:
* removes pickle's workaround of raw-unicode-escape's broken escaping
* eliminates duplicated code (the raw escape encode function was copied with
onl
Richard Hansen added the comment:
I believe this issue is ready to be bumped to the "patch review" stage.
Thoughts?
Richard Hansen added the comment:
> Does the last patch obsolete the first two? If so please delete the
> obsolete ones.
Yes and no -- it depends on what the core Python developers want and are
comfortable with:
* unicode_escape_single_quotes.patch: Only escapes single quotes,