[issue17314] Stop using imp.find_module() in multiprocessing

2013-05-25 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Looks good to me.

(Any particular reason for ignoring AttributeError?)

--

___
Python tracker 
<http://bugs.python.org/issue17314>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue17314] Stop using imp.find_module() in multiprocessing

2013-05-25 Thread Richard Oudkerk

Richard Oudkerk added the comment:

The unit tests pass with the patch already (if we don't delete the "import imp" 
line).

What attributes will be set by init_module_attrs()?

--

___
Python tracker 
<http://bugs.python.org/issue17314>
___



[issue18078] threading.Condition to allow notify on a specific waiter

2013-05-28 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> BTW, I find .notify(N) not very useful, because the docs say it may wake 
> more threads on a different implementation, and also I can never know 
> whom I am waking.

The fact that more than N threads can wake up is not a problem if you are 
retesting an appropriate predicate as soon as each waiting thread awakes.  
(And if you are not retesting, then you are abusing the condition variable.)

But it might be nice to be able to wait on multiple conditions at once, 
assuming they are associated with the same lock.  Maybe we could have a static 
method

Condition.wait_for_any(cond_to_pred: dict, timeout=None) -> condition

where cond_to_pred is a mapping from condition variable to predicate function 
to test for.  The return value would be the condition variable whose predicate 
is true (or None if there was a timeout).  So then

cond.wait_for(pred)

would be equivalent to

Condition.wait_for_any({cond: pred}) is cond
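A rough sketch of how such a helper might behave (wait_for_any() is a hypothetical API; this polling fallback stands in for a real implementation, which would register a single inner lock with every condition's waiter list):

```python
import threading
import time

def wait_for_any(cond_to_pred, timeout=None):
    # Hypothetical helper: all conditions must share one underlying lock,
    # which the caller already holds.  This sketch polls the predicates
    # with short waits; a real implementation would instead add one inner
    # lock to every condition's _waiters list.
    deadline = None if timeout is None else time.monotonic() + timeout
    any_cond = next(iter(cond_to_pred))  # all share the same lock
    while True:
        for cond, pred in cond_to_pred.items():
            if pred():
                return cond
        remaining = None if deadline is None else deadline - time.monotonic()
        if remaining is not None and remaining <= 0:
            return None  # timed out
        any_cond.wait(0.05 if remaining is None else min(remaining, 0.05))

# Example: two conditions on one lock; a worker makes the second one ready.
lock = threading.Lock()
c1, c2 = threading.Condition(lock), threading.Condition(lock)
items = []

def producer():
    with lock:
        items.append('x')
        c2.notify()

threading.Thread(target=producer).start()
with lock:
    ready = wait_for_any({c1: lambda: False, c2: lambda: bool(items)}, timeout=5)
print(ready is c2)   # → True
```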

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18078>
___



[issue18078] threading.Condition to allow notify on a specific waiter

2013-05-28 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> This could solve the waiting problem for the "thread", but also may 
> keep the other Condition objs waiting -- and that may not be a problem 
> because I'm already using .notify_all()

I don't understand what you mean.

> Probably this function has more use cases than my original idea, but 
> is it simple to wait on several locks?

It would be for waiting for several conditions associated with the same lock, 
not for waiting for several locks.

> It could be in the form:
>
> Condition.wait_for_any(cond_to_pred: dict|list, timeout=None) -> condition
>
> For cases when there's no need for a predicate function.

There is always a need for a predicate function.

--

___
Python tracker 
<http://bugs.python.org/issue18078>
___



[issue18078] threading.Condition to allow notify on a specific waiter

2013-05-28 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> You cannot associate several conditions to the *inner lock*, because you 
> don't have access to them (otherwise I wouldn't open this issue).

Condition.wait_for_any() would create a single inner lock and add it to the 
_waiters list for each of the condition variables.  notify() and notify_all() 
would need to deal with the possibility that releasing the inner lock fails 
with ThreadError because it has already been unlocked.

> You may not need to test for a predicate when using .wait(), only when 
> you're using .wait_for().
> This is what I'm most interested in mimicking.

(Ignoring timeouts) wait() should be used with the idiom

    while not <condition>:
        cond.wait()

This allows the woken thread to check whether it is really supposed to 
continue -- it sounds like you are not doing this.  The only advantage of 
using wait() over wait_for() is that sometimes it avoids having to create a 
named function or lambda function, as in

    cond.wait_for(lambda: not <something>())
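The retest-in-a-loop idiom looks like this in full (a minimal, self-contained sketch using a plain list as the shared state):

```python
import threading

lock = threading.Lock()
cond = threading.Condition(lock)
queue = []
results = []

def consumer():
    with cond:
        # Retest the predicate every time we wake up; spurious wakeups
        # and extra notifications are then harmless.
        while not queue:
            cond.wait()
        results.append(queue.pop(0))

t = threading.Thread(target=consumer)
t.start()
with cond:
    queue.append('item')
    cond.notify()
t.join()
print(results)   # → ['item']
```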

--

___
Python tracker 
<http://bugs.python.org/issue18078>
___



[issue16895] Batch file to mimic 'make' on Windows

2013-05-28 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I can't say I know enough about batch files to understand much of the code, but 
a few notes:

Windows XP does not have the command "where" which you use -- Python 3.4 will 
still support XP.

Except perhaps for looping, I would prefer to get rid of the use of goto.  The 
fact that some goto targets end in "exit /b ..." makes it very confusing as to 
where "exit /b" will return control.

The initial pushd is matched by various popd's which are scattered over 
hundreds of lines (including one in :usage).  I think it would be better to 
keep matching pushd/popd reasonably close together.  For instance, I think you 
could do something like

...
pushd "%~dp0"
call :main ...
popd
exit /b

:main
...
exit /b

It would also be helpful if the end of the subroutines were marked with a 
comment like

rem end :foo

--

___
Python tracker 
<http://bugs.python.org/issue16895>
___



[issue16895] Batch file to mimic 'make' on Windows

2013-05-28 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> Can't this just be a Python script?

That would cause bootstrap issues for people who do not already have 
python installed.

--

___
Python tracker 
<http://bugs.python.org/issue16895>
___



[issue18040] SIGINT catching regression on windows in 2.7

2013-05-30 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I am not too familiar with the signal handling machinery.  (I only did 
some refactoring to expose the event handle already used by time.sleep().)

The change looks reasonable, but I am also not sure how necessary it is.

--

___
Python tracker 
<http://bugs.python.org/issue18040>
___



[issue18120] multiprocessing: garbage collector fails to GC Pipe() end when spawning child process

2013-06-02 Thread Richard Oudkerk

Richard Oudkerk added the comment:

The way to deal with this is to pass the write end of the pipe to the child 
process so that the child process can explicitly close it -- there is no reason 
to expect garbage collection to make this happen automatically.
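The pattern described might be sketched as follows (names are illustrative; a fork context is used explicitly, matching the POSIX discussion here):

```python
import multiprocessing as mp

ctx = mp.get_context('fork')   # POSIX fork, as in this discussion

def child(reader, writer):
    # Explicitly close the inherited copy of the write end; otherwise
    # recv() could never raise EOFError, because the child itself would
    # still hold an open writer for the pipe.
    writer.close()
    try:
        while True:
            reader.recv()
    except EOFError:
        pass
    reader.close()

r, w = ctx.Pipe(False)
p = ctx.Process(target=child, args=(r, w))
p.start()
w.send('a')
w.send('b')
w.close()   # the child's copy is already closed, so its recv() now sees EOF
p.join()
print(p.exitcode)   # → 0
```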

You don't explain the difference between functional.py and nonfunctional.py.  
The most obvious thing is the fact that nonfunctional.py seems to have messed 
up indentation: you have a while loop in the class declaration instead of in 
the run() method.

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18120>
___



[issue18120] multiprocessing: garbage collector fails to GC Pipe() end when spawning child process

2013-06-02 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> The write end of that pipe goes out of scope and has no references in the 
> child thread.  Therefore, per my understanding, it should be garbage 
> collected (in the child thread).  Where am I wrong about this?

The function which starts the child process by (indirectly) invoking os.fork() 
never gets a chance to finish in the child process, so nothing "goes out of 
scope".

Anyway, relying on garbage collection to close resources for you is always a 
bit dodgy.

--

___
Python tracker 
<http://bugs.python.org/issue18120>
___



[issue18120] multiprocessing: garbage collector fails to GC Pipe() end when spawning child process

2013-06-02 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> So you're telling me that when I spawn a new child process, I have to 
> deal with the entirety of my parent process's memory staying around 
> forever?

With a copy-on-write implementation of fork() this is quite likely to use less 
memory than starting a fresh process for the child.  And it is certainly much 
faster.

> I would have expected this to call to fork(), which gives the child 
> plenty of chance to clean up, then call exec() which loads the new 
> executable.

There is an experimental branch (http://hg.python.org/sandbox/sbt) which 
optionally behaves like that.  Note that "clean up" means closing all fds not 
explicitly passed, and has nothing to do with garbage collection.

--

___
Python tracker 
<http://bugs.python.org/issue18120>
___



[issue18121] antigravity leaks subprocess.Popen object

2013-06-02 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Presumably this is caused by the fact that Popen.__del__() resurrects self by 
appending self to _active if the process is still alive.

On Windows this is unnecessary.  On Unix it would be more sensible to just 
append the *pid* to _active.

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18121>
___



[issue18120] multiprocessing: garbage collector fails to GC Pipe() end when spawning child process

2013-06-02 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> What I'm still trying to grasp is why Python explicitly leaves the
> parent processes info around in the child.  It seems like there is
> no benefit (besides, perhaps, speed) and that this choice leads to
> non-intuitive behavior - like this.

The Windows implementation does not use fork() but still exhibits the 
same behaviour in this respect (except in the experimental branch 
mentioned before).  The real issue is that fds/handles will get 
inherited by the child process unless you explicitly close them.  
(Actually, on Windows you need to find a way to inject specific handles 
from the parent into the child process.)

The behaviour you call non-intuitive is natural to someone used to using 
fork() and pipes on Unix.  multiprocessing really started as a 
cross-platform work-around for the lack of fork() on Windows.

Using fork() is also a lot more flexible: many things that work fine on 
Unix will not work correctly on Windows because of pickle issues.

The main problem with fork() is that forking a process with multiple 
threads can be problematic.

--

___
Python tracker 
<http://bugs.python.org/issue18120>
___



[issue18120] multiprocessing: garbage collector fails to GC Pipe() end when spawning child process

2013-06-03 Thread Richard Oudkerk

Richard Oudkerk added the comment:

On 03/06/2013 1:02am, spresse1 wrote:
> What's really bugging me is that it remains open and I can't fetch a reference.
> If I could do either of these, I'd be happy.
> ...
> Perhaps I really want to be implementing with os.fork().  Sigh, I was trying
> to save myself some effort...

I don't see how using os.fork() would make things any easier.  In either 
case you need to prepare a list of fds which the child process should 
close before it starts, or alternatively a list of fds *not* to close.

The real issue is that there is no way for multiprocessing (or 
os.fork()) to automatically infer which fds the child process is going 
to use: if you don't explicitly close unneeded ones then the child process 
will inherit all of them.

It might be helpful if multiprocessing exposed a function to close all 
fds except those specified -- see close_all_fds_except() at

http://hg.python.org/sandbox/sbt/file/5d4397a38445/Lib/multiprocessing/popen_spawn_posix.py#l81

Remembering not to close stdout (fd=1) and stderr (fd=2), you could use 
it like

    def foo(reader):
        close_all_fds_except([1, 2, reader.fileno()])
        ...

    r, w = Pipe(False)
    p = Process(target=foo, args=(r,))

--

___
Python tracker 
<http://bugs.python.org/issue18120>
___



[issue18122] RuntimeError: not holding the import lock

2013-06-03 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Forking as a side effect of importing a module is evil.  I think raising a 
RuntimeError is preferable to trying to make it Just Work.

But maybe one could do

void
_PyImport_ReInitLock(void)
{
    if (import_lock != NULL) {
        import_lock = PyThread_allocate_lock();
        PyThread_acquire_lock(import_lock, WAIT_LOCK);
    }
    import_lock_thread = PyThread_get_thread_ident();
    _PyImport_ReleaseLock();
}

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18122>
___



[issue18120] multiprocessing: garbage collector fails to GC Pipe() end when spawning child process

2013-06-03 Thread Richard Oudkerk

Richard Oudkerk added the comment:

On 03/06/2013 3:07pm, spresse1 wrote:
> I could reimplement the close_all_fds_except() call (in straight python, using
> os.closerange()).  That seems like a reasonable solution, if a bit of a hack.
> However, given that pipes are exposed by multiprocessing, it might make sense
> to try to get this function incorporated into the main version of it?

close_all_fds_except() is already pure python:

    try:
        MAXFD = os.sysconf("SC_OPEN_MAX")
    except:
        MAXFD = 256

    def close_all_fds_except(fds):
        fds = list(fds) + [-1, MAXFD]
        fds.sort()
        for i in range(len(fds) - 1):
            os.closerange(fds[i]+1, fds[i+1])

> I also think that with introspection it would be possible for the
> subprocessing module to be aware of which file descriptors are still
> actively referenced.  (ie: 0,1,2 always referenced, introspect through
> objects in the child to see if they have the file.fileno() method)
> However, I can't state this as a certainty without going off and
> actually implementing such a version.  Additionally, I can make
> absolutely no promises as to the speed of this.  Perhaps, if it
> functioned, it would be an option one could turn on for cases like mine.

So you want a way to visit all objects directly or indirectly referenced 
by the process object, so you can check whether they have a fileno() 
method?  At the C level all object types which support GC define a 
tp_traverse function, so maybe that could be made available from pure 
Python.

But really, this sounds rather fragile.

--

___
Python tracker 
<http://bugs.python.org/issue18120>
___



[issue18120] multiprocessing: garbage collector fails to GC Pipe() end when spawning child process

2013-06-03 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Actually, you can use gc.get_referents(obj) which returns the direct children 
of obj (and is presumably implemented using tp_traverse).
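For instance, a traversal along those lines might look like this (a sketch; `Holder` is a hypothetical container standing in for a process object's state):

```python
import gc
import os

class Holder:
    # Hypothetical container standing in for a Process object's state.
    def __init__(self, f):
        self.f = f

f = open(os.devnull, 'rb')
h = Holder(f)

def has_fileno_child(obj, depth=2):
    # Visit obj, then its direct children via gc.get_referents()
    # (an instance's children are its __dict__ and its type),
    # looking for anything that exposes a fileno() method.
    if hasattr(obj, 'fileno'):
        return True
    if depth == 0:
        return False
    return any(has_fileno_child(child, depth - 1)
               for child in gc.get_referents(obj))

found = has_fileno_child(h)
print(found)   # → True
f.close()
```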

I will close.

--
resolution:  -> rejected
stage:  -> committed/rejected
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue18120>
___



[issue18078] threading.Condition to allow notify on a specific waiter

2013-06-04 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> Furthermore, the complexity is rather bad: if T is the average number
> of waiting threads, and C the number of conditions being waited on, the
> wait is O(C) (appending to C wait queues) and wakeup is O(CT) (C
> removals from a T-length deque).

Which just means that waiting on C conditions is C times more expensive than 
waiting on 1 currently is.  That seems reasonable enough to me, and anyway, I 
would expect C to be fairly small.

Note that the alternative is to use a single condition and use notify_all() 
instead of notify().  That is likely to be much more expensive because every 
waiting thread must wake up to see if it should continue.

But I am still not sure it is worth it.

BTW, I think it would be better to have wait_for_any() return a list of ready 
conditions rather than a boolean.

--

___
Python tracker 
<http://bugs.python.org/issue18078>
___



[issue17931] PyLong_FromPid() is not correctly defined on Windows 64-bit

2013-06-05 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> pid_t is HANDLE on Windows, which is a pointer.

I think this is wrong.

The signature of getpid() is

int _getpid(void);

so pid_t should be equivalent to int.

The complication is that the return values of spawn*() etc are process handles 
(cast to intptr_t), not pids:

intptr_t _spawnv(int mode, const char *cmdname, const char *const *argv);

See

http://msdn.microsoft.com/en-us/library/t2y34y40%28v=vs.100%29.aspx
http://msdn.microsoft.com/en-us/library/7zt1y878%28v=vs.80%29.aspx

--
nosy: +sbt
status: closed -> open

___
Python tracker 
<http://bugs.python.org/issue17931>
___



[issue17931] PyLong_FromPid() is not correctly defined on Windows 64-bit

2013-06-05 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> @sbt: Would you like to have a strict separation between UNIX-like pid 
> (pid_t) and Windows process identifier (HANDLE)? 

Yes.  And I would certainly like SIZEOF_PID_T == sizeof(pid_t) ;-)

Note that _winapi takes the policy of treating HANDLE as an unsigned quantity 
(as PyLong_*VoidPtr() does for pointers).  I am not sure if signed or unsigned 
is better, but I lean towards unsigned.  It is easy enough to cast to intptr_t 
if we need to.

I think it is enough to treat HANDLE as void*, but adding PyLong_*Handle() is 
easy enough.

There does not seem to be a format character for void* (or size_t), and adding 
one would be useful.

Or maybe rather than adding ever more format characters which are aliases for 
old ones, we could just create macros like

#define PY_PARSE_INT "i"
#define PY_PARSE_UINTPTR_T "K"
#define PY_PARSE_VOID_PTR PY_PARSE_UINTPTR_T
#define PY_PARSE_HANDLE PY_PARSE_UINTPTR_T
#define PY_PARSE_PID_T PY_PARSE_INT

--

___
Python tracker 
<http://bugs.python.org/issue17931>
___



[issue17931] PyLong_FromPid() is not correctly defined on Windows 64-bit

2013-06-05 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I see _Py_PARSE_PID already exists but no others ...

--

___
Python tracker 
<http://bugs.python.org/issue17931>
___



[issue17931] PyLong_FromPid() is not correctly defined on Windows 64-bit

2013-06-05 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Attached is a patch that adds _Py_PARSE_INTPTR and _Py_PARSE_UINTPTR to 
Include/longobject.h.

It also uses _Py_PARSE_INTPTR in Modules/posixmodule.c and PC/msvcrtmodule.c 
and removes the definition for SIZEOF_PID.

--
Added file: http://bugs.python.org/file30472/py_parse_intptr.patch

___
Python tracker 
<http://bugs.python.org/issue17931>
___



[issue17931] PyLong_FromPid() is not correctly defined on Windows 64-bit

2013-06-06 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue17931>
___



[issue15528] Better support for finalization with weakrefs

2013-06-08 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> PJE suggests importing atexit and registering finalize only when it's 
> actually used. I guess this would be the easiest workaround.

Done.

--
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue15528>
___



[issue18174] Make regrtest with --huntrleaks check for fd leaks

2013-06-09 Thread Richard Oudkerk

New submission from Richard Oudkerk:

regrtest already tests for refcount leaks and memory allocation leaks.  It can 
also be made to check for file descriptor leaks (and perhaps also handles on 
Windows).

Running with the attached patch makes it look like test_openpty, test_shutil, 
test_subprocess and test_uuid all leak fds on Linux, but I have not investigated:

$ ./python -m test.regrtest -R 3:3 test_openpty test_shutil test_subprocess 
test_uuid
[1/4] test_openpty
123456
..
test_openpty leaked [2, 2, 2] fds, sum=6
[2/4/1] test_shutil
beginning 6 repetitions
123456
..
test_shutil leaked [4, 4, 4] fds, sum=12
[3/4/2] test_subprocess
beginning 6 repetitions
123456
..
test_subprocess leaked [5, 5, 5] fds, sum=15
[4/4/3] test_uuid
beginning 6 repetitions
123456
..
test_uuid leaked [1, 1, 1] fds, sum=3
4 tests failed:
test_openpty test_shutil test_subprocess test_uuid

--
files: fdleak.patch
keywords: patch
messages: 190871
nosy: sbt
priority: normal
severity: normal
status: open
title: Make regrtest with --huntrleaks check for fd leaks
Added file: http://bugs.python.org/file30518/fdleak.patch

___
Python tracker 
<http://bugs.python.org/issue18174>
___



[issue18175] os.listdir(fd) leaks fd on error

2013-06-09 Thread Richard Oudkerk

New submission from Richard Oudkerk:

If os.listdir() is used with an fd, but fdopendir() fails (e.g. if the fd 
is a normal file) then a duplicated fd is leaked.

This explains the leaks in test_shutil mentioned in #18174.

--
messages: 190875
nosy: sbt
priority: normal
severity: normal
status: open
title: os.listdir(fd) leaks fd on error

___
Python tracker 
<http://bugs.python.org/issue18175>
___



[issue18180] Refleak in test_imp on Windows

2013-06-10 Thread Richard Oudkerk

New submission from Richard Oudkerk:

Seems to be in error path of _PyImport_GetDynLoadWindows().

--
files: load_dynamic.patch
keywords: patch
messages: 190901
nosy: sbt
priority: normal
severity: normal
status: open
title: Refleak in test_imp on Windows
Added file: http://bugs.python.org/file30524/load_dynamic.patch

___
Python tracker 
<http://bugs.python.org/issue18180>
___



[issue18180] Refleak in test_imp on Windows

2013-06-10 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed
type:  -> resource usage
versions: +Python 3.3, Python 3.4

___
Python tracker 
<http://bugs.python.org/issue18180>
___



[issue18174] Make regrtest with --huntrleaks check for fd leaks

2013-06-10 Thread Richard Oudkerk

Richard Oudkerk added the comment:

The test_shutil leak is caused by #17899.  The others are fixed by  
a7381fe515e8 and 46fe1bb0723c.

--

___
Python tracker 
<http://bugs.python.org/issue18174>
___



[issue18174] Make regrtest with --huntrleaks check for fd leaks

2013-06-12 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Updated version which adds checks for handle leaks on Windows.

--
Added file: http://bugs.python.org/file30561/fdleak.patch

___
Python tracker 
<http://bugs.python.org/issue18174>
___



[issue9122] Problems with multiprocessing, Python embedding and Windows

2013-06-14 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue9122>
___



[issue18214] Stop purging modules which are garbage collected before shutdown

2013-06-14 Thread Richard Oudkerk

New submission from Richard Oudkerk:

Currently when a module is garbage collected its dict is purged by replacing 
all values except __builtins__ by None.  This helps clear things at shutdown. 

But this can cause problems if it occurs *before* shutdown: if we use a 
function defined in a module which has been garbage collected, then that 
function must not depend on any globals, because they will have been purged.

Usually this problem only occurs with programs which manipulate sys.modules.  
For example when setuptools and nose run tests they like to reset sys.modules 
each time.  See for example

  http://bugs.python.org/issue15881

See also

  http://bugs.python.org/issue16718

The trivial patch attached prevents the purging behaviour for modules gc'ed 
before shutdown begins.  Usually garbage collection will end up clearing the 
module's dict anyway.

I checked the count of refs and blocks reported on exit when running a trivial 
program and a full regrtest (which will cause quite a bit of sys.modules 
manipulation).  The difference caused by the patch is minimal.

Without patch:
  do nothing:    [20234 refs, 6582 blocks]
  full regrtest: [92713 refs, 32597 blocks]

With patch:
  do nothing:    [20234 refs, 6582 blocks]
  full regrtest: [92821 refs, 32649 blocks]

--
files: prevent-purge-before-shutdown.patch
keywords: patch
messages: 191135
nosy: sbt
priority: normal
severity: normal
status: open
title: Stop purging modules which are garbage collected before shutdown
versions: Python 3.4
Added file: http://bugs.python.org/file30583/prevent-purge-before-shutdown.patch

___
Python tracker 
<http://bugs.python.org/issue18214>
___



[issue18212] No way to check whether Future is finished?

2013-06-14 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Do you want something like

f.done() and not f.cancelled() and f.exception() is None
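That check can be wrapped in a small helper (a sketch using concurrent.futures; the name `succeeded` is illustrative):

```python
from concurrent.futures import Future

def succeeded(f):
    # True only once f has finished without being cancelled or raising.
    # Short-circuiting matters here: f.exception() would itself raise
    # CancelledError if called on a cancelled future.
    return f.done() and not f.cancelled() and f.exception() is None

ok = Future()
ok.set_result(42)
print(succeeded(ok))        # → True

failed = Future()
failed.set_exception(ValueError('boom'))
print(succeeded(failed))    # → False

print(succeeded(Future()))  # → False (still pending)
```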

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18212>
___



[issue18214] Stop purging modules which are garbage collected before shutdown

2013-06-15 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
nosy: +pitrou

___
Python tracker 
<http://bugs.python.org/issue18214>
___



[issue18214] Stop purging modules which are garbage collected before shutdown

2013-06-15 Thread Richard Oudkerk

Richard Oudkerk added the comment:

On 15/06/2013 7:11pm, Antoine Pitrou wrote:
>> Usually garbage collection will end up clearing the module's dict anyway.
>
> This is not true, since global objects might have a __del__ and then hold
> the whole module dict alive through a reference cycle. Happily though,
> PEP 442 is going to make that concern obsolete.

I did say "usually".

> As for the interpreter shutdown itself, I have a pending patch (post-PEP 442)
> to get rid of the globals cleanup as well. It may be better to merge the two 
> approaches.

So you would just depend on garbage collection?  Do you know how many 
refs/blocks are left at exit if one just uses garbage collection 
(assuming PEP 442 is in effect)?  I suppose adding GC support to those 
modules which currently lack it would help a lot.

BTW, I had a more complicated patch which keeps track of module dicts 
using weakrefs and purges any which were left after garbage collection 
has had a chance to free stuff.  But most module dicts ended up being 
purged anyway, so it did not seem worth the hassle when a two-line patch 
mostly fixes the immediate problem.

--

___
Python tracker 
<http://bugs.python.org/issue18214>
___



[issue18252] timeit makes code run faster?

2013-06-18 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I think if you use timeit then the code is wrapped inside a function before it 
is compiled.  This means that your code can mostly use faster local lookups 
rather than global lookups.
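The effect is visible in the bytecode: compiling the same statements at module level versus inside a function body (which is roughly what timeit's template does) turns the name lookups from LOAD_NAME/LOAD_GLOBAL into LOAD_FAST. A small sketch:

```python
import dis

def opnames(source, wrap):
    # Optionally wrap the source in a function body, the way timeit
    # compiles your statements inside a generated inner() function.
    if wrap:
        source = 'def inner():\n' + ''.join(
            '    ' + line + '\n' for line in source.splitlines())
    code = compile(source, '<demo>', 'exec')
    if wrap:
        # Pull out the code object of the generated inner function.
        code = next(c for c in code.co_consts if hasattr(c, 'co_code'))
    return {ins.opname for ins in dis.get_instructions(code)}

# Module level: reading 'x' is a dict-based name lookup.
print('LOAD_NAME' in opnames('x = 1\ny = x', wrap=False))   # → True
# Inside a function: the same read compiles to a fast local access.
print('LOAD_FAST' in opnames('x = 1\ny = x', wrap=True))    # → True
```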

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18252>
___



[issue18252] timeit makes code run faster?

2013-06-18 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
stage: committed/rejected -> 

___
Python tracker 
<http://bugs.python.org/issue18252>
___



[issue18252] timeit makes code run faster?

2013-06-18 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
stage:  -> committed/rejected

___
Python tracker 
<http://bugs.python.org/issue18252>
___



[issue16507] Patch selectmodule.c to support WSAPoll on Windows

2013-06-20 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
keywords: +gsoc -patch
resolution:  -> rejected
stage:  -> committed/rejected
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue16507>
___



[issue9122] Problems with multiprocessing, Python embedding and Windows

2013-06-20 Thread Richard Oudkerk

Richard Oudkerk added the comment:

We don't do non-security updates on Python 2.6 anymore.

As a workaround you might be able to do something like

import sys, multiprocessing
sys.frozen = True  # or multiprocessing.forking.WINEXE = True

...

if __name__ == '__main__':
multiprocessing.freeze_support()
...

(I am not familiar with using Cython.)

--

___
Python tracker 
<http://bugs.python.org/issue9122>
___



[issue6461] multiprocessing: freezing apps on Windows

2013-06-20 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I just tried freezing the program

  from multiprocessing import freeze_support,Manager

  if __name__ == '__main__':
  freeze_support()
  m=Manager()
  l = m.list([1,2,3])
  l.append(4)
  print(l)
  print(repr(l))

using cx_Freeze with Python 2.7, and it worked fine:

  PS> cxfreeze.bat foo.py
  copying C:\Python27\lib\site-packages\cx_Freeze\bases\Console.exe -> 
C:\Tmp\dir\dist\foo.exe
  copying C:\Windows\system32\python27.dll -> C:\Tmp\dir\dist\python27.dll
  writing zip file C:\Tmp\dir\dist\foo.exe
  ...

  PS> dist\foo
  [1, 2, 3, 4]
  

--

___
Python tracker 
<http://bugs.python.org/issue6461>
___



[issue17018] Inconsistent behaviour of methods waiting for child process

2013-06-20 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue17018>
___



[issue18122] RuntimeError: not holding the import lock

2013-06-20 Thread Richard Oudkerk

Richard Oudkerk added the comment:

See also #9573 and #15914.

--

___
Python tracker 
<http://bugs.python.org/issue18122>
___



[issue15198] multiprocessing Pipe send of non-picklable objects doesn't raise error

2013-06-20 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
resolution:  -> works for me
stage:  -> committed/rejected
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue15198>
___



[issue18277] Queue is empty right after put from the same process/thread

2013-06-21 Thread Richard Oudkerk

Richard Oudkerk added the comment:

This is a very similar issue to #17985.

While it may seem counter-intuitive, I don't see how it makes any difference.  
Another thread/process might remove the item before you can get it.

I find it very difficult to imagine a real program where you can safely use 
get_nowait() without being prepared to handle an Empty exception.
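A minimal sketch of the defensive pattern (the drain() helper is hypothetical, not part of multiprocessing):

```python
import queue

def drain(q):
    """Collect all currently visible items, handling queue.Empty throughout.

    Works for queue.Queue and multiprocessing.Queue alike, since both
    raise queue.Empty from get_nowait().
    """
    items = []
    while True:
        try:
            items.append(q.get_nowait())
        except queue.Empty:
            break
    return items
```

With a multiprocessing.Queue the feeder thread may not yet have flushed a just-put item, so an Empty result immediately after put() is possible, which is exactly why the exception must always be handled.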

--

___
Python tracker 
<http://bugs.python.org/issue18277>
___



[issue18277] Queue is empty right after put from the same process/thread

2013-06-21 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Why would you use a multi-process queue to "pass messages from one part of the 
program to another part, in the same process and thread"?  Why not just use a 
deque?

Is this something you actually did, or are you just trying to come up with a 
plausible example?

And, of course, if you are sure there must be an item available, you could just 
use get() instead of get_nowait().

--

___
Python tracker 
<http://bugs.python.org/issue18277>
___



[issue17621] Create a lazy import loader mixin

2013-06-22 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Apologies for being dense, but how would you actually use such a loader?

Would you need to install something in sys.meta_path/sys.path_hooks?  Would it 
make all imports lazy or only imports of specified modules?
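For context, this work later surfaced as importlib.util.LazyLoader (Python 3.5). A usage sketch along the lines of the importlib documentation recipe, with a hypothetical lazy_import() helper:

```python
import importlib.util
import sys

def lazy_import(name):
    """Return a module whose execution is deferred until first attribute access."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)   # sets up deferral; the module body runs later
    return module
```

Only imports routed through the helper (or through a finder installed in sys.path_hooks that wraps loaders this way) become lazy; other imports are unaffected.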

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue17621>
___



[issue17621] Create a lazy import loader mixin

2013-06-22 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Shouldn't the import lock be held to make it threadsafe?

--

___
Python tracker 
<http://bugs.python.org/issue17621>
___



[issue18277] Queue is empty right after put from the same process/thread

2013-06-24 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> I did this to use the same abstraction that was used extensively for 
> other purposes, instead of recreating the same abstraction with a deque 
> as its basis. 

So you wanted a FIFO queue and preferred the API of Queue to that of deque?  
Well it will be *much* less efficient.  queue.Queue is also less efficient, but 
not by such a wide margin.

I have added a documentation note.
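For same-thread FIFO use the deque equivalent is trivial; a minimal sketch:

```python
from collections import deque

# In-process FIFO: append() on one end, popleft() on the other.
# No locks, no pickling, no feeder thread.
fifo = deque()
fifo.append("task-1")
fifo.append("task-2")
first = fifo.popleft()
```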

--

___
Python tracker 
<http://bugs.python.org/issue18277>
___



[issue7292] Multiprocessing Joinable race condition?

2013-06-24 Thread Richard Oudkerk

Richard Oudkerk added the comment:

unfinished_tasks is simply used as a counter.  It is only accessed while 
holding self._cond.  If you get this error then I think the error text is 
correct -- your program calls task_done() too many times.

The proposed patch silences the sanity check by making it block for a while 
instead.  The fact that the program seems to work without deadlocking does not mean 
the program or the patch is correct.

Without more information I will close this.

--
status: open -> pending

___
Python tracker 
<http://bugs.python.org/issue7292>
___



[issue15818] multiprocessing documentation of Process.exitcode

2013-06-24 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue15818>
___



[issue17621] Create a lazy import loader mixin

2013-06-24 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I was thinking about the line

  self.__dict__.update(state)

overwriting new data with stale data.

--

___
Python tracker 
<http://bugs.python.org/issue17621>
___



[issue18277] Queue is empty right after put from the same process/thread

2013-06-24 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> 1. "but should not cause any pratical difficulties" <-- you have a typo in 
> 'pratical' there.
> 2. What exactly do you mean by "managed" queues in the new addition?

Woops.  Fixed now; see 860fc6a2bd21, 347647a1f798.  A managed queue is 
one created like

 manager = multiprocessing.Manager()
 queue = manager.Queue()

queue is a proxy for a conventional queue object in a "manager" process.

> Also, did part #2 of the note come up in other reports?
> It seemed somewhat trivial (can't hurt though)...

Yes it did (but I can't find the report now).

--

___
Python tracker 
<http://bugs.python.org/issue18277>
___



[issue18277] Queue is empty right after put from the same process/thread

2013-06-24 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
assignee:  -> docs@python
components: +Documentation -IO, Interpreter Core
nosy: +docs@python
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed
type: behavior -> 

___
Python tracker 
<http://bugs.python.org/issue18277>
___



[issue17985] multiprocessing Queue.qsize() and Queue.empty() with different results

2013-06-24 Thread Richard Oudkerk

Richard Oudkerk added the comment:

This is really a documentation issue.  The doc fix for #18277 covers this.

--
components: +Library (Lib) -Extension Modules
resolution:  -> wont fix
stage:  -> committed/rejected
status: open -> closed
type:  -> behavior

___
Python tracker 
<http://bugs.python.org/issue17985>
___



[issue18329] for line in socket.makefile() speed degradation

2013-06-29 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I think in Python 3 makefile() returns a TextIOWrapper object by default. To 
force the use of binary you need to specify the mode:

fileobj = ss.makefile(mode='rb')

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18329>
___



[issue18331] runpy.run_path gives functions with corrupted .__globals__

2013-06-30 Thread Richard Oudkerk

Richard Oudkerk added the comment:

When modules are garbage collected the associated globals dict is purged -- see 
#18214.  This means that all values (except __builtins__) are replaced by None.

To work around this run_path() apparently returns a *copy* of the globals dict 
which was created before purging.
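The copy behaviour is easy to observe (a sketch; the throwaway script is hypothetical):

```python
import os
import runpy
import tempfile

# Run a throwaway module with run_path(); the returned mapping is a copy
# of the module globals, distinct from the dict the functions close over.
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("VALUE = 42\ndef get():\n    return VALUE\n")
    path = f.name
try:
    ns = runpy.run_path(path)
finally:
    os.unlink(path)
```

Note that ns["get"].__globals__ is the (purgeable) original dict, not ns itself.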

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18331>
___



[issue18332] _posix_listdir may leak FD

2013-06-30 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I think this is a duplicate of #17899.

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18332>
___



[issue17097] multiprocessing BaseManager serve_client() does not check EINTR on recv

2013-07-01 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue17097>
___



[issue18344] _bufferedreader_read_all() may leak reference to data

2013-07-02 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Patch attached.

--
keywords: +patch
Added file: http://bugs.python.org/file30748/buf-readall.patch

___
Python tracker 
<http://bugs.python.org/issue18344>
___



[issue17273] Pool methods can only be used by parent process.

2013-07-02 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed
title: multiprocessing.pool.Pool task/worker handlers are not fork safe -> Pool 
methods can only be used by parent process.
type: behavior -> 
versions: +Python 2.7, Python 3.4

___
Python tracker 
<http://bugs.python.org/issue17273>
___



[issue14206] multiprocessing.Queue documentation is lacking important details

2013-07-02 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue14206>
___



[issue14206] multiprocessing.Queue documentation is lacking important details

2013-07-02 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
resolution:  -> fixed
stage:  -> committed/rejected

___
Python tracker 
<http://bugs.python.org/issue14206>
___



[issue17261] multiprocessing.manager BaseManager cannot return proxies from proxies remotely (when listening on '')

2013-07-02 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed
versions: +Python 3.3, Python 3.4

___
Python tracker 
<http://bugs.python.org/issue17261>
___



[issue2286] Stack overflow exception caused by test_marshal on Windows x64

2013-07-02 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Reopening because I think this is again a problem for Win64 and 3.x.  The Win64 
buildbots always seem to crash on test_marshal (and I do too).

It appears to be BugsTestCase.test_loads_2x_code() which crashes, which is 
virtually the same as test_loads_recursion().

--
nosy: +sbt
status: closed -> open
versions: +Python 3.4

___
Python tracker 
<http://bugs.python.org/issue2286>
___



[issue2286] Stack overflow exception caused by test_marshal on Windows x64

2013-07-02 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Closing because this is caused by #17206 and is already discussed there.

--
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue2286>
___



[issue7292] Multiprocessing Joinable race condition?

2013-07-03 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
resolution:  -> works for me
stage: test needed -> committed/rejected
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue7292>
___



[issue18329] for line in socket.makefile() speed degradation

2013-07-04 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> I think I know what's going on here. For socket IO readline() uses a 
> readahead buffer size of 1.

Why is that?  I think that makefile(mode='rb') and fdopen() both create 
BufferedReader objects with the same buffer size.

It looks to me like there are the same number of reads for both cases (about 
120,000 ~ data_size/buffer_size).  But with SocketIO, there are 5 function 
calls for each read into the buffer.

--

___
Python tracker 
<http://bugs.python.org/issue18329>
___



[issue18329] for line in socket.makefile() speed degradation

2013-07-04 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Using

while True:
if not fileobj.read(8192):
break

instead of

for line in fileobj:
pass

results in higher throughput, but a similar slowdown with makefile().  So this 
is not a problem specific to readline().

--

___
Python tracker 
<http://bugs.python.org/issue18329>
___



[issue18329] for line in socket.makefile() speed degradation

2013-07-04 Thread Richard Oudkerk

Richard Oudkerk added the comment:

The only real reason for implementing SocketIO in pure Python is because read() 
and write() do not work on Windows with sockets.  (I think there are also a few 
complications involving SSL sockets and the close() method.)

On Windows I have implemented a file object type in C which works with pipe 
handles.  I hope to use it in multiprocessing at some point.  It would not be 
too difficult to support sockets as well and use that instead of SocketIO.  For 
Unix, FileIO can be used instead of SocketIO.

--

___
Python tracker 
<http://bugs.python.org/issue18329>
___



[issue18329] for line in socket.makefile() speed degradation

2013-07-04 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Ah.  I had not thought of socket timeouts.

--

___
Python tracker 
<http://bugs.python.org/issue18329>
___



[issue18329] for line in socket.makefile() speed degradation

2013-07-04 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I find that by adding the lines

fileobj.raw.readinto = ss.recv_into
fileobj.raw.read = ss.recv

the speed with makefile() is about 30% slower than with fdopen().

--

___
Python tracker 
<http://bugs.python.org/issue18329>
___



[issue6642] returning after forking a child thread doesn't call Py_Finalize

2013-07-05 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Shouldn't the child process be terminating using os._exit()?
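The usual pattern, sketched (POSIX-only, since it relies on os.fork()):

```python
import os

# A forked child should leave via os._exit() so interpreter finalization
# (atexit handlers, buffered-IO flushing, Py_Finalize) runs exactly once,
# in the parent.
pid = os.fork()
if pid == 0:
    os._exit(0)                    # child: bypass finalization entirely
else:
    _, status = os.waitpid(pid, 0)
    exit_code = os.WEXITSTATUS(status)
```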

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue6642>
___



[issue18382] multiprocessing's overlapped PipeConnection issues on Windows 8

2013-07-06 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Does that test always fail?

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18382>
___



[issue4708] os.pipe should return inheritable descriptors (Windows)

2013-07-09 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> - would improve POSIX compatibility, it mimics what os.pipe()
> does on those OS

I disagree.

On Windows fds can only be inherited if you start processes using the spawn*() 
family of functions.  If you start them using CreateProcess() then the 
underlying *handles* are inherited, but the *fds* are not.

In Python 2, os.spawn*() used spawn*(), so making os.pipe() return inheritable 
fds would have made some sense.  But in Python 3 os.spawn*() is implemented 
using subprocess/CreateProcess so fds will NOT be inherited (even if the 
wrapped handles are).

Note that subprocess *does* know how to redirect the standard streams to fds 
returned by os.pipe().

So for Python 3 I don't think there is any point in changing things.
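A sketch of that redirection; subprocess duplicates the handle behind the fd for the child, so the fd itself need not be inheritable:

```python
import os
import subprocess
import sys

r, w = os.pipe()
# Redirect the child's stdout to the pipe's write end; works on both
# POSIX and Windows despite the fd not being inherited directly.
subprocess.check_call([sys.executable, "-c", "print('hello')"], stdout=w)
os.close(w)
with os.fdopen(r) as reader:
    output = reader.read().strip()
```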

--

___
Python tracker 
<http://bugs.python.org/issue4708>
___



[issue4708] os.pipe should return inheritable descriptors (Windows)

2013-07-09 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Oops.  I confused os.popen() with os.spawn*().  os.spawnv() IS still 
implemented using spawnv() in Python 3.

--

___
Python tracker 
<http://bugs.python.org/issue4708>
___



[issue18455] Multiprocessing connection SocketClient retries connection on socket

2013-07-14 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18455>
___



[issue18344] _bufferedreader_read_all() may leak reference to data

2013-07-15 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue18344>
___



[issue18344] _bufferedreader_read_all() may leak reference to data

2013-07-15 Thread Richard Oudkerk

Changes by Richard Oudkerk :


--
resolution:  -> fixed
stage: needs patch -> committed/rejected

___
Python tracker 
<http://bugs.python.org/issue18344>
___



[issue18455] Multiprocessing connection SocketClient retries connection on socket

2013-07-15 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Thanks for the report.

This should be fixed now in 2.7.  (3.1 and 3.2 only get security fixes.)

--
resolution:  -> fixed
stage:  -> committed/rejected
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue18455>
___



[issue17778] Fix test discovery for test_multiprocessing.py

2013-07-16 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Thanks for the patches!

--
resolution:  -> fixed
stage: patch review -> committed/rejected
status: open -> closed

___
Python tracker 
<http://bugs.python.org/issue17778>
___



[issue18512] sys.stdout.write does not allow bytes in Python 3.x

2013-07-20 Thread Richard Oudkerk

Richard Oudkerk added the comment:

You can do

sys.stdout.buffer.write(b"hello")

See


http://docs.python.org/dev/library/io.html?highlight=buffer#io.TextIOBase.buffer

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18512>
___



[issue18078] threading.Condition to allow notify on a specific waiter

2013-07-24 Thread Richard Oudkerk

Richard Oudkerk added the comment:

IMHO

1) It should check all predicates.
2) It should return a list of ready conditions.
3) It should *not* accept a list of conditions.
4) from_condition() should be removed.

Also notify() should try again if releasing a waiter raises RuntimeError 
because it has already been released.  Otherwise notify() can be a noop even 
when there are threads waiting on the condition.

I would also put

for cond in conditions:
cond._remove_waiter(waiter)

in wait_for_any() in to a finally clause in case the wait was interrupted by 
KeyboardInterrupt.  (Accounting for KeyboardInterrupt everywhere is not 
feasible, but for blocking calls which can be interrupted I think we should 
try.)
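For reference, the single-condition predicate pattern that the proposed wait_for_any() would generalize (wait_for_any itself is not in the stdlib):

```python
import threading

cond = threading.Condition()
items = []

def consumer(results):
    with cond:
        # wait_for() retests the predicate on every wakeup, so extra or
        # spurious notifications are harmless.
        if cond.wait_for(lambda: items, timeout=5):
            results.append(items.pop())

def producer(value):
    with cond:
        items.append(value)
        cond.notify()
```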

--

___
Python tracker 
<http://bugs.python.org/issue18078>
___



[issue8713] multiprocessing needs option to eschew fork() under Linux

2013-07-30 Thread Richard Oudkerk

Richard Oudkerk added the comment:

The spawn branch is in decent shape, although the documentation is not 
up-to-date.

I would like to commit before the first alpha.

--

___
Python tracker 
<http://bugs.python.org/issue8713>
___



[issue18214] Stop purging modules which are garbage collected before shutdown

2013-07-31 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I played a bit with the patch and -v -Xshowrefcount.  The number of references 
and blocks left at exit varies (and is higher than for unpatched python).

It appears that a few (1-3) module dicts are not being purged because they have 
been "orphaned".  (i.e. the module object was garbage collected before we 
check the weakref, but the module dict survived.)  Presumably it is the hash 
randomization causing the randomness.

Maybe 8 out of 50+ module dicts actually die a natural death by being garbage 
collected before they are purged.  Try

./python -v -Xshowrefcount check_purging.py

--
Added file: http://bugs.python.org/file31105/check_purging.py

___
Python tracker 
<http://bugs.python.org/issue18214>
___



[issue18214] Stop purging modules which are garbage collected before shutdown

2013-08-01 Thread Richard Oudkerk

Richard Oudkerk added the comment:

On 01/08/2013 10:59am, Antoine Pitrou wrote:
> If you replace the end of your script with the following:
>
> for name, mod in sys.modules.items():
>  if name != 'encodings':
>  mod.__dict__["__blob__"] = Blob(name)
> del name, mod, Blob
>
>
> then at the end of the shutdown phase, remaining is empty.

On Windows, even with this change, I get for example:

   # remaining {'encodings.mbcs', '__main__', 'encodings.cp1252'}
   ...
   [22081 refs, 6742 blocks]

or

   # remaining {'__main__', 'encodings'}
   ...
   [23538 refs, 7136 blocks]

--

___
Python tracker 
<http://bugs.python.org/issue18214>
___



[issue18214] Stop purging modules which are garbage collected before shutdown

2013-08-01 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> You might want to open a prompt and look at gc.get_referrers() for 
> encodings.mbcs.__dict__ (or another of those modules).

>>> gc.get_referrers(sys.modules['encodings.mbcs'].__dict__)
[, , , ]

>>> gc.get_referrers(sys.modules['encodings.cp1252'].__dict__)
[, , , , , ]

>>> gc.get_referrers(sys.modules['__main__'].__dict__)
[, ,
,  at 0x02AD3DB8>, , )>]

--

___
Python tracker 
<http://bugs.python.org/issue18214>
___



[issue18214] Stop purging modules which are garbage collected before shutdown

2013-08-01 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> I get different numbers from you. If I run "./python -v -c pass", most 
> modules in the "wiping" phase are C extension modules, which is expected. 
> Pretty much every pure Python module ends up garbage collected before 
> that.

The *module* gets gc'ed, sure.  But you can't tell from "./python -v -c pass" 
when the *module dict* gets gc'ed.

Using "./python -v check_purging.py", before the purging stage (# cleanup [3]) 
I only get

# purge/gc operator 54
# purge/gc io 53
# purge/gc keyword 52
# purge/gc types 51
# purge/gc sysconfig 50

That leaves lots of pure python module dicts to be purged later on.

--

___
Python tracker 
<http://bugs.python.org/issue18214>
___



[issue18214] Stop purging modules which are garbage collected before shutdown

2013-08-01 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> Also, do note that purge/gc after wiping can still be a regular
> gc pass unless the module has been wiped. The gc could be triggered
> by another module being wiped.

For me, the modules which die naturally after purging begins are

# purge/gc encodings.aliases 34
# purge/gc _io 14
# purge/gc collections.abc 13
# purge/gc sre_compile 12
# purge/gc heapq 11
# purge/gc sre_constants 10
# purge/gc _weakrefset 9
# purge/gc reprlib 8
# purge/gc weakref 7
# purge/gc site 6
# purge/gc abc 5
# purge/gc encodings.latin_1 4
# purge/gc encodings.utf_8 3
# purge/gc genericpath 2

Of these, all but the first appear to happen during the final cyclic 
garbage collection.

--

___
Python tracker 
<http://bugs.python.org/issue18214>
___



[issue18214] Stop purging modules which are garbage collected before shutdown

2013-08-01 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Yes, I agree the patch is ok.

It would be much simpler to keep track of the module dicts if 
they were weakrefable.  Alternatively, at shutdown a weakrefable object 
with a reference to the module dict could be inserted into each module 
dict.  We could then use those to find orphaned module dicts.  But I 
doubt it is worth the extra effort.

--

___
Python tracker 
<http://bugs.python.org/issue18214>
___



[issue18649] list2cmdline function in subprocess module handles \" sequence wrong

2013-08-04 Thread Richard Oudkerk

Richard Oudkerk added the comment:

Firstly, list2cmdline() takes a list as its argument, not a string:

  >>> import subprocess
  >>> print subprocess.list2cmdline([r'\"1|2\"'])
  \\\"1|2\\\"

But the problem with passing arguments to a batch file is that cmd.exe parses 
arguments differently from how normal executables do.  In particular, "|" is 
treated specially and "^" is used as an escape character.

If you define test.bat as

  @echo off
  echo "%1"

then

  subprocess.call(['test.bat', '1^|2'])

prints

  "1|2"

as expected.

This is a duplicate of http://bugs.python.org/issue1300.

--
nosy: +sbt
resolution:  -> invalid
stage:  -> committed/rejected
status: open -> closed
type:  -> behavior

___
Python tracker 
<http://bugs.python.org/issue18649>
___



[issue18649] list2cmdline function in subprocess module handles \" sequence wrong

2013-08-04 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> I think you're missing the point. The implementation is wrong as it 
> does not do what documentation says which is "A double quotation mark 
> preceded by a backslash is interpreted as a literal double quotation 
> mark."

That docstring describes how the string returned by list2cmdline() is 
interpreted by the MS C runtime.  I assume you mean this bit:

3) A double quotation mark preceded by a backslash is
   interpreted as a literal double quotation mark.

This looks correct to me: it implies that list2cmdline() must convert a double 
quotation mark to a double quotation mark preceded by a backslash.  e.g.

  >>> print(subprocess.list2cmdline(['"']))
  \"

> How the output of list2cmdline interacts with the cmd.exe is another 
> issue (It just happens here that if implementation of list2cmdline were 
> in line with its documentation then there wouldn't be any subsequent 
> problem with cmd.exe).

As I said, list2cmdline() behaves as expected.  Whatever else happens, "|" must 
be escaped with "^" or else cmd will interpret it specially.

--

___
Python tracker 
<http://bugs.python.org/issue18649>
___



[issue8713] multiprocessing needs option to eschew fork() under Linux

2013-08-07 Thread Richard Oudkerk

Changes by Richard Oudkerk :


Added file: http://bugs.python.org/file31186/b3620777f54c.diff

___
Python tracker 
<http://bugs.python.org/issue8713>
___



[issue8713] multiprocessing needs option to eschew fork() under Linux

2013-08-07 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I have done quite a bit of refactoring and added some extra tests.

When I try using the forkserver start method on the OSX Tiger buildbot (the 
only OSX one available) I get errors.  I have disabled the tests for OSX, but 
it seemed to be working before.  Maybe that was with a different buildbot.

--

___
Python tracker 
<http://bugs.python.org/issue8713>
___



[issue18676] Queue: document that zero is accepted as timeout value

2013-08-07 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> IMHO it just doesn't make sense passing 0.0 as a timeout value.

I have written lots of code that looks like

timeout = max(deadline - time.time(), 0)
some_function(..., timeout=timeout)

This makes perfect sense.  Working code should not be broken -- it is the 
docstring that should be changed.

I can't think of *any* function taking a timeout which rejects a zero timeout.  
See select(), poll(), Condition.wait(), Lock.acquire(), Thread.join().  In each 
case a zero timeout causes a non-blocking call.

Also, note that the implementation does not contradict the docstring or 
documentation: they say nothing about what happens if timeout is zero (or 
negative).
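As a minimal sketch of the point above, using only standard-library Queue and Lock: a zero timeout is accepted and behaves as a non-blocking call, so the deadline pattern clamps safely to zero once the deadline has passed.

```python
import queue
import threading
import time

# Deadline pattern from above: once the deadline has passed, the
# computed timeout clamps to zero rather than going negative.
deadline = time.monotonic()                 # deadline already reached
timeout = max(deadline - time.monotonic(), 0)

q = queue.Queue()
try:
    q.get(timeout=timeout)                  # zero timeout: no blocking
except queue.Empty:
    print("Empty raised immediately")

lock = threading.Lock()
print(lock.acquire(timeout=0))              # True: acquired without blocking
```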

--
nosy: +sbt

___
Python tracker 
<http://bugs.python.org/issue18676>
___



[issue8713] multiprocessing needs option to eschew fork() under Linux

2013-08-08 Thread Richard Oudkerk

Richard Oudkerk added the comment:

> Richard, can you say what failed on the OS X 10.4 (Tiger) buildbot?

There seems to be a problem which depends on the order in which you run 
the tests, and it happens on Linux too.  For example if I do

   ./python -m test -v \
   test_multiprocessing_fork \
   test_multiprocessing_forkserver

Then I get lots of failures when forkserver runs.  I have tracked down 
the changeset which caused the problem, but I have not had time to look 
in to it.

 > The only vaguely suspicious message when running with -v was:
 > [...]
 > [semaphore_tracker] '/mp18203-0': [Errno 22] Invalid argument
 > [semaphore_tracker] '/mp18203-1': successfully unlinked
 > [...]

That is expected: it shows the semaphore tracker is working correctly.  
Maybe I should print a note to stderr saying these messages are 

--

___
Python tracker 
<http://bugs.python.org/issue8713>
___



[issue8713] multiprocessing needs option to eschew fork() under Linux

2013-08-10 Thread Richard Oudkerk

Changes by Richard Oudkerk :


Added file: http://bugs.python.org/file31214/c7aa0005f231.diff

___
Python tracker 
<http://bugs.python.org/issue8713>
___



[issue8713] multiprocessing needs option to eschew fork() under Linux

2013-08-10 Thread Richard Oudkerk

Richard Oudkerk added the comment:

The forkserver process is now started using _posixsubprocess.fork_exec().  This 
should fix the order-dependent problem mentioned before.

Also the forkserver tests are now reenabled on OSX.
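For readers following along, a hedged sketch of the user-facing side of this work (assuming the per-context API that landed in Python 3.4 alongside it): selecting the "forkserver" start method so children are forked from a small, clean server process rather than from the possibly multithreaded main process. Requires a Unix platform.

```python
import multiprocessing as mp

def worker(q):
    q.put("hello from a forkserver child")

if __name__ == "__main__":
    # get_context() gives an independent context object; alternatively
    # mp.set_start_method("forkserver") sets the global default.
    ctx = mp.get_context("forkserver")
    q = ctx.Queue()
    p = ctx.Process(target=worker, args=(q,))
    p.start()
    print(q.get())
    p.join()
```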

--

___
Python tracker 
<http://bugs.python.org/issue8713>
___



[issue8713] multiprocessing needs option to eschew fork() under Linux

2013-08-13 Thread Richard Oudkerk

Changes by Richard Oudkerk :


Added file: http://bugs.python.org/file31282/4fc7c72b1c5d.diff

___
Python tracker 
<http://bugs.python.org/issue8713>
___



[issue8713] multiprocessing needs option to eschew fork() under Linux

2013-08-13 Thread Richard Oudkerk

Richard Oudkerk added the comment:

I have added documentation now so I think it is ready to merge (except for a 
change to Makefile).

--

___
Python tracker 
<http://bugs.python.org/issue8713>
___


