[issue12856] tempfile PRNG reuse between parent and child process

2011-08-29 Thread Ferringb

Ferringb  added the comment:

Bleh; pardon, re-uploading the patch.  hg export apparently appends to the output 
file rather than overwriting it (the last patch had duplicated content in it).

--
Added file: 
http://bugs.python.org/file23067/unique-seed-per-process-tempfile.patch

___
Python tracker 
<http://bugs.python.org/issue12856>
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue12856] tempfile PRNG reuse between parent and child process

2011-08-29 Thread Ferringb

New submission from Ferringb :

Roughly: tempfile's name uniqueness is derived from a global random instance; while 
there are protections for threaded access, a forked child process /will/ inherit 
that PRNG state, resulting in the children and the parent trying the same set of names.

Mostly it's proving annoying in some code I have to deal with, although it 
wouldn't surprise me if someone watching a known temp location could exploit the 
predictability in some fashion.

As for impact, all versions of Python have this; the attached patch is cut against 
trunk.
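A minimal demonstration of the report (not part of the patch; Unix-only, since it needs os.fork): a module-level Random instance, like the one tempfile keeps internally, is duplicated across fork(), so parent and child draw identical values:

```python
import os
import random

# A process-global PRNG, standing in for tempfile's internal instance.
rng = random.Random()

read_end, write_end = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: send its next three draws back to the parent.
    os.close(read_end)
    os.write(write_end, repr([rng.random() for _ in range(3)]).encode())
    os._exit(0)

os.close(write_end)
child_draws = eval(os.read(read_end, 4096).decode())
os.close(read_end)
os.waitpid(pid, 0)

parent_draws = [rng.random() for _ in range(3)]
print(child_draws == parent_draws)  # True: both processes share the PRNG state
```

Since both processes start from the same copied PRNG state, they generate the same sequence, which is exactly how the colliding temp names arise.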

--
files: unique-seed-per-process-tempfile.patch
keywords: patch
messages: 143192
nosy: ferringb
priority: normal
severity: normal
status: open
title: tempfile PRNG reuse between parent and child process
type: behavior
Added file: 
http://bugs.python.org/file23066/unique-seed-per-process-tempfile.patch




[issue12856] tempfile PRNG reuse between parent and child process

2011-08-29 Thread Ferringb

Changes by Ferringb :


Removed file: 
http://bugs.python.org/file23066/unique-seed-per-process-tempfile.patch




[issue12856] tempfile PRNG reuse between parent and child process

2011-08-29 Thread Ferringb

Ferringb  added the comment:

> the test must be skipped where os.fork() isn't available (namely, under 
> Windows)

Done, although I still humbly suggest telling windows to bugger off ;)

> I would do os.read(fd, 100) (or some other large value) rather than 
> os.read(fd, 6), so that the test doesn't depend on the exact length of the 
> random sequences produced

100 is no different from 6 (the same potential exists); better to just use the 
length from the parent-side access to the PRNG.  That leaves open the unlikely 
scenario of the child returning 7 chars, the parent 6, and child/parent agreeing on the 
first 6... which would very likely be a bug anyway.

--
Added file: 
http://bugs.python.org/file23068/unique-seed-per-process-tempfile.patch




[issue13788] os.closerange optimization

2012-01-14 Thread Ferringb

New submission from Ferringb :

The current implementation of closerange is essentially a brute-force invocation 
of close for every integer in the range.

While this works, it's rather noisy under strace, and for most invocations makes 
close to a thousand more close calls than needed.

As such, it should be aware of /proc/${PID}/fd and use that to determine 
what is actually open, closing only those descriptors.
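The idea can be rendered in Python as follows (the attached patch does this at the C level; this sketch is only illustrative):

```python
import os

def closerange(fd_low, fd_high):
    """Close the open fds in [fd_low, fd_high) by consulting /proc/self/fd
    where available, falling back to brute force otherwise."""
    try:
        names = os.listdir('/proc/self/fd')
    except OSError:
        names = None
    if names is not None:
        # Only the fds that are actually open get a close() call.  The fd
        # listdir itself used shows up in the listing but is already closed
        # by now; the resulting EBADF is swallowed below.
        for name in names:
            fd = int(name)
            if fd_low <= fd < fd_high:
                try:
                    os.close(fd)
                except OSError:
                    pass
    else:
        # No procfs: brute-force every integer in the range.
        for fd in range(fd_low, fd_high):
            try:
                os.close(fd)
            except OSError:
                pass
```

On a typical process with a handful of open descriptors, this replaces ~1000 close() calls with one readdir pass plus a few closes.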

--
components: Extension Modules
files: closerange-optimization.patch
keywords: patch
messages: 151273
nosy: ferringb
priority: normal
severity: normal
status: open
title: os.closerange optimization
type: performance
Added file: http://bugs.python.org/file24241/closerange-optimization.patch

___
Python tracker 
<http://bugs.python.org/issue13788>
___



[issue13788] os.closerange optimization

2012-01-14 Thread Ferringb

Ferringb  added the comment:

Fixed tabs/spaces...

--
Added file: http://bugs.python.org/file24242/closerange-optimization.patch




[issue13788] os.closerange optimization

2012-01-14 Thread Ferringb

Changes by Ferringb :


Removed file: http://bugs.python.org/file24241/closerange-optimization.patch




[issue8052] subprocess close_fds behavior should only close open fds

2012-01-15 Thread Ferringb

Ferringb  added the comment:

In #13788, I've uploaded a patch modifying closerange along the same lines as 
this discussion; someone with appropriate rights should set the dependencies as 
needed.

Either way, here's a question: does anyone actually know of a Unix that provides 
procfs and has a daft opendir implementation as described below?  I.e., are we 
actually worrying about something relevant, or just about hypotheticals?

It strikes me that in any scenario where this actually occurred, we'd already be in 
trouble from broken implementations, meaning we probably don't support them anyway.  
Along the same lines, has anyone asked the Open Group (forums, mailing lists, etc.), 
or better, figured out *where* to ask for the reasoning here?

Regardless, if we're dead set on adhering to the standards there (and using 
re-entrant readdir_r and friends isn't enough to make people happy), a few 
hacks come to mind:

1) In the child (child1), split a pipe, then fork/exec (child2) an ls -l (or 
equivalent) of /proc/$PID/fd, feeding the output back to child1, which then acts on it.
2) Grab the fd list pre-fork along with the link count of /proc/$PID/fd; the child 
re-stats /proc/$PID/fd, and if the link count is the same, the results should be 
safe to act upon.  I'm *reasonably* sure there is a broken FS or two out 
there that doesn't set directory link counts correctly, but I've never 
heard of that for a procfs.  Regardless, it should be detectable: nlink would be 0. 
 If it is, and len(fds) != 0, then you know you can't trust the results and 
have to fall back to brute-force closing the range.  Additionally, we ought to be 
able to test for this... so... score.
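Hack (2) above could be sketched roughly as below.  This assumes procfs reports st_nlink sanely for the fd directory, which is exactly the property being debated; the helper names are hypothetical:

```python
import os

FD_DIR = '/proc/self/fd'

def snapshot_fds():
    """Pre-fork: record the fd directory's link count and the open fds."""
    nlink = os.stat(FD_DIR).st_nlink
    fds = sorted(int(name) for name in os.listdir(FD_DIR))
    return nlink, fds

def snapshot_usable(pre_nlink):
    """Post-fork: re-stat the fd directory; if the link count changed, or
    is 0 on a broken filesystem, the caller should fall back to
    brute-force closing the whole range."""
    nlink = os.stat(FD_DIR).st_nlink
    return nlink != 0 and nlink == pre_nlink
```

The point is that the expensive opendir/readdir work happens before the fork, and the child only needs a single stat() to decide whether the snapshot is trustworthy.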

Addressing "signal handlers can open files": yes, they can, and our current 
implementation doesn't handle that case /anyways/, so it's orthogonal to 
speeding up closerange.

Finally, doing some code searching, here's a rough list of the heavy hitters 
spotted using this approach:
*) java (as mentioned)
*) chrome's seccomp sandbox
*) glibc's nscd (a pretty strong indication this is safe, to say the least)
*) gdm
*) pulseaudio (limited to linux)
*) opensolaris actually does this, although from reading the code it sounds as 
if there is an issue w/ vfork, thus they use getdents directly.  Look for 
spawn_closefrom for details.

So... it seems a bit Linux-y.  We could possibly enable it just there (whitelist).

--
nosy: +ferringb

___
Python tracker 
<http://bugs.python.org/issue8052>
___



[issue8052] subprocess close_fds behavior should only close open fds

2012-01-15 Thread Ferringb

Ferringb  added the comment:

>The only question is: do other Unixes also have /proc/<pid>/fd? e.g.
>FreeBSD, OpenBSD. That's especially important because FreeBSD can have
>a huge RLIMIT_NOFILE by default.

Unless the OS gives us some way to optimize the process (whether by inferring from 
procfs or by making use of spawn_closefrom), there really isn't anything we can 
do.  O_CLOEXEC is one option, but that's basically the same as the close loop 
in terms of syscalls: post-fork, looping over the range and setting the flag.  
Beyond that, the procfs inference is Linux-specific, so the fallback would only be 
used if the environment Python was invoked from lacked procfs.
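The O_CLOEXEC route mentioned above would look roughly like this; note it still costs an fcntl round-trip per descriptor, comparable to the close loop (the helper name is hypothetical):

```python
import fcntl
import os

def mark_range_cloexec(fd_low, fd_high):
    """Set FD_CLOEXEC on every open fd in [fd_low, fd_high); the kernel
    then closes them automatically at exec() time.  Still one syscall
    pair per fd, so no cheaper than a close loop."""
    for fd in range(fd_low, fd_high):
        try:
            flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        except OSError:
            continue  # not an open fd; nothing to mark
        fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)
```

Its one advantage over closing is that the fds stay usable between fork and exec, which matters if the child still needs them briefly.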

I'm willing to extend my original patch to handle alternate OS hints as needed; 
likewise, I can implement the nlinks trick, although I'd be more inclined 
to just limit my original closerange patch to OSs that have a sane opendir and 
procfs.

--




[issue6559] add pass_fds paramter to subprocess.Popen()

2012-01-16 Thread Ferringb

Ferringb  added the comment:

Just noticed this patch... aside from liking the intention, the API for this is 
going to grow tiresome quickly, since it expects the FDs to already be in place; 
is there any reason a mapping wasn't used here, specifically 
(src_fd|src_fileobj) -> target_fd?

If that were fed in, the client could shuffle the fds into their appropriate 
positions post-fork, something the parent may not be able to do (in a threaded 
environment, for example, or one that is async in some respect).
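The post-fork shuffle for such a mapping could look like this (a simplified sketch; the helper name is hypothetical and not part of subprocess, and a full implementation would also need to handle chains and cycles among targets):

```python
import os

def remap_fds(fd_map):
    """Move each source fd to its requested target slot in the child.
    fd_map maps source fd -> target fd.  Sources are duplicated out of
    the way first so writing into a target slot can't clobber a source
    that is still needed."""
    temps = {src: os.dup(src) for src in fd_map}
    for src, tmp in temps.items():
        os.dup2(tmp, fd_map[src])  # atomically closes any old target
        os.close(tmp)
```

With this shape, the existing stdin/stdout/stderr handling is just the special case {child_stdin: 0, child_stdout: 1, child_stderr: 2}.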

I've had similar functionality in my own process-handling code for a while and have 
found it generally pretty easy to deal with; for subprocess it has the added 
benefit that the existing stdin/stdout/stderr bits could be folded directly 
into it fairly easily.

So... any reason this route wasn't considered, or was it just not thought about?

--
nosy: +ferringb

___
Python tracker 
<http://bugs.python.org/issue6559>
___



[issue14173] PyOS_FiniInterrupts leaves signal.getsignal segfaulty

2012-03-02 Thread Ferringb

New submission from Ferringb :

During Py_Finalize (pythonrun.c), the following happens:
1) signal handling is suppressed via PyOS_FiniInterrupts
2) caches are cleared
3) gc collection is forced; first for objects, then by wiping modules.

The problem is that on Unix OSs, Modules/signal.c's PyOS_FiniInterrupts leaves 
itself in a state where its internal Handlers are effectively reset to NULL, 
but the various module functions don't properly handle that scenario.

Attached is a test case demonstrating it; it segfaults on every Python version 
I've tested (2.4 through 3.2; I haven't tried 3.3).

Since this *only* occurs during the final gc sweep when modules are destroyed, 
it's a bit of a pain in the ass to detect that we're in that scenario and that 
we must not touch signal.getsignal lest it segfault the interpreter.  That said,

def _SignalModuleUsable():
    try:
        signal.signal(signal.SIGUSR1, signal.signal(signal.SIGUSR1, some_handler))
        return True
    except (TypeError, AttributeError, SystemError):
        # we were invoked in a module cleanup context.
        return False

does manage to poke the API in just the right way that the situation can be 
detected without segfaulting the interpreter.

Finally, note that while folks could point at __del__... it's not really at 
fault.  A proper weakref.ref finalizer can trigger the same thing; the fault is 
signal.c's PyOS_FiniInterrupts leaving the signal module in a bad state.  For 
the test case, I used __del__ just because it was quicker/less code to do so.

--
components: Interpreter Core
files: test.py
messages: 154758
nosy: ferringb
priority: normal
severity: normal
status: open
title: PyOS_FiniInterrupts leaves signal.getsignal segfaulty
type: crash
versions: Python 2.6, Python 2.7, Python 3.1, Python 3.2
Added file: http://bugs.python.org/file24703/test.py

___
Python tracker 
<http://bugs.python.org/issue14173>
___