[issue12881] ctypes: segfault with large structure field names
Charles-François Natali added the comment: Looks good to me. -- nosy: +neologix stage: patch review -> commit review ___ Python tracker <http://bugs.python.org/issue12881> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12936] armv5tejl: random segfaults in getaddrinfo()
Charles-François Natali added the comment:

> 2) http://sources.redhat.com/bugzilla/show_bug.cgi?id=12453

We actually had another issue due to this particular libc bug: http://bugs.python.org/issue6059
Basically, the problem is that if some libraries are dynamically loaded in an interleaved way, the TLS can be returned uninitialized, hence the segfault upon access. This problem can show up now because the import order for some modules has been modified: depending on the test that crashes - or rather the tests that run just before - you might be able to pinpoint it quickly (or you could maybe use "ltrace -e dlopen").

>> Apparently, Etch on ARM uses linuxthread instead of NPTL ...
>
> FYI you can also try to print sys.thread_info (which should give the same
> information, "NPTL 2.7").
>
> NPTL has known issues: see for example the Python issue #4970. NPTL is old and
> has been replaced by pthread in the glibc on Linux.

I think you're confusing it with linuxthreads ;-)

--
___ Python tracker <http://bugs.python.org/issue12936> ___
[issue12936] armv5tejl: random segfaults in getaddrinfo()
Charles-François Natali added the comment: Oh, and BTW, for the "Backtrace stopped: frame did not save the PC", you might want to install the libc-dbg package. This might help in finding precisely where it's crashing. -- ___ Python tracker <http://bugs.python.org/issue12936> ___
[issue12936] armv5tejl segfaults: sched_setaffinity() vs. pthread_setaffinity_np()
Charles-François Natali added the comment:

> I think I got it: pthread_setaffinity_np() does not crash.

Nice. Out of curiosity, I just looked at the source code, and it just does sched_setaffinity(thread->tid), so you can do the same with sched_setaffinity(syscall(SYS_gettid)) for the current thread. However, I don't think we should/could add this to the posix module: it expects a pthread_t instead of a PID, to which we don't have access. Furthermore, even though we're linked with pthread, this should normally succeed - or at least not crash - when called from the main thread - and it does on my Debian squeeze box. So I'd suggest closing this issue.

--
___ Python tracker <http://bugs.python.org/issue12936> ___
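[Editor's note: as a rough illustration of the trick above - a sketch assuming Linux, where the modern os.sched_setaffinity()/os.sched_getaffinity() (added in Python 3.3, the version under discussion) accept a pid of 0 meaning the calling thread, which is exactly what sched_setaffinity(syscall(SYS_gettid)) achieves in C:]

```python
import os

# On Linux, a pid of 0 makes sched_setaffinity() apply to the calling
# thread - the sched_setaffinity(syscall(SYS_gettid)) trick described
# above, without needing ctypes or the raw syscall.
mask = os.sched_getaffinity(0)   # current CPU set, e.g. {0, 1, 2, 3}
os.sched_setaffinity(0, mask)    # re-apply it (a no-op round-trip)
```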
[issue12936] armv5tejl segfaults: sched_setaffinity() vs. pthread_setaffinity_np()
Charles-François Natali added the comment:

> If we have access (and as I understood from Victor's post we do):
> pthread_getaffinity_np() also exists on FreeBSD, which would be
> an advantage.

Yes, but I see several drawbacks:
- as noted by Victor, it's really easy to crash the interpreter by passing an invalid thread ID, which IMHO should be avoided at all cost
- to be safe, we would need to have a different API depending on whether Python is built with threads or not (i.e. sched_setaffinity() without threads, and pthread_setaffinity_np() with them)
- pthread_setaffinity_np() is really non-portable (it's guarded by __USE_GNU in my system's header)
- sched_setaffinity() seems to work fine on most systems even when linked with pthread

> I don't care strongly about using pthread_getaffinity_np(), but at least I'd
> like to skip the scheduling sections on arm-linux if they don't work reliably.

Sounds reasonable. I guess you could use os.uname() or platform.machine().

--
___ Python tracker <http://bugs.python.org/issue12936> ___
[issue12936] armv5tejl segfaults: sched_setaffinity() vs. pthread_setaffinity_np()
Charles-François Natali added the comment:

> Do you mean that signal.pthread_kill() should be removed? This function is
> very useful and solves some issues that cannot be solved differently. At the
> same time, I don't think that it's possible to work around the crashes. At
> least, I don't see how: pthread_kill(tid, 0) is supposed to check if tid
> exists, but it does crash...

No, I'm not suggesting removing it; it is useful. As for the crashes, with glibc pthread_t is really a pointer, so there's no way to check its validity beforehand. Even if we did check the thread ID against the list of Python-created thread IDs (stored in Thread._ident), this could still crash, because the ID becomes invalid as soon as the thread terminates (all threads are started detached). Furthermore, this wouldn't work for non-Python created threads.

> We cannot use the same name for two different C functions. One expects a
> process identifier, whereas the other expects a thread identifier! If Python
> is compiled without threads, the function will not exist (as some modules and
> many other functions).

I know, that's why I said "different API": but I must admit it was poorly worded ;-) However, this wouldn't solve this particular problem: as long as we expose sched_setaffinity(), it will crash as soon as someone passes `0` or getpid() as PID.

>> pthread_setaffinity_np() is really non-portable
>> (it's guarded by __USE_GNU in my system's header)
>
> We can check it in configure. We already use some functions which are GNU
> extensions, like makedev(). Oh, os.makedev() availability is just not
> documented :-)

As I said, this wouldn't solve this problem. If someone deems it necessary, we can open another issue for this feature request.

>> sched_setaffinity() seems to work fine on most systems
>> even when linked with pthread
>
> Again, it looks like a libc/kernel bug. I don't think that Python can work
> around such issue.

Agreed.

> I don't know or need (), but the difference between sched_setaffinity and
> pthread_getaffinity_np is the same as between sigprocmask() and
> pthread_sigmask(). I chose to expose only the latter because the behaviour of
> sigprocmask is undefined in a process using threads.

Exactly. However, nothing prevents someone from using sigprocmask() in a multithreaded process; the only difference is that it won't crash (AFAICT). So I suggest to:
1) skip the problematic tests on ARM when built with threads, to avoid segfaults
2) if someone wants pthread_getaffinity_np(), then we can still open a separate feature request

--
___ Python tracker <http://bugs.python.org/issue12936> ___
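[Editor's note: the sigprocmask()/pthread_sigmask() analogy can be seen directly from Python, since signal.pthread_sigmask() is the variant that was exposed (a sketch, assuming a Unix build of Python 3.3+):]

```python
import signal

# Block SIGUSR1 in the calling thread only, the thread-safe analogue of
# sigprocmask() discussed above.
old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
# Passing an empty set with SIG_BLOCK just queries the current mask.
current = signal.pthread_sigmask(signal.SIG_BLOCK, set())
signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)  # restore the old mask
```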
[issue12975] Invitation to connect on LinkedIn
Changes by Charles-François Natali: -- resolution: -> invalid status: open -> closed ___ Python tracker <http://bugs.python.org/issue12975> ___
[issue12975] Invitation to connect on LinkedIn
Changes by Charles-François Natali: Removed file: http://bugs.python.org/file23150/unnamed ___ Python tracker <http://bugs.python.org/issue12975> ___
[issue8828] Atomic function to rename a file
Charles-François Natali added the comment:

> According to the following article, a fsync is also needed on the
> directory after a rename. I don't understand if it is always needed for
> an atomic rename, or if we only need it for the "atomic write" pattern.

It's not needed if you just want atomicity, i.e. the file is visible either under its old name or its new name, but not neither or both. It is however needed if you want durability, i.e. you want to guarantee that the file is visible under its new name after your atomic_rename returns.

--
nosy: +neologix
___ Python tracker <http://bugs.python.org/issue8828> ___
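[Editor's note: the two guarantees can be sketched as follows - an illustration of the "atomic write" pattern, not the patch under discussion; atomic_write and the ".tmp" suffix are made-up names:]

```python
import os
import tempfile

def atomic_write(path, data):
    """Atomically replace path with data; fsync the directory for durability."""
    tmp = path + ".tmp"               # hypothetical temp-name convention
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())          # durability of the file contents
    os.rename(tmp, path)              # atomicity: old name or new name, never neither
    dirfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
    try:
        os.fsync(dirfd)               # durability of the rename itself
    finally:
        os.close(dirfd)

workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "settings.conf")
atomic_write(target, b"timeout=30\n")
```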
[issue12936] armv5tejl segfaults: sched_setaffinity() vs. pthread_setaffinity_np()
Charles-François Natali added the comment:

> I'd prefer to disable the misbehaving functions entirely on arm.

-10. If we start disabling features on platforms with partly bogus implementations, we might as well drop threading on OpenBSD, sendmsg() on OS X, etc. Furthermore, it's really just a libc bug, which might be fixed in a more recent version, or with another libc provider (eglibc, uclibc, etc.).

--
___ Python tracker <http://bugs.python.org/issue12936> ___
[issue12976] select module: only use EVFILT_TIMER if available (kqueue backend)
Charles-François Natali added the comment:

Hello,
According to http://fxr.watson.org/fxr/ident?v=NETBSD;im=3;i=EVFILT_TIMER EVFILT_TIMER is defined on NetBSD. As for MirBSD, with all due respect, it really looks like a niche platform, definitely not officially supported by Python. Of course, this patch is so trivial and small that it can easily be merged, but it would be nice if MirBSD defined it in its header file instead (it's not the first problem due to kqueue incompatibilities between BSD platforms; see for example issue #12181 and issue #6419).

--
nosy: +haypo, neologix
___ Python tracker <http://bugs.python.org/issue12976> ___
[issue12976] select module: only use EVFILT_TIMER if available (kqueue backend)
Charles-François Natali added the comment:

Since this patch alone won't be enough to support MirBSD (and is required only for MirBSD), I suggest you post the complete patch and rename this issue "add support for MirBSD platform", or something along those lines. That way, we can consider this as a feature request and apply it in one chunk, if accepted. However, I think that the current consensus is somewhat hostile to adding support for so-called "exotic platforms", so I can't guarantee this will be included (posting the whole patch will certainly help in making a decision). I'm adding Martin to the nosy list; he'll probably give you more insight on this.

--
nosy: +loewis
___ Python tracker <http://bugs.python.org/issue12976> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
New submission from Charles-François Natali:

Now that sendmsg()/recvmsg() are exposed in socketmodule, we could use them to replace the ad-hoc FD-passing routines used by multiprocessing.reduction. Antoine suggested adding sendfd()/recvfd() methods to socket objects, but I'm not sure about this, since those only make sense for Unix domain sockets.

Two remarks on the patch attached:
- this removes sendfd()/recvfd() from _multiprocessing (but AFAICT those were never documented as part of the public API)
- EOF/invalid data received results in a RuntimeError

--
components: Library (Lib) files: multiprocessing_fd.diff keywords: patch messages: 144047 nosy: haypo, neologix priority: normal severity: normal status: open title: rewrite multiprocessing (senfd|recvfd) in Python type: feature request versions: Python 3.3 Added file: http://bugs.python.org/file23156/multiprocessing_fd.diff
___ Python tracker <http://bugs.python.org/issue12981> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Charles-François Natali added the comment:

> I don't think that it's a problem to remove private functions.

Alright.

> Is it mandatory to send a non-empty message (first argument for sendmsg, b'x'
> in your patch)?

The original C function sends a random byte :-) Some implementations can return EINVAL if no data is sent (i.e. you can't send only ancillary data).

> multiprocessing_recvfd() contains cmsg_level=SOL_SOCKET and
> cmsg_type=SCM_RIGHTS, your Python function doesn't check cmsg_level or
> cmsg_type. Should it be checked?

Yes, it should be checked, I'll update the patch.

> I don't know sendmsg/recvmsg API. Do they guarantee to send/receive all data?

For data, no, but for ancillary data, yes. The only thing that could go wrong would be a buffer too short to hold the ancillary data, but:
- the buffer size is computed with CMSG_DATA(), so it should be enough
- if the ancillary data is truncated, struct.unpack will raise an exception

--
___ Python tracker <http://bugs.python.org/issue12981> ___
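[Editor's note: for reference, the SCM_RIGHTS mechanism under discussion can be exercised standalone - a sketch assuming a Unix platform with FD-passing support; send_fd/recv_fd are illustrative names, not the patch's API:]

```python
import os
import socket
import struct

def send_fd(sock, fd):
    # One byte of real payload: some implementations return EINVAL
    # when only ancillary data is sent.
    sock.sendmsg([b"x"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                           struct.pack("@i", fd))])

def recv_fd(sock):
    size = struct.calcsize("@i")
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(size))
    cmsg_level, cmsg_type, cmsg_data = ancdata[0]
    if cmsg_level != socket.SOL_SOCKET or cmsg_type != socket.SCM_RIGHTS:
        raise RuntimeError("unexpected ancillary data")
    return struct.unpack("@i", cmsg_data[:size])[0]

a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
r, w = os.pipe()
send_fd(a, r)                  # pass the pipe's read end over the socket
fd = recv_fd(b)                # the kernel duplicates the descriptor
os.write(w, b"hello")
payload = os.read(fd, 5)       # read through the received descriptor
```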
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Changes by Charles-François Natali: Added file: http://bugs.python.org/file23166/multiprocessing_fd-1.diff ___ Python tracker <http://bugs.python.org/issue12981> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Changes by Charles-François Natali: Removed file: http://bugs.python.org/file23156/multiprocessing_fd.diff ___ Python tracker <http://bugs.python.org/issue12981> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Charles-François Natali added the comment: I only tried on Linux. By the way, what's the simplest way to create a personal clone to test patches on some of the buildbots before committing them? -- ___ Python tracker <http://bugs.python.org/issue12981> ___
[issue8237] multiprocessing.Queue() blocks program
Charles-François Natali added the comment: It's a dupe of issue #8426: the Queue isn't full, but the underlying pipe is, so the feeder thread blocks on the write to the pipe (actually when trying to acquire the lock protecting the pipe from concurrent access). Since the child processes join the feeder thread on exit (to make sure all data has been flushed to the pipe), they block. -- nosy: +neologix resolution: invalid -> duplicate stage: test needed -> committed/rejected status: open -> closed superseder: -> multiprocessing.Queue fails to get() very large objects ___ Python tracker <http://bugs.python.org/issue8237> ___
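[Editor's note: the pitfall can be avoided by draining the queue before joining, e.g. as below - a sketch assuming the default fork start method on Linux; the 1 MiB payload is just an arbitrary size larger than a typical pipe buffer:]

```python
import multiprocessing

def worker(q):
    # Larger than a typical pipe buffer (64 KiB on Linux), so the feeder
    # thread cannot flush it until the other end starts reading.
    q.put(b"x" * (1024 * 1024))

q = multiprocessing.Queue()
p = multiprocessing.Process(target=worker, args=(q,))
p.start()
data = q.get()   # drain BEFORE join(), otherwise the child may block forever
p.join()
```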
[issue12996] multiprocessing.Connection endianness issue
New submission from Charles-François Natali:

Since the rewrite in pure Python of multiprocessing.Connection (issue #11743), multiprocessing.Connection sends and receives the length of the data (used as header) in host byte order. This will break if the connection's endpoints are on machines with different endianness. Patch attached (it also removes an unnecessary computation of the length of the data being sent).

--
components: Library (Lib) files: multiprocessing_conn_endianness.diff keywords: needs review, patch messages: 144148 nosy: haypo, neologix priority: normal severity: normal stage: patch review status: open title: multiprocessing.Connection endianness issue type: behavior versions: Python 3.3 Added file: http://bugs.python.org/file23171/multiprocessing_conn_endianness.diff
___ Python tracker <http://bugs.python.org/issue12996> ___
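[Editor's note: the fix boils down to using a fixed, explicit byte order in the struct format for the length header, e.g. network order ("!i"), instead of the host-dependent native order - a sketch of the principle, not the patch itself:]

```python
import struct

length = 0x00010203   # example payload length

native  = struct.pack("i", length)    # host byte order: differs across machines
network = struct.pack("!i", length)   # big-endian network order: identical everywhere

# A peer unpacking with the same explicit "!i" format always recovers the
# right value, whatever the sender's endianness.
decoded = struct.unpack("!i", network)[0]
```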
[issue12999] _XOPEN_SOURCE usage on Solaris
Changes by Charles-François Natali: -- nosy: haypo, neologix priority: normal severity: normal stage: needs patch status: open title: _XOPEN_SOURCE usage on Solaris type: behavior versions: Python 3.3 ___ Python tracker <http://bugs.python.org/issue12999> ___
[issue12999] _XOPEN_SOURCE usage on Solaris
New submission from Charles-François Natali:

While testing issue #12981, I stumbled on a problem on the OpenIndiana buildbot:
"""
test test_multiprocessing crashed -- Traceback (most recent call last):
  File "/export/home/buildbot/64bits/custom.cea-indiana-amd64/build/Lib/test/regrtest.py", line 1133, in runtest_inner
    the_package = __import__(abstest, globals(), locals(), [])
  File "/export/home/buildbot/64bits/custom.cea-indiana-amd64/build/Lib/test/test_multiprocessing.py", line 38, in
    from multiprocessing import util, reduction
  File "/export/home/buildbot/64bits/custom.cea-indiana-amd64/build/Lib/importlib/_bootstrap.py", line 437, in load_module
    return self._load_module(fullname)
  File "/export/home/buildbot/64bits/custom.cea-indiana-amd64/build/Lib/importlib/_bootstrap.py", line 141, in decorated
    return fxn(self, module, *args, **kwargs)
  File "/export/home/buildbot/64bits/custom.cea-indiana-amd64/build/Lib/importlib/_bootstrap.py", line 342, in _load_module
    exec(code_object, module.__dict__)
  File "/export/home/buildbot/64bits/custom.cea-indiana-amd64/build/Lib/multiprocessing/reduction.py", line 57, in
    raise ImportError('pickling of connections not supported')
ImportError: pickling of connections not supported
"""
Which means that socket.CMSG_LEN isn't defined. Now, you might wonder how this can work in the C version of multiprocessing.(sendfd|recvfd), which needs CMSG_LEN(). Here's how:
"""
#ifdef __sun
/* The control message API is only available on Solaris
   if XPG 4.2 or later is requested. */
#define _XOPEN_SOURCE 500
#endif
"""
And indeed: http://fxr.watson.org/fxr/source/common/sys/socket.h?v=OPENSOLARIS#L478
"""
#if defined(_XPG4_2)
/*
 * The cmsg headers (and macros dealing with them) were made available as
 * part of UNIX95 and hence need to be protected with a _XPG4_2 define.
 */
"""
The problem is that socketmodule uses pyconfig.h defines, and _XOPEN_SOURCE isn't defined on Solaris: http://hg.python.org/cpython/rev/7c947768b435 (it was added explicitly to Modules/_multiprocessing/multiprocessing.h for sendmsg by http://hg.python.org/cpython/rev/419901e65dd2). So, _XOPEN_SOURCE is needed on Solaris to build socket_sendmsg and friends. I'm not sure about the best way to proceed, since Martin certainly had good reasons to remove the _XOPEN_SOURCE definition entirely on Solaris. Should we define it only at the top of socketmodule?

--
nosy: +loewis
___ Python tracker <http://bugs.python.org/issue12999> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Charles-François Natali added the comment:

> Did you try it on Linux, FreeBSD and/or Windows?

It works fine on Linux, FreeBSD, OS X and Windows, but not on Solaris: see issue #12999.

--
dependencies: +_XOPEN_SOURCE usage on Solaris
___ Python tracker <http://bugs.python.org/issue12981> ___
[issue12976] add support for MirBSD platform
Charles-François Natali added the comment:

Hello Benny,

> As requested, here is the full patch for MirBSD support. The diff was taken
> against version 2.7.2. It is really quite easy, you just need to handle
> MirBSD like OpenBSD.
> With this patch, I can successfully compile and run Python on MirBSD. Even
> though it is a rather exotic platform, I encourage you to take these changes,
> as they are quite minimal.

Indeed, it's quite short and manageable, but see http://bugs.python.org/issue11937, especially Martin's and Terry's comments:
"""
Guida established a policy a few years ago that we should rather not incorporate support for every minority platform for which people contribute patches. While I'd personally agree that an Interix port would certainly be "fun", pragmatically, I'm -1 on having the code in the code basis, and propose to close this issue as "won't fix". We would certainly be happy to link to gentoo prefix from the "other ports" page on python.org.
"""
and
"""
Markus, I agree with Martin that this patch would go against current policy and should be closed. Rather than close it myself, I will try to persuade you to do so. [...]
"""
This patch is much simpler and cleaner though (OTOH, it's so simple it shouldn't be too much work for MirBSD folks to keep this patch in sync).

--
___ Python tracker <http://bugs.python.org/issue12976> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Charles-François Natali added the comment:

Here's a patch taking into account the fact that multiprocessing.reduction might not be available and importing it can raise an ImportError (which is already the case with the C implementation, but multiprocessing.reduction tests have been added recently to test_multiprocessing), e.g. if the OS doesn't support FD passing. With this patch, the pure Python version can be applied, and passes on Linux, FreeBSD, OS X, Windows and OpenSolaris (except that it's not available on OpenSolaris until issue #12999 gets fixed).

I also slightly modified the struct format used in the pure Python version to make sure the length is sent as a native int ("@i") instead of a standardized int ("=i"), which might break if sizeof(int) != 4 (not sure there are many ILP64 architectures out there, but you never know...).

--
Added file: http://bugs.python.org/file23179/skip_reduction.diff
Added file: http://bugs.python.org/file23180/multiprocessing_fd-2.diff
___ Python tracker <http://bugs.python.org/issue12981> ___

diff -r c6d52971dd2a Lib/test/test_multiprocessing.py
--- a/Lib/test/test_multiprocessing.py Thu Sep 15 18:18:51 2011 +0200
+++ b/Lib/test/test_multiprocessing.py Sat Sep 17 10:54:10 2011 +0200
@@ -35,7 +35,13 @@
 import multiprocessing.heap
 import multiprocessing.pool

-from multiprocessing import util, reduction
+from multiprocessing import util
+
+try:
+    from multiprocessing import reduction
+    HAS_REDUCTION = True
+except ImportError:
+    HAS_REDUCTION = False

 try:
     from multiprocessing.sharedctypes import Value, copy
@@ -1631,6 +1637,7 @@
         os.write(fd, data)
         os.close(fd)

+    @unittest.skipUnless(HAS_REDUCTION, "test needs multiprocessing.reduction")
     def test_fd_transfer(self):
         if self.TYPE != 'processes':
             self.skipTest("only makes sense with processes")
@@ -1648,6 +1655,7 @@
         with open(test.support.TESTFN, "rb") as f:
             self.assertEqual(f.read(), b"foo")

+    @unittest.skipUnless(HAS_REDUCTION, "test needs multiprocessing.reduction")
     @unittest.skipIf(sys.platform == "win32",
                      "test semantics don't make sense on Windows")
     @unittest.skipIf(MAXFD <= 256,
@@ -1987,10 +1995,12 @@
     'multiprocessing', 'multiprocessing.connection',
     'multiprocessing.heap', 'multiprocessing.managers',
     'multiprocessing.pool', 'multiprocessing.process',
-    'multiprocessing.reduction',
     'multiprocessing.synchronize', 'multiprocessing.util'
     ]
+if HAS_REDUCTION:
+    modules.append('multiprocessing.reduction')
+
 if c_int is not None:
     # This module requires _ctypes
     modules.append('multiprocessing.sharedctypes')
diff -r c6d52971dd2a Lib/multiprocessing/reduction.py
--- a/Lib/multiprocessing/reduction.py Thu Sep 15 18:18:51 2011 +0200
+++ b/Lib/multiprocessing/reduction.py Fri Sep 16 19:44:51 2011 +0200
@@ -39,6 +39,7 @@
 import sys
 import socket
 import threading
+import struct

 import _multiprocessing
 from multiprocessing import current_process
@@ -51,7 +52,8 @@
 #
 #

-if not(sys.platform == 'win32' or hasattr(_multiprocessing, 'recvfd')):
+if not(sys.platform == 'win32' or (hasattr(socket, 'CMSG_LEN') and
+                                   hasattr(socket, 'SCM_RIGHTS'))):
     raise ImportError('pickling of connections not supported')

 #
@@ -77,10 +79,23 @@
 else:
     def send_handle(conn, handle, destination_pid):
-        _multiprocessing.sendfd(conn.fileno(), handle)
+        with socket.fromfd(conn.fileno(), socket.AF_UNIX, socket.SOCK_STREAM) as s:
+            s.sendmsg([b'x'], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
+                                struct.pack("@i", handle))])

     def recv_handle(conn):
-        return _multiprocessing.recvfd(conn.fileno())
+        size = struct.calcsize("@i")
+        with socket.fromfd(conn.fileno(), socket.AF_UNIX, socket.SOCK_STREAM) as s:
+            msg, ancdata, flags, addr = s.recvmsg(1, socket.CMSG_SPACE(size))
+            try:
+                cmsg_level, cmsg_type, cmsg_data = ancdata[0]
+                if (cmsg_level == socket.SOL_SOCKET and
+                        cmsg_type == socket.SCM_RIGHTS):
+                    return struct.unpack("@i", cmsg_data[:size])[0]
+            except (ValueError, IndexError, struct.error):
+                pass
+            raise RuntimeError('Invalid data received')
+
 #
 # Support for a per-process server thread wh
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Changes by Charles-François Natali: Removed file: http://bugs.python.org/file23166/multiprocessing_fd-1.diff ___ Python tracker <http://bugs.python.org/issue12981> ___
[issue13001] test_socket.testRecvmsgTrunc failure on FreeBSD 7.2 buildbot
New submission from Charles-François Natali:

http://www.python.org/dev/buildbot/all/builders/x86 FreeBSD 7.2 3.x/builds/2129/steps/test/logs/stdio
"""
==
FAIL: testRecvmsgTrunc (test.test_socket.RecvmsgUDPTest)
--
Traceback (most recent call last):
  File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/test/test_socket.py", line 1666, in testRecvmsgTrunc
    self.checkFlags(flags, eor=False)
  File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/test/test_socket.py", line 1354, in checkFlags
    self.assertEqual(flags & mask, checkset & mask)
AssertionError: 0 != 16
==
FAIL: testRecvmsgTrunc (test.test_socket.RecvmsgIntoUDPTest)
--
Traceback (most recent call last):
  File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/test/test_socket.py", line 1666, in testRecvmsgTrunc
    self.checkFlags(flags, eor=False)
  File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/test/test_socket.py", line 1354, in checkFlags
    self.assertEqual(flags & mask, checkset & mask)
AssertionError: 0 != 16
"""
This fails because MSG_TRUNC isn't always set in msg_flags when receiving a truncated datagram with recvmsg(). It's a known kernel bug (http://svnweb.freebsd.org/base?view=revision&revision=211030), fixed in FreeBSD 8 (and the test indeed passes on the FreeBSD 8 buildbot). The patch attached skips the test on FreeBSD < 8 (and introduces @support.requires_freebsd_version).

--
components: Tests files: freebsd_msgtrunc.diff keywords: needs review, patch messages: 144188 nosy: haypo, neologix priority: normal severity: normal stage: patch review status: open title: test_socket.testRecvmsgTrunc failure on FreeBSD 7.2 buildbot type: behavior versions: Python 3.3 Added file: http://bugs.python.org/file23182/freebsd_msgtrunc.diff
___ Python tracker <http://bugs.python.org/issue13001> ___
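[Editor's note: on platforms where the kernel behaves correctly (e.g. Linux, or FreeBSD >= 8), the behaviour the test expects can be demonstrated with a pair of UDP sockets:]

```python
import socket

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(5)

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"0123456789", recv_sock.getsockname())

# Receive into a buffer smaller than the datagram: the tail is discarded
# and the kernel reports the truncation in msg_flags via MSG_TRUNC.
msg, ancdata, flags, addr = recv_sock.recvmsg(4)
truncated = bool(flags & socket.MSG_TRUNC)
```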
[issue12996] multiprocessing.Connection endianness issue
Charles-François Natali added the comment:

> "Since the rewrite in pure Python of multiprocessing.Connection (issue
> #11743), multiprocessing.Connection sends and receives the length of the data
> (used as header) in host byte order."
>
> I don't think so, the C code also uses the host byte order. This issue is a
> feature request.

No. http://hg.python.org/cpython/file/5deecc04b7a2/Modules/_multiprocessing/socket_connection.c
In conn_send_string():
"""
/* The "header" of the message is a 32 bit unsigned number (in network order)
   which specifies the length of the "body". If the message is shorter than
   about 16kb then it is quicker to combine the "header" and the "body" of
   the message and send them at once. */
[...]
*(UINT32*)message = htonl((UINT32)length);
"""
In conn_recv_string():
"""
ulength = ntohl(ulength);
"""

> I don't know if anyone uses multiprocessing on different hosts (because it
> doesn't work currently).
>
> If you would like to support using multiprocessing on different hosts, it
> should be documented in the multiprocessing doc.

It does work, it's even documented ;-)
http://docs.python.org/dev/library/multiprocessing.html#multiprocessing-managers
"""
A manager object returned by Manager() controls a server process which holds Python objects and allows other processes to manipulate them using proxies. [...] Server process managers are more flexible than using shared memory objects because they can be made to support arbitrary object types. Also, a single manager can be shared by processes on different computers over a network. They are, however, slower than using shared memory.
"""
Managers use multiprocessing.connection to serialize data and send it over a socket: http://hg.python.org/cpython/file/5deecc04b7a2/Lib/multiprocessing/managers.py
"""
#
# Mapping from serializer name to Listener and Client types
#
listener_client = {
    'pickle' : (connection.Listener, connection.Client),
    'xmlrpclib' : (connection.XmlListener, connection.XmlClient)
    }
"""
Yeah, Python's awesome :-)

--
___ Python tracker <http://bugs.python.org/issue12996> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Charles-François Natali added the comment:

> "It works fine on Linux, FreeBSD, OS X and Windows, but not on Solaris: see
> issue #12999."
>
> Oh, thanks for testing before committing :) It's hard to debug multiprocessing.

Yes. Especially when you stumble upon a kernel/libc bug 25% of the time... So, what should I do? Apply the test catching the multiprocessing.connection ImportError to test_multiprocessing (which is necessary even with the current C version)? And then apply the pure Python version, or wait until the OpenIndiana case gets fixed?

--
___ Python tracker <http://bugs.python.org/issue12981> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Changes by Charles-François Natali: Removed file: http://bugs.python.org/file23180/multiprocessing_fd-2.diff ___ Python tracker <http://bugs.python.org/issue12981> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Charles-François Natali added the comment:

> I had a look at this patch, and the FD passing looked OK, except
> that calculating the buffer size with CMSG_SPACE() may allow more
> than one file descriptor to be received, with the extra one going
> unnoticed - it should use CMSG_LEN() instead

Thanks for catching this. Here's an updated patch.

> (the existing C implementation has the same problem, I see).

I just checked, and the C version uses CMSG_SPACE() as the buffer size, but passes CMSG_LEN() to cmsg->cmsg_len and msg.msg_controllen. Or am I missing something?

--
Added file: http://bugs.python.org/file23189/multiprocessing_fd-3.diff
___ Python tracker <http://bugs.python.org/issue12981> ___
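[Editor's note: the distinction is visible from Python: socket.CMSG_LEN() is the exact size needed for one item of ancillary data, while socket.CMSG_SPACE() adds alignment padding - on common platforms, enough slack for an extra descriptor to sneak in:]

```python
import socket
import struct

fd_size = struct.calcsize("@i")              # size of one file descriptor
exact   = socket.CMSG_LEN(fd_size)           # room for exactly one fd
padded  = socket.CMSG_SPACE(fd_size)         # one fd plus alignment padding

# Using `exact` as recvmsg()'s ancillary buffer size guarantees at most
# one descriptor can be received; `padded` may silently admit a second.
```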
[issue12996] multiprocessing.Connection endianness issue
Changes by Charles-François Natali : -- resolution: -> fixed stage: patch review -> committed/rejected status: open -> closed ___ Python tracker <http://bugs.python.org/issue12996> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Charles-François Natali added the comment: I committed the patch to catch the ImportError in test_multiprocessing. I'll commit the other patch (pure Python version) in a couple days. > Ah, no, you're right - that's fine. Sorry for the false alarm. No problem. As they say, "better safe than sorry". -- ___ Python tracker <http://bugs.python.org/issue12981> ___
[issue13022] _multiprocessing.recvfd() doesn't check that file descriptor was actually received
Charles-François Natali added the comment: > The patch includes a test case, but like the other recently-added > tests for the function, it isn't guarded against > multiprocessing.reduction being unavailable. Issue #12981 has a > patch "skip_reduction.diff" (already in 3.3) to fix this, I'll apply skip_reduction.diff and your patch to 2.7 and 3.2. -- ___ Python tracker <http://bugs.python.org/issue13022> ___
[issue10141] SocketCan support
Charles-François Natali added the comment: Here's an updated patch, with more tests. Please review! -- keywords: +needs review nosy: +haypo stage: patch review -> commit review Added file: http://bugs.python.org/file23225/socketcan_v4.patch ___ Python tracker <http://bugs.python.org/issue10141> ___
[issue10141] SocketCan support
Charles-François Natali added the comment:

> - dummy question: why an address is a tuple with 1 string instead of just
> the string? Does AF_UNIX also uses a tuple of 1 string?

I think the reason behind the tuple is future proofing. Here's the definition of `struct sockaddr_can` in my Linux box's headers:

"""
/**
 * struct sockaddr_can - the sockaddr structure for CAN sockets
 * @can_family:  address family number AF_CAN.
 * @can_ifindex: CAN network interface index.
 * @can_addr:    protocol specific address information
 */
struct sockaddr_can {
        sa_family_t can_family;
        int         can_ifindex;
        union {
                /* transport protocol class address information (e.g. ISOTP) */
                struct { canid_t rx_id, tx_id; } tp;
                /* reserved for future CAN protocols address information */
        } can_addr;
};
"""

By making it a tuple, it will be easier to extend the address that must be passed to bind(2), should it ever evolve, in a backward compatible way. Well, that's just a guess (I'm by no means a SocketCAN expert :-).

> - the example should also use struct.pack() to create the frame, I don't
> like hardcoded BLOB

Done.

> - in test_socket: _have_socket_can() interprets permission denied as "CAN
> is not supported", it would be nice to provide a better skip message. Create
> maybe a decorator based?

AFAICT, it shouldn't fail with EPERM or so. Also, I'm not sure what the message would look like, and it's probably a bit overkill.

> - _have_socket_can(): you may move s.close() outside the try block (add
> maybe a "else:" block?) because you may hide a real bug in .close()

Changed that.

> - data += b'\0' * (8 - can_dlc): I prefer data = data.ljust(8, '\x00')

Hum... Done.

> - you might add frame encoder/decoder in your example

Done.

> - if (!strcmp(PyBytes_AS_STRING(interfaceName), "")) hum.
> PyBytes_GET_SIZE(intername)==0 should be enough

Done.

> - you truncate the interface name, it can be surprising, I would prefer an
> error (e.g. "interface name too long: 20 characters, the maximum is 10
> characters" ?)
I changed that, and added a test. Also, note that AF_PACKET suffers from the same problem. I'll submit a separate patch.

> - (oh no! don't include horrible configure diff in patches for the bug
> tracker :-p)

Yeah, I usually take care of that, but forgot this time.

> In which Linux version was CAN introduced?

Apparently, 2.6.25. Note that we don't need @support.requires_linux_version() though, it should be caught by HAVE_SOCKET_CAN (also, you can't use it as a class decorator...).

Here's the updated patch. It passes on all the buildbots (of course, it's only relevant on Linux).

--
Added file: http://bugs.python.org/file23234/socketcan_v5.patch
___ Python tracker <http://bugs.python.org/issue10141> ___

diff -r a06ef7ab7321 Doc/library/socket.rst
--- a/Doc/library/socket.rst	Wed Sep 21 22:05:01 2011 +0200
+++ b/Doc/library/socket.rst	Fri Sep 23 23:27:19 2011 +0200
@@ -80,6 +80,11 @@
    If *addr_type* is TIPC_ADDR_ID, then *v1* is the node, *v2* is the
    reference, and *v3* should be set to 0.
 
+- A tuple ``(interface, )`` is used for the :const:`AF_CAN` address family,
+  where *interface* is a string representing a network interface name like
+  ``'can0'``. The network interface name ``''`` can be used to receive packets
+  from all network interfaces of this family.
+
 - Certain other address families (:const:`AF_BLUETOOTH`, :const:`AF_PACKET`)
   support specific representations.
 
@@ -216,6 +221,19 @@
    in the Unix header files are defined; for a few symbols, default values are
    provided.
 
+.. data:: AF_CAN
+          PF_CAN
+          SOL_CAN_*
+          CAN_*
+
+   Many constants of these forms, documented in the Linux documentation, are
+   also defined in the socket module.
+
+   Availability: Linux >= 2.6.25.
+
+   .. versionadded:: 3.3
+
+
 .. data:: SIO_*
           RCVALL_*
 
@@ -387,10 +405,14 @@
    Create a new socket using the given address family, socket type and protocol
    number.  The address family should be :const:`AF_INET` (the default),
-   :const:`AF_INET6` or :const:`AF_UNIX`. The socket type should be
-   :const:`SOCK_STREAM` (the default), :const:`SOCK_DGRAM` or perhaps one of the
-   other ``SOCK_`` constants. The protocol number is usually zero and may be
-   omitted in that case.
+   :const:`AF_INET6`, :const:`AF_UNIX` or :const:`AF_CAN`. The socket type
+   should be :const:`SOCK_STREAM` (the default), :const:`SOCK_DGRAM`,
+   :const:`SOCK_RAW` or perhaps one of the other ``SOCK_`` constants. The
+   protocol number is usually zero and may be omitted in that case or
+   :const:
[issue10141] SocketCan support
Changes by Charles-François Natali : Removed file: http://bugs.python.org/file23225/socketcan_v4.patch ___ Python tracker <http://bugs.python.org/issue10141> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Changes by Charles-François Natali : -- dependencies: -_XOPEN_SOURCE and _XOPEN_SOURCE_EXTENDED usage on Solaris resolution: -> fixed stage: -> committed/rejected ___ Python tracker <http://bugs.python.org/issue12981> ___
[issue12981] rewrite multiprocessing (senfd|recvfd) in Python
Changes by Charles-François Natali : -- status: open -> closed ___ Python tracker <http://bugs.python.org/issue12981> ___
[issue13058] Fix file descriptor leak on error
Charles-François Natali added the comment: Patch applied, thanks! -- nosy: +neologix resolution: -> fixed stage: -> committed/rejected status: open -> closed versions: +Python 2.7, Python 3.2, Python 3.3 -Python 3.4 ___ Python tracker <http://bugs.python.org/issue13058> ___
[issue13070] segmentation fault in pure-python multi-threaded server
Charles-François Natali added the comment:

Confirmed with default. The problem is that the TextIOWrapper gets collected after the underlying BufferedRWPair has been cleared (tp_clear) by the garbage collector: when _PyIOBase_finalize() is called for the TextIOWrapper, it checks if the textio is closed, which indirectly checks if the underlying rwpair is closed:

"""
static PyObject *
bufferedrwpair_closed_get(rwpair *self, void *context)
{
    return PyObject_GetAttr((PyObject *) self->writer, _PyIO_str_closed);
}
"""

Since self->writer has already been set to NULL by bufferedrwpair_clear(), PyObject_GetAttr() segfaults.

@Victor Could you try the patch attached?

--
keywords: +patch
nosy: +amaury.forgeotdarc, neologix, pitrou
versions: +Python 3.3
Added file: http://bugs.python.org/file23277/buffered_closed_gc.diff
___ Python tracker <http://bugs.python.org/issue13070> ___
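The reference pattern that triggers the bug can be reproduced from pure Python. This sketch mirrors the shape of the eventual regression test; the MockRaw class and the `buddy` attribute are illustrative names. On a fixed interpreter it runs cleanly; on an unpatched build the C io implementation could segfault while the cycle collector tears the pairs down.

```python
import gc
import io

class MockRaw(io.RawIOBase):
    # Minimal raw stream that is both readable and writable, so it can
    # serve as either half of a BufferedRWPair.
    def readable(self): return True
    def writable(self): return True
    def readinto(self, b): return 0       # immediate EOF
    def write(self, b): return len(b)

for _ in range(100):
    b1 = io.BufferedRWPair(MockRaw(), MockRaw())
    t1 = io.TextIOWrapper(b1, encoding="ascii")
    b2 = io.BufferedRWPair(MockRaw(), MockRaw())
    t2 = io.TextIOWrapper(b2, encoding="ascii")
    # A reference cycle forces the GC (not plain refcounting) to collect
    # these, so a rwpair can be cleared before its textio is finalized.
    t1.buddy, t2.buddy = t2, t1
    del b1, b2, t1, t2
gc.collect()
print("no crash")
```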
[issue13070] segmentation fault in pure-python multi-threaded server
Charles-François Natali added the comment: With test. -- Added file: http://bugs.python.org/file23278/buffered_closed_gc-1.diff ___ Python tracker <http://bugs.python.org/issue13070> ___
[issue13070] segmentation fault in pure-python multi-threaded server
Changes by Charles-François Natali : Removed file: http://bugs.python.org/file23277/buffered_closed_gc.diff ___ Python tracker <http://bugs.python.org/issue13070> ___
[issue13070] segmentation fault in pure-python multi-threaded server
Changes by Charles-François Natali : Removed file: http://bugs.python.org/file23278/buffered_closed_gc-1.diff ___ Python tracker <http://bugs.python.org/issue13070> ___
[issue13070] segmentation fault in pure-python multi-threaded server
Changes by Charles-François Natali : Added file: http://bugs.python.org/file23279/buffered_closed_gc-1.diff ___ Python tracker <http://bugs.python.org/issue13070> ___
[issue13084] test_signal failure
Charles-François Natali added the comment:

See http://bugs.python.org/issue12469, specifically http://bugs.python.org/issue12469#msg139831:

"""
> > When signals are unblocked, pending signals are delivered in the reverse
> > order of their number (also on Linux, not only on FreeBSD 6).
>
> I don't like this.
> POSIX doesn't make any guarantee about signal delivery order, except
> for real-time signals.
> It might work on FreeBSD and Linux, but that's definitely not
> documented, and might break with new kernel releases, or other
> kernels.

It looks like it works like this on most OSes (Linux, Mac OS X, Solaris, FreeBSD): I don't see any test_signal failure on 3.x buildbots. If we have a failure, we can use set() again, but only for test_pending: signal order should be reliable if signals are not blocked.
"""

Looks like we now have a failure :-)

--
nosy: +haypo, neologix
___ Python tracker <http://bugs.python.org/issue13084> ___
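A quick way to observe the delivery order of blocked-then-unblocked signals (Unix-only; signal.pthread_sigmask() is the 3.3 API involved here). As the quoted discussion notes, POSIX only guarantees ordering for real-time signals, so a robust test should compare the received signals as a set rather than as an ordered list — which is exactly the fix being proposed.

```python
import os
import signal

received = []

def handler(signum, frame):
    received.append(signum)

for sig in (signal.SIGUSR1, signal.SIGUSR2):
    signal.signal(sig, handler)

# Block both signals, make them pending, then unblock and observe the
# order in which the handlers fire.  The order is platform-dependent
# for non-real-time signals, so don't assert on it.
signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1, signal.SIGUSR2})
os.kill(os.getpid(), signal.SIGUSR2)
os.kill(os.getpid(), signal.SIGUSR1)
signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1, signal.SIGUSR2})
print(received)  # order varies by platform; compare as a set
```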
[issue13084] test_signal failure
Changes by Charles-François Natali : -- keywords: +patch Added file: http://bugs.python.org/file23284/check_signum_order.diff ___ Python tracker <http://bugs.python.org/issue13084> ___
[issue13070] segmentation fault in pure-python multi-threaded server
Charles-François Natali added the comment:

> Shouldn't the test use "self.BufferedRWPair" instead of
> "io.BufferedRWPair"?

Yes.

> Also, is it ok to just return NULL or should the error state also be
> set?

Well, I'm not sure, that's why I made you and Amaury noisy :-)
AFAICT, this is the only case where _check_closed can encounter a NULL self->writer. And this specific situation is not an error (nothing prevents the rwpair from being garbage collected before the textio), and _PyIOBase_finalize() explicitly clears any error returned:

"""
/* If `closed` doesn't exist or can't be evaluated as bool, then the
   object is probably in an unusable state, so ignore. */
res = PyObject_GetAttr(self, _PyIO_str_closed);
if (res == NULL)
    PyErr_Clear();
else {
    closed = PyObject_IsTrue(res);
    Py_DECREF(res);
    if (closed == -1)
        PyErr_Clear();
}
"""

Furthermore, I'm not sure about what kind of error would make sense here.

--
Added file: http://bugs.python.org/file23285/buffered_closed_gc-2.diff
___ Python tracker <http://bugs.python.org/issue13070> ___
[issue13070] segmentation fault in pure-python multi-threaded server
Changes by Charles-François Natali : Removed file: http://bugs.python.org/file23279/buffered_closed_gc-1.diff ___ Python tracker <http://bugs.python.org/issue13070> ___
[issue13001] test_socket.testRecvmsgTrunc failure on FreeBSD 7.2 buildbot
Charles-François Natali added the comment:

> @requires_freebsd_version should be factorized with
> @requires_linux_version.

Patches attached.

> Can we workaround FreeBSD (< 8) bug in C/Python?

Not really.

> Or should we remove the function on FreeBSD < 8?

There's really no reason to do that (and it's really a minor bug).

--
Added file: http://bugs.python.org/file23296/freebsd_msgtrunc-1.diff
Added file: http://bugs.python.org/file23297/requires_unix_version.diff
___ Python tracker <http://bugs.python.org/issue13001> ___

diff --git a/Lib/test/test_socket.py b/Lib/test/test_socket.py
--- a/Lib/test/test_socket.py
+++ b/Lib/test/test_socket.py
@@ -1659,6 +1659,9 @@
     def _testRecvmsgShorter(self):
         self.sendToServer(MSG)
 
+    # FreeBSD < 8 doesn't always set the MSG_TRUNC flag when a truncated
+    # datagram is received (issue #13001).
+    @support.requires_freebsd_version(8)
     def testRecvmsgTrunc(self):
         # Receive part of message, check for truncation indicators.
         msg, ancdata, flags, addr = self.doRecvmsg(self.serv_sock,
@@ -1668,6 +1671,7 @@
         self.assertEqual(ancdata, [])
         self.checkFlags(flags, eor=False)
 
+    @support.requires_freebsd_version(8)
     def _testRecvmsgTrunc(self):
         self.sendToServer(MSG)
 
diff --git a/Lib/test/support.py b/Lib/test/support.py
--- a/Lib/test/support.py
+++ b/Lib/test/support.py
@@ -44,8 +44,8 @@
     "Error", "TestFailed", "ResourceDenied", "import_module", "verbose",
     "use_resources", "max_memuse", "record_original_stdout",
     "get_original_stdout", "unload", "unlink", "rmtree", "forget",
-    "is_resource_enabled", "requires", "requires_linux_version",
-    "requires_mac_ver", "find_unused_port", "bind_port",
+    "is_resource_enabled", "requires", "requires_freebsd_version",
+    "requires_linux_version", "requires_mac_ver", "find_unused_port", "bind_port",
     "IPV6_ENABLED", "is_jython", "TESTFN", "HOST", "SAVEDCWD", "temp_cwd",
     "findfile", "create_empty_file", "sortdict", "check_syntax_error",
     "open_urlresource", "check_warnings", "CleanImport", "EnvironmentVarGuard",
     "TransientResource",
@@ -312,17 +312,17 @@
         msg = "Use of the %r resource not enabled" % resource
         raise ResourceDenied(msg)
 
-def requires_linux_version(*min_version):
-    """Decorator raising SkipTest if the OS is Linux and the kernel version is
-    less than min_version.
+def _requires_unix_version(sysname, min_version):
+    """Decorator raising SkipTest if the OS is `sysname` and the version is less
+    than `min_version`.
 
-    For example, @requires_linux_version(2, 6, 35) raises SkipTest if the Linux
-    kernel version is less than 2.6.35.
+    For example, @_requires_unix_version('FreeBSD', (7, 2)) raises SkipTest if
+    the FreeBSD version is less than 7.2.
     """
     def decorator(func):
         @functools.wraps(func)
         def wrapper(*args, **kw):
-            if sys.platform == 'linux':
+            if platform.system() == sysname:
                 version_txt = platform.release().split('-', 1)[0]
                 try:
                     version = tuple(map(int, version_txt.split('.')))
@@ -332,13 +332,29 @@
                 if version < min_version:
                     min_version_txt = '.'.join(map(str, min_version))
                     raise unittest.SkipTest(
-                        "Linux kernel %s or higher required, not %s"
-                        % (min_version_txt, version_txt))
-            return func(*args, **kw)
-        wrapper.min_version = min_version
+                        "%s version %s or higher required, not %s"
+                        % (sysname, min_version_txt, version_txt))
+            return func(*args, **kw)
         return wrapper
     return decorator
 
+def requires_freebsd_version(*min_version):
+    """Decorator raising SkipTest if the OS is FreeBSD and the FreeBSD version is
+    less than `min_version`.
+
+    For example, @requires_freebsd_version(7, 2) raises SkipTest if the FreeBSD
+    version is less than 7.2.
+    """
+    return _requires_unix_version('FreeBSD', min_version)
+
+def requires_linux_version(*min_version):
+    """Decorator raising SkipTest if the OS is Linux and the Linux version is
+    less than `min_version`.
+
+    For example, @requires_linux_version(2, 6, 32) raises SkipTest if the Linux
+    version is less than 2.6.32.
+    """
+    return _requires_unix_version('Linux', min_version)
+
 def requires_mac_ver(*min_version):
     """Decorator raising SkipTest if the OS is Mac OS X and the OS X version if
     less than min_version.
[issue10141] SocketCan support
Charles-François Natali added the comment: So, Victor, what do you think of the last version? This patch has been lingering for quite some time, and it's really a cool feature. -- ___ Python tracker <http://bugs.python.org/issue10141> ___
[issue13084] test_signal failure
Changes by Charles-François Natali : -- resolution: -> fixed stage: -> committed/rejected status: open -> closed ___ Python tracker <http://bugs.python.org/issue13084> ___
[issue13045] socket.getsockopt may require custom buffer contents
Charles-François Natali added the comment:

Hello,

method:: socket.getsockopt(level, optname[, optarg])

The overloading of the third parameter is confusing: it can already be an integer value or a buffer size, I don't think that adding a third possibility is a good idea. It might be better to add another optional `buffer` argument (and ignore `buflen` if this argument is provided).

Also, it would be nice to have a corresponding unit test: since I doubt this buffer argument is supported by many Unices out there, you can probably reuse a subset of what ipset does (just take care and guard it by @support.requires_linux_version() if applicable).

--
___ Python tracker <http://bugs.python.org/issue13045> ___
[issue13001] test_socket.testRecvmsgTrunc failure on FreeBSD 7.2 buildbot
Changes by Charles-François Natali : -- resolution: -> fixed stage: patch review -> committed/rejected status: open -> closed ___ Python tracker <http://bugs.python.org/issue13001> ___
[issue12156] test_multiprocessing.test_notify_all() timeout (1 hour) on FreeBSD 7.2
Charles-François Natali added the comment:

test_multiprocessing frequently hangs on FreeBSD < 8 buildbots, and this probably has to do with the limit on the max number of POSIX semaphores:

"""
======================================================================
ERROR: test_notify_all (test.test_multiprocessing.WithProcessesTestCondition)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/test/test_multiprocessing.py", line 777, in test_notify_all
    cond = self.Condition()
  File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/multiprocessing/__init__.py", line 189, in Condition
    return Condition(lock)
  File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/multiprocessing/synchronize.py", line 198, in __init__
    self._lock = lock or RLock()
  File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/multiprocessing/synchronize.py", line 172, in __init__
    SemLock.__init__(self, RECURSIVE_MUTEX, 1, 1)
  File "/usr/home/db3l/buildarea/3.x.bolen-freebsd7/build/Lib/multiprocessing/synchronize.py", line 75, in __init__
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 23] Too many open files in system
"""

There are probably dangling semaphores, since the test doesn't use that much POSIX semaphores. Either way, we can't do much about it...

--
___ Python tracker <http://bugs.python.org/issue12156> ___
[issue13070] segmentation fault in pure-python multi-threaded server
Charles-François Natali added the comment:

> Probably. OTOH, not setting the error state when returning NULL is
> usually an error (and can result in difficult-to-debug problems), so
> let's stay on the safe side.
> RuntimeError perhaps.

OK, I'll update the patch accordingly.

> Does that mean that an application will see a Python exception?

No, the finalization code explicitly clears any exception set.

--
___ Python tracker <http://bugs.python.org/issue13070> ___
[issue10348] multiprocessing: use SysV semaphores on FreeBSD
Charles-François Natali added the comment:

-1.

IMHO, implementing SysV semaphores would be a step backwards, plus the API is a real pain. I think there's no reason to complicate the code to accommodate such corner cases, especially since the systems that don't support POSIX semaphores will eventually die out...

--
nosy: +neologix
___ Python tracker <http://bugs.python.org/issue10348> ___
[issue13070] segmentation fault in pure-python multi-threaded server
Charles-François Natali added the comment:

Sorry, forgot about this issue... Updated patch (I'm not really satisfied with the error message, don't hesitate if you can think of a better wording).

--
Added file: http://bugs.python.org/file23319/buffered_closed_gc-3.diff
___ Python tracker <http://bugs.python.org/issue13070> ___

diff --git a/Lib/test/test_io.py b/Lib/test/test_io.py
--- a/Lib/test/test_io.py
+++ b/Lib/test/test_io.py
@@ -2421,6 +2421,20 @@
         with self.open(support.TESTFN, "rb") as f:
             self.assertEqual(f.read(), b"456def")
 
+    def test_rwpair_cleared_before_textio(self):
+        # Issue 13070: TextIOWrapper's finalization would crash when called
+        # after the reference to the underlying BufferedRWPair got cleared.
+        for i in range(1000):
+            b1 = self.BufferedRWPair(self.MockRawIO(), self.MockRawIO())
+            t1 = self.TextIOWrapper(b1, encoding="ascii")
+            b2 = self.BufferedRWPair(self.MockRawIO(), self.MockRawIO())
+            t2 = self.TextIOWrapper(b2, encoding="ascii")
+            # circular references
+            t1.buddy = t2
+            t2.buddy = t1
+        support.gc_collect()
+
+
 class PyTextIOWrapperTest(TextIOWrapperTest):
     pass
 
diff --git a/Modules/_io/bufferedio.c b/Modules/_io/bufferedio.c
--- a/Modules/_io/bufferedio.c
+++ b/Modules/_io/bufferedio.c
@@ -2307,6 +2307,10 @@
 static PyObject *
 bufferedrwpair_closed_get(rwpair *self, void *context)
 {
+    if (self->writer == NULL) {
+        PyErr_SetString(PyExc_RuntimeError, "the writer object has been cleared");
+        return NULL;
+    }
     return PyObject_GetAttr((PyObject *) self->writer, _PyIO_str_closed);
 }
[issue13045] socket.getsockopt may require custom buffer contents
Charles-François Natali added the comment:

> I've attached an update for the previous patch. Now there's no more
> overloading for the third argument and socket.getsockopt accepts one more
> optional argument -- a buffer to use as an input to kernel.

Remarks:

"""
+ length. If *buffer* is absent and *buflen* is an integer, then *buflen*
[...]
+ this buffer is returned as a bytes object. If *buflen* is absent, an integer
"""

There's a problem here, the first buflen part should probably be removed. Also, you might want to specify that if a custom buffer is provided, the length argument will be ignored.

> By the way, I don't really think that any POSIX-compliant UNIX out there
> would treat the buffer given to getsockopt in any way different from what
> Linux does. It is very easy to copy the buffer from user to kernel and back,
> and it is so inconvenient to prevent kernel from reading it prior to
> modification, that I bet no one has ever bothered to do this.

Me neither, I don't expect the syscall to return EINVAL: the goal is just to test the correct passing of the input buffer, and the length computation. If we can't test this easily within test_socket, it's ok, I guess the following should be enough:

- try supplying a non-buffer argument as fourth parameter (e.g. an int), and check that you get a ValueError
- supply a buffer with a size == sizeof(int) (SIZEOF_INT is defined in Lib/test/test_socket.py), and call getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 0, ): this should normally succeed, and return a buffer (check the return type)

--
___ Python tracker <http://bugs.python.org/issue13045> ___
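For reference, a sketch of what the existing three-argument getsockopt() form already does — the proposed fourth buffer argument is not shown, since at this point it only exists in the patch under review:

```python
import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# Integer form: the option value decoded as a C int.
flag = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)
print(flag)  # nonzero (1 on Linux)

# Buffer-size form: pass a length and get the raw option bytes back.
SIZEOF_INT = struct.calcsize("i")
raw = s.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, SIZEOF_INT)
print(type(raw), struct.unpack("i", raw)[0])
s.close()
```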
[issue10141] SocketCan support
Changes by Charles-François Natali : -- nosy: +pitrou ___ Python tracker <http://bugs.python.org/issue10141> ___
[issue11956] 3.3 : test_import.py causes 'make test' to fail
Changes by Charles-François Natali : -- resolution: -> fixed stage: -> committed/rejected status: open -> closed ___ Python tracker <http://bugs.python.org/issue11956> ___
[issue10141] SocketCan support
Charles-François Natali added the comment:

> I don't have much to say about the patch, given that I don't know
> anything about CAN and my system doesn't appear to have a "vcan0"
> interface.

I had never heard about it before this issue, but the protocol is really simple. If you want to try it out (just for fun :-), you just have to do the following:

# modprobe vcan
# ip link add dev vcan0 type vcan
# ifconfig vcan0 up

--
___ Python tracker <http://bugs.python.org/issue10141> ___
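Once the virtual interface is up, a raw CAN frame can be built with struct.pack(), as suggested in the review above. This sketch uses the kernel's `struct can_frame` layout (32-bit can_id, 8-bit DLC, 3 padding bytes, 8 data bytes); the helper names are illustrative, and the commented socket calls at the end assume the AF_CAN support from the patch plus a configured vcan0 interface.

```python
import struct

# struct can_frame: 32-bit can_id, 8-bit data length code,
# 3 padding bytes, then 8 data bytes.
CAN_FRAME_FMT = "=IB3x8s"
CAN_FRAME_SIZE = struct.calcsize(CAN_FRAME_FMT)  # 16

def build_can_frame(can_id, data):
    can_dlc = len(data)
    # Pad the payload to the full 8 bytes, as the review suggested.
    return struct.pack(CAN_FRAME_FMT, can_id, can_dlc, data.ljust(8, b'\x00'))

def dissect_can_frame(frame):
    can_id, can_dlc, data = struct.unpack(CAN_FRAME_FMT, frame)
    return can_id, can_dlc, data[:can_dlc]

frame = build_can_frame(0x123, b'\x01\x02\x03')
print(len(frame), dissect_can_frame(frame))  # 16 (291, 3, b'\x01\x02\x03')

# On a machine with the patch applied and a vcan interface, roughly:
#   s = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
#   s.bind(('vcan0',))
#   s.send(frame)
```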
[issue13070] segmentation fault in pure-python multi-threaded server
Charles-François Natali added the comment: Committed to 3.2 and default. Victor, thanks for the report! -- resolution: -> fixed stage: -> committed/rejected status: open -> closed ___ Python tracker <http://bugs.python.org/issue13070> ___
[issue13070] segmentation fault in pure-python multi-threaded server
Charles-François Natali added the comment:

> The issue doesn't affect Python 2.7?

Duh! I was sure the _io module had been introduced in Python 3 (I/O layer rewrite, etc). Yes, it does apply to 2.7. I'll commit the patch later today.

--
___ Python tracker <http://bugs.python.org/issue13070> ___
[issue10141] SocketCan support
Charles-François Natali added the comment: Committed. Matthias, Tiago, thanks! -- resolution: -> fixed stage: commit review -> committed/rejected status: open -> closed ___ Python tracker <http://bugs.python.org/issue10141> ___
[issue8037] multiprocessing.Queue's put() not atomic thread wise
Charles-François Natali added the comment:

> Modifying an object which is already on a traditional queue can also
> change what is received by the other thread (depending on timing).
> So Queue.Queue's put() is not "atomic" either. Therefore I do not
> believe this behaviour is a bug.

Agreed.

> However the solution proposed is a good one since it fixes Issue
> 10886. In addition it prevents arbitrary code being run in the
> background thread by weakref callbacks or __del__ methods. Such
> arbitrary code may cause inconsistent state in a forked process if
> the fork happens while the queue's thread is running -- see issue
> 6271.

[...]

> I would suggest closing this issue and letting Issue 10886 take its
> place.

Makes sense.

--
nosy: +neologix
resolution: -> duplicate
stage: test needed -> committed/rejected
status: open -> closed
superseder: -> Unhelpful backtrace for multiprocessing.Queue
___ Python tracker <http://bugs.python.org/issue8037> ___
[issue10141] SocketCan support
Charles-François Natali added the comment:

From python-dev:

"""
I work on Ubuntu Jaunty for my cpython development work - an old version, I know, but still quite serviceable and has worked well for me over many months. With the latest default cpython repository, however, I can't run the regression suite because the socket module now fails to build:

gcc -pthread -fPIC -g -O0 -Wall -Wstrict-prototypes -IInclude -I. -I./Include -I/usr/local/include -I/home/vinay/projects/python/default -c /home/vinay/projects/python/default/Modules/socketmodule.c -o build/temp.linux-i686-3.3-pydebug/home/vinay/projects/python/default/Modules/socketmodule.o
.../Modules/socketmodule.c: In function ‘makesockaddr’:
.../Modules/socketmodule.c:1224: error: ‘AF_CAN’ undeclared (first use in this function)
.../Modules/socketmodule.c:1224: error: (Each undeclared identifier is reported only once
.../Modules/socketmodule.c:1224: error: for each function it appears in.)
.../Modules/socketmodule.c: In function ‘getsockaddrarg’:
.../Modules/socketmodule.c:1610: error: ‘AF_CAN’ undeclared (first use in this function)
.../Modules/socketmodule.c: In function ‘getsockaddrlen’:
.../Modules/socketmodule.c:1750: error: ‘AF_CAN’ undeclared (first use in this function)

On this system, AF_CAN *is* defined, but in linux/socket.h, not in sys/socket.h. From what I can see, sys/socket.h includes bits/socket.h which includes asm/socket.h, but apparently linux/socket.h isn't included.
"""

Vinay, what happens if you replace in Modules/socketmodule.h:

"""
#ifdef HAVE_LINUX_CAN_H
#include <linux/can.h>
#endif
"""

with

"""
#ifdef HAVE_LINUX_CAN_H
#include <sys/socket.h>
#include <linux/can.h>
#endif
"""

--
nosy: +vinay.sajip
resolution: fixed ->
stage: committed/rejected -> needs patch
status: closed -> open
___ Python tracker <http://bugs.python.org/issue10141> ___
[issue10141] SocketCan support
Charles-François Natali added the comment:

> which would imply that on this system at least, the AF_CAN definition is
> supposed to come from elsewhere.

Yes, from <sys/socket.h>. Looks like a crappy libc version: <sys/socket.h> is present, but AF_CAN is not defined. Just for fun, is PF_CAN defined?

You might try the following in configure.in:

"""
# On Linux, can.h and can/raw.h require sys/socket.h
AC_CHECK_HEADERS(linux/can.h linux/can/raw.h,,,[
#ifdef HAVE_SYS_SOCKET_H
#include <sys/socket.h>
#ifndef AF_CAN
# error "AF_CAN not defined"
#endif
#endif
])
"""

--
___ Python tracker <http://bugs.python.org/issue10141> ___
[issue10141] SocketCan support
Charles-François Natali added the comment: Here's a better patch. It only touches socketmodule.c, so no need for autoconf, just use make. If it works on your box, I'll test it on the custom buildbots before pushing it for good (if I learned one thing, it's to never underestimate the potential for broken headers/libc). -- Added file: http://bugs.python.org/file23337/af_can_undefined.diff ___ Python tracker <http://bugs.python.org/issue10141> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10517] test_concurrent_futures crashes with "--with-pydebug" on RHEL5 with "Fatal Python error: Invalid thread state for this thread"
Charles-François Natali added the comment: Hello,

> Did anyone test this fix for case of fork() being called from Python sub interpreter?

Not specifically, unless it's part of the test suite. Anyway, unless this problem is systematic - which I doubt - it probably wouldn't have helped.

> Getting a report of fork() failing in sub interpreters under mod_wsgi that may be caused by this change. Still investigating.
> Specifically throwing up error:
> Couldn't create autoTLSkey mapping

Hmmm. If you can, try strace or instrument the code (perror() should be enough) to see why it's failing. pthread_setspecific() can fail with:
- EINVAL, if the TLS key is invalid (which would be strange since we call pthread_key_delete()/pthread_key_create() just before)
- or ENOMEM, if you run out of memory/address space

The latter seems much more likely (e.g. if many child processes and subinterpreters are created). BTW, if this is a bug report from someone else, tell him to post here, it'll be easier. And we don't byte :-)

-- ___ Python tracker <http://bugs.python.org/issue10517> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10141] SocketCan support
Charles-François Natali added the comment: Working fine on the buildbots and Vinay's box, closing! -- resolution: -> fixed stage: needs patch -> committed/rejected status: open -> closed ___ Python tracker <http://bugs.python.org/issue10141> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue11812] transient socket failure to connect to 'localhost'
Charles-François Natali added the comment: > Attached patch reads the name of the server socket instead of using > HOST or 'localhost'. > By the way, why do we use 'localhost' instead of '127.0.0.1' for > support.HOST? '127.0.0.1' doesn't depend on the DNS configuration of > the host (especially its "hosts" file, even Windows has such file). This might be a good idea. Apparently, Windows 7 doesn't use its hosts file (yes, it does have one) to resolve 'localhost', but its DNS resolver, see http://serverfault.com/questions/4689/windows-7-localhost-name-resolution-is-handled-within-dns-itself-why Depending on the DNS setup, it could lead to a latency which might explain such failures. > Seems a clear race condition. The code looks correct: a threading.Event is set by the server once it called listen(), point at which incoming connections should be queued (SYN/ACK is sent before accept()). So I'd bet either on resolution delay (on Unix /etc/nsswitch.conf), or an overloaded machine. -- nosy: +neologix ___ Python tracker <http://bugs.python.org/issue11812> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13148] simple bug in mmap size check
Charles-François Natali added the comment: > The condition contradicts the exception text:

Why? The offset is zero-based, so 0 <= offset < size is a valid check.

> First of all, it doesn't fail (at least on Linux), I tested it before posting.

Hmmm. You got lucky, since the offset must be a multiple of the page size.

> tried on newer Linux - crashes with my patch.

That's exactly why we perform such checks. Here's what POSIX says:
"""
[EINVAL] The addr argument (if MAP_FIXED was specified) or off is not a multiple of the page size as returned by sysconf(), or are considered invalid by the implementation.
[ENXIO] Addresses in the range [off, off + len) are invalid for the object specified by fildes.
"""
Since we usually want to avoid implementation-defined behavior (and crashes), I think it's better to stick with the current checks (note that issue #12556 concerned a real corner case: /proc entries supporting mmap but reporting a zero length).

> Therefore, I'm no longer pushing for this change, I will need another workaround anyway.

Alright, closing then.

-- resolution: -> rejected stage: -> committed/rejected status: open -> closed ___ Python tracker <http://bugs.python.org/issue13148> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10517] test_concurrent_futures crashes with "--with-pydebug" on RHEL5 with "Fatal Python error: Invalid thread state for this thread"
Charles-François Natali added the comment: I did a quick test (calling fork() from a subinterpreter), and as expected, I couldn't reproduce the problem. So I still favor an OOM condition making pthread_setspecific bail out with ENOMEM, the other option being a nasty libc bug. If the problem persists, please open a new issue. -- ___ Python tracker <http://bugs.python.org/issue10517> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13156] _PyGILState_Reinit assumes auto thread state will always exist which is not true.
Changes by Charles-François Natali : -- nosy: +pitrou ___ Python tracker <http://bugs.python.org/issue13156> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13156] _PyGILState_Reinit assumes auto thread state will always exist which is not true.
Charles-François Natali added the comment: Note that this doesn't apply to default: the problem is that 2.7 and 3.2 don't use native TLS, and with the ad-hoc TLS implementation, a NULL value isn't supported: """ /* Internal helper. * If the current thread has a mapping for key, the appropriate struct key* * is returned. NB: value is ignored in this case! * If there is no mapping for key in the current thread, then: * If value is NULL, NULL is returned. * Else a mapping of key to value is created for the current thread, * and a pointer to a new struct key* is returned; except that if * malloc() can't find room for a new struct key*, NULL is returned. * So when value==NULL, this acts like a pure lookup routine, and when * value!=NULL, this acts like dict.setdefault(), returning an existing * mapping if one exists, else creating a new mapping. """ So PyThread_set_key_value() has different semantics between 2.7/3.2 and default... > So _PyGILState_Reinit() is broken because it assumes that an auto > thread state will always exist for the thread for it to reinit, which > will not always be the case. Hmm... Please see http://docs.python.org/c-api/init.html#non-python-created-threads """ When threads are created using the dedicated Python APIs (such as the threading module), a thread state is automatically associated to them and the code showed above is therefore correct. However, when threads are created from C (for example by a third-party library with its own thread management), they don’t hold the GIL, nor is there a thread state structure for them. If you need to call Python code from these threads (often this will be part of a callback API provided by the aforementioned third-party library), you must first register these threads with the interpreter by creating a thread state data structure, then acquiring the GIL, and finally storing their thread state pointer, before you can start using the Python/C API. 
When you are done, you should reset the thread state pointer, release the GIL, and finally free the thread state data structure. The PyGILState_Ensure() and PyGILState_Release() functions do all of the above automatically. The typical idiom for calling into Python from a C thread is: """ I think you should call PyGILState_Ensure() first (which associates the thread state with the autoTLS key). I'll let Antoine answer, he's got much more experience than me. -- ___ Python tracker <http://bugs.python.org/issue13156> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13156] _PyGILState_Reinit assumes auto thread state will always exist which is not true.
Charles-François Natali added the comment: > So, the documentation you quote is only to do with the main interpreter and is not how things work for sub interpreters.

You're right, my bad. However, it would probably be better to destroy/reset the autoTLSkey even if the current thread doesn't have an associated TLS key (to avoid stumbling upon the original libc bug of issue #10517):

"""
void
_PyGILState_Reinit(void)
{
    PyThreadState *tstate = PyGILState_GetThisThreadState();
    PyThread_delete_key(autoTLSkey);
    if ((autoTLSkey = PyThread_create_key()) == -1)
        Py_FatalError("Could not allocate TLS entry");
    /* re-associate the current thread state with the new key */
    if (tstate && PyThread_set_key_value(autoTLSkey, (void *)tstate) < 0)
        Py_FatalError("Couldn't create autoTLSkey mapping");
}
"""

Now that I think about it, the problem is even simpler: this patch shouldn't have been applied to 2.7 and 3.2, it was only relevant for the native pthread TLS implementation (which does allow NULL values). So the solution would be simply to back out this patch on 2.7 and 3.2.

-- ___ Python tracker <http://bugs.python.org/issue13156> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13156] _PyGILState_Reinit assumes auto thread state will always exist which is not true.
Charles-François Natali added the comment: > So the solution would be simply to backout this patch on 2.7 and 3.2. Actually, I just checked, and the native TLS implementation is present in 3.2, so this problem shouldn't show up: did you test it with 3.2? AFAICT, this should only affect 2.7 (for which this patch wasn't relevant). -- ___ Python tracker <http://bugs.python.org/issue13156> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13146] Writing a pyc file is not atomic
Charles-François Natali added the comment: > Here is a patch for import.c.

Looks good to me.

> This new patch also fixes importlib.

"""
path_tmp = path + '.tmp'
with _io.FileIO(path_tmp, 'wb') as file:
    file.write(data)
_os.rename(path_tmp, path)
"""

I don't know exactly the context in which this code runs, but you can have a corruption if multiple processes try to write the bytecode file at the same time, since they'll all open the .tmp file: it should be opened with O_EXCL.

Also, as a side note, I'm wondering whether this type of check:
"""
if not sys.platform.startswith('win'):
    # On POSIX-like platforms, renaming is atomic
"""
couldn't be rewritten as
"""
if os.name == 'posix':
    # On POSIX-like platforms, renaming is atomic
"""
For example, does OS X report as 'posix'?

-- nosy: +neologix ___ Python tracker <http://bugs.python.org/issue13146> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
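The O_EXCL suggestion above can be sketched like this (the file names are illustrative, not the importlib code): the first writer creates the temp file exclusively, so a concurrent writer fails immediately instead of both writing into the same .tmp file.

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "module.pyc")   # hypothetical target name
path_tmp = path + ".tmp"

# First writer creates the temp file exclusively...
fd = os.open(path_tmp, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o666)
with os.fdopen(fd, "wb") as f:
    f.write(b"bytecode")

# ...so a second writer fails at once instead of corrupting it.
try:
    os.open(path_tmp, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o666)
    second_writer_won = True
except FileExistsError:
    second_writer_won = False

os.rename(path_tmp, path)   # atomic publication on POSIX
published = open(path, "rb").read()
```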
[issue13156] _PyGILState_Reinit assumes auto thread state will always exist which is not true.
Charles-François Natali added the comment: Here's a patch for 3.2 and default which calls PyThread_set_key_value() only if there was an auto thread state previously associated (while the current code works with pthread TLS, there are other implementations which may behave strangely, and there's still the ad-hoc implementation in Python/thread.c).

-- Added file: http://bugs.python.org/file23391/tstate_after_fork.diff ___ Python tracker <http://bugs.python.org/issue13156> ___

diff --git a/Python/pystate.c b/Python/pystate.c
--- a/Python/pystate.c
+++ b/Python/pystate.c
@@ -586,9 +586,9 @@
     autoInterpreterState = NULL;
 }
 
-/* Reset the TLS key - called by PyOS_AfterFork.
+/* Reset the TLS key - called by PyOS_AfterFork().
  * This should not be necessary, but some - buggy - pthread implementations
- * don't flush TLS on fork, see issue #10517.
+ * don't reset TLS upon fork(), see issue #10517.
  */
 void
 _PyGILState_Reinit(void)
@@ -598,8 +598,10 @@
     if ((autoTLSkey = PyThread_create_key()) == -1)
         Py_FatalError("Could not allocate TLS entry");
 
-    /* re-associate the current thread state with the new key */
-    if (PyThread_set_key_value(autoTLSkey, (void *)tstate) < 0)
+    /* If the thread had an associated auto thread state, reassociate it with
+     * the new key (this will not hold, for example, for a thread created
+     * outside of Python calling into a subinterpreter). */
+    if (tstate && PyThread_set_key_value(autoTLSkey, (void *)tstate) < 0)
         Py_FatalError("Couldn't create autoTLSkey mapping");
 }

___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13146] Writing a pyc file is not atomic
Charles-François Natali added the comment: > Or perhaps append the PID to the name of the temp file ? (easier done in Python than in C :-))

I don't really like appending PIDs to generate file names:
- if you have multiple processes at the same time, they'll all write their own file which will end up being replaced by the last one to perform the move, whereas with O_EXCL, they'll see immediately that another instance is writing it (the overhead is negligible with such small files, but maybe not so much when creating the file requires a certain amount of work)
- if processes crash at the wrong time, you can end up with a flurry of stale PID-suffixed temp files
- the last one is even more insidious and unlikely, but here it goes: the PID is unique only on a given machine: if you have, for example, a network file system shared between multiple hosts, then you can have a PID collision, whereas O_EXCL is safe (O_EXCL doesn't work on NFSv2, but nowadays every OS implements it correctly on NFSv3)

O_EXCL is really what POSIX offers to solve this (and it's also what import.c does).

> >> Also, as a side note, I'm wondering whether this type of check:
> >> """
> >> if not sys.platform.startswith('win'):
> >>     # On POSIX-like platforms, renaming is atomic
> >> """
> >> couldn't be rewritten as
> >> """
> >> if os.name == 'posix':
> >>     # On POSIX-like platforms, renaming is atomic
> >> """
>
> No, because os.py is not available to importlib (which must be bootstrappable early). See the _bootstrap.py header for information about what is available; this is also why we use FileIO instead of open().

OK. So is the O_EXCL approach possible? Would something like _io.open(_os.open(path, _os.O_CREAT|_os.O_EXCL...), 'wb') work? Also, since this can be quite tricky and redundant, how about adding a framework to do this kind of thing to the standard library?
Something like:
"""
with atomic_create(<path>, 'b') as f:
    f.write(<data>)
"""
where atomic_create would be a context manager that would make `f` point to a temporary file (open with O_EXCL :-), and do the rename at the end. It could also accept an option to ensure durability (i.e. call fsync() on the file and on the parent directory). Note that it probably wouldn't help here, since we only have access to a really limited part of the library :-)

-- ___ Python tracker <http://bugs.python.org/issue13146> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
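The proposed helper could look roughly like the sketch below; atomic_create is the hypothetical context manager suggested above, not an existing stdlib API, and the sketch assumes POSIX rename semantics:

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def atomic_create(path, durable=False):
    # Hypothetical helper: write to an exclusively created temp file,
    # then atomically rename it over the target path.
    tmp = path + ".tmp"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o666)
    try:
        with os.fdopen(fd, "wb") as f:
            yield f
            if durable:
                f.flush()
                os.fsync(f.fileno())
        os.rename(tmp, path)        # atomic on POSIX
        if durable:
            # fsync the directory so the rename itself survives a crash
            dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
            try:
                os.fsync(dfd)
            finally:
                os.close(dfd)
    except BaseException:
        try:
            os.unlink(tmp)          # clean up the partial file
        except FileNotFoundError:
            pass
        raise

d = tempfile.mkdtemp()
target = os.path.join(d, "data.bin")
with atomic_create(target, durable=True) as f:
    f.write(b"payload")
contents = open(target, "rb").read()
```

A reader either sees no file or the complete new contents; a crashed writer leaves at most one .tmp file, which the O_EXCL open surfaces to the next writer.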
[issue12619] Automatically regenerate platform-specific modules
Charles-François Natali added the comment: > Related : #1565071 and #3990 . There is no reason to keep plat-xxx files if they cannot be managed properly. +1 -- ___ Python tracker <http://bugs.python.org/issue12619> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13226] Expose RTLD_* constants in the posix module
Charles-François Natali added the comment: Note that I'm really +10 on this issue: such constants belong to individual modules rather than to the unmanageable Lib/plat-XXX/. -- nosy: +neologix ___ Python tracker <http://bugs.python.org/issue13226> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10332] Multiprocessing maxtasksperchild results in hang
Charles-François Natali added the comment: Here's an updated patch. I'll open a separate issue for the thread-safety. -- keywords: +needs review nosy: +pitrou stage: -> patch review Added file: http://bugs.python.org/file23489/pool_lifetime_close-1.diff ___ Python tracker <http://bugs.python.org/issue10332> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue10332] Multiprocessing maxtasksperchild results in hang
Changes by Charles-François Natali : Removed file: http://bugs.python.org/file21644/pool_lifetime_close.diff ___ Python tracker <http://bugs.python.org/issue10332> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13140] ThreadingMixIn.daemon_threads is not honored when parent is daemon
Charles-François Natali added the comment:

"""
"""Start a new thread to process the request."""
t = threading.Thread(target = self.process_request_thread,
                     args = (request, client_address))
if self.daemon_threads:
    t.daemon = True
"""

If daemon_threads is false, t.daemon is not set, and the daemonic property is inherited from the creating thread, i.e. the server thread. Patch attached (I don't think a test is necessary for such a trivial change).

-- keywords: +needs review, patch nosy: +haypo, neologix stage: needs patch -> patch review Added file: http://bugs.python.org/file23491/socketserver_daemon.diff ___ Python tracker <http://bugs.python.org/issue13140> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
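The inheritance behavior described above is easy to demonstrate in isolation (a standalone sketch, not the socketserver code): a thread spawned from a daemonized "server" thread silently comes up daemonic unless the flag is set explicitly.

```python
import threading

observed = {}

def spawn_child():
    # A thread created here inherits `daemon` from the *creating* thread
    # (the daemonized server thread), not from the main thread.
    child = threading.Thread(target=lambda: None)
    observed["inherited"] = child.daemon
    child.daemon = False            # the fix: always set it explicitly
    observed["explicit"] = child.daemon
    child.start()
    child.join()

server = threading.Thread(target=spawn_child, daemon=True)
server.start()
server.join()
```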
[issue8828] Atomic function to rename a file
Charles-François Natali added the comment: The recent issue #13146 renewed my interest, so I'd like to make this move forward, since I think an atomic rename/write API could be quite useful. Issue #8604 (Adding an atomic FS write API) can be achieved relatively easily with the typical (fsync() left aside) - create temporary file - write to the temp file - atomically rename the temp file to the target path But the problem is that rename is only atomic on POSIX, and not on Windows. So I'd suggest to: - rename this issue to target specifically Windows ;-) - add MoveFileTransacted to the standard library (PC/msvcrtmodule.c, posixmodule?) I'm -1 on exposing a "best effort" atomic rename/file API: either the OS offers the primitives necessary to achieve atomicity, or it doesn't. It's better to have a working implementation on some OSes than a flaky implementation on every OS. Note that I'll happily take over the atomic file API part (issue #8604), but since my Windows kung-fu is so poor, it'd be nice if someone with some Windows experience could tackle this MoveFileTransacted -- ___ Python tracker <http://bugs.python.org/issue8828> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13140] ThreadingMixIn.daemon_threads is not honored when parent is daemon
Charles-François Natali added the comment: > I would prefer to preserve the inheritance by default, and to change the daemonic attribute only if it is explicitly set to True or False. This way it will be backward compatible.

It may be backward compatible, but IMHO, the current behavior is broken: while it can certainly make sense to set the server thread daemonic, you most certainly don't want to have client threads daemonic implicitly (since you usually don't want to terminate the clients' connections abruptly when the main thread exits). But I must admit I don't have a strong opinion, so both solutions are OK to me. The only thing that bothers me is this:
"""
+.. versionchanged:: 3.3
+   previously, the *daemon_threads = False* flag was ignored.
"""
You usually document new features or behavior changes: this really looks like a bug fix (and is one actually). Something like "the semantics of *daemon_threads* changed slightly" might be better (but I'm no native speaker).

-- ___ Python tracker <http://bugs.python.org/issue13140> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8828] Atomic function to rename a file
Charles-François Natali added the comment: > MoveFileTransacted is only available under Vista or later. You should be able > to use MoveFileEx for the same effect. Nice. > "The solution? Let's remember that metadata changes are atomic. Rename is > such a case." > Hmmm. Is he referring to the "standard" rename? The blog doesn't evoke a specific function, but if it was the case, then why bother at all? By the way: """ - MoveFileEx() with MOVEFILE_REPLACE_EXISTING and MOVEFILE_WRITE_THROUGH flags: not atomic (eg. "If the file is to be moved to a different volume, the function simulates the move by using the CopyFile and DeleteFile functions."), version >= Windows 2000 """ There's exactly the same limitation with the POSIX version (except that it'll fail with EXDEV instead of silently doing the copy+unlink). -- ___ Python tracker <http://bugs.python.org/issue8828> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
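For reference, later Python versions grew os.replace() (added in 3.3), which is this exact overwrite-rename: rename(2) on POSIX and, as I understand it, MoveFileEx with MOVEFILE_REPLACE_EXISTING on Windows. A small demonstration with throwaway file names:

```python
import os
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "incoming")
dst = os.path.join(d, "current")

with open(dst, "w") as f:
    f.write("old contents")
with open(src, "w") as f:
    f.write("new contents")

# Atomically replace dst with src. Like rename(2), this still fails
# with EXDEV when src and dst live on different volumes, rather than
# silently falling back to copy+delete.
os.replace(src, dst)

with open(dst) as f:
    result = f.read()
```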
[issue13215] multiprocessing Manager.connect() aggressively retries refused connections
Charles-François Natali added the comment: > While a 20 second timeout may make sense for *unresponsive* servers, ECONNREFUSED probably indicates that the server is not listening on this port, so hammering it with 1,999 more connection attempts isn't going to help.

That's funny, I noticed this a couple days ago, and it also puzzled me...

> I'm not sure, but I think that would be for the case where you are spawning the server yourself and the child process takes time to start up.

That's also what I think. But that's strange, since:
- this holds for every client/server communication (e.g. why not do that for smtplib, telnetlib, etc. ?)
- it's against the classical connect() semantics
- some code may prefer failing immediately (instead of "hammering" the remote host) if the remote server is down, or the address is incorrect: it can still handle the ECONNREFUSED if it wants to retry, with a custom retry timeout

I removed the retry code and ran test_multiprocessing and test_concurrent_futures in a loop, and didn't see any failure (on Linux), so I'd say we could probably remove that. OTOH, I would feel bad if this broke someone's code (even though code relying on the automatic retries is probably broken). So I'm +1 on removing the retry logic altogether, unless of course someone comes up with a good reason to keep it (I dug a little through the logs to see when this was introduced, but apparently it was there in the original import). If we don't remove it, I agree we should at least reduce the timeout and increase the period (an exponential backoff may be a bit overkill).

-- ___ Python tracker <http://bugs.python.org/issue13215> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
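A caller-controlled retry policy of the kind suggested (retry only on ECONNREFUSED, bounded attempts, backoff between them) might look like this sketch; connect_with_retries is a hypothetical helper, not the multiprocessing code:

```python
import socket
import time

def connect_with_retries(address, retries=3, delay=0.01, backoff=2):
    # Hypothetical policy: retry only on ECONNREFUSED, with exponential
    # backoff, and give up after a bounded number of attempts.
    attempts = 0
    while True:
        attempts += 1
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.connect(address)
            return s, attempts
        except ConnectionRefusedError:
            s.close()
            if attempts > retries:
                raise               # let the caller decide what to do
            time.sleep(delay)
            delay *= backoff

# Against a listening server the first attempt succeeds: no hammering.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
sock, attempts = connect_with_retries(server.getsockname())
sock.close()
server.close()
```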
[issue10332] Multiprocessing maxtasksperchild results in hang
Charles-François Natali added the comment: James, thanks for the report! -- resolution: -> fixed stage: patch review -> committed/rejected status: open -> closed ___ Python tracker <http://bugs.python.org/issue10332> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13263] Group some os functions in submodules
Charles-François Natali added the comment: > I think there is a value to use the very same function names in the posix module as in the posix API.

It would still be the case, except that they'd live in distinct submodules.

> The posix API (and C in general) is also flat, and uses the prefix convention.

That's because C doesn't have namespaces: it's certainly due to this limitation, and not a design choice (and when you think about it, there is a namespace hierarchy, in the header files: <sys/stat.h>, <sys/socket.h>, etc.).

-- nosy: +neologix ___ Python tracker <http://bugs.python.org/issue13263> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13263] Group some os functions in submodules
Charles-François Natali added the comment: > I would prefer to keep the shared prefix even if we move functions to a new > module. Python refers usually to the C documentation for the details of a > function. If we rename a function, it becomes more difficult to get the > manual of the function. Indeed. But that's what I understood from Ezio's proposal, I don't think he's suggesting to rename them. -- ___ Python tracker <http://bugs.python.org/issue13263> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5661] asyncore should catch EPIPE while sending() and receiving()
Charles-François Natali added the comment: Did you try with the current branches? AFAICT, this should have been solved by 208a5290fd38 (issue #11265), and I did a quick test with default and it seems to be fixed. In any case, it's probably a good idea to add this test to test_asyncore. > So it seems that, on linux, when writing to a closed socket, you get > an ECONNRESET when there is still data in the socket, and an EPIPE > otherwise. In the first case the tcp connection ends with a single > RESET, and in the second case it ends with the sequence FIN-ACK-RESET. Yes, see RFC1122 section 4.2.2.13: """ A host MAY implement a "half-duplex" TCP close sequence, so that an application that has called CLOSE cannot continue to read data from the connection. If such a host issues a CLOSE call while received data is still pending in TCP, or if new data is received after CLOSE is called, its TCP SHOULD send a RST to show that data was lost. """ -- nosy: +neologix ___ Python tracker <http://bugs.python.org/issue5661> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5661] asyncore should catch EPIPE while sending() and receiving()
Charles-François Natali added the comment: >> Did you try with the current branches?
>
> Yes, the test passes against the current default and 2.7 branches. One must remove EPIPE from the asyncore._DISCONNECTED frozenset to make the test fail.

OK. Then I'll add this test to test_asyncore. -- ___ Python tracker <http://bugs.python.org/issue5661> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue13285] signal module ignores external signal changes
Charles-François Natali added the comment: > So it's impossible to reliably save and restore signal handlers through > python when they can also be changed outside the python interpreter. signal.getsignal() or signal.signal() return the current/previous handler as a Python function. How could it return a reference to a native (i.e. C) signal handler? While we could in theory return it as a magic cookie (i.e. the handler's address as returned by sigaction/signal) that can just be passed back to signal.signal(), it would be a bad idea: if the user passes an invalid address, the process will crash when the signal is received. -- nosy: +neologix ___ Python tracker <http://bugs.python.org/issue13285> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
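The save/restore round-trip described above works for any Python-visible handler value; a short POSIX-only sketch (it uses SIGUSR1, and a C-installed handler would show up as None here, which signal() cannot restore):

```python
import signal

def handler(signum, frame):
    pass  # placeholder Python-level handler

# Save whatever Python can see for SIGUSR1: SIG_DFL/SIG_IGN, a Python
# callable, or None for a handler installed outside the interpreter.
previous = signal.getsignal(signal.SIGUSR1)
signal.signal(signal.SIGUSR1, handler)
current = signal.getsignal(signal.SIGUSR1)
signal.signal(signal.SIGUSR1, previous)     # restore the saved value
restored = signal.getsignal(signal.SIGUSR1)
```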
[issue5661] asyncore should catch EPIPE while sending() and receiving()
Charles-François Natali added the comment: The test fails on OS X:
"""
==
ERROR: test_handle_close_after_conn_broken (test.test_asyncore.TestAPI_UseIPv4Poll)
--
Traceback (most recent call last):
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/test/test_asyncore.py", line 661, in test_handle_close_after_conn_broken
    self.loop_waiting_for_flag(client)
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/test/test_asyncore.py", line 523, in loop_waiting_for_flag
    asyncore.loop(timeout=0.01, count=1, use_poll=self.use_poll)
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/asyncore.py", line 215, in loop
    poll_fun(timeout, map)
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/asyncore.py", line 196, in poll2
    readwrite(obj, flags)
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/asyncore.py", line 123, in readwrite
    obj.handle_error()
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/asyncore.py", line 112, in readwrite
    obj.handle_expt_event()
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/asyncore.py", line 476, in handle_expt_event
    self.handle_expt()
  File "/Users/buildbot/buildarea/3.x.parc-snowleopard-1/build/Lib/test/test_asyncore.py", line 470, in handle_expt
    raise Exception("handle_expt not supposed to be called")
Exception: handle_expt not supposed to be called
"""
Looks like the FD is returned in the exception set on OS X...

-- ___ Python tracker <http://bugs.python.org/issue5661> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5661] asyncore should catch EPIPE while sending() and receiving()
Charles-François Natali added the comment: > The test fails when use_poll is True. The difference between using poll() and poll2():

poll uses select(2), while poll2 uses poll(2) (duh, that's confusing). It seems that on OS X Snow Leopard, poll(2) sets the POLLPRI flag upon EPIPE (and probably doesn't return the FD in the exception set for select(2)...). Not sure whether it's legal, but it's the only OS to do that (POLLPRI is usually used for OOB data). Also, note that Tiger doesn't behave that way. OS X often offers such surprises :-)

> What about forcing self.use_poll to False, before calling loop_waiting_for_flag() ? The drawback being that the test will be run twice with the same environment.

I just added a handle_expt() handler, and it works fine. Closing, thanks for the patch!

-- assignee: josiahcarlson -> resolution: -> fixed stage: patch review -> committed/rejected status: open -> closed ___ Python tracker <http://bugs.python.org/issue5661> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12905] multiple errors in test_socket on OpenBSD
Charles-François Natali added the comment: Rémi, do you want to submit a patch to skip those tests on OpenBSD? -- ___ Python tracker <http://bugs.python.org/issue12905> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12797] io.FileIO and io.open should support openat
Charles-François Natali added the comment: > Thanks. Although, on second thought, I'm not sure whether Amaury's > idea (allowing a custom opener) is not better... Thoughts? +1. This would also address issues #12760 and #12105. -- ___ Python tracker <http://bugs.python.org/issue12797> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue12105] open() does not able to set flags, such as O_CLOEXEC
Changes by Charles-François Natali : -- status: open -> closed superseder: -> io.FileIO and io.open should support openat ___ Python tracker <http://bugs.python.org/issue12105> ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com