New submission from Charles-Francois Natali :
test_ftplib fails in TestIPv6Environment:
======================================================================
ERROR: test_makepasv (test.test_ftplib.TestIPv6Environment)
Charles-Francois Natali added the comment:
> I don't understand the point concerning trimming/fragmentation/threading by
> Charles-Francois: dlmalloc will allocate its own memory segment using mmap
> and handle memory inside that segment when you do a
> dlmalloc/dlfree/dlrealloc
Charles-Francois Natali added the comment:
Closing as invalid, since it's definitely not a Python issue, but much more
likely a network configuration problem.
--
resolution: -> invalid
status: open -> closed
Charles-Francois Natali added the comment:
Even worse than that, mixing two malloc implementations could lead to trouble.
For example, the trimming code checks that the top of the heap is where it last set it.
So if an allocation has been made by another implementation in the meantime,
the heap won'
Changes by Charles-Francois Natali :
Added file: http://bugs.python.org/file21819/is_ipv6_enabled.diff
Charles-Francois Natali added the comment:
> As for ssl_ipv6.diff, it fails on certificate verification:
Of course.
The new version should fix this (tested on google.com).
> is_ipv6_enabled.diff is fine.
Since IPv6 capability is unlikely to change in the middle of a test, I replace
Changes by Charles-Francois Natali :
Removed file: http://bugs.python.org/file21812/is_ipv6_enabled.diff
Changes by Charles-Francois Natali :
Removed file: http://bugs.python.org/file21811/ssl_ipv6.diff
Changes by Charles-Francois Natali :
Added file: http://bugs.python.org/file21812/is_ipv6_enabled.diff
Charles-Francois Natali added the comment:
A patch is attached, along with corresponding test.
Notes:
- since I don't have IPv6 internet connectivity, I could only test it locally
- I chose 'ipv6.google.com' as SSL server for the test. If it's a problem, I
can change i
Charles-Francois Natali added the comment:
Suggesting to close.
--
nosy: +giampaolo.rodola
Charles-Francois Natali added the comment:
It's a duplicate of http://bugs.python.org/issue10517
--
nosy: +neologix, pitrou
Charles-Francois Natali added the comment:
Here's an updated patch, tested on RHEL4U8.
--
Added file: http://bugs.python.org/file21804/tls_reinit.diff
Changes by Charles-Francois Natali :
Removed file: http://bugs.python.org/file21678/thread_invalid_key.diff
Changes by Charles-Francois Natali :
Removed file: http://bugs.python.org/file21801/tls_reinit.diff
Changes by Charles-Francois Natali :
Removed file: http://bugs.python.org/file21802/tls_reinit_bis.diff
Charles-Francois Natali added the comment:
> Thank you. I like this patch, except that _PyGILState_ReInit() should be
> declared in the appropriate .h file, not in signalmodule.c.
I asked myself this question when writing the patch: what's the convention
regarding functions ?
Charles-Francois Natali added the comment:
The most obvious explanation for that failure is that the barrier's timeout is
too low.
def test_default_timeout(self):
    """
    Test the barrier's default timeout
    """
    #cre
Changes by Charles-Francois Natali :
Added file: http://bugs.python.org/file21802/tls_reinit_bis.diff
Charles-Francois Natali added the comment:
> Ah, using the fallback implementation of tls? Surely this isn't a
> problem with the pthreads tls, I'd be surprised if it retains TLS values
> after fork.
It surprised me too when I found that out, but it's really with the
Charles-Francois Natali added the comment:
Note that the process-group creation part (setpgid) is now somewhat redundant with Popen's
start_new_session flag (which calls setsid). Also, this should probably be an
option, since with that patch every subprocess is in its own process group.
> I was w
Charles-Francois Natali added the comment:
I just noticed there's already a version of dlmalloc in
Modules/_ctypes/libffi/src/dlmalloc.c
Compiling with gcc -shared -fpic -o /tmp/dlmalloc.so ./Modules/_ctypes/libffi/src/dlmalloc.c
and then running LD_PRELOAD=/tmp/dlmalloc.so ./python
works just fine.
Charles-Francois Natali added the comment:
> it is possible to impact the memory allocation system on AIX using some
> environment variables (MALLOCOPTIONS and others)
LD_PRELOAD won't impact AIX's malloc behaviour, but allows you to
replace it transparently by any other im
Charles-Francois Natali added the comment:
> How about deleting the mapping (pthread_key_delete) and recreating it
> from scratch, then?
Sounds good.
So the idea would be to retrieve the current thread's tstate, destroy the
current autoTLSkey, re-create it, and re-associate the cur
Charles-Francois Natali added the comment:
> Not necessarily. You can have several interpreters (and therefore several
> thread states) in a single thread, using Py_NewInterpreter(). It's used by
> mod_wsgi and probably other software. If you overwrite the old value with the
Charles-Francois Natali added the comment:
> So, if it is possible to fix this and remove this weird special case and cast
> it into the abyss, then by all means, you have my 10 thumbs up. Not that it
> counts for much :)
Me too.
We still have a couple hundred RHEL4/5 boxes at wo
Charles-Francois Natali added the comment:
> It isn't better.
Requests above 256B are directly handled by malloc, so MALLOC_MMAP_THRESHOLD_
should in fact be set to 256 (with 1024, I guess that on 64-bit every mid-sized
dictionary gets allocated
Charles-Francois Natali added the comment:
> The MALLOC_MMAP_THRESHOLD improvement is less visible here:
>
Are you running on 64-bit ?
If yes, it could be that you're exhausting M_MMAP_MAX (malloc falls
back to brk when there are too many mmap mappings).
You cou
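(For illustration, a sketch of tweaking those thresholds from Python via ctypes; this assumes glibc, where mallopt() and the M_* constants below come from <malloc.h>, and it is not part of any attached patch:)

    import ctypes, ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    M_MMAP_THRESHOLD = -3   # glibc's constant from <malloc.h>
    M_MMAP_MAX = -4         # idem

    libc.mallopt(M_MMAP_THRESHOLD, 256)   # serve requests >= 256B via mmap()
    libc.mallopt(M_MMAP_MAX, 4 * 65536)   # allow more simultaneous mappings

The same effect can be had without code through the MALLOC_MMAP_THRESHOLD_ and
MALLOC_MMAP_MAX_ environment variables.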
Charles-Francois Natali added the comment:
This is definitely a malloc bug.
Test with default malloc on a Debian box:
cf@neobox:~/cpython$ ./python ../issue11849_test.py
*** Python 3.3.0 alpha
--- PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
  0 3778 pts/2  S+   0:00
Charles-Francois Natali added the comment:
Sébastien:
I'm chiming in late, but doesn't AIX have something like LD_PRELOAD?
Why not use it to transparently replace AIX's legacy malloc by another malloc
implementation like dlmalloc or ptmalloc?
That would not require any patching
Charles-Francois Natali added the comment:
PaX doesn't block mprotect in itself, but prevents pages from being both
writable and executable.
Andreas's right, it's probably due to a dlopen of an object requiring
executable stack via ctypes.
So you should report this to iotop
Charles-Francois Natali added the comment:
> and it seems - as far as i understand what i read - that you're
> still right; and, furthermore, that fsync() does everything
> anyway. (But here an idiot is talking about *very* complicated
> stuff.)
>
I just double-checked, an
Charles-Francois Natali added the comment:
I'm -10 on sync_file_range on Linux:
- it doesn't update the file metadata, so there's a high chance of corruption
after a crash
- last time I checked, it didn't flush the disk cache (well, it probably does
if barriers are enab
Charles-Francois Natali added the comment:
Is there anything I can do to help this move forward ?
Charles-Francois Natali added the comment:
> in particular: linux doesn't guarantee that data gets written to the disk
> when you call fsync, only that the data gets pushed to the storage device.
Barriers are now enabled by default in EXT4, and Theodore Tso has been
favourable
Charles-Francois Natali added the comment:
I know that POSIX makes no guarantee regarding durable writes, but IMHO that's
definitely wrong, in the sense that when one calls fsync, he expects the data
to be committed to disk and be durable.
Fixing this deficiency through Python's exp
Charles-Francois Natali added the comment:
> BTW, after utilize lxml instead of ElementTree, such phenomenon of increasing
> memory usage disappeared.
If you look at the link I posted, you'll see that lxml had similar
issues and solved them by calling malloc_trim systemat
Charles-Francois Natali added the comment:
> IMO, it would be nice if I could ask my queue, "Just what is your capacity
(in bytes, not entries) anyways? I want to know how much I can put in here
without worrying about whether the remote side is dequeueing." I guess I'd
s
Charles-Francois Natali added the comment:
> kaifeng added the comment:
>
> I added 'malloc_trim' to the test code and rerun the test with Python 2.5 /
> 3.2 on CentOS 5.3. The problem still exists.
>
Well, malloc_trim can fail, but how did you "add" it ?
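(For reference, one way to "add" it from Python is through ctypes; a glibc-only sketch, not the reporter's actual test code:)

    import ctypes, ctypes.util

    # malloc_trim(pad) asks glibc to return free heap memory to the kernel;
    # it returns 1 if some memory was actually released
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    print("released:", bool(libc.malloc_trim(0)))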
Changes by Charles-Francois Natali :
--
keywords: +patch
Added file: http://bugs.python.org/file21696/gc_trim.diff
Charles-Francois Natali added the comment:
The "problem" is not with Python, but with your libc.
When a program - such as Python - returns memory, it uses the free(3) library
call.
But the libc is free to either return the memory immediately to the kernel
using the relevant sy
Charles-Francois Natali added the comment:
Note: this seems to be fixed in RHEL6.
(Sorry for the noise).
Changes by Charles-Francois Natali :
--
keywords: +patch
Added file: http://bugs.python.org/file21678/thread_invalid_key.diff
Changes by Charles-Francois Natali :
Added file: http://bugs.python.org/file21677/test_specific.c
Charles-Francois Natali added the comment:
This is due to a bug in the TLS key management when mixed with fork.
Here's what happens:
When a thread is created, a tstate is allocated and stored in the thread's TLS:
thread_PyThread_start_new_thread -> t_bootstrap -> _Py
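(A minimal sketch of the triggering pattern, assuming a build using the fallback TLS implementation; this is an illustrative distillation, not the attached test_specific.c:)

    import os, threading

    def worker():
        pid = os.fork()
        if pid == 0:
            # child: the inherited autoTLSkey still maps this thread's id
            # to the parent's tstate
            os._exit(0)
        os.waitpid(pid, 0)

    t = threading.Thread(target=worker)
    t.start()
    t.join()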
Charles-Francois Natali added the comment:
I'm not sure whether POSIX guarantees anything about this behavior, but nothing
prevents a process from running with a UID not listed in /etc/passwd (or NIS,
whatever). For example, sudo allows running a command with a UID not listed in
the pas
Charles-Francois Natali added the comment:
It's probably a duplicate of http://bugs.python.org/issue8428
It would be nice if you could try to reproduce it with a py3k snapshot though,
just to be sure.
--
nosy: +neologix
Charles-Francois Natali added the comment:
It's documented in
http://docs.python.org/library/multiprocessing.html#multiprocessing-programming
:
"""
Joining processes that use queues
Bear in mind that a process that has put items in a queue will wait before
terminating un
Charles-Francois Natali added the comment:
This problem arises because the pool's close method is called before all the
tasks have completed. Putting a sleep(1) before pool.close() won't exhibit this
lockup.
The root cause is that close makes the workers handler thread exit:
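(As a workaround sketch, collecting every result before close() avoids the race; a hypothetical example, not the attached patch:)

    from multiprocessing import Pool

    def square(x):
        return x * x

    if __name__ == '__main__':
        pool = Pool(2)
        results = [pool.apply_async(square, (i,)) for i in range(10)]
        for r in results:
            r.get()       # ensure all tasks completed before closing
        pool.close()
        pool.join()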
Charles-Francois Natali added the comment:
Sorry, wrong copy-paste, the failing assertion will of course be this one:
773 self.assertReturnsIfImplemented(6, get_value, woken)
since woken.get_value() == 5
Charles-Francois Natali added the comment:
One possible cause for those intermittent failures is the preemption of a thread
while waiting on the condition:
def wait(self, timeout=None):
    assert self._lock._semlock._is_mine(), \
           'must acquire() cond
Charles-Francois Natali added the comment:
Attached is a patch fixing this race, and a similar one in Pool's terminate.
--
keywords: +patch
Added file: http://bugs.python.org/file21608/pool_shutdown_race.diff
Charles-Francois Natali added the comment:
I think those lockups are due to a race in the Pool shutdown code.
In Lib/multiprocessing/pool.py:
def close(self):
    debug('closing pool')
    if self._state == RUN:
        self._state = CLOSE
        self._work
Charles-Francois Natali added the comment:
> Oh, I didn't know. In this case, is my commit 3664fc29e867 correct? I
> think that it is, because without the patch, subprocess may call poll()
> with a negative timeout, and so it is no more a timeout at all.
>
Yes, it looks cor
Charles-Francois Natali added the comment:
> Check also this:
>
> http://bugs.python.org/issue11740
You should indicate it as duplicate.
Charles-Francois Natali added the comment:
> You may also patch poll_poll().
>
Poll accepts negative timeout values, since it's the only way to
specify an infinite wait (unlike select, which can be passed
NULL).
Charles-Francois Natali added the comment:
It seems to have fixed the failure, no ?
I don't know what the policy is regarding checking syscall parameters, but
I think it'd be better to check that the timeout passed to select is
not negative, and raise an exception otherwise, inst
Charles-Francois Natali added the comment:
Does this only happen on Cygwin buildbots ?
If yes, then it might simply be an issue with Cygwin's fork implementation,
which is much slower than natively.
Right now, the test waits 0.5s before checking that the processes are started,
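(A polling helper would make the test robust on slow platforms; a sketch of the idea, not the actual test code:)

    import time

    def wait_until(predicate, timeout=10.0, interval=0.1):
        # poll instead of a fixed sleep(0.5): slow fork (e.g. Cygwin's) gets
        # up to `timeout` seconds, fast platforms return almost immediately
        deadline = time.time() + timeout
        while time.time() < deadline:
            if predicate():
                return True
            time.sleep(interval)
        return False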
Charles-Francois Natali added the comment:
Is the SIGBUS generated on the first page access ?
How much memory does this buildbot have ?
Charles-Francois Natali added the comment:
_remaining_time doesn't check that endtime > current time and can return a
negative number, which would trigger an EINVAL when passed to select
(select_select doesn't seem to check for negative double).
Note that a check is perf
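(The fix amounts to clamping; a hypothetical helper mirroring _remaining_time, not the actual subprocess code:)

    import time

    def remaining_time(endtime):
        # never hand select() a negative timeout (it raises EINVAL)
        return max(0.0, endtime - time.time())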
Charles-Francois Natali added the comment:
This test assumes that send will necessarily return if interrupted by a signal,
but the kernel can automatically restart the syscall when no data has been
committed (instead of returning -1 with errno set to EINTR).
And, AFAIK, that's exactly
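(Whether EINTR is seen can be controlled per-signal; a sketch using signal.siginterrupt, assuming SIGALRM as the interrupting signal:)

    import signal

    def handler(signum, frame):
        pass

    signal.signal(signal.SIGALRM, handler)
    # True disables automatic restart (SA_RESTART) for this signal, so a
    # send() that hasn't committed any data really fails with EINTR
    # instead of being transparently restarted by the kernel
    signal.siginterrupt(signal.SIGALRM, True)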
Charles-Francois Natali added the comment:
> There is something interesting in this output: the test uses a subprocess and
> we only have the traceback of the parent. It may be nice to have the trace of
> the child process. It might be possible by sending a signal to the child
>
Charles-Francois Natali added the comment:
> I wonder whether the Java people are simply unaware of the potential problem?
> Or perhaps they have checked the Linux and Solaris implementations of
> readdir()
> and confirmed that it is in fact safe on those platforms. Even if this i
Changes by Charles-Francois Natali :
Removed file: http://bugs.python.org/file17081/base_http_server_fqdn_lag.diff
Charles-Francois Natali added the comment:
Ooops, it's of course not going to break code containing accept + fork or pipe
+ fork, you obviously also need an execve ;-)
But the point is that you can't change the semantics of FDs being inheritable
across an execve (think about inetd f
Charles-Francois Natali added the comment:
If you're suggesting to set FDs CLOEXEC by default, I think it's neither
possible nor reasonable:
- you have to take into account not only files, but also pipes, sockets, etc
- there's no portable way to e.g. open a file and set it CLO
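(The portable way is a separate fcntl call after open(), which is exactly the non-atomic part; a sketch:)

    import fcntl

    def set_cloexec(fd):
        # races with a concurrent fork()+exec() happening between the
        # open() and this call -- hence "no portable way" to do it atomically
        flags = fcntl.fcntl(fd, fcntl.F_GETFD)
        fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)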
Charles-Francois Natali added the comment:
my_fgets in Parser/myreadline.c is broken:
There's a comment saying that a fgets is retried on EINTR, but the code doesn't
retry. It used to in older cPython versions, but there was also a bug, so my
guess is that this bug has been here
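(The real fix belongs in my_fgets itself, in C; for reference, the retry-on-EINTR idiom the comment promises looks like this at Python level -- an illustrative sketch:)

    import errno

    def retry_on_eintr(func, *args):
        while True:
            try:
                return func(*args)
            except OSError as e:
                if e.errno != errno.EINTR:
                    raise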
Charles-Francois Natali added the comment:
In that case, it's likely due to the way OS-X handles interrupted syscalls.
Under Linux, getchar and friends (actually read with default SA_RESTART) won't
return EINTR on (SIGSTOP|SIGTSTP)/SIGCONT.
Under OS-X, it seems that e.g. getchar (
Charles-Francois Natali added the comment:
I'm still not sure I understand the problem.
- when you hit CTRL-Z, the process is put in background, since it receives a
SIGTSTP : normal
- when you put it in foreground with 'fg', it doesn't resume ? Did you try to
Charles-Francois Natali added the comment:
What's the problem here ?
CTRL-Z causes the controlling terminal to send a SIGTSTP to the process, and
the default handler stops the process, pretty much like a SIGSTOP.
If you don't want that to happen:
import signal
signal.signal(signal.SIGTSTP, signal.SIG_IGN)
Charles-Francois Natali added the comment:
Could you try with Python 3.2 ?
In 3.1, the only available pickle implementation was in pure python: with
cPickle (2.7) or _pickle (3.2), it should be much faster.
--
nosy: +neologix
Charles-Francois Natali added the comment:
Could you try with the attached patch ?
The problem is that subprocess silently replaces bufsize=0, so child.stdout is
actually buffered, and when you read just one byte, everything that's available
for reading is read into Python's obj
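(To illustrate the intent -- assuming a POSIX 'cat' is available -- with a truly unbuffered pipe each read(1) maps to a single one-byte os.read:)

    import subprocess

    child = subprocess.Popen(['cat'], bufsize=0,
                             stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    child.stdin.write(b'xyz')
    print(child.stdout.read(1))   # b'x' -- only one byte is consumed
    child.stdin.close()
    child.wait()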
Charles-Francois Natali added the comment:
The check is done in py3k:
Traceback (most recent call last):
  File "/home/cf/test_zip.py", line 7, in <module>
    print(z.read("secretfile.txt"))
  File "/home/cf/py3k/Lib/zipfile.py", line 889, in read
    with self.open
Charles-Francois Natali added the comment:
Attached is a patch checking that no FD is closed more than once when
closing pipe FDs, along with an update for test_subprocess.
--
keywords: +patch
Added file: http://bugs.python.org/file21053/subprocess_same_fd.diff
Charles-Francois Natali added the comment:
The problem lies here:
/* Close pipe fds.  Make sure we don't close the same fd more than */
/* once, or standard fds. */
if (p2cread > 2) {
    POSIX_CALL(close(p2cread));
}
if (c2pwrite > 2) {
    POSIX_CALL(close(c2pwrite));
}
if (errwrite
Charles-Francois Natali added the comment:
> wait4 without WNOHANG works fine. waitpid works fine even with WNOHANG.
> I don't know which workaround is the better.
As far as the test is concerned, it's of course better to use wait4
without WNOHANG in a test named test_wait4 (
Charles-Francois Natali added the comment:
If test_wait3 and test_fork1 pass, then yes, it's probably an issue with AIX's
wait4.
See http://fixunix.com/aix/84872-sigchld-recursion.html:
"""
Replace the wait4() call with a waitpid() call...
like this:
for(n=0;wai
Charles-Francois Natali added the comment:
> Big dirs are really slow to read at once. If user wants to read items one by
> one like here
The problem is that readdir doesn't read a directory entry one at a time.
When you call readdir on an open DIR * for the first time, the lib
New submission from Charles-Francois Natali :
While tracing a program using multiprocessing queues, I noticed that there were
many calls to gettimeofday.
It turns out that acquire_timed, used by lock_PyThread_acquire_lock and
rlock_acquire, always calls gettimeofday, even if no timeout argument
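(The obvious fix is to compute a deadline only when a timeout was actually requested; a pure-Python sketch of the idea, not the C acquire_timed:)

    import time

    def acquire_timed(lock, timeout=None):
        if timeout is None:
            return lock.acquire()         # common path: no clock call at all
        deadline = time.time() + timeout  # single clock read, only here
        while not lock.acquire(False):
            if time.time() >= deadline:
                return False
            time.sleep(0.001)
        return True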
Charles-Francois Natali added the comment:
> The code that is segfaulting is using pycrypto and sqlite3, so it may be that
> a bug in one of these is trampling on something. No idea how to investigate
> any further.
You could try valgrind:
$ valgrind --tool=memcheck -o /tmp/o
Charles-Francois Natali added the comment:
Do you have a coredump ?
It'd be curious to see this faulting address.
I didn't notice the first time, but in the OP's case the address is definitely
wrong: 0xecc778b7 is above PAGE_OFFSET (0xc0000000 on x86), so unless he's
using a kern
Charles-Francois Natali added the comment:
It's probably a Windows limitation regarding the number of bytes that can be
written to stdout in one write.
As for the difference between python versions, what does
python -c "import sys; print(sys.getsizeof('a'))"
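(If that's the cause, chunking the output works around it; a hypothetical helper, not a documented Windows limit:)

    import sys

    def write_all(stream, data, chunk=16 * 1024):
        # keep each individual write below whatever the console accepts
        for i in range(0, len(data), chunk):
            stream.write(data[i:i + chunk])
        stream.flush()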
Charles-Francois Natali added the comment:
> Patch looks mostly good. Why do you use ~PROT_WRITE instead of
> PROT_READ|PROT_EXEC as in your example?
Because I'm not sure that PROT_EXEC is supported by all platforms. See
http://pubs.opengroup.org/onlinepubs/007908799/xsh/mmap.
New submission from Charles-Francois Natali :
$ cat /tmp/test_mmap.py
import mmap
m = mmap.mmap(-1, 1024, prot=mmap.PROT_READ|mmap.PROT_EXEC)
m[0] = 0
$ ./python /tmp/test_mmap.py
Segmentation fault
When trying to perform a write, is_writable is called to check that we can
indeed write to
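(With the check fixed, the same program should fail cleanly instead of segfaulting; an expected-behavior sketch -- the exact exception type depends on the patch:)

    import mmap

    m = mmap.mmap(-1, 1024, prot=mmap.PROT_READ | mmap.PROT_EXEC)
    try:
        m[0] = 0
    except (TypeError, ValueError) as e:
        print("write rejected:", e)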
Charles-Francois Natali added the comment:
Attached is a patch removing useless calls to
Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS for several posix
functions.
It's straightforward, but since I only have Linux boxes, I couldn't
test it under Windows.
--
keywords: +patch
Charles-Francois Natali added the comment:
2011/3/3 Antoine Pitrou :
>
> Antoine Pitrou added the comment:
>
>> Just to be clear, I'm not at all criticizing the current GIL
>> implementation, there's been a great work done on it.
>> I'm just sayi
Charles-Francois Natali added the comment:
> Do you want to propose a patch?
Sure, if removing those calls to Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS
seems reasonable (I might have missed something obvious).
Just to be clear, I'm not at all criticizing the current GIL implem
Charles-Francois Natali added the comment:
Well, those are contrived examples showing the effect of the convoy effect
induced by those unneeded GIL release/acquire: releasing and re-acquiring the
GIL comes with a cost (e.g. under Linux, futexes are really fast in the
uncontended case since
Charles-Francois Natali added the comment:
I didn't even know that Windows had such calls.
But anyway, if we start releasing the GIL around each malloc call, then it's
going to get really complicated:
static PyObject *
posix_geteuid(PyObject *self, PyObject *noargs)
{
New submission from Charles-Francois Natali :
Some posix module functions unnecessarily release the GIL.
For example, posix_dup, posix_dup2 and posix_pipe all release the GIL, but
those are non-blocking syscalls (they don't imply any I/O, only modifying the
process file descriptors table).
Changes by Charles-Francois Natali :
--
nosy: neologix
priority: normal
severity: normal
status: open
title: some posix module functions
Charles-Francois Natali added the comment:
> Your posix_closefrom() implementation as written today is not safe to call
> between fork() and exec() due to the opendir/readdir implementation. It can
> and will hang processes at unexpected times.
Yeah, I removed the patch when I real
Changes by Charles-Francois Natali :
Removed file: http://bugs.python.org/file20980/py3k_closefrom.diff
Charles-Francois Natali added the comment:
Attached is a new version falling back to /proc/self/fd when closefrom(2) is
not available (on Unix), working on Linux.
It's indeed much faster than the current approach.
Note that it's only used if _posixsubprocess is not available, becau
Changes by Charles-Francois Natali :
Added file: http://bugs.python.org/file20980/py3k_closefrom.diff
Changes by Charles-Francois Natali :
Removed file: http://bugs.python.org/file20979/py3k_closefrom.diff
Charles-Francois Natali added the comment:
Attached is a patch adding os.closefrom.
If closefrom(2) is available, it's used.
Otherwise, two options:
- if sysconf and _SC_OPEN_MAX are defined, we close each file descriptor up to
_SC_OPEN_MAX
- if not, we choose a default value (256), and
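(A rough pure-Python sketch of that fallback logic, combined with the /proc/self/fd approach from the later patch -- illustrative only, the actual patch is in C:)

    import os

    def closefrom(lowfd):
        try:
            # on Linux, ask the kernel which fds are actually open
            fds = [int(s) for s in os.listdir('/proc/self/fd')]
        except OSError:
            try:
                maxfd = os.sysconf('SC_OPEN_MAX')
            except (ValueError, OSError):
                maxfd = 256               # arbitrary default from the patch
            fds = list(range(lowfd, maxfd))
        for fd in fds:
            if fd >= lowfd:
                try:
                    os.close(fd)
                except OSError:
                    pass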
Charles-Francois Natali added the comment:
> So, even though implemented in C, the file descriptor closing logic is still
> quite costly!
Yes, see this recent issue: http://bugs.python.org/issue11284
In the reporter's case, it's much worse, because FreeBSD (at least the ver
Charles-Francois Natali added the comment:
> pitrou> I think your analysis is wrong. These mmap() calls are for
> pitrou> anonymous memory, most likely they are emitted by the libc's
> pitrou> malloc() to get some memory from the kernel. In other words
> pitrou>
Charles-Francois Natali added the comment:
2011/3/2 Eric Wolf :
>
> Eric Wolf added the comment:
>
> I just got confirmation that OSM is using pbzip2 to generate these files. So
> they are multi-stream. At least that gives a final answer but doesn't solve
> my probl