Changes by Tim Golden :
Removed file: http://bugs.python.org/file9919/os_access-r62091.patch
___
Python tracker
<http://bugs.python.org/issue2528>
___
___
Python-bugs-list mailing list
Tim Peters added the comment:
Serhiy, yup, that regexp is slow, but it does finish - so the engine is doing
something to avoid _unbounded_ repetitive matching of an empty string.
Change it to
(?:.?.+)*y
and the group can no longer match an empty string, but it's still slow
(although
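A small sketch of the behavior discussed above, using the standard `re` module (input sizes kept tiny on purpose): the nested quantifiers let the group match the same text in many different ways, so a failing match forces the engine to try a number of partitions that grows explosively with the input length.

```python
import re
import time

# Nested quantifiers: the group can carve up the input in many ways,
# so a failing match triggers heavy backtracking before giving up.
pat = re.compile(r'(?:.?.+)*y')

assert pat.match('xxxy')             # a 'y' is present: found quickly
assert pat.match('x' * 12) is None   # no 'y': every partition is tried

for n in (8, 10, 12):                # kept small so this finishes fast
    t0 = time.perf_counter()
    pat.match('x' * n)
    print(n, round(time.perf_counter() - t0, 4))
```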
Tim Golden added the comment:
I attach a patch against 3.3; this is substantially Dave Chambers' original
patch with a trivial test added and a doc change. This means that HKCR is
scanned to determine extensions and these will override anything in the
mimetypes db. The doc change highl
Tim Peters added the comment:
Serhiy, yes, I know the regexp you gave takes exponential time. But:
1. This appears to have nothing to do with repeated 0-length matches. I gave
you an example of a very similar regexp that also takes exponential time, but
never makes any 0-length sub-match
Tim Peters added the comment:
So does anyone believe this check serves a useful purpose _now_? Doesn't seem
so to me.
--
Tim Golden added the comment:
Thanks for the review, Ben. Updated patches attached.
1 & 3) default_encoding -- Your two points appear to contradict each
other slightly. What's in the updated patches is: 3.x has no encoding
(because everything's unicode end-to-end); 2.7 attempt
Tim Peters added the comment:
Victor, Wikipedia has a readable explanation:
http://en.wikipedia.org/wiki/NTFS_junction_point
I haven't used them much. From cmd.exe, I've been able to delete them, not
with "del" but with "rmdir". You can create one from cmd.
Tim Peters added the comment:
Yup - 2.7 evaluates this in a less precise way, as
log(10L) = log(10./16 * 2**4) = log(0.625) + log(2)*4
>>> log(10L) == log(0.625) + log(2)*4
True
This pattern works well even for longs that are far too large to represent as
a double; e.g.,
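The same mantissa/exponent split works for ints far too large to convert to a double. A sketch (the function name here is invented for illustration; Python 3's math.log already handles big ints this way internally):

```python
import math

def log_big(n):
    """ln(n) for a positive int, even one too large to fit in a float."""
    shift = n.bit_length() - 53          # keep a 53-bit "mantissa"
    if shift > 0:
        # n == (n >> shift) * 2**shift, so ln(n) = ln(n >> shift) + shift*ln(2)
        return math.log(n >> shift) + shift * math.log(2)
    return math.log(n)

print(log_big(10 ** 1000))   # ~ 1000 * ln(10)
```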
Tim Peters added the comment:
FYI, the improvement was made in these 2 changesets:
c8dc4b5e54fb
a9349fd3d0f7
If desired, those could presumably be grafted back into the 2.7 branch.
The commit messages refer to issue #9599, but that appears to be mistaken
Tim Peters added the comment:
OK, the correct one is issue #9959.
--
Python tracker
<http://bugs.python.org/issue18739>
Tim Peters added the comment:
+1 on fixing it in 2.7, for the reasons Mark gave.
Way back when I introduced the original scheme, log(a_long) raised an
exception, and the `int` and `long` types had a much higher wall between them.
The original scheme changed an annoying failure into a "p
Tim Mooney added the comment:
For what it's worth, I've been using a patch nearly identical to this one with
python 2.6.x and 2.7.x with good success, and in my case it was under Solaris
10 with the no-cost "oss" package from 4Front. I now have a new workstation
Tim Peters added the comment:
The docs look correct to me. For example,
>>> from datetime import *
>>> td = datetime.now() - datetime(1,1,1)
>>> td
datetime.timedelta(735103, 61756, 484000)
>>>
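The three numbers in that repr are the normalized (days, seconds, microseconds) triple; timedelta always normalizes so that 0 <= seconds < 86400 and 0 <= microseconds < 1000000. A quick illustration with arbitrary values:

```python
from datetime import timedelta

# timedelta normalizes whatever mix of units you pass in.
td = timedelta(seconds=100000, microseconds=2500000)
# 2,500,000 us -> 2 s + 500,000 us; 100,002 s -> 1 day + 13,602 s
print(td.days, td.seconds, td.microseconds)  # 1 13602 500000
```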
Tim Peters added the comment:
Python's debug-mode memory allocators add some "magic values" before and after
each allocated chunk of memory, and check them when the chunk is freed to make
sure nobody overwrote them. In this case, someone did overwrite the byte at
p-5, where p
Tim Peters added the comment:
Impossible to know, but since everything in the traceback comes from
matplotlib, the error is most likely in matplotlib.
--
nosy: +tim.peters
Tim Peters added the comment:
Memory corruption can be difficult to track down. Best thing you can do is
strive to find a test case as small and fast as possible that shows the same
kind of error.
By "rogue extension module" I just mean 3rd-party C code (like, for example,
matpl
Tim Peters added the comment:
By the way, if memory serves, compiling with --with-pydebug changes the memory
layout of Python objects, so a Python compiled this way _cannot_ be used
successfully with extension modules that were compiled without the same
options. Did you rebuild your
Tim Peters added the comment:
Well, if you delete a giant list, and the list held the only references
remaining to the objects the list contained, then the memory for those objects
will be free'd, one object at a time. A debug build would then detect the
memory corruption in those ob
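In CPython this is plain reference counting, and it's easy to watch directly (class and variable names here are just for illustration):

```python
freed = []

class Tracked:
    def __del__(self):
        freed.append(id(self))   # record each object as it is freed

giant = [Tracked() for _ in range(1000)]
del giant                # the list held the only references to its elements
print(len(freed))        # 1000: each object was freed, one at a time
```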
Tim Peters added the comment:
Note that the same poster is also reporting memory corruption in issue 18843.
I suggest ignoring this one unless/until the earlier bug is resolved (memory
corruption can easily cause a segfault - or any other kind of error
Tim Peters added the comment:
Did you read Misc/README.valgrind (in the Python tree)? The warnings you've
seen so far are probably all nonsense, and README.valgrind explains how to
avoid getting them.
--
Tim Peters added the comment:
I don't know why there isn't a configure switch for this - but then I've never
used valgrind - LOL ;-)
Other developers use valgrind on Python routinely, though. So it's unlikely
you'll find a legitimate problem _in Pyt
New submission from Tim Peters:
In issue 18843 a user noted that Misc/README.valgrind doesn't mention the
--with-valgrind configure option. It probably should. But since I've never
used valgrind, I'm not the guy to do it ;-)
--
components: Build
messages: 196338
n
Tim Peters added the comment:
I opened issue 18859 about the lack of --with-valgrind info in
Misc/README.valgrind. Thanks for noticing!
--
Changes by Tim Peters :
--
keywords: +easy
Python tracker
<http://bugs.python.org/issue18859>
Tim Peters added the comment:
Hmm. I don't quite know what you're doing: you said you're getting away from
--with-pydebug, but these "bad leading pad byte" messages can't be generated
unless Python is compiled with (at least) PYMALLOC_DEBUG defined.
That sa
Tim Peters added the comment:
Yet Another Tool ;-) Python's "small object" allocator grabs memory in chunks
of 256KB from the system, and carves up the space itself. Other memory tools
(like Valgrind ...) only see that Python has grabbed 256KB chunks, so can't
detect any
Tim Peters added the comment:
It would be a severely lame OS that allowed a process to overwrite another
process's memory ;-) "Bad C or C++ code", in the process you're running, is
still the best guess.
A memory module that sometimes dropped the last bit _could_ be at fa
New submission from Tim Peters:
In
http://bugs.python.org/issue18843
a user reported a debug PyMalloc "bad leading pad byte" memory
corruption death while running their code. After some thrashing, they
decided to rebuild Python, and got the same kind of error while
rebuilding Py
Changes by Tim Peters :
--
resolution: -> invalid
Python tracker
<http://bugs.python.org/issue18881>
Changes by Tim Peters :
--
stage: -> committed/rejected
Python tracker
<http://bugs.python.org/issue18881>
Changes by Tim Peters :
--
status: open -> closed
Python tracker
<http://bugs.python.org/issue18881>
Tim Peters added the comment:
I sent a msg to Python-Dev, asking for a Gentoo user to try to reproduce this.
Thanks!
--
Tim Peters added the comment:
Thanks for chiming in, Stephen! I can't answer your questions, but agree the
original poster needs to tell you exactly what he did -- else we'd just be
thrashing at random. There's been enough
Tim Peters added the comment:
Martin, can you please supply exact commands Stephen can use to try to
reproduce the problem you saw running `emerge`? And try them yourself first,
to make sure you can reproduce the problem too.
Any detail may be important. For example, it's possible th
Tim Peters added the comment:
The matplotlib people won't care about this one either. matplotlib allocated
the memory, and the error message at the end says it's _trying_ to call
_PyMem_DebugFree (which may well be trying to release the memory), but the
binaries aren
Tim Peters added the comment:
Martin, would it be possible to borrow someone else's machine and try to
reproduce this? If you can, that would greatly reduce the probability of this
being a HW error. It would also leave us with an exact set of commands to
share so others can try it on
Tim Peters added the comment:
Now you have something to show the matplotlib folks - although they're not
likely to get excited about leaking 40 bytes.
There is nothing Python can do about this. matplotlib is responsible for
free'ing the memory matplotlib allocates, just as
Tim Peters added the comment:
OK, it sounds to me like you do not have a reproducible test case, of any kind.
If that's true, this bug report isn't going anywhere :-(
Python isn't a memory-testing program, so it would be insanely inefficient for
it to (for example) read u
Tim Peters added the comment:
As issue 18843 has evolved, seems more likely now that it's flaky HW, but
agreed in any case there's really no evidence of a Python problem here. So
closing it. Martin, we can revisit this if there's real progress on the other
issue.
--
Tim Peters added the comment:
Nasty problem ;-)
I don't understand the need for all the indirections in the second patch.
Like, why use a weakref? It's not like we have to worry about an immortal
tstate keeping a measly little lock object alive forever, right? Seems to me
t
Tim Peters added the comment:
Someone may find the new stress.valgrind.stderr interesting, but - since I've
never used valgrind - it doesn't mean much to me.
I _expected_ you'd run the little stress program under a debug Python and
without valgrind, since that's the onl
Tim Peters added the comment:
I'm getting a headache now - LOL ;-) Thanks for the explanation!
What I still don't understand: the new lock is an internal implementation
detail. How would it gain a weakref with a callback? Users aren't going to
mess with this lock, and if y
Tim Peters added the comment:
Note this line in your first post:
DUMA Aborting: mprotect() failed: Cannot allocate memory.
Python never calls mprotect(), but DUMA() probably does. Also note what it
said after that:
Check README section 'MEMORY USAGE AND EXECUTION SPEED'
Tim Peters added the comment:
Thanks for that, Stephen! I don't know of anything else you could try that
would be helpful. The OP doesn't appear able to reproduce his problems either,
and last I heard he was off running `emerge` under DUMA:
http://duma.sourceforge.net/
Tim Peters added the comment:
All the buildbots are failing due to changeset 868ad6fa8e68 - I'm going to back
it out.
--
nosy: +tim.peters
Tim Peters added the comment:
Suggest caution here. test_sax fails the same way for me too (Windows Vista),
under both the released 3.3.2 and a Python built from the current hg default
branch.
However, these files (test.xml and test.xml.out) have not changed since the
Python 2.7 line - the
Tim Peters added the comment:
Seeing as Python 3 _does_ open these files in binary mode, I fiddled my local
.hgeol to mark the test files as BIN (then deleted the test data directory and
did an "hg revert" on the directory). Then test_sax passes.
I don't know whether that
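For reference, the kind of .hgeol stanza involved looks like this (the exact paths in the CPython tree are from memory and may differ):

```ini
[patterns]
Lib/test/xmltestdata/test.xml = BIN
Lib/test/xmltestdata/test.xml.out = BIN
```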
Tim Peters added the comment:
test_email still passed on Windows under 3.2.5 (but test_sax failed).
test_email and test_sax both fail under 3.3.2.
I'll just push the change to .hgeol - minimally invasive, fixes the immediate
problem, and it appears these "really are"
Changes by Tim Peters :
--
resolution: -> fixed
stage: patch review -> committed/rejected
status: open -> closed
Tim Peters added the comment:
Terry, yes, the installer won't change line endings. I think - we'll find out
for sure next time a release is cut - LOL ;-)
Agreed that Lib/email/test/data/msg_26.txt is probably obsolete. Fix it, if
you like! It's helpful to get rid of
Tim Peters added the comment:
I'm closing this. While it makes a big difference for a cwr coded in Python,
it turns out to be minor in C. The extra complications (more state to remember
and update across next() invocations) aren't worth the minor speedup in C.
--
New submission from Tim Peters:
Here under 3.3.2:
"""
>>> from threading import Lock
>>> help(Lock)
Help on built-in function allocate_lock in module _thread:
allocate_lock(...)
allocate_lock() -> lock object
(allocate() is an obsolete synonym)
Tim Peters added the comment:
Oh, I'm not opposed, I'm just complaining ;-)
It would be much nicer to have an approach that worked for all thread users,
not just threading.Thread users. For example, a user can easily (well,
plausibly) get into the same kinds of troubles here
New submission from Tim Peters:
Here from the 3.3.2 docs for threading.Lock:
"""
acquire(blocking=True, timeout=-1)
Acquire a lock, blocking or non-blocking.
...
When invoked with the floating-point timeout argument set to a positive value,
block for at most the number of se
Tim Peters added the comment:
Oops! The docs are wrong - a negative timeout actually raises:
ValueError: timeout value must be strictly positive
unless the timeout is exactly -1. All the more reason to ensure that a
negative waittime isn't passed.
I opened a different issue about th
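The behavior as it stands (checked on current CPython; a sketch): timeout=-1 is the "wait forever" sentinel, while any other negative timeout is rejected up front with ValueError:

```python
import threading

lock = threading.Lock()

# timeout=-1 is the "block forever" sentinel and is accepted:
assert lock.acquire(timeout=-1)
lock.release()

# ...but any other negative timeout raises ValueError immediately:
try:
    lock.acquire(timeout=-2)
    raised = False
except ValueError:
    raised = True
assert raised
```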
Tim Peters added the comment:
I think the docs are already clear: they say "the generator-iterator’s close()
method will be called". That's all that needs to be said: now go look at the
docs for generator.close(). They explain _all_ that close() does, and it would
b
Tim Peters added the comment:
Fudge - there's another unlikely problem here. For example: main program
creates a threading.Thread t, runs it, and does t.join(5) (whatever - any
timeout value). When t.join() returns, the main program has no idea whether t
is done or not. Suppose t
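The ambiguity is easy to demonstrate: after a timed join() returns, only is_alive() can tell you whether the thread actually finished (an Event is used here so the demo is deterministic):

```python
import threading

done = threading.Event()
t = threading.Thread(target=done.wait)   # blocks until the event is set
t.start()

t.join(0.05)            # times out; join() returning proves nothing by itself
assert t.is_alive()     # the thread is still blocked in done.wait()

done.set()              # let the thread finish
t.join()                # untimed join: now we really waited
assert not t.is_alive()
```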
Tim Peters added the comment:
When the comment was introduced, Python's Wichmann-Hill generator had a much
shorter period, and we couldn't even generate all the permutations of a deck of
cards.
The period is astronomically larger now, but the stackoverflow answer (2080) is
corre
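That 2080 figure is easy to check: it is the largest n with n! no bigger than 2**19937, the Mersenne Twister's period (a quick sketch):

```python
import math

period = 2 ** 19937          # MT19937's period is 2**19937 - 1
n, fact = 0, 1
while fact * (n + 1) <= period:   # advance while (n+1)! still fits
    n += 1
    fact *= n

print(n)                     # 2080
assert math.factorial(52) < period   # a single deck is no problem now
```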
Changes by Tim Peters :
--
resolution: -> invalid
stage: -> committed/rejected
Python tracker
<http://bugs.python.org/issue18928>
Tim Peters added the comment:
So you're not concerned about a now-private API (which used to be advertised),
but are concerned about a user mucking with a new private lock in an
exceedingly unlikely (in the absence of malice) way. That clarifies things ;-)
I'm not really conce
New submission from Tim Peters:
On Windows, _debugmallocstats() output ends with lines like this:
0 free 12-sized PyTupleObjects * zd bytes each =0
0 free 13-sized PyTupleObjects * zd bytes each =0
"zd" is senseless. Betting it's due
Changes by Tim Peters :
--
resolution: -> fixed
stage: needs patch -> committed/rejected
status: open -> closed
Changes by Tim Peters :
--
components: +Tests -Interpreter Core
Python tracker
<http://bugs.python.org/issue18944>
Changes by Tim Peters :
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
Tim Peters added the comment:
The fix is obviously correct ;-) - so what next? Armin & Terry, I don't know
whether you have commit privileges. If you don't, assign it to me and I'll
commit it.
Removed 3.5 from the Versions list and added 3.3 and 3.4.
--
nosy: +
Tim Peters added the comment:
All the timeout args are great! I wish Python had always had them :-)
Back to the pain at hand, it's not the number of lines of code that rubs me the
wrong way, but the sheer obscurity of it all. This handful of lines is - of
necessity - sprayed across
Tim Peters added the comment:
Right, I should have asked specifically about cpython commit privs ;-) Thanks
for expounding!
--
Tim Peters added the comment:
New patch (threadstate_join_4.patch) refactors so that is_alive() returns True
until the tstate is cleared. This simplifies join() a lot (it doesn't have to
roll its own timeout implementation anymore), but complicates is_alive().
Caution: I don't kno
Tim Peters added the comment:
New patch threadstate_join_5.patch adds more testing of is_alive().
An inelegance I don't care about (but someone might): if join() is called with
a timeout, and the Python part of the thread ends before the timeout expires
(_stopped gets set), then a
Tim Peters added the comment:
Ah! I'm running on Windows, where all fork() tests are skipped. Can you fix
it? My prejudice is that anyone mixing threads with fork() should be shot for
the good of humanity <0.7 wink>.
--
Tim Peters added the comment:
Cool! What could possibly go wrong? ;-)
--
Python tracker
<http://bugs.python.org/issue18808>
Tim Peters added the comment:
It would be nice if Tamas could test it in his application, since we're not
actually testing Py_EndInterpreter. But, ya, commit it! If it doesn't work
for Tamas, we can start over again ;-)
--
Tim Peters added the comment:
Excellent - ship it! :-)
--
Python tracker
<http://bugs.python.org/issue18808>
Tim Peters added the comment:
Figures. That's why I wanted your name on the blamelist ;-)
--
Python tracker
<http://bugs.python.org/issue18808>
Tim Peters added the comment:
Just pushed 5cfd7b2eb994 in a poke-and-hope blind attempt to repair the
annoying ;-) buildbot failures.
--
Tim Peters added the comment:
Doesn't look like 5cfd7b2eb994 is going to fix it :-( So I'll revert it.
Attaching the patch as blind.patch. After that patch, is_alive() only looks at
Thread data members, where ._is_stopped "is obviously" True, and ._tstate_lock
"
Tim Peters added the comment:
Weird! The Ubuntu box passed test_is_alive_after_fork on its /second/ run with
the patch:
http://buildbot.python.org/all/builders/x86%20Ubuntu%20Shared%203.x/builds/8564/steps/test/logs/stdio
The other box passed all tests:
http://buildbot.python.org/all
Tim Peters added the comment:
Well, the next time the Ubuntu box ran the tests, it was clean all the way. So
it's fixed! Despite that it isn't ;-)
--
Tim Peters added the comment:
[Antoine]
> Oh, I also get the following sporadic failure
> which is triggered by slight change in semantics
> with Thread.join(timeout) :-)
> ==
> FAIL: t
Tim Peters added the comment:
Ah - the test used to do t.join(NUMTASKS)! That's just bizarre ;-)
I believe I can repair that too (well - there was never a _guarantee_ that
waiting 10 seconds would be long enough), but I'll wait until this all settles
down.
join() and is_alive
Tim Peters added the comment:
Without _stopped, join() can simply wait to acquire _tstate_lock (with or
without a timeout, and skipping this if _tstate_lock is already None). Etc ;-)
Of course details matter, but it's easy. I did it once, but the tests joining
the main thread failed
Tim Peters added the comment:
-1 from me, and I'm a comma-loving American ;-)
I'm sure lots of code in the wild parses this output - Serhiy isn't the only
one doing it.
--
Tim Peters added the comment:
> The MainThread class could override is_alive() and join(), then.
I think it will be easier than that, but we'll see ;-)
--
New submission from Tim Peters:
As discussed in issue 18808, now that we're checking for a tstate lock, the
Thread._stopped Event has become an "attractive nuisance". The attached patch
removes it.
This simplifies .join() and .is_alive(), and restores pre-18808 .join(ti
Changes by Tim Peters :
--
dependencies: +Thread.join returns before PyThreadState is destroyed
Python tracker
<http://bugs.python.org/issue18984>
Tim Peters added the comment:
I agree with your diagnosis. Unfortunately, I can't test the fork stuff.
Well, OK, I actually think that's fortunate (for me ;-) ).
If you can see a quick way to fix these, please do. I'm going to vanish for
ab
Tim Peters added the comment:
Thanks, Antoine! I pushed this change, figuring that even if some buildbots
are still unhappy about 18808, getting rid of the Event can't make them more
unhappy, and may make them happier.
--
resolution: -&g
Changes by Tim Peters :
--
Removed message: http://bugs.python.org/msg197342
Python tracker
<http://bugs.python.org/issue18984>
Tim Peters added the comment:
New changeset aff959a3ba13 by Tim Peters in branch 'default':
Issue 18984: Remove ._stopped Event from Thread internals.
http://hg.python.org/cpython/rev/aff959a3ba13
--
Tim Peters added the comment:
Antoine, could I bother you to try the attached cleanup.patch? It looks
harmless to me, but when I checked it in the Unix-y buildbots failed the
thread+fork tests again :-( Two examples:
http://buildbot.python.org/all/builders/x86%20Gentoo%203.x/builds/4914
Changes by Tim Peters :
--
stage: commit review -> committed/rejected
status: open -> closed
Python tracker
<http://bugs.python.org/issue18808>
Tim Peters added the comment:
Yes - and I just closed 18808 :-)
--
stage: patch review -> committed/rejected
status: open -> closed
Tim Peters added the comment:
Well - I remain baffled, but am grateful for the patch - thanks :-)
--
New submission from Tim Peters:
Don't know whether this is new. Found it running the test suite under a
3.4.0a2 freshly installed from the python.org .msi installer:
== CPython 3.4.0a2 (v3.4.0a2:9265a2168e2c+, Sep 8 2013, 19:41:05) [MSC v.1600
32 bit (Intel)]
== Windows-Vista-6.0.600
Tim Peters added the comment:
Serhiy, did you test "hg update -C default"? Didn't work for me :-(
Martin, I don't know an easy way. eol fiddling in Hg seems brittle :-(
I suppose you could get a fresh clone and then _compare_ the checked-out files
to your old clone.
Tim Peters added the comment:
OK, "hg up -C" _can_ work, but it appears to require that "hg stat" shows that
the files with the "bad" line endings are modified (M). That may or may not be
the case, depending on lots of things.
Martin, can you verify that (for e
Tim Peters added the comment:
BTW, the reason I wonder whether you don't have bad line ends in your tree is
this: if you did, test_sax would have been failing for you too. I assume you
run the test suite before building the inst
Tim Peters added the comment:
No problems, Martin - thanks for following up on this! :-)
--
Python tracker
<http://bugs.python.org/issue18992>
Tim Newsham added the comment:
This still crashes in newer builds such as:
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
>>> import os
>>> os.execve("/bin/ls", [], {})
Segmentation fault
0 __strrchr_sse42 () at ../sysdeps/x86_6
Tim Silk added the comment:
> Thank you for doing this!
No problem. I've been thinking about getting involved with python development
for a while - this seemed a good place to start!
> Do you have a real name so that I can credit you?
Yes (thankfully) - I've added it to my