Changes by Richard Oudkerk :
--
nosy: +sbt
___
Python tracker
<http://bugs.python.org/issue15881>
___
___
Python-bugs-list mailing list
Unsubscribe:
Richard Oudkerk added the comment:
The documentation needs updating for Python 3 so that a byte string is used.
So the line becomes
s = Array('c', b'hello world', lock=lock)
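A minimal runnable sketch of the corrected Python 3 example (the surrounding `lock` setup is assumed from the documentation's context):

```python
from multiprocessing import Array, Lock

lock = Lock()
# In Python 3 the initializer for a 'c' (char) array must be a bytes object,
# not a str as in the Python 2 docs.
s = Array('c', b'hello world', lock=lock)
print(s.value)  # b'hello world'
```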
--
nosy: +sbt
Python tracker
<http://bug
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
Python tracker
<http://bugs.python.or
New submission from Richard Oudkerk:
Currently rawiobase_read() reads into a bytearray object and then copies the
data to a bytes object.
There is a TODO comment saying that the bytes object should be created
directly. The attached patch does that.
--
files: iobase_read.patch
keywords
Richard Oudkerk added the comment:
I see the same error on Windows (when pressing ^C), but on Linux I get
Error in sys.exitfunc:
Traceback (most recent call last):
File "/usr/lib/python2.7/atexit.py", line 28, in _run_exitfuncs
import traceback
File "/usr/lib/python2
New submission from Richard Oudkerk:
With Python 2.7 on Windows the following crashes with an assertion:
>>> import os
[43042 refs]
>>> f = open("foobar", "wb")
[43048 refs]
>>> os.close(f.fileno())
[43048 refs]
>>
Richard Oudkerk added the comment:
If buffering is off then they all fail the assertion except isatty().
--
Python tracker
<http://bugs.python.org/issue15
Richard Oudkerk added the comment:
I suspect the problem is caused by nose's isolate plugin.
With this enabled, a copy of sys.modules is saved before each test and then
restored after the test. This causes garbage collection of newly imported
modules. The destructor for the module
Richard Oudkerk added the comment:
Actually, I am not so sure it is the isolate plugin. But I do think that
sys.modules is being manipulated somewhere before shutdown.
--
Python tracker
<http://bugs.python.org/issue15
Richard Oudkerk added the comment:
Actually it is test.with_project_on_sys_path() in setuptools/commands/test.py
that does the save/restore of sys.modules. See
http://www.eby-sarna.com/pipermail/peak/2010-May/003357.html
--
Python tracker
Richard Oudkerk added the comment:
I get the same hang on Linux with Python 3.2.
For Windows the documentation does warn against starting a process as a side
effect of importing a module. There is no explicit warning for Unix, but I
would still consider it bad form to do such things as a
Richard Oudkerk added the comment:
Here is a reproduction without using multiprocessing:
create.py:

import threading, os

def foo():
    print("Trying import")
    import sys
    print("Import successful")

pid = os.fork()
if pid == 0:
    try:
        t = threadi
Changes by Richard Oudkerk :
--
type: crash -> behavior
Python tracker
<http://bugs.python.org/issue15914>
Richard Oudkerk added the comment:
Python 3.2 has extra code in _PyImport_ReInitLock() which means that when a
fork happens as a side effect of an import, the main thread of the forked
process owns the import lock. Therefore other threads in the forked process
cannot import anything
Richard Oudkerk added the comment:
It looks like the problem was caused by the fix for
http://bugs.python.org/issue9573
I think the usage this was intended to enable is evil since one of the forked
processes should always be terminated with os._exit
Richard Oudkerk added the comment:
New patch which checks the refcount of the memoryview and bytes object after
calling readinto().
If either refcount is larger than the expected value of 1, then the data is
copied rather than resized.
--
Added file: http://bugs.python.org/file27211
Richard Oudkerk added the comment:
> I think that's a useless precaution. The bytes object cannot "leak"
> since you are using PyMemoryView_FromMemory(), which doesn't know about
> the original object.
The bytes object cannot "leak" so, as you say, che
Richard Oudkerk added the comment:
> Then the view owns a reference to the bytes object. But that does not
> solve the problem that writable memoryviews based on a readonly object
> might be hanging around.
How about doing
PyObject_GetBuffer(b, &buf, PyBUF_WRITAB
Richard Oudkerk added the comment:
The current non-test uses of PyMemoryView_FromBuffer() are in
_io.BufferedReader.read(), _io.BufferedWriter.write(), PyUnicode_Decode().
It looks like they can each be made to leak a memoryview that references a
deallocated buffer. (Maybe the answer is
Richard Oudkerk added the comment:
I am rather confused about the ownership semantics when one uses
PyMemoryView_FromBuffer().
It looks as though PyMemoryView_FromBuffer() "steals" ownership of the buffer
since, when the associated _PyManagedBufferObject is garbage
Richard Oudkerk added the comment:
> You would need to call memory_release(). Perhaps we can just expose it on the
> C-API level as PyMemoryView_Release().
Should PyMemoryView_Release() release the _PyManagedBufferObject by doing
mbuf_release(view->mbuf) even if view->mbuf->expo
Richard Oudkerk added the comment:
> Are we talking about a big speedup here or could we perhaps just keep
> the existing code?
I doubt it is worth the hassle. But I did want to know if there was a clean
way to do what I wanted.
--
Changes by Richard Oudkerk :
--
nosy: +sbt
Python tracker
<http://bugs.python.org/issue15983>
New submission from Richard Oudkerk:
A memoryview which does not own a reference to its base object can point to
freed or reallocated memory. For instance the following segfaults for me on
Windows and Linux.
import io

class File(io.RawIOBase):
    def readinto(self, buf):
        global
Richard Oudkerk added the comment:
I notice that queue.Queue.join() does not have a timeout parameter either.
Have you hit a particular problem that would be substantially easier with the
patch?
--
versions: -Python 2.7, Python 3.2, Python 3.3
Richard Oudkerk added the comment:
> I've added a new patch, that implements a shared/exclusive lock as
> described in my comments above, for the threading and multiprocessing
> module.
The patch does not seem to touch the threading module and does not come with
tests
Richard Oudkerk added the comment:
@Sebastian: Both your patch sets are missing the changes to threading.py.
--
Python tracker
<http://bugs.python.org/issue8
Richard Oudkerk added the comment:
> @richard: I'm sorry, but both of my patches contain changes to
> 'Lib/threading.py' and can be applied on top of Python 3.3.0. So can you
> explain what do you mean, by missing the changes to threading.py?
I was reading the Rietve
Richard Oudkerk added the comment:
> With this, you are stuck with employing a context manager model only.
> You loose the flexibility to do explicit acquire_read() or
> acquire_write().
You are not restricted to the context manager model. Just use
selock.shared.acq
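The API under discussion (from the proposed patch; the class and attribute names here are reconstructed as an assumption) might look like this pure-Python sketch, where `selock.shared` and `selock.exclusive` support both the context-manager style and explicit acquire()/release():

```python
import threading

class _View:
    """Exposes one mode of an SELock as acquire()/release() + context manager."""
    def __init__(self, acquire, release):
        self._acquire, self._release = acquire, release
    def acquire(self):
        self._acquire()
    def release(self):
        self._release()
    def __enter__(self):
        self._acquire()
        return self
    def __exit__(self, *exc):
        self._release()

class SELock:
    """Sketch of a shared/exclusive (readers-writer) lock."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False
        self.shared = _View(self._acquire_shared, self._release_shared)
        self.exclusive = _View(self._acquire_exclusive, self._release_exclusive)

    def _acquire_shared(self):
        with self._cond:
            while self._writer:
                self._cond.wait()
            self._readers += 1

    def _release_shared(self):
        with self._cond:
            self._readers -= 1
            if not self._readers:
                self._cond.notify_all()

    def _acquire_exclusive(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def _release_exclusive(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

With this shape, `with selock.shared: ...` and an explicit `selock.shared.acquire()` coexist, so the context-manager model is a convenience, not a restriction.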
Richard Oudkerk added the comment:
I think Sebastian's algorithm does not correctly deal with the non-blocking
case. Consider the following situation:
* Thread-1 successfully acquires exclusive lock.
Now num_got_lock == 1.
* Thread-2 blocks waiting for shared lock.
Will block
Richard Oudkerk added the comment:
My previous comment applied to Sebastian's first patch. The second seems to
fix the issue.
--
Python tracker
<http://bugs.python.org/i
New submission from Richard Jones:
The attached simple patch demonstrates the problem:
>>> str(NormalizedVersion('1.0.post1'))
'1.0.post1.z'
and includes a fix.
--
assignee: eric.araujo
components: Distutils2
files: post-fix.patch
keywords: patch
New submission from Richard Jones:
The attached patch includes the maintainer information in the data sent to PyPI
in a register or upload submission.
--
assignee: eric.araujo
components: Distutils2
files: maintainer.patch
keywords: patch
messages: 171774
nosy: alexis, eric.araujo
Richard Oudkerk added the comment:
> I think you got that argument backwards. The simple greedy policy you
> implement works well provided there are not too many readers. Otherwise,
> the writers will be starved, since they have to wait for an oppertune
> moment when no readers a
Richard Oudkerk added the comment:
> The unlock operation is the same, so now you have to arbitrarily pick one
> of the "lockd" and chose release().
That depends on the implementation. In the three implementations on
http://en.wikipedia.org/wiki/Readers-writers_pro
Richard Oudkerk added the comment:
The patch does not seem to walk the mro to look for slots in base classes.
Also, an instance with a __dict__ attribute may also have attributes stored in
slots.
BTW, copyreg._slotnames(cls) properly calculates the slot names for cls and
tries to cache them
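For illustration, `copyreg._slotnames()` (a private helper, so shown here only as a sketch of its behavior) does walk the MRO and picks up inherited slots:

```python
import copyreg

class Base:
    __slots__ = ('x',)

class Derived(Base):
    __slots__ = ('y',)

# _slotnames() walks cls.__mro__, so the inherited slot 'x' is included
# alongside 'y' defined on Derived itself.
names = copyreg._slotnames(Derived)
print(sorted(names))  # ['x', 'y']
```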
Richard Oudkerk added the comment:
> Multiprocessing: Because there is no way I know to share a list of
> owning thread ids, this version is more limited
Why do you need a *shared* list? I think it should be fine to use a
per-process list of owning thread ids. So the current threa
Richard Oudkerk added the comment:
> Well, what I am doing is more or less the equivalent of
>
> return object.__slots__ if hasattr(object, '__slots') else object.__dict__
>
> and this is coherent with the updated documentation. The one you
> proposed is a
Richard Oudkerk added the comment:
> This is from Python side. Did ht_slots field of PyHeapTypeObject does not
> contain properly calculated slot names?
Looking at the code, it appears that ht_slots does *not* include inherited
Richard Oudkerk added the comment:
> That modifying the dict has no effect on the object is okay.
I have written "vars(obj).update(...)" before. I don't think it would be okay
to break that.
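The pattern in question works today for any instance with a `__dict__` (the attribute names below are made up for illustration):

```python
class Config:
    pass

obj = Config()
# vars(obj) returns the instance __dict__ itself, not a copy, so updates
# are visible as attributes afterwards.
vars(obj).update(host='localhost', port=8080)
print(obj.host, obj.port)  # localhost 8080
```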
--
Python tracker
<h
Richard Oudkerk added the comment:
A search of googlecode shows 42 examples of "vars(...).update" compared to 3000
for ".__dict__.update". I don't know if that is enough to worry about.
http://code.google.com/codesearch#search/&q=vars\%28[A-Za-z0-9_]%2B\%29\.u
Richard Oudkerk added the comment:
Attached is a new version of Kristjan's patch with support for managers. (A
threading._RWLockCore object is proxied and wrapped in a local instance of a
subclass of threading.RWLock.)
Also I made multiprocessing.RWLock.__init__(
Richard Oudkerk added the comment:
Fixed patch because I didn't test on Unix...
--
Added file: http://bugs.python.org/file27422/rwlock-sbt.patch
Python tracker
<http://bugs.python.org/i
Changes by Richard Oudkerk :
Removed file: http://bugs.python.org/file27421/rwlock-sbt.patch
Python tracker
<http://bugs.python.org/issue8800>
Richard Oudkerk added the comment:
This is more or less a duplicate of #15833 (although the errno mentioned there
is EIO instead of the more sensible EROFS).
--
nosy: +sbt
Python tracker
<http://bugs.python.org/issue16
Richard Oudkerk added the comment:
Kristjan: you seem to have attached socketserver.patch to the wrong issue.
--
Python tracker
<http://bugs.python.org/issue8
Richard Oudkerk added the comment:
_sha3 is not being built on Windows, so importing hashlib fails
>>> import hashlib
ERROR:root:code for hash sha3_224 was not found.
Traceback (most recent call last):
File "C:\Repos\cpython-dirty\lib\hashlib.py", line 109, in
__get_openss
Richard Oudkerk added the comment:
> 6cf6b8265e57 and 8172cc8bfa6d have fixed the issue on my VM. I didn't
> noticed the issue as I only tested hashlib with the release builds, not
> the debug builds. Sorry for that.
Ah. I did not even notice there was _sha3.vcxproj.
Is there
New submission from Richard Oudkerk:
ctypes.WinError() is defined as
def WinError(code=None, descr=None):
    if code is None:
        code = GetLastError()
    if descr is None:
        descr = FormatError(code).strip()
    return WindowsError(code, descr)
Since
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: needs patch -> committed/rejected
status: open -> closed
Python tracker
<http://bugs.python.or
Richard Oudkerk added the comment:
> Cogen [4] uses ctypes wrapper.
In the code for the IOCP reactor only ctypes.FormatError() is used from ctypes.
It uses pywin32 instead.
--
nosy: +sbt
Python tracker
<http://bugs.python.org/issu
Richard Oudkerk added the comment:
Note that since Python 3.3, multiprocessing and _winapi make some use of
overlapped IO.
One can use _winapi.ReadFile() and _winapi.WriteFile() to do overlapped IO on
normal socket handles created using socket.socket
Richard Oudkerk added the comment:
Adding the IOCP functions to _winapi is straightforward -- see patch. Note
that there seems to be no way to unregister a handle from an IOCP.
Creating overlapped equivalents of socket.accept() and socket.connect() looks
more complicated. Perhaps that
Changes by Richard Oudkerk :
Added file: http://bugs.python.org/file27516/iocp_example.py
Python tracker
<http://bugs.python.org/issue16175>
New submission from Richard Jones:
The RotatingFileHandler classes force the open() mode of the new log file to be
"w" even though it is initially defaulted to "a" in doRollover() methods:
self.mode = 'w'
self.stream = self._open()
This can cause
Richard Oudkerk added the comment:
I think this is a duplicate of Issue #15646 which has been fixed in the 2.7 and
3.x branches.
If you run Lib/test/mp_fork_bomb.py you should get a RuntimeError with a
helpful message telling you to use the 'if __name__ == "__main
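The guard referred to is the standard idiom for scripts that start processes; a minimal sketch (the worker and queue names are made up for illustration):

```python
from multiprocessing import Process, Queue

def worker(q):
    q.put("hello from child")

if __name__ == "__main__":
    # Without this guard, importing the module in a child process would
    # re-execute the process-spawning code and fork-bomb.
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    print(q.get())
    p.join()
```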
Richard Oudkerk added the comment:
> select() other than being supported on all platforms has the advantage of
> being simple and quick to use (you just call it once by passing a set of fds
> and then you're done).
Do you mean at the C level? Wouldn't you just do
stru
New submission from Richard Oudkerk:
Using VS2010 _socket links against ws2_32.lib but select links against
wsock32.lib.
Using VS2008 both extensions link against ws2_32.lib. It appears that the
conversion to VS2010 caused the regression.
(Compare #10295 and #11750.)
--
messages
Richard Oudkerk added the comment:
> A preliminary patch is in attachment.
> By default it uses select() but looks for ValueError (raised in case
> FD_SETSIZE gets hit) and falls back on using poll().
>
> This is the failure I get when running tests on Linux.
> It is related
Richard Oudkerk added the comment:
> Using poll() by default is controversial for 2 reasons, I think:
>
> #1 - a certain slowdown is likely to be introduced (I'll measure it)
With a single fd poll is a bit faster than select:
$ python -m timeit -s 'from select import
Richard Oudkerk added the comment:
> Still not getting what you refer to when you talk about > 512 fds
> problem.
Whether you get back the original objects or only their fds will depend on
whether some fd was larger than FD_SETSIZE.
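A sketch of the fallback idea under discussion (try select(), fall back on poll() when the fd exceeds FD_SETSIZE), using a socketpair for demonstration; POSIX only, since Windows lacks `select.poll`:

```python
import select
import socket

def wait_readable(sock, timeout=1.0):
    """Wait until sock is readable, preferring select() and falling
    back on poll() if the fd is too large for an fd_set."""
    try:
        r, _, _ = select.select([sock], [], [], timeout)
        return bool(r)
    except ValueError:
        # fd >= FD_SETSIZE: select() refuses it, but poll() has no limit
        p = select.poll()
        p.register(sock, select.POLLIN)
        return bool(p.poll(timeout * 1000))

a, b = socket.socketpair()
b.send(b"x")
print(wait_readable(a))  # True
a.close(); b.close()
```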
--
Richard Oudkerk added the comment:
> This problem affects any single use of select(): instead of using an
> ad-hoc wrapper in each module, it would probably make sense to add a
> higher level selector class to the select module which would fallback on
> the right syscall (i.
Richard Oudkerk added the comment:
> A use case for not using fork() is when your parent process opens some
> system resources of some sort (for example a listening TCP socket). The
> child will then inherit those resources, which can have all kinds of
> unforeseen and
Richard Oudkerk added the comment:
LGTM
--
Python tracker
<http://bugs.python.org/issue16284>
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: needs patch -> committed/rejected
status: open -> closed
Python tracker
<http://bugs.python.or
Changes by Richard Oudkerk :
--
nosy: +sbt
Python tracker
<http://bugs.python.org/issue16307>
Richard Oudkerk added the comment:
For updated code see http://hg.python.org/sandbox/sbt#spawn
This uses _posixsubprocess and closefds=True.
--
hgrepos: +157
Python tracker
<http://bugs.python.org/issue8
New submission from Richard Delorenzi:
This code produces the wrong result
import cjson
cjson.decode(cjson.encode('/'))
It produces '\\/', it should produce '/'
using
/usr/lib/pymodules/python2.7/cjson.so
cjson version 1.0.5-4build1
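For comparison, the standard library's `json` module round-trips this correctly, since JSON does not require the solidus to be escaped (the escaped form cjson produces is legal JSON, but decoding it back to '\\/' is the bug):

```python
import json

encoded = json.dumps('/')
print(encoded)               # "/"  (no escaping of the slash)
print(json.loads(encoded))   # /
```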
--
componen
Richard Fothergill added the comment:
I'm getting these results on both:
Python 3.2.3 (default, Apr 10 2013, 06:11:55)
[GCC 4.6.3] on linux2
and
Python 2.7.3 (default, Apr 10 2013, 06:20:15)
[GCC 4.6.3] on linux2
The symptoms are exactly as Terrence described.
Nesting proxied containe
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker
<http://bugs.python.org/issue20990>
Richard Oudkerk added the comment:
We should only wrap the exception with ExceptionWithTraceback in the process
case where it will be pickled and then unpickled.
--
assignee: -> sbt
Python tracker
<http://bugs.python.org/issu
Richard Oudkerk added the comment:
For reasons we all know unpickling unauthenticated data received over TCP is
very risky. Sending an unencrypted authentication key (as part of a pickle)
over TCP would make the authentication useless.
When a proxy is pickled the authkey is deliberately
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: -> committed/rejected
status: open -> closed
Python tracker
<http://bugs.python.or
Changes by Richard Oudkerk :
--
resolution: -> fixed
stage: test needed -> committed/rejected
status: open -> closed
Python tracker
<http://bugs.python.or
Richard Oudkerk added the comment:
Testing the is_forking() requires cx_freeze or something similar, so it really
cannot go in the test suite.
I have tested it manually (after spending too long trying to get cx_freeze to
work with a source build).
It should be noted that on Unix freezing is
Richard Oudkerk added the comment:
No, the argument will not go away now.
However, I don't much like the API which is perhaps why I did not get round to
documenting it.
It does have tests. Currently 'xmlrpclib' is the only supported alternative,
but JSON support could be add
Richard Oudkerk added the comment:
Using asyncio and the IOCP eventloop it is not necessary to use threads.
(Windows may use worker threads for overlapped IO, but that is hidden from
Python.) See
https://code.google.com/p/tulip/source/browse/examples/child_process.py
for vaguely "e
Richard Oudkerk added the comment:
Using truncate() to zero extend is not really portable: it is only guaranteed
on XSI-compliant POSIX systems.
Also, the FreeBSD man page for mmap() has the following warning:
WARNING! Extending a file with ftruncate(2), thus creating a big
hole, and then
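A portable alternative sketch: grow the file by seeking to the final offset and writing a single byte, instead of relying on `ftruncate()` zero-extension (file layout and size here are made up for illustration):

```python
import mmap
import os
import tempfile

def grow_and_map(size=4096):
    """Extend a temp file to `size` bytes portably, then mmap it."""
    fd, path = tempfile.mkstemp()
    try:
        # Writing one byte at offset size-1 extends the file without
        # depending on XSI ftruncate() zero-extension semantics.
        os.lseek(fd, size - 1, os.SEEK_SET)
        os.write(fd, b"\0")
        with mmap.mmap(fd, size) as m:
            m[:5] = b"hello"
            return bytes(m[:5])
    finally:
        os.close(fd)
        os.unlink(path)

print(grow_and_map())  # b'hello'
```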
Richard Oudkerk added the comment:
I would recommended using _overlapped instead of _winapi.
I intend to move multiprocessing over in future.
Also note that you can do nonblocking reads by starting an overlapped read
then cancelling it immediately if it fails with "incomplete". You
New submission from Richard Kiss:
Some tasks created via asyncio are vanishing because there is no reference to
their resultant futures.
This behaviour does not occur in Python 3.3.3 with asyncio-0.4.1.
Also, doing a gc.collect() immediately after creating the tasks seems to fix
the problem
Changes by Richard Kiss :
--
hgrepos: -231
Python tracker
<http://bugs.python.org/issue21163>
Changes by Richard Kiss :
--
title: asyncio Task Possibly Incorrectly Garbage Collected -> asyncio task
possibly incorrectly garbage collected
Python tracker
<http://bugs.python.org/issu
Richard Kiss added the comment:
I agree it's confusing and I apologize for that.
Background:
This multiplexing pattern is used in pycoinnet, a bitcoin client I'm developing
at <https://github.com/richardkiss/pycoinnet>. The BitcoinPeerProtocol class
multiplexes protoc
Richard Kiss added the comment:
I'll investigate further.
--
Python tracker
<http://bugs.python.org/issue21163>
Richard Kiss added the comment:
You were right: adding a strong reference to each Task seems to have solved the
original problem in pycoinnet. I see that the global set of tasks kept by
asyncio.tasks is a WeakSet, so it's necessary to keep a strong reference myself.
This does s
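The workaround described, keeping a strong reference until the task completes, is now a standard pattern; a sketch with modern asyncio names (the helper name is made up):

```python
import asyncio

_background = set()

def spawn(coro):
    """Schedule coro, holding a strong reference until it finishes,
    so the task cannot be garbage collected mid-flight."""
    task = asyncio.ensure_future(coro)
    _background.add(task)
    task.add_done_callback(_background.discard)
    return task

async def work():
    await asyncio.sleep(0)
    return 42

async def main():
    t = spawn(work())
    return await t

print(asyncio.run(main()))  # 42
```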
Richard Oudkerk added the comment:
I would guess that the problem is simply that LogisticRegression objects are
not picklable. Does the problem still occur if you do not use freeze?
--
Python tracker
<http://bugs.python.org/issue21
Richard Oudkerk added the comment:
Ah, I misunderstood: you meant that it freezes/hangs, not that you used a
freeze tool.
--
Python tracker
<http://bugs.python.org/issue21
Richard Oudkerk added the comment:
Could you try pickling and unpickling the result of func():
import cPickle
data = cPickle.dumps(func([1,2,3]), -1)
print cPickle.loads(data)
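The equivalent check in Python 3 terms, with `func` standing in for the user's actual function (so this is only a sketch of the diagnostic, not their code):

```python
import pickle

def func(seq):
    # Stand-in for the user's real function
    return [x * 2 for x in seq]

# Round-trip the result with the highest protocol, as pool workers do
data = pickle.dumps(func([1, 2, 3]), -1)
print(pickle.loads(data))  # [2, 4, 6]
```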
--
Python tracker
<http://bugs.python.org/issue21
Richard Oudkerk added the comment:
Can you explain why you write in 512-byte chunks? Writing in one chunk should
not cause a deadlock.
--
Python tracker
<http://bugs.python.org/issue1191
Richard Oudkerk added the comment:
I added some comments.
Your problem with lost data may be caused by the fact you call ov.cancel() and
expect ov.pending to tell you whether the write has/will succeed. Instead you
should use ov.getresult() and expect either success or an "aborted"
New submission from Richard Kiss:
import asyncio
import os

def t1(q):
    yield from asyncio.sleep(0.5)
    q.put_nowait((0, 1, 2, 3, 4, 5))

def t2(q):
    v = yield from q.get()
    print(v)

q = asyncio.Queue()
asyncio.get_event_loop().run_until_complete(asyncio.wait([t1(q), t2(q)]))
When
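A modern rewrite of that example, using coroutine syntax and `asyncio.gather` (which keeps strong references to both tasks), behaves as expected; this is an illustration, not the original reporter's code:

```python
import asyncio

async def t1(q):
    await asyncio.sleep(0.01)
    q.put_nowait((0, 1, 2, 3, 4, 5))

async def t2(q):
    return await q.get()

async def main():
    q = asyncio.Queue()
    # gather() holds references to both tasks until completion
    _, v = await asyncio.gather(t1(q), t2(q))
    return v

print(asyncio.run(main()))  # (0, 1, 2, 3, 4, 5)
```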
Richard Kiss added the comment:
For a reason that I don't understand, this patch to asyncio fixes the problem:
--- a/asyncio/tasks.py Mon Mar 31 11:31:16 2014 -0700
+++ b/asyncio/tasks.py Sat Apr 12 20:37:02 2014 -0700
@@ -49,7 +49,8 @@
def __next__(self):
return next(sel
Richard Oudkerk added the comment:
If you use short timeouts to make the wait interruptible then you can
use WaitForMultipleObjects (which automatically waits on an extra event
object) instead of WaitForSingleObject.
--
Python tracker
<h
Richard Marko added the comment:
Would be nice to have this commited as without this change
-        if self.quitting:
-            return # None
+        if not self.botframe:
+            self.botframe = frame
it's not possible to quit Bdb (and the code it's executing) as it ju
Changes by Richard Oudkerk :
--
assignee: -> sbt
Python tracker
<http://bugs.python.org/issue20147>
Richard Oudkerk added the comment:
register_after_fork() is intentionally undocumented and for internal use.
It is only run when starting a new process using the "fork" start method
whether on Windows or not -- the "fork" in its name is a hint.
--
resolution:
Richard Kiss added the comment:
The more I use asyncio, the more I am convinced that the correct fix is to keep
a strong reference to a pending task (perhaps in a set in the eventloop) until
it starts.
Without realizing it, I implicitly made this assumption when I began working on
my asyncio
Changes by Richard Oudkerk :
--
nosy: +sbt
Python tracker
<http://bugs.python.org/issue7292>
Richard Oudkerk added the comment:
Maybe lru_cache() should have a key argument so you can specify a specialized
key function. So you might have
def _compile_key(args, kwds, typed):
    return args

@functools.lru_cache(maxsize=500, key=_compile_key)
def _compile(pattern
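`functools.lru_cache` has no `key` parameter, so the snippet above is a proposal; the effect can be approximated today with a small wrapper (a sketch, all names made up, no thread safety):

```python
from collections import OrderedDict

def lru_cache_with_key(maxsize=128, key=None):
    """LRU cache decorator whose cache key is computed by `key(args, kwds)`."""
    def decorator(func):
        cache = OrderedDict()
        def wrapper(*args, **kwds):
            k = key(args, kwds) if key else (args, tuple(sorted(kwds.items())))
            if k in cache:
                cache.move_to_end(k)   # mark as most recently used
                return cache[k]
            result = cache[k] = func(*args, **kwds)
            if len(cache) > maxsize:
                cache.popitem(last=False)  # evict least recently used
            return result
        return wrapper
    return decorator

calls = []

# Key only considers the positional args, like the _compile_key idea above
@lru_cache_with_key(maxsize=500, key=lambda args, kwds: args)
def compile_pattern(pattern, flags=0):
    calls.append(pattern)
    return (pattern, flags)

compile_pattern("a*b")
compile_pattern("a*b")   # served from the cache; func not called again
print(calls)             # ['a*b']
```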
Richard Oudkerk added the comment:
The patch does not apply correctly against vanilla Python 3.3. I would guess
that you are using a version of Python which has been patched to add mingw
support. Where did you get it from?
(In vanilla Python 3.3, setup.py does not contain any mention of