Nikolaus Rath added the comment:
*ping*
Is there anything wrong with the patch? What do I have to do to get it applied?
--
Added file: http://bugs.python.org/file33360/issue18574.diff
Nikolaus Rath added the comment:
Here's a patch that (I think) incorporates all the comments. If someone could
apply it, that would be great :-).
--
keywords: +patch
versions: +Python 2.7, Python 3.1, Python 3.2, Python 3.3, Python 3.4, Python
3.5
Added file: http://bugs.pytho
Nikolaus Rath added the comment:
*ping*
Any comments on the updated patch? Can it be applied?
--
Nikolaus Rath added the comment:
*ping*
Is this suitable for inclusion? Or do I need to do anything else?
--
Nikolaus Rath added the comment:
I have attached a patch that takes into account your comments. Would this be
suitable for inclusion?
--
keywords: +patch
Added file: http://bugs.python.org/file33362/issue17811.diff
Nikolaus Rath added the comment:
Thanks for looking at this Martin! I have attached an updated patch that
includes a reference to open and slightly changed language.
But please, let's not have the best be the enemy of the good here. There will
probably always be room for further improv
Nikolaus Rath added the comment:
I've updated the patch again to fix a problem with HTTPSConnection.
--
Added file: http://bugs.python.org/file33431/issue7776.patch
Nikolaus Rath added the comment:
Rebased patch on current tip.
--
Added file: http://bugs.python.org/file33520/issue7776.patch
Nikolaus Rath added the comment:
I have started working on this (partial patch attached), but I am unsure how to
handle cases where the docstring definition utilizes a macro. For example:
PyDoc_STRVAR(strftime_doc,
"strftime(format[, tuple]) -> string\n\
\n\
Convert a time tuple to
Nikolaus Rath added the comment:
Thanks for looking at this! Attached is an updated patch with a test case
(sorry, I thought I had included test cases everywhere).
--
Added file: http://bugs.python.org/file33532/issue18574_rev2.patch
Nikolaus Rath added the comment:
Indeed, this makes most sense. I didn't know that glossary entry existed. I
have attached an updated patch. Thanks for reviewing!
--
Added file: http://bugs.python.org/file33533/issue17811_rev2.patch
Changes by Nikolaus Rath :
--
versions: -Python 3.5
Nikolaus Rath added the comment:
Will do. Another question: what should I do with return annotations, e.g. the
"-> dict" part in
"get_clock_info(name: str) -> dict\n
Should I drop that part? In the clinic howto, I couldn't find anything about
return annota
Nikolaus Rath added the comment:
One more question: what's a good way to find sites that need changes? Searching
for "PyArg_" finds both converted and unconverted functions...
--
Nikolaus Rath added the comment:
Here is a patch for timemodule.c. I have converted everything except:
* ctime() uses the gettmarg() function. Converting gettmarg() itself doesn't
seem possible to me (because it's not Python callable), and duplicating the
functionality i
Changes by Nikolaus Rath :
Removed file: http://bugs.python.org/file33521/issue20177.patch
Nikolaus Rath added the comment:
I just realized that the Misc/NEWS entry is utterly wrong (it talks about the
client, even though the bug is in the server); my apologies. Here's a fixed one:
- Issue #18574: Added missing newline in 100-Continue reply
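For clarity (my own illustration, not part of the patch): the interim reply is
only complete once its header block is terminated by an empty line, i.e. the
bytes on the wire should look like this:

# Sketch of the expected wire format; the trailing blank line (second
# CRLF) is the newline the NEWS entry refers to.
reply = b"HTTP/1.1 100 Continue\r\n\r\n"
assert reply.endswith(b"\r\n\r\n")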
Nikolaus Rath added the comment:
Hmm. After reading some of the threads on python-dev, I'm now confused about
whether I did the conversion of optional arguments correctly.
Should something like "strftime(format[, tuple])" be converted using optional
groups, or should tuple become a p
Nikolaus Rath added the comment:
I can confirm this. The actual problem is that neither XML nor SGML PIs in the
input make it into the etree, and no events are generated for them during
incremental parsing.
XML PIs that are added into the tree using Python functions are correctly
written
Changes by Nikolaus Rath :
--
title: xml.etree.ElementTree strips XML declaration and procesing instructions
-> xml.etree.ElementTree skips processing instructions when parsing
Nikolaus Rath added the comment:
No, I really mean XML processing instruction. I agree with you that the XML
declaration is a non-issue, because there is no information lost: you know that
you're going to write XML, and you manually specify the encoding. Thus it's
trivial to add t
Nikolaus Rath added the comment:
(adding the documentation experts from
http://docs.python.org/devguide/experts.html to the nosy list in the hope of
pushing this forward)
--
nosy: +eric.araujo, ezio.melotti, georg.brandl
Nikolaus Rath added the comment:
(adding the documentation and ctypes experts from
http://docs.python.org/devguide/experts.html to the nosy list in the hope of
getting this moving again.)
--
nosy: +belopolsky, eric.araujo, ezio.melotti, georg.brandl, meador.inge
versions: -Python 2.6
Nikolaus Rath added the comment:
Nadeem, did you have a chance to look at this again, or do you have any partial
patch already?
If not, I'd like to try working on a patch.
--
Nikolaus Rath added the comment:
For the record: I disagree that this is an enhancement. ElementTree supports
PIs as first-class tree elements. They can be added, inspected, removed, and
written out when serializing into XML. Only when reading in XML, they are
silently dropped. I think this
Nikolaus Rath added the comment:
That makes sense. Attached is an updated patch. It removes most of the
duplication, and clearly says that there is no semantic difference between the
yield statement and the yield expression at all.
I also moved the "see also" block to follow the d
Nikolaus Rath added the comment:
As discussed on devel, here's an updated patch for timemodule.c that
uses a custom C converter to handle the parse_time_t_args() utility
function. The only function I did not convert is mktime, because I did
not find a way to do so without duplicatin
Nikolaus Rath added the comment:
The patches look fine to me. They are only docpatches, so no testcase is needed.
I have rebased them on current hg tip to avoid fuzz, but otherwise left them
unchanged.
I think this is ready to be committed.
--
nosy: +ezio.melotti, georg.brandl
Nikolaus Rath added the comment:
Apologies, I missed that. I'll be more careful in the future. I've attached an
updated patch that also adds some extra Sphinx markup, but should IMO still be
credited to Ryan and Karl.
--
Added file: http://bugs.python.org/file33674/issue1144
New submission from Nikolaus Rath:
(This issue was branched off from #9521).
When parsing XML, etree currently skips over all processing instructions and
comments. However, both can be represented in the tree and are also written out
when generating XML.
The attached patch documents this (IMO
Nikolaus Rath added the comment:
I have created issue 20375 with a patch to document the current behavior.
--
Changes by Nikolaus Rath :
--
nosy: +nikolaus.rath
Nikolaus Rath added the comment:
Is there any reason why unconsumed_tail needs to be exposed?
I would instead suggest introducing a boolean attribute data_ready that
indicates that more decompressed data can be provided without additional
compressed input.
Example:
# decomp = decompressor
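To make the suggestion concrete, here is a toy sketch that emulates the
proposed data_ready flag on top of the existing zlib API (data_ready is
hypothetical; this is not stdlib code):

import zlib

class Decompressor:
    """Toy wrapper illustrating the *proposed* data_ready attribute."""

    def __init__(self):
        self._d = zlib.decompressobj()

    @property
    def data_ready(self):
        # Rough approximation: more output can be produced without fresh
        # compressed input as long as unconsumed input is still buffered.
        return bool(self._d.unconsumed_tail)

    def decompress(self, data, max_length):
        return self._d.decompress(self._d.unconsumed_tail + data, max_length)

    def flush(self):
        return self._d.flush()

compressed = zlib.compress(b"a" * 50000)
decomp = Decompressor()
out = [decomp.decompress(compressed, 4096)]
while decomp.data_ready:                 # drain without feeding more input
    out.append(decomp.decompress(b"", 4096))
assert b"".join(out) + decomp.flush() == b"a" * 50000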
Nikolaus Rath added the comment:
(I already sent this to python-dev, but maybe it makes more sense to have these
thoughts together with the patch)
After looking at the conversion of parse_time_t_args again, I think I lost
track of what we're actually gaining with this procedure. If a
Nikolaus Rath added the comment:
Let me be more precise:
My suggestion is not to remove `unconsumed_tail` entirely, but I think its
value needs to be defined only when the end of the compressed stream has been
reached.
In other words, you could still do:
while not decomp.eof
# ...
if
Nikolaus Rath added the comment:
I've also attached a testcase to confirm that the docpatch reflects current
behavior, and to make sure that anticipated enhancements in Python 3.5 behave
in a backwards compatible way.
--
Added file: http://bugs.python.org/file33713/
Nikolaus Rath added the comment:
I've attached updated patches to reflect the recent changes in the derby policy.
part1 is the part of the patch that is suitable for 3.4.
part2 are the changes that have to wait for 3.5 (mostly because of default
values).
--
Added file:
Changes by Nikolaus Rath :
Added file: http://bugs.python.org/file33716/timemodule_part1_rev3.patch
Changes by Nikolaus Rath :
Removed file: http://bugs.python.org/file33714/timemodule_part1_rev3.patch
Changes by Nikolaus Rath :
Added file: http://bugs.python.org/file33717/timemodule_part2_rev3.patch
Nikolaus Rath added the comment:
I'll hold off on working on _datetimemodule.c until someone has had a chance to
look over my _timemodule.c patch. (I want to make sure I'm following the right
procedure.)
--
Nikolaus Rath added the comment:
I am working on this now.
--
nosy: +nikratio
Nikolaus Rath added the comment:
Question:
What is the point of the
old_encoding = codecs.lookup(self._encoding).name
encoding = codecs.lookup(encoding).name
if encoding == old_encoding and errors == self._errors:
    # no change
    return
dance? Isn
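(For reference, my own illustration of why the lookup matters: codecs.lookup()
normalizes spelling variants and aliases, so two different spellings of the
same codec compare equal afterwards.)

import codecs

# Alias normalization: "UTF8", "utf_8" and "utf-8" are the same codec.
assert codecs.lookup("UTF8").name == codecs.lookup("utf_8").name == "utf-8"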
Nikolaus Rath added the comment:
Second question:
The following looks as if a BOM might be written for writeable, non-seekable
streams:
# don't write a BOM in the middle of a file
if self._seekable and self.writable():
    position = self.buffer.tell()
Nikolaus Rath added the comment:
Thanks Nick! Will take this into account.
I've stumbled over another question in the meantime:
It seems to me that after the call to set_encoding(), self._snapshot contains
the decoder flags from the state of the *old* decoder. On the next call of eg.
Nikolaus Rath added the comment:
Thanks for the review! I've attached an updated patch.
An update to the set_tunnel library documentation is being discussed in issue
11448. I'm not sure how to best handle the overlap. Maybe the best way is to
first deal with issue 11448, and th
Nikolaus Rath added the comment:
Hmm. I think I found another problem... please wait for another patch revision.
--
Nikolaus Rath added the comment:
Ok, I've attached yet another patch revision.
This revision is less complex, because it gets rid of the ability to set up
chains of tunnels. The only reason that I put that in was to preserve backward
compatibility -- but upon reviewing the old implement
Nikolaus Rath added the comment:
I'm about 40% done with translating Victor's patch into C. However, in the
process I got the feeling that this approach may not be so good after all.
Note that:
* The only use-case for set_encoding that I have found was changing the
encoding of
Nikolaus Rath added the comment:
Wow, I didn't realize that programming Python using the C interface was that
tedious and verbose. I have attached a work-in-progress patch. It is not
complete yet, but maybe somebody could already take a look to make sure that
I'm not heading complet
Changes by Nikolaus Rath :
Added file: http://bugs.python.org/file33857/set_encoding-4.patch
Nikolaus Rath added the comment:
The attached patch now passes all testcases.
However, every invocation of set_encoding() when there is buffered data leaks
one reference. I haven't been able to find the error yet.
As for adding a reopen() or configure() method: I don't like it very
Nikolaus Rath added the comment:
I think that any API that gives you a TextIOWrapper object with undesired
properties is broken and needs to be fixed, rather than a workaround added to
TextIOWrapper.
That said, I defer to the wisdom of the core developers, and I'll be happy to
impl
Nikolaus Rath added the comment:
Newest version of the patch attached, the reference leak problem has been fixed.
--
Added file: http://bugs.python.org/file33893/set_encoding-6.patch
Nikolaus Rath added the comment:
On 02/04/2014 03:28 AM, Nick Coghlan wrote:
>
> Nick Coghlan added the comment:
>
> If operating systems always exposed accurate metadata and configuration
> settings, I'd agree with you. They don't though, so sometimes develop
New submission from Nikolaus Rath:
It would be nice to have a readinto1 method to complement the existing read,
readinto, and read1 methods of io.BufferedIOBase.
--
components: Library (Lib)
messages: 210794
nosy: nikratio
priority: normal
severity: normal
status: open
title
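A rough sketch of what such a method could look like in terms of the existing
read1() (illustration only, not the implementation that would go into io):

import io

def readinto1(stream, b):
    # At most one call to the underlying raw stream, mirroring read1().
    data = stream.read1(len(b))
    n = len(data)
    b[:n] = data
    return n

buf = bytearray(4)
n = readinto1(io.BytesIO(b"abcdef"), buf)
print(n, bytes(buf[:n]))       # 4 b'abcd'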
Nikolaus Rath added the comment:
(I'll work on a patch for this myself, this bug is just to prevent duplicate
work)
--
New submission from Nikolaus Rath :
When threads are created by a C extension loaded with ctypes,
threading.local() objects are always empty. If one uses
_threading_local.local() instead of threading.local(), the problem does
not occur.
More information and example program showing the
New submission from Nikolaus Rath :
On http://docs.python.org/3.1/library/codecs.html it says that
Possible values for errors are 'strict' (raise an exception in case of
an encoding error), 'replace' (replace malformed data with a suitable
replacement marker, such as
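(A quick self-contained demonstration of the handlers the quoted paragraph
describes; my own example, not taken from the documentation:)

raw = b"caf\xe9"                               # Latin-1 bytes, invalid as UTF-8
print(raw.decode("utf-8", errors="replace"))   # 'caf\ufffd'
print(raw.decode("utf-8", errors="ignore"))    # 'caf'
try:
    raw.decode("utf-8", errors="strict")       # the default
except UnicodeDecodeError as exc:
    print("strict:", exc)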
Changes by Nikolaus Rath :
--
nosy: +Nikratio
New submission from Nikolaus Rath :
ctypes currently has a datatype c_size_t which corresponds to size_t in
C, but there is no datatype for the C ssize_t.
--
assignee: theller
components: ctypes
messages: 91713
nosy: Nikratio, theller
severity: normal
status: open
title: Add support for
Nikolaus Rath added the comment:
I can give it a shot if you give me a rough idea where I have to make
the appropriate changes.
--
Nikolaus Rath added the comment:
Ok, apparently the lines that define c_size_t are these:
if sizeof(c_uint) == sizeof(c_void_p):
    c_size_t = c_uint
elif sizeof(c_ulong) == sizeof(c_void_p):
    c_size_t = c_ulong
elif sizeof(c_ulonglong) == sizeof(c_void_p):
    c_size_t = c_ulonglong
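Following the same pattern, a hedged sketch of what an analogous c_ssize_t
could look like, derived from the signed integer types (illustration only, not
necessarily the final patch):

from ctypes import sizeof, c_void_p, c_int, c_long, c_longlong

if sizeof(c_int) == sizeof(c_void_p):
    c_ssize_t = c_int
elif sizeof(c_long) == sizeof(c_void_p):
    c_ssize_t = c_long
elif sizeof(c_longlong) == sizeof(c_void_p):
    c_ssize_t = c_longlong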
New submission from Nikolaus Rath :
It would be great if the documentation of c_char_p
(http://docs.python.org/library/ctypes.html#ctypes.c_char_p) could be
reformulated as follows (would have saved me quite some time):
class ctypes.c_char_p
Represents the C char * datatype when it points
Nikolaus Rath added the comment:
I don't want to judge if the best way to represent binary data in C is a
void* or char*, but there is a lot of C code out there that uses char*,
and if we want to interface with such a library we need to use
POINTER(c_char).
Note that my docpatch doesn&
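To make the difference concrete, a small self-contained example (my own, not
part of the doc patch): c_char_p stops at the first NUL byte, while
POINTER(c_char) with an explicit length preserves arbitrary binary data.

import ctypes

buf = ctypes.create_string_buffer(b"ab\x00cd")            # binary data with an embedded NUL
as_char_p = ctypes.cast(buf, ctypes.c_char_p)
as_char_ptr = ctypes.cast(buf, ctypes.POINTER(ctypes.c_char))
print(as_char_p.value)    # b'ab'       -- truncated at the NUL
print(as_char_ptr[:5])    # b'ab\x00cd' -- full buffer, length given explicitly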
Nikolaus Rath added the comment:
I just spent several days figuring out a problem that was caused by this
behaviour and in the process I really tried to read everything that
relates to destruction of objects and exception handling in Python.
Georg, could you give me a pointer where exactly
New submission from Nikolaus Rath :
The attached test program calls apply_async with a function that will raise
CalledProcessError. However, when result.get() is called, it raises a TypeError
and the program hangs:
$ ./bug.py
ERROR:root:ops
Traceback (most recent call last):
File "./b
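For anyone hitting this without the attached script, the underlying pitfall can
be reproduced with pickle alone (my own minimal illustration, not the original
bug.py): an exception whose __init__ signature does not match its args cannot
be reconstructed, which is exactly what multiprocessing needs to do to ship it
back to the parent process.

import pickle

class MyError(Exception):
    def __init__(self, returncode, cmd):
        # args ends up holding only the formatted message, not (returncode, cmd)
        super().__init__("command %r failed with %d" % (cmd, returncode))
        self.returncode = returncode
        self.cmd = cmd

exc = MyError(127, "dcon")
try:
    pickle.loads(pickle.dumps(exc))
except TypeError as err:
    print("unpickling failed:", err)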
Nikolaus Rath added the comment:
@ray: Try it with the following dummy dcon program:
$ cat dcon
#!/bin/sh
exit 127
(and change the path to dcon in bug.py accordingly).
--
New submission from Nikolaus Rath :
$ python --version
Python 2.6.5
$ pylint --version
pylint 0.21.1,
astng 0.20.1, common 0.50.3
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3]
$ pylint pylint_crasher.py
* Module pylint_crasher
R0903: 62:Config: Too few public
Nikolaus Rath added the comment:
Duplicate of http://projects.scipy.org/numpy/ticket/1462
--
status: open -> closed
New submission from Nikolaus Rath:
The os.writev and os.readv functions are currently documented as:
os.writev(fd, buffers)
Write the contents of buffers to file descriptor fd, where buffers is an
arbitrary sequence of buffers. Returns the total number of bytes written.
os.readv(fd
Nikolaus Rath added the comment:
Here's a first attempt at improvement based on my guess:
os.writev(fd, buffers)
Write the contents of buffers to file descriptor fd, where buffers is an
arbitrary sequence of buffers. In this context, a buffer may be any Python
object that provi
Nikolaus Rath added the comment:
What section do you mean? bytearray is not mentioned anywhere in
http://docs.python.org/3.4/library/os.html.
I think the problem with just linking to the C API section is that it doesn't
help people that are only using pure Python. You can't look a
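Something along these lines might help as an example in the docs (POSIX only,
since writev()/readv() are not available on Windows; my own sketch, not part of
the attached patch):

import os

r, w = os.pipe()
written = os.writev(w, [b"foo", bytearray(b"bar"), memoryview(b"baz")])
buf1, buf2 = bytearray(4), bytearray(5)
nread = os.readv(r, [buf1, buf2])
print(written, nread, bytes(buf1), bytes(buf2))   # 9 9 b'foob' b'arbaz'
os.close(r)
os.close(w)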
New submission from Nikolaus Rath:
The zlib Decompress.decompress has a max_length parameter that limits the size
of the returned uncompressed data.
The lzma and bz2 decompress methods do not have such a parameter.
Therefore, it is not possible to decompress untrusted lzma or bz2 data without
Changes by Nikolaus Rath :
--
title: lzma and bz2 decompress methods lack max_size attribute -> lzma and bz2
decompress methods lack max_size parameter
Nikolaus Rath added the comment:
The lack of output size limiting has security implications as well.
Without being able to limit the size of the uncompressed data returned per
call, it is not possible to decompress untrusted lzma or bz2 data without
becoming susceptible to a DoS attack, as
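To illustrate the concern with the one decompressor that already has the
parameter (my own example): with zlib, the caller can cap how much a malicious
payload expands per call; bz2 and lzma currently offer no equivalent.

import zlib

bomb = zlib.compress(b"\0" * (50 * 1024 * 1024))   # ~50 MiB of zeros compress to a few tens of KiB
d = zlib.decompressobj()
chunk = d.decompress(bomb, 64 * 1024)              # output capped at 64 KiB per call
print(len(bomb), len(chunk))                       # e.g. ~50 KiB input, 65536 bytes output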
New submission from Nikolaus Rath:
The subprocess documentation currently just says that Popen.stdin et al. are
"file objects", which is linked to the glossary entry. This isn't very helpful,
as it doesn't tell whether the streams are bytes or text streams.
Suggested patc
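Concretely (my own illustration of what the docs could spell out): by default
the pipes are binary streams; with universal_newlines=True they become text
streams.

import subprocess, sys

cmd = [sys.executable, "-c", "print('hi')"]

p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
out, _ = p.communicate()
print(type(out))           # <class 'bytes'>

p = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)
out, _ = p.communicate()
print(type(out))           # <class 'str'>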
Nikolaus Rath added the comment:
I think it would also be rather important to know if the streams are buffered
or not.
--
Nikolaus Rath added the comment:
Matt, I believe in that case it's still a documentation issue, because then the
documentation probably should say that using absolute paths to libraries is a
bad idea in general.
--
Changes by Nikolaus Rath :
--
nosy: +nikratio
Nikolaus Rath added the comment:
Serhiy, I believe this still happens in Python 3.4, but it is harder to
reproduce. I couldn't get Armin's script to produce the problem either, but I'm
pretty sure that this is what causes e.g.
https://bugs.debian.org/cgi-bin/bugreport.cg
Nikolaus Rath added the comment:
This will probably be too radical, but I think it should at least be mentioned
as a possible option.
We could just not attempt to implicitly flush buffers in the finalizer at all.
This means scripts relying on this will break, but in contrast to the current
Nikolaus Rath added the comment:
*ping*. It's been another 8 months. It would be nice if someone could review
the patch.
--
Nikolaus Rath added the comment:
Martin Panter writes:
> We still need a patch for max_length in BZ2Decompressor, and to use it
> in BZ2File.
I'm still quite interested to do this. The only reason I haven't done it
yet is that I'm waiting for the LZMA patch to be reviewe
Nikolaus Rath added the comment:
Serhiy, did you add me to Cc just for information, or is there anything I
should be doing (having written the patch that introduced this bug)?
--
Nikolaus Rath added the comment:
Yes, I still plan to do it, but I haven't started yet.
That said, I certainly won't be offended if someone else implements the feature.
Please just let me know when you start working on this (I'll do the
same), so there's no du
Nikolaus Rath added the comment:
On a more practical note:
I believe Nadeem at one point said that the bz2 module is not exactly an
example for good stdlib coding, while the lzma module's implementation is quite
clean. Therefore, one idea I had for the bz2 module was to "
Nikolaus Rath added the comment:
I've started to work on the bz2 module. I'll attach a work-in-progress patch
if I get stuck or run out of time.
--
Nikolaus Rath added the comment:
Attached is a patch for the bz2 module.
--
Added file: http://bugs.python.org/file38194/issue15955_bz2.diff
Nikolaus Rath added the comment:
This just happened again to someone else, also using Python 3.4:
https://bitbucket.org/nikratio/s3ql/issues/87
Is there anything the affected people can do to help debugging this?
--
Nikolaus Rath added the comment:
*ping*
Just letting people know that this is still happening regularly and still
present in 3.5.
Some reports:
https://bitbucket.org/nikratio/s3ql/issues/87/
https://bitbucket.org/nikratio/s3ql/issues/109/ (last comment)
--
versions: +Python 3.5
Nikolaus Rath added the comment:
Stefan, sorry for ignoring your earlier reply. I somehow missed the question at
the end.
I believe that users of the Python module are *not* expected to make use of the
WANT_READ, WANT_WRITE flags. Firstly because the documentation (of Python's ssl
m
Nikolaus Rath added the comment:
Would you be willing to review a patch to incorporate the handling into the SSL
module?
--
Nikolaus Rath added the comment:
Updated patch attached.
--
Added file: http://bugs.python.org/file38206/issue15955_bz2_rev2.diff
Nikolaus Rath added the comment:
Martin, I'll try to review your GzipFile patch. But maybe it would make sense
to open a separate issue for this?
I think the LZMAFile patch has not yet been reviewed or committed, and we
probably want a patch for BZ2File too. The review page is already p
Nikolaus Rath added the comment:
Especially now that this is only going to go into 3.5, I think it makes more
sense to handle GzipFile, LZMAFile and BZ2File all in one go. Looking at the
code, otherwise there's going to be a lot of duplication.
How about introducing a base
Nikolaus Rath added the comment:
On Feb 27 2015, Martin Panter wrote:
> In the code review, Nikolaus raised the idea of allowing a custom
> “buffer_size” parameter for the BufferedReader. I think this would
> need a bit of consideration about how it should work:
>
> 1. Should
Nikolaus Rath added the comment:
On Mar 06 2015, Martin Panter wrote:
> Still to do: Need to find a better home for the _DecompressReader and
> _BaseStream classes. Currently it lives in “lzma”, but apparently it
> is possible for any of the gzip, bz2, lzma modules to not be
> impor
Nikolaus Rath added the comment:
If you want to add support for buffer_size=0 in a separate patch/issue I think
that's fine. But in that case I would not add a buffer_size parameter now at
all. IMO, not having it is better than having it but not supporting zero (even if
it's documen
Nikolaus Rath added the comment:
As discussed in Rietveld, here's an attempt to reuse more DecompressReader for
GzipFile. There is still an unexplained test failure (test_read_truncated).
--
Added file: http://bugs.python.org/file38674/LZMAFile-etc.v7.