I noticed something (in 2.5) yesterday, which may be a feature, but is more
likely a bug.
In tokenizer.c, tok->encoding is allocated using PyMem_MALLOC().
However, this then gets handed to a node->n_str in parsetok.c, and then
released in node.c using PyObject_Free().
Now, by coincidence, PyObj
Kristján Valur Jónsson wrote:
>
> I noticed something (in 2.5) yesterday, which may be a feature, but is more
> likely a bug.
> In tokenizer.c, tok->encoding is allocated using PyMem_MALLOC().
> However, this then gets handed to a node->n_str in parsetok.c, and then
> released in node.c using PyObject_Free().
Kristján Valur Jónsson wrote:
> My feeling is that these two APIs shouldn’t be interchangeable.
> Especially since you can’t hand a PyObject_Malloc’d object to
> PyMem_Free() so the inverse shouldn’t be expected to work.
I thought this had officially been deemed illegal for a while, and
Google fo
Guido van Rossum wrote:
You might see a pattern. Is this on Windows?
Well, yes, but I'm not 100% sure. The problematic machine is a Windows box, but
there are no non-windows boxes on that network and vpn'ing from one of my
non-windows boxes slows things down enough that I'm not confident what I'd
be
On Fri, Sep 4, 2009 at 1:11 PM, Chris Withers wrote:
> Am I right in reading this as most of the time is being spent in httplib's
> HTTPResponse._read_chunked and none of the methods it calls?
>
> If so, is there a better way than a bunch of print statements to find where
> in that method the time
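One low-effort alternative to a bunch of print statements is to run the download under cProfile: it will not show line-by-line cost inside _read_chunked itself, but it does confirm how the time splits between that method and the things it calls. A minimal sketch, assuming a placeholder host (http.client is the Python 3 name of httplib):

    # Sketch only: profile a whole download and list the hottest functions.
    # "example.com" is a placeholder, not the server from this thread.
    import cProfile
    import pstats
    import http.client  # "httplib" in the Python 2 code under discussion

    def fetch():
        conn = http.client.HTTPConnection("example.com")
        conn.request("GET", "/")
        body = conn.getresponse().read()   # _read_chunked() runs in here for chunked replies
        conn.close()
        return len(body)

    profiler = cProfile.Profile()
    profiler.runcall(fetch)
    pstats.Stats(profiler).sort_stats("tottime").print_stats(10)  # top 10 by own time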
Simon Cross <...@gmail.com> writes:
>
> Well, since the source for _read_chunked includes the comment
>
> # XXX This accumulates chunks by repeated string concatenation,
> # which is not efficient as the number or size of chunks gets big.
>
> you might gain some speed improvement wit
Simon Cross wrote:
Well, since the source for _read_chunked includes the comment
# XXX This accumulates chunks by repeated string concatenation,
# which is not efficient as the number or size of chunks gets big.
you might gain some speed improvement with minimal effort by gather
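The suggestion here is presumably the usual one: gather the chunks in a list and join them once at the end instead of growing a single string. A rough sketch of that pattern, not the actual httplib code or patch, where read_chunk() is a stand-in for whatever returns one decoded chunk from the socket:

    # Illustrative only -- not httplib's _read_chunked().
    def read_body(read_chunk):
        chunks = []                   # store references; no copying yet
        while True:
            chunk = read_chunk()
            if not chunk:             # empty chunk marks the end of the body
                break
            chunks.append(chunk)
        return b"".join(chunks)       # one pass, one allocation for the result

    # Toy usage: three fake chunks followed by a terminating empty chunk.
    fake = iter([b"spam", b"eggs", b"ham", b""])
    print(read_body(lambda: next(fake)))   # b'spameggsham'

    # The quadratic version the XXX comment complains about looks like:
    #     body = b""
    #     ...
    #     body = body + chunk   # re-copies everything read so far, every time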
On Fri, Sep 04, 2009 at 04:02:39PM +0100, Chris Withers wrote:
> So, httplib does this:
>
> GET / HTTP/1.1
[skip]
> While wget does this:
>
> GET / HTTP/1.0
[skip]
> - Apache responds with a chunked response only to httplib. Why is that?
Probably because wget uses HTTP/1.0?
Oleg.
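One way to test that hypothesis from the Python side is to make httplib itself speak HTTP/1.0, since a server should not send a chunked reply to a 1.0 request. The sketch below pokes at the private _http_vsn/_http_vsn_str class attributes, which are implementation details, so treat it as a throwaway experiment rather than a supported API; the host is a placeholder:

    # Throwaway experiment: force the client to advertise HTTP/1.0 and see
    # whether the chunked Transfer-Encoding disappears from the response.
    import http.client  # "httplib" in the Python 2 code under discussion

    http.client.HTTPConnection._http_vsn = 10
    http.client.HTTPConnection._http_vsn_str = "HTTP/1.0"

    conn = http.client.HTTPConnection("example.com")
    # Pass Host explicitly; it may not be added automatically for a 1.0 request.
    conn.request("GET", "/", headers={"Host": "example.com"})
    resp = conn.getresponse()
    print(resp.status, resp.getheader("Transfer-Encoding"))  # expect no "chunked" here
    resp.read()
    conn.close()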
Chris Withers wrote:
> - Apache in this instance responds with HTTP 1.1, even though the wget
> request was 1.0, is that allowed?
>
> - Apache responds with a chunked response only to httplib. Why is that?
>
I find it very confusing that you say "Apache" since you really want to
say "Coyote" wh
ACTIVITY SUMMARY (08/28/09 - 09/04/09)
Python tracker at http://bugs.python.org/
To view or respond to any of the issues listed below, click on the issue
number. Do NOT respond to this message.
2374 open (+24) / 16285 closed (+18) / 18659 total (+42)
Open issues with patches: 939
Average
Antoine Pitrou wrote:
Simon Cross <...@gmail.com> writes:
Well, since the source for _read_chunked includes the comment
# XXX This accumulates chunks by repeated string concatenation,
# which is not efficient as the number or size of chunks gets big.
you might gain some speed impro
Chris Withers <...@simplistix.co.uk> writes:
>
> The fix is applied on the trunk, but the problem still exists on the 2.6
> branch, 3.1 branch and 3.2 branch.
>
> Which of these should I merge to? I assume all of them?
The performance problem is sufficiently serious that it should be considered a
b
Hi All,
Anyone know what's causing this failure?
test test___all__ failed -- Traceback (most recent call last):
File "Lib/test/test___all__.py", line 106, in test_all
self.check_all("profile")
File "Lib/test/test___all__.py", line 23, in check_all
exec("from %s import *" % modname, n
Chris Withers wrote:
> Hi All,
>
> Anyone know what's causing this failure?
>
> test test___all__ failed -- Traceback (most recent call last):
>File "Lib/test/test___all__.py", line 106, in test_all
> self.check_all("profile")
>File "Lib/test/test___all__.py", line 23, in check_all
On Fri, Sep 4, 2009 at 4:28 AM, Simon Cross wrote:
> On Fri, Sep 4, 2009 at 1:11 PM, Chris Withers wrote:
>> Am I right in reading this as most of the time is being spent in httplib's
>> HTTPResponse._read_chunked and none of the methods it calls?
>>
>> If so, is there a better way than a bunch of
Guido van Rossum wrote:
+1 on trying this. Constructing a 116MB string by concatenating 1KB
buffers surely must take forever. (116MB divided by 85125 recv() calls
gives 1365 bytes per chunk, which is awful.) The HTTP/1.0 business looks
like a red herring.
Also agreed that this is an embarrassment.
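For a feel for the numbers, here is a small self-contained comparison, unrelated to the actual httplib code, of building a body by repeated bytes concatenation versus collecting pieces in a list and joining once. The chunk size echoes the roughly 1365 bytes per recv() computed above, with far fewer iterations than the 85125 calls in the report:

    # Illustration of the asymptotic difference, not a benchmark of httplib.
    import timeit

    CHUNK = b"x" * 1365          # roughly the per-recv() size quoted above
    N = 5_000                    # far fewer than the 85125 calls reported

    def concat():
        body = b""
        for _ in range(N):
            body += CHUNK        # re-copies the whole body each iteration: O(N**2)
        return body

    def join():
        parts = []
        for _ in range(N):
            parts.append(CHUNK)  # just stores a reference
        return b"".join(parts)   # single linear copy at the end

    print("concat:", timeit.timeit(concat, number=1))
    print("join:  ", timeit.timeit(join, number=1))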
Guido van Rossum <...@python.org> writes:
>
> +1 on trying this. Constructing a 116MB string by concatenating 1KB
> buffers surely must take forever. (116MB divided by 85125 recv() calls
> gives 1365 bytes per chunk, which is awful.)
It certainly is quite small but perhaps the server tries to stay belo
On 30/08/2009 9:37 PM, Martin Geisler wrote:
Mark Hammond writes:
1) I've stalled on the 'none:' patch I promised to resurrect. While
doing this, I re-discovered that the tests for win32text appear to
check that win32 line endings are used by win32text on *all* platforms, not
just Windows.
I thi