[ python-Bugs-1462352 ] socket.ssl won't work together with socket.settimeout on Win
Bugs item #1462352, was opened at 2006-03-31 14:11
Message generated for change (Comment added) made by tim_one
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1462352&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Windows
Group: Python 2.4
Status: Open
Resolution: None
Priority: 6
Submitted By: Georg Brandl (gbrandl)
Assigned to: Nobody/Anonymous (nobody)
Summary: socket.ssl won't work together with socket.settimeout on Win
Initial Comment:
Symptoms:
>>> import socket
>>> s = socket.socket()
>>> s.settimeout(30.0)
>>> s.connect(("gmail.org", 995))
>>> ss = socket.ssl(s)
Traceback (most recent call last):
File "", line 1, in ?
File "C:\python24\lib\socket.py", line 74, in ssl
return _realssl(sock, keyfile, certfile)
socket.sslerror: (2, 'The operation did not complete
(read)')
This does not occur on Unix, where
test_socket_ssl.test_timeout runs smoothly.
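A minimal, self-contained sketch of the reported symptom with the failure
caught for inspection (host and port are the ones from the report; whether
the handshake actually fails depends on the build, so this is illustrative
rather than a guaranteed repro):
import socket

s = socket.socket()
s.settimeout(30.0)
s.connect(("gmail.org", 995))
try:
    ss = socket.ssl(s)
except socket.sslerror, e:
    print "socket.ssl failed with a timeout set:", e
else:
    print "SSL handshake completed"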
--
>Comment By: Tim Peters (tim_one)
Date: 2006-04-08 03:48
Message:
Logged In: YES
user_id=31435
Normal MS struct member alignment is definitely screwed up
inside _ssl.c, but still don't know how that happens.
sizeof this struct should be 16, but is reported as 12 when
the source is inside _ssl.c:
struct dummy {
int a;
double x;
};
(note that in the details in previous comments, the double
&Sock->sock_timeout was not 8-byte aligned in _ssl.c, but
was in socketmodule.c). I don't see any MS packing pragmas
in any of the OpenSSL .h files either.
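A quick way to see the same 16-versus-12 discrepancy from Python, using a
ctypes model of the dummy struct (a sketch only; _pack_ = 4 models a 4-byte
packing pragma being in effect, which is one way to end up with sizeof 12):
import ctypes

class Dummy(ctypes.Structure):
    # natural alignment: the double is 8-byte aligned on MSVC and most
    # 64-bit ABIs, giving sizeof 16 (some 32-bit Unix ABIs already use 12)
    _fields_ = [("a", ctypes.c_int), ("x", ctypes.c_double)]

class PackedDummy(ctypes.Structure):
    # 4-byte packing in effect: the double lands at offset 4, sizeof 12
    _pack_ = 4
    _fields_ = [("a", ctypes.c_int), ("x", ctypes.c_double)]

print ctypes.sizeof(Dummy)
print ctypes.sizeof(PackedDummy)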
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 02:05
Message:
Logged In: YES
user_id=31435
As a sanity check on all those details, inside newPySSLObject()
*(double *)((char *)&Sock->sock_timeout + 4)
is in fact 30.0.
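The same pointer arithmetic can be mimicked with the struct module: lay the
bytes out the way the default-aligned module does (int, 4 pad bytes, 8-byte
double) and read a double at the offset the packed view believes in. This is
a sketch of the mechanism, not the real PySocketSockObject layout:
import struct

blob = struct.pack("<i4xd", 1, 30.0)      # int at offset 0, double at offset 8
print struct.unpack("<d", blob[4:12])[0]  # 0.0  -- the misaligned read
print struct.unpack("<d", blob[8:16])[0]  # 30.0 -- four bytes further on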
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 01:25
Message:
Logged In: YES
user_id=31435
I did a data breakpoint on the 8 bytes in the sock_timeout
member, and it never triggered: nothing stores anything
there to change 30.0 to 0.0.
Instead, socketmodule.c and _ssl.c have different views of
where the members of a PySocketSockObject live. WRT
socketmodule.c sock_settimeout's `s`, and _ssl.c
newPySSLObject's `Sock` (which are the same object in the
test case), the debugger agrees about the addresses at which
these members live:
&s->_ob_next0x0096c3e8
&s->_ob_prev0x0096c3ec
&s->ob_refcnt 0x0096c3f0
&s->ob_type 0x0096c3f4
&s->sock_fd 0x0096c3f8
&s->sock_family 0x0096c3fc
&s->sock_type 0x0096c400
&s->sock_proto 0x0096c404
&s->sock_addr 0x0096c408
&s->errorhandler 0x0096c488
But there's a radical disconnect about where it thinks
sock_timeout lives:
&s->sock_timeout0x0096c490
&Sock->sock_timeout 0x0096c48c
Indeed,
printf("%d\n", sizeof(PySocketSockObject));
displays different results:
socketmodule.c: 176
_ssl.c: 172
I'm unclear about why. Doing
printf("%d\n", sizeof(sock_addr_t));
prints 128 in both modules, so there's not an obvious
difference there.
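The 4-byte disagreement over sock_timeout can be modelled the same way with
ctypes field offsets (illustrative field names only, not the real object
layout):
import ctypes

class NaturalView(ctypes.Structure):
    _fields_ = [("sock_fd", ctypes.c_int), ("sock_timeout", ctypes.c_double)]

class PackedView(ctypes.Structure):
    _pack_ = 4
    _fields_ = [("sock_fd", ctypes.c_int), ("sock_timeout", ctypes.c_double)]

# The packed view places sock_timeout 4 bytes earlier, just as _ssl.c did.
print NaturalView.sock_timeout.offset  # 8 with MSVC's default alignment
print PackedView.sock_timeout.offset   # 4 under 4-byte packing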
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-01 21:08
Message:
Logged In: YES
user_id=31435
BTW, does anyone understand why this part of my first
comment was true?:
"""
check_socket_and_wait_for_timeout() takes the "else if
(s->sock_timeout == 0.0)" path and and returns
SOCKET_IS_NONBLOCKING.
"""
How did s->sock_timeout become 0? s.settimeout(30.0) was
called, and the same s was passed to socket.ssl(). I don't
understand this at all:
>>> s.connect(("gmail.org", 995))
>>> s.gettimeout()
30.0
>>> s._sock
>>> s._sock.gettimeout()
30.0
>>> ss = socket.ssl(s)
but a breakpoint in newPySSLObject() right there shows that
Sock->sock_timeout is 0.0. HTF did that happen?
If I poke 30.0 (under the debugger) into Sock->sock_timeout
at the start of newPySSLObject(), the constructor finishes
unexceptionally.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-01 20:57
Message:
Logged In: YES
user_id=31435
gmail.com happened to respond when I tried it today, so I
can confirm (alas) that the patch at
http://pastebin.com/633224
made no difference to the outcome on Windows.
--
Comment By: Tim Peters (tim_one)
Date: 2006-03-31 20:36
Message:
Logged In: YES
user_id=31435
Because the
s.connect(("gmail.org", 995))
line started timing out on all (non-Windows) buildbot slaves
some hours ago, causing all test runs to fail, I disabled
test_timeout on all boxes for now (on trunk & on 2.4 branch).
--
[ python-Bugs-1462352 ] socket.ssl won't work together with socket.settimeout on Win
Bugs item #1462352, was opened at 2006-03-31 21:11
Message generated for change (Comment added) made by loewis
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1462352&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Windows
Group: Python 2.4
>Status: Closed
>Resolution: Fixed
Priority: 6
Submitted By: Georg Brandl (gbrandl)
Assigned to: Nobody/Anonymous (nobody)
Summary: socket.ssl won't work together with socket.settimeout on Win
Initial Comment:
Symptoms:
>>> import socket
>>> s = socket.socket()
>>> s.settimeout(30.0)
>>> s.connect(("gmail.org", 995))
>>> ss = socket.ssl(s)
Traceback (most recent call last):
File "", line 1, in ?
File "C:\python24\lib\socket.py", line 74, in ssl
return _realssl(sock, keyfile, certfile)
socket.sslerror: (2, 'The operation did not complete
(read)')
This does not occur on Unix, where
test_socket_ssl.test_timeout runs smoothly.
--
>Comment By: Martin v. Löwis (loewis)
Date: 2006-04-08 11:18
Message:
Logged In: YES
user_id=21627
The problem is that WIN32 isn't initially defined.
WinSock2.h has this structure:
#if !defined(WIN32) && !defined(_WIN64)
#include <pshpack4.h>
#endif
#include
#if !defined(WIN32) && !defined(_WIN64)
#include <poppack.h>
#endif
Even though WIN32 is initially not defined, it is defined
at the end of WinSock2.h, so that the poppack.h is not
included. That leaves a pragma pack(push,4) on the pack
stack. I haven't traced where exactly WIN32 is defined,
but it probably comes from Ole2.h.
Fixed in 43731 and 43732. Not sure whether anything needs to
be done to the test suite.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 09:48
Message:
Logged In: YES
user_id=31435
Normal MS struct member alignment is definitely screwed up
inside _ssl.c, but still don't know how that happens.
sizeof this struct should be 16, but is reported as 12 when
the source is inside _ssl.c:
struct dummy {
int a;
double x;
};
(note that in the details in previous comments, the double
&Sock->sock_timeout was not 8-byte aligned in _ssl.c, but
was in socketmodule.c). I don't see any MS packing pragmas
in any of the OpenSSL .h files either.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 08:05
Message:
Logged In: YES
user_id=31435
As a sanity check on all those details, inside newPySSLObject()
*(double *)((char *)&Sock->sock_timeout + 4)
is in fact 30.0.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 07:25
Message:
Logged In: YES
user_id=31435
I did a data breakpoint on the 8 bytes in the sock_timeout
member, and it never triggered: nothing stores anything
there to change 30.0 to 0.0.
Instead, socketmodule.c and _ssl.c have different views of
where the members of a PySocketSockObject live. WRT
socketmodule.c sock_settimeout's `s`, and _ssl.c
newPySSLObject's `Sock` (which are the same object in the
test case), the debugger agrees about the addresses at which
these members live:
&s->_ob_next0x0096c3e8
&s->_ob_prev0x0096c3ec
&s->ob_refcnt 0x0096c3f0
&s->ob_type 0x0096c3f4
&s->sock_fd 0x0096c3f8
&s->sock_family 0x0096c3fc
&s->sock_type 0x0096c400
&s->sock_proto 0x0096c404
&s->sock_addr 0x0096c408
&s->errorhandler 0x0096c488
But there's a radical disconnect about where it thinks
sock_timeout lives:
&s->sock_timeout0x0096c490
&Sock->sock_timeout 0x0096c48c
Indeed,
printf("%d\n", sizeof(PySocketSockObject));
displays different results:
socketmodule.c: 176
_ssl.c: 172
I'm unclear about why. Doing
printf("%d\n", sizeof(sock_addr_t));
prints 128 in both modules, so there's not an obvious
difference there.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-02 04:08
Message:
Logged In: YES
user_id=31435
BTW, does anyone understand why this part of my first
comment was true?:
"""
check_socket_and_wait_for_timeout() takes the "else if
(s->sock_timeout == 0.0)" path and and returns
SOCKET_IS_NONBLOCKING.
"""
How did s->sock_timeout become 0? s.settimeout(30.0) was
called, and the same s was passed to socket.ssl(). I don't
understand this at all:
>>> s.connect(("gmail.org", 995))
>>> s.gettimeout()
30.0
>>> s._sock
>>> s._sock.gettimeout()
30.0
>>> ss = socket.ssl(s)
but a breakpoint in newPySSLObject() right there shows that
Sock->sock_timeout is 0.0. HTF did that happen?
If I poke 30.0 (under the debugger) into Sock->sock_timeout
at the start of newPySSLObject(), the constructor finishes
unexceptionally.
--
[ python-Bugs-1462352 ] socket.ssl won't work together with socket.settimeout on Win
Bugs item #1462352, was opened at 2006-03-31 14:11
Message generated for change (Comment added) made by tim_one
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1462352&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Windows
Group: Python 2.4
Status: Closed
Resolution: Fixed
Priority: 6
Submitted By: Georg Brandl (gbrandl)
Assigned to: Nobody/Anonymous (nobody)
Summary: socket.ssl won't work together with socket.settimeout on Win
Initial Comment:
Symptoms:
>>> import socket
>>> s = socket.socket()
>>> s.settimeout(30.0)
>>> s.connect(("gmail.org", 995))
>>> ss = socket.ssl(s)
Traceback (most recent call last):
File "", line 1, in ?
File "C:\python24\lib\socket.py", line 74, in ssl
return _realssl(sock, keyfile, certfile)
socket.sslerror: (2, 'The operation did not complete
(read)')
This does not occur on Unix, where
test_socket_ssl.test_timeout runs smoothly.
--
>Comment By: Tim Peters (tim_one)
Date: 2006-04-08 08:48
Message:
Logged In: YES
user_id=31435
Whoa -- good eye, Martin! Thank you. Looks like bugs all
over the place.
FYI, I later rehabilitated the disabled part of
test_socket_ssl, and removed the Windows special-casing, in
revs 43734 (trunk) and 43735 (2.4 branch).
--
Comment By: Martin v. Löwis (loewis)
Date: 2006-04-08 05:18
Message:
Logged In: YES
user_id=21627
The problem is that WIN32 isn't initially defined.
WinSock2.h has this structure:
#if !defined(WIN32) && !defined(_WIN64)
#include <pshpack4.h>
#endif
#include
#if !defined(WIN32) && !defined(_WIN64)
#include <poppack.h>
#endif
Even though WIN32 is initially not defined, it is defined
at the end of WinSock2.h, so that the poppack.h is not
included. That leaves a pragma pack(push,4) on the pack
stack. I haven't traced where exactly WIN32 is defined,
but it probably comes from Ole2.h.
Fixed in 43731 and 43732. Not sure whether anything needs to
be done to the test suite.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 03:48
Message:
Logged In: YES
user_id=31435
Normal MS struct member alignment is definitely screwed up
inside _ssl.c, but still don't know how that happens.
sizeof this struct should be 16, but is reported as 12 when
the source is inside _ssl.c:
struct dummy {
int a;
double x;
};
(note that in the details in previous comments, the double
&Sock->sock_timeout was not 8-byte aligned in _ssl.c, but
was in socketmodule.c). I don't see any MS packing pragmas
in any of the OpenSSL .h files either.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 02:05
Message:
Logged In: YES
user_id=31435
As a sanity check on all those details, inside newPySSLObject()
*(double *)((char *)&Sock->sock_timeout + 4)
is in fact 30.0.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 01:25
Message:
Logged In: YES
user_id=31435
I did a data breakpoint on the 8 bytes in the sock_timeout
member, and it never triggered: nothing stores anything
there to change 30.0 to 0.0.
Instead, socketmodule.c and _ssl.c have different views of
where the members of a PySocketSockObject live. WRT
socketmodule.c sock_settimeout's `s`, and _ssl.c
newPySSLObject's `Sock` (which are the same object in the
test case), the debugger agrees about the addresses at which
these members live:
&s->_ob_next0x0096c3e8
&s->_ob_prev0x0096c3ec
&s->ob_refcnt 0x0096c3f0
&s->ob_type 0x0096c3f4
&s->sock_fd 0x0096c3f8
&s->sock_family 0x0096c3fc
&s->sock_type 0x0096c400
&s->sock_proto 0x0096c404
&s->sock_addr 0x0096c408
&s->errorhandler 0x0096c488
But there's a radical disconnect about where it thinks
sock_timeout lives:
&s->sock_timeout0x0096c490
&Sock->sock_timeout 0x0096c48c
Indeed,
printf("%d\n", sizeof(PySocketSockObject));
displays different results:
socketmodule.c: 176
_ssl.c: 172
I'm unclear about why. Doing
printf("%d\n", sizeof(sock_addr_t));
prints 128 in both modules, so there's not an obvious
difference there.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-01 21:08
Message:
Logged In: YES
user_id=31435
BTW, does anyone understand why this part of my first
comment was true?:
"""
check_socket_and_wait_for_timeout() takes the "else if
(s->sock_timeout == 0.0)" path and and returns
SOCKET_IS_NONBLOCKING.
"""
How did s->sock_timeout become 0? s.settimeout(30.0) was
called, and the same s was passed to socket.ssl(). I don't
understand this at all:
>>> s.connect(("gmail.org", 995))
>>> s.ge
[ python-Bugs-1462352 ] socket.ssl won't work together with socket.settimeout on Win
Bugs item #1462352, was opened at 2006-03-31 19:11
Message generated for change (Comment added) made by gbrandl
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1462352&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Windows
Group: Python 2.4
Status: Closed
Resolution: Fixed
Priority: 6
Submitted By: Georg Brandl (gbrandl)
Assigned to: Nobody/Anonymous (nobody)
Summary: socket.ssl won't work together with socket.settimeout on Win
Initial Comment:
Symptoms:
>>> import socket
>>> s = socket.socket()
>>> s.settimeout(30.0)
>>> s.connect(("gmail.org", 995))
>>> ss = socket.ssl(s)
Traceback (most recent call last):
File "", line 1, in ?
File "C:\python24\lib\socket.py", line 74, in ssl
return _realssl(sock, keyfile, certfile)
socket.sslerror: (2, 'The operation did not complete
(read)')
This does not occur on Unix, where
test_socket_ssl.test_timeout runs smoothly.
--
>Comment By: Georg Brandl (gbrandl)
Date: 2006-04-08 12:52
Message:
Logged In: YES
user_id=849994
Now one Windows buildbot turned red again since the timeout
didn't raise socket.timeout but socket.error... :-|
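A sketch of the kind of tolerance the test may need (purely illustrative;
the function name and the "timed out" string check are assumptions, not the
actual test_socket_ssl code): treat either exception as a timeout.
import socket

def handshake(host="gmail.org", port=995, timeout=30.0):
    s = socket.socket()
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return socket.ssl(s)
    except socket.timeout:
        print "timed out (socket.timeout)"
    except socket.error, exc:
        # the buildbot in question surfaced the timeout as a plain socket.error
        if "timed out" in str(exc):
            print "timed out (socket.error)"
        else:
            raise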
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 12:48
Message:
Logged In: YES
user_id=31435
Whoa -- good eye, Martin! Thank you. Looks like bugs all
over the place.
FYI, I later rehabilitated the disabled part of
test_socket_ssl, and removed the Windows special-casing, in
revs 43734 (trunk) and 43735 (2.4 branch).
--
Comment By: Martin v. Löwis (loewis)
Date: 2006-04-08 09:18
Message:
Logged In: YES
user_id=21627
The problem is that WIN32 isn't initially defined.
WinSock2.h has this structure:
#if !defined(WIN32) && !defined(_WIN64)
#include <pshpack4.h>
#endif
#include
#if !defined(WIN32) && !defined(_WIN64)
#include <poppack.h>
#endif
Even though WIN32 is initially not defined, it is defined
at the end of WinSock2.h, so that the poppack.h is not
included. That leaves a pragma pack(push,4) on the pack
stack. I haven't traced where exactly WIN32 is defined,
but it probably comes from Ole2.h.
Fixed in 43731 and 43732. Not sure whether anything needs to
be done to the test suite.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 07:48
Message:
Logged In: YES
user_id=31435
Normal MS struct member alignment is definitely screwed up
inside _ssl.c, but still don't know how that happens.
sizeof this struct should be 16, but is reported as 12 when
the source is inside _ssl.c:
struct dummy {
int a;
double x;
};
(note that in the details in previous comments, the double
&Sock->sock_timeout was not 8-byte aligned in _ssl.c, but
was in socketmodule.c). I don't see any MS packing pragmas
in any of the OpenSSL .h files either.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 06:05
Message:
Logged In: YES
user_id=31435
As a sanity check on all those details, inside newPySSLObject()
*(double *)((char *)&Sock->sock_timeout + 4)
is in fact 30.0.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 05:25
Message:
Logged In: YES
user_id=31435
I did a data breakpoint on the 8 bytes in the sock_timeout
member, and it never triggered: nothing stores anything
there to change 30.0 to 0.0.
Instead, socketmodule.c and _ssl.c have different views of
where the members of a PySocketSockObject live. WRT
socketmodule.c sock_settimeout's `s`, and _ssl.c
newPySSLObject's `Sock` (which are the same object in the
test case), the debugger agrees about the addresses at which
these members live:
&s->_ob_next0x0096c3e8
&s->_ob_prev0x0096c3ec
&s->ob_refcnt 0x0096c3f0
&s->ob_type 0x0096c3f4
&s->sock_fd 0x0096c3f8
&s->sock_family 0x0096c3fc
&s->sock_type 0x0096c400
&s->sock_proto 0x0096c404
&s->sock_addr 0x0096c408
&s->errorhandler 0x0096c488
But there's a radical disconnect about where it thinks
sock_timeout lives:
&s->sock_timeout0x0096c490
&Sock->sock_timeout 0x0096c48c
Indeed,
printf("%d\n", sizeof(PySocketSockObject));
displays different results:
socketmodule.c: 176
_ssl.c: 172
I'm unclear about why. Doing
printf("%d\n", sizeof(sock_addr_t));
prints 128 in both modules, so there's not an obvious
difference there.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-02 02:08
Message:
Logged In: YES
user_id=31435
BTW, does anyone understand why this part of my first
comment was true?:
"""
check_socket_and_wait_for_time
[ python-Bugs-1462352 ] socket.ssl won't work together with socket.settimeout on Win
Bugs item #1462352, was opened at 2006-03-31 14:11
Message generated for change (Comment added) made by tim_one
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1462352&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Windows
Group: Python 2.4
Status: Closed
Resolution: Fixed
Priority: 6
Submitted By: Georg Brandl (gbrandl)
>Assigned to: Trent Mick (tmick)
Summary: socket.ssl won't work together with socket.settimeout on Win
Initial Comment:
Symptoms:
>>> import socket
>>> s = socket.socket()
>>> s.settimeout(30.0)
>>> s.connect(("gmail.org", 995))
>>> ss = socket.ssl(s)
Traceback (most recent call last):
File "", line 1, in ?
File "C:\python24\lib\socket.py", line 74, in ssl
return _realssl(sock, keyfile, certfile)
socket.sslerror: (2, 'The operation did not complete
(read)')
This does not occur on Unix, where
test_socket_ssl.test_timeout runs smoothly.
--
>Comment By: Tim Peters (tim_one)
Date: 2006-04-08 09:00
Message:
Logged In: YES
user_id=31435
Yup, saw that, but I'd like input from Trent (it's his box):
why is it timing out at all? 30 seconds is a huge blob of
time, and none of the other buildbots are timing out. When
I disabled test_socket_ssl's test_timeout, _all_ buildbot
slaves were timing out (because gmail.org simply wasn't
responding to anyone at the time).
IOW, there may be a Win2K-specific bug in the Python
implementation here (note that Trent's is the only Win2K
slave we have).
--
Comment By: Georg Brandl (gbrandl)
Date: 2006-04-08 08:52
Message:
Logged In: YES
user_id=849994
Now one Windows buildbot turned red again since the timeout
didn't raise socket.timeout but socket.error... :-|
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 08:48
Message:
Logged In: YES
user_id=31435
Whoa -- good eye, Martin! Thank you. Looks like bugs all
over the place.
FYI, I later rehabilitated the disabled part of
test_socket_ssl, and removed the Windows special-casing, in
revs 43734 (trunk) and 43735 (2.4 branch).
--
Comment By: Martin v. Löwis (loewis)
Date: 2006-04-08 05:18
Message:
Logged In: YES
user_id=21627
The problem is that WIN32 isn't initially defined.
WinSock2.h has this structure:
#if !defined(WIN32) && !defined(_WIN64)
#include <pshpack4.h>
#endif
#include
#if !defined(WIN32) && !defined(_WIN64)
#include <poppack.h>
#endif
Even though WIN32 is initially not defined, it is defined
at the end of WinSock2.h, so that the poppack.h is not
included. That leaves a pragma pack(push,4) on the pack
stack. I haven't traced where exactly WIN32 is defined,
but it probably comes from Ole2.h.
Fixed in 43731 and 43732. Not sure whether anything needs to
be done to the test suite.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 03:48
Message:
Logged In: YES
user_id=31435
Normal MS struct member alignment is definitely screwed up
inside _ssl.c, but still don't know how that happens.
sizeof this struct should be 16, but is reported as 12 when
the source is inside _ssl.c:
struct dummy {
int a;
double x;
};
(note that in the details in previous comments, the double
&Sock->sock_timeout was not 8-byte aligned in _ssl.c, but
was in socketmodule.c). I don't see any MS packing pragmas
in any of the OpenSSL .h files either.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 02:05
Message:
Logged In: YES
user_id=31435
As a sanity check on all those details, inside newPySSLObject()
*(double *)((char *)&Sock->sock_timeout + 4)
is in fact 30.0.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 01:25
Message:
Logged In: YES
user_id=31435
I did a data breakpoint on the 8 bytes in the sock_timeout
member, and it never triggered: nothing stores anything
there to change 30.0 to 0.0.
Instead, socketmodule.c and _ssl.c have different views of
where the members of a PySocketSockObject live. WRT
socketmodule.c sock_settimeout's `s`, and _ssl.c
newPySSLObject's `Sock` (which are the same object in the
test case), the debugger agrees about the addresses at which
these members live:
&s->_ob_next0x0096c3e8
&s->_ob_prev0x0096c3ec
&s->ob_refcnt 0x0096c3f0
&s->ob_type 0x0096c3f4
&s->sock_fd 0x0096c3f8
&s->sock_family 0x0096c3fc
&s->sock_type 0x0096c400
&s->sock_proto 0x0096c404
&s->sock_addr 0x0096c408
&s->errorhandler 0x0096c488
But there's a radical disconnect about where it thinks
sock_timeout l
[ python-Feature Requests-1462486 ] Scripts invoked by -m should trim exceptions
Feature Requests item #1462486, was opened at 2006-04-01 10:23
Message generated for change (Comment added) made by ncoghlan
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1462486&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Tim Delaney (tcdelaney)
Assigned to: Nick Coghlan (ncoghlan)
Summary: Scripts invoked by -m should trim exceptions
Initial Comment:
Currently in 2.5, an exception thrown from a script invoked by -m
(runpy.run_module) will dump an exception like:
Traceback (most recent call last):
File "D:\Development\Python25\Lib\runpy.py", line 418, in run_module
filename, loader, alter_sys)
File "D:\Development\Python25\Lib\runpy.py", line 386, in _run_module_code
mod_name, mod_fname, mod_loader)
File "D:\Development\Python25\Lib\runpy.py", line 366, in _run_code
exec code in run_globals
File "D:\Development\modules\test25.py", line 53, in
raise GeneratorExit('body')
GeneratorExit: body
This should probably be trimmed to:
Traceback (most recent call last):
File "test25.py", line 53, in
raise GeneratorExit('body')
GeneratorExit: body
to match when a script is invoked by filename.
--
>Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-09 03:28
Message:
Logged In: YES
user_id=1038590
I can fix it so that the runpy module lines are only masked
out when the module is invoked implicitly via the -m switch
by giving the C code a private entry point
(_run_module_as_main) that catches exceptions and prints the
filtered traceback before doing sys.exit(-1).
I'll make sure to add some tests to test_cmd_line to verify
the updated behaviour.
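A rough sketch of the sort of frame filtering described (purely illustrative;
the function name and the endswith() test are assumptions, not the actual
_run_module_as_main code):
import sys
import traceback

def print_trimmed_traceback():
    exc_type, exc_value, tb = sys.exc_info()
    entries = traceback.extract_tb(tb)
    # drop the leading frames that belong to runpy itself
    while entries and entries[0][0].endswith("runpy.py"):
        del entries[0]
    print >> sys.stderr, "Traceback (most recent call last):"
    for line in traceback.format_list(entries):
        sys.stderr.write(line)
    for line in traceback.format_exception_only(exc_type, exc_value):
        sys.stderr.write(line)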
--
Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-07 21:12
Message:
Logged In: YES
user_id=1038590
I'd forgotten about SF's current "no email when assigned a
bug" feature. . .
I'm inclined to agree with Guido that it could be tricky to
get rid of these without also masking legitimate traceback
info for import errors (e.g. if the PEP 302 emulation
machinery blows up rather than returning None the way it is
meant to when it can't find a loader for the module).
OTOH, I don't like the current output for an import error,
either:
C:\>C:\python25\python.exe -m junk
Traceback (most recent call last):
File "C:\Python25\Lib\runpy.py", line 410, in run_module
raise ImportError("No module named " + mod_name)
ImportError: No module named junk
So I'll look into it - if it is suspected that runpy is at
fault for a problem with running a script, then there's two
easy ways to get the full traceback:
C:\>C:\python25\python.exe -m runpy junk
C:\>C:\python25\python.exe C:\Python25\Lib\runpy junk
--
Comment By: Guido van Rossum (gvanrossum)
Date: 2006-04-06 10:45
Message:
Logged In: YES
user_id=6380
I'm not so sure. Who looks at the top of the traceback
anyway? And it might hide clues about problems caused by
runpy.py.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1462486&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Feature Requests-1453341 ] sys.setatomicexecution - for critical sections
Feature Requests item #1453341, was opened at 2006-03-19 05:52
Message generated for change (Comment added) made by ncoghlan
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1453341&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
>Status: Closed
>Resolution: Rejected
Priority: 5
Submitted By: kxroberto (kxroberto)
Assigned to: Nobody/Anonymous (nobody)
Summary: sys.setatomicexecution - for critical sections
Initial Comment:
In order to keep threaded code uncomplicated (VHL) and to avoid massive use
of locks when introducing a few critical non-atomic sections, it should be
possible to put out that practical hammer ..
try:
    last_ci = sys.setcheckinterval(sys.maxint)
    critical_function()  # now runs atomically
finally:
    sys.setcheckinterval(last_ci)
# (sys.setcheckinterval assumed to return the last value)
.. by an official function/class (in sys or thread). Maybe:
==
atomic = sys.setatomicexecution(mode)
try:
    print "Executing critical section"
finally:
    sys.setatomicexecution(atomic)
There should be different modes/levels for blocking:
* 0=None
* 1=block only python execution in other threads
* 2=block signals
* 4/5/7/15=block threads at OS level (if OS supports)
* 8=(reserved: block future sub-/stackless switching inside current thread..)
see:
http://groups.google.de/group/comp.lang.python/browse_frm/thread/bf5461507803975e/3bd7dfa9422a1200
compare:
http://www.stackless.com/wiki/Tasklets#critical-sections
---
Also, Python could officially define its time atoms beyond CPU atoms in the
docs (that also defines the VHL usability of the language). Thus things like
single-element changes ... obj.var=x , d[key]=x , l.append(x) .pop() should
be guaranteed to work atomically/correctly FOR EVER. l[slice], d.keys()
.items(), .. questionable? If not guaranteed for the future, offer duplicates
for speed-critical key building blocks like l.slice_atomic(slice),
d.keys_atomic(), ... in order to make code compatible for the future.
---
Extra fun for blowing up python libs for people who don't want to learn that
try..finally all the time: copy/deepcopy/dump could maybe be enriched by
copy_atomic, deepcopy_atomic, dump_atomic - or just RuntimeError-tolerant
versions, deepcopy_save (no use of .iterxxx).
--
>Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-09 04:17
Message:
Logged In: YES
user_id=1038590
Raymond brought this idea up on python-dev at the time of the c.l.p
discussion - it was rejected on the basis that thread synchronisation tools
(Queues and Locks) are provided for a reason. Python-level access to the
Global Interpreter Lock is neither necessary nor desirable. Avoiding the
tools provided to permit threading to work correctly, and then finding that
threaded code doesn't work as desired, really shouldn't be surprising.
FWIW, Python 2.5 aims to make normal locking easier to use by permitting:
from __future__ import with_statement
from threading import Lock
sync_lock = Lock()
def my_func(*args, **kwds):
    with sync_lock:
        # Only one thread at a time can enter this section
        # regardless of IO or anything else
    # This section, on the other hand, is a free-for-all
If you genuinely have to avoid normal thread synchronisation primitives, you
can abuse (and I really do mean abuse) the interpreter's import lock for this
purpose:
imp.acquire_lock()
try:
    print 'critical section'
finally:
    imp.release_lock()
Or even:
@contextlib.contextmanager
def CriticalSection():
    imp.acquire_lock()
    try:
        yield
    finally:
        imp.release_lock()
with CriticalSection():
    print 'critical section'
--
Comment By: kxroberto (kxroberto)
Date: 2006-03-21 01:28
Message:
Logged In: YES
user_id=972995
... only PyEval_RestoreThread with the harder execution level in its tstate
--
Comment By: kxroberto (kxroberto)
Date: 2006-03-21 01:24
Message:
Logged In: YES
user_id=972995
Thus the GIL could simply have a harder state 2: "locked hard for
PyEval_AcquireThread/PyEval_AcquireLock/.."? Only PyEval_RestoreThread gets
the lock again after PyEval_SaveThread.
Robert
--
Comment By: Martin Gfeller (gfe)
Date: 2006-03-20 22:17
Message:
Logged In: YES
user_id=884167
- sys.setcheckinterval(sys.maxint) does not prevent thread switching when
doing IO, does it? There is no way that I know of to prevent thread switching
in this situation.
- When calling back into Python from C Code, there is no way to tell Python
[ python-Feature Requests-1453341 ] sys.setatomicexecution - for critical sections
Feature Requests item #1453341, was opened at 2006-03-19 05:52
Message generated for change (Comment added) made by ncoghlan
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1453341&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Interpreter Core
Group: None
Status: Closed
Resolution: Rejected
Priority: 5
Submitted By: kxroberto (kxroberto)
Assigned to: Nobody/Anonymous (nobody)
Summary: sys.setatomicexecution - for critical sections
Initial Comment:
In order to keep threaded code uncomplicated (VHL) and to avoid massive use
of locks when introducing a few critical non-atomic sections, it should be
possible to put out that practical hammer ..
try:
    last_ci = sys.setcheckinterval(sys.maxint)
    critical_function()  # now runs atomically
finally:
    sys.setcheckinterval(last_ci)
# (sys.setcheckinterval assumed to return the last value)
.. by an official function/class (in sys or thread). Maybe:
==
atomic = sys.setatomicexecution(mode)
try:
    print "Executing critical section"
finally:
    sys.setatomicexecution(atomic)
There should be different modes/levels for blocking:
* 0=None
* 1=block only python execution in other threads
* 2=block signals
* 4/5/7/15=block threads at OS level (if OS supports)
* 8=(reserved: block future sub-/stackless switching inside current thread..)
see:
http://groups.google.de/group/comp.lang.python/browse_frm/thread/bf5461507803975e/3bd7dfa9422a1200
compare:
http://www.stackless.com/wiki/Tasklets#critical-sections
---
Also, Python could officially define its time atoms beyond CPU atoms in the
docs (that also defines the VHL usability of the language). Thus things like
single-element changes ... obj.var=x , d[key]=x , l.append(x) .pop() should
be guaranteed to work atomically/correctly FOR EVER. l[slice], d.keys()
.items(), .. questionable? If not guaranteed for the future, offer duplicates
for speed-critical key building blocks like l.slice_atomic(slice),
d.keys_atomic(), ... in order to make code compatible for the future.
---
Extra fun for blowing up python libs for people who don't want to learn that
try..finally all the time: copy/deepcopy/dump could maybe be enriched by
copy_atomic, deepcopy_atomic, dump_atomic - or just RuntimeError-tolerant
versions, deepcopy_save (no use of .iterxxx).
--
>Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-09 04:34
Message:
Logged In: YES
user_id=1038590
On the other changes you suggest (which Raymond didn't bring up on
python-dev): Python can't formally define as atomic any operations that may
execute arbitrary Python code, as the interpreter cannot control what that
code may do. All of the examples you give are in that category.
Slowing down the common cases (unthreaded code and Queue-based threaded code)
by adding internal locking to every data structure is also considered highly
undesirable.
--
Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-09 04:17
Message:
Logged In: YES
user_id=1038590
Raymond brought this idea up on python-dev at the time of the c.l.p
discussion - it was rejected on the basis that thread synchronisation tools
(Queues and Locks) are provided for a reason. Python-level access to the
Global Interpreter Lock is neither necessary nor desirable. Avoiding the
tools provided to permit threading to work correctly, and then finding that
threaded code doesn't work as desired, really shouldn't be surprising.
FWIW, Python 2.5 aims to make normal locking easier to use by permitting:
from __future__ import with_statement
from threading import Lock
sync_lock = Lock()
def my_func(*args, **kwds):
    with sync_lock:
        # Only one thread at a time can enter this section
        # regardless of IO or anything else
    # This section, on the other hand, is a free-for-all
If you genuinely have to avoid normal thread synchronisation primitives, you
can abuse (and I really do mean abuse) the interpreter's import lock for this
purpose:
imp.acquire_lock()
try:
    print 'critical section'
finally:
    imp.release_lock()
Or even:
@contextlib.contextmanager
def CriticalSection():
    imp.acquire_lock()
    try:
        yield
    finally:
        imp.release_lock()
with CriticalSection():
    print 'critical section'
--
Comment By: kxroberto (kxroberto)
Date: 2006-03-21 01:28
Message:
Logged In: YES
user_id=972995
... only PyEval_RestoreThread with the harder execution level in its tstate
--
Comment By: kxroberto (kxroberto)
Date: 2006-03-21 01:24
Message:
Logged In: YES
user_id=972995
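For reference, a runnable version of the Lock-based approach Nick Coghlan
describes above (a small sketch under Python 2.5's with statement; all names
here are illustrative, not taken from the tracker item):
from __future__ import with_statement
import threading

sync_lock = threading.Lock()
counter = [0]

def bump(n):
    for _ in xrange(n):
        with sync_lock:
            # only one thread at a time executes this block
            counter[0] += 1

threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print counter[0]  # 400000 -- the lock keeps the increments from interleaving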
[ python-Feature Requests-1462486 ] Scripts invoked by -m should trim exceptions
Feature Requests item #1462486, was opened at 2006-03-31 19:23
Message generated for change (Comment added) made by tim_one
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1462486&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Tim Delaney (tcdelaney)
Assigned to: Nick Coghlan (ncoghlan)
Summary: Scripts invoked by -m should trim exceptions
Initial Comment:
Currently in 2.5, an exception thrown from a script invoked by -m
(runpy.run_module) will dump an exception like:
Traceback (most recent call last):
File "D:\Development\Python25\Lib\runpy.py", line 418, in run_module
filename, loader, alter_sys)
File "D:\Development\Python25\Lib\runpy.py", line 386, in _run_module_code
mod_name, mod_fname, mod_loader)
File "D:\Development\Python25\Lib\runpy.py", line 366, in _run_code
exec code in run_globals
File "D:\Development\modules\test25.py", line 53, in
raise GeneratorExit('body')
GeneratorExit: body
This should probably be trimmed to:
Traceback (most recent call last):
File "test25.py", line 53, in
raise GeneratorExit('body')
GeneratorExit: body
to match when a script is invoked by filename.
--
>Comment By: Tim Peters (tim_one)
Date: 2006-04-08 16:23
Message:
Logged In: YES
user_id=31435
I see no reason to bother with this -- it adds complexity,
and I don't see any real benefit. What's bad about having
runpy show up in the traceback, given that code in runpy.py
actually _is_ in the call stack? Why try to hide the truth
of it?
--
Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-08 13:28
Message:
Logged In: YES
user_id=1038590
I can fix it so that the runpy module lines are only masked
out when the module is invoked implicitly via the -m switch
by giving the C code a private entry point
(_run_module_as_main) that catches exceptions and prints the
filtered traceback before doing sys.exit(-1).
I'll make sure to add some tests to test_cmd_line to verify
the updated behaviour.
--
Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-07 07:12
Message:
Logged In: YES
user_id=1038590
I'd forgotten about SF's current "no email when assigned a
bug" feature. . .
I'm inclined to agree with Guido that it could be tricky to
get rid of these without also masking legitimate traceback
info for import errors (e.g. if the PEP 302 emulation
machinery blows up rather than returning None the way it is
meant to when it can't find a loader for the module).
OTOH, I don't like the current output for an import error,
either:
C:\>C:\python25\python.exe -m junk
Traceback (most recent call last):
File "C:\Python25\Lib\runpy.py", line 410, in run_module
raise ImportError("No module named " + mod_name)
ImportError: No module named junk
So I'll look into it - if it is suspected that runpy is at
fault for a problem with running a script, then there's two
easy ways to get the full traceback:
C:\>C:\python25\python.exe -m runpy junk
C:\>C:\python25\python.exe C:\Python25\Lib\runpy junk
--
Comment By: Guido van Rossum (gvanrossum)
Date: 2006-04-05 20:45
Message:
Logged In: YES
user_id=6380
I'm not so sure. Who looks at the top of the traceback
anyway? And it might hide clues about problems caused by
runpy.py.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1462486&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Feature Requests-1462486 ] Scripts invoked by -m should trim exceptions
Feature Requests item #1462486, was opened at 2006-03-31 19:23
Message generated for change (Comment added) made by gvanrossum
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1462486&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
Status: Open
Resolution: None
Priority: 5
Submitted By: Tim Delaney (tcdelaney)
Assigned to: Nick Coghlan (ncoghlan)
Summary: Scripts invoked by -m should trim exceptions
Initial Comment:
Currently in 2.5, an exception thrown from a script invoked by -m
(runpy.run_module) will dump an exception like:
Traceback (most recent call last):
File "D:\Development\Python25\Lib\runpy.py", line 418, in run_module
filename, loader, alter_sys)
File "D:\Development\Python25\Lib\runpy.py", line 386, in _run_module_code
mod_name, mod_fname, mod_loader)
File "D:\Development\Python25\Lib\runpy.py", line 366, in _run_code
exec code in run_globals
File "D:\Development\modules\test25.py", line 53, in
raise GeneratorExit('body')
GeneratorExit: body
This should probably be trimmed to:
Traceback (most recent call last):
File "test25.py", line 53, in
raise GeneratorExit('body')
GeneratorExit: body
to match when a script is invoked by filename.
--
>Comment By: Guido van Rossum (gvanrossum)
Date: 2006-04-08 18:16
Message:
Logged In: YES
user_id=6380
I'm with Tim. Please close w/o action.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-08 16:23
Message:
Logged In: YES
user_id=31435
I see no reason to bother with this -- it adds complexity,
and I don't see any real benefit. What's bad about having
runpy show up in the traceback, given that code in runpy.py
actually _is_ in the call stack? Why try to hide the truth
of it?
--
Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-08 13:28
Message:
Logged In: YES
user_id=1038590
I can fix it so that the runpy module lines are only masked
out when the module is invoked implicitly via the -m switch
by giving the C code a private entry point
(_run_module_as_main) that catches exceptions and prints the
filtered traceback before doing sys.exit(-1).
I'll make sure to add some tests to test_cmd_line to verify
the updated behaviour.
--
Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-07 07:12
Message:
Logged In: YES
user_id=1038590
I'd forgotten about SF's current "no email when assigned a
bug" feature. . .
I'm inclined to agree with Guido that it could be tricky to
get rid of these without also masking legitimate traceback
info for import errors (e.g. if the PEP 302 emulation
machinery blows up rather than returning None the way it is
meant to when it can't find a loader for the module).
OTOH, I don't like the current output for an import error,
either:
C:\>C:\python25\python.exe -m junk
Traceback (most recent call last):
File "C:\Python25\Lib\runpy.py", line 410, in run_module
raise ImportError("No module named " + mod_name)
ImportError: No module named junk
So I'll look into it - if it is suspected that runpy is at
fault for a problem with running a script, then there's two
easy ways to get the full traceback:
C:\>C:\python25\python.exe -m runpy junk
C:\>C:\python25\python.exe C:\Python25\Lib\runpy junk
--
Comment By: Guido van Rossum (gvanrossum)
Date: 2006-04-05 20:45
Message:
Logged In: YES
user_id=6380
I'm not so sure. Who looks at the top of the traceback
anyway? And it might hide clues about problems caused by
runpy.py.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1462486&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-892939 ] Race condition in popen2
Bugs item #892939, was opened at 02/08/04 09:31
Message generated for change (Comment added) made by sf-robot
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=892939&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: Python 2.3
>Status: Closed
Resolution: Fixed
Priority: 6
Submitted By: Ken McNeil (kenmcneil)
Assigned to: Neal Norwitz (nnorwitz)
Summary: Race condition in popen2
Initial Comment:
The "fix" for bug #761888 created a race condition in Popen3. The interaction
between wait and _cleanup is the root of the problem.
def wait(self):
    """Wait for and return the exit status of the child process."""
    if self.sts < 0:
        pid, sts = os.waitpid(self.pid, 0)
        if pid == self.pid:
            self.sts = sts
    return self.sts
def _cleanup():
    for inst in _active[:]:
        inst.poll()
In wait, between the check of self.sts and the call to os.waitpid, a new
Popen3 object can be created in another thread, which will trigger a call to
_cleanup. Since the call to _cleanup polls the process, when the thread
running wait starts back up again it will try to poll the process using
os.waitpid, which will throw an OSError because os.waitpid has already been
called for the PID indirectly in _cleanup.
A workaround is for the caller of wait to catch the OSError and check the sts
field; if sts is non-negative then the OSError is most likely because of this
problem and can be ignored. However, sts is undocumented and should probably
stay that way.
My suggestion is that the patch that added _active, _cleanup, and all be
removed and a more suitable mechanism for fixing bug #761888 be found. As has
been said in the discussion of bug #761888, magically closing FDs is not a
"good thing". It seems to me that surrounding the call to os.fork with a
try/except, and closing the pipes in the except, would be suitable, but I
don't know how this would interact with a failed call to fork, therefore I
won't provide a patch.
--
>Comment By: SourceForge Robot (sf-robot)
Date: 04/08/06 19:20
Message:
Logged In: YES
user_id=1312539
This Tracker item was closed automatically by the system. It was previously
set to a Pending status, and the original submitter did not respond within 14
days (the time period specified by the administrator of this Tracker).
--
Comment By: Neal Norwitz (nnorwitz)
Date: 03/24/06 20:56
Message:
Logged In: YES
user_id=33168
Martin and I worked out a patch which should solve this problem and it was
committed. For more info see bug 1183780. If this does not solve the problem,
change the status from pending to open. Otherwise, this bug report should
close automatically in a couple of weeks since it's pending.
--
Comment By: Neal Norwitz (nnorwitz)
Date: 03/23/06 00:47
Message:
Logged In: YES
user_id=33168
I believe this is basically a duplicate of 1183780. There is a patch attached
there. Can you verify if it fixes your problem?
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=892939&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
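A sketch of the workaround described in the popen2 report above, spelled out
as the caller of wait() would write it (illustrative only; the 'true' child
command and the reliance on the undocumented sts attribute mirror the report,
not a recommended API):
import popen2

proc = popen2.Popen3("true")
try:
    status = proc.wait()
except OSError:
    # _cleanup() in another thread may already have reaped the child;
    # in that case the exit status was stashed in the undocumented sts field.
    if proc.sts >= 0:
        status = proc.sts
    else:
        raise
print status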
[ python-Bugs-998246 ] Popen3.poll race condition
Bugs item #998246, was opened at 07/26/04 12:14
Message generated for change (Comment added) made by sf-robot
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=998246&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
>Status: Closed
Resolution: Fixed
Priority: 5
Submitted By: Tres Seaver (tseaver)
Assigned to: Neal Norwitz (nnorwitz)
Summary: Popen3.poll race condition
Initial Comment:
poll() swallows all IOErrors, including ENOCHILD; if the child process exits
before poll is called, then an application which loops on poll() will never
exit.
I am working around this (against Python 2.3.3) via the following:
try:
    pid, status = os.waitpid(proc.pid, os.WNOHANG)
except os.error, e:
    if e.errno == 10:  # ENOCHILD
        result = 0
    else:
        raise
else:
    if pid == proc.pid:
        result = status
where 'proc' is an instance of Popen3.
--
>Comment By: SourceForge Robot (sf-robot)
Date: 04/08/06 19:20
Message:
Logged In: YES
user_id=1312539
This Tracker item was closed automatically by the system. It was previously
set to a Pending status, and the original submitter did not respond within 14
days (the time period specified by the administrator of this Tracker).
--
Comment By: Neal Norwitz (nnorwitz)
Date: 03/24/06 20:59
Message:
Logged In: YES
user_id=33168
Tres, Martin and I worked out a patch that we think solves the problem. It's
checked in. See the other bug report for more info. If you don't believe the
patch will fix your problem, change the status from pending to open.
Otherwise, this bug should automatically close in a couple of weeks.
--
Comment By: Tres Seaver (tseaver)
Date: 03/23/06 04:21
Message:
Logged In: YES
user_id=127625
1183780 is indeed a similar bug, although he reports it against Popen4 rather
than Popen3. His patch needs to be modified to re-raise errors which are not
ENOCHILD, however. I no longer have access to either the application or the
machine where I found this issue, and hence can't verify that the patch fixes
the code which triggered the problem.
--
Comment By: Neal Norwitz (nnorwitz)
Date: 03/23/06 00:45
Message:
Logged In: YES
user_id=33168
I believe this is basically a duplicate of 1183780. There is a patch attached
there. Can you verify if it fixes your problem?
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=998246&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
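A cleaned-up sketch of the submitter's workaround above, using errno.ECHILD
in place of the hard-coded 10 (the report's "ENOCHILD" appears to mean
ECHILD; the function name is illustrative and proc is assumed to be a
popen2.Popen3 instance):
import errno
import os

def poll_once(proc):
    try:
        pid, status = os.waitpid(proc.pid, os.WNOHANG)
    except OSError, e:
        if e.errno == errno.ECHILD:
            # the child has already been reaped elsewhere; treat as exited
            return 0
        raise
    if pid == proc.pid:
        return status
    return -1  # still running, no exit status available yet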
[ python-Feature Requests-1462486 ] Scripts invoked by -m should trim exceptions
Feature Requests item #1462486, was opened at 2006-04-01 10:23
Message generated for change (Settings changed) made by ncoghlan
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1462486&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Python Library
Group: None
>Status: Closed
>Resolution: Rejected
Priority: 5
Submitted By: Tim Delaney (tcdelaney)
Assigned to: Nick Coghlan (ncoghlan)
Summary: Scripts invoked by -m should trim exceptions
Initial Comment:
Currently in 2.5, an exception thrown from a script invoked by -m
(runpy.run_module) will dump an exception like:
Traceback (most recent call last):
File "D:\Development\Python25\Lib\runpy.py", line 418, in run_module
filename, loader, alter_sys)
File "D:\Development\Python25\Lib\runpy.py", line 386, in _run_module_code
mod_name, mod_fname, mod_loader)
File "D:\Development\Python25\Lib\runpy.py", line 366, in _run_code
exec code in run_globals
File "D:\Development\modules\test25.py", line 53, in
raise GeneratorExit('body')
GeneratorExit: body
This should probably be trimmed to:
Traceback (most recent call last):
File "test25.py", line 53, in
raise GeneratorExit('body')
GeneratorExit: body
to match when a script is invoked by filename.
--
>Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-09 12:44
Message:
Logged In: YES
user_id=1038590
Done.
--
Comment By: Guido van Rossum (gvanrossum)
Date: 2006-04-09 08:16
Message:
Logged In: YES
user_id=6380
I'm with Tim. Please close w/o action.
--
Comment By: Tim Peters (tim_one)
Date: 2006-04-09 06:23
Message:
Logged In: YES
user_id=31435
I see no reason to bother with this -- it adds complexity,
and I don't see any real benefit. What's bad about having
runpy show up in the traceback, given that code in runpy.py
actually _is_ in the call stack? Why try to hide the truth
of it?
--
Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-09 03:28
Message:
Logged In: YES
user_id=1038590
I can fix it so that the runpy module lines are only masked
out when the module is invoked implicitly via the -m switch
by giving the C code a private entry point
(_run_module_as_main) that catches exceptions and prints the
filtered traceback before doing sys.exit(-1).
I'll make sure to add some tests to test_cmd_line to verify
the updated behaviour.
--
Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-07 21:12
Message:
Logged In: YES
user_id=1038590
I'd forgotten about SF's current "no email when assigned a
bug" feature. . .
I'm inclined to agree with Guido that it could be tricky to
get rid of these without also masking legitimate traceback
info for import errors (e.g. if the PEP 302 emulation
machinery blows up rather than returning None the way it is
meant to when it can't find a loader for the module).
OTOH, I don't like the current output for an import error,
either:
C:\>C:\python25\python.exe -m junk
Traceback (most recent call last):
File "C:\Python25\Lib\runpy.py", line 410, in run_module
raise ImportError("No module named " + mod_name)
ImportError: No module named junk
So I'll look into it - if it is suspected that runpy is at
fault for a problem with running a script, then there's two
easy ways to get the full traceback:
C:\>C:\python25\python.exe -m runpy junk
C:\>C:\python25\python.exe C:\Python25\Lib\runpy junk
--
Comment By: Guido van Rossum (gvanrossum)
Date: 2006-04-06 10:45
Message:
Logged In: YES
user_id=6380
I'm not so sure. Who looks at the top of the traceback
anyway? And it might hide clues about problems caused by
runpy.py.
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=355470&aid=1462486&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[ python-Bugs-1466641 ] Bogus SyntaxError in listcomp
Bugs item #1466641, was opened at 2006-04-08 08:51
Message generated for change (Comment added) made by ncoghlan
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1466641&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Parser/Compiler
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Tim Peters (tim_one)
Assigned to: Nobody/Anonymous (nobody)
Summary: Bogus SyntaxError in listcomp
Initial Comment:
The attached syn.py gives a SyntaxError in 2.5a1 and trunk. Works fine in
earlier Pythons. Whittled down from real-life Zope3 source.
def d(dir):
    return [fn for fn in os.listdir(dir) if fn if fn]
--
>Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-09 13:32
Message:
Logged In: YES
user_id=1038590
Is including two if clauses with a single for clause really meant to be
legal?
*goes and looks at language reference*
Wow. What a strange way to write "and". . .
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1466641&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
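For what it's worth, stacked if clauses behave exactly like a single
"and"-combined condition, which is the rewrite Nick alludes to. A small
sketch with a condition that actually filters something (function names are
illustrative; on the affected 2.5a1 the first form is wrongly rejected at
compile time, while on other versions both return the same list):
import os

def keep_python_sources(dir):
    # two stacked "if" clauses in one comprehension...
    return [fn for fn in os.listdir(dir)
            if fn.endswith(".py") if not fn.startswith("_")]

def keep_python_sources_and(dir):
    # ...are equivalent to one condition joined with "and"
    return [fn for fn in os.listdir(dir)
            if fn.endswith(".py") and not fn.startswith("_")]

print keep_python_sources(".") == keep_python_sources_and(".")  # True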
[ python-Bugs-1466641 ] Bogus SyntaxError in listcomp
Bugs item #1466641, was opened at 2006-04-07 18:51
Message generated for change (Comment added) made by tim_one
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1466641&group_id=5470
Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: Parser/Compiler
Group: Python 2.5
Status: Open
Resolution: None
Priority: 5
Submitted By: Tim Peters (tim_one)
Assigned to: Nobody/Anonymous (nobody)
Summary: Bogus SyntaxError in listcomp
Initial Comment:
The attached syn.py gives a SyntaxError in 2.5a1 and trunk. Works fine in
earlier Pythons. Whittled down from real-life Zope3 source.
def d(dir):
    return [fn for fn in os.listdir(dir) if fn if fn]
--
>Comment By: Tim Peters (tim_one)
Date: 2006-04-09 00:07
Message:
Logged In: YES
user_id=31435
The whittled-down version looks ridiculous, but the original wasn't quite
such an affront to beauty :-) It's really no stranger than allowing pure "if"
statements to nest, and it would be more painful to contort the grammar to
disallow it (I haven't looked at the 2.5 parser, but it was very surprising
to me that it didn't allow it!).
--
Comment By: Nick Coghlan (ncoghlan)
Date: 2006-04-08 23:32
Message:
Logged In: YES
user_id=1038590
Is including two if clauses with a single for clause really meant to be
legal?
*goes and looks at language reference*
Wow. What a strange way to write "and". . .
--
You can respond by visiting:
https://sourceforge.net/tracker/?func=detail&atid=105470&aid=1466641&group_id=5470
___
Python-bugs-list mailing list
Unsubscribe:
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
