about Frame.__init__(self)
I am just a newbie to Python. Now I have found that almost every program using Tkinter has lines like this:

class xxx(xxx):
    """x"""
    def __init__(self):
        """x"""
        Frame.__init__(self)
        ...

The line "Frame.__init__(self)" puzzles me. Why is it used like this? Can someone explain it?
regards,
yang
--
http://mail.python.org/mailman/listinfo/python-list
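For reference, here is a minimal, self-contained sketch of the pattern being asked about: a class that subclasses Tkinter's Frame and calls the base-class initializer so the underlying widget actually gets created before any subclass-specific setup runs. The class name, label text, and layout below are illustrative only, not taken from the post:

from Tkinter import Frame, Label

class MyApp(Frame):
    def __init__(self, master=None):
        # Frame.__init__ creates the real Tk widget; without this call the
        # instance would not be initialized as a widget at all.
        Frame.__init__(self, master)
        self.pack()
        # Subclass-specific setup comes after the base initializer.
        Label(self, text="hello").pack()

if __name__ == "__main__":
    app = MyApp()
    app.mainloop()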
Assertion failure on hotshot.stats.load()
Note: I realize hotshot is obsoleted by cProfile, but 2.5 breaks several packages I depend on. I'm using Python 2.4.3.

I'm getting an AssertionError on "assert not self._stack" when calling hotshot.stats.load() on my app's hotshot profile. The app consistently causes hotshot to generate such a problematic profile, but I have no idea what's causing it. Anybody know what's wrong?

Here's the profile:
http://www.filefactory.com/file/76fdbd/

Potentially relevant bugs:
http://sourceforge.net/tracker/index.php?func=detail&aid=900092&group_id=5470&atid=105470
http://sourceforge.net/tracker/index.php?func=detail&aid=1019882&group_id=5470&atid=105470

Thanks in advance for any help.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Assertion failure on hotshot.stats.load()
I fell back onto the old profile module, but got the following error
when trying to use zope.interface. I am now without any way to profile
my application.
Traceback (most recent call last):
File "/home/yang/local/bin/profile.py", line 611, in ?
run('execfile(%r)' % (sys.argv[0],), options.outfile, options.sort)
File "/home/yang/local/bin/profile.py", line 72, in run
prof = prof.run(statement)
File "/home/yang/local/bin/profile.py", line 448, in run
return self.runctx(cmd, dict, dict)
File "/home/yang/local/bin/profile.py", line 454, in runctx
exec cmd in globals, locals
File "", line 1, in ?
File
"/.automount/nms.lcs.mit.edu/export/home/yang/proj/cartel/trunk/icedb/src/frontend/icedb-central.py",
line 5, in ?
from icedb import *
File "/home/yang/local/lib/python2.4/site-packages/icedb/__init__.py",
line 4, in ?
import cafnet
File "/home/yang/local/lib/python2.4/site-packages/cafnet/__init__.py",
line 269, in ?
class Cnl( object ):
File "/opt/zope/lib/python/zope/interface/advice.py", line 132, in advise
return callback(newClass)
File "/opt/zope/lib/python/zope/interface/declarations.py", line
485, in _implements_advice
classImplements(cls, *interfaces)
File "/opt/zope/lib/python/zope/interface/declarations.py", line
462, in classImplements
spec.declared += tuple(_normalizeargs(interfaces))
File "/opt/zope/lib/python/zope/interface/declarations.py", line
1373, in _normalizeargs
_normalizeargs(v, output)
File "/opt/zope/lib/python/zope/interface/declarations.py", line
1372, in _normalizeargs
for v in sequence:
TypeError: Error when calling the metaclass bases
iteration over non-sequence
On 10/27/06, Yang fer7msb02-at-sneakemail.com |python|
<...> wrote:
> Note: I realize hotshot is obsoleted by cProfile, but 2.5 breaks
> several packages I depend on. I'm using Python 2.4.3.
>
> I'm getting an AssertionError on "assert not self._stack" when calling
> hotshot.stats.load() on my app's hotshot profile. The app consistently
> causes hotshot to generate such a problematic profile, but I have no
> idea what's causing it. Anybody know what's wrong?
>
> Here's the profile:
>
> http://www.filefactory.com/file/76fdbd/
>
> Potentially relevant bugs:
>
> http://sourceforge.net/tracker/index.php?func=detail&aid=900092&group_id=5470&atid=105470
> http://sourceforge.net/tracker/index.php?func=detail&aid=1019882&group_id=5470&atid=105470
>
> Thanks in advance for any help.
> --
> http://mail.python.org/mailman/listinfo/python-list
>
--
http://mail.python.org/mailman/listinfo/python-list
Re: Assertion failure on hotshot.stats.load()
I created a simple test case showing the zope.interface problem. Just
pass the following file to profile.py (i.e. the 'profile' module in
your Python standard library, run as a standalone app). The culprit
*seems* to be Twisted. Any ideas? Thanks in advance.
#!/usr/bin/env python
"""
error i get:
Traceback (most recent call last):
File "/home/yang/local/bin/profile.py", line 611, in ?
run('execfile(%r)' % (sys.argv[0],), options.outfile, options.sort)
File "/home/yang/local/bin/profile.py", line 72, in run
prof = prof.run(statement)
File "/home/yang/local/bin/profile.py", line 448, in run
return self.runctx(cmd, dict, dict)
File "/home/yang/local/bin/profile.py", line 454, in runctx
exec cmd in globals, locals
File "", line 1, in ?
File "/home/yang/proj/assorted/sandbox/trunk/src/py/profile.py", line 6, in ?
class Cnl( object ):
File "/opt/zope/lib/python/zope/interface/advice.py", line 132, in advise
return callback(newClass)
File "/opt/zope/lib/python/zope/interface/declarations.py", line
485, in _implements_advice
classImplements(cls, *interfaces)
File "/opt/zope/lib/python/zope/interface/declarations.py", line
462, in classImplements
spec.declared += tuple(_normalizeargs(interfaces))
File "/opt/zope/lib/python/zope/interface/declarations.py", line
1373, in _normalizeargs
_normalizeargs(v, output)
File "/opt/zope/lib/python/zope/interface/declarations.py", line
1372, in _normalizeargs
for v in sequence:
TypeError: Error when calling the metaclass bases
iteration over non-sequence
"""
from zope.interface import *
from twisted.internet import interfaces
class Cnl( object ):
implements( interfaces.IPushProducer )
On 10/27/06, Yang fer7msb02-at-sneakemail.com |python|
<...> wrote:
> I fell back onto the old profile module, but got the following error
> when trying to use zope.interface. I am now without any way to profile
> my application.
>
> Traceback (most recent call last):
> File "/home/yang/local/bin/profile.py", line 611, in ?
> run('execfile(%r)' % (sys.argv[0],), options.outfile, options.sort)
> File "/home/yang/local/bin/profile.py", line 72, in run
> prof = prof.run(statement)
> File "/home/yang/local/bin/profile.py", line 448, in run
> return self.runctx(cmd, dict, dict)
> File "/home/yang/local/bin/profile.py", line 454, in runctx
> exec cmd in globals, locals
> File "", line 1, in ?
> File
> "/.automount/nms.lcs.mit.edu/export/home/yang/proj/cartel/trunk/icedb/src/frontend/icedb-central.py",
> line 5, in ?
> from icedb import *
> File "/home/yang/local/lib/python2.4/site-packages/icedb/__init__.py",
> line 4, in ?
> import cafnet
> File "/home/yang/local/lib/python2.4/site-packages/cafnet/__init__.py",
> line 269, in ?
> class Cnl( object ):
> File "/opt/zope/lib/python/zope/interface/advice.py", line 132, in advise
> return callback(newClass)
> File "/opt/zope/lib/python/zope/interface/declarations.py", line
> 485, in _implements_advice
> classImplements(cls, *interfaces)
> File "/opt/zope/lib/python/zope/interface/declarations.py", line
> 462, in classImplements
> spec.declared += tuple(_normalizeargs(interfaces))
> File "/opt/zope/lib/python/zope/interface/declarations.py", line
> 1373, in _normalizeargs
> _normalizeargs(v, output)
> File "/opt/zope/lib/python/zope/interface/declarations.py", line
> 1372, in _normalizeargs
> for v in sequence:
> TypeError: Error when calling the metaclass bases
> iteration over non-sequence
>
> On 10/27/06, Yang fer7msb02-at-sneakemail.com |python|
> <...> wrote:
> > Note: I realize hotshot is obsoleted by cProfile, but 2.5 breaks
> > several packages I depend on. I'm using Python 2.4.3.
> >
> > I'm getting an AssertionError on "assert not self._stack" when calling
> > hotshot.stats.load() on my app's hotshot profile. The app consistently
> > causes hotshot to generate such a problematic profile, but I have no
> > idea what's causing it. Anybody know what's wrong?
> >
> > Here's the profile:
> >
> > http://www.filefactory.com/file/76fdbd/
> >
> > Potentially relevant bugs:
> >
> > http://sourceforge.net/tracker/index.php?func=detail&aid=900092&group_id=5470&atid=105470
> > http://sourceforge.net/tracker/index.php?func=detail&aid=1019882&group_id=5470&atid=105470
> >
> > Thanks in advance for any help.
> > --
> > http://mail.python.org/mailman/listinfo/python-list
> >
> --
> http://mail.python.org/mailman/listinfo/python-list
>
--
http://mail.python.org/mailman/listinfo/python-list
Closing socket file descriptors
Hi, I'm experiencing a problem when trying to close the file descriptor for a socket, creating another socket, and then closing the file descriptor for that second socket. I can't tell if my issue is about Python or POSIX.

In the following, the first time through, everything works. On the second connection, though, the same file descriptor as the first connection may be re-used, but for some reason, trying to do os.read/close on that file descriptor will cause an error.

Thanks in advance for any help on why this problem is occurring and/or how to resolve it (preferably, I can continue to use file descriptors instead of resorting to socket.recv/socket.close).

def handle( s ):
    print id(s), s
    print os.read( s.fileno(), 4096 )  # s.recv(4096)
    os.close( s.fileno() )  # s.close()

svr = socket.socket()
svr.bind( ( 'localhost', 8003 ) )
svr.listen( 1 )
while True:
    print 'accepting'
    s,_ = svr.accept()
    handle( s )

# Traceback (most recent call last):
#   File "./normal_server_close_error.py", line 25, in <module>
#     handle( s )
#   File "./normal_server_close_error.py", line 13, in handle
#     print os.read( s.fileno(), 4096 )  # s.recv(4096)
# OSError: [Errno 9] Bad file descriptor
--
http://mail.python.org/mailman/listinfo/python-list
Re: Closing socket file descriptors
Hi, thanks for your answer. Should I just use that object's close() method? Is it safe to assume that objects that have fileno() also have close()? (Statically typed interfaces would come in handy now.)

I'm writing a simple asynchronous I/O framework (for learning purposes - I'm aware of the myriad such frameworks for Python), and I'm writing a wrapper around objects that can be passed into select.select(). Since select() requires objects that have fileno's, that's the only requirement I place on the wrapped object's interface, and thus why I've been using FD-based operations:

class handle( object ):
    '''handle( underlying_file ) -> handle object

    A wrapper for a nonblocking file descriptor/handle.
    Wraps any object with a fileno() method.'''

    __slots__ = [ '_real' ]  # _real is the underlying file handle. Must support fileno().

    def __init__( self, real_handle ):
        self._real = real_handle

    def fileno( self ):
        return self._real.fileno()

    def close( self ):
        close( self._real.fileno() )  # seems that this must now become: self._real.close()

    def wait_readable( self ):
        ...

"js " <[EMAIL PROTECTED]> wrote in news:[EMAIL PROTECTED]:

> Hello, Yang.
>
> You're not supposed to use os.open there.
> See the doc at http://docs.python.org/lib/os-fd-ops.html
>
> Is there any reason you want to use os.close?
>
> On 20 May 2007 04:26:12 GMT, Yang <[EMAIL PROTECTED]> wrote:
>> Hi, I'm experiencing a problem when trying to close the file descriptor
>> for a socket, creating another socket, and then closing the file
>> descriptor for that second socket. I can't tell if my issue is about
>> Python or POSIX.
>>
>> In the following, the first time through, everything works. On the
>> second connection, though, the same file descriptor as the first
>> connection may be re-used, but for some reason, trying to do
>> os.read/close on that file descriptor will cause an error.
>>
>> Thanks in advance for any help on why this problem is occurring and/or
>> how to resolve it (preferably, I can continue to use file descriptors
>> instead of resorting to socket.recv/socket.close).
>>
>> def handle( s ):
>>     print id(s), s
>>     print os.read( s.fileno(), 4096 )  # s.recv(4096)
>>     os.close( s.fileno() )  # s.close()
>>
>> svr = socket.socket()
>> svr.bind( ( 'localhost', 8003 ) )
>> svr.listen( 1 )
>> while True:
>>     print 'accepting'
>>     s,_ = svr.accept()
>>     handle( s )
>>
>> # Traceback (most recent call last):
>> #   File "./normal_server_close_error.py", line 25, in <module>
>> #     handle( s )
>> #   File "./normal_server_close_error.py", line 13, in handle
>> #     print os.read( s.fileno(), 4096 )  # s.recv(4096)
>> # OSError: [Errno 9] Bad file descriptor
>> --
>> http://mail.python.org/mailman/listinfo/python-list
>>
--
http://mail.python.org/mailman/listinfo/python-list
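As a point of comparison, here is a sketch of the same server loop using the socket object's own recv() and close() instead of os.read()/os.close() on the raw descriptor. This is an illustrative rewrite, not code from the thread; the likely benefit is that the socket object and the kernel descriptor stay in sync, so a later implicit close of the socket object cannot accidentally close a descriptor number that has already been reused by a newer connection:

import socket

def handle(s):
    print s.recv(4096)   # instead of os.read(s.fileno(), 4096)
    s.close()            # instead of os.close(s.fileno())

svr = socket.socket()
svr.bind(('localhost', 8003))
svr.listen(1)
while True:
    print 'accepting'
    s, _ = svr.accept()
    handle(s)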
Determining cause of termination
Hi all, my Python (2.4) program crashed after a couple days of running (this'll be a pain to debug, I know). I think it just...stopped running. My log files didn't show any (unusual) exceptions (I use the logging module to files and stdout/stderr piped to files). I have a feeling that the python interpreter crashed - perhaps I consumed a lot of memory and so CPython died. I also don't use ctypes or anything - I do import some twisted modules but I don't really use them. There was no core dump and I don't know how to elicit one. How can I find out as much as possible what happened? For starters, I'm now checking exit status and periodically running 'top' to monitor memory consumption, but are there any other ideas? Thanks in advance, Yang -- http://mail.python.org/mailman/listinfo/python-list
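For the "periodically monitor memory consumption" idea, one low-tech option is to have the program log its own peak memory use through the same logging setup, using the standard resource module. This is only a sketch under the assumption of a POSIX system; the logger name and interval are arbitrary:

import logging
import resource
import threading

def log_memory(interval=60):
    usage = resource.getrusage(resource.RUSAGE_SELF)
    # On Linux, ru_maxrss is the peak resident set size in kilobytes.
    logging.getLogger('memwatch').info('peak RSS: %s kB', usage.ru_maxrss)
    timer = threading.Timer(interval, log_memory, [interval])
    timer.setDaemon(True)   # don't keep the process alive just for this
    timer.start()

# Enabling core dumps (so a hard interpreter crash leaves something to inspect)
# can also be done from Python, assuming the OS allows it:
# resource.setrlimit(resource.RLIMIT_CORE,
#                    (resource.RLIM_INFINITY, resource.RLIM_INFINITY))

log_memory()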
os.system() not returning
I have a Python program that does the following (pseudo-code):
while True:
    is_downloading = True
    use ftplib to download any new files from a server
    is_downloading = False
    os.system('make')
    sleep(60)
To deal with intermittent connectivity/failures (this is running on a
mobile device), /etc/ppp/ip-up.local (a program that is run whenever
Internet connectivity is established) issues SIGUSR1 to the python
process, which handles it as such:
def handle_sigusr1(sig, bt):
    global is_downloading, debug
    if debug: print 'got SIGUSR1'; sys.stdout.flush()
    if is_downloading:
        args = ['python'] + sys.argv
        if debug: print 'spawning', args; sys.stdout.flush()
        pid = os.spawnvp(os.P_NOWAIT, 'python', args)
        if debug: print 'pid', pid; sys.stdout.flush()
        os.kill(os.getpid(), SIGTERM)
signal(SIGUSR1, handle_sigusr1)
(I start a new process since I didn't want to get into the business of
killing threads.)
However, os.system() occasionally does not return. It's just:
...
os.system('make -C ' + localpath + ' -f ' + makefiles[-1])
if debug: print 'sleeping'
...
and the stdout log always ends in "make: Leaving directory `/dldir'"
(make finishes). The python process is still running, but doesn't
respond to SIGUSR1, so perhaps it's still in the syscall. (SIGTERM
kills it, though; no SIGKILL needed.)
I separately tested that (a) python can be interrupted by SIGUSR1
while in blocking socket IO, and (b) SIGUSR1 doesn't screw up python
while in a os.system('make') (the signal gets handled after the call
returns).
Has anybody seen this kind of behavior, or might know what's going on?
Thanks in advance for any help.
FWIW, here is other info:
[EMAIL PROTECTED]:~$ uname -a
Linux soekris4801 2.6.20-soekris #2 Sun Nov 4 19:07:00 EST 2007 i586 unknown
[EMAIL PROTECTED]:~$ python -V
Python 2.5.1
Here are the commands the Makefile executes (repeatedly) - all
standard bash/command-line tools:
make: Entering directory `/dldir'
ls -1 /dldir/setup*.bash | tail -1 | xargs bash -x
+ set -o errexit
+ set -o nounset
+ mkdir -p /tftproot/
++ ls -1 /dldir/iii-03.tgz
++ sed 's/.*iii-\([0-9]*\)\.tgz.*/\1/'
++ tail -1
+ avail=03
++ cat /tftproot/iii-version
+ installed=03
+ '[' -z 03 ']'
+ (( installed < avail ))
make: Leaving directory `/dldir'
--
http://mail.python.org/mailman/listinfo/python-list
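One thing that may help narrow this down: instead of blocking inside the single os.system() C call, the make step could be launched with subprocess (in the standard library since Python 2.4) and polled from Python, so pending signal handlers such as the one for SIGUSR1 get a chance to run between checks. This is only a sketch, reusing the same make invocation as above:

import subprocess
import time

def run_make(localpath, makefile):
    proc = subprocess.Popen(['make', '-C', localpath, '-f', makefile])
    # Poll rather than block in one long syscall; Python-level signal
    # handlers get to run between the short sleeps.
    while proc.poll() is None:
        time.sleep(1)
    return proc.returncode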
error when using Pmw.BLT
Hello,

I tried to follow the following code demonstrating the use of Pmw.BLT:

>>> from Tkinter import *
>>> import Pmw
>>> master = Tk()
>>> g = Pmw.Blt.Graph( master )

I got the following error message after I typed the last line:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python26\lib\site-packages\Pmw\Pmw_1_3\lib\PmwBlt.py", line 260, in __init__
    Tkinter.Widget.__init__(self, master, _graphCommand, cnf, kw)
  File "C:\Python26\lib\lib-tk\Tkinter.py", line 1932, in __init__
    (widgetName, self._w) + extra + self._options(cnf))
_tkinter.TclError: invalid command name "::blt::graph"

I am a newbie in Python. Does anybody know what causes this error? I am running Python 2.6.3 under Windows XP.

Thanks for your help!!
--
http://mail.python.org/mailman/listinfo/python-list
Re: error when using Pmw.BLT
On Oct 21, 12:13 pm, Robert Kern wrote: > On 2009-10-21 10:50 AM, Yang Yang wrote: > > > > > Hello, > > > I tried to follow the following code demonstrating the use of Pmw.BLT: > > > >>> from Tkinter import * > > >>> import Pmw > > >>> master = Tk() > > >>> g = Pmw.Blt.Graph( master ) > > > I got the following error message after I typed the last line: > > > Traceback (most recent call last): > > File "", line 1, in > > File "C:\Python26\lib\site- > > packages\Pmw\Pmw_1_3\lib\PmwBlt.py", line > > 260, in _ > > _init__ > > Tkinter.Widget.__init__(self, master, _graphCommand, cnf, kw) > > File "C:\Python26\lib\lib-tk\Tkinter.py", line 1932, in __init__ > > (widgetName, self._w) + extra + self._options(cnf)) > > _tkinter.TclError: invalid command name "::blt::graph > > > I am a newbie in python. Does anybody know what causes this error? I > > am running python 2.6.3 under windows xp. > > You need to install BLT for Tcl. It is not a standard part of Tcl, and it is > not > included in the version of Tcl that comes bundled with Python. > > -- > Robert Kern > > "I have come to believe that the whole world is an enigma, a harmless enigma > that is made terrible by our own mad attempt to interpret it as though it > had > an underlying truth." > -- Umberto Eco Thanks, Robert! Here is another question on this. I am running Python 2.6.3 which uses Tcl 8.5. I could not find the BLT binary source for Tcl 8.5. The latest version is for Tcl 8.4. How could I install BLT? For example, can I change the default Tcl version that Python has access to to some older version? Thanks! -- http://mail.python.org/mailman/listinfo/python-list
problems on installing PyGTK in Windows XP
Python 2.6.3 is installed on my Windows XP through the binary file
provided by Python.org. Then I followed the steps described here:
http://faq.pygtk.org/index.py?req=show&file=faq21.001.htp
to install PyGTK. However, I still get the following error:
>>> import pygtk
>>> pygtk.require('2.0')
>>> import gtk
Traceback (most recent call last):
File "", line 1, in
File "C:\Python26\lib\site-packages\gtk-2.0\gtk\__init__.py", line
48, in
from gtk import _gtk
ImportError: DLL load failed: The specified module could not be found.
I googled this problem and noticed that there are quite a few people
having the same problem. But I am unable to find anything which helps
me solve this problem.
Could you please give me some suggestions?
Thanks!
--
http://mail.python.org/mailman/listinfo/python-list
multiprocessing Pool.imap broken?
I've tried both the multiprocessing included in the python2.6 Ubuntu package (__version__ says 0.70a1) and the latest from PyPI (2.6.2.1). In both cases I don't know how to use imap correctly - it causes the entire interpreter to stop responding to ctrl-C's. Any hints? Thanks in advance.

$ python
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing as mp
>>> mp.Pool(1).map(abs, range(3))
[0, 1, 2]
>>> list(mp.Pool(1).imap(abs, range(3)))
^C^C^C^C^\Quit
--
http://mail.python.org/mailman/listinfo/python-list
Re: multiprocessing Pool.imap broken?
On Tue, Mar 29, 2011 at 6:44 PM, Yang Zhang wrote: > I've tried both the multiprocessing included in the python2.6 Ubuntu > package (__version__ says 0.70a1) and the latest from PyPI (2.6.2.1). > In both cases I don't know how to use imap correctly - it causes the > entire interpreter to stop responding to ctrl-C's. Any hints? Thanks > in advance. > > $ python > Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) > [GCC 4.4.3] on linux2 > Type "help", "copyright", "credits" or "license" for more information. >>>> import multiprocessing as mp >>>> mp.Pool(1).map(abs, range(3)) > [0, 1, 2] >>>> list(mp.Pool(1).imap(abs, range(3))) > ^C^C^C^C^\Quit > In case anyone jumps on this, this isn't an issue with running from the console: $ cat /tmp/go3.py import multiprocessing as mp print mp.Pool(1).map(abs, range(3)) print list(mp.Pool(1).imap(abs, range(3))) $ python /tmp/go3.py [0, 1, 2] ^C^C^C^C^C^\Quit (I've actually never seen the behavior described in the corresponding Note at the top of the multiprocessing documentation.) -- http://mail.python.org/mailman/listinfo/python-list
Re: multiprocessing Pool.imap broken?
The problem was that Pool shuts down from its finalizer: http://stackoverflow.com/questions/5481104/multiprocessing-pool-imap-broken/5481610#5481610 On Wed, Mar 30, 2011 at 5:59 AM, eryksun () wrote: > On Tuesday, March 29, 2011 9:44:21 PM UTC-4, Yang Zhang wrote: >> I've tried both the multiprocessing included in the python2.6 Ubuntu >> package (__version__ says 0.70a1) and the latest from PyPI (2.6.2.1). >> In both cases I don't know how to use imap correctly - it causes the >> entire interpreter to stop responding to ctrl-C's. Any hints? Thanks >> in advance. >> >> $ python >> Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) >> [GCC 4.4.3] on linux2 >> Type "help", "copyright", "credits" or "license" for more information. >> >>> import multiprocessing as mp >> >>> mp.Pool(1).map(abs, range(3)) >> [0, 1, 2] >> >>> list(mp.Pool(1).imap(abs, range(3))) >> ^C^C^C^C^\Quit > > It works fine for me on Win32 Python 2.7.1 with multiprocessing 0.70a1. So > it's probably an issue with the implementation on Linux. > > Python 2.7.1 (r271:86832, Nov 27 2010, 18:30:46) > [MSC v.1500 32 bit (Intel)] on win32 > Type "help", "copyright", "credits" or "license" for more information. >>>> import multiprocessing as mp >>>> list(mp.Pool(1).imap(abs, range(3))) > [0, 1, 2] >>>> mp.__version__ > '0.70a1' > -- > http://mail.python.org/mailman/listinfo/python-list > -- Yang Zhang http://yz.mit.edu/ -- http://mail.python.org/mailman/listinfo/python-list
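To make the explanation in the linked answer concrete: the hang comes from creating the Pool as a temporary, so it can be garbage-collected (and therefore shut down by its finalizer) while imap's lazy results are still being consumed. Keeping a reference to the pool for the lifetime of the iteration avoids it. A minimal sketch, not code from the thread itself:

import multiprocessing as mp

if __name__ == '__main__':
    pool = mp.Pool(1)                        # keep the pool alive in a named variable
    try:
        print list(pool.imap(abs, range(3)))  # [0, 1, 2]
    finally:
        pool.close()
        pool.join()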
Re: multiprocessing Pool.imap broken?
My self-reply tried to preempt your suggestion :) On Wed, Mar 30, 2011 at 12:12 AM, [email protected] wrote: > Yang, > > My guess is that you are running into a problem using multiprocessing with > the interpreter. The documentation states that Pool may not work correctly > in this case. >> >> Note: Functionality within this package requires that the __main__ method >> be importable by the children. This is covered in Programming guidelines >> however it is worth pointing out here. This means that some examples, such >> as the multiprocessing.Pool examples will not work in the interactive >> interpreter. > > Hope this helps, > > Kyle > > > On Tue, Mar 29, 2011 at 6:51 PM, Yang Zhang wrote: >> >> On Tue, Mar 29, 2011 at 6:44 PM, Yang Zhang >> wrote: >> > I've tried both the multiprocessing included in the python2.6 Ubuntu >> > package (__version__ says 0.70a1) and the latest from PyPI (2.6.2.1). >> > In both cases I don't know how to use imap correctly - it causes the >> > entire interpreter to stop responding to ctrl-C's. Any hints? Thanks >> > in advance. >> > >> > $ python >> > Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41) >> > [GCC 4.4.3] on linux2 >> > Type "help", "copyright", "credits" or "license" for more information. >> >>>> import multiprocessing as mp >> >>>> mp.Pool(1).map(abs, range(3)) >> > [0, 1, 2] >> >>>> list(mp.Pool(1).imap(abs, range(3))) >> > ^C^C^C^C^\Quit >> > >> >> In case anyone jumps on this, this isn't an issue with running from the >> console: >> >> $ cat /tmp/go3.py >> import multiprocessing as mp >> print mp.Pool(1).map(abs, range(3)) >> print list(mp.Pool(1).imap(abs, range(3))) >> >> $ python /tmp/go3.py >> [0, 1, 2] >> ^C^C^C^C^C^\Quit >> >> (I've actually never seen the behavior described in the corresponding >> Note at the top of the multiprocessing documentation.) >> -- >> http://mail.python.org/mailman/listinfo/python-list > > > -- > http://mail.python.org/mailman/listinfo/python-list > > -- Yang Zhang http://yz.mit.edu/ -- http://mail.python.org/mailman/listinfo/python-list
Conversion: execfile --> exec
The Python 2.x command is as follows:
---
info = {}
execfile(join('chaco', '__init__.py'), info)
--
But execfile has been removed in python 3.x.
So my problem is how to convert the above to a 3.x based command?
thanks very much
--
https://mail.python.org/mailman/listinfo/python-list
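For what it's worth, the usual Python 3 replacement is to read the file and pass its contents to exec(), compiling with the real filename so tracebacks stay useful. A sketch of the conversion for the snippet above, assuming join is os.path.join as in the original:

from os.path import join

info = {}
path = join('chaco', '__init__.py')
with open(path) as f:
    # compile() keeps the real filename in tracebacks; exec() runs the code
    # with `info` as its globals, just like execfile(path, info) did.
    exec(compile(f.read(), path, 'exec'), info)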
CPython compiled failed in macOS
version: macOS 10.14.4, Apple LLVM version 10.0.1 (clang-1001.0.46.4). I cloned the CPython source code from GitHub then compiled it which used to work quite well. However, I messed up my terminal a few days ago for installing gdb. Now when I try to compile the latest CPython source code with ./configure --with-pydebug && make -j I always get the error messages: ld: warning: ld: warning: ignoring file libpython3.8d.a, file was built for archive which is not the architecture being linked (x86_64): libpython3.8d.aignoring file libpython3.8d.a, file was built for archive which is not the architecture being linked (x86_64): libpython3.8d.a Undefined symbols for architecture x86_64: "__Py_UnixMain", referenced from: _main in python.o ld: symbol(s) not found for architecture x86_64Undefined symbols for architecture x86_64: "_PyEval_InitThreads", referenced from: _test_repeated_init_and_subinterpreters in _testembed.o "_PyEval_ReleaseThread", referenced from: _test_repeated_init_and_subinterpreters in _testembed.o "_PyEval_RestoreThread", referenced from: _test_repeated_init_and_subinterpreters in _testembed.o _test_bpo20891 in _testembed.o "_PyEval_SaveThread", referenced from: _test_bpo20891 in _testembed.o "_PyGILState_Check", referenced from: _bpo20891_thread in _testembed.o "_PyGILState_Ensure", referenced from: _test_repeated_init_and_subinterpreters in _testembed.o _bpo20891_thread in _testembed.o "_PyGILState_Release", referenced from: _test_repeated_init_and_subinterpreters in _testembed.o _bpo20891_thread in _testembed.o "_PyInterpreterState_GetID", referenced from: _print_subinterp in _testembed.o "_PyMem_RawFree", referenced from: _test_pre_initialization_api in _testembed.o "_PyRun_SimpleStringFlags", referenced from: _test_pre_initialization_api in _testembed.o _test_pre_initialization_sys_options in _testembed.o _test_init_main in _testembed.o _check_stdio_details in _testembed.o _print_subinterp in _testembed.o _dump_config in _testembed.o "_PySys_AddWarnOption", referenced from: _test_pre_initialization_sys_options in _testembed.o "_PySys_AddXOption", referenced from: _test_pre_initialization_sys_options in _testembed.o "_PySys_ResetWarnOptions", referenced from: _test_pre_initialization_sys_options in _testembed.o "_PyThreadState_Get", referenced from: _test_repeated_init_and_subinterpreters in _testembed.o _print_subinterp in _testembed.o "_PyThreadState_Swap", referenced from: _test_repeated_init_and_subinterpreters in _testembed.o "_PyThread_acquire_lock", referenced from: _test_bpo20891 in _testembed.o "_PyThread_allocate_lock", referenced from: _test_bpo20891 in _testembed.o "_PyThread_exit_thread", referenced from: _bpo20891_thread in _testembed.o "_PyThread_free_lock", referenced from: _test_bpo20891 in _testembed.o "_PyThread_release_lock", referenced from: _bpo20891_thread in _testembed.o "_PyThread_start_new_thread", referenced from: _test_bpo20891 in _testembed.o "_Py_BytesWarningFlag", referenced from: _test_init_global_config in _testembed.o _test_init_from_config in _testembed.o _set_all_global_config_variables in _testembed.o "_Py_DebugFlag", referenced from: _set_all_global_config_variables in _testembed.o "_Py_DecodeLocale", referenced from: _test_pre_initialization_api in _testembed.o "_Py_DontWriteBytecodeFlag", referenced from: _test_init_global_config in _testembed.o _test_init_from_config in _testembed.o _set_all_global_config_variables in _testembed.o _check_init_python_config in _testembed.o "_Py_EndInterpreter", referenced from: 
_test_repeated_init_and_subinterpreters in _testembed.o "_Py_Finalize", referenced from: _test_forced_io_encoding in _testembed.o _test_repeated_init_and_subinterpreters in _testembed.o _test_pre_initialization_api in _testembed.o _test_pre_initialization_sys_options in _testembed.o _test_initialize_twice in _testembed.o _test_initialize_pymain in _testembed.o _test_init_default_config in _testembed.o ... "_Py_FrozenFlag", referenced from: _test_init_global_config in _testembed.o _test_init_from_config in _testembed.o _set_all_global_config_variables in _testembed.o _check_init_python_config in _testembed.o "_Py_IgnoreEnvironmentFlag", referenced from: _test_init_env in _testembed.o _test_init_env_dev_mode in _testembed.o _test_init_env_dev_mode_alloc in _testembed.o _set_all_global_config_variables in _testembed.o _check_init_python_config in _testembed.o "_Py_Initialize", referenced from: _test_forced_io_encoding in _testembed.o _test_pre_initialization_api in _testembed.o _test_initialize_twice in _testembed.o _test
Duplicate function in thread_pthread.h
While trying to understand the threading code, I found that the thread_pthread.h file has some duplicate functions, like `PyThread_free_lock`, `PyThread_release_lock`, `PyThread_acquire_lock_timed`. IIUC, C doesn't support function overloading. So why do we have functions with the same name and args?
--
https://mail.python.org/mailman/listinfo/python-list
write function call _io_BufferedWriter_write_impl twice?
My script looks like this:
f = open('myfile', 'a+b')
f.write(b'abcde')
I also added a `printf` statement in the _io_BufferedWriter_write_impl
function:

static PyObject *
_io_BufferedWriter_write_impl(buffered *self, Py_buffer *buffer)
/*[clinic end generated code: output=7f8d1365759bfc6b input=dd87dd85fc7f8850]*/
{
    printf("call write_impl\n");
    PyObject *res = NULL;
    Py_ssize_t written, avail, remaining;
    Py_off_t offset;
    ...

After I compiled and ran my script, I found that _io_BufferedWriter_write_impl
had been called twice, while I expected it to be called only once. The second
time it changed self->pos to an unexpected value too.
--
https://mail.python.org/mailman/listinfo/python-list
Questions about the IO modules and C-api
I have some questions about the IO modules.
1. My script:
f = open('myfile', 'a+b')
f.close()
I added a printf statement at the beginning of _io_open_impl,
and the output is:
_io_open
_io_open
_io_open
_io_open
_io_open
Why has this function been called more than once? I expected this
function to issue the `open()` system call only once.
2. I'm not familiar with C. How does a C-API call like
PyObject_CallMethodObjArgs(self->raw, _PyIO_str_write, memobj, NULL)
work?
I guess this function will finally call the `write()` system call, but I
don't know why it would work. I found that `self->raw` is an empty PyObject and
`_PyIO_str_write` is a global variable which is NULL. Why does an empty
PyObject have a write method? Why don't we just use the `write()` system call
directly?
Thank you so much :D
Best,
Windson
--
https://mail.python.org/mailman/listinfo/python-list
Understand workflow about reading and writing files in Python
I'm trying to understand the workflow of how Python reads/writes files with a buffer. I drew a diagram for it. I would appreciate it if someone could review the diagram :D
[image: 屏幕快照 2019-06-15 下午12.50.57.png]
--
https://mail.python.org/mailman/listinfo/python-list
Understand workflow about reading and writing files in Python
I'm trying to understand the workflow of how Python reads/writes data with a buffer. I would appreciate it if someone could review it.

### Read n data
1. If the data is already in the buffer, return the data.
2. If the data is not in the buffer:
   1. copy all the current data from the buffer
   2. create a new buffer object, fill the new buffer with raw read, which reads data from disk
   3. concat the data in the old buffer and the new buffer
   4. return the data

### Write n data
1. If the data is small enough to fit into the buffer, write the data to the buffer.
2. If the data can't fit into the buffer:
   1. flush the data in the buffer
      1. If it succeeds:
         1. create a new buffer object
         2. fill the new buffer with the data returned from raw write
      2. If it fails:
         1. shift the buffer to make room for writing data to the buffer
         2. buffer as much of the writing data as possible (may raise BlockingIOError)
   2. return the data
--
https://mail.python.org/mailman/listinfo/python-list
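Here is a rough Python-level sketch of the read path described above. It is only an illustration of the buffering idea for review purposes, not the actual C implementation in Modules/_io/bufferedio.c; names like raw_read and buf_size are invented for the example:

class SketchBufferedReader(object):
    def __init__(self, raw_read, buf_size=8192):
        self.raw_read = raw_read      # callable that reads up to N bytes from "disk"
        self.buf_size = buf_size
        self.buffer = b""

    def read(self, n):
        # 1. serve entirely from the buffer when possible
        if n <= len(self.buffer):
            data, self.buffer = self.buffer[:n], self.buffer[n:]
            return data
        # 2. otherwise take what the buffer has, then refill from the raw source
        chunks = [self.buffer]
        self.buffer = b""
        needed = n - len(chunks[0])
        while needed > 0:
            chunk = self.raw_read(max(needed, self.buf_size))
            if not chunk:             # EOF
                break
            if len(chunk) > needed:   # keep the surplus buffered for next time
                self.buffer = chunk[needed:]
                chunk = chunk[:needed]
            chunks.append(chunk)
            needed -= len(chunk)
        return b"".join(chunks)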
Fwd: Understand workflow about reading and writing files in Python
Thank you so much for your review, DL Neil, it really helps :D. However,
there are some parts that still confuse me; I replied below.
DL Neil wrote on Wed, Jun 19, 2019 at 2:03 PM:
> I've not gone 'back' to refer to any ComSc theory on buffer-management.
> Perhaps you might benefit from such?
>
I just took a crash course on it, so I want to know if I understand the
details correctly :D
> I like your use of the word "shift", so I'll continue to use it.
>
> There are three separate units of data to consider - each of which could
> be called a "buffer". To avoid confusing (myself) I'll only call the
> 'middle one' that:
> 1 the unit of data 'coming' from the data-source
> 2 the "buffer" you are implementing
> 3 the unit of data 'going' out to a data-destination.
>
> Just to make it clear, when we use `f.write('abc')` in python, (1) means
'abc', (2) means the buffer handle by Python (by default 8kb), (2) means
the file *f* we are writing to, right?
1 and 3 may be dictated to you, eg hardware or file specifications, code
> requirements, etc.
>
> So, data is shifted into the (2) buffer in a unit-size decided by (1) -
> in most use-cases each incoming unit will be the same size, but remember
> that the last 'unit' may/not be full-size. Similarly, data shifted out
> from the (2) buffer to (3).
>
> The size of (1) is likely not that of (3) - otherwise why use a
> "buffer"? The size of (2) must be larger than (1) and larger than (2) -
> for reasons already illustrated.
>
Is this a typo? (2) larger than (1) larger than (2)?
>
> I recall learning how to use buffers with a series of hand-drawn block
> diagrams. Recommend you try similarly!
>
>
> Now, let's add a few critiques, as requested (interposed below):-
>
>
> On 19/06/19 3:53 PM, Windson Yang wrote:t
> > I'm trying to understand the workflow of how Python read/writes data with
> > buffer. I will be appreciated if someone can review it.
> >
> > ### Read n data
>
> - may need more than one read operation if the size of (3) "demands"
> more data than the size of (1)/one "read".
>
Looks like the size of one read() depends on
https://github.com/python/cpython/blob/master/Modules/_io/bufferedio.c#L1655
?
>
> > 1. If the data already in the buffer, return data
>
> - this a data-transfer of size (3)
>
> For extra credit/an unnecessary complication (but probable speed-up!):
> * if the data-remaining is less than size (3) consider a read-ahead
> mechanism
>
> > 2. If the data not in the buffer:
>
> - if buffer's data-len < size (3)
>
> > 1. copy all the current data from the buffer
>
> * if "buffer" is my (2), then no-op
>
I don't understand your point here: when we read data, we would copy some
data from the current Python buffer, right? (
https://github.com/python/cpython/blob/master/Modules/_io/bufferedio.c#L1638),
we use `out` (which points to res) to store the data here.
>
> > 2. create a new buffer object, fill the new buffer with raw read
> which
> > read data from disk.
>
> * this becomes: perform read operation and append incoming data (size
> (1)) to "buffer" - hence why "buffer" is larger than (1), by definition.
> NB if size (1) is smaller than size (3), multiple read operations may be
> necessary. Thus a read-loop!?
>
> Yes, you are right, here is a while loop (
https://github.com/python/cpython/blob/master/Modules/_io/bufferedio.c#L1652
)
>
> > 3. concat the data in the old buffer and new buffer.
>
> = now no-op. Hopefully the description of 'three buffers' removes this
> confusion of/between buffers.
>
> I don't get it. When we call the function like seek(0) then read(1000),
we can still use the data from buffer from python, right?
>
> > 4. return the data
>
> * make the above steps into a while-loop and there won't be a separate
> step here (it is the existing step 1!)
>
>
> * build all of the above into a function/method, so that the 'mainline'
> only has to say 'give me data'!
>
>
> > ### Write n data
> > 1. If data small enough to fill into the buffer, write data to the buffer
>
> =yes, the data coming from source (1), which in this case is 'your' code
> may/not be sufficient to fill the output size (3). So, load it into the
> "buffer" (2).
>
> > 2. If data can't fill into the buffer
> > 1. flush the data in the buffer
>
> =This statement seems to suggest that if there is already some data in
> the buffer, it will b
Re: Understand workflow about reading and writing files in Python
When you said "C-runtime buffered I/O", are you talking about Standard I/O in C (FILE * object)? AFAIN, In CPython, we didn't use Standard I/O, right? Dennis Lee Bieber 于2019年6月25日周二 上午12:48写道: > On Mon, 24 Jun 2019 15:18:26 +1200, DL Neil > declaimed the following: > > > > > >However, the OpSys may have read considerably more data, depending upon > >the device(s) involved, the application, etc; eg if we ask for 2 bytes > >the operating system will read a much larger block (or applicable unit) > >of data from a disk drive. > > > > Depending upon implementation, there could be layers of buffers > involved... > > Python application requests, say, 50 bytes using a "buffered I/O" > file > object. That file object may invoke a C-runtime buffered I/O call that > requests whatever the C-runtime buffer-size is -- say a 512 byte sector. > That request goes to a low-level device driver for a file system/device > that does I/O in 4KB clusters. So the first read results in the OS reading > 4KB into a buffer, and passing 512 bytes to the C-call, which then returns > 50 bytes to the Python layer. > > The second read for 50 bytes is satisfied from the remaining bytes > in > the C-runtime sector buffer. The 11th read of 50 bytes will get 12 bytes > from the sector, and then the C-runtime has to request the next sector from > the OS, which is satisfied from the file system cluster buffer. After the > 8th sector is processed, the next request results in the OS going to disk > for the next cluster. > > > -- > Wulfraed Dennis Lee Bieber AF6VN > [email protected] > http://wlfraed.microdiversity.freeddns.org/ > > -- > https://mail.python.org/mailman/listinfo/python-list > -- https://mail.python.org/mailman/listinfo/python-list
Re: Understand workflow about reading and writing files in Python
DL Neil 于2019年6月24日周一 上午11:18写道: > Yes, better to reply to list - others may 'jump in'... > > > On 20/06/19 5:37 PM, Windson Yang wrote: > > Thank you so much for you review DL Neil, it really helps :D. However, > > there are some parts still confused me, I replyed as below. > > It's not a particularly easy topic... > > > > DL Neil > <mailto:[email protected]>> 于2019年6月19日周三 下午2:03写道: > > > > I've not gone 'back' to refer to any ComSc theory on > buffer-management. > > Perhaps you might benefit from such? > > > > I just take a crash course on it so I want to know if I understand the > > details correctly :D > > ...there are so many ways one can mess-up! > > > > I like your use of the word "shift", so I'll continue to use it. > > > > There are three separate units of data to consider - each of which > > could > > be called a "buffer". To avoid confusing (myself) I'll only call the > > 'middle one' that: > > 1 the unit of data 'coming' from the data-source > > 2 the "buffer" you are implementing > > 3 the unit of data 'going' out to a data-destination. > > > > Just to make it clear, when we use `f.write('abc')` in python, (1) means > > 'abc', (2) means the buffer handle by Python (by default 8kb), (2) means > > the file *f* we are writing to, right? > > Sorry, this is my typo, (3) means the file *f* we are writing to, right? > No! (sorry) f.write() is an output operation, thus nr3. > > "f" is not a "buffer handle" but a "file handle" or more accurately a > "file object". > > When we: > > one_input = f.read( NRbytes ) > > (ignoring EOF/short file and other exceptions) that many bytes will > 'appear' in our program labelled as "one_input". > > However, the OpSys may have read considerably more data, depending upon > the device(s) involved, the application, etc; eg if we ask for 2 bytes > the operating system will read a much larger block (or applicable unit) > of data from a disk drive. > > The same applies in reverse, with f.write( NRbytes/byte-object ), until > we flush or close the file. > > Those situations account for nr1 and nr3. In the usual case, we have no > control over the size of these buffers - and it is best not to meddle! > > I agreed with you. Hence:- > > > 1 and 3 may be dictated to you, eg hardware or file specifications, > > code > > requirements, etc. > > > > So, data is shifted into the (2) buffer in a unit-size decided by > (1) - > > in most use-cases each incoming unit will be the same size, but > > remember > > that the last 'unit' may/not be full-size. Similarly, data shifted > out > > from the (2) buffer to (3). > > > > The size of (1) is likely not that of (3) - otherwise why use a > > "buffer"? The size of (2) must be larger than (1) and larger than > (2) - > > for reasons already illustrated. > > > > Is this a typo? (2) larger than (1) larger than (2)? > > Correct - well spotted! nr2 > nr1 and nr2 > nr3 > When we run 'f.write(100', I understand why nr2 (by defaut 8kb) > nr1 (100), but I'm not sure why nr2 > nr3 (file object) here? > > > > I recall learning how to use buffers with a series of hand-drawn > block > > diagrams. Recommend you try similarly! > > Try this! > > > > Now, let's add a few critiques, as requested (interposed below):- > > > > > > On 19/06/19 3:53 PM, Windson Yang wrote:t > > > I'm trying to understand the workflow of how Python read/writes > > data with > > > buffer. I will be appreciated if someone can review it. 
> > > > > > ### Read n data > > > > - may need more than one read operation if the size of (3) "demands" > > more data than the size of (1)/one "read". > > > > > > Looks like the size of len of one read() depends on > > > https://github.com/python/cpython/blob/master/Modules/_io/bufferedio.c#L1655 > ? > > > You decide how many bytes should be read. That's how much will be > transferred from the OpSys' I/O into the Python program's space. With > the major exception, that if there is no (more) data available, it is > defined as an exception (EOF = end of file) or if there are fewer bytes > of data
fopen() and open() in cpython
After my investigation, I found Since Python maintains its own buffer when read/write files, the build-in python open() function will call the open() system call instead of calling standard io fopen() for caching. So when we read/write a file in Python, it would not call fopen(), fopen() only use for Python itself but not for python user. Am I correct? -- https://mail.python.org/mailman/listinfo/python-list
Re: fopen() and open() in cpython
Thank you so much for the answer, now it makes sense :D eryk sun 于2019年8月15日周四 上午12:27写道: > On 8/13/19, Windson Yang wrote: > > After my investigation, I found Since Python maintains its own buffer > when > > read/write files, the build-in python open() function will call the > open() > > system call instead of calling standard io fopen() for caching. So when > we > > read/write a file in Python, it would not call fopen(), fopen() only use > > for Python itself but not for python user. Am I correct? > > Python 2 I/O wraps C FILE streams (i.e. fopen, fclose, fread, fwrite, > fgets). Python 3 has its own I/O stack (raw, buffered, text) that aims > to be more reliably cross-platform than C FILE streams. Python 3 still > uses FILE streams internally in some cases (e.g. to read pyvenv.cfg at > startup). > > FYI in Windows open() or _wopen() is a C runtime library function, not > a system function. It calls the Windows API function CreateFile, which > calls the NT system function, NtCreateFile. It's similarly layered for > all calls, e.g. read() calls ReadFile or ReadConsoleW, which calls > NtReadFile or NtDeviceIoControlFile (ReadConsoleW). > -- https://mail.python.org/mailman/listinfo/python-list
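A quick way to see the Python 3 I/O stack that eryk sun describes (its own raw/buffered/text layers sitting directly on OS file descriptors, rather than C FILE streams) is to open a file and walk the wrapper chain. A small sketch, assuming a file named 'myfile' exists:

import io

f = open('myfile', 'r')        # text layer
print(type(f))                 # <class '_io.TextIOWrapper'>
print(type(f.buffer))          # <class '_io.BufferedReader'>  (buffered layer)
print(type(f.buffer.raw))      # <class '_io.FileIO'>          (raw layer, wraps the OS fd)
print(f.buffer.raw.fileno())   # the underlying file descriptor
f.close()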
How should we use global variables correctly?
I can 'feel' that global variables are evil. I have also read lots of articles that argue this (http://wiki.c2.com/?GlobalVariablesAreBad). However, I found that the CPython Lib uses the `global` keyword quite a lot. So how should we use the `global` keyword correctly? IIUC, it's fine to use the `global` keyword inside the Lib since most of the time the user just imports the lib and calls the API. In any other situation, we should avoid using it. Am I right?
--
https://mail.python.org/mailman/listinfo/python-list
Re: How should we use global variables correctly?
Thank you all for the great explanation. I am still trying to find some good examples of using 'global'. In CPython, I found an example using 'global' in cpython/Lib/zipfile.py:

_crctable = None

def _gen_crc(crc):
    for j in range(8):
        if crc & 1:
            crc = (crc >> 1) ^ 0xEDB88320
        else:
            crc >>= 1
    return crc

def _ZipDecrypter(pwd):
    key0 = 305419896
    key1 = 591751049
    key2 = 878082192

    global _crctable
    if _crctable is None:
        _crctable = list(map(_gen_crc, range(256)))
    crctable = _crctable

_crctable is only used in the _ZipDecrypter function. IIUC, the code can be refactored to

def _gen_crc(crc):
    ...stay the same

def _ZipDecrypter(pwd, _crctable=list(map(_gen_crc, range(256)))):
    key0 = 305419896
    key1 = 591751049
    key2 = 878082192
    crctable = _crctable

which avoids the 'global' keyword. Why are we not doing this? I guess the reason we use 'global' here is that we don't want to create `_crctable = list(map(_gen_crc, range(256)))` every time we run the '_ZipDecrypter' function. So we kinda cache _crctable with 'global', am I right?
--
https://mail.python.org/mailman/listinfo/python-list
Re: How should we use global variables correctly?
I also want to know what is the difference between "using 'global variables' in a py module" and "using a variable in class". For example: In global.py: foo = 1 def bar(): global foo return foo + 1 In class.py class Example: def __init__(self): self.foo = 1 def bar() return self.foo + 1 Expect the syntax, why using class variable self.foo would be better (or more common)? I think the 'global' here is relative, foo is global in global.py and self.foo is global in Example class. If the global.py is short and clean enough (didn't have a lot of other class), they are pretty much the same. Or I missed something? Chris Angelico 于2019年8月23日周五 上午9:34写道: > On Fri, Aug 23, 2019 at 11:24 AM Windson Yang wrote: > > > > Thank you all for the great explanation, I still trying to find some good > > example to use 'global', In CPython, I found an example use 'global' in > > cpython/Lib/zipfile.py > > > > _crctable = None > > def _gen_crc(crc): > > for j in range(8): > > if crc & 1: > > crc = (crc >> 1) ^ 0xEDB88320 > > else: > > crc >>= 1 > > return crc > > > > def _ZipDecrypter(pwd): > > key0 = 305419896 > > key1 = 591751049 > > key2 = 878082192 > > > > global _crctable > > if _crctable is None: > > _crctable = list(map(_gen_crc, range(256))) > > crctable = _crctable > > > > _crctable only been used in the _ZipDecrypter function. IIUC, the code > can > > be refactored to > > > > def _gen_crc(crc): > > ...stay the same > > > > def _ZipDecrypter(pwd, _crctable=list(map(_gen_crc, range(256: > > key0 = 305419896 > > key1 = 591751049 > > key2 = 878082192 > >crctable = _crctable > > > > Which avoid using 'global' keyword. Why we are not doing this? I guess > the > > reason we use 'global' here because we don't want to create `_crctable = > > list(map(_gen_crc, range(256)))` every time when we run '_ZipDecrypter' > > function. So we kinda cache _crctable with 'global', am I right? > > It's a cache that is made ONLY when it's first needed. If you put it > in the function header, it has to be created eagerly as soon as the > module is imported. > > ChrisA > -- > https://mail.python.org/mailman/listinfo/python-list > -- https://mail.python.org/mailman/listinfo/python-list
Re: How should we use global variables correctly?
Thank you all. I agreed with Frank that > It would make sense to use the 'global' keyword if you have a module with various functions, several of which refer to 'foo', but only one of which changes the value of 'foo'. I also found an example in cpython/lib/gettext.py, only 'textdomain function' can change '_current_domain', other functions just refer to it. So, it will be not evil or to use 'global' keyword correctly when there is only one function can change its value? Cameron Simpson 于2019年8月23日周五 下午3:15写道: > On 23Aug2019 09:07, Frank Millman wrote: > >On 2019-08-23 8:43 AM, Windson Yang wrote: > >>In class.py > >> > >> class Example: > >> def __init__(self): > >> self.foo = 1 > >> def bar() > >> return self.foo + 1 > >> > >>Expect the syntax, why using class variable self.foo would be better (or > >>more common)? I think the 'global' here is relative, foo is global in > >>global.py and self.foo is global in Example class. If the global.py is > >>short and clean enough (didn't have a lot of other class), they are > pretty > >>much the same. Or I missed something? > >> > > > >One difference is that you could have many instances of Example, each > >with its own value of 'foo', whereas with a global 'foo' there can > >only be one value of 'foo' for the module. > > But that is an _instance_ attribute. Which is actually what Windson Yang > made. > > A class attribute is bound to the class, not an instance. The > terminology is important. > > Cheers, > Cameron Simpson > -- > https://mail.python.org/mailman/listinfo/python-list > -- https://mail.python.org/mailman/listinfo/python-list
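For completeness, here is a small sketch of the same lazy-caching idea without a module-level 'global' statement, by memoizing a helper that builds the table on first use. This is just an alternative illustration reusing the _gen_crc helper quoted earlier in the thread, not how zipfile.py is actually written, and it assumes Python 3.2+ for functools.lru_cache:

from functools import lru_cache

@lru_cache(maxsize=1)
def _get_crctable():
    # Built on the first call, then reused; no module-level global needed.
    return tuple(map(_gen_crc, range(256)))

def _ZipDecrypter(pwd):
    key0 = 305419896
    key1 = 591751049
    key2 = 878082192
    crctable = _get_crctable()   # table is created lazily and cached
    ...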
What make function with huge list so slow
I have two functions to calculate Fibonacci numbers. fib_dp use a list to store the calculated number. fib_dp2 just use two variables. def fib_dp(n): if n <= 1: return n dp = [0] * (n+1) dp[0], dp[1] = 0, 1 for i in range(2, n+1): dp[i] = dp[i-1] + dp[i-2] return dp[-1] def fib_dp2(n): if n <= 1: return n pre, now = 0, 1 for i in range(2, (n+1)): pre, now = now, pre+now return now Theoretically, both of their time complexity should be O(n). However, when the input n is so big (like 40), fib_dp2(40) can calculate it in 2s but fib_dp(40) takes *more than 60s* (python3.7.3 and macOS 10.14.6). Why?. At first, I guess the reasons are 1. It took too much time looking up the value in the list (the worse case should be O(1) according to the document). However, the below function fib_dp_tem(40) can finish in 2s, so looking up value should not be the bottleneck. def fib_dp_look(n): if n <= 1: return n dp = [0] * (n+1) dp[0], dp[1] = 0, 1 for i in range(2, n+1): # change dp[i] to tem, this function is not correct now but it return dp[-1] in 2s tem = dp[i-1] + dp[i-2] return dp[-1] 2. It took too much time setting up the value in the list (the worse case should be O(1) according to the document). Again, the below function fib_dp_set(40) can finish in 2s, so setting value should not be the bottleneck too. def fib_dp_set(n): if n <= 1: return n dp = [0] * (n+1) dp[0], dp[1] = 0, 1 for i in range(2, n+1): # this function is not correct now but it return dp[-1] in 2s dp[i-1] = i dp[i-2] = i + 1 return dp[-1] 3. python use some kind of cache for 'pre', 'now' variable, (like 'register variable' in C, but I'm not sure how it work in CPython) I also tried to use the dis module but with no luck. Any reason to make fib_dp so much slower than fib_dp2? Please let me know, thank you. Bests, Windson -- https://mail.python.org/mailman/listinfo/python-list
Re: What make function with huge list so slow
Thank you, Chris. I tried your suggestions. I don't think that is the reason, fib_dp_look() and fib_dp_set() which also allocation a big list can return in 2s. Chris Angelico 于2019年8月25日周日 上午11:27写道: > On Sun, Aug 25, 2019 at 12:56 PM Windson Yang wrote: > > > > I have two functions to calculate Fibonacci numbers. fib_dp use a list to > > store the calculated number. fib_dp2 just use two variables. > > > > def fib_dp(n): > > if n <= 1: > > return n > > dp = [0] * (n+1) > > dp[0], dp[1] = 0, 1 > > for i in range(2, n+1): > > dp[i] = dp[i-1] + dp[i-2] > > return dp[-1] > > > > def fib_dp2(n): > > if n <= 1: > > return n > > pre, now = 0, 1 > > for i in range(2, (n+1)): > > pre, now = now, pre+now > > return now > > > > Theoretically, both of their time complexity should be O(n). However, > when > > the input n is so big (like 40), fib_dp2(40) can calculate it in > 2s > > but fib_dp(40) takes *more than 60s* (python3.7.3 and macOS 10.14.6). > > Why? > > Memory allocation can take a long time. Try grabbing the line > initializing dp and slapping that into the body of dp2, and then > compare their times; that might make all the difference. > > ChrisA > -- > https://mail.python.org/mailman/listinfo/python-list > -- https://mail.python.org/mailman/listinfo/python-list
Re: What make function with huge list so slow
'I'm just running them in succession and seeing how long they'. The full code looks like this (this is only an example.py), and I run 'time python3 example.py' for each function.

def fib_dp(n):
    dp = [0] * (n+1)
    if n <= 1:
        return n
    dp[0], dp[1] = 0, 1
    for i in range(2, n+1):
        dp[i] = dp[i-1] + dp[i-2]
    return dp[-1]

def fib_dp2(n):
    # add dp here just for calculating memory allocation time
    dp = [0] * (n+1)
    if n <= 1:
        return n
    pre, now = 0, 1
    for i in range(2, (n+1)):
        pre, now = now, pre+now
    return now

# run the function only once
# fib_dp(40)   # took more than 60s
# fib_dp2(40)  # took about 2s
--
https://mail.python.org/mailman/listinfo/python-list
Re: What make function with huge list so slow
> Figure out how much memory fib_dp is holding on to right before it returns
> the answer. fib(40) is a _big_ number! And so is fib(39), and
> fib(38), and fib(37), etc. By the time you're done, you're holding
> on to quite a huge pile of storage here. Depending on how much physical
> memory you have, you may actually be swapping before you're done.
>
> --
> Alan Bawden

Thank you, Alan. You are right! We use too much memory to save the answers in the dp list. Now I get it, thank you so much.
--
https://mail.python.org/mailman/listinfo/python-list
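A rough way to see the effect described above from Python itself is to add up the sizes of the integers the dp list keeps alive, versus the two integers the second version holds. This is only an illustrative measurement helper, not code from the thread:

import sys

def fib_list_bytes(n):
    # Approximate bytes retained by fib_dp's dp list: every Fibonacci
    # number up to fib(n) stays alive until the function returns.
    pre, now = 0, 1
    total = sys.getsizeof(pre) + sys.getsizeof(now)
    for _ in range(2, n + 1):
        pre, now = now, pre + now
        total += sys.getsizeof(now)
    return total

def fib_two_vars_bytes(n):
    # fib_dp2 only ever holds the two most recent values.
    pre, now = 0, 1
    for _ in range(2, n + 1):
        pre, now = now, pre + now
    return sys.getsizeof(pre) + sys.getsizeof(now)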
Python v.s. c++
Hi, if I am already skillful with C++, is it useful to learn Python? Thanks!
--
http://mail.python.org/mailman/listinfo/python-list
spawn* or exec* and fork, what should I use and how ?
Hi,
I want to use python as a "shell like" program,
and execute an external program in it( such as mv, cp, tar, gnuplot)
I tried:
os.execv("/bin/bash",("/usr/bin/gnuplot",'-c "gnuplot < plot.tmp"'))
since it's in a for-loop, it should be executed many times, but
it exits after the first run.
so I have to use spawn* like this:
os.spawnlp(os.P_WAIT, 'gnuplot', 'gnuplot', 'plot.tmp')
It works very well.
My question is,
1. why my exec(..) command doesn't work?
2. exec* must be with fork ?
3. In what situation, we choose one over another ?
Thank you!
regards,
Lingyun
--
http://mail.python.org/mailman/listinfo/python-list
Re: spawn* or exec* and fork, what should I use and how ?
Peter Hansen wrote:
Lingyun Yang wrote:
I want to use python as a "shell like" program,
and execute an external program in it( such as mv, cp, tar, gnuplot)
os.execv("/bin/bash",("/usr/bin/gnuplot",'-c "gnuplot < plot.tmp"'))
I would suggest checking out the "subprocess" module,
new in Python 2.4. It subsumes the functionality
of most of the alternative methods such as execv and
spawn and os.system(), and provides an arguably cleaner
interface.
-Peter
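A minimal sketch of the subprocess approach described above (assuming plot.tmp is a gnuplot script in the current directory); it can be called repeatedly from a loop without replacing the Python process:

import subprocess

# Equivalent in spirit to os.spawnlp(os.P_WAIT, 'gnuplot', 'gnuplot', 'plot.tmp'):
# runs gnuplot on the script and waits for it to finish.
subprocess.call(['gnuplot', 'plot.tmp'])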
Thank you!
I got the document about subprocess,
http://www.python.org/dev/doc/devel/lib/module-subprocess.html
--
http://mail.python.org/mailman/listinfo/python-list
Re: Newbie question related to Boolean in Python
bear_moved = False
while True:
next = raw_input("> ")
if next == "take honey":
dead("The bear looks at you then slaps your face off.")
elif next == "taunt bear" and not bear_moved:
print "The bear has moved from the door. You can go through."
bear_moved = True
elif next == "taunt bear" and bear_moved:
dead("The bear gets pissed off and chews your leg off.")
elif next == "open door" and bear_moved:
gold_room()
else:
print "I got no idea what that means."
# sorry if it looks confusing, this is the real code
--
https://mail.python.org/mailman/listinfo/python-list
PyImport_Import() failed
Hello, I'm new to Python. I want to embed the Python interpreter into my C++ application powered by Borland C++ Builder 2006. I have written some C++ code in order to evaluate a Python script file, but the PyImport_Import() call fails. I really don't know what's wrong with my cpp code; it always returns a NULL value. Any advice is appreciated!

int TPython::ExecuteScript(const char * pszFileName)
{
    PyObject *pFileName, *pModule, *pDict, *pFunc;
    PyObject *pArgs, *pValue;

    pFileName = PyString_FromString(pszFileName);
    /* Error checking of pFileName left out */
    if(!pFileName) {
        trace1(("pFileName == NULL "));
    }
    pModule = PyImport_Import(pFileName);  // always failed, returned NULL :(
    Py_DECREF(pFileName);
    if (pModule != NULL) {
    } else {
        PyErr_Print();
        trace1(("Failed to load \"%s\"\n", pszFileName));
        return 1;
    }
    return 0;
}
--
http://mail.python.org/mailman/listinfo/python-list
How to download a web page just like a web browser do ?
Hi, it has been about one month since I last posted to this list. Yes, I use Python; I use it in my everyday work, and whenever I need to write some scripts or other small-scale tools, Python is the first thing I consider. I must say it is a powerful tool for me, and what is more important, there is a friendly and flourishing community here. But I must stop the appreciation and get to my question now. Every day I receive a newsletter from the NYTimes, but I don't want to read the news in the evening when the letter comes in, so I would like to download the web page containing the news and read it the next morning. I have decided to use Python to write a tool which is fed a URL and then downloads the page to my disk, just like the browser's "Save as..." function. I know what I should do to accomplish that: parse the web page, download everything the page references, modify all the links to point to the local copies, and so on. So, has anyone already implemented a similar function? Can anyone share some code with me? I really don't want to write confusing code full of text searching and substitution. Thanks in advance! Bo -- http://mail.python.org/mailman/listinfo/python-list
Python 2.5 "make install" bug?
All the site-packages/*.so files get copied to the directory specified in my ~/.pydistutils.cfg instead of lib-dynload under the prefix dir, then proceeds to chmod 755 all the files in that directory (including ones that existed before install). Please advise. -- http://mail.python.org/mailman/listinfo/python-list
An algorithm problem
Hi, I have written a Python program to solve a problem described as below (forgive my poor English!).

Put 2^n 0s or 1s in a ring, so that we can select any n continuous digits from the ring to form a binary number. Obviously we can get 2^n such selections, so the question is: given a number n, provide an algorithm that produces a 0-1 ring such that the 2^n numbers we get are exactly the numbers from 0 to 2^n-1, each appearing once.

So I wrote the following program to solve the question, and it works well for n below 10:

flag = 0
input = 10
number = 2**input
set = set()
list = []

def list2int(l, n):
    ret = 0
    for x in l:
        if( n == 0 ):
            break
        if x:
            ret = ( ret << 1 ) | 1
        else:
            ret = ( ret << 1 ) | 0
        n = n - 1
    return ret

def ring(index, number):
    global list, set
    temp = []
    if( number < index ):  # the range overflow
        # detect whether the remain of the list is ok ?
        for x in xrange(1, input):
            begin = number - input + 1 + x
            l = list[begin:] + list[0:x]
            i = list2int(l, input)
            if i in set:
                for i in temp:
                    set.remove(i)
                return
            else:
                set.add(i)
                temp.append(i)
        # index = index + 1
        global flag
        flag = 1
        return
    list.append(1)
    if len(list) < input or list2int(list[index-input+1:index+2], input) not in set:
        if( len(list) >= input ):
            set.add(list2int(list[index-input+1:index+2], input))
        ring(index+1, number)
        if( flag == 1 ):
            return
        if( len(list) >= input ):
            set.remove(list2int(list[index-input+1:index+2], input))
    list = list[:index]
    list.append(0)
    if len(list) < input or list2int(list[index-input+1:index+2], input) not in set:
        if( len(list) >= input ):
            set.add(list2int(list[index-input+1:index+2], input))
        ring(index+1, number)
        if( flag == 1 ):
            return
        if( len(list) >= input ):
            set.remove(list2int(list[index-input+1:index+2], input))
    list = list[:index]

ring(0, number-1)
print list

But when n reaches 10, Python reports that the stack is exhausted:

RuntimeError: maximum recursion depth exceeded

My question is: where can I put some optimization so that the code works smoothly for greater n? Thanks in advance!
--
http://mail.python.org/mailman/listinfo/python-list
Re: An algorithm problem
I am sorry, and thanks for your friendliness. I have changed my source code to use more meaningful variable names and more comments, following the instructions John has written. I think that makes it easier for you to read my code, and this time I put the code in the attachment. Indeed, I wrote it in Eclipse Pydev; I don't know why it loses its indentation on the mailing list.

Also, I forgot to give an example of the question, so I add one here. Given the integer 2, for instance, what we expect from the algorithm is a list of 0s and 1s, in this case 1 1 0 0. We can take it as a ring and fetch any 2 continuous numbers from it, finally getting four combinations: 11, 10, 00, 01, which are 3, 2, 0, 1.

I hope this time it is clear for all to understand the problem that confronts me. I am sorry again for the trouble, and thank you again!

Best Regards
Bo Yang

# This is the file for the solution of the 0-1 ring problem
# in the csdn algorithm forum.
#
# The input to this program is an integer n.
# The output of the algorithm is a list of 2^n elements
# in which every number in the range 0 ~ 2^n-1 appears
# exactly once.

flag = 0                # whether we found the ring
input = 2               # the input n
range = 2**input        # 2^n
integers = set()        # the set of the numbers we have in the ring now
ring = []               # the ring

def list2int(list, n):
    # convert a list containing only 0s or 1s to an integer
    ret = 0
    for x in list:
        if n == 0:
            break
        if x:
            ret = ( ret << 1 ) | 1
        else:
            ret = ( ret << 1 ) | 0
        n = n - 1
    return ret

def getring(index, range):
    # the main function for the problem
    global ring, integers
    temp = []
    if range < index:   # the range overflow
        # detect whether the remainder of the list contains the remaining integers
        for x in xrange(1, input):
            begin = range - input + 1 + x
            l = ring[begin:] + ring[0:x]
            i = list2int(l, input)
            if i in integers:
                for i in temp:
                    integers.remove(i)
                return
            else:
                integers.add(i)
                temp.append(i)
        # index = index + 1
        # if we reach here, it indicates that we succeeded
        global flag
        flag = 1
        return
    # test whether 1 in this position is ok?
    ring.append(1)
    if len(ring) < input or list2int(ring[index-input+1:index+2], input) not in integers:
        if len(ring) >= input:
            integers.add(list2int(ring[index-input+1:index+2], input))
        getring(index+1, range)
        if flag == 1:
            return
        if len(ring) >= input:
            integers.remove(list2int(ring[index-input+1:index+2], input))
    # test whether 0 in this position is ok?
    ring = ring[:index]
    ring.append(0)
    if len(ring) < input or list2int(ring[index-input+1:index+2], input) not in integers:
        if len(ring) >= input:
            integers.add(list2int(ring[index-input+1:index+2], input))
        getring(index+1, range)
        if flag == 1:
            return
        if len(ring) >= input:
            integers.remove(list2int(ring[index-input+1:index+2], input))
    ring = ring[:index]

# begin process
getring(0, range-1)
# output the result list
print ring
--
http://mail.python.org/mailman/listinfo/python-list
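For the archive, a compact alternative (not the original poster's approach): the ring being asked for is a de Bruijn sequence B(2, n), and the standard Lyndon-word construction below builds it with recursion depth proportional to n rather than 2^n, so n = 10 and far beyond pose no problem. Simply raising sys.setrecursionlimit() would also let the backtracking version above go a little further.

def de_bruijn(n):
    # Classic FKM / Lyndon-word construction of a binary de Bruijn sequence.
    a = [0] * (2 * n)
    sequence = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                sequence.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, 2):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return sequence

print de_bruijn(2)        # [0, 0, 1, 1] -- a rotation of the 1 1 0 0 ring above
print len(de_bruijn(10))  # 1024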
An error ?
Hi, I am confronted with an odd question in the Python cgi module! Below is my code:

import cgitb ; cgitb.enable()
import cgi
print "Hello World !"

The script is as easy as that, and the URL is 202.113.239.51/vote/cgi/a.py, but Apache gives me a 'Server internal error!' and the error log is:

[Fri Jun 16 14:06:45 2006] [error] [client 10.10.110.17] malformed header from script. Bad header=Hello World!: a.py

I wish somebody could help me, thanks in advance! Best Regards!
--
http://mail.python.org/mailman/listinfo/python-list
Re: An error ?
It works, thank you everyone! I think there is much more I need to learn before I can fully grasp the client/server model! Thanks again! -- http://mail.python.org/mailman/listinfo/python-list
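For the record, the usual cause of Apache's "malformed header from script" is that the CGI script prints its body before any HTTP header. A minimal working version of the script above might look like this (a sketch, assuming a plain-text response is fine):

import cgitb ; cgitb.enable()
import cgi

# A CGI script must emit at least a Content-Type header followed by a blank
# line before anything that is meant to be the response body.
print "Content-Type: text/plain"
print
print "Hello World !"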
Re: Python or Ajax?
I think if you know java language very well but feel suffering with the error prone javascript , GWT is good choose for AJAX development . With the well-known IDE Eclipse your development time efficiency will promote fast ! -- http://mail.python.org/mailman/listinfo/python-list
How do you use this list ?
Hi everyone, I have been on this list for about 4 months, and every day I receive hundreds of mails. There is no way to read all of them, so I just read what is interesting to me. Even so, too many mails pile up in my inbox. I want to ask how you use this list: do you read every mail that comes in, or just what you find interesting? Thank you! Best Regards! -- http://mail.python.org/mailman/listinfo/python-list
Re: How to download a web page just like a web browser do ?
Thank you , Max ! I think HarvestMan is just what I need ! Thanks again ! -- http://mail.python.org/mailman/listinfo/python-list
How many web framework for python ?
Hello everybody, I am a student majoring in software engineering, and I need to do something for my course. There are very good web frameworks for Java and Ruby; is there one for Python? I want to write a web framework for Python based on mod_python as my course homework. Could you give me some advice? -- http://mail.python.org/mailman/listinfo/python-list
Re: How many web framework for python ?
Thank you very much ! -- http://mail.python.org/mailman/listinfo/python-list
make a class instance from a string ?
Hi,
I know in Java we can use
Class.forName("classname")
to get an instance of the class 'classname' from a
string. In Python, how do I do that?
Thanks in advance !
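One common approach (a sketch added here, not from the thread; the module and class names below are placeholders) is to import the module by name and look the class up with getattr, then call it:

import importlib   # Python 2.7+; older versions can use __import__ instead

def instance_from_string(dotted_path):
    # "mypackage.mymodule.ClassName" -> an instance of ClassName
    module_name, class_name = dotted_path.rsplit(".", 1)
    module = importlib.import_module(module_name)   # import the module by name
    cls = getattr(module, class_name)               # fetch the class object
    return cls()                                    # roughly Java's newInstance()

# obj = instance_from_string("mypackage.mymodule.ClassName")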
--
http://mail.python.org/mailman/listinfo/python-list
Re: do design patterns still apply with Python?
Paul Novak wrote:
> A lot of the complexity of design patterns in Java falls away in
> Python, mainly because of the flexibility you get with dynamic typing.
>
I agree with this very much !
In Java, C++, and other statically typed, compiled languages, the
type is fixed at compile time, so to stay flexible at runtime we often
need to program to an interface. For example, in Java we write:

interface I {...}
class A implements I {...}
class B implements I {...}
operate(I var)  // we can pass an instance of A or B here

and in C++:

class Abstract {...};
class A : public Abstract {...};
class B : public Abstract {...};
operate(Abstract& var)  // pass an instance of A or B here

But in Python, typing is dynamic and names are bound at runtime, so we
can pass any object we want. This feature, I think, means Python does not
need the redundant class hierarchies and interfaces that are at the core of
the GoF design patterns.
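A tiny sketch of that point (an illustration added here, not from the original post): in Python the "interface" is just the set of methods an object happens to have.

class Dog(object):
    def speak(self):
        return "woof"

class Robot(object):
    def speak(self):
        return "beep"

def operate(thing):
    # No common base class or interface declaration is needed;
    # anything with a speak() method will do.
    print(thing.speak())

operate(Dog())
operate(Robot())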
> For a Pythonic Perspective on Patterns, "Python Programming Patterns"
> by Thomas W. Christopher is definitely worth tracking down. It looks
> like it is out of print, but you can find used copies on Amazon.
>
> Regards,
>
> Paul.
>
>
> This sounds like an article crying out to be written,
> "(Learning) Design Patterns with Python".
>
> Has it been written already?
>
> Cheers,
> Terry
>
>
> Bruce Eckel began writing "Thinking In Python" it was last updated
> in 2001.
>
--
http://mail.python.org/mailman/listinfo/python-list
reading argv argument of unittest.main()
I've read that unittest.main() can take an optional argv argument, and that if it is None, it will be assigned sys.argv. Is there a way to pass command line arguments through unittest.main() to the setUp method of a class derived from unittest.TestCase? Thank you in advance. Winston -- http://mail.python.org/mailman/listinfo/python-list
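One common workaround (a hedged sketch added here, not an official unittest feature; the option name is a placeholder): pull custom options out of sys.argv before unittest.main() sees them, and stash them where setUp() can read them.

import sys
import unittest

CONFIG = {}

class MyTest(unittest.TestCase):
    def setUp(self):
        # setUp() reads whatever the command line provided.
        self.db_host = CONFIG.get("db_host", "localhost")

    def test_something(self):
        self.assertTrue(self.db_host)

if __name__ == "__main__":
    # Consume our own arguments so unittest.main() only parses the rest.
    if "--db-host" in sys.argv:
        i = sys.argv.index("--db-host")
        CONFIG["db_host"] = sys.argv[i + 1]
        del sys.argv[i:i + 2]
    unittest.main()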
using PyUnit to test with multiple threads
Is it possible to use PyUnit to test with multiple threads? I want to send many commands to a database at the same time. The order of execution of the commands is indeterminate, and therefore, so is the status message returned. For example, say that I send the commands "get" and "delete" for a given record to the database at the same time. If the get executes before the delete, I expect a success message (assuming that the record exists in the database). If the delete executes before the get, I expect a failure message. Is there a way to write tests in PyUnit for this type of situation? Thank you in advance. Winston -- http://mail.python.org/mailman/listinfo/python-list
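One way such a test might be structured (a hedged sketch added here, with a stand-in database object rather than a real one): fire the two commands from threads, then assert that the result is one of the legal interleavings.

import threading
import unittest

class FakeDB(object):
    def __init__(self):
        self.records = {"key": "value"}
    def get(self, key):
        return "success" if key in self.records else "failure"
    def delete(self, key):
        self.records.pop(key, None)

class RaceTest(unittest.TestCase):
    def test_concurrent_get_and_delete(self):
        db = FakeDB()
        results = {}
        threads = [threading.Thread(target=lambda: results.update(get=db.get("key"))),
                   threading.Thread(target=lambda: results.update(deleted=db.delete("key")))]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # Either ordering of get/delete is acceptable, so allow both outcomes.
        self.assertIn(results["get"], ("success", "failure"))

if __name__ == "__main__":
    unittest.main()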
Re: reusing parts of a string in RE matches?
John Salerno wrote:
> I probably should find an RE group to post to, but my news server at
> work doesn't seem to have one, so I apologize. But this is in Python
> anyway :)
>
> So my question is, how can I find all occurrences of a pattern in a
> string, including overlapping matches? I figure it has something to do
> with look-ahead and look-behind, but I've only gotten this far:
>
> import re
> string = 'abababababababab'
> pattern = re.compile(r'ab(?=a)')

Below is from the Python library reference:

(?=...)  Matches if ... matches next, but doesn't consume any of the string.
This is called a lookahead assertion. For example, Isaac (?=Asimov) will
match 'Isaac ' only if it's followed by 'Asimov'.

> m = pattern.findall(string)
>
> This matches all the 'ab' followed by an 'a', but it doesn't include the
> 'a'. What I'd like to do is find all the 'aba' matches. A regular
> findall() gives four results, but really there are seven.

I tried the code, but I get seven results!

> Is there a way to do this with just an RE pattern, or would I have to
> manually add the 'a' to the end of the matches?
>
> Thanks.
--
http://mail.python.org/mailman/listinfo/python-list
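A short sketch of the lookahead trick being discussed (added here for the archive): wrapping the whole pattern in a lookahead that contains a capturing group reports every overlapping match, including the final 'a'.

import re

s = 'abababababababab'
print(re.findall(r'ab(?=a)', s))     # 7 matches, but only the 'ab' part of each
print(re.findall(r'(?=(aba))', s))   # 7 full 'aba' matches, overlaps included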
python3.0 base64 error
I use base64 module on python3.0
like:
import base64
b="hello world"
a=base64.b64encode(b)
print(a)
but when I run it, it raises an error:
Traceback (most recent call last):
File "/home/jackie-yang/yd5m19/pythonstudy/test.py", line 4, in
a=base64.b64encode(b)
File "/usr/local/lib/python3.0/base64.py", line 56, in b64encode
raise TypeError("expected bytes, not %s" % s.__class__.__name__)
TypeError: expected bytes, not str
and also the document's example:
>>> import base64
>>> encoded = base64.b64encode('data to be encoded')
>>> encoded
'ZGF0YSB0byBiZSBlbmNvZGVk'
>>> data = base64.b64decode(encoded)
>>> data
'data to be encoded'
does not run either.
What is happening?
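A hedged note added here (this archive has no reply in the thread): in Python 3, b64encode() takes bytes, so the string has to be encoded first, and the result is a bytes object too.

import base64

data = "hello world".encode("utf-8")         # str -> bytes
encoded = base64.b64encode(data)
print(encoded)                                # b'aGVsbG8gd29ybGQ='
print(base64.b64decode(encoded).decode())     # back to 'hello world'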
--
http://mail.python.org/mailman/listinfo/python-list
Re: psycopg2 weirdness
For posterity: the problem turned out to be a second request being made
in quick succession by the client-side Javascript, causing the web.py
request handler to run in multiple threads concurrently. The request
handlers don't create their own Postgresql connections, but instead
share one across all sessions. The absence of any synchronization
protecting this connection resulted in myriad errors and crashes in the
C extension module (in both pygresql and psycopg2).
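A minimal sketch of the simplest remedy for the situation described above (an illustration added here, not the actual fix that was applied; connection parameters are placeholders): serialize access to the one shared connection with a lock, or give each thread its own connection.

import threading
import psycopg2

db_lock = threading.Lock()
database_conn = psycopg2.connect("dbname=dbname user=username host=hostname")

def run_query(query):
    # Only one request handler may touch the shared connection at a time.
    with db_lock:
        cur = database_conn.cursor()
        try:
            cur.execute(query)
            return cur.fetchall()
        finally:
            cur.close()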
Neha Gupta wrote:
Hey,
I only have little experience with web.py and psycopg2 and am running
into a weird problem, I'd appreciate any help I can get with debugging
it.
I wrote a simple program that works and I don't see any crash:
import psycopg2
try:
database_conn = psycopg2.connect("dbname='dbname' user='username'
host='hostname'");
except:
print "Unable to connect to the database!"
database_conn.set_isolation_level(0)
cur = database_conn.cursor();
while True:
query = "SELECT avg(dep_delay), extract(hour from crs_dep_time) as
crs_dep_hour, origin from flightdata where date = '01-05-2007' group
by origin, crs_dep_hour order by origin, crs_dep_hour";
cur.execute(query)
rows = cur.fetchall()
print rows
-
However, I have a small website built using web.py framework which has
a date picker that lets the user pick a date and it takes the user to
a new url such as: localhost:8080/departures/01-05-2007. I issue a
query to my database for the date selected by the user and retrieve
the results. The problem now is that if I select different dates
directly by changing the url then everything works but as soon as I
pick a date from date picker the server crashes. I removed the date
picker and made it just a text box but as soon as I hit the submit
button, server crashes so I know it is not the date picker that
causing trouble.
---
class departures:
def buildDepartureTableHtml(self, date):
web.debug('date', date)
# Issue the query.
# select avg(dep_delay), extract(hour from crs_dep_time) as
crs_dep_hour, origin from flightdata where date = '2007-02-15' group
by origin, crs_dep
# _hour order by origin, crs_dep_hour;
try:
web.debug("About to issue query")
# query = "SELECT avg(dep_delay), extract(hour from
crs_dep_time) as
crs_dep_hour, origin from flightdata where date = '" + date + "' group
by origin, crs_dep_hour order by origin, crs_dep_hour";
query = "SELECT avg(dep_delay), extract(hour from
crs_dep_time) as
crs_dep_hour, origin from flightdata where date = '01-05-2007' group
by origin, crs_dep_hour order by origin, crs_dep_hour";
cur.execute(query)
web.debug('query executed!')
rows = cur.fetchall()
web.debug('rows fetched!')
web.debug(rows)
except Exception, e:
print repr(e)
database_conn.rollback()
return "Invalid Date"
--
// JS code
function submitForm() {
var date = ($("date").value).replace(/\//g,"-");
window.location = "http://"; + window.location.host + "/
departures/" + date;
}
You can see above that I even ignored the date passed from the form
and I have hardcoded '01-05-2007'. The message "About to issue query"
gets printed as well as the right date chosen from the date picker but
then I see the following:
Assertion failed: (str != NULL), function PyString_FromString, file
Objects/stringobject.c, line 107.
Abort trap
with a pop that says: "The application Python quit unexpectedly. The
problem may have been caused by the _psycopg.so plug-in".
--
I don't understand the error message above. The date did get passed
correctly and am now not even using it, I use the hard coded date. So
what is going on?
Any help would be great.
Thank you!
Neha
--
http://mail.python.org/mailman/listinfo/python-list
--
Yang Zhang
http://www.mit.edu/~y_z/
--
http://mail.python.org/mailman/listinfo/python-list
some question about python2.6 and python3k
Hi guys, I am new to the Python world and I have some questions to ask. 1. Should I learn Python 2.6 or Python 3k? I have heard there are some differences between them. 2. Does Python 3k have some good web framework (like web.py)? Thanks! -- http://mail.python.org/mailman/listinfo/python-list
dict is really slow for big truck
I am trying to load a big file into a dict, which is about 9,000,000 lines,
something like
1 2 3 4
2 2 3 4
3 4 5 6
code
for line in open(file):
    arr = line.strip().split('\t')
    dict[arr[0]] = arr
but the dict gets really slow as I load more data into memory; by
the way, the Mac I use has 16G of memory.
Is this caused by poor performance when the dict has to grow its memory, or
by some other reason?
Can anyone provide a better solution?
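A rough, hedged way (added here, not from the thread) to see whether dict insertion itself is really what slows down: time the load in chunks and watch whether the rate drops as the table grows.

import time

def load_with_timing(path):
    table = {}
    start = time.time()
    with open(path) as f:
        for count, line in enumerate(f, 1):
            arr = line.strip().split('\t')
            table[arr[0]] = arr
            if count % 1000000 == 0:
                print("%d lines in %.1f s" % (count, time.time() - start))
    return table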
--
http://mail.python.org/mailman/listinfo/python-list
dict would be very slow for big data
Hi, I am trying to insert a lot of data into a dict, on the order of 10,000,000 entries. After inserting 10 units, the insert rate becomes very slow, about 50,000/s, and the entire time used for this task is also very long. Would anyone know a solution for this case? Thanks -- http://mail.python.org/mailman/listinfo/python-list
Unbuffered stdout/auto-flush
Hi, is there any way to get unbuffered stdout/stderr without relying on the -u flag to python or calling .flush() on each print (including indirect hacks like replacing sys.stdout with a wrapper that succeeds each write() with a flush())? Thanks in advance! -- Yang Zhang http://www.mit.edu/~y_z/ -- http://mail.python.org/mailman/listinfo/python-list
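Two further options, offered here as a hedged note rather than a reply from the thread: set the PYTHONUNBUFFERED=1 environment variable before starting the interpreter, or (on Python 2, which this question appears to be about) rebind sys.stdout to an unbuffered file object.

import os
import sys

# Re-open stdout's file descriptor with buffering disabled (Python 2 only;
# Python 3 text streams cannot be made fully unbuffered this way).
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
print 'written immediately, no flush() needed'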
Re: py2exe, PyQT, QtWebKit and jpeg problem
On Jun 20, 11:04 PM, Carbonimax <[EMAIL PROTECTED]> wrote:
> hello
>
> I have a problem with py2exe and QtWebKit :
> I make a program with a QtWebKit view.
> If I launch the .py directly, all images (jpg, png) are displayed but
> if I compile it with py2exe I have only png images. No jpg !
> No error message, nothing.
>
> Have you a solution ? Thank you.

I had the same problem as you. I found a way to fix it:

1. Copy the Qt plugins to the directory $YOUR_DIST_PATH/PyQt4/plugins;
2. Copy qt.conf to your dist directory;
3. Edit qt.conf and change Prefix to $YOUR_DIST_PATH/PyQt4.

Orit
--
http://mail.python.org/mailman/listinfo/python-list
Re: Learning different languages
Rich said : > Hi, > > (this is a probably a bit OT here, but comp.lang seems rather > desolated, so I'm not sure I would get an answer there. And right now > I'm in the middle of learning Python anyway so...) > > Anyway, my question is: what experience you people have with working > with different languages at the same time? > > I am an undergraduate now , majoring in software engineering .I have learn three lanaguages , c/c++ , java , sql . And now I am taking a part-time job in a software corporation which engage in enterprise mail system development . I must use php to maintain their web sever page , while I am using perl script to process the mail message. Meantime , I am very interested in python too .I can't say I am good at any one of these, but I must use all of these at a time . > Actually I did myself many years ago, on my Commodore machines, where > I programmed a lot in both basic, assembler and machine code, and > don't recall I had any problems with handling these parallel. But > then, they are very different languages, so it's not easy to get their > syntax etc. mixed up with each other. > > Yes , I feel that too . I often use break statement in perl script only be warned an syntax error ! > I'm more thinking about Python, PHP, C++, Perl, Euphoria, which are > languages I'm thinking of learning now. They look much more like each > other than basic and MC, at places some even share the exact same > syntax it seems, so your brain might get confused with what language > you're actually working with? > > How is your experience with handling these paralell?. And what would > you recommend - take one (or perhaps two) at a time, and then continue > with the next? Or is it OK to go ahead with them all, at once? > > I think when anybody learn a new language , the most important thing is not the syntax of that language but the builtin functions and the libraries the language provide ! My experience is : Learning a language is relatively easy , but being good at a language is a far more difficult thing! Regards! -- http://mail.python.org/mailman/listinfo/python-list
How can I get the text under the cusor ?
Hello, I want to develop an application to record some of the best words and ideas from the web while I surf with Firefox or IE. I would like the application to have a GUI like this:

1. It appears in the system tray area in Windows.
2. Whenever I select some words (either in IE or MS Word), the application displays a small window asking me whether to add those words to the application and whether to append some notes to the text.

So, how can I track the cursor, and get the selected text, in Python?
--
http://mail.python.org/mailman/listinfo/python-list
A question about the urllib2 ?
Hi, recently I used Python's urllib2 to write a small script to log in to our university gateway. Usually I must log in to the gateway in order to surf the web, so every time I start my computer, the first thing I do is open a browser and log in to the gateway.

So I decided to write a script that sends the POST data to the web server directly, to log in automatically once the computer is on. I wrote the code below:

urllib2.urlopen(urllib2.Request(url="https://202.113.16.223/php/user_login.php",
                                data="loginuser=0312889&password=o127me&domainid=1&refer=1& logintype= #"))

In the five '#' above I must submit some Chinese characters, but urllib2 complains about the non-ASCII characters.

What do you think about this? Any help will be appreciated very much, thanks in advance!
--
http://mail.python.org/mailman/listinfo/python-list
Re: A question about the urllib2 ?
Fuzzyman 写道: > Bo Yang wrote: > >> Hi , >> Recently I use python's urllib2 write a small script to login our >> university gateway . >> Usually , I must login into the gateway in order to surf the web . So , >> every time I >> start my computer , it is my first thing to do that open a browser to >> login the gateway ! >> >> So , I decide to write such a script , sending some post information to >> the webserver >> directly to login automatic once the computer is on . And I write the >> code below : >> >> urllib2.urlopen(urllib2.Request(url="https://202.113.16.223/php/user_login.php";, >> data="loginuser=0312889&password=o127me&domainid=1&refer=1& logintype= >> #")) >> >> In the five '#' above , I must submit some Chinese character , but the >> urllib2 complain >> for the non-ascii characters . >> >> What do you think this ? >> >> > I haven't tried this, so I'm guessing :-) > > Do you pass in the string to urllib2 as unicode ? If so, try encoding > it to UTF8 first... > > Otherwise you might try escaping it using ``urllib.quote_plus``. (Note > ``urllib``, *not* ``urllib2``.) > > All the best, > > Fuzzyman > http://www.voidspace.org.uk/python/index.shtml > > >> Any help will be appreciated very much , thanks in advance ! >> > > Thank you , I have got it ! I quote the Chinese with urllib.quote() , Thanks again ! -- http://mail.python.org/mailman/listinfo/python-list
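A hedged sketch of the quoting fix described above (Python 2, matching the thread; the Chinese value below is only a placeholder for whatever the gateway actually expects): build the POST body with urllib.urlencode so non-ASCII values are escaped, encoding them first in the page's charset (gb2312, per the follow-up below).

import urllib
import urllib2

chinese_value = u'\u4e2d\u6587'          # placeholder Chinese text
params = urllib.urlencode({
    'loginuser': '0312889',
    'password':  'o127me',
    'domainid':  '1',
    'refer':     '1',
    'logintype': chinese_value.encode('gb2312'),   # encode, then urlencode quotes it
})
urllib2.urlopen(urllib2.Request(url="https://202.113.16.223/php/user_login.php",
                                data=params))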
Re: A question about the urllib2 ?
Serge Orlov 写道:
> Bo Yang wrote:
>
>> Hi ,
>> Recently I use python's urllib2 write a small script to login our
>> university gateway .
>> Usually , I must login into the gateway in order to surf the web . So ,
>> every time I
>> start my computer , it is my first thing to do that open a browser to
>> login the gateway !
>>
>> So , I decide to write such a script , sending some post information to
>> the webserver
>> directly to login automatic once the computer is on . And I write the
>> code below :
>>
>> urllib2.urlopen(urllib2.Request(url="https://202.113.16.223/php/user_login.php";,
>> data="loginuser=0312889&password=o127me&domainid=1&refer=1& logintype=
>> #"))
>>
>> In the five '#' above , I must submit some Chinese character , but the
>> urllib2 complain
>> for the non-ascii characters .
>>
>
> I guess the server expect that a browser is coming from page
> https://202.113.16.223/, so the url should be submitted in the encoding
> of page https://202.113.16.223/ which is gb2312.
>
> urllib2.urlopen(urllib2.Request(url="https://202.113.16.223/php/user_login.php";,
> data=u"loginuser=0312889&password=o127me&domainid=1&refer=1& logintype=
> 写道".encode('gb2312')))
>
> should work
>
>
Yeah , and it works !
Thank you !
--
http://mail.python.org/mailman/listinfo/python-list
Re: Kross - Start of a Unified Scripting Approach
RM 写道: > This is from the new KOffice Announcement. > > http://www.koffice.org/announcements/announce-1.5.php > > '''This version of KOffice features a start of a unified scripting > solution called Kross. Kross provides cross-language support for > scripting (thus its name) and at present supports Python and Ruby. > > Kross is easy to include into programs previously lacking scripting > abilities, and is included in this version as a technology preview. So > far, only Krita and Kexi are improved by means of the Kross engine.We > would also like to point out that the API might change in the future > and expect Kross to be fully integrated into KOffice version 2.0.''' > > Interesting isn't it? > > Just like the .Net vision , Good idea ! -- http://mail.python.org/mailman/listinfo/python-list
Python subprocesses experience mysterious delay in receiving stdin EOF
I reduced a problem I was seeing in my application down into the following test case. In this code, a parent process concurrently spawns 2 (you can spawn more) subprocesses that read a big message from the parent over stdin, sleep for 5 seconds, and write something back. However, there's unexpected waiting happening somewhere, causing the code to complete in 10 seconds instead of the expected 5. If you set `verbose=True`, you can see that the straggling subprocess is receiving most of the messages, then waiting for the last chunk of 3 chars---it's not detecting that the pipe has been closed. Furthermore, if I simply don't do anything with the second process (`doreturn=True`), the first process will *never* see the EOF. Any ideas what's happening? Further down is some example output. Thanks in advance. from subprocess import * from threading import * from time import * from traceback import * import sys verbose = False doreturn = False msg = (20*4096+3)*'a' def elapsed(): return '%7.3f' % (time() - start) if sys.argv[1:]: start = float(sys.argv[2]) if verbose: for chunk in iter(lambda: sys.stdin.read(4096), ''): print >> sys.stderr, '..', time(), sys.argv[1], 'read', len(chunk) else: sys.stdin.read() print >> sys.stderr, elapsed(), '..', sys.argv[1], 'done reading' sleep(5) print msg else: start = time() def go(i): print elapsed(), i, 'starting' p = Popen(['python','stuckproc.py',str(i), str(start)], stdin=PIPE, stdout=PIPE) if doreturn and i == 1: return print elapsed(), i, 'writing' p.stdin.write(msg) print elapsed(), i, 'closing' p.stdin.close() print elapsed(), i, 'reading' p.stdout.read() print elapsed(), i, 'done' ts = [Thread(target=go, args=(i,)) for i in xrange(2)] for t in ts: t.start() for t in ts: t.join() Example output: 0.001 0 starting 0.003 1 starting 0.005 0 writing 0.016 1 writing 0.093 0 closing 0.093 0 reading 0.094 1 closing 0.094 1 reading 0.098 .. 1 done reading 5.103 1 done 5.108 .. 0 done reading 10.113 0 done -- Yang Zhang http://yz.mit.edu/ -- http://mail.python.org/mailman/listinfo/python-list
Re: Python subprocesses experience mysterious delay in receiving stdin EOF
On Wed, Feb 9, 2011 at 11:01 AM, MRAB wrote: > On 09/02/2011 01:59, Yang Zhang wrote: >> >> I reduced a problem I was seeing in my application down into the >> following test case. In this code, a parent process concurrently >> spawns 2 (you can spawn more) subprocesses that read a big message >> from the parent over stdin, sleep for 5 seconds, and write something >> back. However, there's unexpected waiting happening somewhere, causing >> the code to complete in 10 seconds instead of the expected 5. >> >> If you set `verbose=True`, you can see that the straggling subprocess >> is receiving most of the messages, then waiting for the last chunk of >> 3 chars---it's not detecting that the pipe has been closed. >> Furthermore, if I simply don't do anything with the second process >> (`doreturn=True`), the first process will *never* see the EOF. >> >> Any ideas what's happening? Further down is some example output. >> Thanks in advance. >> >> from subprocess import * >> from threading import * >> from time import * >> from traceback import * >> import sys >> verbose = False >> doreturn = False >> msg = (20*4096+3)*'a' >> def elapsed(): return '%7.3f' % (time() - start) >> if sys.argv[1:]: >> start = float(sys.argv[2]) >> if verbose: >> for chunk in iter(lambda: sys.stdin.read(4096), ''): >> print>> sys.stderr, '..', time(), sys.argv[1], 'read', >> len(chunk) >> else: >> sys.stdin.read() >> print>> sys.stderr, elapsed(), '..', sys.argv[1], 'done reading' >> sleep(5) >> print msg >> else: >> start = time() >> def go(i): >> print elapsed(), i, 'starting' >> p = Popen(['python','stuckproc.py',str(i), str(start)], >> stdin=PIPE, stdout=PIPE) >> if doreturn and i == 1: return >> print elapsed(), i, 'writing' >> p.stdin.write(msg) >> print elapsed(), i, 'closing' >> p.stdin.close() >> print elapsed(), i, 'reading' >> p.stdout.read() >> print elapsed(), i, 'done' >> ts = [Thread(target=go, args=(i,)) for i in xrange(2)] >> for t in ts: t.start() >> for t in ts: t.join() >> >> Example output: >> >> 0.001 0 starting >> 0.003 1 starting >> 0.005 0 writing >> 0.016 1 writing >> 0.093 0 closing >> 0.093 0 reading >> 0.094 1 closing >> 0.094 1 reading >> 0.098 .. 1 done reading >> 5.103 1 done >> 5.108 .. 0 done reading >> 10.113 0 done >> > I changed 'python' to the path of python.exe and 'stuckproc.py' to its > full path and tried it with Python 2.7 on Windows XP Pro. It worked as > expected. Good point - I didn't specify that I'm seeing this on Linux (Ubuntu 10.04's Python 2.6). -- Yang Zhang http://yz.mit.edu/ -- http://mail.python.org/mailman/listinfo/python-list
Re: Python subprocesses experience mysterious delay in receiving stdin EOF
On Thu, Feb 10, 2011 at 12:28 AM, Jean-Michel Pichavant wrote: > Yang Zhang wrote: >> >> On Wed, Feb 9, 2011 at 11:01 AM, MRAB wrote: >> >>> >>> On 09/02/2011 01:59, Yang Zhang wrote: >>> >>>> >>>> I reduced a problem I was seeing in my application down into the >>>> following test case. In this code, a parent process concurrently >>>> spawns 2 (you can spawn more) subprocesses that read a big message >>>> from the parent over stdin, sleep for 5 seconds, and write something >>>> back. However, there's unexpected waiting happening somewhere, causing >>>> the code to complete in 10 seconds instead of the expected 5. >>>> >>>> If you set `verbose=True`, you can see that the straggling subprocess >>>> is receiving most of the messages, then waiting for the last chunk of >>>> 3 chars---it's not detecting that the pipe has been closed. >>>> Furthermore, if I simply don't do anything with the second process >>>> (`doreturn=True`), the first process will *never* see the EOF. >>>> >>>> Any ideas what's happening? Further down is some example output. >>>> Thanks in advance. >>>> >>>> from subprocess import * >>>> from threading import * >>>> from time import * >>>> from traceback import * >>>> import sys >>>> verbose = False >>>> doreturn = False >>>> msg = (20*4096+3)*'a' >>>> def elapsed(): return '%7.3f' % (time() - start) >>>> if sys.argv[1:]: >>>> start = float(sys.argv[2]) >>>> if verbose: >>>> for chunk in iter(lambda: sys.stdin.read(4096), ''): >>>> print>> sys.stderr, '..', time(), sys.argv[1], 'read', >>>> len(chunk) >>>> else: >>>> sys.stdin.read() >>>> print>> sys.stderr, elapsed(), '..', sys.argv[1], 'done reading' >>>> sleep(5) >>>> print msg >>>> else: >>>> start = time() >>>> def go(i): >>>> print elapsed(), i, 'starting' >>>> p = Popen(['python','stuckproc.py',str(i), str(start)], >>>> stdin=PIPE, stdout=PIPE) >>>> if doreturn and i == 1: return >>>> print elapsed(), i, 'writing' >>>> p.stdin.write(msg) >>>> print elapsed(), i, 'closing' >>>> p.stdin.close() >>>> print elapsed(), i, 'reading' >>>> p.stdout.read() >>>> print elapsed(), i, 'done' >>>> ts = [Thread(target=go, args=(i,)) for i in xrange(2)] >>>> for t in ts: t.start() >>>> for t in ts: t.join() >>>> >>>> Example output: >>>> >>>> 0.001 0 starting >>>> 0.003 1 starting >>>> 0.005 0 writing >>>> 0.016 1 writing >>>> 0.093 0 closing >>>> 0.093 0 reading >>>> 0.094 1 closing >>>> 0.094 1 reading >>>> 0.098 .. 1 done reading >>>> 5.103 1 done >>>> 5.108 .. 0 done reading >>>> 10.113 0 done >>>> >>>> >>> >>> I changed 'python' to the path of python.exe and 'stuckproc.py' to its >>> full path and tried it with Python 2.7 on Windows XP Pro. It worked as >>> expected. >>> >> >> Good point - I didn't specify that I'm seeing this on Linux (Ubuntu >> 10.04's Python 2.6). >> >> > > python test.py 0.000 0 starting > 0.026 0 writing > 0.026 0 closing > 0.026 0 reading > 0.029 .. 0 done reading > 0.030 1 starting > 0.038 1 writing > 0.058 1 closing > 0.058 1 reading > 0.061 .. 1 done reading > 5.026 0 done > 5.061 1 done > > on debian lenny (Python 2.5) > > JM > FWIW, this is consistently reproduce-able across all the Ubuntu 10.04s I've tried. You may need to increase the message size so that it's large enough for your system. -- Yang Zhang http://yz.mit.edu/ -- http://mail.python.org/mailman/listinfo/python-list
Re: Python subprocesses experience mysterious delay in receiving stdin EOF
Anybody else see this issue? On Thu, Feb 10, 2011 at 10:37 AM, Yang Zhang wrote: > On Thu, Feb 10, 2011 at 12:28 AM, Jean-Michel Pichavant > wrote: >> Yang Zhang wrote: >>> >>> On Wed, Feb 9, 2011 at 11:01 AM, MRAB wrote: >>> >>>> >>>> On 09/02/2011 01:59, Yang Zhang wrote: >>>> >>>>> >>>>> I reduced a problem I was seeing in my application down into the >>>>> following test case. In this code, a parent process concurrently >>>>> spawns 2 (you can spawn more) subprocesses that read a big message >>>>> from the parent over stdin, sleep for 5 seconds, and write something >>>>> back. However, there's unexpected waiting happening somewhere, causing >>>>> the code to complete in 10 seconds instead of the expected 5. >>>>> >>>>> If you set `verbose=True`, you can see that the straggling subprocess >>>>> is receiving most of the messages, then waiting for the last chunk of >>>>> 3 chars---it's not detecting that the pipe has been closed. >>>>> Furthermore, if I simply don't do anything with the second process >>>>> (`doreturn=True`), the first process will *never* see the EOF. >>>>> >>>>> Any ideas what's happening? Further down is some example output. >>>>> Thanks in advance. >>>>> >>>>> from subprocess import * >>>>> from threading import * >>>>> from time import * >>>>> from traceback import * >>>>> import sys >>>>> verbose = False >>>>> doreturn = False >>>>> msg = (20*4096+3)*'a' >>>>> def elapsed(): return '%7.3f' % (time() - start) >>>>> if sys.argv[1:]: >>>>> start = float(sys.argv[2]) >>>>> if verbose: >>>>> for chunk in iter(lambda: sys.stdin.read(4096), ''): >>>>> print>> sys.stderr, '..', time(), sys.argv[1], 'read', >>>>> len(chunk) >>>>> else: >>>>> sys.stdin.read() >>>>> print>> sys.stderr, elapsed(), '..', sys.argv[1], 'done reading' >>>>> sleep(5) >>>>> print msg >>>>> else: >>>>> start = time() >>>>> def go(i): >>>>> print elapsed(), i, 'starting' >>>>> p = Popen(['python','stuckproc.py',str(i), str(start)], >>>>> stdin=PIPE, stdout=PIPE) >>>>> if doreturn and i == 1: return >>>>> print elapsed(), i, 'writing' >>>>> p.stdin.write(msg) >>>>> print elapsed(), i, 'closing' >>>>> p.stdin.close() >>>>> print elapsed(), i, 'reading' >>>>> p.stdout.read() >>>>> print elapsed(), i, 'done' >>>>> ts = [Thread(target=go, args=(i,)) for i in xrange(2)] >>>>> for t in ts: t.start() >>>>> for t in ts: t.join() >>>>> >>>>> Example output: >>>>> >>>>> 0.001 0 starting >>>>> 0.003 1 starting >>>>> 0.005 0 writing >>>>> 0.016 1 writing >>>>> 0.093 0 closing >>>>> 0.093 0 reading >>>>> 0.094 1 closing >>>>> 0.094 1 reading >>>>> 0.098 .. 1 done reading >>>>> 5.103 1 done >>>>> 5.108 .. 0 done reading >>>>> 10.113 0 done >>>>> >>>>> >>>> >>>> I changed 'python' to the path of python.exe and 'stuckproc.py' to its >>>> full path and tried it with Python 2.7 on Windows XP Pro. It worked as >>>> expected. >>>> >>> >>> Good point - I didn't specify that I'm seeing this on Linux (Ubuntu >>> 10.04's Python 2.6). >>> >>> >> >> python test.py 0.000 0 starting >> 0.026 0 writing >> 0.026 0 closing >> 0.026 0 reading >> 0.029 .. 0 done reading >> 0.030 1 starting >> 0.038 1 writing >> 0.058 1 closing >> 0.058 1 reading >> 0.061 .. 1 done reading >> 5.026 0 done >> 5.061 1 done >> >> on debian lenny (Python 2.5) >> >> JM >> > > FWIW, this is consistently reproduce-able across all the Ubuntu 10.04s > I've tried. You may need to increase the message size so that it's > large enough for your system. 
> > -- > Yang Zhang > http://yz.mit.edu/ > -- Yang Zhang http://yz.mit.edu/ -- http://mail.python.org/mailman/listinfo/python-list
Re: Python subprocesses experience mysterious delay in receiving stdin EOF
After way too much time, I figured it out, after a quote from [this
post](http://fixunix.com/questions/379652-sending-eof-named-pipe.html)
jumped out at me:
> See the "I/O on Pipes and FIFOs" section of pipe(7) ("man 7 pipe")
>
> "If all file descriptors referring to the write end of a pipe have
> been closed, then an attempt to read(2) from the pipe will see
> end-of-file (read(2) will return 0)."
I should've known this, but it never occurred to me - had nothing to
do with Python in particular. What was happening was: the subprocesses
were getting forked with open (writer) file descriptors to each
others' pipes. As long as there are open writer file descriptors to a
pipe, readers won't see EOF.
E.g.:
p1=Popen(..., stdin=PIPE, ...) # creates a pipe the parent process
can write to
p2=Popen(...) # inherits the writer FD - as long as p2 exists, p1
won't see EOF
Turns out there's a `close_fds` parameter to `Popen`, so the solution
is to pass `close_fds=True`. All simple and obvious in hindsight, but
still managed to cost at least a couple eyeballs good chunks of time.
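In code, the fix described above is just one extra argument to Popen (shown with the same script name as in the thread; the argument values are placeholders):

from subprocess import Popen, PIPE

# close_fds=True stops each child from inheriting the write ends of the
# other children's stdin pipes, so readers actually see EOF on close().
p = Popen(['python', 'stuckproc.py', '0', '0.0'],
          stdin=PIPE, stdout=PIPE, close_fds=True)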
On Sun, Feb 13, 2011 at 11:52 PM, Yang Zhang wrote:
> Anybody else see this issue?
>
> On Thu, Feb 10, 2011 at 10:37 AM, Yang Zhang wrote:
>> On Thu, Feb 10, 2011 at 12:28 AM, Jean-Michel Pichavant
>> wrote:
>>> Yang Zhang wrote:
>>>>
>>>> On Wed, Feb 9, 2011 at 11:01 AM, MRAB wrote:
>>>>
>>>>>
>>>>> On 09/02/2011 01:59, Yang Zhang wrote:
>>>>>
>>>>>>
>>>>>> I reduced a problem I was seeing in my application down into the
>>>>>> following test case. In this code, a parent process concurrently
>>>>>> spawns 2 (you can spawn more) subprocesses that read a big message
>>>>>> from the parent over stdin, sleep for 5 seconds, and write something
>>>>>> back. However, there's unexpected waiting happening somewhere, causing
>>>>>> the code to complete in 10 seconds instead of the expected 5.
>>>>>>
>>>>>> If you set `verbose=True`, you can see that the straggling subprocess
>>>>>> is receiving most of the messages, then waiting for the last chunk of
>>>>>> 3 chars---it's not detecting that the pipe has been closed.
>>>>>> Furthermore, if I simply don't do anything with the second process
>>>>>> (`doreturn=True`), the first process will *never* see the EOF.
>>>>>>
>>>>>> Any ideas what's happening? Further down is some example output.
>>>>>> Thanks in advance.
>>>>>>
>>>>>> from subprocess import *
>>>>>> from threading import *
>>>>>> from time import *
>>>>>> from traceback import *
>>>>>> import sys
>>>>>> verbose = False
>>>>>> doreturn = False
>>>>>> msg = (20*4096+3)*'a'
>>>>>> def elapsed(): return '%7.3f' % (time() - start)
>>>>>> if sys.argv[1:]:
>>>>>> start = float(sys.argv[2])
>>>>>> if verbose:
>>>>>> for chunk in iter(lambda: sys.stdin.read(4096), ''):
>>>>>> print>> sys.stderr, '..', time(), sys.argv[1], 'read',
>>>>>> len(chunk)
>>>>>> else:
>>>>>> sys.stdin.read()
>>>>>> print>> sys.stderr, elapsed(), '..', sys.argv[1], 'done reading'
>>>>>> sleep(5)
>>>>>> print msg
>>>>>> else:
>>>>>> start = time()
>>>>>> def go(i):
>>>>>> print elapsed(), i, 'starting'
>>>>>> p = Popen(['python','stuckproc.py',str(i), str(start)],
>>>>>> stdin=PIPE, stdout=PIPE)
>>>>>> if doreturn and i == 1: return
>>>>>> print elapsed(), i, 'writing'
>>>>>> p.stdin.write(msg)
>>>>>> print elapsed(), i, 'closing'
>>>>>> p.stdin.close()
>>>>>> print elapsed(), i, 'reading'
>>>>>> p.stdout.read()
>>>>>> print elapsed(), i, 'done'
>>>>>> ts = [Thread(target=go, args=(i,)) for i in xrange(2)]
>>>>>&
Please recommend a open source for Python ACLs function
Hello Dear All: I would like to write some simple python test code with ACL(Access Control List) functions. Now simply I aim to use MAC address as ACL parameters, is there any good ACL open source recommended for using? Simple one is better. Any tips or suggestions welcomed and appreciated. Thank you. Kay -- http://mail.python.org/mailman/listinfo/python-list
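While waiting for library suggestions, a minimal sketch (an illustration added here, not an existing package) of a whitelist-style ACL keyed on MAC addresses may be enough for simple tests:

class MacACL(object):
    def __init__(self, allowed=()):
        # Normalise case so "AA:BB..." and "aa:bb..." compare equal.
        self.allowed = set(mac.lower() for mac in allowed)
    def permit(self, mac):
        self.allowed.add(mac.lower())
    def is_allowed(self, mac):
        return mac.lower() in self.allowed

acl = MacACL(["00:11:22:33:44:55"])
print(acl.is_allowed("00:11:22:33:44:55"))   # True
print(acl.is_allowed("66:77:88:99:aa:bb"))   # False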
Re: Keyboard Layout: Dvorak vs Colemak: is it Worthwhile to Improve the Dvorak Layout?
On Jun 13, 11:30 am, Tim Roberts wrote: > Xah Lee wrote: > > >(a lil weekend distraction from comp lang!) > > >in recent years, there came this Colemak layout. The guy who created > >it, Colemak, has a site, and aggressively market his layout. It's in > >linuxes distro by default, and has become somewhat popular. > >... > >If your typing doesn't come anywhere close to a data-entry clerk, then > >any layout “more efficient” than Dvorak is practically meaningless. > > More than that, any layout "more efficient" than QWERTY is practically > meaningless. The whole "intentional inefficiency" thing in the design of > the QWERTY layout is an urban legend. Once your fingers have the mapping > memorized, the actual order is irrelevent. Studies have shown that even a > strictly alphabetical layout works perfectly well, once the typist is > acclimated. > -- > Tim Roberts, [email protected] > Providenza & Boekelheide, Inc. Could you show which studies? Do they do research just about habit or other elements (e.g. movement rates, comfortablility, ...) as well? Have they ever heard of RSI because of repetitive movements? -- http://mail.python.org/mailman/listinfo/python-list
sublime textx 3 and python 35
Dear Python Team, I'm trying to install Python 3.5 and use Sublime Text 3 to write code with it. After I change the path to the Python app location, I still have a problem: I wrote a simple print call in Sublime Text (I already saved the file with a .py extension) and the console shows "finished in 0.2 s" without popping up any window displaying the word I tried to print. Could you give me any of your insights on this issue? Also, Python 3.5 was automatically installed under my User folder, and for some reason I couldn't find it by clicking through folders (like from My Computer to user to ). Thanks! Frank -- https://mail.python.org/mailman/listinfo/python-list
Localhost client-server simple ssl socket test program problems
Hello,everyone!!
I am writing a simple ssl client-server test program on my personal laptop.
And I encounter some problems with my simple programs.
Please give me some
helps.
My server code:
import socket
import ssl

bindsocket = socket.socket()
bindsocket.bind(('127.0.0.1', 1234))
bindsocket.listen(5)
print 'server is waiting for connection...'
newsocket, fromaddr = bindsocket.accept()
print 'start ssl socket...'
connstream = ssl.wrap_socket(newsocket, server_side=True,
                             certfile="/etc/home/ckyang/PHA/testsslsocket/mypha.crt",
                             keyfile="/etc/home/ckyang/PHA/testsslsocket/mypha.key",
                             ssl_version=ssl.PROTOCOL_SSLv23)
data = connstream.read()
print 'connected from address', fromaddr
print 'received data as', repr(data)
connstream.close()
My client code:
import socket
import ssl

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssl_sock = ssl.wrap_socket(s,
                           ca_certs="/home/ckyang/PHA/testsslsocket/myCA.crt",
                           cert_reqs=ssl.CERT_REQUIRED)
ssl_sock.connect(("127.0.0.1", 1234))
ssl_sock.write("hello")
ssl_sock.close()
Server side error:

  File "views.py", line 17, in
    connstream = ssl.wrap_socket(newsocket, server_side=True,
                                 certfile="/etc/home/ckyang/PHA/testsslsocket/mypha.crt",
                                 keyfile="/etc/home/ckyang/PHA/testsslsocket/mypha.key",
                                 ssl_version=ssl.PROTOCOL_SSLv23)
  File "/usr/lib/python2.7/ssl.py", line 344, in wrap_socket
    ciphers=ciphers)
  File "/usr/lib/python2.7/ssl.py", line 119, in __init__
    ciphers)
ssl.SSLError: [Errno 336265218] _ssl.c:347: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib
Client side error:
File "client.py", line 10, in ssl_sock.connect(("127.0.0.1", 1234))
File "/usr/lib/python2.7/ssl.py", line 299, in connectself.do_handshake()
File "/usr/lib/python2.7/ssl.py", line 283, in do_handshake
self._sslobj.do_handshake()socket.error: [Errno 104] Connection reset by peer
So
what is wrong with my code?
The codes are so simple and so much like python official site sample
demonstration, but I still cant get it work, so frustrating.
Seems the problem happened on server side then cause client side cant connect
well, is that right?
My platform is ubuntu, with openssl 0.9.8 and python 2.7.
All certificates and keys self-signed by openssl for test convenience.
This is the site for referrence :
http://andyjeffries.co.uk/articles/x509-encrypted-authenticated-socket-ruby-client
Or should I need a real certificate issued by a real CA to let things work?
Any tips or suggestions welcomed, thank you very much~
Good day.
Kay
--
http://mail.python.org/mailman/listinfo/python-list
RE: Localhost client-server simple ssl socket test program problems
Thanks for tips. But I dont understand one thing is if Python's SSL lib doesn't support encrypted private keys for sockets. Then why should we "encrypt" the private key with "openssl rsa -in /etc/home/ckyang/PHA/testsslsocket/mypha.key -out /etc/home/ckyang/PHA/testsslsocket/mypha-nopasswd.key" again? Shouldn't that be decrypted? And also this solution is not the right one, I use mypha-nopasswd.key replace the original one, still not work. So sad. But thanks. ^ ^ Kay > To: [email protected] > From: [email protected] > Subject: Re: Localhost client-server simple ssl socket test program problems > Date: Thu, 15 Dec 2011 20:45:43 +0100 > > Am 15.12.2011 20:09, schrieb Yang Chun-Kai: > > Server side error: > > > > File "views.py", line 17, in > > connstream = ssl.wrap_socket(newsocket, server_side=True, > > certfile="/etc/home/ckyang/PHA/testsslsocket/mypha.crt", > > keyfile="/etc/home/ckyang/PHA/testsslsocket/mypha.key", > > ssl_version=ssl.PROTOCOL_SSLv23) > > File "/usr/lib/python2.7/ssl.py", line 344, in wrap_socket > > ciphers=ciphers) > > File "/usr/lib/python2.7/ssl.py", line 119, in __init__ > > ciphers) > > ssl.SSLError: [Errno 336265218] _ssl..c:347: error:140B0002:SSL > > routines:SSL_CTX_use_PrivateKey_file:system lib > > This error is most likely caused by an encrypted private key. Python's > SSL lib doesn't support encrypted private keys for sockets. You can > encrypt the private key with > >openssl rsa -in /etc/home/ckyang/PHA/testsslsocket/mypha.key -out > /etc/home/ckyang/PHA/testsslsocket/mypha-nopasswd.key > > Christian > > > -- > http://mail.python.org/mailman/listinfo/python-list -- http://mail.python.org/mailman/listinfo/python-list
RE: Localhost client-server simple ssl socket test program problems
Hello~ Thanks for your fast reply. No, it doesn't ask for password, just a single line with "writing RSA kay", then mypha-nopasswd.key appeared. If my key is not in PEM Format, can openssl with simple commands to switch it to? Or I should re-do the self-signed process with some certain key-words / parameters? And what you mean about Python 2.x's SSL module doesn't support cert directories ? Can you be more specific about that ^^. Do you mean parameters with certfile and keyfile those two should put together or CA certificate need to be chained with other CA? Thanks. Kay > To: [email protected] > From: [email protected] > Subject: Re: Localhost client-server simple ssl socket test program problems > Date: Thu, 15 Dec 2011 21:19:14 +0100 > > Am 15.12.2011 21:09, schrieb Yang Chun-Kai: > > Thanks for tips. > > > > But I dont understand one thing is if Python's SSL lib doesn't support > > encrypted private keys for sockets. > > > > Then why should we "encrypt" the private key with "openssl rsa -in > > /etc/home/ckyang/PHA/testsslsocket/mypha.key -out > > > > /etc/home/ckyang/PHA/testsslsocket/mypha-nopasswd.key" again? > > > > Shouldn't that be decrypted? > > > > And also this solution is not the right one , I use mypha-nopasswd.key > > replace the original one, still not work. > > IIRC the command should decrypt the key. Did it prompt for a password? > > The error could be caused by other issues. For example the key and cert > must be in PEM Format. The PKS#12 isn't supported. I'm not sure if > Python's builtin SSL module loads DER certs. > > You may also missing a valid CA cert chain. Python 2.x's SSL module > doesn't support cert directories so you have to provide a chain file. > The certs in the chain file must be in the right order, too. > > Christian > > -- > http://mail.python.org/mailman/listinfo/python-list -- http://mail.python.org/mailman/listinfo/python-list
RE: Localhost client-server simple ssl socket test program problems
> To: [email protected]
> From: [email protected]
> Subject: Re: Localhost client-server simple ssl socket test program problems
> Date: Thu, 15 Dec 2011 20:45:43 +0100
>
> Am 15.12.2011 20:09, schrieb Yang Chun-Kai:
> > Server side error:
> >
> > File "views.py", line 17, in
> > connstream = ssl.wrap_socket(newsocket, server_side=True,
> > certfile="/etc/home/ckyang/PHA/testsslsocket/mypha.crt",
> > keyfile="/etc/home/ckyang/PHA/testsslsocket/mypha.key",
> > ssl_version=ssl.PROTOCOL_SSLv23)
> > File "/usr/lib/python2.7/ssl.py", line 344, in wrap_socket
> > ciphers=ciphers)
> > File "/usr/lib/python2.7/ssl.py", line 119, in __init__
> > ciphers)
> > ssl.SSLError: [Errno 336265218] _ssl.c:347: error:140B0002:SSL
> > routines:SSL_CTX_use_PrivateKey_file:system lib
>
> This error is most likely caused by an encrypted private key. Python's
> SSL lib doesn't support encrypted private keys for sockets. You can
> encrypt the private key with

I generate the server private key with "openssl genrsa -out mypha.key 2048".
But this seems to be the standard command to do it.
How do I get the private key without it being encrypted?
Or should I always do this and then run it through openssl rsa again to get it decrypted?
If I use the encrypted key and the .csr to produce my certificate, will that be different from using the decrypted key?
Thanks.
Kay

> openssl rsa -in /etc/home/ckyang/PHA/testsslsocket/mypha.key -out
> /etc/home/ckyang/PHA/testsslsocket/mypha-nopasswd.key
>
> Christian
>
> --
> http://mail.python.org/mailman/listinfo/python-list
--
http://mail.python.org/mailman/listinfo/python-list
python2.7 kill thread and find thread id
Hello, guys!! I am using Python 2.7 to write a simple thread program which prints the current running thread id and kills the thread with this id. But I have some questions about this. My code:

from threading import Thread

class t(Thread):
    def __init__(self):
        Thread.__init__(self)

    def run(self):
        self.tid = Thread.get_ident()
        print 'thread id is', self.tid

    def kill(self):
        ***  # how to do this with its own id, for example "exit(self.tid)"?

if __name__ == "__main__":
    go = t()
    go.start()
    go.kill()

First, I can't call get_ident(); it seems not supported. Second, how do I kill the thread with its own id? I know I can use SystemExit() to shut this down, but I want to kill a certain thread, not the whole program. Anyone know how to fix my code to achieve it? Any tips welcomed. Thank you in advance. Kay
-- http://mail.python.org/mailman/listinfo/python-list
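For reference, a sketch of the usual Python 2.7 pattern (the class name Worker and the Event attribute are mine): get_ident() lives in the thread module (or use threading.current_thread()), not on the Thread class, and CPython provides no supported way to kill another thread by id, so the conventional approach is a cooperative stop flag that the thread checks and honours itself.

import threading
import thread  # Python 2 module that provides get_ident()

class Worker(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self._stop_requested = threading.Event()

    def run(self):
        # get_ident() comes from the thread module, not the Thread class.
        self.tid = thread.get_ident()
        print 'thread id is', self.tid
        while not self._stop_requested.is_set():
            self._stop_requested.wait(0.1)   # stand-in for real work

    def kill(self):
        # There is no exit(tid): the thread has to notice the flag and return.
        self._stop_requested.set()

if __name__ == "__main__":
    go = Worker()
    go.start()
    go.kill()
    go.join()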
RE: python2.7 kill thread and find thread id
Sorry for the misarrangement of my code in the list, it happens every time. I apologize.
From: [email protected]
To: [email protected]
Subject: python2.7 kill thread and find thread id
Date: Wed, 4 Jan 2012 14:10:46 +0800
Hello, guys!! I am using Python 2.7 to write a simple thread program which prints the current running thread id and kills the thread with this id. But I have some questions about this. My code:

from threading import Thread

class t(Thread):
    def __init__(self):
        Thread.__init__(self)

    def run(self):
        self.tid = Thread.get_ident()
        print 'thread id is', self.tid

    def kill(self):
        ***  # how to do this with its own id, for example "exit(self.tid)"?

if __name__ == "__main__":
    go = t()
    go.start()
    go.kill()

First, I can't call get_ident(); it seems not supported. Second, how do I kill the thread with its own id? I know I can use SystemExit() to shut this down, but I want to kill a certain thread, not the whole program. Anyone know how to fix my code to achieve it? Any tips welcomed. Thank you in advance. Kay
-- http://mail.python.org/mailman/listinfo/python-list
-- http://mail.python.org/mailman/listinfo/python-list
Python 3.5.0 python --version command reports 2.5.4
Hi, I just installed Python 3.5.0 (since 3.5.2 would not install on Windows 2008 R2) and tried the python --version command. Surprisingly, the command reported 2.5.4. What's going on? Gang Yang Shonborn-Becker Systems Inc. (SBSI) Contractor Engineering Supporting SEC Office: 732-982-8561, x427 Cell: 732-788-7501 Email: [email protected] -- https://mail.python.org/mailman/listinfo/python-list
RE: [Non-DoD Source] Re: Python 3.5.0 python --version command reports 2.5.4
Thanks to all that replied. Indeed I had CollabNet SVN server installed a while back and it came with an older version of Python.
Gang Yang
Shonborn-Becker Systems Inc. (SBSI) Contractor Engineering Supporting SEC
Office: 732-982-8561, x427 Cell: 732-788-7501
Email: [email protected]
From: Python-list [[email protected]] on behalf of Terry Reedy [[email protected]]
Sent: Wednesday, September 07, 2016 12:42 AM
To: [email protected]
Subject: [Non-DoD Source] Re: Python 3.5.0 python --version command reports 2.5.4
On 9/6/2016 4:59 PM, [email protected] wrote:
> I just installed Python 3.5.0 (since 3.5.2 would not install on Windows 2008
> R2) and tried the python --version command. Surprisingly, the command reported
> 2.5.4. What's going on?
Most likely you have 2.5.4 installed and are running it.
--
Terry Jan Reedy
-- https://mail.python.org/mailman/listinfo/python-list
-- https://mail.python.org/mailman/listinfo/python-list
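As a generic diagnostic for this kind of mix-up (not specific to the CollabNet case), running the one-liner below through whatever command answers to plain "python" shows exactly which interpreter is first on PATH:

# e.g.  python -c "import sys; print(sys.executable); print(sys.version)"
import sys
print(sys.executable)   # full path of the interpreter that actually ran
print(sys.version)      # its version string, e.g. an old bundled 2.5.4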
How to configure trusted CA certificates for SSL client?
Hi, I'm using Python 3.x (3.5 on Windows 2008 and 3.4 on CentOS 6.7) and have run into an SSL client-side CA certificate issue. The issue came up when a third-party package (django-cas-ng) tried to verify the CAS service ticket (ST) by calling the CAS server with requests.get(...) and failed with a CERTIFICATE_VERIFY_FAILED error. The CAS server is accessed over HTTPS with a self-signed server certificate. Following some suggestions on the internet, I've tried to modify django-cas-ng's code to call requests.get(...) with the verify parameter, such as requests.get(..., verify=False) and requests.get(..., verify="CAS server cert"). Both workarounds worked, but I can't change third-party package code. I also tried to add the CAS server cert to the underlying OS (Windows 2008 and CentOS 6.7), but it did not help.
My question is: where does the SSL client code get the trusted CA certificates from, from Python or from the underlying OS? What configuration do I need for the SSL client to complete the SSL handshake successfully? Appreciate any help! Gang
Gang Yang Shonborn-Becker Systems Inc. (SBSI) Contractor Engineering Supporting SEC Office: 732-982-8561, x427 Cell: 732-788-7501 Email: [email protected] -- https://mail.python.org/mailman/listinfo/python-list
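Two things worth checking here, sketched below with a made-up bundle path and CAS URL: the stdlib ssl module reports its default trust locations via ssl.get_default_verify_paths(), while requests ships its own CA bundle (certifi) and by default ignores the OS store, which is why adding the certificate to the operating system did not help. Setting the REQUESTS_CA_BUNDLE environment variable to a PEM file containing the self-signed CAS certificate is picked up by requests at request time, so django-cas-ng's call can succeed without modifying its code.

import os
import ssl
import requests

# Where the stdlib ssl module looks for trusted CA files by default:
print(ssl.get_default_verify_paths())

# requests reads REQUESTS_CA_BUNDLE from the environment for each request
# (when trust_env is left at its default), so pointing it at a PEM file
# that contains the self-signed CAS certificate makes verification pass
# for every requests call in this process. Path and URL are illustrative.
os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/cas-server.pem"
resp = requests.get("https://cas.example.org/cas/serviceValidate")
print(resp.status_code)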
