Re: [Tutor] Python 2.7.1 interpreter passing function pointer as function argument and Shedskin 0.7

2010-12-29 Thread Frank Chang
 I asked the Shedskin developers about this issue and they are currently
adding support for __call__. They recommend renaming the Matcher class's
__call__ method (for example, to next) and then calling it explicitly on line
148 as lookup_func.next(match).
 I followed their suggestion and the Shedskin 0.7 Python-to-C++ compiler
no longer complains about the unbound identifier 'lookup_func'.
 I apologize for the cut-and-paste mangling. Is there a better method
than copy-pasting for including 20 or more lines of Python source code in
the tutor posts? Thank you.

def find_all_matches(self, word, k, lookup_func):
    lev = self.levenshtein_automata(word, k).to_dfa()
    match = lev.next_valid_string('\0')
    while match:
        follow = lookup_func.test(match)  ### line 148 ###
        if not follow:
            return
        if match == follow:
            yield match
        follow = follow + '\0'
        match = lev.next_valid_string(follow)

class Matcher(object):
    def __init__(self, l):
        self.l = l
        self.probes = 0

    def test(self, w):
        self.probes += 1
        pos = bisect.bisect_left(self.l, w)
        if pos < len(self.l):
            return self.l[pos]
        else:
            return None
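For readers skimming the thread, here is a minimal, runnable sketch of how the Matcher above behaves against a sorted word list (the word list here is invented for illustration; note that bisect must be imported):

```python
import bisect

class Matcher(object):
    """Probe a sorted word list: return the first word >= w, or None."""
    def __init__(self, l):
        self.l = l
        self.probes = 0

    def test(self, w):
        self.probes += 1
        pos = bisect.bisect_left(self.l, w)
        if pos < len(self.l):
            return self.l[pos]
        return None

words = sorted(["mice", "nice", "niche", "rice"])
m = Matcher(words)
print(m.test("nice"))   # exact hit: 'nice'
print(m.test("nicf"))   # no exact hit, next candidate: 'niche'
print(m.test("zzz"))    # past the end: None
print(m.probes)         # 3
```

find_all_matches feeds candidate strings from the Levenshtein automaton into this probe; the returned word tells it where to resume the search.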
___
Tutor maillist  -  Tutor@python.org
To unsubscribe or change subscription options:
http://mail.python.org/mailman/listinfo/tutor


Re: [Tutor] Python 2.7.1 interpreter passing function pointer as function argument and Shedskin 0.7

2010-12-29 Thread Alan Gauld


"Frank Chang"  wrote

I apologize for the cut and paste mangling. Is there a better method
than copy-pasting for including 20 or more lines of python source code in
the tutor posts? Thank you.


Long listings are usually better in pastebin - where the indentation is
clear and we get syntax colouring too. It all makes them more readable :-)

Then just post the url to the pastebin in your mail.

HTH,


--
Alan Gauld
Author of the Learn to Program web site
http://www.alan-g.me.uk/




[Tutor] scraping and saving in file

2010-12-29 Thread Tommy Kaas
Hi,

I’m trying to learn basic web scraping and starting from scratch. I’m using
ActivePython 2.6.6.

I have uploaded a simple table on my web page and try to scrape it and will
save the result in a text file. I will separate the columns in the file with
#.

It works fine, but besides the # I also get spaces between the columns in the
text file. How do I avoid that?

This is the script:

import urllib2
from BeautifulSoup import BeautifulSoup

f = open('tabeltest.txt', 'w')

soup = BeautifulSoup(urllib2.urlopen('http://www.kaasogmulvad.dk/unv/python/tabeltest.htm').read())

rows = soup.findAll('tr')

for tr in rows:
    cols = tr.findAll('td')
    print >> f, cols[0].string,'#',cols[1].string,'#',cols[2].string,'#',cols[3].string

f.close()

And the text file looks like this:

Kommunenr # Kommune # Region # Regionsnr
101 # København # Hovedstaden # 1084
147 # Frederiksberg # Hovedstaden # 1084
151 # Ballerup # Hovedstaden # 1084
153 # Brøndby # Hovedstaden # 1084
155 # Dragør # Hovedstaden # 1084

Thanks in advance

Tommy Kaas

Kaas & Mulvad
Lykkesholms Alle 2A, 3.
1902 Frederiksberg C

Mobil: 27268818
Mail: tommy.k...@kaasogmulvad.dk
Web: www.kaasogmulvad.dk



Re: [Tutor] Python 2.7.1 interpreter passing function pointer as function argument and Shedskin 0.7

2010-12-29 Thread Steven D'Aprano

Alan Gauld wrote:


"Frank Chang"  wrote


I apologize for the cut and paste mangling. Is there a better method
than copy-pasting for including 20 or more lines of python source code in
the tutor posts? Thank you.


Long listings are usually better in pastebin - where the indentation is
clear and we get syntax colouring too. It all makes them more readable :-)



Even better is to spend the time to work out the smallest amount of code 
that exhibits the problem. The code you post must be runnable, it must 
actually demonstrate the problem -- it is surprising how many people 
post "broken" code that actually works, or worse, code that fails in 
some completely different way -- and lastly, and most importantly, it 
should be as small as possible.


Doing so shows respect to those you are asking to look at the code, AND 
maximizes the number of people willing to spend the time answering your 
question: instead of looking at 200 lines and seven functions, they only 
need to look at one function and sixteen lines of code. (Or whatever it 
takes to reproduce the problem.)


Even more important, while trying to reproduce the problem in the 
minimum code possible, you may very well solve the problem yourself. I 
can't tell you how many times I've gone to write up a request for help, 
but while refactoring my code I've discovered the mistake and didn't 
need to post my question.




--
Steven


Re: [Tutor] scraping and saving in file

2010-12-29 Thread Steven D'Aprano

Tommy Kaas wrote:


I have uploaded a simple table on my web page and try to scrape it and will
save the result in a text file. I will separate the columns in the file with
#.

It works fine but besides # I also get spaces between the columns in the
text file. How do I avoid that?


The print command puts spaces between each output object:

>>> print 1, 2, 3  # Three objects being printed.
1 2 3

To prevent this, use a single output object. There are many ways to do 
this, here are three:


>>> print "%d%d%d" % (1, 2, 3)
123
>>> print str(1) + str(2) + str(3)
123
>>> print ''.join('%s' % n for n in (1, 2, 3))
123
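A side note for Python 3 readers: there print is a function, and its sep keyword controls the separator directly; str.join works the same way on both versions.

```python
# Python 3: print() takes a `sep` keyword that controls what goes
# between the printed objects (a space by default).
print(1, 2, 3)           # 1 2 3
print(1, 2, 3, sep='#')  # 1#2#3

# str.join gives the same control on Python 2 and 3 alike:
line = '#'.join(str(n) for n in (1, 2, 3))
print(line)              # 1#2#3
```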


But in your case, the best way is not to use print at all. You are 
writing to a file -- write to the file directly, don't mess about with 
print. Untested:



f = open('tabeltest.txt', 'w')
url = 'http://www.kaasogmulvad.dk/unv/python/tabeltest.htm'
soup = BeautifulSoup(urllib2.urlopen(url).read())
rows = soup.findAll('tr')
for tr in rows:
cols = tr.findAll('td')
output = "#".join(cols[i].string for i in (0, 1, 2, 3))
f.write(output + '\n')  # don't forget the newline after each row
f.close()



--
Steven



Re: [Tutor] scraping and saving in file

2010-12-29 Thread Knacktus

On 29.12.2010 10:54, Tommy Kaas wrote:

Hi,

I’m trying to learn basic web scraping and starting from scratch. I’m
using Activepython 2.6.6

I have uploaded a simple table on my web page and try to scrape it and
will save the result in a text file. I will separate the columns in the
file with #.

It works fine but besides # I also get spaces between the columns in the
text file. How do I avoid that?

This is the script:

import urllib2

from BeautifulSoup import BeautifulSoup

f = open('tabeltest.txt', 'w')

soup = BeautifulSoup(urllib2.urlopen('http://www.kaasogmulvad.dk/unv/python/tabeltest.htm').read())

rows = soup.findAll('tr')

for tr in rows:

     cols = tr.findAll('td')

     print >> f, cols[0].string,'#',cols[1].string,'#',cols[2].string,'#',cols[3].string


You can strip the whitespace from the strings. I assume the 
"string" attribute returns a string (I don't know the API of Beautiful 
Soup). E.g.:

cols[0].string.strip()

Also, you can use join() to create the complete string:

resulting_string = "#".join([col.string.strip() for col in cols])

The long version without a list comprehension (just for illustration; 
better to use the list comprehension). With four columns the valid 
indices are 0 through 3:


resulting_string = "#".join([cols[0].string.strip(),
    cols[1].string.strip(), cols[2].string.strip(), cols[3].string.strip()])


HTH,

Jan






f.close()

And the text file looks like this:

Kommunenr # Kommune # Region # Regionsnr

101 # København # Hovedstaden # 1084

147 # Frederiksberg # Hovedstaden # 1084

151 # Ballerup # Hovedstaden # 1084

153 # Brøndby # Hovedstaden # 1084

155 # Dragør # Hovedstaden # 1084

Thanks in advance

Tommy Kaas

Kaas & Mulvad

Lykkesholms Alle 2A, 3.

1902 Frederiksberg C

Mobil: 27268818

Mail: tommy.k...@kaasogmulvad.dk 

Web: www.kaasogmulvad.dk 







Re: [Tutor] scraping and saving in file

2010-12-29 Thread Peter Otten
Tommy Kaas wrote:

> I’m trying to learn basic web scraping and starting from scratch. I’m
> using Activepython 2.6.6

> I have uploaded a simple table on my web page and try to scrape it and
> will save the result in a text file. I will separate the columns in the
> file with
> #.
 
> It works fine but besides # I also get spaces between the columns in the
> text file. How do I avoid that?

> This is the script:

> import urllib2
> from BeautifulSoup import BeautifulSoup
> f = open('tabeltest.txt', 'w')
> soup = BeautifulSoup(urllib2.urlopen('http://www.kaasogmulvad.dk/unv/python/tabeltest.htm').read())
 
> rows = soup.findAll('tr')

> for tr in rows:
> cols = tr.findAll('td')
> print >> f, cols[0].string,'#',cols[1].string,'#',cols[2].string,'#',cols[3].string
> 
> f.close()

> And the text file looks like this:

> Kommunenr # Kommune # Region # Regionsnr
> 101 # København # Hovedstaden # 1084
> 147 # Frederiksberg # Hovedstaden # 1084
> 151 # Ballerup # Hovedstaden # 1084
> 153 # Brøndby # Hovedstaden # 1084

The print statement automatically inserts spaces, so you can either resort 
to the write method

for i in range(4):
if i:
f.write("#")
f.write(cols[i].string)

which is a bit clumsy, or you build the complete line and then print it as a 
whole:

print >> f, "#".join(col.string for col in cols)

Note that you have non-ascii characters in your data -- I'm surprised that 
writing to a file works for you. I would expect that

import codecs
f = codecs.open("tmp.txt", "w", encoding="utf-8")

is needed to successfully write your data to a file.

Peter



Re: [Tutor] scraping and saving in file

2010-12-29 Thread Tommy Kaas
Steven D'Aprano wrote:
> But in your case, the best way is not to use print at all. You are writing
to a
> file -- write to the file directly, don't mess about with print. Untested:
> 
> 
> f = open('tabeltest.txt', 'w')
> url = 'http://www.kaasogmulvad.dk/unv/python/tabeltest.htm'
> soup = BeautifulSoup(urllib2.urlopen(url).read())
> rows = soup.findAll('tr')
> for tr in rows:
>  cols = tr.findAll('td')
>  output = "#".join(cols[i].string for i in (0, 1, 2, 3))
>  f.write(output + '\n')  # don't forget the newline after each row
> f.close()

Steven, thanks for the advice. 
I see the point. But now I have problems with the Danish characters. I get
this:

Traceback (most recent call last):
  File "C:/pythonlib/kursus/kommuner-regioner_ny.py", line 36, in <module>
f.write(output + '\n')  # don't forget the newline after each row
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf8' in position
5: ordinal not in range(128)

I have tried to add # -*- coding: utf-8 -*- to the top of the script, but it
doesn't help.

Tommy




Re: [Tutor] scraping and saving in file

2010-12-29 Thread Peter Otten
Tommy Kaas wrote:

> Steven D'Aprano wrote:
>> But in your case, the best way is not to use print at all. You are
>> writing
> to a
>> file -- write to the file directly, don't mess about with print.
>> Untested:
>> 
>> 
>> f = open('tabeltest.txt', 'w')
>> url = 'http://www.kaasogmulvad.dk/unv/python/tabeltest.htm'
>> soup = BeautifulSoup(urllib2.urlopen(url).read())
>> rows = soup.findAll('tr')
>> for tr in rows:
>>  cols = tr.findAll('td')
>>  output = "#".join(cols[i].string for i in (0, 1, 2, 3))
>>  f.write(output + '\n')  # don't forget the newline after each row
>> f.close()
> 
> Steven, thanks for the advice.
> I see the point. But now I have problems with the Danish characters. I get
> this:
> 
> Traceback (most recent call last):
>   File "C:/pythonlib/kursus/kommuner-regioner_ny.py", line 36, in <module>
> f.write(output + '\n')  # don't forget the newline after each row
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xf8' in
> position 5: ordinal not in range(128)
> 
> I have tried to add # -*- coding: utf-8 -*- to the top of the script, but
> It doesn't help?

The coding cookie only affects unicode string constants in the source code, 
it doesn't change how the unicode data coming from BeautifulSoup is handled.
As I suspected in my other post you have to convert your data to a specific 
encoding (I use UTF-8 below) before you can write it to a file:

import urllib2 
import codecs
from BeautifulSoup import BeautifulSoup 

html = urllib2.urlopen(
'http://www.kaasogmulvad.dk/unv/python/tabeltest.htm').read()
soup = BeautifulSoup(html)

with codecs.open('tabeltest.txt', "w", encoding="utf-8") as f:
rows = soup.findAll('tr')
for tr in rows:
cols = tr.findAll('td')
print >> f, "#".join(col.string for col in cols)

The with statement implicitly closes the file, so you can avoid f.close() at 
the end of the script.
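(For completeness: on Python 3 the built-in open() accepts an encoding argument directly, so codecs.open() is not needed there. A small round-trip sketch; the file path is generated just for the demonstration.)

```python
import os
import tempfile

# Two of the thread's sample rows, containing Danish characters.
rows = ["101#K\u00f8benhavn#Hovedstaden#1084",
        "155#Drag\u00f8r#Hovedstaden#1084"]

# Python 3: pass the encoding straight to open() instead of codecs.open().
path = os.path.join(tempfile.mkdtemp(), "tabeltest.txt")
with open(path, "w", encoding="utf-8") as f:
    for line in rows:
        f.write(line + "\n")

# Reading back with the same encoding restores the characters intact.
with open(path, encoding="utf-8") as f:
    assert f.read().splitlines() == rows
```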

Peter



[Tutor] scraping and saving in file SOLVED

2010-12-29 Thread Tommy Kaas
With Steven's help about writing and Peter's help about import codecs - and when
I used \r\n instead of \n to give me new lines, everything worked.
I just thought that \n would be necessary?
Thanks.
Tommy

> -----Original message-----
> From: tutor-bounces+tommy.kaas=kaasogmulvad...@python.org
> [mailto:tutor-bounces+tommy.kaas=kaasogmulvad...@python.org] On
> behalf of Peter Otten
> Sent: 29 December 2010 11:46
> To: tutor@python.org
> Subject: Re: [Tutor] scraping and saving in file
> 
> Tommy Kaas wrote:
> 
> > I’m trying to learn basic web scraping and starting from scratch. I’m
> > using Activepython 2.6.6
> 
> > I have uploaded a simple table on my web page and try to scrape it and
> > will save the result in a text file. I will separate the columns in
> > the file with #.
> 
> > It works fine but besides # I also get spaces between the columns in
> > the text file. How do I avoid that?
> 
> > This is the script:
> 
> > import urllib2
> > from BeautifulSoup import BeautifulSoup
> > f = open('tabeltest.txt', 'w')
> > soup = BeautifulSoup(urllib2.urlopen('http://www.kaasogmulvad.dk/unv/python/tabeltest.htm').read())
> 
> > rows = soup.findAll('tr')
> 
> > for tr in rows:
> > cols = tr.findAll('td')
> > print >> f, cols[0].string,'#',cols[1].string,'#',cols[2].string,'#',cols[3].string
> >
> > f.close()
> 
> > And the text file looks like this:
> 
> > Kommunenr # Kommune # Region # Regionsnr
> > 101 # København # Hovedstaden # 1084
> > 147 # Frederiksberg # Hovedstaden # 1084
> > 151 # Ballerup # Hovedstaden # 1084
> > 153 # Brøndby # Hovedstaden # 1084
> 
> The print statement automatically inserts spaces, so you can either resort to
> the write method
> 
> for i in range(4):
> if i:
> f.write("#")
> f.write(cols[i].string)
> 
> which is a bit clumsy, or you build the complete line and then print it as a
> whole:
> 
> print >> f, "#".join(col.string for col in cols)
> 
> Note that you have non-ascii characters in your data -- I'm surprised that
> writing to a file works for you. I would expect that
> 
> import codecs
> f = codecs.open("tmp.txt", "w", encoding="utf-8")
> 
> is needed to successfully write your data to a file
> 
> Peter
> 


Re: [Tutor] scraping and saving in file

2010-12-29 Thread Dave Angel

Tommy Kaas wrote:

Steven D'Aprano wrote:

But in your case, the best way is not to use print at all. You are writing

to a

file -- write to the file directly, don't mess about with print. Untested:


f = open('tabeltest.txt', 'w')
url = 'http://www.kaasogmulvad.dk/unv/python/tabeltest.htm'
soup = BeautifulSoup(urllib2.urlopen(url).read())
rows = soup.findAll('tr')
for tr in rows:
  cols = tr.findAll('td')
  output = "#".join(cols[i].string for i in (0, 1, 2, 3))
  f.write(output + '\n')  # don't forget the newline after each row
f.close()


Steven, thanks for the advice.
I see the point. But now I have problems with the Danish characters. I get
this:

Traceback (most recent call last):
   File "C:/pythonlib/kursus/kommuner-regioner_ny.py", line 36, in <module>
 f.write(output + '\n')  # don't forget the newline after each row
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf8' in position
5: ordinal not in range(128)

I have tried to add # -*- coding: utf-8 -*- to the top of the script, but It
doesn't help?

Tommy

The coding line only affects how characters in the source module are 
interpreted.  For each file you input or output, you need to also decide 
the encoding to use.  As Peter said, you probably need

codecs.open(filename, "w", encoding="utf-8")

DaveA



[Tutor] python telnet slow down.

2010-12-29 Thread Rayon
Hi, I need help with some telnet automation I am trying to build.
I need to log in to a Nortel switch, send a table dump command, capture
that data and send it to a file.
I have the code; it works and does all that I need. My problem is that when I
try to dump a table larger than 4 MB it can take hours, but when I use
CrossTalk and do it the manual way it takes 20 minutes.
I need to know why, and whether there are any changes I can make to my code to
fix this.



#!/usr/bin/env python

import telnetlib
#import telnet_hack

class Telnet_lib:

    host = {'server': '', 'user': '', 'password': ''}
    commands = []
    read_till = ''
    capture_file = ''

    def fileopen(self, capture_file, data):
        try:
            out_file = open(capture_file, "w")
            out_file.write(data)
        except IOError, error:
            print error

    def gtt_telnet(self):
        tn = telnetlib.Telnet(self.host['server'])
        tn.set_debuglevel(100)
        tn.read_until('Enter User Name', 3)
        tn.write(self.host['user'] + "\r")
        tn.read_until('Password: ', 3)
        tn.write(self.host['password'] + "\r")
        tn.read_until('HGUO0190.PPC4 V:103', 3)
        for com in self.commands:
            tn.write(com + "\r")
        print 'dump starts'
        tn.read_some()
        #data = tn.read_until(self.read_till)
        #self.fileopen(self.capture_file, data)



Re: [Tutor] scraping and saving in file SOLVED

2010-12-29 Thread Peter Otten
Tommy Kaas wrote:

> With Stevens help about writing and Peters help about import codecs - and
> when I used \r\n instead of \r to give me new lines everything worked. I
> just thought that \n would be necessary? Thanks.
> Tommy

Newline handling varies across operating systems. If you are on Windows and 
open a file in text mode your program sees plain "\n", but the data stored
on disk is "\r\n". Most other OSes don't mess with newlines.

If you always want "\r\n" you can rely on the csv module to write your data, 
but the drawback is that you have to encode the strings manually:

import csv
import urllib2 
from BeautifulSoup import BeautifulSoup 

html = urllib2.urlopen(
'http://www.kaasogmulvad.dk/unv/python/tabeltest.htm').read()
soup = BeautifulSoup(html)

with open('tabeltest.txt', "wb") as f:
writer = csv.writer(f, delimiter="#")
rows = soup.findAll('tr')
for tr in rows:
cols = tr.findAll('td')
writer.writerow([unicode(col.string).encode("utf-8")
 for col in cols])

PS: It took me some time to figure out how to deal with beautifulsoup's 
flavour of unicode:

>>> import BeautifulSoup as bs
>>> s = bs.NavigableString(u"älpha")
>>> s
u'\xe4lpha'
>>> s.encode("utf-8")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/pymodules/python2.6/BeautifulSoup.py", line 430, in encode
return self.decode().encode(encoding)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe4' in position 
0: ordinal not in range(128)
>>> unicode(s).encode("utf-8") # heureka
'\xc3\xa4lpha'
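Incidentally, the csv module's default line terminator is "\r\n", which matches what Tommy reports working in the SOLVED follow-up. On Python 3 the csv module works with text directly, so no manual encoding step is needed; a small sketch with the thread's sample data:

```python
import csv
import io

rows = [["101", "K\u00f8benhavn", "Hovedstaden", "1084"],
        ["147", "Frederiksberg", "Hovedstaden", "1084"]]

buf = io.StringIO()
writer = csv.writer(buf, delimiter="#")  # rows end with "\r\n" by default
writer.writerows(rows)
print(buf.getvalue())
```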




[Tutor] Print Help

2010-12-29 Thread delegbede
I want to loop over a list and then print out one statement that carries all
the items in the list, e.g.

def checking(responses):

    ''' Converts strings that look like A2C3D89AA90 into
    {'A':'2', 'C':'3', 'D':'89', 'AA':'90'}'''

    responses = dict(re.findall(r'([A-Z]{1,2})([0-9]+)', responses.upper())) if responses else {}
    for key in responses.keys():
        count = []
        if key in ['A'] and int(responses[key]) not in range(1,3):
            count += key
        elif key in ['B', 'G', 'T', 'U', 'V', 'W', 'X'] and int(responses[key]) not in range(1, 3):
            count += key
        elif key in ['H', 'J', 'K', 'M', 'N', 'P', 'Q','R','S'] and int(responses[key]) not in range(1, 6):
            count += key
        elif key in ['D', 'E'] and int(responses[key]) == 4:
            print 'accepted'
    for err in count:
        count = ", ".join(count)

    print "Invalid Message: You specified out of range values for %s" % (", ".join(count))

What I am expecting is:
Invalid Message: You specified out of range values for a,s,d,r,t,u

Assuming those were the values that are incorrect.

Please help.

Sent from my BlackBerry wireless device from MTN


Re: [Tutor] scraping and saving in file SOLVED

2010-12-29 Thread Stefan Behnel

Peter Otten, 29.12.2010 13:45:

   File "/usr/lib/pymodules/python2.6/BeautifulSoup.py", line 430, in encode
 return self.decode().encode(encoding)


Wow, that's evil.

Stefan



Re: [Tutor] Print Help

2010-12-29 Thread Dave Angel

delegb...@dudupay.com wrote:

#§¶Ú%¢Šh½êÚ–+-jwm…éé®)í¢ëZ¢w¬µ«^™éí¶­qªë‰ë–[az+^šÈ§¶¥ŠË^×Ÿrœ’)à­ë)¢{°*'½êí²ËkŠx,¶­–Š$–)`·...@Ý"žÚ


Not sure why the quoting of this particular message didn't work.  Since 
I have no problem with anyone else's message, I suggest it's one of your 
settings.


Recopying your original:

>I want to loop over a list and then print out a one statement that 
>carries all the items in the list e.g

>
>def checking(responses):
>
>''' Converts strings that look like A2C3D89AA90 into
>{'A':'2', 'C':'3', 'D':'89', 'AA':'90'}'''
>
>responses=dict(re.findall(r'([A-Z]{1,2})([0-9]+)', 
>responses.upper())) if responses else {}

>for key in responses.keys():
>count = []
>if key in ['A'] and int(responses[key]) not in 
>range(1,3): count += key
>elif key in ['B', 'G', 'T', 'U', 'V', 'W', 'X'] and 
>int(responses[key]) not in range(1, 3): count += key
>elif key in ['H', 'J', 'K', 'M', 'N', 'P', 
>'Q','R','S'] and int(responses[key]) not in range(1, 6): count += key
>elif key in ['D', 'E'] and int(responses[key]) == 4: 
>print 'accepted'

>for err in count:
>count = ", ".join(count)
>
>print "Invalid Message: You specified out of range values 
>for %s" % (", ".join(count))

>
>What I am expecting is:
>Invalid Message: You specified out of range values for a,s,d,r,t,u
>
>Assuming those were the values that are incorrect.

Hi Delegbede,

First comment is that your indentation is wrong, and the code won't 
compile. Did you want your print to line up with "for key" or with "elif"?


Second is that you keep defining count to have different types.  Choose 
what type it's supposed to have, and fix all the places that would make 
it different.  Your comments imply you want it to be a list, which is 
what you initialize it to.  But then you concatenate a string to it, 
which will give an error.  You probably wanted to append to it, using

count.append(key)

Then you have a nonsensical two-line loop: "for err in count", but inside 
the loop you change count. Delete both those lines, as you're going to 
use join inside your print statement instead.


Finally, you reassign the count list inside the loop, so it will only 
show the last values found. Move the count = [] outside the outermost loop.


Fix the code up so it doesn't get any compile or runtime errors, and if 
you can't, post a complete traceback for the error you can't fix.


And if you can run it without any errors, but the output is wrong, show 
us what it actually prints, not just what you'd like it to.


DaveA
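Putting Dave's points together, one possible corrected sketch (Python 3 syntax; the key-to-range table is copied from the original post, the function name is invented, the 'accepted' branch for D/E is omitted, and the function returns the bad keys instead of printing):

```python
import re

# Allowed value ranges per key, as listed in the original post.
ALLOWED = {}
for k in "A B G T U V W X".split():
    ALLOWED[k] = range(1, 3)
for k in "H J K M N P Q R S".split():
    ALLOWED[k] = range(1, 6)

def invalid_keys(text):
    """Parse 'A2C3D89AA90'-style strings and return the keys whose
    values fall outside their allowed range."""
    responses = dict(re.findall(r'([A-Z]{1,2})([0-9]+)', text.upper())) if text else {}
    bad = []  # one list, created once, outside the loop
    for key, value in responses.items():
        allowed = ALLOWED.get(key)
        if allowed is not None and int(value) not in allowed:
            bad.append(key)  # append to the list, don't concatenate a string
    return bad

bad = invalid_keys("A2B7H9D4")
if bad:
    print("Invalid Message: You specified out of range values for %s" % ", ".join(bad))
```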


[Tutor] Python 2.7.1 interpreter question

2010-12-29 Thread Frank Chang
   I separated my test program into two python files. The first one is
automata.py. The pastebin url for automata.py is:
http://pastebin.com/embed_iframe.php?i=J9MRPibX
   The second file is automata_test2.py. It imports automata.py. The
pastebin url for automata_test2.py is:
http://pastebin.com/embed_iframe.php?i=A44d2EvV

  When I use the Python 2.7.1 interpreter I get the following traceback:

F:\shedskin\shedskin-0.7>python automata_test2.py
Traceback (most recent call last):
  File "automata_test2.py", line 23, in <module>
list(automata.find_all_matches('nice', 1, m))
AttributeError: 'module' object has no attribute 'find_all_matches'

 Could anyone help me fix this error? I am new to python. Thank you.


Re: [Tutor] Python 2.7.1 interpreter question

2010-12-29 Thread Alan Gauld


"Frank Chang"  wrote 

automata.py. The pastebin url for automata.py is:
http://pastebin.com/embed_iframe.php?i=J9MRPibX



   list(automata.find_all_matches('nice', 1, m))
AttributeError: 'module' object has no attribute 'find_all_matches'


find_all_matches() is a method of a class. Therefore you need to 
call it via an instance of that class. You need to instantiate the 
class using the module name but after that the object will be 
local:


myObj = module.myclass()
myObj.myMethod()

HTH,







Re: [Tutor] Python 2.7.1 interpreter question

2010-12-29 Thread Steven D'Aprano

Frank Chang wrote:


  When I use the Python 2.7.1 interpreter I get the following traceback:

F:\shedskin\shedskin-0.7>python automata_test2.py
Traceback (most recent call last):
  File "automata_test2.py", line 23, in <module>
list(automata.find_all_matches('nice', 1, m))
AttributeError: 'module' object has no attribute 'find_all_matches'

 Could anyone help me fix this error? I am new to python. Thank you.


This error tells you that the module "automata.py" does not have a 
global-level object called "find_all_matches".


If you look at the module, you will see it does not have a global-level 
object called "find_all_matches", exactly as Python tells you. To fix 
this, you need to either:


(1) Give the automata module a function "find_all_matches" that does 
what you expect it to do; or


(2) Change automata_test2 to create a DFA instance, then call the 
find_all_matches method in that instance. Something like:


instance = automata.DFA(start_state)
instance.find_all_matches("word", k, lookup_func)


--
Steven
