Just a matter of curiosity, piqued by having to understand someone
else's code. Is the difference here just a matter of style, or is one
somehow better than the other?
>>> l
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
>>> ','.join([str(x) for x in l])
'0,1,2,3,4,5,6,7,8,9,10'
>>> ','.join(map(lambda x:
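For what it's worth, the two spellings produce identical results; a minimal side-by-side comparison (using the same list `l` as the session above, and `map(str, l)` in place of the lambda, which does the same thing):

```python
l = list(range(11))

# List comprehension: builds an intermediate list; often considered more readable.
comp = ','.join([str(x) for x in l])

# map: applies str to each element; map(str, l) avoids the lambda entirely.
mapped = ','.join(map(str, l))

print(comp)            # 0,1,2,3,4,5,6,7,8,9,10
assert comp == mapped  # same string either way
```

In modern Python the comprehension is usually preferred for readability, and a generator expression `','.join(str(x) for x in l)` avoids the intermediate list as well.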
Kent, Steve, Marty, et alia,
On Tue, 13 Jan 2009, Kent Johnson wrote:
Where does this come from? It looks like the string representation of
a dict. Can you get the actual dict? Perhaps there is a better way to
do whatever you are doing?
It does look like that doesn't it, but it's actually a j
All,
Something I don't understand (so what else is new?) about quoting and
escaping:
s = """ "some" \"thing\" """
s
' "some" "thing" '
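One way to see what is happening: inside a triple-quoted string, the escape `\"` and a bare `"` yield exactly the same character, so the backslashes change nothing there. A small check (my example, not from the thread):

```python
s = """ "some" \"thing\" """   # escaped quotes inside triple quotes
t = ' "some" "thing" '         # the same string, written plainly

assert s == t       # the escapes were unnecessary
print(repr(s))      # ' "some" "thing" '
```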
I've got strings like this:
s = """[{"title" : "Egton, Yorkshire", "start" : new Date(1201,1,4),
"description" : "Hardy's long name: Egton, Yorkshire.
Dear all,
I've been around and around with this and can't seem to conceptualize it
properly.
I've got a javascript object in a text file that I'd like to treat as json
so that I can import it into a python program via simplejson.loads();
however, it's not proper json because it has new Date(
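One common workaround (a sketch, not necessarily what the thread settled on) is to rewrite each `new Date(...)` call into a plain JSON value with a regular expression before parsing. The input string here is a shortened, invented version of the data above, and the standard-library `json` module stands in for `simplejson`:

```python
import json
import re

js = '[{"title": "Egton, Yorkshire", "start": new Date(1201, 1, 4)}]'

# Replace each new Date(y, m, d) call with a JSON array [y, m, d].
fixed = re.sub(r'new Date\(([^)]*)\)', r'[\1]', js)

data = json.loads(fixed)
print(data[0]['start'])   # [1201, 1, 4]
```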
Kent,
On Wed, 14 May 2008, Kent Johnson wrote:
I understand about removing elements from a container you're iterating. Is
data.remove(x) problematic in this context?
Yes. It can cause the iteration to skip elements of the list. Better to
post-process the list with a list comprehension:
evts = [
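The skipping Kent describes is easy to demonstrate; the values and the filtering condition here are invented for illustration:

```python
# Removing while iterating: each removal shifts the list left,
# so the element after the removed one is never visited.
data = [1, 1, 2, 1, 1, 3]
for x in data:
    if x == 1:
        data.remove(x)
print(data)   # [2, 1, 1, 3] -- two 1s were skipped and survive

# Post-processing with a list comprehension examines every element.
data = [1, 1, 2, 1, 1, 3]
data = [x for x in data if x != 1]
print(data)   # [2, 3]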
Bob, and Kent, Many thanks!
Sounds like the key 'processed' is created by the assignment x['processed'] =
True. So those dictionaries that have not experienced this assignment have no
such key. You should instead use: if 'processed' in x:
Doh! Now that WAS obvious
Try lst.remove(x)
Now t
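The behaviour described above is easy to reproduce; `processed` is the key from the thread, the rest of each dictionary is invented:

```python
events = [{'name': 'a'}, {'name': 'b', 'processed': True}]

for x in events:
    # x['processed'] would raise KeyError for the first dict;
    # a membership test is safe whether or not the key exists.
    if 'processed' in x:
        print(x['name'], 'already processed')   # b already processed

# dict.get returns a default instead of raising.
assert events[0].get('processed', False) is False
```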
Something basic about lists and loops that I'm not getting here. I've got
a long list of dictionaries that looks like this:
lst = [{'placename': u'Stow, Lincolnshire', 'long-name': u'Stow,
Lincolnshire.', 'end': datetime.date(1216, 9, 28), 'start':
datetime.date(1216, 9, 26)},
{'placename': u'
On Sun, 5 Aug 2007, Kent Johnson wrote:
Hmm...actually, isupper() works fine on unicode strings:
In [18]: s='H\303\211RON'.decode('utf-8')
In [21]: print 'H\303\211RON'
HÉRON
In [22]: s.isupper()
Out[22]: True
:-)
I modified uppers to include only the latin characters, and added the
apostroph
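For readers on Python 3, where every string is unicode, the same check needs no explicit `decode` step and É is classified as uppercase out of the box:

```python
# The same bytes as in the session above, decoded explicitly.
s = b'H\303\211RON'.decode('utf-8')
print(s)               # HÉRON

assert s.isupper()           # É counts as an uppercase letter
assert not 'Héron'.isupper() # a lowercase letter anywhere fails the test
```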
Kent, Many thanks again, and thanks too to Paul at
http://tinyurl.com/yrl8cy.
That's very effective, thanks very much for the detailed explanation;
however, I'm a little surprised that it's necessary. I would have thought
that there would be some standard module that included a unicode
equivalent
I'm parsing a utf-8 encoded file with lines characterized by placenames in
all caps thus:
HEREFORD, Herefordshire.
..other lines..
HÉRON (LE), Normandie.
..other lines..
I identify these lines for parsing using
for line in data:
if re.match(r'[A-Z]{2,}', line):
but of course this catches
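One unicode-aware alternative to `[A-Z]{2,}` (a sketch; the sample lines are taken from the examples above) is to match a run of word characters, which in Python 3 includes accented letters by default, and then let `str.isupper()` do the case classification:

```python
import re

lines = ['HEREFORD, Herefordshire.',
         'some other line',
         'HÉRON (LE), Normandie.']

def is_placename(line):
    # \w matches accented letters such as É, which [A-Z] misses.
    m = re.match(r'\w{2,}', line)
    return bool(m) and m.group().isupper()

hits = [l for l in lines if is_placename(l)]
print(hits)   # both all-caps placenames, including HÉRON
```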
On Thu, 5 Jul 2007, Kent Johnson wrote:
>> First, don't confuse unicode and utf-8.
>
> Too late ;-) already pitifully confused.
> This is a good place to start correcting that:
> http://www.joelonsoftware.com/articles/Unicode.html
Thanks for this, it's just what I needed!
> if s is your utf-8 s
Terry, thanks.
Sadly, I'm still missing something.
I've tried all the aliases in locale.py, most return
locale.Error: unsupported locale setting
one that doesn't is:
locale.setlocale(locale.LC_ALL, ('fr_fr'))
'fr_fr'
but if I set it thus it returns:
Angoul?äMe, Angoumois.
I'm running pyth
Dear All,
I have some utf-8 unicode text with lines like this:
ANVERS-LE-HOMONT, Maine.
ANGOULÊME, Angoumois.
ANDELY (le Petit), Normandie.
which I'm using as-is in this line of code:
place.append(line.strip())
What I would prefer would be something like this:
place.append(line.title().strip
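In Python 3, where lines read from a UTF-8 file are already unicode strings, the title-casing the poster wants works directly on accented names. One caveat worth noting: `str.title()` capitalises after every hyphen and apostrophe as well, which may or may not be wanted here:

```python
line = 'ANGOULÊME, Angoumois.\n'
place = line.strip().title()
print(place)   # Angoulême, Angoumois.

# title() also capitalises after each hyphen:
print('ANVERS-LE-HOMONT'.title())   # Anvers-Le-Homont
```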
Kent,
That's damned clever! Your solution hovers right at the limit of my
understanding, but the print statements illustrate very clearly the
operation of the function.
Many thanks!
Jon
On Sun, 27 May 2007, Kent Johnson wrote:
>> Here's a puzzle that should be simple, but I'm so used to words
Dear all,
Here's a puzzle that should be simple, but I'm so used to words that
numbers tend to baffle me.
I've got fields that look something like this:
1942. Oct. 1,3,5,7,8,9,10
I need to parse them to obtain something like this:
The xml representation is incidental, the basic problem is
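Setting the XML aside, the parsing itself can be sketched with plain string methods, assuming every field has exactly the shape `YEAR. Month. day,day,...` shown above:

```python
field = '1942. Oct. 1,3,5,7,8,9,10'

# Split on '. ' gives exactly three parts: year, month name, day list.
year_part, month, days_part = field.split('. ')
year = int(year_part)
days = [int(d) for d in days_part.split(',')]

print(year, month, days)   # 1942 Oct [1, 3, 5, 7, 8, 9, 10]
```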
Kent,
Thanks so much. It's easy when you know how. Now that I know, I only need
the encode('utf-8') step since geopy does the urlencode step.
On Thu, 17 May 2007, Kent Johnson wrote:
> It's two steps. First convert to utf-8, then urlencode:
c = u'\xe2'
c
> u'\xe2'
c.encode('utf-8
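The Python 3 equivalent of Kent's two steps, for the record: `urllib.parse.quote` accepts a `str` and performs the UTF-8 encoding itself by default, so the explicit `encode` is optional:

```python
from urllib.parse import quote

c = '\xe2'                    # the character â from the session above

# Explicit two-step version, as in the thread:
step1 = c.encode('utf-8')     # b'\xc3\xa2'
step2 = quote(step1)          # '%C3%A2'

# quote() encodes str arguments as UTF-8 by default, so one call suffices:
assert quote(c) == step2
print(step2)
```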
Dear all,
I've got a python list of data pulled via ElementTree from an xml file
that contains mixed str and unicode
strings, like this:
[u'Jumi\xe9ge, Normandie', 'Farringdon, Hampshire', 'Ravensworth,
Durham', 'La Suse, Anjou', 'Lions, Normandie', 'Lincoln, Lincolnshire',
'Chelmsford, Esse
Daniel,
It was kind of you to respond, and your response was a model of clarity.
You correctly surmised from my awkward framing of the question that what
I wanted was a list of sibling elements between one named anchor and the
next. My problem was, in part, that I still don't think in terms of
As a complete tyro, I've broken my teeth on this web-page scraping
problem. I've several times wanted to scrape pages in which the only
identifying elements are positional rather than syntactical, that is,
pages in which everything's a sibling and there's no way to predict how
many sibs there a
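Stated abstractly, the problem is grouping a flat run of siblings by the anchors that precede them. A structure-agnostic sketch (the tag names and the anchor test are invented; with ElementTree or BeautifulSoup you would walk a parent's children the same way):

```python
def group_by_anchor(items, is_anchor):
    """Collect each anchor together with the siblings that follow it,
    up to (but not including) the next anchor."""
    groups = []
    current = None
    for item in items:
        if is_anchor(item):
            current = [item]       # start a new group at each anchor
            groups.append(current)
        elif current is not None:
            current.append(item)   # siblings before any anchor are dropped
    return groups

# A flat sibling sequence in which 'h2' tags act as anchors.
siblings = [('h2', 'Place A'), ('p', 'x'), ('p', 'y'),
            ('h2', 'Place B'), ('p', 'z')]
groups = group_by_anchor(siblings, lambda t: t[0] == 'h2')
print([len(g) for g in groups])   # [3, 2]
```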