Re: multiprocessing deadlock
Hi Brian,
I think there could be a slight problem (if I've understood your code).
> import multiprocessing
> import queue
>
> def _process_worker(q):
> while True:
do you really want to run it indefinitely here?
> try:
> something = q.get(block=True, timeout=0.1)
> except queue.Empty:
> return
So, if your queue is empty, why do you want to just return?
> else:
> print('Grabbed item from queue:', something)
>
>
> def _make_some_processes(q):
> processes = []
> for _ in range(10):
This is going to loop 10 times, creating 10 worker processes. Do you really want that many?
> p = multiprocessing.Process(target=_process_worker, args=(q,))
OK.
> p.start()
> processes.append(p)
Here. Do you want to add it to processes list? why?
> return processes
OK.
> def _do(i):
> print('Run:', i)
> q = multiprocessing.Queue()
> for j in range(30):
> q.put(i*30+j)
30 items in the queue for each i (i*30). Cool.
> processes = _make_some_processes(q)
>
> while not q.empty():
> pass
why are you checking q.empty( ) here again?
> # The deadlock only occurs on Mac OS X and only when these lines
> # are commented out:
> # for p in processes:
> # p.join()
Why are you joining down here? Why not in the loop itself? I tested it
on Linux and Win32. Works fine for me. I don't know about OSX.
> for i in range(100):
> _do(i)
_do(i) is run 100 times. Why? Is that what you want?
> Output (on Mac OS X using the svn version of py3k):
> % ~/bin/python3.2 moprocessmoproblems.py
> Run: 0
> Grabbed item from queue: 0
> Grabbed item from queue: 1
> Grabbed item from queue: 2
> ...
> Grabbed item from queue: 29
> Run: 1
And this is strange. Now, I happened to be hacking away at some
multiprocessing code meself. I saw your thread, so I whipped up
something you can have a look at.
> At this point the script produces no additional output. If I uncomment the
> lines above then the script produces the expected output. I don't see any
> docs that would explain this problem and I don't know what the rule would be
> e.g. you just join every process that uses a queue before the queue is
> garbage collected.
You join a process when you want to wait for it to return, with or without
the optional parameter `timeout' in this case. Let me be more specific.
The doc for Process says:
"join([timeout])
Block the calling thread until the process whose join() method is
called terminates or until the optional timeout occurs."
Right. Now, the join() here belongs to the child process being waited on.
If that child never terminates, then your parent process will _also_ wait
indefinitely in its join() call __unless__ you define a timeout value.
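For example, the difference looks roughly like this (a quick self-contained
sketch, not your code):

import multiprocessing, time

def _worker():
    time.sleep(2)      # stand-in for real work

if __name__ == '__main__':
    p = multiprocessing.Process(target=_worker)
    p.start()
    p.join(0.5)            # returns after ~0.5s; the child keeps running
    print(p.is_alive())    # True - the timeout expired, nothing was killed
    p.join()               # now block until the child really finishes
    print(p.is_alive())    # False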
> Any ideas why this is happening?
Have a look below. If I've understood your code, it should be reflective
of your situation, but with a different take on the implementation side of
things (run on Python 2.6):
from multiprocessing import Process, Queue
from Queue import Empty
import sys

def _process_worker(q):
    try:
        something = q.get(block=False, timeout=None)
    except Empty:
        print sys.exc_info()
    else:
        print 'Removed %d from the queue' % something

def _make_some_processes():
    q = Queue()
    for i in range(3):
        for j in range(3):
            q.put(i*30+j)
    while not q.empty():
        p = Process(target=_process_worker, args=(q,))
        p.start()
        p.join()

if __name__ == "__main__":
    _make_some_processes()
'''
Removed 0 from the queue
Removed 1 from the queue
Removed 2 from the queue
Removed 30 from the queue
Removed 31 from the queue
Removed 32 from the queue
Removed 60 from the queue
Removed 61 from the queue
Removed 62 from the queue
'''
--
Regards,
Ishwor Gurung
--
http://mail.python.org/mailman/listinfo/python-list
Re: A new way to configure Python logging
Wolodja Wentland cl.uni-heidelberg.de> writes:
> First and foremost: A big *THANK YOU* for creating and maintaining the
> logging module. I use it in every single piece of software I create and
> am very pleased with it.
I'm glad you like it. Thanks for taking the time to write this detailed
post about your usage of logging.
> You asked for feedback on incremental logging and I will just describe
> how I use the logging module in an application.
>
> Almost all applications I write consist of a/many script(s) (foo-bar,
> foo-baz, ...) and a associated package (foo).
>
> Logger/Level Hierarchy
> --
>
> I usually register a logger 'foo' within the application and one logger
> for each module in the package, so the resulting logger hierarchy will
> look like this:
>
> foo
> |__bar
> |__baz
> |__newt
> |___witch
>
> I set every loggers log level to DEBUG and use the respective logger in
You only need to set foo's level to DEBUG and all of foo.bar, foo.baz etc.
will inherit that level. Setting the level explicitly on each logger is
not necessary, though doing it may improve performance slightly, as the
system does not need to search ancestors for an effective level. Also,
setting the level at just one logger ('foo') makes it easier to turn down
logging verbosity for foo.* by just changing the level in one place.
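To illustrate (the 'foo'/'foo.bar' names are just taken from your example):

import logging

logging.getLogger('foo').setLevel(logging.DEBUG)
# 'foo.bar' has no level of its own, so it inherits DEBUG from 'foo'
print(logging.getLogger('foo.bar').getEffectiveLevel())  # 10, i.e. DEBUG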
> Among other levels specific to the application, like PERFORMANCE for
> performance related unit tests, ...
I'm not sure what you mean here - is it that you've defined a custom level
called PERFORMANCE?
> Application User Interface
> --
>
[snip]
All of this sounds quite reasonable.
> Implementation
> --
>
> You have rightfully noted in the PEP, that the ConfigParser method
> is not really suitable for incremental configuration and I therefore
> configure the logging system programmatically.
Since you allow users the ability to control logging from the command-line,
you need to do programmatic configuration anyway.
> I create all loggers with except the root (foo) with:
>
> LOG = logging.getLogger(__name__)
> LOG.setLevel(logging.DEBUG)
>
> within each module and then register suitable handlers *with the root
> logger* to process incoming LogRecords. That means that I usually have a
> StreamHandler, a FileHandler among other more specific ones.
See my earlier comment about setting levels for each logger explicitly. How
do you avoid low-level chatter from all modules being displayed to users? Is
it through the use of Filters?
> The Handlers I register have suitable Filters associated with them,
> so that it is easy to just add multiple handlers for various levels to
> the root handler without causing LogRecords to get logged multiple
> times.
>
> I have *never* had to register any handlers with loggers further down in
> the hierarchy. I much rather like to combine suitable Filters and
> Handlers at the root logger. But that might just be me and due to my
> very restricted needs. What is a use case for that?
There are times where specific handlers are attached lower down in the
logger hierarchy (e.g. a specific subsystem) to send information to a relevant
audience, e.g. the development or support team for that subsystem. Technically
you can achieve this by attaching everything to the root and then attaching
suitable Filters to those handlers, but it may be easier in some cases to
attach the handlers to a lower-level logger directly, without the need for
Filters.
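For example, something along these lines (illustrative only, the file name
is made up):

import logging

sub = logging.getLogger('foo.baz')
# records logged under foo.baz go to the subsystem's own file, and still
# propagate up to whatever handlers are attached to 'foo' or the root
sub.addHandler(logging.FileHandler('baz-subsystem.log'))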
> The unsuitability of the ConfigParser method however is *not* due to the
> *format* of the textual logging configuration (ie. ini vs dict) but
> rather due to the fact that the logging library does not expose all
> aspects of the configuration to the programmer *after* it was configured
> with .fileConfig().
>
> Please contradict me if I am wrong here, but there seems to be *no* method
> to change/delete handlers/loggers once they are configured. Surely I
> could tamper with logging's internals, but I don't want to do this.
You are right, e.g. the fileConfig() API does not support Filters. There is
also no API to get the current configuration in any form.
There isn't a strong use case for allowing arbitrary changes to the logging
setup using a configuration API. Deletion of loggers is problematic in a
multi-threaded environment (you can't be sure which threads have a reference
to those loggers), though you can disable individual loggers (as fileConfig
does when called with two successive, disjoint configurations). Also, deleting
handlers is not really necessary since you can change their levels to achieve
much the same effect.
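For example, rather than removing a handler you can effectively silence it:

import logging

handler = logging.StreamHandler()
logging.getLogger('foo').addHandler(handler)
# later, instead of deleting the handler, raise its level so that it
# drops every record it would otherwise emit
handler.setLevel(logging.CRITICAL + 1)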
> PEP 391
> ---
>
> I like PEP 391 a lot. Really! Thanks for it. The configuration format is
> very concise and easily readable. I like the idea of decoupling the
> object ultimately used for configuring (the dict) from the storage of
> that object (pickled, YAML, JSON, ...).
That's right - the dict is the
Re: Frameworks
> Emmanuel Surleau a écrit :
> > It still manages to retain flexibility, but you're basically stuck
> > with Django's ORM
>
> You're by no way "stuck" with Django's ORM - you are perfectly free
> not to use it. But then you'll obviously lose quite a lot of useful
> features and 3rd party apps...
> >>>
> >>> You lose most of what makes it worth using Django,
> >>
> >> Mmmm... I beg to disagree. You still have the core framework (request /
> >> response handling, sessions etc), the templating system, the form API
> >> etc. As far as I'm concerned, it's quite enough to "make it worth".
> >
> > The form API is pretty good, but I don't think the rest makes it stand
> > out that much, compared to other frameworks.
>
> I don't care if it "stands out that much" - it works fine and is well
> documented. Given that for most web apps, Django's ORM is a good enough
> tool, I don't see the point in using 3 or more "different" frameworks
> that basically do the same things in slightly different ways, each with
> its own strong and weak points.

I think we'd be lucky to have just 3 :) But there are obviously advantages
to something more modular like Pylons, and Tornado looks interesting as
well (haven't used it yet, though). That's open-source for you.

> > To me, the notion of reusable apps
> > and the application ecosystem it allows is Django's most compelling
> > feature.
>
> +1.

Thinking about it, that's where modularity can hurt, e.g., Pylons.
Developing a "reusable app" like Django's is more difficult if you don't
know what ORM or templating system is being used. Unless you stick to the
most popular choices, but this removes, in practice, flexibility for
developers who need such apps.

> > You are, of course, welcome to disagree.
>
> I'm not saying that Django is "better" than Pylons or web.py or (insert
> your favorite framework here) - and to be true, I think Pylons is
> globally smarter than Django -, I'm saying that it does the job, and does
> it well enough to be worth using. Sorry for being so pragmatic.
>
> >>> Having to implement a mini-parser for
> >>> each single tag
> >>
> >> Most of the "mini-parser" stuff is so very easily factored out I'm
> >> afraid I won't buy your argument.
> >
> > You'll at least agree that in terms of lines of code necessary to
> > implement a custom tag, it's very far from optimal, I hope?
>
> I also agree that in terms of LOCs necessary to implement a log file
> parser, Python is also very far from optimal, at least compared to Perl !-)

I think that depends on how many regular expressions you use in your code.
Think about all these lines with } you don't need any more. As far as
languages go, Python strikes a good balance between readability and
concision, really.

> How many Django custom tags did you write, exactly ? And of which level
> of complexity ? Once again, I'm not pretending Django is the best thing
> ever, but most of your remarks remind me of what I once could have said -
> that is, before having enough experience writing and maintaining Django
> apps. One of the greatest features of Django - and possibly what finally
> makes it so pythonic - is that it doesn't try to be *too* smart - just
> smart enough.

I'll grant you I don't have a huge experience with Django custom tags (or
Django in general). Should I do more Django in the future, custom tags
would irk me less (after all, I even got used to ASP, so...). This still
does not make them particularly practical, though I agree that they are
not a huge inconvenience.
-- http://mail.python.org/mailman/listinfo/python-list
Re: multiprocessing deadlock
En Sat, 24 Oct 2009 02:48:38 -0300, Brian Quinlan escribió:

> On 24 Oct 2009, at 14:10, Gabriel Genellina wrote:
>> En Thu, 22 Oct 2009 23:18:32 -0300, Brian Quinlan escribió:
>> I don't like a few things in the code:
>
> I'm actually not looking for workarounds. I want to know if this is a
> multiprocessing bug or if I am misunderstanding the multiprocessing docs
> somehow and my demonstrated usage pattern is somehow incorrect.

Those aren't really workarounds, but things to consider when trying to
narrow down what's causing the problem. The example is rather long as it
is, and it's hard to tell what's wrong since there are many places they
might fail. The busy wait might be relevant, or not; having a thousand
zombie processes might be relevant, or not.

I don't have an OSX system to test, but on Windows your code worked fine;
although removing the busy wait and joining the processes made for a
better work load (with the original code, usually only one of the
subprocesses in each run grabbed all items from the queue)

--
Gabriel Genellina

--
http://mail.python.org/mailman/listinfo/python-list
Re: multiprocessing deadlock
On 24 Oct 2009, at 19:49, Gabriel Genellina wrote:

> En Sat, 24 Oct 2009 02:48:38 -0300, Brian Quinlan escribió:
>> On 24 Oct 2009, at 14:10, Gabriel Genellina wrote:
>>> En Thu, 22 Oct 2009 23:18:32 -0300, Brian Quinlan escribió:
>>> I don't like a few things in the code:
>>
>> I'm actually not looking for workarounds. I want to know if this is a
>> multiprocessing bug or if I am misunderstanding the multiprocessing docs
>> somehow and my demonstrated usage pattern is somehow incorrect.
>
> Those aren't really workarounds, but things to consider when trying to
> narrow down what's causing the problem. The example is rather long as it
> is, and it's hard to tell what's wrong since there are many places they
> might fail.

I agree that the multiprocessing implementation is complex and there are a
lot of spinning wheels. At this point, since no one has pointed out how I
am misusing the module, I think that I'll just file a bug.

> The busy wait might be relevant, or not; having a thousand zombie
> processes might be relevant, or not.

According to the docs:

"""On Unix when a process finishes but has not been joined it becomes a
zombie. There should never be very many because each time a new process
starts (or active_children() is called) all completed processes which have
not yet been joined will be joined. Also calling a finished process’s
Process.is_alive() will join the process. Even so it is probably good
practice to explicitly join all the processes that you start."""

Cheers,
Brian
--
http://mail.python.org/mailman/listinfo/python-list
Re: which "dictionary with attribute-style access"?
baloan wrote:
On Oct 22, 6:34 am, "Gabriel Genellina"
wrote:
class AttrDict(dict):
    """A dict whose items can also be accessed as member variables."""
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        self.__dict__ = self

    def copy(self):
        return AttrDict(self)

    def __repr__(self):
        return 'AttrDict(' + dict.__repr__(self) + ')'

    @classmethod
    def fromkeys(self, seq, value=None):
        return AttrDict(dict.fromkeys(seq, value))
Looks fine as long as nobody uses an existing method name as a dictionary
key:
py> d = AttrDict({'name':'value'})
py> d.items()
[('name', 'value')]
py> d = AttrDict({'items': [1,2,3]})
py> d.items()
Traceback (most recent call last):
File "", line 1, in
TypeError: 'list' object is not callable
(I should have included a test case for this issue too).
Of course, if this doesn't matter in your application, go ahead and use
it. Just be aware of the problem.
--
Gabriel Genellina
I see two ways to avoid collisions with existing method names:
1. (easy, reduced functionality) override __setattr__ and __init__,
test for keys matching existing method names, and throw an exception
(KeyError) if one matches (rough sketch below).
2. (complex, emulates dict) override __setattr__ and __init__, test
for keys matching existing method names, and store those keys in a
shadow dict. More problems arise when thinking about how to choose
between dict method names and item keys on access.
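A rough, untested sketch of option 1 (I drop the __dict__-assignment trick
from Gabriel's AttrDict and route attribute access through __getattr__
instead, so the check lives in one place; update()/setdefault() would still
bypass it):

class SafeAttrDict(dict):
    """Attribute-style access that refuses keys shadowing dict methods."""
    def __init__(self, *args, **kwargs):
        dict.__init__(self, *args, **kwargs)
        for key in self:
            self._check(key)

    @staticmethod
    def _check(key):
        if isinstance(key, str) and hasattr(dict, key):
            raise KeyError('%r would shadow a dict method' % key)

    def __setitem__(self, key, value):
        self._check(key)
        dict.__setitem__(self, key, value)

    def __setattr__(self, name, value):
        self[name] = value          # goes through the check in __setitem__

    def __getattr__(self, name):
        try:
            return self[name]
        except KeyError:
            raise AttributeError(name)

d = SafeAttrDict({'name': 'value'})
d.name                                    # -> 'value'
# SafeAttrDict({'items': [1, 2, 3]})      # would raise KeyError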
--
Andreas Balogh
baloand at gmail dot com
That was indeed my last post. It seems that for this thread Google does not
sort correctly by descending creation time of the last post. This thread
was not visible as "new" when I posted...
Regards, Andreas
--
Andreas Balogh
baloand (at) gmail.com
--
http://mail.python.org/mailman/listinfo/python-list
Socket logic problem
I have several instances of the same generator function running
simultaneously, some within the same process, others in separate processes. I
want them to be able to share data (the dictionaries passed to them as
arguments), in such a way that instances designated as "leaders" send their
dictionaries to "follower" instances.
I'm trying to use sockets to relay the dicts in pickled form, like this:
from socket import socket

PORT = 2050
RELAY = socket()
RELAY.bind(('', PORT))
RELAY.listen(5)
PICKLEDICT = ''

while 1:
    INSTANCE = RELAY.accept()[0]
    STRING = INSTANCE.recv(1024)
    if STRING == "?":
        INSTANCE.send(PICKLEDICT)
    else:
        PICKLEDICT = STRING
What I was hoping this would do is allow the leaders to send their dicts to
this socket and the followers to read them from it after sending an initial
"?", and that the same value would be returned for each such query until it
was updated.
But clearly I have a fundamental misconception of sockets, as this logic only
allows a single query per connection, new connections break the old ones, and
a new connection is required to send in a new value.
Are sockets actually the best way to do this? If so, how to set it up to do
what I want? If not, what other approaches could I try?
Regards,
John
--
http://mail.python.org/mailman/listinfo/python-list
Re: multiprocessing deadlock
"Brian Quinlan" schrieb im Newsbeitrag news:[email protected]... > > Any ideas why this is happening? > > Cheers, > Brian IMHO your code is buggy. You run in an typical race condition. consider following part in your code: > def _make_some_processes(q): > processes = [] > for _ in range(10): > p = multiprocessing.Process(target=_process_worker, args=(q,)) > p.start() > processes.append(p) > return processes p.start() may start an process right now, in 5 seconds or an week later, depending on how the scheduler of your OS works. Since all your processes are working on the same queue it is -- very -- likely that the first process got started, processed all the input and finished, while all the others haven't even got started. Though your first process exits, and your main process also exits, because the queue is empty now ;). > while not q.empty(): > pass If you where using p.join() your main process wourd terminate when the last process terminates ! That's an different exit condition! When the main process terminates all the garbage collection fun happens. I hope you don't wonder that your Queue and the underlaying pipe got closed and collected! Well now that all the work has been done, your OS may remember that someone sometimes in the past told him to start an process. >def _process_worker(q): > while True: > try: > something = q.get(block=True, timeout=0.1) > except queue.Empty: > return > else: > print('Grabbed item from queue:', something) The line something = q.get(block=True, timeout=0.1) should cause some kind of runtime error because q is already collected at that time. Depending on your luck and the OS this bug may be handled or not. Obviously you are not lucky on OSX ;) That's what i think happens. -- http://mail.python.org/mailman/listinfo/python-list
Re: A new way to configure Python logging
On Sat, Oct 24, 2009 at 07:54 +, Vinay Sajip wrote:
> Wolodja Wentland cl.uni-heidelberg.de> writes:
[snip]
> > foo
> > |__bar
> > |__baz
> > |__newt
> > |___witch
> >
> > I set every loggers log level to DEBUG and use the respective logger in
> You only need set foo's level to DEBUG and all of foo.bar, foo.baz etc.
> will inherit that level.
OK, thanks for pointing that out!
[snip]
> > Among other levels specific to the application, like PERFORMANCE for
> > performance related unit tests, ...
>
> I'm not sure what you mean here - is it that you've defined a custom level
> called PERFORMANCE?
Exactly. I used that particular level for logging within a unit test
framework for messages about performance related tests. Combined with a
Handler that generated HTML files from the LogRecord queue using various
templates (make, jinja, ...) it became a nice way to create nice looking
test reports.
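Roughly, the level is registered the usual way (a sketch; the value 25 is
just my choice, anything between INFO=20 and WARNING=30 works, and the
logger name and message are made up):

import logging

PERFORMANCE = 25
logging.addLevelName(PERFORMANCE, 'PERFORMANCE')

LOG = logging.getLogger('foo.tests')
LOG.log(PERFORMANCE, 'sorting 10k rows took %.2fs', 0.42)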
Could a HTMLHandler be added to the standard set? Preferably one that
leaves the choice of the template engine to the user.
> > Application User Interface
> [snip]
> All of this sounds quite reasonable.
Great :-)
>
> > Implementation
> > --
> >
> > You have rightfully noted in the PEP, that the ConfigParser method
> > is not really suitable for incremental configuration and I therefore
> > configure the logging system programmatically.
> Since you allow users the ability to control logging from the command-line,
> you need to do programmatic configuration anyway.
Yes, but that could be made easier. (see below)
> > I create all loggers with except the root (foo) with:
> >
> > LOG = logging.getLogger(__name__)
> > LOG.setLevel(logging.DEBUG)
> >
> > within each module and then register suitable handlers *with the root
> > logger* to process incoming LogRecords. That means that I usually have a
> > StreamHandler, a FileHandler among other more specific ones.
>
> See my earlier comment about setting levels for each logger explicitly. How
> do you avoid low-level chatter from all modules being displayed to users? Is
> it through the use of Filters?
Exactly. The Handlers will usually employ elaborate filtering, so they
can be "plugged together" easily:
- User wants html? Ah, just add the HTMLHandler to the root logger
- User wants verbose output? Ah, just add the VerboseHandler to ...
- ...
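One of those filters is really nothing more than this kind of thing
(simplified sketch):

import logging

class BelowWarning(logging.Filter):
    """Only pass records below WARNING, e.g. for a chatty debug handler."""
    def filter(self, record):
        return record.levelno < logging.WARNING

handler = logging.StreamHandler()
handler.addFilter(BelowWarning())
logging.getLogger('foo').addHandler(handler)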
> There are times where specific handlers are attached lower down in the
> logger hierarchy (e.g. a specific subsystem) to send information to a relevant
> audience, e.g. the development or support team for that subsystem.
Guess I never had the need for that.
> Technically you can achieve this by attaching everything to the root
> and then attaching suitable Filters to those handlers, but it may be
> easier in some cases to attach the handlers to a lower-level logger
> directly, without the need for Filters.
Which is exactly what I do and I think that it fits my particular
mindset. I see the root handler basically as a multiplexer that feeds
LogRecords to various co-routines (i.e. handlers) that decide what
to do with them. I like working on the complete set of LogRecords
accumulated from different parts of the application. The
handler/filter/... naming convention is just a more verbose/spelled out
way of defining different parts of the pipeline that the developer might
want to use. I guess I would welcome a general-purpose hook for each edge
in the logger tree, and in particular one hook feeding different
co-routines at the root logger.
> though your configuration would have to leave out any handlers which
> are optionally specified via command-line arguments.
> > * Additionally: The possibility to *override* some parts of the
> > configuration in another file (files?).
>
> That requirement is too broad to be able to give a one-size-fits-all
> implementation.
I was thinking along the line of ConfigParser.read([file1, file2, ...]),
so that you could have:
--- /etc/foo/logging.conf ---
...
formatters:
default:
format: '%(asctime)s %(levelname)-8s %(name)-15s %(message)s'
datefmt: '%Y-%m-%d %H:%M:%S'
...
--- snip ---
and:
--- ~/.foo/logging.conf ---
formatters:
# You can adapt the message and date format to your needs here.
# The following placeholder can be used:
# asctime- description
# ...
default:
format: '%(levelname)-8s %(name)-15s %(message)s'
datefmt: '%Y-%m-%d %H:%M:%S'
--- snip ---
So that if I call:
logging.config.fromFiles(['/etc/foo/logging.conf',
                          os.path.expanduser('~/.foo/logging.conf')])
The user adaptations will overrule the defaults in the shipped
configuration. I know that I could implement that myself using
{}.update() and the like, but the use case might be common enough to
justify inclusion in the logging module.
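The merge itself would be little more than a recursive dict update, roughly
(untested sketch; load() stands in for whatever parses a config file into a
dict and is purely hypothetical here):

def merge(base, override):
    """Overlay 'override' on top of 'base', recursing into nested dicts."""
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            merge(base[key], value)
        else:
            base[key] = value
    return base

# config = merge(load('/etc/foo/logging.conf'),
#                load(os.path.expanduser('~/.foo/logging.conf')))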
> > * The possibility to enable/disable certain parts of the configuration.
>
> You can do that by changing levels in an incremental
Re: unicode and dbf files
On Oct 24, 4:14 am, Ethan Furman wrote:
> John Machin wrote:
> > On Oct 23, 3:03 pm, Ethan Furman wrote:
>
> >>John Machin wrote:
>
> >>>On Oct 23, 7:28 am, Ethan Furman wrote:
>
> Greetings, all!
>
> I would like to add unicode support to my dbf project. The dbf header
> has a one-byte field to hold the encoding of the file. For example,
> \x03 is code-page 437 MS-DOS.
>
> My google-fu is apparently not up to the task of locating a complete
> resource that has a list of the 256 possible values and their
> corresponding code pages.
>
> >>>What makes you imagine that all 256 possible values are mapped to code
> >>>pages?
>
> >>I'm just wanting to make sure I have whatever is available, and
> >>preferably standard. :D
>
> So far I have found this, plus
> variations:http://support.microsoft.com/kb/129631
>
> Does anyone know of anything more complete?
>
> >>>That is for VFP3. Try the VFP9 equivalent.
>
> >>>dBase 5,5,6,7 use others which are not defined in publicly available
> >>>dBase docs AFAICT. Look for "language driver ID" and "LDID". Secondary
> >>>source: ESRI support site.
>
> >>Well, a couple hours later and still not more than I started with.
> >>Thanks for trying, though!
>
> > Huh? You got tips to (1) the VFP9 docs (2) the ESRI site (3) search
> > keywords and you couldn't come up with anything??
>
> Perhaps "nothing new" would have been a better description. I'd already
> seen the clicketyclick site (good info there)
Do you think so? My take is that it leaves out most of the codepage
numbers, and these two lines are wrong:
65h Nordic MS-DOS code page 865
66h Russian MS-DOS code page 866
> and all I found at ESRI
> were folks trying to figure it out, plus one link to a list that was no
> different from the vfp3 list (or was it that the list did not give the
> hex values? Either way, of no use to me.)
Try this:
http://webhelp.esri.com/arcpad/8.0/referenceguide/
>
> I looked at dbase.com, but came up empty-handed there (not surprising,
> since they are a commercial company).
MS and ESRI have docs ... does that mean that they are non-commercial
companies?
> I searched some more on Microsoft's site in the VFP9 section, and was
> able to find the code page section this time. Sadly, it only added
> about seven codes.
>
> At any rate, here is what I have come up with so far. Any corrections
> and/or additions greatly appreciated.
>
> code_pages = {
> '\x01' : ('ascii', 'U.S. MS-DOS'),
All of the sources say codepage 437, so why ascii instead of cp437?
> '\x02' : ('cp850', 'International MS-DOS'),
> '\x03' : ('cp1252', 'Windows ANSI'),
> '\x04' : ('mac_roman', 'Standard Macintosh'),
> '\x64' : ('cp852', 'Eastern European MS-DOS'),
> '\x65' : ('cp866', 'Russian MS-DOS'),
> '\x66' : ('cp865', 'Nordic MS-DOS'),
> '\x67' : ('cp861', 'Icelandic MS-DOS'),
> '\x68' : ('cp895', 'Kamenicky (Czech) MS-DOS'), # iffy
Indeed iffy. Python doesn't have a cp895 encoding, and it's probably
not alone. I suggest that you omit Kamenicky until someone actually
wants it.
> '\x69' : ('cp852', 'Mazovia (Polish) MS-DOS'), # iffy
Look 5 lines back. cp852 is 'Eastern European MS-DOS'. Mazovia
predates and is not the same as cp852. In any case, I suggest that you
omit Mazovia until someone wants it. Interesting reading:
http://www.jastra.com.pl/klub/ogonki.htm
> '\x6a' : ('cp737', 'Greek MS-DOS (437G)'),
> '\x6b' : ('cp857', 'Turkish MS-DOS'),
> '\x78' : ('big5', 'Traditional Chinese (Hong Kong SAR, Taiwan)\
big5 is *not* the same as cp950. The products that create DBF files
were designed for Windows. So when your source says that LDID 0xXX
maps to Windows codepage YYY, I would suggest that all you should do
is translate that without thinking to python encoding cpYYY.
> Windows'), # wag
What does "wag" mean?
> '\x79' : ('iso2022_kr', 'Korean Windows'), # wag
Try cp949.
> '\x7a' : ('iso2022_jp_2', 'Chinese Simplified (PRC, Singapore)\
> Windows'), # wag
Very wrong. iso2022_jp_2 is supposed to include basic Japanese, basic
(1980) Chinese (GB2312) and a basic Korean kit. However to quote from
"CJKV Information Processing" by Ken Lunde, "... from a practical
point of view, ISO-2022-JP-2 . [is] equivalent to ISO-2022-JP-1
encoding." i.e. no Chinese support at all. Try cp936.
> '\x7b' : ('iso2022_jp', 'Japanese Windows'), # wag
Try cp932.
> '\x7c' : ('cp874', 'Thai Windows'), # wag
> '\x7d' : ('cp1255', 'Hebrew Windows'),
> '\x7e' : ('cp1256', 'Arabic Windows'),
> '\xc8' : ('cp1250', 'Eastern European Windows'),
> '\xc9' : ('cp1251', 'Russian Windows'),
> '\xca' : ('cp1254', 'Turkish Windows'),
> '\xcb' : ('cp1253', 'Greek Windows'),
> '\x96' : ('mac_cyrillic', 'Russian Macintosh'),
> '\x97' : ('mac_latin2', 'Macintosh EE'),
> '\x98' : (
Re: PySerial
Yes, the serial to USB adapter is located on COM3. I have been able to use
puTTY to get into the port and shoot commands at the device, but when I try
to use python I get "You don't have permissions".

On Sat, Oct 24, 2009 at 12:50 AM, Gabriel Genellina wrote:

> En Fri, 23 Oct 2009 20:56:21 -0300, Ronn Ross escribió:
>
>> I have tried setting the baud rate with no success. Also I'm using port #2
>> because I'm using a usb to serial cable.
>
> Note that Serial(2) is known as COM3 in Windows, is it ok?
>
> --
> Gabriel Genellina
>
> --
> http://mail.python.org/mailman/listinfo/python-list

--
http://mail.python.org/mailman/listinfo/python-list
[ANN] python-daemon 1.5.2
Howdy all,

I'm pleased to announce the release of version 1.5.2 of ‘python-daemon’.

What is python-daemon
=====================

The ‘python-daemon’ library is the reference implementation of PEP 3143
(http://www.python.org/dev/peps/pep-3143/), “Standard daemon process
library”.

The source distribution is available via the PyPI page for this version
(http://pypi.python.org/pypi/python-daemon/1.5.2/). The latest version is
always available via the library's PyPI page
(http://pypi.python.org/pypi/python-daemon/).

What's new in this version
==========================

Since version 1.5 the following significant improvements have been made:

* The documented option ‘prevent_core’, which defaults to True allowing
  control over whether core dumps are prevented in the daemon process, is
  now implemented (it is specified in PEP 3143 but was accidentally omitted
  until now).

* A document answering Frequently Asked Questions is now added.

--
“Any intelligent fool can make things bigger and more complex… It takes a
touch of genius – and a lot of courage – to move in the opposite
direction.” —Albert Einstein

Ben Finney
--
http://mail.python.org/mailman/listinfo/python-list
Re: multiprocessing deadlock
On 24 Oct 2009, at 21:37, larudwer wrote:

> "Brian Quinlan" schrieb im Newsbeitrag
> news:[email protected]...
>> Any ideas why this is happening?
>>
>> Cheers,
>> Brian
>
> IMHO your code is buggy. You run into a typical race condition.
>
> Consider the following part of your code:
>
>> def _make_some_processes(q):
>>     processes = []
>>     for _ in range(10):
>>         p = multiprocessing.Process(target=_process_worker, args=(q,))
>>         p.start()
>>         processes.append(p)
>>     return processes
>
> p.start() may start a process right now, in 5 seconds or a week later,
> depending on how the scheduler of your OS works.

Agreed.

> Since all your processes are working on the same queue it is -- very --
> likely that the first process got started, processed all the input and
> finished, while all the others haven't even got started.

Agreed.

> Though your first process exits, and your main process also exits,
> because the queue is empty now ;).

The main process shouldn't (and doesn't) exit - the _do function exits
(with some processes possibly still running) and the next iteration in

for i in range(100):
    _do(i)

is evaluated.

>> while not q.empty():
>>     pass
>
> If you were using p.join() your main process would terminate when the
> last process terminates! That's a different exit condition!

When you say "your main process would terminate", you mean that the _do
function would exit, right? Because process.join() has nothing to do with
terminating the calling process - it just blocks until process terminates.

> When the main process terminates all the garbage collection fun happens.
> I hope you don't wonder that your Queue and the underlying pipe got
> closed and collected!

I expected the queue and underlying pipe to get collected.

> Well, now that all the work has been done, your OS may remember that
> someone sometime in the past told it to start a process.

Sure, that could happen at this stage.

Are you saying that it is the user of the multiprocessing module's
responsibility to ensure that the queue is not collected in the parent
process until all the child processes using it have exited? Actually,
causing the queues to never be collected fixes the deadlock:

+ p = []

def _do(i):
    print('Run:', i)
    q = multiprocessing.Queue()
+   p.append(q)
    print('Created queue')
    for j in range(30):
        q.put(i*30+j)
    processes = _make_some_processes(q)
    print('Created processes')

    while not q.empty():
        pass
    print('Q is empty')

This behavior is counter-intuitive and, as far as I can tell, not
documented anywhere. So it feels like a bug.

Cheers,
Brian

> def _process_worker(q):
>     while True:
>         try:
>             something = q.get(block=True, timeout=0.1)
>         except queue.Empty:
>             return
>         else:
>             print('Grabbed item from queue:', something)
>
> The line
>
>     something = q.get(block=True, timeout=0.1)
>
> should cause some kind of runtime error because q is already collected
> at that time. Depending on your luck and the OS this bug may be handled
> or not. Obviously you are not lucky on OSX ;)
>
> That's what I think happens.
>
> --
> http://mail.python.org/mailman/listinfo/python-list

--
http://mail.python.org/mailman/listinfo/python-list
repr(complex) in Py3.1
Hi,

in Python 3.1.1, I get this:

Python 3.1.1 (r311:74480, Oct 22 2009, 19:34:26)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 2j
2j
>>> -2j
-2j
>>> -0-2j
-2j
>>> (-0-2j)
-2j
>>> -(2j)
(-0-2j)

The last line differs from what earlier Python versions printed here:

Python 2.6.2 (r262:71600, Oct 22 2009, 20:58:58)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 2j
2j
>>> -2j
-2j
>>> -0-2j
-2j
>>> (-0-2j)
-2j
>>> -(2j)
-2j

I know at least that the float repr() was modified in Py3.1, but is the
above behaviour intentional? It certainly breaks doctests, and I don't see
a good reason for that.

Stefan
--
http://mail.python.org/mailman/listinfo/python-list
Re: repr(complex) in Py3.1
On Oct 24, 3:26 pm, Stefan Behnel wrote:
> Hi,
>
> in Python 3.1.1, I get this:
>
> Python 3.1.1 (r311:74480, Oct 22 2009, 19:34:26)
> [GCC 4.3.2] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
> >>> 2j
> 2j
> >>> -2j
> -2j
> >>> -0-2j
> -2j
> >>> (-0-2j)
> -2j
> >>> -(2j)
> (-0-2j)
>
> The last line differs from what earlier Python versions printed here:
[...]
> I know at least that the float repr() was modified in Py3.1, but is the
> above behaviour intentional? It certainly breaks doctests, and I don't
> see a good reason for that.

Yes, it's intentional. The problem with the 2.6 complex repr is that it
doesn't accurately represent negative zeros, so that if you try to eval
the result you don't end up with the same complex number:

Python 2.6.1 (r261:67515, Jul 7 2009, 23:51:51)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> z = -(2j)
>>> z.real
-0.0
>>> w = complex(repr(z))
>>> w.real
0.0

This was part of a set of changes made to ensure that complex(repr(z))
round-tripping worked correctly, even when z involves negative zeros,
nans, or infinities. (However, note that eval(repr(z)) doesn't round-trip
correctly; that's not a problem that's easy to solve.)

Mark
--
http://mail.python.org/mailman/listinfo/python-list
Re: repr(complex) in Py3.1
> I know at least that the float repr() was modified in Py3.1, but is the
> above behaviour intentional? It certainly breaks doctests, and I don't
> see a good reason for that.

I don't know whether it was intentional, but it looks right to me.

2j is the complex number +0.0+2.0j (right?). Then, -(2j) is the negated
value of this, i.e. -0.0-2.0j (*). It seems that the complex type made no
distinction between +0.0 and -0.0 in the past, but it should (just as the
float type does).

Regards,
Martin

(*) this is different from -2j, where the sign applies to the imaginary
part only.
--
http://mail.python.org/mailman/listinfo/python-list
Re: repr(complex) in Py3.1
Mark Dickinson, 24.10.2009 16:44:
> On Oct 24, 3:26 pm, Stefan Behnel wrote:
>> in Python 3.1.1, I get this:
>>
>> Python 3.1.1 (r311:74480, Oct 22 2009, 19:34:26)
>> [GCC 4.3.2] on linux2
>> Type "help", "copyright", "credits" or "license" for more information.
>> >>> -(2j)
>> (-0-2j)
>>
>> The last line differs from what earlier Python versions printed here.
>> I know at least that the float repr() was modified in Py3.1, but is the
>> above behaviour intentional? It certainly breaks doctests, and I don't
>> see a good reason for that.
>
> Yes, it's intentional. The problem with the 2.6 complex repr is that
> it doesn't accurately represent negative zeros, so that if you try
> to eval the result you don't end up with the same complex number

Ok, thanks. I guess we'll have to work around that in Cython's doctest
suite then.

Stefan
--
http://mail.python.org/mailman/listinfo/python-list
sequential multiple processes
Hi, all

How to run multiple processes with sequential input of a thousand pieces of
data in a single script run?

I have a python script and 1,000 independent input data for it. Previously,
I divided the input data into *n* groups and ran the same python script *n*
times to use *n* processors. It's inconvenient. How can I do the same thing
in a single script run?

Thank you in advance,

Hyunchul
--
http://mail.python.org/mailman/listinfo/python-list
Re: sequential multiple processes
Hyunchul Kim wrote:
> Hi, all
>
> How to run multiple processes with sequential input of a thousand pieces
> of data in a single script run?
>
> I have a python script and 1,000 independent input data for it.
> Previously, I divided the input data into /n/ groups and ran the same
> python script /n/ times to use /n/ processors. It's inconvenient. How can
> I do the same thing in a single script run?
>
> Thank you in advance,
>
> Hyunchul

Use the subprocess module in your main program to start n subprocesses and
feed each its portion of the data.

Gary Herron
--
http://mail.python.org/mailman/listinfo/python-list
Re: sequential multiple processes
On Sun, Oct 25, 2009 at 00:41 +0900, Hyunchul Kim wrote:
> How to run multiple processes with sequential input of a thousand pieces
> of data in a single script run?
>
> I have a python script and 1,000 independent input data for it.
>
> Previously, I divided the input data into n groups and ran the same
> python script n times to use n processors.
>
> It's inconvenient.
>
> How can I do the same thing in a single script run?

Have a look at [1] - it describes a way to multiplex data to different
receivers. You can then combine that with the multiprocessing/threading
module in the stdlib.

kind regards

Wolodja

[1] http://www.dabeaz.com/generators/
--
http://mail.python.org/mailman/listinfo/python-list
Python 3.1.1 bytes decode with replace bug
The Python 3.1.1 documentation has the following example:
>>> b'\x80abc'.decode("utf-8", "strict")
Traceback (most recent call last):
File "", line 1, in ?
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0:
unexpected code byte
>>> b'\x80abc'.decode("utf-8", "replace")
'\ufffdabc'
>>> b'\x80abc'.decode("utf-8", "ignore")
'abc'
Strict and Ignore appear to work as per the documentation but replace
does not. Instead of replacing the values it fails:
>>> b'\x80abc'.decode('utf-8', 'replace')
Traceback (most recent call last):
File "", line 1, in
File "p:\SW64\Python.3.1.1\lib\encodings\cp437.py", line 19, in
encode
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\ufffd' in
position
1: character maps to
If this a known bug with 3.1.1?
--
http://mail.python.org/mailman/listinfo/python-list
ANN: Jump 0.9.0 released!
I am proud to announce that Jump 0.9.0 is released! You can find the Jump
project at http://gitorious.org/jump, and its documentation at
http://gitorious.org/jump/pages/Home.

This version is Jump's first release. The goal of Jump is to make a
distribution for a Jython application in one really simple step.

Features:

* Distributing Jython applications into a single, independent JAR file.
* Supporting Java source code and third-party JAR files.
* Starting the distribution from either Jython or Java code.
* Packaging `only required` Python modules into the final distribution
  `automatically`, which means you don't have to worry about using Python
  third-party libraries as long as they can be found in your `sys.path`.
* All Python modules packaged into the final distribution are compiled to
  $py.class files, which means your source code is not public.

The Future:

* Creating native applications for Mac, Windows and Linux.
* Creating web application archive (WAR) files for Python WSGI applications.

If you have any question or bug report, please post it to our mailing list
at http://groups.google.com/group/ollix-jump; contributions are welcome as
well.
--
http://mail.python.org/mailman/listinfo/python-list
Error building Python 2.6.3 and 2.6.4rc2
Hi!

I'm on an openSUSE system where Python 2.6 is installed from RPMs. I'm
trying to build Python from source separately. This used to work in former
versions but fails now (see build traceback below). Anyone having a clue
what's going on here? Is there a possible work-around? A possible conflict
with the openSUSE RPMs?

Ciao, Michael.

- snip -
/usr/src/Python-2.6.4rc2> make
'import site' failed; use -v for traceback
Traceback (most recent call last):
  File "./setup.py", line 15, in
    from distutils.command.build_ext import build_ext
  File "/usr/src/michael/Python-2.6.4rc2/Lib/distutils/command/build_ext.py", line 13, in
    from site import USER_BASE, USER_SITE
  File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 513, in
    main()
  File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 496, in main
    known_paths = addsitepackages(known_paths)
  File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 288, in addsitepackages
    addsitedir(sitedir, known_paths)
  File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 185, in addsitedir
    addpackage(sitedir, name, known_paths)
  File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 155, in addpackage
    exec line
  File "", line 1, in
AttributeError: 'module' object has no attribute 'lib'
make: *** [sharedmods] Error 1
--
http://mail.python.org/mailman/listinfo/python-list
Re: Error building Python 2.6.3 and 2.6.4rc2
Michael Ströder wrote:
> - snip -
> /usr/src/Python-2.6.4rc2> make
> 'import site' failed; use -v for traceback
> Traceback (most recent call last):
>   File "./setup.py", line 15, in
>     from distutils.command.build_ext import build_ext
>   File "/usr/src/michael/Python-2.6.4rc2/Lib/distutils/command/build_ext.py", line 13, in
>     from site import USER_BASE, USER_SITE
>   File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 513, in
>     main()
>   File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 496, in main
>     known_paths = addsitepackages(known_paths)
>   File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 288, in addsitepackages
>     addsitedir(sitedir, known_paths)
>   File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 185, in addsitedir
>     addpackage(sitedir, name, known_paths)
>   File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 155, in addpackage
>     exec line
>   File "", line 1, in
> AttributeError: 'module' object has no attribute 'lib'
> make: *** [sharedmods] Error 1

You have a broken or malformed .pth file on your system. Add a line with
"print fullname, line" right before "exec line" to debug your issue.

Christian
--
http://mail.python.org/mailman/listinfo/python-list
How to set up web service by python?
I'm following the instruction on
http://fragments.turtlemeat.com/pythonwebserver.php to set web
service. But I get the following error message when I run the script
on my mac machine. I'm wondering why I see the permission denied error
message.
$python webserver.py
Traceback (most recent call last):
File "webserver.py", line 63, in
main()
File "webserver.py", line 55, in main
server = HTTPServer(('', 80), MyHandler)
File
"/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/SocketServer.py",
line 330, in __init__
self.server_bind()
File
"/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/BaseHTTPServer.py",
line 101, in server_bind
SocketServer.TCPServer.server_bind(self)
File
"/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/SocketServer.py",
line 341, in server_bind
self.socket.bind(self.server_address)
File "", line 1, in bind
socket.error: (13, 'Permission denied')
--
http://mail.python.org/mailman/listinfo/python-list
Re: How to set up web service by python?
Peng Yu schrieb:
I'm following the instruction on
http://fragments.turtlemeat.com/pythonwebserver.php to set web
service. But I get the following error message when I run the script
on my mac machine. I'm wondering why I see the permission denied error
message.
$python webserver.py
Traceback (most recent call last):
File "webserver.py", line 63, in
main()
File "webserver.py", line 55, in main
server = HTTPServer(('', 80), MyHandler)
File
"/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/SocketServer.py",
line 330, in __init__
self.server_bind()
File
"/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/BaseHTTPServer.py",
line 101, in server_bind
SocketServer.TCPServer.server_bind(self)
File
"/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/SocketServer.py",
line 341, in server_bind
self.socket.bind(self.server_address)
File "", line 1, in bind
socket.error: (13, 'Permission denied')
As you don't show us the code, I can only guess - but experience tells
me that you try & bind your service to a privileged (<=1024) port,
which *nix only allows with root privileges.
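E.g. simply binding to an unprivileged port side-steps the problem
(assuming nothing else is already listening on it):

server = HTTPServer(('', 8080), MyHandler)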
Diez
--
http://mail.python.org/mailman/listinfo/python-list
Re: Error building Python 2.6.3 and 2.6.4rc2
Christian Heimes wrote:
> Michael Ströder wrote:
>> - snip -
>> /usr/src/Python-2.6.4rc2> make
>> 'import site' failed; use -v for traceback
>> Traceback (most recent call last):
>> File "./setup.py", line 15, in
>> from distutils.command.build_ext import build_ext
>> File "/usr/src/michael/Python-2.6.4rc2/Lib/distutils/command/build_ext.py",
>> line 13, in
>> from site import USER_BASE, USER_SITE
>> File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 513, in
>> main()
>> File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 496, in main
>> known_paths = addsitepackages(known_paths)
>> File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 288, in
>> addsitepackages
>> addsitedir(sitedir, known_paths)
>> File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 185, in
>> addsitedir
>> addpackage(sitedir, name, known_paths)
>> File "/usr/src/michael/Python-2.6.4rc2/Lib/site.py", line 155, in
>> addpackage
>> exec line
>> File "", line 1, in
>> AttributeError: 'module' object has no attribute 'lib'
>> make: *** [sharedmods] Error 1
>
>
> You have a broken or malformed .pth file on your system. Add a line with
> "print fullname, line" right before "exec line" to debug your issue.
Many thanks for the hint! Removing the file
/usr/lib/python2.6/site-packages/_local.pth made the build work, but
SetupTools no longer works then.
The file looks like this (single line wrapped here):
--- snip ---
import site; import sys; site.addsitedir('/usr/local/' + sys.lib + '/python' +
sys.version[:3] + '/site-packages',set())
--- snip ---
Ciao, Michael.
--
http://mail.python.org/mailman/listinfo/python-list
Re: How to set up web service by python?
In article <[email protected]>, Diez B. Roggisch wrote:
>
> As you don't show us the code, I can only guess - but experience tells
> me that you try & bind your service to a privileged (<=1024) port,
> which *nix only allows with root-privileges.

Concur. You need root privileges to run any service on port 80. Try using
8080 instead.

--
-Ed Falk, [email protected]
http://thespamdiaries.blogspot.com/
--
http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.1.1 bytes decode with replace bug
Joe wrote:
Please provide more information
The Python 3.1.1 documentation has the following example:
Where? I could not find them
b'\x80abc'.decode("utf-8", "strict")
Traceback (most recent call last):
File "", line 1, in ?
UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0:
unexpected code byte
b'\x80abc'.decode("utf-8", "replace")
'\ufffdabc'
b'\x80abc'.decode("utf-8", "ignore")
'abc'
Strict and Ignore appear to work as per the documentation but replace
does not. Instead of replacing the values it fails:
b'\x80abc'.decode('utf-8', 'replace')
Traceback (most recent call last):
File "", line 1, in
File "p:\SW64\Python.3.1.1\lib\encodings\cp437.py", line 19, in
encode
return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\ufffd' in
position
1: character maps to
Which interpreter and system? With Python 3.1 (r31:73574, Jun 26 2009,
20:21:35) [MSC v.1500 32 bit (Intel)] on win32, IDLE, I get
>>> b'\x80abc'.decode('utf-8', 'replace') # pasted from above
'�abc'
as per the example.
If this a known bug with 3.1.1?
Do you do a search in the issues list at bugs.python.org?
I did and did not find anything. The discrepancy between doc (if the
example really is from the doc) and behavior (if really 3.1) would be a
bug, but more info is needed.
Terry Jan Reedy
--
http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.1.1 bytes decode with replace bug
On Sat, Oct 24, 2009 at 1:09 PM, Joe wrote:
> The Python 3.1.1 documentation has the following example:
>
b'\x80abc'.decode("utf-8", "strict")
> Traceback (most recent call last):
> File "", line 1, in ?
> UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 0:
> unexpected code byte
b'\x80abc'.decode("utf-8", "replace")
> '\ufffdabc'
b'\x80abc'.decode("utf-8", "ignore")
> 'abc'
>
> Strict and Ignore appear to work as per the documentation but replace
> does not. Instead of replacing the values it fails:
>
b'\x80abc'.decode('utf-8', 'replace')
> Traceback (most recent call last):
> File "", line 1, in
> File "p:\SW64\Python.3.1.1\lib\encodings\cp437.py", line 19, in
> encode
> return codecs.charmap_encode(input,self.errors,encoding_map)[0]
> UnicodeEncodeError: 'charmap' codec can't encode character '\ufffd' in
> position
> 1: character maps to
>
> If this a known bug with 3.1.1?
>
It's not a bug. The problem isn't even the decode statement. Python
successfully creates the unicode string '\ufffdabc' and then tries to
print it to the screen. so it has to convert it to cp437 (your console
encoding) which fails. That's why the traceback mentions the cp437
file and not the utf-8 file.
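You can see the two steps separately (a quick sketch; ascii() escapes the
string so no console encoding is involved):

s = b'\x80abc'.decode('utf-8', 'replace')   # this step succeeds
print(ascii(s))                             # shows '\ufffdabc' - works anywhere
print(s)                                    # this is what fails on a cp437 console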
--
http://mail.python.org/mailman/listinfo/python-list
Re: PySerial
Gabriel Genellina wrote:
> En Fri, 23 Oct 2009 20:56:21 -0300, Ronn Ross escribió:
>> I have tried setting the baud rate with no success. Also I'm using
>> port #2 because I'm using a usb to serial cable.
>
> Note that Serial(2) is known as COM3 in Windows, is it ok?

Do you have a machine with a COM3 port?

John Nagle
--
http://mail.python.org/mailman/listinfo/python-list
IDLE python shell freezes after running show() of matplotlib
I am having a weird problem on IDLE. After I plot something using show()
of matplotlib, the python shell prompt in IDLE just freezes: I cannot enter
anything and no new ">>>" prompt shows up. I tried Ctrl-C and it didn't
work. I have to restart IDLE to use it again.

My system is Ubuntu Linux 9.04. I used apt-get to install IDLE.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.1.1 bytes decode with replace bug
Thanks for your response.

> Please provide more information
>
>> The Python 3.1.1 documentation has the following example:
>
> Where? I could not find them

http://docs.python.org/3.1/howto/unicode.html#unicode-howto

Scroll down the page about half way to the "The String Type" section. The
example was copied from the second example with the light green background.

> Which interpreter and system? With Python 3.1 (r31:73574, Jun 26 2009,

Python 3.1.1 (r311:74483, Aug 17 2009, 16:45:59) [MSC v.1500 64 bit (AMD64)]
on win32
Windows 7 x64 RTM, Python 3.1.1

> Do you do a search in the issues list at bugs.python.org?

Yes, I did not see anything that seemed to apply.
--
http://mail.python.org/mailman/listinfo/python-list
slightly OT - newbie Objective-C resources for experienced Python users
I'm starting to look at the iPhone SDK and I'd like to know of resources on the Net that approach that language with a Pythonic mindset. Mind you, I want to code Objective-C, not pine about Python not being on the iPhone either. The kind of elegant simple code that a good Python coder who also knows C well would code in C. For example, valueForKey seems to map well to dynamic coding a la __getattr__, but a Java programmer would be all over the getter/ setters, design patterns and frameworks instead. I want to know about simplicity, introspection and dynamic stuff, not Java-translated-to- Objective-C. Any recommendations? -- http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.1.1 bytes decode with replace bug
Joe wrote:
> Thanks for your response.
>
>> Please provide more information
>
>>> The Python 3.1.1 documentation has the following example:
>
>> Where? I could not find them
>
> http://docs.python.org/3.1/howto/unicode.html#unicode-howto
>
> Scroll down the page about half way to the "The String Type" section.
> The example was copied from the second example with the light green
> background.
>
>> Which interpreter and system? With Python 3.1 (r31:73574, Jun 26 2009,
>
> Python 3.1.1 (r311:74483, Aug 17 2009, 16:45:59) [MSC v.1500 64 bit
> (AMD64)] on win32
> Windows 7 x64 RTM, Python 3.1.1

For the reason BK explained, the important difference is that I ran in
the IDLE shell, which handles screen printing of unicode better ;-)

The important lesson for debugging, which I forgot also in my response, is
to separate creation of a (unicode) string from the printing of such. You
are not the first to get caught on this. IE,

>>> s = <expression>
>>> print(s)

>> Do you do a search in the issues list at bugs.python.org?
>
> Yes, I did not see anything that seemed to apply.

tjr
--
http://mail.python.org/mailman/listinfo/python-list
Problem with urllib2.urlopen() opening a local file
I want to use urlopen() to open either a http://... file or a local file
File:C:/... I don't have problems opening and reading the file either way.

But when I run the script on a server (ArcGIS server), the request won't
complete if it was trying to open a local file. Even though I call close()
in either case, something seems to be preventing the script from completing
if urlopen is trying to open a local file. I wonder if there is anything
else I should do in the code to kill the file descriptor, or if it is a
known issue that something is leaking.

When running the script standalone, this is not an issue.
--
http://mail.python.org/mailman/listinfo/python-list
Re: IDLE python shell freezes after running show() of matplotlib
Forrest Sheng Bao wrote:
> I am having a weird problem on IDLE. After I plot something using show()
> of matplotlib, the python shell prompt in IDLE just freezes that I cannot
> enter anything and there is no new ">>>" prompt show up. I tried ctrl - C
> and it didn't work. I have to restart IDLE to use it again. My system is
> Ubuntu Linux 9.04. I used apt-get to install IDLE.

You should really look at smart questions; I believe you have a problem,
and that you have yet to imagine how to give enough information for someone
else to help you.

http://www.catb.org/~esr/faqs/smart-questions.html

Hint: I don't know your CPU, python version, IDLE version, matplotlib
version, nor do you provide a small code example that allows me to easily
reproduce your problem (or not).

--Scott David Daniels
[email protected]
--
http://mail.python.org/mailman/listinfo/python-list
how can i use lxml with win32com?
hello...
if anyone know..please help me !
i really want to know...i was searched in google lot of time.
but can't found clear soultion. and also because of my lack of python
knowledge.
i want to use IE.navigate function with beautifulsoup or lxml..
if anyone know about this or sample.
please help me!
thanks in advance ..
--
View this message in context:
http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26044339.html
Sent from the Python - python-list mailing list archive at Nabble.com.

--
http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.1.1 bytes decode with replace bug
> For the reason BK explained, the important difference is that I ran in
> the IDLE shell, which handles screen printing of unicode better ;-)

Something still does not seem right here to me.

In the example above the bytes were decoded to 'UTF-8' with the replace
option, so any characters that were not valid UTF-8 were replaced and
the resulting string is '\ufffdabc' as BK explained. I understand that
the replace worked.

Now consider this:

Python 3.1.1 (r311:74483, Aug 17 2009, 16:45:59) [MSC v.1500 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> s = '\ufffdabc'
>>> print(s)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "p:\SW64\Python.3.1.1\lib\encodings\cp437.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\ufffd' in
position 0: character maps to <undefined>
>>> import sys
>>> sys.getdefaultencoding()
'utf-8'

This too fails for the exact same reason (and doesn't involve decode).

In the original example I decoded to UTF-8 and in this example the
default encoding is UTF-8, so why is cp437 being used?

Thanks in advance for your assistance!

--
http://mail.python.org/mailman/listinfo/python-list
Re: PAMIE and beautifulsoup problem
hello!
I'm very sorry for the late reply.
This is the script I executed:
from BeautifulSoup import BeautifulSoup
from PAM30 import PAMIE
url = 'http://www.cnn.com'
ie = PAMIE(url)
bs = BeautifulSoup(ie.pageText())
and I got the error shown below in Wing IDE.
I guess it's because the current version of PAM30 doesn't have the
attribute 'pageText', but I couldn't find an equivalent attribute in the
PAM30 module. The other problem: while I'm using PAMIE, is it possible
to avoid the usual Dispatch("InternetExplorer.Application")?
I mean, don't open another Internet Explorer window, but reuse the
current PAMIE session.
thanks in advance!
AttributeError: PAMIE instance has no attribute 'pageText'
File "C:\test12.py", line 7, in <module>
bs = BeautifulSoup(ie.pageText())
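As a general debugging step (plain Python introspection, nothing
PAMIE-specific), one way to find out which method this PAM30 version
actually provides in place of 'pageText' is to list the instance's
attributes; a minimal sketch:

# List PAMIE's public attributes and pick out anything that looks like
# it returns page text (Python 2 syntax, matching the script above).
from PAM30 import PAMIE

ie = PAMIE('http://www.cnn.com')
for name in dir(ie):
    if 'page' in name.lower() or 'text' in name.lower():
        print name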
Gabriel Genellina-7 wrote:
>
> On Fri, 23 Oct 2009 03:03:56 -0300, elca wrote:
>
>> follow script is which i was found in google.
>> but it not work for me.
>> im using PAMIE3 version.even if i changed to pamie 2b version ,i couldn't
>> make it working.
>
> You'll have to provide more details. *What* happened? You got an
> exception? Please post the complete exception traceback.
>
>> from BeautifulSoup import BeautifulSoup
>> Import cPAMIE
>> url = 'http://www.cnn.com'
>> ie = cPAMIE.PAMIE(url)
>> bs = BeautifulSoup(ie.pageText())
>
> Also, don't re-type the code. Copy and paste it, directly from the program
> that failed.
>
> --
> Gabriel Genellina
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
>
--
View this message in context:
http://www.nabble.com/PAMIE-and-beautifulsoup-problem-tp26021305p26044579.html
Sent from the Python - python-list mailing list archive at Nabble.com.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Python 3.1.1 bytes decode with replace bug
On Sat, Oct 24, 2009 at 8:47 PM, Joe wrote:
>> For the reason BK explained, the important difference is that I ran in
>> the IDLE shell, which handles screen printing of unicode better ;-)
>
> Something still does not seem right here to me.
>
> In the example above the bytes were decoded to 'UTF-8' with the
> replace option so any characters that were not UTF-8 were replaced and
> the resulting string is '\ufffdabc' as BK explained. I understand
> that the replace worked.
>
> Now consider this:
>
> Python 3.1.1 (r311:74483, Aug 17 2009, 16:45:59) [MSC v.1500 64 bit
> (AMD64)] on win32
> Type "help", "copyright", "credits" or "license" for more information.
> >>> s = '\ufffdabc'
> >>> print(s)
> Traceback (most recent call last):
>   File "<stdin>", line 1, in <module>
>   File "p:\SW64\Python.3.1.1\lib\encodings\cp437.py", line 19, in encode
>     return codecs.charmap_encode(input,self.errors,encoding_map)[0]
> UnicodeEncodeError: 'charmap' codec can't encode character '\ufffd' in
> position 0: character maps to <undefined>
> >>> import sys
> >>> sys.getdefaultencoding()
> 'utf-8'
>
> This too fails for the exact same reason (and doesn't involve decode).
>
> In the original example I decoded to UTF-8 and in this example the
> default encoding is UTF-8 so why is cp437 being used?
>
> Thanks in advance for your assistance!

Try checking sys.stdout.encoding. Then run the command chcp (not in the
Python interpreter). You'll probably get 437 from both of those. Just
because the system encoding is set to utf-8 doesn't mean the console is.

Nobody really uses cp437 anymore; it was replaced years ago by cp1252,
but Microsoft is scared to do anything to cmd.exe because it might break
somebody's 20-year-old DOS script.

--
http://mail.python.org/mailman/listinfo/python-list
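A minimal sketch of what that looks like in practice, plus one way to
keep print() from blowing up on a cp437 console (a workaround sketch,
not the only possible fix):

import sys

# The console encoding is what print() actually uses for output, and it
# is usually not the same thing as sys.getdefaultencoding().
print(sys.stdout.encoding)   # e.g. 'cp437' in a default cmd.exe window

# Encode explicitly so unrepresentable characters are replaced instead
# of raising UnicodeEncodeError.
s = '\ufffdabc'
sys.stdout.buffer.write(s.encode(sys.stdout.encoding, 'replace') + b'\n')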
quit button
When I click the "Quit" button, why does the following code have a
problem?

from Tkinter import *

colors = ['red', 'green', 'yellow', 'orange', 'blue', 'navy']

def gridbox(parent):
    r = 0
    for c in colors:
        l = Label(parent, text=c, relief=RIDGE, width=25)
        e = Entry(parent, bg=c, relief=SUNKEN, width=50)
        l.grid(row=r, column=0)
        e.grid(row=r, column=1)
        r = r + 1

def packbox(parent):
    for c in colors:
        f = Frame(parent)
        l = Label(f, text=c, relief=RIDGE, width=25)
        e = Entry(f, bg=c, relief=SUNKEN, width=50)
        f.pack(side=TOP)
        l.pack(side=LEFT)
        e.pack(side=RIGHT)

if __name__ == '__main__':
    root = Tk()
    gridbox(Toplevel())
    packbox(Toplevel())
    Button(root, text='Quit', command=root.quit).pack()
    mainloop()

--
http://mail.python.org/mailman/listinfo/python-list
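One guess, since the post doesn't say what the problem actually is:
root.quit() only makes mainloop() return, so if the script is run from
an environment that keeps Tcl/Tk alive (IDLE, for example) the windows
stay on screen. Binding the button to root.destroy instead tears
everything down; a minimal sketch of that change:

# destroy() ends the mainloop *and* removes all windows, while quit()
# merely makes mainloop() return.
Button(root, text='Quit', command=root.destroy).pack()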
lambda forms within a loop
The snippet of code below uses two functions to dynamically create
functions using lambda. Both of these uses should produce the same
result, but they don't.

The expected output of this code is

11
12
11
12

However, what we get instead is

12
12
11
12

The problem is that the two functions returned by MakeLambdaBad() are
apparently the same, but the functions returned by MakeLambdaGood() are
different.

Can anyone explain why this would/should be the case?

--
Michal Ostrowski
[email protected]

def MakeLambdaGood():
    def DoLambda(x):
        return lambda q: x + q
    a = []
    for x in [1, 2]:
        a.append(DoLambda(x))
    return a

def MakeLambdaBad():
    a = []
    for x in [1, 2]:
        a.append(lambda q: x + q)
    return a

[a, b] = MakeLambdaBad()
print a(10)
print b(10)
[a, b] = MakeLambdaGood()
print a(10)
print b(10)

--
http://mail.python.org/mailman/listinfo/python-list
Re: lambda forms within a loop
On Sat, Oct 24, 2009 at 8:33 PM, Michal Ostrowski wrote:
> The snippet of code below uses two functions to dynamically create
> functions using lambda.
> Both of these uses should produce the same result, but they don't.
>
> def MakeLambdaGood():
>     def DoLambda(x):
>         return lambda q: x + q
>     a = []
>     for x in [1,2]:
>         a.append(DoLambda(x))
>     return a
>
> def MakeLambdaBad():
>     a = []
>     for x in [1,2]:
>         a.append(lambda q: x + q)
>     return a
>
> [a,b] = MakeLambdaBad()
> print a(10)
> print b(10)
> [a,b] = MakeLambdaGood()
> print a(10)
> print b(10)
>
> The expected output of this code is
>
> 11
> 12
> 11
> 12
>
> However, what we get instead is
>
> 12
> 12
> 11
> 12
>
> The problem is that the two functions returned by MakeLambdaBad() are
> apparently the same, but the functions returned by MakeLambdaGood()
> are different.
>
> Can anyone explain why this would/should be the case?

This comes up often enough there should be a FAQ entry on it.
A decent explanation of the problem:
http://lackingrhoticity.blogspot.com/2009/04/python-variable-binding-semantics-part.html

Essentially, the lambdas in MakeLambdaBad() *don't* capture the value of
x at the time they are *created*; they both just reference the same
variable x in the function's scope and both use whatever its "current
value" is at the time they get *called*. x's value gets frozen at 2 when
MakeLambdaBad() returns and the rest of the variables in x's scope are
destroyed; hence, they both use 2 as x's value.

The common workaround for the problem is to use the default argument
values mechanism in the lambdas.

MakeLambdaGood() avoids the problem by forcing x's evaluation (via
DoLambda()) at the time the lambdas are created (as opposed to called),
so the lambdas reference x's value rather than x itself as a variable.

Cheers,
Chris
--
http://blog.rebertia.com

--
http://mail.python.org/mailman/listinfo/python-list
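Since the default-argument workaround is only mentioned in passing
above, here is a minimal sketch of it (same Python 2 style as the
original post):

# x=x is evaluated once, when each lambda is *created*, and stored as
# that lambda's own default value, so later calls see the captured value.
def MakeLambdaFixed():
    a = []
    for x in [1, 2]:
        a.append(lambda q, x=x: x + q)
    return a

[a, b] = MakeLambdaFixed()
print a(10)   # 11
print b(10)   # 12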
Re: lambda forms within a loop
In message , Michal Ostrowski wrote:

> def MakeLambdaBad():
>     a = []
>     for x in [1,2]:
>         a.append(lambda q: x + q)
>     return a

Here's another form that should work:

def MakeLambdaGood2():
    a = []
    for x in [1, 2]:
        a.append((lambda x: lambda q: x + q)(x))
    return a

It's all a question of scope.

--
http://mail.python.org/mailman/listinfo/python-list
Re: Python 2.6 Deprecation Warnings with __new__ Can someone explain why?
Terry Reedy writes on Fri, 23 Oct 2009 03:04:41 -0400:

> Consider this:
>
> def blackhole(*args, **kwds): pass
>
> The fact that it accept args that it ignores could be considered
> misleading or even a bug.

Maybe it could. But it is by no means necessary. In mathematics, there
is a set of important functions which behave precisely as described
above (they ignore their arguments); they are called "constant
functions".

--
http://mail.python.org/mailman/listinfo/python-list
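For illustration (my own example, not from either post), a constant
function in Python that accepts and ignores whatever it is given:

# A constant function: takes any arguments, ignores them all.
def always_42(*args, **kwds):
    return 42

print(always_42())             # 42
print(always_42(1, 2, x=3))    # 42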
Re: how can i use lxml with win32com?
Hi,

elca, 25.10.2009 02:35:
> hello...
> if anyone know..please help me !
> i really want to know...i was searched in google lot of time.
> but can't found clear soultion. and also because of my lack of python
> knowledge.
> i want to use IE.navigate function with beautifulsoup or lxml..
> if anyone know about this or sample.
> please help me!
> thanks in advance ..

You wrote a message with nine lines, only one of which gives a tiny hint
on what you actually want to do. What about providing an explanation of
what you want to achieve instead? Try to answer questions like: Where
does your data come from? Is it XML or HTML? What do you want to do with
it?

This might help:

http://www.catb.org/~esr/faqs/smart-questions.html

Stefan

--
http://mail.python.org/mailman/listinfo/python-list
Re: how can i use lxml with win32com?
Hello,

I'm very sorry. First, my source comes from a website that consists
mainly of HTML, and I want to make a web scraper. I found some script
source on the internet; the following script is supposed to make
BeautifulSoup and PAMIE work together, but when I run it this error
happens:

AttributeError: PAMIE instance has no attribute 'pageText'
  File "C:\test12.py", line 7, in <module>
    bs = BeautifulSoup(ie.pageText())

and the following is the original source as I found it on the internet:

from BeautifulSoup import BeautifulSoup
from PAM30 import PAMIE
url = 'http://www.cnn.com'
ie = PAMIE(url)
bs = BeautifulSoup(ie.pageText())

If possible I really want to make BeautifulSoup or lxml work together
with PAMIE. Sorry for my bad English. Thanks in advance.

Stefan Behnel-3 wrote:
>
> Hi,
>
> elca, 25.10.2009 02:35:
>> hello...
>> if anyone know..please help me !
>> i really want to know...i was searched in google lot of time.
>> but can't found clear soultion. and also because of my lack of python
>> knowledge.
>> i want to use IE.navigate function with beautifulsoup or lxml..
>> if anyone know about this or sample.
>> please help me!
>> thanks in advance ..
>
> You wrote a message with nine lines, only one of which gives a tiny hint
> on what you actually want to do. What about providing an explanation of
> what you want to achieve instead? Try to answer questions like: Where
> does your data come from? Is it XML or HTML? What do you want to do with
> it?
>
> This might help:
>
> http://www.catb.org/~esr/faqs/smart-questions.html
>
> Stefan
> --
> http://mail.python.org/mailman/listinfo/python-list
>
--
View this message in context:
http://www.nabble.com/how-can-i-use-lxml-with-win32com--tp26044339p26045617.html
Sent from the Python - python-list mailing list archive at Nabble.com.

--
http://mail.python.org/mailman/listinfo/python-list
Re: how can i use lxml with win32com?
On 25 Oct 2009, at 07:45, elca wrote:
> i want to make web scraper.
> if possible i really want to make it work together with beautifulsoup
> or lxml with PAMIE.

Scraping information from webpages falls apart into two tasks:

1. Getting the HTML data
2. Extracting information from the HTML data

It looks like you want to use Internet Explorer for getting the HTML
data; is there any reason you can't use a simpler approach like
urllib2.urlopen()? Once you have the HTML data, you could feed it into
BeautifulSoup or lxml.

Mixing up 1 and 2 into a single statement created some confusion for
you, I think.

Greetings,

--
http://mail.python.org/mailman/listinfo/python-list
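To make that two-step split concrete, a minimal sketch (Python 2, and
assuming the target page is plain HTML that doesn't need JavaScript to
render):

import urllib2
from BeautifulSoup import BeautifulSoup

# 1. Getting the HTML data
html = urllib2.urlopen('http://www.cnn.com').read()

# 2. Extracting information from the HTML data
soup = BeautifulSoup(html)
print soup.title.string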
