Re: Asyncio -- delayed calculation
On Thu, 1 Dec 2016 06:53 pm, Christian Gollwitzer wrote:

> well that works - but I think it is possible to explain it, without
> actually understanding what it does behind the scenes:
>
>     x = foo()
>     # schedule foo for execution, i.e. put it on a TODO list
>
>     await x
>     # run the TODO list until foo has finished

Nope, sorry, that doesn't help even a little bit. For starters, it doesn't
work: it is a syntax error.

py> async def foo():
...     return 1
...
py> x = foo()
py> await x
  File "<stdin>", line 1
    await x
          ^
SyntaxError: invalid syntax

But even if it did work, why am I waiting for the TODO list to finish?
Doesn't that mean I'm now blocking, which goes completely against the idea
of writing non-blocking asynchronous code? If I wanted to block waiting
for x, I'd just make it a regular, easy-to-understand, synchronous
function.

Besides, where does x stash its result? In a global variable? What if
another async routine messes with the same global?

My first impression is that we have a couple of good models for preemptive
parallelism, threads and processes, both of which can do everything that
concurrency can do, and more, and both of which are significantly easier
to understand too.

So why do we need asyncio? What is it actually good for?

--
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

--
https://mail.python.org/mailman/listinfo/python-list
Re: Asyncio -- delayed calculation
Steve D'Aprano :
> py> await x
>   File "<stdin>", line 1
>     await x
>           ^
> SyntaxError: invalid syntax
"await" is only allowed inside a coroutine.
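A minimal sketch (not from the original post) of how the pieces fit
together once `await` is moved inside a coroutine; `new_event_loop()` is
used here only to keep the sketch self-contained:

```python
import asyncio

async def foo():
    return 1

async def main():
    x = foo()         # creates a coroutine object; nothing runs yet
    result = await x  # suspends main() until foo() has finished
    return result

# drive the coroutine to completion from ordinary synchronous code
loop = asyncio.new_event_loop()
print(loop.run_until_complete(main()))  # prints 1
loop.close()
```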
> So why do we need asyncio? What is it actually good for?
Asyncio is a form of cooperative multitasking. It presents a framework
of "fake threads". The objective and programming model is identical to
that of threads.
As to why Python should need a coroutine framework, you can answer it
from two angles:
1. Why does one need cooperative multitasking?
2. Why should one use coroutines to implement cooperative
multitasking?
1. Cooperative multitasking has made its way into Java ("NIO") and C#
(async/await). It has come about as enterprise computing realized the
multithreading model of the 1990s was shortsighted. In particular, it
wasn't scalable. Enterprise solutions collapsed under the weight of tens
of thousands of threads. Stack space ran out and schedulers became slow.
<https://en.wikipedia.org/wiki/C10k_problem>
2. I have always been into asynchronous programming (cooperative
multitasking), but coroutines are far from my favorite programming
model. I am guessing Guido introduced them to Python because:
* C# has them (me too!).
* They have a glorious computer scientific past (CSP, emperor's new I/O
framework).
* They look like threads.
* They were already there in the form of generators (low-hanging
fruit).
And, maybe most importantly:
* Twisted (et al) had needed an event-driven framework but Python
  didn't have one out of the box (<https://lwn.net/Articles/692254/>).
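The "already there in the form of generators" point is easy to
demonstrate: a round-robin scheduler over plain generators is already
cooperative multitasking. This is a hypothetical sketch, not code from
the thread:

```python
from collections import deque

log = []

def task(name, steps):
    # a "task" is just a generator; each yield hands control back
    for i in range(steps):
        log.append((name, i))
        yield

def run(tasks):
    """Round-robin over generators until all are exhausted."""
    queue = deque(tasks)
    while queue:
        t = queue.popleft()
        try:
            next(t)          # run the task up to its next yield
            queue.append(t)  # not finished: requeue it
        except StopIteration:
            pass             # task finished

run([task('a', 2), task('b', 2)])
print(log)  # [('a', 0), ('b', 0), ('a', 1), ('b', 1)] -- interleaved
```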
Marko
--
https://mail.python.org/mailman/listinfo/python-list
Re: Asyncio -- delayed calculation
"Steve D'Aprano" wrote in message
news:[email protected]...

> My first impressions on this is that we have a couple of good models for
> preemptive parallelism, threads and processes, both of which can do
> everything that concurrency can do, and more, and both of which are
> significantly easier to understand too.
>
> So why do we need asyncio? What is it actually good for?

As I have mentioned, my use-case is a multi-user client/server system. The
traditional approach used to be to use threads to allow multiple users to
work concurrently.

Then Twisted made a strong case for an asynchronous approach. One of their
claims (which I have no reason to doubt) was that, because each user
'session' spends most of its time waiting for something - keyboard input,
reply from database, etc - their approach allows hundreds of concurrent
users, something that I believe would not be possible with threading or
multi-processing.

Frank Millman

--
https://mail.python.org/mailman/listinfo/python-list
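Frank's "hundreds of concurrent users" claim can be sketched with asyncio
itself: each simulated session below spends its time waiting, so one
thread interleaves hundreds of them. The session count and sleep time are
illustrative assumptions, not figures from the thread:

```python
import asyncio

async def session(user_id, results):
    # each "session" mostly waits (stand-in for keyboard/database I/O)
    await asyncio.sleep(0.01)
    results.append(user_id)

async def main(n):
    results = []
    # n concurrent sessions share a single thread
    await asyncio.gather(*(session(i, results) for i in range(n)))
    return results

loop = asyncio.new_event_loop()
done = loop.run_until_complete(main(500))
loop.close()
print(len(done))  # 500 sessions completed, overlapped in one thread
```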
Re: Asyncio -- delayed calculation
"Frank Millman" :

> Then Twisted made a strong case for an asynchronous approach. One of
> their claims (which I have no reason to doubt) was that, because each
> user 'session' spends most of its time waiting for something -
> keyboard input, reply from database, etc - their approach allows
> hundreds of concurrent users, something that I believe would not be
> possible with threading or multi-processing.

You don't need asyncio to do event-driven programming in Python. I have
programmed several event-driven applications in Python (and even more of
them in other languages) without using asyncio (you only need
select.epoll()).

My favorite model is the so-called "callback hell" with explicit finite
state machines. That tried-and-true model has always been used with
parallel, concurrent and network programming as well as user-interface
programming. Your code should closely resemble this:

   <http://www.bayfronttechnologies.com/capesdl.gif>

Multiprocessing is also an important tool for compartmentalizing and
parallelizing functionality. Threads are mainly suitable for turning
obnoxious blocking APIs into nonblocking ones, but even there, I would
prefer to give that job to separate processes.

Marko

--
https://mail.python.org/mailman/listinfo/python-list
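Marko's point that the OS readiness API is enough can be sketched with the
stdlib `selectors` module (which wraps `select.epoll()` on Linux). The
socketpair and callback below are illustrative assumptions, not code from
the thread:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, portable elsewhere
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

received = []

def on_readable(sock):
    # callback invoked by the event loop when sock becomes readable
    received.append(sock.recv(1024))

sel.register(b, selectors.EVENT_READ, on_readable)
a.send(b'hello')

# one turn of the event loop: dispatch callbacks for ready sockets
for key, _events in sel.select(timeout=1):
    key.data(key.fileobj)

print(received)  # [b'hello']
a.close(); b.close(); sel.close()
```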
Re: Asyncio -- delayed calculation
On Sat, Dec 3, 2016 at 1:26 AM, Frank Millman wrote:

> Then Twisted made a strong case for an asynchronous approach. One of
> their claims (which I have no reason to doubt) was that, because each
> user 'session' spends most of its time waiting for something - keyboard
> input, reply from database, etc - their approach allows hundreds of
> concurrent users, something that I believe would not be possible with
> threading or multi-processing.

I'm not sure that "hundreds" would be a problem - I've had processes with
hundreds of threads before - but if you get up to hundreds of *thousands*
of threads, then yes, threads start to be a problem. When you have that
sort of traffic (that's multiple requests per millisecond, or over a
million requests per minute), you have to think about throughput and
efficiency, not just simplicity.

But for low traffic environments (single digits of requests per second),
threads work just fine. Actually, for traffic levels THAT low, you could
probably serialize your requests and nobody would notice. That's the
simplest of all, but it scales pretty terribly :)

Not everyone needs the full power of asyncio, so not everyone will really
see the point. I like that it's available to everyone, though. If you
need it, it's there.

ChrisA

--
https://mail.python.org/mailman/listinfo/python-list
Re: Request Help With Byte/String Problem
On 2016-12-02, Wildman via Python-list wrote:

> On Wed, 30 Nov 2016 14:39:02 +0200, Anssi Saari wrote:
>
>> There'll be a couple more issues with the printing but they should be
>> easy enough.
>
> I finally figured it out, I think. I'm not sure if my changes are
> what you had in mind but it is working. Below is the updated code.
> Thank you for not giving me the answer. It was a good learning
> experience for me and that was my purpose in the first place.
>
> def format_ip(addr):
>     return str(int(addr[0])) + '.' + \  # replace ord() with int()
>            str(int(addr[1])) + '.' + \
>            str(int(addr[2])) + '.' + \
>            str(int(addr[3]))

This is a little more "pythonic":

  def format_ip(addr):
      return '.'.join(str(int(a)) for a in addr)

I don't know what the "addr" array contains, but if addr is a byte
string, then the "int()" call is not needed; in Python 3, a byte is
already an integer:

  def format_ip(a):
      return '.'.join(str(b) for b in a)

  addr = b'\x12\x34\x56\x78'

  print(format_ip(addr))

If it is an array of strings containing base-10 representations of
integers, then the str() call isn't needed either:

  def format_ip(a):
      return '.'.join(s for s in a)

  addr = ['12','34','56','78']

  print(format_ip(addr))

--
Grant Edwards               grant.b.edwards        Yow! TAILFINS!!
                                  at               ... click ...
                              gmail.com

--
https://mail.python.org/mailman/listinfo/python-list
Re: correct way to catch exception with Python 'with' statement
On 2016-12-02, Steve D'Aprano wrote:

> I'm not an expert on the low-level hardware details, so I welcome
> correction, but I think that you can probably expect that the OS can
> interrupt code execution between any two CPU instructions.

Yep, mostly. Some CPUs have "lock" features that allow two or more
adjacent instructions to be atomic. Such a "lock" feature is usually used
to implement atomic read-modify-write operations (e.g. increment a memory
byte/word, set/clear a bit in a memory byte/word) on CPUs that don't have
any read-modify-write instructions.

In general, CISC processors like x86, AMD64, 68K have read-modify-write
instructions that allow you to increment a memory location or set/clear a
bit in memory with a single instruction:

  INC.W [R0]   # increment memory word whose addr is in register R0

Many RISC CPUs don't (many ARMs) and require three instructions to
increment a word in memory:

  LD  R1,[R0]  # read into register R1 memory word whose addr is in R0
  INC R1       # increment value in R1
  ST  R1,[R0]  # store from register R1 into memory at addr in R0

Some such RISC CPUs have a mechanism (other than the normal interrupt
enable/disable mechanism) to allow you to lock the CPU for the duration
of those three instructions to ensure that they are atomic.

--
Grant Edwards               grant.b.edwards        Yow! I request a weekend
                                  at               in Havana with Phil
                              gmail.com            Silvers!

--
https://mail.python.org/mailman/listinfo/python-list
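The same load/modify/store hazard shows up at the Python level: a bare
`counter += 1` is a read, an add and a write, so threaded code guards it
with a lock. A hypothetical sketch, not code from the thread:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:        # make the read-modify-write effectively atomic
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- no increments lost
```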
Re: Request Help With Byte/String Problem
On Fri, 02 Dec 2016 15:11:18 +, Grant Edwards wrote:

> I don't know what the "addr" array contains, but if addr is a byte
> string, then the "int()" call is not needed; in Python 3, a byte is
> already an integer:
>
>   def format_ip(a):
>       return '.'.join(str(b) for b in a)
>
>   addr = b'\x12\x34\x56\x78'
>
>   print(format_ip(addr))

It is a byte string just like your 'addr =' example and the above code
works perfectly. Thank you.

--
GNU/Linux user #557453
The cow died so I don't need your bull!

--
https://mail.python.org/mailman/listinfo/python-list
Re: correct way to catch exception with Python 'with' statement
On 12/01/2016 08:39 PM, Ned Batchelder wrote:

> On Thursday, December 1, 2016 at 7:26:18 PM UTC-5, DFS wrote:
>> How is it possible that the 'if' portion runs, then 44/100,000ths of a
>> second later my process yields to another process which deletes the
>> file, then my process continues.
>
> A modern computer is running dozens or hundreds (or thousands!) of
> processes "all at once". How they are actually interleaved on the
> small number of actual processors is completely unpredictable. There
> can be an arbitrary amount of time passing between any two processor
> instructions.

I'm not seeing DFS's original messages, so I'm assuming he's blocked on
the mailing list somehow (I have a vague memory of that). But reading
your quotes of his message reminds me that when I was a system
administrator, I saw this behavior all the time when using rsync on a
busy server. It is very common for files to disappear out from under
rsync. Often we just ignore this, but it can lead to incomplete backups
depending on what disappeared. It was a great day when I moved to using
ZFS snapshots in conjunction with rsync to prevent this problem.

Anyway, the point is to confirm what you're saying: yes, this race
condition is a very real problem and actually happens in the real world.
He's obviously thinking way too narrowly about the issue and neglecting
to consider the effects of multiple processes other than the running
Python program.

It could even be something as simple as trying to read a config file that
might be opened in an editor by the user. Many editors will move/rename
the old file, then write the new version, and then delete the old file.
If the reading of the file happens to be attempted after the old file was
renamed but before the new version was written, the file is gone. This
has happened to me before, especially if load average was high and the
disk has fallen behind.

--
https://mail.python.org/mailman/listinfo/python-list
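One common defense against this kind of vanishing-file race is EAFP:
attempt the read and handle the failure, rather than checking for
existence first (the check can go stale before the open). The helper name
and path below are illustrative, not from the thread:

```python
import os
import tempfile

def read_config(path):
    # EAFP: don't check os.path.exists() first -- the file can vanish
    # between the check and the open; just handle the failure instead
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None  # file disappeared (or never existed)

path = os.path.join(tempfile.mkdtemp(), 'missing.conf')
print(read_config(path))  # None, no crash even though the file is gone
```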
Re: correct way to catch exception with Python 'with' statement
Grant Edwards :

> In general CISC processors like x86, AMD64, 68K have read-modify-write
> instructions that allow you to increment a memory location or
> set/clear a bit in memory with a single instruction:
>
>   INC.W [R0]   # increment memory word whose addr is in register R0

The x86 instruction set has a special lock prefix for the purpose:

   <http://stackoverflow.com/questions/8891067/what-does-the-lock-instruction-mean-in-x86-assembly>

Marko

--
https://mail.python.org/mailman/listinfo/python-list
Re: Request Help With Byte/String Problem
On 2016-12-02, Wildman via Python-list wrote:

> On Fri, 02 Dec 2016 15:11:18 +, Grant Edwards wrote:
>
>> I don't know what the "addr" array contains, but if addr is a byte
>> string, then the "int()" call is not needed; in Python 3, a byte is
>> already an integer:
>>
>>   def format_ip(a):
>>       return '.'.join(str(b) for b in a)
>>
>>   addr = b'\x12\x34\x56\x78'
>>
>>   print(format_ip(addr))
>
> It is a byte string just like your 'addr =' example and
> the above code works perfectly.

More importantly, you've now learned about generator comprehensions (aka
generator expressions) and the string type's "join" method. ;)

--
Grant Edwards               grant.b.edwards        Yow! My Aunt MAUREEN was
                                  at               a military advisor to
                              gmail.com            IKE & TINA TURNER!!

--
https://mail.python.org/mailman/listinfo/python-list
Re: correct way to catch exception with Python 'with' statement
On 2016-12-02, Marko Rauhamaa wrote:

> Grant Edwards :
>> In general CISC processors like x86, AMD64, 68K have read-modify-write
>> instructions that allow you to increment a memory location or
>> set/clear a bit in memory with a single instruction:
>>
>>   INC.W [R0]   # increment memory word whose addr is in register R0
>
> The x86 instruction set has a special lock prefix for the purpose:
>
>    <http://stackoverflow.com/questions/8891067/what-does-the-lock-instruction-mean-in-x86-assembly>

The x86 already has single-instruction read-modify-write instructions, so
there's no possibility of your task being interrupted/suspended during
those single-instruction operations (which was sort of the original
topic).

What the lock prefix does is lock the _bus_ for the duration of that one
instruction so that other bus masters (other CPU cores or DMA masters)
can't access the memory bus in the middle of the R-M-W instruction.
Obviously, if you've got multiple bus masters, merely locking the CPU and
not the bus may not be sufficient to avoid race conditions. Locking the
CPU only prevents races between different execution contexts (processes,
threads, interrupt handlers) on that one CPU.

If you've only got one CPU, and you know that none of the DMA masters are
going to write to your memory location, then you don't need to lock the
bus as long as your operation is a single instruction.

--
Grant Edwards               grant.b.edwards        Yow! Here I am at the
                                  at               flea market but nobody
                              gmail.com            is buying my urine
                                                   sample bottles ...

--
https://mail.python.org/mailman/listinfo/python-list
Re: What do you think: good idea to launch a marketplace on python+django?
On Friday, December 2, 2016 at 2:01:57 AM UTC-5, Gus_G wrote:

> Hello, what do you think about building a marketplace website on a
> combination of python+django? The end result should look and work
> similar to these: https://zoptamo.com/uk/s-abs-c-uk,
> https://www.ownerdirect.com/ . What are your opinions on this idea?
> Maybe there is another, better way to build it?

Something like this: https://marketplace.django-cms.org/en/ ?

--
https://mail.python.org/mailman/listinfo/python-list
Re: compile error when using override
from __future__ import division
from sympy import *
x, y, z, t = symbols('x y z t')
k, m, n = symbols('k m n', integer=True)
f, g, h = symbols('f g h', cls=Function)
class AA(object):
@staticmethod
def __additionFunction__(a1, a2):
return a1*a2 #Put what you want instead of this
def __multiplyFunction__(a1, a2):
return a1*a2+a1 #Put what you want instead of this
def __divideFunction__(a1, a2):
return a1*a1*a2 #Put what you want instead of this
def __init__(self, value):
self.value = value
def __add__(self, other):
return self.value*other.value
def __mul__(self, other):
return self.value*other.value + other.value
def __div__(self, other):
return self.value*other.value*other.value
solve([AA(x)*AA(y) + AA(-1), AA(x) + AA(-2)], x, y)
>>> solve([AA(x)*AA(y) + AA(-1), AA(x) + AA(-2)], x, y)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'Add' and 'AA'
Still an error.

Actually, I invented a three-valued logic algebraic operation, which is
quintessential, and I would substitute it into these operators if this
succeeds.
On Friday, December 2, 2016 at 1:02:19 PM UTC+8, Steve D'Aprano wrote:
> On Fri, 2 Dec 2016 01:35 pm, Ho Yeung Lee wrote:
>
> > from __future__ import division
> > import ast
> > from sympy import *
> > x, y, z, t = symbols('x y z t')
> > k, m, n = symbols('k m n', integer=True)
> > f, g, h = symbols('f g h', cls=Function)
> > import inspect
>
> Neither ast nor inspect is used. Why import them?
>
> The only symbols you are using are x and y.
>
>
> > def op2(a,b):
> > return a*b+a
>
> This doesn't seem to be used. Get rid of it.
>
>
> > class AA(object):
> > @staticmethod
> > def __additionFunction__(a1, a2):
> > return a1*a2 #Put what you want instead of this
> > def __multiplyFunction__(a1, a2):
> > return a1*a2+a1 #Put what you want instead of this
> > def __divideFunction__(a1, a2):
> > return a1*a1*a2 #Put what you want instead of this
>
> None of those methods are used. Get rid of them.
>
> > def __init__(self, value):
> > self.value = value
> > def __add__(self, other):
> > return self.value*other.value
>
> Sorry, you want AA(5) + AA(2) to return 10?
>
> > def __mul__(self, other):
> > return self.value*other.value + other.value
> > def __div__(self, other):
> > return self.value*other.value*other.value
> >
> > solve([AA(x)*AA(y) + AA(-1), AA(x) + AA(-2)], x, y)
>
> I don't understand what you are trying to do here. What result are you
> expecting?
>
> Maybe you just want this?
>
> from sympy import solve, symbols
> x, y = symbols('x y')
> print( solve([x*y - 1, x - 2], x, y) )
>
> which prints the result:
> [(2, 1/2)]
>
>
> Perhaps if you explain what you are trying to do, we can help better.
>
> But please, cut down your code to only code that is being used!
>
>
>
>
> --
> Steve
> “Cheer up,” they said, “things could be worse.” So I cheered up, and sure
> enough, things got worse.
--
https://mail.python.org/mailman/listinfo/python-list
RE: Can json.dumps create multiple lines
On Thu, Dec 1, 2016 at 10:30 AM, Cecil Westerhof wrote:
> I would prefer when it would generate:
> '[
> "An array",
> "with several strings",
> "as a demo"
> ]'
>
> Is this possible, or do I have to code this myself?
> https://docs.python.org/3/library/json.html?highlight=indent#json.dump
>>> json.dumps(["An array", "with several strings", "as a demo"], indent=0)
'[\n"An array",\n"with several strings",\n"as a demo"\n]'
>>> print(_)
[
"An array",
"with several strings",
"as a demo"
]
As Zac stated, use the indent parameter:
>>> print(json.dumps(["An array", {"Dummy": {'wop':'dop', 'dap':'dap'}},
...                   "with several strings", "as a demo"],
...                  sort_keys=True, indent=4))
[
"An array",
{
"Dummy": {
"dap": "dap",
"wop": "dop"
}
},
"with several strings",
"as a demo"
]
>>>
--
https://mail.python.org/mailman/listinfo/python-list
