Re: [Tutor] Make a linked list subscriptable?
On 7/11/19 10:55 AM, Mats Wichmann wrote:
> On 7/10/19 6:30 PM, Sarah Hembree wrote:
>> How might I best make a linked list subscriptable? Below is skeleton
>> code for a linked list (my actual is much more). I've done __iter__
>> and __next__ but I would like to be able to do start:stop:stride.
>> I just can't figure out how. Suggestions or just hints please?
>
> As a learning exercise this can be interesting, but as to practical
> applications, one would like to ask "why"? If index into the list is
> important, then choose a regular list; the "interesting" part of a
> linked list, which is "next node", is then available as index + 1.

To expand on the question: the primary use of something like a linked list is that you want cheap insertions/deletions (O(1)), and in exchange indexing becomes O(n), versus an array-based list which has O(1) indexing but O(n) insertions/deletions (since you need to compact the array). Both can be iterated in O(1) per element.

You can add an index operator that takes O(n) time to a linked list. obj[n] will call obj.__getitem__ (and you will also want to implement __setitem__ and __delitem__); check whether the argument is a slice to handle slices.

--
Richard Damon
___
Tutor maillist - Tutor@python.org
To unsubscribe or change subscription options:
https://mail.python.org/mailman/listinfo/tutor
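A minimal sketch of the approach Richard describes — an O(n) __getitem__ that also accepts slices. The class and node names here are invented for illustration, not taken from Sarah's skeleton code:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

class LinkedList:
    def __init__(self, iterable=()):
        # build the chain front-to-back by prepending in reverse
        self.head = None
        for v in reversed(list(iterable)):
            self.head = Node(v, self.head)

    def __iter__(self):
        node = self.head
        while node is not None:
            yield node.value
            node = node.next

    def __getitem__(self, index):
        if isinstance(index, slice):
            # O(n) walk once, then let the built-in list handle
            # start:stop:stride semantics
            return list(self)[index]
        if index < 0:
            raise IndexError("negative indices not supported in this sketch")
        for i, v in enumerate(self):   # O(n) scan to position
            if i == index:
                return v
        raise IndexError("linked list index out of range")
```

With this, `LinkedList(range(10))[1:8:2]` returns `[1, 3, 5, 7]` — the slice object arrives in __getitem__ just like a plain index does, which is why the isinstance check is all the dispatch you need.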
Re: [Tutor] Multiprocessing with many input input parameters
On 12/07/2019 01:51, DL Neil wrote:
> older articles! We haven't discussed hardware. Most modern PC CPUs offer
> multiple "cores". Assuming (say) four cores, asyncio is capable of
> running up to four processes concurrently - realising attendant
> acceleration of the entirety.

Just to pick up on this point because I often see it being cited. The number of concurrent processes running to achieve a performance gain is only very loosely tied to the number of cores. We ran concurrent processes with significant gains for many years before multi-core processors were invented. Indeed any modern computer runs hundreds of "concurrent" processes on a small number of cores and the OS switches between them.

What the number of cores affects is the number of processes actually executing at the same instant. If you just want to chunk up the processing of a large amount of data and run the exact same code multiple times, then there is no point in having more processes than cores. But if your concurrent processes are doing different tasks on different data then the number of cores is basically irrelevant - especially if they are performing any kind of I/O operations, since they are likely to be parked by the OS for most of the time anyway.

Of course, there is a point where creating extra processes becomes counter-effective, since that is an expensive operation - especially if the process will be very short lived or only execute for tiny lengths of time (such as handling a network event by passing it on to some other process). But for most real-world uses of multiprocessing the number of cores is not a major factor in deciding how many processes to run. I certainly would not hesitate to run 10xN where N is the number of cores. Beyond that you might need to think carefully.

In Sydney's scenario it sounds like the processes are different and explicitly halt to perform I/O, so the cores issue should not be a problem.
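To make the "10xN workers for I/O-bound jobs" point concrete, here is a small sketch. The task function and counts are made up for illustration, and a thread pool stands in for a process pool to keep the sketch light - the reasoning about cores is the same either way:

```python
import os
import time
from multiprocessing.pool import ThreadPool

def fetchlike_task(n):
    # Stand-in for an I/O operation: the worker mostly sleeps,
    # so the OS parks it and the cores sit idle regardless.
    time.sleep(0.05)
    return n * n

cores = os.cpu_count() or 1

# Far more workers than cores is fine here: each worker spends
# nearly all its time waiting, not computing, so they overlap.
with ThreadPool(processes=10 * cores) as pool:
    results = pool.map(fetchlike_task, range(20))
```

With 20 tasks of 50 ms each, one worker would need about a second; with many overlapping workers the wall-clock time approaches a single task's duration, even on one core.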
--
Alan G
Author of the Learn to Program web site
http://www.alan-g.me.uk/
http://www.amazon.com/author/alan_gauld
Follow my photo-blog on Flickr at:
http://www.flickr.com/photos/alangauldphotos
Re: [Tutor] Multiprocessing with many input input parameters
Thanks Mike,

But I am still not clear. Do I write:

def f([x,y,z]) ?

How exactly does one write the function, and how does one ensure that each positional argument is accounted for?

Dr. Sydney Shall
Department of Haematological Medicine
King's College London
123 Coldharbour Lane
London SE5 9NU
ENGLAND
E-Mail: sydney.shall
(Correspondents outside the College should add @KCL.AC.UK)
TEL: +44 (0)208 48 59 01

From: Mike Barnett
Sent: 11 July 2019 16:40
To: Shall, Sydney
Cc: tutor@python.org
Subject: RE: [Tutor] Multiprocessing with many input input parameters

If you're passing parameters as a list, then you need a "," at the end of the items. Otherwise if you have something like a string as the only item, the list will be the string.

list_with_one_item = ['item one',]

@mike

-----Original Message-----
From: Shall, Sydney
Sent: Wednesday, July 10, 2019 11:44 AM
To: tutor@python.org
Subject: [Tutor] Multiprocessing with many input input parameters

I am using MAC OS X 10.14.5 on a MAC iBook.
I use Python 3.7.0 from Anaconda, with Spyder 3.3.3.

I am a relative beginner.

My program models cell reproduction. I have written a program that models this and it works.

Now I want to model a tissue with several types of cells. I did this by simply rerunning the program with different inputs (cell characteristics). But now I want to send and receive signals between the cells in each population. This requires some sort of concurrent processing with halts at appropriate points to pass and receive signals.

I thought to use multiprocessing. I have read the documentation and reproduced the models in the docs. But I cannot figure out how to feed in the data for multiple parameters. I have tried using Pool and it works fine, but I can only get it to accept 1 input parameter, although multiple data inputs with one parameter works nicely.

So, my questions are:
1. Is multiprocessing the suitable choice?
2. If yes, how does one write a function with multiple input parameters?

Thanks in advance.
Sydney

Prodessor. Sydney Shall
Department of Haematological Medicine
King's College London
123 Coldharbour Lane
London SE5 9NU
ENGLAND
E-Mail: sydney.shall
(Correspondents outside the College should add @KCL.AC.UK)
TEL: +44 (0)208 48 59 01
[Tutor] Output reason
Hi,

Can someone please explain me the reason for the below output?

Program:

def fun(n, li=[]):
    a = list(range(5))
    li.append(a)
    print(li)

fun(4)
fun(5, [7, 8, 9])
fun(4, [7, 8, 9])
fun(5)  # reason for output (why am I getting two values in this output?)

Output:

[[0, 1, 2, 3, 4]]
[7, 8, 9, [0, 1, 2, 3, 4]]
[7, 8, 9, [0, 1, 2, 3, 4]]
[[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]

Thank you,
Gursimran
Re: [Tutor] Output reason
On 12/07/2019 15:24, Gursimran Maken wrote:
> Can someone please explain me the reason for below output.

You've been bitten by one of the most common gotchas in Python :-)

> def fun(n, li=[]):
>     a = list(range(5))
>     li.append(a)
>     print(li)
>
> fun(4)
> fun(5, [7, 8, 9])
> fun(4, [7, 8, 9])
> fun(5)  # reason for output (why am I getting two values in this output?)

When you define a default value in Python it creates the default value at the time you define the function. It then uses that value each time a default is needed. In the case of a list that means Python creates one empty list and stores it for use as the default.

When you first call the function with the default, Python adds values to the default list. The second time you call the function using the default, Python adds (more) values to (the same) default list. Sometimes that is useful; usually it's not.

The normal pattern to get round this is to use a None default and modify the function like so:

def fun(n, li=None):
    if li is None:
        li = []  # create a new list
    a = list(range(5))
    li.append(a)
    return li  # bad practice to mix logic and display...

HTH

--
Alan G
Author of the Learn to Program web site
http://www.alan-g.me.uk/
http://www.amazon.com/author/alan_gauld
Follow my photo-blog on Flickr at:
http://www.flickr.com/photos/alangauldphotos
Re: [Tutor] Output reason
If I remember how that works right, there is a single empty list that is created and used for all the calls that use the default argument; your function then modifies that empty list so it is no longer empty, and that modified list is used on future calls. (It is not good to use a mutable as a default parameter.)

A better solution would be to make the default something like None, test at the beginning of the function whether li is None, and if so set it to an empty list. That empty list will be in function scope, so it goes away and a new one is created on each new call.

> On Jul 12, 2019, at 10:24 AM, Gursimran Maken wrote:
>
> Hi,
>
> Can someone please explain me the reason for below output.
>
> Program:
> def fun(n,li = []):
>    a = list(range(5))
>    li.append(a)
>    print(li)
>
> fun(4)
> fun(5,[7,8,9])
> fun(4,[7,8,9])
> fun(5) # reason for output (why am I getting to values in this output.)
>
> Output:
> [[0, 1, 2, 3, 4]]
> [7, 8, 9, [0, 1, 2, 3, 4]]
> [7, 8, 9, [0, 1, 2, 3, 4]]
> [[0, 1, 2, 3, 4], [0, 1, 2, 3, 4]]
>
> Thank you,
> Gursimran
Re: [Tutor] Output reason
On 7/12/19 11:39 AM, Alan Gauld via Tutor wrote:
> On 12/07/2019 15:24, Gursimran Maken wrote:
>
>> Can someone please explain me the reason for below output.
>
> You've been bitten by one of the most common gotchas in Python :-)
>
>> def fun(n,li = []):
>>     a = list(range(5))
>>     li.append(a)
>>     print(li)
>>
>> fun(4)
>> fun(5,[7,8,9])
>> fun(4,[7,8,9])
>> fun(5) # reason for output (why am I getting to values in this output.)
>
> When you define a default value in Python it creates the default value
> at the time you define the function. It then uses that value each time a
> default is needed. In the case of a list that means Python creates an
> empty list and stores it for use as the default.

It may help in seeing why this happens to be aware that a def statement is an executable statement like any other, which is executed at the time it is reached in the file. Running it generates a function object, with a reference to it attached to the name of the function. Conceptually,

def foo():
    ...

is like

foo = FunctionConstructor()

foo ends up referring to an object that is marked as callable, so you can then later call it as foo(). So it makes some sense that some things have to happen at object construction time, like setting up the things that are to be used as defaults.

> When you first call the function with the default Python adds values to
> the default list.
>
> Second time you call the function using the default Python adds (more)
> values to (the same) default list.

FWIW, Python does the same no matter the type of the default argument, but it only causes the trap we poor programmers fall into if the type is one that can be modified ("mutable"). If you have fun(n, a=5) or fun(n, s="stringystuff") those are unchangeable and we don't get this little surprise.

By the way, checker tools (and IDEs/editors with embedded checking capabilities) will warn about this, which is a hint on using good tools.
pylint would tell you this:

W0102: Dangerous default value [] as argument (dangerous-default-value)
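A tiny self-contained demonstration of both the trap and the usual fix (the function names are made up for illustration):

```python
def buggy_append(item, acc=[]):
    # The [] above is created once, at def time, and then shared by
    # every call that relies on the default - the classic gotcha.
    acc.append(item)
    return acc

def safe_append(item, acc=None):
    # None is immutable, so it is safe as a sentinel default.
    if acc is None:
        acc = []          # a fresh list on every defaulted call
    acc.append(item)
    return acc

print(buggy_append(1))  # [1]
print(buggy_append(2))  # [1, 2]  <- surprise: same list as last time
print(safe_append(1))   # [1]
print(safe_append(2))   # [2]     <- fresh list each time
```

Introspection confirms the mechanism: the shared default lives on the function object itself, visible as buggy_append.__defaults__, and it grows with each defaulted call.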
Re: [Tutor] Multiprocessing with many input input parameters
On 11Jul2019 15:40, Mike Barnett wrote:
> If you're passing parameters as a list, then you need a "," at the end
> of the items. Otherwise if you have something like a string as the only
> item, the list will be the string.
>
> list_with_one_item = ['item one',]

Actually, this isn't true. This is a one element list, no trailing comma required:

[5]

Mike has probably confused this with tuples. Because tuples are delineated with parentheses, there is ambiguity between a tuple's parentheses and normal "group these terms together" parentheses. So:

x = 5 + 4 * (9 + 7)

Here we just have parentheses causing the addition "9 + 7" to occur before the multiplication by 4. And this is also legal:

x = 5 + 4 * (9)

where the parentheses don't add anything special in terms of behaviour.

Here is a 2 element tuple:

(9, 7)

How does one write a one element tuple? Like this:

(9,)

Here the trailing comma is _required_ to syntactically indicate that we intend a 1 element tuple instead of a plain "9 in parentheses" as in the earlier assignment statement.

I'm not sure any of this is relevant to Sydney's question though.

Cheers,
Cameron Simpson
Re: [Tutor] Multiprocessing with many input input parameters
On Sat, Jul 13, 2019 at 09:59:16AM +1000, Cameron Simpson wrote:
> Mike has probably confused this with tuples. Because tuples are
> delineated with parentheses, there is ambiguity between a tuple's
> parentheses and normal "group these terms together" parentheses.

There are no "tuple parentheses". Tuples are created by the *comma*, not the parens. The only exception is the empty tuple, since you can't have a comma on its own.

x = ()       # Zero item tuple.
x = 1,       # Single item tuple.
x = 1, 2     # Two item tuple.

Any time you have a tuple, you only need to put parens around it to disambiguate it from the surrounding syntax:

x = 1, 2, (3, 4, 5), 6     # Tuple containing a tuple.
function(0, 1, (2, 3), 4)  # Tuple as argument to a function.

or just to make it more clear to the human reader.

> Here is a 2 element tuple:
>
> (9, 7)
>
> How does one write a one element tuple? Like this:
>
> (9,)

To be clear, in both cases you could drop the parentheses and still get a tuple:

9, 7
9,

provided that wasn't in a context where the comma was interpreted as something with higher syntactic precedence, such as a function call:

func(9, 7)    # Two integer arguments, not one tuple argument.
func((9, 7))  # One tuple argument.

--
Steven
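The comma-makes-the-tuple rule from the two replies above is easy to verify in a few lines:

```python
a = (9)     # just a parenthesised int: the parens group, they don't build
b = (9,)    # one-element tuple: the comma does the work
c = 9,      # the same one-element tuple, no parentheses needed
d = ()      # the empty tuple is the one parens-only case

assert type(a) is int
assert type(b) is tuple and b == c
assert d == tuple() and len(d) == 0
assert (1, 2, (3, 4), 5)[2] == (3, 4)   # nested tuple needs its own parens
```

None of these assertions fire, which is exactly Steven's point: parentheses only disambiguate; the commas create the tuples.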
Re: [Tutor] Multiprocessing with many input input parameters
Hi Sydney,

On Wed, 10 Jul 2019 at 16:45, Shall, Sydney via Tutor wrote:
>
> I am a relative beginner.
>
> My program models cell reproduction. I have written a program that models
> this and it works.
>
> Now I want to model a tissue with several types of cells. I did this by
> simply rerunning the program with different inputs (cell characteristics).
> But now I want to send and receive signals between the cells in each
> population. This requires some sort of concurrent processing with halts at
> appropriate points to pass and receive signals.

You say that this "requires some sort of concurrent processing" but I think that you are mistaken there. I have a lot of experience in mathematical modelling and dynamic simulation, including some multi-cell models, and I have never personally come across a situation where any significant benefit could be obtained from using concurrent processing for different parts of a single simulation. Those situations do exist, but you haven't said anything to make me think that yours is an exceptional case.

A simple way to do this (not the only way!) is something like:

# Some data structure that stores which cells are sending messages
messages_from_cells = {}

for t in range(num_timesteps):
    # Calculate the new state of all cells based only on the old
    # states of all cells and the messages.
    new_cells = {}
    for c in range(len(cells)):
        new_cells[c] = update_cell(cells[c], messages_from_cells)

    # Update all cells synchronously:
    cells = new_cells

    # Update messages based on new cell states:
    for c in range(len(cells)):
        messages_from_cells = update_messages(cells[c], messages_from_cells)

You just need to figure out a data structure (I've suggested a dict above) that would somehow store what messages are being passed between which cells. You can update each cell based on the current messages and then update the messages ready for the next timestep. Concurrent execution is not needed: I have simulated concurrency by using two separate loops over the cells.
The result is as if each cell was updated concurrently. Another approach is that at each timestep you choose a cell randomly and update that one, keeping all the others constant. It really depends what kind of model you are using.

In a simulation context like this there are two different reasons why you might conceivably want to use concurrent execution:

1. Your simulations are CPU-bound and slow, and you need to make them run faster by using more cores.
2. Your simulation needs more memory than an individual computer has, and you need to run it over a cluster of many computers.

Python's multiprocessing module can help with the first problem: it can theoretically make your simulations run faster. However it is hard to actually achieve any speedup that way. Most likely there are other ways to make your code run faster that are easier than using concurrent execution and can deliver bigger gains. Multiprocessing used well might make your code 10x faster, but I will bet that there are easier ways to make your code 100x faster.

Multiprocessing makes the second problem worse: it actually uses more memory on each computer. Solving problem 2 is very hard but can be done. I don't think either problem applies to you though.

There is a situation where I have used multiprocessing to make simulations faster. In practice I rarely want to do just one simulation; I want to do thousands, with different parameters or because they are stochastic and I want to average them. Running these thousands of simulations can be made faster easily with multiprocessing because it is an "embarrassingly parallel" problem. You need to get your simulations working without multiprocessing first though. This is a much easier way to solve problem 1 (in so far as using more cores can help).

Side note: you've misspelt Professor in your email signature:

> Prodessor.
> Sydney Shall

--
Oscar
Re: [Tutor] Reading .csv data vs. reading an array
On Thu, 11 Jul 2019 at 18:52, Chip Wachob wrote:
>
> Hello,

Hi Chip,

...

> So, here's where it gets interesting. And, I'm presuming that someone out
> there knows exactly what is going on and can help me get past this hurdle.

I don't think anyone knows exactly what's going on...

...

> My guess, at this point, is that the way a loop reading a .csv file and the
> way a loop reads an array are somehow slightly different and my code isn't
> accounting for this.

There shouldn't be any difference. When you say "array" it looks to me like a list. Is it a list? I think it should be as simple as changing:

for row in csvReader:

to

for row in myArray:

(without making any other changes)

> The other possibility is that I've become code-blind to a simple mistake
> which my brain keeps overlooking...

The only thing I can see is that you've replaced avg+triglevel with triggervolts. Are you sure they're the same?

--
Oscar
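Oscar's claim that iterating a csv reader and iterating a saved list yield the same rows can be checked in a few lines. The column names and values here are invented; an in-memory StringIO stands in for the real file:

```python
import csv
import io

raw = "time,volts\n0,0.10\n1,0.35\n2,0.20\n"

# Iterate the csv reader directly, as the original loop did.
direct = [row for row in csv.reader(io.StringIO(raw))]

# Read everything into a plain list first, then loop over that.
saved = list(csv.reader(io.StringIO(raw)))
from_list = [row for row in saved]

assert direct == from_list   # same rows, in the same order
```

One real difference worth remembering: a reader is an iterator and is exhausted after one pass, while the saved list can be looped over as many times as you like - a common source of "my second loop sees no data" surprises.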
Re: [Tutor] Multiprocessing with many input input parameters
On 7/12/19 5:53 AM, Shall, Sydney via Tutor wrote:
> Thanks Mike,
>
> But I am still not clear.
>
> do I write:
>
> def f([x,y,z]) ?
>
> How exactly do one write the function and how does one ensure that each
> positional argument is accounted for.

The concept of packing will be useful; you can use the * operator to pack and unpack. A trivial example to get you started:

>>> a = [1, 2, 3, 4]
>>> print(a)
[1, 2, 3, 4]
>>> print(*a)
1 2 3 4

In the first print we print the list; in the second we print the result of unpacking the list - you see it's now four numbers rather than one list of four numbers.

In a function definition you can pack with the * operator:

>>> def f(*args):
...     print(type(args))
...     print(len(args))
...     print(args)
...
>>> f(1, 2, 3, 4)
<class 'tuple'>
4
(1, 2, 3, 4)

Here we called the function with four arguments, but it received those packed into the one argument args, which is a tuple of length 4. Python folk conventionally name the argument which packs the positional args that way - *args - but the name "args" has no magic; its familiarity just aids in recognition. By packing your positional args you don't error out if you're not called with the exact number you expect (or if you want to accept differing numbers of args), and then you can do what you need to with what you get.

The same packing concept works for dictionaries as well; here the operator is **.

>>> def fun(a, b, c):
...     print(a, b, c)
...
>>> d = {'a':2, 'b':4, 'c':10}
>>> fun(**d)
2 4 10

What happened here is that in unpacking, the keys in the dict are matched up with the names of the function parameters, and the values for those keys are passed as the parameters. If your dict doesn't match, it fails:

>>> d = {'a':2, 'b':4, 'd':10}
>>> fun(**d)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: fun() got an unexpected keyword argument 'd'

Dictionary packing in a definition looks like:

>>> def fun(**kwargs):
...     print(f"{kwargs}")
...
>>> fun(a=1, b=2, c=3)
{'a': 1, 'b': 2, 'c': 3}

Again, the name 'kwargs' is just convention. There are rules for how to mix regular positional args, unpacked positional args (or varargs), and keyword args, but I don't want to go on forever here...
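The packing and unpacking rules from the session above condense into one runnable sketch (the function names here are made up; *args and **kwargs are the conventional parameter names, not magic):

```python
def report(*args, **kwargs):
    # args packs extra positional arguments into a tuple;
    # kwargs packs extra keyword arguments into a dict.
    return args, kwargs

positional, keyword = report(1, 2, 3, colour="red")
assert positional == (1, 2, 3)
assert keyword == {"colour": "red"}

# Unpacking goes the other way: * and ** spread containers
# out into the parameters of a call.
def area(width, height):
    return width * height

dims = [4, 5]
named = {"width": 4, "height": 5}
assert area(*dims) == 20     # like area(4, 5)
assert area(**named) == 20   # like area(width=4, height=5)
```

The ** direction is also how one dict of named parameters can drive a function with a fixed signature - handy for the kind of multi-parameter worker functions discussed earlier in the thread.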