Re: how to sort a list of tuples with custom function
Glenn Linderman wrote:

> On 8/1/2017 2:10 PM, Piet van Oostrum wrote:
>> Ho Yeung Lee writes:
>>
>>> def isneighborlocation(lo1, lo2):
>>>     if abs(lo1[0] - lo2[0]) < 7 and abs(lo1[1] - lo2[1]) < 7:
>>>         return 1
>>>     elif abs(lo1[0] - lo2[0]) == 1 and lo1[1] == lo2[1]:
>>>         return 1
>>>     elif abs(lo1[1] - lo2[1]) == 1 and lo1[0] == lo2[0]:
>>>         return 1
>>>     else:
>>>         return 0
>>>
>>> sorted(testing1, key=lambda x: (isneighborlocation.get(x[0]), x[1]))
>>>
>>> return something like
>>> [(1,2),(3,3),(2,5)]
>>
>> I think you are trying to sort a list of two-dimensional points into a
>> one-dimensional list in such a way that points that are close together
>> in the two-dimensional sense will also be close together in the
>> one-dimensional list. But that is impossible.
>
> It's not impossible, it just requires an appropriate distance function
> used in the sort.

That's a grossly misleading addition.

Once you have an appropriate clustering algorithm

    clusters = split_into_clusters(items)  # needs access to all items

you can devise a key function

    def get_cluster(item, clusters=split_into_clusters(items)):
        return next(
            index for index, cluster in enumerate(clusters) if item in cluster
        )

such that

    grouped_items = sorted(items, key=get_cluster)

but that's a roundabout way to write

    grouped_items = sum(split_into_clusters(items), [])

In other words: sorting is useless, what you really need is a suitable
approach to split the data into groups.

One well-known algorithm is k-means clustering:

https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.kmeans.html

Here is an example with pictures:

https://dzone.com/articles/k-means-clustering-scipy

--
https://mail.python.org/mailman/listinfo/python-list
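[Editor's note] The split-into-groups idea in the message above can be sketched with only the standard library. This is a naive k-means, purely illustrative: `split_into_clusters` in the post is a hypothetical name, and the scipy implementation linked above is the better tool for real use.

```python
from math import dist  # Python 3.8+

def split_into_clusters(points, k, iterations=20):
    """Naive k-means: group 2-D points into k clusters of nearby points."""
    # Seed centroids with the first k points (real k-means picks better seeds).
    centroids = [points[i] for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            nearest = min(range(k), key=lambda i: dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its cluster (keep it if empty).
        centroids = [
            tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return clusters

points = [(1, 2), (3, 3), (50, 50), (51, 52), (2, 5)]
grouped_items = sum(split_into_clusters(points, 2), [])
```

As the post says, concatenating the clusters gives the grouped list directly; no sort is involved.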
cgiapp versus apache & nginx+fcgiwrap
A client is using servicenow to direct requests to a cgi application.
The servicenow mechanism seems to create a whole-file JSON request. For
testing purposes the cgi application tries to respond to standard posts
as well. The connection part is handled like this:
    F = cgi.FieldStorage(keep_blank_values=False)
    D = dict(
        json='',
        SCRIPT_NAME=os.environ.get('SCRIPT_NAME', 'app.cgi'),
        )
    try:
        K = F.keys()
    except TypeError:
        K = None
    if K is not None:
        for k in K:
            D[k] = xmlEscape(F.getvalue(k))
        json = D['json']
    else:
        try:
            # assume json is everything
            D['json'] = F.file.read()
        except:
            log_error()
so if we don't see any normal cgi arguments we try to read json from the cgi
input file.
With nginx+fcgiwrap this seems to work well for both normal post/get and the
whole file mechanism, but with apache the split never worked; we always seem to
get keys in K even if it is an empty list.
Looking at the environment that the cgi script sees I cannot see anything
obvious except the expected differences for the two frontend servers.
I assume apache (2.4) is doing something different. The client would feel more
comfortable with apache.
Does anyone know how to properly distinguish the two mechanisms, i.e. a
standard POST and a POST with no structure?
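[Editor's note] One way to tell the two apart is to check CONTENT_TYPE before cgi.FieldStorage ever parses the body, so apache's differing FieldStorage behaviour never matters. A hedged sketch; whether servicenow really sends application/json is an assumption to verify against actual requests:

```python
import os
import sys

def read_request():
    # Form posts arrive as application/x-www-form-urlencoded or
    # multipart/form-data; a whole-file JSON post should arrive as
    # application/json (assumption: check what servicenow actually sends).
    ctype = os.environ.get('CONTENT_TYPE', '').split(';')[0].strip().lower()
    if ctype == 'application/json':
        length = int(os.environ.get('CONTENT_LENGTH') or 0)
        return {'json': sys.stdin.buffer.read(length)}
    import cgi  # only parse as a form when it really is one
    form = cgi.FieldStorage(keep_blank_values=False)
    return {k: form.getvalue(k) for k in form.keys()}
```

This sidesteps the question of what F.keys() returns, which is the part apache and fcgiwrap appear to disagree on.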
--
Robin Becker
--
https://mail.python.org/mailman/listinfo/python-list
Re: @lru_cache on functions with no arguments
On Tue, 01 Aug 2017 11:05:38 -0400, Terry Reedy wrote:

> On 8/1/2017 7:06 AM, Matt Wheeler wrote:
>> On Tue, 1 Aug 2017 at 02:32 Terry Reedy wrote:
>>
>>> On 7/31/2017 7:31 PM, [email protected] wrote:
>>>> As part of the Python 3 cleanup in Django there are a fair few uses
>>>> of @functools.lru_cache on functions that take no arguments.
>>>
>>> This makes no sense to me.  If the function is being called for
>>> side-effects, then it should not be cached.  If the function is being
>>> called for a result, different for each call, calculated from a
>>> changing environment, then it should not be cached.  (Input from disk
>>> is an example.)  If the function returns a random number, or a
>>> non-constant value from an oracle (such as a person), it should not be
>>> cached.  If the function returns a constant (possibly calculated
>>> once), then the constant should just be bound to a name (which is a
>>> form of caching) rather than using the overkill of an lru cache.  What
>>> possibility am I missing?
>>
>> A function which is moderately expensive to run, that will always
>> return the same result if run again in the same process, and which will
>> not be needed in every session.
>
> In initialization section:
>     answer = None
>
> Somewhere else:
>     answer = expensive_calculation() if answer is None else answer
>
> The conditional has to be cheaper than accessing lru cache.  There can
> be more than one of these.

One conditional has to be cheaper than accessing lru cache.  Debugging
the errors from forgetting to wrap every single reference to "answer" in
a "if answer is None" test is not so cheap.  Maintaining twice as much
state (one function plus one global variable, instead of just one
function) is not so cheap.

Sometimes paying a small cost to avoid having to occasionally pay a
large cost is worth it.

This is effectively the "Only Once" decorator pattern, which guarantees
a function only executes once no matter how often you call it.
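[Editor's note] For reference, the zero-argument pattern being debated looks like this; a minimal sketch where the expensive body is an invented stand-in:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_constant():
    # Stand-in for a moderately expensive, always-identical computation.
    return sum(i * i for i in range(1000))

first = expensive_constant()   # computed once
second = expensive_constant()  # served from the cache
```

The second call never re-runs the body, which is the "Only Once" behaviour described above, and `expensive_constant.cache_info()` lets you confirm the hit/miss counts.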
-- “You are deluded if you think software engineers who can't write operating systems or applications without security holes, can write virtualization layers without security holes.” —Theo de Raadt -- https://mail.python.org/mailman/listinfo/python-list
how to fast processing one million strings to remove quotes
Hi, I am trying to remove extra quotes from a large set of strings (a
list of strings), so each original string looks like,
"""str_value1"",""str_value2"",""str_value3"",1,""str_value4"""
I'd like to remove the start and end quotes and the extra pairs of quotes
on each string value, so the result will look like,
"str_value1","str_value2","str_value3",1,"str_value4"
and then join each string by a new line.
I have tried the following code,
for line in str_lines[1:]:
    strip_start_end_quotes = line[1:-1]
    splited_line_rem_quotes = strip_start_end_quotes.replace('\"\"', '"')
    str_lines[str_lines.index(line)] = splited_line_rem_quotes

for_pandas_new_headers_str = '\n'.join(splited_lines)
but it is really slow (running for ages) if the list contains over 1
million string lines. I am thinking about a fast way to do that.
I also tried to multiprocessing this task by
def preprocess_data_str_line(data_str_lines):
    """
    :param data_str_lines:
    :return:
    """
    for line in data_str_lines:
        strip_start_end_quotes = line[1:-1]
        splited_line_rem_quotes = strip_start_end_quotes.replace('\"\"', '"')
        data_str_lines[data_str_lines.index(line)] = splited_line_rem_quotes

    return data_str_lines


def multi_process_prepcocess_data_str(data_str_lines):
    """
    :param data_str_lines:
    :return:
    """
    # if cpu load < 25% and 4GB of ram free use 3 cores
    # if cpu load < 70% and 4GB of ram free use 2 cores
    cores_to_use = how_many_core()

    data_str_blocks = slice_list(data_str_lines, cores_to_use)

    for block in data_str_blocks:
        # spawn processes for each data string block assigned to every cpu core
        p = multiprocessing.Process(target=preprocess_data_str_line,
                                    args=(block,))
        p.start()
but I don't know how to concatenate the results back into the list so that
I can join the strings in the list by new lines.
So, ideally, I am thinking about using multiprocessing + a fast function to
preprocessing each line to speed up the whole process.
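[Editor's note] A hedged sketch of one way to do this: multiprocessing.Pool.map returns the per-line results in input order, which answers the concatenation question. The per-line cleaning mirrors the loop above; the chunk size is a guess to tune:

```python
import multiprocessing

def clean_line(line):
    # Strip the outer quotes, then collapse each doubled quote.
    return line[1:-1].replace('""', '"')

def preprocess(str_lines):
    # Pool.map preserves input order, so the cleaned lines can be
    # joined directly; chunksize batches work to cut IPC overhead.
    with multiprocessing.Pool() as pool:
        cleaned = pool.map(clean_line, str_lines[1:], chunksize=10_000)
    return '\n'.join(cleaned)
```

On Windows, guard the call with `if __name__ == '__main__':`, since worker processes re-import the main module there.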
cheers
--
https://mail.python.org/mailman/listinfo/python-list
RE: concurrent-log-handler 0.9.6 released
Pypiwin32 exists to allow pywin32 to be installed through pip (thanks to
Glyph and the Twisted project for supporting that).

> -----Original Message-----
> From: Python-announce-list [mailto:python-announce-list-bounces+tritium-
> [email protected]] On Behalf Of Preston Landers
> Sent: Sunday, July 30, 2017 11:24 AM
> To: [email protected]
> Subject: concurrent-log-handler 0.9.6 released
>
> concurrent-log-handler
> ======================
>
> RotatingFileHandler replacement with concurrency, gzip and Windows
> support
>
> This package provides an additional log handler for Python's standard
> logging package (PEP 282). This handler will write log events to a log
> file which is rotated when the log file reaches a certain size. Multiple
> processes can safely write to the same log file concurrently. Rotated
> logs can be gzipped if desired. Windows and POSIX systems are supported.
> An optional threaded queue logging handler is provided to perform
> logging in the background.
>
> This is a fork of Lowell Alleman's ConcurrentLogHandler 0.9.1 with
> additional enhancements:
>
> * Renamed package to `concurrent_log_handler`
> * Provide `use_gzip` option to compress rotated logs
> * Support for Windows
>   * Note: PyWin32 is required on Windows, but can't be installed as an
>     automatic dependency because it's not currently installable through
>     pip.
> * Fix deadlocking problem with ConcurrentLogHandler under newer Python
> * More secure generation of random numbers for temporary filenames
> * Change the name of the lockfile to have .__ in front of it (hide it
>   on Posix)
> * Provide a QueueListener / QueueHandler implementation for handling
>   log events in a background thread. Optional: requires Python 3.
>
> Download
> ========
>
> `pip install concurrent-log-handler`
>
> https://github.com/Preston-Landers/concurrent-log-handler
>
> https://pypi.python.org/pypi/concurrent-log-handler
>
> News / Changes
> ==============
>
> - 0.9.7/0.9.6: Fix platform specifier for PyPi
>
> - 0.9.5: Add `use_gzip` option to compress rotated logs. Add an
>   optional threaded logging queue handler based on the standard
>   library's `logging.QueueHandler`.
>
> - 0.9.4: Fix setup.py to not include tests in distribution.
>
> - 0.9.3: Refactoring release
>   * For publishing fork on pypi as `concurrent-log-handler` under new
>     package name.
>   * NOTE: PyWin32 is required on Windows but is not an explicit
>     dependency because the PyWin32 package is not currently installable
>     through pip.
>   * Fix lock behavior / race condition
>
> - 0.9.2: Initial release of fork by Preston Landers based on a fork of
>   Lowell Alleman's ConcurrentLogHandler 0.9.1
>   * Fixes deadlocking issue with recent versions of Python
>   * Puts `.__` prefix in front of lock file name
>   * Use `secrets` or `SystemRandom` if available.
>   * Add/fix Windows support
>
> thanks,
> Preston
> --
> https://mail.python.org/mailman/listinfo/python-announce-list
>
> Support the Python Software Foundation:
> http://www.python.org/psf/donations/
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to fast processing one million strings to remove quotes
On 2017-08-02 16:05, Daiyue Weng wrote:
Hi, I am trying to removing extra quotes from a large set of strings (a
list of strings), so for each original string, it looks like,
"""str_value1"",""str_value2"",""str_value3"",1,""str_value4"""
I like to remove the start and end quotes and extra pairs of quotes on each
string value, so the result will look like,
"str_value1","str_value2","str_value3",1,"str_value4"
and then join each string by a new line.
I have tried the following code,
for line in str_lines[1:]:
    strip_start_end_quotes = line[1:-1]
    splited_line_rem_quotes = strip_start_end_quotes.replace('\"\"', '"')
    str_lines[str_lines.index(line)] = splited_line_rem_quotes

for_pandas_new_headers_str = '\n'.join(splited_lines)
but it is really slow (running for ages) if the list contains over 1
million string lines. I am thinking about a fast way to do that.
[snip]
The problem is the line:
str_lines[str_lines.index(line)]
It does a linear search through str_lines until it finds a match for
the line.
To find the 10th line it must search through the first 10 lines.
To find the 100th line it must search through the first 100 lines.
To find the 1000th line it must search through the first 1000 lines.
And so on.
In Big-O notation, the performance is O(n**2).
The Pythonic way of doing it is to put the results into a new list:
new_str_lines = str_lines[:1]

for line in str_lines[1:]:
    strip_start_end_quotes = line[1:-1]
    splited_line_rem_quotes = strip_start_end_quotes.replace('\"\"', '"')
    new_str_lines.append(splited_line_rem_quotes)
In Big-O notation, the performance is O(n).
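[Editor's note] The same O(n) rewrite also fits in a list comprehension; a sketch on invented sample data:

```python
str_lines = ['header', '"""a"",""b"""', '"""c"",""d"""']

# Keep the header, clean every following line exactly once.
new_str_lines = str_lines[:1] + [
    line[1:-1].replace('""', '"') for line in str_lines[1:]
]
```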
--
https://mail.python.org/mailman/listinfo/python-list
Get list of attributes from list of objects?
Given a list of objects that all have a particular attribute, is there
a simple way to get a list of those attributes?
In other words:
class Foo(object):
    def __init__(self, name):
        self.name = name

foolist = [ Foo('a'), Foo('b'), Foo('c') ]

namelist = []
for foo in foolist:
    namelist.append(foo.name)
Is there a way to avoid the for loop and create 'namelist' with a single
expression?
--
Ian Pilcher [email protected]
"I grew up before Mark Zuckerberg invented friendship"
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to fast processing one million strings to remove quotes
That works superbly! Any idea how to multiprocess the task and
concatenate the results from each process back into a list?
On 2 August 2017 at 18:05, MRAB wrote:
> On 2017-08-02 16:05, Daiyue Weng wrote:
>
>> Hi, I am trying to removing extra quotes from a large set of strings (a
>> list of strings), so for each original string, it looks like,
>>
>> """str_value1"",""str_value2"",""str_value3"",1,""str_value4"""
>>
>>
>> I like to remove the start and end quotes and extra pairs of quotes on
>> each
>> string value, so the result will look like,
>>
>> "str_value1","str_value2","str_value3",1,"str_value4"
>>
>>
>> and then join each string by a new line.
>>
>> I have tried the following code,
>>
>> for line in str_lines[1:]:
>>     strip_start_end_quotes = line[1:-1]
>>     splited_line_rem_quotes = strip_start_end_quotes.replace('\"\"', '"')
>>     str_lines[str_lines.index(line)] = splited_line_rem_quotes
>>
>> for_pandas_new_headers_str = '\n'.join(splited_lines)
>>
>> but it is really slow (running for ages) if the list contains over 1
>> million string lines. I am thinking about a fast way to do that.
>>
>> [snip]
>
> The problem is the line:
>
> str_lines[str_lines.index(line)]
>
> It does a linear search through str_lines until it finds a match for
> the line.
>
> To find the 10th line it must search through the first 10 lines.
>
> To find the 100th line it must search through the first 100 lines.
>
> To find the 1000th line it must search through the first 1000 lines.
>
> And so on.
>
> In Big-O notation, the performance is O(n**2).
>
> The Pythonic way of doing it is to put the results into a new list:
>
>
> new_str_lines = str_lines[:1]
>
> for line in str_lines[1:]:
>     strip_start_end_quotes = line[1:-1]
>     splited_line_rem_quotes = strip_start_end_quotes.replace('\"\"', '"')
>     new_str_lines.append(splited_line_rem_quotes)
>
>
> In Big-O notation, the performance is O(n).
> --
> https://mail.python.org/mailman/listinfo/python-list
>
--
https://mail.python.org/mailman/listinfo/python-list
Re: Get list of attributes from list of objects?
On Thu, Aug 3, 2017 at 3:21 AM, Ian Pilcher wrote:
> Given a list of objects that all have a particular attribute, is there
> a simple way to get a list of those attributes?
>
> In other words:
>
> class Foo(object):
>     def __init__(self, name):
>         self.name = name
>
> foolist = [ Foo('a'), Foo('b'), Foo('c') ]
>
> namelist = []
> for foo in foolist:
>     namelist.append(foo.name)
>
> Is there a way to avoid the for loop and create 'namelist' with a single
> expression?
You can't eliminate the loop, but you can compact it into a single
logical operation:
namelist = [foo.name for foo in foolist]
That's a "list comprehension", and is an elegant way to process a list
in various ways.
ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to fast processing one million strings to remove quotes
On 8/2/2017 1:05 PM, MRAB wrote:
On 2017-08-02 16:05, Daiyue Weng wrote:
Hi, I am trying to removing extra quotes from a large set of strings (a
list of strings), so for each original string, it looks like,
"""str_value1"",""str_value2"",""str_value3"",1,""str_value4"""
I like to remove the start and end quotes and extra pairs of quotes on
each
string value, so the result will look like,
"str_value1","str_value2","str_value3",1,"str_value4"
and then join each string by a new line.
I have tried the following code,
for line in str_lines[1:]:
    strip_start_end_quotes = line[1:-1]
    splited_line_rem_quotes = strip_start_end_quotes.replace('\"\"', '"')
    str_lines[str_lines.index(line)] = splited_line_rem_quotes

for_pandas_new_headers_str = '\n'.join(splited_lines)
Do you actually need the list of strings joined up like that into one
string, or will the one string just be split again into multiple strings?
but it is really slow (running for ages) if the list contains over 1
million string lines. I am thinking about a fast way to do that.
[snip]
The problem is the line:
str_lines[str_lines.index(line)]
It does a linear search through str_lines until it finds a match for
the line.
To find the 10th line it must search through the first 10 lines.
To find the 100th line it must search through the first 100 lines.
To find the 1000th line it must search through the first 1000 lines.
And so on.
In Big-O notation, the performance is O(n**2).
The Pythonic way of doing it is to put the results into a new list:
new_str_lines = str_lines[:1]

for line in str_lines[1:]:
    strip_start_end_quotes = line[1:-1]
    splited_line_rem_quotes = strip_start_end_quotes.replace('\"\"', '"')
    new_str_lines.append(splited_line_rem_quotes)
In Big-O notation, the performance is O(n).
Making a slice copy of all but the first member of the list is also
unnecessary. Use an iterator instead.
lineit = iter(str_lines)
new_str_lines = [next(lineit)]
for line in lineit:
    ...
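[Editor's note] Terry's iterator variant, completed into a runnable sketch on invented sample data (the `...` above stands for the same cleaning body as MRAB's loop):

```python
str_lines = ['header', '"""a"",""b"""']

lineit = iter(str_lines)
new_str_lines = [next(lineit)]  # consume the header without slicing a copy
for line in lineit:
    new_str_lines.append(line[1:-1].replace('""', '"'))
```

The iterator avoids materialising the million-line `str_lines[1:]` copy.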
--
Terry Jan Reedy
--
https://mail.python.org/mailman/listinfo/python-list
Re: Get list of attributes from list of objects?
On 8/2/2017 1:21 PM, Ian Pilcher wrote:
Given a list of objects that all have a particular attribute, is there
a simple way to get a list of those attributes?
In other words:
class Foo(object):
    def __init__(self, name):
        self.name = name

foolist = [ Foo('a'), Foo('b'), Foo('c') ]

namelist = []
for foo in foolist:
    namelist.append(foo.name)
Is there a way to avoid the for loop and create 'namelist' with a single
expression?
namelist = [foo.name for foo in foolist]
--
Terry Jan Reedy
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to fast processing one million strings to remove quotes
Daiyue Weng wrote:

> Hi, I am trying to removing extra quotes from a large set of strings (a
> list of strings), so for each original string, it looks like,
>
> """str_value1"",""str_value2"",""str_value3"",1,""str_value4"""

Where did you get that strange list from in the first place?

If it is read from a file it is likely that you can parse the data into
the desired format directly, e. g. by using the csv module.

--
https://mail.python.org/mailman/listinfo/python-list
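[Editor's note] The csv suggestion works because each raw line is itself a single CSV-quoted field whose content is exactly the desired output line; a sketch on sample data matching the pattern above:

```python
import csv

raw = '"""str_value1"",""str_value2"",1,""str_value3"""'

# The whole line is one quoted CSV field with doubled inner quotes,
# so a single pass through csv.reader unwraps it.
(cleaned,) = next(csv.reader([raw]))
```

A second `csv.reader` pass over `cleaned` would then split the inner fields, with no manual quote juggling at all.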
Re: how to fast processing one million strings to remove quotes
It comes from an encrypted and snappy-compressed file.

On 2 August 2017 at 19:13, Peter Otten <[email protected]> wrote:

> Daiyue Weng wrote:
>
> > Hi, I am trying to removing extra quotes from a large set of strings (a
> > list of strings), so for each original string, it looks like,
> >
> > """str_value1"",""str_value2"",""str_value3"",1,""str_value4"""
>
> Where did you get that strange list from in the first place?
>
> If it is read from a file it is likely that you can parse the data into
> the desired format directly, e. g. by using the csv module.
>
> --
> https://mail.python.org/mailman/listinfo/python-list

--
https://mail.python.org/mailman/listinfo/python-list
Re: Get list of attributes from list of objects?
On 08/02/2017 12:49 PM, Chris Angelico wrote:
> On Thu, Aug 3, 2017 at 3:21 AM, Ian Pilcher wrote:
>
> You can't eliminate the loop, but you can compact it into a single
> logical operation:
>
> namelist = [foo.name for foo in foolist]
>
> That's a "list comprehension", and is an elegant way to process a list
> in various ways.

Very nice!  Thank you!

--
Ian Pilcher                                         [email protected]
"I grew up before Mark Zuckerberg invented friendship"

--
https://mail.python.org/mailman/listinfo/python-list
Re: how to fast processing one million strings to remove quotes
Daiyue Weng wrote:

> On 2 August 2017 at 19:13, Peter Otten <[email protected]> wrote:
>
>> Daiyue Weng wrote:
>>
>> > Hi, I am trying to removing extra quotes from a large set of strings
>> > (a list of strings), so for each original string, it looks like,
>> >
>> > """str_value1"",""str_value2"",""str_value3"",1,""str_value4"""
>>
>> Where did you get that strange list from in the first place?
>>
>> If it is read from a file it is likely that you can parse the data into
>> the desired format directly, e. g. by using the csv module.
>
> it is getting from an encrypted and snappy file

Provided it is only a few lines it might be helpful if you could post the
code to create the list ;)

--
https://mail.python.org/mailman/listinfo/python-list
Re: python in chromebook
On Thu, 27 Jul 2017 10:03:29 +0900, Byung-Hee HWANG (황병희, 黃炳熙) wrote:

> my computer is chromebook. how can i install python in chromebook?
> barely i did meet develop mode of chromebook. also i'm new to python.
>
> INDEED, i want to make python code on my chromebook.
>
> thanks in advance!!!

google crouton chromebook

--
https://mail.python.org/mailman/listinfo/python-list
Re: Get list of attributes from list of objects?
On Wed, Aug 2, 2017 at 11:49 AM, Chris Angelico wrote:
> On Thu, Aug 3, 2017 at 3:21 AM, Ian Pilcher wrote:
>> Given a list of objects that all have a particular attribute, is there
>> a simple way to get a list of those attributes?
>>
>> In other words:
>>
>> class Foo(object):
>>     def __init__(self, name):
>>         self.name = name
>>
>> foolist = [ Foo('a'), Foo('b'), Foo('c') ]
>>
>> namelist = []
>> for foo in foolist:
>>     namelist.append(foo.name)
>>
>> Is there a way to avoid the for loop and create 'namelist' with a single
>> expression?
>
> You can't eliminate the loop
Well, you can do this:

import operator
namelist = list(map(operator.attrgetter('name'), foolist))
But use what Chris and Terry suggested; it's cleaner.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Will my project be accepted in pypi?
There are lots of cool things on PyPI. For instance, check out pytube as well. It's similar to youtube-dl. Cheers, Vedu -- https://mail.python.org/mailman/listinfo/python-list
Re: cgiapp versus apache & nginx+fcgiwrap
On 8/2/2017 1:13 AM, Robin Becker wrote:

> we always seem to get keys in K even if it is an empty list.

Can you treat None and empty list the same?

> Looking at the environment that the cgi script sees I cannot see
> anything obvious except the expected differences for the two frontend
> servers.

Might be more enlightening to look at the input data stream, if the
environment is the same.

--
https://mail.python.org/mailman/listinfo/python-list
Subclassing dict to modify values
YANQ (Yet Another Newbie Question) ...

I would like to create a subclass of dict that modifies values as they
are inserted.  (Effectively I want to do the equivalent of "interning"
the values, although they aren't strings.)

Do I need to implement any methods other than __setitem__?  (I.e. will
any other methods/operators that add values to the dictionary call my
__setitem__ method?)

--
Ian Pilcher                                         [email protected]
"I grew up before Mark Zuckerberg invented friendship"

--
https://mail.python.org/mailman/listinfo/python-list
first code attempt
Hello,

I am using the workbook Computer Coding by Jon Woodcock, published by DK
WORKBOOKS, to try to learn computer coding.  I only get to pages 10 and
11 in Robot Programs when round robots appear in squares to manipulate
them.  Where in the world do I find robots and squares?

I would appreciate your help.  Thank you.

George Wilson

--
https://mail.python.org/mailman/listinfo/python-list
Re: first code attempt
On Wednesday, August 2, 2017 at 5:34:18 PM UTC-5, [email protected] wrote:

> Hello,
> I am using the workbook Computer Coding by Jon Woodcock,
> published by DK WORKBOOKS, to try to learn computer coding.
> I only get to pages 10 and 11 in Robot Programs when round
> robots appear in squares to manipulate them. Where in the
> world do I find robots and squares?

As for squares, you came to the right place! Not quite bottom of the
barrel "okee from muskogee" squares, but somewhere along a graduated
scale, for sure. As for robots, i'm sure a few of the "malevolent type"
are still running around in Elon Musk's nightmares.

But seriously. ;-)

If you need help with a coding question, then you should post the code
along with any error messages you may have received. Asking vague
questions typically does not produce a desirable result. Of course,
asking vague questions about a random Python beginners' guide that few
here have ever heard of would produce even less desirable results. The
key to eliciting good answers is to be specific, articulate and
explicit.

Try again.

--
https://mail.python.org/mailman/listinfo/python-list
Re: Subclassing dict to modify values
On Wed, 2 Aug 2017 at 23:26 Ian Pilcher wrote:

> YANQ (Yet Another Newbie Question) ...
>
> I would like to create a subclass of dict that modifies values as they
> are inserted.  (Effectively I want to do the equivalent of "interning"
> the values, although they aren't strings.)
>
> Do I need to implement any methods other than __setitem__?  (I.e. will
> any other methods/operators that add values to the dictionary call my
> __setitem__ method?)

Yes. At least update and setdefault will also need to be overridden.

You will probably be better off creating a class which inherits from
collections.MutableMapping. A MutableMapping subclass behaves like a
dict (i.e. is a mapping type) as long as the minimum required methods
are defined. See the table for the required methods at
https://docs.python.org/3/library/collections.abc.html#collections-abstract-base-classes

All of the mixin methods (the ones defined for you) will call the
abstract methods you override.

--
Matt Wheeler
http://funkyh.at

--
https://mail.python.org/mailman/listinfo/python-list
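[Editor's note] A hedged sketch of the MutableMapping route described above; the `intern_value` hook is a hypothetical name standing in for whatever normalisation the OP actually wants:

```python
from collections.abc import MutableMapping

class InterningDict(MutableMapping):
    """Dict-like mapping that normalises values on insertion."""

    def __init__(self, intern_value=lambda v: v):
        self._data = {}
        self._intern = intern_value

    def __setitem__(self, key, value):
        # Every mutating mixin method (update, setdefault, ...) funnels here.
        self._data[key] = self._intern(value)

    def __getitem__(self, key):
        return self._data[key]

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)
```

Because the mixin `update()` and `setdefault()` are built on `__setitem__`, they pick up the interning for free, which is exactly what a plain dict subclass would not give you.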
Re: first code attempt
On Wed, Aug 2, 2017 at 4:36 PM, wrote:
> Hello,
> I am using the workbook Computer Coding by Jon Woodcock, published by DK
> WORKBOOKS, to try to learn computer coding. I only get to pages 10 and 11
> in Robot Programs when round robots appear in squares to manipulate them.
> Where in the world do I find robots and squares?
I happened to find preview pages for pages 10-11 on the publisher's website:
https://www.dk.com/us/9781465426857-dk-workbooks-computer-coding/
I suggest you reread page 10 very carefully. This is a paper and
pencil set of exercises; nothing is typed into the computer. You have
a "programming language" for your paper and pencil robot with just
three instructions:
(1) "F": This instruction moves your robot forward one square.
(2) "R": Rotate your robot in place 90 degrees (a quarter-turn)
to the right.
(3) "L": Rotate your robot in place 90 degrees (a quarter-turn)
to the left.
The author wants you to figure out what sequence of these three
instructions ("F", "R", and "L") are needed to navigate the paper
robot through the printed mazes provided in the book. In each problem
the robot starts out on the green circle of each printed diagram and
your task is to get the robot to the square with the checkered flag.
The actual problems are on page 11. The instructions say that the
robot cannot go through pink walls, so you will have to write down the
correct instructions to get around such obstacles. Look closely at
the example on page 10. This shows how to write down the instructions
to get through that simple maze.
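[Editor's note] The three-instruction language described above is small enough to simulate; a purely illustrative sketch (the grid coordinates and program are invented, not from the book):

```python
# Directions cycle clockwise: up, right, down, left (as grid offsets).
MOVES = [(0, -1), (1, 0), (0, 1), (-1, 0)]

def run_robot(program, start=(0, 0), facing=1):
    """Run an F/R/L program; return the robot's final square."""
    x, y = start
    for step in program:
        if step == 'F':          # move forward one square
            dx, dy = MOVES[facing]
            x, y = x + dx, y + dy
        elif step == 'R':        # quarter-turn right
            facing = (facing + 1) % 4
        elif step == 'L':        # quarter-turn left
            facing = (facing - 1) % 4
    return x, y
```

Solving a maze on paper amounts to finding a program string whose final square is the checkered flag.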
Again, this is a paper and pencil exercise with a very simple 3
instruction programming language. The author is trying to get you to
understand how detailed and simple the steps need to be to program a
computer. You have to be very specific! And programming languages
have a very limited vocabulary compared to a natural human language.
Programming language "words" mean very specific things, normally one
such "thing" per word. English words, for instance, may mean a
variety of things depending on the context where they are being used.
I hope this helps! Later in the book they will get to how to do
things in the Python programming language (Or so the book description
says.).
You might want to consider joining the Python Tutor list
(https://mail.python.org/mailman/listinfo/tutor). This is meant for
newcomers to Python who have a lot of basic questions. This list is
more oriented towards already competent Python programmers who tend to
go off on interesting technical tangents. ~(:>))
--
boB
--
https://mail.python.org/mailman/listinfo/python-list
Re: first code attempt
On Wednesday, August 2, 2017 at 7:22:56 PM UTC-5, boB Stepp wrote:

> You might want to consider joining the Python Tutor list
> (https://mail.python.org/mailman/listinfo/tutor).  This is meant for
> newcomers to Python who have a lot of basic questions.

I would second that advice. However, i would also urge the OP to lurk,
listen and participate in this group as much as possible. Simply asking
questions is not the path to enlightenment; no, one must learn to swim
with sharks and run with wolves if one is to become competent in any
field of expertise. Self confidence and communication skills are just as
important as technical skills and library/interface memorization.

> This list is more oriented towards already competent Python
> programmers who tend to go off on interesting technical
> tangents. ~(:>))

Not _always_ interesting (unless i'm participating ;-)[1], but most of
the time, yes. And such a diverse range of ideas and topics can only be
fostered in a forum that maintains a deep respect for the freedom of
speech. Which is why this open forum is so vital to the Python
community. Exposure to diverse forms of knowledge is the _key_ to
becoming a proficient problem solver.

[1] As per Rick's "Theory of Subjectivity"[2]
[2] But let's not go off on that tangent...[3]
[3] At least, not today O:-)

--
https://mail.python.org/mailman/listinfo/python-list
Re: how to sort a list of tuples with custom function
https://gist.github.com/hoyeunglee/f371f66d55f90dda043f7e7fea38ffa2

I have nearly succeeded another way; please run the above code. When there are many black words it is very slow, so I only open Notepad, maximize it with no content, capture the screen, and save it as roster.png before running the script. However, I find it cannot circle all words with a red rectangle, only some of them.

On Wednesday, August 2, 2017 at 3:06:40 PM UTC+8, Peter Otten wrote:
> Glenn Linderman wrote:
> > On 8/1/2017 2:10 PM, Piet van Oostrum wrote:
> >> Ho Yeung Lee writes:
> >>> def isneighborlocation(lo1, lo2):
> >>>     if abs(lo1[0] - lo2[0]) < 7 and abs(lo1[1] - lo2[1]) < 7:
> >>>         return 1
> >>>     elif abs(lo1[0] - lo2[0]) == 1 and lo1[1] == lo2[1]:
> >>>         return 1
> >>>     elif abs(lo1[1] - lo2[1]) == 1 and lo1[0] == lo2[0]:
> >>>         return 1
> >>>     else:
> >>>         return 0
> >>>
> >>> sorted(testing1, key=lambda x: (isneighborlocation.get(x[0]), x[1]))
> >>>
> >>> returns something like
> >>> [(1,2),(3,3),(2,5)]
> >> I think you are trying to sort a list of two-dimensional points into a
> >> one-dimensional list in such a way that points that are close together
> >> in the two-dimensional sense will also be close together in the
> >> one-dimensional list. But that is impossible.
> > It's not impossible, it just requires an appropriate distance function
> > used in the sort.
>
> That's a grossly misleading addition.
>
> Once you have an appropriate clustering algorithm
>
>     clusters = split_into_clusters(items)  # needs access to all items
>
> you can devise a key function
>
>     def get_cluster(item, clusters=split_into_clusters(items)):
>         return next(
>             index for index, cluster in enumerate(clusters)
>             if item in cluster
>         )
>
> such that
>
>     grouped_items = sorted(items, key=get_cluster)
>
> but that's a roundabout way to write
>
>     grouped_items = sum(split_into_clusters(items), [])
>
> In other words: sorting is useless; what you really need is a suitable
> approach to split the data into groups.
>
> One well-known algorithm is k-means clustering:
>
> https://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.vq.kmeans.html
>
> Here is an example with pictures:
>
> https://dzone.com/articles/k-means-clustering-scipy

--
https://mail.python.org/mailman/listinfo/python-list
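To make the grouping idea concrete, here is a rough pure-Python sketch of k-means (in practice the `scipy.cluster.vq.kmeans` routine linked above would be used); the sample points and the initial centers are made up for illustration:

```python
import math

def kmeans(points, centers, rounds=10):
    """Assign each point to its nearest center, then move each center
    to the mean of its cluster; repeat for a fixed number of rounds."""
    for _ in range(rounds):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda i: math.dist(p, centers[i]))
            clusters[nearest].append(p)
        centers = [(sum(x for x, _ in c) / len(c),
                    sum(y for _, y in c) / len(c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

points = [(1, 2), (3, 3), (2, 5), (50, 50), (51, 52)]
# Flattening the clusters places nearby points next to each other,
# which is what sorting alone cannot guarantee.
grouped = sum(kmeans(points, centers=[(0, 0), (60, 60)]), [])
```

Flattening with `sum(..., [])` mirrors the `grouped_items` line in the quoted post: the "sorted" result is just the clusters concatenated.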
Command-line torrent search tool (windows/linux)
Torrench - a command-line torrent search tool for Windows and Linux. The tool fetches torrents from existing torrent-hosting sites. Websites supported:

1. linuxtracker.org - Get Linux distro ISO torrents
2. ThePirateBay - Do read the usage instructions.

The project is in Python 3 and is completely open-source. Project link: https://github.com/kryptxy/torrench

I plan on hosting it on PyPI as well as the AUR (Arch User Repository) :) Hope you like this tool and find it useful. Feedback/suggestions are highly appreciated.

--
https://mail.python.org/mailman/listinfo/python-list
Re: how to fast processing one million strings to remove quotes
On Thursday, 3 August 2017 01:05:57 UTC+10, Daiyue Weng wrote:
> Hi, I am trying to remove extra quotes from a large set of strings (a
> list of strings), so each original string looks like,
>
> """str_value1"",""str_value2"",""str_value3"",1,""str_value4"""
>
>
> I'd like to remove the start and end quotes and the extra pairs of quotes
> on each string value, so the result will look like,
>
> "str_value1","str_value2","str_value3",1,"str_value4"
>
>
> and then join each string by a new line.
>
> I have tried the following code,
>
> for line in str_lines[1:]:
>     strip_start_end_quotes = line[1:-1]
>     splited_line_rem_quotes = strip_start_end_quotes.replace('""', '"')
>     str_lines[str_lines.index(line)] = splited_line_rem_quotes
>
> for_pandas_new_headers_str = '\n'.join(splited_lines)
>
> but it is really slow (running for ages) if the list contains over 1
> million string lines. I am thinking about a fast way to do that.
>
> I also tried to multiprocess this task with
>
> def preprocess_data_str_line(data_str_lines):
>     """
>     :param data_str_lines:
>     :return:
>     """
>     for line in data_str_lines:
>         strip_start_end_quotes = line[1:-1]
>         splited_line_rem_quotes = strip_start_end_quotes.replace('""', '"')
>         data_str_lines[data_str_lines.index(line)] = splited_line_rem_quotes
>
>     return data_str_lines
>
>
> def multi_process_prepcocess_data_str(data_str_lines):
>     """
>     :param data_str_lines:
>     :return:
>     """
>     # if cpu load < 25% and 4GB of ram free use 3 cores
>     # if cpu load < 70% and 4GB of ram free use 2 cores
>     cores_to_use = how_many_core()
>
>     data_str_blocks = slice_list(data_str_lines, cores_to_use)
>
>     for block in data_str_blocks:
>         # spawn a process for each data string block assigned to a cpu core
>         p = multiprocessing.Process(target=preprocess_data_str_line,
>                                     args=(block,))
>         p.start()
> but I don't know how to concatenate the results back into the list so that
> I can join the strings in the list by new lines.
>
> So, ideally, I am thinking about using multiprocessing + a fast function
> to preprocess each line to speed up the whole process.
>
> cheers
Hi MRAB,
My first thought is to use split/join to solve this problem, but you would need
to decide what to do with the non-strings in your 1,000,000 element list. You
also need to be sure that the pipe character | is in none of your strings.
split_on_dbl_dbl_quote = '|'.join(original_list).split('""')
remove_dbl_dbl_quotes_and_outer_quotes = ''.join(split_on_dbl_dbl_quote[::2]).split('|')
(Note that join is a string method, so it is the separator that is joined on, not the list.)
You need to be sure of your data: [::2] (return just even-numbered elements)
relies on all double-double-quotes both opening and closing within the same
string.
This runs in under a second for a million strings but does affect *all*
elements, not just strings. The non-strings would become strings after the
second statement.
As to multi-processing: I would be looking at well-optimised single-thread
solutions like split/join before I consider MP. If you can fit the problem to a
split-join it'll be much simpler and more "pythonic".
Cheers,
Nick
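For what it's worth, the quadratic cost in the quoted loop comes from `str_lines.index(line)`, which is a linear search repeated for every line; building the result in a single pass avoids it entirely. A minimal sketch, keeping the OP's strip-and-replace logic (the sample input below is made up):

```python
def clean_lines(str_lines):
    # Strip the outer quotes and collapse each doubled quote, one pass per line;
    # building a new string avoids the O(n^2) list.index() lookups.
    return "\n".join(line[1:-1].replace('""', '"') for line in str_lines)

cleaned = clean_lines(['"""str_value1"",""str_value2"",1,""str_value3"""'])
# cleaned == '"str_value1","str_value2",1,"str_value3"'
```

On a million lines this is a single linear pass, so no multiprocessing should be needed.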
--
https://mail.python.org/mailman/listinfo/python-list
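On the OP's remaining question of collecting results from the worker processes: `multiprocessing.Pool.map` returns one result per chunk, in chunk order, so the pieces can be re-joined directly. A hedged sketch; the simple slicing here stands in for the OP's `how_many_core`/`slice_list` helpers, whose definitions were not shown:

```python
import multiprocessing

def clean_chunk(chunk):
    # Same per-line transform as the original code, applied to one chunk.
    return [line[1:-1].replace('""', '"') for line in chunk]

def clean_parallel(str_lines, workers=4):
    # Split into roughly equal chunks; Pool.map preserves chunk order,
    # so the cleaned pieces can simply be concatenated afterwards.
    size = max(1, len(str_lines) // workers)
    chunks = [str_lines[i:i + size] for i in range(0, len(str_lines), size)]
    with multiprocessing.Pool(workers) as pool:
        results = pool.map(clean_chunk, chunks)
    return "\n".join(line for chunk in results for line in chunk)
```

Note that `clean_chunk` must be defined at module level so it can be pickled for the worker processes.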
