Re: Merge Two List of Dict

2016-12-01 Thread Peter Otten
Nikhil Verma wrote:

> Hey guys
> 
> What is the most optimal and pythonic solution for this situation
> 
> A = [{'person_id': '1', 'adop_count': '2'}, {'person_id': '3',
> 'adop_count': '4'}]
> *len(A) might be above 10L*
> 
> B = [{'person_id': '1', 'village_id': '3'}, {'person_id': '3',
> 'village_id': '4'}]
> *len(B) might be above 20L*
> 
> 
> OutPut List should be
> 
> C = B = [{'adop_count': '2', 'village_id': '3'}, {'adop_count': '4',
> 'village_id': '4'}]
> 
> Thanks in advance

Build a lookup table that maps person_id to village_id:

>>> A = [{'person_id': '1', 'adop_count': '2'}, {'person_id': '3',
... 'adop_count': '4'}]
>>> B = [{'person_id': '1', 'village_id': '3'}, {'person_id': '3',
... 'village_id': '4'}]
>>> p2v = {item["person_id"]: item["village_id"] for item in B}
>>> assert len(B) == len(p2v), "duplicate person_id"
>>> import collections
>>> v2a = collections.defaultdict(int)
>>> for item in A:
... v2a[p2v[item["person_id"]]] += int(item["adop_count"])
... 
>>> [{"adop_count": str(v), "village_id": k} for k, v in v2a.items()]
[{'adop_count': '4', 'village_id': '4'}, {'adop_count': '2', 'village_id': '3'}]

If the data stems from a database you can run (untested)

select B.village_id, sum(A.adop_count) from A inner join B on A.person_id = 
B.person_id;

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Merge Two List of Dict

2016-12-01 Thread Peter Otten
Peter Otten wrote:

> If the data stems from a database you can run (untested)
> 
> select B.village_id, sum(A.adop_count) from A inner join B on A.person_id
> = B.person_id;
> 

Oops, I forgot the group-by clause:

select B.village_id, sum(A.adop_count) 
from A inner join B on A.person_id = B.person_id 
group by B.village_id;



Fwd: Merge Two List of Dict

2016-12-01 Thread Nikhil Verma
Just editing the count: it was in Indian place-value notation.


-- Forwarded message --
From: Nikhil Verma 
Date: Thu, Dec 1, 2016 at 12:44 PM
Subject: Merge Two List of Dict
To: [email protected]


Hey guys

What is the most optimal and pythonic solution for this situation

A = [{'person_id': '1', 'adop_count': '2'}, {'person_id': '3',
'adop_count': '4'}]
*len(A) might be above 10*

B = [{'person_id': '1', 'village_id': '3'}, {'person_id': '3',
'village_id': '4'}]
*len(B) might be above 200*


Output list should be

C = B = [{'adop_count': '2', 'village_id': '3'}, {'adop_count': '4',
'village_id': '4'}]

Thanks in advance





-- 


Nikhil Verma
about.me/nikhil_verma



Re: compile error when using override

2016-12-01 Thread Steve D'Aprano
On Thu, 1 Dec 2016 05:26 pm, Ho Yeung Lee wrote:

> import ast
> from __future__ import division

That's not actually your code. That will be a SyntaxError.

Except in the interactive interpreter, "__future__" imports must be the very
first line of code.


> class A:
>     @staticmethod
>     def __additionFunction__(a1, a2):
>         return a1*a2 #Put what you want instead of this

That cannot work in Python 2, because you are using a "classic"
or "old-style" class. For staticmethod to work correctly, you need to
inherit from object:

class A(object):
...


Also, do not use double-underscore names for your own functions or methods.
__NAME__ (two leading and two trailing underscores) are reserved for
Python's internal use. You should not invent your own.

What do you need this "additionFunction" method for? Why not put this in the
__add__ method?

>   def __add__(self, other):
>       return self.__class__.__additionFunction__(self.value, other.value)
>   def __mul__(self, other):
>       return self.__class__.__multiplyFunction__(self.value, other.value)

They should be:

def __add__(self, other):
return self.additionFunction(self.value, other.value)

def __mul__(self, other):
return self.multiplyFunction(self.value, other.value)

Or better:

def __add__(self, other):
return self.value + other.value

def __mul__(self, other):
return self.value * other.value



-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.



Re: Merge Two List of Dict

2016-12-01 Thread Tim Chase
On 2016-12-01 12:44, Nikhil Verma wrote:
> A = [{'person_id': '1', 'adop_count': '2'}, {'person_id': '3',
> 'adop_count': '4'}]
> *len(A) might be above 10L*
> 
> B = [{'person_id': '1', 'village_id': '3'}, {'person_id': '3',
> 'village_id': '4'}]
> *len(B) might be above 20L*
> 
> 
> OutPut List should be
> 
> C = B = [{'adop_count': '2', 'village_id': '3'}, {'adop_count': '4',
> 'village_id': '4'}]

You omit some details that would help:

- what happened to "person_id" in the results?  Just drop it?

- can duplicates of "person_id" appear in either A or B?  If so, what
  should happen in the output?

- can a "person_id" appear in A or B but not appear in the other?

- is A always just person_id/adop_count and is B always
  person_id/village_id, or can other data appear in (or be absent
  from) each dict?

Your answers would change the implementation details.

-tkc




Re: The Case Against Python 3

2016-12-01 Thread Paul Moore
On Tuesday, 29 November 2016 01:01:01 UTC, Chris Angelico  wrote:
> So what is it that's trying to read something and is calling an
> f-string a mere string?

gettext.c2py:

"""Gets a C expression as used in PO files for plural forms and returns a
Python lambda function that implements an equivalent expression.
"""
# Security check, allow only the "n" identifier
import token, tokenize
tokens = tokenize.generate_tokens(io.StringIO(plural).readline)
try:
    danger = [x for x in tokens if x[0] == token.NAME and x[1] != 'n']
except tokenize.TokenError:
    raise ValueError('plural forms expression error, maybe unbalanced parenthesis')
else:
    if danger:
        raise ValueError('plural forms expression could be dangerous')

So the only things that count as DANGER are NAME tokens that aren't "n". That 
seems pretty permissive...

While I agree that f-strings are more dangerous than people will immediately 
realise (the mere fact that we call them f-*strings* when they definitely 
aren't strings is an example of that), the problem here is clearly (IMO) with 
the sloppy checking in gettext.
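As a standalone illustration (editor's sketch that mirrors the check rather than reusing gettext internals), the scan accepts any expression whose only NAME tokens are 'n':

```python
import io
import token
import tokenize

def plural_is_safe(plural):
    # Mimic gettext's check: scan the expression's tokens and flag
    # any NAME token other than "n" as dangerous.
    tokens = tokenize.generate_tokens(io.StringIO(plural).readline)
    try:
        danger = [t for t in tokens if t[0] == token.NAME and t[1] != 'n']
    except tokenize.TokenError:
        raise ValueError('plural forms expression error, maybe unbalanced parenthesis')
    return not danger

print(plural_is_safe('n != 1'))     # only "n" appears, so it is accepted
print(plural_is_safe('os.system'))  # "os" and "system" are NAME tokens, rejected
```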

Paul


Re: The Case Against Python 3

2016-12-01 Thread Ned Batchelder
On Thursday, December 1, 2016 at 9:03:46 AM UTC-5, Paul  Moore wrote:
> While I agree that f-strings are more dangerous than people will immediately 
> realise (the mere fact that we call them f-*strings* when they definitely 
> aren't strings is an example of that), the problem here is clearly (IMO) with 
> the sloppy checking in gettext.


Can you elaborate on the dangers as you see them?

--Ned.


Can json.dumps create multiple lines

2016-12-01 Thread Cecil Westerhof
I started to use json.dumps to put things in a SQLite database. But I
think it would be handy if it were easy to change the values
manually.

When I have a value dummy which contains:
['An array', 'with several strings', 'as a demo']
Then json.dumps(dummy) would generate:
'["An array", "with several strings", "as a demo"]'
I would prefer it to generate:
'[
 "An array",
 "with several strings",
 "as a demo"
 ]'

Is this possible, or do I have to code this myself?

-- 
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof


Re: Can json.dumps create multiple lines

2016-12-01 Thread Zachary Ware
On Thu, Dec 1, 2016 at 10:30 AM, Cecil Westerhof  wrote:
> I would prefer when it would generate:
> '[
>  "An array",
>  "with several strings",
>  "as a demo"
>  ]'
>
> Is this possible, or do I have to code this myself?

https://docs.python.org/3/library/json.html?highlight=indent#json.dump

Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 26 2016, 10:47:25)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import json
>>> json.dumps(["An array", "with several strings", "as a demo"])
'["An array", "with several strings", "as a demo"]'
>>> print(_)
["An array", "with several strings", "as a demo"]
>>> json.dumps(["An array", "with several strings", "as a demo"], indent=0)
'[\n"An array",\n"with several strings",\n"as a demo"\n]'
>>> print(_)
[
"An array",
"with several strings",
"as a demo"
]

I've also seen something about JSON support in SQLite, you may want to
look into that.

-- 
Zach


Re: Can json.dumps create multiple lines

2016-12-01 Thread John Gordon
In <[email protected]> Cecil Westerhof  writes:

> I started to use json.dumps to put things in a SQLite database. But I
> think it would be handy when it would be easy to change the values
> manually.

> When I have a value dummy which contains:
> ['An array', 'with several strings', 'as a demo']
> Then json.dumps(dummy) would generate:
> '["An array", "with several strings", "as a demo"]'
> I would prefer when it would generate:
> '[
>  "An array",
>  "with several strings",
>  "as a demo"
>  ]'

json.dumps() has an 'indent' keyword argument, but I believe it only
enables indenting of each whole element, not individual members of a list.

Perhaps something in the pprint module?

-- 
John Gordon   A is for Amy, who fell down the stairs
[email protected]  B is for Basil, assaulted by bears
-- Edward Gorey, "The Gashlycrumb Tinies"



Re: OSError: [Errno 12] Cannot allocate memory

2016-12-01 Thread duncan smith
On 01/12/16 01:12, Chris Kaynor wrote:
> On Wed, Nov 30, 2016 at 4:54 PM, duncan smith  wrote:
>>
>> Thanks. So something like the following might do the job?
>>
>> def _execute(command):
>> p = subprocess.Popen(command, shell=False,
>>  stdout=subprocess.PIPE,
>>  stderr=subprocess.STDOUT,
>>  close_fds=True)
>> out_data, err_data = p.communicate()
>> if err_data:
>> print err_data
> 
> I did not notice it when I sent my first e-mail (but noted it in my
> second one) that the docstring in to_image is presuming that
> shell=True. That said, as it seems everybody is at a loss to explain
> your issue, perhaps there is some oddity, and if everything appears to
> work with shell=False, it may be worth changing to see if it does fix
> the problem. With other information since provided, it is unlikely,
> however.
> 
> Not specifying the stdin may help, however it will only reduce the
> file handle count by 1 per call (from 2), so there is probably a root
> problem that it will not help.
> 
> I would expect the communicate change to fix the problem, except for
> your follow-up indicating that you had tried that before without
> success.
> 
> Removing the manual stdout.read may fix it, if the problem is due to
> hanging processes, but again, your follow-up indicates thats not the
> problem - you should have zombie processes if that were the case.
> 
> A few new questions that you have not answered (nor have they been
> asked in this thread): How much memory does your system have? Are you
> running a 32-bit or 64-bit Python? Is your Python process being run
> with any additional limitations via system commands (I don't know the
> command, but I know it exists; similarly, if launched from a third
> app, it could be placing limits)?
> 
> Chris
> 

8 Gig, 64 bit, no additional limitations (other than any that might be
imposed by IDLE). In this case the simulation does consume *a lot* of
memory, but that hasn't been the case when I've hit this in the past. I
suppose that could be the issue here. I'm currently seeing if I can
reproduce the problem after adding the p.communicate(), but it seems to
be using more memory than ever (dog slow and up to 5 Gig of swap). In
the meantime I'm going to try to refactor to reduce memory requirements
- and 32 Gig of DDR3 has been ordered. I'll also dig out some code that
generated the same problem before to see if I can reproduce it. Cheers.

Duncan


Re: Asyncio -- delayed calculation

2016-12-01 Thread Ian Kelly
On Thu, Dec 1, 2016 at 12:53 AM, Christian Gollwitzer  wrote:
> well that works - but I think it it is possible to explain it, without
> actually understanding what it does behind the scences:
>
> x = foo()
> # schedule foo for execution, i.e. put it on a TODO list

This implies that if you never await foo it will still get done at
some point (e.g. when you await something else), which for coroutines
would be incorrect unless you call ensure_future() on it.

Come to think of it, it would probably not be a bad style rule to
consider that when you call something that returns an awaitable, you
should always either await or ensure_future it (or something else that
depends on it). Barring the unusual case where you want to create an
awaitable but *not* immediately schedule it.
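A minimal sketch of that distinction (using the modern asyncio.run entry point; the names are illustrative):

```python
import asyncio

async def foo():
    await asyncio.sleep(0)
    return 42

async def main():
    x = foo()                        # a coroutine object; nothing runs yet
    task = asyncio.ensure_future(x)  # now it is scheduled on the event loop
    return await task                # explicitly wait for the result

result = asyncio.run(main())
print(result)
```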


Re: Can json.dumps create multiple lines

2016-12-01 Thread Tim Chase
On 2016-12-01 17:30, Cecil Westerhof wrote:
> When I have a value dummy which contains:
> ['An array', 'with several strings', 'as a demo']
> Then json.dumps(dummy) would generate:
> '["An array", "with several strings", "as a demo"]'
> I would prefer when it would generate:
> '[
>  "An array",
>  "with several strings",
>  "as a demo"
>  ]'
> 
> Is this possible, or do I have to code this myself?

print(json.dumps(['An array', 'with several strings', 'as a demo'],
                 indent=0))

for the basics of what you ask, though you can change indent= to
indent the contents for readability.

-tkc





Error In querying Genderize.io. Can someone please help

2016-12-01 Thread handar94
import requests
import json
names={'katty','Shean','Rajat'};
for name in names:
request_string="http://api.genderize.io/?"+name
r=requests.get(request_string)
result=json.loads(r.content)


Error---
Traceback (most recent call last):
  File "C:/Users/user/PycharmProjects/untitled7/Mis1.py", line 7, in <module>
result=json.loads(r.content)
  File "C:\Users\user\Anaconda2\lib\json\__init__.py", line 339, in loads
return _default_decoder.decode(s)
  File "C:\Users\user\Anaconda2\lib\json\decoder.py", line 364, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\user\Anaconda2\lib\json\decoder.py", line 382, in raw_decode
raise ValueError("No JSON object could be decoded")
ValueError: No JSON object could be decoded

Can someone please help.


Re: Can json.dumps create multiple lines

2016-12-01 Thread Cecil Westerhof
On Thursday  1 Dec 2016 17:55 CET, Zachary Ware wrote:

> On Thu, Dec 1, 2016 at 10:30 AM, Cecil Westerhof  wrote:
>> I would prefer when it would generate:
>> '[
>> "An array",
>> "with several strings",
>> "as a demo"
>> ]'
>>
>> Is this possible, or do I have to code this myself?
>
> https://docs.python.org/3/library/json.html?highlight=indent#json.dump
>
> Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 26 2016, 10:47:25) [GCC 4.2.1
> (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright",
> "credits" or "license" for more information.
> >>> import json
> >>> json.dumps(["An array", "with several strings", "as a demo"])
> '["An array", "with several strings", "as a demo"]'
> >>> print(_)
> ["An array", "with several strings", "as a demo"]
> >>> json.dumps(["An array", "with several strings", "as a demo"], indent=0)
> '[\n"An array",\n"with several strings",\n"as a demo"\n]'
> >>> print(_)
> [
> "An array",
> "with several strings",
> "as a demo"
> ]

Works like a charm. Strings can also contain newlines; there I do not
want an extra line break, and that too works like a charm.

I used:
cursor.execute('INSERT INTO test (json) VALUES (?)',
               [json.dumps(['An array',
                            'with several strings',
                            'as a demo',
                            'and\none\nwith\na\nnewlines'],
                           indent=0)])

and that gave exactly what I wanted.

Now I need to convert the database. But that should not be a big
problem.


> I've also seen something about JSON support in SQLite, you may want
> to look into that.

I will do that, but later. I have what I need.

-- 
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof


Re: Can json.dumps create multiple lines

2016-12-01 Thread Cecil Westerhof
On Thursday  1 Dec 2016 22:52 CET, Cecil Westerhof wrote:

> Now I need to convert the database. But that should not be a big
> problem.

I did the conversion with:
cursor.execute('SELECT tipID FROM tips')
ids = cursor.fetchall()
for id in ids:
    id = id[0]
    cursor.execute('SELECT tip from tips WHERE tipID = ?', [id])
    old_value = cursor.fetchone()[0]
    new_value = json.dumps(json.loads(old_value), indent=0)
    cursor.execute('UPDATE tips SET tip = ? WHERE tipID = ?', [new_value, id])

-- 
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof


Re: Error In querying Genderize.io. Can someone please help

2016-12-01 Thread John Gordon
In  [email protected] 
writes:

> import requests
> import json
> names={'katty','Shean','Rajat'};
> for name in names:
> request_string="http://api.genderize.io/?"+name
> r=requests.get(request_string)
> result=json.loads(r.content)

You're using http: instead of https:, and you're using ?katty instead
of ?name=katty, and therefore the host does not recognize your request
as an API call and redirects you to the normal webpage.
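The fix, sketched with only the stdlib to show the URL difference (the endpoint and parameter name come from the reply above):

```python
from urllib.parse import urlencode

name = 'katty'
bad = 'http://api.genderize.io/?' + name             # ?katty -- no parameter name
good = 'https://api.genderize.io/?' + urlencode({'name': name})
print(good)
```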

-- 
John Gordon   A is for Amy, who fell down the stairs
[email protected]  B is for Basil, assaulted by bears
-- Edward Gorey, "The Gashlycrumb Tinies"



Re: Can json.dumps create multiple lines

2016-12-01 Thread Peter Otten
Cecil Westerhof wrote:

> On Thursday  1 Dec 2016 22:52 CET, Cecil Westerhof wrote:
> 
>> Now I need to convert the database. But that should not be a big
>> problem.
> 
> I did the conversion with:
> cursor.execute('SELECT tipID FROM tips')
> ids = cursor.fetchall()
> for id in ids:
> id = id[0]
> cursor.execute('SELECT tip from tips WHERE tipID = ?', [id])
> old_value = cursor.fetchone()[0]
> new_value = json.dumps(json.loads(old_value), indent = 0)
> cursor.execute('UPDATE tips SET tip = ? WHERE tipID = ?',
> [new_value, id])

The sqlite3 module lets you define custom functions written in Python:

db = sqlite3.connect(...)
cs = db.cursor()
 
def convert(s):
    return json.dumps(
        json.loads(s),
        indent=0
    )

db.create_function("convert", 1, convert)
cs.execute("update tips set tip = convert(tip)")




Re: Can json.dumps create multiple lines

2016-12-01 Thread Cecil Westerhof
On Thursday  1 Dec 2016 23:58 CET, Peter Otten wrote:

> Cecil Westerhof wrote:
>
>> On Thursday  1 Dec 2016 22:52 CET, Cecil Westerhof wrote:
>>
>>> Now I need to convert the database. But that should not be a big
>>> problem.
>>
>> I did the conversion with:
>> cursor.execute('SELECT tipID FROM tips')
>> ids = cursor.fetchall()
>> for id in ids:
>> id = id[0]
>> cursor.execute('SELECT tip from tips WHERE tipID = ?', [id])
>> old_value = cursor.fetchone()[0]
>> new_value = json.dumps(json.loads(old_value), indent = 0)
>> cursor.execute('UPDATE tips SET tip = ? WHERE tipID = ?',
>> [new_value, id])
>
> The sqlite3 module lets you define custom functions written in
> Python:
>
> db = sqlite3.connect(...)
> cs = db.cursor()
>
> def convert(s):
>     return json.dumps(
>         json.loads(s),
>         indent=0
>     )
>
> db.create_function("convert", 1, convert)
> cs.execute("update tips set tip = convert(tip)")

That is a lot better than what I did. Thank you.

-- 
Cecil Westerhof
Senior Software Engineer
LinkedIn: http://www.linkedin.com/in/cecilwesterhof


Re: correct way to catch exception with Python 'with' statement

2016-12-01 Thread Ned Batchelder
On Thursday, December 1, 2016 at 2:31:11 PM UTC-5, DFS wrote:
> After a simple test below, I submit that the above scenario would never 
> occur.  Ever.  The time gap between checking for the file's existence 
> and then trying to open it is far too short for another process to sneak 
> in and delete the file.

It doesn't matter how quickly the first operation is (usually) followed
by the second.  Your process could be swapped out between the two
operations. On a heavily loaded machine, there could be a very long
time between them, even though on an average machine they execute very
quickly.

For most programs, yes, it probably will never be a problem to check
for existence, and then assume that the file still exists.  But put that
code on a server, and run it a couple of million times, with dozens of
other processes also manipulating files, and you will see failures.

How to best deal with this situation depends on what might happen to the
file, and how you can best coordinate with those other programs. Locks
only help if all the interfering programs also use those same locks. A
popular strategy is to simply use the file, and deal with the error that
happens if the file doesn't exist, though that might not make sense
depending on the logic of the program.  You might have to check that the
file exists, and then also deal with the (slim) possibility that it then
doesn't exist.
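A common shape for that last strategy, as a sketch:

```python
def read_if_exists(path):
    # EAFP: try the operation and handle failure, instead of checking
    # os.path.isfile() first -- the check-then-open gap can race with
    # another process deleting the file.
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None  # the file vanished, or never existed
```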

--Ned.


Re: compile error when using override

2016-12-01 Thread Ho Yeung Lee
from __future__ import division 
import ast 
from sympy import * 
x, y, z, t = symbols('x y z t') 
k, m, n = symbols('k m n', integer=True) 
f, g, h = symbols('f g h', cls=Function) 
import inspect 
def op2(a,b):
    return a*b+a

class AA(object):
    @staticmethod
    def __additionFunction__(a1, a2):
        return a1*a2 #Put what you want instead of this
    def __multiplyFunction__(a1, a2):
        return a1*a2+a1 #Put what you want instead of this
    def __divideFunction__(a1, a2):
        return a1*a1*a2 #Put what you want instead of this
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        return self.value*other.value
    def __mul__(self, other):
        return self.value*other.value + other.value
    def __div__(self, other):
        return self.value*other.value*other.value

solve([AA(x)*AA(y) + AA(-1), AA(x) + AA(-2)], x, y)

>>> class AA(object):
... @staticmethod
... def __additionFunction__(a1, a2):
... return a1*a2 #Put what you want instead of this
... def __multiplyFunction__(a1, a2):
... return a1*a2+a1 #Put what you want instead of this
... def __divideFunction__(a1, a2):
... return a1*a1*a2 #Put what you want instead of this
... def __init__(self, value):
... self.value = value
... def __add__(self, other):
... return self.value*other.value
... def __mul__(self, other):
... return self.value*other.value + other.value
... def __div__(self, other):
... return self.value*other.value*other.value
...
>>> solve([AA(x)*AA(y) + AA(-1), AA(x) + AA(-2)], x, y)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'Add' and 'AA'


On Thursday, December 1, 2016 at 7:19:58 PM UTC+8, Steve D'Aprano wrote:
> On Thu, 1 Dec 2016 05:26 pm, Ho Yeung Lee wrote:
> 
> > import ast
> > from __future__ import division
> 
> That's not actually your code. That will be a SyntaxError.
> 
> Except in the interactive interpreter, "__future__" imports must be the very
> first line of code.
> 
> 
> > class A:
> >     @staticmethod
> >     def __additionFunction__(a1, a2):
> >         return a1*a2 #Put what you want instead of this
> 
> That cannot work in Python 2, because you are using a "classic"
> or "old-style" class. For staticmethod to work correctly, you need to
> inherit from object:
> 
> class A(object):
> ...
> 
> 
> Also, do not use double-underscore names for your own functions or methods.
> __NAME__ (two leading and two trailing underscores) are reserved for
> Python's internal use. You should not invent your own.
> 
> Why do you need this "additionFunction" method for? Why not put this in the
> __add__ method?
> 
> >   def __add__(self, other):
> >       return self.__class__.__additionFunction__(self.value, other.value)
> >   def __mul__(self, other):
> >       return self.__class__.__multiplyFunction__(self.value, other.value)
> 
> They should be:
> 
> def __add__(self, other):
> return self.additionFunction(self.value, other.value)
> 
> def __mul__(self, other):
> return self.multiplyFunction(self.value, other.value)
> 
> Or better:
> 
> def __add__(self, other):
> return self.value + other.value
> 
> def __mul__(self, other):
> return self.value * other.value
> 
> 
> 
> -- 
> Steve
> “Cheer up,” they said, “things could be worse.” So I cheered up, and sure
> enough, things got worse.



Re: Request Help With Byte/String Problem

2016-12-01 Thread Wildman via Python-list
On Wed, 30 Nov 2016 07:54:45 -0500, Dennis Lee Bieber wrote:

> On Tue, 29 Nov 2016 22:01:51 -0600, Wildman via Python-list
>  declaimed the following:
> 
>>I really appreciate your reply.  Your suggestion fixed that
>>problem, however, a new error appeared.  I am doing some
>>research to try to figure it out but no luck so far.
>>
>>Traceback (most recent call last):
>>  File "./ifaces.py", line 33, in 
>>ifs = all_interfaces()
>>  File "./ifaces.py", line 21, in all_interfaces
>>name = namestr[i:i+16].split('\0', 1)[0]
>>TypeError: Type str doesn't support the buffer API
> 
>   The odds are good that this is the same class of problem -- you are
> providing a Unicode string to a procedure that wants a byte-string (or vice
> versa)
> 
> https://docs.python.org/3/library/array.html?highlight=tostring#array.array.tostring

That helped.  Thanks.

-- 
 GNU/Linux user #557453
The cow died so I don't need your bull!


Re: Request Help With Byte/String Problem

2016-12-01 Thread Wildman via Python-list
On Wed, 30 Nov 2016 14:39:02 +0200, Anssi Saari wrote:

> There'll be a couple more issues with the printing but they should be
> easy enough.

I finally figured it out, I think.  I'm not sure if my changes are
what you had in mind but it is working.  Below is the updated code.
Thank you for not giving me the answer.  It was a good learning
experience for me and that was my purpose in the first place.

def format_ip(addr):
    # replace ord() with int()
    return str(int(addr[0])) + '.' + \
           str(int(addr[1])) + '.' + \
           str(int(addr[2])) + '.' + \
           str(int(addr[3]))

ifs = all_interfaces()
for i in ifs:  # added decode("utf-8")
    print("%12s   %s" % (i[0].decode("utf-8"), format_ip(i[1])))
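Since indexing a bytes object in Python 3 already yields small integers, an equivalent and more compact spelling is possible (editor's sketch; addr is assumed to be a bytes-like value of at least four octets, as in the thread):

```python
def format_ip(addr):
    # join the first four octets with dots; bytes indexing gives ints
    return '.'.join(str(octet) for octet in addr[:4])

print(format_ip(b'\xc0\xa8\x01\x02'))  # 192.168.1.2
```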

Thanks again!

-- 
 GNU/Linux user #557453
May the Source be with you.


Re: correct way to catch exception with Python 'with' statement

2016-12-01 Thread Ned Batchelder
On Thursday, December 1, 2016 at 7:26:18 PM UTC-5, DFS wrote:
> On 12/01/2016 06:48 PM, Ned Batchelder wrote:
> > On Thursday, December 1, 2016 at 2:31:11 PM UTC-5, DFS wrote:
> >> After a simple test below, I submit that the above scenario would never
> >> occur.  Ever.  The time gap between checking for the file's existence
> >> and then trying to open it is far too short for another process to sneak
> >> in and delete the file.
> >
> > It doesn't matter how quickly the first operation is (usually) followed
> > by the second.  Your process could be swapped out between the two
> > operations. On a heavily loaded machine, there could be a very long
> > time between them
> 
> 
> How is it possible that the 'if' portion runs, then 44/100,000ths of a 
> second later my process yields to another process which deletes the 
> file, then my process continues.

A modern computer is running dozens or hundreds (or thousands!) of
processes "all at once". How they are actually interleaved on the
small number of actual processors is completely unpredictable. There
can be an arbitrary amount of time passing between any two processor
instructions.

I'm assuming you've measured this program on your own computer, which
was relatively idle at the moment.  This is hardly a good stress test
of how the program might execute under more burdened conditions.

> 
> Is that governed by the dreaded GIL?
> 
> "The mechanism used by the CPython interpreter to assure that only one 
> thread executes Python bytecode at a time."
> 
> But I see you posted a stack-overflow answer:
> 
> "In the case of CPython's GIL, the granularity is a bytecode 
> instruction, so execution can switch between threads at any bytecode."
> 
> Does that mean "chars=f.read().lower()" could get interrupted between 
> the read() and the lower()?

Yes.  But even more importantly, the Python interpreter is itself a
C program, and it can be interrupted between any two instructions, and
another program on the computer could run instead.  That other program
can fiddle with files on the disk.

> 
> I read something interesting last night:
> https://www.jeffknupp.com/blog/2012/03/31/pythons-hardest-problem/
> 
> "In the new GIL, a hard timeout is used to instruct the current thread 
> to give up the lock. When a second thread requests the lock, the thread 
> currently holding it is compelled to release it after 5ms (that is, it 
> checks if it needs to release it every 5ms)."
> 
> With a 5ms window, it seems the following code would always protect the 
> file from being deleted between lines 4 and 5.
> 
> 
> 1 import os,threading
> 2 f_lock=threading.Lock()
> 3 with f_lock:
> 4   if os.path.isfile(filename):
> 5 with open(filename,'w') as f:
> 6   process(f)
> 
> 

You seem to be assuming that the program that might delete the file
is the same program trying to read the file.  I'm not assuming that.
My Python program might be trying to read the file at the same time
that a cron job is running a shell script that is trying to delete
the file.

> Also, this is just theoretical (I hope).  It would be terrible system 
> design if all those dozens of processes were reading and writing and 
> deleting the same file.

If you can design your system so that you know for sure no one else
is interested in fiddling with your file, then you have an easier
problem.  So far, that has not been shown to be the case. I'm
talking more generally about a program that can't assume those
constraints.
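For the cross-process case, an advisory file lock is one option. A sketch, assuming POSIX and that every cooperating program takes the same lock:

```python
import fcntl

def read_with_flock(path):
    # Advisory, cross-process lock (POSIX only). Unlike the
    # threading.Lock in the quoted snippet, fcntl.flock can exclude
    # *other processes* -- but only processes that also cooperate by
    # taking the same lock on the same file.
    with open(path, 'a+') as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        try:
            f.seek(0)
            return f.read()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)
```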

--Ned.


How to properly retrieve data using requests + bs4 from multiple pages in a site?

2016-12-01 Thread Juan C.
I'm a student and my university uses Moodle as their learning management
system (LMS). They don't have Moodle Web Services enabled and won't be
enabling it anytime soon, at least for students. The university programs
have the following structure, for example:

1. Bachelor's Degree in Computer Science (duration: 8 semesters)

1.1. Unit 01: Mathematics Fundamental (duration: 1 semester)
1.1.1. Algebra I (first 3 months)
1.1.2. Algebra II (first 3 months)
1.1.3. Calculus I (last 3 months)
1.1.4. Calculus II (last 3 months)
1.1.5. Unit Project (throughout the semester)

1.2. Unit 02: Programming (duration: 1 semester)
1.2.1. Programming Logic (first 3 months)
1.2.2. Data Modelling with UML (first 3 months)
1.2.3. Python I (last 3 months)
1.2.4. Python II (last 3 months)
1.2.5. Unit Project (throughout the semester)

Each course/project has a bunch of assignments plus one final assignment.
This goes on for a total of 8 (eight) units, which make up the 4-year
program. I'm building my own client-side Moodle API to be consumed by my
scripts. Currently I'm using 'requests' + 'bs4' to do the job. My code:

package moodle/

user.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from .program import Program
import requests


class User:
    _AUTH_URL = 'http://lms.university.edu/moodle/login/index.php'

    def __init__(self, username, password, program_id):
        self.username = username
        self.password = password
        session = requests.session()
        session.post(self._AUTH_URL,
                     {"username": username, "password": password})
        self.program = Program(program_id=program_id, session=session)

    def __str__(self):
        return self.username + ':' + self.password

    def __repr__(self):
        return '<User %r>' % self.username

    def __eq__(self, other):
        if isinstance(other, type(self)):
            return self.username == other.username
        else:
            return False

==

program.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from .unit import Unit
from bs4 import BeautifulSoup


class Program:
    _PATH = 'http://lms.university.edu/moodle/course/index.php?categoryid='

    def __init__(self, program_id, session):
        response = session.get(self._PATH + str(program_id))
        soup = BeautifulSoup(response.text, 'html.parser')

        self.name = soup.find('ul', class_='breadcrumb') \
            .find_all('li')[-2].text.replace('/', '').strip()
        self.id = program_id
        self.units = [Unit(int(item['data-categoryid']), session)
                      for item in soup.find_all('div', {'class': 'category'})]

    def __str__(self):
        return self.name

    def __repr__(self):
        return '<Program %r (id=%s)>' % (self.name, self.id)

    def __eq__(self, other):
        if isinstance(other, type(self)):
            return self.id == other.id
        else:
            return False

==

unit.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from .course import Course
from bs4 import BeautifulSoup


class Unit:
    _PATH = 'http://lms.university.edu/moodle/course/index.php?categoryid='

    def __init__(self, unit_id, session):
        response = session.get(self._PATH + str(unit_id))
        soup = BeautifulSoup(response.text, 'html.parser')

        self.name = soup.find('ul', class_='breadcrumb') \
            .find_all('li')[-1].text.replace('/', '').strip()
        self.id = unit_id
        self.courses = [Course(int(item['data-courseid']), session)
                        for item in soup.find_all('div', {'class': 'coursebox'})]

    def __str__(self):
        return self.name

    def __repr__(self):
        return '<Unit %r (id=%s)>' % (self.name, self.id)

    def __eq__(self, other):
        if isinstance(other, type(self)):
            return self.id == other.id
        else:
            return False

==

course.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-


from .assignment import Assignment
import re
from bs4 import BeautifulSoup


class Course:
    _PATH = 'http://lms.university.edu/moodle/course/view.php?id='

    def __init__(self, course_id, session):
        response = session.get(self._PATH + str(course_id))
        soup = BeautifulSoup(response.text, 'html.parser')

        self.name = soup.find('h1').text
        self.id = course_id
        self.assignments = [
            Assignment(int(item['href'].split('id=')[-1]), session)
            for item in soup.find_all('a', href=re.compile(
                r'http://lms\.university\.edu/moodle/mod/assign/view.php\?id=.*'))]

    def __str__(self):
        return self.name

    def __repr__(self):
        return '<Course %r (id=%s)>' % (self.name, self.id)

    def __eq__(self, other):
        if isinstance(other, type(self)):
            return self.id == other.id
        else:
            return False

==

assignment.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-

from bs4 import BeautifulSoup


class Assignment:
    _PATH = 'http://lms.university.edu/moodle/mod/assign/view.php?id='

    def __init__(self, assignment_id, session):
        response = session.get(self._PATH + str(assignment_id))
        soup = BeautifulSoup(response.text, 'html.parser')

        self.name = soup.find('h2').text
        self.id = assignment_id
        self.sent = soup.find('td', {'
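One structural issue with the classes above, independent of the question that got cut off: every constructor eagerly fetches a page, so building one `User` crawls the entire program tree up front. A hedged sketch of deferring that work with a cached property (`_fetch_units` here is a hypothetical stand-in for the `session.get()` + BeautifulSoup parsing done in the original `__init__`):

```python
class Program:
    def __init__(self, program_id, session):
        self.id = program_id
        self._session = session
        self._units = None          # not fetched yet

    @property
    def units(self):
        # Fetch the unit list only on first access, then cache it.
        if self._units is None:
            self._units = self._fetch_units()
        return self._units

    def _fetch_units(self):
        # Hypothetical stand-in for the HTTP request and parsing.
        return []
```

With this shape, `Program(pid, session)` is cheap, and the dozens of per-unit and per-course requests only happen for the parts of the tree you actually touch.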

Re: correct way to catch exception with Python 'with' statement

2016-12-01 Thread Steve D'Aprano
On Fri, 2 Dec 2016 11:26 am, DFS wrote:

> On 12/01/2016 06:48 PM, Ned Batchelder wrote:
>> On Thursday, December 1, 2016 at 2:31:11 PM UTC-5, DFS wrote:
>>> After a simple test below, I submit that the above scenario would never
>>> occur.  Ever.  The time gap between checking for the file's existence
>>> and then trying to open it is far too short for another process to sneak
>>> in and delete the file.
>>
>> It doesn't matter how quickly the first operation is (usually) followed
>> by the second.  Your process could be swapped out between the two
>> operations. On a heavily loaded machine, there could be a very long
>> time between them
> 
> 
> How is it possible that the 'if' portion runs, then 44/100,000ths of a
> second later my process yields to another process which deletes the
> file, then my process continues.
> 
> Is that governed by the dreaded GIL?

No, that has nothing to do with the GIL. It is because the operating 
system is a preemptive multi-processing operating system. All modern OSes 
are: Linux, OS X, Windows.

Each program that runs, including the OS itself, is one or more processes.
Typically, even on a single-user desktop machine, you will have dozens of
processes running simultaneously.

Every so-many clock ticks, the OS pauses whatever process is running, 
more-or-less interrupting whatever it was doing, passes control on to 
another process, then the next, then the next, and so on. The application 
doesn't have any control over this, it can be paused at any time, 
normally just for a small fraction of a second, but potentially for 
seconds or minutes at a time if the system is heavily loaded.



> "The mechanism used by the CPython interpreter to assure that only one
> thread executes Python bytecode at a time."
> 
> But I see you posted a stack-overflow answer:
> 
> "In the case of CPython's GIL, the granularity is a bytecode
> instruction, so execution can switch between threads at any bytecode."
> 
> Does that mean "chars=f.read().lower()" could get interrupted between
> the read() and the lower()?

Yes, but don't think about Python threads. Think about the OS.

I'm not an expert on the low-level hardware details, so I welcome
correction, but I think that you can probably expect that the OS can
interrupt code execution between any two CPU instructions. Something like
str.lower() is likely to be thousands of CPU instructions, even for a small
string.
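You can count the bytecode operations in the exact line from the question with the stdlib dis module; each boundary between two instructions is a point where CPython alone may switch threads, and the OS can preempt at an even finer grain than this:

```python
import dis

# Compile the one-liner from the question and list its bytecode.
code = compile("chars = f.read().lower()", "<example>", "exec")
instructions = list(dis.get_instructions(code))

# Several instructions: load f, call read(), look up lower, call it,
# store the result -- a switch can occur between any two of them.
print(len(instructions))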


[...]
> With a 5ms window, it seems the following code would always protect the
> file from being deleted between lines 4 and 5.
> 
> 
> 1 import os,threading
> 2 f_lock=threading.Lock()
> 3 with f_lock:
> 4   if os.path.isfile(filename):
> 5 with open(filename,'w') as f:
> 6   process(f)
> 
> 
> 
> 
>> even if on an average machine, they are executed very quickly.

Absolutely not. At least on Linux, locks are advisory, not mandatory. Here
are a pair of scripts that demonstrate that. First, the well-behaved script
that takes out a lock:

# --- locker.py ---
import os, threading, time

filename = 'thefile.txt'
f_lock = threading.Lock()

with f_lock:
print '\ntaking lock'
if os.path.isfile(filename):
print filename, 'exists and is a file'
time.sleep(10)
print 'lock still active'
with open(filename,'w') as f:
print f.read()

# --- end ---


Now, a second script which naively, or maliciously, just deletes the file:

# --- bandit.py ---
import os, time
filename = 'thefile.txt'
time.sleep(1)
print 'deleting file, mwahahahaha!!!'
os.remove(filename)
print 'deleted'

# --- end ---



Now, I run them both simultaneously:

[steve@ando thread-lock]$ touch thefile.txt # ensure file exists
[steve@ando thread-lock]$ (python locker.py &) ; (python bandit.py &)
[steve@ando thread-lock]$ 
taking lock
thefile.txt exists and is a file
deleting file, mwahahahaha!!!
deleted
lock still active
Traceback (most recent call last):
  File "locker.py", line 14, in 
print f.read()
IOError: File not open for reading



This is on Linux. It's possible that Windows behaves differently, and I don't
know how to run a command in the background in command.com or cmd.exe or
whatever you use on Windows.


[...]
> Also, this is just theoretical (I hope).  It would be terrible system
> design if all those dozens of processes were reading and writing and
> deleting the same file.

It is not theoretical. And it's not a terrible system design, in the sense
that the alternatives are *worse*.

* Turn the clock back to the 1970s and 80s with single-processing 
  operating systems? Unacceptable -- even primitive OSes like DOS 
  and Mac System 5 needed to include some basic multiprocessing 
  capability.

- And what are servers supposed to do in this single-process world?

- Enforce mandatory locks? A great way for malware or hostile users
  to perform Denial Of Service attacks.

Even locks being left around accidentally can be a real pain: Windows us

Re: compile error when using override

2016-12-01 Thread Steve D'Aprano
On Fri, 2 Dec 2016 01:35 pm, Ho Yeung Lee wrote:

> from __future__ import division
> import ast
> from sympy import *
> x, y, z, t = symbols('x y z t')
> k, m, n = symbols('k m n', integer=True)
> f, g, h = symbols('f g h', cls=Function)
> import inspect

Neither ast nor inspect is used. Why import them?

The only symbols you are using are x and y.


> def op2(a,b):
> return a*b+a

This doesn't seem to be used. Get rid of it.


> class AA(object):
> @staticmethod
> def __additionFunction__(a1, a2):
> return a1*a2 #Put what you want instead of this
> def __multiplyFunction__(a1, a2):
> return a1*a2+a1 #Put what you want instead of this
> def __divideFunction__(a1, a2):
> return a1*a1*a2 #Put what you want instead of this

None of those methods are used. Get rid of them.

> def __init__(self, value):
> self.value = value
> def __add__(self, other):
> return self.value*other.value

Sorry, you want AA(5) + AA(2) to return 10?

> def __mul__(self, other):
> return self.value*other.value + other.value
> def __div__(self, other):
> return self.value*other.value*other.value
> 
> solve([AA(x)*AA(y) + AA(-1), AA(x) + AA(-2)], x, y)

I don't understand what you are trying to do here. What result are you
expecting?

Maybe you just want this?

from sympy import solve, symbols
x, y = symbols('x y')
print( solve([x*y - 1, x - 2], x, y) )

which prints the result:
[(2, 1/2)]


Perhaps if you explain what you are trying to do, we can help better.

But please, cut down your code to only code that is being used!
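For what it's worth, if the goal was to plug custom "addition" and "multiplication" rules into larger expressions, the dunder methods need to return instances of the class rather than bare numbers, so that results compose. A minimal sketch keeping the original post's rules:

```python
class AA:
    def __init__(self, value):
        self.value = value

    def __add__(self, other):
        # Same rule as the original post (a1*a2), but wrapped in AA
        # so (AA + AA) + AA keeps working.
        return type(self)(self.value * other.value)

    def __mul__(self, other):
        # Original rule: a1*a2 + a2, wrapped the same way.
        return type(self)(self.value * other.value + other.value)
```

Note this still won't feed into sympy's solve(), which needs symbolic expressions, not opaque wrapper objects -- hence the question about intent.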




-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: correct way to catch exception with Python 'with' statement

2016-12-01 Thread Steve D'Aprano
On Fri, 2 Dec 2016 11:26 am, DFS wrote:

>> For most programs, yes, it probably will never be a problem to check
>> for existence, and then assume that the file still exists.  But put that
>> code on a server, and run it a couple of million times, with dozens of
>> other processes also manipulating files, and you will see failures.
> 
> 
> If it's easy for you, can you write some short python code to simulate
> that?

Run these scripts simultaneously inside the same directory, and you will see
a continual stream of error messages:

# -- a.py -- 
filename = 'data'
import os, time

def run():
if os.path.exists(filename):
with open(filename):
pass
else:
print('file is missing!')
# re-create it
with open(filename, 'w'):
pass

while True:
try:
run()
except IOError:
pass
time.sleep(0.05)



# -- b.py --
filename = 'data'
import os, time

while True:
try:
os.remove(filename)
except OSError:
pass
time.sleep(0.05)




The time.sleep() calls are just to slow them down slightly. You can leave
them out if you like.
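A related defensive technique on the writer side (not in the scripts above, just a sketch): write to a temporary file in the same directory and atomically rename it over the target, so readers only ever observe a complete old file or a complete new file. os.replace is atomic on POSIX when source and target are on the same filesystem:

```python
import os
import tempfile

def atomic_write(path, data):
    # Create the temp file next to the target so the rename stays
    # on one filesystem (a requirement for atomicity).
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
        os.replace(tmp, path)   # atomic swap into place
    except BaseException:
        os.unlink(tmp)          # clean up the temp file on failure
        raise
```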




-- 
Steve
“Cheer up,” they said, “things could be worse.” So I cheered up, and sure
enough, things got worse.

-- 
https://mail.python.org/mailman/listinfo/python-list


What do you think: good idea to launch a marketplace on python+django?

2016-12-01 Thread Gus_G
Hello, what do you think about building a marketplace website with Python +
Django? The end result should look and work similarly to these:
https://zoptamo.com/uk/s-abs-c-uk, https://www.ownerdirect.com/ . What are your
opinions on this idea? Maybe there is another, better way to build it?
-- 
https://mail.python.org/mailman/listinfo/python-list