Re: Lua tutorial help for Python programmer?

2016-11-08 Thread Andrea D'Amore
On 7 November 2016 at 20:27, Skip Montanaro  wrote:
> I just got Lua scripting dumped in my lap as a way to do some server
> side scripting in Redis. The very most basic stuff isn't too hard (i =
> 1, a = {"x"=4, ...}, for i = 1,10,2 do ... end), but as soon as I get
> beyond that, I find it difficult to formulate questions which coax
> Google into useful suggestions. Is there an equivalent to the
> python-tutor, python-help, or even this (python-list/comp.lang.python)
> for people to ask Lua questions from the perspective of a Python
> programmer? Maybe an idiom translation table?

There's lua-list; I figure all your questions fit better there than here.
There's also the official wiki on lua-users [1], though I don't know of a
Py-Lua Rosetta Stone.


> 1. print(tbl) where tbl is a Lua table prints something useless like
[…]
> How can I print a table in one go so I see all its keys and values?

Use the pairs() iterator function (check the reference manual for
ipairs() as well):

for key, value in pairs(my_table) do
    print(key, value)
end


> 2. The redis-py package helpfully converts the result of HGETALL to a
> Python dictionary. On the server, The Lua code just sees an
> interleaved list (array?) of the key/value pairs, e.g., "a" "1" "b"
> "2" "c" "hello". I'd dictify that in Python easily enough:
[…]
> Skimming the Lua reference manual, I didn't see anything like dict()
> and zip().

IIRC tables are the only data structure in Lua; I actually liked this
simplification very much.


> I suspect I'm thinking like a Python programmer when I
> shouldn't be. Is there a Lua idiom which tackles this problem in a
> straightforward manner, short of a numeric for loop?


IIRC the standard library is quite compact, so no.
If you want something more straightforward than

dict = {}
for i = 1, #results, 2 do
    dict[results[i]] = results[i+1]
end

You can define your own iterator function and write "for key, value in
…" in the for loop, though I'm not sure it's worth it.

Beware that I'm no Lua expert: I just liked the language and read
about it, but I never actually used it in any project. I suggest checking
the mailing list or the IRC channel.


> As you can see, this is pretty trivial stuff, mostly representing
> things which are just above the level of the simplest tutorial.

Check "Programming in Lua" book, older versions are made available
online by the author.


[1]: http://lua-users.org/wiki/TutorialDirectory

-- 
Andrea
-- 
https://mail.python.org/mailman/listinfo/python-list


ANN: PyDDF Python Sprint 2016

2016-11-08 Thread eGenix Team: M.-A. Lemburg
[This announcement was originally posted in German since it targets a Python
 sprint in Düsseldorf, Germany]


ANNOUNCEMENT

   PyDDF Python Sprint 2016
in Düsseldorf

 Saturday, 19.11.2016, 10:00-18:00
 Sunday, 20.11.2016, 10:00-18:00
trivago GmbH,  Karl-Arnold-Platz 1A,  40474 Düsseldorf


  Python Meeting Düsseldorf
 http://pyddf.de/sprint2016/


INFORMATION

The Python Meeting Düsseldorf (PyDDF) is holding a Python sprint
weekend in November, kindly supported by *trivago GmbH*.

The sprint takes place on the weekend of 19/20 November 2016 at the
trivago office at Karl-Arnold-Platz 1A (not at Bennigsen-Platz 1).
Please check in with the gatekeeper.

Google Maps:
https://www.google.de/maps/dir/51.2452741,6.7711581//@51.2450432,6.7714612,18.17z?hl=de

We have the following topic areas in mind as suggestions:

 * Openpyxl

   Openpyxl is a Python library for reading and writing Excel 2010+
   files.

   Charlie is a co-maintainer of the package.

 * MicroPython on the ESP8266 and the BBC micro:bit

   MicroPython is a Python 3 implementation for microcontrollers. It
   runs, among other boards, on the BBC micro:bit, a single-board
   computer that was handed out to seventh-grade children in Great
   Britain, and on the by now very popular IoT chip ESP8266, which
   supports WLAN.

   During the sprint we want to try to build a mesh network of BBC
   micro:bits, which will then be connected to the WLAN via an ESP8266,
   all with the help of MicroPython.

   No prior knowledge is really needed. We will have at least one
   ESP8266 and three BBC micro:bits available.

Of course, every participant can suggest further topics as well, e.g.

 * Kivy
 * Raspberry Pi
 * FritzConnection
 * OpenCV
 * and more

All further details and the registration form can be found on the sprint page:

http://pyddf.de/sprint2016/

Participants should also sign up on the PyDDF mailing list, since that is
where we coordinate:

https://www.egenix.com/mailman/listinfo/pyddf


ABOUT US

The Python Meeting Düsseldorf (PyDDF) is a regular event in Düsseldorf
aimed at Python enthusiasts from the region:

 * http://pyddf.de/

Our YouTube channel, where we publish the talks after each meeting,
provides a good overview of the talks:

 * http://www.youtube.com/pyddf/

The meeting is organised by eGenix.com GmbH, Langenfeld, in cooperation
with Clark Consulting & Research, Düsseldorf:

 * http://www.egenix.com/
 * http://www.clark-consulting.eu/

Kind regards,
-- 
Marc-Andre Lemburg
eGenix.com

Professional Python Services directly from the Experts (#1, Nov 08 2016)
>>> Python Projects, Coaching and Consulting ...  http://www.egenix.com/
>>> Python Database Interfaces ...   http://products.egenix.com/
>>> Plone/Zope Database Interfaces ...   http://zope.egenix.com/


::: We implement business ideas - efficiently in both time and costs :::

   eGenix.com Software, Skills and Services GmbH  Pastor-Loeh-Str.48
D-40764 Langenfeld, Germany. CEO Dipl.-Math. Marc-Andre Lemburg
   Registered at Amtsgericht Duesseldorf: HRB 46611
   http://www.egenix.com/company/contact/
  http://www.malemburg.com/

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Lua tutorial help for Python programmer?

2016-11-08 Thread Skip Montanaro
On Tue, Nov 8, 2016 at 2:12 AM, Andrea D'Amore  wrote:
> There's lua-list, I figure all your questions fit better there than here.
> There's the official wiki on lua-users [1], I don't know about a
> Py-Lua Rosetta Stone.

Thanks. I asked here specifically because I was interested in Lua from
a Python perspective. I know that I see people bring incorrect
(suboptimal?) idioms from other languages when starting with Python. I
was hoping that some Python people who have a foot in the Lua world
could help me avoid that mistake.

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: constructor classmethods

2016-11-08 Thread teppo . pera
> How is having 15 arguments in a .create() method better than having 15 
> arguments in __init__() ?
> So, if you use the create() method, and it sets up internal data structures, 
> how do you test them?  In other words, if create() makes that queue then how 
> do you test with a half-empty queue?
> Not all design patterns make sense in every language.
 
It seems there is still some lack of clarity about the whole proposal, so I'll
combine your questions into an example. But first, a little more background.

Generally, with testing, it would be optimal to test outputs of the system for 
given inputs without caring how things are implemented. That way, any changes 
in implementation won't affect test results. A very trivial example would be
something like this:

def do_something_important(input1, input2, input3):
    return ...  # something done with input1, input2, input3

The implementation of do_something_important can be a one-liner, or it can
involve multiple classes; the result of the function is what matters. That's
also the basic strategy (one of many) I try to follow when testing the code
I write.

Now, testing a class that uses a queue (as an example) should follow the same
pattern, so a simple example of how to test it looks like this (assuming
everyone knows how to use the mock library):

# This is just an example testing with DI and mocks.

import unittest
from unittest import mock


class Example:
    def __init__(self, queue):
        self._queue = queue

    def can_add(self):
        return not self._queue.full()


class TestExample(unittest.TestCase):
    def setUp(self):
        self.queue = mock.MagicMock()

    def test_it_should_be_possible_to_know_when_there_is_still_room_for_items(self):
        self.queue.full.return_value = False
        example = Example(self.queue)
        self.assertTrue(example.can_add())

    def test_it_should_be_possible_to_know_when_no_more_items_can_be_added(self):
        self.queue.full.return_value = True
        example = Example(self.queue)
        self.assertFalse(example.can_add())

In the above example, Example doesn't really care what class the injected
object is; only the full method needs to be implemented. The injected class
can be Queue, VeryFastQueue or LazyQueue, as long as it implements the method
"full" (duck typing!). The tests take advantage of that, and changing the
implementation won't break them (they don't care how Example stores the
queue). Also, adding more cases is trivial and should make you think about
the actual implementation and what needs to be taken care of. For example,
self.queue.full.return_value = None, or self.queue.full.side_effect =
ValueError(). How should the code react to those?

Then comes the next step, doing the actual DI. One solution is:

class Example:
    def __init__(self, queue=None):
        self._queue = queue or Queue()

A fine approach, but technically __init__ now has two execution branches, and
someone staring blindly at coverage figures might require covering those too.
Then we can use a class method instead.

class Example:
    def __init__(self, queue):
        self._queue = queue

    @classmethod
    def create(cls):
        q = Queue()
        # populate_with_defaults
        # Maybe get something from db too for queue...
        return cls(q)

As said, the create method is for convenience. It can (and should) take the
minimum set of arguments needed from the user to create the object (no need
for 15 even if __init__ would require that many). It creates a fully
functioning Example object with default dependencies. Do notice that the
tests I wrote earlier would still work. create can contain slow-running code
if needed, but it won't slow down testing of the Example class itself.

Finally, if you want to be tricky and write your own decorator for object
construction, Python allows you to do that.

# spec generates an __init__ that populates the instance with the queue
# given as an argument.
@spec('queue')
class Example:
    @classmethod
    def create(cls):
        return cls(Queue())

Example can still be initialized by calling Example(some_dependency), or by
calling Example.create(), which provides the default configuration. Writing
the decorator yourself gives unlimited ways to extend the class, and the
tests written at the beginning of the post would still pass.
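Note that the spec decorator above is hypothetical; nothing in this post
defines it. A minimal sketch of one way it could be written, assuming its
job is exactly what the comment says (generate an __init__ that stores each
named argument as a leading-underscore attribute), might be:

def spec(*names):
    """Class decorator generating an __init__ that stores each named
    positional argument on the instance as a leading-underscore attribute."""
    def decorate(cls):
        def __init__(self, *args):
            if len(args) != len(names):
                raise TypeError("%s() expects %d argument(s), got %d"
                                % (cls.__name__, len(names), len(args)))
            for name, value in zip(names, args):
                setattr(self, "_" + name, value)
        cls.__init__ = __init__
        return cls
    return decorate

With that sketch in place, Example(some_queue) sets self._queue, and
Example.create() keeps working unchanged.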
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: constructor classmethods

2016-11-08 Thread Chris Angelico
On Wed, Nov 9, 2016 at 10:01 AM,   wrote:
> One solution is:
>
> class Example:
>     def __init__(self, queue=None):
>         self._queue = queue or Queue()
>
> Fine approach, but technically __init__ has two execution branches and 
> someone staring blindly coverages might require covering those too. Then we 
> can use class method too.
>
> class Example:
>     def __init__(self, queue):
>         self._queue = queue
>
>     @classmethod
>     def create(cls):
>         q = Queue()
>         # populate_with_defaults
>         # Maybe get something from db too for queue...
>         return cls(q)
>
> As said, create-method is for convenience. it can (and should) contain 
> minimum set of arguments needed from user (no need to be 15 even if __init__ 
> would require it) to create the object.
>

You gain nothing, though. Whether your code paths are in create() or
in __init__, you still have them. You can make __init__ take no
mandatory arguments (other than self) and then it's still just as easy
to use. Tell me, without looking it up: How many arguments does the
built-in open() function take? But you don't have to worry about them,
most of the time.
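For instance, a minimal sketch of that style (my own illustration; the
timeout and logger parameters are made up to show several defaults at once,
they are not from the earlier posts):

import logging
from queue import Queue


class Example:
    def __init__(self, queue=None, timeout=10.0, logger=None):
        # Every dependency is optional with a sensible default, so the plain
        # constructor stays easy to call, while tests can still inject fakes.
        self._queue = Queue() if queue is None else queue
        self._timeout = timeout
        self._logger = logging.getLogger(__name__) if logger is None else logger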

Python has idioms available that C++ simply can't use, so what's right
for C++ might well not be right for Python, simply because there's
something better.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: constructor classmethods

2016-11-08 Thread Steve D'Aprano
On Wed, 9 Nov 2016 10:01 am, [email protected] wrote:

> Generally, with testing, it would be optimal to test outputs of the system
> for given inputs without caring how things are implemented.

I disagree with that statement.

You are talking about "black-box testing" -- the test code should treat the
code being tested as a completely opaque, black box where the inner
workings are invisible. Tests are only written against the interface.

The alternative is "white-box testing", where the test code knows exactly
what the implementation is, and can test against the implementation, not
just the interface.

White-box testing is more costly, because any change in implementation will
cause a lot of code churn in the tests. Change the implementation, and
tests will disappear. Change it back, and the tests need to be reverted.

But that churn only applies to one person: the maintainer of the test. It
doesn't affect users of the code. It is certainly a cost, but it is
narrowly focused on the maintainer, not the users.

White-box testing gives superior test coverage, because the test author is
aware of the implementation. Let's suppose that I was writing some
black-box tests for Python's list.sort() method. Knowing nothing of the
implementation, I might think that there's only a handful of cases I need
to care about:

- an empty list []
- a single item list [1]
- an already sorted list [1, 2, 3, 4, 5]
- a list sorted in reverse order [5, 4, 3, 2, 1]
- a few examples of unsorted lists, e.g. [3, 5, 2, 4, 1]

And we're done! Black-box testing makes tests easy.

But in fact, that test suite is completely insufficient. The implementation
of list.sort() uses two different sort algorithms: insertion sort for short
lists, and Timsort for long lists. My black-box test suite utterly fails to
test Timsort.

To be sufficient, I *must* test both insertion sort and Timsort, and I can
only guarantee to do that by testing against the implementation, not the
interface.
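As a rough sketch of the difference (the 64-element figure is my
understanding of CPython's minimum run size, not something stated above),
the white-box suite adds at least one case large enough to reach the merge
phase:

import collections
import random
import unittest


class TestListSort(unittest.TestCase):
    def test_short_lists(self):
        # The black-box cases listed above; all small enough that, as far
        # as I know, CPython sorts them with binary insertion sort alone.
        cases = [[], [1], [1, 2, 3, 4, 5], [5, 4, 3, 2, 1], [3, 5, 2, 4, 1]]
        for case in cases:
            with self.subTest(case=case):
                result = list(case)
                result.sort()
                self.assertTrue(all(a <= b for a, b in zip(result, result[1:])))
                self.assertEqual(collections.Counter(result),
                                 collections.Counter(case))

    def test_long_list_reaches_the_merge_phase(self):
        # White-box addition: well over 64 elements, so Timsort's
        # run-merging machinery is actually exercised.
        data = [random.randrange(10**6) for _ in range(5000)]
        result = list(data)
        result.sort()
        self.assertTrue(all(a <= b for a, b in zip(result, result[1:])))
        self.assertEqual(collections.Counter(result),
                         collections.Counter(data))


if __name__ == "__main__":
    unittest.main()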

Black-box testing is better than nothing, but white-box testing is much more
effective.


> That way, any 
> changes in implementation won't affect test results. Very trivial example
> would be something like this:
> 
> def do_something_important(input1, input2, input3)
>     return  # something done with input1, input2, input3
> 
> Implementation of do_something_important can be one liner, or it can
> contain multiple classes, yet the result of the function is what matters.

Certainly. And if you test do_something_important against EVERY possible
combination of inputs, then you don't need to care about the
implementation, since you've tested every single possible case.

But how often do you do that?


[...]
> Then comes the next step, doing the actual DI. One solution is:
> 
> class Example:
>     def __init__(self, queue=None):
>         self._queue = queue or Queue()

That's buggy. If I pass a queue which is falsey, you replace it with your
own default queue instead of the one I gave you. That's wrong.

If I use queue.Queue, that's not a problem, because empty queues are still
truthy. But if I use a different queue implementation, then your code
breaks.

The lesson here is: when you want to test for None, TEST FOR NONE. Don't
use "or" when you mean "if obj is None".


> Fine approach, but technically __init__ has two execution branches and
> someone staring blindly coverages might require covering those too. 

And here we see the advantage of white-box testing. We do need tests for
both cases: to ensure that the given queue is always used, and that a new
Queue is only used when no queue was given at all. A pure blackbox tester
might not imagine the need for these two cases, as he is not thinking about
the implementation.



> Then we can use class method too.
> 
> class Example:
>     def __init__(self, queue):
>         self._queue = queue
>
>     @classmethod
>     def create(cls):
>         q = Queue()
>         # populate_with_defaults
>         # Maybe get something from db too for queue...
>         return cls(q)
> 
> As said, create-method is for convenience. it can (and should) contain
> minimum set of arguments needed from user (no need to be 15 even if
> __init__ would require it) to create the object. 

Why would __init__ require fifteen arguments if the user can pass one
argument and have the other fourteen filled in by default?

The question here is, *why* is the create() method a required part of your
API? There's no advantage to such a change of spelling. The Python style is
to spell instance creation:

instance = MyClass(args)

not 

instance = MyClass.create(args)

Of course you can design your classes using any API you like:

instance = MyClass.Make_Builder().build().create_factory().create(args)

if you insist. But if all create() does is fill in some default values for
you, then it is redundant. The __init__ method can just as easily fill in
the default values. All you are doing is changing the spelling:

MyClass(arg)  # ins

Re: What is currently the recommended way to work with a distutils-based setup.py that requires compilation?

2016-11-08 Thread Tim Johnson
* Ivan Pozdeev via Python-list  [161106 17:28]:
> https://wiki.python.org/moin/WindowsCompilers has now completely replaced
> instructions for `distutils`-based packages (starting with `from
> distutils.core import setup`) with ones for `setuptools`-based ones
> (starting with `from setuptools import setup`).
> 
> However, if I have a `distutils`-based `setup.py`, when I run it,
> `setuptools` is not used - thus the instructions on the page don't work.
> 
> It is possible to run a `distutils`-based script through `setuptools`, as
> `pip` does, but it requires the following code
> (https://github.com/pypa/pip/blob/8.1.2/pip/req/req_install.py#L849 ):
> 
> python -u -c "import setuptools, tokenize;__file__=;
> exec(compile(getattr(tokenize, 'open', open)(__file__).read()
> .replace('\\r\\n', '\\n'), __file__, 'exec'))" 
> 
> They can't possibly expect me to type that on the command line each time,
> now can they?
Ivan, it looks like you aren't getting any answers from seasoned
list gurus to your question.

So, I'm going to take a stab at this, and I hope you are not misled
or misdirected by my comments.

> They can't possibly expect me to type that on the command line each time,

The code that you are quoting above can be placed in a script file
and executed at will. Once you get the syntax correct, you will then
be able to execute that script at any time.
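For example, a minimal sketch of such a wrapper (my own untested adaptation
of the pip snippet Ivan quoted; the file name run_setup.py is just a
placeholder):

# run_setup.py -- run a distutils-based setup.py with setuptools imported
# first, roughly what the quoted pip command does, but as a reusable file.
import sys
import tokenize

import setuptools  # importing setuptools monkey-patches distutils

setup_script = sys.argv[1]                # path to the distutils-based setup.py
sys.argv = [setup_script] + sys.argv[2:]  # remaining args, e.g. build_ext
__file__ = setup_script

with getattr(tokenize, 'open', open)(setup_script) as f:
    source = f.read().replace('\r\n', '\n')

exec(compile(source, setup_script, 'exec'))

Invoked as, say, "python run_setup.py setup.py build_ext", it should behave
like running the setup.py directly, but with the setuptools machinery loaded.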

I don't know what operating system you are using: Linux and Mac work
pretty much the same when it comes to console scripts; Windows takes a
different approach, but not a radically different one.

I hope this helps or puts you on a constructive path.

> I also asked this at http://stackoverflow.com/q/40174932/648265 a couple of
> days ago (to no avail).
> 
> -- 
> 
> Regards,
> Ivan
> 
> -- 
> https://mail.python.org/mailman/listinfo/python-list

-- 
Tim 
http://www.akwebsoft.com, http://www.tj49.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: constructor classmethods

2016-11-08 Thread Marko Rauhamaa
Steve D'Aprano :

> On Wed, 9 Nov 2016 10:01 am, [email protected] wrote:
>> Generally, with testing, it would be optimal to test outputs of the
>> system for given inputs without caring how things are implemented.
>
> I disagree with that statement.

I, OTOH, agree with it.

> But in fact, that test suite is completely insufficient. The
> implementation of list.sort() uses two different sort algorithms:
> insertion sort for short lists, and Timsort for long lists. My
> black-box test suite utterly fails to test Timsort.

Independent algorithms can be packaged as independent software
components and subjected to separate black-box tests -- through their
advertised APIs.
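For instance (a sketch of my own, not anything taken from CPython): if the
short-list path were published as its own component, it could get its own
black-box tests against its advertised API:

def insertion_sort(items):
    """Return a new list with the items in ascending order (insertion sort)."""
    result = []
    for item in items:
        # Walk back through the sorted prefix to find the insertion point.
        i = len(result)
        while i > 0 and result[i - 1] > item:
            i -= 1
        result.insert(i, item)
    return result


def test_insertion_sort_orders_and_preserves_items():
    assert insertion_sort([3, 5, 2, 4, 1]) == [1, 2, 3, 4, 5]
    assert insertion_sort([]) == []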

> Black-box testing is better than nothing, but white-box testing is
> much more effective.

I've had to deal with a lot of white-box test code. In practice,

 * It relies on internal reporting by the implementation instead of
   real, observed behavior.

 * It hasn't stayed current with the implementation.

 * It has convoluted the code with scary conditional compilation and
   other tricks, thus lowering the quality of the implementation.

> Certainly. And if you test do_something_important against EVERY
> possible combination of inputs, then you don't need to care about the
> implementation, since you've tested every single possible case.

The space of every imaginable situation is virtually infinite; testing
will only scratch the surface no matter how much effort you put into it.

I'm thinking of recent bugs that have sneaked into a product. The
question was why our tests didn't catch the bugs. The answer was that
those particular sequences weren't included in the thousands of test
cases that we run regularly.

>> class Example:
>>     def __init__(self, queue=None):
>>         self._queue = queue or Queue()
>
> That's buggy. If I pass a queue which is falsey, you replace it with
> your own default queue instead of the one I gave you. That's wrong.

And your two thousand white-box test cases might miss the bug as well.

> The lessen here is: when you want to test for None, TEST FOR NONE.
> Don't use "or" when you mean "if obj is None".

In the real world, that's among the most benign of bugs that riddle
software (and software tests). Programming is too hard for mortals.

Junior software developers are full of ambitious ideas -- sky's the
limit. Experienced software developers are full of awe if anything
actually works.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: [Theory] How to speed up python code execution / pypy vs GPU

2016-11-08 Thread John Ladasky
On Monday, November 7, 2016 at 5:23:25 PM UTC-8, Steve D'Aprano wrote:
> On Tue, 8 Nov 2016 05:47 am, [email protected] wrote:
> > It has been very important for the field of computational molecular
> > dynamics (and probably several other fields) to get floating-point
> > arithmetic working right on GPU architecture.  I don't know anything about
> > other manufacturers of GPU's, but NVidia announced IEEE-754,
> > double-precision arithmetic for their GPU's in 2008, and it's been
> > included in the standard since CUDA 2.0.
> 
> That's excellent news, and well-done to NVidia.
> 
> But as far as I know, they're not the only manufacturer of GPUs, and they
> are the only ones who support IEEE 754. So this is *exactly* the situation
> I feared: incompatible GPUs with varying support for IEEE 754 making it
> difficult or impossible to write correct numeric code across GPU platforms.
> 
> Perhaps it doesn't matter? Maybe people simply don't bother to use anything
> but Nvidia GPUs for numeric computation, and treat the other GPUs as toys
> only suitable for games.

Maybe so.  I only know for certain that recent NVidia devices comply with 
IEEE-754.  Others might work too.

> > If floating-point math wasn't working on GPU's, I suspect that a lot of
> > people in the scientific community would be complaining.
> 
> I don't.
> 
> These are scientists, not computational mathematics computer scientists. In
> the 1980s, the authors of the "Numeric Recipes in ..." books, William H
> Press et al, wrote a comment about the large number of scientific papers
> and simulations which should be invalidated due to poor numeric properties
> of the default pseudo-random number generators available at the time.
> 
> I see no reason to think that the numeric programming sophistication of the
> average working scientist or Ph.D. student has improved since then.

I work a lot with a package called GROMACS, which does highly iterative 
calculations to simulate the motions of atoms in complex molecules.  GROMACS 
can be built to run on a pure-CPU platform (taking advantage of multiple cores, 
if you want), a pure-GPU platform (leaving your CPU cores free), or a blended 
platform, where certain parts of the algorithm run on CPUs and other parts on 
GPUs.  This latter configuration is the most powerful, because only some parts 
of the simulation algorithm are optimal for GPUs.  GROMACS only supports NVidia 
hardware with CUDA 2.0+.

Because of the iterative nature of these calculations, small discrepancies in 
the arithmetic algorithms can rapidly lead to a completely different-looking 
result.  In order to verify the integrity of GROMACS, the developers run 
simulations with all three supported hardware configurations, and verify that 
the results are identical.  Now, I don't know that every last function and 
corner case in the IEEE-754 suite gets exercised by GROMACS, but that's a 
strong vote of confidence.

> The average scientist cannot even be trusted to write an Excel spreadsheet
> without errors that invalidate their conclusion:
> 
> https://www.washingtonpost.com/news/wonk/wp/2016/08/26/an-alarming-number-of-scientific-papers-contain-excel-errors/
>
>
> let alone complex floating point numeric code. Sometimes those errors can
> change history: the best, some might say *only*, evidence for the austerity
> policies which have been destroying the economies in Europe for almost a
> decade now is simply a programming error.
> 
> http://www.bloomberg.com/news/articles/2013-04-18/faq-reinhart-rogoff-and-the-excel-error-that-changed-history

I know this story.  It's embarrassing.

> These are not new problems: dubious numeric computations have plagued
> scientists and engineers for decades, there is still a huge publication
> bias against negative results, most papers are written but not read, and
> even those which are read, most are wrong.
> 
> http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
> 
> Especially in fast moving fields of science where there is money to be made,
> like medicine and genetics. There the problems are much, much worse.
> 
> Bottom line: I'm very glad that Nvidia now support IEEE 754 maths, and that
> reduces my concerns: at least users of one common GPU can be expected to
> have correctly rounded results of basic arithmetic operations.
> 
> 
> -- 
> Steve
> “Cheer up,” they said, “things could be worse.” So I cheered up, and sure
> enough, things got worse.

You're right Steve, the election results are rolling in.


-- 
https://mail.python.org/mailman/listinfo/python-list