Re: http://bugs.python.org/issue19495 timeit enhancement

2014-06-13 Thread Mark Lawrence

On 13/06/2014 01:44, Steven D'Aprano wrote:

On Fri, 13 Jun 2014 00:35:43 +0100, Mark Lawrence wrote:


The request is for a class within timeit that allows you to test code
inside a with block.  It strikes me as being useful but there's only one
response on the issue, albeit a positive one.  If others here think this
would be a useful addition I'll see if I can take this forward, unless
there are any timeit fans lurking who'd like to run with it themselves.


I have a Stopwatch() context manager which I have been using for a long
time, very successfully. There's an early version here:

http://code.activestate.com/recipes/577896

I'll clean it up and submit it on the bug tracker.
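A minimal version of that idea (not the recipe itself, just a sketch) looks like:

```python
import time

class Stopwatch:
    """Time the body of a 'with' block (minimal sketch)."""
    def __enter__(self):
        self.start = time.perf_counter()
        return self

    def __exit__(self, *exc_info):
        self.elapsed = time.perf_counter() - self.start
        return False  # never suppress exceptions from the block

with Stopwatch() as sw:
    sum(x * x for x in range(10**5))
print("elapsed:", sw.elapsed)
```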



I see it's there, thanks Steven :)

--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence



--
https://mail.python.org/mailman/listinfo/python-list


what is the location of erpnext database?

2014-06-13 Thread satishguptajaipur
I was thinking of connecting the database to Access for some custom reports.
What is the location of erpnext database?

Regards
Satish


Re: what is the location of erpnext database?

2014-06-13 Thread alister
On Fri, 13 Jun 2014 02:16:13 -0700, satishguptajaipur wrote:

> I was thinking of connecting the database to access for some custom
> reports.
> What is the location of erpnext database?
> 
> Regards Satish

How on earth should we know?
It is your database server; you should know.



-- 
If you're carrying a torch, put it down.  The Olympics are over.


Re: Asymmetry in globals __getitem__/__setitem__

2014-06-13 Thread robert
On Friday, June 13, 2014 8:07:45 AM UTC+2, Marko Rauhamaa wrote:
> 
> The documentation is a bit vague about it:
> 
>If only globals is provided, it must be a dictionary, which will be
>used for both the global and the local variables. If globals and
>locals are given, they are used for the global and local variables,
>respectively. If provided, locals can be any mapping object.


Interesting.  This paragraph explicitly states "locals can be any mapping 
object," but that seems to be false:


class Namespace(dict):  
def __getitem__(self, key): 
print("getitem", key)   
def __setitem__(self, key, value):  
print("setitem", key, value)

def fun():  
x  # should call locals.__getitem__ 
y = 1  # should call locals.__setitem__ 

exec(fun.__code__, {}, Namespace())


Neither __getitem__ nor __setitem__ seem to be called on the local variables.


Re: Asymmetry in globals __getitem__/__setitem__

2014-06-13 Thread Peter Otten
[email protected] wrote:

> On Friday, June 13, 2014 8:07:45 AM UTC+2, Marko Rauhamaa wrote:
>> 
>> The documentation is a bit vague about it:
>> 
>>If only globals is provided, it must be a dictionary, which will be
>>used for both the global and the local variables. If globals and
>>locals are given, they are used for the global and local variables,
>>respectively. If provided, locals can be any mapping object.
> 
> 
> Interesting.  This paragraph explicitly states "locals can be any mapping
> object," but that seems to be false:
> 
> 
> class Namespace(dict):
> def __getitem__(self, key):
> print("getitem", key)
> def __setitem__(self, key, value):
> print("setitem", key, value)
>
> def fun():
> x  # should call locals.__getitem__

No, x is a global here.

> y = 1  # should call locals.__setitem__
>
> exec(fun.__code__, {}, Namespace())
> 
> 
> Neither __getitem__ nor __setitem__ seem to be called on the local
> variables.

Accessing fun.__code__ is clever, but unfortunately the compiler produces 
different bytecodes for loading/storing variables inside a function. 
Compare:

>>> import dis
>>> def fun(x=2):
... x
... y = 1
... 
>>> dis.dis(fun.__code__)
  2           0 LOAD_FAST                0 (x)
              3 POP_TOP

  3           4 LOAD_CONST               1 (1)
              7 STORE_FAST               1 (y)
             10 LOAD_CONST               0 (None)
             13 RETURN_VALUE
>>> dis.dis(compile("x\ny=2", "", "exec"))
  1           0 LOAD_NAME                0 (x)
              3 POP_TOP

  2           4 LOAD_CONST               0 (2)
              7 STORE_NAME               1 (y)
             10 LOAD_CONST               1 (None)
             13 RETURN_VALUE

Only the latter works as advertised:

>>> exec("x\ny=1", {}, Namespace())
getitem x
setitem y 1





Re: Asymmetry in globals __getitem__/__setitem__

2014-06-13 Thread Paul Sokolovsky
Hello,

On Fri, 13 Jun 2014 12:53:54 +0200
Peter Otten <[email protected]> wrote:

[]

> > exec(fun.__code__, {}, Namespace())
> > 
> > 
> > Neither __getitem__ nor __setitem__ seem to be called on the local
> > variables.
> 
> Accessing fun.__code__ is clever, but unfortunately the compiler
> produces different bytecodes for loading/storing variables inside a
> function. Compare:

The compiler produces different bytecodes and allocates local variables on
the stack (just like C) - very fortunately, since steps like that allowed
Python to drop the moniker of "ridiculously slow language". And people
should decide what they really want - a fast language which can stand
against the competition, or a language with dynamicity and reflection
capabilities beyond any practical need. I make the first choice any time.
And then it makes sense to just accept that any function can be JIT (or
AOT) compiled, and that there's nothing to fish inside of it (except the
raw machine code of some unknown architecture).


-- 
Best regards,
 Paul  mailto:[email protected]


Question about asyncio

2014-06-13 Thread Frank Millman
Hi all

I am trying to get to grips with asyncio, but I don't know if I am doing it 
right.

I have an app that listens for http connections and sends responses. I had 
it working
with cherrypy, which uses threading. Now I am trying to convert it to 
asyncio,
using a package called aiohttp.

I got a basic version working quite quickly, just replacing cherrypy's
request handler with aiohttp's, with a bit of tweaking where required.

Now I want to use the functionality of asyncio by using a 'yield from' to
suspend the currently executing function at a particular point while it
waits for some information. I find that adding 'yield from' turns the
function into a generator, which means that the caller has to iterate over
it. I can avoid that by telling the caller to 'yield from' the generator,
but then *its* caller has to be modified. Now I find I am going through my
entire application and changing every function into a coroutine by
decorating it with @asyncio.coroutine, and changing a simple function call
to a 'yield from'.

So far it is working, but there are dozens of functions to modify, so before
digging too deep a hole for myself I would like to know if this feels like
the right approach.

Thanks

Frank Millman





Re: Asymmetry in globals __getitem__/__setitem__

2014-06-13 Thread Marko Rauhamaa
Paul Sokolovsky :

> And people should decide what they really want - fast language which
> can stand against the competition, or language with dynamicity and
> reflection capabilities beyond any practical need. I make first choice
> any time.

I'm in the latter camp, absolutely, except that I have a lot of
practical needs for much of that dynamism.

Admittedly, the topic of this thread is a bit funky. I'm wondering what
the application is: a profiler? a debugger? malware? self-awareness?

> And then it makes sense to just accept that any function can be JIT
> (or AOT) compiled, and there's nothing to fish inside of it (but for
> the raw machine code of unknown architecture).

I've been talking about the need for effective JIT so we can get rid of
Java et co. I wouldn't dream of taking away any of Python's dynamism,
though. In particular, type annotations etc are a big no in my book.


Marko


Announcing the PyconUK Education Track

2014-06-13 Thread Tim Golden
If you're UK-based and aren't following the python-uk mailing list --
well, why aren't you? But, just in case, we're announcing the Education
Track at this year's PyCon UK. See here for details:

http://pyconuk.org/education/

If you're a teacher in the UK, or if you know any teachers here, this is
a great chance to get some CPD with the PSF, the Raspberry Pi Foundation
and Bank of America part-funding backfill for supply teachers. How
generous is that?

Teachers on the Friday 19th Sep, Kids on Saturday 20th. (Lots) more
details at the link above.

See you there!

TJG


Re: OT: This Swift thing

2014-06-13 Thread Roy Smith
In article <[email protected]>,
 Steven D'Aprano  wrote:

> On Wed, 11 Jun 2014 08:48:36 -0400, Roy Smith wrote:
> 
> > In article <[email protected]>,
> >  Steven D'Aprano  wrote:
> > 
> >> Yes, technically water-cooled engines are cooled by air too. The engine
> >> heats a coolant (despite the name, usually not water these days) which
> >> then heats the air.
> > 
> > Not water???  I'm not aware of any water-cooled engines which use
> > anything other than water.  Well, OK, it's really a solution of ethylene
> > or propylene glycol in water, but the water is what does most of the
> > heat transfer.  The glycol is just there to provide freezing point
> > depression and boiling point elevation.
> 
> Would you consider it fair to say that, say, vinegar is "not water"? 
> Depending on the type of vinegar, it is typically around 5-10% acetic 
> acid, and the rest water. Spirit vinegar can be as much as 20% acetic 
> acid, which still leaves 80% water.

In a car, the water is the important part (even if it's only a 50% 
component).  The primary job of the circulating coolant is to absorb 
heat in one place and transport it to another place.  That requires a 
liquid with a high heat capacity, which is the water.  The other stuff 
is just there to help the water do its job (i.e. not freeze in the 
winter, or boil over in the summer, and some anti-corrosive action 
thrown into the mix).

When you said, "usually not water these days", that's a misleading 
statement.  Certainly, it's "not pure water", or even "just water".  But 
"not water" is a bit of a stretch.

With vinegar, the acetic acid is the important component.  The water is 
just there to dilute it to a useful working concentration and act as a 
carrier.  People are 90% water too, but I wouldn't call a person 
"water".  I would, however, as a first-order description, call the stuff 
circulating through the cooling system in my car, "water".

> Back in the day, car radiators were *literally* water-cooled in the sense 
> that the radiator was filled with 100% water. You filled it from the tap 
> with drinking water. In an emergency, say broken down in the desert, you 
> could drink the stuff from the radiator to survive. If you tried that 
> with many modern cars, you would die a horrible death.

But, I could do that right now, with my car (well, not the drinking
part).  In an emergency, I could fill my cooling system with pure
water, and it would work well enough to get me someplace not too far
away where I could get repairs done.


Re: Python's re module and genealogy problem

2014-06-13 Thread BrJohan

On 11/06/2014 14:23, BrJohan wrote:

For some genealogical purposes I consider using Python's re module.

Rather many names can be spelled in a number of similar ways, and in
order to match names even if they are spelled differently, I will build
regular expressions, each of which is supposed to match  a number of
similar names.

I guess that there will be a few hundred such regular expressions
covering most popular names.

Now, my problem: Is there a way to decide whether any two - or more - of
those regular expressions will match the same string?

Or, stated a little differently:

Can it, for a pair of regular expressions be decided whether at least
one string matching both of those regular expressions, can be constructed?

If it is possible to make such a decision, then how? Anyone aware of an
algorithm for this?


Thank you all for valuable input and interesting thoughts.

After having reconsidered my problem, it might be better to approach it 
a little differently.


Either to state the regexps simply like:
"(Kristina)|(Christina)|(Cristine)|(Kristine)"
instead of "((K|(Ch))ristina)|([CK]ristine)"

Or to put the namevariants in some sequence of sets having elements like:
("Kristina", "Christina", "Cristine", "Kristine")
Matching is then just applying the 'in' operator.
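Side by side, the two alternatives come down to this (variants as in the
example above):

```python
import re

variants = ("Kristina", "Christina", "Cristine", "Kristine")

# 1. The flat alternation regex, built mechanically from the tuple:
pattern = re.compile("|".join(re.escape(v) for v in variants))

# 2. Plain membership testing, no regex machinery at all:
print("Christina" in variants)                    # True
print(pattern.fullmatch("Kristine") is not None)  # True
print("Karin" in variants)                        # False
```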

I see two distinct advantages.
1. Readability and maintainability
2. Any namevariant occurring in just one regexp or set means no risk of 
erroneous matching.


Comments?




Re: Question about asyncio

2014-06-13 Thread Ian Kelly
On Fri, Jun 13, 2014 at 5:42 AM, Frank Millman  wrote:
> Now I want to use the functionality of asyncio by using a 'yield from' to
> suspend the currently executing function at a particular point while it
> waits for some information. I find that adding 'yield from' turns the
> function into a generator, which means that the caller has to iterate over
> it.

Hold up; you shouldn't be iterating over the coroutines yourself.
Your choices for invoking an asyncio coroutine are either: 1) schedule
it (the simplest way to do this is by calling asyncio.async); or 2)
using 'yield from' in another coroutine that is already being run as a
task.

> I can avoid that by telling the caller to 'yield from' the generator,
> but then *its* caller has to be modified. Now I find I am going through my
> entire application and changing every function into a coroutine by
> decorating it with @asyncio.coroutine, and changing a simple function call
> to a 'yield from'.

If the caller needs to wait on the result, then I don't think you have
another option but to make it a coroutine also.  However if it doesn't
need to wait on the result, then you can just schedule it and move on,
and the caller doesn't need to be a coroutine itself.  Just be aware
that this could result in different behavior from the threaded
approach, since whatever the function does after the scheduling will
happen before the coroutine is started rather than after.
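To make the distinction concrete, here is a small sketch (function names
are illustrative; it uses today's async/await spelling, which maps
one-to-one onto the @asyncio.coroutine / 'yield from' style discussed in
this thread):

```python
import asyncio

async def fetch_data():
    # Stand-in for suspending while waiting for some information.
    await asyncio.sleep(0.01)
    return 42

async def caller_that_waits():
    # This caller needs the result, so it must be a coroutine too
    # and must 'await' (in 3.4: 'yield from') the callee.
    return await fetch_data()

async def caller_that_schedules():
    # This caller doesn't need the result: it just schedules the
    # coroutine (asyncio.async in 3.4, ensure_future/create_task now)
    # and carries on; the call site stays an ordinary statement.
    task = asyncio.ensure_future(fetch_data())
    await task  # only here so this demo finishes cleanly

loop = asyncio.new_event_loop()
value = loop.run_until_complete(caller_that_waits())
loop.run_until_complete(caller_that_schedules())
loop.close()
print(value)
```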


Re: Python's re module and genealogy problem

2014-06-13 Thread Peter Otten
BrJohan wrote:

> On 11/06/2014 14:23, BrJohan wrote:
>> For some genealogical purposes I consider using Python's re module.
>>
>> Rather many names can be spelled in a number of similar ways, and in
>> order to match names even if they are spelled differently, I will build
>> regular expressions, each of which is supposed to match  a number of
>> similar names.
>>
>> I guess that there will be a few hundred such regular expressions
>> covering most popular names.
>>
>> Now, my problem: Is there a way to decide whether any two - or more - of
>> those regular expressions will match the same string?
>>
>> Or, stated a little differently:
>>
>> Can it, for a pair of regular expressions be decided whether at least
>> one string matching both of those regular expressions, can be
>> constructed?
>>
>> If it is possible to make such a decision, then how? Anyone aware of an
>> algorithm for this?
> 
> Thank you all for valuable input and interesting thoughts.
> 
> After having reconsidered my problem, it might be better to approach it
> a little differently.
> 
> Either to state the regexps simply like:
> "(Kristina)|(Christina)|(Cristine)|(Kristine)"
> instead of "((K|(Ch))ristina)|([CK]ristine)"
> 
> Or to put the namevariants in some sequence of sets having elements like:
> ("Kristina", "Christina", "Cristine", "Kristine")
> Matching is then just applying the 'in' operator.
> 
> I see two distinct advantages.
> 1. Readability and maintainability
> 2. Any namevariant occurring in just one regexp or set means no risk of
> erroneous matching.
> 
> Comments?

I like the simple variant

kristinas = ("Kristina", "Christina", "Cristine", "Kristine")

But instead of matching with "in" you could build a dict that maps the name 
variants to a normalised name

normalized_names = {
"Kristina": "Kristina",
"Christina": "Kristina",
...
"John": "John",
"Johann": "John",
...
}
def normalized(name):
return normalized_names.get(name, name)

If you put persons in another dict or a database indexed by the normalised 
name 

lookup = {
"Kristina": ["Kristina Smith", "Christina Miller"],
...
}

you can find all Kristinas with two look-ups:

>>> lookup[normalized("Kristine")]
['Kristina Smith', 'Christina Miller']

PS: A problem with this approach might be that (name in nameset_A) and (name 
in nameset_B) implies nameset_A == nameset_B



Re: C-API proper initialization and deallocation of subclasses

2014-06-13 Thread ptb
While there doesn't appear to be too much interest in this question, I thought I 
would post the solution.  I had to modify shoddy by adding the proper flags and 
clear/traverse methods to ensure that cyclic garbage collection was properly 
handled.  I'm not quite sure why I had to do this, since my shoddy instance did 
not have any cyclic references, but I'm wondering if it was necessary because 
it's a subclass of list, which has all the cyclic GC machinery in place.  I 
suspect I won't understand the answer, but if someone with more knowledge can 
clear things up, that would be great.

On Thursday, June 12, 2014 10:39:27 PM UTC-6, ptb wrote:
> Hello all,
>
> I decided to play around with the C-API and have gotten stuck.  I went
> through the Shoddy example
> (https://docs.python.org/3/extending/newtypes.html#subclassing-other-types)
> in the docs and tried to extend it by adding a method which creates and
> returns a shoddy instance.  I dug around to find ways to allocate and
> initialize my shoddy instance and that seems to work well.  However, I get
> segfaults when I try to delete my instance.  The code is in the gist:
> https://gist.github.com/pbrady/f2daf50761e458bbe44a
>
> The magic happens in the make_a_shoddy function.
>
> Here's a sample session (Python 3.4.1)
>
> >>> from shoddy import make_a_shoddy
> >>> shd = make_a_shoddy()
> tup build
> shd allocated
> list style allocation successful
> Py_SIZE(list) : 5
> Py_SIZE(shoddy) : 5
> >>> type(shd)
>
> >>> shd[:]
> [1, 2, 3, 4, 5]
> >>> shd.increment()
> 1
> >>> shd.increment()
> 2
> >>> del shd
> Segmentation fault (core dumped)
>
> This happens even if I don't set the destructor.  Any ideas on what I am
> doing wrong?
>
> Thanks,
> Peter.



parsley parsing question, how to make a variable grammar

2014-06-13 Thread Eric S. Johansson
In my quest for making speech friendly applications, I've developed a 
very simple domain specific language/notation that works well. I'm using 
parsley which is a great tool for writing parsers especially simple ones 
like the one I need. However, I've come across a problem that I don't 
know how to solve.


Below is my grammar. The problem is the last element which aggregates 
all individual grammar elements. In my domain specific notation, it's 
possible for a user to create extensions and I'm trying to figure out 
how to make the grammar accommodate those extensions.


The first Issue is adding more statements. I can handle that easy enough 
by splitting the existing grammar into a prefix, statements, and 
postfix. I could append more statements to the end of the statement 
section. When I'm done, I can then join all of the pieces together into 
a single grammar.


The second issue is adding more alternatives between the parentheses of
'bot' so that the additional statements can be included in the grammar.
It seems to me I should be able to create a Python expression which
returns the OR list and have it interpreted as part of the grammar.
Failing that, I'm just going to do a %s expansion when I create the grammar.


I appreciate any insight before I go too far off track.
--- eric

TF_grammar = r"""
kwToken = (letter|digit|'_')+
uses_statement = 'uses' ws kwToken:kwT ':' :roL -> do_uses("".join(kwT), "".join(roL))
returns_statement = 'returns' ws kwToken:kwT -> do_returns("".join(kwT))
template_statement = 'template' ws kwToken:kwT -> do_template("".join(kwT))
remembers_statement = 'remembers' ws kwToken:kwT -> do_remembers("".join(kwT))
everything_else = :roL '\n'{0,1} -> do_everything_else("".join(roL))
bot = (uses_statement | returns_statement | template_statement | everything_else)
"""


Re: tempfile.py", line 83, in once_lock = _allocate_lock() thread.error: can't allocat lock

2014-06-13 Thread Cameron Simpson

On 14Jun2014 01:40, SABARWAL, SHAL  wrote:

Appreciate any help in resolving/understanding following error.
Very occasionally get following error. The error occurs once every 20 - 30 
days. OS HP-UX 11.11

### ERROR LISTNG #
sem_init: Device busy

[...]

   once_lock = _allocate_lock()
   thread.error: can't allocate lock

[...]

This looks like (erroneous) reuse of a previously allocated anonymous 
semaphore. Have a read of HP-UX's sem_init(2) manual page:


  
http://condor.depaul.edu/dmumaugh/readings/handouts/CSC343/pthreads_man%20pages/semaphore.txt

This will be the underlying OS interface used to make a Python lock. It takes 
as an argument a pointer to a piece of memory to represent the semaphore.


That page says that the error code EBUSY ("Device busy") is issued if:

There are threads currently blocked on the semaphore or there are 
outstanding locks held on the semaphore.


I would take this to represent a bug in the python code that manages the 
semaphores supporting the Lock objects. It looks like the code reuses a 
previous sem_t C object while it is still in use. Since this happens very 
rarely for you I would guess this is a race condition not detected in testing.


I guess my first question would be: what release of Python are you running?

Have you considered getting the latest source code for Python ( version 2 or 3 
according to what your code is using) and building that? It may be that this 
issue has been fixed in a later release of Python than the one currently 
installed on your HP-UX box.


Cheers,
Cameron Simpson 


Re: Python's re module and genealogy problem

2014-06-13 Thread Dan Sommers
On Fri, 13 Jun 2014 17:17:06 +0200, BrJohan wrote:

> Or to put the namevariants in some sequence of sets having elements
> like:  ("Kristina", "Christina", "Cristine", "Kristine")

> Matching is then just applying the 'in' operator.

That's definitely a better approach, for the reasons you mentioned.

> Comments?

A soundex (or similar) algorithm will be better in the long run for the
less common, but more often misspelled names.  It's fairly simple to
guess at a number of common spellings for names that *you* think are
common now, but what about names that run in families that aren't yours,
or aren't that common outside of that family, or were wildly popular a
couple of hundred years ago but have fallen out of favor now?

My wife's ancestors (she's the genealogist, I just get to hear the
horror stories) are notorious for being somewhat illiterate; for
changing their names, on purpose, after a feud, in order to "distance"
themselves from their relatives; and also for using not-common-now (or
even not-so-common-then) names.  Add in somewhat illiterate records
keepers and hospital workers (or midwives or neighbors), not to mention
bad copies of bad copies of centuries-old smudged documents, and you
have an instant soup of names that sound alike but are spelled
differently in ways you cannot guess ahead of time.

Your users will appreciate *some* sort of fuzzy matching, or runtime
extensibility, atop the "obvious" spellings you take the time to include
in your software.  And that's *not* a comment on your abilities; it's a
comment on the abilities and creativity of their ancestors.
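For the record, the classic American Soundex that makes "sounds alike,
spelled differently" names comparable can be sketched in a few lines
(a sketch of the textbook algorithm, not a library implementation):

```python
def soundex(name):
    """American Soundex sketch: first letter + three digits, e.g. R163."""
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}

    def code(ch):
        for letters, digit in groups.items():
            if ch in letters:
                return digit
        return ""  # vowels and h/w/y carry no digit

    name = name.lower()
    out = name[0].upper()
    last = code(name[0])
    for ch in name[1:]:
        digit = code(ch)
        if digit and digit != last:
            out += digit
        if ch not in "hw":  # h and w do not separate a run of like codes
            last = digit
    return (out + "000")[:4]

print(soundex("Robert"), soundex("Rupert"))  # both R163
```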

Dan


Python Brasil[10]

2014-06-13 Thread Renato Oliveira
Hi all!

My name is Renato Oliveira, I'm board member of Python Brazil Association
and co-chair of the next Python Brasil Conference
. This year the conference is taking place in Porto de
Galinhas, Pernambuco (elected the most beautiful Brazilian beach ten
times in a row) on Nov 4 - Nov 8, with some special activities (TBA)
on November 9th.

The tickets will be available on Monday, and we just announced our first
keynote: Alex Gaynor  

How can you help us? Attending, spreading the word in your local groups, and
sponsoring the conference. The prospectus is here.

If you're planning to come to the conference and need help finding a place
to stay, just need some tips, or have any questions, please let me know.

Best regards,

Renato Oliveira
@_renatooliveira 
Labcodes - www.labcodes.com.br