Re: Trying to hire or grow a Python developer for a 2 year contract

2015-04-07 Thread felix

On 05/04/15 13:28, Peter Rowley wrote:


 Hi,

I'm at York University in Toronto, Canada.  We have a large 
Python-and-Oracle web application written with Pyramids and YUI that 
we use to manage the staffing of courses at York and are looking for 
an intermediate or senior developer for a 2 year contract to work on 
extensions to the application.  If you are interested or know of 
someone who would be, please e-mail me at [email protected].  The 
salary is approximately $75,000 a year and benefits are excellent.


We have been looking for someone for this position for a while and are 
starting to think about hiring someone who has good programming skills 
but not necessarily Python.  To that end, if you're a Python developer 
or hire them, I'd be interested in your opinion of which common 
programming languages that people might know would enable learning 
Python?  I'm guessing PHP for the dynamic language concepts and Java 
for the structure, but that's not based on much evidence.



Nice job!
It makes me realize that studying Python is not just fun but also an 
opportunity to get paid!
Too bad for me, because I'm just a beginner. I only started with Python and 
Django a year ago! ;)


As a student I read up on the basics of several languages: Basic, Pascal, C, 
x86 assembly, then some basic OOP concepts, then PHP, and I found Python as 
natural and easy to start with as the tutorials say.


I'm sure you will find someone soon, Mr. Rowley.
Good luck!!!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Camelot a good tool for me

2015-05-22 Thread felix

On 22/05/15 10:24, Mark Lawrence wrote:

On 22/05/2015 08:59, Cecil Westerhof wrote:

I want to learn a lot of things. For example writing database and
graphical applications. For database I decided on SQLAlchemy and GUI
on Tkinter. In principle I want to write Python 3 applications.

I came across Camelot. As I understand it, this is something to write
graphical database applications fast. It works with Qt, but that
should not be a big problem. It is just to get me started. But it
seems only to work with 2.7 and not 3. Is this true?

Would Camelot be a good tool to get me started, or can I better bite
the bullet and just start with Tkinter and SQLAlchemy?



As others have already said plain SQL is perfectly adequate in many 
situations.  There are also other ORMs with peewee and ponyORM 
springing straight to my mind, although there's certainly a far longer 
list.  What it gets down to is "horses for courses" and you're the 
only person who (hopefully) knows the course you're running :)



I really enjoy Django ORM!

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: a more precise distance algorithm

2015-05-25 Thread felix

On 25/05/15 15:21, ravas wrote:

I read an interesting comment:
"""
The coolest thing I've ever discovered about Pythagorean's Theorem is an 
alternate way to calculate it. If you write a program that uses the distance 
formula c = sqrt(a^2 + b^2) you will suffer from the loss of half of your 
available precision because the square root operation is last. A more accurate 
calculation is c = a * sqrt(1 + b^2 / a^2). If a is less than b, you should 
swap them and of course handle the special case of a = 0.
"""

Is this valid? Does it apply to python?
Any other thoughts? :D

My imagining:

from math import sqrt

def distance(A, B):
    """
    A & B are objects with x and y attributes
    :return: the distance between A and B
    """
    dx = B.x - A.x
    dy = B.y - A.y
    a = min(dx, dy)
    b = max(dx, dy)
    if a == 0:
        return b
    elif b == 0:
        return a
    else:
        return a * sqrt(1 + (b / a)**2)
I don't know whether precision loss comes into it, but the second way you gave 
to calculate c is just math. Nothing extraordinary here.


c = a * sqrt(1 + b^2 / a^2)
c = sqrt(a^2 * (1 + b^2 / a^2))   (squaring a to move it inside the square root)
c = sqrt(a^2 + a^2*b^2/a^2)       (then just simplify)
c = sqrt(a^2 + b^2)
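
A quick way to see the two forms side by side (with math.hypot for comparison; 
the extreme values below are just made up to show where the naive form 
underflows, for ordinary magnitudes both give essentially the same answer):

from math import hypot, sqrt

def naive(a, b):
    return sqrt(a*a + b*b)            # a*a can underflow/overflow first

def rescaled(a, b):
    a, b = sorted((abs(a), abs(b)))   # 0 <= a <= b, so divide by the larger
    if b == 0:
        return 0.0
    return b * sqrt(1 + (a / b)**2)

print(naive(3e-200, 4e-200))      # 0.0 -- the squares underflowed to zero
print(rescaled(3e-200, 4e-200))   # ~5e-200
print(hypot(3e-200, 4e-200))      # ~5e-200, hypot does the rescaling for you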

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Lawful != Mutable (was Can Python function return multiple data?)

2015-06-08 Thread felix

On 07/06/15 12:20, Rustom Mody wrote:

On Saturday, June 6, 2015 at 10:20:49 AM UTC+5:30, Steven D'Aprano wrote:

On Sat, 6 Jun 2015 01:20 pm, Rustom Mody wrote:


On Saturday, June 6, 2015 at 3:30:23 AM UTC+5:30, Chris Angelico wrote:

Congrats! You just proved that an object can itself be immutable, but
can contain references to mutables. Ain't that awesome?

Did you have a point?

[Under assumption you are not being facetious...]
The word immutable happens to have existed in English before python.
I also happen to have used it before I knew of python
The two meanings do not match
I am surprised
Is that surprising?

Yes, I am surprised that you are surprised. You have been a regular, and
prolific, contributor on this forum for some years now, teach Python, blog
about it. You're quite obviously well-read and experienced. How is it that
you are surprised by such a fundamental part of not just Python's object
model, but of real life objects too?

I suspect you are pretending to be surprised to make a rhetorical point.

Dunno why the fuss...
As someone who often asks for short-simple-self-contained examples,
I would have expected you to prefer clarity of communication over
fidelity of experience??

Anyways...

Did I(rusi) now(2015) find that specific example surprising??

No, but:

1. Questioners here regularly do show similar confusions.
2. I did go through similar decades ago when I first encountered programming
3. And most to the point, just add a little complexity and I (rusi-now)
am as confused as any noob.

A few weeks ago there was this example.
It completely knocked me.
ChrisA's explanation clarified it.
Here is a reconstruction based on Chris's clarification
[Chris do you remember the question?]


>>> t = ([1, 2], [3, 4])
>>> t
([1, 2], [3, 4])
>>> t[1] = t[1].append(5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment
>>> t
([1, 2], [3, 4, 5])


Like many English words, "immutable" has a few meanings in plain English.
Bouvier's Law Dictionary included in 1856 this definition:

 IMMUTABLE. What cannot be removed, what is unchangeable.
 The laws of God being perfect, are immutable, but no
 human law can be so considered.

Clearly tuples can be removed. They are garbage-collected like any other
values in Python. If nothing else, you can turn the computer off, remove
the RAM, grind it down into the finest powder, and scatter it to the winds.
That surely is enough to remove the tuples 

Ok now rewrite that para above with
s/tuple/numbers like 3 or 666/
So I put '3' in the RAM and grind it to the finest powder.
Have all trinities (of the religious or secular variety) disappeared?
With 666 gone, has the devil been banished from God's (or Steven's) universe?


So according to Bouvier's definition, tuples are not immutable. But I trust
that we can agree that Bouvier's definition is not useful here.

Actually it's a very nice and apt quote.
Just add laws of mathematics to laws of God and it will be perfect.
[Or better remember that for Plato they were the same]

Interesting topic.
I'm a newbie to Python and programming.
I remembered that in the official Python tutorial I read this:

"...
TypeError: 'tuple' object does not support item assignment
>>> # but they can contain mutable objects:
..."

Cheers.
-- 
https://mail.python.org/mailman/listinfo/python-list


Multiprocessing.Queue deadlock

2009-10-06 Thread Felix
Hello,

I keep running into a deadlock in a fairly simple parallel script
using Multiprocessing.Queue for sending tasks and receiving results.
From the documentation I cannot figure out what is happening and none
of the examples seem to cover quite what I am doing. The main code is

import multiprocessing as mp
from Queue import Empty    # multiprocessing.Queue raises Queue.Empty

results = mp.Queue()
tasks = mp.JoinableQueue()
tasks.put((0, 0))
procs = [mp.Process(target=work, args=(tasks, results))
         for i in range(nprocs)]
for p in procs:
    p.daemon = True
    p.start()

tasks.join()
for i in range(nprocs):
    tasks.put('STOP')
for p in procs:
    p.join()
res = []
while 1:
    try:
        res.append(results.get(False))
    except Empty:
        break


The function 'work' both consumes tasks adding the results to the
output queue and adds new tasks to the input queue based on its
result.

def work(tasks, results):
    for task in iter(tasks.get, 'STOP'):
        res = calc(*task)
        if res:
            results.put(res)
            tasks.put((task[0], res[1]))
            tasks.put((res[0], task[1]))
        tasks.task_done()

This program will hang while the main process joins the workers (after
all results are computed, i.e. after tasks.join() ). The workers have
finished function 'work', but have not terminated yet.

Calling results.cancel_join_thread as a last line in 'work' prevents
the deadlocks, as does terminating the workers directly. However I am
not sure why that would be needed and if it might not make me lose
results.

It seems the workers cannot finish pushing buffered results into
the output queue when 'results.join_thread' is called as they terminate,
but why is that? I tried calling 'results.close()' before joining the
workers in the main process, but it does not make a difference.
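
The only other ordering I can think of is to drain the results queue before
joining the workers, so their feeder threads can flush. Roughly, with the same
names as above and after the STOP sentinels have been put:

res = []
# keep draining while workers are still alive ...
while any(p.is_alive() for p in procs):
    try:
        res.append(results.get(timeout=0.1))
    except Empty:
        pass
# ... then pick up anything still buffered, and only now join
while 1:
    try:
        res.append(results.get(False))
    except Empty:
        break
for p in procs:
    p.join()

But I am not sure whether that is the intended usage either.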

Is there something I am understanding wrong about the interface? Is
there a much better way to do what I am trying to do above?

Thanks
  Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


Multiprocessing.Array bug / shared numpy array

2009-10-08 Thread Felix
Hi,

The documentation for the Multiprocessing.Array says:

multiprocessing.Array(typecode_or_type, size_or_initializer, *,
lock=True)

...
If lock is False then access to the returned object will not be
automatically protected by a lock, so it will not necessarily be
“process-safe”.
...

However:
In [48]: mp.Array('i',1,lock=False)
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)

/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/multiprocessing/__init__.pyc
in Array(typecode_or_type, size_or_initializer, **kwds)
    252     '''
    253     from multiprocessing.sharedctypes import Array
--> 254     return Array(typecode_or_type, size_or_initializer, **kwds)
    255
    256 #

/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/multiprocessing/sharedctypes.pyc
in Array(typecode_or_type, size_or_initializer, **kwds)
     85     if lock is None:
     86         lock = RLock()
---> 87     assert hasattr(lock, 'acquire')
     88     return synchronized(obj, lock)
     89

AssertionError:

---------------------------------------------------------------------------
I.e. it looks like lock=False is not actually supported. Or am I
reading this wrong? If not, I can submit a bug report.


I am trying to create a shared, read-only numpy.ndarray between
several processes. After some googling the basic idea is:

sarr = mp.Array('i',1000)
ndarr = scipy.frombuffer(sarr._obj,dtype='int32')
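
Spelled out a little more, the pattern I mean is roughly the following (using
numpy's frombuffer instead of scipy's, and the public get_obj() accessor
instead of the private _obj; the array contents are just made up):

import multiprocessing as mp
import numpy as np

def worker(shared):
    # re-wrap the same shared buffer in the child; nothing is copied
    arr = np.frombuffer(shared.get_obj(), dtype='int32')
    print arr[:5]

if __name__ == '__main__':
    shared = mp.Array('i', 1000)               # lock=True, the default
    arr = np.frombuffer(shared.get_obj(), dtype='int32')
    arr[:] = np.arange(1000, dtype='int32')    # filled once, in the parent
    p = mp.Process(target=worker, args=(shared,))
    p.start()
    p.join()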

Since it will be read only (after being filled once in a single
process) I don't think I need any locking mechanism. However is this
really true given garbage collection, reference counts and other
implicit things going on?

Or is there a recommended better way to do this?

Thanks
-- 
http://mail.python.org/mailman/listinfo/python-list


SQL user function returning list for IN clause

2009-10-16 Thread Felix
I am using the Python SQLite3 interface, but the question is probably
general to python and SQL.

I want to run a query like

select * from table a, table b where a.foo IN foobar(b.bar)

where foobar is a user function (registered by create_function in
pysqlite3) returning a list of integers. However such functions can
only return basic data types so the above is invalid. I am wondering
what the best way around this is.

I could fetch rows from table b, compute foobar(b.bar) and create a
new query for each result, but that seems very inefficient.
I could create a new table matching each row in b to all values of
foobar(b.bar) and use that to join, but that would be inefficient and very
redundant.

Rewriting the query to say
select * from table a, table b where foobar_predicate(a.foo, b.bar)
would work (foobar_predicate checks whether a.foo is in foobar(b.bar)). But
it does not allow using an index on a.foo.
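
Spelled out, that variant would look roughly like this (foobar() here is just
a made-up stand-in for the real function, and the tiny tables are only for
illustration):

import sqlite3

def foobar(bar):
    # made-up stand-in: expand one b.bar value into a list of integers
    return [bar, bar + 10, bar + 20]

def foobar_predicate(foo, bar):
    return foo in foobar(bar)

conn = sqlite3.connect(':memory:')
conn.create_function('foobar_predicate', 2, foobar_predicate)
conn.execute('create table a (foo integer)')
conn.execute('create table b (bar integer)')
conn.executemany('insert into a values (?)', [(1,), (11,), (99,)])
conn.execute('insert into b values (1)')

for row in conn.execute('select a.foo, b.bar from a, b '
                        'where foobar_predicate(a.foo, b.bar)'):
    print row      # (1, 1) and (11, 1)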

If I knew the maximum length of foobar(b.bar) I could say
select * from table a, table b where a.foo in
(foobar(b.bar,0), foobar(b.bar,1), ..., foobar(b.bar,n))
where the second parameter to foobar chooses which element to return.
This is clearly not optimal.

Am I missing some obvious elegant way to do this or is it just not
possible given that the SQL IN statement does not really deal with
lists in the python sense of the word?

Thanks
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: SQL user function returning list for IN clause

2009-10-16 Thread Felix
> > Rewriting the query to say
> > select * from table a, table b where foobar_predicate(a.foo, b.bar)
> > would work (foobar_predicate checks if a.foo is in foobar(b.bar). But
> > it does not allow to use an index on a.foo

> Define a function foobar_contains() as follows:
>
> def foobar_contains(foo, bar):
>     return foo in foobar(bar)
>
> and change the query to
>
> select * from table a, table b where foobar_contains(a.foo, b.bar)

I thought about that (see above), but it would not use an index on
a.foo which a regular a.foo IN (x,y,z) does.

Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lua is faster than Fortran???

2010-07-08 Thread Felix
On Jul 4, 11:25 am, David Cournapeau  wrote:
> On Mon, Jul 5, 2010 at 12:00 AM, D'Arcy J.M. Cain  wrote:
> > I wish it was orders of magnitude faster for web development.  I'm just
> > saying that places where we need compiled language speed that Python
> > already has that in C.
>
> Well, I wish I did not have to use C, then :) For example, as a
> contributor to numpy, it bothers me at a fundamental level that so
> much of numpy is in C.

This is something that I have been thinking about recently. Python has
won quite a following in the scientific computing area, probably
especially because of great libraries such as numpy, scipy, pytables
etc. But it also seems python itself is falling further and further
behind in terms of performance and parallel processing abilities. Of
course all that can be fixed by writing C modules (e.g. with the help
of cython), but that weakens the case for using python in the first
place.
For an outsider it does not look like a solution to the GIL mess or a
true breakthrough for performance are around the corner (even though
there seem to be many different attempts at working around these
problems or helping with parts). Am I wrong? If not, what is the
perspective? Do we need to move on to the next language and lose all
the great libraries that have been built around python?

Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lua is faster than Fortran???

2010-07-09 Thread Felix
On Jul 9, 1:16 am, sturlamolden  wrote:
> On 9 Jul, 05:39, Felix  wrote:
> > For an outsider it does not look like a solution to the GIL mess or a
> > true breakthrough for performance are around the corner (even though
> > there seem to be many different attempts at working around these
> > problems or helping with parts). Am I wrong?
>
> Yes you are.
>
> We don't do CPU intensive work in "pure Python". We use Python to
> control C and Fortran libraries. That gives us the opportunity to
> multi-thread in C, release the GIL and multi-thread in Python, or
> both.

Yes, this setup works very well and is (as I said) probably the reason
python is so widely used in scientific computing these days.
However I find that I can almost never do everything with vector
operations, but have to iterate over data structures at some point.
And here the combination of CPython slowness and the GIL means either
bad performance or having to write this in C (with which cython helps
fortunately). If it were possible to write simple, parallel,
reasonably fast loops in (some subset of) python directly that would
certainly be a great advantage. Given the performance of other JITs it
sounds like it should be possible, but maybe python is too complex to
make this realistic.

Felix

PS: No need to convince me that MATLAB is not the solution.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Lua is faster than Fortran???

2010-07-09 Thread Felix
On Jul 9, 12:44 am, Stefan Behnel  wrote:
> Felix, 09.07.2010 05:39:
> Well, at least its "parallel processing abilities" are quite good actually.
> If you have really large computations, they usually run on more than one
> computer (not just more than one processor). So you can't really get around
> using something like MPI, in which case an additional threading layer is
> basically worthless, regardless of the language you use. For computations,
> threading keeps being highly overrated.

That is certainly true for large computations. But many smaller tasks
are run on single machines and it does make a difference if they take
1 minute per run or 10. The average number of cores per computer has
been increasing for quite a while now. It seems unfortunate to be
restricted to using only one of them at a time (for regular loops, not
mathematical vector operations). Python has made so many complicated
things easy, but I have not seen an easy way to parallelize a simple
loop on a multicore CPU without having to set up infrastructure and/or
incurring large overhead from many interpreters and marshalling data.
Just the fact that there is such a large number of attempts out there
to fix this suggests that something important is missing.
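
(The closest stdlib thing I know of is multiprocessing.Pool, which does work,
but it means one interpreter per core and pickling every item back and forth,
which is exactly the kind of infrastructure and overhead I mean. The function
here is just a made-up stand-in:)

import multiprocessing

def step(x):
    # stand-in for an irregular per-item computation that won't vectorize
    return x * x - 1

if __name__ == '__main__':
    pool = multiprocessing.Pool()              # one worker process per core
    results = pool.map(step, xrange(1000000))  # every item is pickled across
    pool.close()
    pool.join()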
-- 
http://mail.python.org/mailman/listinfo/python-list


Beginner problem, please help. Building a simple menu + lists , cannot print list

2021-10-11 Thread Felix Kjellström
Hello! Please see the link to the code I have uploaded to my account at 
replit.com

https://replit.com/join/lftxpszwrv-felixkjellstrom

Problem:

When you select the menu option "Add buyer", you can enter three values. See 
code line 5, "def Add_buyer ():".

Then, you use the arrow keys to select the menu option "See list of buyers". 
When you do that, the values you just entered should be printed. See code line 
23, "def See_list_of_buyers ():".

The problem is that the list does not get printed.
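
Simplified, the structure I am aiming for is roughly this (made-up field 
names; the real code is at the replit link above):

buyers = []                       # module-level list shared by both functions

def add_buyer():
    name = input("Name: ")
    item = input("Item: ")
    price = input("Price: ")
    buyers.append((name, item, price))

def see_list_of_buyers():
    for buyer in buyers:
        print(buyer)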


-- 
https://mail.python.org/mailman/listinfo/python-list


Trouble with win32com and MS Project

2005-10-19 Thread Felix Collins
Hi,

I'm trying to assign a resource to a task in MS Project by using the 
example from MSDN for VB...


"Use the Add method to add an Assignment object to the Assignments 
collection. The following example adds a resource identified by the 
number of 212 as a new assignment for the specified task.

ActiveProject.Tasks(1).Assignments.Add ResourceID:=212"

My code fragment for Python...

proj.Tasks(3).Assignments.Add(ResourceID=2)

but this doesn't work.  I get...

Error (-2147352567, 'Exception occurred.', (0, None, 'The argument value 
is not valid.', 'D:\\Program Files\\Microsoft 
Office\\OFFICE11\\VBAPJ.CHM', 131074, -2146827187), None)


Anyone got any ideas about how to attack this?

Cheers,
Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Trouble with win32com and MS Project

2005-10-19 Thread Felix Collins
Felix Collins wrote:
> Hi,
> 
> I'm trying to assign a resource to a task in MS Project by using the 
> example from MSDN for VB...
> 
> 
> "Use the Add method to add an Assignment object to the Assignments 
> collection. The following example adds a resource identified by the 
> number of 212 as a new assignment for the specified task.
> 
> ActiveProject.Tasks(1).Assignments.Add ResourceID:=212"
> 
> My code fragment for Python...
> 
> proj.Tasks(3).Assignments.Add(ResourceID=2)

I managed to get this to work by providing the TaskID which is supposed 
to be an optional argument.  I wonder if the win32com wrapper is 
stuffing this up.  Is late binding responsible perhaps?

So the code that works is...

proj.Tasks(3).Assignments.Add(TaskID= 3,ResourceID=2)

incidently this also works...

proj.Tasks(3).Assignments.Add(TaskID= 5,ResourceID=2)

which does seem a bit strange
-- 
http://mail.python.org/mailman/listinfo/python-list


how to pass attribute name via sys.argv

2005-01-27 Thread Felix Hebeler
Hi all,
I am doing some Python scripting for a while, but I'm not too deep into 
it yet. So I have a problem I can't solve.

I need to call an object attribute:
value = object.attrName[0]
The problem is that the attribute name can only be specified at runtime.
So what I have is something like
>>> attrName = sys.argv[1]
>>> attrName
'cellsize'
and I need to pass it on so I can call
value = object.cellsize[0]
Can this be done using Python?
Thanks for any hints
Cheers
Felix
--
http://mail.python.org/mailman/listinfo/python-list


Re: how to pass attribute name via sys.argv

2005-01-27 Thread Felix Hebeler
Wolfram Kraus wrote:
Felix Hebeler wrote:

I need to call an object attribute:
value = object.attrName[0]

Use getattr:
value = getattr(object, attrName)[0]


HTH,
Wolfram
Thanks so much!
Had I known earlier.
Looks so easy...
Now, why did I not find this in the online tutorial, the reference 
manual, or google?
Not that I didn't try... I mean, I would find 'getattr' if I searched, 
but if you don't know what you're looking for..

I find the reference manual extremely (== too) compact to look things up in.
A couple of colleagues and I agreed that it is much more difficult to 
find solutions and _useful_ tips for Python than e.g. for Java (where 
there's Javadoc, for example). The syntax documentation in the reference 
manual looks like something computer linguists might understand, but 
unfortunately not me. And Python code IS really easy to read, I agree, but 
what if I can't find out how to write it?
I'd appreciate any link to online resources or recommendations for books 
(english/german)!

Chances are I'm a silly/lazy/deprived/stupid bugger, but I try to think 
there's still hope!

again, thank you so much for your quick response (thanks Gilles Lenfant 
too!), I really DO like the Python community ;-)

Cheers
Felix
--
http://mail.python.org/mailman/listinfo/python-list


regex question

2005-06-25 Thread Felix Schwarz
Hi all,

I'm experiencing problems with a regular expression and I can't figure 
out which words I use when googling. I read the python documentation for 
the re module multiple times now but still no idea what I'm doing wrong.

What I want to do:
- Extract all digits (\d) in a string.
- Digits are separated by whitespace (\s)

What my program does:
- It extracts only the last digit.

Here is my program:
import re
line = ' 1  2   3'
regex = '^' + '(?:\s+(\d))*' + '$'
match = re.match(regex, line)
print "lastindex is: ",match.lastindex
print "matches: ",match.group(1)


Obviously I do not understand how (?:\s+(\d))* works in conjunction with 
  ^ and $.

Does anybody know how to transform this regex to get the result I want 
to have?

fs
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with regexp please

2005-07-22 Thread Felix Collins
Christopher Subich wrote:
> Scott David Daniels wrote:
Thanks to you both.  Wow!  what a quick response!

 >string.rsplit('.',1)[0]

Clever Python!  ;-)


Sorry, I mainly code in C so I'm not very Pythonic in my thinking.
Thanks again...

Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


Help with regexp please

2005-07-22 Thread Felix Collins
Hi,
  I'm not a regexp expert and had a bit of trouble with the following 
search.

I have an "outline number" system like

1
1.2
1.2.3
1.3
2
3
3.1

etc.

I want to parse an outline number and return the parent.

So for example...

parent("1.2.3.4") returns "1.2.3"

The only way I can figure is to do two searches feeding the output of 
the first into the input of the second.

Here is the code fragment...

m = re.compile(r'(\d+\.)+').match("1.2.3.4")
n = re.compile(r'\d+(\.\d+)+').match(m.string[m.start():m.end()])
parentoutlinenumber = n.string[n.start():n.end()]

>>> parentoutlinenumber
'1.2.3'

How do I get that into one regexp?

Thanks for any help...

Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


HELP:sorting list of outline numbers

2005-08-02 Thread Felix Collins
Hi All,
does anyone know any clever tricks to sort a list of outline numbers?

An outline number is a number of the form...1.2.3

they should be sorted in the following way...

1
1.1
1.2
1.12

python's alpha sort (by design) sorts them...

1
1.1
1.12
1.2

That's no good for me.
I'm planning on splitting the strings into multiple lists of ints and 
doing numerical sorts.

Thanks for any clever ideas that might make it easier.

Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: HELP:sorting list of outline numbers

2005-08-02 Thread Felix Collins
Robert Kern wrote:
> Felix Collins wrote:
> 
> Use the "key" keyword argument to list.sort().
> 
> In [1]: outline = ['1.12', '1.1', '1', '1.2']
> 
> In [2]: outline.sort(key=lambda x: map(int, x.split('.')))
> 
> In [3]: outline
> Out[3]: ['1', '1.1', '1.2', '1.12']
> 


Is this new in 2.4?  I have to use 2.3 as I'm working with Trac.

Traceback (most recent call last):
   File "", line 1, in -toplevel-
 keys.sort(key=lambda x: map(int, x.split('.')))
TypeError: sort() takes no keyword arguments


Thanks Scott and Robert for your quick help.  This list is amazing!

Regards,
Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: HELP:sorting list of outline numbers

2005-08-02 Thread Felix Collins
Felix Collins wrote:
> 
> Thanks Scott and Robert for your quick help.  This list is amazing!
> 
> Regards,
> Felix

Using Decorate, Sort, Undecorate...

works like a charm.
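
For the archive, the decorate-sort-undecorate version (no key= argument 
needed, so it runs on 2.3):

outline = ['1.12', '1.1', '1', '1.2']

decorated = [(map(int, s.split('.')), s) for s in outline]   # decorate
decorated.sort()                                             # sort on the integer keys
outline = [s for key, s in decorated]                        # undecorate

print outline     # ['1', '1.1', '1.2', '1.12']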

Thanks again.

Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: authentication project

2005-08-10 Thread Felix Schwarz
Hi,

for some of the "ground work" you could use the Python Web Modules 
(www.pythonweb.org).

fs
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Inline::Python, pyperl, etc.

2005-09-01 Thread Felix Schwarz

Eli Stevens (WG.c) wrote:
> PyPerl 1.0.1
> http://wiki.python.org/moin/PyPerl
> 
> The interest in these projects seems to have died off about 2001, 
> however.  That, or they simply haven't needed to be updated for the last 
> few Python versions.
> 
> I've bumped into some snags with pyperl (can't import perl2.so?  But 
> it's right there in site-packages/ !), and I'm wondering if it's bitrot 
> or a config error on my end.
> 
> Similarly, with Inline::Python I end up getting strings passed into my 
> python code when I expect integers (I posted about this on 
> [email protected], but things seem pretty dead over there).
> 
> Is anyone actually using any of this stuff?

I made some patches to pyperl and the unit testing suite some months 
ago. At least basic functionality is working again. Have a look at the 
zope-perl mailing list.
I don't know if I announced a single source tar.gz on this mailing list, 
but if you still have some interest in this I can mail you the package.

fs
-- 
http://mail.python.org/mailman/listinfo/python-list


Can __new__ prevent __init__ from being called?

2005-02-15 Thread Felix Wiemann
Sometimes (but not always) the __new__ method of one of my classes
returns an *existing* instance of the class.  However, when it does
that, the __init__ method of the existing instance is called
nonetheless, so that the instance is initialized a second time.  For
example, please consider the following class (a singleton in this case):

>>> class C(object):
...     instance = None
...     def __new__(cls):
...         if C.instance is None:
...             print 'Creating instance.'
...             C.instance = object.__new__(cls)
...             print 'Created.'
...         return cls.instance
...     def __init__(self):
...         print 'In init.'
...
>>> C()
Creating instance.
Created.
In init.
<__main__.C object at 0x4062526c>
>>> C()
In init.   <-- Here I want __init__ not to be executed.
<__main__.C object at 0x4062526c>
>>>

How can I prevent __init__ from being called on the already-initialized
object?

I do not want to have any code in the __init__ method which checks if
the instance is already initialized (like "if self.initialized: return"
at the beginning) because that would mean I'd have to insert this
checking code in the __init__ method of every subclass.

Is there an easier way than using a metaclass and writing a custom
__call__ method?

-- 
Felix Wiemann -- http://www.ososo.de/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Can __new__ prevent __init__ from being called?

2005-02-15 Thread Felix Wiemann
Steven Bethard wrote:

> Felix Wiemann wrote:
>
>> How can I prevent __init__ from being called on the
>> already-initialized object?
>
> The short answer: you can't:
>  http://www.python.org/2.2.3/descrintro.html#__new__

What a pity.  By the way, I'm just seeing that the web page says:

| If you return an existing object, the constructor call will still call
| its __init__ method. If you return an object of a different class, its
| __init__ method will be called.

However, the latter doesn't seem to be true, or am I missing something?

>>> class A(object):
...     def __init__(self):
...         print 'Init of A.'
...
>>> instance = A()
Init of A.
>>> class B(object):
...     def __new__(self):
...         return instance
...     def __init__(self):
...         print 'Init of B.'
...
>>> B()  # <- A's __init__ is *not* called.
<__main__.A object at 0x4062424c>
>>> instance = object.__new__(B)
>>> B()  # <- B's __init__ is called
Init of B.
<__main__.B object at 0x406243ec>

So there seems to be some type-checking in type.__call__.

> Note that in the Singleton example there, subclasses are told to
> override init, not __init__ for exactly this reason.

I see.

> py> class C(object):
> ...     class __metaclass__(type):
> ...         def __call__(cls, *args, **kwargs):
> ...             if cls.instance is None:
> ...                 print 'Creating instance'
> ...                 cls.instance = cls.__new__(cls, *args, **kwargs)
> ...                 print 'Created'
> ...                 cls.instance.__init__(*args, **kwargs)
> ...             return cls.instance

I didn't think of inlining the metaclass; that's really nice.

> [...] where all the work is done in the metaclass and you don't even
> define __new__.

Yeah, that's good.  I think I'll go that way.
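
For anyone finding this later, a self-contained sketch of that approach (my 
reading of the snippet above, with the __init__ call kept inside the creation 
branch so it runs only once; 'Singleton' is just a made-up name):

class Singleton(object):

    class __metaclass__(type):
        def __call__(cls, *args, **kwargs):
            if cls.instance is None:
                cls.instance = cls.__new__(cls, *args, **kwargs)
                cls.instance.__init__(*args, **kwargs)
            return cls.instance

    instance = None

    def __init__(self):
        print 'In init.'      # printed only on the first Singleton() call

With that, Singleton() always returns the same object and 'In init.' is 
printed only once.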

Thanks a lot!

-- 
Felix Wiemann -- http://www.ososo.de/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: sampling items from a nested list

2005-02-17 Thread Felix Wiemann
Steven Bethard wrote:

> py> data = [[('a', 0),
> ...  ('b', 1),
> ...  ('c', 2)],
> ...
> ... [('d', 2),
> ...  ('e', 0)],
> ...
> ... [('f', 0),
> ...  ('g', 2),
> ...  ('h', 1),
> ...  ('i', 0),
> ...  ('j', 0)]]
>
> I need to count the occurrences of each 'label' (the second item in
> each tuple) in all the items of all the sublists, and randomly remove
> some items until the number of occurrences of each 'label' is equal.

If the tuples are "heavier" than this, you can avoid comparing them
using the following algorithm (which probably still leaves some room for
optimization, e.g. simpler return_list building [or returning a
generator instead of a list], or directly building the sample set
instead of converting a random.sample to a set):

import random

def resample(data):
counts = {}
for i in data:
for j in i:
counts[j[1]] = counts.setdefault(j[1], 0) + 1

min_count = min(counts.itervalues())

# Same keys, so we can reuse the counts dictionary.
indices = counts
for label, count in counts.iteritems():
indices[label] = set(random.sample(xrange(count), min_count))

# Same thing with a generator expression, building a new dict (dunno
# what's faster).
#indices = dict(((label, set(random.sample(xrange(count), min_count)))
#for label, count in counts.iteritems()))

# "done" maps labels to the number of tuples (with that label) which
# have been added to return_list.
done = {}
return_list = []
for i in data:
return_list.append([])
for j in i:
if done.setdefault(j[1], 0) in indices[j[1]]:
return_list[-1].append(j)
done[j[1]] += 1
return return_list

-- 
Felix Wiemann -- http://www.ososo.de/
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: duplicate docstrings

2005-02-19 Thread Felix Wiemann
Steven Bethard wrote:

> class C(object):
>  def foo(self):
>  """Foo things"""
>  ...
>
> class D(object):
>  def foo(self):
>  """Foo things"""
>  ...
>
> It bothers me that I'm basically writing the same docstrings multiple
> times.  I guess what I really want to do is just write the docstrings
> for the interface I'm describing, and only supply docstrings in the
> classes when they need to differ from the interface docstrings.
>
> Is there a good way to do this?  If it's necessary, I can have C and D
> inherit from another class...

Use a common interface type and a metaclass which copies method
docstrings from base classes:

--
#!/usr/bin/env python

import inspect


class DocstringMetaclass(type):

"""Copy method docstrings."""

def __init__(cls, *args):
super(DocstringMetaclass, cls).__init__(*args)
for name, method in cls.__dict__.iteritems():
# method is a function, not a method, so we use isfunction().
if not inspect.isfunction(method) or method.__doc__ is not None:
continue
for c in cls.mro():
if hasattr(c, name):
m = getattr(c, name)
if inspect.ismethod(m) and m.__doc__ is not None:
method.__doc__ = m.__doc__
break


class Interface(object):

__metaclass__ = DocstringMetaclass

def foo(self):
"""Foo things"""

def bar(self):
"""Bar things"""

def baz(self):
"""Baz things in a C manner"""


class Implementation(Interface):

def foo(self):
pass

def bar(self):
pass

def baz(self):
pass


print Implementation.foo.__doc__
print Implementation.bar.__doc__
print Implementation.baz.__doc__
--

Output:

Foo things
Bar things
Baz things in a C manner

-- 
Felix Wiemann -- http://www.ososo.de/
-- 
http://mail.python.org/mailman/listinfo/python-list


instantiate new objects

2005-03-10 Thread Felix Steffenhagen
Hello @ all,
I'm a newbie in Python and have written a module for computations in a 
Bayesian network.

The module can be found at:
http://www.informatik.uni-freiburg.de/~steffenh/bayes.py
In this module i define four classes.
- cdp (conditional probability [distribution]) consisting of cdp_entry 
objects
- graph ( a graph class )
- bayesNet ( the bayesian network, a subclass of graph )

My problem is the following:
I have a global method test() in the module, where I want to test my 
implementation of the EM learning algorithm for learning parameters in 
a Bayesian network.
When I import this module in the python environment and then run the 
test method, everything seems to be ok, and the calculations that are 
hardcoded in this method seems to be correct.
In the test method i instantiate a bayesNet object for the further 
calculations.
The results that are computed and printed are probability distributions 
in the bayesian network, but this is not important for the problem.
The problem comes when I want to run the test method a second 
time. What happens is that the "same" expressions/computations listed
in the test method lead to different results, even though the commands are 
the same.
I can only imagine that this comes out of some objects that are not
destroyed and influence the new bayesNet object that is created there.

If there is a problem with the instantiations, it is in the test method 
which is at the end of the file bayes.py (see above)
Does someone see the problem there???

If you need some more information about what happens in the module 
please write me a mail, but i hope the comments are enough to understand 
the problem.

If you think this is too much off-topic we can discuss the problem out 
of the newsgroup.

thanks in advance,
Felix Steffenhagen
--
http://mail.python.org/mailman/listinfo/python-list


Re: instantiate new objects

2005-03-10 Thread Felix Steffenhagen
The default mutable parameters in the method bayes.generate_cpd(...)
were the problem. Thanks a lot for the hint and for this code snippet
to find such problems :-).
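
For anyone hitting the same thing, a minimal illustration of the pitfall with 
a made-up function (the default dict is created once, when the def statement 
runs, and is then shared between calls):

def generate(values={}):
    values['count'] = values.get('count', 0) + 1
    return values

print generate()    # {'count': 1}
print generate()    # {'count': 2}  <- state carried over from the first call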
Greetings,
Felix
Michael Spencer wrote:
Without looking in the slightest at what you are implementing or how, 
this implies that state is maintained between calls to test

The question is where/how is the state maintained?
[...]
3) as mutable default parameters?
See:
line 135: def __init__(self,V,E,p=[]):
line 150:def generate_cpd(self,node, rest, values={}):
line 360:def computeJPD(self, rest, values={}):
I guess one (or more) of these is the culprit
Before running:
 Python 2.4 (#60, Nov 30 2004, 11:49:19) [MSC v.1310 32 bit (Intel)] on 
win32
 Type "help", "copyright", "credits" or "license" for more information.
 >>> bayesNet.generate_cpd.func_defaults
 ({},)
 >>> bayesNet.__init__.func_defaults
 ([],)
 >>> bayesNet.computeJPD.func_defaults
 ({},)
 >>> test()
 V = {'a': [0, 1], 'b': [0, 1]}
 [snip results]

After test:
 >>> bayesNet.generate_cpd.func_defaults
 ({'a': 1, 'b': 1},)
 >>> bayesNet.__init__.func_defaults
 ([],)
 >>> bayesNet.computeJPD.func_defaults
 ({'a': 1, 'b': 1},)
 >>>
HTH
Michael

--
http://mail.python.org/mailman/listinfo/python-list


Re: Python best practices

2016-01-16 Thread Felix Almeida

Pylint is your friend: http://www.pylint.org/

If you already know a bit about the language then a good place to start 
is the Google Python Style Guide: 
https://google.github.io/styleguide/pyguide.html




On 15/01/16 08:19 PM, [email protected] wrote:

Are there any good resources on python best practices?  e.g., articles

Thanks,
Robert



--
https://mail.python.org/mailman/listinfo/python-list


Re: Hello.

2016-01-19 Thread Felix Almeida

Check your PATH environment variable.


On 16/01/16 04:41 PM, Hmood Js wrote:

cmd won't recognize python at all. I've checked several times, and I don't 
understand what's wrong.

Sent from Mail for Windows 10



--
https://mail.python.org/mailman/listinfo/python-list


Diagnose a segfault in ipython/readline

2014-03-05 Thread Felix Yan
Hi,

I'm getting a reproducible crash in ipython, but I am not sure which upstream 
it belongs to.

The crash happens with python 2.7.6/3.3.4, with readline 6.3.

Steps to reproduce:

- run ipython
- input some random char sequence that you have never entered before (like 
"ae3r0gka03k0k23"), don't press Enter
- press "Up", followed by any key

Backtrace pasted here: https://paste.xinu.at/cg7/
Downstream bug report on Arch Linux: https://bugs.archlinux.org/task/39144

Any help would be really appreciated!

Regards,
Felix Yan

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Diagnose a segfault in ipython/readline

2014-03-05 Thread Felix Yan
On Wednesday, March 05, 2014 20:15:31 Ned Deily wrote:
> The current
> assumption is that Python 2.7.6+, 3.3.5, and 3.4.0 have no problems with
> readline 6.3.

Thank you.

I just gave 3.4.0b2 a try with readline 6.3, and I still get the same 
segfault. Not sure whether that version is new enough though.

Also we reported the problem on readline mailing list first, so if they end up 
thinking there's something that python need to fix, I'll open a bug on the 
Python bug tracker.

Thanks again!

Regards,
Felix Yan

-- 
https://mail.python.org/mailman/listinfo/python-list


Thread._stop() behavior changed in Python 3.4

2014-03-17 Thread Felix Yan
Hi list,

I noticed a behavior change on Thread._stop() with Python 3.4.

I know the method is an undocumented "feature" itself, but some projects are 
using it, and now they fail.

A minimized snippet to reproduce:

#!/usr/bin/python
import threading

def stale():
    import time
    time.sleep(1000)

t = threading.Thread(target=stale)
t.start()
t._stop()

This works correctly with Python 3.3, the program exits immediately after 
t._stop() called, and no exception was raised.

But with Python 3.4, an AssertionError was raised:

Traceback (most recent call last):
  File "test.py", line 8, in <module>
    t._stop()
  File "/usr/lib/python3.4/threading.py", line 990, in _stop
    assert not lock.locked()
AssertionError

And the program still waits on the sleep().

I know trying to forcefully stop a thread is not really a good practice, but I 
still wonder if there's an easy way to get broken programs to work again, just 
in the way they currently are?

Downstream bug reports, for reference:

http://youtrack.jetbrains.com/issue/PY-12317
https://github.com/paramiko/paramiko/issues/286

Regards,
Felix Yan

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Thread._stop() behavior changed in Python 3.4

2014-03-17 Thread Felix Yan
On Monday, March 17, 2014 17:33:09 Antoine Pitrou wrote:
> Hi,
> 
> Felix Yan  gmail.com> writes:
> > A minimized snippet to reproduce:
> > 
> > #!/usr/bin/python
> > import threading
> > 
> > def stale():
> > import time
> > time.sleep(1000)
> > 
> > t = threading.Thread(target=stale)
> > t.start()
> > t._stop()
> > 
> > This works correctly with Python 3.3, the program exits immediately after
> > t._stop() called, and no exception was raised.
> 
> Basically what you are doing is abusing a private method because you want
> to make the thread daemonic after it was started (a daemonic thread is
> not waited for at interpreter exit). Please do note one thing: the _stop()
> method does *not* actually stop the thread; it just marks it stopped, but
> the underlying OS thread continues to run (and may indeed continue to
> execute Python code until the interpreter exits).
> 
> So the obvious "solution" here is to mark the thread daemonic before
> starting it.
> 
> A possible related improvement would be to relax the constraints on
> Thread.daemon to allow setting the flag on a running thread?
> 
> That said, daemon threads (or abuse of the _stop() method as you did) can
> lead to instabilities and oddities as some code will continue executing
> while the interpreter starts shutting down. This has been improved but
> perhaps not totally solved in recent interpreter versions. A fully correct
> solution would involve gracefully telling the thread to shut down, via a
> boolean flag, an Event, a file descriptor or any other means.
> 
> (if you are interested in this, please open a new issue at
> http://bugs.python.org)
> 
> Regards
> 
> Antoine.

Thanks for the detailed explanation!

Actually I didn't use _stop() myself either, but noticed the problem when 
trying to build paramiko against Python 3.4.

Thanks especially for the tip that the threads may still be running - actually 
I hadn't even thought about that part!

For now I just skipped the test suites for paramiko to get the packaging done 
(the test suites themselves pass without a problem; it was just the test 
script doing something wrong). I'll try to follow up on the issue for paramiko :)
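
For reference, a minimal sketch of the graceful-shutdown pattern Antoine 
describes (a flag the worker checks), instead of reaching for _stop(); the 
names here are made up:

import threading
import time

stop_requested = threading.Event()

def worker():
    while not stop_requested.is_set():
        # do one bounded unit of work, then re-check the flag
        time.sleep(0.1)

t = threading.Thread(target=worker)
t.start()
# ... later, instead of t._stop():
stop_requested.set()
t.join()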

Regards,
Felix Yan

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Thread._stop() behavior changed in Python 3.4

2014-03-17 Thread Felix Yan
On Tuesday, March 18, 2014 05:08:20 Chris Angelico wrote:
> I've posted comments on both the issues you linked to. My guess based
> on a cursory look at paramiko is that it's a test suite watchdog,
> which would be much better implemented with a subprocess; I may be
> wrong, though. In any case, if it's just a tests problem, you should
> theoretically be able to ignore it.
> 
> ChrisA

I was just trying to comment and see yours... Thanks a lot! :D

Regards,
Felix Yan

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Opening Multiple files at one time

2015-04-22 Thread Felix Dietrich
[email protected] writes:

> Dear Group,
>
> I am trying to open multiple files at one time. 
> I am trying to do it as,
>
>  for item in  [ "one", "two", "three" ]:
>f = open (item + "world.txt", "w")
>f.close()
>
> This is fine. But I was looking if I do not know the number of
> text files I would create beforehand, so not trying xrange option
> also.
>
> And if in every run of the code if the name of the text files have
> to created on its own.
>
> Is there a solution for this? 

I have trouble comprehending your question so I am going to guess a bit
– feel free to clarify your problem.

If you want to repeat a set of commands not for a certain number of
items but based on a condition take a look at the *while-loop*.  The
basic loop construct looks like this:

  while condition:
do_stuff()

Now the problem: *While* your program still has stuff to do (you have to
come up with an appropriate condition) you want to write output to files.
The filenames will be a concatenation of a number counting the already
created files and a predefined string (say "world.txt").

(Is this a correct description of your problem? Again: feel free to
clarify.)

The following lines might get you closer to a solution:


  j = 1
  continue_to_open_files = True
  while continue_to_open_files:
with open("%03iworld" % j, "w") as f:
  f.write("some content")
if some_condition:
  continue_to_open_files = False
j += 1


Alternatively, /itertools.count/ allows using the for-loop:

  import itertools
  for j in itertools.count(1):
with open("%03iworld" % j, "w") as f:
  f.write("some content")
if some_condition:
  break


Translating numbers represented by digits into words is another problem.

--
Felix Dietrich
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Issuing a sqilte query, picking a random result, and copying to the system clipboard

2015-06-23 Thread Felix Yan
On 06/22/2015 07:51 PM, Tim Chase wrote:
> On Win32, you'd need the Win32 add-on libraries to shove things onto
> the clipboard, while under X, you'd need other facilities (either
> using Tkinter or piping to something like xclip(1)), and yet another
> way of doing things on MacOS.

Or you may want an existing library for all these. For example, pyperclip:

>>> import pyperclip
>>> pyperclip.copy('The text to be copied to the clipboard.')

FYI: https://pypi.python.org/pypi/pyperclip

-- 
Regards,
Felix Yan



-- 
https://mail.python.org/mailman/listinfo/python-list


pyqt scrollview layout

2006-07-21 Thread Felix Steffenhagen
Hello,

I have a problem with updating contents in a qscrollview.
I've implemented two widgets (PremiseInput and PremiseList).
You can find the source code under
http://www.informatik.uni-freiburg.de/~steffenh/premiseinput.{html|py} and
http://www.informatik.uni-freiburg.de/~steffenh/premiselist.{html|py}

The PremiseInput is a widget containing two QLineEdit's and a QComboBox.
This widget should be showed in the PremiseList widget, where I want
the functionality to dynamically add new PremiseInputs and the widths
of them should be adjusted to the width of the QScrollView, contained
in the PremiseList.
And there is my problem.

The PremiseInput has a sizeHint of 333. When I show the standard PremiseList,
the viewport of the scrollview has this size. But when I resize the PremiseList
widget to a smaller or larger width and add a new PremiseInput, the size is
adjusted back to the sizeHint and I'm not able to resize the QScrollView content,
which is too small for larger window sizes and too high for smaller ones.

My resizeEvent method works, so that widget resizes also resize the QScrollView
and its content, but when I want to add the same functionality to the
addPremise() method, nothing happens.

Does anyone know where the problem lies and perhaps has a solution for me?

regards,
Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


AI library

2006-12-15 Thread Felix Benner
I thought about an AI library for python. This is the possible 
structure I came up with. Are there any thoughts about it?

ailib/

search.py

class State:
    "represents an immutable state of a problem"
    def __str__(self):
        pass

    def __hash__(self):
        pass

class StateSpace:
    "represents a traversable collection of states"
    def start(self):
        "returns a list of initial states"
        pass

    def follow(self, state):
        "returns a list of states that follow after state"
        pass

    def search(self, order=None):
        """returns an iterator over all states according to order.
        order can be a constant (depth, breadth, incdepth) or a
        heuristic. If None, some default search will be used."""
        pass

    def __iter__(self):
        return self.search()

class Heuristic:
    def follow(self, state):
        "returns a list of states probably closer to the solution than state"
        pass

class Graph(StateSpace):
    def __init__(self, paths):
        "paths is a set of tuples (nodeFrom, nodeTo, weight)"
        pass

    def a_star(self, nodeFrom, nodeTo):
        """searches the shortest path (minimal weight) from
        nodeFrom to nodeTo."""
        pass

plan.py

class State(search.State):
    "represents a state of the world that can be changed through action."
    pass

class Agent(search.StateSpace):
    """For any given state an Agent has a set of possible actions
    that transform the initial state into a subsequent state."""

    def perceive(self):
        "returns the state of the world the agent perceives to be in."
        pass

    def actions(self, state):
        "returns an iterator over actions available in the given state."
        pass

    def __iter__(self):
        return self.actions(self.perceive())

    def plan(self, state, currentState=None):
        """returns a sequence of actions that are supposed to
        transform the currently perceived state of the world
        into the desired state."""
        if currentState is None:
            currentState = self.perceive()
        else:
            pass

logic.py

class Symbol:
    "a Symbol that can be bound to a value or unbound."
    def bound(self):
        "returns true if the symbol is bound."

    def value(self):
        "if bound, returns the value."

class Function:
    "transforms a list of symbols into another symbol."
    def __init__(self, *symbols):
        self.symbols = symbols

    def __call__(self):
        "returns some symbol."
        pass

class Predicate:
    "either is or is not valid for a given list of symbols"
    def __init__(self, *symbols):
        self.symbols = symbols

    def __call__(self):
        "returns true or false"
        pass

class Junctor:
    """a relation between predicates deriving a truth value from the
    truth values of the predicates"""
    def __init__(self, *predicates):
        self.predicates = predicates

    def __call__(self):
        "returns some truth value"
        pass

class Quantifier:
    "somehow binds symbols."
    pass

class Clause:
    "A quantified junctor."
    pass

class Axioms:
    "A list of clauses"
    def consistent(self):
        "returns true if the list of axioms is consistent"
        pass

    def valid(self, clause, bind=True):
        """returns true if the clause is consistent with the set of axioms.
        If bind is true, any unbound symbol will be bound to a value if possible."""
        pass

statistics.py

class Entity:
    "bearer of statistically relevant features."
    pass

class Set:
    "set of entities. defines usual statistical functions"
    def __iter__(self):
        "iterate over all entities."

    def avg(self, feature):
        "returns the average of the given feature of all entities."
        pass
    # likewise other functions

    def entropy(self, a, b):
        "returns the level of randomness (0..1) between the given features."
        pass

    def correlates(self, a, b, e=0.5):
        return self.entropy(a, b) < e

    def mine(self):
        pass

Re: MySQLdb, lots of columns and newb-ness

2006-12-19 Thread Felix Benner
Andrew Sackville-West wrote:

> I have an ascii data dump from a POS system that has 131 fields in a
> single column in a flat file. I can easily open the file, read in the
> data and assemble it into various formats. okay. what I *want* to do
> is insert each of these fields into a mysql database that has 132
> columns that correspond to the 131 fields in the ascii file (plus one
> for the date).
> 
> I can successfully connect to mysql and do stuff to my tables my
> specific problem is how to efficiently put those 132 fields into the
> thing. All I have been able to figure out is really ugly stuff like:
> build the mysql statement out of various pieces with appropriate
> commas and quote included. stuff like (not tested)


Haven't tested it, but maybe
http://dev.mysql.com/doc/refman/5.0/en/load-data.html is your friend.
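
A minimal sketch of what that could look like from MySQLdb, assuming the 131 
fields plus the date have already been re-assembled into one tab-separated 
line per record (the table name and connection details are made up, and 
depending on the MySQLdb/MySQL versions, LOCAL INFILE may need to be enabled):

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='pos', passwd='secret',
                       db='posdb', local_infile=1)
cur = conn.cursor()
# one row per line: the date plus the 131 POS fields, tab separated
cur.execute("LOAD DATA LOCAL INFILE 'pos_dump.tsv' "
            "INTO TABLE pos_data FIELDS TERMINATED BY '\\t'")
conn.commit()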
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Fall of Roman Empire

2006-12-20 Thread Felix Benner
Thomas Ploch wrote:
>> Ben Finney wrote:
>>> "John Machin" <[EMAIL PROTECTED]> writes:
>>>
 Ben Finney wrote:

>  \  "...one of the main causes of the fall of the Roman Empire was |
>   `\that, lacking zero, they had no way to indicate successful |
> _o__)   termination of their C programs."  -- Robert Firth |
 An amusing .sig, but it doesn't address the root cause: As they had no
 way of testing for the end of a string, in many cases successful
 termination of their C programs would have been unlikely.
>>> Yet historically proven: the 'imperium' process they were running
>>> terminated many centuries ago.
>>>
>>> Or did it fork and exec a different process?
>>>
> 
> I rather stay with the metaphysics:
> 
> 
> #include "metaphysics.h"
> 
> static metaPower God;
> 
> universe *makeUniverse(metaPower God)
> {
> if (!God) {
> printf("Oops, no God available at the moment.Try again later!");
> return NULL;
> }
> 
> universe *everything;
> 
> if (!(everything = malloc(sizeof(universe)))) {
> God.mood = REALLY_BORED;
> printf("God has no time to create a universe.");
> return NULL;
> } else {
> return universe;
> }
> }
> 
> 
>  :-)
> 
> Sorry, somehow had to do this. Please slap me (i like it, don't worry)
> if it's totally stupid
> 
> 

It's totally stupid! You forgot the main function! (not to mention you
returned universe instead of everything)

static int main(int argc, char **argv) {
char *god_name;
if (argc)
god_name = argv[1];
else
god_name = "YHWH";
metaPower God = getGodByName(god_name);
universe *everything = makeUniverse(God);
while (simulatePhysics(everything));
return 0;
}
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: [ANN] Py++ - 0.8.5

2006-12-21 Thread Felix Benner
Roman Yakovenko wrote:
> Hello!
> 
> I'm pleased to announce the 0.8.5 release of Py++.

I'm just wondering why there is a comp.lang.python.announce newsgroup.
Could it be for making announcements or would that be too obvious?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Question on regex

2006-12-23 Thread Felix Benner
Prabhu Gurumurthy wrote:

> to fix this problem, i used negative lookahead with ip pattern:
> so the ip pattern now changes to:
> \d{1,3}(\.\d{1,3}){3}(?!/\d+)
> 
> now the problem is  10.150.100.0 works fine, 10.100.4.64 subnet gets
> matched with ip pattern with the following result:
> 
> 10.100.4.6
> 
> Is there a workaround for this or what should change in ip regex pattern.
> 

I think what you want is that neither /\d+ nor another digit nor a '.' follows:
\d{1,3}(\.\d{1,3}){3}(?!(/\d)|\d|\.)
This way 10.0.0.1234 won't be recognized as an IP. Neither will one that is
immediately followed by a '.', which could be a problem if an IP is at the end
of a sentence, so you might want to omit that part.
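
A quick check in the interpreter (the sample text is made up from your
examples):

import re

ip_re = re.compile(r'\d{1,3}(\.\d{1,3}){3}(?!(/\d)|\d|\.)')

text = "host 10.150.100.0, bogus 10.0.0.1234, subnet 10.100.4.64/26"
print [m.group(0) for m in ip_re.finditer(text)]
# -> ['10.150.100.0']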
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Help with small program

2006-12-24 Thread Felix Benner
smartbei wrote:
> Hello, I am a newbie with python, though I am having a lot of fun using
> it. Here is one of the exercises I am trying to complete:
> the program is supposed to find the coin combination so that with 10
> coins you can reach a certain amount, taken as a parameter. Here is the
> current program:
> 
> coins = (100, 10, 5, 1, 0.5)
> anslist = []
> def bar(fin, hist = {100:0, 10:0, 5:0, 1:0, 0.5:0}):
>     s = sum(x*hist[x] for x in hist)
>     l = sum(hist.values())
>     if s < fin and l < 10:
>         for c in coins:
>             if (s+c) <= fin:
>                 hist[c] += 1
>                 bar(fin, hist)
>                 hist[c] -= 1
>     elif l==10 and s==fin and not hist in anslist:
>         #p1
>         anslist.append(hist)
> 
> bar(50)
> print anslist
> 
> The problem is that if I run it, anslist prints as [{0.5: 0, 1: 0, 10:
> 0, 100: 0, 5: 0}], which doesn't even add up to 50. When I check how
> many times the program has reached the #p1 by sticking a print there,
> it only reaches it once, and it comes out correct. why is it that this
> result is replaced by the incorrect final one?
> 

hist is stored in anslist as a pointer only, therefore the hist[c] -= 1
operates on the same dict as is stored in anslist. Try the following
in the python interpreter:

a = { 'key' : 1 }
l = [a]
l[0]['key'] -= 1
a

instead use:

anslist.append(dict(hist.items()))

which will copy the dict.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: problem with PIPE

2006-12-24 Thread Felix Benner
Dhika Cikul wrote:
> Hello,
> 
> I'm new to Python; I don't know if my subject line is right. I have a
> problem with my script. I want to change a password with the passwd program
> from Python, without the user typing anything at the keyboard. I found a
> tutorial saying that I must use a pipe for this. And this is my code:
> 
> [code]
> 
>   1.
>   2. #!/usr/bin/python
>   3.
>   4. import os
>   5.
>   6. COMMAND = 'passwd'
>   7. PASSWD  = 'mypassword'
>   8.
>   9. # open a pipe to passwd program and
>  10. # write the data to the pipe
>  11. p = os.popen("%s" % COMMAND, 'w')
>  12. p.write(PASSWD)
>  13. p.write('\n')
>  14. p.write(PASSWD)
>  15. p.close()
>  16.
> [/code]
> 
> 
> but i got this error :
> 
> [output]
>[EMAIL PROTECTED] cp]$ ./password
>Changing password for user cp.
>Changing password for cp
>(current) UNIX password: passwd: Authentication token manipulation error
> [/output]
> 
> Anyone can help me how to write to pipe.. i try several method, and
> always fail.
> 
> Thank's

I guess the passwd program doesn't allow changing passwords from a pipe
since it is a potential security hole.
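
If you really need to set a password non-interactively, one possible route
(a rough sketch only; it assumes a Linux system where chpasswd(8) is
available and the script runs as root) is:

import subprocess

def set_password(user, password):
    # chpasswd reads "user:password" lines from stdin
    p = subprocess.Popen(['chpasswd'], stdin=subprocess.PIPE,
                         universal_newlines=True)
    p.communicate('%s:%s\n' % (user, password))
    return p.returncode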
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: DOS, UNIX and tabs

2006-12-28 Thread Felix Benner
Sebastian 'lunar' Wiesner schrieb:
> Ben <[EMAIL PROTECTED]> typed
> 
>> I have a python script on a windows system that runs fine. Both use
>> tabs to indent sections of the code.
> 
> Just a tip for you: In python you never use tabs for indentation. The
> python style guide [1] recommends four spaces per indentation level.
> 
> [1] http://www.python.org/dev/peps/pep-0008/
> 

I like using tabs. And the style guide doesn't give a reason why one
shouldn't and neither does the thread
http://www.python.org/search/hypermail/python-1994q2/0198.html in the
archive.
So what's the point in typing four spaces for indentation instead of one
tab?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: DOS, UNIX and tabs

2006-12-28 Thread Felix Benner
Christophe Cavalaria schrieb:
> Steven D'Aprano wrote:

> You gave the reason in your post : because other people who are using
> software that doesn't understand tabs as YOU expect them to have problems
> with your code.
> 
> Tabs aren't a problem at all as long as nobody else than you edit your code.

Sorry, but that's a silly argument. With the same argument we should
stop using Python altogether, since the usual MBA will understand
nothing but VBA.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Threads Dying?

2007-06-28 Thread felix seltzer

Are you using pygtk as well?
How are you using your threads? (Just out of curiosity about the issue.)

-felix

On 6/28/07, Gabriel Genellina <[EMAIL PROTECTED]> wrote:


En Thu, 28 Jun 2007 15:12:53 -0300, Robert Rawlins - Think Blue
<[EMAIL PROTECTED]> escribió:

> I've got an application that seems to be a little bit unstable and
> freezes
> quite a bit, and I'm suspecting it's something in one of my threads
> that's
> causing the problem, when does a thread die?

After the run() method finishes, either normally or because an unhandled
exception happened.

> And how can I be sure that its
> dyeing when its mean to be?

I'm not sure what you are asking - you can check periodically inside run()
for some condition (an Event object, a special object placed on a Queue,
even a global variable in the simplest case) and exit when the condition
is met.
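
A minimal sketch of the Event-based variant (the class and attribute names
here are just illustrative):

import threading
import time

class Worker(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        self.stop_event = threading.Event()

    def run(self):
        while not self.stop_event.is_set():   # check the condition periodically
            time.sleep(0.1)                   # stand-in for the real work
        # falling off the end of run() is what lets the thread die

w = Worker()
w.start()
w.stop_event.set()   # ask the thread to finish
w.join()             # returns once run() has completed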

--
Gabriel Genellina
--
http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Help Needed in WxPython

2007-06-28 Thread felix seltzer

If you use pygtk, the notebook widget could do that in a few lines of
code (something like the sketch below), but I'm not sure about wxPython.
Note that if you're using *nix of some sort, gtk should work fine, but under
Windows some people report issues.
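
A rough sketch (PyGTK 2.x API; the label widgets are just placeholders for
whatever each tab should actually show):

import gtk

win = gtk.Window()
win.connect('destroy', gtk.main_quit)

notebook = gtk.Notebook()
for title in ('Page 1', 'Page 2'):
    page = gtk.Label('content of %s' % title)     # stand-in for the real page
    notebook.append_page(page, gtk.Label(title))  # second arg is the tab label

win.add(notebook)
win.show_all()
gtk.main()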
-felix

On 6/28/07, senthil arasu <[EMAIL PROTECTED]> wrote:


Hi,
Currently Iam integrating GUI Framework in Python.
As per design design,I need to use tab buttons to launch different HTML
pages in same frame(without launching seperate window ). I have already
tried with webbrowser class & WxPython GUI kit. Iam unable to get the
expected result.

I am wanted to be clear..!whether python supports my design or i need to
go for some other option

I need somebody to help me.

thanks

--
http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Shed Skin Python-to-C++ Compiler 0.0.21, Help needed

2007-06-29 Thread felix seltzer

does this project include support for pygtk type GUI's?

On 6/29/07, Mark Dufour <[EMAIL PROTECTED]> wrote:


Hi all,

I have just released version 0.0.22 of Shed Skin, an experimental
Python-to-C++ compiler. Among other things, it has the exciting new
feature of being able to generate (simple, for now) extension modules,
so it's much easier to compile parts of a program and use them (by
just importing them). Here's the complete changelog:

-support for generating simple extension modules (linux/windows; see
README)
-dos text format fix (long overdue)
-improved detection of dynamic types (avoid hanging on them)
-improved overloading (__nonzero__, __int__, __abs__ etc.)
-add str(ing).{capitalize, capwords, swapcase, center, ato*)
-fix string.maketrans
-several other minor bug fixes

For more details about Shed Skin and a collection of 27 programs, at a
total of about 7,000 lines, that it can compile (resulting in an
average speedup of about 39 times over CPython and 11 times over Psyco
on my computer), please visit the homepage at:

http://mark.dufour.googlepages.com

I could really use some help in pushing Shed Skin forward. Please try
the latest release and send in bug reports, or join the project via
the homepage.


Thanks,
Mark Dufour.


On 3/31/07, Mark Dufour <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> I have recently released version 0.0.20 and 0.0.21 of Shed Skin, an
> optimizing Python-to-C++ compiler. Shed Skin allows for translation of
> pure (unmodified), implicitly statically typed Python programs into
> optimized C++, and hence, highly optimized machine language. Besides
> many bug fixes and optimizations, these releases add the following
> changes:
>
> -support for 'bisect', 'collections.deque' and 'string.maketrans'
> -improved 'copy' support
> -support for 'try, else' construction
> -improved error checking for dynamic types
> -printing of floats is now much closer to CPython
>
> For more details about Shed Skin and a collection of 27 programs, at a
> total of about 7,000 lines, that it can compile (resulting in an
> average speedup of about 39 times over CPython and 11 times over Psyco
> on my computer), please visit the homepage at:
>
> http://mark.dufour.googlepages.com
>
> I could really use more help it pushing Shed Skin further. Simple ways
> to help out, but that can save me lots of time, are to find smallish
> code fragments that Shed Skin currently breaks on, and to help
> improve/optimize the (C++) builtins and core libraries. I'm also
> hoping someone else would like to deal with integration with CPython
> (so Shed Skin can generate extension modules, and it becomes easier to
> use 'arbitrary' external CPython modules such as 're' and 'pygame'.)
> Finally, there may be some interesting Master's thesis subjects in
> improving Shed Skin, such as transforming heap allocation into stack-
> and static preallocation, where possible, to bring performance even
> closer to manual C++. Please let me know if you are interested in
> helping out, and/or join the Shed Skin mailing list.
>
>
> Thanks!
> Mark Dufour.
> --
> "One of my most productive days was throwing away 1000 lines of code"
> - Ken Thompson
>

Mark Dufour.
--
"One of my most productive days was throwing away 1000 lines of code"
- Ken Thompson
--
http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list

Re: Help needed in PyGTk

2007-06-29 Thread felix seltzer

Try the pygtk mailing list,
"pygtk" <[EMAIL PROTECTED]>
They will probably be able to help you more.

On 6/29/07, senthil arasu <[EMAIL PROTECTED]> wrote:


Hi,
I am trying to render HTML in PyGTK widget but iam not getting the
expected result.
I would like to know whether PyGTK supports HTML rendering feature or not.

Please help me to solve this issue.

thanks


--
http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list

Re: HTML Render Support in PyGTK

2007-06-29 Thread felix seltzer

http://directory.fsf.org/webauth/htmlpreproc/gtkhtml.html

might help. Just like Thomas said, though, more info on what you're doing/have
done would help us help you.

On 6/29/07, Thomas Jollans <[EMAIL PROTECTED]> wrote:


There was no need to re-ask so soon.

On Friday 29 June 2007, senthil arasu wrote:
> Hi,
> I am trying to render HTML in PyGTK widget but iam not getting the
expected
   ^^
What have you tried so far ?
> result.
> I would like to know whether PyGTK supports HTML rendering feature or
not.

I believe GTK+2 has an HTML renderer, I don't know whether it's included
in PyGTK by default etc.

--
  Regards,   Thomas Jollans
GPG key: 0xF421434B may be found on various keyservers, eg pgp.mit.edu
Hacker key :
v4sw6+8Yhw4/5ln3pr5Ock2ma2u7Lw2Nl7Di2e2t3/4TMb6HOPTen5/6g5OPa1XsMr9p-7/-6

--
http://mail.python.org/mailman/listinfo/python-list


-- 
http://mail.python.org/mailman/listinfo/python-list

good matlab interface

2007-06-29 Thread felix seltzer

Does anyone know of a good Matlab interface?
I would just use scipy or numpy, but I also need to use
the Matlab neural network functions.  I have tried PyMat, but am having
a hard time getting it to install correctly.

For that matter, a good neural net module for Python would
work just as well as a good Matlab interface.

Any suggestions?

-felix
-- 
http://mail.python.org/mailman/listinfo/python-list

Re: good matlab interface

2007-07-02 Thread felix seltzer

My problems were more with scipy, which I needed for pymat.
scipy gives two import errors (but still imports), and then pymat
can't find the libraries that scipy provides.

Oh well...
I think I can just modify it to work with numpy until
I can sort out the errors.

thanks,
-felix


On 7/2/07, Brian Blais <[EMAIL PROTECTED]> wrote:


On Jun 30, 2007, at 2:31 AM, felix seltzer wrote:

> Does any one know of a good matlab interface?
> I would just use scipy or numpy, but i also need to use
> the matlab neural network functions.  I have tried PyMat, but am
> having
> a hard time getting it to install correctly.
>

What problems are you having installing?  I had one problem with the
terrible matlab license server, which I had to solve by making site-
packages world writable, installing pymat as a user, and then
removing the world writable flag.  Root just didn't have access to
matlab on my machine.  :P



bb


-- 
http://mail.python.org/mailman/listinfo/python-list

Designing superclasses so inherited methods return objects with same type as the instance.

2008-11-19 Thread Felix T.
I have a class called Interval(types.ObjectType) that is supposed to
mimic closed mathematical intervals. Right now, it has a lot of
methods like this:

def __add__(self, other):
    if type(other) in Numerical:
        return Interval(self.lower_bound + other, self.upper_bound + other)
    else:
        return Interval(self.lower_bound + other.lower_bound,
                        self.upper_bound + other.upper_bound)

that return new objects of the same type.

The problem is that if this method is called by a subclass like

class HalfOpen(Interval):
  #new comparison methods
  ...
it returns an object with Interval (not HalfOpen) type.


I either have to redefine methods like __add__ so that they return
objects of the right type (even though the logic is the same) or find
some way to redefine Interval's methods so they are more flexible.
Right now, I am looking at:

def __add__(self, other):
    if type(other) in Numerical:
        return self.__class__(self.lower_bound + other, self.upper_bound + other)
    else:
        return self.__class__(self.lower_bound + other.lower_bound,
                              self.upper_bound + other.upper_bound)

Is there a standard way to do this, or a better one?
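
Just to be explicit about the behaviour I'm after, here is a stripped-down
sketch of what the self.__class__ version gives (this toy Interval is
illustrative, not the real class):

class Interval(object):
    def __init__(self, lower_bound, upper_bound):
        self.lower_bound, self.upper_bound = lower_bound, upper_bound

    def __add__(self, other):
        # self.__class__ is HalfOpen when called on a HalfOpen instance
        return self.__class__(self.lower_bound + other, self.upper_bound + other)

class HalfOpen(Interval):
    pass

print(type(HalfOpen(0, 1) + 2).__name__)   # prints 'HalfOpen'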

Thanks in advance,
Felix
--
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing.Queue deadlock

2009-10-07 Thread Felix Schlesinger
On Oct 7, 12:16 pm, MRAB  wrote:
> Felix wrote:
> > Hello,
>
> > I keep running into a deadlock in a fairly simple parallel script
> > using Multiprocessing.Queue for sending tasks and receiving results.

> > It seems to be the workers cannot finish pusing buffered results into
> > the output queue when calling 'results.join_thread' while terminating,
> > but why is that? I tried calling 'results.close()' before joining the
> > workers in the main process, but it does not make a difference.
>
> > Is there something I am understanding wrong about the interface? Is
> > there a much better way to do what I am trying to do above?

> You can therefore get into a deadlock where:
>
> * Process A won't read from the queue until it has joined process B.
> * The join won't succeed until process B has terminated.
> * Process B won't terminate until it has finished writing to the queue.
> * Process B can't finish writing to the queue because it's full.
> * The queue is full because process A isn't reading from it.

I thought about that, but it seemed unlikely since I am not generating
too many results (a few thousand small tuples of int). Also I tried
to deal with it by reading as many results from the queue as were
available, then joining the workers, then reading again. This did not
work reliably, maybe because the queue would fill up again while I
was joining the individual workers.

In any case the core of the problem is the following:

A bunch of workers push an unknown number of results into a queue. The
main process needs to collect all those results.

What is the right way to implement that with multiprocessing? I tried
joining the workers and then reading everything available, but
obviously (see above) that does not seem to work.

A dirty trick that works would be reading all results slowly and
assuming no more results are coming after the queue is empty, but
this is obviously unstable:

while 1:
    try:
        res.append(results.get(True, LONG_TIMEOUT))
    except Empty:
        break

It could be made somewhat better by joining the workers afterwards and
reading again, but another deadlock might happen.

What I am doing now is having the workers push a "DONE" flag on the
result queue when they end and reading results until all DONE flags
have arrived:


def work(tasks, results):
    for task in iter(tasks.get, 'STOP'):
        res = calc(*task)
        if res:
            results.put(res)
            tasks.put((task[0], res[1]))
            tasks.put((res[0], task[1]))
        tasks.task_done()
    results.put('DONE')

And in main:

res = []
for i in range(opts.nprocs):
    res += list(iter(results.get, 'DONE'))

for p in procs:
    p.join()

This seems to work, and as long as workers push data to the results
queue in the same order as the puts happen in each process (is this
guaranteed?) it should be stable. But is it the best/easiest way to do
this?
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Multiprocessing.Queue deadlock

2009-10-08 Thread Felix Schlesinger
On Oct 8, 3:21 am, Dennis Lee Bieber  wrote:
> On Wed, 7 Oct 2009 10:24:08 -0700 (PDT), Felix Schlesinger
> > A bunch of workers push an unknown number of results into a queue. The
> > main process needs to collect all those results.
>
> > What is the right way to implement that with multiprocessing? I tried
> > joining the workers and then reading everything available, but
> > obviously (see above) that does not seem to work.
>
>         The cleanest solution that I can think of is to have the processes
> return a special token which identifies WHICH process is terminating, so
> you can join just that one, and go back and continue looking for data
> from the others.

I implemented the lazy version of this, namely waiting until all
workers signal that they are done (reading results until I encounter
the right number of 'done' tokens, and only after that joining all
workers). I think this is stable, but I am not an expert on the
issue.
Putting 'done' is always the last queue.put call a worker makes.
Does that guarantee that the worker will not block after 'done' is read by
the main process?
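
For reference, a minimal self-contained sketch of that pattern (the squaring
"workload" and the process/task counts are made up):

import multiprocessing as mp

def worker(tasks, results):
    for task in iter(tasks.get, 'STOP'):
        results.put(task * task)      # stand-in for the real calculation
    results.put('DONE')               # the last put this worker ever makes

if __name__ == '__main__':
    nprocs = 4
    tasks, results = mp.Queue(), mp.Queue()
    for i in range(20):
        tasks.put(i)
    for _ in range(nprocs):
        tasks.put('STOP')
    procs = [mp.Process(target=worker, args=(tasks, results))
             for _ in range(nprocs)]
    for p in procs:
        p.start()
    res = []
    for _ in range(nprocs):                  # read until every DONE has arrived
        res.extend(iter(results.get, 'DONE'))
    for p in procs:                          # join only after draining the queue
        p.join()
    print(sorted(res))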

Felix
-- 
http://mail.python.org/mailman/listinfo/python-list


ConfigParser is not parsing

2010-02-12 Thread felix gao
Hi all,

I am trying to get a configuration file read in by Python; however,
after the read command it returns a list with the filename that I passed in.
What is going on?

Python 2.6.1 (r261:67515, Jul  7 2009, 23:51:51)
[GCC 4.2.1 (Apple Inc. build 5646)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import ConfigParser
>>> p = ConfigParser.SafeConfigParser()
>>> cfg = p.read("S3Files.conf")
>>> cfg
['S3Files.conf']


 cat S3Files.conf
[main]
taskName=FileConfigDriver
lastProcessed=2010-01-31
dateFromat=%Y-%m-%d
skippingValue=86400
skippingInterval=seconds

Thanks in advance.
-- 
http://mail.python.org/mailman/listinfo/python-list


more pythonic way

2019-02-11 Thread Felix Lazaro Carbonell
 

Hello to everyone:

Could you please tell me which way of writing this method is more pythonic:

 

..

def find_monthly_expenses(month=None, year=None):

month = month or datetime.date.today()

..

 

Or it should better be:

...

if not month:

month = datetime.date.today()

..

 

Cheers,

Felix.

 

-- 
https://mail.python.org/mailman/listinfo/python-list


RE: more pythonic way

2019-02-11 Thread Felix Lazaro Carbonell
Sorry I meant 
 

..

def find_monthly_expenses(month=None, year=None):

month = month or datetime.date.today().month

..
 

Or it should better be:

...

if not month:

month = datetime.date.today().month

..
 

Cheers,

Felix.

 
-- 
https://mail.python.org/mailman/listinfo/python-list

-- 
https://mail.python.org/mailman/listinfo/python-list


RE: more pythonic way

2019-02-11 Thread Felix Lazaro Carbonell



-Original Message-
De: Python-list [mailto:[email protected]]
On behalf of Grant Edwards
Sent: Monday, February 11, 2019, 02:46 PM
To: [email protected]
Subject: Re: more pythonic way

On 2019-02-11, Felix Lazaro Carbonell  wrote:

> Could you please tell me wich way of writing this method is more pythonic:
>
> def find_monthly_expenses(month=None, year=None):
> month = month or datetime.date.today()
>
> Or it should better be:
>
> if not month:
> month = datetime.date.today()

>The most pythonic way is to do this:
>
>   def find_monthly_expenses(month=datetime.date.today().month,
year=datetime.date.today().year):
>  ...
>
>And then start a month-long argument on the mailing list about how the
behavior of parameter default values is wrong and needs be changed.
>
>;)
>
>-- 
>Grant Edwards   grant.b.edwardsYow! I always have fun
>  at   because I'm out of my
>  gmail.commind!!!
>
>--

Thanks, Grant:

But now I think I should have mentioned that this is a method in a Django
model, and default arguments are evaluated once, when the method is defined,
not each time the method is called.
So your way will yield the date when Django was started, not the date on
which this method is called, and the date I intend to get is the one when the
method is called. I think I shouldn't call datetime.date.today() as a
default value for the method's parameters.
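
A tiny illustration of the difference (the function and parameter names are
made up):

import datetime

def report(day=datetime.date.today()):    # default evaluated once, at definition time
    return day

def report_fixed(day=None):               # default resolved on every call
    if day is None:
        day = datetime.date.today()
    return day

# report() keeps returning the date the module was imported;
# report_fixed() returns today's date each time it is called.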

Cheers,
Felix.

-- 
https://mail.python.org/mailman/listinfo/python-list


python3.7.2 won't compile with SSL support

2019-02-21 Thread Felix Lazaro Carbonell
Hello:

 

I'm trying to install Python 3.7.2 from source on Debian 9.8, but it doesn't
compile with SSL support.

 

I already installed openssl

 

And ./configure -with-openssl=/usr/include/openssl/ yields:

 

checking for openssl/ssl.h in /usr/include/openssl/... no

 

and ssl.h is certainly in /usr/include/openssl/

 

Any ideas, please?

 

Thanks in advance,

Felix.

-- 
https://mail.python.org/mailman/listinfo/python-list


RE: python3.7.2 won't compile with SSL support (solved)

2019-02-21 Thread Felix Lazaro Carbonell


Incredibly:

./configure --with-ssl=/usr/include/openssl/

did the trick!!
Although --with-ssl is not documented in ./configure --help.

Cheers,
Felix.

-- 
https://mail.python.org/mailman/listinfo/python-list


Python boilerplate

2016-03-19 Thread Fernando Felix do Nascimento Junior
A simple boilerplate for those who don't know the structure of a project. 
https://goo.gl/lJRvS6

## Features

* Build and distribute with setuptools
* Check code style with flake8
* Make and run tests with pytest
* Run tests on every Python version with tox
* Code coverage with coverage.py

## Structure

Structure of the project in tree format.

├── CONTRIBUTING.md
├── LICENSE
├── Makefile
├── MANIFEST.in
├── module_name.py
├── README.md
├── requirements
│   ├── dev.txt
│   └── prod.txt
├── requirements.txt
├── setup.cfg
├── setup.py
├── tests.py
└── tox.ini

Fernando Felix
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Python boilerplate

2016-03-20 Thread Fernando Felix do Nascimento Junior
@all

I released version 1.0.0 with a tiny glossary and explanation of each file in 
the boilerplate.

@Chris

I made the boilerplate with the intent that everyone can understand, download
and use it quickly. So I didn't add an extra dependency like cookiecutter
(which depends on jinja, which depends on markupsafe) **just** to replace
fields and then run the project.

I also preferred to use .md instead of .rst because it's cleaner in my opinion
and used by default on platforms like GitHub and Stack Overflow. See mkdocs to
generate documentation with markdown.

In the same way, I chose pytest because the default test framework is verbose
and its CamelCase sucks.

About entry_points, maybe I'll consider it too, but I didn't understand why
packages are better than modules... both can be reusable and not every project
needs packages.

I looked at your release shell script. It's very nice. The Flask GitHub
repository has a pretty nice one too. See its ~scripts/make-release.py.


Thanks,
-- 
https://mail.python.org/mailman/listinfo/python-list


Python 3.8.5

2021-01-06 Thread Joseph Milroy Felix Moraes (Moraes) via Python-list
Good day,

I keep getting this error message when trying to open Python 3.8.5 on my
Windows 7, 64-bit computer.

---
python.exe - System Error
---
The program can't start because api-ms-win-crt-runtime-l1-1-0.dll is missing 
from your computer. Try reinstalling the program to fix this problem.
---
OK 
---

kindly assist

Regards,
Milroy

-- 
https://mail.python.org/mailman/listinfo/python-list