Solutions for finding the 1000th prime

2010-05-21 Thread Neal
I'm doing the MIT OpenCourseWare class that this assignment hails from,
and I don't doubt that it's a relatively common assignment. While
searching for ideas about why my program wouldn't work, one of the top
results was a thread from this group full of derision and sarcasm.
Believe me, I understand that the people here don't want to do another
person's homework, but for someone who isn't going to be coding for a
living, or is a hobbyist like I am, there could be some useful
information. Eventually I figured out what I was doing wrong, and I
was hoping to add some insight. At this point in the lectures, about
all we've learned as far as statements go is: while, if, else, elif,
and some boolean operators.

I started by defining the three variables I would need:
 One to count how many primes I have found
 The number I'm currently checking for primality
 The current divisor I'm testing it against

The assignment states that it's easiest to check all odd integers > 1,
but not to forget that 2 is prime. It's probably easiest just to start
your counter at 1. I didn't, and it took an extra three lines just to
get my counter to 1 and move on to the next candidate, all without
being inside a usable loop.

You start your loop with the stated condition; there's no need for it
to keep running once the 1000th prime has been counted.

You could have your program check whether a number is divisible by
every number less than itself, and if so move on to the next
candidate. However, no number is divisible by anything between half of
itself and itself, so you only need to test divisors up to the
candidate divided by 2. And since we are only dealing with odd
numbers, which are never divisible by 2, the largest divisor you need
to try is actually the candidate divided by 3.

Each iteration of the loop, when it finds a prime, needs to increase
the count, move on to the next candidate, and reset the divisor for
the next pass. If a number is determined not to be prime, you need to
move on to the next candidate and reset your divisor (which is where I
was stuck for a long time while writing this program). If you're not
yet sure whether the number is prime, you simply advance the divisor
and continue the loop.
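For later readers, here is one way the steps above can be sketched in
code, using only the constructs mentioned (while/if/else). The
variable names are my own; this is an illustration, not the course's
official solution.

```python
# Count 2 up front, then test odd candidates with odd divisors,
# stopping the divisor at candidate/3 as described above.
count = 1        # how many primes found so far (2 is the first)
candidate = 3    # the number currently being checked
while count < 1000:
    divisor = 3          # current trial divisor
    maybe_prime = True
    while maybe_prime and divisor * 3 <= candidate:
        if candidate % divisor == 0:
            maybe_prime = False    # found a factor: not prime
        else:
            divisor = divisor + 2  # still unsure: advance the divisor
    if maybe_prime:
        count = count + 1
        last_prime = candidate
    candidate = candidate + 2      # next odd candidate
print(last_prime)   # -> 7919, the 1000th prime
```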
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Solutions for finding the 1000th prime

2010-05-21 Thread Neal
You did provide a very constructive answer, and I do apologize for
generalizing about the group and all the posts. And while the original
poster did not seem to have made much of an effort, the tone of the
initial response in that thread turns off anyone else who might be
willing to make that effort.

I also want people who may have forgotten to understand how early this
problem shows up in an Introduction to Computer Science course. In
this class it's the second assignment, right after the obligatory
'Hello World!' variant.

I'm sure that while finding solutions and new keywords is integral to
learning how to program in a language, defining a function isn't a
tool that has been introduced quite yet, as simple as it may seem.

Thank you for the link to the mailing list as well.


Re: Concurrent writes to the same file

2013-07-11 Thread Neal Becker
Dave Angel wrote:

> On 07/11/2013 12:57 AM, Jason Friedman wrote:
>> Other than using a database, what are my options for allowing two processes
>> to edit the same file at the same time?  When I say same time, I can accept
>> delays.  I considered lock files, but I cannot conceive of how I avoid race
>> conditions.
>>
> 
> In general, no.  That's what a database is for.
> 
> Now, you presumably have some reason to avoid database, but in its stead
> you have to specify some other limitations.  To start with, what do you
> mean by "the same time"?  If each process can modify the entire file,
> then there's no point in one process reading the file at all until it
> has the lock.  So the mechanism would be
>1) wait till you can acquire the lock
>2) open the file, read it, modify it, flush and close
>3) release the lock
> 
> To come up with an appropriate lock, it'd be nice to start by specifying
> the entire environment.  Which Python, which OS?  Are the two processes
> on the same CPU, what's the file system, and is it locally mounted?
> 
> 
> 
> 

You can use a separate server process to do file I/O, taking multiple
inputs from clients.  Like syslogd.



Re: [OT] Simulation Results Managment

2012-07-14 Thread Neal Becker
[email protected] wrote:

> Hi,
> This is a general question, loosely related to python since it will be the
> implementation language. I would like some suggestions as to manage simulation
> results data from my ASIC design.
> 
> For my design,
> - I have a number of simulations testcases (TEST_XX_YY_ZZ), and within each of
> these test cases we have:
>   - a number of properties (P_AA_BB_CC)
>   - For each property, the following information is given
> - Property name (P_NAME)
> - Number of times it was checked (within the testcase) N_CHECKED
> - Number of times if failed (within the testcase) N_FAILED
> - A simulation runs a testcase with a set of parameters.
>   - Simple example, SLOW_CLOCK, FAST_CLOCK, etc
> - For the design, I will run regression every night (at least), so I will have
> results from multiple timestamps We have < 1000 TESTCASES, and < 1000
> PROPERTIES.
> 
> At the moment, I have a script that extracts property information from
> simulation logfile, and provides single PASS/FAIL and all logfiles stored in a
> directory structure with timestamps/testnames and other parameters embedded in
> paths
> 
> I would like to be easily look at (visualize) the data and answer the
> questions - When did this property last fail, and how many times was it
> checked - Is this property checked in this test case.
> 
> Initial question: How to organize the data within python?
> For a single testcase, I could use a dict. Key P_NAME, data in N_CHECKED,
> N_FAILED I then have to store multiple instances of testcase based on date
> (and simulation parameters.
> 
> Any comments, suggestions?
> Thanks,
> Steven

One small suggestion:
I used to store test conditions and results in log files, and then
write parsers to read the results.  The formats kept changing (add
more conditions/results!) and maintenance was a pain.

Now, in addition to a text log file, I write a file in pickle format containing 
a dict of all test conditions and results.  Much more convenient.
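A minimal sketch of what I mean (the field names here are invented for
illustration):

```python
import pickle

# Invented example record: all test conditions and results in one dict.
record = {
    'testcase': 'TEST_XX_YY_ZZ',
    'params': {'clock': 'SLOW_CLOCK'},
    'properties': {'P_AA_BB_CC': {'N_CHECKED': 42, 'N_FAILED': 0}},
}

with open('run.pkl', 'wb') as f:
    pickle.dump(record, f)

# A later analysis script gets everything back with no parsing at all.
with open('run.pkl', 'rb') as f:
    restored = pickle.load(f)
```

Adding a new condition later is just adding a key; nothing downstream
needs a parser change.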



Re: [OT] Simulation Results Managment

2012-07-15 Thread Neal Becker
Dieter Maurer wrote:

> [email protected] writes:
>> ...
>> Does pickle have any advantages over json/yaml?
> 
> It can store and retrieve almost any Python object with almost no effort.
> 
> Up to you whether you see it as an advantage to be able to store
> objects rather than (almost) pure data with a rather limited type set.
> 
> 
> Of course, "pickle" is a proprietary Python format. Not so easy to
> decode it with something else than Python. In addition, when
> you store objects, the retrieving application must know the classes
> of those objects -- and its knowledge should not be too different
> from how those classes looked when the objects have been stored.
> 
> 
> I like very much to work with objects (rather than with pure data).
> Therefore, I use "pickle" when I know that the storing and retrieving
> applications all use Python. I use pure (and restricted) data formats
> when non Python applications come into play.

Typically what I want to do is post-process (e.g. plot) results using python 
scripts, so using pickle is great for that.



Re: Google the video "9/11 Missing Links". 9/11 was a Jew Job!

2012-07-19 Thread Neal Becker
Google the video "Go fuck yourself"



equiv of perl regexp grammar?

2012-09-13 Thread Neal Becker
I noticed this and thought it looked interesting:

http://search.cpan.org/~dconway/Regexp-Grammars-1.021/lib/Regexp/Grammars.pm#DESCRIPTION

I'm wondering if python has something equivalent?



A little morning puzzle

2012-09-19 Thread Neal Becker
I have a list of dictionaries.  They all have the same keys.  I want
to find the set of keys where all the dictionaries have the same
values.  Suggestions?
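One possible sketch (my own, with made-up data): compare every
dictionary's value for a key against the first dictionary's.

```python
dicts = [{'a': 1, 'b': 2, 'c': 3},
         {'a': 1, 'b': 9, 'c': 3},
         {'a': 1, 'b': 2, 'c': 3}]

first = dicts[0]
# Keep the keys whose value agrees across every dictionary.
same = {k for k in first if all(d[k] == first[k] for d in dicts)}
```

Here `same` comes out as {'a', 'c'}, since only 'b' disagrees.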



howto handle nested for

2012-09-28 Thread Neal Becker
I know this should be a fairly basic question, but I'm drawing a blank.

I have code that looks like:
 
  for s0 in xrange (n_syms):
    for s1 in xrange (n_syms):
      for s2 in xrange (n_syms):
        for s3 in xrange (n_syms):
          for s4 in range (n_syms):
            for s5 in range (n_syms):

Now I need the level of nesting to vary dynamically (e.g., maybe I
need to add a seventh loop: for s6 in range(n_syms)).

Smells like a candidate for recursion.  Also sounds like a use for yield.  Any 
suggestions? 



Re: howto handle nested for

2012-09-28 Thread Neal Becker
Neal Becker wrote:

> I know this should be a fairly basic question, but I'm drawing a blank.
> 
> I have code that looks like:
>  
>   for s0 in xrange (n_syms):
>     for s1 in xrange (n_syms):
>       for s2 in xrange (n_syms):
>         for s3 in xrange (n_syms):
>           for s4 in range (n_syms):
>             for s5 in range (n_syms):
> 
> Now I need the level of nesting to vary dynamically.  (e.g., maybe I need to
> add
> for  s6 in range (n_syms))
> 
> Smells like a candidate for recursion.  Also sounds like a use for yield.  Any
> suggestions?

Thanks for the suggestions: I found itertools.product is just great for this.
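For later readers, here's a sketch of the rewrite (names taken from my
original post):

```python
import itertools

n_syms = 4
depth = 6   # the former nesting level; can now vary at runtime

count = 0
for syms in itertools.product(range(n_syms), repeat=depth):
    # syms is a tuple (s0, s1, ..., s5), one value per former loop
    count += 1
```

Changing `depth` replaces adding or removing a hand-written loop, and
the body sees all the loop variables at once as a tuple.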



serialization and versioning

2012-10-12 Thread Neal Becker
I wonder if there is a recommended approach to handle this issue.

Suppose objects of a class C are serialized using python standard pickling.  
Later, suppose class C is changed, perhaps by adding a data member and a new 
constructor argument.

It would seem the pickling protocol does not directly provide for this - but is 
there a recommended method?

I could imagine that a class could include a class __version__ property that 
might be useful - although I would further expect that it would not have been 
defined in the original version of class C (but only as an afterthought when it 
became necessary).



Re: serialization and versioning

2012-10-12 Thread Neal Becker
Etienne Robillard wrote:

> On Fri, 12 Oct 2012 06:42:03 -0400
> Neal Becker  wrote:
> 
>> I wonder if there is a recommended approach to handle this issue.
>> 
>> Suppose objects of a class C are serialized using python standard pickling.
>> Later, suppose class C is changed, perhaps by adding a data member and a new
>> constructor argument.
>> 
>> It would see the pickling protocol does not directly provide for this - but
>> is there a recommended method?
>> 
>> I could imagine that a class could include a class __version__ property that
>> might be useful - although I would further expect that it would not have been
>> defined in the original version of class C (but only as an afterthought when
>> it became necessary).
>> 
>> --
>> http://mail.python.org/mailman/listinfo/python-list
> 
> i guess a easy answer is to say to try python 3.3 but how would this translate
> in python (2) code ?

So are you saying python 3.3 has such a feature?  Where is it described?



simple string format question

2012-10-15 Thread Neal Becker
Is there a way to specify a format so that a floating point number is
written with no more than, e.g., 2 digits after the decimal?  I tried
{:.2f}, but then I get all floats written with 2 digits, even if they
are 0:

2.35 << yes, that's what I want
2.00 << no, I want just 2 or 2.
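For the record, one workaround is the 'g' presentation type; note that
its precision counts significant digits, not digits after the decimal
point:

```python
# '.2f' always keeps two decimal digits:
assert '{:.2f}'.format(2.0) == '2.00'

# 'g' drops trailing zeros; its precision is significant digits:
assert '{:.3g}'.format(2.346) == '2.35'
assert '{:g}'.format(2.35) == '2.35'
assert '{:g}'.format(2.0) == '2'      # just 2, as wanted
```

The significant-digits behavior means large values keep fewer decimal
places under 'g' than under 'f', so it isn't an exact drop-in for
every case.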



Re: how to insert random error in a programming

2012-10-15 Thread Neal Becker
Debashish Saha wrote:

> how to insert random error in a programming?

Apparently, giving it to Microsoft will work.



Re: Immutability and Python

2012-10-30 Thread Neal Becker
rusi wrote:

> On Oct 29, 8:20 pm, andrea crotti  wrote:
> 
>> Any comments about this? What do you prefer and why?
> 
> Im not sure how what the 'prefer' is about -- your specific num
> wrapper or is it about the general question of choosing mutable or
> immutable types?
> 
> If the latter I would suggest you read
> http://en.wikipedia.org/wiki/Alexander_Stepanov#Criticism_of_OOP
> 
> [And remember that Stepanov is the author of C++ STL, he is arguably
> as important in the C++ world as Stroustrup]

The usual calls for immutability are not related to OO.  They have to do with 
optimization, and specifically with parallel processing.



Re: ANNOUNCE: Thesaurus - a recursive dictionary subclass using attributes

2013-01-08 Thread Neal Becker
Did you intend to give anyone permission to use the code?  I see only a 
copyright notice, but no permissions.



Re: ANN: PyDTLS

2013-01-09 Thread Neal Becker
A bit OT, but widespread use of RFC 6347 could have a big impact on my
work.  I wonder if it's likely to see widespread use?  What are
likely/possible use cases?

Thanks.



surprising result all (generator) (bug??)

2012-01-31 Thread Neal Becker
I was just bitten by this unexpected behavior:

In [24]: all ([i > 0 for i in xrange (10)])
Out[24]: False

In [25]: all (i > 0 for i in xrange (10))
Out[25]: True



Re: surprising result all (generator) (bug??)

2012-01-31 Thread Neal Becker
Mark Dickinson wrote:

> On Jan 31, 6:40 am, Neal Becker  wrote:
>> I was just bitten by this unexpected behavior:
>>
>> In [24]: all ([i > 0 for i in xrange (10)])
>> Out[24]: False
>>
>> In [25]: all (i > 0 for i in xrange (10))
>> Out[25]: True
> 
> What does:
> 
>>>> import numpy
>>>> all is numpy.all
> 
> give you?
> 
> --
> Mark
In [31]: all is numpy.all
Out[31]: True

Excellent detective work, Mark!  But it still is unexpected, at least to me.
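For later readers, my understanding of why (an assumption on my part,
not verified against numpy's source): numpy.all doesn't iterate a
generator at all; numpy wraps it in a 0-d object array, and a
generator object is simply truthy.

```python
import numpy

gen = (i > 0 for i in range(10))
# numpy wraps the generator in a 0-d object array; "all" of that is
# just the truthiness of the generator object itself.
result = bool(numpy.all(gen))

# The builtin all(), by contrast, actually consumes the generator.
builtin_result = all(i > 0 for i in range(10))   # False: 0 > 0 fails
```

The moral being: after `from numpy import *` (or in IPython's pylab
mode), the builtins `all`, `any`, `sum`, etc. may be shadowed by numpy
versions with different semantics.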



Re: [Perl Golf] Round 1

2012-02-05 Thread Neal Becker
Heiko Wundram wrote:

> Am 05.02.2012 12:49, schrieb Alec Taylor:
>> Solve this problem using as few lines of code as possible[1].
> 
> Pardon me, but where's "the problem"? If your intention is to propose "a
> challenge", say so, and state the associated problem clearly.
> 

But this really misses the point.  Python is not about coming up with
some clever, cryptic one-liner to solve a problem.  It's about clear
code.  If you want clever, cryptic one-liners, stick with Perl.



pickle/unpickle class which has changed

2012-03-06 Thread Neal Becker
What happens if I pickle a class, and later unpickle it where the class now has 
added some new attributes?



Re: pickle/unpickle class which has changed

2012-03-06 Thread Neal Becker
Peter Otten wrote:

> Steven D'Aprano wrote:
> 
>> On Tue, 06 Mar 2012 07:34:34 -0500, Neal Becker wrote:
>> 
>>> What happens if I pickle a class, and later unpickle it where the class
>>> now has added some new attributes?
>> 
>> Why don't you try it?
>> 
>> py> import pickle
>> py> class C:
>> ... a = 23
>> ...
>> py> c = C()
>> py> pickled = pickle.dumps(c)
>> py> C.b = 42  # add a new class attribute
>> py> d = pickle.loads(pickled)
>> py> d.a
>> 23
>> py> d.b
>> 42
>> 
>> 
>> Unless you mean something different from this, adding attributes to the
>> class is perfectly fine.
>> 
>> But... why are you dynamically adding attributes to the class? Isn't that
>> rather unusual?
> 
> The way I understand the problem is that an apparently backwards-compatible
> change like adding a third dimension to a point with an obvious default
> breaks when you restore an "old" instance in a script with the "new"
> implementation:
> 
>>>> import pickle
>>>> class P(object):
> ... def __init__(self, x, y):
> ... self.x = x
> ... self.y = y
> ... def r2(self):
> ... return self.x*self.x + self.y*self.y
> ...
>>>> p = P(2, 3)
>>>> p.r2()
> 13
>>>> s = pickle.dumps(p)
>>>> class P(object):
> ... def __init__(self, x, y, z=0):
> ... self.x = x
> ... self.y = y
> ... self.z = z
> ... def r2(self):
> ... return self.x*self.x + self.y*self.y + self.z*self.z
> ...
>>>> p = P(2, 3)
>>>> p.r2()
> 13
>>>> pickle.loads(s).r2()
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "", line 7, in r2
> AttributeError: 'P' object has no attribute 'z'
> 
> By default pickle doesn't invoke __init__() and updates __dict__ directly.
> As pointed out in my previous post one way to fix the problem is to
> implement a __setstate__() method:
> 
>>>> class P(object):
> ... def __init__(self, x, y, z=0):
> ... self.x = x
> ... self.y = y
> ... self.z = z
> ... def r2(self):
> ... return self.x*self.x + self.y*self.y + self.z*self.z
> ... def __setstate__(self, state):
> ... self.__dict__["z"] = 42 # stupid default
> ... self.__dict__.update(state)
> ...
>>>> pickle.loads(s).r2()
> 1777
> 
> This keeps working with pickles of the new implementation of P:
> 
>>>> q = P(3, 4, 5)
>>>> pickle.loads(pickle.dumps(q)).r2()
> 50

So if my new class definition has some new attributes, and I did not
add a __setstate__ to set them, I guess the unpickled instance of the
class will simply lack those attributes?
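That matches what a quick sketch shows: without a __setstate__, the
restored instance just gets the old __dict__ back, attribute for
attribute.

```python
import pickle

class C(object):
    def __init__(self):
        self.x = 1

saved = pickle.dumps(C())

class C(object):          # the "new" definition grows an attribute
    def __init__(self):
        self.x = 1
        self.y = 2

# __init__ is not called on unpickling; the saved __dict__ is applied
# to a bare instance of the *current* class C.
restored = pickle.loads(saved)
has_y = hasattr(restored, 'y')   # False: the old pickle lacks it
```

So any method of the new class that touches the new attribute will
raise AttributeError on old pickles, which is exactly what
__setstate__ defaults are for.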



cython + scons + c++

2012-03-08 Thread Neal Becker
Is there a version of cython.py, pyext.py that will work with c++?

I asked this question some time ago, but never got an answer.

I tried the following code, but it doesn't work correctly.  If the commented 
lines are uncommented, the gcc command is totally mangled.

Although it did build my one test extension OK, it didn't use any
libstdc++ - I suspect it won't link correctly in general, because it
doesn't seem to treat the code as C++ (it treats it as C).

cyenv = Environment(PYEXT_USE_DISTUTILS=True)
cyenv.Tool("pyext")
cyenv.Tool("cython")
import numpy

cyenv.Append(PYEXTINCPATH=[numpy.get_include()])
cyenv.Replace(CYTHONFLAGS=['--cplus'])
#cyenv.Replace(CXXFILESUFFIX='.cpp')
#cyenv.Replace(CYTHONCFILESUFFIX='.cpp')




argparse ConfigureAction problem

2012-03-24 Thread Neal Becker
I've been using argparse with ConfigureAction (which is shown below).  But it 
doesn't play well with positional arguments.  For example:

./plot_stuff2.py --plot stuff1 stuff2
[...]
plot_stuff2.py: error: argument --plot/--with-plot/--enable-plot/--no-plot/--
without-plot/--disable-plot: invalid boolean value: 'stuff1'

Problem is --plot takes an optional argument, and so the positional arg is 
assumed to be the arg to --plot.  Not sure how to fix this.


Here is the parser code:

parser = argparse.ArgumentParser()

[...]
parser.add_argument ('--plot', action=ConfigureAction, default=False)
parser.add_argument ('files', nargs='*')

opt = parser.parse_args(cmdline[1:])

Here is ConfigureAction:

-
import argparse
import re


def boolean(string):
    string = string.lower()
    if string in ['0', 'f', 'false', 'no', 'off']:
        return False
    elif string in ['1', 't', 'true', 'yes', 'on']:
        return True
    else:
        raise ValueError()


class ConfigureAction(argparse.Action):

    def __init__(self,
                 option_strings,
                 dest,
                 default=None,
                 required=False,
                 help=None,
                 metavar=None,
                 positive_prefixes=['--', '--with-', '--enable-'],
                 negative_prefixes=['--no-', '--without-', '--disable-']):
        strings = []
        self.positive_strings = set()
        self.negative_strings = set()
        for string in option_strings:
            assert re.match(r'--[A-Za-z]+', string)
            suffix = string[2:]
            for positive_prefix in positive_prefixes:
                self.positive_strings.add(positive_prefix + suffix)
                strings.append(positive_prefix + suffix)
            for negative_prefix in negative_prefixes:
                self.negative_strings.add(negative_prefix + suffix)
                strings.append(negative_prefix + suffix)
        super(ConfigureAction, self).__init__(
            option_strings=strings,
            dest=dest,
            nargs='?',
            const=None,
            default=default,
            type=boolean,
            choices=None,
            required=required,
            help=help,
            metavar=metavar)

    def __call__(self, parser, namespace, value, option_string=None):
        if value is None:
            value = option_string in self.positive_strings
        elif option_string in self.negative_strings:
            value = not value
        setattr(namespace, self.dest, value)
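One workaround sketch (my own, reduced to a plain boolean option
rather than the full ConfigureAction above): the ambiguity only arises
when the flag's optional value could swallow a following positional,
so either attach the value with '=' or put the positionals before the
flag.

```python
import argparse

def boolean(string):
    string = string.lower()
    if string in ['0', 'f', 'false', 'no', 'off']:
        return False
    elif string in ['1', 't', 'true', 'yes', 'on']:
        return True
    raise ValueError(string)

parser = argparse.ArgumentParser()
parser.add_argument('--plot', nargs='?', const=True, default=False,
                    type=boolean)
parser.add_argument('files', nargs='*')

# Explicit '=' keeps the positionals out of --plot's optional value:
ns = parser.parse_args(['--plot=yes', 'stuff1', 'stuff2'])
assert ns.plot is True and ns.files == ['stuff1', 'stuff2']

# Or put the positionals before the flag:
ns = parser.parse_args(['stuff1', 'stuff2', '--plot'])
assert ns.plot is True and ns.files == ['stuff1', 'stuff2']
```

Neither is a real fix for `--plot stuff1 stuff2` itself; with
nargs='?' that parse is genuinely ambiguous and argparse resolves it
in favor of the option.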




set PYTHONPATH for a directory?

2012-05-04 Thread Neal Becker
I'm testing some software I'm building against an alternative version
of a library.  So I have the alternative library in directory L, and
then in an unrelated directory I have the test software, which needs
to use the library version from directory L.

One approach is to set PYTHONPATH whenever I run this test software.
Any suggestions for a more foolproof approach?



Re: Good data structure for finding date intervals including a given date

2012-05-12 Thread Neal Becker
Probably boost ITL (Interval Template Library) would serve as a good
example.  I noticed recently that someone created a Python interface
for it.



Re: usenet reading

2012-06-03 Thread Neal Becker
Jon Clements wrote:

> Hi All,
> 
> Normally use Google Groups but it's becoming absolutely frustrating - not only
> has the interface changed to be frankly impractical, the posts are somewhat
> random of what appears, is posted and whatnot. (Ironically posted from GG)
> 
> Is there a server out there where I can get my news groups? I use to be with
> an ISP that hosted usenet servers, but alas, it's no longer around...
> 
> Only really interested in Python groups and C++.
> 
> Any advice appreciated,
> 
> Jon.

Somewhat unrelated - any good news reader for Android?



mode for file created by open

2012-06-08 Thread Neal Becker
If a new file is created by open ('xxx', 'w')

How can I control the file permission bits?  Is my only choice to use
chmod after opening, or to use os.open?

Wouldn't this be a good thing to have as a keyword for open?  Too bad
that what Python calls 'mode' is like what POSIX open calls 'flags',
while what POSIX open calls 'mode' is what should go to chmod.
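For reference, a sketch of the os.open workaround alluded to above
(0o600 is just an example; the result is still subject to the umask):

```python
import os

# Create the file with explicit permission bits at creation time,
# then wrap the descriptor in an ordinary file object.
fd = os.open('xxx', os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
f = os.fdopen(fd, 'w')
f.write('data\n')
f.close()

perms = os.stat('xxx').st_mode & 0o777   # 0o600, minus any umask bits
```

As it happens, open() in Python 3.3 grew an 'opener' keyword argument,
which lets you supply exactly this kind of os.open call without the
fdopen dance.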



Re: mode for file created by open

2012-06-09 Thread Neal Becker
Cameron Simpson wrote:

> On 08Jun2012 14:36, Neal Becker  wrote:
> | If a new file is created by open ('xxx', 'w')
> | 
> | How can I control the file permission bits?  Is my only choice to use chmod
> | after opening, or use os.open?
> | 
> | Wouldn't this be a good thing to have as a keyword for open?  Too bad what
> | python calls 'mode' is like what posix open calls 'flags', and what posix
> | open calls 'mode' is what should go to chmod.
> 
> Well, it does honour the umask, and will call the OS open with 0666
> mode so you'll get 0666-umask mode bits in the new file (if it is new).
> 
> Last time I called os.open was to pass a mode of 0 (raceproof lockfile).
> 
> I would advocate (untested):
> 
>   fd = os.open(...)
>   os.fchmod(fd, new_mode)
>   fp = os.fdopen(fd)
> 
> If you need to constrain access in a raceless fashion (specificly, no
> ealy window of _extra_ access) pass a restrictive mode to os.open and
> open it up with fchmod.
> 
> Cheers,

Doesn't anyone else think it would be a good addition to open to
specify a file creation mode, like POSIX open?  It would avoid all
these nasty workarounds.



Re: mode for file created by open

2012-06-09 Thread Neal Becker
Terry Reedy wrote:

> On 6/9/2012 10:08 AM, Devin Jeanpierre wrote:
>> On Sat, Jun 9, 2012 at 7:42 AM, Neal Becker  wrote:
>>> Doesn't anyone else think it would be a good addition to open to specify a
>>> file
>>> creation mode?  Like posix open?  Avoid all these nasty workarounds?
>>
>> I do, although I'm hesitant, because this only applies when mode ==
>> 'w', and open has a large and growing list of parameters.
> 
> The buffer parameter (I believe it is) also does not always apply.
> 
> The original open builtin was a thin wrapper around old C's stdio.open.
> Open no longer has that constraint. After more discussion here, someone
> could open a tracker issue with a specific proposal. Keep in mind that
> 'mode' is already a parameter name for the mode of opening, as opposed
> to the permission mode for subsequent users.
> 

I haven't seen the current code - I'd guess it just uses posix open.

So I would guess it wouldn't be difficult to add the creation mode argument.

How about call it cr_mode?



module name vs '.'

2012-06-18 Thread Neal Becker
Am I correct that a module could never come from a file path with a '.' in the 
name?



Re: module name vs '.'

2012-06-18 Thread Neal Becker
I meant a module:

src.directory contains
__init__.py
neal.py
becker.py

from src.directory import neal


On Mon, Jun 18, 2012 at 9:44 AM, Dave Angel  wrote:

> On 06/18/2012 09:19 AM, Neal Becker wrote:
> > Am I correct that a module could never come from a file path with a '.'
> in the
> > name?
> >
>
> No.
>
> Simple example: Create a directory called src.directory
> In that directory, create two files
>
> ::neal.py::
> import becker
> print becker.__file__
> print becker.hello()
>
>
> ::becker.py::
> def hello():
>print "Inside hello"
>return "returning"
>
>
> Then run neal.py, from that directory;
>
>
> davea@think:~/temppython/src.directory$ python neal.py
> /mnt/data/davea/temppython/src.directory/becker.pyc
> Inside hello
> returning
> davea@think:~/temppython/src.directory$
>
> Observe the results of printing __file__
>
> Other approaches include putting a directory path containing a period
> into sys.path
>
>
>
> --
>
> DaveA
>
>


writable iterators?

2011-06-22 Thread Neal Becker
AFAICT, the python iterator concept only supports readable iterators,
not writable ones.  Is this true?

for example:

for e in sequence:
  do something that reads e
  e = blah # will do nothing

I believe this is not a limitation on the for loop, but a limitation on the 
python iterator concept.  Is this correct?



Re: writable iterators?

2011-06-22 Thread Neal Becker
Steven D'Aprano wrote:

> On Wed, 22 Jun 2011 15:28:23 -0400, Neal Becker wrote:
> 
>> AFAICT, the python iterator concept only supports readable iterators,
>> not write. Is this true?
>> 
>> for example:
>> 
>> for e in sequence:
>>   do something that reads e
>>   e = blah # will do nothing
>> 
>> I believe this is not a limitation on the for loop, but a limitation on
>> the python iterator concept.  Is this correct?
> 
> Have you tried it? "e = blah" certainly does not "do nothing", regardless
> of whether you are in a for loop or not. It binds the name e to the value
> blah.
> 

Yes, I understand that e = blah just rebinds e.  I did not mean this
as an example of working code.  I meant to ask: does Python have any
idiom that allows iteration over a sequence such that the elements can
be assigned?

...
> * iterators are lazy sequences, and cannot be changed because there's
> nothing to change (they don't store their values anywhere, but calculate
> them one by one on demand and then immediately forget that value);
> 
> * immutable sequences, like tuples, are immutable and cannot be changed
> because that's what immutable means;
> 
> * mutable sequences like lists can be changed. The standard idiom for
> that is to use enumerate:
> 
> for i, e in enumerate(seq):
> seq[i] = e + 42
> 
> 
AFAIK, the above is the only python idiom that allows iteration over a
sequence such that you can write to the sequence.  And THAT is the
problem: in many cases, indexing is much less efficient than
iteration.



Re: writable iterators?

2011-06-23 Thread Neal Becker
Ian Kelly wrote:

> On Wed, Jun 22, 2011 at 3:54 PM, Steven D'Aprano
>  wrote:
>> Fortunately, that's not how it works, and far from being a "limitation",
>> it would be *disastrous* if iterables worked that way. I can't imagine
>> how many bugs would occur from people reassigning to the loop variable,
>> forgetting that it had a side-effect of also reassigning to the iterable.
>> Fortunately, Python is not that badly designed.
> 
> The example syntax is a non-starter, but there's nothing wrong with
> the basic idea.  The STL of C++ uses output iterators and a quick
> Google search doesn't turn up any "harmful"-style rants about those.
> 
> Of course, there are a couple of major differences between C++
> iterators and Python iterators.  FIrst, C++ iterators have an explicit
> dereference step, which keeps the iterator variable separate from the
> value that it accesses and also provides a possible target for
> assignment.  You could say that next(iterator) is the corresponding
> dereference step in Python, but it is not accessible in a for loop and
> it does not provide an assignment target in any case.
> 
> Second, C++ iterators separate out the dereference step from the
> iterator advancement step.  In Python, both next(iterator) and
> generator.send() are expected to advance the iterator, which would be
> problematic for creating an iterator that does both input and output.
> 
> I don't think that output iterators would be a "disaster" in Python,
> but I also don't see a clean way to add them to the existing iterator
> protocol.
> 
>> If you want to change the source iterable, you have to explicitly do so.
>> Whether you can or not depends on the source:
>>
>> * iterators are lazy sequences, and cannot be changed because there's
>> nothing to change (they don't store their values anywhere, but calculate
>> them one by one on demand and then immediately forget that value);
> 
> No, an iterator is an object that allows traversal over a collection
> in a manner independent of the implementation of that collection.  In
> many instances, especially in Python and similar languages, the
> "collection" is abstracted to an operation over another collection, or
> even to the results of a serial computation where there is no actual
> "collection" in memory.
> 
> Iterators are not lazy sequences, because they do not behave like
> sequences.  You can't index them, you can't reiterate them, you can't
> get their length (and before you point out that there are ways of
> doing each of these things -- yes, but none of those ways use
> sequence-like syntax).  For true lazy sequences, consider the concept
> of streams and promises in the functional languages.
> 
> In any case, the desired behavior of an output iterator on a source
> iterator is clear enough to me.  If the source iterator is also an
> output iterator, then it propagates the write to it.  If the source
> iterator is not an output iterator, then it raises a TypeError.
> 
>> * mutable sequences like lists can be changed. The standard idiom for
>> that is to use enumerate:
>>
>> for i, e in enumerate(seq):
>> seq[i] = e + 42
> 
> Unless the underlying collection is a dict, in which case I need to do:
> 
> for k, v in d.items():
>     d[k] = v + 42
> 
> Or a file:
> 
> for line in f:
>     # I'm not even sure whether this actually works.
>     f.seek(-len(line))
>     f.write(line.upper())
> 
> As I said above, iterators are supposed to provide
> implementation-independent traversal over a collection.  For writing,
> enumerate fails in this regard.


While Python may not have output iterators, interestingly numpy has just added 
this capability.  It is part of nditer.  So, this may suggest a syntax.

There have been a number of responses to my question that suggest using 
indexing (maybe with enumerate).  Once again, this is not suitable for many 
data structures.  C++ and the STL teach that iteration is often far more 
efficient than indexing.  Think of a linked list.  Even for a dense 
multi-dimensional array, index calculations are much slower than iteration.

I believe the lack of output iterators is a deficiency in the Python iterator 
concept.
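To make the efficiency point concrete, here is a toy singly linked list 
(purely illustrative) where positional indexing has to re-walk the list for 
every access, while iteration makes a single pass:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def from_list(values):
    # Build a linked list front-to-back from a Python list.
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def nth(head, i):
    # "Indexing": walk from the front every time -- O(i) per access.
    node = head
    for _ in range(i):
        node = node.next
    return node.value

def iterate(head):
    # Iteration: one O(n) pass over the whole list.
    node = head
    while node is not None:
        yield node.value
        node = node.next

head = from_list([1, 2, 3])
by_index = [nth(head, i) for i in range(3)]  # O(n^2) overall
by_iter = list(iterate(head))                # O(n)
print(by_index, by_iter)
```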


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: writable iterators?

2011-06-23 Thread Neal Becker
Chris Torek wrote:

> In article  I wrote, in part:
>>Another possible syntax:
>>
>>for item in container with key:
>>
>>which translates roughly to "bind both key and item to the value
>>for lists, but bind key to the key and value for the value for
>>dictionary-ish items".  Then ... the OP would write, e.g.:
>>
>>for elem in sequence with index:
>>...
>>sequence[index] = newvalue
>>
>>which of course calls the usual container.__setitem__.  In this
>>case the "new protocol" is to have iterators define a function
>>that returns not just the next value in the sequence, but also
>>an appropriate "key" argument to __setitem__.  For lists, this
>>is just the index; for dictionaries, it is the key; for other
>>containers, it is whatever they use for their keys.
> 
> I note I seem to have switched halfway through thinking about
> this from "value" to "index" for lists, and not written that. :-)
> 
> Here's a sample of a simple generator that does the trick for
> list, buffer, and dict:
> 
> def indexed_seq(seq):
>     """
>     Produce a <key_or_index, value> pair such that seq[key_or_index]
>     is <value> initially; you can write on seq[key_or_index] to set
>     a new value while this operates.  Note that we don't allow tuple
>     and string here since they are not writeable.
>     """
>     if isinstance(seq, (list, buffer)):
>         for i, v in enumerate(seq):
>             yield i, v
>     elif isinstance(seq, dict):
>         for k in seq:
>             yield k, seq[k]
>     else:
>         raise TypeError("don't know how to index %s" % type(seq))
> 
> which shows that there is no need for a new syntax.  (Turning the
> above into an iterator, and handling container classes that have
> an __iter__ callable that produces an iterator that defines an
> appropriate index-and-value-getter, is left as an exercise. :-) )

Here is what numpy nditer does:

>>> for item in np.nditer(u, [], ['readwrite'], order='C'):
...     item[...] = 10

Notice that the slice syntax is used to 'dereference' the iterator.  This seems 
like reasonably pythonic syntax, to my eye.
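For readers without numpy, here is a rough pure-Python imitation of that 
`item[...]` write-through idea (a sketch of the concept only, not numpy's 
actual implementation):

```python
class Cell:
    """Proxy for one slot of a container; writes go back to the container."""
    def __init__(self, container, key):
        self._container = container
        self._key = key

    def __getitem__(self, key):
        # Only the Ellipsis "dereference" syntax is supported.
        if key is not Ellipsis:
            raise KeyError(key)
        return self._container[self._key]

    def __setitem__(self, key, value):
        if key is not Ellipsis:
            raise KeyError(key)
        self._container[self._key] = value

def writable(seq):
    # Yield write-through proxies instead of plain values.
    for i in range(len(seq)):
        yield Cell(seq, i)

u = [1, 2, 3]
for item in writable(u):
    item[...] = item[...] * 10  # 'dereference' with slice syntax
print(u)
```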

-- 
http://mail.python.org/mailman/listinfo/python-list


'Use-Once' Variables and Linear Objects

2011-08-02 Thread Neal Becker
I thought this was an interesting article

http://www.pipeline.com/~hbaker1/Use1Var.html

-- 
http://mail.python.org/mailman/listinfo/python-list


argparse, tell if arg was defaulted

2011-03-15 Thread Neal Becker
Is there any way to tell if an arg value was defaulted vs. set on command line?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: argparse, tell if arg was defaulted

2011-03-15 Thread Neal Becker
Robert Kern wrote:

> On 3/15/11 9:54 AM, Neal Becker wrote:
>> Is there any way to tell if an arg value was defaulted vs. set on command
>> line?
> 
> No. If you need to determine that, don't set a default value in the
> add_argument() method. Then just check for None and replace it with the
> default value and do whatever other processing for the case where the user
> does not specify that argument.
> 
> parser.add_argument('-f', '--foo', help="the foo argument [default: bar]")
> 
> args = parser.parse_args()
> if args.foo is None:
>     args.foo = 'bar'
>     print "I'm warning you that you did not specify a --foo argument."
>     print "Using default=bar."
> 

Not a completely silly use case, actually.  What I need here is a combined 
command line / config file parser.

Here is my current idea:
-

from optparse import OptionParser

parser = OptionParser()
parser.add_option('--opt1', default=default1)

(opt, args) = parser.parse_args()

import json, sys

for arg in args:
    print 'arg:', arg
    d = json.load(open(arg, 'r'))
    parser.set_defaults(**d)

(opt, args) = parser.parse_args()
---

parse_args() is called twice.  The first pass just finds the non-option args, 
which are assumed to be the name(s) of config file(s) to read.  These are used 
to call set_defaults().  Then parse_args() is run again.
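A self-contained sketch of this two-pass idea using argparse (the option name 
and the throwaway config file are made up for the demo):

```python
import argparse
import json
import os
import tempfile

# Write a throwaway JSON config file for the demonstration.
tmp = tempfile.NamedTemporaryFile('w', suffix='.json', delete=False)
json.dump({'opt1': 'from-config'}, tmp)
tmp.close()

parser = argparse.ArgumentParser()
parser.add_argument('--opt1', default='built-in')
parser.add_argument('configs', nargs='*')

argv = [tmp.name]                 # config file given as a positional arg
first = parser.parse_args(argv)   # pass 1: discover the config files
for name in first.configs:
    with open(name) as f:
        parser.set_defaults(**json.load(f))

opts = parser.parse_args(argv)    # pass 2: config values act as defaults
print(opts.opt1)                  # an explicit --opt1 would still override
os.unlink(tmp.name)
```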

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: argparse, tell if arg was defaulted

2011-03-15 Thread Neal Becker
Robert Kern wrote:

> On 3/15/11 12:46 PM, Neal Becker wrote:
>> Robert Kern wrote:
>>
>>> On 3/15/11 9:54 AM, Neal Becker wrote:
>>>> Is there any way to tell if an arg value was defaulted vs. set on command
>>>> line?
>>>
>>> No. If you need to determine that, don't set a default value in the
>>> add_argument() method. Then just check for None and replace it with the
>>> default value and do whatever other processing for the case where the user
>>> does not specify that argument.
>>>
>>> parser.add_argument('-f', '--foo', help="the foo argument [default: bar]")
>>>
>>> args = parser.parse_args()
>>> if args.foo is None:
>>>     args.foo = 'bar'
>>>     print "I'm warning you that you did not specify a --foo argument."
>>>     print "Using default=bar."
>>>
>>
>> Not a completely silly use case, actually.  What I need here is a combined
>> command line / config file parser.
>>
>> Here is my current idea:
>> -
>>
>> parser = OptionParser()
>> parser.add_option ('--opt1', default=default1)
>>
>> (opt,args) = parser.parse_args()
>>
>> import json, sys
>>
>> for arg in args:
>>  print 'arg:', arg
>>  d = json.load(open (arg, 'r'))
>>  parser.set_defaults (**d)
>>
>> (opt,args) = parser.parse_args()
>> ---
>>
>> parse_args() is called 2 times.  First time is just to find the non-option
>> args,
>> which are assumed to be the name(s) of config file(s) to read.  This is used
>> to
>> set_defaults.  Then run parse_args() again.
> 
> I think that would work fine for most cases. Just be careful with the argument
> types that may consume resources. E.g. type=argparse.FileType().
> 
> You could also make a secondary parser that just extracts the config-file
> argument:
> 
> [~]
> |25> import argparse
> 
> [~]
> |26> config_parser = argparse.ArgumentParser(add_help=False)
> 
> [~]
> |27> config_parser.add_argument('-c', '--config', action='append')
> _AppendAction(option_strings=['-c', '--config'], dest='config', nargs=None,
> const=None, default=None, type=None, choices=None, help=None, metavar=None)
> 
> [~]
> |28> parser = argparse.ArgumentParser()
> 
> [~]
> |29> parser.add_argument('-c', '--config', action='append')  # For the --help
> string.
> _AppendAction(option_strings=['-c', '--config'], dest='config', nargs=None,
> const=None, default=None, type=None, choices=None, help=None, metavar=None)
> 
> [~]
> |30> parser.add_argument('-o', '--output')
> _StoreAction(option_strings=['-o', '--output'], dest='output', nargs=None,
> const=None, default=None, type=None, choices=None, help=None, metavar=None)
> 
> [~]
> |31> parser.add_argument('other', nargs='*')
> _StoreAction(option_strings=[], dest='other', nargs='*', const=None,
> default=None, type=None, choices=None, help=None, metavar=None)
> 
> [~]
> |32> argv = ['-c', 'config-file.json', '-o', 'output.txt', 'other',
> |'arguments']
> 
> [~]
> |33> known, unknown = config_parser.parse_known_args(argv)
> 
> [~]
> |34> known
> Namespace(config=['config-file.json'])
> 
> [~]
> |35> unknown
> ['-o', 'output.txt', 'other', 'arguments']
> 
> [~]
> |36> for cf in known.config:
> ...> # Load d from file.
> ...> parser.set_defaults(**d)
> ...>
> 
> [~]
> |37> parser.parse_args(unknown)
> Namespace(config=None, other=['other', 'arguments'], output='output.txt')
> 
> 

nice!

-- 
http://mail.python.org/mailman/listinfo/python-list


argparse csv + choices

2011-03-30 Thread Neal Becker
I'm trying to combine 'choices' with a comma-separated list of options, so I 
could do e.g.:

--cheat=a,b

parser.add_argument('--cheat', choices=('a','b','c'), type=lambda x: 
x.split(','), default=[])

test.py --cheat a
 error: argument --cheat: invalid choice: ['a'] (choose from 'a', 'b', 'c')

The validation of the choices is failing because the type function returns a 
list, not a single item.  Suggestions?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: argparse csv + choices

2011-03-31 Thread Neal Becker
Robert Kern wrote:

> On 3/30/11 10:32 AM, Neal Becker wrote:
>> I'm trying to combine 'choices' with a comma-separated list of options, so I
>> could do e.g.,
>>
>> --cheat=a,b
>>
>>  parser.add_argument ('--cheat', choices=('a','b','c'), type=lambda x:
>> x.split(','), default=[])
>>
>> test.py --cheat a
>>   error: argument --cheat: invalid choice: ['a'] (choose from 'a', 'b', 'c')
>>
>> The validation of choice is failing, because parse returns a list, not an
>> item. Suggestions?
> 
> Do the validation in the type function.
> 
> 
> import argparse
> 
> class ChoiceList(object):
>     def __init__(self, choices):
>         self.choices = choices
> 
>     def __repr__(self):
>         return '%s(%r)' % (type(self).__name__, self.choices)
> 
>     def __call__(self, csv):
>         args = csv.split(',')
>         remainder = sorted(set(args) - set(self.choices))
>         if remainder:
>             raise ValueError("invalid choices: %r (choose from %r)" %
>                              (remainder, self.choices))
>         return args
> 
> 
> parser = argparse.ArgumentParser()
> parser.add_argument('--cheat', type=ChoiceList(['a','b','c']), default=[])
> print parser.parse_args(['--cheat=a,b'])
> parser.parse_args(['--cheat=a,b,d'])
> 

Excellent!  Thanks!

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python ioctl

2011-04-14 Thread Neal Becker
Nitish Sharma wrote:

> Hi PyPpl,
> For my current project I have a kernel device driver and a user-space
> application. This user-space application is already provided to me, and
> written in python. I have to extend this application with some addition
> features, which involves communicating with kernel device driver through
> ioctl() interface.
> I am fairly new with Python and not able to grok how to provide "op" in
> ioctl syntax - fcntl.ioctl (fd, op[, arg[, mutate_flag]]). Operations
> supported by device driver, through ioctl, are of the form: IOCTL_SET_MSG
>  _IOR(MAGIC_NUMBER, 0, char*).
> It'd be great if some help can be provided about how to "encode" these
> operations in python to implement the desired functionality.
> 
> Regards
> Nitish
Here's some of my stuff.  It's specific to my device, but maybe you'll get some ideas.

 eioctl.py 
from ctypes import *

libc = CDLL('/lib/libc.so.6')

def set_ioctl_argtype(arg_type):
    libc.ioctl.argtypes = (c_int, c_int, arg_type)

IOC_WRITE = 0x1

_IOC_NRBITS   = 8
_IOC_TYPEBITS = 8
_IOC_SIZEBITS = 14
_IOC_DIRBITS  = 2

_IOC_NRSHIFT   = 0
_IOC_TYPESHIFT = _IOC_NRSHIFT + _IOC_NRBITS
_IOC_SIZESHIFT = _IOC_TYPESHIFT + _IOC_TYPEBITS
_IOC_DIRSHIFT  = _IOC_SIZESHIFT + _IOC_SIZEBITS


def IOC(dir, type, nr, size):
    return ((dir  << _IOC_DIRSHIFT) |
            (type << _IOC_TYPESHIFT) |
            (nr   << _IOC_NRSHIFT) |
            (size << _IOC_SIZESHIFT))

def ioctl(fd, request, args):
    return libc.ioctl(fd, request, args)
--

example of usage:

# Enable byte swap in driver
import fcntl, struct

from eioctl import IOC, IOC_WRITE

EOS_IOC_MAGIC = 0xF4

request = IOC(IOC_WRITE, EOS_IOC_MAGIC, 1, struct.calcsize('i'))
err = fcntl.ioctl(eos_fd, request, 1)



-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Get the IP address of WIFI interface

2011-05-15 Thread Neal Becker
Far.Runner wrote:

> Hi python experts:
> There are two network interfaces on my laptop: one is 100M Ethernet
> interface, the other is wifi interface, both are connected and has an ip
> address.
> The question is: How to get the ip address of the wifi interface in a python
> script without parsing the output of a shell command like "ipconfig" or
> "ifconfig"?
> 
> OS: Windows or Linux
> 
> F.R

Here's some useful snippets for Linux:

import csv
import fcntl
import socket
import struct

def get_default_if():
    f = open('/proc/net/route')
    for i in csv.DictReader(f, delimiter="\t"):
        if long(i['Destination'], 16) == 0:
            return i['Iface']
    return None

def get_ip_address(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return socket.inet_ntoa(fcntl.ioctl(
        s.fileno(),
        0x8915,  # SIOCGIFADDR
        struct.pack('256s', ifname[:15])
    )[20:24])


-- 
http://mail.python.org/mailman/listinfo/python-list


cPickle -> invalid signature

2011-05-17 Thread Neal Becker
What does it mean when cPickle.load says:
RuntimeError: invalid signature

Is binary format not portable?

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: cPickle -> invalid signature

2011-05-17 Thread Neal Becker
Gabriel Genellina wrote:

> On Tue, 17 May 2011 08:41:41 -0300, Neal Becker wrote:
> 
>> What does it mean when cPickle.load says:
>> RuntimeError: invalid signature
>>
>> Is binary format not portable?
> 
> Are you sure that's the actual error message?
> I cannot find such message anywhere in the sources.
> The pickle format is quite portable, even cross-version. As a generic
> answer, make sure you open the file in binary mode, both when writing and
> reading.
> 

Yes, that's the message.

Part of what is pickled is a numpy array.  I am writing on a 32-bit Linux 
system and reading on a 64-bit system.  Reading on the 64-bit system is no 
problem.

Maybe the message comes from numpy's unpickling?

-- 
http://mail.python.org/mailman/listinfo/python-list


when is filter test applied?

2017-10-03 Thread Neal Becker
In the following code (python3):

for rb in filter (lambda b : b in some_seq, seq):
  ... some code that might modify some_seq

I'm assuming that the test 'b in some_seq' is applied late, at the start of 
each iteration (but it doesn't seem to be working that way in my real code), 
so that if 'some_seq' is modified during a previous iteration the test is 
correctly performed on the latest version of 'some_seq' at the start of each 
iteration.  Is this correct, and is this guaranteed?
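For what it's worth, filter() in Python 3 is lazy: the predicate runs as each 
element is requested, so it sees the current state of the mutated object. A 
small self-contained demo (this relies on the lambda holding a reference to 
the set object, not a copy of it):

```python
seq = [1, 2, 3, 4, 5]
allowed = {1, 2, 3, 4, 5}

seen = []
for x in filter(lambda b: b in allowed, seq):
    seen.append(x)
    allowed.discard(x + 1)  # mutate "some_seq" during iteration

# Each even element was removed before filter() got around to testing it.
print(seen)
```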


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: when is filter test applied?

2017-10-03 Thread Neal Becker
I'm not certain that it isn't behaving as expected - my code is quite
complicated.

On Tue, Oct 3, 2017 at 11:35 AM Paul Moore  wrote:

> My intuition is that the lambda creates a closure that captures the
> value of some_seq. If that value is mutable, and "modify some_seq"
> means "mutate the value", then I'd expect each element of seq to be
> tested against the value of some_seq that is current at the time the
> test occurs, i.e. when the entry is generated from the filter.
>
> You say that doesn't happen, so my intuition (and yours) seems to be
> wrong. Can you provide a reproducible test case? I'd be inclined to
> run that through dis.dis to see what bytecode was produced.
>
> Paul
>
> On 3 October 2017 at 16:08, Neal Becker  wrote:
> > In the following code (python3):
> >
> > for rb in filter (lambda b : b in some_seq, seq):
> >   ... some code that might modify some_seq
> >
> > I'm assuming that the test 'b in some_seq' is applied late, at the start
> of
> > each iteration (but it doesn't seem to be working that way in my real
> code),
> > so that if 'some_seq' is modified during a previous iteration the test is
> > correctly performed on the latest version of 'some_seq' at the start of
> each
> > iteration.  Is this correct, and is this guaranteed?
> >
> >
> > --
> > https://mail.python.org/mailman/listinfo/python-list
>
-- 
https://mail.python.org/mailman/listinfo/python-list


f-string syntax deficiency?

2023-06-06 Thread Neal Becker
The following f-string does not parse and gives a syntax error on 3.11.3:

f'thruput/{"user" if opt.return else "cell"} vs. elevation\n'

However this expression, which is similar does parse correctly:

f'thruput/{"user" if True else "cell"} vs. elevation\n'

I don't see any workaround.  Parenthesizing doesn't help:
 f'thruput/{"user" if (opt.return) else "cell"} vs. elevation\n'

also gives a syntax error
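Editor's note: the culprit here is not the f-string machinery but that 
`return` is a reserved keyword, so the attribute access `opt.return` is a 
syntax error in any expression. One workaround sketch goes through 
`getattr` (the `opt` namespace below is invented for the demo):

```python
import argparse

# Build a namespace whose attribute is literally named 'return'.
opt = argparse.Namespace(**{'return': True})

# opt.return would be a SyntaxError; getattr sidesteps the keyword.
s = f'thruput/{"user" if getattr(opt, "return") else "cell"} vs. elevation\n'
print(s)
```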
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Recommendation for drawing graphs and creating tables, saving as PDF

2021-06-11 Thread Neal Becker
Jan Erik Moström wrote:

> I'm doing something that I've never done before and need some advice for
> suitable libraries.
> 
> I want to
> 
> a) create diagrams similar to this one
> https://www.dropbox.com/s/kyh7rxbcogvecs1/graph.png?dl=0 (but with more
> nodes) and save them as PDFs or some format that can easily be converted
> to PDFs
> 
> b) generate documents that contains text, lists, and tables with some
> styling. Here my idea was to save the info as markdown and create PDFs
> from those files, but if there is some other tools that gives me better
> control over the tables I'm interested in knowing about them.
> 
> I looked around around but could only find two types of libraries for a)
> libraries for creating histograms, bar charts, etc, b) very basic
> drawing tools that requires me to figure out the layout etc. I would
> prefer a library that would allow me to state "connect A to B", "connect
> C to B", "connect B to D", and the library would do the whole layout.
> 
> The closest I've found it to use markdown and mermaid or graphviz but
> ... PDFs (perhaps I should just forget about PDFs, then it should be
> enough to send people to a web page)
> 
> (and yes, I could obviously use LaTeX ...)
> 
> = jem

Like this?
https://pypi.org/project/blockdiag/

-- 
https://mail.python.org/mailman/listinfo/python-list


best way to ensure './' is at beginning of sys.path?

2017-02-03 Thread Neal Becker
I want to make sure any modules I build in the current directory override any 
others.  To do this, I'd like sys.path to always have './' at the beginning.

What's the best way to ensure this is always true whenever I run python3?

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: best way to ensure './' is at beginning of sys.path?

2017-02-04 Thread Neal Becker
Neal Becker wrote:

> I want to make sure any modules I build in the current directory override
> any others.  To do this, I'd like sys.path to always have './' at the
> beginning.
> 
> What's the best way to ensure this is always true whenever I run python3?

Sorry if I was unclear, let me try to describe the problem more precisely.

I have a library of modules I have written using boost::python.  They are 
all in a directory under my home directory called 'sigproc'.

In ~/.local/lib/python3.5/site-packages, I have

--- sigproc.pth
/home/nbecker
/home/nbecker/sigproc
---

The reason I have 2 here is so I could use either

import modA

or 

import sigproc.modA

although I almost always just use 
import modA
.


Now I have started experimenting with porting to pybind11 to replace 
boost::python.  I am working in a directory called pybind11-test.
I built modules there, with the same names as ones in sigproc.
What I observed, I believe, is that when I try in that directory,
import modA

it imported the old one in sigproc, not the new one in "./".

This behavior I found surprising.  I examined sys.path, and found it did not 
contain "./".

Then I prepended "./" to sys.path and found
import modA

appeared to correctly import the module in the current directory.
I think I want this behavior always, and was asking how to ensure it.

Thanks.
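One minimal way to get that behavior automatically (a sketch; dropping it 
into a `usercustomize.py` on the user site path would run it at every 
interpreter start) is:

```python
import sys

# Prepend the current directory so local builds shadow installed modules.
if sys.path[0] not in ('', '.'):
    sys.path.insert(0, '.')
```

Setting `PYTHONPATH=.` in the shell environment has a similar effect.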

-- 
https://mail.python.org/mailman/listinfo/python-list


profile guided optimization of loadable python modules?

2018-07-04 Thread Neal Becker
Has anyone tried to optimize shared libraries (for loadable Python modules) 
using GCC with profile-guided optimization?  Is it possible?
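In outline, GCC's usual two-pass scheme should apply to extension modules as 
well; a command sketch, where the package name and workload are placeholders:

```shell
# Pass 1: build with instrumentation, then exercise a representative workload.
CFLAGS="-O2 -fprofile-generate" pip install --no-binary :all: <your-package>
python -c "import your_module; your_module.typical_workload()"

# Pass 2: rebuild using the .gcda profile data produced above.
CFLAGS="-O2 -fprofile-use -fprofile-correction" \
    pip install --force-reinstall --no-binary :all: <your-package>
```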

Thanks,
Neal

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: clever exit of nested loops

2018-09-27 Thread Neal Becker
Christian Gollwitzer wrote:

> Am 26.09.18 um 12:28 schrieb Bart:
>> On 26/09/2018 10:10, Peter Otten wrote:
>>> class Break(Exception):
>>> pass
>>>
>>> try:
>>> for i in range(10):
>>> print(f'i: {i}')
>>> for j in range(10):
>>> print(f'\tj: {j}')
>>> for k in range(10):
>>> print(f'\t\tk: {k}')
>>>
>>> if condition(i, j, k):
>>> raise Break
>>> except Break:
>>> pass
>>>
>> 
>> For all such 'solutions', the words 'sledgehammer' and 'nut' spring to
>> mind.
>> 
>> Remember the requirement is very simple, to 'break out of a nested loop'
>> (and usually this will be to break out of the outermost loop). What
>> you're looking is a statement which is a minor variation on 'break'.
> 
> Which is exactly what it does. "raise Break" is a minor variation on
> "break".
> 
>> Not
>> to have to exercise your imagination in devising the most convoluted
>> code possible.
> 
> To the contrary, I do think this solution looks not "convoluted" but
> rather clear. Also, in Python some other "exceptions" are used for a
> similar purpose - for example "StopIteration" to signal that an iterator
> is exhausted. One might consider to call these "signals" instead of
> "exceptions", because there is nothing exceptional, apart from the
> control flow.
> 
> Christian
> 
> 

I've done the same before myself (exit from nested blocks to a containing 
block using an exception), but it does violate the principle that exceptions 
should be used for exceptional conditions.
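An alternative that avoids the exception entirely is to hoist the loops into 
a function, where a plain return exits every level at once; a small sketch:

```python
def find(condition):
    # 'return' unwinds all three loops in one step.
    for i in range(10):
        for j in range(10):
            for k in range(10):
                if condition(i, j, k):
                    return i, j, k
    return None  # nothing matched

hit = find(lambda i, j, k: i + j + k == 5)
print(hit)
```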

-- 
https://mail.python.org/mailman/listinfo/python-list


I'd like to add -march=native to my pip builds

2016-04-08 Thread Neal Becker
I'd like to add -march=native to my pip builds.  How can I do this?


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: I'd like to add -march=native to my pip builds

2016-04-08 Thread Neal Becker
Stefan Behnel wrote:

> CFLAGS="-O3 -march=native"  pip install  --no-use-wheel

Thanks, not bad.  But no way to put this in a config file so I don't have to 
remember it, I guess?

-- 
https://mail.python.org/mailman/listinfo/python-list


Just-in-Time Static Type Checking for Dynamic Languages

2016-04-20 Thread Neal Becker
I saw this article, which might interest some of you.  It discusses 
application to Ruby, but perhaps might have ideas useful for Python.

https://arxiv.org/abs/1604.03641

-- 
https://mail.python.org/mailman/listinfo/python-list


pickle and module versioning

2018-12-17 Thread Neal Becker
I find pickle really handy for saving results from my (simulation) 
experiments.  But recently I realized there is an issue.  Reading the saved 
results requires loading the pickle, which in turn will load any referenced 
modules.  Problem is, what if the modules have changed?

For example, I just re-implemented a python module in C++, in a not quite 
compatible way.  AFAIK, my only choice to not break my setup is to choose a 
different name for the new module.

Has anyone else run into this issue and have any ideas?  I can imagine 
perhaps some kind of module versioning could be used (although haven't 
really thought through the details).

Thanks,
Neal

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: How to force the path of a lib ?

2019-01-23 Thread Neal Becker
dieter wrote:

> Vincent Vande Vyvre  writes:
>> I am working on a python3 binding of a C++ lib. This lib is installed
>> in my system but the latest version of this lib introduce several
>> incompatibilities. So I need to update my python binding.
>>
>> I'm working into a virtual environment (py370_venv) python-3.7.0 is
>> installed into ./localpythons/lib/python3.7
>>
>> So, the paths are:
>> # python-3.7.0
>> ~/./localpythons/lib/python3.7/
>> # my binding python -> libexiv2
>> ~/./localpythons/lib/python3.7/site-packages/pyexiv2/*.py
>> ~/./localpythons/lib/python3.7/site-packages/pyexiv2/libexiv2python.cpython-37m-x86_64-linux-gnu.so
>>
>> # and the latest version of libexiv2
>> ~/CPython/py370_venv/lib/libexiv2.so.0.27.0
>>
>> All theses path are in the sys.path
>>
>> Now I test my binding:
> import pyexiv2
>> Traceback (most recent call last):
>> File "<stdin>", line 1, in <module>
>> File "/home/vincent/CPython/py370_venv/lib/python3.7/site-packages/py3exiv2-0.1.0-py3.7-linux-x86_64.egg/pyexiv2/__init__.py", line 60, in <module>
>> import libexiv2python
>> ImportError: /home/vincent/CPython/py370_venv/lib/python3.7/site-packages/py3exiv2-0.1.0-py3.7-linux-x86_64.egg/libexiv2python.cpython-37m-x86_64-linux-gnu.so:
>> undefined symbol: _ZN5Exiv27DataBufC1ERKNS_10DataBufRefE
>
>>
>> Checking the libexiv2.so the symbol exists
>> ~/CPython/py370_venv/lib$ objdump -T libexiv2.so.0.27.0
>> 
>> 0012c8d0 gDF .text000f  Base
>> _ZN5Exiv27DataBufC1ERKNS_10DataBufRefE
>> 
>>
>> But it is not present into my old libexiv2 system, so I presume python
>> use /usr/lib/x86_64-linux-gnu/libexiv2.so.14.0.0  (The old 0.25) instead
>> of ~/CPython/py370_venv/lib/libexiv2.so.0.27.0 (The latest 0.27)
>>
>> How can I solve that ?
> 
> To load external C/C++ shared objects, the dynamic linkage loader
> (ldd) is used. "ldd" does not look at Python's "sys.path".
> Unless configured differently, it looks at standard places
> (such as "/usr/lib/x86_64-linux-gnu").
> 
> You have several options to tell "ldd" where to look for
> shared objects:
> 
>  * use the envvar "LD_LIBRARY_PATH"
>This is a "path variable" similar to the shell's "PATH",
>telling the dynamic loader in which directories (before
>the standard ones) to look for shared objects
> 
>  * use special linker options (when you link your Python
>extension shared object) to tell where dependent shared
>object can be found.
> 

To follow up on that last point, look up --rpath and related.
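For example, with GNU ld the runtime search path can be baked in at link time 
(the directory shown is the venv path from the thread); this is a linker-flag 
fragment, not a complete build command:

```shell
# Embed an rpath so ld.so checks the venv's lib directory before the
# system one when resolving libexiv2 at import time.
LDFLAGS="-L$HOME/CPython/py370_venv/lib -Wl,-rpath,$HOME/CPython/py370_venv/lib"
```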

-- 
https://mail.python.org/mailman/listinfo/python-list


exit 2 levels of if/else and execute common code

2019-02-11 Thread Neal Becker
I have code with structure:
```
if cond1:
  [some code]
  if cond2: #where cond2 depends on the above [some code]
    [ more code]
  else:
    [ do xxyy ]
[ do xxyy ]
else:
  [ do the same xxyy as above ]
```

So what's the best style to handle this?  As coded, it violates DRY.  
Try/except could be used with a custom exception, but that seems a bit 
heavy-handed.  Suggestions?
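One conventional refactor is to give the shared tail a name, so it is written 
once and called from both branches; a schematic sketch with invented names:

```python
def xxyy():
    # The two-line common code lives here, once.
    return 'fallback'

def handle(cond1, cond2_from):
    if cond1:
        partial = 'some code'        # [some code]
        if cond2_from(partial):      # cond2 depends on [some code]
            return 'more code'       # [more code]
        return xxyy()
    return xxyy()

print(handle(True, lambda p: True))
print(handle(False, lambda p: True))
```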

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: exit 2 levels of if/else and execute common code

2019-02-11 Thread Neal Becker
Rhodri James wrote:

> On 11/02/2019 15:25, Neal Becker wrote:
>> I have code with structure:
>> ```
>> if cond1:
>>[some code]
>>if cond2: #where cond2 depends on the above [some code]
>>  [ more code]
>> 
>>else:
>>  [ do xxyy ]
>> else:
>>[ do the same xxyy as above ]
>> ```
>> 
>> So what's the best style to handle this?  As coded, it violates DRY.
>> Try/except could be used with a custom exception, but that seems a bit
>> heavy
>> handed.  Suggestions?
> 
> If it's trivial, ignore DRY.  That's making work for the sake of making
> work in such a situation.
> 
> If it isn't trivial, is there any reason not to put the common code in a
> function?
> 

Well the common code is 2 lines.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: exit 2 levels of if/else and execute common code

2019-02-11 Thread Neal Becker
Chris Angelico wrote:

> On Tue, Feb 12, 2019 at 2:27 AM Neal Becker  wrote:
>>
>> I have code with structure:
>> ```
>> if cond1:
>>   [some code]
>>   if cond2: #where cond2 depends on the above [some code]
>> [ more code]
>>
>>   else:
>> [ do xxyy ]
>> else:
>>   [ do the same xxyy as above ]
>> ```
>>
>> So what's the best style to handle this?  As coded, it violates DRY.
>> Try/except could be used with a custom exception, but that seems a bit
>> heavy
>> handed.  Suggestions?
> 
> One common way to do this is to toss a "return" after the cond2 block.
> Means this has to be the end of a function, but that's usually not
> hard. Or, as Rhodri suggested, refactor xxyy into a function, which
> you then call twice.
> 
> ChrisA

Not bad, but turns out it would be the same return statement for both the 
normal return path (cond1 and cond2 satisfied) as well as the abnormal 
return, so not really much of an improvement.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: exit 2 levels of if/else and execute common code

2019-02-11 Thread Neal Becker
Chris Angelico wrote:

> On Tue, Feb 12, 2019 at 3:21 AM Neal Becker  wrote:
>>
>> Chris Angelico wrote:
>>
>> > On Tue, Feb 12, 2019 at 2:27 AM Neal Becker 
>> > wrote:
>> >>
>> >> I have code with structure:
>> >> ```
>> >> if cond1:
>> >>   [some code]
>> >>   if cond2: #where cond2 depends on the above [some code]
>> >> [ more code]
>> >>
>> >>   else:
>> >> [ do xxyy ]
>> >> else:
>> >>   [ do the same xxyy as above ]
>> >> ```
>> >>
>> >> So what's the best style to handle this?  As coded, it violates DRY.
>> >> Try/except could be used with a custom exception, but that seems a bit
>> >> heavy
>> >> handed.  Suggestions?
>> >
>> > One common way to do this is to toss a "return" after the cond2 block.
>> > Means this has to be the end of a function, but that's usually not
>> > hard. Or, as Rhodri suggested, refactor xxyy into a function, which
>> > you then call twice.
>> >
>> > ChrisA
>>
>> Not bad, but turns out it would be the same return statement for both the
>> normal return path (cond1 and cond2 satisfied) as well as the abnormal
>> return, so not really much of an improvement.
> 
> Not sure what you mean there. The result would be something like this:
> 
> def frobnicate():
> if cond1:
> do_stuff()
> if cond2:
> do_more_stuff()
> return
> do_other_stuff()
> 
> ChrisA
sorry, I left out the return:

if cond1:
   [some code]
   if cond2: #where cond2 depends on the above [some code]
 [ more code]

   else:
 [ do xxyy ]
else:
   [ do the same xxyy as above ]
return a, b, c

So if we return normally, or return via some other path, the return 
statement is the same, and would be duplicated.

-- 
https://mail.python.org/mailman/listinfo/python-list


@staticmethod, backward compatibility?

2005-09-27 Thread Neal Becker
How can I write code to take advantage of new decorator syntax, while
allowing backward compatibility?

I almost want a preprocessor.

#if PYTHON_VERSION >= 2.4
@staticmethod
...


Since python < 2.4 will just choke on @staticmethod, how can I do this?
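One portable answer from that era: the decorator line is just sugar for a 
call, and the call form parses fine on older interpreters; a sketch:

```python
class C:
    def add(x, y):
        return x + y
    # Equivalent to putting '@staticmethod' above 'def add', but this
    # spelling also parses on Python < 2.4.
    add = staticmethod(add)

print(C.add(2, 3))
```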

-- 
http://mail.python.org/mailman/listinfo/python-list


Compile fails on x86_64

2005-09-30 Thread Neal Becker
In file included from scipy/base/src/multiarraymodule.c:44:
scipy/base/src/arrayobject.c: In function 'array_frominterface':
scipy/base/src/arrayobject.c:5151: warning: passing argument 3 of
'PyArray_New' from incompatible pointer type
error: Command "gcc -pthread -fno-strict-aliasing -DNDEBUG -O2 -g -pipe
-Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -mtune=nocona -D_GNU_SOURCE -fPIC
-O2 -g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -m64 -mtune=nocona -fPIC
-Ibuild/src/scipy/base/src -Iscipy/base/include -Ibuild/src/scipy/base
-Iscipy/base/src -I/usr/include/python2.4 -c
scipy/base/src/multiarraymodule.c -o
build/temp.linux-x86_64-2.4/scipy/base/src/multiarraymodule.o" failed with
exit status 1
error: Bad exit status from /var/tmp/rpm/rpm-tmp.96024 (%build)


-- 
http://mail.python.org/mailman/listinfo/python-list


compile fails on x86_64 (more)

2005-09-30 Thread Neal Becker
In file included from scipy/base/src/multiarraymodule.c:44:
scipy/base/src/arrayobject.c:41: error: conflicting types for
'PyArray_PyIntAsIntp'
build/src/scipy/base/__multiarray_api.h:147: error: previous declaration of
'PyArray_PyIntAsIntp' was here


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: isatty() for file-like objects: Implement or not?

2005-10-04 Thread Neal Norwitz
HOWARD GOLDEN wrote:
> The standard documentation for isatty() says:
>
> "Return True if the file is connected to a tty(-like) device, else
> False. Note: If a file-like object is not associated with a real file,
> this method should not be implemented."
>
> In his book, "Text Processing in Python," David Mertz says: "...
> implementing it to always return 0 is probably a better approach."
>
> My reaction is to agree with Mertz.

I agree, I think the doc is wrong, e.g. StringIO has isatty() which
returns False. I think the doc was probably a thinko (or things have
changed) since Guido checked it in over 3 years ago.
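A minimal sketch of the approach being agreed on here (the class name is illustrative):

```python
class MemoryLog(object):
    """A file-like object not backed by a real file."""
    def __init__(self):
        self.chunks = []
    def write(self, text):
        self.chunks.append(text)
    def isatty(self):
        # Implement isatty() and report False, rather than
        # omitting the method as the current doc suggests.
        return False

log = MemoryLog()
log.write('hello')
print(log.isatty())  # False
```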

It would be great if you can provide a patch or at least a bug report
on SourceForge.

Thanks,
n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python 2.4.2 HPUX-PARISC compile issues

2005-10-05 Thread Neal Norwitz
[EMAIL PROTECTED] wrote:
> When compiling HPUX for PARISC with the following environemnt variables
> I get the following errors.
>
> CFLAGS=+DD64 -fast
> CC=aCC
> LDFLAGS=+DD64
>
> What do I need to do in order to get this to compile?

This info should be in the README file for 2.4.2:

+   To build a 64-bit executable on an Itanium 2 system using HP's
+   compiler, use these environment variables:
+
+   CC=cc
+   CXX=aCC
+   BASECFLAGS="+DD64"
+   LDFLAGS="+DD64 -lxnet"
+
+   and call configure as:
+
+   ./configure --without-gcc

Did you do that?

If so, you will probably have to track this down.  LONG_BIT is supposed
to be there according to POSIX AFAIK.  It would mean the right header
file wasn't included (unlikely) or that some #define prevented it from
being defined (more likely).

Good luck,
n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Idle bytecode query on apparently unreachable returns

2005-10-09 Thread Neal Norwitz
Tom Anderson wrote:
> Evening all,
>
> Here's a brief chat with the interpretator:

[snip]

> What puzzles me, though, are bytecodes 17, 39 and 42 - surely these aren't
> reachable? Does the compiler just throw in a default 'return None'
> epilogue, with routes there from every code path, even when it's not
> needed? If so, why?

I think the last RETURN_VALUE (None) isn't thrown in unless there is
some sort of conditional that precedes it, as in this example.

As to why:  it's easier and no one has done anything about fixing it.
If you (or anyone else) are interested, the code is in
Python/compile.c.  Search for the optimize_code() function.
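You can see the epilogue for yourself with the dis module (bytecode details vary across versions, so no output is shown):

```python
import dis

def f(x):
    if x:
        return 1
    # No explicit return on this path: the compiler still emits a
    # LOAD_CONST None / RETURN_VALUE epilogue at the end.

dis.dis(f)
```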

n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python reliability

2005-10-09 Thread Neal Norwitz
Ville Voipio wrote:
>
> The software should be running continously for
> practically forever (at least a year without a reboot).
> Is the Python interpreter (on Linux) stable and
> leak-free enough to achieve this?

Jp gave you the answer that he has done this.

I've spent quite a bit of time since 2.1 days trying to improve the
reliability.  I think it has gotten much better.  Valgrind is run on
(nearly) every release.  We look for various kinds of problems.  I try
to review C code for these sorts of problems etc.

There are very few known issues that can crash the interpreter.  I
don't know of any memory leaks.  socket code is pretty well tested and
heavily used, so you should be in fairly safe territory, particularly
on Unix.

n

-- 
http://mail.python.org/mailman/listinfo/python-list


Can module access global from __main__?

2005-10-11 Thread Neal Becker
Suppose I have a main program, e.g., A.py.  In A.py we have:

X = 2
import B

Now B is a module B.py.  In B, how can we access the value of X?


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Can module access global from __main__?

2005-10-11 Thread Neal Becker
Everything you said is absolutely correct.  I was being lazy.  I had a main
program in module, and wanted to reorganize it, putting most of it into a
new module.  Being python, it actually only took a small effort to fix this
properly, so that in B.py, what were global variables are now passed as
arguments to class constructors and functions.

Still curious about the answer.  If I know that I am imported from __main__,
then I can access X as sys.modules['__main__'].X.  In general, I don't
know how to determine who is importing me.
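For completeness, the __main__ lookup mentioned above can be sketched like this (get_main_x is an illustrative helper, not from the original post):

```python
# B.py -- reads a global X from whatever module is running as __main__
import sys

def get_main_x():
    main = sys.modules['__main__']
    return getattr(main, 'X', None)
```

Note this only reaches globals of the script running as __main__, not of an arbitrary importing module.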

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Can module access global from __main__?

2005-10-11 Thread Neal Norwitz
Steve Holden wrote:
> Neal Becker wrote:
> >
> > Still curious about the answer.  If I know that I am imported from __main__,
> > then I can access X as sys.modules['__main__'].X.  In general, I don't
> > know how to determine who is importing me.
> >
> I don't think you can without huge amounts of introspection - it's even
> worse than the "what's the name of this object" question that seems to
> come up regularly.

import sys

frame = sys._getframe()
caller = frame.f_back
print 'Called from', caller.f_code.co_filename, caller.f_lineno
# for more info, look into the traceback module

> A module can be imported from multiple modules, and
> you only get to execute code on the first import.
> Even then (on the first import) I am not sure how you could introspect
> to find the answer you want.

You can install your own __import__() hook to catch all imports.

Just because you can do something, it doesn't follow that you should do
it, especially in this case.  Unless you really, really need these
tricks, they shouldn't be used.

n

-- 
http://mail.python.org/mailman/listinfo/python-list


1-liner to iterate over infinite sequence of integers?

2005-10-13 Thread Neal Becker
I can do this with a generator:

def integers():
    x = 1
    while True:
        yield x
        x += 1

for i in integers(): 

Is there a more elegant/concise way?
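itertools.count (available since Python 2.3) is the usual one-liner for this:

```python
import itertools

collected = []
for i in itertools.count(1):   # yields 1, 2, 3, ... forever
    if i > 5:
        break
    collected.append(i)
print(collected)  # [1, 2, 3, 4, 5]
```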

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python interpreter

2005-10-13 Thread Neal Norwitz
g.franzkowiak wrote:
> Hi everybody,
>
> my interest is for the internals of the Python interpreter.
>
> I've used up to now FORTH for something and this indirect interpreter is
>  very smart.
> --- ASM ---
>
> Where can I find informations like this for Python ?

Depends on what you want.  If you want to see the disassembled bytes
similar to what I cut out, see the dis module.  (e.g.,
dis.dis(foo_func))

If you want to know how the byte codes are created, it's mostly in
Python/compile.c.  If you want to know how the byte codes are executed,
it's mostly in Python/ceval.c.

All base Python objects implemented in C are under Objects/.  All
standard modules implemented in C are under Modules/.

Get the source and build it.  It's quite readable.

n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python script produces "sem_trywait: Permission denied"

2005-10-18 Thread Neal Norwitz
Mark E. Hamilton wrote:
> Sorry, I probably should have re-stated the problem:
>
> We're using Python 2.3.5 on AIX 5.2, and get the follow error messages
> from some of our code. I haven't yet tracked down exactly where it's
> coming from:
>
> sem_trywait: Permission denied
> sem_wait: Permission denied
> sem_post: Permission denied

I would be very concerned about these.  Permission denied is
somewhat baffling to me.  man sem_wait on Linux doesn't show EPERM as a
possible error condition.

The code that likely generates these messages is in
Python/thread_pthread.h near line  319.

Do you have lots of threads?  Do you make heavy use of semaphores?
Maybe they are running out or something.  Do you know if your locking
is really working?

> We don't run these scripts as root, so I can't say whether they work as
> root. I suspect they would, though, since root has permissions to do
> anything.

I'm not sure.  I don't know what the Permission denied really means.

I wonder if there is some weird compiler optimization thing going on.  Not
that I'm blaming the compiler; it could well be a python problem.  I
just don't know.

Do you have any memory debugging tools that you can run python under to
see if there's something weird python is doing?  Valgrind should work
on PPC now, but you may need to apply a patch, I'm not sure PPC support
is mainline in v3.  Other possibilities include purify if that works on
AIX or sentinel.

n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: make: circular dependency for Modules/signalmodule.o

2005-10-18 Thread Neal Norwitz
James Buchanan wrote:
> Hi group,
>
> I'm preparing Python 2.4.2 for the upcoming Minix 3.x release, and I
> have problems with make.  configure runs fine and creates the makefile,
> but right at the end ends with an error about a circular dependency in
> Modules/signalmodule.o.

I've never heard of this problem.  The Makefile is generated by
configure so this is possibly a configure issue.  In my (generated)
Makefile, signalmodule.o is listed in MODOBJS, but not in SIGNAL_OBJS.
Maybe your signalmodule.o is listed in both?

Search through the Makefile for signalmodule and see what you can find.

Mine has two long lines for the rules which cause signalmodule.c to be
compiled.

Modules/signalmodule.o: $(srcdir)/Modules/signalmodule.c; $(CC) $(PY_CFLAGS) -c $(srcdir)/Modules/signalmodule.c -o Modules/signalmodule.o

Modules/signalmodule$(SO): Modules/signalmodule.o; $(LDSHARED) Modules/signalmodule.o -o Modules/signalmodule$(SO)

Good luck,
n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Extention Woes

2005-10-19 Thread Neal Norwitz
Tuvas wrote:
> Forgot, var declartions
>
> int can_han;
> int com;
> char len;
> char dat[8];

That should probably be:

 int len;
 char *dat;

IIRC, "z" returns the internal string pointer.  "#" is definitely not
going to return a char.  I'm pretty sure it returns an int and not a
long.

n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python gc performance in large apps

2005-10-22 Thread Neal Norwitz
Jp Calderone wrote:
> On Fri, 21 Oct 2005 16:13:09 -0400, Robby Dermody <[EMAIL PROTECTED]> wrote:
> >
> > [snip - it leaks memory]
>
> One thing to consider is that the process may be growing in size, not because 
> garbage objects are not being freed, but because objects which should be 
> garbage are being held onto by application-level code.

This is a big problem with Java too.  It's also likely to be a large
source of the memory growth here, given that there isn't much cyclic
garbage.  I'm assuming that memory leaks in the python core are going
to be a small percentage of the total.  (Probably also true even if
there are memory leaks in Twisted, etc.)  It's easy to keep data
around without realizing it.

I don't have any particular insight into this problem.  I think Zope
servers can run a long time without similar issues, so I think (certainly
hope) it's not endemic to the Python core.  I don't recall any
significant memory leaks fixed between 2.3 and current CVS.  But it
would be interesting to try your app on 2.4 at least to see if it
works.  CVS would also be interesting.

You might want to consider building your own python configuring
--with-pydebug.  This will cause your program to run slower and consume
more memory, but it has additional information available to help find
reference leaks.

Definitely also run under valgrind if possible.  Given the size, I
don't know if electric fence or dbmalloc are realistic options.

Feel free to mail me if you need help with valgrind etc.  I'm very
curious what the root cause of your problem is.  It's possible you are
exercising code in python that isn't commonly used and so we haven't
found a problem yet.  Also consider looking into the issues of the
third party libraries.  Jp mentioned some problems with Twisted stuff.

It would be good if you could provide small test cases that create the
problems you have encountered.

n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: dictionary that have functions with arguments

2005-11-02 Thread Neal Norwitz
Ron Adam wrote:
>
> Eval or exec aren't needed.  Normally you would just do...
>
> execfunc['key1'](**args)
>
> If your arguments are stored ahead of time with your function...
>
>
> You could then do...
>
> func, args = execfunc['key1']
> func(**args)

Interesting that both Ron and Alex made the same mistake.  Hmmm, makes
me wonder if they are two people or not...

If args is a tuple, it should be:

  func(*args)

If you want the full generality and use keyword args:

  func(*args, **kwargs)

kwargs would be a dictionary with string keys.

E.g.,

  execfunc = {'key1':(func1, (1,), {'keyarg': 42})}
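Putting it together, a runnable sketch of the dispatch-table pattern (the function and keys are illustrative):

```python
def scale(value, factor=1):
    return value * factor

# Each entry stores the callable, positional args, and keyword args.
dispatch = {'key1': (scale, (21,), {'factor': 2})}

func, args, kwargs = dispatch['key1']
print(func(*args, **kwargs))  # 42
```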

HTH,
n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: shared library search path

2005-11-03 Thread Neal Becker
Stefan Arentz wrote:

> 
> Hi. I've wrapped a C++ class with Boost.Python and that works great. But,
> I am now packaging my application so that it can be distributed. The
> structure is basically this:
> 
>  .../bin/foo.py
>  .../lib/foo.so
>  .../lib/bar.py
> 
> In foo.py I do the following:
> 
>  sys.path.append(os.path.dirname(sys.path[0]) + '/lib')
> 
> and this allows foo.py to import bar. Great.
> 
> But, the foo.so cannot be imported. The import only succeeds if I place
> foo.so next to foo.py in the bin directory.
> 
> I searched through the 2.4.2 documentation on python.org but I can't find
> a proper explanation on how the shared library loader works.
> 
> Does anyone understand what it going on here?
> 
>  S.
> 

No, but here is the code I use:
import sys
sys.path.append ('../wrap')


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: best way to discover this process's current memory usage, cross-platform?

2005-11-14 Thread Neal Norwitz
Alex Martelli wrote:
>
> So, I thought I'd turn to the "wisdom of crowds"... how would YOU guys
> go about adding to your automated regression tests one that checks that
> a certain memory leak has not recurred, as cross-platform as feasible?
> In particular, how would you code _memsize() "cross-platformly"?  (I can
> easily use C rather than Python if needed, adding it as an auxiliary
> function for testing purposes to my existing extension).

If you are doing Unix, can you use getrusage(2)?

>>> import resource
>>> r = resource.getrusage(resource.RUSAGE_SELF)
>>> print r[2:5]

I get zeroes on my gentoo amd64 box.  Not sure why.  I thought maybe it
was Python, but C gives the same results.

Another possibility is to call sbrk(0), which should return the top of
the heap.  You could then return this value and check it.  It requires
a tiny C module, but should be easy and work on most unixes.  You can
determine the direction the heap grows by comparing it with id(0), which
should have been allocated early in the interpreter's life.

I realize this isn't perfect as memory becomes fragmented, but it might
work.  Since 2.3 and beyond use pymalloc, fragmentation may not be much
of an issue, as memory is allocated in a big hunk, then doled out as
necessary.

These techniques could apply to Windows with some caveats.  If you are
interested in Windows, see:
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnucmg/html/UCMGch09.asp

Can't think of anything fool-proof though.

HTH,
n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: best way to discover this process's current memory usage, cross-platform?

2005-11-15 Thread Neal Norwitz
Alex Martelli wrote:
> matt <[EMAIL PROTECTED]> wrote:
>
> > Perhaps you could extend Valgrind (http://www.valgrind.org) so it works
> > with python C extensions?  (x86 only)
>
> Alas, if it's x86 only I won't even look into the task (which does sound
> quite daunting as the way to solve the apparently-elementary question
> "how much virtual memory is this process using right now?"...!), since I
> definitely cannot drop support for all PPC-based Macs (nor would I WANT
> to, since they're my favourite platform anyway).

Valgrind actually runs on PPC (32 only?) and amd64, but I don't think
that's the way to go for this problem.

Here's a really screwy thought that I think should be portable to all
Unixes which have dynamic linking.  LD_PRELOAD.

You can create your own version of malloc (and friends) and free.  You
intercept each call to malloc and free (by making use of LD_PRELOAD),
keep track of the info (pointers and size) and pass the call along to
the real malloc/free.  You then have all information you should need.
It increases the scope of the problem, but I think it makes it soluble
and somewhat cross-platform.  Using LD_PRELOAD, requires the app be
dynamically linked which shouldn't be too big of a deal.  If you are
using C++, you can hook into new/delete directly.

n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Any college offering Python short term course?

2005-11-21 Thread Neal Norwitz
There is the BayPiggies user group:  [EMAIL PROTECTED]  It meets
monthly, alternating between Mt. View (Google) and San Bruno (IronPort).

n
--

bruce wrote:
> hey...
>
> i'm looking for classes (advanced) in python/php in the bay area as well...
> actually i'm looking for the students/teachers/profs of these classes... any
> idea as to how to find them. calling the various schools hasn't really been
> that helpful. The schools/institutions haven't had a good/large selection...
> it appears that some of the classes are taught by adjunct/part-time faculty,
> and they're not that easy to get to...
>
> if anybody knows of user-groups that also have this kind of talent, i'd
> appreciate it as well...
>
> send responses to the list as well!!!
>
> thanks
>
> -bruce
>
>
> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] Behalf
> Of arches73
> Sent: Sunday, November 20, 2005 4:04 PM
> To: [email protected]
> Subject: Any college offering Python short term course?
>
>
> Hi,
>
> I want to learn Python.  I appreciate if someone point me to the
> colleges / institutions offering any type of course in Python
> programming in the Bay area CA. Please send me the links to my email.
>
> Thanks,
> Arches
> 
> 
> 
> --
> http://mail.python.org/mailman/listinfo/python-list

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: 2.4.2 on AIX 4.3 make fails on threading

2005-11-22 Thread Neal Norwitz
Paul Watson wrote:
> When I try to build 2.4.2 on AIX 4.3, it fails on missing thread
> objects.  I ran ./configure --without-threads --without-gcc.
>
> Before using --without-threads I had several .pthread* symbols missing.

Perhaps you need to add -lpthread to the link line.  This should be
able to work.  What is the link line output from make?

> Can anyone
> suggest a configuration or some change that I can make to cause this to
> build correctly?  Thanks.
>
> ld: 0711-317 ERROR: Undefined symbol: ._PyGILState_NoteThreadState

I think that problem has been fixed.  Try downloading this file and
replace the one in your build directory:


http://svn.python.org/projects/python/branches/release24-maint/Python/pystate.c

n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Singleton and C extensions

2005-11-25 Thread Neal Norwitz
Emmanuel Briot wrote:
>
> I am not really good at python, but I was trying to implement the
> singleton design pattern in C, so that for instance calling the constructor
>
>ed = Editor ("foo")
>
> would either return an existing instance of Editor currently editing
> "foo", or would create a new instance.

Hmm, if there can be more than one Editor I wouldn't call it a
singleton.
But this should do what you want:

class Editor(object):
    _cache = {}
    def __init__(self, arg):
        self._cache[arg] = self
    def __new__(cls, arg):
        if arg in cls._cache:
            return cls._cache[arg]
        return object.__new__(cls)

> Has any one an example on how to do that in C, or maybe even at the
> python level itself, and I will try to adapt it ?

C is a *lot* more work and tricky too.

hth,
n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: A bug in struct module on the 64-bit platform?

2005-11-30 Thread Neal Norwitz
[EMAIL PROTECTED] wrote:
> Hi,
>
> I have a user who complained about how "struct" module computes C
> struct data size on Itanium2 based 64-bit machine.

I wouldn't be surprised, but I don't understand the problem.

> >>> struct.calcsize('idi')
> 16
> >>> struct.calcsize('idid')
> 24
> >>> struct.calcsize('did')
> 20

These are what I would expect on a 32 or 64 bit platform.  i == C int, d
== C double (a Python float).  ints are typically 4 bytes even on 64 bit
platforms.  If you want 8 byte integers, you typically need to use longs
(format letter is ell, l).
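The sizes come from native C alignment rules, which struct can demonstrate directly (native-mode results below are typical for common 64-bit platforms, not guaranteed):

```python
import struct

# Native mode: an 8-byte double must be 8-byte aligned, so 'id'
# is int (4) + padding (4) + double (8).
print(struct.calcsize('id'))    # typically 16

# Standard mode ('='): no padding, fixed sizes.
print(struct.calcsize('=id'))   # always 12 (4 + 8)
```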

You didn't say which version of python, so it's possible this was a bug
that was fixed too.

On my system:

python: ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV), for
GNU/Linux 2.4.1, dynamically linked (uses shared libs), not stripped

>>> struct.calcsize('l') #that's a lowercase ell
8

If you think it's a bug, you should file a bug report on source forge.

n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: efficient 'tail' implementation

2005-12-08 Thread Neal Becker
[EMAIL PROTECTED] wrote:

> hi
> 
> I have a file which is very large eg over 200Mb , and i am going to use
> python to code  a "tail"
> command to get the last few lines of the file. What is a good algorithm
> for this type of task in python for very big files?
> Initially, i thought of reading everything into an array from the file
> and just get the last few elements (lines) but since it's a very big
> file, don't think is efficient.
> thanks
> 

You should look at pyinotify.  I assume we're talking linux here.
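pyinotify is for watching a file as it grows; for the one-shot "last few lines" question, a common approach is to seek to the end and read fixed-size blocks backwards until enough newlines are found. A sketch (the block size is an arbitrary choice):

```python
def tail(path, n=10, block_size=4096):
    """Return the last n lines of a file without reading all of it."""
    with open(path, 'rb') as f:
        f.seek(0, 2)                 # jump to end of file
        end = f.tell()
        pos = end
        data = b''
        # Read backwards until we have at least n newlines (or hit the start).
        while pos > 0 and data.count(b'\n') <= n:
            pos = max(0, pos - block_size)
            f.seek(pos)
            data = f.read(end - pos)
        return data.splitlines()[-n:]
```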

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: tkinter socket client ?

2005-01-21 Thread Neal Norwitz
You are probably looking for Tkinter.createfilehandler().  Here are
some snippets to get you started:

tk_reactor = Tkinter._tkinter
self.sd = socket(AF_INET, SOCK_STREAM)
self.sd.connect((HOST, PORT))
tk_reactor.createfilehandler(self.sd, Tkinter.READABLE,
self.handle_input)

def handle_input(self, sd, mask):
data = self.sd.recv(SIZE)
(Sorry if the formatting is busted, blame google groups.)

HTH,
Neal

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Recommendations for CVS systems

2005-08-10 Thread Neal Becker
[EMAIL PROTECTED] wrote:

> I was wondering if anyone could make recomendations/comments about CVS
> systems, their experiences and what perhaps the strengths of each.
> 
> Currently we have 2 developers but expect to grow to perhaps 5.
> 
> Most of the developement is Python, but some C, Javascript, HTML, etc.
> 
> The IDE what have been using/experimenting with are drPython and
> eclipse with PyDev.
> 
> 

For a python newsgroup, you are required to consider mercurial.  It's not
ready for production use yet, but is making rapid progress, and many
(including myself) are using it.

-- 
http://mail.python.org/mailman/listinfo/python-list


PyChecker lives, version 0.8.15 released

2005-08-31 Thread Neal Norwitz
Special thanks to Ken Pronovici.  He did a lot of work for this
release and helped ensure it occurred.

Version 0.8.15 of PyChecker is available.  It's been over a year since
the last release.  Wow, time really does fly.  Since it's been so long
I'm sure I screwed something up, treat it delicately.  It may have bugs
and erase your hard drive.  If that happens, look on the bright side, 
you won't have any more bugs. :-)

PyChecker is a tool for finding bugs in Python source code.
It finds problems that are typically caught by a compiler for less
dynamic languages, like C and C++.  It is similar to lint.

Comments, criticisms, new ideas, and other feedback is welcome.

Since I expect there may be a bit more bugs than normal, I will try to
put out another release in a few weeks.  Please file bug reports
including problems with installation, false positives, &c on Source Forge.
You are welcome to use the mailing list to discuss anything pychecker
related, including ideas for new checks.

Changes from 0.8.14 to 0.8.15:

  * Fix spurious warning about catching string exceptions
  * Don't barf if there is # -*- encoding: ... -*- lines and unicode strings
  * setup.py was rewritten to honor --root, --home, etc options
  * Fix internal error on processing nested scopes
  * Fix constant tuples in Python 2.4
  * Don't warn about implicit/explicit returns in Python 2.4, we can't tell
  * Fix crash when __slots__ was an instance w/o __len__
  * Fix bug that declared {}.pop to only take one argument, it takes 1 or 2
  * Fix spurious warning when using tuples for exceptions
  * Fix spurious warning  /  
  * Fix spurious warnings for sets module about __cmp__, __hash__
  * Changed abstract check to require raising NotImplementedError
rather than raising any error
  * Fix spurious warnings in Python 2.4 for Using is (not) None warnings
  * Fix spurious warnings for some instances of No class attribute found
  * Fix spurious warnings for implicit returns when using nested functions

PyChecker is available on Source Forge:
Web page:   http://pychecker.sourceforge.net/
Project page:   http://sourceforge.net/projects/pychecker/
Mailing List:   [EMAIL PROTECTED]

Neal
--
[EMAIL PROTECTED]
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python Design Principles

2005-09-09 Thread Neal Norwitz
[EMAIL PROTECTED] wrote:
>
> But I am still puzzled by the argument that has been given for why
> methods that operate on mutable types should return None, namely, that
> the designers of python didn't want the users to shoot themselves in
> the foot by thinking a method simply returned a result and left the
> data structure unchanged.

Let me try to answer your question with a question.  Given this code:

  >>> d = {}
  >>> e = d.update([(1, 2)])

If .update() returned a dictionary, does d == e?

I'm not sure what you would guess. I am pretty sure that everyone
wouldn't agree whether d should equal e or not.  If they are not equal,
that would mean a new copy would be made on each update which could be
incredibly expensive in speed and memory.  It is also different from
how Python works today, since the update() method mutates the
dictionary.
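The behavior in question is easy to check interactively:

```python
d = {}
e = d.update([(1, 2)])

print(d)  # {1: 2} -- update() mutated d in place
print(e)  # None   -- and returned None, not the dict
```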

> In the context of fundamental design principles, if you asked a random
> sample of Python gurus what is more Pythonesque: preventing users from
> shooting themselves in the foot or making things easier to accomplish,
> my impression is that people would overwhelmingly choose the latter.

Probably true, but ...

> After all, the fact that Python is not strongly typed and is
> interpreted rather than compiled gives plenty of ways for people to
> shoot themselves in the foot but what is gained is the abilitity to do
> more with less code.

I think most people programming Python are pretty pragmatic.  There is
no single language that is ideal in all circumstances.  There are
necessarily some trade-offs.  Many believe that tools can help bridge
this gap.  There are at least 2 tools for finding bugs (or gotchas) of
this sort:  pychecker and pylint.

> But in this instance, by not allowing operations on mutable types to
> return the mutated objects, it seems that the other side is being
> taken, sacrificing programmer producitivity for concerns about
> producing possible side effects. It is somewhat ironic, I think, that
> Java, a language whose design principles clearly side on preventing
> users from shooting themselves in the foot, much more so thatn Python,
> generally allows you to get back the mutated object.

I think Python has attempted to create an internal consistency.  I
believe Java has tried to do the same.  However, these aren't the same
sets of consistency.  People are always going to have assumptions.
Python strives to be as intuitive as possible.  However, it can't be
right 100% of the time for all people.

HTH,
n

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Simplifying imports?

2005-09-12 Thread Neal Norwitz
[EMAIL PROTECTED] wrote:
> I like to keep my classes each in a separate file with the same name of
> the class. The problem with that is that I end up with multiple imports
> in the beginning of each file, like this:
>
> from foo.Bar import Bar
> from foo.Blah import Blah
> from foo.Zzz import Zzz

Must ... resist ... temptation ... to ... complain ... about ... J...

> What I'd like to do would be to replace it all by a single line:
>
> from foo.* import *
>
> Of course, that doesn't work, but is there a way to do something like
> that?

In foo there is a file __init__.py, right?  If you have 3 class files
in foo: bar.py, baz.py, and bax.py, your __init__.py could contain:

# __init__.py
from foo.bar import Bar
from foo.baz import Baz
from foo.bax import Bax
# end of __init__.py

Then, voila:

>>> import foo
>>> dir(foo)
['Bar', 'Bax', 'Baz', '__builtins__', '__doc__', '__file__',
'__name__', '__path__', 'bar', 'bax', 'baz']

You could write code in __init__.py to import all the files in foo/ if
you wanted.  That way you wouldn't have to explicitly list each file.
(Hint:  see os.listdir() and __import__().)
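The hint can be fleshed out roughly as follows; this sketch builds a throwaway package on disk so it stays self-contained (all names are illustrative):

```python
import os
import sys
import tempfile

# Create a demo package foo/ with two single-class modules.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, 'foo')
os.mkdir(pkg)
with open(os.path.join(pkg, 'bar.py'), 'w') as f:
    f.write('class Bar(object):\n    pass\n')
with open(os.path.join(pkg, 'baz.py'), 'w') as f:
    f.write('class Baz(object):\n    pass\n')

# An __init__.py that imports every module in the package and hoists
# its class (assumed to be the capitalized module name).
init_code = (
    "import os\n"
    "_dir = os.path.dirname(__file__)\n"
    "for _f in sorted(os.listdir(_dir)):\n"
    "    if _f.endswith('.py') and _f != '__init__.py':\n"
    "        _name = _f[:-3]\n"
    "        _mod = __import__('foo.' + _name, fromlist=[_name])\n"
    "        globals()[_name.capitalize()] = getattr(_mod, _name.capitalize())\n"
)
with open(os.path.join(pkg, '__init__.py'), 'w') as f:
    f.write(init_code)

sys.path.insert(0, tmp)
import foo
print(sorted(n for n in dir(foo) if n[:1].isupper()))  # ['Bar', 'Baz']
```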

HTH,
n

PS.  I don't really like this approach.  It seems too implicit
(magical).

-- 
http://mail.python.org/mailman/listinfo/python-list


python optimization

2005-09-15 Thread Neal Becker
I use cpython.  I'm accustomed (from c++/gcc) to a style of coding that is
highly readable, making the assumption that the compiler will do good
things to optimize the code despite the style in which it's written.  For
example, I assume constants are removed from loops.  In general, an entity
is defined as close to the point of usage as possible.

I don't know to what extent these kind of optimizations are available to
cpython.  For example, are constant calculations removed from loops?  How
about functions?  Is there a significant cost to putting a function def
inside a loop rather than outside?
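CPython mostly does not hoist such work; a def inside a loop really is executed on every iteration. The cost is easy to measure with timeit (a sketch; absolute numbers depend on the machine):

```python
import timeit

def hoisted():
    def double(x):
        return x * 2
    return sum(double(i) for i in range(1000))

def in_loop():
    total = 0
    for i in range(1000):
        def double(x):        # function object rebuilt every iteration
            return x * 2
        total += double(i)
    return total

print(timeit.timeit(hoisted, number=200))
print(timeit.timeit(in_loop, number=200))
```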

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python optimization

2005-09-15 Thread Neal Becker
Reinhold Birkenfeld wrote:

> David Wilson wrote:
>> For the most part, CPython performs few optimisations by itself. You
>> may be interested in psyco, which performs several heavy optimisations
>> on running Python code.
>> 
>> http://psyco.sf.net/
>> 

I might be, if it supported x86_64, but AFAICT, it doesn't.

-- 
http://mail.python.org/mailman/listinfo/python-list


RE: [Python-Dev] python optimization

2005-09-16 Thread Neal Becker
One possible way to improve the situation is, that if we really believe
python cannot easily support such optimizations because the code is too
"dynamic", is to allow manual annotation of functions.  For example, gcc
has allowed such annotations using __attribute__ for quite a while.  This
would allow the programmer to specify that a variable is constant, or that
a function is pure (having no side effects).

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Python:C++ interfacing. Tool selection recommendations

2005-09-16 Thread Neal Becker
[EMAIL PROTECTED] wrote:

> Hi,
> 
> I am embedding Python with a C++ app and need to provide the Python
> world with access to objects & data with the C++ world.
> 
> I am aware or SWIG, BOOST, SIP. Are there more?
> 
> I welcome comments of the pros/cons of each and recommendations on when
> it appropriate to select one over the others.
> 

boost::python is alien technology.  It is amazingly powerful.  Once you
learn how to use it it's wonderful, but unless you are comfortable with
modern c++ you may find the learning curve steep.

-- 
http://mail.python.org/mailman/listinfo/python-list


unusual exponential formatting puzzle

2005-09-21 Thread Neal Becker
Like a puzzle?  I need to interface python output to some strange old
program.  It wants to see numbers formatted as:

e.g.: 0.23456789E01

That is, the leading digit is always 0, instead of the first significant
digit.  It is fixed width.  I can almost get it with '% 16.9E', but not
quite.

My solution is to print to a string with the '% 16.9E' format, then parse it
with re to pick off the pieces and fix it up.  Pretty ugly.  Any better
ideas?
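One string-manipulation approach, without re; names and width handling are illustrative, not the poster's actual code:

```python
def fortran_format(x, digits=8):
    """Format x as 0.ddddddddENN, with the leading digit always 0."""
    s = '%.*E' % (digits - 1, abs(x))   # e.g. '2.3456789E+00'
    mantissa, exponent = s.split('E')
    mantissa = mantissa.replace('.', '')
    e = int(exponent) + 1               # moving the point left bumps the exponent
    sign = '-' if x < 0 else ''
    return '%s0.%sE%02d' % (sign, mantissa, e)

print(fortran_format(2.3456789))   # 0.23456789E01
```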


-- 
http://mail.python.org/mailman/listinfo/python-list


Re: unusual exponential formatting puzzle

2005-09-21 Thread Neal Becker
[EMAIL PROTECTED] wrote:

> 
> [EMAIL PROTECTED] wrote:
>> Neal Becker wrote:
>> > Like a puzzle?  I need to interface python output to some strange old
>> > program.  It wants to see numbers formatted as:
>> >
>> > e.g.: 0.23456789E01
>> >
>> > That is, the leading digit is always 0, instead of the first
>> > significant
>> > digit.  It is fixed width.  I can almost get it with '% 16.9E', but not
>> > quite.
>> >
>> > My solution is to print to a string with the '% 16.9E' format, then
>> > parse it
>> > with re to pick off the pieces and fix it up.  Pretty ugly.  Any better
>> > ideas?
>>
>> If you have gmpy available...
>>
>> >>> import gmpy
>>
>> ...and your floats are mpf's...
>>
>> >>> s = gmpy.pi(64)
>> >>> s
>> mpf('3.14159265358979323846e0',64)
>>
>> ...you can use the fdigits function
>>
>> >>> t = gmpy.fdigits(s,10,8,0,0,2)
>>
>> ...to create a separate digit string and exponent...
>>
>> >>> print t
>> ('31415927', 1, 64)
>>
>> ...which can then be printed in the desired format.
>>
>> >>> print "0.%sE%02d" % (t[0],t[1])
>> 0.31415927E01
> 
> Unless your numbers are negative.
> 
>>>> print "0.%sE%02d" % (t[0],t[1])
> 0.-31415927E03
> 
> Drat. Needs work.
> 
> 
> 
> And does the format permit large negative exponents (2 digits + sign)?
> 

I think abs(exponent) < 10 for now.

>>>> print "0.%sE%02d" % (t[0],t[1])
> 0.31415927E-13
> 




Re: unusual exponential formatting puzzle

2005-09-22 Thread Neal Becker
Paul Rubin wrote:

> Neal Becker <[EMAIL PROTECTED]> writes:
>> Like a puzzle?  I need to interface python output to some strange old
>> program.  It wants to see numbers formatted as:
>> 
>> e.g.: 0.23456789E01
> 
> Yeah, that was normal with FORTRAN.
> 
>> My solution is to print to a string with the '% 16.9E' format, then
>> parse it with re to pick off the pieces and fix it up.  Pretty ugly.
>> Any better ideas?
> 
> That's probably the simplest.

Actually, I found a good solution using the new decimal module:

import decimal

def Format(x):
    """Produce strange exponential format with leading 0"""
    s = '%.9E' % x

    # Let Decimal split the formatted string into sign, digits, exponent
    d = decimal.Decimal(s)
    (sign, digits, exp) = d.as_tuple()

    if sign == 0:
        s = ' '
    else:
        s = '-'

    s += '0.'

    # exponent of the shifted mantissa: digit count plus Decimal's exponent
    e = len(digits) + exp
    for digit in digits:
        s += str(digit)
    s += 'E'
    s += '%+03d' % e

    return s
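For completeness, a self-contained variant of the same decimal-based idea,
compressed into a few lines and exercised on a negative value as well (a
sketch; the fixed-width '%+03d' exponent assumes abs(exponent) < 10, as
discussed earlier in the thread):

```python
import decimal

def strange(x):
    # Let Decimal split the '%.9E' string into sign, digits and exponent,
    # then rebuild the number with a leading "0." mantissa.
    sign, digits, exp = decimal.Decimal('%.9E' % x).as_tuple()
    mantissa = ''.join(str(d) for d in digits)
    return '%s0.%sE%+03d' % (' -'[sign], mantissa, len(digits) + exp)

assert strange(2.3456789)  == ' 0.2345678900E+01'
assert strange(-2.3456789) == '-0.2345678900E+01'
```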



