Re: [Twisted-Python] Twisted 15.4 was the last release to support Python 2.6; or: a HawkOwl Can't Words Situation
Is it possible to fix the documentation?
https://twistedmatrix.com/trac/browser/tags/releases/twisted-15.5.0/NEWS?format=raw

On Mon, Dec 7, 2015 at 4:06 PM, Amber "Hawkie" Brown wrote:
> Hi everyone!
>
> It's been brought to my attention that I misworded something in the release
> notes and it slipped through the cracks. In the NEWS I said:
>
>> This is the last Twisted release where Python 2.6 is supported, on any
>> platform.
>
> However, I meant that this is the first Twisted release to drop 2.6 support
> wholesale, preventing import on this platform. Twisted 15.4 will still
> operate, so if you have Python 2.6 deployment requirements, bracket the
> maximum to 15.4 on that platform by using an if statement in your setup.py,
> and `Twisted >=*minreq*,<=15.4; python_version < '2.7'` under requires_dist
> in your setup.cfg, where minreq is the minimum required Twisted.
>
> Sorry for the inconvenience!
>
> - Amber "HawkOwl" Brown
> Twisted Release Manager
>
> ___
> Twisted-Python mailing list
> [email protected]
> http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
>
--
anatoly t.
--
https://mail.python.org/mailman/listinfo/python-list
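For readers wondering what Amber's "if statement in your setup.py" might look like in practice, here is a minimal sketch. It is an illustration only: the project name, the 15.0 lower bound, and the reliance on setuptools' install_requires are assumptions, not taken from the announcement; the requires_dist entry in setup.cfg that she mentions is the declarative equivalent.

    # setup.py -- sketch only; names and the 15.0 minimum are made up.
    import sys

    from setuptools import setup

    if sys.version_info < (2, 7):
        # 15.4 was the last release that still imports on Python 2.6.
        twisted_req = "Twisted >= 15.0, <= 15.4"
    else:
        twisted_req = "Twisted >= 15.0"

    setup(
        name="example-project",
        version="0.1",
        install_requires=[twisted_req],
    )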
Re: Path problems when I am in bash
On Wednesday, December 30, 2015 at 2:30:40 AM UTC, Karim wrote:
> On 30/12/2015 00:21, xeon Mailinglist wrote:
> > I have my source code inside the directory `medusa`, and my unit tests
> > inside `tests` dir. Both dirs are inside `medusa-2.0` dir. Here is my file
> > structure [1].
> >
> > When I run my tests inside pycharm, everything works fine, but when I try
> > to run my unit tests inside in the prompt [2], the
> > `scheduler/predictionranking.py` can't find the `hdfs` import [3]. I have
> > set my environment inside the `medusa-2.0` dir with the virtualenv, I even
> > have set `medusa-2.0` in the PYTHON_PATH [4]. The error that I have is in
> > [5].
> >
> > The question is, why is the import not being done correctly?
> >
> >
> > [1] My file structure
> >
> > medusa-2.0$
> > medusa (source code)
> > hdfs.py
> > scheduler (dir with my schedulers)
> > predictionranking.py
> > tests (my unit tests)
> >
> >
> > [2] I run the unit test like this.
> >
> > medusa-2.0$ python -v tests/testSimpleRun.py
> >
> >
> > [3] The header in `predictionranking.py`
> >
> > import hdfs
> >
> > def get_prediction_metrics(clusters, pinput):
> >     """
> >     :param pinput (list) list of input paths
> >
> >     """
> >
> >     input_size = hdfs.get_total_size(pinput)
> >
> > [4] My python path
> >
> > export MEDUSA_HOME=$HOME/repositories/git/medusa-2.0
> >
> > export PYTHONPATH=${PYTHONPATH}:${MEDUSA_HOME}/medusa
> >
> > [5] error that I have
> >
> > medusa-2.0$ python -v tests/testSimpleRun.py
> > # /home/xeon/repositories/git/medusa-2.0/medusa/local.pyc matches
> > /home/xeon/repositories/git/medusa-2.0/medusa/local.py
> > import medusa.local # precompiled from
> > /home/xeon/repositories/git/medusa-2.0/medusa/local.pyc
> > # /home/xeon/repositories/git/medusa-2.0/medusa/ranking.pyc matches
> > /home/xeon/repositories/git/medusa-2.0/medusa/ranking.py
> > import medusa.ranking # precompiled from
> > /home/xeon/repositories/git/medusa-2.0/medusa/ranking.pyc
> > # /home/xeon/repositories/git/medusa-2.0/medusa/decors.pyc matches
> > /home/xeon/repositories/git/medusa-2.0/medusa/decors.py
> > import medusa.decors # precompiled from
> > /home/xeon/repositories/git/medusa-2.0/medusa/decors.pyc
> > # /home/xeon/repositories/git/medusa-2.0/medusa/settings.pyc matches
> > /home/xeon/repositories/git/medusa-2.0/medusa/settings.py
> > import medusa.settings # precompiled from
> > /home/xeon/repositories/git/medusa-2.0/medusa/settings.pyc
> > import medusa.scheduler # directory
> > /home/xeon/repositories/git/medusa-2.0/medusa/scheduler
> > # /home/xeon/repositories/git/medusa-2.0/medusa/scheduler/__init__.pyc
> > matches /home/xeon/repositories/git/medusa-2.0/medusa/scheduler/__init__.py
> > import medusa.scheduler # precompiled from
> > /home/xeon/repositories/git/medusa-2.0/medusa/scheduler/__init__.pyc
> > #
> > /home/xeon/repositories/git/medusa-2.0/medusa/scheduler/predictionranking.pyc
> > matches
> > /home/xeon/repositories/git/medusa-2.0/medusa/scheduler/predictionranking.py
> > import medusa.scheduler.predictionranking # precompiled from
> > /home/xeon/repositories/git/medusa-2.0/medusa/scheduler/predictionranking.pyc
> > Traceback (most recent call last):
> >File "tests/simpleRun.py", line 4, in
> > from medusa.simplealgorithm import run_simple_execution
> >File "/home/xeon/repositories/git/medusa-2.0/medusa/simplealgorithm.py",
> > line 8, in
> > import accidentalfaults
> >File
> > "/home/xeon/repositories/git/medusa-2.0/medusa/accidentalfaults.py", line
> > 5, in
> > import hdfs
> >File "/home/xeon/repositories/git/medusa-2.0/medusa/hdfs.py", line 6, in
> >
> > from system import execute_command
> >File "/home/xeon/repositories/git/medusa-2.0/medusa/system.py", line 10,
> > in
> > from ranking import rank_clusters
> >File "/home/xeon/repositories/git/medusa-2.0/medusa/ranking.py", line 9,
> > in
> > from scheduler.predictionranking import get_prediction_metrics
> >File
> > "/home/xeon/repositories/git/medusa-2.0/medusa/scheduler/predictionranking.py",
> > line 6, in
> > import hdfs
>
> Hello,
>
> Can you try to set your PYTHONPATH like that:
>
> export PYTHONPATH=${PYTHONPATH}:${MEDUSA_HOME}
>
> Regards
> Karim
Thanks.
--
https://mail.python.org/mailman/listinfo/python-list
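As a footnote to this thread, one hedged alternative to juggling PYTHONPATH is to put the project directories on sys.path from the test module itself. This is a sketch only, using the directory names from the layout in [1]; everything else here is an assumption, not the original poster's code.

    # tests/testSimpleRun.py -- illustrative sketch only
    import os
    import sys

    # .../medusa-2.0, derived from this test file's own location
    PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

    # Make both "import medusa.simplealgorithm" and the package-internal
    # "import hdfs" resolvable, independent of PYTHONPATH.
    sys.path.insert(0, PROJECT_ROOT)
    sys.path.insert(0, os.path.join(PROJECT_ROOT, "medusa"))

    from medusa.simplealgorithm import run_simple_execution  # after the path tweak on purpose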
how to get names of attributes
Hi,

How can I get *all* the names of an object's attributes? I have legacy code
with mixed new-style classes and old-style classes, and I need to write
methods which deal with both. That's the immediate problem, but I'm always
running into the need to understand how objects are linked, in particular
when in pdb. The answer one always sees on StackOverflow is that you don't
need to understand, understanding is not the pythonic way to do things.

Alternatively, is there a map documented somewhere - more complete than
python/python-2.7.3-docs-html/library/stdtypes.html?highlight=class#special-attributes

Or, is the code available uncompiled somewhere on my machine?

Does anyone know *why* the __members__ method was deprecated, to be replaced
by dir(), which doesn't tell the truth (if only it took an optional
parameter to say: "be truthful")?

cts
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to get names of attributes
On Wed, Dec 30, 2015 at 10:51 PM, Charles T. Smith wrote: > Does anyone know *why* the __members__ method was deprecated, to be > replaced by dir(), which doesn't tell the truth (if only it took an > optional parameter to say: "be truthful") Does vars() help here? It works on old-style and new-style classes, and it's doing broadly the same sort of thing as you're talking about. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
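For illustration, a short interactive sketch of what Chris is suggesting (made-up classes; run under CPython 2.7, and the dict ordering in the output may differ): vars() returns the instance __dict__ for both old-style and new-style instances, and works on the classes themselves as well.

>>> class Old:            # old-style class on Python 2
...     pass
...
>>> class New(object):    # new-style class
...     pass
...
>>> o = Old(); o.a = 1
>>> n = New(); n.b = 2
>>> vars(o)
{'a': 1}
>>> vars(n)
{'b': 2}
>>> vars(Old)             # class-level attributes work too
{'__module__': '__main__', '__doc__': None}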
Re: how to get names of attributes
On Wed, 30 Dec 2015 11:51:19 +, Charles T. Smith wrote:

> Hi,
>
> How can I get *all* the names of an object's attributes? I have legacy
> code with mixed new style classes and old style classes and I need to
> write methods which deal with both. That's the immediate problem, but
> I'm always running into the need to understand how objects are linked,
> in particular when in pdb. The answers one always sees on StackOverflow
> is that you don't need to understand, understanding is not the pythonic
> way to do things.
>
> Alternatively, is there are map documented somewhere - more complete
> than python/python-2.7.3-docs-html/library/stdtypes.html?
> highlight=class#special-attributes
>
> Or, is the code available uncompiled somewhere on my machine?
>
> Does anyone know *why* the __members__ method was deprecated, to be
> replaced by dir(), which doesn't tell the truth (if only it took an
> optional parameter to say: "be truthful")
>
> cts

For example:

(PDB)pp dir (newclass.__class__)
['__class__', '__delattr__', '__dict__', '__doc__', '__format__',
'__getattribute__', '__hash__', '__init__', '__module__', '__new__',
'__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__',
'__str__', '__subclasshook__', '__weakref__', 'm2']
(PDB)pp dir (oldclass.__class__)
['__doc__', '__module__', 'm3']
(PDB)pp(oldclass.__class__.__name__)
'C3'
(PDB)pp(newclass.__class__.__name__)
'C2'

Both dir() invocations are lying to me. The old-style class even ignores the
pretty-print command.

I'm glad I discovered __mro__(), but how can I do the same thing for
old-style classes?
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to get names of attributes
On Wed, Dec 30, 2015 at 11:16 PM, Charles T. Smith
wrote:
> I'm glad I discovered __mro__(), but how can I do the same thing for old-
> style classes?
You should be able to track through __bases__ and use vars() at every level:
>>> class X: pass
...
>>> class Y(X): pass
...
>>> class Z(Y): pass
...
>>> X.x=1
>>> Y.y=2
>>> Z.z=3
>>> inst=Z()
>>> inst.i=4
>>> def class_vars(old_style_class):
...     v = {}
...     for cls in old_style_class.__bases__:
...         v.update(class_vars(cls))
...     v.update(vars(old_style_class))
...     return v
...
>>> def all_vars(old_style_inst):
...     v = class_vars(old_style_inst.__class__)
...     v.update(vars(old_style_inst))
...     return v
...
>>> all_vars(inst)
{'i': 4, '__module__': '__main__', 'y': 2, 'x': 1, 'z': 3, '__doc__': None}
I'm not 100% sure I've matched the MRO here, but if all you want is
the complete set of attribute names, this should work - I think.
ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to get names of attributes
On Wed, 30 Dec 2015 11:51:19 +, Charles T. Smith wrote:
> Hi,
>
> How can I get *all* the names of an object's attributes? I have legacy
> code with mixed new style classes and old style classes and I need to
> write methods which deal with both. That's the immediate problem, but
> I'm always running into the need to understand how objects are linked,
> in particular when in pdb. The answers one always sees on StackOverflow
> is that you don't need to understand, understanding is not the pythonic
> way to do things.
>
> Alternatively, is there are map documented somewhere - more complete
> than python/python-2.7.3-docs-html/library/stdtypes.html?
> highlight=class#special-attributes
>
> Or, is the code available uncompiled somewhere on my machine?
>
> Does anyone know *why* the __members__ method was deprecated, to be
> replaced by dir(), which doesn't tell the truth (if only it took an
> optional parameter to say: "be truthful")
>
> cts
Oh!
Although the referenced doc says:
"For compatibility reasons, classes are still old-style by default."
is it true that dictionaries are by default always new-style objects?
(PDB)c6 = { "abc" : 123, "def" : 456}
(PDB)isinstance (c6, dict)
True
(PDB)isinstance (c6, object)
True
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to get names of attributes
On Wed, Dec 30, 2015 at 11:40 PM, Charles T. Smith
wrote:
> Oh!
>
> Although the referenced doc says:
>
> "For compatibility reasons, classes are still old-style by default."
>
> is it true that dictionaries are by default always new-style objects?
>
> (PDB)c6 = { "abc" : 123, "def" : 456}
>
> (PDB)isinstance (c6, dict)
> True
>
> (PDB)isinstance (c6, object)
> True
I believe that's true, yes. The meaning of "by default" there is that
"class X: pass" will make an old-style class. All built-in types are
now new-style classes.
ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
Re: (Execution) Termination bit, Alternation bit.
On Wed, 30 Dec 2015 03:07 pm, Rustom Mody wrote:

> By some coincidence was just reading:
> from http://www.wordyard.com/2006/10/18/dijkstra-humble/
>
> which has the following curious extract.
> [Yeah its outlandish]
>
> --
> I consider the absolute worst programming construct to be
> subroutine or the function. [...]

Rustom, your quote was (I trust inadvertently!) horribly misleading. I
thought this was said by Dijkstra, not some random crank on the Internet.
The comment is from some random person, Cleo Saulnier, who starts off with
the provocative claim that the *subroutine* is "the absolute worst
programming construct", then goes on to sing the praises of Unix pipes. What
are small Unix programs communicating via pipes if not subroutines?

Functions are subroutines, but not all subroutines are functions.

It is fashionable to dismiss 1970s-style BASIC as a serious language, and
for good reason, but let's not forget that for all its problems, a large
number of programmers cut their teeth on it. BASIC is, in some ways, like a
machine language except with a friendly syntax. There are subroutines, but
you can jump into the middle of them, or out from the middle. There are no
functions, just jumps to the instruction you want to execute next. Only a
limited set of data types to work with.

I think every programmer would learn something from trying to write, and
maintain, an actual useful piece of code using only a BASIC-like language.
At the least, they would appreciate what a huge step-up it was to introduce
procedural programming.

(A friend of mine once worked under a manager who insisted that writing
functions and procedures was a terrible idea, that GOTO was vital, because
it was far more efficient to jump to a line of code than call a function.
And this was in the late 1990s.)

Cleo continues:

> I can prove how the subroutine causes all
> parts of our software to become coupled and as such cannot
> support this as the basic building blocks of software.

This is a remarkable claim, considering that one of the advantages of the
function is that it can, when used correctly, *reduce* coupling.

In non-procedural code, any line of code may be used by any other chunk of
code. You can jump to any line, and people did (because they had no
alternatives), consequently coupling was very high. In procedural code, any
line of code can only be reached from exactly one place: the previous line
of code. (Well, a few more, in specialised circumstances: for- and
while-loops, branches, etc.) Lines of code *within* a procedure can only
couple with other lines within the same procedure; it's true that procedures
can have high coupling or low coupling, but at least you know that you can
change the inside of a procedure as much as you want, and so long as it
accepts the same input and generates the same output (and has the same
side-effects) it will continue to work. That's reduced coupling.

> Unix has pipes. These are queues. As data is piped, software on
> both ends of the queue can execute at the same time. As data
> passes from one end of the queue to the other, there is no
> concept of execution point.

That's arguable. When I learned about Unix multi-processing, it was based on
a single CPU model: each process would run for a few ticks, before being
interrupted by the OS and the next process being allowed to run for a few
ticks. A sequential process that merely appeared to be parallel because the
CPU was so fast. Perhaps these days Unix is capable of actually running
multiple programs in parallel on separate cores or multiple CPUs?

As I said earlier, of course Unix programs communicating via pipes are a
kind of subroutine. We also have generators, and coroutines, and threads,
and other forms of parallel processing. The unsurprising reality is that
such kinds of code are *harder* to get right than functions with their
simple-minded stack-based sequential execution model.

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
using __getitem()__ correctly
Hello,

I thought __getitem__() was invoked when an object is postfixed with an
expression in brackets:

- abc[n]

and __getattr__() was invoked when an object is postfixed with a dot:

- abc.member

but my __getitem__ is being invoked at this time, where there's no subscript
(going into a spiral recursion death):

self.mcc = self.attrs.mcc

Can anybody explain to me why __getitem__() would be invoked here?

I'm using __getitem__() AND __getattr__() to handle an array of objects
which don't exist yet (autovivification). My __getattr__() could handle the
line above, but I don't know how to handle that in __getitem__():

attrdict:av:__getitem__: entered for mcc

That's printed by this line:

print "attrdict:av:__getitem__: entered for ", key

I expected that "key" would always be numeric or a slice.

TIA
cts
--
https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
On Wed, Dec 30, 2015 at 11:57 PM, Charles T. Smith wrote: > Hello, > > I thought __getitem__() was invoked when an object is postfixed with an > expression in brackets: > > - abc[n] > > and __getattr__() was invoked when an object is postfixed with an dot: > > - abc.member That would be normal (with the caveat that __getattr__ is called only if the attribute isn't found; __getattribute__ is called unconditionally). > but my __getitem__ is being invoked at this time, where there's no > subscript (going into a spiral recursion death): > > self.mcc = self.attrs.mcc > > Can anybody explain to me why __getitem__() would be invoked here? Can you post your entire class, or at least __getattr__? Also check superclasses, if there are any. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
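To make that division of labour concrete, here is a small hedged sketch (Python 2 syntax to match the thread; the Demo class is made up, not the OP's code) showing which hook fires for which syntax:

>>> class Demo(object):
...     def __getattr__(self, name):        # only for *missing* attributes
...         print "__getattr__:", name
...         return None
...     def __getitem__(self, key):         # only for obj[key] subscripts
...         print "__getitem__:", key
...         return None
...
>>> d = Demo()
>>> d.existing = 1
>>> d.existing          # found normally, so __getattr__ is NOT called
1
>>> d.missing           # not found, so __getattr__ fires
__getattr__: missing
>>> d["missing"]        # bracket syntax always goes to __getitem__
__getitem__: missing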
Is it safe to assume floats always have a 53-bit mantissa?
We know that Python floats are equivalent to C doubles, which are 64-bit
IEEE-754 floating point numbers. Well, actually, C doubles are not strictly
defined. The only promise the C standard makes is that double is no smaller
than float. (That's C float, not Python float.) And of course, not all
Python implementations use C.

Nevertheless, it's well known (in the sense that "everybody knows") that
Python floats are equivalent to C 64-bit IEEE-754 doubles. How safe is that
assumption?

I have a function with two implementations: a fast implementation that
converts an int to a float, does some processing, then converts it back to
int. That works fine so long as the int can be represented exactly as a
float. The other implementation uses integer maths only, and is much slower
but exact.

As an optimization, I want to write:

def func(n):
    if n <= 2**53:
        # use the floating point fast implementation
    else:
        # fall back on the slower, but exact, int algorithm

(The optimization makes a real difference: for large n, the float version is
about 500 times faster.)

But I wonder whether I need to write this instead?

def func(n):
    if n <= 2**sys.float_info.mant_dig:
        # ...float
    else:
        # ...int

I don't suppose it really makes any difference performance-wise, but I can't
help but wonder if it is really necessary. If sys.float_info.mant_dig is
guaranteed to always be 53, why not just write 53?

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
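As an aside (not part of Steven's post): on a CPython build whose float is an IEEE-754 double, the 53-bit boundary is easy to check interactively, and sys.float_info.mant_dig reports it directly:

>>> import sys
>>> sys.float_info.mant_dig
53
>>> float(2**53) == 2**53          # still exactly representable
True
>>> float(2**53 + 1) == 2**53 + 1  # first integer that is not
False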
PEAK-Rules package.
Hi, I don't see any package available under https://pypi.python.org/simple/PEAK-Rules/. Could you please let me know if it has seen a change recently. I need PEAK-Rules>=0.5a1.dev-r2600 using easy_install default behavior. Any help is appreciated. Thanks in advance! - Radhika -- https://mail.python.org/mailman/listinfo/python-list
Re: how to get names of attributes
On Wed, 30 Dec 2015 23:50:03 +1100, Chris Angelico wrote:
> On Wed, Dec 30, 2015 at 11:40 PM, Charles T. Smith
> wrote:
>> Oh!
>>
>> Although the referenced doc says:
>>
>> "For compatibility reasons, classes are still old-style by default."
>>
>> is it true that dictionaries are by default always new-style objects?
>>
>> (PDB)c6 = { "abc" : 123, "def" : 456}
>>
>> (PDB)isinstance (c6, dict)
>> True
>>
>> (PDB)isinstance (c6, object)
>> True
>
> I believe that's true, yes. The meaning of "by default" there is that
> "class X: pass" will make an old-style class. All built-in types are now
> new-style classes.
>
> ChrisA
Okay, thank you. I'm trying to understand your program.
Unfortunately, I haven't gotten the same output you had, using python 2.6
or 2.7. Maybe I haven't been able to restore the indentation correctly
after having been filtered through pan(1).
I wonder what the difference is between vars() and items()
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to get names of attributes
On 30/12/2015 11:51, Charles T. Smith wrote: Hi, Does anyone know *why* the __members__ method was deprecated, to be replaced by dir(), which doesn't tell the truth (if only it took an optional parameter to say: "be truthful") https://bugs.python.org/issue456420 https://bugs.python.org/issue449989 -- My fellow Pythonistas, ask not what our language can do for you, ask what you can do for our language. Mark Lawrence -- https://mail.python.org/mailman/listinfo/python-list
Re: how to get names of attributes
On 30/12/2015 13:31, Charles T. Smith wrote:

> I wonder what the difference is between vars() and items()

Not much. From https://docs.python.org/3/library/functions.html#vars

vars([object])
    Return the __dict__ attribute for a module, class, instance, or any
    other object with a __dict__ attribute.

    Objects such as modules and instances have an updateable __dict__
    attribute; however, other objects may have write restrictions on their
    __dict__ attributes (for example, classes use a dictproxy to prevent
    direct dictionary updates).

    Without an argument, vars() acts like locals(). Note, the locals
    dictionary is only useful for reads since updates to the locals
    dictionary are ignored.

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence
--
https://mail.python.org/mailman/listinfo/python-list
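A quick hedged illustration of the difference in this thread's context, using a made-up dict subclass: items() walks the dictionary's own key/value pairs, while vars() returns the separate instance __dict__.

>>> class attrdictish(dict):
...     pass
...
>>> d = attrdictish()
>>> d['stored_as_item'] = 1          # goes into the dict itself
>>> d.stored_as_attr = 2             # goes into d.__dict__
>>> d.items()
[('stored_as_item', 1)]
>>> vars(d)                          # i.e. d.__dict__
{'stored_as_attr': 2}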
Re: Is it safe to assume floats always have a 53-bit mantissa?
Steven D'Aprano :

> Nevertheless, it's well known (in the sense that "everybody knows")
> that Python floats are equivalent to C 64-bit IEEE-754 doubles. How
> safe is that assumption?

You'd need to have it in writing, wouldn't you? The only spec I know of
promises no such thing:

   Floating point numbers are usually implemented using double in C;
   information about the precision and internal representation of floating
   point numbers for the machine on which your program is running is
   available in sys.float_info.

   https://docs.python.org/3/library/stdtypes.html#typesnumeric

> As an optimization, I want to write:
>
> def func(n):
>     if n <= 2**53:
>         # use the floating point fast implementation
>     else:
>         # fall back on the slower, but exact, int algorithm
>
> [...]
>
> But I wonder whether I need to write this instead?
>
> def func(n):
>     if n <= 2**sys.float_info.mant_dig:
>         # ...float
>     else:
>         # ...int
>
> I don't suppose it really makes any difference performance-wise, but I
> can't help but wonder if it is really necessary. If
> sys.float_info.mant_dig is guaranteed to always be 53, why not just
> write 53?

Mainly because

   2**sys.float_info.mant_dig

looks much better than

   2**53

Marko
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to get names of attributes
On Thu, Dec 31, 2015 at 12:31 AM, Charles T. Smith
wrote:
> Okay, thank you. I'm trying to understand your program.
>
> Unfortunately, I haven't gotten the same output you had, using python 2.6
> or 2.7. Maybe I haven't been able to restore the indentation correctly
> after having been filtered through pan(1).
>
> I wonder what the difference is between vars() and items()
What I sent you was a log of interactive Python, so there are prompts
and continuation prompts. Here's a script version of the same thing
(running under CPython 2.7):
class X: pass
class Y(X): pass
class Z(Y): pass
X.x=1
Y.y=2
Z.z=3
inst=Z()
inst.i=4
def class_vars(old_style_class):
    v = {}
    for cls in old_style_class.__bases__:
        v.update(class_vars(cls))
    v.update(vars(old_style_class))
    return v

def all_vars(old_style_inst):
    v = class_vars(old_style_inst.__class__)
    v.update(vars(old_style_inst))
    return v
print(all_vars(inst))
# {'i': 4, '__module__': '__main__', 'y': 2, 'x': 1, 'z': 3, '__doc__': None}
Does that work better?
ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
Re: Is it safe to assume floats always have a 53-bit mantissa?
On 12/30/2015 8:18 AM, Steven D'Aprano wrote:

> We know that Python floats are equivalent to C doubles,

Yes

> which are 64-bit IEEE-754 floating point numbers.

I believe that this was not true on all systems when Python was first
released. Not all 64-bit floats divided them the same way. I believe there
has been some discussion on pydev whether the python code itself should
assume IEEE now. I do not believe that there are currently any buildbots
that are not IEEE. Does the standard allow exposing the 80 bit floats of FP
processors?

> Well, actually, C doubles are not strictly defined. The only promise the C
> standard makes is that double is no smaller than float. (That's C float,
> not Python float.) And of course, not all Python implementations use C.
>
> Nevertheless, it's well known (in the sense that "everybody knows") that
> Python floats are equivalent to C 64-bit IEEE-754 doubles. How safe is
> that assumption?
>
> I have a function with two implementations: a fast implementation that
> converts an int to a float, does some processing, then converts it back
> to int. That works fine so long as the int can be represented exactly as
> a float. The other implementation uses integer maths only, and is much
> slower but exact.
>
> As an optimization, I want to write:
>
> def func(n):
>     if n <= 2**53:

The magic number 53 should be explained in the code.

>         # use the floating point fast implementation
>     else:
>         # fall back on the slower, but exact, int algorithm
>
> (The optimization makes a real difference: for large n, the float version
> is about 500 times faster.)
>
> But I wonder whether I need to write this instead?
>
> def func(n):
>     if n <= 2**sys.float_info.mant_dig:
>         # ...float
>     else:
>         # ...int

Pull the calculation of the constant out of the function. Naming the
constant documents it and allows easy change. This is pretty standard in
scientific computing (or was once).

finmax = 2 ** sys.float_info.mant_dig  # -1?

def func(n):
    if n <= finmax:
        ...

--
Terry Jan Reedy
--
https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
On Thu, 31 Dec 2015 00:11:24 +1100, Chris Angelico wrote:
> On Wed, Dec 30, 2015 at 11:57 PM, Charles T. Smith
> wrote:
>> Hello,
>>
>> I thought __getitem__() was invoked when an object is postfixed with an
>> expression in brackets:
>>
>> - abc[n]
>>
>> and __getattr__() was invoked when an object is postfixed with an dot:
>>
>> - abc.member
>
> That would be normal (with the caveat that __getattr__ is called only if
> the attribute isn't found; __getattribute__ is called unconditionally).
>
>> but my __getitem__ is being invoked at this time, where there's no
>> subscript (going into a spiral recursion death):
>>
>> self.mcc = self.attrs.mcc
>>
>> Can anybody explain to me why __getitem__() would be invoked here?
>
> Can you post your entire class, or at least __getattr__? Also check
> superclasses, if there are any.
>
> ChrisA
(PDB)isinstance (self.attrs, attrdict)
True
As is so often the case, in composing my answer to your question, I discovered
a number of problems in my class (e.g. I was calling __getitem__() myself!), but
I'm puzzled now how to proceed. I thought the way you avoid triggering
__getattr__()
from within that was to use self.__dict__[name] but that doesn't work:
(PDB)p self.attrs.keys()
['mcc', 'abc']
(PDB)p self.attrs.__dict__['abc']
*** KeyError: KeyError('abc',)
class attrdict(dict):
    def __init__ (self, name = None):
        if name:
            self.update (name)
            print "attrdict: instantiated: ", name

    # AutoVivification
    def __getattr__ (self, name):
        print "attrdict:av:__getattr__: entered for ", name  #, " in ", self
        #if not name in self.__dict__.keys():
        if not name in self.keys():
            print "attrdict:av:__getattr__: autovivifying ", name
            #self.__dict__.__setitem__ (name, self.__class__())
            #self.__setitem__ (name, self.__class__())
            self.__setattr__ (name, self.__class__())
        #return self.__getitem__(name)
        #return self.__dict__.__getitem__(name)
        return self.__getattribute__ (name)

    def __getitem__ (self, key):
        print "attrdict:av:__getitem__: entered for ", key  #, " in ", self
        return self.__getitem__(key)

    def __getattr__deprecated (self, name):
        return self[name]

    def __setattr__(self, name, value):
        self[name] = value
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to get names of attributes
On Wed, 30 Dec 2015 14:10:14 +, Mark Lawrence wrote: > On 30/12/2015 11:51, Charles T. Smith wrote: >> Hi, >> >> Does anyone know *why* the __members__ method was deprecated, to be >> replaced by dir(), which doesn't tell the truth (if only it took an >> optional parameter to say: "be truthful") > > https://bugs.python.org/issue456420 > https://bugs.python.org/issue449989 Thank you, that was what I asked for, kinda. According to the experts, the reason that the __members__() method was deprecated is that it was an "ugly hack". *Why* it was an ugly hack wasn't made clear (to me), unfortunately. My other postings to this topic show, I think, why the ability to know what's available (in the way of attributes) is a good thing, as I try to figure out how to get at my attributes without triggering __getattr__(), which I'm implementing. -- https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
On Dec 30, 2015 7:46 AM, "Charles T. Smith" wrote:
> As is so often the case, in composing my answer to your question, I discovered
> a number of problems in my class (e.g. I was calling __getitem__() myself!),
> but
> I'm puzzled now how to proceed. I thought the way you avoid triggering
> __getattr__()
> from within that was to use self.__dict__[name] but that doesn't work:
>
> (PDB)p self.attrs.keys()
> ['mcc', 'abc']
> (PDB)p self.attrs.__dict__['abc']
> *** KeyError: KeyError('abc',)
What leads you to believe that this is triggering a call to
__getattr__? The KeyError probably just means that the key 'abc'
wasn't found in the dict.
> class attrdict(dict):
> def __init__ (self, name = None):
> if name:
> self.update (name)
> print "attrdict: instantiated: ", name
>
> # AutoVivification
> def __getattr__ (self, name):
> print "attrdict:av:__getattr__: entered for ", name #, " in ",
> self
> #if not name in self.__dict__.keys():
> if not name in self.keys():
Use the "not in" operator, e.g. "if name not in self.keys()".
> print "attrdict:av:__getattr__: autovivifying ", name
> #self.__dict__.__setitem__ (name, self.__class__())
> #self.__setitem__ (name, self.__class__())
> self.__setattr__ (name, self.__class__())
No reason to explicitly call __setitem__ or __setattr__ here. I'd
probably just do self[name] = self.__class__()
> #return self.__getitem__(name)
> #return self.__dict__.__getitem__(name)
> return self.__getattribute__ (name)
You shouldn't call __getattribute__ from __getattr__, because
__getattr__ is called from __getattribute__, so this would cause an
infinite loop.
Based on the preceding, you probably want to return the value you just
set in the dict, correct? So just return self[name].
--
https://mail.python.org/mailman/listinfo/python-list
Re: how to get names of attributes
On Wed, Dec 30, 2015, at 07:50, Chris Angelico wrote: > I believe that's true, yes. The meaning of "by default" there is that > "class X: pass" will make an old-style class. All built-in types are > now new-style classes. To be clear, AFAIK, built-in types were never old-style classes - prior to the introduction of the new type system (i.e. in Python 2.1 and earlier) they were not classes, and afterwards they were immediately new-style classes. -- https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
On Wed, 30 Dec 2015 08:35:57 -0700, Ian Kelly wrote:
> On Dec 30, 2015 7:46 AM, "Charles T. Smith"
> wrote:
>> As is so often the case, in composing my answer to your question, I
>> discovered a number of problems in my class (e.g. I was calling
>> __getitem__() myself!), but I'm puzzled now how to proceed. I thought
>> the way you avoid triggering __getattr__() from within that was to use
>> self.__dict__[name] but that doesn't work:
>>
>> (PDB)p self.attrs.keys()
>> ['mcc', 'abc']
>> (PDB)p self.attrs.__dict__['abc']
>> *** KeyError: KeyError('abc',)
>
> What leads you to believe that this is triggering a call to __getattr__?
> The KeyError probably just means that the key 'abc' wasn't found in the
> dict.
I meant, it doesn't work because I'm not getting at the attribute. Although
keys() sees it, it's not in the __dict__ attribute of attrs. If it's not
there, where is it?
>             print "attrdict:av:__getattr__: autovivifying ", name
>             #self.__dict__.__setitem__ (name, self.__class__())
>             #self.__setitem__ (name, self.__class__())
>             self.__setattr__ (name, self.__class__())
>
> No reason to explicitly call __setitem__ or __setattr__ here. I'd
> probably just do self[name] = self.__class__()
The reason I used this is to avoid triggering the __setitem__() method:
self.__setattr__(name, self.__class__())
which is invoked if I use the "self[name]" syntax. But that didn't work.
Is it just impossible to get at attributes without going through either
__getattr__() or __getitem__()?
> Based on the preceding, you probably want to return the value you just
> set in the dict, correct? So just return self[name].
The problem is that then triggers the __getitem__() method and I don't
know how to get to the attributes without triggering __getattr__().
It's the interplay of the two that's killing me.
In the example, if I have:
self.mcc = self.attrs.mcc
The crux:
Then __getattr__() triggers for the mcc. If I try to use self.attrs['mcc']
to get it, then that triggers __getitem__(). Okay, if the key is not an int,
I'll go and get it and return it... unfortunately that triggers __getattr__(),
an infinite loop.
I tried using:
attrdict.__getattr__ (self, 'mcc')
but that didn't help, of course. I also tried so, but I've got this wrong,
somehow:
super (attrdict, self).__getattr__ ('mcc')
class attrdict(dict):
    def __init__ (self, name = None):
        if name:
            self.update (name)
            print "attrdict: instantiated: ", name

    # AutoVivification
    def __getattr__ (self, name):
        print "attrdict:av:__getattr__: entered for ", name
        if name not in self.keys():
            print "attrdict:av:__getattr__: autovivifying ", name
            self[name] = self.__class__()
        return self[name]

    def __getitem__ (self, key):
        print "attrdict:av:__getitem__: entered for ", key
        if type (key) is int:  # TODO: support slices
            return self.__getitem__(key)
        return attrdict.__getattr__(self, key)

    def __setattr__(self, name, value):
        self[name] = value
--
https://mail.python.org/mailman/listinfo/python-list
Re: Need help on a project To :"Create a class called BankAccount with the following parameters "
i have these task which i believe i have done well to some level

Create a function get_algorithm_result to implement the algorithm below

1- Get a list of numbers L1, L2, L3LN as argument
2- Assume L1 is the largest, Largest = L1
3- Take next number Li from the list and do the following
4- If Largest is less than Li
5- Largest = Li
6- If Li is last number from the list then
7- return Largest and come out
8- Else repeat same process starting from step 3

Create a function prime_number that does the following:
Takes as parameter an integer and
Returns boolean value true if the value is prime or
Returns boolean value false if the value is not prime

so i came up with this code below

def get_algorithm_result(my_list):
    if not any(not type(y) is int for y in my_list):
        largest = 0
        for item in range(0,len(my_list)):
            if largest < my_list[item]:
                largest = my_list[item]
        return largest
    else:
        return(my_list[-1])

def prime_number(integer):
    if integer%2==0 and 2!=integer:
        return False
    else:
        return True

get_algorithm_result([1, 78, 34, 12, 10, 3])
get_algorithm_result(["apples", "oranges", "mangoes", "banana", "zoo"])
prime_number(1)
prime_number(78)
prime_number(11)

for the question above, there is a unittest which reads

import unittest

class AlgorithmTestCases(unittest.TestCase):
    def test_maximum_number_one(self):
        result = get_algorithm_result([1, 78, 34, 12, 10, 3])
        self.assertEqual(result, 78, msg="Incorrect number")

    def test_maximum_number_two(self):
        result = get_algorithm_result(["apples", "oranges", "mangoes", "banana", "zoo"])
        self.assertEqual(result, "zoo", msg="Incorrect number")

    def test_prime_number_one(self):
        result = prime_number(1)
        self.assertEqual(result, True, msg="Result is invalid")

    def test_prime_number_two(self):
        result = prime_number(78)
        self.assertEqual(result, False, msg="Result is invalid")

    def test_prime_number_three(self):
        result = prime_number(11)
        self.assertEqual(result, True, msg="Result is invalid")

but once i run my code, it returns an error saying "Test Spec Failed

Your solution failed to pass all the tests"

what is actually wrong with my code?
--
https://mail.python.org/mailman/listinfo/python-list
Re: Need help on a project To :"Create a class called BankAccount with the following parameters "
On Wed, Dec 30, 2015 at 1:21 PM, Won Chang wrote: > > i have these task which i believe i have done well to some level > > Create a function get_algorithm_result to implement the algorithm below > > 1- Get a list of numbers L1, L2, L3LN as argument 2- Assume L1 is the > largest, Largest = L1 3- Take next number Li from the list and do the > following 4- If Largest is less than Li 5- Largest = Li 6- If Li is last > number from the list then 7- return Largest and come out 8- Else repeat > same process starting from step 3 > > Create a function prime_number that does the following Takes as parameter > an integer and Returns boolean value true if the value is prime or Returns > boolean value false if the value is not prime > > so i came up with this code below > > def get_algorithm_result(my_list): > if not any(not type(y) is int for y in my_list): > largest = 0 > for item in range(0,len(my_list)): > if largest < my_list[item]: > largest = my_list[item] > return largest > else: > > return(my_list[-1]) > > def prime_number(integer): > if integer%2==0 and 2!=integer: > return False > else: > return True > > get_algorithm_result([1, 78, 34, 12, 10, 3]) > get_algorithm_result(["apples", "oranges", "mangoes", "banana", "zoo"]) > prime_number(1) > prime_number(78) > prime_number(11) > for the question above, there is a unittes which reads > > import unittest > > class AlgorithmTestCases(unittest.TestCase): > def test_maximum_number_one(self): > result = get_algorithm_result([1, 78, 34, 12, 10, 3]) > self.assertEqual(result, 78, msg="Incorrect number") > > def test_maximum_number_two(self): > result = get_algorithm_result(["apples", "oranges", "mangoes", > "banana", "zoo"]) > self.assertEqual(result, "zoo", msg="Incorrect number") > > def test_prime_number_one(self): > result = prime_number(1) > self.assertEqual(result, True, msg="Result is invalid") > > def test_prime_number_two(self): > result = prime_number(78) > self.assertEqual(result, False, msg="Result is invalid") > > def test_prime_number_three(self): > result = prime_number(11) > self.assertEqual(result, True, msg="Result is invalid") > but once i run my code ,it returns error saying Test Spec Failed > > Your solution failed to pass all the tests > what is actually wrong with my code? > You need to copy and paste the complete results and or traceback. There is no string "Test Spec Failed" in anything you have shown > -- > https://mail.python.org/mailman/listinfo/python-list > -- Joel Goldstick http://joelgoldstick.com/stats/birthdays -- https://mail.python.org/mailman/listinfo/python-list
Re: PEAK-Rules package.
On Wed, Dec 30, 2015 at 8:24 AM, Radhika Grover wrote: > Hi, > > I don't see any package available under > https://pypi.python.org/simple/PEAK-Rules/. Could you please let me know > if it has seen a change recently. > > I need PEAK-Rules>=0.5a1.dev-r2600 using easy_install default behavior. > Any help is appreciated. Thanks in advance! > > - Radhika > -- > https://mail.python.org/mailman/listinfo/python-list > https://pypi.python.org/pypi/PEAK-Rules maybe? -- Joel Goldstick http://joelgoldstick.com/stats/birthdays -- https://mail.python.org/mailman/listinfo/python-list
Re: Need help on a project To :"Create a class called BankAccount with the following parameters "
On Wed, Dec 30, 2015 at 3:06 PM, Joel Goldstick wrote: > > > On Wed, Dec 30, 2015 at 1:21 PM, Won Chang wrote: > >> >> i have these task which i believe i have done well to some level >> >> Create a function get_algorithm_result to implement the algorithm below >> >> 1- Get a list of numbers L1, L2, L3LN as argument 2- Assume L1 is the >> largest, Largest = L1 3- Take next number Li from the list and do the >> following 4- If Largest is less than Li 5- Largest = Li 6- If Li is last >> number from the list then 7- return Largest and come out 8- Else repeat >> same process starting from step 3 >> >> Create a function prime_number that does the following Takes as parameter >> an integer and Returns boolean value true if the value is prime or Returns >> boolean value false if the value is not prime >> >> so i came up with this code below >> >> def get_algorithm_result(my_list): >> if not any(not type(y) is int for y in my_list): >> > Not sure what above line is trying to do? Check that item is int? That won't help with the case of strings in my_list > largest = 0 >> for item in range(0,len(my_list)): >> > The above line can be more pythonically expressed as for item in my_list: if largest < item: largest = item return largest Then below is gone. Not sure what the else clause below is for > if largest < my_list[item]: >> largest = my_list[item] >> return largest >> else: >> >> return(my_list[-1]) >> >> def prime_number(integer): >> if integer%2==0 and 2!=integer: >> return False >> else: >> return True >> >> get_algorithm_result([1, 78, 34, 12, 10, 3]) >> get_algorithm_result(["apples", "oranges", "mangoes", "banana", "zoo"]) >> prime_number(1) >> prime_number(78) >> prime_number(11) >> for the question above, there is a unittes which reads >> >> import unittest >> >> class AlgorithmTestCases(unittest.TestCase): >> def test_maximum_number_one(self): >> result = get_algorithm_result([1, 78, 34, 12, 10, 3]) >> self.assertEqual(result, 78, msg="Incorrect number") >> >> def test_maximum_number_two(self): >> result = get_algorithm_result(["apples", "oranges", "mangoes", >> "banana", "zoo"]) >> self.assertEqual(result, "zoo", msg="Incorrect number") >> >> def test_prime_number_one(self): >> result = prime_number(1) >> self.assertEqual(result, True, msg="Result is invalid") >> >> def test_prime_number_two(self): >> result = prime_number(78) >> self.assertEqual(result, False, msg="Result is invalid") >> >> def test_prime_number_three(self): >> result = prime_number(11) >> self.assertEqual(result, True, msg="Result is invalid") >> but once i run my code ,it returns error saying Test Spec Failed >> >> Your solution failed to pass all the tests >> what is actually wrong with my code? >> > > You need to copy and paste the complete results and or traceback. There > is no string "Test Spec Failed" in anything you have shown > >> -- >> https://mail.python.org/mailman/listinfo/python-list >> > > > > -- > Joel Goldstick > http://joelgoldstick.com/stats/birthdays > -- Joel Goldstick http://joelgoldstick.com/stats/birthdays -- https://mail.python.org/mailman/listinfo/python-list
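Following Joel's suggestion of iterating over the list directly, a hedged sketch of what get_algorithm_result could look like (an illustration, not the course's reference solution; it relies on < working for both numbers and strings, as the posted tests require):

    def get_algorithm_result(my_list):
        # Step 2 of the algorithm: assume the first element is the largest,
        # then let any later, larger element replace it.
        largest = my_list[0]
        for item in my_list[1:]:
            if largest < item:
                largest = item
        return largest

    assert get_algorithm_result([1, 78, 34, 12, 10, 3]) == 78
    assert get_algorithm_result(["apples", "oranges", "mangoes", "banana", "zoo"]) == "zoo"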
subprocess check_output
Hi,

Trying to run a specific command (ibstat) installed in /usr/sbin on an
Ubuntu 15.04 machine, using subprocess.check_output and getting
"/bin/sh: /usr/sbin/ibstat: No such file or directory"

I tried the following:
- running the command providing full path
- running with executable=bash
- running with (['/bin/bash', '-c' , "/usr/sbin/ibstat"])

Nothing worked ... Any idea?

-carlos
--
https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
On Wed, Dec 30, 2015 at 9:58 AM, Charles T. Smith
wrote:
> On Wed, 30 Dec 2015 08:35:57 -0700, Ian Kelly wrote:
>
>> On Dec 30, 2015 7:46 AM, "Charles T. Smith"
>> wrote:
>>> As is so often the case, in composing my answer to your question, I
>>> discovered a number of problems in my class (e.g. I was calling
>>> __getitem__() myself!), but I'm puzzled now how to proceed. I thought
>>> the way you avoid triggering __getattr__() from within that was to use
>>> self.__dict__[name] but that doesn't work:
>>>
>>> (PDB)p self.attrs.keys()
>>> ['mcc', 'abc']
>>> (PDB)p self.attrs.__dict__['abc']
>>> *** KeyError: KeyError('abc',)
>>
>> What leads you to believe that this is triggering a call to __getattr__?
>> The KeyError probably just means that the key 'abc' wasn't found in the
>> dict.
>
>
> I meant, it doesn't work because I'm not getting at the attribute Although
> keys()
> sees it, it's not in the __dict__ attribute of attrs. If it's not there,
> where is it?
I think you're probably getting confused because there are three
different dicts at play here:
* Since your attrdict class inherits from dict, self.attrs is a dict.
* self.attrs.__dict__ is a *different* dict, used to store the
instance attributes of self.attrs.
* self.attrs.__class__.__dict__ is another different dict, used to
store the class attributes of attrdict.
The keys method that you're calling above is a method of the
self.attrs dict, which is where your attrdict's __setattr__ is setting
it. That's why you find it there but not in self.attrs.__dict__.
>> print "attrdict:av:__getattr__: autovivifying ", name
>> #self.__dict__.__setitem__ (name, self.__class__())
>> #self.__setitem__ (name, self.__class__()) self.__setattr__
>> (name, self.__class__())
>>
>> No reason to explicitly call __setitem__ or __setattr__ here. I'd
>> probably just do self[name] = self.__class__()
>
>
> The reason I used this is to avoid trigging the __setitem__() method:
>
> self.__setattr__(name, self.__class__())
>
> which is invoked if I use the "self[name]" syntax. But that didn't work.
But the body of your __setattr__ method is just "self[name] =
self.__class__()", which is the exact same code as what I suggested
and will still invoke __setitem__.
That said, I don't get why you're trying to avoid calling __setitem__.
If you're trying to store the attribute as a dict item, as you seem to
be doing, why shouldn't that dict's __setitem__ be involved?
> Is it just impossible to get at attributes without going through either
> __getattr__() or __getitem__()?
No.
>> Based on the preceding, you probably want to return the value you just
>> set in the dict, correct? So just return self[name].
>
>
> The problem is that then triggers the __getitem__() method and I don't
> know how to get to the attributes without triggering __getattr__().
>
> It's the interplay of the two that's killing me.
The only interplay of the two is what you have written into your class.
> In the example, if I have:
>
> self.mcc = self.attrs.mcc
>
>
> The crux:
>
> Then __getattr__() triggers for the mcc. If I try to use self.attrs['mcc']
> to get it, then that triggers __getitem__(). Okay, if the key is not an int,
> I'll go and get it and return it... unfortunately that triggers __getattr__(),
> an infinite loop.
How precisely are you trying to store these: as an attribute, or as a
dict item? If it's supposed to be in the dict, then why is your
__getitem__ trying to look up an attribute to begin with?
> class attrdict(dict):
> def __init__ (self, name = None):
> if name:
> self.update (name)
> print "attrdict: instantiated: ", name
>
> # AutoVivification
> def __getattr__ (self, name):
> print "attrdict:av:__getattr__: entered for ", name
> if name not in self.keys():
> print "attrdict:av:__getattr__: autovivifying ", name
> self[name] = self.__class__()
> return self[name]
>
> def __getitem__ (self, key):
> print "attrdict:av:__getitem__: entered for ", key
> if type (key) is int: # TODO: support slices
> return self.__getitem__(key)
Here the method as written is just going to end up calling itself. You
probably want super(attrdict, self).__getitem__(key)
> return attrdict.__getattr__(self, key)
And here if you really want to access the instance attributes without
using __getattr__, just use self.__dict__[key].
I don't understand what it is that you're trying to accomplish here by
looking the key up in the instance attributes, though. It looks very
circular. I think you should clearly define where you expect the items
to be stored and then only check that location.
--
https://mail.python.org/mailman/listinfo/python-list
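Pulling Ian's advice together, a hedged minimal sketch of an autovivifying attrdict that stores everything as dict items and never touches the instance __dict__ (an illustration of the approach, not the OP's legacy class; Python 2 syntax to match the thread):

    class attrdict(dict):
        """Dict whose missing attributes autovivify as nested attrdicts."""

        def __getattr__(self, name):
            # Only called when normal attribute lookup fails, so real
            # methods and existing attributes are never shadowed.
            try:
                return self[name]
            except KeyError:
                value = self[name] = type(self)()   # autovivify
                return value

        def __setattr__(self, name, value):
            # Attribute assignment is stored as a dict item too.
            self[name] = value

    d = attrdict()
    d.mcc = 262                  # same as d['mcc'] = 262
    print d['mcc']               # -> 262
    print d.levels.deeper        # both levels autovivify -> {}
    print d                      # -> {'mcc': 262, 'levels': {'deeper': {}}} (key order may vary)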
Re: EOFError: marshal data too short -- causes?
On 12/29/2015 5:56 AM, D'Arcy J.M. Cain wrote: On Tue, 29 Dec 2015 00:01:00 -0800 Glenn Linderman wrote: OK, so I actually renamed it instead of zapping it. Them, actually, Really, just zap them. They are object code. Even if you zap a perfectly good .pyc file a perfectly good one will be re-created as soon as you import it. No need to clutter up you file system. Yes, the only value would be if the type of corruption could be determined from the content. -- https://mail.python.org/mailman/listinfo/python-list
Re: EOFError: marshal data too short -- causes?
On 12/29/2015 1:00 PM, Terry Reedy wrote: I updated to 2.7.11, 3.4.4, and 3.5.1 a couple of weeks ago, so the timestamps are all fresh. So I don't know what happened with 3.4.3 timestamps from last April and whether Windows itself touches the files. I just tried importing a few and Python did not. I'm a Windows user, too, generally, but the web host runs Linux. I suppose, since the install does the compileall, that I could set all the __pycache__ files to read-only, even for "owner". Like you said, those files _can't_ be updated without Admin/root permission when it is a root install... so there would be no need, once compileall has been done, for the files to be updated until patches would be applied. This isn't a root install, though, but a "user" install. Level1 support at the web host claims they never touch user files unless the user calls and asks them to help with something that requires it. And maybe Level1 support religiously follows that policy, but other files have changed, so that policy doesn't appear to be universally applied for all personnel there... so the answer isn't really responsive to the question, but the tech I talked to was as much a parrot as a tech... Glenn -- https://mail.python.org/mailman/listinfo/python-list
Re: subprocess check_output
On 30Dec2015 21:14, Carlos Barera wrote:

> Trying to run a specific command (ibstat) installed in /usr/sbin on an
> Ubuntu 15.04 machine, using subprocess.check_output and getting "/bin/sh:
> /usr/sbin/ibstat: No such file or directory"
>
> I tried the following:
> - running the command providing full path
> - running with executable=bash
> - running with (['/bin/bash', '-c' , "/usr/sbin/ibstat"])
>
> Nothing worked ...

The first check is to run the command from a shell. Does it work? Does
"which ibstat" confirm that the command exists at that path? Is it even
installed?

If it does, you should be able to run it directly without using a shell:

subprocess.call(['/usr/sbin/ibstat'], ...)

or just plain ['ibstat'].

Also remember that using "sh -c blah" or "bash -c blah" is subject to all
the same security issues that subprocess' "shell=True" parameter is, and
that it should be avoided without special reason.

Finally, remember to drop the common Linux fetish with "bash". Just use
"sh"; on many systems it _is_ bash, but it will provide portable use. The
bash is just a particular Bourne style shell, not installed everywhere, and
rarely of any special benefit for scripts over the system /bin/sh (which
_every_ UNIX system has).

If none of this solves your problem, please reply including the failing code
and a transcript of the failure output.

Thanks,
Cameron Simpson
--
https://mail.python.org/mailman/listinfo/python-list
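For reference, a minimal hedged sketch of calling the command without a shell, along the lines Cameron describes (the path comes from the thread; the error handling shown is just one option, not the OP's code):

    import subprocess

    try:
        # No shell involved: the executable path is used directly.
        output = subprocess.check_output(['/usr/sbin/ibstat'])
    except OSError as exc:
        # Raised if the file really is missing or not executable for this user.
        print("could not execute ibstat: %s" % exc)
    except subprocess.CalledProcessError as exc:
        # Raised if ibstat ran but exited non-zero.
        print("ibstat failed with status %d" % exc.returncode)
    else:
        print(output)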
Re: how to get names of attributes
On Thu, Dec 31, 2015 at 4:04 AM, Random832 wrote: > On Wed, Dec 30, 2015, at 07:50, Chris Angelico wrote: >> I believe that's true, yes. The meaning of "by default" there is that >> "class X: pass" will make an old-style class. All built-in types are >> now new-style classes. > > To be clear, AFAIK, built-in types were never old-style classes - prior > to the introduction of the new type system (i.e. in Python 2.1 and > earlier) they were not classes, and afterwards they were immediately > new-style classes. > -- > https://mail.python.org/mailman/listinfo/python-list Thanks Random. I wasn't actively using Python until about 2.5ish, and wasn't following python-dev until even more recently, so I didn't keep track of all that. And frankly, old-style classes have just been something I avoid whereever possible. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Re: subprocess check_output
On Thu, Dec 31, 2015 at 8:02 AM, Cameron Simpson wrote: > On 30Dec2015 21:14, Carlos Barera wrote: >> >> Trying to run a specific command (ibstat) installed in /usr/sbin on an >> Ubuntu 15.04 machine, using subprocess.check_output and getting "/bin/sh: >> /usr/sbin/ibstat: No such file or directory" >> >> I tried the following: >> - running the command providing full path >> - running with executable=bash >> - running with (['/bin/bash', '-c' , "/usr/sbin/ibstat"]) >> >> Nothing worked ... > > > The first check is to run the command from a shell. Does it work? Does > "which ibstat" confirm that the command exist at that path? Is it even > installed? And do those checks as the same user as your script runs as, with the same environment and all. You might find that a file is executable for one user but not for another, or something. ChrisA -- https://mail.python.org/mailman/listinfo/python-list
Newbie: How to convert a tuple of strings into a tuple of ints
How do I get from here
t = ('1024', '1280')
to
t = (1024, 1280)
Thanks for all help!
--
https://mail.python.org/mailman/listinfo/python-list
Re: Newbie: How to convert a tuple of strings into a tuple of ints
On Thu, Dec 31, 2015 at 9:46 AM, wrote:
> How do I get from here
>
> t = ('1024', '1280')
>
> to
>
> t = (1024, 1280)
>
>
> Thanks for all help!
t = (int(t[0]), int(t[1]))
If the situation is more general than that, post your actual code and
we can help out more. Working with a single line isn't particularly
easy. :)
ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
Re: Newbie: How to convert a tuple of strings into a tuple of ints
[email protected] writes:

> How do I get from here
>
> t = ('1024', '1280')
>
> to
>
> t = (1024, 1280)

Both of those are assignment statements, so I'm not sure what you mean by
“get from … to”. To translate one assignment statement to a different
assignment statement, re-write the statement.

But I think you want to produce a new sequence from an existing sequence.

The ‘map’ built-in function is useful for that::

    sequence_of_numbers_as_text = ['1024', '1280']
    sequence_of_integers = map(int, sequence_of_numbers_as_text)

That sequence can then be iterated.

Another (more broadly useful) way is to use a generator expression::

    sequence_of_integers = (int(item) for item in sequence_of_numbers_as_text)

If you really want a tuple, just pass that sequence to the ‘tuple’
callable::

    tuple_of_integers = tuple(
        int(item) for item in sequence_of_numbers_as_text)

or::

    tuple_of_integers = tuple(map(int, sequence_of_numbers_as_text))

--
\ “Nothing is more sacred than the facts.” —Sam Harris, _The End |
`\ of Faith_, 2004 |
_o__) |
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
On Wed, 30 Dec 2015 13:40:44 -0700, Ian Kelly wrote:

> On Wed, Dec 30, 2015 at 9:58 AM, Charles T. Smith
>> The problem is that then triggers the __getitem__() method and I don't
>> know how to get to the attributes without triggering __getattr__().
>>
>> It's the interplay of the two that's killing me.
>
> The only interplay of the two is what you have written into your class.
>
>> In the example, if I have:
>>
>> self.mcc = self.attrs.mcc
>>
>> The crux:
>>
>> Then __getattr__() triggers for the mcc. If I try to use
>> self.attrs['mcc'] to get it, then that triggers __getitem__(). Okay,
>> if the key is not an int, I'll go and get it and return it...
>> unfortunately that triggers __getattr__(), an infinite loop.
>
> How precisely are you trying to store these: as an attribute, or as a
> dict item? If it's supposed to be in the dict, then why is your
> __getitem__ trying to look up an attribute to begin with?

I don't understand this distinction between an "attribute" and a "dict
item". attrdict is a stupid legacy class that inherits from a dictionary. I
think the intent was to be faster than a normal class, but I doubt there's
any value to that. In any case, I thought that class attributes were, in
fact, items of __dict__?

The reason that my __getitem__() is trying to look up an attribute is, I
think, because this syntax triggers __getitem__ with a key of "mcc":

return self[name]

Is that assumption wrong? That was the reason I was looking to find another
way to get at the attributes, because

return self.__getattr__(name)

does, too, and this doesn't even find them:

return self.__getattribute__(name)

Just to establish the concept, this horrible thing is showing some promise:

attriter = self.iteritems()
for attr in attriter:
    if attr[0] == name:
        return attr[1]

(because the subscripts there are on a tuple type)

But I concede I must be doing something fundamentally wrong because this
assert is triggering:

def __getattr__ (self, name):
    print "attrdict:av:__getattr__: entered for ", name
    assert name not in self.keys(), "attrdict:__getattr__: who lied?"
--
https://mail.python.org/mailman/listinfo/python-list
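A two-line hedged illustration of the distinction being asked about here (and explained further down the thread): even on a plain dict, item lookup and attribute lookup are separate namespaces.

>>> d = {'mcc': 262}
>>> d['mcc']            # item lookup, goes through __getitem__
262
>>> d.mcc               # attribute lookup, goes through __getattribute__
Traceback (most recent call last):
  ...
AttributeError: 'dict' object has no attribute 'mcc'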
Re: Newbie: How to convert a tuple of strings into a tuple of ints
On Wed, Dec 30, 2015 at 3:46 PM, wrote:
> How do I get from here
>
> t = ('1024', '1280')
>
> to
>
> t = (1024, 1280)
Deja vu: https://mail.python.org/pipermail/python-list/2015-December/701017.html
--
https://mail.python.org/mailman/listinfo/python-list
Re: Newbie: How to convert a tuple of strings into a tuple of ints
Thanks much - both solutions work well for me

On Wednesday, December 30, 2015 at 2:57:50 PM UTC-8, Ben Finney wrote:
> [email protected] writes:
>
> > How do I get from here
> >
> > t = ('1024', '1280')
> >
> > to
> >
> > t = (1024, 1280)
>
> Both of those are assignment statements, so I'm not sure what you mean
> by "get from ... to". To translate one assignment statement to a
> different assignment statement, re-write the statement.
>
> But I think you want to produce a new sequence from an existing sequence.
> The 'map' built-in function is useful for that::
>
>     sequence_of_numbers_as_text = ['1024', '1280']
>     sequence_of_integers = map(int, sequence_of_numbers_as_text)
>
> That sequence can then be iterated.
>
> Another (more broadly useful) way is to use a generator expression::
>
>     sequence_of_integers = (int(item) for item in sequence_of_numbers_as_text)
>
> If you really want a tuple, just pass that sequence to the 'tuple'
> callable::
>
>     tuple_of_integers = tuple(
>         int(item) for item in sequence_of_numbers_as_text)
>
> or::
>
>     tuple_of_integers = tuple(map(int, sequence_of_numbers_as_text))
>
> --
>  \       "Nothing is more sacred than the facts." --Sam Harris, _The End |
>   `\                                                     of Faith_, 2004 |
> _o__)                                                                    |
> Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
On Wed, 30 Dec 2015 22:54:44 +0000, Charles T. Smith wrote:

> But I concede I must be doing something fundamentally wrong because this
> assert is triggering:
>
>     def __getattr__ (self, name):
>         print "attrdict:av:__getattr__: entered for ", name
>         assert name not in self.keys(), "attrdict:__getattr__: who lied?"

It's really a hassle how pan(1) sometimes wraps and sometimes doesn't.
Is there a better way to do this?
--
https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
"Charles T. Smith" writes: > I don't understand this distinction between an "attribute" and a "dict > item". When did you most recently work through the Python tutorial https://docs.python.org/3/tutorial/>> You may want to work through it again, from start to finish and exercising each example, to be sure you have a solid understanding of basic concepts like these. In brief: Objects have attributes; looking up an attribute on an object has specific syntax. Dictionaries are collections of items; the items are looked up by key, using a quite different syntax. Those two different syntaxes translate to distinct special methods. You may be familiar with other languages where the distinction between “attribute of an object” is not distinct from “item in a dictionary”. Python is not one of those languages; the distinction is real and important. You'll need to do some remedial learning of Python, and I recommend working through the Python tutorial. -- \“But it is permissible to make a judgment after you have | `\examined the evidence. In some circles it is even encouraged.” | _o__)—Carl Sagan, _The Burden of Skepticism_, 1987 | Ben Finney -- https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
On Thu, 31 Dec 2015 10:13:53 +1100, Ben Finney wrote:

> "Charles T. Smith" writes:
>
>> I don't understand this distinction between an "attribute" and a "dict
>> item".
>
> When did you most recently work through the Python tutorial
> <https://docs.python.org/3/tutorial/>? You may want to work through it
> again, from start to finish and exercising each example, to be sure
> you have a solid understanding of basic concepts like these.
>
> In brief: Objects have attributes; looking up an attribute on an object
> has specific syntax. Dictionaries are collections of items; the items
> are looked up by key, using a quite different syntax. Those two
> different syntaxes translate to distinct special methods.
>
> You may be familiar with other languages where the distinction between
> “attribute of an object” is not distinct from “item in a dictionary”.
> Python is not one of those languages; the distinction is real and
> important. You'll need to do some remedial learning of Python, and I
> recommend working through the Python tutorial.

Thanks, Ben, for your advice. Actually, I think that Ian and Chris and I
are making fine progress. I think you misread my question.
--
https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
On Thu, 31 Dec 2015 10:13 am, Ben Finney wrote:

> You may be familiar with other languages where the distinction between
> “attribute of an object” is not distinct from “item in a dictionary”.
> Python is not one of those languages; the distinction is real and
> important.

I'm not sure what distinction you're referring to, can you explain?

Obviously there is a syntax difference between x.attr and x['key'], but
attributes *are* items in a dictionary (ignoring __slots__ and
__getattr__ for the time being). Either the instance __dict__, the
class __dict__, or a superclass __dict__.

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
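For readers following the exchange, a small self-contained demo of the
sense in which ordinary attributes are backed by a __dict__ (class and
attribute names here are invented for the demo):

    class Point(object):
        pass

    p = Point()
    p.x = 3                          # stored in the instance __dict__
    print(p.__dict__)                # {'x': 3}
    print(p.x is p.__dict__['x'])    # True: attribute lookup found the dict entry

    Point.origin = "somewhere"       # class attributes live in the class __dict__
    print(Point.__dict__['origin'])  # somewhere
    print(p.origin)                  # instance lookup falls back to the class dict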
Re: how to get names of attributes
On Wed, 30 Dec 2015 10:51 pm, Charles T. Smith wrote:
> Hi,
>
> How can I get *all* the names of an object's attributes?
In the most general case, you cannot.
Classes can define a __getattr__ method (and a __getattribute__ method, for
new-style classes only) which implement dynamic attributes. These can be
*extremely* dynamic and impossible to predict ahead of time. Starting from
the simplest cases to the most horrible:
def __getattr__(self, name):
    if name == "spam":
        return 1
    elif name == self._attribute_name:
        return 2
    elif name == some_function(10, 20, 30):
        return 3
    elif name.lower() in ("x", "y") or name.startswith("foo"):
        return 4
    elif 1626740500 <= hash(name) <= 1626740600:
        return 5
    elif name == "surprise" and random.random() < 0.5:
        return 6
    raise AttributeError
So you can see that even in principle, there is no way for the Python
interpreter to look inside the __getattr__ method and determine what
attributes exist.
Fortunately, there's a way around that: you can customise the list of
attribute names returned by dir():
py> class X:
...     def __dir__(self):
...         return dir(X) + ["a", "b"]
...     def spam(self):
...         pass
...
py> x = X()
py> dir(x)
['__dir__', '__doc__', '__module__', 'a', 'b', 'spam']
So if you have dynamic attributes generated by __getattr__, the polite thing
to do is to return their names from __dir__.
> I have legacy
> code with mixed new style classes and old style classes and I need to
> write methods which deal with both. That's the immediate problem, but
> I'm always running into the need to understand how objects are linked, in
> particular when in pdb. The answers one always sees on StackOverflow is
> that you don't need to understand, understanding is not the pythonic way
> to do things.
That's why I don't think much of the majority of StackOverflow answers.
> Alternatively, is there are map documented somewhere - more complete than
> python/python-2.7.3-docs-html/library/stdtypes.html?
> highlight=class#special-attributes
>
> Or, is the code available uncompiled somewhere on my machine?
That depends on how you installed Python. If you installed it from source,
then it will be, unless you deleted it after compiling. But it is easy
enough to get the Python source code:
https://www.python.org/downloads/source/
https://docs.python.org/devguide/setup.html#checkout
https://hg.python.org/cpython/file/tip
> Does anyone know *why* the __members__ method was deprecated, to be
> replaced by dir(), which doesn't tell the truth (if only it took an
> optional parameter to say: "be truthful")
dir() is intentionally meant for use in the interactive interpreter, to
return "interesting" attributes. It isn't really intended for programmatic
use.
For that, you might wish to write your own version of dir(). Here is some
pseudo-code:
def mydir(obj):
    if type(obj) defines __dir__, return the output of __dir__
    names = set()
    if obj is an instance:
        if obj has __slots__, then add each slot to names;
        try:
            add each key from vars(obj) to names;
        except TypeError:
            # no instance __dict__
            pass
        call mydir(type(obj)) and add the names it returns to names;
    else obj must be a class or type:
        add each key from vars(obj) to names;
        if obj is an old-style class:
            do the same for each class in obj.__bases__;
        else obj must be a new-style class:
            do the same for each class in obj.__mro__
    return sorted(names)
But I expect that the differences between this and what the built-in dir()
returns will be minimal: only attributes which are part of the machinery
used to get classes themselves working, not part of your class or instance.
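A rough, runnable translation of that pseudo-code, written against
Python 2 since the old-style/new-style distinction only exists there
(types.ClassType is gone in Python 3); treat it as a sketch rather than
a faithful reimplementation of dir():

    import types

    def _class_names(cls, names):
        # Collect attribute names from a class and all of its ancestors.
        names.update(vars(cls))
        for base in cls.__bases__:
            _class_names(base, names)

    def mydir(obj):
        """Rough, incomplete approximation of dir() for Python 2."""
        # Defer to a custom __dir__ if the object's type provides one.
        dirfunc = getattr(type(obj), '__dir__', None)
        if dirfunc is not None:
            return sorted(dirfunc(obj))

        names = set()
        if isinstance(obj, (type, types.ClassType)):
            # obj is itself a class (new-style or old-style).
            _class_names(obj, names)
        else:
            # obj is an instance: slots, instance dict, then its class.
            names.update(getattr(obj, '__slots__', ()))
            try:
                names.update(vars(obj))
            except TypeError:
                pass            # no instance __dict__
            _class_names(obj.__class__, names)
        return sorted(names)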
--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
raise None
I have a lot of functions that perform the same argument checking each
time:

def spam(a, b):
    if condition(a) or condition(b): raise TypeError
    if other_condition(a) or something_else(b): raise ValueError
    if whatever(a): raise SomethingError
    ...

def eggs(a, b):
    if condition(a) or condition(b): raise TypeError
    if other_condition(a) or something_else(b): raise ValueError
    if whatever(a): raise SomethingError
    ...

Since the code is repeated, I naturally pull it out into a function:

def _validate(a, b):
    if condition(a) or condition(b): raise TypeError
    if other_condition(a) or something_else(b): raise ValueError
    if whatever(a): raise SomethingError

def spam(a, b):
    _validate(a, b)
    ...

def eggs(a, b):
    _validate(a, b)
    ...

But when the argument checking fails, the traceback shows the error
occurring in _validate, not eggs or spam. (Naturally, since that is
where the exception is raised.) That makes the traceback more confusing
than it need be.

So I can change the raise to return in the _validate function:

def _validate(a, b):
    if condition(a) or condition(b): return TypeError
    if other_condition(a) or something_else(b): return ValueError
    if whatever(a): return SomethingError

and then write spam and eggs like this:

def spam(a, b):
    ex = _validate(a, b)
    if ex is not None:
        raise ex
    ...

It's not much of a gain though. I save an irrelevant level in the
traceback, but only at the cost of an extra line of code everywhere I
call the argument checking function.

But suppose we allowed "raise None" to do nothing. Then I could rename
_validate to _if_error and write this:

def spam(a, b):
    raise _if_error(a, b)
    ...

and have the benefits of "Don't Repeat Yourself" without the
unnecessary, and misleading, extra level in the traceback.

Obviously this doesn't work now, since raise None is an error, but if
it did work, what do you think?

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
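For reference, a self-contained sketch of the return-the-exception
workaround described above, with made-up validation conditions standing
in for condition() and friends:

    def _validate(a, b):
        # Return (not raise) an exception describing the problem,
        # or None if the arguments are acceptable.
        if not isinstance(a, int) or not isinstance(b, int):
            return TypeError("a and b must be ints")
        if a < 0 or b < 0:
            return ValueError("a and b must be non-negative")
        return None

    def spam(a, b):
        ex = _validate(a, b)
        if ex is not None:
            raise ex          # traceback points here, inside spam
        return a + b

    spam(1, 2)                # fine
    try:
        spam(-1, 2)
    except ValueError as err:
        print(err)            # a and b must be non-negative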
Re: raise None
Steven D'Aprano writes:

> def _validate(a, b):
>     if condition(a) or condition(b): return TypeError
> ...
> Obviously this doesn't work now, since raise None is an error, but if
> it did work, what do you think?

Never occurred to me. But in some analogous situations I've caught the
exception inside _validate, then peeled away some layers of the
traceback from the exception output before throwing again.
--
https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
Steven D'Aprano writes:

> On Thu, 31 Dec 2015 10:13 am, Ben Finney wrote:
>
> > You may be familiar with other languages where the distinction
> > between “attribute of an object” is not distinct from “item in a
> > dictionary”. Python is not one of those languages; the distinction
> > is real and important.
>
> I'm not sure what distinction you're referring to, can you explain?

Tersely: the relationship between an object and its attributes, is not
the same as the relationship between a dictionary and its items.

> Obviously there is a syntax difference between x.attr and x['key']

Not merely syntax; the attributes of an object are not generally
available as items of the container.

> but attributes *are* items in a dictionary

That's like saying everything in Python is a number: it conflates the
implementation with the semantics. The distinction between a Python
integer and a Python boolean value is real and important, despite the
incidental fact of their both being implemented as numbers.

> Either the instance __dict__, the class __dict__, or a superclass
> __dict__.

No, I'm not referring to the ‘__dict__’ attribute of an object; I'm
referring to the object itself. To talk about the attributes of an
object ‘foo’ is distinct from talking about the items in a dictionary
‘foo’. That distinction is real, and important.

--
 \      “… correct code is great, code that crashes could use |
  `\    improvement, but incorrect code that doesn’t crash is a |
_o__)   horrible nightmare.” —Chris Smith, 2008-08-22 |
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
Validation in Python (was: raise None)
Steven D'Aprano writes:

> I have a lot of functions that perform the same argument checking each
> time:

Not an answer to the question you ask, but: Have you tried the data
validation library “voluptuous”?

    Voluptuous, despite the name, is a Python data validation library.
    It is primarily intended for validating data coming into Python as
    JSON, YAML, etc.

    It has three goals:

    Simplicity.
    Support for complex data structures.
    Provide useful error messages.

    <https://pypi.python.org/pypi/voluptuous/>

Seems like a good way to follow Don't Repeat Yourself in code that
needs a lot of validation of inputs.

--
 \      “It's my belief we developed language because of our deep inner |
  `\    need to complain.” —Jane Wagner, via Lily Tomlin |
_o__)                                                    |
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
Re: using __getitem()__ correctly
On Wed, Dec 30, 2015 at 3:54 PM, Charles T. Smith
wrote:
> On Wed, 30 Dec 2015 13:40:44 -0700, Ian Kelly wrote:
>
>> On Wed, Dec 30, 2015 at 9:58 AM, Charles T. Smith
>>> The problem is that then triggers the __getitem__() method and I don't
>>> know how to get to the attributes without triggering __getattr__().
>>>
>>> It's the interplay of the two that's killing me.
>>
>> The only interplay of the two is what you have written into your class.
>>
>>> In the example, if I have:
>>>
>>> self.mcc = self.attrs.mcc
>>>
>>>
>>> The crux:
>>>
>>> Then __getattr__() triggers for the mcc. If I try to use
>>> self.attrs['mcc'] to get it, then that triggers __getitem__(). Okay,
>>> if the key is not an int, I'll go and get it and return it...
>>> unfortunately that triggers __getattr__(), an infinite loop.
>>
>> How precisely are you trying to store these: as an attribute, or as a
>> dict item? If it's supposed to be in the dict, then why is your
>> __getitem__ trying to look up an attribute to begin with?
>
>
>
> I don't understand this distinction between an "attribute" and a "dict item".
> attrdict is a stupid legacy class that inherits from a dictionary. I think
> the intent was to be faster than a normal class, but I doubt there's any value
> to that.
>
> In any case, I thought that class attributes were, in fact, items of __dict__?
That's correct, but as I said in my previous message, self.attrs and
self.attrs.__dict__ are two different dicts, and you're confusing one
for the other. Maybe this will be illuminating:
>>> class mydict(dict): pass
...
>>> md = mydict()
>>> md['foo'] = 42 # Set an item in md
>>> md['foo'] # 'foo' exists as an item
42
>>> md.foo # but not as an attribute
Traceback (most recent call last):
File "", line 1, in
AttributeError: 'mydict' object has no attribute 'foo'
>>> md.__dict__['foo'] # and it's not in md.__dict__
Traceback (most recent call last):
File "", line 1, in
KeyError: 'foo'
>>> md.bar = 43 # Set an attribute on md
>>> md.bar # 'bar' exists as an attribute
43
>>> md.__dict__['bar'] # and it's in md.__dict__
43
>>> md['bar'] # but it's not in md itself
Traceback (most recent call last):
File "", line 1, in
KeyError: 'bar'
And to hopefully drive the point home:
>>> md.items()
[('foo', 42)]
>>> md.__dict__.items()
[('bar', 43)]
> The reason that my __getitem__() is trying to look up an attribute is, I
> think, because this syntax triggers __getitem__ with a key of "mcc":
>
>     return self[name]
>
> Is that assumption wrong?
That assumption is correct, but this still doesn't explain to me why
you want __getitem__ to be looking up attributes, i.e. looking in
self.__dict__ when the expectation is that it would just look in self.
Is the goal here that self.foo and self['foo'] are the same thing? If
so, then you shouldn't need to worry about __getitem__ and __setitem__
at all. Just override your __getattr__ and __setattr__ to store
attributes in self instead of self.__dict__.
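A minimal sketch of that arrangement, using a hypothetical attrdict
(not the poster's actual legacy class): route attribute access through
the plain dict API so that d.x and d['x'] see the same data and neither
special method ever calls back into the other:

    class attrdict(dict):
        """Hypothetical dict subclass where d['x'] and d.x see the same data."""

        def __getattr__(self, name):
            # Only called when normal attribute lookup fails, and it uses
            # the plain dict API, so there is no recursion risk.
            try:
                return dict.__getitem__(self, name)
            except KeyError:
                raise AttributeError(name)

        def __setattr__(self, name, value):
            # Store "attributes" as dict items instead of in self.__dict__.
            dict.__setitem__(self, name, value)

    d = attrdict()
    d['mcc'] = 262
    assert d.mcc == 262      # attribute access finds the dict item
    d.mnc = 1
    assert d['mnc'] == 1     # and vice versa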
> Just to establish the concept, this horrible thing is showing some promise:
>
>     attriter = self.iteritems()
>     for attr in attriter:
>         if attr[0] == name:
>             return attr[1]
This is equivalent to but slower than "return self.get(name)". What
method is this in and what's the rest of the code? It's hard to
analyze your code in any helpful way when you keep changing it.
> (because the subscripts there are on a tuple type)
I don't know what this has to do with anything.
> But I concede I must be doing something fundamentally wrong because this
> assert is triggering:
> def __getattr__ (self, name):
>     print "attrdict:av:__getattr__: entered for ", name
>     assert name not in self.keys(), "attrdict:__getattr__: who lied?"
When does this trigger? What's calling it? What name is passed in?
--
https://mail.python.org/mailman/listinfo/python-list
Re: raise None
On Thu, Dec 31, 2015 at 11:09 AM, Steven D'Aprano wrote:
> I have a lot of functions that perform the same argument checking each time:
>
> def spam(a, b):
>     if condition(a) or condition(b): raise TypeError
>     if other_condition(a) or something_else(b): raise ValueError
>     if whatever(a): raise SomethingError
>     ...
>
> def eggs(a, b):
>     if condition(a) or condition(b): raise TypeError
>     if other_condition(a) or something_else(b): raise ValueError
>     if whatever(a): raise SomethingError
>     ...
>
> Since the code is repeated, I naturally pull it out into a function:
>
> def _validate(a, b):
>     if condition(a) or condition(b): raise TypeError
>     if other_condition(a) or something_else(b): raise ValueError
>     if whatever(a): raise SomethingError
>
> def spam(a, b):
>     _validate(a, b)
>     ...
>
> def eggs(a, b):
>     _validate(a, b)
>     ...
>
> But when the argument checking fails, the traceback shows the error
> occurring in _validate, not eggs or spam. (Naturally, since that is where
> the exception is raised.) That makes the traceback more confusing than it
> need be.

If the validation really is the same in all of them, then is it a
problem to see the validation function in the traceback? Its purpose
isn't simply "raise an exception", but "validate a specific set of
inputs". That sounds like a perfectly reasonable traceback line to me
(imagine if your validation function has a bug).

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
Re: raise None
On Thu, 31 Dec 2015 11:38 am, Chris Angelico wrote:

> On Thu, Dec 31, 2015 at 11:09 AM, Steven D'Aprano wrote:
>> I have a lot of functions that perform the same argument checking each
>> time:
>>
>> def spam(a, b):
>>     if condition(a) or condition(b): raise TypeError
>>     if other_condition(a) or something_else(b): raise ValueError
>>     if whatever(a): raise SomethingError
>>     ...
>>
>> def eggs(a, b):
>>     if condition(a) or condition(b): raise TypeError
>>     if other_condition(a) or something_else(b): raise ValueError
>>     if whatever(a): raise SomethingError
>>     ...
>>
>> Since the code is repeated, I naturally pull it out into a function:
>>
>> def _validate(a, b):
>>     if condition(a) or condition(b): raise TypeError
>>     if other_condition(a) or something_else(b): raise ValueError
>>     if whatever(a): raise SomethingError
>>
>> def spam(a, b):
>>     _validate(a, b)
>>     ...
>>
>> def eggs(a, b):
>>     _validate(a, b)
>>     ...
>>
>> But when the argument checking fails, the traceback shows the error
>> occurring in _validate, not eggs or spam. (Naturally, since that is where
>> the exception is raised.) That makes the traceback more confusing than it
>> need be.
>
> If the validation really is the same in all of them, then is it a
> problem to see the validation function in the traceback? Its purpose
> isn't simply "raise an exception", but "validate a specific set of
> inputs". That sounds like a perfectly reasonable traceback line to me
> (imagine if your validation function has a bug).

Right -- that's *exactly* why it is harmful that the _validate function
shows up in the traceback.

If _validate itself has a bug, then it will raise, and you will see the
traceback:

Traceback (most recent call last):
  File "spam", line 19, in this
  File "spam", line 29, in that
  File "spam", line 39, in other
  File "spam", line 5, in _validate
ThingyError: ...

which tells you that _validate raised an exception and therefore has a
bug.

Whereas if _validate does what it is supposed to do, and is working
correctly, you will see:

Traceback (most recent call last):
  File "spam", line 19, in this
  File "spam", line 29, in that
  File "spam", line 39, in other
  File "spam", line 5, in _validate
ThingyError: ...

and the reader has to understand the internal workings of _validate
sufficiently to infer that this exception is not a bug in _validate but
an expected failure mode of other when you pass a bad argument.

Now obviously one can do that. It's often not even very hard: most bugs
are obviously bugs, and the ThingyError will surely come with a
descriptive error message like "Argument out of range" in the second
case.

In the case where _validate *returns* the exception instead of raising
it, and the calling function (in this case other) raises, you see this
in the case of a bug in _validate:

Traceback (most recent call last):
  File "spam", line 19, in this
  File "spam", line 29, in that
  File "spam", line 39, in other
  File "spam", line 5, in _validate
ThingyError: ...

and this is the case of a bad argument to other:

Traceback (most recent call last):
  File "spam", line 19, in this
  File "spam", line 29, in that
  File "spam", line 39, in other
ThingyError: ...

I think this is a win for debuggability. (Is that a word?) But it's a
bit annoying to do it today, since you have to save the return result
and explicitly compare it to None. If "raise None" was a no-op, it
would feel more natural to just say raise _validate() and trust that if
_validate falls out the end and returns None, the raise will be a
no-op.

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
Re: raise None
Steven D'Aprano writes:

> Traceback (most recent call last):
>   File "spam", line 19, in this
>   File "spam", line 29, in that
>   File "spam", line 39, in other
>   File "spam", line 5, in _validate
> ThingyError: ...
>
> and the reader has to understand the internal workings of _validate
> sufficiently to infer that this exception is not a bug in _validate
> but an expected failure mode of other when you pass a bad argument.

This point seems to advocate for suppressing *any* code that
deliberately raises an exception. Is that your intent?

--
 \      “I don't want to live peacefully with difficult realities, and |
  `\    I see no virtue in savoring excuses for avoiding a search for |
_o__)   real answers.” —Paul Z. Myers, 2009-09-12 |
Ben Finney
--
https://mail.python.org/mailman/listinfo/python-list
Re: raise None
On Thu, Dec 31, 2015 at 12:26 PM, Steven D'Aprano wrote:
> Traceback (most recent call last):
>   File "spam", line 19, in this
>   File "spam", line 29, in that
>   File "spam", line 39, in other
> ThingyError: ...
>
> I think this is a win for debuggability. (Is that a word?) But it's a bit
> annoying to do it today, since you have to save the return result and
> explicitly compare it to None. If "raise None" was a no-op, it would feel
> more natural to just say raise _validate() and trust that if _validate
> falls out the end and returns None, the raise will be a no-op.

(Yes, it is.)

Gotcha. So here's an alternative possibility. Instead of raising None
doing nothing, what you really want is to have _validate signal an
error with one less level of traceback - that is, you want it to raise
an exception from the calling function.

class remove_traceback_level:
    def __enter__(self):
        return self
    def __exit__(self, type, value, traceback):
        if type is None: return
        tb = traceback
        while tb.tb_next: tb = tb.tb_next
        tb.tb_next = None
        raise value from traceback

def _validate(a, b):
    with remove_traceback_level():
        if condition(a) or condition(b): raise TypeError
        if other_condition(a) or something_else(b): raise ValueError
        if whatever(a): raise SomethingError

The trouble is that this doesn't actually work, because tb_next is
read-only. But is there something along these lines that would make a
function raise exceptions as if it were in another function?

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
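Since tb_next cannot be assigned to, one workaround that does run
(Python 3 only; the validation conditions are invented for the sketch)
is to catch the error in the caller and re-raise it with its stored
traceback dropped, so the report starts at the caller's frame rather
than inside _validate:

    def _validate(a, b):
        if not isinstance(a, int) or not isinstance(b, int):
            raise TypeError("a and b must be ints")

    def spam(a, b):
        try:
            _validate(a, b)
        except Exception as e:
            # Clearing the stored traceback means the re-raise starts a
            # fresh one at this line, so _validate's frame never appears.
            raise e.with_traceback(None) from None
        return a + b

    try:
        spam("x", 2)
    except TypeError:
        import traceback
        traceback.print_exc()   # innermost frame is spam's raise, not _validate

The cost is a try/except in every caller, which is arguably no better
than the explicit None check the thread started from.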
Stupid Python tricks
Stolen^W Inspired from a post by Tim Peters back in 2001:
https://mail.python.org/pipermail/python-dev/2001-January/011911.html
Suppose you have a huge string, and you want to quote it. Here's the obvious
way:
mystring = "spam"*10
result = '"' + mystring + '"'
But that potentially involves a lot of copying. How fast is it? Using
Jython2.5, I get these results on my computer:
jy> from timeit import Timer
jy> t = Timer("""'"' + mystring + '"'""", 'mystring = "spam"*10')
jy> min(t.repeat(number=1000))
2.411133514404
Perhaps % interpolation is faster?
jy> t = Timer("""'"%s"' % mystring""", 'mystring = "spam"*10')
jy> min(t.repeat(number=1000))
2.966801086426
Ouch, that's actually worse. But now we have the Stupid Python Trick:
result = mystring.join('""')
How fast is this?
jy> t = Timer("""mystring.join('""')""", 'mystring = "spam"*10')
jy> min(t.repeat(number=1000))
2.17131335449
That's Jython, which is not known for its speed. (If you want speed in
Jython, you ought to be calling Java libraries.) Here are some results
using Python 3.3:
py> from timeit import Timer
py> t = Timer("""'"' + mystring + '"'""", 'mystring = "spam"*10')
py> min(t.repeat(number=1000))
0.22504080459475517
Using % interpolation and the format method:
py> t = Timer("""'"{}"'.format(mystring)""", 'mystring = "spam"*10')
py> min(t.repeat(number=1000))
0.4634905573911965
py> t = Timer("""'"%s"' % mystring""", 'mystring = "spam"*10')
py> min(t.repeat(number=1000))
0.474040764849633
And the Stupid Python Trick:
py> t = Timer("""mystring.join('""')""", 'mystring = "spam"*10')
py> min(t.repeat(number=1000))
0.19407050590962172
Fifteen years later, and Tim Peters' Stupid Python Trick is still the
undisputed champion!
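For anyone puzzled by why the trick works: '""' is simply a
two-character string, and str.join inserts mystring between those two
quote characters. A quick sanity check:

py> mystring = "spam"
py> mystring.join('""')
'"spam"'
py> mystring.join('""') == '"' + mystring + '"'
True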
--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
Re: raise None
On 12/30/2015 7:09 PM, Steven D'Aprano wrote:

> I have a lot of functions that perform the same argument checking each
> time:
>
> def spam(a, b):
>     if condition(a) or condition(b): raise TypeError
>     if other_condition(a) or something_else(b): raise ValueError
>     if whatever(a): raise SomethingError
>     ...
>
> def eggs(a, b):
>     if condition(a) or condition(b): raise TypeError
>     if other_condition(a) or something_else(b): raise ValueError
>     if whatever(a): raise SomethingError
>     ...
>
> Since the code is repeated, I naturally pull it out into a function:
>
> def _validate(a, b):
>     if condition(a) or condition(b): raise TypeError
>     if other_condition(a) or something_else(b): raise ValueError
>     if whatever(a): raise SomethingError
>
> def spam(a, b):
>     _validate(a, b)
>     ...
>
> def eggs(a, b):
>     _validate(a, b)
>     ...
>
> But when the argument checking fails, the traceback shows the error
> occurring in _validate, not eggs or spam. (Naturally, since that is
> where the exception is raised.) That makes the traceback more confusing
> than it need be.
>
> So I can change the raise to return in the _validate function:
>
> def _validate(a, b):
>     if condition(a) or condition(b): return TypeError
>     if other_condition(a) or something_else(b): return ValueError
>     if whatever(a): return SomethingError
>
> and then write spam and eggs like this:
>
> def spam(a, b):
>     ex = _validate(a, b)
>     if ex is not None:
>         raise ex
>     ...
>
> It's not much of a gain though. I save an irrelevant level in the
> traceback, but only at the cost of an extra line of code everywhere I
> call the argument checking function.

It is nicer than the similar standard idiom

    try:
        _validate(a, b)
    except Exception as e:
        raise e from None

If you could compute a reduced traceback, by copying one, then this
might work (based on Chris' idea):

def _validate(a, b):
    ex = None
    if condition(a) or condition(b): ex = TypeError
    elif other_condition(a) or something_else(b): ex = ValueError
    elif whatever(a): ex = SomethingError
    if ex:
        try:
            1/0
        except ZeroDivisionError as err:
            tb = err.__traceback__
            tb =
        raise ex.with_traceback(tb)

> But suppose we allowed "raise None" to do nothing. Then I could rename
> _validate to _if_error and write this:
>
> def spam(a, b):
>     raise _if_error(a, b)
>     ...
>
> and have the benefits of "Don't Repeat Yourself" without the
> unnecessary, and misleading, extra level in the traceback.
>
> Obviously this doesn't work now, since raise None is an error, but if
> it did work, what do you think?

Perhaps a bit too magical, but maybe not.

--
Terry Jan Reedy
--
https://mail.python.org/mailman/listinfo/python-list
Re: raise None
On 31Dec2015 12:26, Steven D'Aprano wrote:
> On Thu, 31 Dec 2015 11:38 am, Chris Angelico wrote:
[... functions calling common _validate function ...]
>>> But when the argument checking fails, the traceback shows the error
>>> occurring in _validate, not eggs or spam. (Naturally, since that is
>>> where the exception is raised.) That makes the traceback more
>>> confusing than it need be.
>>
>> If the validation really is the same in all of them, then is it a
>> problem to see the validation function in the traceback? Its purpose
>> isn't simply "raise an exception", but "validate a specific set of
>> inputs". That sounds like a perfectly reasonable traceback line to me
>> (imagine if your validation function has a bug).
>
> Right -- that's *exactly* why it is harmful that the _validate function
> shows up in the traceback.

I think I'm still disagreeing, but only on this point of distinguishing
_validate bug exceptions from _validate test failures.

> If _validate itself has a bug, then it will raise, and you will see the
> traceback:
>
> Traceback (most recent call last):
>   File "spam", line 19, in this
>   File "spam", line 29, in that
>   File "spam", line 39, in other
>   File "spam", line 5, in _validate
> ThingyError: ...
>
> which tells you that _validate raised an exception and therefore has a
> bug.

Ok

> Whereas if _validate does what it is supposed to do, and is working
> correctly, you will see:
>
> Traceback (most recent call last):
>   File "spam", line 19, in this
>   File "spam", line 29, in that
>   File "spam", line 39, in other
>   File "spam", line 5, in _validate
> ThingyError: ...
>
> and the reader has to understand the internal workings of _validate
> sufficiently to infer that this exception is not a bug in _validate but
> an expected failure mode of other when you pass a bad argument.

Would it not be useful then to name the including function in the
exception text?

> In the case where _validate *returns* the exception instead of raising
> it, and the calling function (in this case other) raises, you see this
> in the case of a bug in _validate:
>
> Traceback (most recent call last):
>   File "spam", line 19, in this
>   File "spam", line 29, in that
>   File "spam", line 39, in other
>   File "spam", line 5, in _validate
> ThingyError: ...
>
> and this is the case of a bad argument to other:
>
> Traceback (most recent call last):
>   File "spam", line 19, in this
>   File "spam", line 29, in that
>   File "spam", line 39, in other
> ThingyError: ...

I confess that when I want to check several things I would like to
return several failure indications. So, thinking along that line, how
about this:

    for blam in _validate(a, b):
        raise blam

which leaves you open to gathering them all up instead of aborting on
the first complaint.

> I think this is a win for debuggability. (Is that a word?) But it's a
> bit annoying to do it today, since you have to save the return result
> and explicitly compare it to None. If "raise None" was a no-op, it
> would feel more natural to just say raise _validate() and trust that if
> _validate falls out the end and returns None, the raise will be a
> no-op.

This is a nice idea though. Succinct and expressive, though people would
have to learn that:

    raise foo()

does not unconditionally abort at this point.

Cheers,
Cameron Simpson
--
https://mail.python.org/mailman/listinfo/python-list
Re: raise None
On Thu, 31 Dec 2015 12:44 pm, Ben Finney wrote:

> Steven D'Aprano writes:
>
>> Traceback (most recent call last):
>>   File "spam", line 19, in this
>>   File "spam", line 29, in that
>>   File "spam", line 39, in other
>>   File "spam", line 5, in _validate
>> ThingyError: ...
>>
>> and the reader has to understand the internal workings of _validate
>> sufficiently to infer that this exception is not a bug in _validate
>> but an expected failure mode of other when you pass a bad argument.
>
> This point seems to advocate for suppressing *any* code that
> deliberately raises an exception. Is that your intent?

No. The issue isn't that an exception is deliberately raised. The issue
is that it is deliberately raised in a function separate from where the
exception conceptually belongs.

The exception is conceptually part of function "other", and was only
refactored into a separate function _validate to avoid repeating the
same validation code in multiple places. It is a mere implementation
detail that the exception is actually raised inside _validate rather
than other.

As an implementation detail, exposing it to the user (in the form of a
line in the stacktrace) doesn't help debugging. At best it is neutral
(the user reads the error message and immediately realises that the
problem lies with bad arguments passed to other, and _validate has
nothing to do with it). At worst it actively misleads the user into
thinking that there is a bug in _validate.

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
Re: Stupid Python tricks
On Wednesday, December 30, 2015 at 9:51:48 PM UTC-6, Steven D'Aprano wrote:
> Fifteen years later, and Tim Peters' Stupid Python Trick is still the
> undisputed champion!

And should we be happy about that revelation, or sad?
--
https://mail.python.org/mailman/listinfo/python-list
Re: raise None
On Thu, 31 Dec 2015 03:03 pm, Cameron Simpson wrote:

[...]

Steven D'Aprano (that's me) wrote this:

>> Whereas if _validate does what it is supposed to do, and is working
>> correctly, you will see:
>>
>> Traceback (most recent call last):
>>   File "spam", line 19, in this
>>   File "spam", line 29, in that
>>   File "spam", line 39, in other
>>   File "spam", line 5, in _validate
>> ThingyError: ...
>>
>> and the reader has to understand the internal workings of _validate
>> sufficiently to infer that this exception is not a bug in _validate
>> but an expected failure mode of other when you pass a bad argument.
>
> Would it not be useful then to name the including function in the
> exception text?

You mean change the signature of _validate to:

def _validate(a, b, name_of_caller):
    ...

and have function "other" call it like this:

def other(arg1, arg2):
    _validate(arg1, arg2, "other")
    # if we reach this line, the arguments were validated
    # and we can continue
    ...

I think that's pretty horrible. I'm not sure whether that would be
more, or less, horrible than having _validate automagically determine
the caller's name by looking in the call stack.

[...]

> I confess that when I want to check several things I would like to
> return several failure indications. So, thinking along that line, how
> about this:
>
>     for blam in _validate(a, b):
>         raise blam
>
> which leaves you open to gathering them all up instead of aborting on
> the first complaint.

Are you sure? I would have expected that raising the first exception
would exit the loop.

>> I think this is a win for debuggability. (Is that a word?) But it's a
>> bit annoying to do it today, since you have to save the return result
>> and explicitly compare it to None. If "raise None" was a no-op, it
>> would feel more natural to just say raise _validate() and trust that
>> if _validate falls out the end and returns None, the raise will be a
>> no-op.
>
> This is a nice idea though. Succinct and expressive, though people
> would have to learn that:
>
>     raise foo()
>
> does not unconditionally abort at this point.

Yes, that crossed my mind. Maybe if there was a second keyword:

    raiseif foo()

which only raised if foo() returned a non-None value. That's kind of
like the "or die" idiom from Perl, I guess. But of course requiring a
second keyword will almost certainly doom this proposal -- it is only
of benefit at the margins as it is.

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
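A plain-function stand-in for the proposed keyword already works today
(raise_if and the validation conditions here are invented for the
sketch):

    def raise_if(exc):
        # Do nothing for None; otherwise raise (exc may be a class or instance).
        if exc is not None:
            raise exc

    def _validate(a, b):
        if a < 0 or b < 0:
            return ValueError("arguments must be non-negative")
        return None

    def spam(a, b):
        raise_if(_validate(a, b))   # reads almost like the proposed "raiseif"
        return a + b

The catch, and presumably the reason a keyword is attractive, is that
raise_if itself then shows up in the traceback, adding back exactly the
frame the proposal wants to hide.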
Re: raise None
On 31Dec2015 16:12, Steven D'Aprano wrote:
> On Thu, 31 Dec 2015 03:03 pm, Cameron Simpson wrote:
> Steven D'Aprano (that's me) wrote this:
>>> Whereas if _validate does what it is supposed to do, and is working
>>> correctly, you will see:
>>>
>>> Traceback (most recent call last):
>>>   File "spam", line 19, in this
>>>   File "spam", line 29, in that
>>>   File "spam", line 39, in other
>>>   File "spam", line 5, in _validate
>>> ThingyError: ...
>>>
>>> and the reader has to understand the internal workings of _validate
>>> sufficiently to infer that this exception is not a bug in _validate
>>> but an expected failure mode of other when you pass a bad argument.
>>
>> Would it not be useful then to name the including function in the
>> exception text?
>
> You mean change the signature of _validate to:
>
> def _validate(a, b, name_of_caller):
>     ...
>
> and have function "other" call it like this:
>
> def other(arg1, arg2):
>     _validate(arg1, arg2, "other")
>     # if we reach this line, the arguments were validated
>     # and we can continue
>     ...
>
> I think that's pretty horrible. I'm not sure whether that would be
> more, or less, horrible than having _validate automagically determine
> the caller's name by looking in the call stack.

No, I meant your latter suggestion above: consult the call stack to
fish out the calling function name. Something like:

    from cs.py.stack import caller
    ...
    def _validate(...):
        frame = caller()
        funcname = frame.functionname

and then use funcname in the messages, or greater detail. Caller() is
available here:

  https://bitbucket.org/cameron_simpson/css/src/tip/lib/python/cs/py/stack.py?fileviewer=file-view-default

for the fiddliness. The horribleness is at least concealed, unless
you're nesting _validate implementations.

>> I confess that when I want to check several things I would like to
>> return several failure indications. So, thinking along that line, how
>> about this:
>>
>>     for blam in _validate(a, b):
>>         raise blam
>>
>> which leaves you open to gathering them all up instead of aborting on
>> the first complaint.
>
> Are you sure? I would have expected that raising the first exception
> would exit the loop.

In the bare form, surely, just like your "if". But in principle you
could gather all the exceptions together and raise a new exception with
details attached.

>>> I think this is a win for debuggability. (Is that a word?) But it's
>>> a bit annoying to do it today, since you have to save the return
>>> result and explicitly compare it to None. If "raise None" was a
>>> no-op, it would feel more natural to just say raise _validate() and
>>> trust that if _validate falls out the end and returns None, the
>>> raise will be a no-op.
>>
>> This is a nice idea though. Succinct and expressive, though people
>> would have to learn that:
>>
>>     raise foo()
>>
>> does not unconditionally abort at this point.
>
> Yes, that crossed my mind. Maybe if there was a second keyword:
>
>     raiseif foo()
>
> which only raised if foo() returned a non-None value. That's kind of
> like the "or die" idiom from Perl, I guess. But of course requiring a
> second keyword will almost certainly doom this proposal -- it is only
> of benefit at the margins as it is.

I'd rather your original myself: plain "raise". Another keyword seems a
reach, and I don't like it. And I don't like "raiseif"; I've got a
bunch of "blahif" functions of similar flavour and I'm still unhappy
with their names. I think it is a small thing to learn, especially as
"raise None" is already an error.

Cheers,
Cameron Simpson
--
https://mail.python.org/mailman/listinfo/python-list
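For readers without the cs.py.stack package, roughly the same trick can
be done with the standard library's inspect module (a sketch; the
validation condition and error text are invented):

    import inspect

    def _validate(a, b):
        # Name of whoever called _validate, fished out of the call stack.
        caller_name = inspect.stack()[1][3]   # element [3] is the function name
        if a < 0 or b < 0:
            raise ValueError("%s: arguments must be non-negative" % caller_name)

    def other(arg1, arg2):
        _validate(arg1, arg2)
        return arg1 + arg2

    try:
        other(-1, 2)
    except ValueError as err:
        print(err)    # other: arguments must be non-negative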
Re: Stupid Python tricks
On Thu, 31 Dec 2015 04:02 pm, Rick Johnson wrote:

> On Wednesday, December 30, 2015 at 9:51:48 PM UTC-6, Steven D'Aprano
> wrote:
>> Fifteen years later, and Tim Peters' Stupid Python Trick is still the
>> undisputed champion!
>
> And should we be happy about that revelation, or sad?

Yes!

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
Error 0x80070570
Good evening!

I am trying to install Python, but I get error 0x80070570, which says
that the folder or file is corrupted or unreadable. The log file with
the aforementioned error is attached. I have already tried some
procedures that did not work, such as error correction and
defragmenting files.

How can I resolve this error?

Thanks,
Ivanor
--
https://mail.python.org/mailman/listinfo/python-list
