Re: Extra AttributeError inside property - possible bug ?
[email protected] wrote:
> Hello,
>
> below is a simple Python2 example of a class which defines a __getattr__
> method and a property, where an AttributeError exception is raised:
>
> from __future__ import print_function
>
> class MyClass(object):
>     def __getattr__(self, name):
>         print('__getattr__ <<', name)
>         raise AttributeError(name)
>         return 'need know the question'
>
>     @property
>     def myproperty(self):
>         print(self.missing_attribute)
>         return 42
>
> my_inst = MyClass()
> print(my_inst.myproperty)
>
> # produces following output
> __getattr__ << missing_attribute
> __getattr__ << myproperty
> Traceback (most recent call last):
>   File "a.py", line 84, in <module>
>     main()
>   File "a.py", line 74, in main
>     print('==', my_inst.myproperty)
>   File "a.py", line 36, in __getattr__
>     raise AttributeError(name)
> AttributeError: myproperty
>
> By the documentation
> https://docs.python.org/2/reference/datamodel.html#object.__getattr__ ,
> if a class defines a __getattr__ method, it gets called on an
> AttributeError exception and should return a computed value for name, or
> raise a (new) AttributeError exception.
>
> Why is __getattr__ called a 2nd time, with the 'myproperty' argument?
> self.myproperty does exist, and at the first call of __getattr__ a new
> AttributeError with 'missing_attribute' is raised. I can't see any
> reason for this behavior.

I believe this is an implementation accident: the code is not keeping track of the exact origin of the AttributeError.
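One defensive pattern (my sketch, not from the thread) is to convert any AttributeError raised inside the property into a different exception type, so the failure cannot be misreported as "myproperty is missing". The class and attribute names mirror the example above:

```python
# Sketch of a workaround: re-raise lookup failures inside a property as
# RuntimeError so they are not swallowed by the __getattr__ protocol and
# misreported as an AttributeError for 'myproperty' itself.
class MyClass(object):
    def __getattr__(self, name):
        raise AttributeError(name)

    @property
    def myproperty(self):
        try:
            return self.missing_attribute
        except AttributeError as exc:
            raise RuntimeError('lookup failed inside property: %s' % exc)

my_inst = MyClass()
try:
    my_inst.myproperty
except RuntimeError as exc:
    print(exc)   # reports missing_attribute, the real culprit
```

With this, __getattr__ is only ever called for the genuinely missing attribute, and the traceback points at the real cause.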
Until recently generators showed analogous behaviour and swallowed StopIterations:

$ cat stopiteration.py
from __future__ import generator_stop

def stop():
    raise StopIteration


def f(items):
    for item in items:
        yield item
        stop()


for item in f("abc"):
    print(item)

$ python3.5 -x stopiteration.py # abusing -x to skip the __future__ import
a
$ python3.5 stopiteration.py
a
Traceback (most recent call last):
  File "stopiteration.py", line 10, in f
    stop()
  File "stopiteration.py", line 4, in stop
    raise StopIteration
StopIteration

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "stopiteration.py", line 13, in <module>
    for item in f("abc"):
RuntimeError: generator raised StopIteration

If the AttributeError behaviour were to be changed, a similar transition period would be required, so no Python prior to 3.6 would be affected.
--
https://mail.python.org/mailman/listinfo/python-list
How to compare lists
1.
How can I save 256 lists, each list has 32 values (hexadecimal numbers)?
2.
How to compare the saved lists with another 256 lists (that are read online and have the same structure as the first one)? (The first list must be saved in the previous step.)

E.g

Thanks
--
https://mail.python.org/mailman/listinfo/python-list
Re: How to compare lists
On Tue, Sep 1, 2015 at 3:08 PM, Jahn wrote:
> 1.
> How can I save 256 lists, each list has 32 values( hexadecimal numbers)
> 2.
> How to compare the saved lists with another 256 lists ( that are read online
> and have the
> same structure as the list one)?
> ( the first list must be saved in the previous step)
>
> E.g

You seem to have missed out your example, but I'll guess at what you're talking about. Correct me if I'm wrong, and we'll move on from there.

You want to take a sequence of 32 numbers and see if it's exactly the same sequence as some others. The easiest way to do this is with a tuple, rather than a list; then you can simply do an equality check, and they'll be checked recursively.

If you want to ask, more simply, "does this 32-value unit exist in my collection of 256 acceptable 32-value units", then a set will serve you well. You can stuff tuples of integers into your set, and then query the set for a particular tuple.

Does that help at all? If not, guide me to the problem you're actually solving. :)

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
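The set-of-tuples approach ChrisA describes can be sketched like this (the values are made up, and scaled down from 256 rows of 32 values to 2 rows of 3):

```python
# Membership testing with a set of tuples, as suggested above.
# 'known' stands in for the 256 saved rows; the values are invented.
known = {
    (0x1A, 0x2B, 0x3C),
    (0x4D, 0x5E, 0x6F),
}

incoming = [0x4D, 0x5E, 0x6F]    # e.g. one row read online, as a list
print(tuple(incoming) in known)  # True: convert to tuple, then test membership
```

Lists are unhashable, so the conversion to a tuple is what makes the set lookup possible.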
Re: How to compare lists
Jahn wrote:
> 1.
> How can I save 256 lists, each list has 32 values( hexadecimal numbers)
> 2.
> How to compare the saved lists with another 256 lists ( that are read
> online and have the same structure as the list one)?
> ( the first list must be saved in the previous step)

You are giving details, but they aren't significant. You can save 256 lists with 32 numbers just like you would save 257 or 2560 with 23 numbers. How you should save them is largely dependent on how you are using them later. Simple methods are pickle.dump() or json.dump().

But what is the problem you are trying to solve by comparing these lists? Are you just looking online for lists matching the local ones? Do you want to find missing/extra lists? Or are you even looking for changes within the lists?

How are the online lists provided? Are they text files, is there an API, or do you have to screenscrape them from web pages?

Please provide a little more context.

--
https://mail.python.org/mailman/listinfo/python-list
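The json.dump() route mentioned above can be sketched as follows (my example, scaled down from 256 lists of 32 values to 2 lists of 3; the file name is made up):

```python
import json
import os
import tempfile

# Save lists of numbers with json.dump, reload them, and compare with ==.
rows = [[0x1F, 0x2E, 0x3D], [0x10, 0x20, 0x30]]
path = os.path.join(tempfile.gettempdir(), 'saved_rows.json')

with open(path, 'w') as f:
    json.dump(rows, f)

with open(path) as f:
    loaded = json.load(f)

print(loaded == rows)  # True: lists of lists compare element by element
os.unlink(path)
```

Note that JSON has no hexadecimal literal form: the values round-trip as plain integers, which is fine for equality comparison.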
Re: Why Python is not both an interpreter and a compiler?
If I understand what you are saying, then I think what you are looking for is not a compiler, but docker. See:

https://www.docker.com/

in particular

https://www.docker.com/whatisdocker

PyPy used this to produce portable PyPy binaries. See:

https://github.com/squeaky-pl/portable-pypy/blob/master/README.rst

Why did we need to make portable PyPy binaries? From the PyPy downloads page:

    Linux binaries and common distributions

    Linux binaries are dynamically linked, as is usual, and thus might not
    be usable due to the sad story of linux binary compatibility. This
    means that Linux binaries are only usable on the distributions where
    they were created unless you're ready to hack your system by adding
    symlinks to the libraries it tries to open.

We had to make versions for debian stable and unstable, and all the different versions of RHEL, and Centos, and Suse, and every time we turned around somebody made a new linux distribution, with new and different versions of shared libraries, located in some new creative idea of where the best place was to put such things. And we had to make 32 bit and 64 bit versions, and versions for x86 and for ARM, and the list went on, and on, and on.

This made releases far harder than they needed to be, and, from the user end, meant that people had to work rather hard just to figure out what version of pypy they should download from our site. Lots of people never could quite figure it out, though we tried to help them on the pypy mailing list and over irc ...

So finally, we used docker to make a portable version of pypy, which you can use to build your own version of pypy from the nightly sources, for instance.

I think that this is what you are looking for, as well. But go read up on docker and see if it suits.

Laura
--
https://mail.python.org/mailman/listinfo/python-list
Re: RPI.GPIO Help
On 31.08.2015 19:41, John McKenzie wrote:
> Still checking here and am discussing all this in the Raspberry pi
> newsgroup. Thanks to the several people who mentioned it.
>
> Again, still listening here if anyone has any more to add.
I had the same problem using interrupt-driven GPIOs on the Pi about two
years back. Here's how I solved it:
http://pastebin.com/gdJaJByU
To explain the message you're getting: If you want to handle GPIOs in
the most resource-efficient way, you use interrupt-driven handling.
Interrupts for GPIOs can be configured to be off, level-triggered or
edge-triggered. For edge-triggering I'm also pretty sure that the type
of edge (rising, falling, both) can be specified.
IIRC (and I might not, been quite some time), these interrupts are
bundled together in GPIO ports ("channels"). All GPIOs in one channel
need to have the same configuration. You cannot have conflicting
configuration between two pins which belong to the same GPIO channel (and
apparently, your framework is trying to do exactly that).
The code I posted does it all by hand (and it's not really hard, as you
can see). I use the input and output functionality and do the interrupt
configuration myself (this works through the /proc filesystem on the Pi).
Hope this helps,
Cheers,
Johannes
--
>> Where exactly had you predicted the quake again?
> At least not publicly!
Ah, the latest and to this day most ingenious trick of our great
cosmologist: the secret prediction.
- Karl Kaos on Rüdiger Thomas in dsa
--
https://mail.python.org/mailman/listinfo/python-list
Re: How to compare lists
On 09/01/2015 07:08 AM, Jahn wrote:
> 1. How can I save 256 lists, each list has 32 values( hexadecimal numbers)
> 2. How to compare the saved lists with another 256 lists ( that are read
> online and have the same structure as the list one)?
> ( the first list must be saved in the previous step)
> E.g
> Thanks

Dumb down your problem to something simpler: saving 2 lists of 2 numbers.

1/ For saving/loading the list, use pickle if *you* will do the saving *and* the loading (don't load from an untrusted file).
2/ To compare 2 lists, simply use the == operator:

In [4]: [[1,2], [1,2]] == [[1,2], [1,3]]
Out[4]: False

In [5]: [[1,2], [1,2]] == [[1,2], [1,2]]
Out[5]: True

JM
--
https://mail.python.org/mailman/listinfo/python-list
Re: How to compare lists
In a message of Tue, 01 Sep 2015 07:08:48 +0200, "Jahn" writes:
> 1.
> How can I save 256 lists, each list has 32 values( hexadecimal numbers)
> 2.
> How to compare the saved lists with another 256 lists ( that are read online
> and have the
> same structure as the list one)?
> ( the first list must be saved in the previous step)

'Saving' can mean many things, depending on context. You can save a reference to your list, which will last as long as that reference is in scope in your program. If you only need to save it for as long as it takes you to read in another value, also done in the same scope of your program, this will be fine.

If, on the other hand, you want to get the next values you want to compare things to by running a program, either this one or a different one, tomorrow or next week, then you will probably want to save your list in a file somewhere.

And if you want to save it in a way that you won't lose the data even if your disk drive breaks, your house burns down and all your possessions are destroyed ... then you probably want to save it in the cloud someplace -- and make sure that the data is mirrored, because data centres burn down, on occasion, as well.

So that is the first problem: 'What do you mean by save?', which is related to 'How and when are you getting these lists, anyway?'

The next problem is similar: 'What do you mean by same?'

Here is a list:

    ['Anders', 'Sigbritt', 'Eva']

and here is another list:

    ['Eva', 'Sigbritt', 'Anders']

They aren't equal. But are they 'the same'? This isn't a programming question, but a question of 'How is your problem defined?' You need to know the answer to this before you can write your code, which may mean asking the problem giver to clarify what he or she meant by 'same'.

Laura
--
https://mail.python.org/mailman/listinfo/python-list
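The "equal" versus "the same" distinction above can be shown in two lines (using the name lists from the message):

```python
# Two lists with the same elements in different order.
a = ['Anders', 'Sigbritt', 'Eva']
b = ['Eva', 'Sigbritt', 'Anders']

print(a == b)                  # False: == is order-sensitive
print(sorted(a) == sorted(b))  # True: same elements, order ignored
```

Which comparison is right depends entirely on the problem definition, which is the point being made.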
Re: How to compare lists
On Tue, 01 Sep 2015 07:08:48 +0200, Jahn wrote:
> 1.
> How can I save 256 lists, each list has 32 values( hexadecimal numbers)
> 2.
> How to compare the saved lists with another 256 lists ( that are read
> online and have the same structure as the list one)?
> ( the first list must be saved in the previous step)

Develop code that works for smaller lists. Test it. When it works, try it on bigger lists.

For example, you seem to have two very separate requirements:

Save (and presumably read) a list. My favourite mechanism for saving data structures is json. Read the json module help. Googling "python store list data" might bring you other suggestions.

Comparing two lists. One method is to step through the members of each list in turn, and see if each is in the other list. Another method is to check that the lists are the same length, and have the same value at each element position. Both may have flaws depending on the exact nature of your requirement -- and what you consider to be identical lists. Googling "python compare lists" may lead you to some ideas.

When you have written and tested your code, if it's not doing what you expect, you could come back here and post your code with a description of what you think it should do, what it actually does, and why you think that's wrong, and we'll try and help you fix it. What we won't do is write your application from scratch.

--
Denis McMahon, [email protected]
--
https://mail.python.org/mailman/listinfo/python-list
Re: execute commands as su on remote server
On Tuesday, 18 August 2015 08:27:33 UTC+5:30, [email protected] wrote:
> execute commands as su on remote server
>
> Postby hariram » Mon Aug 17, 2015 4:02 am
> Needed:
> I need to execute commands after doing su to other user on remote server (not
> sudo, which doesn't require password). How can I achieve this using python?
> I googled and came to know that it's not possible, so just for confirmation
> asking again: is it possible?
>
> Already Tried:
> Tried paramiko; that's not working either.

Hey Laura,

fabric doesn't work for me, as fabric works only up to Python 2.7 and we are using Python 3.3, so we may miss major functionality if we use 2.7 again in the entire project...

--
https://mail.python.org/mailman/listinfo/python-list
Re: execute commands as su on remote server
In a message of Tue, 01 Sep 2015 05:16:48 -0700, [email protected] writes:
> On Tuesday, 18 August 2015 08:27:33 UTC+5:30, [email protected] wrote:
>> execute commands as su on remote server
>>
>> Postby hariram » Mon Aug 17, 2015 4:02 am
>> Needed:
>> I need to execute commands after doing su to other user on remote server(not
>> sudo which doesn't require password) how i can achieve this using python?
>> I googled and came to know that its not possible, so just for confirmation
>> asking again, is it possible ?
>>
>> Already Tried:
>> Tried paramiko that's too not working.
>
> Hey Laura,
>
> fabric doesnt work for me as fabric works with only up to python 2.7 and we
> are using python 3.3, so we may miss major functionalists if we use 2.7 again
> in the entire project...

Over here is a Python 3 fork of fabric:

https://github.com/pashinin/fabric

It is not complete -- i.e. he disabled some tests, so some things aren't working. But most of it is reported to work, so maybe it will work well enough for you to use until fabric gets ported for real.

Laura
--
https://mail.python.org/mailman/listinfo/python-list
Does mkstemp open files only if they don't already exist?
I assume the answer is "Yes", but is it safe to expect that tempfile.mkstemp() will only create a file that doesn't already exist? I presume that there's no chance of it over-writing an existing file (say, due to a race-condition). -- Steven -- https://mail.python.org/mailman/listinfo/python-list
Re: Does mkstemp open files only if they don't already exist?
On Wed, Sep 2, 2015 at 12:45 AM, Steven D'Aprano wrote:
> I assume the answer is "Yes", but is it safe to expect that
> tempfile.mkstemp() will only create a file that doesn't already exist? I
> presume that there's no chance of it over-writing an existing file (say,
> due to a race-condition).

It depends on OS support, but with that, yes, it is guaranteed to be safe; the file is opened with an exclusivity flag. Check your system's man pages for details, or here:

http://linux.die.net/man/3/open

O_EXCL|O_CREAT makes an "atomic file creation" operation which will fail if another process is doing the same thing. I'm not sure how mkstemp() handles that failure, but my guess/expectation is that it would pick a different file name and try again.

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
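The atomic-create behaviour described above can be demonstrated directly with os.open (my sketch; the file name is made up, and FileExistsError is the Python 3.3+ spelling of the EEXIST failure):

```python
import os
import tempfile

# O_CREAT|O_EXCL creates the file atomically and fails if it already
# exists, rather than silently reusing it -- the guarantee mkstemp builds on.
path = os.path.join(tempfile.gettempdir(), 'excl_demo.tmp')
if os.path.exists(path):
    os.unlink(path)          # clean slate for the demonstration

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
os.close(fd)

try:
    os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
except FileExistsError:
    print('second exclusive create failed as expected')

os.unlink(path)
```

mkstemp gets its safety from exactly this flag combination, plus a randomised file name.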
Low level file descriptors and high-level Python files
Let's suppose somebody passes me a file descriptor to work with. It could
come from somewhere else, but for the sake of discussion let's pretend I
create it myself this way:
import os
fd = os.open("some path", os.O_WRONLY | os.O_CREAT)
I then turn it into a file object:
file_obj = os.fdopen(fd, mode)
Q1: In this example, I know that I opened the fd in write mode, because I
did it myself. But since I'm not actually opening it, how do I know what
mode to use in the call to fdopen? Is there something I can call to find
out what mode a file descriptor has been opened with?
Now let's suppose I solve that problem, process the file_obj, and close it:
file_obj.close()
Q2: Do I still have to close the file descriptor with os.close(fd)?
(I think not.)
Q3: I could probably answer Q2 myself if I knew how to check whether a fd
was open or not. With a file object, I can inspect file_obj.closed and it
will tell me whether the file is open or not. Is there an equivalent for
file descriptors?
--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
Re: Why Python is not both an interpreter and a compiler?
On Tue, 1 Sep 2015 03:48 am, Mahan Marwat wrote:
>> Python programs *could* easily be compiled the same way, but it generally
>> hasn't been considered all that useful.
>
> If it hasn't been considered all that useful, then why are tools like
> cx_freeze and py2exe working so hard! And if it is really easy, then why
> are the cx_freeze and py2exe developers doing it in such a rubbish way
> instead of creating one (compiler)?

I believe that Marko is wrong. It is not so easy to compile Python to machine language for real machines. That's why the compiler targets a virtual machine instead.

Let's take the simple case of adding two numbers:

    x + y

With a real machine, this will probably compile down to a handful of assembly language statements. But x will have to be an integer of a certain type, say, 32 bits, and y likewise will have to be the same size. And what happens if the addition overflows? Some machines may overflow to a negative value, some might return the largest positive value. Python addition does not do that.

With Python, it doesn't matter whether x and y are the same size, or different. In fact, Python integers are bignums which support numbers as big as you like, limited only by the amount of memory. Or they could be floats, complex numbers, or fractions. Or x might be a string and y a list, and the + operator has to safely raise an exception, not dump core, or worse.

So to compile Python into assembly language for a real CPU would be very hard. Even simple instructions would require a large, complex chunk of machine code. If only we could design our own machine, with a CPU that understood commands like "add two values together" without caring that the values are compatible (real) machine types. And that's what we do: Python runs in a virtual machine with "assembly language" commands that are far, far more complex than real assembly for real CPUs.

Those commands, of course, eventually end up executing machine code on the real CPU, but usually by calling into the Python runtime engine.

There is another reason why frozen Python code is so large: it has to include a full Python interpreter. The Python language includes a few commands for compiling and interpreting Python source code:

- eval
- exec
- compile

so the Python runtime has to include the Python compiler and interpreter, which effectively means that the Python runtime has to be *all* of Python, or at least nearly all.

There may be projects that will compile Python down to machine code for a real machine (perhaps Cython?), but doing so isn't easy, which is why most compilers don't do it.

--
Steven
--
https://mail.python.org/mailman/listinfo/python-list
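The virtual machine instruction described above is easy to see with the stdlib dis module (a small illustration of my own; the exact instruction name varies by version, BINARY_ADD before 3.11 and BINARY_OP from 3.11 on):

```python
import dis

# CPython compiles `x + y` to one generic "add" VM instruction; all the
# type dispatch (int, bignum, float, str, list, ...) happens inside the
# runtime when that instruction executes, not in the compiled code.
def add(x, y):
    return x + y

dis.dis(add)   # the addition shows up as a single BINARY_* instruction
```

This is the sense in which the "assembly language" of the Python VM is far more complex per instruction than real assembly.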
Re: Low level file descriptors and high-level Python files
On Tue, Sep 1, 2015, at 10:57, Steven D'Aprano wrote:
> Q1: In this example, I know that I opened the fd in write mode, because I
> did it myself. But since I'm not actually opening it, how do I know what
> mode to use in the call to fdopen? Is there something I can call to find
> out what mode a file descriptor has been opened with?

In principle, you can find out with fcntl. In practice, don't you already know what kind of processing you intend to do with the file? If your "processing" involves writing, just try writing to it, and if it doesn't work then it's the caller's fault for passing in a read-only file handle.

> Now let's suppose I solve that problem, process the file_obj, and close it:
>
> file_obj.close()
>
> Q2: Do I still have to close the file descriptor with os.close(fd)?
> (I think not.)

You do not.

> Q3: I could probably answer Q2 myself if I knew how to check whether a fd
> was open or not. With a file object, I can inspect file_obj.closed and it
> will tell me whether the file is open or not. Is there an equivalent for
> file descriptors?

Well, if you try to call os.close, or any other operation for that matter, it will raise an OSError with errno=EBADF. Note that if the file _has_ been closed, the descriptor may be reused by the next open call, so it's best not to use this test method in production code.

--
https://mail.python.org/mailman/listinfo/python-list
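The fcntl route for Q1 looks roughly like this (my POSIX-only sketch; the file name is made up): F_GETFL returns the descriptor's status flags, and masking with O_ACCMODE recovers the access mode.

```python
import fcntl
import os
import tempfile

# Recover the access mode of an fd we did not open ourselves:
# fcntl(fd, F_GETFL) returns the status flags, and O_ACCMODE masks out
# the access-mode bits (O_RDONLY, O_WRONLY or O_RDWR).
path = os.path.join(tempfile.gettempdir(), 'flag_demo.tmp')
fd = os.open(path, os.O_WRONLY | os.O_CREAT)

flags = fcntl.fcntl(fd, fcntl.F_GETFL)
print((flags & os.O_ACCMODE) == os.O_WRONLY)   # True

os.close(fd)
os.unlink(path)
```

These are the "bits you have to parse yourself" mentioned elsewhere in the thread; the fcntl module is not available on Windows.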
Re: Why Python is not both an interpreter and a compiler?
Steven D'Aprano :

> I believe that Marko is wrong. It is not so easy to compile Python to
> machine language for real machines. That's why the compiler targets a
> virtual machine instead.

Somehow Guile manages it even though Scheme is at least as dynamic a language as Python.

I never said a compiler would translate Python to (analogous) machine language. I said you could easily turn CPython into a dynamic library (run-time environment) and write a small bootstrapper that you package into an executable archive with the Python code (whether .py or .pyc). What results is a single executable that you can run analogously to any other command.

In fact, the shebang notation turns any single .py file into such an executable. The problem is if you break your program into modules. Java, of course, solved a similar problem with .jar files (but still wouldn't jump over the final hurdle of making the .jar files executable).

Marko
--
https://mail.python.org/mailman/listinfo/python-list
Re: Why Python is not both an interpreter and a compiler?
On Wed, Sep 2, 2015 at 2:20 AM, Marko Rauhamaa wrote:
> In fact, the shebang notation turns any single .py file into such an
> executable. The problem is if you break your program into modules. Java,
> of course, solved a similar problem with .jar files (but still wouldn't
> jump over the final hurdle of making the .jar files executable).

The time machine strikes again! Python supports zip file execution, which gives you the same benefit as a .jar file, plus you can slap on a shebang and run the whole thing! Anyone for some pyzza?

https://www.python.org/dev/peps/pep-0441/

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
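The PEP 441 workflow can be sketched with the stdlib zipapp module (Python 3.5+); the directory, file name, and printed message here are all invented for the demonstration:

```python
import os
import subprocess
import sys
import tempfile
import zipapp

# Build a multi-file-capable executable archive: a directory with a
# __main__.py is zipped into a .pyz that the interpreter runs directly.
src = tempfile.mkdtemp()
with open(os.path.join(src, '__main__.py'), 'w') as f:
    f.write('print("hello from a zipped app")\n')

target = os.path.join(tempfile.gettempdir(), 'demo_app.pyz')
zipapp.create_archive(src, target)

out = subprocess.check_output([sys.executable, target])
print(out.decode().strip())   # hello from a zipped app
```

Passing interpreter='/usr/bin/env python3' to create_archive adds the shebang line, making the archive directly executable on POSIX systems, which is the .jar hurdle Marko mentions.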
Re: Does mkstemp open files only if they don't already exist?
In a message of Wed, 02 Sep 2015 00:57:45 +1000, Chris Angelico writes:
> On Wed, Sep 2, 2015 at 12:45 AM, Steven D'Aprano wrote:
>> I assume the answer is "Yes", but is it safe to expect that
>> tempfile.mkstemp() will only create a file that doesn't already exist? I
>> presume that there's no chance of it over-writing an existing file (say,
>> due to a race-condition).
>
> It depends on OS support, but with that, yes, it is guaranteed to be
> safe; the file is opened with an exclusivity flag. Check your system's
> man pages for details, or here:
>
> http://linux.die.net/man/3/open
>
> O_EXCL|O_CREAT makes an "atomic file creation" operation which will
> fail if another process is doing the same thing. I'm not sure how
> mkstemp() handles that failure, but my guess/expectation is that it
> would pick a different file name and try again.

I remember discussion on the mercurial mailing list about somebody who had a problem with this in conjunction with a virus scanner that really wanted to get in and do things to the poor temp files. See:

https://selenic.com/pipermail/mercurial-devel/2009-February/010197.html

for a very, very long thread about it. Apparently, you can get Windows to complain that you cannot create a file because it already exists, and it doesn't go back and try again for you.

But at the time I found the discussion puzzling, as my thought was 'why are these people using mkstemp directly, instead of tempfile.NamedTemporaryFile, which seems to be what they want?' But I found this thread while looking for a different problem with a mercurial repository that we had that was corrupted, a year or so after the thread was written, so I didn't want to go back and ask them about it _then_. Then in the general excitement -- "Your raid system is buggy! It is inserting sludge in your files!" -- I forgot about this puzzlement. Until now.

Laura
--
https://mail.python.org/mailman/listinfo/python-list
Re: Why Python is not both an interpreter and a compiler?
On Tue, Sep 1, 2015 at 12:20 PM, Marko Rauhamaa wrote:
> Steven D'Aprano :
>> I believe that Marko is wrong. It is not so easy to compile Python to
>> machine language for real machines. That's why the compiler targets a
>> virtual machine instead.
>
> Somehow Guile manages it even though Scheme is at least as dynamic a
> language as Python.

I'm wondering if any of the VM lessons learned with forth environments would help?

https://en.wikipedia.org/wiki/Threaded_code
--
https://mail.python.org/mailman/listinfo/python-list
Re: Why Python is not both an interpreter and a compiler?
On Mon, Aug 31, 2015 at 11:45 PM, Luca Menegotto wrote:
> On 31/08/2015 19:48, Mahan Marwat wrote:
>>> If it hasn't been considered all that useful, then why
>>> are the tools like cx_freeze, py2exe working so hard!
>
> Well, I consider those tools useless at all!
> I appreciate Python because, taken one or two precautions, I can easily port
> my code from one OS to another with no pain.
> So, why should I lose this wonderful freedom?

You don't. You can still take your unbundled code and port it just as easily as before. What is gained from those tools is the ability to easily distribute your code to (Windows) users who aren't knowledgeable about or interested in maintaining a Python installation on their system. It's something that you don't likely use unless you have a specific need to do that, however.

At my previous job, where IT had everybody on Windows, we published our code with batch file launchers onto the internal file server and maintained several Python installations there to run them with, rather than maintain them on the systems of 200+ users plus lab computers. With that setup we didn't require cx_freeze, but we used it in some cases for better network performance (it's typically faster to download one large file from the network than hundreds of small files).

--
https://mail.python.org/mailman/listinfo/python-list
Re: How to compare lists
On Tuesday, September 1, 2015 at 12:54:08 PM UTC+5:30, Jahn wrote:
> 1.
> How can I save 256 lists, each list has 32 values( hexadecimal numbers)
> 2.
> How to compare the saved lists with another 256 lists ( that are read online
> and have the
> same structure as the list one)?
> ( the first list must be saved in the previous step)
To add to what the others have said/asked:
Many times programmers want sets but they are programmed(!!) to think "Lists!"
This can be because for example
>>> [1,2,3] == [2,1,3]
False
>>> {1,2,3} == {2,1,3}
True
>>> [1,2,3,3] == [1,2,3]
False
>>> {1,2,3,3} == {1,2,3}
True
>>> list(set([1,2,1,3,4,4,]))
[1, 2, 3, 4]
[Though there's no guarantee of the order of the last (that I know of).]
i.e. you may prefer sets to lists when order and/or repetition don't signify
>
> E.g
???
--
https://mail.python.org/mailman/listinfo/python-list
Re: Why Python is not both an interpreter and a compiler?
In a message of Tue, 01 Sep 2015 19:20:51 +0300, Marko Rauhamaa writes:
> Somehow Guile manages it even though Scheme is at least as dynamic a
> language as Python.

But are Guile programs small? I think the OP made a categorisation error, confusing correlation with causation (i.e. that the presence of feathers is what makes an animal able to fly).

Laura
--
https://mail.python.org/mailman/listinfo/python-list
netcdf read
Hi,
I'm getting started with Python scripting. I ran this script:
import numpy as np
import netCDF4
f = netCDF4.Dataset('uwnd.mon.ltm.nc','r')
f.variables
and I had the message:
netcdf4.py
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'netcdf4' is not defined
What can I do to solve this?
I typed these three lines:
import netCDF4
f = netCDF4.Dataset('uwnd.mon.ltm.nc','r')
f.variables
and it worked.
Please, why didn't my script work?
Conrado
--
https://mail.python.org/mailman/listinfo/python-list
Re: Low level file descriptors and high-level Python files
In a message of Wed, 02 Sep 2015 00:57:22 +1000, "Steven D'Aprano" writes:
> Let's suppose somebody passes me a file descriptor to work with. It could
> come from somewhere else, but for the sake of discussion let's pretend I
> create it myself this way:
>
> Q1: In this example, I know that I opened the fd in write mode, because I
> did it myself. But since I'm not actually opening it, how do I know what
> mode to use in the call to fdopen? Is there something I can call to find
> out what mode a file descriptor has been opened with?

For POSIX things, use fcntl. You have to parse the bits yourself, and I always have to look that up to see what the grubby details are. No clue what you do on Windows.

Don't go around closing things you don't know are open. They could be some other process's thing.

Laura
--
https://mail.python.org/mailman/listinfo/python-list
Re: netcdf read
On Wed, Sep 2, 2015 at 3:23 AM, wrote:
> I'm starting in the Python scripts. I run this script:
>
>
> import numpy as np
>
> import netCDF4
>
> f = netCDF4.Dataset('uwnd.mon.ltm.nc','r')
>
>
> f.variables
>
>
> and I had the message:
>
>
> netcdf4.py
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> NameError: name 'netcdf4' is not defined
>
>
> What can I do to solve this.
My crystal ball tells me you're probably running Windows.
On Windows, there's no difference between "netcdf4.py" and
"netCDF4.py", so when you ask Python to "import netCDF4", it finds
your program and imports that. To resolve this, just rename your
script to something else, and delete any "netcdf4.pyc" or related
files. Then try running it under the new name, and you should be fine.
ChrisA
--
https://mail.python.org/mailman/listinfo/python-list
Re: Why Python is not both an interpreter and a compiler?
Laura Creighton :

> But are Guile programs small?

They can be tiny, because libguile-2.0.so, the interpreter, is a dynamic library and is installed on the computer. It's barely different from how compiled C programs can be a few kilobytes in size because libc.so is dynamic.

Emacs is a lisp interpreter. As part of its installation, emacs dumps core, and the executable core file is installed as the editor we know and love. That way, the most useful lisp modules are already preloaded in the binary, executable image of the editor. On my machine, the emacs executable is 15 megabytes in size.

Marko
--
https://mail.python.org/mailman/listinfo/python-list
Re: netcdf read
On Tue, Sep 1, 2015 at 2:07 PM, Chris Angelico wrote:
> On Wed, Sep 2, 2015 at 3:23 AM, wrote:
> > I'm starting in the Python scripts. I run this script:
> >
> >
> > import numpy as np
> >
> > import netCDF4
> >
> > f = netCDF4.Dataset('uwnd.mon.ltm.nc','r')
> >
> >
> > f.variables
> >
> >
> > and I had the message:
> >
> >
> > netcdf4.py
> > Traceback (most recent call last):
> > File "<stdin>", line 1, in <module>
> > NameError: name 'netcdf4' is not defined
> >
> >
> > What can I do to solve this.
>
> My crystal ball tells me you're probably running Windows.
>
Or Mac OS X. Unless you go out of your way to specify otherwise, the
default OS X filesystem is case-insensitive.
All the best,
Jason
Re: Low level file descriptors and high-level Python files
On 01Sep2015 11:56, [email protected] wrote:
>On Tue, Sep 1, 2015, at 10:57, Steven D'Aprano wrote:
>>Q3: I could probably answer Q2 myself if I knew how to check whether a
>>fd was open or not. With a file object, I can inspect file_obj.closed
>>and it will tell me whether the file is open or not. Is there an
>>equivalent for file descriptors?
>
>Well, if you try to call os.close, or any other operation for that
>matter, it will raise an OSError with errno=EBADF.

os.fstat might be safer. It won't have side effects.

As additional remarks: under what circumstances would you imagine
probing an fd like this? For what purpose? It feels like a code smell
for not having enough situational awareness, and then you're into
guesswork world.

One circumstance where you might use fdopen and _not_ want .close to
close the underlying service is when you're handed a file descriptor
over which you're supposed to perform some I/O, and the I/O library
functions use high level files. In that case you might want code like
this:

  fd2 = os.dup(fd)
  fp = open(fd2, 'a+b')  # or whatever mode
  ... do stuff, perhaps passing fp to a library function ...
  fp.close()

fd2 is not closed, but fd is still open for further use.

Cheers, Cameron Simpson

This is not a bug. It's just the way it works, and makes perfect sense.
        - Tom Christiansen
I like that line. I hope my boss falls for it. - Chaim Frenkel
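Steven's Q3 above (how to probe whether an fd is open) can be sketched along
the lines Cameron suggests: os.fstat either succeeds or raises an OSError with
EBADF, and has no side effects. A minimal sketch (the function name is mine,
not from the thread):

```python
import os

def fd_is_open(fd):
    """Probe an fd without side effects: os.fstat raises OSError (EBADF) if closed."""
    try:
        os.fstat(fd)
        return True
    except OSError:
        return False

# Demonstrate on a pipe: both ends start open, closing one invalidates it.
r, w = os.pipe()
print(fd_is_open(r))   # True
os.close(r)
print(fd_is_open(r))   # False
os.close(w)
```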
Re: Low level file descriptors and high-level Python files
On 2015-09-01, Laura Creighton wrote:
> Don't go around closing things you don't know are open. They
> could be some other processes' thing.

I don't understand. Closing a file descriptor that isn't open is
harmless, isn't it? Closing one that _is_ open only affects the current
process. If other processes had the same fd open, it's still open for
them.

--
Grant Edwards        grant.b.edwards        Yow! FUN is never having to
                            at              say you're SUSHI!!
                         gmail.com
Re: Why Python is not both an interpreter and a compiler?
On Wed, Sep 2, 2015 at 6:08 AM, Marko Rauhamaa wrote:
> Laura Creighton :
>
>> But are Guile programs small?
>
> They can be tiny because libguile-2.0.so, the interpreter, is a dynamic
> library and is installed on the computer. It's barely different from how
> compiled C programs can be a few kilobytes in size because libc.so is
> dynamic.

And compiled C programs are notoriously hard to distribute. Can you
pick up a Guile binary and carry it to another computer? Do you have to
absolutely perfectly match the libguile version, architecture, build
settings, etc?

Also... how is this different from distributing .pyc files and
expecting people to have a Python interpreter?

ChrisA
Re: Low level file descriptors and high-level Python files
On 02Sep2015 08:01, Cameron Simpson wrote:
> One circumstance where you might use fdopen and _not_ want .close to
> close the underlying service is when you're handed a file descriptor
> over which you're supposed to perform some I/O, and the I/O library
> functions use high level files. In that case you might want code like
> this:
>
>   fd2 = os.dup(fd)
>   fp = open(fd2, 'a+b')  # or whatever mode
>   ... do stuff, perhaps passing fp to a library function ...
>   fp.close()
>
> fd2 is not closed, but fd is still open for further use.

Um, "fd2 _is_ closed". Whoops.

Cheers, Cameron Simpson

Freedom is the right to be wrong, not the right to do wrong.
        - John G. Riefenbaker
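With the correction applied, the idiom can be sketched as a runnable example
(file name and mode are made up for illustration; Cameron's snippet used
'a+b'):

```python
import os
import tempfile

# A throwaway file standing in for whatever descriptor you were handed.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)

fd2 = os.dup(fd)           # duplicate, so closing fp won't invalidate fd
fp = os.fdopen(fd2, "wb")  # hand fp to any library expecting a file object
fp.write(b"via file object\n")
fp.close()                 # closes fd2 (the duplicate) only

os.write(fd, b"fd still open\n")  # the original descriptor remains usable
os.close(fd)
```

In Python 3 the dup can be avoided entirely with open(fd, "wb",
closefd=False), which tells the file object not to close the underlying
descriptor at all.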
Re: Why Python is not both an interpreter and a compiler?
On Wed, 2 Sep 2015 02:20 am, Marko Rauhamaa wrote:
> Steven D'Aprano :
>
>> I believe that Marko is wrong. It is not so easy to compile Python to
>> machine language for real machines. That's why the compiler targets a
>> virtual machine instead.
>
> Somehow Guile manages it even though Scheme is at least as dynamic a
> language as Python.

It's not about the dynamicism precisely, it's about what counts as
primitive data types and operations. What are the primitives in Scheme?
In Python, the primitives are pretty complex. I don't know how accurate
this page is:

https://en.wikibooks.org/wiki/Scheme_Programming/Scheme_Datatypes

but it suggests that Scheme primitives are quite close to the machine.
That may keep the runtime small.

> I never said a compiler would translate Python to (analogous) machine
> language. I said you could easily turn CPython into a dynamic library
> (run-time environment) and write a small bootstrapper that you package
> into an executable archive with the Python code (whether .py or .pyc).
> What results is a single executable that you can run analogously to any
> other command.

Provided you have the correct version of the dynamic library installed
in the correct place. But this doesn't solve the problem of being able
to distribute a single executable file that requires no pre-installed
libraries, which is the problem cx_freeze and py2exe are made to solve.
They solve the case where you can't assume that there is a Python
run-time environment available.

If you are going to require a Python run-time environment, let's call it
"pylib", then you might as well require that the Python compiler and
standard library be installed. (C is different: a distinct run-time
environment makes sense there, because the compile-time and run-time
environments are sharply delineated in C. One certainly wouldn't want to
have to re-compile the average C application each and every time you run
it.)

Maybe you could separate out the REPL and remove it from pylib, but the
REPL is probably quite small, so it might not make much of a difference
to the total size. Maybe you could build a pylib that was significantly
smaller than the Python interpreter plus stdlib. That's fine, I'm not
arguing it can't be done. I'm arguing that it's not *easy*; if it were
easy, somebody likely would have done it by now. I don't know the state
of the art here. Does Cython work in this space? How about Nuitka?

> In fact, the shebang notation turns any single .py file into such an
> executable.

I trust that you're not actually arguing that distributing .py files
meets the requirement for a standalone application.

> The problem is if you break your program into modules. Java, of
> course, solved a similar problem with .jar files (but still wouldn't
> jump over the final hurdle of making the .jar files executable).

You can distribute your Python app as a zip file, except of course you
still need the Python interpreter to be installed.

--
Steven
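The zip-file approach Steven mentions is exactly what the stdlib's zipapp
module (added in Python 3.5) packages up: a directory containing a
__main__.py becomes a single .pyz archive that the installed interpreter runs
directly. A sketch with made-up file names:

```python
import os
import subprocess
import sys
import tempfile
import zipapp

# Lay out a trivial one-file app (hypothetical name and content).
appdir = os.path.join(tempfile.mkdtemp(), "myapp")
os.makedirs(appdir)
with open(os.path.join(appdir, "__main__.py"), "w") as f:
    f.write('print("hello from a zipped app")\n')

# Bundle the directory into a single executable archive...
pyz = appdir + ".pyz"
zipapp.create_archive(appdir, pyz)

# ...and run it. Note that an installed Python interpreter is still required,
# which is precisely Steven's point about zip distribution.
result = subprocess.run([sys.executable, pyz], capture_output=True, text=True)
print(result.stdout.strip())
```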
Re: Why Python is not both an interpreter and a compiler?
On 08/31/2015 02:35 AM, Mahan Marwat wrote:
> What I know about an interpreter and a compiler is: they both convert
> source code to machine code and the only difference is, an interpreter
> converts it line by line while a compiler converts the whole source
> file. Now if we compile a C source file with a C compiler, it will
> produce a small executable file. But if we compile (freeze) a Python
> source file with the Python interpreter (using cx_freeze), it will
> produce a big executable file. Now the thing is, a C compiler does not
> embed a C compiler inside our program, while the Python tools
> (cx_freeze, py2exe) embed the Python interpreter inside our program.
> What is the reason? Why do cx_freeze etc. embed the Python interpreter
> inside our programs; are they unable to produce machine code like a C
> compiler does? Can't we program a language which is both interpreted
> and compiled? Can't the core developers add compiling functionality to
> Python?

I think your questions have been well answered by others. But there are
several attempts at making an actual Python compiler. Often this
involves a less-dynamic subset of Python. For example, pypy has a
dialect called rpython which compiles straight to C code, and then to
machine code.

Another subset compiler is cython, which is somewhat of a specialized
compiler. It compiles a subset of Python to a binary shared library
that can be imported into a Python program running in the normal
interpreter.

The dynamic nature of Python means that probably your best route to
speed is going to be through just-in-time compilation. The pypy project
is an attempt to do JIT with Python. So far the results are very
promising. Pretty cool, since pypy is written in Python and bootstraps
from the standard Python interpreter.

Lastly, one attempt at a compiler is nuitka (google it). It produces
self-contained executables. Nuitka compiles what it can and interprets
the rest (if I understand it correctly) by embedding libpython itself in
the executable. At this time, nuitka isn't focusing on performance so
much as correctness. GvR doesn't really think much of nuitka, but I
think it's a cool project and the developer is a nice guy. Maybe it has
its uses. So far I haven't had a use for nuitka; CPython is enough for
me, with cython for compiling functions that need some more raw speed.

I tend to use more conventional optimization techniques that work just
fine with the interpreter. And often the results are fast enough. For
example, I implemented a simple memoization wrapper for a particularly
expensive function that was called a lot, often over the same inputs
several times. Runtime went from 10 seconds to less than 1. Enough for
me!
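A memoization wrapper like the one described is available ready-made in the
stdlib as functools.lru_cache (Python 3.2+). A sketch with a made-up stand-in
for the expensive function (the post doesn't show the real one):

```python
from functools import lru_cache

call_count = 0

@lru_cache(maxsize=None)
def expensive(n):
    """Stand-in for the slow function; pretend this body takes seconds."""
    global call_count
    call_count += 1
    return n * n

# Repeated inputs hit the cache instead of recomputing.
results = [expensive(x) for x in (3, 7, 3, 7, 3)]
print(results, "- computed", call_count, "times")
# Five lookups, but the body only ran twice (once per distinct input).
```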
Re: Why Python is not both an interpreter and a compiler?
Steven D'Aprano :
> On Wed, 2 Sep 2015 02:20 am, Marko Rauhamaa wrote:
>> I never said a compiler would translate Python to (analogous) machine
>> language. I said you could easily turn CPython into a dynamic library
>> (run-time environment) and write a small bootstrapper that you
>> package into an executable archive with the Python code (whether .py
>> or .pyc). What results is a single executable that you can run
>> analogously to any other command.
>
> Provided you have the correct version of the dynamic library installed
> in the correct place.
Yes, virtually all Linux software builds upon dynamic libraries that
have been introduced to the linker via ldconfig.
> But this doesn't solve the problem of being able to distribute a single
> executable file that requires no pre-installed libraries, which is the
> problem cx_freeze and py2exe are made to solve. They solve the case
I wasn't trying to solve that particular problem. I was simply stating
you could compile (translate, turn) a Python program into a single,
executable file.
> I trust that you're not actually arguing that distributing .py files
> meets the requirement for a standalone application.
If your application consists of a single .py file, why not?
>> The problem is if you break your program into modules. Java, of
>> course, solved a similar problem with .jar files (but still wouldn't
>> jump over the final hurdle of making the .jar files executable).
>
> You can distribute your Python app as a zip file, except of course you
> still need the Python interpreter to be installed.
Again, having a Python interpreter around is not the issue I'm talking
about. I'm talking about the possibility to compile (translate, turn) a
Python program into a single, executable file.
Now, even C programs can suffer from the module problem: you sometimes
need to ship extra dynamic libraries ("modules") with your binary.
Marko
Re: Why Python is not both an interpreter and a compiler?
Steven D'Aprano writes:

> On Wed, 2 Sep 2015 02:20 am, Marko Rauhamaa wrote:
>> Steven D'Aprano:
>>
>>> I believe that Marko is wrong. It is not so easy to compile Python
>>> to machine language for real machines. That's why the compiler
>>> targets a virtual machine instead.
>>
>> Somehow Guile manages it even though Scheme is at least as dynamic a
>> language as Python.
>
> It's not about the dynamicism precisely, it's about what counts as
> primitive data types and operations.
>
> What are the primitives in Scheme? In Python, the primitives are
> pretty complex. I don't know how accurate this page is:
>
> https://en.wikibooks.org/wiki/Scheme_Programming/Scheme_Datatypes
>
> but it suggests that Scheme primitives are quite close to the
> machine. That may keep the runtime small.

I think I spotted a tiny inaccuracy, but the real problem with that page
is that it's *very* incomplete: no mention of exact integers and
rationals, strings appear in the text but not as an entry, were vectors
even mentioned?, nothing about procedures!, let alone reified
continuations, nothing about input/output ports. Also records (definable
types with named fields) have now been officially in for some time, so
that's another kind of structured type. Many implementations have object
systems, possibly modules as objects. It's quite analogous to Python's
objects.

Eval in Scheme is more restricted. Procedure internals are not
accessible at all, and implementations typically offer ways to declare
that standard names in a compilation unit indeed have their standard
meaning and that variables will not be assigned outside the given
compilation unit, so the compiler can propagate information about
expected argument types around, infer more types, inline code, and do
lots of things that are way beyond me.

Some Scheme implementations use quite aggressive compilation techniques.
Guile is one, I think. Gambit and Larceny are two others.

[- -]
