Dynamic doctests?
I'm trying to execute doctests without writing anything to the filesystem
(i.e. entirely within the Python interpreter). I have something like:
"""
Docstring:
>>> n
6
"""
# Code:
n=6
import doctest
doctest.testmod()
The tests all pass when I save this text to a Python script (as they
should), but when I load this text into a string and run:
code='"""\n>>> n\n6\n"""\nn=6\nimport doctest\ndoctest.testmod()'
exec(code)
I get:
Traceback (most recent call last):
File "<string>", line 1, in ?
File "<string>", line 7, in ?
File "/usr/lib/python2.4/doctest.py", line 1841, in testmod
for test in finder.find(m, name, globs=globs,
extraglobs=extraglobs):
File "/usr/lib/python2.4/doctest.py", line 851, in find
self._find(tests, obj, name, module, source_lines, globs, {})
File "/usr/lib/python2.4/doctest.py", line 914, in _find
for valname, val in getattr(obj, '__test__', {}).items():
AttributeError: 'function' object has no attribute 'items'
Can what I'm trying to do be done?
Any help is greatly appreciated.
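For what it's worth, the doctest module can run tests held in a string directly, without touching the filesystem or relying on exec. A minimal sketch in modern Python (the test name "dynamic_test" and the globals dict are arbitrary choices for illustration):

```python
import doctest

# A doctest held in an ordinary string rather than in a module on disk.
source = """
>>> n
6
"""

# Build a DocTest object from the string, supplying the globals the
# examples need ({"n": 6} plays the role of the module namespace).
parser = doctest.DocTestParser()
test = parser.get_doctest(source, {"n": 6}, "dynamic_test", None, 0)

# Run it; the runner records how many examples were tried and failed.
runner = doctest.DocTestRunner(verbose=False)
runner.run(test)
print(runner.failures, runner.tries)
```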
--
http://mail.python.org/mailman/listinfo/python-list
Store doctest verbose results to a variable
Is it possible to store doctest's verbose output in a variable? For example:

import doctest, my_test_module
a = doctest.testmod(my_test_module)

The contents of 'a' is the tuple of passed and failed results. I tried
passing verbose mode to the testmod function, but 'a' is still a tuple.
Any help is greatly appreciated.
Import on case insensitive filesystem
Hello,
I'm developing an app which runs Python on a filesystem which is not case
sensitive (Mac OS X), but is mounted as an NFS drive on a remote machine.
This causes errors because imports are case sensitive but they are being
resolved against a filesystem which is case insensitive. Short of copying the
entire directory tree over to another filesystem, is there anything I can do
to flag Python to act as though it were on a case-sensitive FS?
The exact issue I'm seeing is with this file, named crypto.py, which relies
on pycrypto:
from Crypto.Cipher import AES
print("Made it!")
That import finds the file itself instead of the Crypto package, because the
filesystem ignores the difference in case.
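As a quick diagnostic (not a fix), checking a module's __file__ confirms which file an import actually resolved to; if importing Crypto reports a path ending in your own crypto.py, the local file is shadowing the installed package. A sketch using the stdlib json module as a stand-in:

```python
import importlib

# Import a module and report the file it actually came from. Substituting
# "Crypto" here (in the problem environment) would reveal the shadowing.
mod = importlib.import_module("json")
print(mod.__file__)
```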
Any tips would be greatly appreciated.
Best,
Mitchell
Problem With Insert with MySQLdb
Hello,
I am a complete beginner with Python. I've managed to get mod_python up and
running with Apache2, and I'm trying to do a simple insert into a table in a
MySQL database.
I'm using the MySQLdb library for connectivity. I can read from the database
no problem, but when I do an insert, the value never gets added to the
database, even though there is no error and the SQL is fine (I print out
the SQL statement in the function). When I copy and paste the SQL from my
browser and run it directly in MySQL, it works fine.
Here is the function in question:
def add(req):
    db = MySQLdb.connect(host="intranet", user="root", passwd="",
                         db="intranet")
    # create a cursor
    cursor = db.cursor()
    # execute SQL statement
    sql = ("INSERT INTO category (category_name) VALUES ('"
           + req.form['category'] + "')")
    cursor.execute(sql)
    return sql
The SQL it is outputting is:
INSERT INTO category (category_name) VALUES ('Test')
Am I doing something obviously incorrect here?
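The likely culprit (not visible in the snippet) is that DB-API connections run with autocommit off, so the INSERT is discarded when the connection goes away; a db.commit() after cursor.execute() is needed. A sketch of the commit pattern, using sqlite3 as a stand-in for MySQLdb since both follow the DB-API, and using a parameterized query instead of string concatenation (the concatenation above is also an SQL-injection risk):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE category (category_name TEXT)")

cur = conn.cursor()
# The placeholder (?) lets the driver quote the value safely.
cur.execute("INSERT INTO category (category_name) VALUES (?)", ("Test",))
conn.commit()  # without this, the INSERT never becomes visible to others
conn.close()

# A fresh connection only sees the row because commit() was called.
check = sqlite3.connect(path)
rows = check.execute("SELECT category_name FROM category").fetchall()
print(rows)
```

Note that MySQLdb uses %s placeholders rather than sqlite3's ?.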
Thanks,
Dave
Where to save classes? How to access classes?
Hi,

I'm trying to get into the object-oriented aspect of Python. If I create a
custom class (in its own file), how do I access that class from a function in
a different file? In Java there's the notion of a CLASSPATH, where you tell
the compiler where to look for classes. Is there something similar to this in
Python?

Thanks,
Dave
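Python's rough analogue of CLASSPATH is sys.path: the list of directories the import machinery searches. A self-contained sketch (shapes.py and Circle are invented for illustration) that writes a module into a temporary directory and then imports a class from it:

```python
import os
import sys
import tempfile

# Create a module file in a directory that is not yet importable.
pkg_dir = tempfile.mkdtemp()
with open(os.path.join(pkg_dir, "shapes.py"), "w") as f:
    f.write("class Circle:\n    def area(self):\n        return 3.14\n")

# Adding the directory to sys.path is the CLASSPATH-like step.
sys.path.insert(0, pkg_dir)

from shapes import Circle
print(Circle().area())
```

In practice the directory is usually added via the PYTHONPATH environment variable or by installing the code as a package, rather than by editing sys.path at runtime.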
Can't instantiate class
Hello,

Here is a very basic question, but it is frustrating me to no end
nonetheless. I have one file called addLink.py. In a method in this file I am
trying to instantiate a class and call a method on that class. Here is the
code:

def getCategories():
    # instantiate the DataUtil class
    db = DataUtil()
    # and call the getConnection() method
    connection = db.getConnection()
    ...

At the top of this file I am importing the DataUtil module (named
DataUtil.py) with this line:

import DataUtil

The DataUtil.py file resides in the same directory as the above file and
looks like this:

import MySQLdb

class DataUtil:
    def __init__(self):
        print "init"
    def getConnection(self):
        return MySQLdb.connect(host="host", user="user", passwd="pass", db="test")

When I execute the getCategories() method above I get the following error:

  File "C:\Apache2\htdocs\Intranet\addLink.py", line 42, in getCategories
    db = DataUtil()
TypeError: 'module' object is not callable

Any idea what I'm doing wrong?

Thanks,
Dave
Re: Can't instantiate class
Thanks for your prompt reply.

Ok, so if I use your first suggestion (db = DataUtil.DataUtil()), I get this
error:

AttributeError: 'module' object has no attribute 'DataUtil'

If I try importing the class directly (from DataUtil import DataUtil), I get
this error:

ImportError: cannot import name DataUtil

Could these errors have something to do with the fact that I am doing this
through mod_python?

Thanks again,
Dave

Michael P. Soulier wrote:
> On 11/6/05, David Mitchell <[EMAIL PROTECTED]> wrote:
>> import DataUtil
>>
>>   File "C:\Apache2\htdocs\Intranet\addLink.py", line 42, in getCategories
>>     db = DataUtil()
>> TypeError: 'module' object is not callable
>
> You've imported module DataUtil, and by calling DataUtil(), you're
> trying to call the module, hence the error. I think you want
>
> db = DataUtil.DataUtil()
>
> Or,
>
> from DataUtil import DataUtil
>
> And then your code will work.
>
> Mike
Re: Python book for a non-programmer
http://www.python.org/doc/Intros.html

and two great texts for when she has covered the basics:

http://diveintopython.org/
http://www.mindview.net/Books/TIPython
Memory error
Hello all,
I'm afraid I am new to all this so bear with me...
I am looking to find the statistical significance between two large netCDF data
sets.
Firstly I've loaded the two files into python:
swh=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/controlperiod/averages/swh_control_concat.nc',
'r')
swh_2050s=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/2050s/averages/swh_2050s_concat.nc',
'r')
I have then isolated the variables I want to perform the Pearson correlation on:
hs=swh.variables['hs']
hs_2050s=swh_2050s.variables['hs']
Here is the metadata for those files:
print hs
int16 hs(time, latitude, longitude)
standard_name: significant_height_of_wind_and_swell_waves
long_name: significant_wave_height
units: m
add_offset: 0.0
scale_factor: 0.002
_FillValue: -32767
missing_value: -32767
unlimited dimensions: time
current shape = (86400, 350, 227)
print hs_2050s
int16 hs(time, latitude, longitude)
standard_name: significant_height_of_wind_and_swell_waves
long_name: significant_wave_height
units: m
add_offset: 0.0
scale_factor: 0.002
_FillValue: -32767
missing_value: -32767
unlimited dimensions: time
current shape = (86400, 350, 227)
Then, to perform the Pearson correlation:
from scipy.stats.stats import pearsonr
pearsonr(hs,hs_2050s)
I then get a memory error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/sci/lib/python2.7/site-packages/scipy/stats/stats.py", line
2409, in pearsonr
x = np.asarray(x)
File "/usr/local/sci/lib/python2.7/site-packages/numpy/core/numeric.py", line
321, in asarray
return array(a, dtype, copy=False, order=order)
MemoryError
This also happens when I try to create numpy arrays from the data.
Does anyone know how I can alleviate these memory errors?
Cheers,
Jamie
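With (86400, 350, 227) int16 values per variable, np.asarray materializes the whole (scaled, hence floating-point) array at once. One memory-friendly pattern is to accumulate the five sums Pearson's r needs chunk by chunk, so only one slice is resident at a time. A sketch with random data standing in for netCDF slices like hs[i:i+step], and assuming the data have already been reduced to 1D:

```python
import numpy as np

# Random stand-ins for two correlated 1D variables.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)
y = x + rng.standard_normal(10_000)

# Accumulate the sufficient statistics one chunk at a time.
n = sx = sy = sxx = syy = sxy = 0.0
step = 1000
for i in range(0, x.size, step):
    cx, cy = x[i:i + step], y[i:i + step]
    n += cx.size
    sx += cx.sum()
    sy += cy.sum()
    sxx += (cx * cx).sum()
    syy += (cy * cy).sum()
    sxy += (cx * cy).sum()

r = (n * sxy - sx * sy) / np.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
print(r)  # close to 1/sqrt(2) for this construction
```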
Re: Memory error
On Monday, March 24, 2014 11:32:31 AM UTC, Jamie Mitchell wrote:
> [quoted post snipped]
Just realised that obviously Pearson correlation requires two 1D arrays and
mine are 3D - silly mistake!
Line of best fit
I am new to python so apologies for the ignorance with this question.
How would I apply a line of best fit to a plot?
My data are netCDF4 data files and this is essentially what I have done so far:
swh1=netCDF4.Dataset('filename','r')
hs1=swh1.variables['hs']
swh2=netCDF4.Dataset('filename','r')
hs2=swh2.variables['hs']
plt.plot(hs1,hs2,'.')
Cheers,
Jamie
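The usual recipe is np.polyfit with degree 1, then evaluating the fitted polynomial over the x data. A sketch with small made-up arrays in place of the netCDF variables:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])

# A degree-1 polynomial fit is the line of best fit.
slope, intercept = np.polyfit(x, y, 1)
fit = np.poly1d((slope, intercept))

print(slope, intercept)
# plt.plot(x, fit(x), '-') would then draw the line over the scatter.
```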
len() of unsized object - ks test
Hello all,
I am trying to perform a Kolmogorov-Smirnov test in Python but I'm having a few
difficulties.
# My files are netCDF so I import them as follows:
control=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/controlperiod/south_west/swhcontrol_swest_concatannavg_1D.nc','r')
# The string is simply a 1D array
# Then performing the ks test:
kstest(control,'norm')
# I then get the following error:
File "<stdin>", line 1, in <module>
File "/usr/local/sci/lib/python2.7/site-packages/scipy/stats/stats.py", line
3413, in kstest
N = len(vals)
TypeError: len() of unsized object
Any ideas on why this isn't working would be great.
Thanks,
Jamie
Re: len() of unsized object - ks test
On Friday, April 25, 2014 3:07:54 PM UTC+1, Jamie Mitchell wrote:
> [quoted post snipped]
Thanks for your help.
Steven, you're right: I wasn't reading in the file with netCDF4.Dataset, I
was just opening it. I have rectified it now - a silly mistake!
Thanks again,
Jamie
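The distinction, in sketch form: the Dataset object itself has no length, but slicing a variable with [:] yields a plain sized 1D array that kstest can consume. FakeDataset below is a stand-in for netCDF4.Dataset, invented so the example is self-contained:

```python
import numpy as np

class FakeDataset:
    """Stand-in for netCDF4.Dataset: holds named variables."""
    def __init__(self):
        self.variables = {"hs": np.arange(5.0)}

control = FakeDataset()

# len(control) would fail; slicing the variable gives a sized 1D array.
vals = control.variables["hs"][:]
print(len(vals))
# kstest(vals, 'norm') would now receive data it can take the length of.
```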
Saving a file as netCDF4 in Python
Dear all,
Apologies as this sounds like a very simple question but I can't find an answer
anywhere.
I have loaded a netCDF4 file into python as follows:
swh=netCDF4.Dataset('path/to/netCDFfile','r')
I then isolate the variables I wish to plot:
hs=swh.variables['hs']
year=swh.variables['year']
I would then like to save these hs and year variables so that I don't have to
isolate them every time I want to plot them.
Any help would be much appreciated.
Cheers,
Jamie
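One straightforward option is to dump the extracted arrays with np.save and reload them with np.load, so the netCDF file never needs reopening. A sketch with a made-up array standing in for hs (note that slicing with [:] is what converts the netCDF variable into a saveable numpy array):

```python
import numpy as np
import os
import tempfile

# Stand-in for hs = swh.variables['hs'][:]
hs = np.linspace(1.0, 2.0, 5)

path = os.path.join(tempfile.mkdtemp(), "hs.npy")
np.save(path, hs)         # persist the array to disk
hs_again = np.load(path)  # reload it in a later session
print(hs_again)
```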
Adding R squared value to scatter plot
I have made a plot using the following code:
python2.7
import netCDF4
import matplotlib.pyplot as plt
import numpy as np
swh_Q0_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/controlperiod/south_west/swhcontrol_swest_annavg1D.nc','r')
hs_Q0_con_sw=swh_Q0_con_sw.variables['hs'][:]
swh_Q3_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q3/swh/controlperiod/south_west/swhcontrol_swest_annavg1D.nc','r')
hs_Q3_con_sw=swh_Q3_con_sw.variables['hs'][:]
swh_Q4_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q4/swh/controlperiod/south_west/swhcontrol_swest_annavg1D.nc','r')
hs_Q4_con_sw=swh_Q4_con_sw.variables['hs'][:]
swh_Q14_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q14/swh/controlperiod/south_west/swhcontrol_swest_annavg1D.nc','r')
hs_Q14_con_sw=swh_Q14_con_sw.variables['hs'][:]
swh_Q16_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q16/swh/controlperiod/south_west/swhcontrol_swest_annavg1D.nc','r')
hs_Q16_con_sw=swh_Q16_con_sw.variables['hs'][:]
swh_Q0_fut_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/2050s/south_west/swh2050s_swest_annavg1D.nc','r')
hs_Q0_fut_sw=swh_Q0_fut_sw.variables['hs'][:]
swh_Q3_fut_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q3/swh/2050s/south_west/swh2050s_swest_annavg1D.nc','r')
hs_Q3_fut_sw=swh_Q3_fut_sw.variables['hs'][:]
swh_Q4_fut_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q4/swh/2050s/south_west/swh2050s_swest_annavg1D.nc','r')
hs_Q4_fut_sw=swh_Q4_fut_sw.variables['hs'][:]
swh_Q14_fut_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q14/swh/2050s/south_west/swh2050s_swest_annavg1D.nc','r')
hs_Q14_fut_sw=swh_Q14_fut_sw.variables['hs'][:]
swh_Q16_fut_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q16/swh/2050s/south_west/swh2050s_swest_annavg1D.nc','r')
hs_Q16_fut_sw=swh_Q16_fut_sw.variables['hs'][:]
fit_Q0_sw=np.polyfit(hs_Q0_con_sw,hs_Q0_fut_sw,1)
fit_fn_Q0_sw=np.poly1d(fit_Q0_sw)
plt.plot(hs_Q0_con_sw,hs_Q0_fut_sw,'g.')
plt.plot(hs_Q0_con_sw,fit_fn_Q0_sw(hs_Q0_con_sw),'g',label='Q0 no pert')
fit_Q3_sw=np.polyfit(hs_Q3_con_sw,hs_Q3_fut_sw,1)
fit_fn_Q3_sw=np.poly1d(fit_Q3_sw)
plt.plot(hs_Q3_con_sw,hs_Q3_fut_sw,'b.')
plt.plot(hs_Q3_con_sw,fit_fn_Q3_sw(hs_Q3_con_sw),'b',label='Q3 low sens')
fit_Q4_sw=np.polyfit(hs_Q4_con_sw,hs_Q4_fut_sw,1)
fit_fn_Q4_sw=np.poly1d(fit_Q4_sw)
plt.plot(hs_Q4_con_sw,hs_Q4_fut_sw,'y.')
plt.plot(hs_Q4_con_sw,fit_fn_Q4_sw(hs_Q4_con_sw),'y',label='Q4 low sens')
fit_Q14_sw=np.polyfit(hs_Q14_con_sw,hs_Q14_fut_sw,1)
fit_fn_Q14_sw=np.poly1d(fit_Q14_sw)
plt.plot(hs_Q14_con_sw,hs_Q14_fut_sw,'r.')
plt.plot(hs_Q14_con_sw,fit_fn_Q14_sw(hs_Q14_con_sw),'r',label='Q14 high sens')
fit_Q16_sw=np.polyfit(hs_Q16_con_sw,hs_Q16_fut_sw,1)
fit_fn_Q16_sw=np.poly1d(fit_Q16_sw)
plt.plot(hs_Q16_con_sw,hs_Q16_fut_sw,'c.')
plt.plot(hs_Q16_con_sw,fit_fn_Q16_sw(hs_Q16_con_sw),'c',label='Q16 high sens')
plt.legend(loc='best')
plt.xlabel('Significant Wave Height annual averages NW Scotland 1981-2010')
plt.ylabel('Significant Wave Height annual averages NW Scotland 2040-2069')
plt.title('Scatter plot of Significant Wave Height')
plt.show()
--
What I would like to do is display the R squared value next to the lines of
best fit that I have made.
Does anyone know how to do this with matplotlib?
Thanks,
Jamie
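Computing the R squared value itself (before placing it with plt.text) can be done from the fit residuals. A sketch with hypothetical data in place of the wave-height arrays:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.1, 1.9, 3.2, 3.9])

coeffs = np.polyfit(x, y, 1)
pred = np.poly1d(coeffs)(x)

# R^2 = 1 - SS_res / SS_tot
ss_res = ((y - pred) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
print(r2)
# plt.text(x0, y0, r'$R^2 = %.3f$' % r2) would place it next to the line.
```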
Matplotlib - specifying bin widths
Hello all!

Instead of setting the number of bins, I want to set the bin width. I would
like my bins to go from 1.7 to 2.4 in steps of 0.05. How do I say this in the
code?

Cheers,
Jamie
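Passing an explicit array of bin edges to hist() does this; np.linspace with the right count gives exact 0.05 steps from 1.7 to 2.4 (15 edges, 14 bins), avoiding the floating-point drift np.arange can show at the endpoint:

```python
import numpy as np

# Edges 1.7, 1.75, ..., 2.4: (2.4 - 1.7) / 0.05 = 14 bins, so 15 edges.
bins = np.linspace(1.7, 2.4, 15)
print(bins)
# plt.hist(data, bins=bins) would then use 0.05-wide bins.
```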
Re: Adding R squared value to scatter plot
On Wednesday, May 21, 2014 1:30:16 PM UTC+1, Jason Swails wrote:
> On Wed, May 21, 2014 at 7:59 AM, Jamie Mitchell wrote:
> > [quoted post snipped]
>
> You can add plain text or annotations with arrows using any of the API
> functions described here: http://matplotlib.org/1.3.1/users/text_intro.html
> (information specifically regarding the text call is here:
> http://matplotlib.org/1.3.1/api/pyplot_api.html#matplotlib.pyplot.text)
>
> You can also use LaTeX typesetting here, so you can make the text something
> like r'$R^2$' to display R^2 with "nice" typesetting. (I typically use raw
> strings for matplotlib text strings with LaTeX formulas in them, since
> LaTeX makes extensive use of the \ character.)
>
> The onus is on you, the programmer, to determine _where_ on the plot you
> want the text to appear. Since you know what you are plotting, you can
> write a quick helper function that computes the optimal (to you) location
> for the label based on where things are drawn on the canvas. There is a
> _lot_ of flexibility here, so you should be able to get your text looking
> exactly how (and where) you want it.
>
> Hope this helps,
> Jason
>
> --
> Jason M. Swails
> BioMaPS, Rutgers University
> Postdoctoral Researcher
Hi Jason,
Thank you for your swift response - you solved my problem!
Sorry I took a while to get back to you.
Thanks again,
Jamie
Re: Matplotlib - specifying bin widths
On Thursday, June 5, 2014 4:54:16 PM UTC+1, Jamie Mitchell wrote:
> [quoted post snipped]
That's great, thanks Mark.
Overlaying a boxplot onto a time series figure
Hi there,
I would like to overlay some boxplots onto a time series.
I have tried pylab.hold(True) in between the two plots in my code but this
hasn't worked.
The problem is that the x-axes of the boxplots and the time series are not the
same.
Code for time series:
python2.7
import netCDF4
import matplotlib.pyplot as plt
import numpy as np
swh_Q0_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/controlperiod/south_west/swhcontrol_swest_annavg1D.nc','r')
hs_Q0_con_sw=swh_Q0_con_sw.variables['hs'][:]
year_con=swh_Q0_con_sw.variables['year'][:]
swh_Q3_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q3/swh/controlperiod/south_west/swhcontrol_swest_annavg1D.nc','r')
hs_Q3_con_sw=swh_Q3_con_sw.variables['hs'][:]
swh_Q4_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q4/swh/controlperiod/south_west/swhcontrol_swest_annavg1D.nc','r')
hs_Q4_con_sw=swh_Q4_con_sw.variables['hs'][:]
swh_Q14_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q14/swh/controlperiod/south_west/swhcontrol_swest_annavg1D.nc','r')
hs_Q14_con_sw=swh_Q14_con_sw.variables['hs'][:]
swh_Q16_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q16/swh/controlperiod/south_west/swhcontrol_swest_annavg1D.nc','r')
hs_Q16_con_sw=swh_Q16_con_sw.variables['hs'][:]
swh_Q0_fut_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/2050s/south_west/swh2050s_swest_annavg1D.nc','r')
hs_Q0_fut_sw=swh_Q0_fut_sw.variables['hs'][:]
year_fut=swh_Q0_fut_sw.variables['year'][:]
swh_Q3_fut_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q3/swh/2050s/south_west/swh2050s_swest_annavg1D.nc','r')
hs_Q3_fut_sw=swh_Q3_fut_sw.variables['hs'][:]
swh_Q4_fut_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q4/swh/2050s/south_west/swh2050s_swest_annavg1D.nc','r')
hs_Q4_fut_sw=swh_Q4_fut_sw.variables['hs'][:]
swh_Q14_fut_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q14/swh/2050s/south_west/swh2050s_swest_annavg1D.nc','r')
hs_Q14_fut_sw=swh_Q14_fut_sw.variables['hs'][:]
swh_Q16_fut_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q16/swh/2050s/south_west/swh2050s_swest_annavg1D.nc','r')
hs_Q16_fut_sw=swh_Q16_fut_sw.variables['hs'][:]
fit_Q0_con_sw=np.polyfit(year_con,hs_Q0_con_sw,1)
fit_fn_Q0_con_sw=np.poly1d(fit_Q0_con_sw)
plt.plot(year_con,hs_Q0_con_sw,'g.')
plt.plot(year_con,fit_fn_Q0_con_sw(year_con),'g',label='Q0 no pert')
fit_Q3_con_sw=np.polyfit(year_con,hs_Q3_con_sw,1)
fit_fn_Q3_con_sw=np.poly1d(fit_Q3_con_sw)
plt.plot(year_con,hs_Q3_con_sw,'b.')
plt.plot(year_con,fit_fn_Q3_con_sw(year_con),'b',label='Q3 low sens')
fit_Q4_con_sw=np.polyfit(year_con,hs_Q4_con_sw,1)
fit_fn_Q4_con_sw=np.poly1d(fit_Q4_con_sw)
plt.plot(year_con,hs_Q4_con_sw,'y.')
plt.plot(year_con,fit_fn_Q4_con_sw(year_con),'y',label='Q4 low sens')
fit_Q14_con_sw=np.polyfit(year_con,hs_Q14_con_sw,1)
fit_fn_Q14_con_sw=np.poly1d(fit_Q14_con_sw)
plt.plot(year_con,hs_Q14_con_sw,'r.')
plt.plot(year_con,fit_fn_Q14_con_sw(year_con),'r',label='Q14 high sens')
fit_Q16_con_sw=np.polyfit(year_con,hs_Q16_con_sw,1)
fit_fn_Q16_con_sw=np.poly1d(fit_Q16_con_sw)
plt.plot(year_con,hs_Q16_con_sw,'c.')
plt.plot(year_con,fit_fn_Q16_con_sw(year_con),'c',label='Q16 high sens')
fit_Q0_fut_sw=np.polyfit(year_fut,hs_Q0_fut_sw,1)
fit_fn_Q0_fut_sw=np.poly1d(fit_Q0_fut_sw)
plt.plot(year_fut,hs_Q0_fut_sw,'g.')
plt.plot(year_fut,fit_fn_Q0_fut_sw(year_fut),'g')
fit_Q3_fut_sw=np.polyfit(year_fut,hs_Q3_fut_sw,1)
fit_fn_Q3_fut_sw=np.poly1d(fit_Q3_fut_sw)
plt.plot(year_fut,hs_Q3_fut_sw,'b.')
plt.plot(year_fut,fit_fn_Q3_fut_sw(year_fut),'b')
fit_Q4_fut_sw=np.polyfit(year_fut,hs_Q4_fut_sw,1)
fit_fn_Q4_fut_sw=np.poly1d(fit_Q4_fut_sw)
plt.plot(year_fut,hs_Q4_fut_sw,'y.')
plt.plot(year_fut,fit_fn_Q4_fut_sw(year_fut),'y')
fit_Q14_fut_sw=np.polyfit(year_fut,hs_Q14_fut_sw,1)
fit_fn_Q14_fut_sw=np.poly1d(fit_Q14_fut_sw)
plt.plot(year_fut,hs_Q14_fut_sw,'r.')
plt.plot(year_fut,fit_fn_Q14_fut_sw(year_fut),'r')
fit_Q16_fut_sw=np.polyfit(year_fut,hs_Q16_fut_sw,1)
fit_fn_Q16_fut_sw=np.poly1d(fit_Q16_fut_sw)
plt.plot(year_fut,hs_Q16_fut_sw,'c.')
plt.plot(year_fut,fit_fn_Q16_fut_sw(year_fut),'c')
plt.legend(loc='best')
plt.xlabel('Year')
plt.ylabel('Significant Wave Height annual averages SW England')
plt.title('Time series of Significant Wave Height')
plt.show()
Code for boxplots:
python2.7
from pylab import *
import netCDF4
data=(hs_Q0_con_sw,hs_Q3_con_sw,hs_Q4_con_sw,hs_Q14_con_sw,hs_Q16_con_sw)
figure(1)
boxplot(data)
labels=('Q0 no pert','Q3 low sens','Q4 low sens','Q14 high sens','Q16 high sens')
xticks(range(1,6),labels,rotation=15)
xlabel('Ensemble Member')
ylabel('Significant Wave Height Annual Average')
title('Significant Wave Height SW England 1981-2010')
show()
If anybody knows how I could integrate these two plots I would be eternally
grateful!
Thanks,
Jamie
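The mismatch in x axes is usually solved with boxplot's positions= keyword, which places each box at an arbitrary x coordinate (a year) instead of the default 1..N, so both plots share one axis. A minimal sketch with invented annual data on a non-interactive backend:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt

years = np.arange(1981, 1986)
series = np.linspace(2.0, 2.4, years.size)          # stand-in annual averages
rng = np.random.default_rng(0)
samples = [rng.normal(m, 0.1, 50) for m in series]  # per-year distributions

fig, ax = plt.subplots()
ax.plot(years, series, "g-")                           # the time series
bp = ax.boxplot(samples, positions=years, widths=0.6)  # boxes on the year axis
ax.set_xlim(years[0] - 1, years[-1] + 1)
```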
Matplotlib Colouring outline of histogram
Hi folks,

Instead of colouring the entire bar of a histogram, i.e. filling it, I would
like to colour just the outline of the histogram. Does anyone know how to do
this?

Version - Python2.7

Cheers,
Jamie
Problem with numpy 2D Histogram
Hi folks,
I'm trying to plot a 2D histogram but I'm having some issues:
from pylab import *
import numpy as np
import netCDF4
hist,xedges,yedges=np.histogram2d(x,y,bins=10)
extent=[xedges[0],xedges[-1],yedges[0],yedges[-1]]
imshow(hist.T,extent=extent,interpolation='nearest')
colorbar()
show()
After the first line of code I get:
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according
to the rule 'safe'
I'm using python2.7, x and y are type 'numpy.ndarray'
Cheers,
Jamie
--
https://mail.python.org/mailman/listinfo/python-list
Re: Problem with numpy 2D Histogram
On Friday, June 20, 2014 10:25:44 AM UTC+1, Peter Otten wrote:
> Jamie Mitchell wrote:
> > [quoted post snipped]
>
> The error message complains about the dtype, i.e. the type of the elements
> in the array, not the array itself. Make sure the elements are floating
> point numbers or something compatible, not arbitrary Python objects.
>
> As a baseline the following works:
>
> from pylab import *
> import numpy as np
>
> x, y = np.random.randn(2, 100)
> print "x", type(x), x.dtype
> print "y", type(y), y.dtype
>
> hist, xedges, yedges = np.histogram2d(x, y, bins=10)
> extent = [xedges[0], xedges[-1], yedges[0], yedges[-1]]
> imshow(hist.T, extent=extent, interpolation='nearest')
> colorbar()
> show()
>
> while this doesn't:
>
> # ...
> x, y = np.random.randn(2, 100)
> import decimal
> y = np.array([decimal.Decimal.from_float(v) for v in y])
> # ...
Thanks Peter.
I have changed my x and y data to float64 types, but I am still getting the
same error message.
Cheers,
Jamie
Re: Problem with numpy 2D Histogram
On Friday, June 20, 2014 12:00:15 PM UTC+1, Peter Otten wrote:
> Jamie Mitchell wrote:
> > I have changed my x and y data to float64 types but I am still getting
> > the same error message?
>
> Please double-check by adding
>
> assert x.dtype == np.float64
> assert y.dtype == np.float64
>
> If none of these assertions fail, try to make a minimal script including
> some data that provokes the TypeError and post it here.
OK this is my code:
swh_Q0_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q0/swh/controlperiod/south_west/swhcontrol_swest_annavg.nc','r')
hs_Q0_con_sw=swh_Q0_con_sw.variables['hs'][:]
x=hs_Q0_con_sw.astype(float64)
# When I print the dtype of x here it says 'float64'
mwp_Q0_con_sw=netCDF4.Dataset('/data/cr1/jmitchel/Q0/mean_wave_period/south_west/controlperiod/mwpcontrol_swest_annavg1D.nc','r')
te_Q0_con_sw=mwp_Q0_con_sw.variables['te'][:]
y=te_Q0_con_sw.astype(float64)
If I try assert x.dtype == np.float64 I get:
AssertionError
hist,xedges,yedges=np.histogram2d(x,y,bins=10)
TypeError: Cannot cast array data from dtype('O') to dtype('float64') according
to the rule 'safe'
Thanks,
Jamie
Re: Problem with numpy 2D Histogram
On Friday, June 20, 2014 9:46:29 AM UTC+1, Jamie Mitchell wrote:
> [quoted post snipped]
Thanks for your help Peter.
Re: Matplotlib Colouring outline of histogram
On Friday, June 20, 2014 2:47:03 PM UTC+1, Jason Swails wrote:
> On Fri, Jun 20, 2014 at 4:10 AM, Jamie Mitchell wrote:
> > [quoted post snipped]
>
> Look at the matplotlib.pyplot.hist function documentation:
> http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.hist
>
> In addition to the listed parameters, you'll see the "Other Parameters"
> taken are those that can be applied to the created Patch objects (which
> are the actual rectangles). For the Patch keywords, see the API
> documentation on the Patch object
> (http://matplotlib.org/api/artist_api.html#matplotlib.patches.Patch).
> So you can do one of two things:
>
> 1) Pass the necessary Patch keywords to effect what you want, e.g.
> (untested):
>
> import matplotlib.pyplot as plt
>
> plt.hist(dataset, bins=10, range=(-5, 5), normed=True,
>          edgecolor='b', linewidth=2, facecolor='none',  # Patch options
>          )
> plt.show()
>
> 2) Iterate over the Patch instances returned by plt.hist() and set the
> properties you want, e.g. (untested):
>
> import matplotlib.pyplot as plt
>
> n, bins, patches = plt.hist(dataset, bins=10, range=(-5, 5), normed=True)
> for patch in patches:
>     patch.set_edgecolor('b')     # color of the lines around each bin
>     patch.set_linewidth(2)       # set width of bin edge
>     patch.set_facecolor('none')  # set no fill
>     # Anything else you want to do
>
> plt.show()
>
> Approach (1) is the "easy" way, and is there to satisfy the majority of
> use cases. However, approach (2) is _much_ more flexible. Suppose you
> wanted to highlight a particular region of your data with a specific
> facecolor or edgecolor -- you can apply the features you want to
> individual patches using approach (2). Or if you wanted to highlight a
> specific bin with thicker lines.
>
> This is a common theme in matplotlib -- you can use keywords to apply the
> same features to every part of a plot, or you can iterate over the drawn
> objects and customize them individually. This is a large part of what
> makes matplotlib nice to me -- it has a "simple" mode as well as a
> predictable API for customizing a plot in almost any way you could
> possibly want.
>
> Hope this helps,
> Jason
That's great Jason thanks for the detailed response, I went with the easier
option 1!
I am also trying to put hatches on my histograms like so:
plt.hist(dataset,bins=10,hatch=['*'])
When it comes to plt.show() I get the following error message:
File
"/usr/local/sci/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg/matplotlib/backends/backend_gtk.py",
line 435, in expose_event
self._render_figure(self._pixmap, w, h)
File
"/usr/local/sci/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg/matplotlib/backends/backend_gtkagg.py",
line 84, in _render_figure
FigureCanvasAgg.draw(self)
File
"/usr/local/sci/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg/matplotlib/backends/backend_agg.py",
line 451, in draw
self.figure.draw(self.renderer)
File
"/usr/local/sci/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg/matplotlib/artist.py",
line 55, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File
"/usr/local/sci/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg/matplotlib/figure.py",
line 1034, in draw
func(*args)
File
"/usr/local/sci/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg/matplotlib/artist.py",
line 55, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File
"/usr/local/sci/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg/matplotlib/axes.py",
line 2086, in draw
a.draw(renderer)
File
"/usr/local/sci/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg/matplotlib/artist.py",
line 55, in draw_wrapper
draw(artist, renderer, *args, **kwargs)
File
"/usr/local/sci/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x86_64.egg/matplotlib/patches.py",
line 429, in draw
renderer.draw_path(gc, tpath, affine, rgbFace)
File
"/usr/local/sci/lib/python2.7/site-packages/matplotlib-1.3.1-py2.7-linux-x
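The traceback is cut off above, but the most likely culprit is the list: matplotlib's hatch keyword takes a string of hatch characters (e.g. '*', '/', 'x'), not a list, and the list form tends to fail only at draw time, exactly as seen here. A sketch (the dataset is made up):

```python
import matplotlib
matplotlib.use("Agg")  # off-screen backend so this runs without a display
import matplotlib.pyplot as plt

dataset = [1, 2, 2, 3, 3, 3, 4]
# hatch is a string, not ['*']
n, bins, patches = plt.hist(dataset, bins=10, hatch="*")
plt.gcf().canvas.draw()  # force a render; this is where the list form blew up
```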
Re: Matplotlib Colouring outline of histogram
On Friday, June 20, 2014 9:10:58 AM UTC+1, Jamie Mitchell wrote: > Hi folks, > > > > Instead of colouring the entire bar of a histogram i.e. filling it, I would > like to colour just the outline of the histogram. Does anyone know how to do > this? > > Version - Python2.7 > > > > Cheers, > > Jamie Great thanks again Jason. -- https://mail.python.org/mailman/listinfo/python-list
Contouring a 2D histogram
Hi all,
I have plotted a 2D histogram like so:
python2.7
import netCDF4
import iris
import iris.palette
import numpy as np
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
from matplotlib.colors import from_levels_and_colors
fig=plt.figure()
nbins=20
nice_cmap=plt.get_cmap('brewer_RdYlBu_11')
colors=nice_cmap([5,6,7,8,9,10])
levels=[1,2,3,4,5]
cmap, norm=from_levels_and_colors(levels, colors, extend='both')
H, xedges, yedges=np.histogram2d(te_Q0_con_sw,hs_Q0_con_sw,bins=nbins)
Hmasked=np.ma.masked_where(H==0,H)
plt.pcolormesh(xedges,yedges,Hmasked,cmap=cmap,norm=norm,label='Q0 control')
# From this I get a 'scattered' 2D histogram.
Does anyone know how I can contour that scatter?
Thanks,
Jamie
--
https://mail.python.org/mailman/listinfo/python-list
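One way to turn the binned counts above into contours (rather than pcolormesh's filled cells) is to evaluate np.histogram2d and hand the counts to plt.contour/contourf on the bin centres. A sketch with random stand-in data, since te_Q0_con_sw and hs_Q0_con_sw aren't shown:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # off-screen backend
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = rng.normal(size=1000)  # stand-in for te_Q0_con_sw
y = rng.normal(size=1000)  # stand-in for hs_Q0_con_sw

H, xedges, yedges = np.histogram2d(x, y, bins=20)
xcenters = 0.5 * (xedges[:-1] + xedges[1:])
ycenters = 0.5 * (yedges[:-1] + yedges[1:])

# H[i, j] counts (x-bin i, y-bin j); contour expects rows to vary with y,
# hence the transpose
cs = plt.contourf(xcenters, ycenters, H.T, levels=10)
plt.colorbar(cs)
```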
Matplotlib Contour Plots
Hello all, I want to contour a scatter plot but I don't know how. Can anyone help me out? Cheers, Jamie -- https://mail.python.org/mailman/listinfo/python-list
Re: Matplotlib Contour Plots
On Thursday, August 14, 2014 5:53:09 PM UTC+1, Steven D'Aprano wrote: > Jamie Mitchell wrote: > > > > > Hello all, > > > > > > I want to contour a scatter plot but I don't know how. > > > > > > Can anyone help me out? > > > > Certainly. Which way did you come in? > > > > :-) > > > > Sorry, I couldn't resist. > > > > It took me literally 20 seconds to find this by googling for "matplotlib > > contour plot", and it only took that long because I misspelled "contour" > > the first time. > > > > http://matplotlib.org/examples/pylab_examples/contour_demo.html > > > > > > Does this help? If not, please explain what experience you have with > > matplotlib, what you have tried, what you expected it to do, and what it > > did instead. > > > > > > > > -- > > Steven Yep I've seen that thanks but I can't get it to work. I don't have much experience with matplotlib or programming in general. I just want to get a contour plot of two numpy arrays. When I call plt.contour on my data I get "input must be a 2D array" An example of one of my arrays: array([ 2.0886, 2.29400015, 2.00400019, 1.8811, 2.0480001 , 2.16800022, 2.0480001 , 1.8829, 1.9586, 2.0029, 2.02800012, 1.8124, 1.9505, 1.96200013, 1.95200014, 1.99800014, 2.0717, 1.8829, 1.9849, 2.1346, 2.1148, 1.8945, 2.0519, 2.0198, 2.03400016, 2.16600013, 2.0099, 1.86200011, 2.19800019, 2.0128], dtype=float32) How do I get the above array in to the right format for a contour plot? Thanks, Jamie -- https://mail.python.org/mailman/listinfo/python-list
Re: Matplotlib Contour Plots
On Friday, August 15, 2014 2:23:25 PM UTC+1, Steven D'Aprano wrote: > Jamie Mitchell wrote: > > > > [...] > > > I just want to get a contour plot of two numpy arrays. > > > When I call plt.contour on my data I get "input must be a 2D array" > > > > You are providing a 1D array, or possibly a 3D array. So the question you > > really want to ask is not "How do I do contour plots" but "how do I make a > > 2D array?" > > > > > > > An example of one of my arrays: > > > > > > array([ 2.0886, 2.29400015, 2.00400019, 1.8811, 2.0480001 , > > > 2.16800022, 2.0480001 , 1.8829, 1.9586, 2.0029, > > > 2.02800012, 1.8124, 1.9505, 1.96200013, 1.95200014, > > > 1.99800014, 2.0717, 1.8829, 1.9849, 2.1346, > > > 2.1148, 1.8945, 2.0519, 2.0198, 2.03400016, > > > 2.16600013, 2.0099, 1.86200011, 2.19800019, 2.0128], > > > dtype=float32) > > > > > > How do I get the above array in to the right format for a contour plot? > > > > Here's an example of making a 2D array: > > > > py> import numpy > > py> a = numpy.array([1.2, 2.5, 3.7, 4.8]) # One dimensional array > > py> a > > array([ 1.2, 2.5, 3.7, 4.8]) > > py> b = numpy.array([ [1.2, 2.5, 3.7, 4.8], > > ... [9.5, 8.1, 7.0, 6.2] ]) # Two dimensional array > > py> b > > array([[ 1.2, 2.5, 3.7, 4.8], > >[ 9.5, 8.1, 7. , 6.2]]) > > > > One dimensional arrays are made from a single list of numbers: [...] > > > > Two dimensional arrays are made from a list of lists: [ [...], [...] ] > > > > > > > > -- > > Steven Thank you Steven. 
I created the 2D array which read as: array([[[ 2.0886, 2.29400015, 2.00400019, 1.8811, 2.0480001 , 2.16800022, 2.0480001 , 1.8829, 1.9586, 2.0029, 2.02800012, 1.8124, 1.9505, 1.96200013, 1.95200014, 1.99800014, 2.0717, 1.8829, 1.9849, 2.1346, 2.1148, 1.8945, 2.0519, 2.0198, 2.03400016, 2.16600013, 2.0099, 1.86200011, 2.19800019, 2.0128]], [[ 8.515 , 8.8811, 8.5519, 7.9481, 8.6066, 8.515 , 8.8019, 8.1311, 8.6858, 8.7254, 8.4754, 8.25 , 8.4085, 8.4358, 8.3839, 8.3566, 8.6339, 8.5123, 8.3689, 8.6981, 8.5273, 8.1339, 8.3689, 8.4208, 8.5547, 8.7254, 9.0915, 8.1858, 8.7623, 8.5396]]], dtype=float32) Unfortunately when I called plt.contour on this, it said again "Input must be a 2D array". Is there something I have missed? Thanks, Jamie -- https://mail.python.org/mailman/listinfo/python-list
Re: Matplotlib Contour Plots
On Friday, August 15, 2014 4:13:26 PM UTC+1, Steven D'Aprano wrote: > Jamie Mitchell wrote: > > > > > I created the 2D array which read as: > > > > That's not a 2D array. > > > > When the amount of data you have is too big to clearly see what it > > happening, replace it with something smaller. Instead of 30 items per > > sub-array, try it with 5 items per sub-array. Instead of eight decimal > > places, try it with single-digit integers. Anything to make it small enough > > to see clearly. > > > > When I do that with your data, instead of this: > > > > > array([[[ 2.0886, 2.29400015, 2.00400019, 1.8811, 2.0480001 , > > > 2.16800022, 2.0480001 , 1.8829, 1.9586, 2.0029, > > > 2.02800012, 1.8124, 1.9505, 1.96200013, 1.95200014, > > > 1.99800014, 2.0717, 1.8829, 1.9849, 2.1346, > > > 2.1148, 1.8945, 2.0519, 2.0198, 2.03400016, > > > 2.16600013, 2.0099, 1.86200011, 2.19800019, > > > 2.0128]], > > > > > >[[ 8.515 , 8.8811, 8.5519, 7.9481, 8.6066, > > > 8.515 , 8.8019, 8.1311, 8.6858, 8.7254, > > > 8.4754, 8.25 , 8.4085, 8.4358, 8.3839, > > > 8.3566, 8.6339, 8.5123, 8.3689, 8.6981, > > > 8.5273, 8.1339, 8.3689, 8.4208, 8.5547, > > > 8.7254, 9.0915, 8.1858, 8.7623, > > > 8.5396]]], dtype=float32) > > > > > > I get this: > > > > > > array([[[ 2, 2, 2, 1, 2]], > >[[ 8, 8, 8, 7, 8]]], dtype=float32) > > > > > > which is much easier to work with. See the difference between that smaller > > example, and my earlier explanation of the difference between a 1D and 2D > > array? > > > > One dimensional arrays are made from a single list of numbers: [...] > > Two dimensional arrays are made from a list of lists: [ [...], [...] ] > > > > *Three* dimensional arrays are made from a list of lists of lists: > > [ [ [...], [...] ] ] > > > > *Four* dimensional arrays are made from a list of lists of lists of lists: > > [ [ [ [...], [...] ] ] ] > > > > and so on. You have a 3D array, with dimensions 2 x 1 x 30. 
> > > > You can check the dimensions by storing the array into a variable like this: > > > > py> a = numpy.array([[[ 2, 2, 2, 1, 2]], [[ 8, 8, 8, 7, 8]]]) > > py> a.shape > > (2, 1, 5) > > > > > > > > -- > > Steven Thanks for your suggestions Steven. Unfortunately I still can't make the plot I'm looking for. Do you mind if I go back to the start? Sorry I'm probably not explaining what I need very well. So I have two 1D arrays: 1st array - ([8, 8.8,8.5,7.9,8.6 ...], dtype=float32) It has a shape (150,) 2nd array - ([2, 2.2, 2.5, 2.3, ...],dtype=float32) It has a shape (150,) What I want to do is create a 2D array which merges the 1st and 2nd array so that I would have: ([[8, 8.8,8.5,7.9,8.6 ...],[2,2,2,2,5,2.3, ...]], dtype=float32) that would have a shape (150,150) In this form I could then plot a 2D contour. Thanks for your patience. Jamie -- https://mail.python.org/mailman/listinfo/python-list
Re: Matplotlib Contour Plots
I forgot to mention that when I try: a=np.array([[hs_con_sw],[te_con_sw]]) I get a 3D shape for some reason - (2,1,150) which is not what I'm after. Thanks, Jamie -- https://mail.python.org/mailman/listinfo/python-list
Re: Matplotlib Contour Plots
You were right Christian I wanted a shape (2,150). Thank you Rustom and Steven your suggestion has worked. Unfortunately the data doesn't plot as I imagined. What I would like is: X-axis - hs_con_sw Y-axis - te_con_sw Z-axis - Frequency What I would like is for the Z-axis to contour the frequency or amount of times that the X-axis data and Y-axis data meet at a particular point or bin. Does anyone know what function or graph could best show this? Thanks for all your help, Jamie -- https://mail.python.org/mailman/listinfo/python-list
Re: Matplotlib Contour Plots
On Tuesday, August 19, 2014 10:21:48 PM UTC+1, [email protected] wrote: > Jamie Mitchell writes: > > > > > You were right Christian I wanted a shape (2,150). > > > > > > Thank you Rustom and Steven your suggestion has worked. > > > > > > Unfortunately the data doesn't plot as I imagined. > > > > > > What I would like is: > > > > > > X-axis - hs_con_sw > > > Y-axis - te_con_sw > > > Z-axis - Frequency > > > > > > What I would like is for the Z-axis to contour the frequency or > > > amount of times that the X-axis data and Y-axis data meet at a > > > particular point or bin. > > > > > > Does anyone know what function or graph could best show this? > > > > in my understanding, you have 3 arrays of data that describe 3D data > > points, and you want to draw a 2D contour plot... > > > > in this case you have to interpolate the z-values on a regular grid, > > that's very easy if you already know what to do ;-) > > > > here I assume that data is in a .csv file > > > > % cat a.csv > > 0 ≤ x ≤ 10, 0 ≤ y ≤ 10, z = cos(sqrt((x-5)**2_(y-5)**2)) > > 1.922065,5.827944,-0.998953 > > 7.582322,0.559370,0.411861 > > 5.001753,3.279957,-0.148694 > > ... 
> > > > of course my z's are different from yours, but this shouldn't be a > > real problem --- and here it is my *tested* solution (tested on python > > 2.7, that is), please feel free to adapt to your needs > > > > hth, ciao > >g > > > > % cat contour.py > > from numpy import loadtxt, linspace > > from matplotlib.mlab import griddata > > import matplotlib.pyplot as pl > > > > # open 'a.csv', specify the delimiter, specify how many header rows, > > # slurp the data > > temp_array = loadtxt(open('a.csv'),delimiter=',',skiprows=1) > > > > # the shape of temp_array is (N,3), we want its transpose > > temp_array = temp_array.transpose() > > > > # now the shape is (3,N) and we can do "unpack and assignment: > > x, y, z = temp_array > > > > # now the tricky part, > > > > # 1: create two arrays with 101 (arbitrary number) equispaced values > > # between 0 and 10 --- that is the ranges of data x and data y > > xi = linspace(0,10,101) > > yi = linspace(0,10,101) > > > > # 2: create, by interpolation, the 2D array that contourf so eagerly > > # awaited! > > print griddata.__doc__ > > zi = griddata(x,y,z, xi,yi) > > > > # eventually, lets plot the stuff... > > # see http://matplotlib.org/examples/pylab_examples/griddata_demo.html > > # for further details and ideas > > > > pl.contour (xi,yi,zi,11,linewidths=1,colors='black') > > pl.contourf(xi,yi,zi); pl.colorbar() > > # optional > > pl.gca().set_aspect('equal', 'box') > > pl.show() > > % python contour.py This is great and works very well - thank you!! -- https://mail.python.org/mailman/listinfo/python-list
Trouble finding references that are keeping objects alive
Hi, I have a python gtk app that allows users to have one project open at a time. I have recently discovered that projects are not being freed when they are closed - the refcount is not hitting zero. I have used gc.get_referrers() to track down a few culprits, but I have now found that some of my dialog boxes are staying alive after being closed too (and keeping a reference to the project). e.g.

gc.collect()
print sys.getrefcount(self.data.project)
# this function presents the dialog and loads the data
idata = csvimport.ask_and_load(self.app.root, self.data.project)
gc.collect()
print sys.getrefcount(self.data.project)

prints out:

39
40

(In this example I have cancelled the dialog and idata is None.) I have tracked down an offending tuple that has a reference to my dialog, which is a value in a large dictionary with integer keys that look like ID's. The other values in the dictionary appear to be widget connections and other stuff. Anyway, gc.get_referrers() keeps giving me lists, dictionaries and frames, which is not helping me find what code created this reference. I can't think of any other way of tracking this down apart from hacking some debug code into the python source. Does anyone have any other ideas for finding the code that created my unwanted reference before I have to dust off my C skills? Thanks Tim -- http://mail.python.org/mailman/listinfo/python-list
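One trick that sometimes helps with the "lists, dictionaries and frames" problem is to take one more step outward: when gc.get_referrers() hands back a dict, ask who owns that dict, since an instance's __dict__ is identical to the dict object itself. A rough sketch, with a hypothetical leaked object standing in for the project:

```python
import gc

def describe_referrers(obj, max_refs=10):
    """Print a short description of each object holding a reference to obj."""
    for ref in gc.get_referrers(obj)[:max_refs]:
        if isinstance(ref, dict):
            # an instance __dict__ *is* the dict, so this recovers the
            # owning object when there is one
            owners = [o for o in gc.get_referrers(ref)
                      if hasattr(o, "__dict__") and o.__dict__ is ref]
            print("dict owned by:", owners or "<unknown>")
        else:
            print(type(ref).__name__, repr(ref)[:80])

class Leaky(object):
    pass

leaked = Leaky()
holder = {"project": leaked}  # hypothetical lingering reference
describe_referrers(leaked)
```

For GTK specifically, the integer-keyed dictionary of "widget connections" described above sounds like signal-connection bookkeeping, so disconnecting handlers when the dialog closes is also worth trying.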
Re: Trouble finding references that are keeping objects alive
More info: The project has cyclic references to the objects in the projects, but this should be handled by gc.collect(). Here is my 'project still alive' test:

# store a weakref for debugging
p = weakref.ref(self.data.project)
self.data.setProject(None, None)
gc.collect()  # whole project is cyclic
p = p()
if p is not None:
    print 'Project still exists!!'

Cheers Tim -- http://mail.python.org/mailman/listinfo/python-list
Re: AttributeError: 'Attributes' object has no attribute 'saveFile'
Hi Sounds like you've got a wizard-type interface thing happening. I haven't used wxGlade but I have done similar things in GTK several times. Try putting all the windows in a notebook widget with hidden tabs. Put the 'Next Page' button and the filename outside the notebook. This makes the filename always available and the 'Next Page' button would just switch pages in the notebook widget. Hope this is helpful Cheers Tim -- http://mail.python.org/mailman/listinfo/python-list
Re: Video: Professor of Physics Phd at Cal Tech says: 911 Inside Job
In article <[EMAIL PROTECTED]>,
War Office <[EMAIL PROTECTED]> wrote:
> On 28 abr, 14:15, Eric Gisse <[EMAIL PROTECTED]> wrote:
> > On Apr 24, 6:13 pm, [EMAIL PROTECTED] wrote:
[snip]
> > I love how folks like you ask for intellectual honesty when every
> > effort is made to ignore evidence that doesn't agree with your
> > presupposed findings.
>
> Which evidence would that be?
***{I'm not a fan of the Bush administration, and would not put it past
them to carry out an event such as 911, to create an excuse to jettison
the Constitution and Bill of Rights. What is certain in any case is
that, in fact, the Bush administration has used the events of 911 as an
excuse to toss out the Constitution and Bill of Rights. There are,
however, at least three possible scenarios regarding 911 itself:
(1) The plane crashes were planned and executed by terrorists. The
towers fell because of the impacts. Building 7 fell because of the
impact of debris from the north tower.
(2) The plane crashes were planned and executed by the Bush
administration. The towers fell because of the impacts. Building 7 fell
because of the impact of debris from the north tower.
(3) The plane crashes were planned and executed by the Bush
administration. The towers fell because of the impacts, plus the effects
of pre-planted demolition charges. Building 7 fell because of the impact
of debris from the north tower, plus the effects of pre-planted
explosive charges.
I analyzed (3), above, in great detail a month or so back, in a
sci.physics thread entitled "The amazing denial of what "conspiracy
kooks" really means" If you are really interested in a reasoned
response to those arguments, you can probably still find that thread on
Google.
My conclusion at the time was that possibility (3), above, fails because
pre-planted explosives are not needed to explain why the towers fell, or
why building 7 fell. Possibilities (1) and (2), therefore, are all that
remains.
This post is for informational purposes only, and is not to be taken as
an indication that I am interesting in slogging my way through all this
stuff again. Once is more than enough, and so I am killfiling this
thread after making this post.
--Mitchell Jones}***
*
If I seem to be ignoring you, consider the possibility
that you are in my killfile. --MJ
--
http://mail.python.org/mailman/listinfo/python-list
python scalability
Hi All, I work on a desktop application that has been developed using python and GTK (see www.leapfrog3d.com). We have around 150k lines of python code (and 200k+ lines of C). We also have a new project manager with a C# background who has deep concerns about the scalability of python as our code base continues to grow and we are looking at introducing more products. I am looking for examples of other people like us (who write desktop apps in python) with code bases of a similar size who I can point to (and even better talk to) to help convince him that python is scalable to 300k+ lines of code and beyond. I have looked at the python success stories page and haven't come up with anyone quite like us. One of my project manager's questions is: "Are we the only company in the world with this kind and size of project?" I want to say no, but am having trouble convincing myself, let alone him. If you are involved in this kind of thing please get in touch with me. Thanks, Tim -- http://mail.python.org/mailman/listinfo/python-list
Re: python scalability
Thanks for all the replies - they have all been helpful. On reflection I think our problems are probably design and people related. Cheers, Tim Michele Simionato wrote: On Jul 10, 6:32 am, Tim Mitchell <[EMAIL PROTECTED]> wrote: Hi All, I work on a desktop application that has been developed using python and GTK (seewww.leapfrog3d.com). We have around 150k lines of python code (and 200k+ lines of C). We have bigger numbers than yours here (although not for a desktop application) and of course we have the problems of a large size application, but they have nothing to do with Python. The real problem are sociological, not language-related. Essentially, if a project takes 10+ years and 10+ people, with most of the people new, you have an issue, but this is independent from the language. Python is helping us at least because it is readable and the situation would be probably be worse with another language. But as I said the software development practices used are more important than the language in this context. -- http://mail.python.org/mailman/listinfo/python-list -- http://mail.python.org/mailman/listinfo/python-list
how do I know if I'm using a debug build of python
Hi, A quick question: Is there any way for a python script to know if it's being executed by a debug build of python (python_d.exe) instead of python? Thanks Tim -- http://mail.python.org/mailman/listinfo/python-list
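A debug build of CPython exposes sys.gettotalrefcount(), which a release build does not, so its presence is the usual test:

```python
import sys

def is_debug_build():
    # sys.gettotalrefcount() only exists when Python is compiled with Py_DEBUG
    return hasattr(sys, "gettotalrefcount")

print(is_debug_build())
```

On Windows this lines up with running under python_d.exe rather than python.exe.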
tkFileDialog question
Hi, This is my first attempt to write a script with any kind of gui. All I need the script to do is ask the user for a directory and then do stuff with the files in that directory. I used tkFileDialog.askdirectory(). It works great but it pops up an empty tk window. Is there any way to prevent the empty tk window from popping up? Here's the code: import tkFileDialog answer = tkFileDialog.askdirectory() if answer is not '': #do stuff Thanks! Matt -- http://mail.python.org/mailman/listinfo/python-list
RE: tkFileDialog question
--- The information contained in this electronic message and any attached document(s) is intended only for the personal and confidential use of the designated recipients named above. This message may be confidential. If the reader of this message is not the intended recipient, you are hereby notified that you have received this document in error, and that any review, dissemination, distribution, or copying of this message is strictly prohibited. If you have received this communication in error, please notify sender immediately by telephone (603) 262-6300 or by electronic mail immediately. Thank you. -Original Message- From: [email protected] [mailto:[email protected]] On Behalf Of Matt Mitchell Sent: Friday, November 13, 2009 9:33 AM To: [email protected] Subject: tkFileDialog question Hi, This is my first attempt to write a script with any kind of gui. All I need the script to do is ask the user for a directory and then do stuff with the files in that directory. I used tkFileDialog.askdirectory(). It works great but it pops up an empty tk window. Is there any way to prevent the empty tk window from popping up? Here's the code: import tkFileDialog answer = tkFileDialog.askdirectory() if answer is not '': #do stuff Thanks! Matt -- http://mail.python.org/mailman/listinfo/python-list Hi, After a few more hours of googling I answered my own question: import Tkinter, tkFileDialog root = Tkinter.Tk() root.withdraw() answer = tkFileDialog.askdirectory() if answer is not '': #do stuff Thanks!! -- http://mail.python.org/mailman/listinfo/python-list
RE: tkFileDialog question
-Original Message- From: [email protected] [mailto:[email protected]] On Behalf Of r Sent: Monday, November 16, 2009 12:16 AM To: [email protected] Subject: Re: tkFileDialog question Matt, There is also a nice thing you need to know about Python if you already do not know. That is the fact that all empty collections bool to False. This makes Truth testing easier. >>> bool([]) False >>> bool('') False >>> bool({}) False >>> bool([1]) True >>> bool([[]]) True >>> bool(' ') True any empty collection, string, or 0 always bools to False. -- http://mail.python.org/mailman/listinfo/python-list Thank you both for all the help. Your suggestions have helped clean up a bunch of my code. Thanks! Matt -- http://mail.python.org/mailman/listinfo/python-list
RE: XML root node attributes
-Original Message- From: [email protected] [mailto:[email protected]] On Behalf Of Slafs Sent: Tuesday, November 17, 2009 9:20 AM To: [email protected] Subject: XML root node attributes Hi I'm little confused about adding attributes to the root node when creating an XML document. Can I do this using minidom or something else. I can't find anything that would fit my needs. i would like to have something like this: Please help. Regards. -- http://mail.python.org/mailman/listinfo/python-list Hi, I'm sure someone will point out a better way to do it but yes, you can do it with minidom. from xml.dom.minidom import Document doc = Document() root = doc.createElement('root') root.setAttribute('a', 'v') root.setAttribute('b', 'v2') root.setAttribute('c', '3') doc.appendChild(root) d = doc.createElement('d') root.appendChild(d) print doc.toprettyxml() -- http://mail.python.org/mailman/listinfo/python-list
RE: Gray Hat Python: Python Programming for Hackers Soft copy
You mean like: http://nostarch.com/ghpython.htm From: [email protected] [mailto:[email protected]] On Behalf Of Elf Scripter Sent: Friday, November 20, 2009 3:31 PM To: [email protected] Subject: Gray Hat Python: Python Programming for Hackers Soft copy Hi i`m looking for a place to get a soft copy of 'Gray Hat Python: Python Programming for Hackers' may be a pdf or chm format. Thank you -- http://mail.python.org/mailman/listinfo/python-list
Is there a better way to do this?
Hi,
I wrote a python script that uses pysvn to export projects from an svn
repo I have. The repo has hundreds of projects in it with a directory
structure that is pretty uniform however it's not exactly uniform
because of the capitalization. I.e.:
\root
\project English
\Stuff
\Stuff 2
\Project Spanish
\Stuff 3
\Stuff 4
My svn repo is case sensitive so if I try to get \root\project
Spanish\Stuff 3 I get an error. Fixing the capitalization is not an
option for me. My initial idea was to make a list of all the different
ways "project" has been capitalized in my repo and try each one. The
code looks like this:
import pysvn
def getstuff(stuffiwant, languageiwantitin):
    projects = ("project %s/", "Project %s/", "pRojects %s/")
    c = pysvn.Client()
    for p in projects:
        exportme = p % languageiwantitin
        exportme = "http://localhost/" + exportme + stuffiwant
        try:
            c.export(exportme, "C:\\temp\\")
            break
        except pysvn.ClientError:
            print "Not the right capitalization."
    # do the rest of the stuff I need to do.
This works, but to me it seems like there has to be a better way of
doing it. Any feedback or suggestions would be appreciated.
Thanks,
Matt
--
http://mail.python.org/mailman/listinfo/python-list
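An alternative to trial-and-error exports is to list the parent directory once and match the wanted name ignoring case. The matching helper below is plain Python; the pysvn call in the comment is an untested assumption about what Client.ls() returns, so treat it as a sketch:

```python
def find_case_insensitive(entries, wanted):
    """Return the real entry matching `wanted` ignoring case, or None."""
    wanted = wanted.lower()
    for name in entries:
        if name.lower() == wanted:
            return name
    return None

# With pysvn (assumption: Client.ls() yields dirents with a 'name' field):
#   import pysvn
#   dirents = pysvn.Client().ls("http://localhost/")
#   names = [d["name"].rsplit("/", 1)[-1] for d in dirents]
#   real = find_case_insensitive(names, "Project Spanish")

print(find_case_insensitive(["project English", "Project Spanish"],
                            "project spanish"))  # -> Project Spanish
```

This costs one listing per directory level instead of one failed export per guessed capitalization.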
RE: Call Signtool using python
I think you need to use the /p switch to pass signtool.exe a password when using the /f switch. Check out http://msdn.microsoft.com/en-us/library/8s9b9yaz%28VS.80%29.aspx for more info. -Original Message- From: [email protected] [mailto:[email protected]] On Behalf Of enda man Sent: Tuesday, March 02, 2010 6:34 AM To: [email protected] Subject: Call Signtool using python Hi, I want to call the Windows signtool to sign a binary from a python script. Here is my script: // os.chdir('./Install/activex/cab') subprocess.call(["signtool", "sign", "/v", "/f", "webph.pfx", "/t", "http://timestamp.verisign.com/scripts/timstamp.dll", "WebPh.exe" ]) // But I am getting this error: SignTool Error: The specified PFX password is not correct. Number of files successfully Signed: 0 Number of warnings: 0 Number of errors: 1 Finished building plugin installer scons: done building targets. This python script is called as part of a scons build, which is also python code. Anyone seen this before or can pass on any ideas. Tks, EM -- http://mail.python.org/mailman/listinfo/python-list
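Concretely, that means adding /p next to /f. A sketch that only builds the argument list (signtool itself is Windows-only, and the password value here is a placeholder):

```python
def signtool_args(pfx, password, timestamp_url, target):
    # /f selects the PFX file, /p supplies its password, /t adds a timestamp
    return ["signtool", "sign", "/v",
            "/f", pfx, "/p", password,
            "/t", timestamp_url, target]

cmd = signtool_args("webph.pfx", "PFX_PASSWORD_HERE",
                    "http://timestamp.verisign.com/scripts/timstamp.dll",
                    "WebPh.exe")
# then: subprocess.call(cmd)
```

Keeping the actual password out of the scons file (e.g. reading it from an environment variable) avoids checking it into version control.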
Standard Asynchronous Python
After seeing David Mertz's talk at PyCon 2012, "Coroutines, event loops, and the history of Python generators" [1], I got thinking again about Python's expressive power for asynchronous programming.

Generators, particularly with the addition of 'yield from' and 'return' in PEP 380 [2], allow us to write code that is executed "bit by bit" but still reads naturally. There are a number of frameworks that take advantage of this ability, but each is a little different -- enough so that there's negligible code re-use between these frameworks. I think that's a shame.

I proposed a design PEP a while back [3] with the intent of defining a standard way of writing asynchronous code, with the goal of allowing code re-use and bringing users of the frameworks closer together. Ideally, we could have libraries to implement network protocols, database wrappers, subprocess execution, and so on, that would work in any of the available asynchronous frameworks. My proposal met with near-silence, and I didn't pursue it. Instead, I did what any self-respecting hacker would do - I wrote up a framework, uthreads [4], that implemented my idea. This was initially a simple trampoline scheduler, but I eventually refactored it to run atop Twisted, since that's what I use. To my knowledge, it's never been used.

I'm considering re-drafting the PEP with the following changes:

* De-emphasize the thread emulation aspects, and focus on code-portability issues:
  * callbacks vs. "blocking" calls (e.g., when accepting incoming connections on a socket, how is my code invoked?)
  * consistent access to primitives, regardless of framework (e.g., where's the function I call to branch execution?)
  * nested asynchronous methods
* Account for PEP 380 (by making the StopIteration workarounds match PEP 380, and explicitly deprecating them after Python 3.3)
* Look forward to a world with software transactional memory [5] by matching that API where appropriate

As I get to work on the PEP, I'd like to hear any initial reactions to the idea.

Dustin

[1] https://us.pycon.org/2012/schedule/presentation/104/
[2] http://www.python.org/dev/peps/pep-0380
[3] http://code.google.com/p/uthreads/source/browse/trunk/microthreading-pep.txt
[4] http://code.google.com/p/uthreads/
[5] https://bitbucket.org/pypy/pypy/raw/stm-thread/pypy/doc/stm.rst
--
http://mail.python.org/mailman/listinfo/python-list
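The generator-based style the proposal describes can be sketched with a minimal trampoline (illustrative only; this is not the uthreads API): a recursive, CPU-bound task written as generators, driven bit by bit by a tiny scheduler, with 'return' delivering values through 'yield from' per PEP 380.

```python
def fib(n):
    # A recursive, CPU-bound microthread: each recursive call is
    # delegated with `yield from`, and `return` (PEP 380) hands the
    # value back to the delegating caller.
    if n < 2:
        return n
    a = yield from fib(n - 1)
    b = yield from fib(n - 2)
    return a + b

def run(task):
    # Drive a generator-based task to completion; the final value
    # arrives attached to StopIteration.
    try:
        while True:
            next(task)          # each `yield` would be a scheduling point
    except StopIteration as exc:
        return exc.value

result = run(fib(10))
print(result)  # -> 55
```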
Re: Standard Asynchronous Python
The responses have certainly highlighted some errors in emphasis in my approach. * My idea is to propose a design PEP. (Steven, Dennis) I'm not at *all* suggesting including uthreads in the standard library. It's a toy implementation I used to develop my ideas. I think of this as a much smaller idea in the same vein as the DBAPI (PEP 249): a common set of expectations that allows portability. * I'd like to set aside the issue of threads vs. event-driven programming. There are legitimate reasons to do both, and the healthy ecosystem of frameworks for the latter indicates at least some people are interested. My idea is to introduce a tiny bit of coherence across those frameworks. * (Bryan) The Fibonacci example is a simple example of, among other things, a CPU-bound, recursive task -- something that many async frameworks don't handle fairly right now. I will add some text to call that out explicitly. * Regarding generators vs. coroutines (Bryan), I use the terms generator and generator function in the PEP carefully, as that's what the syntactic and runtime concepts are called in Python. I will include a paragraph distinguishing the two. I will need to take up the details of the idea with the developers of the async frameworks themselves, and get some agreement before actually proposing the PEP. However, among this group I'm interested to know whether this is an appropriate use of a design PEP. That's why I posted my old and flawed PEP text, rather than re-drafting first. Thanks for the responses so far! Dustin -- http://mail.python.org/mailman/listinfo/python-list
Re: Standard Asynchronous Python
Thanks for the second round of responses. I think this gives me some focus - concentrate on the API, talk to the framework developers, and start redrafting the PEP sooner rather than later. Thanks! Dustin -- http://mail.python.org/mailman/listinfo/python-list
Re: odt2sphinx 0.2.3 released
On Wed, Sep 12, 2012 at 10:06 AM, wrote: > ߒߤߒߡߜߦߡ ß ß§ And that's why you shouldn't let your kids play with your iPad :) Dustin -- http://mail.python.org/mailman/listinfo/python-list
Re: Dictionaries again - where do I make a mistake?
Lad wrote:
> Sorting seems to be OK. The command
>     print key, val
> prints the proper values, but I can not create Newdict to be sorted
> properly. Where do I make a mistake? Thank you for help.

Dictionaries are unordered -- the order in which items come out is unspecified. It's based on the details of their internal storage mechanism (a hash table), and you can't control it at all. If you need your pairs in a certain order, you'll have to use a list of tuples.

Dustin
--
http://mail.python.org/mailman/listinfo/python-list
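For example, sorting the items explicitly produces the pairs in a predictable order (a small illustration; the dictionary contents are invented):

```python
d = {'banana': 3, 'apple': 1, 'cherry': 2}

# A sorted list of (key, value) tuples -- the "list of tuples" the
# reply suggests, ordered by key:
pairs = sorted(d.items())
print(pairs)  # -> [('apple', 1), ('banana', 3), ('cherry', 2)]

# To order by value instead, pass a key function:
by_value = sorted(d.items(), key=lambda kv: kv[1])
print(by_value)  # -> [('apple', 1), ('cherry', 2), ('banana', 3)]
```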
httplib problems -- bug, or am I missing something?
I'm building an interface to Amazon's S3, using httplib. It uses a single object for multiple transactions. What's happening is this:

    HTTP > PUT /unitest-temp-1161039691 HTTP/1.1
    HTTP > Date: Mon, 16 Oct 2006 23:01:32 GMT
    HTTP > Authorization: AWS <>:KiTWRuq/6aay0bI2J5DkE2TAWD0=
    HTTP > (end headers)
    HTTP < HTTP/1.1 200 OK
    HTTP < content-length: 0
    HTTP < x-amz-id-2: 40uQn0OCpTiFcX+LqjMuzG6NnufdUk/..
    HTTP < server: AmazonS3
    HTTP < x-amz-request-id: FF504E8FD1B86F8C
    HTTP < location: /unitest-temp-1161039691
    HTTP < date: Mon, 16 Oct 2006 23:01:33 GMT
    HTTPConnection.__state before response.read: Idle
    HTTPConnection.__response: closed? False length: 0
    reading response
    HTTPConnection.__state after response.read: Idle
    HTTPConnection.__response: closed? False length: 0

..later in the same connection..

    HTTPConnection.__state before putrequest: Idle
    HTTPConnection.__response: closed? False length: 0
    HTTP > DELETE /unitest-temp-1161039691 HTTP/1.1
    HTTP > Date: Mon, 16 Oct 2006 23:01:33 GMT
    HTTP > Authorization: AWS <>:a5OizuLNwwV7eBUhha0B6rEJ+CQ=
    HTTP > (end headers)
    HTTPConnection.__state before getresponse: Request-sent
    HTTPConnection.__response: closed? False length: 0
    File "/usr/lib64/python2.4/httplib.py", line 856, in getresponse
        raise ResponseNotReady()

If the first request does not precede it, the second request is fine. To avoid excessive memory use, I'm calling request.read(16384) repeatedly, instead of just calling request.read(). This seems to be key to the problem -- if I omit the 'amt' argument to read(), then the last line of the first request reads

    HTTPConnection.__response: closed? True length: 0

and the later call to getresponse() doesn't raise ResponseNotReady. Looking at the source for httplib.HTTPResponse.read, self.close() gets called in the latter (working) case, but not in the former (non-working).
It would seem sensible to add 'if self.length == 0: self.close()' to the end of that function (and, in fact, this change makes the whole thing work), but this comment makes me hesitant: # we do not use _safe_read() here because this may be a .will_close # connection, and the user is reading more bytes than will be provided # (for example, reading in 1k chunks) What's going on here? Is this a bug I should report, or am I missing something about how one should use httplib? Thanks for any assistance. Dustin -- http://mail.python.org/mailman/listinfo/python-list
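For what it's worth, the workaround pattern can be sketched independently of httplib: keep calling read(amt) until it returns an empty result, since only that final empty read lets the response object discover it has hit EOF and mark itself closed. Here io.BytesIO stands in for the response object.

```python
import io

def read_in_chunks(resp, chunk_size=16384):
    # Read a response-like object in fixed-size chunks, continuing
    # until read() returns b'' so the object sees EOF.
    chunks = []
    while True:
        chunk = resp.read(chunk_size)
        if not chunk:          # EOF reached: safe to reuse the connection
            break
        chunks.append(chunk)
    return b''.join(chunks)

body = read_in_chunks(io.BytesIO(b'x' * 40000))
print(len(body))  # -> 40000
```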
Re: Calling functions
Tommy Grav wrote:
> I have a small program that goes something like this
>
>     def funcA() : pass
>     def funcB() : pass
>     def funcC() : pass
>
>     def determine(f):
>         t = f()
>         return t
>
> What I would like to do is be able to
>
>     n = determine(funcA)
>     m = determine(funcB)
>
> But I can't really figure out how to do this (I think it is possible :)

Except for the spaces after the def's at the top (are those legal?), it should work as written. determine(funcA) results in 'f' being bound to 'funcA'; then 't = f()' results in 'funcA' being called, and its result being bound to 't'; 'determine' returns that result, and it's bound to 'n'. Is that not what you wanted?

Dustin
--
http://mail.python.org/mailman/listinfo/python-list
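A runnable version of the pattern, with function bodies that return something visible instead of pass:

```python
def funcA():
    return 'A'

def funcB():
    return 'B'

def determine(f):
    # f is bound to whatever function object was passed in;
    # f() calls that function.
    t = f()
    return t

n = determine(funcA)
m = determine(funcB)
print(n, m)  # -> A B
```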
Redux: Allowing 'return obj' in generators
This question was first brought up in October of 2005[1], and was included in
the "Unresolved Issues" section of my microthreading PEP, which I have quietly
withdrawn from consideration due to lack of community interest.
PEP 255 says:

    Q. Then why not allow an expression on "return" too?

    A. Perhaps we will someday. In Icon, "return expr" means both "I'm
       done", and "but I have one final useful value to return too, and
       this is it". At the start, and in the absence of compelling uses
       for "return expr", it's simply cleaner to use "yield" exclusively
       for delivering values.
As those of you who looked at my PEP or are familiar with some of the
implementations will realize, microthreaded functions are syntactically
generator functions, but semantically act as regular functions. There
is a well-defined meaning to 'return x' in such a function: take the
value of x, and use it in the expression where this function was called.
For example:
    def read_integer(sock):
        txt = yield sock.readline().strip()
        try:
            return int(txt)
        except:
            raise AppProtocolError("Expected an integer")
The implementation of the syntax would be similar to that of an
expressionless 'return', but supplying the expression_list to the
StopIteration's 'args' -- this is described quite well in Piet Delport's
post[2].
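Under PEP 380's semantics (later adopted in Python 3.3), this is exactly how it works: 'return expr' in a generator raises StopIteration carrying the value, and 'yield from' retrieves it. A small sketch of those semantics, with names invented for illustration:

```python
def parse_int(txt):
    yield                  # a stand-in for suspending on I/O
    return int(txt)        # the value travels inside StopIteration

def caller(txt):
    value = yield from parse_int(txt)   # receives the returned value
    return value * 2

def run(gen):
    # Drive the generator to completion; the final return value
    # arrives as StopIteration.value.
    try:
        while True:
            next(gen)
    except StopIteration as exc:
        return exc.value

result = run(caller('21'))
print(result)  # -> 42
```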
Given this use-case (and note that I chose an example that will exercise
the interactions of try/except blocks with the StopIteration behavior),
is it time to revisit this issue? BDFL said:
I urge you to leave well enough alone. There's room for extensions
after people have built real systems with the raw material provided by
PEP 342 and 343.[3]
and Nick Coghlan said (to applause from GvR):
I'm starting to think we want to let PEP 342 bake for at least one
release cycle before deciding what (if any) additional behaviour
should be added to generators.[4]
I think we have a decent number of implementations in the wild now
(I have learned of Christopher Stawarz's 'multitask'[5] since last
posting my PEP). With 2.5.1 out, might I suggest this is worth
reconsidering for the 2.6 release?
Dustin
[1]
http://www.python.org/dev/summary/2005-10-01_2005-10-15/#allowing-return-obj-in-generators
[2] http://mail.python.org/pipermail/python-dev/2005-October/056957.html
[3] http://mail.python.org/pipermail/python-dev/2005-October/057119.html
[4] http://mail.python.org/pipermail/python-dev/2005-October/057133.html
[5] http://o2s.csail.mit.edu/o2s-wiki/multitask
--
http://mail.python.org/mailman/listinfo/python-list
sqlite3, qmarks, and NULL values
Suppose I have a simple query in sqlite3 in a function:
    def lookupxy(x, y):
        conn.execute("SELECT * FROM table WHERE COL1 = ? AND COL2 = ?",
                     (x, y))
However, COL2 might be NULL. I can't figure out a value for y that would
retrieve rows for which COL2 is NULL. It seems to me that I have to perform an
awkward test to determine whether to execute a query with one question mark or
two.
    def lookupxy(x, y):
        if y:
            conn.execute("SELECT * FROM table WHERE COL1 = ? AND COL2 = ?",
                         (x, y))
        else:
            conn.execute("SELECT * FROM table WHERE COL1 = ? AND COL2 IS NULL",
                         (x,))
The more question marks involved the more complicated this would get,
especially if question marks in the middle of several would sometimes need to
be NULL. I hope I'm missing something and that someone can tell me what it is.
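One SQLite-specific possibility worth trying: SQLite's IS operator compares NULLs as equal (and otherwise behaves like =), and it accepts a bound parameter, so a single query with "COL2 IS ?" covers both cases. A self-contained sketch with throwaway table and column names:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (col1 INTEGER, col2 INTEGER)')
conn.executemany('INSERT INTO t VALUES (?, ?)',
                 [(1, 10), (1, None), (2, None)])

def lookupxy(x, y):
    # IS treats NULL as a comparable value in SQLite, so one query
    # handles both the NULL and non-NULL cases for y:
    cur = conn.execute(
        'SELECT col1, col2 FROM t WHERE col1 = ? AND col2 IS ?', (x, y))
    return cur.fetchall()

print(lookupxy(1, 10))    # -> [(1, 10)]
print(lookupxy(1, None))  # -> [(1, None)]
```

Note that IS (rather than =) is what makes the NULL comparison succeed; this is SQLite behavior, not standard SQL on every database.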
--
http://mail.python.org/mailman/listinfo/python-list
One function calling another defined in the same file being exec'd
[Python 3.1]
I thought I thoroughly understood eval, exec, globals, and locals, but I
encountered something bewildering today. I have some short files I
want to
exec. (Users of my application write them, and the application gives
them a
command that opens a file dialog box and execs the chosen file. Users
are
expected to be able to write simple Python scripts, including function
definitions. Neither security nor errors are relevant for the purposes
of this
discussion, though I do deal with them in my actual code.)
Here is a short piece of code to exec a file and report its result.
(The file
being exec'd must assign 'result'.)
    def dofile(filename):
        ldict = {'result': None}
        with open(filename) as file:
            exec(file.read(), globals(), ldict)
        print('Result for {}: {}'.format(filename, ldict['result']))
First I call dofile() on a file containing the following:
    def fn(arg):
        return sum(range(arg))

    result = fn(5)
The results are as expected.
Next I call dofile() on a slightly more complex file, in which one
function
calls another function defined earlier in the same file.
    def fn1(val):
        return sum(range(val))

    def fn2(arg):
        return fn1(arg)

    result = fn2(5)
This produces a surprise:
NameError: global name 'fn1' is not defined
[1] How is it that fn2 can be called from the top-level of the script
but fn1
cannot be called from fn2?
[2] Is this correct behavior or is there something wrong with Python
here?
[3] How should I write a file to be exec'd that defines several
functions that
call each other, as in the trivial fn1-fn2 example above?
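One answer to [3], sketched runnably: pass the same dictionary as both the globals and locals arguments to exec, so the def statements and the function bodies see a single namespace, the way a module does.

```python
# Pass one dictionary as both globals and locals, emulating a module's
# namespace, so fn2 can find fn1 as a "global":
code = '''
def fn1(val):
    return sum(range(val))

def fn2(arg):
    return fn1(arg)

result = fn2(5)
'''

ns = {'result': None}
exec(code, ns, ns)
print(ns['result'])  # -> 10
```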
--
http://mail.python.org/mailman/listinfo/python-list
Re: One function calling another defined in the same file being exec'd
I forgot to offer one answer for question [3] in what I just posted: I can define all the secondary functions inside one main one and just call the main one. That provides a separate local scope within the main function, with the secondary functions defined inside it when (each time) the main function is called. Not too bad, but will freak out my users and it doesn't seem as if it should be necessary to resort to this. -- http://mail.python.org/mailman/listinfo/python-list
Re: One function calling another defined in the same file being exec'd
On Jan 7, 2010, at 10:45 PM, Steven D'Aprano > wrote an extensive answer to my questions about one function calling another in the same file being exec'd. His suggestion about printing out locals() and globals() in the various possible places provided the clues to explain what was going on. I would like to summarize what I have learned from this, because although I have known all the relevant pieces for many years I never put them together in a way that explains the odd behavior I observed. Statements that bind new names -- assignment, def, and class -- do so in the local scope. While exec'ing a file the local scope is determined by the arguments passed to exec; in my case, I passed an explicit local scope. It was particularly obtuse of me not to notice the effects of this because I was intentionally using it so that an assignment to 'result' in the exec'd script would enable the exec'ing code to retrieve the value of result. However, although the purity of Python with respect to the binding actions of def and class statements is wonderful and powerful, it is very difficult cognitively to view a def on a page and think "aha! that's just like an assignment of a newly created function to a name", even though that is precisely the documented behavior of def. So mentally I was making an incorrect distinction between what was getting bound locally and what was getting bound globally in the exec'd script. Moreover, the normal behavior of imported code, in which any function in the module can refer to any other function in the module, seduced me into this inappropriate distinction. To my eye I was just defining and using function definitions the way they are in modules. There is a key difference between module import and exec: as Steven pointed out, inside a module locals() is globals(). 
On further reflection, I will add that what appears to be happening is that during import both the global and local dictionaries are set to a copy of the globals() from the importing scope and that copy becomes the value of the module's __dict__ once import has completed successfully. Top-level statements bind names in locals(), as always, but because locals() and globals() are the same dictionary, they are also binding them in globals(), so that every function defined in the module uses the modified copy of globals -- the value of the module's __dict__ -- as its globals() when it executes.

Because exec leaves locals() and globals() distinct, functions defined at the top level of a string being exec'd don't see other assignments and definitions that are also in the string. Another misleading detail is that top-level expressions in the exec can use other top-level names (assigned, def'd, etc.), which they will find in the exec string's local scope, but function bodies do not see the string's local scope. The problem I encountered arises because the function definitions need to access each other through the global scope, not the local scope. In fact, the problem would arise if one of the functions tried to call itself recursively, since its own name would not be in the global scope.

So we have a combination of two distinctions: the different ways module import and exec use globals and locals and the difference between top-level statements finding other top-level names in locals but functions looking for them in globals.

Sorry for the long post. These distinctions go deep into the semantics of Python namespaces, which though they are lean, pure, and beautiful, have some consequences that can be surprising -- more so the more familiar you are with other languages that do things differently.
Oh, and as far as using import instead of exec for my scripts, I don't think that's appropriate, if only because I don't want my application's namespace polluted by what could be many of these pseudo-modules users might load during a session. (Yes, I could remove the name once the import is finished, but importing solely for side-effects rather than to use the imported module is offensive. Well, I would be using one module name -- result -- but that doesn't seem to justify the complexities of setting up the import and accessing the module when exec does in principle just what I need.)

Finally, once all of this is really understood, there is a simple way to change an exec string's def's to bind globally instead of locally: simply begin the exec with a global declaration for any function called by one of the others. In my example, adding a "global fn1" at the beginning of the file fixes it so exec works.

    global fn1    # enable fn1 to be called from fn2!

    def fn1(val):
        return sum(range(val))

    def fn2(arg):
        return fn1(arg)

    result = fn2(5)
--
http://mail.python.org/mailman/listinfo/python-list
Re: One function calling another defined in the same file being exec'd
On Jan 8, 2010, at 9:55 AM, "Gabriel Genellina" <[email protected]> wrote:

> Ok - short answer or long answer? Short answer: Emulate how modules
> work. Make globals() same as locals(). (BTW, are you sure you want the
> file to run with the *same* globals as the caller? It sees the
> dofile() function and everything you have defined/imported there...).
> Simply use:
>
>     exec(..., ldict, ldict)
>
>> [1] How is it that fn2 can be called from the top-level of the script
>> but fn1 cannot be called from fn2?
>
> Long answer: First, add these lines before result=fn2(5):
>
>     print("globals=", globals().keys())
>     print("locals=", locals().keys())
>     import dis
>     dis.dis(fn2)
>
> and you'll get:
>
>     globals= dict_keys(['dofile', '__builtins__', '__file__',
>                         '__package__', '__name__', '__doc__'])
>     locals= dict_keys(['result', 'fn1', 'fn2'])
>
> So fn1 and fn2 are defined in the *local* namespace (as always happens
> in Python, unless you use the global statement). Now look at the code
> of fn2:
>
>     6   0 LOAD_GLOBAL     0 (fn1)
>         3 LOAD_FAST       0 (arg)
>         6 CALL_FUNCTION   1
>         9 RETURN_VALUE
>
> Again, the compiler knows that fn1 is not local to fn2, so it must be
> global (because there is no other enclosing scope) and emits a
> LOAD_GLOBAL instruction. But when the code is executed, 'fn1' is not
> in the global scope... Solution: make 'fn1' exist in the global scope.
> Since assignments (implied by the def statement) are always in the
> local scope, the only alternative is to make both scopes (global and
> local) the very same one.

This is very helpful additional information and clarification! Thanks.

> This shows that the identity "globals() is locals()" is essential for
> the module system to work.

Yes, though I doubt more than a few Python programmers would guess that identity.

>> [2] Is this correct behavior or is there something wrong with Python
>> here?
>
> It's perfectly logical once you get it... :)

I think I'm convinced.

>> [3] How should I write a file to be exec'd that defines several
>> functions that call each other, as in the trivial fn1-fn2 example
>> above?
>
> Use the same namespace for both locals and globals:
>
>     exec(file.read(), ldict, ldict)

I was going to say that this wouldn't work because the script couldn't use any built-in names, but the way exec works, if the value passed for the globals argument doesn't contain an entry for '__builtins__' it adds one. I would have a further problem in that there are some names I want users to be able to use in their scripts, in particular classes that have been imported into the scope of the code doing the exec, but come to think of it I don't want to expose the entire globals() anyway.

The solution is to use the same dictionary for both globals and locals, as you suggest, to emulate the behavior of module import, and explicitly add to it the names I want to make available (and since they are primarily classes, there are relatively few of those, as opposed to an API of hundreds of functions). Thanks for the help.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Python-list Digest, Vol 76, Issue 97
On Jan 8, 2010, at 7:35:39 PM EST, Terry Reedy wrote:
On 1/8/2010 12:02 PM, Mitchell L Model wrote:
On further reflection, I will add that
what appears to be happening is that during import both the global
and
local dictionaries are set to a copy of the globals() from the
importing
scope and that copy becomes the value of the module's __dict__ once
import has completed successfully.
I have no idea why you think that. The module dict starts empty
except for __name__, __file__, and perhaps a couple of other
'hidden' items. It is not a copy and has nothing to do with
importing scopes.
Why I think -- or, rather, thought -- that was because of some
defective experiments I ran. It was purely a delusion. Thank you for
correcting it.
> and that copy becomes the value of the module's __dict__ once
> import has completed successfully.
That new dict becomes the module's __dict__.
Because exec leaves locals() and globals() distinct,
Not necessarily.
In 3.x, at least,
exec(s)
executes s in the current scope. If this is top level, where locals
is globals, then same should be true within exec.
Yes. To simplify some of my ramblings and incorporate the points you
and others have made, and to once again acknowledge Python's elegance,
an important observation which I bet even a lot of serious Python
programmers don't realize (or at least not consciously) is that:
globals() is locals()
in the following contexts:
the interpreter top level
the top level of a module (though as you point out, starts out as a
very bare dictionary during import)
a string being exec'd when the call to exec includes
no dictionary argument(s)
one dictionary argument
the same dictionary as both the second and third arguments
The identity does not hold for:
a string being exec'd when a different dictionary is provided as the
second and third arguments to exec
inside anything that creates a scope: a function definition, class
definition, etc.
Did I get all that right? Are there any other contexts that should be
included in these?
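Two of these contexts can be checked directly (a small sketch):

```python
def in_function():
    # Inside a function, the local scope is its own dictionary,
    # distinct from the module globals:
    return globals() is locals()

# exec with a single explicit dictionary uses it as both globals and
# locals, just like the top level of a module:
ns = {}
exec('same = globals() is locals()', ns)

print(in_function())  # -> False
print(ns['same'])     # -> True
```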
d = {}
exec(s, d)
In 3.x, at least, d will also be used as locals.
Yes, talking about 3.x.
exec(s, d, d)
Again, globals and locals are not distinct.
It would seem that in 3.x, the only way for exec to have distinct
globals and locals is to call exec(s) where they are distinct or to
pass distinct globals and locals.
Apparently so. To clarify "where they are distinct", that would mean
from a context in which they were already distinct, which is not the
case if exec is called from the top level, but is the case if called
from within, say, a function, as my code does.
Some of the issues of this thread are discussed in Language
Reference 4.1, Naming and Binding. I suppose it could be clearer
that it is, but the addition of nonlocal scope complicated things.
I pretty much have that section memorized and reread it at least
monthly. It's part of what I meant by starting my original comments by
saying that I thought I understood all of this. Thank you (and others)
for helping clarify exactly what's going on. As with so many things in
Python, it is not always easy to keep one's preconceptions, delusions,
and experiences with other languages out of the way of its simplicity,
even if one is a very experienced and knowledgeable Python programmer.
--- Mitchell--
http://mail.python.org/mailman/listinfo/python-list
sys.stdout vs. sys.stderr
In Python 3.1 is there any difference in the buffering behavior of the initial sys.stdout and sys.stderr streams? They are both line_buffered and stdout doesn't seem to use a larger-grain buffering, so they seem to be identical with respect to buffering. Were they different at some earlier point in Python's evolution? -- http://mail.python.org/mailman/listinfo/python-list
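One way to observe line buffering directly is with a TextIOWrapper built by hand, with io.BytesIO standing in for the OS-level stream (you can also inspect sys.stdout.line_buffering and sys.stderr.line_buffering on your own interpreter):

```python
import io

# With line_buffering=True, a newline in the written text forces a
# flush down to the underlying binary layer.
raw = io.BytesIO()
stream = io.TextIOWrapper(raw, line_buffering=True)
stream.write('no newline yet')
stream.write(' ... done\n')   # the '\n' triggers the flush
flushed = raw.getvalue()
print(flushed)  # -> b'no newline yet ... done\n'
```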
Re: I really need webbrowser.open('file://') to open a web browser
On Jan 15, 2010, at 3:59 PM, Timur Tabi wrote:

> After reading several web pages and mailing list threads, I've learned
> that the webbrowser module does not really support opening local
> files, even if I use a file:// URL designator. In most cases,
> webbrowser.open() will indeed open the default web browser, but with
> Python 2.6 on my Fedora 10 system, it opens a text editor instead. On
> Python 2.5, it opens the default web browser. This is a problem
> because my Python script creates a local HTML file and I want it
> displayed on the web browser. So is there any way to force
> webbrowser.open() to always use an actual web browser?

I had some discussions with the Python documentation writers that led to the following note being included in the Python 3.1 library documentation for webbrowser.open: "Note that on some platforms, trying to open a filename using this function, may work and start the operating system’s associated program. However, this is neither supported nor portable." The discussions suggested that this lack of support and portability was actually always the case and that the webbrowser module is simply not meant to handle file URLs. I had taken advantage of the accidental functionality to generate HTML reports and open them, as well as to open specific documentation pages from within a program.

You can control which browser opens the URL by using webbrowser.get to obtain a controller for a particular browser, specified by its argument, then call the open method on the controller instead of the module. For opening files reliably, with the ability to pick a particular program (browser or otherwise) to open them with, you might have to resort to invoking a command line via subprocess.Popen.
--
http://mail.python.org/mailman/listinfo/python-list
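For example (hedging for headless machines, where webbrowser.get raises an error because no browser is registered):

```python
import webbrowser

# get() with no argument returns a controller for the default browser;
# get('firefox'), for example, asks for a specific one, and raises
# webbrowser.Error if it cannot be found.
try:
    controller = webbrowser.get()
    kind = type(controller).__name__
except webbrowser.Error:
    kind = None          # e.g., a headless server with no browser
print(kind)
```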
Re: I really need webbrowser.open('file://') to open a web browser
On Jan 27, 2010, at 3:31 PM, Timur Tabi wrote:
On Wed, Jan 27, 2010 at 12:29 PM, Mitchell L Model
wrote:
I had some discussions with the Python documentation writers that
led to the
following note being included in the Python 3.1 library
documentation for
webbrowser.open: "Note that on some platforms, trying to open a
filename
using this function, may work and start the operating system’s
associated
program. However, this is neither supported nor portable."
Then they should have renamed the API. I appreciate that they're
finally documenting this, but I still think it's a bunch of baloney.
I agree, but I am pretty sure that, based on the discussions I had
with the Python
documenters and developers, that there's no hope of winning this
argument.
I suppose that since a file: URL is not, strictly speaking, on the
web, that it
shouldn't be opened with a "web" browser. It's just that the "web"
part of
"web browser" became more or less obsolete a long time ago since there
are so many more ways of using browsers and so many more things they can
do than just browse the web. So if you interpret the name "webbrowser"
to mean
that it browses the web, as opposed to files, which means going
through some
kind of server-based protocol, the module does what it says. But I
still like
the idea of using it to open files, especially when I want the file to
be opened
by its associated application and not a browser.
You can control which browser opens the URL by using webbrowser.get
to
obtain a controller for a particular browser, specified by its
argument,
then call the open method on the controller instead of the module.
How can I know which controller (application) the system will use when
it opens an http URL? I depend on webbrowser.open('http') to choose
the best web browser on the installed system. Does webbrowser.get()
tell me which application that will be?
webbrowser.get() with no arguments gives you the default kind of
browser controller, just as if you had used webbrowser.open()
directly.
For opening files reliably and the ability to pick a particular
program
(browser or otherwise) to open it with you might have to resort to
invoking
a command line via subprocess.Popen.
But that only works if I know which application to open.
Aha. You could use subprocess to specify the application from within
your Python code,
but not to indicate "the user's default browser", unless the platform
has a command for that.
On OS X, for instance, the command line:
open file.html
opens file.html with the application the user has associated with html
files, whereas
open -a safari file.html
will open it with Safari even if the user has chosen Firefox for html
files. There's
stuff like this for Windows, I suppose, but hardly as convenient. And
I think that
Linux environments are all over the place on this, but I'm not sure.
webbrowser.get() returns a control object of the default class for the
user's environment --
the one that means "use the default browser" so it won't help.
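A sketch of the platform-dispatch idea -- the commands are the conventional ones ('open' on OS X, 'start' on Windows, 'xdg-open' on most Linux desktops; verify for your environment). The function only builds the command line, which you would then hand to subprocess.Popen:

```python
import sys

def opener_command(path):
    # Build the command that asks the OS to open `path` with its
    # associated application; run it with subprocess.Popen(cmd).
    if sys.platform == 'darwin':
        return ['open', path]
    elif sys.platform.startswith('win'):
        return ['cmd', '/c', 'start', '', path]
    else:
        return ['xdg-open', path]

cmd = opener_command('report.html')
print(cmd)
```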
--
http://mail.python.org/mailman/listinfo/python-list
Re: python 3's adoption
I have been working with Python 3 for over a year. I used it in writing my book "Bioinformatics Programming Using Python" (http://oreilly.com/catalog/9780596154509). I didn't see any point in teaching an incompatible earlier version of a language in transition. In preparing the book and its examples I explored a large number of Python modules in some depth and encountered most of the differences between the language and libraries of Python 2 and Python 3. The change was a bit awkward for a while, and there were some surprises, but in the end I have found nothing in Python 3 for which I would prefer Python 2's version.

Removal of old-style classes is a big win. Having print as a function provides a tremendous amount of flexibility. I use the sep and end keywords all the time. There is no reason for print to be a statement, and it was an awkward inconsistency in a language that leans towards functional styles. Likewise the elimination of cmp, while shocking, leads to much simpler comparison arguments to sort, since all the function does is return a key; then, sort uses __lt__ (I think) so it automatically uses each class's definition of that.

The weird objects returned from things like sorted, dict.keys/values/items, and so on are values that in practice are used primarily in iterations; you can always turn the result into a list, though I have to admit that while developing and debugging I trip trying to pick out a specific element from one of these using indexing (typically [0]); I've learned to think of them as generators, even though they aren't. The rearrangements and name changes in the libraries are quite helpful. I could go on, but basically the language and library changes are on the whole large improvements with little, if any, downside.

Conversion of old code is greatly facilitated by the 2to3 tool that comes with Python 3. The big issue in moving from 2 to 3 is the external libraries and development tools you use.
Different IDEs have released versions that support Python 3 at different times. (I believe Wing was the first.) If you use numpy, for example, or one of the many libraries that require it, you are stuck. Possibly some important facilities will never be ported to Python 3, but probably most active projects will eventually produce a Python 3 version -- for example, according to its web page, a Python 3 version of PIL is on the way. I was able to cover all the topics in my book using only Python library modules, something I felt would be best for readers -- I used libraries such as elementree, sqlite3, and tkinter. The only disappointment was that I couldn't include a chapter on BioPython, since there is no Python 3 version. By now, many large facilities support both Python 2 and Python 3. I am currently building a complex GUI/Visualization application based on the Python 3 version of PyQt4 and Wing IDE and am delighted with all of it. It may well be that some very important large -- http://mail.python.org/mailman/listinfo/python-list
Re: python 3's adoption
On Jan 28, 2010, at 12:00 PM, [email protected] wrote: From: Roy Smith Date: January 28, 2010 11:09:58 AM EST To: [email protected] Subject: Re: python 3's adoption In article , Mitchell L Model wrote: I use the sep and end keywords all the time. What are 'sep' and 'end'? I'm looking in http://docs.python.org/3.1/genindex-all.html and don't see those mentioned at all. Am I just looking in the wrong place? Sorry -- I wasn't clear. They are keyword arguments to the print function. -- http://mail.python.org/mailman/listinfo/python-list
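Concretely, writing to a StringIO so the effect of both keywords is capturable:

```python
import io

buf = io.StringIO()
print('a', 'b', 'c', sep='-', file=buf)   # sep joins the arguments
print('no newline', end=' / ', file=buf)  # end replaces the trailing '\n'
print('done', file=buf)
output = buf.getvalue()
print(repr(output))  # -> 'a-b-c\nno newline / done\n'
```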
Re: python 3's adoption
On Jan 28, 2010, at 1:40 PM, Terry Reedy wrote:
> On 1/28/2010 11:03 AM, Mitchell L Model wrote:
>> I have been working with Python 3 for over a year. ...
> I agree completely.

Such sweet words to read!

>> [... the rest of the original post, ending with:] It may well be that
>> some very important large
> Something got clipped ;-)

Thanks for noticing. Actually, I had abandoned that sentence and went back and added more to the prior paragraph. Just never went back and deleted the false start. Anyway, thank you for the report.

Glad to contribute; gladder to be appreciated.
--
http://mail.python.org/mailman/listinfo/python-list
lists as an efficient implementation of large two-dimensional arrays(!)
An instructive lesson in YAGNI ("you aren't going to need it"),
premature optimization, and not making assumptions about Python data
structure implementations.
I need a 1000 x 1000 two-dimensional array of objects. (Since they are
instances of application classes it appears that the array module is
useless; likewise, since I am using Python 3.1, among other things
I can't use numpy or its relatives.) The usage pattern is that the
array is first completely filled with objects. Later, objects are
sometimes accessed individually by row and column and often the entire
array is iterated over.
Worried (unnecessarily, as it turns out) by the prospect of a
1,000,000-element list, I started by constructing a dictionary with the
keys 0 through 999, each of which had as its value another dictionary
with the keys 0 through 999. Actual values were the values of the
second-level dictionary.
Using numbers to fill the array to minimize the effect of creating my
more complex objects, and running Python 3.1.1 on an 8-core Mac Pro
with 8Gb memory, I tried the following
# create and fill the array:
import time

t1 = time.time()
d2 = {}
for j in range(1000):
    d2[j] = dict()
    for k in range(1000):
        d2[j][k] = k
print(round(time.time() - t1, 2))
0.41
# access each element of the array:
t1 = time.time()
for j in range(1000):
    for k in range(1000):
        elt = d2[j][k]
print(round(time.time() - t1, 2))
0.55
My program was too slow, so I started investigating whether I could
improve on the two-level dictionary, which got used a lot. To get
another baseline I tried a pure 1,000,000-element list, expecting the
times to be horrendous, but look!
# fill a 1,000,000-element list using append
t1 = time.time()
lst = []
for n in range(1000000):
    lst.append(n)
print(round(time.time() - t1, 2))
0.26
# access every element of the list
t1 = time.time()
for n in range(1000000):
    elt = lst[n]
print(round(time.time() - t1, 2))
0.25
What a shock! I could save half the execution time and all my clever
work and awkward double-layer dictionary expressions by just using a
list!
Even better, look what happens using a comprehension to create the
list instead of a loop with list.append:
t1 = time.time()
lst = [n for n in range(1000000)]
print( round(time.time() - t1, 2))
0.11
Half again to create the list.
Iterating over the whole list is easier and faster than iterating over
the double-level dictionary, in particular because it doesn't involve
a two-level loop. But what about individual access given a row and a
column?
t1 = time.time()
for j in range(1000):
    for k in range(1000):
        elt = lst[j * 1000 + k]
print(round(time.time() - t1, 2))
0.45
This is the same as for the dictionary.
I tried a two-level list and a few other things but still haven't
found anything that works better than a single long list -- just like
2-D arrays are coded in old-style languages, with indices computed as
offsets from the beginning of the linear sequence of all the values.
What's amazing is that creating and accessing a 1,000,000-element list
in Python is so efficient. The usual moral obtains: start simple,
analyze problems (functional or performance) as they arise, decide
whether they are worth the cost of change, then change in very limited
ways. And of course abstract and modularize so that, in my case, for
example, none of the program's code would be affected by the change
from a two-level dictionary representation to using a single long list.
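[That kind of abstraction can be as small as a thin wrapper over the flat list; a sketch -- the Grid name and its methods are invented here, not taken from the original program:]

```python
class Grid:
    """A dense 2-D array stored as one flat list, indexed by (row, col)."""

    def __init__(self, nrows, ncols, fill=None):
        self.nrows = nrows
        self.ncols = ncols
        self.cells = [fill] * (nrows * ncols)

    def get(self, row, col):
        # the same offset arithmetic as in the timing test, hidden from callers
        return self.cells[row * self.ncols + col]

    def put(self, row, col, value):
        self.cells[row * self.ncols + col] = value

    def __iter__(self):
        # iterating the whole array is just iterating the flat list
        return iter(self.cells)

grid = Grid(1000, 1000, fill=0)
grid.put(2, 3, 'x')
```

Callers see only rows and columns, so switching back to a two-level dictionary for the sparse case would change only this class.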
I realize there are many directions an analysis like this can follow,
and many factors affecting it, including patterns of use. I just
wanted to demonstrate the basics for a situation that I just
encountered. In particular, if the array was sparse, rather than
completely full, the two-level dictionary implementation would be the
natural representation.
--
http://mail.python.org/mailman/listinfo/python-list
CGI, POST, and file uploads
Can someone tell me how to upload the contents of a (relatively small)
file using an HTML form and CGI in Python 3.1? As far as I can tell
from a half-day of experimenting, browsing, and searching the Python
issue tracker, this is broken. Very simple example:
<form action="http://localhost:9000/cgi/cgi-test.py"
      enctype="multipart/form-data"
      method="post">
  File <input type="file" name="contents">
  <input type="submit" value="Submit">
</form>
cgi-test.py:
#!/usr/local/bin/python3
import cgi
import sys
form = cgi.FieldStorage()
print(form.getfirst('contents'), file=sys.stderr)
print('done')
I run a CGI server with:
#!/usr/bin/env python3
from http.server import HTTPServer, CGIHTTPRequestHandler
HTTPServer(('', 9000), CGIHTTPRequestHandler).serve_forever()
What happens is that the upload never stops. It works in 2.6.
If I cancel the upload from the browser, I get the following output,
so I know that basically things are working;
the cgi script just never finishes reading the POST input:
localhost - - [02/Mar/2010 16:37:36] "POST /cgi/cgi-test.py HTTP/1.1"
200 -
Exception happened during processing of request from ('127.0.0.1',
55779)
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/socketserver.py", line 281, in _handle_request_noblock
self.process_request(request, client_address)
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/socketserver.py", line 307, in process_request
self.finish_request(request, client_address)
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/socketserver.py", line 320, in finish_request
self.RequestHandlerClass(request, client_address, self)
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/socketserver.py", line 614, in __init__
self.handle()
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/http/server.py", line 352, in handle
self.handle_one_request()
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/http/server.py", line 346, in handle_one_request
method()
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/http/server.py", line 868, in do_POST
self.run_cgi()
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/http/server.py", line 1045, in run_cgi
if not self.rfile.read(1):
File "/Library/Frameworks/Python.framework/Versions/3.1/lib/
python3.1/socket.py", line 214, in readinto
return self._sock.recv_into(b)
socket.error: [Errno 54] Connection reset by peer
--
http://mail.python.org/mailman/listinfo/python-list
CGI, POST, and file uploads
On Mar 2, 2010, at 4:48 PM, I wrote:
> Can someone tell me how to upload the contents of a (relatively small)
> file using an HTML form and CGI in Python 3.1? As far as I can tell
> from a half-day of experimenting, browsing, and searching the Python
> issue tracker, this is broken.

followed by a detailed example demonstrating the problem. Having heard no response, let me clarify that this request was preliminary to filing a bug report -- I wanted to make sure I wasn't missing something here. If nothing else, this failure should be documented rather than the 3.1 library documentation continuing to describe how to upload file contents with POST. If someone thinks there is a way to make this work in 3.1, or that it isn't a bug because CGI is hopeless (i.e., non-WSGI-compliant), or that the documentation shouldn't be changed, please respond. I'd rather have this particular discussion here than in the bug tracking system.

Meanwhile, let me heartily recommend the Bottle Web Framework (http://bottle.paws.de ) for its simplicity, flexibility, and power. Very cool stuff. To make it work in Python 3.1, do the following:
1. run 2to3 on bottle.py (the only file there is to download)
2. copy or move the resulting bottle.py to the site-packages directory in your Python installation's library directory
3. don't use request.GET.getone or request.POST.getone -- instead of getone, use get (the protocol changed to that of the Mapping ABC from the collections module)
4. the contents of a file will be returned inside a cgi.FieldStorage object, so you need to add '.value' after the call to get in that case
--
http://mail.python.org/mailman/listinfo/python-list
Re: sys.stdout vs. sys.stderr
On Jan 11, 2010, at 1:47 PM, Nobody wrote:
> On Mon, 11 Jan 2010 10:09:36 +0100, Martin v. Loewis wrote:
>>> In Python 3.1 is there any difference in the buffering behavior of
>>> the initial sys.stdout and sys.stderr streams?
>> No.
>>> Were they different at some earlier point in Python's evolution?
>> That depends on the operating system. These used to be whatever the C
>> library set up as stdout and stderr. Typically, they were buffered in
>> the same way.
> On Unix, stdout will be line buffered if it is associated with a tty
> and fully buffered otherwise, while stderr is always unbuffered. On
> Windows, stdout and stderr are unbuffered if they refer to a character
> device, fully buffered otherwise (Windows doesn't have line buffering;
> setvbuf(_IOLBF) is equivalent to setvbuf(_IOFBF)). ANSI C says: "As
> initially opened, the standard error stream is not fully buffered; the
> standard input and standard output streams are fully buffered if and
> only if the stream can be determined not to refer to an interactive
> device."

I don't want to get into a quibble fight here, but I need to reraise this issue. [I teach and write and want to make sure I get this right. I already have an incorrect paragraph about this in my Bioinformatics Programming Using Python book.] The key question here is line buffering vs. full buffering. In Unix (at least in an OS X Terminal), the following code prints a number every two seconds in Python 2:

>>> for n in range(5):
...     print >> sys.stderr, n,    # final , to not send newline
...     time.sleep(2)

However, in Python 3, similar code does not print the numbers until the whole thing finishes (again, running from the terminal):

>>> for n in range(5):
...     print(n, file=sys.stderr, end='')
...     time.sleep(2)

So it appears that in a Unix terminal window, Python 2 does not line-buffer stderr whereas Python 3 does. That's what tripped me up.
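[The workaround is simply to flush after each write. A small sketch -- the progress helper is invented for illustration; sys.stderr.flush() works in 3.1, and later Python 3 releases also added a flush keyword to print itself:]

```python
import sys
import time

def progress(stream, n, delay=0.0):
    """Write n dots to stream, flushing after each so they appear immediately."""
    for _ in range(n):
        print('.', end='', file=stream)
        stream.flush()   # without this, a line-buffered stream shows nothing until '\n'
        time.sleep(delay)
    print(file=stream)   # finish the line

progress(sys.stderr, 5)
```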
While developing and debugging code, I often print periods on a line as some loop progresses (sometimes every Nth time around, for some reasonable N) just to know the pace of execution and that the program is still doing something. In doing that recently in Python 3 I discovered that I either had to leave out the end='' or do sys.stderr.flush() after every print, which amounts to the same thing.

This was a big surprise, after many, many years of C, C++, Java, and Python programming -- I have always thought of stderr as completely unbuffered in languages that have it. That's not to say no language line-buffers stderr on some platform; I'm just pointing out an assumption I've lived with for a very long time that tripped me up when writing a note about using stderr in Python 3 without actually demonstrating the code, and therefore not catching my error.
--
http://mail.python.org/mailman/listinfo/python-list
invoking a method from two superclasses
In Python 3, how should super() be used to invoke a method that C defines to
override the versions in both of its superclasses A and B -- in particular, __init__?
class A:
    def __init__(self):
        print('A')

class B:
    def __init__(self):
        print('B')

class C(A, B):
    def __init__(self):
        super().__init__()
        print('C')

C()
Output is:
A
C
I've discovered the surprising fact, described in the documentation of super,
that specifying a class as the first argument of super means to skip that
class when scanning the mro, so that if C.__init__ includes the line

    super(A, self).__init__()

what gets called is B.__init__. So if I want to call __init__ of both
classes, the definition of C should have both of the following lines:

    super().__init__()
    super(A, self).__init__()

and

    super(B, self).__init__()

does nothing because B is the last class in the mro.
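[That skipping behavior is easy to confirm. A minimal sketch, reusing the question's class names, showing that super(A, self) starts the mro scan after A and so finds B.__init__:]

```python
class A:
    def __init__(self):
        print('A')

class B:
    def __init__(self):
        print('B')

class C(A, B):
    def __init__(self):
        super(A, self).__init__()   # mro is C, A, B, object; scan starts after A
        print('C')

C()   # prints B then C; A.__init__ is skipped
```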
This seems weird. Would someone please give a clear example and explanation of
the recommended way of initializing both superclasses in a simple multiple
inheritance
situation?
Note: I am EXTREMELY knowledgeable about OO, Python, and many OOLs.
I don't mean to be arrogant, I just want to focus the discussion not open it to
a broad
interchange about multiple inheritance, the ways it can be used or avoided,
etc. I just
want to know how to use super. The documentation states the following:
"There are two typical use cases for super. In a class hierarchy with single
inheritance,
super can be used to refer to parent classes without naming them explicitly,
thus
making the code more maintainable."
"The second use case is to support cooperative multiple inheritance in a
dynamic
execution environment. This use case is unique to Python and is not found in
statically compiled languages or languages that only support single
inheritance.
This makes it possible to implement "diamond diagrams" where multiple base
classes implement the same method."
"For both use cases, a typical superclass call looks like this:
class C(B):
    def method(self, arg):
        super().method(arg)    # This does the same thing as:
                               # super(C, self).method(arg)
"
Though it claims to be demonstrating both cases, it is only demonstrating single
inheritance and a particular kind of multiple inheritance where the method is
found
in only one class in the mro. This avoids situations where you want to call the
method anywhere it is found in the mro, or at least in the direct superclasses.
Perhaps __init__ is a special case, but I don't see how to figure out how to
__init__
two superclasses of a class from the documentation. I often file "bug reports"
about
documentation ambiguities, vagueness, incompletenesses, etc., but I don't want
to
do so for this case until I've heard something definitive about how it should
be
handled.
Thanks in advance.
--
http://mail.python.org/mailman/listinfo/python-list
Re: invoking a method from two superclasses
Allow me to add to my previous question that certainly the superclass methods can be called explicitly without resorting to super(), e.g.:

class C(A, B):
    def __init__(self):
        A.__init__(self)
        B.__init__(self)

My question is really whether there is any way of getting around the explicit class names by using super() and, if not, shouldn't the documentation of super point out that if more than one class on the mro defines a method, only the first will get called? What's strange is that it specifically mentions diamond patterns, which is an important case to get right, but it doesn't show how.

I suspect we should have a Multiple Inheritance HOWTO, though details and recommendations would be controversial. I've accumulated lots of abstract examples along the lines of my question, using multiple inheritance both to create combination classes (the kinds that are probably best done with composition instead of inheritance) and mixins. I like mixins, and I like abstract classes. And yes, I understand the horrors of working with a large component library that uses mixins heavily, because I've experienced many of them, going all the way back to Lisp-Machine Lisp's window system with very many combo classes such as FancyFontScrollingTitledMinimizableWindow, or whatever. Also, I understand that properties might be better than multiple inheritance for some situations. What I'm trying to do is puzzle out what the reasonable uses of multiple inheritance are in Python 3 and how classes and methods that follow them should be written.
--
http://mail.python.org/mailman/listinfo/python-list
Re: invoking a method from two superclasses
>From: Scott David Daniels
>Date: Tue, 30 Jun 2009 16:49:18 -0700
>Message-ID:
>Subject: Re: invoking a method from two superclasses
>
>Mitchell L Model wrote:
>>In Python 3, how should super() be used to invoke a method defined in C
> > that overrides its two superclasses A and B, in particular __init__?
>>...
>>I've discovered the surprising fact described in the documentation of super
>><http://docs.python.org/3.1/library/functions.html#super>
>>that specifying a class as the first argument of super means to skip that
>>class when
>>scanning the mro so that
>>
>>This seems weird. Would someone please give a clear example and explanation of
>>the recommended way of initializing both superclasses in a simple multiple
>>inheritance
>>situation?
>
>OK, in Diamond inheritance in Python (and all multi-inheritance is
>diamond-shaped in Python), the common ancestor must have a method
>in order to properly use super. The mro is guaranteed to have the
>top of the split (C below) before its children in the mro, and the
>join point (object or root below) after all of the classes from
>which it inherits.
>
>So, the correct way to do what you want:
>class A:
>    def __init__(self):
>        super().__init__()
>        print('A')
>
>class B:
>    def __init__(self):
>        super().__init__()
>        print('B')
>
>class C(A, B):
>    def __init__(self):
>        super().__init__()
>        print('C')
>
>C()
>
>And, if you are doing it with a message not available in object:
>
>class root:
>    def prints(self):
>        print('root')   # or pass if you prefer
>
>class A(root):
>    def prints(self):
>        super().prints()
>        print('A')
>
>class B(root):
>    def prints(self):
>        super().prints()
>        print('B')
>
>class C(A, B):
>    def prints(self):
>        super().prints()
>        print('C')
>
>C().prints()
>
>--Scott David Daniels
>[email protected]
>
Great explanation, and 1/2 a "duh" to me. Thanks.
What I was missing is that each path up to and including the top of the diamond
must include a definition of the method, along with super() calls to keep the
method call moving on its way up. Is this what the documentation means by
"cooperative multiple inheritance"?
In your correction of my example, if you remove super().__init__ from B.__init__
the results aren't affected, because object.__init__ doesn't do anything and
B comes after A in C's mro. However, if you remove super().__init__ from
A.__init__, it stops the "supering" process dead in its tracks.
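[A minimal sketch of that failure mode, constructed here to illustrate it with the thread's class names -- with no super() call in A.__init__, the chain never reaches B:]

```python
class A:
    def __init__(self):
        # no super().__init__() here, so the chain of calls stops dead
        print('A')

class B:
    def __init__(self):
        super().__init__()
        print('B')

class C(A, B):
    def __init__(self):
        super().__init__()   # mro is C, A, B, object; this reaches only A
        print('C')

C()   # prints A then C; B.__init__ is never called
```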
It would appear that "super()" really means something like CLOS's
call-next-method.
I've seen various discussions in people's blogs to the effect that super()
doesn't really
mean superclass, and I'm beginning to develop sympathy with that view. I
realize that
implementationally super is a complicated proxy; understanding the practical
implications isn't so easy. While I've seen all sorts of arguments and
discussions,
including the relevant PEP(s), I don't think I've ever seen anyone lay out an
example
such as we are discussing with the recommendation that basically if you are
using
super() in multiple inheritance situations, make sure that the methods of all
the classes
in the mro up to at least the top of a diamond all call super() so it can
continue to
move the method calls along the mro. The documentation of super(), for instance,
recommends that all the methods in the diamond should have the same signature,
but
and it says that super() can be used to implement the diamond, but it never
actually
comes out and says that each method below the top must call super() at the risk
of the chain of calls being broken. I do wonder whether this should go in the
doc
of super, the tutorial, or a HOWTO -- it just seems too important and subtle to
leave
for people to discover.
Again, many thanks for the quick and clear response.
--
http://mail.python.org/mailman/listinfo/python-list
Re: invoking a method from two superclasses
[Continuing the discussion about super() and __init__]

The documentation of super points out that good design of diamond patterns requires the methods to have the same signature throughout the diamond. That's fine for non-mixin classes where the diamond captures different ways of handling the same data. The classical example is something like:

            BufferedStream
             /          \
    BufInputStrm    BufOutputStrm
             \          /
        RandomAccessStream

(both substreams have buffers, but use them differently). The idea of the diamond is to have just one buffer, rather than the two buffers that would result in C++ without making the base classes virtual. All four classes could define __init__ with the argument filename, or whatever, and everything works fine.

The problems start with the use of mixins. In essence, mixins intentionally do NOT want to be part of diamond patterns. They are orthogonal to the "true" or "main" class hierarchy and just poke their heads in here and there in that hierarchy. Moreover, a class could inherit from multiple mixins. Typical simple orthogonal mixins would be NamedObject, TrackedObject, LoggedObject, ColoredWidget, and other such names compounded from an adjective, participle, or gerund and a completely meaningless name such as Object or Thing. Such classes typically manage one action or piece of state, factoring it out from the many other classes that need it, where the pattern of which classes need them does not follow the regular class hierarchy.

Suppose I have a class User that includes NamedObject, TrackedObject, and LoggedObject as base classes. (By Tracked I mean instances are automatically registered in a list or dictionary for use by class methods that search, count, or do other operations on them.) The problem is that each mixin's __init__ is likely to have a different signature. User.__init__ would have arguments (self, name, log), but it would need to call each mixin class's __init__ with different arguments.
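[For what it's worth, mixins with different signatures can still cooperate if every __init__ consumes its own keyword arguments and relays the rest up the mro. A sketch, not from the thread itself; the class names are the post's hypothetical mixins, with TrackedObject omitted for brevity:]

```python
class NamedObject:
    def __init__(self, name='', **kwargs):
        super().__init__(**kwargs)   # relay whatever this mixin doesn't consume
        self.name = name

class LoggedObject:
    def __init__(self, log=None, **kwargs):
        super().__init__(**kwargs)
        self.log = log

class User(NamedObject, LoggedObject):
    def __init__(self, name, log):
        # keyword arguments let each class in the mro pick out its own parameter
        super().__init__(name=name, log=log)

u = User('mitch', 'user.log')
```

The cost is that every class in the chain has to follow the convention; a mixin that takes positional arguments breaks it.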
Mixins are different from what the document refers to as cooperative multiple inheritance -- does that make them uncooperative multiple inheritance classes :-)? I think I'd be back to having to call each mixin class's __init__ explicitly:

class User(NamedObject, TrackedObject, LoggedObject):
    def __init__(self, name, log):
        NamedObject.__init__(self, name)
        TrackedObject.__init__(self)
        LoggedObject.__init__(self, log)

This is not terrible. It seems somehow appropriate that because mixin use is "orthogonal" to the "real" inheritance hierarchy, they shouldn't have the right to use super() and the full mro (after all, who knows where the mro will take these calls anyway). And in many cases, two mixins will join into another (a NamedTrackedObject) for convenience, and then classes inheriting from that have one less init to worry about.

I suspect this problem is largely with __init__. All the other __special__ fns have defined argument lists and so can be used with diamond patterns, mixins, etc. Does this all sound accurate? (Again, not looking for design advice, just trying to ferret out the subtleties and implications of multiple inheritance and super().)
--
http://mail.python.org/mailman/listinfo/python-list
Using pySNMP
I realize this is not the pySNMP mailing list. However, can someone tell me whether pySNMP uses human-readable forms of MIBs, as Net-SNMP does, or only the OIDs? Thanks, Mitch -- http://mail.python.org/mailman/listinfo/python-list
From Perl to Python
General newbie type question...open ended. I have been scripting in Perl for 8 years for net management purposes, mostly interacting with SNMP and Cisco MIBs in general. I would like to rewrite these scripts in Python. Any suggestions on how to get started in Python, and especially on an interface to SNMP, would be appreciated. Thanks, Mitch -- http://mail.python.org/mailman/listinfo/python-list
