Need BlueZ/Python (PyBluez) method to access BlueZ data - Please Help!
Hi,

I need to access data that is handled and stored in the BlueZ file system but is, unfortunately, not available via the current BlueZ D-Bus API. That is, the data I need is parsed by BlueZ but not provided in the current D-Bus signals. I need a method or interface that does not rely on the D-Bus API and can directly access the BlueZ file system to get this parsed data. Can PyBluez do this? If not, any suggestions would be immensely appreciated.

Thanks!

Isaac
--
http://mail.python.org/mailman/listinfo/python-list
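[Editor's note: one workaround is to read BlueZ's on-disk storage directly. The sketch below is only a guess at that approach, assuming BlueZ 5's layout of INI-style "info" files under /var/lib/bluetooth/<adapter>/<device>/; the paths and the "General"/"Name" section and key are assumptions, not a documented API, and reading them typically requires root.]

import configparser
from pathlib import Path

# Assumed BlueZ 5 storage layout: /var/lib/bluetooth/<adapter>/<device>/info
storage = Path("/var/lib/bluetooth")
for info in storage.glob("*/*/info"):
    cfg = configparser.ConfigParser()
    cfg.read(info)
    # "General"/"Name" is assumed to hold the paired device's name
    name = cfg.get("General", "Name", fallback="<unknown>")
    print(info.parent.name, name)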
Re: Why is the argparse module so inflexible?
On Sat, Jun 29, 2013 at 9:36 AM, Ethan Furman wrote:
> On 06/27/2013 03:49 PM, Steven D'Aprano wrote:
> >
> > Libraries should not call sys.exit, or raise SystemExit. Whether to quit
> > or not is not the library's decision to make; that decision belongs to
> > the application layer. Yes, the application could always catch
> > SystemExit, but it shouldn't have to.
>
> So a library that is explicitly designed to make command-line scripts
> easier and friendlier should quit with a traceback?
>
> Really?

Perhaps put the "handle the error by calling sys.exit with a message" functionality into a method, so that the user can override it (e.g., so that it just rethrows the exception to the caller of the library)?
--
http://mail.python.org/mailman/listinfo/python-list
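[Editor's note: argparse already exposes this hook. ArgumentParser.error() (and exit()) are documented as overridable. A minimal sketch of a parser that raises instead of exiting; the parser name and the --count argument are made up for illustration:]

import argparse

class NonExitingParser(argparse.ArgumentParser):
    """Raise instead of calling sys.exit() on a bad command line."""
    def error(self, message):
        # argparse would normally print usage and call self.exit(2) here
        raise ValueError(message)

parser = NonExitingParser(prog="demo")
parser.add_argument("--count", type=int)
try:
    parser.parse_args(["--count", "oops"])
except ValueError as exc:
    print("caught:", exc)   # the application layer decides what to do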
Question about nested loop
Hi all,
I am very much a novice at Python. Currently, I am trying to read successive
columns of a file repeatedly into arrays.
My code is like below:
import numpy as np
b = []
c = 4
f = open("text.file", "r")
while c < 10:
    c = c + 1
    for columns in ( raw.strip().split() for raw in f ):
        b.append(columns[c])
    y = np.array(b, float)
    print c, y
I thought this would get the arrays of columns[5] to columns[10], but I only
got the array of columns[5] repeated.
The result was something like:
5 [1 2 3 4 .., 10 9 8]
6 [1 2 3 4 .., 10 9 8]
7 [1 2 3 4 .., 10 9 8]
8 [1 2 3 4 .., 10 9 8]
9 [1 2 3 4 .., 10 9 8]
10 [1 2 3 4 .., 10 9 8]
What I can't understand is that even though c is incremented up to 10,
the y arrays stay the same.
Would someone help me to understand this problem more?
I really appreciate any help.
Thank you,
Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: Question about nested loop
On Monday, December 31, 2012 5:25:16 AM UTC-6, Gisle Vanem wrote:
> "Isaac Won" wrote:
> > while c < 10:
> >     c = c + 1
> >     for columns in ( raw.strip().split() for raw in f ):
> >         b.append(columns[c])
> >     y = np.array(b, float)
> >     print c, y
> >
> > I thought this would get the arrays of columns[5] to [10],
> > but I only got the array of columns[5] repeated.
>
> I don't pretend to know list comprehensions very well, but
> 'c' isn't incremented in the inner loop ( .. for raw in f).
> Hence you only append to columns[5].
>
> Maybe you could use another 'd' indexer inside the inner loop?
> But there must be a more elegant way to solve your issue. (I'm a
> PyCommer myself.)
>
> --gv

Thank you for your advice. I agree with you and tried to increment c in the inner loop, but still without much success. Anyway, many thanks.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Question about nested loop
On Monday, December 31, 2012 6:59:34 AM UTC-6, Hans Mulder wrote:
> On 31/12/12 11:02:56, Isaac Won wrote:
> > Hi all,
> > I am very much a novice at Python. Currently, I am trying to read
> > successive columns of a file repeatedly into arrays.
> > My code is like below:
> >
> > import numpy as np
> >
> > b = []
> > c = 4
> > f = open("text.file", "r")
> >
> > while c < 10:
> >     c = c + 1
> >     for columns in ( raw.strip().split() for raw in f ):
> >         b.append(columns[c])
> >     y = np.array(b, float)
> >     print c, y
> >
> > I thought this would get the arrays of columns[5] to columns[10], but I
> > only got the array of columns[5] repeated.
> >
> > The result was something like:
> >
> > 5 [1 2 3 4 .., 10 9 8]
> > 6 [1 2 3 4 .., 10 9 8]
> > 7 [1 2 3 4 .., 10 9 8]
> > 8 [1 2 3 4 .., 10 9 8]
> > 9 [1 2 3 4 .., 10 9 8]
> > 10 [1 2 3 4 .., 10 9 8]
> >
> > What I can't understand is that even though c is incremented up to 10,
> > the y arrays stay the same.
> >
> > Would someone help me to understand this problem more?
>
> That's because the inner loop reads from the file until it reaches
> the end of the file. Since you're not resetting the file pointer,
> during the second and later runs of the outer loop, the inner loop
> starts at the end of the file and terminates without any action.
>
> You'd get more interesting results if you rewind the file:
>
> import numpy as np
>
> b = []
> c = 4
> f = open("text.file", "r")
>
> while c < 10:
>     c = c + 1
>     f.seek(0, 0)
>     for columns in ( raw.strip().split() for raw in f ):
>         b.append(columns[c])
>     y = np.array(b, float)
>     print c, y
>
> It's a bit inefficient to read the same file several times.
> You might consider reading it just once. For example:
>
> import numpy as np
>
> b = []
>
> f = open("text.file", "r")
> data = [ line.strip().split() for line in f ]
> f.close()
>
> for c in xrange(5, 11):
>     for row in data:
>         b.append(row[c])
>     y = np.array(b, float)
>     print c, y
>
> Hope this helps,
>
> -- HansM
Hi Hans,
I appreciate your advice and kind tips.
Both pieces of code you gave seem pretty interesting. Both work for
incrementing the inner loop index, but the resulting y arrays accumulate,
such as [1,2,3], [1,2,3,4,5,6], [1,2,3,4,5,6,7,8,9]. Anyhow, thank you
very much for your help; I will look at this problem in more detail.
Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Avoiding the accumulation of arrays when using a loop
Hi all,
Thanks to Hans, I have made good progress on my problem.
The following is Hans's idea:
import numpy as np
b = []
c = 4
f = open("text.file", "r")
while c < 10:
    c = c + 1
    f.seek(0, 0)
    for columns in ( raw.strip().split() for raw in f ):
        b.append(columns[c])
    y = np.array(b, float)
    print c, y
It's a bit inefficient to read the same file several times.
You might consider reading it just once. For example:
import numpy as np
b = []
f = open("text.file", "r")
data = [ line.strip().split() for line in f ]
f.close()
for c in xrange(5, 11):
    for row in data:
        b.append(row[c])
    y = np.array(b, float)
    print c, y
---
It is a great idea, but I found some problems. I want each individual array of
y. However, these two codes produce accumulated arrays such as [1,2,3],
[1,2,3,4,5,6], [1,2,3,4,5,6,7,8,9] and so on. I have tried to reinitialize the
for loop each time to produce each array. This effort has not been very successful.
Do you guys have any idea? I will really appreciate any help and ideas.
Thanks,
Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: Avoiding the accumulation of arrays when using a loop
On Wednesday, January 2, 2013 5:54:18 PM UTC-6, Dave Angel wrote:
> On 01/02/2013 05:21 PM, Isaac Won wrote:
> > Hi all,
> >
> > Thanks to Hans, I have made good progress on my problem.
> >
> > The following is Hans's idea:
> >
> > import numpy as np
> >
> > b = []
> > c = 4
> > f = open("text.file", "r")
> >
> > while c < 10:
> >     c = c + 1
> >     f.seek(0, 0)
> >     for columns in ( raw.strip().split() for raw in f ):
> >         b.append(columns[c])
> >     y = np.array(b, float)
> >     print c, y
> >
> > It's a bit inefficient to read the same file several times.
>
> Don't bet on it. The OS and the libraries and Python each do some
> buffering, so it might be nearly as fast to just reread if it's a small
> file. And if it's a huge one, the list would be even bigger. So the
> only sizes where the second approach is likely better is the mid-size file.
>
> > You might consider reading it just once. For example:
> >
> > import numpy as np
> >
> > b = []
> >
> > f = open("text.file", "r")
> > data = [ line.strip().split() for line in f ]
> > f.close()
> >
> > for c in xrange(5, 11):
> >     for row in data:
> >         b.append(row[c])
> >     y = np.array(b, float)
> >     print c, y
> > ---
> >
> > It is a great idea, but I found some problems. I want each individual array
> > of y. However, these two codes produce accumulated arrays such as [1,2,3],
> > [1,2,3,4,5,6], [1,2,3,4,5,6,7,8,9] and so on. I have tried to reinitialize
> > the for loop each time to produce each array. This effort has not been very
> > successful.
> > Do you guys have any idea? I will really appreciate any help and ideas.
>
> Your description is very confusing. But I don't see why you just don't
> set b = [] inside the outer loop, rather than doing it at the beginning
> of the program.
>
> for c in xrange(5, 11):
>     b = []
>     for row in data:
>         b.append(row[c])
>
> --
> DaveA
Hi Dave,
I really appreciate your advice. It was really helpful.
Isaac
--
http://mail.python.org/mailman/listinfo/python-list
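[Editor's note: combining Dave's fix (reset b inside the outer loop) with Hans's read-once version yields one fresh array per column. A minimal sketch, assuming a whitespace-separated "text.file" with at least eleven numeric columns:]

import numpy as np

with open("text.file") as f:
    data = [line.split() for line in f]

for c in range(5, 11):
    b = [row[c] for row in data]   # b is rebuilt each pass, so nothing accumulates
    y = np.array(b, float)
    print(c, y)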
Memory error with quadratic interpolation
Hi all,

I have tried to use different interpolation methods with Scipy. My code seems just fine with linear interpolation, but shows a memory error with quadratic. I am a novice at Python. I will appreciate any help.

#code
f = open(filin, "r")
for columns in ( raw.strip().split() for raw in f ):
    a.append(columns[5])
x = np.array(a, float)

not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
p = interp(indices)
----
The number of data points is 31747.

Thank you,

Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: Memory error with quadratic interpolation
On Tuesday, January 22, 2013 10:06:41 PM UTC-6, Isaac Won wrote:
> Hi all,
>
> I have tried to use different interpolation methods with Scipy. My code
> seems just fine with linear interpolation, but shows a memory error with
> quadratic. I am a novice at Python. I will appreciate any help.
>
> #code
> f = open(filin, "r")
> for columns in ( raw.strip().split() for raw in f ):
>     a.append(columns[5])
> x = np.array(a, float)
>
> not_nan = np.logical_not(np.isnan(x))
> indices = np.arange(len(x))
> interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
> p = interp(indices)
>
> ----
> The number of data points is 31747.

I really appreciate both Ulrich and Oscar.

To Oscar: my actual error message is:

  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 311, in __init__
    self._spline = splmake(x,oriented_y,order=order)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 809, in splmake
    coefs = func(xk, yk, order, conds, B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 530, in _find_smoothest
    u,s,vh = np.dual.svd(B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py", line 91, in svd
    full_matrices=full_matrices, overwrite_a = overwrite_a)
MemoryError
--
Thank you,

Hoonill
--
http://mail.python.org/mailman/listinfo/python-list
Re: Memory error with quadratic interpolation
On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:
> On 23 January 2013 08:55, Ulrich Eckhardt wrote:
> > Am 23.01.2013 05:06, schrieb Isaac Won:
> >> I have tried to use different interpolation methods with Scipy. My
> >> code seems just fine with linear interpolation, but shows memory
> >> error with quadratic. I am a novice for python. I will appreciate any
> >> help.
> [SNIP]
> > Concerning the rest of your problems, there is lots of code and the datafile
> > missing. However, there is also too much of it; try replacing the file with
> > generated data and remove everything from the code that is not absolutely
> > necessary.
>
> Also please copy and paste the actual error message rather than paraphrasing it.
>
> Oscar

I really appreciate both Ulrich and Oscar.

To Oscar: my actual error message is:

  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 311, in __init__
    self._spline = splmake(x,oriented_y,order=order)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 809, in splmake
    coefs = func(xk, yk, order, conds, B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 530, in _find_smoothest
    u,s,vh = np.dual.svd(B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py", line 91, in svd
    full_matrices=full_matrices, overwrite_a = overwrite_a)
MemoryError
--
Thank you,

Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: Memory error with quadratic interpolation
On Wednesday, January 23, 2013 2:55:14 AM UTC-6, Ulrich Eckhardt wrote:
> Am 23.01.2013 05:06, schrieb Isaac Won:
> > I have tried to use different interpolation methods with Scipy. My
> > code seems just fine with linear interpolation, but shows memory
> > error with quadratic. I am a novice for python. I will appreciate any
> > help.
> >
> > #code
> > f = open(filin, "r")
>
> Check out the "with open(...) as f" syntax.
>
> > for columns in ( raw.strip().split() for raw in f ):
>
> For the record, this first builds a sequence and then iterates over that
> sequence. This is not very memory-efficient; try this instead:
>
> for line in f:
>     columns = line.strip().split()
>
> Concerning the rest of your problems, there is lots of code and the
> datafile missing. However, there is also too much of it; try replacing
> the file with generated data and remove everything from the code that is
> not absolutely necessary.
>
> Good luck!
>
> Uli

Hi Ulrich,

I tried to change the code following your advice, but it still doesn't seem to work. My adjusted code is:

a = []
with open(filin, "r") as f:
    for line in f:
        columns = line.strip().split()
        a.append(columns[5])
x = np.array(a, float)

not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
p = interp(indices)
-
And the full error message is:

interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 311, in __init__
    self._spline = splmake(x,oriented_y,order=order)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 809, in splmake
    coefs = func(xk, yk, order, conds, B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 530, in _find_smoothest
    u,s,vh = np.dual.svd(B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py", line 91, in svd
    full_matrices=full_matrices, overwrite_a = overwrite_a)
MemoryError
---
Could you give me some advice for this situation?

Thank you always,

Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: Memory error with quadratic interpolation
On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
> On 23 January 2013 14:28, Isaac Won wrote:
> > On Wednesday, January 23, 2013 4:08:13 AM UTC-6, Oscar Benjamin wrote:
> >
> > To Oscar
> > My actual error message is:
> >   File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 311, in __init__
> >     self._spline = splmake(x,oriented_y,order=order)
> >   File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 809, in splmake
> >     coefs = func(xk, yk, order, conds, B)
> >   File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 530, in _find_smoothest
> >     u,s,vh = np.dual.svd(B)
> >   File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py", line 91, in svd
> >     full_matrices=full_matrices, overwrite_a = overwrite_a)
> > MemoryError
>
> Are you sure that's the *whole* error message? The traceback only
> refers to the scipy modules. I can't see the line from your code that
> is generating the error.
>
> Oscar

Dear Oscar,

Following is the full error message after I adjusted the code following Ulrich's advice:

interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 311, in __init__
    self._spline = splmake(x,oriented_y,order=order)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 809, in splmake
    coefs = func(xk, yk, order, conds, B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 530, in _find_smoothest
    u,s,vh = np.dual.svd(B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py", line 91, in svd
    full_matrices=full_matrices, overwrite_a = overwrite_a)
MemoryError
--
Thank you,

Hoonill
--
http://mail.python.org/mailman/listinfo/python-list
Re: Memory error with quadratic interpolation
On Wednesday, January 23, 2013 10:51:43 AM UTC-6, Oscar Benjamin wrote:
> On 23 January 2013 14:57, Isaac Won wrote:
> > On Wednesday, January 23, 2013 8:40:54 AM UTC-6, Oscar Benjamin wrote:
> >> On 23 January 2013 14:28, Isaac Won wrote:
> [SNIP]
> >
> > Following is the full error message after I adjusted the code following
> > Ulrich's advice:
> >
> > interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
> >   File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 311, in __init__
> >     self._spline = splmake(x,oriented_y,order=order)
> >   File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 809, in splmake
> >     coefs = func(xk, yk, order, conds, B)
> >   File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 530, in _find_smoothest
> >     u,s,vh = np.dual.svd(B)
> >   File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py", line 91, in svd
> >     full_matrices=full_matrices, overwrite_a = overwrite_a)
> > MemoryError
>
> Where is the new code? You should show full working code (with the
> import statements) and the full error that is generated by exactly
> that code. If possible you should also write code that someone else
> could run even without having access to your data files. If you did
> that in your first post, you'd probably have an answer to your problem
> by now.
>
> Here is a version of your code that many people on this list can test
> straight away:
>
> import numpy as np
> from scipy.interpolate import interp1d
>
> x = np.array(31747 * [0.0], float)
> indices = np.arange(len(x))
> interp = interp1d(indices, x, kind='quadratic')
>
> Running this gives the following error:
>
> ~$ python tmp.py
> Traceback (most recent call last):
>   File "tmp.py", line 5, in <module>
>     interp = interp1d(indices, x, kind='quadratic')
>   File "/usr/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 308, in __init__
>     self._spline = splmake(x,oriented_y,order=order)
>   File "/usr/lib/python2.7/dist-packages/scipy/interpolate/interpolate.py", line 805, in splmake
>     B = _fitpack._bsplmat(order, xk)
> MemoryError
>
> Unless I've misunderstood how this function is supposed to be used, it
> just doesn't really seem to work for arrays of much more than a few
> hundred elements.
>
> Oscar
Thank you Oscar for your help and advice.
I agree with you, so I tried to find a way to solve this problem.
My full adjusted code is:

from scipy.interpolate import interp1d
import numpy as np
import matplotlib.pyplot as plt

a = []
with open(filin, "r") as f:
    for line in f:
        columns = line.strip().split()
        a.append(columns[5])
x = np.array(a, float)
not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
p = interp(indices)
k = np.arange(31747)
plt.subplot(211)
plt.plot(k, p)
plt.xlabel('Quadratic interpolation')
plt.subplot(212)
plt.plot(k, x)
plt.show()
-
Whole error message was:
Traceback (most recent call last):
  File "QI1.py", line 22, in <module>
    interp = interp1d(indices[not_nan], x[not_nan], kind = 'quadratic')
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 311, in __init__
    self._spline = splmake(x,oriented_y,order=order)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 809, in splmake
    coefs = func(xk, yk, order, conds, B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py", line 530, in _find_smoothest
    u,s,vh = np.dual.svd(B)
  File "/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/linalg/decomp_svd.py", line 91, in svd
    full_matrices=full_matrices, overwrite_a = overwrite_a)
MemoryError
--
Thank you again Oscar,
Isaac
--
http://mail.python.org/mailman/listinfo/python-list
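[Editor's note: the old interp1d spline path builds a dense matrix and runs an SVD over it, which is what blows up at ~30000 points. One workaround (a sketch, not what the thread settled on) is SciPy's FITPACK-based splrep/splev, which handles arrays this size; the data here are synthetic:]

import numpy as np
from scipy.interpolate import splrep, splev

n = 31747
xs = np.arange(n, dtype=float)
ys = np.sin(xs / 500.0)          # synthetic stand-in for the column data

tck = splrep(xs, ys, k=2)        # quadratic B-spline fit via FITPACK
p = splev(xs, tck)               # evaluate at the original positions
print(np.max(np.abs(p - ys)))    # interpolation error at the knots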
Komodo, Python
I just started learning Python. I have Komodo 2.5 on my computer, and I installed Python 2.7. I tried to write Python scripts in Komodo, but every time I run the code there's always this error:

Traceback (most recent call last):
  File "C:\Program Files\ActiveState Komodo 2.5\callkomodo\kdb.py", line 920, in
    requestor, connection_port, cookie = ConnectToListener(localhost_addr, port)
  File "C:\Program Files\ActiveState Komodo 2.5\callkomodo\kdb.py", line 872, in ConnectToListener
    cookie = makeCookie()
  File "C:\Program Files\ActiveState Komodo 2.5\callkomodo\kdb.py", line 146, in makeCookie
    generator=whrandom.whrandom()
NameError: global name 'whrandom' is not defined

Is it a compatibility problem? Can anybody tell me how to fix it? Komodo is not free, so I don't want to uninstall it.
--
http://mail.python.org/mailman/listinfo/python-list
extract PDF pages
While pdftk is awesome (http://www.accesspdf.com/pdftk/), I am looking for a Python solution, just for PDF page extraction. Any hope?

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
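[Editor's note: pure-Python page extraction later became routine. A minimal sketch with the third-party pypdf package, which postdates this thread; the file names are made up:]

from pypdf import PdfReader, PdfWriter

reader = PdfReader("input.pdf")
writer = PdfWriter()

# keep pages 2-4 (zero-based indices 1..3)
for page in reader.pages[1:4]:
    writer.add_page(page)

with open("extracted.pdf", "wb") as out:
    writer.write(out)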
grep
What's the standard replacement for the obsolete grep module?

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: calling matlab
"hrh1818" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > There is a module named pymat avvailable from > http://sourceforge.net/projects/pymat that provides a limited set of > functions for intertfacing Python to Matlab. I think that pymat was superceded by mlabwrap http://mlabwrap.sourceforge.net/ Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: grep
"Fredrik Lundh" <[EMAIL PROTECTED]> wrote::
def grep(pattern, *files):
search = re.compile(pattern).search
for file in files:
for index, line in enumerate(open(file)):
if search(line):
print ":".join((file, str(index+1), line[:-1]))
grep("grep", *glob.glob("*.py"))
I was afraid the re module was the answer. ;-)
Use of enumerate is a nice idea.
Thanks.
Alan
--
http://mail.python.org/mailman/listinfo/python-list
best cumulative sum
What's a good way to produce a cumulative sum? E.g., given the list x:

cumx = x[:]
for i in range(1, len(x)):
    cumx[i] = cumx[i] + cumx[i-1]

What's the better way?

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
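[Editor's note: the built-in answer arrived later, in Python 3.2, as itertools.accumulate; a quick check:]

from itertools import accumulate

x = [1, 2, 3, 4]
print(list(accumulate(x)))  # [1, 3, 6, 10]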
Re: best cumulative sum
<[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> He seems to want scanl

Yes. But it's not in Python, right? (I know about Keller's version.)

Robert Kern wrote:
> Define better. More accurate? Less code?

Good point. As Bonono (?) suggested: I'd most like a solution that relies on a built-in to give me both of those. (Pretty is good too.) Like SciPy's cumsum.

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: best cumulative sum
> Alan Isaac wrote:
> > Like SciPy's cumsum.

"Colin J. Williams" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> Doesn't numarray handle this?

Sure. One might say that numarray is in the process of becoming scipy. But I was looking for a solution for when these are not available. Something like:

def cumreduce(func, seq, init = None):
    """Return list of cumulative reductions.

    Example use:
    >>> cumreduce(operator.mul, range(1,5), init=1)
    [1, 2, 6, 24]

    :author: Alan Isaac
    :license: public domain
    """
    if not seq:
        cr = [init]*bool(init)
    else:
        cr = [seq[0]] * len(seq)
        if init:
            cr[0] = func(cr[0], init)
    for idx in range(1, len(seq)):
        cr[idx] = func(cr[idx-1], seq[idx])
    return cr
--
http://mail.python.org/mailman/listinfo/python-list
Re: Converting a flat list to a list of tuples
"Duncan Booth" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> >>> aList = ['a', 1, 'b', 2, 'c', 3]
> >>> it = iter(aList)
> >>> zip(it, it)
> [('a', 1), ('b', 2), ('c', 3)]
That behavior is currently an accident.
http://sourceforge.net/tracker/?group_id=5470&atid=105470&func=detail&aid=1121416
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: best cumulative sum
> Michael Spencer wrote:
> > This can be written more concisely as a generator:

<[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> If iterable has no elements, I believe the behaviour should be [init];
> there is also the case of init=None that needs to be handled.

Right. So it is "more concise" only by being incomplete, right? What other advantages might it have?

> otherwise, that is more or less what I wrote for my scanl/scanl1.

I didn't see a post with that code.

Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: best cumulative sum
"Michael Spencer" <[EMAIL PROTECTED]> wrote in message news:mailman.1054.1132707811.18701.python-> This can be written more concisely as a generator: > > >>> import operator > >>> def ireduce(func, iterable, init): > ... for i in iterable: > ... init = func(init, i) > ... yield init OK, this might do it. But is a generator "better"? (I assume accuracy is the same, so what about speed?) def ireduce(func, iterable, init=None): if not init: iterable = iter(iterable) init = iterable.next() yield init elif not iterable: yield init for item in iterable: init = func(init, item) yield init Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: best cumulative sum
"Peter Otten" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > - allows arbitrary iterables, not sequences only > - smaller memory footprint if sequential access to the items is sufficient Sure; I meant aside from that. > - fewer special cases, therefore > - less error prone, e. g. >+ does your implementation work for functions with > f(a, b) != f(b, a)? See news:[EMAIL PROTECTED] >+ won't users be surprised that > cumreduce(f, [1]) == cumreduce(f, [], 1) > != > cumreduce(f, [0]) == cumreduce(f, [], 0) THANKS! > Of course nothing can beat a plain old for loop in terms of readability and > -- most likely -- speed. OK. Thanks, Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: best cumulative sum
"Peter Otten" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Of course nothing can beat a plain old for loop in terms of readability and > -- most likely -- speed. Here are two versions, meant to be comparable. Thanks, Alan Isaac def cumreduce(func, seq, init = None): cr = seq[:] if not(init is None): if seq: cr[0] = func(init,seq[0]) else: cr = [init] for idx in range(1,len(seq)): cr[idx] = func(cr[idx-1],seq[idx]) return cr def ireduce(func, iterable, init=None): if init is None: iterable = iter(iterable) init = iterable.next() yield init elif not iterable: yield init for item in iterable: init = func(init, item) yield init -- http://mail.python.org/mailman/listinfo/python-list
Re: best cumulative sum
"Peter Otten" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > You are in for a surprise here: You got that right! > >>> def empty(): > ... for item in []: > ... yield item > ... > >>> bool(empty()) > True Ouch. > >>> bool(iter([])) > True # python 2.3 and probably 2.5 > > >>> bool(iter([])) > False # python 2.4 Double ouch. I was relying on Python 2.4 behavior. What is the reasoning behind the changes? (Can you offer a URL to a discussion?) So, is the only way to test for an empty iterable to see if it can generate an item? I found this: http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/413614 Seems like a reason to rely on sequences ... Thanks, Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: best cumulative sum
"Peter Otten" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > I'd rather have a second look whether the test is really needed. That's too obscure of a hint. Can you be a bit more explicit? Here's an example (below). You're saying I think that most of it is unnecessary. Thanks, Alan def ireduce(func, iterable, init=None): iterable = iter(iterable) if init is None: init = iterable.next() yield init else: try: first = iterable.next() init = func(init, first) yield init except StopIteration: yield init for item in iterable: init = func(init, item) yield init -- http://mail.python.org/mailman/listinfo/python-list
Re: FTP over TLS
"Carl Waldbieser" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Does anyone know of any good examples for writing client side code to upload > files over a secure FTP connection? http://trevp.net/tlslite/ Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: best cumulative sum
"Peter Otten" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > I think that the test for an empty iterator makes ireduce() unintuitive. OK. I misunderstood you point. But that is needed to match the behavior of reduce. >>> reduce(operator.add,[],42) 42 Thanks, Alan -- http://mail.python.org/mailman/listinfo/python-list
Re: best cumulative sum
"Peter Otten" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > sufficiently similar I think I understand your points now. But I wanted to match these cases: >>> import operator >>> reduce(operator.add,[],42) 42 >>> reduce(operator.add,[1],42) 43 The idea is that the i-th yield of i-reduce shd be the result of reduce on seq[:i] with the given initializer. That said, for the applications I first intended, yes it is sufficiently similar. For now, I'll stick with the version below. Thanks, Alan def ireduce(func, iterable, init=None): iterable = iter(iterable) if init is None: init = iterable.next() yield init else: try: init = func(init, iterable.next()) yield init except StopIteration: yield init for item in iterable: init = func(init, item) yield init -- http://mail.python.org/mailman/listinfo/python-list
Re: python speed
Peter Hansen wrote:
> David Rasmussen wrote:
> > Frithiof Andreas Jensen wrote:
> > > From the speed requirement: Is that correspondence chess by any chance?
> >
> > Regular chess at tournament time controls requires speed too. Any pure
> > Python chess program would lose badly to the best C/C++ programs out
> > there now.
> >
> > I would also like to see Half Life 2 in pure Python.
>
> True, but so what? Why did you suddenly change the discussion to
> require "pure" Python? And please define "pure" Python, given that the
> interpreter and many builtins, not to mention many widely used extension
> modules, are coded in C? And are you not allowed to use any of the
> performance-boosting techniques available for Python, like Pyrex or
> Psyco? Why such restrictions, when these are things Python programs use
> on a daily basis: these are *part* of Python, as much as the -O switch
> on the compiler is part of C/C++.
>
> Okay, let's compare a "pure" Python program (if you can define it in any
> meaningful, practical way) with a "pure" Java program, running on a
> non-JIT interpreter and with optimizations turned off (because, of
> course, those optimizations are umm... somehow.. not "pure"...?).
>
> Judging by the other posts in this thread, the gauntlet is down: Python
> is faster than Java. Let those who believe otherwise prove their point
> with facts, and without artificially handcuffing their opponents with
> non-real-world "purity" requirements.
>
> -Peter

That form of argument is listed as one of the principal forms of illogical thinking in "Being Logical" by D. Q. McInerny, as "An Inability to Disprove Does Not Prove": "The fact that there is no concrete proof against a position does not constitute an argument in favour of the position. I cannot claim to be right simply because you can't prove me to be wrong."
--
http://mail.python.org/mailman/listinfo/python-list
Re: python speed
Peter Hansen wrote:
> Isaac Gouy wrote:
> > Peter Hansen wrote:
> > > Judging by the other posts in this thread, the gauntlet is down: Python
> > > is faster than Java. Let those who believe otherwise prove their point
> > > with facts, and without artificially handcuffing their opponents with
> > > non-real-world "purity" requirements.
> >
> > That form of argument is listed as one of the principal forms of
> > illogical thinking in "Being Logical" by D. Q. McInerny: "An Inability to
> > Disprove Does Not Prove".
>
> Good thing this is the form of argument *against* which I was arguing,
> rather than that which I choose to use myself. (Read very carefully, if
> you really think I was saying otherwise, and point out exactly where I
> made any such claims for my own part. In fact, I was referencing the
> arguments of others -- who *were* supporting their arguments with facts,
> as near as I can tell -- and I was calling on the opposition to do the
> same, and without changing the rules mid-discussion.)
>
> > "The fact that there is no concrete proof against a position does not
> > constitute an argument in favour of the position. I cannot claim to be
> > right simply because you can't prove me to be wrong."
>
> Isn't that what I was saying? That those who claim Python isn't faster
> were not supporting their arguments with actual facts?
>
> -Peter

*Python is faster than Java. Let those who believe otherwise prove their point with facts.*

We must be looking at different threads :-)

AFAICT the only posting that provided something like "facts" was
http://groups.google.com/group/comp.lang.python/msg/309e439697279060

which stated "Python is doing the heavy lifting with GMPY which is a compiled C program with a Python wrapper" - but didn't seem to compare that to GMPY with a Java wrapper?
--
http://mail.python.org/mailman/listinfo/python-list
Re: python speed
[EMAIL PROTECTED] wrote:
> Isaac Gouy wrote:
> > Which stated "Python is doing the heavy lifting with GMPY which is a
> > compiled C program with a Python wrapper" - but didn't seem to compare
> > that to GMPY with a Java wrapper?
>
> You are missing the main idea: Java is by design a general purpose
> programming language. That's why all "GMPYs" and alike are written in
> Java - now wrappers to C-libraries. Python, by design, is glue
> language. Python program is assembly of C extensions and buildins
> wrapped in Python sintax.
>
> IHMO "real life" benchmark yuo are critisizing represents real life
> situation.

"1.1.3 What is Python good for? Python is a high-level general-purpose programming language that can be applied to many different classes of problems."
http://www.python.org/doc/faq/general.html#what-is-python-good-for
--
http://mail.python.org/mailman/listinfo/python-list
Re: python speed
[EMAIL PROTECTED] wrote:
> Isaac Gouy wrote:
> > Peter Hansen wrote:
> > > Isaac Gouy wrote:
> > > > Peter Hansen wrote:
> > > > > Judging by the other posts in this thread, the gauntlet is down: Python
> > > > > is faster than Java. Let those who believe otherwise prove their point
> > > > > with facts, and without artificially handcuffing their opponents with
> > > > > non-real-world "purity" requirements.
> > > >
> > > > That form of argument is listed as one of the principal forms of
> > > > illogical thinking in "Being Logical" by D. Q. McInerny: "An Inability
> > > > to Disprove Does Not Prove".
> > >
> > > Good thing this is the form of argument *against* which I was arguing,
> > > rather than that which I choose to use myself. [...]
> > >
> > > Isn't that what I was saying? That those who claim Python isn't faster
> > > were not supporting their arguments with actual facts?
> > >
> > > -Peter
> >
> > *Python is faster than Java. Let those who believe otherwise prove
> > their point with facts.*
> >
> > We must be looking at different threads :-)
> >
> > AFAICT the only posting that provided something like "facts" was
> > http://groups.google.com/group/comp.lang.python/msg/309e439697279060
> >
> > which stated "Python is doing the heavy lifting with GMPY which is a
> > compiled C program with a Python wrapper" - but didn't seem to compare
> > that to GMPY with a Java wrapper?
>
> Is there such an animal? I only know about Java's BigInteger.

Google. http://dev.i2p.net/javadoc/net/i2p/util/NativeBigInteger.html

> And if there is, it just proves my point that benchmarks are
> worthless.

How so?
--
http://mail.python.org/mailman/listinfo/python-list
Re: python speed
Fredrik Lundh wrote:
> Cameron Laird wrote:
> > > You are missing the main idea: Java is by design a general purpose
> > > programming language. That's why all "GMPYs" and alike are written in
> > > Java - now wrappers to C-libraries. Python, by design, is glue
> > .
> > I don't understand the sentence, "That's why all 'GMPYs' and alike ..."
> > Are you saying that reuse of code written in languages other than Java
> > is NOT important to Java? I think that's a reasonable proposition; I'm
> > just having trouble following your paragraph.
>
> replace "now" with "not" or perhaps "instead of being implemented as",
> and it may become a bit easier to parse.
>
> and yes, the proposition matches my experiences. java heads prefer to do
> everything in java, while us pythoneers happily mix and match whenever we
> can... (which is why guoy's "benchmarks" says so little about Python; if you
> cannot use smart algorithms and extensions where appropriate, you're not
> really using Python as it's supposed to be used)

If you can't use C where appropriate, you're not really using Python as it's supposed to be used? :-)
--
http://mail.python.org/mailman/listinfo/python-list
Re: python speed
Fredrik Lundh wrote:
> Isaac Gouy wrote:
> > > and yes, the proposition matches my experiences. java heads prefer to do
> > > everything in java, while us pythoneers happily mix and match whenever we
> > > can... (which is why guoy's "benchmarks" says so little about Python; if
> > > you cannot use smart algorithms and extensions where appropriate, you're
> > > not really using Python as it's supposed to be used)
> >
> > If you can't use C where appropriate, you're not really using Python as
> > it's supposed to be used? :-)
>
> who's talking about C ? and what's the connection between C and smart
> algorithms ?

[EMAIL PROTECTED] wrote:
> Python program is assembly of C extensions and buildins
> wrapped in Python sintax.
--
http://mail.python.org/mailman/listinfo/python-list
Re: Dr. Dobb's Python-URL! - weekly Python news and links (Dec 7)
"Cameron Laird" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Jibes against the lambda-clingers lead eventually to serious > questions of style in regard to variable namespacing, > lifespan, cleanup, and so on: > http://groups.google.com/group/comp.lang.python/browse_thread/thread/ad0e15cb6b8f2c32/ #evaluate polynomial (coefs) at x using Horner's ruledef horner(coefs,x): return reduce(lambda a1,a2: a1*x+a2,coefs)'Nuf said.Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: lambda (and reduce) are valuable
>>> Jibes against the lambda-clingers lead eventually to serious
>>> questions of style in regard to variable namespacing,
>>> lifespan, cleanup, and so on:
>>> http://groups.google.com/group/comp.lang.python/browse_thread/thread/ad0e15cb6b8f2c32/

Alan Isaac <[EMAIL PROTECTED]> wrote:
>> # evaluate polynomial (coefs) at x using Horner's rule
>> def horner(coefs, x): return reduce(lambda a1, a2: a1*x + a2, coefs)

"Cameron Laird" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> I *think* you're supporting a claim
> about the value of lambda with a specific example. Do I have that
> right? Are you saying that your definition of horner() would suffer
> greatly without lambda?

It is a simple example of how lambda and reduce can be very expressive. Anyone who understands Horner's rule can see at a glance that this code implements it. Anyone who has bothered to learn what lambda and reduce do can see at a glance what the algorithm is. It just cannot get simpler or more expressive.

Suffer greatly? Surely not. For "suffer greatly" you would probably need to turn to people who do a lot of event-driven GUI programming. But suffer, yes. Simplicity and expressiveness are valuable. That is the point.

Cheers,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
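[Editor's note: a quick sanity check of the Horner one-liner; functools.reduce is the Python 3 spelling, and the coefficients are made up:]

from functools import reduce  # reduce is a builtin in Python 2

def horner(coefs, x):
    return reduce(lambda a1, a2: a1 * x + a2, coefs)

# coefficients highest power first: 2*x**2 + 3*x + 4 at x = 10
print(horner([2, 3, 4], 10))  # 234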
Re: problems with duplicating and slicing an array
Yun Mao wrote:
> a[ [1,0], [0,1] ], which should give me
> [[4, 5], [1, 2]]

Numeric: take(take(a, [1,0]), [0,1], 1)

fwiw,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
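[Editor's note: with today's NumPy (rather than the old Numeric module), the same cross-product indexing can be spelled with np.ix_; a small sketch using the thread's example values:]

import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# rows [1, 0] crossed with columns [0, 1]
print(a[np.ix_([1, 0], [0, 1])])
# [[4 5]
#  [1 2]]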
Re: python without OO
>>>>> "beliavsky" == beliavsky <[EMAIL PROTECTED]> writes: beliavsky> I think the OO way is slightly more obscure. It's beliavsky> obvious what x = reverse(x) does, but it is not clear beliavsky> unless you have the source code whether x.reverse() beliavsky> reverses x or if it returns a reversed list. What make it so clear to you that reverse(x) will always return a reversed list rather than reversing x in place and return nothing? beliavsky> It is clearer and more concise to write beliavsky> z = reverse(x) + reverse(y) beliavsky> than beliavsky> x.reverse() beliavsky> y.reverse() beliavsky> z = x + y This isn't anything to do with OO programming. It is something about using in interface that your audience expects. You have exactly the same problem whether you are using procedural or OO style. It might be a case for functional programming, but that's something off-topic. beliavsky> Furthermore, if in Python the algorithm for the reverse beliavsky> function applies to many kinds of objects, it just beliavsky> needs to be coded once, whereas a reverse method would beliavsky> have to provided for each class that uses it (perhaps beliavsky> through inheritance). That the reverse() wants to be a function doesn't mean that the thing that reverse() operate on doesn't want to be an object. So this isn't very clear a problem about OO style vs. procedural style, but instead a problem about "generic" programming style vs. "concrete" programming style. On the other hand, if the thing that reverse() operate on isn't an object sharing the same interface, it will be more clumsy to implement a generic reverse() that works for all the different kinds of object---even if they share similar interfaces. Try to implement a generic "reverse" in C when the different type of containers are encoded as different style struct's accessible from different function, and you will understand what I mean. So this is, marginally, a case *for* OO style. Regards, Isaac. -- http://mail.python.org/mailman/listinfo/python-list
extract files from MS-TNEF attachments
I'm looking for Python code to extract files from MS-TNEF attachments. (I'm aware of the C code at http://tnef.sourceforge.net/ ) Thanks, Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Python API to manipulate CAB files.
Does anyone know of a Python API to manipulate CAB files?

Thanks,

--
Isaac Rodriguez
SWE, Autodesk

There are 10 types of people: those who understand binary, and those who don't.
--
http://mail.python.org/mailman/listinfo/python-list
CAB files manipulation API (again).
Hi,

I am sorry to post this question again, but when I did it the other day, my news reader got stuck downloading new messages, and it has been that way for a few days. It still gets stuck if I try to download old messages.

Anyway, does anyone know of a Python module, API, etc. that allows one to manipulate CAB files?

Thanks,

--
Isaac Rodriguez
SWE, Autodesk

There are 10 types of people: those who understand binary, and those who don't.
--
http://mail.python.org/mailman/listinfo/python-list
tuple.index(item)
Why don't tuples support an index method? It seems natural enough ...

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
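[Editor's note: tuples did grow index and count methods in Python 2.6; a quick check:]

t = ('a', 'b', 'c', 'b')
print(t.index('b'))  # 1
print(t.count('b'))  # 2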
Re: Returning histogram-like data for items in a list
"Ric Deez" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > I have a list: > L1 = [1,1,1,2,2,3] > How can I easily turn this into a list of tuples where the first element > is the list element and the second is the number of times it occurs in > the list (I think that this is referred to as a histogram): For ease of reading (but not efficiency) I like: hist = [(x,L1.count(x)) for x in set(L1)] See http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/277600 Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: Software needed
"niXin" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Can anyone direct me to where I can find free software to do the following: > Document Management Software > --- > 1. Written in PHP or Python > 2. scanning feature - where I can scan a document http://furius.ca/nabu/ ? -- http://mail.python.org/mailman/listinfo/python-list
can list comprehensions replace map?
Newbie question: I have been generally open to the proposal that list comprehensions should replace 'map', but I ran into a need for something like map(None, x, y) when len(x) > len(y). It seems I cannot use 'zip' because I'll lose info from x. How do I do this as a list comprehension? (Or, more generally, what is the best way to do this without 'map'?)

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
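[Editor's note: the standard answer arrived later as itertools.izip_longest in Python 2.6, renamed zip_longest in Python 3; a quick sketch:]

from itertools import zip_longest

x = [1, 2, 3, 4]
y = ['a', 'b']
print(list(zip_longest(x, y)))
# [(1, 'a'), (2, 'b'), (3, None), (4, None)]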
MultiFile object does not iterate
Why is a MultiFile object not an iterator? For example, if

mfp = multifile.MultiFile(fp)

I cannot do

for line in mfp:
    do_something

Related: MultiFile.next seems badly named. (Something like next_section would be better.) Is this just historical accident, or am I missing the point?

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
FTP over SSL (explicit encryption)
I am looking for a pure Python secure FTP solution. Does it exist? I would have thought that the existence of OpenSSL would imply "yes", but I cannot find anything. ftplib does not seem to provide any secure services.

I know about ftputil (http://codespeak.net/mailman/listinfo/ftputil), but that does not seem to provide any secure services either. (Btw, Matt Croydon's intro is helpful for newbies: http://postneo.com/stories/2003/01/01/beyondTheBasicPythonFtplibExample.html )

I know about M2Crypto (http://sandbox.rulemaker.net/ngps/m2/), but that requires installing SWIG and OpenSSL. (If someone tells me they have found this trivial under Windows, I am willing to try ... )

I would have thought that this was a common need with a standard Python solution, so I suspect I'm overlooking something obvious.

Hoping,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
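[Editor's note: the standard library eventually grew exactly this as ftplib.FTP_TLS, added in Python 2.7/3.2. A minimal sketch; the host, credentials, and file name are placeholders:]

from ftplib import FTP_TLS

ftps = FTP_TLS("ftp.example.com")        # explicit FTPS (AUTH TLS)
ftps.login("user", "password")
ftps.prot_p()                            # encrypt the data channel as well
with open("report.pdf", "rb") as f:
    ftps.storbinary("STOR report.pdf", f)
ftps.quit()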
Re: FTP over SSL (explicit encryption)
"Eric Nieuwland" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Do you want SFTP or FTP/S? The latter. > I'm having a look at FTP/S right now. That's a little > more complicated, but it seems doable. > If I succeed, I guess I'll donate the stuff as an extension to ftplib. Great! Please post a link as soon as it is usable! Thanks, Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: FTP over SSL (explicit encryption)
> David Isaac wrote:
> > I am looking for a pure Python secure ftp solution.
> > Does it exist?

"Andrew MacIntyre" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> I recall coming across an extension package (pretty sure it wasn't pure
> Python anyway, certainly not for the SSL bits) with SFTP - I think the
> name was Paramiko or something like that.

Unfortunately that's SSH2 only. It is indeed pure Python:
http://www.lag.net/paramiko/
However, it requires the PyCrypto module:
http://www.amk.ca/python/code/crypto

Can you briefly outline how to use this as a client to upload and download files from a server using SFTP?

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: FTP over SSL (explicit encryption)
"Alan Isaac" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > http://www.lag.net/paramiko/ > However it requires the PyCrypto module. > http://www.amk.ca/python/code/crypto > > Can you briefly outline how to use this as a client > to upload and down files from a server using SFTP? OK, the mechanics are pretty easy. sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) sock.settimeout(20) sock.connect((hostname, port)) my_t = paramiko.Transport(sock) my_t.connect(hostkey=None ,username=username, password=password, pkey=None) my_chan = my_t.open_session() my_chan.get_pty() my_chan.invoke_shell() my_sftp = paramiko.SFTP.from_transport(my_t) Now my_sftp is a paramiko sftp_client. See paramiko's sftp_client.py to see what it can do. Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: Permutation Generator
"Talin" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > I wanted to share > this: a generator which returns all permutations of a list: Try this instead: def permuteg(lst): return ([lst[i]]+x for i in range(len(lst)) for x in permute(lst[:i]+lst[i+1:])) \ or [[]] Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: Permutation Generator
"Casey Hawthorne" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > It's hard to make "complete" permutation generators, Knuth has a whole > fascicle on it - "The Art of Computer Programming - Volume 4 Fascicle > 2 - Generating All Tuples and Permutations" - 2005 Can you elaborate a bit on what you mean? Given a list of unique elements, it is easy enough to produce a complete permutation generator in Python, in the sense that it yields every possible permuation. (See my previous post.) So you must mean something else? Cheers, Alan Isaac PS If the elements are not unique, that is easy enough to deal with too, as long as you say what you want the outcome to be. -- http://mail.python.org/mailman/listinfo/python-list
Re: FTP over SSL (explicit encryption)
"Eric Nieuwland" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > I'm having a look at FTP/S right now. That's a little > more complicated, but it seems doable. > If I succeed, I guess I'll donate the stuff as an extension to ftplib. Just found this: http://trevp.net/tlslite/ I haven't even had time to try it, but I thought you'd want to know. Cheers, Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: FTP over SSL (explicit encryption)
> > http://www.lag.net/paramiko/

"Alan Isaac" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
> sock.settimeout(20)
> sock.connect((hostname, port))
> my_t = paramiko.Transport(sock)
> my_t.connect(hostkey=None, username=username, password=password, pkey=None)
> my_chan = my_t.open_session()
> my_chan.get_pty()
> my_chan.invoke_shell()
> my_sftp = paramiko.SFTP.from_transport(my_t)
>
> Now my_sftp is a paramiko SFTP client.
> See paramiko's sftp_client.py to see what it can do.

When it rains, it pours. wxSFTP (http://home.gna.org/wxsftp/) uses paramiko and provides a GUI.

Cheers,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: a sequence question
"Nick Coghlan" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Using zip(*[iter(l)]*N) or zip(*(iter(l),)*N) simply extends the above to the > general case. Clearly true. But can you please go into much more detail for a newbie? I see that [iter(l)]*N produces an N element list with each element being the same iterator object, but after that http://www.python.org/doc/2.3.5/lib/built-in-funcs.html just didn't get me there. Thanks, Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: a sequence question
> Alan Isaac wrote:
> > I see that [iter(l)]*N produces an N element list with each element being
> > the same iterator object, but after that
> > http://www.python.org/doc/2.3.5/lib/built-in-funcs.html
> > just didn't get me there.

"Nick Coghlan" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> Py> itr = iter(range(10))
> Py> zipped = zip(*(itr,)*3)  # How does this bit work?
>
> # Manual zip, actually behaving somewhat like the real thing
> Py> itr = iter(range(10))
> Py> zipped = []
> Py> try:
> ...     while 1: zipped.append((itr.next(), itr.next(), itr.next()))
> ... except StopIteration:
> ...     pass

http://www.python.org/doc/2.3.5/lib/built-in-funcs.html says: "This function returns a list of tuples, where the i-th tuple contains the i-th element from each of the argument sequences."

So an "argument sequence" can in fact be any iterable, and these in turn are asked *in rotation* for their yield, right? So we pass the (identical) iterators in a tuple or list, thereby allowing a variable number of arguments. We unpack the argument list with '*', which means we have provided three iterables as arguments. And then zip works as "expected", once we have learned to expect zip to "rotate" through the arguments.

Is that about right? If that is right, I still cannot extract it from the doc cited above. So where should I have looked?

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
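[Editor's note: the idiom spelled out. All N arguments to zip are the *same* iterator, so each output tuple drains N successive items. A quick demonstration, in Python 3 spelling:]

l = list(range(9))
it = iter(l)
print(list(zip(it, it, it)))       # [(0, 1, 2), (3, 4, 5), (6, 7, 8)]

# the general N-at-a-time grouping idiom from the thread
N = 3
print(list(zip(*[iter(l)] * N)))   # same result, for any N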
Re: Iteration over two sequences
"Scott David Daniels" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]: > Numarray is the future, Numeric is the "past", This statement is not obviously true. See the recent discussion on the developer lists. (Search for Numeric3.) Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: a sequence question
"Nick Coghlan" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > A bug report on Sourceforge would help in getting the problem fixed for the 2.5 > docs Done. > For the 'left-to-right' evaluation thing, that's technically an implementation > artifact of the CPython implementation, since the zip() docs don't make any > promises. So updating the docs to include that information would probably be a > bigger issue, as it involves behaviour which is currently not defined by the > library. OK, thanks. Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: yield_all needed in Python
>>>>> "Douglas" == Douglas Alan <[EMAIL PROTECTED]> writes: Douglas> If you'll reread what I wrote, you'll see that I'm not Douglas> concerned with performance, but rather my concern is that Douglas> I want the syntactic sugar. I'm tired of writing code Douglas> that looks like Douglas> def foogen(arg1): Douglas> def foogen1(arg2): Douglas> # Some code here Douglas> # Some code here Douglas> for e in foogen1(arg3): yield e Douglas> # Some code here Douglas> for e in foogen1(arg4): yield e Douglas> # Some code here Douglas> for e in foogen1(arg5): yield e Douglas> # Some code here Douglas> for e in foogen1(arg6): yield e How about writing it like the following? def gen_all(gen): for e in gen: yield e def foogen(arg1): def foogen1(arg2): # Some code here # Some code here gen_all(arg3) # Some code here gen_all(arg4) # Some code here gen_all(arg5) # Some code here gen_all(arg6) Regards, Isaac. -- http://mail.python.org/mailman/listinfo/python-list
Re: yield_all needed in Python
>>>>> "Isaac" == Isaac To <[EMAIL PROTECTED]> writes:
def gen_all(gen):
    for e in gen:
        yield e

def foogen(arg1):
    def foogen1(arg2):
        # Some code here
    # Some code here
    gen_all(arg3)
    ^ I mean foogen1(arg3), obviously, and similar for below
    # Some code here
    gen_all(arg4)
    # Some code here
    gen_all(arg5)
    # Some code here
    gen_all(arg6)
Regards,
Isaac.
--
http://mail.python.org/mailman/listinfo/python-list
Re: yield_all needed in Python
>>>>> "Douglas" == Douglas Alan <[EMAIL PROTECTED]> writes: Douglas> If you actually try doing this, you will see why I want Douglas> "yield_all". Oh... I see your point. I was about to suggest that the code in my posts before should be made to work somehow. I mean, if in def fun1(x): if not x: raise MyErr() ... def fun2(): ... fun1(val) fun2() we can expect that main gets the exception thrown by fun1, why in def fun1(x): if not x: yield MyObj() ... def fun2(): fun1(val) for a in fun2(): ... we cannot expect MyObj() to be yielded to main? But soon I found that it is not realistic: there is no way to know that fun2 has generator semantics. Perhaps that is a short-sightness in not introducing a new keyword instead of def when defining generators. Regards, Isaac. -- http://mail.python.org/mailman/listinfo/python-list
Re: yield_all needed in Python
>>>>> "Paul" == Paul Moore <[EMAIL PROTECTED]> writes: Paul> You can work around the need for something like yield_all, Paul> or explicit loops, by defining an "iflatten" generator, Paul> which yields every element of its (iterable) argument, Paul> unless the element is a generator, in which case we recurse Paul> into it: Paul> ... Only if you'd never want to yield a generator. Regards, Isaac. -- http://mail.python.org/mailman/listinfo/python-list
Re: The use of :
>>>>> "Greg" == Greg Ewing <[EMAIL PROTECTED]> writes: >> The only punctuation you *need* is whitespace. See Forth Greg> You don't even need that... see FORTRAN. :-) And you don't need everything else either... see this. http://compsoc.dur.ac.uk/whitespace/ :-) Regards, Isaac. -- http://mail.python.org/mailman/listinfo/python-list
Re: creating generators from function
>>>>> "Mike" == Mike Meyer <[EMAIL PROTECTED]> writes: Mike> I think it's a bit abnormal, because you have to scan the Mike> loop body for breaks. I tend to write: Mike> condition = True Mike> while condition: # corrected Mike> #code which iterates my simulation Then you'd have to scan the loop body to find the location where condition is set, which is more difficult than locating breaks normally. If you get a break, you really breaks. If you set condition to False, you still might be modifying it to True later in your code. And of course, most editors will highlight the "break" for you, while no editor will highlight for you the "condition" variable that you are staring at. Regards, Isaac. -- http://mail.python.org/mailman/listinfo/python-list
checkbook manager
I'd like to try personal financial management using Python.  I just found
PyCheckbook, but it does not support check printing.  Is there a Python
check printing application kicking around?

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
cross platform printing
"Alan Isaac" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > I'd like to try personal financial management using Python. > I just found PyCheckbook, but it does not support check printing. > Is there a Python check printing application kicking around? OK, I'll assume silence means "no", so new question: What is the current best practice for cross platform printing of PostScript files from Python? Same question for PDF. (I'm aware of URL:http://tgolden.sc.sabren.com/python/win32_how_do_i/print.html.) Thanks, Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: cross platform printing
> Alan Isaac wrote:
> > What is the current best practice for cross platform printing of
> > PostScript files from Python?

"Warren Postma" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> Well since printing postscript files on most Unix systems (probably
> including Mac OS X, although I don't really know this for sure) is
> trivially easy, why not investigate using cygwin on Windows and launching
> an "lpr" task from your python script that prints the given postscript
> file.  Implementation time on Unix: 0 minutes, 0 seconds.  Implementation
> time on Windows: the time it takes to make a cygwin batch file that
> prints using ghostscript.

I meant something that application users on different platforms can print
with, not something that they could coerce a platform into supporting
given enough energy (e.g., via Cygwin).  The closest to an option so far
seems to be to generate PDF and assume an application is available to
print it.  Not beautiful.

Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
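[For later readers, one hedged sketch of that "least bad" option: hand the
file to the platform's own spooler.  It assumes Windows has an application
registered for the file type's "print" verb, and that Unix-like systems
have an lpr/CUPS queue configured:]

    import subprocess
    import sys

    def print_file(path):
        if sys.platform == 'win32':
            import os
            os.startfile(path, 'print')       # uses the registered handler
        else:
            subprocess.call(['lpr', path])    # hand off to the print queue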
Coding Standards (and Best Practices)
Hi,

I am fairly new to Python, but I am really liking what I am seeing.  My
team is going to re-design some automation projects, and we were going to
use Python as our programming language.  One of the things we would like
to do, since we are all new to the language, is to define a set of
guidelines and best practices as our coding standards.  Does anyone know
where I can get some information about what the community is doing?  Are
there any well defined guidelines established?

Thanks,

--
Isaac Rodriguez
SWE Autodesk.
There are 10 types of people.
Those who understand binary, and those who don't.
--
http://mail.python.org/mailman/listinfo/python-list
mbx repair script
I'm looking for a Python script to repair the mbx header for a mail file
where only the header is corrupted.

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: mbx repair script
"Donn Cave" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > All mbx files start with a 2048 byte > header, and a valid header can be copied to another > file and still be valid. For example, if the damaged > file still has 2048 bytes of header, > >1. Find or create another mbx file "spud". >2. Copy header:$ dd if=spud count=4 > newbx >3. Copy old file: $ dd if=oldbx skip=4 >> newbx >4. change ownership and permission to match oldbx. This did not work for me. Should it? I thought the header contained information tightly tied to the rest of the content (to speed search etc) so that e.g., byte counts could matter. Can you point me to documentation of the mbx format? Thanks, Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
Re: Tkinter weirdness item count
"phil" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > Using Tkinter Canvas to teach High School Geometry > with A LOT of success. Can you post a link to your code. I'd like to see what you are doing. Thx, Alan Isaac -- http://mail.python.org/mailman/listinfo/python-list
mbx repair script: Python vs perl
I'm looking for the Python equivalent of the perl script and module
described at
http://comments.gmane.org/gmane.mail.imap.uw.c-client/707
Any hope?

Thanks,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
using print() with multiprocessing and pythonw
I launch my program with pythonw and begin it with the code below so that all
my print()'s go to the log file specified.
if sys.executable.find('pythonw') >= 0:
    # Redirect all console output to file.
    sys.stdout = open("pythonw - stdout stderr.log", 'w')
    sys.stderr = sys.stdout
During the course of my program, I call multiprocessing.Process() and launch a
function several times. That function has print()'s inside (which are from
warnings being printed by python). This printing causes the multiprocess to
crash. How can I fix my code so that the print()'s are suppressed? I would
hate to do a warnings.filterwarnings('ignore') because when I unit test
those functions, the warnings don't appear.
Thanks in advance,
Isaac
--
https://mail.python.org/mailman/listinfo/python-list
Re: using print() with multiprocessing and pythonw
Thanks for the reply, Bill.  The problem is that the text I am getting is
from a python warning message, not one of my own print() function calls.
--
https://mail.python.org/mailman/listinfo/python-list
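[One hedged workaround sketch: do the redirection inside the worker
function itself, so it applies no matter how the child process is started;
under pythonw a child has no usable console, and a warning printed to
stderr can kill it.  `worker` and `do_real_work` are made-up names:]

    import os
    import sys

    def worker(data, q):
        # Point the streams at devnull for this process only; unit tests
        # that call the real function directly still see their warnings.
        sys.stdout = open(os.devnull, 'w')
        sys.stderr = sys.stdout
        q.put(do_real_work(data))   # do_real_work is a hypothetical stand-in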
Plot a contour inside a contour
I tried to plot one smaller contour inside of the other larger contour.  I
have two different 2D-arrays.  One is with smaller grid spacing and smaller
domain size, and the other is with larger spacing and larger domain size.
So, I tried to use the fig.add_axes function as follows:

fig = plt.figure()
ax1 = fig.add_axes([0.1,0.1,0.8,0.8])
.
.
dx = 450
NX = SHFX_plt.shape[1]
NY = SHFX_plt.shape[0]
xdist = (np.arange(NX)*dx+dx/2.)/1000.
ydist = (np.arange(NY)*dx+dx/2.)/1000.
myPLT = plt.pcolor(xdist,ydist,SHFX_plt)
.
.
ax2 = fig.add_axes([8.,8.,18.,18.])
dx1 = 150
NX1 = SHFX_plt1.shape[1]
NY1 = SHFX_plt1.shape[0]
print 'NX1=',NX1,'NY1=',NY1
xdist1 = (np.arange(NX1)*dx1+dx1/2.)/1000.
ydist1 = (np.arange(NY1)*dx1+dx1/2.)/1000.
myPLT1 = plt.pcolor(xdist1,ydist1,SHFX_plt1)
plt.show()

My intention is to plot ax2 on top of ax1 from xdist and ydist = 8 with an
18 by 18 size.  However, the result seems to show only ax1.  I will really
appreciate any help or idea.

Thank you,
Isaac
--
https://mail.python.org/mailman/listinfo/python-list
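[A likely cause, sketched with random stand-in data: fig.add_axes() takes
[left, bottom, width, height] as fractions of the figure in 0..1, not data
coordinates, so [8., 8., 18., 18.] puts the inset off the canvas; each
field should also be drawn on its own axes object rather than through
plt.pcolor:]

    import numpy as np
    import matplotlib.pyplot as plt

    fig = plt.figure()
    ax1 = fig.add_axes([0.1, 0.1, 0.8, 0.8])
    ax1.pcolor(np.random.rand(20, 20))          # stand-in for the outer field
    ax2 = fig.add_axes([0.55, 0.55, 0.3, 0.3])  # inset given in figure fractions
    ax2.pcolor(np.random.rand(10, 10))          # stand-in for the inner field
    plt.show()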
Re: Plot a contour inside a contour
On Thursday, November 14, 2013 2:01:39 PM UTC-8, John Ladasky wrote:
> On Thursday, November 14, 2013 11:39:37 AM UTC-8, Isaac Won wrote:
>
> > I tried to plot one smaller contour inside of the other larger contour.
>
> Using what software?  A plotting package is not part of the Python
> standard library.
>
> You did not show the import statements in your code.  If I had to guess,
> I would say that you are using the Matplotlib package.  Questions which
> are specific to matplotlib should be asked in the matplotlib-users
> discussion group:
>
> https://lists.sourceforge.net/lists/listinfo/matplotlib-users

Thanks John, I am using the Matplotlib package.  I will ask the question
in the matplotlib-users discussion group as you suggested.

Thank you again,
Isaac
--
https://mail.python.org/mailman/listinfo/python-list
Using MFDataset to combine netcdf files in python
I am trying to combine netcdf files, but it continuously shows:

File "CBL_plot.py", line 11, in
    f = MFDataset(fili)
File "utils.pyx", line 274, in netCDF4.MFDataset.init (netCDF4.c:3822)
IOError: master dataset THref_11:00.nc does not have a aggregation dimension.

So, I checked a single netcdf file, and the information of the variable is
as below:

float64 th_ref(u't',)
unlimited dimensions = ()
current size = (30,)

It looks like there is no aggregation dimension.  However, I would like to
combine those netcdf files rather than just using them one by one.  Is
there any way to create an aggregation dimension to make this MFDataset
work?  Below is the python code I used:

import numpy as np
from netCDF4 import MFDataset
varn = 'th_ref'
fili = 'THref_*nc'
f = MFDataset(fili)
Th = f.variables[varn]
Th_ref = np.array(Th[:])
print Th.shape

I will really appreciate any help, idea, or hint.

Thank you,
Isaac
--
https://mail.python.org/mailman/listinfo/python-list
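[A hedged workaround sketch: MFDataset needs an unlimited (aggregation)
dimension, which these files lack, so one option is to open each file
separately and stack the variable by hand; the glob pattern follows the
one in the post:]

    import glob
    import numpy as np
    from netCDF4 import Dataset

    parts = []
    for fname in sorted(glob.glob('THref_*.nc')):
        ds = Dataset(fname)
        parts.append(np.array(ds.variables['th_ref'][:]))
        ds.close()
    th_ref = np.concatenate(parts)   # one array covering all files
    print(th_ref.shape)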
Drawing shaded area depending on distance with longitude and latitude coordinates
I have tried to make a plot of points with longitude and latitude
coordinates, and draw a shaded area based on distance from one point.  So,
I thought that I could use the contourf function from matplotlib.  My code
is:
import haversine
import numpy as np
import matplotlib.pyplot as plt
with open(filin, 'r') as f:
arrays = [map(float, line.split()) for line in f]
newa = [[x[1],-x[2]] for x in arrays]
lat = np.zeros(275)
lon = np.zeros(275)
for c in range(0,275):
lat[c] = newa[c][0]
lon[c] = newa[c][1]
dis = np.zeros(275)
for c in range(0,275):
dis[c] = haversine.distance(newa[0],[lat[c],lon[c]])
dis1 = [[]]*1
for c in range(0,275):
dis1[0].append(dis[c])
cs = plt.contourf(lon,lat,dis1)
cb = plt.colorbar(cs)
plt.plot(-lon[0],lat[0],'ro')
plt.plot(-lon[274],lat[274],'ko')
plt.plot(-lon[1:275],lat[1:275],'bo')
plt.xlabel('Longitude(West)')
plt.ylabel('Latitude(North)')
plt.gca().invert_xaxis()
plt.show()
My idea in this code was that I could make a shaded contour by distance from a
certain point which was noted as newa[0] in the code. I calculated distances
between newa[0] and other points by haversine module which calculate distances
with longitudes and latitudes of two points. However, whenever I ran this code,
I got the error related to X, Y or Z in contourf such as:
TypeError: Length of x must be number of columns in z, and length of y must
be number of rows.
If I use meshgrid for X and Y, I also get:
TypeError: Inputs x and y must be 1D or 2D.
I just need to draw a shaded contour of distance from one point, on top of
the plot of the individual points.
If you give any idea or hint, I will really appreciate it.  Thank you, Isaac
--
https://mail.python.org/mailman/listinfo/python-list
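[One possible way around the contourf error, as a sketch: contourf wants
values on a regular grid, so interpolate the scattered distances onto one
first.  This assumes the 1-D lon, lat and dis arrays built in the post,
and that scipy is available:]

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.interpolate import griddata

    loni = np.linspace(lon.min(), lon.max(), 200)   # regular longitude axis
    lati = np.linspace(lat.min(), lat.max(), 200)   # regular latitude axis
    LON, LAT = np.meshgrid(loni, lati)
    DIS = griddata((lon, lat), dis, (LON, LAT), method='linear')  # grid the scattered distances
    cs = plt.contourf(LON, LAT, DIS)
    cb = plt.colorbar(cs)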
Extracting the value from a netcdf file with longitude and latitude
Hi,
My question may be confusing.
Now I would like to extract temperature values from model output with python.
My model output have separate temperature, longitude and latitude variables.
So, I overlap these three grid variables on one figure to show temperature with
longitude and latitude through model domain.
Up to this point, everything is fine. The problem is to extract temperature
value at certain longitude and latitude.
The temperature variable doesn't have coordinates, only values on the grid.
Do you have idea about this issue?
Below is my code for the 2 D plot with temperature on model domain.
varn1 = 'T2'
varn2 = 'XLONG'
varn3 = 'XLAT'
Temp = read_netcdf(filin,varn1)
Lon = read_netcdf(filin,varn2)
Lat = read_netcdf(filin,varn3)
Temp_plt = Temp[12,:,:]
Lon_plt = Lon[12,:,:]
Lat_plt = Lat[12,:,:]
x = Lon_plt
y = Lat_plt
Temp_c = Temp_plt-273.15
myPLT = plt.pcolor(x,y,Temp_c)
mxlabel = plt.xlabel('Longitude')
mylabel = plt.ylabel('Latitude')
plt.xlim(126.35,127.35)
plt.ylim(37.16,37.84)
myBAR = plt.colorbar(myPLT)
myBAR.set_label('Temperature ($^\circ$C)')
plt.show()
--
read_netcdf is a code for extracting values of [time, x,y] format.
I think that the point is to bind x, y in Temp_plt with x, y in Lon_plt and
Lat_plt to extract temperature values with longitude and latitude input.
This question might be confusing. If you can't understand please let me know.
Any idea or help will be really appreciated.
Best regards,
Hoonill
--
https://mail.python.org/mailman/listinfo/python-list
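[One way to do that binding, as a sketch: treat the 2-D XLONG/XLAT arrays
as a lookup table and pick the nearest grid cell.  It assumes the Lon_plt,
Lat_plt and Temp_c arrays from the post; the target point is hypothetical.
A simple squared-degree distance is usually fine for a small domain; use a
proper great-circle distance for a large one:]

    import numpy as np

    lon0, lat0 = 126.97, 37.57                       # hypothetical target point
    d2 = (Lon_plt - lon0)**2 + (Lat_plt - lat0)**2   # squared distance in degrees
    j, i = np.unravel_index(np.argmin(d2), d2.shape) # indices of nearest cell
    print(Temp_c[j, i])                              # temperature at that cell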
Python 3.2 | WIndows 7 -- Multiprocessing and files not closing
I have a function that looks like the following:

#-
filename = 'c:\testfile.h5'
f = open(filename,'r')
data = f.read()

q = multiprocessing.Queue()
p = multiprocessing.Process(target=myFunction, args=(data,q))
p.start()
result = q.get()
p.join()
q.close()

f.close()
os.remove(filename)
#-

When I run this code, I get an error on the last line when I try to remove
the file.  It tells me that someone has access to the file.  When I remove
the queue and multiprocessing stuff, the function works fine.  What is
going on here?

Thanks in advance,
Isaac
--
https://mail.python.org/mailman/listinfo/python-list
Re: Python 3.2 | WIndows 7 -- Multiprocessing and files not closing
Sorry, I am just providing pseudo code since the code I have is quite
large.

As I mentioned, the code works fine when I remove the multiprocessing
stuff, so the filename is not the issue (though you are right in your
correction).

Someone with the same problem posted a smaller, more complete example here:

http://stackoverflow.com/questions/948119/preventing-file-handle-inheritance-in-multiprocessing-lib

None of the solutions posted work.

On Thursday, October 10, 2013 12:38:19 PM UTC-4, Piet van Oostrum wrote:
> Isaac Gerg writes:
>
> > I have a function that looks like the following:
>
> That doesn't look like a function
>
> > #-
> > filename = 'c:\testfile.h5'
>
> Your filename is most probably wrong. It should be something like:
>
> filename = 'c:/testfile.h5'
> filename = 'c:\\testfile.h5'
> filename = r'c:\testfile.h5'
>
> --
> Piet van Oostrum
> WWW: http://pietvanoostrum.com/
> PGP key: [8DAE142BE17999C4]
--
https://mail.python.org/mailman/listinfo/python-list
Re: Python 3.2 | WIndows 7 -- Multiprocessing and files not closing
On Thu, Oct 10, 2013 at 2:41 PM, Ned Batchelder wrote:
> On 10/10/13 12:44 PM, Isaac Gerg wrote:
>> Sorry, I am just providing pseudo code since the code I have is quite
>> large.
>>
>> As I mentioned, the code works fine when I remove the multiprocessing
>> stuff, so the filename is not the issue (though you are right in your
>> correction).
>>
>> Someone with the same problem posted a smaller, more complete example
>> here:
>>
>> http://stackoverflow.com/questions/948119/preventing-file-handle-inheritance-in-multiprocessing-lib
>>
>> None of the solutions posted work.
>
> (BTW: it's better form to reply beneath the original text, not above it.)
>
> None of the solutions try the obvious thing of closing the file before
> spawning more processes.  Would that work for you?  A "with" statement
> is a convenient way to do this:
>
>     with open(filename,'r') as f:
>         data = f.read()
>
> The file is closed automatically when the with statement ends.
>
> --Ned.
>
>> On Thursday, October 10, 2013 12:38:19 PM UTC-4, Piet van Oostrum wrote:
>>> Isaac Gerg writes:
>>>> I have a function that looks like the following:
>>>
>>> That doesn't look like a function
>>>
>>>> #-
>>>> filename = 'c:\testfile.h5'
>>>
>>> Your filename is most probably wrong. It should be something like:
>>>
>>> filename = 'c:/testfile.h5'
>>> filename = 'c:\\testfile.h5'
>>> filename = r'c:\testfile.h5'
>>>
>>> --
>>> Piet van Oostrum
>>> WWW: http://pietvanoostrum.com/
>>> PGP key: [8DAE142BE17999C4]

I will try what you suggest and see if it works.

Additionally, is there a place on the web to view this conversation and
reply?  Currently, I am only able to access this list through email.
--
https://mail.python.org/mailman/listinfo/python-list
Re: Python 3.2 | WIndows 7 -- Multiprocessing and files not closing
On Thu, Oct 10, 2013 at 2:49 PM, Isaac Gerg wrote:
> On Thu, Oct 10, 2013 at 2:41 PM, Ned Batchelder wrote:
>> On 10/10/13 12:44 PM, Isaac Gerg wrote:
>>> Sorry, I am just providing pseudo code since the code I have is quite
>>> large.
>>>
>>> As I mentioned, the code works fine when I remove the multiprocessing
>>> stuff, so the filename is not the issue (though you are right in your
>>> correction).
>>>
>>> Someone with the same problem posted a smaller, more complete example
>>> here:
>>>
>>> http://stackoverflow.com/questions/948119/preventing-file-handle-inheritance-in-multiprocessing-lib
>>>
>>> None of the solutions posted work.
>>
>> (BTW: it's better form to reply beneath the original text, not above it.)
>>
>> None of the solutions try the obvious thing of closing the file before
>> spawning more processes.  Would that work for you?  A "with" statement
>> is a convenient way to do this:
>>
>>     with open(filename,'r') as f:
>>         data = f.read()
>>
>> The file is closed automatically when the with statement ends.
>>
>> --Ned.
>
> I will try what you suggest and see if it works.
>
> Additionally, is there a place on the web to view this conversation and
> reply?  Currently, I am only able to access this list through email.

Ned,

I am unable to try what you suggest.  The multiprocessing.Process call is
within a class, but its target is a static method outside of the class,
thus no pickling.  I cannot close the file and then reopen it after the
multiprocessing.Process call because other threads may be reading from the
file during that time.

Is there a way in Python 3.2 to prevent multiprocessing.Process from
inheriting the file descriptors from the parent process, OR is there a way
to ensure that the multiprocess is completely closed and garbage collected
by the time I want to use os.remove()?

Isaac
--
https://mail.python.org/mailman/listinfo/python-list
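[One approach worth trying, as a hedged Windows-only sketch: open the file
through a non-inheritable OS handle, so a child created by multiprocessing
never holds it open.  os.O_NOINHERIT and os.O_BINARY exist only on Windows
builds of Python; `filename` is the file from the thread:]

    import os

    fd = os.open(filename, os.O_RDONLY | os.O_BINARY | os.O_NOINHERIT)
    f = os.fdopen(fd, 'rb')   # wrap the raw handle in a file object
    try:
        data = f.read()
    finally:
        f.close()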
Re: Python 3.2 | WIndows 7 -- Multiprocessing and files not closing
Hi Piet,

Here is a real code example:
http://stackoverflow.com/questions/948119/preventing-file-handle-inheritance-in-multiprocessing-lib

As I said before, I had provided pseudocode.  I cannot close the file
after reading because it is part of a class, and other threads may be
calling member functions which read from the file.

Isaac
--
https://mail.python.org/mailman/listinfo/python-list
About a value error called 'ValueError: A value in x_new is below the interpolation range'
Dear all,
I am trying to calculate correlation coefficients between one time series
and other time series.  However, there are some missing values.  So, I
interpolated each time series with 1d interpolation in scipy and got
correlation coefficients between them.  This code works well for some data
sets, but doesn't for some others.  The following is the actual error I got:
0.0708904109589
0.0801369863014
0.0751141552511
0.0938356164384
0.0769406392694
Traceback (most recent call last):
File "error_removed.py", line 56, in
i2 = interp(indices)
File
"/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py",
line 394, in __call__
out_of_bounds = self._check_bounds(x_new)
File
"/lustre/work/apps/python-2.7.1/lib/python2.7/site-packages/scipy/interpolate/interpolate.py",
line 449, in _check_bounds
raise ValueError("A value in x_new is below the interpolation "
ValueError: A value in x_new is below the interpolation range.
This time it says "x_new is below the interpolation range", but sometimes
it shows "above the interpolation range" instead.
I would like to post some self-contained code, but I am not sure how to
make it represent my case well, so I just put all of my code here.  I
apologize for this inconvenience.
---
a = []
c = 4
with open(filin1, 'r') as f1:
    arrays = [map(float, line.split()) for line in f1]
newa = [[x[1],x[2]] for x in arrays]
o = newa[58]
f = open(filin, "r")
percent1 = []
for columns in ( raw.strip().split() for raw in f ):
    a.append(columns[63])
x = np.array(a, float)
not_nan = np.logical_not(np.isnan(x))
indices = np.arange(len(x))
interp = interp1d(indices[not_nan], x[not_nan])
#interp = np.interp(indices, indices[not_nan], x[not_nan])
i1 = interp(indices)
f.close
h1 = []
p1 = []
while c < 278:
    c = c + 1
    d = c - 5
    b = []
    f.seek(0,0)
    for columns in ( raw.strip().split() for raw in f ):
        b.append(columns[c])
    y = np.array(b, float)
    h = haversine.distance(o, newa[d])
    n = len(y)
    l = b.count('nan')
    percent = l/8760.
    percent1 = percent1 + [percent]
    #print l, percent
    if percent < 0.1:
        not_nan = np.logical_not(np.isnan(y))
        indices = np.arange(len(y))
        interp = interp1d(indices[not_nan], y[not_nan])
        #interp = np.interp(indices, indices[not_nan], x[not_nan])
        i2 = interp(indices)
        pearcoef = sp.pearsonr(i1,i2)
        p = pearcoef[0]
        p1 = p1 + [p]
        h1 = h1 + [h]
        print percent
print h1
print p1
print len(p1)
plt.plot(h1, p1, 'o')
plt.xlabel('Distance(km)')
plt.ylabel('Correlation coefficient')
plt.grid(True)
plt.show()
---
For any help or advice, I will really appreciate.
Best regards,
Isaac
--
http://mail.python.org/mailman/listinfo/python-list
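[A likely cause, sketched against the code above: if the first or last
sample of a series is NaN, indices[not_nan] does not reach index 0 (or
len(y)-1), so evaluating the interpolator at every index steps outside its
range.  Telling interp1d not to raise, or using np.interp (which clamps at
the endpoints), avoids the error; this assumes the indices/y/not_nan
arrays from the post:]

    import numpy as np
    from scipy.interpolate import interp1d

    interp = interp1d(indices[not_nan], y[not_nan],
                      bounds_error=False, fill_value=np.nan)
    i2 = interp(indices)
    # or simply:
    i2 = np.interp(indices, indices[not_nan], y[not_nan])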
Import redirects
I have a package (say "foo") that I want to rename (say, to "bar"), and
for compatibility reasons I want to be able to use the old package name to
refer to the new package.  Copying files or using filesystem symlinks is
probably not the way to go, since that means any object in the modules of
the package would be duplicated; changing one will not cause the other to
be updated.  Instead, I tried the following as the content of
`foo/__init__.py`:

import sys
import bar
sys.modules['foo'] = bar

To my surprise, it seems to work.  If I `import foo` now, the above will
cause "bar" to be loaded and be used, which is expected.  But even if I
`import foo.baz` now (without first `import foo`), it will correctly
import "bar.baz" in its place.

Except one thing: it doesn't really work.  If I `import foo.baz.mymod`
now, and if in "bar.baz.mymod" there is a statement `import
bar.baz.depmod`, then it fails.  It correctly loads the file
"bar/baz/depmod.py", and it assigns the resulting module to the package
object bar.baz as the "depmod" variable.  But it fails to assign the
module object of "mymod" into the "bar.baz" module.  So after `import
foo.baz.mymod`, `foo.baz.mymod` results in an AttributeError saying
'module' object has no attribute 'mymod'.  The natural `import
bar.baz.mymod` is not affected.

I tested it with both Python 2.7 and Python 3.2, and I see exactly the
same behavior.  Does anyone know why that happens?  My current work-around
is to use the above code only for modules and not for packages, which is
suboptimal.  Does anyone know a better work-around?
--
http://mail.python.org/mailman/listinfo/python-list
Re: Import redirects
On Mon, Feb 11, 2013 at 8:27 PM, Oscar Benjamin
wrote:
> On 11 February 2013 06:50, Isaac To wrote:
> > Except one thing: it doesn't really work. If I `import foo.baz.mymod`
> now,
> > and if in "bar.baz.mymod" there is a statement `import bar.baz.depmod`,
> then
> > it fails. It correctly load the file "bar/baz/depmod.py", and it assigns
> > the resulting module to the package object bar.baz as the "depmod"
> variable.
> > But it fails to assign the module object of "mymod" into the "bar.baz"
> > module. So after `import foo.baz.mymod`, `foo.baz.mymod` results in an
> > AttributeError saying 'module' object has no attribute 'mymod'. The
> natural
> > `import bar.baz.mymod` is not affected.
>
> My guess is that you have two copies of the module object bar.baz with
> one under the name foo.baz and the other under the name bar.baz. mymod
> is inserted at bar.baz but not at foo.baz. I think a solution in this
> case would be to have your foo/__init__.py also import the subpackage
> 'bar.baz' and give it both names in sys.modules:
>
> import bar.baz
> sys.modules['foo.baz'] = bar.baz
>
Thanks for the suggestion.  It is indeed attractive if I need only to
pre-import all the subpackages and not redirect individual modules.  On
the other hand, when I actually tried this I found that it doesn't really
work as intended.  What I actually wrote, as foo/__init__.py, is:
import sys
import bar
import bar.baz
sys.modules['foo.baz'] = bar.baz
sys.modules['foo'] = bar
One funny effect I get is this:
>>> import bar.baz.mymod
>>> bar.baz.mymod
<module 'bar.baz.mymod' from 'bar/baz/mymod.py'>
>>> import foo.baz.mymod
>>> bar.baz.mymod
<module 'foo.baz.mymod' from 'bar/baz/mymod.py'>
By importing foo.baz.mymod, I change the name of the module from
"bar.baz.mymod" to "foo.baz.mymod". If that is not bad enough, I also see
this:
>>> import bar.baz.mymod as bbm
>>> import foo.baz.mymod as fbm
>>> bbm is fbm
False
Both effects are there even if bar/baz/mymod.py no longer `import
bar.baz.depmod`.
It looks to me like package imports are so magical that I shouldn't do
anything funny with them, as anything that seems to work might bite me a
few minutes later.
Regards,
Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Triple nested loop python (While loop inside of for loop inside of while loop)
I am trying to make my triple nested loop work.  My code is:

c = 4
y1 = []
m1 = []
std1 = []
while c < 24:
    c = c + 1
    a = []
    f.seek(0,0)
    for columns in ( raw.strip().split() for raw in f ):
        a.append(columns[c])
    x = np.array(a, float)
    not_nan = np.logical_not(np.isnan(x))
    indices = np.arange(len(x))
    interp = interp1d(indices[not_nan], x[not_nan], kind = 'nearest')
    p = interp(indices)
    N = len(p)
    dt = 900.0   # Time step (seconds)
    fs = 1./dt   # Sampling frequency
    KA,PSD = oned_Fourierspectrum(p,dt)   # Call Song's 1D FS function
    time_axis = np.linspace(0.0,N,num = N,endpoint = False)*15/(60*24)
    plot_freq = 24*3600.*KA   # Convert to cycles per day
    plot_period = 1.0/plot_freq   # Convert to days/cycle
    fpsd = plot_freq*PSD
    d = -1
    while d < 335:
        d = d + 1
        y = fpsd[d]
        y1 = y1 + [y]
        m = np.mean(y1)
        m1 = m1 + [m]
print m1

My purpose is to make a list of [mean(fpsd[0]), mean(fpsd[1]),
mean(fpsd[2]) ... mean(fpsd[335])].  Each y1 would be the list of fpsd[d].
I checked that it is working pretty well before the second while loop, and
I can get the individual mean of fpsd[d].  However, with the second while
loop, it produces definitely wrong numbers.  Would you help me understand
this problem?
--
http://mail.python.org/mailman/listinfo/python-list
Re: Triple nested loop python (While loop inside of for loop inside of while loop)
Thank you, Chris.  I just want to accumulate values from y repeatedly.  If
y = 1,2,3...10, I just want to have [1,2,3...10] at once.

On Friday, March 1, 2013 7:41:05 AM UTC-6, Chris Angelico wrote:
> On Fri, Mar 1, 2013 at 7:59 PM, Isaac Won wrote:
>
> > while c <24:
> > for columns in ( raw.strip().split() for raw in f ):
> > while d <335:
>
> Note your indentation levels: the code does not agree with your
> subject line.  The third loop is not actually inside your second.
> Should it be?
>
> ChrisA
--
http://mail.python.org/mailman/listinfo/python-list
Re: Triple nested loop python (While loop inside of for loop inside of while loop)
On Friday, March 1, 2013 7:41:05 AM UTC-6, Chris Angelico wrote:
> On Fri, Mar 1, 2013 at 7:59 PM, Isaac Won wrote:
>
> > while c <24:
> > for columns in ( raw.strip().split() for raw in f ):
> > while d <335:
>
> Note your indentation levels: the code does not agree with your
> subject line.  The third loop is not actually inside your second.
> Should it be?
>
> ChrisA

Yes, the third loop should be inside of my for loop.

Thank you,
Isaac
--
http://mail.python.org/mailman/listinfo/python-list
Re: Triple nested loop python (While loop inside of for loop inside of while loop)
Thank you Ulrich for the reply,

What I really want to get from this code is m1, as I said.  For this
purpose, for instance, the values of fpsd up to the second loop and those
from the third loop should be the same, but they are not.  Actually, that
is my main question.

Thank you,
Isaac

On Friday, March 1, 2013 6:00:42 AM UTC-6, Ulrich Eckhardt wrote:
> Am 01.03.2013 09:59, schrieb Isaac Won:
>
> > try to make my triple nested loop working. My code would be:
> > c = 4
> [...]
> > while c <24:
> > c = c + 1
>
> This is bad style and you shouldn't do that in python.  The question
> that comes up for me is whether something else is modifying "c" in that
> loop, but I think the answer is "no".  For that reason, use Python's way:
>
>     for c in range(5, 25):
>         ...
>
> That way it is also clear that the first value in the loop is 5, while
> the initial "c = 4" seems to suggest something different.  Also, the
> last value is 24, not 23.
>
> > while d <335:
> > d = d + 1
> > y = fpsd[d]
> > y1 = y1 + [y]
> > m = np.mean(y1)
> > m1 = m1 + [m]
>
> Apart from the wrong indentation (don't mix tabs and spaces, see PEP 8!)
> and the fact that "d in range(336)" is better style, you don't start
> with an empty "y1", except on the first iteration of the outer loop.
>
> I'm not really sure if that answers your problem.  In any case, please
> drop everything not necessary to demonstrate the problem before posting.
> This makes it easier to see what is going wrong both for you and others.
> Also make sure that others can actually run the code.
>
> Greetings from Hamburg!
>
> Uli
--
http://mail.python.org/mailman/listinfo/python-list
Putting the loop inside of loop properly
I just would like to make my previous question simpler, and I adjusted my
code a bit with help from Ulrich and Chris.  The basic structure of my
code is:

for c in range(5,25):
    for columns in ( raw.strip().split() for raw in f ):
        a.append(columns[c])
    x = np.array(a, float)
    not_nan = np.logical_not(np.isnan(x))
    indices = np.arange(len(x))
    interp = interp1d(indices[not_nan], x[not_nan], kind = 'nearest')
    p = interp(indices)
    N = len(p)
    fpsd = plot_freq*PSD
    f.seek(0,0)
    for d in range(336):
        y = fpsd[d]
        y1 = y1 + [y]
        m = np.mean(y1)
        m1 = m1 + [m]

I just removed seemingly unnecessary lines.  I expect that the last loop
can produce the values of fpsd in order (first, second, ..., last (336th))
from the former loop.  fpsd would be 20 lists.  So, fpsd[0] in the third
loop should be the first values from the 20 lists, and I expect them to be
accumulated in y1.  So, y1 should be the list of first values from the 20
fpsd lists, and m is the mean of y1.  I expect to repeat this 336 times
and accumulate the results into m1.  However, it doesn't work, and the
fpsd values in and out of the last loop are totally different.

Is my question clear?  Any help or questions would be really appreciated.

Isaac
--
http://mail.python.org/mailman/listinfo/python-list
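[A sketch of the accumulation the post describes, assuming each pass of
the outer loop yields one length-336 fpsd array; compute_fpsd is a
hypothetical stand-in for the body of the outer loop.  Collecting the
arrays and averaging across them avoids juggling y1/m1 by hand (and the
bug of never resetting y1 between columns):]

    import numpy as np

    all_fpsd = [compute_fpsd(c) for c in range(5, 25)]   # 20 arrays of length 336
    m1 = np.mean(np.array(all_fpsd), axis=0)   # m1[d] = mean of fpsd[d] over the 20 runs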
Re: Set x to None and del x doesn't release memory in python 2.7.1 (HPUX 11.23, ia64)
In general, it is hard for any process to return the memory the OS
allocated to it back to the OS, short of exiting the whole process.  The
only case where this works reliably is when the process allocates a chunk
of memory by mmap (which is chosen by libc if it mallocs or callocs a
large chunk of memory), and that whole chunk is not needed any more.  In
that case the process can munmap it.  Evidently you are not seeing that in
your program.  What you allocate might be too small (so libc chooses to
allocate it using another system call, "sbrk"), or the allocated memory
might also hold other objects that were not freed.

If you want to reduce the footprint of a long-running program that
periodically allocates a large chunk of memory, the "easiest" solution is
to fork a different process to do the computations that need the memory.
That way, you can exit the process after you complete the computation, and
at that point all memory allocated to it is guaranteed to be freed to the
OS.  Modules like multiprocessing probably make the idea sufficiently easy
to implement.

On Sat, Mar 9, 2013 at 4:07 PM, Wong Wah Meng-R32813 wrote:
> > If the memory usage is continually growing, you have something else
> > that is a problem -- something is holding onto objects.  Even if
> > Python is not returning memory to the OS, it should be reusing the
> > memory it has if objects are being freed.
>
> [] Yes I have verified my python application is reusing the memory (just
> that it doesn't reduce once it has grown) and my python process doesn't
> have any issue to run even though it is seen taking up more than 2G in
> footprint.  My problem is capacity planning on the server whereby since
> my python process doesn't release memory back to the OS, the OS wasn't
> able to allocate memory when a new process is spawned off.
>
> --
> http://mail.python.org/mailman/listinfo/python-list
--
http://mail.python.org/mailman/listinfo/python-list
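[A minimal sketch of that idea with multiprocessing; run_computation is a
hypothetical stand-in for whatever allocates the big chunk:]

    import multiprocessing as mp

    def big_job(args, q):
        # Everything allocated here goes back to the OS when this process
        # exits, no matter how libc carved it up internally.
        q.put(run_computation(args))   # hypothetical computation

    def run_in_child(args):
        q = mp.Queue()
        p = mp.Process(target=big_job, args=(args, q))
        p.start()
        result = q.get()   # fetch the result before joining
        p.join()
        return result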
Re: I hate you all
You underestimated the arrogance of Python.  In Python 3, a tab doesn't
map to 4 spaces.  It doesn't map to any number of spaces.  Tabs and spaces
are completely unrelated.  If you have a function whose first indentation
level uses 4 (or any number of) spaces, a next line starting not with 4
spaces but instead with a tab always leads you to the TabError exception.
If you like to play tricks, you can use "4 spaces plus a tab" as the next
indentation level.  I'd rather not do this kind of thing, and would forget
about using tabs at all.  You are out of luck if you want to play the
tab-space tricks, but if you follow the lead, you'll soon find that code
will be more reliable without tabs, especially if you cut-and-paste code
from others.

On Sat, Apr 6, 2013 at 6:04 AM, wrote:
> On Saturday, April 6, 2013 12:55:29 AM UTC+3, John Gordon wrote:
> > In <[email protected]> [email protected] writes:
> >
> > > How can python authors be so arrogant to impose their tabs and
> > > spaces options on me ?  It should be my choice if I want to use
> > > tabs or not !
> >
> > You are free to use tabs, but you must be consistent.  You can't mix
> > tabs and spaces for lines of code at the same indentation level.
>
> They say so, but python does not work that way.  This is a simple script:
>
> from unittest import TestCase
>
> class SvnExternalCmdTests(TestCase) :
>     def test_parse_svn_external(self) :
>         for sample_external in sample_svn_externals :
>             self.assertEqual(parse_svn_externals(sample_external[0][0],
>                 sample_external[0][1]), [ sample_external[1] ])
>
> And at the `for` statement at line 5 I get:
>
> C:\Documents and Settings\Adrian\Projects>python sample-py3.py
>   File "sample-py3.py", line 5
>     for sample_external in sample_svn_externals :
>     ^
> TabError: inconsistent use of tabs and spaces in indentation
>
> Line 5 is the only line in the file that starts at col 9 (after a tab).
> Being the only line in the file with that indent level, how can it be
> inconsistent ?
>
> You can try the script as it is, and see python 3.3 will not run it
--
http://mail.python.org/mailman/listinfo/python-list
Re: lambda (and reduce) are valuable
> Alan Isaac wrote:
> >>> #evaluate polynomial (coefs) at x using Horner's rule
> >>> def horner(coefs,x): return reduce(lambda a1,a2: a1*x+a2,coefs)
> > It just cannot get simpler or more expressive.

"Peter Otten" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> But is it correct?

Yes.

> Are we merely employing different conventions for the order of
> coefficients or is that simple and expressive lambda/reduce stuff
> obscuring an error?

It is too simple and expressive to obscure an error. ;-)
This is particularly important since coefficient order is not standardized
across uses.

Cheers,
Alan Isaac
--
http://mail.python.org/mailman/listinfo/python-list
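[A quick worked example of the convention under discussion, with
coefficients ordered from highest power down:]

    from functools import reduce   # built in on Python 2; needs the import on 3

    def horner(coefs, x):
        # highest-order coefficient first: ((2*x) - 3)*x + 1 for [2, -3, 1]
        return reduce(lambda a1, a2: a1*x + a2, coefs)

    print(horner([2, -3, 1], 4))   # 2*4**2 - 3*4 + 1 = 21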
