Hello,
> else:
>     val11[i][j], val22[i][j] = integrate.quad(lambda x: F1(x)*F2(x), 0, pi)
> But this calculation takes a very long time, let's say about 1 hour
> (theoretically)... Is there a better way to compute this quickly and
> easily, such as [ F(i) for i in xlist ] or something like that?
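A sketch of one way to attack this (not from the thread): if the adaptive `quad` calls themselves are the bottleneck, one option is to trade adaptive quadrature for a fixed-grid trapezoidal rule, evaluating every integrand on the same grid at once via broadcasting. The integrands and the parameters `a` and `b` below are hypothetical stand-ins, assuming the (i, j) dependence can be pulled into array coefficients.

```python
import numpy as np

# Common integration grid on [0, pi].
x = np.linspace(0.0, np.pi, 1001)

# Hypothetical parameters carrying the (i, j) dependence.
a = np.arange(1, 4)          # 3 values of i
b = np.arange(1, 5)          # 4 values of j

# Evaluate sin(a[i]*x) * cos(b[j]*x) for all (i, j) in one shot:
# shapes (3,1,1) * (1001,) and (1,4,1) * (1001,) broadcast to (3, 4, 1001).
integrand = np.sin(a[:, None, None] * x) * np.cos(b[None, :, None] * x)

# Trapezoidal rule along the last axis, written out explicitly so it works
# across NumPy versions; gives the whole (3, 4) table of integrals at once.
dx = np.diff(x)
vals = ((integrand[..., :-1] + integrand[..., 1:]) * 0.5 * dx).sum(axis=-1)
print(vals.shape)            # (3, 4)
```

Fixed-grid rules lose `quad`'s adaptive error control, so this is only appropriate for smooth integrands where a fine uniform grid is accurate enough.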
Ah, I found a workaround: savetxt() can work with a StringIO
-> savetxt(file_buffer, A)
This is only a workaround. I still think A.tofile() should be capable of
writing into a StringIO.
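A minimal sketch of the workaround described above: `numpy.savetxt` accepts any file-like object, so a `StringIO` buffer captures the text representation in memory.

```python
import io
import numpy as np

A = np.arange(6.0).reshape(2, 3)

# savetxt writes the text representation into the in-memory buffer
# instead of a file on disk.
buf = io.StringIO()
np.savetxt(buf, A, delimiter="\t")

text = buf.getvalue()        # one line of tab-separated floats per row
print(text)
```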
--
O.C.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy
> if you want to write to a string, why not use .tostring()?
Because A.tostring() returns the binary data, while I would like the text
representation.
More precisely, I would like to use A.tofile(sep="\t").
> Yes, this is a known shortcoming of .tofile().
Is it worth filing a bug report?
--
Hello,
I observe the following behavior:
numpy.r_[True, False] -> array([1, 0], dtype=int8)
numpy.r_[True] -> array([ True], dtype=bool)
I would expect the first line to give a boolean array:
array([ True, False], dtype=bool)
Is it normal? Is it a bug?
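A sketch of a workaround (how `numpy.r_` promotes booleans has varied across NumPy versions): request the dtype explicitly so the result is unambiguous, or coerce it after concatenation.

```python
import numpy as np

# Explicit dtype: no promotion surprises regardless of NumPy version.
a = np.array([True, False], dtype=bool)
print(a, a.dtype)            # [ True False] bool

# Or force the dtype after building the array with r_:
b = np.r_[True, False].astype(bool)
print(b.dtype)               # bool
```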
--
O.C.
numpy.__version__ = '
Hello and thank you for your answer.
> There are at least three methods I can think of, but choosing the best one
> requires more information. How long are the lists? Do the arrays have
> variable dimensions? The simplest and most adaptable method is probably
The lists would be made of 4x4 matrices.
Hello,
I have two lists of numpy matrices: LM = [M_i, i=1..N] and LN = [N_i, i=1..N],
and I would like to compute the list of the products: LP = [M_i * N_i,
i=1..N].
I can do:
LP = []
for i in range(N):
    LP.append(LM[i] * LN[i])
But this is not vectorized. Is there a faster solution?
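One vectorized sketch (using plain ndarrays with the `@` operator rather than the `numpy.matrix` class from the question): stacking the matrices into 3-D arrays lets a single `matmul` compute all N products at once, since `matmul` broadcasts over the leading axis.

```python
import numpy as np

N = 5
LM = [np.random.rand(4, 4) for _ in range(N)]   # stand-ins for M_i
LN = [np.random.rand(4, 4) for _ in range(N)]   # stand-ins for N_i

# Shape (N, 4, 4): P[i] is the matrix product LM[i] @ LN[i].
P = np.stack(LM) @ np.stack(LN)
```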
Can
Hello,
I have data files where the decimal separator is a comma. Can I import this
data with numpy.loadtxt?
Notes:
- I tried setting the locale LC_NUMERIC="fr_FR.UTF-8", but it didn't change
anything.
- Python 2.5.2, Numpy 1.1.0
Have a nice day,
O.C.
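A sketch of one workaround (not a loadtxt feature): convert the decimal commas to points before handing the text to `loadtxt`, for example by wrapping the cleaned text in a `StringIO`. The sample data below is a hypothetical stand-in for the real files.

```python
import io
import numpy as np

raw = "1,5\t2,25\n3,0\t4,75\n"   # stand-in for the file contents

# Replace the decimal commas, then parse the cleaned text in memory.
data = np.loadtxt(io.StringIO(raw.replace(",", ".")))
print(data)
```

For real files this only works if the comma is not also used as the column separator; here the columns are tab-separated, so a blanket replace is safe.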
> > Shouldn't it raise an exception ValueError ? (because "abcd" is not a float)
>
> I don't think so, but it shouldn't return a zero either.
>
> That call should mean: scan this whitespace-separated string for as many
> floating point numbers as it has. There are none, so it should return
> an empty array.
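A sketch of a stricter parser (plain Python, not a numpy API): split the string ourselves and let `float()` raise `ValueError` on anything that is not a number, instead of relying on `fromstring`'s permissive scanning.

```python
import numpy as np

def floats_from_string(s, sep=None):
    """Parse separated floats from a string, raising ValueError on junk.

    sep=None splits on any whitespace, like fromstring(..., sep=' ').
    """
    tokens = s.split(sep)
    # float() raises ValueError for tokens such as "abcd".
    return np.array([float(t) for t in tokens if t], dtype=float)

print(floats_from_string("1.5 2.5 3.0"))   # [1.5 2.5 3. ]
# floats_from_string("abcd") raises ValueError instead of returning [0.]
```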
Hello,
I would like to build a big ndarray by adding rows progressively.
I considered the following functions: append, concatenate, vstack and the like.
It appears to me that they all create a new array (which requires twice the
memory).
Is there a method for just adding a row to an ndarray without copying the
whole array?
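A common sketch for this situation: collect the rows in a Python list (whose appends are cheap and amortized) and build the ndarray once at the end, so the array is allocated a single time instead of being reallocated per row.

```python
import numpy as np

rows = []
for i in range(4):                   # stand-in for the real row source
    rows.append([i, i + 1, i + 2])

# One allocation at the end, instead of a copy per appended row.
A = np.asarray(rows, dtype=float)
print(A.shape)                       # (4, 3)
```

If the final number of rows is known in advance, preallocating with `np.empty((nrows, ncols))` and filling rows in place avoids even the intermediate list.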
Thank you for the answers,
I am now disturbed by this result:
> In [1]: import numpy
> In [2]: numpy.fromstring("abcd", dtype = float, sep = ' ')
> Out[2]: array([ 0.])
Shouldn't it raise an exception ValueError ? (because "abcd" is not a float)
Regards,
O.C.
Hello,
the following code drives Python into an endless loop:
>>> import numpy
>>> numpy.fromstring("abcd", dtype = float, sep = ' ')
I think the function numpy.fromstring lacks adequate error handling for
that case.
Is it a bug ?
Regards,
--
O.C.
Python 2.5.2
Debian Lenny