efiring wrote:
>
> On 05/14/2010 11:03 AM, Dr. Phillip M. Feldman wrote:
>
> It is perfectly reasonable to have an algorithm that uses values sorted
> along the last axis, even if that dimension sometimes turns out to be one.
>
> Eric
>
Excellent point!
Robert Kern-2 wrote:
>
> On Wed, May 12, 2010 at 20:19, Dr. Phillip M. Feldman wrote:
>>
>> When operating on an array whose last dimension is unity, the default
>> behavior of argsort is not very useful:
>>
>> |6> x=random.random((4,1))
>>
When operating on an array whose last dimension is unity, the default
behavior of argsort is not very useful:
|6> x=random.random((4,1))
|7> shape(x)
<7> (4, 1)
|8> argsort(x)
<8>
array([[0],
       [0],
       [0],
       [0]])
|9> argsort(x,axis=0)
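For anyone bumping into this today, the remedy is simply to pass the axis explicitly. A minimal illustration (the index values depend on the random draw):

import numpy as np

x = np.random.random((4, 1))
np.argsort(x)          # default axis=-1: each length-1 row is trivially sorted
np.argsort(x, axis=0)  # sorts down the column and returns a useful permutation

Passing axis=None instead flattens the array before sorting, which is the other common intent.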
Warren Weckesser-3 wrote:
>
> A couple questions:
>
> How many floats will you be storing?
>
> When you test for membership, will you want to allow for a numerical
> tolerance, so that if the value 1 - 0.7 is added to the set, a test for
> the value 0.3 returns True? (0.3 is actually 0.2999...)
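Warren's parenthetical is easy to demonstrate; math.isclose is one standard way to compare with a tolerance:

import math
1 - 0.7 == 0.3              # False: the two nearest doubles differ
math.isclose(1 - 0.7, 0.3)  # True: equal to within a relative tolerance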
Anne Archibald-2 wrote:
>
> on a 32-bit machine,
> the space overhead is roughly a 32-bit object pointer or two for each
> float, plus about twice the number of floats times 32-bit pointers for
> the table.
>
Hello Anne,
I'm a bit confused by the above. It sounds as though the hash table
approach
I have an application that involves managing sets of floats. I can use
Python's built-in set type, but a data structure that is optimized for
fixed-size objects that can be compared without hashing should be more
efficient than a more general set construct. Is something like this
available?
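As far as I know, nothing of the sort ships with Python or NumPy. Below is a minimal sketch of one compact alternative, a sorted array queried by binary search; the helper names make_float_set and contains, and the tol parameter, are invented for illustration:

import numpy as np

def make_float_set(values):
    # store the floats as a sorted 1-D array: 8 bytes per element, no boxing
    return np.sort(np.asarray(values, dtype=float))

def contains(s, x, tol=1e-12):
    # binary search, then inspect the neighbors on either side
    i = np.searchsorted(s, x)
    nearby = s[max(i - 1, 0):i + 1]
    return bool(np.any(np.abs(nearby - x) <= tol))

Each membership test costs O(log n), and the storage is one double per element rather than an object pointer plus a boxed Python float.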
When I issue the command
np.lookfor('bessel')
I get the following:
Search results for 'bessel'
---------------------------
numpy.i0
    Modified Bessel function of the first kind, order 0.
numpy.kaiser
    Return the Kaiser window.
numpy.random.vonmises
    Draw samples from a von Mises distribution.
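NumPy itself carries only the few Bessel-flavored routines above; the fuller family of Bessel functions lives in SciPy's scipy.special module:

from scipy.special import jv, yv
jv(0, 2.5)   # Bessel function of the first kind, J_0(2.5)
yv(0, 2.5)   # Bessel function of the second kind, Y_0(2.5)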
Anne Archibald wrote:
>
> 2009/11/29 Dr. Phillip M. Feldman:
>
>> All of the statistical packages that I am currently using and have used
>> in the past (Matlab, Minitab, R, S-plus) calculate standard deviation
>> using the sqrt(1/(n-1)) normalization
Pauli Virtanen-3 wrote:
>
> Nevertheless, I can't really regard dropping the imaginary part a
> significant issue.
>
I am amazed that anyone could say this. For anyone who works with Fourier
transforms, or with electrical circuits, or with electromagnetic waves,
dropping the imaginary part is a serious loss of information.
Pauli Virtanen-3 wrote:
>
> I'd think that downcasting is different from dropping the imaginary part.
>
There are many ways (in fact, an unlimited number) to downcast from complex
to real. Here are three possibilities:
- Take the real part.
- Take the magnitude (the square root of the sum of the squares of the real
  and imaginary parts).
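Both of the mappings listed are one-liners in NumPy (np.angle, the phase, is shown here as a third example of such a mapping):

import numpy as np
z = np.array([3 + 4j, 1 - 1j])
z.real       # real part: array([3., 1.])
np.abs(z)    # magnitude: array([5., 1.41421356])
np.angle(z)  # phase, yet another complex-to-real mapping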
There are a number of people who continue to use Matlab,
despite all of its deficiencies, because it can at least be counted on to
produce correct answers most of the time.
Dr. Phillip M. Feldman
Stéfan van der Walt wrote:
>
>
> Would it be possible to, optionally, throw an exception?
>
> S.
I'm certain that it i
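As it turned out, later NumPy releases do emit a ComplexWarning on this cast, and the standard warnings machinery can promote it to an exception. A sketch (np.exceptions.ComplexWarning assumes NumPy >= 1.25; older releases spell it np.ComplexWarning):

import warnings
import numpy as np

z = np.array([1 + 2j, 3 - 4j])
z.astype(float)   # ComplexWarning: casting discards the imaginary part

warnings.simplefilter("error", np.exceptions.ComplexWarning)
z.astype(float)   # now raises instead of merely warning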
David Warde-Farley-2 wrote:
>
> A less harmful solution (if a solution is warranted, which is for the
> Council of the Elders to decide) would be to treat the Python complex
> type as a special case, so that the .real attribute is accessed instead
> of trying to cast to float.
>
There is something here that needs to be cleared up. My preference would be
that the programmer should have to explicitly downcast from complex to
float, and that if he/she fails to do this, an exception be triggered.
Dr. Phillip M. Feldman
--
View this message in context:
http://old.nabble.com/Assigning-complex-values-to
Robert Kern-2 wrote:
>
> Downcasting data is a necessary operation sometimes. We explicitly
> made a choice a long time ago to allow this.
>
> Robert Kern
>
This might be the time to recognize that that was a bad choice and reverse
it. It is not clear to me why downcasting from comp
All of the statistical packages that I am currently using and have used in
the past (Matlab, Minitab, R, S-plus) calculate standard deviation using the
sqrt(1/(n-1)) normalization, which makes the corresponding variance estimate
unbiased. NumPy uses the sqrt(1/n) normalization by default.
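Both conventions are available through the ddof ("delta degrees of freedom") argument of numpy.std:

import numpy as np
x = np.array([1.0, 2.0, 4.0])
np.std(x)           # sqrt(1/n) normalization: ddof=0 is the default
np.std(x, ddof=1)   # sqrt(1/(n-1)) normalization, as in Matlab and R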
I opened ticket #1302 to make the following enhancement request:
I'd like to see hstack and vstack promote 1-D arguments to 2-D when this is
necessary to make the dimensions match. In the following example, c_ works
as expected while hstack does not:
[~]|8> x
<8>
array([[1, 2, 3],
       [4,
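The contrast is easy to reproduce; here is a small self-contained example (not the array from the truncated session above):

import numpy as np
a = np.array([[1, 2, 3],
              [4, 5, 6]])
v = np.array([7, 8])
np.c_[a, v]         # promotes v to a column; result has shape (2, 4)
np.hstack([a, v])   # raises ValueError: dimensions do not match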
I've defined the following one-line function that uses numpy.where:
def sin_half_period(x): return where(0.0 <= x <= pi, sin(x), 0.0)
When I try to use this function, I get an error message:
In [4]: z=linspace(0,2*pi,9)
In [5]: sin_half_period(z)
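The chained comparison 0.0 <= x <= pi is the culprit: Python expands it to (0.0 <= x) and (x <= pi), and `and` applied to boolean arrays raises the "truth value is ambiguous" error. The elementwise & operator on parenthesized comparisons works:

import numpy as np

def sin_half_period(x):
    # elementwise AND of the two conditions instead of a chained comparison
    return np.where((0.0 <= x) & (x <= np.pi), np.sin(x), 0.0)

z = np.linspace(0, 2 * np.pi, 9)
sin_half_period(z)   # sin on the first half-period, zero on the second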
I have a 1-D array and would like to generate a list of indices for which a
given condition is satisfied. What is the cleanest way to do this?
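np.nonzero (or np.where with a single argument) returns exactly these indices:

import numpy as np
x = np.array([3, -1, 4, -1, 5])
np.nonzero(x > 0)[0]   # array([0, 2, 4])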
I've been reading the online NumPy tutorial at the following URL:
http://numpy.scipy.org/numpydoc/numpy-10.html
When I try the following example, I get an error message:
In [1]: a=arange(10)
In [2]: a.itemsize()
TypeError: 'int' object is not callable
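The tutorial is out of date on this point: itemsize is an attribute, not a method, so the trailing parentheses try to call the integer it returns:

import numpy as np
a = np.arange(10)
a.itemsize   # bytes per element, e.g. 8 for int64; note: no parentheses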
I'd like to be able to make a slice of a 3-dimensional array, doing something
like the following:
Y= X[A, B, C]
where A, B, and C are lists of indices. This works, but has an unexpected
side effect: when A, B, or C is a length-1 list, Y has fewer dimensions than
X. Is there a way to do the slice without losing dimensions?
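One standard remedy, assuming outer-product-style indexing is what is wanted, is np.ix_, which keeps one axis per index list:

import numpy as np
X = np.arange(24).reshape(2, 3, 4)
A, B, C = [0], [1, 2], [3]
X[np.ix_(A, B, C)].shape   # (1, 2, 1): every axis survives, length-1 or not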
Although I've used Matlab for many years and am quite new to Python, I'm
already convinced that the Python/NumPy combination is more powerful and
flexible than the Matlab base, and that it generally takes less Python code
to get the same job done. There is, however, at least one thing that is much
With Python/NumPy, is there a way to get the maximum element of an array and
also the index of the element having that value, in a single shot? (One can
do this in Matlab via a statement like the following: [x_max, ndx] = max(x).)
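There is no single call that returns both, but np.argmax plus one indexed lookup is the idiomatic equivalent:

import numpy as np
x = np.array([3, 7, 2, 5])
ndx = np.argmax(x)   # 1, the index of the maximum
x_max = x[ndx]       # 7, the maximum itself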
Usage examples:
In [1]: from myarray import array
In [2]: x=array([1,2,3],[4,5,6],dtype=int)
In [3]: x/2
Out[3]:
array([[0, 1, 1],
       [2, 2, 3]])
Dr. Phillip M. Feldman 16 July 2009"""
args1= []
args2= []
for arg in args:
This does the right thing sometimes, but not always. Out[2] and Out[4]
are fine, but Out[3] is not (note the extra set of braces). Probably
the only right way to fix this is to modify numpy itself.
Phillip
In [1]: def myarray(*args, **kwargs): return np.array([z for z in args], **kwargs)
numpy.array understands
V= array([[1,2,3,4],[4,3,2,1]])
but not
V= array([1,2,3,4],[4,3,2,1])
It would be more convenient if it could handle either form.
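A minimal sketch of a wrapper that accepts both calling forms (myarray is a stand-in name; the single-argument branch avoids the extra-braces problem noted above):

import numpy as np

def myarray(*args, **kwargs):
    if len(args) == 1:
        # array([[1, 2], [3, 4]]): pass the lone argument straight through
        return np.array(args[0], **kwargs)
    # array([1, 2], [3, 4]): treat each positional argument as one row
    return np.array(list(args), **kwargs)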
b = a.transpose()
b
matrix([[1, 4, 7],
        [2, 5, 8],
        [3, 6, 9]])
On Tue, Jul 7, 2009 at 1:34 AM, Dr. Phillip M. Feldman wrote:
> I'm using the Enthought Python Distribution. When I define a matrix and
> transpose it, it appears that the result is no longer a matrix (see below).
I'm using the Enthought Python Distribution. When I define a matrix and
transpose it, it appears that the result is no longer a matrix (see below).
This is both surprising and disappointing. Any suggestions will be
appreciated.
In [16]: A=matrix([[1,2,3],[4,5,6],[7,8,9]])
In [17]: B=A.transpose()
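Transposing a matrix does return a matrix, as the session quoted earlier in the thread shows; type() makes the check explicit. (Current NumPy documentation discourages np.matrix altogether in favor of plain ndarrays, where A.T plays the same role.)

import numpy as np
A = np.matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
B = A.transpose()
type(B)   # <class 'numpy.matrix'>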