On Fri, Aug 28, 2009 at 00:44, Gael Varoquaux wrote:
> On Thu, Aug 27, 2009 at 03:33:30PM -0700, Robert Kern wrote:
>> From my experience, doing performance tests inside of your normal test
>> suite is entirely unreliable. Performance testing requires rigorous
>> control over external factors that
On Thu, Aug 27, 2009 at 03:33:30PM -0700, Robert Kern wrote:
> From my experience, doing performance tests inside of your normal test
> suite is entirely unreliable. Performance testing requires rigorous
> control over external factors that you cannot do inside of your test
> suite. Your tests will
On Thu, Aug 27, 2009 at 1:27 PM, wrote:
> On Thu, Aug 27, 2009 at 12:49 PM, Tim Michelsen wrote:
>>> Tim, do you mean, that you want to apply other functions, e.g. mean or
>>> variance, to the original values but calculated per bin?
>> Sorry that I forgot to add this. Shame.
>>
>> I would like t
On Thu, Aug 27, 2009 at 4:21 PM, Robert Kern wrote:
> On Thu, Aug 27, 2009 at 15:13, Christopher Barker wrote:
> > Charles R Harris wrote:
> >> I also intend to make it work with
> >>
> >> from __future__ import division
> >
> > doesn't already?
> >
> > In [3]: from __future__ import division
> >
>
On 2009-08-27 19:56 , David Goldsmith wrote:
> --- On Thu, 8/27/09, Fons Adriaensen wrote:
[...]
>> 3. Finally remove all the redundancy and legacy stuff from the
>> world of numerical Python. It is *very* confusing to a new user.
>
> I like this also (but I also know that actually trying to ach
--- On Thu, 8/27/09, Fons Adriaensen wrote:
> 2. Adopting that format will make it even more important to
> clearly define in which cases data gets copied and when not.
> This should be based on some simple rules that can be
> evaluated by a code author without requiring a lookup in the
> r
On 2009-08-27 16:09 , Jonathan T wrote:
> Hi,
>
> I want to define a 3-D array as the sum of two 2-D arrays as follows:
>
> C[x,y,z] := A[x,y] + B[x,z]
>
> My linear algebra is a bit rusty; is there a good way to do this that does not
> require me to loop over x,y,z? Thanks!
Numpy's broadcasti
Perfect, that is exactly what I was looking for. Thanks to all who responded.
There is one more problem which currently has me stumped. Same idea but slightly
different effect:
V[p,x,r] := C[p, E[p,x,r], r]
This multidimensional array stuff is confusing but the time savings seem to be
worth i
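For the follow-up V[p,x,r] := C[p, E[p,x,r], r], integer (fancy) indexing with broadcast index arrays can do the gather without loops. A minimal sketch; the shapes P, X, R, Y below are made up for illustration, not from the thread:

```python
import numpy as np

P, X, R, Y = 2, 3, 4, 5
C = np.arange(P * Y * R).reshape(P, Y, R)    # C[p, y, r]
E = np.random.randint(0, Y, size=(P, X, R))  # integer indices into C's middle axis

# Broadcast index arrays: p varies along axis 0, r along axis 2, and E
# supplies the middle index, so V[p, x, r] = C[p, E[p, x, r], r].
p = np.arange(P)[:, None, None]
r = np.arange(R)[None, None, :]
V = C[p, E, r]                               # shape (P, X, R)
```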
On Thu, Aug 27, 2009 at 3:32 PM, Citi, Luca wrote:
> Or
> a[:,:,None] + b[:,None,:]
I think that is the way to go.
Chuck
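A quick check of that one-liner (the shapes below are arbitrary): inserting size-1 axes with None lets broadcasting put A's second axis and B's second axis on different output axes, so c[x, y, z] = a[x, y] + b[x, z]:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)   # A[x, y]
b = np.arange(8).reshape(2, 4)   # B[x, z]

# a[:, :, None] has shape (2, 3, 1); b[:, None, :] has shape (2, 1, 4).
# Broadcasting expands both to (2, 3, 4).
c = a[:, :, None] + b[:, None, :]
print(c.shape)  # (2, 3, 4)
```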
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
On Thu, Aug 27, 2009 at 15:03, Gael Varoquaux wrote:
> Hi list,
>
> This is slightly off topic, so please pardon me.
>
> I want to do performance testing. To be precise, I have a simple case: I
> want to check that 2 operations perform with a similar speed (so I am
> abstracted from the machines pe
On Thu, Aug 27, 2009 at 15:13, Christopher Barker wrote:
> Charles R Harris wrote:
>> I also intend to make it work with
>>
>> from __future__ import division
>
> doesn't already?
>
> In [3]: from __future__ import division
>
> In [5]: 3 / 4
> Out[5]: 0.75
>
> In [6]: import numpy as np
>
> In [7]: np.
Charles R Harris wrote:
> I also intend to make it work with
>
> from __future__ import division
doesn't already?
In [3]: from __future__ import division
In [5]: 3 / 4
Out[5]: 0.75
In [6]: import numpy as np
In [7]: np.array(3) / np.array(4)
Out[7]: 0.75
In [8]: np.array(3) // np.array(4)
Out[8]: 0
Hi list,
This is slightly off topic, so please pardon me.
I want to do performance testing. To be precise, I have a simple case: I
want to check that 2 operations perform with a similar speed (so I am
abstracted from the machine's performance).
What would be the recommended way of timing the oper
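One common approach, sketched here with the stdlib timeit module (my illustration, not necessarily what the thread settled on): take the minimum over several repeats of each operation, since the minimum is the timing least contaminated by other load on the machine, then compare the two minima as a ratio.

```python
import timeit

# Best-of-5 timing for each of the two operations being compared.
t1 = min(timeit.repeat("sum(range(1000))", repeat=5, number=1000))
t2 = min(timeit.repeat("sum(list(range(1000)))", repeat=5, number=1000))

# A ratio near 1.0 means the two operations run at a similar speed,
# independently of how fast the machine itself is.
print("ratio: %.2f" % (t1 / t2))
```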
On Thu, Aug 27, 2009 at 3:22 PM, Christopher Barker wrote:
> Robert Kern wrote:
> > On Thu, Aug 27, 2009 at 12:43, Charles R Harris wrote:
> >> In [3]: floor_divide(x,y).dtype
> >> Out[3]: dtype('float64')
> >
> > Ewww. It should be an appropriate integer type. Probably whatever x*y is.
>
> +1
On Thu, Aug 27, 2009 at 14:41, David Warde-Farley wrote:
> On 27-Aug-09, at 3:27 PM, Robert Kern wrote:
>
>> no matter how many decimal places the two ints have.
>
> Er... I must be missing something here. ;)
I meant decimal digits.
--
Robert Kern
"I have come to believe that the whole world is
Hi Jonathan,
This isn't quite your typical linear algebra. NumPy has a nice feature
called array broadcasting, which enables you to perform element-wise
operations on arrays of different shapes. The number of dimensions of
the arrays must be the same; in your case, all the arrays must have
three d
Some weeks ago there was a post on this list requesting feedback
on possible future directions for numpy. As I was quite busy at that
time I'll reply to it now.
My POV is that of a novice user, who at the same time wants quite
badly to use the numpy framework for his numerical work which in
this c
On 27-Aug-09, at 3:27 PM, Robert Kern wrote:
> no matter how many decimal places the two ints have.
Er... I must be missing something here. ;)
David
On Thu, Aug 27, 2009 at 3:26 PM, Robert Kern wrote:
> On Thu, Aug 27, 2009 at 14:22, Christopher Barker
> wrote:
>
> > By the way -- is there something about py3k that changes all this? Or is
> > this just an opportunity to perhaps make some backward-incompatible
> > changes to numpy?
>
> Python
Or
a[:,:,None] + b[:,None,:]
On Thu, Aug 27, 2009 at 14:22, Christopher Barker wrote:
> By the way -- is there something about py3k that changes all this? Or is
> this just an opportunity to perhaps make some backward-incompatible
> changes to numpy?
Python 3 makes the promised change of int/int => float.
--
Robert Kern
"
One solution I can think of still requires one loop (instead of three):
import numpy as np
a = np.arange(12).reshape(3,4)
b = np.arange(15).reshape(3,5)
z = np.empty(a.shape + (b.shape[-1],))
for i in range(len(z)):
    z[i] = np.add.outer(a[i], b[i])
Jonathan T wrote:
> I want to define a 3-D array as the sum of two 2-D arrays as follows:
>
>C[x,y,z] := A[x,y] + B[x,z]
Is this what you mean?
In [14]: A = np.arange(6).reshape((2,3,1))
In [15]: B = np.arange(12).reshape((1,3,4))
In [18]: A
Out[18]:
array([[[0],
        [1],
        [2]],

       [[3],
        [4],
        [5]]])
Robert Kern wrote:
> On Thu, Aug 27, 2009 at 12:43, Charles R Harris wrote:
>> In [3]: floor_divide(x,y).dtype
>> Out[3]: dtype('float64')
>
> Ewww. It should be an appropriate integer type. Probably whatever x*y is.
+1 if you are working with integers, you should get integers, because
that's
Hi,
I want to define a 3-D array as the sum of two 2-D arrays as follows:
C[x,y,z] := A[x,y] + B[x,z]
My linear algebra is a bit rusty; is there a good way to do this that does not
require me to loop over x,y,z? Thanks!
Jonathan
On Thu, Aug 27, 2009 at 2:35 PM, wrote:
>
> I'm always a bit surprised about integers in numpy and try to avoid
> calculations with them. So I would be in favor of x/y is correct
> floating point answer.
>
> Josef
>
> >>> x = np.ones(1, dtype=np.uint64); y = np.ones(1, dtype=np.int64)
> >>> np.t
On Thu, Aug 27, 2009 at 3:57 PM, Charles R Harris wrote:
>
>
> On Thu, Aug 27, 2009 at 1:46 PM, Robert Kern wrote:
>>
>> On Thu, Aug 27, 2009 at 12:43, Charles R Harris wrote:
>> >
>> >
>> > On Thu, Aug 27, 2009 at 1:27 PM, Robert Kern
>> > wrote:
>> >>
>> >> On Thu, Aug 27, 2009 at 11:24, Cha
On Thu, Aug 27, 2009 at 1:46 PM, Robert Kern wrote:
> On Thu, Aug 27, 2009 at 12:43, Charles R Harris wrote:
> >
> >
> > On Thu, Aug 27, 2009 at 1:27 PM, Robert Kern wrote:
> >>
> >> On Thu, Aug 27, 2009 at 11:24, Charles R Harris wrote:
> >> > I'm thinking double. There is a potential
On Thu, Aug 27, 2009 at 12:43, Charles R Harris wrote:
>
>
> On Thu, Aug 27, 2009 at 1:27 PM, Robert Kern wrote:
>>
>> On Thu, Aug 27, 2009 at 11:24, Charles R Harris wrote:
>> > I'm thinking double. There is a potential loss of precision for 64 bit ints
>> > but nothing else seems reasona
I really do not mind avoiding the long doubles. In practice I used them only
once or twice, but I assume that short int -> float would be useful for many of
the numpy users. It also may align nicely with (u)int8->float16 on GPUs.
Nadav
-Original Message-
From: numpy-discussion-boun...@sci
On Thu, Aug 27, 2009 at 1:27 PM, Robert Kern wrote:
> On Thu, Aug 27, 2009 at 11:24, Charles R Harris wrote:
> > I'm thinking double. There is a potential loss of precision for 64 bit ints
> > but nothing else seems reasonable for a default. Thoughts?
>
> Python int / Python int => Python flo
On Thu, Aug 27, 2009 at 11:24, Charles R Harris wrote:
> I'm thinking double. There is a potential loss of precision for 64 bit ints
> but nothing else seems reasonable for a default. Thoughts?
Python int / Python int => Python float
no matter how many decimal places the two ints have. I also say
2009/8/27 Nadav Horesh
>
> How about making this arch dependent translation:
>
> short int -> float
> int -> double
> long int -> long double
>
> or adding a flag that would switch between the above translation and the
> option that would produce only doubles.
>
> For some computing projects I mad
How about making this arch dependent translation:
short int -> float
int -> double
long int -> long double
or adding a flag that would switch between the above translation and the option
that would produce only doubles.
For some computing projects I made I would prefer the first option: There I
Charles R Harris wrote:
> The real problem is deciding what to do with integer precisions that fit
> in float32. At present we have
>
> In [2]: x = ones(1, dtype=int16)
>
> In [3]: true_divide(x,x)
> Out[3]: array([ 1.], dtype=float32)
A user perspective:
ambiguous cases should always be
reso
On Thu, Aug 27, 2009 at 12:50 PM, Charles R Harris <charlesr.har...@gmail.com> wrote:
>
>
> 2009/8/27 Nadav Horesh
>
>> Double is the natural choice, there is a possibility of long double
>> (float96 on x86 or float128 on amd64) where there is no precision loss. Is
>> this option portable?
>
>
>
2009/8/27 Nadav Horesh
> Double is the natural choice, there is a possibility of long double
> (float96 on x86 or float128 on amd64) where there is no precision loss. Is
> this option portable?
Not really. The long double type can be a bit weird and varies from
architecture to architecture.
Ch
Double is the natural choice, there is a possibility of long double (float96 on
x86 or float128 on amd64) where there is no precision loss. Is this option
portable?
Nadav
-Original Message-
From: numpy-discussion-boun...@scipy.org on behalf of Charles R Harris
Sent: Thu 27-Aug-09 21:24
To: numpy
I'm thinking double. There is a potential loss of precision for 64 bit ints
but nothing else seems reasonable for a default. Thoughts?
Chuck
On Thu, Aug 27, 2009 at 12:49 PM, Tim Michelsen wrote:
>> Tim, do you mean, that you want to apply other functions, e.g. mean or
>> variance, to the original values but calculated per bin?
> Sorry that I forgot to add this. Shame.
>
> I would like to apply these mathematical functions on the origin
On Thu, Aug 27, 2009 at 10:00, Jack Yu wrote:
> Hi all,
>
> I am having trouble using the function numpy.linalg.svd(). It works fine on
> my personal computer. However, when I use it on a cluster at university, it
> returns 'Illegal Instruction', when the input matrix is complex. Is this
> funct
Hi all,
I am having trouble using the function numpy.linalg.svd(). It works fine on
my personal computer. However, when I use it on a cluster at university, it
returns 'Illegal Instruction' when the input matrix is complex. Is this
function meant to work on a complex array? If so, what could
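For what it's worth, np.linalg.svd does accept complex input; a quick sanity check is to factor a random complex matrix and reconstruct it (the shapes here are arbitrary). If this runs locally but dies with 'Illegal Instruction' on the cluster, that points at the LAPACK/BLAS build the cluster's numpy is linked against rather than at the function itself:

```python
import numpy as np

rng = np.random.RandomState(42)
A = rng.rand(4, 3) + 1j * rng.rand(4, 3)   # random complex matrix

U, s, Vh = np.linalg.svd(A, full_matrices=False)

# Reconstruct A from its factors; U * s scales U's columns by the
# singular values, i.e. U @ diag(s).
A_rec = np.dot(U * s, Vh)
print(np.allclose(A, A_rec))  # True
```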
> Tim, do you mean, that you want to apply other functions, e.g. mean or
> variance, to the original values but calculated per bin?
Sorry that I forgot to add this. Shame.
I would like to apply these mathematical functions on the original values
stacked in the respective bins.
For instance:
The
On Thu, Aug 27, 2009 at 9:23 AM, Vincent Schut wrote:
> Tim Michelsen wrote:
>> Hello,
>> I need some advice on histograms.
>> If I interpret the documentation [1, 2] for numpy.histogram correctly, the
>> result of the function is a count of the occurrences sorted into each bin.
>>
>> (n, bins) = nu
Tim Michelsen wrote:
> Hello,
> I need some advice on histograms.
> If I interpret the documentation [1, 2] for numpy.histogram correctly, the
> result of the function is a count of the occurrences sorted into each bin.
>
> (n, bins) = numpy.histogram(v, bins=50, normed=1)
>
> But how can I apply
On Thu, Aug 27, 2009 at 8:23 AM, alexander baker wrote:
> Here is an example; this does something a bit extra at the end but shows how
> the bins can be used.
>
> Regards
>
> Alex Baker.
>
> from scipy.stats import norm
> r = norm.rvs(size=1)
>
> import numpy as np
> p, bins = np.histogram(r, width
Here is an example; this does something a bit extra at the end but shows how the
bins can be used.
Regards
Alex Baker.
from scipy.stats import norm
r = norm.rvs(size=1000)
import numpy as np
p, bins = np.histogram(r, bins=50, normed=True)
db = bins[1] - bins[0]
cdf = np.cumsum(p * db)
from pylab import
Hello,
I need some advice on histograms.
If I interpret the documentation [1, 2] for numpy.histogram correctly, the
result of the function is a count of the occurrences sorted into each bin.
(n, bins) = numpy.histogram(v, bins=50, normed=1)
But how can I apply another function on these values stac
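A sketch of one way to do this (my illustration, not from the thread): np.digitize tells you which histogram bin each original value fell into, and from that membership you can compute any per-bin statistic, e.g. mean or variance:

```python
import numpy as np

v = np.random.RandomState(0).normal(size=1000)
counts, edges = np.histogram(v, bins=10)

# For each value, the 1-based index of the histogram bin it falls into.
# Using edges[:-1] makes the last bin closed on the right, matching
# numpy.histogram's convention.
idx = np.digitize(v, edges[:-1])

# Any statistic over the original values stacked in each bin:
bin_means = [v[idx == i].mean() for i in range(1, len(edges))]
bin_vars = [v[idx == i].var() for i in range(1, len(edges))]
```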