))
10 loops, best of 3: 23 ms per loop
y= x- 2.
timeit np.dot(y, y)
The slowest run took 18.60 times longer than the fastest. This could mean
that an intermediate result is being cached.
1000 loops, best of 3: 1.78 ms per loop
timeit np.dot(y, y)
1000 loops, best of 3: 1.73 ms per loop
Best,
eat
>
> is the relevant bug, like Warren points out. From the discussion there
> it doesn't look like np.array's handling of non-conformable lists has
> any defenders.)
>
+1 for 'object array [and matrix] construction should require explicitly
specifying dtype= object'
;http://en.wikipedia.org/wiki/Expression_%28mathematics%29>, arranged in *rows
> <http://en.wiktionary.org/wiki/row>* and *columns
> <http://en.wiktionary.org/wiki/column>*.[2]
On Mon, Feb 10, 2014 at 9:08 PM, alex wrote:
> On Mon, Feb 10, 2014 at 2:03 PM, eat wrote:
> > Rhetorical or not, but FWIW I'd prefer to take the singular value
> decomposition
> > (u, s, vt= svd(x)) and then based on the singular values s I'll estimate
> a
> >
"numerically feasible rank" r. Thus the diagonal of such a hat matrix would be (u[:, :r]** 2).sum(1).
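The hat-matrix diagonal mentioned above can be cross-checked numerically; this is my own sketch (the shapes, seeded data, and rank tolerance are assumptions, not from the thread):

```python
import numpy as np

# Leverage = diagonal of the hat matrix H = X (X^T X)^+ X^T,
# computed from the SVD as (u[:, :r]**2).sum(1), r = numerical rank.
rng = np.random.default_rng(0)
X = rng.standard_normal((7, 3))

u, s, vt = np.linalg.svd(X, full_matrices=False)
tol = s[0] * max(X.shape) * np.finfo(float).eps
r = int((s > tol).sum())                  # estimated numerical rank
leverage = (u[:, :r] ** 2).sum(axis=1)

# Cross-check against the explicit hat matrix
H = X @ np.linalg.pinv(X)
assert np.allclose(leverage, np.diag(H))
```

The trace of H equals the rank, which gives a quick sanity check on the result.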
Regards,
-eat
>
> Sorry for off-topic...
> ___
> NumPy-Discussion mailing list
> NumPy-Discus
[ 3, 9],
[ 4, 16],
[ 5, 25],
[ 6, 36],
[ 7, 49],
[ 8, 64],
[ 9, 81]])
My 2 cents,
-eat
>
> Creating two 1-dimensional arrays first is costly as one has to
> iterate twice over the data. So the only way I see is creating an
> empty [10,2] array a
Hi,
FWIW, apparently a bug related to the dtype of np.eye(.)
On Wed, May 22, 2013 at 8:07 PM, Nicolas Rougier
wrote:
>
>
> Hi all,
>
> I got a weird output from the following script:
>
> import numpy as np
>
> U = np.zeros(1, dtype=[('x', np.float32, (4,4))])
>
> U[0] = np.eye(4)
> print U[0]
> # out
>
>.. code:: python
>
> # Author: Somebody
>
> print np.nonzero([1,2,0,0,4,0])
>
>
> If you can provide the (assumed) level of the answer, that would be even
> better.
My 2 cents,
-eat
>
> Nicolas
>
>
.
>
> arr.dot ?
>
> the 99 most common functions for which chaining looks nice.
>
Care to elaborate more for us less initiated?
Regards,
-eat
>
> Josef
>
> >
> >
> > Ben Root
> >
> > ___
>
Hi,
On Fri, Jan 18, 2013 at 12:13 AM, Thouis (Ray) Jones wrote:
> On Thu, Jan 17, 2013 at 10:27 AM, Charles R Harris
> wrote:
> >
> >
> > On Wed, Jan 16, 2013 at 5:11 PM, eat wrote:
> >>
> >> Hi,
> >>
> >> In a recent thread
> >
raightforward manner.
What do you think?
-eat
P.S. FWIW, if this idea really gains momentum, obviously I'm volunteering to
create a PR for it.
___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
[ 8, 9],
[ 1, 12]])
In []: x[ind, [0, 1]]
Out[]:
array([[ 1, 3],
[ 4, 6],
[ 8, 7],
[11, 9],
[23, 12]])
-eat
>
> Best regards,
>
> Mads
>
> --
> +-+
> | Mads I
in?
>
> (Bonus, extra bike-sheddy survey: do people prefer
> np.filled((10, 10), np.nan)
> np.filled_like(my_arr, np.nan)
>
+0
OTOH, it might also be handy to let val be an array as well, which is
then repeated to fill the array.
My 2 cents.
-eat
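A minimal sketch of the suggested behavior; the name `filled` is hypothetical (no such function exists in NumPy), and np.resize happens to implement exactly the cyclic repetition:

```python
import numpy as np

# Hypothetical `filled`: `val` may be a scalar or an array, repeated
# cyclically to fill the requested shape (np.resize does the repetition).
def filled(shape, val):
    return np.resize(val, shape)

print(filled((2, 4), [1, 2]))
# [[1 2 1 2]
#  [1 2 1 2]]
```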
> or
&
Hi,
On Mon, Oct 29, 2012 at 11:01 AM, Larry Paltrow wrote:
> np.isnan(data) is True
> >>> False
>
Check with:
np.all(~np.isnan(x))
My 2 cents,
-eat
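For context, a small illustration of why the quoted `is True` check always fails:

```python
import numpy as np

# np.isnan returns an array object, which is never the singleton True,
# so the identity test `is True` is always False regardless of the data.
data = np.array([1.0, np.nan, 3.0])

print(np.isnan(data) is True)      # False, whatever the data contains
print(np.isnan(data).any())        # True  -> at least one NaN present
print(np.all(~np.isnan(data)))     # False -> not every entry is non-NaN
```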
>
>
> On Mon, Oct 29, 2012 at 1:50 AM, Pauli Virtanen wrote:
>
>> Larry Paltrow gmail.com> writes:
In []: print np.prod( (arg > 0) for arg in args)
<generator object <genexpr> at 0x062BDA08>
In []: print np.prod([(arg > 0) for arg in args])
1
In []: print np.prod( (arg > 0) for arg in args).next()
True
In []: sys.version
Out[]: '2.7.2 (default, Jun 12 2011, 15:08:59) [MSC v.1500 32 bit (Intel)]'
In []
j = 0, 1
> >>>l = a[i, j,:]
> >>>print l
>
> [4 2 8]
>
> >>>print np.max(a[i,j,:]), maxs[i,j]
>
> 8 8
>
> >>>print l[np.argsort(l)][-2], second[i,j]
>
> 4 4
>
> Good luck.
>
Here the np.indices function may help a little bit
ontaining zeros:
> b_indices = b.argsort()
> b_matrix = b[b_indices[::-1]]
> new_b = b_matrix[len(b_matrix)-1]
>
> Is there an easy way to reorder it? Or is there at least a complicated
> way which produces the right output?
>
>
Hi,
On Tue, Jul 31, 2012 at 7:20 PM, Vlastimil Brom wrote:
> 2012/7/31 eat :
> > Hi,
> >
> > On Tue, Jul 31, 2012 at 5:01 PM, Vlastimil Brom <
> vlastimil.b...@gmail.com>
> > wrote:
> >>
> >> 2012/7/31 eat :
> >> >
Hi,
On Tue, Jul 31, 2012 at 7:30 PM, Nathaniel Smith wrote:
> On Tue, Jul 31, 2012 at 4:57 PM, eat wrote:
> > Hi,
> >
> > On Tue, Jul 31, 2012 at 6:43 PM, Nathaniel Smith wrote:
> >>
> >> On Tue, Jul 31, 2012 at 2:23 PM, eat wrote:
> >> &
Hi,
On Tue, Jul 31, 2012 at 6:43 PM, Nathaniel Smith wrote:
> On Tue, Jul 31, 2012 at 2:23 PM, eat wrote:
> > Apparently ast(.) does not return a view of the original matches but rather
> > a copy of size (n* (2* distance+ 1)), thus you may run out of memory.
>
> The probl
Hi,
On Tue, Jul 31, 2012 at 5:01 PM, Vlastimil Brom wrote:
> 2012/7/31 eat :
> > Hi,
> >
> > On Tue, Jul 31, 2012 at 10:23 AM, Vlastimil Brom <
> vlastimil.b...@gmail.com>
> > wrote:
> >>
> >> 2012/7/30 eat :
> >> > Hi,
> >
Hi,
On Tue, Jul 31, 2012 at 10:23 AM, Vlastimil Brom
wrote:
> 2012/7/30 eat :
> > Hi,
> >
> > A partial answer to your questions:
> >
> > On Mon, Jul 30, 2012 at 10:33 PM, Vlastimil Brom <
> vlastimil.b...@gmail.com>
> > wrote:
> >>
&
if
it's not otherwise used. IMO there's nothing wrong with looping via xrange(.)
(if you really need to loop ;).
> Is there some more elegant way to check for the "underflowing" lower
> bound "lo" to replace with None?
>
> Is it significant, which container is used to coll
)]
Out[]: [[(4, 5), 1], [(5, 4), 1], [(4, 4), 2], [(4, 3), 2]]
My 2 cents,
-eat
>
> In bash I usually unique command
>
> thanks in advance
> Giuseppe
>
> --
> Giuseppe Amatulli
> Web: www.spatial-ecology.net
> ___
t x and y must be
one-dimensional and of equal length.
My 2 cents,
-eat
>
>
> On Fri, Jul 20, 2012 at 5:11 PM, Andreas Hilboll wrote:
>
>> Hi,
>>
>> I have a problem using histogram2d:
>>
>>from numpy import linspace, histogram2d
>
loops, best of 3: 34.2 us per loop
>>
>>
>>
>> -Travis
>>
>>
> However, what is the timing/memory cost of converting a large numpy array
> that already exists into python list of lists? If all my processing before
> the munkres step is using NumPy, conve
hm (more
details of the algorithms can be found for example at
http://www.assignmentproblems.com/).
Given how the assignment algorithms are (typically) described, it may actually
be quite a tedious job to create more performant ones utilizing numpy arrays
instead of lists of lists.
My 2 cents,
10978, 0.54991376, 0.78182313],
[ 0.42980812, 0.59622975, 0.29315485, 0.3828001 ],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]])
will help you.
My 2 cents,
-eat
>
> __
0.67 0.1 0.2 0.3 0.4 0.17 0.33 0.5 ]
Fast enough:
In []: data, lengths= gen([5, 15, 5e4])
In []: data.size
Out[]: 476028
In []: %timeit normalize(data, lengths)
10 loops, best of 3: 29.4 ms per loop
My 2 cents,
-eat
>
> -- srean
>
> On Thu, May 31, 2012 at 12:36 AM, Wolfgang
25 , -0., -0.5 , -1.,
inf, 1., 0.5 , 0., 0.25 ,
0.2 , 0.1667, 0.14285714, 0.125 , 0.])
In []: seterr(divide= 'raise')
In []: 1./ a
-----
= 1]
In []: C_out.reshape(2, 3, 4)
Out[]:
array([[[13, 2, 3, 1],
[ 5, 30, 7, 8],
[ 9, 1, 11, 12]],
[[ 1, 2, 15, 28],
[17, 18, 9, 20],
[-1, 2, 23, 4]]])
My 2 cents,
-eat
>
> ___
>
e5), rand(1e5)
In []: allclose(s0(y, u), s1(y, u))
Out[]: True
and definitely this transformation will outperform a plain python loop
In []: timeit s0(y, u)
10 loops, best of 3: 122 ms per loop
In []: timeit s1(y, u)
100 loops, best of 3: 2.16 ms per loop
In []: 122/ 2.16
Out[]: 56.48148148148148
        self.data[k])/ self.scale
        return c/ (self.var[k]* self.var)** .5

    def obs_iterate(self):
        for k in xrange(self.m):
            yield self.obs_kth(k)

if __name__ == '__main__':
    data= np.random.randn(5, 3)
    print np.corrcoef(data).round(3)
    print
    c=
hould return something meaningful to act on (in
the spirit that methods are more like functions than subroutines).
Just my 2 cents,
-eat
>
> but it is a two-steps instruction to do one thing, which I feel doesn't
> look very nice.
>
> Did I miss an
tailed description of your problem, since now
your current spec seems to yield:
r= array([ 1, 1, 1, 48, 68, 1, 75, 1, 1, 115, 1, 95, 1,
1, 1, 1, 1, 1, 1, 28, 1, 68, 1, 1, 28])
My 2 cents,
-eat
>
>
>
> ___
You raise a good point. Neither arange nor linspace provides a close
> equivalent to the nice behavior of the Matlab colon, even though that is
> often what one really wants. Adding this, either via an arange kwarg, a
> linspace kwarg, or a new function, seems like a good idea.
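A sketch of what such a colon-style helper could look like; the name `colon`, the tolerance, and the positive-step assumption are mine, not a proposed NumPy API:

```python
import numpy as np

# Hypothetical Matlab-like colon: include the endpoint when it is
# reachable within a small tolerance, unlike np.arange with float steps.
def colon(start, step, stop, tol=1e-10):
    n = int(np.floor((stop - start + tol) / step)) + 1
    return start + step * np.arange(n)

print(colon(0.0, 0.1, 1.0))   # 11 points, endpoint 1.0 included
```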
>
M
along the two length-4 dimensions.
>
> m,n = data.shape
> cond = lamda x : (x <= t1) & (x >= t2)
>
I guess you meant here cond= lambda x: (x>= t1)& (x<= t2)
> x = cond(data).reshape((m//4, 4, n//4, 4))
> found = np.any(np.any(x, axis=1), axis=2)
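The reshape trick above can be illustrated end to end (toy sizes of my own, with m and n divisible by 4 assumed):

```python
import numpy as np

# Flag each 4x4 tile that contains at least one value in [t1, t2].
m, n, t1, t2 = 8, 12, 3, 5
data = np.arange(m * n).reshape(m, n) % 10

x = ((data >= t1) & (data <= t2)).reshape(m // 4, 4, n // 4, 4)
found = x.any(axis=1).any(axis=2)   # the nested np.any from the post
print(found.shape)                  # (2, 3): one flag per 4x4 tile
```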
>
[[ 60 470 521 ..., 147 435 295]
[246 127 662 ..., 718 525 256]
[354 384 205 ..., 225 364 239]
...,
[277 428 201 ..., 460 282 433]
[ 27 407 130 ..., 245 346 309]
[649 157 153 ..., 316 613 570]]
True
and compared in performance wise:
In []: %timei
}
def _view(D, shape= (4, 4)):
    return ast(D, **_ss(D.strides, D.shape, shape))

def ds_1(data_in, t1= 1, t2= 4):
    # return _view(data_in)
    excerpt= _view(data_in)
    mask= np.where((excerpt>= t1)& (excerpt<= t2), True, False)
    return mask.sum(2).sum(2).astype(np.bool)
if __name__ == '
On Sat, Jan 28, 2012 at 11:14 PM, Charles R Harris <
charlesr.har...@gmail.com> wrote:
>
>
> On Sat, Jan 28, 2012 at 11:15 AM, eat wrote:
>
>> Hi,
>>
>> Short demonstration of the issue:
>> In []: sys.version
>> Out[]: '2.7.2 (default, Jun 12
polynomial order I'll not face this issue)
or
- it's just the 'nature' of computations with float values (if so, probably
I should be able to tackle this regardless of the polynomial order)
or
- it's a nasty bug in class Polynomial
Regards,
eat
___
> >>>a.mean()
> 4000.0
> >>>np.version.full_version
> '1.6.1'
>
This indeed looks very nasty, regardless of whether it is a version or
platform related problem.
-eat
>
>
>
> On Tue, 2012-01-24 at 17:12 -0600, eat wrote:
>
> Hi,
>
>
>
99
In []: d.mean(axis= 1).mean()
Out[]: 3045.74724
Or do the results of the calculations depend more on the platform?
My 2 cents,
eat
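When such discrepancies stem from float32 accumulation, forcing a wider accumulator via the dtype argument of mean is the usual remedy (a sketch, not the original poster's data):

```python
import numpy as np

# The data stay float32, but the reduction accumulates in float64:
d = np.full(10**6, 0.1, dtype=np.float32)
m = d.mean(dtype=np.float64)
print(m)   # ~0.1
```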
___
Hi,
On Tue, Dec 20, 2011 at 3:41 AM, wrote:
> On Mon, Dec 19, 2011 at 8:16 PM, eat wrote:
> > Hi,
> >
> > On Tue, Dec 20, 2011 at 2:33 AM, wrote:
> >>
> >> On Mon, Dec 19, 2011 at 6:27 PM, eat wrote:
> >> > Hi,
> >> >
>
Hi,
On Tue, Dec 20, 2011 at 2:33 AM, wrote:
> On Mon, Dec 19, 2011 at 6:27 PM, eat wrote:
> > Hi,
> >
> > Especially when the keyword return_index of np.unique(.) is specified to
> be
> > True, would it in general also be reasonable to be able to specify th
over complicated if I'm not able to request
a stable sorting order from np.unique(.) (like np.unique(., return_index=
True, kind= 'mergesort')).
(FWIW, I apparently do have a working local hack for this kind of
functionality, but without extensive testing of 'all' cor
> >>> a= np.array([[1, 2], [3, 4]])
> >>> np.kron(np.ones((2,2)), a)
> array([[ 1., 2., 1., 2.],
> [ 3., 4., 3., 4.],
> [ 1., 2., 1., 2.],
> [ 3., 4., 3., 4.]])
>
> >>> np.kron(a, np.ones((2,2)))
> array([[ 1., 1., 2.,
[1,2,1,2]
> [3,4,3,4]
> [1,2,1,2]
> [3,4,3,4]]
>
> i tried different things on numpy which didn't work
> any ideas ?
>
Perhaps something like this:
In []: a= np.array([[1, 2], [3, 4]])
In []: np.c_[[a, a], [a, a]]
Out[]:
array([[[1, 2, 1, 2],
[3, 4,
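For the flat 2-D block layout the poster asked for, np.tile (or the quoted np.kron) may be simpler, since np.c_ on nested lists yields a 3-D array; a sketch:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])

# np.tile lays the 2x2 block out in a 2x2 grid, staying 2-D:
print(np.tile(a, (2, 2)))
# [[1 2 1 2]
#  [3 4 3 4]
#  [1 2 1 2]
#  [3 4 3 4]]
```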
1, 2, 3, 4],
[1, 2, 3, 4, 5],
[2, 3, 4, 5, 6],
[3, 4, 5, 6, 7],
[4, 5, 6, 7, 8],
[5, 6, 7, 8, 9],
[6, 7, 8, 9, 0],
[7, 8, 9, 0, 1],
[8, 9, 0, 1, 2]])
In []: mod(a+ roll(b, 2), n)
Out[]:
array([[8, 9, 0, 1, 2],
[9, 0, 1, 2, 3]
Hi
On Mon, Aug 1, 2011 at 3:14 PM, Jeffrey Spencer wrote:
> Depends where it is contained, but another option, which I typically find
> faster, is:
>
> B = zeros(A.shape)
> maximum(A,B,A)
>
Since maximum(.) can handle broadcasting
maximum(A, 0, A)
will be even faster.
-e
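A small illustration of the in-place form (toy data of my own):

```python
import numpy as np

# Broadcasting the scalar 0 avoids allocating B = zeros(A.shape), and
# passing A as the output argument makes the clip happen in place:
A = np.array([-2.0, -0.5, 1.0, 3.0])
np.maximum(A, 0, out=A)
print(A)   # [0. 0. 1. 3.]
```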
ion
Out[]: '1.6.0'
In []: a= array(1)
In []: a.reshape((1, 1), order= 'F').flags
Out[]:
C_CONTIGUOUS : True
F_CONTIGUOUS : False
OWNDATA : False
WRITEABLE : True
ALIGNED : True
UPDATEIFCOPY : False
In []: a.reshape((1, 1), order= 'C&
ust to demonstrate how it was wrong
Perhaps slightly OT, but here something very odd is going on. I would expect
the performance to be in a totally different ballpark.
> >>> t=timeit.Timer('m =- 0.5', setup='import numpy as np;m =
> np.ones([8092,8092],float)'
to
> explain.
>
A very similar approach to how I have always treated the NaNs.
(Thus postponing all the real (slightly dirty) work on to the imputation
procedures).
For me it has been sufficient to ignore the actual cause of the NaNs. But
I believe there exist plenty of other muc
If one chooses to proceed with this 'squaring'
avenue, wouldn't it then be more economical to base the calculations on a
square 5x5 matrix? Something like:
A_pinv= dot(A, pinv(dot(A.T, A))).T
Instead of a 380x380 based matrix:
A_pinv= dot(pinv(dot(A, A.T)), A).T
My two cents
- eat
>
On Mon, Jun 27, 2011 at 8:53 PM, Mark Wiebe wrote:
> On Mon, Jun 27, 2011 at 12:44 PM, eat wrote:
>
>> Hi,
>>
>> On Mon, Jun 27, 2011 at 6:55 PM, Mark Wiebe wrote:
>>
>>> First I'd like to thank everyone for all the feedback you're providin
uld corner cases, like
>>> a = np.array([np.NA, np.NA], dtype='f8', masked=True)
>>> np.mean(a, skipna=True)
>>> np.mean(a)
be handled?
My concern here is that there always seem to be such corner cases, which can
only be handled with specifi
ve the impression that the mask-based design would be easier. Perhaps
> > you could do that one first and folks could try out the API and see how
> they
> > like it and discover whether the memory overhead is a problem in
> practice.
>
> That seems like a risky strategy to
Hi,
On Wed, Jun 22, 2011 at 6:30 PM, Alex Flint wrote:
> Is it possible to use argmax or something similar to find the locations of
> the largest N elements in a matrix?
How about argsort(.)? Like:
In []: a= arange(9)
In []: a.argsort()[::-1][:3]
Out[]: array([8, 7, 6])
My 2 cent
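For a 2-D array, the same argsort idea works on the flattened data, with np.unravel_index mapping back to (row, col); a sketch with my own toy data:

```python
import numpy as np

a = np.array([[3, 9, 1],
              [7, 2, 8]])

# Sort the flattened array, take the top 3, and recover 2-D positions:
top = a.argsort(axis=None)[::-1][:3]
rows, cols = np.unravel_index(top, a.shape)
print(list(zip(rows.tolist(), cols.tolist())))   # [(0, 1), (1, 2), (1, 0)]
```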
http://docs.scipy.org/doc/scipy/reference/odr.html
My 2 cents,
eat
>
> Regards, Christian K.
>
> ___
ems quite promising, indeed:
In []: A
Out[]:
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
In []: [0, 1, 2] in A
Out[]: True
In []: [0, 3, 6] in A
Out[]: True
In []: [0, 4, 8] in A
Out[]: True
In []: [8, 4, 0] in A
Out[]: True
In []: [2, 4, 6] in A
Out[]: True
In []: [6, 4, 2] in A
Ou
[None, :]
u0= u.repeat(n** 2)[None, :]
u1= u.repeat(n)[None, :].repeat(n, axis= 0).reshape(1, -1)
u2= u.repeat(n** 2, axis= 0).reshape(1, -1)
U= np.r_[u0, u1, u2, np.ones((1, n** 3))]
f= (np.dot(E, U)* U).sum(0).reshape(n, n, n)
Regards,
eat
>
> Thanks Eleanor
> --
> View this mes
t has two disadvantages: 1) it needs more memory and 2) "global" bins
(though that should be quite straightforward to enhance if needed).
Regards,
eat
>
> Éric.
>
>
> >
> > Regards,
> > eat
> >
>
> Un clavier azerty en vaut deux
>
in order to better grasp your situation it would be beneficial to know
how the counts and bounds are used later on. Just wondering if this kind of
massive histogramming could be avoided entirely.
Regards,
eat
>
> Éric.
> > So it seems that you give your array directly to histogram
Hi,
On Tue, Mar 29, 2011 at 5:13 PM, Éric Depagne wrote:
> > FWIW, have you considered to use
> >
> http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogramdd.html#
> > numpy.histogramdd
> >
> > Regards,
> > eat
> >
> I tried, but
me having to write a loop.
>
> Or maybe did I miss some parameters using np.histogram.
>
FWIW, have you considered using
http://docs.scipy.org/doc/numpy/reference/generated/numpy.histogramdd.html#numpy.histogramdd
Regards,
eat
>
> Thanks.
>
> Éric.
>
> Un clavier azert
On Tue, Mar 29, 2011 at 1:29 PM, eat wrote:
> Hi,
>
> On Tue, Mar 29, 2011 at 12:00 PM, Alex Ter-Sarkissov
> wrote:
>
>> If I want to generate a string of random bits with equal probability I run
>>
>>
>> random.randint(0,2,size).
>>
>> Wha
y p<1/2 and 0 with probability q=1-p?
>
Would it be sufficient to:
In []: bs= ones(1e6, dtype= int)
In []: bs[randint(0, 1e6, 1e5)]= 0
In []: bs.sum()/ 1e6
Out[]: 0.904706
Regards,
eat
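Drawing each bit independently gives exactly probability p of a one, whereas the randint overwrite above hits indices with replacement and so only approximates the target fraction; a sketch:

```python
import numpy as np

# Each entry is 1 with probability p, 0 with probability 1 - p:
p = 0.9
bs = (np.random.rand(10**6) < p).astype(int)
print(bs.mean())   # ~0.9
```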
>
> thanks
>
> ___
users will have
> > the same issue.
> >
> > You can wrap (parts of) your code in something like:
> > olderr = seterr(invalid= 'ignore')
> >
> > seterr(**olderr)
> >
>
> Also, as Robert pointed out to me before np.errstate is a
> cont
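For reference, the context-manager form scopes the change automatically, with no manual save/restore:

```python
import numpy as np

# Inside the with-block, invalid-value warnings are suppressed; the
# previous error state is restored on exit.
with np.errstate(invalid='ignore'):
    r = np.sqrt(np.array([-1.0, 4.0]))   # silently yields nan for -1.0
print(r)
```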
slowing computations and
> making text output completely non-readable.
>
Would old= seterr(invalid= 'ignore') be sufficient for you?
Regards,
eat
>
> >>> from numpy import __version__
> >>> __version__
> '2.0.0.dev-1fe8136'
>
>
vector with minimal norm.
>
> Is there more efficient way to do this than
> argmin(array([sqrt(dot(x,x)) for x in vec_array]))?
>
Try
argmin(sum(vec_array** 2, 0)** 0.5)
Regards,
eat
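A worked toy example (one vector per column assumed, matching the axis-0 sum in the reply); since the sqrt is monotone, argmin can skip it:

```python
import numpy as np

vec_array = np.array([[3.0, 1.0, 0.0],
                      [4.0, 1.0, 2.0]])

# Column norms are 5.0, sqrt(2), 2.0 -> the middle vector is smallest.
i = np.argmin((vec_array ** 2).sum(axis=0))
print(i)   # 1
```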
>
> Thanks in advance.
> Andrey.
>
> __
org/doc/numpy/reference/generated/numpy.loadtxt.html).
Regards,
eat
>
> --
> DILEEPKUMAR. R
> J R F, IIT DELHI
>
>
> ___
1 2 3 4]
# [ 1 2 3 4 5]
# [ 2 3 4 5 6]
# [ 3 4 5 6 0] # last item garbage
# [ 4 5 6 0 34]] # 2 last items garbage
My two cents,
eat
> ___
]: x= rand(3, 7)
In []: y= rand(3, 7)
In []: d= (x* y).sum(0)
In [490]: d
Out[490]:
array([ 1.25404683, 0.19113117, 1.37267133, 0.74219888, 1.55296562,
0.15264303, 0.72039922])
In [493]: dot(x[:, 0].T, y[:, 0])
Out[493]: 1.2540468282421895
Regards,
eat
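The same column-wise dot products can also be written with einsum; a sketch:

```python
import numpy as np

x = np.random.rand(3, 7)
y = np.random.rand(3, 7)

# 'ij,ij->j' multiplies elementwise and sums over the rows (axis 0),
# i.e. one dot product per column:
d = np.einsum('ij,ij->j', x, y)
assert np.allclose(d, (x * y).sum(0))
```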
>
>
>
> __
Even the
simple linear algebra based full matrix calculations can be done in less than
5 ms.
My two cents,
eat
> I'm using gcc on Linux.
> Now I'm wondering if I could go even faster !?
> My hope that the compiler might automagically do some SSE2
> optimization got disappointe
Hi Sturla,
On Sat, Feb 12, 2011 at 5:38 PM, Sturla Molden wrote:
> Den 10.02.2011 16:29, skrev eat:
> > One would expect sum to outperform dot by a clear margin. Do
> > there exist any 'tricks' to increase the performance of sum?
>
First of all, thanks for y
Hi,
On Fri, Feb 11, 2011 at 12:08 AM, Robert Kern wrote:
> On Thu, Feb 10, 2011 at 15:32, eat wrote:
> > Hi Robert,
> >
> > On Thu, Feb 10, 2011 at 10:58 PM, Robert Kern
> wrote:
> >>
> >> On Thu, Feb 10, 2011 at 14:29, eat wrote:
> >> >
Hi Robert,
On Thu, Feb 10, 2011 at 10:58 PM, Robert Kern wrote:
> On Thu, Feb 10, 2011 at 14:29, eat wrote:
> > Hi Robert,
> >
> > On Thu, Feb 10, 2011 at 8:16 PM, Robert Kern
> wrote:
> >>
> >> On Thu, Feb 10, 2011 at 11:53, eat wrote:
> >&
to further
advance this. (My C skills are already a bit rusty, but at any higher level
I'll try my best to contribute).
Thanks,
eat
>
> --
> Pauli Virtanen
>
> ___
Hi Robert,
On Thu, Feb 10, 2011 at 8:16 PM, Robert Kern wrote:
> On Thu, Feb 10, 2011 at 11:53, eat wrote:
> > Thanks Chuck,
> >
> > for replying. But don't you still find it very odd that dot outperforms sum
> in
> > your machine? Just to get it simply; why s
ss instructions, you'll reach to spend more
time ;-).
Regards,
eat
On Thu, Feb 10, 2011 at 7:10 PM, Charles R Harris wrote:
>
>
> On Thu, Feb 10, 2011 at 8:29 AM, eat wrote:
>
>> Hi,
>>
>> Observing following performance:
>> In []: m= 1e5
>> In
7.1 (r271:86832, Nov 27 2010, 18:30:46) [MSC v.1500 32 bit
(Intel)]'
# installed binaries from http://python.org/
In []: np.version.version
Out[]: '1.5.1'
# installed binaries from http://scipy.org/
Regards,
eat
___
ls down how to
handle the coefficient arrays reasonably (perhaps some kind of lightweight
'database' for them ;-).
Please feel free to provide any more information.
Regards,
eat
On Tue, Feb 1, 2011 at 10:20 PM, wrote:
> I'm not sure I need to dive into cython or C for this - p
        return NAN
    elif far < 0.005:
        ag= air_gamma_1(t)
        ag[np.logical_or(t< 379., t> 4731.)]= NAN
        return ag
    elif far< 0.069:
        ag= air_gamma_2(t, far)
        ag[np.logical_or(t< 699., t> 4731.)]= NAN
        return ag
    else:
        return NAN
Re
is array Ms is created? Do you really need to have it in memory as a
whole?
Assuming it's created in (200, 200) 'chunks' at a time, you could
accumulate each chunk right away into R. It still involves Python looping,
but that's not much overhead.
My 2 cents
eat
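A sketch of that chunked accumulation; gen_chunk is a hypothetical stand-in for however Ms is actually produced:

```python
import numpy as np

# Hypothetical producer of the (200, 200) pieces, one at a time:
def gen_chunk(n_chunks, shape=(200, 200), seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_chunks):
        yield rng.standard_normal(shape)

# Fold each chunk into R immediately; only one chunk lives in memory.
R = np.zeros((200, 200))
for chunk in gen_chunk(5):
    R += chunk
print(R.shape)   # (200, 200)
```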
For instance I co
Hi,
On Wed, Jan 26, 2011 at 2:35 PM, wrote:
> On Wed, Jan 26, 2011 at 7:22 AM, eat wrote:
> > Hi,
> >
> > I just noticed a document/ implementation conflict with tril and triu.
> > According tril documentation it should return of same shape and data-type
> as
&
crude 'fix' in case someone is interested.
Regards,
eat
twodim_base_fix.py
___
of indices calculations?
Regards,
eat
___
gen(3, 5, m, n); print d2
# perform actual x-tabulation
xtab= np.zeros((len(c1), len(c2)), np.int)
for i in xrange(len(c1)):
    tmp= d2[c1[i]== d1]
    for j in xrange(len(c2)):
        xtab[i, j]= np.sum(c2[j]== tmp)
print xtab, np.sum(xtab)== np.prod(d1.shape)
Anyway it's straightforward to extend it to nd x-tabulations ;-).
My 2 cents,
eat
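In modern NumPy the same cross-tabulation can be vectorized with the unbuffered ufunc.at scatter-add; a sketch, assuming d1 and d2 hold small non-negative integer category codes:

```python
import numpy as np

d1 = np.array([0, 1, 0, 2, 1, 0])
d2 = np.array([1, 1, 0, 2, 0, 1])

# One increment per (d1[i], d2[i]) pair, applied unbuffered in place:
xtab = np.zeros((d1.max() + 1, d2.max() + 1), dtype=int)
np.add.at(xtab, (d1, d2), 1)
print(xtab)
# [[1 2 0]
#  [1 1 0]
#  [0 0 1]]
```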
___
Note also the differences
a= np.asarray([[1, 8, 2], [2, 3, 3], [3, 4, 1]])
n= 3
# between
print [np.unravel_index(ind, a.shape) for ind in np.argsort(a.ravel())[-n:]]
# and
print [np.where(val== a) for val in np.sort(a.ravel())[-n:]]
Regards,
eat
>
> Best,
>
>-Nikolaus
>
at positions (0,1) and (1,2).
>
> Best,
>
>-Niko
>
Hi,
Just
a= np.asarray([[1, 8, 2], [2, 1, 3]])
print np.where((a.T== a.max(axis= 1)).T)
However, if any row contains more than one max entry, the above will fail.
Please let me know if that's relevant for you.
-eat
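The double transpose can also be avoided with keepdims; note that ties simply yield several hits per row (my own toy data):

```python
import numpy as np

a = np.array([[1, 8, 2],
              [2, 1, 3]])

# keepdims keeps the per-row maxima as a column, so the comparison
# broadcasts without transposing:
rows, cols = np.where(a == a.max(axis=1, keepdims=True))
print(rows, cols)   # [0 1] [1 2]
```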
_
.
BTW, does this current thread relate in any way to the earlier one '"Match" two
arrays'? If so, would you like to elaborate more on your 'real' problem?
Regards,
eat
>
> Thanks,
> Shailendra
___
Basically what I mean is: why bother with the argmax at all,
if your only interest is x[cond].max()?
Just my 2 cents.
Regards,
eat
___
Oops.
Wrongly timed.
> t= np.array(timeit.repeat(perf, repeat= rep, number= 1))/ rep
should be
t= np.array(timeit.repeat(perf, repeat= rep, number= 1))
eat
___
oints'!
>
> Also, the size of the cordinates1 and cordinates2 are quite large and
> "outer" should not be used. I can think of only C style code to
> achieve this. Can any one suggest pythonic way of doing this?
>
> Thanks,
> Shailendra
>
This is straightforwar